How to debug langgraph/agents/leo/nodes.py in a Leonardo project

1. Run `docker compose down`.
2. In `docker-compose-dev.yml`, comment out `command: bash -c "python init_pg_checkpointer.py && uvicorn main:app --host 0.0.0.0 --port 8000"` and uncomment `# command: tail -f /dev/null`.
3. Run `bash bin/dev` again. This time the LlamaBot container will start, but Uvicorn (FastAPI) won't run, so you won't be able to access localhost:8000 yet.
4. Run `docker compose exec -it llamabot uvicorn main:app --host 0.0.0.0 --port 8000` to start Uvicorn (FastAPI) manually. You should now be able to access localhost:8000. Because Uvicorn is now attached to your interactive terminal, a `breakpoint()` call will drop you into a pdb prompt there.
5. Now you can add `breakpoint()` in `langgraph/agents/leo/nodes.py`, like so:

```python
# Node
def leo(state: LlamaPressState):
    breakpoint()  # add this line to test that this file is loaded correctly and the breakpoint is hit

    llm = ChatOpenAI(model="gpt-4.1")
    llm_with_tools = llm.bind_tools(tools)

    custom_prompt_instructions_from_llamapress_dev = state.get("agent_prompt")
    full_sys_msg = SystemMessage(content=f"""{sys_msg}

Here are additional instructions provided by the developer: {custom_prompt_instructions_from_llamapress_dev}""")

    return {"messages": [llm_with_tools.invoke([full_sys_msg] + state["messages"])]}
```
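For reference, the relevant part of `docker-compose-dev.yml` after step 2 would look roughly like this (the service name and surrounding layout are assumptions inferred from the commands above):

```yaml
services:
  llamabot:
    # command: bash -c "python init_pg_checkpointer.py && uvicorn main:app --host 0.0.0.0 --port 8000"
    command: tail -f /dev/null   # keeps the container alive without starting the server
```

With `tail -f /dev/null` as the command, the container idles, which is what lets you start Uvicorn manually in an interactive session in step 4.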

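A bare `breakpoint()` pauses on every request that reaches the node. If you want to leave the call in `nodes.py` without blocking normal runs, one option is to guard it behind an environment variable. This is a minimal sketch, and `LEO_DEBUG` is a hypothetical variable name, not something the project defines:

```python
import os

def maybe_breakpoint():
    """Drop into pdb only when LEO_DEBUG=1 is set in the container
    environment. (LEO_DEBUG is a hypothetical name chosen for this
    example; pick whatever fits your project.)"""
    if os.environ.get("LEO_DEBUG") == "1":
        breakpoint()
```

Inside the node you would call `maybe_breakpoint()` instead of `breakpoint()`, and pass `-e LEO_DEBUG=1` to the `docker compose exec` command when you actually want to debug.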