2. LangGraph Quickstart

张开发
2026/4/4 19:43:15 · 15 min read
Install LangGraph:

```shell
pip install -U langgraph
```

Quickstart

This quickstart tutorial demonstrates how to build a calculator agent using LangGraph's Graph API (graph programming interface) or Functional API (functional interface).

1. Define the tools and model

In this example we use the Claude Sonnet 4.5 model and define tools for addition, multiplication, and division.

```python
from langchain.tools import tool
from langchain.chat_models import init_chat_model

model = init_chat_model("claude-sonnet-4-5", temperature=0)

# Define tools
@tool
def multiply(a: int, b: int) -> int:
    """Multiply a and b.

    Args:
        a: First int
        b: Second int
    """
    return a * b


@tool
def add(a: int, b: int) -> int:
    """Adds a and b.

    Args:
        a: First int
        b: Second int
    """
    return a + b


@tool
def divide(a: int, b: int) -> float:
    """Divide a and b.

    Args:
        a: First int
        b: Second int
    """
    return a / b


# Augment the LLM with tools
tools = [add, multiply, divide]
tools_by_name = {tool.name: tool for tool in tools}
model_with_tools = model.bind_tools(tools)
```

2. Define the state

The graph's state stores the conversation messages and the number of LLM calls. Note that in LangGraph, state persists across the agent's entire run. Annotating the field with the `operator.add` reducer ensures that new messages are appended to the existing message list rather than overwriting it.

```python
from langchain.messages import AnyMessage
from typing_extensions import TypedDict, Annotated
import operator


class MessagesState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]
    llm_calls: int
```

3. Define the model node

The model node calls the LLM and decides whether or not to call a tool.

```python
from langchain.messages import SystemMessage


def llm_call(state: dict):
    """LLM decides whether to call a tool or not"""
    return {
        "messages": [
            model_with_tools.invoke(
                [
                    SystemMessage(
                        content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
                    )
                ]
                + state["messages"]
            )
        ],
        "llm_calls": state.get("llm_calls", 0) + 1,
    }
```

4. Define the tool node

The tool node executes the tool calls and returns the results.

```python
from langchain.messages import ToolMessage


def tool_node(state: dict):
    """Performs the tool call"""
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}
```

5. Define the end logic

The conditional edge function routes to the tool node or ends the run, depending on whether the LLM made a tool call.

```python
from typing import Literal
from langgraph.graph import StateGraph, START, END


def should_continue(state: MessagesState) -> Literal["tool_node", END]:
    """Decide if we should continue the loop or stop based upon whether the LLM made a tool call"""
    messages = state["messages"]
    last_message = messages[-1]
    # If the LLM makes a tool call, then perform an action
    if last_message.tool_calls:
        return "tool_node"
    # Otherwise, we stop (reply to the user)
    return END
```

6. Build and compile the agent

Build the agent with the StateGraph class and compile it with the compile method.

```python
# Build workflow
agent_builder = StateGraph(MessagesState)

# Add nodes
agent_builder.add_node("llm_call", llm_call)
agent_builder.add_node("tool_node", tool_node)

# Add edges to connect nodes
agent_builder.add_edge(START, "llm_call")
agent_builder.add_conditional_edges("llm_call", should_continue, ["tool_node", END])
agent_builder.add_edge("tool_node", "llm_call")

# Compile the agent
agent = agent_builder.compile()

# Show the agent
from IPython.display import Image, display

display(Image(agent.get_graph(xray=True).draw_mermaid_png()))

# Invoke
from langchain.messages import HumanMessage

messages = [HumanMessage(content="Add 3 and 4.")]
messages = agent.invoke({"messages": messages})
for m in messages["messages"]:
    m.pretty_print()
```

Complete code

```python
# Step 1: Define tools and model
from langchain.tools import tool
from langchain.chat_models import init_chat_model

model = init_chat_model("claude-sonnet-4-5", temperature=0)

# Define tools
@tool
def multiply(a: int, b: int) -> int:
    """Multiply a and b.

    Args:
        a: First int
        b: Second int
    """
    return a * b


@tool
def add(a: int, b: int) -> int:
    """Adds a and b.

    Args:
        a: First int
        b: Second int
    """
    return a + b


@tool
def divide(a: int, b: int) -> float:
    """Divide a and b.

    Args:
        a: First int
        b: Second int
    """
    return a / b


# Augment the LLM with tools
tools = [add, multiply, divide]
tools_by_name = {tool.name: tool for tool in tools}
model_with_tools = model.bind_tools(tools)

# Step 2: Define state
from langchain.messages import AnyMessage
from typing_extensions import TypedDict, Annotated
import operator


class MessagesState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]
    llm_calls: int


# Step 3: Define model node
from langchain.messages import SystemMessage


def llm_call(state: dict):
    """LLM decides whether to call a tool or not"""
    return {
        "messages": [
            model_with_tools.invoke(
                [
                    SystemMessage(
                        content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
                    )
                ]
                + state["messages"]
            )
        ],
        "llm_calls": state.get("llm_calls", 0) + 1,
    }


# Step 4: Define tool node
from langchain.messages import ToolMessage


def tool_node(state: dict):
    """Performs the tool call"""
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}


# Step 5: Define logic to determine whether to end
from typing import Literal
from langgraph.graph import StateGraph, START, END


# Conditional edge function to route to the tool node or end based upon whether the LLM made a tool call
def should_continue(state: MessagesState) -> Literal["tool_node", END]:
    """Decide if we should continue the loop or stop based upon whether the LLM made a tool call"""
    messages = state["messages"]
    last_message = messages[-1]
    # If the LLM makes a tool call, then perform an action
    if last_message.tool_calls:
        return "tool_node"
    # Otherwise, we stop (reply to the user)
    return END


# Step 6: Build agent
# Build workflow
agent_builder = StateGraph(MessagesState)

# Add nodes
agent_builder.add_node("llm_call", llm_call)
agent_builder.add_node("tool_node", tool_node)

# Add edges to connect nodes
agent_builder.add_edge(START, "llm_call")
agent_builder.add_conditional_edges("llm_call", should_continue, ["tool_node", END])
agent_builder.add_edge("tool_node", "llm_call")

# Compile the agent
agent = agent_builder.compile()

from IPython.display import Image, display

# Show the agent
display(Image(agent.get_graph(xray=True).draw_mermaid_png()))

# Invoke
from langchain.messages import HumanMessage

messages = [HumanMessage(content="Add 3 and 4.")]
messages = agent.invoke({"messages": messages})
for m in messages["messages"]:
    m.pretty_print()
```

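The dispatch logic inside step 4's `tool_node` can also be exercised without a model or the langchain package. In this dependency-free sketch the tools are plain functions, and the tool-call dicts mimic the `{"name", "args", "id"}` shape that LangChain attaches to an AI message; the simplified `tool_node` here returns plain dicts instead of `ToolMessage` objects.

```python
# Plain-Python stand-ins for the three @tool functions from step 1
def add(a: int, b: int) -> int:
    return a + b

def multiply(a: int, b: int) -> int:
    return a * b

def divide(a: int, b: int) -> float:
    return a / b

tools_by_name = {"add": add, "multiply": multiply, "divide": divide}

def tool_node(tool_calls):
    """Dispatch each tool call by name, mirroring step 4's tool_node."""
    results = []
    for tc in tool_calls:
        observation = tools_by_name[tc["name"]](**tc["args"])
        results.append({"tool_call_id": tc["id"], "content": observation})
    return results

# One pending tool call, shaped like an entry of message.tool_calls
calls = [{"name": "add", "args": {"a": 3, "b": 4}, "id": "call_1"}]
print(tool_node(calls))
# [{'tool_call_id': 'call_1', 'content': 7}]
```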
更多文章
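Likewise, the routing decision from step 5 is easy to unit-test with a stand-in message object. `FakeAIMessage` is a hypothetical stub, and the sketch assumes langgraph's `END` constant is the sentinel string `"__end__"`.

```python
END = "__end__"  # assumption: stands in for langgraph.graph.END


class FakeAIMessage:
    """Minimal stand-in for an AI message; only tool_calls matters here."""
    def __init__(self, content, tool_calls=None):
        self.content = content
        self.tool_calls = tool_calls or []


def should_continue(state):
    last_message = state["messages"][-1]
    # A pending tool call loops back to the tool node...
    if last_message.tool_calls:
        return "tool_node"
    # ...while a plain text reply ends the run
    return END


looping = {"messages": [FakeAIMessage("", [{"name": "add", "args": {"a": 3, "b": 4}, "id": "1"}])]}
assert should_continue(looping) == "tool_node"

final = {"messages": [FakeAIMessage("3 + 4 = 7")]}
assert should_continue(final) == END
print("routing checks passed")
```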