- Hi everyone, I'm 同学小张, sharing AI knowledge and hands-on case studies daily.
- Likes and follows are welcome 👏, keep learning and keep the useful content coming.
- Let's chat 💬 and improve together 💪.
- You can also find me on WeChat by searching the official account 【同学小张】 🙏
Today we continue with the basics of LangGraph. The previous two articles covered LangGraph's basic usage and conditional branching; this article looks at how to build a cycle (loop) with LangGraph.
0. Complete Code
0.1 Complete Code
As usual, the complete code comes first; the reference for the demo code is here:
```python
from langchain_community.tools.tavily_search import TavilySearchResults

tools = [TavilySearchResults(max_results=1)]

from langgraph.prebuilt import ToolExecutor

tool_executor = ToolExecutor(tools)

from langchain_openai import ChatOpenAI

# We will set streaming=True so that we can stream tokens
# See the streaming section for more information on this.
model = ChatOpenAI(temperature=0)

from langchain.tools.render import format_tool_to_openai_function

functions = [format_tool_to_openai_function(t) for t in tools]
model = model.bind_functions(functions)

from typing import TypedDict, Annotated, Sequence
import operator
from langchain_core.messages import BaseMessage


class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]


from langgraph.prebuilt import ToolInvocation
import json
from langchain_core.messages import FunctionMessage


# Define the function that determines whether to continue or not
def should_continue(state):
    messages = state['messages']
    last_message = messages[-1]
    # If there is no function call, then we finish
    if "function_call" not in last_message.additional_kwargs:
        return "end"
    # Otherwise if there is, we continue
    else:
        return "continue"


# Define the function that calls the model
def call_model(state):
    messages = state['messages']
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}


# Define the function to execute tools
def call_tool(state):
    messages = state['messages']
    # Based on the continue condition
    # we know the last message involves a function call
    last_message = messages[-1]
    # We construct an ToolInvocation from the function_call
    action = ToolInvocation(
        tool=last_message.additional_kwargs["function_call"]["name"],
        tool_input=json.loads(last_message.additional_kwargs["function_call"]["arguments"]),
    )
    # We call the tool_executor and get back a response
    response = tool_executor.invoke(action)
    # We use the response to create a FunctionMessage
    function_message = FunctionMessage(content=str(response), name=action.tool)
    # We return a list, because this will get added to the existing list
    return {"messages": [function_message]}


from langgraph.graph import StateGraph, END

# Define a new graph
workflow = StateGraph(AgentState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)

# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.set_entry_point("agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call `should_continue`, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If `tools`, then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END
    }
)

# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge('action', 'agent')

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()

from langchain_core.messages import HumanMessage

inputs = {"messages": [HumanMessage(content="what is the weather in sf")]}
response = app.invoke(inputs)
print(response)
```
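For reference, the `response` printed at the end is the final accumulated state. The exact messages depend on the model and the live search results, so the snippet below is only an illustrative sketch of the shape (the tool name is the default I would expect from TavilySearchResults, not copied from a real run):

```python
# Illustrative only - real contents vary per run and per model
{
    "messages": [
        HumanMessage(content="what is the weather in sf"),
        AIMessage(content="", additional_kwargs={"function_call": {
            "name": "tavily_search_results_json",        # assumed default tool name
            "arguments": '{"query": "weather in San Francisco"}'}}),
        FunctionMessage(name="tavily_search_results_json", content="[...search results...]"),
        AIMessage(content="The current weather in San Francisco is ..."),
    ]
}
```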
0.2 Preparation Before Running
The official demo shown above requires a Tavily API key.
You need to apply for one yourself on the Tavily website.
Once you have it, remember to add the API key to your environment variables:
```python
TAVILY_API_KEY = "tvly-xxxxxxxxxxx"
```
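If you prefer not to edit system-level environment variables, one option (my addition, not part of the official demo) is to set the key at the top of the script; the value below is just a placeholder:

```python
import os

# Placeholder - replace with the key you applied for on the Tavily website
os.environ.setdefault("TAVILY_API_KEY", "tvly-xxxxxxxxxxx")
```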
1. Code Walkthrough
1.1 Creating the Graph - StateGraph
First we instantiate a graph. Unlike the earlier articles, which used MessageGraph, this example uses StateGraph.
```python
from langgraph.graph import StateGraph, END

# Define a new graph
workflow = StateGraph(AgentState)
```
MessageGraph and StateGraph are not fundamentally different: MessageGraph is essentially a StateGraph whose state is just a list of messages, whereas StateGraph lets you define the state schema yourself (here, AgentState).
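To make the comparison concrete, here is a rough side-by-side sketch (my own illustration, assuming the MessageGraph import path used in the earlier articles): with MessageGraph the state is implicitly a list of messages and nodes return new messages directly, while with StateGraph you declare the state schema yourself and nodes return partial state dicts.

```python
import operator
from typing import Annotated, Sequence, TypedDict

from langchain_core.messages import BaseMessage
from langgraph.graph import MessageGraph, StateGraph


# StateGraph: the state schema is user-defined; operator.add appends new messages
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]

state_workflow = StateGraph(AgentState)   # nodes return {"messages": [new_messages]}

# MessageGraph: the state is implicitly a list of messages
message_workflow = MessageGraph()         # nodes return new messages directly
```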
1.2 Adding Nodes - node
```python
# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)
```
Two nodes are added:
- the entry node agent, which calls the LLM
- the node action, which executes the tools
1.3 Adding Edges
```python
# We now add a conditional edge
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        # If `tools`, then we call the tool node.
        "continue": "action",
        "end": END
    }
)

workflow.add_edge('action', 'agent')
```
This adds one conditional edge and one normal edge.
The graph now looks like this: agent goes to action when should_continue returns "continue", goes to END when it returns "end", and action always goes back to agent.
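The original post shows a diagram at this point; as a rough text substitute:

```text
            ┌────────────────────────────┐
            ▼                            │
 entry ──► agent ──"continue"──► action ─┘
             │
           "end"
             ▼
            END
```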
Take a look at the condition on the conditional edge: should_continue
```python
def should_continue(state):
    messages = state['messages']
    last_message = messages[-1]
    # If there is no function call, then we finish
    if "function_call" not in last_message.additional_kwargs:
        return "end"
    # Otherwise if there is, we continue
    else:
        return "continue"
```
It just checks whether there is a tool call to execute: if there is, we take the agent —> action path; if not, we take the agent —> END path.
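For intuition: when the model wants to search, the last message is an AIMessage whose additional_kwargs contains a function_call entry, roughly like the illustrative example below (field values are made up for this sketch); when the model answers directly, there is no function_call key and should_continue returns "end".

```python
from langchain_core.messages import AIMessage

# Illustrative message shape - not verbatim model output
last_message = AIMessage(
    content="",
    additional_kwargs={
        "function_call": {
            "name": "tavily_search_results_json",                # assumed tool name
            "arguments": '{"query": "weather in San Francisco"}',
        }
    },
)

assert should_continue({"messages": [last_message]}) == "continue"
```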
1.4 Summary
So at this point the loop structure should be fairly clear. Compared with the conditional-branch code from the previous article, the key change is this one line:
```python
workflow.add_edge('action', 'agent')
```
What used to be action —> END is now action —> agent.
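In other words, the structural change boils down to this (the commented-out "before" line is my reconstruction of the previous article's branch-only setup, shown only for contrast):

```python
# Before (conditional branch only): the action node ended the graph
# workflow.add_edge('action', END)

# Now (loop): the action node hands control back to the agent node,
# so the model can decide whether to call another tool or finish
workflow.add_edge('action', 'agent')
```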
There is one more point worth noting:
```python
def call_model(state):
    messages = state['messages']
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}
```
When calling model.invoke, pay attention to the argument type: messages = state['messages'], i.e. you pass in the messages inside the state (HumanMessage, AIMessage, and so on), not the state dict itself.
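A minimal sketch of the distinction (the "wrong" line is included only for contrast):

```python
def call_model(state):
    messages = state['messages']        # a list of HumanMessage / AIMessage / ...

    response = model.invoke(messages)   # correct: pass the message list
    # response = model.invoke(state)    # wrong: the whole state dict is not what the model expects

    return {"messages": [response]}
```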
If this article helped you, a like and a follow would be much appreciated ~~~