A Hands-On Guide to Calling Domestic LLM APIs
Today we'll take a concrete look at how to use the APIs of the major LLM vendors.
1. APIs
DeepSeek
DeepSeek supports both direct HTTP calls with requests and OpenAI-style calls; newly registered users receive 5 million free tokens.

```python
import requests
import json

url = "https://api.deepseek.com/chat/completions"

payload = json.dumps({
    "messages": [
        {"content": "You are a helpful assistant", "role": "system"},
        {"content": "Hi", "role": "user"}
    ],
    "model": "deepseek-coder",
    "frequency_penalty": 0,
    "max_tokens": 2048,
    "presence_penalty": 0,
    "stop": None,
    "stream": False,
    "temperature": 1,
    "top_p": 1,
    "logprobs": False,
    "top_logprobs": None
})
headers = {
    'Content-Type': 'application/json',
    'Accept': 'application/json',
    'Authorization': 'Bearer <key>'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```

```python
from openai import OpenAI

# for backward compatibility, you can still use `https://api.deepseek.com/v1` as `base_url`.
client = OpenAI(api_key="<your DeepSeek API key>", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello"},
    ],
    max_tokens=1024,
    temperature=0.7,
    stream=False
)

print(response.choices[0].message.content)
```
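The example above uses stream=False. Since DeepSeek exposes an OpenAI-compatible interface, streaming should work the same way as with the official OpenAI SDK; the minimal sketch below assumes that compatibility rather than quoting DeepSeek's own documentation.

```python
from openai import OpenAI

# Minimal streaming sketch (assumes DeepSeek's OpenAI-compatible streaming behavior).
client = OpenAI(api_key="<your DeepSeek API key>", base_url="https://api.deepseek.com")

stream = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,  # ask the server to stream tokens back as they are generated
)
for chunk in stream:
    # Each chunk carries an incremental delta; content may be None on some chunks.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
print()
```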
Zhipu AI
Zhipu AI requires the zhipuai package; newly registered users receive free tokens.
- pip install zhipuai
- Python >= 3.7
```python
from zhipuai import ZhipuAI

client = ZhipuAI(api_key="<your API key>")  # fill in your own API key
response = client.chat.completions.create(
    model="glm-4",  # name of the model to call
    messages=[
        {"role": "user", "content": "你好!你叫什么名字"},
    ],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta)
```
Kimi
Kimi also supports OpenAI-style calls; newly registered users receive a ¥15 credit.
- pip install --upgrade 'openai>=1.0'
```python
from openai import OpenAI

client = OpenAI(
    api_key="<your Moonshot API key>",
    base_url="https://api.moonshot.cn/v1",
)

completion = client.chat.completions.create(
    model="moonshot-v1-8k",
    messages=[
        {"role": "system", "content": "你是 Kimi,由 Moonshot AI 提供的人工智能助手,你更擅长中文和英文的对话。你会为用户提供安全,有帮助,准确的回答。同时,你会拒绝一切涉及恐怖主义,种族歧视,黄色暴力等问题的回答。Moonshot AI 为专有名词,不可翻译成其他语言。"},
        {"role": "user", "content": "你好,我叫李雷,1+1等于多少?"}
    ],
    temperature=0.3,
)

print(completion.choices[0].message.content)
```
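Rather than hardcoding the key in source code, it is safer to read it from an environment variable. A minimal sketch, assuming the key has been exported as MOONSHOT_API_KEY (the variable name here is an arbitrary choice, not prescribed by Moonshot):

```python
import os

from openai import OpenAI

# Read the key from the environment; MOONSHOT_API_KEY is an arbitrary name chosen for this sketch.
api_key = os.environ.get("MOONSHOT_API_KEY")
if not api_key:
    raise RuntimeError("Please set the MOONSHOT_API_KEY environment variable first.")

client = OpenAI(api_key=api_key, base_url="https://api.moonshot.cn/v1")
```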
ByteDance Doubao
Doubao requires the Volcengine Ark SDK.
- pip install 'volcengine-python-sdk[ark]'
```python
from volcenginesdkarkruntime import Ark

# Authentication
# 1. If you authorize your endpoint using an API key, you can set your api key to environment variable "ARK_API_KEY"
#    or specify api key by Ark(api_key="${YOUR_API_KEY}").
#    Note: If you use an API key, this API key will not be refreshed.
#    To prevent the API from expiring and failing after some time, choose an API key with no expiration date.
# 2. If you authorize your endpoint with Volcengine Identity and Access Management (IAM), set your api key to
#    environment variables "VOLC_ACCESSKEY", "VOLC_SECRETKEY" or specify ak&sk by Ark(ak="${YOUR_AK}", sk="${YOUR_SK}").
#    To get your ak&sk, please refer to this document: https://www.volcengine.com/docs/6291/65568
# For more information, please check this document: https://www.volcengine.com/docs/82379/1263279
client = Ark()

# Non-streaming:
print("----- standard request -----")
completion = client.chat.completions.create(
    model="${YOUR_ENDPOINT_ID}",
    messages=[
        {"role": "system", "content": "你是豆包,是由字节跳动开发的 AI 人工智能助手"},
        {"role": "user", "content": "常见的十字花科植物有哪些?"},
    ],
)
print(completion.choices[0].message.content)

# Streaming:
print("----- streaming request -----")
stream = client.chat.completions.create(
    model="${YOUR_ENDPOINT_ID}",
    messages=[
        {"role": "system", "content": "你是豆包,是由字节跳动开发的 AI 人工智能助手"},
        {"role": "user", "content": "常见的十字花科植物有哪些?"},
    ],
    stream=True
)
for chunk in stream:
    if not chunk.choices:
        continue
    print(chunk.choices[0].delta.content, end="")
print()
```
iFlytek Spark
Newly registered iFlytek Spark users get a free quota that varies by model: Spark 3.5 comes with 100,000 tokens, and Spark 2.0 with 2 million tokens per year. The API uses WebSocket.
- pip install --upgrade spark_ai_python
- Python 3.8
```python
from sparkai.llm.llm import ChatSparkLLM, ChunkPrintHandler
from sparkai.core.messages import ChatMessage

# URL for Spark 3.5 Max; for other model versions see https://www.xfyun.cn/doc/spark/Web.html
SPARKAI_URL = 'wss://spark-api.xf-yun.com/v3.5/chat'
# Spark API credentials; see the iFlytek open platform console: https://console.xfyun.cn/services/bm35
SPARKAI_APP_ID = ''
SPARKAI_API_SECRET = ''
SPARKAI_API_KEY = ''
# domain value for Spark 3.5 Max; for other model versions see https://www.xfyun.cn/doc/spark/Web.html
SPARKAI_DOMAIN = 'generalv3.5'

if __name__ == '__main__':
    spark = ChatSparkLLM(
        spark_api_url=SPARKAI_URL,
        spark_app_id=SPARKAI_APP_ID,
        spark_api_key=SPARKAI_API_KEY,
        spark_api_secret=SPARKAI_API_SECRET,
        spark_llm_domain=SPARKAI_DOMAIN,
        streaming=False,
    )
    messages = [ChatMessage(
        role="user",
        content='你好呀'
    )]
    handler = ChunkPrintHandler()
    a = spark.generate([messages], callbacks=[handler])
    print(a)
```
2. API Usage Summary
As you can see, the APIs above work in much the same way, and some are directly compatible with the OpenAI calling format:
- You need to register to obtain an API key for authorization.
- The endpoints generally support streaming output (stream).
- The endpoints all follow the typical chat pattern with different roles (system, user, and assistant); for multi-turn dialogue, the conversation history is concatenated into the message list (see the sketch after this list).
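Here is a minimal sketch of that history concatenation with an OpenAI-compatible client. DeepSeek's endpoint and model name from the examples above are reused purely for illustration; the same pattern applies to the other vendors.

```python
from openai import OpenAI

# DeepSeek's OpenAI-compatible endpoint is reused here only as an example host.
client = OpenAI(api_key="<your API key>", base_url="https://api.deepseek.com")

# The running conversation: each turn is appended so the model always sees the full history.
messages = [{"role": "system", "content": "You are a helpful assistant"}]

for user_input in ["My name is Li Lei.", "What is my name?"]:
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=messages,
    )
    answer = response.choices[0].message.content
    # Append the assistant reply so the next turn includes it as context.
    messages.append({"role": "assistant", "content": answer})
    print(f"user: {user_input}\nassistant: {answer}\n")
```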
Now go try them out!