Python Integration with DeepSeek: Examples
DeepSeek Overview
DeepSeek, whose full name is Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd., was founded on July 17, 2023 by the well-known quantitative asset manager High-Flyer (幻方量化). The company focuses on developing advanced large language models (LLMs) and related technologies.
The release of DeepSeek's new generation of models signals that large AI models are becoming broadly affordable, helping AI applications land at scale; at the same time, the large gains in training efficiency will also drive strong growth in demand for inference compute.
DeepSeek Links and Resources
- DeepSeek official site: https://www.deepseek.com/
- DeepSeek API documentation: https://api-docs.deepseek.com/zh-cn/
- DeepSeek-R1 open-source repository: https://github.com/deepseek-ai/DeepSeek-R1
- DeepSeek-V3 open-source repository: https://github.com/deepseek-ai/DeepSeek-V3
- DeepSeek-R1 local deployment tutorial: https://feizhuke.com/deepseek-r1-bendibushu.html
Calling DeepSeek from Python: Examples
- First API call
The DeepSeek API uses an OpenAI-compatible format. By changing the configuration, you can access the DeepSeek API with the OpenAI SDK, or with any other software that is compatible with the OpenAI API.
- Specify model='deepseek-chat' to call DeepSeek-V3
- Specify model='deepseek-reasoner' to call DeepSeek-R1
- Applying for an API key
You need an API key before calling the API; see the official site for how to apply. A common pattern is to keep the key out of your source code, as in the sketch below.
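For illustration only, the key can be read from an environment variable; the variable name DEEPSEEK_API_KEY is a convention chosen for this sketch, not something the API requires:

import os
from openai import OpenAI

# DEEPSEEK_API_KEY is an illustrative name; export it in your shell before running this script.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com"
)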
- Installing the openai dependency
pip3 install openai
- Calling the chat API
from openai import OpenAI

# for backward compatibility, you can still use `https://api.deepseek.com/v1` as `base_url`.
client = OpenAI(api_key="", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello"},
    ],
    max_tokens=1024,
    temperature=0.7,
    stream=False
)

print(response.choices[0].message.content)
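The call above sets stream=False and returns the full answer at once. Setting stream=True instead yields the answer incrementally; a minimal sketch (fill in your API key as before):

from openai import OpenAI

client = OpenAI(api_key="", base_url="https://api.deepseek.com")

# With stream=True the SDK returns an iterator of chunks instead of a single response.
stream = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello"},
    ],
    stream=True
)

for chunk in stream:
    # Each chunk carries an incremental piece of the answer in delta.content.
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()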
- The reasoning model (deepseek-reasoner)
deepseek-reasoner is DeepSeek's reasoning model. Before producing the final answer, the model first outputs a chain of thought, which improves the accuracy of the final answer. The DeepSeek API exposes the deepseek-reasoner chain-of-thought content so that users can view, display, or distill it.
Before using deepseek-reasoner, upgrade the OpenAI SDK so that it supports the new parameters.
pip install -U openai
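To confirm the upgrade took effect, you can print the installed SDK version (a quick sanity check, not required by the API):

import openai

# The reasoner examples below rely on a reasonably recent SDK release.
print(openai.__version__)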
- Input parameters
max_tokens: the maximum length of the final answer (excluding the chain-of-thought output); the default is 4K and the maximum is 8K. Note that the chain-of-thought output itself can reach up to 32K tokens.
- Output fields
reasoning_content: the chain-of-thought content, at the same level as content
content: the final answer
- Context length
The API supports a context of up to 64K; the length of the returned reasoning_content does not count toward the 64K context limit.
- Supported features
Chat completion, chat prefix completion (Beta)
- Unsupported features
Function Calling, JSON Output, FIM completion (Beta)
- Unsupported parameters
temperature, top_p, presence_penalty, frequency_penalty, logprobs, top_logprobs. Note that, for compatibility with existing software, setting temperature, top_p, presence_penalty, or frequency_penalty does not raise an error, but it has no effect; setting logprobs or top_logprobs raises an error. These rules are illustrated in the sketch below.
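A minimal sketch putting the parameter rules together: max_tokens caps only the final answer, and sampling parameters such as temperature are accepted for compatibility but ignored by deepseek-reasoner.

from openai import OpenAI

client = OpenAI(api_key="", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "9.11 and 9.8, which is greater?"}],
    max_tokens=1024,   # caps the final answer only; the chain of thought is not counted here
    temperature=0.7    # accepted for compatibility, but has no effect on deepseek-reasoner
)

print(response.choices[0].message.reasoning_content)  # chain of thought
print(response.choices[0].message.content)            # final answer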
- Non-streaming example
from openai import OpenAI

client = OpenAI(api_key="", base_url="https://api.deepseek.com")

# Round 1
messages = [{"role": "user", "content": "9.11 and 9.8, which is greater?"}]
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=messages
)

reasoning_content = response.choices[0].message.reasoning_content
content = response.choices[0].message.content

# Round 2
messages.append({"role": "assistant", "content": content})
messages.append({"role": "user", "content": "How many Rs are there in the word 'strawberry'?"})
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=messages
)
# ...
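Note that Round 2 appends only content to the history. According to the API documentation, reasoning_content must not be passed back in the input messages, so a small helper like the one below (the function name is just for illustration) keeps the history clean:

def append_assistant_turn(messages, response):
    # Keep only the final answer in the conversation history;
    # reasoning_content is not sent back to the API.
    messages.append({
        "role": "assistant",
        "content": response.choices[0].message.content,
    })
    return messages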
- Streaming example
from openai import OpenAI

client = OpenAI(api_key="", base_url="https://api.deepseek.com")

# Round 1
messages = [{"role": "user", "content": "9.11 and 9.8, which is greater?"}]
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=messages,
    stream=True
)

reasoning_content = ""
content = ""
for chunk in response:
    # Chain-of-thought tokens arrive in delta.reasoning_content, answer tokens in delta.content;
    # either field can be None on a given chunk, so check before concatenating.
    if chunk.choices[0].delta.reasoning_content:
        reasoning_content += chunk.choices[0].delta.reasoning_content
    elif chunk.choices[0].delta.content:
        content += chunk.choices[0].delta.content

# Round 2
messages.append({"role": "assistant", "content": content})
messages.append({"role": "user", "content": "How many Rs are there in the word 'strawberry'?"})
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=messages,
    stream=True
)
# ...
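To display the stream as it arrives rather than only accumulating it, the same loop can print the chain of thought and the final answer under separate headings; a minimal sketch (fill in your API key as before):

from openai import OpenAI

client = OpenAI(api_key="", base_url="https://api.deepseek.com")

stream = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "9.11 and 9.8, which is greater?"}],
    stream=True
)

in_reasoning = True
print("=== reasoning ===")
for chunk in stream:
    delta = chunk.choices[0].delta
    if getattr(delta, "reasoning_content", None):
        print(delta.reasoning_content, end="", flush=True)
    elif delta.content:
        if in_reasoning:
            # The first content token marks the switch from chain of thought to the answer.
            print("\n=== answer ===")
            in_reasoning = False
        print(delta.content, end="", flush=True)
print()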
- Multi-turn conversations
The DeepSeek /chat/completions API is stateless: the server does not keep any context between user requests. On every request, the client must concatenate the full conversation history so far and pass it to the chat API.
from openai import OpenAI

client = OpenAI(api_key="", base_url="https://api.deepseek.com")

# Round 1
messages = [{"role": "user", "content": "What's the highest mountain in the world?"}]
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=messages
)

messages.append(response.choices[0].message)
print(f"Messages Round 1: {messages}")

# Round 2
messages.append({"role": "user", "content": "What is the second?"})
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=messages
)

messages.append(response.choices[0].message)
print(f"Messages Round 2: {messages}")
In the first round, the messages passed to the API are:
[
    {"role": "user", "content": "What's the highest mountain in the world?"}
]
In the second round, append the model's output from the first round to the end of messages:
[
    {"role": "user", "content": "What's the highest mountain in the world?"},
    {"role": "assistant", "content": "The highest mountain in the world is Mount Everest."},
    {"role": "user", "content": "What is the second?"}
]
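The same pattern can be packaged into a simple interactive loop that keeps the growing history on the client side; a sketch (fill in your API key as before):

from openai import OpenAI

client = OpenAI(api_key="", base_url="https://api.deepseek.com")
messages = []

while True:
    user_input = input("You: ")
    if not user_input:
        break
    # The API is stateless, so the full history is sent on every request.
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=messages
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Assistant: {answer}")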
This article only demonstrates simple calls to the DeepSeek API; for more advanced features, refer to the official API documentation:
https://api-docs.deepseek.com/zh-cn/