API calls keep timing out recently

Recently, calls to the Alibaba Cloud Bailian (阿里云百炼) LLM API keep failing. The code is as follows:

from openai import OpenAI

# Client pointed at the DashScope OpenAI-compatible endpoint
# (self.__api_base__ is https://dashscope.aliyuncs.com/compatible-mode/v1, per the log below)
llm = OpenAI(
    api_key=self.__api_key__,
    base_url=self.__api_base__,
    **kwargs,
)

completion = llm.chat.completions.create(
    model=ctx.obj['model'],
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': '请解释国债收益率和价格的关系?'},
    ],
)
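
For reference, a minimal standalone sketch of the same call with the timeout and retry settings made explicit. The endpoint URL is taken from the log below; DASHSCOPE_API_KEY is a placeholder environment variable, and the timeout values are illustrative, not the ones used in the code above:

import os
import httpx
from openai import OpenAI

# Standalone sketch: same request with explicit timeout/retry settings.
# DASHSCOPE_API_KEY is a placeholder; the timeout values are illustrative.
client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    timeout=httpx.Timeout(60.0, connect=10.0),  # give the connect/TLS handshake more time
    max_retries=3,
)

completion = client.chat.completions.create(
    model="qwen-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "请解释国债收益率和价格的关系?"},
    ],
)
print(completion.choices[0].message.content)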

Output when the call succeeds:

2025-04-15 22:03:11,850 - httpx - INFO - HTTP Request: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions "HTTP/1.1 200 OK"

Result: {"id":"chatcmpl-ec8a999f-d3ab-9f2c-947f-db706bab3794","choices":[{"finish_reason":"stop","index":0,"logprobs":null,"message":{"content":"我是通义千问,阿里巴巴集团旗下的超大规模语言模型。我能够回答问题、创作文字,如写故事、公文、技术文档等,还能进行逻辑推理、表达观点、玩游戏等。如果你有任何问题或需要帮助,欢迎随时告诉我!","refusal":null,"role":"assistant","annotations":null,"audio":null,"function_call":null,"tool_calls":null}}],"created":1744725792,"model":"qwen-turbo","object":"chat.completion","service_tier":null,"system_fingerprint":null,"usage":{"completion_tokens":55,"prompt_tokens":22,"total_tokens":77,"completion_tokens_details":null,"prompt_tokens_details":null}}

Output when the call fails:

2025-04-15 22:04:01,815 - openai._base_client - INFO - Retrying request to /chat/completions in 0.416089 seconds
2025-04-15 22:04:07,242 - openai._base_client - INFO - Retrying request to /chat/completions in 0.796980 seconds
Traceback (most recent call last):
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 101, in map_httpcore_exceptions
    yield
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 250, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 256, in handle_request
    raise exc from None
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 236, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpcore/_sync/http_proxy.py", line 316, in handle_request
    stream = stream.start_tls(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 376, in start_tls
    return self._stream.start_tls(ssl_context, server_hostname, timeout)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpcore/_backends/sync.py", line 154, in start_tls
    with map_exceptions(exc_map):
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.12/3.12.8/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectTimeout: _ssl.c:983: The handshake operation timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 955, in _request
    response = self._client.send(
               ^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpx/_client.py", line 914, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpx/_client.py", line 942, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1014, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 249, in handle_request
    with map_httpcore_exceptions():
         ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.12/3.12.8/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 118, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectTimeout: _ssl.c:983: The handshake operation timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/tzhu/work/lab/caspian/analyze.py", line 306, in <module>
    llm_analysis()
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/click/core.py", line 1161, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/click/core.py", line 1082, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/click/core.py", line 1697, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/click/core.py", line 1443, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/click/core.py", line 788, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/click/decorators.py", line 33, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/analyze.py", line 285, in test
    completion = llm.chat.completions.create(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 279, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 914, in create
    return self._post(
           ^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1242, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 919, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 964, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1057, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 964, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1057, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/Users/tzhu/work/lab/caspian/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 974, in _request
    raise APITimeoutError(request=request) from err
openai.APITimeoutError: Request timed out.
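
Two details are visible in the traceback above: the connection goes through httpcore's HTTP-proxy transport (httpcore/_sync/http_proxy.py), and it is the TLS handshake inside start_tls that times out. If an HTTP(S)_PROXY environment variable is being picked up, one diagnostic sketch (not a confirmed fix) is to hand the OpenAI client an httpx.Client that ignores proxy environment variables, to check whether the handshake timeout is introduced by the proxy hop:

import os
import httpx
from openai import OpenAI

# Diagnostic sketch only: bypass any HTTP(S)_PROXY picked up from the environment.
# DASHSCOPE_API_KEY is a placeholder environment variable.
no_proxy_client = httpx.Client(trust_env=False)  # ignore proxy/env settings

llm = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    http_client=no_proxy_client,
)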

The error above occurs intermittently, but more than 80% of calls fail with it.

I have tried all of the following models (a small probe sketch follows the list):

    DEEPSEEK_R1 = 'deepseek-r1'
    DEEPSEEK_V3 = 'deepseek-v3'
    QWEN_MAX_LATEST = 'qwen-max-latest'
    QWEN_MAX = 'qwen-max'
    QWEN_OMNI_TURBO = 'qwen-omni-turbo-latest'
    QWEN_OMNI_TURBO_2025_01_19 = 'qwen-omni-turbo-2025-01-19'
    QWEN_MAX_2025_01_25 = 'qwen-max-2025-01-25'
    QWEN_PLUS = 'qwen-plus'
    QWEN_PLUS_LATEST = 'qwen-plus-latest'
    QWEN_PLUS_2025_01_25 = 'qwen-plus-2025-01-25'
    QWQ_PLUS = 'qwq-plus'
    QWEN_TURBO = 'qwen-turbo'
    QWEN2_5_VL_72B_INSTRUCT = 'qwen2.5-vl-72b-instruct'
    QWEN2_5_14B_INSTRUCT_1M = 'qwen2.5-14b-instruct-1m'
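
For reference, a minimal probe sketch that walks a few of these model IDs and records which calls time out. It assumes the llm client constructed earlier; the MODELS list and per-call timing are illustrative, not part of the original code:

import time
import openai

# Hypothetical probe: call each model once and record whether it times out.
MODELS = ['deepseek-r1', 'deepseek-v3', 'qwen-max', 'qwen-plus', 'qwen-turbo']

for model_id in MODELS:
    start = time.monotonic()
    try:
        llm.chat.completions.create(
            model=model_id,
            messages=[{'role': 'user', 'content': 'ping'}],
        )
        print(f'{model_id}: ok in {time.monotonic() - start:.1f}s')
    except openai.APITimeoutError:
        print(f'{model_id}: timed out after {time.monotonic() - start:.1f}s')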
