Trying GraphRAG with SiliconCloud: 《三国演义》 (Romance of the Three Kingdoms) as an Example (a hands-on tutorial for complete beginners)

Summary: This article walks through trying out GraphRAG with different models and platforms. It first uses OpenAI's gpt-4o-mini model to analyze Shen Congwen's 《边城》 (Border Town), showing how to install GraphRAG, configure it, and run useful queries against the text. It then looks at SiliconCloud as an alternative that works well from mainland China, using 《三国演义》 (Romance of the Three Kingdoms) to demonstrate the same workflow with SiliconCloud models. It also discusses running local models through Ollama and LM Studio (not actually tested here because of hardware limits), and finally describes a mixed setup that pairs an online chat-model API with a local or online embedding model, listing the models that managed to complete the GraphRAG pipeline.

Experiencing GraphRAG with OpenAI Models: 《边城》 (Border Town) as an Example

Before switching to SiliconCloud, let's first use OpenAI's models to see what GraphRAG can do.

GraphRAG is an AI-based approach to content understanding and search: it uses LLMs to parse data, build a knowledge graph from it, and answer user questions over a user-provided private dataset.

GitHub: https://github.com/microsoft/graphrag

Website: https://microsoft.github.io/graphrag

Now let's get started with GraphRAG.

Friendly reminder

GraphRAG consumes quite a lot of tokens. When first trying it out, you don't have to follow the official walkthrough: use a shorter text and switch to the gpt-4o-mini model.

We'll use Shen Congwen's 《边城》 (Border Town) as the example.

Create a Python virtual environment and install GraphRAG:

pip install graphrag
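
If the virtual-environment step itself is unfamiliar, here is a minimal sketch using Python's built-in venv module (the environment name graphrag-env is just an example; create and activate the environment before running the pip install above):

# create the environment
python -m venv graphrag-env
# activate it on Windows
graphrag-env\Scripts\activate
# activate it on Linux / macOS
source graphrag-env/bin/activate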

Once it is installed:

mkdir biancheng
mkdir biancheng/input

This just creates two folders (you can also create them manually); then put the 《边城》 txt file into the input folder, as shown below:

(screenshot)

Initialize the project:

python -m graphrag.index --init --root ./biancheng

When it finishes, a few files will have been generated, as shown below:

(screenshot)

Enter your OpenAI API key in the .env file, as shown below:

(screenshot)
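
For reference, the .env file only needs the one variable that settings.yaml references as ${GRAPHRAG_API_KEY}; the value below is a placeholder:

GRAPHRAG_API_KEY=sk-your-openai-api-key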

Then adjust settings.yaml. My configuration is as follows:

encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: gpt-4o-mini
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  # api_base: https://<instance>.openai.azure.com
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made
  # temperature: 0 # temperature for sampling
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: text-embedding-3-small
    # api_base: https://<instance>.openai.azure.com
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional



chunks:
  size: 1200
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output/${timestamp}/artifacts"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: true
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000

global_search:
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

To save cost, I switched the model to gpt-4o-mini:

(screenshot)

So that the generated graphml file can later be opened in tools such as Gephi, I changed this setting to true:

(screenshot)
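
Put differently, relative to the generated defaults the two edits shown above amount to the following keys in settings.yaml (only the changed keys are listed; everything else stays as generated):

llm:
  model: gpt-4o-mini

snapshots:
  graphml: true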

That completes the configuration. Now start indexing:

python -m graphrag.index --root ./biancheng

Screenshot after indexing completes:

(screenshot)

Now we can take a look at the generated nodes and edges:

(screenshots)

Now we can start querying.

First, a global query:

python -m graphrag.query --root ./biancheng --method global "这篇小说讲了什么主题?"

(screenshot)

Then a local query:

python -m graphrag.query --root ./biancheng --method local "翠翠在白鸡关发生了什么?"

(screenshot)

《边城》 is about 50,000 to 60,000 Chinese characters long. Checking the cost:

(screenshot)

The whole run cost only $0.18; gpt-4o-mini really is good value for money.

Trying GraphRAG with SiliconCloud: 《三国演义》 as an Example

OpenAI's models work well, but using OpenAI from mainland China comes with restrictions, and many people do not have an OpenAI API key or an easy way to get one. SiliconCloud is a practical substitute: it provides OpenAI-compatible chat and embedding APIs, offers a range of leading open-source models, and new accounts come with some free credit, so you can try it right away.

To get the pipeline running quickly with SiliconCloud, I first switched to a much shorter text, the story of 《嫦娥奔月》 (Chang'e Flies to the Moon).

The steps are the same as before; only a few configuration values need to change.

First, replace the API key in .env with your SiliconCloud API key:

(screenshot)

Here is what needs to change in settings.yaml.

Start with the chat-model section:

(screenshot)

Here I chose the meta-llama/Meta-Llama-3.1-70B-Instruct model. For the exact model name to use, see the SiliconCloud documentation: https://docs.siliconflow.cn/docs/getting-started

(screenshot)
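
Since the actual edit is only visible in the screenshot, here is a minimal sketch of what the llm section looks like when pointed at SiliconCloud; the api_base value is the OpenAI-compatible endpoint that also appears in the embeddings configuration later in this post, so double-check it against the SiliconCloud docs:

llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat
  model: meta-llama/Meta-Llama-3.1-70B-Instruct
  api_base: https://api.siliconflow.cn/v1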

Next, the embedding-model section:

(screenshot)

The embedding model used here is BAAI/bge-large-en-v1.5. BAAI/bge-large-zh-v1.5 kept erroring out for me (feel free to try it yourself); I have not figured out why yet.

How to write the embedding model name is also covered in the documentation:

(screenshot)
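
For completeness, a sketch of the corresponding embeddings section; the same model and api_base values appear again in the mixed-setup configuration near the end of this post:

embeddings:
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding
    model: BAAI/bge-large-en-v1.5
    api_base: https://api.siliconflow.cn/v1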

Start indexing:

(screenshot)

View the nodes:

(screenshot)

View the edges:

(screenshot)

Global query:

python -m graphrag.query --root ./change1 --method global "这篇故事讲了什么主题?"

(screenshot)

Local query:

python -m graphrag.query --root ./change1 --method local "嫦娥送了什么礼物给天帝?"

(screenshot)

Now that the whole pipeline runs end to end, it is time to try 《三国演义》!

Using the same settings; 《三国演义》 is much longer, so the run is slow. Be patient:

(screenshot)

Pipeline finished:

(screenshot)

View the nodes:

(screenshot)

View the edges:

(screenshot)

Global query:

python -m graphrag.query --root ./sanguo --method global "三国讲了什么故事?"

(screenshot)

Local query:

python -m graphrag.query --root ./sanguo --method local "赤壁之战是怎么打败曹操的?"

(screenshot)

Trying GraphRAG with Local Models

For a fully local setup you can use a chat model served by Ollama. Ollama's embedding endpoint is not compatible with the OpenAI format, so the embedding model can be served through LM Studio instead.

Configuration:

encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: llama3.1:70b
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: http://localhost:11434/v1
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made
  # temperature: 0 # temperature for sampling
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: nomic-ai/nomic-embed-text-v1.5-GGUF/nomic-embed-text-v1.5.Q2_K.gguf
    api_base: http://localhost:1234/v1
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional



chunks:
  size: 300
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output/${timestamp}/artifacts"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000

global_search:
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

In theory this should run, but my machine is not powerful enough for anything beyond small models, so I could not test it in practice.

Mixing and Matching

You can pair an online chat-model API with a local embedding model. That said, SiliconCloud's embedding models are currently free to use, so you can also just use SiliconCloud for embeddings.

I wanted to test which models can get the GraphRAG pipeline running end to end, but some vendors only provide chat models, or their embedding models are not compatible with the OpenAI format. What then?

Use two keys: a SiliconCloud key for the embedding model, and a key from the other vendor for the chat model.

For example, the keys can be set like this:

(screenshot)
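
Concretely, that means putting two variables into the .env file. The names match the ${GRAPHRAG_API_KEY} and ${Other_API_KEY} placeholders used in the configuration below; the values here are placeholders:

# SiliconCloud key, used for the embedding model
GRAPHRAG_API_KEY=sk-your-siliconcloud-key
# key from the other vendor (the one serving the chat model, e.g. glm-4-air)
Other_API_KEY=your-other-vendor-key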

The configuration file can then look like this:

encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${Other_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: glm-4-air 
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: https://open.bigmodel.cn/api/paas/v4
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made
  # temperature: 0 # temperature for sampling
  # top_p: 1 # top-p sampling
  # n: 1 # Number of completions to generate

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: BAAI/bge-large-en-v1.5
    api_base: https://api.siliconflow.cn/v1
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional



chunks:
  size: 300
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output/${timestamp}/artifacts"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 1

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: true
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000

global_search:
  # llm_temperature: 0 # temperature for sampling
  # llm_top_p: 1 # top-p sampling
  # llm_n: 1 # Number of completions to generate
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

I tried a number of models. In this simple test, the ones that could get the GraphRAG pipeline running end to end (just running; answer quality is not necessarily good) are the following:

(screenshots: list of models that completed the pipeline)

Friendly reminder

GraphRAG consumes a lot of tokens, so keep an eye on your quota!

For a text of just over two thousand characters, a single GraphRAG run easily burns more than a hundred thousand tokens:

(screenshots)

References

1. https://microsoft.github.io/graphrag/posts/get_started/
2. https://siliconflow.cn/zh-cn/siliconcloud
3. https://github.com/microsoft/graphrag/discussions/321
4. https://github.com/microsoft/graphrag/issues/374
5. https://www.youtube.com/watch?v=BLyGDTNdad0
