As artificial intelligence applications have spread worldwide, large language model (LLM) technology has attracted broad attention and adoption. Graph databases, as tools for handling complex data structures, can provide strong support for enterprises building domain-specific large language models: they can enrich prompts with context drawn from graphs containing hundreds of millions of entities, improving the accuracy of model responses and enabling enterprise-grade applications. At the same time, the Graph+LLM combination can accelerate knowledge graph construction, helping enterprises understand their data and mine its value more deeply.
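The context-enrichment pattern described above can be sketched in a few lines. This is a minimal, illustrative example only: the in-memory dict stands in for a real graph database, and all entity names, relations, and function names (`retrieve_context`, `build_prompt`) are assumptions, not part of any particular product's API. A production system would issue a query (e.g. Cypher or Gremlin) against the graph store and pass the assembled prompt to an actual LLM.

```python
# Sketch of Graph+LLM context enrichment: look up facts about the
# entities mentioned in a question, render them as text, and prepend
# them to the prompt so the model answers with grounded context.
# The dict below is a toy stand-in for a graph database.
from typing import Dict, List, Tuple

# Toy knowledge graph: entity -> list of (relation, object) triples.
KG: Dict[str, List[Tuple[str, str]]] = {
    "GraphDB": [("is_a", "database"), ("stores", "nodes and edges")],
    "LLM": [("is_a", "language model"), ("consumes", "text context")],
}

def retrieve_context(entity: str, kg: Dict[str, List[Tuple[str, str]]]) -> List[str]:
    """Render the triples stored for `entity` as natural-language lines."""
    return [f"{entity} {rel} {obj}." for rel, obj in kg.get(entity, [])]

def build_prompt(question: str, entities: List[str]) -> str:
    """Assemble a context-augmented prompt for the LLM."""
    facts = [line for e in entities for line in retrieve_context(e, KG)]
    return "Context:\n" + "\n".join(facts) + f"\n\nQuestion: {question}"

prompt = build_prompt("How can a graph database help an LLM?", ["GraphDB", "LLM"])
print(prompt)
```

The key design choice is that retrieval happens outside the model: the graph supplies fresh, structured facts at query time, so the LLM does not need to memorize them during pre-training.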