
Deploying the distributed graph database Nebula with Docker

Summary: my boss insisted I get a distributed, containerized Nebula deployment done.

Hi everyone, I'm 脚丫先生 (o^^o).

I've recently been building a data fusion and analytics platform and needed a distributed graph database. My first instinct was to search Baidu and the official site, but almost everything I found was either a single-node setup or a distributed deployment on Kubernetes. I didn't want to make the project that heavy, so I decided to do the distributed deployment with docker-compose instead. The process is described below; I hope it helps.


一、The graph database Nebula

Nebula Graph is an open-source, third-generation distributed graph database. It can store trillions of nodes and edges with properties while still answering queries with millisecond latency under high concurrency. Unlike Gremlin and Cypher, Nebula provides an SQL-like query language, nGQL, which composes statements in three ways (pipes, semicolons, and variables) to perform CRUD operations on the graph. At the storage layer, Nebula Graph currently supports RocksDB and HBase.
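As a taste of the pipe-style composition nGQL offers, the statement below feeds one traversal's output into the next. This is only an illustrative sketch: the `player`/`follow` schema is hypothetical, and the syntax shown is the v2.0-era form.

```ngql
// Hop from one vertex to its neighbors, then hop again from those,
// piping the intermediate destination IDs ($-.id) into the second GO.
GO FROM "player100" OVER follow YIELD follow._dst AS id |
GO FROM $-.id OVER follow YIELD $$.player.name;
```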

二、Cluster plan

Hostname | IP              | Nebula services
spark1   | 192.168.239.128 | graphd0, metad0, storaged0
spark2   | 192.168.239.129 | graphd1, metad1, storaged1
spark3   | 192.168.239.130 | graphd2, metad2, storaged2

From an ops perspective, standing this environment up natively used to be genuinely painful: building a single environment took a long time, and delivering and then maintaining such projects was a constant source of friction with customers. Ugh.

2.1 docker-compose for the spark1 node

version: '3.4'
services:
  metad0:
    image: vesoft/nebula-metad:v2.0.0
    privileged: true
    network_mode: host
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=192.168.239.128:9559,192.168.239.129:9559,192.168.239.130:9559
      - --local_ip=192.168.239.128
      - --ws_ip=0.0.0.0
      - --port=9559
      - --ws_http_port=19559
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://192.168.239.128:19559/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9559
      - 19559
      - 19560
    volumes:
      - ./data/meta0:/data/meta
      - ./logs/meta0:/logs
    restart: on-failure
  
  storaged0:
    image: vesoft/nebula-storaged:v2.0.0
    privileged: true
    network_mode: host
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=192.168.239.128:9559,192.168.239.129:9559,192.168.239.130:9559
      - --local_ip=192.168.239.128
      - --ws_ip=0.0.0.0
      - --port=9779
      - --ws_http_port=19779
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - metad0
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://192.168.239.128:19779/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9779
      - 19779
      - 19780
    volumes:
      - ./data/storage0:/data/storage
      - ./logs/storage0:/logs
    restart: on-failure

  graphd0:
    image: vesoft/nebula-graphd:v2.0.0
    privileged: true
    network_mode: host
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=192.168.239.128:9559,192.168.239.129:9559,192.168.239.130:9559
      - --port=9669
      - --ws_ip=0.0.0.0
      - --ws_http_port=19669
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - metad0
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://192.168.239.128:19669/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - "9669:9669"
      - 19669
      - 19670
    volumes:
      - ./logs/graph0:/logs
    restart: on-failure

Note:

  • meta_server_addrs: the IP address and port of every Meta service in the cluster. Separate multiple Meta services with commas (,).

  • local_ip: the local IP address of the Meta service, used to identify the nebula-metad process. For a distributed cluster, or when remote access is required, set it to the node's own address.

  • ws_ip: the IP address of the HTTP service. Default: 0.0.0.0.

  • Because every service runs with network_mode: host, the ports entries are informational only; the processes bind their ports directly on the host.
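Since meta_server_addrs must name every Meta service exactly once, a quick shell sanity check before starting the containers can catch copy-paste duplicates. The address list is the one from the cluster plan:

```shell
# The meta_server_addrs value shared by all services; it must contain each
# metad host exactly once.
ADDRS="192.168.239.128:9559,192.168.239.129:9559,192.168.239.130:9559"

# Count distinct entries; for this three-node cluster the count should be 3.
echo "$ADDRS" | tr ',' '\n' | sort -u | wc -l
```

If the count comes back as 2, one of the addresses was pasted twice and the Meta quorum will not form correctly.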

2.2 docker-compose for the spark2 node (same layout as spark1)

version: '3.4'
services:
  metad1:
    image: vesoft/nebula-metad:v2.0.0
    privileged: true
    network_mode: host
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=192.168.239.128:9559,192.168.239.129:9559,192.168.239.130:9559
      - --local_ip=192.168.239.129
      - --ws_ip=0.0.0.0
      - --port=9559
      - --ws_http_port=19559
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://192.168.239.129:19559/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9559
      - 19559
      - 19560
    volumes:
      - ./data/meta1:/data/meta
      - ./logs/meta1:/logs
    restart: on-failure
  
  storaged1:
    image: vesoft/nebula-storaged:v2.0.0
    privileged: true
    network_mode: host
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=192.168.239.128:9559,192.168.239.129:9559,192.168.239.130:9559
      - --local_ip=192.168.239.129
      - --ws_ip=0.0.0.0
      - --port=9779
      - --ws_http_port=19779
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - metad1
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://192.168.239.129:19779/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9779
      - 19779
      - 19780
    volumes:
      - ./data/storage1:/data/storage
      - ./logs/storage1:/logs
    restart: on-failure
  
  graphd1:
    image: vesoft/nebula-graphd:v2.0.0
    privileged: true
    network_mode: host
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=192.168.239.128:9559,192.168.239.129:9559,192.168.239.130:9559
      - --port=9669
      - --ws_ip=0.0.0.0
      - --ws_http_port=19669
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - metad1
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://192.168.239.129:19669/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - "9669:9669"
      - 19669
      - 19670
    volumes:
      - ./logs/graph1:/logs
    restart: on-failure

2.3 docker-compose for the spark3 node (same layout as spark1)

version: '3.4'
services:
  metad2:
    image: vesoft/nebula-metad:v2.0.0
    privileged: true
    network_mode: host
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=192.168.239.128:9559,192.168.239.129:9559,192.168.239.130:9559
      - --local_ip=192.168.239.130
      - --ws_ip=0.0.0.0
      - --port=9559
      - --ws_http_port=19559
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://192.168.239.130:19559/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9559
      - 19559
      - 19560
    volumes:
      - ./data/meta2:/data/meta
      - ./logs/meta2:/logs
    restart: on-failure
  
  storaged2:
    image: vesoft/nebula-storaged:v2.0.0
    privileged: true
    network_mode: host
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=192.168.239.128:9559,192.168.239.129:9559,192.168.239.130:9559
      - --local_ip=192.168.239.130
      - --ws_ip=0.0.0.0
      - --port=9779
      - --ws_http_port=19779
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://192.168.239.130:19779/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - 9779
      - 19779
      - 19780
    volumes:
      - ./data/storage2:/data/storage
      - ./logs/storage2:/logs
    restart: on-failure
  
  graphd2:
    image: vesoft/nebula-graphd:v2.0.0
    privileged: true
    network_mode: host
    environment:
      USER: root
      TZ:   "${TZ}"
    command:
      - --meta_server_addrs=192.168.239.128:9559,192.168.239.129:9559,192.168.239.130:9559
      - --port=9669
      - --ws_ip=0.0.0.0
      - --ws_http_port=19669
      - --log_dir=/logs
      - --v=0
      - --minloglevel=0
    depends_on:
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://192.168.239.130:19669/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - "9669:9669"
      - 19669
      - 19670
    volumes:
      - ./logs/graph2:/logs
    restart: on-failure
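With the three compose files in place, the launch procedure is the same on every node. A sketch of the steps, assuming each node's file is saved as docker-compose.yaml in the current directory:

```shell
# Run on each of spark1/spark2/spark3, in the directory holding that node's
# docker-compose.yaml. Create the bind-mount parent directories first so the
# containers do not create them root-owned.
mkdir -p data logs

# Start metad/storaged/graphd on this node in the background.
docker-compose up -d

# After the 20s healthcheck start_period, every service should show (healthy).
docker-compose ps
```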

三、Client

No frills here; straight to the docker-compose for the client stack:

version: '3.4'
services:
  client:
    image: vesoft/nebula-http-gateway:v2
    environment:
      USER: root
    ports:
      - 8080
    networks:
      - nebula-web
  web:
    image: vesoft/nebula-graph-studio:v2
    environment:
      USER: root
      UPLOAD_DIR: ${MAPPING_DOCKER_DIR}
    ports:
      - 7001
    depends_on:
      - client
    volumes:
      - ${UPLOAD_DIR}:${MAPPING_DOCKER_DIR}:rw
    networks:
      - nebula-web
  importer:
    image: vesoft/nebula-importer:v2
    networks:
      - nebula-web
    ports:
      - 5699
    volumes:
      - ${UPLOAD_DIR}:${MAPPING_DOCKER_DIR}:rw
    command:
      - "--port=5699"
      - "--callback=http://nginx:7001/api/import/finish"
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/nebula.conf
      - ${UPLOAD_DIR}:${MAPPING_DOCKER_DIR}:rw
    depends_on:
      - client
      - web
    networks:
      - nebula-web
    ports:
      - 7001:7001

networks:
  nebula-web:
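The compose files above reference the environment variables ${TZ}, ${UPLOAD_DIR}, and ${MAPPING_DOCKER_DIR}, which docker-compose reads from a .env file next to the YAML. A minimal example; the paths here are assumptions, adjust them to your machine:

```
TZ=Asia/Shanghai
UPLOAD_DIR=/home/nebula/upload
MAPPING_DOCKER_DIR=/upload
```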

Open the web UI in Chrome: http://192.168.239.128:7001

Run SHOW HOSTS; in the console to confirm that all three storage hosts are registered and ONLINE.
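Once connected through Studio, a few statements verify that the cluster really is distributed. The space name and schema below are made up for illustration; replica_factor=3 matches the three storaged instances:

```ngql
SHOW HOSTS;   // the three storaged hosts should all be ONLINE

CREATE SPACE test_space(partition_num=15, replica_factor=3, vid_type=FIXED_STRING(32));
USE test_space;
CREATE TAG person(name string, age int);
INSERT VERTEX person(name, age) VALUES "p1":("Tom", 18);
FETCH PROP ON person "p1" YIELD person.name, person.age;
```

Note that a freshly created space takes a couple of heartbeat cycles to propagate to the storage layer, so the USE and later statements may need a short wait after CREATE SPACE.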
