[Elasticsearch 5.6.12 Source Code] — (3) Startup Process Analysis (Part 2)


Copyright notice: This article is the author's original work. Please credit the source when reposting!


Introduction

This article addresses the following question:

1. Which services does the Node object initialize during ES startup?

Construction Flow

Step 1. Create a List to hold the resources that must be released if initialization fails, and use a temporary Logger to log that initialization has started.

Here a List<Closeable> is created first, and the message "initializing ..." is logged. The code is straightforward:

final List<Closeable> resourcesToClose = new ArrayList<>(); // register everything we need to release in the case of an error
boolean success = false;
 {
      // use temp logger just to say we are starting. we can't use it later on because the node name might not be set
      Logger logger = Loggers.getLogger(Node.class, NODE_NAME_SETTING.get(environment.settings()));
      logger.info("initializing ...");

 }
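The snippet above omits the matching cleanup: at the tail of the constructor, success is set to true only after everything has been initialized, and a finally block closes whatever was registered along the way. Roughly, following the shape of the 5.6 constructor:

     success = true;
 } catch (IOException ex) {
     throw new ElasticsearchException("failed to bind service", ex);
 } finally {
     if (!success) {
         // release everything registered so far, swallowing close failures
         IOUtils.closeWhileHandlingException(resourcesToClose);
     }
 }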
Step 2. Force the client.type setting to node, set node.name, and check the index data directory settings.

This part first sets client.type to node, then calls TribeService's processSettings method to handle the "tribe" settings, creates the NodeEnvironment, checks and sets the node.name property, and finally, when the node requires local storage, checks the index data path configuration and prints some JVM information. The code is as follows:

 Settings tmpSettings = Settings.builder().put(environment.settings())
                .put(Client.CLIENT_TYPE_SETTING_S.getKey(), CLIENT_TYPE).build();

 tmpSettings = TribeService.processSettings(tmpSettings);

 // create the node environment as soon as possible, to recover the node id and enable logging
 try {
     nodeEnvironment = new NodeEnvironment(tmpSettings, environment);
     resourcesToClose.add(nodeEnvironment);
 } catch (IOException ex) {
     throw new IllegalStateException("Failed to create node environment", ex);
 }
 final boolean hadPredefinedNodeName = NODE_NAME_SETTING.exists(tmpSettings);
 Logger logger = Loggers.getLogger(Node.class, tmpSettings);
 final String nodeId = nodeEnvironment.nodeId();
 tmpSettings = addNodeNameIfNeeded(tmpSettings, nodeId);
 if (DiscoveryNode.nodeRequiresLocalStorage(tmpSettings)) {
    checkForIndexDataInDefaultPathData(tmpSettings, nodeEnvironment, logger);
 }
 // this must be captured after the node name is possibly added to the settings
 final String nodeName = NODE_NAME_SETTING.get(tmpSettings);
 if (hadPredefinedNodeName == false) {
     logger.info("node name [{}] derived from node ID [{}]; set [{}] to override", nodeName, nodeId, NODE_NAME_SETTING.getKey());
 } else {
     logger.info("node name [{}], node ID [{}]", nodeName, nodeId);
 }
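The addNodeNameIfNeeded helper called above is not shown in the snippet. In 5.6 it derives a default node name from a prefix of the node ID whenever node.name was not set explicitly; a sketch of it:

 private static Settings addNodeNameIfNeeded(Settings settings, final String nodeId) {
     if (NODE_NAME_SETTING.exists(settings)) {
         return settings;
     }
     // derive the default node name from the first seven characters of the node ID
     return Settings.builder().put(settings).put(NODE_NAME_SETTING.getKey(), nodeId.substring(0, 7)).build();
 }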
Step 3. Create the PluginsService and Environment instances.

The PluginsService constructor loads the jar files under the plugins and modules directories and creates the corresponding plugin and module instances. Once they exist, the Node constructor calls pluginsService's updatedSettings method to collect the settings defined by the plugins and modules. Next, Node uses the new settings and the nodeId to create the LocalNodeFactory, and recreates the Environment object from the latest settings. The code is as follows:

 this.pluginsService = new PluginsService(tmpSettings, environment.modulesFile(), environment.pluginsFile(), classpathPlugins);
 this.settings = pluginsService.updatedSettings();
 localNodeFactory = new LocalNodeFactory(settings, nodeEnvironment.nodeId());

 // create the environment based on the finalized (processed) view of the settings
 // this is just to makes sure that people get the same settings, no matter where they ask them from
 this.environment = new Environment(this.settings);
 Environment.assertEquivalent(environment, this.environment);
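For reference, a plugin feeds extra settings into updatedSettings by overriding Plugin's additionalSettings method. A minimal sketch, where the plugin class and the setting key are hypothetical:

 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.plugins.Plugin;

 // Hypothetical plugin: the Settings returned here are merged into the
 // node-level settings by pluginsService.updatedSettings() in Step 3.
 public class MyPlugin extends Plugin {
     @Override
     public Settings additionalSettings() {
         return Settings.builder()
             .put("my_plugin.some_flag", true) // hypothetical setting key
             .build();
     }
 }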
Step 4. Create the ThreadPool and ThreadContext instances.

First, the list of ExecutorBuilder objects contributed by plugins and modules is obtained through pluginsService. The ThreadPool (and with it the ThreadContext) is then created from the settings and the collected ExecutorBuilder list. The code is as follows:

 // gather the ExecutorBuilders contributed by plugins, then build the shared ThreadPool
 final List<ExecutorBuilder<?>> executorBuilders = pluginsService.getExecutorBuilders(tmpSettings);
 final ThreadPool threadPool = new ThreadPool(settings, executorBuilders.toArray(new ExecutorBuilder[0]));
 resourcesToClose.add(() -> ThreadPool.terminate(threadPool, 10, TimeUnit.SECONDS));
 // adds the context to the DeprecationLogger so that it does not need to be injected everywhere
 DeprecationLogger.setThreadContext(threadPool.getThreadContext());
 resourcesToClose.add(() -> DeprecationLogger.removeThreadContext(threadPool.getThreadContext()));
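As an aside, those ExecutorBuilder objects typically come from Plugin's getExecutorBuilders hook. A minimal sketch, where the plugin class, pool name, and sizing are hypothetical:

 import java.util.Collections;
 import java.util.List;

 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.plugins.Plugin;
 import org.elasticsearch.threadpool.ExecutorBuilder;
 import org.elasticsearch.threadpool.FixedExecutorBuilder;

 // Hypothetical plugin registering a fixed-size pool named "my_pool";
 // the builder also registers its sizing settings (thread_pool.my_pool.*).
 public class MyThreadPoolPlugin extends Plugin {
     @Override
     public List<ExecutorBuilder<?>> getExecutorBuilders(Settings settings) {
         return Collections.singletonList(new FixedExecutorBuilder(settings, "my_pool", 4, 100));
     }
 }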
Step 5. Create the main modules in turn: NodeClient, ResourceWatcherService, ScriptModule, AnalysisModule, SettingsModule, NetworkService, ClusterService, IngestService, ClusterInfoService, and so on.

ScriptModule holds the ScriptService, through which instances of the various script engines configured in ES can be obtained. AnalysisModule holds the AnalysisRegistry, through which instances of the various analyzers configured in ES can be obtained. SettingsModule stores, grouped by type, the setting objects that ES can parse. NetworkService is mainly used to resolve network addresses, and ClusterService maintains the cluster state. The code is as follows:

 final List<Setting<?>> additionalSettings = new ArrayList<>(pluginsService.getPluginSettings());
 final List<String> additionalSettingsFilter = new ArrayList<>(pluginsService.getPluginSettingsFilter());
 for (final ExecutorBuilder<?> builder : threadPool.builders()) {
     additionalSettings.addAll(builder.getRegisteredSettings());
 }
 client = new NodeClient(settings, threadPool);
 final ResourceWatcherService resourceWatcherService = new ResourceWatcherService(settings, threadPool);
 final ScriptModule scriptModule = ScriptModule.create(settings, this.environment, resourceWatcherService,
     pluginsService.filterPlugins(ScriptPlugin.class));
 AnalysisModule analysisModule = new AnalysisModule(this.environment, pluginsService.filterPlugins(AnalysisPlugin.class));
 additionalSettings.addAll(scriptModule.getSettings());
 // this is as early as we can validate settings at this point. we already pass them to ScriptModule as well as ThreadPool
 // so we might be late here already
 final SettingsModule settingsModule = new SettingsModule(this.settings, additionalSettings, additionalSettingsFilter);
 scriptModule.registerClusterSettingsListeners(settingsModule.getClusterSettings());
 resourcesToClose.add(resourceWatcherService);
 final NetworkService networkService = new NetworkService(settings,
     getCustomNameResolvers(pluginsService.filterPlugins(DiscoveryPlugin.class)));
 final ClusterService clusterService = new ClusterService(settings, settingsModule.getClusterSettings(), threadPool,
     localNodeFactory::getNode);
 clusterService.addStateApplier(scriptModule.getScriptService());
 resourcesToClose.add(clusterService);
 final IngestService ingestService = new IngestService(clusterService.getClusterSettings(), settings, threadPool, this.environment,
     scriptModule.getScriptService(), analysisModule.getAnalysisRegistry(), pluginsService.filterPlugins(IngestPlugin.class));
 final ClusterInfoService clusterInfoService = newClusterInfoService(settings, clusterService, threadPool, client);
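To illustrate the AnalysisModule part: the registry it holds hands back analyzer instances (org.apache.lucene.analysis.Analyzer) by name. A one-line fragment continuing from the snippet above, assuming the built-in standard analyzer is registered (getAnalyzer may throw an IOException):

 Analyzer standard = analysisModule.getAnalysisRegistry().getAnalyzer("standard");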
Step 6. Create the ModulesBuilder and add the various Modules to it.

ES uses Google's open-source Guice to manage the dependencies in the program. The Modules added to the ModulesBuilder are: the Modules provided by plugins, obtained through PluginsService; NodeModule, which holds the MonitorService; ClusterModule, which holds the ClusterService and the related ClusterPlugins; IndicesModule, which holds the MapperPlugins; SearchModule, which holds the related SearchPlugins; ActionModule, which holds the ThreadPool, ActionPlugins, NodeClient, and CircuitBreakerService; GatewayModule; RepositoriesModule, which holds the RepositoryPlugins; and SettingsModule, which holds the various setting objects available to ES. Finally, the createInjector method of modules is called to create the application's dependency injector, as sketched below.
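A condensed sketch of that wiring, following the shape of the constructor (only a few representative bindings are shown; the shaded Guice classes live under org.elasticsearch.common.inject):

 ModulesBuilder modules = new ModulesBuilder();
 // plugin modules must be added first, exactly as the constructor does
 for (Module pluginModule : pluginsService.createGuiceModules()) {
     modules.add(pluginModule);
 }
 modules.add(new NodeModule(this, monitorService)); // holds the MonitorService
 modules.add(clusterModule);                        // holds ClusterService and ClusterPlugins
 modules.add(b -> {
     // services constructed earlier are bound as ready-made instances
     b.bind(ThreadPool.class).toInstance(threadPool);
     b.bind(NodeEnvironment.class).toInstance(nodeEnvironment);
     b.bind(Client.class).toInstance(client);
 });
 injector = modules.createInjector();               // the dependency injector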

Step 7. Collect the LifecycleComponent objects of each plugin, and initialize the NodeClient.

The code is as follows:

 List<LifecycleComponent> pluginLifecycleComponents = pluginComponents.stream()
     .filter(p -> p instanceof LifecycleComponent)
     .map(p -> (LifecycleComponent) p).collect(Collectors.toList());
 pluginLifecycleComponents.addAll(pluginsService.getGuiceServiceClasses().stream()
     .map(injector::getInstance).collect(Collectors.toList()));
 resourcesToClose.addAll(pluginLifecycleComponents);
 this.pluginLifecycleComponents = Collections.unmodifiableList(pluginLifecycleComponents);
 client.initialize(injector.getInstance(new Key<Map<GenericAction, TransportAction>>() {}),
         () -> clusterService.localNode().getId());

 if (NetworkModule.HTTP_ENABLED.get(settings)) {
     logger.debug("initializing HTTP handlers ...");
     actionModule.initRestHandlers(() -> clusterService.state().nodes());
 }
 logger.info("initialized");
Step 8. Call Node's start method, which in turn calls the start methods of the key modules.

The key services are started one after another. The code is as follows:

 // hack around dependency injection problem (for now...)
 injector.getInstance(Discovery.class).setAllocationService(injector.getInstance(AllocationService.class));
 pluginLifecycleComponents.forEach(LifecycleComponent::start);

 injector.getInstance(MappingUpdatedAction.class).setClient(client);
 injector.getInstance(IndicesService.class).start();
 injector.getInstance(IndicesClusterStateService.class).start();
 injector.getInstance(IndicesTTLService.class).start();
 injector.getInstance(SnapshotsService.class).start();
 injector.getInstance(SnapshotShardsService.class).start();
 injector.getInstance(RoutingService.class).start();
 injector.getInstance(SearchService.class).start();
 injector.getInstance(MonitorService.class).start();

 final ClusterService clusterService = injector.getInstance(ClusterService.class);

 final NodeConnectionsService nodeConnectionsService = injector.getInstance(NodeConnectionsService.class);
 nodeConnectionsService.start();
 clusterService.setNodeConnectionsService(nodeConnectionsService);

 // TODO hack around circular dependencies problems
 injector.getInstance(GatewayAllocator.class).setReallocation(clusterService, injector.getInstance(RoutingService.class));

 injector.getInstance(ResourceWatcherService.class).start();
 injector.getInstance(GatewayService.class).start();
 Discovery discovery = injector.getInstance(Discovery.class);
 clusterService.setDiscoverySettings(discovery.getDiscoverySettings());
 clusterService.addInitialStateBlock(discovery.getDiscoverySettings().getNoMasterBlock());
 clusterService.setClusterStatePublisher(discovery::publish);

 // start before the cluster service since it adds/removes initial Cluster state blocks
 final TribeService tribeService = injector.getInstance(TribeService.class);
 tribeService.start();

 // Start the transport service now so the publish address will be added to the local disco node in ClusterService
 TransportService transportService = injector.getInstance(TransportService.class);
 transportService.getTaskManager().setTaskResultsService(injector.getInstance(TaskResultsService.class));
 transportService.start();
 validateNodeBeforeAcceptingRequests(settings, transportService.boundAddress(),
     pluginsService.filterPlugins(Plugin.class).stream()
         .flatMap(p -> p.getBootstrapChecks().stream()).collect(Collectors.toList()));

 clusterService.addStateApplier(transportService.getTaskManager());
 clusterService.start();
 assert localNodeFactory.getNode() != null;
 assert transportService.getLocalNode().equals(localNodeFactory.getNode())
     : "transportService has a different local node than the factory provided";
 assert clusterService.localNode().equals(localNodeFactory.getNode())
     : "clusterService has a different local node than the factory provided";
 // start after cluster service so the local disco is known
 discovery.start();
 transportService.acceptIncomingRequests();
 discovery.startInitialJoin();
 // tribe nodes don't have a master so we shouldn't register an observer
 final TimeValue initialStateTimeout = DiscoverySettings.INITIAL_STATE_TIMEOUT_SETTING.get(settings);
 if (initialStateTimeout.millis() > 0) {
     final ThreadPool thread = injector.getInstance(ThreadPool.class);
     ClusterState clusterState = clusterService.state();
     ClusterStateObserver observer = new ClusterStateObserver(clusterState, clusterService, null, logger, thread.getThreadContext());
     if (clusterState.nodes().getMasterNodeId() == null) {
         logger.debug("waiting to join the cluster. timeout [{}]", initialStateTimeout);
         final CountDownLatch latch = new CountDownLatch(1);
         observer.waitForNextChange(new ClusterStateObserver.Listener() {
             @Override
             public void onNewClusterState(ClusterState state) { latch.countDown(); }

             @Override
             public void onClusterServiceClose() {
                 latch.countDown();
             }

             @Override
             public void onTimeout(TimeValue timeout) {
                 logger.warn("timed out while waiting for initial discovery state - timeout: {}", initialStateTimeout);
                 latch.countDown();
             }
         }, state -> state.nodes().getMasterNodeId() != null, initialStateTimeout);

         try {
             latch.await();
         } catch (InterruptedException e) {
             throw new ElasticsearchTimeoutException("Interrupted while waiting for initial discovery state");
         }
     }
 }

 if (NetworkModule.HTTP_ENABLED.get(settings)) {
     injector.getInstance(HttpServerTransport.class).start();
 }

 if (WRITE_PORTS_FILE_SETTING.get(settings)) {
     if (NetworkModule.HTTP_ENABLED.get(settings)) {
         HttpServerTransport http = injector.getInstance(HttpServerTransport.class);
         writePortsFile("http", http.boundAddress());
     }
     TransportService transport = injector.getInstance(TransportService.class);
     writePortsFile("transport", transport.boundAddress());
 }

 // start nodes now, after the http server, because it may take some time
 tribeService.startNodes();
 logger.info("started");