SkyWalking internal test server installation notes

1. yum install java-1.8.0-openjdk* -y (install the Java 1.8 packages)

2. yum install net-tools (this was already installed earlier on this machine)

3. Install the repository tools and docker-ce (early Docker later split into the free Community Edition, CE, and the paid Enterprise Edition, EE)

First: yum install -y yum-utils

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum -y install docker-ce-17.12.1.ce

4. That installs the version pinned above.

service docker start (start, or restart, the Docker daemon)

systemctl enable docker.service (start Docker automatically on boot)

systemctl status docker.service (check its status)

5. ES7 is supported, so pull the Elasticsearch 7 image:

docker pull elasticsearch:7.3.2

6. First create the /mydata directory, then add the user:

useradd elasticsearch -d /mydata/elasticsearch

passwd elasticsearch (set the password to nbacheng@es123)

id elasticsearch (check whether the uid is 1000; per the official docs it should be 1000)

If it is not, change it with: usermod -u 1000 elasticsearch

Changing the elasticsearch uid by itself was not enough, though; the key is to first start the elasticsearch container once without any mapped directories (next step).

7. Run the container once first, because the relevant directories and files need to be copied out to the host:

docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e TZ=Asia/Shanghai -e "discovery.type=single-node" elasticsearch:7.3.2

8. docker exec -it elasticsearch bash (open a shell inside the container to look around)

Sometimes the command above fails; the only fix so far is to restart (or docker start) the container first.

docker exec -it elasticsearch ls -l /usr/share/elasticsearch/config

(lists the files in that directory inside the container)

The files from this first clean run must be copied into the corresponding host directories (run these from /mydata/elasticsearch):

docker cp elasticsearch:/usr/share/elasticsearch/config/ . (copies the directory and its files into the host directory)
docker cp elasticsearch:/usr/share/elasticsearch/logs/ .
docker cp elasticsearch:/usr/share/elasticsearch/data/ .
chown -R elasticsearch:elasticsearch *

Watch out for file ownership and permissions here.

9. docker stop elasticsearch, then delete the container (docker rm elasticsearch) so the name can be reused.

10. Start it again, this time with the host directories mounted:

docker run -d --name elasticsearch -v /mydata/elasticsearch/data/:/usr/share/elasticsearch/data -v /mydata/elasticsearch/logs/:/usr/share/elasticsearch/logs -v /mydata/elasticsearch/config/:/usr/share/elasticsearch/config -e TZ=Asia/Shanghai -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.3.2

11. Check that it is healthy:

http://192.168.33.113:9200/

A JSON banner from that URL means Elasticsearch is up.

12. Install a visual management tool for Elasticsearch (elasticsearch-head):

docker pull mobz/elasticsearch-head:5

docker run -d --name elasticsearchhead -p 9100:9100 -e TZ=Asia/Shanghai mobz/elasticsearch-head:5

The following needs to be added to elasticsearch.yml:

http.cors.enabled: true
http.cors.allow-origin: "*"

Then restart the elasticsearch container:

docker restart elasticsearch

Without those settings, head cannot connect to the cluster. Access it at:

http://192.168.33.113:9100/

At first, queries and data browsing returned nothing; one file needs to be modified as follows:

docker cp elasticsearchhead:/usr/src/app/_site/vendor.js .

vi vendor.js

There are two places to change:

1) Line 6886:

contentType: "application/x-www-form-urlencoded"

change it to

contentType: "application/json;charset=UTF-8"

2) Line 7573:

var inspectData = s.contentType === "application/x-www-form-urlencoded" &&

change it to

var inspectData = s.contentType === "application/json;charset=UTF-8" &&

Additional notes:

To show line numbers in vi:

:set nu

To jump to a given line in vi:

:<line number>
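The two vi edits above can also be scripted. The snippet below is a minimal, self-contained demo of the substitution on stand-ins for the two affected lines; in practice, run the same sed against the real vendor.js copied out with docker cp.

```shell
# Create a tiny stand-in for the two affected vendor.js lines (demo only)
cat > vendor.js <<'EOF'
contentType: "application/x-www-form-urlencoded",
var inspectData = s.contentType === "application/x-www-form-urlencoded" &&
EOF

# The same replacement done in vi above, applied to both occurrences at once
sed -i 's#application/x-www-form-urlencoded#application/json;charset=UTF-8#g' vendor.js

grep -c 'application/json;charset=UTF-8' vendor.js   # prints 2
```

After patching the real file, copy it back with docker cp vendor.js elasticsearchhead:/usr/src/app/_site/vendor.js and run docker restart elasticsearchhead.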

13. Install and start skywalking-oap-server:

docker pull apache/skywalking-oap-server:6.6.0-es7
docker run --name oap -d -p 11800:11800 -p 12800:12800 --link elasticsearch:elasticsearch -e SW_STORAGE=elasticsearch -e TZ=Asia/Shanghai -e SW_STORAGE_ES_CLUSTER_NODES=elasticsearch:9200 apache/skywalking-oap-server:6.6.0-es7

Setting the timezone via TZ this way does not seem to take effect, so try mounting /etc/localtime instead:

docker run --name oap -d -p 11800:11800 -p 12800:12800 -v /etc/localtime:/etc/localtime --link elasticsearch:elasticsearch -e SW_STORAGE=elasticsearch -e SW_STORAGE_ES_CLUSTER_NODES=elasticsearch:9200 apache/skywalking-oap-server:6.6.0-es7

This makes the container timezone correct, but the timestamps in docker logs still look wrong.

The remaining option is to edit docker-entrypoint.sh inside the container and add the timezone to the java launch command, i.e. add -Duser.timezone=Asia/Shanghai:

# add by lqy -Duser.timezone=Asia/Shanghai
exec java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Duser.timezone=Asia/Shanghai \
${JAVA_OPTS} -classpath ${CLASSPATH} org.apache.skywalking.oap.server.starter.OAPServerStartUp "$@"

Double-check the timezone after the container starts.

The first start of the OAP server creates a large number of storage indices, so it takes a while; it is best to wait for it to finish before moving on.

14. Install and start skywalking-ui:

docker pull apache/skywalking-ui:6.6.0
docker run --name oap-ui -d -p 8088:8080 --link oap:oap -e TZ=Asia/Shanghai -e SW_OAP_ADDRESS=oap:12800 apache/skywalking-ui:6.6.0 --security.user.admin.password=swmonitor@admin

It seems security authentication was removed after 6.2, so that password flag has no effect. Setting the timezone with TZ again did not work here either:

docker run --name oap-ui -d -p 8088:8080 --link oap:oap -v /etc/localtime:/etc/localtime -e SW_OAP_ADDRESS=oap:12800 apache/skywalking-ui:6.6.0

This makes the container timezone correct, but the docker logs timestamps still look wrong.

As with the OAP server, the only fix is to edit docker-entrypoint.sh inside the container and add -Duser.timezone=Asia/Shanghai to the java launch command.

Check the timezone again after starting.

Overwrite the webapp jar with my own build:

docker cp skywalking-webapp.jar oap-ui:/skywalking/webapp/

Adding the following to webapp.yml under /skywalking/webapp also had no effect:

security:
  user:
    admin:
      password: swmonitor@admin

du -sh * (check the size of logs and data under the elasticsearch directory from time to time)

docker logs -f --tail=1000 elasticsearch (tail the most recent log output)

Attaching the agent on Linux: Tomcat 7, Tomcat 8

Edit tomcat/bin/catalina.sh and add the following near the top:

CATALINA_OPTS="$CATALINA_OPTS -javaagent:/mydata/appuser/tools/agent/skywalking-agent.jar=agent.service_name=tomcat-5110,collector.backend_service=192.168.33.113:11800"; export CATALINA_OPTS
For a standalone jar, start it with the agent on the java command line:
nohup java -Xms128m -Xmx256m -javaagent:/mydata/appuser/tools/agent/skywalking-agent.jar=agent.service_name=hospital-appService,collector.backend_service=192.168.33.113:11800 -Duser.timezone=Asia/Shanghai -jar hospital-appService.jar --env=prodext > /dev/null 2>&1 &
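The same two agent options can also live in agent/config/agent.config instead of being packed onto the -javaagent= line; the property keys are the ones already used inline above (values from this install, shown here as a sketch):

```
# agent/config/agent.config (same keys as the inline form above)
agent.service_name=hospital-appService
collector.backend_service=192.168.33.113:11800
```

Options passed inline on -javaagent= take priority over agent.config, which is convenient when several services share one agent directory.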
For spring-data-redis:2.2.1 the default client is Lettuce (version 5.2.1). I found that Redis tracing doesn't work in SkyWalking 6.6.0, but when I manually exclude Lettuce and replace it with Jedis, it works normally.
Because of JRE 1.8 requirements, it is an optional plugin only. https://github.com/apache/skywalking/blob/6.x/docs/en/setup/service-agent/java-agent/README.md#optional-plugins

In other words, with spring-data-redis:2.2.1 the Lettuce 5.2.1 plugin ships as an optional plugin in 6.6.0; move it from the agent's optional-plugins directory into the plugins directory for tracing to work.

A later check showed that the Docker daemon logs go to the messages file under /var/log.

The time range for the charts on the UI home page can be selected at the bottom right; sometimes you need to scroll down to see it.

After running for a while, docker logs -f --tail=1000 oap showed the error below: the Elasticsearch shard count had hit the default limit of 1000 per node, so nothing more could be written. Raising the limit to 10000 as follows fixed it immediately:

curl -X PUT -H "Content-Type:application/json" -d '{"transient":{"cluster":{"max_shards_per_node":10000}}}' 'http://192.168.33.113:9200/_cluster/settings'

curl -X GET 'http://192.168.33.113:9200/_nodes/stats'

(fetches the node statistics for reference)

After running a while longer, the following problem appeared:

Memory pressure; first raise both heap settings in the jvm.options config file in the elasticsearch container to 4 GB.
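For reference, the heap change amounts to editing these two lines in /mydata/elasticsearch/config/jvm.options (Xms and Xmx should match; 4g per the note above, sized to the host):

```
# JVM heap for the elasticsearch container
-Xms4g
-Xmx4g
```

Restart the elasticsearch container afterwards for the new heap to take effect.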

2020-03-03 09:00:52,542 - org.apache.skywalking.oap.server.core.analysis.worker.MetricsPersistentWorker -168865433 [pool-12-thread-1] ERROR [] - Elasticsearch exception [type=circuit_breaking_exception, reason=[parent] Data too large, data for [] would be [1056273900/1007.3mb], which is larger than the limit of [1003493785/957mb], real usage: [1056273728/1007.3mb], new bytes reserved: [172/172b], usages [request=0/0b, fielddata=82095/80.1kb, in_flight_requests=172/172b, accounting=12929020/12.3mb]]
org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=circuit_breaking_exception, reason=[parent] Data too large, data for [] would be [1056273900/1007.3mb], which is larger than the limit of [1003493785/957mb], real usage: [1056273728/1007.3mb], new bytes reserved: [172/172b], usages [request=0/0b, fielddata=82095/80.1kb, in_flight_requests=172/172b, accounting=12929020/12.3mb]]
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177) ~[elasticsearch-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1706) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1683) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1446) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1403) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1373) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:915) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client.ids(ElasticSearch7Client.java:197) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.dao.MetricsEs7DAO.multiGet(MetricsEs7DAO.java:45) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.analysis.worker.MetricsPersistentWorker.syncStorageToCache(MetricsPersistentWorker.java:185) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.analysis.worker.MetricsPersistentWorker.prepareBatch(MetricsPersistentWorker.java:121) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.analysis.worker.PersistenceWorker.buildBatchRequests(PersistenceWorker.java:76) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.lambda$extractDataAndSave$2(PersistenceTimer.java:95) ~[server-core-6.6.0.jar:6.6.0]
at java.util.ArrayList.forEach(ArrayList.java:1257) ~[?:1.8.0_212]
at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.extractDataAndSave(PersistenceTimer.java:89) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.lambda$start$0(PersistenceTimer.java:67) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.util.RunnableWithExceptionProtection.run(RunnableWithExceptionProtection.java:36) [apm-util-6.6.0.jar:6.6.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_212]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
Suppressed: org.elasticsearch.client.ResponseException: method [POST], host [http://elasticsearch:9200], URI [/service_instance_cpm/_search?typed_keys=true&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrips=true], status line [HTTP/1.1 429 Too Many Requests]
{"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [] would be [1056273900/1007.3mb], which is larger than the limit of [1003493785/957mb], real usage: [1056273728/1007.3mb], new bytes reserved: [172/172b], usages [request=0/0b, fielddata=82095/80.1kb, in_flight_requests=172/172b, accounting=12929020/12.3mb]","bytes_wanted":1056273900,"bytes_limit":1003493785,"durability":"PERMANENT"}],"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [] would be [1056273900/1007.3mb], which is larger than the limit of [1003493785/957mb], real usage: [1056273728/1007.3mb], new bytes reserved: [172/172b], usages [request=0/0b, fielddata=82095/80.1kb, in_flight_requests=172/172b, accounting=12929020/12.3mb]","bytes_wanted":1056273900,"bytes_limit":1003493785,"durability":"PERMANENT"},"status":429}
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:260) ~[elasticsearch-rest-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:238) ~[elasticsearch-rest-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:212) ~[elasticsearch-rest-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1433) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1403) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1373) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:915) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client.ids(ElasticSearch7Client.java:197) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.dao.MetricsEs7DAO.multiGet(MetricsEs7DAO.java:45) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.analysis.worker.MetricsPersistentWorker.syncStorageToCache(MetricsPersistentWorker.java:185) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.analysis.worker.MetricsPersistentWorker.prepareBatch(MetricsPersistentWorker.java:121) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.analysis.worker.PersistenceWorker.buildBatchRequests(PersistenceWorker.java:76) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.lambda$extractDataAndSave$2(PersistenceTimer.java:95) ~[server-core-6.6.0.jar:6.6.0]
at java.util.ArrayList.forEach(ArrayList.java:1257) ~[?:1.8.0_212]
at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.extractDataAndSave(PersistenceTimer.java:89) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.lambda$start$0(PersistenceTimer.java:67) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.util.RunnableWithExceptionProtection.run(RunnableWithExceptionProtection.java:36) [apm-util-6.6.0.jar:6.6.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_212]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
The shard-limit error mentioned above (fixed by raising max_shards_per_node to 10000) looked like this in the oap logs:
org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=validation_exception, reason=Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;]
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177) ~[elasticsearch-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1706) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1683) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1446) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1418) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1385) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.IndicesClient.create(IndicesClient.java:125) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client.createIndex(ElasticSearch7Client.java:95) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch.base.HistoryDeleteEsDAO.deleteHistory(HistoryDeleteEsDAO.java:75) ~[storage-elasticsearch-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.storage.ttl.DataTTLKeeperTimer.execute(DataTTLKeeperTimer.java:81) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.storage.ttl.DataTTLKeeperTimer.lambda$delete$1(DataTTLKeeperTimer.java:74) ~[server-core-6.6.0.jar:6.6.0]
at java.lang.Iterable.forEach(Iterable.java:75) ~[?:1.8.0_212]
at org.apache.skywalking.oap.server.core.storage.ttl.DataTTLKeeperTimer.delete(DataTTLKeeperTimer.java:72) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.util.RunnableWithExceptionProtection.run(RunnableWithExceptionProtection.java:36) [apm-util-6.6.0.jar:6.6.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_212]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
Suppressed: org.elasticsearch.client.ResponseException: method [PUT], host [http://elasticsearch:9200], URI [/alarm_record-20200229?master_timeout=30s&timeout=30s], status line [HTTP/1.1 400 Bad Request]
2020-02-29 14:21:45,948 - org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client -973210 [pool-12-thread-1] INFO [] - Synchronous bulk took time: 10147 millis, size: 1189
2020-02-29 14:21:47,183 - org.apache.skywalking.oap.server.library.client.elasticsearch.ElasticSearchClient -974445 [I/O dispatcher 1] WARN [] - Bulk [89] executed with failures
The cause of the problem below: an es_rejected_execution_exception usually occurs when the Elasticsearch cluster receives more requests than it can accept. Each node has a thread-pool queue that holds 50 to 200 requests, depending on the Elasticsearch version; once the queue is full, new requests are rejected.
2020-03-05 09:09:58,486 - org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client -167328843 [pool-12-thread-1] INFO [] - Synchronous bulk took time: 10077 millis, size: 1037
2020-03-05 09:10:01,758 - org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker -167332115 [DataCarrier.REGISTER_L2.BulkConsumePool.0.Thread] ERROR [] - Elasticsearch exception [type=es_rejected_execution_exception, reason=rejected execution of processing of [55728585][indices:data/write/update[s]]: update {[service_instance_inventory][_doc][9_a3e9d2606c1d4cb587cdc8e94af6c9e9_0_0], doc_as_upsert[false], doc[index {[null][_doc][null], source[{"sequence":9,"last_update_time":0,"heartbeat_time":1583370601542,"node_type":0,"service_id":9,"address_id":0,"name":"zxgbmp-centerserver-pid:28356@zxg","is_address":0,"instance_uuid":"a3e9d2606c1d4cb587cdc8e94af6c9e9","register_time":1581405898590,"properties":"{\"os_name\":\"Linux\",\"host_name\":\"zxg\",\"process_no\":\"28356\",\"language\":\"java\",\"ipv4s\":\"[\\\"172.17.0.1\\\",\\\"192.168.33.112\\\"]\"}","mapping_service_instance_id":0}]}], scripted_upsert[false], detect_noop[true]} on EsThreadPoolExecutor[name = 828ebbd870e6/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@6fba3a0b[Running, pool size = 2, active threads = 2, queued tasks = 199, completed tasks = 4622331]]]
org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=es_rejected_execution_exception, reason=rejected execution of processing of [55728585][indices:data/write/update[s]]: update {[service_instance_inventory][_doc][9_a3e9d2606c1d4cb587cdc8e94af6c9e9_0_0], doc_as_upsert[false], doc[index {[null][_doc][null], source[{"sequence":9,"last_update_time":0,"heartbeat_time":1583370601542,"node_type":0,"service_id":9,"address_id":0,"name":"zxgbmp-centerserver-pid:28356@zxg","is_address":0,"instance_uuid":"a3e9d2606c1d4cb587cdc8e94af6c9e9","register_time":1581405898590,"properties":"{\"os_name\":\"Linux\",\"host_name\":\"zxg\",\"process_no\":\"28356\",\"language\":\"java\",\"ipv4s\":\"[\\\"172.17.0.1\\\",\\\"192.168.33.112\\\"]\"}","mapping_service_instance_id":0}]}], scripted_upsert[false], detect_noop[true]} on EsThreadPoolExecutor[name = 828ebbd870e6/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@6fba3a0b[Running, pool size = 2, active threads = 2, queued tasks = 199, completed tasks = 4622331]]]
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177) ~[elasticsearch-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1706) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1683) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1446) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1403) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1373) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.update(RestHighLevelClient.java:868) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client.forceUpdate(ElasticSearch7Client.java:221) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch.base.RegisterEsDAO.forceUpdate(RegisterEsDAO.java:56) ~[storage-elasticsearch-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker.lambda$onWork$0(RegisterPersistentWorker.java:100) ~[server-core-6.6.0.jar:6.6.0]
at java.util.HashMap$Values.forEach(HashMap.java:981) [?:1.8.0_212]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker.onWork(RegisterPersistentWorker.java:95) [server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker.access$100(RegisterPersistentWorker.java:40) [server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker$PersistentConsumer.consume(RegisterPersistentWorker.java:153) [server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.commons.datacarrier.consumer.MultipleChannelsConsumer.consume(MultipleChannelsConsumer.java:81) [apm-datacarrier-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.commons.datacarrier.consumer.MultipleChannelsConsumer.run(MultipleChannelsConsumer.java:52) [apm-datacarrier-6.6.0.jar:6.6.0]
Suppressed: org.elasticsearch.client.ResponseException: method [POST], host [http://elasticsearch:9200], URI [/service_instance_inventory/_update/9_a3e9d2606c1d4cb587cdc8e94af6c9e9_0_0?refresh=true&timeout=1m], status line [HTTP/1.1 429 Too Many Requests]
{"error":{"root_cause":[{"type":"remote_transport_exception","reason":"[828ebbd870e6][172.17.0.2:9300][indices:data/write/update[s]]"}],"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [55728585][indices:data/write/update[s]]: update {[service_instance_inventory][_doc][9_a3e9d2606c1d4cb587cdc8e94af6c9e9_0_0], doc_as_upsert[false], doc[index {[null][_doc][null], source[{\"sequence\":9,\"last_update_time\":0,\"heartbeat_time\":1583370601542,\"node_type\":0,\"service_id\":9,\"address_id\":0,\"name\":\"zxgbmp-centerserver-pid:28356@zxg\",\"is_address\":0,\"instance_uuid\":\"a3e9d2606c1d4cb587cdc8e94af6c9e9\",\"register_time\":1581405898590,\"properties\":\"{\\\"os_name\\\":\\\"Linux\\\",\\\"host_name\\\":\\\"zxg\\\",\\\"process_no\\\":\\\"28356\\\",\\\"language\\\":\\\"java\\\",\\\"ipv4s\\\":\\\"[\\\\\\\"172.17.0.1\\\\\\\",\\\\\\\"192.168.33.112\\\\\\\"]\\\"}\",\"mapping_service_instance_id\":0}]}], scripted_upsert[false], detect_noop[true]} on EsThreadPoolExecutor[name = 828ebbd870e6/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@6fba3a0b[Running, pool size = 2, active threads = 2, queued tasks = 199, completed tasks = 4622331]]"},"status":429}
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:260) ~[elasticsearch-rest-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:238) ~[elasticsearch-rest-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:212) ~[elasticsearch-rest-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1433) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1403) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1373) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.update(RestHighLevelClient.java:868) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client.forceUpdate(ElasticSearch7Client.java:221) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch.base.RegisterEsDAO.forceUpdate(RegisterEsDAO.java:56) ~[storage-elasticsearch-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker.lambda$onWork$0(RegisterPersistentWorker.java:100) ~[server-core-6.6.0.jar:6.6.0]
at java.util.HashMap$Values.forEach(HashMap.java:981) [?:1.8.0_212]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker.onWork(RegisterPersistentWorker.java:95) [server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker.access$100(RegisterPersistentWorker.java:40) [server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker$PersistentConsumer.consume(RegisterPersistentWorker.java:153) [server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.commons.datacarrier.consumer.MultipleChannelsConsumer.consume(MultipleChannelsConsumer.java:81) [apm-datacarrier-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.commons.datacarrier.consumer.MultipleChannelsConsumer.run(MultipleChannelsConsumer.java:52) [apm-datacarrier-6.6.0.jar:6.6.0]
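Both failure modes above (too many shards, rejected bulk writes) can also be mitigated from the OAP side. As far as I recall, the elasticsearch7 storage section of SkyWalking 6.x's application.yml exposes the following knobs; the key names and the SW_* environment variables should be verified against your build, and the values here are a sketch for a single-node cluster, not measured settings:

```yaml
storage:
  elasticsearch7:
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:elasticsearch:9200}
    # fewer shards and no replicas keep a single-node cluster under the shard limit
    indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1}
    indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:0}
    # smaller, less aggressive bulks reduce write-queue rejections
    bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:1000}
    flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:10}
    concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2}
```

Each ${SW_...} entry can also be set as an environment variable on the docker run line, the same way SW_STORAGE_ES_CLUSTER_NODES is set above.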
curl -X GET 'http://192.168.33.113:9200/_nodes/stats'
(which returned the following node statistics:)
{
"_nodes": {
"total": 1,
"successful": 1,
"failed": 0
},
"cluster_name": "docker-cluster",
"nodes": {
"N3fwuKinRiWTz8LpypmdTw": {
"timestamp": 1583202972869,
"name": "828ebbd870e6",
"transport_address": "172.17.0.2:9300",
"host": "172.17.0.2",
"ip": "172.17.0.2:9300",
"roles": ["ingest",
"master",
"data"],
"attributes": {
"ml.machine_memory": "8201801728",
"xpack.installed": "true",
"ml.max_open_jobs": "20"
},
"indices": {
"docs": {
"count": 1056,
"deleted": 127
},
"store": {
"size_in_bytes": 1286499
},
"indexing": {
"index_total": 0,
"index_time_in_millis": 0,
"index_current": 0,
"index_failed": 0,
"delete_total": 0,
"delete_time_in_millis": 0,
"delete_current": 0,
"noop_update_total": 0,
"is_throttled": false,
"throttle_time_in_millis": 0
},
"get": {
"total": 0,
"time_in_millis": 0,
"exists_total": 0,
"exists_time_in_millis": 0,
"missing_total": 0,
"missing_time_in_millis": 0,
"current": 0
},
"search": {
"open_contexts": 0,
"query_total": 0,
"query_time_in_millis": 0,
"query_current": 0,
"fetch_total": 0,
"fetch_time_in_millis": 0,
"fetch_current": 0,
"scroll_total": 0,
"scroll_time_in_millis": 0,
"scroll_current": 0,
"suggest_total": 0,
"suggest_time_in_millis": 0,
"suggest_current": 0
},
"merges": {
"current": 0,
"current_docs": 0,
"current_size_in_bytes": 0,
"total": 0,
"total_time_in_millis": 0,
"total_docs": 0,
"total_size_in_bytes": 0,
"total_stopped_time_in_millis": 0,
"total_throttled_time_in_millis": 0,
"total_auto_throttle_in_bytes": 1468006400
},
"refresh": {
"total": 140,
"total_time_in_millis": 0,
"external_total": 140,
"external_total_time_in_millis": 0,
"listeners": 0
},
"flush": {
"total": 0,
"periodic": 0,
"total_time_in_millis": 0
},
"warmer": {
"current": 0,
"total": 70,
"total_time_in_millis": 0
},
"query_cache": {
"memory_size_in_bytes": 0,
"total_count": 0,
"hit_count": 0,
"miss_count": 0,
"cache_size": 0,
"cache_count": 0,
"evictions": 0
},
"fielddata": {
"memory_size_in_bytes": 0,
"evictions": 0
},
"completion": {
"size_in_bytes": 0
},
"segments": {
"count": 241,
"memory_in_bytes": 295014,
"terms_memory_in_bytes": 168784,
"stored_fields_memory_in_bytes": 75192,
"term_vectors_memory_in_bytes": 0,
"norms_memory_in_bytes": 8576,
"points_memory_in_bytes": 2730,
"doc_values_memory_in_bytes": 39732,
"index_writer_memory_in_bytes": 0,
"version_map_memory_in_bytes": 0,
"fixed_bit_set_memory_in_bytes": 0,
"max_unsafe_auto_id_timestamp": -1,
"file_sizes": {
}
},
"translog": {
"operations": 8097,
"size_in_bytes": 1772793,
"uncommitted_operations": 0,
"uncommitted_size_in_bytes": 12870,
"earliest_last_modified_age": 0
},
"request_cache": {
"memory_size_in_bytes": 0,
"evictions": 0,
"hit_count": 0,
"miss_count": 0
},
"recovery": {
"current_as_source": 0,
"current_as_target": 0,
"throttle_time_in_millis": 0
}
},
"os": {
"timestamp": 1583202973591,
"cpu": {
"percent": 89,
"load_average": {
"1m": 3.53,
"5m": 1.06,
"15m": 0.66
}
},
"mem": {
"total_in_bytes": 8201801728,
"free_in_bytes": 153522176,
"used_in_bytes": 8048279552,
"free_percent": 2,
"used_percent": 98
},
"swap": {
"total_in_bytes": 8455712768,
"free_in_bytes": 8363167744,
"used_in_bytes": 92545024
},
"cgroup": {
"cpuacct": {
"control_group": "/",
"usage_nanos": 75123132920
},
"cpu": {
"control_group": "/",
"cfs_period_micros": 100000,
"cfs_quota_micros": -1,
"stat": {
"number_of_elapsed_periods": 0,
"number_of_times_throttled": 0,
"time_throttled_nanos": 0
}
},
"memory": {
"control_group": "/",
"limit_in_bytes": "9223372036854771712",
"usage_in_bytes": "4691886080"
}
}
},
"process": {
"timestamp": 1583202973591,
"open_file_descriptors": 2836,
"max_file_descriptors": 1048576,
"cpu": {
"percent": 88,
"total_in_millis": 72080
},
"mem": {
"total_virtual_in_bytes": 7096074240
}
},
"jvm": {
"timestamp": 1583202973596,
"uptime_in_millis": 41945,
"mem": {
"heap_used_in_bytes": 251700616,
"heap_used_percent": 5,
"heap_committed_in_bytes": 4277534720,
"heap_max_in_bytes": 4277534720,
"non_heap_used_in_bytes": 112768520,
"non_heap_committed_in_bytes": 122793984,
"pools": {
"young": {
"used_in_bytes": 69354608,
"max_in_bytes": 139591680,
"peak_used_in_bytes": 139591680,
"peak_max_in_bytes": 139591680
},
"survivor": {
"used_in_bytes": 7775632,
"max_in_bytes": 17432576,
"peak_used_in_bytes": 17432576,
"peak_max_in_bytes": 17432576
},
"old": {
"used_in_bytes": 174570376,
"max_in_bytes": 4120510464,
"peak_used_in_bytes": 174570376,
"peak_max_in_bytes": 4120510464
}
}
},
"threads": {
"count": 34,
"peak_count": 34
},
"gc": {
"collectors": {
"young": {
"collection_count": 126,
"collection_time_in_millis": 4179
},
"old": {
"collection_count": 2,
"collection_time_in_millis": 86
}
}
},
"buffer_pools": {
"mapped": {
"count": 438,
"used_in_bytes": 786685,
"total_capacity_in_bytes": 786685
},
"direct": {
"count": 16,
"used_in_bytes": 125069,
"total_capacity_in_bytes": 125068
}
},
"classes": {
"current_loaded_count": 16369,
"total_loaded_count": 16369,
"total_unloaded_count": 0
}
},
"thread_pool": {
"analyze": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"ccr": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"data_frame_indexing": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"fetch_shard_started": {
"threads": 4,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 4,
"completed": 3141
},
"fetch_shard_store": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"flush": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"force_merge": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"generic": {
"threads": 6,
"queue": 0,
"active": 1,
"rejected": 0,
"largest": 6,
"completed": 332
},
"get": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"listener": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"management": {
"threads": 2,
"queue": 0,
"active": 1,
"rejected": 0,
"largest": 2,
"completed": 42
},
"ml_datafeed": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"ml_job_comms": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"ml_utility": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 1
},
"refresh": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 11
},
"rollup_indexing": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"search": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"search_throttled": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"security-token-key": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"snapshot": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"warmer": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"watcher": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"write": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
}
},
"fs": {
"timestamp": 1583202973597,
"total": {
"total_in_bytes": 97788563456,
"free_in_bytes": 83441401856,
"available_in_bytes": 83441401856
},
"data": [{
"path": "/usr/share/elasticsearch/data/nodes/0",
"mount": "/usr/share/elasticsearch/data (/dev/mapper/centos-root)",
"type": "xfs",
"total_in_bytes": 97788563456,
"free_in_bytes": 83441401856,
"available_in_bytes": 83441401856
}],
"io_stats": {
"devices": [{
"device_name": "dm-0",
"operations": 16593,
"read_operations": 6325,
"write_operations": 10268,
"read_kilobytes": 76888,
"write_kilobytes": 113093
}],
"total": {
"operations": 16593,
"read_operations": 6325,
"write_operations": 10268,
"read_kilobytes": 76888,
"write_kilobytes": 113093
}
}
},
"transport": {
"server_open": 0,
"rx_count": 0,
"rx_size_in_bytes": 0,
"tx_count": 0,
"tx_size_in_bytes": 0
},
"http": {
"current_open": 1,
"total_opened": 1
},
"breakers": {
"request": {
"limit_size_in_bytes": 2566520832,
"limit_size": "2.3gb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.0,
"tripped": 0
},
"fielddata": {
"limit_size_in_bytes": 1711013888,
"limit_size": "1.5gb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.03,
"tripped": 0
},
"in_flight_requests": {
"limit_size_in_bytes": 4277534720,
"limit_size": "3.9gb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 2.0,
"tripped": 0
},
"accounting": {
"limit_size_in_bytes": 4277534720,
"limit_size": "3.9gb",
"estimated_size_in_bytes": 300725,
"estimated_size": "293.6kb",
"overhead": 1.0,
"tripped": 0
},
"parent": {
"limit_size_in_bytes": 4063657984,
"limit_size": "3.7gb",
"estimated_size_in_bytes": 251999928,
"estimated_size": "240.3mb",
"overhead": 1.0,
"tripped": 0
}
},
"script": {
"compilations": 1,
"cache_evictions": 0,
"compilation_limit_triggered": 0
},
"discovery": {
"cluster_state_queue": {
"total": 0,
"pending": 0,
"committed": 0
},
"published_cluster_states": {
"full_states": 2,
"incompatible_diffs": 0,
"compatible_diffs": 38
}
},
"ingest": {
"total": {
"count": 0,
"time_in_millis": 0,
"current": 0,
"failed": 0
},
"pipelines": {
"xpack_monitoring_6": {
"count": 0,
"time_in_millis": 0,
"current": 0,
"failed": 0,
"processors": [{
"script": {
"count": 0,
"time_in_millis": 0,
"current": 0,
"failed": 0
}
},
{
"gsub": {
"count": 0,
"time_in_millis": 0,
"current": 0,
"failed": 0
}
}]
},
"xpack_monitoring_7": {
"count": 0,
"time_in_millis": 0,
"current": 0,
"failed": 0,
"processors": []
}
}
},
"adaptive_selection": {
}
}
}
}
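Output like the above comes from Elasticsearch's node stats API (`GET http://<host>:9200/_nodes/stats`). Note that this sample shows the test box under pressure: `os.mem.used_percent` is 98 and `os.cpu.percent` is 89. A minimal Python sketch for pulling those two fields out of the stats JSON and flagging high readings; the helper name and threshold values are illustrative assumptions, not part of the original article:

```python
def check_node_health(stats: dict, mem_limit: int = 90, cpu_limit: int = 85) -> list:
    """Return warning strings for OS memory/CPU readings at or above the limits.

    `stats` is the per-node body of a /_nodes/stats response (the object that
    contains the "os", "jvm", "fs", ... sections shown above).
    """
    warnings = []
    os_stats = stats["os"]
    if os_stats["mem"]["used_percent"] >= mem_limit:
        warnings.append(f"memory used {os_stats['mem']['used_percent']}%")
    if os_stats["cpu"]["percent"] >= cpu_limit:
        warnings.append(f"cpu {os_stats['cpu']['percent']}%")
    return warnings

# Values taken from the sample output above (os.mem.used_percent=98, os.cpu.percent=89)
sample = {"os": {"mem": {"used_percent": 98}, "cpu": {"percent": 89}}}
print(check_node_health(sample))  # → ['memory used 98%', 'cpu 89%']
```

With only 8 GB of RAM on this host, a reading like this suggests lowering the container's JVM heap (e.g. via `ES_JAVA_OPTS`) before adding SkyWalking load.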