SkyWalking internal test server installation notes


1. yum install java-1.8.0-openjdk* -y — install the Java 1.8 packages

2. yum install net-tools — this was already installed on this machine

3. Install the yum utilities and docker-ce (early Docker releases were later split into the free CE community edition and the paid EE enterprise edition)

First run: yum install -y yum-utils

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum -y install docker-ce-17.12.1.ce

4. That installs the version above. Start and enable the service:

service docker start — start (or restart) Docker

systemctl enable docker.service — start the Docker service automatically at boot

systemctl status docker.service — check its status

5. ES7 is supported, so pull an ES7 image:

docker pull elasticsearch:7.3.2

6. Create the /mydata directory first, and add a dedicated user:

useradd elasticsearch -d /mydata/elasticsearch

passwd elasticsearch — set the password to nbacheng@es123

id elasticsearch — check whether the uid is 1000; per the official docs it is best if it is 1000

If it is not, consider changing it: usermod -u 1000 elasticsearch

Changing the elasticsearch uid by itself was not enough, though; the key is to first start the dockerized elasticsearch once without the mapped directories.
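A small check makes the uid requirement explicit — a sketch; the user name and the target uid of 1000 come from the steps above, and nothing is modified here:

```shell
# Report whether a user's uid matches what the bind-mounted
# Elasticsearch data directories expect (uid 1000 per the ES docs).
uid_matches() {
  local user="$1" want="$2"
  [ "$(id -u "$user" 2>/dev/null)" = "$want" ]
}

if uid_matches elasticsearch 1000; then
  echo "uid ok"
else
  echo "uid mismatch - consider: usermod -u 1000 elasticsearch"
fi
```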

7. Run it once first, because the relevant directories and files need to be copied out to the host:

docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e TZ=Asia/Shanghai -e "discovery.type=single-node" elasticsearch:7.3.2

8. docker exec -it elasticsearch bash — open a shell inside the container to inspect it

Sometimes that command fails; for now the only workaround is to restart (or start) the container.

docker exec -it elasticsearch ls -l /usr/share/elasticsearch/config

lists the config directory inside the container

The following files must be copied out of a clean, first-run container into the corresponding host directories:

docker cp elasticsearch:/usr/share/elasticsearch/config/ . — copy the directory and its files into the host directory
docker cp elasticsearch:/usr/share/elasticsearch/logs/ .
docker cp elasticsearch:/usr/share/elasticsearch/data/ .
chown -R elasticsearch:elasticsearch *

Watch out for file permissions here.

9. docker stop elasticsearch, then remove the container with docker rm elasticsearch

10. Run it again with the host directories mounted:

docker run -d --name elasticsearch -v /mydata/elasticsearch/data/:/usr/share/elasticsearch/data -v /mydata/elasticsearch/logs/:/usr/share/elasticsearch/logs -v /mydata/elasticsearch/config/:/usr/share/elasticsearch/config -e TZ=Asia/Shanghai -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.3.2

11. Check that it is working:

http://192.168.33.113:9200/

If that URL returns a response, it is working normally.
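The check can be scripted — a minimal sketch; the IP and port are this install's values, and the grep just looks for the cluster_name field that the standard Elasticsearch banner response carries:

```shell
# Probe Elasticsearch and decide from the response body whether it is up.
es_ok() {
  # the banner JSON always carries a cluster_name field
  echo "$1" | grep -q '"cluster_name"'
}

resp=$(curl -s --max-time 5 http://192.168.33.113:9200/ || true)
if es_ok "$resp"; then
  echo "elasticsearch is up"
else
  echo "no valid response yet"
fi
```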

12. Install a visual management tool for elasticsearch:

docker pull mobz/elasticsearch-head:5

docker run -d --name elasticsearchhead -p 9100:9100 -e TZ=Asia/Shanghai mobz/elasticsearch-head:5

The following needs to be added to elasticsearch.yml:

http.cors.enabled: true
http.cors.allow-origin: "*"

Then restart the elasticsearch container:

docker restart elasticsearch

Otherwise head seems unable to connect. Access it at the address below:

http://192.168.33.113:9100/

At first, queries and data browsing both came back empty; one file needs to be modified as follows:

docker cp elasticsearchhead:/usr/src/app/_site/vendor.js .

vi vendor.js

There are two places to change:

1) Line 6886:

contentType: "application/x-www-form-urlencoded"

change to

contentType: "application/json;charset=UTF-8"

2) Line 7573:

var inspectData = s.contentType === "application/x-www-form-urlencoded" &&

change to

var inspectData = s.contentType === "application/json;charset=UTF-8" &&

Additional notes:

The vi command to show line numbers is

:set nu

The vi command to jump to a given line is

:<line number>
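Since head builds vary, the exact line numbers may drift; the two edits above can instead be applied by matching the strings — a sketch:

```shell
# Switch elasticsearch-head's request content type from form-urlencoded
# to JSON in vendor.js (covers both occurrences in one pass).
patch_vendor_js() {
  sed -i 's|application/x-www-form-urlencoded|application/json;charset=UTF-8|g' "$1"
}

# usage (container name and path as used above):
#   docker cp elasticsearchhead:/usr/src/app/_site/vendor.js .
#   patch_vendor_js vendor.js
#   docker cp vendor.js elasticsearchhead:/usr/src/app/_site/vendor.js
#   docker restart elasticsearchhead
```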

13. Install and start skywalking-oap-server

docker pull apache/skywalking-oap-server:6.6.0-es7
docker run --name oap -d -p 11800:11800 -p 12800:12800 --link elasticsearch:elasticsearch -e SW_STORAGE=elasticsearch -e TZ=Asia/Shanghai -e SW_STORAGE_ES_CLUSTER_NODES=elasticsearch:9200 apache/skywalking-oap-server:6.6.0-es7

Setting the time zone this way did not seem to work.

docker run --name oap -d -p 11800:11800 -p 12800:12800 -v /etc/localtime:/etc/localtime --link elasticsearch:elasticsearch -e SW_STORAGE=elasticsearch -e SW_STORAGE_ES_CLUSTER_NODES=elasticsearch:9200 apache/skywalking-oap-server:6.6.0-es7

Mounting /etc/localtime this way did seem to set the time zone, but the timestamps in docker logs still looked wrong.

The only remaining option is to edit the java launch line inside the container's startup script, docker-entrypoint.sh, and add the time zone there, i.e. add -Duser.timezone=Asia/Shanghai:
# add by lqy -Duser.timezone=Asia/Shanghai
exec java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Duser.timezone=Asia/Shanghai \
${JAVA_OPTS} -classpath ${CLASSPATH} org.apache.skywalking.oap.server.starter.OAPServerStartUp "$@"

After the container starts, still double-check the time zone.

The first start of the OAP service has to create a large number of storage indices and related structures, so it takes quite a while; it is best to wait for it to finish before moving on.
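The entrypoint edit can also be scripted — a sketch, assuming the stock script launches the server with a line beginning `exec java ` as shown above (the file path in the usage comment is hypothetical):

```shell
# Insert -Duser.timezone into the java launch line of docker-entrypoint.sh.
add_timezone() {
  sed -i 's|exec java |exec java -Duser.timezone=Asia/Shanghai |' "$1"
}

# usage (assumed location of the entrypoint script inside the image):
#   docker cp oap:/skywalking/docker-entrypoint.sh .
#   add_timezone docker-entrypoint.sh
#   docker cp docker-entrypoint.sh oap:/skywalking/
```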

14. Install and start skywalking-ui

docker pull apache/skywalking-ui:6.6.0
docker run --name oap-ui -d -p 8088:8080 --link oap:oap -e TZ=Asia/Shanghai -e SW_OAP_ADDRESS=oap:12800 apache/skywalking-ui:6.6.0 --security.user.admin.password=swmonitor@admin

It seems security authentication was removed after 6.2. Setting the time zone this way also did not seem to work.

docker run --name oap-ui -d -p 8088:8080 --link oap:oap -v /etc/localtime:/etc/localtime -e SW_OAP_ADDRESS=oap:12800 apache/skywalking-ui:6.6.0

Mounting /etc/localtime did seem to set the time zone, but the docker logs timestamps still looked wrong.

Again the only fix was to edit the java launch code in the container's docker-entrypoint.sh and add -Duser.timezone=Asia/Shanghai there.

After the container starts, still check the time zone.

I overwrote the webapp jar with my own build:

docker cp skywalking-webapp.jar oap-ui:/skywalking/webapp/

Adding the following to webapp.yml under /skywalking/webapp had no effect either:

security:
  user:
    admin:
      password: swmonitor@admin

du -sh * — periodically check the size of logs and data under the elasticsearch directory

docker logs -f --tail=1000 elasticsearch — view the most recent log output

Linux Tomcat 7, Tomcat 8

Edit tomcat/bin/catalina.sh and add the following at the top:

CATALINA_OPTS="$CATALINA_OPTS -javaagent:/mydata/appuser/tools/agent/skywalking-agent.jar=agent.service_name=tomcat-5110,collector.backend_service=192.168.33.113:11800"; export CATALINA_OPTS
For a standalone jar, attach the agent on the java command line:

nohup java -Xms128m -Xmx256m -javaagent:/mydata/appuser/tools/agent/skywalking-agent.jar=agent.service_name=hospital-appService,collector.backend_service=192.168.33.113:11800 -Duser.timezone=Asia/Shanghai -jar hospital-appService.jar --env=prodext > /dev/null 2>&1 &
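Both launch lines build the same -javaagent argument; a tiny helper makes the pattern explicit (the agent path, service name, and backend address are this install's values):

```shell
# Compose the SkyWalking -javaagent argument: agent jar plus
# comma-separated agent options after the '='.
sw_agent_opt() {
  local jar="$1" service="$2" backend="$3"
  echo "-javaagent:${jar}=agent.service_name=${service},collector.backend_service=${backend}"
}

sw_agent_opt /mydata/appuser/tools/agent/skywalking-agent.jar tomcat-5110 192.168.33.113:11800
```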
spring-data-redis:2.2.1 defaults to lettuce (version 5.2.1). I found that redis tracking doesn't work in SkyWalking (version 6.6.0), but when I manually exclude lettuce and replace it with jedis, it works normally.
Because of JRE 1.8 requirements, it is an optional plugin only. https://github.com/apache/skywalking/blob/6.x/docs/en/setup/service-agent/java-agent/README.md#optional-plugins

In other words, with spring-data-redis:2.2.1 the lettuce (5.2.1) plugin is shipped as an optional plugin in 6.6.0: it has to be moved from the agent's optional-plugins directory into the plugins directory before lettuce calls are traced.

Looked it up later: Docker's log output on this host should be in the messages file under /var/log.

On the UI home page, the time range for the charts can be selected at the bottom right; sometimes you have to scroll down to see it.

After running for a while, docker logs -f --tail=1000 oap showed the error below. It looked like elasticsearch had gone past 1000 shards — the default limit appears to be 1000 — so nothing more could be written. A search online turned up the following way to raise the shard limit to 10000, and everything went back to normal as soon as it was applied:

curl -X PUT -H "Content-Type:application/json" -d '{"transient":{"cluster":{"max_shards_per_node":10000}}}' 'http://192.168.33.113:9200/_cluster/settings'

curl -X GET 'http://192.168.33.113:9200/_nodes/stats'

retrieves the relevant node stats
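Before (or after) raising the limit, it helps to see how many shards are actually open — a sketch that pulls active_shards from the standard _cluster/health response; the IP is this install's:

```shell
# Extract the active_shards count from a _cluster/health response body.
active_shards() {
  echo "$1" | grep -o '"active_shards":[0-9]*' | cut -d: -f2
}

health=$(curl -s --max-time 5 http://192.168.33.113:9200/_cluster/health || true)
if [ -n "$health" ]; then
  echo "active shards: $(active_shards "$health")"
else
  echo "cluster not reachable"
fi
```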

After running for a while longer, the following problem appeared.

This one is about memory usage; as a first step, I changed both heap settings in the jvm.options config file inside the elasticsearch container to 4G.

2020-03-03 09:00:52,542 - org.apache.skywalking.oap.server.core.analysis.worker.MetricsPersistentWorker -168865433 [pool-12-thread-1] ERROR [] - Elasticsearch exception [type=circuit_breaking_exception, reason=[parent] Data too large, data for [] would be [1056273900/1007.3mb], which is larger than the limit of [1003493785/957mb], real usage: [1056273728/1007.3mb], new bytes reserved: [172/172b], usages [request=0/0b, fielddata=82095/80.1kb, in_flight_requests=172/172b, accounting=12929020/12.3mb]]
org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=circuit_breaking_exception, reason=[parent] Data too large, data for [] would be [1056273900/1007.3mb], which is larger than the limit of [1003493785/957mb], real usage: [1056273728/1007.3mb], new bytes reserved: [172/172b], usages [request=0/0b, fielddata=82095/80.1kb, in_flight_requests=172/172b, accounting=12929020/12.3mb]]
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177) ~[elasticsearch-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1706) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1683) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1446) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1403) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1373) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:915) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client.ids(ElasticSearch7Client.java:197) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.dao.MetricsEs7DAO.multiGet(MetricsEs7DAO.java:45) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.analysis.worker.MetricsPersistentWorker.syncStorageToCache(MetricsPersistentWorker.java:185) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.analysis.worker.MetricsPersistentWorker.prepareBatch(MetricsPersistentWorker.java:121) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.analysis.worker.PersistenceWorker.buildBatchRequests(PersistenceWorker.java:76) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.lambda$extractDataAndSave$2(PersistenceTimer.java:95) ~[server-core-6.6.0.jar:6.6.0]
at java.util.ArrayList.forEach(ArrayList.java:1257) ~[?:1.8.0_212]
at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.extractDataAndSave(PersistenceTimer.java:89) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.lambda$start$0(PersistenceTimer.java:67) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.util.RunnableWithExceptionProtection.run(RunnableWithExceptionProtection.java:36) [apm-util-6.6.0.jar:6.6.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_212]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
Suppressed: org.elasticsearch.client.ResponseException: method [POST], host [http://elasticsearch:9200], URI [/service_instance_cpm/_search?typed_keys=true&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrips=true], status line [HTTP/1.1 429 Too Many Requests]
{"error":{"root_cause":[{"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [] would be [1056273900/1007.3mb], which is larger than the limit of [1003493785/957mb], real usage: [1056273728/1007.3mb], new bytes reserved: [172/172b], usages [request=0/0b, fielddata=82095/80.1kb, in_flight_requests=172/172b, accounting=12929020/12.3mb]","bytes_wanted":1056273900,"bytes_limit":1003493785,"durability":"PERMANENT"}],"type":"circuit_breaking_exception","reason":"[parent] Data too large, data for [] would be [1056273900/1007.3mb], which is larger than the limit of [1003493785/957mb], real usage: [1056273728/1007.3mb], new bytes reserved: [172/172b], usages [request=0/0b, fielddata=82095/80.1kb, in_flight_requests=172/172b, accounting=12929020/12.3mb]","bytes_wanted":1056273900,"bytes_limit":1003493785,"durability":"PERMANENT"},"status":429}
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:260) ~[elasticsearch-rest-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:238) ~[elasticsearch-rest-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:212) ~[elasticsearch-rest-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1433) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1403) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1373) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:915) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client.ids(ElasticSearch7Client.java:197) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.dao.MetricsEs7DAO.multiGet(MetricsEs7DAO.java:45) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.analysis.worker.MetricsPersistentWorker.syncStorageToCache(MetricsPersistentWorker.java:185) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.analysis.worker.MetricsPersistentWorker.prepareBatch(MetricsPersistentWorker.java:121) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.analysis.worker.PersistenceWorker.buildBatchRequests(PersistenceWorker.java:76) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.lambda$extractDataAndSave$2(PersistenceTimer.java:95) ~[server-core-6.6.0.jar:6.6.0]
at java.util.ArrayList.forEach(ArrayList.java:1257) ~[?:1.8.0_212]
at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.extractDataAndSave(PersistenceTimer.java:89) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.storage.PersistenceTimer.lambda$start$0(PersistenceTimer.java:67) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.util.RunnableWithExceptionProtection.run(RunnableWithExceptionProtection.java:36) [apm-util-6.6.0.jar:6.6.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_212]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
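The jvm.options change mentioned above can be scripted as well — a sketch; 4g matches the value used here, and the file would be copied out of and back into the container with docker cp:

```shell
# Set both JVM heap options in an elasticsearch jvm.options file to 4g.
set_heap() {
  sed -i -e 's|^-Xms.*|-Xms4g|' -e 's|^-Xmx.*|-Xmx4g|' "$1"
}
```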
The shard-limit problem described above showed up in the logs as the exception below; the same max_shards_per_node fix applies.
org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=validation_exception, reason=Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;]
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177) ~[elasticsearch-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1706) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1683) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1446) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1418) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1385) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.IndicesClient.create(IndicesClient.java:125) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client.createIndex(ElasticSearch7Client.java:95) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch.base.HistoryDeleteEsDAO.deleteHistory(HistoryDeleteEsDAO.java:75) ~[storage-elasticsearch-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.storage.ttl.DataTTLKeeperTimer.execute(DataTTLKeeperTimer.java:81) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.storage.ttl.DataTTLKeeperTimer.lambda$delete$1(DataTTLKeeperTimer.java:74) ~[server-core-6.6.0.jar:6.6.0]
at java.lang.Iterable.forEach(Iterable.java:75) ~[?:1.8.0_212]
at org.apache.skywalking.oap.server.core.storage.ttl.DataTTLKeeperTimer.delete(DataTTLKeeperTimer.java:72) ~[server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.util.RunnableWithExceptionProtection.run(RunnableWithExceptionProtection.java:36) [apm-util-6.6.0.jar:6.6.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_212]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
Suppressed: org.elasticsearch.client.ResponseException: method [PUT], host [http://elasticsearch:9200], URI [/alarm_record-20200229?master_timeout=30s&timeout=30s], status line [HTTP/1.1 400 Bad Request]
2020-02-29 14:21:45,948 - org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client -973210 [pool-12-thread-1] INFO [] - Synchronous bulk took time: 10147 millis, size: 1189
2020-02-29 14:21:47,183 - org.apache.skywalking.oap.server.library.client.elasticsearch.ElasticSearchClient -974445 [I/O dispatcher 1] WARN [] - Bulk [89] executed with failures
The cause of the problem below: an es_rejected_execution_exception usually occurs when the Elasticsearch cluster receives more requests than it can accept. Each node has a thread-pool queue that can hold roughly 50 to 200 requests, depending on the Elasticsearch version; when the queue is full, new requests are rejected.
2020-03-05 09:09:58,486 - org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client -167328843 [pool-12-thread-1] INFO [] - Synchronous bulk took time: 10077 millis, size: 1037
2020-03-05 09:10:01,758 - org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker -167332115 [DataCarrier.REGISTER_L2.BulkConsumePool.0.Thread] ERROR [] - Elasticsearch exception [type=es_rejected_execution_exception, reason=rejected execution of processing of [55728585][indices:data/write/update[s]]: update {[service_instance_inventory][_doc][9_a3e9d2606c1d4cb587cdc8e94af6c9e9_0_0], doc_as_upsert[false], doc[index {[null][_doc][null], source[{"sequence":9,"last_update_time":0,"heartbeat_time":1583370601542,"node_type":0,"service_id":9,"address_id":0,"name":"zxgbmp-centerserver-pid:28356@zxg","is_address":0,"instance_uuid":"a3e9d2606c1d4cb587cdc8e94af6c9e9","register_time":1581405898590,"properties":"{\"os_name\":\"Linux\",\"host_name\":\"zxg\",\"process_no\":\"28356\",\"language\":\"java\",\"ipv4s\":\"[\\\"172.17.0.1\\\",\\\"192.168.33.112\\\"]\"}","mapping_service_instance_id":0}]}], scripted_upsert[false], detect_noop[true]} on EsThreadPoolExecutor[name = 828ebbd870e6/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@6fba3a0b[Running, pool size = 2, active threads = 2, queued tasks = 199, completed tasks = 4622331]]]
org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=es_rejected_execution_exception, reason=rejected execution of processing of [55728585][indices:data/write/update[s]]: update {[service_instance_inventory][_doc][9_a3e9d2606c1d4cb587cdc8e94af6c9e9_0_0], doc_as_upsert[false], doc[index {[null][_doc][null], source[{"sequence":9,"last_update_time":0,"heartbeat_time":1583370601542,"node_type":0,"service_id":9,"address_id":0,"name":"zxgbmp-centerserver-pid:28356@zxg","is_address":0,"instance_uuid":"a3e9d2606c1d4cb587cdc8e94af6c9e9","register_time":1581405898590,"properties":"{\"os_name\":\"Linux\",\"host_name\":\"zxg\",\"process_no\":\"28356\",\"language\":\"java\",\"ipv4s\":\"[\\\"172.17.0.1\\\",\\\"192.168.33.112\\\"]\"}","mapping_service_instance_id":0}]}], scripted_upsert[false], detect_noop[true]} on EsThreadPoolExecutor[name = 828ebbd870e6/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@6fba3a0b[Running, pool size = 2, active threads = 2, queued tasks = 199, completed tasks = 4622331]]]
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177) ~[elasticsearch-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1706) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1683) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1446) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1403) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1373) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.update(RestHighLevelClient.java:868) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client.forceUpdate(ElasticSearch7Client.java:221) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch.base.RegisterEsDAO.forceUpdate(RegisterEsDAO.java:56) ~[storage-elasticsearch-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker.lambda$onWork$0(RegisterPersistentWorker.java:100) ~[server-core-6.6.0.jar:6.6.0]
at java.util.HashMap$Values.forEach(HashMap.java:981) [?:1.8.0_212]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker.onWork(RegisterPersistentWorker.java:95) [server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker.access$100(RegisterPersistentWorker.java:40) [server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker$PersistentConsumer.consume(RegisterPersistentWorker.java:153) [server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.commons.datacarrier.consumer.MultipleChannelsConsumer.consume(MultipleChannelsConsumer.java:81) [apm-datacarrier-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.commons.datacarrier.consumer.MultipleChannelsConsumer.run(MultipleChannelsConsumer.java:52) [apm-datacarrier-6.6.0.jar:6.6.0]
Suppressed: org.elasticsearch.client.ResponseException: method [POST], host [http://elasticsearch:9200], URI [/service_instance_inventory/_update/9_a3e9d2606c1d4cb587cdc8e94af6c9e9_0_0?refresh=true&timeout=1m], status line [HTTP/1.1 429 Too Many Requests]
{"error":{"root_cause":[{"type":"remote_transport_exception","reason":"[828ebbd870e6][172.17.0.2:9300][indices:data/write/update[s]]"}],"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [55728585][indices:data/write/update[s]]: update {[service_instance_inventory][_doc][9_a3e9d2606c1d4cb587cdc8e94af6c9e9_0_0], doc_as_upsert[false], doc[index {[null][_doc][null], source[{\"sequence\":9,\"last_update_time\":0,\"heartbeat_time\":1583370601542,\"node_type\":0,\"service_id\":9,\"address_id\":0,\"name\":\"zxgbmp-centerserver-pid:28356@zxg\",\"is_address\":0,\"instance_uuid\":\"a3e9d2606c1d4cb587cdc8e94af6c9e9\",\"register_time\":1581405898590,\"properties\":\"{\\\"os_name\\\":\\\"Linux\\\",\\\"host_name\\\":\\\"zxg\\\",\\\"process_no\\\":\\\"28356\\\",\\\"language\\\":\\\"java\\\",\\\"ipv4s\\\":\\\"[\\\\\\\"172.17.0.1\\\\\\\",\\\\\\\"192.168.33.112\\\\\\\"]\\\"}\",\"mapping_service_instance_id\":0}]}], scripted_upsert[false], detect_noop[true]} on EsThreadPoolExecutor[name = 828ebbd870e6/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@6fba3a0b[Running, pool size = 2, active threads = 2, queued tasks = 199, completed tasks = 4622331]]"},"status":429}
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:260) ~[elasticsearch-rest-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:238) ~[elasticsearch-rest-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:212) ~[elasticsearch-rest-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1433) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1403) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1373) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.elasticsearch.client.RestHighLevelClient.update(RestHighLevelClient.java:868) ~[elasticsearch-rest-high-level-client-7.0.0.jar:7.0.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch7.client.ElasticSearch7Client.forceUpdate(ElasticSearch7Client.java:221) ~[storage-elasticsearch7-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.storage.plugin.elasticsearch.base.RegisterEsDAO.forceUpdate(RegisterEsDAO.java:56) ~[storage-elasticsearch-plugin-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker.lambda$onWork$0(RegisterPersistentWorker.java:100) ~[server-core-6.6.0.jar:6.6.0]
at java.util.HashMap$Values.forEach(HashMap.java:981) [?:1.8.0_212]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker.onWork(RegisterPersistentWorker.java:95) [server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker.access$100(RegisterPersistentWorker.java:40) [server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.oap.server.core.register.worker.RegisterPersistentWorker$PersistentConsumer.consume(RegisterPersistentWorker.java:153) [server-core-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.commons.datacarrier.consumer.MultipleChannelsConsumer.consume(MultipleChannelsConsumer.java:81) [apm-datacarrier-6.6.0.jar:6.6.0]
at org.apache.skywalking.apm.commons.datacarrier.consumer.MultipleChannelsConsumer.run(MultipleChannelsConsumer.java:52) [apm-datacarrier-6.6.0.jar:6.6.0]
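When ES answers 429 like this, the usual client-side mitigation is to retry with exponential backoff. SkyWalking's own client handles this internally; the sketch below is just the general pattern for ad-hoc scripts against the cluster:

```shell
# Retry a command up to N times, doubling the delay after each failure.
retry() {
  local tries="$1"; shift
  local delay=1 n=1
  while ! "$@"; do
    if [ "$n" -ge "$tries" ]; then
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))
    n=$((n + 1))
  done
}

# usage: retry 5 curl -sf http://192.168.33.113:9200/_cluster/health
```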
curl -X GET 'http://192.168.33.113:9200/_nodes/stats'
Sample node stats output for reference:
{
"_nodes": {
"total": 1,
"successful": 1,
"failed": 0
},
"cluster_name": "docker-cluster",
"nodes": {
"N3fwuKinRiWTz8LpypmdTw": {
"timestamp": 1583202972869,
"name": "828ebbd870e6",
"transport_address": "172.17.0.2:9300",
"host": "172.17.0.2",
"ip": "172.17.0.2:9300",
"roles": ["ingest",
"master",
"data"],
"attributes": {
"ml.machine_memory": "8201801728",
"xpack.installed": "true",
"ml.max_open_jobs": "20"
},
"indices": {
"docs": {
"count": 1056,
"deleted": 127
},
"store": {
"size_in_bytes": 1286499
},
"indexing": {
"index_total": 0,
"index_time_in_millis": 0,
"index_current": 0,
"index_failed": 0,
"delete_total": 0,
"delete_time_in_millis": 0,
"delete_current": 0,
"noop_update_total": 0,
"is_throttled": false,
"throttle_time_in_millis": 0
},
"get": {
"total": 0,
"time_in_millis": 0,
"exists_total": 0,
"exists_time_in_millis": 0,
"missing_total": 0,
"missing_time_in_millis": 0,
"current": 0
},
"search": {
"open_contexts": 0,
"query_total": 0,
"query_time_in_millis": 0,
"query_current": 0,
"fetch_total": 0,
"fetch_time_in_millis": 0,
"fetch_current": 0,
"scroll_total": 0,
"scroll_time_in_millis": 0,
"scroll_current": 0,
"suggest_total": 0,
"suggest_time_in_millis": 0,
"suggest_current": 0
},
"merges": {
"current": 0,
"current_docs": 0,
"current_size_in_bytes": 0,
"total": 0,
"total_time_in_millis": 0,
"total_docs": 0,
"total_size_in_bytes": 0,
"total_stopped_time_in_millis": 0,
"total_throttled_time_in_millis": 0,
"total_auto_throttle_in_bytes": 1468006400
},
"refresh": {
"total": 140,
"total_time_in_millis": 0,
"external_total": 140,
"external_total_time_in_millis": 0,
"listeners": 0
},
"flush": {
"total": 0,
"periodic": 0,
"total_time_in_millis": 0
},
"warmer": {
"current": 0,
"total": 70,
"total_time_in_millis": 0
},
"query_cache": {
"memory_size_in_bytes": 0,
"total_count": 0,
"hit_count": 0,
"miss_count": 0,
"cache_size": 0,
"cache_count": 0,
"evictions": 0
},
"fielddata": {
"memory_size_in_bytes": 0,
"evictions": 0
},
"completion": {
"size_in_bytes": 0
},
"segments": {
"count": 241,
"memory_in_bytes": 295014,
"terms_memory_in_bytes": 168784,
"stored_fields_memory_in_bytes": 75192,
"term_vectors_memory_in_bytes": 0,
"norms_memory_in_bytes": 8576,
"points_memory_in_bytes": 2730,
"doc_values_memory_in_bytes": 39732,
"index_writer_memory_in_bytes": 0,
"version_map_memory_in_bytes": 0,
"fixed_bit_set_memory_in_bytes": 0,
"max_unsafe_auto_id_timestamp": -1,
"file_sizes": {
}
},
"translog": {
"operations": 8097,
"size_in_bytes": 1772793,
"uncommitted_operations": 0,
"uncommitted_size_in_bytes": 12870,
"earliest_last_modified_age": 0
},
"request_cache": {
"memory_size_in_bytes": 0,
"evictions": 0,
"hit_count": 0,
"miss_count": 0
},
"recovery": {
"current_as_source": 0,
"current_as_target": 0,
"throttle_time_in_millis": 0
}
},
"os": {
"timestamp": 1583202973591,
"cpu": {
"percent": 89,
"load_average": {
"1m": 3.53,
"5m": 1.06,
"15m": 0.66
}
},
"mem": {
"total_in_bytes": 8201801728,
"free_in_bytes": 153522176,
"used_in_bytes": 8048279552,
"free_percent": 2,
"used_percent": 98
},
"swap": {
"total_in_bytes": 8455712768,
"free_in_bytes": 8363167744,
"used_in_bytes": 92545024
},
"cgroup": {
"cpuacct": {
"control_group": "/",
"usage_nanos": 75123132920
},
"cpu": {
"control_group": "/",
"cfs_period_micros": 100000,
"cfs_quota_micros": -1,
"stat": {
"number_of_elapsed_periods": 0,
"number_of_times_throttled": 0,
"time_throttled_nanos": 0
}
},
"memory": {
"control_group": "/",
"limit_in_bytes": "9223372036854771712",
"usage_in_bytes": "4691886080"
}
}
},
"process": {
"timestamp": 1583202973591,
"open_file_descriptors": 2836,
"max_file_descriptors": 1048576,
"cpu": {
"percent": 88,
"total_in_millis": 72080
},
"mem": {
"total_virtual_in_bytes": 7096074240
}
},
"jvm": {
"timestamp": 1583202973596,
"uptime_in_millis": 41945,
"mem": {
"heap_used_in_bytes": 251700616,
"heap_used_percent": 5,
"heap_committed_in_bytes": 4277534720,
"heap_max_in_bytes": 4277534720,
"non_heap_used_in_bytes": 112768520,
"non_heap_committed_in_bytes": 122793984,
"pools": {
"young": {
"used_in_bytes": 69354608,
"max_in_bytes": 139591680,
"peak_used_in_bytes": 139591680,
"peak_max_in_bytes": 139591680
},
"survivor": {
"used_in_bytes": 7775632,
"max_in_bytes": 17432576,
"peak_used_in_bytes": 17432576,
"peak_max_in_bytes": 17432576
},
"old": {
"used_in_bytes": 174570376,
"max_in_bytes": 4120510464,
"peak_used_in_bytes": 174570376,
"peak_max_in_bytes": 4120510464
}
}
},
"threads": {
"count": 34,
"peak_count": 34
},
"gc": {
"collectors": {
"young": {
"collection_count": 126,
"collection_time_in_millis": 4179
},
"old": {
"collection_count": 2,
"collection_time_in_millis": 86
}
}
},
"buffer_pools": {
"mapped": {
"count": 438,
"used_in_bytes": 786685,
"total_capacity_in_bytes": 786685
},
"direct": {
"count": 16,
"used_in_bytes": 125069,
"total_capacity_in_bytes": 125068
}
},
"classes": {
"current_loaded_count": 16369,
"total_loaded_count": 16369,
"total_unloaded_count": 0
}
},
"thread_pool": {
"analyze": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"ccr": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"data_frame_indexing": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"fetch_shard_started": {
"threads": 4,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 4,
"completed": 3141
},
"fetch_shard_store": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"flush": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"force_merge": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"generic": {
"threads": 6,
"queue": 0,
"active": 1,
"rejected": 0,
"largest": 6,
"completed": 332
},
"get": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"listener": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"management": {
"threads": 2,
"queue": 0,
"active": 1,
"rejected": 0,
"largest": 2,
"completed": 42
},
"ml_datafeed": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"ml_job_comms": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"ml_utility": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 1
},
"refresh": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 11
},
"rollup_indexing": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"search": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"search_throttled": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"security-token-key": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"snapshot": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"warmer": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"watcher": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"write": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
}
},
"fs": {
"timestamp": 1583202973597,
"total": {
"total_in_bytes": 97788563456,
"free_in_bytes": 83441401856,
"available_in_bytes": 83441401856
},
"data": [{
"path": "/usr/share/elasticsearch/data/nodes/0",
"mount": "/usr/share/elasticsearch/data (/dev/mapper/centos-root)",
"type": "xfs",
"total_in_bytes": 97788563456,
"free_in_bytes": 83441401856,
"available_in_bytes": 83441401856
}],
"io_stats": {
"devices": [{
"device_name": "dm-0",
"operations": 16593,
"read_operations": 6325,
"write_operations": 10268,
"read_kilobytes": 76888,
"write_kilobytes": 113093
}],
"total": {
"operations": 16593,
"read_operations": 6325,
"write_operations": 10268,
"read_kilobytes": 76888,
"write_kilobytes": 113093
}
}
},
"transport": {
"server_open": 0,
"rx_count": 0,
"rx_size_in_bytes": 0,
"tx_count": 0,
"tx_size_in_bytes": 0
},
"http": {
"current_open": 1,
"total_opened": 1
},
"breakers": {
"request": {
"limit_size_in_bytes": 2566520832,
"limit_size": "2.3gb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.0,
"tripped": 0
},
"fielddata": {
"limit_size_in_bytes": 1711013888,
"limit_size": "1.5gb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.03,
"tripped": 0
},
"in_flight_requests": {
"limit_size_in_bytes": 4277534720,
"limit_size": "3.9gb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 2.0,
"tripped": 0
},
"accounting": {
"limit_size_in_bytes": 4277534720,
"limit_size": "3.9gb",
"estimated_size_in_bytes": 300725,
"estimated_size": "293.6kb",
"overhead": 1.0,
"tripped": 0
},
"parent": {
"limit_size_in_bytes": 4063657984,
"limit_size": "3.7gb",
"estimated_size_in_bytes": 251999928,
"estimated_size": "240.3mb",
"overhead": 1.0,
"tripped": 0
}
},
"script": {
"compilations": 1,
"cache_evictions": 0,
"compilation_limit_triggered": 0
},
"discovery": {
"cluster_state_queue": {
"total": 0,
"pending": 0,
"committed": 0
},
"published_cluster_states": {
"full_states": 2,
"incompatible_diffs": 0,
"compatible_diffs": 38
}
},
"ingest": {
"total": {
"count": 0,
"time_in_millis": 0,
"current": 0,
"failed": 0
},
"pipelines": {
"xpack_monitoring_6": {
"count": 0,
"time_in_millis": 0,
"current": 0,
"failed": 0,
"processors": [{
"script": {
"count": 0,
"time_in_millis": 0,
"current": 0,
"failed": 0
}
},
{
"gsub": {
"count": 0,
"time_in_millis": 0,
"current": 0,
"failed": 0
}
}]
},
"xpack_monitoring_7": {
"count": 0,
"time_in_millis": 0,
"current": 0,
"failed": 0,
"processors": []
}
}
},
"adaptive_selection": {
}
}
}
}
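Output like the dump above comes from the Elasticsearch node stats endpoint (for this setup, `http://192.168.33.113:9200/_nodes/stats`). A few fields in it are worth watching: here `os.mem.used_percent` is 98 and `os.cpu.percent` is 89, i.e. the host is nearly out of RAM and under heavy CPU load, while the `breakers.parent.tripped` count of 0 shows no circuit breaker has fired yet. The sketch below (not an official client, just a minimal illustration over an excerpt of the payload) shows how these warning signs could be checked programmatically:

```python
import json

# Minimal excerpt mirroring the /_nodes/stats dump above
# (values copied from the dump; the full response has many more fields)
stats_json = """
{
  "os": {"cpu": {"percent": 89}, "mem": {"used_percent": 98}},
  "jvm": {"mem": {"heap_used_percent": 5}},
  "breakers": {"parent": {"tripped": 0}}
}
"""

stats = json.loads(stats_json)

# Collect the same warning signs visible in the dump:
# near-full host memory, high CPU, and tripped circuit breakers.
warnings = []
if stats["os"]["mem"]["used_percent"] >= 90:
    warnings.append("os memory nearly full")
if stats["os"]["cpu"]["percent"] >= 85:
    warnings.append("high CPU")
if stats["breakers"]["parent"]["tripped"] > 0:
    warnings.append("parent circuit breaker tripped")

print(warnings)  # -> ['os memory nearly full', 'high CPU']
```

With only 4 GB of RAM on this test server and the JVM heap committed at ~4 GB, the 98% host memory usage seen here is expected; it mainly means there is little headroom left for the OS page cache.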