Data Monitoring with the Elastic Stack Suite: Containerized Deployment | Java Development in Practice

Summary: The Elastic Stack, as a big-data technology stack, already provides a complete solution in the operations-monitoring vertical: from log analysis to metrics monitoring to application performance and availability monitoring, there are production-grade, out-of-the-box offerings.

Introduction

If you want to monitor a company's IT infrastructure, or build end-to-end, full-link monitoring for your software, the Elastic Stack is a natural fit. As a big-data technology stack, it already provides a complete solution in the operations-monitoring vertical: from log analysis to metrics monitoring to application performance and availability monitoring, there are production-grade, out-of-the-box offerings.

This article walks through how to install and use it.

Technology selection

Filebeat 7.3.0

Kafka 1.1.0

Logstash 7.3.0

Elasticsearch 7.1.1

Kibana 7.1.1

ZooKeeper

Pull the images

docker pull wurstmeister/kafka:1.1.0
docker pull wurstmeister/zookeeper
docker pull kibana:7.1.1
docker pull logstash:7.3.0
docker pull grafana/grafana:6.3.2
docker pull elasticsearch:7.1.1
docker pull mobz/elasticsearch-head:5-alpine
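Once the pulls finish, a quick optional sanity check that every image landed locally:

# List the images this article uses
docker images | grep -E 'kafka|zookeeper|kibana|logstash|grafana|elasticsearch'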

Create the containers

# elasticsearch-head
docker run -d --name elasticsearch-head --network host mobz/elasticsearch-head:5-alpine
# elasticsearch
docker run -d --name elasticsearch --network host -e "discovery.type=single-node" elasticsearch:7.1.1
# kibana (with --network host the container binds the host's ports directly, so -p mappings are unnecessary)
docker run -d --name kibana --network host kibana:7.1.1
# logstash
docker run -d --name logstash --network host logstash:7.3.0
# mysql
docker run -d --name mysql --network host -e MYSQL_ROOT_PASSWORD=root mysql:latest
# grafana
docker run -d --name grafana --network host grafana/grafana:6.3.2
# Tail a container's logs
docker logs -f [containerID]
## Application endpoints
http://192.168.104.102:9100/   elasticsearch-head
http://192.168.104.102:9200/   elasticsearch
http://192.168.104.102:3000/   grafana      admin/admin
192.168.104.102:3306           mysql        root/root
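To confirm Elasticsearch came up cleanly, a quick check from the host (using the same host IP as above):

# Basic reachability and cluster health
curl http://192.168.104.102:9200/
curl http://192.168.104.102:9200/_cluster/health?pretty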

Edit Elasticsearch's elasticsearch.yml and append the following, so elasticsearch-head can connect across origins:

http.cors.enabled: true
http.cors.allow-origin: "*"
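One way to apply this to the already-running container, sketched below; mounting the file with -v at docker run time works just as well:

# Copy the config out, append the two CORS lines, copy it back, then restart
docker cp elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml .
echo 'http.cors.enabled: true' >> elasticsearch.yml
echo 'http.cors.allow-origin: "*"' >> elasticsearch.yml
docker cp elasticsearch.yml elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml
docker restart elasticsearch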

For Kafka and ZooKeeper, see the reference article "Quickly set up a Kafka development environment with Docker".

Downloading and installing Filebeat

[root@pit-server-104-102 filebeat-7.3.0-linux-x86_64]# pwd
/opt/filebeat-7.3.0-linux-x86_64
# Download filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.0-linux-x86_64.tar.gz
# Extract the tarball
tar -zxvf filebeat-7.3.0-linux-x86_64.tar.gz
## Several ways to check and start filebeat
./filebeat test config -c my-filebeat.yml
./filebeat -e -c my-filebeat.yml -d "publish"
# Start in the background, saving filebeat's output to a log file
nohup ./filebeat -e -c my-filebeat.yml > /tmp/filebeat.log 2>&1 &
# Run detached, discarding stdout and stderr to /dev/null (no output at all)
nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &
# After filebeat starts, watch its publish output
./filebeat -e -c filebeat.yml -d "publish"
# Flag reference
-e: log to stderr instead of the default syslog/logs output
-c: specify the config file
-d: enable debug output for the given selectors
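Since nohup leaves filebeat running detached, here is one way to find and stop it later (a sketch; pkill matches against the process command line):

# Locate the detached filebeat process
ps -ef | grep [f]ilebeat
# Stop it by matching the config file named on its command line
pkill -f my-filebeat.yml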

Filebeat configuration

[root@pit-server-104-102 filebeat-7.3.0-linux-x86_64]# pwd
/opt/filebeat-7.3.0-linux-x86_64
[root@pit-server-104-102 filebeat-7.3.0-linux-x86_64]# ls
data  fields.yml  filebeat  filebeat.reference.yml  filebeat.yml  kibana  LICENSE.txt  logs  module  modules.d  my-filebeat.yml  NOTICE.txt
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/test.log
  fields:
    serviceName: jp-filebeat
  fields_under_root: true
# Ship the content to Kafka
output.kafka:
  enabled: true
  hosts: ["kafka1:9092"]
  topic: 'stream-in'
  required_acks: 1
## Note
# kafka1 is a hostname; edit /etc/hosts and map the Kafka broker's IP to kafka1
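One minimal way to make kafka1 resolve, assuming the broker runs at 192.168.104.102 as elsewhere in this article:

# Map the broker hostname to its IP (run as root)
echo '192.168.104.102 kafka1' >> /etc/hosts

A second input config, shipping directly to Logstash instead of Kafka: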
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/*.log
  fields:
    serviceName: jp-filebeat
  fields_under_root: true
# Ship the content to Logstash
output.logstash:
  hosts: ["192.168.104.102:5045"]
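Before starting for real, Filebeat can check that it can reach whichever output is configured (support for this subcommand varies by output type):

# Test connectivity to the configured output
./filebeat test output -c my-filebeat.yml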

Inspecting the TCP connections between Filebeat and Kafka

[root@pit-server-104-102 opt]# netstat -atunlp | grep filebeat
tcp    0  0  192.168.104.102:33362  192.168.104.102:9092   ESTABLISHED  20485/./filebeat
tcp    0  0  192.168.104.102:33366  192.168.104.102:9092   ESTABLISHED  22538/./filebeat
[root@pit-server-104-102 opt]# netstat -atunlp | grep 9092
tcp    0  0  192.168.104.102:33362  192.168.104.102:9092   ESTABLISHED  20485/./filebeat
tcp    0  0  192.168.104.102:33366  192.168.104.102:9092   ESTABLISHED  22538/./filebeat
tcp6   0  0  192.168.104.102:9092   :::*                   LISTEN       13083/java
tcp6   0  0  192.168.104.102:9092   192.168.104.102:33362  ESTABLISHED  13083/java
tcp6   0  0  192.168.104.102:9092   192.168.104.102:33366  ESTABLISHED  13083/java
(additional java-to-java broker connections on ports 33394/33396/33402 not shown)
[root@pit-server-104-102 filebeat-7.3.0-linux-x86_64]# tail -f /tmp/filebeat.log
2019-09-13T17:02:37.581+0800  INFO  [monitoring]  log/log.go:118  Starting metrics logging every 30s
2019-09-13T17:02:37.581+0800  INFO  instance/beat.go:421  filebeat start running.
2019-09-13T17:02:37.581+0800  INFO  registrar/registrar.go:145  Loading registrar data from /opt/filebeat-7.3.0-linux-x86_64/data/registry/filebeat/data.json
2019-09-13T17:02:37.582+0800  INFO  registrar/registrar.go:152  States Loaded from registrar: 3
2019-09-13T17:02:37.582+0800  WARN  beater/filebeat.go:368  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-09-13T17:02:37.582+0800  INFO  crawler/crawler.go:72  Loading Inputs: 1
2019-09-13T17:02:37.583+0800  INFO  log/input.go:148  Configured paths: [/opt/test.log]
2019-09-13T17:02:37.583+0800  INFO  input/input.go:114  Starting input of type: log; ID: 14366783169406880347
2019-09-13T17:02:37.583+0800  INFO  crawler/crawler.go:106  Loading and starting Inputs completed. Enabled inputs: 1
2019-09-13T17:02:37.685+0800  INFO  [monitoring]  log/log.go:145  Non-zero metrics in the last 30s  {"monitoring":{"metrics":{...}}}
(the 30-second monitoring snapshots that follow are elided; once events are shipped, they include counters such as "outputs":{"kafka":{"bytes_read":53,"bytes_write":443}} and "output":{"events":{"acked":1}}, confirming delivery to Kafka)

Write a line into the test log file: echo '321312' >> test.log
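If you want a steady stream of traffic through the pipeline rather than a single line, a throwaway loop such as the following will do (the path matches the input config above):

# Append one timestamped test line per second; stop with Ctrl-C
while true; do echo "test $(date +%s)" >> /opt/test.log; sleep 1; done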

logstash

  • Installing Logstash on Linux
  • Installing Logstash with Docker

The container starts by default with the pipeline config file /usr/share/logstash/pipeline/logstash.conf.

Test Logstash with a minimal pipeline (standard input and output):

bin/logstash -e 'input { stdin {} } output { stdout {} }'

Tip:

Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting
If you hit this, append --path.data=/usr/share/logstash/jpdata to the original command bin/logstash -f config/logstash-sample.conf.
# Start an additional logstash instance with the sample config
bin/logstash -f config/logstash-sample.conf --config.reload.automatic --path.data=/usr/share/logstash/jpdata
# Start an additional logstash instance with the test config
bin/logstash -f config/test-logstash.conf --config.reload.automatic --path.data=/usr/share/logstash/jpdata
# Check that the config file is valid before starting
bin/logstash -f logstash-sample.conf --config.test_and_exit
# Flag reference
--path.data: the directory where this instance stores its data
--config.reload.automatic: hot-reload the config file, so edits do not require a restart
[2019-09-13T09:08:35,643][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.104.102:9200"]}
[2019-09-13T09:08:35,843][INFO ][logstash.agent] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[2019-09-13T09:08:36,827][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}
{
          "input" => { "type" => "log" },
          "agent" => {
        "ephemeral_id" => "0230f8a0-bcca-4bbd-b90c-364efd25cdd5",
             "version" => "7.3.0",
                  "id" => "24e5f934-6584-44bf-81ef-5be2b090414a",
                "type" => "filebeat",
            "hostname" => "pit-server-104-102"
    },
        "message" => "123",
            "log" => { "offset" => 0, "file" => { "path" => "/opt/test.log" } },
     "@timestamp" => 2019-09-13T09:01:17.713Z,
           "host" => { "name" => "pit-server-104-102" },
    "serviceName" => "jp-filebeat",
            "ecs" => { "version" => "1.0.1" },
           "tags" => [
        [0] "_grokparsefailure"
    ],
       "@version" => "1"
}
{
        "message" => "321312",
     "@timestamp" => 2019-09-13T09:01:42.715Z,
    ... (same shape as the event above)
}

Logstash configuration files

test-logstash.conf

# Read content from Kafka
input {
  kafka {
    bootstrap_servers => "192.168.104.102:9092"
    topics => ["stream-in"]
    codec => "json"
  }
}
# Filter the content: parse the message with the grok COMBINEDAPACHELOG pattern
filter {
  grok {
      match => {"message" => "%{COMBINEDAPACHELOG}"}
  }
}
# Print events to the console
output {
    stdout { codec => rubydebug}
}
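To run this config inside the Logstash container instead of a local install, one approach is to mount it over the default pipeline file mentioned earlier (the host path /opt/test-logstash.conf is an assumption):

docker rm -f logstash
docker run -d --name logstash --network host \
  -v /opt/test-logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
  logstash:7.3.0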

logstash-es.conf

# Read content from Kafka
input {
  kafka {
    bootstrap_servers => "192.168.104.102:9092"
    topics => ["stream-in"]
    codec => "json"
  }
}
# Filter the content: parse the message field as JSON
filter {
  json {
      source => "message"
  }
}
# Ship the content to Elasticsearch
output {
  elasticsearch {
    hosts => ["192.168.104.102:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
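Once events flow, the daily index named by the index => "logstash-%{+YYYY.MM.dd}" pattern should appear in Elasticsearch:

# List indices; expect an entry like logstash-2019.09.13
curl 'http://192.168.104.102:9200/_cat/indices?v'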

kafka

Inside the Docker container, Kafka is installed under /opt/kafka_2.12-1.1.0/bin:

bash-4.4# kafka-topics.sh --zookeeper 192.168.104.102:2181 --list
__consumer_offsets
stream-in
bash-4.4# pwd
/opt/kafka_2.12-1.1.0/bin
bash-4.4# ls
connect-distributed.sh        kafka-console-producer.sh    kafka-mirror-maker.sh                kafka-replica-verification.sh       kafka-verifiable-consumer.sh
connect-standalone.sh         kafka-consumer-groups.sh     kafka-preferred-replica-election.sh  kafka-run-class.sh                  kafka-verifiable-producer.sh
kafka-acls.sh                 kafka-consumer-perf-test.sh  kafka-producer-perf-test.sh          kafka-server-start.sh               trogdor.sh
kafka-broker-api-versions.sh  kafka-delegation-tokens.sh   kafka-reassign-partitions.sh         kafka-server-stop.sh                windows
kafka-configs.sh              kafka-delete-records.sh      kafka-replay-log-producer.sh         kafka-simple-consumer-shell.sh      zookeeper-security-migration.sh
kafka-console-consumer.sh     kafka-log-dirs.sh            kafka-streams-application-reset.sh   kafka-topics.sh                     zookeeper-server-start.sh
zookeeper-server-stop.sh      zookeeper-shell.sh
# Install tree to inspect directory structures
yum -y install tree
# List all of Kafka's topics registered in ZooKeeper
kafka-topics.sh --zookeeper 192.168.104.102:2181 --list
# This lists every group.id across all topics; there is no built-in way to query group.id by topic.
kafka-consumer-groups.sh --bootstrap-server kafka1:9092 --list
# Produce data into the topic
kafka-console-producer.sh --broker-list kafka1:9092 --topic stream-in
# Start a consumer to inspect the topic's content:
kafka-console-consumer.sh --bootstrap-server kafka1:9092 --topic stream-in --from-beginning
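If the stream-in topic does not exist yet (for example, when broker auto-creation is disabled), it can be created by hand; the partition and replication counts below are minimal assumptions for a single-broker setup:

kafka-topics.sh --zookeeper 192.168.104.102:2181 --create \
  --topic stream-in --partitions 1 --replication-factor 1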

zookeeper

[zk: localhost:2181(CONNECTED) 0] ls /
[log_dir_event_notification, isr_change_notification, zookeeper, admin, consumers, cluster, config, latest_producer_id_block, controller, brokers, controller_epoch]
[zk: localhost:2181(CONNECTED) 1] ls /config/topics
[__consumer_offsets, stream-in]
root@pit-server-104-102:/opt/zookeeper-3.4.13/bin# ls
README.txt  zkCleanup.sh  zkCli.cmd  zkCli.sh  zkEnv.cmd  zkEnv.sh  zkServer.cmd  zkServer.sh  zkTxnLogToolkit.cmd  zkTxnLogToolkit.sh
root@pit-server-104-102:/opt/zookeeper-3.4.13/bin# pwd
/opt/zookeeper-3.4.13/bin

./zkCli.sh

[zk: localhost:2181(CONNECTED) 7] ls /brokers/ids
[1]
[zk: localhost:2181(CONNECTED) 8] get /brokers/ids/1
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka1:9092"],"jmx_port":-1,"host":"kafka1","timestamp":"1568172305410","port":9092,"version":4}
cZxid = 0x150
ctime = Wed Sep 11 03:25:05 UTC 2019
mZxid = 0x150
mtime = Wed Sep 11 03:25:05 UTC 2019
pZxid = 0x150
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x10149e6e5680017
dataLength = 182
numChildren = 0
[zk: localhost:2181(CONNECTED) 9]
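For a quick health check without entering the shell, ZooKeeper's four-letter-word commands work over a raw TCP connection:

# stat reports the server's mode, client connections, and znode count
echo stat | nc 192.168.104.102 2181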

References for when things go wrong

  1. Kafka connection failures caused by the broker registering its hostname
  2. How to avoid registering the Kafka broker machine's hostname in ZooKeeper
  3. Configure the Kafka output
  4. Commands for checking Kafka topic and consumer-group status
  5. Basic Kafka shell commands (including topic create/delete/update/list)
  6. Enabling the Chinese locale in Kibana
  7. mysql Client does not support authentication protocol
  8. Check the filebeat process with ps -ef | grep filebeat; for Filebeat/Kafka version compatibility, defer to the official documentation

