Containerized Deployment of the Elastic Stack Suite for Data Monitoring | Java Development in Practice

Summary: The Elastic Stack is a big-data technology stack that, in the ops-monitoring vertical, already provides a complete, product-grade, out-of-the-box solution: from log analysis to metrics monitoring, and on to application performance and availability monitoring.

Introduction

If we want to monitor an enterprise's IT infrastructure, or build end-to-end, full-chain monitoring of a software system, the Elastic Stack is a natural choice. As a big-data technology stack it already provides a complete solution in the ops-monitoring vertical: from log analysis to metrics monitoring, and on to application performance and availability monitoring, there are product-grade, out-of-the-box options.

This article walks through installing and using it.

Technology stack

Filebeat 7.3.0

Kafka 1.1.0

Logstash 7.3.0

Elasticsearch 7.1.1

Kibana 7.1.1

ZooKeeper

Pull the images

docker pull wurstmeister/kafka:1.1.0
docker pull wurstmeister/zookeeper
docker pull kibana:7.1.1
docker pull logstash:7.3.0
docker pull grafana/grafana:6.3.2
docker pull elasticsearch:7.1.1
docker pull mobz/elasticsearch-head:5-alpine

Create the containers

# elasticsearch-head
docker run -d --name elasticsearch-head --network host mobz/elasticsearch-head:5-alpine
# elasticsearch
docker run -d --name elasticsearch --network host -e "discovery.type=single-node" elasticsearch:7.1.1
# kibana (with --network host the container shares the host's ports, so -p mapping is unnecessary)
docker run -d --name kibana --network host kibana:7.1.1
# logstash
docker run -d --name logstash --network host logstash:7.3.0
# mysql
docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=root --network host mysql:latest
# grafana
docker run -d --name grafana --network host grafana/grafana:6.3.2
# follow a container's log output
docker logs -f [containerID]
## Application endpoints
http://192.168.104.102:9100/   elasticsearch-head
http://192.168.104.102:9200/   elasticsearch
http://192.168.104.102:5601/   kibana
http://192.168.104.102:3000/   grafana      admin/admin
http://192.168.104.102:3306/   mysql        root/root
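Before moving on, it helps to confirm that every container is actually up. A quick check (a sketch; the names match the containers created above):

# list running containers with their status
docker ps --format 'table {{.Names}}\t{{.Status}}'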

Edit the elasticsearch container's elasticsearch.yml and append:

http.cors.enabled: true
http.cors.allow-origin: "*"
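Since Elasticsearch runs in a container, one way to apply the change is to copy the file out, append the two lines, copy it back, and restart. A minimal sketch, assuming the official image's default config path /usr/share/elasticsearch/config/elasticsearch.yml:

# copy the config out of the container
docker cp elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml .
# append the CORS settings shown above
cat >> elasticsearch.yml <<'EOF'
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
# copy it back and restart the container
docker cp elasticsearch.yml elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml
docker restart elasticsearch

These settings let elasticsearch-head (on port 9100) query Elasticsearch across origins.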

For Kafka and ZooKeeper, see the reference "使用Docker快速搭建Kafka开发环境" (Quickly set up a Kafka development environment with Docker).
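A minimal sketch of starting the two images pulled earlier, assuming host networking like the other containers and the kafka1 hostname used in the Filebeat configuration below (KAFKA_ADVERTISED_HOST_NAME and KAFKA_ZOOKEEPER_CONNECT are the wurstmeister image's environment variables):

# zookeeper (listens on 2181)
docker run -d --name zookeeper --network host wurstmeister/zookeeper
# kafka broker, registering itself under the kafka1 hostname
docker run -d --name kafka --network host \
  -e KAFKA_ADVERTISED_HOST_NAME=kafka1 \
  -e KAFKA_ZOOKEEPER_CONNECT=192.168.104.102:2181 \
  wurstmeister/kafka:1.1.0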

Downloading and installing Filebeat

[root@pit-server-104-102 filebeat-7.3.0-linux-x86_64]# pwd
/opt/filebeat-7.3.0-linux-x86_64
# download filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.0-linux-x86_64.tar.gz
# unpack the tarball
tar -zxvf filebeat-7.3.0-linux-x86_64.tar.gz
## ways to start filebeat
# validate the configuration file first
./filebeat test config -c my-filebeat.yml
./filebeat -e -c my-filebeat.yml -d "publish"
# run in the background, saving the output log
nohup ./filebeat -e -c my-filebeat.yml > /tmp/filebeat.log 2>&1 &
# run detached in the background, discarding stdout and stderr to /dev/null
nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &
# start filebeat and watch its output
./filebeat -e -c filebeat.yml -d "publish"
# flags
-e: log to stderr instead of syslog/the logs directory
-c: specify the configuration file
-d: enable debug output for the given selectors

Filebeat configuration files

[root@pit-server-104-102 filebeat-7.3.0-linux-x86_64]# pwd
/opt/filebeat-7.3.0-linux-x86_64
[root@pit-server-104-102 filebeat-7.3.0-linux-x86_64]# ls
data  fields.yml  filebeat  filebeat.reference.yml  filebeat.yml  kibana  LICENSE.txt  logs  module  modules.d  my-filebeat.yml  NOTICE.txt

my-filebeat.yml (output to Kafka):
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/test.log
  fields:
    serviceName: jp-filebeat
  fields_under_root: true
# output to kafka
output.kafka:
  enabled: true
  hosts: ["kafka1:9092"]
  topic: 'stream-in'
  required_acks: 1
## Note
# kafka1 is a hostname; map the Kafka broker's IP to kafka1 in /etc/hosts
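Since kafka1 is only a hostname, the machine running Filebeat must be able to resolve it. A sketch of the /etc/hosts entry (the IP is the broker host used throughout this article):

# map the Kafka broker's IP to the kafka1 hostname
echo '192.168.104.102 kafka1' >> /etc/hosts

Alternatively, Filebeat can ship logs straight to Logstash: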
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/*.log
  fields:
    serviceName: jp-filebeat
  fields_under_root: true
# output to logstash
output.logstash:
  hosts: ["192.168.104.102:5045"]
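For this output to work, the Logstash side needs a pipeline listening on port 5045. A minimal sketch using the beats input plugin (the file name beats-logstash.conf is just an example):

# beats-logstash.conf: receive events from Filebeat and print them
input {
  beats {
    port => 5045
  }
}
output {
  stdout { codec => rubydebug }
}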

Check the TCP connections between Filebeat and Kafka

[root@pit-server-104-102 opt]# netstat -atunlp | grep filebeat
tcp    0    0 192.168.104.102:33362   192.168.104.102:9092    ESTABLISHED 20485/./filebeat
tcp    0    0 192.168.104.102:33366   192.168.104.102:9092    ESTABLISHED 22538/./filebeat
[root@pit-server-104-102 opt]# netstat -atunlp | grep 9092
tcp    0    0 192.168.104.102:33362   192.168.104.102:9092    ESTABLISHED 20485/./filebeat
tcp    0    0 192.168.104.102:33366   192.168.104.102:9092    ESTABLISHED 22538/./filebeat
tcp6   0    0 :::9092                 :::*                    LISTEN      13083/java
tcp6   0    0 192.168.104.102:9092    192.168.104.102:33362   ESTABLISHED 13083/java
tcp6   0    0 192.168.104.102:9092    192.168.104.102:33366   ESTABLISHED 13083/java
(remaining broker-side java connections truncated in the original capture)
[root@pit-server-104-102 filebeat-7.3.0-linux-x86_64]# tail -f /tmp/filebeat.log
2019-09-13T17:02:37.581+0800  INFO  [monitoring]  log/log.go:118  Starting metrics logging every 30s
2019-09-13T17:02:37.581+0800  INFO  instance/beat.go:421  filebeat start running.
2019-09-13T17:02:37.581+0800  INFO  registrar/registrar.go:145  Loading registrar data from /opt/filebeat-7.3.0-linux-x86_64/data/registry/filebeat/data.json
2019-09-13T17:02:37.582+0800  INFO  registrar/registrar.go:152  States Loaded from registrar: 3
2019-09-13T17:02:37.582+0800  WARN  beater/filebeat.go:368  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-09-13T17:02:37.582+0800  INFO  crawler/crawler.go:72  Loading Inputs: 1
2019-09-13T17:02:37.583+0800  INFO  log/input.go:148  Configured paths: [/opt/test.log]
2019-09-13T17:02:37.583+0800  INFO  input/input.go:114  Starting input of type: log; ID: 14366783169406880347
2019-09-13T17:02:37.583+0800  INFO  crawler/crawler.go:106  Loading and starting Inputs completed. Enabled inputs: 1
2019-09-13T17:02:37.685+0800  INFO  [monitoring]  log/log.go:145  Non-zero metrics in the last 30s  {"monitoring":{"metrics":{...}}}
(the periodic 30-second metrics JSON is truncated here; once events are shipped, the entries include "outputs":{"kafka":{"bytes_read":53,"bytes_write":443}} and acked event counts)

Write a line into the test log file: echo '321312' >> test.log

logstash

  • Installing Logstash on Linux
  • Installing Logstash with Docker

The container starts with the default pipeline file /usr/share/logstash/pipeline/logstash.conf.
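If you prefer to manage pipelines from the host, one option is to mount a local directory over that default path when creating the container; a sketch (the host path /opt/logstash/pipeline is an example):

docker run -d --name logstash --network host \
  -v /opt/logstash/pipeline:/usr/share/logstash/pipeline \
  logstash:7.3.0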

Test Logstash with a standard input/output pipeline:

bin/logstash -e 'input { stdin {} } output { stdout {} }'

Note:

If Logstash reports "Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the \"path.data\" setting",
append --path.data=/usr/share/logstash/jpdata to the original command bin/logstash -f config/logstash-sample.conf.
# run additional logstash instances, each with its own data path
bin/logstash -f config/logstash-sample.conf --config.reload.automatic --path.data=/usr/share/logstash/jpdata
bin/logstash -f config/test-logstash.conf --config.reload.automatic --path.data=/usr/share/logstash/jpdata
# check that the configuration file is valid before starting
bin/logstash -f logstash-sample.conf --config.test_and_exit
# flags
--path.data: the directory where instance data is stored
--config.reload.automatic: hot-reload the pipeline file after edits, no restart required
[2019-09-13T09:08:35,643][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://192.168.104.102:9200"]}
[2019-09-13T09:08:35,685][INFO ][logstash.javapipeline   ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>5, ...}
[2019-09-13T09:08:35,823][INFO ][logstash.javapipeline   ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2019-09-13T09:08:35,843][INFO ][logstash.agent          ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[2019-09-13T09:08:36,827][INFO ][logstash.agent          ] Successfully started Logstash API endpoint {:port=>9600}
{
          "input" => {
        "type" => "log"
    },
          "agent" => {
        "ephemeral_id" => "0230f8a0-bcca-4bbd-b90c-364efd25cdd5",
             "version" => "7.3.0",
                  "id" => "24e5f934-6584-44bf-81ef-...",
                "type" => "filebeat",
            "hostname" => "pit-server-104-102"
    },
        "message" => "123",
            "log" => {
        "offset" => 0,
          "file" => {
            "path" => "/opt/test.log"
        }
    },
     "@timestamp" => 2019-09-13T09:01:17.713Z,
           "host" => {
        "name" => "pit-server-104-102"
    },
    "serviceName" => "jp-filebeat",
            "ecs" => {
        "version" => "1.0.1"
    },
           "tags" => [
        [0] "_grokparsefailure"
    ],
       "@version" => "1"
}
(a second event follows with "message" => "321312" and "@timestamp" => 2019-09-13T09:01:42.715Z, otherwise the same structure)

Logstash configuration files

test-logstash.conf

# read from kafka
input {
  kafka {
    bootstrap_servers => "192.168.104.102:9092"
    topics => ["stream-in"]
    codec => "json"
  }
}
# filter the content
filter {
  grok {
      match => {"message" => "%{COMBINEDAPACHELOG}"}
  }
}
# print the output to the console
output {
    stdout { codec => rubydebug }
}
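To run this pipeline inside the Logstash container, one approach is to copy the file in and start a second instance with its own data path, as described above. A sketch:

# copy the pipeline file into the container
docker cp test-logstash.conf logstash:/usr/share/logstash/config/
# start it inside the container (separate --path.data to avoid the instance conflict)
docker exec -it logstash bin/logstash -f config/test-logstash.conf --path.data=/usr/share/logstash/jpdata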

logstash-es.conf

# read from kafka
input {
  kafka {
    bootstrap_servers => "192.168.104.102:9092"
    topics => ["stream-in"]
    codec => "json"
  }
}
# filter: parse the message field as JSON
filter {
  json {
      source => "message"
  }
}
# write the output to elasticsearch
output {
  elasticsearch {
    hosts => ["192.168.104.102:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
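Once this pipeline is running, you can confirm that the daily index is being created; a quick check with curl against the Elasticsearch REST API:

# list indices; a logstash-YYYY.MM.dd index should appear
curl "http://192.168.104.102:9200/_cat/indices?v"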

kafka

Inside the Docker container, Kafka is installed under /opt/kafka_2.12-1.1.0/bin:

bash-4.4# kafka-topics.sh --zookeeper 192.168.104.102:2181 --list
__consumer_offsets
stream-in
bash-4.4# pwd
/opt/kafka_2.12-1.1.0/bin
bash-4.4# ls
connect-distributed.sh               kafka-delete-records.sh              kafka-run-class.sh                  zookeeper-security-migration.sh
connect-standalone.sh                kafka-log-dirs.sh                    kafka-server-start.sh               zookeeper-server-start.sh
kafka-acls.sh                        kafka-mirror-maker.sh                kafka-server-stop.sh                zookeeper-server-stop.sh
kafka-broker-api-versions.sh         kafka-preferred-replica-election.sh  kafka-simple-consumer-shell.sh      zookeeper-shell.sh
kafka-configs.sh                     kafka-producer-perf-test.sh          kafka-streams-application-reset.sh  trogdor.sh
kafka-console-consumer.sh            kafka-reassign-partitions.sh         kafka-topics.sh                     windows
kafka-console-producer.sh            kafka-replay-log-producer.sh         kafka-verifiable-consumer.sh
kafka-consumer-groups.sh             kafka-replica-verification.sh        kafka-verifiable-producer.sh
kafka-consumer-perf-test.sh          kafka-delegation-tokens.sh
# install tree to inspect the file layout
yum -y install tree
# list all topics registered in zookeeper
kafka-topics.sh --zookeeper 192.168.104.102:2181 --list
# list all consumer group.ids (there is no built-in way to list groups for a single topic)
kafka-consumer-groups.sh --bootstrap-server kafka1:9092 --list
# produce messages to a topic
kafka-console-producer.sh --broker-list kafka1:9092 --topic stream-in
# start a consumer to inspect the topic's contents
kafka-console-consumer.sh --bootstrap-server kafka1:9092 --topic stream-in --from-beginning
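As an end-to-end smoke test, append a line to the harvested file and watch it arrive on the topic (a sketch combining the commands above):

# on the Filebeat host: append a test line to the harvested file
echo 'hello-elk' >> /opt/test.log
# in the kafka container: the line should show up on the stream-in topic
kafka-console-consumer.sh --bootstrap-server kafka1:9092 --topic stream-in --from-beginning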

zookeeper

[zk: localhost:2181(CONNECTED) 0] ls /
[log_dir_event_notification, isr_change_notification, zookeeper, admin, consumers, cluster, config, latest_producer_id_block, controller, brokers, controller_epoch]
[zk: localhost:2181(CONNECTED) 1] ls /config/topics
[__consumer_offsets, stream-in]
root@pit-server-104-102:/opt/zookeeper-3.4.13/bin# pwd
/opt/zookeeper-3.4.13/bin
root@pit-server-104-102:/opt/zookeeper-3.4.13/bin# ls
README.txt  zkCleanup.sh  zkCli.cmd  zkCli.sh  zkEnv.cmd  zkEnv.sh  zkServer.cmd  zkServer.sh  zkTxnLogToolkit.cmd  zkTxnLogToolkit.sh

Start the ZooKeeper CLI and inspect the registered brokers:

./zkCli.sh

[zk: localhost:2181(CONNECTED) 7] ls /brokers/ids
[1]
[zk: localhost:2181(CONNECTED) 8] get /brokers/ids/1
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka1:9092"],"jmx_port":-1,"host":"kafka1","timestamp":"1568172305410","port":9092,"version":4}
cZxid = 0x150
ctime = Wed Sep 11 03:25:05 UTC 2019
mZxid = 0x150
mtime = Wed Sep 11 03:25:05 UTC 2019
pZxid = 0x150
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x10149e6e5680017
dataLength = 182
numChildren = 0
[zk: localhost:2181(CONNECTED) 9]

References for troubleshooting

  1. Kafka connection failures caused by the broker advertising its hostname
  2. How to avoid registering the Kafka broker's hostname in ZooKeeper
  3. Configure the Kafka output
  4. Kafka commands for checking topic and consumer-group status
  5. Basic Kafka shell commands (including creating, deleting, altering, and listing topics)
  6. Enabling the Chinese locale in Kibana
  7. MySQL "Client does not support authentication protocol"
  8. Check the filebeat process: ps -ef | grep filebeat

On Filebeat and Kafka version compatibility, defer to the official documentation.


