gctrace
gctrace is mainly used to trace how long each GC phase takes and to compare heap memory before and after a collection.
To use it, just set the GODEBUG='gctrace=1' environment variable when starting the program, for example:
➜ server git:(master) GODEBUG='gctrace=1' go run ./cmd/main.go
gc 1 @0.024s 1%: 0.027+1.0+0.022 ms clock, 0.21+1.1/0.58/0.008+0.17 ms cpu, 4->4->0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 2 @0.031s 1%: 0.022+0.59+0.032 ms clock, 0.18+0.64/0.74/0.12+0.26 ms cpu, 4->4->0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 3 @0.038s 1%: 0.012+0.41+0.014 ms clock, 0.096+0.22/0.46/0+0.11 ms cpu, 4->4->0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 4 @0.047s 1%: 0.057+0.32+0.012 ms clock, 0.45+0.35/0.37/0.003+0.097 ms cpu, 4->4->0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 5 @0.053s 1%: 0.024+0.31+0.022 ms clock, 0.19+0.18/0.46/0.054+0.17 ms cpu, 4->4->0 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 6 @0.058s 1%: 0.025+0.52+0.038 ms clock, 0.20+0.53/0.46/0+0.30 ms cpu, 4->4->1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 7 @0.063s 1%: 0.037+0.46+0.014 ms clock, 0.30+0.20/0.60/0+0.11 ms cpu, 4->4->1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 8 @0.070s 1%: 0.026+0.32+0.002 ms clock, 0.21+0.097/0.54/0.19+0.016 ms cpu, 4->4->1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 9 @0.077s 1%: 0.030+0.39+0.018 ms clock, 0.24+0.068/0.41/0.16+0.14 ms cpu, 4->4->1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 10 @0.083s 1%: 0.027+0.46+0.005 ms clock, 0.21+0.46/0.55/0.012+0.042 ms cpu, 4->4->1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 11 @0.087s 2%: 0.039+0.44+0.018 ms clock, 0.31+0.26/0.62/0+0.14 ms cpu, 4->4->1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 12 @0.093s 2%: 0.026+0.43+0.003 ms clock, 0.21+0.40/0.47/0+0.030 ms cpu, 4->4->1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
Field descriptions
gc # @#s #%: #+#+# ms clock, #+#/#/#+# ms cpu, #->#-># MB, # MB goal, # MB stacks, # MB globals, # P
where the fields are as follows:
gc #                the GC number, incremented at each GC
@#s                 time in seconds since program start
#%                  percentage of time spent in GC since program start
#+#+# ms clock      wall-clock times for the phases of the GC
#+#/#/#+# ms cpu    CPU times for the phases of the GC
#->#-># MB          heap size at GC start, at GC end, and live heap
# MB goal           goal heap size
# MB stacks         estimated scannable stack size
# MB globals        scannable global size
# P                 number of processors used
The phases are stop-the-world (STW) sweep termination, concurrent
mark and scan, and STW mark termination. The CPU times
for mark/scan are broken down into assist time (GC performed in
line with allocation), background GC time, and idle GC time.
If the line ends with "(forced)", this GC was forced by a
runtime.GC() call and all phases are STW.
Example analysis
gc 12 @0.093s 2%: 0.026+0.43+0.003 ms clock, 0.21+0.40/0.47/0+0.030 ms cpu, 4->4->1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 8 P
gc 12: the 12th GC since the program started.
@0.093s: the time elapsed since the runtime started, measured when the markTermination phase of this GC finished.
2%: the fraction of total CPU time that GC mark work (the two STW pauses plus concurrent marking) has consumed since the program started.
0.026+0.43+0.003 ms clock: three wall-clock durations, in order: 0.026 ms for the STW sweep-termination pause, 0.43 ms for concurrent mark and scan, and 0.003 ms for the STW markTermination pause.
0.21+0.40/0.47/0+0.030 ms cpu: three CPU-time figures, in order: 0.21 ms of CPU time across all Ps during the sweep-termination STW (≈ 0.026 × 8); the triple 0.40/0.47/0 breaks down the concurrent-mark CPU time into 0.40 ms of mutator assists, 0.47 ms of dedicated plus fractional mark workers, and 0 ms of idle mark workers (their sum approaches 0.43 × 8, the wall time multiplied by the number of Ps); and 0.030 ms of CPU time across all Ps during the markTermination STW (≈ 0.003 × 8).
4->4->1 MB: three sizes, in order: 4 MB of heap_live before the mark phase started, 4 MB of heap_live before markTermination started, and 1 MB of marked (live) objects.
4 MB goal: the heap size that will trigger the next GC is 4 MB; with the default GOGC=100 the goal is roughly twice the live heap, rounded up and subject to a minimum heap size.
8 P: the number of Ps this GC ran with.
Additional notes
1. heap_live has to be understood together with Go's memory management. Depending on an object's size class, Go allocates spans with different numbers of pages; a span is the basic unit of page management, and each page is 8 KB. So a span will normally contain some free memory not yet occupied by objects.
Regardless of how Go uses it internally, that memory has already been handed to the program, and heap_live is the total size of all such spans.
So how much is actually in use? The size of the live objects found during the GC scan, which in the example above is the 1 MB figure.
2. There are three kinds of mark workers: dedicated, fractional, and idle, named for how devoted they are to marking. A dedicated worker is the most devoted: it keeps marking until it is preempted. An idle worker is the laziest: it does a little work and then yields control to other goroutines. They are all workers in the concurrent mark phase.
pprof
In everyday engineering use, pprof is usually started in its own goroutine via go func; here it is pulled out into a standalone test for analysis.
import (
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/ handlers on http.DefaultServeMux
	"testing"
)

func TestPprof(t *testing.T) {
	// ListenAndServe blocks while serving; report the error in case the port is taken.
	if err := http.ListenAndServe("0.0.0.0:8899", nil); err != nil {
		t.Fatal(err)
	}
}
Open http://127.0.0.1:8899/debug/pprof/ in a browser to see a summary page.
The heap entry is the one we care about. Opening it shows some aggregate
statistics followed by the memory cost of each allocation stack; apart from the first line, though, the information is quite scattered.
heap profile: 2(inuse_objects): 1568(inuse_bytes) [2(alloc_objects): 1568(alloc_bytes)] @ heap/1048576
The go tool pprof analyzer
➜ ~ go tool pprof -inuse_space http://127.0.0.1:8899/debug/pprof/heap
Fetching profile over HTTP from http://127.0.0.1:8899/debug/pprof/heap
Saved profile in /Users/shareit/pprof/pprof.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz
Type: inuse_space
Time: Jul 28, 2022 at 2:10pm (CST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof)
There are two main sample types, -alloc_space and -inuse_space. Prefer -inuse_space: analyzing the live state that is causing the problem is far more direct than analyzing historical allocation data, and a function with high alloc_space does not necessarily drive the process RSS up. We were lucky enough to reproduce the OOM offline, so we use -inuse_space directly.
This command drops into a gdb-like interactive shell; the top command lists the ten largest memory consumers:
(pprof) top
Showing nodes accounting for 2049.89kB, 100% of 2049.89kB total
Showing top 10 nodes out of 15
flat flat% sum% cum cum%
1537.69kB 75.01% 75.01% 1537.69kB 75.01% runtime.allocm
512.20kB 24.99% 100% 512.20kB 24.99% runtime.malg
0 0% 100% 1025.12kB 50.01% runtime.mcall
0 0% 100% 512.56kB 25.00% runtime.mstart
0 0% 100% 512.56kB 25.00% runtime.mstart0
0 0% 100% 512.56kB 25.00% runtime.mstart1
0 0% 100% 1537.69kB 75.01% runtime.newm
0 0% 100% 512.20kB 24.99% runtime.newproc.func1
0 0% 100% 512.20kB 24.99% runtime.newproc1
0 0% 100% 1025.12kB 50.01% runtime.park_m
flat is the inuse memory attributed to the current frame itself; cum is the cumulative inuse memory of the frame plus everything it calls (the frames above it in the stack).
Here none of the memory comes from our own code; it is all allocated by the runtime. When an entry's flat% is high, you can drill down further with the list command.
(pprof) list malg
Total: 2MB
ROUTINE ======================== runtime.malg in /Users/shareit/Workspace/work/service/go/src/runtime/proc.go
512.20kB 512.20kB (flat, cum) 24.99% of Total
. . 4029: execLock.unlock()
. . 4030:}
. . 4031:
. . 4032:// Allocate a new g, with a stack big enough for stacksize bytes.
. . 4033:func malg(stacksize int32) *g {
512.20kB 512.20kB 4034: newg := new(g)
. . 4035: if stacksize >= 0 {
. . 4036: stacksize = round2(_StackSystem + stacksize)
. . 4037: systemstack(func() {
. . 4038: newg.stack = stackalloc(uint32(stacksize))
. . 4039: })
This shows directly which line of which function allocated how much memory!
In a test environment where the profile can be exported as a file, it is even more convenient.
➜ ~ go tool pprof -inuse_space -cum -svg http://127.0.0.1:8899/debug/pprof/heap > ~/Desktop/heap_inuse.svg
Fetching profile over HTTP from http://127.0.0.1:8899/debug/pprof/heap
Saved profile in /Users/shareit/pprof/pprof.alloc_objects.alloc_space.inuse_objects.inuse_space.002.pb.gz
failed to execute dot. Is Graphviz installed? Error: exec: "dot": executable file not found in $PATH
The error means Graphviz is not installed; install it with brew:
➜ ~ brew install graphviz
Running brew update --auto-update...
==> Auto-updated Homebrew!
➜ ~ go tool pprof -inuse_space -cum -svg http://127.0.0.1:8899/debug/pprof/heap > ~/Desktop/heap_inuse.svg
Fetching profile over HTTP from http://127.0.0.1:8899/debug/pprof/heap
Saved profile in /Users/shareit/pprof/pprof.alloc_objects.alloc_space.inuse_objects.inuse_space.003.pb.gz
This time the SVG is produced successfully; open the image to view the call graph.
References
- GCTRACE:https://godoc.org/runtime
- Golang profiling:https://blog.golang.org/profiling-go-programs