Preface
- The core purpose of sync.Pool - read the source: it caches objects that will soon be reused again and relieves GC pressure
- Put and Get in sync.Pool - Put goes local private -> local shared; Get goes local private -> local shared -> remote shared -> victim -> New
- Think about the core use cases for sync.Pool - objects that are used frequently, live briefly, and are always initialized the same way, e.g. fmt
- Explore the victim cache introduced in Go 1.13 - understand how the victim cache mechanism works
Usage
package main

import (
	"fmt"
	"reflect"
	"sync"
)

func syncPool() {
	var sp = sync.Pool{
		// Tip: declare the pool's New function; a plain int is used here as a minimal example
		New: func() interface{} {
			return 100
		},
	}
	// Tip: take an object out of the pool
	data := sp.Get().(int)
	fmt.Println(data)
	// Tip: put the object back into the pool
	sp.Put(data)
	// Tip: inspect the pool via reflection; pass a pointer, since a sync.Pool must not be copied after first use
	fmt.Println(reflect.ValueOf(&sp))
}

func main() {
	syncPool()
}
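For a slightly more realistic feel, here is a hedged sketch of the pattern the preface alludes to (fmt pools its printer objects in much the same way). The render helper and its behaviour are made up for illustration; the important habit is resetting the object before putting it back so the next Get sees a clean one.

package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable *bytes.Buffer values; New always builds the
// same kind of object, which is exactly the case sync.Pool is designed for.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// render is a hypothetical helper that needs a short-lived scratch buffer.
func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset() // wipe the old contents before returning the buffer to the pool
		bufPool.Put(buf)
	}()
	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String()
}

func main() {
	fmt.Println(render("gopher"))
}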
Source Code
Get
// Get: local private -> local shared -> remote shared -> victim -> New
func (p *Pool) Get() interface{} {
	l, pid := p.pin() // pin the goroutine to its current P and fetch that P's poolLocal
	x := l.private    // 1. try the local private slot first
	l.private = nil
	if x == nil {
		x, _ = l.shared.popHead() // 2. private is empty, try the local shared queue
		if x == nil {
			x = p.getSlow(pid) // 3. nothing local: steal from other Ps, then try the victim cache (see below)
		}
	}
	runtime_procUnpin()
	if x == nil && p.New != nil {
		x = p.New() // 4. nothing anywhere, allocate a fresh object
	}
	return x
}

func (p *Pool) getSlow(pid int) interface{} {
	// See the comment in pin regarding ordering of the loads.
	size := runtime_LoadAcquintptr(&p.localSize) // load-acquire
	locals := p.local                            // load-consume
	// Try to steal one element from the other Ps' (remote) shared queues.
	for i := 0; i < int(size); i++ {
		l := indexLocal(locals, (pid+i+1)%int(size))
		if x, _ := l.shared.popTail(); x != nil {
			return x
		}
	}

	// Try the victim cache. We do this after attempting to steal
	// from all primary caches because we want objects in the
	// victim cache to age out if at all possible.
	size = atomic.LoadUintptr(&p.victimSize)
	if uintptr(pid) >= size {
		return nil
	}
	locals = p.victim // fall back to the victim cache: its private slot first, then its shared queues
	l := indexLocal(locals, pid)
	if x := l.private; x != nil {
		l.private = nil
		return x
	}
	for i := 0; i < int(size); i++ {
		l := indexLocal(locals, (pid+i)%int(size))
		if x, _ := l.shared.popTail(); x != nil {
			return x
		}
	}

	// Mark the victim cache as empty so future Gets don't bother with it.
	atomic.StoreUintptr(&p.victimSize, 0)

	return nil
}
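One consequence of step 4 above: if New is nil and every layer of the cache is empty, Get simply returns nil, so the caller has to be prepared for that. A minimal sketch:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var p sync.Pool // no New function

	if v := p.Get(); v == nil {
		fmt.Println("pool empty and New is nil, so Get returned nil")
	}

	p.Put(42)
	// The value usually comes straight back from the local private slot of the same P.
	fmt.Println(p.Get()) // typically 42; the pool gives no ordering guarantees
}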
Put
// Put adds x to the pool.
// Order: local private -> local shared.
func (p *Pool) Put(x interface{}) {
	if x == nil {
		return // putting nil is a no-op
	}
	l, _ := p.pin()
	if l.private == nil {
		l.private = x // 1. the local private slot is empty, store x there
		x = nil
	}
	if x != nil {
		l.shared.pushHead(x) // 2. otherwise push x onto the head of the local shared queue
	}
	runtime_procUnpin()
}
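A small sketch of that order as seen from user code: the first Put on a P fills the private slot, and a Get from the same goroutine (hence, usually, the same P) reads that slot first, so the same object typically comes straight back. This is observable behaviour, not an API guarantee:

package main

import (
	"fmt"
	"sync"
)

type obj struct{ id int }

func main() {
	var p sync.Pool

	a := &obj{id: 1}
	p.Put(a) // the private slot of the current P is empty, so a lands there

	b, ok := p.Get().(*obj) // same goroutine, usually the same P: private is checked first
	fmt.Println(ok, b == a) // usually "true true"
}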
Understanding
sync.Pool
sync.Pool is an object pool; the core of declaring one is its New function.
Get takes an object out of the pool; Put returns an object to it.
The pool's core purpose: cache objects that will be reused frequently and relieve GC pressure.
"local" corresponds to the P (processor) in the GMP model that the current goroutine is running on;
the other Ps are the "remote" ones. The abridged struct definitions below show how this per-P state is laid out.
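For reference, an abridged sketch of the structures behind these terms, copied loosely from the standard library's src/sync/pool.go around Go 1.13-1.17 (field details can differ between versions): each P owns one poolLocal whose private field is the "local private" slot and whose shared queue is the "local shared" queue; every other P's shared queue is what the article calls "remote shared".

// Abridged from src/sync/pool.go (roughly Go 1.13-1.17); details vary by version.
type Pool struct {
	noCopy noCopy

	local     unsafe.Pointer // per-P array of poolLocal, indexed by the P's id
	localSize uintptr        // size of the local array

	victim     unsafe.Pointer // the local array left over from the previous GC cycle
	victimSize uintptr        // size of the victim array

	// New optionally specifies a function to generate
	// a value when Get would otherwise return nil.
	New func() interface{}
}

type poolLocalInternal struct {
	private interface{} // Can be used only by the respective P.
	shared  poolChain   // Local P can pushHead/popHead; any P can popTail.
}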
Long-lived objects have no need for a pool; for them, just reason about when the GC will reclaim them.
New takes no parameters, so if each object has to be constructed differently every time, a pool is not a good fit; see the sketch below.
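Because New receives no arguments, any per-call configuration has to happen after Get rather than inside New; when every object really does need different construction, the pool stops paying off. A hedged sketch of the usual workaround (the conn type and its fields are made up for illustration):

package main

import (
	"fmt"
	"sync"
)

// conn is a hypothetical reusable object; only its "blank" shape is pooled.
type conn struct {
	addr string
	buf  []byte
}

var connPool = sync.Pool{
	// New cannot take parameters, so it only builds the identical blank object.
	New: func() interface{} { return &conn{buf: make([]byte, 0, 512)} },
}

func getConn(addr string) *conn {
	c := connPool.Get().(*conn)
	c.addr = addr // per-call configuration is applied after Get, not in New
	return c
}

func putConn(c *conn) {
	c.addr = ""       // reset per-call state...
	c.buf = c.buf[:0] // ...so the next user starts from a clean object
	connPool.Put(c)
}

func main() {
	c := getConn("10.0.0.1:6379")
	fmt.Println(c.addr, cap(c.buf))
	putConn(c)
}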
victim
victim is populated in poolCleanup, and poolCleanup is called at the start of every GC cycle.
It first drops victim and victimSize, then moves the data in local and localSize over to them, and finally resets local and localSize to empty.
func poolCleanup() {
	// This function is called with the world stopped, at the beginning of a garbage collection.
	// It must not allocate and probably should not call any runtime functions.

	// Because the world is stopped, no pool user can be in a
	// pinned section (in effect, this has all Ps pinned).

	// Drop victim caches from all pools.
	for _, p := range oldPools {
		p.victim = nil
		p.victimSize = 0
	}

	// Move primary cache to victim cache.
	for _, p := range allPools {
		p.victim = p.local
		p.victimSize = p.localSize
		p.local = nil
		p.localSize = 0
	}

	// The pools with non-empty primary caches now have non-empty
	// victim caches and no pools have primary caches.
	oldPools, allPools = allPools, nil
}
The victim cache is consumed in getSlow (locals = p.victim); taken as a whole, victim holds "stale" data and is not hit very often.
So what improvement does introducing victim bring to sync.Pool? The key point is still relieving GC pressure: instead of wiping local outright at every GC, the objects are first moved into victim and only dropped on the following cycle, which smooths out the allocation spikes and latency jitter that a full flush would otherwise cause.
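To watch the two-cycle behaviour from user code, here is a rough sketch (not a deterministic test: it assumes the Put and Get stay on the same P and that nothing else forces extra cleanups). One GC moves the cached value into the victim cache, where Get can still find it; a second GC drops the victim cache and Get falls back to New.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	p := sync.Pool{New: func() interface{} { return "from New" }}

	p.Put("cached")
	runtime.GC()         // poolCleanup: primary cache -> victim cache
	fmt.Println(p.Get()) // usually "cached", served out of the victim cache

	p.Put("cached")
	runtime.GC()
	runtime.GC()         // second cycle: the victim cache itself is dropped
	fmt.Println(p.Get()) // "from New"
}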