Android Window Management Analysis (4): Allocating, Passing, and Using Android View Drawing Memory


The previous article, "Android匿名共享内存(Ashmem)原理", analyzed anonymous shared memory. Its most important use is View drawing: Android views are displayed on screen frame by frame, and every frame takes up a certain amount of memory. Through the Ashmem mechanism the APP and SurfaceFlinger share the drawing data, which improves graphics performance. This article looks at how Android uses Ashmem to allocate that memory and draw into it:


Allocating View Drawing Memory


The earlier article on the window-adding flow described this: when a window is added, WMS creates a WindowState for the APP to identify the current window and to manage it, and at the same time asks SurfaceFlinger to allocate an abstract Layer. While allocating the Layer, SurfaceFlinger creates two key Binder objects that are used to fill in the Surface on the WMS side. One is sp<IBinder> handle, the handle that identifies each window, so that later, when WMS talks to SurfaceFlinger, the corresponding layer can be found easily. The other is sp<IGraphicBufferProducer> gbp, the key object for allocating shared memory; it is also a Binder object and is used to pass the handle of the shared memory. Note that at this stage only the objects are created; the memory for each frame is not actually allocated yet. That allocation does not happen until drawing really starts. First, the allocation flow:


  • Timing of the allocation: when is the memory allocated
  • Means of the allocation: how is it allocated
  • Way of transfer: how is it passed across processes


A Surface is abstracted as a canvas: whoever holds a Surface can draw. The underlying reason is that the Surface holds a piece of memory that can be drawn into, and the APP side requests this memory from SurfaceFlinger, when needed, through sp<IGraphicBufferProducer> gbp. So first look at how the APP side obtains this sp<IGraphicBufferProducer> gbp service proxy, and then at how it is used to request memory. When WMS asks SurfaceFlinger to fill in the Surface, it requests SurfaceFlinger to allocate this producer and hand its handle back:

sp<SurfaceControl> SurfaceComposerClient::createSurface(
        const String8& name,  uint32_t w, uint32_t h, PixelFormat format, uint32_t flags){
      sp<SurfaceControl> sur;
      ...
      if (mStatus == NO_ERROR) {
        sp<IBinder> handle;
        sp<IGraphicBufferProducer> gbp;
        // Key point 1: obtain the layer's key information: handle and gbp
        status_t err = mClient->createSurface(name, w, h, format, flags,
                &handle, &gbp);
        // Key point 2: create the SurfaceControl object from the returned layer information
        if (err == NO_ERROR) {
            sur = new SurfaceControl(this, handle, gbp);
        }
    }
    return sur;
}

Look at key point 1: an empty sp<IGraphicBufferProducer> gbp holder is created here, and SurfaceFlinger is asked to allocate and fill in its content. On receiving the request, SurfaceFlinger creates, for WMS, the Layer corresponding to the APP side, allocates an sp<IGraphicBufferProducer> gbp for it, and fills it into the Surface returned to the APP:

status_t SurfaceFlinger::createNormalLayer(const sp<Client>& client,
        const String8& name, uint32_t w, uint32_t h, uint32_t flags, PixelFormat& format,
        sp<IBinder>* handle, sp<IGraphicBufferProducer>* gbp, sp<Layer>* outLayer){
    ...
    // Key point 1
    *outLayer = new Layer(this, client, name, w, h, flags);
    status_t err = (*outLayer)->setBuffers(w, h, format, flags);
    // Key point 2
    if (err == NO_ERROR) {
        *handle = (*outLayer)->getHandle();
        *gbp = (*outLayer)->getProducer();
    }
  return err;
}
    void Layer::onFirstRef() {
        // Creates a custom BufferQueue for SurfaceFlingerConsumer to use
        sp<IGraphicBufferProducer> producer;
        sp<IGraphicBufferConsumer> consumer;
        BufferQueue::createBufferQueue(&producer, &consumer);
        // Create the producer and the consumer
        mProducer = new MonitoredProducer(producer, mFlinger);
        mSurfaceFlingerConsumer = new SurfaceFlingerConsumer(consumer, mTextureName);
        mSurfaceFlingerConsumer->setConsumerUsageBits(getEffectiveUsage(0));
        mSurfaceFlingerConsumer->setContentsChangedListener(this);
        mSurfaceFlingerConsumer->setName(mName);
    // Triple buffering or double buffering?
    #ifdef TARGET_DISABLE_TRIPLE_BUFFERING
    #warning "disabling triple buffering"
        mSurfaceFlingerConsumer->setDefaultMaxBufferCount(2);
    #else
        mSurfaceFlingerConsumer->setDefaultMaxBufferCount(3);
    #endif
        const sp<const DisplayDevice> hw(mFlinger->getDefaultDisplayDevice());
        updateTransformHint(hw);
    }

TARGET_DISABLE_TRIPLE_BUFFERING decides whether triple or double buffering is used: it sets the default MaxBufferCount to 2 or 3.

void BufferQueue::createBufferQueue(sp<IGraphicBufferProducer>* outProducer,
        sp<IGraphicBufferConsumer>* outConsumer,
        const sp<IGraphicBufferAlloc>& allocator) {
    sp<BufferQueueCore> core(new BufferQueueCore(allocator));
    sp<IGraphicBufferProducer> producer(new BufferQueueProducer(core));
    sp<IGraphicBufferConsumer> consumer(new BufferQueueConsumer(core));
    *outProducer = producer;
    *outConsumer = consumer;
}

The two functions above show the producer/consumer model very clearly: every Layer has its own producer and consumer. What sp<IGraphicBufferProducer> gbp actually corresponds to is a BufferQueueProducer, and BufferQueueProducer is a Binder object. On the server side it is:

class BufferQueueProducer : public BnGraphicBufferProducer,
                            private IBinder::DeathRecipient {}

On the APP side it is:

class BpGraphicBufferProducer : public BpInterface<IGraphicBufferProducer>{}

After the IGraphicBufferProducer Binder entity is created in SurfaceFlinger, it is packed into the Surface object and passed to the APP side over Binder; the APP side then recovers it through deserialization, as follows:

status_t Surface::readFromParcel(const Parcel* parcel, bool nameAlreadyRead) {
    if (parcel == nullptr) return BAD_VALUE;
    status_t res = OK;
    if (!nameAlreadyRead) {
        name = readMaybeEmptyString16(parcel);
        // Discard this for now
        int isSingleBuffered;
        res = parcel->readInt32(&isSingleBuffered);
        if (res != OK) {
            return res;
        }
    }
    sp<IBinder> binder;
    res = parcel->readStrongBinder(&binder);
    if (res != OK) return res;
    // interface_cast converts the binder into a BpGraphicBufferProducer
    graphicBufferProducer = interface_cast<IGraphicBufferProducer>(binder);
    return OK;
}

From here on, the APP side holds BpGraphicBufferProducer, the handle it uses to request memory. It really comes into play at the first draw. For ease of understanding, start with software drawing, drawSoftware in ViewRootImpl:

    private boolean drawSoftware(Surface surface, AttachInfo attachInfo, int xoff, int yoff,
            boolean scalingRequired, Rect dirty) {
        final Canvas canvas;
        try {
            final int left = dirty.left;
            final int top = dirty.top;
            final int right = dirty.right;
            final int bottom = dirty.bottom;
            // Key point 1: obtain the drawing memory
            canvas = mSurface.lockCanvas(dirty);
            ...
        }
        try {
            // Key point 2: draw
            mView.draw(canvas);
            ...
        } finally {
            try {
                // Key point 3: drawing done, notify SurfaceFlinger to compose and update the display
                surface.unlockCanvasAndPost(canvas);
            } catch (IllegalArgumentException e) {
            }
        }
        ...
    }

Look at key point 1 first. This is exactly where the memory gets allocated; it goes straight down into the native layer:

static jlong nativeLockCanvas(JNIEnv* env, jclass clazz,
        jlong nativeObject, jobject canvasObj, jobject dirtyRectObj) {
    sp<Surface> surface(reinterpret_cast<Surface *>(nativeObject));
    ...
    status_t err = surface->lock(&outBuffer, dirtyRectPtr);
    ...
    sp<Surface> lockedSurface(surface);
    lockedSurface->incStrong(&sRefBaseOwner);
    return (jlong) lockedSurface.get();
}

Surface::lock in Surface.cpp goes on to call dequeueBuffer to request the memory:

int Surface::dequeueBuffer(android_native_buffer_t** buffer, int* fenceFd) {
    ...
    int buf = -1;
    sp<Fence> fence;
    nsecs_t now = systemTime();
    // Request a buffer and get back its slot identifier
    status_t result = mGraphicBufferProducer->dequeueBuffer(&buf, &fence,
            reqWidth, reqHeight, reqFormat, reqUsage);
    ...
    if ((result & IGraphicBufferProducer::BUFFER_NEEDS_REALLOCATION) || gbuf == 0) {
    // The memory was allocated in the SurfaceFlinger process; by calling requestBuffer, Surface maps the graphic buffer into its own process
        result = mGraphicBufferProducer->requestBuffer(buf, &gbuf);
   ...
}

In the end, BpGraphicBufferProducer::dequeueBuffer is called to ask the server side to allocate the memory. This is where anonymous shared memory comes in: in Linux everything is a file, and shared memory is also treated as a file. After a successful allocation, the descriptor (fd) of the tmpfs-backed file has to be passed across processes. First, the request logic:

    class BpGraphicBufferProducer : public BpInterface<IGraphicBufferProducer>{
    virtual status_t dequeueBuffer(int *buf, sp<Fence>* fence, bool async,
            uint32_t w, uint32_t h, uint32_t format, uint32_t usage) {
        Parcel data, reply;
        data.writeInterfaceToken(IGraphicBufferProducer::getInterfaceDescriptor());
        data.writeInt32(async);
        data.writeInt32(w);
        data.writeInt32(h);
        data.writeInt32(format);
        data.writeInt32(usage);
        // Pack the parameters of the requested buffer into data via BpBinder and send them to the BBinder
        status_t result = remote()->transact(DEQUEUE_BUFFER, data, &reply);
        if (result != NO_ERROR) {
            return result;
        }
        // The BBinder returns an int to the BpBinder, not the buffer memory itself
        *buf = reply.readInt32();
        bool nonNull = reply.readInt32();
        if (nonNull) {
            *fence = new Fence();
            reply.read(**fence);
        }
        result = reply.readInt32();
        return result;
    }
}

On the client side, that is, the BpGraphicBufferProducer side, the essential thing returned by DEQUEUE_BUFFER is just *buf = reply.readInt32(). It is actually an index into the mSlots array; the BufferQueue keeps an array of the same size, and the two correspond slot by slot. The server side handles the transaction as follows:

status_t BnGraphicBufferProducer::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch (code) {
        case DEQUEUE_BUFFER: {
            CHECK_INTERFACE(IGraphicBufferProducer, data, reply);
            bool async      = data.readInt32();
            uint32_t w      = data.readInt32();
            uint32_t h      = data.readInt32();
            uint32_t format = data.readInt32();
            uint32_t usage  = data.readInt32();
            int buf;
            sp<Fence> fence;
            // Call BufferQueueProducer's dequeueBuffer,
            // which likewise returns buf as a plain int
            int result = dequeueBuffer(&buf, &fence, async, w, h, format, usage);
            // Write buf and fence into the parcel and send them back to the client over Binder
            reply->writeInt32(buf);
            reply->writeInt32(fence != NULL);
            if (fence != NULL) {
                reply->write(*fence);
            }
            reply->writeInt32(result);
            return NO_ERROR;
        }
        ...
    }
}

As can be seen, the BnGraphicBufferProducer side reads out the width, height, and format, and then uses BufferQueueProducer::dequeueBuffer to obtain the memory. The memory may already have been allocated or not yet; if not, new memory is allocated on the spot. Each surface can be backed by several buffers (apparently 64 slots on Android 6.0):

status_t BufferQueueProducer::dequeueBuffer(int *outSlot,
        sp<android::Fence> *outFence, uint32_t width, uint32_t height,
        PixelFormat format, uint32_t usage) {
    ...
        sp<GraphicBuffer> graphicBuffer(mCore->mAllocator->createGraphicBuffer(
                width, height, format, usage,
                {mConsumerName.string(), mConsumerName.size()}, &error));
    ...
}

mCore is just the BufferQueueCore seen above, with mCore->mAllocator = new GraphicBufferAlloc(); in the end the GraphicBufferAlloc object is used to allocate the shared memory:

sp<GraphicBuffer> GraphicBufferAlloc::createGraphicBuffer(uint32_t width,
        uint32_t height, PixelFormat format, uint32_t usage,
        std::string requestorName, status_t* error) {
    // Simply new a GraphicBuffer
    sp<GraphicBuffer> graphicBuffer(new GraphicBuffer(
            width, height, format, usage, std::move(requestorName)));
    status_t err = graphicBuffer->initCheck();
    return graphicBuffer;
}

As shown above, the graphic memory is created simply by new-ing a GraphicBuffer:

GraphicBuffer::GraphicBuffer(uint32_t inWidth, uint32_t inHeight,
        PixelFormat inFormat, uint32_t inUsage, std::string requestorName)
    : BASE(), mOwner(ownData), mBufferMapper(GraphicBufferMapper::get()),
      mInitCheck(NO_ERROR), mId(getUniqueId()), mGenerationNumber(0){
    ...
    handle = NULL;
    mInitCheck = initSize(inWidth, inHeight, inFormat, inUsage,
            std::move(requestorName));
}
status_t GraphicBuffer::initSize(uint32_t inWidth, uint32_t inHeight,
        PixelFormat inFormat, uint32_t inUsage, std::string requestorName)
{
    GraphicBufferAllocator& allocator = GraphicBufferAllocator::get();
    uint32_t outStride = 0;
    // Request the memory allocation
    status_t err = allocator.allocate(inWidth, inHeight, inFormat, inUsage,
            &handle, &outStride, mId, std::move(requestorName));
    if (err == NO_ERROR) {
        width = static_cast<int>(inWidth);
        height = static_cast<int>(inHeight);
        format = inFormat;
        usage = static_cast<int>(inUsage);
        stride = static_cast<int>(outStride);
    }
    return err;
}
 status_t GraphicBufferAllocator::allocate(uint32_t width, uint32_t height,
        PixelFormat format, uint32_t usage, buffer_handle_t* handle,
        uint32_t* stride, uint64_t graphicBufferId, std::string requestorName)
{
     ...
    auto descriptor = mDevice->createDescriptor();
    auto error = descriptor->setDimensions(width, height);
    error = descriptor->setFormat(static_cast<android_pixel_format_t>(format));
    error = descriptor->setProducerUsage(
            static_cast<gralloc1_producer_usage_t>(usage));
    error = descriptor->setConsumerUsage(
            static_cast<gralloc1_consumer_usage_t>(usage));
    // mDevice here is the abstracted hardware device
    error = mDevice->allocate(descriptor, graphicBufferId, handle);
    error = mDevice->getStride(*handle, stride);
    ...
    return NO_ERROR;
}

The mDevice in the code above is the hardware abstraction layer device obtained through hw_get_module and gralloc1_open. hw_get_module loads the HAL module, which pulls in the corresponding .so file, gralloc.default.so (implemented in hardware/libhardware/modules/gralloc.cpp), and finally hooks the device's function table in. The function we care about here is allocate. Looking first at the allocation of an ordinary graphic buffer: it ends up calling gralloc_alloc_buffer(), which allocates from anonymous shared memory. The earlier article "Android匿名共享内存(Ashmem)原理" analyzed how Android communicates through anonymous shared memory; here it is simply put to use:

static int gralloc_alloc_buffer(alloc_device_t* dev,
        size_t size, int usage, buffer_handle_t* pHandle)
{
    int err = 0;
    int fd = -1;
    size = roundUpToPageSize(size);
    // Create the shared memory region and give it a name and size
    fd = ashmem_create_region("gralloc-buffer", size);
    if (fd < 0) {
        err = -errno;
    }
    if (err == 0) {
        private_handle_t* hnd = new private_handle_t(fd, size, 0);
        gralloc_module_t* module = reinterpret_cast<gralloc_module_t*>(
                dev->common.module);
        // mmap the memory into the current process
        err = mapBuffer(module, hnd);
        if (err == 0) {
            *pHandle = hnd;
        }
    }
    return err;
}

mapBuffer then goes through the ashmem driver: the region just created is backed by a file in tmpfs, and the mmap call opens up virtual memory for it:

int mapBuffer(gralloc_module_t const* module,
            private_handle_t* hnd)
    {
        void* vaddr; 
        // vaddr is not actually used by the caller here
        return gralloc_map(module, hnd, &vaddr);
    }
static int gralloc_map(gralloc_module_t const* module,
        buffer_handle_t handle,
        void** vaddr)
{
    private_handle_t* hnd = (private_handle_t*)handle;
    if (!(hnd->flags & private_handle_t::PRIV_FLAGS_FRAMEBUFFER)) {
        size_t size = hnd->size;
        void* mappedAddress = mmap(0, size,
                PROT_READ|PROT_WRITE, MAP_SHARED, hnd->fd, 0);
        if (mappedAddress == MAP_FAILED) {
            return -errno;
        }
        hnd->base = intptr_t(mappedAddress) + hnd->offset;
    }
    *vaddr = (void*)hnd->base;
    return 0;
}
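
As a side note on where mDevice comes from: it is obtained by loading the gralloc HAL. The sketch below uses the older gralloc0 entry points (hw_get_module plus gralloc_open) rather than the gralloc1 wrapper mentioned above, so treat it as a simplified illustration of how the module is loaded and how its alloc function, which ends up in gralloc_alloc_buffer(), is reached:

#include <hardware/hardware.h>
#include <hardware/gralloc.h>

// Simplified illustration (not framework code): load the gralloc HAL module
// and allocate a single buffer through its alloc device.
static buffer_handle_t allocateOneBuffer(int w, int h) {
    const hw_module_t* module = nullptr;
    alloc_device_t* device = nullptr;

    // Loads gralloc.<platform>.so, falling back to gralloc.default.so
    if (hw_get_module(GRALLOC_HARDWARE_MODULE_ID, &module) != 0) {
        return nullptr;
    }
    // Opens the alloc device; its function table is what mDevice wraps
    if (gralloc_open(module, &device) != 0) {
        return nullptr;
    }

    buffer_handle_t handle = nullptr;
    int stride = 0;
    // For an ashmem-backed software buffer this path reaches gralloc_alloc_buffer()
    device->alloc(device, w, h, HAL_PIXEL_FORMAT_RGBA_8888,
                  GRALLOC_USAGE_SW_READ_OFTEN | GRALLOC_USAGE_SW_WRITE_OFTEN,
                  &handle, &stride);
    return handle;
}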


Passing View Drawing Memory


After the allocation, BpGraphicBufferProducer::requestBuffer is used to request that the shared memory be mapped into the current process:

virtual status_t requestBuffer(int bufferIdx, sp<GraphicBuffer>* buf) {
    Parcel data, reply;
    data.writeInterfaceToken(IGraphicBufferProducer::getInterfaceDescriptor());
    data.writeInt32(bufferIdx);
    status_t result =remote()->transact(REQUEST_BUFFER, data, &reply);
    if (result != NO_ERROR) {
        return result;
    }
    bool nonNull = reply.readInt32();
    if (nonNull) {
        *buf = new GraphicBuffer();
        reply.read(**buf);
    }
    result = reply.readInt32();
    return result;
}

The private_handle_t object abstracts the graphic buffer; it stores the fd of the tmpfs file backing the shared memory. When the GraphicBuffer object is serialized, this fd is carried to the App process over Binder; once the APP side has the fd, it can mmap the shared memory into its own address space and then draw into it. When the APP side deserializes the GraphicBuffer, the shared memory is mmap-ed into the current process space:

status_t Parcel::read(Flattenable& val) const  
{  
    // size  
    const size_t len = this->readInt32();  
    const size_t fd_count = this->readInt32();  
    // payload  
    void const* buf = this->readInplace(PAD_SIZE(len));  
    if (buf == NULL)  
        return BAD_VALUE;  
    int* fds = NULL;  
    if (fd_count) {  
        fds = new int[fd_count];  
    }  
    status_t err = NO_ERROR;  
    for (size_t i=0 ; i<fd_count && err==NO_ERROR ; i++) {  
        fds[i] = dup(this->readFileDescriptor());  
        if (fds[i] < 0) err = BAD_VALUE;  
    }  
    if (err == NO_ERROR) {  
        err = val.unflatten(buf, len, fds, fd_count);  
    }  
    if (fd_count) {  
        delete [] fds;  
    }  
    return err;  
}  

which in turn calls GraphicBuffer::unflatten:

status_t GraphicBuffer::unflatten(void const* buffer, size_t size,
        int fds[], size_t count)
{
   ...
    mOwner = ownHandle;
    // Map the shared memory into the current address space
    if (handle != 0) {
        status_t err = mBufferMapper.registerBuffer(handle);
    }
    return NO_ERROR;
}

The mBufferMapper.registerBuffer call corresponds to gralloc_register_buffer in the gralloc module:

struct private_module_t HAL_MODULE_INFO_SYM = {
    .base = {
        .common = {
            .tag = HARDWARE_MODULE_TAG,
            .version_major = 1,
            .version_minor = 0,
            .id = GRALLOC_HARDWARE_MODULE_ID,
            .name = "Graphics Memory Allocator Module",
            .author = "The Android Open Source Project",
            .methods = &gralloc_module_methods
        },
        .registerBuffer = gralloc_register_buffer,
        .unregisterBuffer = gralloc_unregister_buffer,
        .lock = gralloc_lock,
        .unlock = gralloc_unlock,
    },
    .framebuffer = 0,
    .flags = 0,
    .numBuffers = 0,
    .bufferMask = 0,
    .lock = PTHREAD_MUTEX_INITIALIZER,
    .currentBuffer = 0,
};

In the end gralloc_register_buffer is called, which really maps the tmpfs file into the process address space through mmap:

static int gralloc_register_buffer(gralloc_module_t const* module,
                                   buffer_handle_t handle)
{
    ...
    if (cb->ashmemSize > 0 && cb->mappedPid != getpid()) {
        void *vaddr;
        // mmap
        int err = map_buffer(cb, &vaddr);
        cb->mappedPid = getpid();
    }
    return 0;
}

Here, at last, the descriptor of the tmpfs file, stored in cb->fd, is put to use:

static int map_buffer(cb_handle_t *cb, void **vaddr)
{
    if (cb->fd < 0 || cb->ashmemSize <= 0) {
        return -EINVAL;
    }
    void *addr = mmap(0, cb->ashmemSize, PROT_READ | PROT_WRITE,
                      MAP_SHARED, cb->fd, 0);
    cb->ashmemBase = intptr_t(addr);
    cb->ashmemBasePid = getpid();
    *vaddr = addr;
    return 0;
}

At this point the memory handoff has succeeded, and the App side can use this memory for drawing.
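
Stripped of the gralloc plumbing, the handoff boils down to creating an ashmem region on one side, sending its fd through a Parcel, and mmap-ing it on the other side. The following minimal sketch illustrates just that mechanism; the function names are made up for illustration and are not framework code:

#include <sys/mman.h>
#include <unistd.h>
#include <cutils/ashmem.h>
#include <binder/Parcel.h>

// Sender side (conceptually the SurfaceFlinger process): create the region and
// write a dup-ed fd into the Parcel; the Binder driver translates it for the receiver.
static void writeSharedBuffer(android::Parcel* reply, size_t size) {
    int fd = ashmem_create_region("example-buffer", size);
    reply->writeDupFileDescriptor(fd);
    close(fd);  // the Parcel keeps its own duplicate
}

// Receiver side (conceptually the App process): read the translated fd and map the
// same physical pages into this process, just like gralloc_register_buffer does.
static void* readSharedBuffer(const android::Parcel& reply, size_t size) {
    int fd = dup(reply.readFileDescriptor());
    return mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}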


Using View Drawing Memory


As for using the memory: back in the Surface::lock function seen earlier, after deserialization, once the memory address is available it is wrapped into an ANativeWindow_Buffer and returned to the upper layers:

status_t Surface::lock(
        ANativeWindow_Buffer* outBuffer, ARect* inOutDirtyBounds)
{
     ...
        void* vaddr;
        // lock to obtain the address
        status_t res = backBuffer->lock(
                GRALLOC_USAGE_SW_READ_OFTEN | GRALLOC_USAGE_SW_WRITE_OFTEN,
                newDirtyRegion.bounds(), &vaddr);
        if (res != 0) {
            err = INVALID_OPERATION;
        } else {
            mLockedBuffer = backBuffer;
            outBuffer->width  = backBuffer->width;
            outBuffer->height = backBuffer->height;
            outBuffer->stride = backBuffer->stride;
            outBuffer->format = backBuffer->format;
            // Key point: set the virtual memory address
            outBuffer->bits   = vaddr;
        }
    }
    return err;
}

The ANativeWindow_Buffer structure is shown below; its bits field corresponds to the virtual memory address:

typedef struct ANativeWindow_Buffer {
    // The number of pixels that are shown horizontally.
    int32_t width;
    // The number of pixels that are shown vertically.
    int32_t height;
    // The number of *pixels* that a line in the buffer takes in
    // memory.  This may be >= width.
    int32_t stride;
    // The format of the buffer.  One of WINDOW_FORMAT_*
    int32_t format;
    // The actual bits.
    void* bits;
    // Do not touch.
    uint32_t reserved[6];
} ANativeWindow_Buffer;
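
Since stride may be larger than width, a pixel at (x, y) has to be addressed with the stride, not the width. A small illustrative helper (assuming a 4-byte-per-pixel format such as WINDOW_FORMAT_RGBA_8888, not framework code):

#include <stdint.h>
#include <android/native_window.h>

// Illustrative helper: write one pixel into a locked buffer.
// stride is counted in pixels and may be >= width.
static inline void putPixel(const ANativeWindow_Buffer& buf,
                            int32_t x, int32_t y, uint32_t color) {
    uint32_t* pixels = static_cast<uint32_t*>(buf.bits);
    pixels[y * buf.stride + x] = color;
}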

To see how it is used, look at how the Canvas is set up for drawing:

static void nativeLockCanvas(JNIEnv* env, jclass clazz,
        jint nativeObject, jobject canvasObj, jobject dirtyRectObj) {
    sp<Surface> surface(reinterpret_cast<Surface *>(nativeObject));
    ...
    status_t err = surface->lock(&outBuffer, &dirtyBounds);
    ...
    // SkBitmap
    SkBitmap bitmap;
    ssize_t bpr = outBuffer.stride * bytesPerPixel(outBuffer.format);
    // Configure the SkBitmap
    bitmap.setConfig(convertPixelFormat(outBuffer.format), outBuffer.width, outBuffer.height, bpr);
    // Set the SkBitmap's opacity based on the pixel format
    if (outBuffer.format == PIXEL_FORMAT_RGBX_8888) {
        bitmap.setIsOpaque(true);
    }
    // Point the SkBitmap at the shared memory
    if (outBuffer.width > 0 && outBuffer.height > 0) {
        bitmap.setPixels(outBuffer.bits);
    } else {
        // be safe with an empty bitmap.
        bitmap.setPixels(NULL);
    }
    // Create the native SkCanvas
    SkCanvas* nativeCanvas = SkNEW_ARGS(SkCanvas, (bitmap));
    swapCanvasPtr(env, canvasObj, nativeCanvas);
   ...
}

For 2D drawing, the Skia library fills the shared memory behind the Bitmap, and that completes the drawing; this article does not go into Skia, so analyze it yourself if you are interested. When drawing is done, unlockCanvasAndPost notifies the SurfaceFlinger service to compose the layers.
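
The same lock → draw into bits → post cycle is also exposed publicly through the NDK. The sketch below assumes an ANativeWindow* obtained elsewhere (for example via ANativeWindow_fromSurface) and simply fills the dequeued buffer with a solid color; it illustrates the flow described above rather than the framework's own drawing code:

#include <android/native_window.h>

// Fill one frame with a solid color and hand it back for composition.
// This mirrors lockCanvas / draw / unlockCanvasAndPost at the native level.
static void drawSolidFrame(ANativeWindow* window, uint32_t color) {
    ANativeWindow_setBuffersGeometry(window, 0, 0, WINDOW_FORMAT_RGBA_8888);

    ANativeWindow_Buffer buffer;
    // Dequeue a buffer and map it into this process (the dequeueBuffer/requestBuffer path above)
    if (ANativeWindow_lock(window, &buffer, nullptr) != 0) {
        return;
    }
    uint32_t* pixels = static_cast<uint32_t*>(buffer.bits);
    for (int32_t y = 0; y < buffer.height; y++) {
        for (int32_t x = 0; x < buffer.width; x++) {
            pixels[y * buffer.stride + x] = color;  // address rows by stride, not width
        }
    }
    // Queue the buffer back so SurfaceFlinger can compose it (the unlockCanvasAndPost path)
    ANativeWindow_unlockAndPost(window);
}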


How Android View Partial Redraw Works


Take a TextView as an example: when its content changes, a redraw is triggered. Suppose the current window also contains other Views; in that case only the TextView and its parent Views may be redrawn while the other Views are not. Why does that work, and how is the UI data passed to SurfaceFlinger kept complete? The answer is that lockCanvas performs a data copy by default: the UI data drawn previously is copied into the newly dequeued buffer, and the new drawing starts from that copy, i.e. only the dirty region is redrawn on top of the previous frame:

status_t Surface::lock(
        ANativeWindow_Buffer* outBuffer, ARect* inOutDirtyBounds)
{
    // Dequeue a buffer
    status_t err = dequeueBuffer(&out, &fenceFd);
    ALOGE_IF(err, "dequeueBuffer failed (%s)", strerror(-err));
    if (err == NO_ERROR) {
        // Copy back the previous frame if possible
        sp<GraphicBuffer> backBuffer(GraphicBuffer::getSelf(out));
        const Rect bounds(backBuffer->width, backBuffer->height);
            ...
        const sp<GraphicBuffer>& frontBuffer(mPostedBuffer);
        const bool canCopyBack = (frontBuffer != 0 &&
                backBuffer->width  == frontBuffer->width &&
                backBuffer->height == frontBuffer->height &&
                backBuffer->format == frontBuffer->format);
        // Can the front buffer be copied into the current back buffer? Only if both have the same geometry and format; otherwise skip the copy
            if (canCopyBack) {
            // copy the area that is invalid and not repainted this round
            const Region copyback(mDirtyRegion.subtract(newDirtyRegion));
            if (!copyback.isEmpty()) {
                // Copy
                copyBlt(backBuffer, frontBuffer, copyback, &fenceFd);
            }
        } else {
            // If the copy is not possible, the whole buffer has to be redrawn
            newDirtyRegion.set(bounds);
            mDirtyRegion.clear();
            Mutex::Autolock lock(mMutex);
            for (size_t i=0 ; i<NUM_BUFFER_SLOTS ; i++) {
                mSlots[i].dirtyRegion.clear();
            }
        }
  ....
}

So the memory obtained through lockCanvas is either pre-filled with the UI data drawn last time or redrawn in full. If it is pre-filled, this pass only needs to draw the views touching the dirty region. That is the principle behind Android's partial redraw.


Summary


Android View drawing is built on anonymous shared memory: by sharing memory, the APP side and SurfaceFlinger avoid copying the View drawing data, which improves the system's view-processing capability.

