Solution
In an automated long-screenshot workflow, image stitching is a key step. Here I would like to recommend a solid image fusion service: vision-ui.
Service Deployment
The service can be deployed either from source code or as a Docker container. I will use the container approach as the example here. First you need a Docker environment, then pull the remote image:
docker pull brighthai/vision-ui:latest
Start the container. Assuming the image files you need to process live in /User/image (replace this with your actual path) and the service should listen on local port 9092, run the following command:
docker run -it -d --name container_vision -p 9092:9092 -v /User/image:/vision/capture brighthai/vision-ui
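Once the container is running, a quick way to confirm the service is reachable on port 9092 is a plain HTTP probe. This check is my own addition; it only tests connectivity and assumes nothing about vision-ui's API beyond the port mapping above:
import requests

try:
    # Any HTTP response, even a 404, proves the service is listening;
    # a connection error means the container is not up or the -p mapping is wrong.
    resp = requests.get("http://127.0.0.1:9092/", timeout=5)
    print("vision-ui reachable, HTTP status:", resp.status_code)
except requests.exceptions.ConnectionError:
    print("vision-ui is not reachable on 127.0.0.1:9092, check docker ps and the port mapping")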
Android
First, install the dependencies:
pip install requests
pip install pillow
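Before running the capture script, it can be worth a quick pre-flight check that adb actually sees the device. This check is my own addition and not part of the original workflow; it simply shells out to adb devices:
import subprocess

# List connected devices; expect at least one serial followed by "device".
out = subprocess.run(["adb", "devices"], capture_output=True, text=True).stdout
print(out)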
All we need is a handful of consecutive screenshots from the device to reconstruct the full page. Here we take a screenshot, swipe up by 25% of the screen height, take the next screenshot, and repeat; the resulting sequence of overlapping images is then sent to the service's merge endpoint to be fused. The implementation is as follows:
import os
import time
import requests
from PIL import Image


def get_long_screenshot(times):
    image_list = []
    # Local directory mounted into the vision-ui container (maps to /vision/capture)
    server_addr = '/Users/mafei/images'
    # Take one screenshot first to read the device resolution
    img = str(int(time.time() * 1000)) + ".png"
    os.system('adb shell screencap -p /sdcard/{0}'.format(img))
    os.system('adb pull /sdcard/{0}'.format(img))
    img_obj = Image.open(img)
    width = img_obj.width
    height = img_obj.height
    # Capture loop: screenshot, then swipe up by 25% of the screen height
    for i in range(times):
        img_name = str(int(time.time() * 1000)) + ".png"
        os.system('adb shell screencap -p /sdcard/{0}'.format(img_name))
        os.system('adb pull /sdcard/{0} {1}'.format(img_name, server_addr))
        image_list.append(img_name)
        x1 = int(width * 0.5)
        x2 = int(width * 0.5)
        y1 = int(height * 0.5)
        y2 = int(height * 0.25)
        os.system('adb shell input swipe {0} {1} {2} {3} 900'.format(x1, y1, x2, y2))
    # Ask the vision-ui service to merge the captured images
    image_merged = "image_merge_{0}.png".format(str(int(time.time() * 1000)))
    payload = {
        "image_list": image_list,
        "name": image_merged
    }
    headers = {
        'Content-Type': 'application/json; charset=UTF-8'
    }
    requests.request("post", url="http://127.0.0.1:9092/vision/merge",
                     timeout=10, json=payload, headers=headers)
    image_merged = server_addr + "/" + image_merged
    return image_merged


get_long_screenshot(3)
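The script above fires the merge request and ignores the response. A small variant I find useful (my addition, not taken from the vision-ui documentation) is to wrap the call, fail loudly on an HTTP error, and confirm the merged file actually appeared in the mounted directory; it builds only on the endpoint and directory layout already used above:
import os
import requests


def merge_images(image_list, out_name, server_addr='/Users/mafei/images'):
    # Same endpoint and payload as above, but with basic error handling.
    payload = {"image_list": image_list, "name": out_name}
    headers = {'Content-Type': 'application/json; charset=UTF-8'}
    resp = requests.post("http://127.0.0.1:9092/vision/merge",
                         timeout=10, json=payload, headers=headers)
    resp.raise_for_status()  # surface HTTP errors instead of silently ignoring them
    # The service writes the result into the mounted capture directory,
    # so the merged file should now exist on the host side.
    merged_path = os.path.join(server_addr, out_name)
    if not os.path.exists(merged_path):
        raise RuntimeError("merge returned OK but {0} was not created".format(merged_path))
    return merged_path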
iOS
The iOS setup is a bit more involved than the Android one: you first need WebDriverAgent installed on the iOS device. For a step-by-step guide to installing WebDriverAgent on a real device, see the article 《iOS真机安装WebDriverAgent图文详解》 (an illustrated guide to installing WebDriverAgent on a real iOS device).
Here we assume WDA is already installed on the device and WebDriverAgent has been started through Xcode or xcodebuild, so all that is left is to install a few more dependencies:
pip install requests
pip install pillow
pip install airtest
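Two notes of my own before handing control to Airtest. First, the script below also shells out to tidevice for screenshots; if it is not already on your machine from the WDA setup, pip install tidevice adds it. Second, it can help to confirm WebDriverAgent is actually reachable over HTTP. The address 169.254.66.2:8100 is the same WDA endpoint used in the script below, and /status is a standard WebDriverAgent endpoint:
import requests

# Any successful response here means WDA is up and listening.
resp = requests.get("http://169.254.66.2:8100/status", timeout=5)
print(resp.status_code)
print(resp.json())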
There are many ways to drive iOS automation; here we use the Airtest framework to drive the UI. The code is as follows:
import os
import time
import requests
from PIL import Image
from airtest.core.api import swipe, connect_device


def get_ios_long_screenshot(times):
    # Connect Airtest to the WDA endpoint on the device
    connect_device("ios:///169.254.66.2:8100")
    image_list = []
    # Local directory mounted into the vision-ui container (maps to /vision/capture)
    server_addr = '/Users/mafei/images'
    # Take one screenshot first to read the device resolution
    img = str(int(time.time() * 1000)) + ".png"
    os.system("tidevice screenshot {0}".format(img))
    img_obj = Image.open(img)
    width = img_obj.width
    height = img_obj.height
    # Capture loop: screenshot, then swipe up by 25% of the screen height
    for i in range(times):
        img_name = str(int(time.time() * 1000)) + ".png"
        os.system("tidevice screenshot {0}/{1}".format(server_addr, img_name))
        image_list.append(img_name)
        x1 = int(width * 0.5)
        x2 = int(width * 0.5)
        y1 = int(height * 0.5)
        y2 = int(height * 0.25)
        swipe((x1, y1), (x2, y2))
    # Ask the vision-ui service to merge the captured images
    image_merged = "image_merge_{0}.png".format(str(int(time.time() * 1000)))
    payload = {
        "image_list": image_list,
        "name": image_merged
    }
    headers = {
        'Content-Type': 'application/json; charset=UTF-8'
    }
    requests.request("post", url="http://127.0.0.1:9092/vision/merge",
                     timeout=10, json=payload, headers=headers)
    image_merged = server_addr + "/" + image_merged
    return image_merged


get_ios_long_screenshot(3)
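One bit of housekeeping worth mentioning (my addition, not part of the original scripts): both capture loops leave the intermediate screenshots in server_addr after the merge. If you also keep or return image_list, they can be removed once the merged file exists, for example:
import os


def clean_up(image_list, server_addr='/Users/mafei/images'):
    # Remove the intermediate screenshots once the merged image has been produced.
    for name in image_list:
        path = os.path.join(server_addr, name)
        if os.path.exists(path):
            os.remove(path)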
Results
Android
The process of swiping three times is as follows:
The final merged result is as follows:
iOS
The process of swiping three times is as follows:
The final merged result is as follows: