Background:
A user reported that around 00:30 in the early morning of August 4, the average RT of their production service spiked from 100 ms to roughly 1000 ms with heavy jitter, as shown in Figure 1-1 (the green line is the RT for the same time window on August 3; the blue line is the RT after the anomaly began).
Figure 1-1
A new business launch was scheduled for 10:00 on August 5. The customer had spent about 10 hours investigating on their own without identifying the cause, suspected the underlying classic network of Alibaba Cloud, and asked our Expert Service team to step in immediately. We took over the case around 18:00 on August 4.
1. Pre-investigation preparation
1.1 Collect information on the underlying architecture: the customer runs a self-built Kubernetes cluster on the classic network, connected to the database resources in a VPC via ClassicLink.
1.2 Collect the application architecture: behind the SLB are nginx + php pods; some requests make a second call into a Java service; on a cache hit Redis returns the data directly, and on a miss the database is queried and the result is written back to Redis.
1.3 Collect application log information
performancelog:
tail -f /home/*/20200806.log | awk -F "," '{print $0,$(NF-2)}' | awk -F ":" '$NF>1000 {print $0}'
(a small awk trick: pull out the field that carries the elapsed time, making it easy to filter out entries over 1000 ms)
{"log_type":"performancelog","log_version":"1.0.0","log_time":"2020-08-06 12:03:10.582","product_line":"php","app_name":"***","zkt_trace_id":"M-******","server_name":"ho***-2spjt","server_ip":"172.20.58.54","method_name":"bmw****cket","env":"pro","elapsed_time":3866,"error_code":0,"business_code":0} "elapsed_time":3866
{"log_type":"performancelog","log_version":"1.0.0","log_time":"2020-08-06 12:03:11.658","product_line":"php","app_name":"***","zkt_trace_id":"H11***3-****","server_name":"h***-2spjt","server_ip":"172.20.58.54","method_name":"connect_db_default","env":"pro","elapsed_time":2001.8649101257,"error_code":0,"business_code":0} "elapsed_time":2001.8649101257
{"log_type":"performancelog","log_version":"1.0.0","log_time":"2020-08-06 12:03:11.659","product_line":"php","app_name":"***","zkt_trace_id":"H11***3-****","server_name":"h***-2spjt","server_ip":"172.20.58.54","method_name":"mod****t_one","env":"pro","elapsed_time":2002,"error_code":0,"business_code":0} "elapsed_time":2002
{"log_type":"performancelog","log_version":"1.0.0","log_time":"2020-08-06 12:03:11.673","product_line":"php","app_name":"***","zkt_trace_id":"H11***3-****","server_name":"h****-2spjt","server_ip":"172.20.58.54","method_name":"b****da","env":"pro","elapsed_time":2017,"error_code":0,"business_code":0} "elapsed_time":2017
errorlog:
[04-Aug-2020 22:59:25 Etc/GMT-8] PHP Warning: PDO::__construct(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution in /home/wwwroot/h*l/e*m/l*y/m*_mysql.php on line 80
[04-Aug-2020 22:59:25 Etc/GMT-8] PHP Fatal error: Uncaught exception 'Exception' with message '{"error":"{\"error\":\"\\nClass MULTI_DB Error (readonly):\\nconnection=readonly dbhost=r*****2.mysql.rds.aliyuncs.com dbuser=zk****ly connect_start_time:1596553155.6414 \\nSQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution \\n\\n\\n Client Info:\\nIP: \\nUA: Mozilla\\\/5.0 (iPhone; CPU iPhone OS 13_3 like Mac OS X) AppleWebKit\\\/605.1.15 (KHTML, like Gecko) Mobile\\\/15E148 MicroMessenger\\\/7.0.14(0x17000e2a) NetType\\\/WIFI Language\\\/zh_CN\\n URL: zk****.cn\\\/restful\\\/bmw_auth\\\/bind_zhida?appid=wx3*****7c6&app_open_id=oTAh******s&zhida_open_id=oVQ*****z9c&v=2\\n\\n\\n\",\"class_name\":\"MODEL_BRAND\",\"name\":\"get_by_any_appid\",\"parameters\":[\"wx3****c6\"]}", "class_name":"BMW_AUTH","name":"bind_zhida","parameters":["wx3*****8c6","oTAho*****7HOs","oVQjL*****z9c"]}' in /home/wwwroot/ho in /home/wwwroot/*Wrapper.php on line 127
[04-Aug-2020 23:09:51 Etc/GMT-8] PHP Warning: Redis::connect(): php_network_getaddresses: getaddrinfo failed: Name or service not known in /home/wwwroot/*kv.model.php on line 50
[04-Aug-2020 23:09:51 Etc/GMT-8] PHP Warning: Redis::connect(): connect() failed: php_network_getaddresses: getaddrinfo failed: Name or service not known in /home/wwwroot/*kv.model.php on line 50
[04-Aug-2020 23:07:50 Etc/GMT-8] PHP Warning: Redis::connect(): connect() failed: Connection timed out in /home/wwwroot/*kv.model.php on line 50
1.4 Collect login details (public login IP, port, security-group rules, username/password or key, etc.)
2. Starting the investigation: packet capture and analysis
2.1 Check the underlying network and physical hosts for anomalies or changes
The anomaly started at a specific point in time and the customer stated that nothing had changed on their side, so they asked us to thoroughly check the underlying network and the physical hosts. Since the classic network has been deprecated for years, we advised the customer to migrate to a VPC as soon as possible before going further. We also confirmed with the relevant development teams that the physical hosts, the classic network, and the public DNS servers had no anomalies and no changes.
2.2 Capture and analyze DNS packets
After several rounds of packet capture and analysis, we did not see any abnormal DNS traffic (which was rather strange): every query received a response, and nothing could be matched to the customer's business logs. An example capture is shown below.
$ capinfos dns.pcap
File name:           dns.pcap
File type:           Wireshark/tcpdump/... - pcap
File encapsulation:  Ethernet
File timestamp precision:  microseconds (6)
Packet size limit:   file hdr: 262144 bytes
Number of packets:   292 k
File size:           159 MB
Data size:           155 MB
Capture duration:    527.354511 seconds
First packet time:   2020-08-05 11:58:54.870058
Last packet time:    2020-08-05 12:07:42.224569
Data byte rate:      294 kBps
Data bit rate:       2,353 kbps
Average packet size: 529.87 bytes
Average packet rate: 555 packets/s
SHA256:              6d6670f7471fb3b9b1b874167a9052e8fefced68d7c7da7851222aa702c23b63
RIPEMD160:           0a48f42c18c00ab47518619db6aa05755d962d88
SHA1:                818e65d0fd68c7597e279700f883803fbe13fce9
Strict time order:   False
Number of interfaces in file: 1
Interface #0 info:
                     Encapsulation = Ethernet (1 - ether)
                     Capture length = 262144
                     Time precision = microseconds (6)
                     Time ticks per second = 1000000
                     Number of stat entries = 0
                     Number of packets = 292800

We captured DNS traffic for about 9 minutes, roughly 290,000 packets in total. The packets related to the business domain are the following:

tshark -r dns.pcap "dns.qry.name == rds5****x72.mysql.rds.aliyuncs.com"
  1 0.000000 10***.11 → 10.***.118 DNS 64 103 Standard query 0x2634 A rds5z***2.mysql.rds.aliyuncs.com
  2 0.000222 10.***.118 → 10.***.11 DNS 61 119 Standard query response 0x2634 A rd****72.mysql.rds.aliyuncs.com A 10.***.46
  3 0.000291 10.***.11 → 10.***.118 DNS 64 103 Standard query 0x4901 AAAA rd****72.mysql.rds.aliyuncs.com
  4 0.000392 10.***.118 → 10.***.11 DNS 125 182 Standard query response 0x4901 AAAA rd****72.mysql.rds.aliyuncs.com SOA hidden-master.aliyun.com
Only four packets relate to the RDS hostname, and two of them are the IPv6 (AAAA) lookup, which can be ignored. In other words, every query that actually made it onto the wire was answered without problems, yet this does not match the errorlog at all. So where was the problem?
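For reference, a capture like dns.pcap above can be produced on the client with something along these lines (a minimal sketch: the interface, duration and output file name are assumptions, and the tshark filter simply reuses the elided RDS hostname from the logs):

# capture DNS traffic for ~9 minutes, then inspect it offline
timeout 540 tcpdump -i any -nn port 53 -w dns.pcap
capinfos dns.pcap
tshark -r dns.pcap "dns.qry.name == rds5****x72.mysql.rds.aliyuncs.com"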
2.3 Analyze the application performance/error logs and capture business traffic
Having confirmed that the underlying infrastructure was healthy and that the captured DNS traffic looked normal, we went through the customer's application logs. There were many API calls taking more than 5 s, and when one API stalled, the latency of several other APIs spiked at the same time, as shown in Figure 2-3-1. The log platform also showed that the number of 5 s entries grew by dozens of times after 12:30.
Figure 2-3-1
That day there were about 610,000 API calls that took more than 1 s, most of them around 5 s.
The comparison of the two half-hour windows before and after the problem (12:00-12:30 vs 12:30-01:00) is shown below: the count rose from 119 to 4118.
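As a rough illustration, counts like these can be pulled straight out of the performancelog with awk (a sketch only: the log file name follows the pattern from 1.3 and the 1000 ms threshold is an assumption):

# count the entries whose elapsed_time exceeds 1000 ms in one day's performancelog
awk -F '"elapsed_time":' '{split($2,a,","); if (a[1]+0 > 1000) n++} END {print n, "slow entries"}' /home/*/20200806.log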
Going through the application logs we also saw considerable latency on the Redis side. We captured the Redis traffic and located a 5 s session; the analysis showed that Redis had actually finished the previous operation and then waited 5 s before the next command arrived, as shown below.
*2
$4
AUTH
$11
zk*****2
+OK
*2
$6
SELECT
$2
83
+OK
*2
$3
GET
$39
ticket-api:ticket_product_detail_522920      <- after this command completed, there was a 5 s gap before the next operation
$-1
*4
$5
SETEX                                        <- here a key is set; its value had to be fetched from the database
$39
ticket-api:ticket_product_detail_522920
$1
5
$4874
{"ticket_product_id":"522920","product*****":[]}
+OK
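The Redis stream above was taken from a packet capture; a sketch of how an equivalent view can be produced (the default Redis port 6379 and the stream index are assumptions):

# capture Redis traffic, then dump one suspect TCP session as plain RESP text
tcpdump -i any -nn tcp port 6379 -w redis.pcap
tshark -r redis.pcap -q -z follow,tcp,ascii,0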
As for the 5 s Redis latency, the application developers confirmed the logic: the cache was missed, so the code went to the database, and given some computation along the way they considered that cost reasonable. However, when we analyzed the database packets we found the same pattern: one session contained four SQL statements, statements 1 and 2 completed quickly, and then the session waited about 5 s before sending the third. We extracted the SQL, matched it to the corresponding API by latency and statement text, and concluded that the 5 s wait in that logic occurred while a new SQL connection was being opened: the new SQL logic connects to the read-only instance to query data.
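The database-side timeline came out of a similar capture; a sketch of how the per-session SQL timing can be listed (port 3306 and the chosen output fields are assumptions):

# capture MySQL traffic and print each query with its relative time and TCP stream id
tcpdump -i any -nn tcp port 3306 -w mysql.pcap
tshark -r mysql.pcap -Y "mysql.query" -T fields -e frame.time_relative -e tcp.stream -e mysql.query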
We went on to capture traffic to the read-only instance and again found nothing abnormal. The application could not be switched to a more verbose debug level, the logic was stuck on the read-only instance, and yet the capture of that instance showed no problem, so the investigation stalled for a while. After thinking it over, we decided to dig into DNS resolution once more: the earlier DNS captures showed nothing wrong, but the errorlog clearly recorded resolution failures. We therefore tried to reproduce the problem manually and trace the script with strace to pin down where the time was going.
3. Recreating the scene, reproducing the problem, and a major discovery
3.1 The business team provided a script that connects to Redis and performs a simple set and get of a key, printing a timestamp before, between, and after the two calls:
define('END_REDIS_HOST','redis-****zv');
include('loader.php');
echo microtime()."\n";
$r = model('kv')->set('dd2', 'test');
echo microtime()."\n";
$r = model('kv')->get('dd2');
echo microtime()."\n";
The problem reproduced easily: on the third run of the script, a clear gap appears between two consecutive timestamps.
# strace -F -ff -t -tt -s 4096 -o redis.out php redis_test.php
0.61956300 1596554743
0.82152900 1596554743
0.91224400 1596554743
# strace -F -ff -t -tt -s 4096 -o redis.out php redis_test.php
0.06179100 1596554745
0.22502500 1596554745
0.31695600 1596554745
# strace -F -ff -t -tt -s 4096 -o redis.out php redis_test.php
0.21617100 1596554746      <- a 5 s gap starts here
0.42375300 1596554751      <- 5 s after the previous timestamp
0.61115800 1596554751
Comparing the strace logs of a normal run and an abnormal run: in the normal run the socket receives two responses (two recvfrom calls), while in the abnormal run only one response arrives; the resolver waits 5 s, re-sends the request, and only then does execution continue.
strace log of a normal resolution
1596601737.651420 socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 5
1596601737.651483 connect(5, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.68.0.2")}, 16) = 0
1596601737.651547 poll([{fd=5, events=POLLOUT}], 1, 0) = 1 ([{fd=5, revents=POLLOUT}])
====================
The sendmmsg call below shows that two queries are sent at once, one IPv4 (A) and one IPv6 (AAAA). The query type can be read from the end of the send buffer, e.g. "\272\243\1\0\0\1\0\0\0\0\0\0\20redis-7164-b5lzv\3svc\7cluster\5local\0\0\1\0\1": of the last four bytes, the second-to-last pair is the query type and the last pair \0\1 is class IN. \0\1 means an A query, \0\34 means an AAAA query (\34 is octal, 28 in decimal, 0x1c in hex, matching the DNS query type shown in the packet capture).
1596601737.651610 sendmmsg(5, {{{msg_name(0)=NULL, msg_iov(1)=[{"\272\243\1\0\0\1\0\0\0\0\0\0\20redis-7164-b5lzv\3svc\7cluster\5local\0\0\1\0\1", 52}], msg_controllen=0, msg_flags=MSG_TRUNC|MSG_EOR|MSG_FIN|MSG_RST|MSG_ERRQUEUE|MSG_NOSIGNAL|MSG_MORE|MSG_WAITFORONE|MSG_FASTOPEN|0x1e340010}, 52},      <- the IPv4 (A) query
 {{msg_name(0)=NULL, msg_iov(1)=[{"cs\1\0\0\1\0\0\0\0\0\0\20redis-7164-b5lzv\3svc\7cluster\5local\0\0\34\0\1", 52}], msg_controllen=0, msg_flags=MSG_WAITALL|MSG_FIN|MSG_ERRQUEUE|MSG_NOSIGNAL|MSG_FASTOPEN|MSG_CMSG_CLOEXEC|0x156c0000}, 52}}, 2, MSG_NOSIGNAL) = 2      <- the IPv6 (AAAA) query
====================
1596601737.651753 poll([{fd=5, events=POLLIN}], 1, 5000) = 1 ([{fd=5, revents=POLLIN}])
1596601737.655300 ioctl(5, FIONREAD, [145]) = 0
1596601737.655373 recvfrom(5, "cs\201\203\0\1\0\0\0\1\0\0\20redis-7164-b5lzv\3svc\7cluster\5local\0\0\34\0\1\7cluster\5local\0\0\6\0\1\0\0\0\10\0D\2ns\3dns\7cluster\5local\0\nhostmaster\7cluster\5local\0_*5k\0\0\34 \0\0\7\10\0\1Q\200\0\0\0\36", 2048, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.68.0.2")}, [16]) = 145
1596601737.655451 poll([{fd=5, events=POLLIN}], 1, 4996) = 1 ([{fd=5, revents=POLLIN}])
1596601737.655515 ioctl(5, FIONREAD, [145]) = 0
1596601737.655586 recvfrom(5, "\272\243\201\203\0\1\0\0\0\1\0\0\20redis-7164-b5lzv\3svc\7cluster\5local\0\0\1\0\1\7cluster\5local\0\0\6\0\1\0\0\0\10\0D\2ns\3dns\7cluster\5local\0\nhostmaster\7cluster\5local\0_*5k\0\0\34 \0\0\7\10\0\1Q\200\0\0\0\36", 65536, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.68.0.2")}, [16]) = 145
1596601737.655659 close(5) = 0
strace log of an abnormal resolution
1596601737.655724 socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 5
1596601737.655784 connect(5, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.68.0.2")}, 16) = 0
1596601737.655869 poll([{fd=5, events=POLLOUT}], 1, 0) = 1 ([{fd=5, revents=POLLOUT}])
1596601737.655968 sendmmsg(5, {{{msg_name(0)=NULL, msg_iov(1)=[{"\20\v\1\0\0\1\0\0\0\0\0\0\20redis-7164-b5lzv\7cluster\5local\0\0\1\0\1", 48}], msg_controllen=0, msg_flags=MSG_TRUNC|MSG_EOR|MSG_FIN|MSG_RST|MSG_ERRQUEUE|MSG_NOSIGNAL|MSG_MORE|MSG_WAITFORONE|MSG_FASTOPEN|0x1e340010}, 48}, {{msg_name(0)=NULL, msg_iov(1)=[{"\207\250\1\0\0\1\0\0\0\0\0\0\20redis-7164-b5lzv\7cluster\5local\0\0\34\0\1", 48}], msg_controllen=0, msg_flags=MSG_WAITALL|MSG_FIN|MSG_ERRQUEUE|MSG_NOSIGNAL|MSG_FASTOPEN|MSG_CMSG_CLOEXEC|0x156c0000}, 48}}, 2, MSG_NOSIGNAL) = 2
1596601737.656113 poll([{fd=5, events=POLLIN}], 1, 5000) = 1 ([{fd=5, revents=POLLIN}])
1596601737.659251 ioctl(5, FIONREAD, [141]) = 0
1596601737.659330 recvfrom(5, "\207\250\201\203\0\1\0\0\0\1\0\0\20redis-7164-b5lzv\7cluster\5local\0\0\34\0\1\7cluster\5local\0\0\6\0\1\0\0\0\10\0D\2ns\3dns\7cluster\5local\0\nhostmaster\7cluster\5local\0_*5T\0\0\34 \0\0\7\10\0\1Q\200\0\0\0\36", 2048, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.68.0.2")}, [16]) = 141
=========
1596601737.659421 poll([{fd=5, events=POLLIN}], 1, 4996) = 0 (Timeout)      <- this is exactly where the problem is
=========
1596601742.657639 poll([{fd=5, events=POLLOUT}], 1, 0) = 1 ([{fd=5, revents=POLLOUT}])
1596601742.657735 sendto(5, "\20\v\1\0\0\1\0\0\0\0\0\0\20redis-7164-b5lzv\7cluster\5local\0\0\1\0\1", 48, MSG_NOSIGNAL, NULL, 0) = 48
1596601742.657837 poll([{fd=5, events=POLLIN}], 1, 5000) = 1 ([{fd=5, revents=POLLIN}])
1596601742.660929 ioctl(5, FIONREAD, [141]) = 0
1596601742.661038 recvfrom(5, "\20\v\201\203\0\1\0\0\0\1\0\0\20redis-7164-b5lzv\7cluster\5local\0\0\1\0\1\7cluster\5local\0\0\6\0\1\0\0\0\3\0D\2ns\3dns\7cluster\5local\0\nhostmaster\7cluster\5local\0_*5T\0\0\34 \0\0\7\10\0\1Q\200\0\0\0\36", 2048, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.68.0.2")}, [16]) = 141
1596601742.661129 poll([{fd=5, events=POLLOUT}], 1, 4996) = 1 ([{fd=5, revents=POLLOUT}])
1596601742.661204 sendto(5, "\207\250\1\0\0\1\0\0\0\0\0\0\20redis-7164-b5lzv\7cluster\5local\0\0\34\0\1", 48, MSG_NOSIGNAL, NULL, 0) = 48
1596601742.661313 poll([{fd=5, events=POLLIN}], 1, 4996) = 1 ([{fd=5, revents=POLLIN}])
1596601742.664443 ioctl(5, FIONREAD, [141]) = 0
1596601742.664519 recvfrom(5, "\207\250\201\203\0\1\0\0\0\1\0\0\20redis-7164-b5lzv\7cluster\5local\0\0\34\0\1\7cluster\5local\0\0\6\0\1\0\0\0\3\0D\2ns\3dns\7cluster\5local\0\nhostmaster\7cluster\5local\0_*5T\0\0\34 \0\0\7\10\0\1Q\200\0\0\0\36", 65536, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.68.0.2")}, [16]) = 141
1596601742.664600 close(5) = 0
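Since -ff makes strace write one output file per traced process, the stalled resolution can be located across all of them with a quick grep (a small convenience, not part of the original procedure):

# find the traced process that hit the 5 s resolver timeout
grep -n "poll(.*= 0 (Timeout)" redis.out.*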
3.2 Based on these observations and the logs, we judged that the problem is related to getaddrinfo issuing both an IPv4 and an IPv6 query, and that setting single-request-reopen in /etc/resolv.conf should mitigate it. After adding the options below to resolv.conf, the 5 s spikes shrank to 2 s, which confirmed that this is a DNS problem and that it is tied to the resolver timeout.
options timeout:2 attempts:3 rotate single-request-reopen
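A quick way to see the effect of the resolv.conf options without touching the application is to time a lookup that goes through getaddrinfo, for example with getent (a sketch; the Redis service name is the one used by the test script above, and any frequently resolved name would do):

# getent hosts resolves through getaddrinfo, so it exercises the same A+AAAA path as the PHP code
time getent hosts redis-****zv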
3.3 Digging further: not just the what, but the why
When a DNS client (glibc or musl libc) resolves a name, it issues the A and AAAA queries concurrently. To talk to the DNS server it first calls connect (creating the fd) and then sends the queries over that fd. Because UDP is connectionless, connect does not actually send a packet, so no conntrack entry is created at that point. The concurrent A and AAAA queries go out over the same fd, so they share the same source port (same socket). When the two packets are sent back to back, neither has a conntrack entry yet, so netfilter creates an entry for each of them. Inside the cluster, requests to kube-dns/CoreDNS go to the CLUSTER-IP and are DNATed to the Pod IP of one endpoint; when the two packets happen to be DNATed to the same Pod IP, their five-tuples become identical, and the one whose conntrack entry is inserted later gets dropped. With single-request-reopen, when one of the two replies is lost the resolver waits for the timeout and then re-sends, which explains why, after changing the timeout to 2 s, we still saw a lot of 2 s anomalies. In this scenario single-request is the better choice; on the Kubernetes side, the NodeLocal DNSCache (node-local-dns) DNS caching component is also worth considering.
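The race described above leaves a trace in the conntrack statistics, so it can be confirmed on the node itself without a packet capture (a sketch, assuming conntrack-tools is installed on the node):

# each DNS packet lost to the DNAT race shows up as an increment of insert_failed
conntrack -S | grep insert_failed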
About the resolv.conf options:
single-request (since glibc 2.10) - serializes the lookups. Sets RES_SNGLKUP in _res.options. By default, glibc performs IPv4 and IPv6 lookups in parallel since version 2.9. Some appliance DNS servers cannot handle these queries properly and make the requests time out. This option disables the behavior and makes glibc perform the IPv6 and IPv4 requests sequentially (at the cost of some slowdown of the resolving process).
single-request-reopen (since glibc 2.9) - keeps the lookups parallel, but when one of the two replies never arrives, a new socket is opened and the outstanding query is re-sent after the timeout; this is why, after changing the timeout to 2 s, we still saw quite a few 2 s resolutions. Sets RES_SNGLKUPREOP in _res.options. The resolver uses the same socket for the A and AAAA requests. Some hardware mistakenly sends back only one reply. When that happens the client system will sit and wait for the second reply. Turning this option on changes this behavior so that if two requests from the same port are not handled correctly it will close the socket and open a new one before sending the second request.
4. Applying the fix in production and observing
4.1 The adjustment worked well; as the chart below shows, performance essentially returned to a stable baseline.
5. Some thoughts and takeaways from this case
5.1 We also tried pinning the hostnames in /etc/hosts, which worked well too, but with so many services it was impractical to change them one by one, so that approach was shelved.
5.2 In PHP, curl's CURL_IPRESOLVE_V4 option makes it send only the IPv4 lookup. But can getaddrinfo itself be told to skip the IPv6 lookup? The man page says the address family can be restricted through ai_family:
ai_family This field specifies the desired address family for the returned addresses. Valid values for this field include AF_INET and AF_INET6. The value AF_UNSPEC indicates that getaddrinfo() should return socket addresses for any address family (either IPv4 or IPv6, for example) that can be used with node and service.
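It can: passing AF_INET in hints.ai_family restricts getaddrinfo to A records. From the shell, glibc's getent exposes the same choice, which makes the two paths easy to compare (a sketch; the hostname is the Redis service name used earlier):

# AF_UNSPEC path: A and AAAA queries in parallel (the problematic case)
getent ahosts redis-****zv
# AF_INET path: only the A query is sent
getent ahostsv4 redis-****zv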
5.3 It remains a little odd that we never actually captured the outgoing packet that was dropped by the five-tuple collision.
5.4 Early in the investigation, the customer insisted that the Alibaba Cloud network was at fault: mtr showed packet loss, and pings to the database occasionally showed increased latency, so they believed loss and jitter were what drove up their business latency.
On the mtr packet loss: the loss appeared on intermediate hops, while the final hop (the target server) showed no loss. Anyone familiar with how MTR works knows that as long as the last hop shows no loss there is no real problem: loss at intermediate hops is usually those devices rate-limiting ICMP and dropping probes by policy, not genuine packet loss.
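For reference, a report-mode run makes this easy to read: the figure that matters is Loss% on the last hop, not on the hops in between (a sketch; the target hostname is a placeholder for the customer's RDS endpoint):

# 100 probes in report mode; intermediate-hop loss with a clean final hop usually just means ICMP rate limiting
mtr -rw -c 100 rds****.mysql.rds.aliyuncs.com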
On the occasional ping latency spikes: we did spend some time on this and found that the extra latency occurred on the path between the physical host and the VM, i.e. the host-to-VM interaction latency increased. However, the virtualization layer showed no resource contention, CPU usage was not high, and AVS showed nothing abnormal, so we never pinned down the exact cause of that increase. Stepping back and re-examining the customer's problem: the business latency had grown by several hundred milliseconds overall, whereas the ping spikes were very occasional increases of a few milliseconds, which could not possibly produce such a dramatic jump in business latency. So we stopped chasing the ping latency and refocused on the business logic where the latency was actually being added.
5.5 Common ways to optimize DNS in a Kubernetes cluster (a quick sketch follows this list):
Scale out the number of CoreDNS pods
Use node-local-dns (NodeLocal DNSCache)
Tune the resolv.conf options
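A minimal sketch of the first item (the deployment and service names assume a standard install where CoreDNS runs in kube-system and is exposed as the kube-dns service):

# scale out CoreDNS and check that the kube-dns service picked up the extra endpoints
kubectl -n kube-system scale deployment coredns --replicas=4
kubectl -n kube-system get endpoints kube-dns -o wide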
------- update: 2021-06-03 -------
If a pod uses Alpine as its base image, be aware that Alpine ships musl libc (glibc's single-request option has no effect there). musl libc sends queries to all of the nameservers configured in resolv.conf concurrently, which defeats the NodeLocal DNSCache component, so the base image needs to be replaced, e.g. with Ubuntu or CentOS.
Reference: https://github.com/gliderlabs/docker-alpine/blob/master/docs/caveats.md
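A quick way to check whether a pod's image is musl-based (a sketch; <pod-name> is a placeholder):

# the musl dynamic loader only exists on musl-based images such as Alpine
kubectl exec <pod-name> -- sh -c 'ls /lib/ld-musl-* 2>/dev/null || echo "no musl loader found (likely glibc)"'
kubectl exec <pod-name> -- head -n 1 /etc/os-release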
PS: we keep running into this resolution problem with short-lived PHP connections. If the calls go through PHP curl, you can follow the official PHP documentation and restrict resolution to IPv4 only (CURL_IPRESOLVE_V4).
Reference: https://www.php.net/manual/zh/function.curl-setopt.php
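The command-line analogue is curl's -4/--ipv4 flag, which is handy for confirming that an IPv4-only lookup avoids the stall (a sketch; the URL is a placeholder):

# force an IPv4-only lookup for this request and print where the time went
curl -4 -s -o /dev/null -w "namelookup: %{time_namelookup}s  total: %{time_total}s\n" http://example.com/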
References:
https://man7.org/linux/man-pages/man5/resolver.5.html
https://man7.org/linux/man-pages/man3/getaddrinfo.3.html
https://curl.haxx.se/libcurl/c/CURLOPT_IPRESOLVE.html
by 牧原 湫尘