libev

Introduction: libev

Over the weekend I took a look at the famous libev and organized some notes.

Building the libev library

download

wget http://dist.schmorp.de/libev/libev-4.19.tar.gz
tar xzvf libev-4.19.tar.gz

compile

cd libev-4.19 && ./configure CFLAGS="-g -O0" && make

To support multiple platforms, libev is built with libtool.

A look at the build log:

libtool  --tag=CC   --mode=compile gcc -DHAVE_CONFIG_H -I.     -g -O3 -MT ev.lo -MD -MP -MF .deps/ev.Tpo -c -o ev.lo ev.c
libtool: compile:  gcc -DHAVE_CONFIG_H -I. -g -O3 -MT ev.lo -MD -MP -MF .deps/ev.Tpo -c ev.c -o ev.o >/dev/null 2>&1
libtool: compile:  gcc -DHAVE_CONFIG_H -I. -g -O3 -MT event.lo -MD -MP -MF .deps/event.Tpo -c event.c  -fPIC -DPIC -o .libs/event.o
libtool: compile:  gcc -DHAVE_CONFIG_H -I. -g -O3 -MT event.lo -MD -MP -MF .deps/event.Tpo -c event.c -o event.o >/dev/null 2>&1
libtool: link: gcc -shared  -fPIC -DPIC  .libs/ev.o .libs/event.o   -lm  -O3   -Wl,-soname -Wl,libev.so.4 -o .libs/libev.so.4.0.0
libtool: link: (cd ".libs" && rm -f "libev.so.4" && ln -s "libev.so.4.0.0" "libev.so.4")
libtool: link: (cd ".libs" && rm -f "libev.so" && ln -s "libev.so.4.0.0" "libev.so")
libtool: link: ar cru .libs/libev.a  ev.o event.o
libtool: link: ranlib .libs/libev.a
libtool: link: ( cd ".libs" && rm -f "libev.la" && ln -s "../libev.la" "libev.la" )

By default, libtool generates both a static and a shared library; see the autotools manual for details.

hello, world

Creating the helloworld_libev.c file

The code is taken directly from the example on the official site.
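For completeness, the official example from the libev documentation (a stdin watcher plus a 5.5-second one-shot timer) looks like this. It needs libev to link (e.g. against the libev.la built above), so it is reproduced here for reference:

```c
#include <ev.h>
#include <stdio.h>

ev_io stdin_watcher;
ev_timer timeout_watcher;

/* called when stdin becomes readable */
static void stdin_cb (EV_P_ ev_io *w, int revents)
{
    puts ("stdin ready");
    ev_io_stop (EV_A_ w);          /* one-shot: stop the watcher manually */
    ev_break (EV_A_ EVBREAK_ALL);  /* stop all nested ev_run calls */
}

/* called after the 5.5 second timeout */
static void timeout_cb (EV_P_ ev_timer *w, int revents)
{
    puts ("timeout");
    ev_break (EV_A_ EVBREAK_ONE);  /* stop the innermost ev_run */
}

int main (void)
{
    /* use the default event loop unless you have special needs */
    struct ev_loop *loop = EV_DEFAULT;

    /* watch stdin (fd 0) for readability */
    ev_io_init (&stdin_watcher, stdin_cb, 0, EV_READ);
    ev_io_start (loop, &stdin_watcher);

    /* simple non-repeating 5.5 second timeout */
    ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);
    ev_timer_start (loop, &timeout_watcher);

    ev_run (loop, 0);   /* wait for events to arrive */
    return 0;
}
```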

mkdir helloworld_libev
max@max-gentoo ~/Code/LibEV $ tree -L 1
.
├── helloworld_libev
├── libev-4.19
└── libev-4.19.tar.gz

Compiling helloworld

Again, libtool is used for compiling and linking.

max@max-gentoo ~/Code/LibEV/helloworld_libev $ libtool --tag=CC  --mode=compile gcc -g -O0  -c helloworld_libev.c
libtool: compile:  gcc -c helloworld_libev.c  -fPIC -DPIC -o .libs/helloworld_libev.o
libtool: compile:  gcc -c helloworld_libev.c -o helloworld_libev.o >/dev/null 2>&1
max@max-gentoo ~/Code/LibEV/helloworld_libev $ libtool --tag=CC --mode=link gcc -o helloworld_libev helloworld_libev.lo ../libev-4.19/libev.la 
libtool: link: gcc -o .libs/helloworld_libev .libs/helloworld_libev.o  ../libev-4.19/.libs/libev.so -lm

Running

./helloworld_libev

How to debug?

What libtool produces in the build directory is an executable wrapper script, so you can debug through libtool itself, or debug the real executable under the .libs directory directly.

libtool --mode=execute gdb helloworld_libev

The workflow of ev_loop

The libev documentation describes, step by step, what a call to ev_run does:

   - Increment loop depth.
   - Reset the ev_break status.
   - Before the first iteration, call any pending watchers.
   LOOP:
   - If EVFLAG_FORKCHECK was used, check for a fork.
   - If a fork was detected (by any means), queue and call all fork watchers.
   - Queue and call all prepare watchers.
   - If ev_break was called, goto FINISH.
   - If we have been forked, detach and recreate the kernel state
     as to not disturb the other process.
   - Update the kernel state with all outstanding changes.
   - Update the "event loop time" (ev_now ()).
   - Calculate for how long to sleep or block, if at all
     (active idle watchers, EVRUN_NOWAIT or not having
     any active watchers at all will result in not sleeping).
   - Sleep if the I/O and timer collect interval say so.
   - Increment loop iteration counter.
   - Block the process, waiting for any events.
   - Queue all outstanding I/O (fd) events.
   - Update the "event loop time" (ev_now ()), and do time jump adjustments.
   - Queue all expired timers.
   - Queue all expired periodics.
   - Queue all idle watchers with priority higher than that of pending events.
   - Queue all check watchers.
   - Call all queued watchers in reverse order (i.e. check watchers first).
     Signals and child watchers are implemented as I/O watchers, and will
     be handled here by queueing them when their watcher gets executed.
   - If ev_break has been called, or EVRUN_ONCE or EVRUN_NOWAIT
     were used, or there are no active watchers, goto FINISH, otherwise
     continue with step LOOP.
   FINISH:
   - Reset the ev_break status iff it was EVBREAK_ONE.
   - Decrement the loop depth.
   - Return.
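The break conditions in the walkthrough above correspond to the flags ev_run accepts. As a small sketch (it requires linking against libev, so it is shown for illustration only):

```c
#include <ev.h>

/* One pass through the LOOP steps above without blocking at the
   "Block the process" step: EVRUN_NOWAIT makes ev_run poll once,
   invoke whatever is pending, and return immediately. EVRUN_ONCE
   instead blocks for exactly one batch of events. This is how libev
   can be embedded inside another main loop. */
void drain_pending(struct ev_loop *loop)
{
    ev_run(loop, EVRUN_NOWAIT);
}
```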

The libev documentation

The libev documentation is written in real depth.

The section on ev_loop_new compares the strengths and weaknesses of select, poll, epoll, kqueue, and port, and is worth reading carefully.

Documentation link: http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod

I cannot resist quoting the discussion of epoll's drawbacks. Incisive!

EVBACKEND_EPOLL (value 4, Linux)
Use the linux-specific epoll(7) interface (for both pre- and post-2.6.9 kernels).
For few fds, this backend is a bit little slower than poll and select, but it scales phenomenally better. While poll and select usually scale like O(total_fds) where total_fds is the total number of fds (or the highest fd), epoll scales either O(1) or O(active_fds).
The epoll mechanism deserves honorable mention as the most misdesigned of the more advanced event mechanisms: mere annoyances include silently dropping file descriptors, requiring a system call per change per file descriptor (and unnecessary guessing of parameters), problems with dup, returning before the timeout value, resulting in additional iterations (and only giving 5ms accuracy while select on the same platform gives 0.1ms) and so on. The biggest issue is fork races, however - if a program forks then both parent and child process have to recreate the epoll set, which can take considerable time (one syscall per file descriptor) and is of course hard to detect.
Epoll is also notoriously buggy - embedding epoll fds should work, but of course doesn't, and epoll just loves to report events for totally different file descriptors (even already closed ones, so one cannot even remove them from the set) than registered in the set (especially on SMP systems). Libev tries to counter these spurious notifications by employing an additional generation counter and comparing that against the events to filter out spurious ones, recreating the set when required. Epoll also erroneously rounds down timeouts, but gives you no way to know when and by how much, so sometimes you have to busy-wait because epoll returns immediately despite a nonzero timeout. And last not least, it also refuses to work with some file descriptors which work perfectly fine with select (files, many character devices...).
Epoll is truly the train wreck among event poll mechanisms, a frankenpoll, cobbled together in a hurry, no thought to design or interaction with others. Oh, the pain, will it ever stop...
While stopping, setting and starting an I/O watcher in the same iteration will result in some caching, there is still a system call per such incident (because the same file descriptor could point to a different file description now), so its best to avoid that. Also, dup ()'ed file descriptors might not work very well if you register events for both file descriptors.
Best performance from this backend is achieved by not unregistering all watchers for a file descriptor until it has been closed, if possible, i.e. keep at least one watcher active per fd at all times. Stopping and starting a watcher (without re-setting it) also usually doesn't cause extra overhead. A fork can both result in spurious notifications as well as in libev having to destroy and recreate the epoll object, which can take considerable time and thus should be avoided.
All this means that, in practice, EVBACKEND_SELECT can be as fast or faster than epoll for maybe up to a hundred file descriptors, depending on the usage. So sad.
While nominally embeddable in other event loops, this feature is broken in all kernel versions tested so far.
This backend maps EV_READ and EV_WRITE in the same way as EVBACKEND_POLL.

With only a few fds, epoll is slower than poll and select. But epoll scales phenomenally better: its complexity is O(1) or O(active_fds).

The documentation then enumerates epoll's poor design decisions: silently dropping file descriptors; one system call per change per fd; problems with dup; a timeout accuracy of only 5ms, while select on the same platform gives 0.1ms; fork races; refusing to work with some fds; and cases where you are forced to busy-wait.

To get the best performance out of this backend, keep at least one watcher active per fd until the fd is closed. And beware of fork.

Finally, in practice, with up to around a hundred fds, EVBACKEND_SELECT can be as fast as or faster than epoll. So sad.

The ev_io data structure

(figure: layout of the ev_io data structure; image not available)

A multi-threaded libev echo server (the client side)

#include <ev.h>
#include <pthread.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <strings.h>
#include <unistd.h>
#define NTHREADS 10
#define NCONNECTION_PER_THREAD 10
#define PORT 8081
int do_connect(int s)
{
    struct sockaddr_in server_addr;
    bzero(&server_addr, sizeof(server_addr));
    server_addr.sin_family = AF_INET;
    inet_aton("127.0.0.1", &server_addr.sin_addr);
    server_addr.sin_port = htons(PORT);
    return connect(s, (struct sockaddr*)&server_addr, sizeof(server_addr));
}
/* echo whatever the server sent back; stop the watcher on EOF/error */
void cb(EV_P_ ev_io *w, int revents)
{
    char buf[128];
    ssize_t len = recv(w->fd, buf, sizeof(buf), 0);
    if (len <= 0)
    {
        ev_io_stop(EV_A_ w);
        close(w->fd);
        return;
    }
    send(w->fd, buf, len, 0);
}
int init_send(int s)
{
    char str[] = "hello world";
    return send(s, str, sizeof(str), 0);
}
void * run(void *thr_arg)
{
    int i;
    struct ev_io evios[NCONNECTION_PER_THREAD];
    /* each thread drives its own loop; loops must not be shared */
    struct ev_loop * evloop = ev_loop_new(EVBACKEND_EPOLL);
    for (i=0; i<NCONNECTION_PER_THREAD; i++)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s == -1 || do_connect(s) == -1)
        {
            printf("connect error\n");
            pthread_exit(0);
        }
        else
        {
            ev_io_init (evios+i, cb, s, EV_READ);
            ev_io_start (evloop, evios+i);
            init_send(s);
        }
    }
    printf("begin to ev_run()\n");
    ev_run(evloop, 0);   /* returns once all watchers have stopped */
    printf("end to ev_run()\n");
    ev_loop_destroy(evloop);
    return NULL;
}
int main ()
{
    int i;
    pthread_t tids[NTHREADS];
    for (i=0; i<NTHREADS; i++)
        pthread_create(&tids[i], NULL, &run, NULL);
    /* wait for the workers: if main returned here, the process would
       exit and kill all threads before they ever ran their loops */
    for (i=0; i<NTHREADS; i++)
        pthread_join(tids[i], NULL);
}
libtool --tag=CC --mode=compile gcc -g -O0 -I../libev-4.19/ -c multi_loop.c
libtool --tag=CC  --mode=link gcc -o multi multi_loop.lo ../libev-4.19/libev.la -lpthread