Learning perf_event_open: design

Performance Counters for Linux

Performance counters are special hardware registers available on most modern

CPUs. These registers count the number of certain types of hw events: such

as instructions executed, cache misses suffered, or branches mis-predicted -

without slowing down the kernel or applications. These registers can also

trigger interrupts when a threshold number of events have passed - and can

thus be used to profile the code that runs on that CPU.

The Linux Performance Counter subsystem provides an abstraction of these

hardware capabilities. It provides per task and per CPU counters, counter

groups, and it provides event capabilities on top of those. It

provides "virtual" 64-bit counters, regardless of the width of the

underlying hardware counters.

Performance counters are accessed via special file descriptors.

There's one file descriptor per virtual counter used.

The special file descriptor is opened via the sys_perf_event_open()

system call:

int sys_perf_event_open(struct perf_event_attr *hw_event_uptr,
                        pid_t pid, int cpu, int group_fd,
                        unsigned long flags);

The syscall returns the new fd. The fd can be used via the normal

VFS system calls: read() can be used to read the counter, fcntl()

can be used to set the blocking mode, etc.

Multiple counters can be kept open at a time, and the counters

can be poll()ed.

When creating a new counter fd, 'perf_event_attr' is:

struct perf_event_attr {
/*
         * The MSB of the config word signifies if the rest contains cpu
         * specific (raw) counter configuration data, if unset, the next
         * 7 bits are an event type and the rest of the bits are the event
         * identifier.
         */
        __u64                   config;
        __u64                   irq_period;
        __u32                   record_type;
        __u32                   read_format;
        __u64                   disabled       :  1, /* off by default        */
                                inherit        :  1, /* children inherit it   */
                                pinned         :  1, /* must always be on PMU */
                                exclusive      :  1, /* only group on PMU     */
                                exclude_user   :  1, /* don't count user      */
                                exclude_kernel :  1, /* ditto kernel          */
                                exclude_hv     :  1, /* ditto hypervisor      */
                                exclude_idle   :  1, /* don't count when idle */
                                mmap           :  1, /* include mmap data     */
                                munmap         :  1, /* include munmap data   */
                                comm           :  1, /* include comm data     */
                                __reserved_1   : 53;
        __u32                   extra_config_len;
        __u32                   wakeup_events;  /* wakeup every n events */
        __u64                   __reserved_2;
        __u64                   __reserved_3;
};

The 'config' field specifies what the counter should count. It

is divided into 3 bit-fields:

raw_type: 1 bit   (most significant bit)  0x8000_0000_0000_0000
type:   7 bits  (next most significant) 0x7f00_0000_0000_0000
event_id: 56 bits (least significant)   0x00ff_ffff_ffff_ffff

If 'raw_type' is 1, then the counter will count a hardware event

specified by the remaining 63 bits of 'config'. The encoding is

machine-specific.

If 'raw_type' is 0, then the 'type' field says what kind of counter

this is, with the following encoding:

enum perf_type_id {
  PERF_TYPE_HARDWARE    = 0,
  PERF_TYPE_SOFTWARE    = 1,
  PERF_TYPE_TRACEPOINT    = 2,
};

A counter of PERF_TYPE_HARDWARE will count the hardware event

specified by 'event_id':

/*
 * Generalized performance counter event types, used by the hw_event.event_id
 * parameter of the sys_perf_event_open() syscall:
 */
enum perf_hw_id {
  /*
   * Common hardware events, generalized by the kernel:
   */
  PERF_COUNT_HW_CPU_CYCLES    = 0,
  PERF_COUNT_HW_INSTRUCTIONS    = 1,
  PERF_COUNT_HW_CACHE_REFERENCES    = 2,
  PERF_COUNT_HW_CACHE_MISSES    = 3,
  PERF_COUNT_HW_BRANCH_INSTRUCTIONS = 4,
  PERF_COUNT_HW_BRANCH_MISSES   = 5,
  PERF_COUNT_HW_BUS_CYCLES    = 6,
  PERF_COUNT_HW_STALLED_CYCLES_FRONTEND = 7,
  PERF_COUNT_HW_STALLED_CYCLES_BACKEND  = 8,
  PERF_COUNT_HW_REF_CPU_CYCLES    = 9,
};

These are standardized types of events that work relatively uniformly

on all CPUs that implement Performance Counters support under Linux,

although there may be variations (e.g., different CPUs might count

cache references and misses at different levels of the cache hierarchy).

If a CPU is not able to count the selected event, then the system call

will return -EINVAL.

More hw_event_types are supported as well, but they are CPU-specific

and accessed as raw events. For example, to count "External bus

cycles while bus lock signal asserted" events on Intel Core CPUs, pass

in a 0x4064 event_id value and set hw_event.raw_type to 1.

A counter of type PERF_TYPE_SOFTWARE will count one of the available

software events, selected by 'event_id':

/*
 * Special "software" counters provided by the kernel, even if the hardware
 * does not support performance counters. These counters measure various
 * physical and sw events of the kernel (and allow the profiling of them as
 * well):
 */
enum perf_sw_ids {
  PERF_COUNT_SW_CPU_CLOCK   = 0,
  PERF_COUNT_SW_TASK_CLOCK  = 1,
  PERF_COUNT_SW_PAGE_FAULTS = 2,
  PERF_COUNT_SW_CONTEXT_SWITCHES  = 3,
  PERF_COUNT_SW_CPU_MIGRATIONS  = 4,
  PERF_COUNT_SW_PAGE_FAULTS_MIN = 5,
  PERF_COUNT_SW_PAGE_FAULTS_MAJ = 6,
  PERF_COUNT_SW_ALIGNMENT_FAULTS  = 7,
  PERF_COUNT_SW_EMULATION_FAULTS  = 8,
};

Counters of the type PERF_TYPE_TRACEPOINT are available when the ftrace event

tracer is available, and event_id values can be obtained from

/debug/tracing/events/*/*/id

Counters come in two flavours: counting counters and sampling

counters. A "counting" counter is one that is used for counting the

number of events that occur, and is characterised by having

irq_period = 0.

A read() on a counter returns the current value of the counter and possible

additional values as specified by 'read_format'; each value is a u64 (8 bytes)

in size.

/*
 * Bits that can be set in hw_event.read_format to request that
 * reads on the counter should return the indicated quantities,
 * in increasing order of bit value, after the counter value.
 */
enum perf_event_read_format {
        PERF_FORMAT_TOTAL_TIME_ENABLED  =  1,
        PERF_FORMAT_TOTAL_TIME_RUNNING  =  2,
};

Using these additional values one can establish the overcommit ratio for a

particular counter allowing one to take the round-robin scheduling effect

into account.

A "sampling" counter is one that is set up to generate an interrupt

every N events, where N is given by 'irq_period'. A sampling counter

has irq_period > 0. The record_type controls what data is recorded on each

interrupt:

/*
 * Bits that can be set in hw_event.record_type to request information
 * in the overflow packets.
 */
enum perf_event_record_format {
        PERF_RECORD_IP          = 1U << 0,
        PERF_RECORD_TID         = 1U << 1,
        PERF_RECORD_TIME        = 1U << 2,
        PERF_RECORD_ADDR        = 1U << 3,
        PERF_RECORD_GROUP       = 1U << 4,
        PERF_RECORD_CALLCHAIN   = 1U << 5,
};

Such (and other) events will be recorded in a ring-buffer, which is

available to user-space using mmap() (see below).

The 'disabled' bit specifies whether the counter starts out disabled

or enabled. If it is initially disabled, it can be enabled by ioctl

or prctl (see below).

The 'inherit' bit, if set, specifies that this counter should count

events on descendant tasks as well as the task specified. This only

applies to new descendants, not to any existing descendants at the

time the counter is created (nor to any new descendants of existing

descendants).

The 'pinned' bit, if set, specifies that the counter should always be

on the CPU if at all possible. It only applies to hardware counters

and only to group leaders. If a pinned counter cannot be put onto the

CPU (e.g. because there are not enough hardware counters or because of

a conflict with some other event), then the counter goes into an

'error' state, where reads return end-of-file (i.e. read() returns 0)

until the counter is subsequently enabled or disabled.

The 'exclusive' bit, if set, specifies that when this counter's group

is on the CPU, it should be the only group using the CPU's counters.

In future, this will allow sophisticated monitoring programs to supply

extra configuration information via 'extra_config_len' to exploit

advanced features of the CPU's Performance Monitor Unit (PMU) that are

not otherwise accessible and that might disrupt other hardware

counters.

The 'exclude_user', 'exclude_kernel' and 'exclude_hv' bits provide a

way to request that counting of events be restricted to times when the

CPU is in user, kernel and/or hypervisor mode.

Furthermore the 'exclude_host' and 'exclude_guest' bits provide a way

to request counting of events restricted to guest and host contexts when

using Linux as the hypervisor.

The 'mmap' and 'munmap' bits allow recording of PROT_EXEC mmap/munmap

operations; these can be used to relate userspace IP addresses to actual

code, even after the mapping (or even the whole process) is gone.

These events are recorded in the ring-buffer (see below).

The 'comm' bit allows tracking of process comm data on process creation.

This too is recorded in the ring-buffer (see below).

The 'pid' parameter to the sys_perf_event_open() system call allows the

counter to be specific to a task:

pid == 0: if the pid parameter is zero, the counter is attached to the

current task.

pid > 0: the counter is attached to a specific task (if the current task

has sufficient privilege to do so)

pid < 0: all tasks are counted (per cpu counters)

The 'cpu' parameter allows a counter to be made specific to a CPU:

cpu >= 0: the counter is restricted to a specific CPU

cpu == -1: the counter counts on all CPUs

(Note: the combination of 'pid == -1' and 'cpu == -1' is not valid.)

A 'pid > 0' and 'cpu == -1' counter is a per task counter that counts

events of that task and 'follows' that task to whatever CPU the task

gets scheduled to. Per task counters can be created by any user, for

their own tasks.

A 'pid == -1' and 'cpu == x' counter is a per CPU counter that counts

all events on CPU-x. Per CPU counters need CAP_PERFMON or CAP_SYS_ADMIN

privilege.

The 'flags' parameter is currently unused and must be zero.

The 'group_fd' parameter allows counter "groups" to be set up. A

counter group has one counter which is the group "leader". The leader

is created first, with group_fd = -1 in the sys_perf_event_open call

that creates it. The rest of the group members are created

subsequently, with group_fd giving the fd of the group leader.

(A single counter on its own is created with group_fd = -1 and is

considered to be a group with only 1 member.)

A counter group is scheduled onto the CPU as a unit, that is, it will

only be put onto the CPU if all of the counters in the group can be

put onto the CPU. This means that the values of the member counters

can be meaningfully compared, added, divided (to get ratios), etc.,

with each other, since they have counted events for the same set of

executed instructions.

As stated, asynchronous events, like counter overflow or PROT_EXEC mmap

tracking are logged into a ring-buffer. This ring-buffer is created and

accessed through mmap().

The mmap size should be 1+2^n pages, where the first page is a meta-data page

(struct perf_event_mmap_page) that contains various bits of information such

as where the ring-buffer head is.

/*
 * Structure of the page that can be mapped via mmap
 */
struct perf_event_mmap_page {
        __u32   version;                /* version number of this structure */
        __u32   compat_version;         /* lowest version this is compat with */
/*
         * Bits needed to read the hw counters in user-space.
         *
         *   u32 seq;
         *   s64 count;
         *
         *   do {
         *     seq = pc->lock;
         *
         *     barrier()
         *     if (pc->index) {
         *       count = pmc_read(pc->index - 1);
         *       count += pc->offset;
         *     } else
         *       goto regular_read;
         *
         *     barrier();
         *   } while (pc->lock != seq);
         *
         * NOTE: for obvious reason this only works on self-monitoring
         *       processes.
         */
        __u32   lock;                   /* seqlock for synchronization */
        __u32   index;                  /* hardware counter identifier */
        __s64   offset;                 /* add to hardware counter value */
/*
         * Control data for the mmap() data buffer.
         *
         * User-space reading this value should issue an rmb(), on SMP capable
         * platforms, after reading this value -- see perf_event_wakeup().
         */
        __u32   data_head;              /* head in the data section */
};

NOTE: the hw-counter userspace bits are arch specific and are currently only

implemented on powerpc.

The following 2^n pages are the ring-buffer which contains events of the form:

#define PERF_RECORD_MISC_KERNEL          (1 << 0)
#define PERF_RECORD_MISC_USER            (1 << 1)
#define PERF_RECORD_MISC_OVERFLOW        (1 << 2)
struct perf_event_header {
        __u32   type;
        __u16   misc;
        __u16   size;
};
enum perf_event_type {
/*
         * The MMAP events record the PROT_EXEC mappings so that we can
         * correlate userspace IPs to code. They have the following structure:
         *
         * struct {
         *      struct perf_event_header        header;
         *
         *      u32                             pid, tid;
         *      u64                             addr;
         *      u64                             len;
         *      u64                             pgoff;
         *      char                            filename[];
         * };
         */
        PERF_RECORD_MMAP                 = 1,
        PERF_RECORD_MUNMAP               = 2,
/*
         * struct {
         *      struct perf_event_header        header;
         *
         *      u32                             pid, tid;
         *      char                            comm[];
         * };
         */
        PERF_RECORD_COMM                 = 3,
/*
         * When header.misc & PERF_RECORD_MISC_OVERFLOW the event_type field
         * will be PERF_RECORD_*
         *
         * struct {
         *      struct perf_event_header        header;
         *
         *      { u64                   ip;       } && PERF_RECORD_IP
         *      { u32                   pid, tid; } && PERF_RECORD_TID
         *      { u64                   time;     } && PERF_RECORD_TIME
         *      { u64                   addr;     } && PERF_RECORD_ADDR
         *
         *      { u64                   nr;
         *        { u64 event, val; }   cnt[nr];  } && PERF_RECORD_GROUP
         *
         *      { u16                   nr,
         *                              hv,
         *                              kernel,
         *                              user;
         *        u64                   ips[nr];  } && PERF_RECORD_CALLCHAIN
         * };
         */
};

NOTE: PERF_RECORD_CALLCHAIN is arch specific and currently only implemented

on x86.

Notification of new events is possible through poll()/select()/epoll() and

fcntl() managing signals.

Normally a notification is generated for every page filled, however one can

additionally set perf_event_attr.wakeup_events to generate one every

so many counter overflow events.

Future work will include a splice() interface to the ring-buffer.

Counters can be enabled and disabled in two ways: via ioctl and via

prctl. When a counter is disabled, it doesn't count or generate

events but does continue to exist and maintain its count value.

An individual counter can be enabled with

ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

or disabled with

ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

For a counter group, pass PERF_IOC_FLAG_GROUP as the third argument.

Enabling or disabling the leader of a group enables or disables the

whole group; that is, while the group leader is disabled, none of the

counters in the group will count. Enabling or disabling a member of a

group other than the leader only affects that counter - disabling a

non-leader stops that counter from counting but doesn't affect any

other counter.

Additionally, non-inherited overflow counters can use

ioctl(fd, PERF_EVENT_IOC_REFRESH, nr);

to enable a counter for 'nr' events, after which it gets disabled again.

A process can enable or disable all the counter groups that are

attached to it, using prctl:

prctl(PR_TASK_PERF_EVENTS_ENABLE);
prctl(PR_TASK_PERF_EVENTS_DISABLE);

This applies to all counters on the current process, whether created

by this process or by another, and doesn't affect any counters that

this process has created on other processes. It only enables or

disables the group leaders, not any other members in the groups.

Arch requirements

If your architecture does not have hardware performance metrics, you can

still use the generic software counters based on hrtimers for sampling.

So to start with, in order to add HAVE_PERF_EVENTS to your Kconfig, you

will need at least this:

  • asm/perf_event.h - a basic stub will suffice at first
  • support for atomic64 types (and associated helper functions)

If your architecture does have hardware capabilities, you can override the

weak stub hw_perf_event_init() to register hardware counters.

Architectures that have d-cache aliasing issues, such as Sparc and ARM,

should select PERF_USE_VMALLOC in order to avoid these for perf mmap().
