[Notes] Audio/Video Learning with FFmpeg in Practice (1): Common Structs and How to Crop the Picture (Part 1)

Summary: [Notes] Audio/Video Learning with FFmpeg in Practice (1): Common Structs and How to Crop the Picture

AVFrame

Struct definition

typedef struct AVFrame {
#define AV_NUM_DATA_POINTERS 8
    /** Image data
     * pointer to the picture/channel planes.
     * This might be different from the first allocated byte
     * - encoding: Set by user
     * - decoding: set by AVCodecContext.get_buffer()
     */
    uint8_t *data[AV_NUM_DATA_POINTERS];
    /**
     * Size, in bytes, of the data for each picture/channel plane.
     *
     * For audio, only linesize[0] may be set. For planar audio, each channel
     * plane must be the same size.
     *
     * - encoding: Set by user
     * - decoding: set by AVCodecContext.get_buffer()
     */
    int linesize[AV_NUM_DATA_POINTERS];
    /**
     * pointers to the data planes/channels.
     *
     * For video, this should simply point to data[].
     *
     * For planar audio, each channel has a separate data pointer, and
     * linesize[0] contains the size of each channel buffer.
     * For packed audio, there is just one data pointer, and linesize[0]
     * contains the total size of the buffer for all channels.
     *
     * Note: Both data and extended_data will always be set by get_buffer(),
     * but for planar audio with more channels that can fit in data,
     * extended_data must be used by the decoder in order to access all
     * channels.
     *
     * encoding: unused
     * decoding: set by AVCodecContext.get_buffer()
     */
    uint8_t **extended_data;
    /** Width and height
     * width and height of the video frame
     * - encoding: unused
     * - decoding: Read by user.
     */
    int width, height;
    /**
     * number of audio samples (per channel) described by this frame
     * - encoding: Set by user
     * - decoding: Set by libavcodec
     */
    int nb_samples;
    /**
     * format of the frame, -1 if unknown or unset
     * Values correspond to enum AVPixelFormat for video frames,
     * enum AVSampleFormat for audio)
     * - encoding: unused
     * - decoding: Read by user.
     */
    int format;
    /** Whether this is a keyframe
     * 1 -> keyframe, 0-> not
     * - encoding: Set by libavcodec.
     * - decoding: Set by libavcodec.
     */
    int key_frame;
    /** Frame type (I, B, P)
     * Picture type of the frame, see ?_TYPE below.
     * - encoding: Set by libavcodec. for coded_picture (and set by user for input).
     * - decoding: Set by libavcodec.
     */
    enum AVPictureType pict_type;
    /**
     * pointer to the first allocated byte of the picture. Can be used in get_buffer/release_buffer.
     * This isn't used by libavcodec unless the default get/release_buffer() is used.
     * - encoding:
     * - decoding:
     */
    uint8_t *base[AV_NUM_DATA_POINTERS];
    /**
     * sample aspect ratio for the video frame, 0/1 if unknown/unspecified
     * - encoding: unused
     * - decoding: Read by user.
     */
    AVRational sample_aspect_ratio;
    /**
     * presentation timestamp in time_base units (time when frame should be shown to user)
     * If AV_NOPTS_VALUE then frame_rate = 1/time_base will be assumed.
     * - encoding: MUST be set by user.
     * - decoding: Set by libavcodec.
     */
    int64_t pts;
    /**
     * reordered pts from the last AVPacket that has been input into the decoder
     * - encoding: unused
     * - decoding: Read by user.
     */
    int64_t pkt_pts;
    /**
     * dts from the last AVPacket that has been input into the decoder
     * - encoding: unused
     * - decoding: Read by user.
     */
    int64_t pkt_dts;
    /**
     * picture number in bitstream order
     * - encoding: set by
     * - decoding: Set by libavcodec.
     */
    int coded_picture_number;
    /**
     * picture number in display order
     * - encoding: set by
     * - decoding: Set by libavcodec.
     */
    int display_picture_number;
    /**
     * quality (between 1 (good) and FF_LAMBDA_MAX (bad))
     * - encoding: Set by libavcodec. for coded_picture (and set by user for input).
     * - decoding: Set by libavcodec.
     */
    int quality;
    /**
     * is this picture used as reference
     * The values for this are the same as the MpegEncContext.picture_structure
     * variable, that is 1->top field, 2->bottom field, 3->frame/both fields.
     * Set to 4 for delayed, non-reference frames.
     * - encoding: unused
     * - decoding: Set by libavcodec. (before get_buffer() call)).
     */
    int reference;
    /** QP table
     * QP table
     * - encoding: unused
     * - decoding: Set by libavcodec.
     */
    int8_t *qscale_table;
    /**
     * QP store stride
     * - encoding: unused
     * - decoding: Set by libavcodec.
     */
    int qstride;
    /**
     *
     */
    int qscale_type;
    /** Macroblock skip table
     * mbskip_table[mb]>=1 if MB didn't change
     * stride= mb_width = (width+15)>>4
     * - encoding: unused
     * - decoding: Set by libavcodec.
     */
    uint8_t *mbskip_table;
    /** Motion vector table
     * motion vector table
     * @code
     * example:
     * int mv_sample_log2= 4 - motion_subsample_log2;
     * int mb_width= (width+15)>>4;
     * int mv_stride= (mb_width << mv_sample_log2) + 1;
     * motion_val[direction][x + y*mv_stride][0->mv_x, 1->mv_y];
     * @endcode
     * - encoding: Set by user.
     * - decoding: Set by libavcodec.
     */
    int16_t (*motion_val[2])[2];
    /** Macroblock type table
     * macroblock type table
     * mb_type_base + mb_width + 2
     * - encoding: Set by user.
     * - decoding: Set by libavcodec.
     */
    uint32_t *mb_type;
    /** DCT coefficients
     * DCT coefficients
     * - encoding: unused
     * - decoding: Set by libavcodec.
     */
    short *dct_coeff;
    /** Reference frame list
     * motion reference frame index
     * the order in which these are stored can depend on the codec.
     * - encoding: Set by user.
     * - decoding: Set by libavcodec.
     */
    int8_t *ref_index[2];
    /**
     * for some private data of the user
     * - encoding: unused
     * - decoding: Set by user.
     */
    void *opaque;
    /**
     * error
     * - encoding: Set by libavcodec. if flags&CODEC_FLAG_PSNR.
     * - decoding: unused
     */
    uint64_t error[AV_NUM_DATA_POINTERS];
    /**
     * type of the buffer (to keep track of who has to deallocate data[*])
     * - encoding: Set by the one who allocates it.
     * - decoding: Set by the one who allocates it.
     * Note: User allocated (direct rendering) & internal buffers cannot coexist currently.
     */
    int type;
    /**
     * When decoding, this signals how much the picture must be delayed.
     * extra_delay = repeat_pict / (2*fps)
     * - encoding: unused
     * - decoding: Set by libavcodec.
     */
    int repeat_pict;
    /**
     * The content of the picture is interlaced.
     * - encoding: Set by user.
     * - decoding: Set by libavcodec. (default 0)
     */
    int interlaced_frame;
    /**
     * If the content is interlaced, is top field displayed first.
     * - encoding: Set by user.
     * - decoding: Set by libavcodec.
     */
    int top_field_first;
    /**
     * Tell user application that palette has changed from previous frame.
     * - encoding: ??? (no palette-enabled encoder yet)
     * - decoding: Set by libavcodec. (default 0).
     */
    int palette_has_changed;
    /**
     * codec suggestion on buffer type if != 0
     * - encoding: unused
     * - decoding: Set by libavcodec. (before get_buffer() call)).
     */
    int buffer_hints;
    /**
     * Pan scan.
     * - encoding: Set by user.
     * - decoding: Set by libavcodec.
     */
    AVPanScan *pan_scan;
    /**
     * reordered opaque 64bit (generally an integer or a double precision float
     * PTS but can be anything).
     * The user sets AVCodecContext.reordered_opaque to represent the input at
     * that time,
     * the decoder reorders values as needed and sets AVFrame.reordered_opaque
     * to exactly one of the values provided by the user through AVCodecContext.reordered_opaque
     * @deprecated in favor of pkt_pts
     * - encoding: unused
     * - decoding: Read by user.
     */
    int64_t reordered_opaque;
    /**
     * hardware accelerator private data (FFmpeg-allocated)
     * - encoding: unused
     * - decoding: Set by libavcodec
     */
    void *hwaccel_picture_private;
    /**
     * the AVCodecContext which ff_thread_get_buffer() was last called on
     * - encoding: Set by libavcodec.
     * - decoding: Set by libavcodec.
     */
    struct AVCodecContext *owner;
    /**
     * used by multithreading to store frame-specific info
     * - encoding: Set by libavcodec.
     * - decoding: Set by libavcodec.
     */
    void *thread_opaque;
    /**
     * log2 of the size of the block which a single vector in motion_val represents:
     * (4->16x16, 3->8x8, 2-> 4x4, 1-> 2x2)
     * - encoding: unused
     * - decoding: Set by libavcodec.
     */
    uint8_t motion_subsample_log2;
    /** (Audio) sample rate
     * Sample rate of the audio data.
     *
     * - encoding: unused
     * - decoding: read by user
     */
    int sample_rate;
    /**
     * Channel layout of the audio data.
     *
     * - encoding: unused
     * - decoding: read by user.
     */
    uint64_t channel_layout;
    /**
     * frame timestamp estimated using various heuristics, in stream time base
     * Code outside libavcodec should access this field using:
     * av_frame_get_best_effort_timestamp(frame)
     * - encoding: unused
     * - decoding: set by libavcodec, read by user.
     */
    int64_t best_effort_timestamp;
    /**
     * reordered pos from the last AVPacket that has been input into the decoder
     * Code outside libavcodec should access this field using:
     * av_frame_get_pkt_pos(frame)
     * - encoding: unused
     * - decoding: Read by user.
     */
    int64_t pkt_pos;
    /**
     * duration of the corresponding packet, expressed in
     * AVStream->time_base units, 0 if unknown.
     * Code outside libavcodec should access this field using:
     * av_frame_get_pkt_duration(frame)
     * - encoding: unused
     * - decoding: Read by user.
     */
    int64_t pkt_duration;
    /**
     * metadata.
     * Code outside libavcodec should access this field using:
     * av_frame_get_metadata(frame)
     * - encoding: Set by user.
     * - decoding: Set by libavcodec.
     */
    AVDictionary *metadata;
    /**
     * decode error flags of the frame, set to a combination of
     * FF_DECODE_ERROR_xxx flags if the decoder produced a frame, but there
     * were errors during the decoding.
     * Code outside libavcodec should access this field using:
     * av_frame_get_decode_error_flags(frame)
     * - encoding: unused
     * - decoding: set by libavcodec, read by user.
     */
    int decode_error_flags;
#define FF_DECODE_ERROR_INVALID_BITSTREAM   1
#define FF_DECODE_ERROR_MISSING_REFERENCE   2
    /**
     * number of audio channels, only used for audio.
     * Code outside libavcodec should access this field using:
     * av_frame_get_channels(frame)
     * - encoding: unused
     * - decoding: Read by user.
     */
    int64_t channels;
} AVFrame;

Key fields

uint8_t *data[AV_NUM_DATA_POINTERS]: decoded raw data (YUV/RGB for video, PCM for audio)
int linesize[AV_NUM_DATA_POINTERS]: size in bytes of one "row" of data[]. Note: this is not necessarily equal to the image width; because of alignment padding it is usually larger.
int width, height: width and height of the video frame (1920x1080, 1280x720, ...)
int nb_samples: number of audio samples (per channel) contained in this AVFrame
int format: format of the decoded raw data (YUV420P, YUV422P, RGB24, ...)
int key_frame: whether this is a keyframe
enum AVPictureType pict_type: frame type (I, B, P, ...)
AVRational sample_aspect_ratio: sample (pixel) aspect ratio of the frame, from which the display aspect ratio (16:9, 4:3, ...) is derived
int64_t pts: presentation timestamp
int coded_picture_number: picture number in coding (bitstream) order
int display_picture_number: picture number in display order
int8_t *qscale_table: QP table
uint8_t *mbskip_table: macroblock skip table
int16_t (*motion_val[2])[2]: motion vector table
uint32_t *mb_type: macroblock type table
short *dct_coeff: DCT coefficients (I have never extracted these myself)
int8_t *ref_index[2]: motion-estimation reference frame list (multiple reference frames only really come into play with newer standards such as H.264)
int interlaced_frame: whether the frame is interlaced
uint8_t motion_subsample_log2: log2 of the block size that a single motion vector in motion_val represents (4 -> 16x16, 3 -> 8x8, 2 -> 4x4, 1 -> 2x2)
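
As a quick illustration of how a few of these fields show up in practice, here is a minimal sketch (assuming a reasonably modern FFmpeg where AVFrame lives in libavutil/frame.h) that prints the most commonly used ones right after a frame has been decoded. The surrounding decode loop is omitted, and dump_frame_info is just a hypothetical helper name.

#include <inttypes.h>
#include <stdio.h>
#include <libavutil/avutil.h>
#include <libavutil/frame.h>
#include <libavutil/pixdesc.h>

// Print the AVFrame fields discussed above for a freshly decoded video frame.
static void dump_frame_info(const AVFrame *frame)
{
    printf("frame: %dx%d, format=%s, key_frame=%d, pict_type=%c\n",
           frame->width, frame->height,
           av_get_pix_fmt_name((enum AVPixelFormat)frame->format),
           frame->key_frame,
           av_get_picture_type_char(frame->pict_type));
    printf("pts=%" PRId64 ", linesize[0]=%d (may be larger than width because of padding)\n",
           frame->pts, frame->linesize[0]);
}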

Supplementary notes

For example,

YUV data:

data[0] holds the Y plane, data[1] holds the U plane, and data[2] holds the V plane.

linesize[0] is the size of one row of Y, linesize[1] the size of one row of U, and linesize[2] the size of one row of V.

width/height need no explanation.

format indicates the data format: YUV420P / YUV444P / RGB24, etc.
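
To make the data[]/linesize[] relationship concrete, here is a small sketch (not from the original article) that walks a YUV420P frame plane by plane: each row starts at data[p] + row * linesize[p], but only width bytes (width/2 for the chroma planes) of each row are actual pixels.

#include <stdint.h>
#include <string.h>
#include <libavutil/frame.h>

// Copy the visible pixels of a YUV420P AVFrame into a tightly packed buffer,
// dropping the per-row alignment padding that linesize may contain.
static void copy_yuv420p_tight(const AVFrame *frame, uint8_t *dst)
{
    for (int plane = 0; plane < 3; plane++) {
        int rowBytes = (plane == 0) ? frame->width  : frame->width  / 2; // chroma is half width
        int rows     = (plane == 0) ? frame->height : frame->height / 2; // and half height
        for (int row = 0; row < rows; row++) {
            memcpy(dst, frame->data[plane] + (size_t)row * frame->linesize[plane], rowBytes);
            dst += rowBytes;
        }
    }
}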

Cropping a YUV420 AVFrame

Original frame (image omitted)

FFMPEG struct analysis: AVFormatContext, AVStream (images omitted)

Implementing the crop

Top-left crop

Approach

Since rendering starts from the top-left corner by default, all we actually need to do is shrink the width and height reported by the AVFrame; the renderer keeps using linesize as the row stride, so the pixel data itself does not have to move.

Implementation

...
  // after obtaining AVFrame* pCropFrame
  int wDst = pCropFrame->width / 2;
  int hDst = pCropFrame->height / 2;
  // set both width and height to half of their previous values;
  // linesize[] is left untouched, so each row keeps its original stride
  pCropFrame->width = wDst;
  pCropFrame->height = hDst;
  ...
  // display pCropFrame

Top-right crop

Approach

Start each row at 1/2 of the original picture width: copy the top-right quarter over the top-left area, then set the AVFrame width and height to half of the original.

In other words, replace the top-left content with the (1/2 hSrc) x (1/2 wSrc) block of pixels from the top-right corner.

Implementation

  // after obtaining pCropFrame
  ...
  int wSrc = pCropFrame->width;
  int hSrc = pCropFrame->height;
  int wDst = wSrc / 2;
  int hDst = hSrc / 2;
  // a top-left crop would only need:
  //pCropFrame->width = wDst;
  //pCropFrame->height = hDst;
  // top-right crop: row strides of the three planes
  int lineSizeY = pCropFrame->linesize[0];
  int lineSizeU = pCropFrame->linesize[1];
  int lineSizeV = pCropFrame->linesize[2];
  // crop Y: for each of the top hDst rows, copy the right half of the image row
  // (starting at column wSrc/2) to the beginning of that row; note that the row
  // stride is linesize[0], which may be larger than the image width
  for (int line = 0; line < hDst; line++)
  {
    memcpy(pCropFrame->data[0] + line * lineSizeY,
      pCropFrame->data[0] + line * lineSizeY + wSrc / 2,
      wDst);
  }
  // crop U: the chroma planes are half the width and half the height of Y
  for (int line = 0; line < hDst / 2; line++)
  {
    memcpy(pCropFrame->data[1] + line * lineSizeU,
      pCropFrame->data[1] + line * lineSizeU + wSrc / 4,
      wDst / 2);
  }
  // crop V
  for (int line = 0; line < hDst / 2; line++)
  {
    memcpy(pCropFrame->data[2] + line * lineSizeV,
      pCropFrame->data[2] + line * lineSizeV + wSrc / 4,
      wDst / 2);
  }
  pCropFrame->width = wDst;
  pCropFrame->height = hDst;
  ...
  // display pCropFrame
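
The same idea generalizes to cropping an arbitrary rectangle without copying any pixels at all: advance each plane pointer to the top-left corner of the region you want and shrink width/height, relying on the consumer to honour linesize as the row stride. The helper below is a sketch of that variant, not code from this article; crop_x and crop_y are hypothetical parameters and, for YUV420P, should be even so the chroma planes stay aligned.

#include <stddef.h>
#include <libavutil/frame.h>

// Sketch: "crop" a YUV420P AVFrame in place by offsetting its plane pointers.
// No pixel data is moved; the underlying buffers are still owned by the frame.
static void crop_yuv420p_in_place(AVFrame *frame,
                                  int crop_x, int crop_y,  // top-left corner, even values
                                  int crop_w, int crop_h)  // size of the region to keep
{
    frame->data[0] += (size_t)crop_y * frame->linesize[0] + crop_x;           // Y plane
    frame->data[1] += (size_t)(crop_y / 2) * frame->linesize[1] + crop_x / 2; // U plane
    frame->data[2] += (size_t)(crop_y / 2) * frame->linesize[2] + crop_x / 2; // V plane
    frame->width  = crop_w;
    frame->height = crop_h;
    // linesize[] is unchanged, so each row keeps the original stride.
}

With this helper, the top-right crop above would simply be crop_yuv420p_in_place(pCropFrame, wSrc / 2, 0, wSrc / 2, hSrc / 2), and the top-left crop would be crop_yuv420p_in_place(pCropFrame, 0, 0, wSrc / 2, hSrc / 2).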
