ffmpeg.c (4.3.1) Source Code Analysis (Part 1)

Abstract: An analysis of the ffmpeg.c source code in FFmpeg 4.3.1, part 1.

Preface

This article studies and dissects the ffmpeg.c source code.


1. FFmpeg Source Code Structure Diagram

(Figure: ffmpeg整体流程.jpg, the overall FFmpeg call-flow diagram)

The diagram above is explained below:

  • Function background colors
    • Each function appears in the diagram as a box, and its background color indicates its role:
      • Pink background: FFmpeg API functions.
      • White background: FFmpeg internal functions.
      • Yellow background: functions of the URLProtocol structure, which implement reading and writing the various protocols.
      • Green background: functions of the AVOutputFormat structure, which implement reading and writing the various container formats.
      • Blue background: functions of the AVCodec structure, which implement encoding and decoding.
  • Regions
    • The diagram divides into the following regions:
      • Left region (architecture functions): these functions are not tied to any particular format.
      • Upper-right yellow region (protocol handling functions): different protocols (RTP, RTMP, FILE) call different protocol handlers.
      • Middle-right green region (container handling functions): different container formats (MKV, FLV, MPEG2TS, AVI) call different muxing handlers.
      • Lower-right blue region (codec functions): different coding standards (HEVC, H.264, MPEG2) call different codec functions.
  • Arrow lines
    • To make the call relationships clearer, the arrows are also color-coded:
      • Red arrows: the encoding pipeline.
      • Arrows of other colors: call relationships between functions, where:
        • calls to URLProtocol functions are drawn as yellow arrows;
        • calls to AVOutputFormat functions are drawn as green arrows;
        • calls to AVCodec functions are drawn as blue arrows.
  • Source files
    • Each function is labeled with the path of the source file it lives in.
  • Left region (architecture functions)
  • Upper-right region (URLProtocol protocol handlers). The URLProtocol structure contains the following protocol-handling function pointers:
    • url_open(): open
    • url_read(): read
    • url_write(): write
    • url_seek(): seek
    • url_close(): close
  • Here are examples showing how different protocols provide different implementations behind these interfaces (a structural sketch follows this list):
    • The file protocol (plain files) uses the URLProtocol structure ff_file_protocol:
      • url_open() -> file_open() -> open()
      • url_read() -> file_read() -> read()
      • url_write() -> file_write() -> write()
      • url_seek() -> file_seek() -> lseek()
      • url_close() -> file_close() -> close()
    • The RTMP protocol (via libRTMP) uses the URLProtocol structure ff_librtmp_protocol:
      • url_open() -> rtmp_open() -> RTMP_Init(), RTMP_SetupURL(), RTMP_Connect(), RTMP_ConnectStream()
      • url_read() -> rtmp_read() -> RTMP_Read()
      • url_write() -> rtmp_write() -> RTMP_Write()
      • url_seek() -> rtmp_read_seek() -> RTMP_SendSeek()
      • url_close() -> rtmp_close() -> RTMP_Close()
    • The UDP protocol uses the URLProtocol structure ff_udp_protocol:
      • url_open() -> udp_open()
      • url_read() -> udp_read()
      • url_write() -> udp_write()
      • url_close() -> udp_close()
      • (UDP is not seekable, so ff_udp_protocol provides no url_seek() implementation.)
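To make the pattern concrete, below is a minimal sketch of what such a protocol implementation looks like. Note that URLProtocol is an internal structure (declared in libavformat/url.h, which is not installed with the public headers), so this only compiles inside the FFmpeg source tree; ff_sample_protocol and the sample_*() callbacks are hypothetical names invented for illustration.

/* Sketch of a custom protocol; hypothetical names, internal API. */
#include "url.h"   /* internal header: libavformat/url.h */

static int sample_open(URLContext *h, const char *uri, int flags)
{
    /* parse uri, acquire the underlying resource, keep state in h->priv_data */
    return 0;
}

static int sample_read(URLContext *h, unsigned char *buf, int size)
{
    /* fill buf with at most size bytes; return bytes read or an AVERROR code */
    return AVERROR_EOF;
}

static int sample_close(URLContext *h)
{
    /* release whatever sample_open() acquired */
    return 0;
}

const URLProtocol ff_sample_protocol = {
    .name      = "sample",
    .url_open  = sample_open,
    .url_read  = sample_read,
    .url_close = sample_close,
    /* no .url_seek: like UDP, this toy protocol is not seekable */
};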
  • Middle-right region (AVOutputFormat container handlers). The AVOutputFormat structure contains the following muxing function pointers:
    • write_header(): write the file header
    • write_packet(): write one frame of data
    • write_trailer(): write the file trailer
  • Here are examples showing how different container formats provide different implementations behind these interfaces (a structural sketch follows this list):
    • The FLV muxer's AVOutputFormat structure ff_flv_muxer:
      • write_header() -> flv_write_header()
      • write_packet() -> flv_write_packet()
      • write_trailer() -> flv_write_trailer()
    • The MKV muxer's AVOutputFormat structure ff_matroska_muxer:
      • write_header() -> mkv_write_header()
      • write_packet() -> mkv_write_flush_packet()
      • write_trailer() -> mkv_write_trailer()
    • The MPEG2TS muxer's AVOutputFormat structure ff_mpegts_muxer:
      • write_header() -> mpegts_write_header()
      • write_packet() -> mpegts_write_packet()
      • write_trailer() -> mpegts_write_end()
    • The AVI muxer's AVOutputFormat structure ff_avi_muxer:
      • write_header() -> avi_write_header()
      • write_packet() -> avi_write_packet()
      • write_trailer() -> avi_write_trailer()
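As a concrete sketch, a muxer is simply an AVOutputFormat with these three callbacks filled in. These function-pointer fields are below the "public API" line in avformat.h, so a real muxer is registered inside libavformat; ff_sample_muxer and the sample_write_*() functions below are hypothetical names for illustration.

/* Sketch of a custom muxer; hypothetical names, in-tree registration. */
#include "libavformat/avformat.h"

static int sample_write_header(AVFormatContext *s)
{
    /* emit the container's header bytes through s->pb (the AVIOContext) */
    return 0;
}

static int sample_write_packet(AVFormatContext *s, AVPacket *pkt)
{
    /* wrap one packet in the container's framing and write it out */
    avio_write(s->pb, pkt->data, pkt->size);
    return 0;
}

static int sample_write_trailer(AVFormatContext *s)
{
    /* emit the trailer; many muxers also seek back and patch the header */
    return 0;
}

AVOutputFormat ff_sample_muxer = {
    .name          = "sample",
    .long_name     = "sample container (sketch)",
    .video_codec   = AV_CODEC_ID_H264,
    .write_header  = sample_write_header,
    .write_packet  = sample_write_packet,
    .write_trailer = sample_write_trailer,
};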
  • Lower-right region (AVCodec codec functions). The AVCodec structure contains the following codec function pointers:
    • init(): initialize
    • encode2(): encode one frame of data
    • close(): close
  • Here are examples showing how different codecs provide different implementations behind these interfaces (a structural sketch follows this list):
    • The HEVC encoder's AVCodec structure ff_libx265_encoder:
      • init() -> libx265_encode_init() -> x265_param_alloc(), x265_param_default_preset(), x265_encoder_open()
      • encode2() -> libx265_encode_frame() -> x265_encoder_encode()
      • close() -> libx265_encode_close() -> x265_param_free(), x265_encoder_close()
    • The H.264 encoder's AVCodec structure ff_libx264_encoder:
      • init() -> X264_init() -> x264_param_default(), x264_encoder_open(), x264_encoder_headers()
      • encode2() -> X264_frame() -> x264_encoder_encode()
      • close() -> X264_close() -> x264_encoder_close()
    • The VP8 encoder's (libvpx) AVCodec structure ff_libvpx_vp8_encoder:
      • init() -> vpx_init() -> vpx_codec_enc_config_default()
      • encode2() -> vp8_encode() -> vpx_codec_enc_init(), vpx_codec_encode()
      • close() -> vp8_free() -> vpx_codec_destroy()
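The same pattern holds for encoders: an encoder is an AVCodec with init/encode2/close filled in, registered inside libavcodec. A minimal sketch, with hypothetical names (ff_sample_encoder and the sample_*() callbacks):

/* Sketch of a custom encoder; hypothetical names, in-tree registration. */
#include "libavcodec/avcodec.h"

static int sample_init(AVCodecContext *avctx)
{
    /* allocate and configure the backing encoder; e.g. the x265 wrapper
     * calls x265_param_alloc() and x265_encoder_open() here */
    return 0;
}

static int sample_encode(AVCodecContext *avctx, AVPacket *pkt,
                         const AVFrame *frame, int *got_packet)
{
    /* compress one frame into pkt; set *got_packet to 1 when pkt is filled */
    *got_packet = 0;
    return 0;
}

static int sample_close(AVCodecContext *avctx)
{
    /* free everything sample_init() allocated */
    return 0;
}

AVCodec ff_sample_encoder = {
    .name    = "sample",
    .type    = AVMEDIA_TYPE_VIDEO,
    .id      = AV_CODEC_ID_H264,
    .init    = sample_init,
    .encode2 = sample_encode,
    .close   = sample_close,
};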

2. The ffmpeg.h Header File in Detail

The contents of ffmpeg.h are as follows:

/*
 * This file is part of FFmpeg.
 *
 * FFmpeg is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * FFmpeg is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with FFmpeg; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 */
#ifndef FFTOOLS_FFMPEG_H
#define FFTOOLS_FFMPEG_H
#include "config.h"
#include <stdint.h>
#include <stdio.h>
#include <signal.h>
#include "cmdutils.h"
#include "libavformat/avformat.h"
#include "libavformat/avio.h"
#include "libavcodec/avcodec.h"
#include "libavfilter/avfilter.h"
#include "libavutil/avutil.h"
#include "libavutil/dict.h"
#include "libavutil/eval.h"
#include "libavutil/fifo.h"
#include "libavutil/hwcontext.h"
#include "libavutil/pixfmt.h"
#include "libavutil/rational.h"
#include "libavutil/thread.h"
#include "libavutil/threadmessage.h"
#include "libswresample/swresample.h"
#define VSYNC_AUTO       -1
#define VSYNC_PASSTHROUGH 0
#define VSYNC_CFR         1
#define VSYNC_VFR         2
#define VSYNC_VSCFR       0xfe
#define VSYNC_DROP        0xff
#define MAX_STREAMS 1024    /* arbitrary sanity check value */
enum HWAccelID {
    HWACCEL_NONE = 0,
    HWACCEL_AUTO,
    HWACCEL_GENERIC,
    HWACCEL_VIDEOTOOLBOX,
    HWACCEL_QSV,
};
typedef struct HWAccel {
    const char *name;
    int (*init)(AVCodecContext *s);
    enum HWAccelID id;
    enum AVPixelFormat pix_fmt;
} HWAccel;
typedef struct HWDevice {
    const char *name;
    enum AVHWDeviceType type;
    AVBufferRef *device_ref;
} HWDevice;
/* select an input stream for an output stream */
typedef struct StreamMap {
    int disabled;           /* 1 is this mapping is disabled by a negative map */
    int file_index;
    int stream_index;
    int sync_file_index;
    int sync_stream_index;
    char *linklabel;       /* name of an output link, for mapping lavfi outputs */
} StreamMap;
typedef struct {
    int  file_idx,  stream_idx,  channel_idx; // input
    int ofile_idx, ostream_idx;               // output
} AudioChannelMap;
typedef struct OptionsContext {
    OptionGroup *g;
    /* input/output options */
    int64_t start_time;
    int64_t start_time_eof;
    int seek_timestamp;
    const char *format;
    SpecifierOpt *codec_names;
    int        nb_codec_names;
    SpecifierOpt *audio_channels;
    int        nb_audio_channels;
    SpecifierOpt *audio_sample_rate;
    int        nb_audio_sample_rate;
    SpecifierOpt *frame_rates;
    int        nb_frame_rates;
    SpecifierOpt *frame_sizes;
    int        nb_frame_sizes;
    SpecifierOpt *frame_pix_fmts;
    int        nb_frame_pix_fmts;
    /* input options */
    int64_t input_ts_offset;
    int loop;
    int rate_emu;
    int accurate_seek;
    int thread_queue_size;
    SpecifierOpt *ts_scale;
    int        nb_ts_scale;
    SpecifierOpt *dump_attachment;
    int        nb_dump_attachment;
    SpecifierOpt *hwaccels;
    int        nb_hwaccels;
    SpecifierOpt *hwaccel_devices;
    int        nb_hwaccel_devices;
    SpecifierOpt *hwaccel_output_formats;
    int        nb_hwaccel_output_formats;
    SpecifierOpt *autorotate;
    int        nb_autorotate;
    /* output options */
    StreamMap *stream_maps;
    int     nb_stream_maps;
    AudioChannelMap *audio_channel_maps; /* one info entry per -map_channel */
    int           nb_audio_channel_maps; /* number of (valid) -map_channel settings */
    int metadata_global_manual;
    int metadata_streams_manual;
    int metadata_chapters_manual;
    const char **attachments;
    int       nb_attachments;
    int chapters_input_file;
    int64_t recording_time;
    int64_t stop_time;
    uint64_t limit_filesize;
    float mux_preload;
    float mux_max_delay;
    int shortest;
    int bitexact;
    int video_disable;
    int audio_disable;
    int subtitle_disable;
    int data_disable;
    /* indexed by output file stream index */
    int   *streamid_map;
    int nb_streamid_map;
    SpecifierOpt *metadata;
    int        nb_metadata;
    SpecifierOpt *max_frames;
    int        nb_max_frames;
    SpecifierOpt *bitstream_filters;
    int        nb_bitstream_filters;
    SpecifierOpt *codec_tags;
    int        nb_codec_tags;
    SpecifierOpt *sample_fmts;
    int        nb_sample_fmts;
    SpecifierOpt *qscale;
    int        nb_qscale;
    SpecifierOpt *forced_key_frames;
    int        nb_forced_key_frames;
    SpecifierOpt *force_fps;
    int        nb_force_fps;
    SpecifierOpt *frame_aspect_ratios;
    int        nb_frame_aspect_ratios;
    SpecifierOpt *rc_overrides;
    int        nb_rc_overrides;
    SpecifierOpt *intra_matrices;
    int        nb_intra_matrices;
    SpecifierOpt *inter_matrices;
    int        nb_inter_matrices;
    SpecifierOpt *chroma_intra_matrices;
    int        nb_chroma_intra_matrices;
    SpecifierOpt *top_field_first;
    int        nb_top_field_first;
    SpecifierOpt *metadata_map;
    int        nb_metadata_map;
    SpecifierOpt *presets;
    int        nb_presets;
    SpecifierOpt *copy_initial_nonkeyframes;
    int        nb_copy_initial_nonkeyframes;
    SpecifierOpt *copy_prior_start;
    int        nb_copy_prior_start;
    SpecifierOpt *filters;
    int        nb_filters;
    SpecifierOpt *filter_scripts;
    int        nb_filter_scripts;
    SpecifierOpt *reinit_filters;
    int        nb_reinit_filters;
    SpecifierOpt *fix_sub_duration;
    int        nb_fix_sub_duration;
    SpecifierOpt *canvas_sizes;
    int        nb_canvas_sizes;
    SpecifierOpt *pass;
    int        nb_pass;
    SpecifierOpt *passlogfiles;
    int        nb_passlogfiles;
    SpecifierOpt *max_muxing_queue_size;
    int        nb_max_muxing_queue_size;
    SpecifierOpt *guess_layout_max;
    int        nb_guess_layout_max;
    SpecifierOpt *apad;
    int        nb_apad;
    SpecifierOpt *discard;
    int        nb_discard;
    SpecifierOpt *disposition;
    int        nb_disposition;
    SpecifierOpt *program;
    int        nb_program;
    SpecifierOpt *time_bases;
    int        nb_time_bases;
    SpecifierOpt *enc_time_bases;
    int        nb_enc_time_bases;
} OptionsContext;
typedef struct InputFilter {
    AVFilterContext    *filter;
    struct InputStream *ist;
    struct FilterGraph *graph;
    uint8_t            *name;
    enum AVMediaType    type;   // AVMEDIA_TYPE_SUBTITLE for sub2video
    AVFifoBuffer *frame_queue;
    // parameters configured for this input
    int format;
    int width, height;
    AVRational sample_aspect_ratio;
    int sample_rate;
    int channels;
    uint64_t channel_layout;
    AVBufferRef *hw_frames_ctx;
    int eof;
} InputFilter;
typedef struct OutputFilter {
    AVFilterContext     *filter;
    struct OutputStream *ost;
    struct FilterGraph  *graph;
    uint8_t             *name;
    /* temporary storage until stream maps are processed */
    AVFilterInOut       *out_tmp;
    enum AVMediaType     type;
    /* desired output stream properties */
    int width, height;
    AVRational frame_rate;
    int format;
    int sample_rate;
    uint64_t channel_layout;
    // those are only set if no format is specified and the encoder gives us multiple options
    int *formats;
    uint64_t *channel_layouts;
    int *sample_rates;
} OutputFilter;
typedef struct FilterGraph {
    int            index;
    const char    *graph_desc;
    AVFilterGraph *graph;
    int reconfiguration;
    InputFilter   **inputs;
    int          nb_inputs;
    OutputFilter **outputs;
    int         nb_outputs;
} FilterGraph;
typedef struct InputStream {
    int file_index;
    AVStream *st;
    int discard;             /* true if stream data should be discarded */
    int user_set_discard;
    int decoding_needed;     /* non zero if the packets must be decoded in 'raw_fifo', see DECODING_FOR_* */
#define DECODING_FOR_OST    1
#define DECODING_FOR_FILTER 2
    AVCodecContext *dec_ctx;
    AVCodec *dec;
    AVFrame *decoded_frame;
    AVFrame *filter_frame; /* a ref of decoded_frame, to be sent to filters */
    int64_t       start;     /* time when read started */
    /* predicted dts of the next packet read for this stream or (when there are
     * several frames in a packet) of the next frame in current packet (in AV_TIME_BASE units) */
    int64_t       next_dts;
    int64_t       dts;       ///< dts of the last packet read for this stream (in AV_TIME_BASE units)
    int64_t       next_pts;  ///< synthetic pts for the next decode frame (in AV_TIME_BASE units)
    int64_t       pts;       ///< current pts of the decoded frame  (in AV_TIME_BASE units)
    int           wrap_correction_done;
    int64_t filter_in_rescale_delta_last;
    int64_t min_pts; /* pts with the smallest value in a current stream */
    int64_t max_pts; /* pts with the higher value in a current stream */
    // when forcing constant input framerate through -r,
    // this contains the pts that will be given to the next decoded frame
    int64_t cfr_next_pts;
    int64_t nb_samples; /* number of samples in the last decoded audio frame before looping */
    double ts_scale;
    int saw_first_ts;
    AVDictionary *decoder_opts;
    AVRational framerate;               /* framerate forced with -r */
    int top_field_first;
    int guess_layout_max;
    int autorotate;
    int fix_sub_duration;
    struct { /* previous decoded subtitle and related variables */
        int got_output;
        int ret;
        AVSubtitle subtitle;
    } prev_sub;
    struct sub2video {
        int64_t last_pts;
        int64_t end_pts;
        AVFifoBuffer *sub_queue;    ///< queue of AVSubtitle* before filter init
        AVFrame *frame;
        int w, h;
        unsigned int initialize; ///< marks if sub2video_update should force an initialization
    } sub2video;
    int dr1;
    /* decoded data from this stream goes into all those filters
     * currently video and audio only */
    InputFilter **filters;
    int        nb_filters;
    int reinit_filters;
    /* hwaccel options */
    enum HWAccelID hwaccel_id;
    enum AVHWDeviceType hwaccel_device_type;
    char  *hwaccel_device;
    enum AVPixelFormat hwaccel_output_format;
    /* hwaccel context */
    void  *hwaccel_ctx;
    void (*hwaccel_uninit)(AVCodecContext *s);
    int  (*hwaccel_get_buffer)(AVCodecContext *s, AVFrame *frame, int flags);
    int  (*hwaccel_retrieve_data)(AVCodecContext *s, AVFrame *frame);
    enum AVPixelFormat hwaccel_pix_fmt;
    enum AVPixelFormat hwaccel_retrieved_pix_fmt;
    AVBufferRef *hw_frames_ctx;
    /* stats */
    // combined size of all the packets read
    uint64_t data_size;
    /* number of packets successfully read for this stream */
    uint64_t nb_packets;
    // number of frames/samples retrieved from the decoder
    uint64_t frames_decoded;
    uint64_t samples_decoded;
    int64_t *dts_buffer;
    int nb_dts_buffer;
    int got_output;
} InputStream;
typedef struct InputFile {
    AVFormatContext *ctx;
    int eof_reached;      /* true if eof reached */
    int eagain;           /* true if last read attempt returned EAGAIN */
    int ist_index;        /* index of first stream in input_streams */
    int loop;             /* set number of times input stream should be looped */
    int64_t duration;     /* actual duration of the longest stream in a file
                             at the moment when looping happens */
    AVRational time_base; /* time base of the duration */
    int64_t input_ts_offset;
    int64_t ts_offset;
    int64_t last_ts;
    int64_t start_time;   /* user-specified start time in AV_TIME_BASE or AV_NOPTS_VALUE */
    int seek_timestamp;
    int64_t recording_time;
    int nb_streams;       /* number of stream that ffmpeg is aware of; may be different
                             from ctx.nb_streams if new streams appear during av_read_frame() */
    int nb_streams_warn;  /* number of streams that the user was warned of */
    int rate_emu;
    int accurate_seek;
#if HAVE_THREADS
    AVThreadMessageQueue *in_thread_queue;
    pthread_t thread;           /* thread reading from this file */
    int non_blocking;           /* reading packets from the thread should not block */
    int joined;                 /* the thread has been joined */
    int thread_queue_size;      /* maximum number of queued packets */
#endif
} InputFile;
enum forced_keyframes_const {
    FKF_N,
    FKF_N_FORCED,
    FKF_PREV_FORCED_N,
    FKF_PREV_FORCED_T,
    FKF_T,
    FKF_NB
};
#define ABORT_ON_FLAG_EMPTY_OUTPUT        (1 <<  0)
#define ABORT_ON_FLAG_EMPTY_OUTPUT_STREAM (1 <<  1)
extern const char *const forced_keyframes_const_names[];
typedef enum {
    ENCODER_FINISHED = 1,
    MUXER_FINISHED = 2,
} OSTFinished ;
typedef struct OutputStream {
    int file_index;          /* file index */
    int index;               /* stream index in the output file */
    int source_index;        /* InputStream index */
    AVStream *st;            /* stream in the output file */
    int encoding_needed;     /* true if encoding needed for this stream */
    int frame_number;
    /* input pts and corresponding output pts
       for A/V sync */
    struct InputStream *sync_ist; /* input stream to sync against */
    int64_t sync_opts;       /* output frame counter, could be changed to some true timestamp */ // FIXME look at frame_number
    /* pts of the first frame encoded for this stream, used for limiting
     * recording time */
    int64_t first_pts;
    /* dts of the last packet sent to the muxer */
    int64_t last_mux_dts;
    // the timebase of the packets sent to the muxer
    AVRational mux_timebase;
    AVRational enc_timebase;
    AVBSFContext            *bsf_ctx;
    AVCodecContext *enc_ctx;
    AVCodecParameters *ref_par; /* associated input codec parameters with encoders options applied */
    AVCodec *enc;
    int64_t max_frames;
    AVFrame *filtered_frame;
    AVFrame *last_frame;
    int last_dropped;
    int last_nb0_frames[3];
    void  *hwaccel_ctx;
    /* video only */
    AVRational frame_rate;
    int is_cfr;
    int force_fps;
    int top_field_first;
    int rotate_overridden;
    double rotate_override_value;
    AVRational frame_aspect_ratio;
    /* forced key frames */
    int64_t forced_kf_ref_pts;
    int64_t *forced_kf_pts;
    int forced_kf_count;
    int forced_kf_index;
    char *forced_keyframes;
    AVExpr *forced_keyframes_pexpr;
    double forced_keyframes_expr_const_values[FKF_NB];
    /* audio only */
    int *audio_channels_map;             /* list of the channels id to pick from the source stream */
    int audio_channels_mapped;           /* number of channels in audio_channels_map */
    char *logfile_prefix;
    FILE *logfile;
    OutputFilter *filter;
    char *avfilter;
    char *filters;         ///< filtergraph associated to the -filter option
    char *filters_script;  ///< filtergraph script associated to the -filter_script option
    AVDictionary *encoder_opts;
    AVDictionary *sws_dict;
    AVDictionary *swr_opts;
    AVDictionary *resample_opts;
    char *apad;
    OSTFinished finished;        /* no more packets should be written for this stream */
    int unavailable;                     /* true if the steram is unavailable (possibly temporarily) */
    int stream_copy;
    // init_output_stream() has been called for this stream
    // The encoder and the bitstream filters have been initialized and the stream
    // parameters are set in the AVStream.
    int initialized;
    int inputs_done;
    const char *attachment_filename;
    int copy_initial_nonkeyframes;
    int copy_prior_start;
    char *disposition;
    int keep_pix_fmt;
    /* stats */
    // combined size of all the packets written
    uint64_t data_size;
    // number of packets send to the muxer
    uint64_t packets_written;
    // number of frames/samples sent to the encoder
    uint64_t frames_encoded;
    uint64_t samples_encoded;
    /* packet quality factor */
    int quality;
    int max_muxing_queue_size;
    /* the packets are buffered here until the muxer is ready to be initialized */
    AVFifoBuffer *muxing_queue;
    /* packet picture type */
    int pict_type;
    /* frame encode sum of squared error values */
    int64_t error[4];
} OutputStream;
typedef struct OutputFile {
    AVFormatContext *ctx;
    AVDictionary *opts;
    int ost_index;       /* index of the first stream in output_streams */
    int64_t recording_time;  ///< desired length of the resulting file in microseconds == AV_TIME_BASE units
    int64_t start_time;      ///< start time in microseconds == AV_TIME_BASE units
    uint64_t limit_filesize; /* filesize limit expressed in bytes */
    int shortest;
    int header_written;
} OutputFile;
extern InputStream **input_streams;
extern int        nb_input_streams;
extern InputFile   **input_files;
extern int        nb_input_files;
extern OutputStream **output_streams;
extern int         nb_output_streams;
extern OutputFile   **output_files;
extern int         nb_output_files;
extern FilterGraph **filtergraphs;
extern int        nb_filtergraphs;
extern char *vstats_filename;
extern char *sdp_filename;
extern float audio_drift_threshold;
extern float dts_delta_threshold;
extern float dts_error_threshold;
extern int audio_volume;
extern int audio_sync_method;
extern int video_sync_method;
extern float frame_drop_threshold;
extern int do_benchmark;
extern int do_benchmark_all;
extern int do_deinterlace;
extern int do_hex_dump;
extern int do_pkt_dump;
extern int copy_ts;
extern int start_at_zero;
extern int copy_tb;
extern int debug_ts;
extern int exit_on_error;
extern int abort_on_flags;
extern int print_stats;
extern int qp_hist;
extern int stdin_interaction;
extern int frame_bits_per_raw_sample;
extern AVIOContext *progress_avio;
extern float max_error_rate;
extern char *videotoolbox_pixfmt;
extern int filter_nbthreads;
extern int filter_complex_nbthreads;
extern int vstats_version;
extern const AVIOInterruptCB int_cb;
extern const OptionDef options[];
extern const HWAccel hwaccels[];
#if CONFIG_QSV
extern char *qsv_device;
#endif
extern HWDevice *filter_hw_device;
void term_init(void);
void term_exit(void);
void reset_options(OptionsContext *o, int is_input);
void show_usage(void);
void opt_output_file(void *optctx, const char *filename);
void remove_avoptions(AVDictionary **a, AVDictionary *b);
void assert_avoptions(AVDictionary *m);
int guess_input_channel_layout(InputStream *ist);
enum AVPixelFormat choose_pixel_fmt(AVStream *st, AVCodecContext *avctx, AVCodec *codec, enum AVPixelFormat target);
void choose_sample_fmt(AVStream *st, AVCodec *codec);
int configure_filtergraph(FilterGraph *fg);
int configure_output_filter(FilterGraph *fg, OutputFilter *ofilter, AVFilterInOut *out);
void check_filter_outputs(void);
int ist_in_filtergraph(FilterGraph *fg, InputStream *ist);
int filtergraph_is_simple(FilterGraph *fg);
int init_simple_filtergraph(InputStream *ist, OutputStream *ost);
int init_complex_filtergraph(FilterGraph *fg);
void sub2video_update(InputStream *ist, int64_t heartbeat_pts, AVSubtitle *sub);
int ifilter_parameters_from_frame(InputFilter *ifilter, const AVFrame *frame);
int ffmpeg_parse_options(int argc, char **argv);
int videotoolbox_init(AVCodecContext *s);
int qsv_init(AVCodecContext *s);
HWDevice *hw_device_get_by_name(const char *name);
int hw_device_init_from_string(const char *arg, HWDevice **dev);
void hw_device_free_all(void);
int hw_device_setup_for_decode(InputStream *ist);
int hw_device_setup_for_encode(OutputStream *ost);
int hw_device_setup_for_filter(FilterGraph *fg);
int hwaccel_decode_init(AVCodecContext *avctx);
int main_ffmpeg431(int argc, char** argv);
#endif /* FFTOOLS_FFMPEG_H */

Global variables and structures:

  • Input:
  • InputStream **input_streams = NULL;
  • int nb_input_streams = 0;
  • InputFile **input_files = NULL;
  • int nb_input_files = 0;
  • Output:
  • OutputStream **output_streams = NULL;
  • int nb_output_streams = 0;
  • OutputFile **output_files = NULL;
  • int nb_output_files = 0;

Here, input_streams is the array of input streams and nb_input_streams is the number of input streams; input_files is the array of input files (which may also be devices) and nb_input_files is the number of input files. The output-side variables are the exact counterparts and need no further explanation.

As you can see, files and streams are stored separately. So, as one would expect, the InputStream structure records the index of its owning file within input_files (file_index). The input stream array is filled like this: whenever a stream is found in an input file, it is appended to input_streams, so the streams of one input file sit contiguously in input_streams. Accordingly, the InputFile structure stores the index of its first stream within input_streams (ist_index) and the number of its streams placed there (nb_streams).

An output stream in output_streams stores the index of its owning output file within output_files (file_index), the index of its corresponding input stream within input_streams (source_index), and its own stream index inside the output file (index). An output file, in turn, only needs to store the index of its first stream in output_streams (ost_index). The sketch below shows how these indices tie the arrays together.
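A minimal sketch, using the globals and struct fields declared above (the loop mirrors how ffmpeg.c itself walks the streams of one input file; i and k stand for arbitrary valid indices):

/* Walk all streams belonging to input file i. */
InputFile *f = input_files[i];
for (int j = 0; j < f->nb_streams; j++) {
    InputStream *ist = input_streams[f->ist_index + j];
    /* ist->file_index == i: the stream points back at its owning file */
}

/* For output stream k, the three indices connect the arrays: */
OutputStream *ost = output_streams[k];
OutputFile   *of  = output_files[ost->file_index];       /* owning file  */
InputStream  *src = ost->source_index >= 0
                  ? input_streams[ost->source_index]     /* mapped input */
                  : NULL;                                /* e.g. lavfi   */
/* ost->index is the stream's index inside of->ctx (its AVFormatContext). */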

With the streams and files in place, what remains is the conversion itself. So what does the conversion process look like?

Answer: first open the input files; then, based on the input streams, set up and open the decoders; then, based on the output streams, set up and open the encoders; then create the output files and write a header for each of them; then loop, converting the input streams into the output streams and writing them into the output files; once conversion is done, exit the loop, write the file trailers, and finally close all the output files. A rough sketch of this flow against the public API follows.
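Below is a heavily simplified sketch of that flow written against the public FFmpeg 4.x API. All error handling and the per-stream decode/filter/encode machinery are omitted, and transcode_sketch() is a hypothetical helper invented here, not a function in ffmpeg.c:

/* Simplified transcode skeleton; hypothetical helper, public API only. */
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

static void transcode_sketch(const char *in_url, const char *out_url)
{
    AVFormatContext *ic = NULL, *oc = NULL;
    AVPacket pkt;

    /* 1. open the input file and probe its streams */
    avformat_open_input(&ic, in_url, NULL, NULL);
    avformat_find_stream_info(ic, NULL);

    /* 2./3. per stream: open a decoder, then an encoder
     * (avcodec_find_decoder()/avcodec_find_encoder(),
     *  avcodec_alloc_context3(), avcodec_open2()) */

    /* 4. create the output file and its streams */
    avformat_alloc_output_context2(&oc, NULL, NULL, out_url);
    avio_open(&oc->pb, out_url, AVIO_FLAG_WRITE);
    /* one avformat_new_stream() call per output stream ... */

    /* 5. write the container header */
    avformat_write_header(oc, NULL);

    /* 6. main loop: read -> (decode -> filter -> encode) -> mux */
    while (av_read_frame(ic, &pkt) >= 0) {
        /* a real transcode decodes, filters and re-encodes here; a pure
         * stream copy would just rescale timestamps and hand the packet
         * to the muxer (which takes ownership of it): */
        av_interleaved_write_frame(oc, &pkt);
    }

    /* 7. write the trailer and close everything */
    av_write_trailer(oc);
    avio_closep(&oc->pb);
    avformat_free_context(oc);
    avformat_close_input(&ic);
}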


Next part: ffmpeg.c (4.3.1) Source Code Analysis (Part 2): https://developer.aliyun.com/article/1473998
