Frame Reception

Introduction: As far as frame reception goes, the earlier discussion of the NAPI mechanism (and of the NIC driver path) already gave us the rough flow. Still, it is worth going over it again here, paying attention to a few details and sketching the whole picture more systematically. Admittedly the plot is a bit dull: single-threaded, one storyline, you know how it is.
Let's start from the interrupt. A NIC consists of a hardware MAC and a PHY. If you have looked at the x86 interrupt machinery before, you may recall the 8259A interrupt controller chip, which connects peripherals so that the CPU can service their interrupts. Interrupt initialization happens at boot time, partly even in assembly code that sets up the interrupt vector table; later on, start_kernel in init/main.c still contains interrupt setup functions.
Reference kernel: 2.6.32.61

/*
 * do_IRQ handles all normal device IRQ's (the special
 * SMP cross-CPU interrupts have their own specific
 * handlers).
 */
void __irq_entry do_IRQ(unsigned int irq)
{
    irq_enter();
    __DO_IRQ_SMTC_HOOK(irq);
    generic_handle_irq(irq);
    irq_exit();
}
This code is from arch/mips/kernel/irq.c (the hardware architecture here is MIPS).
Note generic_handle_irq(irq): using irq, it looks up the interrupt vector table and finds the interrupt service routine that we registered back when the NIC driver was initialized:

static inline int __must_check
request_irq(unsigned int irq, irq_handler_t handler, unsigned long flags,
            const char *name, void *dev)
{
    return request_threaded_irq(irq, handler, NULL, flags, name, dev);
}
This is the interrupt registration API from include/linux/interrupt.h.
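For context, here is a minimal sketch of how a NIC driver might hook its handler into that machinery. Everything in it (my_nic_priv, my_nic_isr, my_nic_setup_irq, the IRQF_SHARED choice) is hypothetical and only illustrates the request_irq call; it is not taken from any real driver.

#include <linux/interrupt.h>

/* Hypothetical per-device context; the names are illustrative only. */
struct my_nic_priv {
    int queue_id;
    int irq;
};

/* Top half: in a real driver this would mask the RX interrupt and call
 * napi_schedule(), exactly like the rxqueue_isr() shown further below. */
static irqreturn_t my_nic_isr(int irq, void *dev_id)
{
    struct my_nic_priv *priv = dev_id;

    (void)priv;
    return IRQ_HANDLED;
}

static int my_nic_setup_irq(struct my_nic_priv *priv)
{
    /* Registers my_nic_isr against priv->irq; the kernel records it in the
     * IRQ descriptor that generic_handle_irq() later dispatches through. */
    return request_irq(priv->irq, my_nic_isr, IRQF_SHARED,
                       "my_nic", priv);
}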
Now let's look at irq_exit:

/*
 * Exit an interrupt context. Process softirqs if needed and possible:
 */
void irq_exit(void)
{
    account_system_vtime(current);
    trace_hardirq_exit();
    sub_preempt_count(IRQ_EXIT_OFFSET);
    if (!in_interrupt() && local_softirq_pending())  /* fully out of interrupt context and softirqs are pending */
        invoke_softirq();

    rcu_irq_exit();
#ifdef CONFIG_NO_HZ
    /* Make sure that timer wheel updates are propagated */
    if (idle_cpu(smp_processor_id()) && !in_interrupt() && !need_resched())
        tick_nohz_stop_sched_tick(0);
#endif
    preempt_enable_no_resched();
}

#ifdef __ARCH_IRQ_EXIT_IRQS_DISABLED
# define invoke_softirq()    __do_softirq()
#else
# define invoke_softirq()    do_softirq()
#endif

/*
 * We restart softirq processing MAX_SOFTIRQ_RESTART times,
 * and we fall back to softirqd after that.
 *
 * This number has been established via experimentation.
 * The two things to balance is latency against fairness -
 * we want to handle softirqs as soon as possible, but they
 * should not be able to lock up the box.
 */
#define MAX_SOFTIRQ_RESTART 10

DEFINE_TRACE(softirq_raise);

asmlinkage void __do_softirq(void)
{
    struct softirq_action *h;
    __u32 pending;
    int max_restart = MAX_SOFTIRQ_RESTART;
    int cpu;

    pending = local_softirq_pending();
    account_system_vtime(current);

    __local_bh_disable((unsigned long)__builtin_return_address(0));
    lockdep_softirq_enter();

    cpu = smp_processor_id();
restart:
    /* Reset the pending bitmask before enabling irqs */
    set_softirq_pending(0);

    local_irq_enable();

    h = softirq_vec;

    do {
        if (pending & 1) {
            int prev_count = preempt_count();
            kstat_incr_softirqs_this_cpu(h - softirq_vec);

            trace_softirq_entry(h, softirq_vec);
            h->action(h);
            trace_softirq_exit(h, softirq_vec);
            if (unlikely(prev_count != preempt_count())) {
                printk(KERN_ERR "huh, entered softirq %td %s %p"
                       "with preempt_count %08x,"
                       " exited with %08x?\n", h - softirq_vec,
                       softirq_to_name[h - softirq_vec],
                       h->action, prev_count, preempt_count());
                preempt_count() = prev_count;
            }

            rcu_bh_qs(cpu);
        }
        h++;
        pending >>= 1;
    } while (pending);

    local_irq_disable();

    pending = local_softirq_pending();
    if (pending && --max_restart)
        goto restart;

    if (pending)
        wakeup_softirqd();

    lockdep_softirq_exit();

    account_system_vtime(current);
    _local_bh_enable();
}
__do_softirq eventually invokes the receive softirq handler, net_rx_action. The network softirqs are registered in net_dev_init:

    open_softirq(NET_TX_SOFTIRQ, net_tx_action);
    open_softirq(NET_RX_SOFTIRQ, net_rx_action);
Most softirqs are handled right here, but some cannot be processed in time; for those, wakeup_softirqd wakes the ksoftirqd daemon thread, which handles them later. It, too, ends up calling net_rx_action (a condensed sketch of its loop follows).
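The following is only a condensed sketch of the per-CPU ksoftirqd thread from kernel/softirq.c, with the CPU-hotplug and RCU details stripped out, so it is not the verbatim 2.6.32 source:

/* Condensed sketch of the per-CPU ksoftirqd thread (kernel/softirq.c);
 * error paths and CPU-hotplug handling are omitted. */
static int ksoftirqd(void *__bind_cpu)
{
    set_current_state(TASK_INTERRUPTIBLE);

    while (!kthread_should_stop()) {
        preempt_disable();
        if (!local_softirq_pending()) {
            preempt_enable_no_resched();
            schedule();                 /* nothing to do, go to sleep */
            preempt_disable();
        }
        __set_current_state(TASK_RUNNING);

        while (local_softirq_pending()) {
            do_softirq();               /* runs net_rx_action() among others */
            preempt_enable_no_resched();
            cond_resched();             /* give other tasks a chance */
            preempt_disable();
        }
        preempt_enable();
        set_current_state(TASK_INTERRUPTIBLE);
    }
    __set_current_state(TASK_RUNNING);
    return 0;
}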
Now let's go back and look at the XXX_isr registered in the driver:

void rxqueue_isr(BL_CPU_RX_QUEUE_ID_DTE queue_id)
{
    bl_api_ctrl_cpu_rx_queue_interrupt(queue_id,
                                       CE_BL_INTERRUPT_ACTION_DISABLE);
    napi_schedule(&global_napi);
    return;
}
This is just a random example; the key part is the call to napi_schedule(&global_napi). Notice that the ISR does almost nothing before returning immediately.
struct napi_struct global_napi;

/**
 *    napi_schedule - schedule NAPI poll
 *    @n: napi context
 *
 * Schedule NAPI poll routine to be called if it is not already
 * running.
 */
static inline void napi_schedule(struct napi_struct *n)
{
    if (napi_schedule_prep(n))   /* check whether NAPI is already running */
        __napi_schedule(n);
}

/**
 *    napi_schedule_prep - check if napi can be scheduled
 *    @n: napi context
 *
 * Test if NAPI routine is already running, and if not mark
 * it as running. This is used as a condition variable
 * insure only one NAPI poll instance runs. We also make
 * sure there is no pending NAPI disable.
 */
static inline int napi_schedule_prep(struct napi_struct *n)
{
    return !napi_disable_pending(n) &&
        !test_and_set_bit(NAPI_STATE_SCHED, &n->state);
}
The comment on this function says it clearly: it mainly checks the NAPI state (no pending disable, not already scheduled).

/**
 * __napi_schedule - schedule for receive
 * @n: entry to schedule
 *
 * The entry's receive function will be scheduled to run
 */
void __napi_schedule(struct napi_struct *n)
{
    unsigned long flags;

    trace_net_napi_schedule(n);

    local_irq_save(flags);
    list_add_tail(&n->poll_list, &__get_cpu_var(softnet_data).poll_list);
    __raise_softirq_irqoff(NET_RX_SOFTIRQ);
    local_irq_restore(flags);
}
This function raises the receive softirq and adds the device that received the frame to the CPU's receive poll list.
The CPU receive queue, struct softnet_data, is also defined and initialized in net_dev_init; we won't dwell on it here beyond the trimmed-down view sketched below. After that, let's walk through the softirq handler itself.
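This is only a trimmed-down view of struct softnet_data as it is defined around 2.6.32 in include/linux/netdevice.h; the field comments are mine:

/* Per-CPU receive/transmit bookkeeping (2.6.32-era), trimmed. */
struct softnet_data {
    struct Qdisc        *output_queue;      /* TX side: qdiscs waiting to be run */
    struct sk_buff_head input_pkt_queue;    /* legacy netif_rx() backlog queue */
    struct list_head    poll_list;          /* napi_struct entries to poll */
    struct sk_buff      *completion_queue;  /* skbs waiting to be freed */
    struct napi_struct  backlog;            /* NAPI context for the legacy path */
};

net_dev_init() sets up one instance per CPU, initializing input_pkt_queue and poll_list and pointing backlog.poll at process_backlog (used by the legacy netif_rx path). poll_list is exactly the list that __napi_schedule() appends to and that the softirq handler drains. Here is that handler, net_rx_action: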

static void net_rx_action(struct softirq_action *h)
{
    struct list_head *list = &__get_cpu_var(softnet_data).poll_list;
    unsigned long time_limit = jiffies + 2;
    int budget = netdev_budget;
    void *have;

    local_irq_disable();

    while (!list_empty(list)) {    /* walk the poll list until it is empty */
        struct napi_struct *n;
        int work, weight;

        /* If softirq window is exhuasted then punt.
         * Allow this to run for 2 jiffies since which will allow
         * an average latency of 1.5/HZ.
         */
        if (unlikely(budget <= 0 || time_after(jiffies, time_limit)))    /* budget or time limit exhausted */
            goto softnet_break;

        local_irq_enable();

        /* Even though interrupts have been re-enabled, this
         * access is safe because interrupts can only add new
         * entries to the tail of this list, and only ->poll()
         * calls can remove this head entry from the list.
         */
        n = list_first_entry(list, struct napi_struct, poll_list);

        have = netpoll_poll_lock(n);

        weight = n->weight;

        /* This NAPI_STATE_SCHED test is for avoiding a race
         * with netpoll's poll_napi(). Only the entity which
         * obtains the lock and sees NAPI_STATE_SCHED set will
         * actually make the ->poll() call. Therefore we avoid
         * accidently calling ->poll() when NAPI is not scheduled.
         */
        work = 0;
        if (test_bit(NAPI_STATE_SCHED, &n->state)) {
            trace_net_napi_poll(n);
            work = n->poll(n, weight);
            trace_napi_poll(n);
        }

        WARN_ON_ONCE(work > weight);

        budget -= work;

        local_irq_disable();

        /* Drivers must not modify the NAPI state if they
         * consume the entire weight. In such cases this code
         * still "owns" the NAPI instance and therefore can
         * move the instance around on the list at-will.
         */
        if (unlikely(work == weight)) {
            if (unlikely(napi_disable_pending(n))) {
                local_irq_enable();
                napi_complete(n);
                local_irq_disable();
            } else
                list_move_tail(&n->poll_list, list);
        }

        netpoll_poll_unlock(have);
    }
out:
    local_irq_enable();

#ifdef CONFIG_NET_DMA
    /*
     * There may not be any more sk_buffs coming right now, so push
     * any pending DMA copies to hardware
     */
    dma_issue_pending_all();
#endif

    return;

softnet_break:
    __get_cpu_var(netdev_rx_stat).time_squeeze++;
    __raise_softirq_irqoff(NET_RX_SOFTIRQ);
    goto out;
}
This function is straightforward: it grabs the CPU's receive poll list and keeps processing it until it is empty. Of course, the list can sometimes be quite long, and we cannot let this run forever, otherwise everything else on the system would starve.
Notice this statement in the code:

        /* If softirq window is exhuasted then punt.
         * Allow this to run for 2 jiffies since which will allow
         * an average latency of 1.5/HZ.
         */
        if (unlikely(budget <= 0 || time_after(jiffies, time_limit)))
            goto softnet_break;
Then the poll function is invoked:
work = n->poll(n, weight);
It typically reads packets from the DMA ring or hardware queue buffers into memory and then passes them up the stack.
The poll routine is registered with:
netif_napi_add(dummy_dev, &global_napi, rxqueue_poll, 128);
Again, this is just one example of the interface, for reference only.

int rxqueue_poll(struct napi_struct *napi, int budget)
{
    int rx_packet_cnt = 0;
    static int empty_count = 0;

    bl_api_ctrl_cpu_rx_queue_interrupt(param_queue_id,      /* acknowledge/clear the interrupt */
                                       CE_BL_INTERRUPT_ACTION_CLEAR);

    while (rx_packet_cnt < budget)    /* under high traffic, when the queue cannot be drained in one pass, poll returns a value equal to weight */
    {
        if (netdev_read_packet())     /* fetch a packet and hand it to the upper layers; non-zero means nothing is left */
        {
            empty_count++;
            break;
        }
        rx_packet_cnt++;
    }
    if (rx_packet_cnt < budget && empty_count > 1)
    {
        empty_count = 0;
        napi_complete(napi);
        bl_api_ctrl_cpu_rx_queue_interrupt(param_queue_id,  /* re-enable the interrupt */
                                           CE_BL_INTERRUPT_ACTION_ENABLE);
    }
    return rx_packet_cnt;
}
When the queue stays full, i.e. when poll keeps returning a value equal to weight, the queue's interrupt remains disabled, so the longer we linger here, the more frames get dropped; this is also one reference point for network performance.
Inside this routine each received frame is handed to netif_receive_skb.
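As an illustration of that hand-off, a driver's packet-fetch routine typically looks something like the sketch below; netdev_read_packet's signature, my_dev and the frame parameters are hypothetical, adapted for illustration rather than taken from any real driver:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/etherdevice.h>
#include <linux/string.h>

/* Hypothetical helper: pull one frame out of the RX ring and push it up the
 * stack. Returns non-zero when the ring is empty. All names are illustrative. */
static int netdev_read_packet(struct net_device *my_dev,
                              void *frame_data, int frame_len)
{
    struct sk_buff *skb;

    if (!frame_data)                        /* ring empty */
        return 1;

    skb = dev_alloc_skb(frame_len + NET_IP_ALIGN);
    if (!skb)
        return 1;                           /* drop on allocation failure */

    skb_reserve(skb, NET_IP_ALIGN);         /* align the IP header */
    memcpy(skb_put(skb, frame_len), frame_data, frame_len);

    skb->dev = my_dev;
    skb->protocol = eth_type_trans(skb, my_dev);  /* strip the Ethernet header,
                                                   * set skb->protocol */
    netif_receive_skb(skb);                 /* hand the frame to the stack */
    return 0;
}

Now for netif_receive_skb itself: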

/**
 *    netif_receive_skb - process receive buffer from network
 *    @skb: buffer to process
 *
 *    netif_receive_skb() is the main receive data processing function.
 *    It always succeeds. The buffer may be dropped during processing
 *    for congestion control or by the protocol layers.
 *
 *    This function may only be called from softirq context and interrupts
 *    should be enabled.
 *
 *    Return values (usually ignored):
 *    NET_RX_SUCCESS: no congestion
 *    NET_RX_DROP: packet was dropped
 */
int netif_receive_skb(struct sk_buff *skb)
{
    struct packet_type *ptype, *pt_prev;
    struct net_device *orig_dev;
    struct net_device *master;
    struct net_device *null_or_orig;
    struct net_device *null_or_bond;
    int ret = NET_RX_DROP;
    __be16 type;

    if (!skb->tstamp.tv64)
        net_timestamp(skb);

    if (vlan_tx_tag_present(skb) && vlan_hwaccel_do_receive(skb))
        return NET_RX_SUCCESS;

    /* if we've gotten here through NAPI, check netpoll */
    if (netpoll_receive_skb(skb))
        return NET_RX_DROP;

    trace_net_dev_receive(skb);

    if (!skb->skb_iif)
        skb->skb_iif = skb->dev->ifindex;

    null_or_orig = NULL;
    orig_dev = skb->dev;
    master = ACCESS_ONCE(orig_dev->master);
    if (master) {
        if (skb_bond_should_drop(skb, master))
            null_or_orig = orig_dev; /* deliver only exact match */
        else
            skb->dev = master;
    }

    __get_cpu_var(netdev_rx_stat).total++;

    skb_reset_network_header(skb);
    skb_reset_transport_header(skb);
    skb->mac_len = skb->network_header - skb->mac_header;

    pt_prev = NULL;

    rcu_read_lock();

#ifdef CONFIG_NET_CLS_ACT                   /* qdisc ingress handling */
    if (skb->tc_verd & TC_NCLS) {
        skb->tc_verd = CLR_TC_NCLS(skb->tc_verd);
        goto ncls;
    }
#endif

    list_for_each_entry_rcu(ptype, &ptype_all, list) {
        if (ptype->dev == null_or_orig || ptype->dev == skb->dev ||
            ptype->dev == orig_dev) {
            if (pt_prev)
                ret = deliver_skb(skb, pt_prev, orig_dev);
            pt_prev = ptype;
        }
    }

#ifdef CONFIG_NET_CLS_ACT                   /* qdisc ingress handling */
    skb = handle_ing(skb, &pt_prev, &ret, orig_dev);
    if (!skb)
        goto out;
ncls:
#endif

    skb = handle_bridge(skb, &pt_prev, &ret, orig_dev);    /* bridge handling */
    if (!skb)
        goto out;
    skb = handle_macvlan(skb, &pt_prev, &ret, orig_dev);   /* macvlan handling */
    if (!skb)
        goto out;

    /*
     * Make sure frames received on VLAN interfaces stacked on
     * bonding interfaces still make their way to any base bonding
     * device that may have registered for a specific ptype. The
     * handler may have to adjust skb->dev and orig_dev.
     */
    null_or_bond = NULL;
    if ((skb->dev->priv_flags & IFF_802_1Q_VLAN) &&
        (vlan_dev_real_dev(skb->dev)->priv_flags & IFF_BONDING)) {
        null_or_bond = vlan_dev_real_dev(skb->dev);
    }

    type = skb->protocol;
    /* look up the protocol handlers registered for this type */
    list_for_each_entry_rcu(ptype,
            &ptype_base[ntohs(type) & PTYPE_HASH_MASK], list) {
        if (ptype->type == type && (ptype->dev == null_or_orig ||
            ptype->dev == skb->dev || ptype->dev == orig_dev ||
            ptype->dev == null_or_bond)) {
            if (pt_prev)
                ret = deliver_skb(skb, pt_prev, orig_dev);
            pt_prev = ptype;
        }
    }

    if (pt_prev) {
        ret = pt_prev->func(skb, skb->dev, pt_prev, orig_dev);
    } else {
        kfree_skb(skb);
        /* Jamal, now you will not able to escape explaining
         * me how you were going to use this. :-)
         */
        ret = NET_RX_DROP;
    }

out:
    rcu_read_unlock();
    return ret;
}

Let's first look at the two list traversals, over ptype_all and ptype_base. The former exists for sniffers, e.g. packet capture and analysis with tcpdump; the latter holds the concrete protocol handlers that frames are actually delivered to, such as ip_rcv.
As for ptype_all, a look at dev_add_pack makes it clear:

/*******************************************************************************

        Protocol management and registration routines

*******************************************************************************/

/*
 *    Add a protocol ID to the list. Now that the input handler is
 *    smarter we can dispense with all the messy stuff that used to be
 *    here.
 *
 *    Protocol handlers, mangling input packets,
 *    MUST BE last in hash buckets and checking protocol handlers
 *    MUST start from promiscuous ptype_all chain in net_bh.
 *    It is true now, do not change it.
 *    Explanation follows: if protocol handler, mangling packet, will
 *    be the first on list, it is not able to sense, that packet
 *    is cloned and should be copied-on-write, so that it will
 *    change it and subsequent readers will get broken packet.
 *                            --ANK (980803)
 */

/**
 *    dev_add_pack - add packet handler
 *    @pt: packet type declaration
 *
 *    Add a protocol handler to the networking stack. The passed &packet_type
 *    is linked into kernel lists and may not be freed until it has been
 *    removed from the kernel lists.
 *
 *    This call does not sleep therefore it can not
 *    guarantee all CPU's that are in middle of receiving packets
 *    will see the new packet type (until the next received packet).
 */

void dev_add_pack(struct packet_type *pt)
{
    int hash;

    spin_lock_bh(&ptype_lock);
    if (pt->type == htons(ETH_P_ALL))
        list_add_rcu(&pt->list, &ptype_all);
    else {
        hash = ntohs(pt->type) & PTYPE_HASH_MASK;
        list_add_rcu(&pt->list, &ptype_base[hash]);
    }
    spin_unlock_bh(&ptype_lock);
}
EXPORT_SYMBOL(dev_add_pack);
As we can see, only handlers registered with protocol type ETH_P_ALL go onto the ptype_all list; everything else is hashed into ptype_base. A small illustrative example of both kinds of registration follows.
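This is roughly how a protocol handler and a sniffer hook themselves in; the names here are illustrative (the real IPv4 registration in net/ipv4/af_inet.c looks very similar, with .func = ip_rcv):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/if_ether.h>

/* Illustrative handler; the signature matches struct packet_type::func. */
static int my_ip_rcv(struct sk_buff *skb, struct net_device *dev,
                     struct packet_type *pt, struct net_device *orig_dev)
{
    /* a real protocol handler (e.g. ip_rcv) would parse the header here */
    kfree_skb(skb);
    return NET_RX_SUCCESS;
}

/* Goes into ptype_base[hash(ETH_P_IP)]: only IPv4 frames are delivered. */
static struct packet_type my_ip_packet_type = {
    .type = cpu_to_be16(ETH_P_IP),
    .func = my_ip_rcv,
};

/* Goes onto ptype_all: sees every frame, which is how PF_PACKET/tcpdump taps in. */
static struct packet_type my_sniffer_packet_type = {
    .type = cpu_to_be16(ETH_P_ALL),
    .func = my_ip_rcv,          /* illustrative; a sniffer would have its own func */
};

static void my_register_handlers(void)
{
    dev_add_pack(&my_ip_packet_type);
    dev_add_pack(&my_sniffer_packet_type);
}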
The core traffic-control code lives in net/sched. As mentioned earlier, when a device is opened, dev_activate is called to activate its qdisc.
Next, look at the #ifdef CONFIG_NET_CLS_ACT part, which handles the ingress queue: if ingress queueing rules (or similar) are configured, processing goes deeper here. Most of the functionality, though, lives on the egress side, in traffic control (tc/QoS).

Consider the first check, skb->tc_verd & TC_NCLS: by default nothing assigns skb->tc_verd, so the AND is bound to be 0.

#ifdef CONFIG_NET_CLS_ACT
/* TODO: Maybe we should just force sch_ingress to be compiled in
 * when CONFIG_NET_CLS_ACT is? otherwise some useless instructions
 * a compare and 2 stores extra right now if we dont have it on
 * but have CONFIG_NET_CLS_ACT
 * NOTE: This doesnt stop any functionality; if you dont have
 * the ingress scheduler, you just cant add policies on ingress.
 *
 */
static int ing_filter(struct sk_buff *skb)
{
    struct net_device *dev = skb->dev;
    u32 ttl = G_TC_RTTL(skb->tc_verd);
    struct netdev_queue *rxq;
    int result = TC_ACT_OK;
    struct Qdisc *q;

    if (MAX_RED_LOOP < ttl++) {
        printk(KERN_WARNING
               "Redir loop detected Dropping packet (%d->%d)\n",
               skb->iif, dev->ifindex);
        return TC_ACT_SHOT;
    }

    skb->tc_verd = SET_TC_RTTL(skb->tc_verd, ttl);
    skb->tc_verd = SET_TC_AT(skb->tc_verd, AT_INGRESS);

    rxq = &dev->rx_queue;

    q = rxq->qdisc;
    if (q != &noop_qdisc) {
        spin_lock(qdisc_lock(q));
        if (likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state)))
            result = qdisc_enqueue_root(skb, q);
        spin_unlock(qdisc_lock(q));
    }

    return result;
}

static inline struct sk_buff *handle_ing(struct sk_buff *skb,
                     struct packet_type **pt_prev,
                     int *ret, struct net_device *orig_dev)
{
    if (skb->dev->rx_queue.qdisc == &noop_qdisc)
        goto out;

    if (*pt_prev) {
        *ret = deliver_skb(skb, *pt_prev, orig_dev);
        *pt_prev = NULL;
    } else {
        /* Huh? Why does turning on AF_PACKET affect this? */
        skb->tc_verd = SET_TC_OK2MUNGE(skb->tc_verd);
    }

    switch (ing_filter(skb)) {
    case TC_ACT_SHOT:
    case TC_ACT_STOLEN:
        kfree_skb(skb);
        return NULL;
    }

out:
    skb->tc_verd = 0;
    return skb;
}
#endif


Look at the q != &noop_qdisc check in ing_filter. By default the two are equal; recall from the earlier discussion of qdisc initialization that the default is noop_qdisc, so in the default case nothing happens and the function simply returns. We won't go deeper here; applications of ingress traffic control will be analyzed in detail later.

After that, handle_bridge decides whether the frame belongs to a bridge; bridging also deserves its own analysis. At this point, though, I think the overall receive flow is clear.

