Paper Reading: Inferring Analogous Attributes (CVPR 2014)


Inferring Analogous Attributes, CVPR 2014

Chao-Yeh Chen and Kristen Grauman

 

Abstract:

The appearance of an attribute can vary considerably from class to class (e.g., a “fluffy” dog vs. a “fluffy” towel), making standard class-independent attribute models break down. Yet, training object-specific models for each attribute can be impractical, and defeats the purpose of using attributes to bridge category boundaries. We propose a novel form of transfer learning that addresses this dilemma. We develop a tensor factorization approach which, given a sparse set of class-specific attribute classifiers, can infer new ones for object-attribute pairs unobserved during training. For example, even though the system has no labeled images of striped dogs, it can use its knowledge of other attributes and objects to tailor “stripedness” to the dog category. With two large-scale datasets, we demonstrate both the need for category-sensitive attributes as well as our method’s successful transfer. Our inferred attribute classifiers perform similarly well to those trained with the luxury of labeled class-specific instances, and much better than those restricted to traditional modes of transfer.

 

 

As the figure above shows, by learning a set of object-specific attribute classifiers, the system can infer analogous attribute classifiers. The inferred classifiers are object-sensitive, even though no class-specific labeled training images are available for them.

 

1. Introduction

 

The paper makes three core contributions:

1. First, performing transfer jointly in the space of two labeled aspects of the data—namely, categories and attributes—is new. Critically, this means our method is not confined to transfer along same-object or same-attribute boundaries; rather, it discovers analogical relationships based on some mixture of previously seen objects and attributes.

First, unlike traditional transfer learning, the transfer here is performed jointly over two labeled aspects of the data: object categories and attributes.

 

2. Second, our approach produces a discriminative model for an attribute with zero training examples from that category.

Second, the method produces a discriminative model for an attribute even when there are zero training examples of that attribute from the target category.

 

3. Third, while prior methods often require information about which classes should transfer to which [2, 29, 26, 1] (e.g., that a motorcycle detector might transfer well to a bicycle), our approach naturally discovers where transfer is possible based on how the observed attribute models relate. It can transfer easily between multiple classes at once, not only pairs, and we avoid the guesswork of manually specifying where transfer is likely.

Third, the proposed method needs no prior knowledge of which classes should transfer to which; it discovers where transfer is possible on its own, and it can transfer conveniently among multiple classes at once.

 

2. Related Work

In contrast, our approach implicitly discovers analogical relationships among object-sensitive attribute classifiers, and our goal is to generate novel category-sensitive attribute classifiers.

 

3. Approach

Given training images labeled by their category and one or more attributes, our method produces as output a series of category-sensitive attribute classifiers. Some of those classifiers are explicitly trained with the labeled data, while the rest are inferred by our method. We show how to create these analogous attribute classifiers via tensor completion.

In the following, we first describe how we train category-sensitive classifiers (Sec. 3.1). Then we define the tensor of attributes (Sec. 3.2) and show how we use it to infer analogous models (Sec. 3.3). Finally, we discuss certain salient aspects of the method design (Sec. 3.4).
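To make the tensor-completion idea concrete, below is a minimal NumPy sketch of the general technique: stack the trained classifier weight vectors into a (categories × attributes × feature-dims) tensor, fit a low-rank CP factorization by stochastic gradient descent on the observed (category, attribute) slices only, and read off the reconstructed slices for unobserved pairs as the inferred classifiers. Note this is an illustration of the idea, not the paper's actual method (the paper uses a more sophisticated probabilistic tensor factorization); the function name, rank, and learning-rate choices here are my own.

```python
import numpy as np

def infer_analogous_attributes(W, observed, rank=2, lr=0.02, epochs=500, seed=0):
    """Complete a sparse classifier tensor W (C x A x D) with a rank-R CP
    factorization fit only to the observed (category, attribute) slices.

    W        -- tensor of classifier weights; only observed slices are used
    observed -- iterable of (category, attribute) index pairs with trained models
    Returns the fully reconstructed tensor; slices at unobserved pairs are
    the inferred category-sensitive attribute classifiers.
    """
    C, A, D = W.shape
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(C, rank))  # category latent factors
    V = rng.normal(scale=0.1, size=(A, rank))  # attribute latent factors
    T = rng.normal(scale=0.1, size=(D, rank))  # feature-dimension factors
    obs = list(observed)
    for _ in range(epochs):
        for c, a in obs:
            pred = T @ (U[c] * V[a])            # predicted weights, shape (D,)
            err = pred - W[c, a]                # residual on this observed slice
            g = T.T @ err                       # shared gradient term, shape (R,)
            gU, gV = g * V[a], g * U[c]
            gT = np.outer(err, U[c] * V[a])
            U[c] -= lr * gU
            V[a] -= lr * gV
            T -= lr * gT
    # reconstruct every slice; unobserved ones are the analogous attributes
    return np.einsum('cr,ar,dr->cad', U, V, T)
```

The key point the sketch captures is that nothing in the factorization distinguishes "transfer along the same object" from "transfer along the same attribute": missing slices are filled in from whatever mixture of latent category and attribute factors best explains the observed classifiers.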

 

3.1. Learning Category-Sensitive Attributes

In existing systems, attributes are trained in a completely category-independent manner.

This work challenges that convention: while attributes' visual cues are often shared among some objects, the sharing is not universal, and pooling cross-category exemplars indiscriminately can dilute the learning process.

 

A naive way to train category-sensitive attributes would instead be to partition the training exemplars by their category labels and train one attribute classifier per category. Given enough labeled exemplars for every attribute + object combination, this strategy might suffice; however, preliminary experiments show it is inferior to training a single universal attribute classifier, for two reasons:

1. Even in large-scale collections, the long-tailed distribution of object/scene/attribute occurrences in the real world means that some label pairs will be undersampled, leaving inadequate exemplars to build a statistically sound model.

2. This naive approach completely ignores attributes' inter-class semantic ties.
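The naive baseline described above can be sketched in a few lines: split the training exemplars by (category, attribute) pair and train one classifier per group in isolation. The record format `(category, attribute, feature, label)` is hypothetical, chosen only for illustration; the sketch makes the first weakness visible, since rare pairs end up with tiny, statistically unreliable groups.

```python
from collections import defaultdict

def partition_by_category(examples):
    """Naive baseline: bucket training exemplars by (category, attribute)
    so each pair would get its own classifier trained only on its own bucket.
    `examples` is an iterable of (category, attribute, feature, label) tuples.
    """
    groups = defaultdict(list)
    for category, attribute, feature, label in examples:
        groups[(category, attribute)].append((feature, label))
    return groups
```

Under a long-tailed label distribution, many of these buckets are nearly empty, which is exactly why the paper moves to importance weighting instead of hard partitioning.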

 

To overcome these shortcomings, we instead use an importance-weighted support vector machine (SVM) to train each category-sensitive attribute. Each training example (x_i, y_i) consists of an image descriptor x_i and a label y_i ∈ {−1, +1}.
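As a rough illustration of what "importance-weighted" means here, the sketch below trains a linear SVM by subgradient descent on a per-example weighted hinge loss: exemplars from the target category can be given larger weights, so the model is tailored to that category while still borrowing evidence from the others. This is a generic minimal implementation under that assumption, not the paper's solver, and the function name and hyperparameters are illustrative.

```python
import numpy as np

def weighted_svm(X, y, weights, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM via subgradient descent on the importance-weighted hinge loss:
        lam/2 * ||w||^2 + sum_i s_i * max(0, 1 - y_i (w . x_i + b))
    X -- (n, d) image descriptors; y -- labels in {-1, +1};
    weights -- per-example importance s_i (larger for the target category).
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                    # margin-violating examples
        grad_w = lam * w - (weights[active] * y[active]) @ X[active]
        grad_b = -np.sum(weights[active] * y[active])
        w -= lr / n * grad_w
        b -= lr / n * grad_b
    return w, b
```

Setting all weights equal recovers an ordinary pooled SVM; upweighting one category's exemplars interpolates between the pooled model and the naive per-category one.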

 
