[Paper Reproduction] ArcFace: Additive Angular Margin Loss for Deep Face Recognition

Overview: Paper title: "ArcFace: Additive Angular Margin Loss for Deep Face Recognition". Paper link: https://arxiv.org/pdf/1801.07698v1.pdf

I. Core Idea


This paper proposes a new loss function with a clear geometric interpretation: ArcFace. On top of L2-normalized weights and features, it introduces an additive angular margin that maximizes the decision boundary between classes in angular space, as shown in the figure below:


[Figure 74.png: geometric interpretation of ArcFace]


The figure above gives the geometric interpretation of ArcFace: (a) the blue and green points represent the embedding features of two different classes, for example blue for cat images and green for dog images; ArcFace directly enlarges the gap between the two classes. (b) The right side gives a more intuitive view of the angle and the angular margin: ArcFace's angular margin corresponds to the geodesic margin between classes of samples on the hypersphere.
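To make the effect of the additive margin concrete, here is a minimal sketch (illustrative values, not from the paper) showing how the target-class logit shrinks from cos(θ) to cos(θ + m); to recover the same logit value, θ must decrease, so same-class features are pulled closer in angle:

import math

theta = math.radians(30)   # assumed angle between a feature and its class weight
m = 0.5                    # angular margin (the paper's default; the MNIST demo later uses 0.7)
s = 64                     # feature scale (the paper's default)

plain_logit = s * math.cos(theta)          # softmax-style target logit
arcface_logit = s * math.cos(theta + m)    # ArcFace target logit with additive margin

print(plain_logit, arcface_logit)  # the ArcFace logit is smaller, so the loss is larger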


II. Background


  • Deep convolutional networks map a face image (usually after a pose normalization step) to an embedding feature vector.
  • In this embedding space, features of the same person are close together, while features of different individuals are far apart.
  • Deep face recognition methods differ mainly in three aspects. (1) Training data:
  • Public datasets vary widely in size.
  • Datasets contain label noise.
  • The authors found several hundred overlapping face images between the MegaFace and FaceScrub datasets; they cleaned MS-Celeb-1M, MegaFace, and FaceScrub and released the refined datasets.
  • Orders-of-magnitude differences in the scale of training data mean that industrial face recognition models are considerably better than academic ones.
  • Differences in training data also make some published deep face recognition results impossible to fully reproduce.
  • (2) Network architecture and settings:
  • ResNet, Inception-ResNet, VGG, and Google Inception V1.
  • Trade-off between training speed and model accuracy.
  • (3) Loss function:
  • Euclidean-margin-based loss functions.
  • Angular/cosine-margin-based loss functions.


III. Evolution of the ArcFace Loss Function


This section walks through the evolution from the Softmax loss to ArcFace.


1. Softmax


The softmax loss is defined as

$$L_1 = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{W_{y_i}^{T}x_i+b_{y_i}}}{\sum_{j=1}^{n}e^{W_{j}^{T}x_i+b_j}}$$

  • where $x_i$ is the embedding feature of the $i$-th sample (belonging to class $y_i$), $W_j$ and $b_j$ are the weight and bias of the $j$-th class, $N$ is the batch size, and $n$ is the number of classes.
  • The softmax loss does not explicitly optimize the embedding so that positive pairs get higher similarity and negative pairs get lower similarity; in other words, it does not enlarge the decision margin between classes.


2. Weights Normalisation


For simplicity, fix the bias $b_j = 0$. The numerator (the target logit) in the softmax formula can then be written as

$$W_{j}^{T}x_i = \|W_j\|\,\|x_i\|\cos\theta_j,$$

where $\theta_j$ is the angle between the weight $W_j$ and the feature $x_i$. After L2-normalizing the weights so that $\|W_j\| = 1$, we obtain:

$$L_2 = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{\|x_i\|\cos\theta_{y_i}}}{\sum_{j=1}^{n}e^{\|x_i\|\cos\theta_j}}$$

After weight normalization, the loss depends only on the angles between the feature vector and the class weights. The SphereFace paper reports that weight normalization alone brings a small improvement.
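A minimal PyTorch sketch (illustrative only, random data) confirming that once weights and features are L2-normalized, a plain linear layer produces exactly the cosines of the angles between features and class weights:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
features = torch.randn(4, 3)          # 4 samples, 3-dim embeddings
weights = torch.randn(10, 3)          # 10 classes

# L2-normalize both; the linear logits are then cos(theta_j) for each class j
cos_theta = F.linear(F.normalize(features), F.normalize(weights))

print(cos_theta.min().item(), cos_theta.max().item())  # all values lie in [-1, 1]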


3. Multiplicative Angular Margin


In SphereFace, the target angle is multiplied by an angular margin $m$:

$$L_3 = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{\|x_i\|\cos(m\theta_{y_i})}}{e^{\|x_i\|\cos(m\theta_{y_i})}+\sum_{j\ne y_i}e^{\|x_i\|\cos\theta_j}}$$

where $\theta_{y_i}\in[0,\pi/m]$.

Since $\cos(m\theta)$ is not monotonic in $\theta$ over $[0,\pi]$, SphereFace replaces it with a piecewise function $\psi(\theta)$ that is monotonically decreasing:

$$\psi(\theta)=(-1)^{k}\cos(m\theta)-2k,\qquad \theta\in\Big[\tfrac{k\pi}{m},\tfrac{(k+1)\pi}{m}\Big],\; k\in[0,m-1]$$

In practice, training with the multiplicative margin alone is hard to converge, so a softmax term is blended in during training to help convergence, weighted by a dynamic hyperparameter $\lambda$ that starts large and is gradually annealed:

$$\psi(\theta)=\frac{(-1)^{k}\cos(m\theta)-2k+\lambda\cos\theta}{1+\lambda}$$
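A small Python sketch (illustrative, not the paper's code) of the piecewise-monotonic $\psi(\theta)$ without the $\lambda$ blending, showing that it decreases as the angle grows:

import math

def sphereface_psi(theta: float, m: int = 4) -> float:
    """Piecewise-monotonic replacement for cos(m * theta) on [0, pi)."""
    k = int(theta * m / math.pi)   # which segment [k*pi/m, (k+1)*pi/m] theta falls in
    return (-1) ** k * math.cos(m * theta) - 2 * k

# psi decreases monotonically as theta grows from 0 toward pi
print([round(sphereface_psi(t), 3) for t in (0.0, 0.5, 1.0, 2.0, 3.0)])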


4. Feature Normalisation


Normalizing both the features and the weights removes radial variation and forces every feature to lie on a hypersphere. The paper fixes the hypersphere radius to $s = 64$, i.e., $\|x_i\|$ is rescaled to $s$, and the SphereFace loss becomes:

$$L_4 = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\,\psi(\theta_{y_i})}}{e^{s\,\psi(\theta_{y_i})}+\sum_{j\ne y_i}e^{s\cos\theta_j}}$$


5. Additive Cosine Margin


The cosine margin moves the penalty from the angle to the cosine: the target logit becomes $\cos\theta_{y_i}-m$, with $m$ set to 0.35 in this paper:

$$L_5 = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s(\cos\theta_{y_i}-m)}}{e^{s(\cos\theta_{y_i}-m)}+\sum_{j\ne y_i}e^{s\cos\theta_j}}$$

Compared with SphereFace, CosineFace has three advantages: (1) it is simple to implement with no tricky hyperparameters; (2) it is clearer and converges without extra softmax supervision; (3) it gives a clear performance improvement.
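As a minimal sketch (illustrative values) of how the two margins modify the target logit given a target cosine cos_t; the ArcFace class implemented later in this post uses the angular form:

import math

cos_t, m_cos, m_arc, s = 0.8, 0.35, 0.5, 64.0

cosface_logit = s * (cos_t - m_cos)                      # additive cosine margin
arcface_logit = s * math.cos(math.acos(cos_t) + m_arc)   # additive angular margin

print(cosface_logit, arcface_logit)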



6. Additive Angular Margin


ArcFace adds the angular margin $m$ directly to the angle, so the target logit becomes $\cos(\theta_{y_i}+m)$:

$$L_6 = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j\ne y_i}e^{s\cos\theta_j}}$$

subject to:

$$W_j=\frac{W_j^{*}}{\|W_j^{*}\|},\qquad x_i=\frac{x_i^{*}}{\|x_i^{*}\|},\qquad \cos\theta_j=W_j^{T}x_i$$


IV. Different Loss Functions


[Figure 75.png: comparison of different loss functions]
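Since the original figure is not reproduced here, the binary-class decision boundaries of the main losses (with normalized weights and features where applicable) can be summarized as follows; this is a paraphrase of the paper's comparison rather than the original figure:

  • Softmax: $(W_1-W_2)^{T}x + b_1 - b_2 = 0$
  • SphereFace: $\|x\|(\cos(m\theta_1) - \cos\theta_2) = 0$
  • CosineFace: $s(\cos\theta_1 - m - \cos\theta_2) = 0$
  • ArcFace: $s(\cos(\theta_1 + m) - \cos\theta_2) = 0$

where $\theta_i$ is the angle between the feature and the class-$i$ weight, $m$ the margin, and $s$ the scale.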


V. Results Comparison


[Figure 76.png: results comparison]


VI. PyTorch Implementation of ArcFace on the MNIST Dataset


1 Import Packages

import torch 
import torch.nn.functional as F
from torch import nn, optim 
from torch.utils.data import DataLoader
from torchvision import transforms as T, datasets
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt 
import plotly.express as px
from tqdm.notebook import tqdm
from sklearn.metrics import accuracy_score
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')


2 Data Preprocessing

transform = T.Compose([
    T.ToTensor(),
    T.Normalize((0.5,), (0.5,))
])
trainset = datasets.MNIST('../input/mnist-dataset-pytorch', train = True, transform = transform)
testset = datasets.MNIST('../input/mnist-dataset-pytorch', train = False, transform = transform)
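The dataset path above assumes the Kaggle MNIST dataset layout; if you run this elsewhere, torchvision can download MNIST directly (a minor variation on the code above):

trainset = datasets.MNIST('./data', train=True, download=True, transform=transform)
testset = datasets.MNIST('./data', train=False, download=True, transform=transform)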


3 ArcFace CNN Model


[Figure 77.png: ArcFace CNN model structure]

class ArcFace(nn.Module):
    def __init__(self, in_features, out_features, margin=0.7, scale=64):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.scale = scale      # hypersphere radius s
        self.margin = margin    # additive angular margin m (in radians)
        # one weight vector (class center) per class
        self.weights = nn.Parameter(torch.FloatTensor(out_features, in_features))
        nn.init.xavier_normal_(self.weights)
    def forward(self, features, targets):
        # features are already L2-normalized by the backbone, so this gives cos(theta_j)
        cos_theta = F.linear(features, F.normalize(self.weights), bias=None)
        cos_theta = cos_theta.clip(-1 + 1e-7, 1 - 1e-7)  # keep acos numerically safe
        arc_cos = torch.acos(cos_theta)
        # add the margin m only to the target-class angle
        M = F.one_hot(targets, num_classes=self.out_features) * self.margin
        arc_cos = arc_cos + M
        cos_theta_2 = torch.cos(arc_cos)
        logits = cos_theta_2 * self.scale   # rescale by s
        return logits

class MNIST_Model(nn.Module):
    def __init__(self):
        super(MNIST_Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 3)   # 3-dim embedding so it can be visualized in 3D
        self.arc_face = ArcFace(in_features=3, out_features=10)
    def forward(self, features, targets=None):
        x = F.relu(F.max_pool2d(self.conv1(features), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        _, c, h, w = x.shape
        x = x.view(-1, c * h * w)
        x = F.relu(self.fc1(x))
        x = F.normalize(self.fc2(x))          # L2-normalized embedding on the unit sphere
        if targets is not None:               # training: return ArcFace logits
            logits = self.arc_face(x, targets)
            return logits
        return x                              # inference: return the embedding itself

model = MNIST_Model()
model.to(device)

MNIST_Model(
  (conv1): Conv2d(1, 10, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(10, 20, kernel_size=(5, 5), stride=(1, 1))
  (conv2_drop): Dropout2d(p=0.5, inplace=False)
  (fc1): Linear(in_features=320, out_features=50, bias=True)
  (fc2): Linear(in_features=50, out_features=3, bias=True)
  (arc_face): ArcFace()
)
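As a quick sanity check (not part of the original notebook), a dummy forward pass should produce one logit per class when targets are supplied, and a 3-dim embedding otherwise:

dummy_images = torch.randn(4, 1, 28, 28).to(device)
dummy_labels = torch.randint(0, 10, (4,)).to(device)

print(model(dummy_images, dummy_labels).shape)  # torch.Size([4, 10]) -> ArcFace logits
print(model(dummy_images).shape)                # torch.Size([4, 3])  -> raw embeddings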


4 Model Training

class TrainModel():
    def __init__(self, criterion=None, optimizer=None, scheduler=None, device=None):
        self.criterion = criterion
        self.optimizer = optimizer
        self.scheduler = scheduler
        self.device = device
    def accuracy(self, logits, labels):
        # predicted class = index of the largest ArcFace logit
        ps = torch.argmax(logits, dim=1).detach().cpu().numpy()
        acc = accuracy_score(labels.detach().cpu().numpy(), ps)
        return acc
    def get_dataloader(self, trainset, validset):
        trainloader = DataLoader(trainset, batch_size=64, shuffle=True, num_workers=4, pin_memory=True)
        validloader = DataLoader(validset, batch_size=64, num_workers=4, pin_memory=True)
        return trainloader, validloader
    def train_batch_loop(self,model,trainloader,i):
        epoch_loss = 0.0
        epoch_acc = 0.0
        pbar_train = tqdm(trainloader, desc = "Epoch" + " [TRAIN] " + str(i+1))
        for t,data in enumerate(pbar_train):
            images,labels = data
            images = images.to(device)
            labels = labels.to(device)
            logits = model(images,labels)
            loss = self.criterion(logits,labels)
            self.optimizer.zero_grad()
            loss.backward()
            self.optimizer.step()
            epoch_loss += loss.item()
            epoch_acc += self.accuracy(logits,labels)
            pbar_train.set_postfix({'loss' : '%.6f' %float(epoch_loss/(t+1)), 'acc' : '%.6f' %float(epoch_acc/(t+1))})
        return epoch_loss / len(trainloader), epoch_acc / len(trainloader)
    @torch.no_grad()   # no gradients are needed during validation
    def valid_batch_loop(self, model, validloader, i):
        epoch_loss = 0.0
        epoch_acc = 0.0
        pbar_valid = tqdm(validloader, desc = "Epoch" + " [VALID] " + str(i+1))
        for v,data in enumerate(pbar_valid):
            images,labels = data
            images = images.to(device)
            labels = labels.to(device)
            logits = model(images,labels)
            loss = self.criterion(logits,labels)
            epoch_loss += loss.item()
            epoch_acc += self.accuracy(logits,labels)
            pbar_valid.set_postfix({'loss' : '%.6f' %float(epoch_loss/(v+1)), 'acc' : '%.6f' %float(epoch_acc/(v+1))})
        return epoch_loss / len(validloader), epoch_acc / len(validloader)
    def run(self,model,trainset,validset,epochs):
        trainloader,validloader = self.get_dataloader(trainset,validset)
        for i in range(epochs):
            model.train()
            avg_train_loss, avg_train_acc = self.train_batch_loop(model,trainloader,i)
            model.eval()
            avg_valid_loss, avg_valid_acc = self.valid_batch_loop(model,validloader,i)
        return model 
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr = 0.0001)
model = TrainModel(criterion=criterion, optimizer=optimizer, device=device).run(model, trainset, testset, 20)


5 Extracting Image Embeddings

model.eval()   # make sure dropout is disabled when extracting embeddings
emb = []
y = []
testloader = DataLoader(testset, batch_size=64)
with torch.no_grad():
    for images,labels in tqdm(testloader):
        images = images.to(device)
        embeddings = model(images)
        emb += [embeddings.detach().cpu()]
        y += [labels]
    embs = torch.cat(emb).cpu().numpy()
    y = torch.cat(y).cpu().numpy()

# the embeddings are already 3-dimensional, so they can be plotted directly (no t-SNE needed)
emb_df = pd.DataFrame(
    np.column_stack((embs, y)),
    columns=["x", "y", "z", "targets"]
)
fig = px.scatter_3d(emb_df, x='x', y='y', z='z', color='targets')
fig.show()


[Figure 78.png: 3D scatter plot of the learned MNIST embeddings]


Code reference: https://www.kaggle.com/parthdhameliya77/simple-arcface-implementation-on-mnist-dataset

