tf.nn.embedding_lookup

Overview:
embedding_lookup(params, ids, partition_strategy='mod', name=None, validate_indices=True, max_norm=None)
    Looks up `ids` in a list of embedding tensors.
    
    This function is used to perform parallel lookups on the list of
    tensors in `params`.  It is a generalization of
    `tf.gather`, where `params` is
    interpreted as a partitioning of a large embedding tensor.  `params` may be
    a `PartitionedVariable` as returned by using `tf.get_variable()` with a
    partitioner.
    
    If `len(params) > 1`, each element `id` of `ids` is partitioned between
    the elements of `params` according to the `partition_strategy`.
    In all strategies, if the id space does not evenly divide the number of
    partitions, each of the first `(max_id + 1) % len(params)` partitions will
    be assigned one more id.
    
    If `partition_strategy` is `"mod"`, we assign each id to partition
    `p = id % len(params)`. For instance,
    13 ids are split across 5 partitions as:
    `[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`
    
    If `partition_strategy` is `"div"`, we assign ids to partitions in a
    contiguous manner. In this case, 13 ids are split across 5 partitions as:
    `[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`
    
    The results of the lookup are concatenated into a dense
    tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.
    
    Args:
      params: A single tensor representing the complete embedding tensor,
        or a list of P tensors all of same shape except for the first dimension,
        representing sharded embedding tensors.  Alternatively, a
        `PartitionedVariable`, created by partitioning along dimension 0. Each
        element must be appropriately sized for the given `partition_strategy`.
      ids: A `Tensor` with type `int32` or `int64` containing the ids to be looked
        up in `params`.
      partition_strategy: A string specifying the partitioning strategy, relevant
        if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default
        is `"mod"`.
      name: A name for the operation (optional).
      validate_indices: DEPRECATED. If this operation is assigned to CPU, values
        in `indices` are always validated to be within range.  If assigned to GPU,
        out-of-bound indices result in safe but unspecified behavior, which may
        include raising an error.
      max_norm: If provided, each embedding whose l2 norm is larger than
        `max_norm` is clipped (scaled down) so that its norm equals `max_norm`.
    
    Returns:
      A `Tensor` with the same type as the tensors in `params`.
    
    Raises:
      ValueError: If `params` is empty.
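
To make the two partition strategies concrete, here is a minimal sketch (assuming a TensorFlow 1.x runtime) that reproduces the 13-ids-across-5-partitions layout above: the same 13-row table is sharded once "mod"-style and once "div"-style, and both lookups recover the same logical rows.

import numpy as np
import tensorflow as tf

full = np.arange(13, dtype=np.float32).reshape(13, 1)  # rows 0..12

# "mod": partition p holds the ids with id % 5 == p,
# i.e. [[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]
mod_shards = [tf.constant(full[p::5]) for p in range(5)]

# "div": contiguous chunks of sizes [3, 3, 3, 2, 2],
# i.e. [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]
div_shards = [tf.constant(s) for s in np.split(full, [3, 6, 9, 11])]

ids = tf.constant([0, 6, 9, 12])
out_mod = tf.nn.embedding_lookup(mod_shards, ids, partition_strategy='mod')
out_div = tf.nn.embedding_lookup(div_shards, ids, partition_strategy='div')

with tf.Session() as sess:
    print(sess.run(out_mod))  # [[0.] [6.] [9.] [12.]]
    print(sess.run(out_div))  # [[0.] [6.] [9.] [12.]]

Both calls return the same rows; only the physical shard layout differs, so the layout of `params` must match the chosen `partition_strategy`.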

In short, embedding_lookup(params, ids) returns the rows of params indexed by ids, in order.
For example, with ids=[1,3,2] it returns rows 1, 3, and 2 of params; the result is a tensor made up of those three rows.
I have been reading up on this recently; let's learn it together.
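
As the docstring notes, this is a generalization of tf.gather: for a single, unsharded params tensor the two calls are interchangeable. A minimal sketch (again assuming TensorFlow 1.x):

import tensorflow as tf

params = tf.constant([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
ids = tf.constant([1, 0, 2])

with tf.Session() as sess:
    looked_up, gathered = sess.run([tf.nn.embedding_lookup(params, ids),
                                    tf.gather(params, ids)])
    print((looked_up == gathered).all())  # True

The fuller example below exercises both 1-D and 2-D ids and shows the resulting shapes.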

# -*- coding: utf-8 -*-
import tensorflow as tf
import numpy as np

# A 5x3 embedding table; row i holds [i.1, i.2, i.3].
a = [[0.1, 0.2, 0.3], [1.1, 1.2, 1.3], [2.1, 2.2, 2.3], [3.1, 3.2, 3.3], [4.1, 4.2, 4.3]]
a = np.asarray(a)
idx1 = tf.Variable([0, 2, 3, 1], dtype=tf.int32)                  # 1-D ids
idx2 = tf.Variable([[0, 2, 3, 1], [4, 0, 2, 2]], dtype=tf.int32)  # 2-D ids
out1 = tf.nn.embedding_lookup(a, idx1)  # shape(ids) + shape(a)[1:] = (4, 3)
out2 = tf.nn.embedding_lookup(a, idx2)  # shape(ids) + shape(a)[1:] = (2, 4, 3)
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    print(sess.run(out1))
    print(out1)
    print('==================')
    print(sess.run(out2))
    print(out2)

Output:

[[ 0.1  0.2  0.3]
 [ 2.1  2.2  2.3]
 [ 3.1  3.2  3.3]
 [ 1.1  1.2  1.3]]
Tensor("embedding_lookup:0", shape=(4, 3), dtype=float64)
==================
[[[ 0.1  0.2  0.3]
  [ 2.1  2.2  2.3]
  [ 3.1  3.2  3.3]
  [ 1.1  1.2  1.3]]

 [[ 4.1  4.2  4.3]
  [ 0.1  0.2  0.3]
  [ 2.1  2.2  2.3]
  [ 2.1  2.2  2.3]]]
Tensor("embedding_lookup_1:0", shape=(2, 4, 3), dtype=float64)