
ValueError: The last dimension of the inputs to `Dense` should be defined. Found `None`.

I have been building a TensorFlow 2 model, but I keep running into this error. I tried defining the shape for every layer, but nothing changed. The error only appears when I specify sparse=True on the Input layer, which I have to do because my input tensors are sparse and other parts of the script rely on that. TensorFlow version: 2.0.0-beta1. If I use anything newer, other obscure errors show up because of the sparse input; TF 2.0 seems to have quite a few problems with this kind of input. Current function definition:

def make_feed_forward_model():
    inputs = tf.keras.Input(shape=(HPARAMS.max_seq_length,), dtype='float32', name='sample', sparse=True)
    dense_layer_1 = tf.keras.layers.Dense(HPARAMS.num_fc_units, activation='relu')(inputs)
    dense_layer_2 = tf.keras.layers.Dense(HPARAMS.num_fc_units_2, activation='relu')(dense_layer_1)
    dense_layer_3 = tf.keras.layers.Dense(HPARAMS.num_fc_units_3, activation='relu')(dense_layer_2)
    outputs = tf.keras.layers.Dense(4, activation='softmax')(dense_layer_3)

    return tf.keras.Model(inputs=inputs, outputs=outputs)

Then, when I run the following, the error appears:

model = make_feed_forward_model()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

Traceback:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-56-720f117bb231> in <module>
      1 # Feel free to use an architecture of your choice.
----> 2 model = make_feed_forward_model()
      3 model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

<ipython-input-55-5f35f6f22300> in make_feed_forward_model()
     18     #embedding_layer = tf.keras.layers.Embedding(HPARAMS.vocab_size, 16)(inputs)
     19     #pooling_layer = tf.keras.layers.GlobalAveragePooling1D()(inputs)
---> 20     dense_layer_1 = tf.keras.layers.Dense(HPARAMS.num_fc_units, activation='relu')(inputs)
     21     dense_layer_2 = tf.keras.layers.Dense(HPARAMS.num_fc_units_2, activation='relu')(dense_layer_1)
     22     dense_layer_3 = tf.keras.layers.Dense(HPARAMS.num_fc_units_3, activation='relu')(dense_layer_2)

~\Anaconda3\envs\tf-nsl\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, inputs, *args, **kwargs)
    614           # Build layer if applicable (if the `build` method has been
    615           # overridden).
--> 616           self._maybe_build(inputs)
    617 
    618           # Wrapping `call` function in autograph to allow for dynamic control

~\Anaconda3\envs\tf-nsl\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in _maybe_build(self, inputs)
   1964         # operations.
   1965         with tf_utils.maybe_init_scope(self):
-> 1966           self.build(input_shapes)
   1967       # We must set self.built since user defined build functions are not
   1968       # constrained to set self.built.

~\Anaconda3\envs\tf-nsl\lib\site-packages\tensorflow\python\keras\layers\core.py in build(self, input_shape)
   1003     input_shape = tensor_shape.TensorShape(input_shape)
   1004     if tensor_shape.dimension_value(input_shape[-1]) is None:
-> 1005       raise ValueError('The last dimension of the inputs to `Dense` '
   1006                        'should be defined. Found `None`.')
   1007     last_dim = tensor_shape.dimension_value(input_shape[-1])

ValueError: The last dimension of the inputs to `Dense` should be defined. Found `None`.

Edit: SparseTensor error. If I use anything newer than TF 2.0-beta1, training fails completely:

ValueError: The two structures don't have the same nested structure.

    First structure: type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.float32, name=None)

    Second structure: type=SparseTensor str=SparseTensor(indices=Tensor("sample/indices_1:0", shape=(None, 2), dtype=int64), values=Tensor("sample/values_1:0", shape=(None,), dtype=float32), dense_shape=Tensor("sample/shape_1:0", shape=(2,), dtype=int64))

    More specifically: Substructure "type=SparseTensor str=SparseTensor(indices=Tensor("sample/indices_1:0", shape=(None, 2), dtype=int64), values=Tensor("sample/values_1:0", shape=(None,), dtype=float32), dense_shape=Tensor("sample/shape_1:0", shape=(2,), dtype=int64))" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.float32, name=None)" is not
    Entire first structure:
    .
    Entire second structure:
    .

Edit 2: error after adding batch_size to the Input layer:

def make_feed_forward_model():
    inputs = tf.keras.Input(shape=(HPARAMS.max_seq_length,), dtype='float32', name='sample', sparse=True, batch_size=HPARAMS.batch_size)
    dense_layer_1 = tf.keras.layers.Dense(HPARAMS.num_fc_units, activation='relu')(inputs)
    dense_layer_2 = tf.keras.layers.Dense(HPARAMS.num_fc_units_2, activation='relu')(dense_layer_1)
    dense_layer_3 = tf.keras.layers.Dense(HPARAMS.num_fc_units_3, activation='relu')(dense_layer_2)
    outputs = tf.keras.layers.Dense(4, activation='softmax')(dense_layer_3)

    return tf.keras.Model(inputs=inputs, outputs=outputs)

model = make_feed_forward_model()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

When I run model.compile():

TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'> to Tensor. Contents: SparseTensor(indices=Tensor("sample/indices_3:0", shape=(None, 2), dtype=int64), values=Tensor("sample/values_3:0", shape=(None,), dtype=float32), dense_shape=Tensor("sample/shape_3:0", shape=(2,), dtype=int64)). Consider casting elements to a supported type.

Question sourced from StackOverflow: /questions/59384131/valueerror-dimension-of-the-inputs-to-dense-should-be-defined-found-none

kun坤 2019-12-26 15:40:28
1 Answer
  • This happens because when the input tensor is sparse, its reported shape is (None, None) rather than (HPARAMS.max_seq_length,):

    inputs = tf.keras.Input(shape=(100,), dtype='float32', name='sample', sparse=True)
    print(inputs.shape)
    # output: (?, ?)
    

    This also seems to be an open issue. One solution is to write a custom layer by subclassing the Layer class (see this); a rough sketch of that idea appears after the snippet below. As a workaround (tested on tf-gpu 2.0.0), adding a batch size to the Input layer works fine:

    inputs = tf.keras.Input(shape=(100,), dtype='float32', name='sample', sparse=True, batch_size=32)
    print(inputs.shape)
    # output: (32, 100)
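
    Below is a minimal, untested sketch of the subclassing idea (not part of the original answer): a Dense-like layer that takes the feature dimension explicitly, since a sparse Input reports its last dimension as None. The name SparseDense is made up for illustration.

    import tensorflow as tf

    class SparseDense(tf.keras.layers.Layer):
        # Dense-like layer that accepts a SparseTensor input. The feature
        # dimension is passed in explicitly instead of being read from
        # input_shape[-1], which is None for sparse Keras inputs.
        def __init__(self, units, input_dim, activation=None, **kwargs):
            super().__init__(**kwargs)
            self.units = units
            self.input_dim = input_dim
            self.activation = tf.keras.activations.get(activation)

        def build(self, input_shape):
            self.kernel = self.add_weight(
                name='kernel', shape=(self.input_dim, self.units),
                initializer='glorot_uniform', trainable=True)
            self.bias = self.add_weight(
                name='bias', shape=(self.units,),
                initializer='zeros', trainable=True)

        def call(self, inputs):
            # Multiply the SparseTensor by the dense kernel without
            # densifying the whole input first.
            outputs = tf.sparse.sparse_dense_matmul(inputs, self.kernel)
            return self.activation(outputs + self.bias)

    # Hypothetical usage: only the first layer sees the sparse input, so
    # only it needs to be replaced; the following Dense layers stay as-is.
    # dense_layer_1 = SparseDense(HPARAMS.num_fc_units, HPARAMS.max_seq_length,
    #                             activation='relu')(inputs)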
    
    2019-12-26 15:40:37