When creating a sequential MLP that takes a batched input of shape [batch, n_channels, 1], keras.layers.Input appears to forcibly squeeze the last axis. This breaks the NN I'm trying to build, which lifts each float on the last axis to a vector of dimension n.

This produces an "expected axis -1 of input shape to have value 1, but received input with shape (None, n_channels)" error that should not happen, since the input received by the Sequential model is reported as having the correct shape: inputs=tf.Tensor(shape=(None, n_channels, 1), dtype=float32).

See the code snippet attached below, where:

X.shape=TensorShape([1028, 128]) y.shape=TensorShape([1028, 128])

Please note that n_channels (128 here) can vary.
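
For reference, the imports and a stand-in for the data (the real X, y and d are defined elsewhere; random tensors with the shapes above and a placeholder d are used here just so the snippet runs):

import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt

# Placeholder data with the shapes stated above (the real X and y come from elsewhere)
X = tf.random.uniform((1028, 128))
y = tf.random.uniform((1028, 128))

# Placeholder for d, which is defined elsewhere in the full code; only d[0] is used below
d = [200]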

class FONet(keras.Model):
    """
    DeepONet model

    The model doesn't encode/decode the function
    Inputs are assumed to be encoded functions on a list of sensor points

    """

    def __init__(self, lifting_dims, act):
        """
        Args
        ----
        - lifting_dims: dimensions of the layers of the lifting NN
        - d: dimensions of neural networks between the lifting and projection
        - k_max: see init argument for the InnerLayer class 
        - act: activation function for all relevant neural network layers
        """
        super().__init__()

        self.d = d
        self.L = len(d)-1

        # Lifting network R: built for scalar inputs (last axis of size 1)
        self.R = tf.keras.Sequential()
        self.R.add(tf.keras.layers.Input(shape=(1,)))
        for dim in lifting_dims:
            print(f"{dim=}")
            self.R.add(tf.keras.layers.Dense(dim, activation=act))
        self.R.add(tf.keras.layers.Dense(d[0]))
    
    def call(self, inputs):
        inputs = tf.expand_dims(inputs, axis=-1)  # (batch, n_channels) -> (batch, n_channels, 1)
        print(f"{inputs.shape=}")
        lifted = self.R(inputs)

        return lifted

lift_dims = [200,200]

fonet_model = FONet(lift_dims, tf.keras.activations.sigmoid)

# Setup (optim):
lr = 1e-3
ep = 250
loss = tf.keras.losses.MSE
optimizer = tf.keras.optimizers.Adam(learning_rate = lr)
fonet_model.compile(loss=loss, optimizer=optimizer)
fonet_model.summary()
fonet_model.R.summary()

print(f"{X.shape=}")
print(f"{y.shape=}")

history = fonet_model.fit(X, y, verbose=0, epochs=ep, batch_size=32, shuffle=True)

plt.title("Training set")
plt.xlabel("epochs")
plt.ylabel("MSE")
plt.loglog(history.history["loss"])

This results in the error:

Input 0 of layer "dense_1" is incompatible with the layer: expected axis -1 of input shape to have value 1, but received input with shape (None, 128)

Arguments received by Sequential.call():
  • inputs=tf.Tensor(shape=(None, 128, 1), dtype=float32)
  • training=None
  • mask=None

I'd expect no error, since the inputs are reported as having the correct shape. I interpret this as layers.Input squeezing the inputs automatically, but I don't know why. The same thing happens when the shape is given via the input_shape argument of layers.Dense instead of an Input layer.
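
One quick sanity check is to inspect the lifting network right after construction to see what it was actually built for; tf.keras should report the declared input shape of the Sequential and the input spec of its first Dense layer:

# Quick check: what input shape was R built for?
print(fonet_model.R.input_shape)           # (None, 1), as declared by the Input layer
print(fonet_model.R.layers[0].input_spec)  # the first Dense layer's input constraint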

  • Have you tried adding a custom layer that reshapes the input into the shape the next layer expects? – Dhana D. Commented Mar 14 at 9:29

1 Answer


Adding the following layer after the inputs solves the issue by forcibly reshaping them, as suggested in the comments:

class ReshapeLayer(keras.layers.Layer):
    def __init__(self, **kwargs):
        super(ReshapeLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Nothing to build; the layer has no weights.
        pass

    def call(self, inputs):
        # Re-add the trailing axis: (batch, n_channels) -> (batch, n_channels, 1)
        return tf.expand_dims(inputs, axis=-1)
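
The answer doesn't spell out exactly where the layer goes; one plausible wiring is to add it directly after the Input layer of the lifting Sequential in FONet.__init__, so that whatever reaches the network is forced back to a trailing axis of size 1 before the first Dense layer:

# Possible placement of ReshapeLayer inside FONet.__init__
self.R = tf.keras.Sequential()
self.R.add(tf.keras.layers.Input(shape=(1,)))
self.R.add(ReshapeLayer())  # force the last axis back to size 1 before the Dense layers
for dim in lifting_dims:
    self.R.add(tf.keras.layers.Dense(dim, activation=act))
self.R.add(tf.keras.layers.Dense(d[0]))

With this in place, the tf.expand_dims call in FONet.call does the same job as the layer, so one of the two can likely be dropped. The built-in tf.keras.layers.Reshape((-1, 1)) may achieve the same effect without a custom layer.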
