How to implement a DMLPCNN model with two MLP blocks in Python
I have questions about implementing the DMLPCNN network, built from a convolutional layer, an MLP block, a parametric pooling layer, a second convolutional layer, a second MLP block, a global average pooling layer, and an output layer (logistic-regression activation). Both MLP blocks are configured with kernel = 20, micro-network width = 32, and tanh activation. The last layer of the network is a logistic-regression function whose output signal should reflect the degradation state of a machine. I'm working with vibration signals, and what I've done so far is shown in this code:
batch_size = 32
num_features = 1
X_train_normalized shape: (2522, 2560, 1)
y_train_normalized shape: (2522, 2560, 1)
X_train_normalized:- Minimum: 0.0, Maximum: 1.0
y_train_normalized:- Minimum: 0.0, Maximum: 1.0
X_val_normalized shape: (281, 2560, 1)
y_val_normalized shape: (281, 2560, 1)
X_val_normalized:- Minimum: 0.0, Maximum: 1.0
y_val_normalized:- Minimum: 0.0, Maximum: 1.0
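The [0, 1] ranges above suggest min-max scaling; a minimal sketch of that step, which the post itself does not show (X_train, X_val are assumed names for the raw arrays):

# Hypothetical min-max scaling, fitted on the training data only
x_min, x_max = X_train.min(), X_train.max()
X_train_normalized = (X_train - x_min) / (x_max - x_min)
X_val_normalized = (X_val - x_min) / (x_max - x_min)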
# Parametric p-norm pooling implementation
import tensorflow as tf

class PNormPooling(tf.keras.layers.Layer):
    def __init__(self, pool_size, **kwargs):
        super(PNormPooling, self).__init__(**kwargs)
        self.pool_size = pool_size
        # Learnable exponent p, initialized to 1 (plain average pooling)
        self.p = tf.Variable(initial_value=tf.ones(1), trainable=True, name='p')

    def call(self, inputs):
        # Generalized-mean (p-norm) pooling: (mean(|x|^p))^(1/p)
        x = tf.abs(inputs)
        x = tf.pow(x, self.p)
        x = tf.nn.avg_pool1d(x, ksize=self.pool_size, strides=self.pool_size, padding='VALID')
        x = tf.pow(x, 1.0 / self.p)
        return x
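# Quick sanity check (my addition, not from the original post): with the initial
# value p = 1, the layer should reduce to plain average pooling of |x|.
_x = tf.random.normal((2, 8, 3))  # (batch, timesteps, channels)
_pn = PNormPooling(pool_size=2)(_x)
_avg = tf.nn.avg_pool1d(tf.abs(_x), ksize=2, strides=2, padding='VALID')
print(tf.reduce_max(tf.abs(_pn - _avg)).numpy())  # expected: ~0.0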
from tensorflow.keras import regularizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dense, Dropout, GlobalAveragePooling1D
from tensorflow.keras.optimizers import Adam

input_shape = (2560, 1)  # (timesteps, num_features), from the shapes above

model = Sequential()
model.add(Conv1D(256, kernel_size=20, activation='relu', input_shape=input_shape,
                 kernel_regularizer=regularizers.l2(0.01)))
model.add(Dropout(0.5))
model.add(Dense(32, activation='tanh', kernel_regularizer=regularizers.l2(0.01)))  # first MLP block
model.add(PNormPooling(pool_size=2))  # parametric p-norm pooling
model.add(Conv1D(256, kernel_size=20, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dropout(0.5))
model.add(Dense(32, activation='tanh', kernel_regularizer=regularizers.l2(0.01)))  # second MLP block
model.add(GlobalAveragePooling1D())  # already yields (batch, 32), so no Flatten is needed
model.add(Dense(1, activation='sigmoid'))  # logistic-regression output
optimizer = Adam(learning_rate=0.005)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=['mae'])
model.summary()
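# Design note (my reading, not confirmed by the original post): Keras applies a
# Dense layer to the last axis of a 3-D tensor, so each Dense(32) above acts per
# timestep -- the same micro-network as a 1x1 "mlpconv" in Network-in-Network,
# which could equivalently be written as:
#   model.add(Conv1D(32, kernel_size=1, activation='tanh'))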
callbacks = [
tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True),
tf.keras.callbacks.ModelCheckpoint('melhor_modelo.h5', monitor='val_loss', save_best_only=True),
tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, verbose=1)
]
history = model.fit(X_train_normalized, y_train_normalized, epochs=50, batch_size=32,
                    validation_data=(X_val_normalized, y_val_normalized),
                    callbacks=callbacks)  # pass the callbacks defined above
X_test = data_teste.reshape((data_teste.shape[0], data_teste.shape[1], 1))
y_test = labels_teste.reshape((labels_teste.shape[0], labels_teste.shape[1], 1))
# The test data must be scaled with the same normalization as the training data
y_pred = model.predict(X_test)
I expect the predicted signal to look similar to the one in the attached figure.
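A minimal plotting sketch (assuming matplotlib, and that y_pred has shape (n_samples, 1), i.e. one indicator value per input window):

import matplotlib.pyplot as plt

plt.plot(y_pred.squeeze(), label='predicted degradation indicator')
plt.xlabel('window index')
plt.ylabel('health indicator')
plt.legend()
plt.show()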