Python Keras Input 0 of layer batch_normalization is incompatible with the layer

I am training some MLP models on the CIFAR-10 dataset, and I want to try data augmentation as in the code block below.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, models

learning_rate = 0.01
batch_size = 32
epoch = 50

(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
# convert from integers to floats
train_images = train_images.astype('float32')
test_images = test_images.astype('float32')
# normalize to range 0-1
train_images = train_images / 255.0
test_images = test_images / 255.0

train_labels = keras.utils.to_categorical(train_labels, num_classes=10)
test_labels = keras.utils.to_categorical(test_labels, num_classes=10)

augment = keras.preprocessing.image.ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True)
it_train = augment.flow(train_images, train_labels, batch_size=batch_size)
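The `to_categorical` step above turns the integer labels into 10-way one-hot vectors. As a minimal sketch of what it produces (using a NumPy identity matrix as a stand-in for `keras.utils.to_categorical`, so it runs without Keras):

```python
import numpy as np

# CIFAR-10 labels arrive with shape (n, 1); indexing a 10-row
# identity matrix one-hot encodes them into shape (n, 10).
labels = np.array([[3], [0], [9]])
one_hot = np.eye(10, dtype='float32')[labels.ravel()]

print(one_hot.shape)  # (3, 10)
```

Each row contains a single 1.0 at the label's index, which is the target format expected by the softmax output layer below.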

And below is the model I use.

optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=0.9)
model = models.Sequential()
model.add(layers.Dense(units=1000, activation=activation, input_dim=3072))
model.add(layers.BatchNormalization())
model.add(layers.Dropout(0.2))
model.add(layers.Dense(units=300, activation=activation))
model.add(layers.BatchNormalization())
model.add(layers.Dropout(0.2))
model.add(layers.Dense(units=100, activation=activation))
model.add(layers.BatchNormalization())
model.add(layers.Dropout(0.2))
model.add(layers.Dense(units=10, activation='softmax'))

This is the line where I train the model.

history = model.fit(it_train, steps_per_epoch=len(train_images), epochs=epoch, validation_data=(test_images, test_labels))
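As an aside, with a generator `steps_per_epoch` is normally the number of batches per epoch, not the number of images, so `steps_per_epoch=len(train_images)` would run far more steps than one pass over the data. Assuming the standard 50,000 CIFAR-10 training images and `batch_size=32`, one full pass is:

```python
import math

n_train = 50000   # CIFAR-10 training set size
batch_size = 32

# One epoch over the generator = ceil(n / batch_size) batches,
# not n batches as steps_per_epoch=len(train_images) requests.
steps_per_epoch = math.ceil(n_train / batch_size)
print(steps_per_epoch)  # 1563
```

This does not cause the shape error below, but it is worth fixing alongside it.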

However, I get the error below. The CIFAR-10 dataset contains 32x32x3 images and 10 labels.

ValueError: Input 0 of layer batch_normalization is incompatible with the layer: expected ndim=2, found ndim=4. Full shape received: (None, None, None, 1000)

What can I do to get rid of this error?

Solution:

The input shape of CIFAR-10 images is (32, 32, 3), but your model's input isn't taking that shape. You can try the following for your model input.

model = keras.Sequential()

# Before the 1st dense layer, add a Flatten layer that will flatten
# the incoming tensors of shape (32, 32, 3).
model.add(keras.layers.Flatten(input_shape=(32, 32, 3)))
model.add(keras.layers.Dense(units=1000, activation=activation))

model.add(keras.layers.BatchNormalization())
...
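What `Flatten` does can be sketched in plain NumPy (a hedged illustration, not the Keras implementation): it reshapes each (32, 32, 3) image in a batch into a single 3072-dimensional vector, giving the 2-D `(batch, features)` input that the `Dense`/`BatchNormalization` stack expects, instead of the 4-D tensor that triggered the `ndim=4` error.

```python
import numpy as np

# A dummy batch shaped like the generator's output: (batch, 32, 32, 3).
batch = np.zeros((32, 32, 32, 3), dtype='float32')

# Flatten everything except the batch axis: result is (32, 3072),
# i.e. ndim=2, matching the Dense(input_dim=3072) layer in the question.
flat = batch.reshape(batch.shape[0], -1)
print(flat.shape)  # (32, 3072)
```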