I need to train a model where the labels themselves are images. I want to apply the same data augmentations to both the input image and the output image. Following this answer, I have zipped two generators:
# create augmentation generators for the input images and label images
image_datagen = ImageDataGenerator(
    rotation_range=45, width_shift_range=0.2, height_shift_range=0.2,
    brightness_range=(0.5, 1.5), fill_mode="reflect", horizontal_flip=True,
    zoom_range=0.3, preprocessing_function=self.apply_kernels)
density_datagen = ImageDataGenerator(
    rotation_range=45, width_shift_range=0.2, height_shift_range=0.2,
    brightness_range=(0.5, 1.5), fill_mode="reflect", horizontal_flip=True,
    zoom_range=0.3, preprocessing_function=self.apply_kernels)

# the shared seed keeps the random augmentations in sync across both streams
image_generator = image_datagen.flow_from_directory(
    self.train_data_dir, batch_size=batch_size, class_mode=None, seed=seed)
density_generator = density_datagen.flow_from_directory(
    self.train_label_dir, batch_size=batch_size, class_mode=None, seed=seed)

# Combine the image and label generators
self.train_generator = zip(image_generator, density_generator)
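The pairing logic can be sketched with plain Python iterators standing in for the Keras generators (the batch strings here are hypothetical placeholders, not real arrays):

```python
# Stand-ins for the two flow_from_directory iterators: each yields one
# "batch" per step. zip() pairs them element-wise, so batch i of the
# images is always matched with batch i of the labels.
image_batches = iter(["img_batch_0", "img_batch_1", "img_batch_2"])
label_batches = iter(["lbl_batch_0", "lbl_batch_1", "lbl_batch_2"])

train_generator = zip(image_batches, label_batches)

# Iterating yields (image_batch, label_batch) tuples in lockstep.
for image, label in train_generator:
    print(image, label)
```

Note that `zip` itself is lazy: it pulls one batch from each underlying iterator only when asked, which is why it works for data too large to hold in memory.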
I have built this inside a generator class, which I initialize using:

gen = data_generator(path_to_images, path_to_labels, batch_size)

I am not including the whole class because I am not sure it is required; I will edit and add it if needed. To check that it works, I am trying to pull the next batch from both generators:
image, label = gen.train_generator.next()
print(label.shape)
And I get
AttributeError: 'zip' object has no attribute 'next'
I understand why I get it, but I don't know how to get a single batch.
Using list(gen.train_generator) would consume too much memory.
Solution:
This will do what you want:
gen.train_generator.__next__()
although @OlvinRoght’s comment/answer is cleaner.
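The likely reason it is considered cleaner: the built-in `next()` advances any iterator, including a `zip` object, without calling the dunder method directly. A minimal sketch (with plain lists standing in for the batch arrays):

```python
# next(iterator) is the idiomatic equivalent of iterator.__next__().
pairs = zip(iter([1, 2, 3]), iter(["a", "b", "c"]))

batch = next(pairs)   # same result as pairs.__next__()
print(batch)          # (1, 'a')

# In the question's setting this would read:
# image, label = next(gen.train_generator)
```

The `.next()` method existed on iterators in Python 2; in Python 3 it was renamed to `__next__()`, which is why the attribute lookup fails.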