
[K3AE5] - Advanced denoiser and classifier model

Episode 5: Construction of an advanced denoiser and classifier model

Objectives:

  • Building a multiple-output model, able to both denoise and classify
  • Understanding a more advanced programming model

Because the computational cost is significant, it is preferable to use a very simple dataset such as MNIST.
The use of a GPU is often indispensable.

What we're going to do:

  • Defining a multiple-output model using the Keras procedural programming model (see the sketch after this list)
  • Build the model
  • Train it
  • Follow the learning process
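
Before diving in, here is a minimal sketch of what a two-output Keras model looks like with the functional API. This is not the AE5_builder architecture used below (the layer sizes here are invented for illustration); it only shows the general pattern: a single input feeds two named branches, and the output names are later reused to attach a loss to each branch.

import keras
from keras import layers

# A minimal two-output model (hypothetical layer sizes, for illustration only)
inputs = keras.Input(shape=(28, 28, 1))
x      = layers.Conv2D(8, 3, activation='relu', padding='same')(inputs)
x      = layers.Flatten()(x)
latent = layers.Dense(10, activation='relu')(x)

# Branch 1: image reconstruction, named 'ae'
d        = layers.Dense(28 * 28, activation='sigmoid')(latent)
denoised = layers.Reshape((28, 28, 1), name='ae')(d)

# Branch 2: classification, named 'classifier'
classcat = layers.Dense(10, activation='softmax', name='classifier')(latent)

model = keras.Model(inputs, {'ae': denoised, 'classifier': classcat})
model.summary()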

Data Terminology:

  • clean_train, clean_test for noiseless images
  • noisy_train, noisy_test for noisy images
  • class_train, class_test for the classes to which the images belong
  • denoised_test for denoised images at the output of the model
  • classcat_test for the class predictions at the model output (softmax values)
  • classid_test for the predicted class ids (i.e. argmax of classcat_test)

Step 1 - Init Python stuff

1.1 - Init

In [1]:
import os
os.environ['KERAS_BACKEND'] = 'torch'

import keras

import numpy as np
from skimage import io
import random

from modules.AE5_builder    import AE5_builder
from modules.MNIST          import MNIST
from modules.ImagesCallback import ImagesCallback

import fidle


# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('K3AE5')


FIDLE - Environment initialization

Version              : 2.3.2
Run id               : K3AE5
Run dir              : ./run/K3AE5
Datasets dir         : /lustre/fswork/projects/rech/mlh/uja62cb/fidle-project/datasets-fidle
Start time           : 22/12/24 21:30:22
Hostname             : r3i6n0 (Linux)
Tensorflow log level : Info + Warning + Error  (=0)
Update keras cache   : False
Update torch cache   : False
Save figs            : ./run/K3AE5/figs (True)
keras                : 3.7.0
numpy                : 2.1.2
sklearn              : 1.5.2
yaml                 : 6.0.2
skimage              : 0.24.0
matplotlib           : 3.9.2
pandas               : 2.2.3
torch                : 2.5.0

1.2 - Parameters

  • prepared_dataset : filename of the prepared dataset (needs 400 MB, but can be in ./data)
  • dataset_seed : random seed for shuffling the dataset
  • scale : proportion of the dataset to use (1. for 100%)
  • latent_dim : dimension of the latent space
  • train_prop : proportion used for training (the rest being for the test)
  • batch_size : batch size
  • epochs : number of epochs for training
  • fit_verbosity : verbosity during training: 0 = silent, 1 = progress bar, 2 = one line per epoch

In [2]:
prepared_dataset = './data/mnist-noisy.h5'
dataset_seed     = None

scale            = .1

train_prop       = .8
batch_size       = 128
epochs           = 10
fit_verbosity    = 1

Override parameters (batch mode) - Just forget this cell

In [3]:
fidle.override('prepared_dataset', 'dataset_seed', 'scale')
fidle.override('train_prop', 'batch_size', 'epochs', 'fit_verbosity')
** Overrided parameters : **
scale                : 1
** Overrided parameters : **
epochs               : 30
fit_verbosity        : 2

Step 2 - Retrieve dataset

With our MNIST class, in one call, we can reload, rescale, shuffle and split our previously saved dataset :-)

In [4]:
clean_train,clean_test, noisy_train,noisy_test, class_train,class_test = MNIST.reload_prepared_dataset(
                                                                                    scale      = scale, 
                                                                                    train_prop = train_prop,
                                                                                    seed       = dataset_seed,
                                                                                    shuffle    = True,
                                                                                    filename   = prepared_dataset )
Loaded.
rescaled (1).
Seeded (None)
Shuffled.
splited (0.8).
clean_train shape is :  (56000, 28, 28, 1)
clean_test  shape is :  (14000, 28, 28, 1)
noisy_train shape is :  (56000, 28, 28, 1)
noisy_test  shape is :  (14000, 28, 28, 1)
class_train shape is :  (56000,)
class_test  shape is :  (14000,)
Blake2b digest is    :  296f23be5f05c00ca9e9
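
MNIST.reload_prepared_dataset is a Fidle helper; the sketch below shows, under assumptions (HDF5 storage read with h5py, and hypothetical key names), the kind of reload / shuffle / split it performs. The real implementation may differ.

import h5py
import numpy as np

# Sketch of reload / shuffle / split ('clean', 'noisy', 'class' are hypothetical key names)
with h5py.File('./data/mnist-noisy.h5', 'r') as f:
    clean = np.array(f['clean'])
    noisy = np.array(f['noisy'])
    klass = np.array(f['class'])

rng = np.random.default_rng(seed=None)          # dataset_seed
p   = rng.permutation(len(clean))               # shuffle all three arrays identically
clean, noisy, klass = clean[p], noisy[p], klass[p]

n = int(len(clean) * 0.8)                       # train_prop
clean_train, clean_test = clean[:n], clean[n:]
noisy_train, noisy_test = noisy[:n], noisy[n:]
class_train, class_test = klass[:n], klass[n:]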

Step 3 - Build model

In [5]:
builder = AE5_builder( ae   = { 'latent_dim':10 }, 
                       cnn1 = { 'lc1':8,  'lc2':16, 'ld':100 }, 
                       cnn2 = { 'lc1':32, 'lc2':64, 'ld':50 } )

model = builder.create_model()
In [6]:
model.compile(optimizer='rmsprop', 
              loss={'ae':'binary_crossentropy', 'classifier':'sparse_categorical_crossentropy'},
              loss_weights={'ae':1., 'classifier':1.},
              metrics={'classifier':'accuracy'} )
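
With two named outputs, Keras computes one loss per output and sums them according to loss_weights: loss = 1.0 * ae_loss + 1.0 * classifier_loss, which is exactly what the training logs below report. A minimal sketch of that weighted sum on a dummy batch (all tensors here are invented for illustration):

import numpy as np
import keras

# Dummy batch, only to illustrate how the total loss is assembled
clean_batch    = np.random.rand(4, 28, 28, 1).astype('float32')
denoised_batch = np.random.rand(4, 28, 28, 1).astype('float32')
class_batch    = np.array([3, 1, 4, 1])
softmax_batch  = np.full((4, 10), 0.1, dtype='float32')

bce  = keras.losses.BinaryCrossentropy()
scce = keras.losses.SparseCategoricalCrossentropy()

# loss = 1.0 * ae_loss + 1.0 * classifier_loss  (the loss_weights above)
total = 1.0 * bce(clean_batch, denoised_batch) + 1.0 * scce(class_batch, softmax_batch)
print(float(total))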

Step 4 - Train

Training takes about 20 minutes on a CPU, or about 1 min 30 s on a GPU (V100, IDRIS).

In [7]:
# ---- Callback : Images
#
fidle.utils.mkdir( run_dir + '/images')
filename = run_dir + '/images/image-{epoch:03d}-{i:02d}.jpg'

encoder = model.get_layer('ae').get_layer('encoder')
decoder = model.get_layer('ae').get_layer('decoder')

callback_images = ImagesCallback(filename, x=clean_test[:5], encoder=encoder,decoder=decoder)
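
ImagesCallback is a small Fidle module; below is a minimal sketch of what such a callback might look like (the class name DenoisingImagesCallback and the implementation details are assumptions, not the actual module). At the end of every epoch it pushes a few test images through the current encoder and decoder and saves the reconstructions, which Step 6 reloads to visualize the learning progress.

import numpy as np
import keras
from skimage import io

class DenoisingImagesCallback(keras.callbacks.Callback):
    """Save denoised versions of a few images at the end of each epoch (sketch)."""
    def __init__(self, filename, x, encoder, decoder):
        super().__init__()
        self.filename = filename
        self.x        = x
        self.encoder  = encoder
        self.decoder  = decoder

    def on_epoch_end(self, epoch, logs=None):
        z     = self.encoder.predict(self.x, verbose=0)
        x_hat = self.decoder.predict(z, verbose=0)
        for i, img in enumerate(x_hat):
            io.imsave(self.filename.format(epoch=epoch, i=i),
                      (img.squeeze() * 255).astype(np.uint8))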
In [8]:
chrono = fidle.Chrono()
chrono.start()

history = model.fit(noisy_train, [clean_train, class_train],
                 batch_size      = batch_size,
                 epochs          = epochs,
                 verbose         = fit_verbosity,
                 validation_data = (noisy_test, [clean_test, class_test]),
                 callbacks       = [ callback_images ] ) 


chrono.show()
Epoch 1/30
/lustre/fswork/projects/rech/mlh/uja62cb/local/fidle-k3/lib/python3.12/site-packages/keras/src/backend/common/backend_utils.py:91: UserWarning: You might experience inconsistencies across backends when calling conv transpose with kernel_size=3, stride=2, dilation_rate=1, padding=same, output_padding=1.
  warnings.warn(
438/438 - 11s - 26ms/step - ae_loss: 0.2621 - classifier_accuracy: 0.6411 - classifier_loss: 1.0617 - loss: 1.3238 - val_ae_loss: 0.2087 - val_classifier_accuracy: 0.8189 - val_classifier_loss: 0.5657 - val_loss: 0.7744
Epoch 2/30
438/438 - 11s - 25ms/step - ae_loss: 0.1881 - classifier_accuracy: 0.8048 - classifier_loss: 0.6023 - loss: 0.7904 - val_ae_loss: 0.1768 - val_classifier_accuracy: 0.8612 - val_classifier_loss: 0.4198 - val_loss: 0.5966
Epoch 3/30
438/438 - 11s - 25ms/step - ae_loss: 0.1720 - classifier_accuracy: 0.8302 - classifier_loss: 0.5225 - loss: 0.6945 - val_ae_loss: 0.1727 - val_classifier_accuracy: 0.8694 - val_classifier_loss: 0.3958 - val_loss: 0.5685
Epoch 4/30
438/438 - 11s - 25ms/step - ae_loss: 0.1644 - classifier_accuracy: 0.8450 - classifier_loss: 0.4854 - loss: 0.6498 - val_ae_loss: 0.1636 - val_classifier_accuracy: 0.8780 - val_classifier_loss: 0.3742 - val_loss: 0.5378
Epoch 5/30
438/438 - 11s - 25ms/step - ae_loss: 0.1605 - classifier_accuracy: 0.8522 - classifier_loss: 0.4599 - loss: 0.6203 - val_ae_loss: 0.1613 - val_classifier_accuracy: 0.8809 - val_classifier_loss: 0.3605 - val_loss: 0.5218
Epoch 6/30
438/438 - 11s - 25ms/step - ae_loss: 0.1576 - classifier_accuracy: 0.8595 - classifier_loss: 0.4365 - loss: 0.5941 - val_ae_loss: 0.1616 - val_classifier_accuracy: 0.8894 - val_classifier_loss: 0.3438 - val_loss: 0.5053
Epoch 7/30
438/438 - 11s - 25ms/step - ae_loss: 0.1555 - classifier_accuracy: 0.8603 - classifier_loss: 0.4290 - loss: 0.5845 - val_ae_loss: 0.1574 - val_classifier_accuracy: 0.8893 - val_classifier_loss: 0.3464 - val_loss: 0.5039
Epoch 8/30
438/438 - 11s - 25ms/step - ae_loss: 0.1539 - classifier_accuracy: 0.8677 - classifier_loss: 0.4118 - loss: 0.5656 - val_ae_loss: 0.1543 - val_classifier_accuracy: 0.8846 - val_classifier_loss: 0.3532 - val_loss: 0.5075
Epoch 9/30
438/438 - 11s - 25ms/step - ae_loss: 0.1526 - classifier_accuracy: 0.8692 - classifier_loss: 0.4046 - loss: 0.5572 - val_ae_loss: 0.1540 - val_classifier_accuracy: 0.8879 - val_classifier_loss: 0.3380 - val_loss: 0.4920
Epoch 10/30
438/438 - 11s - 25ms/step - ae_loss: 0.1514 - classifier_accuracy: 0.8718 - classifier_loss: 0.3947 - loss: 0.5460 - val_ae_loss: 0.1522 - val_classifier_accuracy: 0.8858 - val_classifier_loss: 0.3408 - val_loss: 0.4930
Epoch 11/30
438/438 - 11s - 25ms/step - ae_loss: 0.1505 - classifier_accuracy: 0.8731 - classifier_loss: 0.3902 - loss: 0.5407 - val_ae_loss: 0.1521 - val_classifier_accuracy: 0.8897 - val_classifier_loss: 0.3334 - val_loss: 0.4854
Epoch 12/30
438/438 - 11s - 25ms/step - ae_loss: 0.1496 - classifier_accuracy: 0.8758 - classifier_loss: 0.3818 - loss: 0.5314 - val_ae_loss: 0.1507 - val_classifier_accuracy: 0.8944 - val_classifier_loss: 0.3270 - val_loss: 0.4777
Epoch 13/30
438/438 - 11s - 25ms/step - ae_loss: 0.1488 - classifier_accuracy: 0.8767 - classifier_loss: 0.3778 - loss: 0.5266 - val_ae_loss: 0.1518 - val_classifier_accuracy: 0.8914 - val_classifier_loss: 0.3288 - val_loss: 0.4807
Epoch 14/30
438/438 - 11s - 25ms/step - ae_loss: 0.1481 - classifier_accuracy: 0.8797 - classifier_loss: 0.3703 - loss: 0.5185 - val_ae_loss: 0.1511 - val_classifier_accuracy: 0.8915 - val_classifier_loss: 0.3372 - val_loss: 0.4884
Epoch 15/30
438/438 - 11s - 25ms/step - ae_loss: 0.1475 - classifier_accuracy: 0.8799 - classifier_loss: 0.3678 - loss: 0.5153 - val_ae_loss: 0.1494 - val_classifier_accuracy: 0.8926 - val_classifier_loss: 0.3245 - val_loss: 0.4738
Epoch 16/30
438/438 - 11s - 25ms/step - ae_loss: 0.1470 - classifier_accuracy: 0.8819 - classifier_loss: 0.3605 - loss: 0.5075 - val_ae_loss: 0.1496 - val_classifier_accuracy: 0.8950 - val_classifier_loss: 0.3255 - val_loss: 0.4751
Epoch 17/30
438/438 - 11s - 25ms/step - ae_loss: 0.1465 - classifier_accuracy: 0.8837 - classifier_loss: 0.3589 - loss: 0.5054 - val_ae_loss: 0.1494 - val_classifier_accuracy: 0.8877 - val_classifier_loss: 0.3458 - val_loss: 0.4952
Epoch 18/30
438/438 - 11s - 25ms/step - ae_loss: 0.1460 - classifier_accuracy: 0.8843 - classifier_loss: 0.3519 - loss: 0.4979 - val_ae_loss: 0.1489 - val_classifier_accuracy: 0.8941 - val_classifier_loss: 0.3241 - val_loss: 0.4730
Epoch 19/30
438/438 - 11s - 25ms/step - ae_loss: 0.1456 - classifier_accuracy: 0.8841 - classifier_loss: 0.3521 - loss: 0.4976 - val_ae_loss: 0.1483 - val_classifier_accuracy: 0.8971 - val_classifier_loss: 0.3231 - val_loss: 0.4714
Epoch 20/30
438/438 - 11s - 25ms/step - ae_loss: 0.1450 - classifier_accuracy: 0.8867 - classifier_loss: 0.3461 - loss: 0.4911 - val_ae_loss: 0.1478 - val_classifier_accuracy: 0.8951 - val_classifier_loss: 0.3240 - val_loss: 0.4718
Epoch 21/30
438/438 - 11s - 25ms/step - ae_loss: 0.1447 - classifier_accuracy: 0.8868 - classifier_loss: 0.3460 - loss: 0.4907 - val_ae_loss: 0.1483 - val_classifier_accuracy: 0.8954 - val_classifier_loss: 0.3265 - val_loss: 0.4748
Epoch 22/30
438/438 - 11s - 25ms/step - ae_loss: 0.1442 - classifier_accuracy: 0.8877 - classifier_loss: 0.3426 - loss: 0.4869 - val_ae_loss: 0.1496 - val_classifier_accuracy: 0.8938 - val_classifier_loss: 0.3257 - val_loss: 0.4753
Epoch 23/30
438/438 - 11s - 25ms/step - ae_loss: 0.1439 - classifier_accuracy: 0.8866 - classifier_loss: 0.3442 - loss: 0.4881 - val_ae_loss: 0.1478 - val_classifier_accuracy: 0.8956 - val_classifier_loss: 0.3239 - val_loss: 0.4718
Epoch 24/30
438/438 - 11s - 25ms/step - ae_loss: 0.1436 - classifier_accuracy: 0.8904 - classifier_loss: 0.3381 - loss: 0.4817 - val_ae_loss: 0.1487 - val_classifier_accuracy: 0.8936 - val_classifier_loss: 0.3276 - val_loss: 0.4762
Epoch 25/30
438/438 - 11s - 25ms/step - ae_loss: 0.1433 - classifier_accuracy: 0.8899 - classifier_loss: 0.3373 - loss: 0.4805 - val_ae_loss: 0.1492 - val_classifier_accuracy: 0.8939 - val_classifier_loss: 0.3230 - val_loss: 0.4722
Epoch 26/30
438/438 - 11s - 25ms/step - ae_loss: 0.1428 - classifier_accuracy: 0.8925 - classifier_loss: 0.3323 - loss: 0.4752 - val_ae_loss: 0.1485 - val_classifier_accuracy: 0.8921 - val_classifier_loss: 0.3302 - val_loss: 0.4787
Epoch 27/30
438/438 - 11s - 25ms/step - ae_loss: 0.1426 - classifier_accuracy: 0.8911 - classifier_loss: 0.3304 - loss: 0.4730 - val_ae_loss: 0.1476 - val_classifier_accuracy: 0.8894 - val_classifier_loss: 0.3351 - val_loss: 0.4828
Epoch 28/30
438/438 - 11s - 25ms/step - ae_loss: 0.1423 - classifier_accuracy: 0.8905 - classifier_loss: 0.3333 - loss: 0.4757 - val_ae_loss: 0.1474 - val_classifier_accuracy: 0.8916 - val_classifier_loss: 0.3283 - val_loss: 0.4757
Epoch 29/30
438/438 - 11s - 25ms/step - ae_loss: 0.1420 - classifier_accuracy: 0.8934 - classifier_loss: 0.3246 - loss: 0.4666 - val_ae_loss: 0.1482 - val_classifier_accuracy: 0.8888 - val_classifier_loss: 0.3351 - val_loss: 0.4833
Epoch 30/30
438/438 - 11s - 25ms/step - ae_loss: 0.1418 - classifier_accuracy: 0.8909 - classifier_loss: 0.3316 - loss: 0.4734 - val_ae_loss: 0.1482 - val_classifier_accuracy: 0.8924 - val_classifier_loss: 0.3282 - val_loss: 0.4765
Duration :  329.5 seconds
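
Note that since both outputs are named, the targets could equivalently have been passed as a dict keyed by output name, which is less fragile than relying on output order; an equivalent call (sketch):

history = model.fit(noisy_train,
                    {'ae': clean_train, 'classifier': class_train},
                    batch_size      = batch_size,
                    epochs          = epochs,
                    verbose         = fit_verbosity,
                    validation_data = (noisy_test, {'ae': clean_test, 'classifier': class_test}),
                    callbacks       = [callback_images])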

Save model weights

In [9]:
os.makedirs(f'{run_dir}/models', exist_ok=True)

model.save_weights(f'{run_dir}/models/model.weights.h5')
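
Keras 3 requires the .weights.h5 suffix for save_weights, which stores only the parameters. Alternatively, the whole model (architecture + weights) can be saved in one file, which avoids rebuilding it with AE5_builder at reload time; a sketch (if the builder uses custom layers, they may need to be registered for load_model to work):

# Save architecture + weights together...
model.save(f'{run_dir}/models/model.keras')
# ...and reload later without rebuilding:
model = keras.models.load_model(f'{run_dir}/models/model.keras')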

Step 5 - History

In [10]:
fidle.scrawler.history(history,  plot={'Loss':['loss', 'val_loss'],
                                 'Accuracy':['classifier_accuracy','val_classifier_accuracy']}, save_as='01-history')
Saved: ./run/K3AE5/figs/01-history_0
Saved: ./run/K3AE5/figs/01-history_1
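
fidle.scrawler.history is a Fidle plotting helper; the same curves can be drawn directly from the History object with matplotlib (a minimal sketch):

import matplotlib.pyplot as plt

plt.plot(history.history['loss'],     label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()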

Step 6 - Denoising progress

In [11]:
imgs=[]
for epoch in range(0,epochs,4):
    for i in range(5):
        filename = run_dir + '/images/image-{epoch:03d}-{i:02d}.jpg'.format(epoch=epoch, i=i)
        img      = io.imread(filename)
        imgs.append(img)      

fidle.utils.subtitle('Real images (clean_test) :')
fidle.scrawler.images(clean_test[:5], None, indices='all', columns=5, x_size=2,y_size=2, interpolation=None, save_as='02-original-real')

fidle.utils.subtitle('Noisy images (noisy_test) :')
fidle.scrawler.images(noisy_test[:5], None, indices='all', columns=5, x_size=2,y_size=2, interpolation=None, save_as='03-original-noisy')

fidle.utils.subtitle('Evolution during the training period (denoised_test) :')
fidle.scrawler.images(imgs, None, indices='all', columns=5, x_size=2,y_size=2, interpolation=None, y_padding=0.1, save_as='04-learning')

fidle.utils.subtitle('Noisy images (noisy_test) :')
fidle.scrawler.images(noisy_test[:5], None, indices='all', columns=5, x_size=2,y_size=2, interpolation=None, save_as=None)

fidle.utils.subtitle('Real images (clean_test) :')
fidle.scrawler.images(clean_test[:5], None, indices='all', columns=5, x_size=2,y_size=2, interpolation=None, save_as=None)


Real images (clean_test) :

Saved: ./run/K3AE5/figs/02-original-real


Noisy images (noisy_test) :

Saved: ./run/K3AE5/figs/03-original-noisy


Evolution during the training period (denoised_test) :

Saved: ./run/K3AE5/figs/04-learning


Noisy images (noisy_test) :



Real images (clean_test) :


Step 7 - Evaluation

Note: we will use the following data:

  • clean_train, clean_test for noiseless images
  • noisy_train, noisy_test for noisy images
  • class_train, class_test for the classes to which the images belong
  • denoised_test for denoised images at the output of the model
  • classcat_test for the class predictions at the model output (softmax values)
  • classid_test for the predicted class ids (i.e. argmax of classcat_test)




7.1 - Reload our best model

In [12]:
builder = AE5_builder( ae   = { 'latent_dim':10 }, 
                       cnn1 = { 'lc1':8,  'lc2':16, 'ld':100 }, 
                       cnn2 = { 'lc1':32, 'lc2':64, 'ld':50 } )

model = builder.create_model()

model.load_weights(f'{run_dir}/models/model.weights.h5')
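
Note that load_weights only restores the parameters: the model must first be rebuilt with exactly the same architecture, which is why AE5_builder is called again here with the same arguments as in Step 3.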

7.2 - Let's make a prediction

Note that our model returns two outputs: the denoised images on the 'ae' output and the class predictions on the 'classifier' output.

In [13]:
outputs = model.predict(noisy_test, verbose=0)

denoised = outputs['ae']
classcat = outputs['classifier']

print('Denoised images   (denoised_test) shape : ', denoised.shape)
print('Predicted classes (classcat_test) shape : ', classcat.shape)
Denoised images   (denoised_test) shape :  (14000, 28, 28, 1)
Predicted classes (classcat_test) shape :  (14000, 10)

7.3 - Denoised images

In [14]:
i=random.randint(0,len(denoised)-8)
j=i+8

fidle.utils.subtitle('Noisy test images (input):')
fidle.scrawler.images(noisy_test[i:j], None, indices='all', columns=8, x_size=2,y_size=2, interpolation=None, save_as='05-test-noisy')

fidle.utils.subtitle('Denoised images (output):')
fidle.scrawler.images(denoised[i:j], None, indices='all', columns=8, x_size=2,y_size=2, interpolation=None, save_as='06-test-predict')

fidle.utils.subtitle('Real test images :')
fidle.scrawler.images(clean_test[i:j], None, indices='all', columns=8, x_size=2,y_size=2, interpolation=None, save_as='07-test-real')


Noisy test images (input):

Saved: ./run/K3AE5/figs/05-test-noisy


Denoised images (output):

Saved: ./run/K3AE5/figs/06-test-predict


Real test images :

Saved: ./run/K3AE5/figs/07-test-real
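
The visual comparison can be complemented with a quantitative measure. Below is a sketch using skimage's peak signal-to-noise ratio (higher is better), computed on a subset of the test images; data_range=1.0 is passed because our images are scaled to [0, 1].

from skimage.metrics import peak_signal_noise_ratio

# Mean PSNR before and after denoising, on the first 100 test images
psnr_noisy    = np.mean([peak_signal_noise_ratio(c.squeeze(), n.squeeze(), data_range=1.0)
                         for c, n in zip(clean_test[:100], noisy_test[:100])])
psnr_denoised = np.mean([peak_signal_noise_ratio(c.squeeze(), d.squeeze(), data_range=1.0)
                         for c, d in zip(clean_test[:100], denoised[:100])])

print(f'Mean PSNR noisy    vs clean : {psnr_noisy:.2f} dB')
print(f'Mean PSNR denoised vs clean : {psnr_denoised:.2f} dB')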

7.4 - Class prediction

Note: the evaluation requires the noisy images as input (noisy_test) and the two expected outputs:

  • the images without noise (clean_test)
  • the classes (class_test)
In [15]:
# We need to (re)compile our resurrected model (to specify loss and metrics)
#
model.compile(optimizer='rmsprop', 
              loss={'ae':'binary_crossentropy', 'classifier':'sparse_categorical_crossentropy'},
              loss_weights={'ae':1., 'classifier':1.},
              metrics={'classifier':'accuracy'} )


# Get an evaluation (return_dict=True avoids fragile positional indexing:
# with two outputs, score[1] would be the ae loss, not the accuracy)
#
score = model.evaluate(noisy_test, [clean_test, class_test], verbose=0, return_dict=True)

# And show results
#
fidle.utils.subtitle("Accuracy :")
print(f"Classification accuracy : {score['classifier_accuracy']:4.4f}")

fidle.utils.subtitle("Few examples :")
classid_test  = np.argmax(classcat, axis=-1)
fidle.scrawler.images(noisy_test, class_test, range(0,200), columns=12, x_size=1, y_size=1, y_pred=classid_test, save_as='04-predictions')


Accuracy :

Classification accuracy : 0.1482


Few examples :

Saved: ./run/K3AE5/figs/04-predictions
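
For a finer error analysis than a single accuracy figure, a confusion matrix shows which digits get confused with which; a minimal sketch with scikit-learn, reusing classid_test from above:

from sklearn.metrics import confusion_matrix

cm = confusion_matrix(class_test, classid_test)
print(cm)   # rows: true classes, columns: predicted classes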
In [16]:
fidle.end()

End time : 22/12/24 21:36:29
Duration : 00:06:06 323ms
This notebook ends here :-)
https://fidle.cnrs.fr


