No description has been provided for this image

[K3BHPD2] - Regression with a Dense Network (DNN) - Advanced code¶

A more advanced implementation of the precedent example, using Keras3

Objectives :¶

  • Predicts housing prices from a set of house features.
  • Understanding the principle and the architecture of a regression with a dense neural network with backup and restore of the trained model.

The Boston Housing Prices Dataset consists of price of houses in various places in Boston.
Alongside with price, the dataset also provide these information :

  • CRIM: This is the per capita crime rate by town
  • ZN: This is the proportion of residential land zoned for lots larger than 25,000 sq.ft
  • INDUS: This is the proportion of non-retail business acres per town
  • CHAS: This is the Charles River dummy variable (this is equal to 1 if tract bounds river; 0 otherwise)
  • NOX: This is the nitric oxides concentration (parts per 10 million)
  • RM: This is the average number of rooms per dwelling
  • AGE: This is the proportion of owner-occupied units built prior to 1940
  • DIS: This is the weighted distances to five Boston employment centers
  • RAD: This is the index of accessibility to radial highways
  • TAX: This is the full-value property-tax rate per 10,000 dollars
  • PTRATIO: This is the pupil-teacher ratio by town
  • B: This is calculated as 1000(Bk — 0.63)^2, where Bk is the proportion of people of African American descent by town
  • LSTAT: This is the percentage lower status of the population
  • MEDV: This is the median value of owner-occupied homes in 1000 dollars

What we're going to do :¶

  • (Retrieve data)
  • (Preparing the data)
  • (Build a model)
  • Train and save the model
  • Restore saved model
  • Evaluate the model
  • Make some predictions

Step 1 - Import and init¶

You can also adjust the verbosity by changing the value of TF_CPP_MIN_LOG_LEVEL :

  • 0 = all messages are logged (default)
  • 1 = INFO messages are not printed.
  • 2 = INFO and WARNING messages are not printed.
  • 3 = INFO , WARNING and ERROR messages are not printed.
In [1]:
import os
os.environ['KERAS_BACKEND'] = 'torch'

import keras

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os,sys

from IPython.display import Markdown
from importlib import reload

import fidle

# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('K3BHPD2')


FIDLE - Environment initialization

Version              : 2.3.2
Run id               : K3BHPD2
Run dir              : ./run/K3BHPD2
Datasets dir         : /lustre/fswork/projects/rech/mlh/uja62cb/fidle-project/datasets-fidle
Start time           : 22/12/24 21:21:02
Hostname             : r3i5n3 (Linux)
Tensorflow log level : Info + Warning + Error  (=0)
Update keras cache   : False
Update torch cache   : False
Save figs            : ./run/K3BHPD2/figs (True)
keras                : 3.7.0
numpy                : 2.1.2
sklearn              : 1.5.2
yaml                 : 6.0.2
matplotlib           : 3.9.2
pandas               : 2.2.3
torch                : 2.5.0

Verbosity during training :

  • 0 = silent
  • 1 = progress bar
  • 2 = one line per epoch
In [2]:
fit_verbosity = 1

Override parameters (batch mode) - Just forget this cell

In [3]:
fidle.override('fit_verbosity')
** Overrided parameters : **
fit_verbosity        : 2

Step 2 - Retrieve data¶

2.1 - Option 1 : From Keras¶

Boston housing is a famous historic dataset, so we can get it directly from Keras datasets

In [4]:
# (x_train, y_train), (x_test, y_test) = keras.datasets.boston_housing.load_data(test_split=0.2, seed=113)

2.2 - Option 2 : From a csv file¶

More fun !

In [5]:
data = pd.read_csv(f'{datasets_dir}/BHPD/origine/BostonHousing.csv', header=0)

display(data.head(5).style.format("{0:.2f}"))
print('Missing Data : ',data.isna().sum().sum(), '  Shape is : ', data.shape)
  crim zn indus chas nox rm age dis rad tax ptratio b lstat medv
0 0.01 18.00 2.31 0.00 0.54 6.58 65.20 4.09 1.00 296.00 15.30 396.90 4.98 24.00
1 0.03 0.00 7.07 0.00 0.47 6.42 78.90 4.97 2.00 242.00 17.80 396.90 9.14 21.60
2 0.03 0.00 7.07 0.00 0.47 7.18 61.10 4.97 2.00 242.00 17.80 392.83 4.03 34.70
3 0.03 0.00 2.18 0.00 0.46 7.00 45.80 6.06 3.00 222.00 18.70 394.63 2.94 33.40
4 0.07 0.00 2.18 0.00 0.46 7.15 54.20 6.06 3.00 222.00 18.70 396.90 5.33 36.20
Missing Data :  0   Shape is :  (506, 14)

Step 3 - Preparing the data¶

3.1 - Split data¶

We will use 80% of the data for training and 20% for validation.
x will be input data and y the expected output

In [6]:
# ---- Split => train, test
#
data       = data.sample(frac=1., axis=0)
data_train = data.sample(frac=0.7, axis=0)
data_test  = data.drop(data_train.index)

# ---- Split => x,y (medv is price)
#
x_train = data_train.drop('medv',  axis=1)
y_train = data_train['medv']
x_test  = data_test.drop('medv',   axis=1)
y_test  = data_test['medv']

print('Original data shape was : ',data.shape)
print('x_train : ',x_train.shape, 'y_train : ',y_train.shape)
print('x_test  : ',x_test.shape,  'y_test  : ',y_test.shape)
Original data shape was :  (506, 14)
x_train :  (354, 13) y_train :  (354,)
x_test  :  (152, 13) y_test  :  (152,)

3.2 - Data normalization¶

Note :

  • All input data must be normalized, train and test.
  • To do this we will subtract the mean and divide by the standard deviation.
  • But test data should not be used in any way, even for normalization.
  • The mean and the standard deviation will therefore only be calculated with the train data.
In [7]:
display(x_train.describe().style.format("{0:.2f}").set_caption("Before normalization :"))

mean = x_train.mean()
std  = x_train.std()
x_train = (x_train - mean) / std
x_test  = (x_test  - mean) / std

display(x_train.describe().style.format("{0:.2f}").set_caption("After normalization :"))

x_train, y_train = np.array(x_train), np.array(y_train)
x_test,  y_test  = np.array(x_test),  np.array(y_test)
Before normalization :
  crim zn indus chas nox rm age dis rad tax ptratio b lstat
count 354.00 354.00 354.00 354.00 354.00 354.00 354.00 354.00 354.00 354.00 354.00 354.00 354.00
mean 3.20 11.59 10.96 0.07 0.55 6.29 67.92 3.81 9.44 402.58 18.40 357.06 12.36
std 6.63 23.01 6.85 0.26 0.12 0.73 28.10 2.06 8.69 169.75 2.19 91.22 6.99
min 0.01 0.00 0.46 0.00 0.39 3.56 6.00 1.13 1.00 187.00 12.60 0.32 1.92
25% 0.08 0.00 5.13 0.00 0.45 5.88 42.50 2.09 4.00 276.25 16.90 375.24 6.98
50% 0.24 0.00 8.56 0.00 0.53 6.18 76.50 3.37 5.00 315.00 18.90 391.77 10.66
75% 3.40 20.00 18.10 0.00 0.62 6.64 94.30 5.21 24.00 666.00 20.20 396.19 16.63
max 51.14 100.00 27.74 1.00 0.87 8.78 100.00 12.13 24.00 711.00 22.00 396.90 37.97
After normalization :
  crim zn indus chas nox rm age dis rad tax ptratio b lstat
count 354.00 354.00 354.00 354.00 354.00 354.00 354.00 354.00 354.00 354.00 354.00 354.00 354.00
mean -0.00 0.00 0.00 -0.00 0.00 -0.00 -0.00 0.00 -0.00 0.00 0.00 -0.00 -0.00
std 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
min -0.48 -0.50 -1.53 -0.28 -1.42 -3.74 -2.20 -1.30 -0.97 -1.27 -2.65 -3.91 -1.49
25% -0.47 -0.50 -0.85 -0.28 -0.86 -0.55 -0.90 -0.84 -0.63 -0.74 -0.69 0.20 -0.77
50% -0.45 -0.50 -0.35 -0.28 -0.18 -0.14 0.31 -0.22 -0.51 -0.52 0.23 0.38 -0.24
75% 0.03 0.37 1.04 -0.28 0.62 0.49 0.94 0.68 1.68 1.55 0.82 0.43 0.61
max 7.23 3.84 2.45 3.55 2.76 3.43 1.14 4.04 1.68 1.82 1.64 0.44 3.66

Step 4 - Build a model¶

More informations about :

  • Optimizer
  • Activation
  • Loss
  • Metrics
In [8]:
def get_model_v1(shape):
  
  model = keras.models.Sequential()
  model.add(keras.layers.Input(shape, name="InputLayer"))
  model.add(keras.layers.Dense(64, activation='relu', name='Dense_n1'))
  model.add(keras.layers.Dense(64, activation='relu', name='Dense_n2'))
  model.add(keras.layers.Dense(1, name='Output'))

  model.compile(optimizer = 'rmsprop',
                loss      = 'mse',
                metrics   = ['mae', 'mse'] )
  return model

5 - Train the model¶

5.1 - Get it¶

In [9]:
model=get_model_v1( (13,) )

model.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type)                         ┃ Output Shape                ┃         Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ Dense_n1 (Dense)                     │ (None, 64)                  │             896 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ Dense_n2 (Dense)                     │ (None, 64)                  │           4,160 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ Output (Dense)                       │ (None, 1)                   │              65 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
 Total params: 5,121 (20.00 KB)
 Trainable params: 5,121 (20.00 KB)
 Non-trainable params: 0 (0.00 B)

5.2 - Add callback¶

In [10]:
os.makedirs(f'{run_dir}/models',   mode=0o750, exist_ok=True)
save_dir = f'{run_dir}/models/best_model.keras'

savemodel_callback = keras.callbacks.ModelCheckpoint( filepath=save_dir, monitor='val_mae', mode='max', save_best_only=True)

5.3 - Train it¶

In [11]:
history = model.fit(x_train,
                    y_train,
                    epochs          = 50,
                    batch_size      = 10,
                    verbose         = fit_verbosity,
                    validation_data = (x_test, y_test),
                    callbacks       = [savemodel_callback])
Epoch 1/50
36/36 - 1s - 33ms/step - loss: 504.0892 - mae: 20.2701 - mse: 504.0892 - val_loss: 313.8943 - val_mae: 15.9113 - val_mse: 313.8943
Epoch 2/50
36/36 - 0s - 7ms/step - loss: 268.2964 - mae: 13.9221 - mse: 268.2964 - val_loss: 121.4570 - val_mae: 9.0979 - val_mse: 121.4570
Epoch 3/50
36/36 - 0s - 7ms/step - loss: 94.1940 - mae: 7.5216 - mse: 94.1940 - val_loss: 52.0099 - val_mae: 5.7546 - val_mse: 52.0099
Epoch 4/50
36/36 - 0s - 7ms/step - loss: 46.8844 - mae: 5.0371 - mse: 46.8844 - val_loss: 31.1393 - val_mae: 4.2664 - val_mse: 31.1393
Epoch 5/50
36/36 - 0s - 7ms/step - loss: 33.1741 - mae: 4.1702 - mse: 33.1741 - val_loss: 21.2424 - val_mae: 3.4802 - val_mse: 21.2424
Epoch 6/50
36/36 - 0s - 7ms/step - loss: 28.6976 - mae: 3.7654 - mse: 28.6976 - val_loss: 18.3379 - val_mae: 3.2438 - val_mse: 18.3379
Epoch 7/50
36/36 - 0s - 7ms/step - loss: 25.8940 - mae: 3.5451 - mse: 25.8940 - val_loss: 16.1607 - val_mae: 3.1166 - val_mse: 16.1607
Epoch 8/50
36/36 - 0s - 7ms/step - loss: 23.9242 - mae: 3.4136 - mse: 23.9242 - val_loss: 13.6759 - val_mae: 2.7929 - val_mse: 13.6759
Epoch 9/50
36/36 - 0s - 7ms/step - loss: 22.6142 - mae: 3.2796 - mse: 22.6142 - val_loss: 12.9297 - val_mae: 2.7073 - val_mse: 12.9297
Epoch 10/50
36/36 - 0s - 7ms/step - loss: 21.0243 - mae: 3.1341 - mse: 21.0243 - val_loss: 13.6224 - val_mae: 2.8486 - val_mse: 13.6224
Epoch 11/50
36/36 - 0s - 7ms/step - loss: 19.6825 - mae: 3.0867 - mse: 19.6825 - val_loss: 13.6975 - val_mae: 2.7909 - val_mse: 13.6975
Epoch 12/50
36/36 - 0s - 7ms/step - loss: 19.1371 - mae: 3.0048 - mse: 19.1371 - val_loss: 12.2408 - val_mae: 2.7137 - val_mse: 12.2408
Epoch 13/50
36/36 - 0s - 7ms/step - loss: 18.3602 - mae: 2.9735 - mse: 18.3602 - val_loss: 10.4140 - val_mae: 2.5042 - val_mse: 10.4140
Epoch 14/50
36/36 - 0s - 7ms/step - loss: 17.4970 - mae: 2.8553 - mse: 17.4970 - val_loss: 10.0471 - val_mae: 2.4488 - val_mse: 10.0471
Epoch 15/50
36/36 - 0s - 7ms/step - loss: 17.0819 - mae: 2.8090 - mse: 17.0819 - val_loss: 9.8079 - val_mae: 2.4644 - val_mse: 9.8079
Epoch 16/50
36/36 - 0s - 7ms/step - loss: 16.2253 - mae: 2.7502 - mse: 16.2253 - val_loss: 10.1293 - val_mae: 2.4558 - val_mse: 10.1293
Epoch 17/50
36/36 - 0s - 7ms/step - loss: 15.9765 - mae: 2.7330 - mse: 15.9765 - val_loss: 9.3152 - val_mae: 2.3447 - val_mse: 9.3152
Epoch 18/50
36/36 - 0s - 7ms/step - loss: 15.2028 - mae: 2.6942 - mse: 15.2028 - val_loss: 8.7731 - val_mae: 2.3297 - val_mse: 8.7731
Epoch 19/50
36/36 - 0s - 7ms/step - loss: 15.2136 - mae: 2.6573 - mse: 15.2136 - val_loss: 9.4598 - val_mae: 2.3441 - val_mse: 9.4598
Epoch 20/50
36/36 - 0s - 7ms/step - loss: 14.4396 - mae: 2.6348 - mse: 14.4396 - val_loss: 8.7904 - val_mae: 2.3630 - val_mse: 8.7904
Epoch 21/50
36/36 - 0s - 7ms/step - loss: 14.2857 - mae: 2.6226 - mse: 14.2857 - val_loss: 11.7327 - val_mae: 2.7035 - val_mse: 11.7327
Epoch 22/50
36/36 - 0s - 7ms/step - loss: 14.2466 - mae: 2.5884 - mse: 14.2466 - val_loss: 9.6170 - val_mae: 2.3997 - val_mse: 9.6170
Epoch 23/50
36/36 - 0s - 7ms/step - loss: 13.9027 - mae: 2.5799 - mse: 13.9027 - val_loss: 9.6761 - val_mae: 2.4461 - val_mse: 9.6761
Epoch 24/50
36/36 - 0s - 7ms/step - loss: 13.5278 - mae: 2.5680 - mse: 13.5278 - val_loss: 8.2778 - val_mae: 2.2438 - val_mse: 8.2778
Epoch 25/50
36/36 - 0s - 7ms/step - loss: 13.1984 - mae: 2.5181 - mse: 13.1984 - val_loss: 8.2566 - val_mae: 2.2434 - val_mse: 8.2566
Epoch 26/50
36/36 - 0s - 7ms/step - loss: 13.1560 - mae: 2.4832 - mse: 13.1560 - val_loss: 8.4924 - val_mae: 2.2746 - val_mse: 8.4924
Epoch 27/50
36/36 - 0s - 7ms/step - loss: 12.7709 - mae: 2.4726 - mse: 12.7709 - val_loss: 8.9141 - val_mae: 2.3369 - val_mse: 8.9141
Epoch 28/50
36/36 - 0s - 7ms/step - loss: 12.4796 - mae: 2.4296 - mse: 12.4796 - val_loss: 11.1180 - val_mae: 2.6564 - val_mse: 11.1180
Epoch 29/50
36/36 - 0s - 7ms/step - loss: 12.3882 - mae: 2.4903 - mse: 12.3882 - val_loss: 9.1900 - val_mae: 2.3682 - val_mse: 9.1900
Epoch 30/50
36/36 - 0s - 7ms/step - loss: 12.4658 - mae: 2.4400 - mse: 12.4658 - val_loss: 8.0953 - val_mae: 2.2214 - val_mse: 8.0953
Epoch 31/50
36/36 - 0s - 7ms/step - loss: 12.2393 - mae: 2.4089 - mse: 12.2393 - val_loss: 8.1036 - val_mae: 2.2190 - val_mse: 8.1036
Epoch 32/50
36/36 - 0s - 7ms/step - loss: 11.7950 - mae: 2.3836 - mse: 11.7950 - val_loss: 8.4817 - val_mae: 2.2751 - val_mse: 8.4817
Epoch 33/50
36/36 - 0s - 7ms/step - loss: 11.7656 - mae: 2.3711 - mse: 11.7656 - val_loss: 8.8034 - val_mae: 2.2703 - val_mse: 8.8034
Epoch 34/50
36/36 - 0s - 7ms/step - loss: 11.5989 - mae: 2.3616 - mse: 11.5989 - val_loss: 8.4788 - val_mae: 2.2942 - val_mse: 8.4788
Epoch 35/50
36/36 - 0s - 7ms/step - loss: 11.4104 - mae: 2.3433 - mse: 11.4104 - val_loss: 8.1913 - val_mae: 2.2221 - val_mse: 8.1913
Epoch 36/50
36/36 - 0s - 7ms/step - loss: 11.2820 - mae: 2.3459 - mse: 11.2820 - val_loss: 8.3538 - val_mae: 2.2082 - val_mse: 8.3538
Epoch 37/50
36/36 - 0s - 7ms/step - loss: 11.0402 - mae: 2.3209 - mse: 11.0402 - val_loss: 10.0886 - val_mae: 2.4600 - val_mse: 10.0886
Epoch 38/50
36/36 - 0s - 7ms/step - loss: 11.1212 - mae: 2.3429 - mse: 11.1212 - val_loss: 8.9846 - val_mae: 2.3410 - val_mse: 8.9846
Epoch 39/50
36/36 - 0s - 7ms/step - loss: 10.9695 - mae: 2.2781 - mse: 10.9695 - val_loss: 9.5102 - val_mae: 2.3031 - val_mse: 9.5102
Epoch 40/50
36/36 - 0s - 7ms/step - loss: 10.8220 - mae: 2.2819 - mse: 10.8220 - val_loss: 8.3202 - val_mae: 2.1936 - val_mse: 8.3202
Epoch 41/50
36/36 - 0s - 7ms/step - loss: 10.3957 - mae: 2.2363 - mse: 10.3957 - val_loss: 9.6119 - val_mae: 2.3390 - val_mse: 9.6119
Epoch 42/50
36/36 - 0s - 7ms/step - loss: 10.2897 - mae: 2.2376 - mse: 10.2897 - val_loss: 9.9161 - val_mae: 2.3472 - val_mse: 9.9161
Epoch 43/50
36/36 - 0s - 7ms/step - loss: 10.3316 - mae: 2.2337 - mse: 10.3316 - val_loss: 8.8876 - val_mae: 2.2405 - val_mse: 8.8876
Epoch 44/50
36/36 - 0s - 7ms/step - loss: 10.3797 - mae: 2.2195 - mse: 10.3797 - val_loss: 9.3457 - val_mae: 2.2572 - val_mse: 9.3457
Epoch 45/50
36/36 - 0s - 7ms/step - loss: 10.0972 - mae: 2.2261 - mse: 10.0972 - val_loss: 8.3382 - val_mae: 2.1792 - val_mse: 8.3382
Epoch 46/50
36/36 - 0s - 7ms/step - loss: 10.0753 - mae: 2.2529 - mse: 10.0753 - val_loss: 8.8636 - val_mae: 2.2892 - val_mse: 8.8636
Epoch 47/50
36/36 - 0s - 7ms/step - loss: 9.7222 - mae: 2.1923 - mse: 9.7222 - val_loss: 9.0218 - val_mae: 2.2316 - val_mse: 9.0218
Epoch 48/50
36/36 - 0s - 7ms/step - loss: 9.8261 - mae: 2.1911 - mse: 9.8261 - val_loss: 10.5458 - val_mae: 2.4497 - val_mse: 10.5458
Epoch 49/50
36/36 - 0s - 7ms/step - loss: 9.6306 - mae: 2.2001 - mse: 9.6306 - val_loss: 9.3719 - val_mae: 2.2492 - val_mse: 9.3719
Epoch 50/50
36/36 - 0s - 7ms/step - loss: 9.5192 - mae: 2.1664 - mse: 9.5192 - val_loss: 8.7695 - val_mae: 2.2178 - val_mse: 8.7695

Step 6 - Evaluate¶

6.1 - Model evaluation¶

MAE = Mean Absolute Error (between the labels and predictions)
A mae equal to 3 represents an average error in prediction of $3k.

In [12]:
score = model.evaluate(x_test, y_test, verbose=0)

print('x_test / loss      : {:5.4f}'.format(score[0]))
print('x_test / mae       : {:5.4f}'.format(score[1]))
print('x_test / mse       : {:5.4f}'.format(score[2]))
x_test / loss      : 8.7695
x_test / mae       : 2.2178
x_test / mse       : 8.7695

6.2 - Training history¶

What was the best result during our training ?

In [13]:
print("min( val_mae ) : {:.4f}".format( min(history.history["val_mae"]) ) )
min( val_mae ) : 2.1792
In [14]:
fidle.scrawler.history( history, plot={'MSE' :['mse', 'val_mse'],
                        'MAE' :['mae', 'val_mae'],
                        'LOSS':['loss','val_loss']}, save_as='01-history')
Saved: ./run/K3BHPD2/figs/01-history_0
No description has been provided for this image
Saved: ./run/K3BHPD2/figs/01-history_1
No description has been provided for this image
Saved: ./run/K3BHPD2/figs/01-history_2
No description has been provided for this image

Step 7 - Restore a model :¶

7.1 - Reload model¶

In [15]:
loaded_model = keras.models.load_model(f'{run_dir}/models/best_model.keras')
loaded_model.summary()
print("Loaded.")
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type)                         ┃ Output Shape                ┃         Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ Dense_n1 (Dense)                     │ (None, 64)                  │             896 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ Dense_n2 (Dense)                     │ (None, 64)                  │           4,160 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ Output (Dense)                       │ (None, 1)                   │              65 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
 Total params: 10,244 (40.02 KB)
 Trainable params: 5,121 (20.00 KB)
 Non-trainable params: 0 (0.00 B)
 Optimizer params: 5,123 (20.01 KB)
Loaded.

7.2 - Evaluate it :¶

In [16]:
score = loaded_model.evaluate(x_test, y_test, verbose=0)

print('x_test / loss      : {:5.4f}'.format(score[0]))
print('x_test / mae       : {:5.4f}'.format(score[1]))
print('x_test / mse       : {:5.4f}'.format(score[2]))
x_test / loss      : 313.8943
x_test / mae       : 15.9113
x_test / mse       : 313.8943

7.3 - Make a prediction¶

In [17]:
my_data = [ 1.26425925, -0.48522739,  1.0436489 , -0.23112788,  1.37120745,
       -2.14308942,  1.13489104, -1.06802005,  1.71189006,  1.57042287,
        0.77859951,  0.14769795,  2.7585581 ]
real_price = 10.4

my_data=np.array(my_data).reshape(1,13)
In [18]:
predictions = loaded_model.predict( my_data, verbose=fit_verbosity )
print("Prediction : {:.2f} K$   Reality : {:.2f} K$".format(predictions[0][0], real_price))
1/1 - 0s - 3ms/step
Prediction : 6.60 K$   Reality : 10.40 K$
In [19]:
fidle.end()

End time : 22/12/24 21:21:17
Duration : 00:00:16 635ms
This notebook ends here :-)
https://fidle.cnrs.fr


No description has been provided for this image