[K3BHPD2] - Regression with a Dense Network (DNN) - Advanced code¶
A more advanced implementation of the precedent example, using Keras3Objectives :¶
- Predicts housing prices from a set of house features.
- Understanding the principle and the architecture of a regression with a dense neural network with backup and restore of the trained model.
The Boston Housing Prices Dataset consists of price of houses in various places in Boston.
Alongside with price, the dataset also provide these information :
- CRIM: This is the per capita crime rate by town
- ZN: This is the proportion of residential land zoned for lots larger than 25,000 sq.ft
- INDUS: This is the proportion of non-retail business acres per town
- CHAS: This is the Charles River dummy variable (this is equal to 1 if tract bounds river; 0 otherwise)
- NOX: This is the nitric oxides concentration (parts per 10 million)
- RM: This is the average number of rooms per dwelling
- AGE: This is the proportion of owner-occupied units built prior to 1940
- DIS: This is the weighted distances to five Boston employment centers
- RAD: This is the index of accessibility to radial highways
- TAX: This is the full-value property-tax rate per 10,000 dollars
- PTRATIO: This is the pupil-teacher ratio by town
- B: This is calculated as 1000(Bk — 0.63)^2, where Bk is the proportion of people of African American descent by town
- LSTAT: This is the percentage lower status of the population
- MEDV: This is the median value of owner-occupied homes in 1000 dollars
What we're going to do :¶
- (Retrieve data)
- (Preparing the data)
- (Build a model)
- Train and save the model
- Restore saved model
- Evaluate the model
- Make some predictions
Step 1 - Import and init¶
You can also adjust the verbosity by changing the value of TF_CPP_MIN_LOG_LEVEL :
- 0 = all messages are logged (default)
- 1 = INFO messages are not printed.
- 2 = INFO and WARNING messages are not printed.
- 3 = INFO , WARNING and ERROR messages are not printed.
In [1]:
import os
os.environ['KERAS_BACKEND'] = 'torch'
import keras
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os,sys
from IPython.display import Markdown
from importlib import reload
import fidle
# Init Fidle environment
run_id, run_dir, datasets_dir = fidle.init('K3BHPD2')
FIDLE - Environment initialization
Version : 2.3.0 Run id : K3BHPD2 Run dir : ./run/K3BHPD2 Datasets dir : /gpfswork/rech/mlh/uja62cb/fidle-project/datasets-fidle Start time : 03/03/24 21:03:55 Hostname : r3i7n8 (Linux) Tensorflow log level : Warning + Error (=1) Update keras cache : False Update torch cache : False Save figs : ./run/K3BHPD2/figs (True) keras : 3.0.4 numpy : 1.24.4 sklearn : 1.3.2 yaml : 6.0.1 matplotlib : 3.8.2 pandas : 2.1.3 torch : 2.1.1
Verbosity during training :
- 0 = silent
- 1 = progress bar
- 2 = one line per epoch
In [2]:
fit_verbosity = 1
Override parameters (batch mode) - Just forget this cell
In [3]:
fidle.override('fit_verbosity')
** Overrided parameters : ** fit_verbosity : 2
Step 2 - Retrieve data¶
2.1 - Option 1 : From Keras¶
Boston housing is a famous historic dataset, so we can get it directly from Keras datasets
In [4]:
# (x_train, y_train), (x_test, y_test) = keras.datasets.boston_housing.load_data(test_split=0.2, seed=113)
2.2 - Option 2 : From a csv file¶
More fun !
In [5]:
data = pd.read_csv(f'{datasets_dir}/BHPD/origine/BostonHousing.csv', header=0)
display(data.head(5).style.format("{0:.2f}"))
print('Missing Data : ',data.isna().sum().sum(), ' Shape is : ', data.shape)
crim | zn | indus | chas | nox | rm | age | dis | rad | tax | ptratio | b | lstat | medv | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0.01 | 18.00 | 2.31 | 0.00 | 0.54 | 6.58 | 65.20 | 4.09 | 1.00 | 296.00 | 15.30 | 396.90 | 4.98 | 24.00 |
1 | 0.03 | 0.00 | 7.07 | 0.00 | 0.47 | 6.42 | 78.90 | 4.97 | 2.00 | 242.00 | 17.80 | 396.90 | 9.14 | 21.60 |
2 | 0.03 | 0.00 | 7.07 | 0.00 | 0.47 | 7.18 | 61.10 | 4.97 | 2.00 | 242.00 | 17.80 | 392.83 | 4.03 | 34.70 |
3 | 0.03 | 0.00 | 2.18 | 0.00 | 0.46 | 7.00 | 45.80 | 6.06 | 3.00 | 222.00 | 18.70 | 394.63 | 2.94 | 33.40 |
4 | 0.07 | 0.00 | 2.18 | 0.00 | 0.46 | 7.15 | 54.20 | 6.06 | 3.00 | 222.00 | 18.70 | 396.90 | 5.33 | 36.20 |
Missing Data : 0 Shape is : (506, 14)
In [6]:
# ---- Split => train, test
#
data = data.sample(frac=1., axis=0)
data_train = data.sample(frac=0.7, axis=0)
data_test = data.drop(data_train.index)
# ---- Split => x,y (medv is price)
#
x_train = data_train.drop('medv', axis=1)
y_train = data_train['medv']
x_test = data_test.drop('medv', axis=1)
y_test = data_test['medv']
print('Original data shape was : ',data.shape)
print('x_train : ',x_train.shape, 'y_train : ',y_train.shape)
print('x_test : ',x_test.shape, 'y_test : ',y_test.shape)
Original data shape was : (506, 14) x_train : (354, 13) y_train : (354,) x_test : (152, 13) y_test : (152,)
3.2 - Data normalization¶
Note :
- All input data must be normalized, train and test.
- To do this we will subtract the mean and divide by the standard deviation.
- But test data should not be used in any way, even for normalization.
- The mean and the standard deviation will therefore only be calculated with the train data.
In [7]:
display(x_train.describe().style.format("{0:.2f}").set_caption("Before normalization :"))
mean = x_train.mean()
std = x_train.std()
x_train = (x_train - mean) / std
x_test = (x_test - mean) / std
display(x_train.describe().style.format("{0:.2f}").set_caption("After normalization :"))
x_train, y_train = np.array(x_train), np.array(y_train)
x_test, y_test = np.array(x_test), np.array(y_test)
crim | zn | indus | chas | nox | rm | age | dis | rad | tax | ptratio | b | lstat | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
count | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 |
mean | 3.79 | 11.94 | 10.99 | 0.08 | 0.55 | 6.30 | 68.38 | 3.84 | 9.55 | 408.95 | 18.52 | 357.58 | 12.47 |
std | 9.44 | 23.91 | 6.77 | 0.27 | 0.12 | 0.72 | 28.01 | 2.15 | 8.68 | 167.30 | 2.16 | 89.86 | 7.09 |
min | 0.01 | 0.00 | 0.46 | 0.00 | 0.39 | 3.86 | 6.00 | 1.13 | 1.00 | 187.00 | 12.60 | 0.32 | 1.73 |
25% | 0.08 | 0.00 | 5.13 | 0.00 | 0.45 | 5.89 | 45.62 | 2.09 | 4.00 | 281.00 | 17.40 | 375.46 | 6.95 |
50% | 0.23 | 0.00 | 9.69 | 0.00 | 0.54 | 6.22 | 76.25 | 3.28 | 5.00 | 332.00 | 19.10 | 392.05 | 11.11 |
75% | 3.52 | 19.38 | 18.10 | 0.00 | 0.62 | 6.63 | 94.10 | 5.24 | 24.00 | 666.00 | 20.20 | 396.24 | 16.72 |
max | 88.98 | 100.00 | 27.74 | 1.00 | 0.87 | 8.78 | 100.00 | 12.13 | 24.00 | 711.00 | 22.00 | 396.90 | 37.97 |
crim | zn | indus | chas | nox | rm | age | dis | rad | tax | ptratio | b | lstat | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
count | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 | 354.00 |
mean | 0.00 | -0.00 | -0.00 | -0.00 | -0.00 | -0.00 | 0.00 | -0.00 | -0.00 | -0.00 | 0.00 | -0.00 | -0.00 |
std | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
min | -0.40 | -0.50 | -1.55 | -0.29 | -1.44 | -3.37 | -2.23 | -1.26 | -0.98 | -1.33 | -2.75 | -3.98 | -1.51 |
25% | -0.39 | -0.50 | -0.87 | -0.29 | -0.90 | -0.57 | -0.81 | -0.81 | -0.64 | -0.76 | -0.52 | 0.20 | -0.78 |
50% | -0.38 | -0.50 | -0.19 | -0.29 | -0.15 | -0.11 | 0.28 | -0.26 | -0.52 | -0.46 | 0.27 | 0.38 | -0.19 |
75% | -0.03 | 0.31 | 1.05 | -0.29 | 0.62 | 0.46 | 0.92 | 0.65 | 1.66 | 1.54 | 0.78 | 0.43 | 0.60 |
max | 9.02 | 3.68 | 2.47 | 3.48 | 2.76 | 3.44 | 1.13 | 3.86 | 1.66 | 1.81 | 1.61 | 0.44 | 3.59 |
In [8]:
def get_model_v1(shape):
model = keras.models.Sequential()
model.add(keras.layers.Input(shape, name="InputLayer"))
model.add(keras.layers.Dense(64, activation='relu', name='Dense_n1'))
model.add(keras.layers.Dense(64, activation='relu', name='Dense_n2'))
model.add(keras.layers.Dense(1, name='Output'))
model.compile(optimizer = 'rmsprop',
loss = 'mse',
metrics = ['mae', 'mse'] )
return model
In [9]:
model=get_model_v1( (13,) )
model.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩ │ Dense_n1 (Dense) │ (None, 64) │ 896 │ ├─────────────────────────────────┼───────────────────────────┼────────────┤ │ Dense_n2 (Dense) │ (None, 64) │ 4,160 │ ├─────────────────────────────────┼───────────────────────────┼────────────┤ │ Output (Dense) │ (None, 1) │ 65 │ └─────────────────────────────────┴───────────────────────────┴────────────┘
Total params: 5,121 (20.00 KB)
Trainable params: 5,121 (20.00 KB)
Non-trainable params: 0 (0.00 B)
5.2 - Add callback¶
In [10]:
os.makedirs(f'{run_dir}/models', mode=0o750, exist_ok=True)
save_dir = f'{run_dir}/models/best_model.keras'
savemodel_callback = keras.callbacks.ModelCheckpoint( filepath=save_dir, monitor='val_mae', mode='max', save_best_only=True)
5.3 - Train it¶
In [11]:
history = model.fit(x_train,
y_train,
epochs = 50,
batch_size = 10,
verbose = fit_verbosity,
validation_data = (x_test, y_test),
callbacks = [savemodel_callback])
Epoch 1/50 36/36 - 1s - 17ms/step - loss: 493.6011 - mae: 20.2427 - mse: 493.6011 - val_loss: 341.0601 - val_mae: 16.7986 - val_mse: 341.0601 Epoch 2/50 36/36 - 0s - 7ms/step - loss: 266.0305 - mae: 14.0415 - mse: 266.0305 - val_loss: 123.4517 - val_mae: 9.2439 - val_mse: 123.4517 Epoch 3/50 36/36 - 0s - 7ms/step - loss: 89.1800 - mae: 7.3974 - mse: 89.1800 - val_loss: 45.0105 - val_mae: 5.0383 - val_mse: 45.0105 Epoch 4/50 36/36 - 0s - 7ms/step - loss: 41.5153 - mae: 4.7728 - mse: 41.5153 - val_loss: 27.8717 - val_mae: 3.7812 - val_mse: 27.8717 Epoch 5/50 36/36 - 0s - 7ms/step - loss: 28.2129 - mae: 3.8822 - mse: 28.2129 - val_loss: 23.9261 - val_mae: 3.4637 - val_mse: 23.9261 Epoch 6/50 36/36 - 0s - 7ms/step - loss: 23.7061 - mae: 3.5151 - mse: 23.7061 - val_loss: 19.8429 - val_mae: 3.1191 - val_mse: 19.8429 Epoch 7/50 36/36 - 0s - 7ms/step - loss: 20.6968 - mae: 3.2075 - mse: 20.6968 - val_loss: 18.0988 - val_mae: 3.0355 - val_mse: 18.0988 Epoch 8/50 36/36 - 0s - 7ms/step - loss: 18.8830 - mae: 3.0206 - mse: 18.8830 - val_loss: 17.4124 - val_mae: 2.9270 - val_mse: 17.4124 Epoch 9/50 36/36 - 0s - 7ms/step - loss: 17.6313 - mae: 2.8941 - mse: 17.6313 - val_loss: 18.5794 - val_mae: 3.2609 - val_mse: 18.5794 Epoch 10/50 36/36 - 0s - 7ms/step - loss: 16.4499 - mae: 2.8509 - mse: 16.4499 - val_loss: 14.5376 - val_mae: 2.7345 - val_mse: 14.5376 Epoch 11/50 36/36 - 0s - 7ms/step - loss: 15.8920 - mae: 2.7311 - mse: 15.8920 - val_loss: 16.0627 - val_mae: 2.9593 - val_mse: 16.0627 Epoch 12/50 36/36 - 0s - 7ms/step - loss: 15.1865 - mae: 2.6388 - mse: 15.1865 - val_loss: 13.1932 - val_mae: 2.6353 - val_mse: 13.1932 Epoch 13/50 36/36 - 0s - 7ms/step - loss: 14.3766 - mae: 2.5535 - mse: 14.3766 - val_loss: 12.8520 - val_mae: 2.6333 - val_mse: 12.8520 Epoch 14/50 36/36 - 0s - 7ms/step - loss: 15.4640 - mae: 2.6057 - mse: 15.4640 - val_loss: 14.8078 - val_mae: 2.9964 - val_mse: 14.8078 Epoch 15/50 36/36 - 0s - 7ms/step - loss: 13.9767 - mae: 2.5368 - mse: 13.9767 - val_loss: 12.1847 - val_mae: 2.5825 - val_mse: 12.1847 Epoch 16/50 36/36 - 0s - 7ms/step - loss: 13.1856 - mae: 2.4713 - mse: 13.1856 - val_loss: 12.9782 - val_mae: 2.6933 - val_mse: 12.9782 Epoch 17/50 36/36 - 0s - 7ms/step - loss: 12.6747 - mae: 2.3993 - mse: 12.6747 - val_loss: 11.6720 - val_mae: 2.5430 - val_mse: 11.6720 Epoch 18/50 36/36 - 0s - 7ms/step - loss: 12.5394 - mae: 2.3968 - mse: 12.5394 - val_loss: 11.6363 - val_mae: 2.5255 - val_mse: 11.6363 Epoch 19/50 36/36 - 0s - 7ms/step - loss: 12.4856 - mae: 2.3898 - mse: 12.4856 - val_loss: 11.0329 - val_mae: 2.4791 - val_mse: 11.0329 Epoch 20/50 36/36 - 0s - 7ms/step - loss: 12.0378 - mae: 2.3448 - mse: 12.0378 - val_loss: 11.2594 - val_mae: 2.5047 - val_mse: 11.2594 Epoch 21/50 36/36 - 0s - 7ms/step - loss: 11.6567 - mae: 2.2828 - mse: 11.6567 - val_loss: 11.2495 - val_mae: 2.5307 - val_mse: 11.2495 Epoch 22/50 36/36 - 0s - 7ms/step - loss: 11.6504 - mae: 2.3002 - mse: 11.6504 - val_loss: 10.8859 - val_mae: 2.4696 - val_mse: 10.8859 Epoch 23/50 36/36 - 0s - 7ms/step - loss: 11.5240 - mae: 2.3100 - mse: 11.5240 - val_loss: 10.8954 - val_mae: 2.4972 - val_mse: 10.8954 Epoch 24/50 36/36 - 0s - 7ms/step - loss: 11.2201 - mae: 2.2651 - mse: 11.2201 - val_loss: 12.8221 - val_mae: 2.7870 - val_mse: 12.8221 Epoch 25/50 36/36 - 0s - 7ms/step - loss: 11.0176 - mae: 2.2470 - mse: 11.0176 - val_loss: 11.8832 - val_mae: 2.6617 - val_mse: 11.8832 Epoch 26/50 36/36 - 0s - 7ms/step - loss: 10.9160 - mae: 2.2569 - mse: 10.9160 - val_loss: 10.9789 - val_mae: 2.4548 - val_mse: 10.9789 Epoch 27/50 36/36 - 0s - 7ms/step - loss: 10.4882 - mae: 2.1815 - mse: 10.4882 - val_loss: 10.4530 - val_mae: 2.4472 - val_mse: 10.4530 Epoch 28/50 36/36 - 0s - 7ms/step - loss: 11.1855 - mae: 2.2374 - mse: 11.1855 - val_loss: 10.5666 - val_mae: 2.4572 - val_mse: 10.5666 Epoch 29/50 36/36 - 0s - 7ms/step - loss: 10.2158 - mae: 2.1716 - mse: 10.2158 - val_loss: 10.7902 - val_mae: 2.5123 - val_mse: 10.7902 Epoch 30/50 36/36 - 0s - 7ms/step - loss: 10.2155 - mae: 2.1681 - mse: 10.2155 - val_loss: 10.2773 - val_mae: 2.4195 - val_mse: 10.2773 Epoch 31/50 36/36 - 0s - 7ms/step - loss: 9.6463 - mae: 2.0780 - mse: 9.6463 - val_loss: 10.0762 - val_mae: 2.3595 - val_mse: 10.0762 Epoch 32/50 36/36 - 0s - 7ms/step - loss: 9.8593 - mae: 2.1474 - mse: 9.8593 - val_loss: 10.5630 - val_mae: 2.4590 - val_mse: 10.5630 Epoch 33/50 36/36 - 0s - 7ms/step - loss: 9.6232 - mae: 2.0777 - mse: 9.6232 - val_loss: 11.5181 - val_mae: 2.6591 - val_mse: 11.5181 Epoch 34/50 36/36 - 0s - 7ms/step - loss: 9.7686 - mae: 2.1237 - mse: 9.7686 - val_loss: 10.5678 - val_mae: 2.4996 - val_mse: 10.5678 Epoch 35/50 36/36 - 0s - 7ms/step - loss: 9.3478 - mae: 2.0382 - mse: 9.3478 - val_loss: 9.4186 - val_mae: 2.2815 - val_mse: 9.4186 Epoch 36/50 36/36 - 0s - 7ms/step - loss: 9.3224 - mae: 2.0297 - mse: 9.3224 - val_loss: 9.2144 - val_mae: 2.2636 - val_mse: 9.2144 Epoch 37/50 36/36 - 0s - 7ms/step - loss: 9.1203 - mae: 2.0386 - mse: 9.1203 - val_loss: 9.4299 - val_mae: 2.2643 - val_mse: 9.4299 Epoch 38/50 36/36 - 0s - 7ms/step - loss: 9.0384 - mae: 2.0354 - mse: 9.0384 - val_loss: 9.5968 - val_mae: 2.2516 - val_mse: 9.5968 Epoch 39/50 36/36 - 0s - 7ms/step - loss: 8.8424 - mae: 1.9986 - mse: 8.8424 - val_loss: 9.1567 - val_mae: 2.2511 - val_mse: 9.1567 Epoch 40/50 36/36 - 0s - 7ms/step - loss: 8.5851 - mae: 1.9638 - mse: 8.5851 - val_loss: 10.4007 - val_mae: 2.4222 - val_mse: 10.4007 Epoch 41/50 36/36 - 0s - 7ms/step - loss: 8.9099 - mae: 1.9884 - mse: 8.9099 - val_loss: 8.9467 - val_mae: 2.2494 - val_mse: 8.9467 Epoch 42/50 36/36 - 0s - 7ms/step - loss: 8.4496 - mae: 1.9683 - mse: 8.4496 - val_loss: 8.8751 - val_mae: 2.2414 - val_mse: 8.8751 Epoch 43/50 36/36 - 0s - 7ms/step - loss: 8.7206 - mae: 1.9570 - mse: 8.7206 - val_loss: 9.1273 - val_mae: 2.2022 - val_mse: 9.1273 Epoch 44/50 36/36 - 0s - 7ms/step - loss: 8.1297 - mae: 1.8946 - mse: 8.1297 - val_loss: 10.8188 - val_mae: 2.6183 - val_mse: 10.8188 Epoch 45/50 36/36 - 0s - 7ms/step - loss: 8.0518 - mae: 1.8567 - mse: 8.0518 - val_loss: 8.5146 - val_mae: 2.1888 - val_mse: 8.5146 Epoch 46/50 36/36 - 0s - 7ms/step - loss: 8.0243 - mae: 1.9089 - mse: 8.0243 - val_loss: 9.1647 - val_mae: 2.2703 - val_mse: 9.1647 Epoch 47/50 36/36 - 0s - 7ms/step - loss: 7.9399 - mae: 1.8959 - mse: 7.9399 - val_loss: 9.4159 - val_mae: 2.2678 - val_mse: 9.4159 Epoch 48/50 36/36 - 0s - 7ms/step - loss: 8.0311 - mae: 1.9282 - mse: 8.0311 - val_loss: 9.3263 - val_mae: 2.3064 - val_mse: 9.3263 Epoch 49/50 36/36 - 0s - 7ms/step - loss: 7.5075 - mae: 1.8373 - mse: 7.5075 - val_loss: 8.7493 - val_mae: 2.1762 - val_mse: 8.7493 Epoch 50/50 36/36 - 0s - 7ms/step - loss: 7.6569 - mae: 1.8810 - mse: 7.6569 - val_loss: 9.3255 - val_mae: 2.2860 - val_mse: 9.3255
In [12]:
score = model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
x_test / loss : 9.6799 x_test / mae : 2.3068 x_test / mse : 9.6799
6.2 - Training history¶
What was the best result during our training ?
In [13]:
print("min( val_mae ) : {:.4f}".format( min(history.history["val_mae"]) ) )
min( val_mae ) : 2.1762
In [14]:
fidle.scrawler.history( history, plot={'MSE' :['mse', 'val_mse'],
'MAE' :['mae', 'val_mae'],
'LOSS':['loss','val_loss']}, save_as='01-history')
Saved: ./run/K3BHPD2/figs/01-history_0
Saved: ./run/K3BHPD2/figs/01-history_1
Saved: ./run/K3BHPD2/figs/01-history_2
Step 7 - Restore a model :¶
7.1 - Reload model¶
In [15]:
loaded_model = keras.models.load_model(f'{run_dir}/models/best_model.keras')
loaded_model.summary()
print("Loaded.")
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩ │ Dense_n1 (Dense) │ (None, 64) │ 896 │ ├─────────────────────────────────┼───────────────────────────┼────────────┤ │ Dense_n2 (Dense) │ (None, 64) │ 4,160 │ ├─────────────────────────────────┼───────────────────────────┼────────────┤ │ Output (Dense) │ (None, 1) │ 65 │ └─────────────────────────────────┴───────────────────────────┴────────────┘
Total params: 10,244 (40.02 KB)
Trainable params: 5,121 (20.00 KB)
Non-trainable params: 0 (0.00 B)
Optimizer params: 5,123 (20.01 KB)
Loaded.
7.2 - Evaluate it :¶
In [16]:
score = loaded_model.evaluate(x_test, y_test, verbose=0)
print('x_test / loss : {:5.4f}'.format(score[0]))
print('x_test / mae : {:5.4f}'.format(score[1]))
print('x_test / mse : {:5.4f}'.format(score[2]))
x_test / loss : 344.4955 x_test / mae : 16.8026 x_test / mse : 344.4955
7.3 - Make a prediction¶
In [17]:
my_data = [ 1.26425925, -0.48522739, 1.0436489 , -0.23112788, 1.37120745,
-2.14308942, 1.13489104, -1.06802005, 1.71189006, 1.57042287,
0.77859951, 0.14769795, 2.7585581 ]
real_price = 10.4
my_data=np.array(my_data).reshape(1,13)
In [18]:
predictions = loaded_model.predict( my_data, verbose=fit_verbosity )
print("Prediction : {:.2f} K$ Reality : {:.2f} K$".format(predictions[0][0], real_price))
1/1 - 0s - 2ms/step Prediction : 6.16 K$ Reality : 10.40 K$
In [19]:
fidle.end()
End time : 03/03/24 21:04:10
Duration : 00:00:15 465ms
This notebook ends here :-)
https://fidle.cnrs.fr