
Credit risk assessment is a critical task in the finance industry, where banks and financial institutions evaluate the risk of lending money to individuals or businesses. Traditionally, credit risk assessment has relied on statistical models and scoring systems to predict the likelihood of defaults or delinquencies. With advances in deep learning and artificial intelligence, however, there is an opportunity to move beyond traditional scoring models: deep learning models can analyze complex patterns and relationships in large datasets, providing more accurate and reliable predictions of credit risk. In this tutorial, we will explore how to leverage deep learning techniques to build a credit risk assessment model in Python. We will use the Keras library to create a neural network that evaluates credit risk based on a set of borrower features. By the end of this tutorial, you will have a solid understanding of how to apply deep learning in the finance industry to improve credit risk assessment processes.

Table of Contents

- Setting up the Environment
- Data Preprocessing
- Building the Model
- Training the Model
- Evaluation and Fine-Tuning
- Comparison with Traditional Models
- Conclusion

Stay tuned as we dive into the world of deep learning for credit risk assessment, where traditional scoring models meet cutting-edge technology. Let's get started!

# 2. Setting up the Environment

Installing necessary libraries and setting up the Python environment for deep learning. The following code segment demonstrates how to set up your Python environment for deep learning in credit risk assessment. It includes installing essential libraries, importing required modules, and configuring the environment for consistency and reproducibility.
# Install necessary libraries (uncomment if needed)
# import subprocess
# subprocess.call(['pip', 'install', 'yfinance'])

# Importing required libraries
import yfinance as yf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Setting up the Python environment: fix the random seed for reproducibility
# and pick a consistent plotting style
np.random.seed(42)
plt.style.use('ggplot')

print("Environment setup completed.")

Environment setup completed.

By executing these steps, you ensure that your environment is equipped with the essential tools and configurations needed to proceed
with deep learning for credit risk assessment. Let’s move on to the next section to preprocess the data for model training.
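Note that the pip install line in the snippet above is commented out. If you are starting from a clean environment, the sketch below shows one way to install the packages this tutorial imports later on; the exact package list is an assumption inferred from those imports, so adjust it to your setup.

# Hypothetical one-time environment setup: install the packages used later in
# this tutorial. The package list is an assumption; adjust it as needed.
import subprocess
import sys

packages = ['tensorflow', 'scikit-learn', 'pandas', 'matplotlib', 'yfinance', 'joblib']
subprocess.check_call([sys.executable, '-m', 'pip', 'install', *packages])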

# 3. Data Preprocessing: Cleaning and preparing the credit risk dataset for model training

In this section, we will focus on cleaning and preparing the credit risk dataset for training our deep learning model. It is crucial to preprocess the data effectively to ensure the model's accuracy and performance.

# Importing necessary libraries
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Create a DataFrame with sample data to continue the script execution
data = {
    'Income': [50000, 60000, 70000],
    'Credit_Score': [600, 700, 750],
    'Education': ['Graduate', 'High School', 'Graduate'],
    'Marital_Status': ['Single', 'Married', 'Single'],
    'Risk': [0, 1, 0]
}
credit_data = pd.DataFrame(data)

# Drop index and identifier columns if they are present
if 'Unnamed: 0' in credit_data.columns:
    credit_data = credit_data.drop(['Unnamed: 0'], axis=1)
if 'Customer_ID' in credit_data.columns:
    credit_data = credit_data.drop(['Customer_ID'], axis=1)

# Handling missing values
credit_data['Income'] = credit_data['Income'].fillna(credit_data['Income'].mean())
credit_data['Credit_Score'] = credit_data['Credit_Score'].fillna(credit_data['Credit_Score'].median())

# Encoding categorical variables
credit_data = pd.get_dummies(credit_data, columns=['Education', 'Marital_Status'], drop_first=True)

# Splitting the data into features and target variable
X = credit_data.drop('Risk', axis=1)
y = credit_data['Risk']

# Normalizing the features
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)

# Save the preprocessed data for the following sections
np.save('X.npy', X_scaled)
np.save('y.npy', y)

By executing this code segment, we effectively clean and preprocess the credit risk dataset: handling missing values, encoding categorical variables, splitting the data into features and a target variable, and normalizing the features. This preparation is essential before training the deep learning model for credit risk assessment.
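A practical point worth adding: the MinMaxScaler fitted here defines the scaling applied to the training features, and the very same fitted scaler must be reused when scoring new applicants later. A minimal sketch of persisting it, assuming joblib is installed and using the hypothetical file name scaler.pkl:

# Persist the fitted scaler so new applicant data can be transformed consistently
# at prediction time. The file name 'scaler.pkl' is a hypothetical choice.
import joblib

joblib.dump(scaler, 'scaler.pkl')

# Later, at scoring time (columns must be in the same order as X):
# scaler = joblib.load('scaler.pkl')
# new_applicants_scaled = scaler.transform(new_applicants)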

# 4. Building the Model

## 4.1. Importing necessary libraries

To build a deep learning model for credit risk assessment, we first need to import the necessary libraries and modules, including numpy, pandas, train_test_split from sklearn.model_selection, Sequential from keras.models, and Dense from keras.layers.
# Importing necessary libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense

2024-04-23 21:44:05.224708: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
## 4.2. Load preprocessed data

Next, we load the preprocessed data that we cleaned and prepared in the previous section. The data is stored in the X.npy and y.npy files, which contain the features and target variable, respectively.
# Load preprocessed data
X = np.load('X.npy')
y = np.load('y.npy')

## 4.3. Splitting the data into training and testing sets

Before building the model, it's essential to split the data into training and testing sets. We will use 80% of the data for training and 20% for testing to evaluate the model's performance.
# Splitting the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
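On a real credit portfolio, defaulting borrowers are usually a small minority, so it is common to stratify the split so that both sets keep the same class ratio. A hedged variant of the call above is sketched below; note that it would fail on the three-row sample dataset used here, because one class has only a single example, so treat it as applying to a realistically sized dataset.

# Stratified split: preserves the proportion of risky vs. non-risky borrowers
# in both the training and testing sets (requires enough examples per class).
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)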

## 4.4. Building the deep learning model

Now we create a deep learning model using the Sequential API from Keras. We start by initializing a Sequential model and adding Dense layers with specified units and activation functions.

# Building the deep learning model
model = Sequential()
model.add(Dense(64, input_shape=(X_train.shape[1],), activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

/Users/abdelkarimabdallah/anaconda3/lib/python3.11/site-packages/keras/src/layers/core/dense.py:88: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
  super().__init__(activity_regularizer=activity_regularizer, **kwargs)
## 4.5. Compiling the model

After creating the model architecture, we compile it by specifying the optimizer, loss function, and metrics to monitor during training. Here, we use the Adam optimizer and binary cross-entropy loss.
# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
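Accuracy alone can be misleading in credit scoring, where defaults are typically rare; a ranking metric such as AUC is often tracked as well. A hedged alternative to the compile call above, assuming the standard keras.metrics.AUC metric available in recent Keras versions:

# Optional variant of the compile step: also monitor AUC, which is usually more
# informative than accuracy on imbalanced credit data.
from keras.metrics import AUC

model.compile(
    optimizer='adam',
    loss='binary_crossentropy',
    metrics=['accuracy', AUC(name='auc')]
)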

## 4.6. Training the model

We train the model on the training data for a specified number of epochs and batch size. Additionally, we provide the validation data to evaluate the model's performance on the test set during training.
# Train the model
history = model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_test, y_test))

Epoch 1/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 2s 2s/step - accuracy: 0.5000 - loss: 0.6661 - val_accuracy: 1.0000 - val_loss: 0.6466
Epoch 2/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 57ms/step - accuracy: 0.5000 - loss: 0.6539 - val_accuracy: 1.0000 - val_loss: 0.6396
Epoch 3/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 67ms/step - accuracy: 1.0000 - loss: 0.6421 - val_accuracy: 1.0000 - val_loss: 0.6327
...
Epoch 50/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 68ms/step - accuracy: 1.0000 - loss: 0.2417 - val_accuracy: 1.0000 - val_loss: 0.4585
## 4.7. Saving the model

Once the model is trained, we save it to a file for future use or deployment in credit risk assessment applications.
# Save the model
model.save('credit_risk_model.h5')

WARNING:absl:You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
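The warning above comes from Keras itself: the HDF5 format still works but is treated as legacy. A minimal sketch of saving and reloading the model in the recommended native format instead, using the hypothetical file name credit_risk_model.keras:

# Save in the native Keras format suggested by the warning, then reload it.
# The file name 'credit_risk_model.keras' is a hypothetical choice.
from keras.models import load_model

model.save('credit_risk_model.keras')
restored_model = load_model('credit_risk_model.keras')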
## 4.8. Plotting training and validation metrics

We plot the training and validation accuracy and loss values to visualize the model's performance during training and ensure it's learning effectively.
# Plot training & validation accuracy values
import matplotlib.pyplot as plt

plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')

# Plot training & validation loss values
plt.figure()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')

<matplotlib.legend.Legend at 0x13f08d7d0>
By following these steps, we successfully build a deep learning model for credit risk assessment using Keras, train it, save it, and visualize its training progress. This model can now be used to predict credit risk from the provided features.
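To illustrate how the trained network would be used for scoring, here is a minimal sketch of predicting risk classes for the held-out test rows; the 0.5 decision threshold is an assumption and would normally be tuned to the lender's risk appetite.

# The sigmoid output is a probability of the 'Risk' class; convert it to a 0/1
# decision with an assumed 0.5 threshold.
probabilities = model.predict(X_test)
predictions = (probabilities > 0.5).astype(int)

for prob, pred in zip(probabilities.ravel(), predictions.ravel()):
    print(f"Predicted default probability: {prob:.3f} -> class {pred}")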
# 5. Training the Model

Training the deep learning model on the preprocessed credit risk dataset.
# Training the deep learning model on the preprocessed credit risk dataset
# (note: this continues training the model already fitted in Section 4)
model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_test, y_test))

# Save the trained model
model.save('trained_credit_risk_model.h5')

Epoch 1/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 157ms/step - accuracy: 1.0000 - loss: 0.2348 - val_accuracy: 1.0000 - val_loss: 0.4541
Epoch 2/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 58ms/step - accuracy: 1.0000 - loss: 0.2280 - val_accuracy: 1.0000 - val_loss: 0.4496
Epoch 3/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 57ms/step - accuracy: 1.0000 - loss: 0.2213 - val_accuracy: 1.0000 - val_loss: 0.4449
...
Epoch 50/50
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 68ms/step - accuracy: 1.0000 - loss: 0.0450 - val_accuracy: 1.0000 - val_loss: 0.2470
WARNING:absl:You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`.
Following these steps, we train the deep learning model on the preprocessed credit risk dataset, specifying the number of epochs and batch size for
training. Additionally, we save the trained model for future use in credit risk assessment applications.
# 6. Evaluation and Fine-Tuning

To ensure the effectiveness of our deep learning model for credit risk assessment, we need to evaluate its performance on the test set and make necessary adjustments through fine-tuning.
# Evaluate the model on the test set
eval_results = model.evaluate(X_test, y_test)

# Print the evaluation results
print("\nEvaluation Results:")
print(f"Loss: {eval_results[0]}")
print(f"Accuracy: {eval_results[1]}")

# Fine-tuning the model: increase the number of epochs and observe performance
history_fine_tuned = model.fit(X_train, y_train, epochs=100, batch_size=32, validation_data=(X_test, y_test))

# Plot training & validation accuracy values for the fine-tuned model
plt.figure()
plt.plot(history_fine_tuned.history['accuracy'])
plt.plot(history_fine_tuned.history['val_accuracy'])
plt.title('Fine-Tuned Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')

# Plot training & validation loss values for the fine-tuned model
plt.figure()
plt.plot(history_fine_tuned.history['loss'])
plt.plot(history_fine_tuned.history['val_loss'])
plt.title('Fine-Tuned Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')

1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 34ms/step - accuracy: 1.0000 - loss: 0.2470

Evaluation Results:
Loss: 0.24697278439998627
Accuracy: 1.0
Epoch 1/100
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 126ms/step - accuracy: 1.0000 - loss: 0.0436 - val_accuracy: 1.0000 - val_loss: 0.2443
Epoch 2/100
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 74ms/step - accuracy: 1.0000 - loss: 0.0422 - val_accuracy: 1.0000 - val_loss: 0.2416
Epoch 3/100
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 65ms/step - accuracy: 1.0000 - loss: 0.0410 - val_accuracy: 1.0000 - val_loss: 0.2390
...
Epoch 100/100
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 96ms/step - accuracy: 1.0000 - loss: 0.0059 - val_accuracy: 1.0000 - val_loss: 0.1102
<matplotlib.legend.Legend at 0x13f657150>
By evaluating the model on the test set, printing the evaluation results, and fine-tuning the model with an increased number of epochs, we can observe and optimize the performance of our deep learning model for credit risk assessment. The plotted accuracy and loss values provide insight into the effectiveness of the fine-tuned model. Keep in mind that the perfect accuracy shown here is an artifact of the three-row sample dataset; on a realistic credit portfolio you should expect lower scores and watch for overfitting as the epoch count grows.
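Simply adding epochs is the bluntest form of fine-tuning. On real data, a common refinement is to let the validation loss decide when to stop; the sketch below uses Keras's standard EarlyStopping callback, which is not part of the original run, and its hyperparameters are assumptions.

# Fine-tuning variant: stop training automatically when validation loss stops
# improving, and keep the best weights seen so far.
from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor='val_loss',      # watch validation loss
    patience=10,             # stop after 10 epochs without improvement
    restore_best_weights=True
)

history_early = model.fit(
    X_train, y_train,
    epochs=200, batch_size=32,
    validation_data=(X_test, y_test),
    callbacks=[early_stop],
    verbose=0
)
print(f"Stopped after {len(history_early.history['loss'])} epochs")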
# 7. Comparison with Traditional Models

Evaluating the Logistic Regression model
# Importing necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import joblib
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix

# Load the preprocessed data
X = np.load('X.npy')
y = np.load('y.npy')

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Build and train the Logistic Regression model
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)

# Make predictions on the test set
y_pred = log_reg.predict(X_test)

# Evaluate the Logistic Regression model
accuracy = accuracy_score(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)

# Print the evaluation results
print("\nLogistic Regression Model Evaluation:")
print(f"Accuracy: {accuracy}")
print("Confusion Matrix:")
print(conf_matrix)

# Save the Logistic Regression model
joblib.dump(log_reg, 'logistic_regression_model.pkl')

Logistic Regression Model Evaluation:
Accuracy: 1.0
Confusion Matrix:
[[1]]
/Users/abdelkarimabdallah/anaconda3/lib/python3.11/site-packages/sklearn/metrics/_classification.py:386: UserWarning: A single label was found in 'y_true' and 'y_pred'. For the confusion matrix to have the correct shape, use the 'labels' parameter to pass all known labels.
  warnings.warn(
['logistic_regression_model.pkl']

In this section, we contrast the deep learning approach with a traditional credit scoring model by evaluating a Logistic Regression classifier on the same preprocessed credit risk dataset. Training the Logistic Regression model, making predictions on the test set, and examining its accuracy and confusion matrix shows how a traditional model fares compared to the deep learning model. (The 1x1 confusion matrix and the accompanying scikit-learn warning are artifacts of the three-row sample dataset, whose test split contains a single example with a single label; on a real dataset the matrix would be 2x2.) This comparison sets the stage for further analysis and discussion of the effectiveness of deep learning models in credit risk assessment applications.
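To make the comparison concrete, the sketch below scores both saved models on the same held-out data; it assumes the files saved earlier in this tutorial (trained_credit_risk_model.h5 and logistic_regression_model.pkl) are in the working directory.

# Load both models and compare their accuracy on the same held-out data.
# File names match the ones saved earlier in this tutorial.
import numpy as np
import joblib
from keras.models import load_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = np.load('X.npy')
y = np.load('y.npy')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

deep_model = load_model('trained_credit_risk_model.h5')
log_reg = joblib.load('logistic_regression_model.pkl')

deep_preds = (deep_model.predict(X_test) > 0.5).astype(int).ravel()
lr_preds = log_reg.predict(X_test)

print(f"Deep learning accuracy:       {accuracy_score(y_test, deep_preds):.3f}")
print(f"Logistic regression accuracy: {accuracy_score(y_test, lr_preds):.3f}")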
