
Exploring Neural Networks

ANNs in Python

Next, we will see how to implement ANNs (Artificial Neural Networks) in Python. To do so, we will use the keras library on top of tensorflow, which is the most common setup.
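Keras ships as part of TensorFlow, so a single package provides both. As a quick, illustrative sanity check of the environment (assuming TensorFlow has already been installed, for example with pip install tensorflow):

import tensorflow as tf

# Print the installed TensorFlow version to confirm the environment is ready
print(tf.__version__)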

Classification of tabular datasets

We are going to use the Pima Indians Diabetes dataset. This is a standard Machine Learning dataset from the UCI Machine Learning repository. It describes the medical records of Pima Indian patients and whether they had an onset of diabetes within five years.

Step 1. Reading the dataset

In [1]:
import pandas as pd
from sklearn.model_selection import train_test_split

total_data = pd.read_csv("https://raw.githubusercontent.com/4GeeksAcademy/machine-learning-content/master/assets/clean-pima-indians-diabetes.csv")

# Column "8" is the target (diabetes onset); the rest are the predictor variables
X = total_data.drop("8", axis = 1)
y = total_data["8"]

# Hold out 20% of the rows as a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)

X_train.head()
Out[1]:
             0         1         2         3         4         5         6         7
60   -0.547919 -1.154694 -3.572597 -1.288212 -0.692891 -4.060474 -0.507006 -1.041549
618   1.530847 -0.278373  0.666618  0.217261 -0.692891 -0.481351  2.446670  1.425995
346  -0.844885  0.566649 -1.194501 -0.096379  0.027790 -0.417892  0.550035 -0.956462
294  -1.141852  1.255187 -0.987710 -1.288212 -0.692891 -1.280942 -0.658012  2.702312
231   0.639947  0.410164  0.563223  1.032726  2.519781  1.803195 -0.706334  1.085644

We will use the train set to train the model, while the test set will be used to evaluate it and measure its effectiveness. In addition, it is generally good practice to normalize the data before training an artificial neural network (ANN). Two types of scaling are commonly applied: to the 0-1 range or to the -1-1 range.
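As an illustration only (the CSV loaded above already comes pre-scaled), a 0-to-1 scaling could be done with scikit-learn's MinMaxScaler; the names X_train_scaled and X_test_scaled are introduced here just for this sketch:

from sklearn.preprocessing import MinMaxScaler

# Fit the scaler on the training split only and apply the same transformation
# to the test split, so that no information from the test set leaks into training
scaler = MinMaxScaler(feature_range = (0, 1))
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)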

Step 2: Initialization and training of the model

Models in Keras are defined as a sequence of layers. We create a sequential model and add layers one by one until we are satisfied with our network architecture.

The input layer will always have as many neurons as there are predictor variables. In this case we have a total of 8 (from 0 to 7). Next, we add two hidden layers, one with 12 neurons and another with 8. Finally, the fourth layer, the output layer, will have a single neuron, since the problem is binary. If it had n classes, the network would have n outputs.

NOTE: We have created a default network with an arbitrary number of hidden layers and an arbitrary number of neurons in each of them. This is usually how one starts, and a hyperparameter optimization is carried out afterwards.
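As a rough, self-contained sketch of what such a search could look like afterwards (the loop over candidate widths and the names n_hidden, candidate, X_tr and X_val are illustrative, not part of the lesson):

from sklearn.model_selection import train_test_split
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# Carve a validation split out of the training data for the comparison
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size = 0.2, random_state = 42)

# Try a few widths for the first hidden layer and compare validation accuracy
for n_hidden in (8, 12, 16):
    candidate = Sequential()
    candidate.add(Dense(n_hidden, input_shape = (8,), activation = "relu"))
    candidate.add(Dense(8, activation = "relu"))
    candidate.add(Dense(1, activation = "sigmoid"))
    candidate.compile(loss = "binary_crossentropy", optimizer = "adam", metrics = ["accuracy"])
    candidate.fit(X_tr, y_tr, epochs = 50, batch_size = 10, verbose = 0)
    _, val_acc = candidate.evaluate(X_val, y_val, verbose = 0)
    print(n_hidden, val_acc)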

In [2]:
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.utils import set_random_seed

# Fix the seed so that the random weight initialization is reproducible
set_random_seed(42)

model = Sequential()
model.add(Dense(12, input_shape = (8,), activation = "relu"))  # first hidden layer (expects 8 inputs)
model.add(Dense(8, activation = "relu"))                       # second hidden layer
model.add(Dense(1, activation = "sigmoid"))                    # output layer for the binary problem

Next, once the model is defined, we can compile it. The backend automatically chooses the best way to represent the network for training and for making predictions on your hardware, such as a CPU, a GPU, or even a distributed setup.

When compiling, we must specify some additional properties required for training the network. Remember that training a network means finding the best set of weights to map inputs to outputs in our dataset.

In [3]:
model.compile(loss = "binary_crossentropy", optimizer = "adam", metrics = ["accuracy"])
model
Out[3]:
<keras.src.engine.sequential.Sequential at 0x7f8a9d74d5d0>

We will use the optimizer known as adam. It is a popular variant of gradient descent because it tunes itself automatically and gives good results across a wide range of problems. We will also collect and report the classification accuracy, defined through the metrics argument.

Training happens in epochs, and each epoch is split into batches.

  • Epoch: one pass over all the rows of the training dataset.
  • Batch: one or more samples considered by the model within an epoch before the weights are updated.

The training process will run for a fixed number of iterations, the epochs. We must also set how many rows of the dataset are considered before the model weights are updated within each epoch; this is called the batch size and is set through the batch_size argument.

For this problem we will run a small number of epochs (150) and use a relatively small batch size of 10. The 62/62 shown in the training log below is simply the number of batches per epoch: the number of training rows divided by the batch size of 10, rounded up:

In [4]:
# Fit the keras model on the dataset
model.fit(X_train, y_train, epochs = 150, batch_size = 10)
Epoch 1/150
62/62 [==============================] - 1s 884us/step - loss: 0.7338 - accuracy: 0.4756
Epoch 2/150
62/62 [==============================] - 0s 766us/step - loss: 0.6790 - accuracy: 0.6173
Epoch 3/150
62/62 [==============================] - 0s 662us/step - loss: 0.6336 - accuracy: 0.6873
Epoch 4/150
62/62 [==============================] - 0s 664us/step - loss: 0.5910 - accuracy: 0.7296
Epoch 5/150
62/62 [==============================] - 0s 659us/step - loss: 0.5552 - accuracy: 0.7508
Epoch 6/150
62/62 [==============================] - 0s 669us/step - loss: 0.5276 - accuracy: 0.7541
Epoch 7/150
62/62 [==============================] - 0s 668us/step - loss: 0.5077 - accuracy: 0.7606
Epoch 8/150
62/62 [==============================] - 0s 666us/step - loss: 0.4935 - accuracy: 0.7573
Epoch 9/150
62/62 [==============================] - 0s 679us/step - loss: 0.4827 - accuracy: 0.7655
Epoch 10/150
62/62 [==============================] - 0s 745us/step - loss: 0.4748 - accuracy: 0.7720
Epoch 11/150
62/62 [==============================] - 0s 732us/step - loss: 0.4691 - accuracy: 0.7720
Epoch 12/150
62/62 [==============================] - 0s 718us/step - loss: 0.4661 - accuracy: 0.7720
Epoch 13/150
62/62 [==============================] - 0s 712us/step - loss: 0.4625 - accuracy: 0.7720
Epoch 14/150
62/62 [==============================] - 0s 717us/step - loss: 0.4587 - accuracy: 0.7785
Epoch 15/150
62/62 [==============================] - 0s 710us/step - loss: 0.4566 - accuracy: 0.7769
Epoch 16/150
62/62 [==============================] - 0s 714us/step - loss: 0.4539 - accuracy: 0.7801
Epoch 17/150
62/62 [==============================] - 0s 723us/step - loss: 0.4525 - accuracy: 0.7818
Epoch 18/150
62/62 [==============================] - 0s 725us/step - loss: 0.4507 - accuracy: 0.7818
Epoch 19/150
62/62 [==============================] - 0s 703us/step - loss: 0.4485 - accuracy: 0.7834
Epoch 20/150
62/62 [==============================] - 0s 674us/step - loss: 0.4469 - accuracy: 0.7850
Epoch 21/150
62/62 [==============================] - 0s 693us/step - loss: 0.4443 - accuracy: 0.7866
Epoch 22/150
62/62 [==============================] - 0s 744us/step - loss: 0.4428 - accuracy: 0.7866
Epoch 23/150
62/62 [==============================] - 0s 725us/step - loss: 0.4410 - accuracy: 0.7834
Epoch 24/150
62/62 [==============================] - 0s 744us/step - loss: 0.4390 - accuracy: 0.7899
Epoch 25/150
62/62 [==============================] - 0s 747us/step - loss: 0.4374 - accuracy: 0.7883
Epoch 26/150
62/62 [==============================] - 0s 745us/step - loss: 0.4370 - accuracy: 0.7899
Epoch 27/150
62/62 [==============================] - 0s 691us/step - loss: 0.4352 - accuracy: 0.7915
Epoch 28/150
62/62 [==============================] - 0s 689us/step - loss: 0.4354 - accuracy: 0.7866
Epoch 29/150
62/62 [==============================] - 0s 731us/step - loss: 0.4327 - accuracy: 0.7915
Epoch 30/150
62/62 [==============================] - 0s 753us/step - loss: 0.4314 - accuracy: 0.7915
Epoch 31/150
62/62 [==============================] - 0s 746us/step - loss: 0.4302 - accuracy: 0.7915
Epoch 32/150
62/62 [==============================] - 0s 734us/step - loss: 0.4297 - accuracy: 0.7899
Epoch 33/150
62/62 [==============================] - 0s 733us/step - loss: 0.4282 - accuracy: 0.7932
Epoch 34/150
62/62 [==============================] - 0s 717us/step - loss: 0.4269 - accuracy: 0.7964
Epoch 35/150
62/62 [==============================] - 0s 740us/step - loss: 0.4256 - accuracy: 0.8013
Epoch 36/150
62/62 [==============================] - 0s 708us/step - loss: 0.4247 - accuracy: 0.8013
Epoch 37/150
62/62 [==============================] - 0s 747us/step - loss: 0.4236 - accuracy: 0.7948
Epoch 38/150
62/62 [==============================] - 0s 753us/step - loss: 0.4225 - accuracy: 0.7948
Epoch 39/150
62/62 [==============================] - 0s 738us/step - loss: 0.4220 - accuracy: 0.7980
Epoch 40/150
62/62 [==============================] - 0s 740us/step - loss: 0.4208 - accuracy: 0.7980
Epoch 41/150
62/62 [==============================] - 0s 767us/step - loss: 0.4198 - accuracy: 0.7980
Epoch 42/150
62/62 [==============================] - 0s 702us/step - loss: 0.4196 - accuracy: 0.8029
Epoch 43/150
62/62 [==============================] - 0s 695us/step - loss: 0.4185 - accuracy: 0.8029
Epoch 44/150
62/62 [==============================] - 0s 674us/step - loss: 0.4178 - accuracy: 0.7980
Epoch 45/150
62/62 [==============================] - 0s 713us/step - loss: 0.4166 - accuracy: 0.7980
Epoch 46/150
62/62 [==============================] - 0s 679us/step - loss: 0.4164 - accuracy: 0.7948
Epoch 47/150
62/62 [==============================] - 0s 718us/step - loss: 0.4158 - accuracy: 0.7997
Epoch 48/150
62/62 [==============================] - 0s 685us/step - loss: 0.4148 - accuracy: 0.7980
Epoch 49/150
62/62 [==============================] - 0s 694us/step - loss: 0.4140 - accuracy: 0.8046
Epoch 50/150
62/62 [==============================] - 0s 692us/step - loss: 0.4129 - accuracy: 0.8029
Epoch 51/150
62/62 [==============================] - 0s 688us/step - loss: 0.4123 - accuracy: 0.7980
Epoch 52/150
62/62 [==============================] - 0s 715us/step - loss: 0.4117 - accuracy: 0.8013
Epoch 53/150
62/62 [==============================] - 0s 690us/step - loss: 0.4104 - accuracy: 0.8062
Epoch 54/150
62/62 [==============================] - 0s 711us/step - loss: 0.4087 - accuracy: 0.8078
Epoch 55/150
62/62 [==============================] - 0s 689us/step - loss: 0.4093 - accuracy: 0.8094
Epoch 56/150
62/62 [==============================] - 0s 693us/step - loss: 0.4082 - accuracy: 0.8046
Epoch 57/150
62/62 [==============================] - 0s 693us/step - loss: 0.4070 - accuracy: 0.8094
Epoch 58/150
62/62 [==============================] - 0s 665us/step - loss: 0.4061 - accuracy: 0.8062
Epoch 59/150
62/62 [==============================] - 0s 676us/step - loss: 0.4055 - accuracy: 0.8127
Epoch 60/150
62/62 [==============================] - 0s 672us/step - loss: 0.4054 - accuracy: 0.8111
Epoch 61/150
62/62 [==============================] - 0s 699us/step - loss: 0.4048 - accuracy: 0.8127
Epoch 62/150
62/62 [==============================] - 0s 669us/step - loss: 0.4031 - accuracy: 0.8111
Epoch 63/150
62/62 [==============================] - 0s 703us/step - loss: 0.4021 - accuracy: 0.8111
Epoch 64/150
62/62 [==============================] - 0s 670us/step - loss: 0.4017 - accuracy: 0.8127
Epoch 65/150
62/62 [==============================] - 0s 687us/step - loss: 0.4015 - accuracy: 0.8143
Epoch 66/150
62/62 [==============================] - 0s 670us/step - loss: 0.4011 - accuracy: 0.8078
Epoch 67/150
62/62 [==============================] - 0s 675us/step - loss: 0.3993 - accuracy: 0.8094
Epoch 68/150
62/62 [==============================] - 0s 681us/step - loss: 0.3992 - accuracy: 0.8192
Epoch 69/150
62/62 [==============================] - 0s 684us/step - loss: 0.3973 - accuracy: 0.8192
Epoch 70/150
62/62 [==============================] - 0s 687us/step - loss: 0.3972 - accuracy: 0.8192
Epoch 71/150
62/62 [==============================] - 0s 676us/step - loss: 0.3966 - accuracy: 0.8111
Epoch 72/150
62/62 [==============================] - 0s 689us/step - loss: 0.3955 - accuracy: 0.8192
Epoch 73/150
62/62 [==============================] - 0s 686us/step - loss: 0.3962 - accuracy: 0.8062
Epoch 74/150
62/62 [==============================] - 0s 675us/step - loss: 0.3949 - accuracy: 0.8160
Epoch 75/150
62/62 [==============================] - 0s 678us/step - loss: 0.3937 - accuracy: 0.8241
Epoch 76/150
62/62 [==============================] - 0s 693us/step - loss: 0.3945 - accuracy: 0.8208
Epoch 77/150
62/62 [==============================] - 0s 694us/step - loss: 0.3941 - accuracy: 0.8176
Epoch 78/150
62/62 [==============================] - 0s 667us/step - loss: 0.3932 - accuracy: 0.8176
Epoch 79/150
62/62 [==============================] - 0s 687us/step - loss: 0.3915 - accuracy: 0.8208
Epoch 80/150
62/62 [==============================] - 0s 683us/step - loss: 0.3903 - accuracy: 0.8176
Epoch 81/150
62/62 [==============================] - 0s 690us/step - loss: 0.3906 - accuracy: 0.8062
Epoch 82/150
62/62 [==============================] - 0s 700us/step - loss: 0.3899 - accuracy: 0.8241
Epoch 83/150
62/62 [==============================] - 0s 688us/step - loss: 0.3890 - accuracy: 0.8160
Epoch 84/150
62/62 [==============================] - 0s 680us/step - loss: 0.3879 - accuracy: 0.8257
Epoch 85/150
62/62 [==============================] - 0s 673us/step - loss: 0.3871 - accuracy: 0.8290
Epoch 86/150
62/62 [==============================] - 0s 676us/step - loss: 0.3858 - accuracy: 0.8257
Epoch 87/150
62/62 [==============================] - 0s 699us/step - loss: 0.3857 - accuracy: 0.8306
Epoch 88/150
62/62 [==============================] - 0s 746us/step - loss: 0.3858 - accuracy: 0.8257
Epoch 89/150
62/62 [==============================] - 0s 719us/step - loss: 0.3839 - accuracy: 0.8274
Epoch 90/150
62/62 [==============================] - 0s 701us/step - loss: 0.3857 - accuracy: 0.8225
Epoch 91/150
62/62 [==============================] - 0s 671us/step - loss: 0.3852 - accuracy: 0.8241
Epoch 92/150
62/62 [==============================] - 0s 658us/step - loss: 0.3837 - accuracy: 0.8290
Epoch 93/150
62/62 [==============================] - 0s 676us/step - loss: 0.3828 - accuracy: 0.8274
Epoch 94/150
62/62 [==============================] - 0s 710us/step - loss: 0.3821 - accuracy: 0.8290
Epoch 95/150
62/62 [==============================] - 0s 717us/step - loss: 0.3815 - accuracy: 0.8322
Epoch 96/150
62/62 [==============================] - 0s 715us/step - loss: 0.3815 - accuracy: 0.8322
Epoch 97/150
62/62 [==============================] - 0s 712us/step - loss: 0.3811 - accuracy: 0.8274
Epoch 98/150
62/62 [==============================] - 0s 716us/step - loss: 0.3815 - accuracy: 0.8306
Epoch 99/150
62/62 [==============================] - 0s 721us/step - loss: 0.3798 - accuracy: 0.8274
Epoch 100/150
62/62 [==============================] - 0s 715us/step - loss: 0.3802 - accuracy: 0.8339
Epoch 101/150
62/62 [==============================] - 0s 718us/step - loss: 0.3783 - accuracy: 0.8322
Epoch 102/150
62/62 [==============================] - 0s 715us/step - loss: 0.3797 - accuracy: 0.8306
Epoch 103/150
62/62 [==============================] - 0s 730us/step - loss: 0.3795 - accuracy: 0.8257
Epoch 104/150
62/62 [==============================] - 0s 742us/step - loss: 0.3783 - accuracy: 0.8322
Epoch 105/150
62/62 [==============================] - 0s 728us/step - loss: 0.3777 - accuracy: 0.8306
Epoch 106/150
62/62 [==============================] - 0s 750us/step - loss: 0.3778 - accuracy: 0.8290
Epoch 107/150
62/62 [==============================] - 0s 744us/step - loss: 0.3764 - accuracy: 0.8290
Epoch 108/150
62/62 [==============================] - 0s 731us/step - loss: 0.3766 - accuracy: 0.8306
Epoch 109/150
62/62 [==============================] - 0s 727us/step - loss: 0.3767 - accuracy: 0.8290
Epoch 110/150
62/62 [==============================] - 0s 726us/step - loss: 0.3770 - accuracy: 0.8290
Epoch 111/150
62/62 [==============================] - 0s 729us/step - loss: 0.3764 - accuracy: 0.8257
Epoch 112/150
62/62 [==============================] - 0s 715us/step - loss: 0.3746 - accuracy: 0.8290
Epoch 113/150
62/62 [==============================] - 0s 659us/step - loss: 0.3742 - accuracy: 0.8290
Epoch 114/150
62/62 [==============================] - 0s 670us/step - loss: 0.3744 - accuracy: 0.8322
Epoch 115/150
62/62 [==============================] - 0s 681us/step - loss: 0.3741 - accuracy: 0.8339
Epoch 116/150
62/62 [==============================] - 0s 684us/step - loss: 0.3728 - accuracy: 0.8339
Epoch 117/150
62/62 [==============================] - 0s 680us/step - loss: 0.3730 - accuracy: 0.8339
Epoch 118/150
62/62 [==============================] - 0s 671us/step - loss: 0.3711 - accuracy: 0.8355
Epoch 119/150
62/62 [==============================] - 0s 679us/step - loss: 0.3717 - accuracy: 0.8355
Epoch 120/150
62/62 [==============================] - 0s 668us/step - loss: 0.3710 - accuracy: 0.8339
Epoch 121/150
62/62 [==============================] - 0s 686us/step - loss: 0.3710 - accuracy: 0.8355
Epoch 122/150
62/62 [==============================] - 0s 717us/step - loss: 0.3706 - accuracy: 0.8388
Epoch 123/150
62/62 [==============================] - 0s 700us/step - loss: 0.3695 - accuracy: 0.8371
Epoch 124/150
62/62 [==============================] - 0s 722us/step - loss: 0.3711 - accuracy: 0.8388
Epoch 125/150
62/62 [==============================] - 0s 731us/step - loss: 0.3698 - accuracy: 0.8355
Epoch 126/150
62/62 [==============================] - 0s 678us/step - loss: 0.3683 - accuracy: 0.8339
Epoch 127/150
62/62 [==============================] - 0s 690us/step - loss: 0.3675 - accuracy: 0.8388
Epoch 128/150
62/62 [==============================] - 0s 731us/step - loss: 0.3689 - accuracy: 0.8371
Epoch 129/150
62/62 [==============================] - 0s 736us/step - loss: 0.3675 - accuracy: 0.8339
Epoch 130/150
62/62 [==============================] - 0s 745us/step - loss: 0.3664 - accuracy: 0.8339
Epoch 131/150
62/62 [==============================] - 0s 744us/step - loss: 0.3663 - accuracy: 0.8355
Epoch 132/150
62/62 [==============================] - 0s 733us/step - loss: 0.3658 - accuracy: 0.8274
Epoch 133/150
62/62 [==============================] - 0s 680us/step - loss: 0.3663 - accuracy: 0.8371
Epoch 134/150
62/62 [==============================] - 0s 717us/step - loss: 0.3674 - accuracy: 0.8355
Epoch 135/150
62/62 [==============================] - 0s 748us/step - loss: 0.3645 - accuracy: 0.8420
Epoch 136/150
62/62 [==============================] - 0s 739us/step - loss: 0.3643 - accuracy: 0.8355
Epoch 137/150
62/62 [==============================] - 0s 744us/step - loss: 0.3646 - accuracy: 0.8339
Epoch 138/150
62/62 [==============================] - 0s 741us/step - loss: 0.3642 - accuracy: 0.8355
Epoch 139/150
62/62 [==============================] - 0s 733us/step - loss: 0.3636 - accuracy: 0.8388
Epoch 140/150
62/62 [==============================] - 0s 735us/step - loss: 0.3630 - accuracy: 0.8371
Epoch 141/150
62/62 [==============================] - 0s 726us/step - loss: 0.3640 - accuracy: 0.8322
Epoch 142/150
62/62 [==============================] - 0s 749us/step - loss: 0.3619 - accuracy: 0.8371
Epoch 143/150
62/62 [==============================] - 0s 738us/step - loss: 0.3606 - accuracy: 0.8355
Epoch 144/150
62/62 [==============================] - 0s 733us/step - loss: 0.3610 - accuracy: 0.8371
Epoch 145/150
62/62 [==============================] - 0s 738us/step - loss: 0.3609 - accuracy: 0.8404
Epoch 146/150
62/62 [==============================] - 0s 751us/step - loss: 0.3626 - accuracy: 0.8355
Epoch 147/150
62/62 [==============================] - 0s 739us/step - loss: 0.3615 - accuracy: 0.8355
Epoch 148/150
62/62 [==============================] - 0s 745us/step - loss: 0.3584 - accuracy: 0.8420
Epoch 149/150
62/62 [==============================] - 0s 745us/step - loss: 0.3592 - accuracy: 0.8371
Epoch 150/150
62/62 [==============================] - 0s 744us/step - loss: 0.3576 - accuracy: 0.8420
Out[4]:
<keras.src.callbacks.History at 0x7f8a60af23d0>
In [5]:
_, accuracy = model.evaluate(X_train, y_train)

print(f"Accuracy: {accuracy}")
20/20 [==============================] - 0s 726us/step - loss: 0.3535 - accuracy: 0.8420
Accuracy: 0.8420195579528809

How long a model takes to train depends, first of all, on the size of the dataset (instances and features), and also on the type of model and its configuration.

The accuracy on the training set is 84.20%.

Step 3: Model prediction

In [6]:
y_pred = model.predict(X_test)
y_pred[:15]
5/5 [==============================] - 0s 783us/step
Out[6]:
array([[2.6933843e-01],
       [5.7993677e-02],
       [7.6992743e-02],
       [4.8524177e-01],
       [3.1675667e-01],
       [6.4265609e-01],
       [7.3388085e-04],
       [2.8476545e-01],
       [8.7694836e-01],
       [4.1469648e-01],
       [1.6080230e-01],
       [8.2213795e-01],
       [2.1518065e-01],
       [5.3527528e-01],
       [1.2730679e-01]], dtype=float32)

As we can see, the model does not return the classes 0 and 1 directly; its output needs to be post-processed first:

In [7]:
y_pred_round = [round(x[0]) for x in y_pred]
y_pred_round[:15]
Out[7]:
[0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0]

With the raw predictions it is very hard to tell whether the model is getting things right. To find out, we must compare them against the ground truth. There are many metrics for measuring how effective a model is at predicting, among them accuracy, which is the fraction of predictions the model got right.

In [8]:
from sklearn.metrics import accuracy_score

accuracy_score(y_test, y_pred_round)
Out[8]:
0.7272727272727273
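Accuracy is only one of those metrics. As an illustrative aside (not part of the original lesson), a confusion matrix from scikit-learn breaks the hits and misses down by class:

from sklearn.metrics import confusion_matrix

# Rows are the true classes (0 = no diabetes onset, 1 = onset), columns the predicted ones
confusion_matrix(y_test, y_pred_round)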

Step 4: Saving the model

Once we have the model we were looking for (presumably after hyperparameter optimization), in order to use it in the future we need to store it in our working directory.

In [9]:
model.save("keras_8-12-8-1_42.keras")

Giving the model a descriptive name is vital: if we ever lose the code that generated it, we will still know its architecture (here we say 8-12-8-1 because it has 8 neurons in the input layer, 12 and 8 in the two hidden layers, and one neuron in the output layer), as well as the seed needed to reproduce its random components, which in this case we record by appending a number, 42, to the file name.
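When the model is needed again, it can be loaded back from that file. A minimal sketch (the name reloaded_model is introduced here):

from tensorflow.keras.models import load_model

# Restore the architecture and trained weights that were saved above
reloaded_model = load_model("keras_8-12-8-1_42.keras")
reloaded_model.summary()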

Classification of image datasets

Below is a simple example of how to train a neural network to classify images from the MNIST dataset. MNIST is a dataset of images of handwritten digits, from 0 to 9.

Step 1. Reading the dataset

In [10]:
from tensorflow.keras.datasets import mnist

(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Normalize the data (transform pixel values from 0-255 to 0-1)
X_train, X_test = X_train / 255.0, X_test / 255.0
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 1s 0us/step

The pixel values of the images are normalized so that they lie in the range 0 to 1 instead of 0 to 255.

Step 2: Initialization and training of the model

We define the architecture of the neural network. In this case, we use a simple sequential model with a flattening layer that turns the 2D images into 1D vectors, a dense hidden layer with 128 neurons, and an output layer with 10 neurons.

Below is an alternative way, different from the one above, of creating an ANN. Both are valid:

In [11]:
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Sequential
from tensorflow.keras.utils import set_random_seed

set_random_seed(42)

model = Sequential([
  # Layer that flattens the 28x28-pixel input image into a 784-element vector
  Flatten(input_shape = (28, 28)),
  # Dense hidden layer with 128 neurons and ReLU activation
  Dense(128, activation = "relu"),
  # Output layer with 10 neurons (one for each digit from 0 to 9)
  Dense(10)
])

We also add the compilation step of the network to define the optimizer and the loss function, as we did before. Note that from_logits = True is used because the output layer has no activation, so it produces raw scores (logits) rather than probabilities:

In [12]:
from tensorflow.keras.losses import SparseCategoricalCrossentropy

model.compile(optimizer = "adam", loss = SparseCategoricalCrossentropy(from_logits = True), metrics = ["accuracy"])

The model is trained on the training set for a certain number of epochs. When working with images it is less common to set the batch_size parameter explicitly, so Keras simply uses its default value:

In [13]:
model.fit(X_train, y_train, epochs = 5)
Epoch 1/5
1875/1875 [==============================] - 2s 1ms/step - loss: 0.2530 - accuracy: 0.9276
Epoch 2/5
1875/1875 [==============================] - 2s 1ms/step - loss: 0.1111 - accuracy: 0.9671
Epoch 3/5
1875/1875 [==============================] - 2s 1ms/step - loss: 0.0759 - accuracy: 0.9757
Epoch 4/5
1875/1875 [==============================] - 2s 1ms/step - loss: 0.0566 - accuracy: 0.9831
Epoch 5/5
1875/1875 [==============================] - 2s 1ms/step - loss: 0.0432 - accuracy: 0.9865
Out[13]:
<keras.src.callbacks.History at 0x7f8a600fb950>
In [14]:
_, accuracy = model.evaluate(X_train, y_train)

print(f"Accuracy: {accuracy}")
1875/1875 [==============================] - 1s 671us/step - loss: 0.0441 - accuracy: 0.9858
Accuracy: 0.9858166575431824

As before, how long the model takes to train depends on the size of the dataset (instances and features) and on the type of model and its configuration.

Step 3: Model prediction

In [15]:
test_loss, test_acc = model.evaluate(X_test, y_test, verbose = 2)

print("\nTest accuracy:", test_acc)
313/313 - 0s - loss: 0.0841 - accuracy: 0.9751 - 266ms/epoch - 851us/step

Test accuracy: 0.9750999808311462
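Here evaluate only reports aggregate metrics. Since the output layer returns raw logits, one way to obtain the predicted digit for each image is to take the index of the largest logit; a minimal sketch (predicted_digits is a name introduced here):

import numpy as np

# The predicted class for each image is the position of its largest logit
logits = model.predict(X_test)
predicted_digits = np.argmax(logits, axis = 1)
predicted_digits[:15]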

Step 4: Saving the model

Once we have the model we were looking for (presumably after hyperparameter optimization), in order to use it in the future we need to store it in our working directory.

In [ ]:
model.save("keras_28x28-128-10_42.keras")

Giving the model a descriptive name is vital: if we ever lose the code that generated it, we will still know its architecture (here we say 28x28-128-10 because it has a 28 x 28-pixel input layer, 128 neurons in its single hidden layer, and 10 neurons in the output layer), as well as the seed needed to reproduce its random components, which again we record by appending the number 42 to the file name.