{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "93304745", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import tensorflow as tf\n", "from tensorflow.keras.layers import Dense, Input, concatenate, Concatenate\n", "from tensorflow.keras.models import Model, Sequential\n", "from tensorflow.keras.utils import Sequence, plot_model\n", "from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau\n", "import matplotlib.pyplot as plt\n", "\n", "from sklearn.preprocessing import StandardScaler\n", "from sklearn.datasets import load_wine" ] }, { "cell_type": "markdown", "id": "f0d4a88c", "metadata": {}, "source": [ "In our classes we will use TensorFlow and Keras, however, a project can be done with any technology you want. You can even use pure NumPy and implement neural networks by yourself or write it in C or Assembler." ] }, { "cell_type": "markdown", "id": "f4a2f032", "metadata": {}, "source": [ "TensorFlow by default allocates nearly all available memory on the GPU. It's wise to set the memory_growth parameter then it will take just as much memory as needed." ] }, { "cell_type": "code", "execution_count": null, "id": "c09b116c", "metadata": {}, "outputs": [], "source": [ "physical_devices = tf.config.list_physical_devices('GPU')\n", "print(physical_devices)\n", "for gpu in physical_devices:\n", " tf.config.experimental.set_memory_growth(gpu, enable=True)" ] }, { "cell_type": "markdown", "id": "455fa82f", "metadata": {}, "source": [ "Before we jump to neural networks let's start with a single perceptron. It's a simple mathematical model that calculates a weighted sum and applies a selected function called an activation function." ] }, { "cell_type": "code", "execution_count": null, "id": "10b4546e", "metadata": {}, "outputs": [], "source": [ "model = Sequential()\n", "model.add(Dense(1, input_shape=(2,))) \n", "# that's how we create a single layer. First parameter specifies how many neurons do we want, since we want to have\n", "# a single perceptron we use 1. Then we specify the input shape so bascially the number of variables \n", "model.summary()" ] }, { "cell_type": "markdown", "id": "9aeb9cc6", "metadata": {}, "source": [ "Above we created a simple neural network with just one neuron accepting input consisting of two values. As you can see such a network/perceptron has 3 parameters. Two of them are the weights associated with each input. Additionally, there is one more parameter called bias. It can be treated as a weight associated with an additional constant input equal to one.\n", "\n", "The formula for a single perceptron looks as follows.\n", "$$ \\sum (x_i*w_i) + bias$$\n", "\n", "Let's check if it really works" ] }, { "cell_type": "code", "execution_count": null, "id": "52ae8466", "metadata": {}, "outputs": [], "source": [ "model.weights" ] }, { "cell_type": "markdown", "id": "6c4f5cc7", "metadata": {}, "source": [ "Above you should see two weights as kernel and one bias. These weights are randomly initialized. Let's see if the formula is correct." 
] }, { "cell_type": "code", "execution_count": null, "id": "e311cd82", "metadata": {}, "outputs": [], "source": [ "x = np.array([[2,1]]) # dummy input for calculations\n", "model.predict(x), x @ model.weights[0].numpy() + model.weights[1].numpy()" ] }, { "cell_type": "markdown", "id": "77b73b40", "metadata": {}, "source": [ "Method predict simply run neural network with given input.\n", "\n", "Seems it works since the results are the same, btw @ is an operator for matrix multiplication. If you don't trust it you can write it step by step with scalar multiplication." ] }, { "cell_type": "markdown", "id": "4dfb4103", "metadata": {}, "source": [ "OK but you mentioned something about an activation function.\n", "\n", "That's right, here we just used linear function f(x) = x, but in general, you can use whatever function you want e.g. tanh. It's applied after the weighted sum. So let's update our formula and try it once again.\n", "$$ f(\\sum (x_i*w_i) + bias)$$" ] }, { "cell_type": "code", "execution_count": null, "id": "a0cb8f2b", "metadata": {}, "outputs": [], "source": [ "model = Sequential()\n", "model.add(Dense(1, 'tanh', input_shape=(2,))) # the second parameter specifies the activation function\n", "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "id": "875894fc", "metadata": {}, "outputs": [], "source": [ "model.weights" ] }, { "cell_type": "code", "execution_count": null, "id": "83ad6cf4", "metadata": {}, "outputs": [], "source": [ "model.predict(x), x @ model.weights[0].numpy() + model.weights[1].numpy()" ] }, { "cell_type": "markdown", "id": "e77db638", "metadata": {}, "source": [ "Now result of predict is different than our manual computations. That's because in predict a tanh activation function is used. Let's add tanh to our formula and see if the results match" ] }, { "cell_type": "code", "execution_count": null, "id": "068f461b", "metadata": {}, "outputs": [], "source": [ "np.tanh(x @ model.weights[0].numpy() + model.weights[1].numpy())" ] }, { "cell_type": "markdown", "id": "3c94068c", "metadata": {}, "source": [ "The results might not be exactly the same due to numerical errors and the implementation of tanh, but it's clear it works.\n", "\n", "So why do we want to use different activation functions and which of them to use?\n", "\n", "We will return to that question later on. \n" ] }, { "cell_type": "markdown", "id": "c1868c9e", "metadata": {}, "source": [ "Now let's talk about those weights and how to obtain them. \n", "Well, the idea is simple. Firstly you need a training set with predictors ($x$) and target ($y$). In our example let's try to predict the price of a flat ($y$) based on its area ($x$). Of course, more than one feature can and should be used e.g. floor, year, district, ..., but for clarity, let's use a single feature. 
" ] }, { "cell_type": "code", "execution_count": null, "id": "a7d07231", "metadata": { "scrolled": true }, "outputs": [], "source": [ "np.random.seed(41)\n", "x = np.random.rand(200,1)*20+50" ] }, { "cell_type": "code", "execution_count": null, "id": "2b22068e", "metadata": { "scrolled": true }, "outputs": [], "source": [ "np.random.seed(27)\n", "y = .5*x + np.random.rand(*x.shape)*3 + np.log(x-49)*2" ] }, { "cell_type": "code", "execution_count": null, "id": "a642c697", "metadata": {}, "outputs": [], "source": [ "plt.plot(x,y, '.')\n", "plt.xlabel(\"$m^2$\")\n", "plt.ylabel(\"price in bitcoins\")\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "b7d6689f", "metadata": {}, "source": [ "Ok, we created a dataset (a fake one, but perfect for this exercise). Now let's create a perceptron that will solve this task. Which activation function should be used?" ] }, { "cell_type": "code", "execution_count": null, "id": "c7b59edd", "metadata": {}, "outputs": [], "source": [ "model = Sequential()\n", "model.add(Dense(1, input_shape=(1,))) #now we have only one input parameter\n", "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "id": "00a0170d", "metadata": { "scrolled": true }, "outputs": [], "source": [ "model.predict(x)" ] }, { "cell_type": "code", "execution_count": null, "id": "c2deab8c", "metadata": {}, "outputs": [], "source": [ "plt.plot(x,y, '.', label='data')\n", "plt.plot(np.sort(x,0), model.predict(np.sort(x,0)), label='prediction')\n", "plt.legend()\n", "plt.grid()\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "6b8e57d7", "metadata": {}, "source": [ "That's our prediction. Probably we are not even close. It's not strange, weights are initialized in a randomized way. What we need now is a measure of this error called a loss function. Once it's defined, the process of training weights is just an optimization task and the goal is to find such weights that minimize the loss function. In fact, any optimization algorithm can be used here including random search - try a lot of random weights and choose the best ones. But in most cases, stochastic gradient descent is used - in this algorithm using derivative iteratively the weights are updated" ] }, { "cell_type": "markdown", "id": "afe24098", "metadata": {}, "source": [ "The first idea might be simply to calculate $prediction - y$. Do you know why it fails? The problem is we will keep a sign so positive and negative errors will cancel each other. We can improve this approach by taking an absolute of it. This metric is called a mean absolute error and again there is one problem. It's not differentiable and some algorithms for optimization require the loss function to be differentiable. That's why we will use the so-called mean squared error $(prediction - y)^2$. 
It works more or less like mean absolute error + it's differentiable + lower impact of small errors and higher impact of big errors.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "ee6d979b", "metadata": {}, "outputs": [], "source": [ "mse = lambda x,y: ((y - x)**2).mean() # our loss function" ] }, { "cell_type": "markdown", "id": "bfeca8e3", "metadata": {}, "source": [ "I said you can use even random search so let's give it a try" ] }, { "cell_type": "code", "execution_count": null, "id": "ebd76d4b", "metadata": {}, "outputs": [], "source": [ "bestWeights = model.get_weights()\n", "pred = model.predict(x, verbose=0)\n", "bestError = mse(y, pred)\n", "bestError\n", "\n", "for _ in range(200):\n", " weights = [np.random.randn(1,1), np.random.randn(1)]\n", " model.set_weights(weights)\n", " pred = model.predict(x, verbose=0)\n", " err = mse(y, pred)\n", " if err < bestError:\n", " bestError = err\n", " bestWeights = model.get_weights()\n", " print(err)\n" ] }, { "cell_type": "code", "execution_count": null, "id": "83a93991", "metadata": {}, "outputs": [], "source": [ "model.set_weights(bestWeights)\n", "plt.plot(x,y, '.', label='data')\n", "plt.plot(np.sort(x,0), model.predict(np.sort(x,0)), label='prediction')\n", "plt.legend()\n", "plt.grid()\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "03f789d7", "metadata": {}, "source": [ "Ok, it looks better but there should be a better way than a random search. Of course, there is. As I said before any optimization technique can be used here but in most cases, a version of stochastic gradient descent is used. Here we take advantage of a differentiable loss function and iteratively step by step change the weights in a direction set by gradient descent. " ] }, { "cell_type": "markdown", "id": "7b43b5f6", "metadata": {}, "source": [ "There are two parameters related to optimizing neural network weights using this algorithm number of epochs and batch size. The batch size determines the number of rows used for one update of weights. The general rule is the higher the better. The number of epochs determines how many times we will go through the whole dataset in the training process. The total number of weights updates is size of data / batch size * number of epochs" ] }, { "cell_type": "markdown", "id": "b2c810b9", "metadata": {}, "source": [ "This function is already implemented so we just have to call the fit function. Before we have to call a compile method where we say which loss function should be used and if we want to collect some additional metrics" ] }, { "cell_type": "code", "execution_count": null, "id": "adb372be", "metadata": {}, "outputs": [], "source": [ "model = Sequential()\n", "model.add(Dense(1, input_shape=(1,)))\n", "model.compile(loss='mse', metrics='mae')\n", "model.fit(x,y, epochs=300, batch_size=16)" ] }, { "cell_type": "markdown", "id": "e6629144", "metadata": {}, "source": [ "How to adjust the number of epochs? When you look at the history you should be able to determine when it doesn't make much sense to continue the optimization process. At some point, there is no longer any progress. We can use so-called callbacks - functions executed at various stages of the model training e.g. after each epoch. We can monitor the progress and stop learning when it's not progressing anymore. Such a callback is already implemented and is called EarlyStopping. It can be used together with the ReduceLROnPlateau callback. 
Before stopping the learning phase first it reduces the learning rate allowing the reduction of loss function even more. " ] }, { "cell_type": "code", "execution_count": null, "id": "1de1301e", "metadata": {}, "outputs": [], "source": [ "early = EarlyStopping(monitor='loss', patience=15, restore_best_weights=True)\n", "reduce = ReduceLROnPlateau(monitor='loss', patience=6)\n", "\n", "model = Sequential()\n", "model.add(Dense(1, input_shape=(1,)))\n", "model.compile(loss='mse', metrics='mae')\n", "model.fit(x,y, epochs=1000, batch_size=16, callbacks=[early, reduce])" ] }, { "cell_type": "code", "execution_count": null, "id": "2fbd1acc", "metadata": {}, "outputs": [], "source": [ "plt.plot(x,y, '.', label='data')\n", "plt.plot(np.sort(x,0), model.predict(np.sort(x,0)), label='prediction')\n", "plt.legend()\n", "plt.grid()\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "6d3ed606", "metadata": {}, "source": [ "The result is not perfect but with such a model it cannot be better. This model can return only linear output and there isn't a line that will fit much better to the data we have. What we have to do is to extend the model. Let's create a neural network with more than one neuron. Typically in classical neural networks neurons are organized in layers. Outputs of the previous layer is an input for the next layer. All inputs are connected to each neuron from a given layer thus we call such architecture fully connected or dense.\n" ] }, { "cell_type": "markdown", "id": "380db2e4", "metadata": {}, "source": [ "Here we can return to the activation functions.\n", "\n", "First, let's list the most popular functions" ] }, { "cell_type": "code", "execution_count": null, "id": "b1bd2fa3", "metadata": { "scrolled": false }, "outputs": [], "source": [ "print(\"linear\")\n", "r = np.linspace(-7,7)\n", "plt.plot(r,r)\n", "plt.grid()\n", "plt.show()\n", "print(\"tanh\")\n", "plt.plot(r, np.tanh(r))\n", "plt.grid()\n", "plt.show()\n", "print(\"sigmoid\")\n", "plt.plot(r, 1/(1 + np.exp(-r)))\n", "plt.grid()\n", "plt.show()\n", "print(\"relu\")\n", "plt.plot(r, np.where(r<0,0,r))\n", "plt.grid()\n", "plt.show()\n", "print(\"leakyrelu\")\n", "plt.plot(r, np.where(r<0,r*.1,r))\n", "plt.grid()\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "2029aa65", "metadata": {}, "source": [ "Now how to decide which one to use. As you probably noticed some functions have a limited range of values e.g. sigmoid returns values from 0 to 1 hence it's a perfect option when the output should be treated as a probability. \n", "\n", "In dense layers it doesn't make much sense to use linear activation. Do you know why? What's a linear transformation of linear transformation? \n", "\n", "Good default choice is relu" ] }, { "cell_type": "markdown", "id": "0afcc60a", "metadata": {}, "source": [ "The process of weight selection uses the derivative of an activation function. As you probably noticed for most of them derivative is close to zero for big positive and negative values. That's why we want to stay close to 0. To achieve that it's a good practice to introduce a data scaling process. 
, { "cell_type": "markdown", "id": "0afcc60a", "metadata": {}, "source": [ "The process of weight selection uses the derivative of the activation function. As you probably noticed, for most of them the derivative is close to zero for large positive and negative values. That's why we want to stay close to 0. To achieve that, it's good practice to scale the data. It's a good idea to try different options, but classical standardization should be enough in most cases." ] },
{ "cell_type": "code", "execution_count": null, "id": "b6dbc625", "metadata": { "scrolled": true }, "outputs": [], "source": [ "ss_x = StandardScaler()\n", "ss_y = StandardScaler()\n", "\n", "transformed_x = ss_x.fit_transform(x)\n", "transformed_y = ss_y.fit_transform(y)" ] },
{ "cell_type": "code", "execution_count": null, "id": "d022528f", "metadata": { "scrolled": true }, "outputs": [], "source": [ "model = Sequential()\n", "model.add(Dense(64, activation='relu', input_shape=(1,))) # 64 neurons in the first layer\n", "model.add(Dense(1)) # no need to specify input shape. Why?\n", "\n", "model.compile(loss='mse', metrics='mae')\n", "\n", "early = EarlyStopping(monitor='loss', patience=15, restore_best_weights=True)\n", "reduce = ReduceLROnPlateau(monitor='loss', patience=6)\n", "\n", "model.fit(transformed_x,transformed_y, epochs=500, batch_size=16, callbacks=[early, reduce])" ] },
{ "cell_type": "code", "execution_count": null, "id": "77a731df", "metadata": {}, "outputs": [], "source": [ "plt.plot(transformed_x,transformed_y, '.', label='data')\n", "plt.plot(np.sort(transformed_x, 0), model.predict(np.sort(transformed_x, 0)), label='prediction')\n", "plt.legend()\n", "plt.grid()\n", "plt.show()" ] },
{ "cell_type": "markdown", "id": "1e6e3ede", "metadata": {}, "source": [ "You might wonder why we sort the data when plotting the prediction. Well, it's just to draw a nice line. When the points are in random order and we draw lines between them, the plot gets quite chaotic." ] },
{ "cell_type": "code", "execution_count": null, "id": "e64b87a5", "metadata": {}, "outputs": [], "source": [ "plt.plot(transformed_x,transformed_y, '.', label='data')\n", "plt.plot(transformed_x, model.predict(transformed_x), label='prediction')\n", "plt.legend()\n", "plt.grid()\n", "plt.show()" ] },
{ "cell_type": "markdown", "id": "725a5528", "metadata": {}, "source": [ "To avoid it we can plot just the points, but a line looks better." ] },
{ "cell_type": "code", "execution_count": null, "id": "f976bbec", "metadata": {}, "outputs": [], "source": [ "plt.plot(transformed_x,transformed_y, '.', label='data')\n", "plt.plot(transformed_x, model.predict(transformed_x), '.', label='prediction')\n", "plt.legend()\n", "plt.grid()\n", "plt.show()" ] },
{ "cell_type": "markdown", "id": "0c826308", "metadata": {}, "source": [ "# Task 1\n", "What's the best architecture? Play here https://playground.tensorflow.org/" ] },
{ "cell_type": "markdown", "id": "2180a5f3", "metadata": {}, "source": [ "# Task 2\n", "Try different architectures. Use different numbers of layers, neurons, and activation functions." ] },
{ "cell_type": "code", "execution_count": null, "id": "967903a5", "metadata": {}, "outputs": [], "source": [] },
{ "cell_type": "code", "execution_count": null, "id": "8a54637f", "metadata": {}, "outputs": [], "source": [] },
{ "cell_type": "code", "execution_count": null, "id": "3ed9dbad", "metadata": {}, "outputs": [], "source": [] },
{ "cell_type": "markdown", "id": "d2bdc31e", "metadata": {}, "source": [ "Selecting the best number of layers and neurons is an optimization task itself. There are no strict rules. In general, you should increase the number of neurons as long as it improves the results. Be aware of overfitting - the model learning the noise in the data. Always split the dataset into training, validation, and test sets.\n", "\n", "Remember, even the most complex network still works like a single perceptron - it's just an equation with weights determined by an optimization algorithm.\n", "\n", "Lifehack: since selecting an architecture is an optimization task, we can use an optimization algorithm to solve it. The most obvious representation would be a vector of length equal to the maximum number of layers we want, with each element representing the number of neurons in that layer. There are a few problems with this approach - it doesn't make sense to have neurons in a layer if the previous layer has 0 neurons, which this representation allows. Removing 0 from the considered range is an option, but then we would fix the number of layers. Also, in most cases it's not recommended to increase the number of neurons in the next layer. To avoid these issues we can change the representation. The first element is still the number of neurons in the first layer, but all remaining elements are numbers from 0 to 1, meaning the fraction of the previous layer's neurons used in this layer. For example $[64, 1, .5, .5, 0, 0]$ would mean an architecture with 4 layers of 64, 64, 32, and 16 neurons (a decoding sketch is shown in the next cell)." ] }
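, { "cell_type": "markdown", "id": "ed1a0009", "metadata": {}, "source": [ "A sketch of how such a vector could be decoded into layer sizes (the representation is the one described above; the helper name decode_architecture is made up for illustration)." ] },
{ "cell_type": "code", "execution_count": null, "id": "ed1a0010", "metadata": {}, "outputs": [], "source": [ "def decode_architecture(vector):\n", "    # first element: neurons in the first layer; the rest: fractions of the previous layer\n", "    layers = [int(vector[0])]\n", "    for fraction in vector[1:]:\n", "        neurons = int(layers[-1] * fraction)\n", "        if neurons == 0:  # a zero fraction ends the architecture\n", "            break\n", "        layers.append(neurons)\n", "    return layers\n", "\n", "decode_architecture([64, 1, .5, .5, 0, 0])  # -> [64, 64, 32, 16]" ] }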
, { "cell_type": "markdown", "id": "143a36d2", "metadata": {}, "source": [ "To avoid overfitting we split our dataset into training, validation, and test sets. In most cases it's good practice to shuffle the data first, since the initial order might cause problems. For example, the data can be sorted by the target value; then, if we simply split it in half, the training and test sets would cover different ranges of the target." ] },
{ "cell_type": "code", "execution_count": null, "id": "b2164e31", "metadata": {}, "outputs": [], "source": [ "np.random.seed(31)\n", "idx = np.arange(len(x))\n", "np.random.shuffle(idx)\n", "idx" ] },
{ "cell_type": "code", "execution_count": null, "id": "50911996", "metadata": {}, "outputs": [], "source": [ "train = idx[:int(.8*len(x))]\n", "val = idx[int(.8*len(x)):int(.9*len(x))]\n", "test = idx[int(.9*len(x)):]" ] },
{ "cell_type": "code", "execution_count": null, "id": "bda7f44f", "metadata": {}, "outputs": [], "source": [ "train_x, train_y = transformed_x[train], transformed_y[train]\n", "val_x, val_y = transformed_x[val], transformed_y[val]\n", "test_x, test_y = transformed_x[test], transformed_y[test]" ] },
{ "cell_type": "markdown", "id": "432061e0", "metadata": {}, "source": [ "What's the problem here? Well, it's not a big issue, but data from the validation and test sets should not be used to determine the scaling parameters. So the scaler should be fit on the training data only and then applied to the validation and test sets. Can you do it now?" ] },
{ "cell_type": "code", "execution_count": null, "id": "846b33ac", "metadata": {}, "outputs": [], "source": [] }
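, { "cell_type": "markdown", "id": "ed1a0011", "metadata": {}, "source": [ "Try it yourself in the empty cell above first. Below is one possible sketch (it assumes the unscaled x, y and the train/val/test index arrays defined earlier): the scalers are fit only on the training rows and then merely applied to the other splits." ] },
{ "cell_type": "code", "execution_count": null, "id": "ed1a0012", "metadata": {}, "outputs": [], "source": [ "ss_x = StandardScaler().fit(x[train])  # scaling parameters come from the training rows only\n", "ss_y = StandardScaler().fit(y[train])\n", "\n", "train_x, train_y = ss_x.transform(x[train]), ss_y.transform(y[train])\n", "val_x, val_y = ss_x.transform(x[val]), ss_y.transform(y[val])\n", "test_x, test_y = ss_x.transform(x[test]), ss_y.transform(y[test])" ] }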
] }, { "cell_type": "code", "execution_count": null, "id": "846b33ac", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "62d9c533", "metadata": {}, "outputs": [], "source": [ "model = Sequential()\n", "model.add(Dense(64, activation='relu', input_shape=(1,))) \n", "model.add(Dense(1, activation='linear'))\n", "\n", "model.compile(loss='mse', metrics='mae')\n", "\n", "early = EarlyStopping(patience=15, restore_best_weights=True) # we don't specify monitor, by default it's val_loss\n", "reduce = ReduceLROnPlateau(patience=6)\n", "\n", "model.fit(train_x,train_y, validation_data=(val_x, val_y), epochs=500, batch_size=16, callbacks=[early, reduce])" ] }, { "cell_type": "code", "execution_count": null, "id": "cdd6ad20", "metadata": {}, "outputs": [], "source": [ "\n", "plt.plot(test_x,test_y, '.', label='data')\n", "plt.plot(np.sort(test_x, 0), model.predict(np.sort(test_x, 0)), label='prediction')\n", "plt.legend()\n", "plt.grid()\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "688f8d7f", "metadata": {}, "source": [ "There are two popular functions to create a neural network model. For simple architectures Sequential can be used. For more complex Model is a suitable choice.\n", "\n", "\n", "In the sequential model, we assume layers are executed sequentially one by one. In Model, we can create architectures with different connections, multiple inputs or outputs." ] }, { "cell_type": "code", "execution_count": null, "id": "44aeb2d4", "metadata": {}, "outputs": [], "source": [ "model = Sequential()\n", "model.add(Dense(64, activation='relu', input_shape=(1,))) \n", "model.add(Dense(32, activation='relu', input_shape=(1,))) \n", "model.add(Dense(16, activation='relu', input_shape=(1,))) \n", "model.add(Dense(1, activation='linear'))" ] }, { "cell_type": "code", "execution_count": null, "id": "e85206c5", "metadata": {}, "outputs": [], "source": [ "plot_model(model)" ] }, { "cell_type": "code", "execution_count": null, "id": "30fcfaff", "metadata": {}, "outputs": [], "source": [ "inputLayer = Input(shape=(1,))\n", "dense1 = Dense(64, activation='relu')(inputLayer)\n", "dense2 = Dense(32, activation='relu')(dense1)\n", "dense3 = Dense(16, activation='relu')(dense2)\n", "dense4 = Dense(1, activation='relu')(dense3)\n", "model = Model(inputs=inputLayer, outputs=dense4)" ] }, { "cell_type": "code", "execution_count": null, "id": "b1916b12", "metadata": {}, "outputs": [], "source": [ "plot_model(model)" ] }, { "cell_type": "code", "execution_count": null, "id": "5aeb6004", "metadata": {}, "outputs": [], "source": [ "inputLayer = Input(shape=(1,))\n", "dense1 = Dense(64, activation='relu')(inputLayer)\n", "dense2 = Dense(32, activation='relu')(dense1)\n", "concat = Concatenate()([dense1, dense2])\n", "dense3 = Dense(16, activation='relu')(concat)\n", "dense4 = Dense(1, activation='relu')(dense3)\n", "model = Model(inputs=inputLayer, outputs=dense4)" ] }, { "cell_type": "code", "execution_count": null, "id": "240c8a14", "metadata": {}, "outputs": [], "source": [ "plot_model(model)" ] }, { "cell_type": "markdown", "id": "44142fc3", "metadata": {}, "source": [ "# Task 3\n", "Create a model with two inputs, first processed by three Dense layers, second by two, then concatenated, processed by two Dense layers and split it to two outputs\n" ] }, { "cell_type": "code", "execution_count": null, "id": "6d5d4b51", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "925e28c6", "metadata": {}, "outputs": [], 
"source": [] }, { "cell_type": "code", "execution_count": null, "id": "9a293aa0", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "8d851d7c", "metadata": {}, "source": [ "Data generators are useful tools for neural network training. It allows to create data separately for each batch. Thanks to that there is no need to load all the data to the memory, which can be an issue for example when dealing with images. Also when we don't have data but a function for a generation then instead of generating a fixed number of rows in advance a data generator can be used. It's also useful for the data augmentation process." ] }, { "cell_type": "code", "execution_count": null, "id": "446f549a", "metadata": {}, "outputs": [], "source": [ "class DataGenerator(Sequence):\n", " def __init__(self, x, y, batch_size, shuffle=True):\n", " self.x = x\n", " self.y = y\n", " self.indexes = np.arange(len(y))\n", " self.batch_size = batch_size\n", " self.shuffle = shuffle\n", " self.on_epoch_end()\n", "\n", " def __len__(self):\n", " 'Denotes the number of batches per epoch'\n", " return int(np.floor(len(self.indexes) / self.batch_size))\n", "\n", " def __getitem__(self, index):\n", " 'Generate one batch of data'\n", " # Generate indexes of the batch. \n", " # During training and prediction this function will be called in range(0, __len__())\n", " indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]\n", " \n", " # Generate data\n", " X, y = self.__data_generation(indexes)\n", "\n", " return X, y\n", "\n", " def on_epoch_end(self):\n", " 'Updates indexes after each epoch'\n", " if self.shuffle == True:\n", " np.random.shuffle(self.indexes)\n", "\n", " def __data_generation(self, idx):\n", " X = np.empty((self.batch_size, 1))\n", " y = np.empty((self.batch_size), )\n", "\n", " for i, ID in enumerate(idx):\n", " # Store sample\n", " X[i,] = self.x[ID]\n", "\n", " # Store class\n", " y[i] = self.y[ID]\n", "\n", " return X, y" ] }, { "cell_type": "code", "execution_count": null, "id": "2d621e9f", "metadata": {}, "outputs": [], "source": [ "trainGenerator = DataGenerator(train_x, train_y, 16)\n", "valGenerator = DataGenerator(val_x, val_y, 1, shuffle=False)" ] }, { "cell_type": "code", "execution_count": null, "id": "ae76a3e8", "metadata": {}, "outputs": [], "source": [ "model = Sequential()\n", "model.add(Dense(64, activation='relu', input_shape=(1,))) \n", "model.add(Dense(1, activation='linear'))\n", "\n", "model.compile(loss='mse', metrics='mae')\n", "\n", "early = EarlyStopping(patience=15, restore_best_weights=True) # we don't specify monitor, by default it's val_loss\n", "reduce = ReduceLROnPlateau(patience=6)\n", "\n", "model.fit(trainGenerator, validation_data=valGenerator, epochs=500, batch_size=16, callbacks=[early, reduce])" ] }, { "cell_type": "code", "execution_count": null, "id": "19e83f6f", "metadata": {}, "outputs": [], "source": [ "plt.plot(test_x,test_y, '.', label='data')\n", "plt.plot(np.sort(test_x, 0), model.predict(np.sort(test_x, 0)), label='prediction')\n", "plt.legend()\n", "plt.grid()\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "34890d07", "metadata": {}, "source": [ "Of course, this generator doesn't make much sense since the data are already in the memory and we don't do anything with them. It's just a template you can use later when it's really needed." ] }, { "cell_type": "markdown", "id": "2e45ea08", "metadata": {}, "source": [ "# Task 4\n", "Predict the quality of wine using neural network. 
\n", " - Treat target as continuous.\n", " - Treat target as discrete classes." ] }, { "cell_type": "code", "execution_count": null, "id": "7e5c252d", "metadata": {}, "outputs": [], "source": [ "data = load_wine()" ] }, { "cell_type": "code", "execution_count": null, "id": "cfb56e2a", "metadata": {}, "outputs": [], "source": [ "x = data['data']\n", "y = data['target']" ] }, { "cell_type": "code", "execution_count": null, "id": "9abc92fe", "metadata": {}, "outputs": [], "source": [ "x" ] }, { "cell_type": "code", "execution_count": null, "id": "086aeb2f", "metadata": {}, "outputs": [], "source": [ "y" ] }, { "cell_type": "code", "execution_count": null, "id": "d1c06afc", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "bfd10b21", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "82c191cf", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "32125902", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "61e6bcbd", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "819228bc", "metadata": {}, "outputs": [], "source": [ "#try also this model\n", "#check why and how it works\n", "model = Sequential()\n", "model.add(Dense(16, 'relu', input_shape=(13,)))\n", "model.add(Dense(3, 'softmax'))\n", "model.compile(loss='sparse_categorical_crossentropy', metrics=['acc'])" ] }, { "cell_type": "markdown", "id": "c4b39b3f", "metadata": {}, "source": [ "# Task 5\n", "Think how to use neural network for a time series with known and unknown length." ] }, { "cell_type": "code", "execution_count": null, "id": "465c0080", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "31694efa", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 5 }