{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "282849fe", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "from tensorflow.keras.layers import LSTM, GRU, Dense, Input, RepeatVector, TimeDistributed, SimpleRNN\n", "from tensorflow.keras.layers import Reshape, GlobalMaxPool1D, Lambda, Concatenate\n", "from tensorflow.keras.models import Model, Sequential\n", "from tensorflow.keras.utils import Sequence, plot_model\n", "from tensorflow.keras.callbacks import EarlyStopping\n", "import tensorflow.keras.backend as K\n", "import tensorflow as tf\n", "import matplotlib.pyplot as plt\n", "from matplotlib import animation\n", "from IPython.display import HTML\n", "\n", "from sklearn.metrics import precision_recall_curve, precision_score, recall_score, accuracy_score\n", "from sklearn.metrics import average_precision_score, roc_auc_score\n", "\n", "%matplotlib inline" ] }, { "cell_type": "code", "execution_count": null, "id": "671b53fe", "metadata": {}, "outputs": [], "source": [ "tf.config.list_physical_devices()" ] }, { "cell_type": "code", "execution_count": null, "id": "adf758b4", "metadata": {}, "outputs": [], "source": [ "physical_devices = tf.config.list_physical_devices('GPU')\n", "print(physical_devices)\n", "for gpu in physical_devices:\n", " tf.config.experimental.set_memory_growth(gpu, enable=True)" ] }, { "cell_type": "markdown", "id": "b5fe6435", "metadata": {}, "source": [ "Today we will work with time series. Compared to conventional tabular data in time series there is one additional dimension, usually a time but not always. Examples of time series are the number of cars on a given street each hour, and energy consumption each minute. Also, series like text can be treated with similar techniques." ] }, { "cell_type": "markdown", "id": "7712e60a", "metadata": {}, "source": [ "Let's create a dataset for our exercise. We will use a trigonometric function with some noise." ] }, { "cell_type": "code", "execution_count": null, "id": "6acd5f9e", "metadata": { "scrolled": true }, "outputs": [], "source": [ "data = np.sin(np.linspace(0, 100, 10000)) + np.sin(np.linspace(13, 150, 10000))\n", "data += np.random.randn(*data.shape)/100\n", "for i in np.random.randint(0, 9000, 20):\n", " plt.plot(data[i:i+50])\n", " plt.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "238b4b6e", "metadata": {}, "outputs": [], "source": [ "data.shape" ] }, { "cell_type": "markdown", "id": "292d3892", "metadata": {}, "source": [ "As the first step let's use a part of this series and try to predict the next value. To prepare data function sliding_window_view might be useful. What's important it creates view not a new array so we don't waste memory." ] }, { "cell_type": "code", "execution_count": null, "id": "43e025ff", "metadata": {}, "outputs": [], "source": [ "np.lib.stride_tricks.sliding_window_view(np.arange(200), windowSize)" ] }, { "cell_type": "code", "execution_count": null, "id": "cbf7d563", "metadata": { "scrolled": true }, "outputs": [], "source": [ "windowSize = 50\n", "x = np.lib.stride_tricks.sliding_window_view(data, windowSize)[:-1] # for the last entry we will not have a target\n", "y = data[windowSize:]\n", "for i in np.random.randint(0, 9000, 20):\n", " plt.plot(x[i])\n", " plt.plot(windowSize+1, y[i], 'ro')\n", " plt.show()" ] }, { "cell_type": "markdown", "id": "2a76a824", "metadata": {}, "source": [ "
\n", "\n", "We need to split the data into training, validation, and test sets. This time it might not be the best idea to split the data randomly. Do you know why? Click to see a hint\n", " \n", "Imagine such a situation, in the test set you have a prediction of the 200th timestamp based on timestamps number 150-199, and in the training set, there is a prediction of the 201st timestamp based on timestamps number 151-200.\n", " \n", "
" ] }, { "cell_type": "markdown", "id": "864b7c26", "metadata": {}, "source": [ "We don't want any overlap between these sets. Imagine we have 500 timestamps. If the last training row consist of timestamps 250-299 as predictors and 300 as a target then first validation row shouldn't be 251-300 but 301-350" ] }, { "cell_type": "markdown", "id": "3b1956d8", "metadata": {}, "source": [ "**Task 1**
Split data into training, validation, and test sets" ] }, { "cell_type": "code", "execution_count": null, "id": "6dc9edee", "metadata": {}, "outputs": [], "source": [ "train_frac = 0.8\n", "val_frac = 0.1\n", "test_frac = 0.1\n" ] }, { "cell_type": "markdown", "id": "08f27f69", "metadata": {}, "source": [ "Time series can be processed using neural networks. The most basic approach is to treat each timestamp as a separate input and use a classical Dense layer. What should the input shape of such a network be? \n", "\n", "**Task 2**<br>
Create, fit, and evaluate such a model" ] }, { "cell_type": "code", "execution_count": null, "id": "4979f712", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "0adafd52", "metadata": { "scrolled": true }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "1c834a4d", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "bf503b4c", "metadata": { "scrolled": true }, "outputs": [], "source": [ "pred = model.predict(test_x)\n", "for i in np.random.randint(0, len(test_y), 10):\n", " plt.plot(range(windowSize), test_x[i], label='predictor')\n", " plt.plot(windowSize+1, test_y[i], \"bo\", label=\"y_true\")\n", " plt.plot(windowSize+1, pred[i], \"ro\", label=\"y_pred\")\n", " plt.legend()\n", " plt.grid()\n", " plt.show()" ] }, { "cell_type": "markdown", "id": "bbd06a21", "metadata": {}, "source": [ "There are a few problems with this approach. Firstly, we assume a fixed number of timestamps; secondly, we don't treat the data as a series. There are approaches without these problems. Let's try to invent them.\n", "\n", "The first idea would be to apply a dense layer to each timestamp independently. How many parameters would such a model have? The problem is that the output would have the same size as the input, and each output timestamp would be produced from just one input timestamp.<br>
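\n", "\n", "To make this idea concrete, here is a minimal sketch (one possible way to express it in Keras, not the model we will train) that applies the same Dense layer to every timestamp via the TimeDistributed wrapper:\n", "\n", "```python\n", "from tensorflow.keras.layers import Dense, Input, TimeDistributed\n", "from tensorflow.keras.models import Model\n", "\n", "inp = Input(shape=(50, 1))             # 50 timestamps, 1 feature each\n", "out = TimeDistributed(Dense(1))(inp)   # the same weights reused at every timestamp\n", "sketch = Model(inp, out)\n", "sketch.summary()                       # output shape (None, 50, 1), just 2 trainable parameters\n", "```\n", "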
\n", "We can slightly modify such a model to create the most basic version of a recurrent neural network. We are going to add one additional input to the model and as this input, we will use the output from the previous timestamp. This model is already implemented in keras and is called a Simple RNN" ] }, { "cell_type": "code", "execution_count": null, "id": "21b3ac8f", "metadata": {}, "outputs": [], "source": [ "seqLen = 50\n", "model = Sequential()\n", "model.add(SimpleRNN(64, input_shape=(seqLen,1)))\n", "model.add(Dense(1))\n", "model.compile(loss='mse')\n", "model.summary()" ] }, { "cell_type": "markdown", "id": "eaba014d", "metadata": {}, "source": [ "Now the input is two-dimensional (number of timestamps, number of features) so we have to provide two numbers as the input shape. In general, we should reshape our input data, but since we have just one feature it will work anyway." ] }, { "cell_type": "code", "execution_count": null, "id": "565ea486", "metadata": { "scrolled": true }, "outputs": [], "source": [ "early = EarlyStopping(patience=7, restore_best_weights=True)\n", "model.fit(train_x,train_y, batch_size=64, epochs=50, validation_data=[val_x, val_y], callbacks=[early])" ] }, { "cell_type": "code", "execution_count": null, "id": "cc9676ed", "metadata": {}, "outputs": [], "source": [ "model.evaluate(test_x, test_y)" ] }, { "cell_type": "code", "execution_count": null, "id": "8314cc9f", "metadata": {}, "outputs": [], "source": [ "pred = model.predict(test_x)\n", "for i in np.random.randint(0, len(test_y), 10):\n", " plt.plot(range(windowSize), test_x[i], label='predictor')\n", " plt.plot(windowSize+1, test_y[i], \"bo\", label=\"y_true\")\n", " plt.plot(windowSize+1, pred[i], \"ro\", label=\"y_pred\")\n", " plt.legend()\n", " plt.grid()\n", " plt.show()" ] }, { "cell_type": "markdown", "id": "e590db9e", "metadata": {}, "source": [ "Check if number of parameters changes with sequence length" ] }, { "cell_type": "code", "execution_count": null, "id": "109856bc", "metadata": {}, "outputs": [], "source": [ "seqLen = 500\n", "model = Sequential()\n", "model.add(SimpleRNN(1, input_shape=(seqLen,1)))\n", "model.summary()" ] }, { "cell_type": "markdown", "id": "8c1e6705", "metadata": {}, "source": [ "Can we not specify sequence length at all?" ] }, { "cell_type": "code", "execution_count": null, "id": "2acec53b", "metadata": {}, "outputs": [], "source": [ "seqLen = None\n", "model = Sequential()\n", "model.add(SimpleRNN(1, input_shape=(seqLen,1)))\n", "model.summary()" ] }, { "cell_type": "markdown", "id": "33cc962d", "metadata": {}, "source": [ "I've said before that in rnn models we have an output for each timestamp but after the training there was just one number returned. Although the intermediate outputs are calculated, the default approach is to return just the last one. We can change this behavior by setting parameter return_sequences to true." 
] }, { "cell_type": "code", "execution_count": null, "id": "19dcee14", "metadata": {}, "outputs": [], "source": [ "seqLen = 50\n", "model = Sequential()\n", "model.add(SimpleRNN(1, input_shape=(seqLen,1)))\n", "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "id": "7c9a422e", "metadata": {}, "outputs": [], "source": [ "model.predict(test_x[:3]).shape, model.predict(test_x[:3])" ] }, { "cell_type": "code", "execution_count": null, "id": "621d3e41", "metadata": {}, "outputs": [], "source": [ "seqLen = 50\n", "model = Sequential()\n", "model.add(SimpleRNN(1, return_sequences=True, input_shape=(seqLen,1)))\n", "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "id": "278faccb", "metadata": {}, "outputs": [], "source": [ "model.predict(test_x[:3]).shape, model.predict(test_x[:3])" ] }, { "cell_type": "markdown", "id": "136333c8", "metadata": {}, "source": [ "**Task 3**
Create a model with more than one recurrent layer." ] }, { "cell_type": "code", "execution_count": null, "id": "4d0c86e0", "metadata": {}, "outputs": [], "source": [ "seqLen = 50\n", "model = Sequential()\n", "\n", "model.summary()" ] }, { "cell_type": "markdown", "id": "115070db", "metadata": {}, "source": [ "There is one serious problem with SimpleRNN: these models don't have a good memory. When processing, say, timestamp number 20, there is basically no information left from the beginning of the series. That's why two extensions of it are much more popular: LSTM and GRU. Inside each cell there are small neural networks that determine how much information to forget and how much to add to the current state. We will not go deeply into the details of how these cells work; that is covered in the lecture. \n", "\n", "https://www.researchgate.net/profile/Savvas-Varsamopoulos/publication/329362532/figure/fig5/AS:699592479870977@1543807253596/Structure-of-the-LSTM-cell-and-equations-that-describe-the-gates-of-an-LSTM-cell_W640.jpg\n", "\n", "LSTMs are more complex than GRUs and thus better suited for harder tasks, while GRU cells are lighter and require less computational power but might not perform as well. Both cells already contain nonlinear transformations, so there is no need to apply another activation function." ] }, { "cell_type": "code", "execution_count": null, "id": "fa08fa48", "metadata": {}, "outputs": [], "source": [ "model = Sequential()\n", "model.add(SimpleRNN(2, input_shape=(None,2)))\n", "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "id": "39c1c60a", "metadata": {}, "outputs": [], "source": [ "model = Sequential()\n", "model.add(GRU(2, input_shape=(None,2)))\n", "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "id": "b0b1d059", "metadata": {}, "outputs": [], "source": [ "model = Sequential()\n", "model.add(LSTM(2, input_shape=(None,2)))\n", "model.summary()" ] }, { "cell_type": "markdown", "id": "58f69fe6", "metadata": {}, "source": [ "**Task 4**<br>
Repeat the prediction task but use GRU instead of SimpleRNN. Check number of parameters while using the same number of layers and neurons" ] }, { "cell_type": "code", "execution_count": null, "id": "79469dd7", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "10e287ce", "metadata": {}, "outputs": [], "source": [ "early = EarlyStopping(patience=7, restore_best_weights=True)\n", "model.fit(train_x,train_y, batch_size=64, epochs=50, validation_data=[val_x, val_y], callbacks=[early])" ] }, { "cell_type": "code", "execution_count": null, "id": "ad1c01e6", "metadata": {}, "outputs": [], "source": [ "model.evaluate(test_x, test_y)" ] }, { "cell_type": "code", "execution_count": null, "id": "afc67a2f", "metadata": {}, "outputs": [], "source": [ "pred = model.predict(test_x)\n", "for i in np.random.randint(0, len(test_y), 10):\n", " plt.plot(range(windowSize), test_x[i], label='predictor')\n", " plt.plot(windowSize+1, test_y[i], \"bo\", label=\"y_true\")\n", " plt.plot(windowSize+1, pred[i], \"ro\", label=\"y_pred\")\n", " plt.legend()\n", " plt.grid()\n", " plt.show()" ] }, { "cell_type": "markdown", "id": "4c73a899", "metadata": {}, "source": [ "**Task 5**
Repeat the prediction task but use LSTM instead of SimpleRNN. Check number of parameters" ] }, { "cell_type": "code", "execution_count": null, "id": "95675977", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "d6ae7e8b", "metadata": { "scrolled": true }, "outputs": [], "source": [ "early = EarlyStopping(patience=7, restore_best_weights=True)\n", "model.fit(train_x,train_y, batch_size=64, epochs=50, validation_data=[val_x, val_y], callbacks=[early])" ] }, { "cell_type": "code", "execution_count": null, "id": "aa05829c", "metadata": {}, "outputs": [], "source": [ "model.evaluate(test_x, test_y)" ] }, { "cell_type": "code", "execution_count": null, "id": "8eecaa5a", "metadata": { "scrolled": true }, "outputs": [], "source": [ "pred = model.predict(test_x)\n", "for i in np.random.randint(0, len(test_y), 10):\n", " plt.plot(range(windowSize), test_x[i], label='predictor')\n", " plt.plot(windowSize+1, test_y[i], \"bo\", label=\"y_true\")\n", " plt.plot(windowSize+1, pred[i], \"ro\", label=\"y_pred\")\n", " plt.legend()\n", " plt.grid()\n", " plt.show()" ] }, { "cell_type": "markdown", "id": "4c46d565", "metadata": {}, "source": [ "Ok, we know how to predict the next value, but what if we want to look further into the future?
**Task 6**
Predict the next 12 values using the same model without retraining" ] }, { "cell_type": "code", "execution_count": null, "id": "b711bab7", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "227ad104", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "9ef01236", "metadata": {}, "source": [ "
\n", "\n", "How to modify the model to predict those 12 values at once? It's simple, think about it. You can see the answer after clicking here.\n", " \n", "Change the number of neurons in the last Dense layer\n", " \n", "
\n", "\n" ] }, { "cell_type": "markdown", "id": "e272b646", "metadata": {}, "source": [ "**Task 7**
Create such a model and train it. Remember to change the target so that it consists of 12 values per row." ] }, { "cell_type": "code", "execution_count": null, "id": "f2126e3e", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "5414e95b", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "b745c240", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "2b2c3712", "metadata": {}, "source": [ "We can slightly enhance the training process by guiding it. Right now we ignore nearly all outputs of the recurrent layer; we can include them in the loss function. At each timestamp, we want the prediction for the next 12 values, e.g. when processing timestamp number 13, the output of the network should contain timestamps 14-25.<br>
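\n", "\n", "To make the target shape concrete, here is a sketch of how such a per-timestamp target could be built with sliding_window_view (one possible layout, shown only for the first window; it is not the full solution):\n", "\n", "```python\n", "horizon = 12\n", "y_steps = np.lib.stride_tricks.sliding_window_view(data, horizon)  # y_steps[k] = data[k:k+horizon]\n", "# for the input window covering timestamps 0..windowSize-1, row j of the guided target\n", "# holds the 12 values that follow timestamp j\n", "y_guided_0 = y_steps[1:1 + windowSize]\n", "print(y_guided_0.shape)  # (50, 12)\n", "```\n", "\n", "The model then keeps return_sequences=True on its last recurrent layer and ends with Dense(12), so the output has shape (windowSize, 12) for every sample.\n", "\n", "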
**Task 8**
\n", "Modify the model and the target for this type of training." ] }, { "cell_type": "code", "execution_count": null, "id": "828024cd", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "b7f666cb", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "37ea1074", "metadata": {}, "source": [ "# Autoencoder\n", "Autoencoder is a type of model where we aim to return the input as an output, however, there is a bottleneck in the architecture so the network needs to learn how to compress the information. Such an architecture consists of two submodels - encoder and decoder. The encoder takes an input and compresses it to the so-called latent representation. The decoder takes the latent representation produced by the encoder and recreates the input. You can think about it as trainable compression and decompression. However, this compression is not lossless" ] }, { "cell_type": "code", "execution_count": null, "id": "86f944da", "metadata": {}, "outputs": [], "source": [ "model = Sequential()\n", "model.add(LSTM(32, return_sequences=True, input_shape=(50,1)))\n", "model.add(LSTM(6)) # bottleneck.\n", "model.add(RepeatVector(50)) # repeating given number of times to restore time dimension\n", "model.add(LSTM(64, return_sequences=True))\n", "model.add(LSTM(32, return_sequences=True))\n", "model.add(Dense(1)) # \n", "model.compile(optimizer='adam', loss='mse', metrics='mae')\n", "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "id": "3200f7f9", "metadata": {}, "outputs": [], "source": [ "class SeriesDataGen(Sequence):\n", " \n", " def __init__(self, df, sequenceLen=50, batchSize=1, shuffle=True):\n", " \n", " self.sequenceLen = sequenceLen\n", " self.batchSize = batchSize\n", " self.shuffle = shuffle\n", " self.df = df\n", " \n", " def on_epoch_end(self):\n", " self.df = self.df.sample(frac=1).reset_index(drop=True)\n", " \n", " def __get_data(self, batches):\n", " output = np.zeros((len(batches), self.sequenceLen))\n", " for i,x in batches.reset_index(drop=True).iterrows():\n", " output[i] = np.sin(np.linspace(x.start, x.start+x.freq, self.sequenceLen)) * x.amplitude\n", " if self.shuffle:\n", " output += np.random.randn(*output.shape)/100\n", " \n", " return output, output\n", " \n", " def __getitem__(self, index):\n", " \n", " batches = self.df.iloc[index * self.batchSize:(index + 1) * self.batchSize]\n", " X, y = self.__get_data(batches) \n", " return X.reshape(-1, self.sequenceLen, 1), y.reshape(-1, self.sequenceLen, 1)\n", " \n", " def __len__(self):\n", " return len(self.df) // self.batchSize" ] }, { "cell_type": "code", "execution_count": null, "id": "f2db1dd6", "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame()\n", "df['start'] = np.random.randn(5000)\n", "df['freq'] = np.random.rand(5000)+4\n", "df['amplitude'] = np.random.rand(5000)/2+.75\n", "train = SeriesDataGen(df, batchSize=16)\n", "\n", "df = pd.DataFrame()\n", "df['start'] = np.random.randn(512)\n", "df['freq'] = np.random.rand(512)+4\n", "df['amplitude'] = np.random.rand(512)/2+.75\n", "val = SeriesDataGen(df, batchSize=16, shuffle=False)\n", "\n", "df = pd.DataFrame()\n", "df['start'] = np.random.randn(500)\n", "df['freq'] = np.random.rand(500)+4\n", "df['amplitude'] = np.random.rand(500)/2+.75\n", "test = SeriesDataGen(df, batchSize=1, shuffle=False)" ] }, { "cell_type": "code", "execution_count": null, "id": "4d5b096a", "metadata": {}, "outputs": [], "source": [ "model.fit(train, epochs=5, validation_data=val)" ] }, { 
"cell_type": "code", "execution_count": null, "id": "ca018455", "metadata": {}, "outputs": [], "source": [ "model.evaluate(test)" ] }, { "cell_type": "code", "execution_count": null, "id": "5a821c84", "metadata": { "scrolled": true }, "outputs": [], "source": [ "for i in range(10):\n", " x,y = test.__getitem__(i)\n", " ypred = model.predict(x)\n", " plt.plot(y[0], alpha=.5, label='yTrue')\n", " plt.plot(ypred[0], alpha=.5, label='yPred')\n", " plt.grid()\n", " plt.legend()\n", " plt.show()" ] }, { "cell_type": "markdown", "id": "2ad7e8b6", "metadata": {}, "source": [ "What's the reason for the poor performance in the first timestamps?" ] }, { "cell_type": "markdown", "id": "ba18a7f1", "metadata": {}, "source": [ "**Task 9**
Check how the size of the bottleneck affects the performance." ] }, { "cell_type": "code", "execution_count": null, "id": "4f915aaa", "metadata": {}, "outputs": [], "source": [ "model = Sequential()\n", "model.add(LSTM(32, return_sequences=True, input_shape=(50,1)))\n", "model.add(LSTM(64))\n", "model.add(RepeatVector(50))\n", "model.add(LSTM(64, return_sequences=True))\n", "model.add(LSTM(32, return_sequences=True))\n", "model.add(TimeDistributed(Dense(1)))\n", "model.compile(optimizer='adam', loss='mse', metrics='mse')\n", "model.summary()" ] }, { "cell_type": "code", "execution_count": null, "id": "98a12810", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "7689c304", "metadata": {}, "source": [ "The encoder and the decoder can be two separate networks, which can then be used independently." ] }, { "cell_type": "code", "execution_count": null, "id": "c66d36a9", "metadata": {}, "outputs": [], "source": [ "encoder = Sequential([\n", " LSTM(32, return_sequences=True, input_shape=(50,1)),\n", " LSTM(2)\n", "])\n", "decoder = Sequential([\n", " RepeatVector(50, input_shape=(2,)),\n", " LSTM(64, return_sequences=True),\n", " LSTM(32, return_sequences=True),\n", " TimeDistributed(Dense(1))\n", "])\n", "model = Sequential([\n", " encoder,\n", " decoder\n", "])\n", "model.compile('adam', loss='mse', metrics=['mae'])" ] }, { "cell_type": "code", "execution_count": null, "id": "857d10d2", "metadata": {}, "outputs": [], "source": [ "model.fit(train, epochs=10, validation_data=val)" ] }, { "cell_type": "code", "execution_count": null, "id": "dc921e5e", "metadata": {}, "outputs": [], "source": [ "for i in range(10):\n", " x,y = test.__getitem__(i)\n", " ypred = model.predict(x)\n", " plt.plot(y[0], alpha=.5, label='yTrue')\n", " plt.plot(ypred[0], alpha=.5, label='yPred')\n", " plt.grid()\n", " plt.legend()\n", " plt.show()" ] }, { "cell_type": "markdown", "id": "ecced7c2", "metadata": {}, "source": [ "The decoder was trained to recreate a series of length 50 from two numbers. Right now we can provide any two numbers to the decoder." ] }, { "cell_type": "code", "execution_count": null, "id": "f6523c16", "metadata": {}, "outputs": [], "source": [ "plt.plot(decoder.predict(np.array([[0,0]]))[0])" ] }, { "cell_type": "markdown", "id": "8528bda7", "metadata": {}, "source": [ "We can also see how changing one number modifies the output." 
] }, { "cell_type": "code", "execution_count": null, "id": "57866ed0", "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots()\n", "\n", "ax.set_xlim(( 0, 50))\n", "ax.set_ylim((-2, 2))\n", "\n", "line, = ax.plot([], [], lw=2)" ] }, { "cell_type": "code", "execution_count": null, "id": "e057495f", "metadata": {}, "outputs": [], "source": [ "def init():\n", " line.set_data([], [])\n", " return (line,)\n", "\n", "def animate(i):\n", " x = np.arange(50)\n", " y = decoder.predict(np.array([[np.abs(100-i)/50-1,0]]), verbose=0)[0]\n", " line.set_data(x,y)\n", " return (line,)\n", "\n", "anim = animation.FuncAnimation(fig, animate, init_func=init,\n", " frames=200, interval=20, blit=True)\n", "HTML(anim.to_html5_video())" ] }, { "cell_type": "code", "execution_count": null, "id": "dca1ba2a", "metadata": {}, "outputs": [], "source": [ "def init():\n", " line.set_data([], [])\n", " return (line,)\n", "\n", "def animate(i):\n", " x = np.arange(50)\n", " y = decoder.predict(np.array([[0,np.abs(100-i)/50-1]]), verbose=0)[0]\n", " line.set_data(x,y)\n", " return (line,)\n", "\n", "anim = animation.FuncAnimation(fig, animate, init_func=init,\n", " frames=100, interval=20, blit=True)\n", "HTML(anim.to_html5_video())" ] }, { "cell_type": "code", "execution_count": null, "id": "20fbbbc9", "metadata": { "scrolled": true }, "outputs": [], "source": [ "i1, i2 = np.random.randint(0,100,2)\n", "while np.abs(test.__getitem__(i1)[1][0] - test.__getitem__(i2)[1][0]).sum() < 65:\n", " i1, i2 = np.random.randint(0,100,2)\n", "plt.plot(test.__getitem__(i1)[1][0])\n", "plt.plot(test.__getitem__(i2)[1][0])\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "fa334d08", "metadata": {}, "outputs": [], "source": [ "encoder.predict(test.__getitem__(i1)[1]), encoder.predict(test.__getitem__(i2)[1])" ] }, { "cell_type": "code", "execution_count": null, "id": "65a8181a", "metadata": {}, "outputs": [], "source": [ "np.linspace(encoder.predict(test.__getitem__(i1)[1]), encoder.predict(test.__getitem__(i2)[1]), 1000)" ] }, { "cell_type": "markdown", "id": "1b931a02", "metadata": {}, "source": [ "**Task 10**
Create an animation presenting a smooth transition between two curves selected in previous cells. After learning CNNs in the second part of this course try it with images." ] }, { "cell_type": "code", "execution_count": null, "id": "cef57e97", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "27c4fe50", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "2faa4d02", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "id": "f8670478", "metadata": {}, "source": [ "Useful materials:\n", "\n", "https://github.com/ageron/handson-ml2/blob/master/15_processing_sequences_using_rnns_and_cnns.ipynb" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 5 }