Update chapter numbers after the SVM chapter goes online

Aurélien Geron
2021-10-15 22:18:08 +13:00
parent ce4fccf74c
commit a655f25a65
15 changed files with 41 additions and 41 deletions


@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 15 Processing Sequences Using RNNs and CNNs**"
"**Chapter 14 Processing Sequences Using RNNs and CNNs**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 15._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 14._"
]
},
{
@@ -1778,7 +1778,7 @@
"source": [
"Now let's create the model:\n",
"\n",
"* We could feed the note values directly to the model, as floats, but this would probably not give good results. Indeed, the relationships between notes are not that simple: for example, if you replace a C3 with a C4, the melody will still sound fine, even though these notes are 12 semi-tones apart (i.e., one octave). Conversely, if you replace a C3 with a C\\#3, it's very likely that the chord will sound horrible, despite these notes being just next to each other. So we will use an `Embedding` layer to convert each note to a small vector representation (see Chapter 16 for more details on embeddings). We will use 5-dimensional embeddings, so the output of this first layer will have a shape of `[batch_size, window_size, 5]`.\n",
"* We could feed the note values directly to the model, as floats, but this would probably not give good results. Indeed, the relationships between notes are not that simple: for example, if you replace a C3 with a C4, the melody will still sound fine, even though these notes are 12 semi-tones apart (i.e., one octave). Conversely, if you replace a C3 with a C\\#3, it's very likely that the chord will sound horrible, despite these notes being just next to each other. So we will use an `Embedding` layer to convert each note to a small vector representation (see Chapter 15 for more details on embeddings). We will use 5-dimensional embeddings, so the output of this first layer will have a shape of `[batch_size, window_size, 5]`.\n",
"* We will then feed this data to a small WaveNet-like neural network, composed of a stack of 4 `Conv1D` layers with doubling dilation rates. We will intersperse these layers with `BatchNormalization` layers for faster better convergence.\n",
"* Then one `LSTM` layer to try to capture long-term patterns.\n",
"* And finally a `Dense` layer to produce the final note probabilities. It will predict one probability for each chorale in the batch, for each time step, and for each possible note (including silence). So the output shape will be `[batch_size, window_size, 47]`."