mirror of https://github.com/ArthurDanjou/handson-ml3.git
synced 2026-01-22 16:00:28 +01:00
Update chapter numbers after the SVM chapter goes online
@@ -4,14 +4,14 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "**Chapter 13 – Loading and Preprocessing Data with TensorFlow**"
+ "**Chapter 12 – Loading and Preprocessing Data with TensorFlow**"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "_This notebook contains all the sample code and solutions to the exercises in chapter 13._"
+ "_This notebook contains all the sample code and solutions to the exercises in chapter 12._"
 ]
 },
 {
@@ -1881,7 +1881,7 @@
 "\n",
 "## 9.\n",
 "### a.\n",
- "_Exercise: Load the Fashion MNIST dataset (introduced in Chapter 10); split it into a training set, a validation set, and a test set; shuffle the training set; and save each dataset to multiple TFRecord files. Each record should be a serialized `Example` protobuf with two features: the serialized image (use `tf.io.serialize_tensor()` to serialize each image), and the label. Note: for large images, you could use `tf.io.encode_jpeg()` instead. This would save a lot of space, but it would lose a bit of image quality._"
+ "_Exercise: Load the Fashion MNIST dataset (introduced in Chapter 9); split it into a training set, a validation set, and a test set; shuffle the training set; and save each dataset to multiple TFRecord files. Each record should be a serialized `Example` protobuf with two features: the serialized image (use `tf.io.serialize_tensor()` to serialize each image), and the label. Note: for large images, you could use `tf.io.encode_jpeg()` instead. This would save a lot of space, but it would lose a bit of image quality._"
 ]
 },
 {
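For context, the exercise comes down to wrapping each image and label in a serialized `Example` protobuf and spreading the records over several TFRecord files. A minimal sketch of that step, assuming a 5,000-image validation split, 10 shards per set, and an illustrative file-name pattern (none of which are necessarily the notebook's own choices):

```python
import numpy as np
import tensorflow as tf

# Load Fashion MNIST and carve a validation set out of the full training set
# (the 5,000-image split is an assumption, not necessarily the notebook's).
(X_train_full, y_train_full), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
X_train, y_train = X_train_full[:-5000], y_train_full[:-5000]
X_valid, y_valid = X_train_full[-5000:], y_train_full[-5000:]

# Shuffle the training set, as the exercise asks.
shuffle_idx = np.random.permutation(len(X_train))
X_train, y_train = X_train[shuffle_idx], y_train[shuffle_idx]

def make_example(image, label):
    # Serialize the image tensor to raw bytes, then wrap image + label in an
    # Example protobuf with the two features named in the exercise.
    image_data = tf.io.serialize_tensor(image)
    return tf.train.Example(features=tf.train.Features(feature={
        "image": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_data.numpy()])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[int(label)])),
    }))

def save_to_tfrecords(name, images, labels, n_shards=10):
    # Spread the serialized records across n_shards files, round-robin.
    paths = [f"{name}.tfrecord-{shard:05d}-of-{n_shards:05d}" for shard in range(n_shards)]
    writers = [tf.io.TFRecordWriter(path) for path in paths]
    for index, (image, label) in enumerate(zip(images, labels)):
        writers[index % n_shards].write(make_example(image, label).SerializeToString())
    for writer in writers:
        writer.close()
    return paths

save_to_tfrecords("fashion_mnist.train", X_train, y_train)
save_to_tfrecords("fashion_mnist.valid", X_valid, y_valid)
save_to_tfrecords("fashion_mnist.test", X_test, y_test)
```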
@@ -2407,7 +2407,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "Now we are ready to create the `TextVectorization` layer. Its constructor just saves the hyperparameters (`max_vocabulary_size` and `n_oov_buckets`). The `adapt()` method computes the vocabulary using the `get_vocabulary()` function, then it builds a `StaticVocabularyTable` (see Chapter 16 for more details). The `call()` method preprocesses the reviews to get a padded list of words for each review, then it uses the `StaticVocabularyTable` to lookup the index of each word in the vocabulary:"
+ "Now we are ready to create the `TextVectorization` layer. Its constructor just saves the hyperparameters (`max_vocabulary_size` and `n_oov_buckets`). The `adapt()` method computes the vocabulary using the `get_vocabulary()` function, then it builds a `StaticVocabularyTable` (see Chapter 15 for more details). The `call()` method preprocesses the reviews to get a padded list of words for each review, then it uses the `StaticVocabularyTable` to lookup the index of each word in the vocabulary:"
 ]
 },
 {
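For context, a rough sketch of a custom layer along these lines; the crude lowercase-and-split tokenization below stands in for the notebook's `preprocess()` and `get_vocabulary()` helpers, and the default sizes are illustrative only:

```python
import tensorflow as tf

class TextVectorization(tf.keras.layers.Layer):
    def __init__(self, max_vocabulary_size=1000, n_oov_buckets=100, **kwargs):
        # The constructor just stores the hyperparameters.
        super().__init__(**kwargs)
        self.max_vocabulary_size = max_vocabulary_size
        self.n_oov_buckets = n_oov_buckets

    def adapt(self, data_sample):
        # Build the vocabulary from a data sample, then freeze it in a
        # StaticVocabularyTable with extra hash buckets for unknown words.
        words = tf.strings.split(tf.strings.lower(data_sample)).flat_values
        vocab, _, counts = tf.unique_with_counts(words)
        top = tf.argsort(counts, direction="DESCENDING")[:self.max_vocabulary_size]
        self.vocab = tf.gather(vocab, top)
        word_ids = tf.range(tf.size(self.vocab, out_type=tf.int64), dtype=tf.int64)
        init = tf.lookup.KeyValueTensorInitializer(self.vocab, word_ids)
        self.table = tf.lookup.StaticVocabularyTable(init, self.n_oov_buckets)

    def call(self, inputs):
        # Tokenize each review, pad to a rectangular tensor, then map each
        # word to its vocabulary index (or an OOV bucket if it is unknown).
        # A real implementation would reserve a dedicated ID for the padding token.
        words = tf.strings.split(tf.strings.lower(inputs))
        return self.table.lookup(words.to_tensor(default_value=b"<pad>"))
```

Words that were not seen during `adapt()` are hashed into one of the `n_oov_buckets` extra IDs rather than being dropped.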
@@ -2620,7 +2620,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "We get about 73.5% accuracy on the validation set after just the first epoch, but after that the model makes no significant progress. We will do better in Chapter 16. For now the point is just to perform efficient preprocessing using `tf.data` and Keras preprocessing layers."
+ "We get about 73.5% accuracy on the validation set after just the first epoch, but after that the model makes no significant progress. We will do better in Chapter 15. For now the point is just to perform efficient preprocessing using `tf.data` and Keras preprocessing layers."
 ]
 },
 {
@@ -2628,7 +2628,7 @@
 "metadata": {},
 "source": [
 "### e.\n",
- "_Exercise: Add an `Embedding` layer and compute the mean embedding for each review, multiplied by the square root of the number of words (see Chapter 16). This rescaled mean embedding can then be passed to the rest of your model._"
+ "_Exercise: Add an `Embedding` layer and compute the mean embedding for each review, multiplied by the square root of the number of words (see Chapter 15). This rescaled mean embedding can then be passed to the rest of your model._"
 ]
 },
 {
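For context, a hedged sketch of that rescaled mean embedding; the vocabulary size, embedding dimension, dense layer sizes, and the assumption that padding positions use word ID 0 are illustrative choices, not the notebook's exact solution:

```python
import tensorflow as tf

embedding_size = 20
vocab_size = 1000 + 100  # assumed: max_vocabulary_size + n_oov_buckets

# Inputs are padded word-ID sequences (pad ID assumed to be 0), e.g. the
# output of a vectorization layer like the one sketched above.
word_ids = tf.keras.layers.Input(shape=[None], dtype=tf.int64)
embeddings = tf.keras.layers.Embedding(vocab_size, embedding_size)(word_ids)

def rescaled_mean_embedding(args):
    embeddings, word_ids = args
    mask = tf.cast(word_ids != 0, tf.float32)[..., tf.newaxis]  # 1 for real words, 0 for padding
    n_words = tf.reduce_sum(mask, axis=1)                       # shape [batch, 1]
    # Mean over the real words only, then rescale by the square root of the word count.
    mean = tf.reduce_sum(embeddings * mask, axis=1) / tf.maximum(n_words, 1.0)
    return mean * tf.sqrt(n_words)

mean_embedding = tf.keras.layers.Lambda(rescaled_mean_embedding)([embeddings, word_ids])
hidden = tf.keras.layers.Dense(100, activation="relu")(mean_embedding)
output = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)  # binary sentiment output
model = tf.keras.Model(inputs=[word_ids], outputs=[output])
```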
@@ -2735,7 +2735,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "The model is not better using embeddings (but we will do better in Chapter 16). The pipeline looks fast enough (we optimized it earlier)."
+ "The model is not better using embeddings (but we will do better in Chapter 15). The pipeline looks fast enough (we optimized it earlier)."
 ]
 },
 {