From cbfefe7a97ba07e836ffb5224bcca8af560b292d Mon Sep 17 00:00:00 2001
From: Ian Beauregard
Date: Tue, 6 Oct 2020 17:02:03 -0400
Subject: [PATCH 1/4] Change function argument

In Exercise 9, the function `mnist_dataset` was called with the wrong
arguments.
---
 13_loading_and_preprocessing_data.ipynb | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/13_loading_and_preprocessing_data.ipynb b/13_loading_and_preprocessing_data.ipynb
index 144b216..8561f33 100644
--- a/13_loading_and_preprocessing_data.ipynb
+++ b/13_loading_and_preprocessing_data.ipynb
@@ -2040,8 +2040,8 @@
    "outputs": [],
    "source": [
     "train_set = mnist_dataset(train_filepaths, shuffle_buffer_size=60000)\n",
-    "valid_set = mnist_dataset(train_filepaths)\n",
-    "test_set = mnist_dataset(train_filepaths)"
+    "valid_set = mnist_dataset(valid_filepaths)\n",
+    "test_set = mnist_dataset(test_filepaths)"
    ]
   },
   {

From c3cbfd04d5e80ec88e95cd5b1e3c0829bb760d3f Mon Sep 17 00:00:00 2001
From: Ian Beauregard
Date: Tue, 6 Oct 2020 17:51:42 -0400
Subject: [PATCH 2/4] Add two missing words

---
 13_loading_and_preprocessing_data.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/13_loading_and_preprocessing_data.ipynb b/13_loading_and_preprocessing_data.ipynb
index 8561f33..4e9a935 100644
--- a/13_loading_and_preprocessing_data.ipynb
+++ b/13_loading_and_preprocessing_data.ipynb
@@ -2274,7 +2274,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "But let's pretend the dataset does not fit in memory, just to make things more interesting. Luckily, each review fits on just one line (they use `<br />` to indicate line breaks), so we can read the reviews using a `TextLineDataset`. If they didn't we would have to preprocess the input files (e.g., converting them to TFRecords). For very large datasets, it would make sense a tool like Apache Beam for that."
+    "But let's pretend the dataset does not fit in memory, just to make things more interesting. Luckily, each review fits on just one line (they use `<br />` to indicate line breaks), so we can read the reviews using a `TextLineDataset`. If they didn't we would have to preprocess the input files (e.g., converting them to TFRecords). For very large datasets, it would make sense to use a tool like Apache Beam for that."
    ]
   },
   {

From a83d4885dce9bd247bbe384f92f3e22df03e9b27 Mon Sep 17 00:00:00 2001
From: Ian Beauregard
Date: Tue, 6 Oct 2020 18:51:06 -0400
Subject: [PATCH 3/4] Correct a small typo

One missing word.
---
 13_loading_and_preprocessing_data.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/13_loading_and_preprocessing_data.ipynb b/13_loading_and_preprocessing_data.ipynb
index 4e9a935..c258b82 100644
--- a/13_loading_and_preprocessing_data.ipynb
+++ b/13_loading_and_preprocessing_data.ipynb
@@ -2473,7 +2473,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Let's run it on the same `X_example`, just to make sure the word IDs are larger now, since the vocabulary bigger:"
+    "Let's run it on the same `X_example`, just to make sure the word IDs are larger now, since the vocabulary is bigger:"
    ]
   },
   {

From 08e387005399bba46e5b1aa605e467f47ef50272 Mon Sep 17 00:00:00 2001
From: Ian Beauregard
Date: Tue, 6 Oct 2020 19:20:18 -0400
Subject: [PATCH 4/4] Correct a small "code typo"

---
 13_loading_and_preprocessing_data.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/13_loading_and_preprocessing_data.ipynb b/13_loading_and_preprocessing_data.ipynb
index c258b82..7d7ff12 100644
--- a/13_loading_and_preprocessing_data.ipynb
+++ b/13_loading_and_preprocessing_data.ipynb
@@ -2540,7 +2540,7 @@
    "source": [
     "class BagOfWords(keras.layers.Layer):\n",
     "    def __init__(self, n_tokens, dtype=tf.int32, **kwargs):\n",
-    "        super().__init__(dtype=tf.int32, **kwargs)\n",
+    "        super().__init__(dtype=dtype, **kwargs)\n",
     "        self.n_tokens = n_tokens\n",
     "    def call(self, inputs):\n",
     "        one_hot = tf.one_hot(inputs, self.n_tokens)\n",
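
A note on PATCH 4/4: the fix matters because `super().__init__(dtype=tf.int32, **kwargs)` silently discarded whatever `dtype` the caller passed in, so `BagOfWords(n_tokens, dtype=tf.float32)` still ended up as an int32 layer. Below is a minimal standalone sketch of the corrected layer. The hunk's context ends inside `call`, so the closing `reduce_sum` line and the example values are assumptions, not code quoted from the notebook:

    import tensorflow as tf
    from tensorflow import keras

    class BagOfWords(keras.layers.Layer):
        def __init__(self, n_tokens, dtype=tf.int32, **kwargs):
            # Forward the caller's dtype instead of hard-coding tf.int32.
            super().__init__(dtype=dtype, **kwargs)
            self.n_tokens = n_tokens

        def call(self, inputs):
            # One-hot encode token IDs: (batch, seq_len) -> (batch, seq_len, n_tokens).
            one_hot = tf.one_hot(inputs, self.n_tokens)
            # Assumed completion: sum over the sequence axis to get per-token
            # counts, dropping column 0 (the padding token).
            return tf.reduce_sum(one_hot[:, :, 1:], axis=1)

    bow = BagOfWords(n_tokens=4, dtype=tf.float32)
    print(bow.dtype)  # 'float32'; the buggy version always reported 'int32'
    print(bow(tf.constant([[1, 2, 2, 0]])))  # counts for tokens 1-3: [[1., 2., 0.]]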
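
A note on PATCH 2/4: the corrected sentence relies on each review sitting on a single line (hence the literal `<br />` markers inside the reviews), which is what lets the notebook stream the files with `tf.data.TextLineDataset` instead of loading them into memory. A minimal sketch under that assumption; the file paths are hypothetical, not taken from the notebook:

    import tensorflow as tf

    # One review per line; TextLineDataset yields the lines of each file in turn.
    filepaths = ["reviews_part_0.txt", "reviews_part_1.txt"]  # hypothetical paths
    dataset = tf.data.TextLineDataset(filepaths).batch(32).prefetch(1)
    for batch in dataset.take(1):
        print(batch.shape)  # (32,): a batch of raw review strings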