mirror of https://github.com/ArthurDanjou/handson-ml3.git (synced 2026-01-14 12:14:36 +01:00)
Improve alignment between notebook and book section headers
@@ -84,14 +84,16 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Large margin classification"
+"# Linear SVM Classification"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The next few code cells generate the first figures in chapter 5. The first actual code sample comes after:"
+"The next few code cells generate the first figures in chapter 5. The first actual code sample comes after.\n",
+"\n",
+"**Code to generate Figure 5–1. Large margin classification**"
 ]
 },
 {
@@ -175,7 +177,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Sensitivity to feature scales"
+"**Code to generate Figure 5–2. Sensitivity to feature scales**"
 ]
 },
 {
@@ -220,7 +222,8 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Sensitivity to outliers"
+"## Soft Margin Classification\n",
+"**Code to generate Figure 5–3. Hard margin sensitivity to outliers**"
 ]
 },
 {
@@ -278,14 +281,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Large margin *vs* margin violations"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"This is the first code example in chapter 5:"
+"**This is the first code example in chapter 5:**"
 ]
 },
 {
@@ -325,7 +321,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now let's generate the graph comparing different regularization settings:"
+"**Code to generate Figure 5–4. Large margin versus fewer margin violations**"
 ]
 },
 {
@@ -408,7 +404,14 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Non-linear classification"
+"# Nonlinear SVM Classification"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Code to generate Figure 5–5. Adding features to make a dataset linearly separable**"
 ]
 },
 {
@@ -471,6 +474,13 @@
 "plt.show()"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Here is second code example in the chapter:**"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 13,
@@ -490,6 +500,13 @@
 "polynomial_svm_clf.fit(X, y)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Code to generate Figure 5–6. Linear SVM classifier using polynomial features**"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 14,
@@ -513,6 +530,20 @@
 "plt.show()"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Polynomial Kernel"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Next code example:**"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 15,
@@ -528,6 +559,13 @@
 "poly_kernel_svm_clf.fit(X, y)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Code to generate Figure 5–7. SVM classifiers with a polynomial kernel**"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 16,
@@ -564,6 +602,20 @@
 "plt.show()"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Similarity Features"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Code to generate Figure 5–8. Similarity features using the Gaussian RBF**"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 18,
@@ -644,6 +696,20 @@
 " print(\"Phi({}, {}) = {}\".format(x1_example, landmark, k))"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Gaussian RBF Kernel"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Next code example:**"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 20,
@@ -657,6 +723,13 @@
 "rbf_kernel_svm_clf.fit(X, y)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Code to generate Figure 5–9. SVM classifiers using an RBF kernel**"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 21,
@@ -701,7 +774,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Regression\n"
+"# SVM Regression"
 ]
 },
 {
@@ -716,6 +789,13 @@
 "y = (4 + 3 * X + np.random.randn(m, 1)).ravel()"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Next code example:**"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 23,
@@ -728,6 +808,13 @@
 "svm_reg.fit(X, y)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Code to generate Figure 5–10. SVM Regression**"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 24,
@@ -807,6 +894,13 @@
 "**Note**: to be future-proof, we set `gamma=\"scale\"`, as this will be the default value in Scikit-Learn 0.22."
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Next code example:**"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 27,
@@ -819,6 +913,13 @@
 "svm_poly_reg.fit(X, y)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Code to generate Figure 5–11. SVM Regression using a second-degree polynomial kernel**"
+]
+},
 {
 "cell_type": "code",
 "execution_count": 28,
@@ -855,7 +956,15 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Under the hood"
+"# Under the Hood\n",
+"## Decision Function and Predictions"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Code to generate Figure 5–12. Decision function for the iris dataset**"
 ]
 },
 {
@@ -917,7 +1026,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Small weight vector results in a large margin"
+"**Code to generate Figure 5–13. A smaller weight vector results in a larger margin**"
 ]
 },
 {
@@ -976,7 +1085,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Hinge loss"
+"**Code to generate the Hinge Loss figure:**"
 ]
 },
 {