{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Putting it all together: Analyzing a Toy Dataset\n",
"\n",
"In this example, we're working with an artificial dataset from a production process, where a small fraction of the produced products are faulty. The task is to predict, from the conditions under which a product is produced, whether the product will be ok or scrap."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# first load some libraries that are needed later\n",
"import numpy as np\n",
"import pandas as pd\n",
"import matplotlib.pyplot as plt\n",
"from scipy.stats import pearsonr\n",
"# machine learning stuff\n",
"from sklearn.metrics import accuracy_score, balanced_accuracy_score\n",
"from sklearn.preprocessing import OneHotEncoder, StandardScaler\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import GridSearchCV, train_test_split\n",
"from sklearn import tree\n",
"# interactive plotting (parallel coordinate plot)\n",
"import plotly.express as px\n",
"# suppress unnecessary warnings\n",
"import warnings\n",
"warnings.simplefilter(action='ignore', category=FutureWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading the data\n",
"\n",
"The data is available as a `.csv` file (short for \"comma-separated values\"), i.e., a plain text file with one data point per row. You can export this kind of format from Excel (thereby making the data easier to share) and then read it in with the `pandas` library.\n",
"\n",
"The toy dataset consists of production data for 3 different types of products. The variables in the dataset are:\n",
"- `height`, `width`, `depth`: dimensions of the product\n",
"- `product`: categorical variable with values `1`, `5`, or `17` depending on the type of product that was produced\n",
"- `faulty`: binary variable that indicates if the produced product is faulty (`1`) or ok (`0`)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# we are given the dataset toydata1.csv\n",
"# load the csv file into a dataframe with pandas\n",
"df = pd.read_csv(\"../data/toydata1.csv\")\n",
"# look at the raw data (first 5 rows)\n",
"df.head()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# more concise overview (e.g. how many values per column, mean of the values in each column, etc)\n",
"df.describe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exploratory Analysis\n",
"\n",
"To get a better feeling for what we're dealing with here, we examine the different variables in more detail.\n",
"\n",
"- Do we have an equal amount of samples for each of the three product types or is one of the subgroups underrepresented?\n",
"- In what ranges are the features and are there differences amongst the three products?\n",
"- Are there correlations between the variables?\n",
"- Can we already identify some variables that tell us that a product is faulty?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# plot histograms for the different variables\n",
"df.hist(bins=50, layout=(1, 5), figsize=(15, 2));"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"These histograms show the distribution of the values for each variable, i.e., on the x-axis you see the range of values and on the y-axis how many samples have a value in the respective interval.\n",
"\n",
"**Take a second to examine these histograms - what do they already tell you?**\n",
"- Do we have to worry about underrepresented subgroups due to the different product types?\n",
"- Where might the 3 peaks in the distribution of the depth variable come from?\n",
"- What do you notice about the height and width variables?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# verify counts for the categorical variable\n",
"df[\"product\"].value_counts()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# see if the variation in the depth variable is related to the different product types\n",
"plt.figure()\n",
"colors = [\"r\", \"b\", \"g\"]\n",
"# plot one histogram per product type using different colors\n",
"for i, prod in enumerate(sorted(df[\"product\"].unique())):\n",
"    plt.hist(df[\"depth\"][df[\"product\"] == prod], bins=20, color=colors[i], alpha=0.7, label=f\"product {prod}\")\n",
"plt.legend()\n",
"plt.xlabel(\"depth\");"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# look at the correlation matrix to see the correlations between all variables\n",
"# for more info on what these numbers mean see here: https://en.wikipedia.org/wiki/Correlation_and_dependence\n",
"corr_mat = df.corr()\n",
"# uncomment the part below to see the table in color\n",
"corr_mat #.style.background_gradient(cmap='coolwarm', axis=None).set_precision(2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We've already seen that the depth variable and product variable are connected, which explains their high correlation. The height and width variables also show a fairly high correlation of 0.72 and we had already seen that they also have very similar looking histograms, so let's investigate this further."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# examine the correlation between height and width in more detail with a scatter plot\n",
"plt.figure(figsize=(5.5, 5))\n",
"plt.scatter(df[\"height\"], df[\"width\"], alpha=0.3)\n",
"plt.xlabel(\"height\")\n",
"plt.ylabel(\"width\")\n",
"plt.title(f\"Correlation: {pearsonr(df['height'], df['width'])[0]:.3f}\"); # just compute the same correlation again"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Questions:**\n",
"- If all that someone had told you was that two variables have a linear correlation of 0.7, is this the scatter plot that you would have imagined for the two variables? (You might also want to look at the Wikipedia article again for some other example plots)\n",
"- Why is the correlation coefficient for these two variables so large?\n",
"- What would you expect the correlation coefficient to be if you only consider the large blob in the middle (i.e., ignore the points at (0, 0))?\n",
"\n",
"In reality, it often happens that two variables seem to be perfectly correlated (i.e., they have a correlation coefficient of (almost) 1), but when you look closer, this is just due to the fact that, for example, two sensors are off at the same time, while for the part where they're on, they actually aren't giving redundant values. Therefore be careful before throwing away \"redundant\" variables and always verify the correlation with a scatter plot!"
]
},
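{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see how such a spurious correlation can arise, here is a small synthetic sketch (independent of our toy dataset): two unrelated sensor readings appear almost perfectly correlated once a shared \"sensor off\" state, recorded as 0, is added to both - and the correlation largely disappears when those dropout points are removed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# synthetic illustration of a correlation that is only caused by shared sensor dropouts\n",
"rng = np.random.default_rng(0)\n",
"a = rng.normal(10, 1, 500)  # sensor A, independent noise\n",
"b = rng.normal(10, 1, 500)  # sensor B, independent noise\n",
"off = rng.random(500) < 0.2  # assume both sensors are off ~20% of the time\n",
"a[off], b[off] = 0.0, 0.0  # dropout recorded as 'impossible' zeros\n",
"print(f\"with dropout points:    {pearsonr(a, b)[0]:.3f}\")\n",
"print(f\"without dropout points: {pearsonr(a[~off], b[~off])[0]:.3f}\")"
]
},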
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# now check if these variables already give a hint on how to identify the faulty products\n",
"# (they both also had a fairly high negative correlation with the faulty variable)\n",
"plt.figure()\n",
"plt.scatter(df[\"height\"], df[\"width\"], c=df[\"faulty\"], alpha=0.3) # color the points based on the faulty variable\n",
"plt.xlabel(\"height\")\n",
"plt.ylabel(\"width\")\n",
"plt.colorbar()\n",
"# and check what the correlation coefficient is without the (0, 0) points\n",
"plt.title(f\"Correlation: {pearsonr(df['height'][df['height'] > 0], df['width'][df['width'] > 0])[0]:.3f}\");"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Clearly, not all faulty products are equal: some are within the \"regular\" data (i.e., the purple points), while some are outliers at (0, 0).\n",
"\n",
"The department that gave us the data tells us that the points where height=width=0 are products where something went wrong during production and the process was aborted. However, instead of marking the respective values as `NaN`, this was recorded by setting some of the variables to \"impossible\" values. Real data is just messy like that."
]
},
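{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we wanted to treat these aborted runs as proper missing values rather than magic zeros, a minimal sketch could look like the following (it simply assumes that height = width = 0 always marks an aborted run, as the department told us):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a minimal sketch of recoding the 'impossible' zeros as proper missing values\n",
"df_nan = df.copy()\n",
"aborted = (df_nan[\"height\"] == 0) & (df_nan[\"width\"] == 0)\n",
"df_nan.loc[aborted, [\"height\", \"width\"]] = np.nan\n",
"print(f\"aborted runs: {aborted.sum()} of {len(df_nan)}\")\n",
"df_nan.describe()  # note how count/mean for height and width change once the zeros are excluded"
]
},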
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# make an interactive parallel coordinate plot \n",
"# (make sure you're using a modern browser for this, i.e., not the Internet Explorer!)\n",
"# (works with pandas as well, but doesn't look that great: pd.plotting.parallel_coordinates(df, \"faulty\"))\n",
"fig = px.parallel_coordinates(df, color=\"faulty\")\n",
"fig\n",
"# each line corresponds to one sample, where the individual values for each variable are marked\n",
"# at the respective axis and then these values are connected by a line\n",
"# -> you can select parts of the samples by clicking and dragging the mouse over one of the axes (when you see a cross)\n",
"# e.g., try to select only those samples that do not have a height and width of 0\n",
"# (a click on the selection removes it again, you can also drag the axes to change their order)\n",
"# do you notice any patterns?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Supervised Learning\n",
"\n",
"Now that we've become more familiar with the dataset, it's time to tackle the real task, i.e., to try to predict whether a product will be faulty. This is a classification problem (each product either belongs to the class \"faulty\" or the class \"ok\", there is no in between)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# \"product\" is a categorical variable; for it to be handled correctly,\n",
"# we have to transform it into a one-hot encoded vector\n",
"# (note: in newer scikit-learn versions the argument is called sparse_output instead of sparse)\n",
"e = OneHotEncoder(sparse=False, categories='auto')\n",
"ohe = e.fit_transform(df[\"product\"].to_numpy()[:, None])\n",
"df = df.join(pd.DataFrame(ohe, columns=[f\"product_{i}\" for i in e.categories_[0]], index=df.index))\n",
"df.head() # notice the additional columns with zeros and a one"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# from the dataframe we now extract our features ...\n",
"feature_cols = [\"product_1\", \"product_5\", \"product_17\", \"height\", \"width\", \"depth\"]\n",
"X = df[feature_cols].to_numpy() # convert df into a numpy array\n",
"# ... and the vector with labels\n",
"y = df[\"faulty\"].to_numpy()\n",
"# to evaluate our prediction model, we need to split off a test dataset\n",
"# later we will use the train_test_split function from sklearn to do this, \n",
"# but this just goes to show that there is no magic behind it\n",
"np.random.seed(10)\n",
"idx = np.random.permutation(len(df)) # shuffled range of values from 0 to len(df)\n",
"train_idx = idx[:2000] # 2/3 of the samples are in the training set\n",
"test_idx = idx[2000:]\n",
"X_train = X[train_idx] # pick out the rows from X corresponding to these indices\n",
"X_test = X[test_idx]\n",
"y_train = y[train_idx]\n",
"y_test = y[test_idx]"
]
},
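{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, here is a minimal sketch of the same kind of split done with scikit-learn's `train_test_split`; the `stratify` argument is an optional extra that keeps the faulty/ok ratio roughly equal in both parts (we use new variable names so the manual split from above stays untouched):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# the same kind of split with sklearn (new variable names so the manual split above is not overwritten)\n",
"# stratify=y is optional, but keeps the class ratio roughly the same in both sets\n",
"X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1 - 2000 / len(df), stratify=y, random_state=10)\n",
"print(X_tr.shape, X_te.shape)\n",
"print(f\"Fraction of ok items: {1 - y_tr.mean():.3f} (train), {1 - y_te.mean():.3f} (test)\")"
]
},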
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# see how imbalanced the label distribution in the training and test sets is\n",
"print(f\"Fraction of ok items in training set: {1-np.mean(y_train):.3f}\")\n",
"print(f\"Fraction of ok items in test set: {1-np.mean(y_test):.3f}\")\n",
"# and check the (balanced) accuracy for a stupid baseline model that always predicts zeros\n",
"# (notice how the value for the accuracy is the same as the fraction of ok items above)\n",
"print(\"----- Stupid baseline (always predict 'ok'): -----\")\n",
"print(f\"Accuracy on training data: {accuracy_score(y_train, np.zeros_like(y_train)):.3f}\")\n",
"print(f\"Accuracy on test data: {accuracy_score(y_test, np.zeros_like(y_test)):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, np.zeros_like(y_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, np.zeros_like(y_test)):.3f}\")\n",
"# since we have a very unbalanced class distribution in this dataset, the balanced accuracy\n",
"# is the evaluation metric that we actually care about"
]
},
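{
"cell_type": "markdown",
"metadata": {},
"source": [
"Balanced accuracy is just the recall (fraction of correctly identified samples) averaged over the classes, which is why the all-zeros baseline lands exactly at 0.5. A minimal sketch of that computation:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# balanced accuracy by hand: average the per-class recall over both classes\n",
"y_pred = np.zeros_like(y_test)  # the stupid baseline's predictions\n",
"recalls = [float(np.mean(y_pred[y_test == c] == c)) for c in np.unique(y_test)]\n",
"print(f\"recall per class: {recalls}\")\n",
"print(f\"balanced accuracy: {np.mean(recalls):.3f}  (same as the balanced_accuracy_score above)\")"
]
},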
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# let's try a (shallow) decision tree!\n",
"clf = tree.DecisionTreeClassifier(max_depth=2, random_state=1)\n",
"clf = clf.fit(X_train, y_train)\n",
"# same evaluation as for the stupid baseline above\n",
"print(f\"Accuracy on training data: {clf.score(X_train, y_train):.3f}\")\n",
"print(f\"Accuracy on test data: {clf.score(X_test, y_test):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, clf.predict(X_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, clf.predict(X_test)):.3f}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Questions:** \\\n",
"Have a look at the values for (balanced) accuracy and compare them to the scores obtained with the stupid baseline: Do you think we're on the right track, i.e., does this seem like a useful model?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# now plot the tree\n",
"tree.plot_tree(clf, feature_names=feature_cols, filled=True, class_names=np.array(clf.classes_, dtype=str), proportion=True);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The decision tree has its root at the top (where you start) and the leaves (i.e., those nodes that don't branch off anymore) at the bottom (where you stop and make the final prediction). Each node in the tree shows:\n",
"- the variable based on which the next split is made, incl. the threshold value (except for leaf nodes),\n",
"- the current Gini impurity, i.e., how homogeneous the labels of all the samples that ended up in this node are (this is what the decision tree internally optimizes - notice how the value gets smaller on at least one side after a split),\n",
"- the fraction of samples that ended up in this node,\n",
"- the distribution of these samples over the different classes,\n",
"- and the class that would be predicted for a sample at this point.\n",
"\n",
"**Questions:** \\\n",
"Have a look at the tree and the decisions that are made in it: What has the decision tree actually learned, i.e., which samples does it classify as faulty and which as ok? Does this model help us on our quest to identify production conditions that result in faulty products?"
]
},
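{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick reminder of the criterion the tree optimizes: for class fractions $p_k$ in a node, the Gini impurity is $1 - \\sum_k p_k^2$ (0 for a pure node, at most 0.5 for two classes). A minimal sketch that computes it for the root node, i.e., for the class distribution of the whole training set:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Gini impurity of the root node: 1 - sum_k p_k^2 over the class fractions p_k\n",
"p = np.bincount(y_train.astype(int)) / len(y_train)  # fractions of 'ok' and 'faulty' samples\n",
"print(f\"class fractions: {p}\")\n",
"print(f\"Gini impurity of the root node: {1 - np.sum(p**2):.3f}\")  # should match the value shown at the top of the tree"
]
},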
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# let's do what we probably should have done in the beginning and \n",
"# remove the outliers (i.e., keep only samples with a height > 0)\n",
"df_new = df[df[\"height\"] > 0.]\n",
"# create a train/test split again, this time using the sklearn function\n",
"X_train, X_test, y_train, y_test = train_test_split(df_new[feature_cols].to_numpy(), \n",
"                                                    df_new[\"faulty\"].to_numpy(), \n",
"                                                    test_size=0.33, random_state=15)\n",
"# see how imbalanced the label distribution in the training and test sets is\n",
"print(f\"Fraction of ok items in training set: {1-np.mean(y_train):.3f}\")\n",
"print(f\"Fraction of ok items in test set: {1-np.mean(y_test):.3f}\")\n",
"# and what the stupid baseline is now (since we've removed only 'faulty' points, \n",
"# the class distributions are even more unbalanced and the accuracy even higher)\n",
"print(\"----- Stupid baseline (always predict 'ok'): -----\")\n",
"print(f\"Accuracy on training data: {accuracy_score(y_train, np.zeros_like(y_train)):.3f}\")\n",
"print(f\"Accuracy on test data: {accuracy_score(y_test, np.zeros_like(y_test)):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, np.zeros_like(y_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, np.zeros_like(y_test)):.3f}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# decision tree on data without outliers\n",
"clf = tree.DecisionTreeClassifier(max_depth=3, random_state=1)\n",
"clf = clf.fit(X_train, y_train)\n",
"print(f\"Accuracy on training data: {clf.score(X_train, y_train):.3f}\")\n",
"print(f\"Accuracy on test data: {clf.score(X_test, y_test):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, clf.predict(X_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, clf.predict(X_test)):.3f}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# plot the tree again\n",
"plt.figure(figsize=(15, 10))\n",
"tree.plot_tree(clf, feature_names=feature_cols, filled=True, class_names=np.array(clf.classes_, dtype=str), proportion=True);\n",
"# notice how in the leaf nodes where the tree predicts \"faulty\", there are only very few data points"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Questions:** \\\n",
"What do you think of the model now?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# maybe we just need to give the tree the freedom to make more splits? (i.e., increase its depth)\n",
"clf = tree.DecisionTreeClassifier(max_depth=100, random_state=1)\n",
"clf = clf.fit(X_train, y_train)\n",
"print(f\"Accuracy on training data: {clf.score(X_train, y_train):.3f}\")\n",
"print(f\"Accuracy on test data: {clf.score(X_test, y_test):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, clf.predict(X_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, clf.predict(X_test)):.3f}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Questions:** \\\n",
"Is this a better model? If anything, is the model over- or underfitting?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# when the tree is too large (or you're using a random forest),\n",
"# check the feature importances instead of plotting the tree\n",
"dict(zip(feature_cols, clf.feature_importances_))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Garbage in, garbage out...\n",
"\n",
"Clearly, we're missing some important information, as we are unable to identify the non-outlier faulty products. In other words, we need more data (not necessarily more samples, but certainly more features).\n",
"\n",
"So we go back to the person who gave us the data and ask if they have an idea what else might be causing the products to break and if there are additional sensor measurements available that we could look at. They give us a new dataset `toydata2.csv`, which additionally contains the variable `temp`, which indicates the temperature at which a product was produced."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# load this new data\n",
"df = pd.read_csv(\"../data/toydata2.csv\")\n",
"df.head() # same as before just an additional column"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# look at the variables again -> just like depth, temp has 3 peaks in the distribution\n",
"df.hist(bins=50, layout=(1,6), figsize=(15,2));"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# see if the variations in temp are indeed related to the different products\n",
"plt.figure()\n",
"colors = [\"r\", \"b\", \"g\"]\n",
"for i, prod in enumerate(sorted(df[\"product\"].unique())):\n",
"    plt.hist(df[\"temp\"][df[\"product\"] == prod], bins=20, color=colors[i], alpha=0.7, label=f\"product {prod}\")\n",
"plt.legend()\n",
"plt.xlabel(\"temp\");"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# make another interactive parallel coordinates plot\n",
"columns = [\"height\", \"width\", \"depth\", \"product\", \"temp\", \"faulty\"]\n",
"fig = px.parallel_coordinates(df[columns], color=\"temp\")\n",
"fig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By clicking and dragging on the different axes, select the data such that you remove the outliers (i.e., keep only samples with height/width > 0) and then select the faulty products (i.e., with faulty = 1).\n",
"\n",
"**Questions:** \\\n",
"Do you notice any patterns? How would you explain to the stakeholders why some of their products are faulty?\n",
"\n",
"(In this case, we can derive the relevant insights already from the plot. However, in real problems, the solution is usually not this obvious, so let's try to see how we could also solve this with ML.)"
]
},
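{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you prefer numbers over plots, a small sketch like the following cross-tabulates the fault rate by product type and temperature range (the binning into 5 temperature intervals is an arbitrary choice, just for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# quantify the visual impression: fault rate per product type and temperature bin\n",
"no_outliers = df[df[\"height\"] > 0.]\n",
"temp_bins = pd.cut(no_outliers[\"temp\"], bins=5)  # arbitrary, equally wide temperature intervals\n",
"no_outliers.groupby([\"product\", temp_bins], observed=True)[\"faulty\"].mean().unstack()"
]
},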
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# transform the categorical column again - this time using pandas directly\n",
"df = pd.concat([df, pd.get_dummies(df[\"product\"], prefix=\"product\")], axis=1)\n",
"df.head()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# remove outliers again\n",
"df_new = df[df[\"height\"] > 0.]\n",
"# let's try with temp as an additional feature\n",
"feature_cols = [\"product_1\", \"product_5\", \"product_17\", \"height\", \"width\", \"depth\", \"temp\"]\n",
"X = df_new[feature_cols].to_numpy()\n",
"y = df_new[\"faulty\"].to_numpy()\n",
"# split into train/test sets again\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=15)\n",
"# see how imbalanced the label distribution in the training and test sets is\n",
"print(f\"Fraction of ok items in training set: {1-np.mean(y_train):.3f}\")\n",
"print(f\"Fraction of ok items in test set: {1-np.mean(y_test):.3f}\")\n",
"# and check the stupid baseline again (this is the same as before since the data contains the same samples)\n",
"print(\"----- Stupid baseline (always predict 'ok'): -----\")\n",
"print(f\"Accuracy on training data: {accuracy_score(y_train, np.zeros_like(y_train)):.3f}\")\n",
"print(f\"Accuracy on test data: {accuracy_score(y_test, np.zeros_like(y_test)):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, np.zeros_like(y_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, np.zeros_like(y_test)):.3f}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# train a decision tree again (the parameters here were set as an initial guess\n",
"# based on our understanding of the problem as well as the decision tree model)\n",
"clf = tree.DecisionTreeClassifier(max_depth=6, min_samples_leaf=50, class_weight=\"balanced\", random_state=1)\n",
"clf = clf.fit(X_train, y_train)\n",
"print(f\"Accuracy on training data: {clf.score(X_train, y_train):.3f}\")\n",
"print(f\"Accuracy on test data: {clf.score(X_test, y_test):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, clf.predict(X_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, clf.predict(X_test)):.3f}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Questions:** \\\n",
"What do you think of the model now? If anything, is the model over- or underfitting?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# plot the tree\n",
"plt.figure(figsize=(20, 15))\n",
"tree.plot_tree(clf, feature_names=feature_cols, filled=True, class_names=np.array(clf.classes_, dtype=str), proportion=True);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the tree is quite big and therefore also more tedious to interpret. Additionally, we see that many of the splits right before the leaf nodes are made without any change in the predicted class (e.g., all the nodes remain orange). This happens because the tree itself only cares about the Gini impurity, which indeed still decreases after these splits. However, since this is not helpful for us, let's prune the tree by cutting off these unnecessary splits, which can be done by setting the parameter `ccp_alpha`."
]
},
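{
"cell_type": "markdown",
"metadata": {},
"source": [
"The value for `ccp_alpha` below is simply picked by hand; for a more principled starting point, scikit-learn can list the candidate alpha values for a given tree configuration (a sketch - the exact values depend on the data and the other tree parameters, and you would normally pick among them via cross-validation):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# candidate ccp_alpha values from minimal cost-complexity pruning\n",
"base = tree.DecisionTreeClassifier(max_depth=6, min_samples_leaf=50, class_weight=\"balanced\", random_state=1)\n",
"path = base.cost_complexity_pruning_path(X_train, y_train)\n",
"print(path.ccp_alphas)  # pruning gets more aggressive as alpha increases"
]
},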
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# prune the tree by setting ccp_alpha\n",
"clf = tree.DecisionTreeClassifier(max_depth=6, min_samples_leaf=50, class_weight=\"balanced\", ccp_alpha=0.01, random_state=1)\n",
"clf = clf.fit(X_train, y_train)\n",
"print(f\"Accuracy on training data: {clf.score(X_train, y_train):.3f}\")\n",
"print(f\"Accuracy on test data: {clf.score(X_test, y_test):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, clf.predict(X_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, clf.predict(X_test)):.3f}\")\n",
"# plot the graph\n",
"plt.figure(figsize=(15, 10))\n",
"tree.plot_tree(clf, feature_names=feature_cols, filled=True, class_names=np.array(clf.classes_, dtype=str), proportion=True);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice how the (balanced) accuracy stayed the same after the pruning.\n",
"\n",
"=> Look at this pruned tree and understand which decisions are made (e.g., manually make the same splits on the parallel coordinates plot), i.e., verify that the tree is reaching the same conclusion as we did before."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Hyperparameter Tuning\n",
"\n",
"We started out with some initial hyperparameter settings for the decision tree, which already gave us quite good results. However, let's see if we can do even better by systematically testing different hyperparameter combinations, i.e., use a grid search with cross-validation to find an optimal value for `max_depth` and `min_samples_leaf`."
]
},
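{
"cell_type": "markdown",
"metadata": {},
"source": [
"To demystify the grid search a bit before we use it: conceptually, it just loops over all parameter combinations and cross-validates each one on the training set, roughly like the following sketch (`GridSearchCV` additionally parallelizes this and refits the best model for us):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# what GridSearchCV does conceptually: cross-validate every parameter combination on the training data\n",
"# (a smaller grid than below, just for illustration)\n",
"from sklearn.model_selection import cross_val_score\n",
"\n",
"results = {}\n",
"for depth in [2, 4, 6]:\n",
"    for min_leaf in [10, 50]:\n",
"        model = tree.DecisionTreeClassifier(max_depth=depth, min_samples_leaf=min_leaf,\n",
"                                            class_weight=\"balanced\", ccp_alpha=0.01, random_state=1)\n",
"        scores = cross_val_score(model, X_train, y_train, cv=5, scoring=\"balanced_accuracy\")\n",
"        results[(depth, min_leaf)] = scores.mean()\n",
"results"
]
},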
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# to use a grid search, we first need to instantiate our model (including the settings we know we want to use)\n",
"clf = tree.DecisionTreeClassifier(class_weight=\"balanced\", ccp_alpha=0.01, random_state=1)\n",
"# additionally, we need to define the values we want to try for each parameter \n",
"# (keys in the dict must match the name of the model parameter!)\n",
"params = {\n",
"    \"max_depth\": [2, 3, 4, 5, 6, 7, 8],\n",
"    \"min_samples_leaf\": [1, 5, 10, 25, 50, 75, 100, 125]\n",
"}\n",
"# then pass both the model and the parameter values into the grid search\n",
"# normally, the grid search would use the internal .score() function of the model to select the best parameters,\n",
"# however, since for a classifier this is the accuracy, we here need to tell the grid search that\n",
"# it should select the best model based on the balanced accuracy instead\n",
"gs = GridSearchCV(clf, params, scoring='balanced_accuracy')\n",
"# the grid search object then can be used like all the other sklearn models\n",
"gs.fit(X_train, y_train)\n",
"# after it is done, we can check which were the best parameter values\n",
"# -> max_depth=5 matches the depth that the pruned tree from before ended up with\n",
"# -> min_samples_leaf=1 does not seem like a good choice \n",
"# (=> always look at the results for all parameter combinations (as we do below), don't just trust the best settings)\n",
"print(gs.best_params_)\n",
"# and evaluate this best model on the test set (the grid search has already refit the best model\n",
"# on the whole training set for us and we can call .predict() on the grid search object directly)\n",
"# (note: we use accuracy_score instead of gs.score() for the first two lines, since gs.score()\n",
"#  would report the balanced accuracy - the scoring metric we passed to the grid search)\n",
"print(f\"Accuracy on training data: {accuracy_score(y_train, gs.predict(X_train)):.3f}\")\n",
"print(f\"Accuracy on test data: {accuracy_score(y_test, gs.predict(X_test)):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, gs.predict(X_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, gs.predict(X_test)):.3f}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# overall cross-validation results (lots of stuff...)\n",
"gs.cv_results_"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# we're really just interested in the mean test scores for each parameter combination\n",
"for i, p in enumerate(gs.cv_results_[\"params\"]):\n",
"    print(p, gs.cv_results_[\"mean_test_score\"][i])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# plot the results as a heatmap to make it easier to see the performance differences\n",
"plt.figure()\n",
"plt.imshow(gs.cv_results_[\"mean_test_score\"].reshape(len(params[\"max_depth\"]), len(params[\"min_samples_leaf\"])))\n",
"plt.colorbar()\n",
"plt.xlabel(\"min_samples_leaf\")\n",
"plt.ylabel(\"max_depth\")\n",
"plt.xticks(range(len(params[\"min_samples_leaf\"])), params[\"min_samples_leaf\"])\n",
"plt.yticks(range(len(params[\"max_depth\"])), params[\"max_depth\"])\n",
"plt.title(\"Grid Search Results: Balanced Accuracy\");"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note:** This plot helps us do two things:\n",
"1. Verify that the parameter search was exhaustive, i.e., that we've covered a good range of values for each parameter such that it is unlikely that we've missed the best settings in our search.\n",
"2. Select the actual parameter values that we want to use for the final model (instead of blindly trusting the values that the grid search had selected for us): notice how with a depth of 5 or greater, all trees with a `min_samples_leaf` setting of 50 or less have the same performance and the grid search simply picked the first model with the best performance. However, as we know, a decision tree with a `min_samples_leaf` setting of 1 could in theory memorize individual points, which is not what we want (although this is unlikely with a depth of only 5 and pruning). Therefore, to ensure that we really get robust results, we should instead choose those parameter settings that result in the most regularized model that still produces good results, i.e., in this case a low value for `max_depth` (5) and a high value for `min_samples_leaf` (50).\n",
"\n",
"\n",
"### Using a Logistic Regression Model\n",
"\n",
"Now that we've obtained very good results with a decision tree, let's see if we can do equally well on this dataset with a linear model (i.e., a logistic regression model, since we have a classification problem)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# try a different classifier: logistic regression\n",
"# first, try the model with the default parameter settings\n",
"clf = LogisticRegression()\n",
"clf = clf.fit(X_train, y_train)\n",
"print(f\"Accuracy on training data: {clf.score(X_train, y_train):.3f}\")\n",
"print(f\"Accuracy on test data: {clf.score(X_test, y_test):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, clf.predict(X_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, clf.predict(X_test)):.3f}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# unbalanced class distributions => set parameter class_weight!! \n",
"# (most sklearn classifiers have this parameter - use it!)\n",
"clf = LogisticRegression(class_weight=\"balanced\", random_state=1)\n",
"clf = clf.fit(X_train, y_train)\n",
"print(f\"Accuracy on training data: {clf.score(X_train, y_train):.3f}\")\n",
"print(f\"Accuracy on test data: {clf.score(X_test, y_test):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, clf.predict(X_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, clf.predict(X_test)):.3f}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The performance is still a lot lower than what we got with a decision tree... Furthermore, you saw in both cases that the model threw a `ConvergenceWarning`. While this usually isn't too tragic in practice (in most cases the results are still quite good), this warning often occurs when the features are on very different scales or far from zero mean and unit variance, and both the optimization and the results usually get better when you transform the data accordingly. Therefore, we now use the `StandardScaler` to ensure each feature has a mean of 0 and a standard deviation of 1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# scale the data\n",
"scaler = StandardScaler()\n",
"# training data: fit & transform \n",
"# (fit: compute mean and std of each feature; transform: subtract mean from each feature and divide by std)\n",
"X_train = scaler.fit_transform(X_train)\n",
"# test data: only transform, so the data is comparable!\n",
"X_test = scaler.transform(X_test)\n",
"# try logreg again -> much better!\n",
"clf = LogisticRegression(class_weight=\"balanced\", random_state=1)\n",
"clf = clf.fit(X_train, y_train)\n",
"print(f\"Accuracy on training data: {clf.score(X_train, y_train):.3f}\")\n",
"print(f\"Accuracy on test data: {clf.score(X_test, y_test):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, clf.predict(X_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, clf.predict(X_test)):.3f}\")"
]
},
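{
"cell_type": "markdown",
"metadata": {},
"source": [
"To avoid having to remember the fit_transform/transform distinction (and to prevent validation statistics from leaking into the preprocessing, e.g., during cross-validation), the scaler and the classifier can also be chained into a single pipeline - a minimal sketch (applied here to the already scaled `X_train`, which is redundant but harmless):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# bundle scaling and classification into one estimator so the scaler is always fit on the training part only\n",
"from sklearn.pipeline import make_pipeline\n",
"from sklearn.model_selection import cross_val_score\n",
"\n",
"pipe = make_pipeline(StandardScaler(), LogisticRegression(class_weight=\"balanced\", random_state=1))\n",
"# during cross-validation the scaler is re-fit on each training fold, so nothing leaks from the validation folds\n",
"scores = cross_val_score(pipe, X_train, y_train, cv=5, scoring=\"balanced_accuracy\")\n",
"print(f\"CV balanced accuracy: {scores.mean():.3f} +/- {scores.std():.3f}\")"
]
},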
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# try L1 regularization for feature selection\n",
"# (the parameter C determines the strength of the regularization -> smaller values = more regularization)\n",
"clf = LogisticRegression(class_weight=\"balanced\", penalty='l1', C=0.1, solver='liblinear', random_state=1)\n",
"clf = clf.fit(X_train, y_train)\n",
"print(f\"Accuracy on training data: {clf.score(X_train, y_train):.3f}\")\n",
"print(f\"Accuracy on test data: {clf.score(X_test, y_test):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, clf.predict(X_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, clf.predict(X_test)):.3f}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# the coefficients tell us why an item was classified as faulty:\n",
"# higher temperatures lead to faulty items, but we have different offsets for the different products, \n",
"# i.e., product 17 can handle higher temperatures than product 1\n",
"# -> features with very small coefficients can be removed\n",
"dict(zip(feature_cols, clf.coef_[0]))"
]
},
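{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the features were standardized, each coefficient describes how the log-odds of being faulty change when that feature increases by one standard deviation; exponentiating the coefficients turns them into odds ratios, which are often easier to communicate - a small sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# exp(coef) is the factor by which the odds of being faulty change when the (standardized)\n",
"# feature increases by one standard deviation\n",
"dict(zip(feature_cols, np.exp(clf.coef_[0]).round(3)))"
]
},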
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# do a manual feature selection based on the coefficients of the L1 regularized model\n",
"feature_cols = [\"product_1\", \"product_17\", \"temp\"]\n",
"# construct a new feature matrix and create the train/test split with this new matrix again\n",
"X = df_new[feature_cols].to_numpy()\n",
"y = df_new[\"faulty\"].to_numpy()\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=15)\n",
"# and don't forget to scale the data again!\n",
"scaler = StandardScaler()\n",
"X_train = scaler.fit_transform(X_train)\n",
"X_test = scaler.transform(X_test)\n",
"# train the model again with mostly the default parameter settings\n",
"clf = LogisticRegression(class_weight=\"balanced\", random_state=1)\n",
"clf = clf.fit(X_train, y_train)\n",
"print(f\"Accuracy on training data: {clf.score(X_train, y_train):.3f}\")\n",
"print(f\"Accuracy on test data: {clf.score(X_test, y_test):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, clf.predict(X_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, clf.predict(X_test)):.3f}\")\n",
"# the performance even gets a tiny bit better, i.e., sometimes less data can be more:\n",
"# additional features can also introduce noise patterns on which a model might overfit"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# by default the logreg model uses L2 regularization with C=1.\n",
"# since now we've manually selected the features and know that all of these are important for the task\n",
"# we can set C to a higher value to use less regularization\n",
"clf = LogisticRegression(class_weight=\"balanced\", penalty='l2', C=1000., random_state=1)\n",
"clf = clf.fit(X_train, y_train)\n",
"print(f\"Accuracy on training data: {clf.score(X_train, y_train):.3f}\")\n",
"print(f\"Accuracy on test data: {clf.score(X_test, y_test):.3f}\")\n",
"print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, clf.predict(X_train)):.3f}\")\n",
"print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, clf.predict(X_test)):.3f}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"While it was a bit more work to set up the logistic regression model appropriately, incl. extra data preprocessing steps, we now even got a balanced accuracy on the test set that is slightly higher than that of the decision tree (0.938 instead of 0.935)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}