{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Predicting hard drive failures\n",
    "\n",
    "**Scenario:** In a data center with many hard drives, occasionally one of these drives will fail. To prevent possible data loss, it is your task as a data scientist to predict as far in advance as possible when a drive might fail.\n",
    "\n",
    "The original data can be downloaded from [backblaze](https://www.backblaze.com/b2/hard-drive-test-data.html).\n",
    "It was already cleaned and restructured for your convenience (see `data/hdf_data`). This preprocessing included:\n",
    "\n",
    "- removing NaNs\n",
    "- removing SMART variables with zero variance\n",
    "- keeping only data from the most frequent drive model (to avoid artifacts due to differences in SMART recordings)\n",
    "- creating a dataframe where each drive is one data point with the information whether it failed or not (= class label)\n",
    "\n",
    "The original data consisted of daily SMART statistics measurements for all drives installed in the data center at the time (i.e., measurements for each drive until it failed). Your task is to build a binary classification model that receives the measurements from all drives every day and predicts which of these drives are likely to fail in the next hours or days. To train such a model, you are given a simplified dataset that includes only a single measurement per drive: either from some random time point during the year if the drive did not fail (class 0), or the SMART statistics from the day the drive failed (csv files ending in `_0`), or from a few days before the drive failed (e.g., `_1` for 1 day before it failed, `_7` for 7 days, etc.). This means that by using, e.g., the data from `df_2016_0.csv`, you can build a model that predicts whether a drive will fail today, while a model trained on the data in `df_2016_7.csv` can predict whether a drive will fail one week from now. (Normally, you would make use of the measurements over time and, e.g., track maximum values up to now or do some other feature engineering to improve the performance, but for the sake of simplicity we only use these individual snapshots here.)\n",
    "\n",
    "Use the data from 2016 for training the model and tuning hyperparameters, and the data from 2017 for the final evaluation to get a realistic estimate of how well the model handles slight data drifts etc.\n",
    "\n",
    "More about the SMART attributes used as features in this problem can be found on [Wikipedia](https://en.wikipedia.org/wiki/S.M.A.R.T.)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn.dummy import DummyClassifier\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.metrics import balanced_accuracy_score\n",
    "# don't get unnecessary warnings\n",
    "import warnings\n",
    "warnings.simplefilter(action='ignore', category=FutureWarning)\n",
    "\n",
    "# these \"magic commands\" are helpful if you plan to import functions from another script\n",
    "# where you keep changing things, i.e., if you change a function in the script\n",
    "# it will automagically be reloaded in the notebook so you work with the latest version\n",
    "%load_ext autoreload\n",
    "%autoreload 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# load the data with the SMART statistics of the drives;\n",
    "# with the data ending in _0, we can learn to predict if a drive has failed or is working properly right now;\n",
    "# try, e.g., df_2016_7.csv to predict failures a week in advance\n",
    "df = pd.read_csv(\"../data/hdf_data/df_2016_0.csv\")\n",
    "# have a look at what we've loaded\n",
    "df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# construct training and test data from this dataframe\n",
    "# -> use the SMART statistics as features & \"failure\" as the target\n",
    "feat_cols = [c for c in df.columns if c.startswith(\"smart\")]\n",
    "X = df[feat_cols].to_numpy()\n",
    "y = df[\"failure\"].to_numpy()\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=23)\n",
    "# see how imbalanced the label distribution in the training and test sets is\n",
    "print(f\"Fraction of ok items in training set: {1-np.mean(y_train):.3f}\")\n",
    "print(f\"Fraction of ok items in test set: {1-np.mean(y_test):.3f}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def eval_clf(clf, X_train, y_train, X_test, y_test):\n",
    "    \"\"\"\n",
    "    Function to evaluate a trained classifier: prints accuracy and balanced accuracy scores.\n",
    "\n",
    "    Inputs:\n",
    "        - clf: the trained classifier\n",
    "        - X_train, y_train: the training data\n",
    "        - X_test, y_test: the test data\n",
    "    \"\"\"\n",
    "    print(f\"Accuracy on training data: {clf.score(X_train, y_train):.3f}\")\n",
    "    print(f\"Accuracy on test data: {clf.score(X_test, y_test):.3f}\")\n",
    "    print(f\"Balanced accuracy on training data: {balanced_accuracy_score(y_train, clf.predict(X_train)):.3f}\")\n",
    "    print(f\"Balanced accuracy on test data: {balanced_accuracy_score(y_test, clf.predict(X_test)):.3f}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# train a dummy model\n",
    "clf = DummyClassifier(strategy=\"most_frequent\")\n",
    "clf = clf.fit(X_train, y_train)\n",
    "# evaluate the model\n",
    "# later, make sure to pass the correct training and test data, e.g., in case you scaled your data etc.\n",
    "eval_clf(clf, X_train, y_train, X_test, y_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-------------------------------------------------------------------------------------\n",
    "You're already given this rudimentary prediction pipeline; now your job is to improve it. Below are some things you might want to try, but feel free to get creative! Have a look at the [cheat sheet](https://github.com/cod3licious/ml_exercises/blob/main/other/cheatsheet.pdf) for more ideas and a concise overview of the relevant steps when developing a machine learning solution in any data science project. (Hedged starter sketches for some of the suggested steps follow after this cell.)\n",
    "\n",
    "The previous notebook, \"analyze toydata\", deals with a very similar problem and can serve as a guideline for this exercise. For an example of how to use the t-SNE algorithm, have a look at the first notebook, \"visualize text\" (but please note that since you don't have sparse data here, there is no need to transform the data with a kernel PCA before using t-SNE).\n",
    "\n",
    "### (Suggested) Steps\n",
    "\n",
    "#### a) Get a better understanding of the problem\n",
    "- Create a t-SNE plot of the data (from the features; color the dots in the scatter plot with the target variable): Do you think a classification model will do well on this data?\n",
    "- Look at the variables in more detail: Are they normally/uniformly distributed?\n",
    "- Try different kinds of models in place of the `DummyClassifier` (e.g., decision tree, linear model, SVM) and play around with the hyperparameters a little bit to get a better feeling for the problem.\n",
    "- Would outlier detection make sense here? Why (not)?\n",
    "\n",
    "#### b) Improve the prediction performance\n",
    "- Try different normalizations of the data (e.g., using the `StandardScaler`): How do the t-SNE plot and the performance of the different models change? Why does a decision tree not improve? Can you apply some other transformations to make the features more normally distributed?\n",
    "- Are any variables highly correlated? How does the performance change when you remove some features? Do you have any other feature engineering ideas? Again, observe how your previous results change as you modify the input features!\n",
    "- Systematically find optimal hyperparameters for your models using a `GridSearchCV` and decide what you want to use as your final model.\n",
    "\n",
    "#### c) Final evaluation & model interpretation\n",
    "- Try to better understand what your model is doing: Which variables are the most predictive of failures?\n",
    "- Predict failures multiple days in advance by training and evaluating your models on the other csv files from 2016 (e.g., `df_2016_7.csv` for 7 days before the drive fails). How many days in advance is a reliable prediction possible (e.g., plot \"days before failure\" vs. \"balanced accuracy\")?\n",
    "- Evaluate your final model (trained on a complete dataframe from 2016) on the respective data from 2017.\n",
    "\n",
    "#### d) Presentation of results\n",
    "Clean up your code & think about which results you want to present + the story they tell:\n",
    "- What is the best model that you found & its performance?\n",
    "- Which preprocessing steps had the most impact on the performance?\n",
    "- What worked and what didn't for the different models?\n",
    "- Which of the SMART statistics indicate that a drive will fail?\n",
    "- How many days in advance can you predict a hard drive failure?\n",
    "- How well does your model perform on the new data from 2017?\n",
    "- What have you learned in this case study? Did any of the results surprise you?\n",
    "-------------------------------------------------------------------------------------"
   ]
  },
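  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next three cells are minimal, hedged starter sketches for steps a)-c). They assume the variables defined above (`X_train`, `y_train`, `feat_cols`, `eval_clf`, ...); the specific model, hyperparameter grid, subsample size, and file offsets used in them are assumptions, not the intended solution."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch for step a): t-SNE plot of the features, colored by the class label;\n",
    "# the subsample size and the scaling choice are assumptions you may want to adjust\n",
    "from sklearn.manifold import TSNE\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "\n",
    "# t-SNE is slow on many points -> use a random subsample\n",
    "rng = np.random.default_rng(23)\n",
    "idx = rng.permutation(len(X_train))[:2000]\n",
    "# scaling the features first usually yields a more informative embedding\n",
    "X_emb = TSNE(n_components=2, random_state=23).fit_transform(StandardScaler().fit_transform(X_train[idx]))\n",
    "\n",
    "plt.figure()\n",
    "plt.scatter(X_emb[:, 0], X_emb[:, 1], c=y_train[idx], cmap=\"coolwarm\", s=10, alpha=0.7)\n",
    "plt.colorbar(label=\"failure\")\n",
    "plt.title(\"t-SNE embedding of the SMART features\");"
   ]
  },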
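  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch for step b): scaling + hyperparameter search with GridSearchCV;\n",
    "# the random forest and this small parameter grid are assumptions --\n",
    "# swap in whichever models and grids you actually want to compare\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "pipe = Pipeline([\n",
    "    (\"scaler\", StandardScaler()),  # irrelevant for trees, but important for linear models & SVMs\n",
    "    (\"clf\", RandomForestClassifier(class_weight=\"balanced\", random_state=23)),\n",
    "])\n",
    "param_grid = {\n",
    "    \"clf__n_estimators\": [100, 300],\n",
    "    \"clf__max_depth\": [None, 10, 20],\n",
    "}\n",
    "# optimize for balanced accuracy since the classes are heavily imbalanced\n",
    "gs = GridSearchCV(pipe, param_grid, scoring=\"balanced_accuracy\", cv=5, n_jobs=-1)\n",
    "gs.fit(X_train, y_train)\n",
    "print(\"best params:\", gs.best_params_)\n",
    "eval_clf(gs.best_estimator_, X_train, y_train, X_test, y_test)"
   ]
  },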
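  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch for step c): train & evaluate a model for several \"days before failure\" offsets\n",
    "# and plot how the performance degrades; the list of offsets is an assumption --\n",
    "# check which df_2016_*.csv files actually exist in data/hdf_data\n",
    "days = [0, 1, 2, 7]  # assumed offsets; adapt to the available files\n",
    "scores = []\n",
    "for d in days:\n",
    "    df_d = pd.read_csv(f\"../data/hdf_data/df_2016_{d}.csv\")\n",
    "    X_d = df_d[feat_cols].to_numpy()\n",
    "    y_d = df_d[\"failure\"].to_numpy()\n",
    "    X_tr, X_te, y_tr, y_te = train_test_split(X_d, y_d, test_size=0.2, random_state=23)\n",
    "    clf_d = gs.best_estimator_.fit(X_tr, y_tr)  # refit the tuned pipeline from the cell above\n",
    "    scores.append(balanced_accuracy_score(y_te, clf_d.predict(X_te)))\n",
    "\n",
    "plt.figure()\n",
    "plt.plot(days, scores, \"o-\")\n",
    "plt.xlabel(\"days before failure\")\n",
    "plt.ylabel(\"balanced accuracy\")\n",
    "plt.title(\"How far in advance can we predict failures?\");"
   ]
  },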
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}