Datasets may have missing values, and this can cause problems for many machine learning algorithms.
As such, it is good practice to identify and replace missing values for each column in your input data prior to modeling your prediction task. This is called missing data imputation, or imputing for short.
A sophisticated approach involves defining a model to predict each missing feature as a function of all other features and to repeat this process of estimating feature values multiple times. The repetition allows the refined estimated values for other features to be used as input in subsequent iterations of predicting missing values. This is generally referred to as iterative imputation.
In this tutorial, you will discover how to use iterative imputation strategies for missing data in machine learning.
After completing this tutorial, you will know:
- Missing values must be marked with NaN values and can be replaced with iteratively estimated values.
- How to load a CSV file with missing values, mark the missing values with NaN values, and report the number and percentage of missing values for each column.
- How to impute missing values with iterative models as a data preparation method when evaluating models and when fitting a final model to make predictions on new data.
Let’s get started.
Iterative Imputation for Missing Values in Machine Learning
Photo by Gergely Csatari, some rights reserved.
Tutorial Overview
This tutorial is divided into three parts; they are:
- Iterative Imputation
- Horse Colic Dataset
- Iterative Imputation With IterativeImputer
  - IterativeImputer Data Transform
  - IterativeImputer and Model Evaluation
  - IterativeImputer and Different Imputation Order
  - IterativeImputer and Different Number of Iterations
  - IterativeImputer Transform When Making a Prediction
Iterative Imputation
A dataset may have missing values.
These are rows of data where one or more values or columns in that row are not present. The values may be missing completely or they may be marked with a special character or value, such as a question mark “?”.
Values could be missing for many reasons, often specific to the problem domain, and might include reasons such as corrupt measurements or unavailability.
Most machine learning algorithms require numeric input values, and a value to be present for each row and column in a dataset. As such, missing values can cause problems for machine learning algorithms.
It is therefore common to identify missing values in a dataset and replace them with a numeric value. This is called data imputing, or missing data imputation.
One approach to imputing missing values is to use an iterative imputation model.
Iterative imputation refers to a process where each feature is modeled as a function of the other features, e.g. a regression problem where missing values are predicted. Each feature is imputed sequentially, one after the other, allowing prior imputed values to be used as part of a model in predicting subsequent features.
It is iterative because this process is repeated multiple times, allowing ever improved estimates of missing values to be calculated as missing values across all features are estimated.
This approach may be generally referred to as fully conditional specification (FCS) or multivariate imputation by chained equations (MICE).
This methodology is attractive if the multivariate distribution is a reasonable description of the data. FCS specifies the multivariate imputation model on a variable-by-variable basis by a set of conditional densities, one for each incomplete variable. Starting from an initial imputation, FCS draws imputations by iterating over the conditional densities. A low number of iterations (say 10–20) is often sufficient.
— mice: Multivariate Imputation by Chained Equations in R, 2009.
Different regression algorithms can be used to estimate the missing values for each feature, although linear methods are often used for simplicity. The number of iterations of the procedure is often kept small, such as 10. Finally, the order that features are processed sequentially can be considered, such as from the feature with the least missing values to the feature with the most missing values.
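To make the procedure concrete, below is a minimal sketch of a single imputation pass on a small made-up array. The toy data, the mean-based initial fill, the use of a BayesianRidge model, and the single pass are all assumptions made for illustration; the IterativeImputer class used later in this tutorial performs this kind of loop for you over multiple iterations.

# minimal sketch: one pass of chained imputation on a toy array (illustrative only)
from numpy import array, isnan, nan
from sklearn.linear_model import BayesianRidge
# toy data with missing values marked as NaN
X = array([
    [1.0, 2.0, nan],
    [2.0, nan, 6.0],
    [3.0, 6.0, 9.0],
    [nan, 8.0, 12.0],
    [5.0, 10.0, 15.0]])
# initial fill: replace missing entries with the column mean
Xfilled = X.copy()
for j in range(X.shape[1]):
    col = Xfilled[:, j]
    col[isnan(col)] = col[~isnan(col)].mean()
# one pass: re-estimate each feature's missing entries from the other features
for j in range(X.shape[1]):
    missing = isnan(X[:, j])
    if not missing.any():
        continue
    others = [k for k in range(X.shape[1]) if k != j]
    model = BayesianRidge()
    # fit on rows where feature j was observed, using the current filled values
    model.fit(Xfilled[~missing][:, others], X[~missing, j])
    # predict the missing entries and update the working copy
    Xfilled[missing, j] = model.predict(Xfilled[missing][:, others])
print(Xfilled)

In practice this loop would be repeated for several iterations, and the estimator, the feature order, and the number of iterations can all be varied, which is exactly what the IterativeImputer class exposes as configuration.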
Now that we are familiar with iterative methods for missing value imputation, let’s take a look at a dataset with missing values.
Horse Colic Dataset
The horse colic dataset describes medical characteristics of horses with colic and whether they lived or died.
There are 300 rows and 26 input variables with one output variable. It is a binary classification prediction task that involves predicting 1 if the horse lived and 2 if the horse died.
A naive model can achieve a classification accuracy of about 67 percent, and a top performing model can achieve an accuracy of about 85.2 percent using three repeats of 10-fold cross-validation. This defines the range of expected modeling performance on the dataset.
The dataset has many missing values for many of the columns where each missing value is marked with a question mark character (“?”).
The following shows a sample of rows from the dataset with marked missing values.
2,1,530101,38.50,66,28,3,3,?,2,5,4,4,?,?,?,3,5,45.00,8.40,?,?,2,2,11300,00000,00000,2
1,1,534817,39.2,88,20,?,?,4,1,3,4,2,?,?,?,4,2,50,85,2,2,3,2,02208,00000,00000,2
2,1,530334,38.30,40,24,1,1,3,1,3,3,1,?,?,?,1,1,33.00,6.70,?,?,1,2,00000,00000,00000,1
1,9,5290409,39.10,164,84,4,1,6,2,2,4,4,1,2,5.00,3,?,48.00,7.20,3,5.30,2,1,02208,00000,00000,1
...
You can learn more about the dataset here:
No need to download the dataset as we will download it automatically in the worked examples.
Marking missing values with a NaN (not a number) value in a loaded dataset using Python is a best practice.
We can load the dataset using the read_csv() Pandas function and specify the “na_values” to load values of ‘?’ as missing, marked with a NaN value.
...
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
Once loaded, we can review the loaded data to confirm that “?” values are marked as NaN.
...
# summarize the first few rows
print(dataframe.head())
We can then enumerate each column and report the number of rows with missing values for the column.
...
# summarize the number of rows with missing values for each column
for i in range(dataframe.shape[1]):
    # count number of rows with missing values
    n_miss = dataframe[[i]].isnull().sum()
    perc = n_miss / dataframe.shape[0] * 100
    print('> %d, Missing: %d (%.1f%%)' % (i, n_miss, perc))
Tying this together, the complete example of loading and summarizing the dataset is listed below.
# summarize the horse colic dataset
from pandas import read_csv
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
# summarize the first few rows
print(dataframe.head())
# summarize the number of rows with missing values for each column
for i in range(dataframe.shape[1]):
    # count number of rows with missing values
    n_miss = dataframe[[i]].isnull().sum()
    perc = n_miss / dataframe.shape[0] * 100
    print('> %d, Missing: %d (%.1f%%)' % (i, n_miss, perc))
Running the example first loads the dataset and summarizes the first five rows.
We can see that the missing values that were marked with a “?” character have been replaced with NaN values.
     0  1        2     3      4     5    6  ...   21   22 23     24 25 26 27
0  2.0  1   530101  38.5   66.0  28.0  3.0  ...  NaN  2.0  2  11300  0  0  2
1  1.0  1   534817  39.2   88.0  20.0  NaN  ...  2.0  3.0  2   2208  0  0  2
2  2.0  1   530334  38.3   40.0  24.0  1.0  ...  NaN  1.0  2      0  0  0  1
3  1.0  9  5290409  39.1  164.0  84.0  4.0  ...  5.3  2.0  1   2208  0  0  1
4  2.0  1   530255  37.3  104.0  35.0  NaN  ...  NaN  2.0  2   4300  0  0  2

[5 rows x 28 columns]
Next, we can see the list of all columns in the dataset and the number and percentage of missing values.
We can see that some columns (e.g. column indexes 1 and 2) have no missing values and other columns (e.g. column indexes 15 and 21) have many or even a majority of missing values.
> 0, Missing: 1 (0.3%)
> 1, Missing: 0 (0.0%)
> 2, Missing: 0 (0.0%)
> 3, Missing: 60 (20.0%)
> 4, Missing: 24 (8.0%)
> 5, Missing: 58 (19.3%)
> 6, Missing: 56 (18.7%)
> 7, Missing: 69 (23.0%)
> 8, Missing: 47 (15.7%)
> 9, Missing: 32 (10.7%)
> 10, Missing: 55 (18.3%)
> 11, Missing: 44 (14.7%)
> 12, Missing: 56 (18.7%)
> 13, Missing: 104 (34.7%)
> 14, Missing: 106 (35.3%)
> 15, Missing: 247 (82.3%)
> 16, Missing: 102 (34.0%)
> 17, Missing: 118 (39.3%)
> 18, Missing: 29 (9.7%)
> 19, Missing: 33 (11.0%)
> 20, Missing: 165 (55.0%)
> 21, Missing: 198 (66.0%)
> 22, Missing: 1 (0.3%)
> 23, Missing: 0 (0.0%)
> 24, Missing: 0 (0.0%)
> 25, Missing: 0 (0.0%)
> 26, Missing: 0 (0.0%)
> 27, Missing: 0 (0.0%)
Now that we are familiar with the horse colic dataset that has missing values, let’s look at how we can use iterative imputation.
Iterative Imputation With IterativeImputer
The scikit-learn machine learning library provides the IterativeImputer class that supports iterative imputation.
In this section, we will explore how to effectively use the IterativeImputer class.
IterativeImputer Data Transform
The IterativeImputer class is a data transform that is first configured with the model used to estimate the missing values. By default, a BayesianRidge model is employed that uses a function of all other input features. Features are filled in ascending order, from those with the fewest missing values to those with the most.
...
# define imputer
imputer = IterativeImputer(estimator=BayesianRidge(), n_nearest_features=None, imputation_order='ascending')
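Other regression models can be plugged in via the “estimator” argument. For example, the snippet below is an illustrative configuration (not one tuned for this dataset) that uses a k-nearest neighbors regressor instead of the default BayesianRidge model.

...
# define an imputer that estimates missing values with k-nearest neighbors (illustrative)
from sklearn.neighbors import KNeighborsRegressor
imputer = IterativeImputer(estimator=KNeighborsRegressor(n_neighbors=15))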
Then the imputer is fit on a dataset.
...
# fit on the dataset
imputer.fit(X)
The fit imputer is then applied to a dataset to create a copy of the dataset with all missing values for each column replaced with an estimated value.
...
# transform the dataset
Xtrans = imputer.transform(X)
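When the same dataset is used for both steps, the fit and transform can also be combined into a single call with fit_transform(), as sketched below.

...
# fit and transform in a single step
Xtrans = imputer.fit_transform(X)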
The IterativeImputer class cannot be used directly because it is experimental.
If you try to use it directly, you will get an error as follows:
ImportError: cannot import name 'IterativeImputer'
Instead, you must add an additional import statement to add support for the IterativeImputer class, as follows:
...
from sklearn.experimental import enable_iterative_imputer
We can demonstrate its usage on the horse colic dataset and confirm it works by summarizing the total number of missing values in the dataset before and after the transform.
The complete example is listed below.
# iterative imputation transform for the horse colic dataset
from numpy import isnan
from pandas import read_csv
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# print total missing
print('Missing: %d' % sum(isnan(X).flatten()))
# define imputer
imputer = IterativeImputer()
# fit on the dataset
imputer.fit(X)
# transform the dataset
Xtrans = imputer.transform(X)
# print total missing
print('Missing: %d' % sum(isnan(Xtrans).flatten()))
Running the example first loads the dataset and reports the total number of missing values in the dataset as 1,605.
The transform is configured, fit, and performed and the resulting new dataset has no missing values, confirming it was performed as we expected.
Each missing value was replaced with a value estimated by the model.
Missing: 1605
Missing: 0
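If you want to see what was filled in, you can compare a column with many missing entries before and after the transform. The sketch below reuses the X and Xtrans arrays from the example above and picks column index 15, reported earlier as about 82 percent missing; it simply prints a few of the imputed values.

...
# sketch: inspect a few imputed values for a column with many missing entries
from numpy import isnan
col = 15
missing_rows = isnan(X[:, col])
print('Rows missing in column %d: %d' % (col, missing_rows.sum()))
print('Example imputed values:', Xtrans[missing_rows, col][:5])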
IterativeImputer and Model Evaluation
It is a good practice to evaluate machine learning models on a dataset using k-fold cross-validation.
To correctly apply iterative missing data imputation and avoid data leakage, it is required that the models for each column are calculated on the training dataset only, then applied to the train and test sets for each fold in the dataset.
This can be achieved by creating a modeling pipeline where the first step is the iterative imputation and the second step is the model. This can be achieved using the Pipeline class.
For example, the Pipeline below uses an IterativeImputer with the default strategy, followed by a random forest model.
...
# define modeling pipeline
model = RandomForestClassifier()
imputer = IterativeImputer()
pipeline = Pipeline(steps=[('i', imputer), ('m', model)])
We can evaluate the imputed dataset and random forest modeling pipeline for the horse colic dataset with repeated 10-fold cross-validation.
The complete example is listed below.
# evaluate iterative imputation and random forest for the horse colic dataset
from numpy import mean
from numpy import std
from pandas import read_csv
from sklearn.ensemble import RandomForestClassifier
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# define modeling pipeline
model = RandomForestClassifier()
imputer = IterativeImputer()
pipeline = Pipeline(steps=[('i', imputer), ('m', model)])
# define model evaluation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(pipeline, X, y, scoring='accuracy', cv=cv, n_jobs=-1, error_score='raise')
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
Running the example correctly applies data imputation to each fold of the cross-validation procedure.
The pipeline is evaluated using three repeats of 10-fold cross-validation and reports the mean classification accuracy on the dataset as about 81.4 percent, which is a good score.
Mean Accuracy: 0.814 (0.063)
How do we know that using a default iterative strategy is good or best for this dataset?
The answer is that we don’t.
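One practical way to find out is to treat the imputer's configuration as hyperparameters of the modeling pipeline and search over them. The sketch below reuses the pipeline and cv objects defined in the previous example and searches only over the imputation order as an illustration; the following sections explore the same question by evaluating each configuration manually.

...
# sketch: grid search the imputation order as part of the pipeline
from sklearn.model_selection import GridSearchCV
grid = {'i__imputation_order': ['ascending', 'descending', 'roman', 'arabic', 'random']}
search = GridSearchCV(pipeline, grid, scoring='accuracy', cv=cv, n_jobs=-1)
search.fit(X, y)
print('Best: %s with accuracy %.3f' % (search.best_params_, search.best_score_))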
IterativeImputer and Different Imputation Order
By default, imputation is performed in ascending order from the feature with the least missing values to the feature with the most.
This makes sense, as we want to have more complete data when it comes time to estimate missing values for columns where the majority of values are missing.
Nevertheless, we can experiment with different imputation order strategies, such as descending, right-to-left (Arabic), left-to-right (Roman), and random.
The example below evaluates and compares each available imputation order configuration.
# compare iterative imputation strategies for the horse colic dataset
from numpy import mean
from numpy import std
from pandas import read_csv
from sklearn.ensemble import RandomForestClassifier
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from matplotlib import pyplot
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# evaluate each strategy on the dataset
results = list()
strategies = ['ascending', 'descending', 'roman', 'arabic', 'random']
for s in strategies:
    # create the modeling pipeline
    pipeline = Pipeline(steps=[('i', IterativeImputer(imputation_order=s)), ('m', RandomForestClassifier())])
    # evaluate the model
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    scores = cross_val_score(pipeline, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
    # store results
    results.append(scores)
    print('>%s %.3f (%.3f)' % (s, mean(scores), std(scores)))
# plot model performance for comparison
pyplot.boxplot(results, labels=strategies, showmeans=True)
pyplot.xticks(rotation=45)
pyplot.show()
Running the example evaluates each imputation order on the horse colic dataset using repeated cross-validation.
Your specific results may vary given the stochastic nature of the learning algorithm; consider running the example a few times.
The mean accuracy of each strategy is reported along the way. The results suggest little difference between most of the methods, with the right-to-left (Arabic) order performing best on this dataset with an accuracy of about 80.4 percent.
>ascending 0.801 (0.071)
>descending 0.797 (0.059)
>roman 0.802 (0.060)
>arabic 0.804 (0.068)
>random 0.802 (0.061)
At the end of the run, a box and whisker plot is created for each set of results, allowing the distribution of results to be compared.
Box and Whisker Plot of Imputation Order Strategies Applied to the Horse Colic Dataset
IterativeImputer and Different Number of Iterations
By default, the IterativeImputer will repeat the imputation process for a maximum of 10 iterations.
It is possible that a large number of iterations may begin to bias or skew the estimate and that few iterations may be preferred. The number of iterations of the procedure can be specified via the “max_iter” argument.
It may be interesting to evaluate different numbers of iterations. The example below compares different values for “max_iter” from 1 to 20.
# compare iterative imputation number of iterations for the horse colic dataset
from numpy import mean
from numpy import std
from pandas import read_csv
from sklearn.ensemble import RandomForestClassifier
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from matplotlib import pyplot
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# evaluate each strategy on the dataset
results = list()
strategies = [str(i) for i in range(1, 21)]
for s in strategies:
    # create the modeling pipeline
    pipeline = Pipeline(steps=[('i', IterativeImputer(max_iter=int(s))), ('m', RandomForestClassifier())])
    # evaluate the model
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    scores = cross_val_score(pipeline, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
    # store results
    results.append(scores)
    print('>%s %.3f (%.3f)' % (s, mean(scores), std(scores)))
# plot model performance for comparison
pyplot.boxplot(results, labels=strategies, showmeans=True)
pyplot.xticks(rotation=45)
pyplot.show()
Running the example evaluates each number of iterations on the horse colic dataset using repeated cross-validation.
Your specific results may vary given the stochastic nature of the learning algorithm; consider running the example a few times.
The results suggest that very few iterations, such as 1 or 2, might be as or more effective than 9-12 iterations on this dataset.
>1 0.820 (0.072)
>2 0.813 (0.078)
>3 0.801 (0.066)
>4 0.817 (0.067)
>5 0.808 (0.071)
>6 0.799 (0.059)
>7 0.804 (0.058)
>8 0.809 (0.070)
>9 0.812 (0.068)
>10 0.800 (0.058)
>11 0.818 (0.064)
>12 0.810 (0.073)
>13 0.808 (0.073)
>14 0.799 (0.067)
>15 0.812 (0.075)
>16 0.814 (0.057)
>17 0.812 (0.060)
>18 0.810 (0.069)
>19 0.810 (0.057)
>20 0.802 (0.067)
At the end of the run, a box and whisker plot is created for each set of results, allowing the distribution of results to be compared.
Box and Whisker Plot of Number of Imputation Iterations on the Horse Colic Dataset
IterativeImputer Transform When Making a Prediction
We may wish to create a final modeling pipeline with the iterative imputation and random forest algorithm, then make a prediction for new data.
This can be achieved by defining the pipeline and fitting it on all available data, then calling the predict() function, passing new data in as an argument.
Importantly, the row of new data must mark any missing values using the NaN value.
...
# define new data
row = [2,1,530101,38.50,66,28,3,3,nan,2,5,4,4,nan,nan,nan,3,5,45.00,8.40,nan,nan,2,2,11300,00000,00000]
The complete example is listed below.
# iterative imputation strategy and prediction for the horse colic dataset
from numpy import nan
from pandas import read_csv
from sklearn.ensemble import RandomForestClassifier
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.pipeline import Pipeline
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# create the modeling pipeline
pipeline = Pipeline(steps=[('i', IterativeImputer()), ('m', RandomForestClassifier())])
# fit the model
pipeline.fit(X, y)
# define new data
row = [2,1,530101,38.50,66,28,3,3,nan,2,5,4,4,nan,nan,nan,3,5,45.00,8.40,nan,nan,2,2,11300,00000,00000]
# make a prediction
yhat = pipeline.predict([row])
# summarize prediction
print('Predicted Class: %d' % yhat[0])
Running the example fits the modeling pipeline on all available data.
A new row of data is defined with missing values marked with NaNs and a classification prediction is made.
Predicted Class: 2
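If class membership probabilities are needed rather than a hard class label, the same fitted pipeline and row from the example above can be queried with predict_proba(), since the final random forest step supports it. A small sketch:

...
# sketch: predict class probabilities for the same new row
probs = pipeline.predict_proba([row])
print('Predicted probabilities:', probs[0])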
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Related Tutorials
- Results for Standard Classification and Regression Machine Learning Datasets
- How to Handle Missing Data with Python
Papers
- mice: Multivariate Imputation by Chained Equations in R, 2009.
- A Method of Estimation of Missing Values in Multivariate Data Suitable for use with an Electronic Computer, 1960.
APIs
Dataset
Summary
In this tutorial, you discovered how to use iterative imputation strategies for missing data in machine learning.
Specifically, you learned:
- Missing values must be marked with NaN values and can be replaced with iteratively estimated values.
- How to load a CSV file with missing values, mark the missing values with NaN values, and report the number and percentage of missing values for each column.
- How to impute missing values with iterative models as a data preparation method when evaluating models and when fitting a final model to make predictions on new data.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.