Data Leakage in Machine Learning
How to detect and avoid data leakage
Data leakage occurs when the data used in the training process contains information about what the model is trying to predict. It sounds like "cheating", but since we are usually not aware of it, "leakage" is the better word. Data leakage is a serious and widespread problem in data mining and machine learning, and it must be handled carefully to obtain a robust, generalizable predictive model.
Data leakage has different causes. Some are very obvious, but some are harder to spot at first glance. In this post, I will explain the causes of data leakage, how it misleads us, and the ways to detect and avoid it.
You probably know them already, but I want to define two terms that I will use often in this post:
- Target variable: What the model is trying to predict
- Features: The data used by the model to predict the target variable
Data Leakage Examples
Obvious cases
The most obvious cause of data leakage is including the target variable as a feature, which completely defeats the purpose of "prediction". This usually happens by mistake, so make sure the target variable is kept separate from the features.
Another common cause of data leakage is mixing test data into the training data. It is very important to evaluate models on new, previously unseen data; including test data in the training process defeats that purpose.
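To make these two cases concrete, here is a minimal sketch using pandas and scikit-learn. The dataframe and the column name "target" are made up for illustration:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy dataframe; in practice this would be your real dataset.
df = pd.DataFrame({
    "feature_1": [0.2, 1.5, 3.1, 0.7, 2.4, 1.1],
    "feature_2": [5.0, 3.3, 1.2, 4.8, 2.6, 3.9],
    "target":    [0, 1, 1, 0, 1, 0],
})

X = df.drop(columns=["target"])  # features only: the target must not stay in X
y = df["target"]                 # target variable

# Hold out the test set up front so it takes no part in training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42
)
```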
These two cases are unlikely to occur in practice because they are easy to spot. The more dangerous causes are the ones that sneak in unnoticed.
Giveaway features
Giveaway features are features that expose information about the target variable and that will not be available after the model is deployed.
- Example: Suppose we are building a model to predict a certain medical condition. A feature indicating whether a patient had surgery related to that condition causes data leakage and should never be included in the training data. Having had such a surgery is highly predictive of the condition and would probably not be available in all cases at prediction time. If we already know that a patient had surgery related to a medical condition, we may not even need a predictive model in the first place.
- Example: Consider a model that predicts whether a user will stay on a website. Including features that expose information about future visits will cause data leakage. We should only use features about the current session, because information about future sessions is not normally available after the model is deployed. A short sketch of dropping such giveaway columns follows below.
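As a minimal sketch of the idea (the column names are hypothetical), giveaway features should simply be dropped before the model ever sees them:

```python
import pandas as pd

# Hypothetical medical dataset; identifying giveaway features requires
# domain knowledge about what is actually available at prediction time.
df = pd.DataFrame({
    "age": [54, 61, 47, 38],
    "had_related_surgery": [1, 0, 1, 0],  # exposes the outcome: giveaway
    "condition": [1, 0, 1, 0],            # target variable
})

giveaway_cols = ["had_related_surgery"]
X = df.drop(columns=giveaway_cols + ["condition"])
y = df["condition"]
```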
Leakage during preprocessing
There are many preprocessing steps used to explore or clean the data, such as:
- Finding parameters for normalizing or rescaling
- Min/max values of a feature
- Distribution of a feature variable to estimate missing values
- Removing outliers
These steps should be performed using only the training set. If we use the entire dataset for these operations, data leakage may occur: applying preprocessing to the entire dataset causes the model to learn not only from the training set but also from the test set, and we all know the test set should be new, previously unseen data.
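Here is a sketch of the safe pattern with scikit-learn; the synthetic data is only there so the example runs end to end:

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

# Synthetic data just so the sketch is runnable.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Leaky: StandardScaler().fit(X) would let the scaler see test-set statistics.
# Safe: fit the scaler on the training set only, then apply the same
# transformation to the test set.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

# With cross-validation, a Pipeline refits the scaler on the training folds
# of each split, so the validation fold never leaks into preprocessing.
pipe = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(pipe, X_train, y_train, cv=5)
```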
When dealing with time-series data, we should pay even more attention to data leakage. For example, if we somehow use data from the future when computing the current features or predictions, we are highly likely to end up with a leaked model.
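One way to guard against this is to split time-series data chronologically. scikit-learn's TimeSeriesSplit, for instance, always places the validation window after the training window; the sketch below assumes the rows of X and y are ordered by time:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Toy series; assume rows are ordered chronologically.
X = np.arange(20, dtype=float).reshape(-1, 1)
y = np.arange(20, dtype=float)

tscv = TimeSeriesSplit(n_splits=4)
for train_idx, val_idx in tscv.split(X):
    # Training indices always precede validation indices,
    # so the model never sees data from the future.
    X_tr, X_val = X[train_idx], X[val_idx]
    y_tr, y_val = y[train_idx], y[val_idx]
```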