Data Leakage in Machine Learning

How to detect and avoid data leakage

Data leakage occurs when the training data contains information about what the model is trying to predict. It sounds like “cheating”, but since we are usually not aware of it, it is better to call it “leakage”. Data leakage is a serious and widespread problem in data mining and machine learning, and it needs to be handled carefully to obtain a robust, generalizable predictive model.

There are different causes of data leakage. Some are very obvious, but others are harder to spot at first glance. In this post, I will explain the causes of data leakage, how it misleads models, and ways to detect and avoid it.

You probably know them already, but let me briefly define two terms that I will use often in this post:

  • Target variable: What the model is trying to predict
  • Features: The data used by the model to predict the target variable

Data Leakage Examples

Obvious cases

The most obvious cause of data leakage is including the target variable as a feature, which completely defeats the purpose of “prediction”. This usually happens by mistake, so make sure the target variable is kept separate from the features.
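As a minimal sketch of keeping the target out of the feature set, in plain Python with made-up column names (`age`, `income`, `churned` are purely illustrative):

```python
# Hypothetical rows: "age" and "income" are features, "churned" is the target.
rows = [
    {"age": 34, "income": 72000, "churned": 0},
    {"age": 51, "income": 48000, "churned": 1},
]

target_name = "churned"

# Build the feature records X and the target vector y, making sure the
# target column never appears among the features.
X = [{k: v for k, v in row.items() if k != target_name} for row in rows]
y = [row[target_name] for row in rows]

# Sanity check: the target must not leak into the features.
assert all(target_name not in features for features in X)
```

An explicit check like the last line is cheap insurance when feature lists are built dynamically.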

Another common cause of data leakage is including test data in the training data. It is very important to evaluate models on new, previously unseen data; mixing test data into the training process defeats that purpose.
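A minimal sketch of holding out a test set before any training happens, using a toy dataset (the data and the 80/20 split ratio are illustrative assumptions):

```python
import random

# Toy dataset of (feature, label) pairs; values are made up.
data = [(i, i % 2) for i in range(10)]

random.seed(0)   # fixed seed so the split is reproducible
random.shuffle(data)

# Hold out 20% as a test set that the training process never sees.
split = int(len(data) * 0.8)
train, test = data[:split], data[split:]

# No example may appear in both sets.
assert not set(train) & set(test)
```

In practice a library helper (e.g. scikit-learn's `train_test_split`) does the same thing; the key point is that the split happens once, up front, and the test portion is never touched again until final evaluation.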

These two cases are unlikely to occur in practice because they are easy to spot. The more dangerous causes are the ones that sneak in unnoticed.

Giveaway features

Giveaway features are features that expose information about the target variable but would not be available after the model is deployed.

  • Example: Suppose we are building a model to predict a certain medical condition. A feature indicating whether a patient had surgery related to that condition causes data leakage and should never be included in the training data. Having had such a surgery is highly predictive of the condition and would probably not be available in all cases. Moreover, if we already know that a patient had surgery for the condition, we may not need a predictive model at all.
  • Example: Consider a model that predicts whether a user will stay on a website. Including features that expose information about future visits causes data leakage. We should only use features about the current session, because information about future sessions is not normally available when the model is deployed.

Leakage during preprocessing

There are many preprocessing steps used to explore or clean the data, for example:

  • Finding parameters for normalizing or rescaling
  • Computing the min/max values of a feature
  • Estimating missing values from the distribution of a feature
  • Removing outliers

These steps should be performed using only the training set. If we use the entire dataset for these operations, data leakage may occur: applying preprocessing to the entire dataset lets the model learn about the test set as well as the training set, and the test set is supposed to be new, previously unseen data.
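A minimal sketch of leakage-free normalization, in plain Python with made-up values: the mean and standard deviation are computed from the training split only, and the same statistics are then reused for the test split.

```python
# Made-up train/test splits for illustration.
train = [2.0, 4.0, 6.0, 8.0]
test = [10.0, 0.0]

# Statistics come from the training data alone -- computing them on the
# full dataset would leak the test set's distribution into preprocessing.
mean = sum(train) / len(train)
std = (sum((x - mean) ** 2 for x in train) / len(train)) ** 0.5

train_scaled = [(x - mean) / std for x in train]
test_scaled = [(x - mean) / std for x in test]   # reuse the train statistics
```

Library scalers follow the same pattern: fit on the training set, then transform both sets with the fitted parameters (e.g. scikit-learn's `StandardScaler.fit` on train, `transform` on train and test).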

When dealing with time-series data, we should pay even more attention to data leakage. For example, if we somehow use data from the future when computing features or predictions for the current time step, it is highly likely that we will end up with a leaked model.
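A minimal sketch of a leakage-safe rolling feature for a time series (the series values and window size are made up): the feature at time t is built only from observations strictly before t, never from the future.

```python
# Made-up time series and window size.
series = [3.0, 5.0, 4.0, 6.0, 8.0]
window = 2

features = []
for t in range(len(series)):
    # Slice ends at t, so only strictly-past observations are used.
    past = series[max(0, t - window):t]
    features.append(sum(past) / len(past) if past else None)
```

Note that the first time step has no past data at all (`None` here); a common bug is to center the rolling window, which silently pulls in future values.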

