
Data Leakage in Machine Learning



How to detect and avoid data leakage


Data leakage occurs when the data used in the training process contains information about what the model is trying to predict. It may sound like "cheating", but since we are usually not aware of it, "leakage" is the more fitting term. Data leakage is a serious and widespread problem in data mining and machine learning that needs to be handled well to obtain a robust and generalized predictive model.

There are different causes of data leakage. Some of them are very obvious, but some are harder to spot at first glance. In this post, I will explain the causes of data leakage, how it misleads us, and the ways to detect and avoid it.

You probably know them already, but I want to mention two terms that I will use often in this post:

  • Target variable: What the model is trying to predict
  • Features: The data used by the model to predict the target variable

Data Leakage Examples

Obvious cases

The most obvious cause of data leakage is including the target variable as a feature, which completely defeats the purpose of "prediction". This usually happens by mistake, so make sure the target variable is kept strictly separate from the features.
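One simple safeguard is to split the target off from the feature matrix explicitly and up front. A minimal sketch with a hypothetical pandas DataFrame (the column names are made up for illustration):

```python
import pandas as pd

# Hypothetical dataset; "target" is what we want to predict
df = pd.DataFrame({
    "feature_a": [1, 2, 3, 4],
    "feature_b": [10, 20, 30, 40],
    "target":    [0, 1, 0, 1],
})

# Separate features and target once, up front, so the target can
# never slip into the training matrix by accident
X = df.drop(columns=["target"])
y = df["target"]
```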

Another common cause of data leakage is including test data in the training data. It is very important to test the model with new, previously unseen data, and including test data in the training process defeats this purpose.
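The usual guard is to split the data before doing anything else and to set the test portion aside until the final evaluation. A sketch using scikit-learn's train_test_split on made-up data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix and target vector
X = np.arange(20).reshape(10, 2)
y = np.array([0, 1] * 5)

# Split first; X_test and y_test are set aside and never used
# to fit anything until the final evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```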

These two cases are unlikely to occur in practice because they are easy to spot. The more dangerous causes are the ones that sneak in unnoticed.

Giveaway features

Giveaway features are features that expose information about the target variable and would not be available after the model is deployed.

  • Example: Suppose we are building a model to predict a certain medical condition. A feature indicating whether a patient had surgery related to that condition causes data leakage and should never be included in the training data. Having had such a surgery is highly predictive of the condition and would probably not be available in all cases anyway. If we already know that a patient had surgery related to a medical condition, we may not even need a predictive model in the first place.
  • Example: Consider a model that predicts whether a user will stay on a website. Including features that expose information about future visits will cause data leakage. We should only use features about the current session, because information about future sessions is not normally available once the model is deployed (a sketch of dropping such a giveaway column follows this list).
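There is no automatic test for giveaway features; they have to be identified with domain knowledge and then dropped explicitly. A minimal sketch, with hypothetical column names for the medical example above:

```python
import pandas as pd

# Hypothetical patient data; "had_related_surgery" is a giveaway
# feature: it is only recorded once the condition is already known
X = pd.DataFrame({
    "age":                 [34, 51, 46],
    "blood_pressure":      [120, 140, 130],
    "had_related_surgery": [0, 1, 0],
})

# Identified through domain knowledge, then removed explicitly
X = X.drop(columns=["had_related_surgery"])
```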

Leakage during preprocessing

There are many preprocessing steps used to explore or clean the data, for example:

  • Finding parameters for normalizing or rescaling
  • Finding the min/max values of a feature
  • Estimating missing values from the distribution of a feature
  • Removing outliers

These steps should be done using only the training set. If we use the entire dataset to perform these operations, data leakage may occur: applying preprocessing to the entire dataset lets the model learn from not only the training set but also the test set, and the test set should remain new, previously unseen data.
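A standard way to enforce this with scikit-learn is to fit the preprocessing on the training split only and then apply the already-fitted transform to the test split. A sketch with random placeholder data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical data
X = np.random.rand(100, 3)
y = np.random.randint(0, 2, size=100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit the scaler on the training set only...
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)

# ...then reuse the same fitted mean and variance on the test set,
# so no test-set statistics influence the transformation
X_test_scaled = scaler.transform(X_test)
```

Wrapping the scaler and the model in a scikit-learn Pipeline extends the same guarantee to cross-validation, because the preprocessing is re-fit inside each fold.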

When dealing with time-series data, we should pay even more attention to data leakage. For example, if we somehow use data from the future when computing current features or predictions, we will very likely end up with a leaky model.
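One way to guard against this is to validate with chronological splits. A sketch using scikit-learn's TimeSeriesSplit on made-up, time-ordered data:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Hypothetical observations, ordered by time
X = np.arange(24).reshape(12, 2)
y = np.arange(12)

# Every fold trains strictly on the past and validates on the
# future, so future information never leaks into training
tscv = TimeSeriesSplit(n_splits=3)
for train_idx, test_idx in tscv.split(X):
    print("train:", train_idx, "test:", test_idx)
```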

