Transformation & Scaling of Numeric Features: Intuition


The dataset describes each applicant's ability to repay a loan. Below is the distribution of the target feature and some of the independent features. The target feature has an imbalanced-data problem because the positive class makes up only 8% of the full data.

Target Feature: Loan Default

Below are some of the important numeric independent features and their histograms. They were picked just to illustrate the exercise. They all sit in different ranges and scales, e.g. AMT_ANNUITY reaches into the millions, while OWN_CAR_AGE can go at most up to 90.

AMT_ANNUITY; AMT_CREDIT; AMT_GOODS_PRICE; AMT_INCOME_TOTAL; DAYS_BIRTH; OWN_CAR_AGE;

Histogram of numeric features: Original Data

Below is the combined box plot of the original data. It looks highly distorted because features with different scales and heavy skew share one space.

Combined Box Plot: Original Data

Transformation

Many statistical algorithms assume normally distributed features. Deep learning and regression-type algorithms also benefit from normally distributed data.

Transformation is required to treat skewed features and bring them closer to a normal distribution. Right-skewed features can be transformed toward normality with a square-root, cube-root, or logarithm transformation.

As per the above histograms, AMT_ANNUITY, AMT_CREDIT, AMT_GOODS_PRICE, AMT_INCOME_TOTAL, and OWN_CAR_AGE are skewed numeric features, while DAYS_BIRTH is roughly normally distributed.
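
To confirm this numerically rather than only by eye, the skewness of each column can be computed directly. Below is a minimal sketch; the file name application_train.csv is an assumption about where the dataset lives, not something stated in the article:

```python
import pandas as pd

# Hypothetical path; point this at wherever the dataset is stored.
df = pd.read_csv("application_train.csv")

num_cols = ["AMT_ANNUITY", "AMT_CREDIT", "AMT_GOODS_PRICE",
            "AMT_INCOME_TOTAL", "DAYS_BIRTH", "OWN_CAR_AGE"]

# skew() near 0 means roughly symmetric; a large positive value
# means a long right tail, i.e. a right-skewed feature.
print(df[num_cols].skew().sort_values(ascending=False))
```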

Skewness can arise for one of two reasons:

  • Presence of extreme, abnormal outliers, which may not be important to us.
  • The feature's natural distribution is skewed, and the tail is important to us. This is the situation in most real-life cases.

Introduction of the log transformation: As the left graph exhibits, the output of the log function grows very slowly for larger positive values. So higher values are compressed far more than the lower observations.

Effects of transformation: A skewed numeric feature may become approximately normally distributed after a log transformation. For example, in the graph below, AMT_CREDIT is close to normally distributed after the log transformation.

Before and After Log Transformation: AMT_CREDIT
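
Reproducing that before/after comparison takes only a couple of lines, continuing from the earlier sketch. np.log1p (log(1 + x)) is used here as a safe stand-in for a plain log in case the feature contains zeros; that choice is an assumption, since the article only says "log":

```python
import numpy as np

credit = df["AMT_CREDIT"]
credit_log = np.log1p(credit)  # log(1 + x): well-defined even at x = 0

print("skew before:", round(credit.skew(), 2))
print("skew after :", round(credit_log.skew(), 2))
credit_log.hist(bins=50)  # histogram is roughly bell-shaped now
```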
  • Effect of log transformation on a skewed target feature (the regression case): a log transformation may bring the skewed target to normality. And if our target feature is normally distributed, the algorithm will give roughly equal importance to all the samples; this is related to homoscedasticity. It is the regression equivalent of treating the imbalanced-data problem of a categorical target, like the one in our dataset. So it is good to have a normally distributed target feature.
  • Effect of log transformation on a skewed independent feature: a log transformation may bring the independent feature to normality, as above, where AMT_CREDIT is nearly normally distributed after the log. But it may not improve the relationship between the target and the independent feature. So treating skewed independent features may or may not improve modelling accuracy; it all depends on the original causal relationship between the two.

Scaling

Scaling rescales the data; it is used when we want features to be compared on the same scale by our algorithm. And when all features are on the same scale, it also helps the algorithm understand their relative relationships better.

If features are transformed to normality, scaling should be applied after the transformation.
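
A minimal sketch of this "transform first, then scale" ordering, using an sklearn Pipeline. The list skewed_cols is hypothetical (the right-skewed columns identified earlier), and missing values are assumed to have been imputed already:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, MinMaxScaler

skewed_cols = ["AMT_ANNUITY", "AMT_CREDIT", "AMT_GOODS_PRICE",
               "AMT_INCOME_TOTAL", "OWN_CAR_AGE"]

pipe = Pipeline([
    ("log", FunctionTransformer(np.log1p)),  # step 1: treat skewness
    ("scale", MinMaxScaler()),               # step 2: rescale to [0, 1]
])

X_scaled = pipe.fit_transform(df[skewed_cols])
```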

Which algorithms may benefit from scaling? Scaling helps distance-based algorithms and also speeds up convergence.

Linear and logistic regression, KMeans/KNN, neural networks, and PCA will benefit from scaling.

Which algorithms may not benefit from scaling? Some algorithms are independent of scaling. Entropy- and information-gain-based techniques are not sensitive to monotonic transformations.

Tree-based algorithms, such as decision trees, random forests, and boosted trees (GBM, LightGBM, XGBoost), may not benefit from scaling.

During scaling/standardizing/normalizing we will follow the sklearn vocabulary, so it is a good choice to use the general word Scaling instead of Standardizing or Normalizing.

A scaler fitted on the train data will be used to transform the test set. Never fit the scaler again on the test data.
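
A minimal sketch of that rule, reusing df and num_cols from the earlier sketches:

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X_train, X_test = train_test_split(df[num_cols], test_size=0.2,
                                   random_state=42)

scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit on train only
X_test_scaled = scaler.transform(X_test)        # reuse the train min/max
```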

Sklearn primarily has the following four scalers:

1. MinMaxScaler

2. RobustScaler

3. StandardScaler

4. Normalizer

MinMaxScaler should be the first choice for scaling. For each feature, every value has the feature's minimum subtracted from it and is then divided by the range (the original maximum minus the minimum) of the same feature. Its default output range is [0, 1].
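
In sklearn this is a single call, and the manual formula gives the same result; a sketch assuming the columns contain no missing values:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = df[num_cols].to_numpy()  # assumes NaNs were imputed beforehand
scaled = MinMaxScaler().fit_transform(X)

# The same computation by hand, column-wise: (x - min) / (max - min)
manual = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
assert np.allclose(scaled, manual)
```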

Below is the histogram of all 6 features after MinMax scaling. We haven't log-transformed any of the features before scaling. MinMaxScaler hasn't changed the internal distribution of each feature, and it has brought them all onto the same scale.

Histogram after MinMax Scaling

Below is the combined box plot of all 6 features after scaling, and all of them fall in the range [0, 1]. The internal spacing between each feature's values has been maintained, and their relative distributions also look better compared to the original data.

Box Plot after MinMax Scaling

RobustScaler can be used when your data has large outliers and we want to subdue their effect. But unimportant outliers should be removed in the first place. RobustScaler subtracts the column's median and divides by the interquartile range.
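
A minimal usage sketch; the quantile_range shown is just sklearn's default interquartile range, written out for clarity:

```python
from sklearn.preprocessing import RobustScaler

# (x - median) / IQR, so extreme outliers have little influence
# on the statistics used for scaling.
robust = RobustScaler(quantile_range=(25.0, 75.0))
X_robust = robust.fit_transform(df[num_cols])
```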

The following graph shows the histogram of features after RobustScaler. Though the histograms look similar to the original distributions, the internal distances between each feature's values are no longer maintained as in the original data.

Histogram after Robust Scaler

Also, as seen in the box plot below, the range is no longer [0, 1]. The relative spacing between each feature's values is distorted and no longer the same. Using RobustScaler in this case would pass wrong information about the underlying data to the modelling process.

Boxplot after RobustScaler

StandardScaler rescales each column to have mean 0 and standard deviation 1. It standardizes a feature by subtracting the mean and dividing by the standard deviation. If the original distribution is not normal, standardizing may distort the relative space among the features.
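
A minimal sketch, again reusing df and num_cols; the printout verifies the promised mean-0 / standard-deviation-1 property:

```python
from sklearn.preprocessing import StandardScaler

X_std = StandardScaler().fit_transform(df[num_cols])

# Every column should now have mean ~0 and standard deviation ~1.
print(X_std.mean(axis=0).round(6))
print(X_std.std(axis=0).round(6))
```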

Below is the histogram of features after applying StandardScaler. The distributions look similar to the original ones, but they are not the same: the internal distances between observations changed during standard scaling.

Following is the combined box plot of features after standard scaling. As expected, it distorts the relative distances between feature values, which looked better after MinMax scaling.

Box Plot After Standard Scaler

Normalizer is applied to rows, not columns, so sklearn users shouldn't get confused and should generally not use Normalizer for feature scaling. Some use cases for Normalizer involve comparing multiple entities over the same time series, e.g. the movements of multiple stocks over a given period.
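
A tiny sketch on toy data that makes the row-wise behaviour visible:

```python
import numpy as np
from sklearn.preprocessing import Normalizer

X = np.array([[3.0, 4.0],
              [1.0, 1.0]])

# Each ROW is rescaled to unit L2 norm; the columns are untouched.
print(Normalizer(norm="l2").fit_transform(X))
# [[0.6   0.8  ]   <- row norm was 5
#  [0.707 0.707]]  <- row norm was sqrt(2)
```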

Conclusion

  • A skewed target feature should be treated for normality before modelling, especially when the outliers are also important
  • The effect of treating a skewed independent feature should be understood during the analysis
  • MinMaxScaler should be the first choice for scaling
  • Experiments and observations can help us further in deciding on the right approach

As always, I welcome your thoughts and feedback. I am also reachable on LinkedIn.

