Akaike Information Criterion


The idea behind AIC

All of us have used AIC for model selection. This blog is about the idea behind AIC: what it is and why it is used for model selection. We are told how to calculate AIC, but, at least in my case, never taught the logic behind it; this blog covers that.

AIC is an estimate of the out-of-sample error, and it is rooted in information theory. Akaike called it an entropy maximization principle, and minimizing AIC is equivalent to maximizing entropy in a thermodynamic system. In the language of information theory: since a model can never capture the exact data-generating process, some information is always lost in representing the process by which the data was generated.

AIC measures the relative loss of information. Since we do not know the exact model, we cannot measure the exact loss, so we measure the relative loss among the candidate models from which we have to select. If we have 3 models with AIC values 100, 102, and 110, then the second model is exp((100 − 102)/2) ≈ 0.368 times as probable as the first model to minimize the information loss. Similarly, the third model is exp((100 − 110)/2) ≈ 0.007 times as probable as the first model to minimize the information loss.
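Turning AIC values into these relative likelihoods is a one-liner; here is a small sketch using the three AIC values from the example above:

```python
import numpy as np

def relative_likelihoods(aics):
    """Compute exp((AIC_min - AIC_i) / 2): how probable each model is,
    relative to the best candidate, to minimize the information loss."""
    aics = np.asarray(aics, dtype=float)
    return np.exp((aics.min() - aics) / 2)

rel = relative_likelihoods([100.0, 102.0, 110.0])
print(rel.round(3))                # [1.    0.368 0.007]

# Normalizing gives the so-called Akaike weights, which sum to 1 and can be
# read as model probabilities within the candidate set.
print((rel / rel.sum()).round(3))
```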

AIC is given by

AIC = 2k − 2 log(L)

where k is the number of estimated parameters and L is the maximized value of the model's likelihood.

When selecting a model (for example, a polynomial of a certain degree), we pick the one with the minimum AIC value. Alternatively, we can shortlist the top 2 or 3 models, collect more data, and then select the one with the minimum AIC; a sketch of this procedure follows. The rest of this blog is about where this formula for AIC comes from.
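As a concrete illustration (my sketch, not from the original post), here is polynomial-degree selection by AIC, assuming Gaussian noise so that −2 log L can be computed from the residual sum of squares. The data-generating function and noise level are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 60)
# Hypothetical ground truth: a degree-2 polynomial plus Gaussian noise.
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(scale=1.0, size=x.size)

def aic_gaussian(y, y_hat, k):
    """AIC = 2k - 2 log L for a least-squares fit with Gaussian errors.
    Up to an additive constant, -2 log L equals n * log(RSS / n)."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return 2 * k + n * np.log(rss / n)

for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    k = degree + 2  # degree + 1 polynomial coefficients, plus the noise variance
    print(degree, round(aic_gaussian(y, y_hat, k), 2))
# Expect the minimum AIC at degree 2, matching the generating process.
```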

In AIC, we minimize a proxy for the KL divergence between the model and the ground-truth function; AIC is an estimate of that proxy. Minimizing AIC is therefore akin to minimizing the KL divergence from the ground truth, and hence minimizing the out-of-sample error. The derivation of AIC is shown in the following two figures.

[Figure 1. Derivation Part I]

[Figure 2. Derivation Part II]
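For reference, here is a condensed sketch of the standard argument the figures walk through (my summary, with f the true density, g(· | θ) the model, θ̂ the MLE, and k the number of parameters):

```latex
\begin{align*}
D_{\mathrm{KL}}\!\left(f \,\middle\|\, g_\theta\right)
  &= \underbrace{\mathbb{E}_f[\log f(X)]}_{\text{constant in }\theta}
     \;-\; \mathbb{E}_f[\log g(X \mid \theta)] \\
\intertext{so minimizing the KL divergence amounts to maximizing the expected
out-of-sample log-likelihood $\mathbb{E}_f[\log g(X \mid \hat\theta)]$. The
in-sample value $\log L(\hat\theta)$ overestimates
$n\,\mathbb{E}_f[\log g(X \mid \hat\theta)]$, because the same data both choose
$\hat\theta$ and score it; a second-order Taylor expansion around the
KL-optimal parameter shows this bias is asymptotically $k$. Correcting for it
and multiplying by $-2$:}
\mathrm{AIC} &= -2\left(\log L(\hat\theta) - k\right)
              = 2k - 2\log L(\hat\theta).
\end{align*}
```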

The Bayesian Information Criterion (BIC) is calculated similarly to AIC: instead of the 2k penalty term, BIC uses k ln(n), where n is the number of data points. It is argued that if the true model is present in the set of candidates, BIC selects it with probability tending to 1 as n tends to infinity. Since we never really have the true model in the candidate set, this property is not highly regarded; AIC, in contrast, minimizes the risk of selecting a very bad model.
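Continuing the polynomial sketch above, only the penalty term changes (the function names here are mine, not a library API):

```python
import numpy as np

def bic_gaussian(y, y_hat, k):
    """BIC = k*log(n) - 2 log L, with the same Gaussian -2 log L
    (up to an additive constant) as in aic_gaussian above."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return k * np.log(n) + n * np.log(rss / n)
```

Since ln(n) > 2 once n ≥ 8, BIC penalizes extra parameters more heavily than AIC on all but the smallest datasets, and so tends to select simpler models.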

References

1. Wikipedia page on AIC

2. Derivation of AIC

