where <img src="https://latex.codecogs.com/svg.latex?L(M_0)" title="L(M_0)" /> and <img src="https://latex.codecogs.com/svg.latex?L(M_1)" title="L(M_1)" /> are the likelihoods of the restricted (null) and full models, respectively.
It can be shown that this statistic asymptotically follows a chi-square distribution with degrees of freedom equal to the difference in the number of parameters between the two models. Given both the *LR* statistic and the degrees of freedom, we can compute the p-value of the test. If the p-value is below a predefined threshold (e.g. 0.05), the two models are significantly different and the full model is considered the better fit to the data.
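The test described above can be sketched as follows. This is a minimal illustration using SciPy's chi-square survival function; the log-likelihood values in the example are hypothetical.

```python
from scipy.stats import chi2

def lr_test(ll_null, ll_full, df_diff):
    """Likelihood-ratio test between two nested models.

    ll_null, ll_full: maximized log-likelihoods of the restricted
    and full models; df_diff: difference in parameter counts.
    """
    lr_stat = -2 * (ll_null - ll_full)
    p_value = chi2.sf(lr_stat, df_diff)  # survival function = 1 - CDF
    return lr_stat, p_value

# Hypothetical log-likelihoods; the full model has 2 extra parameters.
stat, p = lr_test(ll_null=-120.3, ll_full=-115.1, df_diff=2)
if p < 0.05:
    print("full model fits significantly better")
```

Note that `chi2.sf` is preferred over `1 - chi2.cdf` because it stays numerically accurate for large test statistics.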
#### 3.2 Information Criteria
Information criteria can be used to compare both nested and non-nested models. Here we introduce two widely used criteria: the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). For a model, they are calculated as:
<img src="https://latex.codecogs.com/svg.latex?AIC&space;=&space;-2ln(L(M_1))+2Q&space;\\&space;\\&space;\indent&space;BIC&space;=&space;-2ln(L(M_1))+Qln(N)" title="AIC = -2ln(L(M_1))+2Q \\ \\ \indent BIC = -2ln(L(M_1))+Qln(N)" />
where Q is the number of parameters in the model and N is the sample size. **Smaller values of AIC and BIC indicate a better model.**
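The two formulas above can be sketched directly. The log-likelihoods, parameter counts, and sample size below are hypothetical; the example is chosen so that AIC and BIC disagree, illustrating that BIC's ln(N) penalty favors smaller models once N > e² ≈ 7.4.

```python
import math

def aic(log_lik, q):
    """Akaike Information Criterion: -2 ln L + 2Q."""
    return -2 * log_lik + 2 * q

def bic(log_lik, q, n):
    """Bayesian Information Criterion: -2 ln L + Q ln N."""
    return -2 * log_lik + q * math.log(n)

# Hypothetical fits on N = 200 observations:
# model A: 3 parameters, ln L = -120.3; model B: 5 parameters, ln L = -115.1.
print(aic(-120.3, 3), aic(-115.1, 5))       # AIC prefers model B
print(bic(-120.3, 3, 200), bic(-115.1, 5, 200))  # BIC prefers model A
```

Because the criteria penalize complexity differently, they need not agree; it is common to report both.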