Hi,
I am fitting a very simple two-level model using R2MLwiN.
I was using the "coef" function to save the coefficients as a CSV file.
e.g.:
fixedmodel.coeff <- coef(fixedmodel)
write.csv(fixedmodel.coeff, file = "modelX.csv")
However, I would also like to save the number of observations, standard errors, deviance statistics, and p-values.
Can anyone help me with that?
Cheers,
Alice
Re: exporting results
We don't currently provide an automated way to extract all of this information together, so you will need to extract it from the returned model object. If, for example, you had run the following:

> require(R2MLwiN)
> data(tutorial)
> (mymodel <- runMLwiN(normexam ~ 1 + (1 | student), data = tutorial))

-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
MLwiN (version: unknown or >2.35) multilevel model (Normal)
Estimation algorithm: IGLS Elapsed time : 0.55s
Number of obs: 4059 (from total 4059) The model converged after 3 iterations.
Log likelihood: -5754.7
Deviance statistic: 11509.4
---------------------------------------------------------------------------------------------------
The model formula:
normexam ~ 1 + (1 | student)
Level 1: student
---------------------------------------------------------------------------------------------------
The fixed part estimates:
Coef. Std. Err. z Pr(>|z|) [95% Conf. Interval]
Intercept -0.00011 0.01568 -0.01 0.9942 -0.03084 0.03061
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
---------------------------------------------------------------------------------------------------
The random part estimates at the student level:
Coef. Std. Err.
var_Intercept 0.99764 0.02215
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-

then you can extract the number of observations with:

> nobs(mymodel)
[1] 4059

the standard errors with:

> sqrt(diag(vcov(mymodel)))
FP_Intercept RP1_var_Intercept
0.01567755 0.02214525

the deviance statistics with:

> logLik(mymodel)
'log Lik.' -5754.683 (df=2)
> deviance(mymodel)
[1] 11509.37

and, finally, the p-values with:

> 2 * pnorm(abs(coef(mymodel) / sqrt(diag(vcov(mymodel)))), lower.tail = FALSE)
FP_Intercept RP1_var_Intercept
0.9942026 0.0000000

If you would like to extract any of the other quantities that are reported, you can find out how to do this by looking at the functions in the following file: https://github.com/rforge/r2mlwin/blob/ ... nfitIGLS.R. You can see what information related to a model is stored with the following:

> str(mymodel)
Formal class 'mlwinfitIGLS' [package "R2MLwiN"] with 20 slots
..@ version : chr "MLwiN (version: unknown or >2.35)"
..@ Nobs : num 4059
..@ DataLength : num 4059
..@ Hierarchy : NULL
..@ D : chr "Normal"
..@ Formula :Class 'formula' length 3 normexam ~ 1 + (1 | student)
.. .. ..- attr(*, ".Environment")=<environment: R_GlobalEnv>
..@ levID : chr "student"
..@ FP : Named num -0.000114
.. ..- attr(*, "names")= chr "FP_Intercept"
..@ RP : Named num 0.998
.. ..- attr(*, "names")= chr "RP1_var_Intercept"
..@ RP.cov : num [1, 1] 0.00049
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "RP1_var_Intercept"
.. .. ..$ : chr "RP1_var_Intercept"
..@ FP.cov : num [1, 1] 0.000246
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "FP_Intercept"
.. .. ..$ : chr "FP_Intercept"
..@ LIKE : num 11509
..@ elapsed.time: Named num 0.55
.. ..- attr(*, "names")= chr "elapsed"
..@ call : language runMLwiN(Formula = normexam ~ 1 + (1 | student), data = tutorial)
..@ residual : list()
..@ Converged : logi TRUE
..@ Iterations : num 3
..@ Meth : num 1
..@ nonlinear : num [1:2] 0 1
..@ data :'data.frame': 4059 obs. of 3 variables:
.. ..$ normexam : num [1:4059] 0.261 0.134 -1.724 0.968 0.544 ...
.. ..$ student : Factor w/ 198 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
.. ..$ Intercept: num [1:4059] 1 1 1 1 1 1 1 1 1 1 ...
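To come back to the original question of writing everything out to CSV: building on the accessors above, one way to collect the requested quantities into files might look like the following sketch (the file names and column names are illustrative, not part of R2MLwiN):

```r
# Assemble the estimates, standard errors and Wald p-values into one table
est <- coef(mymodel)
se <- sqrt(diag(vcov(mymodel)))
pval <- 2 * pnorm(abs(est / se), lower.tail = FALSE)

results <- data.frame(term = names(est), estimate = est,
                      std.error = se, p.value = pval)
write.csv(results, file = "modelX.csv", row.names = FALSE)

# Model-level summaries (number of observations and deviance) in a second file
summary.stats <- data.frame(nobs = nobs(mymodel),
                            deviance = deviance(mymodel))
write.csv(summary.stats, file = "modelX_stats.csv", row.names = FALSE)
```

Note that `coef(mymodel)` here returns both the fixed-part (FP_) and random-part (RP_) parameters, so the p-values for the variance terms should be interpreted with caution.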
Re: exporting results
Thank you very much!
This is already very helpful.