## Inconsistent results from runmlwin and svy:mean

Welcome to the forum for runmlwin users. Feel free to post your questions about runmlwin here. The Centre for Multilevel Modelling takes no responsibility for the accuracy of these posts, as we are unable to monitor them closely. Do go ahead and post your question, and thank you in advance if you find the time to post any answers!

runmlwin: Running MLwiN from within Stata — http://www.bristol.ac.uk/cmm/software/runmlwin/
tjsduq64
Posts: 31
Joined: Mon Jul 15, 2019 10:04 pm

### Inconsistent results from runmlwin and svy:mean

Hello,

Edit: I should provide more context. I am analyzing data from a complex survey (multistage sampling) design. I wanted to compare a multilevel approach using MLwiN with a svyset approach in Stata. The actual data structure has three levels, but below I show only a one-level structure to simplify. To incorporate the three-level structure, I would use three levels in MLwiN and jackknife replicate weights with svyset in Stata.

I compared baseline statistics of an RCT using runmlwin and svy: mean + estat sd, variance.
At the bottom of the post, you can see the results as an attachment. For the unweighted version, the means and variances are the same (or very similar) for both methods. For the weighted version, the means are still the same, but the variances are vastly different: svy: mean gives larger variances for the treatment group, while runmlwin gives the opposite.

Below is my code for svy: mean. I am using the final weight w.

```stata
**********************baseline statistics with descriptive analysis*****************

/////*unweighted*/////

local outcomes test1 test2 test3

foreach outcome in `outcomes' {
    mean `outcome', over(treatment)
    estat sd, variance
}

/////*weighted*/////

local outcomes test1 test2 test3

svyset [pweight=w]

foreach outcome in `outcomes' {
    svy: mean `outcome', over(treatment)
    estat sd, variance
}
```
Below is my code for runmlwin. It might be confusing because I had to modify the code to post it here. I am using the final weight w at level 1, which I assume is the same approach as svy: mean. Also, I used separate coding, with trt (treatment) and ctrl (control) in both the fixed part and the random part, instead of the conventional contrast coding with an intercept. This shouldn't affect the results, and as you can see from the unweighted version, it was not the issue.

```stata
**********************baseline statistics with mlwin*****************

/////*unweighted*/////

***set a loop for a model for cognitive outcomes

eststo clear
local outcomes test1 test2 test3
local n : word count `outcomes'
local residuals a b c d e f g h i j k l m n o p q r s t u v w y z

forvalues i = 1/`n' {
    local outcome = word("`outcomes'", `i')
    local residual = word("`residuals'", `i')

    *** treatment (F, R)

    runmlwin `outcome' trt ctrl, ///
        level1(HSIS_CHILDID: trt ctrl, diagonal residuals(`residual')) nopause

    eststo
}

/////*weighted at level 1 only*/////

***set a loop for a model for cognitive outcomes

eststo clear
local outcomes test1 test2 test3
local n : word count `outcomes'
local residuals a b c d e f g h i j k l m n o p q r s t u v w y z
destring CHILDCOHORT, replace

forvalues i = 1/`n' {
    local outcome = word("`outcomes'", `i')
    local residual = word("`residuals'", `i')

    *** treatment (F, R)

    runmlwin `outcome' trt ctrl, ///
        level1(HSIS_CHILDID: trt ctrl, weightvar(w) diagonal residuals(`residual')) nopause

    eststo
}
```
I cannot figure out why the variances differ once I apply the weight. Obviously, it may be hard to tell without looking at the data, but could anyone look at the code and see if something is wrong there, especially in the weighted approaches for svy: mean and runmlwin? Should the two blocks of code do the same thing, or did I miss something?

Sun
Attachments
20190822_151433.png
Last edited by tjsduq64 on Mon Aug 26, 2019 5:49 pm, edited 1 time in total.
GeorgeLeckie
Posts: 430
Joined: Fri Apr 01, 2011 2:14 pm

### Re: Inconsistent results from runmlwin and svy:mean

Dear Sun,

I am not sure, as I am not that experienced with Stata's svyset commands.

runmlwin should give the same results as Stata's mixed command, so I suggest you first compare your svyset method to mixed (using mixed's weight options).

Assuming mixed and runmlwin give the same results, I then suggest you approach Stata support about why you see the discrepancy between svyset and mixed.
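A minimal sketch of such a comparison might look like this (the outcome y, weight w, and cluster variable Program are placeholders, not names from your data):

```stata
* Sketch only: fit the same weighted mean two ways and compare variances
svyset [pweight=w]
svy: mean y
estat sd, variance

* Weight applied at the lowest level of a two-level mixed model
mixed y [pweight=w] || Program:
```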

Best wishes

George
tjsduq64
Posts: 31
Joined: Mon Jul 15, 2019 10:04 pm

### Re: Inconsistent results from runmlwin and svy:mean

GeorgeLeckie wrote: Fri Aug 23, 2019 4:02 pm
It seems that the svyset command doesn't allow weights at each level, and I couldn't get the mixed command to work with just a level-1 weight. So here I compared the mixed and runmlwin commands, as you suggested. Could you look at the code and see if I am doing this correctly?
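(One approach I have not verified: give mixed a constant weight of 1 at the higher levels, so that only the level-1 weight varies. A sketch, using the variable names from my models below:)

```stata
* Untested sketch: unit weights at levels 2 and 3, so only the child-level weight varies
gen double one = 1
mixed PPVT TreatmentStatus [pweight=w_child] ///
    || Program:, pweight(one) ///
    || Center:, pweight(one) ///
    residual(independent, by(TreatmentStatus))
```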

This time, I compared the mixed command to the runmlwin command with 3 levels specified, in both unweighted and weighted analyses. Once again, the means and variances from the unweighted analysis are very similar between the two commands.

However, for the weighted analysis, the means are similar but the variances are substantially different. With mixed, the treatment group has larger variances, but they do not appear significantly different from those of the control group. With runmlwin, the treatment group has much smaller variances, and the differences are statistically significant, although I don't show the confidence intervals.

Here is my code for the mixed command.

```stata
/////*unweighted at 3 levels*/////

local outcomes PPVT LetterWord AppliedProblems ///
    OralComprehension Spelling

foreach outcome in `outcomes' {
    mixed `outcome' TreatmentStatus ///
        || Program: ///
        || Center:, ///
        residual(independent, by(TreatmentStatus))
}

/////*weighted and 3 levels*/////

local outcomes PPVT LetterWord AppliedProblems ///
    OralComprehension Spelling

foreach outcome in `outcomes' {
    mixed `outcome' TreatmentStatus [pweight=w_child] ///
        || Program:, pweight(w_program) ///
        || Center:, pweight(w_center) ///
        residual(independent, by(TreatmentStatus))
}
```

Here is my code for the runmlwin command.

```stata
/////*unweighted and 3 levels*/////

***set a loop for a model for cognitive outcomes

local outcomes PPVT LetterWord AppliedProblems ///
    OralComprehension Spelling
local n : word count `outcomes'
local residuals a b c d e f g h i j k l m n o p q r s t u v w y z
destring CHILDCOHORT, replace

forvalues i = 1/`n' {
    local outcome = word("`outcomes'", `i')
    local residual = word("`residuals'", `i')

    eststo clear

    *** treatment (F, R)

    runmlwin `outcome' Treatment Control, ///
        level3(Program: cons) ///
        level2(Center: cons) ///
        level1(Child: Treatment Control, diagonal residuals(`residual')) nopause
}

/////*weighted at 3 levels*/////

***set a loop for a model for cognitive outcomes

local outcomes PPVT LetterWord AppliedProblems ///
    OralComprehension Spelling
local n : word count `outcomes'
local residuals a b c d e f g h i j k l m n o p q r s t u v w y z
destring CHILDCOHORT, replace

forvalues i = 1/`n' {
    local outcome = word("`outcomes'", `i')
    local residual = word("`residuals'", `i')

    *** treatment (F, R)

    runmlwin `outcome' Treatment Control, ///
        level3(Program: cons, weightvar(w_program)) ///
        level2(Center: cons, weightvar(w_center)) ///
        level1(Child: Treatment Control, weightvar(w_child) diagonal residuals(`residual')) nopause
}
```

Attachments
20190823_164431.png
20190823_164458.png