Problem with variance absolute

Welcome to the forum for MLwiN users. Feel free to post your questions about the MLwiN software here. The Centre for Multilevel Modelling takes no responsibility for the accuracy of these posts, as we are unable to monitor them closely. Do go ahead and post your question, and thank you in advance if you find the time to post any answers!

Remember to check out our extensive software FAQs which may answer your question: http://www.bristol.ac.uk/cmm/software/s ... port-faqs/
creditheartpulse
Posts: 1
Joined: Wed Mar 27, 2024 6:57 am

Problem with variance absolute

Post by creditheartpulse »

Hello everyone,

I'm running a three-level binary response model, estimated by MQL1 and extended to PQL2. I am checking the effect of some predictors in order to explain student achievement. One of the predictors shows behaviour that is strange to me: the absolute variances of the model exceed the absolute (total) variance of model 0. I asked my advisor, who told me to calculate the variance at each level by taking the coefficient minus the SE (of each level's variance), then subtract these from model 0's to find the absolute and then the explained variance. But it still exceeds model 0's total.

Is there anything else to do in order to fix this?
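For reference, this is the calculation I am trying to do, sketched in Python with hypothetical variance estimates rather than my real output (on the latent logit scale the level-1 variance is fixed at pi^2/3, which is one reason higher-level variances can grow rather than shrink when predictors are added):

```python
import math

# Latent-scale level-1 variance for a logit link (standard result)
LEVEL1_VAR = math.pi ** 2 / 3  # ~ 3.29

def explained_variance(null_vars, full_vars):
    """Proportional reduction in total latent-scale variance.

    null_vars / full_vars: dicts of random-effect variances at
    levels 2 and 3 from the null (model 0) and full models.
    """
    total_null = sum(null_vars.values()) + LEVEL1_VAR
    total_full = sum(full_vars.values()) + LEVEL1_VAR
    return (total_null - total_full) / total_null

# Hypothetical estimates: school (level 3) and class (level 2) variances
null = {"school": 0.80, "class": 0.45}
full = {"school": 0.55, "class": 0.30}
print(round(explained_variance(null, full), 3))
```

If the full model's variances are larger, this quantity comes out negative, which is the overlap I am describing.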
koalapoof
Posts: 1
Joined: Fri Apr 19, 2024 4:53 am

Re: Problem with variance absolute

Post by koalapoof »

Check for Collinearity: High collinearity among predictor variables can lead to overlapping variances and difficulty in interpreting the effects of individual predictors. Use techniques such as variance inflation factor (VIF) analysis to assess collinearity and consider removing or combining highly correlated variables.
Consider Different Model Specifications: Depending on the nature of your data and research question, consider alternative model specifications or transformations of your predictor variables. For example, you may try including interaction terms, polynomial terms, or different functional forms to better capture the relationship between predictors and the outcome variable.
Evaluate Model Fit: Assess the overall fit of your model using goodness-of-fit statistics such as the deviance, AIC (Akaike Information Criterion), or BIC (Bayesian Information Criterion). A well-fitting model should adequately explain the variability in the outcome variable without overfitting.
Explore Residual Analysis: Examine the residuals of your model to identify any patterns or deviations from the assumed model structure. Plotting residuals against predicted values or predictor variables can help identify potential issues such as heteroscedasticity or nonlinearity.
Seek Additional Guidance: Consult with your advisor or other experts in your field for further guidance and interpretation of your model results. They may offer insights or suggest alternative approaches to address the issue.
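For the collinearity step above, the VIF is easy to compute by hand; a minimal sketch in plain numpy with hypothetical data (not MLwiN output), where x2 is deliberately made near-collinear with x1:

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor for column j of predictor matrix X:
    1 / (1 - R^2) from regressing X[:, j] on the other columns."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])  # add an intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

# Hypothetical data: x2 is nearly a copy of x1, x3 is independent
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = x1 + 0.1 * rng.normal(size=500)
x3 = rng.normal(size=500)
X = np.column_stack([x1, x2, x3])
print([round(vif(X, j), 1) for j in range(3)])
```

Values above roughly 5-10 are a common rule of thumb for problematic collinearity; here x1 and x2 will show very large VIFs while x3 stays near 1.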
tetroupploy
Posts: 4
Joined: Wed Dec 20, 2023 8:55 am

Re: Problem with variance absolute

Post by tetroupploy »

Hello,
When you talk about a quadratic relationship, you're probably talking about it on a global scale, because having only two measures per child would only allow for a linear relationship. When someone uses terminology like baseline and follow-up, it implies that something happens in between, but they don't specify what. In such circumstances, one would typically fit a two-level model with the baseline measure as a predictor for the response at follow-up, or alternatively use difference scores to capture the change in development score. Not sure if that helps.
Best wishes!
Tetroupploy
krampusiceskates
Posts: 1
Joined: Thu May 02, 2024 7:39 am

Re: Problem with variance absolute

Post by krampusiceskates »

Examine the correlation matrix or variance inflation factors (VIFs) to assess whether multicollinearity might be affecting your results. High levels of multicollinearity can inflate standard errors and make it difficult to interpret the effects of individual predictors.
Consider Model Simplification: If your model includes many predictor variables, consider simplifying it by removing non-significant variables or combining related variables into composite measures. This can help reduce complexity and improve the interpretability of your results.
Explore Interaction Effects: Investigate whether there are interaction effects between your predictor variables that might be influencing the observed behavior. Including interaction terms in your model can help capture more nuanced relationships between variables.
Evaluate Model Fit: Assess the overall fit of your model using appropriate goodness-of-fit statistics or diagnostic plots. Poor model fit can indicate that your model is not adequately capturing the underlying patterns in the data, which may require rethinking your modeling approach.
Seek Input from Colleagues or Experts: Consider consulting with colleagues or experts in your field who have experience with similar modeling techniques. They may be able to offer insights or suggestions for troubleshooting the issue.
Explore Alternative Modeling Approaches: If you're still unable to resolve the issue, consider exploring alternative modeling approaches or techniques that might be better suited to your data and research question. This could include different types of regression models, machine learning algorithms, or Bayesian methods.
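On the model-fit point, AIC and BIC are simple to compute from the deviance; a sketch with hypothetical deviance values (one caution for your case: the quasi-likelihood deviance from MQL/PQL estimation of a binary model is not a true likelihood, so comparisons based on it should be treated sceptically):

```python
import math

def aic(deviance, k):
    """AIC from the deviance (-2 log-likelihood) and k estimated
    parameters: smaller is better."""
    return deviance + 2 * k

def bic(deviance, k, n):
    """BIC penalises extra parameters more heavily as n grows."""
    return deviance + k * math.log(n)

# Hypothetical deviances: a null model (5 parameters) vs a model
# with 4 extra predictors (9 parameters)
print(aic(2410.0, 5), aic(2386.0, 9))  # -> 2420.0 2404.0
```

Here the fuller model is preferred because its AIC is lower despite the penalty for the extra parameters.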