With virtually any estimation method (quadrature, Bayesian), the model either fails to converge because of the memory limits of my machine or takes days or weeks to converge.
One often reads about the minimum sample size needed to obtain nearly unbiased estimates, but there is surely a point at which a sample size becomes excessive.
I know that if I randomly drop, say, 50% of the level-2 units, I can achieve convergence within a reasonable amount of time, and the estimates, standard errors, and p-values are virtually the same. I would like references to support what I am finding.
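For concreteness, here is a minimal sketch of that subsampling check in Python with statsmodels. The file name `data.csv` and the columns `y`, `x`, and `cluster` are placeholders, and a linear random-intercept model stands in for whatever model is actually being fit; the point is only that whole level-2 units, not individual rows, are dropped before refitting.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2024)

# Placeholder data set: one row per level-1 unit, with a level-2
# identifier `cluster` (column and file names are assumptions).
df = pd.read_csv("data.csv")

# Randomly keep roughly 50% of the level-2 units (clusters),
# not 50% of the rows -- each cluster is kept or dropped whole.
clusters = df["cluster"].unique()
keep = rng.choice(clusters, size=len(clusters) // 2, replace=False)
sub = df[df["cluster"].isin(keep)]

# Fit the same random-intercept model on the full data and the subsample,
# then compare fixed effects, standard errors, and p-values.
full_fit = smf.mixedlm("y ~ x", df, groups=df["cluster"]).fit()
sub_fit = smf.mixedlm("y ~ x", sub, groups=sub["cluster"]).fit()

print(full_fit.summary())
print(sub_fit.summary())
```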
Are there any articles/textbooks which speak to this issue and provide guidelines/recommendations?
- Subramanian, S. V., Duncan, C., & Jones, K. (2001). "Multilevel perspectives on modeling census data." Environment and Planning A, 33(3), 399–417.
- Snijders, T. A. B., & Bosker, R. J. (2012). "Aggregation." In Multilevel Analysis (2nd ed.). Sage.

