
Maximum Likelihood Computations For Regression With Measurement Error


In the validation subsample, complete data (Y, X, W) are measured on each subject. An EM-type argument shows that this limiting distribution for the X̃(t) is probability model (16) evaluated at the ML estimates; see [27, 29] and the references therein. In Model 5, the RC estimates of β1 had a standard deviation (SD) of 7.28 (Table III) and ranged from −40.29 to 171.12, compared to an SD of 1.32 for ML.

If β′∑x|wβ is small, then the approximation (10) will be valid and the ML and RC estimates should be close.

then do;
  ETA = beta0 + betaX1*X1 + betaX2*X2 + betaX*z;
  LLBIN = y*ETA - log(1 + exp(ETA));
  LL = LL + LLBIN;
end;
end;
/* main study sample log-likelihood - W and Y */
else do;

With this substitution, however, the β(i)'s at the parameter step are no longer used in the imputation step.
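The binary-outcome log-likelihood accumulated in the SAS fragment above can be mirrored in Python. This is a minimal sketch for illustration only; the (y, eta) pairs are invented, not the paper's data:

```python
import math

def bernoulli_loglik(y, eta):
    """Log-likelihood contribution of one binary outcome y under a
    logistic model with linear predictor eta: y*eta - log(1 + exp(eta))."""
    return y * eta - math.log1p(math.exp(eta))

# Accumulate over a toy main-study sample, mirroring the
# LL = LL + LLBIN loop in the SAS fragment.
data = [(1, 0.3), (0, -1.2), (1, 1.5)]  # (y, eta) pairs (made up)
LL = sum(bernoulli_loglik(y, eta) for y, eta in data)
```

Note the sign inside the log: the Bernoulli log-likelihood uses log(1 + exp(eta)), which is what the `log1p` call computes here.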


We also show how maximum likelihood estimates can be found using standard software, in particular using the NLMIXED procedure in SAS.

In the main study we always observe the outcome Y, but sometimes observe the surrogate measure W instead of the exposure X. The resulting MI estimates were severely biased towards zero (Table III). The completed-data likelihood (15) still applies, except that Ỹ(Xi, Wi) replaces Y for validation study subjects. In particular, the likelihood for both internal and external validation designs is easily programmed in PROC NLMIXED in SAS v. 9.1.
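For a main-study subject with only (Y, W), the observed-data likelihood integrates the outcome model against the measurement model. A rough stdlib-only sketch, assuming a logistic outcome model and a normal measurement model (both assumptions are for illustration; NLMIXED uses adaptive Gaussian quadrature rather than the crude trapezoid grid here):

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def obs_loglik(y, w, beta0, beta_x, gamma0, gamma1, sd_xw):
    """Observed-data log-likelihood for a subject with (Y, W) but no X:
    integrate the logistic outcome model f(Y|X) against the measurement
    model X|W ~ N(gamma0 + gamma1*W, sd_xw^2), via a trapezoid grid."""
    mu = gamma0 + gamma1 * w
    lo, hi, n = mu - 6 * sd_xw, mu + 6 * sd_xw, 400
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        eta = beta0 + beta_x * x
        p = 1.0 / (1.0 + math.exp(-eta))
        f_y = p if y == 1 else 1.0 - p
        wgt = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += wgt * f_y * normal_pdf(x, mu, sd_xw) * h
    return math.log(total)
```

As a sanity check, with beta_x = 0 the exposure is irrelevant and the integral collapses to the Bernoulli probability 0.5, so the log-likelihood is log(0.5) regardless of W.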

In this article, we discuss three methods of estimation for such a main study / validation study design: (i) maximum likelihood (ML), (ii) multiple imputation (MI) and (iii) regression calibration (RC). Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. In summary, all methods work well in the internal validation design, although ML has an expected small advantage in efficiency compared to RC. Hence it is important in the analysis of such data to adjust for the error in the measurement of the exposure variables.

The SD estimates using ML were consistently lower than those using RC for all parameters, leading to modest albeit significant 10% efficiency gains (see footnote, Table I) when estimating β1 for the models considered. The poor performance of naïve MI is not surprising, since for the external validation design information on the outcome Y is not available in the validation study. Cole et al. [14] compare regression calibration to multiple imputation via simulations in survival analysis using a Cox proportional hazards model.


In our simulations, Models 3 and 4 had the lowest values of β′∑x|wβ (Table I), and the RC estimator worked best in these models. We have adapted standard software to perform both maximum likelihood and multiple imputation estimation. This is important because maximum likelihood estimators can be more efficient than commonly used moment estimators, and likelihood ratio tests and confidence intervals can be substantially superior to those based on the moment estimates.
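The quadratic form β′∑x|wβ that governs when RC approximates ML can be computed directly from the fitted conditional covariance. A stdlib-only sketch with invented values (not the paper's simulation settings):

```python
def quad_form(beta, sigma):
    """Compute beta' Sigma beta, the quantity whose smallness
    indicates the RC approximation (10) should be accurate."""
    n = len(beta)
    return sum(beta[i] * sigma[i][j] * beta[j]
               for i in range(n) for j in range(n))

# Toy conditional covariance Sigma_{x|w} and coefficient vector
# (invented numbers, purely to show the calculation).
sigma_x_given_w = [[0.04, 0.00],
                   [0.00, 0.09]]
beta = [3.0, 1.0]
check = quad_form(beta, sigma_x_given_w)  # small value => RC close to ML
```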

In this article, we compare these three methods analytically and by simulation under a variety of scenarios. We adopt this method for the external design and refer to it as naïve MI.

The most time-consuming case typically took about 20 seconds per sample, and the Newton-Raphson algorithm used between 25 and 30 iterations.

Regression Calibration Estimators

Regression calibration (RC) is a widely used two-step estimation method. However, the validation substudy sample size of 250 may not be unrealistic [4, 8]. This term controls the estimate of β and reduces the variability that was observed in the external validation design, especially for Models 2 and 5.

At the first step, the likelihood from the measurement model is maximized using validation study data to obtain the regression estimates γ̃, ∑̃x|w.
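The two-step RC scheme (step 1 fits the measurement model on the validation data; step 2 substitutes the predicted exposure E[X|W] into the outcome model) can be sketched as follows, using toy numbers and simple one-predictor least squares rather than the paper's implementation:

```python
def ols_slope_intercept(w, x):
    """Step 1: regress X on W in the validation data to obtain
    gamma-tilde (simple one-predictor least squares)."""
    n = len(w)
    wbar = sum(w) / n
    xbar = sum(x) / n
    sxw = sum((wi - wbar) * (xi - xbar) for wi, xi in zip(w, x))
    sww = sum((wi - wbar) ** 2 for wi in w)
    slope = sxw / sww
    return xbar - slope * wbar, slope

# Toy validation data: W is roughly X plus noise (invented numbers).
w_val = [0.9, 2.1, 2.9, 4.2, 5.0]
x_val = [1.0, 2.0, 3.0, 4.0, 5.0]
g0, g1 = ols_slope_intercept(w_val, x_val)

# Step 2: in the main study, replace the unobserved X by its
# predicted value E[X|W] = g0 + g1*W before fitting the outcome model.
w_main = [1.5, 3.3]
x_hat = [g0 + g1 * w for w in w_main]
```

The outcome model is then fit to (Y, x_hat) as if x_hat were the true exposure, which is exactly why RC degrades when β′∑x|wβ is not small.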


To the extent that MI is a stochastic form of the EM algorithm, it might share in this advantage, although not necessarily in the approximate form presented here. In internal validation designs, the validation study is conducted on a random subsample of subjects in the main study. The 5th and 95th percentiles of the RC estimate of β1 were similar to the extreme values of the MLE, suggesting that RC estimates have extreme values in about 10% of the samples.
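For the external design, the imputation step can only draw X from the measurement model fitted in the validation study, since Y is unavailable there; the M completed-data fits are then combined. A minimal sketch (the per-imputation slope estimates below are hypothetical placeholders, not simulation output):

```python
import random

random.seed(1)

def impute_x(w, g0, g1, sd):
    """Imputation step: draw X from the estimated measurement model
    X|W ~ N(g0 + g1*W, sd^2). In the external design Y cannot
    inform the draw, which is the source of naive MI's bias."""
    return [random.gauss(g0 + g1 * wi, sd) for wi in w]

def rubin_pool(estimates):
    """Rubin's rule for the point estimate: average the M
    completed-data fits."""
    return sum(estimates) / len(estimates)

w_main = [0.5, 1.0, 2.0]
# Hypothetical per-imputation slope estimates from M = 3 completed fits.
beta_hats = [2.8, 3.1, 3.0]
pooled = rubin_pool(beta_hats)
```

Rubin's rules also combine within- and between-imputation variances to form the standard error; only the point-estimate rule is shown here.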

For Model 5, RC exhibited approximately 4% (100*0.122/3) bias when estimating the true β1 = 3, compared to 3% bias for ML, and was much less efficient (RC SD = 0.499 vs. 0.388 for ML).