Apply Student t regression (Section 5.7) to the stack loss data in Example 4.4, with the degrees of freedom ν treated as unknown. Lange et al. (1989) consider these data under normal linear regression and Student t regression and show support for the latter; in fact they report an estimate ν = 1.1.
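As a concrete illustration, a minimal sketch of the Student t regression with ν unknown, written in Python with PyMC, is given below. The software, the diffuse prior scales, and the Exponential prior on ν are assumptions made for illustration, not the book's specification; X and y are the predictor matrix and response arrays set out after the data description below.

```python
import numpy as np
import pymc as pm


def student_t_regression(X, y):
    """Student t regression with the degrees of freedom nu treated as unknown."""
    p = X.shape[1]
    with pm.Model() as model:
        beta0 = pm.Normal("beta0", mu=0.0, sigma=100.0)          # intercept, diffuse prior
        beta = pm.Normal("beta", mu=0.0, sigma=100.0, shape=p)   # slopes, diffuse priors
        sigma = pm.HalfNormal("sigma", sigma=10.0)               # scale
        # Weakly informative prior on nu (an assumed choice); small posterior
        # values of nu indicate heavy-tailed errors.
        nu = pm.Exponential("nu", lam=1.0 / 30.0)
        mu = beta0 + pm.math.dot(X, beta)
        pm.StudentT("y_obs", nu=nu, mu=mu, sigma=sigma, observed=y)
        idata = pm.sample(draws=2000, tune=1000, chains=2, random_seed=1)
    return idata
```

A posterior for ν concentrated on small values, consistent with the estimate of 1.1 reported by Lange et al. (1989), is what indicates support for the heavy-tailed model over normal linear regression.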

The stack loss data, also much analysed, illustrate both predictor redundancy and observation outliers. They relate to the percent of unconverted ammonia escaping from a plant during 21 days of operation in a stage in the production of nitric acid. The three predictors are as follows: x2, airflow, a measure of the rate of operation of the plant; x3, the inlet temperature of cooling water circulating through coils in a countercurrent absorption tower; and x4, which is proportional to the concentration of acid in the tower. Small values of y correspond to efficient absorption of the nitric oxides. Previous analysis suggests x4 as most likely to be redundant and observations {3, 4, 21} as most likely to be outliers.
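For convenience, the 21 observations can be held in arrays as below. The values are transcribed from the standard published version of the Brownlee stack loss data (as distributed, for example, with R's built-in stackloss data frame) and should be checked against the copy used in Example 4.4.

```python
import numpy as np

# Predictors for the 21 days: columns are x2 = airflow, x3 = cooling-water
# inlet temperature, x4 = acid concentration (verify against the source).
X = np.array([
    [80, 27, 89], [80, 27, 88], [75, 25, 90], [62, 24, 87], [62, 22, 87],
    [62, 23, 87], [62, 24, 93], [62, 24, 93], [58, 23, 87], [58, 18, 80],
    [58, 18, 89], [58, 17, 88], [58, 18, 82], [58, 19, 93], [50, 18, 89],
    [50, 18, 86], [50, 19, 72], [50, 19, 79], [50, 20, 80], [56, 20, 82],
    [70, 20, 91],
], dtype=float)

# Stack loss response y (smaller values mean more efficient absorption).
y = np.array([42, 37, 37, 28, 18, 18, 19, 20, 15, 14, 14,
              13, 11, 12, 8, 7, 8, 8, 9, 15, 15], dtype=float)
```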

Here two methods for variable selection are considered and combined with outlier detection as in (4.7), with ω = 0.1 and η = 7. The assumed priors for the predictor coefficients βj (j = 2, . . . , 4) are N(0, 1000), while the intercept has β1 ∼ N(20, 1000) and the precision 1/σ² ∼ Ga(1, 0.001). The product of the selection indicator and the sampled value of the coefficient is denoted κj = δjβj.
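A hedged sketch of how this selection-plus-outlier model could be coded in PyMC follows. The variance-inflation form of the outlier mechanism (error variance multiplied by η with prior probability ω) is an assumption standing in for the book's formulation (4.7), and N(0, 1000) and N(20, 1000) are read as normal priors with variance 1000.

```python
import numpy as np
import pymc as pm


def selection_outlier_model(X, y, omega=0.1, eta=7.0):
    """Variable selection with delta_j ~ Bern(0.5) and kappa_j = delta_j * beta_j,
    combined with an outlier mixture in which an observation's error variance is
    inflated by eta with prior probability omega (an assumed stand-in for (4.7))."""
    n, p = X.shape
    with pm.Model() as model:
        # Priors as stated in the text, reading N(m, 1000) as variance 1000.
        beta1 = pm.Normal("beta1", mu=20.0, sigma=np.sqrt(1000.0))   # intercept
        beta = pm.Normal("beta", mu=0.0, sigma=np.sqrt(1000.0), shape=p)
        tau = pm.Gamma("tau", alpha=1.0, beta=0.001)                 # 1/sigma^2 ~ Ga(1, 0.001)
        sigma2 = 1.0 / tau

        delta = pm.Bernoulli("delta", p=0.5, shape=p)                # selection indicators
        kappa = pm.Deterministic("kappa", delta * beta)              # kappa_j = delta_j * beta_j

        outlier = pm.Bernoulli("outlier", p=omega, shape=n)          # outlier indicators
        var_i = sigma2 * (1.0 + (eta - 1.0) * outlier)               # eta-fold variance inflation

        mu = beta1 + pm.math.dot(X, kappa)
        pm.Normal("y_obs", mu=mu, sigma=pm.math.sqrt(var_i), observed=y)

        # Two chains with 1000 burn-in, as in the text; the discrete indicators
        # are updated by PyMC's default Gibbs/Metropolis steps.
        idata = pm.sample(draws=10000, tune=1000, chains=2, random_seed=1)
    return idata
```

Posterior means of the sampled outlier indicators correspond to the outlier probabilities quoted below for observations 4 and 21, and posterior means of delta to the inclusion probabilities for x2, x3 and x4.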

In the first model, variable selection is based on binary indicators δj ∼ Bern(0.5), j = 2, . . . , 4. A two-chain run of 10 000 iterations (1000 burn-in) shows highest posterior probabilities of outlier status for observations 4 and 21, namely 0.74 and 0.94, as compared to prior probabilities of 0.10. The posterior probability that δ2 = 1 (relating to the first predictor x2) is 1, while those for the second and third predictors are 0.47 and 0.04. While the posterior density of κ2 is clearly confined to positive values, those for κ3 and κ4 straddle zero. One may obtain Bayes factors on the various models by considering the K = 2³ = 8 models corresponding to the combinations of sampled indicators δj = 1 (predictor included) and δj = 0 (predictor excluded), and accumulating the frequencies of these combinations over the iterations.
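A simple way to carry out that accumulation, assuming delta_draws holds the sampled (δ2, δ3, δ4) indicators stacked over iterations and chains: count the frequency of each of the 8 configurations; since every model has prior probability 0.5³ = 1/8, the Bayes factor for one model against another is estimated by the ratio of their posterior frequencies. The names used below (delta_draws, idata) are illustrative.

```python
from collections import Counter

import numpy as np


def model_table(delta_draws):
    """Posterior probabilities of the K = 2**3 models defined by which of
    x2, x3, x4 are selected.  delta_draws: array of shape (n_draws, 3)
    holding the sampled indicators (delta_2, delta_3, delta_4)."""
    draws = np.asarray(delta_draws, dtype=int)
    counts = Counter(map(tuple, draws.tolist()))
    n = draws.shape[0]
    models = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    return {m: counts.get(m, 0) / n for m in models}


def bayes_factor(post, model_a, model_b):
    """With equal prior model probabilities (1/8 each), the Bayes factor of
    model_a against model_b reduces to the ratio of posterior probabilities."""
    return post[model_a] / post[model_b]


# Usage sketch: compare {x2 only} against {x2, x3}.
# post = model_table(idata.posterior["delta"].values.reshape(-1, 3))
# print(bayes_factor(post, (1, 0, 0), (1, 1, 0)))
```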
