
MTHM017 Advanced Topics in Statistics Ref/Def Assignment 

Please make sure that the submitted work is your own. This is NOT a group assignment; approaches and solutions should not be discussed with other students. Plagiarism and collusion with other students are examples of academic misconduct and will be reported. More information on academic honesty can be found here.

The assignment has three main parts. Part A involves (i) fitting a Poisson regression model to assess the effect of using different priors, and (ii) fitting an auto-regressive process to time series data using the BUGS language in order to estimate missing data. Part B involves using different methods for classification of data into two groups. Part C involves producing a narrated PowerPoint presentation based on Question 3 of Part B. Parts A and B give 80% of your final marks and Part C gives 20% of your final marks. [Assignment: 125 marks in total]

A. Bayesian Inference [66 marks]

1. The first question of Part A involves fitting a Poisson regression model using the Ohio_Data dataset, which contains the observed and expected counts of lung cancer for counties in Ohio for 1988.

(i) [3 marks] Calculate the Standard Mortality Ratios (SMRs) for each county and plot the distribution of the SMRs. Next, plot a map of the SMRs by county. You may want to use the following code using the OhioMap function, which here plots random numbers (the file with the code is attached), or you can write your own.

We are interested in estimating the relative risk (RR) for each county and we are going to fit a Poisson model of the following form:

    Obs_i ~ Pois(μ_i)
    log(μ_i) = log(Exp_i) + β0 + log(θ_i)
    RR_i = exp(β0) θ_i

where the prior distributions for θ_i are Gamma(α, α). Here, the Exp(ected) numbers are an offset, i.e. we don't assign a coefficient to them (or, another way of putting it, we fix the coefficient to be one).

(ii) [4 marks] Describe the role of β0 and the set of θ_i in this model and how they contribute to the estimation of RR.
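As a starting point for question (i), the SMR is simply the ratio of observed to expected counts. A minimal sketch in R, assuming Ohio_Data is a CSV with columns named Obs and Exp (adjust to match the actual file):

```r
# Sketch of the SMR calculation -- column names Obs and Exp are assumptions.
ohio <- read.csv("Ohio_Data.csv")
ohio$SMR <- ohio$Obs / ohio$Exp          # SMR_i = Obs_i / Exp_i
hist(ohio$SMR, breaks = 20, main = "Distribution of SMRs", xlab = "SMR")

# Map the SMRs by county using the supplied OhioMap function
source("OhioMap.R")
library(maps)
OhioMap(ohio$SMR, ncol = 8, type = "e", figmain = "SMRs by county",
        lower = 0, upper = 2)
```

The histogram gives the distribution asked for; the OhioMap call mirrors the random-number example provided with the assignment, substituting the SMRs for the test data.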
The OhioMap example code referenced in question (i):

    source("OhioMap.R")   # need to read in the OhioMap function
    library(maps)
    map.text("county", "ohio")
    testdat <- runif(88)
    OhioMap(testdat, ncol=8, type="e", figmain="Ohio random numbers", lower=0, upper=2)

(iii) [14 marks] Code up this Poisson-Gamma model in JAGS to analyse the Ohio data. Use the priors β0 ~ Unif(−100, 100) and α ~ Gamma(1, 1). Initialise 2 chains and run the model with these two chains. You will have to decide on appropriate values of n.iter and burnin. Produce trace plots for the chains and summaries of all the parameters. Investigate whether the chains for all the parameters have converged.

(iv) [6 marks] Extract the posterior means for the RR and map them. Then calculate the posterior probabilities that the relative risk in each area exceeds 1.2. Extract these probabilities and map them.

(v) [6 marks] Repeat the analysis with different priors for β0 and α. The exact choice is yours, but explain why you have chosen them and what they mean. Map the two sets of RRs and explain any differences you see in the summaries of the posteriors for the parameters of the model.

2. One factor that affects the relative risk of lung cancer is air pollution. The dataset ohio_pm25.csv contains measurements of particulate matter (PM2.5) air pollution in Ohio for 1988-1989. However, there is missing data. We will use JAGS to impute this missing data so that the PM2.5 measurements can be fed into the relative risk analysis at a later stage (note that this last step is not part of the assignment).

(i) [4 marks] Do some exploratory data analysis: summarise the data, then plot the PM2.5 measurements against time, highlighting (showing clearly) the periods of missing data.

We are going to fit a model that allows us to estimate these missing data by treating them as model parameters that will be estimated (and we find posterior distributions for them). As we have time series data, we are going to use the fact that day-to-day measurements will be correlated, i.e.
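For orientation on question (iii), the Poisson-Gamma model above translates fairly directly into the BUGS language. The sketch below is illustrative only, not the required solution; the data-list names Obs, Exp and N are assumptions:

```r
# Illustrative JAGS translation of the Poisson-Gamma model
# (R2jags function-style model definition; node names are assumptions).
pois.gamma.mod <- function(){
  for (i in 1:N) {
    Obs[i] ~ dpois(mu[i])
    log(mu[i]) <- log(Exp[i]) + beta0 + log(theta[i])   # offset on log(Exp)
    theta[i] ~ dgamma(alpha, alpha)                     # Gamma(alpha, alpha) prior
    RR[i] <- exp(beta0) * theta[i]                      # relative risk per county
  }
  beta0 ~ dunif(-100, 100)
  alpha ~ dgamma(1, 1)
}
```

Monitoring RR directly makes extracting posterior means and exceedance probabilities for question (iv) straightforward.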
today's measurement will correlate with yesterday's. A random walk process of order 1, RW(1), is defined at time t as

    Y_t − Y_{t−1} = w_t
    Y_t = Y_{t−1} + w_t

where w_t are a set of realisations of random (or white) noise, e.g. w_t ~ N(0, σ_w²). Note the first line refers to the differences in the values at consecutive time points being white noise. We are interested in fitting a random walk model to the Ohio data. The model will be of the following form:

    Ohio_t ~ N(Y_t, σ_v²)
    Y_t ~ N(Y_{t−1}, σ_w²)

where σ_w² is the variance of the white noise process associated with the random walk. We then make noisy measurements of this random walk process; thus Ohio_t, the measurement we have at time t, equals the true value of the underlying process Y_t plus some measurement error. In the formula above, σ_v² is the variance of this measurement error.

(ii) [12 marks] Code this model using the model definition below in JAGS to analyse the Ohio data for 1988(!). Due to the nature of the model you will have to explicitly specify a value for Y_1 in the model (i.e. for the first time point, as Y_0 doesn't exist). One suggestion might be Y[1] ~ dnorm(6, 0.001). The model definition can be found below. Run the model for 10,000 iterations, with 2 chains, discarding the first 5,000 as burn-in. Produce trace plots for the chains and summaries for the fitted parameters (including the missing data). In your solution file you should include a representative sample of this output. Hint: You will have to initialise both chains. One suggestion might be using the mean and median to initialise the missing values of Ohio, and using random uniforms (with a narrow interval centred around, say, 6) to initialise Y.
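The two-layer structure (a latent random walk observed with noise) can be illustrated with a quick simulation in R; the variance values and series length here are arbitrary choices, not taken from the data:

```r
# Simulate an RW(1) with noisy observations to illustrate the model structure.
set.seed(1)
N <- 365
sigma.w <- 0.3   # system (white noise) sd -- arbitrary
sigma.v <- 0.5   # measurement error sd  -- arbitrary
Y <- numeric(N)
Y[1] <- 6
for (t in 2:N) Y[t] <- Y[t-1] + rnorm(1, 0, sigma.w)  # Y_t = Y_{t-1} + w_t
Ohio <- Y + rnorm(N, 0, sigma.v)                      # Ohio_t = Y_t + v_t

plot(Ohio, type = "l", col = "grey", xlab = "Day", ylab = "Simulated PM2.5")
lines(Y, col = "blue")   # underlying random walk
```

The grey series plays the role of the measurements; the blue series is the latent process Y_t that the JAGS model recovers, including over gaps where measurements are missing.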
    # model
    jags.mod <- function(){
      # Observation model
      for (i in 2:N) {
        Ohio[i] ~ dnorm(Y[i], tau.v)
      }
      Ohio[1] ~ dnorm(Y[1], tau.v)
      tau.v ~ dgamma(1, 0.01)
      # System model
      for (i in 2:N) {
        Y[i] ~ dnorm(Y[i-1], tau.w)
      }
      Y[1] ~ dnorm(6, 0.001)
      tau.w ~ dgamma(1, 0.01)
      sigma.w <- 1/sqrt(tau.w)
    }

(iii) [3 marks] Comment on whether the chains for all the parameters have converged. You should include evidence that supports your claim.

(iv) [4 marks] Extract the posterior means and 95% credible intervals for Y_t, and plot them against time, together with the original data (the measurements). Comment on the width of the credible interval during the periods of missing data. Can you explain your observation?

(v) [6 marks] Use your model to predict the measurements of PM2.5 at Ohio for the first week of 1989. Plot the predicted values of PM2.5 for the first week of 1989, along with the actual measurements, against time. Calculate the root mean squared error (RMSE) of this prediction:

    RMSE = sqrt( (1/n) Σ_{t=1..n} (Y_t − Ŷ_t)² )

For this you may want to re-run the model with an extra line to calculate this quantity, noting that it will also have a posterior distribution, as it is a function of the predicted values (which are treated as unknown parameters that need to be estimated).

(vi) [4 marks] Suppose that after doing this analysis we receive some PM2.5 measurements from a site that has similar parameters to our original monitoring location. We want to repeat the analysis and fill in the missing data for this new site as well. What priors should we use for the precision parameters? Explain your choice.

B. Classification [34 marks]

The following figure shows the information in the dataset Classification.csv: two different groups, plotted against two explanatory variables. This is simulated data; the aim is to find a suitable method for classifying the 200 data points into two groups from a selection of possible approaches.

[Figure: scatter plot of X2 against X1, with points coloured by Group (0 or 1).]

1.
[4 marks] Summarise the two groups in terms of the variables X1 and X2. Describe your findings. Considering the plot showing the observations and the numerical summaries, which of the following classification methods do you think are suitable for classifying this data, and why?
a. Linear discriminant analysis.
b. Quadratic discriminant analysis.
c. K-nearest neighbour regression.
d. Support vector machines.
e. Random forests.

2. [1 mark] Select 75% of the data to act as a training set, with the remaining 25% for testing/evaluation.

3. [27 marks] Choose four of the methods listed in Question 1 that might be suitable to classify the data. Perform classification using these methods. In each case, briefly describe how the classification method works, present the results of an evaluation of the method (highlighting different aspects of the model performance) and describe your findings. Where appropriate, optimise the (hyper)parameters of the method.

4. [2 marks] Compare the results from your chosen four approaches and select what you think is the best method for classification in this case, explaining your reasoning.

C. Presentation [25 marks]

The presentation is based on Part B, Question 3 only. You should submit a narrated PowerPoint presentation that should be 5 minutes long, and you should aim for 5-6 slides in total (this includes the introduction/summary and a slide on each method). In the presentation you should explain what the problem is, how you approached it, and what your findings are. You should pay attention to the clarity/pace/coherency of the delivery, the style/information balance on the slides, clear description of methodology, and time management.

The deadline for submission is Noon (12pm), 8th August. You should submit the narrated PowerPoint presentation and a pdf that will contain your answers (and relevant output!) to the questions via eBart. In Part A you should use the R programming language, but in Part B you can choose to use R or Python (or both).
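For Part B, the train/test split in Question 2 and the evaluation pattern in Question 3 follow a common template. A hedged sketch using one candidate method (LDA via the MASS package), assuming Classification.csv has columns named X1, X2 and Group:

```r
# Sketch of the 75/25 split and one candidate classifier (LDA).
# Column names X1, X2, Group are assumptions about Classification.csv.
library(MASS)
dat <- read.csv("Classification.csv")
dat$Group <- factor(dat$Group)

set.seed(42)   # make the split reproducible
train.idx <- sample(nrow(dat), size = floor(0.75 * nrow(dat)))
train <- dat[train.idx, ]
test  <- dat[-train.idx, ]

fit  <- lda(Group ~ X1 + X2, data = train)
pred <- predict(fit, newdata = test)$class

table(Predicted = pred, Actual = test$Group)   # confusion matrix
mean(pred == test$Group)                       # test-set accuracy
```

The same split (and seed) should be reused across all four chosen methods so that their confusion matrices and accuracies are directly comparable.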
