One-way and two-way ANOVA models
Dr. Linh Nghiem
STAT3022
One-way ANOVA model
Setup
• Comparing the mean response ($Y$) of $t$ different treatments, when it is reasonable to assume that the treatment only affects the response via the mean.
• Data: $y_{ij}$, $i = 1, \dots, t$, $j = 1, \dots, n_i$, i.e., there are $n_i$ observations receiving treatment $i$. The total number of observations is $n = \sum_{i=1}^{t} n_i$.
• $y_{i\bullet} = \sum_{j=1}^{n_i} y_{ij}$ and $\bar{y}_{i\bullet} = n_i^{-1} y_{i\bullet}$: sum and average of the responses for treatment $i$.
• $y_{\bullet\bullet} = \sum_{i=1}^{t} y_{i\bullet} = \sum_{i=1}^{t}\sum_{j=1}^{n_i} y_{ij}$ and $\bar{y}_{\bullet\bullet} = n^{-1} y_{\bullet\bullet}$: sum and average of all the responses.
Factor-effect model
To study the effect of treatments, we use the factor-effect model
\[ y_{ij} = \mu + \alpha_i + \varepsilon_{ij}, \quad \varepsilon_{ij} \sim N(0, \sigma^2), \quad i = 1, \dots, t, \; j = 1, \dots, n_i \]
• $\mu$ represents the overall mean.
• $\alpha_i$ represents the difference between the mean of treatment $i$ and the overall mean. Together, $\mu_i = \mu + \alpha_i$ represents the mean of treatment group $i$.
• $\varepsilon_{ij}$ is the model error term.
Factor-effect model
If we consider “treatment group” as a categorical predictor with $t$ levels, then the previous model is the same as writing
\[ y_{ij} = \mu + \sum_{k=1}^{t} \alpha_k X_{ij}^{(k)} + \varepsilon_{ij}, \]
where $X_{ij}^{(k)} = 1$ if observation $(i, j)$ received treatment $k$, and $0$ otherwise.
Clearly, this is an overparameterization, since we are representing a categorical predictor with $t$ levels by an intercept plus $t$ indicator variables.
• Nevertheless, it is easy to interpret from the perspective of
experimental designs.
• Also, it can be easily generalized to a random-effect model
(we will get to it later).
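As a minimal sketch of this rank deficiency (the data and group labels below are simulated stand-ins, not the course data), R's model.matrix() can be used to build the overparameterized design:

    # Simulated stand-in: t = 3 treatment groups of 10 observations each
    set.seed(1)
    dose <- factor(rep(c("Zero", "Low", "High"), each = 10))
    X <- cbind(1, model.matrix(~ dose - 1))  # intercept + t indicators
    qr(X)$rank                               # 3 = t, not t + 1 = 4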
Factor-effect model
The factor-effect model is an MLR model with
\[ \beta_{(t+1)\times 1} = (\mu, \alpha_1, \dots, \alpha_t)^\top, \qquad y_{n\times 1} = (y_{11}, \dots, y_{1n_1}, y_{21}, \dots, y_{tn_t})^\top, \]
\[ X_{n\times(t+1)} = \begin{pmatrix} 1 & 1 & 0 & \cdots & 0 \\ 1 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & 0 & \cdots & 0 \\ 1 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 0 & 0 & \cdots & 1 \end{pmatrix}. \]
Note that $\mathrm{rank}(X) = t < t + 1$, since the first column is equal to the sum of all the remaining columns.
Least squares estimators
Normal equations: $X^\top X \hat\beta = X^\top y$, i.e.,
\[ \begin{pmatrix} n & n_1 & \cdots & n_t \\ n_1 & n_1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ n_t & 0 & \cdots & n_t \end{pmatrix} \begin{pmatrix} \hat\mu \\ \hat\alpha_1 \\ \vdots \\ \hat\alpha_t \end{pmatrix} = \begin{pmatrix} y_{\bullet\bullet} \\ y_{1\bullet} \\ \vdots \\ y_{t\bullet} \end{pmatrix}, \]
which reads
\[ n\hat\mu + \sum_{i=1}^{t} n_i\hat\alpha_i = y_{\bullet\bullet}, \qquad n_i\hat\mu + n_i\hat\alpha_i = y_{i\bullet}, \quad i = 1, \dots, t. \]
Without any constraint, the system has an infinite number of solutions. The two most common constraints are:
• Sum constraint: $\sum_{i=1}^{t} n_i\hat\alpha_i = 0$, or
• First treatment constraint: $\hat\alpha_1 = 0$.
Different constraints lead to different estimates
• Under the sum constraint, we have $n\hat\mu = y_{\bullet\bullet}$, so $\hat\mu = \bar{y}_{\bullet\bullet}$, and then $\hat\alpha_i = \bar{y}_{i\bullet} - \bar{y}_{\bullet\bullet}$, $i = 1, \dots, t$.
• Under the first treatment constraint, we have $\hat\mu = \bar{y}_{1\bullet}$, and $\hat\alpha_i = \bar{y}_{i\bullet} - \bar{y}_{1\bullet}$ for $i \geq 2$.
• The first treatment constraint is the default in R; see the sketch below.
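A quick sketch of R's default first-treatment constraint (still using the simulated stand-in data, an assumption): the intercept estimates $\bar{y}_{1\bullet}$ and the remaining coefficients estimate $\bar{y}_{i\bullet} - \bar{y}_{1\bullet}$.

    set.seed(1)
    dose <- factor(rep(c("Zero", "Low", "High"), each = 10))
    y <- rnorm(30, mean = rep(c(245, 246, 248), each = 10), sd = 2)
    fit <- lm(y ~ dose)    # baseline level is "High" (alphabetical order)
    coef(fit)              # muhat = ybar_1; alphahat_i = ybar_i - ybar_1
    tapply(y, dose, mean)  # group means, for comparison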
An example
Does caffeine stimulation affect the rate at which individuals can
tap their fingers?
• n = 30 male students randomly allocated to t = 3 groups of
10 students each:
Group 1: zero caffeine dose
Group 2: low caffeine dose
Group 3: high caffeine dose
• Two hours after treatment, each subject tapped fingers as
quickly as possible for a minute. Number of taps is recorded.
An example
[Boxplots of Taps against Dose for the High, Low, and Zero groups; the responses range from roughly 242.5 to 252.5 taps per minute.]
Estimable parameters
• Although different constraints lead to different estimates of $\beta$, there exists a class of linear combinations of $\beta$ that are uniquely determined, which we call estimable.
• We will use the following result without proving it.

Let $\theta = a^\top\beta$ denote a linear combination of coefficients in a linear model $y = X\beta + \varepsilon$, where the design matrix $X$ has dimension $n \times p$ and $a$ is a $p \times 1$ vector. Then $\theta$ is estimable if and only if there exists another vector $c$ of dimension $n \times 1$ such that $a^\top = c^\top X$; in other words, $a^\top$ has to be a linear combination of the row vectors of $X$.

Fortunately, most useful linear combinations are estimable.
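This criterion is easy to check numerically: $\theta = a^\top\beta$ is estimable iff appending $a^\top$ as an extra row of $X$ does not increase the rank. A sketch under the simulated one-way design above (the specific combinations are illustrations):

    set.seed(1)
    dose <- factor(rep(c("Zero", "Low", "High"), each = 10))
    X <- cbind(1, model.matrix(~ dose - 1))  # columns: intercept, groups 1-3
    is_estimable <- function(a, X) qr(rbind(X, a))$rank == qr(X)$rank
    is_estimable(c(1, 1, 0, 0), X)   # mu + alpha_1: TRUE
    is_estimable(c(0, 1, -1, 0), X)  # alpha_1 - alpha_2: TRUE
    is_estimable(c(0, 1, 0, 0), X)   # alpha_1 alone: FALSE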
Estimable parameters in the factor-effect model
\[ X = \begin{pmatrix} 1 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & 1 & 0 & \cdots & 0 \\ 1 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & 0 & 0 & \cdots & 1 \end{pmatrix} \xrightarrow{\text{remove duplicates}} \tilde{X}_{t\times(t+1)} = \begin{pmatrix} 1 & 1 & 0 & \cdots & 0 \\ 1 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & 0 & 0 & \cdots & 1 \end{pmatrix} \]
For $\theta = a^\top\beta$ to be estimable, we need $a^\top = c^\top\tilde{X}$ for some vector $c$. Denoting the elements of $c$ by $c_i$ with $i = 1, \dots, t$, this implies that $a^\top$ has the form
\[ a^\top = \begin{bmatrix} \sum_{i=1}^{t} c_i & c_1 & \cdots & c_t \end{bmatrix}. \]
Estimable parameters in the factor-effect model
Noting that $\beta = (\mu, \alpha_1, \dots, \alpha_t)^\top$, the estimable linear combinations have the form
\[ \theta = a^\top\beta = \left(\sum_{i=1}^{t} c_i\right)\mu + \sum_{i=1}^{t} c_i\alpha_i. \]
Below are the most important estimable linear combinations:
• Group mean $\mu_i = \mu + \alpha_i$ ($c_i = 1$ and $c_j = 0$ for all $j \neq i$).
• Differences between groups $\alpha_i - \alpha_j$ ($c_i = 1$, $c_j = -1$, $c_k = 0$ for all $k \neq i, j$).
• In general, any contrast $\sum_{i=1}^{t} c_i\alpha_i$ with $\sum_{i=1}^{t} c_i = 0$.
Estimable parameters in the factor-effect model
• When a linear combination is estimable, we can use $a^\top\hat\beta$ to estimate it, where $\hat\beta$ is any solution of the normal equations (regardless of the constraints).
• Example: for the group mean,
\[ \hat\mu_i = \hat\mu + \hat\alpha_i = \bar{y}_{\bullet\bullet} + (\bar{y}_{i\bullet} - \bar{y}_{\bullet\bullet}) = \bar{y}_{i\bullet} \quad \text{(under the sum constraint)}, \]
\[ \hat\mu_i = \hat\mu + \hat\alpha_i = \bar{y}_{1\bullet} + (\bar{y}_{i\bullet} - \bar{y}_{1\bullet}) = \bar{y}_{i\bullet} \quad \text{(under the first treatment constraint)}. \]
• Verify that under both the sum and the first treatment constraints, the mean difference between groups $i$ and $j$ is estimated by $\hat\alpha_i - \hat\alpha_j = \bar{y}_{i\bullet} - \bar{y}_{j\bullet}$.
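A short sketch of this invariance in R (simulated data as above; note that with equal group sizes, contr.sum matches the sum constraint): switching the constraint changes the coefficients but not the estimated group means.

    set.seed(1)
    dose <- factor(rep(c("Zero", "Low", "High"), each = 10))
    y <- rnorm(30, mean = rep(c(245, 246, 248), each = 10), sd = 2)
    fit_trt <- lm(y ~ dose, contrasts = list(dose = "contr.treatment"))
    fit_sum <- lm(y ~ dose, contrasts = list(dose = "contr.sum"))
    coef(fit_trt); coef(fit_sum)                  # different estimates of beta
    all.equal(fitted(fit_trt), fitted(fit_sum))   # same fitted group means: TRUE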
Partitioning of variability
Similar to the multiple linear regression model case, we also have
the following partitioning of variability
\[ \sum_{i=1}^{t}\sum_{j=1}^{n_i}(y_{ij} - \bar{y}_{\bullet\bullet})^2 = \sum_{i=1}^{t}\sum_{j=1}^{n_i}(y_{ij} - \bar{y}_{i\bullet})^2 + \sum_{i=1}^{t} n_i(\bar{y}_{i\bullet} - \bar{y}_{\bullet\bullet})^2 \]
• $\mathrm{SSTotal} = \sum_{i=1}^{t}\sum_{j=1}^{n_i}(y_{ij} - \bar{y}_{\bullet\bullet})^2$: total sum of squares.
• $\mathrm{SSE} = \mathrm{SSW} = \sum_{i=1}^{t}\sum_{j=1}^{n_i}(y_{ij} - \bar{y}_{i\bullet})^2$: within-treatment sum of squares, playing the same role as SSE in the linear model.
• $\mathrm{SSTr} = \mathrm{SSB} = \sum_{i=1}^{t} n_i(\bar{y}_{i\bullet} - \bar{y}_{\bullet\bullet})^2$: between-treatment sum of squares, playing the same role as SSR in the linear model.
Degrees of freedom
• SSTotal has $n - 1$ df.
• For each $i$, we have $\sum_{j=1}^{n_i}(y_{ij} - \bar{y}_{i\bullet})^2 \sim \sigma^2\chi^2_{n_i-1}$, so $\mathrm{SSW} \sim \sigma^2\chi^2_{n-t}$ and has $n - t$ df. As a result,
\[ \mathrm{MSW} = \frac{\mathrm{SSW}}{n - t} \]
is an unbiased estimator for $\sigma^2$.
• For SSB: it can be proved that
\[ E(\mathrm{SSB}) = (t-1)\sigma^2 + \sum_{i=1}^{t} n_i(\mu_i - \bar\mu)^2, \qquad \bar\mu = n^{-1}\sum_{i=1}^{t} n_i\mu_i, \]
so it has $t - 1$ df.
Omnibus F-test
\[ H_0: \mu_1 = \mu_2 = \dots = \mu_t \quad \text{vs} \quad H_a: \mu_i \neq \mu_j \text{ for some } i \neq j. \]
We apply the same general F-test as developed before.
• Under $H_a$: we have $\mathrm{SSE}(H_a) = \mathrm{SSW}$, with $n - t$ df.
• Under $H_0$: we have $\mathrm{SSE}(H_0) = \mathrm{SSTotal}$, with $n - 1$ df.
\[ F = \frac{(\mathrm{SSTotal} - \mathrm{SSW})/((n-1) - (n-t))}{\mathrm{SSW}/(n-t)} = \frac{\mathrm{MSB}}{\mathrm{MSW}} \]
Under $H_0$, since $\mu_i = \bar\mu$ for all $i$, we have $E(\mathrm{SSB}) = (t-1)\sigma^2$, so $F \sim F_{t-1,\, n-t}$.
Analysis of variance table
Source of variation   df      SS    MS    F         Pr(>F)
Treatment             t − 1   SSB   MSB   MSB/MSW   p-value
Residuals             n − t   SSW   MSW
Total                 n − 1

We reject $H_0$ if p-value $= P(F_{t-1,\, n-t} > F) < \alpha$.
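In R, this table is produced by anova() on the fitted one-way model; a sketch with the simulated stand-in data:

    set.seed(1)
    dose <- factor(rep(c("Zero", "Low", "High"), each = 10))
    y <- rnorm(30, mean = rep(c(245, 246, 248), each = 10), sd = 2)
    anova(lm(y ~ dose))  # Treatment (dose) and Residuals rows as above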
Estimating contrasts
Recall that any contrast has the form $c = \sum_i c_i\alpha_i$ with $\sum_i c_i = 0$; this implies
\[ \sum_i c_i\mu_i = \sum_i c_i(\mu + \alpha_i) = \mu\left(\sum_i c_i\right) + \sum_i c_i\alpha_i = \sum_i c_i\alpha_i = c, \]
i.e., a contrast in the $\alpha_i$'s equals the same contrast in the group means.
Since contrasts are estimable, an unbiased estimator is $\hat{c} = \sum_i c_i\hat\alpha_i$, regardless of the constraints we impose to obtain the $\hat\alpha_i$.
• Under the sum constraint $\sum_{i=1}^{t} n_i\hat\alpha_i = 0$:
\[ \sum_{i=1}^{t} c_i\hat\alpha_i = \sum_{i=1}^{t} c_i(\bar{y}_{i\bullet} - \bar{y}_{\bullet\bullet}) = \sum_{i=1}^{t} c_i\bar{y}_{i\bullet} - \bar{y}_{\bullet\bullet}\sum_{i=1}^{t} c_i = \sum_{i=1}^{t} c_i\bar{y}_{i\bullet}. \]
• Under the first treatment constraint $\hat\alpha_1 = 0$, and noting $\sum_{i=2}^{t} c_i = -c_1$:
\[ \sum_{i=1}^{t} c_i\hat\alpha_i = \sum_{i=2}^{t} c_i(\bar{y}_{i\bullet} - \bar{y}_{1\bullet}) = \sum_{i=2}^{t} c_i\bar{y}_{i\bullet} - \bar{y}_{1\bullet}\sum_{i=2}^{t} c_i = \sum_{i=1}^{t} c_i\bar{y}_{i\bullet}. \]
Estimating contrasts
Noting $E(\bar{y}_{i\bullet}) = \mu + \alpha_i$ and $\mathrm{Var}(\bar{y}_{i\bullet}) = \sigma^2/n_i$, we have
\[ E(\hat{c}) = E\left(\sum_{i=1}^{t} c_i\bar{y}_{i\bullet}\right) = \sum_{i=1}^{t} c_i(\mu + \alpha_i) = \sum_{i=1}^{t} c_i\alpha_i = c, \]
\[ \mathrm{Var}(\hat{c}) = \mathrm{Var}\left(\sum_{i=1}^{t} c_i\bar{y}_{i\bullet}\right) = \sum_{i=1}^{t} c_i^2\left(\frac{\sigma^2}{n_i}\right) = \sigma^2\sum_{i=1}^{t}\frac{c_i^2}{n_i}. \]
Replacing $\sigma^2$ by $\hat\sigma^2$, the $100(1-\alpha)\%$ CI for $\sum_{i=1}^{t} c_i\alpha_i$ is
\[ \hat{c} \pm t_{n-t,\,\alpha/2}\,\mathrm{se}(\hat{c}), \qquad \mathrm{se}(\hat{c}) = \hat\sigma\sqrt{\sum_{i=1}^{t}\frac{c_i^2}{n_i}}. \]
Caffeine example
Compute the 95% confidence interval for the difference in the mean response of the High group versus the average of the Low and Zero groups.
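A sketch of this computation (with the simulated stand-in data; the contrast weights $c_{\mathrm{High}} = 1$, $c_{\mathrm{Low}} = c_{\mathrm{Zero}} = -1/2$ follow from the question):

    set.seed(1)
    dose <- factor(rep(c("Zero", "Low", "High"), each = 10))
    y <- rnorm(30, mean = rep(c(245, 246, 248), each = 10), sd = 2)
    means <- tapply(y, dose, mean); ni <- table(dose)
    cvec <- c(High = 1, Low = -0.5, Zero = -0.5)[names(means)]
    chat <- sum(cvec * means)
    df <- length(y) - nlevels(dose)          # n - t
    s2 <- sum((y - ave(y, dose))^2) / df     # MSW, the estimate of sigma^2
    se <- sqrt(s2 * sum(cvec^2 / ni))
    chat + c(-1, 1) * qt(0.975, df) * se     # 95% CI for the contrast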
Hypothesis testing for contrast
\[ H_0: c = c_0 \quad \text{vs} \quad H_1: c \; (>, <, \neq) \; c_0 \]
Test statistic:
\[ T = \frac{\hat{c} - c_0}{\mathrm{se}(\hat{c})} \sim t_{n-t} \quad \text{under } H_0 \]

H1        c ≠ c0              c > c0           c < c0
p-value   2P(t_{n−t} > |T|)   P(t_{n−t} > T)   P(t_{n−t} < T)

Reject $H_0$ if p-value $< \alpha$.
Sum of squares due to a hypothesis
We can do an F-test for $H_0: c = 0$ vs $H_a: c \neq 0$ as well. Since we already know how to compute SSE for the model under $H_a$, it suffices to compute $\mathrm{SSH} = \mathrm{SSE}(H_0) - \mathrm{SSE}(H_a)$, which we call the sum of squares due to the hypothesis.
Under $H_0$, $\hat{c} \sim N\!\left(0, \sigma^2\sum_{i=1}^{t} c_i^2/n_i\right)$. It can be proven that
\[ \mathrm{SSH} = \frac{\hat{c}^2}{\sum_{i=1}^{t} c_i^2/n_i} = \frac{\left(\sum_{i=1}^{t} c_i\bar{y}_{i\bullet}\right)^2}{\sum_{i=1}^{t} c_i^2/n_i}, \]
and hence $\mathrm{SSH} \sim \sigma^2\chi^2_1$ under $H_0$. Therefore,
\[ F = \frac{\mathrm{SSH}/1}{\mathrm{SSE}(H_a)/(n-t)} = T^2 \sim F_{1,\, n-t} \quad \text{under } H_0, \]
and we reject $H_0$ if p-value $= P(F_{1,\, n-t} > F) < \alpha$.
Orthogonal contrasts
Two contrasts $c = \sum_{i=1}^{t} c_i\alpha_i$ and $d = \sum_{i=1}^{t} d_i\alpha_i$ are called orthogonal if and only if
\[ \sum_{i=1}^{t}\frac{c_i d_i}{n_i} = 0, \]
which is equivalent to $\mathrm{Cov}(\hat{c}, \hat{d}) = 0$. This follows because
\[ \mathrm{Cov}(\hat{c}, \hat{d}) = \mathrm{Cov}\left(\sum_{i=1}^{t} c_i\bar{y}_{i\bullet}, \sum_{i=1}^{t} d_i\bar{y}_{i\bullet}\right) = \sum_{i=1}^{t}\sum_{j=1}^{t} c_i d_j\,\mathrm{Cov}(\bar{y}_{i\bullet}, \bar{y}_{j\bullet}) = \sigma^2\sum_{i=1}^{t}\frac{c_i d_i}{n_i}. \]
Orthogonal contrasts
If we can find $t - 1$ mutually orthogonal contrasts $c^{(1)}, \dots, c^{(t-1)}$, then we can decompose the treatment sum of squares (SSTr) into $t - 1$ SSH terms, each with exactly 1 df:
\[ \underbrace{\sum_{i=1}^{t} n_i(\bar{y}_{i\bullet} - \bar{y}_{\bullet\bullet})^2}_{t-1\text{ df}} = \underbrace{\mathrm{SSH}(c^{(1)})}_{1\text{ df}} + \dots + \underbrace{\mathrm{SSH}(c^{(t-1)})}_{1\text{ df}} \]
As a result, the F-statistic for the omnibus test can be written as
\[ F = \frac{\mathrm{SSTr}/(t-1)}{\mathrm{MSE}} = \frac{1}{t-1}\sum_{i=1}^{t-1}\frac{\mathrm{SSH}(c^{(i)})}{\mathrm{MSE}} = \frac{1}{t-1}\sum_{i=1}^{t-1} F_i, \]
with $F_i$ being the F-statistic for testing $H_0: c^{(i)} = 0$.
Orthogonal contrasts
There are infinitely many ways to obtain orthogonal contrasts, but a systematic choice is the Helmert contrasts, which can be obtained directly from contr.helmert() in R.
• First contrast: the difference between the second and the first treatment.
• $j$th contrast: the difference between the $(j+1)$th treatment and the average of the first $j$ treatments.
Orthogonal contrasts
Helmert contrasts are uniquely determined by the number of treatments; see the R output below.
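For example, with $t = 3$ treatments, contr.helmert(3) returns the two orthogonal contrast vectors as columns:

    contr.helmert(3)
    #   [,1] [,2]
    # 1   -1   -1
    # 2    1   -1
    # 3    0    2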
Examples
Testing at significance level 5%:
• Whether the means of the Low-dose and the Zero groups are significantly different.
• Whether the mean of the High group is greater than the average of the Low and Zero groups.
Orthogonal contrasts
Partitioning the treatment sum of squares into 2 orthogonal sums of squares, each with 1 df.
Using the Helmert contrasts with 3 treatments, we have $c^{(1)} = -\alpha_1 + \alpha_2$ and $c^{(2)} = -\alpha_1 - \alpha_2 + 2\alpha_3$. Hence,
\[ \mathrm{SSH}(c^{(1)}) = \frac{(-\bar{y}_{1\bullet} + \bar{y}_{2\bullet})^2}{\dfrac{(-1)^2}{n_1} + \dfrac{1^2}{n_2}} = 18.05, \qquad \mathrm{SSH}(c^{(2)}) = \frac{(-\bar{y}_{1\bullet} - \bar{y}_{2\bullet} + 2\bar{y}_{3\bullet})^2}{\dfrac{(-1)^2}{n_1} + \dfrac{(-1)^2}{n_2} + \dfrac{2^2}{n_3}} = 43.35. \]
Each SSH has one df, and $\mathrm{SSH}(c^{(1)}) + \mathrm{SSH}(c^{(2)}) = 61.4 = \mathrm{SSTr}$.
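The same decomposition can be sketched in R (simulated data again; the values 18.05, 43.35, and 61.4 above come from the actual caffeine data):

    set.seed(1)
    dose <- factor(rep(c("Zero", "Low", "High"), each = 10))
    y <- rnorm(30, mean = rep(c(245, 246, 248), each = 10), sd = 2)
    means <- tapply(y, dose, mean); ni <- table(dose)
    C <- contr.helmert(3)  # columns are c(1) and c(2)
    SSH <- apply(C, 2, function(ck) sum(ck * means)^2 / sum(ck^2 / ni))
    SSH; sum(SSH)                    # two 1-df pieces that add to SSTr
    sum(ni * (means - mean(y))^2)    # SSTr computed directly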
Two-way ANOVA model with orthogonal design
Main effect model
• Two factors, A and B, which may be related to the response variable. Assume factor A has $a$ levels and B has $b$ levels.
• Main effect model:
\[ y_{ijk} = \mu + \alpha_i + \beta_j + \varepsilon_{ijk}, \qquad \varepsilon_{ijk} \sim N(0, \sigma^2), \]
where $i = 1, \dots, a$, $j = 1, \dots, b$, and $k = 1, \dots, n_{ij}$.
  $y_{ijk}$ is the response for the $k$th unit at level $i$ of factor A and level $j$ of factor B.
  $\alpha_1, \dots, \alpha_a$ and $\beta_1, \dots, \beta_b$ are parameters describing the main effects of A and B, respectively.
Let $n = \sum_i\sum_j n_{ij}$ be the total number of observations.
Matrix form
Assume $a = 2$, $b = 3$, $n_{ij} = 2$; then the matrix form $y = X\beta + \varepsilon$ is
\[ \begin{pmatrix} y_{111} \\ y_{112} \\ y_{121} \\ \vdots \\ y_{232} \end{pmatrix} = \begin{pmatrix} 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & 0 & 1 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \mu \\ \alpha_1 \\ \alpha_2 \\ \beta_1 \\ \beta_2 \\ \beta_3 \end{pmatrix} + \begin{pmatrix} \varepsilon_{111} \\ \varepsilon_{112} \\ \varepsilon_{121} \\ \vdots \\ \varepsilon_{232} \end{pmatrix} \]
We have $\mathrm{rank}(X) = \mathrm{rank}(X^\top X) = 4 = a + b - 1 < 6$, so we need two constraints to estimate the parameters $\beta$.
Orthogonal design
• A design is called complete if all combinations of treatments are observed, i.e., $n_{ij} \geq 1$ for all $i, j$.
• A design is called balanced if we have equal numbers of observations for each combination, i.e., $n_{ij} = r > 0$.
• A complete and balanced design is referred to as an orthogonal design.
The key consequence of an orthogonal design is that the columns of $X$ corresponding to the two factors can be partitioned into two uncorrelated parts.
Estimating parameters
Similar to the one-way ANOVA model, the two most common constraints are
• Sum constraints: $\sum_{i=1}^{a}\hat\alpha_i = \sum_{j=1}^{b}\hat\beta_j = 0$.
• Treatment constraints: $\hat\alpha_1 = \hat\beta_1 = 0$ (default in R).
With these constraints, we can get unique estimates for the parameters $\beta$. For example, with $\hat\alpha_1 = \hat\beta_1 = 0$, one can remove the 2nd and the 4th columns of $X$ above to form $X^*$, and the OLS estimates for the remaining parameters are computed as $\hat\beta = (X^{*\top}X^*)^{-1}X^{*\top}y$. Specifically, we get
\[ \hat\mu = \bar{y}_{1\bullet\bullet} + \bar{y}_{\bullet 1\bullet} - \bar{y}_{\bullet\bullet\bullet}, \qquad \hat\alpha_i = \bar{y}_{i\bullet\bullet} - \bar{y}_{1\bullet\bullet}, \qquad \hat\beta_j = \bar{y}_{\bullet j\bullet} - \bar{y}_{\bullet 1\bullet}, \]
for $i, j \geq 2$.
Estimable parameters
Regardless of the constraints, the estimable linear combinations of $\beta$ are linear combinations of the rows of $\tilde{X}$, i.e., $X$ without the duplicated rows. In other words:

For a linear combination to be estimable, the coefficient of $\mu$ has to equal both the sum of the coefficients of the $\alpha_i$, $i = 1, \dots, a$, and the sum of the coefficients of the $\beta_j$, $j = 1, \dots, b$.

Estimable:
• Group mean $\mu + \alpha_i + \beta_j$
• $\alpha_i - \alpha_{i'}$, $\beta_j - \beta_{j'}$
• $\sum_i c_i\alpha_i + \sum_j d_j\beta_j$, with $\sum_i c_i = \sum_j d_j = 0$.

Non-estimable:
• Each individual element of $\beta$
• $\mu + \alpha_i$
Partitioning total variability
Source      df          SS     MS    F
Factor A    a − 1       SSA    MSA   MSA/MSE
Factor B    b − 1       SSB    MSB   MSB/MSE
Residuals   ab(r − 1)   SSE    MSE
Total       abr − 1     SSTo

\[ \mathrm{SSA} = br\sum_{i=1}^{a}\bar{y}_{i\bullet\bullet}^2 - n\bar{y}_{\bullet\bullet\bullet}^2, \qquad \mathrm{SSB} = ar\sum_{j=1}^{b}\bar{y}_{\bullet j\bullet}^2 - n\bar{y}_{\bullet\bullet\bullet}^2, \]
\[ \mathrm{SSE} = \sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{r}(y_{ijk} - \bar{y}_{ij\bullet})^2. \]
Due to the balanced design, these sums of squares do NOT depend on the order of A and B in the model.
Example
• A student experiment was run to test the performance of 4
brands of batteries under 2 different environments (room
temperature and cold).
• For each of the 8 treatment combinations, 2 batteries of a
particular brand were put into a flashlight.
• The flashlight was then turned on and allowed to run until the
light went out. The number of minutes the flashlight stayed
on was recorded. Each treatment condition was run twice.
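A sketch of fitting the main-effect model for such a layout in R (the structure matches the description; the time values are simulated, not the recorded data):

    set.seed(2)
    battery <- expand.grid(rep = 1:2, temp = c("Room", "Cold"),
                           brand = c("A", "B", "C", "D"))
    battery$time <- rnorm(nrow(battery), mean = 150, sd = 10)
    fit_main <- lm(time ~ temp + brand, data = battery)
    anova(fit_main)  # Factor A (temp), Factor B (brand), Residuals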
Model with interaction
When we have more than one factor, we can have interaction effects:
\[ y_{ijk} = \mu + \alpha_i + \beta_j + \gamma_{ij} + \varepsilon_{ijk}, \qquad \varepsilon_{ijk} \sim N(0, \sigma^2), \]
where the $\gamma_{ij}$'s represent the interaction between the $i$th level of A and the $j$th level of B, for $i = 1, \dots, a$ and $j = 1, \dots, b$.
Matrix form
Let’s assume $a = 3$, $b = 2$, and $n_{ij} = 2$ for all $i, j$. Then the matrix form is
\[ \underbrace{\begin{pmatrix} y_{111} \\ y_{112} \\ \vdots \\ y_{211} \\ y_{212} \\ \vdots \\ y_{321} \\ y_{322} \end{pmatrix}}_{y} = \underbrace{\begin{pmatrix} 1 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & \cdots & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & \cdots & 0 \\ \vdots & & & & & & & & & \vdots \\ 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & \cdots & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & & & & & & & & & \vdots \\ 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & \cdots & 1 \\ 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & \cdots & 1 \end{pmatrix}}_{X} \underbrace{\begin{pmatrix} \mu \\ \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \beta_1 \\ \beta_2 \\ \gamma_{11} \\ \vdots \\ \gamma_{32} \end{pmatrix}}_{\beta} + \underbrace{\begin{pmatrix} \varepsilon_{111} \\ \varepsilon_{112} \\ \vdots \\ \varepsilon_{211} \\ \varepsilon_{212} \\ \vdots \\ \varepsilon_{321} \\ \varepsilon_{322} \end{pmatrix}}_{\varepsilon} \]
Matrix form
• $X$ can be partitioned into 4 parts, $X = \begin{bmatrix} 1 & X_A & X_B & X_{AB} \end{bmatrix}$; each column of the first three parts can be written as a linear combination of the columns in the last part.
• When the design is balanced, $X_A$, $X_B$, and $X_{AB}$ are mutually orthogonal.
• In general, $\mathrm{rank}(X) = ab < (a+1)(b+1)$, where the latter is the number of columns of $X$. Hence, to solve the normal equations $X^\top X\hat\beta = X^\top y$, we need $(a+1)(b+1) - ab = a + b + 1$ constraints.
Solving normal equations
• Sum constraints:
\[ \sum_{i=1}^{a}\hat\alpha_i = \sum_{j=1}^{b}\hat\beta_j = 0, \qquad \sum_{i=1}^{a}\hat\gamma_{ij} = 0 \text{ for all } j, \qquad \sum_{j=1}^{b}\hat\gamma_{ij} = 0 \text{ for all } i. \]
The last two expressions only impose $a + b - 1$ constraints among the $\hat\gamma_{ij}$, instead of $a + b$.
• Treatment constraints:
\[ \hat\alpha_1 = \hat\beta_1 = 0, \qquad \hat\gamma_{1j} = 0 \text{ for all } j, \qquad \hat\gamma_{i1} = 0 \text{ for all } i. \]
Similarly, the last two expressions impose $a + b - 1$ constraints, since $\hat\gamma_{11}$ appears in both.
Estimable linear combinations
Similar to before, a linear combination of $\beta$ is estimable if and only if it equals a linear combination of the row vectors of $X$.
• There is NO estimable combination that involves only the $\alpha$'s or only the $\beta$'s.
• Important estimable linear combinations:
  Group mean $\mu_{ij} = \mu + \alpha_i + \beta_j + \gamma_{ij}$.
  Interaction effect: $\gamma_{ij} - \gamma_{ij'} - (\gamma_{i'j} - \gamma_{i'j'})$.
Estimable linear combinations
Under the sum constraint, we have the estimates
\[ \hat\mu = \bar{y}_{\bullet\bullet\bullet}, \quad \hat\alpha_i = \bar{y}_{i\bullet\bullet} - \bar{y}_{\bullet\bullet\bullet}, \quad \hat\beta_j = \bar{y}_{\bullet j\bullet} - \bar{y}_{\bullet\bullet\bullet}, \quad \hat\gamma_{ij} = \bar{y}_{ij\bullet} - \bar{y}_{i\bullet\bullet} - \bar{y}_{\bullet j\bullet} + \bar{y}_{\bullet\bullet\bullet}, \]
so the estimate for the group mean is $\hat\mu_{ij} = \bar{y}_{ij\bullet}$, and the estimate for the interaction effect is
\[ \bar{y}_{ij\bullet} - \bar{y}_{ij'\bullet} - \left(\bar{y}_{i'j\bullet} - \bar{y}_{i'j'\bullet}\right). \]
Partitioning total variability
Source        df               SS     MS     F
Factor A      a − 1            SSA    MSA    MSA/MSE
Factor B      b − 1            SSB    MSB    MSB/MSE
Interactions  (a − 1)(b − 1)   SSAB   MSAB   MSAB/MSE
Residuals     ab(r − 1)        SSE    MSE
Total         abr − 1          SSTo

\[ \mathrm{SSA} = br\sum_{i=1}^{a}\bar{y}_{i\bullet\bullet}^2 - n\bar{y}_{\bullet\bullet\bullet}^2, \qquad \mathrm{SSB} = ar\sum_{j=1}^{b}\bar{y}_{\bullet j\bullet}^2 - n\bar{y}_{\bullet\bullet\bullet}^2, \]
\[ \mathrm{SSAB} = \sum_{i=1}^{a}\sum_{j=1}^{b}\frac{y_{ij\bullet}^2}{r} - \sum_{i=1}^{a}\frac{y_{i\bullet\bullet}^2}{br} - \sum_{j=1}^{b}\frac{y_{\bullet j\bullet}^2}{ar} + \frac{y_{\bullet\bullet\bullet}^2}{n}, \]
\[ \mathrm{SSE} = \sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{r}(y_{ijk} - \bar{y}_{ij\bullet})^2. \]
Due to the balanced design, these sums of squares do NOT depend on the order of the terms in the model.
Testing interaction
We first want to test whether any interactions are present:
\[ H_0: \gamma_{ij} - \gamma_{ij'} - (\gamma_{i'j} - \gamma_{i'j'}) = 0 \text{ for all } i \neq i',\, j \neq j', \qquad H_a: \text{not } H_0. \]
The F-statistic is
\[ F = \frac{\mathrm{MSAB}}{\mathrm{MSE}} \sim F_{(a-1)(b-1),\, ab(r-1)} \quad \text{under } H_0, \]
so we reject $H_0$ at significance level $\alpha$ if p-value $= P\left(F_{(a-1)(b-1),\, ab(r-1)} > F\right) < \alpha$.
Note that if the interaction is significant, we never test the main effects, because no main effect is estimable on its own.
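In R this is the interaction row of the ANOVA table; a sketch with a simulated battery-style layout (an assumption, not the course data):

    set.seed(2)
    battery <- expand.grid(rep = 1:2, temp = c("Room", "Cold"),
                           brand = c("A", "B", "C", "D"))
    battery$time <- rnorm(nrow(battery), mean = 150, sd = 10)
    fit_int <- lm(time ~ temp * brand, data = battery)
    anova(fit_int)  # the temp:brand row gives F = MSAB/MSE and its p-value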
Interaction plot
[Interaction plot for the battery example: mean time (MeanTime, roughly 100 to 175 minutes) against Brand (A, B, C, D), with separate lines for Temp = Cold and Temp = Room.]
Dressing example
• Dressing percentage (less 70%) is the percentage of the live animal that ends up as carcass.
• $n = 24$ swine.
• Swine are classified by sex (2 levels), i.e., $a = 2$, and breed (4 levels), i.e., $b = 4$.
Dressing example
[Interaction plot: dress against breed (1-4), with separate lines for sex 1 and sex 2.]
The lines for the two sexes do not follow the same trend, due in particular to breed 4; however, the variability of the response for breed-4 males is large, which makes the interaction insignificant.
Dressing example
The model now has the additive effects only,
\[ y_{ijk} = \mu + \alpha_i + \beta_j + \varepsilon_{ijk}, \]
and therefore we can test the estimable main effects, such as $H_0: \alpha_i - \alpha_{i'} = 0$ for $i \neq i'$.
Multiple comparison
Factor-level means
• In ANOVA models, we also want to compare the means of each factor level or combination of factor levels, especially when the omnibus F-test is significant. These means are always estimable.
• One-way model: $y_{ij} = \mu + \alpha_i + \varepsilon_{ij}$.
  Mean of the $i$th level: $\mu_i = \mu + \alpha_i$.
• Two-way model: $y_{ijk} = \mu + \alpha_i + \beta_j + \gamma_{ij} + \varepsilon_{ijk}$.
  Mean of the $(i, j)$th level: $\mu_{ij} = \mu + \alpha_i + \beta_j + \gamma_{ij}$.
  Equivalent to a one-way model with $ab$ levels.
Pairwise comparison of means
• When no single group is “special” or notable, we can consider each pairwise difference as a contrast of interest.
• Suppose we have $m$ groups, and we want to compare $\mu_i - \mu_j$ for all pairs $(i, j)$ with $i, j = 1, \dots, m$, $j \neq i$. In total, we have $m(m-1)/2$ pairs.
• In this case, a t-statistic can be constructed for testing each pairwise difference $H_0: \mu_i - \mu_j = 0$,
\[ T = \frac{\hat\mu_i - \hat\mu_j}{\mathrm{se}(\hat\mu_i - \hat\mu_j)} \sim t_{n-t} \quad \text{under } H_0, \]
and a confidence interval can be constructed for each pairwise population difference.
Controlling the familywise error rate
• Recall that when testing a single null hypothesis $H_0$, the significance level $\alpha$ is the probability of rejecting it incorrectly.
• For $m$ null hypotheses $H_{0j}$, $j = 1, \dots, m$, assume each test has significance level $\alpha$. What is the probability that we reject at least one null hypothesis incorrectly?
• Let $A_j$ denote the event that we reject $H_{0j}$ incorrectly, and for simplicity assume all the $A_j$ are independent. Then we need to find
\[ P(A_1 \cup A_2 \cup \dots \cup A_m) = P\left(\bigcup_{j=1}^{m} A_j\right). \]
Controlling the familywise error rate
We have
\[ P\left(\bigcup_{j=1}^{m} A_j\right) = 1 - P\left(\bigcap_{j=1}^{m} A_j^c\right) = 1 - \prod_{j=1}^{m} P(A_j^c) = 1 - (1 - \alpha)^m. \]
If the tests are dependent, this equality no longer holds, but Bonferroni's inequality always gives the upper bound
\[ P\left(\bigcup_{j=1}^{m} A_j\right) \leq \sum_{j=1}^{m} P(A_j) = m\alpha. \]
Controlling the familywise error rate
• The significance level $\alpha$ is called the componentwise error rate, while $\alpha_F = P\left(\bigcup_{j=1}^{m} A_j\right)$ is called the familywise error rate.
• For any $\alpha$, the familywise error rate goes to 1 as $m$ gets large: if we test more null hypotheses, we have more chances to reject at least one incorrectly.
[Plot of the familywise error rate $1 - (1 - \alpha)^m$ against $m$ from 0 to 60, rising towards 1.]
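The curve in this figure is straightforward to reproduce (a sketch matching the slide's $\alpha = 0.05$ and $m$ up to 60):

    alpha <- 0.05; m <- 1:60
    fwer <- 1 - (1 - alpha)^m  # familywise error rate under independence
    plot(m, fwer, type = "l", ylab = "Familywise error rate")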
Bonferroni correction
• The Bonferroni correction uses componentwise error rate $\alpha/m$.
• In other words, we only reject each null hypothesis $H_0: \mu_i - \mu_j = 0$ if the p-value $< \alpha/m$, and the corresponding confidence interval for $\mu_i - \mu_j$ is
\[ \hat\mu_i - \hat\mu_j \pm t_{\alpha/(2m),\, n-t}\,\mathrm{se}(\hat\mu_i - \hat\mu_j). \]
[Plot: the familywise error rate under the Bonferroni correction stays just below 0.05 for all $m$ from 0 to 60.]
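A sketch of Bonferroni-adjusted pairwise comparisons in R (with the simulated stand-in data from the one-way example):

    set.seed(1)
    dose <- factor(rep(c("Zero", "Low", "High"), each = 10))
    y <- rnorm(30, mean = rep(c(245, 246, 248), each = 10), sd = 2)
    pairwise.t.test(y, dose, p.adjust.method = "bonferroni")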
Fertilizer example
• In a study to compare 4 fertilizers A, B, C, and D, an experiment was designed, and several control areas (treatment 0) were incorporated.
[Plot of the responses for six replicates against Treatment (0, A, B, C, D).]
• The omnibus F-test is significant, suggesting that not all the group means are equal.
Fertilizer example
Construct a 95% confidence interval for the difference in the means of groups C and D: $\mu_C - \mu_D = \alpha_C - \alpha_D$.
All pairwise differences
[Two panels of 95% confidence intervals for all 10 pairwise differences (0 − A, 0 − B, ..., C − D): unadjusted intervals on the left, Bonferroni-adjusted intervals on the right.]
The Bonferroni correction uses $t_{0.05/20,\, 25} = 3.078$.
Tukey procedure
• Bonferroni’s correction is known to be too conservative, especially if the sample sizes for each group are unequal.
• Another common method to control the familywise error rate is based on Tukey’s honest significant difference (HSD), which usually yields narrower CIs than the Bonferroni method.
• The main idea: suppose $\bar{y}_{1\bullet}, \dots, \bar{y}_{t\bullet}$ are distributed as $N(\mu, \sigma^2/n)$ with a common group size $n$, and $\hat\sigma^2$ estimates $\sigma^2$ with $\nu$ degrees of freedom. Then the ratio
\[ \frac{\max_{1\leq i\leq t}\bar{y}_{i\bullet} - \min_{1\leq i\leq t}\bar{y}_{i\bullet}}{\hat\sigma/\sqrt{n}} \]
is called the studentized range statistic.
Tukey procedure
• The $(1 - \alpha)$ quantile of the distribution of the studentized range is denoted by $\mathrm{HSD}_\nu(1 - \alpha; t)$.
• This multiplier $\mathrm{HSD}_\nu(1 - \alpha; t)$ is calculated in R as qtukey(1 − α, t, df).
• A $(1 - \alpha)100\%$ Tukey confidence interval is given by
\[ (\hat\mu_i - \hat\mu_j) \pm \frac{\mathrm{HSD}_{n-t}(1 - \alpha; t)}{\sqrt{2}} \times \mathrm{se}(\hat\mu_i - \hat\mu_j). \]
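A sketch of the Tukey procedure in R (simulated stand-in data once more); TukeyHSD() applied to an aov fit returns all pairwise intervals directly:

    set.seed(1)
    dose <- factor(rep(c("Zero", "Low", "High"), each = 10))
    y <- rnorm(30, mean = rep(c(245, 246, 248), each = 10), sd = 2)
    TukeyHSD(aov(y ~ dose))            # adjusted CIs for all pairwise differences
    qtukey(0.95, nmeans = 3, df = 27)  # the HSD multiplier for t = 3, n - t = 27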
Bonferroni vs Tukey correction
[Two panels of 95% confidence intervals for all 10 pairwise differences: Bonferroni-adjusted on the left, Tukey-adjusted on the right.]
The Tukey correction uses qtukey(1 − 0.05, 5, 25)/√2 = 2.937, which is smaller than the Bonferroni multiplier 3.078, so the Tukey intervals are narrower.