ISyE 6644 OA — Summer 2019 — Test #3 Solutions
(revised 7/27/19)
This test is 120 minutes. You’re allowed three cheat sheets (6 sides total).
You’re allowed the following items:
• Pencil / pen and scratch paper.
• A reasonable calculator.
• Three cheat sheets (6 sides total).
• Normal, t, and χ² tables, which I will supply. You will not need Kolmogorov-Smirnov or ranking-and-selection tables.
But note that
• You are not allowed to use Arena, even though I’m asking a couple of questions
about it.
• This test requires some sort of proctor.
• If you encounter a ProctorTrack issue, contact us immediately (but don’t get an
ulcer over it).
All questions are worth 3 points, except #34, which is worth 1 point.
1. TRUE or FALSE? If f(x, y) = cxy for all 0 < x < 1 and 1 < y < 2, where c is
whatever value makes this thing integrate to 1, then X and Y are independent
random variables.
Solution: TRUE. (Because f(x, y) = a(x)b(y) factors nicely, and the support is a rectangle, so there are no funny limits.) □
2. Show how to generate in Arena a discrete random variable X for which we have
Pr(X = x) =
    0.3 if x = −3
    0.6 if x = 3.5
    0.1 if x = 4
    0   otherwise.
(a) DISC(0.3, −3, 0.6, 3.5, 0.1, 4)
(b) DISC(0.3, −3, 0.9, 3.5, 1.0, 4)
(c) DISC(−3, 0.3, 3.5, 0.9, 4, 1.0)
(d) CONT(0.3, −3, 0.6, 3.5, 0.1, 4)
(e) CONT(0.3, −3, 0.9, 3.5, 1.0, 4)
Solution: (b). (It's of the form DISC(F(x1), x1, F(x2), x2, . . .); i.e., DISC takes the cumulative probabilities 0.3, 0.9, 1.0, not the individual ones.) □
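Outside of Arena, the same cumulative-probability logic can be sketched in a few lines of Python; this is just an illustrative check, not Arena syntax.

```python
import random

# Sketch of the logic behind DISC(0.3, -3, 0.9, 3.5, 1.0, 4): compare a
# Unif(0,1) draw against the cumulative probabilities 0.3, 0.9, 1.0.
def gen_x(u=None):
    u = random.random() if u is None else u
    for F, x in [(0.3, -3), (0.9, 3.5), (1.0, 4)]:   # (F(x_i), x_i) pairs
        if u <= F:
            return x

xs = [gen_x() for _ in range(100_000)]
print(sum(1 for v in xs if v == -3) / len(xs))   # should be close to 0.3
```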
3. TRUE or FALSE? In our Arena Call Center example, it was possible for entities
to be left in the system when it shut down at 7:00 p.m. (even though we stopped
allowing customers to enter the system at 6:00 p.m.).
Solution: TRUE — because of the small chance that a callback will occur. □
4. TRUE or FALSE? An entity can be scheduled to visit the same resource twice,
with different service time distributions on the two visits!
Solution: TRUE. □
5. TRUE or FALSE? Arena has a built-in Input Analyzer tool that allows for the
fitting of certain distributions to data.
Solution: TRUE. □
6. Suppose the continuous random variable X has p.d.f. f(x) = 2x for 0 ≤ x ≤ 1. Find the inverse of X's c.d.f., and thus show how to generate the RV X in terms of a Unif(0,1) PRN U.
(a) X = U/2
(b) X = 2U
(c) X = U²
(d) X = √U
(e) X = √(2U)
Solution: The c.d.f. is easily shown to be F(x) = x² for 0 ≤ x ≤ 1, so that the Inverse Transform Theorem gives F(X) = X² = U ∼ Unif(0, 1). Solving for X, we obtain the desired inverse, X = F⁻¹(U) = √U, where we don't worry about the negative square root, since X ≥ 0. Thus, (d) is the answer. □
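As a quick sanity check (a sketch, not part of the exam; the seed is arbitrary), generating X = √U and comparing the empirical c.d.f. against x² shows the match.

```python
import random

# Generate X = sqrt(U) via inverse transform and check the empirical c.d.f.
random.seed(6644)
xs = [random.random() ** 0.5 for _ in range(200_000)]
for x in (0.25, 0.5, 0.9):
    ecdf = sum(1 for v in xs if v <= x) / len(xs)
    print(f"x = {x}:  empirical {ecdf:.3f}  vs.  x^2 = {x*x:.3f}")
```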
7. If U1 and U2 are i.i.d. Unif(0,1) with U1 = 0.45 and U2 = 0.45, use Box-Muller to
generate two i.i.d. Nor(0,1) realizations. (I was lazy and made U1 and U2 the same
value so that the problem will be easier for me to grade!)
(a) Z1 = 1.2622, Z2 = −0.0623
(b) Z1 = 1.2622, Z2 = 1.2622
(c) Z1 = −1.2019, Z2 = 0.3905
(d) Z1 = −1.2019, Z2 = −1.2019
(e) Z1 = 1.2622, Z2 = −1.2019
Solution: We have
Z1 = √(−2 ln(U1)) cos(2πU2) = √(−2 ln(0.45)) cos(0.9π) = −1.2019
Z2 = √(−2 ln(U1)) sin(2πU2) = √(−2 ln(0.45)) sin(0.9π) = 0.3905.
Thus, the answer is (c). □
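A few lines of Python reproduce the two numbers (just a numerical check of the Box-Muller formulas above).

```python
import math

# Box-Muller with U1 = U2 = 0.45.
U1, U2 = 0.45, 0.45
r = math.sqrt(-2.0 * math.log(U1))       # common radial term
print(r * math.cos(2.0 * math.pi * U2))  # Z1 ≈ -1.2019
print(r * math.sin(2.0 * math.pi * U2))  # Z2 ≈  0.3905
```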
8. Suppose that Z1, Z2, and Z3 are i.i.d. Nor(0,1) random variables, and let
T = Z1 / √((Z2² + Z3²)/2).
Find the value of x such that Pr(T < x) = 0.99.
(a) x = 4.550
(b) x = 6.965
(c) x = 9.210
(d) x = 11.345
(e) x = −11.345
Solution: Note that
T ∼ Nor(0, 1) / √(χ²(2)/2) ∼ t(2).
Therefore,
0.99 = Pr(T < x) = Pr(t(2) < x),
so that the quantile x = t_{0.01,2} = 6.9646, i.e., answer (b). □
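If you have SciPy handy (not needed on the exam, where the t table suffices), the quantile can be checked directly.

```python
from scipy import stats

# 0.99 quantile of the t distribution with 2 degrees of freedom.
print(stats.t.ppf(0.99, df=2))   # ≈ 6.9646
```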
9. Suppose X has the Laplace distribution with p.d.f. f(x) = (λ/2)e^(−λ|x|) for x ∈ ℝ and λ > 0. This looks like two exponentials symmetric on both sides of the y-axis. Which of the methods below would be very reasonable to use to generate realizations from this distribution?
(a) Ask a UGA student
(b) Box-Muller
(c) Inverse Transform Method
(d) Acceptance-Rejection
(e) Both (c) and (d)
Solution: (a) is a sick joke; (b) is for normal observations; (c) is OK because we
can get the c.d.f. in an easy closed form (see the composition notes); and (d) is
OK because the p.d.f. is nice and finite. Thus, the correct answer is (e). □
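Here is a minimal sketch of option (c), the inverse transform, using the closed-form Laplace c.d.f. and assuming λ = 1 purely for illustration.

```python
import math, random

# Inverse transform for the Laplace distribution, with lambda = 1 assumed.
# F(x) = 0.5*exp(lam*x) for x < 0, and 1 - 0.5*exp(-lam*x) for x >= 0.
def laplace_via_inverse_transform(lam=1.0):
    u = random.random()
    if u < 0.5:
        return math.log(2.0 * u) / lam           # left exponential half
    return -math.log(2.0 * (1.0 - u)) / lam      # right exponential half

xs = [laplace_via_inverse_transform() for _ in range(100_000)]
print(sum(xs) / len(xs))   # should be near 0, the Laplace mean
```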
10. Consider a bivariate normal random variable (X, Y), for which E[X] = −3, Var(X) = 4, E[Y] = −2, Var(Y) = 9, and Cov(X, Y) = 2. Find the Cholesky matrix associated with (X, Y), i.e., the lower-triangular matrix C such that Σ = CC′, where Σ is the variance-covariance matrix.
(a) C = ( 2    0
          1  2√2 )
(b) C = ( 2    0
          2√2  1 )
(c) C = ( 4  2
          2  9 )
(d) C = ( 2   √2
          √2   3 )
Solution: By the class notes, for k = 2 we obtain
C = ( √σ11                0
      σ12/√σ11   √(σ22 − σ12²/σ11) )
  = ( √4       0
      2/√4     √(9 − 2²/4) )
  = ( 2    0
      1  2√2 ).
Thus, the answer is (a). □
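NumPy can confirm the hand computation (a check only; note 2√2 ≈ 2.828).

```python
import numpy as np

# Lower Cholesky factor of Sigma = [[4, 2], [2, 9]].
Sigma = np.array([[4.0, 2.0],
                  [2.0, 9.0]])
C = np.linalg.cholesky(Sigma)   # lower triangular
print(C)                        # [[2, 0], [1, 2.828...]]
print(C @ C.T)                  # recovers Sigma
```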
11. Consider a nonhomogeneous Poisson arrival process with rate function λ(t) = 2t
for t ≥ 0. Find the probability that there will be exactly 2 arrivals between times
t = 1 and 2.
(a) 0.056
(b) 0.112
(c) 0.224
(d) 0.448
(e) 0.896
(f) None of the above
Solution: The distribution of the number of arrivals between times 1 and 2 is
N(2) − N(1) ∼ Pois(∫_1^2 λ(t) dt) ∼ Pois(∫_1^2 2t dt) ∼ Pois(3).
Thus,
Pr(N(2) − N(1) = 2) = Pr(Pois(3) = 2) = e⁻³ 3² / 2! = 0.2240.
This is answer (c). □
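A quick SciPy check of the Poisson probability (again, only a sketch to verify the arithmetic).

```python
from scipy import stats

# N(2) - N(1) is Poisson with mean = integral of 2t from 1 to 2 = 2^2 - 1^2 = 3.
print(stats.poisson.pmf(2, 3))   # ≈ 0.224
```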
12. Suppose we are generating arrivals from a nonhomogeneous Poisson process with rate function λ(t) = 1 + sin(πt), so that the maximum rate is λ* = 2, which is periodically achieved. Suppose that we generate a potential arrival (i.e., one at rate λ*) at time t = 0.75. What is the probability that our usual thinning algorithm will actually accept that potential arrival as an actual arrival? (Note that the π means that calculations are in radians.)
(a) 0
(b) 0.146
(c) 0.5
(d) 0.854
(e) 1
(f) None of the above
Solution: The thinning algorithm says that we accept with probability
λ(t)/λ* = (1 + sin(0.75π))/2 = 0.854.
This is answer (d). □
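The acceptance probability, and the thinning accept/reject step itself, look like this in a short Python sketch.

```python
import math, random

# Thinning: a potential arrival at time t is kept with probability lambda(t)/lambda*.
lam_star = 2.0
lam = lambda t: 1.0 + math.sin(math.pi * t)
p_accept = lam(0.75) / lam_star
print(p_accept)                          # ≈ 0.854
keep = (random.random() < p_accept)      # the actual accept/reject step
```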
13. Suppose X1, X2, . . . is an i.i.d. sequence of random variables with mean µ and variance σ². Consider the process Yn(t) ≡ Σ_{i=1}^{⌊nt⌋} (Xi − µ)/(σ√n) for t ≥ 0. What is the asymptotic probability that Yn(4) will be at least 2 as n becomes large? Hint: Recall that Donsker's Theorem states that Yn(t) converges to a standard Brownian motion as n becomes large.
(a) 0
(b) 0.1587
(c) 0.5
(d) 0.8413
(e) 1
(f) None of the above
Solution: The Hint reminds us that as n → ∞, Yn(t) converges in distribution to W(t) ∼ Nor(0, t). In particular, Yn(4) converges in distribution to Nor(0, 4), so that
Pr(Yn(4) ≥ 2) ≈ Pr(Nor(0, 1) ≥ (2 − 0)/√4) = Pr(Nor(0, 1) ≥ 1) = 0.1587.
This is choice (b). □
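The normal tail probability is easy to verify with SciPy (a check, not part of the solution).

```python
from scipy import stats

# Y_n(4) is approximately Nor(0, 4), i.e., standard deviation 2.
print(stats.norm.sf(2, loc=0, scale=2))   # ≈ 0.1587
print(stats.norm.sf(1))                   # same value after standardizing
```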
14. Which one of the following properties of a Brownian motion process W(t) is
FALSE?
(a) W(0) = 0.
(b) W(t) ∼ Nor(0, t).
(c) W(3)−W(1) has the same distribution as W(11)−W(9).
(d) W(3)−W(1) is independent of W(4)−W(2).
(e) Cov(W(3),W(5)) = 3.
Solution: (a) and (b) are TRUE because they are fundamental axioms of Brownian
motion. (c) is TRUE by stationary increments (another fundamental axiom). (d)
is FALSE because it violates independent increments (due to overlapping time
intervals). (e) is TRUE since we showed in class that Cov(W(a),W(b)) = min(a, b).
Thus, the FALSE answer we're looking for is (d). □
15. Find the sample variance of −10, 10, 0.
(a) 0
(b) 10
(c) √200
(d) 100
(e) 200
(f) None of the above
Solution: The sample mean is 0, so S² = [(−10)² + 10² + 0²]/(3 − 1) = 100. So (d) is the answer. □
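In NumPy, the n − 1 divisor corresponds to ddof=1 (a one-line check).

```python
import numpy as np

# The sample variance uses the n - 1 divisor (ddof=1 in NumPy).
print(np.var([-10, 10, 0], ddof=1))   # 100.0
```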
16. If X1, . . . , X10 are i.i.d. Exp(1/7) (i.e., having mean 7), what is the expected value of the sample variance S²?
(a) 1/49
(b) 1/7
(c) 10/7
(d) 7
(e) 49
(f) None of the above
Solution: S² is always unbiased for the variance of the Xi's. Thus, we have E[S²] = Var(Xi) = 1/λ² = 49, which is answer (e). □
17. TRUE or FALSE? The mean squared error of an estimator is the square of the
bias plus the square of its variance.
Solution: FALSE. It's bias² + variance (not variance squared). □
18. If X1 = 7, X2 = 3, and X3 = 5 are i.i.d. realizations from a Nor(µ, σ²) distribution, what is the value of the maximum likelihood estimate for the variance σ²?
(a) √2.667
(b) 2
(c) 2.667
(d) 4
(e) 16
(f) None of the above
Solution: We know from a class example that
σ̂² = ((n − 1)/n) S² = (1/n) Σ_{i=1}^{n} (Xi − X̄)² = (1/n) Σ_{i=1}^{n} Xi² − X̄² = 2.667.
Thus, the answer is (c). □
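The MLE divides by n rather than n − 1, which is ddof=0 in NumPy (shown here only as a check).

```python
import numpy as np

# The normal MLE of sigma^2 divides by n (ddof=0), unlike the unbiased S^2.
x = np.array([7.0, 3.0, 5.0])
print(np.var(x, ddof=0))   # 2.667  (the MLE)
print(np.var(x, ddof=1))   # 4.0    (S^2, for contrast)
```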
19. Suppose that we take three i.i.d. observations X1 = 2, X2 = 3, and X3 = 1 from
X ∼ Exp(λ). Using the maximum likelihood estimate for λ, find the MLE of
Pr(X > 2).
(a) 0.037
(b) 0.368
(c) 0.5
(d) 0.632
(e) 0.963
(f) None of the above
Solution: By the class notes, the MLE of λ is λ̂ = 1/X̄ = 1/2. By the Invariance Property of MLEs, we have
P̂r(X > 2) = e^(−2λ̂) = e^(−(1/2)(2)) = e⁻¹ = 0.368.
Thus, the answer is (b). □
20. Suppose we're conducting a χ² goodness-of-fit test to determine whether or not 100 i.i.d. observations are from a Johnson distribution with s = 4 unknown parameters a, b, c, and d. (The Johnson distribution is very general and often fits data quite well.) If we divide the observations into k = 10 equal-probability intervals and we observe a g-o-f statistic of χ₀² = 14.2, will we ACCEPT (i.e., fail to reject) or REJECT the null hypothesis of the Johnson? Use level of significance α = 0.05 for your test.
Solution: Note that the χ² test has ν = k − s − 1 = 10 − 4 − 1 = 5 degrees of freedom. Then
χ₀² = 14.2 > χ²_{0.05,5} = 11.07,
so we (easily) REJECT. □
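The critical value comes straight off the χ² table; SciPy reproduces it if you want to double-check.

```python
from scipy import stats

# Chi-square critical value with k - s - 1 = 5 d.o.f. at alpha = 0.05.
crit = stats.chi2.ppf(0.95, df=5)
print(crit, 14.2 > crit)   # ≈ 11.07, True => REJECT
```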
21. TRUE or FALSE? The Kolmogorov-Smirnov test can be used both to see (i) if
data seem to fit to a particular hypothesized distribution and (ii) if the data are
independent.
Solution: FALSE. (It's just a goodness-of-fit test.) □
22. Let’s run a simulation whose output is a sequence of consecutive customer waiting
times in a crowded store. Which of the following statements is true?
(a) The waiting times are independent.
(b) The waiting times are correlated.
(c) The waiting times are normally distributed.
(d) The waiting times are identically distributed throughout the day.
Solution: Just (b). Consecutive waiting times are rarely i.i.d. normal. □
23. Suppose we want to estimate the expected average waiting time (in minutes) for
the first m = 100 customers at a bank. We make r = 3 independent replications
of the system, each initialized empty and idle and consisting of 100 waiting times.
The resulting replicate means are:
i     1    2    3
Zi   12   14   11
Find a 95% two-sided confidence interval for the mean average waiting time for the
first 100 customers.
(a) [4.5, 20.1]
(b) [8.5, 16.1]
(c) [10.5, 14.1]
(d) 12.3± 5.8
(e) None of the above.
Solution: The sample mean and sample variance of the 3 replicate means are easily calculated as Z̄3 = 12.333 and S²_Z = 2.333. For level α = 0.05, we have t_{α/2,r−1} = t_{0.025,2} = 4.303, and so the CI is
µ ∈ Z̄r ± t_{α/2,r−1} √(S²_Z / r) = 12.333 ± 4.303 √(2.333/3) = 12.333 ± 3.795 = [8.538, 16.128].
Thus, the answer is (b). □
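The whole replication CI fits in a few lines of Python, again just to verify the arithmetic.

```python
import numpy as np
from scipy import stats

# 95% CI from the r = 3 replicate means.
z = np.array([12.0, 14.0, 11.0])
r = len(z)
hw = stats.t.ppf(0.975, df=r - 1) * np.sqrt(z.var(ddof=1) / r)   # half-width ≈ 3.795
print(z.mean() - hw, z.mean() + hw)                              # ≈ [8.54, 16.13]
```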
24. Suppose that µ ∈ [−30, 90] is a 90% confidence interval for the mean cost incurred
by a certain inventory policy. Further suppose that this interval was based on 4
independent replications of the underlying inventory system. Unfortunately, the
boss has decided that she wants a 95% confidence interval. Can you supply it?
(a) [−61.1, 121.1]
(b) [−51.1, 111.1]
(c) [−30, 90]
(d) [−20.5, 99.5]
(e) 30± 45
Solution: The 90% confidence interval is of the form
[−30, 90] = X̄ ± t_{α/2,b−1} y = 30 ± t_{0.05,3} y,
where the half-length is t_{0.05,3} y = 60.
The new 95% CI will therefore be of the form
X̄ ± t_{0.025,3} y = 30 ± (t_{0.025,3}/t_{0.05,3}) t_{0.05,3} y = 30 ± (3.182/2.353)(60) = 30 ± 81.14 = [−51.14, 111.14].
Thus, the answer is (b). □
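The rescaling of the half-length by the ratio of t quantiles can be checked directly.

```python
from scipy import stats

# Rescale the 90% half-length (60) to 95% using t quantiles with 3 d.o.f.
hw95 = 60.0 * stats.t.ppf(0.975, df=3) / stats.t.ppf(0.95, df=3)   # ≈ 81.14
print(30.0 - hw95, 30.0 + hw95)                                    # ≈ [-51.1, 111.1]
```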
25. TRUE or FALSE? Welch’s method is a graphical technique to estimate truncation
(initialization bias) points for steady-state simulation.
Solution: TRUE. □
26. Suppose that we're studying a stochastic process whose covariance function is Rk = 3 − |k| for k = 0, ±1, ±2, and 0 otherwise. Find the variance of X̄3 (the sample mean of the first 3 observations).
(a) −0.35
(b) 0
(c) 0.65
(d) 1.48
(e) 2.11
Solution:
Var(X̄n) = (1/n) [R0 + 2 Σ_{k=1}^{n−1} (1 − k/n) Rk]
        = (1/3) [3 + 2 Σ_{k=1}^{2} (1 − k/3)(3 − k)]
        = (1/3) [3 + 2(2/3)(2) + 2(1/3)(1)]
        = 2.111.
Thus, the answer is (e). □
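The same formula, evaluated in a few lines of Python as a sanity check.

```python
# Var(Xbar_n) = (1/n) * [R_0 + 2 * sum_{k=1}^{n-1} (1 - k/n) * R_k].
n = 3
R = {0: 3.0, 1: 2.0, 2: 1.0}   # R_k = 3 - |k| for |k| <= 2, else 0
print((R[0] + 2 * sum((1 - k / n) * R[k] for k in range(1, n))) / n)   # ≈ 2.111
```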
27. Consider the following 5 observations:
54 80 75 62 90
If we choose a batch size of 3, calculate all of the overlapping batch means for me.
(a) 72.2
(b) 75
(c) 67.75, 76.75
(d) 69.7, 72.3, 75.7
Solution:
X̄_{1,3}^O = (1/3) Σ_{i=1}^{3} Xi = 69.67,
X̄_{2,3}^O = (1/3) Σ_{i=2}^{4} Xi = 72.33, and
X̄_{3,3}^O = (1/3) Σ_{i=3}^{5} Xi = 75.67.
Therefore, the answer is (d). □
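A short sliding-window computation reproduces the three overlapping batch means.

```python
import numpy as np

# Overlapping batch means with batch size m = 3 on the 5 observations.
x = np.array([54.0, 80.0, 75.0, 62.0, 90.0])
m = 3
print([round(x[i:i + m].mean(), 2) for i in range(len(x) - m + 1)])
# [69.67, 72.33, 75.67]
```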
28. Which variance reduction method uses the difference X̄ − Ȳ of two positively correlated sample means to get a lower-variance estimator for the difference µX − µY of the underlying unknown means?
(a) common random numbers
(b) antithetic random numbers
(c) stratified sampling
(d) composition
Solution: (a). □
29. Which variance reduction method most closely resembles regression, e.g., an estimator for the expected value E[X] of the form Y = X̄ − β(C − E[C]), where E[C] is known and β is a constant, so that Y is unbiased for E[X]?
(a) common random numbers
(b) antithetic random numbers
(c) control variates
(d) composition
Solution: (c). □
30. TRUE or FALSE? If you are using a ranking-and-selection procedure and two
competitors happen to fall within the indifference-zone, then you don’t really care
too much which one you end up selecting.
Solution: TRUE. That's why it's called the IZ! □
31. Suppose we are interested in determining which of 3 soft drinks is most likely to
be chosen by a person in a survey. Which type of ranking-and-selection problem is
this?
(a) Normal
(b) Bernoulli
(c) Poisson
(d) Exponential
(e) Multinomial
Solution: (e). □
32. TRUE or FALSE? Sequential ranking-and-selection procedures are designed to
stop early if one alternative seems to be way out in front of the others.
Solution: TRUE. □
33. Suppose, when designing a ranking-and-selection procedure, you have decided to
increase the desired probability of correct selection compared to a previous run of
the procedure. What can you expect?
(a) Sample sizes that are about the same
(b) Larger sample sizes
(c) Somewhat lower achieved Pr(CS)
(d) A larger indifference zone
(e) Lower confidence in your selection
Solution: (b). □
34. (1 point — this is not a bonus question!) Zombies or Bieber?
Solution: Zoms. □
