

ACMS 40740 Fall 2022 Take-Home Midterm Project

In this project, you will read a monkey's brain! Specifically, you will derive an algorithm to decode neural activity recorded from the primary visual cortex of a monkey's brain to determine what the monkey was looking at. We will focus on the simple case of using spike counts from 1 neuron to distinguish between 2 different stimuli. Because we are using just 1 neuron, our decoder will not be very accurate. At the end of the course, we will learn to decode 12 stimuli using data from over 100 neurons with near-perfect accuracy!

Figure 2.3B shows histograms of spike counts recorded from a single neuron while the monkey was watching drifting grating stimuli with two different orientations, $\theta_1 = 120^\circ$ and $\theta_2 = 150^\circ$, for 100 trials each. A simple algorithm to decode these spike trains is to draw a vertical line on the x-axis. If a spike count lies to the right of this line, the decoder classifies the stimulus as $\theta_1$; if the spike count lies to the left of the line, it classifies it as $\theta_2$ (see the black line in Figure B.10.A for an example). In other words, the algorithm can be written as

Guess $\theta_1$ whenever $N > z$.    (1)

Here, $z$ is the location of the vertical line. Our goal is to derive an optimal $z$. The first part of Section B.5 derives the optimal $z$ under a Gaussian model. For this assignment, you will derive the optimal $z$ under an arguably more realistic Poisson model. This is Exercise B.5.2 in the textbook.

The Poisson model is defined by assuming that the spike count under each stimulus condition obeys a Poisson distribution:

$$P(N \mid \theta_1) = \frac{\lambda_1^N e^{-\lambda_1}}{N!}, \qquad P(N \mid \theta_2) = \frac{\lambda_2^N e^{-\lambda_2}}{N!}$$    (2)

where $\lambda_j$ is the mean spike count under stimulus condition $\theta_j$ for $j = 1, 2$. Here, $P(N \mid \theta_j)$ denotes the probability that the spike count is $N$ given that the stimulus is $\theta_j$.

An optimal decoder would guess $\theta_1$ whenever $\theta_1$ has a higher posterior probability than $\theta_2$, i.e., the decoder should do the following:

Guess $\theta_1$ whenever $P(\theta_1 \mid N) > P(\theta_2 \mid N)$.    (3)

However, we have a model for $P(N \mid \theta_j)$, not for $P(\theta_j \mid N)$. How can we resolve this? Bayes' theorem tells us that

$$P(\theta_1 \mid N) = \frac{P(N \mid \theta_1)\,P(\theta_1)}{P(N)}, \qquad P(\theta_2 \mid N) = \frac{P(N \mid \theta_2)\,P(\theta_2)}{P(N)}.$$

Note that $P(N)$ is the same in both cases. If we assume that both orientations are presented equally often (i.e., $P(\theta_1) = P(\theta_2)$), then Bayes' theorem tells us that the optimal decoder in Eq. (3) is equivalent to the following likelihood-based decoder:

Guess $\theta_1$ whenever $P(N \mid \theta_1) > P(N \mid \theta_2)$.    (4)

If we assume that $\lambda_1 > \lambda_2$ (which is true for the empirical means in our data), then Eq. (4) can be re-written in the form of Eq. (1).
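As a concrete illustration of the likelihood comparison in Eq. (4), the short sketch below evaluates both Poisson likelihoods over a range of integer spike counts and reports where the comparison flips in favor of $\theta_1$. The means lam1 and lam2 are placeholder values, not the empirical means from Figure 2.3B; this is only a sanity-check tool, not part of the required solution.

```python
# A minimal sketch of the likelihood-based decoder in Eq. (4), assuming
# placeholder means lam1 > lam2 (these are NOT the empirical means from
# Figure 2.3B; substitute your own values from the data).
import numpy as np
from scipy.stats import poisson

lam1, lam2 = 8.0, 4.0            # placeholder mean spike counts, lam1 > lam2
N = np.arange(0, 31)             # candidate integer spike counts

p1 = poisson.pmf(N, lam1)        # P(N | theta_1) under the Poisson model
p2 = poisson.pmf(N, lam2)        # P(N | theta_2) under the Poisson model

guess_theta1 = p1 > p2           # Eq. (4): guess theta_1 when its likelihood is larger
crossover = N[guess_theta1][0]   # smallest integer count decoded as theta_1

print(f"The decoder guesses theta_1 for all counts N >= {crossover}")
```

Because the likelihood ratio is monotone in $N$ when $\lambda_1 > \lambda_2$, the counts decoded as $\theta_1$ form a single right-hand tail, which is exactly the form of Eq. (1); the threshold $z$ you derive in Problem 1 should fall just below this crossover when evaluated at the same means.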
1. Derive an equation for $z$ as a function of $\lambda_1$ and $\lambda_2$ under our Poisson model. Hint: combine Eqs. (2) and (4).

   Turn in: Your derivation of $z$, with the final equation circled.

2. Compute the numerical value of $z$ using the mean spike counts from the data in Figure 2.3B (see the code in OrientationTuningCurve.ipynb). Add a vertical line to Figure 2.3B using the command plt.axvline(z, color='k'). This helps to visualize the decision algorithm and gives you a sanity check. Does this line seem to give a reasonable cutoff? Any blue data point to the left of this line is misclassified, and any red data point to the right of the line is misclassified. What percentage of the data points are misclassified? (A sketch of this bookkeeping appears after the closing notes below.)

   Turn in: The value of $z$ (to 1 decimal place) and the percentage of misclassified data points (to 1 decimal place). You can submit both answers together in a single file, which can be a scanned or photographed handwritten document (or typed, if you prefer and are able).

For Problem 2, just write down the numbers that you got from your code; there is no need for a derivation or any other details. If you are interested in reading more deeply about this approach to neural decoding and how it can be extended to neural populations, take a look at Section B.5.
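For reference, here is a minimal sketch of the Problem 2 bookkeeping. The arrays counts1 and counts2 are hypothetical stand-ins for the per-trial spike counts in OrientationTuningCurve.ipynb (filled with synthetic Poisson draws so the sketch runs on its own), and the value of z is a placeholder for the number you compute from your Problem 1 formula; none of the numbers printed here are the answers to the assignment.

```python
# A minimal sketch of the Problem 2 steps, using synthetic stand-in data.
# `counts1` and `counts2` are placeholders for the real per-trial spike
# counts in OrientationTuningCurve.ipynb, and `z` is a placeholder for the
# threshold from your Problem 1 formula.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
counts1 = rng.poisson(8.0, size=100)   # stand-in for the theta_1 = 120 deg trials
counts2 = rng.poisson(4.0, size=100)   # stand-in for the theta_2 = 150 deg trials

lam1, lam2 = counts1.mean(), counts2.mean()   # empirical mean spike counts
z = 5.8                                       # placeholder: replace with your z(lam1, lam2)

# Re-plot the spike-count histograms (cf. Figure 2.3B) and overlay the boundary.
bins = range(0, max(counts1.max(), counts2.max()) + 2)
plt.hist(counts1, bins=bins, alpha=0.5, label="theta_1 = 120 deg")
plt.hist(counts2, bins=bins, alpha=0.5, label="theta_2 = 150 deg")
plt.axvline(z, color='k')
plt.xlabel("spike count")
plt.ylabel("number of trials")
plt.legend()

# Misclassification: theta_1 trials with N <= z are decoded as theta_2,
# and theta_2 trials with N > z are decoded as theta_1 (Eq. (1)).
errors = np.sum(counts1 <= z) + np.sum(counts2 > z)
pct = 100 * errors / (counts1.size + counts2.size)
print(f"z = {z:.1f}, misclassified = {pct:.1f}%")
plt.show()
```

Replace the synthetic arrays and the placeholder z with the quantities from the notebook and your own derivation to obtain the numbers you report.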

