
CS 6923: Machine Learning Fall 2023

Homework 8

Part I: Written Exercises

1. In this exercise you will get practice with performing gradient descent in one variable. Consider the function

f(x) = 4x^4 − 15x^3 + 11x^2 + 10x + 2

Graph this function for x in the interval [-1, 3]. You will see that it has two minima in that range, a local minimum and a global minimum. These are the only minima of the function.

(a) What is the value of x at the local minimum and at the global minimum? You can find the answer either by hand using calculus or using software.

(b) Suppose we apply gradient descent to this function, starting with x = −1. To do this, we will need to update x using the update rule

x = x − η · f′(x)

where f′ is the derivative of f, and η is the “step size”.

Write a small program implementing gradient descent for this function. Setting x = −1 and η = 0.001, run gradient descent for 6 iterations (that is, do the update 6 times). Report the values of x and f(x) at the start and after each of the first 6 iterations.

Run the gradient descent again, starting with x=-1, for 1200 iterations. Report the last 6 values of x and f(x).

Has the value of x converged? Has the gradient descent found a minimum? Is it the global or the local minimum?

You do NOT have to hand in your code.
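For reference, here is a minimal sketch of such a gradient descent loop in Python. The function, its derivative, and the starting values x = −1 and η = 0.001 are taken from part (b); everything else (variable names, print format) is just one possible way to write it.

# Gradient descent on f(x) = 4x^4 - 15x^3 + 11x^2 + 10x + 2
def f(x):
    return 4 * x**4 - 15 * x**3 + 11 * x**2 + 10 * x + 2

def f_prime(x):
    # derivative of f
    return 16 * x**3 - 45 * x**2 + 22 * x + 10

x, eta = -1.0, 0.001              # starting point and step size from part (b)
print(f"start: x = {x:.6f}, f(x) = {f(x):.6f}")
for i in range(6):                # change 6 to 1200 (or 100) for the later parts
    x = x - eta * f_prime(x)      # update rule: x = x - eta * f'(x)
    print(f"iter {i+1}: x = {x:.6f}, f(x) = {f(x):.6f}")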

(c) Repeat the previous exercise, but this time, start with x=3.

(d) Setting x = −1 and η = 0.01, run gradient descent for 1200 iterations. As in the previous two exercises, report the initial values of x and f(x), the next 6 values of x and f(x), and the last 6 values of x and f(x). Compare the results obtained this time to the results obtained above for x = −1 and η = 0.001. What happened?

(e) Setting x = −1 and η = 0.1, run gradient descent for 100 iterations. What happened?

2. Given a neural network with 3 layers, where the input layer has 2 neurons, the hidden layer has 2 neurons, the output layer has 1 neuron, and

   W^(1) = | 1  2 |,   W^(2) = [ 1  2 ],   b^(1) = |  1 |,   b^(2) = 1
           | 3  4 |                                | -1 |

Suppose you had the following training set¹: ((1, 0)^T, 1), ((0, 1)^T, 0). Perform 1 step of gradient descent where the learning rate is 0.2, and the activation function is the sigmoid function.

¹We are representing the training set using: ((x1, x2)^T, y).
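If you want to check your hand computation, here is a minimal sketch of one batch gradient-descent step for this network. It assumes the squared-error loss J = (1/2)(y − ŷ)^2 from class, sigmoid activations in both layers, and that the gradients of the two training examples are summed (divide by 2 if your convention averages them); these conventions, and the variable names, are assumptions rather than part of the problem statement.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# weights, biases, learning rate, and data from the problem statement
W1 = np.array([[1.0, 2.0], [3.0, 4.0]]); b1 = np.array([1.0, -1.0])
W2 = np.array([[1.0, 2.0]]);             b2 = np.array([1.0])
eta = 0.2
X = np.array([[1.0, 0.0], [0.0, 1.0]])   # the two inputs
y = np.array([1.0, 0.0])                 # their labels

dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)
dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)
for x_i, y_i in zip(X, y):
    z1 = W1 @ x_i + b1; a1 = sigmoid(z1)       # hidden layer
    z2 = W2 @ a1 + b2;  a2 = sigmoid(z2)       # output yhat
    delta2 = (a2 - y_i) * a2 * (1 - a2)        # dJ/dz2 for squared error
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)   # dJ/dz1
    dW2 += np.outer(delta2, a1); db2 += delta2
    dW1 += np.outer(delta1, x_i); db1 += delta1

# one gradient-descent step (gradients summed over both examples)
W1 -= eta * dW1; b1 -= eta * db1
W2 -= eta * dW2; b2 -= eta * db2
print(W1, b1, W2, b2, sep="\n")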


3. Repeat the previous problem, but now use a ReLU activation function for layer 2 of the network instead of a sigmoid activation function. (The ReLU function and a derivative you can use at 0 are given in question 6(b) below.)

4. In class, we considered a neural net with error function J = (1/2)(y − ŷ)^2. In this question, we will also discuss neural nets for three other types of problems.

To make it easier to refer to the four types of neural nets, we’ll give them names. We’ll call the neural net discussed in class NeuralNetRZeroOne (R for regression, since it is often used for regression tasks).

NeuralNetRZeroOne: This neural net is designed for problems with K outputs where each output is either an element of the set {0, 1} or a real value in the interval [0, 1]. It has sigmoid activation functions in both the hidden nodes AND the output nodes.

We now give the names of the other three types of neural nets, with their descriptions.

NeuralNetRK: The error function is squared error, the same as for NeuralNetRZeroOne. This neural net has sigmoid activation functions in the hidden nodes, but NOT in the output nodes. Each output node just outputs the z score for that node directly. This is equivalent to saying that the activation function at the output nodes is the identity function.

NeuralNetCB: NeuralNetCB is a standard neural net for binary classification. It has a single output node which outputs a value ŷ, where ŷ is the predicted value of P[Class 1 | x]. It uses sigmoid functions as the activation functions both for the hidden nodes and for the output nodes.² The goal of the backpropagation is to minimize cross-entropy error (i.e., the loss function is changed to use the cross-entropy error instead of the squared error).

²Another popular choice of activation function for such a network is tanh, the hyperbolic tangent function. Deep nets often use ReLU (Rectified Linear Units) in the hidden nodes.

NeuralNetCK: This neural net is for classification with K > 2 classes. There are K output nodes, and ŷ_i is the predicted value of P[Class i | x]. In this case, each output ŷ_i = a_i^(n_l) is equal to

    ŷ_i = e^(z_i) / Σ_{j=1}^{K} e^(z_j)

(where z_j is the z score for the j’th neuron of the output layer). Note that the sum of the ŷ_i is 1, which is appropriate since the ŷ_i are the estimated probabilities for the K labels of x. (In fact, the denominator in the expression for ŷ_i is just a “normalizing factor” that causes the sum of the ŷ_i to equal 1.) The error function is the generalization of cross-entropy to K classes: − Σ_{i=1}^{K} y_i log ŷ_i, where y_i = 1 if x is in Class i, and y_i = 0 otherwise.
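As a quick numerical illustration of the NeuralNetCK output layer (a sketch only; the z scores below are made-up numbers, not from any particular network):

import numpy as np

z = np.array([2.0, 1.0, 0.1])          # made-up z scores for K = 3 output nodes
y_hat = np.exp(z) / np.exp(z).sum()    # softmax: y_hat_i = e^(z_i) / sum_j e^(z_j)
print(y_hat, y_hat.sum())              # the normalizing factor makes the sum equal 1

y = np.array([1.0, 0.0, 0.0])          # one-hot label: x is in Class 1
print(-np.sum(y * np.log(y_hat)))      # K-class cross-entropy error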

(a) Which of the above neural nets can output values that are negative numbers?

(b) Which of the above neural nets ensures that outputs y1, . . . , yK will satisfy Σ_{i=1}^{K} y_i = 1?

(c) Consider a simple image classification problem. Each image is a 10x10 array of pixels, and all pixel values are between 0 and 1. These pixel arrays are treated as example vectors of length 100, with one attribute per pixel. There are three categories: face, cat, and tree. These are represented by 3 output values, y1, y2, and y3, where y1 is 1 or 0 depending on whether or not the image is a face, y2 is 1 or 0 depending on whether it is a cat, and y3 is 1 or 0 depending on whether it is a tree.

Suppose you want to use a neural net that will take an input image, and output three values p1,p2,p3, where p1 is the probability the image is a face, p2 that it is a cat, and p3 that it is a tree.

Which of the above neural nets would be appropriate for the image classification problem and why?

(d) Consider the following text classification problem. Each example is a document, represented by a binary vector of length n. Each attribute corresponds to a word, and the value is 1 or 0 depending on whether the word appears in the document. There are two outputs. The first is 1 or 0 depending on whether or not the document is about politics. The second is 1 or 0 depending on whether it is written in a formal or informal style.

Which of the above neural nets would be appropriate for the text classification problem and why?

5. Consider a classification problem where examples correspond to mushrooms, and the task is to determine whether the mushroom is edible (1) or poisonous (0).³ Suppose one of the attributes is “odor” and it has the following 4 values:

• almond
• anise
• creosote
• fishy

There are actually two types of categorical variables (attributes): nominal and ordinal. The odor attribute is a nominal attribute, meaning the values (almond, anise, creosote, fishy) are not ordered.

The values of ordinal variables are ordered: e.g. if the values are low, medium, high, we would say that low < medium < high.

(Note: Sometimes in Machine Learning people talk about “nominal attributes” when they really mean “categorical attributes”.)

(a) In order to use a neural net on this dataset, we need to decide how to convert the categorical values to numerical values. The obvious way to do this is to just assign a number to each value: almond (1), anise (2), creosote (3), fishy (4). This approach (directly converting attribute values to numbers) is sometimes called “label encoding”.

This is NOT a good way to convert nominal values to numerical values, for use in a neural net. However, it would be fine to do this if we were using a random forest, rather than a neural net. Why?

(b) When converting nominal values to numerical values for neural nets (and a number of other learning methods), the following method is often used instead. It is often called one-hot encoding. In this method, for each possible value v of an attribute, we create a new binary-valued attribute whose value is 1 if the original attribute equals v, and 0 otherwise. We use these new attributes in place of the original attribute.

For example, we would replace the odor attribute above by 4 attributes we’ll call z1, z2, z3, z4, where z1 = 1 iff odor = almond, z2 = 1 iff odor = anise, z3 = 1 iff odor = creosote, and z4 = 1 iff odor = fishy. If odor was the only attribute, and the original dataset was as follows:

        odor       label
x(1)    anise      0
x(2)    creosote   1

then using one-hot encoding to convert the input attribute, the transformed dataset would be as follows:

        z1   z2   z3   z4   label
x(1)    0    1    0    0    0
x(2)    0    0    1    0    1
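As an aside, here is a minimal sketch of how this one-hot encoding could be produced with pandas (using pandas is just one option, not something the assignment requires; the column and value names are taken from the example above):

import pandas as pd

df = pd.DataFrame({"odor": ["anise", "creosote"], "label": [0, 1]})

# Declare all four possible odor values so that a column is created for each,
# even for values that do not appear in this tiny dataset.
df["odor"] = pd.Categorical(df["odor"],
                            categories=["almond", "anise", "creosote", "fishy"])
encoded = pd.get_dummies(df, columns=["odor"], dtype=int)
print(encoded)   # columns: label, odor_almond, odor_anise, odor_creosote, odor_fishy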

³This question is inspired by the well-known mushroom dataset, https://archive.ics.uci.edu/ml/datasets/mushroom, which was used in many Machine Learning experiments.


i. Now suppose that the original dataset actually has two attributes, the odor attribute just described, and another attribute called stalk shape. The stalk shape attribute has two possible values, tapering or enlarging.

What is the transformed dataset, if you apply one-hot encoding to the attributes in the following dataset?

        odor       stalk shape   label
x(1)    fishy      tapering      0
x(2)    creosote   enlarging     0

ii. Consider an ordinal attribute whose values are low, medium, and high. It might be better to represent these values as 1, 2, and 3 rather than using one-hot encoding. Why?

iii. The stalk shape attribute has only two values. For nominal attributes with only two values, it’s generally fine to just represent the two values as 0 and 1 (or -1 and +1), rather than using one-hot encoding as described above. Why is this the case for attributes with 2 values, but not for attributes with more than two values?

(Note: If the label values are categorical, we also need to convert them to numerical values for use in a neural net. Above, we converted “edible” and “poisonous” into 1 and 0.)

Part II: Programming Exercise

6. Modify the neural network implementation we discussed in class to see if you can improve the performance on the MNIST dataset by trying the following:

(a) Add a regularization term to the cost function, so that the gradient used in the weight updates becomes

    ∂J(W,b)/∂W_ij^(l) = (1/n) [ Σ_{i=1}^{n} ∂J(W, b, x^(i), y^(i)) / ∂W_ij^(l) ] + λ W_ij^(l)

where (x^(i), y^(i)) is the ith training example.

(b) Try using the ReLU activation function, f(z) = max(0, z). You will notice it is not differentiable at 0, but you can use: f′(z) = 0 if z < 0 and f′(z) = 1 if z ≥ 0. (You can also try using the leaky ReLU activation function.) For more information see https://www.kaggle.com/dansbecker/rectified-linear-units-relu-in-deep-learning

(c) Try using the tanh activation function, f(z) = (e^z − e^(−z)) / (e^z + e^(−z)). The derivative of tanh is f′(z) = 1 − (f(z))^2. For more information see http://ufldl.stanford.edu/tutorial/supervised/MultiLayerNeuralNetworks/

(d) Try using the ELU (exponential linear unit) activation function: ELU_α(z) = α(exp(z) − 1) if z < 0, and ELU_α(z) = z if z ≥ 0. (A sketch of these activation functions and their derivatives appears after this list.)

(e) Try the different weight initializations given in the lecture notes

(f) Experiment on your own trying different hyper-parameters (e.g. number of iterations, number of hidden layers)

(g) Report your findings in a chart that contains the following columns:
• accuracy
• activation function
• alpha (i.e. learning rate)
• initialization of weights (see chart in lecture notes)
• lambda

Expect some surprising results. State what performed the best.
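For parts (a)-(d), here is a minimal sketch of the formulas above written as plain Python/NumPy functions. The function names and the default α = 1.0 are just illustrative, the ELU derivative is not given in the homework (the standard one is used here), and you would plug these into whatever interface the class implementation uses.

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_prime(z):
    # part (b): use derivative 0 for z < 0 and 1 for z >= 0
    return np.where(z < 0, 0.0, 1.0)

def tanh(z):
    return np.tanh(z)            # (e^z - e^-z) / (e^z + e^-z)

def tanh_prime(z):
    return 1.0 - np.tanh(z)**2   # 1 - (f(z))^2

def elu(z, alpha=1.0):
    # part (d): alpha*(exp(z) - 1) for z < 0, z for z >= 0
    return np.where(z < 0, alpha * (np.exp(z) - 1.0), z)

def elu_prime(z, alpha=1.0):
    # standard ELU derivative (not given in the homework): alpha*e^z for z < 0, 1 for z >= 0
    return np.where(z < 0, alpha * np.exp(z), 1.0)

def regularized_grad(grad_sum, W_l, n, lam):
    # part (a): (1/n) * (sum of per-example gradients) + lambda * W^(l)
    return grad_sum / n + lam * W_l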




 

 
