Assignment 2
Deep Learning
A. Nagar
CS 6301
Instructions
• This assignment requires you to use two different techniques for image classifica-
tion. The first one should be R-based and not utilize any pre-built deep learning
library, and the second technique can be using any deep learning library of your
choice.
• You should store your dataset in a public location, e.g., a UTD server or AWS S3,
and your code should run locally or in the cloud. Do not submit the dataset
(which could be quite large) on eLearning.
• You are allowed to work in teams of at most four students. Please write the
names and NetIDs of each group member on the cover page.
Only 1 final submission per team.
• You have a total of 4 free late days for the entire semester. You can use
at most 2 days for any one assignment. After four days have been used
up, there will be a penalty of 10% for each late day. The submission
for this assignment will be closed 2 days after the due date.
• Please ask all questions on Piazza, not via email.
This project involves using deep learning for image classification. You will use two different
methods and compare their performance and results on the same dataset. The methods to
be used are as follows:
1. As the first method, you will use an R-based technique that does not rely on a pre-built
deep learning library or API, such as Keras, PyTorch, Theano, or TensorFlow, for
image classification (a sketch of this approach follows this list). Some allowed techniques are:
• R-based API for MXNet: https://mxnet.apache.org/api/r
• R-based API for H2O:
https://docs.h2o.ai/h2o-tutorials/latest-stable/tutorials/deeplearning/index.html
• The neuralnet package for R
• The imgrec package for R
https://cran.r-project.org/web/packages/imgrec/vignettes/intro.html
• Any other R-based non-TensorFlow package
2. For the second method, you are free to use any pre-built deep learning library with an
API of your choice. You can use Google Colab and simply provide a public link to your
notebook. Note that you cannot use Convolutional Neural Networks for this
part.
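To make the first method concrete, here is a minimal sketch of an R-based classifier using
H2O's deep learning R API (one of the allowed options above). It assumes flattened pixel
matrices x_train/x_test and integer label vectors y_train/y_test have already been prepared
(see the loading and sampling sketches later in this handout); the layer sizes, activation,
and epoch count are illustrative, not prescribed.

    library(h2o)
    h2o.init()

    # Assumed inputs: x_train / x_test are numeric matrices of flattened pixels,
    # y_train / y_test are integer class labels (all names are illustrative).
    train <- as.h2o(data.frame(x_train, label = as.factor(y_train)))
    test  <- as.h2o(data.frame(x_test,  label = as.factor(y_test)))

    model <- h2o.deeplearning(
      x = setdiff(names(train), "label"),   # all pixel columns as predictors
      y = "label",
      training_frame   = train,
      validation_frame = test,
      hidden     = c(256, 128),             # two fully connected hidden layers
      activation = "Rectifier",
      epochs     = 20
    )

    h2o.performance(model, newdata = test)  # test-set confusion matrix and error rates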
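For the second method, one possibility among many is a fully connected network built with
the keras package for R, which respects the no-CNN restriction; an equivalent Python
notebook on Google Colab works just as well. The architecture, one-hot encoding with 10
classes, and training settings below are illustrative assumptions, and x_train, y_train,
x_test, y_test are the same assumed inputs as in the previous sketch.

    library(keras)

    # One-hot encode the integer class labels (10 classes assumed).
    y_train_oh <- to_categorical(y_train, num_classes = 10)
    y_test_oh  <- to_categorical(y_test,  num_classes = 10)

    # Fully connected (dense) layers only -- no convolutional layers.
    model <- keras_model_sequential() %>%
      layer_dense(units = 512, activation = "relu", input_shape = ncol(x_train)) %>%
      layer_dropout(rate = 0.3) %>%
      layer_dense(units = 128, activation = "relu") %>%
      layer_dense(units = 10, activation = "softmax")

    model %>% compile(
      loss      = "categorical_crossentropy",
      optimizer = optimizer_adam(),
      metrics   = "accuracy"
    )

    history <- model %>% fit(
      x_train, y_train_oh,
      epochs = 30, batch_size = 128,
      validation_data = list(x_test, y_test_oh)
    )

    plot(history)   # accuracy/loss history plots asked for in the Submission section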
Dataset Selection
First of all, note that you cannot use the MNIST, Fashion MNIST, or any toy
dataset available as part of TensorFlow. You also cannot use any dataset whose
solution is publicly available. That rules out most Kaggle datasets, whose
attempted solutions can be found online.
Following are some of the possible choices:
• CIFAR-10 - Object Recognition in Images (see the loading sketch after this list)
http://www.cs.utoronto.ca/~kriz/cifar.html
• Any image based dataset from UCI ML repository
https://archive.ics.uci.edu/ml/index.php
• Any image based dataset extracted from Google research datasets
https://datasetsearch.research.google.com
• Any image based dataset available from Amazon’s dataset repository
https://registry.opendata.aws
• Any image based dataset from Microsoft Research Open Datasets
https://msropendata.com
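As an example of getting image data into R, the sketch below reads one batch of the
CIFAR-10 binary version linked above, in which each record is 1 label byte followed by
3072 pixel bytes (red, green, then blue planes of a 32x32 image). The file path is
illustrative, and other datasets will need their own readers.

    # Minimal sketch for the CIFAR-10 binary format; the path is illustrative.
    read_cifar10_batch <- function(path, n_images = 10000) {
      con <- file(path, "rb")
      raw <- readBin(con, what = "integer", n = n_images * 3073,
                     size = 1, signed = FALSE)
      close(con)
      records <- matrix(raw, ncol = 3073, byrow = TRUE)
      list(
        labels = records[, 1],        # class index 0-9 (first byte of each record)
        pixels = records[, -1] / 255  # 3072 pixel values scaled to [0, 1]
      )
    }

    batch1 <- read_cifar10_batch("data/cifar-10-batches-bin/data_batch_1.bin")
    labels <- batch1$labels
    pixels <- batch1$pixels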
Requirements
You need to download an acceptable dataset from the choices listed in the previous section.
Some additional requirements are:
1. Please follow the specific requirements for the two techniques. If you are unsure, please
ask the instructor.
2. Please do not hard-code any paths to your local computer. You can refer to public
paths under your AWS S3 account or UTD web account.
3. If the dataset is very large and you cannot load all of it, you are free to work on a smaller,
uniformly sampled subset of the entire dataset. Uniformly sampled means that the
class distribution in the sample matches that of the entire dataset (see the sketch after
this list).
4. If the dataset doesn't come with separate training and test parts, you are free to divide
the data into these two parts yourself. The split ratio is up to you.
5. If you are unsure, please ask the instructor.
6. You need to tune as many parameters as possible. Keep a log of your experiments
recording the parameters used and the accuracy and loss obtained; details are given in
the Submission section below.
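The sketch below illustrates requirements 3 and 4: a uniform (class-stratified) subsample
followed by a stratified train/test split. The labels and pixels objects are assumed to come
from a loader such as the CIFAR-10 sketch above, and the 25% subset and 80/20 split are
example choices only.

    # Assumed inputs: `labels` (integer class labels) and `pixels` (matrix of
    # flattened images); the fractions and the seed are illustrative.
    set.seed(42)

    stratified_sample <- function(labels, fraction) {
      # Keep `fraction` of the indices within every class, so the class
      # distribution of the sample matches that of the full dataset.
      unlist(lapply(split(seq_along(labels), labels), function(idx) {
        sample(idx, size = max(1, round(length(idx) * fraction)))
      }))
    }

    keep <- stratified_sample(labels, fraction = 0.25)   # smaller working subset
    x <- pixels[keep, ]
    y <- labels[keep]

    train_idx <- stratified_sample(y, fraction = 0.8)    # 80/20 train/test split
    x_train <- x[train_idx, ];  y_train <- y[train_idx]
    x_test  <- x[-train_idx, ]; y_test  <- y[-train_idx]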
Submission
You need to submit at least the following:
1. R code for first method indicating which libraries you used and how to run your code.
2. For the second method, you can submit a public link to your Google Colab notebook
or include the code.
3. History plots showing training and test accuracy and loss as a function of the number
of iterations. Most deep learning frameworks generate this history automatically.
4. A table containing details of parameter testing and tuning. Example:
Iteration   Parameters                               Training and Test Accuracy
1           Number of layers = ..................    Train = 80% and Test = 78%
            Kernel Size Layer 1 = ................
            Activation Function = ................
.....       ......                                   ........
5. Examples of at least 5 images from the test dataset, showing the following:
• image
• true label
• predicted label
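One way to produce item 5, assuming 32x32 RGB images flattened as in the CIFAR-10
loading sketch, the keras model from the second-method sketch, and a character vector
class_names mapping class indices to names (all of these are illustrative assumptions):

    # Plot a few test images with their true and predicted labels.
    show_prediction <- function(pixels_row, true_label, predicted_label) {
      img <- array(pixels_row, dim = c(32, 32, 3))   # R, G, B planes
      img <- aperm(img, c(2, 1, 3))                  # reorder to [row, col, channel]
      plot(0:1, 0:1, type = "n", axes = FALSE, xlab = "", ylab = "",
           main = sprintf("true: %s / predicted: %s", true_label, predicted_label))
      rasterImage(as.raster(img), 0, 0, 1, 1)
    }

    probs      <- predict(model, x_test)   # class probabilities from the keras model
    pred_class <- max.col(probs) - 1       # 0-based predicted class index

    par(mfrow = c(1, 5))
    for (i in 1:5) {
      show_prediction(x_test[i, ],
                      class_names[y_test[i] + 1],
                      class_names[pred_class[i] + 1])
    }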
If you have made any assumptions, please state them completely. Also include instructions
on how to compile and run your code in a README file.