
FIT3152 Data analytics – 2023: Assignment 3

Your task

● The objective of this assignment is to gain familiarity with Natural Language Processing and network analysis using R.

● This is an individual assignment.

Value

● This assignment is worth 20% of your total marks for the unit.

● It has 34 marks in total.

Suggested Length

● 8 – 10 A4 pages (for your report), plus extra pages as an appendix for your R script.

● Font size 11 or 12pt, single spacing.

Due Date

● 11.55pm Friday 9th June 2023

Generative AI Use

● In this assessment, you must not use generative artificial intelligence (AI) to generate any materials or content in relation to the assessment task.

Submission

You will submit 3 files:

• Submit your report as a single PDF file.

• Submit your corpus as either a zipped folder or csv file on Moodle.

• Submit your video file as an mp4, m4v, or similar format.

• Use the naming convention: FirstnameSecondnameID.{pdf, zip, csv, mp4}

• Turnitin will be used for similarity checking of all written submissions.

Late Penalties

● 10% (3 marks) deduction per calendar day, for up to one week.

● Submissions more than 7 calendar days after the due date will receive a mark of zero (0) and no assessment feedback will be provided.

 Instructions and data

In this assignment, you will create a corpus of documents and analyse the relationships between them, as well as the relationships between the important words used in these documents.

Background material for this assignment is contained in Lectures 10, 11, and 12. You are free to consult any other references, including those listed at the end of the document.

There are two options for compiling your written report:

(1) You can create your report using any word processor, with your R code pasted in as machine-readable text as an appendix, and save it as a PDF, or

(2) As an R Markdown document that contains the R code with the discussion/text interleaved. Render this as an HTML file and save it as a PDF.

Your video report should be less than 100MB in size. You may need to reduce the resolution of your original recording to achieve this. Use a standard file format such as .mp4 or .mov for submission.


Tasks

1. Collect a set of (machine-readable text) documents from an area of interest. For example, these could be a set of news stories, movie reviews, blogs, or factual or creative writing. There is no restriction on the type of material you can choose, although please avoid texts that might be offensive to people. As a guide, you should aim for the following:

• Each document should be at least 100 words in length, and you should collect at least 15 documents. Include at least 3 different topic areas in your collection of documents.

• You can collect the documents as PDFs, as copied text from web-based articles, or as text or other files.

• Reference the source of your documents (URL or bibliographic citation in APA or Harvard style). (1 Mark)

2. Create your corpus by first converting each document into a text format. The type of original material you collect will determine the way you need to do this. For some formats you can simply copy and paste the text into an empty text file. For Word documents, HTML, and PDFs etc., you may find it simpler to create the text document using “export”, or “save as” functions in software.

• Describe the process you follow for this step in your report.

• Create your corpus using one of the methods covered in lectures and tutorials. This could either be a folder of text files or a suitably formatted CSV file. Use suitable identifiers for your text file names or document IDs so that you can recognize the document in your clustering or network graphs (one possible approach is sketched after this task). (3 Marks)
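For illustration only (this is not a prescribed method), a minimal sketch of building a corpus from a folder of text files with the tm package; the folder name corpus_txt is an assumption:

# Illustrative sketch: build a corpus from a folder of .txt files using tm.
library(tm)
docs <- Corpus(DirSource("corpus_txt", encoding = "UTF-8"))
# The file names become the document identifiers, so name the files so that
# each document is recognisable in later dendrograms and network graphs.
summary(docs)

A CSV-based corpus could instead be read with read.csv() and passed to Corpus(VectorSource(...)), keeping a column of document IDs for labelling.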

3. Follow the text processing steps covered in lectures and tutorials to create your Document-Term Matrix (DTM).

• As part of this process, you may need to make particular text transformations to either preserve key words, or to remove unwanted terms, for example, characters or artefacts from the original formatting. Describe any processing of this kind in your report or state why you did not need to do so.

• Ideally your DTM should contain approximately 20 tokens after you have removed sparse terms. You will need to adjust the sparsity threshold by trial and error to get the right number of tokens (one possible sequence of transformations is sketched after this task).

• Include your DTM as a table as an appendix to your report. (3 Marks)
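As an illustration of the processing involved here (a sketch only, continuing from the corpus object docs above and assuming the tm and SnowballC packages; the transformations you actually need will depend on your own documents):

# Illustrative sketch: typical tm transformations before building a DTM.
library(tm)
library(SnowballC)                                   # used by stemDocument
docs <- tm_map(docs, content_transformer(tolower))
docs <- tm_map(docs, removeNumbers)
docs <- tm_map(docs, removePunctuation)
docs <- tm_map(docs, removeWords, stopwords("english"))
docs <- tm_map(docs, stripWhitespace)
docs <- tm_map(docs, stemDocument)
dtm <- DocumentTermMatrix(docs)
# The sparsity threshold (0.35 here) is an assumption; adjust it by trial and
# error until roughly 20 terms remain.
dtm <- removeSparseTerms(dtm, 0.35)
dim(dtm)                                             # documents x remaining terms
as.matrix(dtm)                                       # table for the report appendix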

4. Create a hierarchical clustering of your corpus and show this as a dendrogram.

• You may use the method covered in lectures, although extra marks will be given for a clustering based on Cosine Distance (one way to compute this is sketched after this task).

• Describe the quality of the clustering obtained by either conventional clustering and/or using Cosine Distance. For example, does the clustering reflect the variety of topics you identified when you collected the documents?

• Give a quantitative measure of the quality of the clustering. (5 Marks)
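A sketch of one way to compute a cosine-distance clustering, assuming the document-term matrix dtm from the sketch above; the linkage method and the number of groups below are assumptions to adapt to your own corpus:

# Illustrative sketch: hierarchical clustering on cosine distance.
m <- as.matrix(dtm)
row_norms   <- sqrt(rowSums(m^2))
cosine_sim  <- (m %*% t(m)) / (row_norms %*% t(row_norms))   # similarity between document vectors
cosine_dist <- as.dist(1 - cosine_sim)                        # distance = 1 - similarity
fit <- hclust(cosine_dist, method = "ward.D2")
plot(fit, hang = -1, main = "Document clustering (cosine distance)")
# One quantitative check: cut the tree into as many groups as you have topics
# (3 is an assumption) and tabulate the result against the known topic labels.
groups <- cutree(fit, k = 3)
table(groups)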

5. Create a single-mode network showing the connections between the documents based on the number of shared terms.

• To do this you will need to first calculate the strength of the connections between each pair of documents using the method shown in Lecture 12, or another method of your choice (one such calculation is sketched after this task).


• What does this graph tell you about the relationship between the documents? Are there clear groups in the data? What are the most important (central) documents in the network?

• Improve your graph over the basic example given in Lecture 12 to more clearly show the interesting features of your data, such as the strength of connections, the relative importance of nodes, and communities in the network. (4 Marks)
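A sketch of one such calculation, assuming the matrix m from the clustering sketch and the igraph package; the binary weighting and the community-detection algorithm below are choices made for illustration, not the prescribed Lecture 12 method:

# Illustrative sketch: single-mode document network based on shared terms.
library(igraph)
m_bin <- m
m_bin[m_bin > 0] <- 1                  # 1 if a document contains a term, 0 otherwise
doc_adj <- m_bin %*% t(m_bin)          # (i, j) = number of terms shared by documents i and j
diag(doc_adj) <- 0                     # remove self-loops
g_docs <- graph_from_adjacency_matrix(doc_adj, mode = "undirected", weighted = TRUE)
# Centrality suggests the most important documents; community detection and
# scaled edge widths help show groups and connection strength.
sort(strength(g_docs), decreasing = TRUE)
comm <- cluster_fast_greedy(g_docs)
plot(comm, g_docs, vertex.label.cex = 0.8,
     edge.width = 3 * E(g_docs)$weight / max(E(g_docs)$weight))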

6. Repeat all the activities in Question 5, but now looking at the words (tokens). (4 Marks)
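For the token network, the same construction applies with the matrix transposed (a sketch continuing from the one above):

# Illustrative sketch: single-mode token network; entry (i, j) counts the
# documents in which tokens i and j appear together.
term_adj <- t(m_bin) %*% m_bin
diag(term_adj) <- 0
g_terms <- graph_from_adjacency_matrix(term_adj, mode = "undirected", weighted = TRUE)
sort(strength(g_terms), decreasing = TRUE)   # the most central tokens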

7. Create a bipartite (two-mode) network of your corpus, with document ID as one type of node and tokens as the other type of node (one possible construction is sketched after this task).

• To do this you will need to transform your data into a suitable format.

• What does this graph tell you about the relationship between words and documents? Are there clear groups in the data?

• Improve your graph over the basic example given in Lecture 12 to more clearly show the interesting features of your data, such as the strength of connections, the relative importance of nodes, and communities in the network. (4 Marks)
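One possible construction, sketched under the assumption that the document-term matrix m from the earlier sketches is available; igraph treats the rows and columns of an incidence matrix as the two node types:

# Illustrative sketch: bipartite document-token network from the DTM.
g_bip <- graph_from_incidence_matrix(m, weighted = TRUE)
# V(g_bip)$type is FALSE for documents (rows) and TRUE for tokens (columns);
# use it to give the two node types different shapes and colours.
V(g_bip)$shape <- ifelse(V(g_bip)$type, "circle", "square")
V(g_bip)$color <- ifelse(V(g_bip)$type, "lightblue", "salmon")
plot(g_bip, layout = layout_as_bipartite(g_bip), vertex.label.cex = 0.7,
     edge.width = 2 * E(g_bip)$weight / max(E(g_bip)$weight))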

8. Write a brief report (suggested length 8 – 10 pages).

• Briefly summarise your results, identifying important documents, tokens, and groups within the corpus. Comment on the relative effectiveness of clustering compared with social network analysis for identifying important groups and relationships in the data.

• Include your R script as an appendix. Use commenting in your R script, where appropriate, to help a reader understand your code. Alternatively, combine working, comments, and reporting in R Markdown. (6 Marks)

9. Record a short presentation using your smart phone, Zoom, or a similar method. Your presentation should be approximately 5 minutes in length and summarise your main findings, as well as describe how you conducted your research and any assumptions made. Place particular emphasis on your results for the investigative tasks. (Submission Hurdle and 4 Marks)

Software

It is expected that you will use R for your data analysis, graphics, and tables. You are free to use any R packages you need, but please list these in your report and include them in your R code.

References

Kolaczyk, E. D., & Csárdi, G. (2014). Statistical Analysis of Network Data with R. Springer. Chapters 1 – 4.

Luke, D. A. (2015). A User's Guide to Network Analysis in R. Springer.

Network Visualization with R, PolNet 2018 Workshop. https://kateto.net/

Murphy, P., & Knapp, B. Bipartite/Two-Mode Networks in igraph. https://rpubs.com/

 
