# COMP5911M Coursework 2
These tasks are concerned with computing software metrics. There are a total of 40 marks available.
## Preparation

Task 1 of this assignment uses the Car Rental example from the lectures and exercises. You will need to analyze
the initial, unrefactored version of the code, as provided for Exercise 2. You will then do the same analysis
for the code after you have applied all of the refactorings from Exercise 4.
Hence it is important that you complete Exercise 4 before attempting this assignment!
## Task 1
1. Compute the following metrics for the original, unrefactored version of the Car Rental code (i.e., the
code provided as part of Exercise 2):
• Number of classes in the package
• Total SLOC (Source Lines of Code) in the package
• Average number of methods per class (computed across all classes in the package)
• Average & maximum method complexity for Car, Rental and Customer separately
• Instability for Car, Rental and Customer separately
• Abstractness of the package
A total of 10 marks are available for these calculations.
2. Compute the same metrics for the Car Rental code after the refactoring of Exercise 4 has been
completed. Again, these calculations are worth 10 marks.
3. Compare the values of the metrics before and after refactoring. Relate this to what has been achieved
by the refactoring. Do the metrics tell us anything useful about how the software has changed? Are
they misleading in any way? [4 marks]
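For the instability and abstractness items in the list above, the standard Martin definitions are I = Ce / (Ca + Ce) and A = Na / Nc; the sketch below assumes these match the definitions used in the lectures, and the numbers in the worked example are purely illustrative:

```python
# Hedged sketch of the standard Martin metrics, assuming lecture notation:
#   Instability  I = Ce / (Ca + Ce)   Ce = efferent (outgoing) couplings,
#                                     Ca = afferent (incoming) couplings
#   Abstractness A = Na / Nc          Na = abstract classes, Nc = total classes

def instability(ca: int, ce: int) -> float:
    """0.0 = maximally stable, 1.0 = maximally unstable."""
    return ce / (ca + ce) if (ca + ce) else 0.0

def abstractness(num_abstract: int, num_classes: int) -> float:
    """Fraction of classes in the package that are abstract (or interfaces)."""
    return num_abstract / num_classes if num_classes else 0.0

# Illustrative only: a class used by 3 others (Ca=3) that depends on 1 (Ce=1),
# in a package of 4 classes with none abstract. Results quoted to 2 d.p.,
# as the brief requires.
print(round(instability(3, 1), 2))   # 0.25
print(round(abstractness(0, 4), 2))  # 0.0
```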
• Unless otherwise stated, metrics should be computed by hand, not using a tool. If you actually have
to calculate anything, rather than simply counting, make sure that you show your working, as there
are marks awarded for this as well as the final result.
• SLOC is defined as the number of non-blank, non-comment lines. You can compute this manually or
using a tool such as David Wheeler’s sloccount (https://dwheeler.com/sloccount/).
• Use McConnell’s simplified approach to compute complexity.
• Do not include unit testing code in your calculations.
• Do not use more than two decimal places when quoting non-integer results.
• If the metrics you compute for the refactored code differ from our values, you’ll still receive the marks
provided it is clear that you’ve done the calculation correctly for your code. For us to check this, we
will need to see your refactored code in your repository on gitlab.com.
If we are unable to check your calculation of a metric because the code isn’t visible in GitLab,
you won’t get any marks for that calculation.
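The SLOC definition above (non-blank, non-comment lines) can be sketched as a small counter. This is an illustrative sketch only, handling Java-style `//` and `/* ... */` comments but not comment markers inside string literals; it is not a substitute for a tool like sloccount:

```python
# Count non-blank, non-comment lines in Java-like source (sketch only).
def sloc(source: str) -> int:
    count = 0
    in_block = False  # inside a /* ... */ block comment?
    for line in source.splitlines():
        stripped = line.strip()
        if in_block:
            if "*/" not in stripped:
                continue
            in_block = False
            stripped = stripped.split("*/", 1)[1].strip()
        if not stripped or stripped.startswith("//"):
            continue
        if "/*" in stripped and "*/" not in stripped:
            in_block = True
            stripped = stripped.split("/*", 1)[0].strip()
        if stripped:
            count += 1
    return count

example = """\
// header comment
public class Car {
    /* multi-line
       comment */
    private int id;
}
"""
print(sloc(example))  # 3
```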
## Task 2
Develop a software tool that can calculate one of the following metrics:
• SLOC (source lines of code) for each individual class in a given package¹, and a total for all classes
in that package [6 marks]
• Average & maximum McConnell complexity for each class in a given package [10 marks]
• Abstractness and instability for a given package of classes, within a collection of packages making up
a larger software application [16 marks]
You can develop your tool in any sensible modern programming language. If you are unsure whether your
choice would be suitable, please speak to Nick. Development should be managed in your GitLab repository,
under a directory named cwk2.
Your tool can analyze code written in any sensible modern programming language. Note: this does not need
to be the same as the language used to implement the tool! For example, you could write a tool in Python
that analyzes code written in Java if you wanted to.
Your submission should include a README file giving instructions on how to compile and run your tool.
We would prefer it if these tasks could be carried out via the command line. We will attempt to follow your
instructions when we mark your work.
If, for some reason, it is not practical for us to do this, we may require you to submit a short video
showing the tool being compiled and executed, so please be ready to produce such a video if asked.
A maximum of 16 marks are available for this task. The mark awarded will depend on the level of challenge
posed by the chosen metric (see above), the sophistication of the approach you have used, the quality of your
implementation, and evidence provided of a development process (via commits in your repository).
Think carefully about an approach that would be effective in solving the problem. You might find that
regular expressions are a useful tool. Most modern languages provide good support for using them.
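As an illustration of the regex route, a naive McConnell-style decision count for a single Java-like method body might look like the sketch below. The keyword list and the treatment of `&&`/`||` are assumptions based on McConnell's simplified approach; a real tool would also need to avoid matching keywords inside strings and comments:

```python
import re

# Start at 1 for the straight-line path, then add 1 for each decision point:
# if / while / for / case, and each short-circuit boolean operator (&& or ||).
DECISIONS = re.compile(r"\b(if|while|for|case)\b|&&|\|\|")

def mcconnell_complexity(method_body: str) -> int:
    return 1 + len(DECISIONS.findall(method_body))

# Hypothetical method body, loosely in the style of the Car Rental code:
body = """
if (days > 7 && car.isPremium()) {
    for (Rental r : rentals) {
        if (r.isOverdue()) total += FINE;
    }
}
"""
print(mcconnell_complexity(body))  # 5  (1 + if, &&, for, if)
```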
More sophisticated approaches might make use of the reflection capabilities of your chosen implementation
language. Java, for example, has a powerful reflection API allowing a Java program to analyze the
characteristics of other Java code. C# and Python have similar capabilities.
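As a sketch of the reflection route in Python, the `inspect` module can enumerate a module's classes and count their methods. One caveat the paragraph above implies: reflection only works when the analyzed code is written in, and loadable by, the tool's own language. The `demo` module here is built inline purely for illustration:

```python
import inspect
import types

def methods_per_class(module) -> dict:
    """Map each class defined in `module` to its number of methods."""
    counts = {}
    for name, cls in inspect.getmembers(module, inspect.isclass):
        if cls.__module__ == module.__name__:  # skip imported classes
            counts[name] = len(inspect.getmembers(cls, inspect.isfunction))
    return counts

# Build a throwaway module to demonstrate; a real tool would import the
# target package with importlib instead.
demo = types.ModuleType("demo")
exec(
    "class Car:\n"
    "    def start(self): pass\n"
    "    def stop(self): pass\n"
    "class Rental:\n"
    "    def cost(self): pass\n",
    demo.__dict__,
)
print(methods_per_class(demo))  # {'Car': 2, 'Rental': 1}
```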
Another possibility, if you are feeling ambitious enough, would be to use a dedicated parser for the language
that your tool analyzes. If following this approach, note that you would definitely NOT be expected to
implement such a parser from scratch; instead, we would expect to see you using a parser generator tool
such as ANTLR to produce the parser code.
If you have questions about any of this, please ask them in Microsoft Teams. (Use the Coursework channel
of the ‘COMP5911M Adv Software Engineering’ Team.)
## Submission

Your answers to Task 1 should be submitted via the Gradescope Dashboard link in the Submit My Work folder in Minerva.
For Task 2, we expect to see code in your GitLab repository, with a history of commits providing evidence
that you have developed the code yourself. Please make sure that all of your commits have been pushed to
GitLab.
Your code for Task 2 should also be provided in a Zip archive named cwk2.zip, which you should submit
via the link provided for this purpose in Minerva.
Note: we will penalise you if you don’t provide your code via both of these methods.
The deadline for submission is 10.00 on 2 December 2021. Late submissions will suffer the standard
university penalty of 5% of the available marks per day unless an extension is approved by the SSO due to
mitigating circumstances.
¹ We use *package* in the same sense as a Java package here. You might also see this described as a ‘module’ or a ‘namespace’.
