
COMP20050

Software Engineering 2
2022 / 2023
Project Handbook

Dr Chris Bleakley

UCD School of Computer Science,


University College Dublin,
Belfield, Dublin 4.
chris.bleakley@ucd.ie

Version 1.1
Introduction
The project consists of five Sprints. These Sprints incrementally build a Java implementation of a board
game.
The following steps should be done before starting to code:
• The project will be done in teams of three.
• Self-select your group at My Class – My Groups on Brightspace. After the group selection expiry date,
any class members who have not self-selected a group will automatically be allocated to a group.

• Set up Git on your computer.
• Set up your GitHub account.
• Set up a GitHub repo for the project. THIS REPO MUST BE PRIVATE. The repo name should include
your Group number allocated on Brightspace (e.g. TheGameGroup1).
• Give the other team members, the Teaching Assistant (user name gillanimaryam) and
the Module Coordinator (user name ChrisBleakley) access to the repo.

• GitHub must be used for source code control for the duration of the project. See the Grading Scheme
below “GitHub not used correctly, up to 2 grade deductions”. Part of the final submission is via GitHub.
Note that the Eclipse IDE is supported in the labs. No other IDE is supported.

Sprint 1: Set Up
Play some Cascadia!!!
The reference rules are at: https://c.tabletopia.com/games/cascadia/rules/cascadia-rules/en
As a group, implement and verify a Java program with the following features.
Use the console for input and output. The commands should be case insensitive. Appropriate error messages
should be displayed for invalid inputs. The user should be prompted to re-enter the information.
A feature that prompts the user to enter the number of players (2-4).
A feature that prompts each player to enter their name. The players' names should be used in later prompts.
A feature that randomly chooses the order in which the players play and informs the users.
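As an illustration, the setup prompts could be handled with a short console loop along the following lines. This is only a sketch; the class and method names (GameSetup, readPlayerCount) are illustrative, not required.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Scanner;

// Minimal sketch of the setup prompts; names are illustrative only.
public class GameSetup {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int count = readPlayerCount(in);
        List<String> players = new ArrayList<>();
        for (int i = 1; i <= count; i++) {
            System.out.print("Enter name of player " + i + ": ");
            players.add(in.nextLine().trim());
        }
        Collections.shuffle(players);   // random turn order
        System.out.println("Turn order: " + String.join(", ", players));
    }

    // Re-prompts until a number between 2 and 4 is entered.
    static int readPlayerCount(Scanner in) {
        while (true) {
            System.out.print("How many players (2-4)? ");
            String line = in.nextLine().trim();
            try {
                int n = Integer.parseInt(line);
                if (n >= 2 && n <= 4) return n;
            } catch (NumberFormatException ignored) {
            }
            System.out.println("Invalid input - please enter a number from 2 to 4.");
        }
    }
}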
[Skip wildlife scoring card selection for now]
A feature that randomly selects a starter habitat tile for each player.
A feature that randomly selects 4 habitat tiles and 4 paired wildlife tokens and displays them.
[Skip culling for now]
A feature that displays the first player’s habitat and prompts the user for command input. An example of an
ASCII Art approach to displaying the habitats is shown overleaf. The wildlife tokens can be displayed as the
colour inverse of the habitat placeholder. For example, the fox token would be a white F on an orange
background, whereas the placeholder is an orange F on a white background. It is OK to use a different display scheme.
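One way to achieve the inverted colours is with ANSI escape codes, for example (a sketch only; the exact colour codes are assumptions and terminals vary):

// Sketch of colour-inverted token display using ANSI escape codes.
// The specific colour codes chosen here are assumptions; terminals vary.
public class AnsiDemo {
    static final String RESET = "\u001B[0m";
    static final String ORANGE_TEXT = "\u001B[38;5;208m";          // fox placeholder: orange F on default background
    static final String WHITE_ON_ORANGE = "\u001B[97;48;5;208m";   // fox token: white F on orange background

    public static void main(String[] args) {
        System.out.println("Placeholder: " + ORANGE_TEXT + "F" + RESET);
        System.out.println("Token:       " + WHITE_ON_ORANGE + "F" + RESET);
    }
}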
A feature whereby a “next” command causes the next player’s habitat to be displayed.
A feature whereby a “quit” command causes termination of the program.
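The command handling itself can be a small case-insensitive loop, for example (a sketch only, assuming Java 14+ switch syntax):

import java.util.Scanner;

// Sketch of a case-insensitive command loop for "next" and "quit".
public class CommandLoop {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        while (true) {
            System.out.print("Enter command (next/quit): ");
            String cmd = in.nextLine().trim().toLowerCase();
            switch (cmd) {
                case "next" -> System.out.println("Showing next player's habitat...");
                case "quit" -> { System.out.println("Goodbye."); return; }
                default     -> System.out.println("Unknown command '" + cmd + "', please try again.");
            }
        }
    }
}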

Sprint 1 is assessed by Design Review during the Lab Session. A Demonstrator will meet with your team to
review progress. The team should demo the program, show the source code, and answer some questions.
The following checklist will be used for grading. Zero marks are given to any team member who is absent
unless evidence of extenuating circumstances is provided. The Design Review must be done in the specified
Lab Session; it cannot be done late. Allowance will be made for the day of the week on which the Design Review
takes place.

Item                                              Marks Available   Comments
Git and GitHub use                                       2          0 = not used, 1 = some commits, 2 = consistent commit history
Project management                                       3          +1 = Kanban board in use, +1 = consistent effective meetings, +1 = effective communication tools
Progress with respect to the requirements above          3          0 = no progress, 3 = all of the features working
Code quality                                             2          0 = poor quality (e.g. all code in one class), 1 = medium quality (e.g. some things are messy), 2 = high quality (easy to read)
TOTAL                                                   10

Figure 1. ASCII Art starter habitat tiles.

Table 1: Terrain tile colour key

Terrain Tile   Colour
Forest         Dark green
Wetland        Light green
River          Blue
Mountain       Grey
Prairie        Yellow

Table 2: Wildlife token colour key

Wildlife Token   Colour
Hawk             Blue
Bear             Brown
Elk              Black
Salmon           Pink / Red
Fox              Orange

Sprint 2: Game Play
As a group, implement and verify the following additional features.
A feature that detects when an automatic cull is required. Notify the user and replace the wildlife tokens.
A feature that detects when the user has the option of a cull. Ask the users whether to cull or not and apply
their decision.
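A possible detection check is sketched below. It assumes the standard Cascadia rule that four identical face-up tokens force an automatic cull and exactly three identical tokens give the players the option of a cull; the WildlifeToken and CullCheck names are illustrative only.

import java.util.Collections;
import java.util.List;

// Sketch of cull detection over the 4 face-up wildlife tokens.
// Assumes the standard Cascadia rules: 4 identical tokens = automatic cull,
// exactly 3 identical tokens = the players may choose to cull.
enum WildlifeToken { BEAR, ELK, SALMON, HAWK, FOX }

class CullCheck {
    static boolean isAutomaticCull(List<WildlifeToken> faceUp) {
        return maxIdentical(faceUp) == 4;
    }

    static boolean isOptionalCull(List<WildlifeToken> faceUp) {
        return maxIdentical(faceUp) == 3;
    }

    private static int maxIdentical(List<WildlifeToken> faceUp) {
        int max = 0;
        for (WildlifeToken t : WildlifeToken.values()) {
            max = Math.max(max, Collections.frequency(faceUp, t));
        }
        return max;
    }
}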
A feature that allows the user to select a habitat tile and wildlife token pair.
A feature that allows the user to rotate the selected habitat tile. A way to do this is to offer a menu of possible
angles.
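For example, the rotation menu could offer the six hexagonal orientations in 60-degree steps (a sketch; HabitatTile and its rotation field are placeholders for your own design):

import java.util.Scanner;

// Sketch of a rotation menu in 60-degree steps (hex tiles have 6 orientations).
// HabitatTile and its rotation field are illustrative placeholders.
class HabitatTile {
    int rotationDegrees = 0;
}

class RotationMenu {
    static void chooseRotation(HabitatTile tile, Scanner in) {
        System.out.println("Choose rotation:");
        for (int option = 0; option < 6; option++) {
            System.out.println("  " + option + ") " + (option * 60) + " degrees");
        }
        int choice = -1;
        while (choice < 0 || choice > 5) {
            System.out.print("Option (0-5): ");
            try {
                choice = Integer.parseInt(in.nextLine().trim());
            } catch (NumberFormatException e) {
                choice = -1;
            }
        }
        tile.rotationDegrees = choice * 60;
    }
}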
A feature that allows the user to place the selected habitat tile on the board. A way to do this is to label the
possible locations with letters or numbers. Placement must follow the rules of the game.
A feature that allows the user to place the selected wildlife token on the board. A way to do this is to label the
possible locations with letters or numbers. Placement must follow the rules of the game.
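Labelling the candidate positions might look like the following sketch; Position, validPlacements and PlacementMenu are hypothetical stand-ins for whatever representation your design uses, and the placement rules of the game still have to be enforced when building the list.

import java.util.List;
import java.util.Scanner;

// Sketch of letting the user pick a placement from a labelled list.
// Position and the validPlacements list are hypothetical stand-ins for your own design.
record Position(int row, int col) { }

class PlacementMenu {
    static Position choosePlacement(List<Position> validPlacements, Scanner in) {
        for (int i = 0; i < validPlacements.size(); i++) {
            System.out.println("  " + (char) ('A' + i) + ") " + validPlacements.get(i));
        }
        while (true) {
            System.out.print("Place where? ");
            String answer = in.nextLine().trim().toUpperCase();
            if (answer.length() == 1) {
                int index = answer.charAt(0) - 'A';
                if (index >= 0 && index < validPlacements.size()) {
                    return validPlacements.get(index);
                }
            }
            System.out.println("Invalid choice, try again.");
        }
    }
}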
A feature that allows the user not to place the token.
A feature that detects that a token cannot be placed, reports this to the user and continues with the game.
A feature that gives the player a nature token if the wildlife token is placed on a keystone tile.
A feature that replaces the selected tile and token in the 4 visible pairs.
A feature that allows the players to take turns playing the game. The “done” command should be removed.
A feature that detects if no more tiles are available and ends the game.

Sprint 2 is assessed by Design Review.


Item                                              Marks Available   Comments
Git and GitHub use                                       2          0 = not used, 1 = some commits, 2 = consistent commit history
Project management                                       3          +1 = Kanban board in use, +1 = consistent effective meetings, +1 = effective communication tools
Progress with respect to the requirements above          3          0 = no progress, 3 = all of the features working
Code quality                                             2          0 = poor quality (e.g. all code in one class), 1 = medium quality (e.g. some things messy), 2 = high quality (easy to read)
TOTAL                                                   10

Sprint 3: Nature Tokens & Scoring
As a group, implement and verify the following additional features.
A feature that allows a player to spend a nature token either to select any habitat tile and wildlife token OR to wipe
any number of wildlife tokens.
A feature that randomly selects and displays 5 wildlife scoring cards at the start of the game. Graphics are not
needed; simply display a textual description of the basic scoring rule (point values are not needed).
A feature that, at the end of the game, calculates and displays the score card for the game, showing the
scores for each player. The calculations should apply all scoring rules.
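Because each wildlife scoring card scores differently, a Strategy-style interface keeps the end-of-game calculation extensible. The sketch below is illustrative only; ScoringCard, PlayerHabitat and the example bear rule are not prescribed names or rules.

// Sketch of a Strategy-style interface for wildlife scoring cards.
// ScoringCard implementations and the PlayerHabitat type are illustrative only.
interface ScoringCard {
    String description();                 // textual rule shown at the start of the game
    int score(PlayerHabitat habitat);     // applied to each player at the end of the game
}

class PlayerHabitat {
    // placeholder for your habitat/board representation
}

// Skeleton for one hypothetical bear card.
class ExampleBearCard implements ScoringCard {
    @Override
    public String description() {
        return "Bears: score for each separate pair of adjacent bears";
    }

    @Override
    public int score(PlayerHabitat habitat) {
        // the scoring logic would walk the habitat here
        return 0;
    }
}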

Sprint 3 is assessed by submission to Brightspace. See the submission instructions below for details. The
following checklist will be used for grading.

Item                                                                   Marks Available   Comments
Functionality
Player name entry and display is working                                      1
Turn based game play is working                                               3
Game “quit” is working                                                        1
Board display is working                                                      6
Habitat tile draw and placement is working                                    6
Wildlife token draw and placement is working                                  6          Including automatic and user-decided culls
Nature token points and spending is working                                   4
Wildlife scoring card selection and display is working                        4
Scoring is working for all wildlife scoring cards                             6
Test Quality
At least 2 non-trivial classes have corresponding JUnit test classes          1
All methods in those classes have at least 1 non-trivial JUnit test           1
A robust test strategy is described in the video                              2
Code Quality
The code is readable                                                          3
The project is well-structured into classes                                   3
The classes are well-structured into methods                                  3
Appropriate data structures are used                                          1
Scope is minimised                                                            1
Data is single source                                                         1
Naming conventions are used consistently                                      1
Constants are well used                                                       1
Names are clear and meaningful                                                1
Comments are useful                                                           1
Documentation
Javadoc is used correctly for at least 1 non-trivial class                    1
A class diagram for the program (excluding test classes) is shown in the video  1
A sequence diagram is shown in the video                                      1
Total                                                                        60

Sprint 4: Bot, part 1
As a group, implement and verify the following.
Based on the code released on Brightspace, implement a Bot that plays Cascadia.
You can assume that scoring is based on the (A) wildlife scoring cards.
The design of the Bot is up to you.
Start by planning how you want the Bot to work. Turn this into a list of features that you wish to implement.
Create a Product Backlog, a Sprint Plan and a Kanban board.
Code and verify the features that you have selected to build during sprint 4.
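One simple starting point is a greedy Bot that evaluates every legal move and picks the highest-scoring one. The sketch below assumes hypothetical Move, GameState, legalMoves and scoreAfter abstractions; they are not part of the released code. A greedy policy is only a baseline, but it gives you something working to improve on.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Sketch of a greedy Bot: score every legal move and take the best one.
// Move, GameState, legalMoves and scoreAfter are assumptions about your own
// design; they are not part of the code released on Brightspace.
interface Move { }

interface GameState {
    List<Move> legalMoves();
    int scoreAfter(Move move);   // e.g. reuse the Sprint 3 scoring code on a copied state
}

class GreedyBot {
    Optional<Move> chooseMove(GameState state) {
        return state.legalMoves().stream()
                .max(Comparator.comparingInt(state::scoreAfter));
    }
}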

Sprint 4 is assessed by Design Review.


Item                                        Marks Available   Comments
Git and GitHub use                                 2          0 = not used, 1 = some commits, 2 = consistent commit history
Project management                                 5          +1 = Kanban board in use, +1 = consistent effective meetings, +1 = effective communication tools, +1 = product backlog, +1 = sprint plan
Progress with respect to the Sprint Plan           3          0 = no progress, 3 = all of the features working
Code quality                                       2          0 = poor quality (e.g. all code in one class), 1 = medium quality (e.g. some things messy), 2 = high quality (easy to read)
TOTAL                                             12

Sprint 5: Bot, part 2
Create a Sprint Plan and Kanban board for sprint 5.
Code and verify the features that you have selected to build during sprint 5.
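Since the rubric below caps each Bot turn at 5 seconds, it is worth measuring turn times while testing. A minimal sketch:

// Sketch of timing a Bot turn against the 5-second limit in the rubric below.
class TurnTimer {
    static void timedTurn(Runnable botTurn) {
        long start = System.nanoTime();
        botTurn.run();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Bot turn took " + elapsedMs + " ms");
        if (elapsedMs > 5_000) {
            System.out.println("Warning: turn exceeded the 5 second limit");
        }
    }
}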

Sprint 5 is assessed by submission to Brightspace. See the submission instructions below for details. The
following checklist will be used for grading.
Feature                                                       Marks Available   Comments
Functionality
Bot name entry is working                                            1
Turn based game play is working                                      4
Habitat tile placement is working                                    6
Wildlife token placement is working                                  6
Nature token spending is working                                     6
The Bot completes its turns in under 5 seconds                       1
Effectiveness of the Bot game play in terms of scoring              20          The Bot game playing strategy should be explained in the video
Code Quality
The code is readable                                                 3
The project is well-structured into classes                          3
The classes are well-structured into methods                         3
Appropriate data structures are used                                 1
Scope is minimised                                                   1
Data is single source                                                1
Naming conventions are used consistently                             1
Constants are well used                                              1
Names are clear and meaningful                                       1
Comments are useful                                                  1
Total                                                               60

Calendar
The following is subject to change. Watch out for email and Brightspace announcements.

Week   Starts Monday         Work on       Deadline
1      23 Jan                Group Setup
2      30 Jan                Sprint 1      Group Select Brightspace Submission Monday 10am
3      6 Feb (St Brigid)     Sprint 1      Sprint 1 Design Review during Lab Session
4      13 Feb                Sprint 2
5      20 Feb                Sprint 2      Sprint 2 Design Review during Lab Session
6      27 Feb                Sprint 3
7      6 Mar                 Sprint 3
       13 Mar                Study Week
       20 Mar (St Patricks)  Study Week
8      27 Mar                Sprint 4      Sprint 3 Brightspace Submission Monday 10am
9      3 Apr (Easter)        Sprint 4
10     10 Apr (Easter)       Sprint 5      Sprint 4 Design Review during Lab Session
11     17 Apr                Revision      Class Test during Lab Session
12     24 Apr                Sprint 5
13     1 May (May Day)                     Sprint 5 Brightspace Submission Tuesday 10am & Bot Playoffs

Note that there is no late submission option for Sprint 3 as reference code will be released to assist with
sprints 4 and 5.

Brightspace Submissions
For the Brightspace submissions, submit a zip file named with your group name and the sprint number
containing the following items:

• A report in PDF format (1 page) including:


o Group number, group name, student names and GitHub IDs
o A link to your GitHub release (see Brightspace – Learning Materials - About the Tools for more
information on creating the GitHub release). Test that the link works after it is added to the report.
The release must be private.
o State the relative amount of work done by each group member for the sprint (e.g. 50-50). If the
workload is not 50-50, please explain why.
• A video (max. 5 minutes) in .mp4 format of a screen recording with voice over. The video should:
o Run the code, showing the working features
o Explain the tests that you did, showing test running where possible
o Explain the code by walking through it

• A directory containing the source code.


• A directory containing an executable JAR file for the program (see Brightspace – Learning Materials -
About the Tools for more information on creating the JAR). Note, the colours may not display correctly
when running the JAR at the command line.

Notes
Be careful not to delete the release or make the repo public on GitHub until the end of the semester plus 3
months.
Make sure that the group number, group name, student names and GitHub IDs are included in comments at
the top of all source code files.
Use of open-source code is plagiarism.

Grading Scheme
The marks for the Group Project are allocated as follows:

Assessment Marks
Sprint 1 Design Review 1%
Sprint 2 Design Review 2%
Sprint 3 Submission (Game) 45%
Sprint 4 Design Review 2%
Sprint 5 Submission (Bot) 30%
Class Test 20%

Marking is done according to the scoresheets above and below. The numerical value is converted to a grade
using the Alternative Linear Conversion Grade Scale 40% Pass.
https://www.ucd.ie/students/exams/gradingandremediation/understandinggrades/
The Design Reviews and Submissions are graded according to the rubrics above. Each requirement in the
rubric is scored according to the number of marks available, which is related to the difficulty associated with
the item.
An example grading sheet is shown below:
Item             Marks Available   Marks   Comments
Display panels          2            2     Working
Display board           2            2     Working
Enter names             2            2     Working
Roll dice               2            1     Partially working – only 1 of 2 dice works
Quit                    2            0     Not working
TOTAL                  10            7     7/10 = 70% = B-

Programs are marked against the requirements described in this document. It is OK to include extra
functionality in your game in addition to the items listed above. However, any extra functionality should be in
addition to the functionality listed above, not instead of it. There are no bonus marks for additional functionality.

Deductions
If necessary, the following deductions will be applied:

• Based on review of the comments in the report on the relative work done and on review of the group
member contributions on GitHub, the grades for individual students in a group may be adjusted.
• GitHub not used correctly, up to 2 grade deductions
• No report, grade deduction (e.g. A to B)

• Incomplete report, 1-3 grade point deduction (e.g. A to A- or B+ or B)


• No video, grade deduction
• Incomplete video, 1-3 grade point deduction (e.g. A to A- or B+ or B)
• No executable jar or jar not working, 1 grade point deduction

• No or incomplete source code in Brightspace but available in GitHub, 1 grade deduction


• No or incomplete source code submission in Brightspace and in GitHub, graded as NM (no mark)

• No GitHub release, 1 grade deduction

• Up to 5 working days late, 1 grade point deduction.


• Up to 10 working days late, 2 grade points deduction.

• More than 10 working days late, submission not accepted.
