
COMSATS University Islamabad

Abbottabad, Pakistan

Plants Phenotypic Traits Using ML


Version 1.0

By

Mohsin Ashraf CIIT/FA17-BCS-065/ATD


Syed Sibghatullah Gillani CIIT/FA17-BCS-080/ATD
Umair Younus CIIT/FA17-BTN-015/ATD

Supervisor
Dr Ahmad Din

Bachelor of Science in Computer Science/


Telecommunication (2017-2021)

The candidate confirms that the work submitted is their own and appropriate
credit has been given where reference has been made to the work of others.
COMSATS University, Islamabad Pakistan

Plants Phenotypic Traits Using ML


Version 1.0

A project presented to
COMSATS Institute of Information Technology, Islamabad

In partial fulfillment
of the requirement for the degree of
Bachelor of Science in Computer Science/
Telecommunication (2017-2021)

By

Mohsin Ashraf CIIT/FA17-BCS-065/ATD


Syed Sibghatullah Gillani CIIT/FA17-BCS-080/ATD
Umair Younus CIIT/FA17-BTN-015/ATD

DECLARATION

We hereby declare that this software, neither as a whole nor in part, has been copied from any source. We further declare that we have developed this software and the accompanying report entirely on the basis of our personal efforts. If any part of this project is proved to have been copied from any source, or found to be a reproduction of some other work, we will stand by the consequences. No portion of the work presented has been submitted in support of any application for any other degree or qualification at this or any other university or institute of learning.

Mohsin Ashraf Syed Sibghatullah Gillani Umair Younus

--------------------------- --------------------------- ---------------------------

CERTIFICATE OF APPROVAL

It is to certify that the final year project of BS (CS) “Plant Phenotypic Trait” was developed by Mohsin Ashraf (CIIT/FA17-BCS-065), Syed Sibghatullah Gillani (CIIT/FA17-BCS-080) and Umair Younus (CIIT/FA17-BTN-015) under the supervision of “Dr Ahmad Din”, and that in his opinion it is fully adequate, in scope and quality, for the degree of Bachelor of Science in Computer Science.

---------------------------------------
Supervisor

---------------------------------------
External Examiner

---------------------------------------
Head of Department
(Department of Computer Science)

EXECUTIVE SUMMARY
Plant phenotyping, the measurement of observable plant traits, has traditionally been carried out by hand. Manual measurement requires a large group of people, takes a great deal of time that researchers do not have, is prone to ambiguity, and is frequently destructive to the plants being studied. These bottlenecks slow down agricultural research.

To overcome these problems, this project develops a system that extracts a plant phenotypic trait, specifically leaf count, directly from digital images using deep learning. A convolutional neural network based on the Xception architecture is trained to count leaves, and the trained model is made available through an Android application: the user signs in, selects an image from the gallery or captures one with the camera, and receives the predicted number of leaves, with the option to review previous results.

By automating measurement, the system saves time and labour, avoids damage to the plants, and gives researchers a faster and more consistent way to analyse plant phenotypes, helping to close the genotype-to-phenotype gap.

ACKNOWLEDGEMENT

All praise is to Almighty Allah who bestowed upon us a minute portion of His boundless
knowledge by virtue of which we were able to accomplish this challenging task.

We are greatly indebted to our project supervisor “Dr Ahmad Din”. Without his personal supervision, advice and valuable guidance, completion of this project would have been doubtful. We are deeply indebted to him for his encouragement and continual help during this work.

And we are also thankful to our parents and family who have been a constant source of
encouragement for us and brought us the values of honesty & hard work.

Mohsin Ashraf Syed Sibghatullah Gillani Umair Younus

--------------------------- --------------------------- ---------------------------

ABBREVIATIONS

SRS Software Requirements Specification

PC Personal Computer
DPP Deep Plant Phenomics
CNN Convolutional Neural Network

TABLE OF CONTENTS

1 Introduction..........................................................................................................................11

1.1 Brief.................................................................................................................................11
1.2 Relevance to Course Modules.........................................................................................11
1.3 Project Background.........................................................................................................11
1.4 Literature Review............................................................................................................11
1.5 Analysis from Literature Review (in the context of your project)..................................11
1.6 Methodology and Software Lifecycle for This Project...................................................11
1.6.1 Rationale behind Selected Methodology...................................................................11
2 Problem Definition...............................................................................................................12

2.1 Problem Statement..........................................................................................................12


2.2 Deliverables and Development Requirements................................................................12
2.3 Current System (if applicable to your project)................................................................12
3 Requirement Analysis.........................................................................................................13

3.1 Use Cases Diagram(s).....................................................................................................13


3.2 Detailed Use Case...........................................................................................................13
3.3 Functional Requirements.................................................................................................13
3.4 Non-Functional Requirements........................................................................................13
4 Design and Architecture......................................................................................................14

4.1 System Architecture........................................................................................................14


4.2 Data Representation [Diagram + Description]................................................................14
4.3 Process Flow/Representation..........................................................................................14
4.4 Design Models [along with descriptions]........................................................................14
5 Implementation....................................................................................................................15

5.1 Algorithm........................................................................................................................15
5.2 External APIs..................................................................................................................15
5.3 User Interface..................................................................................................................15
6 Testing and Evaluation........................................................................................................16

6.1 Manual Testing................................................................................................................16


6.1.1 System testing............................................................................................................16
6.1.2 Unit Testing...............................................................................................................16
6.1.3 Functional Testing.....................................................................................................17
6.1.4 Integration Testing.....................................................................................................17
6.2 Automated Testing:.........................................................................................................18
Tools used:................................................................................................................................18
7 Conclusion and Future Work.............................................................................................19

7.1 Conclusion.......................................................................................................................19
7.2 Future Work....................................................................................................................19
8 References.............................................................................................................................20

LIST OF FIGURES

Fig 1.1 Block Diagram.....................................................................................................................8


Fig 2.1 Use Case Diagram...............................................................................................................9

LIST OF TABLES

Table 1.1 Table1 .............................................................................................................................8


Table 2.1 Table 2.............................................................................................................................9

Plant Phenotypic Traits Using ML

1. Introduction
The main purpose of our project is to improve agricultural research in Pakistan. This application provides a functional system that shows the user a plant phenotypic trait, specifically leaf count, which can help researchers enhance agricultural research and better analyse plant phenotypes.

1.1.Brief

A phenotype is formed during plant growth and develops from the dynamic interaction between the genotype and the environment. Phenotyping has long been limited by a lack of efficient techniques; network-structured approaches were initiated to close the gap between laboratory studies and field application by developing technological solutions to this problem. Traditional phenotyping revolves around manual measurement, which is difficult and stressful: these methods are labour-intensive, time-consuming and frequently destructive to plants. As digital image processing has evolved, however, it has opened many new paths toward solving and improving plant phenotyping. A key outcome of this evolution is that transformed images can provide reliable and accurate phenotypic measurements. Machine learning and deep learning techniques have the potential to improve image-based phenotyping further and to tackle even more complex features.

1.2.Relevance to Course Modules

This project is based on an Android application, deep learning and a CNN model.
The course modules relevant to the project, which we studied during our four-year education, are:
1. Object Oriented Programming (CSC241)
2. Software Engineering Concepts (CSC291)
3. Artificial Intelligence (CSC462)
4. Machine Learning (CSC354)
5. Mobile Application Development (CSC303)
6. Human Computer Interaction (CSC356)

Department of Computer Science, CUI, Abbottabad 11


Plant Phenotypic Trait Using ML V.1.0

1.3.Project Background
1.3.1. Image Analysis

Many image analysis tools have been released by the scientific community for the purpose of performing image-based high-throughput plant phenotyping. These tools range in the degree of automation, as well as the types of phenotypic features or statistics they are capable of measuring. From an image analysis perspective, phenotypic features can be categorized based on their complexity. Image-based phenotypes can be broadly separated into those that are simply linear functions of image pixel intensities, or more complex types that are non-linear functions of pixel intensities.
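As an illustrative sketch of a linear phenotype of this kind (a hypothetical example, not one of the surveyed tools), projected plant area can be estimated directly from pixel intensities by counting green-dominant pixels with NumPy:

```python
import numpy as np

def projected_plant_area(rgb):
    """Estimate projected plant area as the number of pixels whose
    green channel dominates red and blue: a simple linear-threshold
    phenotype of the kind classical pipelines compute."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (g > r) & (g > b)   # green-dominant pixels
    return int(mask.sum())     # area in pixels

# Toy 2x2 image: one green pixel, three background pixels
img = np.array([[[10, 200, 10], [120, 120, 120]],
                [[200, 10, 10], [10, 10, 200]]], dtype=np.uint8)
print(projected_plant_area(img))  # 1
```

More complex, non-linear phenotypes such as leaf count cannot be recovered by a fixed threshold like this, which motivates the learning-based approach below.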

1.3.2. Deep Learning

In response to the limited flexibility and poor performance of classical image processing pipelines for complex phenotyping tasks, machine learning techniques are expected to take a prominent role in the future of image-based phenotyping. Plant disease detection and diagnosis is an example of a complex phenotyping task where machine learning techniques, such as support vector machines, clustering algorithms, and neural networks, have demonstrated success.
Deep learning is an emerging area of machine learning for tackling
large data analytics problems. Deep convolutional neural networks (CNNs)
are a class of deep learning methods which are particularly well-suited to
computer vision problems. In contrast to classical approaches in computer
vision, which first measure statistical properties of the image as features to
use for learning a model of the data, CNNs actively learn a variety of filter
parameters during training of the model. CNNs also typically use raw
images directly as input without any time-consuming, hand tuned pre-
processing steps.


1.3.3. CNN
A typical setup for a CNN uses a raw RGB image as input, which can be
considered as an n × m × 3 volume, where n is the image height, m is the image
width, and 3 is the number of color channels in the image, e.g., red, green, and
blue channels. The architecture of a CNN comprises several different layers of three main types: convolutional layers, pooling layers, and fully connected layers. The initial layers in a network are convolutional and pooling layers. The
convolutional layers apply a series of filters to the input volume in strided
convolutions. Each filter is applied over the full depth of the input volume, and
each depth slice in the layer’s output volume corresponds to the activation map of
one of these filters. For example, if padding is applied at the boundaries of the
image and with a stride size of one pixel, the output of the convolutional layer will
be an n×m×k volume where n and m are the height and width of the input volume,
and k is the number of filters.
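To make the shape rule concrete, here is a minimal NumPy sketch (an illustration only, not an optimized implementation) of a same-padded, stride-one convolutional layer: k filters applied over the full depth of an n × m × c volume produce an n × m × k output volume.

```python
import numpy as np

def conv2d_same(volume, filters):
    """Apply k filters to an n x m x c input volume with zero padding
    and stride 1, producing an n x m x k output volume (one activation
    map per filter)."""
    n, m, c = volume.shape
    k, fh, fw, fc = filters.shape
    assert fc == c, "each filter spans the full depth of the input"
    ph, pw = fh // 2, fw // 2
    padded = np.pad(volume, ((ph, ph), (pw, pw), (0, 0)))
    out = np.zeros((n, m, k))
    for f in range(k):                        # one depth slice per filter
        for i in range(n):
            for j in range(m):
                patch = padded[i:i + fh, j:j + fw, :]
                out[i, j, f] = np.sum(patch * filters[f])
    return out

x = np.random.rand(8, 8, 3)      # n=8, m=8, 3 colour channels
w = np.random.rand(5, 3, 3, 3)   # k=5 filters of size 3x3 over 3 channels
print(conv2d_same(x, w).shape)   # (8, 8, 5)
```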
The pooling layers apply a spatial downsampling operation to the input volume, by calculating the maximum (called max pooling) or mean (average
pooling) value in a pixel’s neighborhood. After a series of convolutional and
pooling layers, there are typically one or more fully connected layers, including
the output layer. The input to the first fully connected layer is the output volume
from the previous layer, which is reshaped into a large one-dimensional feature
vector. This feature vector is matrix-multiplied with the weights of the fully
connected layer, which produces pre-activations. After each convolutional and
fully connected layer, a non-linear function (such as a sigmoid function or
Rectified Linear Unit) is applied to arrive at the final activations for the layer. The
output layer is a fully connected layer without an activation function. In the case
of classification, the number of units in the output layer corresponds to the number
of classes. These values can then be log-normalized to obtain class probabilities
for classification problems. In the case of regression problems, the number of
units in the output layer corresponds to the number of regression outputs (for
example, one output for leaf count, or four outputs for bounding boxes).
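The pooling and fully connected steps described above can be sketched as follows (a NumPy illustration of the operations, not the project's code):

```python
import numpy as np

def max_pool(volume, size=2):
    """2x2 max pooling: spatial downsampling that keeps the maximum in
    each pixel neighbourhood, halving height and width."""
    n, m, c = volume.shape
    v = volume[:n - n % size, :m - m % size, :]
    return v.reshape(n // size, size, m // size, size, c).max(axis=(1, 3))

def fully_connected_softmax(volume, weights):
    """Flatten the input volume to a one-dimensional feature vector,
    matrix-multiply it with the layer weights to get pre-activations,
    then log-normalise (softmax) to obtain class probabilities."""
    feat = volume.reshape(-1)
    pre = weights @ feat
    e = np.exp(pre - pre.max())   # numerically stable softmax
    return e / e.sum()

vol = np.random.rand(4, 4, 8)
pooled = max_pool(vol)                    # (2, 2, 8)
w = np.random.rand(3, pooled.size)        # 3 output classes
probs = fully_connected_softmax(pooled, w)
print(pooled.shape)                       # (2, 2, 8); probs sum to 1
```

For a regression head such as leaf counting, the final matrix multiplication would instead produce the raw outputs directly, with no softmax.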
As with other supervised methods, CNNs are trained via an iterative
optimization procedure to minimize the difference between the network’s output
and a known ground-truth label for each input. A loss function compares the
output value of the network to the ground-truth label. This results in a single loss value, which is then backpropagated through the network in reverse order. At
each layer, the gradient of the error signal with respect to each parameter can be
calculated. These parameters can then be adjusted by a factor proportional to this
gradient.
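As a toy illustration of this optimization loop (a hypothetical one-parameter model, not a real CNN), each step compares the output to the ground truth, computes the gradient of the loss, and adjusts the parameter by a factor proportional to that gradient:

```python
# Hypothetical one-parameter "network": output = w * x.
x, y_true = 2.0, 6.0   # input and ground-truth label (true w is 3)
w, lr = 0.0, 0.1       # initial parameter and learning rate

for step in range(50):
    y_pred = w * x                     # forward pass
    loss = (y_pred - y_true) ** 2      # single loss value
    grad = 2 * (y_pred - y_true) * x   # dLoss/dw via the chain rule
    w -= lr * grad                     # update proportional to gradient

print(round(w, 4))  # converges to 3.0
```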


1.3.4. Android
Android is a mobile operating system based on a modified version of
the Linux kernel and other open source software, designed primarily for
touchscreen mobile devices such as smartphones and tablets. Android is
developed by a consortium of developers known as the Open Handset
Alliance and commercially sponsored by Google. It was unveiled in
November 2007, with the first commercial Android device, the HTC
Dream, being launched in September 2008.
It is free and open-source software and its source code is known as
Android Open Source Project (AOSP), which is primarily licensed under
the Apache License. However, most Android devices ship with additional proprietary software pre-installed, most notably Google Mobile Services,
which includes core apps such as Google Chrome, the digital distribution
platform Google Play and associated Google Play Services development
platform.
The source code has been used to develop variants of Android on a
range of other electronics, such as game consoles, digital cameras, portable
media players, PCs and others, each with a specialized user interface. Some
well-known derivatives include Android TV for televisions and Wear OS
for wearables, both developed by Google. Software packages on Android,
which use the APK format, are generally distributed through proprietary
application stores like Google Play Store, Samsung Galaxy Store, Huawei
AppGallery, Cafe Bazaar, and GetJar, or open source platforms like
Aptoide or F-Droid.

1.4. Literature Review

Table 1: Related Systems

System                          Disadvantages                      Solution
1. Manual plant phenotyping     Time-consuming; labour-            We use a CNN to do plant
                                intensive; ambiguous               phenotyping automatically;
                                                                   no labour is required;
                                                                   accurate results.
2. Image-based plant            Used for simple phenotyping        Can do more complex tasks,
   phenotyping                  tasks like biomass detection,      like leaf counting.
                                compactness and diameter

1.5.Analysis from Literature Review

Simple image processing pipelines tend to break down when faced with more complex non-linear, non-geometric phenotyping tasks. Tasks such as leaf counting, vigor ratings, injury ratings, disease detection, age estimation, and mutant classification add a higher level of abstraction which requires a more complicated image processing pipeline with several more steps, such as morphological operations, connected components analysis, and others. Not only is this process dataset-specific and labor-intensive, but the added complexity also contributes additional parameters and potential fragility to the pipeline. In addition to being limited to simple features,
existing image-based phenotyping tools are often also only applicable for processing
pictures of individual plants taken under highly controlled conditions, in terms of
lighting, background, plant pose, etc. Most tools rely on hand-engineered image
processing pipelines, typically requiring the hand-tuning of various parameters. In
some circumstances, hand-tuned parameters can be invalidated by variation in the
scene including issues like lighting, contrast, and exposure.
Machine learning techniques, and deep learning in particular, have potential to
improve the robustness of image-based phenotyping and extend toward more
complex and abstract phenotypic features.

1.6.Methodology and Software Lifecycle for This Project

We have used the incremental software life cycle for this project, and the procedural programming paradigm is used as the software programming model.


1.6.1. Multiple Standalone Modules

In the incremental model, requirements are broken down into multiple standalone modules of the software development cycle. Incremental development proceeds in steps through analysis, design, implementation, and testing/verification.

1.6.2. Flexible and Quick

The software is produced quickly during the software life cycle. The model is flexible, makes it less expensive to change requirements and scope, and errors are easy to identify.


2. Problem Definition
Our goal is to provide the plant phenotyping community access to state-of-the-art deep
learning techniques in computer vision in order to accelerate research in plant phenotyping and
help to close the genotype-to-phenotype gap.


2.1.Problem Statement
Traditionally, a large group of people is needed to take measurements of plants. Ambiguity may exist in the measurements, since manual measurement is a stressful and difficult task. It also takes too much time, which researchers and scientists do not have. Most importantly, manual measurement may damage the plants. To overcome these problems, our system will use pictures of the plant to take all necessary measurements, and all evaluation will then be performed based on these measurements. We are developing this system to remove the bottlenecks of the traditional method and to save time, labour and collateral damage.
We expect to learn research skills, practical AI applications, machine learning, deep learning in particular, convolutional neural networks, and mobile application development.

2.2.Deliverables and Development Requirements

Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans. In deep learning, a computer model learns to perform classification tasks directly from images, text or sound. Most deep learning methods use neural network architectures, which is why deep learning models are often referred to as deep neural networks. One of the most popular types of deep neural networks is the convolutional neural network. A CNN convolves learned features with input data and uses 2D convolutional layers, making this architecture well suited to processing 2D data, such as images.
In our system, convolutional neural networks (CNNs) will be constructed and trained from scratch to perform different benchmark tasks. A digital image will be provided to the network, which will extract the necessary measurements from the image and perform the required benchmark task.
This approach will take less time to determine the required trait, fewer people will be needed to perform the tasks, and almost no damage will be done to the plants.


2.3.Current System (if applicable to your project)


Deep Plant Phenomics (DPP) is a platform for plant phenotyping using deep learning. Think of it as Keras for plant scientists. DPP integrates TensorFlow for learning, which means it is able to run on both CPUs and GPUs and scale easily across devices.
Principally, DPP provides deep learning functionality for plant phenotyping
and related applications. Deep learning is a category of techniques which
encompasses many different types of neural networks. Deep learning techniques lead
the state of the art in many image-based tasks, including image classification, object
detection and localization, image segmentation, and others.

3. Requirement Analysis
This chapter presents the Software Requirements Specification (SRS) for the system.


3.1.Use Cases Diagram(s)

3.2.Detailed Use Case

3.3.Functional Requirements

3.4.Non-Functional Requirements


4. Design and Architecture


This chapter presents the Software Design Description (SDD) for the system.


4.1.System Architecture

4.2.Data Representation [Diagram + Description]

4.3.Process Flow/Representation

4.4.Design Models [along with descriptions]


5. Implementation
5.1. Algorithm
5.1.1. Xception
We are using the Xception architecture for the most important part of our project, leaf counting.
Xception is a convolutional neural network architecture based on depthwise separable convolution layers. It is inspired by Google's Inception model and is based on an 'extreme' version of the Inception hypothesis. The architecture is a linear stack of depthwise separable convolution layers with residual connections. Depthwise separable convolutions look at channel and spatial correlations independently, in successive steps.
The Xception architecture has 36 convolutional layers forming the feature-extraction base of the network. These 36 convolutional layers are grouped into 14 modules, all of which have linear residual connections around them, except the first and last modules. In short, the Xception architecture is a linear stack of depthwise separable convolution layers with residual connections, which makes it very easy to implement using libraries such as Keras or TensorFlow.
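To see why this factorization is attractive, the following sketch (an illustration of the parameter arithmetic, not the project's training code) compares a standard k × k convolution with its depthwise separable equivalent, which replaces one joint spatial-and-channel filter bank with a per-channel spatial step followed by a 1×1 pointwise step:

```python
def separable_conv_params(k, c_in, c_out):
    """Parameter counts for a k x k convolution mapping c_in channels
    to c_out channels, standard vs depthwise separable."""
    standard = k * k * c_in * c_out    # joint spatial + channel filters
    depthwise = k * k * c_in           # one spatial filter per input channel
    pointwise = c_in * c_out           # 1x1 conv mixing channels
    return standard, depthwise + pointwise

std, sep = separable_conv_params(3, 64, 128)
print(std, sep)  # 73728 8768: far fewer parameters for the same mapping
```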


See Chollet, “Xception: Deep Learning with Depthwise Separable Convolutions”, CVPR 2017:
https://openaccess.thecvf.com/content_cvpr_2017/papers/Chollet_Xception_Deep_Learning_CVPR_2017_paper.pdf


5.3 User Interface:

SignIn Screen:

Following is the entry page which user will see upon opening the application. On this
screen user has two options
1. If user does not have an account, he has to first create an account by clicking SignUp
button.
2. If user already have an account, he can SignIN by providing valid credentials.


SignUp Screen:
If the user does not have an account, he must sign up first by providing an email and password for his account. After providing them, he can click Sign Up to move on to the next screen.


Home Screen:
On the home screen the user has three options: he can select an image from the gallery, capture an image with the camera, or view previous results.


Gallery Screen:
After selecting the Gallery option, the user is redirected to the Gallery screen, where he can insert an image from the gallery. By clicking the Insert Image button he can select an appropriate plant image, after which he clicks the Next button.


Camera Screen:
From the home page, after clicking the CAMERA button, the user is redirected to the Camera screen, from where he can open the camera and take a picture of the plant by pressing the capture button.


Leaf Count Screen:


After selecting an image from the gallery or taking one with the camera, the user is directed to the leaf count page, where he can see the selected image and process it by clicking the Start button.


Result Screen:
After clicking the Start button, the user is redirected to the results screen, where he can see his selected image and the result, i.e. the number of leaves. Afterwards the user has two options: he can try another image through the same process by clicking the Try Another button, or he can exit the application.


6. Testing and Evaluation


This chapter presents the testing performed on the system, both manually and in an automated manner.

6.1. Manual Testing
6.1.1. System testing
Once the system has been successfully developed, testing has to be performed to ensure that the system works as intended and meets the requirements stated earlier. System testing also helps find errors that may be hidden from the user. There are a few types of testing, including unit testing, functional testing and integration testing. Testing must be completed before the system is deployed for users.

6.1.2. Unit Testing



- Unit Testing 1: Login as FYP Committee, as shown in Table 5.1


Testing Objective: To ensure the login form is working correctly
Table 5.1: Login Unit Testcase

No.  Test case/Test script            Attribute and value   Expected result             Result
1.   Verify user login after          Username: L001        Successfully log into the   Pass
     clicking the ‘Login’ button      Password: 1234        main page of the system
     on the login form with                                 as an FYP Committee
     correct input data                                     member.

- Unit Testing 2: Edit Profile


Testing Objective: To ensure the edit profile form is working properly.


Table 5.2: Edit Profile Unit Testcase

No.  Test case/Test script            Attribute and value   Expected result             Result

6.1.3. Functional Testing


The functional testing will take place after the unit testing. In this functional
testing, the functionality of each of the module is tested. This is to ensure that
the system produced meets the specifications and requirements.

- Functional Testing 1: Login with different roles, as shown in Table 5.3


Objective: To ensure that the correct page with the correct navigation bar
is loaded.
Table 5.3: Login Functional Testcase

No.  Test case/Test script            Attribute and value   Expected result             Result
1.   Login as a ‘FYP Committee’       Username: L001        Main page for the FYP       Pass
     member.                          Password: 1234        Committee member is
                                                            loaded with the FYP
                                                            Committee navigation bar

6.1.4. Integration Testing


Table 5.4 shows the integration testing performed.

Table 5.4: Integration Testcase

No.  Test case/Test script            Attribute and value   Expected result             Result
1.   Login as “FYP Committee”         Username: L001        Login successful; the FYP   Pass
     member                           Password: 1234        Committee page with its
                                                            navigation bar is loaded,
                                                            on the view profile page
2.   Upload student record for        -                     File successfully uploaded  Pass
     Project 1                                              and return to the upload
                                                            page; student records are
                                                            updated
3.   View supervising student         -                     The list of supervisees is  Pass
                                                            shown on the screen

6.2. Automated Testing:


6.2.1. Tools used
Table 5.5 lists the tools used for automated testing.
Table 5.5: Tools used

Tool Name    Tool Description    Applied on [related test cases / FR / NFR]    Results
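As an illustration of how automated checks could be scripted in Python (the count_leaves function below is a hypothetical stub standing in for the trained model, not the project's real predictor), simple assertions can verify that the leaf counter returns sensible values:

```python
def count_leaves(image_pixels):
    """Hypothetical stand-in for the trained model's prediction; a real
    test would load the CNN and run inference on a sample image."""
    return max(0, round(sum(image_pixels) / max(len(image_pixels), 1)))

# Automated checks, runnable directly or under any test runner:
assert isinstance(count_leaves([3, 4, 5]), int)   # prediction is an integer
assert count_leaves([3, 4, 5]) >= 0               # leaf count is non-negative
assert count_leaves([]) == 0                      # empty input gives zero
print("all leaf-count checks passed")
```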


7. Conclusion and Future Work


This chapter concludes the project and highlights future work.


7.1. Conclusion

7.2. Future Work


8. References
1. F. Chollet, (2017), Xception: Deep Learning with Depthwise Separable Convolutions, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
https://openaccess.thecvf.com/content_cvpr_2017/papers/Chollet_Xception_Deep_Learning_CVPR_2017_paper.pdf
