
Project: Call Centre Management System

INTRODUCTION

A call center management system is proposed for call centers, enabling them to
computerize shift organization, work distribution and employee performance tracking.
Call centers are essentially work centers that provide 24-hour customer
care via telephone, the Internet and other channels. Nowadays every large or small company has call
centers situated at different locations. Besides supporting customers via a team of customer care
executives, a call center has one more team, of technical people, responsible for allocating,
maintaining and updating the company's communication infrastructure.

SYNOPSIS

The Domain "Call center management" is designed to keep track of employees and their shifts
and individual performance as to check the targeted calls. It is designed to store information of
the employees and from past history the details of the targeted calls. The system helps to identify
the calls and the clients feedback based on the client satisfaction. The client feed back and the
targets identification helps in identifying potential employees and providing bonus for the excess
targets. The project is help full for assigning the appraisal and promotion of the employees in
terms of pay appraisal and also in terms of promotions to teams leaders, supervisors and project
managers. The Statistical Summary of performance helps in achieving these targets.

In the existing system the application works in a single-user environment; in the proposed
system we plan to implement the project in a multiuser environment so that all the
summaries of the Call Center Management System can be produced using the software.

Modules
Administrative Modules:

This is the main module, which performs all the principal operations in the system. The
major operations in this project are recording employee details and storing and retrieving the
performance of the employees in terms of targets. Appraisal and promotion of employees
being an integral part of this project, the project is designed to summarize these statistics for
reference by the management.

Employee Information:

It displays information about the employees working in the call center. We can add
details such as staff name, address, designation and phone number, and we can add, delete,
update and save the staff details. This module supports quick identification and ready access
to staff details, which is essential for managing employee benefits.
Appraisal Identification:

This module is designed to identify employees for appraisal and promotion. The
identification is based on an employee's achieved targets and the client feedback, and it also
takes into account academic improvements, overall work experience and experience in
the same company. Through this module we can add details such as empid, name, address,
joining date and experience, and retrieve them to assess the employees for their
appraisal.

User Login:

This module is for the users of the application; through it employees can register and
log into the system to view their status and performance in the company. Under this phase we
have the following modules:

Employee Registration:

This module can be used by the employees to register themselves with the system by
providing details such as name, empid and contact. It creates a login domain for each
employee registered through it.
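
Given the Node.js and MongoDB stack listed later under Hardware and Software Requirements, the employee records described above could be modelled roughly as in the sketch below. This is only an illustration; the schema and field names are assumptions, not the project's actual design.

var mongoose = require('mongoose');

// Hypothetical employee record; field names are assumptions.
var employeeSchema = new mongoose.Schema({
  empid: { type: String, required: true, unique: true },
  name: { type: String, required: true },
  address: String,
  designation: String,
  phone: String,
  joiningDate: Date,
  experienceYears: Number
});

module.exports = mongoose.model('Employee', employeeSchema);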

Appraisal Summary Report:

This module lets employees learn their performance status. A registered employee can
log on and view his or her performance and potential-performance status, while the
information is kept secure and confidential from other employees.

System Analysis

System analysis is a process of gathering and interpreting facts, diagnosing problems and using
the information to recommend improvements to the system. It is a problem-solving activity that
requires intensive communication between the system users and system developers. System
analysis, or study, is an important phase of any system development process. The system is
studied to the minutest detail and analyzed. The system analyst plays the role of the interrogator
and delves deep into the working of the present system. The system is viewed as a whole and the
inputs to the system are identified. The outputs from the organization are traced through the various
processes. System analysis is concerned with becoming aware of the problem, identifying the
relevant decisional variables, analyzing and synthesizing the various factors and determining
an optimal, or at least a satisfactory, solution or program of action.
A detailed study of the process must be made using techniques such as interviews and
questionnaires. The data collected from these sources must be scrutinized to arrive at a
conclusion, which is an understanding of how the system functions. This system is
called the existing system. The existing system is then subjected to close study, and problem areas
are identified. The designer now functions as a problem solver and tries to sort out the
difficulties that the enterprise faces. The solutions are given as proposals. Each proposal is
weighed against the existing system analytically and the best one is selected. The proposal is
presented to the user for endorsement, reviewed on user request and suitably changed. This
loop ends as soon as the user is satisfied with the proposal.

Preliminary study is the process of gathering and interpreting facts and using the information for
further study of the system. It is a problem-solving activity that requires
intensive communication between the system users and system developers, and it covers the various
feasibility studies. From these studies a rough picture of the system's activities is obtained, from
which decisions about the strategies to be followed for effective system study and analysis can
be taken.

EXISTING SYSTEM

In the existing system the transactions are done only in a single-user environment; in the
proposed system we plan to implement the project in a multiuser environment, performing all the
call center appraisal and promotional activities through the Call Center Management System
application.

PROBLEMS WITH EXISTING SYSTEM


Lack of security of data.
More manpower required.
Time consuming.
Consumes a large volume of paper work.
Needs manual calculations.
No direct role for the higher officials.
Damage to machines due to lack of attention.

To avoid all these limitations and to make the system work more accurately, it needs to be
implemented in a multiuser environment.
PROPOSED SYSTEM

The aim of the proposed system is to develop a system with improved facilities that can
overcome all the limitations of the existing system. The system provides proper security and
reduces the manual work. The aim is to replace the existing single-user
system with a multitier online system. The online system proposed here facilitates online access
to the different activities of the project, such as the administrative tools for storing
employee details and statistics about the employees. The project also performs the main operation
of the system: the identification of an employee for appraisal using the statistics stored in the
system.

The user module that can be accessed online is the registration of the employee, where the
user can register before visiting the page to view the performance status. The online
implementation is the main objective of the proposed project. The project also aims at providing
confidentiality and security through the registration process and password-protected access. The
project is implemented in two phases: the administrative features and the user features described
above.
The functionalities provided by the Call Center Management System are as follows:

Administrative functionalities:

 Employee Information Details
 Appraisal Identification Details
 Admin Registration Module

User Functionalities:

 Employee Registration
 Appraisal summary details.

ADVANTAGES OF THE PROPOSED SYSTEM

The system is simple to design and implement. It requires very low system
resources and will work in almost all configurations. It has the following features:
Security of data.
Ensures data accuracy.
Proper control by the higher officials.
Reduced damage to the machines.
Minimized manual data entry.
Minimum time needed for the various processes.
Greater efficiency.
Better service.
User-friendly and interactive.
Minimum time required.

FEASIBILITY STUDY

A feasibility study is made to see whether the project, on completion, will serve the purpose of the
organization for the amount of work, effort and time spent on it. The feasibility study lets
the developer foresee the future of the project and its usefulness. A feasibility study of a system
proposal is made according to its workability, its impact on the organization, its ability to meet
user needs and its effective use of resources. Thus, when a new application is proposed, it
normally goes through a feasibility study before it is approved for development.

This document provides the feasibility of the project being designed and lists the areas
that were considered very carefully during the feasibility study of this project, namely its technical,
economic and operational feasibility. The following are its features:

TECHNICAL FEASIBILITY

The system must be evaluated from the technical point of view first. The assessment of this
feasibility must be based on an outline design of the system requirements in terms of input,
output, programs and procedures. Having identified an outline system, the investigation must go
on to suggest the type of equipment, the method of developing the system, and the means of
running the system once it has been designed.

Technical issues raised during the investigation are:

Is the existing technology sufficient for the suggested system? Can the system be expanded
once developed?

The project should be developed such that the necessary functions and performance are achieved
within the constraints. The project is developed with the latest technology. Though the technology
may become obsolete after some period of time, the fact that newer versions of the same
software support older versions means the system may still be used. So there are minimal constraints
involved with this project. As the system has been developed using the AngularJS/Node.js stack
described below, the project is technically feasible for development.

ECONOMIC FEASIBILITY

The developing system must be justified by cost and benefit criteria, to ensure that effort is
concentrated on a project which will give the best return at the earliest. One of the factors which
affect the development of a new system is the cost it would require.

The following are some of the important financial questions asked during the preliminary
investigation:

 The cost of conducting a full system investigation.
 The cost of the hardware and software.
 The benefits in the form of reduced costs or fewer costly errors.

Since the system is developed as part of project work, there is no manual cost to be spent on the
proposed system. Also, all the resources are already available, which gives an indication that the
system is economically feasible to develop.

BEHAVIORAL FEASIBILITY
This includes the following questions:
 Is there sufficient support for the users?
 Will the proposed system cause harm?

The project would be beneficial because it satisfies the objectives when developed and installed.
All behavioral aspects were considered carefully, and we conclude that the project is behaviorally
feasible.
Hardware and Software Requirements:

Hardware:
Processor : Pentium I series / Dual core
HDD : 250 GB
Memory : 4 GB
I/O Interface : Basic keyboard / mouse

Software:
Scripting (Front End) : AngularJS
Web Server : Node.js
Server Framework : Express.js
Database (Back End) : MongoDB
Markup Language : XHTML / Bootstrap
Operating System : Windows 8 and 10
Software Review

AngularJS

AngularJS is a very powerful JavaScript framework used in Single Page Application (SPA)
projects. It extends the HTML DOM with additional attributes and makes it more responsive to user
actions. AngularJS is open source, completely free, and used by thousands of developers around
the world. It is licensed under the Apache License, version 2.0. AngularJS was originally
developed in 2009 by Misko Hevery and Adam Abrons and is now maintained by Google; AngularJS
itself remained in the 1.x version line, while its successor framework, Angular, is versioned separately.

The definition of AngularJS, as given by its official documentation, is as follows:

AngularJS is a structural framework for dynamic web applications. It lets you use HTML as
your template language and lets you extend HTML's syntax to express your application
components clearly and succinctly. Its data binding and dependency injection eliminate much
of the code you currently have to write. And it all happens within the browser, making it an
ideal partner with any server technology.

General Features

AngularJS is an efficient framework that can create Rich Internet Applications (RIA).
AngularJS provides developers options to write client-side applications using
JavaScript in a clean Model View Controller (MVC) way.
Applications written in AngularJS are cross-browser compliant; AngularJS automatically
handles JavaScript code suitable for each browser.
AngularJS is open source, completely free, and used by thousands of developers around
the world. It is licensed under the Apache License, version 2.0.
Overall, AngularJS is a framework to build large-scale, high-performance, easy-to-maintain
web applications.
Core Features

Data-binding: The automatic synchronization of data between model and view components.
Scope: Objects that refer to the model; they act as the glue between controller and view.
Controller: JavaScript functions bound to a particular scope.
Services: AngularJS comes with several built-in services, such as $http to make
XMLHttpRequests. These are singleton objects, instantiated only once per app.
Filters: These select a subset of items from an array and return a new array.
Directives: Markers on DOM elements such as elements, attributes, CSS classes
and more. They can be used to create custom HTML tags that serve as new, custom
widgets. AngularJS has built-in directives such as ngBind, ngModel, etc.
Templates: The rendered view, combining information from the controller and model.
Templates can be a single file (such as index.html) or multiple views in one page using
partials.
Routing: The concept of switching views.
Model View Whatever: MVW is a design pattern for dividing an application into
different parts (Model, View and Controller), each with distinct responsibilities.
AngularJS does not implement MVC in the traditional sense, but rather something closer
to MVVM (Model-View-ViewModel); the AngularJS team humorously refers to it as
Model View Whatever.
Deep Linking: Deep linking allows the state of the application to be encoded in the URL so that
it can be bookmarked and the application restored from the URL to the same state.
Dependency Injection: AngularJS has a built-in dependency injection subsystem that
helps the developer create, understand and test applications easily.
Concepts

The parts of AngularJS described above (data binding, scope, controllers, services, filters,
directives, templates, routing and dependency injection) together form the framework's core concepts.

Advantages of AngularJS

The advantages of AngularJS are:

It provides the capability to create Single Page Applications in a very clean and
maintainable way.
It provides data-binding capability to HTML, thus giving the user a rich and responsive
experience.
AngularJS code is unit testable.
AngularJS uses dependency injection and makes use of separation of concerns.
AngularJS provides reusable components.
With AngularJS, developers can achieve more functionality with less code.
In AngularJS, views are pure HTML pages, and controllers written in JavaScript do the
business processing.
On top of everything, AngularJS applications can run on all major browsers and smartphones,
including Android and iOS based phones/tablets.

Disadvantages of AngularJS

Though AngularJS comes with a lot of merits, here are some points of concern:

Not secure: Being a JavaScript-only framework, applications written in AngularJS are not
inherently secure. Server-side authentication and authorization are a must to keep an
application secure.
Not degradable: If the user of your application disables JavaScript, then nothing but the
basic page will be visible.

AngularJS Directives

The AngularJS framework can be divided into three major parts:

ng-app: This directive defines and links an AngularJS application to HTML.
ng-model: This directive binds the values of AngularJS application data to HTML input
controls.
ng-bind: This directive binds AngularJS application data to HTML tags.

Example

Now let us write a simple example using the AngularJS library. Let us create an HTML file
myfirstexample.html as shown below:

<!doctype html>
<html>
<head>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.0-beta.17/angular.min.js"></script>
</head>
<body ng-app="myapp">
  <div ng-controller="HelloController">
    <h2>Welcome {{helloTo.title}} to the world of Tutorialspoint!</h2>
  </div>
  <script>
    angular.module("myapp", [])
      .controller("HelloController", function($scope) {
        $scope.helloTo = {};
        $scope.helloTo.title = "AngularJS";
      });
  </script>
</body>
</html>

Let us go through the above code in detail:
Include AngularJS
We include the AngularJS JavaScript file in the HTML page so that we can use it:

<head>
  <script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.2.15/angular.min.js"></script>
</head>

You can check the latest version of AngularJS on its official website.
Point to AngularJS app
Next, we need to tell AngularJS which part of the HTML page contains the app. You do this by
adding the ng-app attribute to the root HTML element of the AngularJS app. You can add
it either to the html element or to the body element, as shown below:

<body ng-app="myapp">
</body>

View
The view is this part:

<div ng-controller="HelloController">
  <h2>Welcome {{helloTo.title}} to the world of Tutorialspoint!</h2>
</div>

ng-controller tells AngularJS which controller to use with this view. helloTo.title tells AngularJS
to write the model value named helloTo.title into the HTML at this location.
Controller
The controller part is:
<script>
  angular.module("myapp", [])
    .controller("HelloController", function($scope) {
      $scope.helloTo = {};
      $scope.helloTo.title = "AngularJS";
    });
</script>

This code registers a controller function named HelloController in the angular module named
myapp. The controller function is registered with angular via the
angular.module(...).controller(...) function call. The $scope parameter is the model passed to
the controller function. The controller function adds a helloTo JavaScript object, and in that
object it adds a title field.

Execution

Save the above code as myfirstexample.html and open it in any browser. You will see the
web page built with AngularJS.

MVC Architecture

Model View Controller, or MVC as it is popularly called, is a software design pattern for
developing web applications. A Model View Controller pattern is made up of the following
three parts:

Model - The lowest level of the pattern, responsible for maintaining data.
View - Responsible for displaying all or a portion of the data to the user.
Controller - Software code that controls the interactions between the Model and the View.

MVC is popular because it isolates the application logic from the user interface layer and
supports separation of concerns. The controller receives all requests for the application and then
works with the model to prepare any data needed by the view. The view then uses the data
prepared by the controller to generate a final presentable response.

The Model

The model is responsible for managing application data. It responds to requests from the view
and to instructions from the controller to update itself.

The View

A presentation of data in a particular format, triggered by the controller's decision to present the
data. Views are script-based template systems such as JSP, ASP and PHP, and are very easy to
integrate with AJAX technology.

The Controller

The controller responds to user input and performs interactions on the data model objects. The
controller receives input, validates it, and then performs business operations that modify the state
of the data model.

AngularJS is an MVC-based framework. The example above shows how AngularJS applies the
MVC methodology.
Node.js
Node.js is an open source development platform for executing JavaScript code server-side.
Node is useful for developing applications that require a persistent connection from the
browser to the server and is often used for real-time applications such as chat, news feeds and
web push notifications.
Node.js uses the event-driven nature of JavaScript to support non-blocking operations in the
platform, a feature that enables its excellent efficiency. JavaScript is an event-driven language,
which means that you register code to specific events, and that code will be executed once the
event is emitted. This concept allows you to seamlessly execute asynchronous code without
blocking the rest of the program from running.
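
As a minimal sketch of this event-driven style, the snippet below uses Node's built-in events module; the event name and handler are invented for the illustration.

var EventEmitter = require('events').EventEmitter;
var emitter = new EventEmitter();

// Register code to a specific event...
emitter.on('greet', function(name) {
  console.log('Hello, ' + name);
});

// ...and that code runs once the event is emitted.
emitter.emit('greet', 'world');
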
When developing web server logic, you will probably notice a lot of your system resources are
wasted on blocking code. For instance, let's observe the following PHP database interactions:
$output = mysql_query('SELECT * FROM Users');
echo($output);
Our server will try querying the database, which will then perform the select statement and return
the result to the PHP code, which will eventually output the data as a response. The preceding
code blocks any other operation until it gets the result from the database. This means the process,
or more commonly the thread, will stay idle, consuming system resources while it waits for
other processes.
To solve this issue, many web platforms have implemented a thread pool system that usually
issues a single thread per connection. This kind of multithreading may seem intuitive at first, but
has some significant disadvantages.

Node modules
JavaScript has turned out to be a powerful language with some unique features that enable
efficient yet maintainable programming. Its closure pattern and event-driven behavior have
proven to be very helpful in real-life scenarios, but like all programming languages it isn't
perfect, and one of its major design flaws is the sharing of a single global namespace.
To understand the problem, we need to go back to JavaScript's browser origins. In the browser,
when you load a script into your web page, the engine injects its code into an address space
that is shared by all the other scripts. This means that when you assign a variable in one script,
you can accidentally overwrite another variable already defined in a previous script. While this
could work with a small code base, it can easily cause conflicts in larger applications, as errors
will be difficult to trace. This could have been a major threat to Node.js's evolution as a platform,
but luckily a solution was found in the CommonJS modules standard.
CommonJS modules
CommonJS is a project started in 2009 to standardize the way of working with JavaScript outside
the browser. The project has evolved since then to address a variety of JavaScript issues,
including the global namespace issue, which was solved through a simple specification of how to
write and include isolated JavaScript modules.
The CommonJS standards specify the following three key components when working with
modules:
• require(): This method is used to load the module into your code.
• exports: This object is contained in each module and allows you to expose pieces of your code
when the module is loaded.
• module: This object was originally used to provide metadata information about the module. It
also contains the pointer of an exports object as a property. However, the popular
implementation of the exports object as a standalone object literally changed the use case of the
module object.
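
As a small illustration of these three components, consider a hypothetical greeter module and a script that loads it; the file and function names are assumptions made for the example.

// greeter.js - exposes one function through the exports object.
exports.sayHello = function(name) {
  return 'Hello, ' + name;
};

// app.js - loads the module with require() and uses it.
var greeter = require('./greeter');
console.log(greeter.sayHello('CommonJS')); // prints "Hello, CommonJS"
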
Node.js core modules
Core modules are modules that were compiled into the Node binary. They come prebundled with
Node and are documented in great detail in its documentation. The core modules provide most of
the basic functionality of Node, including filesystem access, HTTP and HTTPS interfaces, and
much more. To load a core module, you just need to use the require method in your JavaScript
file. An example, using the fs core module to read the content of the environment hosts file,
would look like the following code snippet:

var fs = require('fs');

fs.readFile('/etc/hosts', 'utf8', function(err, data) {
  if (err) {
    return console.log(err);
  }
  console.log(data);
});

When you require the fs module, Node will find it in the core modules folder. You'll then be able
to use the fs.readFile() method to read the file's content and print it in the command-line output.
Developing Node.js web applications
Node.js is a platform that supports various types of applications, but the most popular kind is the
development of web applications. Node's style of coding depends on the community to extend
the platform through third-party modules; these modules are then built upon to create new
modules, and so on. Companies and single developers around the globe are participating in this
process by creating modules that wrap the basic Node APIs and deliver a better starting point for
application development.
There are many modules to support web application development, but none as popular as the
Connect module. The Connect module delivers a set of wrappers around the Node.js low-level
APIs to enable the development of rich web application frameworks. To understand what
Connect is all about, let's begin with a basic example of a Node web server. In your
working folder, create a file named server.js, which contains the following code snippet:
var http = require('http');

http.createServer(function(req, res) {
  res.writeHead(200, {
    'Content-Type': 'text/plain'
  });
  res.end('Hello World');
}).listen(3000);

console.log('Server running at http://localhost:3000/');

To start your web server, use your command-line tool and navigate to your working folder.
Then run the server.js file with the node CLI as follows:

$ node server

Now open http://localhost:3000 in your browser, and you'll see the Hello World response.
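
For comparison, a minimal Connect version of the same server might look like the following sketch (assuming the connect module has been installed from npm). Express, discussed next, builds on this middleware style.

var connect = require('connect');
var http = require('http');

var app = connect();

// A middleware function that answers every request.
app.use(function(req, res) {
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
});

http.createServer(app).listen(3000);
console.log('Server running at http://localhost:3000/');
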
Express.js
"Express is a fast, unopinionated minimalist web framework for Node.js" - official web
site: Expressjs.com

Express.js is a web application framework for Node.js. It provides various features that make
web application development fast and easy, which otherwise takes more time using only Node.js.

Express.js is based on the Node.js middleware module called Connect, which in turn uses the
http module. So any middleware based on Connect will also work with Express.js.

The Express framework is a small set of common web application features, kept to a minimum
in order to maintain the Node.js style. It is built on top of Connect and makes use of its
middleware architecture. Its features extend Connect to allow a variety of common web
application use cases, such as the inclusion of modular HTML template engines, extending the
response object to support various data format outputs, a routing system, and much more.

Advantages of Express.js

1. Makes Node.js web application development fast and easy.
2. Easy to configure and customize.
3. Allows you to define routes of your application based on HTTP methods and URLs.
4. Includes various middleware modules which you can use to perform additional tasks on
requests and responses.
5. Easy to integrate with different template engines like Jade, Vash, EJS, etc.
6. Allows you to define error-handling middleware.
7. Easy to serve static files and resources of your application.
8. Allows you to create a REST API server.
9. Easy to connect with databases such as MongoDB, Redis and MySQL.

Creating your first Express application


After creating your package.json file and installing your dependencies, you can now create your
first Express application by adding your already familiar server.js file with the following lines of
code:

var express = require('express');
var app = express();

app.use('/', function(req, res) {
  res.send('Hello World');
});

app.listen(3000);
console.log('Server running at http://localhost:3000/');

module.exports = app;
You should already recognize most of the code. The first two lines require the Express module
and create a new Express application object. Then we use the app.use() method to mount a
middleware function with a specific path, and the app.listen() method to tell the Express
application to listen on port 3000.
Notice how the module.exports object is used to return the application object. This will later
help us load and test our Express application. This new code should also be familiar because
it resembles the Connect example shown earlier; Express wraps the Connect module in several
ways. The app.use() method is used to mount a middleware function, which will respond to any
HTTP request made to the root path. Inside the middleware function, the res.send() method is
then used to send the response back. The res.send() method is basically an Express wrapper that
sets the Content-Type header according to the response body type and then sends the response
back using the Connect res.end() method.

Building an Express Web Application


When passing a buffer to the res.send() method, the Content-Type header will be set to
application/octet-stream; when passing a string, it will be set to text/html; and when passing an
object or an array, it will be set to application/json. To run your application, simply execute the
following command in your command-line tool:

$ node server

Congratulations! You have just created your first Express application. You can test it by visiting
http://localhost:3000 in your browser.
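
The sketch below illustrates the three cases with hypothetical routes; the paths and payloads are assumptions made for the example.

var express = require('express');
var app = express();

// Content-Type: application/octet-stream
app.get('/buffer', function(req, res) {
  res.send(Buffer.from('raw bytes'));
});

// Content-Type: text/html
app.get('/string', function(req, res) {
  res.send('<h2>Hello</h2>');
});

// Content-Type: application/json
app.get('/json', function(req, res) {
  res.send({ hello: 'world' });
});

app.listen(3000);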

The application, request, and response objects


Express presents three major objects that you'll frequently use. The application object is the
instance of an Express application you created in the first example and is usually used to
configure your application. The request object is a wrapper of Node's HTTP request object and is
used to extract information about the currently handled HTTP request. The response object is a
wrapper of Node's HTTP response object and is used to set the response data and headers.

The application object


The application object contains the following methods to help you configure your application:
• app.set(name, value): This is used to set environment variables that Express will use in its
configuration.
• app.get(name): This is used to get environment variables that Express is using in its
configuration.
• app.engine(ext, callback): This is used to define a given template engine to render certain file
types; for example, you can tell the EJS template engine to use HTML files as templates like this:
app.engine('html', require('ejs').renderFile).
• app.locals: This is used to send application-level variables to all rendered templates.
• app.use([path], callback): This is used to create an Express middleware to handle HTTP
requests sent to the server. Optionally, you'll be able to mount middleware to respond to certain
paths.
• app.VERB(path, [callback...], callback): This is used to define one or more middleware
functions to respond to HTTP requests made to a certain path in conjunction with the HTTP verb
declared. For instance, when you want to respond to requests that use the GET verb, you
can just assign the middleware using the app.get() method. For POST requests you'll use
app.post(), and so on.
• app.route(path).VERB([callback...], callback): This is used to define one or more middleware
functions to respond to HTTP requests made to a certain unified path in conjunction with multiple
HTTP verbs. For instance, when you want to respond to requests that use the GET and
POST verbs, you can assign the appropriate middleware functions using
app.route(path).get(callback).post(callback).
• app.param([name], callback): This is used to attach a certain functionality to any request made to
a path that includes a certain routing parameter. For instance, you can map logic to any request
that includes the userId parameter using app.param('userId', callback).
There are many more application methods and properties you can use, but these common
basic methods enable developers to extend Express in whatever way they find reasonable.
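
A brief sketch tying a few of these methods together; the /users path and the handler bodies are illustrative assumptions.

var express = require('express');
var app = express();

// Resolve the :userId routing parameter once for all verbs.
app.param('userId', function(req, res, next, id) {
  req.userId = id; // a real app would load the user here
  next();
});

// One unified path, two HTTP verbs.
app.route('/users/:userId')
  .get(function(req, res) {
    res.send('Fetching user ' + req.userId);
  })
  .post(function(req, res) {
    res.send('Updating user ' + req.userId);
  });

app.listen(3000);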

The request object


The request object also provides a handful of helping methods that contain the information you
need about the current HTTP request. The key properties and methods of the request object are
as follows:
• req.query: This is an object containing the parsed query-string parameters.
• req.params: This is an object containing the parsed routing parameters.
• req.body: This is an object used to retrieve the parsed request body. This
property is included in the bodyParser() middleware.
• req.param(name): This is used to retrieve the value of a request parameter.
Note that the parameter can be a query-string parameter, a routing parameter, or a property from
a JSON request body.
• req.path, req.host, and req.ip: These are used to retrieve the current request path, host name
and remote IP.
• req.cookies: This is used in conjunction with the cookieParser() middleware to
retrieve the cookies sent by the user agent.
The request object contains many more methods and properties, but these are the ones you'll
usually use in a common web application.
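
As a short sketch (the route and parameter names are assumptions), a single handler can read both routing and query-string parameters:

var express = require('express');
var app = express();

// GET /search/books?q=node responds with "Searching books for: node"
app.get('/search/:category', function(req, res) {
  var category = req.params.category; // routing parameter
  var term = req.query.q;             // query-string parameter
  res.send('Searching ' + category + ' for: ' + term);
});

app.listen(3000);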

The response object


The response object is frequently used when developing an Express application, because any
request sent to the server will be handled and responded to using the response object methods. It
has several key methods, which are as follows:
• res.status(code): This is used to set the response HTTP status code.
• res.set(field, [value]): This is used to set the response HTTP header.
• res.cookie(name, value, [options]): This is used to set a response cookie. The options argument
is used to pass an object defining common cookie configuration, such as the maxAge property.
• res.redirect([status], url): This is used to redirect the request to a given URL. Note that you can
add an HTTP status code to the response; when no status code is passed, it defaults to
302 Found.
• res.send([body|status], [body]): This is used for non-streaming responses. This method does a
lot of background work, such as setting the Content-Type and Content-Length headers, and
responding with the proper cache headers.
• res.json([status|body], [body]): This is identical to the res.send() method when sending an object
or array. Most of the time it is used as syntactic sugar, but sometimes you may need to use
it to force a JSON response for non-objects, such as null or undefined.
• res.render(view, [locals], callback): This is used to render a view and send an HTML response.
The response object also contains many more methods and properties to handle different
response scenarios.
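
A short sketch of these helpers chained together; the path, header and payload are assumptions made for the example.

var express = require('express');
var app = express();

// Chain status, header and JSON body helpers.
app.get('/status', function(req, res) {
  res.status(200)
     .set('X-Powered-By', 'CallCenterMS')
     .json({ ok: true });
});

// Redirect with the default 302 Found status.
app.get('/old-path', function(req, res) {
  res.redirect('/status');
});

app.listen(3000);
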
MongoDB

Derived from the word humongous, MongoDB was able to support complex data storage while
maintaining the high-performance approach of other NoSQL stores. The community cheerfully
adopted this new paradigm, making MongoDB one of the fastest-growing databases in the world.
With more than 150 contributors and over 10,000 commits, it also became one of the most popular
open source projects.

Key features of MongoDB

 The BSON format


One of the greatest features of MongoDB is its JSON-like storage format, named BSON.
Standing for Binary JSON, the BSON format is a binary-encoded serialization of JSON-like
documents, and it is designed to be more efficient in size and speed, allowing MongoDB's high
read/write throughput.

Like JSON, BSON documents are a simple data structure representation of objects and arrays in
a key-value format. A document consists of a list of elements, each with a string-typed field name
and a typed field value. These documents support all of the JSON-specific data types along with
other data types, such as the Date type.
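
As an illustrative sketch of such a document in mongo shell notation (all field values here are made up):

{
  "_id": ObjectId("54c0ffee54c0ffee54c0ffee"),
  "name": "Asha",
  "designation": "Customer Care Executive",
  "joined": ISODate("2015-06-01T00:00:00Z"),
  "skills": ["voice", "email"]
}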

 MongoDB ad hoc queries


One of the other MongoDB design goals was to expand the abilities of ordinary key-value stores.
The main issue with common key-value stores is their limited query capability, which usually
means your data is only queryable using the key field, while more complex queries are mostly
predefined. To solve this issue, MongoDB drew its inspiration from the dynamic query
languages of relational databases.

 MongoDB indexing
Indexes are a unique data structure that enables the database engine to efficiently resolve queries.
Without one, when a query is sent to the database, the engine has to scan through the entire
collection of documents to find those that match the query statement. In this way, the database
engine processes a large amount of unnecessary data, resulting in poor performance.

To speed up the scan, the database engine can use a predefined index, which maps document
fields and can tell the engine which documents are compatible with this query statement. To
understand how indexes work, let's say we want to retrieve all the posts that have more than 10
comments. For instance, if our document is defined as follows:

{
  "_id": ObjectId("52d02240e4b01d67d71ad577"),
  "title": "First Blog Post",
  "comments": [
  ],
  "commentsCount": 12
}

then a MongoDB query that requests documents with more than 10 comments would be as follows:

db.posts.find({ commentsCount: { $gt: 10 } });
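
To let the engine resolve this query without a full collection scan, an index on commentsCount can be created from the mongo shell, roughly as follows:

// Build an ascending index on the queried field.
db.posts.createIndex({ commentsCount: 1 });
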
 MongoDB replica set
To provide data redundancy and improved availability, MongoDB uses an architecture called a
replica set. Replication of databases helps protect your data, allows recovery from hardware
failure, and increases read capacity. A replica set is a set of MongoDB services that host the
same dataset. One service is used as the primary and the other services are called secondaries.
All of the set instances support read operations, but only the primary instance is in charge of
write operations. When a write operation occurs, the primary informs the secondaries about the
change and makes sure they've applied it to their datasets.
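
A minimal replica-set initialization from the mongo shell might look like the sketch below; the set name and host names are assumptions.

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "db1.example.net:27017" },
    { _id: 1, host: "db2.example.net:27017" },
    { _id: 2, host: "db3.example.net:27017" }
  ]
});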

 MongoDB sharding
Scaling is a common problem with a growing web application. The approaches to solving this
issue can be divided into two groups: vertical scaling and horizontal scaling.
MongoDB supports horizontal scaling, which it refers to as sharding. Sharding is the process of
splitting the data between different machines, or shards. Each shard holds a portion of the data
and functions as a separate database. The collection of several shards together is what forms a
single logical database. Operations are performed through services called query routers, which
ask the configuration servers how to delegate each operation to the right shard.
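
From the mongo shell, sharding is enabled roughly as in the sketch below; the database, collection and shard-key names are assumptions.

// Enable sharding for a database, then shard one collection
// on a chosen shard key.
sh.enableSharding("callcenter");
sh.shardCollection("callcenter.calls", { employeeId: 1 });
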
SYSTEM DESIGN
INTRODUCTION

Design is the first step in the development phase for any engineered product or system. Design
is a creative process, and a good design is the key to an effective system. The term "design" is
defined as "the process of applying various techniques and principles for the purpose of defining
a device, a process or a system in sufficient detail to permit its physical realization". Software
design sits at the technical kernel of the software engineering process and is applied regardless
of the development paradigm used. The system design develops the architectural detail required
to build a system or product. As in the case of any systematic approach, this software too has
undergone the best possible design phase, fine-tuning all efficiency, performance and accuracy
levels. The design phase is a transition from a user-oriented document to a document for the
programmers or database personnel. System design goes through two phases of development:
logical and physical design.

LOGICAL DESIGN:
Logical design traces the logical flow of a system and defines the boundaries of the system. It
includes the following steps:

Reviews the current physical system - its data flows, file content, volumes, frequencies, etc.
Prepares output specifications - that is, determines the format, content and frequency of reports.
Prepares input specifications - format, content and most of the input functions.
Prepares edit, security and control specifications.
Specifies the implementation plan.
Prepares a logical design walkthrough of the information flow, output, input, controls and
implementation plan.
Reviews benefits, costs, target dates and system constraints.

PHYSICAL DESIGN:
Physical design produces the working system by defining the design specifications that tell the
programmers exactly what the candidate system must do. It includes the following steps:
Design the physical system.
Specify input and output media.
Design the database and specify backup procedures.
Design the physical information flow through the system and a physical design walkthrough.
Plan the system implementation.
Prepare a conversion schedule and target date.
Determine training procedures, courses and timetable.
Devise a test and implementation plan and specify any new hardware/software.
Update benefits, costs, conversion date and system constraints.

Design/specification activities:
Concept formulation.
Problem understanding.
High-level requirements proposals.
Feasibility study.
Requirements engineering.
Architectural design.

MODULE DESIGN

Admin Module
In this module the administrator logs in using the admin login; during login, the login name and
password are verified against those stored in the database. A sketch of such a check follows.
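
A hedged sketch of such a login check with Express and Mongoose; the collection, field and route names are assumptions, and a real system should store password hashes rather than plain text.

var express = require('express');
var mongoose = require('mongoose');
var app = express();

// Hypothetical admin collection; never store plain-text
// passwords in production.
var Admin = mongoose.model('Admin', new mongoose.Schema({
  login: String,
  password: String
}));

app.post('/admin/login', express.urlencoded({ extended: false }),
  function(req, res) {
    Admin.findOne({ login: req.body.login }, function(err, admin) {
      if (err || !admin || admin.password !== req.body.password) {
        return res.status(401).send('Invalid login or password');
      }
      res.send('Welcome, administrator');
    });
  });

app.listen(3000);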

INPUT DESIGN
The design of input focuses on controlling the amount of input required, controlling errors,
avoiding delay, avoiding extra steps and keeping the process simple. The input is designed in
such a way that it provides security and ease of use while retaining privacy. Input design
considered the following things:

What data should be given as input.
How the data should be arranged or coded.
The dialog to guide the operating personnel in providing input.
Methods for preparing input validations and steps to follow when errors occur.

OBJECTIVES
Input design is the process of converting a user-oriented description of the input into a
computer-based system. This design is important to avoid errors in the data input process and
to show the management the correct direction for getting correct information from the
computerized system.

It is achieved by creating user-friendly screens for data entry that can handle large volumes of
data. The goal of designing input is to make data entry easier and free from errors. The data entry
screen is designed in such a way that all the data manipulations can be performed. It also provides
record viewing facilities.

When the data is entered, it is checked for validity. Data can be entered with the help of
screens, and appropriate messages are provided as and when needed so that the user is never
left in a maze of confusion. Thus the objective of input design is to create an input layout that is
easy to follow.

OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the
information clearly. In output design it is determined how the information is to be displayed for
immediate need, and also the hard-copy output. It is the most important and direct source of
information for the user. Efficient and intelligent output design improves the system's
relationship with the user and helps in decision-making.

Designing computer output should proceed in an organized, well-thought-out manner; the right
output must be developed while ensuring that each output element is designed so that people will
find the system easy and effective to use. When analysts design computer output, they should:
Identify the specific output that is needed to meet the requirements.
Select methods for presenting information.
Create documents, reports, or other formats that contain information produced by the system.
DFD (Data Flow Diagrams)
Level 0, Level 1 and Level 2 data flow diagrams (figures).
DATABASE DESIGN

A database is an organized mechanism that has the capability of storing information through
which a user can retrieve stored information in an effective and efficient manner. The data is the
purpose of any database and must be protected.

The database design is a two level process. In the first step, user requirements are gathered
together and a database is designed which will meet these requirements as clearly as possible.
This step is called Information Level Design and it is taken independent of any individual
DBMS.

In the second step, this Information level design is transferred into a design for the specific
DBMS that will be used to implement the system in question. This step is called Physical Level
Design, concerned with the characteristics of the specific DBMS that will be used. A database
design runs parallel with the system design. The organization of the data in the database is aimed
to achieve the following two major objectives.
Data Integrity
Data independence

Normalization is the process of decomposing the attributes in an application, which results in a
set of tables with very simple structure. The purpose of normalization is to make tables as simple
as possible. Normalization is carried out in this system for the following reasons:

To structure the data so that there is no repetition of data, which helps in saving space.
To permit simple retrieval of data in response to query and report requests.
To simplify the maintenance of the data through updates, insertions and deletions.
To reduce the need to restructure or reorganize data when new application requirements arise.

RELATIONAL DATABASE MANAGEMENT SYSTEM (RDBMS):


A relational model represents the database as a collection of relations. Each relation resembles a
table of values or a file of records. In formal relational model terminology, a row is called a tuple,
a column header is called an attribute and the table is called a relation. A relational database
consists of a collection of tables, each of which is assigned a unique name. A row in a table
represents a set of related values.

RELATIONS, DOMAINS & ATTRIBUTES:


A table is a relation. The rows in a table are called tuples. A tuple is an ordered set of n elements.
Columns are referred to as attributes. Relationships have been set between every table in the
database; this ensures both Referential and Entity Relationship Integrity. A domain D is a set of
atomic values. A common method of specifying a domain is to specify a data type from which
the data values forming the domain are drawn. It is also useful to specify a name for the domain
to help in interpreting its values. Every value in a relation is atomic, that is, not decomposable.
RELATIONSHIPS:
Table relationships are established using keys. The two main keys of prime importance are the
Primary Key and the Foreign Key. Entity Integrity and Referential Integrity relationships can be
established with these keys. Entity Integrity enforces that no Primary Key can have null values.
Referential Integrity enforces that, for each distinct Foreign Key value, there must exist a
matching Primary Key value in the same domain. Other keys are Super Keys and Candidate
Keys. Relationships have been set between every table in the database, which ensures both
Referential and Entity Relationship Integrity.

NORMALIZATION:
As the name implies, normalization denotes putting things in a normal form. Via normalization
the application developer tries to achieve a sensible organization of data into proper tables and
columns, where names can be easily correlated to the data by the user. Normalization eliminates
repeating groups of data and thereby avoids data redundancy, which proves to be a great burden
on computer resources. This includes:
Normalizing the data.
Choosing proper names for the tables and columns.
Choosing the proper name for the data.

First Normal Form:


The First Normal Form states that the domain of an attribute must include only atomic values
and that the value of any attribute in a tuple must be a single value from the domain of that
attribute. In other words, 1NF disallows "relations within relations" or "relations as attribute
values within tuples". The only attribute values permitted by 1NF are single atomic, or
indivisible, values.

The first step is to put the data into First Normal Form. This can be done by moving data into
separate tables, where the data in each table is of a similar type, and giving each table a Primary
Key or Foreign Key as required by the project. In this step we form new relations for each
nonatomic attribute or nested relation, which eliminates repeating groups of data.

A relation is said to be in First Normal Form if and only if all of its attribute values are atomic.

Second Normal Form:


According to Second Normal Form, for relations where the primary key contains multiple
attributes, no nonkey attribute should be functionally dependent on only a part of the primary key.
In this step we decompose and set up a new relation for each partial key together with its
dependent attributes, making sure to keep a relation with the original primary key and any
attributes that are fully functionally dependent on it. This step takes out data that depends on
only a part of the key.

A relation is said to be in Second Normal Form if and only if it satisfies all the First Normal
Form conditions and every non-primary-key attribute of the relation is fully dependent on its
primary key alone.

Third Normal Form:

According to Third Normal Form, a relation should not have a nonkey attribute functionally
determined by another nonkey attribute or by a set of nonkey attributes; that is, there should be
no transitive dependency on the primary key.

In this step we decompose and set up relations that include the nonkey attributes that
functionally determine other nonkey attributes. This step is taken to get rid of anything that does
not depend entirely on the Primary Key.

A relation is said to be in Third Normal Form if and only if it is in Second Normal Form and,
moreover, the nonkey attributes of the relation do not depend on other nonkey attributes.
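
A small worked example under assumed attributes: suppose employee records were kept in one flat relation.

Employee(empid, name, deptno, deptname, deptlocation)

Here deptname and deptlocation are determined by deptno, a nonkey attribute, so they depend on the primary key empid only transitively. The Third Normal Form decomposition removes the transitive dependency:

Employee(empid, name, deptno)
Department(deptno, deptname, deptlocation)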

SYSTEM IMPLEMENTATION AND TESTING


Implementation is the stage of the project where the theoretical design is turned into a working
system. It can be considered the most crucial stage in achieving a successful new system, and in
giving the users confidence that the new system will work and will be effective and accurate. It
is primarily concerned with user training and documentation. Conversion usually takes place
at about the same time the user is being trained, or later. Implementation simply means converting
a new system design into operation: the process of turning a new or revised system design into an
operational one.
E-R Diagram (Food Delivery System) - entities: customer, employee, order, cart, food and
vehicle, with their attributes and relationships (figure).
SYSTEM TESTING
Software testing is the process of executing software in a controlled manner, in order to answer
the question: does the software behave as specified? Software testing is often used in association
with the terms verification and validation. Verification is the checking or testing of items,
including software, for conformance and consistency with an associated specification; software
testing is just one kind of verification, which also uses techniques such as reviews, analysis,
inspections and walkthroughs. Validation is the process of checking that what has been specified
is what the user actually wanted.

Validation: Are we doing the right job?

Verification: Are we doing the job right?

Software testing should not be confused with debugging. Debugging is the process of analyzing
and localizing bugs when software does not behave as expected. Although the identification of
some bugs will be obvious from playing with the software, a methodical approach to software
testing is a much more thorough means of identifying bugs. Debugging is therefore an activity
which supports testing, but cannot replace it. Other activities which are often associated
with software testing are static analysis and dynamic analysis. Static analysis investigates the
source code of software, looking for problems and gathering metrics without actually executing
the code. Dynamic analysis looks at the behavior of software while it is executing, to provide
information such as execution traces, timing profiles, and test coverage information.

Testing is a set of activities that can be planned in advance and conducted systematically. Testing
begins at the module level and works towards the integration of the entire computer-based
system. Nothing is complete without testing, as it is vital to the success of the system. Among
the testing objectives, several rules can serve as goals:
Testing is a process of executing a program with the intent of finding an error. A good test case
is one that has a high probability of finding an undiscovered error. A successful test is one that
uncovers an undiscovered error.

If testing is conducted successfully according to the objectives stated above, it will uncover
errors in the software. Testing also demonstrates that the software functions appear to be
working according to the specification, and that the performance requirements appear to have
been met.

There are three ways to test a program:

For correctness
For implementation efficiency
For computational complexity

Tests for correctness are supposed to verify that a program does exactly what it was designed to
do. This is much more difficult than it may at first appear, especially for large programs.
TEST PLAN
A test plan implies a series of desired courses of action to be followed in accomplishing various
testing methods. The test plan acts as a blueprint for the actions to be followed. The
software engineers create a computer program, its documentation and related data structures. The
software developers are always responsible for testing the individual units of the programs,
ensuring that each performs the function for which it was designed. There is an independent test
group (ITG), which removes the inherent problems associated with letting the builder test
the thing that has been built. The specific objectives of testing should be stated in measurable
terms, so that the mean time to failure, the cost to find and fix defects, the remaining defect
density or frequency of occurrence, and the test work-hours per regression test are all stated
within the test plan.

The levels of testing include:


Unit testing
Integration Testing
Data validation Testing
Output Testing

UNIT TESTING
Unit testing focuses verification effort on the smallest unit of software design - the software
component or module. Using the component-level design description as a guide, important
control paths are tested to uncover errors within the boundary of the module. The relative
complexity of the tests, and the errors they uncover, is limited by the constrained scope
established for unit testing. Unit testing is white-box oriented, and the steps can be conducted in
parallel for multiple components. The module interface is tested to ensure that information
properly flows into and out of the program unit under test. The local data structure is examined
to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's
execution. Boundary conditions are tested to ensure that all statements in a module have been
executed at least once. Finally, all error-handling paths are tested.

Tests of data flow across a module interface are required before any other test is initiated. If data
do not enter and exit properly, all other tests are moot. Selective testing of execution paths is an
essential task during the unit test. Good design dictates that error conditions be anticipated and
error-handling paths set up to reroute or cleanly terminate processing when an error does occur.
Boundary testing is the last task of the unit testing step; software often fails at its boundaries.

Unit testing was done in the Call Center Management System by treating each module as a
separate entity and testing each one of them with a wide spectrum of test inputs. Some flaws in
the internal logic of the modules were found and rectified.
INTEGRATION TESTING

Integration testing is a systematic technique for constructing the program structure while at the
same time conducting tests to uncover errors associated with interfacing. The objective is to take
unit-tested components and build a program structure that has been dictated by design. The entire
program is tested as a whole. Correction is difficult because isolation of causes is complicated by
the vast expanse of the entire program; once these errors are corrected, new ones appear and the
process continues in a seemingly endless loop.

After unit testing, all the modules of the Call Center Management System were integrated to test
for any inconsistencies in the interfaces. Moreover, differences in program structures were
removed and a unique program structure was evolved.

VALIDATION TESTING OR SYSTEM TESTING


This is the final step in testing. In this step the entire system was tested as a whole, with all
forms, code, modules and class modules. This form of testing is popularly known as Black Box
testing or system testing.

Black Box testing focuses on the functional requirements of the software. That is, it enables the
software engineer to derive sets of input conditions that will fully exercise all functional
requirements of a program.

Black Box testing attempts to find errors in the following categories: incorrect or missing
functions, interface errors, errors in data structures or external data access, performance errors,
and initialization and termination errors.

OUTPUT TESTING OR USER ACCEPTANCE TESTING

The system under consideration is tested for user acceptance; it should satisfy the firm's needs.
The software should be kept in touch with the prospective system users at the time of
development, making changes whenever required. This is done with respect to the following
points:
Input screen designs,
Output screen designs,
Online messages to guide the user, and the like.

The above testing is done using various kinds of test data. Preparation of test data plays a vital
role in system testing. After preparing the test data, the system under study is tested using that
test data. While testing the system with this test data, errors are again uncovered and corrected
using the above testing steps, and the corrections are also noted for future use.
TRAINING
Once the system is successfully developed, the next important step is to ensure that the
administrators are well trained to handle the system. This is because the success of a system
invariably depends on how it is operated and used. The implementation depends upon the
right people being at the right place at the right time. Education involves creating the right
atmosphere and motivating the user. The administrators are familiarized with the run procedures
of the system, working through the sequence of activities on an ongoing basis.

Implementation is the stage in the project where the theoretical design is turned into a working
system. By this, the users get the confidence that the system will work effectively. The system
can be implemented only after thorough testing.

The systems personnel check the feasibility of the system. The actual data were input to the
system and the working of the system was closely monitored. The master option was selected
from the main menu and the actual data were input through the corresponding input screens. The
data movement was studied and found to be correct. The queries option was then selected; this
contains various reports. The data needed by the utilities was input and each module was test
run. Satisfactory results were obtained, and reports related to these processes were also
successfully generated. Various input screen formats are listed in the appendix.

Implementation walkthroughs ensure that the completed system actually solves the original
problem. This walkthrough occurs just before the system goes into use, and it should include a
careful review of all manuals, training materials and system documentation. Again, users, the
analyst and the members of the computer services staff may attend this meeting.

Name: Signature

Reg No.:

Internal Guide External Guide
