
Practical-1

Aim: - Study and usage of OpenProj or similar software to draft a project plan.
Introduction: - Project managers can use OpenProj, a free task-tracking application, to create
effective plans. OpenProj delivers functionality that rivals the capabilities of commercial
software, which can save thousands of dollars in startup costs. Of course, saving money is
pointless if the required tasks can't be done, but this is not the case with OpenProj: the
application gives managers a full set of tools that are typically used to track projects. Useful
aids such as critical path analysis, resource tracking and task comments are all present in
OpenProj. The tool is ideal for simple project management but is capable of larger efforts as
well.

Fig no: -1

For the purposes of the example project plan, the following assumptions are made:
- The OpenProj software has already been installed and correctly configured on a workstation
with an attached printer
- The goal is to launch a new marketing effort in 6 months, called "Anganwadi"
- There are three full-time staff resources, including the manager
- Budget is not a consideration
- Schedule is the primary consideration
- The target implementation date is 6 months away but is not an absolute fixed date

Step 1: Create the project plan shell:


The first step is to use OpenProj to identify the basic parameters. The manager starts the
OpenProj application and presses the "Create Project" button. The file is named ("Anganwadi")
and a starting date is given. You can forward-schedule, which is the default: you enter the
required tasks and OpenProj calculates a completion date. If required, you can instead give a
finish date and have OpenProj work backwards for you. This alternate method works best if
there is a hard drop-dead date, such as a launch date. The project manager can also add initial
project notes. These might refer to locations of phase initiation documentation or other
optional information.

Fig no: -2

Step 2: Identify the project resources


Use the resources view to enter the particulars of all of the project team. The names and roles
of the team members can be specified. If required, you can enter hourly rates, overtime and
availability information for each team member. For this example, three 100% resources will be
entered.

Fig no: -3

Step 3: Identify the high-level tasks


For this example, the project is similar to an earlier effort that was completed successfully. That
work required tasks for initiation, research, contracting, development and launch. The project
manager enters these tasks into the Gantt view of OpenProj. The duration estimates are based

on the values previously seen for similar tasks. There is no ordering of tasks or dependencies.
The raw Gantt list is below.

Notice that the task "Application Development" is shown with a red duration bar while all other
tasks have blue bars. This task is identified as the project critical path. It is the longest running
task in the project. Since all tasks default to the start date of the project, the analysis of the
critical path is quite premature at this time. The project manager must now modify
dependencies.

Step 4: Identify the task dependencies for critical path analysis


During an effort, some tasks can't start until others have been completed. This is true for the
"Test launch" task. There is nothing to test until the development is completed. As well, the
"News Shower" launch is dependent on every other task. The project manager, in discussions
with team members or sponsors as appropriate, determines the task dependencies. The modified
Gantt view now shows a realistic schedule.
Notice that there is now a critical path, shown as a red bar, that is comprised of two tasks. The
other tasks are completed in parallel and don't affect the critical path. At this point, no resources
have been assigned to the tasks. No tasks have been split into components.

Fig no: -4

Step 5: Assign project resources to tasks


Each of the tasks can have one or more resources assigned. The column "Resource Names" on
the Gantt View allows direct data entry of this information. Enter the name of a resource in the
field. The default action is to have each named resource work 100% of their time on the task.
The field also supports the direct entry of multiple resources. Enter the resource names
separated by a semi-colon.
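For example, a task staffed by all three team members might read (using hypothetical resource names): Manager;Analyst;Developer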

Fig no: -5
Step 6: Task elaboration
An important feature of project management applications is the ability to allow the manager to
split tasks into smaller sub-tasks. This can allow better accuracy in schedule estimating. It also
allows team members to be specified in a just-in-time fashion for their assignments. The
example project has some opportunities for task elaboration.

Fig no: -6
Step 7: Evaluate the project plan
With all of the tasks entered, and sub-tasks specified, the plan has really evolved. It now shows
a lot of information which can be useful in project reporting. The first item is the critical path.
This is of the highest importance to the project manager and the organization. Reports showing
the tasks can be presented to company executives. An analysis of workloads can be done. Task
reports can be printed. In time, as completion percentages are entered for tasks, the project
manager can run status reports showing progress and schedule tracking.

Practical-2
Aim: - Study and usage of Openproj to track the progress of a project.
Finding the right project management solution for your team can be very hard. Finding an open
source project management solution may be even harder. That is the mission of OpenProject, a
solution that allows teams to collaborate throughout the project life cycle. Additionally, the
project aims to replace proprietary software like Microsoft Project Server or Jira.

The OpenProject objectives:


1. Establish and promote an active and open community of developers, users, and companies
for continuous development of the open source project.
2. Define and develop the project vision, the code of conduct, and principles of the application.
3. Create development policies and ensure their compliance.
4. Define and evolve the development and quality assurance processes.
5. Provide the source code to the public.
6. Provide and operate the OpenProject platform.

Mission of OpenProject
The mission of OpenProject can be quickly summarized: we want to build excellent open source
project collaboration software. And when we say open source, we mean it. We strive to make
OpenProject a place to participate, collaborate, and get involved—with an active, open-minded,
transparent, and innovative community.
Companies have finally become aware of the importance of project management software and
also the big advantages of open source. But why is it that project teams still tend to switch to
old-fashioned ways of creating project plans, task lists, or status reports with Excel, PowerPoint,
or Word—or having other expensive proprietary project management software in use? We want
to offer a real open source alternative for companies: free, secure, and easy to use.
Progress of the project is as below:-

Figures 2.1-2.4: Project Progress Screenshots

Maintenance will continue for the lifetime of this project.

Practical: - 3
Aim: - Preparation of Software Requirement Specification Document,
Design Documents and Testing Phase related documents for some problems.
SRS: - An SRS minimizes the time and effort required by developers to achieve desired goals
and also minimizes the development cost. A good SRS defines how an application will interact
with system hardware, other programs and human users in a wide variety of real-world
situations. Parameters such as operating speed, response time, availability, portability,
maintainability, footprint, security and speed of recovery from adverse events are evaluated.
Methods of defining an SRS are described by the IEEE (Institute of Electrical and Electronics
Engineers) specification 830-1998.
Qualities of SRS: -
• Correct
• Unambiguous
• Complete
• Consistent
• Ranked for importance and/or stability
• Verifiable
• Modifiable
• Traceable

The below diagram depicts the various types of requirements that are captured during SRS.

Figure 3.1:-SRS requirements


ABSTRACT
"Blog" is an abbreviated version of "weblog," which is a term used to describe websites that
maintain an ongoing chronicle of information. A blog features diary-type commentary and links
to articles on other websites, usually presented as a list of entries in reverse chronological order.

Fig – Blog Concept
What does Blog mean?
A frequent, chronological publication of personal thoughts and Web links. Blogs, or weblogs,
started out as a mix of what was happening in a person’s life and what was happening on the
Web, a kind of hybrid diary/news site.

Blogging Tools
These are the basic blogging tools we use and at marketingterms.com
Domain Name – Namecheap
WordPress Hosting – WP Engine
(Optional) Page Builder – Beaver Builder
(Optional) Page Builder Addons – Ultimate Addons
Blog versus Website
Many people are still confused about what distinguishes a blog from a website. Part of the problem
is that many businesses use both, integrating them into a single web presence. But there are two
features of a blog that set it apart from a traditional website.
1. Blogs are updated frequently. Whether it's a mommy blog in which a woman shares adventures
in parenting, a food blog sharing new recipes, or a business providing updates to its services,
blogs have new content added several times a week.
2. Blogs allow for reader engagement. Blogs are often included in social media because readers
can comment and have a discussion with the blogger and with others who read the blog, which
makes them social.
Why Is Blogging So Popular?
There are several reasons why entrepreneurs have turned to blogging.
1. Search engines love new content, and as a result, blogging is a great search engine optimization
(SEO) tool.
2. Blogging provides an easy way to keep your customers and clients up-to-date on what's going
on, let them know about new deals, and provide tips. The more a customer comes to your blog,
the more likely they are to spend money.
3. A blog allows you to build trust and rapport with your prospects. Not only can you show off
what you know, building your expertise and credibility, but because people can post comments
and interact with you, they can get to know you, and hopefully, will trust you enough to buy
from you.
4. Blogs can make money. Along with your product or service, blogs can generate income from
other options, such as advertising and affiliate products.

Project Introduction
The project is based on transactions and their management. The objectives of the project are to
maintain transaction management and concurrency control. Basically, the project is based on a
real-world problem related to banking and transactions. It also provides security features for
the database, such as allowing only an authenticated accountant or user to access the database
or perform transactions.
Since it is based on banking, it involves accountants and customers, who are the naïve users.
There are two types of GUI for the different users, and they provide different external views.
The system supports database sharing: two different users can work concurrently if they are
authorized users and have permission to access the database. Basically, it is built on database
sharing and the transaction management side of concurrency control.
The accountant end works as the admin in this project. To add a new user, an accountant enters
the account and personal details into the database; a key is then generated, along with a
password, which the naïve user needs to access the database. With that key and password, the
user can access their details and perform transactions on the database through a very
user-friendly GUI.
On the other end, the accountant has additional facilities, such as updating user details. If any
detail of a user needs updating, the accountant, not the naïve user, performs this task. The
accountant is also the person who can close the account of any customer who wants to close it.
After closing, the account details remain in the database for some period of time; a trigger
action then automatically and permanently deletes the data after that period.

Figure: 1 Prototype Model

Objectives of the project


The project is based on transactions and their management. The objectives of the project are to
maintain transaction management and concurrency control. Basically, the project is based on a
real-world problem related to banking and transactions. It also provides security features for
the database, such as allowing only an authenticated accountant or authenticated user to access
the database or perform transactions.

NETBANK objectives
• It ensures transaction management and concurrency control.
• It prevents database problems such as lost updates, dirty reads and unrepeatable reads.
• It provides concurrency control features.
• It provides security features such as authentication and authorization.
• It provides different types of views at the external level.
• The project provides two different sides for the two different types of users in the bank:
accountants and naïve users.
• It provides a very user-friendly GUI for the different types of users.
• It provides database sharing by using some networking concepts.
• It ensures that the database is shared only between authorized users.

Overall Description

1.) Product Perspective:


User interface: The application that will be developed will have a user-friendly and
menu-based interface.
2.) Hardware Interface:
Processor : Dual Core or Higher
RAM : 512 MB or higher
Other Peripheral Devices : CD-Drive, QWERTY Layout Keyboard
3.) Software Interface:
• Operating system: Windows XP, Vista, 7, 8, 8.1 and higher
• Platform: Java
• Database: SQL Server
• Language: Java
4.) Communication Interface:
The communication between the different parts of the system is important because they depend
on each other.
5.) Memory Constraints:
At least 512MB RAM and 4GB of the Hard disk space will be required for running the
application.
6.) Operations:
The system will have user-friendly interfaces. The system will maintain information
related to transaction operations performed by accountants or users on the databases. Users can
see their details and information related to transactions. There will be an additional backup for
any kind of damage or data loss.
7.) Site Adaptation Requirements:
The centralized database is used so that the system can communicate to retrieve the information.
8.) Constraints:
• There is a backup for system.
• GUI feature available.
We will use the SDLC (System Development Life Cycle) approach to make the project, as it is the
easiest and most commonly used method of making a project. We will try to make a project which
will provide a convenient and interesting way of studying any subject. The main objective
of the system design is to make the system user-friendly.

• The problem was analyzed and then a design was prepared. We then implemented this design
through coding, and then testing was done. Whenever errors were found, we tried our
best to remove them, and testing was repeated so that we could remove all the errors
from our project. This project will be maintained and upgraded from time to time so that we can
provide proper and up-to-date notes to all the users of this tutorial.

Figure: 2 (SDLC Cycle)

• Stages of Waterfall Model: -


• The SDLC is a process used by a systems analyst to develop an information system,
including requirements, validation, training, and user (stakeholder) ownership. Any SDLC
should result in a high-quality system that meets or exceeds customer expectations, reaches
completion within time and cost estimates, works effectively and efficiently in the current
and planned technology infrastructure, and is inexpensive to maintain and cost-effective to
enhance. SDLC is a methodology used to describe the process for building information
systems, intended to develop information systems in a very deliberate, structured and
methodical way, reiterating each stage of the life cycle.
REQUIREMENTS SPECIFICATION
Prior to the software development efforts in any type of system, it is very essential to
understand the requirements of the system and users. A complete specification of the
software is the first step in the analysis of the system. Requirements analysis provides the
designer with a representation of the functions and procedures that can be translated into data,
architecture and procedural design. The goal of requirement analysis is to find out how the
current system is working and if there are any areas where improvement is necessary and
possible.

INTERFACE REQUIREMENTS: -
1.) User Interface: The package must be user-friendly and robust. It must prompt the user with
proper message boxes to help them perform various actions and show how to proceed further.
The system must respond normally under any input conditions and display proper messages
instead of turning up faults and errors.
2.) Software Specification: Software is a set of programs, documents, procedures and routines
associated with a computer system. Software is an essential complement to hardware. It is
the computer programs that, when executed, operate the hardware.

SYSTEM DESIGN: -

System design is the process of developing specifications for a candidate system that meet the
criteria established in the system analysis. Major step in system design is the preparation of
the input forms and the output reports in a form applicable to the user. The main objective of
the system design is to make the system user friendly.

System design involves various stages as:


• Data Entry
• Data Correction
• Data Deletion
• Data Processing
• Sorting and Indexing
• Report Generation
System design is the creative act of invention, developing new inputs, a database, offline files,
procedures and output for processing business to meet an organization's objectives. System
design builds on information gathered during the system analysis.

DATABASE DESIGN: -
The overall objective in the development of database technology has been to treat data as
an organizational resource and as an integrated whole. A database management system allows
data to be protected and organized separately from other resources. A database is an integrated
collection of data. The most significant distinction is between data as seen by the programs and
data as stored on the direct-access storage devices. This is the difference between logical and
physical data.
The organization of data in the database aims to achieve three major objectives:
• Data Integration
• Data Integrity
• Data Independence
Methodology
The spiral model is similar to the incremental model, with more emphasis placed on risk
analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and
Evaluation. A software project repeatedly passes through these phases in iterations (called
Spirals in this model). In the baseline spiral, starting in the planning phase, requirements are
gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral. It is one of
the software development models, like Waterfall and V-Model.

Phases of Spiral Model: -


Planning Phase: Requirements are gathered during the planning phase. These include documents
like the 'BRS' ('Business Requirement Specifications') and the 'SRS' ('System
Requirement Specifications').
Risk Analysis: In the risk analysis phase, a process is undertaken to identify risk and
alternate solutions. A prototype is produced at the end of the risk analysis phase. If any risk
is found during the risk analysis then alternate solutions are suggested and implemented.
Engineering Phase: In this phase software is developed, along with testing at the end of
the phase. Hence in this phase the development and testing is done.

Evaluation phase: This phase allows the customer to evaluate the output of the project to
date before the project continues to the next spiral.

Architecture of Spiral model


Advantages of Spiral model:
• High amount of risk analysis hence, avoidance of Risk is enhanced.
• Good for large and mission-critical projects.
• Strong approval and documentation control.
• Additional functionality can be added at a later date.
• Software is produced early in the software life cycle.

Disadvantages of Spiral model:


• Can be a costly model to use.
• Risk analysis requires highly specific expertise.
• Project's success is highly dependent on the risk analysis phase.
• Doesn't work well for smaller projects.

When to use Spiral model:


• When costs and risk evaluation is important
• For medium to high-risk projects
• Long-term project commitment unwise because of potential changes to economic
priorities
• Users are unsure of their needs
• Requirements are complex
• New product line
• Significant changes are expected (research and exploration)

FEASIBILITY ANALYSIS
A feasibility study is an analysis of how successfully a project can be completed, accounting
for factors that affect it such as economic, technological, legal and scheduling factors. Project
managers use feasibility studies to determine potential positive and negative outcomes of a
project before investing a considerable amount of time and money into it.

TYPES OF FEASIBILITY STUDY


• TECHNICAL: -
Fundamentally, we are trying to answer the question "Can it actually be built?" To do this we
investigated the technologies to be used on the project. For each technology alternative that we
assessed, we identified its advantages and disadvantages. By studying the available resources
and requirements, we concluded that, at a minimum, the application should be made user-friendly.

• ECONOMICAL: -
Keeping all the needs and demands of the system within a minimum budget, we developed new
software which will not only lower the budget but also not require much cost to adopt. The new
system will save the money currently being invested in getting short codes; keeping all these
good qualities makes the system economically feasible.

• OPERATIONAL : -
The basic question that you are trying to answer is "Is it possible to maintain and support this
application once it is in production?" We developed a system which does not require any extra
technical skill or training. It is developed using environments which are quite familiar to
most of the people concerned with the system. The new system will prove easy to operate
because it is developed in such a way that it will prove user-friendly. Users will find it quite
familiar and easy to operate.

• BEHAVIOURAL : -
o Feasibility in terms of the behavior of its employees.
o It reflects the behavior of the employees of an organization.
o Main focus is on teamwork and harmony among employees, with no room
for discrimination and hatred.
Benefits of Conducting Feasibility Study: -
The importance of a feasibility study is based on the organizational desire to "get it right"
before committing resources, time, or budget. A feasibility study might uncover new ideas that
could completely change a project's scope. It's best to make these determinations in advance,
rather than to jump in and learn that the project just won't work. Conducting a feasibility study
is always beneficial to the project as it gives you and other stakeholders a clear picture of the
proposed project.
Below are some key benefits of conducting a feasibility study:
• Improves project teams' focus.
• Identifies new opportunities.

• Provides valuable information for a “go/no-go” decision.
• Narrows the business alternatives.
• Identifies a valid reason to undertake the project.
• Enhances the success rate by evaluating multiple parameters.
• Aids decision-making on the project.
• Identifies reasons not to proceed.

Features of the Proposed System: -


Earlier, the Blogger Concept used a manual system based on entries in registers. The
computerized integrated system will have the following advantages over the existing system:
• Handles large volumes of information.
• Handles complex data processing.
• Constant processing time.
• Meets computational demand.
• Instantaneous queries.
• Security features.

Schedule of documents: -

Sr. No.  Document                   Date
1.       Design document            1/2/18
2.       Coding document            1/3/18
3.       Testing document           8/3/18
4.       Risk Handling Document     15/3/18
5.       Maintenance document       22/3/18

Design Documents: -
A software design document (SDD) is a written description of a software product that a
software designer writes in order to give a software development team overall guidance on the
architecture of the software project. An SDD usually accompanies an architecture diagram with
pointers to detailed feature specifications of smaller pieces of the design. Practically, a design
document is required to coordinate a large team under a single vision.
A design document needs to be a stable reference, outlining all parts of the software and how
they will work. The document is expected to give a fairly complete description, while
maintaining a high-level view of the software.
There are two kinds of design documents: the HLDD (high-level design document) and the
LLDD (low-level design document).
It includes a description of the body and soul of the entire project, with all the details, and the
method by which each element will be implemented. It ensures that what is produced is what
you want to produce.

While preparing Design document:


• Describe not just the body, but the soul.
• Make it readable.
• Prioritize.
• Get into the details.
• Some things must be demonstrated.
• Not just "what" but "how."
• Provide alternatives.

The importance and need of different levels of DFD in software design: -
The main reason why the DFD technique is so important and so popular is probably because of the
fact that DFD is a very simple formalism – it is simple to understand and use. Starting with a
set of high-level functions that a system performs, a DFD model hierarchically represents
various sub-functions. In fact, any hierarchical model is simple to understand. Human mind is
such that it can easily understand any hierarchical model of a system – because in a hierarchical
model, starting with a very simple and abstract model of a system, different details of the system
are slowly introduced through different hierarchies. The data flow diagramming technique also
follows a very simple set of intuitive concepts and rules. DFD is an elegant modeling technique
that turns out to be useful not only to represent the results of structured analysis of a software
problem, but also for several other applications such as showing the flow of documents or items
in an organization.

Disadvantages of DFDs: -
• Modification to a data layout in DFDs may cause the entire layout to be changed. This is
because the specific changed data will bring different data to units that it accesses.
Therefore, evaluation of the possible effect of the modification must be considered
first.
• The number of units in a DFD in a large application is high. Therefore, maintenance is
harder, more costly and error-prone. This is because the ability to access the data is passed
explicitly from one component to the other. This is why changes are impractical to make
in DFDs, especially in large systems.
• DFDs are inappropriate for a large system because if changes are to be made to a
specific unit, there is a possibility that the whole DFD needs to be changed. This is because
the change may result in a different data flow into the next unit. Therefore, the whole
application or system may need modification too.

LEVEL 0: -
The first thing we must do is model the main outputs and sources of data in the scenario above.
Then we draw the system box and name the system. Next we identify the information that is
flowing to the system and from the system.

Level 0 of DFD of Blogger Concept

LEVEL 1: -
The next stage is to create the Level 1 Data Flow Diagram. This highlights the main functions
carried out by the system as follows:

Level 1 of DFD of Blogger Concept

LEVEL 2: -
We now create the Level 2 Data Flow Diagrams. First 'expand' the function boxes 1.1 and 1.2
so that we can fit the process boxes into them. Then position the data flows from Level 1 into
the correct process in Level 2 as follows:

Level 2 of DFD of Blogger Concept

Coding:- Good software development organizations normally require their programmers to
adhere to some well-defined and standard style of coding called coding standards. Most
software development organizations formulate their own coding standards that suit them most,
and require their engineers to follow these standards rigorously. The purpose of requiring all
engineers of an organization to adhere to a standard style of coding is the following:
• A coding standard gives a uniform appearance to the codes written by different engineers.
• It enhances code understanding.
• It encourages good programming practices. A coding standard lists several rules to be
followed during coding, such as the way variables are to be named, the way the code is to be
laid out, error return conventions, etc.
Important facts: -
• A version control application is required in this phase.
• Before beginning the actual coding, you should spend some time selecting a development tool
which will be suitable for your debugging, coding, modification and designing needs.
• Before actually writing code, some standards should be defined, as multiple developers are
going to use the same files for coding.
• During development, developers should write appropriate comments so that other developers
will come to know the logic behind the code.
• Last but most important: regular review meetings need to be conducted at this stage. They
help to identify prospective defects at an early stage and improve product and coding quality.

Coding standards and guidelines: - Good software development organizations usually
develop their own coding standards and guidelines depending on what best suits their
organization and the type of products they develop. The following are some representative
coding standards.

Rules for limiting the use of globals: - These rules list what types of data can be declared
global and what cannot.

Contents of the headers preceding codes for different modules: - The information
contained in the headers of different modules should be standard for an organization. The exact
format in which the header information is organized in the header can also be specified. The
following are some standard header data:
• Name of the module.
• Date on which the module was created.
• Author’s name.
• Modification history.
• Synopsis of the module.
• Different functions supported, along with their input/output parameters.
• Global variables accessed/modified by the module.
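As an illustrative sketch (the module name, author and dates below are invented, not taken from a real project), such a header in Java might look like this:

    /*
     * Module       : TransactionManager
     * Created      : 01/03/18
     * Author       : A. Sharma
     * Modification : 08/03/18 - added rollback support
     * Synopsis     : Coordinates debit and credit operations on accounts.
     * Functions    : debit(accountId, amount)  -> boolean, true on success
     *                credit(accountId, amount) -> boolean, true on success
     * Globals      : reads Config.DB_URL; modifies TransactionLog
     */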

Naming conventions for global variables, local variables, and constant
identifiers: - A possible naming convention can be that global variable names always start
with a capital letter, local variable names are made of small letters, and constant names are
always capital letters.
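A minimal Java sketch of this convention (the names and values are invented for illustration):

    public class InterestCalculator {
        // Global (class-level) variable name starts with a capital letter
        static double BaseRate = 0.04;

        // Constant name is written in all capital letters
        static final int MONTHS_PER_YEAR = 12;

        public static double monthlyInterest(double principal) {
            // Local variable names are made of small letters
            double rate = BaseRate / MONTHS_PER_YEAR;
            return principal * rate;
        }
    }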
The code should be well-documented: - As a rule of thumb, there must be at least one
comment line on average for every three source lines.

The length of any function should not exceed 10 source lines: - A function that is
very lengthy is usually very difficult to understand as it probably carries out many different
functions. For the same reason, lengthy functions are likely to have disproportionately larger
number of bugs.

Do not use an identifier for multiple purposes: - Programmers often use the same
identifier to denote several temporary entities. For example, programmers may use a temporary
loop variable both for computing and for storing the final result. The rationale usually given by
these programmers for such multiple uses of variables is memory efficiency, e.g. three variables
use up three memory locations, whereas the same variable used in three different ways uses just
one memory location. However, there are several things wrong with this approach and hence it
should be avoided. Some of the problems caused by the use of variables for multiple purposes
are as follows:
• Each variable should be given a descriptive name indicating its purpose. This is not possible
if an identifier is used for multiple purposes. Use of a variable for multiple purposes can lead
to confusion and make it difficult for somebody trying to read and understand the code.
• Use of variables for multiple purposes usually makes future enhancements more difficult.
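A short Java illustration of the rule (the method and variable names are invented):

    // Poor practice: the same identifier 'i' is used for two purposes
    static int averagePoor(int[] values) {
        int sum = 0;
        int i;
        for (i = 0; i < values.length; i++) {
            sum += values[i];
        }
        i = sum / values.length;   // 'i' now silently means "the average"
        return i;
    }

    // Better: one descriptive identifier per purpose
    static int averageGood(int[] values) {
        int sum = 0;
        for (int index = 0; index < values.length; index++) {
            sum += values[index];
        }
        return sum / values.length;
    }

In the first version a maintainer must notice that 'i' changes meaning halfway through; in the second, each identifier states its purpose.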

Do not use goto statements: - Use of goto statements makes a program unstructured and
makes it very difficult to understand.

Code review: - Code review for a module is carried out after the module has been successfully
compiled and all the syntax errors have been eliminated. Code reviews are extremely cost-
effective strategies for reducing coding errors and producing high-quality code. Normally,
two types of reviews are carried out on the code of a module. These two code review
techniques are code inspection and code walk through.

Code Walk Throughs: - Code walk through is an informal code analysis technique. In this
technique, after a module has been coded, successfully compiled and all syntax errors have been
eliminated, a few members of the development team are given the code a few days before the
walk through meeting to read and understand it. Each member selects some test cases and
simulates execution of the code by hand (i.e. traces execution through each statement and
function execution). The main objectives of the walk through are to discover the algorithmic
and logical errors in the code. The members note down their findings to discuss them in a walk
through meeting where the coder of the module is present.
Even though a code walk through is an informal analysis technique, several guidelines have
evolved over the years for making this naïve but useful analysis technique more effective. Of
course, these guidelines are based on personal experience, common sense, and several
subjective factors. Therefore, these guidelines should be considered as examples rather than
accepted as rules to be applied dogmatically. Some of these guidelines are the following.
• The team performing the code walk through should be neither too big nor too small. Ideally,
it should consist of three to seven members.
• Discussion should focus on discovery of errors and not on how to fix the discovered errors.
• In order to foster cooperation and to avoid the feeling among engineers that they are being
evaluated in the code walk through meeting, managers should not attend the walk through
meetings.
Code Inspection: - In contrast to code walk through, the aim of code inspection is to
discover some common types of errors caused due to oversight and improper programming. In

other words, during code inspection the code is examined for the presence of certain kinds of
errors, in contrast to the hand simulation of code execution done in code walk throughs. For
instance, consider the classical error of writing a procedure that modifies a formal parameter
while the calling routine calls that procedure with a constant actual parameter. It is more likely
that such an error will be discovered by looking for these kinds of mistakes in the code, rather
than by simply hand simulating execution of the procedure. In addition to the commonly made
errors, adherence to coding standards is also checked during code inspection. Good software
development companies collect statistics regarding different types of errors commonly
committed by their engineers and identify the type of errors most frequently committed. Such
a list of commonly committed errors can be used during code inspection to look out for possible
errors.
Following is a list of some classical programming errors which can be checked during code
inspection:
• Use of uninitialized variables.
• Jumps into loops.
• Non-terminating loops.
• Incompatible assignments.
• Array indices out of bounds.
• Improper storage allocation and deallocation.
• Mismatches between actual and formal parameters in procedure calls.
• Use of incorrect logical operators or incorrect precedence among operators.
• Improper modification of loop variables.
• Comparison of floating point variables for equality, etc.
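A few of these classical errors, sketched in Java for illustration (the method and variable names are invented; an inspector would flag each commented line):

    static void inspectionTargets(double[] readings) {
        int i = 0;
        double total = 0;
        // Non-terminating loop / array index out of bounds: if no element
        // is exactly 0, 'i' runs past the end of the array
        while (readings[i] != 0) {
            total += readings[i];
            i++;
        }
        double expected = 0.3;
        // Comparison of floating point variables for equality rarely
        // behaves as intended because of rounding error
        if (total == expected) {
            System.out.println("match");
        }
    }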

Practical 4
Aim: - Preparation of the Software Configuration Management and Risk
Management related documents.
Software Configuration Management
SCM or Software Configuration Management is a Project Function (as defined in the SPMP) with
the goal to make technical and managerial activities more effective. Software configuration
management (SCM) is a software engineering discipline consisting of standard processes and
techniques often used by organizations to manage the changes introduced to its software products.
SCM helps in identifying individual elements and configurations, tracking changes, and version
selection, control, and baselining.

SCM is also known as software control management. SCM aims to control changes introduced to
large complex software systems through reliable version selection and version control. The SCM
system has the following advantages:
• Reduced redundant work.
• Effective management of simultaneous updates.
• Avoids configuration-related problems.
• Facilitates team coordination.
• Helps in building management; managing tools used in builds.
• Defect tracking: It ensures that every defect has traceability back to its source.
Benefits of Software Configuration Management
SCM provides significant benefits to all projects regardless of size, scope, and complexity.
Some of the most common benefits experienced by project teams applying the SCM disciplines
described in this guide are possible because the SCM system:
• Organization
Because Configuration Management is the framework for larger information
management programs, it should go without saying that it is critical for the broader
management and organization of information as a whole. With a well-ordered system in
place, a good IT worker should be able to see all of the past system implementations of
the business, and can better address future needs and changes to keep the system up to
date and running smoothly.
• Reliability
Nothing is worse than an unreliable system that is constantly down and needing repairs
because a company's configuration management team is lacking in organization and
proactiveness. If the system is used correctly, it should run like the well-oiled machine that
it is, ensuring that every department in the company can get their jobs done properly.
Increased stability and efficiency of the system will trickle down into every division of a
company, including customer relations, as the ease and speed with which their problems can
be solved and information can be accessed will surely make a positive impact.
• Cost Reduction and Risks
Like anything else in business, a lack of maintenance and attention to details can have
greater risks and cost down the line, as opposed to proactive action before a problem
arises. Configuration Management saves money with the constant system maintenance,
record keeping, and checks and balances to prevent repetition and mistakes. The
organized record keeping of the system itself saves time for the IT department and reduces
wasted money for the company with less money being spent on fixing recurring or
nonsensical issues.

SCM Process
The software configuration management process defines a series of tasks that have four primary
objectives:
1. To identify all the items that collectively define the software configuration.
2. To manage changes to one or more of these items.
3. To facilitate the construction of different versions of an application.
4. To ensure that software quality is maintained as the configuration evolves over time.

Figure 4.1:- SCM process


Risk Management
1. A risk is any anticipated unfavourable event or circumstance that can occur while a project is being
developed.
2. The project manager needs to identify the different types of risk in advance so that the project
deadlines don't get extended.
3. There are three main activities of risk management.
Risk identification
1. The project manager needs to anticipate the risks in a project as early as possible so that the
impact of the risks can be minimised by using effective risk management plans.
2. Following are the main types of risk that need to be identified.
3. Project risks: - these include
• Resource-related issues
• Schedule problems
• Budgetary issues
• Staffing problems
• Customer-related issues
4. Technical risks: - these include
• Potential design problems
• Implementation and interfacing issues
• Incomplete specification
• Changing specification and technical uncertainty
• Ambiguous specification
• Testing and maintenance problems
5. Business risks: -
• Market trend changes
• Developing a product similar to existing applications
• Personal commitments
6. In order to successfully identify and foresee the different types of risk that might affect a
project, it is a good idea to have a company disaster list.
7. The company disaster list contains all the possible risks or events that can occur in similar projects.
Risk assessment: -
1. The main objective of risk assessment is to rank the risks in terms of their damage-causing potential.
2. The priority of each risk can be computed using the equation p = r * s, where p is the priority with
which the risk must be handled, r is the probability of the risk becoming true and s is the severity of
damage caused if the risk becomes true.
3. If all the identified risks are prioritised, then the most likely and most damaging risks can be handled
first and the others later on.
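As a worked example (with invented figures): if a schedule-slippage risk has probability r = 0.6 and severity s = 5 on a ten-point damage scale, its priority is p = 0.6 * 5 = 3.0, while a data-loss risk with r = 0.1 and s = 9 gets p = 0.9; the schedule risk would therefore be handled first.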
Risk containment: -
1. Risk containment includes planning the strategies to handle and face the most likely and damaging
risks first.
2. Following are the strategies that can be used in general:
a. Avoid the risk: - e.g. in case of issues in the design phase with reference to the specified
requirements, one can discuss with the customer to change the specifications and avoid the risk.
b. Transfer the risk: -
i. This includes purchasing insurance coverage.
ii. Getting the risky component developed by a third party.
Risk reduction: - leverage factor:
a) The project manager must consider the cost of handling the risk and the corresponding reduction
of the risk.
b) Risk leverage = (risk exposure before reduction - risk exposure after reduction) / cost of
reduction
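As a worked example (figures invented): if the risk exposure before reduction is Rs. 10,00,000, the exposure after reduction is Rs. 2,00,000, and the reduction costs Rs. 1,00,000, then risk leverage = (10,00,000 - 2,00,000) / 1,00,000 = 8. Every rupee spent on reduction removes eight rupees of exposure, so the reduction is well worth its cost; a leverage below 1 would argue against it.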
Practical: - 5
Aim: - Study and usage of any Design phase CASE tool.
CASE Tools: -
CASE stands for Computer Aided Software Engineering. It means, development and
maintenance of software projects with help of various automated software tools. CASE tools
are set of software application programs, which are used to automate the SDLC activities.
CASE tools are used by software project managers, analysts and engineers to develop software
system.
Reasons for using CASE tools:
• The primary reasons for using a CASE tool are:
– To increase productivity
– To help produce better quality software at lower cost
– To decrease the development time and cost
• Various tools are incorporated in CASE and are called CASE tools, which are used to
support different stages and milestones in a software development lifecycle.

Architecture Of CASE tools: -

Figure: - 5.1

• Layer 1 is the user interface, whose function is to help the user interact with the core of the
system. It provides a graphical user interface, using which interaction with the
system becomes easy.
• Layer 2 depicts tool management system (TMS) which constitutes multiple tools of
different category using which automation of the development process can be done. TMS
may include some tools to draw diagrams or to generate test cases.
• Layer 3 represents object management system (OMS) which represents the set of objects
generated by the users. Group of design notations, set of test cases (test suite) are treated
as the objects.
• Layer 4 represents a repository which stores the objects developed by the user. Layer 4 is
nothing but a database which stores automation files.
Components of CASE Tools: - CASE tools can be broadly divided into the following
parts based on their use at a particular SDLC stage:
• Central Repository - CASE tools require a central repository, which can serve as a source
of common, integrated and consistent information. The central repository is a central place of
storage where product specifications, requirement documents, related reports and diagrams, and
other useful information regarding management are stored. The central repository also serves
as a data dictionary.
• Upper CASE Tools - Upper CASE tools are used in the planning, analysis and design stages of
SDLC.
• Lower CASE Tools - Lower CASE tools are used in the implementation, testing and
maintenance stages.
• Integrated CASE Tools - Integrated CASE tools are helpful in all the stages of SDLC, from
requirement gathering to testing and documentation.

Figure: - 5.2

Why CASE Tools are developed?


• Main purpose of the CASE tools is to decrease the development time and cost and increase
the quality of software.
• CASE tools are developed for the following reasons:
– Quick installation
– Time saving by reducing coding and testing time
– Enriched graphical techniques and data flow
– Enhanced analysis and design development
– Creating and manipulating documentation
– Increased speed during system development
Types of CASE tools: - Major categories of CASE tools are:
– Diagram tools
– Project Management tools
– Documentation tools
– Web Development tools
– Quality Assurance tools
– Maintenance tools

Benefits of CASE tools


1. Project Management and control is improved: CASE tools can aid the project
management and control aspects of a development environment. Some CASE tools allow
for integration with industry-standard project management methods (such as PRINCE).
Others incorporate project management tools such as PERT charts and critical path analysis.
By its very nature, a CASE tool provides the vehicle for managing more effectively the
development activities of a project.
2. System Quality is improved: CASE tools promote standards within a development
environment. The use of graphical tools to specify the requirements of a system can also
help remove the ambiguities that often lead to poorly defined systems. Therefore, if used
correctly, a CASE tool can help improve the quality of the specification, the subsequent
design and the eventual working system.
3. Consistency checking is automated: Large amounts of information about a business area
and its requirement are gathered during the analysis phase of an information systems
development project. Using a manual system to record and cross reference this information
is both time-consuming and inefficient. One of the advantages of using CASE tool is that

all data definitions and other relevant information can be stored in a central repository that
can then be used to cross check the consistency of the different views being modelled.
4. Productivity is increased: One of the most obvious benefits of a CASE tool is that it may
increase the productivity of the analysis team. If used properly, the CASE tool will provide
a support environment enabling analysts to share information and resources, manage the
project effectively and produce supporting documentation quickly.
5. The maintenance effort is better supported: It has been argued that CASE tools help
reduce the maintenance effort required to support the system once it is operational. CASE
tools can be used to provide comprehensive and up-to-date documentation – this is
obviously a critical requirement for any maintenance effort. CASE tools should result in
better systems being developed in the first place.

Problems associated with CASE tools


1. Need for organization-wide commitment: To be used effectively, CASE tools require
the commitment of the organisation. Every member of the development team must adhere
to the standards, rules and procedures laid down by the CASE tool environment.
2. Unrealistic expectations: CASE tools cannot replace experienced business/systems analysts
and designers. They cannot automatically design a system nor can they ensure that the
business requirements are met. Analysts and designers still need to understand the business
environment and identify the system requirements. CASE tools can only support the
analytical skills of the developers, not replace them.
3. Long learning curve: CASE is technical software. It will take time for the development
team to get used to it and use it effectively for development work.
4. Costs of CASE tools: CASE tools are complicated software packages and are, therefore,
expensive to buy. In addition to the initial costs, there are many ‘soft’ costs that have to be
considered. These ‘soft costs’ include integration of the new tool, customising the new tool,
initial and on-going training of staff, hardware costs and consultancy provided by the CASE
tool vendor.
Practical: - 6
Aim: -To perform unit testing and integration testing.

Unit Testing: - Unit testing is a testing technique in which individual modules are tested by the
developer to determine whether there are any issues. It is concerned with the functional
correctness of the standalone modules. The main aim is to isolate each unit of the system to
identify, analyze and fix defects.
Unit Testing Life cycle:-

Figure: - 6.1

Advantages of unit testing:


• Defects are found at an early stage. Since unit testing is done by the dev team by testing
individual pieces of code before integration, it helps in fixing issues early in the source code
without affecting other source code.
• It helps maintain the code. Since it is done by individual developers, stress is put on
making the code less interdependent, which in turn reduces the chances of impacting other
sets of source code.
• It helps in reducing the cost of defect fixes since bugs are found early on in the
development cycle.
• It helps in simplifying the debugging process. Only latest changes made in the code need
to be debugged if a test case fails while doing unit testing.
Disadvantages:
• It's difficult to write good unit tests and the whole process may take a lot of time.
• A developer can make a mistake that will affect the whole system.
• Not all errors can be detected, since every module is tested separately and different
integration bugs may appear later.
• Testing will not catch every error in the program, because it cannot evaluate every
execution path in any but the most trivial programs. This problem is a superset of the
halting problem, which is undecidable.
• The same is true for unit testing. Additionally, unit testing by definition only tests the
functionality of the units themselves. Therefore, it will not catch integration errors or
broader system-level errors (such as functions performed across multiple units, or non-
functional test areas such as performance).

• Unit testing should be done in conjunction with other software testing activities, as they
can only show the presence or absence of particular errors; they cannot prove a complete
absence of errors.
• To guarantee correct behaviour for every execution path and every possible input, and
ensure the absence of errors, other techniques are required, namely the application of
formal methods to proving that a software component has no unexpected behaviour.
Unit Testing Techniques:
1. Black Box Testing - Using which the user interface, input and output are tested.
2. White Box Testing - Using which the behavior of each function is tested.
3. Gray Box Testing - Used to execute tests, risks and assessment methods.

Integration Testing: - It tests integration or interfaces between components, interactions with
different parts of the system such as an operating system, file system and hardware, or
interfaces between systems.
• After integrating two different components together, we do the integration testing.
As displayed in the image below, when two different modules 'Module A' and
'Module B' are integrated, the integration testing is done.

Fig 6.2 and Fig 6.3


• Integration testing is done by a specific integration tester or test team.
• Integration testing follows two approaches, known as the 'Top Down' approach and the
'Bottom Up' approach, as shown in the image below. Below are the integration testing techniques:
1. Big Bang integration testing: - In Big Bang integration testing all components or
modules are integrated simultaneously, after which everything is tested as a whole. As per
the below image all the modules from ‘Module 1’ to ‘Module 6’ are integrated
simultaneously then the Testing is carried out.

Fig 6.4
Advantage: Big Bang testing has the advantage that everything is finished before integration
testing starts.

Disadvantage: The major disadvantage is that in general it is time consuming and difficult
to trace the cause of failures because of this late integration.

2. Top-down integration testing: Testing takes place from top to bottom, following the
control flow or architectural structure (e.g. starting from the GUI or main menu).
Components or systems are substituted by stubs. Below is the diagram of ‘Top down
Approach”

Fig 6.5
Advantages of Top-Down approach:
• The tested product is very consistent because the integration testing is basically performed
in an environment that is almost similar to reality.
• Stubs can be written in less time because, compared to drivers, stubs are
simpler to author.

Disadvantages of Top-Down approach:


• Basic functionality is tested at the end of cycle
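A minimal Java sketch of a stub (all names are invented): the top-level module is exercised before the real lower-level module exists, with a stub returning canned answers in its place.

    // Interface the top-level module depends on
    interface InterestService {
        double rateFor(String accountType);
    }

    // Stub: a simplified stand-in for the unfinished lower-level module
    class InterestServiceStub implements InterestService {
        public double rateFor(String accountType) {
            return 0.05;   // fixed canned answer, no real lookup
        }
    }

    // Top-level module under test, wired to the stub instead of the real service
    class StatementGenerator {
        private final InterestService service;
        StatementGenerator(InterestService service) { this.service = service; }
        double yearlyInterest(String accountType, double balance) {
            return balance * service.rateFor(accountType);
        }
    }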
3. Bottom-up integration testing: Testing takes place from the bottom of the control flow
upwards. Components or systems are substituted by drivers. Below is the image of ‘Bottom
up approach’:

Fig 6.6
Advantage of Bottom-Up approach:
• In this approach development and testing can be done together so that the product or
application will be efficient and as per the customer specifications.

Disadvantages of Bottom-Up approach:

• Key interface defects are caught only at the end of the cycle.
• It is required to create test drivers for modules at all levels except the top control.
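Conversely, a driver (sketched below in Java with invented names) is a throwaway caller that exercises a finished low-level module before the modules above it exist:

    // Finished low-level module
    class InterestCalculatorModule {
        double yearlyInterest(double balance, double rate) {
            return balance * rate;
        }
    }

    // Driver: stands in for the missing upper-level modules
    public class InterestCalculatorDriver {
        public static void main(String[] args) {
            InterestCalculatorModule calc = new InterestCalculatorModule();
            System.out.println("1000 at 5% -> " + calc.yearlyInterest(1000, 0.05));
            System.out.println("0 balance  -> " + calc.yearlyInterest(0, 0.05));
        }
    }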

System Testing: - System Testing (ST) is a black box testing technique performed to
evaluate the complete system's compliance with the specified requirements. In
system testing, the functionalities of the system are tested from an end-to-end perspective.
System testing is usually carried out by a team that is independent of the development team in
order to measure the quality of the system without bias. It includes both functional and
non-functional testing.

Fig 6.7
Practical No. 07
Aim: - To perform various white box and black box testing techniques.
White Box Testing: -
White Box Testing is the testing of a software solution's internal coding and infrastructure. It
focuses primarily on strengthening security, the flow of inputs and outputs through the
application, and improving design and usability. White box testing is also known as Clear Box
testing, Open Box testing, Structural testing, Transparent Box testing, Code-Based testing, and
Glass Box testing.

It is one of two parts of the "box testing" approach of software testing. Its counter-part, black
box testing, involves testing from an external or end-user type perspective. On the other hand,
White box testing is based on the inner workings of an application and revolves around internal
testing.
The term "white box" was used because of the see-through box concept. The clear box or white
box name symbolizes the ability to see through the software's outer shell (or "box") into its
inner workings. Likewise, the "black box" in "Black Box Testing" symbolizes not being able
to see the inner workings of the software so that only the end-user experience can be tested.

What do you verify in White Box Testing


White box testing involves the testing of the software code for the following:
• Internal security holes
• Broken or poorly structured paths in the coding processes
• The flow of specific inputs through the code
• Expected output
• The functionality of conditional loops
• Testing of each statement, object and function on an individual basis
The testing can be done at system, integration and unit levels of software development. One of
the basic goals of white box testing is to verify a working flow for an application. It involves
testing a series of predefined inputs against expected or desired outputs so that when a specific
input does not result in the expected output, you have encountered a bug.

How do you perform White Box Testing?


To give you a simplified explanation of white box testing, we have divided it into two basic
steps. This is what testers do when testing an application using the white box testing technique:
Step 1) Understand the source code
The first thing a tester will often do is learn and understand the source code of the application.
Since white box testing involves the testing of the inner workings of an application, the tester
must be very knowledgeable in the programming languages used in the applications they are
testing. Also, the testing person must be highly aware of secure coding practices. Security is
often one of the primary objectives of testing software. The tester should be able to find
security issues and prevent attacks from hackers and naive users who might inject malicious code into
the application either knowingly or unknowingly.
Step 2) Create test cases and execute
The second basic step in white box testing involves testing the application's source code for
proper flow and structure. One way is by writing more code to test the application's source
code. The tester develops small tests for each process or series of processes in the application.
This method requires intimate knowledge of the code and is often done
by the developer. Other methods include manual testing, trial-and-error testing and the use of
testing tools, as explained further on.

White Box Testing Techniques


The 3 main White Box Testing Techniques are:
• Statement Coverage
• Branch Coverage
• Path Coverage
Let’s understand these techniques one by one with a simple example.
• Statement coverage:
In a programming language, a statement is nothing but a line of code or an instruction for the
computer to understand and act upon. A statement becomes an executable statement when it is
compiled and converted into object code, and it performs its action when the program is
running.
Hence "Statement Coverage", as the name itself suggests, is the method of validating whether
each and every line of the code is executed at least once.
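
A minimal sketch of statement coverage, using a hypothetical grade() function invented for illustration: a single input that takes the IF branch executes every statement once, and a tool such as coverage.py can report the percentage achieved.

def grade(score):
    result = "fail"
    if score >= 40:
        result = "pass"
    return result

# One call with score >= 40 executes all four statements: 100% statement
# coverage, even though the score < 40 behaviour was never exercised.
assert grade(75) == "pass"
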
• Branch Coverage:
“Branch” in a programming language is like the “IF statements”. An IF statement has two
branches: True and False. So in Branch coverage (also called Decision coverage), we validate
whether each branch is executed at least once. In case of an “IF statement”, there will be two
test conditions: One to validate the true branch and, Other to validate the false branch.
Hence, in theory, Branch Coverage is a testing method which, when executed, ensures that
each and every branch from each decision point is executed at least once.
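
A minimal sketch of branch coverage for the same hypothetical grade() idea: the IF statement has a True branch and a False branch, so two test conditions are needed, one for each.

def grade(score):
    if score >= 40:
        return "pass"
    return "fail"

assert grade(75) == "pass"   # validates the True branch
assert grade(20) == "fail"   # validates the False branch
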
• Path Coverage:
Path coverage tests all the paths of the program. This is a comprehensive technique which
ensures that all the paths of the program are traversed at least once. Path Coverage is even more
powerful than Branch coverage. This technique is useful for testing complex programs.
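
A minimal sketch of path coverage, using a hypothetical shipping_cost() function with two independent decisions: branch coverage is satisfied by just two tests, but there are 2 x 2 = 4 distinct paths through the code, so path coverage requires four.

def shipping_cost(weight, express):
    cost = 5
    if weight > 10:
        cost += 10
    if express:
        cost *= 2
    return cost

assert shipping_cost(5, False) == 5     # path: light, standard
assert shipping_cost(5, True) == 10     # path: light, express
assert shipping_cost(15, False) == 15   # path: heavy, standard
assert shipping_cost(15, True) == 30    # path: heavy, express
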
Types of White Box Testing
White box testing encompasses several testing types used to evaluate the usability of an
application, block of code or specific software package. These are listed below:

Unit Testing: It is often the first type of testing done on an application. Unit testing is
performed on each unit or block of code as it is developed. Unit Testing is essentially done by
the programmer. As a software developer, you develop a few lines of code, a single function or
an object and test it to make sure it works before continuing. Unit testing helps identify the
majority of bugs early in the software development lifecycle. Bugs identified at this stage are
cheaper and easier to fix.
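
A minimal unit-test sketch using Python's unittest module, assuming a hypothetical add_item() function that a developer has just written and wants to verify in isolation before continuing.

import unittest

def add_item(cart, item):
    """Unit under test: appends an item and returns the new cart size."""
    cart.append(item)
    return len(cart)

class AddItemTest(unittest.TestCase):
    def test_adding_an_item_grows_the_cart(self):
        cart = []
        self.assertEqual(add_item(cart, "pen"), 1)
        self.assertIn("pen", cart)

if __name__ == "__main__":
    unittest.main()
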
Testing for Memory Leaks: - Memory leaks are among the most common causes of slow-running
applications. A QA specialist who is experienced at detecting memory leaks is essential in cases
where you have a slow-running software application. Many tools are available to assist
developers/testers with memory leak testing, for example, Rational Purify for Windows applications.
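
Besides commercial tools, Python's built-in tracemalloc module can give a rough indication of a leak by comparing memory snapshots. The leaky() function below is a hypothetical example that keeps appending to a module-level list and never releases it.

import tracemalloc

_cache = []

def leaky():
    _cache.append("x" * 10_000)  # grows on every call and is never cleared

tracemalloc.start()
before = tracemalloc.take_snapshot()
for _ in range(1000):
    leaky()
after = tracemalloc.take_snapshot()

# Compare snapshots: a steadily growing allocation site suggests a leak.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
tracemalloc.stop()
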
Apart from the above, a few testing types are part of both black box and white box testing.
They are listed below:
White Box Penetration Testing: In this testing, the tester/developer has full knowledge of the
application's source code, detailed network information, the IP addresses involved and all
information about the server the application runs on. The aim is to attack the code from
several angles to expose security threats.
White Box Mutation Testing: Mutation testing deliberately introduces small changes (mutants)
into the source code to check whether the existing test cases detect them; it is often used to
assess the quality of a test suite and to discover the best coding techniques to use for
expanding a software solution.
Integration Testing: - Integration testing is a level of software testing where individual units
are combined and tested as a group. The purpose of this level of testing is to expose faults in
the interaction between integrated units. Test drivers and test stubs are used to assist in
Integration Testing.
Advantages of White Box Testing: -
• Code optimization by finding hidden errors.
• White box test cases can be easily automated.
• Testing is more thorough as all code paths are usually covered.
• Testing can start early in the SDLC even if the GUI is not available.
Disadvantages of White Box Testing: -
• White box testing can be quite complex and expensive.
• Developers, who usually execute white box test cases, often detest it. White box testing by
developers that is not detailed can lead to production errors.
• White box testing requires professional resources, with a detailed understanding of
programming and implementation.
• White box testing is time-consuming; bigger applications take a long time to test fully.

Black Box Testing


Black box testing is a software testing technique in which the functionality of the software
under test (SUT) is tested without looking at the internal code structure, implementation
details or knowledge of internal paths of the software. This type of testing is based entirely
on the software requirements and specifications. In black box testing we just focus on the
inputs and outputs of the software system without bothering about internal knowledge of the
software program.

Black Box Testing – Steps: -


Here are the generic steps followed to carry out any type of Black Box Testing.
• Initially, the requirements and specifications of the system are examined.
• The tester chooses valid inputs (positive test scenario) to check whether the SUT processes
them correctly. Also, some invalid inputs (negative test scenario) are chosen to verify that
the SUT is able to detect them.
• The tester determines expected outputs for all those inputs.
• The software tester constructs test cases with the selected inputs.
• The test cases are executed.
• The software tester compares the actual outputs with the expected outputs. Defects, if any,
are fixed and re-tested (a minimal sketch of these steps follows this list).
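
A minimal sketch of these steps in Python, assuming a hypothetical specification: login(pin) returns True for the registered 4-digit PIN and False otherwise. The test cases are derived from the specification alone, without looking at the function body.

def login(pin):
    return pin == "4321"  # internal details are irrelevant to the tester

# Test cases constructed from the specification alone:
cases = [
    ("4321", True),    # valid input  (positive scenario)
    ("0000", False),   # wrong PIN    (negative scenario)
    ("12",   False),   # too short    (negative scenario)
]
for given_input, expected in cases:
    actual = login(given_input)
    assert actual == expected, f"{given_input!r}: got {actual}, expected {expected}"
print("all black box cases passed")
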
Types of Black Box Testing
There are many types of black box testing but the following are the prominent ones -
• Functional testing - This black box testing type is related to the functional requirements
of a system; it is done by software testers.
• Non-functional testing - This type of black box testing is not related to testing of a
specific functionality, but to non-functional requirements such as performance, scalability
and usability.
• Regression testing - Regression testing is done after code fixes, upgrades or any other
system maintenance to check that the new code has not affected the existing code.

Black box testing strategy:


Following are the prominent test strategies amongst the many used in black box testing
(a minimal sketch of the first two follows this list):
• Equivalence Class Testing: It is used to minimize the number of possible test cases to an
optimum level while maintaining reasonable test coverage.
• Boundary Value Testing: Boundary value testing is focused on the values at boundaries.
This technique determines whether a certain range of values is acceptable by the system
or not. It is very useful in reducing the number of test cases. It is most suitable for
systems where input lies within certain ranges.
• Decision Table Testing: A decision table puts causes and their effects in a matrix. There is
a unique combination in each column.
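
A minimal sketch of equivalence class and boundary value testing in Python, assuming a hypothetical rule invented for illustration: is_adult(age) accepts ages 18 to 60 inclusive.

def is_adult(age):
    return 18 <= age <= 60

# Equivalence classes: one representative value per class, not every value.
assert is_adult(35) is True    # class: inside the valid range
assert is_adult(5) is False    # class: below the range
assert is_adult(80) is False   # class: above the range

# Boundary values: defects tend to cluster at the edges of the ranges.
assert is_adult(17) is False   # just below the lower boundary
assert is_adult(18) is True    # on the lower boundary
assert is_adult(60) is True    # on the upper boundary
assert is_adult(61) is False   # just above the upper boundary
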

Advantages of Black Box Testing


• The tester can be non-technical.
• Used to verify contradictions between the actual system and the specifications.
• Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing


• The test inputs need to be selected from a large sample space.
• It is difficult to identify all possible inputs in limited testing time, so writing test
cases is slow and difficult.
• There are chances of having unidentified paths during this testing.
