A Project Report
On
STUDY CENTER MANAGEMENT

Submitted by:
Ankur Kaushik
Kuldeep Bhojak
M.Sc. IT IV Semester

Under Guidance of: Sh. Hardayal Singh, H.O.D. (MCA Deptt.), Engineering College, Bikaner
Submitted to: Sh. Hardayal Singh, H.O.D. (MCA Deptt.), Engineering College, Bikaner
A project like this takes quite a lot of time to do properly. As is often the
case, this project owes its existence, and certainly its quality, to a number of
people whose names do not appear on the cover. Among them is one of the
most extraordinary programmers it has been my pleasure to work with,
Mr. Tarun Singh Bundela, who did a superb job of technically editing this
project. He did more than just check the facts, offering thoughtful logic
where needed to improve the project as a whole.
(Ankur Kaushik)
(Kuldeep Bhojak)
Table of Contents
Chapter 1 Introduction
Chapter 10 Conclusion
Introduction
About Organization
The University also allows private candidates to enroll in various programs if they
satisfy the eligibility criteria. The University does not provide any courseware, nor
conduct any contact classes, nor offer any other support services to private
candidates. Such candidates have to prepare for the programs on their own as per
the curriculum and are required to attend the examinations, as per the rules and
regulations of the University. The University has no study centers/branches
outside the jurisdiction.
This project is useful for managing the ICFAI study center. In this project we have
tried to address every necessary requirement of a study center.
Chapter 2
Development model
Our project life cycle uses the waterfall model, also known as the classic life cycle
model or the linear sequential model. Its phases run from System/Information
Engineering through Analysis, Design, Code, and Test.
3. Design
4. Code Generation
5. Testing
Once code has been generated, program testing begins. The testing focuses on the
logical internals of the software, ensuring that all statements have been tested, and
on the functional externals; that is, conducting tests to uncover errors and to ensure
that defined input will produce actual results that agree with the required results.
6. Support
System Study
Before the project can begin, it becomes necessary to estimate the work to be
done, the resources that will be required, and the time that will elapse from start to
finish. While preparing this plan we visited the site many times.
We started by asking context-free questions; that is, a set of questions that will lead
to a basic understanding of the problem. The first set of context-free questions was
like this:
• What do you want to be done?
• Who will use this solution?
• What is wrong with your existing working systems?
• Is there another source for the solution?
• Can you show us (or describe) the environment in which the solution
will be used?
After the first round of these questions, we revisited the site and asked many
more, leading to a final set of questions.
2.2.2 Feasibility
Software cost and effort estimation will never be an exact science. Too many
variables (human, technical, environmental, political) can affect the ultimate
cost of software and the effort applied to develop it. However, software project
estimation can be transformed from a black art into a series of systematic steps that
provide estimates with acceptable risk.
1. Delay estimation until late in the project (since, we can achieve 100%
accurate estimates after the project is complete!)
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project cost
and effort estimates.
4. Use one or more empirical models for software cost and effort
estimation.
Unfortunately, the first option, however attractive, is not practical. Cost estimates
must be provided up front. However, we should recognize that the longer we
wait, the more we know, and the more we know, the less likely we are to make
serious errors in our estimates.
The second option can work reasonably well, if the current project is quite
similar to past efforts and other project influences (e.g., the customer, business
conditions, the SEE, deadlines) are equivalent. Unfortunately past experience has
not always been a good indicator of future results.
The remaining options are viable approaches to software project estimation.
Ideally, the techniques noted for each option should be applied in tandem, each
used as a cross-check for the other. Decomposition techniques take a "divide and
conquer" approach to software project estimation: by decomposing a project into
major functions and related software engineering activities, cost and effort
estimation can be performed in a stepwise fashion.
Each of the viable software cost estimation options is only as good as the
historical data used to seed the estimate. If no historical data exist, costing rests on a
very shaky foundation.
Chapter 4
Program evaluation and review technique (PERT) and critical path method
(CPM) are two project scheduling methods that can be applied to software
development. These techniques are driven by the following information:
• Estimates of effort
• A decomposition of the product function
• The selection of the appropriate process model and task set
• Decomposition of tasks
The PERT chart for this application software is illustrated in figure 4.1. The critical
path for this project is Design, Code Generation, and Integration and Testing.
Figure 4.1 PERT chart for "University Study Center Management System" (figure
omitted; milestones include Coding, Aug 15, 2008, and Documentation and Report
finishing Oct 30, 2008).
A Gantt chart, also known as a timeline chart, contains information such as
effort, duration, start date, and completion date for each task. A timeline chart can
be developed for the entire project.
Below, in figure 4.2, we have shown the Gantt chart for the project. All project
tasks have been listed in the left-hand column.
Start: Jan 1, 2008.
Figure 4.2 Gantt chart for the project University Study Center Management
System. Note: Wk1 = week 1, d1 = day 1.
Chapter 5
System Analysis
Over the past two decades, a large number of analysis modeling methods
have been developed. Investigators have identified analysis problems and their
causes, and have developed a variety of modeling notations and corresponding sets
of heuristics to overcome them. Each analysis method has a unique point of view.
However, all analysis methods are related by a set of operational principles.
We have tried to take these principles to heart so that we could provide
an excellent foundation for design.
Information content represents the individual data and control objects that
constitute some larger collection of information transformed by the software. For
example, the data object Status declare is a composite of a number of important
pieces of data: the aircraft's name, the aircraft's model, ground run, number of
hours flying, and so forth. Therefore, the content of Status declare is defined by the
attributes that are needed to create it. Similarly, the content of a control object
called System status might be defined by a string of bits. Each bit represents a
separate item of information that indicates whether or not a particular device is on-
or off-line.
Data and control objects can be related to other data and control objects.
For example, the data object Status declare has one or more relationships with
objects like total number of flying hours, period left for the maintenance of the
aircraft, and others.
Information flow represents the manner in which data and control change
as each moves through a system. Referring to figure 6.1, input objects are
transformed to intermediate information (data and/or control), which is further
transformed to output. Along this transformation path, additional information may
be introduced from an existing data store (e.g., a disk file or memory buffer). The
transformations applied to the data are functions or sub-functions that a program
must perform. Data and control that move between two transformations define the
interface for each function.
Data/Control Store
5.1.2 Modeling
The second and third operational analysis principles require that we build models
of function and behavior.
Functional models. Software transforms information, and in order to accomplish
this, it must perform at least three generic functions:
• Input
• Processing
• Output
The functional model begins with a single context-level model (i.e., the name of
the software to be built). Over a series of iterations, more and more functional
detail is gathered, until a thorough delineation of all system functionality is
represented.
A behavioral model creates a representation of the states of the software and the
events that cause software to change state.
The above six questions are made as per Andriole's [AND92] suggestions for the
prototyping approach.
E-R DIAGRAM:
(Figure: the E-R diagram relates the STUDY CENTER MANAGEMENT system
to the entities Center Head, Staff Members, Student, Visitor Information, New
Student, Notice Board, Exam Schedule, Student Record & Fee, and Library
Management.)
DATA FLOW DIAGRAM
The following DFD shows how the working of the study center system could be
smoothly managed:
(Figure: the DFD shows the Center Head, with edit and view access, connected to
New Student, Staff Members, Library Management, Student Record, Student Fee
Record, Visitor Information, Notice Board, Course Information, and Exam
Schedule.)
Chapter 6
Technology used
6.1 Tools and Platform used for the Development
The application has been developed on the Microsoft .NET Framework. The
pre-coded solutions that form the framework's Base Class Library cover a
large range of programming needs in a number of areas, including user interface,
data access, database connectivity, cryptography, web application development,
numeric algorithms, and network communications. The class library is used by
programmers who combine it with their own code to produce applications.
Programs written for the .NET Framework execute in a software environment that
manages the program's runtime requirements. Also part of the .NET Framework,
this runtime environment is known as the Common Language Runtime (CLR).
The CLR provides the appearance of an application virtual machine so that
programmers need not consider the capabilities of the specific CPU that will
execute the program. The CLR also provides other important services such as
security, memory management, and exception handling. The class library and the
CLR together compose the .NET Framework.
The .NET Framework is included with Windows Server 2008 and Windows Vista.
The current version of the framework can also be installed on Windows XP and
the Windows Server 2003 family of operating systems.
ASP.NET
History
After the release of Internet Information Services 4.0 in 1997, Microsoft began
researching possibilities for a new web application model that would solve
common complaints about ASP, especially with regard to separation of
presentation and content and being able to write "clean" code. Mark Anders, a
manager on the IIS team, and Scott Guthrie, who had joined Microsoft in 1997
after graduating from Duke University, were tasked with determining what that
model would look like. The initial design was developed over the course of two
months by Anders and Guthrie, and Guthrie coded the initial prototypes during the
Christmas holidays in 1997.
The initial prototype was called "XSP"; Guthrie explained in a 2007 interview
that, "People would always ask what the X stood for. At the time it really didn't
stand for anything. XML started with that; XSLT started with that. Everything
cool seemed to start with an X, so that's what we originally named it." The initial
prototype of XSP was done using Java, but it was soon decided to build the new
platform on top of the Common Language Runtime (CLR), as it offered an object-
oriented programming environment, garbage collection and other features that
were seen as desirable features that Microsoft's Component Object Model
platform didn't support. Guthrie described this decision as a "huge risk", as the
success of their new web development platform would be tied to the success of the
CLR, which, like XSP, was still in the early stages of development, so much so
that the XSP team was the first team at Microsoft to target the CLR.
With the move to the Common Language Runtime, XSP was re-implemented in
C# (known internally as "Project Cool" but kept secret from the public), and
renamed to ASP+, as by this point the new platform was seen as being the
successor to Active Server Pages, and the intention was to provide an easy
migration path for ASP developers.
Mark Anders first demonstrated ASP+ at the ASP Connections conference in
Phoenix, Arizona on May 2, 2000. Demonstrations to the wide public and initial
beta release of ASP+ (and the rest of the .NET Framework) came at the 2000
Professional Developers Conference on July 11, 2000 in Orlando, Florida. During
Bill Gates's keynote presentation, Fujitsu demonstrated ASP+ being used in
conjunction with COBOL,[5] and support for a variety of other languages was
announced, including Microsoft's new Visual Basic .NET and C# languages, as
well as Python and Perl support by way of interoperability tools created by Active
State.
Once the ".NET" branding was decided on in the second half of 2000, it was
decided to rename ASP+ to ASP.NET. Mark Anders explained on an appearance
on The MSDN Show that year that, "The .NET initiative is really about a number
of factors, it’s about delivering software as a service, it's about XML and web
services and really enhancing the Internet in terms of what it can do .... we really
wanted to bring its name more in line with the rest of the platform pieces that
make up the .NET framework."
Characteristics
Pages
ASP.NET pages, known officially as "web forms", are the main building block for
application development. Web forms are contained in files with an ASPX
extension; in programming jargon, these files typically contain static (X)HTML
markup, as well as markup defining server-side Web Controls and User Controls
where the developers place all the required static and dynamic content for the web
page. Additionally, dynamic code which runs on the server can be placed in a page
within a block <% -- dynamic code -- %> which is similar to other web
development technologies such as PHP, JSP, and ASP, but this practice is
generally discouraged except for the purposes of data binding since it requires
more calls when rendering the page.
Note that this sample uses code "inline", as opposed to code behind.
<%@ Page Language="C#" %>
<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        // Set the label to the current time on each request.
        Label1.Text = DateTime.Now.ToLongTimeString();
    }
</script>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title>Sample page</title>
</head>
<body>
<form id="form1" runat="server">
<div>
The current time is: <asp:Label runat="server" id="Label1" />
</div>
</form>
</body>
</html>
Code-behind model
Microsoft recommends dealing with dynamic program code by using the
code-behind model, which places this code in a separate file or in a specially
designated script tag. Code-behind files typically have names like MyPage.aspx.cs
or MyPage.aspx.vb based on the ASPX file name (this practice is automatic in
Microsoft Visual Studio and other IDEs). When using this style of programming,
the developer writes code to respond to different events, like the page being
loaded or a control being clicked, rather than a procedural walkthrough of the
document.
Example
<%@ Page Language="C#" CodeFile="SampleCodeBehind.aspx.cs" Inherits="Website.SampleCodeBehind" %>
The above tag is placed at the beginning of the ASPX file. The CodeFile property
of the @ Page directive specifies the file (.cs or .vb) acting as the code-behind
while the Inherits property specifies the class the page derives from. In this
example, the @ Page directive is included in SamplePage.aspx, and
SampleCodeBehind.aspx.cs acts as the code-behind for this page:
using System;

namespace Website
{
    public partial class SampleCodeBehind : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Runs each time the page is requested.
        }
    }
}
In this case, the Page_Load() method is called every time the ASPX page is
requested. The programmer can implement event handlers at several stages of the
page execution process to perform processing.
User controls
Programmers can add their own properties, methods,[9] and event handlers.[10]
An event bubbling mechanism provides the ability to pass an event fired by a user
control up to its containing page.
State management
ASP.NET applications are hosted in a web server and are accessed over the
stateless HTTP protocol. As such, if the application uses stateful interaction, it has
to implement state management on its own. ASP.NET provides various
functionality for state management in ASP.NET applications.
Application state
Session state
ASPState Mode
In this mode, ASP.NET runs a separate Windows service that maintains the
state variables. Because the state management happens outside the
ASP.NET process, this has a negative impact on performance, but it allows
multiple ASP.NET instances to share the same state server, thus allowing
an ASP.NET application to be load-balanced and scaled out on multiple
servers. Also, since the state management service runs independent of
ASP.NET, variables can persist across ASP.NET process shutdowns.
SqlServer Mode
In this mode, the state variables are stored in a database server, accessible
using SQL. Session variables can be persisted across ASP.NET process
shutdowns in this mode as well. The main advantage of this mode is that it
allows the application to balance load across a server cluster while sharing
sessions between servers.
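As a sketch, the session-state mode is selected in the application's Web.config; the connection string and timeout below are placeholder values, not the project's actual settings:

```xml
<configuration>
  <system.web>
    <!-- mode can be InProc (default), StateServer (the ASPState
         Windows service described above), or SQLServer. -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=localhost:42424"
                  timeout="20" />
  </system.web>
</configuration>
```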
View state
Template engine
When first released, ASP.NET lacked a template engine. Because the .NET
framework is object-oriented and allows for inheritance, many developers
would define a new base class that inherits from "System.Web.UI.Page",
write methods here that render HTML, and then make the pages in their
application inherit from this new class. While this allows for common
elements to be reused across a site, it adds complexity and mixes source
code with markup. Furthermore, this method can only be visually tested by
running the application - not while designing it. Other developers have used
include files and other tricks to avoid having to implement the same
navigation and other elements in every page.
ASP.NET 2.0 introduced the concept of "master pages", which allow for template-
based page development. A web application can have one or more master pages,
which can be nested.[14] Master templates have place-holder controls, called
ContentPlaceHolders to denote where the dynamic content goes, as well as HTML
and JavaScript shared across child pages.
Child pages use those ContentPlaceHolder controls, which must be mapped to the
place-holder of the master page that the content page is populating. The rest of the
page is defined by the shared parts of the master page, much like a mail merge in a
word processor. All markup and server controls in the content page must be placed
within the ContentPlaceHolder control.
When a request is made for a content page, ASP.NET merges the output of the
content page with the output of the master page, and sends the output to the user.
The master page remains fully accessible to the content page. This means that the
content page may still manipulate headers, change title, configure caching etc. If
the master page exposes public properties or methods (e.g. for setting copyright
notices) the content page can use these as well.
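A minimal sketch of the master/content relationship described above (the file names, IDs, and markup here are hypothetical, not the project's actual pages):

```aspx
<%-- Site.master: the shared layout, with a placeholder for page content --%>
<%@ Master Language="C#" %>
<html>
<body>
  <form id="form1" runat="server">
    <asp:ContentPlaceHolder id="MainContent" runat="server" />
  </form>
</body>
</html>

<%-- Default.aspx: a content page mapped to the master's placeholder --%>
<%@ Page Language="C#" MasterPageFile="~/Site.master" Title="Home" %>
<asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
  <p>This markup is merged into the placeholder when the page is requested.</p>
</asp:Content>
```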
Performance
Development tools
• Delphi 2006
• Macromedia Dreamweaver MX, Macromedia Dreamweaver MX 2004, or
Macromedia Dreamweaver 8 (doesn't support ASP.NET 2.0 features, and
produces very inefficient code for ASP.NET 1.x: also, code generation and
ASP.NET features support through version 8.0.1 was little if any changed
from version MX: version 8.0.2 does add changes to improve security
against SQL injection attacks)
• Macromedia HomeSite 5.5 (For ASP Tags)
• Microsoft Expression Web, part of the Microsoft Expression Studio
application suite.
• Microsoft SharePoint Designer
• MonoDevelop (Free/Open Source)
• SharpDevelop (Free/Open Source)
• Visual Studio .NET (for ASP.NET 1.x)
• Visual Web Developer 2005 Express Edition (free) or Visual Studio 2005
(for ASP.NET 2.0)
• Visual Web Developer 2008 Express Edition (free) or Visual Studio 2008
(for ASP.NET 2.0/3.5)
• Eiffel for ASP.NET
6.2.1 Back-end Environment
What is SQL?
During the 1970s, a group at IBM's San Jose research center developed the System
R relational database management system, based on the model introduced by
Edgar F. Codd in his influential paper, A Relational Model of Data for Large
Shared Data Banks.[3] Donald D. Chamberlin and Raymond F. Boyce of IBM
subsequently created the Structured English Query Language (SEQUEL) to
manipulate and manage data stored in System R.[4] The acronym SEQUEL was
later changed to SQL because "SEQUEL" was a trademark of the UK-based
Hawker Siddeley aircraft company.
The first non-commercial, non-SQL RDBMS, Ingres, was developed in 1974 at
U.C. Berkeley. Ingres implemented a query language known as QUEL, which was
later supplanted in the marketplace by SQL.
In the late 1970s, Relational Software, Inc. (now Oracle Corporation) saw the
potential of the concepts described by Codd, Chamberlin, and Boyce and
developed their own SQL-based RDBMS with aspirations of selling it to the U.S.
Navy, CIA, and other government agencies. In the summer of 1979, Relational
Software, Inc. introduced the first commercially available implementation of SQL,
Oracle V2 (Version2) for VAX computers. Oracle V2 beat IBM's release of the
System/38 RDBMS to market by a few weeks.
After testing SQL at customer test sites to determine the usefulness and
practicality of the system, IBM began developing commercial products based on
their System R prototype including System/38, SQL/DS, and DB2, which were
commercially available in 1979, 1981, and 1983, respectively.
Standardization
SQL was adopted as a standard by ANSI in 1986 and by ISO in 1987. Until 1996,
the National Institute of Standards and Technology (NIST) data management
standards program was tasked with certifying SQL DBMS compliance with the
SQL standard. In 1996, however, the NIST data management standards program
was dissolved, and vendors are now relied upon to self-certify their products for
compliance.
The SQL standard has gone through a number of revisions, as shown below:
Year  Name      Alias              Comments
1986  SQL-86    SQL-87             First published by ANSI. Ratified by ISO in 1987.
1989  SQL-89    FIPS 127-1         Minor revision, adopted as FIPS 127-1.
1992  SQL-92    SQL2, FIPS 127-2   Major revision (ISO 9075); Entry Level SQL-92 adopted as FIPS 127-2.
1999  SQL:1999  SQL3               Added regular expression matching, recursive queries, triggers, support for procedural and control-of-flow statements, non-scalar types, and some object-oriented features.
2003  SQL:2003                     Introduced XML-related features, window functions, standardized sequences, and columns with auto-generated values (including identity columns).
2006  SQL:2006                     ISO/IEC 9075-14:2006 defines ways in which SQL can be used in conjunction with XML. It defines ways of importing and storing XML data in an SQL database, manipulating it within the database, and publishing both XML and conventional SQL data in XML form. In addition, it provides facilities that permit applications to integrate into their SQL code the use of XQuery, the XML Query Language published by the World Wide Web Consortium (W3C), to concurrently access ordinary SQL data and XML documents.
The SQL standard is not freely available. SQL:2003 and SQL:2006 may be
purchased from ISO or ANSI. A late draft of SQL:2003 is freely available as a zip
archive, however, from Whitemarsh Information Systems Corporation. The zip
archive contains a number of PDF files that define the parts of the SQL:2003
specification.
Scope and extensions
Procedural extensions
SQL is designed for a specific purpose: to query data contained in a relational
database. SQL is a set-based, declarative query language, not an imperative
language such as C or BASIC. However, there are extensions to Standard SQL
which add procedural programming language functionality, such as control-of-
flow constructs. These are:
Source             Common Name  Full Name
ANSI/ISO Standard  SQL/PSM      SQL/Persistent Stored Modules
IBM                SQL PL       SQL Procedural Language (implements SQL/PSM)
Microsoft/Sybase   T-SQL        Transact-SQL
MySQL              SQL/PSM      SQL/Persistent Stored Module (as in ISO SQL:2003)
Oracle             PL/SQL       Procedural Language/SQL (based on Ada)
PostgreSQL         PL/pgSQL     Procedural Language/PostgreSQL Structured Query Language (based on Oracle PL/SQL)
PostgreSQL         PL/PSM       Procedural Language/Persistent Stored Modules (implements SQL/PSM)
Additional extensions
SQL:2003 also defines several additional extensions to the standard to increase
SQL functionality overall. These extensions include:
The SQL/JRT, or SQL Routines and Types for the Java Programming Language,
extension is defined by ISO/IEC 9075-13:2003. SQL/JRT specifies the ability to
invoke static Java methods as routines from within SQL applications. It also calls
for the ability to use Java classes as SQL structured user-defined types.
SQL statements also include the semicolon (";") statement terminator. Though not
required on every platform, it is defined as a standard part of the SQL grammar.
Queries
The most common operation in SQL databases is the query, which is performed
with the declarative SELECT keyword. SELECT retrieves data from a specified
table, or multiple related tables, in a database. While often grouped with Data
Manipulation Language (DML) statements, the standard SELECT query is
considered separate from SQL DML, as it has no persistent effects on the data
stored in a database. Note that there are some platform-specific variations of
SELECT that can persist their effects in a database, such as the SELECT INTO
syntax that exists in some databases.
SQL queries allow the user to specify a description of the desired result set, but it
is left to the devices of the database management system (DBMS) to plan,
optimize, and perform the physical operations necessary to produce that result set
in as efficient a manner as possible. An SQL query includes a list of columns to be
included in the final result immediately following the SELECT keyword. An
asterisk ("*") can also be used as a "wildcard" indicator to specify that all
available columns of a table (or multiple tables) are to be returned. SELECT is the
most complex statement in SQL, with several optional keywords and clauses,
including:
The FROM clause which indicates the source table or tables from which the data
is to be retrieved. The FROM clause can include optional JOIN clauses to join
related tables to one another based on user-specified criteria.
The WHERE clause includes a comparison predicate, which is used to restrict the
number of rows returned by the query. The WHERE clause is applied before the
GROUP BY clause. The WHERE clause eliminates all rows from the result set
where the comparison predicate does not evaluate to True.
The GROUP BY clause is used to combine, or group, rows with related values
into elements of a smaller set of rows. GROUP BY is often used in conjunction
with SQL aggregate functions or to eliminate duplicate rows from a result set.
The HAVING clause includes a comparison predicate used to eliminate rows after
the GROUP BY clause is applied to the result set. Because it acts on the results of
the GROUP BY clause, aggregate functions can be used in the HAVING clause
predicate.
The ORDER BY clause is used to identify which columns are used to sort the
resulting data, and in which order they should be sorted (options are ascending or
descending). The order of rows returned by an SQL query is never guaranteed
unless an ORDER BY clause is specified.
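To make the behaviour of these clauses concrete, the following runnable sketch (Python with SQLite, using a hypothetical simplified Book table rather than the project's actual schema) shows GROUP BY, HAVING, and ORDER BY acting together:

```python
import sqlite3

# Hypothetical simplified Book table, assumed only for this sketch.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Book (title TEXT, author TEXT, price REAL)")
cur.executemany(
    "INSERT INTO Book VALUES (?, ?, ?)",
    [
        ("Pitfalls of SQL", "A. Writer", 120.00),
        ("SQL Basics", "B. Writer", 80.00),
        ("SQL Basics", "C. Writer", 80.00),
    ],
)

# GROUP BY combines rows with the same title; HAVING then filters the
# groups after aggregation; ORDER BY sorts the final result set.
cur.execute(
    "SELECT title, COUNT(*) AS authors"
    " FROM Book"
    " GROUP BY title"
    " HAVING COUNT(*) > 1"
    " ORDER BY title"
)
print(cur.fetchall())  # [('SQL Basics', 2)]
```

Only the title with more than one author survives the HAVING predicate, which is the distinction the text draws between WHERE (rows) and HAVING (groups).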
SELECT *
FROM Book
WHERE price > 100.00
ORDER BY title;
The example below demonstrates the use of multiple tables in a join, grouping,
and aggregation in an SQL query, by returning a list of books and the number of
authors associated with each book.
SELECT Book.title AS Title, count(*) AS Authors
FROM Book
JOIN Book_author
ON Book.isbn = Book_author.isbn
GROUP BY Book.title;
Title Authors
---------------------- -------
Pitfalls of SQL 1
(The underscore character "_" is often used as part of table and column names to
separate descriptive words because other punctuation tends to conflict with SQL
syntax. For example, a dash "-" would be interpreted as a minus sign.)
Under the precondition that isbn is the only common column name of the two
tables and that a column named title only exists in the Books table, the above
query could be rewritten in the following form:
SELECT title, count(*) AS Authors
FROM Book NATURAL JOIN Book_author
GROUP BY title;
However, many vendors either do not support this approach, or it requires certain
column naming conventions. Thus, it is less common in practice.
Data retrieval is very often combined with data projection when the user is looking
for calculated values and not just the verbatim data stored in primitive data types,
or when the data needs to be expressed in a form that is different from how it's
stored. SQL allows the use of expressions in the select list to project data, as in the
following example which returns a list of books that cost more than 100.00 with
an additional sales_tax column containing a sales tax figure calculated at 6% of
the price.
SELECT isbn, title, price, price * 0.06 AS sales_tax
FROM Book
WHERE price > 100.00
ORDER BY title;
Some modern-day SQL queries may combine several conditions in the WHERE
clause. Such a query may look like this example:
SELECT *
FROM Book
WHERE price > 100.00 AND price < 500.00
ORDER BY title;
6.1.3 Hardware Specification
Server Side
• Core 2 Duo 2.4 GHz and above
• 2 GB of Random Access Memory and Above
• 160 GB Hard Disk
Client Side
• Pentium IV 1.5 GHz and above
• 512 MB of Random Access Memory and Above
• 80 GB Hard Disk
Chapter 7
System Design
1. Index page
This web page is the starting page of the website. It gives the following:
As shown in the image above, the Add New Staff Member web page is displayed:
• The user can enter any existing staff member's name in the given textbox.
• After clicking on the Search button, the page gives information (name,
qualification, post, salary, etc.) if the member exists.
19. Search Book
As shown in the image above, the Add New Book web page is displayed:
System Testing
Once source code has been generated, software must be tested to uncover (and
correct) as many errors as possible before delivery to the customer. Our goal is to
design a series of test cases that have a high likelihood of finding errors. To
uncover the errors, software testing techniques are used. These techniques provide
systematic guidance for designing tests in which:
(1) internal program logic is exercised using "white box" test case design
techniques;
(2) software requirements are exercised using "black box" test case design
techniques.
In both cases, the intent is to find the maximum number of errors with the
minimum amount of effort and time.
8.2 Strategies
A strategy for software testing must accommodate low-level tests that are
necessary to verify that a small source code segment has been correctly
implemented, as well as high-level tests that validate major system functions
against customer requirements. A strategy must provide guidance for the
practitioner and a set of milestones for the manager. Because the steps of the test
strategy occur at a time when deadline pressure begins to rise, progress must be
measurable and problems must surface as early as possible.
The following testing techniques are well known, and the same strategy was
adopted during this project's testing.
8.2.1 Unit testing: Unit testing focuses verification effort on the smallest unit of
software design: the software component or module. The unit test is white-box
oriented. The module interface is tested to ensure that information flows properly
into and out of the program unit under test. The local data structure is examined
to ensure that data stored temporarily maintains its integrity during all steps in an
algorithm’s execution. Boundary conditions are tested to ensure that the module
operates properly at the boundaries established to limit or restrict processing. All
independent paths through the control structure are exercised to ensure that all
statements in a module have been executed at least once.
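The ideas above, boundary conditions and independent paths, can be sketched as a small unit test. The sketch is in Python for brevity (the project itself is a .NET application), and the `late_fee` function with its Rs. 2-per-day, Rs. 50-cap rule is hypothetical, invented only to give the test case something to exercise.

```python
import unittest

def late_fee(days_overdue):
    """Hypothetical module under test: Rs. 2 per overdue day, capped at Rs. 50."""
    if days_overdue < 0:
        raise ValueError("days_overdue cannot be negative")
    return min(2 * days_overdue, 50)

class LateFeeUnitTest(unittest.TestCase):
    def test_boundary_conditions(self):
        # exercise the boundaries that limit or restrict processing
        self.assertEqual(late_fee(0), 0)    # lower boundary
        self.assertEqual(late_fee(25), 50)  # exactly at the cap
        self.assertEqual(late_fee(26), 50)  # just past the cap

    def test_error_path(self):
        # the independent path through the error branch must also be exercised
        with self.assertRaises(ValueError):
            late_fee(-1)

# run the tests without exiting, so the module can still be imported or extended
unittest.main(argv=["late_fee_sketch"], exit=False)
```

Together the two test methods cover every statement in the module at least once, which is exactly the white-box goal stated above.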
8.2.4 System testing: System testing is actually a series of different tests whose
primary purpose is to fully exercise the computer-based system. Below we
describe the two types of testing that were performed for this project.
8.2.4.1 Security testing
Any computer-based system that manages sensitive information or causes actions
that can improperly harm (or benefit) individuals is a target for improper or illegal
penetration. Penetration spans a broad range of activities: hackers who attempt to
penetrate systems for sport; disgruntled employees who attempt to penetrate for
revenge; and dishonest individuals who attempt to penetrate for illicit personal gain.
For security purposes, anyone who is not an authorized user cannot
penetrate this system. When the program first loads, it checks for a correct
username and password; any attempt that fails this check is simply rejected by the
system.
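The login check described above can be sketched as follows. The sketch is in Python for brevity (the project itself is a .NET application), and the in-memory user table and the "admin"/"secret123" credentials are hypothetical; the real project would read credentials from its database. The sketch also stores a salted hash rather than the plain password.

```python
import hashlib
import hmac

_SALT = b"demo-salt"  # a real system would use a random, per-user salt

def _hash(password):
    """Salted SHA-256 digest of the password, stored instead of plain text."""
    return hashlib.sha256(_SALT + password.encode()).hexdigest()

# Hypothetical user table: username -> stored password hash.
_USERS = {"admin": _hash("secret123")}

def authenticate(username, password):
    """Return True only for a known username with a matching password."""
    stored = _USERS.get(username)
    if stored is None:
        return False
    # compare_digest performs a constant-time comparison, which avoids
    # leaking information about the stored hash through timing differences
    return hmac.compare_digest(stored, _hash(password))
```

On this model, the program's startup screen would simply refuse to proceed while `authenticate` returns `False`, which matches the behavior described above.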
Every run of our project is itself a form of testing, in the sense discussed by Musa
and Ackerman. They have suggested a response that is based on statistical criteria: “No, we
cannot be absolutely certain that the software will never fail, but relative to a
theoretically sound and experimentally validated statistical model, we have done
sufficient testing to say with 95 percent confidence that the probability of 1000
CPU hours of failure-free operation in a probabilistically defined environment is at
least 0.995.”
8.4 Validation Checks
Software testing is one element of a broader topic that is often referred to as
verification and validation. Verification refers to the set of activities that ensure
that software correctly implements a specific function. Validation refers to a
different set of activities that ensure that the software that has been built is
traceable to customer requirements. Boehm states this another way: verification
asks, “Are we building the product right?” while validation asks, “Are we building
the right product?”
Validation checks are useful when we specify the nature of the data input. Let us
elaborate. In this project, while entering data into many of the text boxes, you
will find validation checks in use: when you try to input wrong data, your entry is
automatically rejected.
At the very beginning of the project, when the user wishes to enter the application,
he has to supply the password. This password is validated against a set string;
until the user supplies the correct password string, he cannot proceed. When you
try to edit a trainee record in the Operations division, you will find further
validation checks: if you supply digits in the Name text box, the entry is not
accepted; similarly, if you enter the trainee code in text (string) format, it is
simply rejected.
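The two checks just described can be sketched as a pair of small validation functions. The sketch is in Python for brevity (the project itself is a .NET application), and the exact trainee-code format is an assumption: here it is taken to be purely numeric, matching the rule that text input is rejected.

```python
def valid_name(name):
    """A name must be non-empty and must not contain any digits."""
    return bool(name.strip()) and not any(ch.isdigit() for ch in name)

def valid_trainee_code(code):
    """Assumed format: a trainee code must consist of digits only."""
    return code.isdigit()

# Entries that fail either check would be rejected at the text box,
# as described above, before the record is ever saved.
assert valid_name("Ankur Kaushik")
assert not valid_name("Ankur2")       # digits in a name are rejected
assert valid_trainee_code("1024")
assert not valid_trainee_code("TC-1024")  # text in a code is rejected
```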
System Implementation
Specification, regardless of the mode through which we accomplish it, may
be viewed as a representation process. Requirements are represented in a manner
that ultimately leads to successful software implementation.
Conclusion
To conclude, Project Grid works like a component that can access all of the
databases and pick up different functions. It overcomes many of the limitations
found in the .NET Framework. Among the many features provided by the
project, the main ones are:
• Simple editing
• Insertion of individual images on each cell
• Insertion of individual colors on each cell
• Flicker free scrolling
• Drop-down grid effect
• Placing of any type of control anywhere in the grid
Chapter 11
The number of levels that the software can handle, currently limited to N levels,
can be made unlimited in the future. Efficiency can be further enhanced to a
great extent by normalizing and de-normalizing the database tables used in the
project, as well as by adopting alternative data structures and the advanced
calculation algorithms available.
In the future, we can generalize the application from its current customized
status, so that other vendors developing and working on similar applications can
utilize this software and adapt it to their business needs.
• Faster processing of information as compared to the current system, with
high accuracy and reliability.
• Automatic and error-free report generation as per the specified format, with
ease.
• Automatic calculation and generation of correct and precise bills, thus
reducing much of the workload on the accounting staff and the errors
arising from manual calculations.
• With a fully automated solution, less staff, better space utilization and a
peaceful work environment, the company is bound to experience a high
turnover.
A further advantage of this system lies in the fact that the proposed system will
remain relevant in the future. Any addition or deletion of services, addition or
deletion of any reseller, or any other type of modification can be implemented
easily in the future. The data collected by the system will also be useful for
other purposes.
All of these result in high client satisfaction and hence more and more business
for the company, scaling the company’s business to new heights in the future.
References
• The Complete Reference: C#
• Programming in C# – Deitel & Deitel
• www.w3schools.com
• http://en.wikipedia.org
• The Principles of Software Engineering – Roger S. Pressman
• Software Engineering – Hudson
• MSDN help provided by Microsoft .NET
• Object-Oriented Programming – Deitel & Deitel