Roll No: 605211799

A Project Report submitted to “Punjab Technical University” in partial fulfillment of the requirements for the award of the degree of

Master of Computer Applications
[Session 2006-2009]

Project name: HUS Online Shop using Visual Studio 2005 and SQL Server 2005
Supervised by:
Mr. Arwinder Singh Kang, Head, Deptt. of Computer Science
Project In-charge: Mr. Sahil Sharma, Lecturer, Computer Science

Submitted by:
Umesh Kumar
M.C.A. VIth Semester, Roll No. 605211799
Chandigarh Engineering College


Chandigarh Engineering College, Landran, Mohali


It is my pleasure to acknowledge the help I have received from different individuals of the Society for Promotion of IT in Chandigarh (SPIC) during the project-based training period. My sincere appreciation and gratitude go to Mr. Ashok Kashav (Trainer) and Mr. Anil Prashar (Head), Software Developers, SPIC, for their guidance, constructive comments, valuable suggestions and inspiration during the entire training session. I have received endless help, untiring guidance and supervision throughout my training period, and it has been a matter of great privilege for me to derive the benefit of their enormous experience. I also take this opportunity to express my sincere thanks and full appreciation to Mr. Arwinder Singh Kang, Head, Deptt. of Computer Applications, Chandigarh Engineering College, Landran, and Mr. Sahil Sharma, who extended their wholehearted cooperation, moral support and ungrudging assistance whenever and wherever the need arose. I am very thankful to them. Finally, I wish to thank all the people of the company for their kind cooperation.

Umesh Kumar
Roll No: 605211799
MCA 6th Semester
Chandigarh Engineering College, Landran, Mohali



SPIC (Society for Promotion of IT in Chandigarh)

The Society for Promotion of IT in Chandigarh (SPIC) has been set up under the aegis of the Department of Information Technology, Chandigarh Administration for implementing the various plans of the Administration to promote the IT industry in Chandigarh. The Chairperson of the Society is the Adviser to the Administrator.

The objectives of the Society are:
1. To promote the application of Information Technology in the Union Territory of Chandigarh in accordance with the IT Policy of the Chandigarh Administration.
2. To carry out all such activities as are commensurate with the IT vision of the Chandigarh Administration, as outlined in the IT Policy.
3. To promote e-Governance and software exports, create IT infrastructure, and generate jobs in IT, as outlined in the IT Mission of the Chandigarh Administration.
4. To facilitate the establishment and functioning of data processing computer centres.
5. To provide consultancy services and impart training in various disciplines of Information Technology.
6. To facilitate the development of software packages and related items, and to undertake turnkey projects/assignments in India and abroad in Information Technology by public and private sector companies in the Union Territory of Chandigarh, in order to promote the application of Information Technology for the benefit of the citizens of Chandigarh.



SPIC Centre of Excellence
Under the aegis of the Department of IT, SPIC and Microsoft have jointly set up a Centre of Excellence at Punjab Engineering College, Chandigarh. The Centre is a state-of-the-art complex spread over an area of 3500 sq. ft. It consists of a spacious conference hall, hi-tech classrooms, 30 workstations, a meeting room, and all the latest technological equipment for training, software development and presentations.

Under this understanding, the partnership will work towards computerizing organizations in Chandigarh U.T., building skilled technical resources, developing expertise in providing technical consultancy, and developing custom applications. Microsoft, in return, will provide access to training and skills transfer on Microsoft Corporation technology. The Centre offers various courses such as MCSE, MCSD, MCDBA, VB and SQL 2000. Microsoft is carrying out training for the faculty, the students and the employees of the Chandigarh Administration on its new technologies and products, for benchmarking and for demonstrating an array of Microsoft products, solutions and interoperability with other platforms at this Centre.

The Centre of Excellence is being used as a centre for the development of skills for the emerging software industry in the UT. The Centre also provides organized short-term courses for corporate executives, including executives from private companies; high-end training is carried out for the executives as per their requirements.
Software engineers deployed by the Department of Information Technology and Microsoft are working on various e-governance and government projects: an accounting package for the Chandigarh Pollution Control Committee, projects related to counselling/guidance (Regional Employment Officer), a loan system for the Social Welfare Department, library software for the Chandigarh College of Architecture, a project for the ITI Chandigarh, and the website of the Chandigarh Administration, which covers all public-facing departments of the Chandigarh Administration.

About Incubation Centre
The SPIC IT Enterprise Development Centre was inaugurated on March 6, 2002 by H.E. Lt. Gen. J.F.R. Jacob, PVSM (Retd.), Governor of Punjab and Administrator, UT Chandigarh.

To promote small IT/ITES companies in setting up their facilities, and to assist young professionals in setting up their entrepreneurship by providing shell space in Chandigarh, SPIC has an IT Incubation Centre situated in Punjab Engineering College, Sector 12, to enhance software exports from Chandigarh. It has a built-up space of 15,000 sq. ft., where shell space is provided to small IT/ITES companies, which have been given internet bandwidth connectivity by STPI for software export. Six IT companies doing software export are operational in the SPIC IT Incubation Centre. The IT Enterprise Development Centre is the first of its kind in North India and has been set up in order to encourage IT companies to set up their facilities in Chandigarh. These IT companies are expected to shift to other locations within the next three years, after establishing themselves.




1.1 Introduction & Overview of the Project

Online shopping is the process consumers go through to purchase products or services over the Internet. An online shop, e-shop, e-store, internet shop, webshop, webstore, online store, or virtual store evokes the physical analogy of buying products or services at a bricks-and-mortar retailer or in a shopping mall. This application provides an interface using .NET and SQL Server 2005 for database connectivity.

In general, shopping has always catered to middle- and upper-class women. Shopping is fragmented and pyramid-shaped: at the pinnacle are elegant boutiques for the affluent, while a huge belt of inelegant but ruthlessly efficient “discounters” flog plenty at the pyramid’s precarious middle. According to the analysis of Susan D. Davis, at its base are the world’s workers and poor, on whose cheapened labor the rest of the pyramid depends for its incredible abundance. [5] Shopping has evolved from single stores to large malls containing many stores that most often offer attentive service, store credit, delivery, and acceptance of returns. [5] These new additions to shopping have encouraged and targeted middle-class women. In recent years, online shopping has become popular; however, it still caters to the middle and upper classes, since in order to shop online one must have access to a computer, a bank account and a debit card. Shopping has evolved with the growth of technology. According to research found in the Journal of Electronic Commerce, if we focus on the demographic characteristics of the in-home shopper, in general, the higher the level of education, income, and occupation of the head of the household, the more favourable the perception of non-store shopping. [6] An influential factor in consumer attitude towards non-store shopping is exposure to technology, since it has been demonstrated that increased exposure to technology increases the probability of developing favourable attitudes towards new shopping channels. [6] Online shopping widened the target audience to men and women of the middle class. At first, the main users of online shopping were young men with a high level of income and a university education. [6] This profile is changing: for example, in the USA there were very few women users in the early years of the Internet, but by 2001 women made up 52.8% of the online population. Sociocultural pressure has made men generally more independent in their purchase decisions, while women place greater value on personal contact and social relations.

1.2 Objectives of the Project

 Online Shopping System is a system for the online selling of available products.
 It introduces new services with lower overheads.
 It allows authorized users to enjoy a wider choice.
 It is easy for consumers to manage, compared with a non-digital channel.
 It increases market size.
 It provides information about the current status of the products.
 It provides reviews of new products in the market.
 This philosophy is widely used and will be of immense use in future applications.
 Proper updating as well as feedback can be added and viewed by authorized users.

1.3 Payment
Online shoppers commonly use credit cards to make payments; however, some systems enable users to create accounts and pay by alternative means, such as:
 Debit card
 Various types of electronic money
 Cash on delivery (C.O.D., offered by very few online stores)
 Cheque
 Wire transfer/delivery on payment
 Postal money order
 PayPal
 Google Checkout
 Amazon Payments
 Bill Me Later
 Moneybookers
 Reverse SMS billing to mobile phones
 Gift cards
 Direct debit in some countries

Some sites will not allow international credit cards, and some require that the billing address and the shipping address be in the same country in which the site does its business. Other sites allow customers from anywhere to send gifts anywhere. The financial part of a transaction might be processed in real time (for example, letting the consumer know their credit card was declined before they log off), or might be done later as part of the fulfillment process. While credit cards are currently the most popular means of paying for online goods and services, alternative online payments will account for 26% of e-commerce volume by 2009, according to Celent. [8]
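The distinction between real-time authorisation and deferred settlement can be made concrete with a small sketch. The following is a hypothetical Python illustration (the project itself targets .NET; all function names are invented and no real payment gateway is called):

```python
# Hypothetical sketch of dispatching an order to a payment method.
# No real gateway is involved; the handlers simulate the two cases.

def pay_by_card(order):
    # Real-time processing: the shopper learns immediately whether
    # the card was authorised or declined.
    return {"method": "credit_card", "status": "authorised"}

def pay_on_delivery(order):
    # Deferred processing: nothing is charged now; settlement happens
    # later, as part of the fulfilment process.
    return {"method": "cod", "status": "pending"}

PAYMENT_HANDLERS = {
    "credit_card": pay_by_card,
    "cod": pay_on_delivery,
}

def process_payment(order, method):
    if method not in PAYMENT_HANDLERS:
        raise ValueError("unsupported payment method: " + method)
    return PAYMENT_HANDLERS[method](order)
```

Adding a new payment option (say, a wire transfer) would then only require registering one more handler in the dictionary.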

1.4 Product delivery
Once a payment has been accepted, the goods or services can be delivered in the following ways:

 Download: the method often used for digital media products such as software, music, movies, or images.
 Shipping: the product is shipped to the customer's address.
 Drop shipping: the order is passed to the manufacturer or a third-party distributor, who ships the item directly to the consumer, bypassing the retailer's physical location to save time, money, and space.
 In-store pickup: the customer orders online, finds a local store using locator software and picks the product up at the closest store. This is the method often used in the bricks-and-clicks business model.

In the case of buying an admission ticket, one may get a code or a ticket that can be printed out. At the premises it is verified that the same right of admission is not used twice.


1.5 Shopping cart systems

Simple systems allow the offline administration of products and categories. The shop is then generated as HTML files and graphics that can be uploaded to a webspace. These systems do not use an online database.
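The "offline administration, static HTML" idea can be sketched as follows. This is a hypothetical Python illustration (the product list, file names and page layout are invented), not part of the actual project:

```python
# Sketch of a simple shopping-cart system: the catalogue is administered
# offline and emitted as plain HTML files that can be uploaded to any
# web space. No online database is involved.
import os

PRODUCTS = [  # invented sample data
    {"id": 1, "name": "Keyboard", "price": 450.00},
    {"id": 2, "name": "Mouse", "price": 250.00},
]

def render_product(product):
    # One static page per product.
    return ("<html><body><h1>%s</h1><p>Price: Rs. %.2f</p></body></html>"
            % (product["name"], product["price"]))

def generate_shop(out_dir):
    # Write every product page into out_dir, ready for upload.
    os.makedirs(out_dir, exist_ok=True)
    for product in PRODUCTS:
        path = os.path.join(out_dir, "product_%d.html" % product["id"])
        with open(path, "w") as f:
            f.write(render_product(product))
    return len(PRODUCTS)  # number of pages written
```

Regenerating and re-uploading the files is the whole "update" cycle, which is why such systems need no server-side code or online database.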

A high-end solution can be bought or rented as a standalone program or as an addition to an enterprise resource planning program. It is usually installed on the company's own web server and may integrate into the existing supply chain so that ordering, payment, delivery, accounting and warehousing can be automated to a large extent.

Other solutions allow the user to register and create an online shop on a portal that hosts multiple shops at the same time.

Open-source shopping cart packages include advanced platforms such as Interchange, as well as off-the-shelf solutions such as Satchmo, osCommerce, Magento, Zen Cart, OpenCart, VirtueMart, Flying Cart and PrestaShop, or the dual-licensed PhPepperShop.

Commercial systems can also be tailored to one's needs, so that the shop does not have to be created from scratch. By using an existing framework, software modules for the different functionalities required by a web shop can be adapted and combined.



2.1 Problem Analysis:
Problem analysis is done in the following steps:
 First, we identify the important elements of the problem situation by analyzing relevant information.
 The problems are then framed.
 Possible causes are identified.
 Possible solutions are framed and reframed.
 Conceptual flexibility is exhibited.

2.2 Problem Statement:
Before the advent of computers, data processing activities faced many problems such as inaccuracy, delays and improper record maintenance, as everything was done manually. Some problems are detailed below:
 INACCURACY: In the existing system all the information is stored in long registers. Inaccuracy is caused by the manual storage of data.
 INCONSISTENCY: The present system cannot detect replication and duplication of data.
 IMPROPER RECORD MAINTENANCE: It is inconvenient to modify data, the number of mistakes is high, and a lot of time is wasted in correcting them.
 REDUNDANCY OF RECORDS: There is data duplication and wastage of storage space.
 PROBLEM OF UPDATION: It is very time consuming to update all of the records.
 TIME AND RETRIEVAL PROBLEM: Retrieval of data is time consuming.
 STORAGE PROBLEM: More space is required to store the records, and this storage is prone to damage.

2.3 Processing Environment
Hardware Requirements:
The system must have the following hardware requirements:
 Pentium IV processor
 1 GB of RAM
 CPU 2.40 GHz
 10 GB of hard disk
 Server machine
 Client machine

Software Requirements:
The system must have the following software requirements:
 OPERATING SYSTEM: Windows 2000/XP/2003
 .NET
 SQL Server 2005

2.4 Solution Strategy:
The best solution to any problem can be attained only if there is a proper strategy for solving it. The problem can be dispersed into different phases, and a solution is reached only after going through various intermediate phases. Some of these phases are described in brief:
 ANALYSIS PHASE: Proper analysis should be done by meeting the client. Nothing related to the problem, however small, should be ignored, and the requirements of the user should be properly analyzed.
 COMPARISON: The problem must be compared with the existing system to extract the best solution. A formal review must be done, comparing the existing system with the system to be developed.
 PROPER TEAM STRUCTURE: A good team structure has to be organized. Every member should be assigned his/her modules, with proper coordination among them; all are answerable to the team leader, and proper documentation of the modules must be given to them.
 PROPER DOCUMENTATION: Proper documentation of the problem must be done before moving towards the final solution. Formal meetings and reviews have to be arranged.
 TESTING: Faults may occur along the solution path; it is necessary to detect and eliminate them to reach the final and best solution.
 FLEXIBILITY: The adopted strategy must be flexible, so that if some modification is needed it can be done easily.



Feasibility Analysis:
A feasibility study is done so that an ill-conceived system is recognized early in the definition phase. This phase is really important because, before starting the real work of building the system, it is essential to find out whether the idea is possible or not. The analyses done in this project are:
 Technical feasibility: a study of function, performance and constraints that may affect the ability to achieve an acceptable system.
 Operational feasibility: a study of the operational aspects of the system.
 Economic feasibility: a study of the economic aspects of the system.
 Behavioral feasibility: an estimate of the reaction of the user staff towards the development of the computerized system.

3.1 Technical Feasibility
During technical analysis, the technical merits of the system are studied, while at the same time collecting additional information about performance, maintainability, reliability and predictability. Technical analysis begins with an assessment of the technical viability of the proposed system:
 What technologies are required to accomplish system functions and performance?
 What new methods and procedures are required, and what is their development risk?
 How will the results obtained from technical analysis form the basis for another go/no-go decision on the test system?

3.2 Operational Feasibility
In operational feasibility we study whether the proposed system will work in the right manner when implemented. The proposed system seems to function well operationally. The software was tested properly both at the development stage and at the testing stage, and any problem that was encountered was carefully removed. Hence the system is operationally feasible.



3.3 Economic Feasibility
Economic feasibility determines the benefits and savings that are expected from the system and compares them with the costs. The designed system provides the following advantages:
 Better customer services.
 Faster information retrieval.
 Quicker test preparation.
 Better result accuracy.
 Lower processing and operating cost.
 Improved staff utilization and efficiency.
 Consistent procedures to eliminate errors.
 Better security implementations.
Since the software does not require any special hardware components or software utilities, no purchases are needed. The only purchases that will have to be made are a personal computer of the configuration specified above (or higher) and a printer for generating hard copies of reports. Hence the system is economically feasible.

3.4 Behavioral Feasibility
Behavioral feasibility estimates the reaction of the user staff towards the development of the computerized system. In the case of this system, the staff was completely in favor of automating the process of report generation, as it saved their precious time and energy; moreover, the system, when implemented, would help to remove the inconsistencies, redundancy and errors associated with the existing system. So it can be said that the behavioral feasibility analysis yielded positive results. Hence the system is behaviorally feasible.



4.1 Team Structure







4.2 Development Schedule

Investigation Phase
The investigation phase is also known as the fact-finding stage or the analysis of the current system. This is a detailed study conducted with the purpose of fully understanding the existing system and identifying the basic information requirements. A thorough investigation was done in every aspect when determining whether the system was feasible enough to be implemented. As it was essential for us to find out more about the present system, we used the following methods to gather information:
 Observation
 Document sampling

Constraints and Limitations
The constraints and limitations within a system are the drawbacks that occur during its implementation. Such limitations and constraints can crop up in almost every system; the most important thing is to find a way to overcome them. Software design is the first of the three technical activities (design, code generation and test) that are required to build and verify the software. Each activity transforms information in a manner that ultimately results in validated computer software. The design task produces a data design, an architectural design, an interface design and a component design. The design of an information system produces the details that clearly describe how the system will meet the requirements identified during system analysis. When I started working on the system design, I faced different types of problems, many of them due to constraints imposed by the limitations of the hardware and software available.

Design Objectives
The primary objective of the design is to deliver the requirements as specified in the feasibility report. These are some of the objectives which we kept in mind:
 Practicality: the system is quite stable and can be operated by people of average intelligence.
 Efficiency: we tried to ensure accuracy, timeliness and comprehensiveness of the system output.
 Security: this is a very important aspect which we followed in the designing phase, covering the security of data by applying constraints.

Implementation is the stage where the theoretical design is turned into a working system, giving the users confidence that the new system will work efficiently and effectively. It involves careful planning, investigation of the system and its constraints on implementation, and the design of methods to achieve the changeover. Apart from planning, a major task in preparing for implementation is the education of users. The more complex the system being implemented, the more involved the system analysis and design effort required for implementation.

The system can be implemented only after thorough testing is done and it is found to work according to the specification. This method also offers the greatest security, since the old system can take over if errors are found, or if there is an inability to handle certain types of transactions while using the newly developed system.

At the beginning of development, a preliminary implementation plan is created to schedule and manage the many different activities that must be integrated into the plan. The implementation plan is updated throughout the development phase, culminating in a changeover plan for the operation phase. The major elements of the implementation plan are:
 Test plan
 Training plan
 Equipment installation



The software requirement specification is produced at the culmination of the analysis task. The function and performance allocated to software as part of system engineering are refined by establishing a complete information description, a detailed functional description, a representation of system behavior, an indication of performance requirements and design constraints, appropriate validation criteria, and other information pertinent to the requirements. The introduction of the software requirement specification states the goals and objectives of the software, describing it in the context of the computer-based system. The information description provides a detailed description of the problem that the software must solve; information content, flow and structure are documented. Validation criteria are probably the most important, and ironically the most often neglected, section of the software requirement specification. The software requirement specification can be used for different purposes; the major uses follow.

Statement of user need
A main purpose of the product specification is to define the needs of the product's user. Sometimes the specification may be part of a contract signed between the producer and the user; it could also form part of the user manuals. Here, however, we have developed the system out of curiosity, to see how this project can help the institution keep its information up to date and editable. In this case, careful analysis, involving more interaction with the user, should be devoted to reaching a clear statement of requirements, in order to avoid possible misunderstandings. Sometimes, at the beginning of a project, even the user has no clear idea of what exactly the desired product is. Think, for instance, of the user interface: a user with no previous knowledge of computer products may not appreciate the difference between, say, a menu-driven and a command-line interface. Even an exact formulation of system functions and performance may be missing from an initial description produced by an inexperienced user.



A statement of the requirements for the implementation
Specifications are also used as a reference point during product implementation. In fact, the ultimate goal of the implementation is to build a product that meets the specification. Thus the implementers use the specifications during design to make design decisions, and during the verification activity to check that the implementation complies with the specification.



6.1 Introduction
System design is the process of developing specifications for a candidate system that meet the criteria established in the system analysis. A major step in system design is the preparation of the input forms, i.e. the windows where input is entered, and of the output generated in the form of student details. The main objective of system design is to make the system user friendly. System design involves various stages:
 Data entry
 Data correction
 Data deletion

6.2 Data Flow Diagram
A DFD shows the flow of data. These diagrams help in understanding the basic working of the system and in recognizing its various parts and their interrelationships. A DFD is a way of expressing system requirements in graphical form, which leads to modular design. Also known as a bubble chart, it has the purpose of clarifying system requirements and identifying the major transformations that will become programs in system design. It is therefore the starting point of the design phase, functionally decomposing the requirement specifications down to the lowest level of detail. Data flow diagrams describe how the system transforms information: they define how information is processed and stored and identify how the information flows through the processes.

When building a data flow diagram, the following items should be considered:
 Where does the data that passes through the system come from, and where does it go?
 What happens to the data once it enters the system (i.e., the inputs) and before it leaves the system (i.e., the outputs)?
 What delays occur between the inputs and outputs (i.e., identifying the need for data stores)?
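These three questions map directly onto code: an incoming data flow, a process that transforms it, and a data store where the result rests. A minimal, hypothetical Python sketch (all names invented, for illustration only):

```python
# DFD elements in miniature: arrow (incoming flow), bubble (process),
# open rectangle (data store).

orders_store = []  # the data store: "data at rest"

def receive_order(raw):
    # The process (bubble) transforms the incoming data flow...
    order = {"item": raw["item"].strip(), "qty": int(raw["qty"])}
    if order["qty"] <= 0:
        raise ValueError("quantity must be positive")
    # ...and deposits the transformed record in the data store.
    orders_store.append(order)
    return order
```

The delay between the input arriving and the output being consumed is exactly what the data store absorbs.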


DFD Symbols
In the DFD, there are four symbols:
1) A square defines a source (originator) or destination of system data.
2) An arrow identifies data flow, i.e. data in motion; it is a pipeline through which information flows.
3) A circle or bubble represents a process that transforms incoming data flow(s) into outgoing data flow(s).
4) An open rectangle is a data store: data at rest, or a temporary repository of data.


6.2.1 DFD 1(Level 1)
Once the authenticity of the administrator has been verified, the Project Management System checks whether the project is new or existing, and then provides information about the project: its cost, time duration, current status, and the phase it is in.
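The Level 1 flow described above (verify the administrator, then branch on new versus existing project) can be sketched in code. This is a hypothetical Python illustration of the diagram's logic only; all names, credentials and fields are invented:

```python
# Sketch of the Level 1 DFD logic: authenticate, then either add a new
# project to the master table or update an existing one.

ADMIN_CREDENTIALS = {"admin": "secret"}  # invented credentials
project_master = {}  # stands in for the "Project Master" table

def login(username, password):
    # Verify the administrator's username and password.
    return ADMIN_CREDENTIALS.get(username) == password

def save_project(name, cost, duration, status, phase):
    details = {"cost": cost, "duration": duration,
               "status": status, "phase": phase}
    if name in project_master:
        # Existing project: update its details.
        project_master[name].update(details)
    else:
        # New project: add its details.
        project_master[name] = details
    return project_master[name]
```

The same branch (new versus existing) is what the diagram expresses with its "Adding New Details" and "Updating Of Details" flows.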

[DFD Level 1 figure: the administrator supplies a username and password; project allotment then branches into new and existing projects, with new details added or existing details updated in the table Project Master. Data items shown: project info (money requirement, time duration), project status and phase, project name, project cost, project date, current status and stage. Ongoing project details are also reported.]


6.2.3 DFD 2 (Level 2)
In each phase it is checked whether the time duration and money requirements are being fulfilled. If not, reports are issued explaining why; if all is in order, the project enters the next phase. After the completion of all phases the project is complete, and reports are again generated. There is also a facility for users to give feedback.

[DFD Level 2 figure: a new project's details are entered into the project details table; money and time requirements are checked and the number of phases is decided; money is allotted to each phase and the project enters Phase I. Within each phase, time and money requirements are checked and, if satisfactory, the project proceeds to the next phase, with feedback and messages considered along the way. Once all phases are complete the project is accomplished and reports are generated.]


.NET is both a business strategy and a collection of programming support for what are known as Web services: the ability to use the Web, rather than your own computer, for various services. Its goal is to provide individual and business users with a seamlessly interoperable and Web-enabled interface for applications and computing devices, and to make computing activities increasingly Web browser-oriented. The .NET platform includes servers; building-block services, such as Web-based data storage; and device software. The .NET platform was designed to provide:
 The ability to make the entire range of computing devices work together, with user information automatically updated and synchronized on all of them
 Increased interactive capability for Web sites, enabled by greater use of XML (Extensible Markup Language) rather than HTML
 A premium online subscription service featuring customized access and delivery of products and services from a central starting point for the management of various applications (such as e-mail) and software (such as Office .NET)
 Centralized data storage, which increases efficiency and ease of access to information, as well as synchronization of information among users and devices
 The ability to integrate various communications media, such as e-mail, faxes, and telephones
 For developers, the ability to create reusable modules, which should increase productivity and reduce the number of programming errors
 A component model for the Internet
 A new approach to building large-scale distributed systems for the Internet
 The capability to integrate multiple devices
 A platform built around the tools and protocols (XML, WSDL, SOAP, HTTP) that are becoming standard on the Internet



7.1 .NET Framework
.NET Framework Architecture

Principal design features

Interoperability
Because interaction between new and older applications is commonly required, the .NET Framework provides means to access functionality that is implemented in programs executing outside the .NET environment. Access to COM components is provided in the System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework; access to other functionality is provided using the P/Invoke feature.

Common Runtime Engine
The Common Language Runtime (CLR) is the virtual machine component of the .NET Framework. All .NET programs execute under the supervision of the CLR, guaranteeing certain properties and behaviors in the areas of memory management, security, and exception handling.

Language Independence
The .NET Framework introduces a Common Type System, or CTS. The CTS specification defines all possible data types and programming constructs supported by the CLR and how they may or may not interact with each other. Because of this feature, the .NET Framework supports the exchange of instances of types between programs written in any of the .NET languages.

Base Class Library
The Base Class Library (BCL), part of the Framework Class Library (FCL), is a library of functionality available to all languages using the .NET Framework. The BCL provides classes which encapsulate a number of common functions, including file reading and writing, graphic rendering, database interaction and XML document manipulation.

Simplified Deployment
The .NET Framework includes design features and tools that help manage the installation of computer software, ensuring that it does not interfere with previously installed software and that it conforms to security requirements.

Security
The design is meant to address some of the vulnerabilities, such as buffer overflows, that have been exploited by malicious software. Additionally, .NET provides a common security model for all applications.

Portability
The design of the .NET Framework allows it, in theory, to be platform-agnostic and thus cross-platform compatible: a program written to use the framework should run without change on any type of system for which the framework is implemented.



7.2 Architecture
[Figure: C#, VB.NET and J# source code is compiled to Common Intermediate Language, which executes on the Common Language Runtime.]
Visual overview of the Common Language Infrastructure (CLI)

Common Language Infrastructure (CLI)
The core aspects of the .NET Framework lie within the Common Language Infrastructure (CLI). The purpose of the CLI is to provide a language-neutral platform for application development and execution, including functions for exception handling, garbage collection, security, and interoperability. Microsoft's implementation of the CLI is called the Common Language Runtime (CLR).

Assemblies
The intermediate CIL code is housed in .NET assemblies. As mandated by the specification, assemblies are stored in the Portable Executable (PE) format, common on the Windows platform for all DLL and EXE files. The assembly consists of one or more files, one of which must contain the manifest, which holds the metadata for the assembly. The complete name of an assembly (not to be confused with the filename on disk) contains its simple text name, version number, culture, and public key token. The public key token is a hash of the public key with which the assembly is signed, so two assemblies carrying the same public key token are treated by the framework as coming from the same publisher.

Metadata
All CLI code is self-describing through .NET metadata. The CLR checks the metadata to ensure that the correct method is called. Metadata is usually generated by language compilers, but developers can create their own metadata through custom attributes. Metadata contains information about the assembly and is also used to implement the reflective programming capabilities of the .NET Framework.

Security
.NET has its own security mechanism with two general features: Code Access Security (CAS), and validation and verification. Code Access Security is based on evidence associated with a specific assembly. Typically the evidence is the source of the assembly (whether it is installed on the local machine or has been downloaded from an intranet or the Internet). Code Access Security uses evidence to determine the permissions granted to the code. Other code can demand that calling code be granted a specified permission. The demand causes the CLR to perform a call-stack walk: every assembly of each method in the call stack is checked for the required permission; if any assembly is not granted the permission, a security exception is thrown.



Microsoft SQL Server 2005
Microsoft SQL Server 2005 is a full-featured relational database management system (RDBMS) that offers a variety of administrative tools to ease the burden of database development, maintenance and administration. This section covers six of the more frequently used tools: Enterprise Manager, Query Analyzer, SQL Profiler, Service Manager, Data Transformation Services and Books Online.

8.1 Components of Microsoft SQL Server 2005
 Enterprise Manager: the main administrative console for SQL Server installations. It provides a graphical "birds-eye" view of all of the SQL Server installations on your network. You can perform high-level administrative functions that affect one or more servers, schedule common maintenance tasks, or create and modify the structure of individual databases.
 Query Analyzer: offers a quick and dirty method for performing queries against any of your SQL Server databases. It's a great way to quickly pull information out of a database in response to a user request, test queries before implementing them in other applications, create or modify stored procedures, and execute administrative tasks.
 SQL Profiler: provides a window into the inner workings of your database. You can monitor many different event types and observe database performance in real time. SQL Profiler allows you to capture and replay system "traces" that log various activities. It's a great tool for optimizing databases with performance issues or troubleshooting particular problems.
 Service Manager: used to control the MSSQLServer (the main SQL Server process), MSDTC (Microsoft Distributed Transaction Coordinator) and SQLServerAgent processes. An icon for this service normally resides in the system tray of machines running SQL Server. You can use Service Manager to start, stop or pause any one of these services.
 Data Transformation Services (DTS): provide an extremely flexible method for importing and exporting data between a Microsoft SQL Server installation and a large variety of other formats. The most commonly used DTS application is the "Import and Export Data" wizard found in the SQL Server program group.
 Books Online: an often-overlooked resource provided with SQL Server that contains answers to a variety of administrative, development and installation questions. It's a great resource to consult before turning to the Internet or technical support.


Upgrade and migration paths for SQL Server 2005 components:
 Database Engine. Upgrade tool: Setup. Migration method: administrators perform a side-by-side installation and then database backup/restore or detach/attach.
 Analysis Services. Upgrade tool: Setup. Migration tool: Migration Wizard. Migration method: the Migration Wizard migrates objects, but optimization and client-access upgrades are required.
 Integration Services. Upgrade tool: none. Migration tool: DTS Migration Wizard. Migration method: the DTS Migration Wizard converts 50 to 70 percent of the tasks, but some manual migration is required; runtime DTS DLLs are available in SSIS; package re-architecture is recommended.
 Reporting Services. Upgrade tool: Setup. Migration method: administrators perform a side-by-side installation, and reports are deployed on the new instance.
 Notification Services. Upgrade tool: none. Migration method: upgrade of Notification Services instances occurs during installation.

Microsoft SQL Server 2005 Overview

8.2 Architecture
Protocol layer
The protocol layer implements the external interface to SQL Server. All operations that can be invoked on SQL Server are communicated to it via a Microsoft-defined format called Tabular Data Stream (TDS). TDS is an application-layer protocol used to transfer data between a database server and a client.

Data storage
The main unit of data storage is a database, which is a collection of tables with typed columns. SQL Server supports different data types, including primary types such as Integer, Float, Decimal, Char (for character strings), Varchar (variable-length character strings), Binary (for unstructured blobs of data) and Text (for textual data), among others. It also allows user-defined composite types (UDTs) to be defined and used. SQL Server also makes server statistics available as virtual tables and views (called Dynamic Management Views, or DMVs). A database can also contain other objects including views, stored procedures, indexes and constraints, in addition to tables, along with a transaction log. A SQL Server database can contain a maximum of 2^31 objects, and can span multiple OS-level files with a maximum file size of 2^20 TB.[18] The data in the database are stored in primary data files with an .mdf extension. Secondary data files, identified by an .ndf extension, allow the data of a single database to be spread across more than one file. Log files are identified by the .ldf extension.
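The file layout described above can be sketched with a CREATE DATABASE statement; the database name, logical file names and paths below are hypothetical:

```sql
-- Minimal sketch: one primary data file (.mdf), one secondary data
-- file (.ndf) and one transaction log file (.ldf). Names, paths and
-- sizes are illustrative only.
CREATE DATABASE ShopDB
ON PRIMARY
    (NAME = ShopDB_data,  FILENAME = 'C:\Data\ShopDB.mdf',  SIZE = 10MB),
    (NAME = ShopDB_data2, FILENAME = 'D:\Data\ShopDB2.ndf', SIZE = 10MB)
LOG ON
    (NAME = ShopDB_log,   FILENAME = 'C:\Data\ShopDB.ldf',  SIZE = 5MB);
```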

Buffer management
SQL Server buffers pages in RAM to minimize disc I/O. Any 8 KB page can be buffered in-memory, and the set of all pages currently buffered is called the buffer cache. The amount of memory available to SQL Server decides how many pages will be cached in memory. The buffer cache is managed by the Buffer Manager. Either reading from or writing to any page copies it to the buffer cache. Subsequent reads or writes are redirected to the in-memory copy, rather than the on-disc version.

Logging and Transaction
SQL Server ensures that any change to the data is ACID-compliant, i.e., it uses transactions to ensure that any operation either completes totally or is undone if it fails, and never leaves the database in an intermediate state. Using transactions, a sequence of actions can be grouped together, with the guarantee that either all actions will succeed or none will. SQL Server implements transactions using a write-ahead log. Any change made to a page updates the in-memory cache of the page; simultaneously, all the operations performed are written to a log, along with the ID of the transaction the operation was a part of. Each log entry is identified by an increasing Log Sequence Number (LSN), which ensures that no event overwrites another. SQL Server ensures that the log is written to disc before the actual page is written back. This enables SQL Server to ensure the integrity of the data even if the system fails: if both the log and the page were written before the failure, the entire data is on persistent storage and integrity is ensured.
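As a minimal illustration of grouping actions into an all-or-nothing unit, the following T-SQL uses the TRY/CATCH construct introduced in SQL Server 2005; the table and column names are hypothetical:

```sql
BEGIN TRY
    BEGIN TRANSACTION;
    -- Both statements succeed together or neither takes effect.
    UPDATE Products SET Stock = Stock - 1 WHERE ProductID = 42;
    INSERT INTO Orders (ProductID, Quantity) VALUES (42, 1);
    COMMIT TRANSACTION;      -- changes become permanent together
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;    -- the write-ahead log is used to undo both
END CATCH;
```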

Concurrency and locking
SQL Server allows multiple clients to use the same database concurrently. As such, it needs to control concurrent access to shared data, to ensure data integrity when multiple clients update the same data, or when clients attempt to read data that is in the process of being changed by another client. SQL Server provides two modes of concurrency control: pessimistic concurrency and optimistic concurrency. When pessimistic concurrency control is being used, SQL Server controls concurrent access by using locks. Locks can be either shared or exclusive. An exclusive lock grants the user exclusive access to the data: no other user can access the data as long as the lock is held. Shared locks are used when data is being read: multiple users can read from data locked with a shared lock, but cannot acquire an exclusive lock; a writer would have to wait for all shared locks to be released. Locks can be applied at different levels of granularity: on entire tables, pages, or even on a per-row basis. For indexes, a lock can cover either the entire index or individual index leaves.
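The shared and exclusive behaviour described above is normally applied automatically, but it can be made explicit with table hints; a sketch, with hypothetical table names:

```sql
-- Writer: request an update lock up front so the later UPDATE
-- does not have to wait for, or deadlock with, other readers
-- that also intend to write.
BEGIN TRANSACTION;
SELECT Price FROM Products WITH (UPDLOCK, ROWLOCK)
WHERE ProductID = 42;
UPDATE Products SET Price = Price * 1.10 WHERE ProductID = 42;
COMMIT TRANSACTION;

-- Reader: HOLDLOCK keeps the shared lock until the transaction
-- ends; other readers may proceed, but writers must wait.
SELECT Price FROM Products WITH (HOLDLOCK) WHERE ProductID = 42;
```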

Data retrieval
The main mode of retrieving data from a SQL Server database is querying for it. The query is expressed using a variant of SQL called T-SQL, a dialect Microsoft SQL Server shares with Sybase SQL Server due to their shared legacy. The query declaratively specifies what is to be retrieved. It is processed by the query processor, which figures out the sequence of steps necessary to retrieve the requested data. The sequence of actions necessary to execute a query is called a query plan. There might be multiple ways to process the same query; SQL Server includes a cost-based query optimizer, which tries to minimize the cost, in terms of the resources it will take, of executing the query. Given a query, the query optimizer looks at the database schema, the database statistics and the system load at that time. SQL Server also allows stored procedures to be defined. Stored procedures are parameterized T-SQL queries that are stored in the server itself (and not issued by the client application, as is the case with general queries). Stored procedures can accept values sent by the client as input parameters, and send back results as output parameters.
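A stored procedure with one input and one output parameter might look like the following sketch; all object names are hypothetical:

```sql
CREATE PROCEDURE dbo.GetProductStock
    @ProductID int,          -- input parameter supplied by the client
    @Stock     int OUTPUT    -- output parameter returned to the client
AS
BEGIN
    SELECT @Stock = Stock FROM Products WHERE ProductID = @ProductID;
END
GO

-- Invocation, e.g. from Query Analyzer:
DECLARE @Qty int;
EXEC dbo.GetProductStock @ProductID = 42, @Stock = @Qty OUTPUT;
SELECT @Qty AS StockOnHand;
```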

Microsoft SQL Server 2005 includes a component named SQL CLR via which it integrates with the .NET Framework. Unlike most other applications that use the .NET Framework, SQL Server itself hosts the .NET runtime, i.e., the memory, threading and resource-management requirements of the .NET Framework are satisfied by SQLOS rather than by the underlying Windows operating system. SQLOS provides deadlock detection and resolution services for .NET code as well. With SQL CLR, stored procedures and triggers can be written in any managed .NET language, including C# and VB.NET. Managed code can also be used to define UDTs (user-defined types), which can persist in the database. Managed code is compiled to .NET assemblies and, after being verified for type safety, registered in the database. After that, it can be invoked like any other procedure.
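Registering managed code follows the sequence described above; a hedged sketch, with the assembly, class and method names invented for illustration:

```sql
-- CLR integration is off by default in SQL Server 2005.
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;

-- Register a compiled, type-safety-verified .NET assembly...
CREATE ASSEMBLY ShopClrLib FROM 'C:\Libs\ShopClrLib.dll';

-- ...and expose one of its static methods as a T-SQL stored procedure.
CREATE PROCEDURE dbo.FormatInvoice @OrderID int
AS EXTERNAL NAME ShopClrLib.[ShopClrLib.Invoices].FormatInvoice;
```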

8.3 Features of SQL Server 2005
 User-Defined Functions: SQL Server 2000 introduces the long-awaited support for user-defined functions. User-defined functions can take zero or more input parameters and return a single value, either a scalar value like the system-defined functions, or a table result. Table-valued functions can be used anywhere table or view expressions can be used in queries, and they can perform more complex logic than is allowed in a view.
 Indexed Views: Views are often used to simplify complex queries, and they can contain joins and aggregate functions. In SQL Server 2000 Enterprise or Developer Edition, you can define indexes on views to improve query performance against the view. When an index is created on a view, the result set of the view is stored and indexed in the database. Existing applications can take advantage of the performance improvements without needing to be modified.
 Distributed Partitioned Views: SQL Server 2000 expands the ability to create partitioned views by allowing you to horizontally partition tables across multiple SQL Servers. This feature helps you scale out one database server to multiple database servers, while making the data appear as if it comes from a single table on a single SQL Server. In addition, partitioned views are now updatable.
 New Data Types: SQL Server 2000 introduces three new data types: bigint (an 8-byte integer) and sql_variant (which can store values of different data types) can be used as data types for local variables, stored procedure parameters and return values, user-defined function parameters and return values, or table columns; the table type can be used only for local variables and user-defined function return values.
 Text in Row Data: SQL Server 2000 provides a new text in row table option that allows small text and image data values to be placed directly in the data row, instead of requiring a separate data page. This can reduce the amount of space required to store small text and image data values, as well as reduce the amount of I/O required to retrieve rows containing such values.
 Cascading RI Constraints: SQL Server 2000 provides the ability to specify the action to take when a column referenced by a foreign key constraint is updated or deleted. You can still abort the update or delete if related foreign key records exist by specifying the NO ACTION option, or you can specify the new CASCADE option, which will cascade the update or delete operation to the related foreign key records.
 Multiple SQL Server Instances: SQL Server 2000 provides support for running multiple instances of SQL Server on the same system. This allows you to simultaneously run one instance of SQL Server 6.5 or 7.0 along with one or more instances of SQL Server 2000. Each SQL Server instance runs independently of the others and has its own set of system and user databases, security configuration, and so on. Applications can connect to the different instances in the same way they connect to different SQL Servers on different machines.
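Two of the features above can be sketched briefly: a scalar user-defined function and a cascading referential-integrity constraint (all object names hypothetical):

```sql
-- Scalar user-defined function, usable wherever a scalar
-- expression is allowed in a query.
CREATE FUNCTION dbo.PriceWithTax (@Price money, @Rate float)
RETURNS money
AS
BEGIN
    RETURN @Price * (1 + @Rate);
END
GO

-- Cascading RI: deleting a row from Orders automatically deletes
-- the related OrderItems rows instead of raising an error.
CREATE TABLE OrderItems (
    ItemID  int PRIMARY KEY,
    OrderID int REFERENCES Orders(OrderID) ON DELETE CASCADE
);
```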



Various forms used in the ONLINE SHOPPING SYSTEM:

9.1 Login Form



9.2 Signup Form



9.3 Master Form



9.4 Laptops Form



9.5 iPods Form



9.6 Pendrive Form



9.7 CD/DVD Form



9.8 Products Form



Software testing is the process used to measure the quality of developed computer software. Usually, quality is constrained to topics such as:
 Correctness
 Completeness
 Security
But quality can also include more technical requirements such as:
 Capability
 Reliability
 Efficiency
 Portability
 Maintainability
 Compatibility
 Usability

Testing is a process of technical investigation intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behavior of the product against a specification. Testing presents an interesting challenge for software engineers, who attempt to take software from an abstract concept to an acceptable implementation. In testing, the engineer creates a series of test cases intended to uncover errors. A good test is one that has the highest probability of finding an undiscovered error. The term error refers to the difference between the actual output of the system and the correct output. A fault is a condition that causes the software to fail to perform its required function. Different levels of testing were employed to make the software error-free, fault-free and reliable.



11.1 Types of Testing:
11.1.1 Unit Testing
Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. In an object-oriented environment, this is usually at the class level. In computer programming, unit testing is a procedure used to validate that individual units of source code are working properly. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function or procedure, while in object-oriented programming the smallest unit is a method, which may belong to a base/super class, abstract class or derived/child class. Ideally, each test case is independent of the others; mock objects and test harnesses can be used to assist in testing a module in isolation. Unit testing is typically done by developers and not by end-users.

Benefits of Unit Testing
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. A unit test provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits:
 Facilitates change
 Simplifies integration

Facilitates change
Unit testing allows the programmer to change the code at a later date and make sure the module still works correctly (i.e. regression testing). The procedure is to write test cases for all functions and methods so that whenever a change causes a fault, it can be quickly identified and fixed. Readily available unit tests make it easy for the programmer to check whether a piece of code is still working properly. Good unit test design produces test cases that cover all paths through the unit, with attention paid to loop conditions. In continuous unit testing environments, through the inherent practice of sustained maintenance, unit tests continue to accurately reflect the intended use of the executable and code in the face of any change. Depending upon established development practices and unit test coverage, up-to-the-second accuracy can be maintained.

Simplifies integration
Unit testing helps to eliminate uncertainty in the units themselves and can be used in a bottom-up testing approach. By testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier.



11.1.2 Integration Testing
Integration testing (sometimes called Integration and Testing, abbreviated I&T) is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing. Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.

The purpose of integration testing is to verify the functional, performance and reliability requirements placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using black box testing, with success and error cases simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interfaces. Test cases are constructed to test that all components within assemblages interact correctly, for example across procedure calls or process activations; this is done after testing individual modules, i.e. unit testing.

11.1.3 System Testing
System testing of software is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such should require no knowledge of the inner design of the code or logic. As a rule, system testing takes as its input all of the "integrated" software components that have successfully passed integration testing, together with the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and within the system as a whole.



Testing the Whole System
System testing is performed on the entire system against the Functional Requirement Specification(s) (FRS) and/or the System Requirement Specification (SRS). Moreover, system testing is an investigatory testing phase, where the focus is to have an almost destructive attitude and to test not only the design, but also the behavior and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s). One could view system testing as the final destructive testing phase before user acceptance testing.

11.1.4 Acceptance Testing
In engineering and its various subdisciplines, acceptance testing is black-box testing performed on a system (e.g. software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery. In some engineering subdisciplines it is known as functional testing, black-box testing, release acceptance, QA testing, application testing, confidence testing, final testing, validation testing, usability testing, or factory acceptance testing. In most environments, acceptance testing by the system provider is distinguished from acceptance testing by the customer (the user or client) prior to accepting transfer of ownership. In such environments, acceptance testing performed by the customer is known as beta testing, user acceptance testing (UAT), end-user testing, site (acceptance) testing, or field (acceptance) testing.

11.1.4.1 Alpha testing: An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development; minor design changes may still be made as a result of such testing.
11.1.4.2 Beta testing: Testing typically done by end-users or others; the final test before releasing the application for use.

11.2 Basic Methods of Testing
11.2.1 White Box Testing
White box testing is performed to reveal problems with the internal structure of a program. This requires the tester to have detailed knowledge of the internal structure. A common goal of white box testing is to ensure that the test cases exercise every path through the program. A fundamental strength that all white box strategies share is that the entire software implementation is taken into account during testing, which facilitates error detection even when the software specification is vague or incomplete. The effectiveness or thoroughness of white box testing is commonly expressed in terms of test or code coverage metrics, which measure the fraction of code exercised by test cases. This testing is based on knowledge of the internal logic of an application's code, and is also known as glass box testing. Internal software and code workings should be known for this type of testing. Tests are based on coverage of code statements, branches, paths and conditions.

11.2.2 Black Box Testing
Black box tests are performed to assess how well a program meets its requirements, looking for incorrect or missing functionality. Functional tests typically exercise code with valid or nearly valid input for which the expected output is known. Performance tests evaluate response time, memory usage, throughput and execution time. Reliability tests monitor system response to representative user input, counting failures over time to measure or certify reliability. Black box testing uncovers the following types of errors:
 Incorrect or missing functions
 Interface errors
 Errors in external database access
 Performance errors
 Initialization and termination errors



12.1 Test Plan
12.1.1 Test Plan Identifier
Some type of unique, company-generated number to identify this test plan, its level and the level of software that it is related to. Preferably the test plan level will be the same as the related software level. The number may also identify whether the test plan is a Master plan, a Level plan, an integration plan or whichever plan level it represents. This is to assist in coordinating software and testware versions within configuration management.
 Unique "short" name for the test plan
 Version date and version number of the plan
 Version author and contact information
 Revision history
Keep in mind that test plans are like other software documentation: they are dynamic in nature and must be kept up to date. Therefore they will have revision numbers. You may want to include author and contact information, including the revision history, as part of either the identifier section or the introduction.

12.1.2 Introduction
State the purpose of the plan, possibly identifying the level of the plan (master etc.). This is essentially the executive summary part of the plan. You may want to include references to other plans, documents or items that contain information relevant to this project/process. If preferable, you can create a references section to contain all reference documents:
 Project Authorization
 Project Plan
 Quality Assurance Plan
 Configuration Management Plan
 Relevant Policies and Standards
 For lower-level plans, reference to higher-level plan(s)
Identify the scope of the plan in relation to the software project plan that it relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (analysis and reviews), and possibly the process to be used for change control and communication and coordination of key activities. As this is the executive summary, keep information brief and to the point.

12.1.3 Test Items
These are the things you intend to test within the scope of this test plan: essentially, a list of what is to be tested. This can be developed from the software application test objectives inventories as well as from other sources of documentation and information, such as:
 Requirements specifications
 Design specifications
 Users guides
 Operations manuals or guides
 Installation manuals or procedures
This can be controlled and defined by your local Configuration Management (CM) process, if you have one. This information includes version numbers and configuration requirements where needed (especially if multiple versions of the product are supported). It may also include key delivery schedule issues for critical elements. Identify any critical steps required before testing can begin as well, such as how to obtain the required items. This section can be oriented to the level of the test plan: for higher levels it may be by application or functional area, for lower levels by program, unit, module or build. References to existing incident reports or enhancement requests should also be included. This section can also indicate items that will be excluded from testing.

12.1.4 Features to Be Tested
This is a listing of what is to be tested from the USER'S viewpoint of what the system does. This is not a technical description of the software but a USER'S view of the functions. It is recommended to identify the test design specification associated with each feature or set of features. Set the level of risk for each feature, using a simple rating scale such as High, Medium and Low (H, M, L). These levels are understandable to a user. You should be prepared to discuss why a particular level was chosen.


This is another place where the test objectives inventories can be used to help identify the sets of objectives to be tested together (this takes advantage of the hierarchy of test objectives). Depending on the level of the test plan, specific attributes (objectives) of a feature or set of features may be identified.

Features Not to Be Tested
This is a listing of what is NOT to be tested, from both the user's viewpoint of what the system does and a configuration management/version control view. This is not a technical description of the software but a USER'S view of the functions. Identify WHY each feature is not to be tested; there can be any number of reasons:
 Not to be included in this release of the software.
 Low risk; has been used before and is considered stable.
 Will be released but not tested or documented as a functional part of this release of the software.

12.1.5 Approach
This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master, acceptance, etc.) and should be in agreement with all higher- and lower-level plans. Overall rules and processes should be identified:
 Are any special tools to be used, and what are they?
 Will the tools require special training?
 What metrics will be collected?
 At which level is each metric to be collected?
 How is configuration management to be handled?
 How many different configurations will be tested?
 Hardware
 Software
 Combinations of HW, SW and other vendor packages
 What are the regression test rules? How much will be done, and how much at each test level?
 Will regression testing be based on the severity of defects detected?
 How will elements in the requirements and design that do not make sense or are untestable be processed?
 If this is a master test plan, the overall project testing approach and coverage requirements must also be identified.

Roll No: 605211799  Specify if there are special requirements for the testing.  Only the full component will be tested.  A specified segment of grouping of features/components must be tested together.  Other information that may be useful in setting the approach are :  MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available.  How will meetings and other organizational processes be handled.  Are there any significant constraints to testing.  Resource availability  Deadlines  Are there any recommended testing techniques that should be used, if so why?

Item Pass/Fail Criteria
What are the completion criteria for this plan? This is a critical aspect of any test plan and should be appropriate to the level of the plan. The goal is to identify whether or not a test item has passed the test process.
At the unit test level this could be items such as:
 All test cases completed.
 A specified percentage of cases completed, with a percentage containing some number of minor defects.
 A code coverage tool indicates all code covered.
At the master test plan level this could be items such as:
 All lower-level plans completed.
 A specified number of plans completed without errors and a percentage with minor defects.
This could be an individual test-case-level criterion, a unit-level plan criterion, or a set of general functional requirements for higher-level plans. What is the number and severity of defects located? Is it possible to compare this to the total number of defects? This may be impossible, as some defects are never detected.
 A defect is something that may cause a failure, and it may be acceptable to leave it in the application.
 A failure is the result of a defect as seen by the user: the system crashes, etc.



12.1.6 Suspension Criteria and Resumption Requirements
Know when to pause in a series of tests, or possibly terminate a set of tests, and once testing is suspended, how it is resumed and what the potential impacts are (e.g. regression tests). If the number or type of defects reaches a point where the follow-on testing has no value, it makes no sense to continue the test; you are just wasting resources.
- Specify what constitutes stoppage for a test or series of tests, and what the acceptable level of defects is that will allow testing to proceed past the defects.
- Testing after a truly fatal error will generate conditions that may be identified as defects but are in fact ghost errors caused by the earlier defects that were ignored.
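A simple way to make the suspension rule mechanical is to compare open defect counts against agreed thresholds. The severity names and limits below are invented for illustration; the real numbers must be agreed with the project team:

```python
# Illustrative suspension rule: suspend testing when open defects of
# any severity reach the agreed limit. Limits here are hypothetical.
SUSPEND_LIMITS = {"critical": 1, "major": 5, "minor": 20}

def should_suspend(open_defects):
    """open_defects: dict mapping severity -> count of unresolved defects."""
    return any(open_defects.get(sev, 0) >= limit
               for sev, limit in SUSPEND_LIMITS.items())

# One critical defect is enough to suspend; resume only after it is
# fixed and the affected cases are re-run (regression).
print(should_suspend({"critical": 1, "minor": 3}))  # True
print(should_suspend({"minor": 3}))                 # False
```

The resumption requirement pairs with this: testing resumes only when the counts drop back below the limits and the blocked cases have been re-executed.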

Test Deliverables
What is to be delivered as part of this plan?
- Test plan
- Test design specifications
- Test case specifications
- Test procedure specifications
- Test item transmittal reports
- Test logs
- Test incident reports
- Test summary reports
Test data can also be considered a deliverable, as can any test tools created to aid in the testing process. One thing that is not a test deliverable is the software itself; that is listed under test items and is delivered by development. These items need to be identified in the overall project plan as deliverables (milestones) and should have the appropriate resources assigned to them in the project tracking system. This will ensure that the test process has visibility within the overall project tracking process, and that the test tasks to create these deliverables are started at the appropriate time. Any dependencies between these deliverables and their related software deliverables should be identified. If the predecessor document is incomplete or unstable, the test products will suffer as well.

Test Tasks
There should be tasks identified for each test deliverable. Include all inter-task dependencies, skill levels, etc. These tasks should also have corresponding tasks and milestones in the overall project tracking process (tool). If this is a multi-phase process, or if the application is to be released in increments, there may be parts of the application that this plan does not address. These areas need to be identified to avoid any confusion should defects be reported back on those future functions. This will also allow the users and testers to avoid incomplete functions and prevent wasting resources chasing non-defects. If the project is being developed as a multi-party process, this plan may only cover a portion of the total functions/features. This needs to be identified so that those other areas have plans developed for them, and to avoid wasting resources tracking defects that do not relate to this plan. When a third party is developing the software, this section may contain descriptions of the test tasks belonging to both the internal and external groups.

Environmental Needs
Are there any special requirements for this test plan, such as:
- Special hardware such as simulators, static generators, etc.
- How will test data be provided? Are there special collection requirements or specific ranges of data that must be provided?
- How much testing will be done on each component of a multi-part feature?
- Special power requirements.
- Specific versions of other supporting software.
- Restricted use of the system during testing.
- Tools (both purchased and created).
- Communications:
  - Web
  - Client/Server
  - Network topology (external, internal, bridges/routers)
- Security


Responsibilities
Who is in charge? There should be a responsible person for each aspect of the testing and the test process. Each test task identified should also have a responsible person assigned. This includes all areas of the plan; here are some examples:
- Setting risks.
- Selecting features to be tested and not tested.
- Setting the overall strategy for this level of plan.
- Ensuring all required elements are in place for testing.
- Providing for resolution of scheduling conflicts, especially if testing is done on the production system.
- Who provides the required training?
- Who makes the critical go/no-go decisions for items not covered in the test plans?
- Who delivers each item in the test items section?

Staffing and Training Needs
Identify all critical training requirements and concerns:
- Training on the product.
- Training for any test tools to be used.

Schedule
The schedule should be based on realistic and validated estimates. If the estimates for the development of the application are inaccurate, the entire project plan will slip, and the testing is part of the overall project plan. As we all know, the first area of a project plan to get cut when it comes to crunch time at the end of a project is the testing. It usually comes down to the decision, "Let's put something out even if it does not really work all that well". And, as we all know, this is usually the worst possible decision.
- How slippage in the schedule is to be handled should also be addressed here.
- If the users know in advance that a slippage in development will cause a slippage in the test and the overall delivery of the system, they may be a little more tolerant, knowing it is in their interest to get a better tested application.
- By spelling out the effects here, you have a chance to discuss them in advance of their actual occurrence. You may even get the users to agree to a few defects in advance if the schedule slips.
- At this point all relevant milestones should be identified, with their relationship to the development process. This will also help in identifying and tracking potential slippage in the schedule caused by the test process.
- It is always best to tie all test dates directly to their related development activity dates. This prevents the test team from being perceived as the cause of a delay. For example: if system testing is to begin after delivery of the final build, then system testing begins the day after delivery. If the delivery is late, system testing starts the day after the actual delivery, not on a specific calendar date.
There are many elements to be considered when estimating the effort required for testing. It is critical that as much information as possible goes into the estimate as soon as possible, in order to allow for accurate test planning.
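Tying test dates to development dates rather than to the calendar can be expressed as a simple offset computation. The one-day offset and the dates below are only examples:

```python
from datetime import date, timedelta

def system_test_start(actual_delivery, offset_days=1):
    """System testing starts a fixed offset after the ACTUAL build
    delivery, so a late delivery moves the test start date with it
    instead of squeezing the test window."""
    return actual_delivery + timedelta(days=offset_days)

on_time = system_test_start(date(2009, 3, 2))  # delivery as planned
late = system_test_start(date(2009, 3, 9))     # delivery one week late
print(on_time, late)  # the test start slips with the delivery
```

Scheduling this way keeps the test window the same length regardless of when development actually delivers, which is the point the section above makes.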

12.1.7 Risks and Contingencies
What are the overall risks to the project, with an emphasis on the testing process?
- Lack of personnel resources when testing is to begin.
- Lack of availability of required hardware, software, data or tools.
- Late delivery of the software, hardware or tools.
- Delays in training on the application and/or tools.
- Changes to the original requirements or designs.
Specify what will be done for various events. For example: requirements definition will be complete, and if the requirements change, the following actions will be taken:
- The test schedule and development schedule will move out an appropriate number of days. This rarely occurs, as most projects tend to have fixed delivery dates.
- The number of tests performed will be reduced.
- The number of acceptable defects will be increased. (These two items could lower the overall quality of the delivered product.)
- Resources will be added to the test team.
- The test team will work overtime. (This could affect team morale.)
- The scope of the plan may be changed.
- There may be some optimization of resources. This should be avoided if possible, for obvious reasons.
- You could just quit. A rather extreme option, to say the least.


Management is usually reluctant to accept scenarios such as the ones above, even though they have seen them happen in the past. The important thing to remember is that if you do nothing at all, the usual result is that testing is cut back or omitted completely, neither of which should be an acceptable option.

Approvals
Who can approve the process as complete and allow the project to proceed to the next level (depending on the level of the plan)?
- At the master test plan level this may be all involved parties.
- When determining the approval process, keep in mind who the audience is:
  - The audience for a unit test level plan is different from that of an integration, system or master level plan.
  - The levels and types of knowledge at the various levels will be different as well.
  - Programmers are very technical but may not have a clear understanding of the overall business process driving the project.
  - Users may have varying levels of business acumen and very little technical skill.





USER NAME AND PASSWORD

TEST CASE NO: 1
Purpose: To check the validity of user name and password
Test data: user name, password
Prepared by: Tester
Time taken: (recorded during execution)

Step | Action                       | Data                   | Expected Result                                              | Actual Result
1    | Enter user name only         | user name              | Warning message box "Please enter valid password"            |
2    | Enter user name and password | user name and password | Warning message box "Please enter valid user name"           |
3    | Enter user name and password | user name and password | Warning message box "Please enter valid password"            |
4    | Enter password only          | password               | Warning message box "user name should not be blank"          |
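The validation behaviour this test case exercises can be sketched as a small function. The project's actual code is ASP.NET (Visual Studio 2005) validating against SQL Server 2005; the Python below is only an illustration of the expected warning messages, using a hypothetical in-memory credential store:

```python
# Hypothetical credential check illustrating the warning messages the
# test case expects; the real project validates against a Users table
# in SQL Server 2005.
VALID_USERS = {"umesh": "secret123"}  # stand-in for the Users table

def validate_login(username, password):
    """Return the warning text the UI should show, or 'OK' on success."""
    if not username:
        return "user name should not be blank"
    if username not in VALID_USERS:
        return "Please enter valid user name"
    if VALID_USERS[username] != password:
        return "Please enter valid password"
    return "OK"

print(validate_login("", "x"))               # user name should not be blank
print(validate_login("nobody", "x"))         # Please enter valid user name
print(validate_login("umesh", "wrong"))      # Please enter valid password
print(validate_login("umesh", "secret123"))  # OK
```

Each branch corresponds to one row of the test case table, which makes it easy to check that every expected warning message actually has a code path that produces it.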


Project Monitoring System is an online system monitoring the projects undertaken by the Chandigarh Administration. The designing, coding and testing modules of this system have been worked upon and completed. The reporting module is in progress and will be completed by the end of our training program. Our team is now going to work on making the Crystal Reports.



The following books and websites give an overview of this project and were referred to during its preparation.

- Wrox, Beginning .NET 2003
- Wrox, Professional .NET 2003
- Rita Sahoo, Beginners Guide to ASP.NET 2001
- Sams, SQL Server Express Edition 2005
- John Wiley & Sons, SQL Server 2005 Bible


