A

PROJECT REPORT ON

TIME TABLE GENERATOR
Submitted in partial fulfillment of the requirement of UTTARAKHAND TECHNICAL UNIVERSITY, DEHRADUN for the degree of B.Tech in INFORMATION TECHNOLOGY

Submitted by ANISHA VERMA (07070103014), HARSHITA RAI (98070103123), ANKITA ASWAL (07070104017), DIVYA SHARMA (07070106019)

Under the Guidance of Ms. ENA JAIN ASSISTANT PROFESSOR DEPARTMENT OF IT

DEPARTMENT OF INFORMATION TECHNOLOGY DEHRADUN INSTITUTE OF TECHNOLOGY Dehradun MAY 2011


ACKNOWLEDGEMENT

It is a great pleasure to have the opportunity to extend our heartfelt gratitude to everybody who helped us throughout the course of this project. It is a distinct pleasure to express our deep sense of gratitude and indebtedness to our learned supervisor Ms. Ena Jain, Assistant Professor, for her invaluable guidance, encouragement and patient reviews. Only with her continuous inspiration has it been possible to complete this project. We would also like to take this opportunity to present our sincere regards to our teachers Mr. G. P. Saroha (HOD) and Mr. Vivudh Fore for their support and encouragement.

We would also like to thank all the faculty members of the Department for their support and encouragement.

ANISHA VERMA (07070103014)
HARSHITA RAI (98070103123)
ANKITA ASWAL (07070104017)
DIVYA SHARMA (07070106019)

CERTIFICATE

This is to certify that the Project Report entitled "TIME TABLE GENERATOR", which is submitted by the following students in partial fulfillment of the requirement for the award of the degree of B.Tech. in Computer Science and Engineering to Uttarakhand Technical University, Dehradun, is a record of the candidates' own work carried out by them under my supervision. The matter embodied in this report is original and has not been submitted for the award of any other degree.

ANISHA VERMA (07070103014)
HARSHITA RAI (98070103123)
ANKITA ASWAL (07070104017)
DIVYA SHARMA (07070106019)

Supervisor/Guide: Ms. ENA JAIN, Deptt. of IT, Dehradun Institute of Technology, Dehradun
HOD: Prof. G. P. Saroha, Deptt. of CSE/IT, Dehradun Institute of Technology, Dehradun

Table of Contents
1. ABSTRACT
2. INTRODUCTION
   2.1 Brief Overview
   2.2 Objective of the Project
3. SYSTEM FEATURES
   3.1 Feature & Scope of Old and New System
   3.2 Benefit of Proposed System
   3.3 Team Structure
   3.4 Hardware & Software Requirements
4. FEASIBILITY STUDY
   4.1 Economic Feasibility
   4.2 Technical Feasibility
   4.3 Operational Feasibility
5. SYSTEM ANALYSIS & DATA FLOW DIAGRAMS
   5.1 Existing System
   5.2 Proposed System
   5.3 Data Flow Diagram
6. SYSTEM DESIGN
   6.1 System Flow Chart
   6.2 Database Design
7. GRAPHICAL USER INTERFACE
8. SYSTEM TESTING
9. IMPLEMENTATION & TESTING
10. CONCLUSION
11. REFERENCES
12. APPENDIX

ABSTRACT

The project, Time Table Generator, is software for generating conflict-free time tables and is aimed at colleges. Colleges have to prepare time tables for each semester, which used to be a very tedious and painstaking job: the manual method of generating a time table is very cumbersome, requires a lot of time, and does not even completely remove all the conflicts. This project generates time tables that ensure conflict-free allocation of the subjects assigned to the various faculty members. The Time Table Generator is semi-automatic time table scheduling software. The project differentiates between users on the basis of their designation, which also makes it suitable from the security point of view: only the administrator has the authority to create the time table, while faculty members and students can only view it. Each teacher and student is eligible to view his or her own time table once it is finalized for a given semester, but cannot edit it. We also provide a facility to export the generated time table to MS Excel, from where it can be printed.

SYSTEM FEATURES

FEATURE AND SCOPE OF OLD AND NEW SYSTEM
The manual method of generating a time table is very cumbersome, requires a lot of time, and does not even completely remove all the conflicts. This was the old system. The new system will generate time tables semi-automatically and will ensure conflict-free allocation of the subjects assigned to the various faculty members. The system includes an administrative login as well as a student login. Through the student login, one can only view the time table that has been generated, whereas through the administrative login the faculty and the administrative block can log in and perform the various functions. The faculty members are given a provision to register themselves and view the time table; if required, they can also request a change in their lecture timings. The work of the administrative block is to keep track of all the registered faculty members, assign them subjects, and generate the time table for the various sections of students. The administrative block also accepts the requests made by the faculty members and makes changes as per the requirements, so that there is still no conflict in the generated time table. Since all the time tables are currently generated manually in the college, the college can use this project to generate the time table semi-automatically, reducing the chances of conflicts of any kind.

BENEFITS OF PROPOSED SYSTEM
• The first and foremost benefit of the proposed system is that it is useful for the college administrative authorities.
• The generation of the time table will be fast.
• The time table will be accurate.
• The system will provide an error-free time table without any conflicts.
• Resources can be used elsewhere, since the time table will be generated within a few seconds.
• Faculty members have the provision of sending a request for changing the time table according to the timings that best suit them.
• The proposed system is extensible, i.e. it can be extended to generate the time table for as many branches as required.

FEASIBILITY STUDY

Feasibility studies aim to objectively and rationally uncover the strengths and weaknesses of the existing business or proposed venture, the opportunities and threats presented by the environment, the resources required to carry through, and ultimately the prospects for success. In its simplest terms, the two criteria to judge feasibility are the cost required and the value to be attained. As such, a well-designed feasibility study should provide a historical background of the business or project, a description of the product or service, accounting statements, details of the operations and management, marketing research and policies, financial data, legal requirements and tax obligations. Generally, feasibility studies precede technical development and project implementation.

Five common factors (TELOS)
Technology and system feasibility
The assessment is based on an outline design of system requirements in terms of input, processes, output, fields, programs, and procedures. This can be quantified in terms of volumes of data, trends, frequency of updating, etc. in order to estimate whether the new system will perform adequately or not. Technological feasibility is carried out to determine whether the company has the capability, in terms of software, hardware, personnel and expertise, to handle the completion of the project. When writing a feasibility report, the following should be taken into consideration:

• A brief description of the business
• The part of the business being examined
• The human and economic factors
• The possible solutions to the problems

At this level, the concern is whether the proposal is both technically and legally feasible (assuming moderate cost).

Economic feasibility

Economic analysis is the most frequently used method for evaluating the effectiveness of a new system. More commonly known as cost/benefit analysis, the procedure is to determine the benefits and savings that are expected from a candidate system and compare them with the costs. If benefits outweigh costs, then the decision is made to design and implement the system. An entrepreneur must accurately weigh the cost versus the benefits before taking an action.

• Cost-based study: It is important to identify the cost and benefit factors, which can be categorized as (1) development costs and (2) operating costs. This is an analysis of the costs to be incurred in the system and the benefits derivable out of the system.
• Time-based study: This is an analysis of the time required to achieve a return on investment. The future value of the project is also a factor.

Legal feasibility

Determines whether the proposed system conflicts with legal requirements; e.g. a data processing system must comply with the local Data Protection Acts.

Operational feasibility

Operational feasibility is a measure of how well a proposed system solves the problems and takes advantage of the opportunities identified during scope definition, and how it satisfies the requirements identified in the requirements analysis phase of system development.

Schedule feasibility

A project will fail if it takes too long to be completed before it is useful. Typically this means estimating how long the system will take to develop, and whether it can be completed in a given time period using methods like the payback period. Schedule feasibility is a measure of how reasonable the project timetable is. Given our technical expertise, are the project deadlines reasonable? Some projects are initiated with specific deadlines; you need to determine whether the deadlines are mandatory or desirable.

Other feasibility factors

Market and real estate feasibility

A market feasibility study typically involves testing geographic locations for a real estate development project, and usually involves parcels of real estate land. Developers often conduct market studies to determine the best location within a jurisdiction, and to test alternative land uses for given parcels. Jurisdictions often require developers to complete feasibility studies before they will approve a permit application for a retail, commercial, industrial, manufacturing, housing, office or mixed-use project. Market feasibility takes into account the importance of the business in the selected area.

Resource feasibility

This involves questions such as how much time is available to build the new system, when it can be built, whether it interferes with normal business operations, the type and amount of resources required, and dependencies.

Cultural feasibility

In this stage, the project's alternatives are evaluated for their impact on the local and general culture. For example, environmental factors need to be considered, and these factors must be well known. Further, an enterprise's own culture can clash with the results of the project.

Financial feasibility

In case of a new project, financial viability can be judged on the following parameters:
• Total estimated cost of the project
• Financing of the project in terms of its capital structure, debt equity ratio and promoter's share of total cost
• Existing investment by the promoter in any other business
• Projected cash flow and profitability

ECONOMIC FEASIBILITY STUDY

Feasibility studies are crucial during the early development of any project and form a vital component in the business development process. Feasibility studies enable organizations to assess the viability, cost and benefits of a project before financial resources are allocated. They also provide independent project assessment and enhance project credibility. Built on the information provided in the feasibility study, a business case is used to convince the audience that a particular project should be implemented, and it is often a prerequisite for any funding approval. The business case will detail the reasons why a particular project should be prioritized higher than others. It will also sum up the strengths, weaknesses and validity of assumptions, as well as assess the financial and non-financial costs and benefits underlying the preferred options.

A feasibility study can help an organization to:
• Define the business requirements that must be met by the selected project and include the critical success factors for the project
• Detail alternative approaches that will meet the business requirements, including comparative cost/benefit and risk analyses
• Recommend the best approach for preparing a business case or moving through the implementation process

Feasibility studies and business cases help answer crucial questions such as:
• Have the alternatives been carefully, thoroughly and objectively examined?
• What are the consequences of each choice on all relevant areas?
• What are the results of any cost/benefit studies?
• What are the costs and consequences of no action?
• What are the impacts on the various interest groups?
• What are the timelines for decisions?
• Are the consequences displayed to make comparisons easier?

In economic feasibility, the most important study is cost-benefit analysis. As the name suggests, it is an analysis of the costs to be incurred in the system and the benefits derivable out of the system. Economic analysis is used for evaluating the effectiveness of the proposed system. In economic feasibility, a cost benefit analysis is done in which the expected costs and benefits are evaluated. For any system, if the expected benefits equal or exceed the expected costs, the system can be judged to be economically feasible.

Cost Benefit Analysis
Developing an IT application is an investment: after the application is developed, it provides the organization with profits. Profits can be monetary or in the form of an improved working environment. However, it also carries risks, because in some cases an estimate can be wrong and the project might not actually turn out to be beneficial. Cost benefit analysis helps to give management a picture of the costs, benefits and risks, and it usually involves comparing alternate investments. Cost benefit analysis determines the benefits and savings that are expected from the system and compares them with the expected costs. The cost of an information system involves the development cost and the maintenance cost; the development cost is a one-time investment whereas maintenance costs are recurring. The development cost is basically the cost incurred during the various stages of system development, and each phase of the life cycle has a cost. Some examples of cost elements are:
• Personnel
• Equipment
• Supplies
• Overheads
• Consultants' fees
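Before moving on to the detailed cost and benefit categories, a tiny worked sketch of this arithmetic may help. Every figure and name in it (developmentCost, annualOperatingCost, annualBenefit, the five-year horizon) is an invented assumption for illustration, not an estimate for this project; the sketch only shows how expected costs and benefits could be compared and a simple payback period derived.

```csharp
using System;

// Illustrative sketch only: every figure below is hypothetical.
class CostBenefitSketch
{
    static void Main()
    {
        double developmentCost     = 120000; // one-time investment (assumed)
        double annualOperatingCost = 15000;  // recurring cost per year (assumed)
        double annualBenefit       = 60000;  // expected yearly savings (assumed)
        int years = 5;                       // evaluation horizon (assumed)

        double totalCost    = developmentCost + annualOperatingCost * years;
        double totalBenefit = annualBenefit * years;
        double netBenefit   = totalBenefit - totalCost;

        // Simple payback period: time for the yearly net gain to recover the one-time cost.
        double paybackYears = developmentCost / (annualBenefit - annualOperatingCost);

        Console.WriteLine("Total cost over " + years + " years   : " + totalCost);
        Console.WriteLine("Total benefit over " + years + " years: " + totalBenefit);
        Console.WriteLine("Net benefit: " + netBenefit);
        Console.WriteLine("Payback period (years): " + paybackYears.ToString("F1"));
        Console.WriteLine(netBenefit >= 0
            ? "Benefits equal or exceed costs: economically feasible"
            : "Costs exceed benefits: not economically feasible");
    }
}
```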

Cost and Benefit Categories
In performing cost benefit analysis (CBA) it is important to identify the cost and benefit factors. There are several cost factors/elements, and in a broad sense the costs can be divided into two types:

1. Development costs - costs that are incurred during the development of the system; they are a one-time investment. Examples: wages, equipment.
2. Operating costs - examples: wages, supplies, overheads.

Another classification of the costs can be:

Hardware/software costs: This includes the cost of purchasing or leasing computers and their peripherals, as well as the cost of the required software.

Personnel costs: This is the money spent on the people involved in the development of the system. These expenditures include salaries and other benefits such as health insurance, conveyance allowance, etc.

Facility costs: Expenses incurred during the preparation of the physical site where the system will be operational. These can be wiring, flooring, acoustics, lighting, and air conditioning.

Operating costs: Operating costs are the expenses required for the day-to-day running of the system. This includes the maintenance of the system, which can be in the form of maintaining the hardware or application programs, or money paid to professionals responsible for running or maintaining the system. These are hardware, operating, personnel, facility, and supply costs.

Supply costs: These are variable costs that vary proportionately with the amount of use of paper, ribbons, disks, and the like. They should be estimated and included in the overall cost of the system.

Benefits
We can define benefit as: Profit or Benefit = Income - Costs. Benefits can be accrued by increasing income, by decreasing costs, or both. The system will provide some benefits also; the two main benefits are improved performance and minimized processing costs. In cost benefit analysis, the first task is to identify each benefit and assign a monetary value to it. Benefits can be tangible or intangible, direct or indirect. Further, costs and benefits can be categorized as follows.

Tangible or Intangible Costs and Benefits: Tangible costs and benefits can be measured; they are identified and measured. Hardware costs, salaries for professionals and software costs are all tangible costs; the purchase of hardware or software, personnel training, and employee salaries are examples. Costs whose value cannot be measured are referred to as intangible costs; for example, the cost of a breakdown of an online system during banking hours will cause the bank to lose deposits.

Benefits are also tangible or intangible. For example, if the proposed system can handle, say, 25% more transactions than the present system, then that is a direct benefit; improved response time and producing error-free output such as reports are tangible benefits, whereas improved company status, more customer satisfaction, etc. are intangible benefits. Both tangible and intangible costs and benefits should be considered in the evaluation process.

Direct or Indirect Costs and Benefits: From the cost accounting point of view, costs are treated as either direct or indirect. Direct costs have a rupee value associated with them, and direct benefits are attributable to a given project. Indirect costs result from operations that are not directly associated with the system; insurance, maintenance, heat, light and air conditioning are all indirect costs.

Fixed or Variable Costs and Benefits: Some costs and benefits are fixed. Fixed costs don't change; depreciation of hardware, insurance, etc. are all fixed costs. Fixed benefits don't change either. Variable costs are incurred on a regular basis; the recurring period may be weekly or monthly depending upon the system. They are proportional to the work volume and continue as long as the system is in operation. Variable benefits are likewise realized on a regular basis.

TECHNICAL FEASIBILITY
A feasibility study is an important phase in the development of business-related services. The need for evaluation is great, especially in large, high-risk information service development projects. The term "technical feasibility" establishes that the product or service can operate in the desired manner; technical feasibility means "achievable". This has to be proven without building the system. The proof is defining a comprehensive number of technical options that are feasible within the known and demanded resources and requirements; these options should cover all technical sub-areas. A feasibility study focuses on the study of the challenges, technical problems and solution models of information service realisation, analyses the potential solutions to the problems against the requirements, evaluates their ability to meet the goals, and describes and rationalises the recommended solution. The goal of a feasibility study is to outline and clarify the things and factors connected to the technical realisation of an information service system.

It is good to outline, recognise and possibly solve these things before the actual design and realisation. The study also forms the framework for the system development project and creates a baseline for further studies. The feasibility study is a critical document defining the original system concepts, goals, requirements and alternatives. Other goals for feasibility studies are to produce sufficient information (is the development project technically feasible? what are the realisation alternatives? is there a recommended realisation alternative?), to produce base data for the requirement definition phase, and to help direct research and development investments to the right things.

The evaluation process can be utilised at different stages of an information service development project:
• At the beginning of an information service development project: evaluation of the technical feasibility of the desired information service and the implementation alternatives, based on the available information. Usually a more concise evaluation.
• At the beginning of an information service development project alongside requirement definition (and partly after): produces data for requirement definition and utilises the data produced by the requirement definition process. A more demanding and in-depth evaluation process.

Evaluation answers: The evaluation of technical feasibility tries to answer whether the desired information service is feasible and what technical matters are connected to that feasibility:
• Technical feasibility from the organisation viewpoint, e.g. what technical requirements does the production of the service place on the client's current technology?
• How practical is the technical solution?
• What is the availability of technical resources and know-how?
The evaluation backs the selection of the most suitable technical alternative.

The evaluation of the technical feasibility of an information system produces information and answers on, e.g., the following points:
• Definitions of feasible alternatives for the information service system design and development
• Identifies, raises and clarifies issues connected to the technical implementation of an information system
• Produces data for requirement definition, i.e. complements and utilises the information service requirement definition

The feasibility evaluation studies the information system from different viewpoints:
• Technical feasibility
• Financial feasibility
• Operational feasibility

The other modules connected to the "Technical feasibility" module are thus:
• Markets and foresight (technology foresight, mega-trends)
• Risk analyses (reliability, technical risks)
• Revenue and finance (economical profitability and project feasibility)

In technical feasibility the following issues are taken into consideration:
• Whether the required technology is available or not
• Whether the required resources are available - manpower (programmers, testers and debuggers), software and hardware

Once the technical feasibility is established, the economic feasibility of the proposed system is carried out, since it might happen that developing a particular system is technically possible but requires huge investments while the benefits may be less. For evaluating this, it is important to consider the monetary factors as well.

OPERATIONAL FEASIBILITY
Not only must an application make economic and technical sense, it must also make operational sense. The basic question that you are trying to answer is: "Is it possible to maintain and support this application once it is in production?" Building an application is decidedly different from operating it, therefore you need to determine whether or not you can effectively operate and support it. To determine what the impact will be, you need to understand both the current operations and support infrastructure of your organization and the operations and support characteristics of your new application. The lists below summarize critical issues to consider when determining the operational feasibility of a project.

Operations issues:
• What tools are needed to support operations?
• What skills will operators need to be trained in?
• What processes need to be created and/or updated?
• What documentation does operations need?

Support issues:
• What documentation will users be given?
• What training will users be given?
• How will change requests be managed?

Very often you will need to improve the existing operations, maintenance, and support infrastructure to support the operation of the new application that you intend to develop. If the existing operations and support infrastructure can handle your application, albeit with the appropriate training of the affected staff, then your project is operationally feasible.

The operational feasibility parameters are:
• Does this project require some investment in tools or infrastructure?
• Do we have the right mix of team members to take up this project?
• Is there any time zone advantage?
• Did we anticipate any operational risk, like staffing or people leaving the company in the middle of the project?
• Identify the anticipated impact on customer satisfaction, skill levels, hiring, retention, and loyalty.

Based on these parameters the operational feasibility of the project is checked and a score is generated, as the short sketch below illustrates.
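The report does not spell out how this score is computed, so the following is only a hypothetical sketch: it assigns invented weights to yes/no answers for the parameters above and totals them against a threshold. The questions, weights, answers and the 70% cut-off are all assumptions made for illustration.

```csharp
using System;

// Hypothetical scoring sketch: questions, weights, answers and threshold are invented.
class OperationalFeasibilityScore
{
    static void Main()
    {
        string[] questions =
        {
            "No extra investment in tools/infrastructure needed",
            "Right mix of team members available",
            "Time zone advantage",
            "Low risk of staff leaving mid-project",
            "Positive impact on customer satisfaction expected"
        };
        int[]  weights    = { 20, 30, 10, 25, 15 };              // assumed importance
        bool[] favourable = { true, true, false, true, true };   // assumed answers

        int score = 0, maxScore = 0;
        for (int i = 0; i < questions.Length; i++)
        {
            maxScore += weights[i];
            if (favourable[i]) score += weights[i];
            Console.WriteLine((favourable[i] ? "[yes] " : "[no ] ") + questions[i]);
        }

        Console.WriteLine("Operational feasibility score: " + score + "/" + maxScore);
        Console.WriteLine(score * 100 >= maxScore * 70
            ? "Operationally feasible (score >= 70% of maximum)"
            : "Needs further review");
    }
}
```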

SYSTEM ANALYSIS AND DATA FLOW DIAGRAMS

EXISTING SYSTEM: Every college or institution requires time tables, and the manual generation of a time table is quite a cumbersome task that requires a lot of sharp observation and time. Thus far, the time tables in our college have been generated manually. There must have been other time table generators too, but the project that we have completed is customised especially for use in our institution. So, this project is aimed at developing a system that will generate a time table semi-automatically, which will be conflict free and will thus replace the existing system of manual time table generation with a time table generator.

PROPOSED SYSTEM

INTRODUCTION TO THE PROPOSED SYSTEM: Time Table Generator is revolutionary software which helps in reducing the human effort involved in generating a time table. It not only helps in managing the time schedule but also helps in allocating the classes and laboratories. It makes the work of time table generation error free and accurate. It is organization-based software which helps in maintaining time management in the organization, and it gives benefit to the administrator, the head of the concerned department, the faculty and the students. It is a powerful software tool not only for the management but also for the students.

ABOUT THE TECHNOLOGY USED

ASP.NET
ASP.NET is a web application framework developed and marketed by Microsoft to allow programmers to build dynamic web sites, web applications and web services. It was first released in January 2002 with version 1.0 of the .NET Framework, and is the successor to Microsoft's Active Server Pages (ASP) technology. ASP.NET is built on the Common Language Runtime (CLR), allowing programmers to write ASP.NET code using any supported .NET language. The ASP.NET SOAP extension framework allows ASP.NET components to process SOAP messages.

CHARACTERISTICS

PAGES
ASP.NET web pages, known officially as "web forms", are the main building block for application development. Web forms are contained in files with an ".aspx" extension; these files typically contain static (X)HTML markup, as well as markup defining server-side Web Controls and User Controls where the developers place all the required static and dynamic content for the web page. Additionally, dynamic code which runs on the server can be placed in a page within a block <% -- dynamic code -- %>, which is similar to other web development technologies such as PHP, JSP and classic ASP.
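As a small illustration of the page model just described, a minimal web form might look like the sketch below. The page and control names (Sample.aspx, WelcomeLabel, ViewButton) are invented and are not the project's actual pages; the sketch only shows the two ways of placing dynamic content mentioned above, an inline <% ... %> block and a server-side control.

```aspx
<%-- Sample.aspx : a hypothetical minimal web form --%>
<%@ Page Language="C#" %>
<html>
<body>
    <form id="MainForm" runat="server">
        <h1>Time Table Generator</h1>

        <%-- Inline dynamic code, evaluated on the server when the page renders --%>
        <% Response.Write("Rendered at: " + DateTime.Now); %>

        <%-- Server-side Web Controls whose state is managed by ASP.NET --%>
        <asp:Label ID="WelcomeLabel" runat="server" Text="Welcome to the time table viewer" />
        <asp:Button ID="ViewButton" runat="server" Text="View Time Table" />
    </form>
</body>
</html>
```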

DIRECTIVES
A directive is a special instruction on how ASP.NET should process the page. The most common directive is <%@ Page %>, which can specify many things, such as which programming language is used for the server-side code.

USER CONTROLS
User controls are encapsulations of sections of pages which are registered and used as controls in ASP.NET. User controls are created as ASCX markup files. These files usually contain static (X)HTML markup as well as markup defining server-side web controls, and they are the locations where the developer can place the required static and dynamic content. A user control is compiled when its containing page is requested and is stored in memory for subsequent requests. User controls have their own events, which are handled during the life of ASP.NET requests. An event bubbling mechanism provides the ability to pass an event fired by a user control up to its containing page. Unlike an ASP.NET page, a user control cannot be requested independently; one of its containing pages is requested instead.

CODE BEHIND MODEL
Microsoft recommends dealing with dynamic program code by using the code-behind model, which places this code in a separate file or in a specially designated script tag. Code-behind files typically have names like MyPage.aspx.cs or MyPage.aspx.vb, while the page file is MyPage.aspx (the same filename as the page file, but with the final extension denoting the page language). This practice is automatic in Microsoft Visual Studio and other IDEs. When using this style of programming, the developer writes code to respond to different events, like the page being loaded or a control being clicked, rather than a procedural walk through the document. ASP.NET's code-behind model marks a departure from Classic ASP in that it encourages developers to build applications with separation of presentation and content in mind. In theory, this would allow a web designer, for example, to focus on the design markup with less potential for disturbing the programming code that drives it. This is similar to the separation of the controller from the view in Model-View-Controller (MVC) frameworks. With ASP.NET Framework 2.0, Microsoft introduced a new code-behind model which allows static text to remain on the .aspx page, while dynamic code remains in an .aspx.vb or .aspx.cs file (depending on the programming language used).
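To make the code-behind split concrete, the sketch below shows what such a file might contain. The page and control names (TimeTable.aspx, TitleLabel, ShowButton) are invented for illustration; the matching .aspx markup would declare a label and a button with these IDs.

```csharp
// TimeTable.aspx.cs -- hypothetical code-behind for a page TimeTable.aspx whose markup
// declares <asp:Label ID="TitleLabel" runat="server" /> and
// <asp:Button ID="ShowButton" runat="server" OnClick="ShowButton_Click" />.
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class TimeTable : Page
{
    // With the 2.0 code-behind model these fields are normally generated from the
    // markup; they are declared here only to keep the sketch self-contained.
    protected Label TitleLabel;
    protected Button ShowButton;

    // Event handler: runs when the page is loaded.
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            TitleLabel.Text = "Time Table Generator";
        }
    }

    // Event handler: runs when the button declared in the markup is clicked.
    protected void ShowButton_Click(object sender, EventArgs e)
    {
        TitleLabel.Text = "Time table requested at " + DateTime.Now.ToShortTimeString();
    }
}
```

Note how the markup stays in the .aspx file while the event-handling C# lives in the .aspx.cs file, which is exactly the separation of presentation and content described above.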

CUSTOM CONTROLS
Programmers can also build custom controls for ASP.NET applications. Unlike user controls, these controls do not have an ASCX markup file; all their code is compiled into a dynamic link library (DLL) file. Such custom controls can be used across multiple web applications and Visual Studio projects.

RENDERING TECHNIQUE
ASP.NET uses a visited composites rendering technique. During compilation, the template (.aspx) file is compiled into initialization code which builds a control tree (the composite) representing the original template. Literal text goes into instances of the Literal control class, and server controls are represented by instances of a specific control class. The initialization code is combined with user-written code (usually by the assembly of multiple partial classes) and results in a class specific for the page. The page doubles as the root of the control tree. Actual requests for the page are processed through a number of steps. First, during the initialization steps, an instance of the page class is created and the initialization code is executed. This produces the initial control tree, which is now typically manipulated by the methods of the page in the following steps. As each node in the tree is a control represented as an instance of a class, the code may change the tree structure as well as manipulate the properties and methods of the individual nodes. Finally, during the rendering step, a visitor is used to visit every node in the tree, asking each node to render itself using the methods of the visitor. The resulting HTML output is sent to the client. After the request has been processed, the instance of the page class is discarded and, with it, the entire control tree. This is a source of confusion among novice ASP.NET programmers who rely on class instance members that are lost with every page request/response cycle.
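A minimal sketch of such a rendered custom control is shown below. The class name, property and emitted HTML are invented; the point is only that a custom control is an ordinary class, compiled into a DLL with no .ascx file, that overrides Render to write its own output when the control tree is rendered.

```csharp
using System.Web.UI;

// Hypothetical custom control: no .ascx markup, just a class compiled into a DLL.
public class CopyrightControl : Control
{
    // Illustrative property that can be set from markup or code.
    public string CollegeName { get; set; }

    // Called during the rendering step, when each node of the control tree
    // is asked to render itself.
    protected override void Render(HtmlTextWriter writer)
    {
        writer.Write("<div class=\"copyright\">&copy; " +
                     System.DateTime.Now.Year + " " + CollegeName + "</div>");
    }
}
```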

STATE MANAGEMENT
ASP.NET applications are hosted by a web server and are accessed using the stateless HTTP protocol. As such, if an application uses stateful interaction, it has to implement state management on its own. ASP.NET provides various functions for state management. Conceptually, Microsoft treats "state" as GUI state; problems may arise if an application needs to keep track of "data state", for example a finite state machine which may be in a transient state between requests (lazy evaluation) or which takes a long time to initialize. State management in ASP.NET pages with authentication can make web scraping difficult or impossible.

APPLICATION STATE
Application state is held by a collection of shared user-defined variables. These are set and initialized when the Application_OnStart event fires on the loading of the first instance of the application, and they are available until the last instance exits. Application state variables are identified by name and are accessed using the Applications collection, which provides a wrapper for the application state.

SESSION STATE
Server-side session state is held by a collection of user-defined session variables that are persistent during a user session. These variables, accessed using the Session collection, are unique to each session instance. The variables can be set to be automatically destroyed after a defined time of inactivity, even if the session does not end. On the client side, a user session is maintained by either a cookie or by encoding the session ID in the URL itself.
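A short sketch of how a page might use these collections follows. The key names ("UserRole", "VisitCount") and the page class are invented for illustration; only the Session and Application collections themselves are the ASP.NET facilities described above.

```csharp
using System;
using System.Web.UI;

public partial class StateDemo : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Session state: private to the current user's session (hypothetical key).
        Session["UserRole"] = "Administrator";

        // Application state: shared by every user; lock to avoid lost updates
        // when two requests update the counter at the same time.
        Application.Lock();
        int visits = (Application["VisitCount"] as int?) ?? 0;
        Application["VisitCount"] = visits + 1;
        Application.UnLock();

        // Reading the values back.
        string role = (string)Session["UserRole"];
        Response.Write("Role: " + role + ", total visits: " + Application["VisitCount"]);
    }
}
```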

ASP.NET supports three modes of persistence for server-side session variables:

In-Process Mode: The session variables are maintained within the ASP.NET process. This is the fastest way; however, in this mode the variables are destroyed when the ASP.NET process is recycled or shut down.

ASPState Mode: ASP.NET runs a separate Windows service that maintains the state variables. Because state management happens outside the ASP.NET process, and because the ASP.NET engine accesses data using .NET Remoting, ASPState is slower than In-Process. This mode allows an ASP.NET application to be load-balanced and scaled across multiple servers. Because the state management service runs independently of ASP.NET, the session variables can persist across ASP.NET process shutdowns. However, since the session state server runs as one instance, it is still one point of failure for session state. The session-state service cannot be load-balanced, and there are restrictions on the types that can be stored in a session variable.

SqlServer Mode: State variables are stored in a database, allowing session variables to be persisted across ASP.NET process shutdowns. The main advantage of this mode is that it allows the application to balance load on a server cluster, sharing sessions between servers. This is the slowest method of session state management in ASP.NET.

VIEW STATE
View state refers to the page-level state management mechanism utilized by the HTML pages emitted by ASP.NET applications to maintain the state of the web form controls and widgets. The state of the controls is encoded and sent to the server at every form submission in a hidden field known as __VIEWSTATE. The server sends back the variable so that, when the page is re-rendered, the controls render at their last state. At the server side, the application may change the view state if the processing requires a change of state of any control. The states of individual controls are decoded at the server and are available for use in ASP.NET pages using the ViewState collection. The main use for this is to preserve form information across postbacks. View state is turned on by default and normally serializes the data in every control on the page regardless of whether it is actually used during a postback. This behavior can (and should) be modified, however, as view state can be disabled on a per-control, per-page, or server-wide basis. Developers need to be wary of storing sensitive or private information in the view state of a page or control, as the base64 string containing the view state data can easily be de-serialized. View state does not encrypt the __VIEWSTATE value. Encryption can be enabled on a server-wide (and server-specific) basis, allowing for a certain level of security to be maintained.
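The sketch below shows one common way of using the ViewState collection from a page: a counter that survives postbacks because it travels in the hidden __VIEWSTATE field rather than in server memory. The page class and key name are invented for illustration.

```csharp
using System;
using System.Web.UI;

public partial class ViewStateDemo : Page
{
    // A value stored in ViewState round-trips in the hidden __VIEWSTATE field.
    private int PostbackCount
    {
        get { return (int)(ViewState["PostbackCount"] ?? 0); }
        set { ViewState["PostbackCount"] = value; }
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        if (IsPostBack)
        {
            PostbackCount = PostbackCount + 1;
        }
        // Where the extra payload is not needed, view state can be switched off,
        // e.g. per control in markup with EnableViewState="false".
    }
}
```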

SERVER-SIDE CACHING
ASP.NET offers a "Cache" object that is shared across the application and can also be used to store various objects. The "Cache" object holds the data only for a specified amount of time and is automatically cleaned after the time-limit elapses.

OTHER
Other means of state management that are supported by ASP.NET are cookies, caching, and using the query string.

TEMPLATE ENGINE
When first released, ASP.NET lacked a template engine. Because the .NET Framework is object-oriented and allows for inheritance, many developers would define a new base class that inherits from "System.Web.UI.Page", write methods there that render HTML, and then make the pages in their application inherit from this new class. While this allows for common elements to be reused across a site, it adds complexity and mixes source code with markup. Furthermore, this method can only be visually tested by running the application, not while designing it. Other developers have used include files and other tricks to avoid having to implement the same navigation and other elements in every page.

ASP.NET 2.0 introduced the concept of "master pages", which allow for template-based page development. A web application can have one or more master pages, which, beginning with ASP.NET 2.0, can be nested. Master templates have place-holder controls, called ContentPlaceHolders, to denote where the dynamic content goes, as well as HTML and JavaScript shared across child pages. Child pages use those ContentPlaceHolder controls, which must be mapped to the placeholder of the master page that the content page is populating. The rest of the page is defined by the shared parts of the master page. All markup and server controls in the content page must be placed within the ContentPlaceHolder control. When a request is made for a content page, ASP.NET merges the output of the content page with the output of the master page, and sends the output to the user, much like a mail merge in a word processor.
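To make the master/content relationship concrete, a minimal sketch is shown below. The file names (Site.master, ViewTimetable.aspx) and the placeholder ID are invented and are not the project's actual files.

```aspx
<%-- Site.master : hypothetical master page with one placeholder --%>
<%@ Master Language="C#" %>
<html>
<body>
    <h1>Time Table Generator</h1>
    <form id="MainForm" runat="server">
        <asp:ContentPlaceHolder ID="MainContent" runat="server" />
    </form>
</body>
</html>

<%-- ViewTimetable.aspx : hypothetical content page filling that placeholder --%>
<%@ Page Language="C#" MasterPageFile="~/Site.master" %>
<asp:Content ContentPlaceHolderID="MainContent" runat="server">
    <p>The generated time table for the selected semester appears here.</p>
</asp:Content>
```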

The master page remains fully accessible to the content page. This means that the content page may still manipulate headers, change the title, configure caching, etc. If the master page exposes public properties or methods (e.g. for setting copyright notices), the content page can use these as well.

DIRECTORY STRUCTURE
In general, the ASP.NET directory structure can be determined by the developer's preferences. Apart from a few reserved directory names, the site can span any number of directories. The structure is typically reflected directly in the URLs. Although ASP.NET provides means for intercepting the request at any point during processing, the developer is not forced to funnel requests through a central application or front controller. The special directory names (from ASP.NET 2.0 on) are:

App_Code: This is the "raw code" directory. The ASP.NET server automatically compiles files (and subdirectories) in this folder into an assembly which is accessible in the code of every page of the site. App_Code will typically be used for data access abstraction code, model code and business code. Any site-specific HTTP handlers and modules and web service implementations also go in this directory. As an alternative to using App_Code, the developer may opt to provide a separate assembly with precompiled code.

App_Data: The default directory for databases, such as Access mdb files and SQL Server mdf files. This directory is usually the only one with write access for the application.

App_LocalResources: Holds per-page localized resources; e.g. a file called CheckOut.aspx.fr-FR.resx holds localized resources for the French version of the CheckOut.aspx page. When the UI culture is set to French, ASP.NET will automatically find and use this file for localization.

App_GlobalResources: Holds resx files with localized resources available to every page of the site. This is where the ASP.NET developer will typically store localized messages etc. which are used on more than one page.

App_Themes: Holds files related to themes, an ASP.NET feature that helps ensure a consistent appearance throughout a web site and makes it easier to change the web site's appearance when necessary.

App_WebReferences: Holds discovery files and WSDL files for references to web services to be consumed in the site.

Bin: Contains compiled code (.dll files) for controls, components, or other code that you want to reference in your application. Any classes represented by code in the Bin folder are automatically referenced in your application.

PERFORMANCE
ASP.NET aims for performance benefits over other script-based technologies (including Classic ASP) by compiling the server-side code to one or more DLL files on the web server. This compilation happens automatically the first time a page is requested (which means the developer need not perform a separate compilation step for pages). This feature provides the ease of development offered by scripting languages with the performance benefits of a compiled binary. However, the compilation might cause a noticeable but short delay to the web user when the newly-edited page is first requested from the web server, but not again unless the requested page is updated further. The ASPX and other resource files are placed in a virtual host on an Internet Information Services server (or another compatible ASP.NET server). The first time a client requests a page, the .NET Framework parses and compiles the file(s) into a .NET assembly and sends the response; subsequent requests are served from the DLL files. By default ASP.NET will compile the entire site in batches of 1000 files upon first request.

If the compilation delay is causing problems, the batch size or the compilation strategy may be tweaked. Developers can also choose to pre-compile their "code-behind" files before deployment, using MS Visual Studio, eliminating the need for just-in-time compilation in a production environment. This also eliminates the need to have the source code on the web server.

EXTENSION
Microsoft has released some extension frameworks that plug into ASP.NET and extend its functionality. Some of them are:

ASP.NET AJAX: An extension with both client-side and server-side components for writing ASP.NET pages that incorporate AJAX functionality.

ASP.NET MVC Framework: An extension to author ASP.NET pages using the MVC architecture.

ASP.NET COMPARED WITH ASP CLASSIC
ASP.NET simplifies developers' transition from Windows application development to web development by offering the ability to build pages composed of controls similar to a Windows user interface. A web control, such as a button or label, functions in very much the same way as its Windows counterpart: code can assign its properties and respond to its events. Controls know how to render themselves: whereas Windows controls draw themselves to the screen, web controls produce segments of HTML and JavaScript which form parts of the resulting page sent to the end-user's browser. ASP.NET encourages the programmer to develop applications using an event-driven GUI model, rather than in conventional web-scripting environments like ASP and PHP. The framework combines existing technologies such as JavaScript with internal components like "ViewState" to bring persistent (inter-request) state to the inherently stateless web environment. Other differences compared to ASP classic are:

• Compiled code means applications run faster with more design-time errors trapped at the development stage.
• Significantly improved run-time error handling, making use of exception handling using try-catch blocks.
• Similar metaphors to Microsoft Windows applications, such as controls and events.
• An extensive set of controls and class libraries allows the rapid building of applications, plus user-defined controls allow commonly-used web templates, such as menus. Layout of these controls on a page is easier because most of it can be done visually in most editors.
• Web Server Controls: these are controls introduced by ASP.NET for providing the UI for the web form. These controls are state-managed controls and are WYSIWYG controls.
• ASP.NET uses the multi-language abilities of the .NET Common Language Runtime, allowing web pages to be coded in VB.NET, C#, J#, Delphi.NET, Chrome, etc.
• Ability to cache the whole page or just parts of it to improve performance.
• Ability to use the code-behind development model to separate business logic from presentation.
• Ability to use true object-oriented design for programming pages and controls.
• Session state in ASP.NET can be saved in a Microsoft SQL Server database or in a separate process running on the same machine as the web server or on a different machine. That way session values are not lost when the web server is reset or the ASP.NET worker process is recycled.
• Versions of ASP.NET prior to 2.0 were criticized for their lack of standards compliance. The generated HTML and JavaScript sent to the client browser would not always validate against W3C/ECMA standards. In addition, the framework's browser detection feature sometimes incorrectly identified web browsers other than Microsoft's own Internet Explorer as "downlevel" and returned HTML/JavaScript to these clients with some of the features removed, or sometimes crippled or broken. However, in version 2.0, all controls generate valid HTML 4.0, XHTML 1.0 (the default) or XHTML 1.1 output, depending on the site configuration. Detection of standards-compliant web browsers is more robust, and support for Cascading Style Sheets is more extensive.
• If an ASP.NET application leaks memory, the ASP.NET runtime unloads the AppDomain hosting the erring application and reloads the application in a new AppDomain.

FRAMEWORKS

It is not essential to use the standard web-forms development model when developing with ASP.NET. Noteworthy frameworks designed for the platform include:

• Base One Foundation Component Library (BFC), a RAD framework for building .NET database and distributed computing applications.
• DotNetNuke, an open-source solution which comprises both a web application framework and a content management system, and which allows for advanced extensibility through modules, skins, and providers.
• Castle MonoRail, an open-source MVC framework with an execution model similar to Ruby on Rails. The framework is commonly used with Castle ActiveRecord, an ORM layer built on NHibernate.
• Spring.NET, a port of the Spring framework for Java.
• Skaffold.NET, a simple framework for .NET applications, used in enterprise applications.
• Survey Project, an open-source web-based survey and form engine framework written in ASP.NET and C#.

C#
C# is a multi-paradigm programming language encompassing imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines. It was developed by Microsoft within the .NET initiative and later approved as a standard by Ecma (ECMA-334) and ISO (ISO/IEC 23270). C# is one of the programming languages designed for the Common Language Infrastructure. C# is intended to be a simple, modern, general-purpose, object-oriented programming language. Its development team is led by Anders Hejlsberg. The most recent version is C# 4.0, which was released on April 12, 2010.

DESIGN GOALS
The ECMA standard lists these design goals for C#:
• The C# language is intended to be a simple, modern, general-purpose, object-oriented programming language.
• The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
• The language is intended for use in developing software components suitable for deployment in distributed environments.
• Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
• Support for internationalization is very important.
• C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.
• Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.

NAME
The name "C sharp" was inspired by musical notation, where a sharp indicates that the written note should be made a semitone higher in pitch. This is similar to the language name of C++, where "++" indicates that a variable should be incremented by 1. Due to technical limitations of display (standard fonts, browsers, etc.) and the fact that the sharp symbol (U+266F ♯ (HTML: &#9839;)) is not present on the standard keyboard, the number sign (U+0023 # NUMBER SIGN (HTML: &#35;)) was chosen to represent the sharp symbol in the written name of the programming language. This convention is reflected in the ECMA-334 C# Language Specification. However, when it is practical to do so (for example, in advertising or in box art), Microsoft uses the intended musical symbol. The "sharp" suffix has been used by a number of other .NET languages that are variants of existing languages, including J# (a .NET language also designed by Microsoft which is derived from Java 1.1), A# (from Ada), and the functional F#. The original implementation of Eiffel for .NET was called Eiffel#, a name since retired since the full Eiffel language is now supported.

The suffix has also been used for libraries, such as Gtk# (a .NET wrapper for GTK+ and other GNOME libraries), Cocoa# (a wrapper for Cocoa) and Qt#.

FEATURES
By design, C# is the programming language that most directly reflects the underlying Common Language Infrastructure (CLI). Most of its intrinsic types correspond to value-types implemented by the CLI framework. However, the language specification does not state the code generation requirements of the compiler: that is, it does not state that a C# compiler must target a Common Language Runtime, or generate Common Intermediate Language (CIL), or generate any other specific format. Theoretically, a C# compiler could generate machine code like traditional compilers of C++ or Fortran. Some notable distinguishing features of C# are:

• There are no global variables or functions. All methods and members must be declared within classes. Static members of public classes can substitute for global variables and functions.
• Local variables cannot shadow variables of the enclosing block, unlike C and C++. Variable shadowing is often considered confusing by C++ texts.
• C# supports a strict Boolean datatype, bool. Statements that take conditions, such as while and if, require an expression of a type that implements the true operator, such as the boolean type. While C++ also has a boolean type, it can be freely converted to and from integers, and expressions such as if(a) require only that a is convertible to bool, allowing a to be an int or a pointer. C# disallows this "integer meaning true or false" approach, on the grounds that forcing programmers to use expressions that return exactly bool can prevent certain types of common programming mistakes in C or C++ such as if (a = b) (use of assignment = instead of equality ==).
• In C#, memory address pointers can only be used within blocks specifically marked as unsafe, and programs with unsafe code need appropriate permissions to run. Most object access is done through safe object references, which always either point to a "live" object or have the well-defined null value; it is impossible to obtain a reference to a "dead" object (one which has been garbage collected) or to a random block of memory. An unsafe pointer can point to an instance of a value-type, array, string, or a block of memory allocated on a stack. Code that is not marked as unsafe can still store and manipulate pointers through the System.IntPtr type, but it cannot dereference them.

• Managed memory cannot be explicitly freed; instead, it is automatically garbage collected. Garbage collection addresses the problem of memory leaks by freeing the programmer of the responsibility for releasing memory which is no longer needed.
• In addition to the try...catch construct to handle exceptions, C# has a try...finally construct to guarantee execution of the code in the finally block.
• Multiple inheritance is not supported, although a class can implement any number of interfaces. This was a design decision by the language's lead architect to avoid complication and simplify architectural requirements throughout CLI.
• C# is more type safe than C++. The only implicit conversions by default are those which are considered safe, such as widening of integers. This is enforced at compile-time, during JIT, and, in some cases, at runtime. There are no implicit conversions between booleans and integers, nor between enumeration members and integers (except for literal 0, which can be implicitly converted to any enumerated type). Any user-defined conversion must be explicitly marked as explicit or implicit, unlike C++ copy constructors and conversion operators, which are both implicit by default.
• Enumeration members are placed in their own scope.
• C# provides properties as syntactic sugar for a common pattern in which a pair of methods, an accessor (getter) and a mutator (setter), encapsulate operations on a single attribute of a class (a short demo follows this list).
• Full type reflection and discovery is available.
• Checked exceptions are not present in C# (in contrast to Java). This has been a conscious decision based on the issues of scalability and versionability.
• C# currently (as of version 4.0) has 77 reserved words.
• Starting with version 4.0, C# supports a "dynamic" data type that enforces type checking at runtime only.
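A small, self-contained sketch of a few of the points above (strict bool conditions, static members in place of globals, a property, and try...finally) is given below. The Lecture class and its members are invented for illustration and are not part of the project's code.

```csharp
using System;

// Demo of a few language points from the list above; all names are invented.
public class Lecture
{
    // Property: syntactic sugar for a getter/setter pair around one attribute.
    private string subject;
    public string Subject
    {
        get { return subject; }
        set { subject = value; }
    }

    // Static member of a public class: plays the role a global variable would in C/C++.
    public static int LectureCount;
}

class FeatureDemo
{
    static void Main()
    {
        Lecture l = new Lecture();
        l.Subject = "Operating Systems";
        Lecture.LectureCount++;

        int a = 1;
        // if (a) { }        // would not compile: an int is not a bool in C#
        if (a == 1)          // conditions must be genuine bool expressions
        {
            Console.WriteLine(l.Subject + ", total lectures: " + Lecture.LectureCount);
        }

        try
        {
            Console.WriteLine("work that might throw an exception");
        }
        finally
        {
            // Guaranteed to run whether or not an exception was thrown.
            Console.WriteLine("cleanup in finally always executes");
        }
    }
}
```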


GRAPHICAL USER INTERFACE

A graphical user interface (GUI, sometimes pronounced "gooey") is a type of user interface that allows users to interact with electronic devices through images rather than text commands. GUIs can be used in computers, hand-held devices such as MP3 players, portable media players or gaming devices, household appliances and office equipment. A GUI represents the information and actions available to a user through graphical icons and visual indicators such as secondary notation, as opposed to text-based interfaces, typed command labels or text navigation. The actions are usually performed through direct manipulation of the graphical elements.

Next, we have presented snapshots of all the pages that are encountered while using the software for generating the time table:
• Home page
• Login page
• New student login page
• Welcome page for admin only
• Page for changing password
• Show/delete user login page
• Add subject page
• Show all faculty list

HOME PAGE

LOGIN PAGE

NEW STUDENT LOGIN PAGE

WELCOME PAGE FOR ADMIN ONLY

PASSWORD CHANGE PAGE

CREATE NEW USER LOGIN PAGE

SHOW/DELETE USER LOGIN PAGE

ADD NEW SUBJECT PAGE

SHOW ALL FACULTY LIST PAGE

ASSIGN SUBJECTS TO FACULTY PAGE

SHOW/DELETE ASSIGNED SUBJECT PAGE

ADD LECTURE TIMINGS PAGE

SHOW/DELETE LECTURE TIMINGS PAGE

SHOW SEMESTER WISE SUBJECT LIST PAGE

GENERATE TIME TABLE PAGE

SYSTEM TESTING

Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. [Myers79] Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to limited understanding of the principles of software. The difficulty in software testing stems from the complexity of software: we cannot completely test a program with moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can be used as a generic metric as well. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality.

Software testing is the process of executing a program or system with the intent of finding errors. Or, it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes for software is generally infeasible.

Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion or wear and tear -- generally it will not change until upgrades, or until obsolescence. So once the software is shipped, the design defects -- or bugs -- will be buried in and remain latent until activation. Software bugs will almost always exist in any software module of moderate size: not because programmers are careless or irresponsible, but because the complexity of software is generally intractable -- and humans have only limited ability to manage complexity. It is also true that for any complex system, design defects can never be completely ruled out.

Discovering the design defects in software is equally difficult, for the same reason of complexity. Because software and any digital systems are not continuous, testing boundary values is not sufficient to guarantee correctness. All the possible values need to be tested and verified, but complete testing is infeasible. Exhaustively testing a simple program to add only two integer inputs of 32 bits (yielding 2^64 distinct test cases) would take hundreds of years, even if tests were performed at a rate of thousands per second.
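Since exhaustive testing is impossible, testers usually fall back on a selected subset of cases such as boundary values (which, as noted above, are still not sufficient to guarantee correctness). The sketch below is illustrative only: the Add function and the chosen cases are assumptions, not code from this project.

```csharp
using System;

class BoundaryValueTestSketch
{
    // The hypothetical function under test: adds two 32-bit integers without overflow.
    static long Add(int x, int y) { return (long)x + y; }

    static void Main()
    {
        // A handful of boundary-value cases instead of all 2^64 input combinations.
        Check(Add(0, 0) == 0, "0 + 0");
        Check(Add(int.MaxValue, 1) == 2147483648L, "MaxValue + 1");
        Check(Add(int.MinValue, -1) == -2147483649L, "MinValue - 1");
        Check(Add(int.MaxValue, int.MinValue) == -1, "MaxValue + MinValue");
    }

    static void Check(bool passed, string name)
    {
        Console.WriteLine((passed ? "PASS  " : "FAIL  ") + name);
    }
}
```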

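As a quick sanity check of the figure above, the short Python sketch below estimates how long such an exhaustive run would take; the rate of 10,000 tests per second is an assumed figure chosen only for illustration.

    # Back-of-the-envelope estimate for exhaustively testing a 32-bit + 32-bit adder.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    def years_to_exhaust(total_cases: int, cases_per_second: float) -> float:
        """Return how many years it would take to run every test case once."""
        return total_cases / cases_per_second / SECONDS_PER_YEAR

    if __name__ == "__main__":
        cases = 2 ** 64                                 # every pair of 32-bit inputs
        years = years_to_exhaust(cases, cases_per_second=10_000)  # assumed test rate
        print(f"about {years:,.0f} years at 10,000 tests per second")
        # prints roughly 58 million years, so exhaustive testing is clearly ruled out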
Obviously, for a realistic software module, the complexity can be far beyond the example mentioned here. If inputs from the real world are involved, the problem gets worse, because timing, unpredictable environmental effects and human interactions are all possible input parameters under consideration.

A further complication has to do with the dynamic nature of programs. If a failure occurs during preliminary testing and the code is changed, the software may now work for a test case that it did not work for previously. But its behaviour on pre-error test cases that it passed before can no longer be guaranteed. To account for this possibility, testing should be restarted. The expense of doing this is often prohibitive.

An interesting analogy parallels the difficulty in software testing with pesticides, known as the Pesticide Paradox [Beizer90]: every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. But this alone will not guarantee to make the software better, because the Complexity Barrier principle states that software complexity (and therefore that of bugs) grows to the limits of our ability to manage that complexity. By eliminating the (previous) easy bugs you allow another escalation of features and complexity, but this time you have subtler bugs to face, just to retain the reliability you had before. Society seems to be unwilling to limit complexity because we all want that extra bell, whistle, and feature interaction. Thus, our users always push us to the complexity barrier, and how close we can approach that barrier is largely determined by the strength of the techniques we can wield against ever more complex and subtle bugs [Beizer90].

Regardless of these limitations, testing is an integral part of software development. It is broadly deployed in every phase of the software development cycle. Typically, more than 50% of the development time is spent in testing. Testing is usually performed for the following purposes: to improve quality, for verification and validation (V&V), and for reliability estimation.
• To improve quality. As computers and software are used in critical applications, the outcome of a bug can be severe. Bugs can cause huge losses. Bugs in critical systems have caused airplane crashes, allowed space shuttle missions to go awry, halted trading on the stock market, and worse. Bugs can kill. Bugs can cause disasters. The so-called year 2000 (Y2K) bug gave birth to a cottage industry of consultants and programming tools dedicated to making sure the modern world did not come to a screeching halt on the first day of the next century [Bugs]. In a computerized, embedded world, the quality and reliability of software is a matter of life and death.

Quality means conformance to the specified design requirement. Being correct, the minimum requirement of quality, means performing as required under specified circumstances. Debugging, a narrow view of software testing, is performed heavily by the programmer to find design defects. The imperfection of human nature makes it almost impossible to make a moderately complex program correct the first time. Finding the problems and getting them fixed [Kaner93] is the purpose of debugging in the programming phase.

• For verification and validation (V&V). Another important purpose of testing is verification and validation. Testing can serve as metrics and is heavily used as a tool in the V&V process. Testers can make claims based on interpretations of the testing results: either the product works under certain situations, or it does not work. We can also compare the quality among different products under the same specification, based on results from the same test.

We cannot test quality directly, but we can test related factors to make quality visible. Quality has three sets of factors: functionality, engineering, and adaptability. These three sets of factors can be thought of as dimensions in the software quality space. Each dimension may be broken down into its component factors and considerations at successively lower levels of detail. Table 1 illustrates some of the most frequently cited quality considerations.

Table 1. Typical Software Quality Factors [Hetzel88]

Functionality (exterior quality)   Engineering (interior quality)   Adaptability (future quality)
Correctness                        Efficiency                       Flexibility
Reliability                        Testability                      Reusability
Usability                          Documentation                    Maintainability
Integrity                          Structure

Good testing provides measures for all relevant factors. The importance of any particular factor varies from application to application. In a typical business system, usability and maintainability are the key factors, while for a one-time scientific program neither may be significant. Any system where human lives are at stake must place extreme emphasis on reliability and integrity. Our testing, to be fully effective, must be geared to measuring each relevant factor, thus forcing quality to become tangible and visible [Hetzel88].

Tests with the purpose of validating that the product works are named clean tests, or positive tests. The drawback is that they can only validate that the software works for the specified test cases; a finite number of tests cannot validate that the software works for all situations. On the contrary, only one failed test is sufficient to show that the software does not work. Dirty tests, or negative tests, refer to tests aiming at breaking the software, or showing that it does not work. A piece of software must have sufficient exception handling capabilities to survive a significant level of dirty tests. A testable design is a design that can be easily validated, falsified and maintained. Because testing is a rigorous effort and requires significant time and cost, design for testability is also an important design rule for software development (a short illustrative example of one clean and one dirty test appears after this list).

• For reliability estimation [Kaner93] [Lyu95]. Software reliability has important relations with many aspects of software, including the structure and the amount of testing it has been subjected to. Based on an operational profile (an estimate of the relative frequency of use of various inputs to the program [Lyu95]), testing can serve as a statistical sampling method to gain failure data for reliability estimation.
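To make the clean/dirty distinction above concrete, here is a minimal illustrative sketch in Python using the standard unittest module; the Timetable class and its add_lecture method are hypothetical stand-ins, not the project's actual code.

    import unittest


    class Timetable:
        """Hypothetical stand-in for the real timetable object."""

        def __init__(self):
            self._slots = {}

        def add_lecture(self, slot, subject):
            if slot in self._slots:
                raise ValueError(f"slot {slot} is already taken")
            self._slots[slot] = subject

        def subject_at(self, slot):
            return self._slots.get(slot)


    class TimetableTests(unittest.TestCase):
        def test_clean_free_slot_is_accepted(self):
            # clean/positive test: valid input, expected behaviour confirmed
            tt = Timetable()
            tt.add_lecture("MON-1", "DBMS")
            self.assertEqual(tt.subject_at("MON-1"), "DBMS")

        def test_dirty_clashing_slot_is_rejected(self):
            # dirty/negative test: deliberately provoke a clash and expect an exception
            tt = Timetable()
            tt.add_lecture("MON-1", "DBMS")
            with self.assertRaises(ValueError):
                tt.add_lecture("MON-1", "Java")


    if __name__ == "__main__":
        unittest.main()

The first test validates the expected behaviour (clean), while the second deliberately tries to break the software and only passes if the bad input is rejected (dirty).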
Software testing is not mature. It still remains an art, because we still cannot make it a science. We are still using the same testing techniques invented 20 to 30 years ago, some of which are crafted methods or heuristics rather than good engineering methods. Software testing can be costly, but not testing software is even more expensive, especially in places where human lives are at stake. Solving the software-testing problem is no easier than solving the Turing halting problem. We can never be sure that a piece of software is correct. We can never be sure that the specifications are correct. No verification system can verify every correct program. We can never be certain that a verification system is correct either.

Black-box testing

The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure [Perry90]. It is also termed data-driven, input/output driven [Myers79], or requirements-based [Hetzel88] testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing, a testing method emphasizing execution of the functions and examination of their input and output data. The tester treats the software under test as a black box: only the inputs, outputs and specification are visible, and the functionality is determined by observing the outputs for corresponding inputs [Howden87]. In testing, various inputs are exercised and the outputs are compared against the specification to validate the correctness. All test cases are derived from the specification; no implementation details of the code are considered.

It is obvious that the more we have covered in the input space, the more problems we will find, and therefore the more confident we will be about the quality of the software. Ideally we would be tempted to exhaustively test the input space, but as stated above, exhaustively testing the combinations of valid inputs is impossible for most programs, let alone considering invalid inputs, timing, sequence, and resource variables. Combinatorial explosion is the major roadblock in functional testing. To make things worse, we can never be sure whether the specification is correct or complete. Due to limitations of the language used in the specifications (usually natural language), ambiguity is often inevitable. Even if we use some type of formal or restricted language, we may still fail to write down all the possible cases in the specification. Sometimes the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words. And people can seldom specify clearly what they want; they usually can tell whether a prototype is, or is not, what they want only after it has been finished. Specification problems contribute approximately 30 percent of all bugs in software.
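The sketch below illustrates the specification-driven, input/output style of black-box testing described above; is_free_slot and the listed cases are hypothetical examples for illustration, not the project's real interface.

    # Specification-driven (black-box) cases: only the inputs and the outputs promised
    # by the written specification are used; the implementation body is never inspected.

    def is_free_slot(timetable, day, period):
        # stand-in implementation; a black-box tester never looks inside this body
        return (day, period) not in timetable


    # (input, expected output) pairs taken straight from the specification
    SPEC_CASES = [
        (({}, "MON", 1), True),                     # empty timetable: every slot is free
        (({("MON", 1): "DBMS"}, "MON", 1), False),  # occupied slot reported as busy
        (({("MON", 1): "DBMS"}, "TUE", 1), True),   # other days are unaffected
    ]

    for (timetable, day, period), expected in SPEC_CASES:
        actual = is_free_slot(timetable, day, period)
        assert actual == expected, f"{day}-{period}: expected {expected}, got {actual}"
    print("all specification-derived cases passed")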
The research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured by the number of test cases. It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of it. Partitioning is one of the common techniques: if we have partitioned the input space and assume that all the input values in a partition are equivalent, then we only need to test one representative value in each partition to sufficiently cover the whole input space. Good partitioning requires knowledge of the software structure.

Domain testing [Beizer95] partitions the input domain into regions and considers the input values in each domain an equivalence class. Domains can then be exhaustively tested and covered by selecting a representative value (or values) in each domain. The difficulty with domain testing is that incorrect domain definitions in the specification cannot be efficiently discovered.

Boundary value analysis [Myers79] requires one or more boundary values to be selected as representative test cases. Boundary values are of special interest: experience shows that test cases exploring boundary conditions have a higher payoff than test cases that do not. A good testing plan will not only contain black-box testing, but also white-box approaches, and combinations of the two.
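A minimal sketch of equivalence partitioning and boundary value analysis, assuming a hypothetical rule that a period number is valid only when it lies between 1 and 8:

    # One representative from each equivalence class, plus values on and around the
    # boundaries of the valid domain, where experience says defects tend to cluster.

    def is_valid_period(period: int) -> bool:
        return 1 <= period <= 8

    # representatives: below the range, inside the range, above the range
    partition_cases = {-3: False, 4: True, 12: False}

    # values sitting on and immediately around the edges of the valid domain
    boundary_cases = {0: False, 1: True, 2: True, 7: True, 8: True, 9: False}

    for value, expected in {**partition_cases, **boundary_cases}.items():
        assert is_valid_period(value) == expected, f"failed for period={value}"
    print("partition and boundary cases passed")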
White-box testing

Contrary to black-box testing, software is viewed as a white box (or glass box) in white-box testing, as the structure and flow of the software under test are visible to the tester. Testing plans are made according to the details of the software implementation, such as programming language, logic, and style. Test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing [Myers79] or design-based testing [Hetzel88]. There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of and attention to the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch statement (branch coverage), or covering all the possible combinations of true and false condition predicates (multiple condition coverage).

Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software into a directed graph. Test cases are carefully selected so that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code, code that is of no use or never gets executed at all, which cannot be discovered by functional testing.

In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness in failing the mutants: the more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use.

The boundary between the black-box approach and the white-box approach is not clear-cut. Many of the testing strategies mentioned above may not be safely classified as black-box testing or white-box testing. This is also true for transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of specification itself is broad; it may contain any requirement, including the structure, programming language, and programming style, as part of the specification content.

We may be reluctant to consider random testing a testing technique, as the test case selection is simple and straightforward: test cases are randomly chosen. However, a study in [Duran84] indicates that random testing is more cost effective for many programs. Some very subtle errors can be discovered at low cost, and it is also not inferior in coverage to other carefully designed testing techniques. One can also obtain a reliability estimate using random testing results based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.
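As a small illustration of the random-testing idea above, the sketch below generates random inputs for a hypothetical merge_slots helper and checks each result against simple oracle properties; both the helper and the properties are assumptions made for the example.

    import random


    def merge_slots(a: list, b: list) -> list:
        """Return a sorted, de-duplicated union of two lists of period numbers."""
        return sorted(set(a) | set(b))


    for _ in range(1_000):                                # randomly chosen test cases
        a = [random.randrange(1, 9) for _ in range(random.randrange(0, 10))]
        b = [random.randrange(1, 9) for _ in range(random.randrange(0, 10))]
        out = merge_slots(a, b)
        # oracle: output is sorted, has no duplicates, and keeps every input value
        assert out == sorted(set(out))
        assert set(out) == set(a) | set(b)
    print("1,000 random cases passed")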
Performance testing

Not all software systems have explicit specifications on performance, but every system will have implicit performance requirements. The software should not take infinite time or infinite resources to execute. "Performance bugs" are sometimes used to refer to those design problems in software that cause the system performance to degrade. Performance has always been a great concern and a driving force of computer evolution. Performance evaluation of a software system usually includes resource usage, throughput, stimulus-response time, and queue lengths detailing the average or maximum number of tasks waiting to be serviced by selected resources. Typical resources that need to be considered include network bandwidth requirements, CPU cycles, disk space, disk access operations, and memory usage [Smith90]. The goal of performance testing can be performance bottleneck identification, performance comparison and evaluation, and so on. The typical method of performance testing is to use a benchmark: a program, workload or trace designed to be representative of the typical system usage [Vokolos98].

Reliability testing

Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult; testing is an effective sampling method to measure software reliability. Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can then be used to analyze the data to estimate the present reliability and predict future reliability. Therefore, based on the estimation, the developers can decide whether to release the software, and the users can decide whether to adopt and use it. Risk of using software can also be assessed based on reliability information. [Hamlet94] advocates that the primary goal of testing should be to measure the dependability of tested software. There is agreement on the intuitive meaning of dependable software: it does not fail in unexpected or catastrophic ways. Robustness testing and stress testing are variants of reliability testing based on this simple criterion.

Robustness testing

The robustness of a software component is the degree to which it can function correctly in the presence of exceptional inputs or stressful environmental conditions. Robustness testing differs from correctness testing in the sense that the functional correctness of the software is not of concern; it only watches for robustness problems such as machine crashes, process hangs or abnormal termination. The oracle is relatively simple, therefore robustness testing can be made more portable and scalable than correctness testing. This research has drawn more and more interest recently, and most of it uses commercial operating systems as its target.
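A brief robustness-testing sketch in the spirit described above: deliberately malformed inputs are fed to a hypothetical parse_period helper, and the only thing watched for is an uncaught exception (a crash), not functional correctness.

    def parse_period(text):
        """Hypothetical helper: convert user input like ' 3 ' to a period number."""
        try:
            value = int(str(text).strip())
        except (ValueError, TypeError):
            return None                      # malformed input is reported, not fatal
        return value if 1 <= value <= 8 else None


    DIRTY_INPUTS = ["", "  ", "abc", "99", "-1", None, 3.7, "3;DROP TABLE", "\x00"]

    for raw in DIRTY_INPUTS:
        try:
            parse_period(raw)
        except Exception as exc:             # deliberately broad: any escape is a robustness bug
            print(f"robustness failure for {raw!r}: {exc}")
    print("robustness sweep finished")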
Stress testing

Stress testing, or load testing, is often used to test the whole system rather than the software alone. In such tests the software or system is exercised with or beyond the specified limits. Typical stresses include resource exhaustion, bursts of activities, and sustained high loads.
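The sketch below combines the benchmarking and stress ideas from the two subsections above: a placeholder routine (standing in for whatever is actually being measured) is timed at escalating input sizes well beyond normal use, while we watch for crashes, hangs or runaway execution time.

    import random
    import time


    def generate_timetable(subjects: int, slots: int = 48):
        # placeholder workload standing in for the real routine under test
        return {s: random.randrange(slots) for s in range(subjects)}


    for subjects in (40, 400, 4_000, 40_000, 400_000):   # escalate past normal load
        start = time.perf_counter()
        generate_timetable(subjects)
        elapsed = time.perf_counter() - start
        print(f"{subjects:>7} subjects -> {elapsed * 1000:8.1f} ms")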
IMPLEMENTATION AND MAINTENANCE

IMPLEMENTATION

Implementation is the carrying out, execution, running, or practice of a plan, a method, or any design for doing something. As such, implementation is the action that must follow any preliminary thinking in order for something to actually happen. In an information technology context, implementation encompasses all the processes involved in getting new software or hardware operating properly in its environment, including installation, configuration, testing, and making necessary changes. An implementation is a realization of a technical specification or algorithm as a program, software component, or other computer system through programming and deployment. Many implementations may exist for a given specification or standard. For example, web browsers contain implementations of World Wide Web Consortium-recommended specifications, and software development tools contain implementations of programming languages.

Implementation includes the following phases:

• Writing computer software (coding): this means actually writing the code, and is primarily done by the programmers.
• Testing the software: this involves using test data and scenarios to verify that each component and the whole system work under normal and abnormal circumstances.
• Converting from the old system to the new system: this includes not only installing the new system at organizational sites but also dealing with documentation.
• Training users and others: this may include a variety of human and computer-assisted sessions as well as tools to explain the purpose and use of the system.

After a thorough testing of the different aspects of the system, as described in the earlier section, the system is put to actual use with live data by the user staff, after sufficient training in the use of the software has been provided to the staff members. We have used the parallel installation technique in the implementation of the new system. There are other implementation techniques as well, such as direct installation (in which the old system is turned off and the new system replaces it; this is a bit risky because users are at the mercy of the new system) and phased installation (under which the new system is brought on-line in functional components, and different parts of the old and new system operate in cooperation until the whole new system is installed).
USER TRAINING

The type of training necessary varies with the type of system and the expertise of the users. The potential points we have kept in mind while estimating the amount of user training needed are as follows:

• Use of the system
• General computer concepts
• Information system concepts
• System management
• System installation

As the end users in our case are computer literate, we do not have to give them any training in computer fundamentals (e.g., how to operate a computer system).
MAINTENANCE

Software maintenance in software engineering is the modification of a software product after delivery to correct faults and to improve performance or other attributes. A common perception of maintenance is that it is merely fixing bugs. However, studies and surveys over the years have indicated that the majority, over 80%, of the maintenance effort is used for non-corrective actions (Pigosky 1997). This perception is perpetuated by users submitting problem reports that in reality are functionality enhancements to the system.

Software maintenance and the evolution of systems were first addressed by Meir M. Lehman in 1969. Over a period of twenty years, his research led to the formulation of eight Laws of Evolution (Lehman 1997). Key findings of his research include that maintenance is really evolutionary development and that maintenance decisions are aided by understanding what happens to systems (and software) over time. Lehman demonstrated that systems continue to evolve over time. As they evolve, they grow more complex unless some action, such as code refactoring, is taken to reduce the complexity.

The key software maintenance issues are both managerial and technical. Key management issues are: alignment with customer priorities, staffing, which organization does maintenance, and estimating costs. Key technical issues are: limited understanding, impact analysis, testing, and maintainability measurement.

Best and Worst Practices in Software Maintenance

Because maintenance of aging legacy software is very labour intensive, it is quite important to explore the best and most cost-effective methods available for dealing with the millions of applications that currently exist. The sets of best and worst practices are not the same: the practice that has the most positive impact on maintenance productivity is the use of trained maintenance experts, while the factor that has the greatest negative impact is the presence of error-prone modules in the application being maintained.

Software maintenance is categorized into four classes:

• Adaptive: dealing with changes and adapting in the software environment.
• Perfective: accommodating new or changed user requirements which concern functional enhancements to the software.
• Corrective: dealing with errors found and fixing them.
• Preventive: activities aimed at increasing software maintainability and preventing problems in the future.
SOFTWARE MAINTENANCE PLANNING

The integral part of software is the maintenance part, which requires an accurate maintenance plan to be prepared during software development. The plan should specify how users will request modifications or report problems, the estimation of resources such as cost should be included in the budget, and a decision should address whether to develop a new system and its quality objectives. The software maintenance, which can last for 5-6 years after development, calls for effective planning which addresses the scope of software maintenance, the tailoring of the post-delivery process, the designation of who will provide maintenance, and an estimate of the life-cycle costs.

SOFTWARE MAINTENANCE PROCESSES

This section describes the six software maintenance processes:

1. The implementation process contains software preparation and transition activities, such as the conception and creation of the maintenance plan, the preparation for handling problems identified during development, and the follow-up on product configuration management.
2. The problem and modification analysis process, which is executed once the application has become the responsibility of the maintenance group. The maintenance programmer must analyze each request, confirm it (by reproducing the situation) and check its validity, investigate it and propose a solution, document the request and the solution proposal, and, finally, obtain all the required authorizations to apply the modifications (a small record-keeping sketch for such requests is given after this list).
3. The process considering the implementation of the modification itself.
4. The process of acceptance of the modification, by confirming the modified work with the individual who submitted the request in order to make sure the modification provided a solution.
5. The migration process (platform migration, for example) is exceptional and is not part of daily maintenance tasks. If the software must be ported to another platform without any change in functionality, this process will be used, and a maintenance project team is likely to be assigned to the task.
6. Finally, the last maintenance process, also an event which does not occur on a daily basis, is the retirement of a piece of software.
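Purely as an illustration of the record-keeping implied by the problem and modification analysis process (step 2 above), the following Python sketch models a modification request and its status history; the field names and statuses are assumptions made for the example, not part of any standard.

    from dataclasses import dataclass, field
    from enum import Enum, auto


    class Status(Enum):
        SUBMITTED = auto()
        ANALYSED = auto()
        APPROVED = auto()
        IMPLEMENTED = auto()
        CLOSED = auto()


    @dataclass
    class ModificationRequest:
        request_id: int
        reported_by: str
        description: str
        proposed_solution: str = ""
        status: Status = Status.SUBMITTED
        history: list = field(default_factory=list)

        def advance(self, new_status: Status, note: str) -> None:
            """Move the request forward and keep an audit trail of each decision."""
            self.history.append((self.status, note))
            self.status = new_status


    if __name__ == "__main__":
        mr = ModificationRequest(1, "faculty_user", "Export clashes with MS Excel")
        mr.advance(Status.ANALYSED, "Reproduced; proposal: update export routine")
        mr.advance(Status.APPROVED, "Approved by maintenance lead")
        print(mr.status, len(mr.history), "steps recorded")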
There are a number of processes, activities and practices that are unique to maintainers, for example:

• Transition: a controlled and coordinated sequence of activities during which a system is transferred progressively from the developer to the maintainer.
• Service Level Agreements (SLAs) and specialized (domain-specific) maintenance contracts negotiated by maintainers.
• Modification Request and Problem Report Help Desk: a problem-handling process used by maintainers to prioritize, document and route the requests they receive.
• Modification Request acceptance/rejection: modification request work over a certain size, effort or complexity may be rejected by maintainers and rerouted to a developer.

CATEGORIES OF MAINTENANCE IN ISO/IEC 14764

E. B. Swanson initially identified three categories of maintenance: corrective, adaptive, and perfective. These have since been updated, and ISO/IEC 14764 presents:

• Corrective maintenance: reactive modification of a software product performed after delivery to correct discovered problems.
• Adaptive maintenance: modification of a software product performed after delivery to keep a software product usable in a changed or changing environment.
• Perfective maintenance: modification of a software product after delivery to improve performance or maintainability.
• Preventive maintenance: modification of a software product after delivery to detect and correct latent faults in the software product before they become effective faults.

There is also a notion of pre-delivery/pre-release maintenance, which covers all the good things you do to lower the total cost of ownership of the software: things like compliance with coding standards that include software maintainability goals, the management of coupling and cohesion of the software, and the attainment of software supportability goals (SAE JA1004, JA1005 and JA1006, for example). Note also that some academic institutions are carrying out research to quantify the cost of ongoing software maintenance due to the lack of resources such as design documents and system/software comprehension training and resources (multiply costs by approximately 1.5-2.0 where there is no design data available).