CHAPTER-3
SYSTEM DESIGN
3.1 Introduction
1. UML analysis modeling, which focuses on the user model and structural model
views of the system.
2. UML design modeling, which focuses on the behavioral modeling,
implementation modeling and environmental model views.
3.2 Design Principles
Basic design principles enable the software engineer to navigate the design process.
Davis suggests a set of principles for software design, which have been adapted and
extended in the following list:
The design process should not suffer from “tunnel vision”. A good
designer should consider alternative approaches, judging each based on the
requirements of the problem and the resources available to do the job.
The design should be traceable to the analysis model. Because a single
element of the design model often traces to multiple requirements, it is necessary to
have a means for tracking how requirements have been satisfied by the design model.
The design should not reinvent the wheel. Systems are constructed using a
set of design patterns, many of which have likely been encountered before. These
patterns should always be chosen as an alternative to reinvention. Time is short and
resources are limited! Design time should be invested in representing truly new ideas
and integrating those patterns that already exist.
The design should “minimize the intellectual distance” between the
software and the problem as it exists in the real world. That is, the structure of the
software design should mimic the structure of the problem domain.
The design should exhibit uniformity and integration. A design is uniform
if it appears that one person developed the entire thing. Rules of style and format
should be defined for a design team before design work begins. A design is integrated
if care is taken in defining interfaces between design components.
The design should be structured to accommodate change. The design
concepts enable a design to achieve this principle.
The design should be structured to degrade gently, even when aberrant
data, events, or operating conditions are encountered. Well-designed software
should never “bomb”. It should be designed to accommodate unusual circumstances,
and if it must terminate processing, do so in a graceful manner.
Design is not coding, coding is not design. Even when detailed procedural
designs are created for program components, the level of abstraction of the design
model is higher than source code. The only design decisions made at the coding level
address the small implementation details that enable the procedural design to be
coded.
The design should be assessed for quality as it is being created, not after
the fact. A variety of design concepts and measures are available to assist the
designer in assessing quality.
The design should be reviewed to minimize conceptual errors. There is
sometimes a tendency to focus on minutiae when the design is reviewed, missing the
forest for the trees. A design team should ensure that major conceptual elements of
the design have been addressed before worrying about the syntax of the design model.
When these design principles are properly applied, the software engineer creates a
design that exhibits both external and internal quality factors. External quality factors
are those properties of the software that can be readily observed by users. Internal
quality factors are of importance to software engineers.
The purpose of the design phase is to plan a solution to the problem specified
by the requirement document. This phase is the first step in moving from problem
domain to the solution domain. The design of a system is perhaps the most critical
factor affecting the quality of the software, and has a major impact on the later phases,
particularly testing and maintenance. The output of this phase is the design document.
This document is similar to a blueprint or plan for the solution, and is used later
during implementation, testing and maintenance.
The design activity is often divided into two separate phases: system design and
detailed design. System design, which is sometimes also called top-level design, aims
to identify the modules that should be in the system, the specifications of these
modules, and how they interact with each other to produce the desired results. At the
end of system design all the major data structures, file formats, output formats, as well
as the major modules in the system and their specifications are decided.
During detailed design the internal logic of each of the modules specified in
system design is decided. During this phase, further details of the data structures and
algorithmic design of each of the modules are specified. The logic of a module is
usually specified in a high-level design description language, which is independent of
the target language in which the software will eventually be implemented. In system
design the focus is on identifying the modules, whereas during detailed design the
focus is on designing the logic for each of the modules. In other words, in system
design the attention is on what components are needed, while in detailed design how
the components can be implemented in software is the issue.
This document plays a vital role in the software development life cycle (SDLC), as it
describes the complete requirements of the system. It is meant for use by developers and
will be the basis during the testing phase. Any changes made to the requirements in the
future will have to go through a formal change approval process.
The trends of increasing technical complexity of the systems, coupled with the
need for repeatable and predictable process methodologies, have driven System
Developers to establish system development models or software development life
cycle models.
Since many organizations were opting for automation, it was felt that some standard,
structured procedure or methodology should be introduced in the industry so that the
transition from manual to automated systems became easy. The
concept of a system life cycle came into existence then. The life cycle model emphasized
the need to follow a structured approach towards building a new or improved system.
Many models were suggested. System development begins with the
recognition of user needs. Then there is a preliminary investigation stage. It includes
evaluation of present system, information gathering, feasibility study, and request
approval. Feasibility study includes technical, economic, legal and operational
feasibility. In economic feasibility cost-benefit analysis is done. After that, there are
detailed design, implementation, testing and maintenance stages.
In this session, we'll be learning about the various stages that make up a system's life
cycle. In addition, different life cycle models will be discussed. These include the
Waterfall model, Prototype model, Object-Oriented Model, spiral model and Dynamic
Systems Development Method (DSDM).
The SPIRAL MODEL was defined by Barry Boehm in his 1988 article, “A Spiral
Model of Software Development and Enhancement”. This model was not the first
model to discuss iterative development, but it was the first model to explain why the
iteration matters.
Figure 3.3.2.1 below shows how the spiral model works:
Figure 3.3.2.1
Software engineers can get their hands in and start working on the core of a
project earlier.
The spiral development model has one characteristic that is common to all
models—the need for advanced technical planning and multidisciplinary reviews at
critical staging or control points. Each cycle of the model culminates with a technical
review that assesses the status, progress, maturity, merits, and risks of development efforts
to date; resolves critical operational and/or technical issues (COIs/CTIs); and reviews
plans and identifies COIs/CTIs to be resolved for the next iteration of the spiral.
Subsequent implementations of the spiral may involve lower level spirals that
follow the same quadrant paths and decision considerations.
A Data Flow Diagram (DFD) is a graphical tool used to describe and analyze the
movement of data through a system, manual or automated, including the processes,
stores of data, and delays in the system. Data Flow Diagrams are the central tool and
the basis from which other components are developed. The DFD is also known as a
data flow graph or a bubble chart. The basic notation used to create a DFD is as follows:
i) Data Flow: Data moves in a specific direction from an origin to a destination.
ii) Process: People, procedures, or devices that use or produce (transform) data. The
physical component is not identified.
iii) Source: External sources or destinations of data, which may be people, programs,
organizations, or other entities.
iv) Data Store: Here data are stored or referenced by a process in the System.
UML defines the following relationships:
Dependency
Association
Generalization
Realization
a) Dependency
The relationship “Dependency” between two entities refers to a situation
where changes made to one entity may have an effect on the other entity. As seen in
the figure, a dashed arrow proceeding in one direction represents the dependency
symbol.
b) Association
A structural relationship that shows a connection among objects is
called an “Association”. It is represented as a solid line connecting the related entities.
c) Generalization
Generalization is also termed a “specialization” relationship. In this
relationship, the objects of one entity can be substituted with the objects of another
entity. The entity whose objects are substituted is known as the parent entity, and the
entity providing the objects for replacement is known as the child entity. It is
represented as a solid line with a hollow arrowhead pointing to the parent entity.
d) Realization
Realization is a relationship between classifiers in which one classifier lays
down a contract and another classifier guarantees to carry out this contract.
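These four relationships can be illustrated directly in Java. The following is a minimal sketch with made-up class names: `extends`, `implements`, a field reference, and a transient parameter correspond to generalization, realization, association, and dependency respectively.

```java
// Sketch mapping the four UML relationships onto Java constructs.
// All class names here are invented illustrations, not project classes.
interface Printer {                            // realization: one classifier lays down a contract...
    String print();
}
class Engine { int power() { return 5; } }
class Car {
    private Engine engine = new Engine();      // association: a structural link to another object
    int drive() { return engine.power(); }
    String log(java.util.Date when) {          // dependency: Car merely uses Date transiently
        return "driven at " + when;
    }
}
class SportsCar extends Car { }                // generalization: child objects substitute for the parent
class TextPrinter implements Printer {         // ...realization: this classifier carries the contract out
    public String print() { return "ok"; }
}
public class UmlRelations {
    public static void main(String[] args) {
        Car c = new SportsCar();               // substitution works because of generalization
        System.out.println(c.drive());         // prints 5
        System.out.println(new TextPrinter().print()); // prints ok
    }
}
```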
a) Class Diagrams
A class diagram shows a set of classes, interfaces, and collaborations and their
relationships. Graphically, a class diagram is a collection of vertices and arcs.
Contents
Class diagrams commonly contain the following things:
Classes
Interfaces
Collaborations
Dependency, generalization and association relationships
b) Component Diagrams
A component is the physical implementation of classes and collaborations.
The architecture of a system can be explained with its components; therefore, a
component is a basic building block of a system. These diagrams can be produced
by modeling various physical components, such as libraries, tables, and files, which
reside internal to a given node.
Contents
Components
Interfaces
Relationships
c) Deployment Diagrams
The deployment diagrams indicate the processing elements, processes, and
software components. The static deployment view of a system, in terms of its different
components and processes, can be modeled by deployment diagrams.
Contents
Deployment diagrams commonly contain nodes and dependency and association
relationships.
d) Use Case Diagrams
Use case diagrams commonly contain:
Use Cases
Actors
Dependency, generalization, and association relationships
Like all other diagrams, use case diagrams may contain notes and constraints. Use
Case diagrams may also contain packages, which are used to group elements of your
model into larger chunks. Occasionally, you will want to place instances of use cases
in your diagrams as well, especially when you want to visualize a specific executing
system.
b) Sequence Diagrams
A sequence diagram is an interaction diagram that emphasizes the time
ordering of the messages. Graphically, a sequence diagram is a table that shows
objects arranged along the X-axis and messages, ordered in increasing time, along the
Y-axis.
Typically you place the object that initiates the interaction at the left, and
increasingly more subordinate objects to the right. Next, you place the messages that
these objects send and receive along the Y-axis, in order of increasing time from top
to the bottom. This gives the reader a clear visual cue to the flow of control over time.
Sequence diagrams have two interesting features.
First, there is the object lifeline. An object lifeline is the vertical dashed line that
represents the existence of an object over a period of time. Most objects that appear in
the interaction diagrams will be in existence for the duration of the interaction, so
these objects are all aligned at the top of the diagram, with their lifelines drawn from
the top of the diagram to the bottom.
Second, there is the focus of control. The focus of control is a tall, thin rectangle that
shows the period of time during which an object is performing an action, either
directly or through a subordinate procedure. The top of the rectangle is aligned with
the start of the action; the bottom is aligned with its completion. The sequence diagram for the
Flexible Rollback Recovery in Dynamic Heterogeneous Grid Computing is shown in
fig 3.3
c) Collaboration Diagram:
d) Activity Diagram:
The purpose of an activity diagram is to provide a view of flows and what is
going on inside a use case or among several classes. An activity diagram can also be
used to represent a class’s method implementation. An outgoing solid arrow attached
to the end of an activity symbol indicates a transition triggered by the completion of
the activity. The Activity Diagram for Flexible Rollback Recovery in Dynamic
Heterogeneous Grid Computing is shown in the figure.
A state chart diagram shows a state machine. State chart diagrams are used to
model the dynamic aspects of a system. For the most part, this involves modeling the
behavior of reactive objects.
A reactive object is one whose behavior is best characterized by its response to
events dispatched from outside its context. A reactive object has a clear lifetime whose
current behavior is affected by its past.
A state chart diagram shows a state machine, emphasizing the flow of control
from state to state. A state machine is a behavior that specifies the sequences of states
an object goes through during its lifetime in response to events, together with its
responses to those events.
A state is a condition in the life of an object during which it satisfies some
condition, performs some activity, or waits for some event. An event is a
specification of a significant occurrence that has a location in time and space.
Graphically, a state chart diagram is a collection of vertices and arcs. State chart
diagrams commonly contain simple states, composite states, and transitions, including
events and actions.
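The kind of state machine a state chart diagram models can be sketched in Java using enums for the states and events. The states and transitions below are illustrative examples, not the project's actual states.

```java
// Illustrative sketch of an enum-based state machine of the kind a
// state chart diagram models. States, events, and transitions here
// are hypothetical examples.
public class JobStateMachine {
    public enum State { IDLE, RUNNING, DONE, FAILED }
    public enum Event { START, FINISH, ERROR }

    private State state = State.IDLE;

    public State getState() { return state; }

    // Dispatch an event; the current state decides which transition fires.
    public void fire(Event event) {
        switch (state) {
            case IDLE:
                if (event == Event.START) state = State.RUNNING;
                break;
            case RUNNING:
                if (event == Event.FINISH) state = State.DONE;
                else if (event == Event.ERROR) state = State.FAILED;
                break;
            default:
                break; // DONE and FAILED are terminal states
        }
    }

    public static void main(String[] args) {
        JobStateMachine m = new JobStateMachine();
        m.fire(Event.START);
        m.fire(Event.FINISH);
        System.out.println(m.getState()); // prints DONE
    }
}
```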
Class Diagram:
A UML class diagram shows the static structure of the model. The class
diagram is a collection of static modeling elements, such as classes
and their relationships, connected as a graph to each other and to
their contents.
Use Case Diagram:
A use case diagram is a graph of actors, a set of use cases enclosed by a
system boundary, communication (participation) associations between the actors and
the use cases, and generalizations among the use cases. The use case model defines
the outside (actors) and inside (use cases) of the system’s behavior.
[Use case diagram: Update Network, Pactorization, Send, Possible Retransmit]
Sequence Diagram:
Sequence diagrams are an easy and intuitive way of describing the behavior of a
system by viewing the interaction between the system and its environment. A
sequence diagram shows an interaction arranged in a time sequence. A sequence
diagram has two dimensions: the vertical dimension represents time; the horizontal
dimension represents different objects. The vertical line is called the object’s life
line. The lifeline represents the object’s existence during the interaction.
[Sequence diagram: objects Node, Update Network, Destination, Path Discovery,
Node Disjoint, Pactorization, Send, Retransmit; messages include 1: distance(),
7: pactorize(), 8: packets(), 9: possible retransmit()]
Collaboration Diagram:
The collaboration diagram represents a collaboration, which is a set of objects
related in a particular context, and an interaction, which is a set of messages exchanged
among the objects within the collaboration to achieve a desired outcome.
[Collaboration diagram: objects Node, Destination, Path Discovery, Node Disjoint,
Pactorization, Send, Retransmit; messages include 1: distance(),
4: find node disjoint path(), 5: paths(), 7: pactorize(), 8: packets()]
Activity Diagram:
The purpose of an activity diagram is to provide a view of flows and what is going on
inside a use case or among several classes. An activity diagram can also be used to
represent a class’s method implementation. A token represents an operation. An
activity is shown as a round box containing the name of the operation. An outgoing
solid arrow attached to the end of an activity symbol indicates a transition triggered by
the completion of the activity.
[Activity diagram nodes: Enter the Destination, Update Network, Enter the Distance,
Find Node Disjoint Path, Packetization, Send, Possible Retransmit]
Deployment Diagram
CHAPTER-4
IMPLEMENTATION
4.1 Introduction
A programming tool or software tool is a program or application that software
developers use to create, debug, maintain, or otherwise support other programs and
applications. The term usually refers to relatively simple programs that can be
combined to accomplish a task. This chapter describes the software tools used in
our project.
Java Technology
Initially the language was called “Oak”, but it was renamed “Java” in
1995. The primary motivation for this language was the need for a platform-
independent (i.e., architecture-neutral) language that could be used to create software
to be embedded in various consumer electronic devices.
Java has had a profound effect on the Internet. This is because Java expands
the universe of objects that can move about freely in cyberspace. In a network, two
categories of objects are transmitted between the server and the personal computer:
passive information and dynamic, active programs. Dynamic, self-executing programs
cause serious problems in the areas of security and portability. But Java addresses
those concerns and, by doing so, has opened the door to an exciting new form of
program called the applet.
Every time you download a “normal” program, you are risking a viral
infection. Prior to Java, most users did not download executable programs frequently,
and those who did scanned them for viruses prior to execution. Most users still worried
about the possibility of infecting their systems with a virus. In addition, another type
of malicious program exists that must be guarded against. This type of program can
gather private information, such as credit card numbers, bank account balances, and
passwords. Java answers both these concerns by providing a “firewall” between a
network application and your computer. When you use a Java-compatible Web
browser, you can safely download Java applets without fear of virus infection or
malicious intent.
Portability
The key that allows Java to solve the security and portability problems is
that the output of the Java compiler is byte code. Byte code is a highly optimized set of
instructions designed to be executed by the Java run-time system, which is called the
Java Virtual Machine (JVM). That is, in its standard form, the JVM is an interpreter
for byte code.
Translating a Java program into byte code makes it much easier to run a
program in a wide variety of environments. The reason is that once the run-time package
exists for a given system, any Java program can run on it.
Byte code verification takes place at the end of the compilation process to
make sure that it is all accurate and correct. So byte code verification is integral to the
compiling and executing of Java code.
[Figure: Java source (.java) → javac compiler → byte code (.class) → Java Virtual Machine]
The figure shows how Java produces and executes byte code. The Java source code is
located in a .java file that is processed with the Java compiler, javac. The Java compiler
produces a file called a .class file, which contains the byte code. The .class file is then
loaded, across the network or locally on your machine, into the execution environment,
the Java Virtual Machine, which interprets and executes the byte code.
Java Architecture
Java architecture provides a portable, robust, high performing environment for
development. Java provides portability by compiling the byte codes for the Java
Virtual Machine, which is then interpreted on each platform by the run-time
environment. Java is a dynamic system, able to load code when needed from a
machine in the same room or across the planet.
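A minimal sketch of this portability: the same compiled class runs unchanged on any platform with a JVM, and the run-time system can report which platform that happens to be.

```java
// Illustration of "write once, run anywhere": the same .class file
// runs on any platform with a JVM; the run-time system reports
// which platform that happens to be.
public class PlatformInfo {
    public static String describe() {
        return "Java " + System.getProperty("java.version")
                + " on " + System.getProperty("os.name");
    }

    public static void main(String[] args) {
        // The output varies by platform -- which is exactly the point.
        System.out.println(describe());
    }
}
```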
Compilation of code
When you compile the code, the Java compiler creates machine code (called
byte code) for a hypothetical machine called Java Virtual Machine (JVM). The JVM
is supposed to execute the byte code. The JVM was created to overcome the issue of
portability: the code is written and compiled for one machine and interpreted on all
machines. This machine is called the Java Virtual Machine.
Compiling and interpreting Java Source Code
During run time, the Java interpreter tricks the byte code file into thinking that
it is running on a Java Virtual Machine. In reality, this could be an Intel Pentium
running Windows 95, a Sun SPARCstation running Solaris, or an Apple Macintosh
running its own system; all could receive code from any computer through the
Internet and run the applets.
[Figure: platform-independent byte code. Java source code is compiled once into
byte code, which separate Java interpreters on the PC, Macintosh, and SPARC
platforms each execute.]
Simple
Java was designed to be easy for the professional programmer to learn and to
use effectively. If you are an experienced C++ programmer, learning Java will be
even easier, because Java inherits the C/C++ syntax and many of the object-oriented
features of C++. Most of the confusing concepts from C++ are either left out of Java
or implemented in a cleaner, more approachable manner. In Java there are a small
number of clearly defined ways to accomplish a given task.
Object-Oriented
Java was not designed to be source-code compatible with any other language.
This allowed the Java team the freedom to design with a blank slate. One outcome of
this was a clean, usable, pragmatic approach to objects. The object model in Java is
simple and easy to extend, while simple types, such as integers, are kept as high-
performance non-objects.
Robust
The multi-platform environment of the Web places extraordinary demands on
a program, because the program must execute reliably in a variety of systems. The
ability to create robust programs was given a high priority in the design of Java. Java
is a strictly typed language; it checks your code at compile time and run time.
Java virtually eliminates the problems of memory management and de-allocation,
which is completely automatic. In a well-written Java program, all run-time
errors can, and should, be managed by your program.
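A small sketch of managing a run-time error rather than letting the program "bomb"; the parsing scenario and method names are invented examples.

```java
// Sketch of managing a run-time error instead of terminating: the
// unchecked exception is caught and the program degrades gracefully
// by returning a fallback value. Names here are illustrative.
public class SafeParse {
    public static int parseOrDefault(String text, int fallback) {
        try {
            return Integer.parseInt(text);
        } catch (NumberFormatException e) {
            // Aberrant data: recover rather than crash.
            return fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrDefault("42", 0));    // prints 42
        System.out.println(parseOrDefault("oops", -1)); // prints -1
    }
}
```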
4.2 Overview of Implementation Language
Java Swing
Swing is a widget toolkit for Java. The main characteristics of the Swing
toolkit are that it is platform independent, customizable, extensible, configurable, and
lightweight. It has a rich set of widgets, from basic widgets like buttons, labels, and
scrollbars to advanced widgets like trees and tables.
All Swing components whose names begin with "J" descend from the
JComponent class. For example, JPanel, JScrollPane, JButton, and JTable all
inherit from JComponent. However, JFrame doesn't, because it implements a top-level
container. The JComponent class extends the Container class, which itself
extends Component. The Component class includes everything from providing
layout hints to supporting painting and events. The Container class has support for
adding components to the container and laying them out.
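The hierarchy described above can be checked without creating any windows, using reflection:

```java
import javax.swing.JButton;
import javax.swing.JComponent;
import javax.swing.JFrame;
import java.awt.Container;

// Verifies the class hierarchy described in the text via reflection;
// no component is instantiated, so this runs even headless.
public class HierarchyCheck {
    public static void main(String[] args) {
        System.out.println(JComponent.class.isAssignableFrom(JButton.class));   // true
        System.out.println(JComponent.class.isAssignableFrom(JFrame.class));    // false: top-level container
        System.out.println(Container.class.isAssignableFrom(JComponent.class)); // true
    }
}
```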
JPanel
JPanel is a generic lightweight container used to group and lay out other components.
JFrame
JFrame is Swing's version of Frame and is descended directly from that class. It
is used to create windows in a Swing program. The components added to the frame
are referred to as its contents; these are managed by the content pane. To add a
component to a JFrame, we must add it to its content pane instead.
JButton
The JButton object generally consists of a text label and/or image icon that
describes the purpose of the button, an empty area around the text/icon and border.
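A headless-safe sketch of wiring a JButton to an ActionListener; doClick() fires the action programmatically, so no window needs to be shown. The button label is an invented example.

```java
import javax.swing.JButton;

// Sketch: a JButton with a text label and an ActionListener.
// doClick() presses the button programmatically, so the action
// fires without any window being displayed.
public class ButtonDemo {
    // Returns how many times the listener ran after one programmatic click.
    public static int clickOnce() {
        final int[] clicks = {0};
        JButton button = new JButton("Save"); // text label describing the button's purpose
        button.addActionListener(e -> clicks[0]++);
        button.doClick(0); // fire the action immediately, no visible press
        return clicks[0];
    }

    public static void main(String[] args) {
        System.out.println("clicks = " + clickOnce()); // prints clicks = 1
    }
}
```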
JLabel
JLabel displays a short string of text or an image; it does not react to input events.
JTextArea
JTextArea is a multi-line area for displaying and editing plain text.
JList
JList provides a scrollable set of items from which one or more may be
selected. A JList can be populated from an array or a Vector. JList does not support
scrolling directly; instead, the list must be associated with a scroll pane. The viewport
used by the scroll pane can also have a user-defined border. JList selections are handled
using a ListSelectionListener.
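A short sketch, with made-up item names, showing a JList populated from an array, wrapped in a JScrollPane for scrolling, and observed through a ListSelectionListener:

```java
import javax.swing.JList;
import javax.swing.JScrollPane;

// Sketch: populate a JList from an array, give it scrolling via a
// JScrollPane, and react to selection changes. Item names are made up.
public class ListDemo {
    // Helper: size of the model a JList builds from an array.
    public static int sizeOf(String[] items) {
        return new JList<>(items).getModel().getSize();
    }

    public static void main(String[] args) {
        String[] items = {"Node A", "Node B", "Node C"};
        JList<String> list = new JList<>(items);
        // JList does not scroll by itself; the scroll pane supplies that.
        JScrollPane scroller = new JScrollPane(list);
        // Selection changes are reported to a ListSelectionListener.
        list.addListSelectionListener(e -> {
            if (!e.getValueIsAdjusting())
                System.out.println("selected: " + list.getSelectedValue());
        });
        list.setSelectedIndex(1);          // prints "selected: Node B"
        System.out.println(sizeOf(items)); // prints 3
    }
}
```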
JFileChooser
File choosers provide a GUI for navigating the file system, and then either
choosing a file or directory from a list, or entering the name of a file or directory. To
display a file chooser, you usually use the JFileChooser API to show a modal dialog
containing the file chooser. A JFileChooser is a dialog to select a file or files:
public java.io.File getSelectedFile ()
public java.io.File[] getSelectedFiles ()
JScrollPane
JScrollPane provides a scrollable view of another component.
Class BufferedImage
java.lang.Object
  java.awt.Image
    java.awt.image.BufferedImage
All implemented interfaces: RenderedImage, WritableRenderedImage
public class BufferedImage extends Image implements WritableRenderedImage
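Following that hierarchy, a minimal sketch of using BufferedImage's pixel buffer directly:

```java
import java.awt.image.BufferedImage;

// Sketch: BufferedImage is an Image with an accessible buffer of
// pixel data; here we create one, write a pixel, and read it back.
public class ImageDemo {
    public static int roundTrip() {
        BufferedImage img = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        int red = 0xFF0000;                 // RGB value for pure red
        img.setRGB(3, 4, red);              // write one pixel
        return img.getRGB(3, 4) & 0xFFFFFF; // read it back, mask off the alpha byte
    }

    public static void main(String[] args) {
        System.out.printf("pixel = %06X%n", roundTrip()); // prints pixel = FF0000
    }
}
```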
JCreator
JCreator is a powerful IDE for Java. It is fast, efficient, and reliable, which
makes it a suitable tool for programmers of every level, from learning programmer
to Java specialist.
JCreator provides the user with a wide range of functionality such as Project
management, project templates, code-completion, debugger interface, editor with
syntax highlighting, wizards, and a fully customizable user interface.
With JCreator you can directly compile or run your Java program without
activating the main document first. JCreator will automatically find the file with the
main method or the HTML file holding the Java applet, and then it will start the
appropriate tool.
JCreator is written entirely in C++, which makes it fast and efficient
compared to Java-based editors and IDEs.
Java Database Connectivity
What Is JDBC?
JDBC is a Java API for executing SQL statements. (As a point of interest,
JDBC is a trademarked name and is not an acronym; nevertheless, JDBC is often
thought of as standing for Java Database Connectivity.) It consists of a set of classes
and interfaces written in the Java programming language. JDBC provides a standard
API for tool and database developers and makes it possible to write database
applications using a pure Java API.
So why not just use ODBC from Java? The answer is that you can use ODBC
from Java, but this is best done with the help of JDBC in the form of the JDBC-
ODBC Bridge, which we will cover shortly. The question now becomes "Why do you
need JDBC?" There are several answers to this question:
1. ODBC is not appropriate for direct use from Java because it uses a C interface.
Calls from Java to native C code have a number of drawbacks in the security,
implementation, robustness, and automatic portability of applications.
2. A literal translation of the ODBC API into a Java API would not be desirable.
For example, Java has no pointers, and ODBC makes copious use of them, including
the notoriously error-prone generic pointer "void *". You can think of JDBC as
ODBC translated into an object-oriented interface that is natural for Java
programmers.
3. ODBC is hard to learn. It mixes simple and advanced features together, and it
has complex options even for simple queries. JDBC, on the other hand, was designed
to keep simple things simple while allowing more advanced capabilities where
required.
4. A Java API like JDBC is needed in order to enable a "pure Java" solution.
When ODBC is used, the ODBC driver manager and drivers must be manually
installed on every client machine. When the JDBC driver is written completely in
Java, however, JDBC code is automatically installable, portable, and secure on all
Java platforms from network computers to mainframes.
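The standard JDBC call pattern looks roughly as follows. The URL, table, and column names are placeholders; with no matching driver on the class path, DriverManager throws an SQLException, which this sketch reports instead of crashing.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch of the standard JDBC call pattern: Connection -> Statement
// -> ResultSet. The URL and the "users" table are placeholders; if no
// suitable driver is installed, the SQLException is caught and reported.
public class JdbcSketch {
    public static String query(String url) {
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name FROM users")) {
            StringBuilder out = new StringBuilder();
            while (rs.next()) out.append(rs.getString("name")).append(' ');
            return out.toString();
        } catch (SQLException e) {
            // Degrade gracefully when no driver or database is available.
            return "SQL error: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // A hypothetical URL; prints an error unless a matching driver exists.
        System.out.println(query("jdbc:example://localhost/testdb"));
    }
}
```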
Two-tier and Three-tier Models
The JDBC API supports both two-tier and three-tier models for database
access.
In the two-tier model, a Java applet or application talks directly to the
database. This requires a JDBC driver that can communicate with the particular
database management system being accessed. A user's SQL statements are delivered
to the database, and the results of those statements are sent back to the user. The
database may be located on another machine to which the user is connected via a
network. This is referred to as a client/server configuration, with the user's machine as
the client, and the machine housing the database as the server. The network can be an
Intranet, which, for example, connects employees within a corporation, or it can be
the Internet.
[Figure: In the two-tier model, a Java application with JDBC on the client machine
uses a DBMS-proprietary protocol to reach the DBMS on the database server. In the
three-tier model, a Java applet or HTML browser on the client machine (GUI) talks
to a Java application server on the server machine (business logic), which uses JDBC
and a DBMS-proprietary protocol to reach the DBMS on the database server.]
Until now the middle tier has typically been written in languages such as C or
C++, which offer fast performance. However, with the introduction of optimizing
compilers that translate Java byte code into efficient machine-specific code, it is
becoming practical to implement the middle tier in Java. This is a big plus, making it
possible to take advantage of Java's robustness, multithreading, and security features.
JDBC is important to allow database access from a Java middle tier.
Fig 4.5: Database-specific APIs
The JDBC drivers that we are aware of at this time fit into one of four categories:
JDBC-ODBC bridge plus ODBC driver
Native-API partly-Java driver
JDBC-Net pure Java driver
Native-protocol pure Java driver
JDBC-ODBC Bridge
If possible, use a Pure Java JDBC driver instead of the Bridge and an ODBC
driver. This completely eliminates the client configuration required by ODBC. It also
eliminates the potential that the Java VM could be corrupted by an error in the native
code brought in by the Bridge (that is, the Bridge native library, the ODBC driver
manager library, the ODBC driver library, and the database client library).
Database Management Systems
Originally found only in large companies with the computer hardware needed
to support large data sets, DBMSs have more recently emerged as a fairly standard
part of any company back office.
Description
A DBMS is a complex set of software programs that controls the organization,
storage, management, and retrieval of data in a database. A DBMS includes:
A modeling language to define the schema of each database hosted in the DBMS,
according to the DBMS data model.
The four most common types of organizations are the hierarchical, network, relational
and object models. Inverted lists and other methods are also used. A given database
management system may provide one or more of the four models. The optimal
structure depends on the natural organization of the application's data, and on the
application's requirements (which include transaction rate (speed), reliability,
maintainability, scalability, and cost).
The dominant model in use today is the ad hoc one embedded in SQL, despite the
objections of purists who believe this model is a corruption of the relational model,
since it violates several of its fundamental principles for the sake of practicality and
performance. Many DBMSs also support the Open Database Connectivity API that
supports a standard way for programmers to access the DBMS.
Data structures (fields, records, files and objects) optimized to deal with very large
amounts of data stored on a permanent data storage device (which implies relatively
slow access compared to volatile main memory).
A database query language and report writer to allow users to interactively interrogate
the database, analyze its data, and update it according to the user's privileges on the
data. It also controls the security of the database.
Data security prevents unauthorized users from viewing or updating the database.
Using passwords, users are allowed access to the entire database or subsets of it called
subschemas. For example, an employee database can contain all the data about an
individual employee, but one group of users may be authorized to view only payroll
data, while others are allowed access to only work history and medical data.
If the DBMS provides a way to interactively enter and update the database, as well as
interrogate it, this capability allows for managing personal databases. However, it
may not leave an audit trail of actions or provide the kinds of controls necessary in a
multi-user organization. These controls are only available when a set of application
programs is customized for each data entry and updating function.
A DBMS also provides a transaction mechanism that ideally guarantees the ACID
properties, in order to ensure data integrity despite concurrent user accesses
(concurrency control) and faults (fault tolerance). It also maintains the integrity of
the data in the database.
The DBMS can maintain the integrity of the database by not allowing more than one
user to update the same record at the same time. The DBMS can help prevent
duplicate records via unique index constraints; for example, no two customers with
the same customer numbers (key fields) can be entered into the database. See ACID
properties for more information (Redundancy avoidance).
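Both behaviours described above, rejecting duplicate key fields and leaving the
data unchanged after a failed operation, can be sketched as follows (sqlite3
again; the customer table and key names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customer (customer_no INTEGER, name TEXT)")
# A unique index on the key field prevents duplicate customer numbers.
cur.execute("CREATE UNIQUE INDEX idx_customer_no ON customer (customer_no)")
cur.execute("INSERT INTO customer VALUES (1, 'Ann')")
conn.commit()

try:
    # Attempting to insert a second customer with the same key fails.
    cur.execute("INSERT INTO customer VALUES (1, 'Duplicate')")
except sqlite3.IntegrityError:
    conn.rollback()  # the failed transaction leaves the data unchanged

count = cur.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
```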
The DBMS accepts requests for data from the application program and
instructs the operating system to transfer the appropriate data.
When a DBMS is used, information systems can be changed much more easily
as the organization's information requirements change. New categories of data can be
added to the database without disruption to the existing system.
Organizations may use one kind of DBMS for daily transaction processing and
then move the detail onto another computer that uses another DBMS better suited for
random inquiries and analysis. Overall systems design decisions are made by
data administrators and systems analysts.
Data definition: Defining tables and structures in the database (DDL used to create,
alter and drop schema objects such as tables and indexes).
Data manipulation: Used to manipulate the data within those schema objects (DML
Inserting, Updating, Deleting the data, and Querying the Database).
A schema is a collection of database objects that can include: tables, views, indexes
and sequences
SQL statements that can be issued against an Oracle database schema include DDL
statements (CREATE, ALTER, DROP), DML statements (INSERT, UPDATE, DELETE,
SELECT) and transaction control statements (COMMIT, ROLLBACK).
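A minimal demonstration of DDL and DML statements, run here against SQLite
through Python rather than Oracle (the syntax differs slightly between the two,
and the dept table is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: create and index a table.
cur.execute("CREATE TABLE dept (dept_no INTEGER PRIMARY KEY, dept_name TEXT)")
cur.execute("CREATE INDEX idx_dept_name ON dept (dept_name)")

# DML: insert, update, query and delete rows.
cur.execute("INSERT INTO dept VALUES (10, 'ACCOUNTING')")
cur.execute("UPDATE dept SET dept_name = 'FINANCE' WHERE dept_no = 10")
name = cur.execute(
    "SELECT dept_name FROM dept WHERE dept_no = 10").fetchone()[0]
cur.execute("DELETE FROM dept WHERE dept_no = 10")

# DDL: drop the schema objects again.
cur.execute("DROP INDEX idx_dept_name")
cur.execute("DROP TABLE dept")
```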
TESTING
5.1 INTRODUCTION
Software Testing:
The testing phase involves testing the developed system using
various techniques such as white box testing and control structure testing.
White box testing is a test case design method that uses the control
structure of the procedural design to derive test cases. After performing white
box testing it was verified that:
The Leave Recording System (LRS) software exercises all independent
paths within the modules at least once.
All logical decisions have been exercised on their true and false sides.
All loops were executed at their boundaries and within their operational bounds.
The internal data structures were tested to ensure their validity.
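White box criteria of this kind can be illustrated on a small function
(hypothetical, not taken from the LRS code) whose tests exercise both sides of
each decision and the loop at its boundaries:

```python
def count_leave_days(days):
    """Sum only the positive entries; a hypothetical helper for illustration."""
    total = 0
    for d in days:          # loop exercised zero, one and many times below
        if d > 0:           # decision exercised on its true and false sides
            total += d
    return total

# Tests covering the independent paths:
assert count_leave_days([]) == 0          # loop executed zero times
assert count_leave_days([3]) == 3         # one iteration, decision true
assert count_leave_days([-1]) == 0        # one iteration, decision false
assert count_leave_days([2, -5, 4]) == 6  # many iterations, both branches
```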
The following tests were conducted and it was noted that the system
performed them well.
A strategy for software testing must accommodate low-level tests that are
necessary to verify that a small source code segment has been correctly implemented,
as well as high-level tests that validate major system functions against customer
requirements. Reviews serve as a filter for the software process, removing errors while
they are relatively inexpensive to find and correct. To properly verify a system, data
about the software engineering process should be collected, evaluated and
disseminated. SQA helps to improve the quality of both the product and the software
process itself.
Code Testing
Specification Testing
Unit Testing
Integration Testing
System Testing
Output Testing
User Acceptance Testing
Code Testing:
Testing the logic of the program is called code testing. Every path through
the program is tested and checked to see whether the logic is working properly.
This project performed well logically.
Specification Testing:
Specification testing means checking whether the software behaves as per
the given specification. This project was tested against its specification: what each
particular module or program should do and how it should perform under various
conditions.
Unit Testing:
Unit testing focuses verification on the smallest unit of software design, in
this case the form; this is known as form testing. The testing is done individually on
each form. Using the unit test plan, prepared in the design phase of the system
development, as a guide, important control paths are tested to uncover errors within
the boundary of the module. In this step, each module was found to work
satisfactorily with regard to the expected output from the module.
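A unit test of a single form field might look as follows (a sketch with
Python's unittest; the validate_date function and its rules are hypothetical,
not the project's actual form code):

```python
import unittest

def validate_date(value):
    """Return True if value looks like a DD/MM/YYYY date.

    A hypothetical form-field check used only for this illustration.
    """
    parts = value.split("/")
    if len(parts) != 3:
        return False
    day, month, year = parts
    return (day.isdigit() and month.isdigit() and year.isdigit()
            and 1 <= int(day) <= 31 and 1 <= int(month) <= 12
            and len(year) == 4)

class TestDateField(unittest.TestCase):
    def test_valid_date(self):
        self.assertTrue(validate_date("15/08/2023"))

    def test_invalid_date(self):
        self.assertFalse(validate_date("99/99/20"))

# Run the form's unit tests programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDateField)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each form gets its own small test case of this shape, so failures are isolated
to the module boundary in which they occur.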
Integration Testing:
Data can be lost across an interface; one module can have an adverse effect on
another; and sub-functions, when combined, may not produce the desired major
function. Integration testing is a systematic technique for constructing the program
structure while at the same time conducting tests to uncover errors associated with the
interfaces. All the modules are combined in this testing step, and the entire program is
then tested as a whole. Different integration test plans, such as top-down integration
and bottom-up integration, were applied and the errors found in the system were
corrected. Finally, all the combined modules performed well.
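The interface errors described above can be caught by a test that exercises two
modules together (the two modules below are hypothetical stand-ins for
illustration):

```python
# Two small "modules" whose interface is exercised together.
def parse_leave_request(text):
    """Module A: parse a 'name,days' string into a record (hypothetical)."""
    name, days = text.split(",")
    return {"name": name.strip(), "days": int(days)}

def remaining_balance(record, balance=20):
    """Module B: subtract the requested days from the leave balance."""
    return balance - record["days"]

# Integration test: data must survive the interface between A and B intact.
record = parse_leave_request("Alice, 5")
assert record == {"name": "Alice", "days": 5}
assert remaining_balance(record) == 15
```

A failure here would point at the interface (e.g. Module B receiving a string
instead of an integer) rather than at either module's internal logic.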
Validation Testing:
The following tests were conducted to test the validity of the software. The
validation succeeds when the software functions in a manner that can be reasonably
expected by the customer. The developed software underwent the following types of
validation testing and succeeded in all of them:
Alpha Testing
Beta Testing
System Testing:
Testing the entire system as a whole and checking for its correctness is system
testing. The system is tested for discrepancies between the system and its original
objectives. This project was effective and efficient.
Output Testing:
After performing system testing, the next step is output testing of the proposed
system, since no system can be useful if it does not produce the desired output in
the specified format. The outputs generated by the system under test were shown to
the users, who were asked about the format they require; the output format is
considered in two ways, one being the screen and the other the printed form.
User acceptance of a system is the key factor for the success of any system.
The system under consideration was tested for user acceptance by constantly keeping
in touch with the prospective system users at the time of development and making
changes whenever required. The following are the testing points:
Test case 1 – Check for date and time auto display. Expected result: the date and
time of the system must be displayed. Actual result: the date and time of the system
is displayed. Status: Success.
The enhancement of this project can be accomplished easily. That is, any new
functional capabilities can be added to the project by simply including the new
module in the homepage and giving a hyperlink to that module. Adapting this
project to a new environment can also be done easily.
Adaptive Maintenance
Conclusion
We introduced a novel path-diversity overlay retransmission architecture for IP-
multicast based multimedia applications. The readily available network utility
"Tracert" is used to help identify path-disjoint retransmission nodes, and
periodic probing is employed to adaptively and more accurately identify an overall
good retransmission node. A hybrid approach that exploits both has been shown to
achieve the best retransmission performance. Furthermore, to save probing
overhead, the receiver can use selective probing to probe only the subset of
candidate retransmission nodes that have the highest probability of being good
candidates.
Future Work
1. As future work, a heuristic approach to combine all mapping and adaptation
approaches according to historic data, multimedia characteristics and traffic
patterns will be investigated. Moreover, QoE2M will be evaluated through both
simulation and real experiments.
2. Future work will include more quantitative evaluation of the impact of the
proposed framework on the perceived quality of the multimedia applications
in both simulation and real networks.
3. The next step will be to study the interaction between several traffic flows
sharing the same link and having the same, or different, network behaviour –
elastic or inelastic. These realistic conditions will help us generalize the
conclusions we have drawn so far for separate applications. We also plan to
extend our area of interest to other network applications, such as web
browsing, video streaming and teleconferencing, for which UPQ is of
considerable importance.