
Section-A

Ans1)A) Software engineering is a layered technology. In this view, software engineering encompasses a process, technical methods and tools. Accordingly, we approach agent-based software engineering as a layered technology as well. This technology encompasses an agent-oriented software development process, agent-oriented methods and agent development tools. Of course, the basis of agent-based software engineering is quality. So, to develop high-quality, efficient agent-based systems in a timely and cost-effective manner, it is important to have software quality control and assurance techniques, an agent-oriented software development process, agent-oriented analysis and design methods, agent construction methods, agent languages, testing methods, and support methods and tools for agent-oriented processes and methods. Using this approach, researchers in the field of agent-based software engineering can organize existing work, identify gaps in it and focus on the further work that is needed. It is also a good framework for studying the pragmatic aspects of agent system development.

Ans1)B) Software quality measures how well software is designed (quality of design) and how well the software conforms to that design (quality of conformance), although several different definitions exist.

Whereas quality of conformance is concerned with implementation, quality of design measures how valid the design and requirements are in creating a worthwhile product.

Ans1)C) Clean room software engineering is a theory-based team-oriented process for development
and certification of high-reliability software systems under statistical quality control. A principal
objective of the Clean room process is development of software that exhibits zero failures in use. The
Clean room name is borrowed from hardware Clean rooms, with their emphasis on rigorous
engineering discipline and focus on defect prevention rather than defect removal. Clean room
combines mathematically based methods of software specification, design, and correctness
verification with statistical, usage-based testing to certify software fitness for use. Clean room
projects have reported substantial gains in quality and productivity.

The basic principles of the Clean room process are:

1. Software development based on formal methods

Clean room development makes use of the Box Structure Method to specify and design a software product. Verification that the design correctly implements the specification is performed through team review.
2. Incremental implementation under statistical quality control
Clean room development uses an iterative approach, in which the product is developed in increments
that gradually increase the implemented functionality. The quality of each increment is measured
against pre-established standards to verify that the development process is proceeding acceptably. A
failure to meet quality standards results in the cessation of testing for the current increment, and a
return to the design phase.
3. Statistically sound testing
Software testing in the Clean room process is carried out as a statistical experiment. Based on the
formal specification, a representative subset of software input/output trajectories is selected and tested.
This sample is then statistically analyzed to produce an estimate of the reliability of the software, and a
level of confidence in that estimate.
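
To make the statistical step concrete, here is a minimal sketch in Python (the sample counts and the normal-approximation confidence bound are illustrative assumptions, not part of the Cleanroom definition): each sampled usage trajectory either succeeds or fails, and the pass rate yields a reliability estimate with a confidence interval.

import math

def estimate_reliability(passed, total, z=1.96):
    """Estimate reliability from a usage-based test sample.

    passed -- number of sampled input trajectories that ran failure-free
    total  -- total number of trajectories drawn from the usage model
    z      -- normal quantile for the confidence level (1.96 ~ 95%)
    """
    p = passed / total                   # point estimate of reliability
    se = math.sqrt(p * (1 - p) / total)  # standard error (normal approx.)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical sample: 492 of 500 randomly selected trajectories succeed.
p, lo, hi = estimate_reliability(492, 500)
print(f"reliability ~= {p:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")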

Ans1)D) Commercial off-the-shelf (COTS), or simply off-the-shelf (OTS), computer software or hardware products are ready-made and available for sale, lease, or license to the general public. They are often used as alternatives to in-house development or one-off government-funded development. The use of COTS is being mandated across many government and business programs, as such products may offer significant savings in procurement and maintenance. However, since COTS software specifications are written externally, government agencies sometimes fear that future changes to the product will not be under their control.
Motivations for using COTS components include reduced overall system development time and costs (as components can be bought or licensed instead of being developed from scratch) and reduced maintenance costs.

Ans1)E) Factors Influencing Group Working

 Cultural factors, such as language, age, gender and others, can influence the mental health of group members.
 Traditional members (those adhering to native values) place great value on the unit: each individual has a clearly defined role and position in the organizational hierarchy and is expected to function within that role, submitting to the larger needs of the organization.
 Social stigma, shame, and saving face often prevent members from seeking behavioral health care.
 Members under distress are likely to express it as physical complaints.
 The average age of the people in the group.

Ans1)F) People Capability Maturity Model (short names: People CMM, PCMM, P-CMM) is a
maturity framework that focuses on continuously improving the management and development of the
human assets of an organization. It describes an evolutionary improvement path from ad hoc,
inconsistently performed practices, to a mature, disciplined, and continuously improving development
of the knowledge, skills, and motivation of the workforce that enhances strategic business
performance. The People CMM is a framework that helps organizations successfully address their
critical people issues. Based on the best current practices in fields such as human resources,
knowledge management, and organizational development, the People CMM guides organizations in
improving their processes for managing and developing their workforces. The People CMM helps
organizations characterize the maturity of their workforce practices, establish a program of
continuous workforce development, set priorities for improvement actions, integrate workforce
development with process improvement, and establish a culture of excellence.

CMMI is a process improvement approach that provides organizations with the essential elements of
effective processes that ultimately improve their performance. CMMI can be used to guide process
improvement across a project, a division, or an entire organization. It helps integrate traditionally
separate organizational functions, set process improvement goals and priorities, provide guidance for
quality processes, and provide a point of reference for appraising current processes.
CMMI addresses three different areas of interest:
 Product and service development (CMMI for Development model)
 Service establishment, management, and delivery (CMMI for Services model)
 Product and service acquisition (CMMI for Acquisition model)

Ans1)G) User interface design or user interface engineering is the design of computers,
appliances, machines, mobile communication devices, software applications, and websites with the
focus on the user's experience and interaction. The goal of user interface design is to make the user's
interaction as simple and efficient as possible, in terms of accomplishing user goals—what is often
called user-centered design. Good user interface design facilitates finishing the task at hand without
drawing unnecessary attention to itself. Graphic design may be utilized to apply a theme or style to
the interface without compromising its usability. The design process must balance technical
functionality and visual elements (e.g., mental model) to create a system that is not only operational
but also usable and adaptable to changing user needs.

Interface design is involved in a wide range of projects from computer systems, to cars, to
commercial planes; all of these projects involve much of the same basic human interaction yet also
require some unique skills and knowledge. As a result, designers tend to specialize in certain types of
projects and have skills centered around their expertise, whether that be software design, user
research, web design, or industrial design.

Ans1)H) Verification is a quality control process that is used to evaluate whether or not a product, service, or system complies with regulations, specifications, or conditions imposed at the start of a development phase. Verification can be performed during development, scale-up, or production. It is often an internal process.
Validation is the quality assurance process of establishing evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements. This often involves acceptance of fitness for purpose with end users and other product stakeholders.
It is sometimes said that validation can be expressed by the query "Are you building the right thing?" and verification by "Are you building the thing right?" "Building the right thing" refers back to the user's needs, while "building it right" checks that the specifications are correctly implemented by the system. In some contexts, it is required to have written requirements for both, as well as formal procedures or protocols for determining compliance.

SECTION-B
Ans 2) SPIRAL MODEL

The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts. Also known as the spiral lifecycle model (or spiral development), it is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive and complicated projects.

There are four phases in the "Spiral Model" which are: Planning, Evaluation, Risk Analysis and
Engineering. These four phases are iteratively followed one after other in order to eliminate all the
problems, which were faced in "The Waterfall Model". Iterating the phases helps in understating the
problems associated with a phase and dealing with those problems when the same phase is repeated
next time, planning and developing strategies to be followed while iterating through the phases. The
phases in "Spiral Model" are:
Plan: In this phase, the objectives, alternatives and constraints of the project are determined and are
documented. The objectives and other specifications are fixed in order to decide which
strategies/approaches to follow during the project life cycle.
Risk Analysis: This phase is the most important part of the "Spiral Model". In this phase all possible (and available) alternatives which can help in developing a cost-effective project are analyzed, and strategies for using them are decided. This phase has been added specifically to identify and resolve all possible risks in the project development. If the risks indicate any kind of uncertainty in the requirements, prototyping may be used to proceed with the available data and find a possible solution to deal with the potential changes in the requirements.
Engineering: In this phase the actual development of the project is carried out. The output of this phase is passed through all the phases iteratively in order to obtain improvements to it.
Customer Evaluation: In this phase the developed product is passed on to the customer in order to receive the customer's comments and suggestions, which can help in identifying and resolving potential problems/errors in the software developed. This phase is very similar to the TESTING phase.
The process progresses in a spiral to indicate the iterative path followed; progressively more complete software is built as we iterate through all four phases. The first iteration in this model is considered the most important, as in the first iteration almost all possible risk factors, constraints and requirements are identified, and in the next iterations all known strategies are used to bring up a complete software system. The radial dimension indicates the evolution of the product towards a complete system.

Disadvantages Of Spiral Model

 Requires considerable expertise in risk evaluation and reduction
 Complex and relatively difficult to follow strictly
 Applicable only to large systems
 Risk assessment could cost more than development
 Need for further elaboration of spiral model steps (milestones, specifications, guidelines and checklists)

Advantages Of Spiral Model

 The spiral model is a realistic approach to the development of large-scale software products
because the software evolves as the process progresses. In addition, the developer and the
client better understand and react to risks at each evolutionary level.
 The model uses prototyping as a risk reduction mechanism and allows for the development of
prototypes at any stage of the evolutionary development.
 It maintains a systematic stepwise approach, like the classic life cycle model, but incorporates it into an iterative framework that more closely reflects the real world.
 If employed correctly, this model should reduce risks before they become problematic, as technical risks are considered at all stages.

1. The spiral model is called a meta model because it is built from the features of the prototype model and the waterfall model.
2. The spiral model takes special care about risk analysis, whereas risk analysis is not given importance in the prototype model.

Ans3) In systems engineering and requirements engineering, a NON-FUNCTIONAL REQUIREMENT is a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. This should be contrasted with functional requirements that define specific behavior or functions.
In general, functional requirements define what a system is supposed to do whereas non-functional
requirements define how a system is supposed to be. Non-functional requirements are often called
qualities of a system. Other terms for non-functional requirements are "constraints", "quality
attributes", "quality goals" and "quality of service requirements". Qualities, that is, non-functional
requirements, can be divided into two main categories:

1. Execution qualities, such as security and usability, which are observable at run time.
2. Evolution qualities, such as testability, maintainability, extensibility and scalability, which
are embodied in the static structure of the software system.

Examples
A system may be required to present the user with a display of the number of records in a database.
This is a functional requirement. How up-to-date this number needs to be is a non-functional
requirement. If the number needs to be updated in real time, the system architects must ensure that the
system is capable of updating the displayed record count within an acceptably short interval of the
number of records changing.
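
A minimal sketch of that distinction in Python (the in-memory record list and the 200 ms freshness budget are illustrative assumptions): the first assertion checks what the system does, the second checks how well it does it.

import time

records = ["r1", "r2", "r3"]   # stand-in for a database table

def record_count():
    """Functional requirement: report the number of records."""
    return len(records)

# Non-functional requirement: the displayed count must be refreshed
# within an acceptably short interval (an illustrative 200 ms here).
start = time.perf_counter()
count = record_count()
elapsed_ms = (time.perf_counter() - start) * 1000

assert count == 3          # functional: the behavior is correct
assert elapsed_ms < 200    # non-functional: the quality constraint holds
print(f"count={count}, refreshed in {elapsed_ms:.2f} ms")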
In software engineering, a FUNCTIONAL REQUIREMENT defines a function of a software system or its component. A function is described as a set of inputs, the behavior, and outputs.
Functional requirements may be calculations, technical details, data manipulation and processing and
other specific functionality that define what a system is supposed to accomplish. Behavioral
requirements describing all the cases where the system uses the functional requirements are captured
in use cases. Functional requirements are supported by non-functional requirements (also known as
quality requirements), which impose constraints on the design or implementation (such as
performance requirements, security, or reliability). How a system implements functional requirements
is detailed in the system design.

As defined in requirements engineering, functional requirements specify particular results of a system. This should be contrasted with non-functional requirements, which specify overall characteristics such as cost and reliability. Functional requirements drive the application architecture of a system, while non-functional requirements drive the technical architecture of a system.

In some cases a requirements analyst generates use cases after gathering and validating a set of
functional requirements. Each use case illustrates behavioral scenarios through one or more functional
requirements. Often, though, an analyst will begin by eliciting a set of use cases, from which the
analyst can derive the functional requirements that must be implemented to allow a user to perform
each use case.

Process

A typical functional requirement will contain a unique name and number, a brief summary, and a
rationale. This information is used to help the reader understand why the requirement is needed, and
to track the requirement through the development of the system.

The crux of the requirement is the description of the required behavior, which must be clear and
readable. The described behavior may come from organizational or business rules, or it may be
discovered through elicitation sessions with users, stakeholders, and other experts within the
organization. Many requirements may be uncovered during the use case development. When this
happens, the requirements analyst may create a placeholder requirement with a name and summary,
and research the details later, to be filled in when they are better known.
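
One way to capture that structure is a simple record type; the sketch below is a hypothetical template (the field names are assumptions, not a standard format) showing the unique number, name, summary, rationale and placeholder status described above.

from dataclasses import dataclass

@dataclass
class FunctionalRequirement:
    """A typical requirement entry: unique number and name, summary, rationale."""
    number: str               # unique identifier used for tracking
    name: str
    summary: str
    rationale: str            # why the requirement is needed
    placeholder: bool = False # True while details are still being researched

req = FunctionalRequirement(
    number="FR-017",
    name="Display record count",
    summary="Show the user the number of records in the database.",
    rationale="Operators need to monitor data volume at a glance.",
    placeholder=True,         # details to be filled in after elicitation
)
print(req)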

Ans4) The V-model is a software development process which can be viewed as an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.
The V-model deploys a well-structured method in which each phase can be implemented by the detailed documentation of the previous phase. Testing activities like test designing start at the beginning of the project, well before coding, thereby saving a great deal of project time.

The Phases of the V-model

The V-model consists of a number of phases. The Verification Phases are on the left-hand side of the V, the Coding Phase is at the bottom of the V and the Validation Phases are on the right-hand side of the V.

VERIFICATION PHASES

Requirements analysis
In the Requirements analysis phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the ideal system has to perform. However, it does not determine how the software will be designed or built. Usually, the users are interviewed and a document called the user requirements document is generated.

The user requirements document will typically describe the system's functional, physical, interface, performance, data and security requirements, etc., as expected by the user. It is the document which the business analysts use to communicate their understanding of the system back to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase.

System Design
Systems design is the phase where system engineers analyze and understand the business of the
proposed system by studying the user requirements document. They figure out possibilities and
techniques by which the user requirements can be implemented. If any of the requirements are not
feasible, the user is informed of the issue. A resolution is found and the user requirement document is
edited accordingly.

The software specification document, which serves as a blueprint for the development phase, is generated. This document contains the general system organization, menu structures, data structures, etc. It may also hold example business scenarios, sample windows and reports for better understanding. Other technical documentation like entity diagrams and a data dictionary will also be produced in this phase. The documents for system testing are prepared in this phase.

Architecture Design

The phase of the design of computer architecture and software architecture can also be referred to as high-level design. The baseline in selecting the architecture is that it should realize all the requirements. The high-level design typically consists of the list of modules, brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration test design is carried out in this phase.

Module Design

The module design phase can also be referred to as low-level design. The designed system is broken up into smaller units or modules, and each of them is explained so that the programmer can start coding directly. The low-level design document or program specifications will contain detailed functional logic of the module, in pseudocode, including:

 database tables, with all elements, including their type and size
 all interface details with complete API references
 all dependency issues
 error message listings
 complete inputs and outputs for a module.

The unit test design is developed in this stage.
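
As an illustration, here is a fragment of the kind of program specification that list describes, written as Python-style pseudocode (the module, error codes and size limit are hypothetical):

ERRORS = {                 # error message listing
    "E01": "record too large",
    "E02": "duplicate key",
}

_table: dict = {}          # database table: key -> payload

def store_record(key: str, payload: bytes, max_size: int = 1024) -> str:
    """Interface: store one record; returns "OK" or an error code.

    Inputs : key (unique id), payload (raw bytes), max_size (bytes)
    Output : "OK" on success, otherwise a key into ERRORS
    """
    if len(payload) > max_size:
        return "E01"
    if key in _table:
        return "E02"
    _table[key] = payload
    return "OK"

print(store_record("k1", b"data"))   # -> OK
print(store_record("k1", b"data"))   # -> E02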

VALIDATION PHASES

Unit Testing
In the V-model of software development, unit testing implies the first stage of dynamic testing
process. According to software development expert Barry Boehm, a fault discovered and corrected in
the unit testing phase is more than a hundred times cheaper than if it is done after delivery to the
customer.
It involves analysis of the written code with the intention of eliminating errors. It also verifies that the code is efficient and adheres to the adopted coding standards. Testing is usually white box. It is done using the unit test design prepared during the module design phase. This may be carried out by software developers.
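
A minimal white-box unit test using Python's unittest module (the function under test is hypothetical; the V-model itself does not prescribe a framework). Each test case targets one internal branch, which is what makes it white box:

import unittest

def classify(n: int) -> str:
    """Toy unit under test, with one branch per expected outcome."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

class TestClassify(unittest.TestCase):
    # One test per internal path through classify().
    def test_negative_branch(self):
        self.assertEqual(classify(-5), "negative")

    def test_zero_branch(self):
        self.assertEqual(classify(0), "zero")

    def test_positive_branch(self):
        self.assertEqual(classify(7), "positive")

if __name__ == "__main__":
    unittest.main()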

Integration Testing

In integration testing the separate modules will be tested together to expose faults in the interfaces
and in the interaction between integrated components. Testing is usually black box as the code is not
directly checked for errors.

System Testing
System testing will compare the system specifications against the actual system. The system test
design is derived from the system design documents and is used in this phase. Sometimes system
testing is automated using testing tools. Once all the modules are integrated several errors may arise.
Testing done at this stage is called system testing.

User Acceptance Testing

Acceptance testing is the phase of testing used to determine whether a system satisfies the
requirements specified in the requirements analysis phase. The acceptance test design is derived from
the requirements document. The acceptance test phase is the phase used by the customer to determine whether to accept the system or not.

Ans5) Within an information system, software is a tool, and tools have to be selected for quality and
for appropriateness. But software is more than a tool. It dictates the performance of the system, and it
is therefore important to the system quality.
For any engineered product, there are many desired qualities relevant to a particular project, to be
discussed and determined at the time that the product requirements are determined. Qualities may be
present or absent, or may be matters of degree, with tradeoffs among them, with practicality and cost
as major considerations. All processes associated with software quality (e.g. building, checking,
improving quality) will be designed with these in mind and carry costs based on the design. Thus, it is
important to have in mind some of the possible attributes of quality.

Various researchers have produced models (usually taxonomic) of software quality characteristics or
attributes that can be useful for discussing, planning, and rating the quality of software products. The
models often include metrics to “measure” the degree of each quality attribute the product attains.

A software quality factor is a non-functional requirement for a software program which is not called
up by the customer's contract, but nevertheless is a desirable requirement which enhances the quality
of the software program. Note that none of these factors are binary; that is, they are not “either you
have it or you don’t” traits. Rather, they are characteristics that one seeks to maximize in one’s
software to optimize its quality. So rather than asking whether a software product “has” factor x, ask
instead the degree to which it does (or does not).

Some software quality factors are listed here:

Understandability – clarity of purpose. This goes further than just a statement of purpose; all of the
design and user documentation must be clearly written so that it is easily understandable. This is
obviously subjective in that the user context must be taken into account: for instance, if the software
product is to be used by software engineers it is not required to be understandable to the layman.

Completeness – presence of all constituent parts, with each part fully developed. This means that if the code calls a subroutine from an external library, the software package must provide reference to that library and all required parameters must be passed. All required input data must also be available.

Conciseness – minimization of excessive or redundant information or processing. This is important where memory capacity is limited, and it is generally considered good practice to keep lines of code to a minimum. It can be improved by replacing repeated functionality by one subroutine or function which achieves that functionality. It also applies to documents.
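
For instance, in this illustrative Python fragment the repeated strip-and-lowercase logic is folded into one function, improving conciseness without changing behavior:

# Before: the same normalization repeated at every call site.
name1 = "  Alice ".strip().lower()
name2 = "BOB  ".strip().lower()

# After: one function carries the shared functionality.
def normalize(name: str) -> str:
    """Single definition of the repeated logic."""
    return name.strip().lower()

name1, name2 = normalize("  Alice "), normalize("BOB  ")
print(name1, name2)   # alice bob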

Portability – ability to be run well and easily on multiple computer configurations. Portability can
mean both between different hardware—such as running on a PC as well as a smartphone—and
between different operating systems—such as running on both Mac OS X and GNU/Linux.

Consistency – uniformity in notation, symbology, appearance, and terminology within itself.

Maintainability – propensity to facilitate updates to satisfy new requirements. Thus the software
product that is maintainable should be well-documented, should not be complex, and should have
spare capacity for memory, storage and processor utilization and other resources.

Testability – disposition to support acceptance criteria and evaluation of performance. Such a characteristic must be built in during the design phase if the product is to be easily testable; a complex design leads to poor testability.

Usability – convenience and practicality of use. This is affected by such things as the human-computer interface. The component of the software that has most impact on this is the user interface (UI), which for best usability is usually graphical (i.e. a GUI).

Reliability – ability to be expected to perform its intended functions satisfactorily. This implies a time
factor in that a reliable product is expected to perform correctly over a period of time. It also
encompasses environmental considerations in that the product is required to perform correctly in
whatever conditions it finds itself (sometimes termed robustness).

Structuredness – organization of constituent parts in a definite pattern. A software product written in a block-structured language such as Pascal will satisfy this characteristic.

Efficiency – fulfillment of purpose without waste of resources, such as memory, space and processor
utilization, network bandwidth, time, etc.

Security – ability to protect data against unauthorized access and to withstand malicious or
inadvertent interference with its operations. Besides the presence of appropriate security mechanisms
such as authentication, access control and encryption, security also implies resilience in the face of
malicious, intelligent and adaptive attackers.

Ans6)a) THE TESTING PROCESS
Traditional CMMI or waterfall development model

A common practice of software testing is that testing is performed by an independent group of testers
after the functionality is developed, before it is shipped to the customer. This practice often results in
the testing phase being used as a project buffer to compensate for project delays, thereby
compromising the time devoted to testing. Another practice is to start software testing at the same moment the project starts and to continue it continuously until the project finishes.

Agile or Extreme development model

In this process, unit tests are written first, by the software engineers (often with pair programming in
the extreme programming methodology). Of course these tests fail initially, as they are expected to.
Then as code is written it passes incrementally larger portions of the test suites. The test suites are
continuously updated as new failure conditions and corner cases are discovered, and they are
integrated with any regression tests that are developed. Unit tests are maintained along with the rest of
the software source code and generally integrated into the build process (with inherently interactive
tests being relegated to a partially manual build acceptance process). The ultimate goal of this test
process is to achieve continuous deployment where software updates can be published to the public
frequently.
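
A sketch of that test-first rhythm in Python (the add() function is a hypothetical unit): the test is written before the code and fails until the implementation beneath it is supplied, and new corner cases are appended to the suite as they are discovered.

import unittest

# Step 1: the tests are written first; the suite fails while add() is absent.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_corner_case_negative(self):
        # Corner cases join the suite as they are discovered.
        self.assertEqual(add(-2, 2), 0)

# Step 2: just enough code is written to make the suite pass.
def add(a: int, b: int) -> int:
    return a + b

if __name__ == "__main__":
    unittest.main()   # typically wired into the automated build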

A sample testing cycle

Although variations exist between organizations, there is a typical cycle for testing. The sample
below is common among organizations employing the Waterfall development model.

 Requirements analysis: Testing should begin in the requirements phase of the software
development life cycle. During the design phase, testers work with developers in determining
what aspects of a design are testable and with what parameters those tests work.
 Test planning: Test strategy, test plan, test bed creation. Since many activities will be carried out
during testing, a plan is needed.
 Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to use in
testing software.
 Test execution: Testers execute the software based on the plans and tests and report any errors
found to the development team.
 Test reporting: Once testing is completed, testers generate metrics and make final reports on
their test effort and whether or not the software tested is ready for release.
 Test result analysis: Or Defect Analysis, is done by the development team usually along with the
client, in order to decide what defects should be treated, fixed, rejected (i.e. found software
working properly) or deferred to be dealt with later.
 Defect Retesting: Once a defect has been dealt with by the development team, it is retested by the testing team. Also known as resolution testing.
 Regression testing: It is common to have a small test program built of a subset of tests, for each
integration of new, modified, or fixed software, in order to ensure that the latest delivery has not
ruined anything, and that the software product as a whole is still working correctly.
 Test Closure: Once the test meets the exit criteria, the activities such as capturing the key
outputs, lessons learned, results, logs, documents related to the project are archived and used as a
reference for future projects.

SOFTWARE TESTING POLICIES


Design phase: Test Planning

There are various levels of testing that can be performed on a software product. There needs to be
plans in place to support this process. The software test plan document and test case specification
document should address:

1. TYPES OF TESTS
At which level (unit, integration, system, acceptance) and what type (white box, functional,
boundary, stress, performance ...) of tests will be created. Do not forget to include these basic
items:
o Unit tests should validate expected functionality at the level of individual classes or small group
of collaborating classes.
o Performance testing should be included initially for measurement/monitoring purposes (a minimal sketch follows this list). Once performance requirements must be met on delivered software, performance testing should be included in acceptance tests. This should include testing of scalability.
o Regression Tests
o Wherever possible, examples and example code that are distributed and offered to users should be
incorporated in the test suites, so that user examples are continuously validated for correctness.
o Tests should include memory leak checking.
2. TEST ENVIRONMENT, TOOLS AND TEST SOFTWARE
See what SPI proposes on the Sw-testing web pages. There you can find the test frameworks installed as SPI external software, how-to documents, templates for test case documentation, examples, etc.
3. RESOURCES AND SCHEDULE REQUIRED TO CREATE THE TESTS
o Unit tests should be written as the code is written.
4. RESOURCES AND SCHEDULE REQUIRED TO RUN THE TESTS
o All levels of testing (unit tests, component level tests, component interaction tests, system level
tests) should be run as part of automated nightly (and pre-release) builds
5. ENTRY AND EXIT CRITERIA FOR EACH PHASE OF TESTING
o Define those test failures that will invalidate a candidate release, and those failures that are allowed but should be fixed in the next release.
6. TEST CASES, TEST DATA AND EXPECTED RESULTS SPECIFICATION
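
The performance sketch promised under item 1 (illustrative Python; the operation, input sizes and the 1-second threshold are assumptions): during measurement/monitoring the timings are only recorded, while in acceptance testing a stated requirement becomes a pass/fail criterion.

import time

def query(n: int) -> int:
    """Stand-in for the operation whose performance is being measured."""
    return sum(range(n))

# Measurement/monitoring: record timings against growing input sizes
# to observe scalability.
for size in (10_000, 100_000, 1_000_000):
    start = time.perf_counter()
    query(size)
    print(f"n={size:>9}: {(time.perf_counter() - start) * 1000:8.2f} ms")

# Acceptance: once a performance requirement exists, it is asserted.
start = time.perf_counter()
query(1_000_000)
assert time.perf_counter() - start < 1.0, "performance requirement violated"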

Code Phase: Unit Tests

The Unit testing or component test, in which each unit (basic component) of software is tested to
verify that the detailed design for the unit has been correctly implemented, has to be performed
following these items:

1. All the unit tests corresponding to a specific package, Package_X in your $PROJECT, have to be placed in the Package_X/tests directory. If it does not exist, you should create one.
2. Under the tests directory you have to place the Build File, and you can create as many sub-directories as you like (at least one), but use names that hint at the test case in that particular subdirectory. Subdirectories hosting all the tests created with a specific framework are not allowed. The binary produced from code stored in one of these test subdirectories will appear with the name test_Package_X_test1 in the "$PLATFORM/tests/bin/" directory.

CODE PHASE: INTEGRATION TESTS

1. The integration tests will go into the Tests subdirectory under "src". There you can make as many
subdirectories as needed, again a good naming is required. Test libraries (useful when one or
more classes are shared between different tests) will go into Tests/Libraries.
2. In each test or lib subdirectory you have to create a "BuildFile" which defines the dependencies to
the relevant packages.
3. In each test directory you have to create a "OvalFile" to run automatically your tests. The needed
reference file should be placed at the same level and not inside the "src" directory where the
sources of the tests are.
4. If input data is needed for your tests, it must be at the same level as the BuildFile, inside the specific test directory.
5. For testing-related purposes, and for input data shared by different test cases, we propose that the input data files should be stored in:
CVS/project/src/Tests/data

CODE PHASE: BUG TRACKING AND REGRESSION TESTS

- As written in the test plan template, it is the responsibility of the developers and of the Sw-infrastructure and testing team (depending on the type of bug) to perform the bug tracking and to convert into tests all those bugs suitable for use in the regression testing scheme.
- The corresponding bug number information from Savannah must appear in the Oval File on the line where you write the executable name:

<executable name="TestCase-name"> BUGFIX:number1,number2,number3 </executable>

- If you need to specify more, you can do so in the test case description file.

TEST ENVIRONMENTS AND TOOLS

 The test environments supported by SPI are: CppUnit, PyUnit and Oval.
 The specific test frameworks mentioned above must be run through QMTest.
 The configuration files needed to run QMTest must be placed in the "qmtest" subdirectory.
 All the test messages that the developer wants to output in the test code must be in the
"stdout". The "stderr" is reserved for the integration of different test frameworks under
QMTest.
 The normal return code of the test programs and examples must be "RC=0".
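
A minimal test program following those conventions (hypothetical test body; the actual CppUnit/PyUnit/Oval integration under QMTest is SPI-specific and not reproduced here): messages go to stdout, stderr is left to the framework, and the process exits with return code 0 on success.

import sys

def run_test() -> bool:
    """Hypothetical test case standing in for a framework-managed test."""
    result = 2 + 2
    print(f"test_addition: got {result}, expected 4")  # messages -> stdout
    return result == 4

if __name__ == "__main__":
    ok = run_test()
    # stderr is reserved for the test-framework integration under QMTest,
    # so the test itself writes nothing there.
    sys.exit(0 if ok else 1)   # normal return code must be RC=0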

Ans6)b) STRUCTURAL AND FUNCTIONAL TESTING

Structural testing is considered white-box testing because knowledge of the internal logic of the
system is used to develop test cases. Structural testing includes path testing, code coverage testing
and analysis, logic testing, nested loop testing, and similar techniques. Unit testing, string or
integration testing, load testing, stress testing, and performance testing are considered structural.

Functional testing addresses the overall behavior of the program by testing transaction flows, input
validation, and functional completeness. Functional testing is considered black-box testing because
no knowledge of the internal logic of the system is used to develop test cases. System testing,
regression testing, and user acceptance testing are types of functional testing.

Both methods together validate the entire system. For example, a functional test case might be taken
from the documentation description of how to perform a certain function, such as accepting bar code
input.

A structural test case might be taken from a technical documentation manual. To effectively test
systems, both methods are needed. Each method has its pros and cons, which are listed below:

STRUCTURAL TESTING
Advantages
- The logic of the software’s structure can be tested.
- Parts of the software will be tested which might have been forgotten if only functional testing was
performed.

Disadvantages
- Its tests do not ensure that user requirements have been met.
- Its tests may not mimic real-world situations.

FUNCTIONAL TESTING
Advantages
- Simulates actual system usage.
- Makes no system structure assumptions.

Disadvantages
- Potential of missing logical errors in software.
- Possibility of redundant testing.
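
Both viewpoints can be seen on one small function (an illustrative Python example): the structural cases are chosen to cover each internal branch, while the functional case is derived from the documented behavior alone.

def discount(total: float) -> float:
    """Documented behavior: orders of 100 or more get 10% off."""
    if total >= 100:           # internal branch known only to white-box tests
        return total * 0.9
    return total

# Structural (white-box): one case per path through the code.
assert discount(100) == 90.0   # exercises the >= 100 branch
assert discount(50) == 50.0    # exercises the fall-through branch

# Functional (black-box): taken from the documentation description.
assert discount(200) == 180.0  # "orders of 100 or more get 10% off"
print("all tests passed")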

Ans7) In the AD phase, the software requirements are transformed into definitions of software
components and their interfaces, to establish the framework of the software. This is done by
examining the SRD and building a 'physical model' using recognized software engineering methods.
The physical model should describe the solution in concrete, implementation terms. Just as the logical
model produced in the SR phase structures the problem and makes it manageable, the physical model
does the same for the solution. The physical model is used to produce a structured set of component
specifications that are consistent, coherent and complete. Each specification defines the functions, inputs and outputs of the component.
The major software components are documented in the Architectural Design Document (ADD). The ADD gives the developer's solution to the problem stated in the SRD. The ADD must cover all the requirements stated in the SRD (AD20), but avoid detailed consideration of software requirements that do not affect the structure.
The main outputs of the AD phase are the:
 Architectural Design Document (ADD);
 Software Project Management Plan for the DD phase (SPMP/DD);
 Software Configuration Management Plan for the DD phase (SCMP/DD);
 Software Verification and Validation Plan for the DD Phase (SVVP/DD);
 Software Quality Assurance Plan for the DD phase (SQAP/DD);
 Integration Test Plan (SVVP/IT).
Progress reports, configuration status accounts and audit reports are also outputs of the phase. These
should always be archived.
DISTRIBUTING A SYSTEM OVER SEVERAL PROCESSING NODES MEANS, IN GENERAL, THAT (BENEFITS):

 things happen concurrently, so you can get more done
 by replicating functionality, you can provide a more robust service
 users and other actors in different locations can readily be served
 not all the functional nodes need be up at once
 diverse systems may be connected

and (obstacles to design):

 things happen concurrently, admitting interference, deadlocks, etc.
 different nodes may get out of sync
 communication between the nodes may be unreliable, slow, and erratic
 not all the nodes may be up at once
 the execution environments running the distributed parts may differ (different languages, operating systems, performance, resources available, ...)

Distributed systems can be open or closed or in-between. Very open systems like the Internet have
no-one in control: broad standards of communication are set, which are often extended for particular
purposes by different groups of users. In a closed system, the designers have control over the whole
lot, though they usually want it to be readily extensible: the different parts of an aircraft control
system are an example.
Besides the Internet, various large systems (some of which are built atop the Internet) fall into the
open category: for example, a system (described by XXXX at OOPSLA98) for retrieving medical
records from the patient's home medico and various places he may have been treated, to a hospital
anywhere in the world.

OO DESIGN Object-oriented programming (OOP) is a programming paradigm that uses "objects" – data structures consisting of data fields and methods together with their interactions – to design applications and computer programs. Programming techniques may include features such as information hiding, data abstraction, encapsulation, modularity, polymorphism, and inheritance.

An object-oriented program may thus be viewed as a collection of interacting objects, as opposed to the conventional model, in which a program is seen as a list of tasks (subroutines) to perform. In
OOP, each object is capable of receiving messages, processing data, and sending messages to other
objects and can be viewed as an independent 'machine' with a distinct role or responsibility. The
actions (or "operators") on these objects are closely associated with the object.

Nevertheless, the following are widely identified as fundamental features that are found in most object-oriented languages and that, in concert, support the OOP programming style:

 Dynamic dispatch – when a method is invoked on an object, the object itself determines what
code gets executed by looking up the method at run time in a table associated with the object.
This feature distinguishes an object from an abstract data type (or module), which has a fixed
(static) implementation of the operations for all instances. It is a programming methodology
that gives modular component development while at the same time being very efficient.
 Encapsulation (or multi-methods, in which case the state is kept separate)
 Subtype polymorphism
 Object inheritance (or delegation)
 Open recursion – a special variable (syntactically it may be a keyword), usually called this or
self, that allows a method body to invoke another method body of the same object. This
variable is late-bound; it allows a method defined in one class to invoke another method that is
defined later, in some subclass thereof.
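
These features can be seen together in a few lines of Python (illustrative classes, not from the original text): describe() relies on open recursion through self, area() is dynamically dispatched, and Circle and Square inherit from Shape as polymorphic subtypes.

class Shape:
    def describe(self) -> str:
        # Open recursion: self.area() is late-bound, so this method can
        # invoke a method defined later, in some subclass.
        return f"{type(self).__name__} with area {self.area():.2f}"

    def area(self) -> float:
        raise NotImplementedError

class Circle(Shape):               # inheritance
    def __init__(self, r: float):
        self._r = r                # encapsulated state

    def area(self) -> float:
        return 3.14159 * self._r ** 2

class Square(Shape):
    def __init__(self, side: float):
        self._side = side

    def area(self) -> float:
        return self._side ** 2

# Subtype polymorphism + dynamic dispatch: each object looks up its own
# area() implementation at run time.
for shape in (Circle(1.0), Square(2.0)):
    print(shape.describe())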

REAL-TIME SOFTWARE

Here the software team comes to understand the system that is being designed. The team also reviews the proposed hardware architecture and develops a very basic software architecture. This architecture definition will be further refined in Co-Design.

Use cases are also used at this stage to analyze the system. Use cases are used to understand the interactions between the system and its users. For example, use cases for a telephone exchange would specify the interactions between the telephone exchange, its subscribers and the operators who maintain the exchange.

USER INTERFACE DESIGN OR USER INTERFACE ENGINEERING is the design of computers, appliances, machines, mobile communication devices, software applications, and websites with the focus on the user's experience and interaction. The goal of user interface design is to make
with the focus on the user's experience and interaction. The goal of user interface design is to make
the user's interaction as simple and efficient as possible, in terms of accomplishing user goals—what
is often called user-centered design. Good user interface design facilitates finishing the task at hand
without drawing unnecessary attention to itself. Graphic design may be utilized to apply a theme or
style to the interface without compromising its usability. The design process must balance technical
functionality and visual elements (e.g., mental model) to create a system that is not only operational
but also usable and adaptable to changing user needs.

Interface design is involved in a wide range of projects from computer systems, to cars, to
commercial planes; all of these projects involve much of the same basic human interaction yet also
require some unique skills and knowledge. As a result, designers tend to specialize in certain types of
projects and have skills centered around their expertise, whether that be software design, user
research, web design, or industrial design.

Ans 8) Why Software Projects Tend to Fail:

1. The maturity of the software engineering field
o The software engineering field is much younger than the other engineering fields and will, in time, become more stable.
o The field is young and therefore most of the field's engineers and managers are also young. Young people have less experience and therefore tend to fail more.
o Young people are more optimistic and tend to estimate badly.
2. Shortage of knowledge base
As a relatively young engineering field, software engineering is short of accumulative knowledge
bases. For example, the famous gang of four book "Design Patterns: Elements of Reusable
Object-Oriented Software" was first published in late 1994. The book suggests design patterns to
common software design problems and it is one of the famous knowledge base materials in the
software engineering field. "Software engineering has evolved steadily from its founding days in
the 1940s", but it is still short of accumulative knowledge base as opposed to other engineering
fields.
3. Software is not tangible
As opposed to other engineering fields like civil engineering, the software engineering building
blocks are much less tangible and therefore hard to measure and estimate. "Software is abstract"
4. Competition: harsh deadlines
The competition in the software industry is harsh. The Time-To-Market (TTM) is crucial and the drive to meet harsh deadlines is enormous. This pressure is compounded by methodological anomalies like "Code first; think later" and "Plan to throw one away; you will, anyhow." The hard competition in the software industry causes not only the need to deliver ASAP, but also the requirement to catch as many potential customer eyes as possible.
5. Technology changes rapidly
"Software development technologies change faster than other construction technologies."(2)
Until recently, Microsoft was frequently bombarding the industry with new technologies. Rapid technology changes introduce liability for software manufacturers. For example, new operating systems obligate a company like Ahead to release a new, adapted version of Nero.
6. Change is tempting
A building architect will not decide to add additional floors during the building's construction. The result would be dreadful, as the building's foundations were not constructed for it. The software architect's hand, however, will be much looser on the trigger. Irresponsible changes like adding new features and redefining existing ones may cause deadline violations and/or bad planning and coding (patches). Given the harsh competition (see item 4), it looks like changes are inevitable.
7. Bad time management
Estimates of development time should correlate with the employees ("resources") on hand. In some cases, managers estimate and then enforce a timetable as if they were the ones who were going to do the developing. This type of enforcement puts pressure on development and may harm it. Moreover, violating deadlines under these conditions is common.
8. Bad or no managing skills
It is common for software managers to have been excellent, successful and professional software engineers. Unfortunately, the skills required for successful managing are not the same. Great engineering skills do not guarantee great managing, and newborn software managers often do not receive the right guidance, or any at all.
9. Wrong or no Software Development Life Cycle (SDLC) methodology
A development life cycle methodology must be part of software project management. Nevertheless, it should not be forced onto the R&D environment. The software engineering field is relatively young, but there are already well-known development life cycle methodologies (Agile, Crystal, Spiral, Waterfall, etc.), success stories and case studies. Software project managers may adopt one of the existing methodologies, but usually there is also a need to adapt the methodology to the company at hand.
10. Bad or no documentation
Documentation should be considered a "must have", not a "nice to have". Documentation is an integral part of the development life cycle process. It should not be seen as a nagging, tedious task done for the sake of some strict QA manager. There are various types of software project documentation, each related to a certain stage in the development life cycle of the project. For example:

o Statement of Work: preliminary requirements and project frame, usually written by the
customer
o Marketing Requirements Document (MRD)
o Software Requirements Specifications (SRS)
o High Level Design (HLD), written by R&D
o Low Level Design, written by the R&D
o Project Schedule
o Software Quality Assurance Plan (SQA)

There are many templates and different names for the above documentation. Nevertheless, the important thing is that their existence requires the position holder to think before working on the project. The documents need to be stored and updated during the life of the project, just as is done for source code (an out-of-date document is a bug). A badly written or missing MRD or SRS document can cause project failure (see item 11, bad SRS document).

11. Bad or no software requirements

As bizarre as it sounds, in some software projects SRSs (Software Requirement Specifications) do not exist or are badly written. There are many types of SRS formats, and even if there were only one common template, the content would vary from company to company.

12. Lack of testing


o Those who develop the software should not test it. The developer should run unit testing, but it is not a replacement for objective QA testing.
o Testing only at the end of a long milestone raises problems, due to the load of testing and inherent problems that should have been caught at earlier stages. Moreover, managers tend to rush the testing period at the end of the milestone in order to release on time.
o QA that does not bite and has no real power does not have the right effect on the R&D department, and therefore on the project itself.
o QA should start as soon as the software project starts; that is, even in the planning stage. QA participation in early stages is important for its preparation for the software. For example, QA should also check the SRS document and make sure that the software was implemented according to it.
13. Poor communication among the "Holy Triangle:" customers, R&D and marketing

The "Holy Triangle," as I define it, describes the important relationships between the customers,
marketing and R&D. As seen in the picture below, the marketing side combines the Customer
and R&D. Marketing interviews the customers and picks at their needs constantly. Then it brings
the important knowledge to the R&D department. Strange as it may seem, in some commercial
software companies, the customer requirements and needs are not gathered. This anomaly can
happen if the company suffers from "Hero base project." In this case, a certain persona, usually
the company CTO, enforces the project requirements without taking into consideration the market
and the customer's real needs. The result of this behavior might be the creation of software that
lacks the market needs and, in time, is a project failure.

14. No version / source control

Surprising as it may sound, some software projects are not backed up in source control. Sources get lost, versions cannot be regained, and products on the customer's side cannot be reconstructed.

15. No or bad risk management


A project risk is an uncertain event or condition that has consequences for the project. The purpose of risk management is to identify, analyze, and respond to project risks. Given the above items and the fact that software projects tend to fail, it would be absurd not to manage risks. The Risk Management Document is designed, above all, to force the software company to think about what can go wrong. The thinking process itself can solve problems before they even happen. Examples of risks:

o Incomplete or badly written requirements
o Choosing a technology that is not known by the current developers
o Relying on complex third-party software