Testing and Implementation of Information Systems

Objectives

To achieve an information systems implementation that supports sustainable business management and helps the business reach its vision and primary targets, with efficient and effective business processes.

[Diagram: an information system built on leader commitment (e-Leadership), hardware (H/W), software (S/W), network (N/W), information (INF.), and human resources (HR), delivering services whose quality, continuity, correctness, and continual improvement must be maintained.]

General Controls

Areas of general control:

- Management
- Hardware
- Software
- System support
- Database
- Networking
- Logical security
- Operations
- Continuity
- Physical security
- Systems development

General Controls: Management

- Organization
- Planning
- Training
- Security
- Resource management
- Facilities
- Operations

General Controls: Hardware

Hardware interaction:

- Processors: mainframes, desktops, laptops
- Input devices: tape, CD, scanner, smart card reader, RFID, etc.
- Output devices

General Controls: Software

Systems interactions:

- Operating systems
- Applications
- Integrated systems
- Utilities

Software functions:

- Application processing
- Data management
- Transaction processor
- Communications
- Online control
- Access control
- Operating system functions

General Controls: Program Integrity

Program integrity:

- Testing
- Access
- Maintenance

General Controls: System Support

- Primary objective
- Control environment
- Centralized vs. decentralized
- Control requirements

General Controls: Database

- Defined: logical view and physical view
- Products: how they work, security
- Access control and authentication
- Backup and contingency plan

Information Network

General Controls: Networking

Basics of networking:

- Topologies: star, bus, hybrid
- Protocols: TCP/IP, PPP, …
- Components: router, switch, firewall
- Internal connectivity: VPN
- Performance: speed, bandwidth, CIR

General Controls: Network Configurations

- Centralized
- Decentralized
- Distributed
- Client-server

General Controls: Networking

Internet, intranet, and extranet technologies:

- Services
- Objectives
- Protection steps
- Firewall configuration
- Distributed executables
- "Cookies"
- Digital signatures
- External connectivity
- Website issues

General Controls: Physical Security

- Physical protections
- Practices
- Facilities

General Controls: Logical Security

- Policies
- Passwords
- Practices

General Controls: Operations

- Data center operations
- Scheduling
- Media management
- Production environment

General Controls: Programming

System programming:

- Critical functions
- Controlling functions
- Make the system programmer your friend
- Program maintenance
- Roles of the programmer

Application programming

General Controls: Continuity Planning

- Proper planning
- Scenarios
- Sufficient resources and commitment
- Human side
- Media management: inventory, processes, onsite and offsite verification

General Controls: Continuity Planning

- Analysis of threats
- Analysis of processes
- Identifying and considering alternatives
- Determining and developing the contingency plan
- Documentation supporting the plan
- Testing the plan

IT Contingency Planning Process

To develop and maintain an effective IT contingency plan, organizations should use the following approach:

- Develop the contingency planning policy statement
- Conduct the business impact analysis (BIA)
- Identify preventive controls
- Develop recovery strategies
- Develop an IT contingency plan
- Plan testing, training, and exercises
- Plan maintenance

Testing Contingency Plan

Stage 1 - Senior Staff Review
The senior staff selects an internally-publicized date and time to review all contingency plans. Aside from ensuring overall business soundness, this review also serves to recognize people who have thoughtfully completed their assignment. Knowledge of a firm date for a senior staff review will increase quality, accuracy and timeliness.


Stage 2 - Interdepartmental Reviews
Each department should review another department’s plans. The goal of this stage is to find bottlenecks, identify conflicts and allocate resources. If possible, departments that are "downstream" in the business process can review the plans of "upstream" departments.

Testing Contingency Plan

Stage 3 - Failures in Critical Systems
This testing can be localized within departments. It involves simulating system or vendor failures. You don't actually have to shut down critical equipment or processes - you can role-play a "what if" scenario. You can either run a "surprise" drill or plan a role-playing event for a specific time.


Stage 4 - The Real Deal
This testing involves short-term shutdowns in key areas. If possible, these tests should be conducted in a real-time environment. The goal, of course, is to fully test the contingency plan. Concentrate this last phase of testing only on areas that have a high business priority and a high risk for failure.

Software Testing Techniques

Strategy:

A strategy for software testing integrates software test case design techniques into a well-planned series of steps that result in the successful construction of software.

Common Characteristics of Software Testing Strategies

- Testing begins at the module level and works outward toward the integration of the entire system.
- Different testing techniques are appropriate at different points in time.
- Testing is conducted by the developer of the software and, for large projects, by an independent test group.
- Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

Validation and Verification (V&V)

Validation (Product Oriented)

Validation is concerned with whether the right functions of the program have been properly implemented, and whether those functions produce the correct output for a given input value.

Verification (Process Oriented)

Verification involves checking whether the program conforms to its specification, i.e., whether the right tools and methods have been employed. It thus focuses on process correctness.

Software Quality Assurance Involvement

Seven Principles of Software Testing

- To test a program is to try to make it fail.
- Tests are no substitute for specifications.
- Regression testing: any failed execution must yield a test case that remains a permanent part of the project's test suite.
- Applying oracles: determining the success or failure of tests must be an automatic process (a minimal sketch follows this list).
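As a minimal illustration of the regression-testing and oracle principles, the sketch below uses Python's built-in unittest framework. The parse_amount function and the failure it captures are hypothetical; the point is that every test carries its own automatic pass/fail decision (the oracle) and, once added after a failed execution, stays in the suite permanently.

import unittest

def parse_amount(text):
    # Hypothetical unit under test: convert a string such as "1,250.75" to a float.
    return float(text.replace(",", ""))

class ParseAmountRegressionTests(unittest.TestCase):
    def test_plain_number(self):
        # The assertion is the oracle: pass/fail is decided automatically.
        self.assertEqual(parse_amount("42"), 42.0)

    def test_thousands_separator(self):
        # Added after a (hypothetical) failed execution; it remains in the
        # suite so the same fault cannot silently return.
        self.assertEqual(parse_amount("1,250.75"), 1250.75)

if __name__ == "__main__":
    unittest.main()

Running the file re-executes the whole suite, so the regression check is repeatable and fully automatic.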

Seven Principles of Software Testing (cont.)

- Manual and automatic test cases: an effective testing process must include both manually and automatically produced test cases.
- Empirical assessment of testing strategies: evaluate any testing strategy, however attractive in principle, through objective assessment using explicit criteria in a reproducible testing process.
- Assessment criteria: a testing strategy's most important property is the number of faults it uncovers as a function of time.

 

Testing in Stages (from Low Level to High Level)

Except for small programs, systems should not be tested as a single unit.

Large systems are built out of sub-systems, which are built out of modules that are composed of procedures and functions. The testing process should therefore proceed in stages, where testing is carried out incrementally in conjunction with system implementation. The most widely used testing process consists of five stages:

- Component testing: unit testing and module testing
- Integration testing: sub-system testing and system testing
- User testing: acceptance testing

Verification (Process Oriented)

White Box Testing Techniques (Tests that are derived from knowledge of the program’s structure and implementation)

The stages in the testing process are as follows:

Unit testing: (Code Oriented)
Individual components are tested to ensure that they operate correctly. Each component is tested independently, without other system components.


Module testing:
A module is a collection of dependent components such as an object class, an abstract data type or some looser collection of procedures and functions. A module encapsulates related components so it can be tested without other system modules.

Testing Process

[Diagram: the five testing stages - unit testing, module testing, sub-system testing, system testing, and acceptance testing - grouped into component testing, integration testing, and user testing.]

The stages in the testing process (cont.)

Sub-system testing (Integration Testing, Design Oriented):

This phase involves testing collections of modules which have been integrated into sub-systems. Sub-systems may be independently designed and implemented. The most common problems that arise in large software systems are sub-system interface mismatches. The sub-system test process should therefore concentrate on detecting interface errors by rigorously exercising these interfaces.

System testing:

The sub-systems are integrated to make up the entire system. The testing process is concerned with finding errors that result from unanticipated interactions between sub-systems and system components. It is also concerned with validating that the system meets its functional and non-functional requirements.

The stages in the testing process (cont.)

Acceptance testing:

This is the final stage in the testing process before the system is accepted for operational use. The system is tested with data supplied by the system client rather than simulated test data. Acceptance testing may reveal errors and omissions in the system requirements definition (user oriented), because real data exercises the system in different ways from the test data. Acceptance testing may also reveal requirements problems where the system facilities do not really meet the users' needs (functional) or the system performance (non-functional) is unacceptable.

The stages in the testing process (cont.)

 

Acceptance testing is sometimes called alpha testing. Bespoke systems are developed for a single client. The alpha testing process continues until the system developer and the client agree that the delivered system is an acceptable implementation of the system requirements. When a system is to be marketed as a software product, a testing process called beta testing is often used. Beta testing involves delivering a system to a number of potential customers who agree to use that system and report problems to the system developers. This exposes the product to real use and detects errors that may not have been anticipated by the system builders. After this feedback, the system is modified and either released for further beta testing or for general sale.

Testing Strategies

- Top-down testing: testing starts with the most abstract component and works downwards.
- Bottom-up testing: testing starts with the fundamental components and works upwards.
- Thread testing: used for systems with multiple processes, where the processing of a transaction threads its way through these processes.
- Stress testing: relies on stressing the system by going beyond its specified limits, and hence tests how well the system copes with overload situations.

Testing Strategies

- Back-to-back testing: used when multiple versions of a system are available; the versions are run on the same inputs and their outputs are compared (a small sketch follows this list).
- Performance testing: used to test the run-time performance of software.
- Security testing: attempts to verify that the protection mechanisms built into the system will protect it from improper penetration.
- Recovery testing: forces the software to fail in a variety of ways and verifies that recovery is properly performed.
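A minimal sketch of back-to-back testing in Python, under the assumption that two versions of the same routine are available; the two average implementations below are hypothetical stand-ins for an old and a refactored version. Both are run on identical inputs and their outputs are compared.

# Back-to-back testing sketch: run two versions of the same function on
# the same inputs and report any input where the outputs differ.

def average_v1(values):
    return sum(values) / len(values)

def average_v2(values):
    # Hypothetical refactored version under test.
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

test_inputs = [[1, 2, 3], [10.5, 0.5], [7], [2, 2, 2, 2]]

for data in test_inputs:
    out1, out2 = average_v1(data), average_v2(data)
    if abs(out1 - out2) > 1e-9:          # tolerance for floating-point noise
        print(f"MISMATCH for {data}: {out1} vs {out2}")
    else:
        print(f"OK for {data}: {out1}")

Any mismatch points to a behavioral difference introduced by the newer version.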

Testing Strategies

Large systems are usually tested using a mixture of these strategies rather than any single approach. Different strategies may be needed for different parts of the system and at different stages in the testing process. Whatever testing strategy is adopted, it is always sensible to take an incremental approach to sub-system and system testing. Rather than integrating all components into a system and then starting testing, the system should be tested incrementally. Each increment should be tested before the next increment is added to the system. This process should continue until all modules have been incorporated into the system. When a module is introduced at some stage in this process, tests that were previously unsuccessful may now detect defects. These defects are probably due to interactions with the new module. The source of the problem is localized to some extent, thus simplifying defect location and repair.

Testing stages mapped to the development phases they exercise:

- Unit testing (Coding): focuses on each module and whether it works properly; makes heavy use of white-box testing.
- Integration testing (Design): centered on making sure that each module works with the other modules; comprises two kinds, top-down and bottom-up integration. Alternatively, it focuses on the design and construction of the software architecture and makes heavy use of black-box testing. (Either answer is acceptable.)
- Validation testing (Analysis): ensuring conformity with requirements.
- Systems testing (Systems Engineering): making sure that the software product works with the external environment, e.g., the computer system and other software products.

   

Drivers and Stubs

- Driver: a dummy main program
- Stub: a dummy sub-program

These are needed because the modules are not yet stand-alone programs; drivers and/or stubs have to be developed to test each unit, as sketched below.
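The Python sketch below shows what a driver and a stub can look like for a single unit; the billing function and the exchange-rate service it depends on are hypothetical. The stub replaces the not-yet-available sub-program, and the driver plays the role of the dummy main program that feeds test data to the unit and checks the results.

# Unit under test: depends on a rate provider that may not exist yet.
def convert_total(amount, currency, rate_provider):
    rate = rate_provider.get_rate(currency)
    return round(amount * rate, 2)

# Stub: a dummy sub-program standing in for the real exchange-rate service.
class ExchangeRateStub:
    def get_rate(self, currency):
        return {"USD": 1.0, "IDR": 0.000065}[currency]

# Driver: a dummy main program that exercises the unit and checks the output.
def run_driver():
    stub = ExchangeRateStub()
    assert convert_total(100, "USD", stub) == 100.0
    assert convert_total(1_000_000, "IDR", stub) == 65.0
    print("all unit checks passed")

if __name__ == "__main__":
    run_driver()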

Responsibility for testing between the project team and Software Quality Assurance (SQA)

- Unit testing is the responsibility of the development team.
- System testing is the responsibility of SQA.
- User acceptance testing is the responsibility of the user representatives team.
- Technology compliance testing is the responsibility of the Systems Installation & Support Group.

Web Application Testing

Does the user interface promote usability?

- Navigation
- Interaction
- Abstract interface design and implementation

Are the aesthetics of the web application appropriate for the application domain and pleasing to the user?

Web Application Testing

- Is the content designed in a manner that imparts the most information with the least effort?
- Is navigation efficient and straightforward?
- Does the web application architecture give the user structured content and function, and a flow of navigation that supports efficient use?
- Are components designed in a manner that reduces complexity and enhances correctness, reliability, and performance?
- Security testing: SQL injection (a small sketch follows this list)
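A minimal sketch of a SQL injection check in Python using the standard library's sqlite3 module; the users table and lookup function are hypothetical. The test submits a classic ' OR '1'='1 payload and expects it to return no rows, which holds here because the query binds the input as a parameter instead of concatenating it into the SQL string.

import sqlite3

# Hypothetical application code under test: looks up a user with a bound
# parameter, so the payload is treated as data rather than as SQL.
def find_user(conn, username):
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

# Test fixture: an in-memory database with one known user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Security test: the injection payload must not match any row.
payload = "alice' OR '1'='1"
assert find_user(conn, payload) is None, "SQL injection payload was not neutralized"
assert find_user(conn, "alice") == ("alice",)
print("SQL injection check passed")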

Representative Tools: Technical Metrics for Web Apps

- NetMechanic tools: improve web performance (www.netmechnic.com)
- NIST Web Metrics Testbed (zing.ncsl.nist.gov/WebTools/):
  - Web Static Analyzer Tool (WebSAT)
  - Web Category Analysis Tool (WebCAT)
  - Web Variable Instrumenter Program (WebVIP): captures a log of user interaction
  - Framework for Logging Usability Data (FLUD)
  - VisVIP: visualization of user navigation paths
  - TreeDec: adds navigation aids to the web site

Web Security

- Firewall
- Authentication
- Encryption
- Authorization

Performance Testing

- Response time
- Rate of degradation that is unacceptable
- Which component is responsible for the performance degradation
- Impact of degradation on security
- What happens when the load exceeds maximum capacity

Two forms: load testing and stress testing (a small response-time sketch follows).
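As a small illustration of load testing against response time, the sketch below (Python standard library only) sends a batch of concurrent requests to a hypothetical endpoint and reports the mean and maximum response times; the URL and the number of simulated users are assumptions for the example, not values from the source.

# Minimal load-testing sketch: measure response time for N concurrent users.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"   # hypothetical endpoint under test
CONCURRENT_USERS = 20                   # assumed number of simulated users

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    durations = list(pool.map(timed_request, range(CONCURRENT_USERS)))

print(f"max response time:  {max(durations):.3f}s")
print(f"mean response time: {sum(durations) / len(durations):.3f}s")

Stress testing pushes the same idea past the specified limits (more users, larger payloads) to see where and how the system degrades.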

Load Testing

P = N x T x D (a small worked example follows this list), where P is the overall throughput the server must sustain and:

- N: number of concurrent users
- T: number of online transactions per user per unit time
- D: data load processed by the server per transaction
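A tiny worked example of the formula; all three input values are illustrative assumptions, not figures from the source.

# Worked example of P = N x T x D (illustrative values only).
N = 200    # concurrent users
T = 4      # online transactions per user per unit time (e.g., per minute)
D = 0.5    # data processed by the server per transaction, in MB

P = N * T * D   # overall throughput the server must sustain
print(f"P = {P} MB per minute")   # -> P = 400.0 MB per minute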
