SOFTWARE TESTING
By Mousmi Pawar
TESTING…
 Software testing, when done correctly, can
  increase overall software quality of
  conformance by testing that the product conforms
  to its requirements.
 Testing begins with the software engineer in
  early stages, but later specialists may be involved
  in the testing process.
 After generating source code, the software must
  be tested to uncover and correct as many errors
  as possible before delivering it to the customer.
 It is important to test software before delivery;
  otherwise the customer, after running the program many
  times, will eventually be the one to uncover its errors.
INTRODUCTION TO QUALITY
ASSURANCE
 Quality assurance is an essential activity for any
  business that produces products to be used by
  others.
 Software quality assurance is a "planned and
  systematic pattern of actions" required to ensure
  high quality in software.
 Software quality assurance is composed of a
  variety of tasks associated with two different
  constituencies—the software engineers who do
  technical work and an SQA group that has
  responsibility for quality assurance planning,
  oversight, record keeping, analysis, and
  reporting.
SIX SIGMA..
 Six Sigma is a set of practices originally developed
  by Motorola to systematically improve processes by
  eliminating defects.
 The Six Sigma approach (DMAIC) is:
     Define the project goals and the requirements of customers.
     Measure the current performance of the process.
     Analyze and determine the root causes of the defects.
     Improve the process to eliminate defects.
     Control the performance of the process.
SOFTWARE TESTING
FUNDAMENTALS
 During software engineering activities, an
  engineer attempts to build software, whereas
  during testing activities the intent is to demolish
  the software.
 Hence testing is psychologically destructive
  rather than constructive.
 A good and successful test is one that uncovers
  an as-yet undiscovered error.
 All tests that are carried out should be
  traceable to customer requirements; at the
  same time, exhaustive testing is not possible.
TESTING OBJECTIVES
 Testing is a process of executing a program with
  the intent of finding an error.
 A good test case is one that has a high probability
  of finding an as yet undiscovered error.
 A successful test is one that uncovers an as yet
  undiscovered error.
COMMON TERMS:
   Error: the difference between the actual output of the
    software and the correct output; a mistake made by the
    programmer, or anything that stops the execution of the
    application.
 Fault: a condition that causes a system to fail in
  performing its required function.
 Bug: an application error that does not stop the
  application from running, but causes it to produce
  wrong output.
 Failure: occurs when a fault is executed; the
  inability of a system or component to perform its
  required functions within specified performance
  requirements.
TESTING PRINCIPLES
 “All tests should be traceable to customer
  requirements.” For a customer, the most
  severe defects are those that cause the
  program to fail to meet its requirements.
 “Tests should be planned long before
  testing begins.” All tests can be planned
  and designed before any code has been
  generated.
 “Testing should begin ‘in the small’ and
  progress toward testing ‘in the large.’” The
  first tests planned and executed generally
  focus on individual components.
TESTING PRINCIPLES
 “As testing progresses focus shifts on
  integrated clusters of components and
  then the entire system.”
 “Exhaustive testing is not possible.” to
  test exhaustively, even a small program
  could take years.
 “To be more effective, testing should be
  conducted by an independent third
  party.”
      The software engineer who constructs the
      software is not always the best person to test
      it, so a third party is deemed more
      effective.
SOFTWARE TESTING
There are two major types of software testing:
 Black box testing
 White box testing / glass box testing

 Black box testing focuses on the inputs, outputs, and
  principal functions of a software module.
 White box (glass box) testing is structural testing:
  statements, paths, and branches are checked for
  correct execution.
LEVELS OF SOFTWARE TESTING
   Software Testing is carried out at different levels
    through the entire software development life
    cycle.
 In unit testing, test cases verify the
  implementation of the design of each software
  element.
 Integration testing traces detailed design and
  architecture of the system. Hardware and
  software elements are combined and tested until
  the entire system has been integrated.
 A functional test exercises the code
  with the common input values for which the
  expected values are available.
 System testing checks the system against its
  requirements.
 Acceptance testing determines whether test
  results satisfy the acceptance criteria of project
  stakeholders.
UNIT TESTING
 Unit testing focuses verification effort on the smallest
  unit of software design—the software component or
  module.
 Using the component-level design description as a
  guide, important control paths are tested to uncover
  errors within the boundary of the module.
 The relative complexity of tests and uncovered errors is
  limited by the constrained scope established for unit
  testing.
 The unit test is white-box oriented, and the step can be
  conducted in parallel for multiple components.
Unit Test Considerations
   The module interface is tested to ensure that information properly
    flows into and out of the program unit under test.
   The local data structure is examined to ensure that data stored
    temporarily maintains its integrity during all steps in an algorithm's
    execution.
   Boundary conditions are tested to ensure that the module operates
    properly at boundaries established to limit or restrict processing.
    All independent paths (basis paths) through the control structure are
    exercised to ensure that all statements in a module have been executed
    at least once.
   And finally, all error handling paths are tested.
   Following are some types of errors commonly found during unit
    testing:
      misunderstood or incorrect arithmetic precedence,
      mixed mode operations,
      incorrect initialization,
      precision inaccuracy.
Unit Test Procedures
   Once the source level code has been developed, reviewed, and verified
    for correspondence to component level design, unit test case design
    begins.
   Because a component is not a stand-alone program, driver and/or
    stub software must be developed for each unit test.
   In most applications a driver is nothing more than a "main program"
    that accepts test case data, passes such data to the component (to be
    tested), and prints relevant results.
   Stubs serve to replace modules that are subordinate (called by) the
    component to be tested. A stub or "dummy subprogram" uses the
    subordinate module's interface, may do minimal data manipulation,
    prints verification of entry, and returns control to the module
    undergoing testing.
   Drivers and stubs represent overhead. That is, both are software that
    must be written (formal design is not commonly applied) but that is
    not delivered with the final software product.
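The driver/stub arrangement above can be sketched in a few lines. In this hypothetical example (the component and function names are illustrative, not from the slides), the component under test calls a subordinate tax-rate module, which the stub replaces with a canned value; a small driver feeds test-case data and prints results.

```python
def compute_invoice(amount, get_tax_rate):
    """Component under test: amount plus tax, where the tax rate
    normally comes from a subordinate module."""
    return round(amount * (1 + get_tax_rate()), 2)

def stub_get_tax_rate():
    """Stub: mimics the subordinate module's interface, does minimal
    work, prints verification of entry, and returns a canned value."""
    print("stub_get_tax_rate called")
    return 0.10

# Driver: a "main program" that accepts test case data, passes it to
# the component under test, and prints relevant results.
for amount, expected in [(100.0, 110.0), (0.0, 0.0)]:
    result = compute_invoice(amount, stub_get_tax_rate)
    print(amount, result, "OK" if result == expected else "FAIL")
```

As the slides note, both the driver loop and the stub are overhead: they are discarded once the real subordinate module is integrated.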
Integration Testing
 Units that otherwise seem to work fine
  individually start causing problems when integrated
  together.
 Data can be lost across an interface; one module can
  have an inadvertent, adverse effect on another; sub-
  functions, when combined, may not produce the desired
  major function; individually acceptable imprecision
  may be magnified to unacceptable levels; global data
  structures can present problems.
 Integration testing is a systematic technique for
  constructing the program structure while at the same
  time conducting tests to uncover errors associated with
  interfacing.
   Non-incremental integration or "big bang" approach:
      Combines all the components at the same time and then
       tests the program as a whole. This approach yields a
       number of errors that are difficult to isolate and trace,
       because the whole program is now very vast; correcting
       one error tends to expose new ones, in a seemingly
       endless loop.
   Incremental integration:
      The program is constructed and tested in small
       increments, where errors are easier to isolate and
       correct; interfaces are more likely to be tested
       completely; and a systematic test approach may be
       applied.
     Top-down Integration
     Bottom-up Integration
Top-down Integration
 Top-down integration testing is an incremental approach
  to construction of program structure.
 Modules are integrated by moving downward through
  the control hierarchy, beginning with the main control
  module (main program).
 Modules subordinate (and ultimately subordinate) to the
  main control module are incorporated into the structure
  in either a depth-first or breadth-first manner.
Depth-first integration & Breadth-first
    integration
   Depth-first integration would integrate all components on a major
    control path of the structure.
   Selection of a major path is somewhat arbitrary and depends on
    application-specific characteristics.
   For example, selecting the left-hand path, components M1, M2, M5
    would be integrated first. Next, M8 or (if necessary for proper
    functioning of M2) M6 would be integrated.
   Then, the central and right-hand control paths are built.
   Breadth-first integration incorporates all components directly
    subordinate at each level, moving across the structure horizontally.
   components M2, M3, and M4 would be integrated first. The next
    control level, M5, M6, and so on, follows.
   The integration process is performed in a series of five
    steps:
     1. The main control module is used as a test driver and stubs
      are substituted for all components directly subordinate to the
      main control module.
     2. Depending on the integration approach selected (i.e., depth
      or breadth first), subordinate stubs are replaced one at a time
      with actual components.
     3. Tests are conducted as each component is integrated.
     4. On completion of each set of tests, another stub is replaced
      with the real component.
     5. Regression testing may be conducted to ensure that new
      errors have not been introduced.
   The process continues from step 2 until the entire
    program structure is built.
Bottom-up Integration
   Bottom-up integration testing, as its name implies, begins
    construction and testing with atomic modules i.e., components at
    the lowest levels in the program structure.
   Because components are integrated from the bottom up, processing
    required for components subordinate to a given level is always
    available and the need for stubs is eliminated.
   A bottom-up integration strategy may be implemented with the
    following steps:
      1. Low-level components are combined into clusters (sometimes
        called builds) that perform a specific software subfunction.
      2. A driver (a control program for testing) is written to coordinate
        test case input and output.
      3. The cluster is tested.
      4. Drivers are removed and clusters are combined moving upward in
        the program structure.
   Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested
    using a driver (shown as a dashed block). Components in clusters 1 and 2 are
    subordinate to Ma.
   Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma.
   Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb.
    Both Ma and Mb will ultimately be integrated with component Mc, and so forth.
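A minimal sketch of one bottom-up cluster test, with hypothetical components: two atomic components are combined into a cluster (build) that performs a record-loading subfunction, and a driver, playing the role of D1 above, coordinates test input and output. No stubs are needed, because the lowest-level components are the real ones.

```python
def parse_record(line):
    """Atomic component: splits a comma-separated record into fields."""
    return [field.strip() for field in line.split(",")]

def validate_record(fields):
    """Atomic component: a record is valid when no field is empty."""
    return all(fields)

def cluster_load(lines):
    """Cluster (build): combines the two components into a subfunction."""
    records = [parse_record(line) for line in lines]
    return [r for r in records if validate_record(r)]

def driver_d1():
    """Driver: coordinates test case input and output for the cluster."""
    return cluster_load(["a, b", " , b", "c,d"])

print(driver_d1())  # [['a', 'b'], ['c', 'd']]
```

Once the cluster passes, the driver is removed and the cluster is interfaced directly to its parent module.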
Regression Testing
 Each time a new module is added as part of integration
  testing, the software changes. New data flow paths are
  established, new I/O may occur, and new control logic is
  invoked.
 These changes may cause problems with functions that
  previously worked flawlessly.
 In the context of an integration test strategy, regression
  testing is the re-execution of some subset of tests that
  have already been conducted to ensure that changes
  have not propagated unintended side effects.
 Regression testing is the activity that helps to ensure
  that changes (due to testing or for other reasons) do not
  introduce unintended behavior or additional errors.
   The regression test suite (the subset of tests to be executed)
    contains three different classes of test cases:
      A representative sample of tests that will exercise all software
        functions.
      Additional tests that focus on software functions that are likely to
        be affected by the change.
      Tests that focus on the software components that have been
        changed.
   As integration testing proceeds, the number of regression tests
    can grow quite large.
   Therefore, the regression test suite should be designed to include
    only those tests that address one or more classes of errors in each
    of the major program functions.
   It is impractical and inefficient to re-execute every test for every
    program function once a change has occurred.
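The idea of selecting a regression subset rather than re-running everything can be sketched in code. The test names and function tags below are hypothetical: each test is tagged with the software functions it exercises, and the subset is the representative sample plus every test touching a changed function.

```python
# Hypothetical registry: each test is tagged with the functions
# it exercises.
tests = {
    "test_login":    {"login"},
    "test_checkout": {"checkout", "payment"},
    "test_search":   {"search"},
}

def regression_subset(changed, representative=("test_login",)):
    """Representative sample plus tests focused on changed functions."""
    affected = {name for name, funcs in tests.items() if funcs & changed}
    return sorted(affected | set(representative))

print(regression_subset({"payment"}))  # ['test_checkout', 'test_login']
```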
Smoke Testing
   Smoke testing is an integration testing approach that is commonly
    used when “shrinkwrapped” software products are being
    developed. It is designed as a pacing mechanism for time-critical
    projects, allowing the software team to assess its project on a
    frequent basis.
   The smoke testing approach encompasses the following activities:
      1. Software components that have been translated into code are
       integrated into a “build.” A build includes all data files, libraries,
       reusable modules, and engineered components that are required to
       implement one or more product functions.
      2. A series of tests is designed to expose errors that will keep the
       build from properly performing its function. The intent should be to
       uncover “show stopper” errors that have the highest likelihood of
       throwing the software project behind schedule.
      3. The build is integrated with other builds and the entire product
       (in its current form) is smoke tested daily. The integration
       approach may be top down or bottom up.
   Smoke testing provides a number of benefits when it is applied on
    complex, time-critical software engineering projects:
      Integration risk is minimized. Because smoke tests are conducted
       daily, incompatibilities and other show-stopper errors are
       uncovered early, thereby reducing the likelihood of serious
       schedule impact when errors are uncovered.
      The quality of the end-product is improved. Because the approach
       is construction (integration) oriented, smoke testing is likely to
       uncover both functional errors and architectural and component-
       level design defects. If these defects are corrected early, better
       product quality will result.
      Error diagnosis and correction are simplified. Like all integration
       testing approaches, errors uncovered during smoke testing are
       likely to be associated with “new software increments”—that is,
       the software that has just been added to the build(s) is a probable
       cause of a newly discovered error.
      Progress is easier to assess. With each passing day, more of the
       software has been integrated and more has been demonstrated to
       work. This improves team morale and gives managers a good
       indication that progress is being made.
Validation Testing / Acceptance Testing
 After integration testing, one may start with
  validation testing.
 Validation succeeds when the software functions in a
  manner that can be reasonably expected by the
  customer.
 Software validation is achieved through a series of
  black-box tests that demonstrate conformity with
  requirements.
 This is to ensure that all functional requirements are
  satisfied, all behavioral characteristics are achieved,
  all performance requirements are attained,
   documentation is correct, and other requirements are
   met.
 It is virtually impossible for a software developer to
  foresee how the customer will really use a program.
 When custom software is built for one customer, a
  series of acceptance tests are conducted to enable the
  customer to validate all requirements.
 Conducted by the end-user rather than software
  engineers, an acceptance test can range from an
  informal "test drive" to a planned and systematically
  executed series of tests.
Alpha and Beta Testing
   The alpha test is conducted at the developer's site by a customer.
   The software is used in a natural setting with the developer
    "looking over the shoulder" of the user and recording errors and
    usage problems.
   Alpha tests are conducted in a controlled environment.
   The beta test is conducted at one or more customer sites by the
    end-user of the software.
    Unlike alpha testing, the developer is generally not present.
    Therefore, the beta test is a "live" application of the software in
    an environment that cannot be controlled by the developer.
   The customer records all problems that are encountered during
    beta testing and reports these to the developer at regular
    intervals.
   As a result of problems reported during beta tests, software
    engineers make modifications and then prepare for release of the
    software product to the entire customer base.
SYSTEM TESTING
 Software is incorporated with other system
  elements (e.g., hardware, people, information), and
  a series of system integration and validation
  tests are conducted.
 System testing is actually a series of different
  tests whose primary purpose is to fully exercise
  the computer-based system.
     Recovery Testing
     Security Testing
     Stress Testing
     Performance Testing
Recovery Testing
 Many computer based systems must recover from
  faults and resume processing within a pre-specified
  time.
 Recovery testing is a system test that forces the
  software to fail in a variety of ways and verifies that
  recovery is properly performed.
 If recovery is automatic, then reinitialization,
  checkpointing mechanisms, data recovery, and
  restart are all evaluated for correctness.
Security Testing
 Security testing attempts to verify that protection
  mechanisms built into a system will, in fact, protect it
  from improper penetration.
 During security testing, the tester plays the role(s) of
  the individual who desires to penetrate the system.
 Anything goes! The tester may attempt to acquire
  passwords through external clerical means; may attack
  the system with custom software designed to
  breakdown any defenses that have been constructed;
  may overwhelm the system, thereby denying service to
  others; may purposely cause system errors, hoping to
  penetrate during recovery; may browse through
  insecure data, hoping to find the key to system entry.
Stress Testing
   Stress testing executes a system in a manner that demands
    resources in abnormal quantity, frequency, or volume.
   (1) special tests may be designed that generate ten interrupts per
    second, when one or two is the average rate
   (2) input data rates may be increased by an order of magnitude to
    determine how input functions will respond,
   (3) test cases that require maximum memory or other resources
    are executed,
   (4) test cases that may cause thrashing in a virtual operating
    system are designed,
   (5) test cases that may cause excessive hunting for disk-resident
    data are created. Essentially, the tester attempts to break the
    program.
   A variation of stress testing is a technique called sensitivity
    testing.
Performance Testing
 For real-time and embedded systems, software that
  provides the required function but does not conform to
  performance requirements is unacceptable.
 Performance testing is designed to test the run-time
  performance of software within the context of an
  integrated system. Performance testing occurs
  throughout all steps in the testing process.
 Performance tests are often coupled with stress testing
  and usually require both hardware and software
  instrumentation.
WHITE BOX TESTING
 Focus: Thoroughness (Coverage). Every
  statement in the component is executed at least
  once.
 White box testing is also known as structural
  testing or code-based testing.
 The major objective of white box testing is to
  focus on the internal program structure and discover
  all internal program errors.
ADVANTAGE OF WHITE BOX
TESTING
 Helps optimize the code.
 Helps remove extra (dead) lines of code.
 All possible code pathways can be tested,
  including error handling, resource dependencies,
  and additional internal code logic/flow.
 Enables tester to find programming errors
  quickly.
 Encourages good-quality coding work and adherence
  to coding standards.
DISADVANTAGE OF WHITE BOX
TESTING
 Knowledge of the code and internal structure is a
  prerequisite; a skilled tester is needed to carry
  out this type of testing, which increases the cost.
 It is impossible to look into every bit of code to
  find hidden errors.
 It is a very expensive technique.
 Requires knowledge of the target system, testing
  tools, and the coding language.
 Requires specialized tools such as source code
  analyzer, debugger, and fault injectors.
TRADITIONAL WHITE BOX TESTING
   White box testing is further divided into 2 types:
     BASIS PATH TESTING
        Flow graph
        Cyclomatic complexity
        Graph matrix
     CONTROL STRUCTURE TESTING
        Condition testing
        Data flow testing
        Loop testing
BASIS PATH TESTING
 It is a white box testing technique that enables
  the designer to derive a logical complexity
  measure of a procedural design.
 Test cases based on basis path testing guarantee
  that every statement in the program is executed at
  least once.
FLOW GRAPH
   A flow graph depicts the logical control flow
    using the following notations:
    [Figure: flow graph notation symbols]
   Every structural construct has a corresponding
    flow graph symbol.
FLOW GRAPH
 Each circle, called a flow graph node, represents
  one or more procedural statements.
 The arrows, called links or edges, represent the
  flow of control.
 Areas bounded by edges are called regions. When
  counting regions, the area outside the graph is
  also counted as a region.
 Each node containing a condition is called a
  predicate node and has 2 or more edges leading
  out of it.
CYCLOMATIC COMPLEXITY
 Cyclomatic complexity is a software metric that
  provides a quantitative measure of the logical
  complexity of a program.
 Cyclomatic complexity defines the number of
  independent paths in the basis set of a program.
 This number gives us an upper bound for the
  number of tests that must be conducted to ensure
  that all statements have been executed at least
  once.
 An independent path is any path through the
  program that introduces at least one new set of
  processing statements or a new condition.
 For example, a set of independent paths for the
  flow graph illustrated in Figure
 path 1: 1-11
  path 2: 1-2-3-4-5-10-1-11
  path 3: 1-2-3-6-8-9-10-1-11
  path 4: 1-2-3-6-7-9-10-1-11

 Note that each new path introduces a new edge.
 Paths 1, 2, 3, and 4 constitute a “basis set” for
  the flow graph
 Basis set is a set where every statement in the
  program is visited at least once. This set is not
  unique.
 Computing the cyclomatic complexity tells us how
  many paths are required in the basis set.
 Complexity is computed in one of three ways:
      The number of regions of the flow graph corresponds
       to the cyclomatic complexity.
     Cyclomatic complexity, V(G), for a flow graph, G, is
      defined as V(G) = E - N + 2 where E is the number of
      flow graph edges, N is the number of flow graph
      nodes.
     Cyclomatic complexity, V(G), for a flow graph, G, is
      also defined as V(G) = P + 1 where P is the number of
      predicate nodes contained in the flow graph G.
    The cyclomatic complexity can be computed using
     each of the algorithms just noted:
     The flow graph has four regions.
     V(G) = 11 edges - 9 nodes + 2 = 4.
     V(G) = 3 predicate nodes + 1 = 4.
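The two formulas can be checked mechanically. The sketch below uses a different, hypothetical flow graph (a loop whose body contains an if-else, so nodes 2 and 3 are predicates) and computes V(G) both as E - N + 2 and as P + 1.

```python
# Edge list for a hypothetical flow graph: node 2 is the loop test and
# node 3 an if-else, so the graph has two predicate nodes.
edges = [(1, 2), (2, 3), (2, 8),
         (3, 4), (3, 5),
         (4, 6), (5, 6), (6, 7), (7, 2)]
nodes = {n for edge in edges for n in edge}

# V(G) = E - N + 2
v_en = len(edges) - len(nodes) + 2

# V(G) = P + 1, where P = number of predicate nodes (2+ outgoing edges)
out_degree = {}
for src, _dst in edges:
    out_degree[src] = out_degree.get(src, 0) + 1
v_p = sum(1 for d in out_degree.values() if d >= 2) + 1

print(v_en, v_p)  # 3 3
```

Both computations agree, as they must for any flow graph.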
GRAPH MATRICES
 To develop a software tool that assists in basis
  path testing, a data structure, called a graph
  matrix, can be quite useful.
 A graph matrix is a square matrix whose size
  (i.e., number of rows and columns)is equal to the
  number of nodes on the flow graph. Each row and
  column corresponds to an identified node, and
  matrix entries correspond to connections (an
  edge) between nodes.
 The graph matrix is nothing more than a tabular
  representation of a flow graph.
 Each row with two or more entries represents a
  predicate node.
 Therefore, performing simple arithmetic on the
  connection matrix (for each non-empty row, count the
  number of connections minus one, sum these values,
  and add 1) provides still another method for
  determining cyclomatic complexity.
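A sketch of that arithmetic on a small, hypothetical connection matrix: summing, for each non-empty row, the number of connections minus one, then adding 1 to the total, yields the cyclomatic complexity.

```python
# Connection matrix for a small hypothetical flow graph:
# entry [i][j] = 1 means an edge from node i+1 to node j+1.
matrix = [
    [0, 1, 0, 0],  # node 1 -> 2
    [0, 0, 1, 1],  # node 2 -> 3 and 2 -> 4: two entries = predicate
    [0, 1, 0, 0],  # node 3 -> 2 (loop back)
    [0, 0, 0, 0],  # node 4: exit, no connections
]

connections = [sum(row) for row in matrix]
complexity = sum(c - 1 for c in connections if c > 0) + 1
print(complexity)  # 2
```

The same graph has 4 edges and 4 nodes, so V(G) = E - N + 2 = 2 as well.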
CONTROL STRUCTURE TESTING
 Although basis path testing is simple and highly
  effective, it is not sufficient in itself.
 Other variations on control structure testing improve
  the quality of white-box testing.
 Control structure testing exercises control over the
  order in which the instructions in a program are
  executed.
 One of the major objectives of control structure
  testing is the selection of test cases to meet various
  criteria for covering the code.
 Test cases are derived to exercise control over the
  order of execution of the code.
 Coverage-based testing works by choosing test cases
  according to well-defined coverage criteria.
 The more common coverage criteria are the following:
 Statement Coverage or Node Coverage:
     Every   statement of the program should be exercised at least
      once.
   Branch Coverage or Decision Coverage:
     Every possible alternative in a branch of the program
      should be exercised at least once.
   Condition Coverage
     Each  condition in a branch is made to evaluate to true and
      false at least once.
   Decision/Condition Coverage
     Each condition in a branch is made to evaluate to both true
      and false and each branch is made to evaluate to both true
      and false.
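A minimal illustration of why these criteria differ, using a hypothetical function: a single test input reaches every statement, yet leaves the false branch of the predicate untested, so a second input is needed for branch (and condition) coverage.

```python
def classify(x):
    label = "small"
    if x > 10:        # the predicate has a true and a false branch
        label = "big"
    return label

# x = 20 alone executes every statement (statement coverage met) but
# never takes the false branch of the predicate; x = 5 is also needed
# for branch coverage, and here it makes the condition evaluate false.
print(classify(20), classify(5))  # big small
```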
Conditional Testing
 In this kind of testing various test cases are
  designed so that the logical conditions contained
  in a program can be exercised.
 A simple condition can be defined as a single
  Boolean variable or a relational expression.
 The conditional testing method focuses on testing
  each and every condition existing in a program,
  along the paths it controls.
 Advantages:
      Measurement of the test coverage of a condition is
       simple.
      The test coverage of conditions in a program provides
       guidance for generating extra tests for the program.
Data Flow testing
 Testing in which test cases are designed based on
  variable usage within the code.
 Data-flow testing looks at the lifecycle of a
  particular variable.
 The two major types of variable use are:
     P-use: predicate use – the variable is used when
      making a decision (e.g. if b > 6).
     C-use: computation use – the variable is used in a
      computation (for example, b = 3 + d – with respect to
      the variable d).
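A tiny sketch of p-use and c-use, using the book's variables b and d inside a hypothetical function:

```python
def discount(b, d):
    if b > 6:         # p-use of b: the variable decides a predicate
        b = 3 + d     # c-use of d: the variable feeds a computation
    return b          # c-use of b

print(discount(7, 2), discount(3, 2))  # 5 3
```

Data-flow test cases would cover each definition-use pair, e.g. the definition of b inside the branch reaching the return.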
Loop Testing
   Four different classes of loops exist
     Simple loops
     Nested loops
     Concatenated loops
     Unstructured loops
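For simple loops, a common heuristic (following Pressman) is to skip the loop entirely, then make one pass, two passes, a typical number of passes, and n-1 and n passes, where n is the maximum. A sketch with a hypothetical function whose loop count is controlled directly:

```python
def sum_first(values, n):
    """The loop below executes min(n, len(values)) passes."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

data = [1, 2, 3, 4, 5]             # maximum number of passes n = 5
for passes in (0, 1, 2, 3, 4, 5):  # skip, once, twice, typical, n-1, n
    print(passes, sum_first(data, passes))
```

Nested and concatenated loops extend this scheme by holding outer loops at minimum values while the inner loop is exercised.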
Black Box Testing
 Black-box testing, also called behavioral testing,
  focuses on the functional requirements of the software.
 That is, black-box testing enables the software
  engineer to derive sets of input conditions that will
  fully exercise all functional requirements for a
  program.
 Black-box testing is not an alternative to white-box
  techniques. Rather, it is a complementary approach
  that is likely to uncover a different class of errors than
  white-box methods.
Black Box Testing
   Black-box testing attempts to find errors in the
    following categories:
     incorrect  or missing functions,
     interface errors,
     errors in data structures or external data base access,
     behavior or performance errors, and
     initialization and termination errors.
   Stages of Black Box Testing
     Equivalence Partitioning
     Boundary Value Analysis
     Robustness Testing
Advantage’s of Black Box Testing
  More effective on larger units of code than glass
  box testing
 Tester needs no knowledge of implementation,
  including specific programming
  languages
 Tester and programmer are independent of each
  other
 Tests are done from a user's point of view.
 Will help to expose any ambiguities or
  inconsistencies in the specifications.
 Test cases can be designed as soon as the
  specifications are complete
Disadvantages of Black Box Testing
 Only a small number of possible inputs can
  actually be tested; to test every possible input
  stream would take nearly forever.
 Without clear and concise specifications, test
  cases are hard to design
 There may be unnecessary repetition of test
  inputs if the tester is not informed of
  test cases the programmer has already tried
Equivalence partitioning
 Equivalence partitioning is a black-box testing
  method that divides the input domain of a program
  into classes of data from which test cases can be
  derived.
 Test case design is based on an evaluation of
  equivalence classes for an input condition
 An equivalence class represents a set of valid or
  invalid states for input conditions
 From each equivalence class, test cases are selected
  so that the largest number of attributes of the
  equivalence class are exercised at once.
   If an input condition specifies a range, one valid and
    two invalid equivalence classes are defined
     Input    range: 1 – 10 Eq classes: {1..10}, {x < 1}, {x > 10}
   If an input condition requires a specific value, one
    valid and two invalid equivalence classes are defined
     Input    value: 250 Eq classes: {250}, {x < 250}, {x > 250}
   If an input condition specifies a member of a set, one
    valid and one invalid equivalence class are defined
     Input set: {-2.5, 7.3, 8.4} Eq classes: {-2.5, 7.3, 8.4}, {any
      other x}
   If an input condition is a Boolean value, one valid and
    one invalid class are defined
      Input: {true condition} Eq classes: {true condition}, {false
       condition}
                                            (Examples from Pressman)
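The range example above translates directly into test inputs (the function name here is hypothetical): one representative value is drawn from the valid class and one from each invalid class.

```python
def in_range(x):
    """Requirement under test: the input must lie in the range 1..10."""
    return 1 <= x <= 10

# One representative per equivalence class for the range 1..10.
representatives = {
    "valid class {1..10}":    5,
    "invalid class {x < 1}":  0,
    "invalid class {x > 10}": 11,
}
for name, value in representatives.items():
    print(name, value, in_range(value))
```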
Boundary Value Analysis
 A greater number of errors occur at the
  boundaries of the input domain than in the
  "center."
 Boundary value analysis leads to a selection of
  test cases that exercise bounding values.
 Boundary value analysis is a test case design
  method that complements equivalence
  partitioning
     It selects test cases at the edges of a class
     It derives test cases from both the input domain and
      output domain
Guidelines for
Boundary Value Analysis
1. If an input condition specifies a range bounded by values a and
  b, test cases should be designed with values a and b as well as
  values just above and just below a and b
2. If an input condition specifies a number of values, test cases
  should be developed that exercise the minimum and maximum
  numbers. Values just above and just below the minimum and
  maximum are also tested
3. Apply guidelines 1 and 2 to output conditions; produce output
  that reflects the minimum and the maximum values expected;
  also test the values just below and just above
4. If internal program data structures have prescribed boundaries
  (e.g., an array), design a test case to exercise the data structure
  at its minimum and maximum boundaries
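Guideline 1 can be sketched as a small helper (hypothetical, for an integer range bounded by `a` and `b`): the bounds themselves plus the values just inside and just outside each bound.

```python
def boundary_values(a, b):
    """Boundary-value test inputs for a range bounded by a and b:
    each bound plus the values just below and just above it."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

# For the earlier example range 1..10:
assert boundary_values(1, 10) == [0, 1, 2, 9, 10, 11]
```

Note how this complements equivalence partitioning: instead of one value per class, the test values cluster at the class edges, where most errors live.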
Robustness Testing
 Robustness is the capability of software to
  maintain acceptable behavior under adverse
  conditions. This standard of acceptability is
  known as the robustness requirement.
 Robustness comprises the appropriate
  handling of the following:
      Invalid input given by the user or any other
       interfacing application
      Hardware component failure
      Software component failure
      External system failure
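A minimal sketch of handling the first case, invalid input from a user or an interfacing application; `parse_age` is a hypothetical function, not from the slides:

```python
def parse_age(raw):
    """Robustness sketch: reject invalid input gracefully instead of crashing."""
    try:
        age = int(raw)
    except (TypeError, ValueError):
        return None          # non-numeric or missing input handled
    return age if 0 <= age <= 150 else None   # out-of-range input handled

assert parse_age("42") == 42
assert parse_age("forty-two") is None   # invalid input from a user
assert parse_age(None) is None          # missing input from an interface
assert parse_age("-5") is None          # syntactically valid but out of range
```

Robustness testing deliberately feeds such inputs to the system and checks that the behavior stays within the acceptable standard.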
Cause effect graph
 This helps in identifying combinations of input
  conditions in an organized manner, so that the
  number of test cases does not become
  unmanageably large.
 This technique begins with finding out the ‘causes’
  as well as the ‘effects’ of the system under test.
 The inputs (causes) and outputs (effects) are
  represented as nodes of a cause-effect graph.
 In such a graph, a number of intermediate nodes
  link causes and effects, forming a logical
  expression.
A Simple Cause-Effect Graphing
Example
Consider a requirement that says: “If A OR B, then C.”
 The following rules hold for this requirement:
    If A is true and B is true, then C is true.
   If A is true and B is false, then C is true.
   If A is false and B is true, then C is true.
   If A is false and B is false, then C is false.
   The cause-effect graph is then converted into a
    decision table or “truth table” representing the
    logical relationships between the causes and
    effects.
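The four rules above translate directly into a decision table; a sketch, with `effect_c` standing in as a hypothetical implementation under test:

```python
def effect_c(a, b):
    """Hypothetical implementation under test for 'If A OR B, then C'."""
    return a or b

# Decision table derived from the cause-effect graph: each row lists
# the causes (A, B) and the expected effect C, per the four rules above.
decision_table = [
    (True,  True,  True),
    (True,  False, True),
    (False, True,  True),
    (False, False, False),
]

for a, b, expected_c in decision_table:
    assert effect_c(a, b) == expected_c
```

Each row of the table becomes one test case, so the combinations of input conditions are covered without enumerating tests ad hoc.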
Static Testing
 Static testing is carried out without execution of program.
 At different levels, static testing may expose faulty
  programming constructs:
      Variables used prior to initialization, or declared but
       never used
      Variables assigned twice with no use between the
       assignments
      Undeclared variables
      Interface faults
      Parameter type and number mismatches, and uncalled
       functions and procedures
Stages of Static Testing
1.    Control Flow Analysis: checking is done on loops with
      multiple entry or exit points.
2.    Data Use Analysis: checking is done regarding un-
      initialized variables, variables assigned more than once
      without an intervening use, and variables declared but
      never used.
3.    Interface Analysis: checking the consistency of procedure
      declarations and their use.
4.    Information Flow Analysis: identifies the dependencies of
      output variables and also provides information for code
      inspection or review.
5.    Path Analysis: identifies paths through the program and
      sets out the statements executed on each path.
Principles of Static Testing
   Review each and every coding standard violation.
      Coding standards may be general good practice or
       self-imposed company standards.
 Review complexity of code
 Inspect and remove unreachable code, unreferenced
  procedures and unused variables.
 Report each and every type of data flow anomaly:
      A variable is used before it is initialized (a serious
       anomaly).
      A value is assigned to a variable, which then goes out
       of scope without ever being used.
      A value is assigned to a variable and then reassigned
       without any use in between.
      A parameter mismatch
    Suspicious casts.
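Two of the data-flow anomalies above can be seen in a short, contrived function. The code runs and even returns the right answer, which is precisely why only static analysis (e.g., a linter such as pylint) catches these:

```python
def report_total(prices):
    total = 0               # defined ...
    total = sum(prices)     # ... and reassigned with no use in between
    tax_rate = 0.18         # assigned, then goes out of scope unused
    return total

# Dynamic testing passes even though both anomalies are present:
assert report_total([1, 2, 3]) == 6
```

A static-testing pass over this function would report the redundant initialization of `total` and the never-used `tax_rate` without executing anything.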
OBJECT-ORIENTED TESTING
STRATEGIES
 The classical strategy for testing computer software
  begins with “testing in the small” and works outward
  toward “testing in the large.”
 We begin with unit testing, then progress toward
  integration testing, and culminate with functional
  and system testing.
 In conventional applications, unit testing focuses on
  the smallest compilable program unit—the
  subprogram (e.g., module, subroutine, procedure,
  component). Once each of these units has been tested
  individually, it is integrated into a program structure
  while a series of regression tests are run to uncover
  errors due to interfacing between the modules and
  side effects caused by the addition of new units.
 Finally, the system as a whole is tested to ensure that
  errors in requirements are uncovered.
Unit Testing in the OO Context
   In the OO context the smallest testable unit is the class or the
    instance of the class. A class encapsulates attributes
    (variables) & behavior (methods) into one unit.
    A class may contain many methods, and the same method
     may be a part of many classes. So testing in isolation for a
    particular method is not possible. It has to be tested as part of
    the class.
   E.g. an operation X is inherited from a super class by many
    subclasses. And each subclass makes its own type of changes
    to this operation X. Hence we need to check X for each
    subclass rather than checking it for the super class.
   Class testing for OO software is the equivalent of unit testing
    for conventional software
    Conventional unit testing tends to focus on the algorithmic
     detail of a module and the data flow across the module
     interface, whereas OO class testing is driven by the operations
     encapsulated by the class and the state behavior of the class.
Integration Testing in the OO Context
    Because object-oriented software does not have an obvious
     hierarchical control structure, conventional top-down and
     bottom-up integration strategies have little meaning.
   There are two different strategies for integration testing of OO
    systems
      The first, thread-based testing, integrates the set of classes
        required to respond to one input or event for the system. Each
        thread is integrated and tested individually. Regression testing is
        applied to ensure that no side effects occur.
       The second integration approach, use-based testing, begins
         construction of the system by testing those classes (called
         independent classes) that use very few server classes. Next, the
         layer of classes that use the independent classes, called dependent
         classes, is tested. This sequence of testing layers of dependent
         classes continues until the entire system is constructed. As with
         conventional integration, drivers and stubs may be needed to stand
         in for classes that have not yet been integrated.
   Cluster testing is one step in the integration testing of OO
    software. Here, a cluster of collaborating classes is exercised by
    designing test cases that attempt to uncover errors in the
    collaborations.
Validation Testing in an OO Context
 The validation of OO software focuses on user-
  visible actions and user-recognizable output from
  the system.
 For validation tests, the use-cases drawn by the
  tester are of great help.
 The use-case provides a scenario that has a high
  likelihood of uncovering errors in user-interaction
  requirements.
 Conventional black-box testing methods can be
  used to drive validation tests. Also, test cases
  may be derived from the object-behavior model
  and from the event flow diagram created as part
  of OOA.
TEST CASE DESIGN FOR OO
SOFTWARE
The following are the points to be considered for OO test case
  design.
 Each test case should be uniquely identified and explicitly
  associated with the class to be tested.
 The purpose of the test should be stated.
 A list of testing steps should be developed for each test and
  should contain:
    A list of specified states for the object that is to be tested.
    A list of messages and operations that will be exercised as a
     consequence of the test.
    A list of exceptions that may occur as the object is tested.
    A list of external conditions (i.e., changes in the environment
     external to the software that must exist in order to properly
     conduct the test).
    Supplementary information that will aid in understanding or
     implementing the test
The Test Case Design Implications of OO
Concepts
 The OO class is the target for test case design.
  Because attributes and operations are
  encapsulated, testing operations outside of the
  class is generally unproductive.
 Encapsulation can make it difficult to obtain state
  information of a class, but built-in operations
  may be provided to report the values of class
  attributes.
 Inheritance may also lead to further challenges
  in testing as new context of usage requires
  retesting.
Applicability of Conventional Test Case
Design Methods
 White-box testing methods can be applied to the
  operations defined for a class.
 Basis path, loop testing, or data flow techniques
  can help to ensure that every statement in an
  operation has been tested.
 However, the concise structure of many class operations
  causes some to argue that the effort applied to
  white-box testing might be better redirected to tests
  at a class level.
 Black-box testing methods are as appropriate for
  OO systems as for conventional systems; use-cases can
  provide useful input to the design of black-box tests.
Testing methods
o   Fault-Based Testing
     o The tester looks for plausible faults (i.e., aspects of the
       implementation of the system that may result in defects). To
       determine whether these faults exist, test cases are designed
       to exercise the design or code.
     o Because the product or system must conform to customer
       requirements, the preliminary planning required to perform
        fault-based testing begins with the analysis model.
   Class Testing and the Class Hierarchy
      Inheritance does not obviate the need for thorough testing of
       all derived classes. In fact, it can actually complicate the
       testing process.
   Scenario-Based Test Design
      Scenario-based testing concentrates on what the user does, not
       what the product does. This means capturing the tasks (via
       use-cases) that the user has to perform, then applying them
       and their variants as tests.
TESTING METHODS APPLICABLE AT THE
CLASS LEVEL
   Random testing
     identify operations applicable to a class
     define constraints on their use
     identify a minimum test sequence
          an operation sequence that defines the minimum
           life history of the class (object)
     generate   a variety of random (but valid) test sequences
          exercise other (more complex) class instance life
           histories
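A sketch of random testing against a hypothetical `Stack` class: every generated sequence is a valid life history (create the object, then apply operations under their constraints), and a class invariant is checked for each one.

```python
import random

class Stack:
    """Hypothetical class under test."""
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.pop()
    def size(self):
        return len(self.items)

random.seed(1)  # reproducible random test sequences
for _ in range(100):
    s = Stack()                              # minimum life history: create
    for _ in range(random.randint(0, 20)):   # random (but valid) sequence
        if s.size() > 0 and random.random() < 0.5:
            s.pop()                          # constraint: pop only when non-empty
        else:
            s.push(random.randint(0, 9))
    assert s.size() >= 0                     # invariant holds in every history
```

The constraint on `pop` is what keeps each random sequence valid; the variety of sequences exercises the more complex instance life histories.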
TESTING METHODS APPLICABLE AT
THE CLASS LEVEL
   Partition Testing
     reduces the number of test cases required to test a class in
      much the same way as equivalence partitioning for
      conventional software
     state-based partitioning
     attribute-based partitioning
     category-based partitioning
state-based partitioning
   categorize and test operations based on their ability to change the
    state of a class.
   considering the account class, state operations include deposit and
    withdraw, whereas nonstate operations include balance,
    summarize, and creditLimit.
   Tests are designed in a way that exercises operations that change
    state and those that do not change state separately. Therefore,
      Test case p1:
       open•setup•deposit•deposit•withdraw•withdraw•close
      Test case p2:
       open•setup•deposit•summarize•creditLimit•withdraw•close
   Test case p1 changes state, while test case p2 exercises operations
    that do not change state (other than those in the minimum test
    sequence).
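The two partitions can be sketched against a hypothetical `Account` class; the operation names follow the slides, the implementation is an assumption:

```python
class Account:
    def __init__(self):
        self.balance = 0
        self._credit_limit = 500
    # state operations: change the state of the object
    def open(self): pass
    def setup(self): self.balance = 0
    def deposit(self, amt): self.balance += amt
    def withdraw(self, amt): self.balance -= amt
    def close(self): pass
    # nonstate operations: observe without changing state
    def summarize(self): return self.balance
    def creditLimit(self): return self._credit_limit

# Test case p1: exercises state-changing operations only
a = Account()
a.open(); a.setup(); a.deposit(100); a.deposit(50)
a.withdraw(30); a.withdraw(20); a.close()
assert a.balance == 100

# Test case p2: nonstate operations interleaved; they must not alter state
b = Account()
b.open(); b.setup(); b.deposit(100)
before = b.balance
b.summarize(); b.creditLimit()
assert b.balance == before
b.withdraw(40); b.close()
assert b.balance == 60
```

Partitioning the operations this way keeps each test sequence focused: p1 probes state transitions, p2 probes that observers really are side-effect free.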
attribute-based partitioning
 categorize and test operations based on the
  attributes that they use
 For the account class, the attributes balance
  and creditLimit can be used to define
  partitions. Operations are divided into three
  partitions:
(1) operations that use creditLimit,
(2) operations that modify creditLimit, and
(3) operations that do not use or modify
  creditLimit.
Test sequences are then designed for each
  partition.
category-based partitioning
 categorize and test operations based on the
  generic function each performs
 For example, operations in the account class
  can be categorized in
      initialization operations (open, setup),
      computational operations (deposit, withdraw),
      queries (balance, summarize, creditLimit), and
      termination operations (close).
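These categories can drive test-sequence generation; a sketch in which the category contents follow the slides while the sequencing rule (initialize, exercise one category, terminate) is an assumption:

```python
# Account operations grouped by the generic function each performs:
CATEGORIES = {
    "initialization": ["open", "setup"],
    "computation": ["deposit", "withdraw"],
    "query": ["balance", "summarize", "creditLimit"],
    "termination": ["close"],
}

def sequence_for(category):
    """One test sequence per partition: init ops, the category under
    test, then termination."""
    return (CATEGORIES["initialization"]
            + CATEGORIES[category]
            + CATEGORIES["termination"])

assert sequence_for("computation") == ["open", "setup", "deposit",
                                       "withdraw", "close"]
```

Each partition yields one focused sequence, so the number of test cases stays proportional to the number of categories rather than to the number of operation combinations.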
TESTING METHODS APPLICABLE AT
THE CLASS LEVEL
   Inter-class testing
      Multiple Class Testing
      Tests Derived from Behavior Models
   Multiple Class Testing
      the following sequence of steps to generate multiple class
       random test cases:
        For each client class, use the list of class operations to generate a
         series of n random test sequences. The operations will send
         messages to other server classes.
         For each message that is generated, determine the collaborator
          class and the corresponding operation in the server object.
         For each operation in the server object (that has been invoked by
          messages sent from the client object), determine the messages that
          it transmits.
         For each of the messages, determine the next level of operations
          that are invoked and incorporate these into the test sequence.
TESTS DERIVED FROM BEHAVIOR
MODELS
 The state transition diagram (STD) represents the
  dynamic behavior of a class.
 The STD for a class can be used to help derive a
  sequence of tests that will exercise the dynamic
  behavior of the class.
 The tests to be designed should achieve full state
  coverage; that is, the operation sequences should
  cause the class to transition through all
  allowable states.
 More test cases may be derived to make sure
  that all behaviors of the class have been
  adequately exercised.
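Deriving an all-state-coverage test from a state-transition table can be sketched as follows; the states and events here are hypothetical:

```python
# Hypothetical state-transition table: (state, event) -> next state
TRANSITIONS = {
    ("empty", "open"): "set_up",
    ("set_up", "deposit"): "working",
    ("working", "withdraw"): "working",
    ("working", "close"): "dead",
}

def drive(events, start="empty"):
    """Replay an operation sequence against the STD, recording the
    states visited along the way."""
    state, visited = start, {start}
    for event in events:
        state = TRANSITIONS[(state, event)]
        visited.add(state)
    return visited

# This single operation sequence achieves all-state coverage:
assert drive(["open", "deposit", "withdraw", "close"]) == {
    "empty", "set_up", "working", "dead"
}
```

A sequence that misses a state (or an event with no entry in the table, which raises `KeyError`) immediately shows up as a coverage gap or an illegal transition.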
Overview of Website Testing
 The testing concepts remain the same for web
  applications.
 But web-based systems and applications reside on a
  network and interoperate with various operating systems,
  browsers, interface devices (mobile phones, computers),
  hardware platforms, and communication protocols.
 This is the reason why there is a high probability of
  errors, and these errors are the challenge for web engineers.
Quality…
 The content of the web site or web application is
  evaluated at both the syntactical and the semantic level.
 At the syntactical level, text-based documents are
  assessed for spelling, punctuation, and grammar.
 At the semantic level, the correctness of the presented
  information, its consistency across content objects and
  associated objects, and the absence of ambiguity or
  redundancy are evaluated.
 The website’s usability is tested to make sure that each
  and every category of user can easily work with the
  interface.
Quality…
 Navigation of the website is tested to make sure that
  every navigation mechanism is working. Navigation
  errors can comprise dead links, improper links, and
  erroneous links.
 Performance testing of the website is carried out under
  various operating conditions, configurations, and loads
  in order to make sure that the system is responsive to
  user interaction.
 Web application compatibility is tested by executing the
  application on a variety of hosts with different
  configurations on both the client and server side.
 Security testing is carried out by assessing potential
  vulnerabilities and attempting to exploit each one.
Errors found in Web environment
 Some web application errors show up only on a specific
  interface, such as a particular browser or mobile phone.
 Certain errors result from incorrect design, or from
  inaccurate HTML or other programming-language code.
 Because a web application follows a client/server
  architecture, errors are very difficult to localize across
  the client, the server, or the network, i.e., the three-layer
  architecture.
 It might be difficult or impossible to reproduce an error
  outside the environment in which it was originally
  encountered.
Planning Software Testing
   The testing process for a project consists of three high-
    level tasks—
     test planning,
     test case design, and
     test execution.
   In general, in a project, testing commences with a test
    plan and terminates with successful execution of
    acceptance testing.
Test plan
 Test plan is a general document for the entire project
  that defines the scope, approach to be taken, and the
  schedule of testing, as well as identifies the test items
  for testing and the personnel responsible for the
  different activities of testing.
 The test planning can be done well before the actual
  testing commences and can be done in parallel with the
  coding and design activities.
 The inputs for forming the test plan are:
  (1) project plan,
  (2) requirements document, and
  (3) architecture or design document.
Test Plan Specification
   Test unit specification:
      A test unit is a set of one or more modules that form a software
       under test (SUT). The basic idea behind forming test units is to
       make sure that testing is being performed incrementally, with each
       increment including only a few aspects that need to be tested.
   Features to be tested:
      Features to be tested include all software features and combinations
       of features that should be tested. A software feature is a software
       characteristic specified or implied by the requirements or design
       documents. These may include functionality, performance, design
       constraints, and attributes.
   Approach for testing:
      The approach for testing specifies the overall approach to be
       followed in the current project. The techniques that will be used to
       judge the testing effort should also be specified. This is sometimes
       called the testing criterion or the criterion for evaluating the set of
       test cases used in testing.
Test Plan Specification
   Test deliverables:
     Testing  deliverables should be specified in the test plan
      before the actual testing begins. Deliverables could be a list
      of test cases that were used, detailed results of testing
      including the list of defects found, test summary report,
      and data about the code coverage
   Schedule and task allocation:
     This schedule should be consistent with the overall project
      schedule. The detailed plan may list all the testing tasks
      and allocate them to test resources who are responsible for
      performing them.
Test Case Execution
 With the specification of test cases, the next step in the
  testing process is to execute them.
 This step is also not straightforward.
 The test case specifications only specify the set of test
  cases for the unit to be tested. However, executing the
  test cases may require construction of driver modules or
  stubs.
 It may also require modules to set up the environment
  as stated in the test plan and test case specifications.
  Only after all these are ready can the test cases be
  executed.
Test Case Execution and Analysis, Defect
    logging and tracking
 During test case execution, defects are found. These defects
  are then fixed and testing is done again to verify the fix.
 To facilitate reporting and tracking of defects found during
  testing (and other quality control activities), defects found
  are often logged.
 Defect logging is particularly important in a large software
  project which may have hundreds or thousands of defects
  that are found by different people at different stages of the
  project.
 Often the person who fixes a defect is not the person who
  finds or reports the defect.
 Defect logging and tracking is considered one of the best
  practices for managing a project, and is followed by most
  software organizations.
Life cycle of a defect
1.     When a defect is found, it is logged in a defect control
      system, along with sufficient information about the defect.
      The defect is then in the “submitted” state.
3.    The job of fixing the defect is assigned to some person,
      who is generally the author of the document or code in
      which the defect is found.
4.    The defect then enters the “fixed” state.
5.    A defect that is fixed is still not considered fully done;
      the successful fixing of the defect must be verified.
6.    Once the defect fixing is verified, then the defect can be
      marked as “closed.”
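The life cycle above can be sketched as a small state machine; the state names follow the slides, while the reopen transition (when verification fails) is an assumption:

```python
# Allowed transitions of a defect through its life cycle:
ALLOWED = {
    "submitted": {"fixed"},
    "fixed": {"closed", "submitted"},  # back to submitted if verification fails
    "closed": set(),
}

class Defect:
    def __init__(self, description):
        self.description = description    # sufficient information about it
        self.state = "submitted"          # logged in the defect control system

    def move_to(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

d = Defect("balance not updated after withdraw")
d.move_to("fixed")    # assigned to, and fixed by, the author of the code
d.move_to("closed")   # the fix has been verified
assert d.state == "closed"
```

Encoding the allowed transitions explicitly is what lets a defect-tracking tool reject illegal moves, such as closing a defect whose fix was never verified.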
References
1.   Software Testing – Concepts & Practices, Narosa.
2.   Roger S. Pressman, Software Engineering – A Practitioner’s
     Approach, 5th Edition, McGraw-Hill International.
Thank You!
More Related Content

What's hot

Software Testing Fundamentals
Software Testing FundamentalsSoftware Testing Fundamentals
Software Testing FundamentalsChankey Pathak
 
Manual testing concepts course 1
Manual testing concepts course 1Manual testing concepts course 1
Manual testing concepts course 1Raghu Kiran
 
Chapter 13 software testing strategies
Chapter 13 software testing strategiesChapter 13 software testing strategies
Chapter 13 software testing strategiesSHREEHARI WADAWADAGI
 
functional testing
functional testing functional testing
functional testing bharathanche
 
Software testing & Quality Assurance
Software testing & Quality Assurance Software testing & Quality Assurance
Software testing & Quality Assurance Webtech Learning
 
Basic Guide to Manual Testing
Basic Guide to Manual TestingBasic Guide to Manual Testing
Basic Guide to Manual TestingHiral Gosani
 
Object oriented testing
Object oriented testingObject oriented testing
Object oriented testingHaris Jamil
 
Testing concepts ppt
Testing concepts pptTesting concepts ppt
Testing concepts pptRathna Priya
 
Black Box Testing
Black Box TestingBlack Box Testing
Black Box TestingTestbytes
 
Software Testing 101
Software Testing 101Software Testing 101
Software Testing 101QA Hannah
 
Types of software testing
Types of software testingTypes of software testing
Types of software testingPrachi Sasankar
 
What is Integration Testing? | Edureka
What is Integration Testing? | EdurekaWhat is Integration Testing? | Edureka
What is Integration Testing? | EdurekaEdureka!
 

What's hot (20)

Software Testing Fundamentals
Software Testing FundamentalsSoftware Testing Fundamentals
Software Testing Fundamentals
 
Software testing
Software testingSoftware testing
Software testing
 
Manual testing concepts course 1
Manual testing concepts course 1Manual testing concepts course 1
Manual testing concepts course 1
 
Chapter 13 software testing strategies
Chapter 13 software testing strategiesChapter 13 software testing strategies
Chapter 13 software testing strategies
 
Black box software testing
Black box software testingBlack box software testing
Black box software testing
 
Test Levels & Techniques
Test Levels & TechniquesTest Levels & Techniques
Test Levels & Techniques
 
functional testing
functional testing functional testing
functional testing
 
SOFTWARE TESTING
SOFTWARE TESTINGSOFTWARE TESTING
SOFTWARE TESTING
 
Software testing & Quality Assurance
Software testing & Quality Assurance Software testing & Quality Assurance
Software testing & Quality Assurance
 
Manual testing ppt
Manual testing pptManual testing ppt
Manual testing ppt
 
STLC
STLCSTLC
STLC
 
Basic Guide to Manual Testing
Basic Guide to Manual TestingBasic Guide to Manual Testing
Basic Guide to Manual Testing
 
Object oriented testing
Object oriented testingObject oriented testing
Object oriented testing
 
Testing concepts ppt
Testing concepts pptTesting concepts ppt
Testing concepts ppt
 
Black Box Testing
Black Box TestingBlack Box Testing
Black Box Testing
 
Testing
TestingTesting
Testing
 
Software Testing 101
Software Testing 101Software Testing 101
Software Testing 101
 
Types of software testing
Types of software testingTypes of software testing
Types of software testing
 
Testing methodology
Testing methodologyTesting methodology
Testing methodology
 
What is Integration Testing? | Edureka
What is Integration Testing? | EdurekaWhat is Integration Testing? | Edureka
What is Integration Testing? | Edureka
 

Viewers also liked

Chapter 1 introduction
Chapter 1 introductionChapter 1 introduction
Chapter 1 introductionPiyush Gogia
 
What is Loadrunner ?
What is Loadrunner ?What is Loadrunner ?
What is Loadrunner ?Guru99
 
WHITE BOX & BLACK BOX TESTING IN DATABASE
WHITE BOX & BLACK BOXTESTING IN DATABASEWHITE BOX & BLACK BOXTESTING IN DATABASE
WHITE BOX & BLACK BOX TESTING IN DATABASESalman Memon
 
Verification and Validation in Software Engineering SE19
Verification and Validation in Software Engineering SE19Verification and Validation in Software Engineering SE19
Verification and Validation in Software Engineering SE19koolkampus
 
Complex System Engineering
Complex System EngineeringComplex System Engineering
Complex System EngineeringEmmanuel Fuchs
 
Verifcation and Validation
Verifcation and ValidationVerifcation and Validation
Verifcation and ValidationSaggitariusArrow
 
9-Software Verification and Validation (Object Oriented Software Engineering ...
9-Software Verification and Validation (Object Oriented Software Engineering ...9-Software Verification and Validation (Object Oriented Software Engineering ...
9-Software Verification and Validation (Object Oriented Software Engineering ...Hafiz Ammar Siddiqui
 
Lou wheatcraft vv
Lou wheatcraft vvLou wheatcraft vv
Lou wheatcraft vvNASAPMC
 
Software Testing
Software TestingSoftware Testing
Software TestingKiran Kumar
 
Poole.eric
Poole.ericPoole.eric
Poole.ericNASAPMC
 

Viewers also liked (13)

Chapter 1 introduction
Chapter 1 introductionChapter 1 introduction
Chapter 1 introduction
 
What is Loadrunner ?
What is Loadrunner ?What is Loadrunner ?
What is Loadrunner ?
 
WHITE BOX & BLACK BOX TESTING IN DATABASE
WHITE BOX & BLACK BOXTESTING IN DATABASEWHITE BOX & BLACK BOXTESTING IN DATABASE
WHITE BOX & BLACK BOX TESTING IN DATABASE
 
Slides chapters 13-14
Slides chapters 13-14Slides chapters 13-14
Slides chapters 13-14
 
White box ppt
White box pptWhite box ppt
White box ppt
 
Software Verification and Validation
Software Verification and Validation Software Verification and Validation
Software Verification and Validation
 
Verification and Validation in Software Engineering SE19
Verification and Validation in Software Engineering SE19Verification and Validation in Software Engineering SE19
Verification and Validation in Software Engineering SE19
 
Complex System Engineering
Complex System EngineeringComplex System Engineering
Complex System Engineering
 
Verifcation and Validation
Verifcation and ValidationVerifcation and Validation
Verifcation and Validation
 
9-Software Verification and Validation (Object Oriented Software Engineering ...
9-Software Verification and Validation (Object Oriented Software Engineering ...9-Software Verification and Validation (Object Oriented Software Engineering ...
9-Software Verification and Validation (Object Oriented Software Engineering ...
 
Lou wheatcraft vv
Lou wheatcraft vvLou wheatcraft vv
Lou wheatcraft vv
 
Software Testing
Software TestingSoftware Testing
Software Testing
 
Poole.eric
Poole.ericPoole.eric
Poole.eric
 

Similar to Software Testing

Software testing
Software testingSoftware testing
Software testingAshu Bansal
 
Chapter 9 Testing Strategies.ppt
Chapter 9 Testing Strategies.pptChapter 9 Testing Strategies.ppt
Chapter 9 Testing Strategies.pptVijayaPratapReddyM
 
Software Testing Strategies ,Validation Testing and System Testing.
Software Testing Strategies ,Validation Testing and System Testing.Software Testing Strategies ,Validation Testing and System Testing.
Software Testing Strategies ,Validation Testing and System Testing.Tanzeem Aslam
 
Ch 2 Apraoaches Of Software Testing
Ch 2 Apraoaches Of Software Testing Ch 2 Apraoaches Of Software Testing
Ch 2 Apraoaches Of Software Testing Prof .Pragati Khade
 
Testing strategies in Software Engineering
Testing strategies in Software EngineeringTesting strategies in Software Engineering
Testing strategies in Software EngineeringMuhammadTalha436
 
software testing strategies
software testing strategiessoftware testing strategies
software testing strategiesHemanth Gajula
 
Software Engineering unit 4
Software Engineering unit 4Software Engineering unit 4
Software Engineering unit 4Abhimanyu Mishra
 
How to Make the Most of Regression and Unit Testing.pdf
How to Make the Most of Regression and Unit Testing.pdfHow to Make the Most of Regression and Unit Testing.pdf
How to Make the Most of Regression and Unit Testing.pdfAbhay Kumar
 
Software testing ppt
Software testing pptSoftware testing ppt
Software testing pptSavyasachi14
 
What is integration testing
What is integration testingWhat is integration testing
What is integration testingTestingXperts
 
Software Testing.pptx
Software Testing.pptxSoftware Testing.pptx
Software Testing.pptxsonalshitole
 
softwaretesting-140721025833-phpapp02.pdf
softwaretesting-140721025833-phpapp02.pdfsoftwaretesting-140721025833-phpapp02.pdf
softwaretesting-140721025833-phpapp02.pdfSHAMSHADHUSAIN9
 
Software testing
Software testingSoftware testing
Software testingEng Ibrahem
 
Software Testing
Software TestingSoftware Testing
Software TestingSengu Msc
 

Similar to Software Testing (20)

Software testing
Software testingSoftware testing
Software testing
 
Chapter 9 Testing Strategies.ppt
Chapter 9 Testing Strategies.pptChapter 9 Testing Strategies.ppt
Chapter 9 Testing Strategies.ppt
 
Software Testing Strategies ,Validation Testing and System Testing.
Software Testing Strategies ,Validation Testing and System Testing.Software Testing Strategies ,Validation Testing and System Testing.
Software Testing Strategies ,Validation Testing and System Testing.
 
Software testing
Software testingSoftware testing
Software testing
 
Testing
Testing Testing
Testing
 
Ch 2 Apraoaches Of Software Testing
Ch 2 Apraoaches Of Software Testing Ch 2 Apraoaches Of Software Testing
Ch 2 Apraoaches Of Software Testing
 
Testing strategies in Software Engineering
Testing strategies in Software EngineeringTesting strategies in Software Engineering
Testing strategies in Software Engineering
 
Testing ppt
Testing pptTesting ppt
Testing ppt
 
software testing strategies
software testing strategiessoftware testing strategies
software testing strategies
 
Software Engineering unit 4
Software Engineering unit 4Software Engineering unit 4
Software Engineering unit 4
 
How to Make the Most of Regression and Unit Testing.pdf
How to Make the Most of Regression and Unit Testing.pdfHow to Make the Most of Regression and Unit Testing.pdf
How to Make the Most of Regression and Unit Testing.pdf
 
software engineering
software engineeringsoftware engineering
software engineering
 
Software testing ppt
Software testing pptSoftware testing ppt
Software testing ppt
 
What is integration testing
What is integration testingWhat is integration testing
What is integration testing
 
Software Testing.pptx
Software Testing.pptxSoftware Testing.pptx
Software Testing.pptx
 
softwaretesting-140721025833-phpapp02.pdf
softwaretesting-140721025833-phpapp02.pdfsoftwaretesting-140721025833-phpapp02.pdf
softwaretesting-140721025833-phpapp02.pdf
 
Software testing
Software testingSoftware testing
Software testing
 
08 fse verification
08 fse verification08 fse verification
08 fse verification
 
Testing strategies
Testing strategiesTesting strategies
Testing strategies
 
Software Testing
Software TestingSoftware Testing
Software Testing
 
Software Testing

SOFTWARE TESTING FUNDAMENTALS
 During software engineering activities, an engineer attempts to build software, whereas during testing activities the intent is to demolish the software.
 Hence testing is psychologically destructive rather than constructive.
 A good and successful test is one that uncovers an undiscovered error.
 All tests that are carried out should be traceable to customer requirements; at the same time, exhaustive testing is not possible.
TESTING OBJECTIVES
 Testing is a process of executing a program with the intent of finding an error.
 A good test case is one that has a high probability of finding an as-yet-undiscovered error.
 A successful test is one that uncovers an as-yet-undiscovered error.
COMMON TERMS:
 Error: the difference between the actual output of the software and the correct output; a mistake made by the programmer, or anything that stops the execution of the application.
 Fault: a condition that causes a system to fail in performing its required function.
 Bug: an application error that does not stop the application from running; execution continues, but wrong output is produced.
 Failure: occurs when a fault is executed; the inability of a system or component to perform its required functions within specified performance requirements.
TESTING PRINCIPLES
 "All tests should be traceable to customer requirements." For a customer, the most severe defects are the ones that cause the program to fail to meet its requirements.
 "Tests should be planned long before testing begins." All tests can be planned and designed before any code has been generated.
 "Testing should begin 'in the small' and progress toward testing 'in the large.'" The first tests planned and executed generally focus on individual components.
TESTING PRINCIPLES
 "As testing progresses, focus shifts to integrated clusters of components and then to the entire system."
 "Exhaustive testing is not possible." To test exhaustively, even a small program could take years.
 "To be most effective, testing should be conducted by an independent third party." The software engineer who constructs the software is not always the best person to test it, so a third party is deemed more effective.
SOFTWARE TESTING
 There are two major types of software testing:
 Black box testing
 White box testing / glass box testing
 Black box testing focuses on the input, output, and principal functions of a software module.
 White box (glass box) testing looks into the internal structure: statements, paths, and branches are checked for correct execution.
LEVELS OF SOFTWARE TESTING
 Software testing is carried out at different levels throughout the entire software development life cycle.
 In unit testing, test cases verify the implementation of the design of each software element.
 Integration testing traces the detailed design and architecture of the system. Hardware and software elements are combined and tested until the entire system has been integrated.
 Functional testing exercises the code with common input values for which the expected values are available.
 System testing checks the system against the system requirements.
 Acceptance testing determines whether test results satisfy the acceptance criteria of project stakeholders.
UNIT TESTING
 Unit testing focuses verification effort on the smallest unit of software design: the software component or module.
 Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module.
 The relative complexity of tests, and of the errors they uncover, is limited by the constrained scope established for unit testing.
 The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.
Unit Test Considerations
 The module interface is tested to ensure that information properly flows into and out of the program unit under test.
 The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution.
 Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.
 All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once.
 Finally, all error-handling paths are tested.
 The following types of errors are commonly found during unit testing:
 misunderstood or incorrect arithmetic precedence
 mixed-mode operations
 incorrect initialization
 precision inaccuracy
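The considerations above can be illustrated with a minimal sketch. The `clamp` function and its limits here are hypothetical, chosen only to show unit tests that exercise values inside the range, exactly on each boundary, just outside each boundary, and an error-handling path:

```python
# Minimal sketch, assuming a hypothetical clamp(value, low, high)
# function; written with Python's built-in unittest module.
import unittest

def clamp(value, low, high):
    """Restrict value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class ClampBoundaryTests(unittest.TestCase):
    def test_inside_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_at_boundaries(self):
        self.assertEqual(clamp(0, 0, 10), 0)    # exactly on the lower limit
        self.assertEqual(clamp(10, 0, 10), 10)  # exactly on the upper limit

    def test_just_outside_boundaries(self):
        self.assertEqual(clamp(-1, 0, 10), 0)   # below the range
        self.assertEqual(clamp(11, 0, 10), 10)  # above the range

    def test_error_handling_path(self):
        with self.assertRaises(ValueError):     # invalid boundary pair
            clamp(5, 10, 0)
```

Such a file would typically be run with `python -m unittest`, which discovers and executes every `test_*` method.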
Unit Testing Procedures
 Once the source-level code has been developed, reviewed, and verified for correspondence to the component-level design, unit test case design begins.
 Because a component is not a stand-alone program, driver and/or stub software must be developed for each unit test.
 In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the component (to be tested), and prints relevant results.
 Stubs serve to replace modules that are subordinate to (called by) the component to be tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.
 Drivers and stubs represent overhead: both are software that must be written (formal design is not commonly applied) but is not delivered with the final software product.
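A driver and a stub can be sketched as follows. All names here (`process_order`, `lookup_price_stub`, the prices) are hypothetical; the component under test would normally call a real pricing module, and the stub stands in for that subordinate module:

```python
# Sketch of a unit-test driver and a stub, with hypothetical names.

def lookup_price_stub(item_id):
    """Stub: mimics the subordinate pricing module's interface,
    prints verification of entry, and returns canned data."""
    print(f"stub entered: lookup_price({item_id!r})")
    return {"A1": 10.0, "B2": 25.5}.get(item_id, 0.0)

def process_order(item_id, quantity, lookup_price):
    """Component under test: computes an order total using a
    subordinate price-lookup function passed in as a dependency."""
    return lookup_price(item_id) * quantity

def driver():
    """Driver: a throwaway 'main program' that feeds test case data
    to the component and prints the results."""
    for item_id, qty, expected in [("A1", 3, 30.0), ("B2", 2, 51.0)]:
        result = process_order(item_id, qty, lookup_price_stub)
        status = "PASS" if result == expected else "FAIL"
        print(f"{status}: process_order({item_id!r}, {qty}) = {result}")

driver()
```

Neither `driver` nor `lookup_price_stub` ships with the product; they exist only so the component can be exercised in isolation.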
Integration Testing
 Units that otherwise seem to work fine individually start causing problems when integrated together.
 Data can be lost across an interface; one module can have an inadvertent, adverse effect on another; sub-functions, when combined, may not produce the desired major function; individually acceptable imprecision may be magnified to unacceptable levels; global data structures can present problems.
 Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.
Non-Incremental Integration or "Big Bang" Approach:
 Combines all the components at the same time and then tests the program as a whole. This approach produces a number of errors that are difficult to isolate and trace, because the whole program is now very vast; correcting such errors can seem to go on endlessly.
Incremental Integration:
 The program is constructed and tested in small increments, where errors are easier to isolate and correct, interfaces are more likely to be tested completely, and a systematic test approach may be applied.
 Top-down integration
 Bottom-up integration
Top-down Integration
 Top-down integration testing is an incremental approach to construction of the program structure.
 Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program).
 Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
Depth-first Integration and Breadth-first Integration
 Depth-first integration integrates all components on a major control path of the structure.
 Selection of a major path is somewhat arbitrary and depends on application-specific characteristics.
 For example, selecting the left-hand path, components M1, M2, and M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated.
 Then the central and right-hand control paths are built.
 Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally.
 Components M2, M3, and M4 would be integrated first; the next control level (M5, M6, and so on) follows.
The integration process is performed in a series of five steps:
 1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module.
 2. Depending on the integration approach selected (i.e., depth first or breadth first), subordinate stubs are replaced one at a time with actual components.
 3. Tests are conducted as each component is integrated.
 4. On completion of each set of tests, another stub is replaced with the real component.
 5. Regression testing may be conducted to ensure that new errors have not been introduced.
 The process continues from step 2 until the entire program structure is built.
Bottom-up Integration
 Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules, i.e., components at the lowest levels in the program structure.
 Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available, and the need for stubs is eliminated.
 A bottom-up integration strategy may be implemented with the following steps:
 1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
 2. A driver (a control program for testing) is written to coordinate test case input and output.
 3. The cluster is tested.
 4. Drivers are removed and clusters are combined, moving upward in the program structure.
 Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown as a dashed block). Components in clusters 1 and 2 are subordinate to Ma.
 Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma.
 Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth.
Regression Testing
 Each time a new module is added as part of integration testing, the software changes: new data flow paths are established, new I/O may occur, and new control logic is invoked.
 These changes may cause problems with functions that previously worked flawlessly.
 In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.
 Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.
The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
 A representative sample of tests that will exercise all software functions.
 Additional tests that focus on software functions that are likely to be affected by the change.
 Tests that focus on the software components that have been changed.
 As integration testing proceeds, the number of regression tests can grow quite large.
 Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions.
 It is impractical and inefficient to re-execute every test for every program function once a change has occurred.
Smoke Testing
 Smoke testing is an integration testing approach that is commonly used when "shrink-wrapped" software products are being developed. It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis.
 The smoke testing approach encompasses the following activities:
 1. Software components that have been translated into code are integrated into a "build." A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.
 2. A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be to uncover "show stopper" errors that have the highest likelihood of throwing the software project behind schedule.
 3. The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily. The integration approach may be top down or bottom up.
Smoke testing provides a number of benefits when it is applied on complex, time-critical software engineering projects:
 Integration risk is minimized. Because smoke tests are conducted daily, incompatibilities and other show-stopper errors are uncovered early, thereby reducing the likelihood of serious schedule impact when errors are uncovered.
 The quality of the end product is improved. Because the approach is construction (integration) oriented, smoke testing is likely to uncover both functional errors and architectural and component-level design defects. If these defects are corrected early, better product quality will result.
 Error diagnosis and correction are simplified. Like all integration testing approaches, errors uncovered during smoke testing are likely to be associated with "new software increments"; that is, the software that has just been added to the build(s) is a probable cause of a newly discovered error.
 Progress is easier to assess. With each passing day, more of the software has been integrated and more has been demonstrated to work. This improves team morale and gives managers a good indication that progress is being made.
Validation Testing / Acceptance Testing
 After integration testing, one may start with validation testing.
 Validation succeeds when the software functions in a manner that can be reasonably expected by the customer.
 Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements.
 This is to ensure that all functional requirements are satisfied, all behavioral characteristics are achieved, all performance requirements are attained, documentation is correct, and other requirements are met.
 It is virtually impossible for a software developer to foresee how the customer will really use a program.
 When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements.
 Conducted by the end user rather than software engineers, an acceptance test can range from an informal "test drive" to a planned and systematically executed series of tests.
Alpha and Beta Testing
 The alpha test is conducted at the developer's site by a customer.
 The software is used in a natural setting with the developer "looking over the shoulder" of the user and recording errors and usage problems.
 Alpha tests are conducted in a controlled environment.
 The beta test is conducted at one or more customer sites by the end user of the software.
 Unlike alpha testing, the developer is generally not present. Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled by the developer.
 The customer records all problems that are encountered during beta testing and reports these to the developer at regular intervals.
 As a result of problems reported during beta tests, software engineers make modifications and then prepare for release of the software product to the entire customer base.
SYSTEM TESTING
 The software is incorporated with other system elements (e.g., hardware, people, information), and a series of system integration and validation tests is conducted.
 System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system:
 Recovery testing
 Security testing
 Stress testing
 Performance testing
Recovery Testing
 Many computer-based systems must recover from faults and resume processing within a pre-specified time.
 Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.
 If recovery is automatic, then reinitialization, checkpointing, data recovery, and restart are evaluated for correctness.
Security Testing
 Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration.
 During security testing, the tester plays the role(s) of the individual who desires to penetrate the system.
 Anything goes! The tester may attempt to acquire passwords through external clerical means; may attack the system with custom software designed to break down any defenses that have been constructed; may overwhelm the system, thereby denying service to others; may purposely cause system errors, hoping to penetrate during recovery; or may browse through insecure data, hoping to find the key to system entry.
Stress Testing
 Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example:
 (1) Special tests may be designed that generate ten interrupts per second, when one or two is the average rate.
 (2) Input data rates may be increased by an order of magnitude to determine how input functions will respond.
 (3) Test cases that require maximum memory or other resources are executed.
 (4) Test cases that may cause thrashing in a virtual operating system are designed.
 (5) Test cases that may cause excessive hunting for disk-resident data are created.
 Essentially, the tester attempts to break the program.
 A variation of stress testing is a technique called sensitivity testing.
Performance Testing
 For real-time and embedded systems, software that provides the required function but does not conform to performance requirements is unacceptable.
 Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process.
 Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation.
WHITE BOX TESTING
 Focus: thoroughness (coverage). Every statement in the component is executed at least once.
 White box testing is also known as structural testing or code-based testing.
 The major objective of white box testing is to focus on the internal program structure and discover all internal program errors.
ADVANTAGES OF WHITE BOX TESTING
 Helps optimize the code.
 Helps remove extra lines of code.
 All possible code pathways can be tested, including error handling, resource dependencies, and additional internal code logic/flow.
 Enables the tester to find programming errors quickly.
 Encourages good-quality coding work and coding standards.
DISADVANTAGES OF WHITE BOX TESTING
 Knowledge of the code and internal structure is a prerequisite; a skilled tester is needed to carry out this type of testing, which increases the cost.
 It is impossible to look into every bit of code to find hidden errors.
 It is a very expensive technique.
 Requires knowledge of the target system, testing tools, and coding language.
 Requires specialized tools such as source code analyzers, debuggers, and fault injectors.
TRADITIONAL WHITE BOX TESTING
 White box testing is further divided into two types:
 BASIS PATH TESTING
 Flow graph
 Cyclomatic complexity
 Graph matrix
 CONTROL STRUCTURE TESTING
 Condition testing
 Data flow testing
 Loop testing
BASIS PATH TESTING
 Basis path testing is a white box testing technique that enables the designer to derive a logical complexity measure of a procedural design.
 Test cases based on basis path testing guarantee that every statement in the program is executed at least once.
FLOW GRAPH
 A flow graph depicts the logical control flow of a program using a simple notation.
 Every structural construct has a corresponding flow graph symbol.
FLOW GRAPH
 Each circle, called a flow graph node, represents one or more procedural statements.
 The arrows, called links or edges, represent flow of control.
 Areas bounded by edges are called regions. When counting regions, the area outside the graph is also counted as a region.
 Each node containing a condition is called a predicate node and has two or more edges leaving it.
CYCLOMATIC COMPLEXITY
 Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program.
 Cyclomatic complexity defines the number of independent paths in the basis set of a program.
 This number gives us an upper bound on the number of tests that must be conducted to ensure that all statements have been executed at least once.
 An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
 For example, a set of independent paths for the flow graph illustrated in the figure:
 path 1: 1-11
 path 2: 1-2-3-4-5-10-1-11
 path 3: 1-2-3-6-8-9-10-1-11
 path 4: 1-2-3-6-7-9-10-1-11
 Note that each new path introduces a new edge.
 Paths 1, 2, 3, and 4 constitute a "basis set" for the flow graph.
 A basis set is a set of paths through which every statement in the program is visited at least once. This set is not unique.
 Computing the cyclomatic complexity tells us how many paths are required in the basis set.
 Complexity is computed in one of three ways:
 The number of regions of the flow graph corresponds to the cyclomatic complexity.
 Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
 Cyclomatic complexity, V(G), for a flow graph G is also defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.
 For the example flow graph, the cyclomatic complexity can be computed using each of the algorithms just noted:
 The flow graph has four regions.
 V(G) = 11 edges - 9 nodes + 2 = 4.
 V(G) = 3 predicate nodes + 1 = 4.
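The edge-and-node formula is easy to mechanize. The sketch below computes V(G) = E - N + 2 from an edge list; the graph used is a hypothetical stand-in with the same counts as the slide's example (11 edges, 9 nodes, 3 predicate nodes), not the slide's actual figure:

```python
# Sketch: computing cyclomatic complexity V(G) = E - N + 2 from an
# edge list. The graph below is illustrative only.
def cyclomatic_complexity(edges):
    """V(G) = E - N + 2, where N is the set of nodes appearing in edges."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# 9 nodes and 11 edges; nodes 2, 3, and 6 each have two out-edges,
# so the P + 1 formula also gives 3 + 1 = 4.
flow_graph = [
    (1, 2), (2, 3), (2, 9), (3, 4), (3, 6), (4, 8),
    (6, 5), (6, 7), (5, 8), (7, 8), (8, 9),
]
print(cyclomatic_complexity(flow_graph))  # 11 - 9 + 2 = 4
```

Four basis paths would therefore suffice to execute every statement of the corresponding program at least once.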
GRAPH MATRICES
 To develop a software tool that assists in basis path testing, a data structure called a graph matrix can be quite useful.
 A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the number of nodes in the flow graph. Each row and column corresponds to an identified node, and matrix entries correspond to connections (edges) between nodes.
 The graph matrix is nothing more than a tabular representation of a flow graph.
 Each row with two or more entries represents a predicate node.
 Therefore, performing the arithmetic shown to the right of the connection matrix provides us with still another method for determining cyclomatic complexity.
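That arithmetic (for each non-empty row, take the number of entries minus one, sum, and add 1) can be sketched directly. The 5-node connection matrix below is hypothetical, built so that the result can be cross-checked against V(G) = E - N + 2 = 6 - 5 + 2 = 3:

```python
# Sketch of the connection-matrix arithmetic for cyclomatic complexity:
# sum (entries - 1) over each non-empty row, then add 1.
def complexity_from_matrix(matrix):
    total = 0
    for row in matrix:
        connections = sum(row)
        if connections > 0:
            total += connections - 1
    return total + 1

# Hypothetical flow graph: 1->2, 1->3, 2->4, 3->4, 4->2, 4->5.
# Rows 1 and 4 have two entries each (predicate nodes); node 5 is the exit.
connection_matrix = [
    [0, 1, 1, 0, 0],  # node 1
    [0, 0, 0, 1, 0],  # node 2
    [0, 0, 0, 1, 0],  # node 3
    [0, 1, 0, 0, 1],  # node 4
    [0, 0, 0, 0, 0],  # node 5 (exit, no out-edges)
]
print(complexity_from_matrix(connection_matrix))  # (2-1) + (2-1) + 1 = 3
```

The two predicate rows each contribute 1, matching the P + 1 formula (2 predicate nodes + 1 = 3).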
CONTROL STRUCTURE TESTING
 Although basis path testing is simple and highly effective, it is not sufficient in itself.
 Other variations on control structure testing improve the quality of white-box testing.
 Control structure testing exercises control over the order in which the instructions in a program are executed.
 One of the major objectives of control structure testing is the selection of test cases to meet various criteria for covering the code.
 Test cases are derived to exercise control over the order of execution of the code.
  • 52.  Coverage based testing works by choosing test cases according to well defined coverage criteria.  The more common coverage criteria are the following:  Statement Coverage or Node Coverage:  Every statement of the program should be exercised at least once.  Branch Coverage or Decision Coverage:  Every possible alternative in a branch of the program should be exercised at least once.  Condition Coverage  Each condition in a branch is made to evaluate to true and false at least once.  Decision/Condition Coverage  Each condition in a branch is made to evaluate to both true and false and each branch is made to evaluate to both true and false.
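The coverage criteria above can be illustrated with a small hypothetical function containing one branch with a compound condition.

```python
def classify(a, b):
    # Branch with a compound condition: 'a > 0 or b > 0'.
    if a > 0 or b > 0:
        return "positive"
    return "non-positive"

# Statement coverage: both return statements must execute at least once.
assert classify(1, 0) == "positive"
assert classify(0, 0) == "non-positive"
# These two inputs also take both branch outcomes (branch coverage).

# Condition coverage additionally requires each simple condition
# (a > 0, b > 0) to evaluate to both True and False:
#   (1, 0) -> a>0 True  (b>0 not evaluated due to short-circuiting)
#   (0, 1) -> a>0 False, b>0 True
#   (0, 0) -> a>0 False, b>0 False
assert classify(0, 1) == "positive"
```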
  • 53. Conditional Testing  In this kind of testing, test cases are designed so that the logical conditions contained in a program can be exercised.  A simple condition can be defined as a single Boolean variable or a relational expression.  The condition testing method focuses on testing each condition existing in a program along the paths that execute it.  Advantages:  Measurement of the test coverage of a condition is simple.  The test coverage of conditions in a program guides the generation of additional tests for the program.
  • 54. Data Flow testing  Testing in which test cases are designed based on variable usage within the code.  Data-flow testing looks at the lifecycle of a particular variable.  The two major types of usage node are:  P-use: predicate use – the variable is used when making a decision (e.g. if b > 6).  C-use: computation use – the variable is used in a computation (for example, b = 3 + d – with respect to the variable d).
  • 55. Loop Testing  Four different classes of loops exist  Simple loops  Nested loops  Concatenated loops  Unstructured loops
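For a simple loop, test cases typically skip the loop entirely, make one pass, two passes, and a typical number of passes. A minimal sketch against a hypothetical loop under test:

```python
def total(items):
    # Hypothetical unit under test: sums a list of numbers in a loop.
    s = 0
    for x in items:
        s += x
    return s

# Simple-loop test cases:
assert total([]) == 0                 # zero iterations (skip the loop)
assert total([5]) == 5                # exactly one pass
assert total([2, 3]) == 5             # exactly two passes
assert total(list(range(10))) == 45   # a typical number of passes
```

Nested and concatenated loops combine such cases, working from the innermost loop outward.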
  • 56. Black Box Testing  Black-box testing, also called behavioral testing, focuses on the functional requirements of the software.  That is, black-box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program.  Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods.
  • 57. Black Box Testing  Black-box testing attempts to find errors in the following categories:  incorrect or missing functions,  interface errors,  errors in data structures or external data base access,  behavior or performance errors, and  initialization and termination errors.  Stages of Black Box Testing  Equivalence Partitioning  Boundary Value Analysis  Robustness Testing
  • 58. Advantages of Black Box Testing  More effective on larger units of code than glass box testing.  The tester needs no knowledge of the implementation, including specific programming languages.  The tester and programmer are independent of each other.  Tests are done from a user's point of view.  Helps to expose any ambiguities or inconsistencies in the specifications.  Test cases can be designed as soon as the specifications are complete.
  • 59. Disadvantages of Black Box Testing  Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever.  Without clear and concise specifications, test cases are hard to design.  There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
  • 60. Equivalence partitioning  Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived.  Test case design is based on an evaluation of equivalence classes for an input condition.  An equivalence class represents a set of valid or invalid states for input conditions.  From each equivalence class, test cases are selected so that the largest number of attributes of an equivalence class are exercised at once.
  • 61. If an input condition specifies a range, one valid and two invalid equivalence classes are defined  Input range: 1 – 10 Eq classes: {1..10}, {x < 1}, {x > 10}  If an input condition requires a specific value, one valid and two invalid equivalence classes are defined  Input value: 250 Eq classes: {250}, {x < 250}, {x > 250}  If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined  Input set: {-2.5, 7.3, 8.4} Eq classes: {-2.5, 7.3, 8.4}, {any other x}  If an input condition is a Boolean value, one valid and one invalid class are defined  Input: {true condition} Eq classes: {true condition}, {false condition} (examples from Pressman)
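The range example (1 – 10) translates directly into one test per equivalence class. The validator below is a hypothetical stand-in for the unit under test.

```python
def accept(value):
    # Hypothetical validator for the 'input range: 1 - 10' example.
    return 1 <= value <= 10

# One representative test case per equivalence class:
assert accept(5) is True     # valid class {1..10}
assert accept(0) is False    # invalid class {x < 1}
assert accept(11) is False   # invalid class {x > 10}
```

Three test cases cover all three classes; testing more values inside the same class is assumed to add little information.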
  • 62. Boundary Value Analysis  A greater number of errors occur at the boundaries of the input domain rather than in the "center"  Boundary value analysis leads to a selection of test cases that exercise bounding values.  Boundary value analysis is a test case design method that complements equivalence partitioning  It selects test cases at the edges of a class  It derives test cases from both the input domain and output domain
  • 63. Guidelines for Boundary Value Analysis 1. If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b as well as values just above and just below a and b. 2. If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and just below the minimum and maximum are also tested. 3. Apply guidelines 1 and 2 to output conditions; produce output that reflects the minimum and the maximum values expected; also test the values just below and just above. 4. If internal program data structures have prescribed boundaries (e.g., an array), design a test case to exercise the data structure at its minimum and maximum boundaries.
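Applying guideline 1 to the same hypothetical 1 – 10 range validator gives tests at a, b, and the values just below and just above each bound:

```python
def accept(value):
    # Same hypothetical validator: range bounded by a = 1 and b = 10.
    return 1 <= value <= 10

# Boundary value analysis: a, b, and values just outside/inside them.
cases = [(0, False),   # just below a
         (1, True),    # a itself
         (2, True),    # just above a
         (9, True),    # just below b
         (10, True),   # b itself
         (11, False)]  # just above b
for value, expected in cases:
    assert accept(value) is expected
```

Note how these cases concentrate on the edges, where equivalence partitioning alone would have picked only one interior representative.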
  • 64. Robustness Testing  Robustness is the capability of software to maintain acceptable behavior; the standard of acceptability is known as the robustness requirement.  Robustness testing checks the appropriate handling of the following:  Invalid input given by the user or any other interfacing application.  Hardware component failure.  Software component failure.  External system failure.
  • 65. Cause-effect graph  This technique helps in identifying combinations of input conditions in an organized manner, so that the number of test cases does not become unmanageably large.  It begins with identifying the 'causes' and 'effects' of the system under test.  The inputs (causes) and outputs (effects) are represented as nodes of a cause-effect graph.  In such a graph, a number of intermediate nodes link causes and effects to form a logical expression.
  • 66. A Simple Cause-Effect Graphing Example I have a requirement that says: “If A OR B, then C.”  The following rules hold for this requirement:  If A is true and B is true, then C is true.  If A is true and B is false, then C is true.  If A is false and B is true, then C is true.  If A is false and B is false, then C is false.
  • 67. The cause-effect graph is then converted into a decision table or “truth table” representing the logical relationships between the causes and effects.
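The "if A OR B, then C" example converts into a decision table by enumerating every combination of the causes and recording the effect:

```python
from itertools import product

def effect_c(a, b):
    # The requirement "if A OR B, then C" expressed as a predicate.
    return a or b

# Decision table ("truth table"): one row per combination of causes.
table = [(a, b, effect_c(a, b))
         for a, b in product([True, False], repeat=2)]
assert table == [(True, True, True),
                 (True, False, True),
                 (False, True, True),
                 (False, False, False)]
```

Each row of the table becomes one test case; for larger graphs, the intermediate nodes prune combinations that cannot occur together.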
  • 68. Static Testing  Static testing is carried out without executing the program.  At different levels, static testing may expose faulty programming constructs, such as:  Variables used prior to initialization.  Variables declared but never used.  Variables assigned twice but never used between the assignments.  Undeclared variables.  Interface faults.  Parameter type and number mismatches, and uncalled functions and procedures.
  • 69. Stages of Static Testing 1. Control Flow Analysis: checks loops with multiple entry or exit points. 2. Data Use Analysis: checks for uninitialized variables, variables assigned more than once without intervening use, and variables that are declared but never used. 3. Interface Analysis: checks the consistency of procedure declarations and their use. 4. Information Analysis: identifies the dependencies of output variables and provides information for code inspection or review. 5. Path Analysis: identifies paths through the program and sets out the statements executed in each path.
  • 70. Principles of Static Testing  Review each and every coding standard violation.  A code standard can be a general good practice or a self-imposed company standard.  Review the complexity of the code.  Inspect and remove unreachable code, unreferenced procedures, and unused variables.  Report every type of data flow anomaly:  A variable is used before initialization.  A value is assigned to a variable, which then goes out of scope without being used.  A value is assigned to a variable and then reassigned without being used in between.  Parameter mismatches.  Suspicious casts.
  • 71. OBJECT-ORIENTED TESTING STRATEGIES  The classical strategy for testing computer software begins with “testing in the small” and works outward toward “testing in the large.”  We begin with unit testing, then progress toward integration testing, and culminate with functional and system testing.  In conventional applications, unit testing focuses on the smallest compilable program unit—the subprogram (e.g., module, subroutine, procedure, component). Once each of these units has been tested individually, it is integrated into a program structure while a series of regression tests are run to uncover errors due to interfacing between the modules and side effects caused by the addition of new units.  Finally, the system as a whole is tested to ensure that errors in requirements are uncovered.
  • 72. Unit Testing in the OO Context  In the OO context, the smallest testable unit is the class or an instance of the class. A class encapsulates attributes (variables) and behavior (methods) into one unit.  A class may contain many methods, and the same method may be a part of many classes. So testing a particular method in isolation is not possible; it has to be tested as part of the class.  E.g., an operation X is inherited from a superclass by many subclasses, and each subclass makes its own changes to operation X. Hence we need to check X for each subclass rather than checking it only for the superclass.  Class testing for OO software is the equivalent of unit testing for conventional software.  Conventional unit testing tends to focus on the algorithm, whereas OO unit testing lays emphasis on a module and the data flow across that module.
  • 73. Integration Testing in the OO Context  Because object-oriented software does not have a hierarchical control structure, conventional top-down and bottom-up integration strategies have little meaning.  There are two different strategies for integration testing of OO systems.  The first, thread-based testing, integrates the set of classes required to respond to one input or event for the system. Each thread is integrated and tested individually. Regression testing is applied to ensure that no side effects occur.  The second integration approach, use-based testing, begins construction of the system by testing those classes (called independent classes) that use very few server classes. Next, the layer of classes, called dependent classes, that use the independent classes is tested. This sequence of testing layers of dependent classes continues until the entire system is constructed. Unlike conventional integration, the use of drivers and stubs is generally avoided.  Cluster testing is one step in the integration testing of OO software. Here, a cluster of collaborating classes is exercised by designing test cases that attempt to uncover errors in the collaborations.
  • 74. Validation Testing in an OO Context  The validation of OO software focuses on user-visible actions and user-recognizable output from the system.  For validation tests, the use-cases drawn by the tester are of great help.  The use-case provides a scenario that has a high likelihood of uncovering errors in user interaction requirements.  Conventional black-box testing methods can be used to drive validation tests. Test cases may also be derived from the object-behavior model and from the event flow diagram created as part of OOA.
  • 75. TEST CASE DESIGN FOR OO SOFTWARE The following are the points to be considered for OO test case design.  Each test case should be uniquely identified and explicitly associated with the class to be tested.  The purpose of the test should be stated.  A list of testing steps should be developed for each test and should contain:  A list of specified states for the object that is to be tested.  A list of messages and operations that will be exercised as a consequence of the test.  A list of exceptions that may occur as the object is tested.  A list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test).  Supplementary information that will aid in understanding or implementing the test
  • 76. The Test Case Design Implications of OO Concepts  The OO class is the target for test case design. Because attributes and operations are encapsulated, testing operations outside of the class is generally unproductive.  Encapsulation makes it difficult to obtain state information of a class, but some built-in operations may be provided to report the values of class attributes.  Inheritance may also lead to further challenges in testing, as each new context of usage requires retesting.
  • 77. Applicability of Conventional Test Case Design Methods  White-box testing methods can be applied to the operations defined for a class.  Basis path, loop testing, or data flow techniques can help to ensure that every statement in an operation has been tested.  The concise structure of many class operations causes some to argue that the effort applied to white-box testing might be better redirected to tests at a class level.  Black-box testing methods are as appropriate for OO systems as they are for conventional systems; use-cases can provide useful input in the design of black-box tests.
  • 78. Testing methods  Fault-Based Testing  The tester looks for plausible faults (i.e., aspects of the implementation of the system that may result in defects). To determine whether these faults exist, test cases are designed to exercise the design or code.  Because the product or system must conform to customer requirements, the preliminary planning required to perform fault-based testing begins with the analysis model.  Class Testing and the Class Hierarchy  Inheritance does not obviate the need for thorough testing of all derived classes. In fact, it can actually complicate the testing process.  Scenario-Based Test Design  Scenario-based testing concentrates on what the user does, not what the product does. This means capturing the tasks (via use-cases) that the user has to perform, then applying them and their variants as tests.
  • 79. TESTING METHODS APPLICABLE AT THE CLASS LEVEL  Random testing  identify operations applicable to a class  define constraints on their use  identify a minimum test sequence  an operation sequence that defines the minimum life history of the class (object)  generate a variety of random (but valid) test sequences  exercise other (more complex) class instance life histories
  • 80. TESTING METHODS APPLICABLE AT THE CLASS LEVEL  Partition Testing  reduces the number of test cases required to test a class in much the same way as equivalence partitioning for conventional software  state-based partitioning  attribute-based partitioning  category-based partitioning
  • 81. state-based partitioning  categorize and test operations based on their ability to change the state of a class.  considering the account class, state operations include deposit and withdraw, whereas nonstate operations include balance, summarize, and creditLimit.  Tests are designed in a way that exercises operations that change state and those that do not change state separately. Therefore,  Test case p1: open•setup•deposit•deposit•withdraw•withdraw•close  Test case p2: open•setup•deposit•summarize•creditLimit•withdraw•close  Test case p1 changes state, while test case p2 exercises operations that do not change state (other than those in the minimum test sequence).
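The account example can be sketched in code. The operation names follow the slide; the method bodies and the credit limit value are assumptions made only to let the two partition test sequences run.

```python
class Account:
    # Minimal sketch of the 'account' class from the example.
    def open(self): self.is_open = True
    def setup(self): self.balance = 0
    def deposit(self, amt): self.balance += amt    # state-changing
    def withdraw(self, amt): self.balance -= amt   # state-changing
    def summarize(self): return self.balance       # non-state
    def creditLimit(self): return 100              # non-state (assumed value)
    def close(self): self.is_open = False

# Test case p1: exercises the state-changing operations.
a = Account()
a.open(); a.setup(); a.deposit(10); a.deposit(20)
a.withdraw(5); a.withdraw(5); a.close()
assert a.balance == 20

# Test case p2: exercises non-state operations between the minimum
# open/setup ... withdraw/close sequence.
b = Account()
b.open(); b.setup(); b.deposit(10)
assert b.summarize() == 10 and b.creditLimit() == 100
b.withdraw(10); b.close()
assert b.balance == 0
```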
  • 82. attribute-based partitioning  categorize and test operations based on the attributes that they use  For the account class, the attributes balance and creditLimit can be used to define partitions. Operations are divided into three partitions: (1) operations that use creditLimit, (2) operations that modify creditLimit, and (3) operations that do not use or modify creditLimit. Test sequences are then designed for each partition.
  • 83. category-based partitioning  categorize and test operations based on the generic function each performs  For example, operations in the account class can be categorized as  initialization operations (open, setup),  computational operations (deposit, withdraw),  queries (balance, summarize, creditLimit), and  termination operations (close).
  • 84. TESTING METHODS APPLICABLE AT THE CLASS LEVEL  Inter-class testing  Multiple Class Testing  Tests Derived from Behavior Models  Multiple Class Testing  the following sequence of steps to generate multiple class random test cases:  For each client class, use the list of class operations to generate a series of n random test sequences. The operations will send messages to other server classes.  For each message that is generated, determine the collaborator class and the corresponding operation in the server object.  For each operation in the server object (that has been invoked by messages sent from the client object), determine the messages that it transmits.  For each of the messages, determine the next level of operations that are invoked and incorporate these into the test sequence.
  • 85. TESTS DERIVED FROM BEHAVIOR MODELS  The state transition diagram (STD) represents the dynamic behavior of a class.  The STD for a class can be used to help derive a sequence of tests that will exercise the dynamic behavior of the class.  The tests to be designed should achieve full state coverage. That is, the operation sequences should cause the class to transition through all allowable states.  More test cases can be derived to make sure that all behaviors of the class have been adequately exercised.
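Deriving state-coverage tests from an STD can be sketched as follows. The three-state machine below is a hypothetical example, not one from the slides.

```python
# Hypothetical STD as a transition table: (state, operation) -> next state.
TRANSITIONS = {
    ("empty", "setup"): "working",
    ("working", "deposit"): "working",
    ("working", "close"): "dead",
}

def run(sequence, start="empty"):
    # Drive the machine through an operation sequence, recording states.
    state, visited = start, [start]
    for op in sequence:
        state = TRANSITIONS[(state, op)]
        visited.append(state)
    return visited

# This operation sequence achieves full state coverage: the class
# transitions through all three allowable states.
visited = run(["setup", "deposit", "close"])
assert set(visited) == {"empty", "working", "dead"}
```

A test generator would emit one such sequence per uncovered state or transition until every state (or every transition) has been exercised.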
  • 86. Overview of Website Testing  The basic testing concepts remain the same for web applications and web testing.  But web-based systems and applications reside on a network and interoperate with various operating systems, browsers, interface devices (such as mobile phones and computers), hardware platforms, and communication protocols.  This is why there is a high probability of errors, and these errors are the challenges for web engineers.
  • 87. Quality…  Content of the web site or web application is evaluated at both the syntactic and semantic level.  At the syntactic level, text-based documents are assessed for spelling, punctuation, and grammar.  At the semantic level, the correctness of the presented information, its consistency across content objects and associated objects, and the absence of ambiguity or redundancy are evaluated.  The website's usability is tested to make sure that each category of user can easily work with the interface.
  • 88. Quality…  Navigation on the website is tested to make sure that every navigation mechanism works. Navigation errors can include dead links, improper links, and erroneous links.  Performance testing of the website is carried out under various operating conditions, configurations, and loads in order to make sure that the system is responsive to user interaction.  Web application compatibility is tested by executing it on various kinds of hosts having different configurations on both the client and server side.  Security testing is carried out by assessing potential vulnerabilities and attempting to exploit each one.
  • 89. Errors found in the Web environment  Some web application errors appear only on a specific interface, mainly on a specific browser or mobile phone.  Certain errors result from incorrect design or inaccurate HTML (or other programming language) code.  Because a web application follows a client/server architecture, it is very difficult to localize errors across the client, the server, or the network, i.e., the three-layer architecture.  It might be difficult or impossible to reproduce an error outside the environment in which it was originally encountered.
  • 90. Planning Software Testing  The testing process for a project consists of three high- level tasks—  test planning,  test case design, and  test execution.  In general, in a project, testing commences with a test plan and terminates with successful execution of acceptance testing.
  • 91. Test plan  Test plan is a general document for the entire project that defines the scope, approach to be taken, and the schedule of testing, as well as identifies the test items for testing and the personnel responsible for the different activities of testing.  The test planning can be done well before the actual testing commences and can be done in parallel with the coding and design activities.  The inputs for forming the test plan are: (1) project plan, (2) requirements document, and (3) architecture or design document.
  • 92. Test Plan Specification  Test unit specification:  A test unit is a set of one or more modules that form a software under test (SUT). The basic idea behind forming test units is to make sure that testing is being performed incrementally, with each increment including only a few aspects that need to be tested.  Features to be tested:  Features to be tested include all software features and combinations of features that should be tested. A software feature is a software characteristic specified or implied by the requirements or design documents. These may include functionality, performance, design constraints, and attributes.  Approach for testing:  The approach for testing specifies the overall approach to be followed in the current project. The techniques that will be used to judge the testing effort should also be specified. This is sometimes called the testing criterion or the criterion for evaluating the set of test cases used in testing.
  • 93. Test Plan Specification  Test deliverables:  Testing deliverables should be specified in the test plan before the actual testing begins. Deliverables could be a list of test cases that were used, detailed results of testing including the list of defects found, test summary report, and data about the code coverage  Schedule and task allocation:  This schedule should be consistent with the overall project schedule. The detailed plan may list all the testing tasks and allocate them to test resources who are responsible for performing them.
  • 94. Test Case Execution  With the specification of test cases, the next step in the testing process is to execute them.  This step is also not straightforward.  The test case specifications only specify the set of test cases for the unit to be tested. However, executing the test cases may require construction of driver modules or stubs.  It may also require modules to set up the environment as stated in the test plan and test case specifications. Only after all these are ready can the test cases be executed.
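A driver and a stub can be sketched in a few lines. The unit under test and the tax service are hypothetical names invented for this illustration; the stub stands in for a dependency that is not yet built.

```python
# Hypothetical unit under test: depends on an external tax service.
def net_price(gross, tax_service):
    return gross + tax_service(gross)

def tax_stub(gross):
    # Stub: replaces the real (unbuilt) tax module with a canned
    # 10% answer so the unit can be exercised in isolation.
    return gross * 0.1

# Driver: invokes the unit with the stub and checks the result.
assert net_price(100, tax_stub) == 110.0
```

Only once such scaffolding and the environment described in the test plan are in place can the specified test cases actually be executed.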
  • 95. Test Case Execution and Analysis, Defect logging and tracking  During test case execution, defects are found. These defects are then fixed and testing is done again to verify the fix.  To facilitate reporting and tracking of defects found during testing (and other quality control activities), defects found are often logged.  Defect logging is particularly important in a large software project which may have hundreds or thousands of defects that are found by different people at different stages of the project.  Often the person who fixes a defect is not the person who finds or reports the defect.  Defect logging and tracking is considered one of the best practices for managing a project, and is followed by most software organizations.
  • 96. Life cycle of a defect 1. When a defect is found, it is logged in a defect control system, along with sufficient information about the defect. 2. The defect is then in the “submitted” state. 3. The job of fixing the defect is assigned to some person, generally the author of the document or code in which the defect was found. 4. The defect then enters the “fixed” state. 5. A defect that is fixed is still not considered fully done; the successful fixing of the defect must be verified. 6. Once the defect fix is verified, the defect can be marked “closed.”
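The life cycle above can be sketched as a small state machine. The state names follow the slides; the class and its transition rules are an assumed implementation for illustration only.

```python
# Allowed transitions in the defect life cycle: submitted -> fixed ->
# closed; closing is only legal after the fix has been verified.
ALLOWED = {
    "submitted": {"fixed"},
    "fixed": {"closed"},
    "closed": set(),
}

class Defect:
    def __init__(self, description):
        self.description = description
        self.state = "submitted"  # logged in the defect control system

    def move_to(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

d = Defect("crash on empty input")
d.move_to("fixed")   # developer fixes the defect
d.move_to("closed")  # fix verified, defect closed
assert d.state == "closed"
```

A real defect tracker adds more states (e.g., rejected, reopened), but the same transition-table idea enforces the workflow.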
  • 97. References 1. Software Testing – Concepts & Practices, Narosa. 2. Software Engineering – A Practitioner's Approach, 5th Edition, McGraw-Hill International.