The document discusses concepts related to automated testing, including:
1) Automated testing scripts are developed and updated in sync with the cyclic development process of the application under test.
2) Automated testing is effective when the time to create, update, and analyze scripts across iterations is less than the time for manual testing.
3) Effective logging, test result modeling, and failure analysis are important for reducing the time spent understanding failures in automated tests.
2. Abstraction Level Limitations
[Diagram: levels of abstraction from requirement down to test execution (Requirement → Test Case → Test Script → Test Execution), contrasting the level accessible to a human QA with the level accessible to a robot when automating a manual process. Caption: Level of abstraction accessible to human and automated tests]
3. Cyclic Development Process
Development, execution, and results analysis of automation test scripts are driven by the cyclic development process of the application under test.
For each development iteration, the application must implement a set of requirements. The set may include new or altered requirements as well as already implemented requirements (the regression area).
[Diagram: an iterative development model]
4. Cyclic Development Process
Test scripts are tightly bound to the implementation of application requirements.
Automation results require attention only if there are test script failures. A test script failure means one of the following:
• Defect. The application implementation has changed and no longer meets the requirements.
• Test script out of date. The application implementation has changed and meets new or altered requirements; the test script requires an update.
• Environment. The failure is caused by the test environment rather than by the application or the scripts.
The results validation phase continues until application defects are fixed and test scripts are updated.
[Diagram: simplified development cycle including automation test script implementation, execution, and analysis]
5. Cyclic Development Process
Important points
• A test script does not detect defects by itself; it only shows areas of changed implementation.
• Automation results analysis cannot be detached from script updates and involves a person with development skills.
• The test script version is bound to the application version. Test scripts cannot be treated as a universal pack and invoked against different versions or branches of the application; each application branch or version should have its own scripts.
6. Automation Efficiency
Consider the time required by automation and by manual testing to validate the same functionality and find the same number of defects. Automation testing is effective if the following inequality holds:

T^A_cr / N + T^A_errval + T^A_upd < T^M_val

where
N is the number of iterations since script creation
T^A_cr is the time to create the automation scripts
T^A_errval is the average per-iteration time to understand why scripts failed
T^A_upd is the average per-iteration time to update the scripts
T^M_val is the average per-iteration time of manual validation

This time estimate also allows a Return On Investment (ROI) value to be calculated, once the cost of automation and manual testing time is taken into account.
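As an illustration only (the hours below are invented, not from the slides), the inequality can be rearranged to estimate the break-even iteration count, N > T^A_cr / (T^M_val - T^A_errval - T^A_upd):

```java
// Hypothetical numbers, in hours; only the inequality itself comes from this slide.
public class AutomationBreakEven {
    public static void main(String[] args) {
        double tCreate = 40.0; // T^A_cr: one-off time to create the scripts
        double tErrVal = 1.5;  // T^A_errval: average per-iteration failure analysis time
        double tUpdate = 2.0;  // T^A_upd: average per-iteration script update time
        double tManual = 6.0;  // T^M_val: average per-iteration manual validation time

        // Automation pays off once T^A_cr / N + T^A_errval + T^A_upd < T^M_val,
        // i.e. N > T^A_cr / (T^M_val - T^A_errval - T^A_upd).
        double breakEven = tCreate / (tManual - tErrVal - tUpdate);
        System.out.printf("Automation pays off after %.0f iterations%n", Math.ceil(breakEven));
    }
}
```

With these numbers, scripting pays for itself after 16 iterations; if the denominator is zero or negative, automation never breaks even for that functionality.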
7. Automation Efficiency
The cost of test script creation is amortized when the same functionality must be validated across many development iterations. Automated validation is not effective for functionality that needs to be checked only once, or for functionality that changes significantly between iterations.
Error validation time can be decreased with an effective results analysis strategy, which should be supported by the scripts. The following questions have to be answered as quickly as possible:
• What functionality does each error correspond to?
• Which scripts fail with the same error?
• What actions did the scripts perform?
• What state and dynamic data did the application have at the moment of script execution?
Update time can be decreased with:
• Don't Repeat Yourself (DRY) principles (raising the abstraction level) on the test script side, as in the sketch after this list
• techniques on the application side that reduce test script dependency on the application implementation (automation-friendly GUI toolkits, ids of GUI elements, etc.)
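A minimal sketch of the DRY idea on the script side, assuming a hypothetical login page with stable element ids; Selenium WebDriver (the successor of the Selenium RC driver mentioned later in the deck) is used for illustration:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical page object: test scripts depend on this abstraction rather than
// on concrete locators, so a GUI change has to be absorbed in only one place.
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Stable element ids (assumed to be provided by the application) reduce
    // the scripts' coupling to the GUI layout.
    public void loginAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-button")).click();
    }
}
```

Every script that needs a logged-in user calls loginAs once; when the login form changes, only this class is updated.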
8. Automation Script
An automation script is fully based on the general test case definition. A test oracle mechanism is used to determine whether the program has passed or failed a test. A complete oracle has three capabilities (sketched below):
• a generator, to provide predicted or expected results for each test
• a comparator, to compare predicted and obtained results
• an evaluator, to determine whether the comparison results are sufficiently close to be a pass
The application is viewed as a black box and is tested in terms of the outputs generated in response to predetermined inputs.
[Diagram: scheme of a black box]
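A minimal sketch of the three oracle capabilities, with an invented price calculation standing in for the real black box:

```java
import java.util.function.Function;

// Sketch of a complete oracle: generator, comparator, evaluator.
// The application under test is treated as a black box (a plain function here).
public class PriceOracle {
    // Generator: predicted or expected result for a given input.
    static double expectedPrice(int quantity) {
        return quantity * 9.99;
    }

    // Comparator: difference between predicted and obtained results.
    static double compare(double expected, double actual) {
        return Math.abs(expected - actual);
    }

    // Evaluator: is the comparison close enough to count as a pass?
    static boolean evaluate(double difference) {
        return difference < 0.01; // tolerance for floating-point rounding
    }

    public static void main(String[] args) {
        Function<Integer, Double> blackBox = q -> q * 9.99; // stand-in for the application
        int input = 3;
        double actual = blackBox.apply(input);
        boolean passed = evaluate(compare(expectedPrice(input), actual));
        System.out.println(passed ? "PASS" : "FAIL");
    }
}
```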
9. Automation Script
Failed automation test cases require human interaction to understand the failure reason. Ideally there should be no need to rerun failed test cases, watch the application under test, or repeat the tests manually: an automation test case should collect as much information as is needed to answer quickly what the problem was.
Logging automatically writes all actions performed by a script into a log file. It includes the state of the application at the moment of the test (e.g. user name, new advertisement id, page screenshot) and test script internals (e.g. stack trace, element locators). Logging is useful for detailed analysis of application and script behaviour; a minimal sketch follows.
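A minimal sketch of such script-side logging with java.util.logging; the step, ids, and messages are invented for the example:

```java
import java.util.logging.Logger;

// Hypothetical test step that records both application state and script internals.
public class LoggingStep {
    private static final Logger LOG = Logger.getLogger(LoggingStep.class.getName());

    void submitAdvertisement(String userName) {
        LOG.info("Submitting advertisement as user=" + userName); // application state
        try {
            // ... drive the UI here ...
            LOG.info("Created advertisement id=42, screenshot saved to step-3.png");
        } catch (RuntimeException e) {
            LOG.severe("Step failed at locator id=submit: " + e);  // script internals
            throw e; // keep the failure visible to the test runner
        }
    }
}
```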
A test result model is used for general analysis of a large number of failed test cases. A simple result model could consist of just 'pass' and 'fail' statuses. For a deep and effective understanding of failure reasons, extended concepts should be used, such as:
• error reason
• requirement link
• dependent functionality
• execution time
Extended result concepts make it possible to group test cases with the same failure reason, take blocked test cases into account, build a traceability matrix, etc. (a sketch follows).
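A minimal sketch of such an extended result model; the field names are invented for illustration:

```java
import java.time.Duration;

// Sketch of an extended test result; each field supports one analysis concept
// from the list above.
public record TestResult(
        String testCaseId,
        Status status,
        String errorReason,            // lets results with the same reason be grouped
        String requirementLink,        // enables a traceability matrix
        String dependentFunctionality, // marks functionality blocked by this failure
        Duration executionTime) {

    public enum Status { PASS, FAIL, BLOCKED }
}
```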
10. Script Design and Development
Ways of developing test scripts:
Record/Play
• Performed by QA; no development skills required
• No abstraction layer for reporting and code reuse
Visual Oriented Tools
• Performed by QA or QA Automation; minimum development skills
• Predefined visualized abstractions
• Predefined reporting model
Language Oriented Tools
• Performed by QA Automation, or QA and Developer co-operation
• Development skills required
• Extendable abstraction layer
• Extendable reporting model
11. QA Automation Evolution
[Diagram: QA automation evolution. On the programming side: xUnit frameworks, test-driven development, and continuous integration processes and tools. On the quality assurance side: from record/play through visual development to QA and developers co-operation. Together these enable automated testing at the unit, integration, and system levels, supported by GUI drivers (e.g. Selenium RC) and automation-friendly UI libraries.]
Editor's Notes
Logging and the results model can be used together: e.g. we can investigate the error reason of one test case from its log file and extrapolate it to all test cases that have the same reason.
By storing results and logs in a central database we can collect statistics and a history of changing application areas.
Basic results analysis use case (steps 2 and 3 are sketched after this list):
1. Get script execution results for the new build
2. Filter failed scripts
3. Group failed scripts by errors
4. Investigate the reason for each error group
5. File defects, if any
6. Update scripts, if required
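A hedged sketch of steps 2 and 3, filtering failed results and grouping them by error reason; it assumes the TestResult record sketched earlier:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ResultsAnalysis {
    // Steps 2-3 of the use case: keep only failed scripts, then group by error.
    static Map<String, List<TestResult>> failuresByError(List<TestResult> results) {
        return results.stream()
                .filter(r -> r.status() == TestResult.Status.FAIL)
                .collect(Collectors.groupingBy(TestResult::errorReason));
    }
}
```

Each map key is then one error group to investigate (step 4).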