1. Automated Test Case Generation and Execution from Models
Dharmalingam Ganesan, Mikael Lindvall, Christoph Schulze
Software Architecture and Embedded Systems Division
Fraunhofer Center for Experimental Software Engineering
College Park
Maryland
© 2012 Fraunhofer USA, Inc.
2. Fraunhofer Center – Maryland (FC-MD)
Applied Research and Tech Transfer
Affiliations
University of Maryland, College Park
Fraunhofer Germany
Close ties to NASA, FDA
Focus on Software Engineering
Contract R&D for industry, government
Clients/partners: Bosch, Biofortis, DoD, FDA, JHU,
JHU/APL, NASA, Optimal Solutions, etc.
3. Fraunhofer Center – Maryland (FC-MD)
Located at MSquare
4. Goal
Introduce an approach for test case generation
Demonstrate the applicability of the approach on
different types of systems:
Web-based systems
Systems with GUIs (e.g., Java Swing)
Software components with interfaces
After the presentation, we expect you to be able to:
Understand ideas of model-based testing
Develop models from requirements
5. Motivation
Bugs can lead to deaths, life-threatening situations, financial
loss, unsatisfied customers, etc.
Software testing takes at least 50% of development effort
For mission-critical systems, even up to 75%
Software testing is necessary to detect the presence
of bugs
6. Tester 1 – hands-on testing
while (true) {
tested_manually();
found_bugs();
reported_to_developers();
developers_returned_newVersion();
if(tired) {
break;
}
}
shipped_the_product();
customer_found_bugs();
customer_unhappy();
company_lost_business();
Cartoon Reference: H. Robinson. Intelligent Test Automation.
7. Tester 2 – testing with scripts
while (true) {
wrote_test_scripts();
found_bugs();
reported_to_developers();
developers_changed_code();
developers_returned_newVersion();
test_scripts_broke();
changed_test_scripts();
if (tired) {
break;
}
}
shipped_the_product();
customer_found_bugs();
customer_unhappy();
company_lost_business();
8. Tester 3 – monkey banging on a keyboard
while (true) {
monkey_randomly_press_keys();
found_bugs();
reported_to_developers();
developers_returned_newVersion();
manually_tested_missing_scenarios();
if (tired) {
break;
}
}
shipped_the_product();
customer_found_bugs();
customer_unhappy();
company_lost_business();
9. Tester 4 – model-based testing
developed_model_of_system();
while (true) {
generated_test_cases_from_model();
found_bugs();
reported_to_developers();
developers_returned_newVersion();
updated_the_model(); // if needed
if (fullModelCovered() && noBugsFound()) {
break;
}
}
shipped_the_product();
customer_happy();
10. MBT @ Fraunhofer CESE
At Fraunhofer, we have a long history of
model-based development and testing
NASA’s SARP program sponsors us in this work
We are developing the Fraunhofer Approach for
Software Testing (FAST) based on MBT
The FAST approach has been applied to several
NASA systems as well as other commercial
systems
11. Model-based Testing (MBT) – Idea
Develop a model of the system under test
The model contains actions and expected results
Automatically derive test cases from the model
Execute the test cases
Decide when to stop testing based on model
coverage!
12. Benefits
Allows all stakeholders to review test coverage
Generates plenty of ready-to-run test cases
Immediate return on investment
No costly editing of test cases
A software product that is very well tested
14. Terminology
Nodes of the model are called states
Edges of the model are called transitions
A test case is a path from start to exit
A random test is generated using a random walk from the
start state to the exit state
A test suite is a collection of test cases
Model coverage means that each transition is included in at
least one test case
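This terminology can be made concrete with a small sketch (hypothetical state and transition names invented for illustration; this is not the JUMBL API): the model maps each state to its outgoing transitions, and a seeded random walk from the start state to the exit state yields one test case.

```java
import java.util.*;

public class ModelWalker {
    // Hypothetical usage model: state -> (transition label -> target state).
    // "Exit" has no outgoing transitions, so it is simply absent.
    static final Map<String, Map<String, String>> MODEL = Map.of(
        "Start",   Map.of("enter application", "Home"),
        "Home",    Map.of("valid search", "Results", "empty search", "Results"),
        "Results", Map.of("back", "Home", "exit", "Exit")
    );

    // A random test case: a random walk from the start state to the exit
    // state, recorded as the sequence of transition labels taken.
    static List<String> randomWalk(long seed) {
        Random rnd = new Random(seed);
        List<String> testCase = new ArrayList<>();
        String state = "Start";
        while (!state.equals("Exit")) {
            List<String> labels = new ArrayList<>(MODEL.get(state).keySet());
            Collections.sort(labels); // fixed order so the seeded walk is reproducible
            String label = labels.get(rnd.nextInt(labels.size()));
            testCase.add(label);
            state = MODEL.get(state).get(label);
        }
        return testCase;
    }

    public static void main(String[] args) {
        System.out.println(randomWalk(42));
    }
}
```

Model coverage in this setting means generating enough walks that every edge of MODEL appears in at least one test case.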
15. Workflow
1. Define test objective and software to test
2. Analyze requirements, software under test, existing test cases
3. Create models
4. Set up infrastructure
5. Generate test cases (TestMonkey and DataProvider)
6. Execute test cases, analyze results, remove bugs
16. Tools used by FAST
Modeling: yEd graph editor from yWorks
Model traversal and test generation: JUMBL (University of Tennessee)
Test Execution:
JUnit (Java)
CuTest (C)
Selenium (Web)
UISpec (Java Swing)
Sikuli (Image-based testing of legacy systems)
Glue scripts:
Conversion of Yed models to Jumbl models
Preparing a test suite from generated test cases
Generation of system-specific build files (e.g., makefiles)
Helper scripts to clean-up generated files
17. Testing Web-based Systems
We have used the FAST approach to test web-based systems
We used use-case documentation (as well as
knowledge about the running system) to build a
model of the system under test
Automatically derived test cases from the model
18. Example: Search Form
Main scenarios:
1. Some valid search input
a) that matches db
b) that doesn’t match db
2. Invalid search input (evil)
3. No search input
Endless number of test input combinations
Our approach: build model(s) that can be extended as necessary
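The scenarios above can be backed by a data provider, sketched below with hypothetical scenario names and the slide's own example values (a real provider would draw from the test database):

```java
import java.util.Map;

public class SearchDataProvider {
    // Returns the search form input for one scenario of the search model.
    static Map<String, String> input(String scenario) {
        switch (scenario) {
            case "valid-match":    return Map.of("name", "Benderson", "city", "College Park");
            case "valid-no-match": return Map.of("name", "Fake Name", "city", "Fake Town");
            case "invalid-evil":   return Map.of("name", "O'Hare", "city", "Chicago");
            case "empty":          return Map.of("name", "", "city", "");
            default: throw new IllegalArgumentException("unknown scenario: " + scenario);
        }
    }

    public static void main(String[] args) {
        System.out.println(input("invalid-evil"));
    }
}
```

Separating the data provider from the model is what lets one model drive an endless number of input combinations.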
19. An Example: Search Model
(Model diagram: 1. Some valid search input — a) that matches db, b) that doesn’t match db; 2. Invalid search input (evil); 3. No search input)
20. An Example: Search Model
Tester: Let’s try with something
we know exists in the database
Name: Benderson
City: College Park
Result: 4 Matches of 35 records in database
21. An Example: Search Model
Tester: Let’s try with
something that doesn’t exist
in the database
Name: Fake Name
City: Fake Town
Result: 0 Matches of 35 records in
database
22. An Example: Search Model
Tester: Let’s try
something evil!
Name: O’Hare
City: Chicago
Result: System crash!
23. An Example: Search Model
Tester: Let’s try an “empty” search
Result: 35 Matches of 35 records in database
24. An Example: Search Model
A more complex scenario
Once we have the model, we can generate an endless number of such test cases!
26. Ready-to-run generated test cases!
This automatically generated test can be executed immediately and/or
integrated into the daily build.
27. Running the generated test cases
We need to map each action into some code
For web testing, there are several tools (e.g.,
HttpUnit, Selenium)
These tools offer APIs to interact with web applications
We used Selenium for executing the generated
test cases
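The mapping from model actions to executable code can be sketched as a dispatch table (hypothetical action names; each Runnable here only records what it would do, whereas in our setup it would wrap a Selenium call):

```java
import java.util.*;

public class ActionMapper {
    static final List<String> LOG = new ArrayList<>();

    // Each model action name maps to a Runnable; in a real setup the Runnable
    // would drive the browser via Selenium, here it just records the step.
    static final Map<String, Runnable> ACTIONS = Map.<String, Runnable>of(
        "open search page",  () -> LOG.add("GET /search"),
        "enter valid input", () -> LOG.add("type name=Benderson"),
        "click search",      () -> LOG.add("click search button")
    );

    // Execute one generated test case, i.e., a sequence of action names.
    static void run(List<String> testCase) {
        for (String action : testCase) {
            Runnable step = ACTIONS.get(action);
            if (step == null) throw new IllegalStateException("unmapped action: " + action);
            step.run();
        }
    }

    public static void main(String[] args) {
        run(List.of("open search page", "enter valid input", "click search"));
        System.out.println(LOG);
    }
}
```

The point of the indirection: when the UI changes, only the action implementations need rework, not the generated test cases.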
28. Example: detected defect
We can register Clarkson’s
29. Example: detected defect
However, if we search for Clarkson’s, we get this error: System crash!
Other examples of detected defects:
• Loss of state when moving back and forth between pages
• Records missing in search results due to sorting issues
30. Pagination issue (513 registrations in total)
31. Sorting issue
Missing 198:
32. Another Example: FDA CFR Part 11
CFR Part 11 is a legal standard for electronic records
and electronic signatures
Rules for user account management are covered
Challenge: Test for conformance to CFR Part 11
Approach:
Developed a model of Part 11
Automatically derived test cases
33. (Snippet) Model of Part 11
34. Test Monkey Interface
//This is a technology-agnostic interface for testing the Login/Password use case.
public interface ILoginPasswordTestMonkey
{
public void enterSystem();
public void gotoLoginPasswdPage();
public void enterGeneratedLoginPasswd();
public void enterValidLoginPasswd();
public void enterInvalidLoginPasswd();
public void clickLogin();
public boolean isLoginSuccessful();
public void clickLogout();
public boolean isLogoutSuccessful();
…
}
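A minimal sketch of how such a monkey interface might be backed by a stub (hypothetical: only a subset of the methods, an invented credential, and no real UI; the real monkey would drive the web application):

```java
public class StubLoginMonkey {
    private static final String VALID = "s3cret"; // assumed test credential
    private boolean loggedIn = false;
    private String entered;

    // Technology-agnostic actions, as in the interface above; the stub
    // tracks just enough state to answer the is...Successful() queries.
    public void enterValidLoginPasswd()   { entered = VALID; }
    public void enterInvalidLoginPasswd() { entered = "wrong"; }
    public void clickLogin()              { loggedIn = VALID.equals(entered); }
    public boolean isLoginSuccessful()    { return loggedIn; }
    public void clickLogout()             { loggedIn = false; }
    public boolean isLogoutSuccessful()   { return !loggedIn; }

    public static void main(String[] args) {
        StubLoginMonkey m = new StubLoginMonkey();
        m.enterValidLoginPasswd();
        m.clickLogin();
        System.out.println("login ok: " + m.isLoginSuccessful());
    }
}
```

Because generated tests call only the interface, the same tests run against the stub and against the real system.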
35. Sample black-box test case
[Start]."Enter application"
[Application Home]."Recover password using invalid credentials"
[Password Recovery Warning]."Recover password using valid credentials"
[Password Recovered]."Exit"
[Start]."Enter application"
[Application Home]."Enter valid login and password"
[Logged In]."Request change password"
[Change Password Page]."Enter valid password"
[Logged In]."Logoff"
[Application Home]."Enter expired password"
[Expired Password Warning]."Enter valid login and password"
[Logged In]."Exit"
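Each line of this notation pairs a state with an action, which makes it easy to turn into executable steps. A small parsing sketch (assuming the format is exactly [State]."Action"):

```java
public class PathLine {
    final String state;
    final String action;

    // Split one line of the black-box notation into its [state] and "action".
    PathLine(String line) {
        int close = line.indexOf(']');
        state  = line.substring(1, close);
        action = line.substring(line.indexOf('"') + 1, line.lastIndexOf('"'));
    }

    public static void main(String[] args) {
        PathLine p = new PathLine("[Application Home].\"Enter valid login and password\"");
        System.out.println(p.state + " -> " + p.action);
    }
}
```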
36. Sample generated JUnit case
import part11.Factory;
import part11.ILoginPasswordTestMonkey;
import part11.ILoginPasswordDataProvider;
import junit.framework.TestCase;
public class Part_11_9 extends TestCase {
private ILoginPasswordTestMonkey monkey;
private ILoginPasswordDataProvider dataProvider;
protected void setUp() {
// Get a user account data provider
dataProvider = Factory.createLoginPasswordDataProvider();
// Get a test monkey
monkey = Factory.createLoginPasswordTestMonkey(dataProvider);
// Start the system under test
monkey.enterSystem();
}
37. Sample generated JUnit case …
public void testMethod() {
// Test using valid login and password
monkey.gotoLoginPasswdPage();
monkey.enterValidLoginPasswd();
monkey.clickLogin();
monkey.waitForPageToLoad();
assertTrue("Valid user should login",
monkey.isLoginSuccessful());
// Test password and confirm password mismatch
monkey.clickChangePassword();
monkey.enterMismatchPasswordConfirmPassword();
assertTrue("Password mismatch not detected",
monkey.isPasswordMismatchMsgShown());
38. Sample generated JUnit case …
// Test Logout
monkey.clickLogout();
monkey.waitForPageToLoad();
assertTrue("Valid user should be able to logout",
monkey.isLogoutSuccessful());
// Test using invalid login and password
monkey.gotoLoginPasswdPage();
monkey.enterInvalidLoginPasswd();
monkey.clickLogin();
monkey.waitForPageToLoad();
assertTrue("Login should fail on invalid account",
monkey.isLoginErrorMsgShown());
…
}
39. Testing of software components
We can also use FAST to test source code
API specs and existing test cases were used to
develop models
Executable test cases were derived from models
Let us discuss two NASA systems:
GMSEC (modeling of Java APIs)
OSAL (modeling of C APIs)
41. FAST @ NASA GMSEC …
State-of-the-practice: Test cases are hand-crafted
New initiative started to evaluate the feasibility of the
FAST approach
Modeled a portion of the GMSEC Software Bus based
on existing test cases and documentation
Automatically generated test cases
Found a few problems (since fixed)
42. Hand-crafted test case (snippet)
public static void main( String args[] ) {
Status result = new Status();
Connection conn = new Connection();
ConnectionConfig cfg = new ConnectionConfig( args );
// Create the connection
result = ConnectionFactory.Create( cfg, conn );
checkError( false, result, "Creating the connection object" );
// Disconnect
result = conn.Disconnect();
checkError( true, result, "Disconnecting before connection is established" );
// Connect
result = conn.Connect();
checkError( false, result, "Establishing the connection to the middleware" );
} //..main()
43. Manually developed test cases – source of inspiration
• We reviewed existing Java test cases
• Found that the tester had used certain permutations
of API usage
• Also, both good and “evil” cases were considered
• We used these test cases as a source of reference for
building API usage models
44. FAST @ NASA GMSEC …
APIs of the module under test
public interface IConnection
{
public Status Connect();
public Status Disconnect();
…
}
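The API-usage permutations the models exercise can be sketched against a stub (hypothetical status codes and logic; not the actual GMSEC implementation): calling Disconnect before Connect is exactly the kind of "evil" ordering a model-derived test covers.

```java
public class ConnectionModelDemo {
    enum Status { OK, ERROR }

    // Stub with the state the usage model tracks: connected or not.
    static class StubConnection {
        private boolean connected = false;

        Status connect() {
            connected = true;
            return Status.OK;
        }

        Status disconnect() {
            if (!connected) return Status.ERROR; // "evil" ordering caught by the model
            connected = false;
            return Status.OK;
        }
    }

    public static void main(String[] args) {
        StubConnection c = new StubConnection();
        System.out.println("disconnect before connect: " + c.disconnect());
        System.out.println("connect: " + c.connect());
        System.out.println("disconnect: " + c.disconnect());
    }
}
```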
45. FAST @ NASA OSAL
Operating System Abstraction Layer
Isolates flight software from real-time operating
systems and hardware
Implementations exist for the real-time systems RTEMS
and VxWorks and for POSIX-compliant non-real-time
systems
Used for mission critical embedded systems
Provides support for file-system, tasks, queues,
semaphores, interrupts, hardware abstraction,
I/O ports and exception handling
46. FAST @ NASA OSAL …
Why is it important that OSAL is bug-free?
Flight software is mission-critical and needs to be of
very high quality
OSAL is a building block of the Core Flight Software
product line
OSAL is used in many NASA missions, e.g., the
Lunar Reconnaissance Orbiter
If OSAL has issues, it might result in catastrophic
failure
47. FAST @ NASA OSAL …
48. NASA OSAL – Architecture
49. Sample APIs
/******************************************************************************
** Directory API
******************************************************************************/
// Makes a new directory
int32 OS_mkdir (const char *path, uint32 access);
// Opens a directory for searching
os_dirp_t OS_opendir (const char *path);
// Closes an open directory
int32 OS_closedir(os_dirp_t directory);
// Removes an empty directory from the file system.
int32 OS_rmdir (const char *path);
50. Example of an OSAL model
52. Inside “Open Directory”
53. API doc of OS_mkdir
/*--------------------------------------------------------------------------------------
Name: OS_mkdir
Purpose: makes a directory specified by path.
Returns: OS_FS_ERR_INVALID_POINTER if path is NULL
OS_FS_ERR_PATH_TOO_LONG if the path is too long to be stored locally
OS_FS_ERR_PATH_INVALID if path cannot be parsed
OS_FS_ERROR if the OS call fails
OS_FS_SUCCESS if success
Note: The access parameter is currently unused.
---------------------------------------------------------------------------------------*/
int32 OS_mkdir (const char *path, uint32 access);
54. Inside Open Invalid Directory
55. Sample IMonkey Interface
int32 removeDirectoryValid(void);
int32 removeDirectoryPathNull(void);
int32 removeDirectoryPathTooLong(void);
int32 removeDirectoryPathUnparsable(void);
int32 removeDirectoryCurrent(void);
int32 removeDirectoryNotEmpty(void);
…
56. Sample generated test in CuTest
void Testosal_Filesystem_min_2(CuTest* tc)
{
status = makeFilesystemValid();
CuAssertIntEquals_Msg(tc,"Filesystem could not be created",
OS_FS_SUCCESS, status);
status = mountFilesystemValid();
CuAssertIntEquals_Msg(tc,"Filesystem could not be mounted",
OS_FS_SUCCESS, status);
pointer = openDirectoryValid();
CuAssertTrue(tc, pointer != NULL);
…
status = removeFilesystemValid();
CuAssertIntEquals_Msg(tc,"Filesystem could not be removed",
OS_FS_SUCCESS, status);
}
57. Issues found using FAST
• File-descriptors after removing file-system:
• After somewhat long tests we would run out of
file-descriptors
• This would even happen with a newly created
file-system
• OSAL does not remove file-descriptors for files
open when the file-system is removed
• Unable to create and open files
58. Issues found using FAST
• Some wrong error codes returned (and
unimplemented features)
Test function                     Error message                   Expected       Actual
checkFilesystemValid()            Filesystem Not Checked          OS_FS_SUCCESS  OS_FS_UNIMPLEMENTED
copyFileLongSourceFilename()      Filesystem error code expected  OS_FS_ERROR    OS_FS_ERR_NAME_TOO_LONG
copyFileNonExistingSourceFile()   Filesystem error code expected  OS_FS_ERROR    OS_FS_SUCCESS
renameFileLongSourceFilename()    Filesystem error code expected  OS_FS_ERROR    OS_FS_ERR_NAME_TOO_LONG
59. Code Coverage using FAST
FAST helps achieve good code coverage!
The new model improves on the old model by covering
lines that were not covered during testing
(Chart: code coverage (%) of osfileapi.c and osfilesys.c, new model vs. old model, for test suite sizes Min, 10, 25, 50, 100, and 1000; y-axis from 60 to 95)
60. Code Coverage Analysis
Never passing a permission other than
OS_READ_WRITE when creating and opening files
switch(access) {
case OS_READ_ONLY: …
case OS_WRITE_ONLY: …
case OS_READ_WRITE: …
default: …
}
No passing of NULL parameters to several
functions
if (path == NULL || filestats == NULL)
return OS_FS_ERR_INVALID_POINTER;
61. Code Coverage Analysis
Never passing a string parameter of a certain
length to several functions
if (strlen(path) >= OS_MAX_PATH_LEN)
return OS_FS_ERR_PATH_TOO_LONG;
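Uncovered branches like these suggest adding boundary transitions to the model. A sketch of the boundary inputs such transitions would feed in, against a hypothetical validator mirroring the OS_mkdir-style checks (the length limit is an assumed value for illustration):

```java
public class PathCheckDemo {
    static final int OS_MAX_PATH_LEN = 64; // assumed limit for illustration

    // Mirrors the null and path-length checks shown above,
    // returning the error name as a string.
    static String check(String path) {
        if (path == null) return "OS_FS_ERR_INVALID_POINTER";
        if (path.length() >= OS_MAX_PATH_LEN) return "OS_FS_ERR_PATH_TOO_LONG";
        return "OS_FS_SUCCESS";
    }

    public static void main(String[] args) {
        System.out.println(check(null));            // boundary: NULL pointer
        System.out.println(check("x".repeat(100))); // boundary: path too long
        System.out.println(check("/ram/newdir"));   // nominal case
    }
}
```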
Calls to functions of the underlying operating
system never return an error
status = close((int) OS_FDTable[filedes].OSfd);
if (status == ERROR)
…
61 © 2012 Fraunhofer USA, Inc. 03/23/2012
62. MBT – Limitations
Modeling requires a specification of the SUT
Developers are not used to modeling
Slower feedback compared to manual testing
Some experience is needed to define the
architecture of models
Difficult to document individual test cases
Note: Models summarize all test cases
Some customers require documentation of each test case
Difficult to locate the root cause of failures in
(lengthy) generated test cases
Requires manual work to minimize failed test cases
63. MBT – Limitations …
For web-based testing:
creation of the data provider requires effort to
explore the SUT
if IDs of web elements change or are dynamic, then
tests may not work (a testability issue)
During development, models and data providers
may need rework because
requirements and APIs change (frequently)
lack of contracts
64. MBT – Limitations …
Test oracle problem for distributed publish-subscribe systems
Cannot assign one test oracle for a given state and
action
Due to decoupling of publishers and subscribers in space
and time
Requires runtime monitoring and global
coordination models
Difficult to execute generated tests in parallel due
to shared resources (e.g., files)
65. Our Experience with MBT – Summary
We found some bugs during the initial exploration of
the system
Some spec issues found during the modeling phase
Some bugs found during execution of test cases
It took some effort to get started with MBT
Learning a new approach, tools, etc.
It definitely cuts testing effort and adds fun!
66. Our Experience with MBT – Summary …
Generate several hundred (or thousands of) ready-to-run test cases!
The test code embedded in the model is reused to
generate many test cases!
Immediate return-on-investment
You get test cases for your current model that can
be part of the daily build
No editing of test cases
Source code changes mean updating the model
and regenerating test cases
A software product that is very well tested!
67. Thanks!
Dr. Dharmalingam Ganesan
(dganesan@fc-md.umd.edu)
Dr. Mikael Lindvall
(mlindvall@fc-md.umd.edu)
68. Acknowledgements
NASA IV&V: Lisa P. Montgomery
NASA Goddard Space Flight Center (GSFC):
GMSEC Team:
Lamont Ruley, Robert Wiegand, Sharon Orsborne
CFS Team:
Dave McComas, Nicholas Yanchik, Alan Cudmore
White Sands: Markland Benson
Sally Godfrey
Fraunhofer CESE:
Rance Cleaveland
Frank Herman
Myrna Regardie
69. Acknowledgements – Interns
Palmi Valgeirsson
Gunnar Cortes
Faraz Ahmed
Henning Femmer
Vignir Örn Guðmundsson
70. Acronyms
API: Application Program Interface
CFE: Core Flight Executive
CFR: Code of Federal Regulations
CFS: Core Flight Software
FAST: Fraunhofer Approach for Software Testing
FDA: Food and Drug Administration
GMSEC: Ground Mission Services Evolution Center
MBT: Model-based Testing
OSAL: Operating System Abstraction Layer