It is easy to measure code coverage when running unit tests.
However, very frequently the following questions come up:
- How can we measure API test coverage and e2e / UI test coverage?
- Does e2e / UI test coverage add value?
- If not, what other data can we look at to know if the e2e tests have good coverage?
This session is about understanding the above questions and finding solutions to them.
4. Criteria of Automated Tests
• Tests are “testing” the “right functionality / behavior”!
• Tests give quick feedback
• If tests fail, the correct reason for failure is provided
• Tests are optimized & efficient
@BagmarAnand
6. Test Pyramid
[Pyramid, bottom to top: Unit (xUnit / JavaScript) → Web Service → UI / e2e → Manual / Exploratory]
Lower layers are technology-facing tests; upper layers are business-facing tests.
Going up the pyramid, tests become slower, more integrated, and more expensive; going down, faster, more isolated, and cheaper.
7. Test Pyramid – Includes NFRs
[Same pyramid – Unit (xUnit / JavaScript), Web Service, UI / e2e, Manual / Exploratory – with NFRs cutting across all layers: Performance, Security, Accessibility, Analytics]
11. Why are Metrics important?
• They give an understanding of the current state
• They provide an indication of potential problems / issues
• They enable educated, data-driven decisions, quickly!
13. (Sad) Reality of Metrics
• Abused
• Taken out of context & proportion
• Seem to be set in stone!
14. Way of working has evolved
15. We are still stuck in the past in measuring Quality!
17. (Traditional) Test Metrics
• Test Case Effectiveness = (Number of defects detected / Number of test cases run) x 100
• Test Case Productivity = (Number of test cases / Effort spent on test case preparation)
• Test Coverage = (Number of detected faults / Number of predicted defects)
• Test Code Coverage = code coverage produced by e2e / UI tests
• Requirement Coverage = (Number of requirements covered / Total number of requirements) x 100
• Test Design Coverage = (Total number of requirements mapped to test cases / Total number of requirements) x 100
• Test Execution Coverage = (Total number of executed test cases or scripts / Total number of test cases or scripts planned to be executed) x 100
• % of work completed & yet to be completed
• Time to complete the remaining work
18. Traditional Test Metrics
• There is a lot of work required to capture these metrics
• The results of these metrics do not necessarily tell me how to make the product quality better
• How does capturing, measuring, and analyzing these metrics work in the Agile way of working?
Hence, such metrics add very limited value to teams!
19. Traditional Test Metrics
Metrics that add very limited value to teams should be avoided!
They seem to be more for managers than for teams building a high-quality product!
20. How to find better ways?
22. Why is it a challenge?
• Ways of working have evolved, but people are still the same!
• Roles / titles have changed, but people are still the same!
• We still need to measure quality, so we end up using the techniques we knew before!
23. Evolved way of thinking about Metrics
• Metrics should be easy and quick to capture
• Use practices that allow metrics to be captured automatically
• Generate meaningful & visual reports to infer Quality
29. Test Pyramid (recap)
30. Quality is NOT some report from the QA Team!
[Test Pyramid – Unit (xUnit / JavaScript), Web Service, UI / e2e, Manual / Exploratory – with a callout: “Test Execution report created by QA Team”]
31. All aspects of testing combined indicate Quality of product-under-test
[Test Pyramid – Unit (xUnit / JavaScript), Web Service, UI / e2e, Manual / Exploratory – together with Performance, Security, Accessibility, and Analytics, feeding into the Quality of the product-under-test]
33. (Traditional) Test Metrics (recap)
34. I often get asked ….
35. How can you capture code coverage from e2e / UI tests?
36. Let us first understand: what is Code Coverage?
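To ground the discussion: a coverage tool instruments the code so that every executed line is recorded while tests run, then compares executed lines against all executable lines. Here is a minimal, tool-agnostic sketch of that idea (the tracer and sample function are illustrative only; real tools like JaCoCo or coverage.py do this at scale).

```python
import sys

def trace_lines(target, *args):
    """Run target(*args) and return the set of relative line numbers executed."""
    executed = set()

    def tracer(frame, event, arg):
        # Record each 'line' event that belongs to the function under test.
        if event == "line" and frame.f_code is target.__code__:
            executed.add(frame.f_lineno - target.__code__.co_firstlineno)
        return tracer

    sys.settrace(tracer)   # "instrument" the program while it runs
    try:
        target(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(n):          # relative line 0 (the def line)
    if n % 2 == 0:        # relative line 1
        return "even"     # relative line 2
    return "odd"          # relative line 3

# Calling classify(2) never executes the "odd" branch, so that line is uncovered:
print(sorted(trace_lines(classify, 2)))
```

The uncovered branch is exactly what a coverage report would flag; calling `classify(3)` instead would cover the “odd” branch and leave the “even” one uncovered.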
40. But, does it really make sense?
43. Criteria for Code Coverage & Analysis
When tests are running:
• The product-under-test is instrumented to allow tracing and capturing data / metrics
• The environment is isolated – i.e. no one else should be using the same environment
44–46. Feasibility of Code Coverage & Analysis

Unit Tests
• How the tests run: mocks / stubs; isolated environment
• Code coverage analysis possible? Yes
• Challenges: Minimal – standard mocking & stubbing practices should suffice

API / WebService Tests
• How the tests run: against deployed service(s); dependent services may be stubbed / mocked for better orchestration
• Code coverage analysis possible? Yes, but at service level
• Challenges: Medium – from a setup / mocking-dependencies perspective

e2e / UI Tests
• How the tests run: against a deployed environment; all integrations in place; data / config setup may be necessary; some systems may be stubbed
• Code coverage analysis possible? Yes, but may not be a good strategy (cost vs value)
• Challenges: High – need full environment setup; keeping the environment “pure” and “isolated” from use by other tests or humans for tracing to work; e2e tests may trigger only a small set of code paths compared to Unit & API tests
47. Test Pyramid (recap)
48. Feasibility of Code Coverage & Analysis
[The same table, annotated with two scales alongside: Value, and Feasibility (Easy ↔ Difficult)]
50. Test Pyramid (recap)
51. Measuring Quality from each Test Type
• Manual / Exploratory: Feature Coverage; User Experience / Usability; Accessibility
• UI / e2e: Feature Coverage; Visual Coverage; Accessibility; Product-deployment & setup time; Test Execution time; Code Coverage
• Web Service: Contract Tests; Load / Performance / Security; API / Service Deployment & Setup time; Execution time; Code Coverage
• Unit (xUnit / JavaScript): Static Code Analysis; Execution time
52. In the rest of this session we will focus on Feature Coverage for e2e / UI Tests
54. Why should we capture Feature Coverage?
• e2e / UI tests are typically user scenarios that cut across many features of the product
• High-impact / high-risk features should be tested more than low-impact / low-risk features
* Risk / impact is relative to business goals / user functionality
55. Feature Coverage – Objective
The ability to know which features are covered, and how many times, as part of the execution of the e2e / UI Tests!
56. Feature Coverage – how will it help?
Visualizing the feature coverage can –
• give confidence in the automated e2e tests
• provide data on where to focus next
57. Capturing Feature Coverage
• Add appropriate ‘tags’ to automated e2e / UI tests
• Gherkin-based tools (ex: Cucumber, Karate, etc.) make it easy to add tags
• Test runners (ex: TestNG, JUnit, etc.) have different ways to provide custom annotations
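The deck points at Gherkin tags and TestNG / JUnit annotations for this; as a language-neutral illustration, tagging can be as simple as attaching labels to each test and recording them centrally. The `tag` decorator and registry below are hypothetical, not any framework's API.

```python
# Hypothetical tagging helper – real frameworks (Cucumber tags, TestNG groups,
# JUnit 5 @Tag) provide this out-of-the-box.

TAG_REGISTRY = {}  # test name -> list of feature tags

def tag(*tags):
    """Attach feature tags to a test function and record them centrally."""
    def decorator(test_fn):
        TAG_REGISTRY[test_fn.__name__] = list(tags)
        return test_fn
    return decorator

@tag("checkout", "payments")
def test_guest_checkout():
    pass  # the actual e2e steps would go here

@tag("checkout", "coupons")
def test_checkout_with_coupon():
    pass

# Every tagged test is now discoverable for feature-coverage reporting:
print(TAG_REGISTRY)
```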
58. Visualizing Feature Coverage
• Approach #1: Update your reports to include ‘tags’ for the tests
  • Create visualization based on the ‘tags’ added in test reports
  • Ex: cucumber-reporting gives you ‘tag’ analysis out-of-the-box
  • Uses json report files as input to create the report
• Approach #2: Benchmark tag data (if #1 is not feasible)
  • Update the test framework to “log” all tags and their test names to a common csv file
  • Post test execution, open the csv file in a spreadsheet and create a chart to visualize Feature Coverage
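Approach #2 can be sketched as follows. The csv content is made-up sample data, and the tag counting stands in for the spreadsheet chart.

```python
import csv
import io
from collections import Counter

# Sample of the "test name, tag" log a framework might append to during a run
# (made-up data; a real run would write to a shared csv file on disk).
csv_log = io.StringIO(
    "test_name,tag\n"
    "test_guest_checkout,checkout\n"
    "test_guest_checkout,payments\n"
    "test_checkout_with_coupon,checkout\n"
    "test_search_by_sku,search\n"
)

# Count how often each feature tag was exercised by the executed tests.
feature_coverage = Counter(row["tag"] for row in csv.DictReader(csv_log))

# A crude text "chart" in place of the spreadsheet visualization.
for feature, hits in feature_coverage.most_common():
    print(f"{feature:10} {'#' * hits}")
```

The resulting counts show which features the e2e suite exercised, and how often, which is exactly the Feature Coverage signal the objective slide asks for.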
62. Feature Coverage – Tips
• Identify the correct set of tags to be used for each type of test
• Tags can be based on features or components of your product
• Watch out for “tag-hell”
63. References
• Tracking feature coverage from your api / functional UI (e2e) tests: https://essenceoftesting.blogspot.com/2020/03/tracking-functional-coverage.html
• Sample repo using cucumber-reports with karate: https://github.com/anandbagmar/karate-sample
• cucumber-reporting: https://github.com/damianszczepanik/cucumber-reporting