YouTube channel : https://www.youtube.com/c/prelrik
These course slides are aimed at beginners and less experienced testers. The course focuses on how testers actually work in a live environment.
2. WHAT IS TESTING
Google says: "a procedure intended to establish the quality, performance, or reliability of something, especially before it is taken into widespread use."
3. "Software testing is a process of executing a system or system component in order to find bugs and errors."
Or
"Software testing is the process of Verification and Validation (V&V), performed on the Application Under Test (AUT) with the intention of finding bugs and errors."
SOFTWARE TESTING IS:
5. "STLC stands for Software Testing Lifecycle. STLC is a set of processes executed in a systematic manner to achieve the testing goal."
STLC
6. STLC PHASES:
TEST PLANNING
TEST DESIGNING
TEST ENVIRONMENT
TEST EXECUTION
BUG REPORTING & MANAGEMENT
RTM (REQUIREMENT TRACEABILITY MATRIX)
TEST REPORTING
10. BUG MANAGEMENT
• How to raise a bug
• Bug components
• Bug lifecycle
TESTING PHASES
11. TEST REPORTING
• Test results
• Bug report
• Test coverage
• Lessons learnt
TESTING PHASES
12. TESTING IMPORTANCE
Testing is important to make sure:
• All business requirements are implemented
• Functionalities behave as expected
• The application is secure enough
• It does not break under normal working conditions
• It does not harm the business's reputation
• It performs its functions within an acceptable time
• It works well on all supported OSs, devices and screens
• Customers can trust and rely on your product
13. A CLASSIC EXAMPLE
"NASA's Mars Climate Orbiter was lost because the two agencies, NASA and Lockheed Martin, used two different measurement units, and this mismatch was either not tested or was missed during testing."
16. "A product's degree of excellence is its quality. Quality can be judged by how well the product matches the requirements and how accurately and/or efficiently the work can be done."
QUALITY
17. "QA is a systematic and scientific approach to monitoring and improving the software development process."
It is a verification activity, and the idea behind it is to prevent errors from occurring in the software development process.
QA
21. "QC is a validation activity where the quality of the developed product is evaluated against a benchmark or a competitive program's set standard, to make sure the software adheres to the defined or expected quality."
QC
23. The difference:
QUALITY ASSURANCE | QUALITY CONTROL
QA is a verification process | QC is a validation activity
QA is process oriented | QC is product oriented
QA prevents defects | QC finds defects
It improves the development process | It improves the developed product's quality
Requirement document, design reviews and code reviews are done as part of this | Product testing is done as part of this activity
Performed without program execution | Performed by executing the program
24. "Testing is a validation activity, so it comes under QC: the developed program is executed with the intention of finding bugs."
TESTING
29. ▶ Waterfall Model
▶ Spiral Model
▶ Prototype Model
▶ Iterative and Incremental Model
▶ V-Model
▶ W-Model
▶ Agile Model
SDLC MODELS
30. "The waterfall model is a sequential development model where each phase of the SDLC is executed one after another in a linear way, so it is also called the linear sequential model."
Waterfall Model:
REQUIREMENT
DESIGN
DEVELOPMENT
TESTING
DEPLOYMENT
MAINTENANCE
31. "The spiral model is a risk-driven process model generator for software projects. Based on the unique risk patterns of a given project, the spiral model guides a team to adopt elements of one or more process models, such as incremental, waterfall, or evolutionary prototyping."
Spiral Model:
32. "In this model a software prototype is created to check whether it satisfies the stakeholders' requirements."
Prototype Model:
Prototyping is used to let users evaluate developer proposals and try them out before implementation.
33. "In the incremental model the whole requirement is divided into various builds. Multiple development cycles each use the waterfall model; the work is divided into smaller, more easily managed modules."
Incremental Model:
34. "In the V model of the SDLC, the development phases and the corresponding test plans go side by side, which can be drawn as a 'V' shape."
V Model:
REQUIREMENT ANALYSIS ↔ UAT (USER ACCEPTANCE TESTING)
HIGH LEVEL DESIGN ↔ SYSTEM TESTING
LOW LEVEL DESIGN or SPECIFICATION ↔ INTEGRATION TESTING
CODING ↔ UNIT TESTING
35. "The W model extends the V model: each development phase is verified by a review activity and validated by a corresponding test level, forming a 'W' shape."
W Model:
REQUIREMENT ANALYSIS ↔ REQUIREMENT REVIEW / UAT (USER ACCEPTANCE TESTING)
HIGH LEVEL DESIGN ↔ HLD REVIEW / SYSTEM TESTING
LOW LEVEL DESIGN or SPECIFICATION ↔ SPECIFICATION REVIEW / INTEGRATION TESTING
CODING ↔ CODE REVIEW / UNIT TESTING
CODE MERGE → BUILD → DEPLOYMENT
36. "Agile is a software development model that gives the flexibility to develop, test and deploy things quickly and easily."
or
"Agile is a software development model that implements a continuous iteration approach to develop and deploy a product whose requirements change very frequently."
AGILE MODEL:
39. "A test plan is a document that describes what to achieve and how to achieve it in the AUT (Application Under Test)."
TEST PLANNING:
Making the test plan is the very first step in the software testing lifecycle; it defines the rules, covers the scope, and analyzes the available resources, the timeline and the risks associated with testing a project.
40. Follow the steps below to make a good test plan document:
▶ Analyze the business requirements
▶ Find the test objectives
▶ Define a test strategy
▶ Analyze the risks (tools, resources and time)
▶ Prepare the test environment and test data
▶ Define entry & exit criteria
▶ Define a traceability matrix
▶ Schedule and timeline
▶ Deliverables
TEST PLAN DOCUMENT:
41. ▶ Understand the requirement by going through the requirement document
▶ Note down and resolve your queries
▶ Get the product or document reviewed
ANALYZE BUSINESS
REQUIREMENT:
42. The objective of testing is to find as many valid defects as possible and to ensure that the software under test is bug free before release.
In this phase you need to find the core motive of your product. You need to know about:
▶ Bug-free features
▶ Smooth performance
▶ No security threats
FIND TEST OBJECTIVE:
43. "The test strategy defines the software testing approach used to achieve the testing goal. Usually the test strategy document is created by the Test Manager, and it states which techniques to follow and which modules (scope) to test." A test strategy defines:
▶ How to get maximum requirements under the testing umbrella with minimal effort
▶ What sorts of testing you need to perform
▶ What tools will be required
DEFINE TEST STRATEGY:
44. ▶ Find out whether the tools required for testing are available
▶ Find out whether you have the skills required to test
▶ Find out whether you have enough resources to complete the task in the given time frame
▶ Find out whether you have sufficient time to complete the testing
ANALYZE THE RISKS:
45. "The test environment is also called the test bed setup; it means laying down all the software and hardware a tester needs to execute the test scripts."
"In order to work on something or to process any request, a tester needs some data to use as input, and that is called test data."
TEST ENVIRONMENT & TEST
DATA PREPARATION:
46. "Entry criteria are the prerequisites that must be fulfilled before the test team can begin testing."
In most places smoke/sanity testing is defined, and passing the smoke/sanity cases is a condition of the entry criteria.
DEFINE ENTRY CRITERIA:
47. "Exit criteria are the conditions that must be fulfilled before the test team can conclude its testing activities," e.g. the test coverage reaches 100% of the traceability matrix and all test deliverables are shared.
▶ Verify that all business requirements are covered as part of testing
▶ No critical bugs are in an open state
DEFINE EXIT CRITERIA:
48. The Requirement Traceability Matrix, or RTM, is a document that maps business requirements to test cases, to trace the testing coverage at any given point in time and to ensure that no requirement is missed during testing.
The main purpose of the RTM is to see that all test cases are covered so that no functionality is missed while testing.
DEFINE TRACEABILITY MATRIX:
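As a minimal sketch, an RTM is just a mapping from requirements to test cases; a coverage gap is any requirement with no mapped cases. The requirement and test case IDs below are hypothetical examples, not from any real project.

```python
# Hypothetical RTM: requirement -> list of covering test case IDs.
rtm = {
    "REQ-001 Login":        ["TC-01", "TC-02", "TC-03"],
    "REQ-002 Search music": ["TC-04", "TC-05"],
    "REQ-003 Playlist":     [],  # no test cases yet -> a coverage gap
}

def uncovered(rtm):
    """Return the requirements that have no mapped test cases."""
    return [req for req, cases in rtm.items() if not cases]

print(uncovered(rtm))  # ['REQ-003 Playlist']
```

Running a check like this before the exit criteria are evaluated makes "100% of the traceability matrix" an automatable condition rather than a manual review.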
50. DEFINE DELIVERABLES:
Test deliverables are the documents, scenarios, test cases, test scripts, test data, screenshots and bug reports that are shared during or after the testing lifecycle.
53. TEST SCENARIOS:
A test scenario means finding out "what is to be tested" in any given requirement. This is to make sure that end-to-end functionality is covered.
or
"Test scenarios are high-level classifications of test requirements, grouped depending on the functionality of a module."
54. TEST SCENARIO EXAMPLE:
Requirement: "Log in to the website, select and play your favourite music"
Scenario 1: Test the login function
Scenario 2: Search for the music based on different
genres and play
Scenario 3: Create your music playlist
Scenario 4: Play the songs from your playlist
55. TEST CASE:
"A test case, in software testing, is a set of conditions under which a tester will determine whether an application, software system or one of its features is working as it was developed and expected to do."
56. TEST CASE EXAMPLE:
Let's pick 2 scenarios from the scenario example:
Scenario 1: Test the login function
TC 1: Test the login function with valid credentials
TC 2: Test the login function with invalid credentials and verify the result
TC 3: Check and validate the error messages for login errors
Scenario 2: Search for the music based on different
genres and play
TC1: Search for specific genre songs and play them
TC2: Check the player functionalities (play, pause, stop, forward, ..)
TC3: Check that the songs can be added in your playlist
57. TEST SCRIPTING
"In order to execute test cases, a detailed procedure needs to be written in which the writer mentions all the steps to be performed and the expected result for each activity."
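One way to sketch such a scripted test case is as data, pairing each step with its expected result. The test case ID, title and step wording below are illustrative only, loosely based on the login scenario from the earlier example.

```python
# Hypothetical scripted test case: each step pairs an action with the
# expected result the tester must verify before moving on.
login_test = {
    "id": "TC-01",
    "title": "Login with valid credentials",
    "steps": [
        ("Open the login page",           "Login form is displayed"),
        ("Enter valid username/password", "Fields accept the input"),
        ("Click the Login button",        "User lands on the home page"),
    ],
}

for n, (action, expected) in enumerate(login_test["steps"], start=1):
    print(f"Step {n}: {action} -> expect: {expected}")
```

Keeping the script in this shape makes it easy to load into a test management tool or to turn each step into an automated assertion later.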
60. TECHNIQUES
"Test case design techniques are needed to get maximum coverage using an optimal number of test cases."
ⶠEquivalence Class Partitioning
ⶠBoundary Value Analysis
ⶠDecision Table
ⶠUse Case Testing
61. Equivalence Class Partitioning
"ECP is a testing technique that divides the input test data into partitions of equivalent classes; from each class at least one value must be tested."
62. ECP Example
Scenario: Assume there is an input field for age that accepts values from 1-100. If we follow the ECP concept, then:
Valid class: values from 1-100
Invalid class: values below 1 (0 and lower)
Invalid class: values above 100 (101 and higher)
Class A: any value from the valid input domain: 1-100
Class B: any value lower than the lowest: 0, -1, -2, ...
Class C: any value higher than the highest: 101, 102, ...
Class D: any alphanumeric value: A1, B1, C1
Class E: any decimal value: 1.1, 1.2, 1.3
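The age-field classes above can be sketched as a small validator; `classify_age` is a hypothetical function written for this example, not part of any real application. Under ECP, testing one representative value per class is enough.

```python
# Hypothetical validator for an age field that accepts 1-100.
def classify_age(value):
    """Return the equivalence class a raw input value falls into."""
    try:
        age = int(value)          # rejects 'A1' (class D) and '1.1' (class E)
    except (TypeError, ValueError):
        return "invalid: non-integer"
    if age < 1:
        return "invalid: below range"   # class B
    if age > 100:
        return "invalid: above range"   # class C
    return "valid"                       # class A

# One representative per class:
assert classify_age(50)    == "valid"
assert classify_age(0)     == "invalid: below range"
assert classify_age(101)   == "invalid: above range"
assert classify_age("A1")  == "invalid: non-integer"
assert classify_age("1.1") == "invalid: non-integer"
```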
63. Boundary Value Analysis
"More application errors occur at the boundaries of the input domain. The boundary value analysis technique is used to find errors at the boundaries rather than in the center of the input domain."
64. BVA Example
Scenario: Assume there is an input field for age that accepts values from 1-100. If we follow the BVA concept, then:
Valid boundary values: 1 & 100
One lower than the lowest: 0
One higher than the highest: 101
Boundary values: 1 & 100
Invalid values for negative test cases: 0 & 101
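The boundary selection above generalizes to any numeric range: take each boundary plus the value just outside it. `bva_values` is a small helper written for this sketch.

```python
# For a range [low, high], BVA picks the boundaries and the values
# immediately outside them.
def bva_values(low, high):
    """Return [just-below-low, low, high, just-above-high]."""
    return [low - 1, low, high, high + 1]

values = bva_values(1, 100)
print(values)  # [0, 1, 100, 101]
```

The first and last values become negative test cases; the middle two are positive test cases at the edges of the valid domain.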
65. Decision Table
"The decision table technique is used in complex business scenarios where the outcome is determined by combinations of input conditions."
Ex:
Conditions   | Rule 1 | Rule 2 | Rule 3
Cash         | Yes    | No     | No
Coupon       | N/A    | Yes    | No
Actions      |        |        |
Order placed | Yes    | Yes    | No
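The cash/coupon table above can be encoded directly, with one assertion per rule column; `order_placed` is a hypothetical function written just to mirror the table.

```python
# Each decision-table rule becomes one test: the combination of
# condition values determines the action.
def order_placed(cash, coupon):
    """Order is placed if the customer pays cash or has a coupon."""
    return cash or coupon

assert order_placed(cash=True,  coupon=False) is True   # Rule 1
assert order_placed(cash=False, coupon=True)  is True   # Rule 2
assert order_placed(cash=False, coupon=False) is False  # Rule 3
```

Writing one test per rule column guarantees every condition combination in the table is exercised at least once.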
66. USE CASE Testing
▶ Use cases capture the interactions between 'actors' and the 'system'.
▶ A use case is a description of a particular use of the system by an actor. Each use case describes the interactions the actor has with the system in order to achieve a specific task.
73. WHITE BOX TESTING or Structural Testing or Glass Box Testing
▶ Testing the code and internal structure comes under white box testing.
▶ The code's internal structure, design and implementation are tested as part of white box testing.
▶ It is also known as "glass box testing", "open box testing" or "structural testing".
74. WHITE BOX TESTING TYPES
UNIT TESTING:
Once a developer completes a piece of code, each individual component is tested independently for its expected outcome.
INTEGRATION TESTING:
When two independent units of code are merged, checking whether they function together is called integration testing.
75. WHITE BOX TESTING TECHNIQUES - CODE COVERAGE
STATEMENT COVERAGE
This technique requires every statement in the code to be executed at least once during the testing process.
BRANCH COVERAGE
This technique checks every possible path, such as if-else and other conditional branches, of a software application.
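A minimal sketch of branch coverage: for a function with an if-else, full branch coverage requires at least one test taking each path. `grade` and its threshold are hypothetical, invented for this example.

```python
# A function with two branches; branch coverage requires tests that
# exercise BOTH the if path and the else path.
def grade(score):
    if score >= 50:
        return "pass"
    return "fail"

assert grade(60) == "pass"   # takes the 'if' branch
assert grade(40) == "fail"   # takes the fall-through ('else') path
```

A single test would already touch some statements, but only both tests together cover every branch of the conditional.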
76. WHITE BOX TESTING IS PERFORMED TO FIND
▶ Poorly structured code
▶ The output of the code
▶ Conditional loops
▶ Internal security holes
▶ Branch and statement coverage
77. BLACK BOX TESTING
"Black box testing is the testing method where functional and non-functional aspects of the application are tested without looking into the code."
Also known as behavioural testing or closed box testing.
78. Black box testing is performed to check:
▶ Missing or incorrect features
▶ Performance errors
▶ Security errors
▶ Database connections
▶ APIs
▶ ...
79. BLACK BOX TESTING TYPES
▶ FUNCTIONAL TESTS
◦ Smoke/Sanity
◦ Feature testing
◦ Regression testing
◦ System testing
◦ UAT
▶ NON-FUNCTIONAL TESTS
◦ Performance (load/stress/soak)
◦ Compatibility testing
◦ Usability testing
◦ GUI testing
80. GRAY BOX TESTING
"Gray box testing is a combination of white box testing and black box testing. The idea is to find improper structure and/or improper usage of the application."
▶ It is based on functional specifications, UML diagrams, database diagrams or the architectural view
▶ A gray-box tester can design complex test scenarios more intelligently
83. UNIT TESTING
Once a developer completes a piece of code, each individual component is tested independently for its expected outcome.
84. INTEGRATION TESTING
When two independent units of code are merged, checking whether they function together is called integration testing.
85. Integration Testing: STUBS & DRIVERS
Stubs and drivers are dummy pieces of code called in to create an integrated environment when some modules are not yet complete or not available for testing.
▶ Stubs are used in the top-down integration approach
▶ Drivers are used in the bottom-up integration approach
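As a sketch of the stub idea: a higher-level module is integration-tested against a dummy lower-level module that is not finished yet. Both functions below are hypothetical, invented for this example.

```python
# Stub: stands in for a payment module that is not yet implemented,
# so the order flow above it can still be integration-tested.
def payment_stub(amount):
    """Dummy lower-level module: always reports success."""
    return {"status": "success", "amount": amount}

def place_order(amount, pay=payment_stub):
    """Higher-level unit under test, wired to the stub."""
    result = pay(amount)
    return "order placed" if result["status"] == "success" else "order failed"

assert place_order(250) == "order placed"
```

A driver would be the mirror image: a dummy caller written to exercise a finished lower-level module from above.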
87. SMOKE/SANITY TESTING
▶ Smoke/sanity testing is also called build verification testing or confidence testing
▶ The motive of this testing is to ensure that the major & critical functions work as expected
▶ This gives the test team enough confidence in the application's stability to execute the remaining tests
88. SMOKE/SANITY TESTING Example:
Smoke testing is a collection of test cases that covers the most crucial functions of the AUT (Application Under Test).
An example of smoke/sanity test cases for a website:
▶ The URL is accessible
▶ The user is able to log in
▶ The payment completes successfully
91. Functional TESTING
"For any new release of the product, testing the added or modified requirements is called functional testing."
Functional testing is done to check that all the business functionalities work as expected and fulfill the end user's requirements.
92. Regression TESTING
When a new function is added or an existing function is modified, checking the existing features is called regression testing, as the newly added/modified feature may create side effects for other features.
▶ This testing is done after every release
▶ The regression test suite should be updated after every new release
94. Retesting
Once a bug is fixed, checking the same functionality again to ensure the fix works is called retesting. Based on the result, the bug status is updated to fixed or reopened.
95. SYSTEM TESTING
▶ System testing is testing the whole system in an integrated environment with all the necessary software and hardware
▶ Usually performed in an E2E (end-to-end) environment
In complex business scenarios one system works in collaboration with many other systems, so testing your application when it is integrated with all the other applications is called system testing.
96. EXPLORATORY TESTING
When there is a requirement or an application to test but no relevant documents or details are provided, the tester explores the application and keeps checking features to find issues. This is known as exploratory testing.
It is a form of unstructured testing.
97. UAT
▶ UAT stands for User Acceptance Testing
▶ The product is evaluated against the end user's requirements
▶ Performed by the end users or stakeholders
▶ The product is evaluated on whether it matches the business requirements and whether to accept or reject it
▶ Usually the last type of testing
▶ UAT has 2 types: alpha testing and beta testing
98. Alpha & Beta Testing
Alpha Testing | Beta Testing
Performed by testers in a testing environment | Performed by real end users in the live environment
Performed at the developer's site | Performed at the client location or by the end user of the product
Comes before beta testing | Performed after alpha testing
Involves both white box and black box techniques | Typically uses black box testing
Conducted within the organization, tested by a representative group of end users | Conducted by the end users
101. PERFORMANCE TESTING
Performance testing is done to determine how a system performs in terms of speed, load and stability under a particular workload.
It is performed to check the reliability of the application.
Performance testing types are:
ⶠLoad Testing
ⶠStress Testing
ⶠSoak Testing
102. PERFORMANCE TESTING TYPES
Load Testing
Load testing is done to determine how the application behaves under a specific load. Example: check the page load time when 10 concurrent users access the same page at the same time.
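The 10-concurrent-users example can be sketched with a thread pool; `fetch_page` is a stand-in that simulates server work with a sleep rather than issuing a real HTTP request.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for one user's page request; sleep() simulates server work.
def fetch_page():
    start = time.perf_counter()
    time.sleep(0.05)                       # pretend the server takes ~50 ms
    return time.perf_counter() - start     # this "user's" response time

# 10 concurrent "users" hit the page at the same time.
with ThreadPoolExecutor(max_workers=10) as pool:
    timings = list(pool.map(lambda _: fetch_page(), range(10)))

print(f"max response time: {max(timings):.3f}s")
```

A real load test would replace `fetch_page` with an actual request (or use a dedicated tool such as JMeter, listed later), but the shape of the measurement is the same: many concurrent workers, per-request timings, then aggregate statistics.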
103. PERFORMANCE TESTING TYPES
Stress Testing
This testing is performed to find the breaking point of the application so that the team can plan for a load balancer.
104. PERFORMANCE TESTING TYPES
Soak Testing
Soak testing is done to check how the application behaves when it is kept under a specific load for a prolonged time.
105. SECURITY TESTING
Security testing is a testing process intended to reveal flaws in the security mechanisms that protect an application's data and sensitive information.
▶ This testing is done to find vulnerabilities in the application and fix them
▶ To ensure that sensitive data travels in encrypted form
▶ To ensure that no unauthorized access can be made
106. COMPATIBILITY TESTING 1/2
▶ Compatibility testing is done to check the application's responsiveness across various OSs, devices & browsers
▶ This testing is done to check the application's rendering and usability
▶ This testing is done to make sure that all target devices are covered
107. COMPATIBILITY TESTING 2/2
Use a cloud service to perform compatibility testing:
Saucelabs
Browserstack
Or use Google Chrome's device mode (in developer tools) to check the responsiveness.
111. SOFTWARE BUG/DEFECT
In general software testing language, "during application testing, any deviation between the actual result and the expected result is called a bug."
However, bug, defect and failure differ slightly:
BUG:
DEFECT:
FAILURE:
112. BUG, DEFECT & FAILURE
BUG:
A bug is the result of a coding error. "Bug" is the general term used for all issues caught in the testing environment.
DEFECT:
A defect is a deviation from the requirements. The word "defect" is used by business people or stakeholders.
FAILURE:
Any issue that reaches live users and impacts the application's purpose is called a product failure.
113. How to raise a Bug?
Consider the 10 points below as must-haves for raising an effective bug:
1) Summary
2) Description
3) Component or Assigned to
4) Steps to reproduce, with test data and URLs
5) Logs & screenshots
6) Requirement reference
7) Release name / build number / sprint name
8) Environment
9) Priority
10) Severity
114. PRIORITY & SEVERITY
Priority
The impact of the bug on the business level is called priority.
Priorities are defined as: P1, P2 & P3
Severity
The impact of the bug on the application level is called severity.
Severities are defined as: blocker, critical, major, minor
116. BUG MANAGEMENT TOOLS AND IMPORTANCE
▶ BugZilla
▶ Mantis
▶ HP-QC or HP-ALM
▶ Jira
117. NON-REPRODUCIBLE BUGS
When a tester reports a valid bug but the same bug cannot be reproduced by others for any reason, it is called a non-reproducible bug.
The reason could be:
1) Test environment
2) Test location
3) Test data
4) System configuration
5) System or server cache memory
118. How to deal with NON-REPRODUCIBLE BUGS
▶ Provide a proper story, not just steps
▶ Record a video or capture a screenshot as proof
▶ Provide the system log, application log and server logs
▶ Provide the time of execution
▶ Provide the location of test execution
▶ Try to reproduce the bug after clearing the system cache memory
▶ Try to bypass the server-side cache (if the application supports a cache-busting parameter, e.g. append ?nocache=1 to the URL and hit enter)
121. TEST MANAGEMENT TOOL
"Test management tools are the tools required to manage the whole testing activity and its reporting in an organized manner."
122. TEST MANAGEMENT TOOL Features
1: Manage builds and versioning for tests
2: Manage test scenarios, test cases and test scripts
3: Map requirements to test scenarios and test cases
4: Update the test case status during execution
5: Link test cases to defects
6: Track the test progress
7: Generate test reports
8: Maintain the test history
123. TEST MANAGEMENT TOOLS
▶ HP ALM (Application Lifecycle Management) or HP-QC (Quality Centre, the older version of ALM)
▶ JIRA
▶ qTest
▶ TestLink
▶ Zephyr
127. • An SDLC model to build an application fast, in a very short span of time
• Delivers workable modules after each release
• Development and testing go side by side
• A combination of the iterative and incremental approaches
Agile is:
133. Agile vs Waterfall model
Waterfall | Agile
A sequential process; once a step has been completed, developers can't go back to a previous step. | Agile came about as a "solution" to the disadvantages of waterfall: quick and easy to develop and deploy.
Waterfall relies heavily on the initial requirements; no further changes are allowed. | The Agile methodology allows changes to be made after the initial planning.
If a requirement error is found, or a change needs to be made, the project has to start from the beginning. | It is easier to add features that keep the product up to date.
The whole product is only tested at the end, so bugs introduced in the very first phases are detected at a late stage. | The requirements are tested from the very beginning, so early detection of bugs is an advantage.
134. • Sprint & Scrum
• Lean & Kanban
• XP (Extreme Programming)
• RAD (Rapid Application Development)
Agile Methods
135. • Scrum is an Agile development method
• A simple way to implement agile in product development
• It is a management and control process for development and testing
• The product is developed in an incremental order
Scrum
136. Roles
Product Owner | Scrum Master | Scrum Team Members
A person from the business end | Facilitator for the PO and the team | A cross-functional group of people
Manages the product backlog | Clears the hurdles faced by the team | Estimates the size and complexity of the work
Prioritizes and refines the backlog | Makes sure the team follows the agile practices | The core technical team that works on the requirements
Adds the tasks to the sprint and approves the demo in the product review | Sets up sprint planning, the scrum meeting, the review and the retrospective | Designs, develops and tests the business requirements
138. Sprint planning
• Sprint window: 1 week (5 days)
• Workable hours per day: 6 hrs
• Resources (2 developers & 2 testers): 4
Total time: 5 × 6 × 4 = 120 hrs
Task1 - estimated time = 10 hrs
Task2 - estimated time = 5 hrs
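The capacity arithmetic above can be checked in a few lines; the numbers and task names are the ones from this slide, used purely as an example.

```python
# Sprint capacity = days x workable hours per day x resources.
days, hours_per_day, resources = 5, 6, 4
capacity = days * hours_per_day * resources
print(capacity)  # 120

# Fit the estimated tasks into that capacity.
tasks = {"Task1": 10, "Task2": 5}
remaining = capacity - sum(tasks.values())
print(remaining)  # 105
```

In practice teams also subtract meetings, leave and a buffer before committing, so the usable capacity is lower than this raw figure.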
139. Product Backlog
"A product backlog is a prioritized list of business requirements."
A typical Scrum backlog comprises the following types of items:
• Features
• Bugs
• Non-functional features
• or any other technical work
140. Sprint planning
"In the scrum methodology each work iteration is called a sprint."
• Every sprint begins with the sprint planning meeting.
• In it, the Product Owner and the team(s) discuss which stories will be moved from the product backlog into the sprint backlog.
• It is the responsibility of the Product Owner to determine what work the team will do.
• Once the team commits to the work, the Product Owner cannot add more work or micromanage.
• The Product Owner can cancel a sprint; this shouldn't happen often, and would usually occur due to a sudden change in business needs.
141. Scrum Meeting or stand-ups
"Stand-up meetings are the daily 15-minute catch-ups among all team members and the scrum master, where they update each other on:
• what they did the previous day,
• what they will do today, and
• what the roadblockers are."
142. Burndown
"A burndown chart is a graphical representation of work left to do versus time."
It supports the daily analysis of tasks remaining versus time remaining in the sprint, to assess whether we are on schedule or behind.
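The data behind a burndown chart is just two series: the ideal line and the actual hours remaining per day. The sprint size below reuses the 120-hour example from the sprint planning slide; the "actual" snapshots are invented for illustration.

```python
# Ideal burndown: planned work decreases linearly to zero over the sprint.
total, days = 120, 5
ideal = [total - total / days * d for d in range(days + 1)]

# Hypothetical daily snapshots of hours actually remaining.
actual = [120, 100, 85, 60, 30, 0]

for day, (i, a) in enumerate(zip(ideal, actual)):
    trend = "on track" if a <= i else "behind"
    print(f"day {day}: ideal {i:.0f}h, actual {a}h -> {trend}")
```

Plotting the two series gives the familiar chart; the gap between them is what the daily stand-up discusses.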
143. Sprint Review
"The sprint review meeting is held at the end of each sprint, and the developed work is demonstrated to the stakeholders."
The meeting itself should be strictly time-boxed to no more than an hour per week of sprint. So a two-week sprint would have a two-hour review, and a one-week sprint a one-hour review.
▶ Reviews the work that was completed and the planned work that was not completed
▶ Presents the completed work to the stakeholders
144. Sprint Retrospective
"At the end of the sprint the whole team gathers to reflect on how things went and what they'd like to change. This meeting is called the sprint retrospective meeting."
Takeaways:
1: What worked well
2: What didn't work well
3: Actions to improve
145. Jira
• Jira is a tool developed by Atlassian, an Australian company
• It is a project management and bug management tool
• "Jira" is a truncation of "Gojira", the Japanese name for Godzilla
146. Jira
Epic: An epic captures a large body of work. It is essentially a large user story that can be broken down into a number of smaller stories. It may take several sprints to complete an epic.
Story: A story, or user story, is a software system requirement expressed in a few short sentences, ideally using non-technical language.
Task: A task is a unit of work contained within a story.
Sub-task: An individual task within a task is called a sub-task.
149. INTRODUCTION
• Automating the manual testing process using a tool is automation testing.
• Using a tool to execute the test cases is automation testing.
• Test automation requires a significant amount of money and skill.
• A significant amount of time can be saved by automation.
• Not everything can be automated.
150. MANUAL vs AUTOMATION
Manual | Automation
Manual testing is done manually by humans | Automation testing is done with the help of a tool
Testing the same thing repeatedly takes much time | Execution takes less time, but writing the automation scripts may take longer
More resources required | Fewer resources required
New features are usually tested manually first | Regression test cases are executed with automation
Executing the same test cases on multiple environments is very time-consuming | The same automation script can be run on multiple machines
Manual testing is prone to human error | Automated testing is more reliable, as it executes the same steps precisely each time
151. When to automate
• High-priority, business-critical features
• Test cases that are executed repeatedly
• Test cases that are very tedious or difficult to perform manually
• Test cases which are time-consuming
153. When not to automate
• Exploratory testing
• Ad-hoc testing
• Usability testing
154. Continuous integration
Continuous Integration (CI) is a development practice
that requires developers to integrate code into a
shared repository several times a day. Each check-in
is then verified by an automated build, allowing
teams to detect problems early.
By integrating regularly, you can detect errors
quickly, and locate them more easily.
156. Benefits
• Faster than manual execution
• Reliable and accurate test results
• Saves time and cost
• Reusable test scripts
• Increased efficiency
• Tests can be triggered automatically based on conditions
• Enables earlier testing
157. Tools
Testing Type | Automation Tools
Functional & regression testing | Selenium WebDriver, HP UFT (QTP), WATIR, SilkTest, Rational Functional Tester
Client-side performance testing | JMeter, LoadRunner, NeoLoad
Server-side performance testing | New Relic
API testing | JMeter, ReadyAPI, Postman
159. WHAT TO TEST IN A
WEBSITE & A MOBILE APP
Video 15
160. What to test in a Website
INTRODUCTION
Two broad testing types:
• Functional testing
• Non-functional testing
161. What to test in a website
Functional tests
• Smoke or sanity testing
• Functional tests
• Check the newly added/modified features for the latest release
• Check all internal/external links
• Check the form/field validations
• Check the submit actions for forms
• Check the database connection and integrity
• Regression tests
162. What to test in a website
Non-functional tests
• Performance testing
◦ Check the page load times on different network speeds
◦ Test the page response times under different loads
◦ Check the server response time
◦ Check whether your application assets are optimized
◦ Test with https://testmysite.thinkwithgoogle.com/ , which gives very useful details about your website
• Security testing
• Compatibility testing
◦ Check the application's responsiveness on different devices, OSs, browsers and browser versions
• Usability testing
◦ Test the navigation
◦ Test the look and feel (no overlapping, website instructions, error messages)
163. What to test in a Mobile app
INTRODUCTION
• Device-specific tests
• Network-specific tests
• App functional and non-functional tests
164. What to test in a Mobile app
DEVICE-SPECIFIC TESTS
1 Check the app's installation/uninstallation on the device
2 Check the launching of the app on the device (verify splash screens, load time, presentation)
3 Connect/disconnect the charger and check that the running app is not impacted
4 Lock/unlock the screen and check that the running app is not impacted
5 Go to the home screen and come back to your app (sleep the app); check that you can switch to other apps smoothly
6 Tilt and shake to check the impact on your running app
7 Check that device notifications do not impact the running app
8 Check that a "no network connection" message is displayed if your app needs a data connection and none is available
9 Check that the app interacts with the device hardware (like GPS)
10 Check that all device buttons have an associated action in your app
165. What to test in a Mobile app
NETWORK-SPECIFIC TESTS
1 Check the app's behavior on 2G, 3G and WiFi connections
2 Check the app's behavior with no network
3 What happens if the user switches from one data network to another
4 Lock/unlock the screen and check that the running app is not impacted
5 Interruption testing: does the app resume when a call comes in
6 Interruption testing: does the app resume when a message comes in
7 Check that device notifications do not impact the running app
8 If the app has a transaction system, how do transactions behave during network fluctuations
9 Check that the app's background data connections can be changed
10 How the app behaves in flight mode
166. What to test in a Mobile app
Functional/Non-functional
Functional - During this testing the tester has to focus on the business requirements that the application must deliver or perform.
Non-functional -
Check the application's heap memory
Check the application's RAM usage
Check the application's CPU usage
Compatibility: on different OS versions and different devices
Usability: check that the user has no issues with the usability of the app
Check that the cache memory can be cleaned
Check the app's performance on different sets of configurations
169. INTRODUCTION
• International Software Testing Qualifications Board
• A software testing qualification certification organisation that operates internationally
• Founded in Edinburgh in November 2002
• ISTQB is a non-profit association legally registered in Belgium
• The ISTQB® is a software testing qualification certification organization with over 350,000 certifications issued
• The ISTQB® consists of 49 member boards worldwide, representing 72 countries
• Official site: http://www.istqb.org/
170. ISTQB CERTIFICATION
There are three levels of certification:
• ISTQB Certified Tester Foundation Level (CTFL)
• ISTQB Certified Tester Advanced Level (CTAL)
◦ Test Manager
◦ Test Analyst
◦ Technical Test Analyst
◦ Full Advanced Level (after passing the above Advanced Level exams)
• Expert Level
◦ Improving the Test Process
◦ Test Management
◦ Test Automation
◦ Security Testing
172. FOUNDATION LEVEL CERTIFICATION
The Foundation Level offers the following exams:
• Foundation Level core (1 syllabus)
• Foundation Level specialist (4 syllabi, two of them currently available, two undergoing development)
◦ Agile Tester
◦ Model-Based Tester
◦ Usability Tester (in development)
◦ Automotive Tester (in development)
174. PASSING SCORE
• 40 questions in total (at least 26 must be answered correctly)
• Time: 1 hr (15 minutes extra for non-native speakers)
• Passing score: 65%
• Fees: 4500 INR (2800 INR if reappearing)
...and no negative marking :)
175. How to clear ISTQB tip 1
Go through the ISTQB syllabus
Find the updated syllabus here (official ISTQB site):
Link: http://www.istqb.org/downloads/syllabi/foundation-level-
syllabus.html
176. How to clear ISTQB tip 2
Go through the ISTQB book at least once
Download the e-books from here (official ISTQB site)
Link: http://www.istqb.org/downloads/e-books.html
177. How to clear ISTQB tip 3
Go through the ISTQB Exam documents
Exam Document here:
Link: http://www.istqb.org/downloads/exam-documents.html
178. How to clear ISTQB tip 4
Practice the mock questions
Search for sample papers and keep practicing the exercises. A good collection is here:
http://istqbexamcertification.com/istqb-dumps-download-mock-tests-and-sample-question-papers/
This will help you find your weak areas; practice more on those.
But never rely completely on sample questions, as these are not the ones that will show up in the exam.
179. How to clear ISTQB tip 5
Make a list of terminologies and keep reviewing it.
180. How to clear ISTQB tip 6
Almost all the answer options will look similar, so always read carefully and understand properly.
Read the questions carefully, and read the options too... double-check!