Practices and Tools for
Better Software Testing
ishepard @DavideSpadini
Davide Spadini
About me
• I am from Verona, in northern Italy; Bachelor's and Master's degrees in Italy
• In August 2016 I started my PhD at Delft University of Technology (2 years and 3 months ago)
• I am enrolled in a European project called SENECA
SENECA
• It involves 3 countries:
1. The Netherlands
2. Spain
3. Greece
• For each country there is one academic partner and one industry partner
• In my case:
  • Academic partner: TU Delft
  • Industry partner: SIG
SENECA: MY AREA
Software Testing
SIG perspective
• SIG does not calculate test code quality metrics
• By the way, what is "test code quality"?
• In general, test code is not as good as production code
• Little research has been done on how to help developers write better test code
MAKE TESTING
GREAT AGAIN
My campaign
• Help developers write better test code by:
1. raising awareness of the effects of writing poor tests
2. providing new techniques and tools
1. Test Quality and Software Quality
On The Relation of Test Smells to Software Code Quality
Davide Spadini, Fabio Palomba, Andy Zaidman, Magiel Bruntink, Alberto Bacchelli
Software Improvement Group, Delft University of Technology, University of Zurich
{d.spadini, a.e.zaidman}@tudelft.nl, m.bruntink@sig.eu, {palomba, bacchelli}@ifi.uzh.ch
Abstract: Test smells are sub-optimal design choices in the implementation of test code. As reported by recent studies, their presence might not only negatively affect the comprehension of test suites but can also lead to test cases being less effective in finding bugs in production code. Although significant steps have been taken toward understanding test smells, there is still a notable absence of studies assessing their association with software quality.
In this paper, we investigate the relationship between the presence of test smells and the change- and defect-proneness of test code, as well as the defect-proneness of the tested production code. To this aim, we collect data on 221 releases of ten software systems and analyze more than a million test cases to investigate the association of six test smells and their co-occurrence with software quality. Key results of our study include: (i) tests with smells are more change- and defect-prone, (ii) 'Indirect Testing', 'Eager Test', and 'Assertion Roulette' are the most significant smells for change-proneness, and (iii) production code is more defect-prone when tested by smelly tests.
I. INTRODUCTION
Automated testing (hereafter referred to as just testing) […] Bavota et al. found evidence of a negative impact of test smells on both comprehensibility and maintainability of test code [7].
Although the study by Bavota et al. [7] made a first, necessary step toward the understanding of maintainability aspects of test smells, our empirical knowledge on whether and how test smells are associated with software quality aspects is still limited. Indeed, van Deursen et al. [74] based their definition of test smells on their anecdotal experience, without extensive evidence on whether and how such smells are negatively associated with the overall system quality.
To fill this gap, in this paper we quantitatively investigate the relationship between the presence of smells in test methods and the change- and defect-proneness of both these test methods and the production code they intend to test. Similar to several previous studies on software quality [24], [62], we employ the proxy metrics change-proneness (i.e., number of times a method changes between two releases) and defect-proneness (i.e., number of defects the method had between two releases).
1. Test Quality and Software Quality
• Test code is more change- and defect-prone if affected by smells
• Production code is more defect-prone if exercised by test code affected by test smells
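The smells named in the paper can be made concrete with a minimal sketch (the parse_price and apply_discount functions below are hypothetical, not taken from the study): an 'Eager Test' exercises several behaviours in one test method, and its run of unexplained asserts forms an 'Assertion Roulette', so when one fails it is unclear which behaviour broke.

```python
import unittest

# Hypothetical production code, used only to illustrate the smells.
def parse_price(text):
    return float(text.strip("$"))

def apply_discount(price, pct):
    return round(price * (1 - pct / 100), 2)

class SmellyTest(unittest.TestCase):
    def test_everything(self):
        # 'Eager Test': one method exercises several behaviours.
        # 'Assertion Roulette': several unexplained asserts; a failure
        # does not say which behaviour actually broke.
        self.assertEqual(parse_price("$10.00"), 10.0)
        self.assertEqual(apply_discount(10.0, 20), 8.0)
        self.assertEqual(apply_discount(8.0, 0), 8.0)

class CleanerTest(unittest.TestCase):
    # One behaviour per test, with an explanatory message per assert.
    def test_parse_price_strips_dollar_sign(self):
        self.assertEqual(parse_price("$10.00"), 10.0, "parsing '$10.00'")

    def test_discount_reduces_price_by_percentage(self):
        self.assertEqual(apply_discount(10.0, 20), 8.0, "20% off 10.00")
```

Both classes pass here; the difference only shows when something breaks, which is exactly the comprehension cost the study associates with smelly tests.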
1. Test Quality and Software Quality
• Test coupling: Free your code and tests
• Problem: coupling between test and production code
• Example: due to a refactoring or a semantic change in the production code, many tests break
• Study:
1. How widespread is the problem?
2. What is the cause, and how is it fixed?
3. Can we offer refactoring strategies?
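The coupling problem above can be sketched in a few lines (the UserStore class is hypothetical): a test that reaches into production internals breaks under a behaviour-preserving refactoring, while a test written against the public interface survives it.

```python
# Hypothetical production class.
class UserStore:
    def __init__(self):
        self._by_id = {}              # internal representation

    def add(self, uid, name):
        self._by_id[uid] = name

    def name_of(self, uid):
        return self._by_id.get(uid)

store = UserStore()
store.add(1, "ada")

# Coupled test: asserts on the internal dict, so renaming _by_id or
# swapping it for a list/database breaks this test with no behaviour change.
assert store._by_id == {1: "ada"}

# Decoupled test: asserts only on observable behaviour, so the same
# refactoring leaves it green.
assert store.name_of(1) == "ada"
```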
1. Test Quality and Software Quality
[Diagram: a commit timeline (#42abc, then #21def) in which production files A.java and B.java and their tests ATest.java and BTest.java evolve; the experiment runs "mvn install" combining the old tests with the new production code]
• Preliminary results: 42% of the tests fail, on average with 9 different errors
2. Current Practices in Testing
To Mock or Not To Mock? An Empirical Study on Mocking Practices
Davide Spadini, Maurício Aniche, Magiel Bruntink, Alberto Bacchelli
Software Improvement Group: {d.spadini, m.bruntink}@sig.eu
Delft University of Technology: {d.spadini, m.f.aniche, a.bacchelli}@tudelft.nl
Abstract: When writing automated unit tests, developers often deal with software artifacts that have several dependencies. In these cases, one has the possibility of either instantiating the dependencies or using mock objects to simulate the dependencies' expected behavior. Even though recent quantitative studies showed that mock objects are widely used in OSS projects, scientific knowledge is still lacking on how and why practitioners use mocks. Such knowledge is fundamental to guide further research on this widespread practice and inform the design of tools and processes to improve it.
The objective of this paper is to increase our understanding of which test dependencies developers (do not) mock and why, as well as what challenges developers face with this practice. To this aim, we create MockExtractor, a tool to mine the usage of mock objects in testing code, and employ it to collect data from three OSS projects and one industrial system. Sampling from this data, we manually analyze how more than 2,000 test dependencies are treated. Subsequently, we discuss our findings with developers from these systems, identifying practices, rationales, and challenges. These results are supported by a structured survey with more than 100 professionals. The study reveals that the usage of mocks is highly dependent on the responsibility and the architectural concern of the class.
To support the simulation of dependencies, mocking frameworks have been developed (e.g., Mockito [7], EasyMock [2], and JMock [3] for Java, Mock [5] and Mocker [6] for Python), which provide APIs for creating mock (i.e., simulated) objects, setting return values of methods in the mock objects, and checking interactions between the component under test and the mock objects. Past research has reported that software projects are using mocking frameworks widely [21], [32] and has provided initial evidence that using a mock object can ease the process of unit testing [29].
However, empirical knowledge is still lacking on how and why practitioners use mocks. To scientifically evaluate mocking and its effects, as well as to help practitioners in their software testing phase, one has to first understand and quantify developers' practices and perspectives. In fact, this allows both to focus future research on the most relevant aspects of mocking and on real developers' needs, and to effectively guide the design of tools and processes. To fill this gap of knowledge, the goal of this paper is to […]
[Chart: survey responses on how often developers mock each dependency type (Web service, External dependency, Database, Domain object, Native library), rated from Never to Always]
2. Current Practices in Testing
Mock Objects For Testing Java Systems: Why and How Developers Use Them, and How They Evolve
Davide Spadini, Maurício Aniche, Magiel Bruntink, Alberto Bacchelli
Abstract: When testing software artifacts that have several dependencies, one has the possibility of either instantiating these dependencies or using mock objects to simulate the dependencies' expected behavior. Even though recent quantitative studies showed that mock objects are widely used both in open source and proprietary projects, scientific knowledge is still lacking on how and why practitioners use mocks. Such knowledge is fundamental to guide further research on this widespread practice.
[…] changes that convert tests from using mocks to using the real implementation of a class. Nevertheless, it is reasonable to hypothesize that the choice of deleting a mock is influenced by many different factors, as happens for the choices of (not) mocking a class, which we reported in the previous sections.
Table 5: When mock objects were introduced (N=2,935).

                                      Spring     Sonarqube    VRaptor    Alura      Total
Mocks introduced from the beginning   234 (86%)  1,485 (84%)  177 (94%)  263 (74%)  2,159 (83%)
Mocks introduced later                 37 (14%)    293 (16%)   12 (6%)    91 (26%)    433 (17%)
Mocks removed from the tests           59 (22%)    243 (14%)    6 (3%)    35 (10%)    343 (13%)

RQ4. In the studied systems, mocks are mostly (80% of the time) present at the inception of the test class and tend to stay in the test class for its whole lifetime (87% of the time).
2. Current Practices in Testing
When Testing Meets Code Review:
Why and How Developers Review Tests
Davide Spadini
Delft University of Technology
Software Improvement Group
Delft, The Netherlands
d.spadini@sig.eu
MaurĆ­cio Aniche
Delft University of Technology
Delft, The Netherlands
m.f.aniche@tudelft.nl
Margaret-Anne Storey
University of Victoria
Victoria, BC, Canada
mstorey@uvic.ca
Magiel Bruntink
Software Improvement Group
Amsterdam, The Netherlands
m.bruntink@sig.eu
Alberto Bacchelli
University of Zurich
Zurich, Switzerland
bacchelli@ifi.uzh.ch
ABSTRACT
Automated testing is considered an essential process for ensuring
software quality. However, writing and maintaining high-quality
test code is challenging and frequently considered of secondary
importance. For production code, many open source and industrial
software projects employ code review, a well-established software
quality practice, but the question remains whether and how code
review is also used for ensuring the quality of test code. The aim
of this research is to answer this question and to increase our un-
derstanding of what developers think and do when it comes to
reviewing test code. We conducted both quantitative and quali-
tative methods to analyze more than 300,000 code reviews, and
interviewed 12 developers about how they review test files. This
work resulted in an overview of current code reviewing practices, a
set of identified obstacles limiting the review of test code, and a set
of issues that developers would like to see improved in code review
ACM Reference Format:
Davide Spadini, MaurĆ­cio Aniche, Margaret-Anne Storey, Magiel Bruntink,
and Alberto Bacchelli. 2018. When Testing Meets Code Review: Why and
How Developers Review Tests. In Proceedings of ICSE ā€™18: 40th International
Conference on Software Engineering , Gothenburg, Sweden, May 27-June 3,
2018 (ICSE ā€™18), 11 pages.
https://doi.org/10.1145/3180155.3180192
1 INTRODUCTION
Automated testing has become an essential process for improving
the quality of software systems [15, 31]. Automated tests (hereafter
referred to as just ā€˜testsā€™) can help ensure that production code is
robust under many usage conditions and that code meets perfor-
mance and security needs [15, 16]. Nevertheless, writing effective
tests is as challenging as writing good production code. A tester has
to ensure that test results are accurate, that all important execution
Prod. ļ¬les are
2 times more likely
to be discussed
than test ļ¬les
Checking out the
change and run
the tests
Test
Driven
Review
2. Current Practices in Testing
Test-Driven Code Review: An Empirical Study
Abstract: Test-Driven Code Review (TDR) is a code review practice in which a reviewer inspects a patch by examining the changed test code before the changed production code. Although this practice has been mentioned positively by practitioners in informal literature and interviews, there is no systematic knowledge on its effects, prevalence, problems, and advantages.
In this paper, we aim at empirically understanding whether this practice has an effect on code review effectiveness and how developers perceive TDR. We conduct (i) a controlled experiment with 93 developers who perform more than 150 reviews, and (ii) 9 semi-structured interviews and a survey with 103 respondents to gather information on how TDR is perceived. Key results from the experiment show that developers adopting TDR find the same proportion of defects in production code, but more in test code, at the expense of fewer maintainability issues in production code. Furthermore, we found that most developers prefer to review production code, as they deem it more critical and believe tests should follow from it. Moreover, generally poor test code quality and a lack of tool support hinder the adoption of TDR.
I. INTRODUCTION
Peer code review is a well-established and widely adopted practice aimed at maintaining and promoting software quality [3]. Contemporary code review, also known as change-based code review, […] programmers, another article supported TDR (collecting more than 1,200 likes): "By looking at the requirements and checking them against the test cases, the developer can have a pretty good understanding of what the implementation should be like, what functionality it covers and if the developer omitted any use cases." Interviewed developers reported preferring to review test code first to better understand the code change before looking for defects in production [50].
The above are compelling arguments in favor of TDR, yet we have no systematic knowledge on this practice: whether TDR is effective in finding defects during code review, how frequently it is used, and what its potential problems and advantages are besides review effectiveness. This knowledge can provide insights for both practitioners and researchers. Developers and project stakeholders can use empirical evidence about TDR's effects, problems, and advantages to make informed decisions about when to adopt it. Researchers can focus their attention on the novel aspects of TDR and the challenges reviewers face to inform future research.
In this paper, our goal is to obtain a deeper understanding
of TDR. We do this by conducting an empirical study set up
Building better tools
• Given the discoveries on how developers review test code, I created GitHub Enhancer:
  • better linking of test and production code
  • code coverage information within the code review
  • go-to-definition support
• and of course, PyDriller!!
GHEnhancer
PyDriller ❤
PyDriller: Python Framework for Mining Software Repositories
Davide Spadini
Delft University of Technology
Software Improvement Group
Delft, The Netherlands
d.spadini@sig.eu
MaurĆ­cio Aniche
Delft University of Technology
Delft, The Netherlands
m.f.aniche@tudelft.nl
Alberto Bacchelli
University of Zurich
Zurich, Switzerland
bacchelli@ifi.uzh.ch
ABSTRACT
Software repositories contain historical and valuable information about the overall development of software systems. Mining software repositories (MSR) is nowadays considered one of the most interesting growing fields within software engineering. MSR focuses on extracting and analyzing data available in software repositories to uncover interesting, useful, and actionable information about the system. Even though MSR plays an important role in software engineering research, few tools have been created and made public to support developers in extracting information from Git repositories. In this paper, we present PyDriller, a Python framework that eases the process of mining Git. We compare our tool against the state-of-the-art Python framework GitPython, demonstrating that PyDriller can achieve the same results with, on average, 50% less LOC and significantly lower complexity.
URL: https://github.com/ishepard/pydriller
Materials: https://doi.org/10.5281/zenodo.1327363
Pre-print: https://doi.org/10.5281/zenodo.1327411
CCS CONCEPTS
• Software and its engineering;
KEYWORDS
[…] actionable insights for software engineering, such as understanding the impact of code smells [13-15], exploring how developers are doing code reviews [2, 4, 10, 21] and which testing practices they follow [20], predicting classes that are more prone to change/defects [3, 6, 16, 17], and identifying the core developers of a software team to transfer knowledge [12].
Among the different sources of information researchers can use, version control systems, such as Git, are among the most used. Indeed, version control systems provide researchers with precise information about the source code, its evolution, the developers of the software, and the commit messages (which explain the reasons for changing).
Nevertheless, extracting information from Git repositories is not trivial. Indeed, many frameworks can be used to interact with Git (depending on the preferred programming language), such as GitPython [1] for Python, or JGit [8] for Java. However, these tools are often difficult to use. One of the main reasons for such difficulty is that they encapsulate all the features of Git; hence, developers are forced to write long and complex implementations to extract even simple data from a Git repository.
In this paper, we present PyDriller, a Python framework that helps developers mine software repositories. PyDriller provides developers with simple APIs to extract information from a Git repository, such as commits, developers, modifications, diffs, and […]
PyDriller
• Aim: to ease the extraction of information from Git repositories
• What is supported:
  • analysing the history of a project
  • retrieving commit information (date, message, authors, etc.)
  • retrieving file information (diff, source code)
• What is not supported:
  • writing to the repo (git pull, git push, git add, git commit, etc.)
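Given those capabilities, a typical traversal looks roughly like the sketch below. It follows PyDriller's documented API (recent releases expose Repository and modified_files; releases contemporary with this talk used RepositoryMining and modifications instead), and the import is kept inside the function since PyDriller is a third-party dependency:

```python
def commit_summary(repo_path):
    """Yield (hash, author, date, changed file names) for each commit.

    A sketch of PyDriller-style traversal; attribute names follow the
    current PyDriller documentation. The import is local so this module
    loads even where PyDriller is not installed.
    """
    from pydriller import Repository
    for commit in Repository(repo_path).traverse_commits():
        files = [m.filename for m in commit.modified_files]
        yield commit.hash, commit.author.name, commit.author_date, files
```

A caller would simply iterate it, e.g. for h, author, date, files in commit_summary("path/to/repo"): print(h, files).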
Statistics
• Everything is lazily evaluated, so you "pay" only for what you get:
1. only commit information: immediate (like git log)
2. commit and file information: ~60 commits/sec (1,240 commits in 22 seconds)
3. commit, file, and metrics information: ~4 commits/sec (1,240 commits in ~5 min)
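This "pay what you get" behaviour comes from lazy evaluation, which can be sketched with plain Python generators (the commit dicts below are a toy stand-in for PyDriller's objects, not its actual implementation): the expensive per-file step runs only for commits a client actually consumes, and not at all if the client stops at commit metadata.

```python
def commits():
    # Cheap step: commit metadata only, comparable to `git log`.
    for i in range(3):
        yield {"hash": f"c{i}", "msg": f"commit {i}"}

def with_files(stream):
    # Expensive step (diff parsing in the real tool); it runs lazily,
    # once per commit, only when the enriched stream is consumed.
    for c in stream:
        c["files"] = [f"file_{c['hash']}.java"]
        yield c

# A client that needs only hashes never pays for the file step:
hashes = [c["hash"] for c in commits()]
assert hashes == ["c0", "c1", "c2"]

# A client that consumes the enriched stream pays for both steps,
# one commit at a time:
first = next(with_files(commits()))
assert first["files"] == ["file_c0.java"]
```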
Statistics
• Some numbers:
1. downloaded approximately 4,000 times
• Community driven:
  • the University of Zurich, TU Delft, and the University of Catania teach PyDriller in their MSR courses
  • SIG uses PyDriller in its quality assessments
PyDriller
• Source code: https://github.com/ishepard/pydriller
• Documentation: https://pydriller.readthedocs.io/en/latest/
• Feel free to leave a star! :)
And now?
• Investigating how people debug test code:
  • how widespread is the problem?
  • how do developers fix and debug it?
Summary
Helping developers write more and better test code by:
1. raising awareness of the effects of writing poor tests
2. providing new techniques and tools

• On The Relation of Test Smells to Software Code Quality (ICSME18)
• Test Coupling: Free your Code and Tests (ESEM19/FSE19)
• To Mock or Not To Mock? An Empirical Study on Mocking Practices (MSR17/EMSE)
• When Testing Meets Code Review: Why and How Developers Review Tests (ICSE18)
• Test-Driven Review: An Empirical Study (hopefully, ICSE19)
• PyDriller (FSE19)
• GHEnhancer
An Empirical Study of Adoption of Software Testing in Open Source ProjectsAn Empirical Study of Adoption of Software Testing in Open Source Projects
An Empirical Study of Adoption of Software Testing in Open Source Projects
Ā 
Controlled experiments, Hypothesis Testing, Test Selection, Threats to Validity
Controlled experiments, Hypothesis Testing, Test Selection, Threats to ValidityControlled experiments, Hypothesis Testing, Test Selection, Threats to Validity
Controlled experiments, Hypothesis Testing, Test Selection, Threats to Validity
Ā 
A Mono- and Multi-objective Approach for Recommending Software Refactoring
A Mono- and Multi-objective Approach for Recommending Software RefactoringA Mono- and Multi-objective Approach for Recommending Software Refactoring
A Mono- and Multi-objective Approach for Recommending Software Refactoring
Ā 
Using Developer Information as a Prediction Factor
Using Developer Information as a Prediction FactorUsing Developer Information as a Prediction Factor
Using Developer Information as a Prediction Factor
Ā 
Make the Most of Your Time: How Should the Analyst Work with Automated Tracea...
Make the Most of Your Time: How Should the Analyst Work with Automated Tracea...Make the Most of Your Time: How Should the Analyst Work with Automated Tracea...
Make the Most of Your Time: How Should the Analyst Work with Automated Tracea...
Ā 
Software Defect Prediction Using Radial Basis and Probabilistic Neural Networks
Software Defect Prediction Using Radial Basis and Probabilistic Neural NetworksSoftware Defect Prediction Using Radial Basis and Probabilistic Neural Networks
Software Defect Prediction Using Radial Basis and Probabilistic Neural Networks
Ā 

Practices and Tools for Better Software Testing

  • 1. Practices and Tools for Better Software Testing ishepard @DavideSpadini Davide Spadini
  • 2. • Bachelor's and Master's in Italy • In August 2016 I started my PhD at Delft University of Technology • 2 years and 3 months About me
  • 3. • I am from Verona, in the north of Italy • In August 2016 I started my PhD at Delft University of Technology • 2 years and 3 months • I am enrolled in a European project called SENECA About me
  • 4. SENECA • It involves 3 countries: 1. The Netherlands 2. Spain 3. Greece • For each country there is one academic partner and one industry partner • In my case: • Academic partner: TU Delft • Industry partner: SIG
  • 5. SENECA — MY AREA Software Testing
  • 6. SIG perspective • SIG does not calculate test code quality metrics • BTW, what is "test code quality"? • In general, test code is not as good as production code • Little research has been done on how to help developers write better test code
  • 8. My campaign • Help developers write better test code by: 1. raising awareness of the effects of writing poor tests 2. providing new techniques and tools
  • 9. 1. Test Quality and Software Quality. On The Relation of Test Smells to Software Code Quality. Davide Spadini (Software Improvement Group, Delft University of Technology), Fabio Palomba (University of Zurich), Andy Zaidman (Delft University of Technology), Magiel Bruntink (Software Improvement Group), Alberto Bacchelli (University of Zurich). Abstract: Test smells are sub-optimal design choices in the implementation of test code. As reported by recent studies, their presence might not only negatively affect the comprehension of test suites but can also lead to test cases being less effective in finding bugs in production code. Although significant steps have been made toward understanding test smells, there is still a notable absence of studies assessing their association with software quality. In this paper, we investigate the relationship between the presence of test smells and the change- and defect-proneness of test code, as well as the defect-proneness of the tested production code. To this aim, we collect data on 221 releases of ten software systems and we analyze more than a million test cases to investigate the association of six test smells and their co-occurrence with software quality. Key results of our study include: (i) tests with smells are more change- and defect-prone, (ii) 'Indirect Testing', 'Eager Test', and 'Assertion Roulette' are the most significant smells for change-proneness and, (iii) production code is more defect-prone when tested by smelly tests. I. INTRODUCTION Automated testing (hereafter referred to as just testing) found evidence of a negative impact of test smells on both comprehensibility and maintainability of test code [7]. Although the study by Bavota et al. [7] made a first, necessary step toward the understanding of maintainability aspects of test smells, our empirical knowledge on whether and how test smells are associated with software quality aspects is still limited. Indeed, van Deursen et al. [74] based their definition of test smells on their anecdotal experience, without extensive evidence on whether and how such smells are negatively associated with the overall system quality. To fill this gap, in this paper we quantitatively investigate the relationship between the presence of smells in test methods and the change- and defect-proneness of both these test methods and the production code they intend to test. Similar to several previous studies on software quality [24], [62], we employ the proxy metrics change-proneness (i.e., number of times a method changes between two releases) and defect-proneness (i.e., number of defects the method had between two
  • 10. 1. Test Quality and Software Quality • Test code is more change- and defect-prone if affected by smells • Production code is more defect-prone if exercised by test code affected by test smells
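To make the 'Assertion Roulette' smell from the previous slides concrete, here is a minimal hypothetical Python sketch (function and test names invented for illustration): one test packs several unexplained assertions together, so a failure is hard to attribute, while the second variant adds messages.

```python
# Hypothetical illustration (invented names) of the 'Assertion Roulette'
# test smell: many assertions without messages in one test method.

def parse_price(raw: str):
    """Split a string like 'EUR 9.99' into a (currency, amount) pair."""
    currency, amount = raw.split()
    return currency, float(amount)

def test_parse_price_smelly():
    # Smell: several unexplained assertions together; when one fails,
    # the report does not say which aspect of the behaviour broke.
    currency, amount = parse_price("EUR 9.99")
    assert currency == "EUR"
    assert amount == 9.99
    assert isinstance(amount, float)

def test_parse_price_refactored():
    # Better: assertion messages attribute a failure to one behaviour.
    currency, amount = parse_price("EUR 9.99")
    assert currency == "EUR", "currency code should be the first token"
    assert amount == 9.99, "amount should be parsed as a float"
```

The refactored test reports *which* expectation failed, which is exactly the comprehension problem the study associates with smelly tests.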
  • 11. 1. Test Quality and Software Quality • Test coupling: free your code and tests • Problem: coupling between test and production code • Example: due to a refactoring or a semantic change in the production code, many tests break • Study: 1. How widespread is the problem? 2. What causes it and how is it fixed? 3. Can we offer refactoring strategies?
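The kind of test-production coupling described above can be sketched as follows (a hypothetical Python example, not taken from the study): a test that asserts on an internal attribute breaks under a pure rename refactoring, while a behavioural test survives it.

```python
# Hypothetical sketch (invented names) of test-production coupling.

class Cart:
    def __init__(self):
        self._items = []              # internal representation

    def add(self, price: float) -> None:
        self._items.append(price)

    def total(self) -> float:
        return sum(self._items)

def coupled_test() -> bool:
    # Coupled: peeks at Cart._items, so renaming that attribute
    # would break this test even though behaviour is unchanged.
    cart = Cart()
    cart.add(2.0)
    return len(cart._items) == 1

def behavioural_test() -> bool:
    # Decoupled: only the public API is used; refactorings survive.
    cart = Cart()
    cart.add(2.0)
    return cart.total() == 2.0
```

Tests of the second kind are the ones that keep compiling and passing when only the internals of the production code change.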
  • 12–15. 1. Test Quality and Software Quality (diagram residue from four slides: a commit timeline with commits #42abc and #21def touching A.java, ATest.java, B.java, and BTest.java, ending with mvn install run with the old tests against the new production code)
  • 16. 1. Test Quality and Software Quality • Preliminary results: 42% of the tests fail, on average with 9 different errors
  • 17. 2. Current Practices in Testing. To Mock or Not To Mock? An Empirical Study on Mocking Practices. Davide Spadini (Software Improvement Group, Delft University of Technology), Maurício Aniche (Delft University of Technology), Magiel Bruntink (Software Improvement Group), Alberto Bacchelli (Delft University of Technology). Abstract: When writing automated unit tests, developers often deal with software artifacts that have several dependencies. In these cases, one has the possibility of either instantiating the dependencies or using mock objects to simulate the dependencies' expected behavior. Even though recent quantitative studies showed that mock objects are widely used in OSS projects, scientific knowledge is still lacking on how and why practitioners use mocks. Such knowledge is fundamental to guide further research on this widespread practice and inform the design of tools and processes to improve it. The objective of this paper is to increase our understanding of which test dependencies developers (do not) mock and why, as well as what challenges developers face with this practice. To this aim, we create MOCKEXTRACTOR, a tool to mine the usage of mock objects in testing code, and employ it to collect data from three OSS projects and one industrial system. Sampling from this data, we manually analyze how more than 2,000 test dependencies are treated. Subsequently, we discuss our findings with developers from these systems, identifying practices, rationales, and challenges. These results are supported by a structured survey with more than 100 professionals. The study reveals that the usage of mocks is highly dependent on the responsibility and the architectural concern of the class. To support the simulation of dependencies, mocking frameworks have been developed (e.g., Mockito [7], EasyMock [2], and JMock [3] for Java, Mock [5] and Mocker [6] for Python), which provide APIs for creating mock (i.e., simulated) objects, setting return values of methods in the mock objects, and checking interactions between the component under test and the mock objects. Past research has reported that software projects are using mocking frameworks widely [21] [32] and has provided initial evidence that using a mock object can ease the process of unit testing [29]. However, empirical knowledge is still lacking on how and why practitioners use mocks. To scientifically evaluate mocking and its effects, as well as to help practitioners in their software testing phase, one has to first understand and quantify developers' practices and perspectives. In fact, this allows both to focus future research on the most relevant aspects of mocking and on real developers' needs, as well as to effectively guide the design of tools and processes. To fill this gap of knowledge, the goal of this paper is to
  • 18. 2. Current Practices in Testing To Mock or Not To Mock? An Empirical Study on Mocking Practices Davide Spadiniā‡¤ā€ , MaurĆ­cio Anicheā€ , Magiel Bruntinkā‡¤, Alberto Bacchelliā€  ā‡¤Software Improvement Group {d.spadini, m.bruntink}@sig.eu ā€ Delft University of Technology {d.spadini, m.f.aniche, a.bacchelli}@tudelft.nl Abstractā€”When writing automated unit tests, developers often deal with software artifacts that have several dependencies. In these cases, one has the possibility of either instantiating the dependencies or using mock objects to simulate the dependen- ciesā€™ expected behavior. Even though recent quantitative studies showed that mock objects are widely used in OSS projects, scientiļ¬c knowledge is still lacking on how and why practitioners use mocks. Such a knowledge is fundamental to guide further research on this widespread practice and inform the design of tools and processes to improve it. The objective of this paper is to increase our understanding of which test dependencies developers (do not) mock and why, as well as what challenges developers face with this practice. To this aim, we create MOCKEXTRACTOR, a tool to mine the usage of mock objects in testing code and employ it to collect data from three OSS projects and one industrial system. Sampling from this data, we manually analyze how more than 2,000 test dependencies are treated. Subsequently, we discuss our ļ¬ndings with developers from these systems, identifying practices, rationales, and challenges. These results are supported by a structured survey with more than 100 professionals. The study reveals that the usage of mocks is highly dependent on the responsibility and the architectural concern of the class. 
[Chart: survey responses on how often developers mock each type of dependency (Web service, External dependency, Database, Domain object, Native library), on a scale from Never to Always]
  • 19. 2. Current Practices in Testing Mock Objects For Testing Java Systems: Why and How Developers Use Them, and How They Evolve Davide Spadini, Maurício Aniche, Magiel Bruntink, Alberto Bacchelli Abstract: When testing software artifacts that have several dependencies, one has the possibility of either instantiating these dependencies or using mock objects to simulate the dependencies' expected behavior. Even though recent quantitative studies showed that mock objects are widely used both in open source and proprietary projects, scientific knowledge is still lacking on how and why practitioners use mocks. Such knowledge is fundamental to guide further research on this
  • 20. 2. Current Practices in Testing Mock Objects For Testing Java Systems: Why and How Developers Use Them, and How They Evolve (continued) ... changes that convert tests from using mocks to using the real implementation of a class. Nevertheless, it is reasonable to hypothesize that the choice of deleting a mock is influenced by many different factors, as happens with the choices of (not) mocking a class, which we reported in the previous sections.

Table 5: When mock objects were introduced (N=2,935)

                                      Spring     Sonarqube    VRaptor    Alura      Total
  Mocks introduced from the beginning 234 (86%)  1,485 (84%)  177 (94%)  263 (74%)  2,159 (83%)
  Mocks introduced later              37 (14%)   293 (16%)    12 (6%)    91 (26%)   433 (17%)
  Mocks removed from the tests        59 (22%)   243 (14%)    6 (3%)     35 (10%)   343 (13%)

RQ4. In the studied systems, mocks are mostly (80% of the time) present at the inception of the test class and tend to stay in the test class for its whole lifetime (87% of the time).
  • 21. 2. Current Practices in Testing When Testing Meets Code Review: Why and How Developers Review Tests Davide Spadini (Delft University of Technology / Software Improvement Group, d.spadini@sig.eu), Maurício Aniche (Delft University of Technology, m.f.aniche@tudelft.nl), Margaret-Anne Storey (University of Victoria, mstorey@uvic.ca), Magiel Bruntink (Software Improvement Group, m.bruntink@sig.eu), Alberto Bacchelli (University of Zurich, bacchelli@ifi.uzh.ch) ABSTRACT Automated testing is considered an essential process for ensuring software quality. However, writing and maintaining high-quality test code is challenging and frequently considered of secondary importance. For production code, many open source and industrial software projects employ code review, a well-established software quality practice, but the question remains whether and how code review is also used for ensuring the quality of test code. The aim of this research is to answer this question and to increase our understanding of what developers think and do when it comes to reviewing test code. We used both quantitative and qualitative methods to analyze more than 300,000 code reviews, and interviewed 12 developers about how they review test files. This work resulted in an overview of current code reviewing practices, a set of identified obstacles limiting the review of test code, and a set of issues that developers would like to see improved in code review. ACM Reference Format: Davide Spadini, Maurício Aniche, Margaret-Anne Storey, Magiel Bruntink, and Alberto Bacchelli. 2018. When Testing Meets Code Review: Why and How Developers Review Tests. In Proceedings of ICSE '18: 40th International Conference on Software Engineering, Gothenburg, Sweden, May 27-June 3, 2018 (ICSE '18), 11 pages.
https://doi.org/10.1145/3180155.3180192 1 INTRODUCTION Automated testing has become an essential process for improving the quality of software systems [15, 31]. Automated tests (hereafter referred to as just 'tests') can help ensure that production code is robust under many usage conditions and that code meets performance and security needs [15, 16]. Nevertheless, writing effective tests is as challenging as writing good production code. A tester has to ensure that test results are accurate, that all important execution
  • 22. 2. Current Practices in Testing When Testing Meets Code Review: Why and How Developers Review Tests
Prod. files are 2 times more likely to be discussed than test files
Checking out the change and running the tests
Test-Driven Review
  • 23. 2. Current Practices in Testing Test-Driven Code Review: An Empirical Study BLINDED AUTHOR(S) AFFILIATION(S) Abstract: Test-Driven Code Review (TDR) is a code review practice in which a reviewer inspects a patch by examining the changed test code before the changed production code. Although this practice has been mentioned positively by practitioners in informal literature and interviews, there is no systematic knowledge on its effects, prevalence, problems, and advantages. In this paper, we aim at empirically understanding whether this practice has an effect on code review effectiveness and how developers perceive TDR. We conduct (i) a controlled experiment with 93 developers who perform more than 150 reviews, and (ii) 9 semi-structured interviews and a survey with 103 respondents to gather information on how TDR is perceived. Key results from the experiment show that developers adopting TDR find the same proportion of defects in production code, but more in test code, at the expense of fewer maintainability issues in production code. Furthermore, we found that most developers prefer to review production code as they deem it more critical and believe tests should follow from it. Moreover, generally poor test code quality and lack of tool support hinder the adoption of TDR. I. INTRODUCTION Peer code review is a well-established and widely adopted practice aimed at maintaining and promoting software quality [3]. Contemporary code review, also known as Change- programmers, another article supported TDR (collecting more than 1,200 likes): "By looking at the requirements and checking them against the test cases, the developer can have a pretty good understanding of what the implementation should be like, what functionality it covers and if the developer omitted any use cases." Interviewed developers reported preferring to review test code first to better understand the code change before looking for defects in production [50].
The above are compelling arguments in favor of TDR, yet we have no systematic knowledge on this practice: whether TDR is effective in finding defects during code review, how frequently it is used, and what its potential problems and advantages are besides review effectiveness. This knowledge can provide insights for both practitioners and researchers. Developers and project stakeholders can use empirical evidence about TDR's effects, problems, and advantages to make informed decisions about when to adopt it. Researchers can focus their attention on the novel aspects of TDR and the challenges reviewers face to inform future research. In this paper, our goal is to obtain a deeper understanding of TDR. We do this by conducting an empirical study set up
  • 24. Building better tools • Given our findings on how developers review test code, I created GitHub Enhancer: • better linking of test and production code • code coverage information within the code review • go-to-definition support • and of course, PyDriller!!
  • 27. PyDriller PyDriller: Python Framework for Mining Software Repositories Davide Spadini (Delft University of Technology / Software Improvement Group, d.spadini@sig.eu), Maurício Aniche (Delft University of Technology, m.f.aniche@tudelft.nl), Alberto Bacchelli (University of Zurich, bacchelli@ifi.uzh.ch) ABSTRACT Software repositories contain historical and valuable information about the overall development of software systems. Mining software repositories (MSR) is nowadays considered one of the most interesting growing fields within software engineering. MSR focuses on extracting and analyzing data available in software repositories to uncover interesting, useful, and actionable information about the system. Even though MSR plays an important role in software engineering research, few tools have been created and made public to support developers in extracting information from Git repositories. In this paper, we present PyDriller, a Python framework that eases the process of mining Git. We compare our tool against the state-of-the-art Python framework GitPython, demonstrating that PyDriller can achieve the same results with, on average, 50% less LOC and significantly lower complexity. URL: https://github.com/ishepard/pydriller, Materials: https://doi.org/10.5281/zenodo.1327363, Pre-print: https://doi.org/10.5281/zenodo.1327411 CCS CONCEPTS • Software and its engineering; KEYWORDS actionable insights for software engineering, such as understanding the impact of code smells [13-15], exploring how developers are doing code reviews [2, 4, 10, 21] and which testing practices they follow [20], predicting classes that are more prone to change/defects [3, 6, 16, 17], and identifying the core developers of a software team to transfer knowledge [12]. Among the different sources of information researchers can use, version control systems, such as Git, are among the most used ones.
Indeed, version control systems provide researchers with precise information about the source code, its evolution, the developers of the software, and the commit messages (which explain the reasons for changes). Nevertheless, extracting information from Git repositories is not trivial. Indeed, many frameworks can be used to interact with Git (depending on the preferred programming language), such as GitPython [1] for Python, or JGit for Java [8]. However, these tools are often difficult to use. One of the main reasons for such difficulty is that they encapsulate all the features from Git; hence, developers are forced to write long and complex implementations to extract even simple data from a Git repository. In this paper, we present PyDriller, a Python framework that helps developers to mine software repositories. PyDriller provides developers with simple APIs to extract information from a Git repository, such as commits, developers, modifications, diffs, and
  • 28. PyDriller • Aim: to ease the extraction of information from Git repositories • What is supported: • analysing the history of a project • retrieving commit information (date, message, authors, etc.) • retrieving file information (diff, source code) • What is not supported: • writing to the repo (git pull, git push, git add, git commit, etc.)
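As a sketch of the supported read-only operations, the snippet below traverses a repository's history with PyDriller. It assumes PyDriller is installed (`pip install pydriller`) and uses the current `Repository` API (earlier releases used `RepositoryMining` instead); the `summarize_history` helper is a name invented here for illustration.

```python
# Sketch: read-only traversal of a Git repository's history with PyDriller.
# The import is guarded so the sketch still loads where pydriller is absent.
try:
    from pydriller import Repository  # current API; old releases: RepositoryMining
except ImportError:
    Repository = None

def summarize_history(repo_path, limit=5):
    """Yield (short hash, author, first message line, modified file names)."""
    if Repository is None:
        raise RuntimeError("pydriller is not installed")
    for i, commit in enumerate(Repository(repo_path).traverse_commits()):
        if i >= limit:
            break
        yield (
            commit.hash[:8],
            commit.author.name,
            commit.msg.splitlines()[0] if commit.msg else "",
            [m.filename for m in commit.modified_files],
        )

# Usage (accepts a local path or a remote URL):
# for row in summarize_history("https://github.com/ishepard/pydriller"):
#     print(*row)
```

Note that everything here only reads from the repository, matching the slide: PyDriller offers no write operations such as push or commit.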
  • 29. Statistics • Everything is lazily evaluated, so you "pay" only for what you use: 1. only commit information: immediate (like git log) 2. commit and file information: ~60 commits/sec (1240 commits in 22 seconds) 3. commit, file and metrics information: ~4 commits/sec (1240 commits in ~5 min)
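The "pay only for what you use" design can be illustrated in plain Python with a generator plus a lazy property: expensive work happens only when a field is actually accessed. This is a simplified toy illustration of the lazy-evaluation pattern, not PyDriller's real classes.

```python
class Commit:
    """Cheap metadata is stored eagerly; the expensive diff is computed lazily."""
    def __init__(self, sha, msg):
        self.sha = sha      # cheap: always available, like `git log`
        self.msg = msg      # cheap: always available
        self._diff = None   # expensive: computed on first access only

    @property
    def diff(self):
        if self._diff is None:
            self._diff = self._compute_diff()  # stands in for a costly git call
        return self._diff

    def _compute_diff(self):
        return f"diff for {self.sha}"

def traverse_commits(n):
    # A generator: commits are produced one at a time, on demand.
    for i in range(n):
        yield Commit(sha=f"{i:07x}", msg=f"commit {i}")

# Iterating over metadata alone is cheap; no diff is ever computed.
shas = [c.sha for c in traverse_commits(3)]
# Accessing .diff triggers the expensive computation, for that commit only.
first = next(traverse_commits(3))
assert first.diff == "diff for 0000000"
```

This explains the three speed tiers on the slide: each extra kind of information (files, then metrics) adds cost only for the commits where you actually request it.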
  • 30. Statistics • Some numbers: • Downloaded approximately 4,000 times • Community driven • University of Zurich, TU Delft and University of Catania teach PyDriller in their MSR courses • SIG uses PyDriller in their quality assessments
  • 31. PyDriller ā€¢ Source code: https://github.com/ishepard/pydriller ā€¢ Doc: https://pydriller.readthedocs.io/en/latest/ ā€¢ Feel free to leave a star! :)
  • 32. And now? • Investigating how people debug test code • how widespread is the problem? • how do developers fix/debug it?
  • 33. Summary Helping developers in writing more/better test code by: 1. raising awareness of the effect of writing poor tests 2. means of new techniques and tools • On the Relation of Test Smells to Software Code Quality (ICSME18) • Test Coupling: Free Your Code and Tests (ESEM19/FSE19) • To Mock or Not To Mock? An Empirical Study on Mocking Practices (MSR17/EMSE) • When Testing Meets Code Review: Why and How Developers Review Tests (ICSE18) • Test-Driven Code Review: An Empirical Study (hopefully, ICSE19) • PyDriller (FSE19) • GHEnhancer