Code Quality
Quality is not an act, it is a habit.
—Aristotle

Mark Daggett 2013
@heavysixer
http://www.github.com/heavysixer
http://www.markdaggett.com

Unless source quoted all content is licensed under a Creative Commons Attribution License.
Other content is copyright © 2014 Mark Daggett. Some Rights Reserved.
“The word quality has, of course, two meanings:
first, the nature or essential characteristic of
something, as in His voice has the quality of
command; second, a condition of excellence
implying fine quality as distinct from poor
quality.”
–Barbara W. Tuchman

Quality is self-nourishing.

Kinds of Quality
Objective
Subjective

Subjective
• Subjective quality often describes code that is inspired or essential, or what Tuchman calls an "innate excellence."
• It depends on personal experience, or the guidance of skilled individuals, to recognize and promote excellence within their field. It asserts that subjective quality at its most essential level is universally true, not so much created as discovered.
Objective
• If genius can be measured, it can be quantified and repeated.
• Makes, applies, and refines empirical approximations about the subject in a feedback loop.
• Lends itself to algorithms, test suites, and software tooling.

Quality is measured using metrics
Aesthetics: This metric measures the visual cohesion of
the code, but also encompasses the consistency and
thoughtfulness of formatting, naming conventions, and
document structure. Aesthetics are measured to answer
these questions:
A. How readable is the code?
B. How are the individual parts organized on the page?
C. Does it use best practices in terms of programming
style?
Quality is measured using metrics
Completeness: This metric measures whether the code is "fit for purpose." To be considered complete, a program must meet or exceed the requirements of the specified problem. Completeness can also measure how well a particular implementation conforms to industry standards or ideals. The questions about completeness this measure attempts to answer are these:
A. Does the code solve the problem it was meant to?
B. Does the code produce the desired output given an expected input?
C. Does it meet all the defined use cases?
D. Is it secure?
E. Does it handle edge cases well?
Quality is measured using metrics
Performance: Performance measures an implementation against known benchmarks to determine how successful it is. These metrics may consider attributes such as program size, system resource efficiency, load time, or bugs per line of code. Using performance measures, you can answer the following questions:
A. How efficient is this approach?
B. What load can it handle?
C. What are the limits on capacity with this code?
Quality is measured using metrics
Effort: This metric measures the development cost incurred to produce and support the code. Effort can be categorized in terms of time, money, or resources used. Measuring effort can help answer these questions:
A. Is the code maintainable?
B. Does it deploy easily?
C. Is it documented?
D. How much did it cost to write?
Quality is measured using metrics
Durability: Durability measures a program's life in production. It can also be thought of as a measure of reliability, which is another way to measure longevity. Durability can be measured to answer these questions:
A. Does it perform reliably?
B. How long can it be run before it must be restarted, upgraded, and/or replaced?
C. Is it scalable?
Quality is measured using metrics
Reception: Reception measures how other programmers
evaluate and value the code. Tracking reception allows you to
answer these questions:
A. How hard to understand is the code?
B. How well thought-out are the design decisions?
C. Does the approach leverage established best practices?
D. Is it enjoyable to use?

Why measure Quality?
• What gets measured gets done.
• Identify areas of technical debt in your software.
• Gauge the amount of future effort needed to maintain code.
• Determine the likelihood that bugs exist.
• Learn when you have reached an appropriate level of test coverage using simple heuristics.
• Quality is much harder to add later.
Measuring Quality of JS

• Static Code Analysis
• Syntax Validations
Quality Metrics
• Excessive Comments
• Lines of Code
• Coupling
• Variables per Function
• Arguments per Function
• Nesting Depth
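Several of these metrics can be enforced mechanically. As a minimal sketch, JSHint exposes options that cap some of the measures above; the thresholds shown here are illustrative assumptions, not recommendations:

// .jshintrc (JSHint tolerates comments in its config file)
{
  "maxparams": 4,       // arguments per function
  "maxdepth": 3,        // nesting depth
  "maxstatements": 25,  // statements per function
  "maxcomplexity": 7    // cyclomatic complexity (see the next slide)
}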
Cyclomatic Complexity
This measure derives a complexity score in one of two ways:
1. It can count all the decision points within a function and increment that count by one.
2. It can treat the function as a control flow graph (G), subtract the number of vertices (n) from the number of edges (e), and add twice the number of connected components (p): v(G) = e - n + 2p

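As a worked example of the second method (the graph sizes are hypothetical): a control flow graph with 9 edges, 8 vertices, and 1 connected component scores v(G) = 9 - 8 + 2 * 1 = 3.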
Cyclomatic Complexity
// score = 1
route = function() {
  // score = 2
  if (request && request.controller) {
    switch (true) {
      // score = 3
      case request.controller === "home":
        // score = 4
        if (request.action) {
          ...
Cyclomatic Complexity
Limitations
• It focuses exclusively on control flow as the only source of complexity within a function.
• Any programmer who has spent even a trivial amount of time reading code knows that complexity comes from more than just control flow.

Cyclomatic Complexity
Limitations
// Cyclomatic score: 2
if (true) {
  console.log('true');
}

// Cyclomatic score: 2
if ([+[]] + [] == +false) {
  console.log('surprise also true!');
}
NPATH Complexity
NPATH counts the acyclic execution paths through a function, and is an objective measure of software complexity related to the ease with which software can be comprehensively tested. (Nejmeh, 1988)

NPATH Complexity
1. The cyclomatic complexity number does not properly account for the different number of acyclic paths through a linear function as opposed to an exponential one.
2. It treats all control flow mechanisms the same way. However, Nejmeh argues that some structures are inherently harder to understand and use properly.
3. McCabe's approach does not account for the level of nesting within a function. For example, three sequential if statements receive the same score as three nested if statements. However, Nejmeh argues that a programmer will have a harder time understanding the latter, which should therefore be considered more complex. A sketch of this difference follows below.

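Both fragments below contain three if statements; cyclomatic complexity gives each a score of 4, while NPATH distinguishes them (the helper functions are hypothetical):

// Three sequential if statements: NPATH = 2 * 2 * 2 = 8
function sequential(a, b, c) {
  if (a) { stepOne(); }
  if (b) { stepTwo(); }
  if (c) { stepThree(); }
}

// Three nested if statements: NPATH = 4, yet arguably
// harder to follow than the sequential version.
function nested(a, b, c) {
  if (a) {
    if (b) {
      if (c) {
        stepThree();
      }
    }
  }
}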
NPATH Complexity
Limitations
• Quantifying complexity benefits the programmer, not the runtime engine. Therefore these metrics are, at a fundamental level, based on the creator's own personal definition of complexity.
• In the case of NPATH, Nejmeh argues that some control flow statements are inherently easier to understand than others. For example, you will receive a lower NPATH score for using a switch statement with two case labels than for a pair of sequential if statements.
• Although the pair of if statements may require more lines of code, I don't believe they are intrinsically harder to understand.
Halstead Metrics
In the late 1970s, computer programs were written as single files that, over time, became harder to maintain and enhance due to their monolithic structure.
In an effort to improve the quality of these programs, Maurice Halstead developed a series of quantitative measures to determine the complexity of a program's source.
Halstead Metrics

Halstead's metrics tally a function's use of operators and operands as inputs for their various measures.

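A minimal sketch of the counting itself; note that what counts as an "operator" versus an "operand" varies between implementations, so these tallies are one plausible reading:

var sum = a + b;
// Operators: var, =, +, ;   ->  N1 = 4 total, n1 = 4 unique
// Operands:  sum, a, b      ->  N2 = 3 total, n2 = 3 unique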
Halstead Metrics
Program Length (N)
The program length is calculated by adding the total number of operands and operators together (N1 + N2).
A large number indicates that the function may benefit from being broken into smaller components. You can express program length in this way:
var N = N1 + N2;

Halstead Metrics
Vocabulary Size (n)
The vocabulary size is derived by adding the unique operators and operands together (n1 + n2). Just as with the program length metric, a higher number is an indicator that the function may be doing too much. You can represent vocabulary size with the following expression:
var n = n1 + n2;

Halstead Metrics
Program Volume (V)
If your brain were a glass jar, a program's volume describes how much of the container it occupies. It describes the amount of code the reader must mentally parse in order to understand the function completely. The program volume considers the total number of operations performed against operands within a function. As a result, a function will receive the same score regardless of whether it has been minified.
var V = N * (Math.log(n) / Math.log(2));
Halstead Metrics
Difficulty Level (D)
The difficulty level measures the likelihood that a reader will misunderstand the source code. It is calculated by multiplying half the number of unique operators by the total number of operands divided by the number of unique operands. In JavaScript, it would be written like this:
var D = (n1 / 2) * (N2 / n2);
Halstead Metrics
Programming Effort (E)
This measure estimates the effort a competent programmer will expend to understand, use, and improve a function, based on its volume and difficulty scores. Programming effort can be represented as follows:
var E = V * D;

Halstead Metrics
Time to Implement (T)
This measure estimates the time it takes a competent programmer to implement a given function. Halstead derived this metric by dividing effort (E) by the Stroud number, the quantity of rudimentary (binary) decisions that a person can make per second. Because the Stroud number is not derived from the program source, it can be calibrated over time by comparing the expected and actual results. The time to implement can be expressed like this:
var T = E / 18;
Halstead Metrics
Number of Bugs (B)
This metric estimates the number of software defects already in a given program. As you might expect, the number of bugs correlates strongly with a program's complexity (volume), but is mitigated by the programmer's own skill level (E1). Halstead found in his own research that a sufficient value for E1 can be found in the range of 3000 to 3200. Bugs can be estimated using the following expression:
var B = V / E1;
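Pulling the formulas above together, here is a minimal runnable sketch; the tallies fed in at the end are assumptions for a hypothetical function, not measurements:

// Compute the Halstead measures from raw operator/operand tallies.
function halstead(n1, n2, N1, N2) {
  var N = N1 + N2;                          // program length
  var n = n1 + n2;                          // vocabulary size
  var V = N * (Math.log(n) / Math.log(2));  // volume
  var D = (n1 / 2) * (N2 / n2);             // difficulty
  var E = V * D;                            // effort
  var T = E / 18;                           // time, using a Stroud number of 18
  var B = V / 3000;                         // estimated bugs, using E1 = 3000
  return { length: N, vocabulary: n, volume: V,
           difficulty: D, effort: E, time: T, bugs: B };
}

// Hypothetical tallies: 10 unique operators, 8 unique operands,
// 40 total operators, 30 total operands.
console.log(halstead(10, 8, 40, 30));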
Tooling
Complexity Report
1. Lines of code
2. Arguments per function
3. Cyclomatic complexity
4. Halstead metrics
5. Maintainability index

Tooling
Complexity Report Example
Maintainability index: 125.84886810899188
Aggregate cyclomatic complexity: 32
Mean parameter count: 0.9615384615384616

Function: parseCommandLine
Line No.: 27
Physical SLOC: 103
Logical SLOC: 19
Parameter count: 0
Cyclomatic complexity: 7
Halstead difficulty: 11.428571428571427
Halstead volume: 1289.3654689326472
Halstead effort: 14735.605359230252

Function: expectFiles
Line No.: 131
Physical SLOC: 5
Logical SLOC: 2
Parameter count: 2
Cyclomatic complexity: 2
Halstead difficulty: 3

Plato

http://es-analysis.github.io/plato/examples/grunt/

Improving Testability
For every complex problem there is a solution that is
simple, neat, and wrong.
—H. L. Mencken

Why testing fails to test
JavaScript testing fails when the test suite passes. The goal of testing is not to write tests but to find bugs by making programs fail!

Why testing fails to test
1. The JavaScript community does not have the same quality of tooling for testing code that other languages enjoy. Without quality tools it is impractical, if not impossible, to write tests.
2. JavaScript is often inextricably linked to the user interface, and used to produce visual effects, which must be experienced and evaluated by a human. The need for manual evaluation prevents JavaScript from being tested programmatically.
3. There is no way to step through a running test on the fly the way you can with the command line debuggers found in other languages. Without this capability, debugging JavaScript is a nightmare.
4. The ever-changing variety of host environments (browsers) makes testing impractical, because developers have to repeatedly retest the same program logic for each host environment.

Testing fallacies
Testing proves there are no bugs in a program.
Program testing can be used to show the presence of bugs, but never to show their absence! (Dijkstra)

Testing fallacies
Successful tests are ones that complete without errors.
"Consider the analogy of a person visiting a doctor because of an overall feeling of malaise. If the doctor runs some laboratory tests that do not locate the problem, we do not call the laboratory tests "successful"; they were unsuccessful tests... the patient is still ill, and the patient now questions the doctor's ability as a diagnostician." (Myers, 1979)

Testing fallacies
Testing ensures the program is of high quality.
When managers use tests as a defensive blockade against poor-quality software, they may be ensuring the opposite outcome. When tests are hurdles for programmers to overcome, they become obstacles, not assets.

Testing fallacies
1. Using tests as a measure of quality demoralizes programmers, because it implies that their source is intrinsically of low quality to start with.
2. It constructs a useless dichotomy between the testing and development processes. It may also have the effect of stoking tensions in teams by separating testers from developers.
3. It suggests that tests by themselves can add quality to the code base.
4. If tests are the arbiter of quality, they imply an asymmetry in power between testers and developers, as if developers must supplicate to testers on behalf of their programs.
Testing fallacies
Testing prevents future errors.
The fallacy in this viewpoint is that it assumes tests should have some predictive quality to them. This leads developers to write superfluous "what if" tests, which muddy the clarity of purpose of testing.
!

Testing fallacies
Testing proves that the program works as designed.
More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded; indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding, and the rest.
Confirmation Bias
A confirmation bias describes the likelihood that a person will favor information that supports their worldview and disregard evidence to the contrary.
Not surprisingly, programmers, who spend much of their time inside their own heads, can also suffer from confirmation bias, though its expression can be more subtle.

Selective Seeing
Writing software, like writing a book, should be a personal expression of the author's craft. As such, programmers, like other craftsmen, tend to see themselves reflected in their work product.
Programmers are inclined to be optimistic about their own abilities and, by extension, the software they write. This intimacy between a programmer and their work can prevent them from evaluating it honestly.
They tend to selectively see their functions from the frame of reference in which they were intended to run, and not as they were actually implemented.

The curse of knowledge
You would think that an intimate understanding of a program would afford the developer the ability to write robust tests around it.
In reality, however, these tests are less likely to find hidden edge cases within the function, because the programmer cannot get enough critical distance from their own approach.
The curse of knowledge contributes to defect density, the tendency of bugs to huddle together. These bugs are shielded from the programmer's view by misplaced assumptions about how the application works.
The pesticide paradox
"Every method you use to prevent or find bugs leaves a
residue of subtler bugs against which those methods are
ineffective." (Beizer, Software testing techniques, 1990)!
The pesticide paradox explains the fallacy that past
tests will catch future errors. In actuality the bugs that
these tests were meant to catch have already been
caught. This is not to say that these tests do not have
some value as regression tests, which ensure these
known errors are not reintroduced. However, you should
not expect new bugs by tests looking in old places.
Defect clusters
Bug clusters often occur as a result of a developer falling prey to their own curse of knowledge.
However, once the programmer identifies this bias, clusters can guide where to apply future testing effort.
For example, if the same number of tests found six bugs in one module and only one in another, then you are likely to find more future bugs in the former than in the latter.
This is because bug clusters can reveal crucial misconceptions the programmer holds about their program.
Framework Bias
Testing frameworks are not created in a vacuum. They are influenced by the set of assumptions and philosophies about testing that their creators hold true.
Like any process that is contextualized through a metaphor, testing frameworks have the potential to make one task easier while making another more difficult.

Find a baseline
Test suites are judged like bed comforters: by their ability to cover all the important parts.
A program is thought to be well tested when all of the paths through the application are covered by at least one test.

Find a baseline

Istanbul live coding!

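For those following along at home, a minimal sketch of the workflow; the script path is a placeholder:

istanbul cover test/run.js   (instruments the code, runs it, and collects coverage data)
istanbul report html         (renders the collected coverage data as an HTML report)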
Find a baseline
Statement Coverage
Statement coverage is the most straightforward form of code coverage. It merely records when statements are executed.

Find a baseline
Function coverage
Determines whether any test called a function at least once. Function coverage does not track how a test may fiddle with the function's innards; it simply cares whether the function was invoked.

Find a baseline

Branch coverage
To have sufficient branch coverage, every path through a function must be covered by a test.

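A minimal sketch of the difference between these measures (the function and tests are hypothetical): a single test can reach 100% statement and function coverage here while still leaving a branch untested.

function discount(price, isMember) {
  var rate = 0;
  if (isMember) {
    rate = 0.1;
  }
  return price * (1 - rate);
}

// discount(100, true) executes every statement and invokes the function,
// so statement and function coverage both report 100%. Branch coverage
// remains incomplete until the implicit else path is exercised as well:
// discount(100, false).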
Coverage Bias
Code coverage tools like Istanbul are great for quickly and visually exploring the coverage of a program. They afford the viewer a means of finding testing deficiencies with little effort.
However, coverage tools come with their own biases, which you should be aware of. Executing a line of code is much different than testing a line of code. Code coverage metrics can give a false sense of security, as if 100% coverage were the same as being 100% tested.

Fuzz Testing
Fuzz testing, or fuzzing, is a software analysis technique that attempts to automatically find defects by providing the application with unanticipated, invalid, or random inputs and then evaluating its behavior.

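A minimal sketch of the mutation-based approach described on the next slide; the parse function standing in as the system under test is hypothetical:

// Hypothetical system under test.
function parse(input) {
  return JSON.parse(input);
}

// Blindly replace one random character with a random printable character.
function mutate(seed) {
  var i = Math.floor(Math.random() * seed.length);
  var c = String.fromCharCode(32 + Math.floor(Math.random() * 95));
  return seed.slice(0, i) + c + seed.slice(i + 1);
}

var seed = '{"a": 1, "b": [true, null]}';
var failures = [];
for (var run = 0; run < 1000; run += 1) {
  var candidate = mutate(seed);
  try {
    parse(candidate);
  } catch (e) {
    // Merely record the exception as it occurs.
    failures.push({ input: candidate, error: e.message });
  }
}
console.log(failures.length + ' mutated inputs raised exceptions');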
Fuzz Testing
Two forms of fuzzers
Mutation based (dumb fuzzers)
• Understand little about the format or structure of their data.
• Blindly alter their data and merely record the exception as it occurs.
Generation based
• Understand the semantics of their data formats.
• Take much longer to create.
Fuzz Testing
jsfunfuzz (a generation-based fuzzer) found over 1,000 bugs in Firefox.
Think about that for a second: this testing tool found more than a thousand bugs in a JavaScript interpreter whose development is managed by the creator of the language!

Fuzz Testing Limitations
1. Mutation-based fuzzers can run forever. Therefore it can be hard to choose a duration that gives you a meaningful chance of finding bugs without consuming too much time.
2. Most JavaScript fuzzers are mainly directed toward host environments like browsers.
3. Fuzzers can find hundreds of bugs that are related through a common failure point. This can mean getting hundreds of tests with unnecessary overlap.
4. Fuzzers find bugs not only in programs but also in the underlying language itself. Therefore it can be difficult to distinguish between errors in your program and faults in the JavaScript interpreter.
JSCheck

Live Coding!

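For context on the live-coding segment, here is a minimal sketch of the property-based idea behind JSCheck, written in plain JavaScript rather than the library's own API; the claim and generator are hypothetical:

// Claim: reversing an array twice yields the original array.
function claim(arr) {
  var twice = arr.slice().reverse().reverse();
  return JSON.stringify(twice) === JSON.stringify(arr);
}

// Generate a random array of small integers.
function randomArray() {
  var length = Math.floor(Math.random() * 10);
  var arr = [];
  for (var i = 0; i < length; i += 1) {
    arr.push(Math.floor(Math.random() * 100));
  }
  return arr;
}

// Check the claim against many random cases, reporting counterexamples.
for (var trial = 0; trial < 100; trial += 1) {
  var input = randomArray();
  if (!claim(input)) {
    console.log('Counterexample found:', input);
  }
}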
If you liked this presentation, then you'll love my book on JavaScript.
Expert JavaScript is your definitive guide to understanding how and why JavaScript behaves the way it does. Master the inner workings of JavaScript by learning in detail how modern applications are made. In covering lesser-understood aspects of this powerful language and truly understanding how it works, your JavaScript code and programming skills will improve.

Mark Daggett 2013
@heavysixer
http://www.github.com/heavysixer
http://www.markdaggett.com


More Related Content

What's hot

Documenting Code - Patterns and Anti-patterns - NLPW 2016
Documenting Code - Patterns and Anti-patterns - NLPW 2016Documenting Code - Patterns and Anti-patterns - NLPW 2016
Documenting Code - Patterns and Anti-patterns - NLPW 2016Søren Lund
 
Measuring Code Quality in WTF/min.
Measuring Code Quality in WTF/min. Measuring Code Quality in WTF/min.
Measuring Code Quality in WTF/min. David Gómez García
 
Caring about Code Quality
Caring about Code QualityCaring about Code Quality
Caring about Code QualitySaltmarch Media
 
Documenting code yapceu2016
Documenting code yapceu2016Documenting code yapceu2016
Documenting code yapceu2016Søren Lund
 
Beyond Unit Testing
Beyond Unit TestingBeyond Unit Testing
Beyond Unit TestingSøren Lund
 
Improving Code Quality In Medical Software Through Code Reviews - Vincit Teat...
Improving Code Quality In Medical Software Through Code Reviews - Vincit Teat...Improving Code Quality In Medical Software Through Code Reviews - Vincit Teat...
Improving Code Quality In Medical Software Through Code Reviews - Vincit Teat...VincitOy
 
Coding standards
Coding standardsCoding standards
Coding standardsMimoh Ojha
 
Git branching policy and review comment's prefix
Git branching policy and review comment's prefixGit branching policy and review comment's prefix
Git branching policy and review comment's prefixKumaresh Chandra Baruri
 
Data Generation with PROSPECT: a Probability Specification Tool
Data Generation with PROSPECT: a Probability Specification ToolData Generation with PROSPECT: a Probability Specification Tool
Data Generation with PROSPECT: a Probability Specification ToolIvan Ruchkin
 
BDD in Action - Automated Web Testing with WebDriver and Serenity
BDD in Action - Automated Web Testing with WebDriver and SerenityBDD in Action - Automated Web Testing with WebDriver and Serenity
BDD in Action - Automated Web Testing with WebDriver and SerenityJohn Ferguson Smart Limited
 
Designing with tests
Designing with testsDesigning with tests
Designing with testsDror Helper
 
Refactoring legacy code driven by tests - ITA
Refactoring legacy code driven by tests -  ITARefactoring legacy code driven by tests -  ITA
Refactoring legacy code driven by tests - ITALuca Minudel
 
TDD vs. ATDD - What, Why, Which, When & Where
TDD vs. ATDD - What, Why, Which, When & WhereTDD vs. ATDD - What, Why, Which, When & Where
TDD vs. ATDD - What, Why, Which, When & WhereDaniel Davis
 
Agile development with Ruby
Agile development with RubyAgile development with Ruby
Agile development with Rubykhelll
 
Code coverage for automation scripts
Code coverage for automation scriptsCode coverage for automation scripts
Code coverage for automation scriptsvodQA
 

What's hot (20)

Quick Intro to Clean Coding
Quick Intro to Clean CodingQuick Intro to Clean Coding
Quick Intro to Clean Coding
 
Documenting Code - Patterns and Anti-patterns - NLPW 2016
Documenting Code - Patterns and Anti-patterns - NLPW 2016Documenting Code - Patterns and Anti-patterns - NLPW 2016
Documenting Code - Patterns and Anti-patterns - NLPW 2016
 
Measuring Code Quality in WTF/min.
Measuring Code Quality in WTF/min. Measuring Code Quality in WTF/min.
Measuring Code Quality in WTF/min.
 
Caring about Code Quality
Caring about Code QualityCaring about Code Quality
Caring about Code Quality
 
Documenting code yapceu2016
Documenting code yapceu2016Documenting code yapceu2016
Documenting code yapceu2016
 
Beyond Unit Testing
Beyond Unit TestingBeyond Unit Testing
Beyond Unit Testing
 
Improving Code Quality In Medical Software Through Code Reviews - Vincit Teat...
Improving Code Quality In Medical Software Through Code Reviews - Vincit Teat...Improving Code Quality In Medical Software Through Code Reviews - Vincit Teat...
Improving Code Quality In Medical Software Through Code Reviews - Vincit Teat...
 
Coding standards
Coding standardsCoding standards
Coding standards
 
Code Quality Assurance
Code Quality AssuranceCode Quality Assurance
Code Quality Assurance
 
Git branching policy and review comment's prefix
Git branching policy and review comment's prefixGit branching policy and review comment's prefix
Git branching policy and review comment's prefix
 
Code metrics in PHP
Code metrics in PHPCode metrics in PHP
Code metrics in PHP
 
Data Generation with PROSPECT: a Probability Specification Tool
Data Generation with PROSPECT: a Probability Specification ToolData Generation with PROSPECT: a Probability Specification Tool
Data Generation with PROSPECT: a Probability Specification Tool
 
BDD in Action - Automated Web Testing with WebDriver and Serenity
BDD in Action - Automated Web Testing with WebDriver and SerenityBDD in Action - Automated Web Testing with WebDriver and Serenity
BDD in Action - Automated Web Testing with WebDriver and Serenity
 
Gherkin /BDD intro
Gherkin /BDD introGherkin /BDD intro
Gherkin /BDD intro
 
Designing with tests
Designing with testsDesigning with tests
Designing with tests
 
BDD-Driven Microservices
BDD-Driven MicroservicesBDD-Driven Microservices
BDD-Driven Microservices
 
Refactoring legacy code driven by tests - ITA
Refactoring legacy code driven by tests -  ITARefactoring legacy code driven by tests -  ITA
Refactoring legacy code driven by tests - ITA
 
TDD vs. ATDD - What, Why, Which, When & Where
TDD vs. ATDD - What, Why, Which, When & WhereTDD vs. ATDD - What, Why, Which, When & Where
TDD vs. ATDD - What, Why, Which, When & Where
 
Agile development with Ruby
Agile development with RubyAgile development with Ruby
Agile development with Ruby
 
Code coverage for automation scripts
Code coverage for automation scriptsCode coverage for automation scripts
Code coverage for automation scripts
 

Similar to Understanding, measuring and improving code quality in JavaScript

Test Driven Development
Test Driven DevelopmentTest Driven Development
Test Driven Developmentnikhil sreeni
 
Introduction to Behavior Driven Development
Introduction to Behavior Driven Development Introduction to Behavior Driven Development
Introduction to Behavior Driven Development Robin O'Brien
 
Enter the mind of an Agile Developer
Enter the mind of an Agile DeveloperEnter the mind of an Agile Developer
Enter the mind of an Agile DeveloperBSGAfrica
 
5WCSQ - Quality Improvement by the Real-Time Detection of the Problems
5WCSQ - Quality Improvement by the Real-Time Detection of the Problems5WCSQ - Quality Improvement by the Real-Time Detection of the Problems
5WCSQ - Quality Improvement by the Real-Time Detection of the ProblemsTakanori Suzuki
 
Capability Building for Cyber Defense: Software Walk through and Screening
Capability Building for Cyber Defense: Software Walk through and Screening Capability Building for Cyber Defense: Software Walk through and Screening
Capability Building for Cyber Defense: Software Walk through and Screening Maven Logix
 
Software Measurement: Lecture 3. Metrics in Organization
Software Measurement: Lecture 3. Metrics in OrganizationSoftware Measurement: Lecture 3. Metrics in Organization
Software Measurement: Lecture 3. Metrics in OrganizationProgrameter
 
Quality Concept
Quality ConceptQuality Concept
Quality ConceptAnand Jat
 
Software engineering interview questions
Software engineering interview questionsSoftware engineering interview questions
Software engineering interview questionsMuhammadTalha436
 
boughtonalexand jdjdjfjjfjfjfjnfjfjjjfkdifij
boughtonalexand jdjdjfjjfjfjfjnfjfjjjfkdifijboughtonalexand jdjdjfjjfjfjfjnfjfjjjfkdifij
boughtonalexand jdjdjfjjfjfjfjnfjfjjjfkdifijakd3143
 
Indy meetup#7 effective unit-testing-mule
Indy meetup#7 effective unit-testing-muleIndy meetup#7 effective unit-testing-mule
Indy meetup#7 effective unit-testing-muleikram_ahamed
 
PMI-ACP Lesson 06 Quality
PMI-ACP Lesson 06 QualityPMI-ACP Lesson 06 Quality
PMI-ACP Lesson 06 QualityThanh Nguyen
 
7a Good Programming Practice.pptx
7a Good Programming Practice.pptx7a Good Programming Practice.pptx
7a Good Programming Practice.pptxDylanTilbury1
 
Software engineering introduction
Software engineering introductionSoftware engineering introduction
Software engineering introductionVishal Singh
 
Topic tdd-and-bdd b4usolution
Topic tdd-and-bdd b4usolutionTopic tdd-and-bdd b4usolution
Topic tdd-and-bdd b4usolutionHoa Le
 
Quality Jam: BDD, TDD and ATDD for the Enterprise
Quality Jam: BDD, TDD and ATDD for the EnterpriseQuality Jam: BDD, TDD and ATDD for the Enterprise
Quality Jam: BDD, TDD and ATDD for the EnterpriseQASymphony
 
Tdd vs bdd vs atdd — developers’ methodologies to navigate complex developmen...
Tdd vs bdd vs atdd — developers’ methodologies to navigate complex developmen...Tdd vs bdd vs atdd — developers’ methodologies to navigate complex developmen...
Tdd vs bdd vs atdd — developers’ methodologies to navigate complex developmen...Katy Slemon
 

Similar to Understanding, measuring and improving code quality in JavaScript (20)

Test Policy and Practices
Test Policy and PracticesTest Policy and Practices
Test Policy and Practices
 
Test Driven Development
Test Driven DevelopmentTest Driven Development
Test Driven Development
 
Introduction to Behavior Driven Development
Introduction to Behavior Driven Development Introduction to Behavior Driven Development
Introduction to Behavior Driven Development
 
Ensuring code quality
Ensuring code qualityEnsuring code quality
Ensuring code quality
 
Enter the mind of an Agile Developer
Enter the mind of an Agile DeveloperEnter the mind of an Agile Developer
Enter the mind of an Agile Developer
 
5WCSQ - Quality Improvement by the Real-Time Detection of the Problems
5WCSQ - Quality Improvement by the Real-Time Detection of the Problems5WCSQ - Quality Improvement by the Real-Time Detection of the Problems
5WCSQ - Quality Improvement by the Real-Time Detection of the Problems
 
Capability Building for Cyber Defense: Software Walk through and Screening
Capability Building for Cyber Defense: Software Walk through and Screening Capability Building for Cyber Defense: Software Walk through and Screening
Capability Building for Cyber Defense: Software Walk through and Screening
 
Software Measurement: Lecture 3. Metrics in Organization
Software Measurement: Lecture 3. Metrics in OrganizationSoftware Measurement: Lecture 3. Metrics in Organization
Software Measurement: Lecture 3. Metrics in Organization
 
Quality Concept
Quality ConceptQuality Concept
Quality Concept
 
Software engineering interview questions
Software engineering interview questionsSoftware engineering interview questions
Software engineering interview questions
 
Sepm t1
Sepm t1Sepm t1
Sepm t1
 
boughtonalexand jdjdjfjjfjfjfjnfjfjjjfkdifij
boughtonalexand jdjdjfjjfjfjfjnfjfjjjfkdifijboughtonalexand jdjdjfjjfjfjfjnfjfjjjfkdifij
boughtonalexand jdjdjfjjfjfjfjnfjfjjjfkdifij
 
Indy meetup#7 effective unit-testing-mule
Indy meetup#7 effective unit-testing-muleIndy meetup#7 effective unit-testing-mule
Indy meetup#7 effective unit-testing-mule
 
OOSE UNIT-1.pdf
OOSE UNIT-1.pdfOOSE UNIT-1.pdf
OOSE UNIT-1.pdf
 
PMI-ACP Lesson 06 Quality
PMI-ACP Lesson 06 QualityPMI-ACP Lesson 06 Quality
PMI-ACP Lesson 06 Quality
 
7a Good Programming Practice.pptx
7a Good Programming Practice.pptx7a Good Programming Practice.pptx
7a Good Programming Practice.pptx
 
Software engineering introduction
Software engineering introductionSoftware engineering introduction
Software engineering introduction
 
Topic tdd-and-bdd b4usolution
Topic tdd-and-bdd b4usolutionTopic tdd-and-bdd b4usolution
Topic tdd-and-bdd b4usolution
 
Quality Jam: BDD, TDD and ATDD for the Enterprise
Quality Jam: BDD, TDD and ATDD for the EnterpriseQuality Jam: BDD, TDD and ATDD for the Enterprise
Quality Jam: BDD, TDD and ATDD for the Enterprise
 
Tdd vs bdd vs atdd — developers’ methodologies to navigate complex developmen...
Tdd vs bdd vs atdd — developers’ methodologies to navigate complex developmen...Tdd vs bdd vs atdd — developers’ methodologies to navigate complex developmen...
Tdd vs bdd vs atdd — developers’ methodologies to navigate complex developmen...
 

Recently uploaded

UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesBernd Ruecker
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
A Framework for Development in the AI Age
A Framework for Development in the AI AgeA Framework for Development in the AI Age
A Framework for Development in the AI AgeCprime
 
Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Kaya Weers
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterMydbops
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxLoriGlavin3
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityIES VE
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentPim van der Noll
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...Wes McKinney
 
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...itnewsafrica
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Mark Goldstein
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsRavi Sanghani
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observabilityitnewsafrica
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Farhan Tariq
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 

Recently uploaded (20)

UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architectures
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
A Framework for Development in the AI Age
A Framework for Development in the AI AgeA Framework for Development in the AI Age
A Framework for Development in the AI Age
 
Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)Design pattern talk by Kaya Weers - 2024 (v2)
Design pattern talk by Kaya Weers - 2024 (v2)
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL Router
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a reality
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
 
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and Insights
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 

Understanding, measuring and improving code quality in JavaScript

  • 1. Code Quality Quality is not an act, it is a habit. —Aristotle Mark Daggett 2013! @heavysixer! http://www.github.com/heavysixer! http://www.markdaggett.com Unless source quoted all content is licensed under a Creative Commons Attribution License. Other content is copyright © 2014 Mark Daggett. Some Rights Reserved.
  • 2. “The word quality has, of course, two meanings: first, the nature or essential characteristic of something, as in His voice has the quality of command; second, a condition of excellence implying fine quality as distinct from poor quality.” –Barbara W. Tuchman Unless source quoted all content is licensed under a Creative Commons Attribution License. Other content is copyright © 2014 Mark Daggett. Some Rights Reserved.
  • 3. Quality is self-nourishing. Unless source quoted all content is licensed under a Creative Commons Attribution License. Other content is copyright © 2014 Mark Daggett. Some Rights Reserved.
  • 4. Kinds of Quality Objective Subjective Unless source quoted all content is licensed under a Creative Commons Attribution License. Other content is copyright © 2014 Mark Daggett. Some Rights Reserved.
  • 5. Subjective • Subjective quality often describes code that is inspired or essential, or what Truchman calls an "innate excellence." • Dependent on personal experience or the guidance of skilled individuals to recognize and promote excellence within their field. It asserts that subjective quality at its essential level is universally true, not so much created as discovered. Unless source quoted all content is licensed under a Creative Commons Attribution License. Other content is copyright © 2014 Mark Daggett. Some Rights Reserved.
  • 6. Objective • If genius can be measured, it can be quantified and repeated. • Makes, applies, and refines empirical approximations about the subject in a feedback loop • Lends itself to algorithms, test suites, and software tooling Unless source quoted all content is licensed under a Creative Commons Attribution License. Other content is copyright © 2014 Mark Daggett. Some Rights Reserved.
  • 7. Quality is measured using metrics Aesthetics: This metric measures the visual cohesion of the code, but also encompasses the consistency and thoughtfulness of formatting, naming conventions, and document structure. Aesthetics are measured to answer these questions: A. How readable is the code? B. How are the individual parts organized on the page? C. Does it use best practices in terms of programming style? Unless source quoted all content is licensed under a Creative Commons Attribution License. Other content is copyright © 2014 Mark Daggett. Some Rights Reserved.
  • 8. Quality is measured using metrics Completeness: measures if whether the code is "fit for purpose."1 To be considered complete, a program must meet or exceed the requirements of the specified problem. Completeness can also measure how well the particular implementation conforms to industry standards or ideals. The questions about completeness this measure attempts to answer are these: A. Does the code solve the problem it was meant to? B. Does the code produce the desired output given an expected input? C. Does it meet all the defined use cases? D. Is it secure? E. Does it handle edge cases well? Unless source quoted all content is licensed under a Creative Commons Attribution License. Other content is copyright © 2014 Mark Daggett. Some Rights Reserved.
  • 9. Quality is measured using metrics Performance: Performance measures an implementation against known benchmarks to determine how successful it is. These metrics may consider, attributes like such as program size, system resource efficiency, load time, or bugs per line of code. Using performance measures, you can answer the following questions: A. How efficient is this approach? B. What load can it handle? C. What are the limits on capacity with this code? Unless source quoted all content is licensed under a Creative Commons Attribution License. Other content is copyright © 2014 Mark Daggett. Some Rights Reserved.
  • 10. Quality is measured using metrics Effort: This metric measures the development cost incurred to produce and support the code. Effort can be categorized in terms of time, money, or resources used. Measuring effort can help answer these questions: A. Is the code maintainable? B. Does it deploy easily? C. Is it documented? D. How much did it cost to write?
  • 11. Quality is measured using metrics Durability: Durability measures a program's useful life in production. It can also be thought of as a measure of reliability, which is another way of measuring longevity. Durability can be measured to answer these questions: A. Does it perform reliably? B. How long can it run before it must be restarted, upgraded, and/or replaced? C. Is it scalable?
  • 12. Quality is measured using metrics Reception: Reception measures how other programmers evaluate and value the code. Tracking reception allows you to answer these questions: A. How hard is the code to understand? B. How well thought-out are the design decisions? C. Does the approach leverage established best practices? D. Is it enjoyable to use?
  • 13. Why measure Quality? • What gets measured gets done • Identify areas of technical debt in your software • Gauge the amount of future effort needed to maintain the code • Determine the likelihood that bugs exist • Simple heuristics can tell you when you have reached an appropriate level of test coverage • Quality is much harder to add later
  • 14. Measuring Quality of JS • Static Code Analysis • Syntax Validations
  • 15. Quality Metrics • Excessive Comments • Lines of Code • Coupling • Variables per Function • Arguments per Function • Nesting Depth
  • 16. Cyclomatic Complexity This measure derives a complexity score in one of two ways: 1. It can count all the decision points within a function and add one. 2. It can treat the function as a control flow graph (G) and subtract the total number of vertices (n) from the number of edges (e), then add twice the number of connected components (p): v(G) = e - n + 2p
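For example, here is a worked calculation with assumed graph counts: a small function whose control flow graph has 9 edges, 8 nodes, and 1 connected component.

var e = 9, n = 8, p = 1;        // counts assumed for illustration
var complexity = e - n + 2 * p; // v(G) = 9 - 8 + 2 = 3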
  • 17. Cyclomatic Complexity
// score = 1
route = function () { // score = 2
  if (request && request.controller) {
    switch (true) { // score = 3
      case request.controller === "home": // score = 4
        if (request.action) { /* ... */ }
    }
  }
};
  • 18. Cyclomatic Complexity Limitations • It treats control flow as the only source of complexity within a function. • Any programmer who has spent even a trivial amount of time reading code knows that complexity comes from more than just control flow.
  • 19. Cyclomatic Complexity Limitations
// Cyclomatic score: 2
if (true) {
  console.log('true');
}
// Cyclomatic score: 2
if ([+[]] + [] == +false) {
  console.log('surprise also true!');
}
  • 20. NPATH Complexity NPATH counts the acyclic execution paths through a function and is an objective measure of software complexity related to the ease with which software can be comprehensively tested. (Nejmeh, 1988)
  • 21. NPATH Complexity 1. The cyclomatic complexity number does not properly account for the difference between a function whose number of acyclic paths grows linearly and one where it grows exponentially. 2. It treats all control flow mechanisms the same way. However, Nejmeh argues that some structures are inherently harder to understand and use properly. 3. McCabe's approach does not account for the level of nesting within a function. For example, three sequential if statements receive the same score as three nested if statements. However, Nejmeh argues that a programmer will have a harder time understanding the latter, and thus it should be considered more complex. (See the sketch below.)
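A minimal sketch of that third point, using hypothetical flag arguments: both functions below receive the same cyclomatic score of 4 (one plus three ifs), while NPATH tells them apart (2 × 2 × 2 = 8 acyclic paths for the sequential version, only 4 for the nested one).

// Sequential ifs: cyclomatic complexity = 4, NPATH = 8
function sequential(a, b, c) {
  var out = [];
  if (a) { out.push('a'); }
  if (b) { out.push('b'); }
  if (c) { out.push('c'); }
  return out;
}

// Nested ifs: cyclomatic complexity = 4, NPATH = 4
function nested(a, b, c) {
  var out = [];
  if (a) {
    out.push('a');
    if (b) {
      out.push('b');
      if (c) { out.push('c'); }
    }
  }
  return out;
}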
  • 22. NPATH Complexity Limitations • Quantifying complexity benefits the programmer, not the runtime engine. Therefore these metrics are, at a fundamental level, based on the creator's own personal definition of complexity. • In the case of NPATH, Nejmeh argues that some control flow statements are inherently easier to understand than others. For example, you will receive a lower NPATH score for using a switch statement with two case labels than for a pair of sequential if statements. • Although the pair of if statements may require more lines of code, I don't believe they are intrinsically harder to understand.
  • 23. Halstead Metrics In the late '70s, computer programs were written as single files that, over time, became harder to maintain and enhance due to their monolithic structure. In an effort to improve the quality of these programs, Maurice Halstead developed a series of quantitative measures to determine the complexity of a program's source.
  • 24. Halstead Metrics Halstead's metrics tally a function's use of operators and operands as inputs for the various measures.
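Counting conventions vary between tools, but under one common convention a single statement tallies as follows:

// x = x + 1;
// operators: =, +, ;  ->  n1 = 3 unique, N1 = 3 total
// operands:  x, x, 1  ->  n2 = 2 unique, N2 = 3 total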
  • 25. Halstead Metrics Program Length (N) The program length is calculated by adding the total number of operands and operators together (N1 + N2). A large number indicates that the function may benefit from being broken into smaller components. You can express program length in this way: var N = N1 + N2;
  • 26. Halstead Metrics Vocabulary Size (n) The vocabulary size is derived by adding the unique operators and operands together (n1 + n2). Just as with the program length metric, a higher number is an indicator that the function may be doing too much. You can represent vocabulary size with the following expression: var n = n1 + n2;
  • 27. Halstead Metrics Program Volume (V) If your brain were a glass jar, a program's volume describes how much of the container it occupies. It describes the amount of code the reader must mentally parse in order to understand the function completely. The program volume considers the total number of operations performed against operands within a function. As a result, a function will receive the same score regardless of whether it has been minified. var V = N * (Math.log(n) / Math.log(2));
  • 28. Halstead Metrics Difficulty Level (D) The difficulty level measures the likelihood that a reader will misunderstand the source code. It is calculated by multiplying half the number of unique operators by the total number of operands divided by the number of unique operands. In JavaScript, it would be written like this: var D = (n1 / 2) * (N2 / n2);
  • 29. Halstead Metrics Programming Effort (E) This measure estimates the effort a competent programmer needs to understand, use, and improve a function, based on its volume and difficulty scores. Programming effort can be represented as follows: var E = V * D;
  • 30. Halstead Metrics Time to Implement (T) This measure estimates the time it takes a competent programmer to implement a given function. Halstead derived this metric by dividing effort (E) by the Stroud number, which is the quantity of rudimentary (binary) decisions that a person can make per second. Because the Stroud number is not derived from the program source, it can be calibrated over time by comparing the expected and actual results. The time to implement can be expressed like this: var T = E / 18;
  • 31. Halstead Metrics Number of Bugs (B) This metric estimates the number of software defects already in a given program. As you might expect, the number of bugs correlates strongly with a program's complexity (volume) but is mitigated by the programmer's own skill level (E1). Halstead found in his own research that a sufficient value for E1 can be found in the range of 3000 to 3200. Bugs can be estimated using the following expression: var B = V / E1;
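Putting the measures together, here is a minimal end-to-end sketch; the operator and operand tallies are assumed for illustration rather than produced by a real parse:

var n1 = 10, n2 = 14; // unique operators and operands (assumed)
var N1 = 28, N2 = 36; // total operators and operands (assumed)

var N = N1 + N2;                          // program length: 64
var n = n1 + n2;                          // vocabulary size: 24
var V = N * (Math.log(n) / Math.log(2));  // volume: ~293.4
var D = (n1 / 2) * (N2 / n2);             // difficulty: ~12.86
var E = V * D;                            // effort: ~3772
var T = E / 18;                           // time to implement: ~210 seconds
var B = V / 3000;                         // estimated bugs: ~0.1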
  • 32. Tooling Complexity Report 1. Lines of code 2. Arguments per function 3. Cyclomatic complexity 4. Halstead metrics 5. Maintainability index
  • 33. Tooling Complexity Report Example
Maintainability index: 125.84886810899188
Aggregate cyclomatic complexity: 32
Mean parameter count: 0.9615384615384616

Function: parseCommandLine
Line No.: 27
Physical SLOC: 103
Logical SLOC: 19
Parameter count: 0
Cyclomatic complexity: 7
Halstead difficulty: 11.428571428571427
Halstead volume: 1289.3654689326472
Halstead effort: 14735.605359230252

Function: expectFiles
Line No.: 131
Physical SLOC: 5
Logical SLOC: 2
Parameter count: 2
Cyclomatic complexity: 2
Halstead difficulty: 3
  • 34. Plato http://es-analysis.github.io/plato/examples/grunt/
  • 35. Improving Testability For every complex problem there is a solution that is simple, neat, and wrong. —H. L. Mencken
  • 36. Why testing fails to test JavaScript testing fails when the test suite passes. The goal of testing is not to write tests but to find bugs by making programs fail!
  • 37. Why testing fails to test 1. The JavaScript community does not have the same quality of tooling for testing code that other languages enjoy. Without quality tools it is impractical, if not impossible, to write tests. 2. JavaScript is often inextricably linked to the user interface and used to produce visual effects, which must be experienced and evaluated by a human. The need for manual evaluation prevents JavaScript from being programmatically tested. 3. There is no way to step through a running test on the fly as you can with the command-line debuggers found in other languages. Without this capability, debugging JavaScript is a nightmare. 4. The ever-changing variety of host environments (browsers) makes testing impractical, because developers have to repeatedly retest the same program logic for each host environment.
  • 38. Testing fallacies Testing proves there are no bugs in a program. "Program testing can be used to show the presence of bugs, but never to show their absence!" (Dijkstra)
  • 39. Testing fallacies Successful tests are ones that complete without errors. "Consider the analogy of a person visiting a doctor because of an overall feeling of malaise. If the doctor runs some laboratory tests that do not locate the problem, we do not call the laboratory tests "successful"; they were unsuccessful tests... the patient is still ill, and the patient now questions the doctor's ability as a diagnostician." (Myers, 1979)
  • 40. Testing fallacies Testing ensures the program is of high quality. When managers use tests as a defensive blockade against poor-quality software, they may be ensuring the opposite outcome. When tests are hurdles for programmers to overcome, they become obstacles, not assets.
  • 41. Testing fallacies 1. Treating tests as a measure of quality demoralizes programmers because it implies that their source is intrinsically of low quality to start with. 2. It constructs a useless dichotomy between the testing and development processes. It may also have the effect of stoking tensions in teams by separating testers from developers. 3. It suggests that tests by themselves can add quality to the code base. 4. If tests are the arbiter of quality, they imply that there is an asymmetry of power between testers and developers, as if developers must supplicate to testers on behalf of their programs.
  • 42. Testing fallacies Testing prevents future errors. The fallacy with this viewpoint is that it assumes tests should have some predictive quality to them. This leads developers to write superfluous "what if" tests, which muddy the clarity of purpose of testing.
  • 43. Testing fallacies Testing proves that the program works as designed. More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded; indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding, and the rest.
  • 44. Confirmation Bias A confirmation bias describes the likelihood that a person will favor information that supports their worldview and disregard evidence to the contrary. Not surprisingly, programmers, who spend much of their time inside their own heads, can also suffer from confirmation bias, though its expression can be more subtle.
  • 45. Selective Seeing Writing software, like writing a book, should be a personal expression of the author's craft. As such, programmers, like other craftsmen, tend to see themselves reflected in their work. Programmers are inclined to be optimistic about their own abilities and, by extension, the software they write. This intimacy between programmers and their work can prevent them from evaluating it honestly. They tend to selectively see their functions from the frame of reference in which they were intended to run, and not as they were actually implemented.
  • 46. The curse of knowledge You would think that an intimate understanding of a program would afford the developer the ability to write robust tests around it. In reality, however, these tests are less likely to find hidden edge cases within the function, because the programmer cannot get enough critical distance from their own approach. The curse of knowledge contributes to defect density, the tendency for bugs to huddle together. These bugs are shielded from the programmer's view by a misplaced assumption about how the application works.
  • 47. The pesticide paradox "Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective." (Beizer, Software Testing Techniques, 1990) The pesticide paradox explains the fallacy that past tests will catch future errors. In actuality, the bugs those tests were meant to catch have already been caught. These tests still have value as regression tests, which ensure known errors are not reintroduced. However, you should not expect tests looking in old places to find new bugs.
  • 48. Defect clusters Bug clusters often occur as a result of a developer falling prey to their own curse of knowledge. However, once the programmer identifies this bias, clusters can guide where to apply future effort. For example, if the same number of tests found six bugs in one module and only one in another, you are likely to find more future bugs in the former than in the latter. This is because bug clusters can reveal crucial misconceptions the programmer holds about their program.
  • 49. Framework Bias Testing frameworks are not created in a vacuum. They are influenced by the set of assumptions and philosophies about testing that their creators hold true. Like any process that is contextualized through a metaphor, testing frameworks have the potential to make one task easier while making another more difficult.
  • 50. Find a baseline Test suites are judged like bed comforters: by their ability to cover all the important parts. A program is thought to be well-tested when all of the paths through the application are covered by at least one test.
  • 51. Find a baseline Istanbul live coding!
  • 52. Find a baseline Statement Coverage Statement coverage is the most straightforward form of code coverage. It merely records when statements are executed.
  • 53. Find a baseline Function Coverage Function coverage determines whether any test called a function at least once. It does not track how a test may fiddle with the function's innards; it simply cares whether the function was invoked.
  • 54. Find a baseline Branch Coverage To have sufficient branch coverage, every path through a function must be covered by a test.
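To make these distinctions concrete, here is a minimal sketch built around a hypothetical classify function: a single assertion produces 100% function coverage, but only partial statement and branch coverage, because one branch never runs.

function classify(n) {
  if (n > 0) {
    return 'positive';
  }
  return 'non-positive';
}

// Full function coverage (classify was invoked), but the n <= 0
// branch is never exercised:
console.assert(classify(1) === 'positive');

// Adding a second case covers the remaining statement and branch:
console.assert(classify(-1) === 'non-positive');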
  • 55. Coverage Bias Code coverage tools like Istanbul are great for quickly and visually exploring the coverage of a program. They afford the viewer a means of finding testing deficiencies with little effort. However, coverage tools come with their own biases, which you should be aware of. Executing a line of code is much different from testing a line of code. Code coverage metrics can give a false sense of security, as if 100% coverage were the same as being 100% tested.
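As an illustration (the add function here is hypothetical), a "test" can execute every line of a function, earning 100% coverage, while asserting nothing at all:

function add(a, b) {
  return a + b;
}

// Every line of add() executes, so coverage reports 100% ...
add(2, 2);
// ... yet no result is ever checked, so this "test" can never fail.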
  • 56. Fuzz Testing Fuzz testing, or fuzzing, is a software analysis technique that attempts to automatically find defects by providing the application with unanticipated, invalid, or random inputs and then evaluating its behavior.
  • 57. Fuzz Testing Two forms of fuzzers Mutation based (dumb fuzzers) • Understand little about the format or structure of their data. • Blindly alter their data and merely record the exception as it occurs. Generation based • Understand the semantics of their data formats. • Take much longer to create. (A sketch of a mutation-based fuzzer follows.)
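Here is a minimal sketch of a mutation-based fuzzer. The firstField function under test is hypothetical, seeded with a defect (match() returns null for inputs that begin with a comma) of the kind fuzzing tends to surface:

function firstField(line) {
  var m = line.match(/^[^,]+/);
  return m[0].toUpperCase(); // throws a TypeError when m is null
}

// Blindly replace one character of the seed with a random printable one.
function mutate(seed) {
  var i = Math.floor(Math.random() * seed.length);
  var c = String.fromCharCode(32 + Math.floor(Math.random() * 95));
  return seed.slice(0, i) + c + seed.slice(i + 1);
}

var seed = 'alpha,beta,gamma';
for (var run = 0; run < 10000; run++) {
  var input = mutate(seed);
  try {
    firstField(input);
  } catch (e) {
    // Merely record the exception as it occurs.
    console.log('crash on', JSON.stringify(input), '->', e.message);
  }
}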
  • 58. Fuzz Testing jsfunfuzz (a generation-based fuzzer) found over 1,000 bugs in Firefox. Think about that for a second: this testing tool found hundreds of bugs in the JavaScript interpreter whose development is managed by the creator of the language!
  • 59. Fuzz Testing Limitations 1. Mutation-based fuzzers can run forever, so it can be hard to choose a duration that gives you a meaningful chance of finding bugs without consuming too much time. 2. Most JavaScript fuzzers are directed mainly at host environments such as browsers. 3. Fuzzers can find hundreds of bugs that are related through a common failure point. This can mean getting hundreds of tests with unnecessary overlap. 4. Fuzzers find bugs not only in programs but also in the underlying language itself. Therefore it can be difficult to distinguish between errors in your program and faults in the JavaScript interpreter.
  • 60. JSCheck Live Coding!
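Since the live-coding content isn't captured here, the sketch below shows the general idea behind property-based checking in the spirit of JSCheck; it is hand-rolled plain JavaScript, not JSCheck's actual API:

// Property: reversing an array twice restores the original.
function randomArray() {
  var a = [];
  var len = Math.floor(Math.random() * 10);
  for (var i = 0; i < len; i++) {
    a.push(Math.floor(Math.random() * 100));
  }
  return a;
}

for (var trial = 0; trial < 100; trial++) {
  var original = randomArray();
  var roundTrip = original.slice().reverse().reverse();
  console.assert(
    JSON.stringify(roundTrip) === JSON.stringify(original),
    'reverse twice should be the identity'
  );
}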
  • 61. If you liked this presentation, then you'll love my book on JavaScript. Expert JavaScript is your definitive guide to understanding how and why JavaScript behaves the way it does. Master the inner workings of JavaScript by learning in detail how modern applications are made. By covering lesser-understood aspects of this powerful language and truly understanding how it works, you will improve your JavaScript code and programming skills. Mark Daggett 2013 @heavysixer http://www.github.com/heavysixer http://www.markdaggett.com Unless source quoted all content is licensed under a Creative Commons Attribution License. Other content is copyright © 2014 Mark Daggett. Some Rights Reserved.