Sonar Metrics
  Keheliya Gallaba
    WSO2 Inc.
Why collect metrics?
●   You cannot improve what you don’t measure
●   What you don’t measure, you cannot prove
●   Broken Window Theory
What to do?
●   Prevention is the best medicine
●   Planning and Prioritizing
●   Technical Debt Resolution
What to monitor?
●   Duplicated code
●   Coding standards
●   Unit tests
●   Complex code
●   Potential bugs
●   Comments
●   Design and architecture
How to monitor?
●   Sonar Dashboard
      –   Lines of code
      –   Code Complexity
      –   Code Coverage
      –   Rules Compliance
●   Time Machine
●   Clouds & Hot spots
Demo
Metrics - Rules
●   Violations
       –   Total number of rule violations
●   New Violations
       –   Total number of new violations
●   xxxxx violations
       –   Number of violations with severity xxxxx, xxxxx being blocker, critical, major,
           minor or info
●   New xxxxx violations
       –   Number of new violations with severity xxxxx, xxxxx being blocker, critical,
           major, minor or info
●   Weighted violations
       –   Sum of the violations weighted by the coefficient associated with each priority
           (Sum(xxxxx_violations * xxxxx_weight))
       –   Default Weights: INFO=0;MINOR=1;MAJOR=3;CRITICAL=5;BLOCKER=10
●   Rules compliance index (violations_density)
       –   100 - weighted_violations / Lines of code * 100
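The two formulas above can be sketched in Java as follows; the class and method names are illustrative, not Sonar's actual code:

```java
// Illustrative sketch of the weighted-violations and rules-compliance-index
// formulas above; names are hypothetical, not Sonar's implementation.
public class RulesCompliance {
    // Default severity weights: INFO=0; MINOR=1; MAJOR=3; CRITICAL=5; BLOCKER=10
    static final int[] WEIGHTS = {0, 1, 3, 5, 10}; // info, minor, major, critical, blocker

    // weighted_violations = Sum(xxxxx_violations * xxxxx_weight)
    public static int weightedViolations(int[] violationsBySeverity) {
        int sum = 0;
        for (int i = 0; i < WEIGHTS.length; i++) {
            sum += violationsBySeverity[i] * WEIGHTS[i];
        }
        return sum;
    }

    // violations_density = 100 - weighted_violations / Lines of code * 100
    public static double rulesComplianceIndex(int weightedViolations, int linesOfCode) {
        return 100.0 - (double) weightedViolations / linesOfCode * 100.0;
    }
}
```

For example, 3 info, 10 minor, 5 major, 2 critical and 1 blocker violation give a weight of 45; 50 weighted violations over 200 lines of code give a compliance index of 75%.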
Metrics - Size
●   Physical lines
       –   Number of carriage returns
●   Comment lines
       –   Number of Javadoc, multi-line comment and single-line comment lines. Empty
           comment lines, header comments (mainly used to define the license) and
           commented-out lines of code are not included.
●   Commented-out lines of code
       –   Number of commented-out lines of code. Javadoc blocks are not scanned.
●   Lines of code (ncloc)
       –   Number of physical lines of code - number of blank lines - number of comment lines
           - number of header file comments - commented-out lines of code
●   Density of comment lines
       –   Number of comment lines / (lines of code + number of comments lines) * 100
       –   With this formula:
              50% means there are as many comment lines as lines of code
              100% means the file contains only comment lines and no lines of code
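The density formula above can be sketched as a one-line helper (the names are hypothetical, not Sonar's API):

```java
// Illustrative sketch of the comment-density formula above.
public class CommentDensity {
    // comment_lines_density = comment_lines / (ncloc + comment_lines) * 100
    public static double commentLinesDensity(int commentLines, int linesOfCode) {
        return (double) commentLines / (linesOfCode + commentLines) * 100.0;
    }
}
```

Equal counts of code and comment lines yield 50%, and a file made only of comments yields 100%, matching the interpretations above.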
Metrics – Size (Contd.)
●   Packages
       –   Number of packages
●   Classes
       –   Number of classes including nested classes, interfaces, enums and annotations
●   Files
       –   Number of analyzed files
●   Directories
       –   Number of analyzed directories
●   Accessors
       –   Number of getter and setter methods used to get (read) or set (write) a class's property.
●   Methods
       –   Number of methods, not including accessors. A constructor is considered to be a method.
●   Public API
       –   Number of public classes, public methods (without accessors) and public properties (without public final static ones)
●   Public undocumented API
       –   Number of public API without a Javadoc block
●   Density of public documented API (public_documented_api_density)
       –   (Number of public API - Number of undocumented public API) / Number of public API * 100
●   Statements
       –   Number of statements as defined in the Java Language Specification, excluding block definitions. The statements
           counter is incremented by one each time an expression, if, else, while, do, for, switch, break, continue, return, throw,
           synchronized, catch or finally is encountered.
       –   The counter is not incremented by class, method, field or annotation definitions, nor by package and import declarations.
Metrics – Complexity
●   Complexity
      –   The Cyclomatic Complexity Number is also known as McCabe Metric. It all
          comes down to simply counting 'if', 'for', 'while' statements etc. in a method.
          Whenever the control flow of a method splits, the Cyclomatic counter gets
          incremented by one.
      –   Each method has a minimum value of 1 by default, except accessors, which
          are not considered methods and so do not increment complexity. For each of
          the following Java keywords/statements this value is incremented by one:
           ●   If
           ●   For
           ●   While
           ●   Case
           ●   Catch
           ●   Throw
           ●   return (that isn't the last statement of a method)
           ●   &&
           ●   ||
           ●   ?
      –   Note that else, default, and finally don't increment the CCN value any further.
Metrics – Complexity
                    (continued..)
public void process(Car myCar){                                   <- +1
    if (myCar.isNotMine()){                                       <- +1
        return;                                                   <- +1
    }
    myCar.paint("red");
    myCar.changeWheel();
    while(myCar.hasGazol() && myCar.getDriver().isNotStressed()){ <- +2
        myCar.drive();
    }
    return;     // last statement of the method: does not count
}               // Cyclomatic complexity = 5
Metrics – Complexity
                        (Continued..)
●   Average complexity by method (function_complexity)
       –   Average cyclomatic complexity number by method


●   Complexity distribution by method
    (function_complexity_distribution)
       –   Number of methods for given complexities


●   Average complexity by class (class_complexity)
       –   Average cyclomatic complexity by class


●   Complexity distribution by class (class_complexity_distribution)
       –   Number of classes for given complexities


●   Average complexity by file (file_complexity)
       –   Average cyclomatic complexity by file
Metrics – Duplication
●   Duplicated lines (duplicated_lines)
       –   Number of physical lines touched by a duplication

●   Duplicated blocks (duplicated_blocks)
       –   Number of duplicated blocks of lines

●   Duplicated files (duplicated_files)
       –   Number of files involved in a duplication of lines

●   Density of duplicated lines
    (duplicated_lines_density)
       –   Duplicated lines / Physical lines * 100
Metrics – Tests
●   Unit tests (tests)
       –   Number of unit tests


●   Unit tests duration (test_execution_time)
       –   Time required to execute unit tests


●   Unit test errors (test_errors)
       –   Number of unit tests that failed with an unexpected exception


●   Unit test failures (test_failures)
       –   Number of unit tests that failed on an assertion


●   Unit test success density (test_success_density)
       –   (Unit tests - (errors + failures))/ Unit tests * 100


●   Skipped unit tests (skipped_tests)
       –   Number of skipped unit tests
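The test_success_density formula above can be sketched as follows (a hypothetical helper, not Sonar's implementation):

```java
// Illustrative sketch of the test_success_density formula above.
public class TestMetrics {
    // test_success_density = (Unit tests - (errors + failures)) / Unit tests * 100
    public static double testSuccessDensity(int tests, int errors, int failures) {
        return (double) (tests - (errors + failures)) / tests * 100.0;
    }
}
```

For example, 200 tests with 10 errors and 40 failures give a success density of 75%.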
Metrics – Tests (continued..)
●   Line Coverage (line_coverage)
      –   On a given line of code, line coverage simply answers the
          question: "Is this line of code executed during unit test
          execution?". At project level, this is the density of covered
          lines:
          Line coverage = LC / EL where
            ●   LC - lines covered (lines_to_cover – uncovered_lines)
           ●   EL - total number of executable lines (lines_to_cover)
●   New Line Coverage (new_line_coverage)
      –   identical to line_coverage but restricted to new / updated
          source code
Metrics – Tests (continued..)
●   Branch coverage (branch_coverage)
    ●   On each line of code containing boolean expressions, branch
        coverage answers the question: "Has each boolean expression
        been evaluated both to true and false?". At project level, this
        is the density of possible branches in flow control structures
        that have been followed.
            Branch coverage = (CT + CF) / (2*B) where
            CT - branches that evaluated to "true" at least once
            CF - branches that evaluated to "false" at least once
            (CT + CF = conditions_to_cover – uncovered_conditions)
            B - total number of branches (2*B = conditions_to_cover)


●   New Branch Coverage (new_branch_coverage)
        –   identical to branch_coverage but restricted to new / updated source code
Metrics – Tests (continued..)
●   Coverage (coverage)
    ●   The coverage metric combines the line coverage and branch
        coverage metrics above to give a more accurate answer to the
        question: "How much of the source code is being executed by
        your unit tests?"
    ●   Coverage is calculated with the following formula:
          coverage = (CT + CF + LC)/(2*B + EL) where
          CT - branches that evaluated to "true" at least once
          CF - branches that evaluated to "false" at least once
          LC - lines covered (lines_to_cover - uncovered_lines)


          B - total number of branches (2*B = conditions_to_cover)
          EL - total number of executable lines (lines_to_cover)


●   New Coverage (new_coverage)
    ●   identical to coverage but restricted to new / updated source code
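The line, branch and overall coverage formulas from this and the two preceding slides can be sketched together; the class and method names are illustrative, not Sonar's API:

```java
// Illustrative sketch of the coverage formulas from the preceding slides.
public class CoverageMetrics {
    // line_coverage = LC / EL, where LC = lines_to_cover - uncovered_lines
    public static double lineCoverage(int linesToCover, int uncoveredLines) {
        return (double) (linesToCover - uncoveredLines) / linesToCover;
    }

    // branch_coverage = (CT + CF) / (2*B),
    // where CT + CF = conditions_to_cover - uncovered_conditions
    // and 2*B = conditions_to_cover
    public static double branchCoverage(int conditionsToCover, int uncoveredConditions) {
        return (double) (conditionsToCover - uncoveredConditions) / conditionsToCover;
    }

    // coverage = (CT + CF + LC) / (2*B + EL)
    public static double coverage(int linesToCover, int uncoveredLines,
                                  int conditionsToCover, int uncoveredConditions) {
        int covered = (linesToCover - uncoveredLines)
                + (conditionsToCover - uncoveredConditions);
        return (double) covered / (linesToCover + conditionsToCover);
    }
}
```

For example, 100 executable lines with 25 uncovered and 40 conditions with 10 uncovered give line, branch and overall coverage of 0.75 each.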
Metrics – Tests (continued..)
●   Conditions to Cover (conditions_to_cover)
      –   Total number of conditions which could be covered by unit tests.
●   New Conditions to Cover (new_conditions_to_cover)
      –   identical to conditions_to_cover but restricted to new / updated source code
●   Lines to Cover (lines_to_cover)
      –   Total number of lines of code which could be covered by unit tests.
●   New Lines to Cover (new_lines_to_cover)
      –   identical to lines_to_cover but restricted to new / updated source code
●   Uncovered Conditions (uncovered_conditions)
      –   Total number of conditions which are not covered by unit tests
●   New Uncovered Conditions (new_uncovered_conditions)
      –   identical to uncovered_conditions but restricted to new / updated source code
●   Uncovered Lines (uncovered_lines)
      –   Total number of lines of code which are not covered by unit tests.
●   New Uncovered Lines (new_uncovered_lines)
      –   identical to uncovered_lines but restricted to new / updated source code
Metrics – Design
●   Depth of inheritance tree (dit)
       –   The depth of inheritance tree (DIT) metric gives, for each class, the number of inheritance levels
           from the top of the object hierarchy. In Java, where all classes inherit from Object, the minimum value of DIT is 1.
●   Number of children (noc)
       –   A class's number of children (NOC) metric simply measures the number of direct and indirect
           descendants of the class.
●   Response for class (rfc)
       –   The response set of a class is a set of methods that can potentially be executed in response to a
           message received by an object of that class. RFC is simply the number of methods in the set.
●   Afferent couplings (ca)
       –   A class's afferent couplings is a measure of how many other classes use the specific class.
●   Efferent couplings (ce)
       –   A class's efferent couplings is a measure of how many different classes are used by the specific class.
●   Lack of cohesion of methods (lcom4)
       –   LCOM4 measures the number of "connected components" in a class. A connected component is a set
           of related methods and fields. There should be only one such component in each class. If there are 2
           or more components, the class should be split into that many smaller classes.
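As an illustration of LCOM4, here is a hypothetical class whose members form two unrelated connected components, so LCOM4 = 2 and it should be split in two:

```java
// Hypothetical class with LCOM4 = 2: its members form two unrelated
// "connected components", a sign it should become two smaller classes.
public class OrderReport {
    // Component 1: order totalling (price, quantity, total)
    private final double price;
    private final int quantity;

    // Component 2: report formatting (header, formatHeader)
    private final String header;

    public OrderReport(double price, int quantity, String header) {
        this.price = price;
        this.quantity = quantity;
        this.header = header;
    }

    public double total() {          // touches only price and quantity
        return price * quantity;
    }

    public String formatHeader() {   // touches only header
        return "== " + header + " ==";
    }
}
```

Nothing connects total() to formatHeader(): they share no fields, so the class mixes two responsibilities.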
Metrics – Design (continued..)
●   Package cycles (package_cycles)
      –   Minimum number of package cycles that must be detected in order to identify all
          undesired dependencies.


●   Package dependencies to cut (package_feedback_edges)
      –   Number of package dependencies to cut in order to remove all cycles between
          packages.


●   File dependencies to cut (package_tangles)
      –   Number of file dependencies to cut in order to remove all cycles between packages.


●   Package edges weight (package_edges_weight)
      –   Total number of file dependencies between packages.


●   Package tangle index (package_tangle_index)
      –   Gives the level of tangle of the packages, best value 0% meaning that there is no
          cycles and worst value 100% meaning that packages are really tangled. The index is
          calculated using : 2 * (package_tangles / package_edges_weight) * 100.
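A sketch of the tangle-index formula above (names are hypothetical, not Sonar's code); the factor 2 normalizes the index so that the worst case, where half of all dependencies are edges to cut, reads as 100%:

```java
// Illustrative sketch of the package_tangle_index formula above.
public class PackageTangle {
    // package_tangle_index = 2 * (package_tangles / package_edges_weight) * 100
    public static double tangleIndex(int packageTangles, int packageEdgesWeight) {
        if (packageEdgesWeight == 0) {
            return 0.0; // no dependencies between packages, so no tangle
        }
        return 2.0 * packageTangles / packageEdgesWeight * 100.0;
    }
}
```

For example, 5 dependencies to cut out of 40 file dependencies between packages give a tangle index of 25%.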
Metrics – Design (continued..)
●   File cycles (file_cycles)
       –   Minimum number of file cycles that must be detected inside a package in order to
           identify all undesired dependencies.


●   Suspect file dependencies (file_feedback_edges)
       –   File dependencies to cut in order to remove cycles between files inside a
           package. Warning: cycles between files inside a package are not always bad.


●   File tangle (file_tangles)
       –   file_tangles = file_feedback_edges.


●   File edges weight (file_edges_weight)
       –   Total number of file dependencies inside a package.


●   File tangle index (file_tangle_index)
       –   2 * (file_tangles / file_edges_weight) * 100.
Metrics – SCM
●   Commits
      –   The number of commits.
●   Last commit date
      –   The latest commit date on a resource.
●   Revision
      –   The latest revision of a resource.
●   Authors by line
      –   The last committer on each line of code.
●   Revisions by line
      –   The revision number on each line of code.
Coverage - Take-home Points:
●   Don’t use percentage metrics for coverage
●   Unit tests make code simpler and easier to
    understand
Compliance - Take-home Point:




 Don’t Change the Rules During the Game
Complexity - Take-home Point:




  Don’t Prohibit Complexity, Manage It.
Comments - Take-home Point:




    Code a little. Comment a little.
Architecture & Design - Take-home Point:




      Sonar-guided refactoring
Live Sonar instance

http://nemo.sonarsource.org/
Technical Debt Plugin
Technical Debt Calculation
               Important metrics to look for

●   duplicated_blocks
●   violations – info_violations
●   public_undocumented_api
●   uncovered_complexity_by_tests (it is considered that 80% coverage is the objective)
●   function_complexity_distribution >= 8, class_complexity_distribution >= 60
●   package_edges_weight
Technical Debt Calculation

Debt (in man-days) =

      cost_to_fix_duplications +
      cost_to_fix_violations +
      cost_to_comment_public_API +
      cost_to_fix_uncovered_complexity +
      cost_to_bring_complexity_below_threshold +
      cost_to_cut_cycles_at_package_level
Calculation of Debt ratio




Debt Ratio = (Current Debt / Total Possible Debt) * 100
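A hedged sketch of the two calculations above; the cost inputs are assumed to be pre-computed in man-days, and the names are hypothetical, not the plugin's code:

```java
// Illustrative sketch of the technical-debt calculation above.
public class TechnicalDebt {
    // Debt (in man-days) = sum of the six remediation costs
    // (duplications, violations, public API comments, uncovered
    //  complexity, complexity above threshold, package cycles).
    public static double debt(double... costs) {
        double total = 0.0;
        for (double c : costs) {
            total += c;
        }
        return total;
    }

    // Debt Ratio = (Current Debt / Total Possible Debt) * 100
    public static double debtRatio(double currentDebt, double totalPossibleDebt) {
        return currentDebt / totalPossibleDebt * 100.0;
    }
}
```

For example, component costs of 1 + 2 + 3 + 4 + 5 + 5 man-days give a debt of 20 man-days; against a total possible debt of 80 man-days, the debt ratio is 25%.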
Sonar in 2 minutes

●   Download from http://sonar.codehaus.org/downloads/
●   unzip
●   sonar.sh console
●   mvn sonar:sonar
●   http://localhost:9000/
Installing Plug-ins



●   Download the plug-in jar
●   Copy it to extensions/plugins
●   Restart Sonar
Some Useful Plugins

●   SQALE – Quality Model
●   Technical Debt
●   Eclipse/IntelliJ IDEA
●   JIRA Issues
●   Build Breaker
Thank You!

twitter.com/keheliya
keheliya@wso2.com
April 5, 2012

Image credits:
http://crimespace.ning.com/profiles/blogs/psychological-impact-of-the
http://agileandbeyond.blogspot.com/2011/05/velocity-handle-with-care.html

More Related Content

What's hot

What's hot (20)

Sonar Tool - JAVA code analysis
Sonar Tool - JAVA code analysisSonar Tool - JAVA code analysis
Sonar Tool - JAVA code analysis
 
Jenkins with SonarQube
Jenkins with SonarQubeJenkins with SonarQube
Jenkins with SonarQube
 
SonarQube: ¿cómo de malo es mi software?
SonarQube: ¿cómo de malo es mi software?SonarQube: ¿cómo de malo es mi software?
SonarQube: ¿cómo de malo es mi software?
 
Sonar
SonarSonar
Sonar
 
Track code quality with SonarQube
Track code quality with SonarQubeTrack code quality with SonarQube
Track code quality with SonarQube
 
Sonar qube
Sonar qubeSonar qube
Sonar qube
 
SonarQube Overview
SonarQube OverviewSonarQube Overview
SonarQube Overview
 
Terraform 101
Terraform 101Terraform 101
Terraform 101
 
Introduction to DevSecOps
Introduction to DevSecOpsIntroduction to DevSecOps
Introduction to DevSecOps
 
Azure DevOps - Azure Guatemala Meetup
Azure DevOps - Azure Guatemala MeetupAzure DevOps - Azure Guatemala Meetup
Azure DevOps - Azure Guatemala Meetup
 
Managing code quality with SonarQube
Managing code quality with SonarQubeManaging code quality with SonarQube
Managing code quality with SonarQube
 
SonarQube - The leading platform for Continuous Code Quality
SonarQube - The leading platform for Continuous Code QualitySonarQube - The leading platform for Continuous Code Quality
SonarQube - The leading platform for Continuous Code Quality
 
SonarQube
SonarQubeSonarQube
SonarQube
 
Static code analysis with sonar qube
Static code analysis with sonar qubeStatic code analysis with sonar qube
Static code analysis with sonar qube
 
Code Quality Lightning Talk
Code Quality Lightning TalkCode Quality Lightning Talk
Code Quality Lightning Talk
 
#ATAGTR2019 Presentation "DevSecOps with GitLab" By Avishkar Nikale
#ATAGTR2019 Presentation "DevSecOps with GitLab" By Avishkar Nikale#ATAGTR2019 Presentation "DevSecOps with GitLab" By Avishkar Nikale
#ATAGTR2019 Presentation "DevSecOps with GitLab" By Avishkar Nikale
 
SonarQube Presentation.pptx
SonarQube Presentation.pptxSonarQube Presentation.pptx
SonarQube Presentation.pptx
 
Docker Security: Are Your Containers Tightly Secured to the Ship?
Docker Security: Are Your Containers Tightly Secured to the Ship?Docker Security: Are Your Containers Tightly Secured to the Ship?
Docker Security: Are Your Containers Tightly Secured to the Ship?
 
What is SonarQube in DevOps.docx
What is SonarQube in DevOps.docxWhat is SonarQube in DevOps.docx
What is SonarQube in DevOps.docx
 
Slide DevSecOps Microservices
Slide DevSecOps Microservices Slide DevSecOps Microservices
Slide DevSecOps Microservices
 

Viewers also liked

Robot PowerPoint
Robot PowerPointRobot PowerPoint
Robot PowerPoint
bradschultz
 

Viewers also liked (13)

Effective Dashboard Design
Effective Dashboard DesignEffective Dashboard Design
Effective Dashboard Design
 
Maven overview
Maven overviewMaven overview
Maven overview
 
Alfresco Mavenisation
Alfresco MavenisationAlfresco Mavenisation
Alfresco Mavenisation
 
Sonar system
Sonar systemSonar system
Sonar system
 
Sonar
SonarSonar
Sonar
 
Sonar Overview
Sonar OverviewSonar Overview
Sonar Overview
 
SONAR
SONAR SONAR
SONAR
 
Robot PowerPoint
Robot PowerPointRobot PowerPoint
Robot PowerPoint
 
robotics ppt
robotics ppt robotics ppt
robotics ppt
 
Basics of Robotics
Basics of RoboticsBasics of Robotics
Basics of Robotics
 
Robotics project ppt
Robotics project pptRobotics project ppt
Robotics project ppt
 
SEO in 2017/18
SEO in 2017/18SEO in 2017/18
SEO in 2017/18
 
Inside Google's Numbers in 2017
Inside Google's Numbers in 2017Inside Google's Numbers in 2017
Inside Google's Numbers in 2017
 

Similar to Sonar Metrics

Qat09 presentations dxw07u
Qat09 presentations dxw07uQat09 presentations dxw07u
Qat09 presentations dxw07u
Shubham Sharma
 
Dealing with the Three Horrible Problems in Verification
Dealing with the Three Horrible Problems in VerificationDealing with the Three Horrible Problems in Verification
Dealing with the Three Horrible Problems in Verification
DVClub
 

Similar to Sonar Metrics (20)

11 whiteboxtesting
11 whiteboxtesting11 whiteboxtesting
11 whiteboxtesting
 
Qat09 presentations dxw07u
Qat09 presentations dxw07uQat09 presentations dxw07u
Qat09 presentations dxw07u
 
Software Engineering : Software testing
Software Engineering : Software testingSoftware Engineering : Software testing
Software Engineering : Software testing
 
Code Metrics
Code MetricsCode Metrics
Code Metrics
 
Building largescalepredictionsystemv1
Building largescalepredictionsystemv1Building largescalepredictionsystemv1
Building largescalepredictionsystemv1
 
Path testing, data flow testing
Path testing, data flow testingPath testing, data flow testing
Path testing, data flow testing
 
Pragmatic Code Coverage
Pragmatic Code CoveragePragmatic Code Coverage
Pragmatic Code Coverage
 
EKON 23 Code_review_checklist
EKON 23 Code_review_checklistEKON 23 Code_review_checklist
EKON 23 Code_review_checklist
 
Taking your machine learning workflow to the next level using Scikit-Learn Pi...
Taking your machine learning workflow to the next level using Scikit-Learn Pi...Taking your machine learning workflow to the next level using Scikit-Learn Pi...
Taking your machine learning workflow to the next level using Scikit-Learn Pi...
 
AutoTest.ppt
AutoTest.pptAutoTest.ppt
AutoTest.ppt
 
AutoTest.ppt
AutoTest.pptAutoTest.ppt
AutoTest.ppt
 
AutoTest.ppt
AutoTest.pptAutoTest.ppt
AutoTest.ppt
 
Testing foundations
Testing foundationsTesting foundations
Testing foundations
 
Chapter1.1 Introduction.ppt
Chapter1.1 Introduction.pptChapter1.1 Introduction.ppt
Chapter1.1 Introduction.ppt
 
Chapter1.1 Introduction to design and analysis of algorithm.ppt
Chapter1.1 Introduction to design and analysis of algorithm.pptChapter1.1 Introduction to design and analysis of algorithm.ppt
Chapter1.1 Introduction to design and analysis of algorithm.ppt
 
Model Driven Testing: requirements, models & test
Model Driven Testing: requirements, models & test Model Driven Testing: requirements, models & test
Model Driven Testing: requirements, models & test
 
Dealing with the Three Horrible Problems in Verification
Dealing with the Three Horrible Problems in VerificationDealing with the Three Horrible Problems in Verification
Dealing with the Three Horrible Problems in Verification
 
Unit 2 - Test Case Design
Unit 2 - Test Case DesignUnit 2 - Test Case Design
Unit 2 - Test Case Design
 
6months industrial training in software testing, jalandhar
6months industrial training in software testing, jalandhar6months industrial training in software testing, jalandhar
6months industrial training in software testing, jalandhar
 
6 weeks summer training in software testing,ludhiana
6 weeks summer training in software testing,ludhiana6 weeks summer training in software testing,ludhiana
6 weeks summer training in software testing,ludhiana
 

Recently uploaded

IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
Enterprise Knowledge
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
Earley Information Science
 

Recently uploaded (20)

Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
Evaluating the top large language models.pdf
Evaluating the top large language models.pdfEvaluating the top large language models.pdf
Evaluating the top large language models.pdf
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdf
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 

Sonar Metrics

  • 1. Sonar Metrics Keheliya Gallaba WSO2 Inc.
  • 2. Why collect metrics? ● You cannot improve what you don’t measure ● What you don’t measure, you cannot prove ● Broken Window Theory
  • 3. What to do? ● Prevention is the best medicine ● Planning and Prioritizing ● Technical Debt Resolution
  • 4. What to monitor? Duplicated code Coding standards Unit tests Complex code Potential bugs Comments Design and architecture
  • 5. How to monitor? ● Sonar Dashboard – Lines of code – Code Complexity – Code Coverage – Rules Compliance ● Time Machine ● Clouds & Hot spots
  • 7. Metrics - Rules ● Violations – Total number of rule violations ● New Violations – Total number of new violations ● xxxxx violations – Number of violations with severity xxxxx, xxxxx being blocker, critical, major, minor or info ● New xxxxx violations – Number of new violations with severity xxxxx, xxxxx being blocker, critical, major, minor or info ● Weighted violations – Sum of the violations weighted by the coefficient associated at each priority (Sum(xxxxx_violations * xxxxx_weight)) – Default Weights: INFO=0;MINOR=1;MAJOR=3;CRITICAL=5;BLOCKER=10 ● Rules compliance index (violations_density) – 100 - weighted_violations / Lines of code * 100
  • 8. Metrics - Size ● Physical lines – Number of carriage returns ● Comment lines – Number of javadoc, multi-comment and single-comment lines. Empty comment lines like, header file comments (mainly used to define the license) and commented-out lines of code are not included. ● Commented-out lines of code – Number of commented-out lines of code. Javadoc blocks are not scanned. ● Lines of code (ncloc) – Number of physical lines of code - number of blank lines - number of comment lines - number of header file comments - commented-out lines of code ● Density of comment lines – Number of comment lines / (lines of code + number of comments lines) * 100 – With such formula : 50% means that there is the same number of lines of code and comment lines 100% means that the file contains only comment lines and no lines of code
  • 9. Metrics – Size (Contd.) ● Packages – Number of packages ● Classes – Number of classes including nested classes, interfaces, enums and annotations ● Files – Number of analyzed files ● Directories – Number of analyzed directories ● Accessors – Number of getter and setter methods used to get(reading) or set(writing) a class' property . ● Methods – Number of Methods without including accessors. A constructor is considered to be a method. ● Public API – Number of public classes, public methods (without accessors) and public properties (without public final static ones) ● Public undocumented API – Number of public API without a Javadoc block ● Density of public documented API (public_documented_api_density) – (Number of public API - Number of undocumented public API) / Number of public API * 100 ● Statements – Number of statements as defined in the Java Language Specification but without block definitions. Statements counter gets incremented by one each time an expression, if, else, while, do, for, switch, break, continue, return, throw, synchronized, catch, finally is encountered.. – Statements counter is not incremented by a class, method, field, annotation definition or by a package and import declaration.
  • 10. Metrics – Complexity ● Complexity – The Cyclomatic Complexity Number is also known as McCabe Metric. It all comes down to simply counting 'if', 'for', 'while' statements etc. in a method. Whenever the control flow of a method splits, the Cyclomatic counter gets incremented by one. – Each method has a minimum value of 1 per default except accessors which are not considered as method and so don't increment complexity. For each of the following Java keywords/statements this value gets incremented by one: ● If ● For ● While ● Case ● Catch ● Throw ● return (that isn't the last statement of a method) ● && ● || ● ? – Note that else, default, and finally don't increment the CCN value any further.
  • 11. Metrics – Complexity (continued..) public void process(Car myCar){ <- +1 if (myCar.isNotMine()){ <- +1 return; <- +1 } car.paint("red"); car.changeWheel(); while(car.hasGazol() && car.getDriver().isNotStressed()){ <- +2 car.drive(); } return; }
  • 12. Metrics – Complexity (Continued..) ● Average complexity by method (function_complexity) – Average cyclomatic complexity number by method ● Complexity distribution by method (function_complexity_distribution) – Number of methods for given complexities ● Average complexity by class (class_complexity) – Average cyclomatic complexity by class ● Complexity distribution by class (class_complexity_distribution) – Number of classes for given complexities ● Average complexity by file (file_complexity) – Average cyclomatic complexity by file
Metrics – Duplication
●   Duplicated lines (duplicated_lines)
      –   Number of physical lines touched by a duplication
●   Duplicated blocks (duplicated_blocks)
      –   Number of duplicated blocks of lines
●   Duplicated files (duplicated_files)
      –   Number of files involved in a duplication of lines
●   Density of duplicated lines (duplicated_lines_density)
      –   Duplicated lines / Physical lines * 100
Metrics – Tests
●   Unit tests (tests)
      –   Number of unit tests
●   Unit tests duration (test_execution_time)
      –   Time required to execute the unit tests
●   Unit test errors (test_errors)
      –   Number of unit tests that failed
●   Unit test failures (test_failures)
      –   Number of unit tests that failed with an unexpected exception
●   Unit test success density (test_success_density)
      –   (Unit tests - (errors + failures)) / Unit tests * 100
●   Skipped unit tests (skipped_tests)
      –   Number of skipped unit tests
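The success-density formula can be sanity-checked with a few lines of Java. This is a sketch with made-up numbers; the class and method names are ours, only the formula comes from the metric definition above.

```java
public class TestSuccessDensity {
    // test_success_density = (tests - (errors + failures)) / tests * 100
    static double successDensity(int tests, int errors, int failures) {
        return (tests - (errors + failures)) / (double) tests * 100.0;
    }

    public static void main(String[] args) {
        // Hypothetical run: 200 unit tests, 4 errors, 6 failures
        System.out.println(successDensity(200, 4, 6)); // 95.0
    }
}
```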
Metrics – Tests (continued..)
●   Line coverage (line_coverage)
      –   On a given line of code, line coverage simply answers the question: "Is this line of code executed during unit test execution?". At project level, this is the density of covered lines:
          Line coverage = LC / EL where
            ●   LC - lines covered (lines_to_cover - uncovered_lines)
            ●   EL - total number of executable lines (lines_to_cover)
●   New line coverage (new_line_coverage)
      –   Identical to line_coverage but restricted to new / updated source code
Metrics – Tests (continued..)
●   Branch coverage (branch_coverage)
      –   On each line of code containing boolean expressions, branch coverage simply answers the question: "Has each boolean expression evaluated both to true and to false?". At project level, this is the density of possible branches in flow control structures that have been followed:
          Branch coverage = (CT + CF) / (2 * B) where
            ●   CT - branches that evaluated to "true" at least once
            ●   CF - branches that evaluated to "false" at least once
                (CT + CF = conditions_to_cover - uncovered_conditions)
            ●   B - total number of branches (2 * B = conditions_to_cover)
●   New branch coverage (new_branch_coverage)
      –   Identical to branch_coverage but restricted to new / updated source code
Metrics – Tests (continued..)
●   Coverage (coverage)
      –   The coverage metric combines the two previous metrics, line coverage and branch coverage, to give an even more accurate answer to the question "How much of the source code is being executed by your unit tests?".
      –   Coverage is calculated with the following formula:
          coverage = (CT + CF + LC) / (2 * B + EL) where
            ●   CT - branches that evaluated to "true" at least once
            ●   CF - branches that evaluated to "false" at least once
            ●   LC - lines covered (lines_to_cover - uncovered_lines)
            ●   B - total number of branches (2 * B = conditions_to_cover)
            ●   EL - total number of executable lines (lines_to_cover)
●   New coverage (new_coverage)
      –   Identical to coverage but restricted to new / updated source code
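To see how the line, branch and combined formulas relate, here is a small Java sketch with invented numbers; the class and variable names are ours, only the formulas come from the metric definitions.

```java
public class CoverageMetrics {
    // line_coverage   = LC / EL
    // branch_coverage = (CT + CF) / (2 * B)
    // coverage        = (CT + CF + LC) / (2 * B + EL)
    static double coverage(int ct, int cf, int lc, int b, int el) {
        return (ct + cf + lc) / (double) (2 * b + el);
    }

    public static void main(String[] args) {
        // Hypothetical file: 100 executable lines, 80 of them covered;
        // 10 branches, 9 evaluated to true at least once, 7 to false
        int el = 100, lc = 80, b = 10, ct = 9, cf = 7;
        System.out.println((double) lc / el);              // line coverage:   0.8
        System.out.println((ct + cf) / (double) (2 * b));  // branch coverage: 0.8
        System.out.println(coverage(ct, cf, lc, b, el));   // overall:         0.8
    }
}
```

Note that the combined metric is not an average of the other two: it weights lines and branches by their counts, so a file with many branches is penalized more for missing branch coverage.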
Metrics – Tests (continued..)
●   Conditions to cover (conditions_to_cover)
      –   Total number of conditions which could be covered by unit tests
●   New conditions to cover (new_conditions_to_cover)
      –   Identical to conditions_to_cover but restricted to new / updated source code
●   Lines to cover (lines_to_cover)
      –   Total number of lines of code which could be covered by unit tests
●   New lines to cover (new_lines_to_cover)
      –   Identical to lines_to_cover but restricted to new / updated source code
●   Uncovered conditions (uncovered_conditions)
      –   Total number of conditions which are not covered by unit tests
●   New uncovered conditions (new_uncovered_conditions)
      –   Identical to uncovered_conditions but restricted to new / updated source code
●   Uncovered lines (uncovered_lines)
      –   Total number of lines of code which are not covered by unit tests
●   New uncovered lines (new_uncovered_lines)
      –   Identical to uncovered_lines but restricted to new / updated source code
Metrics – Design
●   Depth of inheritance tree (dit)
      –   The depth of inheritance tree (DIT) metric provides, for each class, a measure of the inheritance levels from the top of the object hierarchy. In Java, where all classes inherit from Object, the minimum value of DIT is 1.
●   Number of children (noc)
      –   A class's number of children (NOC) metric simply measures the number of direct and indirect descendants of the class.
●   Response for class (rfc)
      –   The response set of a class is the set of methods that can potentially be executed in response to a message received by an object of that class. RFC is simply the number of methods in that set.
●   Afferent couplings (ca)
      –   A class's afferent couplings is a measure of how many other classes use the specific class.
●   Efferent couplings (ce)
      –   A class's efferent couplings is a measure of how many different classes are used by the specific class.
●   Lack of cohesion of methods (lcom4)
      –   LCOM4 measures the number of "connected components" in a class. A connected component is a set of related methods and fields. There should be only one such component in each class; if there are 2 or more, the class should be split into that many smaller classes.
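As an illustration of LCOM4, here is a hypothetical class with two connected components: nothing links the 'name' methods to the 'balance' methods, so LCOM4 would be 2 and the class is a candidate for splitting. The methods are deliberately non-trivial, since plain accessors are ignored by the metric.

```java
public class CustomerAccount {
    private String name;       // touched only by component 1
    private double balance;    // touched only by component 2

    // Component 1: these methods use only 'name'
    public void rename(String newName) { name = newName; }
    public String label() { return "account of " + name; }

    // Component 2: these methods use only 'balance'
    public void deposit(double amount) { balance += amount; }
    public double balanceAfterInterest(double rate) { return balance * (1 + rate); }
}
```

Splitting this class into a Customer (name-related) and an Account (balance-related) class would bring LCOM4 down to 1 for each.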
Metrics – Design (continued..)
●   Package cycles (package_cycles)
      –   Minimal number of package cycles detected, to be able to identify all undesired dependencies
●   Package dependencies to cut (package_feedback_edges)
      –   Number of package dependencies to cut in order to remove all cycles between packages
●   File dependencies to cut (package_tangles)
      –   Number of file dependencies to cut in order to remove all cycles between packages
●   Package edges weight (package_edges_weight)
      –   Total number of file dependencies between packages
●   Package tangle index (package_tangle_index)
      –   Gives the level of tangle of the packages: the best value, 0%, means there are no cycles, and the worst value, 100%, means the packages are completely tangled. The index is calculated as: 2 * (package_tangles / package_edges_weight) * 100
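The tangle-index formula is easy to misread, so here is a worked example in Java with hypothetical numbers; only the formula itself comes from the metric definition.

```java
public class PackageTangleIndex {
    // package_tangle_index = 2 * (package_tangles / package_edges_weight) * 100
    static double tangleIndex(int packageTangles, int packageEdgesWeight) {
        return 2.0 * packageTangles / packageEdgesWeight * 100.0;
    }

    public static void main(String[] args) {
        // Hypothetical project: 200 file dependencies between packages,
        // of which 5 would have to be cut to break every package cycle
        System.out.println(tangleIndex(5, 200)); // 5.0
    }
}
```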
Metrics – Design (continued..)
●   File cycles (file_cycles)
      –   Minimal number of file cycles detected inside a package, to be able to identify all undesired dependencies
●   Suspect file dependencies (file_feedback_edges)
      –   File dependencies to cut in order to remove cycles between files inside a package. Warning: cycles between files inside a package are not always bad.
●   File tangle (file_tangles)
      –   file_tangles = file_feedback_edges
●   File edges weight (file_edges_weight)
      –   Total number of file dependencies inside a package
●   File tangle index (file_tangle_index)
      –   2 * (file_tangles / file_edges_weight) * 100
Metrics – SCM
●   Commits
      –   The number of commits
●   Last commit date
      –   The latest commit date on a resource
●   Revision
      –   The latest revision of a resource
●   Authors by line
      –   The last committer on each line of code
●   Revisions by line
      –   The revision number on each line of code
Coverage - Take-home Points
●   Don’t use percentage metrics for coverage
●   Unit tests make code simpler and easier to understand
Compliance - Take-home Point: Don’t Change the Rules During the Game
Complexity - Take-home Point: Don’t Prohibit Complexity, Manage It.
Comments - Take-home Point: Code a little. Comment a little.
Architecture & Design - Take-home Point: Sonar-guided refactoring
Technical Debt Calculation
Important metrics to look for:
●   duplicated_blocks
●   violations - info_violations
●   public_undocumented_api
●   uncovered_complexity_by_tests (it is considered that 80% coverage is the objective)
●   function_complexity_distribution >= 8, class_complexity_distribution >= 60
●   package_edges_weight
Technical Debt Calculation
Debt (in man days) =
      cost_to_fix_duplications
    + cost_to_fix_violations
    + cost_to_comment_public_API
    + cost_to_fix_uncovered_complexity
    + cost_to_bring_complexity_below_threshold
    + cost_to_cut_cycles_at_package_level
Calculation of Debt Ratio
Debt Ratio = (Current Debt / Total Possible Debt) * 100
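The ratio can be computed directly from the two figures; a sketch in Java, where the man-day numbers are invented for illustration:

```java
public class DebtRatio {
    // Debt Ratio = (Current Debt / Total Possible Debt) * 100
    static double debtRatio(double currentDebtManDays, double totalPossibleDebtManDays) {
        return currentDebtManDays / totalPossibleDebtManDays * 100.0;
    }

    public static void main(String[] args) {
        // Hypothetical project: 30 man-days of debt out of a possible 400
        System.out.println(debtRatio(30, 400)); // 7.5
    }
}
```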
Sonar in 2 minutes
●   Download and unzip: http://sonar.codehaus.org/downloads/
●   Start the server: sonar.sh console
●   Analyze your Maven project: mvn sonar:sonar
●   Browse the results: http://localhost:9000/
Installing Plug-ins
●   Download the plug-in jar
●   Copy it to extensions/plugins
●   Restart Sonar
Some Useful Plugins
●   SQALE – Quality Model
●   Technical Debt
●   Eclipse/IntelliJ IDEA
●   JIRA Issues
●   Build Breaker
Thank You!
twitter.com/keheliya
keheliya@wso2.com
April 5, 2012
Image credits:
http://crimespace.ning.com/profiles/blogs/psychological-impact-of-the
http://agileandbeyond.blogspot.com/2011/05/velocity-handle-with-care.html