
Case Study: Automated Code Reviews In A Grown SAP Application Landscape At EWE AG


Markus Theilen, IT development coordinator at EWE, talks about his experience with and approach to the introduction of Virtual Forge CodeProfiler in the application development of easy+.

easy+ is a 100% custom-developed application system of EWE based on SAP ERP 6.0. It comprises the components meter reading, accounting, invoicing and claims management, market communication, and reporting/controlling for the energy services of the EWE Group.



  1. Mastering the Hard Way: safe custom ABAP code. Implementing Virtual Forge CodeProfiler for ABAP security and quality in a grown application landscape. Markus Theilen, EWE AG
  2. Agenda: About EWE • Your presenter • Motivation • Looking back • Approach with CodeProfiler • Lessons learned
  3. Comprehensive solutions in three key sectors: EWE brings together energy, telecommunications and information technology, and thereby possesses all the key expertise for sustainable, intelligent energy supply systems.
  4. EWE – one of the largest companies in northwest Germany: sales of €8.9 billion, net profit of €57.2 million, and an average of 9,162 employees.
  5. Our strengths are our excellent service and advice as well as the proximity to our customers. In 2013: 1.4 million electricity customers, 1.6 million gas customers, and 680,000 telecommunications customers.
  6. EWE’s regions in Germany, Poland and Turkey
  7. Introducing your presenter: Markus Theilen, Enterprise Architect. 2001 – 2012: software developer and architect in the easy+ ABAP development at BTC AG o Created and established development guidelines o Introduced automated checks of development objects. Since 2012: IT coordinator in the E-IT group “Billing and Market Communication” o Responsible for coordinating the development and operation of easy+. Since 2009: associate speaker of the DSAG working group Development. Co-author: Best Practice Guidelines for Development – Practical tips on ABAP development. Author of one of the most popular presentations about ABAP code analysis: http://www.slideshare.net/therealtier/static-abap-code-analyzers
  8. About easy+ o 100% custom-developed, based on SAP ERP 6.0 o Includes components for meter reading, accounting, invoicing, claims management, market communication o Reporting and controlling for EWE Group energy services o Comparable with the functionality of SAP IS-U o Entirely written in ABAP o In productive use since 1995 o Approximately 100 people in development
  9. Why “mastering the hard way”? o easy+ was developed over decades o More than 100 developers with very different skill levels were involved o No distinct encapsulation of internal modules → a highly interdependent monolith → extremely difficult to maintain o Rare checks for compliance with developer guidelines o Completely manual regression testing (high effort), including for purely technical changes (even higher effort)
  10. Technical view on easy+ o 8 TB data volume o 10 million lines of code o 1,600 packages o 11,000 programs o 8,600 classes and 1,500 interfaces o 1,500 function groups o 4,400 tables
  11. Far too much code for manual reviews o The more code, the higher the complexity (exponential growth) o Code might look OK in a manual review but still have a severe impact in the context of its call hierarchy o It’s impossible to check complex code manually. Our paradigm: We do not allow development requirements that we can’t check automatically! (A risky statement.)
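To make that paradigm concrete, here is a hypothetical sketch that is not part of the presentation: a tiny ABAP report that applies one automatically checkable guideline ("no SELECT *") to the source of another program. The program and parameter names are invented, and real tools such as CodeProfiler or the ABAP Code Inspector work far beyond plain text matching.

```abap
REPORT z_guideline_scan_demo.

" Hypothetical, deliberately trivial example of an automatically checkable
" guideline: flag every SELECT * in a given program's source code.
PARAMETERS p_prog TYPE sy-repid DEFAULT 'ZSOME_REPORT'.  "program name is made up

TYPES ty_line TYPE c LENGTH 255.
DATA lt_source TYPE STANDARD TABLE OF ty_line.

START-OF-SELECTION.
  READ REPORT p_prog INTO lt_source.
  IF sy-subrc <> 0.
    WRITE: / |Program { p_prog } not found.|.
    RETURN.
  ENDIF.

  LOOP AT lt_source INTO DATA(lv_line).
    " CS is a case-insensitive "contains string" comparison
    IF lv_line CS 'SELECT *'.
      WRITE: / |{ p_prog }, line { sy-tabix }: SELECT * violates the guideline|.
    ENDIF.
  ENDLOOP.
```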
  12. Looking back. Before 2009: o No static analysis tools o No regular code reviews o No meaningful reporting about code quality was possible. However, we found bad code in old programs and expected better in new developments.
  13. Starting with static analysis tools. In 2009: o Introduction of an ABAP code scanning product in the easy+ development at BTC o Focus: ABAP and risk reporting for management o The product also scanned non-ABAP languages, including Java, C/C++, .NET, and others
  14. The good … o Some limited reporting on the quality of the code o Developers got “used to” code analysis o Knowledge about good and bad coding spread informally among developers o From a 10,000-foot point of view, many dashboards were available for a management target group, revealing insightful results
  15. … and the bad o The tool was expensive, hard to use, and error-prone o Many false positives or entirely wrong checking rules → decreasing acceptance by developers o Developers started a “benchmark optimization” with negative impact → they tried to satisfy the tool and stopped thinking o No integration into the ABAP development process possible (workbench, TMS) → no simple way to bring “their” results to the developer’s desk o Distinguishing legacy (unchanged) code from recently changed code was almost impossible → corrections could only be made in the context of the complete code, which significantly increased the manual testing effort
  16. A (last) word on the competitive product o The vendor has much in-house expertise in languages other than ABAP; in a non-ABAP context the product is much more stable and mature o The product offers interesting and informative analysis tools based on a “development object database” with metadata and manifold relations between objects o The product’s focus is aimed at the management level o It was only after using the tool that we realized we should follow a different direction → we needed a different tool for this
  17. Changeover to CodeProfiler. End of 2012: start of a productive pilot. Since early 2013: in full productive use. The initial focus was more on feedback for developers, not on dashboards.
  18. Positive experience and potential o Faster analysis o Much lower false-positive rate o TMS integration for automatic checking of code changes o Targeted checking of the existing code base o Best-coding-practices documentation for ABAP. Vendor: • Incorporates feedback into future development • Fast and accurate responses • Semi-annual releases
  19. Impact of the Scrum introduction on development. By the end of 2012, we started to switch to a Scrum-based development process in the easy+ environment. Key Scrum principle: feedback early and often! o The shorter the feedback cycles, the quicker and easier the right target can be reached again o In parallel, agile development practices to improve quality were introduced. The principle of quick feedback should also be applied to compliance with developer guidelines → a change of behavior is necessary
  20. The behavioral change o Feedback: regularly inform all involved parties in a factual and objective way about all issues o Penalties and rewards: • The impact of “bad” behavior must be tangible for those who can change something in their day-to-day work • Positive behavior must be reinforced by appropriate rewarding mechanisms
  21. Competitive product feedback o Infrequent feedback from the beginning o No direct relation between the developer’s day-to-day work and abstract “management figures” o No direct “pain” or feeling of inconvenience from possible penalties → no incentive for behavioral change
  22. CodeProfiler feedback o Feedback about security and quality is possible at any time through the tight integration of CodeProfiler into the ABAP development process o Developers get instant feedback at transport release → several times per day or week → much closer to the moment issues are introduced in newly developed code o Immediate “pain” through penalties: • Violations of known and accepted requirements lead to rejection of the transport and an approval process with potentially unpleasant inquiries. We are still working on providing a rewarding option → gamification approach
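As an illustration of the rejection mechanism described on this slide, here is a minimal sketch under stated assumptions: a local class that decides whether a transport must be rejected because it contains unapproved critical findings. All names, types and sample data are invented; in a real system such logic sits inside the vendor's TMS integration (for example behind the CHECK_BEFORE_RELEASE method of the standard CTS_REQUEST_CHECK BAdI), not in a standalone report.

```abap
REPORT z_transport_gate_demo.

" Hypothetical release gate, reduced to its core decision. Everything named
" here (class, types, data) is made up for illustration.
CLASS lcl_transport_gate DEFINITION.
  PUBLIC SECTION.
    TYPES: BEGIN OF ty_finding,
             obj_name TYPE string,     " object carrying the finding
             rule     TYPE string,     " violated rule
             approved TYPE abap_bool,  " exception granted by the architecture team?
           END OF ty_finding,
           ty_findings TYPE STANDARD TABLE OF ty_finding WITH EMPTY KEY.

    " Returns abap_true when the transport has to be rejected at release time.
    CLASS-METHODS must_reject
      IMPORTING it_findings      TYPE ty_findings
      RETURNING VALUE(rv_reject) TYPE abap_bool.
ENDCLASS.

CLASS lcl_transport_gate IMPLEMENTATION.
  METHOD must_reject.
    " Any critical finding without an approval blocks the release: the developer
    " must either fix the code or request an exception from the architecture team.
    READ TABLE it_findings WITH KEY approved = abap_false TRANSPORTING NO FIELDS.
    rv_reject = xsdbool( sy-subrc = 0 ).
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  DATA(lt_findings) = VALUE lcl_transport_gate=>ty_findings(
    ( obj_name = `ZCL_BILLING` rule = `Potential SQL injection` approved = abap_false ) ).
  IF lcl_transport_gate=>must_reject( lt_findings ) = abap_true.
    WRITE: / 'Transport release rejected: unapproved critical findings.'.
  ENDIF.
```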
  23. Rollout of CodeProfiler o Step 1: integrate CodeProfiler into the development environment o Step 2: activate selected, recognized, expert-developed rules that would block transport release o Step 3: establish an approval instance (the architecture team) that decides about exceptions o Step 4: activate the entire rule set in waves: • approx. every 3 months • within 12 months, all rules could be activated
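To make step 4 more tangible, here is a hypothetical sketch; the wave plan, rule names and dates are all invented, and in practice this activation lives in the scan tool's configuration rather than in custom ABAP. It lists which blocking rules are already active on a given date.

```abap
REPORT z_rule_wave_plan_demo.

" Hypothetical wave plan for activating blocking rules (step 4).
" All rule names and dates are made up for illustration.
TYPES: BEGIN OF ty_wave,
         rule        TYPE string,
         active_from TYPE d,      " date from which the rule blocks transports
       END OF ty_wave.

DATA lt_wave_plan TYPE STANDARD TABLE OF ty_wave WITH EMPTY KEY.

START-OF-SELECTION.
  lt_wave_plan = VALUE #( ( rule = `Hard-coded user names`   active_from = '20130101' )
                          ( rule = `Missing AUTHORITY-CHECK` active_from = '20130401' )
                          ( rule = `Dynamic SQL from input`  active_from = '20130701' ) ).

  " List the rules that are already active today, i.e. that block transport release.
  LOOP AT lt_wave_plan INTO DATA(ls_wave) WHERE active_from <= sy-datum.
    WRITE: / |Blocking rule active since { ls_wave-active_from DATE = USER }: { ls_wave-rule }|.
  ENDLOOP.
```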
  24. Current status of transports and approvals per month: 499 transports without critical findings; 50 transports with findings (87 findings in total), of which 44% were rejected and then corrected and 56% received an approval (*). (*) Reasons for approvals: code that we cannot touch; technical follow-up transports (system copy); false positives (only in rare cases). After ~12 months: → all new code is checked → 95% of transports are “clean”: 90% good code straight from the developer, 5% after rejection and correction.
  25. Criteria for selecting the check rules. A board of key developers discussed which rules to activate (and let block a transport) according to the following criteria: o Effort for correction o Number of occurrences of findings in existing code o Impact of findings o Personal opinion / experience
  26. How we deal with legacy code. Legacy code often violates more rules than new code because the rules were not in place when the code was written. Approach 1: handle legacy code like new code o Old code gets fixed when you touch it o Provokes an “outcry of horror” o Works if you roll out the rules gently in waves. Approach 2: only check code created after a certain date o Makes the rollout easier o Risk: old issues will not be fixed (yet). We use approach 1.
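As a thumbnail of what approach 2 would mean technically, here is a hypothetical sketch (the structure, field names and cut-off date are invented): findings are simply filtered by the creation date of the object they belong to, which is exactly why old issues would stay unfixed.

```abap
REPORT z_legacy_cutoff_demo.

" Hypothetical sketch of approach 2 (which we did NOT choose): only report
" findings in objects created after a cut-off date.
CONSTANTS gc_cutoff TYPE d VALUE '20130101'.

TYPES: BEGIN OF ty_finding,
         obj_name   TYPE string,
         created_on TYPE d,       " creation date of the object with the finding
         rule       TYPE string,
       END OF ty_finding.

DATA lt_findings TYPE STANDARD TABLE OF ty_finding WITH EMPTY KEY.

START-OF-SELECTION.
  " Drop every finding in code created before the cut-off. This makes the rollout
  " easier, but legacy issues silently stay in the system - the risk named above.
  DELETE lt_findings WHERE created_on < gc_cutoff.
```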
  27. Next steps o Implement the CodeProfiler BW components for management reporting o Roll out the ABAP Development Tools (ABAP in Eclipse) with CodeProfiler integration → allows the earliest possible feedback (interactively while you write the code) • The ABAP Development Tools allow seamless integration of further (in-house) tools into the development environment
  28. Evolution of feedback cycles: every 14 days (competitive tool) → x times per week (CodeProfiler) → any time (ABAP Development Tools with CodeProfiler)
  29. Lessons Learned o Not everyone is happy about the new code quality transparency, but this is a “must” if you want to successfully change and improve o The works council was involved quickly to address possible employee concerns o We observed “benchmark optimization” in order to avoid penalties
  30. Lessons Learned o Start small and extend the scope piece by piece: • roll out rules in waves • more and more code becomes part of tool-based scans o Integrate testing tools as efficiently and as early as possible into the development lifecycle o Involve developers in deciding about the set of rules → high acceptance across the whole developer team
  31. Contact: Email Markus.Theilen@ewe.de, Twitter @therealtier
  32. Thank you for your attention. EWE Aktiengesellschaft, Tirpitzstrasse 39, 26122 Oldenburg, Germany, T +49 441 4805-0, www.ewe.com
