
Building Quality with Foundations of Mud


Published in: Technology, Business


  1. Building quality on foundations of mud. Kristian Rosenvold, Selenium committer / Apache Maven committer. Zenior AS, Oslo, Norway. @krosenvold. «From the trenches, kitten free.»
  2. Textbook solution: [diagram: production and test environments as identical stacks, Web 1-3 → App 1-2 → Db1/Db2 in each]
  3. Your test environment sucks
      • Hardware mismatches: a 2-server cluster in test, 8 in production?
      • Capability mismatches: «a clustered multi-CPU Oracle licence is too expensive to buy for test»
      • Version mismatches
      • Latency/bandwidth differences
      • We do it «right» now, but 10 years ago...
  4. Your test data suck
      • Data does not match production
        − «Anonymized snapshots» of production data may be old
        − Products/data differ in production: test using the products you were selling 2 years ago?
      • Test back-ends in states of disarray
        − A subscriber has terminated his subscription in one system while another thinks he's still active
        − Only half your test systems know the customer is deceased
  5. Fixing all those broken test systems?
      • Low perceived business value
      • Removes focus from business tasks: no one wants to «stop» for 3-6 months
      • Pick your fights: fixing everything is too expensive, won't happen
      • Learn to live with it!
  6. How to build quality and stay good?
      • Empowered developers with full responsibility from specification through test to deployment
      • Your team members should feel uneasy when the build is red: they should know their reputation/quality is on the line
      • Team members should know red tests mean trouble: false reds must be kept at controllable levels
  7. Make your dev team live & breathe the build
      • Devs build tests for all main use cases and all significant error scenarios
      • Devs do all application testing by starting a test: no manual testing allowed, whatsoever!
        − Good sign: devs do not know where the link to a specific feature is shown
      • All application development «driven» by stopping a test with a breakpoint; test-first
      • Fully empowered devs delivering expected quality on time and budget is a key differentiator
  8. What can go wrong?
      • Too slow test runs: a >15-minute feedback loop is too slow, and means a significant cost increase
      • «Untestable» parts of the system: 100% functional coverage inspires confidence; 80% doesn't
      • Too many false red tests
        − 10 tests that each fail 5% of the time give roughly 40-50% red builds; totally destructive
        − Tolerable failure rate <= 0.01% for any single test: given 1000 tests, roughly 1 in 10 builds can still fail intermittently
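The flakiness arithmetic on slide 8 can be checked directly: the chance that at least one of n independent tests goes red is 1 - (1 - p)^n. A minimal sketch (class and method names are made up for illustration):

```java
public class FlakinessMath {
    // Probability that at least one of n independent tests fails,
    // given a per-test intermittent failure rate p: 1 - (1 - p)^n.
    static double redBuildProbability(int n, double p) {
        return 1.0 - Math.pow(1.0 - p, n);
    }

    public static void main(String[] args) {
        // 10 tests, each flaky 5% of the time: ~40% of builds go red.
        System.out.printf("10 tests @ 5%% flaky:     %.0f%% red builds%n",
                100 * redBuildProbability(10, 0.05));
        // 1000 tests held to a 0.01% rate: still roughly 1 in 10 builds red.
        System.out.printf("1000 tests @ 0.01%% flaky: %.0f%% red builds%n",
                100 * redBuildProbability(1000, 0.0001));
    }
}
```

This is why the slide calls a handful of 5%-flaky tests «totally destructive»: the per-test rates compound across the whole suite.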
  9. Too slow tests?
      • Use a grid (or SaaS): outsource/get as much hardware as you can
      • Let individual developers access the grid for running tests against their local workstation
  10. CI environment
      • Run tests non-stop during working hours
      • Build-status screens
      • CI support for build history down to test level
      • Socially acceptable to break the build
        − Use «social engineering» to control scope
        − Aggressive reverting of breaking changes; VCS flexibility important
      • Broken build is priority 1 for whoever owns the breaking commits
  11. Untestable parts of system
      • Remove impediments for testing: tests may need APIs that are not regular core business
        − Cancel order
        − Aborting transactions in odd states
        − Async processes
      • Discard test-unfriendly technologies, e.g. queue-based technology
      • Tests «consuming» test data: consider generating test data too
  12. Too many errors?
      • Selenium bugs
        − Stay close to the latest version
        − Fix the bug, submit a patch with a testcase
        − Work around (JS simulations)
        − Sacrifice speed for correctness
      • Test «problems»
        − Instrument JavaScript code
        − Instrument the application
      • Zero tolerance for intermittent issues
  13. Too many data errors? Strategy 1: look up data by characteristics
      • Make a «TestDataLocator» that identifies data for a test automatically:
        − findCustomerWithPrepaid
        − findCustomerWithUnpaidBills
        − findHighVolumeCustomer
      • The TestDataLocator can introduce randomness wrt. which data is returned
      • Using real (anonymized) production data, intermittent failures will reveal variations in your production data; the test/application should handle these variations
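A minimal sketch of what such a TestDataLocator could look like. Only the finder names come from the slide; the Customer shape and the in-memory data source are assumptions (a real locator would query the anonymized production data):

```java
import java.util.List;
import java.util.Random;
import java.util.function.Predicate;

// Sketch of the «TestDataLocator» idea: find test data by characteristics
// instead of hard-coding customer ids into tests.
public class TestDataLocator {
    public record Customer(String id, boolean prepaid, int unpaidBills, long monthlyVolume) {}

    private final List<Customer> customers; // stand-in for a real data source
    private final Random random = new Random();

    public TestDataLocator(List<Customer> customers) {
        this.customers = customers;
    }

    // Pick a random matching customer so repeated runs exercise data variations.
    private Customer findRandom(Predicate<Customer> criteria) {
        List<Customer> matches = customers.stream().filter(criteria).toList();
        if (matches.isEmpty()) throw new IllegalStateException("No test data matches criteria");
        return matches.get(random.nextInt(matches.size()));
    }

    public Customer findCustomerWithPrepaid()     { return findRandom(Customer::prepaid); }
    public Customer findCustomerWithUnpaidBills() { return findRandom(c -> c.unpaidBills() > 0); }
    public Customer findHighVolumeCustomer()      { return findRandom(c -> c.monthlyVolume() > 1_000_000); }
}
```

The randomness is deliberate: a test that sometimes gets a different matching customer will surface production-data variations that a fixed fixture would hide.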
  14. Too many errors (2)
      • Use JUnit categories to partition tests into groups by stability/feature
        − Tagging by primary back-end system is a nice strategy: @UsesDatabase, @UsesGeolocation, @UsesRemotePricingService
        − Any classification that makes sense from a domain perspective and can partition your domain
      • Instead of 1 big failing blob you now have 7 groups that are «always» green and 3 that are troublesome
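Real JUnit 4 categories use org.junit.experimental.categories.Category with marker types. As a dependency-free sketch of the same partitioning idea (the annotation and test-class names here are made up, modeled on the slide's tags):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Partition test classes by their back-end tag, so a flaky back-end only
// reddens its own group instead of the whole build.
public class TestPartitioner {
    @Retention(RetentionPolicy.RUNTIME) public @interface UsesDatabase {}
    @Retention(RetentionPolicy.RUNTIME) public @interface UsesRemotePricingService {}

    // Example test classes, tagged by primary back-end system.
    @UsesDatabase             static class CustomerLookupTest {}
    @UsesDatabase             static class BillingHistoryTest {}
    @UsesRemotePricingService static class PriceQuoteTest {}

    // Group test classes by the simple names of their annotations.
    public static Map<String, List<Class<?>>> partition(Class<?>... testClasses) {
        Map<String, List<Class<?>>> groups = new TreeMap<>();
        for (Class<?> c : testClasses) {
            for (var annotation : c.getAnnotations()) {
                groups.computeIfAbsent(annotation.annotationType().getSimpleName(),
                        k -> new ArrayList<>()).add(c);
            }
        }
        return groups;
    }
}
```

With JUnit itself you would instead run the groups separately via the Categories runner (or Surefire's groups/excludedGroups), but the payoff is the same: separate green/red status per domain group.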
  15. Strategy 2: Stubbing
      • Replace live systems with hard-coded fakes
        − Replace one or more external systems
        − Increase reliability, reduce complexity
        − Split the complexity problem into parts
      • «100%» confidence in some parts of the stack can offset the «holes» created by stubs
  16. Architecture for stubbing: [diagram: Web pages → «Core business logic» / Model → Integration API, backed by either the real integration impl or a stub impl]
  17. Test data for stubs: build a stub model that is sufficiently complex to handle the domain.

      interface DataSet {
          Login getLogin();
          Customer getCustomer();
          Address getAddress();
          List<Subscriptions> getSubscriptions();
          List<Bill> getUnpaidBills();
          List<Order> getMerchandiseOrders();
      }
  18. Stubs with datasets: [diagram: logging in to a stubbed application can bind a specific dataset to the session; stub implementations behind the Integration API can then read the session's ActiveDataSet]
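One way the session binding on slide 18 could look. The session abstraction, the static registry, and the method names here are assumptions for illustration, not the presenter's actual code; a real web app would hang the dataset off the HTTP session:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: the stubbed login binds a chosen DataSet to the session, and stub
// implementations behind the integration API read it back later.
public class SessionDataSets {
    public interface DataSet { String customerName(); }

    // One active dataset per session id.
    private static final Map<String, DataSet> ACTIVE = new ConcurrentHashMap<>();

    // Called at login: the test chooses which dataset this session sees.
    public static void bind(String sessionId, DataSet dataSet) {
        ACTIVE.put(sessionId, dataSet);
    }

    // Called by stub implementations behind the integration API.
    public static DataSet getSessionDataSet(String sessionId) {
        DataSet ds = ACTIVE.get(sessionId);
        if (ds == null) throw new IllegalStateException("No dataset bound to session " + sessionId);
        return ds;
    }
}
```

The point of binding at login is that each browser session in a parallel test run can see its own, independent test data through the same stubs.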
  19. Stubs

      public class BillingStub implements BillingService {
          private int i;

          public List<Bill> getBills() {
              if (i++ % 10 == 0) {
                  throw new RuntimeException("Every now and then we fail");
              }
              return getSessionDataSet().getBills();
          }
      }
  20. Stubbing antipatterns
      • Keep it all at one layer
      • Avoid stubs in the «core» layer
      • Datasets should reflect the domain
      • [diagram: Web pages → «Core business logic» / Model → Integration API → stub impl]
  21. Normalize stubs

      interface DataSet {
          Login getLogin();
          Customer getCustomer();
          Address getAddress();
          List<Subscriptions> getSubscriptions();
          List<Bill> getUnpaidBills();
          List<Order> getMerchandiseOrders();
      }
  22. Coverage: near 100% of core business logic and web-layer logic [diagram: Web pages → «Core business logic» / Model → Integration API → integration impl / stub impl]
  23. Coverage: [diagram: as slide 22, plus integration tests covering the real integration impl beneath the Integration API]
  24. Live tests
      • A small set of tests running on the full live stack: smoke test the «whole thing»
      • Reduce detail because of general flakiness
      • These are hard to maintain, and the number of tests should be as low as comfortable
  25. Coverage: [diagram: the combined picture; live tests across the full stack, stubbed tests over web/core/model, integration tests over the integration impl]
  26. «Full» coverage rules
      • Approximations work well! Don't be afraid of the «holes»
      • Keep a clear record of real defects missed due to approximations
      • Communicate clearly where your holes are
        − Manual test in weak areas... or maybe not; let your track record decide?
      • Your tests will outperform /any/ manual testing in the long run
  27. Questions? Kristian, @krosenvold