
Applying a Methodical Approach to Website Performance



When addressing website performance issues, developers often jump to conclusions, focusing on perceived causes rather than uncovering the real ones through research.

Mitchel Sellers will show you how to approach website performance issues with a level of consistency that ensures they're properly identified and resolved so you'll avoid jumping to conclusions in the future.

You can watch the webinar recording here:

Published in: Software


  1. Website Performance: Understanding the Importance and the Road to Implementation
  2. About Your Speaker
     • Mitchel Sellers
     • CEO @ IowaComputerGurus Inc.
     • Microsoft MVP, DNN MVP, ASPInsider
     • Experienced trainer & educator
     • Contact Information:
       • Twitter: @mitchelsellers & @IowaCompGurus
       • Office: 515-270-7063
  3. Disclaimer/Disclosure
     • The tools discussed in this presentation are ones we have found helpful in our years of performance-tuning applications for customers.
     • We are not being compensated for recommending or showing the tools discussed today.
     • Without the use of tools, your work will be FAR more involved. Feel free to use your own favorites; we are always looking for recommendations as well.
  4. Watch the webinar recording here: /webinar-recording-website-performance
  5. Agenda
     • Why do we care about performance?
     • What indicates successful performance?
     • Understanding how webpages work
     • Visualizing complete page load
     • Quick fixes & tools
     • Diagnosing issues
     • Applying load
     • Adopting change
  6. Why Do We Care About Performance?
     • Search Engine Optimization
       • Google ranking
     • User Perception
       • "Do I really want to work with this business?"
     • Devices
       • Differing network abilities (throttled vs. free, etc.)
  7. Indicators of a Well-Performing Site
     • Not an exact science, but a few key metrics can be articulated:
       • Anything more than a 250 ms response triggers warnings from Google PageSpeed tools
       • User dissatisfaction starts around the 2-3 second mark
       • Various studies show a > 25% increase in abandonment after 6 seconds
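The thresholds on this slide are easy to check directly. Below is a minimal Python sketch that times one full GET request and buckets the result against the talk's rough numbers; the 250 ms cutoff comes from the slide, the 2.5 s cutoff is an assumed midpoint of the slide's "2-3 second" range, and `time_request`/`classify` are hypothetical helper names, not part of any tool mentioned in the talk.

```python
import time
import urllib.request

# Thresholds taken from the slide: > 250 ms triggers Google PageSpeed
# warnings; user dissatisfaction starts around the 2-3 second mark
# (2.5 s used here as an assumed midpoint).
WARN_MS = 250
SLOW_MS = 2500

def classify(elapsed_ms):
    """Bucket a response time against the talk's rough thresholds."""
    if elapsed_ms <= WARN_MS:
        return "ok"
    if elapsed_ms <= SLOW_MS:
        return "warning"
    return "slow"

def time_request(url, timeout=10):
    """Return (elapsed_ms, status) for a single GET of `url`."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # drain the body so we time the full response
        status = resp.status
    elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms, status
```

A single sample is noisy; in practice you would run `time_request` many times and classify the median or 95th percentile rather than one measurement.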
  8. Well-Performing Site Indicators
     • User-experience focused, rather than true "metrics" focused
     • Minimal requests needed to render the website
  9. Understanding How Webpages Work
     • Technical or not, understanding the order is key
     • Logical processing:
       • Request/response for the main HTML
       • Request/response for individual assets, after the HTML is processed
       • Request/response for assets linked within other assets
     • Limitations:
       • Current web browsers can only request 4-10 items per domain at a time
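The request cascade described above can be made concrete with only Python's standard library. This sketch parses an HTML document for the assets a browser would fetch next and counts them per domain, which matters because browsers only open a handful of parallel connections to each host; the class and function names are illustrative, not from the talk.

```python
from collections import Counter
from html.parser import HTMLParser
from urllib.parse import urlparse

class AssetCollector(HTMLParser):
    """Collect the URLs a browser would request after parsing the HTML."""
    SRC_TAGS = {"img", "script"}

    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in self.SRC_TAGS and attrs.get("src"):
            self.assets.append(attrs["src"])
        elif tag == "link" and attrs.get("href"):
            self.assets.append(attrs["href"])

def requests_per_domain(html):
    """Count queued requests per domain; with only 4-10 parallel
    connections per host, a long queue on one domain serializes loads."""
    parser = AssetCollector()
    parser.feed(html)
    return Counter(urlparse(url).netloc for url in parser.assets)
```

Running this against a page with 317 requests (like the KCCI example on the next slides) would show immediately which domains are the bottleneck.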
  10. Watch the webinar recording here: /webinar-recording-website-performance
  11. Example Loading
  12. Example CMS-Driven Site (Moderate Speed)
  13. Compare & Contrast

      Metric               KCCI          Moderate CMS   High Perf CMS
      Total Page Load      8.0 seconds   2.3 seconds    1.7 seconds
      Total Page Size      5.43 MB       1.97 MB        817 KB
      Total Requests       317           59             26
      Google PageSpeed     90/100        94/100         95/100
      Yahoo! YSlow         E (58%)       C (70%)        A (92%)
      Time to First Byte   112 ms        223 ms
  14. Quick Fixes & Tools
     • Aggregates Google PageSpeed, YSlow, and other metrics
     • Common trouble areas (often user-generated):
       • Non-compressed image files (large page load)
       • Non-combined JS files (large number of requests)
       • Large hidden HTML elements
       • Lack of "static file caching"
     • Fixing these common areas often results in a 40-80% improvement
     • Internal networks need to use other tools
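Two of the trouble areas above, too many JS requests and uncompressed transfers, can be sketched in a few lines of Python. This is a toy stand-in for a real bundler or web-server compression setting, not a recommendation from the talk, and the function names are made up for illustration.

```python
import gzip

def bundle_scripts(sources):
    """Concatenate several JS sources into one file. The semicolon at
    each boundary guards against a file that ends mid-expression, so
    N requests collapse into one."""
    return "\n;".join(sources)

def compressed_size(text):
    """Byte size after gzip, approximating what a web server with
    static-file compression enabled would send over the wire."""
    return len(gzip.compress(text.encode("utf-8")))
```

In production you would let the web server (or a build pipeline) do both steps; this sketch only shows why they shrink request count and payload size.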
  15. Additional Quick Fixes & Tools
     • Content Delivery Network
       • Reduces load on your server
       • Serves content closer to the user
       • Can improve throughput by exponential factors
     • Application Caching
       • Keep the results of costly DB queries
     • Image Sprites
       • Combine multiple smaller images
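The "application caching" bullet, keeping costly DB query results around instead of re-running them, can be sketched as a small time-to-live cache decorator. This is a generic illustration in Python, not the caching API of any platform mentioned in the talk, and `ttl_cache` is a made-up name.

```python
import functools
import time

def ttl_cache(seconds):
    """Cache a function's results for `seconds`; a stand-in for
    application-level caching of costly DB queries."""
    def decorator(fn):
        store = {}  # args -> (value, timestamp)

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[1] < seconds:
                return hit[0]  # fresh enough: skip the costly call
            value = fn(*args)
            store[args] = (value, now)
            return value
        return wrapper
    return decorator
```

The TTL is the key design choice: too short and the database still gets hammered, too long and users see stale data. Start from how often the underlying data actually changes.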
  16. Understanding Content Delivery Networks
     • Pull-type CDN
       • Pro: Requires fewer changes to your application and can often be implemented quickly
       • Con: Can make very "unusual" things happen if just thrown in and left to run
       • Examples: CloudFlare, Incapsula
     • Traditional CDN
       • Pro: Does not impact areas that you don't expect
       • Con: Requires manual intervention
       • Examples: Amazon S3, Windows Azure
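The "manual intervention" a traditional CDN requires is largely this: uploading assets and then rewriting their URLs to point at the CDN hostname. The URL-rewriting half can be sketched in a few lines; `to_cdn` and the hostnames are hypothetical examples, not part of any CDN's API.

```python
from urllib.parse import urlparse, urlunparse

def to_cdn(url, cdn_host):
    """Point a same-origin asset URL at a CDN hostname. This is the
    manual rewrite step a traditional (push) CDN requires, which a
    pull-type CDN does for you transparently."""
    parts = urlparse(url)
    return urlunparse(parts._replace(netloc=cdn_host))
```

A side benefit of serving assets from a second hostname: the browser's 4-10 connections-per-domain limit applies per host, so splitting assets across domains raises effective parallelism.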
  17. Diagnosing Issues
     • Regular monitoring of all application metrics:
       • Web & DB server CPU
       • Web & DB server memory usage
       • Web request throughput
       • Web transaction times
       • SQL activity
     • Tools: Google Analytics; New Relic, CopperEgg, Application Insights, etc.
  18. Applying Load
     • Static examples are great, but they don't take heavy load or caching into consideration
     • A load application needs to meet key requirements:
       • Simulate real traffic
       • Behave similarly to actual users
       • Be realistic for the expected loads
     • Hardware environments need to be similar
     • Expected behavior: load should be handled in a stable manner; as requests go up, throughput and bandwidth should go up
  19. Load Testing Tools
     • Ensure that the tool matches what your browser does
     • Validate that you know how they "simulate" users
  20. Metrics in Load-Based Scenarios
     • Possible targets or minimums:
       • RPS (requests per second)
       • Concurrent users
       • Average server response
       • % failed requests
       • Queue length
     • The real goal is dependent on your environment
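A load run ultimately reduces to the metrics listed above. The sketch below uses only Python's standard library to fire concurrent GETs with a thread pool and then summarize the samples into RPS, average response time, and % failed; it is a toy driver under the slide's definitions, not a replacement for a real load-testing tool, and all function names are made up.

```python
import concurrent.futures
import time
import urllib.request

def hit(url, timeout=10):
    """One simulated user request; returns (elapsed_seconds, ok)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
            ok = resp.status < 400
    except Exception:
        ok = False  # timeouts and connection errors count as failures
    return time.perf_counter() - start, ok

def run_load(url, total_requests, concurrency):
    """Fire `total_requests` GETs using `concurrency` worker threads."""
    with concurrent.futures.ThreadPoolExecutor(concurrency) as pool:
        return list(pool.map(lambda _: hit(url), range(total_requests)))

def summarize(samples, wall_clock_seconds):
    """Reduce (elapsed, ok) samples to the slide's headline metrics."""
    times = [t for t, _ in samples]
    failed = sum(1 for _, ok in samples if not ok)
    return {
        "rps": len(samples) / wall_clock_seconds,
        "avg_response_s": sum(times) / len(times),
        "pct_failed": 100.0 * failed / len(samples),
    }
```

Note the slide's "expected behavior" from earlier: as you raise `concurrency`, RPS should rise roughly in step; an RPS plateau or a climbing `pct_failed` marks the point where the system stops handling load in a stable manner.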
  21. BudgetChallenge Use Case Example
     • BudgetChallenge was going to be featured on the Today Show, as well as other national outlets. It had also formed a partnership with H&R Block and needed to guarantee performance with up to 75,000 concurrent users.
     • Hardware situation:
       • 6 virtual web servers (18 GB RAM each)
       • 1 SQL Server cluster (64 GB RAM and 32 cores)
     • Starting performance: the initial load test resulted in substantial performance failure at ~600-700 concurrent users
     • Ending performance: a successful test with 73,259 concurrent users before network device failure
     • Total time: 2 weeks
  22. Load Test Trends
     • Two examples, before and after optimization (before is 100 users, after is 75,000 users, same hardware)
  23. Adopting Change in Performance
     • Proactive is best:
       • Start with a metric and validate it ALL the way through development
       • Adjust as needs change
     • Reactive is most common:
       • Strongly guard against jumping to conclusions; it is very easy to skip a key metric
       • ONLY change 1 thing at a time and re-validate as you move on
  24. Watch the webinar recording here: /webinar-recording-website-performance
  25. Questions?