This document discusses how an experimentation program manager, Kristen, can empower teams within her organization to make data-driven decisions through experimentation at scale. It outlines three key challenges Kristen faces: teams cannot measure what matters to them, cannot leverage experiment results, and lack confidence in results. The document then presents solutions from Optimizely's data platform to address each challenge by allowing teams to: 1) measure custom metrics that are important to their work, 2) analyze results through their preferred tools and workflows, and 3) gain confidence in results by reducing biases, discrepancies and false discoveries. This will help Kristen grow and maintain adoption of her organization's experimentation program.
5. Bing had 8 years of consecutive market share growth.
Source: Comscore
6. “The growth of experimentation is the major reason Bing is profitable and its share of U.S. desktop searches nearly tripled.”
Ronny Kohavi, GM of Analysis and Experimentation, Microsoft
Source: Comscore
7. Old Reality vs. Culture of Experimentation
Old Reality: Top-Down Innovation · Embrace Success · Make Decisions · Follow Orders
Culture of Experimentation: Bottom-Up Innovation · Embrace Failure · Validate Decisions · Follow Data
8. Scaling the Experimentation Program
Experimentation Hero: 10s of experiments / year
Experimentation Program: 100s of experiments / year
Culture of Experimentation: 1,000s of experiments / year
11. Events · Audiences · Metrics · Results · Stats Engine
Events: process signals across customer experiences
Audiences: target user groups with common attributes
Metrics: measure the impact of hypotheses
Results: understand the impact of hypotheses
Stats Engine: validate hypotheses with statistical confidence
6.5+ billion events daily · 1.2+ billion experiences daily · 50+ million unique users daily
17. Can't measure what matters to them → Reduced adoption of the program
Can't leverage the results → Experimentation is not useful
Don't have confidence in the results → Losing trust in experimentation
18. “I want to support the measurement needs of all teams across the organization”
1. Can't measure what matters
2. Can't leverage the results
3. No confidence in the results
24. Sources
X Event API: Bring your own signals into experimentation no matter where they live.
[Diagram: Web, Mobile & Full-Stack sources reach Optimizely X via the Snippet / SDKs; Offline Conversions, Business Intelligence, and Secure Environments reach it via the Event API.]
26. Metrics Builder
Measure the metrics that matter most to your business with greater precision and flexibility.
• Intuitive interface
• New metric types
• Metrics calculated in real-time
28. Metrics
Metric | Description | Examples
Conversions | Measure distinct user actions | checkouts per visitor, page views per visitor
Revenue | Measure transaction amounts | revenue per visitor (RPV), average order value (AOV)
Custom Value | Measure any numerical goal | page load time per visitor, purchased items per order
Session Duration | Measure engagement duration | avg. time spent on news article pages, avg. time spent on site
Bounce & Exit Rate | Measure user abandonment | bounce rate on homepage, exit rate on checkout page
Availability notes: (in beta November 2017), (2018)
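The metric types in the table above all reduce to simple arithmetic over raw events. A minimal sketch of that mapping, assuming a made-up event shape (this is illustrative only, not Optimizely's implementation):

```javascript
// Sketch: how metric types reduce to arithmetic over raw events.
// The event shape here is a hypothetical stand-in for illustration.
const events = [
  { visitorId: "a", type: "purchase", revenue: 30.0 },
  { visitorId: "a", type: "purchase", revenue: 10.0 },
  { visitorId: "b", type: "pageview" },
  { visitorId: "c", type: "purchase", revenue: 20.0 },
];

const visitors = new Set(events.map((e) => e.visitorId)).size; // 3 unique visitors
const purchases = events.filter((e) => e.type === "purchase");

// Conversions: distinct user actions per visitor
const checkoutsPerVisitor = purchases.length / visitors;

// Revenue: revenue per visitor (RPV) and average order value (AOV)
const totalRevenue = purchases.reduce((sum, e) => sum + e.revenue, 0);
const rpv = totalRevenue / visitors;         // 60 / 3 = 20
const aov = totalRevenue / purchases.length; // 60 / 3 = 20
```

A Custom Value metric works the same way, just summing any numeric field (page load time, items per order) instead of revenue.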
29. Teams can measure what matters to them
• Identify the signals and metrics teams need
• Incorporate them into experimentation
• Scale by enabling everyone to measure what matters to them
1. Can't measure what matters
2. Can't leverage the results
3. No confidence in the results
30. “I want teams to have flexibility in the way they analyze the results”
1. Can't measure what matters
2. Can't leverage the results
3. No confidence in the results
32. Analysis Methods
Results Export: Expand your analysis workflow with a variety of new and improved methods to export your results.
• Results API
• CSV Export
• Raw Data Export
• Analytics Integration
33. Analysis Methods
GET /v2/experiments/{experiment_id}/results

...
"metrics": [
  {
    "name": "items in cart",
    "results": {
      "123456": {
        "is_baseline": false,
        "lift": {
          "confidence_interval": [1.152505, 2.612566],
          "is_significant": true,
          "significance": 0.9885,
          "value": 0.44,
          "visitors_remaining": 2601
        }
      }
    }
  }
]
...
Results API → Your Application: Custom Alerts, Automated Actions, Aggregated Reporting
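The "Custom Alerts" use case above can be sketched as a small consumer of the Results API payload. Field names follow the example response shown on this slide; the alerting side (`notifyTeam`) is a hypothetical placeholder for your own channel:

```javascript
// Sketch: scan a Results API metric for significant, non-baseline
// variations and surface them as custom alerts. Field names match the
// example response above; `notifyTeam` is a hypothetical placeholder.
function significantLifts(metric) {
  return Object.entries(metric.results)
    .filter(([, r]) => !r.is_baseline && r.lift.is_significant)
    .map(([variationId, r]) => ({
      variationId,
      lift: r.lift.value,
      significance: r.lift.significance,
    }));
}

// Example payload, abbreviated from the response above.
const metric = {
  name: "items in cart",
  results: {
    "123456": {
      is_baseline: false,
      lift: { is_significant: true, significance: 0.9885, value: 0.44 },
    },
  },
};

const winners = significantLifts(metric);
// e.g. winners.forEach((w) => notifyTeam(`${metric.name}: variation ${w.variationId} lifted ${w.lift}`));
```

The same loop could drive automated actions (roll out the winner) or feed aggregated reporting instead of alerts.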
35. Analysis Methods
Develop your own custom integration to bring experimentation results into your analytics tool.
[Diagram: Optimizely + your analytics tool, connected through the Custom Integration Framework]
36. Teams can analyze the results
• Understand how teams are doing analysis
• Connect experiment data with their workflows
• Scale by enabling everyone to analyze
1. Can't measure what matters
2. Can't leverage the results
3. No confidence in the results
37. “I want tools our teams can trust, and faith that the results will materialize”
1. Can't measure what matters
2. Can't leverage the results
3. No confidence in the results
38. Top reasons teams lose confidence in experimentation:
1. False Discoveries
2. Skewed Results (Outliers / Bots)
3. Data Discrepancies
40. Results Confidence
Stats Engine: Validate decisions with statistical confidence
• Error control for multiple hypothesis testing
• Prevents bad decisions due to ‘peeking’
• Configurable risk tolerance
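To see why error control for multiple hypothesis testing matters: test enough metrics at a naive 0.05 threshold and some will look significant by chance. Stats Engine's actual method is sequential and proprietary; the sketch below instead uses the standard Benjamini–Hochberg procedure, purely to illustrate the multiple-testing problem being solved:

```javascript
// Benjamini-Hochberg procedure: a textbook way to control the false
// discovery rate across many simultaneous tests. This is NOT Stats
// Engine's algorithm -- just an illustration of the problem it solves.
function benjaminiHochberg(pValues, alpha = 0.05) {
  const ranked = pValues
    .map((p, i) => ({ p, i }))
    .sort((a, b) => a.p - b.p);
  const m = ranked.length;
  let cutoff = -1;
  ranked.forEach(({ p }, k) => {
    // keep the largest rank k whose p-value sits under the BH line
    if (p <= ((k + 1) / m) * alpha) cutoff = k;
  });
  const significant = new Set(ranked.slice(0, cutoff + 1).map(({ i }) => i));
  return pValues.map((_, i) => significant.has(i));
}

// Four metrics tested together: naively, 0.04 and 0.03 both clear
// alpha = 0.05, but after correction only the strongest survives.
benjaminiHochberg([0.001, 0.04, 0.03, 0.8]);
// -> [true, false, false, false]
```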
41. Top reasons teams lose confidence in experimentation:
1. False Discoveries
2. Skewed Results (Outliers / Bots)
3. Data Discrepancies
46. Outlier Filtering
Improve the integrity of your decisions by filtering outliers from the results.*
• Enabled on demand
• Calculated in real-time
• Configurable thresholds
Results Confidence
*Availability: Coming in beta November 2017. Compatible with revenue and custom value metrics.
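The slide only says the thresholds are configurable, so the percentile cap below is an assumed example, not Optimizely's actual rule. It shows why filtering matters for revenue metrics, where one bulk order can dominate the average:

```javascript
// Sketch: threshold-based outlier filtering for a revenue metric.
// The percentile cutoff is an assumption for illustration only.
function filterOutliers(values, percentile = 0.99) {
  const sorted = [...values].sort((a, b) => a - b);
  // cap at the value sitting at the chosen percentile rank
  const cap = sorted[Math.floor(percentile * (sorted.length - 1))];
  return values.filter((v) => v <= cap);
}

// One 5000-unit bulk order would otherwise swamp revenue per visitor.
filterOutliers([20, 25, 30, 22, 5000], 0.8);
// -> [20, 25, 30, 22]
```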
47.
48. Bot Filtering
Enhanced bot filtering to improve the fidelity of the results
• Complies with the industry standard (IAB/ABC list) for web analytics
• Bots automatically filtered from the results
Results Confidence
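Mechanically, list-based bot filtering amounts to matching user agents against known patterns and dropping their events before results are computed. The real IAB/ABC list is licensed and far larger; the patterns below are illustrative stand-ins:

```javascript
// Sketch: user-agent-based bot filtering. These patterns are
// illustrative stand-ins for the licensed IAB/ABC spiders-and-bots
// list; matching events would be excluded from results.
const botPatterns = [/bot/i, /crawler/i, /spider/i];

function isBot(userAgent) {
  return botPatterns.some((re) => re.test(userAgent));
}

isBot("Mozilla/5.0 (compatible; Googlebot/2.1)"); // true, filtered out
isBot("Mozilla/5.0 (Windows NT 10.0) Chrome/118"); // false, kept
```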
49. Top reasons teams lose confidence in experimentation:
1. False Discoveries
2. Skewed Results (Outliers / Bots)
3. Data Discrepancies
50. Numbers don't match! Leading factors of analytics discrepancies:
1. Product Differences
2. Implementation Bugs
3. Event Timing Issues
51. [Diagram: the first Optimizely event and the first 3rd-party analytics event each increment their own tool's visitor counter by +1]
Results Confidence
52. [Diagram: because the two first events fire at different times, the tools can count visitors differently (+2 vs. +1), producing a discrepancy]
Results Confidence
53. Hold/Send Events API
Mitigate discrepancies between Optimizely and your web analytics.

window.optimizely.push({type: "holdEvents"});
window.optimizely.push({type: "sendEvents"});

holdEvents: instructs Optimizely to hold the events in a browser queue.
sendEvents: instructs Optimizely to release the events from the browser queue.
Results Confidence
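A usage sketch for the two calls above: hold events as early as possible, then release them once the third-party analytics library is ready, so both tools see the same first pageview. A plain array stands in for the snippet's command queue so the sketch runs outside a browser; in production, `window.optimizely` is provided by the Optimizely snippet itself, and the readiness hook is your own:

```javascript
// Sketch: coordinating holdEvents/sendEvents with a 3rd-party
// analytics load. The array fallback is a stub for the snippet's
// command queue; onAnalyticsReady is a hypothetical hook.
const optimizely = (typeof window !== "undefined" && window.optimizely) || [];

// Hold events immediately, before any event can fire.
optimizely.push({ type: "holdEvents" });

// Later, once the analytics library has loaded and counted the
// visitor, release the queued events.
function onAnalyticsReady() {
  optimizely.push({ type: "sendEvents" });
}
onAnalyticsReady();
```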
54. [Diagram: with events held and released together, Optimizely and the other analytics tool record the first event at the same time (visitor counter: +1 in each)]
Results Confidence
55. Up to 90% reduction in discrepancies related to event timing.
Results Confidence
56. Teams have confidence in the results
• Identify sources of bias or discrepancies
• Filter them out from the results
• Scale by enabling everyone to analyze the results with confidence
1. Can't measure what matters
2. Can't leverage the results
3. No confidence in the results
57. Decisions at Scale
• Grow adoption by enabling teams to measure what matters to them
• Grow adoption by enabling teams to analyze using their own workflows
• Maintain adoption by giving teams confidence in the results