Monil Shah from Clover presented on how Clover uses experimentation to drive digital growth. Clover started in the Walk phase, running small experiments to test changes and validate hypotheses. The team then increased its experiment velocity by prioritizing high-impact experiments and defining success metrics upfront. Clover also developed processes to conclude experiments early when they were clearly winning or losing, and to iterate based on experiment learnings. Finally, Clover evangelized experimentation across the company by finding executive sponsors, involving multiple teams, and educating and incentivizing teams to experiment.
3. Housekeeping
● Customize the widgets on your page to your preference
● This webinar is recorded and you will receive the link with the slides in the next few days
● We will have time for questions! Please submit them in the Q&A box on the left side of the screen
4. Agenda
1. Welcome & Intros
2. Experimentation for Clover’s Digital Growth Team
3. Q&A
5. What We Do
● Replaces digital guesswork with evidence-based optimization
● Leader in digital experimentation & progressive delivery: a unified platform for A/B/n testing, feature flagging, and progressive rollouts across the entire customer journey
● World’s #1 digital laboratory: Optimizely customers have run over 1.8M experiments on the platform since 2010
● Built for the whole team: the only solution built for your entire team of growth marketers, product managers, engineers, and data analysts
● Measure customer data from real users in production environments, so that every experience delivered is high-quality and high-value
7. Optimizely Platform
Built for product, engineering, growth, and data teams.
Progressive Delivery & Experimentation
● Full Stack: zero-latency feature flagging and in-code experimentation implemented via 12 SDKs or as a microservice (beta) (an illustrative bucketing sketch follows this list)
● Web: JavaScript snippet experimentation enabling optimization through an easy-to-use WYSIWYG visual editor
● Rollouts: standalone, free feature flagging
● Performance Edge: faster experiments processed at the edge (CDN)
Real-time Data & Statistics: Stats Engine, data integration, flexible analysis, and open APIs
Enterprise Expertise: enablement services, ongoing support, training & documentation, governance, security, and scale
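The Full Stack item above describes zero-latency, in-code experimentation delivered through SDKs. As an illustration only (this is not the Optimizely SDK's API), the sketch below shows the core idea behind zero-latency bucketing: hash a user ID deterministically so every service assigns the same variation without a network call. The function name, experiment key, and user ID are all hypothetical.

```python
# Illustrative only: a generic, deterministic bucketing function, NOT the
# Optimizely SDK. Hashing the user ID means every host computes the same
# assignment locally, with no network call (the "zero-latency" idea).
import hashlib

def bucket(user_id: str, experiment_key: str,
           variations=("control", "treatment")) -> str:
    """Deterministically map a user to a variation for a given experiment."""
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]

# Hypothetical usage: same answer on every call and on every host.
print(bucket("merchant-123", "flex_price_test"))
```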
11. Growth is a Team Sport
Goal: Connecting small businesses to the right Clover products to help them run and grow their business
North Star: Number of small businesses actively processing payments with Clover
[Chart: conversions per month over time, annotated where Clover started experimentation and where it scaled experimentation]
12. Walk - Run - Grow
● Walk: Design winning experiments
● Run: Increase your velocity of running experiments
● Grow: Evangelize experimentation across the company
14. Good vs Bad Experiment
Based on the Reforge blog article "Good Experiment, Bad Experiment" by Fareed Mosavat (former Director of Product at Slack):
● Good experiments drive impact by solving real user problems
● Good experiments define success up-front
● Good experiments ask "What do we do next with what we learned?"
15. Solve Real Customer Problems
Hypothesis: A new product page that makes the different device configurations clearer will improve click-throughs and checkout.
Rationale: Survey data from the discovery process showed that users wanted to understand what each device is best for in order to make their choice, and many users were disappointed when they couldn't find more information on the product pages.
[Screenshots: control vs. variation product page]
Result: The variation led to a 20-70% lift in click-through to the product detail pages.
16. Define Success Upfront
Hypothesis: If we test a lower price point of $B, we can gauge the price sensitivity of merchants by measuring the lift in Flex device sales.
Test plan:
● Traffic split: 50/50 between Control ($A) and Treatment ($B)
● Success: a minimum 35% lift needed to have a positive revenue impact
● Projection: expected to run for a minimum of 4 weeks to measure metric (1a) at 90% confidence, assuming a 35% lift (see the sample-size sketch after this slide)
Primary metrics:
● (1a) Checkout complete
● (1b) Applications complete
Secondary metrics:
● (2a) Add to cart for Flex
● (2b) Add to cart & checkout complete for all other devices
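As a rough sanity check of a projection like the one above, the sketch below estimates a fixed-horizon sample size for a two-proportion test at 90% confidence with a 35% relative lift. The baseline conversion rate and weekly traffic figures are invented placeholders (the deck does not disclose them), and Optimizely's sequential Stats Engine computes significance differently, so treat this only as a back-of-the-envelope estimate.

```python
# Back-of-the-envelope duration estimate for the test plan above.
# Baseline rate and weekly traffic are assumed placeholders, not Clover data.
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(baseline_rate, relative_lift, alpha=0.10, power=0.80):
    """Approximate visitors needed per arm for a two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided, 90% confidence
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

n = sample_size_per_arm(baseline_rate=0.02, relative_lift=0.35)  # assumed 2% baseline
weekly_visitors_per_arm = 2_000                                  # assumed traffic
print(f"~{n} visitors per arm, roughly {ceil(n / weekly_visitors_per_arm)} weeks")
```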
17. Iterate based on learnings
Result: 2x lift in conversions compared to the Shop experience
● Hypothesis 1: More intuitive interaction design
● Hypothesis 2: Copy changes to be more inclusive of businesses
19. Prioritization
Adjustable, weighted model based on eight individual factors (the PILL framework); a rough scoring sketch follows this list.
POTENTIAL
● Sample size required: conversion rate and MDE
● Time to stat sig: traffic volume and experiment traffic allocation
IMPACT
● Conversion quality: potential to directly impact lead creation
● Internal drivers: importance to the business
● Adoption: potential to be implemented onsite
LEVEL OF EFFORT
● Engineering effort: no code vs. low code vs. totally custom
● Creative effort: leveraging assets vs. augmenting assets vs. creating assets
LOVE
● Channeling the customer: level of customer delight
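The weights, the 1-5 rating scale, and the example backlog entries below are assumptions for illustration; the deck only names the eight factors and four PILL categories. This is a minimal sketch of what an adjustable, weighted scoring model could look like.

```python
# Minimal sketch of an adjustable, weighted PILL-style score.
# Weights, the 1-5 scale, and the example ideas are assumed, not from the deck.
PILL_WEIGHTS = {
    "sample_size_required": 1.0,  # Potential
    "time_to_stat_sig":     1.0,  # Potential
    "conversion_quality":   1.5,  # Impact
    "internal_drivers":     1.0,  # Impact
    "adoption":             1.0,  # Impact
    "engineering_effort":   1.0,  # Level of Effort (higher rating = less effort)
    "creative_effort":      0.5,  # Level of Effort (higher rating = less effort)
    "customer_delight":     1.0,  # Love
}

def pill_score(ratings):
    """Weighted sum of 1-5 ratings, one rating per PILL factor."""
    return sum(PILL_WEIGHTS[factor] * ratings[factor] for factor in PILL_WEIGHTS)

backlog = {
    "new-product-page": {factor: 4 for factor in PILL_WEIGHTS},  # placeholder ratings
    "pricing-test":     {factor: 3 for factor in PILL_WEIGHTS},
}
for idea, ratings in sorted(backlog.items(), key=lambda kv: -pill_score(kv[1])):
    print(idea, pill_score(ratings))
```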
20. Decision Criteria
Actions depend on how long the experiment runs before a clear result appears (2, 4, or 6 weeks); a sketch of these criteria as a simple lookup follows the list.

Hard winner at ~2 weeks:
1. Declare the test over
2. Launch as winner
3. Explore why the test performed so well
4. Re-prioritize the backlog based on this result

Hard winner at 4-6 weeks: the same steps, plus
5. Explore what factors caused this experiment to declare significance so late

Hard loser at ~2 weeks:
1. Stop the test
2. Analyze the results
3. Explore how future experiments in the backlog should be changed to avoid this scenario

Hard loser at 4-6 weeks: the same steps, plus
4. Explore why the result took so long to show a 'hard loser' outcome and what changed over time; stop the test and declare it a loser
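One way to read the criteria above is as a lookup from (result, weeks running) to a checklist. The sketch below is only a literal restatement of the list; how a result gets classified as a "hard winner" or "hard loser" in the first place is assumed to happen elsewhere.

```python
# Sketch: the decision-criteria list above as a lookup. Classification of a
# result as "hard_winner" / "hard_loser" is assumed to happen elsewhere.
def decision_steps(status: str, weeks_running: int) -> list:
    if status == "hard_winner":
        steps = [
            "Declare the test over",
            "Launch as winner",
            "Explore why the test performed so well",
            "Re-prioritize the backlog based on this result",
        ]
        if weeks_running > 2:
            steps.append("Explore what factors caused significance to arrive so late")
        return steps
    if status == "hard_loser":
        steps = [
            "Stop the test",
            "Analyze the results",
            "Explore how future backlog experiments should change to avoid this scenario",
        ]
        if weeks_running > 2:
            steps.append("Explore why the 'hard loser' result took so long to show")
        return steps
    return ["Keep running, or conclude early as inconclusive"]

print(decision_steps("hard_winner", weeks_running=6))
```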
21. Inconclusive Experiment
● Variation 1: 10.63% submit rate; Variation 2: 10.76% submit rate
● Previous 12 days: 3,459 visits (0 / 1,702 / 1,757), 0% statistical significance
● Early results were inconclusive; reaching statistical significance would have needed an additional ~100 days
● Concluded early: inconclusive in form completion rates
● Variation 2 was set as the control and tested against a higher-contrast version of the page: Connect to Sales (visual form page)
A quick significance check on these numbers follows below.
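To see why this experiment was so far from significance, the sketch below runs a plain two-proportion z-test on the numbers above. The visit counts come from the slide; the conversion counts are back-calculated from the reported submit rates, and a fixed-horizon z-test is a simplification of Optimizely's sequential Stats Engine.

```python
# Quick significance check on the inconclusive result above. Conversion counts
# are back-calculated from the reported submit rates, so they are approximate.
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Fixed-horizon two-proportion z-test (a simplification of Stats Engine)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Variation 1: ~10.63% of 1,702 visits; Variation 2: ~10.76% of 1,757 visits.
z, p = two_proportion_z(conv_a=181, n_a=1702, conv_b=189, n_b=1757)
print(f"z = {z:.2f}, p ~= {p:.2f}")  # p is far above 0.10: nowhere near significance
```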
22. Experiment Trending Up/Down
CONTROL
8.56% submit rate
VARIATION
9.9% submit rate
Concluded Early
15.7% lift in form completion rate (low stat sig)
Variation concluded as winner and set as new control
23. Drive impact faster with technology
Optimizely Stats Accelerator & Stats Engine
● Faster decisions with reduced time to statistical significance
● Allows peeking at experiment results over time
Optimizely - Heap integration
● Iterate on experiments faster with all data available in real time in one place, without using developer resources
● Understand the impact of an A/B test on the entire customer experience
Optimizely Web + React
● Reduce reliance on technical resources to instrument web experiments
● Experiment out of the box on dynamic websites