13. It’s not even easy in two dimensions
(chart axes: relevance vs. authority)
Imagine choosing between a more-relevant page with less authority…
...and a less-relevant page with more authority.
15. Aided by the new head of search John
Giannandrea and ML experts like Jeff Dean
16. If you haven’t already seen it, you should
read the story of how Jeff Dean & three
engineers took just a month to beat a
decade’s worth of work by hundreds of
engineers by attacking Translate with ML.
17. Audiences generally still think
they’re pretty good at this
You’re probably thinking something similar yourself right now.
18. I’ve now run an
in-person experiment a
few times.
19. I show two pages that
rank for a particular
search along with
various metrics for each
page.
20. Then I ask the audience
to stand up and predict
which page ranks better
for a given query.
21. I get people to sit down as they get
one wrong.
By the time we’ve done 2 or 3 rounds,
almost everyone is sitting.
44. In mathematical terms, we express each page as a set
of features:
{‘DA’: ‘67’, ‘lrd’: ‘254’, ‘tld’: ‘1’, ‘h1_tgtg’: ‘0.478’, ‘links_on_page’: ‘200’ ....}
Combine the two sets of features into one big vector.
Label it as (1,0) if A outranks B and (0,1) if B outranks A.
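The pairwise setup above can be sketched in a few lines of Python. The feature names come from the slide’s example dict; the metric values for page B are made up for illustration.

```python
# Build one training example from a pair of pages, as described above:
# concatenate both pages' features into one vector, then label the pair
# (1, 0) if A outranks B and (0, 1) if B outranks A.
FEATURES = ['DA', 'lrd', 'tld', 'h1_tgtg', 'links_on_page']

def make_example(page_a, page_b, a_outranks_b):
    """Return (feature_vector, label) for one A-vs-B comparison."""
    vec = [float(page_a[k]) for k in FEATURES] + \
          [float(page_b[k]) for k in FEATURES]
    label = (1, 0) if a_outranks_b else (0, 1)
    return vec, label

a = {'DA': 67, 'lrd': 254, 'tld': 1, 'h1_tgtg': 0.478, 'links_on_page': 200}
b = {'DA': 41, 'lrd': 90, 'tld': 0, 'h1_tgtg': 0.120, 'links_on_page': 150}  # illustrative
vec, label = make_example(a, b, a_outranks_b=True)
# vec holds A's 5 metrics followed by B's 5; label is (1, 0)
```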
45. Note: we’re doing no spam detection
We’re working only with Google’s top 10
46. To run the model, we input a
pair of pages with their
associated metrics.
50. If we could do this perfectly, then
we could tweak the values of our
page (call that A′) and compare
A to A′.
We’d get to simulate changes to
see impacts without making them.
This is the holy grail.
51. And when we get close the gaps
will tell us where the unknowns in
the algorithm lie
52. There’s a lot of dead-ends before
we get anywhere near that though
Let’s go stumbling through the trees
53. The first thing to realise is that data
pipelines are hard.
Really hard.
There’s a reason that most of Google’s Rules of ML are about data.
Here’s what we did:
60. This is what it looks like on our data
(Running on their web version)
61. So I took this big dataset, restricted
it to property keywords, and gave it
a shot
I have an ongoing argument with @tomanthonySEO about how much
the keyword grouping matters...
66. One of the problems with deep
learning is that the models are far
from human understanding.
There is not really any concept of “explain how you got this answer”.
67. So I tried a much simpler model on the same data
A “decision tree classifier” from scikit-learn
68. You read these decision trees like flowcharts
The first # refers to the two URLs in the comparison
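A minimal version of this step with scikit-learn’s DecisionTreeClassifier (the real training set isn’t shown in the deck, so the data below is synthetic: 10 columns standing in for A’s metrics followed by B’s):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Synthetic stand-in for the pairwise vectors: first 5 columns are page A's
# metrics, last 5 are page B's.
X = rng.random((200, 10))
# Toy labelling rule: A outranks B when its first metric is bigger.
y = (X[:, 0] > X[:, 5]).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree))  # the flowchart-style rules you read top to bottom
```

`export_text` is what gives you the readable “flowchart” form of the tree; each split line names a feature and a threshold.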
74. I eventually figured out what was going on.
There are a small number of domains that rank well for
essentially every property-related search in the UK.
My model was just learning:
domain A > domain B > domain C
75. The model was essentially just identifying URLs:
Zoopla vs. findaproperty,
Rightmove vs. primelocation,
etc.
76. So we started splitting the data
better so that it never saw the
same domains that it was trained
on
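One way to do that split is scikit-learn’s GroupShuffleSplit, grouping rows by domain so no domain appears in both train and test. The rows and domains below are illustrative; in practice each pair involves two domains, so the grouping needs more care than this sketch shows.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Each row is one page-pair example; 'groups' holds a domain per row so the
# split keeps every domain entirely on one side. (Data is illustrative.)
X = np.arange(12).reshape(6, 2)
y = np.array([1, 0, 1, 0, 1, 0])
groups = np.array(['zoopla', 'zoopla', 'rightmove',
                   'rightmove', 'primelocation', 'primelocation'])

splitter = GroupShuffleSplit(n_splits=1, test_size=0.33, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=groups))
# No domain leaks from train into test:
assert set(groups[train_idx]).isdisjoint(groups[test_idx])
```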
77. Our current state-of-the-art is 65-66% accuracy on
large diverse keyword sets.
Decision trees are nowhere near as good on this data.
We are still only using fairly naive on-page metrics.
79. (Chart: known factors vs. unknown factors)
The better our model gets, the more we can
constrain how much of an impact other things must
be having - advanced on-page ML, usage data etc.
We expect to see progress from more advanced on-page analysis - we
have a theory that link signals get you into the consideration set, but
increasingly don’t reorder it:
82. That was all very complicated.
In practice, we are running
real-world split-tests.
This is a difficult thing to do, so we’ve built a platform to help:
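The core idea can be shown in a toy form: randomly bucket similar pages into control and variant groups, change only the variant, and compare an outcome metric. This is not the platform’s actual methodology (real tools use much more sophisticated statistics); the page paths and session counts below are simulated.

```python
import random
from statistics import mean

random.seed(0)
# Hypothetical page set; in a real test these come from one page template.
pages = [f"/property/{i}" for i in range(200)]
random.shuffle(pages)
control, variant = pages[:100], pages[100:]

# Sessions would come from analytics; simulated here with a small uplift
# (+30 sessions) applied only to the variant group.
variant_set = set(variant)
sessions = {p: random.gauss(1000, 50) + (30 if p in variant_set else 0)
            for p in pages}

uplift = mean(sessions[p] for p in variant) - mean(sessions[p] for p in control)
```

A real platform would then test whether that uplift is statistically significant rather than noise.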
84. In keeping with the theme of this
presentation, I want to share some
scary results
It turns out that you are probably recommending a ton of changes that
are making no difference, or even making things worse...
85. 1. Adding ALT attributes
2. Adding structured data
3. Setting exact match title tags
4. Writing more emotive meta copy
86. Established wisdom and correlation studies would suggest ALT
attributes on images might be good for SEO
91. What happens when you match title tags to the greatest search volume?
Title tag before: Which TV should I buy? - Argos
Title tag after: Which TV to buy? - Argos
114. And a bunch that we haven’t written up yet:
Including:
● Replacing en-gb words & spellings with en-us on British company’s US site
○ Status: statistically significant positive uplift
● Fresh content: more recent update dates across large long-tail set of pages
○ Status: statistically significant positive uplift
● Change on-page targeting to higher volume query structure
○ Status: statistically significant positive uplift
115. All of this is why we have been
investing so much in
split-testing
Check out www.distilledodn.com
if you haven’t already.
We will be happy to demo for
you.
We’re now serving well over a
billion requests / month, and
recently published information
covering everything from
response times to our +£100k /
month split test.
116. Let’s recap
1. Even in a world of 200+ “classical” ranking factors, humans were bad at
understanding the algorithm
2. Machine learning will make this worse, and is accelerating under Sundar
3. By applying our own machine learning, we can model the algorithm and find
the gaps in our understanding
4. We can apply what we learn by split-testing on our own sites:
a. It is very likely that if you are not split-testing, you are recommending
changes that have no effect
b. And (obviously worse) you are very likely recommending changes that
damage your visibility
123. Image credits
● Sundar Pichai
● Go
● Jeff Dean
● Train
● Wake up
● Statue of Liberty
● Sleeping cat
● Complexity
● Holy Grail
● Wilderness
● Pipeline
● Houses
● Head in hands
● Rope bridge
● Spider
● Cheating
● Celebration
● Split rock
● Science
● Jolly Roger
● Thumbs up
● Spam