In this talk at the Netflix Machine Learning Platform Meetup on 12 Sep 2019, Kinjal Basu from LinkedIn discussed Online Parameter Selection for Web-Based Ranking via Bayesian Optimization.
4. LinkedIn Feed
Mission: Enable members to build an active professional community that advances their career.
Heterogeneous list:
• Shares from a member's connections
• Recommendations such as jobs, articles, courses, etc.
• Sponsored content or ads
[Diagram: a member's feed mixing Shares, Jobs, Articles, and Ads]
5. Important Metrics
• Viral Actions (VA): members liked, shared, or commented on an item.
• Job Applies (JA): members applied for a job.
• Engaged Feed Sessions (EFS): sessions where a member engaged with anything on the feed.
6. Ranking Function
f(m, u) = P_VA(m, u) + x_EFS · P_EFS(m, u) + x_JA · P_JA(m, u)
• The weight vector x = (x_EFS, x_JA) controls the balance between the three business metrics: EFS, VA and JA.
• A sample business strategy is
  maximize VA(x)
  s.t. EFS(x) > c_EFS
       JA(x) > c_JA
• m – member, u – item
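The blended score above can be sketched in a few lines. This is a minimal illustration, assuming P_VA, P_EFS, P_JA are model-predicted probabilities for a (member, item) pair; the function and field names are illustrative, not LinkedIn's actual API.

```python
# f(m, u) = P_VA + x_EFS * P_EFS + x_JA * P_JA
def feed_score(p_va, p_efs, p_ja, x_efs, x_ja):
    """Blend the three per-item action probabilities with weights x."""
    return p_va + x_efs * p_efs + x_ja * p_ja

def rank_items(items, x_efs, x_ja):
    """Rank a member's candidate items by descending blended score.

    Each item is a dict with hypothetical keys p_va, p_efs, p_ja.
    """
    return sorted(
        items,
        key=lambda it: feed_score(it["p_va"], it["p_efs"], it["p_ja"], x_efs, x_ja),
        reverse=True,
    )
```

Note that only the two weights (x_EFS, x_JA) are tuned; the VA term is the fixed reference against which the other metrics are traded off.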
7. Major Challenges
– The optimal value of x (the tuning parameters) changes over time.
– Examples of changes:
  – New content types are added.
  – Score distributions change (feature drift, updated models, etc.).
– With every change, engineers would manually find the optimal x:
  – Run multiple A/B tests.
  – Not the best use of engineering time.
9. Modeling the Metrics
– y_{i,j}^k(x) ∈ {0, 1} denotes whether the i-th member, during the j-th session served with parameter x, did action k or not. Here k = VA, EFS or JA.
– We model this data as follows:
  y_i^k ~ Binomial(n_i(x), σ(f_k(x)))
– where n_i(x) is the total number of sessions of member i served with x, and f_k is a latent function for the particular metric.
– Assume a Gaussian process prior on each of the latent functions f_k.
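The observation model above can be sketched as follows. This is a simplified stand-in, assuming the talk's binomial-likelihood GP is approximated by regressing a GP on empirical log-odds (the exact inference scheme is not specified here); the function names, kernel, and smoothing constants are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_latent_metric(xs, successes, sessions):
    """Fit a GP to pseudo-observations of the latent f_k(x).

    xs: tried parameter values; successes: y_i^k; sessions: n_i(x).
    """
    xs = np.asarray(xs, dtype=float).reshape(-1, 1)
    # Empirical success rate, lightly smoothed so the logit stays finite.
    rate = (np.asarray(successes) + 0.5) / (np.asarray(sessions) + 1.0)
    logit = np.log(rate / (1.0 - rate))  # pseudo-observations of f_k(x)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-2)
    gp.fit(xs, logit)
    return gp

def predict_metric(gp, x_grid):
    """Return sigma(f_k(x)), the modeled metric value, on a grid."""
    f_mean = gp.predict(np.asarray(x_grid, dtype=float).reshape(-1, 1))
    return 1.0 / (1.0 + np.exp(-f_mean))
```

The sigmoid link keeps predictions in (0, 1), matching the interpretation of each metric as a per-session success probability.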
10. Reformulation
We approximate each of the metrics as:
  VA(x) = σ(f_VA(x))
  EFS(x) = σ(f_EFS(x))
  JA(x) = σ(f_JA(x))
so the business problem
  maximize VA(x)
  s.t. EFS(x) > c_EFS
       JA(x) > c_JA
becomes
  maximize σ(f_VA(x))
  s.t. σ(f_EFS(x)) > c_EFS
       σ(f_JA(x)) > c_JA
which can be written in the generic form
  maximize f(x)
  x ∈ C
Benefit: the last problem can now be solved using techniques from the Bayesian optimization literature. The original optimization problem can be written through this parametrization.
13. Bayesian Optimization: A Quick Crash Course
  maximize f(x)
  x ∈ C
• An explore-exploit scheme to solve the problem.
• Assume a Gaussian process prior on f(x).
• Start with a uniform sample; get (x, f(x)).
• Estimate the mean function and covariance kernel.
• Draw the next sample x which maximizes an "acquisition function" or the predictive posterior.
• Continue the process.
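The crash-course loop above can be sketched end to end. This is a toy 1-D version, assuming an RBF-kernel GP posterior and a UCB acquisition (the talk does not fix a specific acquisition function); all names and constants are illustrative.

```python
import numpy as np

def rbf(a, b, ls=0.15):
    """RBF covariance between two 1-D arrays of points."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(xs, ys, grid, noise=1e-4):
    """GP posterior mean and std-dev on a grid, given observations (xs, ys)."""
    K = rbf(xs, xs) + noise * np.eye(len(xs))
    Ks = rbf(grid, xs)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ ys
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks)
    return mean, np.sqrt(np.clip(var, 1e-12, None))

def bayes_opt(f, n_iter=20, seed=0):
    """Explore-exploit loop: uniform start, then maximize a UCB acquisition."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 201)
    xs = rng.uniform(0.0, 1.0, size=3)            # initial uniform sample
    ys = np.array([f(x) for x in xs])
    for _ in range(n_iter):
        mean, sd = gp_posterior(xs, ys, grid)
        x_next = grid[np.argmax(mean + 2.0 * sd)]  # UCB acquisition
        xs = np.append(xs, x_next)
        ys = np.append(ys, f(x_next))
    return xs[np.argmax(ys)]                       # best parameter found
```

In production the black-box evaluation of f(x) is a round of live traffic rather than a function call, which is why each iteration of the loop corresponds to a serving cycle.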
18. Thompson Sampling
– Consider a Gaussian process prior on each f_k, where k is VA, EFS or JA.
– Observe the data (x, y_k(x)).
– Obtain the posterior of each f_k, which is another Gaussian process.
– Sample from the posterior distributions and generate samples of the overall objective function.
– We get the next distribution of hyperparameters by maximizing the sampled objectives (over a grid of QMC points).
– Continue this process till convergence.
  maximize σ(f_VA(x))
  s.t. σ(f_EFS(x)) > c_EFS
       σ(f_JA(x)) > c_JA
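The Thompson-sampling step can be sketched as follows: draw posterior samples of the objective on a grid and record how often each point is the maximizer, which yields the serving distribution (p_1, …, p_n). This is a simplification, assuming a fixed feasibility mask stands in for the sampled constraint functions, and all names and constants are illustrative.

```python
import numpy as np

def thompson_distribution(mean_va, cov_va, feasible_mask, n_draws=1000, seed=0):
    """Distribution over grid points, by frequency of being the sampled maximizer.

    mean_va / cov_va: GP posterior of f_VA on the grid.
    feasible_mask: boolean array marking grid points with x in C.
    """
    rng = np.random.default_rng(seed)
    # Each row is one posterior sample of the objective over the whole grid.
    draws = rng.multivariate_normal(mean_va, cov_va, size=n_draws)
    draws = np.where(feasible_mask[None, :], draws, -np.inf)  # enforce x in C
    counts = np.bincount(np.argmax(draws, axis=1), minlength=len(mean_va))
    return counts / counts.sum()  # probabilities summing to 1
```

Sampling whole functions (rather than just the posterior mean) is what gives Thompson sampling its exploration: uncertain regions occasionally win the argmax and get tried.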
21. Offline System
The heart of the product
Tracking
• All member activities are tracked with the parameter of interest.
• ETL into HDFS for easy consumption.
Utility Evaluation
• Using the tracking data, we generate (x, y_k(x)) for each function k.
• The data is kept in an appropriate schema that is problem agnostic.
Bayesian Optimization
• The data and the problem specifications are input to this.
• Using the data, we first estimate each of the posterior distributions of the latent functions.
• Sample from those distributions to get the distribution of the parameter x which maximizes the objective.
22. The Parameter Store and Online Serving
• The Bayesian optimization library generates
  • A set of potential candidates to try in the next round (x_1, x_2, …, x_n)
  • A probability of how likely each point is the true maximizer (p_1, p_2, …, p_n), such that Σ_{i=1}^{n} p_i = 1
• To serve members with the above distribution, each memberId is mapped to [0, 1] using a hashing function h. For example, if
  Σ_{i=1}^{j} p_i < h(Kinjal) ≤ Σ_{i=1}^{j+1} p_i
  then my feed is served with parameter x_{j+1}.
• The parameter store (depending on use-case) can contain
  • <parameterValue, probability>, i.e. (x_i, p_i), or
  • <memberId, parameterValue>
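The hash-and-bucket rule above can be sketched directly. This is a minimal illustration, assuming SHA-256 as the hash (the talk does not name one); the function names are hypothetical.

```python
import hashlib

def member_hash(member_id):
    """Map a memberId deterministically to a value in [0, 1)."""
    digest = hashlib.sha256(str(member_id).encode()).hexdigest()
    return int(digest, 16) / 16 ** len(digest)

def served_parameter(member_id, candidates, probs):
    """Pick x_{j+1} where the member's hash falls in the cumulative p's.

    candidates = [x_1, ..., x_n], probs = [p_1, ..., p_n], sum(probs) == 1.
    """
    h, cum = member_hash(member_id), 0.0
    for x, p in zip(candidates, probs):
        cum += p
        if h <= cum:
            return x
    return candidates[-1]  # guard against floating-point rounding
```

Because the hash is deterministic per memberId, a member keeps seeing the same parameter until the distribution (p_1, …, p_n) itself is updated, which supports the member-level consistency discussed later.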
23. Online System
Serving hundreds of millions of users
Parameter Sampling
• For each member m visiting LinkedIn,
  • depending on the parameter store, we either evaluate <m, parameterValue>,
  • or we directly call the store to retrieve <m, parameterValue>.
Online Serving
• Depending on the parameter value that is retrieved (say x), the member's full feed is scored according to the ranking function and served:
  f(m, u) = P_VA(m, u) + x_EFS · P_EFS(m, u) + x_JA · P_JA(m, u)
24. Practical Design Considerations
– Consistency in user experience:
  – Randomize at the member level instead of the session level.
– Offline flow frequency:
  – Batch computation: we collect data for an hour and run the offline flow each hour to update the sampling distribution.
– Assume (f_VA, f_EFS, f_JA) to be independent:
  – Works well in our setup; joint modeling might reduce variance.
– Choice of business constraint thresholds:
  – Chosen to allow for a small drop.
29. Key Takeaways
– Removes the human in the loop: a fully automatic process to find the optimal parameters.
– Drastically improves developer productivity.
– Can scale to multiple competing metrics.
– Very easy onboarding infra for multiple vertical teams; currently used by Ads, Feed, Notifications, PYMK, etc.
– Future directions:
  • Add other explore-exploit algorithms.
  • Move from black-box to grey-box optimization.
  • Create a dependent structure on the different utilities to better model the variance.
  • Automatically identify the primary metric by understanding the models better.