tech talk @ ferret
Andrii Gakhov
PROBABILISTIC DATA STRUCTURES
ALL YOU WANTED TO KNOW BUT WERE AFRAID TO ASK
PART 2: CARDINALITY
CARDINALITY
Agenda:
▸ Linear Counting
▸ LogLog, SuperLogLog, HyperLogLog, HyperLogLog++
• To determine the number of distinct elements, also
called the cardinality, of a large set of elements
where duplicates are present
Calculating the exact cardinality of a multiset requires an amount of memory
proportional to the cardinality, which is impractical for very large data sets.
THE PROBLEM
LINEAR COUNTING
LINEAR COUNTING: ALGORITHM
• A linear counter is a bit map (hash table) of size m, with all bits set to 0 at the beginning.
• The algorithm consists of a few steps:
• for every element, calculate its hash value and set the corresponding bit to 1
• calculate the fraction V of empty bits in the structure
(divide the number of empty bits by the bit map size m)
• estimate the cardinality as n ≈ -m ln V
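A minimal Python sketch of this procedure (assuming the mmh3 MurmurHash3 bindings; any uniform hash function would do, and the class and variable names are only illustrative):

import math
import mmh3


class LinearCounter:
    def __init__(self, m: int):
        self.m = m               # bit map size
        self.bits = [0] * m      # all bits set to 0 at the beginning

    def add(self, element: str) -> None:
        # hash the element and set the corresponding bit to 1
        self.bits[mmh3.hash(element) % self.m] = 1

    def estimate(self) -> float:
        # V = fraction of empty bits, n ≈ -m * ln V (undefined if no bit is empty)
        v = self.bits.count(0) / self.m
        return -self.m * math.log(v)


counter = LinearCounter(m=16)
for word in ["bernau", "bernau", "bernau", "berlin", "kiev", "kiev",
             "new york", "germany", "ukraine", "europe"]:
    counter.add(word)
print(round(counter.estimate(), 3))   # an approximation of the real cardinality 7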
LINEAR COUNTING: EXAMPLE
index: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
bits:  0 0 0 0 0 0 0 0 0 0 0  0  0  0  0  0
• Consider a linear counter with 16 bits (m = 16)
• Consider MurmurHash3 as the hash function h (to calculate the appropriate index, we take the hash value modulo 16)
• Set of 10 elements: “bernau”, “bernau”, “bernau”,
“berlin”, “kiev”, “kiev”, “new york”, “germany”, “ukraine”,
“europe” (NOTE: the real cardinality n = 7)
h(“bernau”) = 4, h(“berlin”) = 4, h(“kiev”) = 6, h(“new york”) = 6,
h(“germany”) = 14, h(“ukraine”) = 7, h(“europe”) = 9
index: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
bits:  0 0 0 0 1 0 1 1 0 1 0  0  0  0  1  0
LINEAR COUNTING: EXAMPLE
number of empty bits: 11
m = 16
V = 11 / 16 = 0.6875
index: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
bits:  0 0 0 0 1 0 1 1 0 1 0  0  0  0  1  0
• The cardinality estimate is n ≈ -16 · ln(0.6875) ≈ 5.995
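The same arithmetic, reproduced in a couple of lines of Python:

import math

m, empty_bits = 16, 11
V = empty_bits / m            # 0.6875
print(-m * math.log(V))       # ≈ 5.995, versus the real cardinality n = 7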
LINEAR COUNTING: READ MORE
• http://dblab.kaist.ac.kr/Prof/pdf/Whang1990(linear).pdf
• http://www.codeproject.com/Articles/569718/CardinalityplusEstimationplusinplusLinearplusTimep
HYPERLOGLOG
HYPERLOGLOG: INTUITION
• The cardinality of a multiset of uniformly distributed numbers can be estimated by the maximum number of leading zeros in the binary representation of each number: if such maximum value is k, then the number of distinct elements in the set is approximately 2^k
P(rank = 1) = 1/2 - probability to find a binary representation that starts with 1
P(rank = 2) = 1/2^2 - probability to find a binary representation that starts with 01
…
P(rank = k) = 1/2^k
rank = number of leading zeros + 1, e.g. f = 0010001000100000 has 2 leading zeros, so rank(f) = 3
• Therefore, among 2^k binary representations we should find at least one representation with rank = k
• If we remember the maximal rank we’ve seen and it’s equal to k, then we can use 2^k as the approximation of the number of elements
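A small sketch of the rank computation described above (the function name is only illustrative):

def rank(value: int, bits: int) -> int:
    # rank = number of leading zeros + 1 in a fixed-width binary representation
    if value == 0:
        return bits + 1                      # convention for an all-zero value
    return bits - value.bit_length() + 1


f = 0b0010001000100000                       # the 16-bit value from the example
print(rank(f, 16))                           # 3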
HYPERLOGLOG
• proposed by Flajolet et al., 2007
• an extension of the Flajolet–Martin algorithm (1985)
• HyperLogLog is described by 2 parameters:
• p – number of bits that determine a bucket used for averaging (m = 2^p is the number of buckets/substreams)
• h – a hash function that produces uniform hash values
• The HyperLogLog algorithm is able to estimate cardinalities of > 10^9 with a typical error rate of 2%, using 1.5 kB of memory (Flajolet, P. et al., 2007).
HYPERLOGLOG: ALGORITHM
• HyperLogLog uses randomization to approximate the cardinality of a multiset. This randomization is achieved by using a hash function h
• Observe the maximum number of leading zeros across all hash values:
• If the bit pattern 0^(L−1)1 is observed at the beginning of a hash value (so, rank = L), then a good estimation of the size of the multiset is 2^L.
HYPERLOGLOG: ALGORITHM
• Stochastic averaging is used to reduce the large variability:
• The input stream of data elements S is divided into m substreams S_i using the first p bits of the hash values (m = 2^p).
• In each substream, the rank (after the initial p bits that are used for substreaming) is measured independently.
• These numbers are kept in an array of registers M, where M[i] stores the maximum rank seen for the substream with index i.
• The cardinality estimate is computed as the normalized, bias-corrected harmonic mean of the estimations on the substreams:
DV_HLL = const(m) · m^2 · ( Σ_{j=1..m} 2^(−M[j]) )^(−1)
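A rough Python sketch of the algorithm above, assuming a 32-bit MurmurHash3 (mmh3) and the usual bias-correction constant for m ≥ 128; the small-range and large-range corrections of the original paper are left out:

import mmh3


class HyperLogLog:
    def __init__(self, p: int):
        self.p = p
        self.m = 1 << p                               # number of registers
        self.M = [0] * self.m
        self.alpha = 0.7213 / (1 + 1.079 / self.m)    # const(m) for m >= 128

    def add(self, element: str) -> None:
        x = mmh3.hash(element) & 0xFFFFFFFF           # 32-bit hash value
        bucket = x >> (32 - self.p)                   # first p bits pick the substream
        rest = x & ((1 << (32 - self.p)) - 1)         # remaining 32 - p bits
        rank = (32 - self.p) - rest.bit_length() + 1  # leading zeros + 1
        self.M[bucket] = max(self.M[bucket], rank)

    def estimate(self) -> float:
        # normalized bias-corrected harmonic mean of the substream estimates
        return self.alpha * self.m ** 2 / sum(2.0 ** -r for r in self.M)

With p = 14, for example, this keeps 2^14 = 16384 registers, which corresponds to the roughly 0.81% standard error mentioned later for Redis.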
HYPERLOGLOG: EXAMPLE
• Consider an L = 8 bit hash function h
• Index elements “berlin” and “ferret”:
h(“berlin”) = 0110111 h(“ferret”) = 1100011
• Define buckets and calculate values to store (use the first p = 3 bits for buckets and the last L − p = 5 bits for ranks):
• bucket(“berlin”) = 011 = 3 value(“berlin”) = rank(0111) = 2
• bucket(“ferret”) = 110 = 6 value(“ferret”) = rank(0011) = 3
• Let’s use p = 3 bits to define a bucket (then m = 2^3 = 8 buckets).
index:      0 1 2 3 4 5 6 7
M (before): 0 0 0 0 0 0 0 0
M (after):  0 0 0 2 0 0 3 0
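The bucket/rank split from this example can be reproduced directly on the hash strings as given on the slide (the helper name is illustrative):

p = 3

def split(hash_bits: str):
    bucket = int(hash_bits[:p], 2)                       # first p bits
    rest = hash_bits[p:]                                 # remaining bits
    rank = len(rest) - len(rest.lstrip("0")) + 1         # leading zeros + 1
    return bucket, rank

print(split("0110111"))   # (3, 2) for "berlin"
print(split("1100011"))   # (6, 3) for "ferret"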
HYPERLOGLOG: EXAMPLE
• Estimate the cardinality by the HLL formula (C ≈ 0.66):
DV_HLL ≈ 0.66 * 8^2 / (2^−2 + 2^−4) = 0.66 * 204.8 ≈ 135 ≠ 3
• Index element “kharkov”:
• h(“kharkov”) = 1100001
• bucket(“kharkov”) = 110 = 6 value(“kharkov”) = rank(0001) = 4
• M[6] = max(M[6], 4) = max(3, 4) = 4
index: 0 1 2 3 4 5 6 7
M:     0 0 0 2 0 0 4 0
NOTE: For small cardinalities HLL has a strong bias!!!
HYPERLOGLOG: PROPERTIES
• Memory requirement doesn’t grow linearly with L (unlike MinCount or Linear Counting): for a hash function of L bits and precision p, the required memory is ⌈log2(L + 1 − p)⌉ · 2^p bits
• the original HyperLogLog uses 32-bit hash codes, which requires 5 · 2^p bits
• It’s not necessary to calculate the full hash code for the element
• the first p bits and the number of leading zeros of the remaining bits are enough
• There is no evidence that any of the popular hash functions (MD5, SHA-1, SHA-256, MurmurHash3) performs significantly better than the others.
HYPERLOGLOG: PROPERTIES
• The standard error can be estimated as σ = 1.04 / √(2^p), so if we use 16 bits (p = 16) for bucket indices, the standard error is 0.40625%
• The algorithm has a large error for small cardinalities.
• For instance, for n = 0 the algorithm always returns roughly 0.7m
• To achieve better estimates for small cardinalities, use LinearCounting below a threshold of 5m/2
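A quick check of both formulas for a 32-bit hash (L = 32) and a few values of p:

import math

L = 32
for p in (10, 14, 16):
    m = 2 ** p
    memory_bits = math.ceil(math.log2(L + 1 - p)) * m    # registers only
    std_error = 1.04 / math.sqrt(m)                      # = 1.04 / sqrt(2^p)
    print(p, memory_bits // 8, "bytes", f"{std_error:.5%}")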
HYPERLOGLOG: APPLICATIONS
• PFCOUNT in Redis returns the approximate cardinality computed by the HyperLogLog data structure (http://antirez.com/news/75)
• The Redis implementation uses 12 KB per key to count with a standard error of 0.81%, and there is no limit to the number of items you can count, unless you approach 2^64 items
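A hedged usage example with the redis-py client (assumes a local Redis server; the key name is made up):

import redis

r = redis.Redis(host="localhost", port=6379)
r.pfadd("visitors", "bernau", "berlin", "kiev", "new york")
r.pfadd("visitors", "berlin", "kiev")     # duplicates don't change the estimate
print(r.pfcount("visitors"))              # approximate number of distinct items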
HYPERLOGLOG: READ MORE
• http://algo.inria.fr/flajolet/Publications/DuFl03-LNCS.pdf
• http://algo.inria.fr/flajolet/Publications/FlFuGaMe07.pdf
• https://stefanheule.com/papers/edbt13-hyperloglog.pdf
• https://highlyscalable.wordpress.com/2012/05/01/probabilistic-structures-web-analytics-data-mining/
• https://hal.archives-ouvertes.fr/file/index/docid/465313/filename/sliding_HyperLogLog.pdf
• http://stackoverflow.com/questions/12327004/how-does-the-hyperloglog-algorithm-work
HYPERLOGLOG++
HYPERLOGLOG++
• proposed by Stefan Heule et al., 2013 for Google PowerDrill
• an improved version of HyperLogLog (Flajolet et al., 2007)
• HyperLogLog++ is described by 2 parameters:
• p – number of bits that determine a bucket used for averaging (m = 2^p is the number of buckets/substreams)
• h – a hash function that produces uniform hash values
• The HyperLogLog++ algorithm is able to estimate cardinalities of ~ 7.9 · 10^9 with a typical error rate of 1.625%, using 2.56 KB of memory (Micha Gorelick and Ian Ozsvald, High Performance Python, 2014).
HYPERLOGLOG++: IMPROVEMENTS
• use 64-bit hash function
• an algorithm that only uses the hash code of the input values is limited by the number of bits of the hash codes when it comes to accurately estimating large cardinalities
• In particular, a hash function of L bits can at most distinguish 2^L different values, and as the cardinality n approaches 2^L, hash collisions become more and more likely and accurate estimation gets impossible
• if the cardinality approaches 2^64 ≈ 1.8 · 10^19, hash collisions become a problem
• bias correction
• the original algorithm overestimates the real cardinality for small sets, but most of the error is due to bias.
• storage efficiency
• uses different encoding strategies for hash values, variable-length encoding for integers, and difference encoding (see the sketch after this list)
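As referenced in the last bullet, here is an illustrative sketch (not the paper's exact wire format) of combining difference encoding with a variable-length integer encoding for a sorted list of sparse-mode values:

def varint(n: int) -> bytes:
    # unsigned LEB128-style variable-length encoding: 7 payload bits per byte
    out = bytearray()
    while n >= 0x80:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    out.append(n)
    return bytes(out)


def encode_sorted(values):
    # difference encoding: store gaps between consecutive sorted values
    prev, out = 0, bytearray()
    for v in sorted(values):
        out += varint(v - prev)
        prev = v
    return bytes(out)


sparse = [1048577, 1048580, 1048900]          # hypothetical sparse-mode entries
print(len(encode_sorted(sparse)))             # 6 bytes instead of 3 fixed 4-byte ints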
HYPERLOGLOG++ VS HYPERLOGLOG
• accuracy is significantly better for a large range of cardinalities and equally good on the rest
• sparse representation allows for a more adaptive use of memory
• if the cardinality n is much smaller than m, then HyperLogLog++ requires significantly less memory
• For cardinalities between 12000 and 61000, the bias correction
allows for a lower error and avoids a spike in the error when
switching between sub-algorithms.
• 64 bit hash codes allow the algorithm to estimate cardinalities well
beyond 1 billion
HYPERLOGLOG++: APPLICATIONS
• the cardinality metric in Elasticsearch is based on the HyperLogLog++ algorithm for big cardinalities (adaptive counting)
• Apache DataFu, a collection of libraries for working with large-scale data in Hadoop, has an implementation of the HyperLogLog++ algorithm
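A hedged example of the Elasticsearch cardinality aggregation via the official Python client (the index and field names are made up, and the exact call style depends on the client version):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
query = {
    "size": 0,
    "aggs": {
        "unique_visitors": {
            "cardinality": {                      # HyperLogLog++-based metric
                "field": "visitor_id",
                "precision_threshold": 3000,      # below this, counts are near-exact
            }
        }
    },
}
response = es.search(index="visits", body=query)
print(response["aggregations"]["unique_visitors"]["value"])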
HYPERLOGLOG++: READ MORE
• http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/40671.pdf
• https://research.neustar.biz/2013/01/24/hyperloglog-googles-take-on-engineering-hll/
▸ @gakhov
▸ linkedin.com/in/gakhov
▸ www.datacrucis.com
THANK YOU