SCALABILITY AND MEMORY
CS FUNDAMENTALS SERIES
http://bit.ly/1TPJCe6
HOW DO YOU MEASURE
AN ALGORITHM?
???
CLOCK TIME?
DEPENDS ON
WHO’S COUNTING.
ALSO, TOO FLAKY
EVEN ON THE SAME
MACHINE.
THE NUMBER OF
LINES?
THIS IS TWO LINES, BUT A WHOLE
LOT OF STUPID.
THE NUMBER OF
CPU CYCLES?
DEPENDS ON THE
RUNTIME.
ALL THESE METHODS SUCK.
NONE OF THEM CAPTURE WHAT WE
ACTUALLY CARE ABOUT.
ENTER BIG O!
ASYMPTOTIC ANALYSIS
▸ Big O is about asymptotic analysis
▸ In other words, it’s about how an algorithm scales when the numbers get huge
▸ You can also describe this as “the rate of growth”
▸ How fast do the numbers become unmanageable?
ASYMPTOTIC ANALYSIS
▸ Another way to think about this is:
▸ What happens when your input size is 10,000,000? Will your program be able to resolve?
▸ It’s about scalability, not necessarily speed
PRINCIPLES OF BIG O
▸ Big O is a kind of mathematical notation
▸ In computer science, it essentially means “the asymptotic rate of growth”
▸ In other words, how does the running time of this function scale with the input size when the numbers get big?
▸ Big O notation looks like this:
O(n)   O(n log(n))   O(n²)
PRINCIPLES OF BIG O
▸ n here refers to the input size
▸ Can be the size of an array, the length of a string, the number of bits in a number, etc.
▸ O(n) means the algorithm scales linearly with the input
▸ Think like a line (y = x)
PRINCIPLES OF BIG O
▸ “Scaling linearly” can mean 1:1 (one iteration per extra input), but it doesn’t necessarily
▸ It can simply mean k:1 where k is a constant, like 3:1 or 5:1 (i.e., a constant amount of time per extra input)
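As a sketch (the function names here are made up for illustration): both of these are O(n), even though the second does roughly three times the work per element. The 3 is a constant factor, and Big O drops it.

```javascript
// One pass over the input: roughly one unit of work per element => O(n).
function sum(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) total += arr[i];
  return total;
}

// Three passes over the input: roughly three units of work per element.
// That's 3:1 scaling — still linear, still O(n), because 3 is a constant.
function stats(arr) {
  let total = 0, max = -Infinity, min = Infinity;
  for (const x of arr) total += x;           // pass 1
  for (const x of arr) if (x > max) max = x; // pass 2
  for (const x of arr) if (x < min) min = x; // pass 3
  return { total, max, min };
}
```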
PRINCIPLES OF BIG O
▸ In Big O, we strip out any coefficients or smaller factors.
▸ The fastest-growing factor wins. This is also known as the dominant factor.
▸ Just think: when the numbers get huge, what dwarfs everything else?
▸ O(5n) => O(n)
▸ O(½n - 10) also => O(n)
PRINCIPLES OF BIG O
▸ O(k) where k is any constant reduces to O(1).
▸ O(200) = O(1)
▸ Where there are multiple factors of growth, the most dominant one wins.
▸ O(n⁴ + n² + 40n) = O(n⁴)
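A hypothetical function showing the dominant factor in code (name and logic are mine, not the deck’s): the nested loop does ~n² steps and the single loop does n steps, so total work is n² + n, which reduces to O(n²).

```javascript
// Counts pairs of equal elements, then does one linear pass of bookkeeping.
// Work: ~n²/2 (nested loop) + n (single loop) => O(n² + n) => O(n²).
function countDuplicatePairs(arr) {
  let pairs = 0;
  for (let i = 0; i < arr.length; i++) {       // n iterations
    for (let j = i + 1; j < arr.length; j++) { // ~n²/2 iterations total
      if (arr[i] === arr[j]) pairs++;
    }
  }
  for (let i = 0; i < arr.length; i++) {
    // n iterations of constant-time work — dwarfed by the n² above
  }
  return pairs;
}
```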
PRINCIPLES OF BIG O
▸ If there are two inputs (say you’re trying to find all the common substrings of two strings), then you use two variables in your Big O notation => O(n * m)
▸ Doesn’t matter if one variable probably dwarfs the other. You always include both.
▸ O(n + m) => this is considered linear
▸ O(2ⁿ + log(m)) => this is considered exponential
COMPREHENSION TEST
Convert each of these to their appropriate Big O form!
▸ O(3n + 5)
▸ O(n + ⅕n²)
▸ O(log(n) + 5000)
▸ O(2m³ + 50 + ½n)
▸ O(n log(m) + 2m² + nm)
▸ What should n be for this function?
For each character in the string…
Unshift them into an array…
And then join the array together.
Let’s break it down.
Make an empty array.
For each character in the string…
Unshift them into an array…
And then join the array together.
▸ Initialize an empty array => O(1)
▸ Then, split the string into an array of characters => O(n)
▸ Then for each character => O(n)
▸ Unshift into an array => O(n) (we’ll see later why this is)
▸ Then join the characters into a string => O(n)
These multiply. => O(n²)
▸ O(n² + 2n) = O(n²)
▸ This algorithm is quadratic.
▸ Let’s see how badly it sucks.
Benchmark
away!
(showSlowReverse.js)
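A minimal sketch of what’s being benchmarked. The actual contents of showSlowReverse.js aren’t shown in the deck, so the names and shape here are assumptions:

```javascript
// Quadratic reverse: unshift is O(n) because every existing element
// has to shift over by one, and we call it n times => O(n²) overall.
function slowReverse(str) {
  const result = [];                  // O(1)
  for (const ch of str.split('')) {  // O(n) split, O(n) loop
    result.unshift(ch);              // O(n) per call — the hidden cost
  }
  return result.join('');            // O(n)
}

// For contrast, a linear reverse: push is amortized O(1), so this is O(n).
function fastReverse(str) {
  const result = [];
  for (let i = str.length - 1; i >= 0; i--) {
    result.push(str[i]);
  }
  return result.join('');
}
```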
TIME COMPLEXITIES WAY TOO FAST
Constant O(1)
math, pop, push, arr[i], property access,
conditionals, initializing a variable
Logarithmic O(logn) binary search
Linear O(n) linear search, iteration
Linearithmic O(nlogn) sorting (merge sort, quick sort)
Quadratic O(n2
) nested looping, bubble sort
Cubic O(n3
) triply nested looping, matrix multiplication
Polynomial O(nk
) all “efficient” algorithms
Exponential O(2n
) subsets, solving chess
Factorial O(n!) permutations
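A sketch of the O(log n) row above (this implementation is illustrative, not from the deck): binary search halves the remaining range on every comparison, so a sorted array of n elements takes about log₂(n) steps.

```javascript
// Binary search over a sorted array: each comparison halves the
// search range, so the loop runs ~log₂(n) times => O(log n).
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid; // found it
    if (sorted[mid] < target) lo = mid + 1; // discard the left half
    else hi = mid - 1;                      // discard the right half
  }
  return -1; // not found
}
```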
TIME TO IDENTIFY
TIME COMPLEXITIES
OPTIMIZATIONS DON’T
ALWAYS MATTER
BOTTLENECKS
▸ A bottleneck is the part of your code where your algorithm spends most of its time.
▸ Asymptotically, it’s wherever the dominant factor is.
▸ If your algorithm has an O(n) part and an O(50) part, the bottleneck is the O(n) part.
▸ As n => ∞, your algorithm will eventually spend 99%+ of its time in the bottleneck.
BOTTLENECKS
▸ When trying to optimize or speed up an algorithm, focus on the bottleneck.
▸ Optimizing code outside the bottleneck will have a minuscule effect.
▸ Bottleneck optimizations, on the other hand, can easily be huge!
BOTTLENECKS
▸ If you cut down non-bottleneck code, you might be able to save 0.01% of your runtime.
▸ If you cut down on bottleneck code, you might be able to save 30% of your runtime.
▸ Better yet, try to lower the time complexity altogether if you can!
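A hypothetical example of a bottleneck (the function is mine, not the deck’s): the sort dominates, so that’s the only part worth optimizing.

```javascript
// Find the median of an array.
// The sort is O(n log n); the index lookup after it is O(1).
// The sort is the bottleneck: micro-optimizing the lookup saves ~nothing,
// while swapping the sort for an O(n) selection algorithm actually helps.
function median(arr) {
  const sorted = [...arr].sort((a, b) => a - b); // bottleneck: O(n log n)
  return sorted[Math.floor(sorted.length / 2)];  // O(1)
}
```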
BOTTLENECK
EXERCISE
SPACE
COMPLEXITY
SPACE COMPLEXITY
▸ Same thing, except now with memory instead of time.
▸ Do you take linear extra space relative to the input?
▸ Do you allocate new arrays? Do you have to make a copy
of the original input? Are you creating nested data
structures?
COMPREHENSION CHECK
▸ What is the space complexity of:
▸ max(arr)
▸ firstFive(arr)
▸ substrings(str)
▸ hasVowel(str)
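The deck doesn’t show definitions for these four functions, so here is one plausible, hypothetical sketch of each to reason against. The space answers depend on the exact implementation:

```javascript
// O(1) extra space: a single accumulator, no copy of the input.
function max(arr) {
  let best = -Infinity;
  for (const x of arr) if (x > best) best = x;
  return best;
}

// O(1) extra space: the output is at most 5 elements — a constant.
function firstFive(arr) {
  return arr.slice(0, 5);
}

// At least O(n²) extra space: ~n²/2 substrings get stored
// (more if you count the total characters across them).
function substrings(str) {
  const result = [];
  for (let i = 0; i < str.length; i++) {
    for (let j = i + 1; j <= str.length; j++) {
      result.push(str.slice(i, j));
    }
  }
  return result;
}

// O(1) extra space: we only track the loop variable, never copy the string.
function hasVowel(str) {
  for (const ch of str) {
    if ('aeiou'.includes(ch)) return true;
  }
  return false;
}
```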
SO WHAT THE HELL
IS MEMORY ANYWAY
TO UNDERSTAND MEMORY, WE
NEED TO UNDERSTAND HOW A
COMPUTER IS STRUCTURED.
Data Layers
Registers: immediate workspace. A CPU usually has 16 of these. 1 cycle.
L1 cache: a nearby reservoir of useful data we’ve recently read. Close by. ~4 cycles.
L2 cache: more nearby data, but a little farther away. ~10 cycles.
RAM: getting pretty far now. It’s completely random-access, but takes a while. ~800 cycles.
Disk: this is pretty much another country. On an SSD, you’re looking at ~5,000 cycles; on a spindle drive, it’s more like 50,000.
SO ALL DATA TAKES A JOURNEY
UP FROM THE HARD DISK TO
EVENTUALLY LIVE IN A REGISTER.
WHAT DOES MEMORY
ACTUALLY LOOK LIKE?
IT’S JUST A BUNCH OF CELLS
WITH SHIT IN ‘EM.
IT’S ALL BINARY DATA.
STRINGS, FLOATS, OBJECTS, THEY’RE
ALL STORED AS BINARY.
AND IT’S ALL STORED CONTIGUOUSLY.
THIS IS VERY IMPORTANT WHEN IT
COMES TO ARRAYS.
ARRAYS ARE JUST
CONTIGUOUS BLOCKS OF
MEMORY.
THAT’S WHY
THEY’RE SO FAST.
Assume each of these cells is 8 bytes (64 bits).
Let’s imagine they’re addressed like so…
832968 833032 833096 833160 833224 833288 833352 833416 833480 833544
Each cell is offset by exactly 64 in the address space, meaning you can easily derive the address of any index:
this.startAddr = 833096;
function get(i) {
  return this.startAddr + i * 64;
}
get(3) = 833096 + 3 * 64 = 833096 + 192 = 833288
THIS IS POINTER
ARITHMETIC.
THIS IS WHAT MAKES
ARRAY LOOKUPS O(1)
AND IT’S WHY ARRAYS ARE BY
FAR THE FASTEST DATA
STRUCTURE
LET’S WRAP UP BY
TALKING ABOUT CACHE
EFFICIENCY.
CACHES ARE
DUMB.
When the CPU needs data, it first looks in the cache.
Say it’s not in the cache. This is called a cache miss.
The cache then loads the data the CPU requested from RAM…
But the cache guesses that if the CPU wanted this data, it probably will also want
other nearby data eventually. It would be stupid to have to make multiple round trips.
In other words, the cache assumes that
related data will be stored around the same
physical area.
The cache assumes locality of data.
So the cache just loads a huge
contiguous chunk of data around
the address the CPU asked for.
OK. SO?
Remember this?
Loading from memory is slow as shit.
We really want to minimize cache misses.
SO KEEP YOUR DATA LOCAL AND
YOUR DATA STRUCTURES
CONTIGUOUS.
ARRAYS ARE KING, BECAUSE ALL OF
THE DATA IS LITERALLY RIGHT NEXT
TO EACH OTHER IN MEMORY!
An algorithm that jumps around in memory
or follows a bunch of pointers to other objects
will trigger lots of cache misses!
An algorithm that jumps around in memory
or follows a bunch of pointers to other objects
will trigger lots of cache misses!
Think linked lists, trees, even hash maps.
IDEALLY, YOU WANT TO WORK
LOCALLY WITHIN ARRAYS OF
CONTIGUOUS DATA.
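A sketch of locality in code (function names are mine; actual speedups vary by runtime and CPU, and plain JS arrays aren’t always contiguous, so a typed array is used here): both functions sum the same matrix, but the first walks memory in order while the second jumps by `cols` elements on every access, which tends to cause far more cache misses on large inputs.

```javascript
// A 2D matrix stored as one flat, contiguous block (row-major order).
function makeMatrix(rows, cols) {
  return new Float64Array(rows * cols).fill(1);
}

// Walks memory sequentially: cache-friendly.
function sumRowMajor(m, rows, cols) {
  let total = 0;
  for (let r = 0; r < rows; r++)
    for (let c = 0; c < cols; c++)
      total += m[r * cols + c]; // consecutive addresses
  return total;
}

// Same answer, but strides `cols` elements per access: cache-hostile.
function sumColMajor(m, rows, cols) {
  let total = 0;
  for (let c = 0; c < cols; c++)
    for (let r = 0; r < rows; r++)
      total += m[r * cols + c]; // jumps around in memory
  return total;
}
```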
LET’S DO A QUICK
EXERCISE.
QUESTIONS?
I AM
HASEEB QURESHI
You can find me on Twitter: @hosseeb
You can read my blog at: haseebq.com
PLEASE DONATE IF YOU GOT
SOMETHING OUT OF THIS
<3
Ranked by GiveWell as the most
efficient charity in the world!
Hidden Markov Model in Natural Language Processing
 
Regularisation & Auxiliary Information in OOD Detection
Regularisation & Auxiliary Information in OOD DetectionRegularisation & Auxiliary Information in OOD Detection
Regularisation & Auxiliary Information in OOD Detection
 
The bog oh notation
The bog oh notationThe bog oh notation
The bog oh notation
 
Discrete Math Lecture 02: First Order Logic
Discrete Math Lecture 02: First Order LogicDiscrete Math Lecture 02: First Order Logic
Discrete Math Lecture 02: First Order Logic
 
Constructs and techniques and their implementation in different languages
Constructs and techniques and their implementation in different languagesConstructs and techniques and their implementation in different languages
Constructs and techniques and their implementation in different languages
 
Big o
Big oBig o
Big o
 
正規表現に潜む対称性 〜等式公理による等価性判定〜
正規表現に潜む対称性 〜等式公理による等価性判定〜正規表現に潜む対称性 〜等式公理による等価性判定〜
正規表現に潜む対称性 〜等式公理による等価性判定〜
 
Introduction to Recursion (Python)
Introduction to Recursion (Python)Introduction to Recursion (Python)
Introduction to Recursion (Python)
 
DIGITAL TEXT BOOK
DIGITAL TEXT BOOKDIGITAL TEXT BOOK
DIGITAL TEXT BOOK
 
CMSC 56 | Lecture 8: Growth of Functions
CMSC 56 | Lecture 8: Growth of FunctionsCMSC 56 | Lecture 8: Growth of Functions
CMSC 56 | Lecture 8: Growth of Functions
 
introduction to Genifer -- Deduction
introduction to Genifer -- Deductionintroduction to Genifer -- Deduction
introduction to Genifer -- Deduction
 
lecture 1
lecture 1lecture 1
lecture 1
 
[Book Reading] 機械翻訳 - Section 3 No.1
[Book Reading] 機械翻訳 - Section 3 No.1[Book Reading] 機械翻訳 - Section 3 No.1
[Book Reading] 機械翻訳 - Section 3 No.1
 

Recently uploaded

Research Methodology for Engineering pdf
Research Methodology for Engineering pdfResearch Methodology for Engineering pdf
Research Methodology for Engineering pdfCaalaaAbdulkerim
 
An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...Chandu841456
 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girlsssuser7cb4ff
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfAsst.prof M.Gokilavani
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptSAURABHKUMAR892774
 
Introduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxIntroduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxk795866
 
Introduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHIntroduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHC Sai Kiran
 
complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...asadnawaz62
 
US Department of Education FAFSA Week of Action
US Department of Education FAFSA Week of ActionUS Department of Education FAFSA Week of Action
US Department of Education FAFSA Week of ActionMebane Rash
 
Why does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsync
Why does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsyncWhy does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsync
Why does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsyncssuser2ae721
 
Energy Awareness training ppt for manufacturing process.pptx
Energy Awareness training ppt for manufacturing process.pptxEnergy Awareness training ppt for manufacturing process.pptx
Energy Awareness training ppt for manufacturing process.pptxsiddharthjain2303
 
Solving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.pptSolving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.pptJasonTagapanGulla
 
home automation using Arduino by Aditya Prasad
home automation using Arduino by Aditya Prasadhome automation using Arduino by Aditya Prasad
home automation using Arduino by Aditya Prasadaditya806802
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort servicejennyeacort
 
Vishratwadi & Ghorpadi Bridge Tender documents
Vishratwadi & Ghorpadi Bridge Tender documentsVishratwadi & Ghorpadi Bridge Tender documents
Vishratwadi & Ghorpadi Bridge Tender documentsSachinPawar510423
 
Work Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvWork Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvLewisJB
 
Indian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.pptIndian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.pptMadan Karki
 

Recently uploaded (20)

Research Methodology for Engineering pdf
Research Methodology for Engineering pdfResearch Methodology for Engineering pdf
Research Methodology for Engineering pdf
 
An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...
 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girls
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.ppt
 
Introduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxIntroduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptx
 
Introduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHIntroduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECH
 
complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...
 
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
 
US Department of Education FAFSA Week of Action
US Department of Education FAFSA Week of ActionUS Department of Education FAFSA Week of Action
US Department of Education FAFSA Week of Action
 
Why does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsync
Why does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsyncWhy does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsync
Why does (not) Kafka need fsync: Eliminating tail latency spikes caused by fsync
 
Energy Awareness training ppt for manufacturing process.pptx
Energy Awareness training ppt for manufacturing process.pptxEnergy Awareness training ppt for manufacturing process.pptx
Energy Awareness training ppt for manufacturing process.pptx
 
young call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Serviceyoung call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Service
 
Solving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.pptSolving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.ppt
 
home automation using Arduino by Aditya Prasad
home automation using Arduino by Aditya Prasadhome automation using Arduino by Aditya Prasad
home automation using Arduino by Aditya Prasad
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
 
Vishratwadi & Ghorpadi Bridge Tender documents
Vishratwadi & Ghorpadi Bridge Tender documentsVishratwadi & Ghorpadi Bridge Tender documents
Vishratwadi & Ghorpadi Bridge Tender documents
 
Work Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvWork Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvv
 
Indian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.pptIndian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.ppt
 

CS Fundamentals: Scalability and Memory

  • 1. SCALABILITY AND MEMORY CS FUNDAMENTALS SERIES http://bit.ly/1TPJCe6
  • 2. HOW DO YOU MEASURE AN ALGORITHM?
  • 3. ???
  • 7. ALSO, TOO FLAKY EVEN ON THE SAME MACHINE.
  • 10. THIS IS TWO LINES, BUT A WHOLE LOT OF STUPID.
  • 11. THE NUMBER OF CPU CYCLES?
  • 15. ALL THESE METHODS SUCK. NONE OF THEM CAPTURE WHAT WE ACTUALLY CARE ABOUT.
  • 22. ASYMPTOTIC ANALYSIS ▸ Big O is about asymptotic analysis ▸ In other words, it’s about how an algorithm scales when the numbers get huge ▸ You can also describe this as “the rate of growth” ▸ How fast do the numbers become unmanageable?
  • 26. ASYMPTOTIC ANALYSIS ▸ Another way to think about this is: ▸ What happens when your input size is 10,000,000? Will your program be able to resolve? ▸ It’s about scalability, not necessarily speed
  • 33. PRINCIPLES OF BIG O ▸ Big O is a kind of mathematical notation ▸ In computer science, it essentially means “the asymptotic rate of growth” ▸ In other words, how does the running time of this function scale with the input size when the numbers get big? ▸ Big O notation looks like this: O(n), O(n·log(n)), O(n²)
  • 38. PRINCIPLES OF BIG O ▸ n here refers to the input size ▸ Can be the size of an array, the length of a string, the number of bits in a number, etc. ▸ O(n) means the algorithm scales linearly with the input ▸ Think like a line (y = x)
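As a concrete illustration of O(n), here is a minimal linear-scan sketch (the function and variable names are illustrative, not from the slides):

```javascript
// O(n): we touch each element at most once, so the work grows
// linearly with the input size n.
function contains(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return true; // O(1) work per element
  }
  return false;
}
```

Doubling the array doubles the worst-case number of comparisons, which is exactly what "scales linearly" means.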
  • 41. PRINCIPLES OF BIG O ▸ “Scaling linearly” can mean 1:1 (one iteration per extra input), but it doesn’t necessarily ▸ It can simply mean k:1 where k is constant, like 3:1 or 5:1 (i.e., a constant amount of time per extra input)
  • 47. PRINCIPLES OF BIG O ▸ In Big O, we strip out any coefficients or smaller factors. ▸ The fastest-growing factor wins. This is also known as the dominant factor. ▸ Just think: when the numbers get huge, what dwarfs everything else? ▸ O(5n) => O(n) ▸ O(½n - 10) also => O(n)
  • 52. PRINCIPLES OF BIG O ▸ O(k) where k is any constant reduces to O(1). ▸ O(200) = O(1) ▸ Where there are multiple factors of growth, the most dominant one wins. ▸ O(n⁴ + n² + 40n) = O(n⁴)
  • 57. PRINCIPLES OF BIG O ▸ If there are two inputs (say you’re trying to find all the common substrings of two strings), then you use two variables in your Big O notation => O(n * m) ▸ Doesn’t matter if one variable probably dwarfs the other. You always include both. ▸ O(n + m) => this is considered linear ▸ O(2ⁿ + log(m)) => this is considered exponential
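A minimal sketch of where O(n * m) comes from (illustrative names; a deliberately naive nested loop, not an efficient algorithm):

```javascript
// O(n * m): for each of the n characters of `a`, we scan all m
// characters of `b`. Both input sizes appear in the bound.
function countMatchingPairs(a, b) {
  let count = 0;
  for (const ca of a) {      // n iterations
    for (const cb of b) {    // m iterations each
      if (ca === cb) count++;
    }
  }
  return count;
}
```

Neither variable can be dropped: if `b` grows while `a` stays fixed, the running time still grows.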
  • 64. COMPREHENSION TEST Convert each of these to their appropriate Big O form! ▸ O(3n + 5) ▸ O(n + ⅕n²) ▸ O(log(n) + 5000) ▸ O(2m³ + 50 + ½n) ▸ O(n·log(m) + 2m² + nm)
  • 65. ▸ What should n be for this function?
  • 66. Make an empty array. For each character in the string, unshift it into the array, and then join the array together. Let’s break it down.
  • 68. ▸ Initialize an empty array => O(1) ▸ Then, split the string into an array of characters => O(n) ▸ Then for each character => O(n) ▸ Unshift into an array => O(n) (we’ll see later why this is) ▸ Then join the characters into a string => O(n)
  • 70. The per-character loop and the unshift multiply. => O(n²)
  • 74. ▸ O(n² + 2n) = O(n²) ▸ This algorithm is quadratic. ▸ Let’s see how badly it sucks.
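The string reversal described above can be sketched like this (illustrative function name):

```javascript
// The unshift-based reversal: unshift is O(n) because every existing
// element must shift over one slot to make room at the front, and we
// call it once per character => O(n²) overall.
function reverseQuadratic(str) {
  const chars = [];                  // O(1)
  for (const ch of str.split('')) {  // O(n) split, then O(n) loop
    chars.unshift(ch);               // O(n) per call
  }
  return chars.join('');             // O(n)
}
```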
  • 86. TIME COMPLEXITIES, WAY TOO FAST
▸ Constant O(1) => math, pop, push, arr[i], property access, conditionals, initializing a variable
▸ Logarithmic O(log n) => binary search
▸ Linear O(n) => linear search, iteration
▸ Linearithmic O(n log n) => sorting (merge sort, quick sort)
▸ Quadratic O(n²) => nested looping, bubble sort
▸ Cubic O(n³) => triply nested looping, matrix multiplication
▸ Polynomial O(nᵏ) => all “efficient” algorithms
▸ Exponential O(2ⁿ) => subsets, solving chess
▸ Factorial O(n!) => permutations
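For the O(log n) row, a minimal binary-search sketch (it assumes the array is already sorted):

```javascript
// O(log n): each comparison discards half of the remaining range,
// so the loop runs at most log2(n) + 1 times.
function binarySearch(sorted, target) {
  let lo = 0, hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;            // midpoint, rounded down
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1; // discard lower half
    else hi = mid - 1;                      // discard upper half
  }
  return -1; // not found
}
```

Even for an array of a billion elements, that's about 30 comparisons.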
  • 88. TIME TO IDENTIFY TIME COMPLEXITIES
  • 95. BOTTLENECKS ▸ A bottleneck is the part of your code where your algorithm spends most of its time. ▸ Asymptotically, it’s wherever the dominant factor is. ▸ If your algorithm has an O(n) part and an O(50) part, the bottleneck is the O(n) part. ▸ As n => ∞, your algorithm will eventually spend 99%+ of its time in the bottleneck.
  • 99. BOTTLENECKS ▸ When trying to optimize or speed up an algorithm, focus on the bottleneck. ▸ Optimizing code outside the bottleneck will have a minuscule effect. ▸ Bottleneck optimizations, on the other hand, can easily be huge!
  • 103. BOTTLENECKS ▸ If you cut down non-bottleneck code, you might be able to save .01% of your runtime. ▸ If you cut down on bottleneck code, you might be able to save 30% of your runtime. ▸ Better yet, try to lower the time complexity altogether if you can!
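As a sketch of lowering the complexity of a bottleneck: in the string-reversal example earlier, the unshift call was the bottleneck. Swapping it for push (which is O(1) amortized, since it appends at the end) and walking the string backwards drops the whole algorithm from O(n²) to O(n) (illustrative function name):

```javascript
// O(n): push appends at the end without shifting existing elements,
// so each of the n iterations does constant work.
function reverseLinear(str) {
  const chars = [];
  for (let i = str.length - 1; i >= 0; i--) {
    chars.push(str[i]);  // O(1) amortized, n times => O(n)
  }
  return chars.join(''); // O(n)
}
```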
  • 109. SPACE COMPLEXITY ▸ Same thing, except now with memory instead of time. ▸ Do you take linear extra space relative to the input? ▸ Do you allocate new arrays? Do you have to make a copy of the original input? Are you creating nested data structures?
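A minimal sketch of the contrast (illustrative functions; `max` here is a plain stand-in for a max-of-array helper):

```javascript
// O(1) extra space: only a couple of scalar variables, no matter
// how big the input array is.
function max(arr) {
  let best = -Infinity;
  for (const x of arr) {
    if (x > best) best = x;
  }
  return best;
}

// O(n) extra space: allocates a brand-new array as large as the input.
function doubled(arr) {
  return arr.map(x => x * 2);
}
```

Both functions are O(n) in *time*; they differ only in how much *memory* they allocate.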
  • 115. COMPREHENSION CHECK ▸ What is the space complexity of: ▸ max(arr) ▸ firstFive(arr) ▸ substrings(str) ▸ hasVowel(str)
  • 116. SO WHAT THE HELL IS MEMORY ANYWAY
  • 117. TO UNDERSTAND MEMORY, WE NEED TO UNDERSTAND HOW A COMPUTER IS STRUCTURED.
  • 122. Data Layers
▸ Registers: immediate workspace. A CPU usually has 16 of these. 1 cycle.
▸ Cache: a nearby reservoir of useful data we’ve recently read. Close by. ~4 cycles.
▸ More cache: more nearby data, but a little farther away. ~10 cycles.
▸ RAM: getting pretty far now. It’s completely random-access, but takes a while. ~800 cycles.
▸ Disk: pretty much another country. On an SSD, you’re looking at ~5,000 cycles. And on a spindle drive, it’s more like 50,000.
  • 123. SO ALL DATA TAKES A JOURNEY UP FROM THE HARD DISK TO EVENTUALLY LIVE IN A REGISTER.
  • 125. IT’S JUST A BUNCH OF CELLS WITH SHIT IN ‘EM.
  • 126. IT’S ALL BINARY DATA. STRINGS, FLOATS, OBJECTS, THEY’RE ALL STORED AS BINARY.
  • 127. AND IT’S ALL STORED CONTIGUOUSLY. THIS IS VERY IMPORTANT WHEN IT COMES TO ARRAYS.
  • 128. ARRAYS ARE JUST CONTIGUOUS BLOCKS OF MEMORY.
  • 133. Assume each of these cells is 8 bytes (64 bits), and imagine they’re addressed like so: 832968, 833032, 833096, 833160, 833224, 833288, 833352, 833416, 833480, 833544. (The cells outside the array hold garbage.)
  • 136. this.startAddr = 833096; Each cell is offset by exactly 64 in the address space, meaning you can easily derive the address of any index.
  • 138. function get(i) { return this.startAddr + i * 64; } get(3) = 833096 + 3 * 64 = 833096 + 192 = 833288
  • 140. THIS IS WHAT MAKES ARRAY LOOKUPS O(1)
  • 141. AND IT’S WHY ARRAYS ARE BY FAR THE FASTEST DATA STRUCTURE
  • 142. LET’S WRAP UP BY TALKING ABOUT CACHE EFFICIENCY.
  • 147. When the CPU needs data, it first looks in the cache. Say it’s not in the cache. This is called a cache miss. The cache then loads the data the CPU requested from RAM… But the cache guesses that if the CPU wanted this data, it probably will also want other nearby data eventually. It would be stupid to have to make multiple round trips.
  • 149. In other words, the cache assumes that related data will be stored around the same physical area. The cache assumes locality of data.
  • 150. So the cache just loads a huge contiguous chunk of data around the address the CPU asked for.
  • 153. Remember this? Loading from memory is slow as shit. We really want to minimize cache misses.
  • 154. SO KEEP YOUR DATA LOCAL AND YOUR DATA STRUCTURES CONTIGUOUS.
  • 155. ARRAYS ARE KING, BECAUSE ALL OF THE DATA IS LITERALLY RIGHT NEXT TO EACH OTHER IN MEMORY!
  • 157. An algorithm that jumps around in memory or follows a bunch of pointers to other objects will trigger lots of cache misses! Think linked lists, trees, even hash maps.
  • 158. IDEALLY, YOU WANT TO WORK LOCALLY WITHIN ARRAYS OF CONTIGUOUS DATA.
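A minimal sketch of what "working locally" looks like in practice (illustrative names; both loops compute the same sum over a 2D grid stored in one contiguous typed array, row-major):

```javascript
// One contiguous block of memory holding a rows x cols grid.
const rows = 4, cols = 4;
const grid = new Float64Array(rows * cols).map((_, i) => i);

// Walks memory sequentially (index increases by 1 each step),
// so the cache's prefetched chunks get fully used.
function sumRowOrder(g) {
  let s = 0;
  for (let r = 0; r < rows; r++)
    for (let c = 0; c < cols; c++) s += g[r * cols + c];
  return s;
}

// Same result, but strides by `cols` elements each step; on large
// grids this pattern tends to cause far more cache misses.
function sumColOrder(g) {
  let s = 0;
  for (let c = 0; c < cols; c++)
    for (let r = 0; r < rows; r++) s += g[r * cols + c];
  return s;
}
```

Both are O(n) asymptotically; the cache-friendly traversal only changes the constant factor, but on real hardware that constant can be dramatic.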
  • 159. LET’S DO A QUICK EXERCISE.
  • 161. I AM HASEEB QURESHI You can find me on Twitter: @hosseeb You can read my blog at: haseebq.com
  • 162. PLEASE DONATE IF YOU GOT SOMETHING OUT OF THIS <3 Ranked by GiveWell as the most efficient charity in the world!