Is caching data in your application still relevant today, with all those HTTP caches, very fast key-value stores, and microservices? During this presentation, you will learn the basics of caching (TTL, TTI, invalidation, tiering, and so on), meet key figures in the caching world, see how the Java community came up with a specification (JSR 107), and learn how you can leverage it in your application with the implementation of your choice (Ehcache 3). During the multiple demos, you’ll even see how you can sync up your (clustered) caches when you start scaling your application.
2. LET US INTRODUCE OURSELVES
• Henri Tremblay, Senior Software Engineer @ Terracotta, a Software AG company
• Working on Ehcache mostly
• Lead developer of EasyMock and Objenesis
• Java Champion, Oracle Groundbreaker Ambassador, and Montréal JUG leader
• Anthony Dahanne, Senior Software Engineer @ Terracotta, a Software AG company
• Working on the Terracotta Management Console
• Working on Terracotta cloud deployments (Docker, Kubernetes, AWS, etc.)
• Montréal JUG leader
5. CACHE DEFINITION
“A store of things that will be required in the future, and can be retrieved rapidly.”
— from wiktionary.org
6. CACHE DEFINITION
A Map (key/value mappings) with
• capacity control (via eviction)
• freshness control (via expiry)
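This definition can be sketched in a few lines of plain Java: a toy cache built on LinkedHashMap (class and method names are ours, not any Ehcache API), where LRU eviction bounds the capacity and a per-entry TTL controls freshness.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of the definition above: a Map with capacity control
// (LRU eviction) and freshness control (TTL expiry). A sketch only --
// nothing like a real cache implementation such as Ehcache 3.
public class ToyCache<K, V> {

    private static final class Entry<V> {
        final V value;
        final long expiresAt;   // wall-clock millis after which the entry is stale
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final long ttlMillis;
    private final Map<K, Entry<V>> map;

    ToyCache(final int capacity, long ttlMillis) {
        this.ttlMillis = ttlMillis;
        // accessOrder=true keeps entries in least-recently-used order;
        // removeEldestEntry evicts the LRU entry once capacity is exceeded.
        this.map = new LinkedHashMap<K, Entry<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Entry<V>> eldest) {
                return size() > capacity;
            }
        };
    }

    void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    // Returns the value on a hit, null on a miss (absent, evicted, or expired).
    V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) {
            return null;                               // miss
        }
        if (System.currentTimeMillis() > e.expiresAt) {
            map.remove(key);                           // expired: treat as a miss
            return null;
        }
        return e.value;                                // hit
    }

    int size() { return map.size(); }

    public static void main(String[] args) {
        ToyCache<String, String> cache = new ToyCache<>(2, 60_000);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");                 // capacity 2: "a" (LRU) is evicted
        System.out.println(cache.get("a"));  // null -> miss
        System.out.println(cache.get("c"));  // 3    -> hit
    }
}
```

Real caches decide both policies per cache, not per process, and add tiering, statistics, and concurrency control on top; but the two knobs are exactly the ones in the definition.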
7. WHERE IS CACHING USED?
LET’S START WITH THE CPU!
[Diagram: four cores, each with its own L1 I-cache, L1 D-cache, and L2 cache, all sharing a single L3 cache]
Not that long ago (Intel Core i7 series):
• L1 instruction cache and L1 data cache: 32 KB each
• L2 cache: 256 KB
• L3 cache: 8 MB
8. LATENCIES TO REMEMBER
L1 cache reference                          0.5 ns
L2 cache reference                            7 ns                    14x L1 cache
Main memory reference                       100 ns                    20x L2 cache
Read 1 MB sequentially from memory      250,000 ns     250 µs
Read 1 MB sequentially from SSD       1,000,000 ns   1,000 µs   1 ms  ~1 GB/s SSD
Read 1 MB sequentially from disk     20,000,000 ns  20,000 µs  20 ms  80x memory
Send packet CA -> Netherlands -> CA                            150 ms
from github.com/jboner
9. WHERE IS CACHING USED?
• Browser caching
• CDN caching
• CPU caching
• Application caching
• Disk caching
10. CACHING THEORY: AMDAHL’S LAW
“The theoretical speedup is always limited by the part of the task that cannot benefit from the improvement.” (from Wikipedia)
• s: speedup of the part of the task that benefits from the improvement
• p: proportion of the execution time that this part originally occupied
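The formula itself did not survive on the slide; with the definitions above, Amdahl's law gives the overall speedup as S = 1 / ((1 - p) + p / s). A small self-contained Java sketch (class and method names are ours, not from the talk) shows why caching only part of a request has a hard ceiling:

```java
public class Amdahl {

    // Amdahl's law: overall speedup when a fraction p of the execution time
    // is accelerated by a factor s, and the remaining (1 - p) is unchanged.
    static double speedup(double p, double s) {
        return 1.0 / ((1.0 - p) + p / s);
    }

    public static void main(String[] args) {
        // A cache that makes 80% of a request 4x faster:
        System.out.println(speedup(0.8, 4));     // ~2.5x overall, not 4x
        // Even an "infinitely fast" cache is capped by the other 20%:
        System.out.println(speedup(0.8, 1e12));  // approaches 1 / (1 - p) = 5x
    }
}
```

This is why measuring where a request actually spends its time matters before adding a cache: if p is small, even a perfect cache barely helps.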
12. CACHING GLOSSARY
• Hit: when the cache returns a value
• Miss: when the cache does not have a value
• Cold / hot: a cold cache is (nearly) empty, so most lookups miss; a hot cache is filled with useful entries, so most lookups hit
13. WHAT TO MEASURE WHEN CACHING
• Cache usage (empty? full?)
• Hit ratio: hits / (misses + hits)
• Hit rate: hits / second
• Eviction rate
• Size (in entries or bytes)
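The two hit metrics are easy to mix up; a minimal sketch of the counters behind them (our own names, not an Ehcache statistics API) makes the difference concrete:

```java
// Minimal sketch: callers record each cache lookup as a hit or a miss,
// then derive the two metrics from the counters.
public class CacheStats {

    private long hits;
    private long misses;

    void recordHit()  { hits++; }
    void recordMiss() { misses++; }

    // Hit ratio: hits / (misses + hits), i.e. the fraction of lookups
    // served from the cache. Defined as 0 before any lookup happened.
    double hitRatio() {
        long total = hits + misses;
        return total == 0 ? 0.0 : (double) hits / total;
    }

    // Hit rate: hits per second over a measured interval.
    double hitRate(double elapsedSeconds) {
        return hits / elapsedSeconds;
    }

    public static void main(String[] args) {
        CacheStats stats = new CacheStats();
        stats.recordHit(); stats.recordHit(); stats.recordHit();
        stats.recordMiss();
        System.out.println(stats.hitRatio());    // 0.75
        System.out.println(stats.hitRate(2.0));  // 1.5
    }
}
```

Hit ratio tells you whether the cache is effective; hit rate tells you how hard it is being used. A high hit rate with a low hit ratio usually means the cache is too small or entries expire too fast.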
25. SEVERAL CLIENTS, ACTIVE-PASSIVE TERRACOTTA CLUSTER
[Diagram: several webapps using Ehcache 3 clustered caches in front of MySQL, backed by an active-passive pair of Terracotta Servers]
26. LINKS AND REFERENCES
• Older version of this talk, by Anthony (Devoxx):
• Slideshare: https://www.slideshare.net/anthonydahanne/terracotta-ehcache-simpler-faster-distributed
• YouTube: https://www.youtube.com/watch?v=-j6cNZc5wYM
• Caching 101: Caching on the JVM (and beyond), by Louis Jacomet & Aurelien Broszniowski (Devoxx UK):
• YouTube: https://www.youtube.com/watch?v=FQfd8x29Ud8
• Ehcache 3 documentation: http://www.ehcache.org/
• Ehcache 3 and Terracotta Server demos: https://github.com/ehcache/ehcache3-samples
• The Essence of Caching, by Greg Luck:
• YouTube: https://www.youtube.com/watch?v=TszcAWgCXD0