The key is to manage large quantities of data reliably — with accuracy and resilience — even under extreme load.
Big Data == data lake (any and all data)
Fast Data == processing streams of events in real-time
All about… Data Access
Scale Out rather than Scale Up
Throughput (or number of operations) increases as more nodes are added to the cluster
Data is stored in distributed, highly-concurrent, in-memory data structures to minimize context switching and contention
Data is replicated & partitioned for fast, predictable read/write throughput
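The partition-and-replicate idea above can be sketched in plain Java. This is an illustrative model, not Geode's actual implementation: keys hash into a fixed number of buckets, and each bucket is owned by a primary node plus redundant copies on other nodes, which is what makes read/write throughput predictable as the cluster grows. The node names and redundancy level are made up for the example; 113 is Geode's default total bucket count.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model (NOT Geode's actual code): hash keys into buckets,
// then assign each bucket to a primary node plus redundant copies.
public class PartitionSketch {
    static final int BUCKETS = 113;          // Geode's default total bucket count
    static final int REDUNDANT_COPIES = 1;   // one backup copy per bucket (assumed)

    // Map a key to its bucket, then the bucket to primary + redundant owners.
    static List<String> nodesFor(Object key, List<String> nodes) {
        int bucket = Math.abs(key.hashCode() % BUCKETS);
        List<String> owners = new ArrayList<>();
        for (int i = 0; i <= REDUNDANT_COPIES; i++) {
            owners.add(nodes.get((bucket + i) % nodes.size()));
        }
        return owners;
    }

    public static void main(String[] args) {
        // Hypothetical three-node cluster; any key deterministically maps
        // to a primary owner and one redundant owner.
        List<String> cluster = List.of("server1", "server2", "server3");
        System.out.println(nodesFor("customer-42", cluster));
    }
}
```

Because the key-to-bucket mapping is deterministic, any member can route an operation straight to the owning node — no central lookup, and throughput scales out as nodes are added.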
In a nutshell… how Apache Geode works under the hood…
Stores data in memory on puts.
Stores data to disk — synchronously (the default) or asynchronously — for persistence and overflow.
Oplogs are append-only, so periodic compaction is necessary to reclaim dead space.
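The append-only oplog idea can be sketched as follows. This is a toy model, not Geode's disk-store code: every put or destroy appends a record rather than rewriting earlier ones, so the log accumulates superseded entries, and compaction rewrites it keeping only the latest live record per key.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of an append-only operation log (NOT Geode's actual disk store).
public class OplogSketch {
    record Op(String key, String value, boolean destroyed) {}

    final List<Op> oplog = new ArrayList<>();

    // Writes never modify earlier records; they only append.
    void put(String key, String value) { oplog.add(new Op(key, value, false)); }
    void destroy(String key)           { oplog.add(new Op(key, null, true)); }

    // Compaction: replay the log, keep only the last op per key, drop destroys.
    void compact() {
        Map<String, Op> live = new LinkedHashMap<>();
        for (Op op : oplog) {
            if (op.destroyed()) live.remove(op.key());
            else live.put(op.key(), op);
        }
        oplog.clear();
        oplog.addAll(live.values());
    }
}
```

Append-only writes are fast and sequential, which is why the design needs compaction at all: without it, dead records for overwritten or destroyed keys would grow the files without bound.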
HDFS integration is new, and Geode can feed streams of events into Apache Spark for processing.
Misconceptions about Spring…
Spring is a Web Application Framework
Spring’s programming model is unique and Spring uses its own conventions
In reality, Spring is built on fundamental OO principles (POJO)…
Software design patterns (IoC/DI, AOP) and…
Open standards and Open Source Software (OSS)
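The POJO + IoC/DI point can be shown without any Spring classes at all. In this minimal sketch (the class and interface names are invented for illustration), the service declares its dependency in its constructor and something outside wires it in — which is exactly the pattern Spring automates, leaving the business classes as plain Java objects.

```java
// Plain-Java sketch of the POJO + IoC/DI principle — no framework types.
// (GreetingRepository and GreetingService are hypothetical example names.)
interface GreetingRepository {
    String greetingFor(String name);
}

class GreetingService {                 // a POJO: no framework base class
    private final GreetingRepository repository;

    // Inversion of Control: the dependency is handed in, not looked up.
    GreetingService(GreetingRepository repository) {
        this.repository = repository;
    }

    String greet(String name) {
        return repository.greetingFor(name);
    }
}
```

In a Spring application these classes would simply be registered as beans (e.g. via `@Component` or Java config) and the container would perform the constructor injection; no Spring type leaks into the domain code.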
Apache Geode is a complex technology…
Too many configuration options and settings.
Inconsistent behavior between XML configuration (i.e. cache.xml) and API.
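As a concrete illustration of the dual configuration models, the same region can be declared either declaratively in cache.xml or programmatically through the API, and keeping the two paths consistent falls on the developer. A sketch of the XML side (the region name is illustrative):

```xml
<!-- cache.xml: declarative configuration (region name is an example) -->
<cache xmlns="http://geode.apache.org/schema/cache" version="1.0">
  <region name="Customers" refid="PARTITION"/>
</cache>
```

The programmatic equivalent is roughly `new CacheFactory().create()` followed by `cache.createRegionFactory(RegionShortcut.PARTITION).create("Customers")` — two ways of saying the same thing, with subtly different default behavior between them.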