This document discusses how caching can help address performance, scalability, and autonomy challenges for microservices architectures. It introduces Pivotal Cloud Cache (PCC) as a caching solution for microservices on Pivotal Cloud Foundry. PCC provides an in-memory cache that can scale horizontally and increase performance. It also allows for data autonomy between microservices and teams while providing high availability. PCC offers an easy and cost-effective way to cache data and adopt microservices on Pivotal Cloud Foundry.
4. Ability to handle a large number of concurrent requests
Performance Drivers for Modern Applications
▪ More users of new mobile and web applications
▪ Users expect real-time response, even during peak usage
▪ Increasing number of requests from other applications
▪ New use cases from new data sources, e.g. IoT and streaming data
▪ Scaling the application logic results in the need for scaling the data access layer
5. Caches Provide Blazing Fast Performance
▪ Memory is orders of magnitude faster than disk
▪ Caches can present a structural view of data optimized for performance
▪ Maximizing cache hits
- Preloading the cache (cache warming)
- Expiration and eviction
• Application driven
• Time based
• Notifications and events
[Diagram: a microservice instance reads from a cache, which is backed by a database]
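The expiration bullet above can be sketched in code. This is a minimal, hypothetical look-aside cache with time-based expiration; PCC manages TTLs server-side, so this in-process version only illustrates the idea of serving hits from memory and reloading expired entries from the backing store.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a look-aside cache with time-based expiration (names are
// illustrative, not a PCC API).
public class TtlCache<K, V> {
    private record Entry<V>(V value, long expiresAtMillis) {}

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Return the cached value, or load it from the backing store on a
    // miss (or after expiry) and cache the result for later calls.
    public V getOrLoad(K key, Function<K, V> loader) {
        Entry<V> e = store.get(key);
        if (e == null || System.currentTimeMillis() >= e.expiresAtMillis()) {
            V v = loader.apply(key);   // cache miss: hit the database
            store.put(key, new Entry<>(v, System.currentTimeMillis() + ttlMillis));
            return v;
        }
        return e.value();              // cache hit: no database call
    }
}
```

Cache warming, also mentioned above, is simply calling `getOrLoad` for hot keys before traffic arrives.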
6. Externalizing state is a requirement for microservice instances to scale
Microservices Need Performance and Scalability
▪ Externalize microservice state for performance and scalability of the business logic
- Store application state in the cache for fast retrieval
- Adheres to 12-factor principles
▪ Dynamically change the number of application instances without losing state information
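A small sketch of the externalized-state idea: instances keep no local session state, so any instance can serve any request by reading the shared cache, and instances can be added or removed freely. The in-process map stands in for a PCC region; all names here are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Two stateless "instances" sharing externalized state through a cache.
public class SharedStateDemo {
    // Stands in for the shared cache cluster.
    static final Map<String, String> sessionCache = new ConcurrentHashMap<>();

    // A stateless instance: no fields, only reads/writes the shared cache.
    static class AppInstance {
        void saveCartItem(String sessionId, String item) {
            sessionCache.merge(sessionId, item, (old, add) -> old + "," + add);
        }
        String readCart(String sessionId) {
            return sessionCache.getOrDefault(sessionId, "");
        }
    }

    public static void main(String[] args) {
        AppInstance a = new AppInstance();
        AppInstance b = new AppInstance();   // e.g. a scaled-up second instance
        a.saveCartItem("sess-1", "book");
        // Instance b sees what instance a wrote: scaling up or down
        // loses no state because none lives in the instances.
        System.out.println(b.readCart("sess-1")); // prints "book"
    }
}
```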
7. Microservices with large, frequently accessed data sets need a cache layer
Microservices Need Performance and Scalability
Performance and scalability of data
▪ Add servers to a shared cluster
▪ Reduces the pressure to scale rigid backing stores
▪ Enables availability and resilience
9. Fosters an agile, dynamic application culture
Team Autonomy Equates to Velocity
▪ Separate development and release cycles
- Evolve each microservice independently
- Independent development, test, production cycles
- Continuous integration, continuous delivery
▪ Independent technology decisions, including data layer
- Polyglot persistence
- Independent data model decisions
▪ Changes should be non-breaking for other teams and microservices
10. Extreme ends of data sharing continuum present challenges
Autonomy in the Context of the Data Layer
[Diagram: a data-sharing continuum. At one end, a shared database: no autonomy, with development and runtime coupling. At the other, a database per service: autonomous, but with distributed workload and data management challenges.]
11. Data APIs Present Autonomous Views of Data
● Define a Data API that projects a data model to match the needs of the consuming microservices
● The Data API is the access point to a microservice that's primarily responsible for accessing data
● The Data API provides a contract for accessing data
● Teams create data caches optimized for their microservices
● Allows more flexibility for (and isolation from) changes to backing stores
Caches provide data for each autonomous view
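One way to picture a Data API is as an interface contract whose implementation serves a cache-backed projection, never exposing the backing store's schema. This is an illustrative sketch; `CustomerDataApi` and `CustomerView` are hypothetical names, not part of PCC.

```java
// A Data API sketch: the interface is the contract consumers code
// against; the implementation is free to serve the projection from a
// cache, isolating consumers from backing-store changes.
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

interface CustomerDataApi {
    Optional<CustomerView> findCustomer(String id); // the contract
}

// The projection: only the fields this microservice's consumers need,
// decoupled from however the backing store models customers.
record CustomerView(String id, String displayName) {}

class CachedCustomerDataApi implements CustomerDataApi {
    private final Map<String, CustomerView> cache = new ConcurrentHashMap<>();

    // e.g. preloaded from the backing store during cache warming
    void warm(CustomerView v) { cache.put(v.id(), v); }

    @Override
    public Optional<CustomerView> findCustomer(String id) {
        return Optional.ofNullable(cache.get(id)); // backing store never touched directly
    }
}
```

Because consumers depend only on the interface, the team owning this microservice can change the cache layout or backing store without breaking them.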
12. Data APIs evolve in support of the evolution of the microservice(s)
Versioned APIs Facilitate Change Management
▪ Analogous to the notion of versioned microservices
▪ Parallel deployment of versions creates the possibility of a managed evolution
▪ Allows for data transformations within the microservice as an alternative to changing the backing store(s)
[Diagram: V1 and V2 of a Data API deployed in parallel]
13. Caching Can Present an Autonomous View of Data
● Provides a surface area to:
○ implement access control
○ implement throttling
○ perform logging
○ enforce other policies
Teams create data caches optimized for their microservices
14. Caching Can Present an Autonomous View of Data
● Data APIs project a bounded context
○ Each bounded context has a single, unified model
○ Relationships between models are explicitly defined
○ Teams are typically given full responsibility over one or more bounded contexts
16. Several points of failure
Large Number of ‘Moving Parts’
▪ A single request can touch several components: servers, distinct clusters, microservice instances
▪ Availability zones can fail
▪ Regions can become unstable
▪ The network is unreliable
▪ Cloud native architecture components are ephemeral by design
- Instances are added and removed dynamically
Enterprise Readiness Requires the Ability to Tolerate Failures
17. Highly Available Caching Layer Offers Protection
● The cache serves as the ‘primary’ data store for the application
● High availability: data is copied for failure protection
● Immune to lapses in backing store availability
● Backing stores are kept up to date through the cache
19. Expensive and Brittle
Legacy Application Infrastructures
▪ High startup costs
▪ Steep pricing curve for adding capacity
- Mainframe MIPS pricing
- Legacy RDBMS data stores are expensive to scale
▪ Complex deployments
▪ Easily disrupted
▪ Points of failure
▪ Scalability bottlenecks
20. Legacy Modernization is Key to Success
[Diagram: ROI funds transformation. Existing workloads are replatformed, modernized, or migrated to run on PCF; new initiatives are built cloud native for PCF.]
21. Legacy Systems: Part of a Cloud Native Evolution
Create microservices around the edges of the legacy system
● A caching layer mediates between the old and the new
● Optionally, re-platform the legacy application
● Optionally, reduce reliance on the legacy application over time
[Diagram: microservices front a cache that mediates access to legacy middleware, a legacy application, and a monolithic application]
23. Prepackaged for Simple Consumption
● Plans (use cases) based on caching patterns
● Look-aside pattern supported out of the box
○ The cache is controlled and managed by the application
○ Good for saving application state, microservices architectures, reducing load on legacy systems, etc.
○ A perfect match for the Spring Framework @Cacheable annotation
● Other caching patterns and options to come (WAN replication, session state caching, the inline caching pattern, etc.)
[Diagram: look-aside cache. The app instance reads the cache first and falls back to the database on a miss.]
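The look-aside flow described above can be made concrete in a few lines. In this runnable sketch the two maps stand in for a PCC region and a backing database; counting database reads shows that only the first request for a key touches the backing store. This hand-written check-then-load logic is exactly what Spring's @Cacheable annotation automates for a method.

```java
import java.util.HashMap;
import java.util.Map;

// Look-aside pattern: the application checks the cache first and only
// falls through to the database on a miss, then populates the cache.
public class LookAsideDemo {
    static final Map<String, String> cache = new HashMap<>();
    static final Map<String, String> database = Map.of("42", "widget"); // stand-in backing store
    static int databaseReads = 0;

    static String findProduct(String id) {
        String cached = cache.get(id);
        if (cached != null) return cached;     // hit: skip the database
        databaseReads++;
        String loaded = database.get(id);      // miss: read the backing store
        cache.put(id, loaded);                 // populate for next time
        return loaded;
    }

    public static void main(String[] args) {
        findProduct("42");
        findProduct("42");
        System.out.println(databaseReads);     // prints 1: the second call was a cache hit
    }
}
```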
24. Pivotal Cloud Cache
• Easy accessibility through the Marketplace
• Instant provisioning
• Bind to apps through an easy-to-use interface
• Lifecycle management
• Common access control and audit trails across services
[Diagram: Services Marketplace tiles, including Pivotal Cloud Cache, MySQL, New Relic, Single Sign-On, RabbitMQ, Config Server, Service Directory, Circuit Breaker, Signal Sciences, Crunchy PostgreSQL, Dynatrace, and more]
Extending the Pivotal Cloud Foundry Platform for Microservices Architectures
25. Easily Provisioned for Developer Self Service
Operators create and register service plans with the Services Marketplace
[Diagram: via the OpsMan tile, operators create service plans (defining VMs, memory, CPU, and disk size), set quotas (max cluster size, max number of clusters), deploy the PCC broker, and register it with the Marketplace.]
26. In-memory, and horizontally scalable for parallel execution
Pivotal Cloud Cache Performance
Grow cluster dynamically with no interruption of service or data loss
Data is sharded or replicated across servers
27. This is how an in-memory cache can horizontally scale
Partitioning (aka Sharding)
Take advantage of the memory and network bandwidth of all members of the cluster
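The core of partitioning is that each key hashes to exactly one member, so the cluster's combined memory and network bandwidth are all put to work. This sketch shows only that core idea; PCC/GemFire's real bucket-based partitioning and rebalancing scheme is more elaborate.

```java
// Minimal sketch of hash partitioning (sharding): a key deterministically
// maps to one cluster member, spreading data and load across all members.
public class PartitionDemo {
    static int partitionFor(String key, int memberCount) {
        // floorMod keeps the result non-negative even for negative hash codes
        return Math.floorMod(key.hashCode(), memberCount);
    }

    public static void main(String[] args) {
        int members = 3;
        for (String key : new String[] {"order-1", "order-2", "order-3"}) {
            System.out.println(key + " -> member " + partitionFor(key, members));
        }
    }
}
```

Because the mapping is deterministic, any client can route a request for a key straight to the member that owns it, with no central lookup.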
29. High Availability
Spanning Servers and Availability Zones
Stretched cluster across availability zones
Replication for high availability of data in cache
Pivotal Cloud Foundry resurrects lost VMs
31. Integrated Security
Pre-configured Authentication and Authorization
● Role-based, configurable authorization for administrative activities
● Pre-defined, pre-configured roles
● A consistent mechanism for authenticating and authorizing actions
● Every administrative function can require authorization
● Every data access can require authorization
● Some users can read/write data
● Others can start/stop servers
● Still others can configure the cluster
[Diagram: users belong to groups, which determine their experience; groups map to roles and sub-roles, which determine their permissions]
32. Summary
Speed up your apps on Pivotal Cloud Foundry
● PCC can overcome the performance, elasticity, and scaling challenges of microservices architectures
● Using PCC with Data APIs can increase autonomy between teams
● PCC has rock-solid availability and failure recovery
● PCC provides an evolutionary approach to adopting microservices that can extend the life of legacy systems
● The combination of PCC and PCF makes it possible to get started quickly and easily adjust cache capacity as needed