As you adopt Kubernetes, the requirements for your edge change. You now have teams working on multiple services all with different requirements. How can you make sure your edge is Kubernetes-ready?
Webinar: Effective Management of APIs and the Edge when Adopting Kubernetes
1. Effective Management of APIs and the Edge
When Adopting Kubernetes
Daniel Bryant
Product Architect, Datawire
2. tl;dr
Microservices and Kubernetes present two primary challenges at the edge:
1. How to scale the management of many services and APIs
2. Supporting a broad range of microservice architectures and protocols
There are three primary strategies for managing APIs in this new context:
1. Deploying an additional in-cluster API gateway
2. Extending the existing API gateway
3. Deploying a self-service edge stack
14. Two Challenges: Key Takeaways
Microservices help to scale development
Kubernetes helps to scale deployment and the runtime
We must scale the associated release/management mechanisms
We must provide support for a diverse range of requirements at the edge
Bottom line: more microservices means more developer interaction at the edge
17. Overview of the Three Strategies
#1: Deploy an Additional Kubernetes API Gateway
#2: Extend Existing API Gateway
#3: Deploy a Comprehensive Self-Service Edge Stack
https://www.getambassador.io/resources/strategies-managing-apis-edge-kubernetes/
18. #1: Deploy an Additional Kubernetes API Gateway
Simply deploy an additional “in-cluster” gateway
● Below the existing gateway
● Below the load balancer
Management
● Development teams responsible
● OR existing ops team manages this
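As an illustrative sketch, the additional in-cluster gateway is just another Kubernetes workload: a Deployment exposed via a Service that the existing load balancer or edge gateway can target. All names and the container image below are placeholders:

```yaml
# Hypothetical in-cluster API gateway, deployed like any other workload.
# The existing edge/load balancer forwards selected routes to this Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
  namespace: edge
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
        - name: gateway
          image: example/api-gateway:latest   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
  namespace: edge
spec:
  selector:
    app: api-gateway
  ports:
    - port: 80
      targetPort: 8080
```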
19. #1: Deploy an Additional Kubernetes API Gateway
Pros
● Minimal change to the core edge infrastructure
● Enables easy incremental migration
● Provides a limited blast radius
Cons
● Increased management overhead of working with different components
● Increased operational overhead
● Challenging to expose gateway functionality to each independent microservice team
● Challenging to locate and debug where within the edge stack any issues occur
20. #1: Deploy an Additional Kubernetes API Gateway
● Push edge functionality into the Kubernetes API Gateway
○ Directly exposed to application developers.
● For edge functionality that needs to remain centralized:
○ Ops team should create a workflow for application developers
○ Support functionality with SLAs
● Existing edge components should be configured to pass through traffic on
specific routes to the additional gateway
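The pass-through configuration might look something like the following pseudo-config; the exact syntax depends entirely on the existing edge component in use, and the prefixes and upstream names here are purely illustrative:

```yaml
# Illustrative route table for the existing (external) edge gateway:
# traffic on selected prefixes is passed through untouched to the
# additional in-cluster gateway; everything else keeps its current path.
routes:
  - prefix: /payments/                          # migrated to Kubernetes
    upstream: k8s-api-gateway.example.internal
    passthrough: true                           # no auth/transformation at the old edge
  - prefix: /                                   # legacy services, unchanged
    upstream: legacy-backend.example.internal
```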
21. Strategy #2: Extend Existing API Gateway
● Implemented by modifying or augmenting the existing API gateway solution
● Synchronize the API endpoints with the location of the Kubernetes services
● Often realized as a custom ingress controller for the existing API gateway or load balancer
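For example, with a custom ingress controller the existing gateway can be driven by standard Ingress resources. This is only a sketch: the `ingressClassName` and annotation key below are hypothetical, and the annotation hints at the limited configuration surface such controllers typically expose:

```yaml
# Sketch: a standard Ingress resource processed by a custom controller
# that synchronizes routes into the existing (out-of-cluster) API gateway.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders
  annotations:
    example.com/rate-limit: "100rps"   # hypothetical: only what annotations expose
spec:
  ingressClassName: existing-gateway   # hypothetical class handled by the controller
  rules:
    - http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 80
```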
22. Strategy #2: Extend Existing API Gateway
Pros
● Reuse the existing tried-and-trusted API gateway
● Leverage existing integrations with on-premises infrastructure and services
● Minimal need to learn Kubernetes networking technologies
Cons
● Workflows must change to preserve a single source of truth for the API gateway configuration
● Only a limited number of configuration parameters can be exposed via Kubernetes annotations
● Additional training may be required for developers
23. Strategy #2: Extend Existing API Gateway
● Shift away from the traditional API/UI-driven configuration model
● Coordinate/standardise modification of routes to services running outside
the Kubernetes cluster
● Review of anticipated edge requirements for microservices is essential.
24. #3: Deploy a Comprehensive Self-Service Edge Stack
● Deploy a Kubernetes-native API gateway with integrated supporting edge components
● Installed in each of the new Kubernetes clusters, replacing the existing edge
● The ops team owns the stack and provides sane defaults
● Dev teams are responsible for configuring the edge stack as part of their normal workflow
25. #3: Deploy a Comprehensive Self-Service Edge Stack
Pros
● Good integration between the new API gateway and the surrounding “cloud native” stack
● Edge management is simplified into a single stack, configured and deployed via the same mechanisms as any other Kubernetes configuration
● Supports cloud native best practices: “single source of truth”, GitOps, etc.
Cons
● Potentially a large architectural shift
● The platform team must learn about new proxy technologies and potentially new edge component technologies
● Changes to the engineering workflow, and increased responsibility for devs
26. #3: Deploy a Comprehensive Self-Service Edge Stack
● Microservice teams are empowered to maintain the edge configuration
specific to each of their microservices
● The edge stack aggregates the distributed configuration into a single
consistent configuration for the edge
● To support the diversity of the edge services, adopt an edge stack that has
been built on a modern L7 proxy (e.g. Envoy Proxy)
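As a concrete sketch, with a Kubernetes-native gateway such as the Ambassador Edge Stack referenced in this deck, each team can declare the routing for its own service as a decentralized resource that the stack aggregates into the overall edge configuration. The service and prefix names below are illustrative:

```yaml
# One team's edge configuration, owned alongside their service and
# aggregated by the edge stack with every other team's Mappings.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: orders-mapping
  namespace: orders
spec:
  prefix: /orders/      # public route for this microservice
  service: orders:8080  # in-cluster Service backing the route
```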
27. Conclusions
Managing the edge of the system has always been complicated
More services with a diversity of architectures increase demands on the edge
Two primary challenges:
1. How to scale the management of the edge
2. How the gateway can support a broad range of requirements
Three patterns at the “cloud native” edge (additional gw, extend gw, edge stack)
We think the future is the self-service edge stack
28. Learn More
Read more about the challenges and strategies discussed today:
● https://www.getambassador.io/resources/challenges-api-gateway-kubernetes/
● https://www.getambassador.io/resources/strategies-managing-apis-edge-kubernetes/
Learn more about the Ambassador Edge Stack (getambassador.io) or try it yourself
at getambassador.io/user-guide/getting-started.
Contact us (getambassador.io/contact) to set up a personalized best practices call to
walk through your architecture with one of our experts.