Pivotal Cloud Foundry + NSX
1. NSX Use-Cases for Pivotal
• Agility: Provision new networks and services without touching the physical infrastructure.
• Repeatability: Automate once, use many times to stand up multiple installations.
• Availability: Built-in NSX HA as well as VMware HA/anti-affinity features can be used.
• Network Services: LB, NAT, centralized routing, and perimeter firewalling available on the same VM appliance.
• Co-existence: Each Pivotal installation can co-exist as a tenant with legacy/other workloads using NSX.
• Security: Edge firewalling, DFW, Security Groups (BOSH integration).
• BOSH integration: Dynamic inclusion of BOSH-provisioned VMs into NSX Security Groups.
• Monitoring Tools & vSphere ecosystem: vRNI and vRealize Operations with the Blue Medora content pack.
2. Network Automation
“I need to carve out networks for my Pivotal foundation.”
Programmatic network provisioning without touching the physical infra.
Define VXLAN logical switches and run Pivotal foundations on overlay networks.
[Diagram: PCF foundation with four logical switches: PCF_Infra, PCF_ERT, PCF_Tiles, and PCF_Services]
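A minimal sketch of what "programmatic provisioning" of these four switches could look like. The REST path and XML payload shape are assumptions based on the NSX-v API, and the transport-zone ID (`vdnscope-1`) is hypothetical; only the switch names come from the slide.

```python
# Sketch: build the API requests that would create the four PCF logical
# switches. Endpoint and payload shape are assumptions (NSX-v style);
# the transport-zone ID is a placeholder.

PCF_SWITCHES = ["PCF_Infra", "PCF_ERT", "PCF_Tiles", "PCF_Services"]

def logical_switch_payload(name: str) -> str:
    """XML body for creating one VXLAN logical switch."""
    return (
        "<virtualWireCreateSpec>"
        f"<name>{name}</name>"
        "<tenantId>pcf</tenantId>"
        "</virtualWireCreateSpec>"
    )

def provision_requests(scope_id: str = "vdnscope-1"):
    """Yield (method, path, body) tuples, one per PCF logical switch."""
    for name in PCF_SWITCHES:
        path = f"/api/2.0/vdn/scopes/{scope_id}/virtualwires"
        yield ("POST", path, logical_switch_payload(name))
```

Automating this once is what makes the topology repeatable: the same four requests stand up the overlay networks for every new foundation.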
3. Network Services: Load Balancing
“I need to front my PCF installation with a highly available, feature-rich load balancer.”
Software load balancer: L4/L7, health checks.
SSL certificate offload.
Built-in high availability.
[Diagram: NSX ESG load-balancing across the PCF Go Router VM pool inside the PCF foundation]
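A sketch of the LB behavior described above: round-robin across the Go Router pool, with failed health checks pulling a member out of rotation. Round-robin is one of several algorithms the ESG supports; the pool-member names are illustrative.

```python
import itertools

# Sketch: ESG-style software LB over the Go Router pool.
# Round-robin selection plus health-check-driven member removal.

class GoRouterPool:
    def __init__(self, members):
        self.health = {m: True for m in members}
        self._ring = itertools.cycle(members)

    def mark_down(self, member):
        """A failed health check removes the member from rotation."""
        self.health[member] = False

    def next_backend(self):
        """Return the next healthy Go Router VM (round-robin)."""
        for _ in range(len(self.health)):
            m = next(self._ring)
            if self.health[m]:
                return m
        raise RuntimeError("no healthy Go Routers in pool")
```

With all three routers healthy, successive requests rotate through the pool; once a health check fails, traffic flows only to the remaining members, which is the "built-in high availability" on the slide.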
4. Network Services: NAT
“Pivotal Elastic Runtime requires a lot of IP addresses. I want to preserve my routable IP space and only expose the CF endpoints which need exposure, using SNAT/DNAT.”
Programmatic network provisioning of additional PCF foundations using overlapping IP space.
Use of non-routed networks with DNAT/SNAT to limit exposure to CF endpoints.
[Diagram: ESG deployed in HA mode in front of the PCF foundation, providing edge load balancing, perimeter firewall, NAT, and VPN]
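A sketch of the DNAT/SNAT idea: only explicitly published endpoints are reachable from outside, and all egress shares one routable address. Every IP address and endpoint name here is illustrative, not from a real deployment.

```python
import ipaddress

# Sketch: DNAT exposes only selected CF endpoints; SNAT hides the
# non-routed foundation address space behind one routable IP.
# All addresses below are made up for illustration.

DNAT_RULES = [
    {"ext": ("10.0.0.10", 443), "int": ("192.168.10.4", 443)},  # Go Router VIP
    {"ext": ("10.0.0.11", 443), "int": ("192.168.10.2", 443)},  # Ops Manager
]
SNAT_RULE = {"internal_cidr": "192.168.0.0/16", "translated_ip": "10.0.0.10"}

def translate_inbound(dst_ip, dst_port):
    """DNAT: only explicitly published endpoints are reachable."""
    for rule in DNAT_RULES:
        if rule["ext"] == (dst_ip, dst_port):
            return rule["int"]
    return None  # everything else stays unreachable from outside

def translate_outbound(src_ip):
    """SNAT: all egress from the foundation shares one routable address."""
    net = ipaddress.ip_network(SNAT_RULE["internal_cidr"])
    if ipaddress.ip_address(src_ip) in net:
        return SNAT_RULE["translated_ip"]
    return src_ip
```

Because the outside world only ever sees the translated addresses, two foundations can reuse the same 192.168.0.0/16 space behind their own edges, which is the overlapping-IP point on the slide.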
5. Security: Edge Firewall
“I would like to use NSX’s perimeter firewall capabilities to protect ingress into my PCF installation.”
Allow ingress -> Ops Manager: 80/443/25555/22
Allow ingress -> Elastic Runtime: 80/443/22
Allow egress -> DNS, LDAP, Syslog: 53/389/636
[Diagram: NSX ESG firewalling traffic to the PCF Go Router VM pool]
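The rule set above can be sketched as an ordered allow-list with a default deny. The rule model and destination names are simplified for illustration; the ports are the ones listed on the slide.

```python
# Sketch: the slide's edge-firewall policy as data, with default deny.
# Destination labels ("ops-manager", etc.) are illustrative names.

RULES = [
    {"dir": "ingress", "dst": "ops-manager",     "ports": {80, 443, 25555, 22}},
    {"dir": "ingress", "dst": "elastic-runtime", "ports": {80, 443, 22}},
    {"dir": "egress",  "dst": "dns-ldap-syslog", "ports": {53, 389, 636}},
]

def allowed(direction: str, dst: str, port: int) -> bool:
    """Any matching allow rule permits the flow; unmatched traffic is denied."""
    return any(
        r["dir"] == direction and r["dst"] == dst and port in r["ports"]
        for r in RULES
    )
```

Note that 25555 (the BOSH Director port) is only open toward Ops Manager, not Elastic Runtime, exactly as the slide's two ingress lines differ.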
6. Network Services: Routing
“Distributed Routing can be used to optimize E-W traffic.”
“N/S routing from the ESG northbound.”
App-to-app traffic trombones through the LB and is always N-S; the DLR can be used to optimize E-W traffic.
Routing can be enabled for N-S traffic, with the ESG deployed in HA mode providing LB, edge firewall, and N/S routing.
[Diagram: PCF foundation with PCF_Infra, PCF_ERT, PCF_Tiles, and PCF_Services logical switches; the ESG provides VPN and connectivity to the external network]
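A minimal sketch of why the DLR shortens east-west paths: without it, app-to-app traffic between logical switches hairpins up to the edge appliance and back, whereas the DLR routes in the hypervisor. Hop names here are illustrative, not actual NSX components.

```python
# Sketch: path taken by an app-to-app packet crossing logical switches,
# with and without distributed routing. Hop names are illustrative.

def east_west_hops(use_dlr: bool):
    """Return the hops an E-W packet traverses between two logical switches."""
    if use_dlr:
        # Routed in the hypervisor kernel on the source host.
        return ["source VM", "DLR (source host kernel)", "destination VM"]
    # Trombones: up to the edge and back down, i.e. N-S for every E-W flow.
    return ["source VM", "uplink to ESG", "ESG/LB appliance",
            "downlink from ESG", "destination VM"]
```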
7. Co-existence with Legacy Workloads: 2-Tier NSX + PCF Design
Each Pivotal installation is a tenant in the existing DC, with NAT (overlapping IP addresses).
A tenant ESG (active/standby) per PCF foundation connects to the second tier of provider ECMP edges.
Each tenant ESG is deployed in HA mode and provides LB, NAT, edge firewall, N/S routing, and VPN.
[Diagram: PCF Dev and PCF Prod foundations and non-PCF tenants, each behind its own tenant ESG, connected over a transit logical switch to ECMP NSX edges E1-E4 and the physical network]
8. Co-existence with Legacy Workloads: Routed Topology
Each Pivotal installation is a tenant in the existing DC, using a routed topology (no overlapping IP addresses).
A tenant ESG (active/standby) per PCF foundation connects to the second tier of provider ECMP edges.
Each tenant ESG is deployed in HA mode and provides LB, edge firewall, N/S routing, and VPN.
[Diagram: PCF Dev and PCF Prod foundations and non-PCF tenants, each behind its own tenant ESG, connected over a transit logical switch to ECMP NSX edges E1-E4 and the physical network]
9. Security Tools
Use vRealize Network Insight or NSX Application Rule Manager to understand E-W traffic flows within the PCF installation.
Use the edge firewall to secure any ingress/egress to the PCF installation.
Use the DFW and dynamic member inclusion to secure the elastic PCF environment.
10. NSX Application Rule Manager: Flow Analysis
Example flow: a Diego Cell accessing the load balancer VIP on port 443.
15. Cloud Foundry Networking Recap: Inbound Access to an App
The Edge Services Gateway holds two wildcard DNS domains:
*.pcf-apps.corp.local -> app domain
*.pcf-sys.corp.local -> system domain
A request for web-app.pcf-apps.corp.local is load-balanced across the PCF Go Router pool (Go Router 1, 2, 3). The Go Router's port mapping forwards it to 172.16.90.18:60012 (App A, port 60012) on the Diego Cell VM (IP 172.16.90.18/24), which hosts the web-app container behind a guest vSwitch (192.168.100.100).
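The inbound chain on this slide can be sketched as two lookups: a wildcard-domain match at the LB and a route-table lookup at the Go Router. The example values mirror the slide (web-app.pcf-apps.corp.local -> 172.16.90.18:60012); the lookup logic itself is a simplification of what the Go Router actually does.

```python
# Sketch: wildcard-domain classification plus Go Router port mapping.
# Values taken from the slide; logic is simplified for illustration.

WILDCARD_DOMAINS = {"pcf-apps.corp.local": "app", "pcf-sys.corp.local": "system"}

ROUTES = {
    # hostname -> (Diego Cell VM IP, host port NATted to the container)
    "web-app.pcf-apps.corp.local": ("172.16.90.18", 60012),
}

def classify(host: str):
    """Which wildcard DNS domain (app vs system) does this host match?"""
    for domain, kind in WILDCARD_DOMAINS.items():
        if host.endswith("." + domain):
            return kind
    return None

def route(host: str):
    """Go Router port mapping: hostname -> Diego Cell VM IP and port."""
    return ROUTES.get(host)
```

The container's own IP (192.168.100.100 behind the guest vSwitch) never appears in these tables: it is only reachable via the VM IP/port mapping, which is the NAT point made in the editor's notes below.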
Editor's Notes
Infra = Ops Manager/Director
Services = Brokers/Service Nodes
Deployment = Elastic Runtime
NATS = only accessed within the Foundry
We can deploy the ESG(s) in HA mode to provide LB functionality to the Go Router pool.
We can do SSL termination at the edge LB.
Each topology can be deployed in minutes in a repeatable fashion
-From vCenter, create three clusters. Pivotal recommends vSphere DVS (distributed virtual switching) for all clusters used by PCF.
-Populate each cluster with two VMware Resource Pools. Enable VMware Distributed Resource Scheduler (DRS) for each Resource Pool, so vMotion can automatically migrate VMs to avoid downtime.
-For hosting capacity, populate each cluster with three ESXi hosts, making nine hosts for each installation. All installations collectively draw from the same nine hosts.
-In one PCF deployment, use Ops Manager to create three Availability Zones (AZs), each corresponding to one of the Resource Pools from each cluster.
-In the other PCF deployment, create an AZ for each of the three remaining Resource Pools.
-For storage, add dedicated datastores to each PCF deployment following one of the two approaches, vertical or horizontal, as described below:
Horizontal: You grant all hosts access to all datastores, and assign a subset to each installation. For example, with 6 datastores ds01 through ds06, you grant all nine hosts access to all six datastores, then provision PCF installation #1 to use stores ds01 through ds03, and installation #2 to use ds04 through ds06. Installation #1 will use ds01 until it is full, then ds02, and so on.
Vertical: You grant each host cluster its own dedicated datastores, giving each installation multiple datastores based on their host cluster. vSphere VSAN storage requires this architecture. With 6 datastores ds01 through ds06, for example, you assign datastores ds01 and ds02 to cluster 1, ds03 and ds04 to cluster 2, and ds05 and ds06 to cluster 3. Then you provision PCF installation #1 to use ds01, ds03 and ds05, and installation #2 to use ds02, ds04 and ds06. With this arrangement, all VMs in the same installation and cluster share a dedicated datastore.
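The two datastore layouts above can be sketched as assignment functions. Datastore names and counts follow the six-datastore example in the text; the slicing logic is an illustration of the described pattern, not a Pivotal tool.

```python
# Sketch: horizontal vs vertical datastore assignment for two PCF
# installations across three clusters, per the six-datastore example.

DATASTORES = ["ds01", "ds02", "ds03", "ds04", "ds05", "ds06"]

def horizontal(installations: int = 2):
    """Contiguous slice of datastores per installation (all hosts see all)."""
    per = len(DATASTORES) // installations
    return {i + 1: DATASTORES[i * per:(i + 1) * per] for i in range(installations)}

def vertical(installations: int = 2, clusters: int = 3):
    """One datastore per cluster per installation (cluster-dedicated stores)."""
    per_cluster = len(DATASTORES) // clusters
    return {
        i + 1: [DATASTORES[c * per_cluster + i] for c in range(clusters)]
        for i in range(installations)
    }
```

Horizontal yields ds01-ds03 for installation #1 and ds04-ds06 for #2; vertical yields ds01/ds03/ds05 and ds02/ds04/ds06, matching the text's examples.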
Supply core networking for each deployment by configuring an NSX Edge with the following subnets.
It is best practice to deploy a load balancer in front of the CF router pool for load-balancing.
You can use HAProxy (automatically configurable from the deployment manifest).
Pivotal recommends using a third-party load balancer in production environments to load-balance requests to the Go Router pool.
Let's go through how networking works in CF today. An app user tries to access an app using its URL.
The LB has two wildcard DNS domains defined, one for sys and one for app, so all traffic entering the Foundry hits the VIP of the LB and is load-balanced to the Go Routers.
The Go Routers map each app to a particular VM IP address/port number and route the traffic to the VM hosting the app instance container. Internally, each container has a private IP which is not exposed to the outside world but is NATted out.