Building a secure multi-tenant cloud necessitates proper tenant isolation and access control. Key network and security functions must scale independently based on the dynamic resource requirements of each tenant. Additionally, on-demand and self-service provisioning are required to achieve operational efficiencies. Robust, dynamic, and elastic software abstractions are imperative to support applications built to run in such complex environments.
This slide deck covers the:
• Architectural design choices
• Implementation blueprints
• Operational best practices
used to build the OpenStack cloud at Symantec.
1. ARCHITECTING AND BUILDING A
SECURE MULTI-TENANT CLOUD
FOR SAAS APPLICATIONS
Dilip Sundarraj
Cloud Solutions Architect, Juniper Networks
April 8th, 2015
3. What is Network Virtualization?
• Independent of Physical Network Location or State
• Logical network across any server, any rack, any cluster, any data center
• Virtual machines can migrate without requiring any reworking of security policies, load balancing, etc.
• New workloads or networks should not require provisioning of the physical network
• Nodes in the physical network can fail without any disruption to workloads
• Full Isolation for Multi-Tenancy and Fault Tolerance
• MAC and IP addresses are completely private per tenant
• Failures or configuration errors by one tenant do not affect other applications or tenants
• Failures in the virtual layer do not propagate to the physical layer
4. OpenContrail
• Open-source Network Virtualization Platform for the Cloud
• Primary Use Cases:
• Cloud Networking
– IaaS, VPCs for Cloud SP, Private Cloud for Enterprises or SPs
• NFV in SP networks
– Value added services for SP edge networks.
6. Contrail Architecture
[Diagram: Orchestrator and Contrail Controller (Configuration, Control, Analytics) above a physical IP network (no changes) connecting x86 hosts + hypervisors running vRouters, plus a gateway to the Internet/WAN and to legacy infrastructure (VLAN, etc.)]
• Configuration: accepts orchestrator requests for VM creation, translates the requests, and creates the network
• Control: bi-directional real-time message bus to the vRouters using XMPP; interacts with network elements for VM network provisioning and ensures uptime; uses a standard protocol (MP-BGP) to talk with other Contrail controller instances
• Analytics: real-time analytics engine that collects, stores, and analyzes network element data
• Orchestrator: network orchestration plus compute/storage orchestration
• vRouter: virtualized routing element that handles the localized control plane and forwarding plane work on the compute node
• Gateway: MX Series (or other router) serves as gateway, improving scale and performance
8. OpenStack Integration
[Diagram: Horizon drives the Nova API; Nova Scheduler places the instance; Nova Compute (Compute Driver, Virtual-IF Driver) talks to the Contrail Agent and the kernel vRouter on the compute node; the Neutron Driver and Neutron Plugin feed the Configuration Node, which feeds the Control Node.]
1. Create an instance (VM info, network, IPAM, policies, etc.)
2. Schedule the instance on a compute node
3. Pass the VM network properties to the Neutron plugin
4. Create the VM interface
5. Add the port
6. Publish the VM interface on IF-MAP
7. Push the VM interface config to the vRouter agent over XMPP
9. OpenContrail – Control Node
• All Control Plane nodes are active-active
• Each vRouter uses XMPP to connect to multiple Control Plane nodes for redundancy
• Each Control Plane node connects to multiple Configuration nodes for redundancy
• Control Plane nodes federate using BGP
[Diagram: Control Nodes peer with each other over iBGP, connect to compute nodes via XMPP (each Control Node's "BGP module" proxies XMPP), connect to the Configuration Nodes via IF-MAP (as IF-MAP clients), and connect to gateway routers and service nodes.]
11. Compute Node – Hypervisor, vRouter
[Diagram: A compute node hosting VMs for tenants A, B, and C. Each VM attaches through a tap interface (vif) to the vRouter forwarding plane in the kernel, which holds a per-tenant Routing Instance with its own FIB and flow table. The user-space vRouter agent (reached via the pkt0 interface) holds the config, VRFs, and policy table, and speaks XMPP to the JunosV Contrail Controller. Overlay tunnels (MPLS over GRE or VXLAN) leave via the physical interfaces (eth0..ethN) toward the top-of-rack switch.]
12. Compute Node – Forwarding/Tunneling
[Diagram: Two compute nodes (physical addresses Phy-IP1 and Phy-IP2) joined by overlay tunnels (MPLS over GRE or VXLAN). Each hosts a VM (VN-IP1, VN-IP2) attached via a tap interface (vif) to a Routing Instance with its flow table and FIB in the vRouter forwarding plane. In the physical underlay, the virtual packet (Virtual-IP2 + payload) is carried with an MPLS/VNI tag inside an outer header addressed to Phy-IP2.]
1. The guest OS ARPs for a destination within its subnet, or for the default gateway
2. The vRouter receives the ARP and responds with a VRRP MAC
3. The guest OS sends traffic to the VRRP MAC; the vRouter encapsulates the packet with the appropriate MPLS/VNI tag and a GRE header
4. The physical fabric routes on the physical IP address
5. Returning packets are forwarded to the appropriate Routing Instance based on the MPLS/VNI tag
6. The vRouter decapsulates the packet and forwards it to the guest OS
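The encapsulation and decapsulation steps above can be sketched in a few lines. This is an illustrative model only — the field names, label values, and dict-based "headers" are made up for clarity and are not the actual vRouter data structures.

```python
# Sketch of MPLS-over-GRE overlay encapsulation (illustrative, not vRouter code).

def encapsulate(payload: bytes, mpls_label: int, src_phy_ip: str, dst_phy_ip: str) -> dict:
    """Wrap a tenant packet in an MPLS label and an outer GRE/IP header."""
    return {
        "outer_ip": {"src": src_phy_ip, "dst": dst_phy_ip},  # underlay header (Phy-IPs)
        "gre_proto": 0x8847,       # GRE protocol type for MPLS unicast
        "mpls_label": mpls_label,  # selects the routing instance at the far end
        "payload": payload,        # original virtual packet, untouched
    }

def decapsulate(frame: dict, label_to_vrf: dict):
    """Use the MPLS label to pick the routing instance, then hand off the payload."""
    vrf = label_to_vrf[frame["mpls_label"]]
    return vrf, frame["payload"]
```

The key point the sketch captures is that the underlay only ever routes on the outer physical IPs; the MPLS/VNI tag is what demultiplexes the packet into the right per-tenant routing instance on arrival.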
14. DNSaaS
Contrail offers four different DNS modes:
• Default DNS server
• The host OS’s configured DNS server
• Tenant DNS server
• Tenants can use their own DNS servers (different from host OS’s DNS server)
• Virtual DNS server
• Contrail Controller provides a per tenant DNS server
• None
• VMs don’t have any DNS resolution capability
One of these modes is selected when an IPAM instance is created for a domain.
15. Contrail Virtual DNS
DNS Record Creation
• Each IPAM is configured with a virtual DNS server
• Virtual networks and VMs in the IPAM use the DNS domain of the virtual DNS server specified in the IPAM
• When a VM is spawned, A and PTR records are added to the vDNS server of the virtual network's IPAM
NOTE:
• DNS Records can also be added statically.
• A, CNAME, PTR and NS records are also supported.
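The A/PTR record creation described above can be sketched as follows. The helper name and hostname are hypothetical; the record shape (type, class-implied name, data, TTL) mirrors the slide's description, and the TTL default is an assumption.

```python
# Sketch: build the forward (A) and reverse (PTR) records for a newly spawned VM.
import ipaddress

def vm_dns_records(hostname: str, domain: str, ip: str, ttl: int = 86400):
    """Return the (type, name, data, ttl) tuples a vDNS server would gain."""
    fqdn = f"{hostname}.{domain}."
    # reverse_pointer turns "6.6.6.10" into "10.6.6.6.in-addr.arpa"
    ptr_name = ipaddress.ip_address(ip).reverse_pointer + "."
    return [
        ("A", fqdn, ip, ttl),          # forward lookup
        ("PTR", ptr_name, fqdn, ttl),  # reverse lookup
    ]

records = vm_dns_records("vm1", "contrail.us", "6.6.6.10")
# The PTR name "10.6.6.6.in-addr.arpa." lands in the "6.6.6.in-addr.arpa." zone
# shown in the BIND view on the next slide.
```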
16. Contrail Virtual DNS
DNS Resolution:
1. DNS requests from the VM are trapped by the vRouter agent on the hypervisor
2. The vRouter agent then forwards the DNS request to the controllers (which run BIND) for resolution
3. BIND has the concept of views, and every virtual DNS instance has its own isolated view
view "default-domain-contrailtestdns" {
    rrset-order { order random; };
    forwarders { 172.16.70.254; };
    zone "6.6.6.in-addr.arpa." IN {
        type master;
        file "/etc/contrail/dns/default-domain-contrailtestdns.6.6.6.in-addr.arpa.zone";
        allow-update { 127.0.0.1; };
    };
    zone "contrail.us" IN {
        type master;
        file "/etc/contrail/dns/default-domain-contrailtestdns.contrail.us.zone";
        allow-update { 127.0.0.1; };
    };
};
17. DNS & IPAM Relationship
• Neutron network maps to Contrail Virtual Network
• network-ipam & virtual-DNS are Contrail-specific constructs
• virtual-DNS object has domain as parent
• network-ipam has project as parent.
• So:
• virtual-network ==refers-to==> network-ipam ==refers-to==> virtual-DNS
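The parent/reference relationships above can be sketched as a tiny object model. This is illustrative only — these are not the Contrail API schema classes, and the project name "admin" is a made-up example.

```python
# Sketch of the DNS & IPAM object relationships (not the Contrail schema).
from dataclasses import dataclass

@dataclass
class VirtualDNS:
    name: str
    parent_domain: str        # virtual-DNS objects live under a domain

@dataclass
class NetworkIpam:
    name: str
    parent_project: str       # network-ipam objects live under a project
    virtual_dns: VirtualDNS   # network-ipam ==refers-to==> virtual-DNS

@dataclass
class VirtualNetwork:
    name: str
    ipam: NetworkIpam         # virtual-network ==refers-to==> network-ipam

vdns = VirtualDNS("default-virtual-DNS", parent_domain="default-domain")
ipam = NetworkIpam("default-network-ipam", parent_project="admin", virtual_dns=vdns)
vn = VirtualNetwork("web-tier", ipam=ipam)
# The VN's effective DNS settings are reached via: vn.ipam.virtual_dns
```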
18. Contrail Virtual DNS @SYMC
• By default, the Contrail API server creates a default-network-ipam object under the default-domain -> default-project hierarchy
• However, the Contrail API hooks mechanism is used to automatically:
• Create a default-network-ipam object within a newly created project
• Create a default-virtual-DNS object within a newly created domain
• Link them to provide vDNS functionality
• So, when a new virtual-network is created, it is automatically linked to the project-specific default-network-ipam and the corresponding virtual DNS object
19. Floating IPs
• Neutron supports the concept of floating IP (routable IP).
• Instances are unaware of their Floating IP.
• Every Virtual Network -> Routing Instance
• Routing Instances
• Define network connectivity between VMs in the Virtual Network
• Contain routes only for VMs in the virtual network
• Two Routing Instances (Virtual Networks) can be connected using
• Neutron L3 agent
• Contrail Network Policy (explained later)
• By default, Virtual Networks do not have access to a "public" (routable) network
• A Gateway must be used to provide connectivity to a "public" network from a virtual-network
• Floating IP support can be provided with
• Simple Gateway – x86 based Software GW
• Routing Device such as Juniper MX
20. Floating IP using Neutron L3 Router
• Create an external network
• neutron net-create public --router:external True
• Create a router
• neutron router-create router1
• Add interfaces from the virtual network to this router
• neutron router-interface-add router1 SUBNET1_UUID
• Set the gateway on the router instance
• Connects the router to an external network, which enables it to act as a NAT gateway for external connectivity
• neutron router-gateway-set router1 EXT_NET_ID
21. Multiple Floating IPs per VM @SYMC
[Diagram: Two leaf-spine data centers — Mountain View and Boston — each with spine and leaf switches connecting compute nodes (plus bare-metal servers, BMS, in Mountain View). Each site's MX router holds a Public VRF and an Intra-site VRF; an intra-site VPN links the two DCs, and each MX router also connects to the Internet.]
VMs in the MTV DC get:
1. An Internet-routable Floating IP
2. An IP routable from the Boston DC
22. LBaaS
• LBaaS enables a pool of VMs servicing an application to be accessible via a virtual IP
• Contrail LBaaS features:
• Load balancing of traffic from clients to a pool of backend servers; the load balancer proxies all connections to its virtual IP
• Load balancing for HTTP, HTTPS, and TCP
• Health monitoring capabilities for applications
• Floating IP association to the virtual IP for public access to the backend pool
24. Contrail LBaaS Implementation
• Supports the OpenStack LBaaS Neutron APIs
• Creation of virtual-ip, loadbalancer-pool, loadbalancer-member, and loadbalancer-healthmonitor objects
• Creates a Service Instance when a loadbalancer-pool is associated with a virtual-ip object
• The service scheduler launches a namespace and spawns HAProxy in it
• HAProxy parameters are obtained from the load balancer objects
• HA of the namespaces/HAProxy: active/standby on two different compute nodes
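A sketch of how HAProxy parameters might be derived from the load balancer objects: the function below renders a minimal, illustrative frontend/backend config from a virtual IP and a member list. The template is hypothetical — it is not the configuration Contrail actually emits — but `check` shows where the slide's health monitoring hooks in.

```python
# Sketch: derive a minimal HAProxy config from LBaaS-style objects (illustrative).

def haproxy_config(vip: str, vip_port: int, members: list, protocol: str = "tcp") -> str:
    """Render a frontend bound to the virtual IP and a backend of pool members."""
    lines = [
        "frontend vip",
        f"    bind {vip}:{vip_port}",       # the LBaaS virtual-ip
        f"    mode {protocol}",
        "    default_backend pool",
        "backend pool",
        f"    mode {protocol}",
        "    balance roundrobin",
    ]
    for i, (ip, port) in enumerate(members):
        # 'check' enables the health monitoring the slide mentions
        lines.append(f"    server member{i} {ip}:{port} check")
    return "\n".join(lines)

cfg = haproxy_config("10.1.1.100", 80, [("10.1.1.11", 8080), ("10.1.1.12", 8080)])
```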
25. Link Local Services
Provides VMs access to specific services on the IP Fabric infrastructure.
• @SYMC: Keystone, GitHub, NTP, logging, monitoring, and metering services
Once a link local service is configured, VMs can access the service using its link local address.
• The OpenStack Metadata Service on 169.254.169.254:80 is also implemented using a Link Local Service
(169.254.169.XXX, Service port) <-> (Destination IP, Service TCP/UDP port)
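The mapping on the last line can be sketched as a simple translation table: the vRouter rewrites a (link-local IP, port) destination seen from the VM into a (fabric IP, port) on the infrastructure. The fabric addresses and the NTP entry below are made up for illustration; 8775 is the default Nova metadata API port.

```python
# Sketch of the link-local service translation (illustrative addresses).

LINK_LOCAL_SERVICES = {
    ("169.254.169.254", 80): ("10.84.5.1", 8775),  # e.g. OpenStack metadata (Nova API)
    ("169.254.169.10", 123): ("10.84.5.20", 123),  # e.g. NTP on the fabric (hypothetical)
}

def translate(dst_ip: str, dst_port: int):
    """Return the fabric (IP, port) a link-local destination is NATed to, or None."""
    return LINK_LOCAL_SERVICES.get((dst_ip, dst_port))
```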
26. Contrail Network Policy
• Controls connectivity and enforces policy between Virtual Networks
• Follows the 5-tuple semantics
• SRC/DST Virtual Network, SRC/DST Port, Protocol
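The 5-tuple match above can be sketched as a small predicate. The rule format is illustrative, not Contrail's policy schema; "any" acts as a wildcard, and the network and port values are made-up examples.

```python
# Sketch of a 5-tuple policy match: src/dst virtual network, src/dst port, protocol.

def matches(rule: dict, flow: dict) -> bool:
    """True when every field of the rule is a wildcard or equals the flow's field."""
    return all(rule[k] in ("any", flow[k])
               for k in ("src_vn", "dst_vn", "src_port", "dst_port", "proto"))

rule = {"src_vn": "web", "dst_vn": "db", "src_port": "any",
        "dst_port": 3306, "proto": "tcp"}
flow = {"src_vn": "web", "dst_vn": "db", "src_port": 40522,
        "dst_port": 3306, "proto": "tcp"}
# matches(rule, flow) → True: web-to-db traffic on tcp/3306 is allowed
```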
27. Contrail Network Policy
• Connectivity between two Virtual Networks is established by leaking routes between their Routing Instances when a network policy interconnecting the two VNs is created
• Policy is enforced for specific traffic types by flow table programming in every vRouter that hosts the relevant Virtual Networks
[Diagram: The same compute-node view as slide 11 — per-tenant VMs, Routing Instances with FIBs and flow tables in the vRouter forwarding plane, and the vRouter agent's policy table, which programs the flow tables that enforce the policy.]
28. Environments & Operations
Environments
• Lab: > 10 nodes
• CI/CD test environment for SDN related features and functions
• Staging: > 50 nodes
• True IaaS for PaaS applications
• Production: > 250 nodes
• PaaS for end-user applications
Operations:
• Monitoring & Troubleshooting
• Contrail Analytics feeds into OpsView & LMM
• Upgrade
• Phased upgrades during maintenance windows without application downtime.
30. DEVSTACK + OPENCONTRAIL
• WHAT?
• Run OpenStack and OpenContrail on your laptop or in a VM
• WHY?
• Use to build & test OpenStack and OpenContrail code
• Just play with OpenStack/OpenContrail features
• HOW?
• Ubuntu server/VM with 4 GB RAM and access to GitHub
Tenants can use their own DNS servers using this mode. A list of servers can be configured in the IPAM. The DNS domain is received via the DHCP DOMAIN-NAME option.
Each record takes the type (A / CNAME / PTR / NS), class (IN), name, data and TTL values.
While the core network resource in Neutron maps to virtual-network in Contrail, network-ipam and virtual-DNS are resources introduced by Contrail. network-ipam is also defined as a Neutron extension and can be used via the Neutron API, as Horizon does here. virtual-DNS will also be added as a Neutron extension in the future.
Simple Gateway is a restricted implementation of gateway which can be used for experimental purposes. Simple gateway provides access to "public" network to virtual-networks.
Metadata service is also a link-local service, with a fixed service name (metadata), a fixed service address (169.254.169.254:80), and a fabric address pointing to the server where the OpenStack Nova API server is running. All of the configuration and troubleshooting procedures for Contrail link-local services also apply to the metadata service.
However, for the metadata service, the flow is always set up to the compute node, so the vRouter agent updates and proxies the HTTP request. The vRouter agent listens on a local port to receive metadata requests. Consequently, the reverse flow has the compute node as the source IP, the agent's local listening port as the source port, and the instance's metadata IP as the destination IP address.