Tungsten Fabric provides a network fabric connecting all environments and clouds. It aims to be the most ubiquitous, easy-to-use, scalable, secure, and cloud-grade SDN stack. It has received 200-300 developer contributions and has roughly 100 active developers. Recent improvements include better support for microservices, containers, ingress/egress network policies, and load balancing. It provides consistent security and networking across VMs, containers, and bare metal.
2. MISSION
Build the world’s most ubiquitous, easy-to-use, scalable, secure, and cloud-grade SDN stack, providing a network
fabric connecting all environments, all clouds, all people.
4. CODE
• 2013-Today: >300 person-years of work
• 200-300 developer contributions
• ~100 active developers
• Languages: C++, Python, Node, Go
• Apache 2.0 license
• GitHub repositories
• Gerrit review processes
• Launchpad bug tracking and blueprints
• Other OSS used: Cassandra, Kafka, HAProxy, Docker, Keystone
5. COMMUNITY
Principles:
• Open and inclusive
• Provide strong technical and architectural oversight
• Competitive ideas welcome
• Rough consensus and running code will always win
• Iterate and evolve
6. COMMUNITY
• Online:
  • Downloads and trial sandbox
  • Talk with 900+ people: Slack, mailing lists
  • Follow: blog, YouTube, Facebook, Twitter
  • GitHub: presentations, tutorials
• Live (see calendar):
  • Conferences: OpenStack, KubeCon, ONS, re:Invent, and Google Cloud Next
  • Meetups: host your own or join some
  • User Group events: often at conferences
  • Governance summits
• Groups: Governance, Technical, Infrastructure
• Community manager: Greg Elkinbard
JOIN
• tungsten.io/slack
• tungsten.io/community
10. Visualizing Tungsten Fabric’s Operational Effects
[Diagram: three virtual networks — GREEN (G1-G3), BLUE (B1-B3), and YELLOW (Y1-Y3) — drawn from a pool of VMs and virtualized network functions on hosts with hypervisors, overlaid on an IP fabric (switch underlay). Logical policy definitions map to physical policy enforcement: a TF security policy (e.g. allow only HTTP traffic) uses security groups to block non-HTTP traffic; intra-network traffic flows directly, while inter-network traffic traverses a service chain policy with a firewall VNF.]
11. Seamless Multi-Cloud Overlay SDN
Multicloud overlay SDN spans users, telco POPs, private cloud DCs, and public cloud VPCs.
• Virtual Networking: overlay virtual networking provides connectivity for VMs and containers
• Distributed Compute Platforms: leverage the right balance of edge compute, private cloud compute, and public cloud compute to deploy services
• Ubiquitous Security: centralized security policy orchestration with distributed enforcement across multiple clouds
• Performance and Scale: manage remote compute resources, high-performance virtual network functions, and containers using the same tools
12. ARCHITECTURE OVERVIEW
[Diagram: the TF controller (API & GUI) runs as scale-out control and management container microservices. It integrates with orchestration nodes through orchestration plug-ins over REST, and programs the TF vRouter on each compute node over XMPP. The vRouters build virtual overlay networks for the compute runtimes on top of an Ethernet/IP underlay network; networks are isolated unless connected with policy.]
13. USER EXPERIENCE
NORTH-BOUND API / GUI
• REST API
• HTTPS authentication and role-based authorization
• Used for the GUI
• Used for declarative configuration as code
• Generated from the data model
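As a minimal sketch of "declarative configuration as code" against the north-bound REST API: build the resource body in code, then POST it. The port, resource path, and field names here are assumptions based on common Contrail-era deployments; check your release's API reference before using them.

```python
import json

# Assumed endpoint for the config API; adjust host/port for your deployment.
API_BASE = "http://controller.example.com:8082"

def make_virtual_network(name, project="admin"):
    """Build the JSON body for a (hypothetical) virtual-network create call."""
    return {
        "virtual-network": {
            "fq_name": ["default-domain", project, name],
            "parent_type": "project",
        }
    }

payload = make_virtual_network("blue-net")
# To apply it you would POST with an authenticated session, e.g.:
#   requests.post(f"{API_BASE}/virtual-networks",
#                 headers={"X-Auth-Token": token}, json=payload)
print(json.dumps(payload, indent=2))
```

Because the same payload can live in version control, review and rollback of network config follow the same workflow as application code.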
14. VROUTER DEPLOYMENT MODELS
[Diagram: in each model, a vRouter agent on the compute node connects a pool of VMs/VNFs.]

KERNEL VROUTER
§ This is the normal mode of operation: the forwarding plane of the vRouter runs in the kernel and connects to VMs using TAP interfaces (or veth pairs for containers)
§ The vRouter itself is enhanced with other performance-related features:
  o TSO / LRO
  o Multi-queue virtio

DPDK VROUTER
§ vRouter runs as a user-space process and uses DPDK for fast-path packet I/O
§ Full set of SDN capabilities supported
§ Requires the VMs to have DPDK enabled for performance benefits

SRIOV / VROUTER COEXISTENCE
§ Some workloads can SR-IOV directly into the NIC, while others go through the vRouter
§ A VNF can have multiple interfaces, some of which are SR-IOV-connected to the NIC
§ Interfaces that are SR-IOV-connected to the NIC don’t get the benefits/features of the vRouter

SMARTNIC VROUTER
§ The vRouter forwarding plane runs within the NIC
§ Workloads are SR-IOV-connected to the NIC
15. CONTAINERIZED WORKLOADS
kube-manager listens to the K8s API server and conveys API requests to the TF Controller.
[Diagram: the K8s and Contrail controller nodes run the API server, scheduler, and replication controller, driven by kubectl user commands, with kube-manager alongside the TF Controller. On each compute node, the CNI plugin wires pods into the vRouter, which replaces kube-proxy, and the kubelet manages the pods.]
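The translation role described above can be sketched as an event loop: watch Kubernetes API events and turn them into controller requests. The event shape and `ControllerClient` below are simplified stand-ins, not the real kube-manager code.

```python
class ControllerClient:
    """Stand-in for the TF controller's REST client."""
    def __init__(self):
        self.requests = []

    def create_port(self, pod_name, namespace):
        # In the real system this would create a virtual interface for the
        # pod via the controller API; here we just record the intent.
        self.requests.append(("create_port", namespace, pod_name))

def handle_k8s_event(event, controller):
    """Translate a (simplified) K8s watch event into a controller call."""
    if event["kind"] == "Pod" and event["type"] == "ADDED":
        meta = event["object"]["metadata"]
        controller.create_port(meta["name"], meta["namespace"])

controller = ControllerClient()
handle_k8s_event(
    {"kind": "Pod", "type": "ADDED",
     "object": {"metadata": {"name": "pod-1", "namespace": "default"}}},
    controller,
)
print(controller.requests)  # [('create_port', 'default', 'pod-1')]
```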
16. DIFFERENT LEVELS OF ISOLATION
[Diagram: pods and services grouped into namespaces A-F, illustrating the three modes below.]

DEFAULT CLUSTER MODE
§ This is how Kubernetes networking works today
§ Flat subnet where any workload can talk to any other workload

NAMESPACE ISOLATION
§ In addition to the default cluster, the operator can add isolation to different namespaces, transparent to the developer

POD / SERVICE ISOLATION
§ In this mode, each pod is isolated from the others

§ Note that all three modes can co-exist
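As a sketch of how an operator might request namespace isolation: Contrail-era releases use a namespace annotation for this, built here as a plain manifest dict. Treat the exact annotation key as an assumption and verify it against your release's documentation.

```python
def isolated_namespace(name):
    """Build a Kubernetes Namespace manifest with TF isolation requested.

    The "opencontrail.org/isolation" annotation key is an assumption based
    on Contrail-era documentation; confirm it for your release.
    """
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": name,
            "annotations": {"opencontrail.org/isolation": "true"},
        },
    }

ns = isolated_namespace("team-a")
print(ns["metadata"]["annotations"])
```

Because the annotation lives on the namespace, developers deploying into it need no changes — matching the "transparent to the developer" point above.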
17. The Latest from Tungsten Fabric
House Keeping:
Ø Microservices architecture
Ø Better cloud-native deployment options
Container SDN:
Ø Comprehensive support for Network objects
Ø Ingress/Egress Network Policy
Ø High-performance load balancing
VM’s and NFV:
Ø Improved flow performance and management
Ø SDN for Edge Compute (beta quality)
18. CONTAINERIZED ARCHITECTURE
Containerizing the Contrail control plane for easier manageability.
[Diagram: HA controller nodes run Config + Control, Analytics, and Analytics DB containers; compute nodes run the vRouter agent and vRouter. Docker containers are orchestrated using K8s or other orchestration tools.]

SALIENT ASPECTS
§ Multiple personalities of containers:
  o 3 controller containers (Controller, Analytics, Analytics DB), each representing a node
  o An LB to enable HA (based on HAProxy) is provided as a container, but is not mandatory
  o vRouter agent in containers
§ Containers are deployed using Ansible, K8s, Helm charts, or Docker Compose
§ Each of the nodes can independently scale (3x)
§ Can be deployed on bare metal or VMs
§ No change in the role/functionality of the control/config/analytics nodes

BENEFITS
§ Simplified LCM: all dependencies are within the container (easy bring-up)
§ Accelerated provisioning
§ Simplified integration with 3rd-party provisioning tools
19. INSTALLATION
• Ansible playbooks to flexibly deploy Tungsten Fabric binaries
• Helm charts to easily operate Tungsten Fabric components on Kubernetes
• Install-time option to deploy OpenShift with Tungsten Fabric
• Tungsten Fabric binaries are available on DockerHub, and CI/CD is improving
• Commercial integrations into lifecycle tools such as Red Hat OpenStack Director
20. VERSATILE SDN SOLUTION
Consistent security and network functionality between VMs, containers, and bare metal.
1. L4 Policy: Tungsten Fabric network and security policies provide fine-grained traffic control while abstracting away the underlay topology.
2. Service Chain Policy: traffic between tiers is steered through service functions such as a firewall (FW) and load balancer (LB).
[Diagram: a web tier (VMs, including a nested container on a VM), an app tier (containers, VMs, and an NFV compute node), and a DB tier (VMs and bare-metal servers) spread across compute nodes and connected by Tungsten Fabric, with the FW and LB in the service chain between tiers.]
21. SOFTWARE DEFINED SECURE NETWORKING
Tungsten Fabric provides a rich, consistent set of security policy capabilities across multiple platforms.
[Diagram: the same three-tier application (Web, App, db) appears as App1 in multiple deployments — Dev, Staging, and Prod on VMs; Dev-K8s and Dev-Mesos on containers; Staging-BMS on bare-metal servers — all governed by the same policies, enforced through vRouter security groups, network policy, and the device manager for bare metal.]
1. Simplified manageability (change control, etc. is much easier)
2. Improved scalability
3. Define / review / approve once → use everywhere
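The "define once, use everywhere" idea above can be sketched as a policy written purely in terms of tier tags, so the same rules apply to every deployment (Dev, Staging, Prod, K8s, Mesos, BMS) without being rewritten. The data shapes below are illustrative, not the actual TF policy schema.

```python
POLICY = [
    # (from_tier, to_tier, action) — tiers only; no per-deployment rules
    ("web", "app", "allow"),
    ("app", "db", "allow"),
]

def evaluate(src, dst, policy=POLICY):
    """Allow traffic only if a rule matches the endpoints' tier tags."""
    for from_tier, to_tier, action in policy:
        if src["tier"] == from_tier and dst["tier"] == to_tier:
            return action
    return "deny"

web_dev = {"app": "App1", "tier": "web", "deployment": "Dev"}
app_prod = {"app": "App1", "tier": "app", "deployment": "Prod"}
db_prod = {"app": "App1", "tier": "db", "deployment": "Prod"}

print(evaluate(web_dev, app_prod))  # allow: same rule, different deployments
print(evaluate(web_dev, db_prod))   # deny: web may not reach db directly
```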
22. Handling and Matching Flows
● 3x flow setup rate improvement
● TCP state machine to bypass flow aging
● Fat Flow on protocol & port, e.g. protocol UDP, port 53
● Enable/disable flows*
* Note: features like SG, floating IP, VN-based policy, and VRF assign rules will not function when flows are disabled
[Timeline diagram mapping these features across releases 2.2 and Contrail 3.0.X/3.1.X.]
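The "TCP state machine to bypass flow aging" bullet can be sketched as follows: instead of waiting for an idle timer to expire, a flow is evicted as soon as the TCP session visibly ends (FIN seen in both directions, or RST). This is a simplified model, not the vRouter implementation.

```python
class Flow:
    """Toy flow entry tracking TCP teardown instead of an idle timer."""
    def __init__(self):
        self.fin_seen = {"fwd": False, "rev": False}
        self.closed = False

    def on_tcp_flags(self, direction, fin=False, rst=False):
        """Update flow state from observed TCP flags; mark closed when done."""
        if rst:
            self.closed = True          # abortive close: evict immediately
            return
        if fin:
            self.fin_seen[direction] = True
        if self.fin_seen["fwd"] and self.fin_seen["rev"]:
            self.closed = True          # both sides finished: evict now

flow = Flow()
flow.on_tcp_flags("fwd", fin=True)
assert not flow.closed                  # half-closed: keep the flow
flow.on_tcp_flags("rev", fin=True)
print(flow.closed)  # True: evicted without waiting for aging
```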
23. FAT Flow Enhancements
Fat Flow — current implementation:
A flow key is used to hash into the flow table (identifying a hash bucket). The flow key is based on a five-tuple consisting of the source and destination IP addresses, the source and destination ports, and the IP protocol.
With Fat Flow, the flow key is reduced from a 5-tuple to a 4-tuple consisting of the source and destination IP addresses, the destination port, and the IP protocol; the client port is not used in the flow key.
[Diagram: packet fields SRC IP | DST IP | SRC Port | DST Port | IP Protocol hashed using the 5-tuple; Fat Flow is configured per virtual machine interface with the protocol (TCP/UDP/SCTP/ICMP) and port pairs.]
24. Fat Flow Enhancements
The vRouter Fat Flow handling is enhanced to support ignoring the source/destination port or the source/destination IP address:
1. Ignore both source and destination ports
2. Ignore either the source or the destination IP
3. A combination of (1) and (2)
Fat Flow can now be configured at two scopes, each taking the protocol (TCP/UDP/SCTP/ICMP), port pairs, and an ignore-address option (SRC/DST):
• Virtual machine interface
• Virtual network
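The flow-key reductions on the last two slides can be sketched as follows: a normal 5-tuple key, the Fat Flow key with the client port masked out, and the newer option of also ignoring an IP address. This is an illustrative model of the key computation, not the vRouter code.

```python
def flow_key(src_ip, dst_ip, src_port, dst_port, proto,
             fat=False, ignore_src_ip=False, ignore_dst_ip=False):
    """Return the tuple used to hash into the (toy) flow table."""
    if fat:
        src_port = 0                    # drop the client port (5- to 4-tuple)
        if ignore_src_ip:               # newer enhancement: mask an address
            src_ip = "0.0.0.0"
        if ignore_dst_ip:
            dst_ip = "0.0.0.0"
    return (src_ip, dst_ip, src_port, dst_port, proto)

# Two DNS clients normally create two distinct flows ...
k1 = flow_key("10.0.0.1", "10.0.0.53", 40001, 53, "UDP")
k2 = flow_key("10.0.0.1", "10.0.0.53", 40002, 53, "UDP")
assert k1 != k2
# ... but collapse once the client port (and optionally source IP) is ignored.
f1 = flow_key("10.0.0.1", "10.0.0.53", 40001, 53, "UDP", fat=True)
f2 = flow_key("10.0.0.2", "10.0.0.53", 40002, 53, "UDP", fat=True,
              ignore_src_ip=True)
print(f1, f2)
```

Collapsing many short-lived sessions into one fat flow is what keeps the flow table small for chatty services like DNS.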