Your applications are moving to the cloud, and your firewall is sure to follow. The concept of only protecting your network no longer makes sense. But, can a virtualized firewall adequately secure organizations as they become more and more distributed? What are your options to determine where your firewalls will reside? How can you evaluate which solution is best for your enterprise?
You can create a service chain of appliances, but you can't create a service chain of clouds.
In 2010 you'd start with DNS, which can easily handle terabytes/sec of traffic. As you begin chaining services like next-gen firewalls and IPS, the maximum throughput drops and latency increases with each appliance. By the time you add an SSL proxy, you have to incorporate load balancers just to get enough throughput, and latency suffers badly. AV and DLP only make this worse, not to mention sandboxing.
On top of that, each of these solutions is from a separate vendor, complicating configuration, enforcement, and logging.
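A toy sketch of why each appliance in the chain hurts: the chain's effective throughput is capped by its slowest element, while per-hop latencies add up. All figures below are invented for illustration, not measurements of any real product.

```python
# Hypothetical service chain: (name, max throughput in Gbps, added latency in ms).
# Numbers are illustrative assumptions only.
chain = [
    ("DNS filtering",      100.0, 0.1),
    ("Next-gen firewall",   20.0, 0.5),
    ("IPS",                 10.0, 1.0),
    ("SSL proxy",            2.0, 5.0),
    ("AV/DLP",               1.0, 8.0),
]

# The chain is only as fast as its slowest link...
throughput = min(gbps for _, gbps, _ in chain)
# ...while every hop adds its own delay.
latency = sum(ms for _, _, ms in chain)

print(f"effective throughput: {throughput} Gbps")  # 1.0 Gbps
print(f"added latency: {latency} ms")              # 14.6 ms
```

Even with generous per-box numbers, the chain ends up bottlenecked on its weakest appliance while the latency tax keeps growing.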
The "state of the art" in 2017 is to leverage SDN and network function virtualization (NFV) to replace and scale appliances.
Spin up VMs as you need them!
Dynamically route traffic through the services you need
Service chaining and context sharing between disparate functions
Scale out for tenancy, and scale out for performance… an operational nightmare!
This assumes the most advanced bundles will see less than a 30% attach rate.
But there's a challenge in going direct to the Internet with appliances.
Deploy a bunch of appliances to all locations. How many locations does your customer have?
Can they realistically deploy the same appliance stack sitting in their gateway to every location? No – that creates expensive appliance sprawl. PAN will say: create regional hubs and backhaul traffic – which defeats the point of cloud applications and local Internet breakouts. OR
Instead, compromise on security – trade off how many boxes they can afford against the level of security provided. Either compromise leaves the org vulnerable.
And it is not just us saying this. When we asked end users at RSA about their concerns about creating local Internet breakouts, they were concerned that it would require additional appliances, about the lack of security and control with that many appliances, and that it would be too complex to manage.
Bottom line – appliances don’t work for breakouts.
It no longer makes sense to backhaul outbound Internet traffic to a firewall in a regional or corporate datacenter. Expensive MPLS backhauling = negative user experience. It no longer makes sense to compromise security by installing smaller boxes in the branch.
Single policy definition point
Immediate policy enforcement
Policy that follows the user
100+ Data Centers
150 Gbps peak daily throughput
Peering in major Internet Exchanges
Every log transaction of every employee is available within a second or two
Log files (never data) are only written to disk in a location of your choice
Let me give you a bit more detail about what we mean by cloud scale and delivering the largest, most reliable, and most available cloud. Our cloud is deployed in 100 data centers across 5 continents.
So, for instance, your employees in Brazil go through the Brazil data center, and employees in India connect to the local data center in Mumbai.
So far I've only talked about the volume of traffic. The number of threats and the level of attacker innovation and sophistication are increasing rapidly, so you must be able to evolve your cloud to handle more frequent updates. Appliances were never designed for this frequency of updates.
We push about 120,000 unique security updates every day – more than one per second. Imagine trying to update an appliance 120,000 times a day. How often do you upgrade your appliances, and how do you manage change control?
The next thing I want to mention is peering with Internet exchanges. We peer with all leading Internet exchanges and leading apps, ranging from Office 365 to Azure, AWS, Box, and Salesforce. This gives you the fastest performance because our data centers in, say, Chicago and New York are peered directly with the content, giving you the fastest connection from our cloud.
We made sure that our cloud is very secure. We do ongoing internal testing and third-party testing, and redundancy is built in from day 1 – both within our own infrastructure and across data centers, which can fail over to one another. We have nothing to hide: our Trust Portal provides full monitoring for full transparency of both Zscaler and third-party partners. We are proud of our cloud and like to show how it's performing.
Thanks to many of our early large enterprise customers, we've received a number of certifications for our cloud, including ISO 27001. These certifications are very important to us, and we go through regular audits to maintain compliance. We've also received certification under the EU-US Privacy Shield (the new agreement between the EU and US for transatlantic exchanges of personal data for commercial purposes).
This is a screenshot of an analysis from a large European company with 150,000 employees. Data taken over 3 months shows that employees clicked on malicious content over 13 million times, and we blocked every attempt. This company also had around 1 million botnet calls – that is, infected PCs made 1 million calls out to attackers' command-and-control servers. It's a good thing we were inline and blocked all of those calls. The next question is: how do you clean up those compromised PCs?
You can drill down further; what I'm showing here is that you want to understand the traffic by location – where the botnet calls are coming from. In this example, Beijing is the most impacted site, followed by São Paulo. You can drill down even further to see the actual users that are infected. The first column is blurred out for obvious privacy reasons. The next column shows the command-and-control server each botnet was calling; you can see that a number of these domains are randomly generated or hosted in Russia.
Another thing to note: users were deceived over 47,000 times into clicking a URL that led to a phishing site. And the million-plus botnet calls to C&C servers were all blocked.
Zscaler continues to be the fastest-growing vendor in this market. Gartner estimates that Zscaler owns more than 50% of the market share (as measured by revenue) for cloud-based SWG services. The vendor is a good option for most enterprises seeking a cloud-based SWG.