Today’s enterprise computing infrastructure is steadily becoming cloudier. More and more enterprises are taking their apps, their servers, their networks, and their data and migrating them to various kinds of cloud-based resources. It isn’t really a binary, either/or decision – most companies have a mixture of on-premises and cloud infrastructures and are changing the ratio between them over time.
We found that IT shops will make several transitions in their cloud migration. Not every enterprise goes through these steps in the same sequence. Let’s give you a few different scenarios.
Some companies start with on-premises apps running on their company-owned physical servers. This is the “normal” state for many companies, although, as we will show in a few of our case studies, this is changing: some enterprise IT shops are skipping this step entirely. I know of several companies that began their operations 100% in the cloud and have no actual on-premises data center. You can think of this as running their enterprise on a “cloud bus.” For those of you who are still not too far removed from the photo here, you can think of this as a new baseline for comparing costs and returns on your IT infrastructure investments.
Some of these servers will be virtualized on in-house hypervisors. One of the first steps taken by many IT shops is moving their physical servers to on-premises hypervisors. This can increase server utilization and densities inside packed data centers that are looking to squeeze more resources into a smaller footprint. A typical threshold for many IT shops is when they surpass 50 physical servers.
Some of these apps will then move to the public cloud. Many companies begin their journey by moving a few of their cherished in-house apps into the public cloud, such as email or other general office productivity apps. It represents an important beachhead in the journey, because it is the realization that the on-premises world is changing, and that IT can’t build and control its infrastructure end-to-end any longer. Many companies make use of a variety of cloud providers, deciding on an app-by-app basis which ones to subscribe to. For example, the American Red Cross uses a variety of cloud providers: Microsoft’s Office 365, Unisys for its regular web hosting needs, an Oracle-hosted cloud service for several database apps, and Teradata’s Aprimo cloud CRM apps. One consideration is to put in place appropriate access controls so that not every IT staffer can shut down some random VM and cause all sorts of havoc. Another consideration is balancing the increase in operating costs against the savings in capital costs from avoided server purchases. A typical threshold for many IT shops is when they surpass $50,000 or more in annual email system operations costs.
Building hybrid clouds. The next step is building a hybrid cloud, with a mixture of servers both on premises and in the cloud, often with firewalls or other security perimeters to segregate the Internet traffic. The motivation for doing so can depend on several factors, such as being able to handle bursts in computing or storage capacity, or because consumer apps demand a variety of web and database servers with higher Internet bandwidth. The trick here is to ensure that you have enough bandwidth and low enough latency to handle the increase in Internet traffic, and to make sure that your on-premises apps don’t assume particular connection speeds or throughputs. You also need to ensure that you can extend your VPN into the cloud, or make use of the cloud provider’s VPN services. Until recently, the job of managing hybrid cloud collections wasn’t easy or simple, but this is changing, and many of the cloud providers have improved their management tools. Hybrid clouds can be constructed in a variety of ways, either by migrating servers as a group or by particular applications. Presidio Health migrated their servers to the cloud but kept their data on premises for security and compliance reasons. They were able to increase their computing power by 70% without increasing their IT budget, and keep their security controls intact. A typical threshold is when an enterprise has $100,000 or more in annual system operations costs, or wide variations in seasonal or daily computing loads.
Using colocation facilities or managed cloud-based services. The final step in the migration process is creating more complex infrastructures that involve a variety of approaches, including colocation or managed services. There are all sorts of reasons for going this route, ranging from better scalability or performance to being able to outsource IT infrastructure management tasks for consumer-facing apps. Rotary International is using colocation for its disaster recovery solution. One reason that a managed colocation can be cost-effective is relaxed recovery time objectives: if parts of your IT infrastructure can be down for a few hours or days, this could be the way to go. Code42, a backup and cloud file storage vendor, is also using colocation facilities around the world to reduce network latency, often the most critical metric for effective cloud-based backups. In some cases, enterprises are mixing approaches, using their private clouds to provide a base capacity and then bursting beyond that capacity to a public cloud.
You are going to face several choices in how you build out your cloud infrastructure. Let’s cover the four key questions you will want to address as you evolve your IT infrastructure towards something more cloud-friendly.
This is what the Red Cross did so more of its volunteers could make use of its internal systems without having to carry around anything more sophisticated than a smartphone and a browser. And one medical provider built browser-based web portals for its internal apps that now serve thousands of clinicians and other hospital users. For both organizations, this conversion freed them from supporting outdated endpoint devices and from maintaining either customized apps or outdated mainframe terminal communication tools.
If you move your servers to the cloud, you can ramp up (or down) your capacity quickly without having to purchase the hardware. Karmaloop, a large Boston-based clothing eCommerce retailer, has this philosophy. They call it “buying our baseline capacity but renting what we need for handling seasonal spikes.”
When companies employ a single sign-on tool, they migrate their security needs to a single point of service delivery and make things easier for both end users and their IT department. But single sign-on alone isn’t sufficient. Security needs to be part of every app: more security-as-a-service, moving from the network edge to the individual app. This is what Mitsubishi Motors did to connect its North American car dealers to its headquarters infrastructure. In the past they relied on a VPN to get their users inside a secure perimeter; now each app authenticates each user individually.
We are looking at a pile of older PCs that were decommissioned by the US Department of Agriculture when it upgraded to VMs.
Bud Albers talks about looking at server CPU utilization as a good decision point before virtualizing. His team found that many of their physical servers were operating at very low utilization levels and could easily be converted into virtual machines or migrated to the cloud. This frees up other data center resources and also spreads the cost of an expensive server across equipment that can run at higher loads.
Remember Rolodex card files and the black desk phone? While this diagram is somewhat tongue-in-cheek, it does illustrate how far we have come since those desktops at the start of the 1980s, before PCs entered corporations. The same can be said about the enterprise app, which has taken over many of these totemic physical items.
Driving this evolution and cloud migration is a series of steps, and for many companies the first step is file sharing and how it becomes a collaboration mechanism. Before PCs were first connected to the Internet, there were local area networks and floppy disks. File sharing was cumbersome and crude, because PCs were essentially personal devices and collaboration was difficult. Then came the Internet, and one of the first basic ways that work teams used this connectivity was to share documents, usually as email attachments or through tools such as Microsoft SharePoint. But while it solved document version and access issues, SharePoint is a very “heavy” client, meaning that there is a lot of software to install and maintain. It isn’t very friendly to mobile devices, or to people trying to come in from the Web to access their documents. The first step towards changing how apps are supported usually begins when enterprises realize that there is a better way to share files, and I have seen IT organizations ditch their SharePoint implementations in favor of these file-sharing services.
Once these file-sharing portals take hold, it isn’t much more of a step towards running general office productivity apps in the cloud, such as Google Docs and Microsoft Office 365. These apps were once the exclusive domain of the desktop, but as endpoints have blossomed into tablets and Web-only access, office productivity means something more pluralistic and functional than merely sharing documents. These apps also represent IT’s first tentative steps into supporting the public cloud. Many of these decisions are being driven by the fact that the endpoint device has become almost irrelevant. Remember those days back in the early 2000s, when an IT department would studiously determine which PC brand or operating system would be the corporate standard? Now the particular endpoint, whether a desktop or a mobile, no longer matters. Mobiles are increasingly the main endpoint browser: for example, most of today’s Facebook posts come from mobile devices and more than 75% of Tweets are posted from phones.
This is the ultimate consequence of a “bring your own device” policy; because in effect the IT department recognizes that the actual apps themselves trump whatever device they are running on. There are some big benefits here for IT: you don’t have to invest time in a “nanny state” approach in tracking which users are running what endpoints. Instead, you can free up these staffers to improve your apps.
If you are serious about moving towards hybrid clouds for your enterprise, you need to consider these next four decision points and how you will implement key enabling technologies.
This is probably the most visible and also most accountable metric: it is one that IT departments have kept track of for decades. But instead of taking weeks to resolve an open support ticket, we are talking days or hours, because that is what customers now expect. These days IT needs to pay attention to how quickly it can turn around changes and software upgrades, just as Google and Facebook often introduce new software on a daily basis. And the velocity for these turn-around times is increasing, too. Consumer-facing SaaS vendors have set a new bar, and end users’ expectations are now higher for internally developed enterprise apps too. One of the reasons many companies go with managed services providers is that problems are often fixed before the company even hears about them, keeping their customers happy and their websites up and running.
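If your ticketing system can export open and close timestamps, tracking this metric is straightforward. Here is a minimal sketch; the ticket records and field names are made up for illustration, so adapt them to whatever your own system exports.

```python
# Sketch: computing a median time-to-resolve metric from support tickets.
# The "opened"/"closed" fields are hypothetical export fields, not any
# particular ticketing product's schema.
from datetime import datetime
from statistics import median

def hours_to_resolve(opened: str, closed: str) -> float:
    """Elapsed hours between open and close timestamps (ISO 8601)."""
    delta = datetime.fromisoformat(closed) - datetime.fromisoformat(opened)
    return delta.total_seconds() / 3600

def median_resolution_hours(tickets: list[dict]) -> float:
    """Median resolution time across a batch of closed tickets."""
    return median(hours_to_resolve(t["opened"], t["closed"]) for t in tickets)

tickets = [
    {"opened": "2015-03-02T09:00", "closed": "2015-03-02T17:00"},  # 8 hours
    {"opened": "2015-03-03T10:00", "closed": "2015-03-04T10:00"},  # 24 hours
    {"opened": "2015-03-05T08:00", "closed": "2015-03-05T12:00"},  # 4 hours
]
print(median_resolution_hours(tickets))  # median of 8, 24, 4 -> 8.0
```

Run this weekly and trend the number: if your median creeps from hours back toward days, you are falling behind the expectations described above.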
Part of the app evolution process is making small steps, adding software and integration layers where it makes sense, rather than building a huge software infrastructure from the ground up with some grand design. This is how CrazyForEducation built its infrastructure. IT managers need to add one app at a time and evaluate how each app can deliver solid business benefits incrementally. As each app is added into the mix, IT should measure those returns before putting any further effort into the next enhancement or adding a new app to the stack. If we compare this to the traditional build-and-deploy software model of the past century, you can see this is a complete reversal, and a more palatable and incremental approach.
One of the best ways to enable this new app universe is in the form of an app portal or corporate app store where users can download the most current apps to their endpoint devices, or log in and connect to them in the cloud using some kind of single sign-on tool, or a combination of approaches. This means that IT shops create a single place where end users can consume the necessary business apps to be productive. Users don’t want to wait on IT to finish a requirements analysis study or go through a lengthy approvals process: they want their apps here and now. Users want personal apps that are intuitive, purposeful, and easy to use, and they now carry these same expectations into the workplace every day when they carry their smartphones in their pockets. There is dwindling patience for the convoluted, frustrating user experiences that many enterprise users have tolerated from corporate systems of the past. I am glad to see Microsoft has caught up and is providing its own app store in Windows 10.
Once you understand the cost of your cloud app, the next biggest issue is getting your hands around latency: end-to-end application latency is one of the hardest things to measure and to track. This is because so much of the infrastructure now depends on external Internet links. In the old days we had application-based response time measurements that were easy to calculate because they were based completely on the mainframe infrastructure that ran most of the internal apps. But in the new era of customer-facing infrastructure, we could have apps from suppliers and vendors from literally all over the world. This means that IT no longer has control over every possible piece of bandwidth. On top of this, traditional latency measurement tools, such as pinging routers and examining traceroutes, don’t necessarily provide a picture of what customers are actually experiencing. Network engineers have long studied the effects of latency on application performance, reducing router hops and increasing router packet processing. At one international organization, they have to deal with two-second latencies across satellite networks and over lousy Internet links around the world. "No one is really building cloud apps that deal with these huge latencies," their IT manager told me. "And simulating and testing apps under these conditions is also really difficult. You really need plenty of Internet bandwidth for even the simplest cloud app."
First, users can be located anywhere, on connections ranging from high-speed fiber to 3G mobile data networks. This makes latencies sometimes horrific and often unpredictable. Second, it may be difficult for IT managers to even calculate the built-in latencies of the cloud provider's network. Third, applications are becoming more virtualized and distributed across large-scale computing infrastructures, such as Hadoop clusters of hundreds of machines, which introduces additional latencies.
Finally, IT may not be completely aware of the ultimate end users and application owners, nor have the right service-level agreements (SLAs) in place to enforce minimum latency standards. Many SLAs specify ping or traceroute transit times, but most modern applications use other protocols that don't necessarily correlate with ping times. Many SLAs also don't differentiate among outages on a server, a network card, a piece of the routing infrastructure, or a security event.
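Because ping times don't correlate with what users experience, the practical alternative is to time the actual application operation end-to-end and look at the tail of the distribution, not the average. The sketch below shows the idea: `measure()` times any callable (in practice, a full HTTP request against your app), and `percentile()` summarizes the collected samples. The sample values are synthetic placeholders.

```python
# Sketch: application-level latency measurement, as an alternative to
# ping/traceroute. Wrap real requests with measure(); summarize with a
# tail percentile. The sample list below is synthetic illustration data.
import time

def measure(fn, *args):
    """Return (result, elapsed_seconds) for one call to fn."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

def percentile(samples, p):
    """p-th percentile (0-100) of latency samples, nearest-rank method."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

# Synthetic end-to-end timings in seconds; collect real ones by wrapping
# actual requests with measure().
samples = [0.12, 0.15, 0.11, 0.95, 0.14, 0.13, 0.16, 0.12, 0.14, 2.10]
print(percentile(samples, 95))  # tail latency -- what your SLA should track
```

Note how the 95th percentile (over two seconds here) tells a very different story than the median, which is exactly the distinction a ping-based SLA misses.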
Oakley Sunglasses chose their web-hosting provider based on one metric: the number of outages over the past year. Particularly for eCommerce companies, having a website that isn’t up and running means they are losing business. Engagency decided to start using the cloud for its Sitecore hosting needs when it saw that these implementations were just as reliable as ordinary physical server implementations. As IT organizations migrate more infrastructure into the cloud, this reliability becomes important.
Let’s look at a few different case studies to show you how companies have migrated to the cloud, taken from a variety of different industries and approaches.
Engagency.com is a 12-year-old company that provides consulting, training, implementation, and managed services for enterprise web content, ecommerce, and digital marketing solutions based on the Sitecore platform. Part of their business is to make recommendations to customers about the appropriate hosting infrastructure and support services needed to effectively manage large mission-critical web properties. As businesses continue to shift marketing, customer service, and commerce to the web, they inevitably experience increasing visitor traffic and security vulnerabilities that internal IT may be unprepared to manage and mitigate effectively. In these situations, any site downtime could result in lost revenue or hurt a brand’s reputation or delivery of services.
Engagency has always been about recommending solutions that offer maximum uptime and preserve business continuity. This is why, up until a year ago, Engagency wasn't recommending that its customers use a cloud provider to host Sitecore solutions. But that has changed, largely as a result of the improved performance and reliability they have experienced, and the change has been dramatic in how they formulate their own offerings. One example is in how they recommend particular hardware and software configurations to support Sitecore installations. Sitecore is an enterprise solution that can be used to manage up to thousands of websites on a single instance; as such it can be a resource-intensive platform and has specific bandwidth and machine tolerances. Engagency uses its understanding of these elements to make recommendations about the most appropriate and effective use of cloud offerings, and also to find providers that understand managing these kinds of installations. They ended up partnering with Rackspace to provide their customers with custom-tailored Sitecore hosting and managed support offerings. Now they frequently recommend a hybrid approach that mixes physical hardware and cloud offerings. This combination gives the customer the best of both worlds in terms of reliability and cost savings. However, the cloud is not just about reducing capital expenditures. In fact, the greatest benefit seems to be how it allows companies to optimize their operating costs, freeing up budget to reallocate towards round-the-clock managed support services, which in turn maximizes their system performance and reliability while minimizing the strain and undue burden of responsibility on their internal IT team.
As the demands of keeping a mission-critical website continuously in operation have gone up, Engagency has seen an increased need for cloud offerings combined with managed support and monitoring services. The challenge in finding the right partner was that many cloud providers just offer a rack and a pipe at the lowest cost, but Engagency was looking for a provider that offered the value-added support services necessary to ensure maximum business continuity. For example, they recently had a financial customer that experienced a DDoS attack on their website. Their cloud provider reacted immediately and helped them diagnose and fix the problem. From this experience, the provider recommended a DDoS monitoring service that is now proactively identifying and helping to prevent such distributed denial-of-service attacks. Given the number of these kinds of intrusions and attacks, having this kind of support is becoming more important. Combining these types of value-added support services with more cost-effective cloud offerings is helping companies rethink how they address these increasing demands and get more for their money.
A good example here is how the American Red Cross deploys its apps. A few years ago it was one of the more conservative IT shops around. Most of its apps ran on its own mainframes or were installed on specially provisioned PCs that were under the thumb of the central IT organization based in Washington, D.C. But then people on its disaster response teams started bringing their own devices along. The IT department started out trying to manage users’ mobiles and standardize on them, but within two or three months the staff found that mobile vendors had come out with newer versions, making their recommendations obsolete. Like many IT shops, they found that their teams would rather use their own devices. In the end, they realized that they had to change the way they delivered their applications, making them accessible from the Internet and more browser-based. The Red Cross, like many other IT organizations, has learned that they have to be able to adapt to the rapidly changing mobile environment. But the good news is that they don’t have to buy as many laptops.
Some organizations have always had their infrastructure in the cloud. This is the strategy that the startup CrazyForEducation.com used when it began operations last year. The company is a SaaS provider of tutorials, used by K-12 classroom teachers to post short online video lessons that explain common concepts such as algebra or geography to students. These lectures are viewed by students the world over. The notion is referred to as flip teaching, meaning that classroom time is used for working on what would traditionally be homework assignments, and the readings and lectures that were normally part of the classroom day are done in the evenings at home. To deploy their solution, the startup uses a complete online infrastructure. The company is also using a variety of customer-facing apps and SaaS/IaaS infrastructure so that they can quickly scale as demand for their services rises. This means that no single cloud provider is used; rather, they leverage more than a dozen different vendors for their various infrastructure needs.
When the company began operations, the principals wanted to build their infrastructure incrementally, using a Lego approach of interchangeable parts that could easily connect together. They understood that each part could be replaced if the provider went out of business or when they found something more appropriate or cost-effective. As they added new providers, they looked at what the incremental return on their investment would be for each particular tool. In some cases, they found they could build their own tool for less than the monthly cost of one of their providers. In other cases, such as for CRM providers, they found that there were many solid alternatives and so they shouldn’t even attempt to build their own. As another example, they needed a solid video-rendering engine since so much of their content was video-related. They looked at a number of providers but eventually ended up using the UK-based provider Vzaar.com, which was much less expensive than any American provider they could find.
The firm spends about $1,500 a month on its infrastructure, and has purchased services from vendors around the world for its accounting, Web hosting, payment processing, and databases. They have chosen more than a dozen different vendors, some of them offering consumer apps and some geared towards businesses. For example, they purchased their email through Google’s business-grade hosting service and use Box for their file sharing, but use Join.me for their video conferencing solution. For each provider, they look at what happens to performance when they scale up and support more traffic as the company grows. They do all sorts of stress testing to see what happens when their loads are ten times what they currently support, making sure that all of their providers continue to deliver the same latency and performance they currently have. They have also segmented their data security so that they don’t store customer financial data in the cloud, other than using their payment processor to handle credit card transactions when it is time for their teachers to be paid for their video lessons.
Our next case study is the Missouri healthcare group Delta Dental, which is one of the largest practices in the state. As you might imagine, they have various compliance regulations to ensure that personal information is properly maintained. They wanted to ensure that they could encrypt this data without having to rewrite a lot of their existing code, and also wanted a mechanism where they could deploy a policy-based solution that could grant access permissions at both the user and app levels. They didn’t want to deal with encryption key management, which is why they turned to Vormetric’s products. It didn’t matter where the data was stored, and it didn’t require users to do anything differently from what they had done before. It took only a few days to implement, too. Now they have their compliance and can move their data in and out of the cloud knowing that it is protected.
Companies such as Unisys and the global financial institution ING, headquartered in Amsterdam, are using hybrid clouds as a way to consolidate data centers. The move makes sense: you don’t have to provide the up-front capital to house your servers and can rent capacity, charged to an operating budget, as you need it. Rather than invest in more real estate, you can leverage the services and expertise of infrastructure-as-a-service providers and rent the equipment only when it’s necessary. Security can be built in when the consolidation happens, making the cloud just as secure as a traditional raised-floor data center. You can now have a complete virtual infrastructure that can be used to build secure environments, with a business-class service that is a cut above the consumer versions of the past.
Bill Gillis is the CIO for Beth Israel Deaconess Medical Center in Boston. They built a cloud-based electronic medical record system more than six years ago that is hosted at their Internet provider and supports the clinical practices at their smaller doctor offices in their network. "These are offices where servers could be sitting under plants or in broom closets," he said, "and we wanted to get them centralized and virtualized. We call it our accidental cloud, because we didn't start out to build it that way. But these are very mission-critical apps: without them, a doctor's office couldn't see any patients." Beth Israel owns all the hardware, software and other infrastructure, and each office connects to their app via just a Web browser and an SSL connection. When they need to debug an issue, they have to install specialized application and network performance management tools to track down latency and packet losses. "We just ship the practice office a monitoring box in the mail, and the office manager just plugs it in to the network. In a week, we can capture enough data to figure out where the problem is and get the right vendor to fix it."
Some companies find that it isn’t just the cloud’s ability to add VMs that matters, but how VMs are provisioned, added, and subtracted from the mix. This includes policy-based workload management and deployments, and real-time resource monitoring. Some cloud providers are using orchestration tools that automatically start or stop particular VMs in sequence, so that a directory server can start ahead of a database server, for example. This is what the Motley Fool IT department realized early on. "You want to take the human element as much as possible out of your deployment and provisioning process," said one of their managers. "This helps you to minimize failures and realize higher returns on your investment.” They are using open source frameworks such as Chef and Puppet to help with this automated configuration and provisioning management.
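The ordered start-up those orchestration tools perform boils down to a dependency sort. Here is a minimal sketch of the idea, with illustrative service names (not any vendor's configuration format): declare which services must start before which others, and let a topological sort compute a safe boot order.

```python
# Sketch: computing a safe service start-up order from declared
# dependencies, the way an orchestration tool sequences VM boots.
# Service names are illustrative examples only.
from graphlib import TopologicalSorter

# Map each service to the services it depends on (which must start first).
deps = {
    "database": {"directory"},    # directory server starts before the database
    "app": {"database"},          # app tier comes up after the database
    "web": {"app", "directory"},  # web tier starts last
}

boot_order = list(TopologicalSorter(deps).static_order())
print(boot_order)  # dependencies always precede their dependents
```

The same ordering, reversed, gives you a safe shutdown sequence, which is exactly the kind of repeatable, human-free process the Motley Fool quote is describing.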
In a story for TechTarget, Boeing security engineer EJ Jones was quoted several years ago talking about how he initially designed a five-part checklist and graded each cloud provider. The requirements included questions like, "Can a provider tell us when and how a failure has occurred?" and "Can they guarantee uptime?" More recently, Boeing has employed a third-party auditing firm that was allowed more detailed access to the cloud provider's methods. Boeing recommends that when you are looking to evaluate each cloud provider you should have consistent IT controls in place. These should include standardized deliverables and touch points between your own IT organization and your cloud providers. You can see the level of their analysis with the infographic here that Boeing data architect Stephen Whitlock shared at the Gartner Catalyst conference a few years ago in terms of evaluating each cloud vendor for storage, platform and hosting services.
Increasingly more mobile and global workforces. As we hinted earlier, the days when everyone is chained to a fixed desktop computer are long over. But it isn’t just about being more mobile, or using more mobile devices. It is also that the workday is no longer 9-to-5 and users expect to get their jobs done whenever and wherever they might be in the world. This goes beyond telecommuting, and extends into being able to open up their laptops and tablets in the middle of the night or the early dawn when they feel the most productive. It also means that users expect to collaborate with their colleagues halfway across the world with the same ease that they have had with working with them down the hall. That puts an additional strain on the IT infrastructure technologies to always be working and to deliver access to a user no matter where he or she is located. The right solutions can help apply security policies no matter how and where applications are accessed.
The self-service IT economy. Self-service portals are critical: users don’t want to wait on IT to be activated, on-boarded, installed, or supported. They just want to log in (only once, please!), download their apps, and get started. Identity technologies are a key enabler of the self-service portal, keeping it current with the latest business apps.
Supporting a variety of systems. Most large organizations require their identity management platform to handle connections using a wide variety of programming interfaces, including SAML, WS-Federation, OpenID Connect, and OAuth. Each of these has different mechanisms for just-in-time user provisioning and for automating the interaction between service and identity providers, and each is more or less suitable for consumer-based SaaS services versus the enterprise. While a detailed comparison of each protocol is beyond the scope of this paper, the key takeaway is that an enterprise identity technology has to cover as many of these methods as possible in order to be effective, especially for the large enterprise that almost certainly uses multiple protocols.
Earlier I mentioned the rise of the corporate app portal, and I wanted to show you what one looks like – it has been purposely designed to mimic the Windows 10 desktop. It used to be that the web browser and the file browser were cloned interfaces, but now the entire desktop has moved into the cloud.
As these app stores take hold, more companies are now testing their apps in production. This movement grew out of strategies that Google and other cloud providers used to roll out new features and code several years ago. The tools for this kind of testing include ramped (limited) deployments and A/B tests. This has created a new kind of IT department: one that delivers continuous app upgrades, just as the consumer social clouds of Facebook and Twitter do with their software. Today these IT groups add improvements without waiting for formal requirements documents from a ponderous and seemingly endless architecture review process. Instead, user interface changes are added almost on a whim, and these continual changes make the notion of a “version number” for software seem almost quaint. Think about this for a moment: back in the early 2000s, the thought of doing this kind of testing in production would probably have gotten an IT director fired. Now it is becoming common practice. Maybe we have Austin Powers to thank for this transition.
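If you are curious about the plumbing behind an A/B test, here is a minimal, hypothetical sketch (the experiment names and percentages are invented): hashing the user ID together with the experiment name gives every user a stable variant assignment without storing any state on the server.

```python
import hashlib

def assign_variant(user_id, experiment, rollout_pct=50):
    """Deterministically bucket a user into variant 'A' or 'B'.
    Hashing (experiment, user) keeps the assignment stable across sessions."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # map the hash to a bucket 0-99
    return "B" if bucket < rollout_pct else "A"

# The same user always lands in the same bucket for the same experiment.
print(assign_variant("user-42", "new-portal-ui"))
```

A ramped deployment is the same trick with the percentage dialed up over time: start `rollout_pct` at 1, watch the error rates, and raise it toward 100.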
When migrating your apps to the cloud, keep in mind the following issues:
1. Build apps from the beginning with the cloud in mind.
2. Keep track of the monthly cloud computing bills and understand how they are calculated.
3. Build or find appropriate tools to monitor your apps' uptime.
4. Understand network latencies and end-to-end performance.
5. Involve your users and make them part of the decision process.
6. Think about using the cloud for testing new apps too.
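On the uptime and latency points, even a crude probe is better than nothing. Here is a minimal sketch using only the Python standard library; a real monitor would run this on a schedule, keep history, and alert someone when a check fails.

```python
import time
import urllib.error
import urllib.request

def check_endpoint(url, timeout=5.0):
    """Probe an HTTP endpoint once, reporting status and round-trip latency.
    Returns a dict so the result is easy to log or feed to a dashboard."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except (urllib.error.URLError, OSError) as exc:
        return {"url": url, "up": False, "error": str(exc)}
    latency_ms = (time.monotonic() - start) * 1000
    return {"url": url, "up": 200 <= status < 300,
            "latency_ms": round(latency_ms, 1)}
```

Note that this measures latency from wherever the probe runs, which is exactly the point of item 4: run it from where your users actually sit, not just from inside the data center.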
Speaking of real-time monitoring, Facebook has a public page that shows you in real time what is going on in one of its data centers, this one outside of Portland, Oregon. You can see current stats for water and energy usage as part of its attempt to build greener data centers.
Let’s discuss these five solutions in more detail.
This can help you understand where your exposure is and where you need to do a better job of locking down virtual resources. Ideally, your management tool should be able to examine the hybrid cloud, understand how to make adjustments to both physical and virtual resources and workloads, and automate the provisioning and deprovisioning of your entire hybrid infrastructure. This kind of tool also helps address regulatory compliance requirements and establish security hardening guidelines that can make a decisive difference.
Back in the days when mainframes ruled, it was easy to enforce who had access to what data. That needs to be the case with the cloud, too. In many cases, this access is an all-or-nothing proposition, meaning that once users authenticate to their cloud, they have the freedom to roam around at will, starting and stopping various VMs and causing all sorts of damage. This can be a compliance nightmare, which is why some cloud providers now offer more granular access to their resources. There are also a variety of tools that can help improve the security posture of your VMs.
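What “granular access” means in practice is a deny-by-default permission check. The role names and actions below are made up for illustration, but the shape is the same whether the policy engine is homegrown or a cloud provider’s IAM service: nobody can do anything unless a role explicitly grants it.

```python
# Hypothetical role-to-permission mapping; real IAM systems express the
# same idea with policy documents attached to users, groups, or roles.
ROLE_PERMISSIONS = {
    "viewer":   {"vm:describe"},
    "operator": {"vm:describe", "vm:start", "vm:stop"},
    "admin":    {"vm:describe", "vm:start", "vm:stop", "vm:terminate"},
}

def is_allowed(role, action):
    """Deny by default: an action is permitted only if the role grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Contrast this with the all-or-nothing model above: with a table like this, a compromised “viewer” credential can look around but can’t stop or terminate a single VM.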
There are many cloud providers that offer independent and geographically distinct data centers, with ways to duplicate data among them so that your infrastructure remains running even if one of your cloud data centers fails. This is just good security practice. Netflix has developed a series of open source tools called the Simian Army to understand where its failure points are and how to recover from them. Netflix wanted to know which VMs depend on others and how to restart particular services in the appropriate order in case of an outage. It found that the best defense against major unexpected failures is to fail often: by frequently causing failures, it forces its cloud-based services to be built in a more resilient way. Even if your cloud deployment is still relatively modest, at some point your demand will grow, and you don’t want recovery to depend on your coding heroics, or on your being awake, when that happens.
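To give a flavor of the Simian Army idea, here is a toy sketch (not Netflix’s actual code) of a chaos round that randomly “terminates” instances. The point of running this constantly is that whatever survives each round must still keep the service running, which forces resilience into the design.

```python
import random

def chaos_round(instances, kill_probability=0.2, rng=None):
    """Toy version of the Chaos Monkey idea: randomly 'terminate' some
    instances and return the survivors. In a real deployment the victims
    would be actual VMs, and the orchestrator would be expected to
    replace them automatically."""
    rng = rng or random.Random()
    return [i for i in instances if rng.random() >= kill_probability]

fleet = ["web-1", "web-2", "db-1", "cache-1"]
print(chaos_round(fleet, kill_probability=0.5, rng=random.Random(7)))
```

The exercise quickly surfaces exactly the dependency questions Netflix asked: which services fall over when a particular instance disappears, and in what order do they have to come back?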
Certainly, the least secure aspect of any cloud deployment is your Web applications and how they are connected to the rest of your cloud-based infrastructure. The challenge is being able to virtualize as many of your protective devices as you have for your on-premises servers, such as load balancers, intrusion prevention appliances, firewalls, and other gear. The major cloud providers are beginning to add these tools to their list of services so that IT developers can migrate their applications over to the cloud and still maintain the level of security that they have come to expect with the ones running inside their own data centers.
This is an issue that pre-dates the birth years of the teen hackers who now exploit it: remember when SQL injection was first discovered? It is still an issue today.
And there is a variant on SQL injection, called Blind SQL Injection, that OWASP describes this way: it is a type of SQL injection attack that asks the database true-or-false questions and determines the answer based on the application’s response. This attack is often used when the Web application is configured to show generic error messages but has not mitigated the code that is vulnerable to SQL injection. If the attacker is able to determine when his query returns true or false, then he may fingerprint the RDBMS, which makes the whole attack much easier. The number of attacks isn’t predictable, as you can see from this analysis from an IBM research report. OWASP has a number of recommendations, including range checking and enforcing least-privilege best practices.
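The standard fix, parameterized queries, is worth showing because it is essentially a one-line change. This sketch uses Python’s built-in sqlite3 module with a throwaway in-memory table: the classic `' OR '1'='1` payload that breaks the string-concatenated query is harmlessly treated as data by the parameterized one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "nobody' OR '1'='1"

# UNSAFE: concatenating user input turns the payload into live SQL,
# so the WHERE clause becomes always-true and matches every row.
unsafe_rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'").fetchall()

# SAFE: the placeholder binds the input as a value, never as SQL.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(unsafe_rows), len(safe_rows))  # 1 0
```

Range checking and least privilege, as OWASP recommends, are defense in depth on top of this; parameterized queries are the part that actually removes the injection.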
Finally, you want to use stronger authentication methods to secure your cloud access, such as these options from OneLogin, a single sign-on vendor that offers a variety of multi-factor tools. When all of your assets are just a username and password away, it makes sense to implement multi-factor authentication (MFA) and single sign-on (SSO) methods to better protect those assets. SSO tools are getting better at supporting a wider array of cloud-based applications and circumstances; most SSO products now automate the logins for thousands of applications. Some SSO tools, such as SecureAuth, Okta, Ping and Centrify, can require MFA for particular applications as part of a risk-based authentication approach. This makes SSO a powerful protective tool that can secure logins better than relying on users to choose individual passwords. It also means that IT can play a more critical role in defining cloud-based assets and matching them up with the appropriate security levels.
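Many of the MFA tools mentioned above rest on the same time-based one-time password (TOTP) standard, RFC 6238, that authenticator apps use. As an illustration of how little magic is involved, the whole scheme fits in a few lines of standard-library Python:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, period=30):
    """Compute an RFC 6238 time-based one-time password from a
    base32-encoded shared secret (the format in enrollment QR codes)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of whole time steps elapsed since the Unix epoch.
    counter = int(time.time() if at_time is None else at_time) // period
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because both sides derive the code from a shared secret plus the current time, a stolen password alone is no longer enough to log in, which is the entire point of adding the second factor.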
How to make the move towards hybrid cloud computing
Techtarget/Dimension Data event
• Contributor to SearchSecurity
• Former Editor-in-Chief at Network Computing, Tom’s Hardware.com
• Toiled in end-user computing since
• Written two computer network technology books, thousands of
• The different ways IT shops are moving to the
• The evolution of the enterprise application
• 4 key decision points to ponder
• Misperceptions and the security blame game
• Case studies
• Key takeaways, suggestions and lessons learned
3. What is your end-to-end app latency?
• Endpoints aren't fixed like they are for most
• The cloud infrastructure may not be optimally connected to your own
• Applications are becoming more virtualized
• Users are becoming more distributed too
• IT may not be completely aware of the ultimate end users and application owners
4. What is the frequency of overall infrastructure outages?
Consider these three issues
• How your servers are configured,
• What kinds of monitoring tools you are using to ensure that they aren’t breached,
• Whether your applications have built-in security or not
• The cloud isn’t as secure as on-premises
• Data can easily be stolen from clouds, so personal info shouldn’t reside there
Things you can’t blame on the cloud
• Insecure Web applications
• Lax network intrusion detection and
• Bad password policies
• American Red Cross
• Missouri Delta Dental
• Unisys and ING
• Beth Israel Medical Center
• Boeing’s cloud evaluation matrix
• It is all about speed of app delivery
• The rate of evolution varies tremendously for each business, and for departments within each business
• There is no single monolithic app
• There is also no single cloud situation
• Mobile devices have become the de facto computing
• IT staffs will have to evolve and become more
• Everything becomes browser-based, even mainframe
• Availability and disaster recovery needs to be baked
• Self-service portals become more important
Cloud environments can be more or less secure:
• How they are configured
• Who has access to them
• What kinds of encryption methods are used to protect their data and
• The sensitivity of the data itself
Thanks for listening to our seminar, and do share your own hybrid cloud experiences.
Presentation slides available:
Feel free to contact me at:
• @dstrom on Twitter