Presentation from one of the remarkable IT security events in the Baltic States, organized by “Data Security Solutions” (www.dss.lv). The event took place in Riga on 7 November 2013 and was attended by more than 400 participants on site and more than 300 via online live streaming.
This is a great depiction of the paradigm change we are talking about.
2012 was a record year for reported data breaches and security incidents, with a 40 percent increase in total volume over 2011. In the first half of 2013, security incidents had already surpassed the total number reported in 2011 and were on track to surpass 2012.
This year kicked off with a number of high profile sophisticated attacks on major websites, media, and tech companies
The IBM X-Force team is a group that researches the threat landscape and publishes a semi-annual report. This report is publicly available, and is recommended reading for anyone interested in security vulnerabilities.
Application vulnerabilities are the largest category of vulnerabilities identified by the X-Force team, and they continue to grow at an alarming rate. It is important to note that application vulnerabilities may be present both in applications you develop and in applications you buy (i.e. in-house, outsourced, or off-the-shelf).
Furthermore, the Verizon 2010 Data Breach Investigations Report shows that 92% of compromised data records were obtained through web applications, indicating that application vulnerabilities are the attack vector of choice for hackers.
The X-Force report is available at http://www-935.ibm.com/services/us/iss/xforce/trendreports/
No single automated analysis technique can find all possible vulnerabilities. Each technique has its own strengths and blind spots, which is why a single point tool can leave you exposed.
To find the most vulnerabilities, you should employ all the analysis techniques available today. IBM has combined a leading Static Analysis solution (developed by Ounce Labs) with a leading Dynamic Analysis solution (developed by Watchfire). IBM has combined these two established technologies, and has since added Hybrid analysis to combine and correlate their results. In 2011, IBM added new techniques for client-side analysis (aka JavaScript Analyzer) and, most recently, run-time analysis (aka Glassbox).
Static Analysis examines the source code for potential vulnerabilities. Static analysis can be used earlier in the development cycle, because you don’t need a running application. Static analysis can also produce a large volume of results, which can overwhelm development teams. Also, developers may question whether an identified vulnerability can be exploited (i.e. the “issue” could be mitigated somewhere else in the code, so it may not manifest itself as a true vulnerability).
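The kind of issue static analysis hunts for can be illustrated with the classic SQL injection pattern: tainted input flowing into a query string. A minimal Python sketch (not AppScan Source output; the table and function names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_vulnerable(name):
    # User input is concatenated straight into the SQL string --
    # the data flow a static analyzer flags as SQL injection.
    return conn.execute(
        "SELECT name FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver keeps data separate from SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # returns every row
print(find_user_safe(payload))        # returns no rows
```

A static analyzer reports the first function even without running the program, which is why the technique fits so early in the development cycle.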
Dynamic Analysis tests a running application by probing it much as a hacker would. With Dynamic Analysis results, it is easier to connect a vulnerability to a potential exploit. Dynamic Analysis relies on the ability to automatically traverse an application and test possible inputs, so the auditor is always asking “did I get proper test coverage?”. Because Dynamic Analysis requires a running application, it typically cannot be used until an application is ready for functional testing (i.e. later in the development cycle).
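The core loop of a dynamic scanner can be sketched as “send an attack probe, check the response”. A minimal Python illustration of a reflected-XSS check against a hypothetical handler (all names invented; a real scanner sends the probe over HTTP):

```python
import html

def render_search_vulnerable(query):
    # Hypothetical handler that echoes input unescaped -- reflected XSS.
    return "<h1>Results for %s</h1>" % query

def render_search_safe(query):
    # Same handler with output encoding applied.
    return "<h1>Results for %s</h1>" % html.escape(query)

PROBE = "<script>alert(1)</script>"

def looks_vulnerable(handler):
    # The scanner injects the probe and checks whether it comes
    # back verbatim in the response body.
    return PROBE in handler(PROBE)

print(looks_vulnerable(render_search_vulnerable))  # True
print(looks_vulnerable(render_search_safe))        # False
```

Because the test exercises the running code path, a positive result maps directly to an exploit, which is the strength the slide describes.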
Hybrid Analysis brings together Dynamic and Static to correlate and verify the results. Issues identified using dynamic analysis can be traced to the offending line of code. Issues identified in static analysis can be validated with an external test.
Client-side Analysis (aka JSA) analyzes code which is downloaded to the client. As more functionality is performed client-side, the prospect of client-side vulnerabilities and exploits increases. This capability, new in 2011, is unique in the market.
Run-time Analysis (aka Glassbox) places a run-time agent on the application machine, and analyzes the application as it is being tested. This combines the aspects of Dynamic and Static analysis at run-time, finding more vulnerabilities with greater accuracy. Glassbox analysis was introduced in the most recent release of AppScan, at the end of 2011.
Organizations cannot afford the risk of a data breach. In 2009-2010, the average cost of a data breach was calculated as greater than $7.2M per breach. Organizations that we have talked to calculate the potential cost of a data breach to be in the millions of dollars, not counting the potential loss of customer trust or damage to the company’s brand.
Once you decide that you cannot afford a data breach, your objective must be to avoid a breach at the lowest possible cost. Development teams have known for a long time that the most cost-effective way to fix defects is to fix them as early as possible in the development cycle. It is well-documented that fixing defects found late can be orders of magnitude more expensive than fixing them during development.
The traditional time for a security audit is just before an application goes into production. As you can see from this chart, there is a very high cost of fixing a defect which is found at this time. The cost is so high, in fact, that many organizations will accept the risk of a breach and queue up the security fix for their next release cycle. This decision is usually driven by the business imperative to get an application into production to meet an external deadline. Clearly a more prudent and cost-effective approach is to find the defect in development, at build time, or in QA. To make this happen, you need tools which the development and QA team can use – which do not require you to be a security expert.
Due to the multiple technologies employed, AppScan can be used earlier in the development cycle. Thanks to AppScan’s developer-friendly reporting, AppScan produces actionable information for development teams. AppScan also supports integration with the development tools, causing the least disruption to the current development processes.
(Data source for defect costs: GBS industry-standard study. Defect cost is derived assuming it takes 8 hours to find, fix, and repair a defect found in code and unit test. Defect find/fix/repair cost for other phases is calculated by applying the phase multiplier to a blended rate of $80/hr.)
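The study’s cost model is simple arithmetic: a base cost of 8 hours at the blended rate, scaled by a per-phase multiplier. A short sketch (the actual phase multipliers are not given in this deck, so the values below are illustrative assumptions):

```python
BLENDED_RATE = 80   # USD/hour, from the study note above
BASE_HOURS = 8      # hours to find, fix, and repair in code/unit test

# Illustrative phase multipliers -- assumed for this sketch, not the
# study's published figures:
MULTIPLIERS = {"coding": 1, "build": 5, "qa": 10, "production": 30}

base_cost = BASE_HOURS * BLENDED_RATE  # $640 in code/unit test
for phase, m in MULTIPLIERS.items():
    print("%-10s $%d" % (phase, base_cost * m))
```

Even with conservative multipliers, a defect caught in production costs tens of times more than one caught during coding, which is the argument for testing early.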
In many organizations, security and development teams do not communicate well. This is understandable, because there are very few tools and processes to facilitate that communication. Using AppScan Enterprise, customers can have a single repository of application security information, which ties in to the other development tools in use. Security analysts and auditors can establish security testing policies and templates to be used by the development team. Security Auditors can submit identified vulnerabilities as software defects. Developers can run tests early in the life cycle, and obtain valuable remediation advice and assistance. Managers can maintain oversight of the process. This visibility can be a key component of a compliance program as well.
To summarize, a proactive approach is required for Application Security. Organizations should not ignore Application vulnerabilities – the cost of a breach is too high and the risk is too great. The question becomes “what is the most cost-effective way of reducing the risk of a data breach?”
This chart summarizes the key steps that an organization can take:
Test early in the cycle. As seen earlier in this presentation, this reduces the cost of fixing by orders of magnitude.
Bridge the gap between “Security” and “Development”, by providing an application security toolset that is intended to support both of their needs, and facilitating communication and common visibility.
Use automation to integrate with application development tools, improving the flow of data and reducing disruption of the current development process.
Threat Landscape:
Vulnerabilities are being disclosed at a rate of roughly 12 per day
Automated exploit kits appear within weeks of new disclosures
Persistent and stealthy attacks continuously search chosen targets for weaknesses
IT Infrastructure:
Mobile device integration multiplies complexity of endpoints
Evolving networking and connectivity standards
Rapid growth of Web applications
Compliance isn’t enough
Routine tactics only appease auditors
Protecting business assets requires continuous monitoring
Complete spectrum of tools required to safeguard networks
These dynamics contribute to a whack-a-mole scenario where it’s impossible to totally secure the network.
a. Data overload: an “ocean” of issues overwhelms patching and remediation processes. You should have the ability to identify and prioritize vulnerabilities based on context (links to Intelligence)
b. Siloed system limitations: multiple systems housing vulnerabilities for networks, applications, and databases create huge inefficiencies in both time and effort. You should have the ability to integrate vulnerability management processes and data into a single platform (links to Integration)
c. Unknown risks remain: dated information and missing coverage allow security weaknesses to remain hidden. You should have the ability to discover new assets and scale to new environments with ease (links to Automation)
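The context-based prioritization described in point (a) can be sketched as a scoring function that weighs a raw severity score by network context. This is an illustrative weighting invented for this sketch, not QVM’s actual algorithm:

```python
def priority(vuln):
    # Illustrative context weighting (not QVM's real scoring): boost
    # issues on internet-exposed assets, suppress issues on
    # applications with no observed network traffic.
    score = vuln["cvss"]
    if vuln["exposed_to_internet"]:
        score *= 1.5
    if not vuln["app_has_traffic"]:
        score *= 0.25
    return score

vulns = [
    {"id": "V1", "cvss": 9.0, "exposed_to_internet": False, "app_has_traffic": False},
    {"id": "V2", "cvss": 6.5, "exposed_to_internet": True,  "app_has_traffic": True},
]
ranked = sorted(vulns, key=priority, reverse=True)
print([v["id"] for v in ranked])  # context puts V2 ahead of higher-CVSS V1
```

The point of the sketch: with context applied, a lower-severity but exposed, active issue outranks a high-CVSS finding on a dormant application, which is how the “ocean” of raw findings gets shrunk to an actionable list.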
Integration
Shares QRadar deployed infrastructure, asset and network models, data repositories, reports, dashboards, APIs, and workflow
Incorporates data from IPS, Firewalls, X-Force, flow monitors, web application scanners, endpoint systems, and more
Automation
Quickly and dynamically scans discovered network assets
Alerts users to out-of-policy or high risk vulnerabilities
Updates include new vulnerability signatures
Provides complete audit trail from detection through remediation
Intelligence
Aggregates vulnerability data from multiple scanners and database feeds for superior visibility
Reduces data overload by applying network security and usage context
Excludes remediated issues from future reports
We partnered with an established vendor to revamp and integrate a new scanning engine into QRadar. Between us and our partner, we’ve been monitoring and managing vulnerabilities longer than anyone else in the industry (considering IBM also has the older ISS scanner engine).
Revamped the architecture of our product
Totally integrated into QRadar
Used well established PCI-certified engine
Achieved through partnership
QRadar Vulnerability Manager's primary competitors are standalone VM solutions, including Qualys, Nessus, Rapid7, and nCircle.
The primary differentiation between QVM and these solutions comes from QVM's integration with QRadar, specifically:
1. QVM is the only vulnerability management solution that offers complete network context.
Network context means customers can reduce the number of vulnerabilities they need to focus on.
QVM can apply network usage context to vulnerability management, identifying which vulnerable assets are communicating with internal and external threat sources. (Standalone vulnerability solutions cannot do this, as they have no network traffic visibility.)
QVM can apply QFlow layer-7 traffic data to vulnerabilities, highlighting which vulnerabilities have (or have no) associated network traffic, indicating whether the vulnerable applications are active. (Standalone vulnerability solutions cannot do this, as they have no network traffic visibility.)
QVM can understand which vulnerabilities are exposed to threat sources on the network due to firewall and IPS configuration. (nCircle has some limited capability in this area, but the other leading vulnerability tools do not. Standalone VA solutions require additional integrations with tools such as RedSeal, Skybox, and AlgoSec to do this, adding cost, integration headaches, and duplication of work.)
2. QVM can provide complete visibility of web application, database, endpoint, and network infrastructure vulnerabilities from multiple VA solutions.
Standalone VM solutions do offer web application, database, and endpoint scanning, but they are not as comprehensive as the dedicated point solutions in this space, which is why many customers also run point solutions to address these areas. (QVM is the only VA solution on the market that can aggregate all of them.)
3. QVM can provide internal and external scanning without any additional infrastructure.
The XGS 5100 is a follow-on release from our initial launch of this product last year
Positioning the solution around three main pillars
- Threat protection
- Network control
- Integration
We’ll get into each of these pillars a bit more in a minute…
Getting back into the three pillars of XGS that I laid out previously, let’s talk about the protection capabilities
Having protection capabilities is table stakes for anyone who claims to be an IPS
The type of protection offered is very important as well
This is something we’ve been known for over many years; it comes from ISS, who helped invent this whole market back in the late 1990s
Infrastructure protection
- Still very key, but the definition is blurry
- Infrastructure attacks: OS/service attacks up about 4% YoY; web application attacks up 14% YoY, so protection is key
- Our solution offers protection against all of these different types of attacks
User protection
- Common adage these days in security: “Why hack the infrastructure when you can hack the users?”
- We have seen an 8x increase since 2010 in the number of spear phishing attacks
- With this in mind, the XGS adds a new layer of user protection capabilities to help prevent user-based attacks
The second pillar of our positioning involves comprehensive network visibility and control
This involves:
identifying applications on the network,
associating them with their corresponding users,
and controlling actions
Security use cases for this:
botnet C&C,
phishing links,
anonymous proxies, etc.
Non-security use cases as well,
like blocking Skype,
restricting posting access to Facebook,
controlling access to Pandora
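The use cases above boil down to evaluating (application, action) pairs against a policy table. A minimal sketch of that idea, using an invented rule format rather than actual XGS policy syntax:

```python
# Hypothetical policy rules (illustrative only, not XGS syntax):
# "any" as the action matches every action for that application.
RULES = [
    {"app": "Skype",    "action": "any",  "verdict": "block"},
    {"app": "Facebook", "action": "post", "verdict": "block"},
    {"app": "Facebook", "action": "read", "verdict": "allow"},
]

def evaluate(app, action):
    # First matching rule wins; default is allow.
    for rule in RULES:
        if rule["app"] == app and rule["action"] in ("any", action):
            return rule["verdict"]
    return "allow"

print(evaluate("Skype", "call"))     # block
print(evaluate("Facebook", "read"))  # allow
print(evaluate("Facebook", "post"))  # block
```

The granularity matters: the same application (Facebook) is allowed for one action and blocked for another, which is what distinguishes application control from simple port blocking.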
Finally, moving to our third pillar, integration is something that we do particularly well compared to the rest of the industry.
An IPS never stands alone; it must play well with others
This starts with adaptable deployment
- Network interfaces to match what’s there
- Flexible licensing so you don’t pay for throughput you’re not yet using
- Integrated bypass and SSL in a 1U appliance so customers save on both power and rack space
Integration with QRadar
- not just for events, but also for flow data
- gives customers more complete view of network, saves on flow collectors
Depth of portfolio is also key, especially for the types of clients that IBM services