Splunk is a powerful platform for understanding your data. The preview of the Machine Learning Toolkit and Showcase App extends Splunk with a rich suite of advanced analytics and machine learning algorithms. In this session, we'll present an overview of the app architecture and API and show you how to use Splunk to easily perform a variety of tasks, including outlier and anomaly detection, predictive analytics, and event clustering. We’ll use real data to explore these techniques and explain the intuition behind the analytics.
Disclaimer
During the course of this presentation, we may make forward-looking statements regarding future events
or the expected performance of the company. We caution you that such statements reflect our current
expectations and estimates based on factors currently known to us and that actual events or results
could differ materially. For important factors that may cause actual results to differ from those contained
in our forward-looking statements, please review our filings with the SEC. The forward-looking
statements made in this presentation are being made as of the time and date of its live presentation.
If reviewed after its live presentation, this presentation may not contain current or accurate information.
We do not assume any obligation to update any forward-looking statements we may make.
In addition, any information about our roadmap outlines our general product direction and is subject to
change at any time without notice. It is for informational purposes only and shall not be incorporated
into any contract or other commitment. Splunk undertakes no obligation either to develop the features
or functionality described or to include any such feature or functionality in a future release.
Detect Network Outliers at Large Telco
• Monitor network behavior in real time & respond to changes in environment
– “The ability to model the behavior of complex systems and alert on deviations is
where IT operations and security operations are headed and Splunk and MLTS have
given us a head start in moving to the next level...”
• Benefits with Splunk ML:
– Reduced Downtime
– Increased Service Availability
– Better Customer Satisfaction
– Better prioritization of anomalous events
– Increased revenue per unit (RPU)
• Tech Overview:
– An operationalized solution in production based on Splunk ML Toolkit outlier
detection, customized to leverage up to a month of data and to use voting
strategies for robust outlier detection.
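The voting idea can be sketched roughly as follows (an illustrative Python sketch, not the customer's actual implementation; the MAD-based test and the daily/weekly/monthly window sizes are assumed stand-ins for whatever per-window detectors are combined):

```python
# Flag a point as an outlier only if a majority of per-window detectors
# agree. Window sizes assume hourly data: a day, a week, a month.
from statistics import median

def mad_outlier(window, value, k=3.0):
    """Median-absolute-deviation test: True if value deviates from the
    window median by more than k * MAD."""
    med = median(window)
    mad = median(abs(x - med) for x in window)
    if mad == 0:
        return False
    return abs(value - med) > k * mad

def voted_outlier(history, value, window_sizes=(24, 168, 720)):
    """Run the MAD test over several look-back windows and vote."""
    votes = sum(
        mad_outlier(history[-w:], value)
        for w in window_sizes
        if len(history) >= w
    )
    checked = sum(1 for w in window_sizes if len(history) >= w)
    return checked > 0 and votes > checked / 2
```

Voting across windows is what makes the detector robust: a point has to look anomalous at several time scales before it is flagged, which suppresses one-off blips in any single window.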
Noise Rise Monitoring at Large Telco
• Cell Tower Monitoring:
– Monitor noise rise of more than 20,000 cells per tower automatically
– Detect if cells are misconfigured
– Detect if cells are jammed by an external interfering device
• Benefits with Splunk ML:
– Better Troubleshooting
– Increased Service Availability
– Increased Device Lifespan
• Tech Overview:
– Use Linear Regression to model the Received Total Wideband Power (RTWP)
using the total traffic at the cell.
– By checking model fit, the customer can tell whether a cell is configured correctly.
– Monitor cells in real time using properly fitted models and identify sudden changes in
behavior caused by overloading or external interference.
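As a rough sketch of that approach (the field names and dBm values below are made up, and a real deployment would use the toolkit's LinearRegression and its fit statistics), ordinary least squares plus a goodness-of-fit check looks like:

```python
# Fit RTWP as a linear function of cell traffic, then use R-squared as
# the model-fit check; a poor fit suggests a misconfigured cell.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def r_squared(x, y, a, b):
    """Fraction of the variance in y explained by the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

traffic = [1.0, 2.0, 3.0, 4.0, 5.0]          # illustrative load values
rtwp = [-104.8, -103.9, -103.1, -102.2, -101.3]  # dBm, made-up readings
a, b = fit_line(traffic, rtwp)
fit = r_squared(traffic, rtwp, a, b)
# A cutoff such as fit < 0.8 could flag a suspect cell (assumed threshold).
```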
ML 101: What is it?
• Machine Learning (ML) is a process for generalizing from examples
– Examples = example or “training” data
– Generalizing = build “statistical models” to capture correlations
– Process = ML is never done; you must keep validating & refitting models
• Simple ML workflow:
– Explore data
– FIT models based on data
– APPLY models in production
– Keep validating models
“All models are wrong, but some are useful.”
- George Box
3 Types of Machine Learning
1. Supervised Learning: generalizing from labeled data
2. Unsupervised Learning: generalizing from unlabeled data
3. Reinforcement Learning: generalizing from rewards in time
Examples: the Leitner system (flashcards); recommender systems
IT Ops: Predictive Maintenance
Problem: Network outages and truck rolls cause big time & money expense
Solution: Build predictive model to forecast outage scenarios, act pre-emptively & learn
1. Get resource usage data (CPU, latency, outage reports)
2. Explore data, and fit predictive models on past / real-time data
3. Apply & validate models until predictions are accurate
4. Forecast resource saturation, demand & usage
5. Surface incidents to IT Ops, who INVESTIGATES & ACTS
Operationalize
Security: Find Insider Threats
Problem: Security breaches cause big time & money expense
Solution: Build predictive model to forecast threat scenarios, act pre-emptively & learn
1. Get security data (data transfers, authentication, incidents)
2. Explore data, and fit predictive models on past / real-time data
3. Apply & validate models until predictions are accurate
4. Forecast abnormal behavior, risk scores & notable events
5. Surface incidents to Security Ops, who INVESTIGATES & ACTS
Operationalize
Business Analytics: Predict Customer Churn
Problem: Customer churn causes big time & money expense
Solution: Build predictive model to forecast possible churn, act pre-emptively & learn
1. Get customer data (set-top boxes, web logs, transaction history)
2. Explore data, and fit predictive models on past / real-time data
3. Apply & validate models until predictions are accurate
4. Forecast churn rate & identify customers likely to churn
5. Surface incidents to Business Ops, who INVESTIGATES & ACTS
Operationalize
Summary: The ML Process
Problem: <Stuff in the world> causes big time & money expense
Solution: Build predictive model to forecast <possible incidents>, act pre-emptively & learn
1. Get all relevant data to problem
2. Explore data, and fit predictive models on past / real-time data
3. Apply & validate models until predictions are accurate
4. Forecast KPIs & notable events associated to use case
5. Surface incidents to X Ops, who INVESTIGATES & ACTS
Operationalize
Splunk User Behavior Analytics (UBA)
• ~100% of breaches involve valid credentials (Mandiant Report)
• Need to understand normal & anomalous behaviors for ALL users
• UBA detects Advanced Cyberattacks and Malicious Insider Threats
• Lots of ML under the hood:
– Behavior Baselining & Modeling
– Anomaly Detection (30+ models)
– Advanced Threat Detection
• E.g., Data Exfil Threat:
– “Saw this strange login & data transfer
for user mpittman at 3am in China…”
– Surface threat to SOC Analysts
Machine Learning in Splunk ITSI
Adaptive Thresholding:
• Learn baselines & dynamic thresholds
• Alert & act on deviations
• Manage 1000s of KPIs & entities
• Stdev/Avg, Quartile/Median, Range
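The three policies named above can be sketched as follows (an illustrative approximation only; ITSI's actual thresholding logic and defaults differ, and the multiplier `n` is an assumed tuning knob):

```python
# Three dynamic-threshold policies over a window of KPI samples:
# stdev around the mean, IQR around the median, and a padded min/max range.
from statistics import mean, stdev, quantiles

def stdev_avg(values, n=2.0):
    """Thresholds at mean +/- n standard deviations."""
    m, s = mean(values), stdev(values)
    return m - n * s, m + n * s

def quartile_median(values, n=1.5):
    """Thresholds at median +/- n interquartile ranges."""
    q1, med, q3 = quantiles(values, n=4)
    iqr = q3 - q1
    return med - n * iqr, med + n * iqr

def range_pct(values, n=0.1):
    """Thresholds padding the observed min/max by a fraction of the range."""
    lo, hi = min(values), max(values)
    span = hi - lo
    return lo - n * span, hi + n * span
```

A sample outside the returned (lower, upper) pair for the active policy would trigger the deviation alert.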
Anomaly Detection:
• Find “hiccups” in expected patterns
• Catches deviations beyond thresholds
• Uses Holt-Winters algorithm
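For illustration, a compact additive Holt-Winters smoother of the kind referenced here can be written in a few lines (this is the textbook formulation, not ITSI's implementation; `alpha`, `beta`, and `gamma` are assumed smoothing parameters). Points whose residual from the one-step forecast is large are the "hiccups":

```python
# Additive Holt-Winters (triple exponential smoothing): maintain level,
# trend, and seasonal components, and emit a one-step-ahead forecast
# before absorbing each new observation.

def holt_winters(series, season, alpha=0.5, beta=0.1, gamma=0.1):
    """Return one-step-ahead forecasts for an additive-seasonal series.
    Requires at least two full seasons of data for initialization."""
    level = sum(series[:season]) / season
    trend = (sum(series[season:2 * season]) - sum(series[:season])) / season ** 2
    seasonal = [x - level for x in series[:season]]
    forecasts = []
    for i, x in enumerate(series):
        s = seasonal[i % season]
        forecasts.append(level + trend + s)          # predict before updating
        last_level = level
        level = alpha * (x - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonal[i % season] = gamma * (x - level) + (1 - gamma) * s
    return forecasts
```

On a cleanly periodic series the forecasts track the data exactly; a sudden deviation shows up as a large residual between forecast and observation, which is what gets surfaced as an anomaly.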
ML Toolkit & Showcase
• Splunk-supported framework for building ML apps
– Get it for free: http://tiny.cc/splunkmlapp
• Leverages Python for Scientific Computing (PSC) add-on:
– Open-source Python data science ecosystem
– NumPy, SciPy, scikit-learn, pandas, statsmodels
• Showcase use cases: Predict Hard Drive Failure, Server Power
Consumption, Application Usage, Customer Churn & more
• Standard algorithms out of the box:
– Supervised: Logistic Regression, SVM, Linear Regression, Random Forest, etc.
– Unsupervised: KMeans, DBSCAN, Spectral Clustering, PCA, KernelPCA, etc.
• Implement one of 300+ algorithms by editing Python scripts
1. Get Data & Find Decision-Makers
[Architecture diagram: machine data sources (devices, networks, servers, applications, online shopping carts, GPS/cellular, clickstreams, Hadoop) and structured data sources (CRM, ERP, HR, billing, product, finance, data warehouse) flow into Splunk via DB Connect, look-ups, ODBC, SDK, and API; IT users, analysts, and business users consume the data through ad hoc search, monitoring and alerting, reports/analysis, and custom dashboards.]
2. Explore Data, Build Searches & Dashboards
• Start with the Exploratory Data Analysis phase
– “80% of data science is sourcing, cleaning, and preparing the data”
– Tip: leverage ITSI KPIs – lots of domain knowledge
• For each data source, build “data diagnostic” dashboard
– What’s interesting? Throw up some basic charts.
– What’s relevant for this use case?
– Any anomalies? Are thresholds useful?
• Mix data streams & compute aggregates
– Compute KPIs & statistics w/ stats, eventstats, etc.
– Enrich data streams with useful structured data
– stats count by X Y – where X,Y from different sources
– Build new KPIs from what you find
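Outside Splunk, the `stats count by X Y` pattern above is just a grouped count; a plain-Python sketch (the `host` and `status` field names are assumed examples):

```python
# Equivalent of: ... | stats count by host status
from collections import Counter

events = [
    {"host": "web01", "status": 200},
    {"host": "web01", "status": 500},
    {"host": "web02", "status": 200},
    {"host": "web01", "status": 200},
]

counts = Counter((e["host"], e["status"]) for e in events)
# counts[("web01", 200)] == 2
```

In SPL the interesting part is that `host` and `status` can come from different data sources joined in the same search, which is what makes cross-source KPIs cheap to build.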
3. Fit, Apply & Validate Models
• ML SPL – New grammar for doing ML in Splunk
• fit – fit models based on training data
– [training data] | fit LinearRegression costly_KPI from feature1 feature2 feature3 into my_model
• apply – apply models on testing and production data
– [testing/production data] | apply my_model
• Validate Your Model (The Hard Part)
– Why hard? Because statistics is hard! Also: model error ≠ real world risk.
– Analyze residuals, mean-square error, goodness of fit, cross-validate, etc.
– Take Splunk’s Analytics & Data Science Education course
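The validation checks listed above can be sketched in plain Python (using a deliberately trivial mean predictor as a stand-in for whatever model you actually fit; the fold count is an assumed choice):

```python
# Residuals, mean-squared error, and k-fold cross-validation, using a
# mean predictor purely for illustration.

def mse(y_true, y_pred):
    """Mean-squared error of predictions against observed values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def k_fold_mse(y, k=5):
    """Cross-validate a mean predictor: train on k-1 folds, score the rest."""
    fold = len(y) // k
    scores = []
    for i in range(k):
        test = y[i * fold:(i + 1) * fold]
        train = y[:i * fold] + y[(i + 1) * fold:]
        pred = sum(train) / len(train)      # "fit" the trivial model
        scores.append(mse(test, [pred] * len(test)))
    return sum(scores) / k
```

The point of scoring on held-out folds is exactly the hard part named above: a model that looks good on its training data can still carry real-world risk, and only out-of-sample error exposes that.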
4. Predict & Act
• Forecast KPIs & predict notable events
– When will my system have a critical error?
– In which service or process?
– What’s the probable root cause?
• How will people act on predictions?
– Is this a Sev 1/2/3 event? Who responds?
– Deliver via Notable Events or dashboard?
– Human response or automated response?
• How do you improve the models?
– Iterate, add more data, extract more features
– Keep track of true/false positives
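Keeping track of true/false positives amounts to computing precision and recall for each model iteration; a minimal sketch (the incident flags below are illustrative):

```python
# Compare predicted incident flags against analyst-confirmed outcomes.

def precision_recall(predicted, confirmed):
    """Return (precision, recall) for boolean-style flag sequences."""
    tp = sum(1 for p, c in zip(predicted, confirmed) if p and c)
    fp = sum(1 for p, c in zip(predicted, confirmed) if p and not c)
    fn = sum(1 for p, c in zip(predicted, confirmed) if not p and c)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Tracking these two numbers per iteration tells you whether adding data or features actually improved the model, or just shifted errors between false alarms and missed incidents.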
5. Operationalize Your Models
• Operationalizing closes the loop of the ML Process:
1. Get data
2. Explore data & fit models
3. Apply & validate models
4. Forecast KPIs & events
5. Surface incidents to Ops team
• When you deliver the outcome, keep track of the response
– Human-generated response (detailed journal logs, etc)
– Machine-generated response (workflow actions, etc)
– External knowledge (closed tickets data, DB records, etc)
• Then operationalize: feed back Ops analysis to data inputs, repeat
• Lots of hard work & stats, but lots of value will come out.
Operationalize
Next Steps with Splunk ML
• Reach out to your Tech Team! We can help architect ML solutions.
• Lots of ML commands in Core Splunk (predict, anomalydetection, stats)
• ML Toolkit & Showcase – available and free, ready to use
– Get it for free: http://tiny.cc/splunkmlapp
• Splunk UBA: Applied ML for Security
– Unsupervised learning of Users & Entities
– Surfaces Anomalies & Threats
• Splunk ITSI: Applied ML for ITOA use cases
– Manage 1000s of KPIs & alerts
– Adaptive Thresholding & Anomaly Detection
• ML Early Adopter Program:
– Connect with Product & Engineering teams - mlprogram@splunk.com
Editor's Notes
We’re headed to the East Coast!
2 inspired Keynotes – General Session and Security Keynote + Super Sessions with Splunk Leadership in Cloud, IT Ops, Security and Business Analytics!
165+ Breakout sessions addressing all areas and levels of Operational Intelligence – IT, Business Analytics, Mobile, Cloud, IoT, Security…and MORE!
30+ hours of invaluable networking time with industry thought leaders, technologists, and other Splunk Ninjas and Champions waiting to share their business wins with you!
Join the 50%+ of Fortune 100 companies who attended .conf2015 to get hands on with Splunk. You’ll be surrounded by thousands of other like-minded individuals who are ready to share exciting and cutting edge use cases and best practices. You can also deep dive on all things Splunk products together with your favorite Splunkers.
Head back to your company with both practical and inspired new uses for Splunk, ready to unlock the unimaginable power of your data! Arrive in Orlando a Splunk user, leave Orlando a Splunk Ninja!
REGISTRATION OPENS IN MARCH 2016 – STAY TUNED FOR NEWS ON OUR BEST REGISTRATION RATES – COMING SOON!
[Shawn]
Q: Why does BA matter?
A1: BA can drive VOLUME in your accounts. Customer examples to come.
A2: BA can drive VALUE in your accounts. Strategic, high-level, high-profile use cases. Use HIGH-VALUE SMALL DATA to enrich LARGE VOLUMES of MACHINE DATA
Q: What is a statistical model?
A: A model is a little copy of the world you can hold in your hands.
Formal: A model is a parametrized relationship between variables.
FITTING a model sets the parameters using feature variables & observed values
APPLYING a model fills in predicted values using feature variables
Image source: http://phdp.github.io/posts/2013-07-05-dtl.html
Supervised learning is where you have existing LABELS in the data to help you out.
Example: If you’re training a model for CUSTOMER CHURN, historically you know which customers stayed and which left. You can build a model to correlate historical churn with other features in the data. Then you can PREDICT churn for each customer based on everything they’re doing in real-time and have done in the past.
Unsupervised learning is where you have NO LABELS to help you out. You have to figure out patterns
Example: If you’re trying to do BEHAVIORAL ANALYTICS, you might just have a big confusing pile of IT & Security data to wade through. Unsupervised learning is the art & science of finding PATTERNS, BASELINES and ANOMALIES in the data. Once you understand all this (that’s hard!) you can try to predict possible INCIDENTS and THREATS.
Good ML involves FEEDBACK loops. Best bet is to incorporate INCIDENT RESPONSE data and learn from what analysts have done in the past.
[NEXT SLIDE: Reinforcement Learning]
Reinforcement Learning is basically Supervised Learning where LABELS = REWARDS, and there is a strong focus on TIME and FEEDBACK LOOPS. This is how you OPERATIONALIZE machine learning: by looping back results of analysis and workflow and LEARN from interactions with the world.
Rewards can be POSITIVE or NEGATIVE.
Image: The Leitner system is reinforcement learning for flashcards. Correct answers “advance” and accumulate more points. Incorrect answers go back to the beginning.
https://en.wikipedia.org/wiki/Leitner_system
Reinforcement learning is rooted in behavioral psychology. Humans & animals are hard-wired for rewards
https://en.wikipedia.org/wiki/Reinforcement_learning
Q: How is this slide similar to the previous one? (go back and forth)
A: The ML Process is the same, it’s just that the data & the operations teams are different. Also different will be the actual analysis in the middle, but the *process* of doing that analysis is the same.
The ML process is itself a generalization of the different use cases. ML spans domains!
The arrow means OPERATIONALIZE. Feed back incident data & other high-level analysis back into the ML Process. Keep exploring that data & fitting better models to align with reality. Loop Step #5 (Act) back to Step #1 (Data).
Reinforcement learning lets us OPERATIONALIZE machine learning. When the machine recommends something to an analyst, the model can LEARN from the outcome of their work. Create a culture of REWARDS for your analytics team, not punishments.
The machines can/should learn from ALL the available data. You might have to build complex ML workflows. Want good Splunk admin to help architect.
Want VIRTUOUS CYCLE between human-machine interaction
Re: 100% of breaches involve valid credentials:
"Mandiant is the leader in incident response. They are the best of the best. They're brought in to deal with the largest, highest profile, most damaging breaches. When you read a news headline about a large organization being compromised, there is a great chance that Mandiant is working behind the scenes to eradicate the attackers from the environment. In a recent yearly report, when they looked across all the very damaging attacks they responded to, they noticed that valid credentials were used at some point in every single one of them. Why do we care? Well we care because it means that we cannot use simple techniques like counting failed logins to detect an attack. In fact, based on this stat, there may not even be any failed logins to count! Instead we need to be much smarter. For example, how can we look at 1000 successful logins and determine which of them was the malicious one? We do that through behavior analytics, baseline/outlier, ML, etc."
Free app! Toolkit & PSC are both free. Go to the ML App link above and click Documentation. Links for all distros.
Q: Why standalone SH?
A: Don’t want ML exploration & production to bring down other Splunk workloads
Can use standalone 6.4 SH with older version SH cluster & indexers.
Re: ML App v0.9. To be updated after new release. Stay tuned! Lots to come w/ Splunk ML.
Image modified from cover of book Protecting Study Volunteers in Research
Publisher: CenterWatch LLC; 4th edition (June 15, 2012)
NEXT: either leave slide & discuss OR show ML demo
Before you do machine learning, you need DATA and DECISION-MAKERS. Walk before you can run! Start with useful data sources that can help people solve problems, and build basic dashboards correlating different things in the data.
This is called EXPLORATORY DATA ANALYSIS. Once you do that, THEN try to fit models based on what seems to correlate. Interviewing & iterating with decision-makers is key.
DATA: ML isn’t magic. You need good data to learn from.
DECISION-MAKERS: Once you find patterns, anomalies, etc., who are you going to deliver to them to? How do they want information presented? Emails? Dashboards? Incident tickets?
Walk before running! Precursor to building models & doing ML.
Source for “80% of data science is EDA” quote:
http://www.nytimes.com/2014/08/18/technology/for-big-data-scientists-hurdle-to-insights-is-janitor-work.html?_r=0
Image: OpenStreetMap logo, from Wikipedia. Creative Commons
Remember: Machine Learning is a PROCESS. Takes a lot of work & elbow grease to get from Exploratory Data Analysis to ML Models in Production.
Q: Why hard?
A1: Statistics is hard. Subtle questions re: model error & statistical assumptions. Remember: “All models are wrong, some are useful”
A2: Validation is also difficult because not everyone has the same requirements. For example, for some users false positives may be much more expensive than false negatives; for others, the opposite may be true. For some users, being 2X wrong is twice as bad as being X wrong; for others, there may be a non-linear relationship between error and badness.
Also: Output an aggregate table for further analysis? (send via ODBC Driver, DB Connect or Hunk/Hadoop)
Re: ML App v0.9. To be updated after new release. Time estimate: “soon, stay tuned!”
If you want to use ML in production, let us know! We have customers using ML in production TODAY. e.g., New York Air Brake
Time for ML demo!
Get the ML App: http://tiny.cc/splunkmlapp
Want more? Take Splunk’s Analytics & Data Science course!
Course prework: http://bit.ly/splunkanalytics
A direct customer-Splunk engagement focused on real-world use of the Splunk Enterprise Machine Learning Toolkit and Showcase app and related SPL commands.
Objectives:
• Help the customer to be successful in the impactful use of ML
• Help Splunk to understand customer use cases and product requirements
Details:
• Splunk Account SE plus PM/Engineering work directly with the customer to guide usage, provide support, note analytics and product requirements, and refine the product where feasible
• Customer participates in the above, developing 1 or more models and putting them in production
• Customer agrees to be referenced publicly, sharing reasonable detail and business impact
• Customer agrees to participate in a set of activities that may include: case study, press quote, use of logo, PR/AR reference call, video profile