Copyright © 2015 Splunk Inc.
Splunk Ninjas:
New Features and Search Dojo
Steve Hogan
Sr. Sales Engineer
shogan@splunk.com
Copyright © 2014 Splunk Inc.
Name: Intercontinental
Dallas Meetings
Access Code: AWS2015
Thanks to Our Sponsors
4
Safe Harbor Statement
During the course of this presentation, we may make forward-looking statements regarding future events
or the expected performance of the company. We caution you that such statements reflect our current
expectations and estimates based on factors currently known to us and that actual events or results could
differ materially. For important factors that may cause actual results to differ from those contained in our
forward-looking statements, please review our filings with the SEC. The forward-looking statements
made in this presentation are being made as of the time and date of its live presentation. If reviewed
after its live presentation, this presentation may not contain current or accurate information. We do not
assume any obligation to update any forward looking statements we may make. In addition, any
information about our roadmap outlines our general product direction and is subject to change at any
time without notice. It is for informational purposes only and shall not be incorporated into any contract
or other commitment. Splunk undertakes no obligation either to develop the features or functionality
described or to include any such feature or functionality in a future release.
5
Agenda
What’s new in 6.3
– Breakthrough Performance and Scale
– Advanced Analysis and Visualization
– High Volume Event Collection
– Enterprise-Scale Platform
Harness the power of search and dashboards
– Search Examples and dashboard tips.
6
Splunk Enterprise & Cloud 6.3
Breakthrough Performance & Scale – Doubles performance and lowers TCO
Advanced Analysis & Visualization – Simplifies analysis of large datasets
High Volume Event Collection – Supports DevOps and IoT data analysis at scale
Enterprise-Scale Platform – Enterprise management and integration
7
Breakthrough Performance and Scale
Vertical scaling maximizes use of CPU power through:
– Indexer Parallelization
– Search Parallelization
Improved Search Performance and System Capacity through:
– Intelligent Job Scheduling
Indexer Parallelization: Cisco UCS Benchmark Preview
[Bar chart: indexing throughput (MB/sec) for Splunk 6.2, Splunk 6.3 (2 pipelines), and Splunk 6.3 (4 pipelines)]
4X More Data Indexing (Pure indexing)
Search Parallelization: Cisco UCS Benchmark Preview
[Bar charts: search time (seconds) for Splunk 6.2, Splunk 6.3 (2 pipelines), and Splunk 6.3 (4 pipelines)]
3X Faster Search / 2X More Data (8 concurrent searches, 70 MB/sec indexing)
6X Faster Search Speed (8 concurrent searches)
10
Summary of Parallelization Settings
Setting Description Setting name / location
Default
Value
Max
Recmd
Value
Impact
Batch mode search
parallelization
Allows a batch mode search to open
additional search pipelines on each
indexer
limits.conf
batch_search_max_pipeline
1 2
Multiples the number of
search pipelines per batch
mode search per indexer.
Parallel
summarization for
data models
Allows the scheduler to run
concurrent data model acceleration
searches on the indexers.
datamodels.conf
acceleration.max_concurrent
2 2
Multiples the number of
scheduled acceleration
searches per data model per
indexer.
Parallel
summarization for
report accelerations
Allows the scheduler to run
concurrent report acceleration
searches on the indexers.
savedsearches.conf
auto_summarize.max_concurrent
1 2
Multiples the number of
scheduled acceleration
searches per search per
indexer.
Index parallelization
Allows concurrent data processing
pipelines on indexers and
forwarders.
server.conf
parallelIngestionPipelines
1 2
Multiples the number of
pipelines per indexer.
http://docs.splunk.com/Documentation/Splunk/latest/Capacity/Parallelization
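For reference, here is a minimal sketch of how these settings might look in the corresponding .conf files. The stanza names shown ([search] and [general]) are assumptions about where these keys normally live; the values are simply the maximum recommended ones from the table above and should be tuned to your hardware.

# limits.conf (assumed [search] stanza)
[search]
batch_search_max_pipeline = 2

# server.conf (assumed [general] stanza) - applies to indexers and forwarders
[general]
parallelIngestionPipelines = 2

# datamodels.conf - per accelerated data model stanza
acceleration.max_concurrent = 2

# savedsearches.conf - per accelerated saved search stanza
auto_summarize.max_concurrent = 2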
11
Intelligent Job Scheduling
• Adds better priority scoring and
search windows for much improved
saved search scheduling
• Reduces the number of skipped searches
• Re-runs scheduled searches that were missed during downtime
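As an illustration of the schedule window option, a minimal savedsearches.conf sketch; the search name and values below are placeholders.

# savedsearches.conf (sketch): give the scheduler a 15-minute window
# in which to start this hourly search instead of a fixed start time
[Hourly Error Summary]
enableSched = 1
cron_schedule = 0 * * * *
schedule_window = 15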
12
Splunk Enterprise & Cloud 6.3
Breakthrough Performance & Scale – Doubles performance and lowers TCO
Advanced Analysis & Visualization – Simplifies analysis of large datasets
High Volume Event Collection – Supports DevOps and IoT data analysis at scale
Enterprise-Scale Platform – Enterprise management and integration
13
Single Value Display
At-a-glance, single-value indicators with useful context
No JS coding / CSS styling necessary!
Configurable sparkline
Value rangemap, custom thresholds
Trend up/down - reversible
Great for Operation Centers and
War Rooms
14
Anomaly Detection
Incorporates Z-Score, IQR & histogram methodologies in a single command
Detect and summarize anomalies
Return anomalous values and
outliers
3 Commands in one
Easy-to-use
Configurable threshold
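As an illustration, a minimal sketch of the anomalydetection command against web access data; the method/action options and the bytes field are assumptions, so check the 6.3 Search Reference for the full syntax.

sourcetype=access*
| anomalydetection method=histogram action=summary bytes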
15
Choropleth maps
Visualize how a metric varies across a (custom) geographic area
50 States and World Countries built-in
3 Different Color Modes
– Sequential
– Divergent
– Categorical
Custom Polygon Definitions
– Use KMZs and also make your own!
– Shapester App!
Point-in Polygon lookups
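A minimal sketch of a choropleth-style search using the built-in geo_us_states lookup; the iplocation step and the Region field are assumptions about the data, and any field whose values match the lookup's feature IDs will work.

sourcetype=access*
| iplocation clientip
| stats count by Region
| geom geo_us_states featureIdField=Region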
16
Demo
17
Splunk Enterprise & Cloud 6.3
Breakthrough Performance & Scale – Doubles performance and lowers TCO
Advanced Analysis & Visualization – Simplifies analysis of large datasets
High Volume Event Collection – Supports DevOps and IoT data analysis at scale
Enterprise-Scale Platform – Enterprise management and integration
HTTP Event Collector
Agentless, direct data onboarding from applications and IoT devices via a standard developer API

curl -k https://<host>:8088/services/collector -H 'Authorization: Splunk <token>' -d '{"event":"Hello Event Collector"}'
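As a sketch, the same endpoint also accepts per-event metadata in the JSON payload; the sourcetype, host, and time values below are placeholders.

curl -k https://<host>:8088/services/collector -H 'Authorization: Splunk <token>' -d '{"event": {"action": "purchase", "status": 200}, "sourcetype": "myapp:events", "host": "web01", "time": 1446300000}'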
20
Splunk Enterprise & Cloud 6.3
Breakthrough Performance & Scale – Doubles performance and lowers TCO
Advanced Analysis & Visualization – Simplifies analysis of large datasets
High Volume Event Collection – Supports DevOps and IoT data analysis at scale
Enterprise-Scale Platform – Enterprise management and integration
21
Distributed Management Console - II
New topology views, status, and alerting for Splunk deployments
• Visualizes Search Head/Indexer matrix
with KPI and performance overlays
• Search Head clustering replication
and scheduler views
• Forwarder views with status and
performance data
• Index and metadata storage utilization
• System health alerting
22
Custom Alert Actions
Use Splunk Alerts to trigger & automate workflows
• Allows packaged integration with
third-party applications
• Simple admin/user configuration
• Developers can build, package, and
publish alert actions within an app
• Growing list of integrations available
23
Other Notable Additions
24
Demo
25
Download the Overview App (6.3) & 6.x Dashboard Examples
Harness the Power of
Search
27
search and filter | munge | report | cleanup
Search Processing Language
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) dc(clientip)
| rename sum(KB) AS "Total KB" dc(clientip) AS "Unique Customers"
28
Five Commands that will Solve Most Data Questions
eval - Modify or Create New Fields and Values
stats - Calculate Statistics Based on Field Values
eventstats - Add Summary Statistics to Search Results
streamstats - Cumulative Statistics for Each Event
transaction - Group Related Events Spanning Time
30
Examples
• Calculation:
sourcetype=access*
| eval KB=bytes/1024
• Evaluation:
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
• Concatenation:
sourcetype=access*
| eval connection = clientip.":".status_description
eval - Modify or Create New Fields and Values
31
eval - Modify or Create New Fields and Values
Examples
• Calculation:
sourcetype=access*
| eval KB=bytes/1024
• Evaluation:
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
• Concatenation:
sourcetype=access*
| eval connection = clientip.":".port
32
eval - Modify or Create New Fields and Values
Examples
• Calculation:
sourcetype=access*
| eval KB=bytes/1024
• Evaluation:
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
• Concatenation:
sourcetype=access*
| eval connection = clientip.":".port
34
stats – Calculate Statistics Based on Field Values
Examples
• Calculate stats and rename
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) AS "Total KB"
• Multiple statistics
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB)
• By another field
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB) by clientip
35
stats – Calculate Statistics Based on Field Values
Examples
• Calculate stats and rename
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) AS "Total KB"
• Multiple statistics
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB)
• By another field
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB) by clientip
36
stats – Calculate Statistics Based on Field Values
Examples
• Calculate statistics
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) AS "Total KB"
• Multiple statistics
sourcetype=access*
| eval KB=bytes/1024
| stats avg(KB) sum(KB)
• By another field
sourcetype=access*
| eval KB=bytes/1024
| stats sum(KB) avg(KB) by clientip
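A couple of other stats functions that come up constantly, shown as a sketch on the same data; the field names follow the examples above.

sourcetype=access*
| eval KB=bytes/1024
| stats count dc(clientip) AS unique_clients avg(KB) perc95(KB) by status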
38
eventstats – Add Summary Statistics to Search Results
Examples
• Overlay Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes
| timechart latest(avg_bytes) avg(bytes)
• Moving Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes by date_hour
| timechart latest(avg_bytes) avg(bytes)
• By created field
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
| eventstats avg(bytes) AS avg_bytes by http_response
| timechart latest(avg_bytes) avg(bytes) by http_response
39
Examples
• Overlay Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes
| timechart latest(avg_bytes) avg(bytes)
• Moving Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes by date_hour
| timechart latest(avg_bytes) avg(bytes)
• By created field
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
| eventstats avg(bytes) AS avg_bytes by http_response
| timechart latest(avg_bytes) avg(bytes) by http_response
eventstats – Add Summary Statistics to Search Results
40
eventstats – Add Summary Statistics to Search Results
Examples
• Overlay Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes
| timechart latest(avg_bytes) avg(bytes)
• Moving Average
sourcetype=access*
| eventstats avg(bytes) AS avg_bytes by date_hour
| timechart latest(avg_bytes) avg(bytes)
• By created field
sourcetype=access*
| eval http_response = if(status == 200, "OK", "Error")
| eventstats avg(bytes) AS avg_bytes by http_response
| timechart latest(avg_bytes) avg(bytes) by http_response
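eventstats is also handy for filtering events against a group statistic; a sketch, with an arbitrary 2x threshold.

sourcetype=access*
| eventstats avg(bytes) AS avg_bytes by date_hour
| where bytes > 2*avg_bytes
| stats count by clientip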
42
streamstats – Cumulative Statistics for Each Event
Examples
• Cumulative Sum
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total
| timechart max(bytes_total)
• Cumulative Sum by Field
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total by status
| timechart max(bytes_total) by status
• Moving Average
sourcetype=access*
| timechart avg(bytes) as avg_bytes
| streamstats avg(avg_bytes) AS moving_avg_bytes window=10
| timechart latest(moving_avg_bytes) latest(avg_bytes)
43
streamstats – Cumulative Statistics for Each Event
Examples
• Cumulative Sum
sourcetype=access*
| timechart sum(bytes) as bytes
| streamstats sum(bytes) as cumulative_bytes
| timechart max(cumulative_bytes)
• Cumulative Sum by Field
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total by status
| timechart max(bytes_total) by status
• Moving Average
sourcetype=access*
| timechart avg(bytes) as avg_bytes
| streamstats avg(avg_bytes) AS moving_avg_bytes window=10
| timechart latest(moving_avg_bytes) latest(avg_bytes)
44
streamstats – Cumulative Statistics for Each Event
Examples
• Cumulative Sum
sourcetype=access*
| timechart sum(bytes) as bytes
| streamstats sum(bytes) as cumulative_bytes
| timechart max(cumulative_bytes)
• Cumulative Sum by Field
sourcetype=access*
| reverse
| streamstats sum(bytes) as bytes_total by status
| timechart max(bytes_total) by status
• Moving Average
sourcetype=access*
| timechart avg(bytes) as avg_bytes
| streamstats avg(avg_bytes) AS moving_avg_bytes
window=10
| timechart latest(moving_avg_bytes) latest(avg_bytes)
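streamstats can also number events as they stream by, which is useful for sampling or picking the first few events per group; a sketch that keeps each client's first five requests.

sourcetype=access*
| streamstats count AS event_number by clientip
| where event_number <= 5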
46
transaction – Group Related Events Spanning Time
Examples
• Group by Session ID
sourcetype=access*
| transaction JSESSIONID
• Calculate Session Durations
sourcetype=access*
| transaction JSESSIONID
| stats min(duration) max(duration) avg(duration)
• Stats is Better
sourcetype=access*
| stats min(_time) AS earliest max(_time) AS latest by JSESSIONID
| eval duration=latest-earliest
| stats min(duration) max(duration) avg(duration)
47
transaction – Group Related Events Spanning Time
Examples
• Group by Session ID
sourcetype=access*
| transaction JSESSIONID
• Calculate Session Durations
sourcetype=access*
| transaction JSESSIONID
| stats min(duration) max(duration) avg(duration)
• Stats is Better
sourcetype=access*
| stats min(_time) AS earliest max(_time) AS latest by JSESSIONID
| eval duration=latest-earliest
| stats min(duration) max(duration) avg(duration)
48
transaction – Group Related Events Spanning Time
Examples
• Group by Session ID
sourcetype=access*
| transaction JSESSIONID
• Calculate Session Durations
sourcetype=access*
| transaction JSESSIONID
| stats min(duration) max(duration) avg(duration)
• Stats is Better
sourcetype=access*
| stats min(_time) AS earliest max(_time) AS latest by JSESSIONID
| eval duration=latest-earliest
| stats min(duration) max(duration) avg(duration)
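If you do use transaction, constraining it keeps memory use and runtime in check; a sketch with illustrative maxspan/maxpause limits.

sourcetype=access*
| transaction JSESSIONID maxspan=30m maxpause=10m
| stats min(duration) max(duration) avg(duration)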
49
Learn Them Well and Become a Ninja
eval - Modify or Create New Fields and Values
stats - Calculate Statistics Based on Field Values
eventstats - Add Summary Statistics to Search Results
streamstats - Cumulative Statistics for Each Event
transaction - Group Related Events Spanning Time
See many more examples and neat tricks at docs.splunk.com and answers.splunk.com
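Putting a few of these together, a sketch that flags clients whose requests are well above the hourly average; the field names and the 2x threshold are illustrative.

sourcetype=access*
| eval KB=bytes/1024
| eventstats avg(KB) AS avg_KB by date_hour
| where KB > 2*avg_KB
| stats count sum(KB) AS "Total KB" by clientip
| sort -count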
50
Bonus Dashboard (see notes for XML)
Questions?
Bonus Command
53
cluster – Find Common and/or Rare Events
Examples
• Find the most common events
*
| cluster showcount=t t=0.1
| table cluster_count, _raw
| sort - cluster_count
• Select a field to cluster on
sourcetype=access*
| cluster field=bc_uri showcount=t
| table cluster_count bc_uri _raw
| sort -cluster_count
• Most or least common errors
index=_internal source=*splunkd.log* log_level!=info
| cluster showcount=t
| table cluster_count _raw
| sort -cluster_count
54
cluster – Find Common and/or Rare Events
Examples
• Find the most common events
*
| cluster showcount=t t=0.1
| table cluster_count, _raw
| sort - cluster_count
• Select a field to cluster on
sourcetype=access*
| cluster field=bc_uri showcount=t
| table cluster_count bc_uri _raw
| sort -cluster_count
• Most or least common errors
index=_internal source=*splunkd.log* log_level!=info
| cluster showcount=t
| table cluster_count _raw
| sort -cluster_count
55
cluster – Find Common and/or Rare Events
Examples
• Find the most common events
*
| cluster showcount=t t=0.1
| table cluster_count, _raw
| sort - cluster_count
• Select a field to cluster on
sourcetype=access*
| cluster field=bc_uri showcount=t
| table cluster_count bc_uri _raw
| sort -cluster_count
• Most or least common errors
index=_internal source=*splunkd.log* log_level!=info
| cluster showcount=t
| table cluster_count _raw
| sort -cluster_count
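To surface rare events instead of common ones, the same searches can simply be sorted ascending; a sketch.

index=_internal source=*splunkd.log* log_level!=info
| cluster showcount=t
| table cluster_count _raw
| sort cluster_count
| head 20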
56
Splunk Mobile App
Embedding Operational Intelligence – Native Mobile Experience
• Access dashboards and reports
• Annotate dashboards and share with others
• Receive push notifications
Thank You
Editor's Notes
  1. Here is what you need for this presentation: Link to videos on box: <coming soon> You should have the following installed: 6.3 Overview OI Demo 3.1 – Get it from the Technical Enablement Portal under SE tools –> Demos https://splunk--c.na2.visual.force.com/apex/LMS_TechnicalEnablementPortal NOTE: Configure your role to search the oidemo index by default, otherwise you will have to type “index=oidemo” for the examples later on. There is a lot to cover in this presentation! Try to go quickly and at a pretty high level. When you get through the presentation judge the audience’s interest and go deeper in whichever section. For example, if they want to know more about Choropleths and polygons spend some time there, or if they want to go deeper on the search commands talk through the extra examples.
  2. Without our sponsors we couldn’t be here today. So please stop by outside this room in the pavilion. Thanks to all of you for being here and most of all sponsoring our happy hour!
  3. Splunk safe harbor statement.
  4. Previously, Splunk made use of available CPU cores to execute multiple simultaneous searches while indexing data. Release 6.3 vertical scaling allows both individual searches and the data indexing process to execute more efficiently by using multiple CPU cores per task. For systems with available CPU cores, the benefits are broad performance improvements in search processing, report generation, data on-boarding capacity and data forwarding efficiency. We didn’t want to just make the searches faster, but also smarter. That is why we created an intelligent job scheduler. Let’s take a look at these features.
  5. Indexer Parallelization helped Cisco UCS achieve 4x the data ingestion (doing pure indexing)
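  If someone asks how indexer parallelization is switched on, a minimal server.conf sketch can help (illustration only; it assumes the 6.3 parallelIngestionPipelines setting and spare CPU cores on the indexer, so verify against the 6.3 docs before recommending it):
    # server.conf on each indexer -- sketch, not a tuning recommendation
    [general]
    parallelIngestionPipelines = 2
  A restart of the indexer is needed for the change to take effect, and the value should only be raised on hosts that still have idle cores under normal search load.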
  6. This is an eye chart, BUT it summarizes the parallelization parameters and how to enable them.
  7. This scheduler optimizes which scheduled searches are run and when. Instead of just telling searches when to start, you give them a window to run within. It’s like saying you need to get to work by 8am, and Splunk can now tell you when to start your journey so you aren’t stuck in traffic. Continuous Scheduled Searches (CSSs): Problem in 6.2: CSSs are missed during Splunk downtime, creating data gaps. Solution in 6.3: by remembering the last execution time, missed CSSs are run as soon as Splunk comes back up to fill in the gaps. Schedule Window is an option when scheduling your search; it’s that easy to use. When combined with the 6.3 parallel search capabilities, you may see an even greater reduction or elimination of skipped searches AND increased job execution capacity. For infrequent searches (hourly, daily, etc.), use schedule windows. Use the built-in scheduler performance reports (under Activity > System Activity > Scheduler) to monitor performance: lots of skipped searches or high lag is bad.
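  If someone asks how to set a schedule window outside the UI, a savedsearches.conf sketch (the stanza name is a made-up example, and the schedule_window attribute is assumed from the 6.3 scheduler changes, so double-check the docs):
    # savedsearches.conf -- sketch; stanza name is hypothetical
    [Hypothetical Hourly Summary Report]
    cron_schedule = 0 * * * *
    schedule_window = 30
  This tells the scheduler the search may start any time within 30 minutes of its scheduled time rather than exactly on the hour, which gives the scheduler room to avoid contention.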
  8. Release 6.3 improves big data analysis and visualization. I’m going to talk about and show you: the Single Value display, the Anomaly Detection command, and geospatial mapping with choropleths.
  9. New SPL command (anomalydetection) that offers a histogram-based approach to detecting anomalies. It also includes the capabilities of the existing anomalousvalue and outlier SPL commands. Method options include histogram, Z-score, and IQR.
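  A quick way to show it on stage, as a sketch only (the option names are as I recall them from the 6.3 search reference, so verify before the demo):
    sourcetype=access* | anomalydetection method=histogram bytes
  Swapping in method=zscore or method=iqr exercises the other two detectors, and running the command with no arguments lets it consider all fields.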
  10. Use Splunk 6.3 Overview App. Go to Single Value Visualization and explain components. Edit in panel and show how to turn on and off, change the sparkline granularity using timechart span=1h, 1m, etc. Go to Anomaly Detection example. Explain story of using vehicle data. Imagine thousands of cars in a fleet and hundreds of attributes per car to look for anomalies in. You can’t chart all of this at once over all time. Anomaly detection is a great starting point. Then you can chart the findings to investigate further or alert on the results. Go to Choropleth Maps and explain the different options. Now we’re going to create our own using an app called Shapester built by one of our Splunkers. Go to splunkbase, d/l shapester and load up the app. (Can have this preloaded to save time – but mention how easy it is to install). Create some custom polygons such as Sales Regions (East West Central) and use OI Demo data to show sales by region. See search below: TBD See video for more details
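  If the TBD search isn’t ready, a generic choropleth illustration works as a fallback (this is not the OI Demo search; it assumes web events with a clientip field and uses the built-in geo_us_states lookup, with iplocation’s Region field standing in for the state name):
    sourcetype=access* | iplocation clientip | stats count by Region | geom geo_us_states featureIdField=Region
  Render the result with the Choropleth Map visualization and the counts shade each state.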
  11. Release 6.3 includes the new HTTP Event Collector, which directly onboards data from applications, DevOps tools, and IoT devices in real time, scaling to millions of events per second.
  12. This new data input makes it simple and fast to collect data from any application and the world of IoT – at massive scale and speed. Think about it: your phones can send data directly into Splunk without using a forwarder. Application developers can use a standard API or logging libraries directly. For example, if you’re using AWS Lambda or containers like Docker, you can push events directly to Splunk. IoT devices can use the same direct method, and there is already a growing list of supported IoT collection services, like Xively and Citrix Octoblu. And it scales to millions of events per second.
  13. Use Splunk 6.3 Overview App for tutorial. Set up HEC, show test using Curl command. (Use 6.3 Over App Tutorial) Do Splunk Shake Demo! Reference: TBD
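  A typical curl test for HEC looks like the following (sketch; the token is a placeholder, the host and port assume the default HEC endpoint on 8088, and -k simply skips certificate checking for the default self-signed cert):
    curl -k https://localhost:8088/services/collector/event -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" -d '{"event": "hello from the HTTP Event Collector"}'
  A successful call returns {"text":"Success","code":0} and the event lands in the index the token is bound to.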
  14. Interactive, topology-oriented display with mouse-overs for status. Today, a large Splunk deployment can include hundreds of individual system components. The new Distributed Management Console (DMC) provides a complete monitoring console, including topology views, system status, and health alerting, for all components of an on-premises deployment. DMC creates a single interface to view the status, performance, capacity, and interconnectivity of these components, allowing the admin to optimize solution operation and efficiency.
  15. Custom Alert Actions provide the ability to use Splunk alerts to trigger custom actions or pre-packaged integrations with third-party products such as trouble ticketing or support systems. Developers can build and publish integrations or custom action packages that users or admins can use via a simple menu within the Splunk alert interface. Splunk and partners provide a growing set of integrations, including ServiceNow, xMatters, webhooks, and more. Previously, these integrations were complex, ad hoc efforts requiring custom scripts. The new scheme makes it simple for partners (and customers) to create and contribute out-of-the-box integration templates, and for customers to use them via a simple pull-down menu.
  16. Provide a quick overview of each. Mention that you can learn more in the overview app, which can be downloaded from Splunkbase.
  17. Use OI Demo 3.1. Go to Settings -> Alert Actions. Discuss the ability to download custom alert actions from Splunkbase. Go to HipChat and show how it is configured to run. Go to the IoT DataCenter dashboard and point out how anomalydetection is used to detect power anomalies. Open in search and talk about how the usual “Save as: Alert” process would work. Show the new dropdown of triggers at the bottom. “Imagine changing the colors of lights in the NOC when a critical event occurs, using the Philips Hue plugin, etc.” Go to Settings -> Searches, Reports, Alerts and enable “OI Demo Anomaly Detection Alert” --- Go to the HipChat room OIDemo3 Alerts; the alert should show up in a minute or less. Show how tokens can be passed.
  18. For more information, or to try out the features yourself, check out the overview app, which explains each of the features and includes code samples and examples where applicable.
  19. <This section should take ~15 minutes> Search is the most powerful part of Splunk.
  20. The Splunk search language is very expressive and can perform a wide variety of tasks, ranging from filtering data, to munging it, to reporting on it. The results can be used to answer questions, visualize data, or even be sent to a third-party application in whatever format it requires. There are 135 documented search commands; however, most questions can be answered using just a handful.
  21. These are the five commands you should get very familiar with. If you know how to use these well, you will be able to solve most data questions that come your way. Let’s take a quick look at each of these.
  22. <Walk through the examples with a demo. Hidden slides are available as backup. NOTE: Each of the grey boxes is clickable. If you are running Splunk on port 8000 you won’t have to type in the searches, which will save time.>
  23. sourcetype=access* | eval http_response = if(status == 200, "OK", "Error") | eventstats avg(bytes) AS avg_bytes by http_response | timechart latest(avg_bytes) avg(bytes)
  24. sourcetype=access* | eval KB=bytes/1024 | eval http_response = if(status == 200, "OK", "Error") | eval connection = clientip.":".status_description | table connection, KB, http_response
  25. Note: Chart is just stats visualized. Timechart is just stats by _time visualized.
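  A quick way to demonstrate that point, if there is time (sketch): run these back to back and show that the second is the first rendered as a chart, and the third is simply the same statistic split by _time.
    sourcetype=access* | stats avg(bytes) by status
    sourcetype=access* | chart avg(bytes) by status
    sourcetype=access* | timechart avg(bytes) by status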
  26. sourcetype=access* | eval KB=bytes/1024 | stats sum(KB) AS "Sum of KB"
  27. sourcetype=access* | stats values(useragent) avg(bytes) max(bytes) by clientip
  28. sourcetype=access* | stats values(useragent) avg(bytes) max(bytes) by clientip
sourcetype=access* | eval KB=bytes/1024 | stats sum(KB) avg(KB) by clientip
sourcetype=access* | eval KB=bytes/1024 | stats sum(KB) AS Sum_KB, avg(KB) AS Avg_KB by clientip, url_domain | sort -Sum_KB | eval Client_Summary=clientip." - Total KB (".Sum_KB.")" | stats list(Client_Summary) AS Client_Summary, sum(Sum_KB) AS Total_KB by url_domain
sourcetype=access* | eval KB=bytes/1024 | stats sum(KB) AS Sum_KB, avg(KB) AS Avg_KB by clientip, url_domain | sort -Sum_KB | eval Client_Summary=clientip." - Total KB (".Sum_KB.")" | stats list(Client_Summary) AS Client_Summary, sum(Sum_KB) AS Total_KB by url_domain | eval Client_Summary=mvindex(Client_Summary,0,4) | eval Total_KB=tostring(round(Total_KB,2),"commas")
  29. Eventstats lets you compute statistics across the entire result set and makes those statistics available as fields on each event. <Walk through the examples with a demo. Hidden slides are available as backup>
  30. Eventstats lets you compute statistics across the entire result set and makes those statistics available as fields on each event. Let’s use eventstats to create a timechart of the average bytes on top of the overall average. index=* sourcetype=access* | eventstats avg(bytes) AS avg_bytes | timechart latest(avg_bytes) avg(bytes)
  31. We can make the baseline vary over time simply by adding “by date_hour”, which calculates the average for each hour of the day instead of a single overall average. index=* sourcetype=access* | eventstats avg(bytes) AS avg_bytes by date_hour | timechart latest(avg_bytes) avg(bytes)
  32. sourcetype=access* | timechart avg(bytes) as avg_bytes | streamstats avg(avg_bytes) AS moving_avg_bytes window=10 | timechart latest(moving_avg_bytes) latest(avg_bytes)
  33. Streamstats calculates statistics for each event at the time the event is seen. For example, if I had events with a temperature reading, I could use streamstats to create a new field that tells me the temperature difference between the current event and one or more previous events. It is similar to the delta command, but more powerful (a quick sketch of that idea follows below). In this example, I’m going to take the bytes field of my access logs and see how much total data is being transferred over time.
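  A sketch of that temperature idea (the sensor sourcetype and temperature field are made up for illustration; the demo itself sticks with access logs):
    sourcetype=sensor_data | streamstats current=f window=1 last(temperature) AS prev_temp | eval temp_delta=temperature-prev_temp
  current=f excludes the current event from the window, so prev_temp holds the previous event’s reading and temp_delta is the change since then.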
  34. To create a cumulative sum: sourcetype=access* | timechart sum(bytes) as bytes | streamstats sum(bytes) as cumulative_bytes | timechart max(cumulative_bytes)
  35. sourcetype=access* | reverse | streamstats sum(bytes) as bytes_total by status | timechart max(bytes_total) by status
  36. sourcetype=access* | timechart avg(bytes) as avg_bytes | streamstats avg(avg_bytes) AS moving_avg_bytes window=10 | timechart latest(moving_avg_bytes) latest(avg_bytes)
Bonus: this could also be done using the trendline command with the simple moving average (sma) parameter:
sourcetype=access* | timechart avg(bytes) as avg_bytes | trendline sma10(avg_bytes) as moving_average_bytes | timechart latest(avg_bytes) latest(moving_average_bytes)
Double bonus: cumulative sum by period:
sourcetype=access* | timechart span=15m sum(bytes) as cumulative_bytes by status | streamstats global=f sum(cumulative_bytes) as bytes_total
Additional duration examples used later in the session:
sourcetype=access* | stats min(_time) AS earliest max(_time) AS latest by JSESSIONID | eval duration=latest-earliest | stats min(duration) max(duration) avg(duration)
sourcetype=access* | transaction JSESSIONID | stats min(duration) AS min_dur max(duration) AS max_dur avg(duration) AS avg_dur by clientip | sort -max_dur | head 10
  37. A transaction is any group of related events that span time. It’s quite useful for finding overall durations, for example, how long it took a user to complete a transaction. This really shows the power of Splunk. Think about it: if you are sending all your data to Splunk, then you have data from multiple subsystems (think database, web server, and app server), and you can see the overall time it’s taking AND how long each subsystem is taking. Many customers use this to quickly pinpoint whether slowness is caused by the network, the database, or the app server.
  38. sourcetype=access* | transaction JSESSIONID
  39. sourcetype=access* | transaction JSESSIONID | stats min(duration) max(duration) avg(duration)
  40. NOTE: Many transactions can be re-created using stats. Transaction is easy, but stats is far more efficient and is a mappable command (more of the work gets distributed to the indexers). sourcetype=access* | stats min(_time) AS earliest max(_time) AS latest by JSESSIONID | eval duration=latest-earliest | stats min(duration) max(duration) avg(duration)
  41. There is much more each of these commands can be used for. Check out answers.splunk.com and docs.splunk.com for many more examples.
  42. <form> <label>Splunk live - Post Process - Dashboard</label> <search id="my_base_search"> <query>sourcetype=access* | fields _time, bytes, status, status_description, clientip, url_domain, JSESSIONID | table _time, bytes, status, status_description, clientip, url_domain, JSESSIONID</query> <earliest>$field1.earliest$</earliest> <latest>$field1.latest$</latest> </search> <fieldset submitButton="false"> <input type="time" token="field1"> <label></label> <default> <earliest>-4h@m</earliest> <latest>now</latest> </default> </input> <input type="dropdown" token="tok_clientip" searchWhenChanged="true"> <label>clientip</label> <choice value="*">All</choice> <search base="my_base_search"> <query>stats count by clientip | sort -count</query> </search> <fieldForLabel>clientip</fieldForLabel> <fieldForValue>clientip</fieldForValue> <default>*</default> <initialValue>*</initialValue> </input> <input type="multiselect" token="field2"></input> </fieldset> <row> <panel> <table> <title>Splunk live - Search - Step 1</title> <search base="my_base_search"> <query>search clientip="$tok_clientip$" | eval KB=bytes/1024 | eval http_response = if(status == 200, "OK", "Error") | eval connection = clientip.":".status_description | table connection, KB, http_response</query> </search> <option name="wrap">true</option> <option name="rowNumbers">false</option> <option name="drilldown">cell</option> <option name="dataOverlayMode">none</option> <option name="count">10</option> </table> </panel> <panel> <table> <title>Splunk live - Search - Step 2</title> <search base="my_base_search"> <query>search clientip="$tok_clientip$" |eval KB=bytes/1024 | stats sum(KB) avg(KB) by clientip</query> <earliest>$field1.earliest$</earliest> <latest>$field1.latest$</latest> </search> <option name="wrap">true</option> <option name="rowNumbers">false</option> <option name="drilldown">cell</option> <option name="dataOverlayMode">none</option> <option name="count">10</option> </table> </panel> <panel> <table> <title>Splunk live - Search - Step 2a</title> <search base="my_base_search"> <query>search clientip="$tok_clientip$" |eval KB=bytes/1024 | stats sum(KB) AS Sum_KB, avg(KB) AS Avg_KB by clientip, url_domain | sort -Sum_KB, | eval Client_Summary=clientip." 
- Total KB (".Sum_KB.")" | stats list(Client_Summary) AS Client_Summary, sum(Sum_KB) AS Total_KB by url_domain | eval Client_Summary=mvindex(Client_Summary,0,4) | eval Total_KB=tostring(round(Total_KB,2),"commas")</query> </search> <option name="wrap">true</option> <option name="rowNumbers">false</option> <option name="drilldown">cell</option> <option name="dataOverlayMode">none</option> <option name="count">10</option> </table> </panel> </row> <row> <panel> <chart> <title>Splunk live - Search - Step 3</title> <search base="my_base_search"> <query>search clientip="$tok_clientip$" |timechart avg(bytes) as avg_bytes | streamstats avg(avg_bytes) AS moving_avg_bytes window=10 | timechart latest(moving_avg_bytes) latest(avg_bytes)</query> </search> <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option> <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option> <option name="charting.axisTitleX.visibility">visible</option> <option name="charting.axisTitleY.visibility">visible</option> <option name="charting.axisTitleY2.visibility">visible</option> <option name="charting.axisX.scale">linear</option> <option name="charting.axisY.scale">linear</option> <option name="charting.axisY2.enabled">0</option> <option name="charting.axisY2.scale">inherit</option> <option name="charting.chart">column</option> <option name="charting.chart.bubbleMaximumSize">50</option> <option name="charting.chart.bubbleMinimumSize">10</option> <option name="charting.chart.bubbleSizeBy">area</option> <option name="charting.chart.nullValueMode">gaps</option> <option name="charting.chart.overlayFields">latest(moving_avg_bytes)</option> <option name="charting.chart.showDataLabels">none</option> <option name="charting.chart.sliceCollapsingThreshold">0.01</option> <option name="charting.chart.stackMode">stacked</option> <option name="charting.chart.style">shiny</option> <option name="charting.drilldown">all</option> <option name="charting.layout.splitSeries">0</option> <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option> <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option> <option name="charting.legend.placement">right</option> </chart> </panel> <panel> <chart> <title>Splunk live - Search - Step 4 Transaction</title> <search base="my_base_search"> <query>search clientip="$tok_clientip$" |transaction JSESSIONID | stats min(duration) AS min_dur max(duration) AS max_dur avg(duration) AS avg_dur by clientip | sort -max_dur | head 10</query> </search> <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option> <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option> <option name="charting.axisTitleX.visibility">visible</option> <option name="charting.axisTitleY.visibility">visible</option> <option name="charting.axisTitleY2.visibility">visible</option> <option name="charting.axisX.scale">linear</option> <option name="charting.axisY.scale">linear</option> <option name="charting.axisY2.enabled">0</option> <option name="charting.axisY2.scale">inherit</option> <option name="charting.chart">bar</option> <option name="charting.chart.bubbleMaximumSize">50</option> <option name="charting.chart.bubbleMinimumSize">10</option> <option name="charting.chart.bubbleSizeBy">area</option> <option name="charting.chart.nullValueMode">gaps</option> <option name="charting.chart.overlayFields">latest(moving_avg_bytes)</option> <option name="charting.chart.showDataLabels">none</option> <option 
name="charting.chart.sliceCollapsingThreshold">0.01</option> <option name="charting.chart.stackMode">stacked</option> <option name="charting.chart.style">shiny</option> <option name="charting.drilldown">all</option> <option name="charting.layout.splitSeries">0</option> <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option> <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option> <option name="charting.legend.placement">right</option> </chart> </panel> </row> </form>
  43. <If you have time, feel free to show one of your favorite commands or a neat use case of a command. The cluster command is provided here as an example.> “There are over 135 Splunk search commands; the five you have just seen are incredibly powerful. Here is another to add to your arsenal.”
  44. You can use the cluster command to learn more about your data and to find common and/or rare events in it. For example, if you are investigating an IT problem and you don’t know specifically what to look for, use the cluster command to find anomalies; in this case, anomalous events are those that aren’t grouped into big clusters, or that fall into clusters containing only a few events. Or, if you are searching for errors, use the cluster command to see approximately how many different types of errors there are and which types are common in your data.
  45. Decrease the threshold of similarity and see the change in results: sourcetype=access* | cluster field=bc_uri showcount=t t=0.1 | table cluster_count bc_uri _raw | sort -cluster_count
  46. Android coming soon!