RED HAT CEPH STORAGE ACCELERATION UTILIZING FLASH TECHNOLOGY
Applications and Ecosystem Solutions Development
Rick Stehno
Red Hat Storage Day - Dallas 2017

Flash Acceleration for Applications
Three ways to accelerate application performance with flash:
• Utilize flash caching features to accelerate critical data. Caching methods can be write-back for writes, write-through for disk/cache transparency, read caching, etc.
• Utilize storage tiering capabilities. Performance critical data resides on
flash storage, colder data resides on HDD
• Utilize all flash storage to accelerate performance when all application
data is performance critical or when the application does not provide the
features or capabilities to cache or to migrate the data

Ceph Software Defined Storage (SDS) Acceleration
Configurations:
• All flash storage - Performance
• Highest performance per node
• Less maximum capacity per node
• Hybrid HDD and flash storage - Balanced
• Balances performance, capacity and cost
• Suitable when the application and workload allow for:
• Performance-critical data on flash
• Host software caching or tiering on flash
• All HDD storage - Capacity
• Maximum capacity per node, lowest cost
• Lower performance per node

Storage - NVMe vs SATA SSD
Why a 1U server with 10 NVMe SSDs may be a better choice than a 2U server with 24 SATA SSDs:
–Higher performance in half the rack space
–28% less power and cooling
–Higher MTBF inherent with reduced component count
–Reduced OSD recovery time per Ceph node
–Lower TCO

All Flash Storage - NVMe vs SATA SSD cont’d
FIO Benchmarks (1x represents the 24 SATA SSD baseline):
• 4.5x increase for 128k sequential reads
• 3.5x increase for 128k sequential writes
• 3.7x increase for 4k random reads
• 1.4x increase for 4k random 70/30 RR/RW
• Equal performance for 4k random writes

All Flash Storage - NVMe vs SATA SSD cont’d
Increasing the load to extend the NVMe advantage over and above the 128-thread SATA SSD test:
• 5.8x increase for Random Writes at 512 threads
• 3.1x increase for 70/30 RR/RW at 512 threads
• 4.2x increase for Random Reads at 790 threads
• 8.2x increase for Sequential Reads at 1264 threads
10 NVMe SSDs support higher workloads and more users.
Ceph RBD NVMe performance gains over SATA SSD (128k FIO RBD IOEngine benchmark):

Workload            128 threads    Higher load
Random Writes       3.0x           5.8x (512 threads)
70/30 RR/RW         1.4x           3.1x (512 threads)
Random Reads        1.0x           4.2x (790 threads)
Sequential Reads    1.3x           8.2x (1264 threads)

Ceph Storage Costs
Seagate SATA SSD vs. Seagate NVMe SSD
Price per MB/s: (retail cost of the SSDs) / (MB/s for each test)
SSD                    Total SSD Price    $/MB/s @ 128 threads    $/MB/s @ 512 threads
24 - SATA SSD 960GB    $7,896             $15.00                  —
10 - NVMe 2TB          $10,990            $7.00                   $3.00
(128k random write FIO RBD benchmark; the 512-thread column is the maximum-threads random write test for NVMe)
These prices do not include the additional savings from electrical/cooling costs and reduced datacenter floor space that come with the reduction in SATA SSD count.
Note: In the 128k random write FIO RBD benchmark, the SATA SSDs averaged 85% busy; the NVMe SSDs averaged 80% busy with 512 threads.
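As a quick sanity check, the table implies the aggregate throughput behind each price point: $7,896 / $15.00 per MB/s ≈ 526 MB/s for the 24 SATA SSDs at 128 threads, versus $10,990 / $7.00 ≈ 1,570 MB/s for the 10 NVMe SSDs at 128 threads and $10,990 / $3.00 ≈ 3,663 MB/s at 512 threads (all figures derived from the table above).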

Does it make sense to implement Ceph in a MySQL database environment?
MySQL
• MySQL is the most popular and the most widely used open-source database in the world
• MySQL is feature rich in the areas of performance, scalability and reliability
• Database users demand high OLTP performance - Small random reads/writes
Ceph
• Most popular Software Defined Storage system
• Scalable
• Reliable
Ceph was not designed to provide high performance for OLTP environments, and OLTP entails small random reads/writes.

MySQL - Comparing Local HDD to Ceph Cluster
MySQL setup:
• Release 5.7
• 45,000,000 rows
• 6GB buffer
• 4GB logfiles
• RAID 0 over 18 HDDs

Ceph setup (3 nodes, each containing):
• Jewel using Filestore
• 4 NVMe SSDs
• 1 pool over 12 NVMe SSDs
• Replica 2
• 40G private and public networks
For all tests, all MySQL files were kept on the local server except the database file, which was moved to the Ceph cluster.
(Benchmark chart: performance vs. threads, local HDD vs. Ceph cluster.)

MySQL - Comparing Local NVMe SSD to Ceph Cluster
MySQL setup:
• Release 5.7
• 45,000,000 rows
• 6GB buffer
• 4GB logfiles
• RAID 0 over 4 NVMe SSDs

Ceph setup (3 nodes, each containing):
• Jewel using Filestore
• 4 NVMe SSDs
• 1 pool over 12 NVMe SSDs
• Replica 1
• 40G private and public networks
For all tests, all MySQL files were kept on the local server except the database file, which was moved to the Ceph cluster.
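The deck does not spell out the plumbing or the load generator; a minimal sketch under those assumptions — the database file sits on an RBD image mapped on the MySQL host, and a sysbench-style OLTP workload drives the 45M-row table (image, pool, and credential names are hypothetical):

    # Create and map an RBD image for the MySQL database file
    rbd create --size 512G --pool mysqlpool mysqldata
    rbd map mysqlpool/mysqldata              # appears as /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount -o noatime /dev/rbd0 /var/lib/mysql-data

    # sysbench is an assumption; the deck does not name its benchmark tool
    sysbench oltp_read_write --mysql-db=sbtest --mysql-user=sbtest \
             --tables=1 --table-size=45000000 prepare
    sysbench oltp_read_write --mysql-db=sbtest --mysql-user=sbtest \
             --tables=1 --table-size=45000000 --threads=64 --time=300 run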

Ceph All Flash Storage Acceleration
Seagate SSD and Seagate PCIe Storage

All-SSD test cases:
• Case 1: 2 SSDs, 1 OSD per SSD
• Case 2: 2 SSDs, 4 OSDs per SSD
• Case 3: 2 SSDs, 4 OSDs per SSD, with all 8 OSD journals on 1 PCIe flash card
(Chart: FIO random write, 200 threads, 128k data — KB/s and IOPS for the three cases: 2 ssd/2 osd, 2 ssd/8 osd, and 2 ssd/8 osd + journal.)
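Case 3 moves the Filestore journals off the data SSDs onto the PCIe flash card; a sketch of creating such an OSD with Jewel-era ceph-deploy (host and device names are hypothetical):

    # One OSD: data on the SSD, Filestore journal on a partition of the PCIe card
    ceph-deploy osd create node1:/dev/sda:/dev/pcieflash1
    # Repeat per OSD, giving each OSD its own journal partition on the flash card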

Ceph All Flash Storage Acceleration
4K FIO RBD benchmarks:
• 3-node Ceph cluster
• 100G public and private networks
• 4 Seagate NVMe SSDs per node, 12 per cluster
• Benchmark 1: 1 OSD per NVMe SSD
• Benchmark 2: 4 OSDs per NVMe SSD
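A minimal FIO job of the kind these benchmarks imply, using FIO's rbd ioengine (pool, image, and client names are hypothetical):

    fio --name=rbd4k --ioengine=rbd --clientname=admin \
        --pool=rbdbench --rbdname=testimg \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
        --direct=1 --runtime=300 --time_based --group_reporting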

Linux Flash Storage Tuning
Linux tuning is still a requirement to get optimum performance out of an SSD.
• Use the RAW device or create the 1st partition on a 1M boundary (sector 2048 for 512B sectors, sector 256 for 4k sectors)
• Ceph-deploy uses the optimal alignment when creating an OSD
• Use blk-mq/scsi-mq if kernel supports it
• rq_affinity = 1 for NVMe, rq_affinity = 2 for non-NVMe
• rotational = 0
• blockdev --setra 256 (for 4k sectors, 4096 for 512B sectors)
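A sketch of how these settings could be applied, assuming an NVMe device at /dev/nvme0n1 and a SATA SSD at /dev/sdb (device names are hypothetical; only the values called out above come from the deck):

    # Enable blk-mq/scsi-mq on kernels that support it by booting with the
    # kernel parameter scsi_mod.use_blk_mq=Y

    # Per-device queue settings
    echo 1 > /sys/block/nvme0n1/queue/rq_affinity   # 1 for NVMe
    echo 2 > /sys/block/sdb/queue/rq_affinity       # 2 for non-NVMe
    echo 0 > /sys/block/sdb/queue/rotational        # mark the SSD as non-rotational

    # Read-ahead in 512B units: 256 for 4k-sector drives, 4096 for 512B-sector drives
    blockdev --setra 256 /dev/nvme0n1

    # When not using the raw device, align the first partition on a 1M boundary
    parted -s /dev/nvme0n1 mklabel gpt mkpart primary 1MiB 100%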

Linux Flash Storage Tuning cont’d
• If using an older kernel that doesn’t support blk-mq, use the “deadline” IO scheduler with its supporting tunables:
• fifo_batch
• front_merges
• writes_starved
• XFS Mount options:
• nobarrier,discard,noatime,attr2,inode64,noquota
• MySQL – when using flash, configure both innodb_io_capacity and innodb_lru_scan_depth
• Modify Linux read ahead on mapped RBD image on client
• echo 1024 > /sys/class/block/rbd0/queue/read_ahead_kb
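A sketch of these host-side settings, assuming a non-blk-mq kernel with a SATA SSD at /dev/sdb and a mapped RBD image at /dev/rbd0 (device names and the tunable values are illustrative; only the option names come from the deck):

    # Deadline IO scheduler and its supporting tunables
    echo deadline > /sys/block/sdb/queue/scheduler
    echo 16 > /sys/block/sdb/queue/iosched/fifo_batch
    echo 1  > /sys/block/sdb/queue/iosched/front_merges
    echo 2  > /sys/block/sdb/queue/iosched/writes_starved

    # XFS mount options from the slide
    mount -o nobarrier,discard,noatime,attr2,inode64,noquota /dev/sdb1 /mnt/flash

    # MySQL flash settings (values are illustrative)
    mysql -e "SET GLOBAL innodb_io_capacity = 10000;
              SET GLOBAL innodb_lru_scan_depth = 4096;"

    # Larger read-ahead on the mapped RBD image (from the slide)
    echo 1024 > /sys/class/block/rbd0/queue/read_ahead_kb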

Flash Storage Device Configuration
Ceph tuning is still a requirement to get optimum performance out of an SSD. Tuning options that can make a difference:
• RBD cache
• If using a smaller number of SSDs/NVMe SSDs, test with multiple OSDs per device. We have seen good performance increases using 4 OSDs per SSD/NVMe SSD.
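A minimal sketch of enabling the RBD cache on the client side in ceph.conf (the size and dirty-limit values are illustrative, not from the deck):

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [client]
    rbd cache = true
    rbd cache size = 67108864               # 64 MiB
    rbd cache max dirty = 50331648          # 48 MiB
    rbd cache writethrough until flush = true
    EOF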

Flash Storage Device Configuration cont’d
If the NVMe SSD or SAS/SATA SSD can be configured to use a 4k sector size, this can increase performance for certain applications, such as databases. For all of my FIO tests with the RBD engine and all of my MySQL tests, I saw up to a 3x improvement (depending on the test) when using 4k sectors compared to 512-byte sectors.
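A sketch of switching an NVMe SSD to 4k sectors with nvme-cli, assuming the drive exposes a 4k LBA format (check nvme id-ns first; the format index varies by drive, and the operation erases all data):

    # List supported LBA formats; find the index whose lbads corresponds to 4096 bytes
    nvme id-ns /dev/nvme0n1 | grep lbaf

    # Reformat to that LBA format (index 1 is an assumption; this destroys all data)
    nvme format /dev/nvme0n1 --lbaf=1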
Precondition all SSDs before running benchmarks. We have seen over a 3x gain in performance after preconditioning.
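A common preconditioning recipe, shown here as a sketch rather than the deck's exact procedure: fill the device sequentially, then run random writes until performance settles at steady state.

    # Sequential fill of the whole device (destroys data; device name is hypothetical)
    fio --name=fill --filename=/dev/nvme0n1 --rw=write --bs=128k \
        --iodepth=32 --direct=1 --ioengine=libaio

    # Random-write pass to reach steady state before measuring
    fio --name=steady --filename=/dev/nvme0n1 --rw=randwrite --bs=4k \
        --iodepth=32 --numjobs=4 --direct=1 --ioengine=libaio \
        --time_based --runtime=1800 --group_reporting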
Storage devices used for all of the above benchmarks/tests:
• Seagate Nytro XF1440 NVMe SSD
• Seagate Nytro XF1230 SATA SSD
• Seagate 1200.2 SAS SSD
• Seagate XP6500 PCIe Flash Accelerator Card

Seagate’s Broadest PCIe, SAS and SATA Portfolio

Thank You! Questions?
Learn how Seagate accelerates storage with one of the broadest SSD and flash portfolios in the market.