Software Defined Storage, Big Data and Ceph.
What is all the fuss about?
Kamesh Pemmaraju, Sr. Product Mgr, Dell
Neil Levine, Dir. of Product Mgmt, Red Hat
OpenStack Summit Atlanta,
May 2014
CEPH
CEPH UNIFIED STORAGE
3
OBJECT STORAGE: S3 & Swift, Multi-tenant, Keystone, Geo-Replication, Native API
BLOCK STORAGE: OpenStack, Linux Kernel, iSCSI, Clones, Snapshots
FILE SYSTEM: CIFS/NFS, HDFS, Distributed Metadata, Linux Kernel, POSIX
ARCHITECTURE
4
APP HOST/VM CLIENT
COMPONENTS
5
INTERFACES: S3/SWIFT | HOST/HYPERVISOR | iSCSI | CIFS/NFS | SDK
OBJECT STORAGE | BLOCK STORAGE | FILE SYSTEM
STORAGE CLUSTERS: MONITORS | OBJECT STORAGE DAEMONS (OSD)
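The SDK column in the components slide is librados, the native interface that talks to the monitors and OSDs directly. A minimal sketch of that path using the python-rados binding; the ceph.conf path, pool name, and object name are placeholder assumptions:

```python
import rados

# Connect to the cluster using the local ceph.conf and the default keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Open an I/O context on an existing pool and store/read a single object.
ioctx = cluster.open_ioctx('rbd')          # pool name is a placeholder
ioctx.write_full('demo-object', b'written through librados')
print(ioctx.read('demo-object'))

ioctx.close()
cluster.shutdown()
```

The other interfaces on the slide (S3/Swift, block, file) are layered on this same RADOS object API.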
THE PRODUCT
7
INKTANK CEPH ENTERPRISE
WHAT’S INSIDE?
Ceph Object and Ceph Block
Calamari
Enterprise Plugins (2014)
Support Services
USE CASE: OPENSTACK
9
USE CASE: OPENSTACK
10
Volumes Ephemeral
Copy-on-Write Snapshots
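The copy-on-write snapshots called out above are what let a single base image back many volumes and ephemeral disks without full copies. A hedged sketch of the underlying RBD operations with the python-rbd binding; the pool and image names are invented for illustration:

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('volumes')      # pool name is a placeholder

rbd_inst = rbd.RBD()
# Format-2 image with the layering feature, which cloning requires.
rbd_inst.create(ioctx, 'golden-image', 10 * 1024**3,
                old_format=False, features=rbd.RBD_FEATURE_LAYERING)

image = rbd.Image(ioctx, 'golden-image')
image.create_snap('base')                  # point-in-time snapshot
image.protect_snap('base')                 # clones need a protected snapshot
image.close()

# Copy-on-write clone: the new volume shares unwritten extents with 'base'.
rbd_inst.clone(ioctx, 'golden-image', 'base', ioctx, 'vm-boot-volume',
               features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()
```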
USE CASE: OPENSTACK
11
USE CASE: CLOUD STORAGE
12
S3/Swift S3/Swift S3/Swift S3/Swift
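Each S3/Swift endpoint in the cloud-storage picture is a RADOS Gateway speaking the S3 and Swift APIs. A small sketch of talking to it with the classic boto S3 client; the endpoint and credentials are placeholders for a user created with radosgw-admin:

```python
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',          # placeholder credentials
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',                   # placeholder gateway endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('demo-bucket')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('stored through the S3-compatible gateway')
print(key.get_contents_as_string())
```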
USE CASE: WEBSCALE APPLICATIONS
13
(Diagram: four application servers connecting over the native protocol)
ROADMAP
INKTANK CEPH ENTERPRISE
14
May 2014 Q4 2014 2015
USE CASE: PERFORMANCE BLOCK
15
CEPH STORAGE CLUSTER
USE CASE: PERFORMANCE BLOCK
16
CEPH STORAGE CLUSTER
Read/Write Read/Write
USE CASE: PERFORMANCE BLOCK
17
CEPH STORAGE CLUSTER
Write Write Read Read
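The performance-block slides are about where reads and writes land (SSD journals, and the tiering feature mentioned later); from the client side the path is ordinary block I/O against an RBD image. A minimal sketch with python-rbd, assuming the pool and image already exist:

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')           # pool name is a placeholder

image = rbd.Image(ioctx, 'bench-volume')    # pre-existing image, assumed
image.write(b'\0' * 4096, 0)                # 4 KiB write at offset 0
block = image.read(0, 4096)                 # read the same extent back
image.close()

ioctx.close()
cluster.shutdown()
```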
USE CASE: ARCHIVE / COLD STORAGE
18
CEPH STORAGE CLUSTER
ROADMAP
INKTANK CEPH ENTERPRISE
19
April 2014 September 2014 2015
USE CASE: DATABASES
20
(Diagram: four database servers connecting over the native protocol)
USE CASE: HADOOP
21
(Diagram: four Hadoop nodes connecting over the native protocol)
22
Training for Proof of Concept
or Production Users
Online Training for Cloud
Builders and Storage
Administrators
Instructor-led with virtual lab environment
INKTANK UNIVERSITY
VIRTUAL PUBLIC
May 21 – 22
European Time-zone
June 4 - 5
US Time-zone
Ceph Reference Architectures and Case Study
Outline
• Planning your Ceph implementation
• Choosing targets for Ceph deployments
• Reference Architecture Considerations
• Dell Reference Configurations
• Customer Case Study
• Business Requirements
– Budget considerations, organizational commitment
– Avoiding lock-in – use open source and industry standards
– Enterprise IT use cases
– Cloud applications/XaaS use cases for massive-scale, cost-effective storage
– Steady-state vs. spike data usage
• Sizing requirements (see the capacity-projection sketch below)
– What is the initial storage capacity?
– What is the expected growth rate?
• Workload requirements
– Does the workload need high performance, or is it more capacity-focused?
– What are IOPS/Throughput requirements?
– What type of data will be stored?
– Ephemeral vs. persistent data, Object, Block, File?
Planning your Ceph Implementation
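The sizing questions above (initial capacity and expected growth rate) usually reduce to a simple compounding projection. A tiny illustrative sketch; the starting capacity and growth rate are example numbers only:

```python
def projected_capacity_tb(initial_tb, annual_growth, years):
    """Compound the initial usable capacity by the expected yearly growth rate."""
    return initial_tb * (1.0 + annual_growth) ** years

# Example only: 100 TB today, growing 40% per year, planned over three years.
for year in range(4):
    print(year, round(projected_capacity_tb(100, 0.40, year), 1), 'TB')
```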
How to Choose Target Use Cases for Ceph
(Quadrant chart; axes: Capacity vs. Performance and Traditional IT vs. Cloud Applications)
• Traditional IT: High Performance (traditional SAN); Virtualization and Private Cloud (traditional SAN/NAS); NAS & Object Content Store (traditional NAS)
• Cloud Applications: XaaS Compute Cloud – Open Source Block (Ceph target); XaaS Content Store – Open Source NAS/Object (Ceph target)
• Tradeoff between Cost vs. Reliability (use-case dependent)
• Use CRUSH configs to map out your failure domains and performance pools (see the pool-creation sketch below)
• Failure domains
– Disk (OSD and OS)
– SSD journals
– Node
– Rack
– Site (replication at the RADOS level, Block replication, consider latencies)
• Storage pools
– SSD pool for higher performance
– Capacity pool
• Plan for failure domains of the monitor nodes
• Consider failure replacement scenarios, lowered redundancies, and performance
impacts
Architectural considerations – Redundancy and
replication considerations
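One way to express the SSD vs. capacity pool split above is to create two pools and point them at different CRUSH rules. A hedged sketch using the monitor command interface of python-rados; the pool names, placement-group counts, and rule ID are assumptions that depend on how your CRUSH map defines the SSD and HDD hierarchies (crush_ruleset is the Firefly-era variable name):

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

def mon_cmd(**kwargs):
    # mon_command takes a JSON-encoded command; returns (errno, output, status).
    ret, out, status = cluster.mon_command(json.dumps(kwargs), b'')
    if ret != 0:
        raise RuntimeError(status)
    return out

# Capacity pool on the default (HDD) rule, SSD pool on a separate rule.
mon_cmd(prefix='osd pool create', pool='capacity-pool', pg_num=2048)
mon_cmd(prefix='osd pool create', pool='ssd-pool', pg_num=512)
mon_cmd(prefix='osd pool set', pool='ssd-pool', var='crush_ruleset', val='1')

cluster.shutdown()
```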
Server Considerations
• Storage Node:
– One OSD per HDD, 1 – 2 GB RAM, and 1 GHz/core/OSD (see the sizing sketch below)
– SSDs for journaling and for using the tiering feature in Firefly
– Erasure coding will increase usable capacity at the expense of additional compute load
– SAS JBOD expanders for extra capacity (beware of extra latency and oversubscribed SAS lanes)
• Monitor nodes (MON): odd number for quorum; services can be hosted on the storage nodes for smaller deployments, but larger installations will need dedicated nodes
• Dedicated RADOS Gateway nodes for large object store deployments and for federated gateways for multi-site
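The per-OSD rules of thumb above (one OSD per HDD, 1 – 2 GB RAM, and roughly 1 GHz of CPU per OSD) turn into a quick per-node estimate; the drive count below is only an example:

```python
def node_sizing(hdd_count, ram_gb_per_osd=1.5, ghz_per_osd=1.0):
    """Rough node sizing: one OSD daemon per data drive, then scale RAM and CPU."""
    osds = hdd_count
    return {'osds': osds,
            'ram_gb': osds * ram_gb_per_osd,
            'cpu_ghz': osds * ghz_per_osd}

# A 12-drive node works out to about 12 OSDs, 18 GB RAM, 12 GHz aggregate CPU.
print(node_sizing(12))
```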
Networking Considerations
• Dedicated or shared network
– Be sure to involve the networking and security teams early when designing your networking options
– Network redundancy considerations
– Dedicated client and OSD networks (see the config sketch below)
– VLANs vs. dedicated switches
– 1 Gb/s vs. 10 Gb/s vs. 40 Gb/s!
• Networking design
– Spine and Leaf
– Multi-rack
– Core fabric connectivity
– WAN connectivity and latency issues for multi-site deployments
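Separating client and OSD traffic is expressed with the public_network and cluster_network options in ceph.conf. A small sketch that reads them back through a python-rados cluster handle; the option names are the standard ones, the values are of course site-specific:

```python
import rados

# Parsing ceph.conf is enough to query configuration; no connect() needed.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')

# public_network carries client traffic; cluster_network carries OSD
# replication and recovery traffic between storage nodes.
for opt in ('public_network', 'cluster_network'):
    print(opt, '=', cluster.conf_get(opt))
```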
Ceph additions coming to the Dell Red Hat
OpenStack solution
Pilot configuration Components
• Dell PowerEdge R620/R720/R720XD Servers
• Dell Networking S4810/S55 Switches, 10GbE
• Red Hat Enterprise Linux OpenStack Platform
• Dell ProSupport
• Dell Professional Services
• Available with or without High Availability
Specs at a glance
• Node 1: Red Hat OpenStack Manager
• Node 2: OpenStack Controller (2 additional controllers
for HA)
• Nodes 3-8: OpenStack Nova Compute
• Nodes 9-11: Ceph, 12 x 3 TB raw storage
• Network Switches: Dell Networking S4810/S55
• Supports ~ 170-228 virtual machines
Benefits
• Rapid on-ramp to OpenStack cloud
• Scale up, modular compute and storage blocks
• Single point of contact for solution support
• Enterprise-grade OpenStack software package
Storage
bundles
Example Ceph Dell Server Configurations
Performance (20 TB):
• R720XD
• 24 GB DRAM
• 10 x 4 TB HDD (data drives)
• 2 x 300 GB SSD (journal)
Capacity (44 TB / 105 TB*):
• R720XD
• 64 GB DRAM
• 10 x 4 TB HDD (data drives)
• 2 x 300 GB SSD (journal)
• MD1200
• 12 x 4 TB HDD (data drives)
Extra Capacity (144 TB / 240 TB*):
• R720XD
• 128 GB DRAM
• 12 x 4 TB HDD (data drives)
• MD3060e (JBOD)
• 60 x 4 TB HDD (data drives)
(* the larger figure assumes erasure coding; see the Editor's Notes)
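The asterisked figures in the table follow from the raw drive counts: divide by 2 for two-way replication, or by the roughly 1.2 overhead factor the Editor's Notes quote for erasure coding. A quick sketch of that arithmetic using the Extra Capacity row:

```python
def usable_tb(raw_tb, replica_count=2.0, ec_overhead=1.2, erasure_coded=False):
    """Usable capacity = raw capacity divided by the redundancy overhead."""
    return raw_tb / (ec_overhead if erasure_coded else replica_count)

raw = (12 + 60) * 4                        # Extra Capacity row: 72 x 4 TB = 288 TB raw
print(usable_tb(raw))                      # ~144 TB with 2x replication
print(usable_tb(raw, erasure_coded=True))  # ~240 TB with erasure coding
```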
• Dell, Red Hat, and Inktank have partnered to bring a complete
enterprise-grade storage solution for RHEL-OSP + Ceph
• The joint solution provides:
– Co-engineered and validated Reference Architecture
– Pre-configured storage bundles optimized for performance or
storage
– Storage enhancements to existing OpenStack Bundles
– Certification against RHEL-OSP
– Professional Services, Support, and Training
› Collaborative Support for Dell hardware customers
› Deployment services & tools
What Are We Doing To Enable?
UAB Case Study
Overcoming a data deluge
Inconsistent data management across research teams hampers productivity
• Growing data sets challenged available resources
• Research data distributed across laptops,
USB drives, local servers, HPC clusters
• Transferring datasets to HPC clusters took too
much time and clogged shared networks
• Distributed data management reduced
researcher productivity and put data at risk
Solution: a storage cloud
Centralized storage cloud based on OpenStack and Ceph
• Flexible, fully open-source infrastructure
based on Dell reference design
− OpenStack, Crowbar and Ceph
− Standard PowerEdge servers and storage
− 400+ TBs at less than 41¢ per gigabyte
• Distributed scale-out storage provisions
capacity from a massive common pool
− Scalable to 5 petabytes
• Data migration to and from HPC clusters via
dedicated 10Gb Ethernet fabric
• Easily extendable framework for developing
and hosting additional services
− Simplified backup service now enabled
“We’ve made it possible for users to
satisfy their own storage needs with
the Dell private cloud, so that their
research is not hampered by IT.”
David L. Shealy, PhD
Faculty Director, Research Computing
Chairman, Dept. of Physics
Building a research cloud
Project goals extend well beyond data management
• Designed to support emerging
data-intensive scientific computing paradigm
– 12 x 16-core compute nodes
– 1 TB RAM, 420 TBs storage
– 36 TBs storage attached to each compute node
• Virtual servers and virtual storage meet HPC
− Direct user control over all aspects of the
application environment
− Ample capacity for large research data sets
• Individually customized test/development/
production environments
− Rapid setup and teardown
• Growing set of cloud-based tools & services
− Easily integrate shareware, open source, and
commercial software
“We envision the OpenStack-based
cloud to act as the gateway to our
HPC resources, not only as the
purveyor of services we provide, but
also enabling users to build their own
cloud-based services.”
John-Paul Robinson, System Architect
Research Computing System (Next Gen)
A cloud-based computing environment with high speed access to
dedicated and dynamic compute resources
(Diagram: twelve OpenStack nodes, two HPC clusters, and HPC storage, interconnected by DDR InfiniBand, QDR InfiniBand, and 10Gb Ethernet)
Cloud services layer
Virtualized server and storage computing cloud
based on OpenStack, Crowbar and Ceph
UAB Research Network
THANK YOU!
Contact Information
Reach Kamesh and Neil for additional information:
Dell.com/OpenStack
Dell.com/Crowbar
Inktank.com/Dell
Kamesh_Pemmaraju@Dell.com
@kpemmaraju
Neil.Levine@Inktank.com
@neilwlevine
Visit the Dell and Inktank booths in the OpenStack Summit Expo Hall
Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?

More Related Content

What's hot

QCT Fact Sheet-English
QCT Fact Sheet-EnglishQCT Fact Sheet-English
QCT Fact Sheet-EnglishPeggy Ho
 
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red_Hat_Storage
 
Ceph Overview for Distributed Computing Denver Meetup
Ceph Overview for Distributed Computing Denver MeetupCeph Overview for Distributed Computing Denver Meetup
Ceph Overview for Distributed Computing Denver Meetupktdreyer
 
Linux Stammtisch Munich: Ceph - Overview, Experiences and Outlook
Linux Stammtisch Munich: Ceph - Overview, Experiences and OutlookLinux Stammtisch Munich: Ceph - Overview, Experiences and Outlook
Linux Stammtisch Munich: Ceph - Overview, Experiences and OutlookDanny Al-Gaaf
 
SF Ceph Users Jan. 2014
SF Ceph Users Jan. 2014SF Ceph Users Jan. 2014
SF Ceph Users Jan. 2014Kyle Bader
 
Storage tiering and erasure coding in Ceph (SCaLE13x)
Storage tiering and erasure coding in Ceph (SCaLE13x)Storage tiering and erasure coding in Ceph (SCaLE13x)
Storage tiering and erasure coding in Ceph (SCaLE13x)Sage Weil
 
Ceph Deployment at Target: Customer Spotlight
Ceph Deployment at Target: Customer SpotlightCeph Deployment at Target: Customer Spotlight
Ceph Deployment at Target: Customer SpotlightColleen Corrice
 
Ceph - A distributed storage system
Ceph - A distributed storage systemCeph - A distributed storage system
Ceph - A distributed storage systemItalo Santos
 
Hadoop over rgw
Hadoop over rgwHadoop over rgw
Hadoop over rgwzhouyuan
 
Building modern data lakes
Building modern data lakes Building modern data lakes
Building modern data lakes Minio
 
BlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for CephBlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for CephSage Weil
 
What you need to know about ceph
What you need to know about cephWhat you need to know about ceph
What you need to know about cephEmma Haruka Iwao
 
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...Odinot Stanislas
 
The container revolution, and what it means to operators.pptx
The container revolution, and what it means to operators.pptxThe container revolution, and what it means to operators.pptx
The container revolution, and what it means to operators.pptxRobert Starmer
 
Ceph scale testing with 10 Billion Objects
Ceph scale testing with 10 Billion ObjectsCeph scale testing with 10 Billion Objects
Ceph scale testing with 10 Billion ObjectsKaran Singh
 
Glusterfs and openstack
Glusterfs  and openstackGlusterfs  and openstack
Glusterfs and openstackopenstackindia
 
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions
Ceph on Intel: Intel Storage Components, Benchmarks, and ContributionsCeph on Intel: Intel Storage Components, Benchmarks, and Contributions
Ceph on Intel: Intel Storage Components, Benchmarks, and ContributionsColleen Corrice
 
Red Hat Storage for Mere Mortals
Red Hat Storage for Mere MortalsRed Hat Storage for Mere Mortals
Red Hat Storage for Mere MortalsRed_Hat_Storage
 

What's hot (20)

QCT Fact Sheet-English
QCT Fact Sheet-EnglishQCT Fact Sheet-English
QCT Fact Sheet-English
 
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology Red Hat Ceph Storage Acceleration Utilizing Flash Technology
Red Hat Ceph Storage Acceleration Utilizing Flash Technology
 
Ceph Overview for Distributed Computing Denver Meetup
Ceph Overview for Distributed Computing Denver MeetupCeph Overview for Distributed Computing Denver Meetup
Ceph Overview for Distributed Computing Denver Meetup
 
Linux Stammtisch Munich: Ceph - Overview, Experiences and Outlook
Linux Stammtisch Munich: Ceph - Overview, Experiences and OutlookLinux Stammtisch Munich: Ceph - Overview, Experiences and Outlook
Linux Stammtisch Munich: Ceph - Overview, Experiences and Outlook
 
librados
libradoslibrados
librados
 
SF Ceph Users Jan. 2014
SF Ceph Users Jan. 2014SF Ceph Users Jan. 2014
SF Ceph Users Jan. 2014
 
Storage tiering and erasure coding in Ceph (SCaLE13x)
Storage tiering and erasure coding in Ceph (SCaLE13x)Storage tiering and erasure coding in Ceph (SCaLE13x)
Storage tiering and erasure coding in Ceph (SCaLE13x)
 
Ceph Deployment at Target: Customer Spotlight
Ceph Deployment at Target: Customer SpotlightCeph Deployment at Target: Customer Spotlight
Ceph Deployment at Target: Customer Spotlight
 
Ceph - A distributed storage system
Ceph - A distributed storage systemCeph - A distributed storage system
Ceph - A distributed storage system
 
Hadoop over rgw
Hadoop over rgwHadoop over rgw
Hadoop over rgw
 
Building modern data lakes
Building modern data lakes Building modern data lakes
Building modern data lakes
 
BlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for CephBlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for Ceph
 
What you need to know about ceph
What you need to know about cephWhat you need to know about ceph
What you need to know about ceph
 
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
 
The container revolution, and what it means to operators.pptx
The container revolution, and what it means to operators.pptxThe container revolution, and what it means to operators.pptx
The container revolution, and what it means to operators.pptx
 
Ceph scale testing with 10 Billion Objects
Ceph scale testing with 10 Billion ObjectsCeph scale testing with 10 Billion Objects
Ceph scale testing with 10 Billion Objects
 
Glusterfs and openstack
Glusterfs  and openstackGlusterfs  and openstack
Glusterfs and openstack
 
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions
Ceph on Intel: Intel Storage Components, Benchmarks, and ContributionsCeph on Intel: Intel Storage Components, Benchmarks, and Contributions
Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions
 
Red Hat Storage for Mere Mortals
Red Hat Storage for Mere MortalsRed Hat Storage for Mere Mortals
Red Hat Storage for Mere Mortals
 
Red Hat Storage Roadmap
Red Hat Storage RoadmapRed Hat Storage Roadmap
Red Hat Storage Roadmap
 

Viewers also liked

Scalable Object Storage with Apache CloudStack and Apache Hadoop
Scalable Object Storage with Apache CloudStack and Apache HadoopScalable Object Storage with Apache CloudStack and Apache Hadoop
Scalable Object Storage with Apache CloudStack and Apache HadoopChiradeep Vittal
 
Beyond Hadoop and MapReduce
Beyond Hadoop and MapReduceBeyond Hadoop and MapReduce
Beyond Hadoop and MapReduceAlexander Alten
 
Tutorial ceph-2
Tutorial ceph-2Tutorial ceph-2
Tutorial ceph-2Tommy Lee
 
CephFS update February 2016
CephFS update February 2016CephFS update February 2016
CephFS update February 2016John Spray
 
Ceph Performance: Projects Leading Up to Jewel
Ceph Performance: Projects Leading Up to JewelCeph Performance: Projects Leading Up to Jewel
Ceph Performance: Projects Leading Up to JewelRed_Hat_Storage
 
Red Hat Ceph Storage: Past, Present and Future
Red Hat Ceph Storage: Past, Present and FutureRed Hat Ceph Storage: Past, Present and Future
Red Hat Ceph Storage: Past, Present and FutureRed_Hat_Storage
 
BlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for CephBlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for CephSage Weil
 

Viewers also liked (7)

Scalable Object Storage with Apache CloudStack and Apache Hadoop
Scalable Object Storage with Apache CloudStack and Apache HadoopScalable Object Storage with Apache CloudStack and Apache Hadoop
Scalable Object Storage with Apache CloudStack and Apache Hadoop
 
Beyond Hadoop and MapReduce
Beyond Hadoop and MapReduceBeyond Hadoop and MapReduce
Beyond Hadoop and MapReduce
 
Tutorial ceph-2
Tutorial ceph-2Tutorial ceph-2
Tutorial ceph-2
 
CephFS update February 2016
CephFS update February 2016CephFS update February 2016
CephFS update February 2016
 
Ceph Performance: Projects Leading Up to Jewel
Ceph Performance: Projects Leading Up to JewelCeph Performance: Projects Leading Up to Jewel
Ceph Performance: Projects Leading Up to Jewel
 
Red Hat Ceph Storage: Past, Present and Future
Red Hat Ceph Storage: Past, Present and FutureRed Hat Ceph Storage: Past, Present and Future
Red Hat Ceph Storage: Past, Present and Future
 
BlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for CephBlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for Ceph
 

Similar to Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?

Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...Ceph Community
 
Key trends in Big Data and new reference architecture from Hewlett Packard En...
Key trends in Big Data and new reference architecture from Hewlett Packard En...Key trends in Big Data and new reference architecture from Hewlett Packard En...
Key trends in Big Data and new reference architecture from Hewlett Packard En...Ontico
 
Speed up Digital Transformation with Openstack Cloud & Software Defined Storage
Speed up Digital Transformation with Openstack Cloud & Software Defined StorageSpeed up Digital Transformation with Openstack Cloud & Software Defined Storage
Speed up Digital Transformation with Openstack Cloud & Software Defined StorageMatthew Sheppard
 
Big data talk barcelona - jsr - jc
Big data talk   barcelona - jsr - jcBig data talk   barcelona - jsr - jc
Big data talk barcelona - jsr - jcJames Saint-Rossy
 
Red hat storage el almacenamiento disruptivo
Red hat storage el almacenamiento disruptivoRed hat storage el almacenamiento disruptivo
Red hat storage el almacenamiento disruptivoNextel S.A.
 
Whd master deck_final
Whd master deck_final Whd master deck_final
Whd master deck_final Juergen Domnik
 
VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...
VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...
VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...VMworld
 
Lessons learned from running Spark on Docker
Lessons learned from running Spark on DockerLessons learned from running Spark on Docker
Lessons learned from running Spark on DockerDataWorks Summit
 
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMFGestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMFSUSE Italy
 
1. beyond mission critical virtualizing big data and hadoop
1. beyond mission critical   virtualizing big data and hadoop1. beyond mission critical   virtualizing big data and hadoop
1. beyond mission critical virtualizing big data and hadoopChiou-Nan Chen
 
OSS Presentation by Bryan Badger
OSS Presentation by Bryan BadgerOSS Presentation by Bryan Badger
OSS Presentation by Bryan BadgerOpenStorageSummit
 
Oracle big data appliance and solutions
Oracle big data appliance and solutionsOracle big data appliance and solutions
Oracle big data appliance and solutionssolarisyougood
 
Oracle Database Appliance (ODA) X6-2 Portfolio Overview
Oracle Database Appliance (ODA) X6-2 Portfolio OverviewOracle Database Appliance (ODA) X6-2 Portfolio Overview
Oracle Database Appliance (ODA) X6-2 Portfolio OverviewDaryll Whyte
 
Wicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar
Wicked Easy Ceph Block Storage & OpenStack Deployment with CrowbarWicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar
Wicked Easy Ceph Block Storage & OpenStack Deployment with CrowbarKamesh Pemmaraju
 
Introduction to Apache Mesos and DC/OS
Introduction to Apache Mesos and DC/OSIntroduction to Apache Mesos and DC/OS
Introduction to Apache Mesos and DC/OSSteve Wong
 
Nicholas:hdfs what is new in hadoop 2
Nicholas:hdfs what is new in hadoop 2Nicholas:hdfs what is new in hadoop 2
Nicholas:hdfs what is new in hadoop 2hdhappy001
 
QLogic - CrossIT - ACNC/JetStor FibricCache VMUG 2014
QLogic - CrossIT - ACNC/JetStor FibricCache VMUG 2014QLogic - CrossIT - ACNC/JetStor FibricCache VMUG 2014
QLogic - CrossIT - ACNC/JetStor FibricCache VMUG 2014Gene Leyzarovich
 
HDFS- What is New and Future
HDFS- What is New and FutureHDFS- What is New and Future
HDFS- What is New and FutureDataWorks Summit
 
OpenStack at the speed of business with SolidFire & Red Hat
OpenStack at the speed of business with SolidFire & Red Hat OpenStack at the speed of business with SolidFire & Red Hat
OpenStack at the speed of business with SolidFire & Red Hat NetApp
 

Similar to Software Defined Storage, Big Data and Ceph - What Is all the Fuss About? (20)

Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...
Ceph Day New York 2014: Best Practices for Ceph-Powered Implementations of St...
 
Key trends in Big Data and new reference architecture from Hewlett Packard En...
Key trends in Big Data and new reference architecture from Hewlett Packard En...Key trends in Big Data and new reference architecture from Hewlett Packard En...
Key trends in Big Data and new reference architecture from Hewlett Packard En...
 
Speed up Digital Transformation with Openstack Cloud & Software Defined Storage
Speed up Digital Transformation with Openstack Cloud & Software Defined StorageSpeed up Digital Transformation with Openstack Cloud & Software Defined Storage
Speed up Digital Transformation with Openstack Cloud & Software Defined Storage
 
Big data talk barcelona - jsr - jc
Big data talk   barcelona - jsr - jcBig data talk   barcelona - jsr - jc
Big data talk barcelona - jsr - jc
 
Red hat storage el almacenamiento disruptivo
Red hat storage el almacenamiento disruptivoRed hat storage el almacenamiento disruptivo
Red hat storage el almacenamiento disruptivo
 
Deploying Big-Data-as-a-Service (BDaaS) in the Enterprise
Deploying Big-Data-as-a-Service (BDaaS) in the EnterpriseDeploying Big-Data-as-a-Service (BDaaS) in the Enterprise
Deploying Big-Data-as-a-Service (BDaaS) in the Enterprise
 
Whd master deck_final
Whd master deck_final Whd master deck_final
Whd master deck_final
 
VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...
VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...
VMworld 2015: The Future of Software- Defined Storage- What Does it Look Like...
 
Lessons learned from running Spark on Docker
Lessons learned from running Spark on DockerLessons learned from running Spark on Docker
Lessons learned from running Spark on Docker
 
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMFGestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
Gestione gerarchica dei dati con SUSE Enterprise Storage e HPE DMF
 
1. beyond mission critical virtualizing big data and hadoop
1. beyond mission critical   virtualizing big data and hadoop1. beyond mission critical   virtualizing big data and hadoop
1. beyond mission critical virtualizing big data and hadoop
 
OSS Presentation by Bryan Badger
OSS Presentation by Bryan BadgerOSS Presentation by Bryan Badger
OSS Presentation by Bryan Badger
 
Oracle big data appliance and solutions
Oracle big data appliance and solutionsOracle big data appliance and solutions
Oracle big data appliance and solutions
 
Oracle Database Appliance (ODA) X6-2 Portfolio Overview
Oracle Database Appliance (ODA) X6-2 Portfolio OverviewOracle Database Appliance (ODA) X6-2 Portfolio Overview
Oracle Database Appliance (ODA) X6-2 Portfolio Overview
 
Wicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar
Wicked Easy Ceph Block Storage & OpenStack Deployment with CrowbarWicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar
Wicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar
 
Introduction to Apache Mesos and DC/OS
Introduction to Apache Mesos and DC/OSIntroduction to Apache Mesos and DC/OS
Introduction to Apache Mesos and DC/OS
 
Nicholas:hdfs what is new in hadoop 2
Nicholas:hdfs what is new in hadoop 2Nicholas:hdfs what is new in hadoop 2
Nicholas:hdfs what is new in hadoop 2
 
QLogic - CrossIT - ACNC/JetStor FibricCache VMUG 2014
QLogic - CrossIT - ACNC/JetStor FibricCache VMUG 2014QLogic - CrossIT - ACNC/JetStor FibricCache VMUG 2014
QLogic - CrossIT - ACNC/JetStor FibricCache VMUG 2014
 
HDFS- What is New and Future
HDFS- What is New and FutureHDFS- What is New and Future
HDFS- What is New and Future
 
OpenStack at the speed of business with SolidFire & Red Hat
OpenStack at the speed of business with SolidFire & Red Hat OpenStack at the speed of business with SolidFire & Red Hat
OpenStack at the speed of business with SolidFire & Red Hat
 

More from Red_Hat_Storage

Red Hat Storage Day Dallas - Storage for OpenShift Containers
Red Hat Storage Day Dallas - Storage for OpenShift Containers Red Hat Storage Day Dallas - Storage for OpenShift Containers
Red Hat Storage Day Dallas - Storage for OpenShift Containers Red_Hat_Storage
 
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...Red_Hat_Storage
 
Red Hat Storage Day Dallas - Defiance of the Appliance
Red Hat Storage Day Dallas - Defiance of the Appliance Red Hat Storage Day Dallas - Defiance of the Appliance
Red Hat Storage Day Dallas - Defiance of the Appliance Red_Hat_Storage
 
Red Hat Storage Day Dallas - Gluster Storage in Containerized Application
Red Hat Storage Day Dallas - Gluster Storage in Containerized Application Red Hat Storage Day Dallas - Gluster Storage in Containerized Application
Red Hat Storage Day Dallas - Gluster Storage in Containerized Application Red_Hat_Storage
 
Red Hat Storage Day Dallas - Why Software-defined Storage Matters
Red Hat Storage Day Dallas - Why Software-defined Storage MattersRed Hat Storage Day Dallas - Why Software-defined Storage Matters
Red Hat Storage Day Dallas - Why Software-defined Storage MattersRed_Hat_Storage
 
Red Hat Storage Day Boston - Why Software-defined Storage Matters
Red Hat Storage Day Boston - Why Software-defined Storage MattersRed Hat Storage Day Boston - Why Software-defined Storage Matters
Red Hat Storage Day Boston - Why Software-defined Storage MattersRed_Hat_Storage
 
Red Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super StorageRed Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super StorageRed_Hat_Storage
 
Red Hat Storage Day Boston - Persistent Storage for Containers
Red Hat Storage Day Boston - Persistent Storage for Containers Red Hat Storage Day Boston - Persistent Storage for Containers
Red Hat Storage Day Boston - Persistent Storage for Containers Red_Hat_Storage
 
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...Red_Hat_Storage
 
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...Red_Hat_Storage
 
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...Red_Hat_Storage
 
Red Hat Storage Day - When the Ceph Hits the Fan
Red Hat Storage Day -  When the Ceph Hits the FanRed Hat Storage Day -  When the Ceph Hits the Fan
Red Hat Storage Day - When the Ceph Hits the FanRed_Hat_Storage
 
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...Red_Hat_Storage
 
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...Red_Hat_Storage
 
Red Hat Storage Day New York - New Reference Architectures
Red Hat Storage Day New York - New Reference ArchitecturesRed Hat Storage Day New York - New Reference Architectures
Red Hat Storage Day New York - New Reference ArchitecturesRed_Hat_Storage
 
Red Hat Storage Day New York - Persistent Storage for Containers
Red Hat Storage Day New York - Persistent Storage for ContainersRed Hat Storage Day New York - Persistent Storage for Containers
Red Hat Storage Day New York - Persistent Storage for ContainersRed_Hat_Storage
 
Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...
Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...
Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...Red_Hat_Storage
 
Red Hat Storage Day New York - Welcome Remarks
Red Hat Storage Day New York - Welcome Remarks Red Hat Storage Day New York - Welcome Remarks
Red Hat Storage Day New York - Welcome Remarks Red_Hat_Storage
 
Red Hat Storage Day New York - What's New in Red Hat Ceph Storage
Red Hat Storage Day New York - What's New in Red Hat Ceph StorageRed Hat Storage Day New York - What's New in Red Hat Ceph Storage
Red Hat Storage Day New York - What's New in Red Hat Ceph StorageRed_Hat_Storage
 
Red Hat Storage Day Seattle: Why Software-Defined Storage Matters
Red Hat Storage Day Seattle: Why Software-Defined Storage MattersRed Hat Storage Day Seattle: Why Software-Defined Storage Matters
Red Hat Storage Day Seattle: Why Software-Defined Storage MattersRed_Hat_Storage
 

More from Red_Hat_Storage (20)

Red Hat Storage Day Dallas - Storage for OpenShift Containers
Red Hat Storage Day Dallas - Storage for OpenShift Containers Red Hat Storage Day Dallas - Storage for OpenShift Containers
Red Hat Storage Day Dallas - Storage for OpenShift Containers
 
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
 
Red Hat Storage Day Dallas - Defiance of the Appliance
Red Hat Storage Day Dallas - Defiance of the Appliance Red Hat Storage Day Dallas - Defiance of the Appliance
Red Hat Storage Day Dallas - Defiance of the Appliance
 
Red Hat Storage Day Dallas - Gluster Storage in Containerized Application
Red Hat Storage Day Dallas - Gluster Storage in Containerized Application Red Hat Storage Day Dallas - Gluster Storage in Containerized Application
Red Hat Storage Day Dallas - Gluster Storage in Containerized Application
 
Red Hat Storage Day Dallas - Why Software-defined Storage Matters
Red Hat Storage Day Dallas - Why Software-defined Storage MattersRed Hat Storage Day Dallas - Why Software-defined Storage Matters
Red Hat Storage Day Dallas - Why Software-defined Storage Matters
 
Red Hat Storage Day Boston - Why Software-defined Storage Matters
Red Hat Storage Day Boston - Why Software-defined Storage MattersRed Hat Storage Day Boston - Why Software-defined Storage Matters
Red Hat Storage Day Boston - Why Software-defined Storage Matters
 
Red Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super StorageRed Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super Storage
 
Red Hat Storage Day Boston - Persistent Storage for Containers
Red Hat Storage Day Boston - Persistent Storage for Containers Red Hat Storage Day Boston - Persistent Storage for Containers
Red Hat Storage Day Boston - Persistent Storage for Containers
 
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
Red Hat Storage Day Boston - Red Hat Gluster Storage vs. Traditional Storage ...
 
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
Red Hat Storage Day New York - Red Hat Gluster Storage: Historical Tick Data ...
 
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
Red Hat Storage Day New York - QCT: Avoid the mess, deploy with a validated s...
 
Red Hat Storage Day - When the Ceph Hits the Fan
Red Hat Storage Day -  When the Ceph Hits the FanRed Hat Storage Day -  When the Ceph Hits the Fan
Red Hat Storage Day - When the Ceph Hits the Fan
 
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
Red Hat Storage Day New York - Penguin Computing Spotlight: Delivering Open S...
 
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
Red Hat Storage Day New York - Intel Unlocking Big Data Infrastructure Effici...
 
Red Hat Storage Day New York - New Reference Architectures
Red Hat Storage Day New York - New Reference ArchitecturesRed Hat Storage Day New York - New Reference Architectures
Red Hat Storage Day New York - New Reference Architectures
 
Red Hat Storage Day New York - Persistent Storage for Containers
Red Hat Storage Day New York - Persistent Storage for ContainersRed Hat Storage Day New York - Persistent Storage for Containers
Red Hat Storage Day New York - Persistent Storage for Containers
 
Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...
Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...
Red Hat Storage Day New York -Performance Intensive Workloads with Samsung NV...
 
Red Hat Storage Day New York - Welcome Remarks
Red Hat Storage Day New York - Welcome Remarks Red Hat Storage Day New York - Welcome Remarks
Red Hat Storage Day New York - Welcome Remarks
 
Red Hat Storage Day New York - What's New in Red Hat Ceph Storage
Red Hat Storage Day New York - What's New in Red Hat Ceph StorageRed Hat Storage Day New York - What's New in Red Hat Ceph Storage
Red Hat Storage Day New York - What's New in Red Hat Ceph Storage
 
Red Hat Storage Day Seattle: Why Software-Defined Storage Matters
Red Hat Storage Day Seattle: Why Software-Defined Storage MattersRed Hat Storage Day Seattle: Why Software-Defined Storage Matters
Red Hat Storage Day Seattle: Why Software-Defined Storage Matters
 

Recently uploaded

08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxKatpro Technologies
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...Neo4j
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 

Recently uploaded (20)

08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 

Software Defined Storage, Big Data and Ceph - What Is all the Fuss About?

  • 1. Software Defined storage, Big Data and Ceph. What is all the fuss about? Kamesh Pemmaraju, Sr. Product Mgr, Dell Neil Levine, Dir. of Product Mgmt, Red Hat OpenStack Summit Atlanta, May 2014
  • 3. CEPH UNIFIED STORAGE FILE SYSTEM BLOCK STORAGE OBJECT STORAGE Keystone Geo-Replication Native API 3 Multi-tenant S3 & Swift OpenStack Linux Kernel iSCSI Clones Snapshots CIFS/NFS HDFS Distributed Metadata Linux Kernel POSIX Copyright © 2013 by Inktank | Private and Confidential
  • 4. ARCHITECTURE 4 Copyright © 2013 by Inktank | Private and Confidential APP HOST/VM CLIENT
  • 5. COMPONENTS 5 S3/SWIFT HOST/HYPERVISOR iSCSI CIFS/NFS SDK INTERFACESSTORAGECLUSTERS MONITORS OBJECT STORAGE DAEMONS (OSD) BLOCK STORAGE FILE SYSTEMOBJECT STORAGE Copyright © 2014 by Inktank | Private and Confidential
  • 7. 7 INKTANK CEPH ENTERPRISE WHAT’S INSIDE? Ceph Object and Ceph Block Calamari Enterprise Plugins (2014) Support Services Copyright © 2013 by Inktank | Private and Confidential
  • 8.
  • 9. Copyright © 2013 by Inktank | Private and Confidential USE CASE: OPENSTACK 9
  • 10. Copyright © 2013 by Inktank | Private and Confidential USE CASE: OPENSTACK 10 Volumes Ephemeral Copy-on-Write Snapshots
  • 11. Copyright © 2013 by Inktank | Private and Confidential USE CASE: OPENSTACK 11
  • 12. Copyright © 2013 by Inktank | Private and Confidential USE CASE: CLOUD STORAGE 12 S3/Swift S3/Swift S3/Swift S3/Swift
  • 13. Copyright © 2013 by Inktank | Private and Confidential USE CASE: WEBSCALE APPLICATIONS 13 Native Protocol Native Protocol Native Protocol Native Protocol
  • 14. ROADMAP INKTANK CEPH ENTERPRISE 14 Copyright © 2013 by Inktank | Private and Confidential May 2014 Q4 2014 2015
  • 15. Copyright © 2013 by Inktank | Private and Confidential USE CASE: PERFORMANCE BLOCK 15 CEPH STORAGE CLUSTER
  • 16. Copyright © 2013 by Inktank | Private and Confidential USE CASE: PERFORMANCE BLOCK 16 CEPH STORAGE CLUSTER Read/Write Read/Write
  • 17. Copyright © 2013 by Inktank | Private and Confidential USE CASE: PERFORMANCE BLOCK 17 CEPH STORAGE CLUSTER Write Write Read Read
  • 18. Copyright © 2013 by Inktank | Private and Confidential USE CASE: ARCHIVE / COLD STORAGE 18 CEPH STORAGE CLUSTER
  • 19. ROADMAP INKTANK CEPH ENTERPRISE 19 Copyright © 2013 by Inktank | Private and Confidential April 2014 September 2014 2015
  • 20. Copyright © 2013 by Inktank | Private and Confidential USE CASE: DATABASES 20 Native Protocol Native Protocol Native Protocol Native Protocol
  • 21. Copyright © 2013 by Inktank | Private and Confidential USE CASE: HADOOP 21 Native Protocol Native Protocol Native Protocol Native Protocol
  • 22. 22 Training for Proof of Concept or Production Users Online Training for Cloud Builders and Storage Administrators Instructor led with virtual lab environment INKTANK UNIVERSITY Copyright © 2014 by Inktank | Private and Confidential VIRTUAL PUBLIC May 21 – 22 European Time-zone June 4 - 5 US Time-zone
  • 24. Outline • Planning your Ceph implementation • Choosing targets for Ceph deployments • Reference Architecture Considerations • Dell Reference Configurations • Customer Case Study
  • 25. • Business Requirements – Budget considerations, organizational commitment – Avoiding lock-in – use open source and industry standards – Enterprise IT use cases – Cloud applications/XaaS use cases for massive-scale, cost-effective storage – Steady-state vs. Spike data usage • Sizing requirements – What is the initial storage capacity? – What is the expected growth rate? • Workload requirements – Does the workload need high performance or it is more capacity focused? – What are IOPS/Throughput requirements? – What type of data will be stored? – Ephemeral vs. persistent data, Object, Block, File? Planning your Ceph Implementation
  • 26. How to Choose Targets Use Cases for Ceph Virtualization and Private Cloud (traditional SAN/NAS) High Performance (traditional SAN) PerformanceCapacity NAS & Object Content Store (traditional NAS) Cloud Applications Traditional IT XaaS Compute Cloud Open Source Block XaaS Content Store Open Source NAS/Object Ceph Target Ceph Target
  • 27. • Tradeoff between Cost vs. Reliability (use-case dependent) • Use the Crush configs to map out your failures domains and performance pools • Failure domains – Disk (OSD and OS) – SSD journals – Node – Rack – Site (replication at the RADOS level, Block replication, consider latencies) • Storage pools – SSD pool for higher performance – Capacity pool • Plan for failure domains of the monitor nodes • Consider failure replacement scenarios, lowered redundancies, and performance impacts Architectural considerations – Redundancy and replication considerations
  • 28. Server Considerations • Storage Node: – one OSD per HDD, 1 – 2 GB ram, and 1 Gz/core/OSD, – SSD’s for journaling and for using the tiering feature in Firefly – Erasure coding will increase useable capacity at the expense of additional compute load – SAS JBOD expanders for extra capacity (beware of extra latency and oversubscribed SAS lanes) • Monitor nodes (MON): odd number for quorum, services can be hosted on the storage node for smaller deployments, but will need dedicated nodes larger installations • Dedicated RADOS Gateway nodes for large object store deployments and for federated gateways for multi-site
  • 29. Networking Considerations • Dedicated or Shared network – Be sure to involve the networking and security teams early when design your networking options – Network redundancy considerations – Dedicated client and OSD networks – VLAN’s vs. Dedicated switches – 1 Gbs vs 10 Gbs vs 40 Gbs! • Networking design – Spine and Leaf – Multi-rack – Core fabric connectivity – WAN connectivity and latency issues for multi-site deployments
  • 30. Ceph additions coming to the Dell Red Hat OpenStack solution Pilot configuration Components • Dell PowerEdge R620/R720/R720XD Servers • Dell Networking S4810/S55 Switches, 10GB • Red Hat Enterprise Linux OpenStack Platform • Dell ProSupport • Dell Professional Services • Avail. w/wo High Availability Specs at a glance • Node 1: Red Hat Openstack Manager • Node 2: OpenStack Controller (2 additional controllers for HA) • Nodes 3-8: OpenStack Nova Compute • Nodes: 9-11: Ceph 12x3 TB raw storage • Network Switches: Dell Networking S4810/S55 • Supports ~ 170-228 virtual machines Benefits • Rapid on-ramp to OpenStack cloud • Scale up, modular compute and storage blocks • Single point of contact for solution support • Enterprise-grade OpenStack software package Storage bundles
  • 31. Example Ceph Dell Server Configurations Type Size Components Performance 20 TB • R720XD • 24 GB DRAM • 10 X 4 TB HDD (data drives) • 2 X 300 GB SSD (journal) Capacity 44TB / 105 TB* • R720XD • 64 GB DRAM • 10 X 4 TB HDD (data drives) • 2 X 300 GB SSH (journal) • MD1200 • 12 X 4 TB HHD (data drives) Extra Capacity 144 TB / 240 TB* • R720XD • 128 GB DRAM • 12 X 4 TB HDD (data drives) • MD3060e (JBOD) • 60 X 4 TB HHD (data drives)
  • 32. • Dell & Red Hat & Inktank have partnered to bring a complete Enterprise-grade storage solution for RHEL-OSP + Ceph • The joint solution provides: – Co-engineered and validated Reference Architecture – Pre-configured storage bundles optimized for performance or storage – Storage enhancements to existing OpenStack Bundles – Certification against RHEL-OSP – Professional Services, Support, and Training › Collaborative Support for Dell hardware customers › Deployment services & tools What Are We Doing To Enable?
  • 34. Overcoming a data deluge Inconsistent data management across research teams hampers productivity • Growing data sets challenged available resources • Research data distributed across laptops, USB drives, local servers, HPC clusters • Transferring datasets to HPC clusters took too much time and clogged shared networks • Distributed data management reduced researcher productivity and put data at risk
  • 35. Solution: a storage cloud Centralized storage cloud based on OpenStack and Ceph • Flexible, fully open-source infrastructure based on Dell reference design − OpenStack, Crowbar and Ceph − Standard PowerEdge servers and storage − 400+ TBs at less than 41¢ per gigabyte • Distributed scale-out storage provisions capacity from a massive common pool − Scalable to 5 petabytes • Data migration to and from HPC clusters via dedicated 10Gb Ethernet fabric • Easily extendable framework for developing and hosting additional services − Simplified backup service now enabled “We’ve made it possible for users to satisfy their own storage needs with the Dell private cloud, so that their research is not hampered by IT.” David L. Shealy, PhD Faculty Director, Research Computing Chairman, Dept. of Physics
  • 36. Building a research cloud Project goals extend well beyond data management • Designed to support emerging data-intensive scientific computing paradigm – 12 x 16-core compute nodes – 1 TB RAM, 420 TBs storage – 36 TBs storage attached to each compute node • Virtual servers and virtual storage meet HPC − Direct user control over all aspects of the application environment − Ample capacity for large research data sets • Individually customized test/development/ production environments − Rapid setup and teardown • Growing set of cloud-based tools & services − Easily integrate shareware, open source, and commercial software “We envision the OpenStack-based cloud to act as the gateway to our HPC resources, not only as the purveyor of services we provide, but also enabling users to build their own cloud-based services.” John-Paul Robinson, System Architect
  • 36. Research Computing System (Next Gen) A cloud-based computing environment with high-speed access to dedicated and dynamic compute resources [Diagram: twelve OpenStack nodes forming a virtualized server and storage computing cloud based on OpenStack, Crowbar and Ceph, with a cloud services layer on top; connected to HPC clusters and HPC storage via DDR and QDR InfiniBand and 10Gb Ethernet; linked to the UAB Research Network]
  • 39. Contact Information Reach Kamesh and Neil for additional information: Dell.com/OpenStack Dell.com/Crowbar Inktank.com/Dell Kamesh_Pemmaraju@Dell.com @kpemmaraju Neil.Levine@Inktank.com @neilwlevine Visit the Dell and Inktank booths in the OpenStack Summit Expo Hall

Editor's Notes

  1. R720XD configurations use 4 TB drives, 2 x 300 GB OS drives, 2 x 10 GbE NICs, iDRAC 7 Enterprise, LSI 9207-[8i, 8e] HBAs, and 2 x E5-2650 2 GHz processors. (*) The larger capacity figure applies where erasure coding is in use; matching the redundancy of 2x replication with erasure coding carries an overhead factor of roughly 1.2. Erasure coding is a feature of the Ceph Firefly release, which is in its final phase of development. Additional performance could be gained by adding either Intel's CAS or Dell FluidFS DAS caching software packages; doing so would impose additional memory and processing overhead, and more work in the deployment/installation bucket (because we would have to install and configure it).
  2. https://dev.uabgrid.uab.edu/wiki/OpenStackPlusCeph The research computing system (RCS) is built on a collection of distinct hardware systems designed to provide specific services to applications. The RCS hardware includes dedicated compute fabrics that support high performance computing (HPC) applications where hundreds of compute cores can work together on a single application. These clusters of commodity compute hardware make it possible to do data analysis and modelling work in hours, work that would have taken months using a single computer. The clusters are connected with dedicated high-bandwidth, low-latency networks so that applications can efficiently coordinate their actions across many computers and access a shared high-speed storage system for working efficiently with terabytes of data.
Our newest hardware fabric, acquired 2012Q4, is designed to support emerging data-intensive scientific computing and virtualization paradigms. This hardware is very similar to the commodity computers used by our traditional HPC fabrics; however, in addition to having many compute cores and lots of RAM, each individual computer contains 36 TB of built-in disk storage. Taken together, this newest hardware fabric adds 192 cores, 1 TB RAM, and 420 TB of storage to the RCS. The built-in disk storage is designed to support applications running local to each computer. The data-intensive computing paradigm exchanges the external storage networks of traditional HPC clusters for the native, very high-speed system buses that provide access to local hard disks in each computer. Large datasets are distributed across these computers, and applications are then assigned to run on the specific computer that stores the portion of the dataset they have been assigned to analyze. The hardware requirements for data-intensive computing closely resemble the requirements for virtualization and can benefit tremendously from the configuration flexibility that a virtualization fabric offers.
In order to enhance flexibility and further improve support for scaling research applications, we are engineering our latest hardware cluster to act as a virtualized storage and compute fabric. This enables support for a wide variety of storage and compute use cases, most prominently ample storage capacity for reliably housing large research data collections and flexible application development and deployment capabilities that allow direct user control over all aspects of the application environment. In short, we are tooling this hardware to build a cloud computing environment. We are building this cloud using OpenStack for compute virtualization and Ceph for storage virtualization; Crowbar will provision the raw hardware fabric. This approach is very similar to the model we have been following with our traditional ROCKS-based HPC cluster environment. The new approach enhances our ability to automatically provision hardware and further improves the economics of large-scale computing.
We are implementing this environment with Dell and Inktank. These vendors, and the upstream open source projects on which this platform is built, embrace the DevOps model for systems development. This will support further engineering collaboration with our vendors, enabling the UAB research community to continually enhance our fabric as needed and feed those enhancements upstream for inclusion in future support releases. This solution rounds out the feature set of the RCS core and will provide a general framework to scale future growth.
  5. User base: 900+ researchers across campus. KVM-based; 2 Nova nodes, 4 primary storage nodes, 4 replication nodes, 2 control nodes; 12 x R720XD systems.