WHAT'S NEW IN JEWEL AND BEYOND
SAGE WEIL
CEPH DAY CERN – 2016.06.14
2
AGENDA
● New in Jewel
● BlueStore
● Kraken and Luminous
3
RELEASES
● Hammer v0.94.x (LTS)
– March '15
● Infernalis v9.2.x
– November '15
● Jewel v10.2.x (LTS)
– April '16
● Kraken v11.2.x
– November '16
● Luminous v12.2.x (LTS)
– April '17
4
JEWEL v10.2.x – APRIL 2016
RADOS
A reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes
LIBRADOS
A library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP
RBD
A reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver
RADOSGW
A bucket-based REST gateway, compatible with S3 and Swift
CEPH FS
A POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE
APP | APP | HOST/VM | CLIENT
(architecture diagram: every layer marked AWESOME except CEPH FS, still marked NEARLY AWESOME)
6
2016 =
RGW
A web services gateway
for object storage,
compatible with S3 and
Swift
LIBRADOS
A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP)
RADOS
A software-based, reliable, autonomous, distributed object store comprised of
self-healing, self-managing, intelligent storage nodes and lightweight monitors
RBD
A reliable, fully-distributed
block device with cloud
platform integration
CEPHFS
A distributed file system
with POSIX semantics and
scale-out metadata
management
OBJECT BLOCK FILE
FULLY AWESOME
CEPHFS
8
CEPHFS: STABLE AT LAST
● Jewel recommendations
– single active MDS (+ many standbys)
– snapshots disabled
● Repair and disaster recovery tools
● CephFSVolumeManager and Manila driver
● Authorization improvements (confine client to a directory)
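A minimal sketch of the directory confinement, assuming a hypothetical client.share1 key, a /shares/share1 directory, and the default cephfs_data pool:
# key that may only access /shares/share1, and only one pool namespace
ceph auth get-or-create client.share1 \
  mon 'allow r' \
  mds 'allow rw path=/shares/share1' \
  osd 'allow rw pool=cephfs_data namespace=share1'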
9
(diagram: a LINUX HOST running the KERNEL MODULE (or ceph-fuse, Samba, Ganesha) talking to a RADOS CLUSTER with monitors (M); metadata and data go to separate pools)
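For reference, the two common client mount paths, sketched with placeholder monitor address, credentials, and mount point:
# kernel client
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# FUSE client
ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs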
10
SCALING FILE PERFORMANCE
● Data path is direct to RADOS
– scale IO path by adding OSDs
– or use SSDs, etc.
● No restrictions on file count or file system size
– MDS cache performance related to size of active set, not total file count
● Metadata performance
– provide lots of RAM for MDS daemons (no local on-disk state needed)
– use SSDs for RADOS metadata pool
● Metadata path is scaled independently
– up to 128 active metadata servers tested; 256 possible
– in Jewel, only 1 is recommended
– stable multi-active MDS coming in Kraken or Luminous
11-15
DYNAMIC SUBTREE PARTITIONING (diagram-only slides)
16
POSIX AND CONSISTENCY
● CephFS has “consistent caching”
– clients can cache data
– caches are coherent
– MDS invalidates data that is changed – complex locking/leasing protocol
● This means clients never see stale data of any kind
– consistency is much stronger than, say, NFS
● file locks are fully supported
– flock and fcntl locks
17
RSTATS
# ext4 reports dirs as 4K
$ ls -lhd /ext4/data
drwxrwxr-x. 2 john john 4.0K Jun 25 14:58 /ext4/data
# cephfs reports dir size from contents
$ ls -lhd /cephfs/mydata
drwxrwxr-x. 1 john john 16M Jun 25 14:57 /cephfs/mydata
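The recursive stats are also exposed as virtual xattrs; a quick sketch (path is a placeholder):
getfattr -n ceph.dir.rbytes /cephfs/mydata    # recursive byte count
getfattr -n ceph.dir.rentries /cephfs/mydata  # recursive file + dir count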
18
OTHER GOOD STUFF
● Directory fragmentation
– shard directories for scaling, performance
– disabled by default in Jewel; on by default in Kraken
● Snapshots
– create snapshot on any directory
mkdir some/random/dir/.snap/mysnapshot
ls some/random/dir/.snap/mysnapshot
– disabled by default in Jewel; hopefully on by default in Luminous (enable commands sketched after this list)
● Security authorization model
– confine a client mount to a directory and to a rados pool namespace
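Both directory fragmentation and snapshots ship disabled in Jewel and are switched on cluster-wide; a rough sketch of the Jewel-era toggles (later releases moved these to "ceph fs set ...", and a --yes-i-really-mean-it confirmation may be required, so verify against your release's docs):
ceph mds set allow_dirfrags true
ceph mds set allow_new_snaps true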
19
FSCK AND RECOVERY
● metadata scrubbing
– online operation
– manually triggered in Jewel
– automatic background scrubbing coming in Kraken, Luminous
● disaster recovery tools
– rebuild file system namespace from scratch if RADOS loses it or something
corrupts it
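A hedged sketch of the tooling above; mds.a and the pool name cephfs_data are placeholders, and the offline tools should only be run as the disaster-recovery docs describe, on a stopped file system:
# online, manually triggered scrub via the MDS admin socket
ceph daemon mds.a scrub_path / recursive
# offline recovery tools
cephfs-journal-tool journal inspect
cephfs-data-scan scan_extents cephfs_data
cephfs-data-scan scan_inodes cephfs_data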
20
OPENSTACK MANILA FSaaS
● CephFS native
– Jewel and Mitaka
– CephFSVolumeManager to orchestrate shares
● CephFS directories
● with quota
● backed by a RADOS pool + namespace
● and clients locked into the directory
– VM mounts CephFS directory (ceph-fuse, kernel client, …)
– tenant VM talks directly to the Ceph cluster; deploy with caution
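The per-share quota is just the CephFS quota xattr on the share directory; a small example with a hypothetical 10 GB share (quotas are enforced by ceph-fuse/libcephfs clients, not by the kernel client at this point):
setfattr -n ceph.quota.max_bytes -v 10737418240 /cephfs/volumes/share1
getfattr -n ceph.quota.max_bytes /cephfs/volumes/share1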
OTHER JEWEL STUFF
22
GENERAL
● daemons run as ceph user
– except upgraded clusters that don't want to chown -R (see the sketch after this list)
● selinux support
● all systemd
● ceph-ansible deployment
● ceph CLI bash completion
● “calamari on mons”
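A sketch of the two options for an upgraded cluster, assuming default paths: hand the data over to the ceph user, or keep the existing owner via setuser match path:
# option 1: reassign ownership (daemons stopped)
chown -R ceph:ceph /var/lib/ceph /var/log/ceph
# option 2: ceph.conf, keep running as whatever user owns the data dir
[global]
setuser match path = /var/lib/ceph/$type/$cluster-$id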
23
BUILDS
● aarch64 builds
– centos7, ubuntu xenial
● armv7l builds
– debian jessie
– http://ceph.com/community/500-osd-ceph-cluster/
RBD
25
RBD IMAGE MIRRORING
● image mirroring
– asynchronous replication to another cluster
– replica(s) crash consistent
– replication is per-image
– each image has a data journal
– rbd-mirror daemon does the work
(diagram: in cluster A, QEMU/librbd writes to the image and its journal; the rbd-mirror daemon replays that journal into the image replica in cluster B)
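A minimal sketch of one-way, pool-mode mirroring between two clusters named local and remote (cluster, pool, and image names are placeholders); rbd-mirror runs on the backup side:
# journaling must be enabled on any image to be mirrored
rbd feature enable rbd/myimage journaling
# enable pool-mode mirroring on both clusters
rbd --cluster local mirror pool enable rbd pool
rbd --cluster remote mirror pool enable rbd pool
# point the backup cluster at its peer, then start rbd-mirror there
rbd --cluster remote mirror pool peer add rbd client.admin@local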
26
OTHER RBD STUFF
● fast diff
– use object-map to do O(1) time diff
● deep flatten
– separate clone from parent while retaining snapshot history
● dynamic features (see the examples after this list)
– turn on/off: exclusive-lock, object-map, fast-diff, journaling
– turn off: deep-flatten
– useful for compatibility with kernel client, which lacks some new features
● new default features
– layering, exclusive-lock, object-map, fast-diff, deep-flatten
● snapshot rename
● rbd du
● improved/rewritten CLI (with dynamic usage/help)
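A few hedged examples of the commands above, against a hypothetical image rbd/myimage:
# drop features the kernel client can't handle yet
rbd feature disable rbd/myimage deep-flatten fast-diff object-map exclusive-lock
# re-enable some later for librbd clients
rbd feature enable rbd/myimage exclusive-lock object-map fast-diff
rbd du rbd/myimage                               # fast usage via object-map/fast-diff
rbd snap rename rbd/myimage@old rbd/myimage@new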
RGW
28
NEW IN RGW
● Rewritten multi-site capability
– N zones, N-way sync
– fail-over and fail-back
– simpler configuration (see the sketch after this list)
● NFS interface
– export a bucket over NFSv4
– designed for import/export of data – not a general-purpose file system!
– based on nfs-ganesha
● Indexless buckets
– bypass RGW index for certain buckets
(that don't need enumeration, quota, ...)
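A heavily condensed sketch of the new multi-site setup on the master zone, following the Jewel realm/zonegroup/zone model; all names, endpoints, and keys are placeholders, and the secondary-zone steps are omitted:
radosgw-admin realm create --rgw-realm=gold --default
radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --master --default
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --endpoints=http://rgw1:80 \
    --access-key=SYSTEM_KEY --secret=SYSTEM_SECRET --master --default
radosgw-admin period update --commit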
29
● S3
– AWS4 authentication support
– LDAP and AD/LDAP support
– static website
– RGW STS (coming shortly)
● Kerberos, AD integration
● Swift
– Keystone V3
– multi-tenancy
– object expiration
– Static Large Object (SLO)
– bulk delete
– object versioning
– refcore compliance
RGW API UPDATES
RADOS
31
RADOS
● queuing improvements
– new IO scheduler “wpq” (weighted priority queue) stabilizing
– (more) unified queue (client io, scrub, snaptrim, most of
recovery)
– somewhat better client vs recovery/rebalance isolation
● mon scalability and performance improvements
– thanks to testing here @ CERN
● optimizations, performance improvements (faster on SSDs)
● AsyncMessenger – new implementation of networking layer
– fewer threads, friendlier to allocator (especially tcmalloc)
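Both the new scheduler and the new messenger are opt-in via ceph.conf in Jewel; a hedged sketch (defaults and stability vary by point release):
[global]
ms type = async           # AsyncMessenger instead of SimpleMessenger
[osd]
osd op queue = wpq        # weighted priority queue scheduler
osd op queue cut off = low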
32
MORE RADOS
● no more ext4
● cache tiering improvements
– proxy write support
– promotion throttling
– better, still not good enough for RBD and EC base
● SHEC erasure code (thanks to Fujitsu)
– trade some extra storage for recovery performance
● [test-]reweight-by-utilization improvements
– better data distribution optimization
● BlueStore – new experimental backend
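Two quick examples for the items above, with placeholder numbers:
# preview, then apply, touching only OSDs more than 20% above mean utilization
ceph osd test-reweight-by-utilization 120
ceph osd reweight-by-utilization 120
# SHEC: 4 data chunks, 3 coding chunks, recover from 2 failures with less read traffic
ceph osd erasure-code-profile set myshec plugin=shec k=4 m=3 c=2
ceph osd pool create ecpool 64 64 erasure myshec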
BLUESTORE
34
BLUESTORE: NEW BACKEND GOALS
● no more double writes to a full data journal
● efficient object enumeration
● efficient clone operation
● efficient splice (“move these bytes from object X to object Y”)
– (will enable EC overwrites for RBD over EC)
● efficient IO pattern for HDDs, SSDs, NVMe
● minimal locking, maximum parallelism (between PGs)
● full data and metadata checksums
● inline compression (zlib, snappy, etc.)
35
● BlueStore = Block + NewStore
– consume raw block device(s)
– key/value database (RocksDB) for metadata
– data written directly to block device
– pluggable block Allocator
● We must share the block device with RocksDB
– implement our own rocksdb::Env
– implement tiny “file system” BlueFS
– make BlueStore and BlueFS share the device
BLUESTORE
(diagram: ObjectStore → BlueStore; data goes straight to the raw BlockDevice via the Allocator, while metadata goes to RocksDB through BlueRocksEnv on BlueFS, which shares the same BlockDevice)
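BlueStore in Jewel is experimental and gated behind an explicit opt-in; a sketch of trying it on a throwaway OSD (never for real data):
# ceph.conf
[global]
enable experimental unrecoverable data corrupting features = bluestore rocksdb
[osd]
osd objectstore = bluestore
# then prepare a test OSD
ceph-disk prepare --bluestore /dev/sdb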
36
Full data checksums
● We scrub... periodically
● We want to validate checksum
on every read
Compression
● 3x replication is expensive
● Any scale-out cluster is
expensive
WE WANT FANCY STUFF
37
Full data checksums
● We scrub... periodically
● We want to validate checksum
on every read
● More metadata in the blobs
– 4KB of 32-bit csum metadata
for 4MB object and 4KB blocks
– larger csum blocks?
– smaller csums (8 or 16 bits)?
Compression
● 3x replication is expensive
● Any scale-out cluster is
expensive
● Need largish extents to get
compression benefit (64 KB, 128
KB)
– overwrites occlude/obscure
compressed blobs
– compacted (rewritten) when >N
layers deep
WE WANT FANCY STUFF
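For reference, the knobs this design eventually grew; the option names below are from post-Jewel releases, so treat them as assumptions here:
[osd]
bluestore csum type = crc32c              # or xxhash32/xxhash64, or none
bluestore compression algorithm = snappy  # or zlib
bluestore compression mode = aggressive   # none / passive / aggressive / force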
38
HDD: SEQUENTIAL WRITE
(chart: Ceph 10.1.0 BlueStore vs FileStore sequential write throughput in MB/s by IO size; series: FS HDD, BS HDD)
39
HDD: RANDOM WRITE
(charts: Ceph 10.1.0 BlueStore vs FileStore random write throughput in MB/s and IOPS by IO size; series: FS HDD, BS HDD)
40
HDD: SEQUENTIAL READ
(chart: Ceph 10.1.0 BlueStore vs FileStore sequential read throughput in MB/s by IO size; series: FS HDD, BS HDD)
KRAKEN AND LUMINOUS
42
RADOS
● BlueStore!
● erasure code overwrites (RBD + EC)
● ceph-mgr – new mon-like daemon
– management API endpoint (Calamari)
– metrics
● config management in mons
● on-the-wire encryption
● OSD IO path optimization
● faster peering
● QoS
● ceph-disk support for dm-cache/bcache/FlashCache/...
43
RGW
● AWS STS (kerberos support)
● pluggable full-zone syncing
– tiering to tape
– tiering to cloud
– metadata indexing (elasticsearch?)
● S3 encryption API
● compression
● performance
44
RBD
● RBD mirroring improvements
– cooperative daemons
– Cinder integration
● RBD client-side persistent cache
– write-through and write-back cache
– ordered writeback → crash consistent on loss of cache
● RBD consistency groups
● client-side encryption
● Kernel RBD improvements
45
CEPHFS
● multi-active MDS and/or snapshots
● Manila hypervisor-mediated FSaaS
– NFS over VSOCK →
libvirt-managed Ganesha server →
libcephfs FSAL →
CephFS cluster
– new Manila driver
– new Nova API to attach shares to VMs
● Samba and Ganesha integration improvements
● richacl (ACL coherency between NFS and CIFS)
46
OTHER COOL STUFF
● librados backend for RocksDB
– and rocksdb is not a backend for MySQL...
● PMStore
– Intel OSD backend for 3D XPoint
● multi-hosting on IPv4 and IPv6
● ceph-ansible
● ceph-docker
47
● SUSE
● Mirantis
● XSKY
● Intel
● Fujitsu
● DreamHost
● ZTE
● SanDisk
● Gentoo
● Samsung
● LETV
● Igalia
● Deutsche Telekom
● Kylin Cloud
● Endurance International Group
● H3C
● Johannes Gutenberg-Universität
Mainz
● Reliance Jio Infocomm
● Ebay
● Tencent
GROWING DEVELOPMENT COMMUNITY
THANK YOU!
Sage Weil
CEPH PRINCIPAL ARCHITECT
sage@redhat.com
@liewegas