Ceph and Storage Management with openATTIC, openSUSE Conference 2016-06-23
1. Ceph and Storage Management with openATTIC
openSUSE Conference, Nuremberg, Germany
2016-06-23
Lenz Grimmer <lenz@openattic.org>
2. openATTIC – Our Vision
─ Develop an open source alternative to proprietary storage
management systems
─ "Traditional" unified storage (NAS/SAN)
─ Support Ceph for scale-out scenarios
─ Backed with commercial support and services
3. openATTIC – What Sets Us Apart?
─ Focus on data center storage management
─ Support both SAN and NAS functionality without limitations
─ Fully Open Source (GPLv2)
─ No arbitrary functional restrictions
─ Low entrance barrier for adoption
─ Based on Linux / OSS tools
─ Multiple Linux distributions (Debian/Ubuntu/Red Hat/SUSE)
─ Well-established technology stack (e.g. drivers, hardware support)
─ Broad user base
4. openATTIC – Open Source Storage Management
─ Modern Web UI
─ RESTful API (Software-Defined Storage)
─ Unified Storage
─ NAS (NFS, CIFS, HTTP)
─ SAN (iSCSI, Fibre Channel)
─ LVM, XFS, ZFS, Btrfs, ext3/4
─ Volume mirroring (DRBD®)
─ Multi-node support
─ Monitoring (Nagios/Icinga) built-in
─ Ceph management and monitoring (WIP)
─ Development sponsored by it-novum
5. openATTIC – Components
─ Backend
─ Python (Django)
─ Django REST Framework (RESTful API)
─ Linux tools for storage management, e.g. LVM, LIO, filesystem utilities, DRBD, etc.
─ Nagios/Icinga & PNP4Nagios (Monitoring and Graphing)
─ Web Frontend
─ AngularJS (JS framework)
─ Bootstrap (HTML, CSS, and JS framework)
─ Uses the REST API exclusively
7. openATTIC – Installation on SUSE Linux
─ OBS Project filesystems:openATTIC
─ Packages available for openSUSE Leap 42.1 & SLES12
─ Thanks to Eric Jackson (swiftgeist) for the support!
─ Feedback is welcome!
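Installation from the filesystems:openATTIC OBS project might look like the following sketch; the repository URL follows the standard OBS download layout and the package name `openattic` is an assumption, so adjust the distribution path (e.g. for SLES12) as needed:

```shell
# Add the filesystems:openATTIC repository (URL assumes the usual
# OBS download layout for openSUSE Leap 42.1):
sudo zypper addrepo \
  http://download.opensuse.org/repositories/filesystems:/openATTIC/openSUSE_Leap_42.1/ \
  filesystems_openATTIC
# Refresh metadata and install the package:
sudo zypper refresh
sudo zypper install openattic
```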
8. openATTIC – Storage Development Roadmap
─ Add Disk and Storage Pool Management to the API & WebUI
─ Creating/Modifying LVM Volume Groups / MD RAID setups
─ Creating/Modifying Btrfs/ZFS Pools (incl. RAID setups)
─ Automatic discovery of disks/pools (via udev)
─ Monitoring Disk health (SMART)
─ Manage HW RAID controllers
─ Add volume mirroring support to the WebUI
─ Extend SAN functionality (more iSCSI/FC features)
─ Public Roadmap on the openATTIC Jira/Wiki to solicit community feedback
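The pool-management items above correspond to standard Linux tooling that the API/WebUI would drive under the hood; a minimal sketch with placeholder device and volume names (not taken from the slides):

```shell
# Build an MD RAID1 array from two disks (placeholder devices):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Layer an LVM Volume Group on top of the array:
pvcreate /dev/md0
vgcreate vg_data /dev/md0
# Carve out a logical volume and create an XFS filesystem on it:
lvcreate -L 10G -n lv_share vg_data
mkfs.xfs /dev/vg_data/lv_share
```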
9. openATTIC – Ceph Management Challenges
─ Managing and monitoring Ceph is complex
─ Many tools exist (e.g. Calamari, Intel VSM, ceph-dash)
─ Limited functionality, unclear roadmaps
─ Finding the best approach for managing Ceph
10. openATTIC – Ceph Management Goals
─ Create a management & monitoring GUI tool
─ A tool that administrators actually want to use
─ That scales without becoming overwhelming
─ Should still allow changes to be made elsewhere without becoming inconsistent
11. openATTIC – Ceph Management Implementation
─ Which Ceph Management API?
─ How to manage a distributed system?
─ How to monitor the cluster's health/performance?
─ How to perform remote management tasks?
─ How to monitor cluster nodes in a scalable way?
13. openATTIC – Current Ceph Development Status
─ “NoDB” backend architecture / framework in place
─ Create and map RBDs as block devices (volumes)
─ Pool Management Web UI (table view)
─ OSD Management Web UI (table view)
─ RBD Management Web UI (table view)
─ Monitor a cluster's health and performance
─ CRUSH Map Editor
─ Support for managing multiple Ceph clusters
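The "NoDB" idea is that openATTIC keeps no copy of cluster state in a local database: objects are built on demand from Ceph's own JSON output. A minimal sketch of that pattern, assuming the `ceph` CLI is reachable; the JSON field names follow Jewel-era `ceph status` output, and the command runner is injectable so the sketch can be exercised without a cluster:

```python
import json
import subprocess

def run_ceph(args):
    """Run a ceph CLI command and return its JSON output (needs a live cluster)."""
    return subprocess.check_output(["ceph"] + args + ["--format", "json"])

class ClusterStatus:
    """NoDB-style view of cluster health: nothing is cached or persisted
    locally, every call re-reads the cluster's own status output."""

    def __init__(self, runner=run_ceph):
        self._runner = runner  # injectable for testing without a cluster

    def health(self):
        # e.g. "HEALTH_OK" / "HEALTH_WARN" in Jewel-era output
        status = json.loads(self._runner(["status"]))
        return status["health"]["overall_status"]

    def num_osds(self):
        status = json.loads(self._runner(["status"]))
        return status["osdmap"]["osdmap"]["num_osds"]
```

Because the state is always fetched live, changes made elsewhere (ceph CLI, other tools) are picked up automatically, which is exactly the consistency goal stated on the goals slide.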
14. openATTIC – Ceph REST API Overview
https://wiki.openattic.org/display/OP/openATTIC+Ceph+REST+API+overview
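Since the frontend uses the REST API exclusively, scripts can drive openATTIC the same way. A hedged sketch of fetching a resource list over HTTP; the base path `/openattic/api` and the `Authorization: Token …` header follow common Django REST Framework conventions and are assumptions here, not confirmed by the slides:

```python
import json
import urllib.request

API_BASE = "http://oa-host/openattic/api"  # placeholder host

def build_request(resource, token):
    """Build an authenticated GET request for an API resource.
    The token-auth header format is the usual Django REST Framework
    convention (an assumption in this sketch)."""
    url = "{}/{}".format(API_BASE.rstrip("/"), resource.lstrip("/"))
    return urllib.request.Request(url, headers={"Authorization": "Token " + token})

def list_resource(resource, token):
    """Fetch and decode a JSON resource list (requires a running openATTIC)."""
    with urllib.request.urlopen(build_request(resource, token)) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (would contact the server; resource path is hypothetical):
# pools = list_resource("ceph/pools", "my-api-token")
```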
15. openATTIC – Ceph Development Roadmap
─ Ceph Cluster Status Dashboard incl. Performance Graphs
─ Extend Pool Management
─ OSD Monitoring/Management
─ RBD Management/Monitoring
─ CephFS Management
─ RGW Management (users, buckets, keys)
─ Deployment, remote configuration of Ceph nodes (via Salt)
─ Public Roadmap on the openATTIC Wiki to solicit community feedback
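For the Salt-based deployment item, remote configuration is typically expressed as Salt states; a hypothetical minimal state file for preparing a Ceph node (file paths and state IDs are illustrative, not from the openATTIC project):

```yaml
# /srv/salt/ceph-node.sls -- hypothetical minimal state for a Ceph node
ceph-packages:
  pkg.installed:
    - pkgs:
      - ceph
      - ceph-common

/etc/ceph/ceph.conf:
  file.managed:
    - source: salt://ceph/ceph.conf
```

Applying such a state to a minion (e.g. `salt 'node1' state.apply ceph-node`) would install the packages and push the configuration in one remote operation.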