This document discusses a three-step approach to modern media asset management with an active archive:
1) Using object storage like Cleversafe for scalable, low-cost archive storage that is geo-dispersed for resilience.
2) Making the archive easily accessible using tools like Avere to provide NAS simplicity and performance.
3) Managing large quantities of media assets using asset management tools like CatDV for ingest, metadata, search, collaboration and workflows.
Three Steps to Modern Media Asset Management with Active Archive
1. Three Steps to Modern Media Asset
Management with Active Archive
December 3, 2015
2. Housekeeping
• Recording
• Available on-demand approximately 5 minutes after
today’s presentation
• Resources
• Solution Briefs, Papers, and more…
• Today’s slides
• Questions
• Please rate this webinar
3. Agenda
• Review the challenges with managing, accessing, and storing media
assets
• Using object storage for an archive
• How to make that archive easily accessible
• Tools to manage large quantities of media assets
6. Challenges in Media Asset Management
• Increasing volumes of content
– For more distribution channels
• Content is hugely valuable
– Costly or impossible to reshoot
• Teams are geographically dispersed
• Demand for high availability and resilience
– Time is money
• Moves toward object / cloud storage
• Workflow automation is critical
7. Modernization for MAM with Active Archive
[Architecture diagram: Clients and a CatDV Server access NAS through an Avere FXT 3850 edge filer, backed by Cleversafe (CS) object-storage nodes in NYC, NC, and LV]
8. Object Storage for Archives
Nancy Bennis, Director of Partner Sales, Cleversafe
9. Active Archive
[Architecture diagram: Clients and a CatDV Server access NAS through an Avere FXT 3850 edge filer, backed by Cleversafe (CS) object-storage nodes in NYC, NC, and LV]
10. Archive to Cloud
[Diagram: legacy data sources and primary data flow into a consolidated, centralized enterprise archive — no backup required]
Operational gains:
• Scalable
• Always on
• Zero-touch security
• Easy to manage
• Index and content search
• Analytics-ready
Economic advantages:
• Consolidated enterprise view
• Reclaim 60% of NAS capacity
• Eliminate copies, replication, backup software, encryption devices, proprietary hardware, and tape
• Dramatically lower FTE hours
• Reduce storage costs by 80%
12. New Generation of Storage: Cloud Storage
• Legacy storage is broken
– Cannot scale beyond petabytes
– Data protection relies on replication
– Availability and reliability suffer: rebuilding a 4 TB drive takes 10+ hours, and a RAID rebuild of a 20 TB drive (7,200–15,000 rpm) will take many days
• A new storage architecture is required for the next generation
– Traditional storage will be unable to keep pace with data growth, resulting in isolated pools of data
– A ‘tipping point’ will come; “the earlier you can make the transition, the better”
– The sooner you can free resources from simply keeping pace with infrastructure, the sooner you can focus on adding business value and gaining a competitive edge
ESG Market Landscape Report: Object Storage, June 2015
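The rebuild-time claims above follow from a simple lower bound: a full-drive rebuild cannot finish faster than capacity divided by sustained write throughput. A rough sketch (the 110 MB/s sustained rate is an assumed figure for a nearline drive, not from the slide):

```python
def rebuild_hours(capacity_tb: float, write_mb_s: float = 110.0) -> float:
    """Lower-bound rebuild time: capacity / sustained sequential write rate."""
    return capacity_tb * 1e12 / (write_mb_s * 1e6) / 3600

print(f"4 TB drive:  ~{rebuild_hours(4):.0f} hours")   # consistent with the 10+ hour claim
print(f"20 TB drive: ~{rebuild_hours(20):.0f} hours")  # 2+ days even as a best case
```

Real rebuilds under production I/O load run well below the sustained rate, which is why the slide's "many days" figure for 20 TB is plausible.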
13. What is Object Storage?
• Objects are more efficient than file systems
• Software-defined storage (SDS)
• Single virtual pool of storage with distributed access
• Unprecedented scale
• Data protection using erasure coding or RAID
• Increased data integrity and availability
14. Challenges with Traditional Public Cloud
[Diagram: analytics, collaboration, content distribution, and active archive workloads around a traditional public cloud, annotated with concerns: security, legal/privacy implications, multi-tenancy, availability, TCO at scale, integration, change]
• Uncertain data security
• Data ownership is ambiguous
• Exit strategy is unknown
• Potential vendor lock-in
• 4 9’s of availability targeted, but not committed
• On-premises application and infrastructure integration is challenging
• High costs at scale, including data capacity AND access fees
15. Benefits of Private Cloud
[Diagram: the same analytics, collaboration, content distribution, and active archive workloads around a private cloud, annotated: security, multi-tenancy, availability, DR, integration, change, legal/privacy implications]
• Data security and privacy can be effectively controlled
• Scale as you grow
• Always-on infrastructures can be built
• Multiple tenants can securely share infrastructure resources
• Hardware can be repurposed, refreshed, or decommissioned to meet financial and technical goals
• Comprehensive protocol and application support
• The most stringent RPO/RTO can be realized
16. How Cleversafe Storage® Technology Works
[Diagram: data flows through Accessers® and is sliced across three sites; components shown: DS Manager, Accesser, Slicestor. Labels: SECURE, RELIABLE, SCALABLE]
1. Data is virtualized, encrypted, and sliced using Information Dispersal Algorithms.
2. Slices are dispersed to separate disks, storage nodes, and geographic locations.
3. Even with individual servers or entire sites down, real-time bit-perfect data is retrieved from a subset of slices.
19. Cleversafe Platform Economics
Cleversafe vs. S3 — cost 29–61%+ lower:
• 5-year TCO comparison by Cleversafe at 480 TB, 960 TB, 1920 TB, and 3840 TB
• Amazon S3 published prices and capacity discounts (pricing as of 10/31/14, assumes 20% reads)
[Chart: 5-year TCO from $0 to $10,000,000, with dsNet below S3 at every capacity point]
Cleversafe vs. NAS — cost 80%+ lower ($/TB comparison; analysis by a Cleversafe customer):
• Current NAS, DR protected: $8,400
• Current NAS, single copy: $4,210
• NAS gateway and dsNet: $1,613
• dsNet object protected: $1,053
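The per-TB figures on this slide support the "80%+ lower" claim; a quick check of the arithmetic, using only the numbers quoted above:

```python
# $/TB figures as quoted on the slide; savings measured against the
# DR-protected NAS baseline.
cost_per_tb = {
    "Current NAS, DR protected": 8400,
    "Current NAS, single copy": 4210,
    "NAS gateway + dsNet": 1613,
    "dsNet object protected": 1053,
}
baseline = cost_per_tb["Current NAS, DR protected"]
for tier, cost in cost_per_tb.items():
    saving = (baseline - cost) / baseline
    print(f"{tier}: ${cost:,}/TB ({saving:.0%} below DR-protected NAS)")
# dsNet object protected works out to ~87% below DR-protected NAS,
# consistent with the slide's "80%+ lower" figure
```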
20. Benefits of Cleversafe for Media Asset Archives
• Single pane of glass
• Scalable to petabytes and exabytes
• Economics of cloud – 80% cost reduction
• Geo-dispersed collaboration
• Always on
• Future-proof
22. Accessing the Object-Storage Archive
[Architecture diagram: Clients and a CatDV Server access NAS through an Avere FXT 3850 edge filer, backed by Cleversafe (CS) object-storage nodes in NYC, NC, and LV]
23. CatDV – Asset Management Active Archive
Primary storage: legacy NAS at ~$2k/TB
Content archive: high capacity, low cost, geo-dispersed
[Architecture diagram: Clients and a CatDV Server on the NAS tier, connected through an Avere FXT 3850 edge filer to Cleversafe (CS) nodes in NYC, NC, and LV serving as the content archive]
24. CatDV – Asset Management Active Archive
Customer challenges:
– Data capacity growth
– Containing cost
– Data protection
– Want NAS simplicity plus object-storage efficiency and built-in DR
Avere + Cleversafe benefits:
– NAS + object: 70% lower cost
– NAS simplicity: no application changes
– NAS clustering: scale access throughput
– Object storage: maximum capacity and density
– Object geo-dispersal: survive a site outage
25. Site Failure Recovery Capabilities
Primary Site Failure
[Diagram: the primary site (NYC) with its Avere FXT 3850, clients, and CatDV Server has failed; a secondary site activates its own FXT 3850 NAS against the geo-dispersed content archive on the remaining Cleversafe nodes (NC, LV)]
26. Site Failure Recovery Capabilities
Customer DR challenges:
– Retain massive storage archive online for continuous access
– Allow application server instances to “reconnect” to storage
Site redundancy benefits:
– Object geo-dispersal survives site loss/outage
– Redundant-site Edge filer activation in the event of failure
28. How useful is all this storage,
archive, scale & reliability …
...without powerful search and
workflow automation ?
29. Storage Without Asset Management …
• “important assets are scattered through the organization”
• 62% of marketing and creative professionals spend 1–6 hours per week managing files
• 1 out of every 10 hours ($8,200 per year) is spent on file management – Aberdeen Group
• the average creative person looks for media 83 times per week …
• … and fails to find it 35% of the time – GISTICS
31. The Best MAM Tools Add …
• Simple tagging, logging and search …
... easily customised
... across all storage: on-premises, cloud, online, offline, SAN / NAS, etc.
• Powerful automation
... for ingest, moves between storage tiers, and distribution
• Lower-quality proxies always available
... even for offline content
• Collaboration tools for content creators …
... “I’ve found my asset, now use it in the edit”
• The right user experience ....
... for Producers / Directors / Editors / Archivists / Customers / Consumers
32. The best MAM tools unlock the
value of your media assets
39. Modernization for MAM with Active Archive
[Architecture diagram: Clients and a CatDV Server access NAS through an Avere FXT 3850 edge filer, backed by Cleversafe (CS) object-storage nodes in NYC, NC, and LV]
40. Next steps
• Ask questions
• Review the attachments section for relevant resources
• Rate this webinar
• Visit each website to learn more
– www.squarebox.com
– www.cleversafe.com
– www.averesystems.com
41. More Information
Dave Clack, CEO, Square Box Systems (CatDV) – www.squarebox.com
Nancy Bennis, Director of Partner Sales, Cleversafe – www.cleversafe.com
Bernie Behn, Principal Engineer, Avere Systems – www.averesystems.com
Editor's Notes
Title Slide
Avere Systems’ Edge filer brings a highly-available and highly-scalable filesystem front-end to high-capacity and highly-durable object storage.
Media Asset Management use-cases are faced with the challenges of storing massive amounts of data, requiring data protection, and generally, requiring a filesystem interface.
This bridges the gap between massively scalable object storage architectures, and the file I/O demands of applications not written for the S3 API.
At-rest data is contained on the dsNet system, and replicated across multiple geographies for durability across multiple sites.
CatDV application services will use file system mounts (NFS) to access the active archive housing petabytes of stored media assets.
This requires minimal change to the CatDV server configuration and *no* changes to the application itself, since Avere presents a filesystem over NFS.
The Edge filer will cache recent data requests to service subsequent reads/lookups of files, and cache write/ingest requests for prompt acknowledgement to the application server (CatDV).
All file system data and metadata is persisted on the dsNet, thus a total failure of the Avere Edge filer cluster can be recovered from by attaching a new cluster to the dsNet vault.
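The notes above say CatDV mounts the Avere-fronted archive as an ordinary NFS filesystem with no application changes. A hypothetical configuration sketch of that mount (hostname `fxt-cluster`, export `/archive`, and mount point are illustrative, not from the deck):

```shell
# Mount the Avere FXT export on the CatDV server over NFSv3 (hypothetical
# hostname/export path). CatDV then sees the active archive as a local path.
sudo mkdir -p /mnt/active-archive
sudo mount -t nfs -o vers=3,proto=tcp,hard fxt-cluster:/archive /mnt/active-archive

# Or persist across reboots via /etc/fstab:
# fxt-cluster:/archive  /mnt/active-archive  nfs  vers=3,proto=tcp,hard  0 0
```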
By backing up your data to Cleversafe private cloud storage, you eliminate the complexity and risk associated with traditional tape or disk-to-disk-to-tape (d2d2t) backup, speeding the backup process while optimizing recovery of your data. In many cases, tape can be eliminated. Your cost to protect data is reduced by 80%, to $2 per protected GB. In addition, this same solution, using Commvault Simpana, is archive-ready: the same platform that delivers these benefits for backup can actively archive stale data to the Cleversafe cloud. By archiving less-active data, you are essentially moving a large percentage of data from expensive Tier 1 storage to $2/GB storage. The combined use of Cleversafe for both backup and archive dramatically simplifies processes and drives huge costs out of your storage infrastructure, accommodating future data growth on the most scalable of platforms.
Public cloud, led by the large cloud service providers, is being ‘advertised’ as the answer to these massive data challenges. Incredibly low-cost, pay-by-the-drink cloud storage is very appealing. Most CIOs have cloud initiatives underway to evaluate how their IT department can leverage cloud without giving away data security and control.
Many aspects of cloud storage must be considered before putting your mission critical data out in public clouds. The news headlines have consistently clamored about data breaches, hacking and the risk of keeping confidential data in a public cloud. But there are equally compelling concerns over and above data security that should be addressed:
Who owns the data? Once your data is in a public cloud, is it legally yours? Or does it belong to the public cloud provider?
Getting locked into a certain cloud provider and making it costly and difficult to get your data out of the cloud should you need to, has become a realistic concern. Consider the nightmare of Nirvanix.
4 9’s of availability guarantees sometimes just don’t cut it.
Integration with your own applications and infrastructure can be complicated.
And when you consider all the hidden costs, the actual costs to scale and protect your data can get way more expensive than originally planned.
Here’s how Cleversafe storage technology works. It consists of 3 separate software components:
DS Manager – management GUI
Accessers – brains of the object storage architecture
Slicestors – storage arrays
First, the data is virtualized, then encrypted and sliced using mathematical algorithms called Information Dispersal Algorithms. The number of slices can vary based on the nature of the data. Each slice is dispersed to separate disks, separate nodes, and geographic locations.
Because of the unique algebraic formulas used, only a subset of the slices is required to retrieve the object. As a result, even with individual servers, storage nodes, and/or entire sites down, real-time bit-perfect data is retrieved.
This architecture eliminates the need for replication, which dramatically drives costs down compared to other storage systems.
You can scale performance and capacity independently by adding more Accessers (for performance) and/or more disks (for capacity).
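The slicing-and-subset-retrieval idea described above is a k-of-n erasure code. A toy sketch of the concept using simple XOR parity (RAID-5-style, any 2 of 3 slices reconstruct the data) — this is NOT Cleversafe's actual Information Dispersal Algorithm, which generalizes to arbitrary n and k:

```python
def slice_data(data: bytes):
    """Split data into two halves plus an XOR parity slice (2-of-3 code)."""
    if len(data) % 2:
        data += b"\x00"  # pad to an even length so the halves match
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]

def reconstruct(slices):
    """Rebuild the original from any 2 of the 3 slices (None = lost slice)."""
    a, b, parity = slices
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, parity))  # a = b XOR parity
    if b is None:
        b = bytes(x ^ y for x, y in zip(a, parity))  # b = a XOR parity
    return a + b

payload = b"media-asset!"                        # 12 bytes, splits evenly
a, b, p = slice_data(payload)
assert reconstruct([None, b, p]) == payload      # survives losing slice a
assert reconstruct([a, None, p]) == payload      # survives losing slice b
```

Spreading the three slices across three sites gives the site-failure tolerance the notes describe, with far less overhead than keeping full replicas at each site.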
In the event of a site failure at the location of the primary servers (CatDV, Avere FXT, and dsNet Accesser), the goal is to restore services as quickly and efficiently as possible.
Since the at-rest archive of data is geo-dispersed across all sites by the dsNet, another Accesser can provide S3/REST access to the dataset.
A secondary Avere FXT cluster that was previously unconfigured/inactive can be configured on-the-fly via Avere API calls to attach to the Vault where the data is stored.
The Avere FXT namespace can be configured on-the-fly via API to export the Avere FlashCloud filesystem contents of the Vault to the clients in the secondary site.
The CatDV server would be considered an NFS client in this scenario, and a new server can be configured to mount the Avere FXT to gain access to the archived data.
At this point, the CatDV server can be made available to the external users/consumers of the data held under the Media Asset Management system (CatDV).
This type of architecture provides for passive failover between sites without prolonged impact to archive data availability.
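The failover sequence above can be sketched as an ordered plan. This is a dry-run illustration only: the step names and parameters are hypothetical, not the real Avere or dsNet API; the point is the ordering of operations the notes describe:

```python
# Hypothetical failover plan builder — step names and parameter keys are
# illustrative, NOT real Avere/dsNet API calls. Each step corresponds to a
# line of the recovery procedure in the notes.
def build_failover_plan(vault: str, secondary_cluster: str, catdv_host: str):
    return [
        ("activate_accesser", {"site": "secondary"}),            # S3/REST access to the geo-dispersed vault
        ("attach_core_filer", {"cluster": secondary_cluster,
                               "vault": vault}),                 # secondary FXT attaches to the vault
        ("export_namespace", {"cluster": secondary_cluster}),    # re-export the FlashCloud filesystem
        ("mount_nfs", {"client": catdv_host}),                   # new CatDV server mounts the FXT
        ("resume_service", {"host": catdv_host}),                # users reconnect to CatDV
    ]

plan = build_failover_plan("media-vault", "fxt-secondary", "catdv-dr")
for step, args in plan:
    print(step, args)
```

Because the plan only reconfigures access paths and never moves archive data, the failover stays "passive" in the sense the notes describe: the dsNet slices are already present at the surviving sites.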