Today’s Media and Entertainment workflows can depend on multiple storage types and their associated protocols. Assets might be delivered via an AWS S3 bucket, Dropbox share, Azure Blob container, proprietary file acceleration vendor, FTP, SFTP, or a host of other new technologies. Once delivered, an asset needs to navigate Hot, Nearline, and Archival storage tiers while being edited, conformed, transcoded, shared, distributed and broadcast.
While some software vendors have added support for some cloud-based storage protocols, other applications remain stuck exclusively on internal NFS or CIFS access. When this happens, many workflows are downgraded to the lowest common denominator (CIFS/NFS), preventing the use of more cost-effective, durable, or highly available scale-out storage systems. Alternatively, a CIFS/NFS gateway might be considered to “convert” protocols. However, almost all gateways proprietize data and compromise the cost savings of scale-out storage.
The Enterprise File Fabric speaks multiple protocols/APIs, indexes data in place, and can eliminate workflow headaches without proprietizing the data.
Content Ingest
Most content asset owners deliver assets via the Internet. Commonly a “meet in the middle” approach is used, where assets are delivered to a common resource such as an AWS S3 bucket, Dropbox folder, or other cloud sharing technology. To gain access to these assets and ingest them, software packages need to be installed on servers or desktops to pull files locally. If every content owner uses a different technology, the number of applications and scripts needed to maintain this workflow can quickly add up.
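As a concrete illustration of the pull-based pattern, the minimal Python sketch below uses boto3 to pull newly delivered assets from a content owner’s S3 bucket. The bucket and prefix names are hypothetical placeholders, and every other delivery technology (Dropbox, Azure Blob, FTP) would need its own equivalent script, which is exactly how the application sprawl described above accumulates.

```python
import boto3

# Minimal pull-based ingest sketch: download newly delivered assets
# from a content owner's S3 bucket to local storage. The bucket and
# prefix names are hypothetical placeholders.
s3 = boto3.client("s3")

BUCKET = "content-owner-deliveries"  # assumed delivery bucket
PREFIX = "incoming/"                 # assumed delivery prefix

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        filename = key.rsplit("/", 1)[-1]
        if not filename:  # skip the zero-byte "folder" placeholder entry
            continue
        s3.download_file(BUCKET, key, filename)
        print(f"ingested {key} -> {filename}")
```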
Alternatively, proprietary file acceleration technologies exist which allow content owners to push assets. While this can simplify the approach, these solutions can be costly and perform only this one function. Most file acceleration technologies are UDP based with a proprietary protocol/API, which may require two or three of these solutions to be deployed to ensure compatibility with multiple content owners.
The Enterprise File Fabric simplifies both pull and push based content ingestion workflows:
- By connecting with 60+ cloud providers and protocols, it’s a single solution in almost any “meet in the middle”/pull-based scenario. Large asset transfers can be accelerated via the Enterprise File Fabric’s M-Stream technology (the general idea behind multi-stream acceleration is sketched after this list).
- Links to shared DropFolders can be created and shared with content owners. Using either a standard web browser or desktop clients, content owners can securely push assets to on-premises storage. Large assets can leverage the Enterprise File Fabric’s M-Stream acceleration.
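M-Stream itself is proprietary to the File Fabric, but the general idea behind multi-stream transfer acceleration can be sketched: split a large object into byte ranges and fetch them concurrently. A minimal Python illustration with boto3 follows; the bucket and object names are placeholders, and this is not the File Fabric’s actual implementation.

```python
import boto3
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch of multi-stream transfer: fetch one large object
# as concurrent byte-range GETs. This shows the general technique, not
# the File Fabric's actual M-Stream implementation.
s3 = boto3.client("s3")
BUCKET, KEY = "media-archive", "masters/feature.mxf"  # placeholders
PART = 64 * 1024 * 1024  # 64 MiB per stream

size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]
ranges = [(i, min(i + PART, size) - 1) for i in range(0, size, PART)]

def fetch(rng):
    start, end = rng
    resp = s3.get_object(Bucket=BUCKET, Key=KEY, Range=f"bytes={start}-{end}")
    return start, resp["Body"].read()

with open("feature.mxf", "wb") as out:
    out.truncate(size)  # pre-size the file so parts can land at any offset
    with ThreadPoolExecutor(max_workers=8) as pool:
        for start, data in pool.map(fetch, ranges):
            out.seek(start)
            out.write(data)
```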
Content Lifecycle
Once ingested, a media asset’s lifecycle flows between Hot, Nearline and Archival storage tiers. Common workflows involve editing, proxy generation, conforms, and transcoding. Today’s Nearline and Archival tiers are commonly migrating to on-premises object storage. Object storage has many advantages over legacy archival storage, including:
- Replication between multiple regions
- Erasure coding for 11+ 9’s of durability (a back-of-envelope calculation follows this list)
- Instant access to data
- High Availability – even with multiple nodes offline
- Scale out growth
- No forklift upgrades
- Multiple generation / mixed hardware
- Single Namespace
- 10-100+ PB scale
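To put the 11+ 9’s figure in context, here is a crude back-of-envelope model: with a k+m erasure code, data is lost only if more than m of the n = k+m shards fail before they can be repaired. Assuming independent shard failures with an illustrative per-repair-window failure probability (real systems also model repair rates, correlated failures, and bit rot), the loss probability is a binomial tail:

```python
from math import comb

def loss_probability(k, m, p):
    """Probability that more than m of n = k + m shards fail,
    assuming independent failures with per-shard probability p
    within one repair window (a deliberate simplification)."""
    n = k + m
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(m + 1, n + 1))

# Illustrative numbers only: an 8+4 code with a 0.1% chance of a
# shard failing within a repair window gives roughly 12 nines.
print(loss_probability(8, 4, 0.001))  # ~7.9e-13
```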
With such an impressive list of advantages over legacy scale-up file or tape-based storage, the biggest consideration for object storage is whether its API is compatible with current/legacy workflows. Most on-premises object storage platforms provide a compatible Amazon S3 API. When some, but not all, of the critical workflow applications can utilize S3, legacy storage must often be retained.
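For the applications that do speak S3, targeting on-premises object storage is usually just a matter of overriding the endpoint. A minimal boto3 sketch, with the endpoint URL and credentials as placeholders for a real environment:

```python
import boto3

# Point a standard S3 client at an on-premises, S3-compatible object
# store by overriding the endpoint. The URL and credentials are
# placeholders for your environment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```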
To address this shortcoming, gateway NAS devices that export the S3 API as CIFS or NFS might be considered. While this can allow object storage to be used in a workflow, it comes with a number of shortcomings:
- Data is often proprietized – this creates vendor lock-in
- Most gateways are expensive, negating any cost savings in switching to object storage
- Many gateways lack an HA feature, introducing downtime risk
- Gateways can be a bottleneck, reducing the performance of object storage
The same Enterprise File Fabric that simplified the content ingestion workflow can be used to allow multiple tiers of storage to be utilized:
- Data is never stored in a proprietary format. This allows applications which are compatible with multiple storage protocols to work directly against the storage, eliminating bottlenecks and vendor lock-in.
- The Enterprise File Fabric’s M-Stream accelerates data transfers between tiers of on-premises storage. Inactive projects can be quickly migrated off to cheaper storage tiers, or clips retrieved for a breaking news cycle.
- A single namespace index of all files allows end users to search PBs of data quickly (a toy sketch of the idea follows this list).
- The File Fabric’s Cloud Drive desktop integration allows legacy applications to work with modern APIs through a local mount point.
- Proxy transcoding allows web viewing of assets while still on Nearline storage, ensuring the right clip is retrieved.
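The File Fabric’s metadata index is its own engine, but the underlying idea of a single searchable namespace can be sketched in a few lines: crawl each backend’s listing API into one metadata table, then search the table rather than the storage itself. A toy Python illustration follows; the endpoints, buckets, and tier names are placeholders, not the File Fabric’s actual indexing engine.

```python
import sqlite3
import boto3

# Toy illustration of a single-namespace metadata index: list objects
# from several S3-compatible endpoints into one searchable table.
BACKENDS = [
    ("hot",      "https://hot.example.internal:9000",      "projects"),
    ("nearline", "https://nearline.example.internal:9000", "archive"),
]

db = sqlite3.connect("namespace.db")
db.execute("CREATE TABLE IF NOT EXISTS files (tier TEXT, key TEXT, size INT)")

for tier, endpoint, bucket in BACKENDS:
    s3 = boto3.client("s3", endpoint_url=endpoint)
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            db.execute("INSERT INTO files VALUES (?, ?, ?)",
                       (tier, obj["Key"], obj["Size"]))
db.commit()

# Search the whole namespace without touching any storage backend.
for row in db.execute("SELECT tier, key FROM files WHERE key LIKE ?", ("%.mxf",)):
    print(row)
```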
Content Distribution
Many of the same issues common to content ingestion are relevant to content distribution. CDNs such as AWS CloudFront optimally rely on assets being located in AWS S3 storage. The Enterprise File Fabric steps up by sending the data to where it’s required for distribution.
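As a simple illustration of that last step, staging a finished asset into the S3 bucket that backs a CloudFront distribution is a single upload with boto3; the bucket, key, and file names below are placeholders:

```python
import boto3

# Stage a finished asset into the S3 bucket that serves as a
# CloudFront origin. Bucket, key, and file names are placeholders.
s3 = boto3.client("s3")
s3.upload_file(
    "final/feature_h264.mp4",     # local rendered asset
    "cdn-origin-bucket",          # S3 bucket behind the CloudFront distribution
    "releases/feature_h264.mp4",  # object key the CDN will serve
)
```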
Conclusion
The Enterprise File Fabric is a software solution which can augment existing workflows, allowing Media and Entertainment shops to leverage newer storage options without vendor lock-in. In most situations the Enterprise File Fabric can replace some existing software applications, further enhancing the value proposition of modernizing legacy workflows.