Object Storage Drive vs. Object Storage Explorer: The Case for Using Both

At Storage Made Easy we are best known for our Enterprise File Fabric™, but we also sell desktop-only Drives and dedicated Application Explorers that make using object storage systems such as S3 and compatibles, Azure Blob Storage, Google Cloud Storage and OpenStack Swift from the Windows desktop simple. These desktop tools don't need or use the File Fabric; you just install them, point them directly at your storage (on-premises or in the cloud) and they are ready to use.

Continue reading “Object Storage Drive vs. Object Storage Explorer: The Case for Using Both”


File Fabric Feature Focus – Collaborating on the move using FileBox

The File Fabric has lots of cool hidden features that are very powerful, and as part of a new series of posts on File Fabric features we are kicking off with a feature called 'FileBox'.

FileBox is a feature that enables files to be emailed, from any standard email client, directly into a nominated FileBox folder or shared team folder.

Continue reading “File Fabric Feature Focus – Collaborating on the move using FileBox”


Writing to an erasure coded pool in Ceph Rados

Lately we've been working very closely with Red Hat's Ceph Rados storage and its librados API, seeking an ever closer integration with the backend storage to utilise many of Ceph's benefits.

Recently, however, we hit an issue where one of our customers had configured their pool to be erasure coded. Erasure coding is a form of data protection and redundancy whereby the original file or object is split into a number of parts and distributed across a number of storage nodes, either within the same data centre or across multiple data centres and regions.
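The practical consequence for anyone writing through librados is that an erasure coded pool (without overwrite support enabled) rejects partial writes: objects have to be written in full. Below is a minimal sketch of the difference using the Python rados bindings; the pool name 'ecpool' and object name are hypothetical, and this is an illustration rather than our connector code.

```python
import rados

# Connect using the local cluster configuration and client keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('ecpool')  # hypothetical erasure coded pool
    try:
        # write_full() replaces the whole object in a single operation,
        # which erasure coded pools accept.
        ioctx.write_full('report.pdf', b'entire object contents')

        # A partial overwrite at an offset typically fails with EOPNOTSUPP
        # on an EC pool, unless overwrites have been enabled on the pool
        # (ceph osd pool set <pool> allow_ec_overwrites true).
        try:
            ioctx.write('report.pdf', b'patch', offset=4)
        except rados.Error as e:
            print('partial write rejected:', e)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```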

Continue reading “Writing to an erasure coded pool in Ceph Rados”


Introducing the lower-level Ceph Rados connector

For a while now, Storage Made Easy has had support for the Red Hat Ceph Storage platform. For this integration, Storage Made Easy made use of the Red Hat Ceph Rados Gateway, an abstraction on top of the Red Hat Ceph Rados platform that provides protocol adaptors for S3 and OpenStack Swift.

Many of our customers, however, choose not to deploy the Rados Gateway alongside their Ceph clusters, but still want to make use of the great enterprise file share and sync fabric that Storage Made Easy provides. It gives us great pleasure to announce that Storage Made Easy has now released a Ceph Rados connector that works directly with the Ceph Rados platform (using its librados API).

Our new connector uses the librados API, which gives us lower-level access to the Ceph Rados storage. We recently announced that the Université de Lorraine has chosen Storage Made Easy as its enterprise file share and sync fabric, and it will be one of the first customers using this new connector.
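To give a flavour of what lower-level access looks like, here is a minimal sketch of talking to a cluster with the Python rados bindings, the same librados API the connector builds on. The pool name is hypothetical, and the snippet assumes a reachable cluster with a valid ceph.conf and keyring.

```python
import rados

# librados discovers the monitors and credentials from ceph.conf and the
# client keyring; no S3 or Swift gateway is involved.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    print('FSID: ', cluster.get_fsid())
    print('Pools:', cluster.list_pools())
    print('Usage:', cluster.get_cluster_stats())  # kb, kb_used, kb_avail, num_objects

    # Objects are read and written directly in a pool.
    ioctx = cluster.open_ioctx('data')  # hypothetical pool name
    try:
        ioctx.write_full('hello', b'hello from librados')
        print(ioctx.read('hello'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```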

Storage Made Easy will continue to offer and support the Red Hat Ceph Storage connector that uses the Rados Gateway. If you are interested in our new connector, which utilises Red Hat Ceph Rados directly, please contact support@storagemadeeasy.com.


How to deploy a multi-node Ceph cluster using Vagrant

One of the 50+ cloud providers Storage Made Easy supports is Ceph, a distributed, software-defined object storage platform. A number of our customers currently use Storage Made Easy to provide file share and sync capabilities to their end users, with Ceph as their backend storage.

When integrating with Ceph, SME currently uses Ceph's S3-compatible interface through the Rados Gateway (RADOSGW), which was a perfect fit since we already support a number of clouds that present such interfaces. Recently, however, we have begun to evaluate building a connector that uses Ceph's lower-level APIs through librados.
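For comparison, here is what the current gateway-based integration boils down to: the Rados Gateway exposes a standard S3 endpoint, so any S3 client works against it. A minimal sketch with Python's boto3; the endpoint URL and credentials are placeholders for your own RADOSGW deployment.

```python
import boto3

# Point a standard S3 client at the RADOSGW endpoint instead of AWS.
# The endpoint URL, access key and secret key are placeholders.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:7480',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

s3.create_bucket(Bucket='demo')
s3.put_object(Bucket='demo', Key='hello.txt', Body=b'stored via RADOSGW')
print(s3.get_object(Bucket='demo', Key='hello.txt')['Body'].read())
```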

Continue reading “How to deploy a multi-node Ceph cluster using Vagrant”
