New – Edit your documents with Microsoft Office Online and the File Fabric

Sometimes, editing documents can be a pain. A colleague might ask you to add something to a document, but to edit it you need to download it, open it, make the change, save it, and re-upload it. Then comes the email exchange about why it won't open on their machine when it opens fine on yours. It's a hassle. But there is an easier way. Continue reading “New – Edit your documents with Microsoft Office Online and the File Fabric”

Replacing the Microsoft Forefront Unified Access Gateway (UAG) to securely access on-premises DFS or Windows File Shares

Providing remote users with access to on-premises file systems, such as Microsoft DFS or Windows CIFS shares, can be a difficult task, especially as multiple security layers, such as firewalls and VPNs, often sit in between.

Some companies have opted to solve this challenge by deploying the Microsoft Forefront Unified Access Gateway (UAG) as a bridge that gives remote users access to these on-premises systems. UAG was a software solution that facilitated access to file shares, intranets and corporate systems.

The Microsoft Forefront Unified Access Gateway is now officially deprecated.

Continue reading “Replacing the Microsoft Forefront Unified Access Gateway (UAG) to securely access on-premises DFS or Windows File Shares”

Writing to an erasure coded pool in Ceph Rados

Lately we’ve been working very closely with Red Hat’s Ceph Rados storage and its librados API, seeking an ever closer integration with the backend storage to take advantage of many of Ceph’s benefits.

Recently, however, we hit an issue where one of our customers had configured their pool to be erasure coded. Erasure coding is a form of data protection and redundancy in which the original file or object is split into a number of parts and distributed across a number of storage nodes, either within the same data centre or across multiple data centres and regions.
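The splitting-plus-redundancy idea can be sketched with the simplest possible erasure code: two data chunks plus one XOR parity chunk. This is purely illustrative; Ceph's erasure-code plugins implement full Reed-Solomon codes with configurable numbers of data (k) and coding (m) chunks.

```python
def encode(data: bytes, k: int = 2) -> list:
    """Split data into k equal chunks plus one XOR parity chunk."""
    pad = (-len(data)) % k          # pad so data divides evenly into k chunks
    data += b"\x00" * pad
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytearray(size)
    for chunk in chunks:            # parity byte = XOR of that byte in every chunk
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return chunks + [bytes(parity)]

def recover(chunks: list, missing: int) -> bytes:
    """Rebuild one lost chunk by XOR-ing all surviving chunks (incl. parity)."""
    size = len(next(c for c in chunks if c is not None))
    out = bytearray(size)
    for idx, chunk in enumerate(chunks):
        if idx == missing:
            continue
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)
```

With k=2 and m=1 the pool survives the loss of any single chunk, at a storage overhead of 50% rather than the 100%+ of full replication; real deployments pick larger k and m for better ratios.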

Continue reading “Writing to an erasure coded pool in Ceph Rados”

Introducing the lower-level Ceph Rados connector

For a while now, Storage Made Easy has had support for the Red Hat Ceph Storage platform. For this integration, Storage Made Easy made use of the Red Hat Ceph Rados Gateway, an abstraction on top of the Red Hat Ceph Rados platform that provides protocol adaptors for S3 and OpenStack Swift.

Many of our customers, however, choose not to deploy the Rados Gateway alongside their Ceph clusters, but still want to make use of the great enterprise file share and sync fabric that Storage Made Easy provides. It gives us great pleasure to announce that Storage Made Easy has now released a Ceph Rados connector that works directly with the Ceph Rados platform (using its librados API).

Our new connector uses the librados API, which gives us lower-level access to Ceph Rados storage. We recently announced that the Université de Lorraine has chosen Storage Made Easy as its enterprise file share and sync fabric, and it will be one of the first customers to use this new connector.

Storage Made Easy will continue to offer and support the Red Hat Ceph Storage connector that uses the Rados Gateway. If you are interested in our new connector, which utilises Red Hat Ceph Storage Rados directly, please contact support@storagemadeeasy.com.

How to deploy a multi-node Ceph cluster using Vagrant

One of the 50+ cloud providers Storage Made Easy supports is Ceph, a software storage platform based on distributed object storage. A number of our customers currently use Storage Made Easy to provide file share and sync capabilities to their end users, with Ceph as their backend storage.

When integrating with Ceph, SME currently uses Ceph’s S3-compatible interface through the Rados Gateway (RADOSGW) (see the picture below), which was a natural fit, since we already support a number of clouds that present such interfaces. Recently, however, we have begun to evaluate building a connector that uses Ceph’s lower-level API, librados.
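As a rough illustration of what "S3-compatible" means in practice, a standard S3 client such as s3cmd can be pointed straight at a RADOSGW endpoint; the host name and credentials below are placeholders, and 7480 is the default port RADOSGW listens on:

```
# ~/.s3cfg -- placeholder endpoint and credentials for a RADOSGW instance
access_key = RADOSGW_ACCESS_KEY
secret_key = RADOSGW_SECRET_KEY
host_base = rgw.example.com:7480
host_bucket = %(bucket)s.rgw.example.com:7480
use_https = False
```

A librados connector bypasses this HTTP/S3 layer entirely and talks to the cluster's storage pools directly, which is what the evaluation described above is about.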

Continue reading “How to deploy a multi-node Ceph cluster using Vagrant”
