Object storage is a trendy topic these days. Whatever its ultimate potential, it is finding increasing usage in a couple of areas, said storage analyst Greg Schulz of StorageIO Group.
Regulatory archiving in fields such as finance, healthcare and legal is one big area of object storage deployment. Another group of users turns to object-access or object-architecture-based solutions as a home for large volumes of data that are less active than primary high-performance workloads such as databases and email. Examples include large-scale web and file serving, high-performance computing, and video, media and entertainment.
But what is it? It seems that object storage proponents assume that we all “get it” when relatively few of us do. So let’s take a moment to define and explain the concept and how it fits into the greater storage landscape.
Mark Pasto, Product Marketing Manager, Archive Products, Quantum, defines object storage as a set of technologies that allows file data to be stored and preserved with its business context as additional metadata.
“Its flat file system orientation enables enterprises to access all available data, wherever it is stored in order to optimize the extraction of value for competitive advantage,” said Pasto.
Mike Chase, EVP and CTO, dinCloud, added that object storage allows all types of data (block, file, pure objects, etc.) to be saved as objects, storing multiple copies of each, often on different storage servers and drives. This technique is said to replace RAID for data availability.
“Object storage can rebuild a failed 1TB disk in 20 minutes, whereas RAID6 often takes one to three days,” said Chase. “Also, the cost of storage is often reduced by 72%.”
Because object storage can have many storage server “heads” (up to thousands), there is never a single bottleneck. Servers access data directly at 10Gb and 40Gb speeds, as they know where all data lives via a dynamic map that is constantly updated, said Chase.
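The replication approach Chase describes can be sketched in a few lines. This is a toy model, not dinCloud's actual implementation: each object is written to several "servers," and when a server fails, only the objects it actually held are re-copied, which is why recovery can be far faster than a RAID rebuild of an entire disk.

```python
# Toy sketch of replica-based availability (illustrative only, not any
# vendor's real design): each object is written to `copies` different
# servers; healing a failed server re-replicates just its objects.
import random

SERVERS = {f"srv{i}": {} for i in range(6)}  # hypothetical server pool

def put(key: str, data: bytes, copies: int = 3):
    # place copies on distinct servers
    for name in random.sample(sorted(SERVERS), copies):
        SERVERS[name][key] = data

def heal(failed: str):
    lost = SERVERS.pop(failed)
    for key, data in lost.items():
        # re-replicate each lost object onto a surviving server
        candidates = [s for s in SERVERS if key not in SERVERS[s]]
        SERVERS[random.choice(candidates)][key] = data

put("invoice-001", b"...")
heal("srv0")
# Every object still has 3 live copies after the failure.
assert sum("invoice-001" in s for s in SERVERS.values()) == 3
```

The key point is that healing work is proportional to the data actually stored, not to raw disk capacity as in a RAID rebuild.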
Shahbaz Ali, president and CEO of Tarmin, added that object storage platforms were designed to address the scalability limits of NAS and file server systems. Unlike traditional approaches, object storage does not use a file system hierarchy to store data. Instead, data is stored as ‘objects’, and every object is assigned its own unique identifier. Users retrieve a stored object using a “claim check” code; its actual location on physical media is abstracted within the pool of storage. This approach allows the virtual storage pool to scale almost without bound.
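The "claim check" model Ali describes can be illustrated with a minimal sketch, assuming a UUID as the object identifier (a common choice, though real platforms use their own ID schemes):

```python
# Minimal sketch of a flat object pool: callers keep only an opaque
# "claim check" ID; where the bytes physically live is the store's concern.
import uuid

class ObjectStore:
    def __init__(self):
        self._pool = {}  # stands in for a distributed pool of media

    def put(self, data: bytes) -> str:
        object_id = str(uuid.uuid4())  # the "claim check"
        self._pool[object_id] = data
        return object_id

    def get(self, object_id: str) -> bytes:
        return self._pool[object_id]

store = ObjectStore()
claim_check = store.put(b"patient scan, 2014-03-01")
# No directory path, no hierarchy: just the ID.
assert store.get(claim_check) == b"patient scan, 2014-03-01"
```

Because there is no hierarchy to traverse or keep consistent, the namespace can grow to billions of objects without the metadata bottlenecks that limit traditional file systems.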
“Users access object storage through applications that typically use an HTTP REST API (an internet protocol, optimized for online applications),” said Ali. “This makes object storage ideal for all online and cloud environments.”
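In practice that REST access boils down to HTTP verbs against object URLs: PUT to store, GET to retrieve. The sketch below spins up a throwaway in-process store to show the round trip; the `/bucket/key` URL scheme is illustrative, not any particular vendor's API.

```python
# Sketch of object storage over an HTTP REST API: PUT stores an object
# under a key, GET retrieves it. The server here is a stand-in, not a
# real object storage product.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

store = {}  # flat namespace: URL path -> object bytes

class ObjectHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers["Content-Length"])
        store[self.path] = self.rfile.read(length)
        self.send_response(201)
        self.end_headers()

    def do_GET(self):
        data = store.get(self.path)
        if data is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), ObjectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# PUT an object, then GET it back by its key.
put = Request(f"{base}/bucket/report.pdf", data=b"quarterly numbers", method="PUT")
assert urlopen(put).status == 201
got = urlopen(f"{base}/bucket/report.pdf").read()
server.shutdown()
```

Any client that can speak HTTP can talk to such a store, which is why object storage maps so naturally onto web and cloud applications.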
Now that we have covered what it is, let’s look at a few of the options out there.
Quantum’s Lattus Object Storage appliance is aimed at large-scale, long-term archives. The company’s QSpread fountain erasure code technology combines data protection with data storage so that a single instance of data is protected, eliminating the need to replicate it. The Lattus portfolio includes several application integration options: StorNext integration, a NAS gateway, Arkivio data mover software, and a native S3 RESTful interface to connect to various cloud applications. The Lattus architecture separates the object storage controller nodes from the storage nodes, so users can scale performance and capacity independently.
“When combined with QSpread, users can add new storage nodes to the system and retire old ones, and the data will automatically respread to include the new nodes and exclude the old ones,” said Pasto. “Given its entry point above 150 TB usable capacity, its price/performance and unlimited scalability, Lattus is best suited for large archives with high or unpredictable growth where business and production users need fast access to the wealth of assets in their archive.”
He believes that Lattus object storage with QSpread is more scalable and cost effective than RAID, and delivers higher throughput per dollar than the competition.
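QSpread itself is proprietary, but the general idea behind erasure coding can be shown with a deliberately simplified sketch: split an object into k chunks plus parity, so a lost chunk can be rebuilt from the survivors. Real fountain codes tolerate many simultaneous losses across nodes; this single-XOR-parity toy only illustrates the principle.

```python
# Toy erasure coding sketch (single XOR parity, for illustration only):
# split data into k chunks plus one parity chunk; any one lost chunk
# can be rebuilt from the rest. Production codes protect against far
# more failures with modest overhead.
from functools import reduce

def encode(data: bytes, k: int):
    size = -(-len(data) // k)  # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))
    return chunks, parity

def rebuild(chunks, parity, lost: int):
    # XOR of all surviving chunks plus parity reproduces the lost chunk
    survivors = [c for i, c in enumerate(chunks) if i != lost] + [parity]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

chunks, parity = encode(b"archive object payload!!", k=4)
assert rebuild(chunks, parity, lost=2) == chunks[2]
```

This is also why rebuilds can respread across new nodes: recovery reads from whichever chunks survive, with no dependence on any one disk's geometry.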
dinCloud provides persistent storage (block storage for virtual servers and desktops), snapshot storage (to help protect images from deletion and the like), and D3 storage (Amazon S3-compatible object storage) on top of its object system. dinStorage D3 is an Amazon S3 API-compatible storage alternative. D3 is cloud storage that integrates with third-party products from vendors such as CA Technologies and Symantec, including replicating data across multiple data centers. dinStorage D3, said Chase, is optimal for file sharing and file servers.
“All data is AES256 encrypted, we keep three copies of all data, we snapshot the entire cloud once per day for 10 days free, we have no data transfer fees, and we are number one in virtual desktops and cloud migrations worldwide,” said Chase.