Businesses are generating more data every year, much of it unstructured. As data continues to grow exponentially, the management and storage of data and multimedia files is becoming an unwieldy challenge.
Without effective data storage and management solutions, businesses can find themselves at a loss when trying to retrieve assets, losing version control, and creating redundancies throughout their organization – translating into increased costs and staff hours devoted to retrieving and recreating files.
While unstructured data is growing at an average rate of about 50-60% per year, according to IDC, at the same time more and more computing workloads are being virtualized. Most experts believe that 50% or more of all workloads will be virtualized by 2012. Add it all up and there is a distinct need for vastly different, dynamic approaches to storage that take advantage of the cloud.
The following vendors are at the forefront of tackling the various challenges of data storage and management – from assisting small and medium-sized businesses to enabling management of the largest corpora of data to date. In alphabetical order, here are 7 emerging cloud storage vendors who could help you get a handle on your data headaches.
What problem do they solve? All businesses can struggle as employees generate a multitude of files in a short period of time. But for industries that create, augment, and archive rich media, such as videos, images, simulation data and more, efficiently and reliably storing unstructured data (and metadata) can present an epic challenge. Large volumes of storage space are necessary to manage existing assets. Project ahead to the new data that will be created in months, years (and decades), and storage needs can suddenly seem nearly unmanageable.
Existing storage systems that employ RAID to protect data were never specifically designed to handle today’s multi-terabyte capacity disk drives. Scalability, data resilience, and cost-efficiency in addition to ease of management are all factors in seeking a solution to managing vast amounts of data.
What they do: Amplidata’s AmpliStor is a storage platform designed for petabyte-scale storage clouds. AmpliStor addresses the needs of massive-scale online storage services, with incremental scalability to petabyte levels or more. Maintenance requirements are reduced with auto-detection of new capacity. With minimal intervention, the system monitors disk and node health and heals them as needed.
The system provides any desired level of data reliability and availability by tolerating any number of failures. It thereby solves the reliability issues of RAID on high-density (multi-terabyte) disk drives. Storage nodes within an AmpliStor system can be distributed across multiple data centers to provide uninterrupted data access in the event of network or data center outages or unavailability, and can be optimized by service providers with multiple distributed data center locations.
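Tolerating "any number of failures" points to an erasure-coding approach: data is split into n fragments, of which any k suffice to reconstruct the original. A rough sketch of why this beats plain replication on durability per byte stored – illustrative math with made-up parameters, not Amplidata's actual scheme:

```python
from math import comb

def loss_probability(n, k, p):
    """Probability that a k-of-n erasure-coded stripe becomes unrecoverable,
    assuming each of its n fragments fails independently with probability p:
    data is lost once more than n - k fragments are gone."""
    return sum(comb(n, f) * p**f * (1 - p)**(n - f)
               for f in range(n - k + 1, n + 1))

# Made-up parameters: a 10-of-16 code vs. 3-way replication, with a 1%
# chance that any given disk dies before it can be replaced and rebuilt.
p = 0.01
ec = loss_probability(16, 10, p)   # tolerates any 6 failures, 1.6x overhead
rep = loss_probability(3, 1, p)    # lost only if all 3 copies fail, 3x overhead
print(f"10-of-16 erasure code: {ec:.1e}   3-way replication: {rep:.1e}")
```

Under these assumed numbers, the 10-of-16 code is several orders of magnitude more durable than triple replication while consuming roughly half the raw capacity.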
Why they're an up-and-comer: In 2010, the company raised two quick rounds of funding. They first secured €2.5 million from Big Bang Ventures and later in the year followed it with a $6 million round from Big Bang Ventures, Endeavour Vision and Swisscom Ventures. With applications and data sprawling out beyond corporate perimeters, Amplidata’s ability to scale up quickly and distribute data storage puts them in a good position to take advantage of changes brought about by virtualization and cloud computing.
What problem do they solve? Legacy storage solutions are not only more costly than cloud-based alternatives, but using connection-based, point-to-point architecture requires configuring multiple technologies, making troubleshooting and management a drain on resources. Limitations in storage processing mean that as a company grows, so does its cost of provisioning, scaling and managing its SAN.
What they do: Coraid’s EtherDrive provides a scale-out architecture, based on a highly parallel, connection-less protocol that eliminates many of the layers of complexity in legacy SANs. According to the company, with their solution, multi-pathing happens automatically, port-bonding is no longer needed, every LUN is automatically visible to every host, and the entire network is built with standard, off-the-shelf hardware.
Why they're an up-and-comer: Coraid emphasizes cost when positioning its solution. Compared to traditional storage solutions, such as those from EMC or NetApp, Coraid claims that it undercuts them on price because it uses 100-percent commodity hardware and raw Ethernet. According to the company, its EtherDrive storage arrays “enable a scale-out SAN architecture that is ideally suited to dynamic virtualization and cloud environments,” with pricing under $500 per terabyte and scaling to multiple petabytes.
Coraid closed a $25 million Series B round of funding last year, bringing total funding to $35 million. The company claims more than 1,400 customers, including GE, HP, Ford, Harvard University and the United States Marine Corps.
What problem do they solve? Managing the constant growth of unstructured data, especially across a distributed organization, is a challenge many businesses find themselves struggling with. Although data management solutions may seem sufficient upon deployment, organizations need to be confident that as they grow, their data can grow with them. Virtualizing that data is one step many organizations are taking to manage a vast range of data, but customers want assurance that the solution is scalable, persistently available and performs well at high volume.
What they do: At the core of Gluster’s storage solutions is the GlusterFS (v.3.2) software: an open source distributed file system that can handle thousands of clients scaling to several petabytes of data. The software is based on a stackable user space design and was developed to manage a diverse workload. On the scalability front, GlusterFS 3.2 breaks from the traditional SAN-based environment, operating instead as network-attached storage (NAS) and reducing many of the costs associated with storing virtual machines. Gluster’s focus is on making more effective and efficient use of cloud infrastructure storage, whether on a public or private cloud.
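A key reason GlusterFS scales is that it has no central metadata server: clients locate a file algorithmically by hashing its path onto the set of storage "bricks" (Gluster calls this elastic hashing). A toy sketch of the principle – not Gluster's actual algorithm:

```python
import hashlib

def brick_for(path, bricks):
    """Pick the brick that stores `path` by hashing the path itself.
    Every client computes the same answer, so no metadata-server
    lookup (and no metadata bottleneck) is ever needed."""
    digest = int(hashlib.md5(path.encode()).hexdigest(), 16)
    return bricks[digest % len(bricks)]

bricks = ["server1:/export/brick", "server2:/export/brick", "server3:/export/brick"]
print(brick_for("/media/clip-0001.mp4", bricks))
```

The trade-off is rebalancing when bricks are added or removed; the real system mitigates this by assigning ranges of hash values per directory rather than using a bare modulo.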
Why they're an up-and-comer: In October 2010, Gluster received $8.5 million in Series B funding, with investors including Nexus Venture Partners and Index Ventures. The company has an impressive leadership team. CEO Ben Golub previously served as President and CEO of Plaxo, as well as CMO and Senior VP of Enterprise Security, SSL, and Payments divisions at VeriSign. CTO and co-founder AB Periasamy led the development of the world’s second-fastest supercomputer in 2004 at Lawrence Livermore National Laboratory.
Gluster has one of the best next-generation storage proofs of concept under its belt with its customer Pandora. Gluster provides the primary storage solution for Pandora Internet Radio, which requires storage and delivery of massive amounts of data on a daily basis. Gluster is also deployed by one of Scandinavia’s largest hosting providers, City Network Hosting, which provides an enterprise-class cloud platform.
What problem do they solve? For small- and medium-sized enterprises, backup and disaster recovery can too often slip through the cracks until an incident occurs. With data growth increasing the burdens on storage, backup, and the network, data protection must not only be reliable, but also scalable. The amount of data that needs to be backed up often outpaces backup throughput – surpassing the available window of downtime between business hours that most companies rely on. This can result in increased costs, as well as incomplete processes that leave gaps in data protection.
What they do: Nasuni delivers data protection via a NAS appliance with built in data backup and recovery. By shifting the burden of data protection to the cloud, customers have persistent access and the capacity to manage a higher volume of data. Instead of backing up files to the server, Nasuni links the LAN to a trusted third-party cloud storage provider. The Nasuni Filer automatically synchronizes data between the local cache and the cloud, encrypting the data before it reaches the cloud.
To assist with recovery, Nasuni also creates and saves synchronous snapshots. As a result, Nasuni can restore part or all of a file system to a previous point in time in an instant. Restores are available at any location, with proper credentials. Finally, Nasuni also uses patent-pending intelligent caching technology that keeps the most active files (known as “the working set”) in the local cache, which gives users performance on par with a local NAS.
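The working-set idea can be illustrated with a simple least-recently-used cache sitting in front of a slower cloud fetch. This is a sketch of the general technique, not Nasuni's actual caching logic:

```python
from collections import OrderedDict

class WorkingSetCache:
    """Keep the most active files local, fall back to the cloud on a miss."""
    def __init__(self, capacity, fetch_from_cloud):
        self.capacity = capacity
        self.fetch = fetch_from_cloud     # callable: path -> bytes
        self.local = OrderedDict()        # path -> bytes, in LRU order

    def read(self, path):
        if path in self.local:            # hot file: serve at local-NAS speed
            self.local.move_to_end(path)
            return self.local[path]
        data = self.fetch(path)           # cold file: pull from the cloud
        self.local[path] = data
        if len(self.local) > self.capacity:
            self.local.popitem(last=False)  # evict the least-recently-used file
        return data

# Hypothetical cloud store and access pattern.
cloud = {"/docs/a.txt": b"alpha", "/docs/b.txt": b"beta", "/docs/c.txt": b"gamma"}
cache = WorkingSetCache(2, cloud.__getitem__)
cache.read("/docs/a.txt"); cache.read("/docs/b.txt"); cache.read("/docs/a.txt")
cache.read("/docs/c.txt")  # evicts /docs/b.txt, the least recently used
print(list(cache.local))
```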
Why they're an up-and-comer: Having trust in data put into the cloud is still a big issue. Can you trust that it is backed up and available at all times? Not always.
Nasuni claims to protect data in the cloud with a snapshot system. These snapshots capture the entire file system, which enables the Nasuni Filer to recover data instantly from any point in time. The snapshots are small, because Nasuni de-duplicates and compresses them, so customers can retain an unlimited number of snapshots in the cloud without greatly increasing the amount of cloud storage used. If a file is accidentally deleted, the file can always be recovered, usually almost instantaneously, through a snapshot that captured it.
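Why do full snapshots stay small? The general mechanism is content-addressed deduplication plus compression: each chunk of data is stored once under a hash of its content, so a new snapshot pays only for what actually changed. A sketch of the idea (not Nasuni's implementation):

```python
import hashlib, zlib

def take_snapshot(files, store):
    """Record a snapshot of `files` into a shared content-addressed store.
    Unchanged data hashes to a key that already exists, so it costs nothing."""
    manifest = {}
    for path, data in files.items():
        key = hashlib.sha256(data).hexdigest()
        if key not in store:                 # only new/changed content is stored
            store[key] = zlib.compress(data)
        manifest[path] = key                 # the snapshot itself is tiny
    return manifest

store = {}
snap1 = take_snapshot({"a.txt": b"hello" * 100, "b.txt": b"world" * 100}, store)
snap2 = take_snapshot({"a.txt": b"hello" * 100, "b.txt": b"world!" * 100}, store)
print(len(store))  # 3 unique chunks cover 4 file versions across 2 snapshots
```

Restoring a file from any snapshot is just a lookup: decompress the chunk named in that snapshot's manifest.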
Nasuni has raised $23 million from North Bridge Venture Partners, Sigma Partners and Flybridge Capital Partners. With a customer base focused on small- to medium-sized enterprises, the company saw 200 percent growth in customers (from a wide range of industries) adopting cloud storage in the last quarter of 2010.
What problem do they solve? With vast amounts of data, complying with various regulations complicates the management and storage of data. Storage solutions must not only meet space requirements, but provide security and ease of management over extended periods of time. When sensitive records or data must be destroyed, it could be a liability to keep data for longer than required. Industries that must abide by those regulations need to be able to manage each sensitive asset and track it over a specific timeframe to avoid penalties and ensure compliance.
What they do: RainStor provides a specialized data repository for inactive structured data. RainStor loads and compresses structured data by typically 40:1 from any source (database, log, or event data) into secure and accessible containers. These containers are discrete files that encapsulate the data without any loss of content or structure. As such, the containers can be managed using standard file systems and stored on any storage platform, including SAN, NAS, DAS, CAS, or cloud storage. RainStor provides full relational access to the compressed data containers using standard SQL without the need to re-inflate the data.
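One way to understand "full relational access without re-inflating the data" is dictionary encoding: each distinct value is stored once, rows become small integer ids, and a predicate is resolved against the dictionary and then evaluated on the ids alone. This sketch illustrates the general technique, not RainStor's actual (and far more sophisticated) format:

```python
def dictionary_encode(column):
    """Store each distinct value once; rows become small integer ids."""
    values, ids, encoded = [], {}, []
    for v in column:
        if v not in ids:
            ids[v] = len(values)
            values.append(v)
        encoded.append(ids[v])
    return values, encoded

def filter_equals(values, encoded, wanted):
    """Evaluate `WHERE col = wanted` on the encoded form: resolve the
    predicate once in the dictionary, then scan integer ids -- the
    original row values are never re-inflated."""
    if wanted not in values:
        return []
    wid = values.index(wanted)
    return [i for i, e in enumerate(encoded) if e == wid]

col = ["error", "info", "info", "error", "warn", "info"]
values, encoded = dictionary_encode(col)
print(filter_equals(values, encoded, "error"))  # rows 0 and 3
```

The more repetitive the data (logs and event streams especially), the smaller the dictionary relative to the row count, which is what makes ratios like 40:1 plausible on that class of data.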
RainStor argues that it has been used to effectively retire hundreds of legacy business applications by retaining the critical data, while at the same time providing full accessibility to the data for business users to report and audit. The result, according to RainStor, is millions of dollars in hardware, software and personnel savings, along with risk avoidance from any potential fines or penalties from regulatory bodies.
Why they're an up-and-comer: RainStor is pursuing the “Intel Inside” model, embedding its technology into storage solutions from such heavy hitters as Teradata, HP and Informatica.
The company is backed by $11 million in VC funding from Storm Ventures, Informatica Corporation (also a strategic partner), Doughty Hanson Technology Ventures and The Dow Chemical Company.
What problem do they solve? To manage multi-vendor, heterogeneous storage environments successfully, organizations must be able to quickly pull data from multiple arrays into a single report that displays capacity, allocation, usage and forecasting at all levels of the storage environment (Raw, RAID, LUN, Datastore and file system).
According to a recent Storage Switzerland industry survey, more than a quarter of IT administrators believe that the biggest storage problem caused by virtualization is capacity management. Half of the respondents reported that they were not confident in how they are using storage resources, believing they are wasting valuable storage assets. And that is only a measure of confidence – the actual waste could well be greater.
Without getting an overview of storage utilization, how can admins really make the best choice for their organization and industry? How do they know where problems exist in the operational environment?
What they do: SolarWinds’ Storage Manager is designed to appeal to storage managers and their pain points. According to the company, Storage Manager can quickly assess how much storage is available, whose data is hogging up server space, how rapidly storage space is diminishing, and other critical statistics that help IT and business pros make choices about storage needs. The solution also helps them manage a dynamic environment utilizing multiple arrays.
Storage Manager monitors, collects, and analyzes data from SAN arrays, NAS arrays, and Fibre Channel switches, so administrators can then access the management platform to view real-time and historical statistics and the availability of storage infrastructure from any web browser.
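The roll-up report described earlier – per-array and fleet-wide capacity, usage, and utilization – reduces to a simple aggregation once the per-array numbers have been collected. A sketch with invented array names and figures (not SolarWinds' data model):

```python
def capacity_report(arrays):
    """Roll per-array (name, raw TB, used TB) tuples into one report,
    appending a fleet-wide TOTAL row with overall percent used."""
    rows, total_raw, total_used = [], 0, 0
    for name, raw_tb, used_tb in arrays:
        rows.append((name, raw_tb, used_tb, 100.0 * used_tb / raw_tb))
        total_raw += raw_tb
        total_used += used_tb
    rows.append(("TOTAL", total_raw, total_used, 100.0 * total_used / total_raw))
    return rows

# Hypothetical heterogeneous environment.
arrays = [("san-emc-01", 200, 150), ("nas-netapp-01", 120, 30), ("san-dell-02", 80, 60)]
for name, raw, used, pct in capacity_report(arrays):
    print(f"{name:<14} raw={raw:>4} TB  used={used:>4} TB  {pct:5.1f}% used")
```

The hard part in practice is the collection step – speaking each vendor's management interface – which is exactly the plumbing a tool like Storage Manager packages up.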
Why they're an up-and-comer: SolarWinds is well positioned to take advantage of all the problems inherent with storage sprawl in heterogeneous environments. Traditional storage resource management solutions are notoriously difficult to deploy, cumbersome to use and extremely expensive.
According to the company, “Storage Manager gives users unprecedented visibility into shared storage that is not available through other SRM solutions. Users gain perspective not just through the perspective of the hypervisor but into the individual arrays all the way to the disks.”
SolarWinds claims that Storage Manager was designed with ease-of-use being as much of a priority as the feature set. The solution lists at just under $3K and is available as a download at the SolarWinds website.
What problem do they solve? With more and more companies moving to the cloud, the challenge for many organizations is integrating the cloud with existing IT environments and sustaining the same performance, security and availability needed to conduct business each day. In moving to the cloud, gaps in how data is stored and protected – as well as how it is managed in case of a disaster – can impact day-to-day business productivity.
What they do: TwinStrata’s CloudArray solution is intended to deliver the many benefits of cloud storage combined with the enterprise-class features, as well as the performance and availability, traditionally associated with on-premise storage solutions.
Key features include iSCSI plug-and-play connectivity to any host OS, high-availability software and hardware form factors, options for local and/or cloud data copies, snapshots for recovery and retention, encryption at-rest and in-flight, compression to reduce capacity utilization and bandwidth, disaster recovery capability on-premise, off-premise and in the cloud, and dynamic policies to meet application performance needs.
CloudArray is available either as a virtual or physical appliance, and is delivered on a pay-as-you-go, utility model.
Why they're an up-and-comer: The company is backed by more than $6 million in funding. Since emerging from stealth mode in 2010, TwinStrata began adding customers at an impressive clip. According to a TwinStrata spokesperson, the company’s revenues increased by 400 percent in Q4 2010.
Customers include Karmaloop, American Academy of Dermatology, Westway Group, Color Kinetics, the American Federation of Government Employees and NSK Inc.