Demystifying Data De-Duplication: Choosing the Best Solution

In an August 2006 report from the Clipper Group entitled "The Evolution of Backups - Part Two - Improving Capacity", author Dianne McAdam writes: "De-duplication is the next evolutionary step in backup technology." Eliminating duplicate data in secondary storage archives can slash media costs, streamline management tasks, and minimize the bandwidth required to replicate data.

Despite the appeal of data de-duplication, widespread adoption has been slowed by the high cost of the processing power required to identify duplicate data, index unique data, and restore compacted data to its original state. Only recently, as processing power has become more cost-effective, has the path been cleared for a proliferation of data de-duplication solutions.
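To make those three operations concrete, the sketch below shows one common approach in simplified form: split incoming data into fixed-size chunks, use a hash index to detect and skip chunks already stored, and keep a per-backup "recipe" of chunk references from which the original stream can be restored. This is an illustrative example only; the chunk size, class names, and use of SHA-256 are assumptions for the sketch, not a description of any particular vendor's implementation.

    import hashlib

    CHUNK_SIZE = 4096  # illustrative fixed chunk size

    def chunk(data: bytes, size: int = CHUNK_SIZE):
        """Split a byte stream into fixed-size chunks."""
        return [data[i:i + size] for i in range(0, len(data), size)]

    class DedupStore:
        """Stores each unique chunk once, keyed by its SHA-256 digest."""

        def __init__(self):
            self.chunks = {}  # digest -> chunk bytes (unique data only)

        def write(self, data: bytes):
            """Identify duplicates, index unique chunks, return a recipe of digests."""
            recipe = []
            for c in chunk(data):
                digest = hashlib.sha256(c).hexdigest()
                if digest not in self.chunks:   # only previously unseen data is stored
                    self.chunks[digest] = c
                recipe.append(digest)
            return recipe

        def read(self, recipe):
            """Restore the original byte stream from the stored chunks."""
            return b"".join(self.chunks[d] for d in recipe)

    if __name__ == "__main__":
        store = DedupStore()
        backup = b"A" * 8192 + b"B" * 4096      # two identical 4 KB chunks of 'A', one of 'B'
        recipe = store.write(backup)
        assert store.read(recipe) == backup      # restore reproduces the original exactly
        print(f"{len(recipe)} chunk references, {len(store.chunks)} unique chunks stored")

Running the example stores only two unique chunks for three chunk references, which is the basic mechanism by which duplicate data is eliminated while remaining fully restorable.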

Many vendors claim to offer the best data de-duplication approach, leaving customers with the difficult task of separating hype from reality and determining which factors matter most to their business. Because some vendors set unrealistic expectations by promising huge reductions in data volume, early adopters may ultimately be disappointed with their solution.

Companies must consider a number of key factors to select a data de-duplication solution that actually delivers cost-effective, high-performance, and scalable long-term data storage. This document provides the background information required to make an informed data de-duplication purchasing decision.

Vendor: FalconStor Software
Posted: 12 Feb 2009
Published: 12 Feb 2009
Format: PDF
Type: White Paper
Language: English
