Data Deduplication Background: A Technical White Paper


The term “data deduplication”, as it is used and implemented by Quantum Corporation in this white paper, refers to a specific approach to data reduction built on a methodology that systematically substitutes reference pointers for redundant variable-length blocks (or data segments) in a specific dataset. The purpose of data deduplication is to increase the amount of information that can be stored on disk arrays and to increase the effective amount of data that can be transmitted over networks. When it is based on variable-length data segments, data deduplication can provide greater granularity than single-instance store technologies, which identify and eliminate the need to store repeated instances of identical whole files. In fact, variable-length block data deduplication can be combined with file-based data reduction systems to increase their effectiveness. It is also compatible with established compression systems used to compact data being written to tape or to disk, and may be combined with compression at a solution level. Key elements of variable-length data deduplication were first described in a patent issued to Rocksoft, Ltd (now a part of Quantum Corporation) in 1999. Read on to learn more.
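The core idea described above — splitting a dataset into variable-length segments at content-defined boundaries, then storing each unique segment once and replacing repeats with reference pointers — can be illustrated with a minimal sketch. The code below is an illustrative assumption, not Quantum's patented implementation: the rolling-hash function, mask value, segment-size limits, and the `DedupStore` class are all hypothetical choices made for clarity.

```python
import hashlib

def segment(data, mask=0x0FFF, max_size=8192):
    """Split data into variable-length segments using a simple rolling hash.

    A boundary is declared wherever the low bits of the hash are zero, so
    boundary positions are determined by the content itself rather than by
    fixed byte offsets -- the property variable-length deduplication relies on.
    """
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF  # depends only on recent bytes
        if (h & mask) == 0 or (i - start + 1) >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

class DedupStore:
    """Stores each unique segment once; a 'file' becomes a list of pointers."""

    def __init__(self):
        self.segments = {}  # fingerprint -> segment bytes

    def write(self, data):
        refs = []
        for seg in segment(data):
            fp = hashlib.sha256(seg).hexdigest()
            self.segments.setdefault(fp, seg)  # store only if not seen before
            refs.append(fp)                    # redundant segments become pointers
        return refs

    def read(self, refs):
        return b"".join(self.segments[fp] for fp in refs)
```

Writing the same (or largely overlapping) data a second time adds few or no new segments to the store — only the list of reference pointers grows — which is how deduplication raises the effective capacity of disk arrays and network links.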

Quantum Corporation
10 Feb 2011
White Paper