"Data deduplication is inarguably one of the most important new technologies in storage of the past decade," says Gartner. So let's take a detailed look at what it actually means.
Data deduplication or single instancing essentially refers to the elimination of redundant data. In the deduplication process, duplicate data is deleted, leaving only one copy (single instance) of the data to be stored. However, indexing of all data is still retained should that data ever be required.
Just as "single instancing" serves as the verb form of "single-instance storage," the word "dedupe" serves as the verb for "deduplication." For example: "After we deduped our critical business data with Druva's inSync Backup, 90% of the storage space and bandwidth it was using opened up, giving us the breathing room we need to innovate!"
Example: A typical email system might contain 100 instances of the same 1 MB file attachment. If the email platform is backed up or archived, all 100 instances are saved, requiring 100 MB of storage space. With data deduplication, only one instance of the attachment is actually stored; each subsequent instance is just referenced back to the one saved copy, reducing the storage and bandwidth demand to only 1 MB.
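The email-attachment scenario can be sketched in a few lines of Python. This is a minimal, illustrative model of single-instance storage (the `store` and `mailbox` names are mine, not any vendor's API): content is keyed by its SHA-256 digest, so identical attachments are physically stored exactly once.

```python
import hashlib

store = {}     # digest -> raw bytes: one physical copy per unique content
mailbox = []   # per-message references into the store

def save_attachment(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    store.setdefault(digest, data)   # stored only if not already present
    mailbox.append(digest)           # the message keeps just a reference
    return digest

attachment = b"x" * 1_000_000        # the same 1 MB attachment...
for _ in range(100):                 # ...saved by 100 recipients
    save_attachment(attachment)

print(len(mailbox))                           # 100 references
print(len(store))                             # 1 stored copy
print(sum(len(v) for v in store.values()))    # 1000000 bytes, not 100 MB
```

The index (here, the dictionary keys) is what the article means by "indexing of all data is still retained": any of the 100 references can still retrieve the attachment.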
The practical benefits of this technology depend upon various factors, including
- Point of Application: Source vs. Target
- Time of Application: Inline vs. Post-Process
- Granularity: File vs. Sub-File level
- Algorithm: Fixed-size blocks vs. variable length data segments
Target- vs. Source-based Deduplication
Target-based deduplication acts on the target data storage media. In this case, the client is unmodified and is not aware of any deduplication. The deduplication engine can be embedded in the hardware array, which can then be used as a NAS/SAN device with deduplication capabilities. Alternatively, the engine can be offered as an independent software or hardware appliance that acts as an intermediary between the backup server and the storage arrays. In both cases, it improves only storage utilization.
Source-based deduplication, in contrast, acts on the data at the source before it’s moved. A deduplication-aware backup agent is installed on the client which backs up only unique data. The result is improved bandwidth and storage utilization. However, this imposes additional computational load on the backup client.
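The source-based flow described above can be sketched as follows. This is a simplified model under assumed names (`BackupServer`, `backup`, a fixed 4 KB chunk size, none of which come from a real product): the agent hashes data locally, asks the target which chunks it already holds, and transfers only the rest.

```python
import hashlib

CHUNK = 4096  # illustrative fixed chunk size; real agents differ

class BackupServer:
    """Stands in for the target: tracks which chunk digests it holds."""
    def __init__(self):
        self.chunks = {}
    def missing(self, digests):
        return {d for d in digests if d not in self.chunks}
    def upload(self, digest, data):
        self.chunks[digest] = data

def backup(client_data: bytes, server: BackupServer) -> int:
    """Deduplication-aware agent: hashes locally, then sends only the
    chunks the server does not already have. Returns bytes transferred."""
    pairs = [(hashlib.sha256(client_data[i:i + CHUNK]).hexdigest(),
              client_data[i:i + CHUNK])
             for i in range(0, len(client_data), CHUNK)]
    need = server.missing([d for d, _ in pairs])
    sent = 0
    for digest, data in pairs:
        if digest in need:
            server.upload(digest, data)
            need.discard(digest)     # don't re-send intra-file duplicates
            sent += len(data)
    return sent

server = BackupServer()
data = b"".join(bytes([i]) * CHUNK for i in range(10))   # 10 distinct chunks
print(backup(data, server))   # first backup: all 40960 bytes cross the wire
print(backup(data, server))   # unchanged data: 0 bytes cross the wire
```

The second backup transfers nothing, which is exactly where the bandwidth savings come from; the cost is the hashing work now done on the client.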
Inline vs. Post-process Deduplication
In target-based deduplication, the deduplication engine can either process data for duplicates in real time (i.e. as and when data is sent to the target) or after it has been stored in the target storage.
The former is called inline deduplication. The obvious advantages are:
- Increase in overall efficiency as data is only passed and processed once
- The processed data is instantaneously available for post storage processes, such as recovery and replication, reducing the RPO and RTO window.
The disadvantages are:
- Decrease in write throughput
- The extent of deduplication is lower; only the fixed-length block approach can be used.
Inline deduplication processes only incoming raw blocks and has no knowledge of the files or file structure. This forces it to use the fixed-length block approach (discussed in detail later).
Post-process deduplication acts on the stored data asynchronously. Its advantages and disadvantages are the exact mirror image of those listed above for inline deduplication.
File vs. Sub-file Level Deduplication
The duplicate removal algorithm can be applied at the full-file or sub-file level. Full-file duplicates can easily be eliminated by calculating a single checksum of the complete file and comparing it against the checksums of already-backed-up files. This is simple and fast, but the extent of deduplication is lower, as the process does not address duplicate content found inside different files or data-sets (e.g. specific email messages).
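A minimal sketch of full-file deduplication makes both the strength and the weakness visible (the `backed_up` index and `backup_file` helper are illustrative names, not a real API): an exact copy is caught by a single checksum, but a file that differs by even one byte is treated as entirely new.

```python
import hashlib

backed_up = {}   # full-file checksum -> stored content

def backup_file(content: bytes) -> bool:
    """Full-file dedup: returns True if the file is new and was stored,
    False if an identical copy already exists in the index."""
    digest = hashlib.sha256(content).hexdigest()
    if digest in backed_up:
        return False                 # exact duplicate: keep a reference only
    backed_up[digest] = content
    return True

report = b"quarterly numbers " * 100
print(backup_file(report))           # True:  first copy is stored
print(backup_file(report))           # False: exact duplicate is skipped
print(backup_file(report + b"!"))    # True:  one added byte defeats the scheme
```

The third call is the motivation for sub-file deduplication: nearly all of that "new" file is content the store already holds.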
The sub-file level deduplication technique breaks the file into smaller fixed or variable size blocks, and then uses a standard hash-based algorithm to find similar blocks.
Fixed-Length Blocks vs. Variable-Length Data Segments
A fixed-length block approach, as the name suggests, divides files into fixed-length blocks and uses a simple checksum-based approach (MD5, SHA, etc.) to find duplicates. Although it is possible to look for repeated blocks, this approach provides very limited effectiveness. The reason is that the primary opportunity for data reduction lies in finding duplicate blocks in two transmitted datasets that are made up mostly, but not completely, of the same data segments.
For example, similar data blocks may be present at different offsets in two different datasets. In other words, the block boundaries of similar data may differ. This is very common when some bytes are inserted into a file: when the changed file is processed again and divided into fixed-length blocks, all blocks appear to have changed.
Therefore, two datasets with a small amount of difference are likely to have very few identical fixed length blocks.
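The boundary-shift problem is easy to demonstrate. In this sketch (block size and sample data are arbitrary choices of mine), inserting a single byte at the front of a file shifts every fixed block boundary, so none of the block hashes match the original's.

```python
import hashlib

def fixed_blocks(data: bytes, size: int = 8):
    """Split data into fixed-length blocks at absolute offsets."""
    return [data[i:i + size] for i in range(0, len(data), size)]

original = b"the quick brown fox jumps over the lazy dog!"
modified = b"X" + original           # one byte inserted at the front

orig_hashes = {hashlib.sha256(blk).hexdigest()
               for blk in fixed_blocks(original)}
mod_hashes = {hashlib.sha256(blk).hexdigest()
              for blk in fixed_blocks(modified)}

# Every boundary shifted by one byte, so no block hashes match at all.
print(len(orig_hashes & mod_hashes))   # 0
```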
Variable-Length Data Segment technology divides the data stream into variable-length data segments using a methodology that can find the same block boundaries in different locations and contexts. This allows the boundaries to "float" within the data stream so that changes in one part of the dataset have little or no impact on the boundaries in other locations of the dataset.
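A toy version of this idea shows why floating boundaries survive insertions. Here the boundary rule is deliberately simplistic, cut after every newline byte, purely for illustration; production engines derive boundaries from a rolling hash (e.g. Rabin fingerprints) over the content rather than a literal anchor byte.

```python
def cdc_chunks(data: bytes, anchor: int = ord("\n")):
    """Toy content-defined chunking: cut after every anchor byte, so
    boundary positions are determined by the content itself rather
    than by absolute offsets."""
    chunks, start = [], 0
    for i, byte in enumerate(data):
        if byte == anchor:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

original = b"alpha\nbravo\ncharlie\ndelta\n"
modified = b"INSERTED\n" + original   # same kind of front insertion as before

orig_set = set(cdc_chunks(original))
mod_set = set(cdc_chunks(modified))
print(sorted(mod_set - orig_set))     # [b'INSERTED\n']: only new data is new
print(len(orig_set & mod_set))        # 4: every original chunk still matches
```

Unlike the fixed-block example, the insertion changes exactly one chunk; everything after it still deduplicates.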
Every organization generates data. The extent of savings depends upon, but is not directly proportional to, the number of applications or end users generating that data. Overall, deduplication savings depend upon the following parameters:
- The number of applications or end users generating data
- Total data
- Daily change in data
- Type of data (email messages, documents, media, etc.)
- Backup policy (weekly-full, daily-incremental, or daily-full)
- Retention period (90 days, 1 year, etc.)
- Deduplication technology in place
The actual benefits of deduplication are realized once the same dataset is processed multiple times over a span of time for weekly/daily backups. This is especially true for variable-length data segment technology, which has a much better capability for dealing with arbitrary byte insertions.
The deduplication ratio increases every time the same complete data-set is passed through the deduplication engine.
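A back-of-the-envelope calculation shows why the ratio climbs with every pass. The numbers below are assumptions for illustration only (a 1,000 GB dataset with 1% new data per week, backed up in full weekly), not measured customer data:

```python
# Illustrative arithmetic: weekly full backups through a dedupe engine
# that physically stores only the data it has not seen before.
dataset_gb, weekly_change = 1000.0, 0.01
logical = physical = 0.0
ratios = []
for week in range(12):
    logical += dataset_gb                 # each full pass writes 1,000 GB
    physical += dataset_gb if week == 0 else dataset_gb * weekly_change
    ratios.append(logical / physical)

print(round(ratios[0], 1))    # 1.0  -- the first pass stores everything
print(round(ratios[-1], 1))   # 10.8 -- the ratio climbs with every pass
```

Each repeated pass adds a full dataset to the logical side of the ratio but only the changed 1% to the physical side, so the ratio keeps growing as long as the retention window does.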
Compared against daily full backups, which I think are not widely used today, the ratios are close to 1:300. Most vendors use this figure as marketing jargon to attract customers, even though few of their customers actually run daily full backups. Compared against modern-day incremental backups, our customer statistics show results between 1:4 and 1:50 for source-based deduplication.