Understanding Data Deduplication
By Jaspreet Singh
"Data deduplication is inarguably one of the most important new technologies in storage of the past decade," says Gartner. So let's take a detailed look at what it actually means.
Data deduplication, or Single Instancing, essentially refers to the elimination of redundant data. In the deduplication process, duplicate data is deleted, leaving only one copy (a single instance) of the data to be stored. However, indexing of all the data is still retained, should that data ever be required.
Just as the phrase "Single Instancing" turns the noun "single-instance storage" into a verb, the word "dedupe" is the verb for "deduplication." For example:
"After we deduped our critical business data with Druva's inSync Backup, 90% of the storage space and bandwidth it was using opened up, giving us the breathing room we need to innovate!"
Example: A typical email system might contain 100 instances of the same 1 MB file attachment. If the email platform is backed up or archived, all 100 instances are saved, requiring 100 MB of storage space. With data deduplication, only one instance of the attachment is actually stored; each subsequent instance is just referenced back to the one saved copy, reducing the storage and bandwidth demand to only 1 MB.
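The arithmetic above can be sketched with a toy single-instance store, assuming a content-addressed dictionary keyed by SHA-256 (the names here are illustrative, not any product's API):

```python
import hashlib

# Hypothetical single-instance store: content is keyed by its SHA-256 digest,
# so identical data is stored once no matter how many references exist.
store = {}          # digest -> content (the single stored instances)
references = []     # one entry per "email" carrying the attachment

attachment = b"x" * (1024 * 1024)              # a 1 MB attachment

for _ in range(100):                           # 100 emails, same attachment
    digest = hashlib.sha256(attachment).hexdigest()
    store.setdefault(digest, attachment)       # stored only on first sight
    references.append(digest)                  # later copies are just pointers

stored_bytes = sum(len(v) for v in store.values())
print(len(references), stored_bytes)           # 100 references, 1 MB stored
```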
The practical benefits of this technology depend upon various factors, like:

- Point of Application
  - Source vs Target
- Time of Application
  - Inline vs Post-Process
- File vs Sub-File level
  - Fixed-size blocks vs Variable-length data segments
A simple relation between these factors can be explained using the diagram below -
Target vs Source Based Deduplication
Target-based deduplication acts on the target data-storage media. In this case the client is unmodified and is not aware of any deduplication. The deduplication engine can be embedded in the hardware array, which can then be used as a NAS/SAN device with deduplication capabilities. Alternatively, it can be offered as an independent software or hardware appliance that acts as an intermediary between the backup server and the storage arrays. In both cases it improves only storage utilization.
Source-based deduplication, by contrast, acts on the data at the source, before it is moved. A deduplication-aware backup agent is installed on the client, which backs up only unique data. The result is improved bandwidth and storage utilization. However, this imposes an additional computational load on the backup client.
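A minimal sketch of the source-side idea, with made-up `Server`/`backup` names (not Druva's actual API): the client hashes each chunk, asks the server which digests it already holds, and uploads only the missing chunks, so repeat backups consume almost no bandwidth.

```python
import hashlib

# Hypothetical source-side dedup agent: ship only chunks the server lacks.
class Server:
    def __init__(self):
        self.chunks = {}                      # digest -> chunk bytes

    def missing(self, digests):
        return [d for d in digests if d not in self.chunks]

    def upload(self, digest, chunk):
        self.chunks[digest] = chunk

def backup(server, data, chunk_size=4096):
    sent = 0                                  # bytes actually transmitted
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if server.missing([digest]):          # cheap hash query, not the data
            server.upload(digest, chunk)
            sent += len(chunk)
    return sent

server = Server()
data = b"a" * 8192 + b"b" * 4096              # two identical chunks + one unique
first = backup(server, data)                  # only unique chunks travel
second = backup(server, data)                 # repeat backup sends nothing
print(first, second)
```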
Inline vs Post-Process Deduplication
In target-based deduplication, the deduplication engine can either process data for duplicates in real time (i.e., as and when it is sent to the target) or after it has been stored in the target storage.
The former is called inline deduplication. The obvious advantages are -
- Increase in overall efficiency as data is only passed and processed once
- The processed data is immediately available for post-storage processes like recovery and replication, reducing the RPO and RTO windows.
The disadvantages are -
- Decrease in write throughput
- Lower extent of deduplication, since only the fixed-length block approach can be used
Inline deduplication processes only the incoming raw blocks and has no knowledge of the files or file structure. This forces it to use the fixed-length block approach (discussed in detail later).
Post-process deduplication acts asynchronously on the stored data, and its advantages and disadvantages are exactly the opposite of those of inline deduplication listed above.
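The contrast can be sketched with a toy block store (the class and function names are made up for this illustration): the inline path consults the digest index before anything is written, while the post-process path lands raw blocks at full write speed and deduplicates them later.

```python
import hashlib

# Illustrative toy store contrasting inline and post-process modes.
class BlockStore:
    def __init__(self):
        self.blocks = []        # raw landing area (post-process mode)
        self.index = {}         # digest -> block (deduplicated storage)

def write_inline(store, block):
    d = hashlib.sha256(block).hexdigest()
    store.index.setdefault(d, block)      # a duplicate never hits storage

def write_raw(store, block):
    store.blocks.append(block)            # full write speed, no dedup yet

def post_process(store):
    while store.blocks:                   # later, scan and deduplicate
        block = store.blocks.pop()
        d = hashlib.sha256(block).hexdigest()
        store.index.setdefault(d, block)

inline, post = BlockStore(), BlockStore()
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096]:
    write_inline(inline, block)
    write_raw(post, block)

peak = len(post.blocks)                   # post-process: all 3 blocks land first
post_process(post)
print(len(inline.index), peak, len(post.index))
```

Both end up with the same deduplicated index; the difference is when the work happens and how much raw capacity is needed in the meantime.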
File vs Sub-file Level Deduplication
The duplicate-removal algorithm can be applied at the full-file or sub-file level. Full-file duplicates can be easily eliminated by calculating a single checksum of the complete file data and comparing it against the existing checksums of already backed-up files. It's simple and fast, but the extent of deduplication is quite limited, as it does not address the problem of duplicate content found inside different files or datasets (e.g. emails).
The sub-file-level deduplication technique breaks the file into smaller blocks of fixed or variable size, and then uses a standard hash-based algorithm to find duplicate blocks.
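A minimal sketch of the contrast, assuming SHA-256 checksums and 4 KB blocks (all names illustrative): two files that share most of their content produce two distinct whole-file checksums, so file-level dedup saves nothing, while block-level hashing collapses the shared content.

```python
import hashlib

shared = b"s" * 8192                  # content common to both files
file_a = shared + b"tail-A"
file_b = shared + b"tail-B"

# File level: one checksum per file -- the files differ, so no dedup at all.
file_hashes = {hashlib.sha256(f).hexdigest() for f in (file_a, file_b)}

# Sub-file level: hash fixed 4 KB blocks; the shared blocks collapse to one.
def block_hashes(data, size=4096):
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

all_blocks = block_hashes(file_a) + block_hashes(file_b)
unique_blocks = set(all_blocks)
print(len(file_hashes), len(all_blocks), len(unique_blocks))  # 2 files, 6 blocks, 3 unique
```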
Fixed-Length Blocks vs Variable-Length Data Segments
The fixed-length block approach, as the name suggests, divides files into blocks of fixed size and uses a simple checksum (MD5, SHA-1, etc.) to find duplicates. Although it is possible to look for repeated blocks, the approach provides very limited effectiveness. The reason is that the primary opportunity for data reduction lies in finding duplicate blocks in two transmitted datasets that are made up mostly, but not completely, of the same data segments.
For example, similar data blocks may be present at different offsets in two different datasets; in other words, the block boundaries of similar data may differ. This is very common when some bytes are inserted into a file: when the changed file is processed again and divided into fixed-length blocks, all blocks after the insertion point appear to have changed.
Therefore, two datasets with a small amount of difference are likely to have very few identical fixed length blocks.
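The boundary-shift problem is easy to demonstrate, assuming 4 KB fixed blocks and SHA-256 digests (an illustrative sketch, not a specific product's scheme): inserting a single byte at the front of a dataset shifts every block boundary, leaving no blocks in common between the two versions.

```python
import hashlib, random

# Fixed-length blocking: one inserted byte shifts every subsequent boundary.
def fixed_block_hashes(data, size=4096):
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

random.seed(0)
original = bytes(random.randrange(256) for _ in range(16384))   # 16 KB
modified = b"!" + original              # a single byte inserted at the front

before = fixed_block_hashes(original)   # 4 blocks
after = fixed_block_hashes(modified)    # 5 blocks, all offset by one byte
common = set(before) & set(after)
print(len(common))                      # no blocks match any more
```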
Variable-Length Data Segment technology divides the data stream into variable length data segments using a methodology that can find the same block boundaries in different locations and contexts. This allows the boundaries to "float" within the data stream so that changes in one part of the dataset have little or no impact on the boundaries in other locations of the dataset.
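A toy content-defined chunker illustrates the "floating" boundaries, using a simple rolling polynomial hash over the last few bytes (a simplified Rabin-style sketch, not any vendor's production algorithm): after a one-byte insertion mid-file, boundaries resynchronize a few bytes later and most chunks survive unchanged.

```python
import hashlib, random

# Declare a boundary wherever a rolling hash of the last `window` bytes
# matches a bit pattern, so boundaries depend on content, not byte offsets.
def chunk_boundaries(data, window=16, mask=0x1FF):
    bounds, h = [], 0
    for i, b in enumerate(data):
        h = (h * 2 + b) % (1 << 32)                              # push new byte
        if i >= window:
            h = (h - data[i - window] * (1 << window)) % (1 << 32)  # pop old byte
        if (h & mask) == mask:                                   # content-based cut
            bounds.append(i + 1)
    return bounds

def chunk_hashes(data):
    hashes, start = set(), 0
    for end in chunk_boundaries(data) + [len(data)]:
        if end > start:
            hashes.add(hashlib.sha256(data[start:end]).hexdigest())
        start = end
    return hashes

random.seed(1)
original = bytes(random.randrange(256) for _ in range(16384))
modified = original[:5000] + b"!" + original[5000:]       # one byte inserted

before, after = chunk_hashes(original), chunk_hashes(modified)
print(len(before), len(after), len(before & after))       # most chunks survive
```

Only the chunk containing the insertion changes; once the rolling window has slid past the new byte, the boundaries, and hence the chunks, line up with the original again.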
Each organization has a certain capacity to generate data. The extent of savings depends upon, but is not directly proportional to, the number of applications or end users generating data. Overall, the deduplication savings depend upon the following parameters:
- Number of applications or end users generating data
- Total data
- Daily change in data
- Type of data (emails/ documents/ media etc.)
- Backup policy (weekly full with daily incrementals, or daily full)
- Retention period (90 days, 1 year etc.)
- Deduplication technology in place
The actual benefits of deduplication are realized once the same dataset is processed multiple times over a span of time, as with weekly/daily backups. This is especially true for variable-length data segment technology, which is far better at dealing with arbitrary byte insertions.
The deduplication ratio increases every time the same complete dataset is passed through the deduplication engine.
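A minimal simulation of this effect, assuming fixed 4 KB blocks and one changed byte per day (illustrative numbers only): logical bytes grow with every full backup while stored bytes barely move, so the cumulative dedup ratio climbs with each pass.

```python
import hashlib, random

seen = set()                       # digests of blocks already stored
logical = stored = 0

random.seed(2)
data = bytearray(random.randrange(256) for _ in range(64 * 1024))  # 64 KB dataset

for day in range(7):               # seven daily "full" backups
    data[random.randrange(len(data))] ^= 0xFF    # a tiny daily change
    for i in range(0, len(data), 4096):
        block = bytes(data[i:i + 4096])
        digest = hashlib.sha256(block).hexdigest()
        logical += len(block)      # bytes the backups claim to protect
        if digest not in seen:     # store each unique block only once
            seen.add(digest)
            stored += len(block)

print(round(logical / stored, 1))  # ratio keeps rising with every pass
```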
If compared against daily full backups, which I think are not widely used today, the ratios are close to 1:300. Most of the vendors use this as marketing jargon to attract customers, even though none of their customers may be doing daily full backups :)
If compared against modern-day incremental backups, our customer statistics show results between 1:4 and 1:50 for source-based deduplication.