Understanding Data Deduplication

“Data deduplication is inarguably one of the most important new technologies in storage of the past decade,” says Gartner. So let’s take a detailed look at what it actually means.

Data deduplication, or single instancing, essentially refers to the elimination of redundant data. In the deduplication process, duplicate data is deleted, leaving only one copy (single instance) of the data to be stored. However, an index of all the data is still retained should that data ever be required.

In the same way that the phrase “single instancing” turns the noun “single-instance storage” into a verb, the word “dedupe” becomes the verb for “deduplication.” For example: “After we deduped our critical business data with Druva’s inSync Backup, 90% of the storage space and bandwidth it was using opened up, giving us the breathing room we need to innovate!”

Example: A typical email system might contain 100 instances of the same 1 MB file attachment. If the email platform is backed up or archived, all 100 instances are saved, requiring 100 MB of storage space. With data deduplication, only one instance of the attachment is actually stored; each subsequent instance is just referenced back to the one saved copy, reducing the storage and bandwidth demand to only 1 MB.
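
To make this concrete, here is a minimal sketch of the single-instance idea in Python. It is illustrative only, not how any particular product implements it: each attachment is stored once under its content hash, and every message keeps just a reference to that hash.

```python
import hashlib

# Illustrative single-instance store: content is kept once, keyed by its
# SHA-256 digest; each message holds only a reference to that digest.
store = {}       # digest -> attachment bytes (stored exactly once)
references = []  # one entry per message: (message_id, digest)

def save_attachment(message_id, data):
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:              # first and only physical copy
        store[digest] = data
    references.append((message_id, digest))

attachment = b"x" * (1024 * 1024)        # the same 1 MB attachment
for msg_id in range(100):                # carried by 100 messages
    save_attachment(msg_id, attachment)

print(len(references))                       # 100 logical copies
print(sum(len(v) for v in store.values()))   # about 1 MB actually stored
```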

Technological Classification

The practical benefits of this technology depend upon various factors, including:

  • Point of Application: Source vs. Target
  • Time of Application: Inline vs. Post-Process
  • Granularity: File vs. Sub-File level
  • Algorithm: Fixed-size blocks vs. variable-length data segments

The relationship between these factors can be summarized in the diagram below:

[Figure: Deduplication Technological Classification]

Target- vs. Source-based Deduplication

Target-based deduplication acts on the target data storage media. In this case, the client is unmodified and is not aware of any deduplication. The deduplication engine can be embedded in the hardware array, which can be used as a NAS/SAN device with deduplication capabilities. Alternatively, the engine can be offered as an independent software or hardware appliance that acts as an intermediary between the backup server and the storage arrays. In both cases, it improves only storage utilization, not network bandwidth.

[Figure: Target vs. Source Deduplication]

Source-based deduplication, in contrast, acts on the data at the source before it’s moved. A deduplication-aware backup agent is installed on the client, which then backs up only unique data. The result is improved bandwidth and storage utilization. However, this imposes additional computational load on the backup client.
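
A simplified sketch of how such an agent can avoid re-sending data (an illustration under assumed chunk sizes and data, not the actual inSync protocol): the client hashes each chunk locally, checks which digests the target already holds, and uploads only the missing chunks.

```python
import hashlib
import os

def chunk_digests(data, size=64 * 1024):
    """Split data into fixed 64 KB chunks and return (digest, chunk) pairs."""
    return [(hashlib.sha256(data[i:i + size]).hexdigest(), data[i:i + size])
            for i in range(0, len(data), size)]

# Stand-in for the target's index of already-stored chunk digests.
server_index = set()

def backup(data):
    """Client-side agent: upload only chunks the target has never seen."""
    sent = 0
    for digest, chunk in chunk_digests(data):
        if digest not in server_index:   # target does not have this chunk yet
            server_index.add(digest)     # stands in for the actual upload
            sent += len(chunk)
    return sent

data = os.urandom(10 * 1024 * 1024)      # 10 MB of client data
print(backup(data))   # first backup transfers all 10 MB
print(backup(data))   # unchanged data: nothing crosses the network
```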

Inline vs. Post-process Deduplication

In target-based deduplication, the deduplication engine can process data for duplicates either in real time (i.e., as data is sent to the target) or after it has been stored on the target storage.

The former is called inline deduplication. The obvious advantages are:

  1. Increase in overall efficiency, as data is passed and processed only once
  2. The processed data is immediately available for post-storage processes, such as recovery and replication, reducing the RPO and RTO windows.

The disadvantages are:

  1. Decrease in write throughput
  2. The extent of deduplication is lower, since only the fixed-length block approach can be used.

Inline deduplication processes only incoming raw blocks and has no knowledge of the files or file structure. This forces it to use the fixed-length block approach (discussed in detail later).

[Figure: Inline vs. Post-Process Deduplication]

Post-process deduplication acts asynchronously on the data after it has been stored. Its advantages and disadvantages are essentially the reverse of those listed above for inline deduplication.

File vs. Sub-file Level Deduplication

The duplicate removal algorithm can be applied at the full-file or sub-file level. Duplicates at the full-file level can easily be eliminated by calculating a single checksum of the complete file and comparing it against the checksums of already-backed-up files. It’s simple and fast, but the extent of deduplication is lower, as this process does not address duplicate content found inside different files or datasets (e.g., specific email messages).

The sub-file-level deduplication technique breaks the file into smaller fixed-size or variable-size blocks, and then uses a standard hash-based algorithm to find duplicate blocks.

Fixed-Length Blocks vs. Variable-Length Data Segments

A fixed-length block approach, as the name suggests, divides files into fixed-size blocks and uses a simple checksum-based approach (MD5, SHA-1, etc.) to find duplicates. Although it’s possible to look for repeated blocks, the approach provides very limited effectiveness. The reason is that the primary opportunity for data reduction lies in finding duplicate blocks in two transmitted datasets that are made up mostly, but not completely, of the same data segments.

[Figure: Data Sets and Block Alignment]

For example, similar data blocks may be present at different offsets in two different datasets. In other words, the block boundary of similar data may differ. This is very common when some bytes are inserted into a file: when the changed file is processed again and divided into fixed-length blocks, all blocks after the insertion point appear to have changed.

Therefore, two datasets with a small amount of difference are likely to have very few identical fixed-length blocks.
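
The boundary-shift problem is easy to reproduce. In the sketch below (fixed 4 KB blocks and SHA-256 digests, purely illustrative), inserting a single byte near the start of a file causes every subsequent block to hash differently, so almost nothing deduplicates.

```python
import hashlib
import os

def fixed_block_digests(data, size=4096):
    """Hash every fixed-length block of the data."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

original = os.urandom(1024 * 1024)               # 1 MB of data
modified = original[:10] + b"!" + original[10:]  # one byte inserted early on

before = fixed_block_digests(original)
after = fixed_block_digests(modified)
print(f"{len(set(before) & set(after))} of {len(before)} blocks still match")
# typically prints "0 of 256 blocks still match"
```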

Variable-Length Data Segment technology divides the data stream into variable-length data segments using a methodology that can find the same block boundaries in different locations and contexts. This allows the boundaries to “float” within the data stream so that changes in one part of the dataset have little or no impact on the boundaries in other locations of the dataset.
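
One common way to implement this idea, sketched below with assumed window and chunk-size parameters (a basic rolling-hash content-defined chunker, not any particular vendor's patented algorithm): a small window is hashed as it slides over the data, and a chunk boundary is declared wherever the hash matches a fixed bit pattern. Because boundaries depend on content rather than offset, they re-align shortly after an insertion.

```python
import hashlib
import os

WINDOW = 48                          # rolling-hash window size in bytes
MASK = 0x0FFF                        # boundary when the low 12 bits are zero (~4 KB average)
MIN_LEN, MAX_LEN = 1024, 16 * 1024   # guard rails on chunk size
BASE, MOD = 257, (1 << 61) - 1
POW = pow(BASE, WINDOW - 1, MOD)     # weight of the byte leaving the window

def chunk_digests(data):
    """Content-defined chunking: chunk boundaries float with the data itself."""
    digests, start, h = [], 0, 0
    for i in range(len(data)):
        if i - start < WINDOW:                   # still filling the rolling window
            h = (h * BASE + data[i]) % MOD
            continue
        h = ((h - data[i - WINDOW] * POW) * BASE + data[i]) % MOD
        length = i - start + 1
        if (length >= MIN_LEN and (h & MASK) == 0) or length >= MAX_LEN:
            digests.append(hashlib.sha256(data[start:i + 1]).hexdigest())
            start, h = i + 1, 0
    if start < len(data):                        # trailing chunk
        digests.append(hashlib.sha256(data[start:]).hexdigest())
    return digests

original = os.urandom(256 * 1024)                 # 256 KB of data
modified = original[:10] + b"!" + original[10:]   # the same one-byte insertion
before = set(chunk_digests(original))
after = set(chunk_digests(modified))
print(f"{len(before & after)} of {len(before)} chunks still match")
# typically everything except the chunk containing the insertion still matches
```

The minimum and maximum chunk-size guards are a common practical refinement: they keep pathological inputs from producing chunks that are too small to index efficiently or too large to deduplicate well.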

ROI Benefits

Every organization generates data at its own rate. The extent of savings depends on, but is not directly proportional to, the number of applications or end users generating data. Overall, the deduplication savings depend upon the following parameters:

  • The number of applications or end users generating data
  • Total data
  • Daily change in data
  • Type of data (email messages, documents, media, etc.)
  • Backup policy (weekly-full, daily-incremental, or daily-full)
  • Retention period (90 days, 1 year, etc.)
  • Deduplication technology in place

The actual benefits of deduplication are realized once the same dataset is processed multiple times over a span of time for weekly/daily backups. This is especially true for variable-length data segment technology, which is much better at dealing with arbitrary byte insertions.

Numbers

The deduplication ratio increases every time the same complete dataset is passed through the deduplication engine.

Compared against daily full backups, which I think are not widely used today, the ratios are close to 1:300. Most vendors use this figure as marketing jargon to attract customers, even though few of their customers actually run daily full backups. Compared against modern incremental backups, our customer statistics show results between 1:4 and 1:50 for source-based deduplication.
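
A rough back-of-the-envelope illustration of why the baseline matters so much (all figures below are assumed, for illustration only):

```python
# Assumed figures: 1 TB of primary data, 90-day retention.
stored = 1.3                              # TB the dedup engine actually keeps (assumed)
baseline_daily_fulls = 90 * 1.0           # 90 daily 1 TB full backups
baseline_incrementals = 1.0 + 90 * 0.05   # one full plus 90 daily 50 GB incrementals

print(round(baseline_daily_fulls / stored, 1))   # roughly 1:69, the marketing-friendly figure
print(round(baseline_incrementals / stored, 1))  # roughly 1:4, closer to what customers see
```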

Want to learn more? Download our white paper, 8 Must-Have Features for Endpoint Backup.


Jaspreet Singh

Founder and CEO, Druva

Jaspreet bootstrapped the company while defining the product, sales and marketing strategies that have resulted in Druva's early and impressive success. Prior to founding Druva, Jaspreet was a member of the storage foundation group at Veritas.

24 Comments

  1. Borja 7 years ago

    Good post, Jaspreet. It is clear and neutral, but I’m missing what is the technology used by inSync in each of the cases:
    – Source or Target ?
    – Inline or Post-Process ?
    – File or Sub-File level ?
    – Fixed size blocks or Variable length data segments ?

  2. Jaspreet 7 years ago

    Thanks Borja,

    I wanted to keep the post neutral.

    inSync is backup software, hence it's source-based deduplication tech. The product uses a sub-file-level approach with a variable-size data segment algorithm.

    This helps it find those duplicate emails between large PST files. And the result is up to 90% bandwidth and storage savings.

    It's one of the rare in-production products to do so.

    The inline and post-process approaches are only valid in the case of target dedup.

    Jaspreet

  3. Laxman 7 years ago

    How is the block size determined when using variable block length? Can you shed more light on the so called “Variable-Length Data Segment technology”? Specifically, how is the block size determined? Do you have any references? Thanks.

  4. Jaspreet 7 years ago

    Laxman,

    That’s the “key” to this technology. Some players use heuristics, and some use signatures or leaner checksums.

    It depends on the data type and sometimes the algorithm needs to be trained as well.

    Druvaa has filed multiple patent applications on this :)

  5. Mike Dutch 7 years ago

    The inline vs post-process discussion is not accurate. Most inline solutions (both source and target) use variable length segmentation. Inline solutions can also be content aware.

    The numbers section is also inaccurate. Very high dedupe ratios (e.g., 500:1) are common but it really depends on what exactly you are measuring. For a good discussion, see the SNIA white paper titled “Understanding Data Deduplication Ratios”.

  6. Jaspreet 7 years ago

    Mike,

    Thanks for the information.

    IMO, inline has always applied to target-based dedup. But point me to the right sources if I am wrong.

    Any deduped NAS/SAN device cannot control what data is flushed to it. The mounted file system or storage driver flushes information which may not make any sense to the device. In such cases the variable data-segment algorithm can’t be applied.

    Yup, most vendors present twisted information. They compare ratios against daily full backups, which are rare in enterprises today.

    I corrected the ratios part. Thanks.

  7. Jered Floyd 6 years ago

    Jaspreet,

    Mike is right; in-line deduplication is not limited to fixed-length blocking. Our Permabit Enterprise Archive product, for example, does variable-sized chunking for optimal deduplication.

    You say:

    Any deduped NAS/SAN device cannot control what data is flushed to it. The mounted file system or storage driver flushes information which may not make any sense to the device. In such cases the variable data-segment algorithm can’t be applied.

    We cannot control when data is flushed to our device, but this does not mean that we cannot inspect the structure of the file as it is being written and make intelligent choices about where to set boundaries. Data being written sequentially is generally flushed sequentially, and in the case of out-of-order writes from the block cache we are able to do reassembly in memory or make guesses about file structure based on previously seen landmarks in the file. In our experience, we nearly always get deduplication as good as post-processing the file after it has been written entirely, and we do not introduce a dangerous “dedupe window” which can lead to falling far behind in the data stream.

    This is technically more complicated to implement than post-process, but it can and has been done. There is more information about our deduplication technologies, which we call Scalable Data Reduction, on our website at http://www.permabit.com/products/sdr.asp.

    Regards,
    Jered Floyd
    CTO, Permabit

  8. Milind 6 years ago

    I agree with Jered. All documents are written in their entirety and the filesystem cache also flushes out in a sequential fashion. In case of random overwrites, you may need to read back and merge to reconstruct the block. For database files, fixed block sizes would perform better. In essence, inline variable-sized chunking is possible. Most NAS servers still prefer post-processing to avoid impact on in-band performance.

    Milind Borate,
    CTO, Druvaa

  9. jitendra 6 years ago

    Hi, this is good, but I’m still confused.
    I just want to know how variable-size blocks are managed,
    and what the file format looks like, in a specific manner.
    So please help me.

  10. Krishnaprasad 4 years ago

    Hi,

    Can someone please redirect me to any variable-length deduplication algorithms (at least the working principle)?

  11. Prasad 4 years ago

    Hi,

    Did you find any variable-length deduplication algorithm?
    I found one method; try searching “deduplication” in ACM.

    Result: “Anchor-driven subchunk deduplication”
