Druva’s Secret Sauce: Meet the Technology Behind Dru’s GenAI Magic

David Gildea, VP of Product

Welcome to a behind-the-scenes look at the technology behind Dru, our newest GenAI team member! In this interview, our VP of Product, David Gildea, dives into the journey and motivation behind developing our own GenAI capabilities, the technology we chose (and why), how we keep it secure, the technical challenges we solved and learned from along the way, and much more.

Why did Druva decide to make its own GenAI capabilities?

We built our own GenAI capabilities to get the very best from Generative AI and offer it to our customers. LLMs and Generative AI offer a lot, but one of their weaknesses is that they are very general, designed to meet every use case to some extent. In our case, we needed to meet the specific use cases of our customers to the fullest extent possible.

Technically, how did Druva go about developing these GenAI capabilities?

One word jumps out here, and that is experimentation. We experimented repeatedly with different LLMs, approaches, solutions, and capabilities to get to where we are today. This involved looking at a wide range of LLMs, like those from Anthropic, AWS, AI21, OpenAI, Meta, and Hugging Face — then testing how they compared against each other on the various problems our customers face.

We have settled on a few for now, but this is likely to change as our requirements, and the capabilities of these LLMs, evolve. One of our early considerations was how to evaluate an LLM's capabilities. We divided these into three areas and prioritized them: accuracy, latency, and cost. For us, the accuracy of the LLM was the most important factor, followed by the latency of its responses and, finally, the cost.
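The prioritization above can be sketched as a simple ranking: accuracy first, ties broken by latency, then by cost. This is an illustrative sketch only, not Druva's actual evaluation code; the model names and numbers are hypothetical.

```python
def rank_models(candidates):
    """Rank candidate LLMs best-first: highest accuracy wins;
    ties are broken by lower latency, then lower cost."""
    return sorted(
        candidates,
        key=lambda m: (-candidates[m]["accuracy"],
                       candidates[m]["latency_ms"],
                       candidates[m]["cost_per_1k_tokens"]),
    )

# Hypothetical benchmark numbers for three unnamed candidate models.
candidates = {
    "model-a": {"accuracy": 0.92, "latency_ms": 800, "cost_per_1k_tokens": 0.008},
    "model-b": {"accuracy": 0.92, "latency_ms": 300, "cost_per_1k_tokens": 0.002},
    "model-c": {"accuracy": 0.85, "latency_ms": 150, "cost_per_1k_tokens": 0.001},
}

ranking = rank_models(candidates)  # → ["model-b", "model-a", "model-c"]
```

Because Python sorts tuples element by element, accuracy always dominates: a faster, cheaper model never outranks a more accurate one, which matches the priority order described above.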

How do you ensure user security?

We prioritized customer security and privacy within the architecture. The AI’s focus is on enhancing customer interactions, not on analyzing or using customer data as part of the learning process. All LLMs are securely hosted separately and respond exclusively through Druva’s existing APIs, respecting your controls and permissions.

The diagram below shows the security and privacy controls we have in place within the architecture, demonstrating that:

Dru AI architecture diagram
  • LLMs never connect to any Druva backend services directly

  • LLMs are hosted in entirely separate accounts and VPCs from any backend services

  • LLMs can never initiate any action independently

  • LLMs can only respond to questions sent from Druva's API

  • All normal customer permissions apply for all API calls made by Dru

We use AWS services where possible so we can keep everything inside our VPC for better security. Amazon Bedrock exposes foundation models like Claude 2 and Titan inside our VPC; these are comparable with OpenAI's models in quality, so they were a natural choice. AWS has gone to considerable lengths to protect our LLM environments through Amazon Bedrock, and because Druva is built on AWS, the integration works seamlessly.
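To illustrate what that integration looks like, here is a minimal sketch of building a request for a Claude model on Amazon Bedrock. The field names follow Bedrock's InvokeModel API for Anthropic's text-completion models; the prompt, parameter values, and function name are our own illustrative choices, not Druva's production code.

```python
import json

def build_bedrock_request(prompt, max_tokens=512):
    """Build the keyword arguments for a Bedrock InvokeModel call
    against Anthropic's Claude v2 text-completion model."""
    return {
        "modelId": "anthropic.claude-v2",
        "contentType": "application/json",
        "accept": "application/json",
        # Claude's text-completion API expects a Human/Assistant prompt
        # and a max_tokens_to_sample limit in the JSON body.
        "body": json.dumps({
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": max_tokens,
            "temperature": 0.2,
        }),
    }

# The actual call would go through a boto3 client running inside the VPC:
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(**build_bedrock_request("Summarize my backups"))
```

Because the client endpoint can be reached through a VPC interface endpoint, requests like this never need to traverse the public internet, which is the property the architecture above relies on.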

What was this journey like from the idea, to blueprint, to testing, to implementation? 

This has been an exciting journey, and we were lucky enough to go from idea to blueprint to testing and implementation in a very short time. This was made possible by tools like LangChain and LlamaIndex that helped us quickly test and evaluate the capabilities of the various generative AI frameworks and how they solve our problems.

We approached this project in true startup fashion: we identified the customer pain points first, then immediately matched the capabilities of this new technology to solve them in the best possible way, coding from day one.

Another major factor in our ability to develop and iterate quickly was using AWS tools like Amazon Bedrock, which let us focus purely on building with the models on offer rather than provisioning and managing the underlying infrastructure for those LLMs.

What technical challenges did you face during development?

A significant challenge, and also a benefit, of developing Generative AI applications is the pace at which capabilities change and the constant introduction of new LLMs and tools to work with them. With constant updates and so many new capabilities on offer, it's easy to get distracted trying each new one. However, this also presents an excellent opportunity to use cutting-edge technology to solve our customers' problems.

How did you solve these challenges and what were your learnings?

One of the significant learnings that came out of building Druva AI so far has been how important it is to break down the various components of our solution so that we can use different LLMs with different characteristics to solve those components. This allows us to drastically increase the quality of our answers to customers. 

An example of this is using the likes of Claude Instant to understand the high-level intent of a question. We can do this with very low latency (sub-second) and a very high level of quality. On the other hand, when we need more creative answers and can tolerate higher latency, such as in data retrieval, we can use a more sophisticated LLM like Claude 2 or GPT-4, which better handles the complexity of our requirements.

The ability to break our solution into much smaller areas has allowed us to better utilize the right LLM in the right place with the right parameters.
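Concretely, that kind of routing can be sketched as a small dispatch function: fast models handle latency-sensitive sub-tasks like intent parsing, while slower, more capable models handle the rest. The model identifiers and task names below are illustrative assumptions, not Druva's actual implementation.

```python
# Bedrock model IDs standing in for the two tiers described above.
FAST_MODEL = "anthropic.claude-instant-v1"  # low latency: intent, extraction
DEEP_MODEL = "anthropic.claude-v2"          # higher latency: complex answers

def choose_model(task):
    """Route a sub-task to the model whose latency/quality
    characteristics fit it best."""
    low_latency_tasks = {"intent_classification", "entity_extraction"}
    return FAST_MODEL if task in low_latency_tasks else DEEP_MODEL
```

Keeping the routing decision in one place makes it easy to swap either tier for a newer model as requirements, or the LLM landscape, change.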

What does it mean for Druva to be the first backup vendor to release its own GenAI capabilities?

Solving data protection for our customers is our number one job, and we are constantly innovating to continue to deliver on this promise. Utilizing Generative AI to continue to do this is no different. Our ability to get AI into the market before other backup vendors indicates our drive to support our customers at every step of their data protection journey.

Next steps

Dru is now available in technology preview for Amazon EC2 protection. Read our press release to learn more about Druva’s new AI capabilities, and register for our upcoming webinar to see it in action.

Stay tuned to the Druva blog as we delve into Dru’s real-world impact by examining specific use cases and showcasing how our AI integration is solving customer needs. Finally, we’ll show some of these capabilities in action via recorded demos. Eager to explore more? Check out our product tour where you can see Dru in use for Amazon EC2 backup!