Cloud Data Protection: Best Practices

A leading expert in cloud data protection talks about ransomware, data mining, challenges in cybersecurity and more.



One of the key challenges for data protection in today’s heterogeneous environment is that companies have so many different environments: cloud, in-house, remote office, many endpoints. How are companies doing with this challenge? What about monitoring data across this mixed landscape?

To provide guidance on data protection in the cloud era, I spoke with a leading expert:

Prem Ananthakrishnan, VP of Product, Druva

We discussed:

  • Data governance is a thorny problem because compliance in a mixed environment is a constant headache. How can companies stay ahead of compliance demands?
  • Of course data security is the ultimate fear for many companies, with data in a multicloud environment. Ransomware in particular is a major problem. What are a few best practices for data security?
  • What about strategies for making ideal use of your data, mining it for maximum competitive insight, even as you keep it fully protected?
  • The future of data protection in the cloud? What do you see several years out?

James Maguire, Managing Editor, Datamation – moderator


Edited highlights from the discussion — all quotes from Prem Ananthakrishnan: 

COVID, the Cloud and Digital Transformation

What the pandemic has done is accelerate some of those pains and problems and bring them out in a more compelling way, making them top-of-mind for our customers. Over the last 15 years or so, and I spent a good part of my career in IT before I got into the vendor side, IT was always looking to find a way to become the enabler for the organization, versus just being a call center or a risk management center. And I think that digital transformation, and obviously cloud is a big part of that, has helped IT really get into the front seat and help the business drive outcomes, whether it is bringing some new solutions to market or helping the business transform into new areas.

At the same time, it brought the agility and velocity that most companies were really looking for. But think about what happened along with that: as part of that whole digital transformation, we started offering all these new applications. We started looking at new things in the cloud, whether it’s Kubernetes or Microsoft 365, or a way for clients to re-engineer their applications in the cloud. Which is great, by the way. That was the whole promise, and that’s been the promise of digital transformation.

But along with that came a lot of challenges around data protection, because the definition of data protection before that was confined to protecting what used to sit behind the walls of what you would call the data center.

And as the data has really become distributed and fragmented, I think you now have to accept that that’s the norm. And when you think about it that way, the definition of data protection also needs to change.

Centralized Data Protection

One thing we all talk about, on the IT side and also on the vendor side, is that as your data gets more and more fragmented, it’s really important to think about your data protection in a more centralized way. Because how do you protect something when you don’t have the right visibility into where your data is and what type of information needs to get protected?

Old data sets are not necessarily being handled the right way. You may not even know what’s sitting out there, maybe in an S3 bucket on Amazon, or sometimes sitting under somebody’s desk. [chuckle]
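To make that visibility gap concrete: a minimal, hypothetical sketch of the kind of stale-data sweep a team might run over an object-store inventory. The object names and the one-year threshold are made up for illustration; this is plain Python, not any vendor’s tooling.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory: object key -> last-modified timestamp.
# In practice this would come from a bucket or file-share listing.
INVENTORY = {
    "reports/q1-2018.csv": datetime(2018, 4, 1, tzinfo=timezone.utc),
    "exports/customers-backup.json": datetime(2023, 11, 5, tzinfo=timezone.utc),
    "tmp/scratch-dump.bin": datetime(2019, 7, 20, tzinfo=timezone.utc),
}

def stale_objects(inventory, max_age_days=365, now=None):
    """Return keys of objects not modified within max_age_days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return sorted(key for key, ts in inventory.items() if ts < cutoff)

# Fixed reference date so the example is reproducible.
print(stale_objects(INVENTORY, now=datetime(2024, 1, 1, tzinfo=timezone.utc)))
# → ['reports/q1-2018.csv', 'tmp/scratch-dump.bin']
```

Flagged objects would then be candidates for review: bring them under a retention policy, archive them, or delete them.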

Data Protection is Not Enough

I was in security for a few years before I moved into this whole data protection side of the house. And historically, security in this side of the world, whether you’re in IT or on the vendor side, has predominantly been focused on keeping the bad actors out. A lot of technology, process and even people skills were oriented around keeping the bad actors out, putting prevention in place and ensuring that nothing goes wrong. I think that’s important. That is still really important. You have to put those critical prevention measures in place.

But I think what’s changing quite a bit now, and what people are starting to realize, is that that alone is not enough. Because no matter what you do, with this fragmented information landscape and the way in which we are developing these new-age applications, there are too many threat vectors and too many points that can easily go unmonitored and unprotected.


The attackers are realizing that organizations are holding backups as the last line of defense, so they are now trying to attack that as well. So if you look at it from a backup perspective, it becomes increasingly important to have much stronger levels of protection, to be able to ensure the integrity of those backups.

Are you monitoring the backup environments really well? Do you know who is accessing that backup environment? Are you able to identify anomalies and have the right kind of signals in your organization? And in case somebody’s trying to delete that backup or do anything with it, are your backups protected well enough that you can actually provide that level of resiliency?
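The anomaly question above can be sketched with a simple statistical check: flag backup runs whose size deviates sharply from the rest of the series. This is a hypothetical z-score heuristic in plain Python, not Druva’s actual detection logic; real systems use much richer signals.

```python
from statistics import mean, stdev

def flag_anomalies(sizes_gb, threshold=3.0):
    """Return indices of backup runs whose size deviates more than
    `threshold` standard deviations from the mean of the series."""
    if len(sizes_gb) < 2:
        return []
    mu, sigma = mean(sizes_gb), stdev(sizes_gb)
    if sigma == 0:
        return []
    return [i for i, s in enumerate(sizes_gb) if abs(s - mu) / sigma > threshold]

# Nightly backup sizes in GB; the sudden drop on day 6 could indicate
# data being deleted or encrypted before the backup ran.
nightly = [120, 122, 119, 121, 123, 120, 12]
print(flag_anomalies(nightly, threshold=2.0))
# → [6]
```

An alert on such an outlier gives the team a signal to investigate before the anomaly propagates into the retained backup chain.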

Data Mining

If you think about how people are doing data science or data analytics today, what they’re not doing yet is looking at that backup repository as a de facto data lake.

And the negative side of that is that now you’re creating more and more copies of data, which increases your risk, it increases your cost, it has a lot of ramifications. So the de facto thinking for any data science team today is, “Hey, we need to kind of mine the data to create a competitive advantage.”

But let’s say I need to create a special healthcare pharma tool for improving my business operations. The de facto approach is to just go create a copy by pulling all this data from so many different places, and then go build yet another silo, yet another tool, to do all the data mining.

And what they don’t know yet, and I think the industry really needs to educate and work with customers on, is that you have this great repository of datasets sitting inside the backup world, which they can start mining and consuming instead of having to go and create another siloed data lake. So I think that itself is a big transformation.
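The copy-proliferation problem described above is essentially what content-addressed deduplication solves inside backup repositories: one physical copy per unique dataset, no matter how many teams pull it. A simplified sketch, with made-up file names and in-memory byte strings standing in for real datasets:

```python
import hashlib

def dedupe(datasets):
    """Keep one copy per unique content, keyed by SHA-256 digest.
    Returns a mapping of digest -> (first name seen, content)."""
    unique = {}
    for name, content in datasets:
        digest = hashlib.sha256(content).hexdigest()
        unique.setdefault(digest, (name, content))
    return unique

# Hypothetical example: three "copies" pulled into separate silos,
# two of which are byte-identical.
copies = [
    ("analytics/patients.csv", b"id,name\n1,alice\n"),
    ("ml-silo/patients_copy.csv", b"id,name\n1,alice\n"),
    ("pharma-tool/patients_v2.csv", b"id,name\n1,alice\n2,bob\n"),
]
unique = dedupe(copies)
print(len(copies), "copies ->", len(unique), "unique datasets")
# → 3 copies -> 2 unique datasets
```

Mining the backup repository directly, rather than exporting fresh copies into new silos, keeps the team on the deduplicated side of this picture.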

Future: A Hub of Data vs. Silos

From data protection to management, all of these solutions that are getting built for data science and analytics require a certain way in which data needs to be consumed.

It has to be in certain formats; you have to make sure the data is available and usable in certain ways. I think there’s a great opportunity, and five years from now, I see a lot of the cutting-edge, innovative data protection vendors opening up their stacks. Traditionally, data protection solutions were very proprietary; nobody could easily access them. They had their own protocols and everything, so nobody could consume that data.

I think that consumption of data, being able to access it easily with APIs and automation, is a big trend. And over the next five years, it’s going to take a completely new form where anybody can tap into this hub of data and use it as a data exchange platform. I think that’s part of this big movement I see in the data protection and backup industry.