
Cohesity DataProtect Competitors and Alternatives

Get our free report covering Rubrik, Veeam Software, Commvault, and other competitors of Cohesity DataProtect. Updated: November 2021.
555,358 professionals have used our research since 2012.

Read reviews of Cohesity DataProtect competitors and alternatives

John Leitgeb
IT Director at Kingston Technology
Real User
Top 20
Easy-to-use interface, good telemetry data, and helpful support

Pros and Cons

  • "If we lost our data center and had to recover it, Zerto would save us a great deal of time. In our testing, we have found that recovering the entire data center would be completed within a day."
  • "The onset of configuring an environment in the cloud is difficult and could be easier to do."

What is our primary use case?

Originally, I was looking for a solution that allowed us to replicate our critical workloads to a cloud target and then pay a monthly fee to have it stored there. Then, if some kind of disaster happened, we would have the ability to instantiate or spin up those workloads in a cloud environment and provide access to our applications. That was the ask of the platform.

We are a manufacturing company, so our environment wouldn't be drastically affected by a webpage outage. However, depending on the applications that are affected, being a $15 billion company, there could be a significant impact.

How has it helped my organization?

Zerto is very good in terms of providing continuous data protection. Bear in mind that doing this in the cloud is newer to them than what they've always done traditionally on-premises. Along the way, there are some challenges in working with a cloud provider: having the connectivity methodology to replicate the VMs from on-premises to Azure, through the Zerto interface, and making sure that there's a healthy copy of Zerto in the cloud. For that mechanism, we spent several months working with Zerto, getting it dialed in to support what we needed to do. Otherwise, everything else that they've been known to do has worked flawlessly.

The interface is easy to use, although configuring the environment and the infrastructure around it wasn't so clear. The interface and its dashboard are very good and very nice to use. The interface is informative in that it provides a lot of the telemetry that you need to validate that your backup is healthy, current, and recoverable.

A good example of how Zerto has improved the way our organization functions is that it has allowed us to decommission repurposed hardware that we were using to do the same type of DR activity. In the past, we would take old hardware and repurpose it as DR hardware, but along with that you have to have the administration expertise, and you have to worry about third-party support on that old hardware. It inevitably ends up breaking down or having problems, and by taking that out of the equation, with all of the DR going to the cloud, all that responsibility is now that of the cloud provider. It frees up our staff who had to babysit the old hardware. I think that, in and of itself, is enough reason to use Zerto.

We've determined that the ability to spin up workloads in Azure is the fastest that we've ever seen because it sits as a pre-converted VM. The speed to convert it and the speed to bring it back on-premises is compelling. It's faster than the other ways that we've tried or used in the past. On top of that, they employ their own compression and deduplication in terms of replicating to a target. As such, the whole capability is much more efficient than doing it the way we were doing it with Rubrik.

If we lost our data center and had to recover it, Zerto would save us a great deal of time. In our testing, we have found that recovering the entire data center would be completed within a day. In the past, it was going to take us close to a month. 

Using Zerto does not mean that we can reduce the number of people involved in a failover.  You still need to have expertise with VMware, Zerto, and Azure. It may not need to be as in-depth, and it's not as complicated as some other platforms might be. The person may not have to be such an expert because the platform is intuitive enough that somebody of that level can administer it. Ultimately, you still need a human body to do it.

What is most valuable?

The most valuable feature is the speed at which it can instantiate VMs. When I was doing the same thing with Rubrik, if I had 30 VMs on Azure and I wanted to bring them up live, it would take perhaps 24 hours. Having 1,000 VMs to do, it would be very time-consuming. With Zerto, I can bring up almost 1,000 VMs in an hour. This is what I really liked about Zerto, although it can do a lot of other things, as well.

The deduplication capabilities are good.

What needs improvement?

The initial configuration of an environment in the cloud is difficult and could be easier to do. When it's on-premises, it's a little bit easier because it's more of a controlled environment. It's a Windows operating system on a server, and no matter what server you have, it's the same.

However, when you are putting it on AWS, that's a different procedure than installing it on Azure, which is a different procedure than installing it on GCP, if they even support it. I'm not sure that they do. In any event, they could do a better job in how to build that out, in terms of getting the product configured in a cloud environment.

There are some other things they could employ, in terms of the setup of the environment, that would make things a little less challenging. For example, you may need to have an Azure expert on the phone because you require some middleware expertise. This is something that Zerto knew about but maybe could have implemented better in their product.

Their long-term retention product has room for improvement, although that is something that they are currently working on.

For how long have I used the solution?

We have been with Zerto for approximately 10 years. We were probably one of the first adopters on the platform.

What do I think about the stability of the solution?

With respect to stability, the on-premises product has been around for so many years that it's baked in. It is stable, for sure. The cloud-based deployment is getting there. It's strong enough in terms of uptime and resilience that we feel confident about getting behind a solution like this.

It is important to consider that any issues with instability could be related to other dependencies, like Azure or network connectivity or our on-premises environment. When you have a hybrid environment between on-premises and the cloud, it's never going to be as stable as a purely on-premises or purely cloud-based deployment. There are always going to be complications.

What do I think about the scalability of the solution?

This is a scalable product. We tested scalability starting with 10 VMs and went right up to 100, and there was no difference. We are an SMB, on the larger side, so I wouldn't know what would happen if you tried to run it with 50,000 VMs. However, in an SMB-sized environment, it can definitely handle or scale to what we do, without any problems.

This is a global solution for us and there's a potential that usage will increase. Right now, it is protecting all of our critical workloads, but not everything. What I mean is that some VMs in a DR scenario would not need to be spun up right away. Some could be done a month later, and those particular ones would just fall into our normal recovery process from our backup.

The backup side is what we're waiting on, or relying on, in terms of the next ask from Zerto. Barring that, we could literally use any other backup solution along with Zerto. I'm perfectly fine doing that but I think it would be nice to use Zerto's backup solution in conjunction with their DR, just because of the integration between the two.  

How are customer service and technical support?

In general, the support is pretty good. They were just acquired by HPE, and I'm not sure if that's going to make things better or worse. I've had experiences on both sides, but I think overall their support has been very good.

Which solution did I use previously and why did I switch?

Zerto has not yet replaced any of our legacy backup products but it has replaced our DR solution. Prior to Zerto, we were using Rubrik as our DR solution. We switched to Zerto and it was a much better solution to accommodate what we wanted to do. The reason we switched had to do with support for VMware.

When we were using Rubrik, one of the problems we had was that if I instantiated the VM on Azure, it's running as an Azure VM, not as a VMware VM. This meant that if I needed to bring it back on-premises from Azure, I needed to convert it back to a VMware VM. It was running as a Hyper-V VM in Azure, but I needed an ESX version or a VMware version. At the time, Rubrik did not have a method to convert it back, so this left us stuck.

There are not a lot of other DR solutions like this on the market. There is Site Recovery Manager from VMware, and there is Zerto. After so many years of using it, I find that it is a very mature platform and I consider it easy to use. 

How was the initial setup?

The initial setup is complex. It may be partly due to our understanding of Azure, which I would not put at an expert level. I would rate our skill at Azure between a neophyte and the mid-range in terms of understanding the connectivity points with it. In addition to that, we had to deal with a cloud service provider.

Essentially, we had to change things around, and I would not say that it was easy. It was difficult and definitely needed a third party to help get the product stood up.

Our deployment was completed within a couple of months of ending the PoC. Our PoC lasted between 30 and 60 days, over which time we were able to validate it. It took another 60 days to get it up and running after we got the green light to purchase it.

We're a multisite organization, so the implementation strategy started with getting it baked in at our corporate location and validating it. Then we built out an Azure footprint globally and extended the product into those environments.

What about the implementation team?

We used a company called Insight to assist us with implementation. We had a previous history with one of their engineers from previous work that we had done, and we felt that he would be a good person to walk us through the implementation of Zerto. Zerto engineers were working with us as well, so we had a mix of people supporting the project.

We have an infrastructure architect who's heading the project. He validates the environment, builds it out with the business partners and the vendor, helps figure out how it should be operationalized, and configures it. Then it gets passed to our data protection group, whose admins administer the platform, and it largely maintains itself.

Once the deployment is complete, maintaining the solution is a half-person effort. There are admins who have a background in data protection, backup products, as well as virtualization and understanding of VMware. A typical infrastructure administrator is capable of administering the platform.

What was our ROI?

Zerto has very much saved us money by enabling us to do DR in the cloud rather than in our physical data center. To do what we want to do with dedicated hardware, keeping that hardware at the ready with support and maintenance, would cost a huge amount compared to what I'm doing now.

By the way, we are doing what is considered a poor man's DR. I'm not saying that I'm poor, but that's the term I place on it because most people have a replica of their hardware in another environment. One needs to pay for those hardware costs, even though it's not doing anything other than sitting there, just in case. Using Zerto, I don't have to pay for that hardware in the cloud.

All I pay for is storage, and that's much less than what the hardware cost would be. Running that environment with everything on there, just sitting, would cost roughly ten times as much.

That ratio holds because the storage that it replicates to is not the fastest tier. There are no VMs and no compute or memory associated with replicating this, so all I'm paying for is the storage.

So in one case, I'm paying only for storage, and in the other case, I would have to pay for storage plus hardware, compute, and connectivity. Add in maintenance, networking between sites, and the soft costs and man-hours to support that environment just to have it ready, and I would say ten to one is probably a fair assessment.
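To make that arithmetic concrete, here is a back-of-the-envelope sketch. The monthly figures are hypothetical placeholders invented for illustration, not the reviewer's actual Azure costs:

```python
# Hypothetical monthly figures to illustrate the ~10:1 claim. These numbers
# are invented placeholders, not actual Azure pricing or the reviewer's costs.

replica_storage = 2_000          # storage-only DR target (replicated data at rest)

standby_site = {
    "storage": 2_000,            # same data, but on faster tiers
    "compute": 9_000,            # reserved VMs sitting idle "just in case"
    "hardware_maintenance": 3_000,
    "network_connectivity": 2_000,
    "admin_soft_costs": 4_000,   # man-hours to babysit the standby environment
}

total_standby = sum(standby_site.values())
print(f"Storage-only DR:   ${replica_storage:,}/month")
print(f"Full standby site: ${total_standby:,}/month")
print(f"Ratio: {total_standby / replica_storage:.0f}:1")  # -> 10:1
```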

When it comes to DR, there is no real return on investment. The return comes in the form of risk mitigation. If the question is whether I think that I spent the least amount of money to provide a resilient environment then I would answer yes. Without question.

What's my experience with pricing, setup cost, and licensing?

If you are an IT person and you think that DR is too expensive, then the cloud option from Zerto is good because anyone can afford to use it, as far as getting one or two of their critical applications protected. The real value of the product is that if you didn't have any DR strategy because you thought you couldn't afford one, you can at least have some form of DR, with your most critical apps up and running to support the business.

A lot of IT people roll the dice and take the chance that that day will never come, because doing so saves money. My advice is to look at the competition out there, such as VMware Site Recovery, and, like anything else, try to leverage the best price you can.

There are no costs in addition to the standard licensing fees for the product itself. However, for the environment that it resides in, there certainly are. With Azure, for example, there are several additional costs including connectivity, storage, and the VPN. These ancillary costs are not trivial and you definitely have to spend some time understanding what they are and try to control them.

Which other solutions did I evaluate?

I looked at several solutions during the evaluation period. When Zerto came to the table, it was very good at doing backup. The other products could arguably instantiate and do the DR, but they couldn't do everything that Zerto has been doing. Specifically, Zerto was handling the "bubbling" of the environment, isolating it so that you can test it and ensure that there is no cross-contamination. That added feature, on top of the fact that it can do it so much faster than Rubrik could, was the compelling reason why we looked there.

Along the way, I looked at Cohesity and Veeam and a few other vendors, but they didn't have an elegant way of doing what I wanted to do, which is sending copies to an inexpensive cloud storage target and then having the mechanism to instantiate them. The mechanism wasn't as elegant with some of those vendors.

What other advice do I have?

We initially started with the on-premises version, where we replicated our global DR from the US to Taiwan. Zerto recently came out with a cloud-based, enterprise variant that gives you the ability to use it on-premises or in the cloud. With this, we've migrated our licenses to a cloud-based strategy for disaster recovery.

We are in the middle of evaluating their long-term retention, or long-term backup, solution. It's very new to us. In the same way that Veeam, Rubrik, and others were trying to get into Zerto's business, Zerto is now trying to get into their business with a backup solution.

I think it's much easier to do backup than what Zerto does for DR, so I don't think it will be very difficult for them to deliver table-stakes backup, which is file retention for multiple targets and that kind of thing.

Right now, I would say they're probably at the 70% mark as far as what I consider to be a success, but each version they release gets closer and closer to being a certifiable, good backup solution.

We have not had to recover our data after a ransomware attack but if our whole environment was encrypted, we have several ways to recover it. Zerto is the last resort for us but if we ever have to do that, I know that we can recover our environment in hours instead of days.

If that day ever occurs, which would be a very bad day if we had to recover at that level, then Zerto will be very helpful. We've done recoveries in the past where the on-premises restore was not healthy, and we've been able to recover very fast. It isn't the onesie-twosie restores that are compelling, because most vendors can provide that. It's the sheer volume of being able to restore so many at once that's the compelling factor for Zerto.

My advice for anybody who is implementing Zerto is to get a good cloud architect. Spend the time to build out your design, including your IP scheme, to support the feature sets and capabilities of the product. That is where the work needs to be done, more so than the Zerto products themselves. Zerto is pretty simple to get up and running but it's all the work ahead in the deployment or delivery that needs to be done. A good architect or cloud person will help with this.

The biggest lesson that I have learned from using Zerto is that it requires good planning but at the end of it, you'll have a reasonable disaster recovery solution. If you don't currently have one then this is certainly something that you should consider.

I would rate Zerto a ten out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
RJ
Storage Administrator at a healthcare company with 5,001-10,000 employees
Real User
Top 20
Cut our backup management time significantly, and near-instant recovery reduces our downtime

Pros and Cons

  • "We do like the instant recovery... Now, we say, "Okay, give me 15 seconds and I can get this back up for you." And within that 15 seconds it's on and the only thing that we have to do afterwards is vMotion it off of the Rubrik storage back to where it should rest."
  • "The interface is still slightly clunky and has room for improvement. They do work with us whenever we mention anything that needs to be done or anything that we want. We find that bringing up the management interface is a little slow and not as intuitive as we would like, but it's been getting better as it evolves."

What is our primary use case?

We came from two different systems. We had one product that was for our campus side and a different product for the hospital side. We wanted to bring those together and not have too many products in one environment. Rubrik covers everything in our VMware, for both campus and hospital. It does all of our backups. Anything that gets backed up for either side now goes through it.

We were siloed out into many different teams on both sides and we had a backup team on campus and a backup team on the hospital side. When those were brought together, the backup teams were dissolved and they were put into the VMware side where they're now managing hardware and server hardware refreshes.

My team is now the storage and backup team and we've taken on that task. Backups are offered as part of pretty much any ticket requesting a new server, for campus or hospital. We spin up the backup at server creation.

Our Rubrik is all on-prem. We back up our VMware environment and we also do a few physicals. We do some SQL and we do some Oracle.

How has it helped my organization?

It depends on what we're recovering, but some recoveries, before Rubrik, would take 30 minutes-plus. Now, similar recoveries that we've done have taken only seconds.

Also, when we first put this into place, we were actually moving to a hybrid cloud approach as well. We were trying to offer server creation as a simple ticket. We were doing this through offering the products, the catalog, and the automation behind everything to spin up the servers and deal out the storage. The two products that we actually have in our environment weren't very friendly with that automation piece but Rubrik, with its SLA policies, makes it very easy for us to say, "Hey, if this is a tier-zero application, we want this SLA applied globally," although there aren't very many of those in our environment. And if it's a tier-one application we can say, "Oh, we want this SLA applied." It does a very good job of keeping things clean in our environment. We also went through making sure we have everything tagged in VMware so that Rubrik can just pull that tag and apply that SLA. So things work pretty smoothly with all of that together.

We use the archival functionality. We tend to keep things on a Brik for a certain amount of time and, of course, it's a larger amount of time for tier-zero applications. Then we archive off to a private cloud that we have here at the university. That definitely keeps costs down because we have a deep and cheap storage solution for that cloud, Hitachi Content Platform. That was one of the main reasons we went with Rubrik: it is compatible with HCP. We have quite a few petabytes of that and we wanted to make sure that we could leverage it and use it to our advantage.
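For context, HCP exposes an S3-compatible API, which is what makes it usable as an archive target. A minimal sketch of writing an object to an S3-compatible endpoint with boto3 (the endpoint URL, bucket, and credentials below are hypothetical placeholders) might look like this:

```python
# Minimal sketch: writing an archive object to an S3-compatible endpoint such
# as Hitachi Content Platform. The endpoint URL, bucket, and credentials are
# hypothetical placeholders; an archival target in a backup product is
# typically configured with the same kinds of parameters.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://hcp.example.edu",   # hypothetical HCP namespace URL
    aws_access_key_id="ARCHIVE_USER",          # placeholder credentials
    aws_secret_access_key="ARCHIVE_SECRET",
)

s3.upload_file("backup-snapshot.bin", "dr-archive", "tier0/backup-snapshot.bin")
size = s3.head_object(Bucket="dr-archive", Key="tier0/backup-snapshot.bin")["ContentLength"]
print(f"archived {size:,} bytes")
```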

Another benefit has been that management time has gone down significantly. Before, we had those two teams, one for NetBackup and one for Commvault, and each of those teams had two people on them. Now, we have one person on the storage team who is dedicated pretty much to backups, and the rest of us jump in as needed. We've really been able to consolidate that effort, and since it's an easy-to-use interface, we were able to pick it up and run with it as a storage team. With NetBackup before, we had to build out quite a few servers and other infrastructure to get it into HCP. The whole model behind that, having lots of media servers, was very costly when you add in all of the hardware costs, licensing, et cetera. With this, it's quite a bit cheaper.

And Rubrik has definitely reduced downtime, because if we can spin up a recovery faster to that local CPU and the storage of Rubrik and have it up instantly, we can definitely get back to work sooner.

What is most valuable?

We do like the instant recovery because, beforehand, we would tell people, "Hey, it's going to take anywhere from 30 minutes to an hour to spin this up and, in that time, we're going to need your help with certain questions." We would sit there and work with them, but it always took quite a while. Now, we say, "Okay, give me 15 seconds and I can get this back up for you." And within that 15 seconds it's on and the only thing that we have to do afterwards is vMotion it off of the Rubrik storage back to where it should rest.

We also like the web interface. We mainly log in to the node and work from that, but occasionally we will log in and look at things when offsite. It's very intuitive and it works really well.

In addition, the solution's APIs play in with our automation piece for hybrid cloud. We wanted everything to work without manual interaction. We wanted everything to just play through when a ticket is submitted and automatically spin up the backup that we wanted, based on the tag in the VMware object. Our VMware team was the one that mainly looked at those APIs and built all of that out, but they haven't had any issues with it. It's worked exactly as designed.
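As a rough illustration of that kind of API-driven automation, the sketch below assigns an SLA to a VM over a Rubrik cluster's REST API. The /v1/vmware/vm endpoints and the configuredSlaDomainId field reflect Rubrik's documented CDM API as best we understand it, but treat them as assumptions to verify against your cluster's API docs; the host, token, SLA ID, and VM names are placeholders:

```python
# Sketch of ticket-driven SLA assignment against a Rubrik cluster's REST API.
# Endpoint paths and field names are assumptions based on Rubrik's CDM v1 API;
# verify against your cluster's documentation. Host, token, and IDs are
# placeholders, not real values.
import requests

RUBRIK = "https://rubrik.example.com"
HEADERS = {"Authorization": "Bearer API_TOKEN"}  # placeholder token

def assign_sla(vm_name: str, sla_domain_id: str) -> None:
    # Look up the VM object(s) by name...
    vms = requests.get(
        f"{RUBRIK}/api/v1/vmware/vm",
        params={"name": vm_name},
        headers=HEADERS,
        verify=False,  # many clusters use self-signed certs; verify in prod
    ).json()["data"]
    # ...then patch each VM's configured SLA domain.
    for vm in vms:
        requests.patch(
            f"{RUBRIK}/api/v1/vmware/vm/{vm['id']}",
            json={"configuredSlaDomainId": sla_domain_id},
            headers=HEADERS,
            verify=False,
        ).raise_for_status()

# e.g., a ticketing workflow maps the vSphere tag "tier-1" to an SLA ID:
assign_sla("new-app-server-01", "TIER1_SLA_DOMAIN_ID")
```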

What needs improvement?

The interface is still slightly clunky and has room for improvement. They do work with us whenever we mention anything that needs to be done or anything that we want. We find that bringing up the management interface is a little slow and not as intuitive as we would like, but it's been getting better as it evolves.

Rubrik is a somewhat new company, so it needs to become a little more established, and that just comes with time. It's not really too much of a concern or a weakness. It's just something that hasn't happened yet.

For how long have I used the solution?

We've been using Rubrik for about a year and a half to two years now.

What do I think about the stability of the solution?

The stability has been good. We don't run into a ton of issues on it.

What do I think about the scalability of the solution?

The scalability is wonderful. That is one of our biggest advantages with this. We can scale out as big or as small as we need to. We went with 20 nodes or so at the start and we've got over 40 now. We continue to expand as needed. We're still not all the way done with rolling this out to replace everything, but every year we're getting more and more nodes in there and replacing more and more.

We've covered about 85 percent of our environment. With the other 15 percent, it wasn't that Rubrik couldn't handle it, it's that the budget only allows for so many nodes to be purchased at a time. On top of that, we need to make sure that we do it in a way that's non-disruptive for work, and there are some teams that would be affected by disruption. We need to go a little bit at a time, which is what we've done. 

For the future, I do see us using it more. We have been doing a soft launch on Oracle because we needed the tool that Rubrik has that allows for that integration. It was still at an early stage of development, and we weren't comfortable putting it into production until it was more mature. So we have used Rubrik to back up Oracle, but we've used fewer of the automation pieces that Rubrik offers, treating it more as just a landing spot until the tool is fully developed. That's about the only piece that we're going to use more in the future.

How are customer service and technical support?

When we have run into issues, we've reached out to our support team at Rubrik and they've been very quick to respond. Whether they're in the office or not, they do take our calls and help us out. It's always a quick response.

They're a newer company, so I'm sure they're still establishing their place, but the escalation teams and everybody that we've worked with have been capable and they've been able to fix our problems without having to bring in too many people.

Which solution did I use previously and why did I switch?

We had Commvault and NetBackup before. Both of those were based on costly consumption-based licenses, and our CIO really disliked that model. The licenses that we had had been increasing in cost year after year and it just wasn't feasible to keep two separate products that weren't a good fit for the automation piece, for hybrid cloud. And they were on a slightly more pricey model. So rather than going to one or the other, we went out to see if there was anything that made more sense at the time. And that's when we found Rubrik.

With Rubrik, we have an agreement where it isn't license-based, and we are able to add more Briks as needed and more clusters as needed. It makes it extremely easy to expand our backup environment as the need arises.

With the other models out there, you would buy up to one quota, and then when you hit it, prices would change and other things would happen. They have you locked in, no matter what. It was basically a situation where you had to pay whatever price they said you had to pay. With Rubrik, it's been very nice to have all of the equipment in our own data center and to have a little bit more control. For example, if we think we're going to need a certain amount next year, we know what the hardware cost is going to be, and we can pay for any additional capacity that we need. That's been really nice with Rubrik.

How was the initial setup?

Setting up Rubrik was both a little bit straightforward and a little bit complex. We had the team that sold us the product with us during setup, and we went to add in all of the nodes at the same time. Even that team had thought we could do that, and then they remembered, in the middle of adding all the nodes, that we needed to do it in groups. That does take time. We were putting in something like 16 or 20 nodes, and we had to do it four at a time. We had already done the physical installation, all the cabling, and all of that portion. But when we started to add in the nodes, we had to do four, wait for that to finish, then do another four, and wait for that to finish.

I think that, with time, they may implement a system that queues the nodes up and continues to add them as it can. But that seems to be a problem common to other products in the same category. We also have Cohesity in our environment, which we don't use as a backup product, we use it strictly as a NAS, and it suffers from that same issue.

Our Rubrik setup took a few days, between getting network issues figured out on our side, getting the cable management figured out with our data center team, the physical installs, the configuration with the Rubrik partners, and then adding in those nodes four at a time until we had them all in.

We could have done it with less staff but we did want to make sure that all of us were aware of how the implementation worked, so we brought in all five of our team, two Rubrik partners, and two of our reseller partners, as well.

Maintenance of Rubrik takes the equivalent of two to three people. One person works on Rubrik pretty much all the time, and the other four of us just jump in as needed on little things here and there.

In terms of Rubrik users, in addition to the five of us who do administration, we've given out access to a few of our database groups so far, where there are 10 to 15 people.

What about the implementation team?

Our reseller was ASG at that time, now it's Sirius. Everything was fine with them. On the Rubrik side, we had an engineer and a sales engineer, and that worked really well.

What was our ROI?

With Rubrik, we have been able to allocate FTEs to the other areas. We could have eliminated them but we chose to reallocate them. As we've had people either retire or move on to something different, we've either not replaced some, or we've been able to replace some of them with lower-level staff, simply because of the ease of use of this product.

On the hospital side, the ROI is from the lower cost, less work to manage it, and the smaller footprint in the data center, which means less power and cooling.

What's my experience with pricing, setup cost, and licensing?

The pricing and licensing of Rubrik is better than products that we've had in the past. It was quite a bit cheaper than Commvault and NetBackup.

Which other solutions did I evaluate?

We worked through our VAR and evaluated anybody that could use the HCP that we have for archive storage. There weren't too many on the market that could do that. Rubrik was really the only solid option that we had at the time, other than Commvault and NetBackup, and we weren't too happy with the latter two because of how much they were costing at that time.

What other advice do I have?

We did physical PoCs in our environment and we did have Cohesity and Rubrik side-by-side, as well as NetBackup and Commvault. We did PoCs for moving to public cloud as well, for some of these services. The PoC with Rubrik stood out. 

Make sure that you work with your support team that's going to support you after your purchase and make sure that you're able to work with them well, before you pull the trigger on it. We like to build partnerships. When we have those partnerships, we're able to really rely on them for a long time.

I am a fairly new entry into the backup field. Before, we had Commvault and NetBackup, and when they were showing us how to use those, and trying to teach us some of the terms in the backup world, it felt like backup was a very niche piece of IT, and that there was a lingo and a language behind it. It seemed that there were definite things that people had experienced before that were common among all backup products, and things that they were left wanting or hating. With this new product, Rubrik, we walked into it blind, not being backup admins, and it made a lot of sense to us. And when we did bring in a backup admin, they said it was quite different to anything that they had worked on previously, and that it made more sense and that it was just quite a bit easier to manage.

Rubrik is something that everybody can understand fairly easily, and when we have given others access to it, such as the database teams, and we've let them run with it and see what they can do, they've been able to implement it really well. They've been able to figure out how to implement the tool in exactly the way that they wanted, whereas before there may have been limitations.

We haven't used the ransomware recovery at this point. We've got some protection in place, where backups are locked down and require additional effort to delete or change. We follow guidelines from our IT security team and Rubrik together. We just haven't seen a scenario yet where we've actually needed to use that.

We have used Rubrik's predictive search, although we don't use it too much right now. Mainly, the way that we've used it so far has been the traditional backup and restore, where we get tickets stating that a backup needs to be spun up and it's done automatically. Then, when somebody comes back later on and says, "Hey, we need this item restored," we're able to call them up and restore it with them on the phone, within a matter of minutes. We haven't really had to use the file search too much or a lot of the tools that they have available for us, just because the need hasn't been there yet.

When it comes to recovery, we usually spin it up and turn it over to the team that asked us to recover that data. The information and identity access management team had to spin one up recently. They said that they had a bad patch and wanted us to spin back to that morning. We did that, and it had lost some of the network settings and some of that stuff that they were used to getting. We spent about 15 to 30 minutes with them and everything was back exactly the way that it should be. But that was pretty much exactly the same with other products that we had so it wasn't something new for us.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
BP
Senior Architect, Cloud Infrastructure at a tech vendor with 501-1,000 employees
Real User
Top 20
Provides a single solution to recover data

Pros and Cons

  • "It provides us a good holistic view of everything that we have backed up so far. It also provides us all the recovery points. If we look at an an object that has been backed up, we can tell how many retention copies it has, how far we can go, and recover any data, if needed."
  • "It does not have an easy deployment. The deployment is not something that just anybody can go in and deploy."

What is our primary use case?

We use it to back up and protect our virtual environments. We do Active Directory, SQL, file server, and some application-level backups. We do Office 365 and SharePoint backups too.

We back up everything locally first, then store it in the cloud.

How has it helped my organization?

It provides us a single solution to recover data. We haven't had a lot of restore requests. There have been a couple of them where we had to restore a full server and the work involved was very minimal. We were able to run a quick restore job. We did not really run into any challenges doing this. Every once in a while, we receive requests for files or emails that people have lost and those files are in SharePoint or OneDrive. We have the ability to restore it within 30 days directly from the portal. But if it's beyond the 30 days, we use Commvault to restore data and that has worked absolutely fine.

It has helped us drive innovation and accelerate growth. From a growth perspective, this storage solution has clearly helped us. The option for us to save the data in the cloud is very valuable for the organization.

The solution has helped our admins minimize the time they spend on backup tasks and other projects. We have an administrator who manages the system; I'm more of an architect. With the previous product, the administrator had to go around and look for a lot of information before he could find out whether a backup had completed successfully, and the reporting structure was not that great. With Commvault, he gets daily emails on the jobs that have completed. If there are any issues with jobs, he can drill down directly to the details and find out why a job failed or did not run on time, since there may be other dependencies that prevent the job from running.

What is most valuable?

All the features used right now have been very valuable. The biggest advantage for us right now is the ability to back up our Office 365 mailboxes along with all our SharePoint and OneDrive data. Because all our users mostly store all their data in these locations, it is important for us that we back up all these services.

It provides us a good holistic view of everything that we have backed up so far. It also provides us all the recovery points. If we look at an object that has been backed up, we can tell how many retention copies it has, how far back we can go, and recover any data, if needed.

What needs improvement?

I have written a lot of different reviews about the product, and every time, I have mentioned that the user interface is not user-friendly; the admin portal is not user-friendly. It definitely takes a lot of understanding to get familiar with the portal. However, once you are completely familiar with it, it is pretty easy to manage. It's not something where you can jump in right away and know exactly what is going on. There are a lot of places you need to look around in to understand how the backups are configured.

The administration of the solution could be simplified. This would really make the administrator's life easier.

For how long have I used the solution?

We've been using Commvault since early 2017. We are in our third year right now.

What do I think about the stability of the solution?

It has been pretty stable. We have not run into a situation where our systems were compromised. However, we have run into system corruption issues and were back in business within about two hours.

Right now, we only have one primary administrator for this product, with a couple of backup administrators in case he or anyone else is on vacation. We have other people who have been given a good knowledge transfer on how this product works. This way, if either of them is unavailable, there is somebody who can do the job.

What do I think about the scalability of the solution?

It is definitely scalable. We are able to scale as we need, whether we need to add any compute, storage, or additional licenses for user accounts. All of that is very flexible when it comes to scalability. If we want to add more users to our Office 365 backup, we can quickly get new licenses from the vendor with a quick turnaround time. As soon as we get that, we are able to add those users' data to our backups. We generally have a buffer. However, sometimes if there are a lot of new hires, then we need to go in and secure new licenses.

We are using more space than we were previously using, mainly because we did not have a lot of flexibility with the previous product. There was not much room for us to store data for a long duration, and at the same time, we did not have enough on-premises storage capacity to leave the data around for a long time. Therefore, data growth has been significant over the past years because we have been able to store data. We leave the data on-premises for 30 days, then move it to the cloud. Most of the data is now in the cloud, but even on-premises we are now able to back up a lot of systems that we were not able to back up earlier. We have seen significant storage growth on long-term systems because we are now backing those up and the data is there.

It is only my team managing the system. We back up all the data that the end user has. If they need help restoring their data, then one of my team members will go in and restore the data. The user has no direct interaction with the product.

It is pretty extensively used right now; it is backing up all the data that we have. We are looking into some additional features, though we might not start looking at those until later this year. Commvault has come out with some new features and we want to look into those. The first two years were a stabilization period for us: getting the product implemented, ensuring everything was stabilized, all the important data was being protected, and data was being stored in the necessary places. We also looked at all the trending over those two years to ensure we had enough capacity in all areas to maintain the server and storage space. Now, we are at the stage where we are pretty comfortable with how we can scale this product when needed. We are looking into the additional features that Commvault has, and we will start looking into these towards the end of the year.

How are customer service and technical support?

Tech support has been good. I haven't had a lot of interactions.

Every once in a while when we have to make any architectural changes to the deployment, my administrators reach out and consult with me. We sometimes engage with the support team or Professional Services team. Their responses have been pretty good so far. We have never had a situation where we were kept waiting for days to get an answer or solution.

Which solution did I use previously and why did I switch?

I used Commvault from 2011 to 2012 at one of my previous organizations, but I only worked with it for a short period of time before moving on to other things. That experience helped me when we deployed the newer version of Commvault. At the time, it was Commvault Simpana; now, it's only Commvault. That experience helped us understand its requirements and how we could set it up.

We were using Dell EMC Data Protection Rapid Recovery. It was neither flexible nor scalable, and it did not meet all our requirements. It wasn't able to back up physical and cloud environments, and it could not store data in the cloud, so we had to look at options to store and protect our data. We were also unable to back up our Office 365 and SharePoint data. Commvault has made it seamless for us to not only protect our data but also store it in the cloud.

We can set up proper retention policies now. If we need to keep data for, say, one year, seven years, or 10 years, we can store it accordingly. We can then apply a policy to that storage so that, after the retention period, we don't have to go in and do a manual cleanup.
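As a generic illustration of those retention semantics (plain Python invented for illustration, not Commvault's actual API), each copy carries a retention period and ages out automatically:

```python
# Generic sketch of retention-policy semantics: each backup copy carries a
# retention period, and anything past its expiry becomes eligible for
# automatic cleanup, with no manual deletion pass. Illustration only; this
# is not Commvault's API.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class BackupCopy:
    name: str
    created: date
    retention_days: int  # e.g., 365 for 1 year, 2555 for ~7 years

    def expired(self, today: date) -> bool:
        return today > self.created + timedelta(days=self.retention_days)

copies = [
    BackupCopy("finance-db-2018", date(2018, 3, 1), 365),
    BackupCopy("hr-records-2015", date(2015, 6, 1), 2555),
]

today = date(2020, 1, 1)
for c in copies:
    print(c.name, "expired" if c.expired(today) else "retained")
```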

How was the initial setup?

The deployment took about a month. The planning was another month or two.

We wanted to ensure that we were able to protect all our systems and data not protected up until then. At the same time, the strategy was that we did not want to incur a lot of significant costs on just deploying the solution itself. Plus, we did not want a lot of administrative overhead while maintaining the servers and application environment. We did not want that routine daily administration activity. We wanted to set up the environment and not worry about it until something went wrong.

What about the implementation team?

We had assistance from the vendor, so they did assist with the setup. The system was completely new for some of my team members who had never worked with it before, so it did take them a lot of time to get familiar with it. Those administrators are able to manage the system very well now compared to what they were able to do in their first year when they had to frequently go back to the vendor and ask them, "How do we do this? How can we do that?"

We worked directly with the vendor. The vendor's Professional Services team was able to assist us with the deployment.

What was our ROI?

After deploying the Commvault solution, we are saving four to five hours a week.

We have been able to save on infrastructure costs by not storing long-term data on our own systems. Instead, we have been able to store it on cheaper cloud storage. There is a lot of savings there if you consider all the costs involved in storing data on an on-premises server storage system, plus the maintenance and the support that goes into maintaining that system.

I have seen a return on investment.

What's my experience with pricing, setup cost, and licensing?

There is a bit of cost involved in signing up for the entire solution. It's not a cheap solution.

Which other solutions did I evaluate?

We did evaluate Veeam and Cohesity. 

At the time, Cohesity was not mature, as they were fairly new to the business. We had a few meetings with them, and after our discussions, we found that the solution might not meet all our requirements. For example, physical server backup was one important feature that was not supported at the time.

Veeam is a platform that I have extensively worked with in all my previous roles at other companies. So, we do have a Veeam implementation that is used by a different team in our organization. They manage all their backups through Veeam. Our plan was not to use the same solution in all environments. We wanted to use different solutions within the entire organization for exposure to multiple data protection solutions. Also, Veeam did not support physical machine backups and only supported virtual machine backups.

In my previous deployment, there were no cloud features; the cloud was not popular and everything was on-prem. Even when we moved to Commvault, Veeam lacked a lot of features, which is why Commvault seemed to be the best choice for us.

We already had our cloud solution in place. After understanding that Commvault does work with that cloud provider and it would help us store our data, we did not have any further concerns about cloud vendor selection. The cloud environment and Commvault environment were set up around the same time. We moved to the cloud at the end of 2016, and then, in early 2017, we moved to Commvault. So, everything worked out well.

What other advice do I have?

Go through an assessment first before selecting the product. Every business is different and has different requirements. Do a complete assessment with the data protection partner, whether it's Commvault, Veeam, Cohesity, or someone else. Go through a proof of concept, if possible. Mind your business requirements, RPO, and RTO. Look at your budget too. This should help you to make the right decision.

The biggest lesson would be to have a proper data protection strategy for the organization. There were a lot of things that we had to implement after implementing the product. It's better if you completely understand your business requirements, then implement this product.

I would give it a rating of an eight (out of 10) because it does not have an easy deployment. The deployment is not something that just anybody can go in and deploy. It needs a good level of understanding for deployment. Once you deploy, you need to be familiar with how to administer the product, how to set up all the reporting, etc. Just navigating the admin interface is not really that easy.

Which deployment model are you using for this solution?

Hybrid Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
JB
Principal at a venture capital & private equity firm with 51-200 employees
Real User
Top 20
Is especially flexible for tape environments

Pros and Cons

  • "If you are running on a legacy tape environment NetBackup is best."
  • "The flip side about NetBackup is that it is not policy-based."

What is most valuable?

In terms of most valuable features, I like the fact that if you have a bunch of backups, NetBackup gives you the ability to have one master server and multiple media servers. What that means is you can have a bunch of sites that all have tape libraries, with one master server that controls all the functionality of all the jobs. You don't have to deploy a standalone NetBackup solution at each site. You can just deploy a media server for the tape library at each site and have one master server that controls all the jobs.

What I also like about NetBackup is that it supports backing up to tape environments, as opposed to most solutions like Rubrik and Cohesity, which don't really support them. If you are running on a legacy tape environment, NetBackup is best. Most of the people I've seen using NetBackup have a tape environment.

What needs improvement?

The flip side about NetBackup is that it is not policy-based. For example, Rubrik is a policy-based type of app: when you create a backup job with it, say you have 30 servers in that backup, you can make one policy and apply it to them all. NetBackup doesn't do that. With NetBackup, you need to create a backup job for each server you want to back up and for each server you have. That is the only thing I don't really like about it. With Rubrik or Cohesity, you can create one policy and apply it to many servers at one time; with NetBackup, you create a backup job for each server, and that takes more time.

If they can improve on policy-based backups, that would be great.
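To make the one-job-per-server complaint concrete, here is a sketch of scripting around it with NetBackup's admin CLI. The bppolicynew and bpplclients commands live under NetBackup's admincmd directory on a master server, but the exact arguments vary by version and should be treated as assumptions; hostnames are hypothetical:

```python
# Sketch: working around a one-policy-per-server model by looping over hosts
# and shelling out to NetBackup's admin CLI. bppolicynew and bpplclients are
# found under /usr/openv/netbackup/bin/admincmd/ on a master server; exact
# argument forms vary by version, so treat these invocations as assumptions.
import subprocess

ADMINCMD = "/usr/openv/netbackup/bin/admincmd"
servers = ["app01", "app02", "db01"]  # hypothetical client hostnames

for host in servers:
    policy = f"bkup_{host}"
    # Create one policy per server (the repetitive part the review criticizes)...
    subprocess.run([f"{ADMINCMD}/bppolicynew", policy], check=True)
    # ...then add that single server as the policy's client.
    subprocess.run(
        [f"{ADMINCMD}/bpplclients", policy, "-add", host, "Linux", "RedHat"],
        check=True,
    )
```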

For how long have I used the solution?

I have been using Veritas NetBackup for about 10 or 11 years.

I think that the last version I used was version six; they're probably up to eight or 10 now. But really, not much has changed, maybe some additional features since the last time I saw it.

The last time I looked online, I didn't really see much difference from a feature perspective since I began using it. The GUI looks a little different, a little cleaner, but functionality-wise, I didn't really see much change.

What do I think about the stability of the solution?

In terms of stability, no problem. Like I said, if you have multiple tape libraries, you can have one master server with a bunch of media servers, so you can have tape libraries scattered at different sites. The one master server you set up controls all the job functions. When you log into it, you can kick off jobs and pause jobs, and for different sites, you can keep a job turned off. It controls all the functions and all the backup jobs for all the multiple sites. The master server doesn't actually do any backups itself; it's responsible for kicking off the jobs that get things backed up.

How are customer service and support?

Their customer support is not bad. I don't have any issues with technical support; it's okay.

How was the initial setup?

The initial setup is very easy. Commvault, by contrast, has a much more convoluted setup. I've never used Commvault myself, but from colleagues I know who use it, you need professional services because it's so convoluted to set up. NetBackup is not that convoluted. Don't get me wrong, Commvault is a very nice application; once it's running, it's a good product. But from being exposed to Commvault a little, I like NetBackup better. The downside to NetBackup is just that it's not policy-driven; that's the only thing I don't like about it.

What's my experience with pricing, setup cost, and licensing?

Pricing depends on the number of licenses and on the number of servers you have. It varies based on the number of servers that you're trying to back up.

What other advice do I have?

My advice to anyone considering Veritas NetBackup is to validate your site layout first. If you have multiple sites that each run a tape library and media server, it's better to set up one master server that controls them all. If you only have one site, you can set up a single server that acts as both the media server and the master server.

That would be my suggestion: validate whether you are going to be backing up more than one site, so you can set it up properly the first time. If you only have one site to back up, set it up as a combined master/media server. If you have multiple sites, set them up as media servers and then set up one master server that controls all the functions for the remaining sites. That is really the biggest thing, to be honest with you.

You might also want to confirm whether it supports backing up to Azure, AWS, or Google Cloud, because some people want to do long-term archiving in the cloud. Some people back up to tape, and some back up to disk; I just don't know whether NetBackup supports backing up to cloud providers from a long-term archiving perspective.

On a scale of one to ten, I'd say NetBackup is an eight. It's pretty strong, and I don't have any other problems with it. I would say it's definitely a strong eight; it's a pretty good product.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
AS
Senior Engineer, Disaster Recovery at a financial services firm with 1,001-5,000 employees
Real User
Rock solid, does its job, but needs better UI, deduplication, and ease of doing certain things

Pros and Cons

  • "Scheduling is valuable. It does a good job of backing up, and it does a good job of restoring. Nobody has got a problem with that. The agents are well supported."
  • "When you get down to doing certain things, such as somebody wants a particular file restored, the process by which you do that is stupid. You kind of have to know exactly where to look for in order to find it. Even on older backup products that I've used, I didn't have that kind of problem. If we were looking for a file with a particular kind of a name, the solution would find that file anywhere irrespective of where it resides within the backup system. So, we didn't have to know the name of the specific server, the specific timeframe, almost all the characters of the file name, and all kinds of data in order to find a file. In Avamar, we got to know these details. We've gone around and around with them on that, and their attitude seems to be that it is working just fine. There is nothing for them to improve. The organizational system of other products that I'm working with, such as Zerto and Cohesity, seems to be centered around the tasks that you would most commonly do and want to do, as opposed to we've laid it out in a really neat technical hierarchy."

What is our primary use case?

It is our main backup system while we're in the middle of switching over to Cohesity.

What is most valuable?

Scheduling is valuable. It does a good job of backing up, and it does a good job of restoring. Nobody has got a problem with that. The agents are well supported. 

In terms of functionality, it is rock solid. It does its job.

What needs improvement?

The UI is a complete mess. It is graphical, but it might as well be a CLI considering how difficult it is to work with. It takes a dedicated person and a significant amount of time to manage backups within the company. It really shouldn't be that hard.

When you get down to doing certain things, such as somebody wanting a particular file restored, the process by which you do that is stupid. You have to know exactly where to look in order to find it. Even on older backup products that I've used, I didn't have that kind of problem. If we were looking for a file with a particular kind of name, the solution would find that file anywhere, irrespective of where it resided within the backup system. We didn't have to know the name of the specific server, the specific timeframe, almost all the characters of the file name, and all kinds of other data in order to find a file. In Avamar, we have to know these details. We've gone around and around with them on that, and their attitude seems to be that it is working just fine and there is nothing for them to improve. The organizational system of other products that I'm working with, such as Zerto and Cohesity, seems to be centered around the tasks that you would most commonly want to do, as opposed to "we've laid it out in a really neat technical hierarchy."

There should be greater granularity in the way it stores backups. The reason we're using things like Zerto, and going to Cohesity in the DR environment (and this applies to backups as well), is that we need a recovery point objective with some granularity, such as every 15 minutes, every half hour, or every hour, in case of a disaster recovery scenario, ransomware scenario, etc. With Avamar, we're pretty much limited to our once-a-day backup every 24 hours, or however we schedule it. In most cases, we don't do anything different for basic backups, but it seems very difficult within Avamar to capture an image of a system every so often, or at least an incremental point of reference or an RPO point.

The other thing is that the way it locks files seems to make systems unavailable while it is running the backup. So, we have to very carefully schedule our backups after hours or over periods when transaction volume is low. With the other products we have, we don't have this problem. I certainly don't have it with Zerto: I've got a recovery point every few seconds, and it doesn't seem to take a lot of storage to do that. Storage is a big thing for us. It is very expensive, and that's always an issue, so things like deduplication would be really nice to have.

For how long have I used the solution?

I have been using this solution for at least six years.

What do I think about the stability of the solution?

It is rock solid. We don't ever have any problems with backups being lost or anything like that.

What do I think about the scalability of the solution?

All of the data in the company is used by one person or another, so there are a couple of thousand users.

How are customer service and support?

Their technical support is excellent; we've never had any problem dealing with Avamar in terms of technical support. That said, we've had some nasty instances where they've not been able to drill down on things and support their own product.

How was the initial setup?

I've only been with the company for about five years, and it was present when I came on board.

What other advice do I have?

I would rate Dell EMC Avamar a six out of 10. It is a pretty basic backup system in terms of features. It does its job. However, its UI is just ridiculous.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.