Get our free report covering Dell EMC, Hewlett Packard Enterprise, and other competitors of Veritas NetBackup Appliance. Updated: January 2022.

Read reviews of Veritas NetBackup Appliance alternatives and competitors

Director of Technology at a financial services firm with 201-500 employees
Real User
Top 20
Reliable, easy to automate tasks, good PowerShell support
Pros and Cons
  • "We have a great success rate for backups with Rubrik and because of the ease of automating tasks, we also run periodical restores to check the quality of the backups."
  • "I would love to be able to just get from the dashboard to a file that I need, or a system that I need."

What is our primary use case?

We are a financial company and we have redundant data centers, with a VMware Metro Cluster staged between the two locations. We have Rubrik running in our data center and it is used for backing up our on-premises infrastructure.

We keep the backup of the environment on-premises for two weeks, just to be able to restore in case we lose or corrupt part of the virtual infrastructure. We also send copies of some of the data into the cloud for long-term archiving because we're under a regulatory requirement to store certain parts of the business data for up to seven years.
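
This two-tier retention scheme (two weeks on local disk, up to seven years in cloud archive) can be sketched as a simple classification function. The tier names, thresholds, and function below are illustrative only, not anything from Rubrik's actual product or API:

```python
from datetime import datetime, timedelta

LOCAL_RETENTION = timedelta(days=14)         # on-prem copies kept two weeks
ARCHIVE_RETENTION = timedelta(days=7 * 365)  # regulatory archive: up to seven years

def snapshot_tier(taken_at: datetime, now: datetime, archived: bool) -> str:
    """Classify a snapshot: 'local' while within on-prem retention,
    'archive' if it is an archived copy still inside the seven-year window,
    otherwise 'expired'."""
    age = now - taken_at
    if age <= LOCAL_RETENTION:
        return "local"
    if archived and age <= ARCHIVE_RETENTION:
        return "archive"
    return "expired"
```

For example, a three-day-old snapshot is still restorable locally, while a month-old snapshot is only available if it was among the data copied to the cloud archive.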

At this point, our environment is probably close to 90% virtual. We use physical servers for market data and essentially, there is nothing to back up on those systems because there's no data that's worth saving there. Should one of these servers fail, we just put a new one in place. It would be deployed, including the operating system, and it would start processing market data for us. We consider these as compute nodes and there is no persistent data on them.

We are highly virtualized, so Rubrik is used to back up most of the VMs. We run VMware ESXi for our VMs and, application-wise, we are a Microsoft shop, so we back up SQL Server, Exchange Server, and Microsoft file shares. We also back up a lot of business data that is stored outside of those servers.

How has it helped my organization?

The biggest impact of Rubrik is that it gives us confidence in our backups: we know the data is there and that the ability to restore is there. It provided the safety net we needed to deploy faster, and it played a great role in convincing developers and operations to do rapid releases. The old way, without reliable backups, meant we had to wrap every release in a solid recovery plan in addition to the rollout itself. Now we have confidence in the backups and can release faster.

Rubrik has saved us time managing backups in general, and for recovery testing, the SLA policies have greatly reduced the time we spend babysitting backups. This is simply because Rubrik put thought into designing the system the right way. Instead of adding a server by creating jobs and then creating schedules on top of the jobs, you just drop it into an SLA and all of the legwork is done for you, so adding systems is easier.

Because they are SLAs, I don't need to go through the job log to analyze why a job failed, or look into the impact of a failure. I know that if a machine is protected within its SLA guidelines, I will get an alert if there is a problem with it, and that alert means someone needs to take a look. Essentially, it has eliminated a lot of repetitive babysitting steps that don't produce any business value.
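
The SLA idea described here can be illustrated in a few lines of Python. The data shapes are hypothetical; the point is simply that compliance is computed from each machine's latest snapshot time rather than parsed out of per-job logs:

```python
from datetime import datetime, timedelta

def out_of_sla(machines: dict, sla_window: timedelta, now: datetime) -> list:
    """Return the machines whose newest successful snapshot is older than
    the SLA window: the only ones that need human attention."""
    return sorted(
        name for name, last_snapshot in machines.items()
        if now - last_snapshot > sla_window
    )
```

Everything inside the window is silently fine; only violations surface as alerts, which is exactly the shift away from reading job logs that the reviewer values.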

We have never had an incident where Rubrik saved us from downtime, but it's certainly great to have this additional safety net of a reliable backup solution. Everything we have is redundant, so even if there is a hardware failure, another piece of hardware kicks in. We don't rely on Rubrik for daily disaster recovery, but we do rely on it for business continuity: if, for whatever reason, both of our data centers lose power or internet, or become inaccessible, Rubrik will help us rebuild the environment.

As we moved away from our previous solutions, using Rubrik has improved our overall efficiency. These days, we rarely have to do anything with the systems. Most of the time when we have to resolve an issue with the backup it's because the target system has become unavailable or has been taken offline for maintenance. It may also be the case that we have another restore request. These are the only two reasons that a restore might be delayed. It is not the same as we had with NetBackup, where we had to update the agent and software. We don't have to do anything of that nature. Backup is now pretty much gone from our weekly schedule.

What is most valuable?

The most valuable features are reliability and programmability. We have a great success rate for backups with Rubrik and because of the ease of automating tasks, we also run periodical restores to check the quality of the backups.

Rubrik makes it really simple to automate the restore task, which is important because I don't care about the backup. I care about the restores, and Rubrik did a great job of assuring restore reliability.

Our time spent on recovery testing has improved simply because we're able to automate it. It saves us between two and four hours per week, whether it is simply adding a new machine or going through the logs and seeing what failed.
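
A periodic restore check of the kind described above can be automated along these lines. Here `restore_vm` is a purely hypothetical stand-in for whatever restore call your backup tool exposes; the verification logic, comparing restored data against a checksum recorded at backup time, is the interesting part:

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest used as the recorded fingerprint of a backup."""
    return hashlib.sha256(data).hexdigest()

def verify_restores(backups: dict, restore_vm) -> dict:
    """Restore each sampled backup (via the caller-supplied, hypothetical
    restore_vm callable) and compare the restored bytes against the
    checksum recorded at backup time. Returns name -> True/False."""
    results = {}
    for name, recorded_sum in backups.items():
        restored = restore_vm(name)
        results[name] = checksum(restored) == recorded_sum
    return results
```

Run on a schedule, a harness like this turns "do our backups actually restore?" into a report that only needs attention when something comes back False.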

We don't do recovery on a daily or weekly basis. We receive between two and four recovery requests per month. Because it is mostly manual stuff, it is comparable to the old system if we're talking about restoring something within a two-week timeframe when it's still on disk. However, if we're talking about restoring from the cloud versus restore from tape, the timeframes are not even on the same level. This is simply because we use the offsite storage for tapes, so sometimes the restore task from tape will take weeks.

The web interface is easy to navigate and pleasant to look at.

The SLA-based policy has simplified our data protection operations tremendously. It goes back to caring about restores instead of backups, and the fact that it allows me to easily drop systems into the SLAs greatly reduces the amount of time it takes to set up the system for backup.

It allows me to create a protection policy and while it's running, I know that the systems that I've assigned to that policy are being protected accordingly. If that is not happening then I get an alert or a notification telling me that the systems are outside of the protection horizon. It's a great approach.

The archival functionality is impressive. Just by eliminating reliance on the tape technology, it's greatly improved the rate of successful restores that we were able to perform. In two and a half years, I can't remember a case where we couldn't locate data that was backed up using Rubrik.

We have not needed to use the ransomware recovery function but I know that Rubrik backups are essentially immutable. Even if an intrusion does happen, we'll be able to restore the data quickly.

I have used the rapid restore functionality and I noticed that on many occasions, I was able to mount a virtual machine or database on the Rubrik cluster itself. So, I know its high-speed connectivity options are excellent and support VMware well.

With the previous version, we had to do some Python scripting because the API was better and more developed than the PowerShell support. However, with the new version, it seems that PowerShell covers all of the functionality that we need, which is great, especially because we are a Windows shop.

The restore success rate is very good. I don't care so much about improving the time spent on the restore; rather, it's the success rate. At this point, we have a 100% success rate, which was definitely not the case with any prior system that I've used.

What needs improvement?

I would love to be able to just get from the dashboard to a file that I need, or a system that I need. Right now, I believe you can search by system name and it will take you to the system. It would be great to reduce the number of clicks needed to do a restore, perhaps going to a system and then the file, or directly to the file, with continued integration with PowerShell.

As we move into the cloud, and in addition to Polaris, I would love to see a future where I can back up pieces of the cloud, perhaps ARM templates or Azure Active Directory, from the cloud to on-prem. I know it sounds counter-intuitive, but as the cloud becomes more popular and used daily, I would love to have a single pane of glass providing visibility into the backups.

For how long have I used the solution?

I have been using Rubrik for approximately three and a half years.

What do I think about the stability of the solution?

In addition to great recovery rates, we haven't had any unforeseen outages with Rubrik itself due to hardware failure or anything like that. Even Rubrik software upgrades are non-disruptive: because there are multiple nodes in the chassis, Rubrik never actually goes down during an upgrade and continues running backups on the nodes not directly affected by it.

What do I think about the scalability of the solution?

This is a well-designed product, so adding more space is as easy as adding another chassis. It is great functionality because adding more storage is like adding more bandwidth and more connectivity. That's a great design.

We are a fairly small organization, so probably five to six people have access, and there are probably three or four who use it. Rubrik administration is centralized with our IT systems and IT help desk teams, so it's all managed internally. There is enough flexibility to extend it to developers and give certain people rights to certain restores. It's just that the workload is so light that it doesn't make sense for us to constantly keep training users on how to operate it. By the time they need to perform a restore, they'll have forgotten it all and have to come back to the help desk anyway.

If in the next version of Rubrik they announce new ways to back up Azure or Office 365, I would jump on the offer. The main driver for us to purchase additional Rubrik units would be if we were constrained on storage. As of right now, we have sized it correctly so we have plenty of storage to satisfy the SLAs for the data that they need to store in-house.

If our data consumption or data storage requirements increase, and we suddenly need more storage for data protection, we will look into adding units. At this point, we are properly sized for the performance.

How are customer service and technical support?

Our experience with technical support has been great. We had a couple of questions in the beginning, when we first interacted about two and a half years ago. You would email them and get somebody helpful without having to exchange many emails.

They will do the upgrades for you, so lately, probably over the past year, the only interaction we have had with support is when we needed to do an upgrade. It's a great experience where you just open up a support ticket with them, they open up the secure remote channel, and they come in to complete the upgrade.

Which solution did I use previously and why did I switch?

Prior to Rubrik, we used Veritas NetBackup for the backup and CommVault for the tape system. We switched to Rubrik because our success rate was poor. The restore rate was horrendous, especially when we had to go to the tape system; it was hovering around a 75% success rate.

How was the initial setup?

The initial setup is extremely straightforward. We went through the exercises and provided the configuration details that were required from us; I think they were as simple as IP configuration information. Then, once all of the racks and wires were assembled, the Rubrik technician showed up, configured the system, and it was all done in probably less than 20 hours in total.

Because we're virtual, it meant that our implementation strategy was simple. Essentially, once the Rubrik system had been configured, all we had to do was to point it to VMware vSphere vCenter servers and from there, it automatically picked up all of the virtual machines that we had. Then, it was just a question of assigning them to SLAs and removing them from the old backup system. That final piece is not included in the 20 hours because 20 hours was just to get the Rubrik running. But, it was extremely easy to integrate.

What about the implementation team?

We worked directly with Rubrik to help with the deployment.

For maintenance, you really don't need more than two persons, and that's for redundancy purposes. You can have a single person manage terabytes of backups.

What was our ROI?

By now, we have probably made the money back in reduced support costs. Beyond that, we don't value this type of product by how much money it produces. Simply, the compliance requirements come with steep fines and other repercussions if they are not adhered to. Because this product gives us assurance in our ability to restore data if needed, it satisfies our compliance requirements.

What's my experience with pricing, setup cost, and licensing?

You get what you pay for. Rubrik was probably the most expensive solution but in the long run, it's justified by the value of the data that it protects. We were able to make a case that it's a good investment.

They have a very straightforward pricing model.

Which other solutions did I evaluate?

We evaluated a couple of other solutions, but Rubrik offered the best appliance. We looked at products from Veeam and the present solutions from Veritas and others, but it looked like Rubrik was the most modern solution.

What other advice do I have?

I am familiar with the predictive search but we're not employing it. Usually, when we need to restore, we have to restore the whole machine or we know the location of the file or data that was deleted.

We've considered using the Polaris SaaS-based framework as we're looking into leveraging the cloud a little bit more. Polaris is definitely on our radar, but we're not using it in our day-to-day operations.

I would rate this solution a ten out of ten.


Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Technical Presales Consultant/ Engineer at a wholesaler/distributor with 10,001+ employees
Real User
Top 5 Leaderboard
Flexible and stable with good technical support
Pros and Cons
  • "If you have a Windows machine at home and you'd like a backup, you can always download their free edition, plug in an external hard disk, and do a full backup of your laptop."
  • "Veeam cannot back up or restore ASM disks as of right now. It could be something they could offer in the future."

What is our primary use case?

The solution is an agent. It can back up Windows Server versions from 2008 SP1 up to 2019. It integrates with Veeam Backup & Replication, which enables you to restore to the cloud or back up a cloud workload as well.

It has always done image-based backups. The main reason Veeam came up with the Agent was that they had mainly focused on virtual environments before, and that was a major challenge for their existing customers. Not everyone goes fully virtual. Virtualization has many advantages; however, parts of an architecture will remain physical if an organization's architecture is properly designed.

The challenge was that Veeam has many VMware customers who run a Microsoft-style infrastructure on top of VMware. They were using shared virtual disks, and part of the limitation was that VMware snapshot-based full backups cannot handle shared virtual disks.

They created the Agent for these two use cases: to back up Windows VMs that VMware cannot snapshot, and to back up physical servers for any customer who would like to do so. At the end of the day, the main point is that Veeam is closely paired with VMware.

We also get more customers who want to back up physical machines themselves, so Veeam created this agent as an image-based backup for Windows operating systems.

What is most valuable?

The solution is very stable.

They have already invested a lot of R&D and mainly they're supported on most of the Windows scenarios, even the custom-tailored parts. 

The solution allows for full integration. I can deploy the Agent from the backup server and manage the backups all from the backup server, or I can use the Agent standalone and discard the backup server. In terms of restoration, I can restore the entire machine, specific file systems, application items, et cetera.

Restoring to the cloud is pretty flexible.

Technical support is quite good.

The initial setup has improved quite a bit from version 4 to 5. You don't need to worry about downtime.

If you have a Windows machine at home and you'd like a backup, you can always download their free edition, plug in an external hard disk, and do a full backup of your laptop.

They just released Version 5 alongside Veeam Backup & Replication v11, and it came with some amazing features, such as the snapshot backup and restore features. Before, the agent was only able to back up through the network; now it can even back up through the SAN fabric, depending on the customer environment.

What needs improvement?

I can't think of an area where the solution is lacking in features. Overall, it's quite good, and more money is going into R&D already.

That said, there are many things they can develop for the Linux agent. The Windows agent is quite complete.

Some customers have Oracle databases, and Veeam does support backup of Oracle databases. However, there is a specific setup where Oracle databases are configured with ASM, Oracle's storage management layer, and Veeam cannot back up or restore ASM disks as of right now. It could be something they offer in the future.

Some customers in the industrial sector are using legacy systems, systems that are very old and running on Windows 2000, Windows NT, or Windows 2003, and they're physical, not even virtual. Veeam is pretty weak here, as Veeam supports Windows Server 2008 SP1 and above. For anything before that, the Veeam Agent for Windows will not be able to back up anything.

I don't expect Veeam to release agents for older editions of operating systems; Veeam itself is a relatively new company. On the other hand, if you look at the competition, like Veritas, you'll see a company that has been well established in the market for a long time, and therefore has agents that can back up the older versions of Windows.

For how long have I used the solution?

I've used the solution ever since its first release, since Version 1. That has been since around 2015 or 2016 or so. It's been a few years at this point. 

What do I think about the stability of the solution?

The solution is very stable. There are no bugs or glitches. It doesn't crash or freeze. It's reliable.

What do I think about the scalability of the solution?

I'm not sure about the scalability. With the agent, it should be pretty simple: you install it on each and every server, and then you back up. You can also deploy it from the backup server; either way, the Agent runs on every operating system you want to back up. It can also be used for PC environments: laptops, Windows 7, Windows 8, and Windows 10.

It's hard to count the number of users our clients have. There are many.

How are customer service and technical support?

Technical support is amazing. They're quick to respond and accurate in terms of the support that's provided. You really don't worry about getting stuck in limbo. Regarding the Veeam Agent for Windows team, they're amazing. They are responsive. You don't have to wait a long time for a reply. They are very good.

How was the initial setup?

The difficulty of deployment depends on the version; they have improved the latest one. Before, on Version 4, while the installation itself was straightforward, the problem was a prerequisite: .NET Framework 4.7.2. This framework is not installed on all Windows operating systems by default. It is free and you can download and install it at any time; however, it requires the server to be restarted, and that means planned downtime.

Fortunately, they fixed that in Version 5 by changing the framework dependency to 4.5.2, so there is no more forced downtime.
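
For context, the prerequisite check involved here boils down to comparing the installed .NET Framework's "Release" registry DWORD against a minimum. The mapping below uses Microsoft's published values for a few versions (the exact DWORD can differ slightly between Windows builds), and the registry read itself is omitted from this sketch:

```python
# Published .NET Framework "Release" registry values (a subset; on some
# Windows builds the installed value differs slightly from these).
RELEASE_TO_VERSION = {
    379893: "4.5.2",
    461808: "4.7.2",
    528040: "4.8",
}

def meets_minimum(installed_release: int, required_release: int) -> bool:
    """The Release DWORD increases monotonically with version, so a simple
    comparison answers 'is the prerequisite satisfied?'."""
    return installed_release >= required_release
```

So a machine already on 4.7.2 satisfies the relaxed 4.5.2 requirement without any install, which is why the Version 5 change removes the forced restart.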

The time it takes to deploy relies on various factors, however, assuming the prerequisites are all ready, it takes about 15 minutes.

What about the implementation team?

I can handle the installation myself with support from a field-certified architect so there is no downtime.

What's my experience with pricing, setup cost, and licensing?

Veeam did a major revamp of their licensing schema over the past three years. A lot of changes happened within a very short timeframe, and they almost seemed irrational at first. However, they have now arrived at a great licensing model, called the Veeam Universal License.

The Veeam Universal License is meant to be a portable license. The problem before was that a customer would buy Veeam for VMware, then move to Nutanix, and their license would no longer be valid. Veeam created this license so that you can use it for the Agent for Windows, the Agent for Linux, VMware, Hyper-V, or Nutanix.

Everything in the Veeam Availability Suite, which encompasses Veeam Backup & Replication and the Veeam Agents for Windows, Linux, and even Unix and Solaris, falls under the Veeam Universal License, as do the plug-ins you might need for an SAP HANA environment or for SAP on Oracle backups. They have unified it all under one licensing model that works everywhere, which makes things a lot simpler.

The only problem is that the license comes in bundles. It's not sold individually; it comes in bundles of 10 instances. Each instance is enough for a physical server.
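
Sizing against that bundle constraint is simple ceiling arithmetic. This helper is just an illustration of the math, not part of any Veeam tooling:

```python
import math

BUNDLE_SIZE = 10  # Veeam Universal License is sold in bundles of 10 instances

def bundles_needed(physical_servers: int) -> int:
    """One instance covers one physical server; licenses are only sold in
    whole bundles, so round up."""
    return math.ceil(physical_servers / BUNDLE_SIZE)
```

The rounding is where small shops feel the constraint: 11 servers already require two bundles, i.e. 20 instances.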

The pricing is moderate. The solution falls in the middle of a few different options: it's not the cheapest, however, it's not the most expensive either. Comparing it with Veritas or Commvault or Rubrik or Cohesity, for example, Veeam will definitely be a lot cheaper, as it is software with a very straightforward licensing model. However, solutions like Acronis will always be cheaper.

Which other solutions did I evaluate?

I've compared the solution with various products in terms of pricing. From my experience, to compare Veeam for example, to a Commvault or Veritas, Veeam is much cheaper. However, if you compare Veeam with Acronis or these small-time vendors, Veeam is very expensive.

What other advice do I have?

We are a distributor, not a reseller.

I'd rate the solution at an eight out of ten. It's a great product. The only drawback is the support for the ASM disks and the support for legacy Windows operating systems.

I'd recommend the solution to other companies. It's a straightforward solution. I am mostly a Linux guy, therefore, we're not as focused on Windows. In general, it has worked like a charm. It has helped me do backups and restores and it has never failed me in that respect, except for the ASM disk issues.

Disclosure: My company has a business relationship with this vendor other than being a customer: Distributor
System Analyst at CtrlS Datacenters Ltd
Real User
We can immediately recover and enable services on a standby server
Pros and Cons
  • "We have multiple workloads, including SQL, Oracle, SAP HANA, especially Sybase, as well as file systems, VMs, and Exchange mailboxes. Commvault provides very good support for them."

What is our primary use case?

It is used as an enterprise backup solution.

How has it helped my organization?

We have a very good disaster recovery solution with Commvault. We have a standby CommServe where logs are shipped every five minutes. If something goes wrong, we are immediately able to recover and enable services on the standby server. We are achieving a 99.9 percent SLA with respect to backups.

It also helps ensure broad coverage through the discovery of unprotected workloads. We can easily identify them in the Web Console, where we can see which of our servers are not protected. If a server has had no backup for more than one day, we get a report, and we have also enabled alerts. Those features are really helpful in identifying and addressing issues.

Commvault minimizes the time we spend on backup tasks. I only have to check the health of the CommCells, and the rest of the time I can work on other tasks.

What is most valuable?

It's a very good enterprise backup solution with multiple features. We are able to back up multiple databases, and we don't need scripts to schedule any kind of local backups. We have a direct plugin for Commvault so that we are able to back up any of our databases or application systems, like SharePoint. Commvault also enables backup for PaaS services that are deployed in the cloud.

Commvault provides encryption mechanisms with the latest standards that our customers are looking for.

The CommCell console is very good and user-friendly. I have experience with NetBackup, HPE DP, and Backup Exec, but I'm really comfortable with Commvault. The console makes it easy to identify exactly what we need to see. For example, there are multiple categories. If a backup needs to be performed on multiple systems, we just configure one client or one group and we can push the agent straight away. That's a very good feature that helps us complete tasks on time.

We can integrate our multiple CommCells into a single Web Console as well, which helps us easily identify how many servers are being backed up and how many are not. We can see the SLA and the success rate, and even though our customer is huge, we can give them access so they can easily see the SLA and the success rate of the backups. Commvault also recently launched the Command Center, which is very good and user-friendly and enables us to deploy server plans.

For disaster recovery, there is a feature called Live Sync, and we are also able to export disaster recovery backups to the cloud. If something goes wrong, we are immediately able to recover and continue with business.

In addition, if something goes wrong and a backup fails, we can trace the issue using the log. Each service has a different log that clearly gives us information about the exact reason for the issue and what needs to be done.

We have multiple workloads, including SQL, Oracle, SAP HANA, especially Sybase, as well as file systems, VMs, and Exchange mailboxes. Commvault provides very good support for them. We perform 70 to 80 restores on a monthly basis. Over the past year, I have faced challenges with only one or two restores; all the rest completed successfully. If we get stuck, we can easily use the logs to identify the issue and make changes to the configuration. So we are approaching a 100 percent success rate with respect to restoration.

Commvault has very good procedures for performing backups and restores of SAP HANA databases. As far as I know, no other technology provides an option to perform a restore directly from the backup tool itself; normally you have to log in to HANA Studio to perform a restore. Commvault enables this by default, so we are able to do the restoration from the Commvault GUI itself.

Commvault also provides workflows. If you want to decommission a client's systems, there is a workflow where we just have to add the client, and we can easily complete the task. This is useful when we are informed that a customer is moving out; it would otherwise be a huge task for the backup team to retain the backups for the required period of time and to release the license. Running this workflow makes our work very simple and reduces our effort. The multiple workflows really help us complete tasks quickly.

Overall, it has great features that fulfill our customers' expectations.

For how long have I used the solution?

I have been using Commvault for the past seven years.

What do I think about the stability of the solution?

The stability is very good. If you don't follow the metrics and best practices recommended by Commvault, or if you mess up the setup, you may face challenges. If you follow the best practices, it's a very good, stable solution.

What do I think about the scalability of the solution?

We can easily expand our licenses and deploy Commvault for our customers, which keeps our business going. From a scalability point of view, I haven't seen many challenges.

How are customer service and support?

We get very good support from Commvault if we run into any kind of production issue. They maintain a very good SLA for critical and high-priority tickets. We are really satisfied with their support.

For example, say something in production is down or multiple customers are impacted. SAP won't join a call to help us resolve the issue, but if we have a critical CommServe-level issue where multiple backups may fail, Commvault will jump on a call and help us address it. In reality, if something is wrong with an SAP system or an OS is not functioning, a customer may not be able to work at all, whereas without a backup they can continue their business; they just cannot recover things if something goes wrong. Still, if we raise a high-severity ticket, Commvault support will definitely jump in based on the criticality. They can help us within an hour, at most.

How was the initial setup?

In one of my older projects, deployment of Commvault was simple, but the current one is complex because it's a very big environment. It depends on the client's environment and requirements. If you have a shared mechanism and the customer has multiple firewalls on their end, it will be very difficult to integrate multiple customers into one CommCell. But with a single project and a dedicated customer in a single domain, it is very easy.

What's my experience with pricing, setup cost, and licensing?

Compared with other backup technologies, Commvault is a bit more costly, but we are satisfied with the support, the services, and the features that we get with Commvault.

We are using the capacity-based license and have a total of 10 CommCells. In the license file, we can clearly see what kinds of workloads can be backed up.

Which other solutions did I evaluate?

Veeam is very useful for Windows-related platforms, but we chose Commvault because it does not have any platform dependency when it comes to backups. It has multiple features enabling us to back up Oracle RAC, Exchange DAG, IBM Lotus Notes, and any type of PaaS service.

Commvault has a clear-cut, three-tier architecture, whereas the others, with the exception of NetBackup I believe, follow a two-tier architecture. With Commvault, every backup load is taken care of by the MediaAgent, and administrative tasks are taken care of by the CommServe. Even the CommServe does not need to be sized as large as comparable components in other solutions.

    What other advice do I have?

    With respect to security, in particular regarding ransomware, Commvault has built-in features that we enabled to protect our environment. As for storage targets, every storage array has its own built-in mechanism for encrypting or securing the data. It is very difficult for a third party to enter and to make any kind of use of the storage arrays.

    Storage cost completely depends on the retention the customer is looking for. If they have, say, a 1 TB system and they're looking for more than two months' retention, there will be a lot of storage utilization. But we do get a very good deduplication ratio, close to 90 percent for file system backups, which helps us to minimize the cost.
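    As a rough illustration of how retention and deduplication interact, here is a back-of-the-envelope sketch (the function and figures are hypothetical for illustration, not Commvault's actual sizing formula):

    ```python
    # Back-of-the-envelope backup storage estimate. Illustrative only:
    # real sizing depends on change rates, compression, and policy details.
    def stored_capacity_gb(source_gb, retained_copies, dedup_ratio):
        """Physical capacity needed after deduplication.

        Logical data is the source size times the number of retained copies;
        deduplication removes the stated fraction of that logical data.
        """
        logical_gb = source_gb * retained_copies
        return logical_gb * (1 - dedup_ratio)

    # Hypothetical example: a 1 TB (1024 GB) system retained for 60 daily
    # copies at a 90 percent deduplication ratio.
    print(round(stored_capacity_gb(1024, 60, 0.90)))  # ~6144 GB physical
    ```

    The point of the sketch is that retention multiplies the logical data, while the deduplication ratio determines how much of that ever lands on disk.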

    Overall, if your infrastructure is very good, once you configure Commvault there are no challenges. It will function well. If something is wrong with the network, obviously, any backup technology will end up with issues. But Commvault is very good.

    Which deployment model are you using for this solution?

    Disclosure: My company has a business relationship with this vendor other than being a customer: Premium Partner
    Regional Director at value data
    Top 10 Leaderboard
    Advanced data management, backup and recovery with unique solutions that lead in the field
    Pros and Cons
    • "This product has unique capabilities that lead the product category in test data management, cloning and recovery."
    • "While the product does support various databases, the company needs to make more of an effort to support N-minus-one compatibility."

    What is our primary use case?

    There are many use cases for Actifio. It is most popular for the test data management feature in the solution, and the capabilities for instant backup and recovery.  

    How has it helped my organization?

    It is very important for any organization to be able to complete their development cycle very quickly, and this product helps us to get a proper output for our projects. We have to be able to do data provisioning for development. Usually, this can take a lot of time waiting for approvals and for the DBAs to provision the data. And that data, when provisioned, will not really be point-in-time or instantaneous.

    Because of the Golden Copy concept, Actifio makes it easy to provision data via test data management, and I can provision data for various test developers. I do not need an additional license for adding users because of this concept of unified copies. I also save a lot of space: if I have a 20 Gig database and I want to provision it to 20 users, I would otherwise need to make 20 copies of it on the appliance. Using Actifio, I keep just one physical copy and, with the Golden Copy concept, developers work on their own individual copy of the data. It is a virtual copy, and that offers a lot of advantages to DevOps teams.
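    The space saving described above can be sketched with a toy calculation (hypothetical numbers; this only illustrates the golden-copy idea, not Actifio's actual accounting):

    ```python
    # Toy comparison of full physical copies vs. one golden copy with thin
    # virtual copies (copy-on-write style). All numbers are hypothetical.
    def full_copy_storage_gb(db_gb, users):
        # Every developer gets a complete physical copy of the database.
        return db_gb * users

    def golden_copy_storage_gb(db_gb, users, delta_gb_per_user):
        # One physical golden copy, plus only each developer's changed blocks.
        return db_gb + users * delta_gb_per_user

    print(full_copy_storage_gb(20.0, 20))        # 400.0 GB with full copies
    print(golden_copy_storage_gb(20.0, 20, 0.5)) # 30.0 GB with a golden copy
    ```

    With full copies, storage grows linearly with the number of developers; with a golden copy, it grows only with the changes each developer makes.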

    Similarly, for analytics, nobody runs analytics on real-time data in the production environment. So let us say a CEO wants data from last night. You can actually take a copy of the data from that point in time and run analytics on it. It is very close to getting real-time analytics. This kind of solution is something that our customers appreciate. Actifio's test data management and copy data management features help clients achieve their requirements and work with virtually up-to-the-minute data.

    What is most valuable?

    The feature that is the most valuable is the Golden Copy feature. It is an incremental-forever backup and you can restore from any point in time. The restores are also very fast. I also like the database cloning. It is good for use with test and development environments.

    What needs improvement?

    There are quite a few ways the product can be improved, actually. In terms of recovery, the solution is very good, but it can still be better. Specifically, they can improve the ease of recovery and the speed. Also, the range of platforms they support should be enhanced. They have recently started supporting SAP S/4HANA, SAP MaxDB, and MongoDB. This is good, but there are more platforms that could be supported. Obviously it will take time before support for more platforms appears in the solution, but I think this is one important addition that they need to make.

    For how long have I used the solution?

    We have been using this solution for a little more than a year.  

    What do I think about the stability of the solution?

    It is very stable. The support services are also very good. They have a support center in India, and I think that will help us and our clients to continue to get very good support within our time zone.  

    What do I think about the scalability of the solution?

    Actifio is scalable. It can actually be scaled into petabytes, so that again is a very, very good option from a scalability point of view. The solution also supports numerous storage technologies, like object storage. It is really getting better over time.

    Mostly our customers are enterprise customers who have a lot of need to use real-time data for test data management. It only takes about an hour to get the test data. One of our customers is the leading stock exchange in India. You are not going to stop services to collect test data in that situation, and the number of people that need data on the stock exchange is pretty broad. They need it to make predictions and to select the best investments. Actifio is also being used at one of the leading banks for DevOps purposes, and by one of the leading pharmaceutical and life sciences companies. We have quite a nice assortment of customers spanning various industries using Actifio, and most of them are enterprise organizations.

    Which solution did I use previously and why did I switch?

    I am an ex-IBM employee and I have been working with IBM products for a long, long time. So we have been selling IBM Tivoli Storage Manager in the past, which is now Spectrum Protect. Also, I was selling Veeam in the past. I have a lot of experience with these competing products.  

    The main difference between Actifio and other solutions that I had experience with is that legacy solutions are very different than the new breed of solutions. Actifio has actually reshaped backup methodologies. They have a new concept and add a new dimension to the way data centers were backed up in the past. They are different if you compare the product to the likes of ScrumWorks or Tivoli, or even Veritas for that matter. They have improved things a lot and they have forced others to change. Many of the other products are still backing up to tape. Actifio backs up straight to disk or to the cloud. So, that is a big change and a big advantage.

    How was the initial setup?

    I found that the initial setup was very, very easy. In fact, the infrastructure support was also very, very flexible as Actifio supports almost any kind of infrastructure. It is essentially agnostic to any type of hardware. We can provision it on a virtual machine — it comes as a virtual machine in fact. Because of that, it is easy to deploy on any VMware or any virtual platform.  

    My team has deployed the real-time solutions for customers with Actifio in as short a time as one or two days.  

    What other advice do I have?

    The way Actifio is shaping up is really good, especially in the way they are opening new avenues for data management, like test data management, database cloning, and the capability of duplicating databases instantly. Also, it is one of the only solutions used by SAP for their own S/4HANA test data management. This one product enables a lot of very good solutions for backing up SAP S/4HANA, either onto our own systems and appliances or into the cloud. There are these advantages and quite a few others this solution encompasses.

    On a scale from one to ten where one is the worst and ten is the best, I would rate this product around eight out of ten. To make it a ten, or at least a nine, they have to expand the capabilities of the product even further. They need more coverage for newer platforms and for any of the new databases coming out. For example, MongoDB is getting very popular and needs better support within Actifio. Support for the current versions of databases is good, but some backward, N-minus-one compatibility would be welcome, because not many customers are always current and on the bleeding edge. We had an incident, in fact, where a customer very much wanted to use Actifio, but part of their deployment was on a current version of their database and another part was on an older version that was simply not supported. So if they would make the effort to support the current versions and maybe N-minus-one, that would be great to help bring legacy players on board so they can also use Actifio.

    Which deployment model are you using for this solution?

    Hybrid Cloud
    Disclosure: My company has a business relationship with this vendor other than being a customer: reseller
    Sufyan Khan
    CTO at New Horizon Computers
    Real User
    Top 5 Leaderboard
    Great deduplication, robust hardware, and extremely reliable
    Pros and Cons
    • "The hardware can operate in high temperatures, in case of any disaster."
    • "First-time integrations are difficult in NetWorker. NetWorker software needs to be simplified. It's very complex."

    What is our primary use case?

    We primarily use the solution for backup purposes.

    What is most valuable?

    The deduplication on the solution is great. It's elaborate, but companies already understand it.

    The hardware can operate in high temperatures, in case of any disaster. 

    The scalability of the equipment and components of Data Domain is outstanding. We never go without Data Domain if we are talking about backup solutions. We always go with Data Domain.

    It's a very reliable and consistent product. 

    There's no match with any other product. It's outstanding. The performance of the hardware is improving day by day and new models are coming with more scalability.

    What needs improvement?

    In terms of backup software, NetWorker is very, very good. However, it is very complex. If you want to expand a NetWorker deployment, you usually need to add more plug-ins. If you install Titanium, you can directly back up all virtualized data through vCenter. Using Titanium, you can also back up Oracle data directly or to the data lake.

    First-time integrations are difficult in NetWorker. NetWorker software needs to be simplified. It's very complex. 

    The technical support has gotten worse as of late. They could work to make it much better.

    One feature which IBM has, and which I am unable to see in Data Domain (or on their roadmap), is utility-based backup solutions. There are no utility-based Data Domain models.

    For how long have I used the solution?

    We've been using the solution for more than five years now.

    What do I think about the stability of the solution?

    The solution is extremely stable. You just need to install it and then you can basically forget it. Our users have never complained about the performance and never complained about the consistency or reliability. There aren't any bugs or glitches at all. It's a very solid product.

    What do I think about the scalability of the solution?

    The scalability is excellent and continues to improve with each new release. That said, a company needs to buy assuming a future scale, as it is physical hardware. If you only buy four terabytes, it's hard to just jump to 16.

    How are customer service and technical support?

    In terms of support, Dell EMC support was outstanding. Right now, we've observed some changes in support. It's not as good. Whenever we log a ticket, we do not get immediate support. This has happened a number of times now. If I don't have a senior system engineer available at my company, and I have a server issue, or a backup suddenly stops due to some application restriction, I have problems. I've had a few incidents just this year.

    While the support is excellent, the delays we have experienced are off-putting. From 2015 until now, we didn't really experience any support issues; the delays are kind of new. Support is perhaps limited in our region. However, beyond the delays, the service we get, and the advice, is excellent.

    Which solution did I use previously and why did I switch?

    Banks will often have Lenovo or IBM. I always prefer, and my company always prefers, Data Domain. It's scalable and robust and far superior.

    We've used Avamar as backup software. However, we find NetWorker has more features.

    What's my experience with pricing, setup cost, and licensing?

    The price point is high. That said, they're competing with other products like Veritas, so in that sense, the price is good, or, maybe typical. And yet, whenever we are competing with some legacy type of product, the price difference is huge.

    It's a premium product and so the pricing is somewhat expected.

    In terms of scaling, the price is difficult to pin down. If you buy 4 terabytes, it's not so easy to upgrade to 16. You cannot simply add shelves, so users should scale up from the start to avoid hitting limits down the road. The standard Data Domain sizing is typically 32 terabytes.

    What other advice do I have?

    We are doing multiple POCs right now. We have already installed it in the banks of Pakistan. We are providing solution architects, support, deployment, and residential services. 

    In early deployments, you need to size the backup solution properly and then design it, create it, and deploy it. Early on, we always make sure to have a delivery statement. After delivery, my engineers stay aligned with Dell EMC so that we can always patch the required IPs and keep those backups running as well.

    We always deploy the bank's backup software, and we'll do the patches for every requirement. Sometimes we use NetWorker and Avamar. We've deployed Data Domain using Veeam as well. 

    We always do on-premises deployments, which are mandatory in our country. In Pakistan, you can't have any cloud-based deployments. Compliance and government rules are slowly changing. In a few years, we may also do cloud deployments as well.

    That said, wherever we deploy Data Domain virtualization, it is a step towards cloud-based deployment as it's a virtual machine. You can always send the data from a virtual machine to any cloud, including Microsoft Azure, IBM, and ECS.

    If a company is looking for an implementation partner, it's best to go with a tier-one partner - someone who is Gold, Silver, or Titanium. They will understand the product fully.

    I'd rate the solution ten out of ten. There's no comparison between Data Domain and any other product. It's solid and consistent. We'll continue to use it. They are excellent.

    Which deployment model are you using for this solution?

    Disclosure: My company has a business relationship with this vendor other than being a customer: Partner