All-Flash Storage Arrays Performance Reviews

Showing reviews of the top-ranking products in All-Flash Storage Arrays that contain the term "Performance".
NetApp AFF (All Flash FAS): Performance
SG
Storage Engineer at Missile Defense Agency

We don't use NetApp AFF for machine learning or artificial intelligence applications.

With respect to latency, we basically don't have any. If it's there, then nobody knows it and nobody can see it. I'm probably the only one who can recognize that it's there, and I barely catch it. This solution is all-flash, so the latency is almost nonexistent.

The DP protection level is great. You can have three disks fail and you would still get your data; I think it takes four failures before you can't access data. The snapshot capability is there, which we use a lot, along with the other really wonderful tools. We depend very heavily on the DP because it's so reliable. We have not had any data become inaccessible because of any kind of drive failure since we started, and that includes our original FAS8040. This is a pretty robust and reliable system, and we don't worry too much about the data that is on it. In fact, I don't worry about it at all because it just works.

Using this solution has helped us by making things go faster, but we have not yet implemented some of the things that we want to do. For example, we're getting ready to use the VDI capability to virtualize systems. We're still trying to get the infrastructure in place. We deal with different locations around the world, and rather than shipping hard drives that are not installed in PCs and then re-installing them at the main site, we want to use VDI. With VDI, we turn on a dumb system that has no permanent storage; the user runs the application and we can control it all from one location, in our data center. That's what we're moving towards. The reason for the A300 is that its latency is low enough for us to do large-scale virtualization. We use VMware a tremendous amount.

NetApp helps us to unify data services across SAN and NAS environments, but I cannot give specifics because the details are confidential.

I have extensive experience with storage systems, and so far, NetApp AFF has not allowed me to leverage data in ways that I have not previously thought of.

Implementing NetApp has allowed us to add new applications without having to purchase additional storage. This is true, in particular, for one of our end customers who spent three years deciding on the necessity of purchasing an A300. Ultimately, the customer ran out of storage space and found that upgrading the existing FAS8040 would have cost three times more. Their current system has quadruple the space of the previous one.

With respect to moving large amounts of data, we are not allowed to move data outside of our data center. However, when we installed the new A300, the moving of data from our FAS8040 was seamless. We were able to move all of the data during the daytime and nobody knew that we were doing it. It ran in the background and nobody noticed.

We have not relocated resources that have been used for storage because I am the only full-time storage resource. I do have some people that are there to help back me up if I need some help or if I go on vacation, but I'm the only dedicated storage guy. Our systems architect, who handles the design for network, storage, and other systems, is also familiar with our storage. We also have a couple of recent hires who will be trained, but they will only be used if I need help or am not available.

Talking about application response time, I know that it has improved since we started using this solution, but I don't think the users have actually noticed. They know it is a little snappier, but I don't think they understand how much faster it really is. I noticed because I can look at System Manager or Unified Manager to see the performance numbers. I can see where the numbers were higher before, in places where there was a lot of disk I/O. We had a mix of SATA, SAS, and flash, but now we are one hundred percent flash, so the performance graph is barely moving along the bottom. The users have not really noticed yet because they're not putting a load on it. At least not yet. Give them a chance, though. Once they figure it out, they'll use it. I would say that in another year, they'll figure it out.

NetApp AFF has reduced our data center costs, considering the increase in the amount of data space. Had we moved to the same capacity with our older FAS8040, it would have cost us four and a half million dollars, and we would not even have had new controller heads. With the new A300, it cost under two million, so it was very cost-effective. That, in itself, saved us money. Plus, the fact that it is all solid-state with no spinning disks means that electricity consumption is lower. There may also be savings in terms of cooling in the data center.

As far as worrying about the amount of space, that was the whole reason for buying the A300. Our FAS8040 was a very good unit that did not have a single failure in three years, but when it ran out of space it was time to upgrade.

View full review »
DM
IT Director at a legal firm

This product was brought in when I started with the company, so it's hard for me to say how it has improved my organization. I would say it has improved the performance of our virtual machines because we weren't using flash before this; we were only using Flash Cache. Stepping up from Flash Cache with SAS drives to an all-flash system made a notable difference.

Thin provisioning enables us to add new applications without having to purchase additional storage. Virtually anything we need to get started with is going to be smaller at the beginning than what the salespeople who sell our services tell us. We're about to bring in five terabytes of data; due to the nature of our business operations, that could happen over a series of months or even a year. We get that data from our clients. Thin provisioning allows us to use only the storage we need, when we need it.

The solution allows the movement of large amounts of data from one data center to another, without interrupting the business. We're only doing that right now for disaster recovery purposes. With that said, it would be much more difficult to move our data at a file-level than at the block level with SnapMirror. We needed a dedicated connection to the DR location regardless, but it's probably saved our IT operations some bandwidth there.

I'm inclined to say the solution reduced our data center costs, but I don't have good modeling on that. The solution was brought in right when I started, so in regards to any cost modeling, I wasn't part of that conversation.

The solution freed us from worrying about storage as a limiting factor. In our line of business, we deal with some highly duplicative data. It has to do with what our customers send us to store and process on their behalf. With block-level deduplication and compression, redundant storage due to business workflows doesn't penalize us on the storage side. It can make a really big difference there. In some cases, some of the data we host for clients gets the same type of compression you would see in a VDI-type environment. It's been really advantageous to us there.

View full review »
MS
Infrastructure Team Lead at a pharma/biotech company with 51-200 employees

NetApp helped us with its ease of deployment and ease of use.

The solution's data protection and data management are also easy.

AFF has improved our response time by about 30%.

We have enough storage, especially with the enhanced deduplication and compaction. It is good to be able to have a multitude of environments without having to worry about deploying more space. We always have a good amount of space. We also have multiple performance tiers, with different layers for slower and faster storage.

View full review »
CJ
Sr Storage Engineer at a financial services firm with 1,001-5,000 employees

Coming from a financial background, we are very dependent on performance. Using an all-flash solution, we have a performance guarantee that our applications are going to run fine, no matter how many IOPS we drive.

We use NetApp for both SAN and NAS, and this solution has simplified our operations. Specifically, we use it for SAN on VMware, and all of our NFS storage is on NAS. They are unified in that it is the same physical box for both.

This solution has not helped us to leverage data in new ways.

Thin provisioning has allowed us to add new applications without having to purchase additional storage. This is one of the reasons that we purchased NetApp AFF. We almost always run it at seventy percent utilized, and we only purchase new physical storage when we reach the eighty or eighty-five percent mark.
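
To make that threshold practice concrete, here is a minimal sketch of the purchasing rule; the seventy and eighty-five percent figures come from the review itself, while the function names and capacities are purely illustrative:

```python
# Minimal sketch of the utilization rule described above.
# The 70%/85% thresholds come from the review; everything else is illustrative.

def utilization(used_tb: float, capacity_tb: float) -> float:
    """Fraction of physical capacity currently consumed."""
    return used_tb / capacity_tb

def needs_expansion(used_tb: float, capacity_tb: float,
                    buy_threshold: float = 0.85) -> bool:
    """True when it is time to purchase more physical storage."""
    return utilization(used_tb, capacity_tb) >= buy_threshold

if __name__ == "__main__":
    print(needs_expansion(70.0, 100.0))  # False: the usual ~70% mark
    print(needs_expansion(86.0, 100.0))  # True: past the 85% trigger
```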

I find that we do have better application response time, although it is not something that I can benchmark.

As a storage team, we are not worried about storage as a limiting factor. When other teams point out that storage might be an issue, we tell them that we've got the right tools to say that it is not.

View full review »
PY
Storage Administrator at an energy/utilities company with 1,001-5,000 employees

We have been happy with the performance and it has not given us any issues.

I like the simplicity of data protection and data management. We use snapshots for our FAS recovery, and we use SnapVault for our backups.

NetApp definitely simplifies our IT operations by unifying services. We only use this solution on-premises, but with NAS, we don't need Microsoft Windows to create a share. It's all on our NetApp platform. I like it because we do not have to switch.

I wouldn't say that we have reallocated resources that were previously dedicated to storage operations, although it does give us time to do other things.

We have used NetApp to move large amounts of data between data centers. It has made it easier for us, and RPOs are shorter because of it.

With respect to the response time for applications, I can definitely say that it has improved, although we have not done any benchmarking. I perceive the improvement through monitoring the applications.

This solution is pretty expensive, so I'm not sure whether it has reduced our data center costs.

NetApp has helped eliminate storage as a limiting factor in our business. My customers are happier because they have no issues with performance or accessing their data.

View full review »
Storage Analyst at a financial services firm with 10,001+ employees

Our primary use case for NetApp AFF is performance-based applications. Whenever our customers complain about performance, we move their data to an all-flash system to improve it.  

We have our own data center and don't share our network with others.

View full review »
KS
Systems Engineer at a tech services company with 51-200 employees

The performance of NetApp AFF allows our developers and researchers to run their models and tests within a single workday instead of spreading them across multiple workdays.

For our machine learning applications, the latency is less than one millisecond.

The simplicity of data protection and data management is standard with the rest of NetApp's portfolio. We leverage SnapMirror and SnapVault.

In my environment, currently, we only use NAS. I can't talk about simplifying across NAS and SAN, but I can say that it provides simplification across multiple locations, multiple clusters, and data centers.

We have used NetApp to move large amounts of data between data centers, but we do not currently use the cloud.

Our users have told me that the application response time is faster.

The price of the A800 is very expensive, so our data center costs have not been reduced.

We are using ONTAP in combination with StorageGRID for a full data fabric. It provides us with a cold-hot tiering solution that we haven't experienced before.

Thin provisioning has allowed us to over-provision existing storage, especially NVMe SSD, the more expensive disk tier. Along with data efficiencies such as compaction, deduplication, and compression, it allows us to put more data on a single disk.

Adding StorageGRID has reduced our TCO and allows us to better leverage the faster (and more expensive) NVMe SSD tier, hot-tiering to that and cold-tiering to StorageGRID.
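
As a rough sketch of how the tiering and efficiency features the reviewer describes compound, the arithmetic below estimates effective capacity; all of the ratios and sizes are invented for illustration, since the review gives none:

```python
# Back-of-the-envelope sketch: data-reduction ratios plus cold-data
# tiering to an object tier such as StorageGRID. Figures are invented.

def effective_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Logical capacity that raw_tb yields at a given reduction ratio."""
    return raw_tb * reduction_ratio

nvme_raw_tb = 50.0     # expensive hot tier (NVMe SSD)
reduction = 3.0        # combined dedupe + compression + compaction (3:1)
cold_fraction = 0.6    # share of data cold enough to tier off

logical = effective_tb(nvme_raw_tb, reduction)
hot_resident = logical * (1 - cold_fraction)  # stays on NVMe
tiered_off = logical * cold_fraction          # lands on the object tier

print(f"{logical:.0f} TB logical: {hot_resident:.0f} TB hot on NVMe, "
      f"{tiered_off:.0f} TB cold on the object tier")
```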

View full review »
MB
Specialist Senior at a consultancy with 10,001+ employees

Our previous NetApp system was a SAS and SATA spinning-disk solution that was reaching end-of-life, and we were overrunning it. We were ready for an upgrade, and we stuck with NetApp because of the ease of cross-upgrading, as well as the performance.

View full review »
LR
Senior Data Center Architect at a financial services firm with 1,001-5,000 employees

There are little things that need improvement. For example, if you are setting up a SnapMirror through the GUI, you are forced to change the destination name of the volume, and we like to keep the volume names the same.

When you have SVM DR and you have multiple aggregates that you're writing the data to on the source array, when it does its SVM DR replication it will put the data on whatever aggregate it wants, instead of keeping the layout synced on both sides.

This solution doesn't help leverage the data in ways that I didn't think were possible before.

We are not using it any differently than we were many years ago, and we were already getting the benefits. What we are seeing right now is the speed, the lower latency, and the performance: all of the great things that we didn't have in years past.

This solution hasn't freed us from worrying about usage. We are already reaching the eighty percent mark, so we are worried about usage, which is why we are looking toward the cloud, moving to FabricPool with Cloud Volumes to tier off our snapshots into the cloud.

I wish that being forced to change the volume name would change, or not exist at all; then I wouldn't have to go to the command line to do it.
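
For context, the rename restriction the reviewer describes is a GUI behavior; driving the same operation programmatically lets you pick the destination path yourself. Below is a minimal sketch against the ONTAP REST API (ONTAP 9.6 or later); the hostname, credentials, and SVM/volume names are placeholders, and the create_destination option assumes a release recent enough to provision the destination volume for you:

```python
# Minimal sketch: create a SnapMirror relationship whose destination
# volume keeps the source volume's name, via the ONTAP REST API.
# Hostname, credentials, and SVM/volume names are placeholders.
import requests

CLUSTER = "https://dst-cluster.example.com"
AUTH = ("admin", "password")  # use vaulted credentials in practice

body = {
    "source": {"path": "svm_src:vol_data"},
    # Same volume name on the destination SVM; no forced rename here.
    "destination": {"path": "svm_dst:vol_data"},
    # Ask ONTAP to create the DP destination volume for us
    # (on older releases, pre-create the volume instead).
    "create_destination": {"enabled": True},
}

resp = requests.post(f"{CLUSTER}/api/snapmirror/relationships",
                     json=body, auth=AUTH,
                     verify=False)  # lab sketch; verify certs in production
resp.raise_for_status()
print(resp.json())
```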

View full review »
DC
Tech Solutions Architect at a healthcare company with 10,001+ employees

The most valuable feature is that it's fast. We do not use the solution for artificial intelligence or machine learning applications, but our overall latency is low. With our SQL Server and Oracle servers, compared to the older NetApp filers, like 7-Mode or the 8000s in cluster mode, or even the performance of Pure flash systems, you can't compare. We are seeing sub-millisecond latency, which is pretty nice.

The solution has enabled us to move large amounts of data from one data center to another (on-premise) without interruption to the business using SnapMirror.

The solution has improved application response time. Compared to the 3250s and 8000s, it has been night and day.

View full review »
KN
Sr Data Storage at an energy/utilities company with 10,001+ employees

The monitoring and performance need improvement. Right now we are using the Active IQ OnCommand Unified Manager, but we also have to use Grafana for performance dashboards, and I hope we will see improvement in Active IQ in terms of the performance graphs. They should also be more detailed.

In the next release, I'm looking for FlexGroup, because that is the next level of volumes, an extended volume beyond the FlexVol. In the FlexVol environment, we run into the capacity limitation of a hundred terabytes, and in oil and gas, like us, when the seismic data is too big, sometimes a hundred terabytes is not big enough. We have to go to the next level, which is FlexGroup, and I hope it has features like being able to transfer a volume into a FlexGroup. I think they said they will add a few more features to FlexGroup. I would also like to see the non-disruptive conversion from FlexVol to FlexGroup made easier, so we don't have to have any downtime.

View full review »
VK
Storage Architect and Engineer at United Airlines

The price to performance ratio with NetApp is unmatched by any other vendor right now.

View full review »
Manager at Pramerica

ONTAP has improved my organization because we now have better performance. We can scale up and we can create servers a lot faster now. With the storage that we had, it used to take a lot longer, but now we can provide the business what they need a lot faster.

It simplifies IT operations by unifying data services across SAN and NAS environments. We use our own type of SAN and NAS for CIFS and also for virtual servers. It's pretty basic. I didn't realize how simple it was to create storage and manage storage until I started using NetApp ONTAP. We use it daily.

Response time has improved. IOPS, from reading the storage to getting the data to the end users, are a hundred times faster than they used to be. When we migrated from 7-Mode to cluster mode and went to an all-flash system, the speed and performance were amazing. The business commented on that, which was good for us.

Data center costs have definitely been reduced with the compression that we get with all-flash. We're getting 20-to-one, so it's definitely a huge saving.

It has enabled us to stop worrying about storage as a limiting factor. We can thin provision data now and we can over-provision compared to the actual physical hardware that we have. We have a lot of flexibility compared to what we had before. 

View full review »
JC
Storage Architect at an energy/utilities company with 10,001+ employees

The most valuable features of the solution are speed, performance, and reliability.

View full review »
DB
Consulting Storage Engineer at a healthcare company with 10,001+ employees

The solution has improved application response time. We are using the All Flash FAS boxes, the AFF series, and our primary use case is around file shares. These aren't really that performance-intensive. Therefore, overall, response times have improved, but it's not necessarily something that can be seen.

From a sheer footprint-savings perspective, we're in the process of moving one of our large Oracle environments, which currently sits on a VMAX array taking up about an entire rack, to an AFF A800 that is 4U. Just in power, cooling, and rack-space savings, there have been gains.

I haven't seen ROI on it yet, but we're working on it.

View full review »
MD
Solution Architect at Advanced UniByte GmbH

The primary use case is for customers who need absolute low latency in their workloads. They need maximum performance in their virtualization and file storage environments.

View full review »
Senior Storage Engineer at Hyundai autoever

We have been using the FAS series products, and AFF is pretty similar to the FAS products, as it still runs the ONTAP operating system. We are using AFF because it comes with all-flash disks, which gives us better performance with a smaller footprint. We use it mainly to store our block and NAS data.

View full review »
PB
Storage Team Lead at a manufacturing company with 10,001+ employees

We primarily utilize AFFs for engineering VDI. We are using them to host VDI, and performance is the primary expectation of the AFFs. We are satisfied with the product.

View full review »
BC
Storage Manager at State of Nebraska

We like AFF because it has a very high reliability rate with very high performance. We are using it for top tier performance on application and virtual machine storage, as well as just being able to separate out SVMs for different security and network needs for all of our different customers across the state. 

We use the Snapshot feature to simplify backups for data protection. We set different policies that let our agencies choose what backup policy they want for their Snapshots. It's very simple. Users can be given the opportunity to look at previous versions directly from the Windows interface, or they can call or put in a ticket with our IT group if they need a larger system restore, because their data is protected with NetApp and replicated as well.
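
To illustrate the per-agency choice the reviewer describes, here is a plain-Python sketch of mapping agencies to Snapshot retention policies; the policy names, retention counts, and agencies are all hypothetical:

```python
# Illustrative sketch of per-agency Snapshot policy selection.
# Policy names, retention counts, and agencies are hypothetical.

POLICIES = {
    "standard": {"hourly": 6, "daily": 7,  "weekly": 2},
    "extended": {"hourly": 6, "daily": 30, "weekly": 8},
    "minimal":  {"daily": 3},
}

AGENCY_CHOICE = {
    "revenue": "extended",
    "parks":   "minimal",
}

def policy_for(agency: str) -> dict:
    """Return the retention schedule an agency opted into."""
    return POLICIES[AGENCY_CHOICE.get(agency, "standard")]

print(policy_for("revenue"))  # {'hourly': 6, 'daily': 30, 'weekly': 8}
print(policy_for("courts"))   # falls back to the standard policy
```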

View full review »
System Administrator at Bell Canada

With AFF, the benefit is that we have 27 data centers across the country, and we are able to standardize across all of them and do storage replication. The simplicity of being able to offload cold data to StorageGRID with the tiering layers that NetApp provides makes it easier for us to reduce labor hours, operations, and time wasted trying to figure out moving data. The simplicity of tiering is a big bonus for us.

In terms of data protection, we have been leveraging SnapMirror with Snapshots to be able to do cloning. For simplicity, we find it is able to SnapMirror to a DR site so that, in a disaster situation, we can recover, and the speed of recovery is much more efficient. We find it much easier than what other vendors have done in the past. For us, being able to SnapMirror a volume and restore it immediately with a few commands is more effective.

AFF has helped us in terms of performance, taking Snapshots, and being able to do cloning. We had a huge struggle with our backup system doing snapshots at the VM level. AFF has given us the flexibility to take a Snapshot more quickly.

View full review »
Systems Management Engineer at Linklaters

We were early adopters of the cDOT environment five or six years ago. In the early stages of deployment, we saw some challenges around cDOT. However, in the last two to four years, the product has matured incredibly. Ever since the introduction of ONTAP 9.x, we haven't seen any issues in terms of availability and performance.

We are upgrading to ONTAP, which will give us a data encryption level at an aggregate layer of the ONTAP environment. We are looking forward to that.

We are using SnapMirror and not seeing any issues. Let us hope it stays like that.

View full review »
CP
Unix Engineer at a healthcare company with 5,001-10,000 employees

We've been using AFF for file shares for about 14 years now. So it's hard for me to remember how things were before we had it. For the Windows drives, they switched over before I started with the company, so it's hard for me to remember before that. But for the NFS, I do remember that things were going down all the time and clusters had to be managed like they were very fragile children ready to fall over and break. All of that disappeared the moment we moved to ONTAP. Later on, when we got into the AFF realm, all of a sudden performance problems just vanished because everything was on flash at that point. 

Since we've been growing up with AFF, through the 7-Mode to Cluster Mode transition, and the AFF transition, it feels like a very organic growth that has been keeping up with our needs. So it's not like a change. It's been more, "Hey, this is moving in the direction we need to move." And it's always there for us, or close to being always there for us.

One of the ways that we leverage data now, that we wouldn't have been able to before (and we're talking simple file shares), is search. One of the things we couldn't do before AFF was really search those shares in a reasonable timeframe. We had all this unstructured data out there. We had all these things to search for and see: Do we already have this? Do we have things sitting out there that we should have, or that we shouldn't have? We can do those searches in a reasonable timeframe now, whereas before it took so long that it wasn't even worth bothering.

AFF thin provisioning allows us to survive. Every volume we have is over-provisioned and we use thin provisioning for everything. Things need to see they have a lot of space, sometimes, to function well, from the file servers to VMware shares to our database applications spitting stuff out to NFS. They need to see that they have space even if they're not going to use it. Especially with AFF, because there's a lot of deduplication and compression behind the scenes, that saves us a lot of space and lets us "lie" to our consumers and say, "Hey, you've got all this space. Trust us. It's all there for you." We don't have to actually buy it until later, and that makes it function at all. We wouldn't even be able to do what we do without thin provisioning.
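
The over-provisioning arithmetic behind that "lie" is simple enough to sketch; the numbers here are invented for illustration:

```python
# Sketch of thin-provisioning overcommitment, with invented numbers.

provisioned_tb = [20, 15, 30, 10]  # what each consumer was promised
physical_tb = 40                   # what was actually purchased
reduction = 2.5                    # dedupe/compression ratio observed

promised = sum(provisioned_tb)
overcommit = promised / physical_tb
effective = physical_tb * reduction

print(f"Promised {promised} TB on {physical_tb} TB physical "
      f"({overcommit:.1f}x overcommitted); at {reduction}:1 reduction "
      f"the tier behaves like roughly {effective:.0f} TB")
```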

AFF has definitely improved our response time. I don't have data for you — nothing that would be a good quote — but I do know that before AFF, we had complaints about response time on our file shares. After AFF, we don't. So it's mostly anecdotal, but it's pretty clear that going all-flash made a big difference in our organization.

AFF has probably reduced our data center costs. It's been so long since we considered anything other than it, so it's hard to say. I do know that doing some of the things that we do, without AFF, would certainly cost more because we'd have to buy more storage, to pull them off. So with AFF dedupe and compression, and the fact that it works so well on our files, I think it has saved us some money probably, at least ten to 20 percent versus just other solutions, if not way more.

View full review »
IT Manager at Universo Online

It has a really useful, friendly console.

The dedupe gives us more IOPS, more reliable equipment, and better performance.

View full review »
Head of Infrastructure, Network & Security Management at Vos Logistics N.V.

We are using this product for performance and growth.

View full review »
BS
IT Manager at a wholesaler/distributor with 201-500 employees

The stability and performance over the years have been good. In the seven years I've had it, it has totally crashed twice on me. The stability is pretty damn good. You have to admit that.

View full review »
LN
Solutions Consultant at a financial services firm with 5,001-10,000 employees

It would be better if they just improved the performance of the system.

View full review »
Dell EMC XtremIO: Performance
MS
Manager of Customer Services with 1,001-5,000 employees

The most important thing for the system engineer is to check if there is latency in the IOPS for any run. You cannot measure the number of IOPS or whether or not it is overloaded; you cannot measure anything like this in EMC. Most solutions, especially HP, improved our failover performance with our database and servers. Most of our servers are HP, and we now use EMC only for backup.

One thing that should be improved is the reporting and monitoring tools. It should offer real-time monitoring for storage, IOPS, latency, etc.

View full review »
SP
G. Manager- Technical Services with 51-200 employees

The most valuable features are data compression and in-line deduplication. 

The performance is good, which is important.

View full review »
SolidFire: Performance
JR
CTO at a tech services company with 1-10 employees

The most valuable feature is the performance, as well as how you manage performance on the system.

View full review »
AS
Presales Engineer at Tech Data Corporation

I have experience with Pure Storage, and NetApp is the better option. The software is similar, and Pure Storage has better performance, but NetApp is easier to scale up and scale out.

View full review »
Manager IT at a tech services company with 201-500 employees

I'm also a PC guy for my company, and in Pakistan there is no dedicated specialist for each client or each specific job, so as IT specialists we are doing a very tough job. I have to look after the solution, the technical stuff, and also the deployment; I personally work in all three areas. I have to, because it's my job as the head of IT at my company. We are resellers, so we actually deliver solutions to our customers. Regarding SolidFire, it's a very good storage solution when you are looking at a software-defined data center.

SolidFire provides seamless performance across your storage system when you need to scale up. Other software-defined data center solutions do not provide that much scalability.

If a customer doesn't want to learn a lot of software and only wants to learn one piece of software, and does not want to learn a storage system, that's where SolidFire comes in, because it's software-defined and really good for software-defined data centers (SDDC) and virtualization.

What I mean is, if a customer doesn't need a centralized storage system but does need a data center solution that is capable of being an agile, software-defined storage system, then they should choose SolidFire; but if they need a big, centralized SAN, they shouldn't choose SolidFire.

View full review »
Presales Engineer at a tech services company with 10,001+ employees

The solution is primarily a hyper-converged solution, and the hyper-converged solution with NetApp HCI addresses the most common, generic workloads, as well as workloads around VDI. It's primarily for everything around performance, around software like CAD suites, and around scientific computation.

View full review »
Pure Storage FlashArray: Performance
MS
VMware and Windows Server Team Lead with 1,001-5,000 employees

With respect to comparing other solutions, when you put all of the features in a box, leverage them, and migrate your application to one of these arrays, it will give you a lot of benefits. Some people have compared benchmark performance tests against other arrays, and from my point of view, overall, as a whole package, when you sum everything up, Pure Storage is the winner.

View full review »
TS
IT System Engineer at a tech services company with 501-1,000 employees

We put the solution into the VMware environment and all the Microsoft SQL servers. We do synchronization between two data centers, so it has to have very low latency. We have just a few milliseconds of latency, which is really good performance, and near perfect.

View full review »
Sr. Cloud Systems Engineer at a computer software company with 1,001-5,000 employees

We did a POC before buying this solution. If you're interested in using this solution, I recommend that you do the same and see if you like it. It's a good product for block storage; it offers very good performance.

Overall, on a scale from one to ten, I would give Pure Storage FlashArray a rating of nine. 

View full review »
Professional Test Engineer at a computer software company with 10,001+ employees

We use Pure Storage FlashArray in a couple of backup products. Our DDP offerings, our data platform offerings, are where we use Veritas with Pure Storage FlashArray. Then, we use the Pure Azure Service model with the secure multi-tenancy features. Pure Storage FlashArray can be managed centrally.

In individual cases where customers were looking for performance-focused, minimum-latency applications, we have deployed Pure Storage FlashArray.

View full review »
AM
Manager, Enterprise Infrastructure at a tech services company with 1,001-5,000 employees

The administration is very easy and quite minimal.

The performance is very good.

The installation is pretty straightforward. 

Technical support is good.

View full review »
PH
Solutions Architect at a wholesaler/distributor with 1,001-5,000 employees

I like what they're doing, but some of my customers complain that they do not have all the bells, whistles, and knobs to fine-tune workloads that some of the competitors have. In my opinion, that's good. Not all customers have dedicated storage gurus, and they can get themselves into trouble if they fine-tune too many of those high-performance knobs, but they do get knocked down for it. Pure Storage takes a hit in the minds and opinions of some customers because they cannot customize things as much as they could with a legacy storage provider's appliance, such as NetApp, Dell EMC, or even HPE. I personally think 95% of my customers are better off letting the system tune itself. That was something you needed to do 12 or 15 years ago, but now, with all-flash, the technology can handle what it needs to handle. Customers just end up shooting themselves in the foot if they tweak too many default settings.

Pure is typically more expensive than everyone else. They can work on the price to make it more competitive.

View full review »
HPE Nimble Storage: Performance
Lead Infrastructure Architect at ThinkON

These arrays perform very well and have allowed us to move many physical servers to virtual and run them from the Nimble arrays without any performance impact; there has actually been a performance improvement.

View full review »
Lead Infrastructure Architect at ThinkON

The AF5000 array is the primary storage for our iManage DMS 10 document management system. It provides the best performance for users of the system.

View full review »
LL
Product Manager at a comms service provider with 11-50 employees

The performance and the processor are good.

HPE has very good technology in terms of storage, and they have very good support assistance.

View full review »
TN
Information Technology Operations Manager at Weber Metals

This product is very easy to set up.

The design of the hardware and software lends itself to great performance and redundancy.

View full review »
Team Leader at PT.Helios Informatika Nusantara

HPE Nimble Storage has simple management for the end user. Customers generally give us feedback about performance once the solution is installed, including with VMware. The integrations, such as vCenter with InfoSight, are helpful for performance work; when a VM has high latency, that visibility is, surprisingly, very helpful for management, and it makes us aware of the tech problems we encounter.

View full review »
ICT Architect / Team Leader at a tech services company with 51-200 employees

The performance and reliability are excellent. Due to the fact that we are a provider, we need systems which just run and run and run the whole day and the whole night without issue. This product does just that. We are selling services, and therefore we need a system which works 24/7.

The initial setup is very easy.

The stability is very good.

The scalability is straightforward.

View full review »
LV
ICT Director KA Infra at a transportation company with 1,001-5,000 employees

We use HPE Nimble Storage for general purposes, not for high performance, just for archiving and backup solutions.

View full review »
VP - Engineering Operations at WPG Consulting

The price of the solution is a little high, although the performance is very good. 

View full review »
HPE 3PAR StoreServ: Performance
Head of IT Department at Sonepar

The most valuable feature is the proactive technical support. If there is a problem then the HPE facility will detect it and immediately contact me.

It achieves very high performance.

View full review »
FM
Presales Engineer at a tech services company with 51-200 employees

The performance is very good.

View full review »
PS
IT Infrastructure and Operational Lead at a consumer goods company with 201-500 employees

We had an issue a few months ago where we experienced a degradation in performance.

Every time you scale by adding more capacity, you need to pay for re-balancing services that cannot be performed in-house.

I would like to see an automatic re-balancing system or functionality for adaptive optimization.

View full review »
ES
Service & Infrastructure Manager at a tech services company with 201-500 employees

We are using a built-in solution in 3PAR. We are using All-Flash Storage, and there are some difficulties with it. HPE has now developed a new tool system to support All-Flash, and that's why we are changing our investment.

They must increase its performance. I want unlimited support, which is very important for performance. I am not interested in spinning disks. HPE is developing a new storage system called Primera, but it must be developed further.

View full review »
CP
SAN Consultant at a tech services company with 201-500 employees

The solution has greatly assisted data performance as far as our VMware environment goes. My data performance is much faster.

View full review »
MV
Storage Manager at a financial services firm with 10,001+ employees

At one point, some remote copy groups stopped working, and we had to use a disaster recovery plan because, in production, we replicate everything from A to B, split up into remote copy groups that gather together data stores and clusters. If one of those remote copy groups stops, you don't have DFP anymore and you have to restart them. Last year, after starting one of those replication groups, we had some performance issues because they try to get in sync as soon as possible using all the resources, so we had to plan it carefully, outside business hours.

View full review »
CC
Solution Architech at a consultancy with 51-200 employees

Cloud integration could be better. They could also add an NVMe port to it; I would like to see NVMe in the next release. That's the future, or the near future, for storage. It will give us really high throughput and better performance.

View full review »
HP
General Manager at a media company with 11-50 employees

From an overall perspective, all the latest technologies can improve support and performance. This is very important for us.

View full review »
MD
Technical Account Manager at a tech services company with 201-500 employees

HPE could improve by making an all-flash system in order to compete with the current market. For the solution to be more competitive in the mid-range market, they could increase the performance.

View full review »
Hitachi Virtual Storage Platform F Series: Performance
HL
Senior Manager Operational Services at Orange

The high performance of flash storage is especially valuable to us.

View full review »
Solution Architect, IT Consultant at Merdasco - Rayan Merdas Data Prosseccing

We are a solution provider and I work with a lot of different SAN products, depending on the needs of the customers. We have implemented this solution, as well as the G Series, for some of our clients.

I have a project right now that involves revising and fine-tuning a storage network. This network contains two Hitachi VSP G Series units. There is not a major difference between the F Series and the G Series. Both of them are enterprise-scale and efficient for many data centers. It is used as primary storage in industries such as banking, automotive, health care, and insurance: large companies, or companies that have an IBM mainframe.

If the solution requires very high IO/s (input/output operations per second) with sub-millisecond response times, then they should select the F Series because it has better performance.

View full review »
Solution Architect, IT Consultant at Merdasco - Rayan Merdas Data Prosseccing

Businesses are looking for simple storage solutions that can exceed expectations and meet the challenge of delivering continuous availability and high performance while maximizing the value of physical and virtual infrastructures. Simultaneously, enterprise and mission-critical business application environments are becoming increasingly unpredictable and organizations need the ability to deliver and orchestrate automated operations.

View full review »
Product Manager at Storageone

I am not an end user. I present the solution to customers. I study a customer's infrastructure and suggest a product based on the customer's needs, such as latency or IOPS performance. I usually work with VSP F700 and F900 models.

View full review »
Hitachi Universal Storage VM [EOL]: Performance
Chief Solutions Architect at Science Applications International Corporation

This solution has a good price-performance ratio.

View full review »
IBM FlashSystem: Performance
VP - Head Enterprise IT Infrastructure at MIB

The most valuable features are deduplication and compression, which together, enable you to have more space.

Performance is a major advantage of this storage.

View full review »
Infrastructure Architect Supervisor; Solution Delivery Supervisor at a financial services firm with 1,001-5,000 employees

Most of the data-reduction and compression features are useful.

It is also very easy to use and administer. Its performance is also good.

View full review »
KV
Director Technical at a tech services company with 11-50 employees

My basic advice is to work with partners who really understand what they're talking about. Anybody who sells one of these boxes doesn't necessarily have the capability to supply or support them. Be very clear that you're dealing with organizations that have the experience to actually deploy and support you. 

That is what's critical, because it's not something where we just rack it up, switch it on, and it works. There are many things involved.

Also, initially, before purchasing, the sizing is very critical. There has to be enough time spent on performance metrics, analyzing the workload requirements, and things like that.

Before the purchase and after the purchase and the deployment, there needs to be quite a bit of involvement. This is why I would advise the customer to work with partners of IBM or Hitachi. 

Work with someone who has experience, not somebody who just comes and says, "I'll do anything, and for the price, I'll give you the best deal."

The best deal is not always the best deal. 

Once you buy it and it doesn't work for you, ultimately you are paying more.

I would rate IBM XIV an eight out of ten.

View full review »
Storage Manager at a financial services firm with 10,001+ employees

They can improve its initial configuration. The initial configuration is currently very difficult. There are multiple choices or alternative ways to configure based on the use case and what you are targeting out of the device, that is, more capacity or more performance. These multiple alternatives cause a lot of confusion.

They should increase the processing capacity of the nodes. Currently, you can cluster up to eight nodes. From my experience and the workload that I am facing in my environment, I would like to see either a bigger, stronger node or a larger number of nodes that can be clustered together. We formally communicated to them that we need to see one or the other, and they are working on something.

View full review »
IE
Senior Systems Engineer at a tech services company with 1,001-5,000 employees

I like most of the features. Its speed, performance, and availability are valuable. We are implementing the data reduction technology the most.

View full review »
RC
Hybrid IT Enterprise Executive at a tech services company with 11-50 employees

I am an IBM reseller and I sell this solution to our clients. I have a lot of knowledge of this solution.

Our clients use this solution for database performance.

View full review »
Storage Consultant at E-Storage

The most valuable features are flexibility and performance.

View full review »
AA
Technical Presales Consultant at a tech services company with 201-500 employees

One of the valuable features is the performance; it is among the best in the market.

View full review »
NetApp EF-Series All Flash Arrays: Performance
IT Systems Engineer at Adaptive Solutions

The most valuable feature is the ability to set a specific margin of performance for a specific workload. This feature is unique to this vendor; the competitors do not have it.

View full review »
IT
Director at a computer software company with 1,001-5,000 employees

The most valuable feature of this solution is the performance of the database access.

It's simple in terms of operation and maintenance. They also provide a warranty for the I/O output.

View full review »
System Administrator at a government with 201-500 employees

Its performance is most valuable. This solution is much faster than other storage solutions, as well as our older storage. The performance of the system is very good; we are getting a fifty-times-better experience than with the older storage. We are using the AFF A300. It also has native cloud integration and most of the features.

View full review »
Technical Advisor at Synnex Metrodata Indonesia

The solution is very good. It offers very good performance, and very good data services to customers. 

The ONTAP is excellent. 

SnapMirror is very useful. It allows the customer to be able to see the entire relationship. It's one of the best features of the product.

The initial setup is pretty straightforward.

For the most part, the solution is stable.

The technical support has been pretty good for the most part.

View full review »
Dell EMC Unity XT: Performance
Solution Architect, IT Consultant at Merdasco - Rayan Merdas Data Prosseccing

Dell EMC Unity is designed for performance and optimized for efficiency. This product is perfect for small and mid-range customers who need to pay less but still get enterprise-level capabilities.

View full review »
ST
Cloud Engineer at a tech services company with 51-200 employees

There are good built-in monitoring tools in the System > Performance tab, and from CloudIQ you can reach out to vCenter as well. ESRS (Call Home) on the service delivery side is valuable.

Remote code update support (interactive or not) is free of charge, as you wish; nonetheless, you are free to do it yourself, as updates are cumulative and retained at each new code level.

View full review »
Huawei OceanStor: Performance
EK
Project Manager at a tech services company with 11-50 employees

The performance of the solution is good.

View full review »
Huawei OceanStor Dorado: Performance
MS
Head of Research & Development at a construction company with 1,001-5,000 employees

The OceanStor V3 5000 was, when we started, the first real NVMe all-flash storage solution. With NVMe, the performance, particularly the latency, was much more impressive, and that was unmatched. When we later sold our machine, the supplier did not have any in stock. Only later did Dell, with PowerMax, and IBM introduce their new solutions integrating NVMe.

The other advantage is the HyperMetro functionality, which is specific to VMware, or to the virtualized environment, giving more reliability and higher capability. It is therefore possible to have all the data synchronized while using less storage. All the other features of the system are very reliable, and the installation time was shorter. We use less space for storage now; it decreased from two racks to only four rack units. That is really impressive.

View full review »
JS
Senior Storage Consultant at a tech services company with 51-200 employees

We are an IT distributor and this is one of the storage solutions that we implement for our clients. The primary use case is VMware virtualization, although it is also used for database storage. Oracle and other SQL databases require a lot of performance in terms of I/O per second, which is met by using OceanStor Dorado.

View full review »
AY
IT Service Manager at a financial services firm with 1,001-5,000 employees

We like that the solution is all-flash.

The solution has a sufficient amount of storage on offer.

The solution is very stable.

So far, we have found it to be quite scalable.

The initial setup is fairly straightforward and not overly complex. The installation and management are very easy.

The performance is very good. 

Technical support is quite good.

View full review »
Dell EMC SC Series: Performance
VP
EMC Storage & Backup Implementation Specialist at a tech vendor with 1-10 employees

Performance-wise, it's high-speed. It's also more stable and scalable.

View full review »
JM
IT Director - Enterprise Storage and Data Protection at a manufacturing company with 10,001+ employees

With the hybrid storage approach, we can balance between cheap space and good performance.

View full review »
Managing Director at Consult BenJ Ltd

The solution's most valuable feature is its performance redundancy. The solution works quite well for that. The redundancy is important to us for snapshot and recovery purposes.

The product offers good performance and is quite powerful. 

The implementation is straightforward. 

View full review »
DM
Senior Systems Consultant at a tech services company with 11-50 employees

Its leading feature is the price-performance ratio, which is very good.

The management is pretty good.

It has good expandability options.

View full review »
Storage Architect at a healthcare company with 10,001+ employees

We use it for multiple databases. We use it for Oracle and for SQL. We also use it for file systems — Oracle, SQL, file system storage. Most of our use cases involve Oracle, SQL, VMware, and large file systems.

I am a storage architect. I have a storage administrator who works with me as well.

Internally, within our company, there are a few dozen employees using this solution. Externally, we literally have millions of people that hit that storage system every day.

As far as our database administrators, they're always looking at the storage performance. Some of them actually have read-only log-ins to the storage array itself. They can log in and look at directly what the storage performance is for their database.

Currently, we are not using this solution extensively because it's becoming a sunset solution. There's no option to increase usage. It's like saying you want to buy a '65 Mustang — no, you can't get one brand new, they don't make those anymore. There's no expansion being done because the product's no longer available. 

View full review »
TN
Information Technology Operations Manager at Weber Metals

In summary, it's a nice product but it's too expensive and too complicated to set up.

If I were rating this product solely on the management or administrative side, I would rate it a one out of ten because it's just ridiculous. Performance-wise, I can rate it a four or five out of ten.

Overall, I would rate this solution a four out of ten.

View full review »
JC
Senior Consultant at a tech company with 11-50 employees

Dell will discontinue this storage. That's the main pain point for clients right now. They will focus on PowerStore, and the SC Series will be at end-of-life soon, possibly as soon as one or two months from now.

In some cases, customers experienced performance issues or higher latency. The performance overall could be better.

View full review »
Lenovo ThinkSystem DM Series: Performance
FA
IT Solutions Architect at nds Netzwerksysteme GmbH

We're a Lenovo partner.

I'd rate the solution six out of ten.

With Lenovo, there are only two options: the DE or the DM series. For common workloads, we tend to recommend the DE series, as it's the best match for smaller companies. The DM series is more for those with many workloads that need very high performance. In use cases where the workload needs performance, we advise our customers to take advantage of Lenovo's best-in-breed DM systems.

We typically advise customers to choose the DM series.

In Germany, we have many small firms and smaller environments. Most people will tell clients they absolutely need flash. However, we don't think that is always the case. It's similar with DM. You can't sell clients with small issues or small storage requirements something that offers flash all the time. It's expensive. You need to think about costs and be strategic to ensure you're meeting your client's needs responsibly.

View full review »
Pure Storage FlashBlade: Performance
JR
Technical Architect at an energy/utilities company with 10,001+ employees

The solution is very easy to manage.

Overall, the product has great performance.

The initial setup is pretty quick.

We've found the pricing to be okay.

Technical support has been great so far.

The stability of the solution is reliable.

View full review »
CTO at a tech services company with 201-500 employees

This solution is mainly used in a very performance-sensitive environment for enterprise software storage.

View full review »
HPE Primera: Performance
AK
Enterprise Solutions Architect at a tech services company with 1-10 employees

Our customers have given very positive feedback about InfoSight, which is the management software for this solution.

Primera has good performance and the compression is also good.

It integrates well with other software including Docker and Kubernetes.

It is easy to expand.

View full review »
SM
Principal Consultant at a consultancy with 1-10 employees

One of the most valuable features is the ease of deployment.

Integration with the compute capacity is good, as this is just the storage component.

Orchestration and management are good.

The performance and capacity-based costs are also good.

Another advantage is that HPE sells everything. This includes all of the capabilities of the hardware, like replication, snapshot, and other specific features. They are all included from the get-go, as opposed to everything being separate and in another budget. When you buy it, you can do whatever you have to be able to do with it out of the box.

View full review »
Head of IT Infrastructure Solutions at a tech services company with 51-200 employees

The primary use case for HPE Primera is moving and storing high-performance databases and core infrastructure: virtual machines, SQL databases, et cetera.

This is an on-premises solution because, right now, it is not popular among customers in the Georgian Republic to use hybrid or cloud solutions. Today, it is not common to move compute and storage resources onto the cloud. I don't know what happens in other regions, like Europe, but in the Georgian Republic it is not popular.

View full review »
GH
CTO at a financial services firm with 5,001-10,000 employees

The performance is very good.

View full review »
Head of Hosting & LAN Services at Lanka Communication Services (Pvt) Ltd.

HPE Primera has helped us achieve very low-latency performance, enhancing the customer experience of applications that demand high IOPS.

View full review »
MD
Technical Account Manager at a tech services company with 201-500 employees

The performance of the solution is good.

View full review »
Dell EMC PowerMax NVMe: Performance
VF
Presales Engineer Information System and Security at a tech services company with 10,001+ employees

It helped our organization by improving performance on the I/O side. Before migrating to PowerMax, customers faced many performance issues due to high latency on the back-end and front-end. Our previous storage was VMAX 10K, and with the evolution of business applications, the environment became more demanding in terms of performance, intelligent data placement with FAST VP, resilience, replication, data protection with snapshots, and hands-off provisioning for servers and applications. For example, at the end of the month, when the financial department ran the scripts to produce reports for the BI solution, those scripts generated many performance issues and the storage struggled. With PowerMax, this is completely transparent at the end-user level.

View full review »
FA
VP Global Markets, Global Head of Storage at a financial services firm with 10,001+ employees

Uptime and availability are first and foremost. The deduplication and compression capabilities are also excellent, allowing us to be very efficient with the physical hardware that we need to deploy on-prem in order to fulfill our requirements. It has given us excellent value for money without compromising performance.

The solution's snapshot capabilities and replication are very good features. Snapshots are allowing us to quickly build analytical models directly from production data. This gives us amazing insights into market trends and allows us to build more effective trading algorithms. Replication offers us unparalleled levels of resilience.

The management overall is excellent. Dell EMC continues to build on very solid foundations, which have been evolving for over two decades. 

The REST APIs are great.

The solution exposes excellent automation opportunities.
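
As a minimal sketch of what scripting against those REST APIs can look like, the snippet below asks a Unisphere for PowerMax instance which arrays it manages; the host, credentials, and the versioned path prefix ("90" here) are assumptions that vary by Unisphere release, and Dell also publishes an official Python wrapper, PyU4V, over the same API:

```python
# Minimal sketch: list the arrays a Unisphere for PowerMax instance
# manages via its REST API. Host, credentials, and the versioned path
# prefix ("90") are placeholders that differ across Unisphere releases.
import requests

UNISPHERE = "https://unisphere.example.com:8443"
AUTH = ("smc", "password")  # placeholder credentials

resp = requests.get(f"{UNISPHERE}/univmax/restapi/90/system/symmetrix",
                    auth=AUTH,
                    verify=False)  # lab sketch; verify certs in production
resp.raise_for_status()
print(resp.json().get("symmetrixId", []))
```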

We have found the performance to be very good so far.

View full review »
CM
Storage Team Manager at a government with 10,001+ employees

What is most valuable to us is the fact that it has multiple engines, and each of those engines works in conjunction in a grid environment. That's important to us because we have so many different use cases. One example might be that a state trooper pulls someone over at 2 o'clock on Sunday morning and wants to go into the LEIN system, which is the law enforcement information network. He wants to see who this person is that he has pulled over and gather as much information as he can on that person. We can't predict when he's going to pull someone over, nor can we predict when backups are actually going to be taken against the volume that he's going to for that information. The PowerMax allows us to do backups of that volume at the same time that he is looking up the data he needs, and there's no impact on performance at all.

The performance is very good. Our predominant workloads are all less than 5 milliseconds and it's most common to have a sub-1-millisecond response time for our applications. In terms of efficiency, we've turned on compression and we're able to get as high as two-to-one compression on our workloads, on average. Some workloads can't compress and some can compress better, but on average, we're a little bit more than two-to-one.

The solution’s built-in QoS capabilities for providing workload congestion protection work pretty well because we actually don't even turn on the service level options. We leave it to the default settings and allow it to decide the performance. We don't enforce the Platinum, Gold, or Silver QoS levels. We just let the array handle it all, and it does so.
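
As an illustration of the difference between enforcing service levels and accepting the defaults, here is a sketch of two hypothetical provisioning payloads; the field names are assumptions made for the example, not the actual PowerMax provisioning API.

```python
# Field names below are assumptions for illustration; this is not the actual
# PowerMax provisioning API, just the shape of the choice being described.
volume_pinned_to_slo = {
    "volumeName": "oltp_data_01",
    "capacityGb": 512,
    "serviceLevel": "Gold",  # explicitly enforce a QoS tier
}

volume_with_defaults = {
    "volumeName": "reports_01",
    "capacityGb": 512,
    # No "serviceLevel" key: leave it to the array's default behavior and
    # let it balance workloads itself, as the reviewer describes.
}

print(volume_pinned_to_slo, volume_with_defaults)
```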

We also use VPLEX Metro, which is a separate service offering from Dell EMC. It does SRDF-like things, but it's really SRDF on steroids. Of course it copies data from one data center to the other, but with the VPLEX, not only does it copy it synchronously, but it also has coherent caching between both data centers. That means we are literally in an Active-Active mode. For instance, we can dynamically move a VMware host that is in one data center to another data center, and we're not just doing vMotion with the host. The data is already in there at the other data center as well. It's all seamless. We don't have to stop SRDF and remount it on another drive. It's already there.

View full review »
Infrastructure Lead at Umbra Ltd.

With the SCM memory, it has been a case of "set it and forget it." It is being used as a cache drive. There is very little configuration for us to do. We just know that it is working.

PowerMax NVMe's QoS capabilities give us a lot of visibility into taking a look at what could be a potential performance issue. However, because it is so fast, we haven't really noticed any slowdowns from the date of deployment even until today.

It is a very good storage appliance for enterprise-level, mission-critical IT workloads because of its high redundancy and parity drives. It gives us the ability to not worry about our data. And if something were to go wrong, e.g., a drive pops, then we have our mission-critical warranty: we get a drive the same day, and it is swapped by the next business day at the latest.

PowerMax NVMe has made it a lot easier to understand how much we are able to provision, and a lot faster to provision new things; my provisioning time has been reduced by 90%. It has also made it very easy to understand and see everything behind it, versus the older heritage Dell EMC systems, which were very convoluted and hard to get working. Things that used to take an hour now take five to ten minutes.

View full review »
Senior Solution Architect at Rackspace

We are a very large customer of Dell EMC. We have several different deployments or installations. The biggest use case is probably a multi-tenant or shared environment where we provide many petabytes of storage for multiple customers who utilize that same infrastructure. We are a managed services provider in the cloud sector so we have to deliver high performance storage for thousands of customers who have to be up all the time.

There are a lot of different use cases. In general, we need large quantities of storage that are always available, so uptime is important, as is performance. As a service provider, we deliver storage on demand for our customers. This is important because we can adjust storage needs on a per-customer basis. Whether it be increases or decreases in storage, this platform allows us to do that very easily.

We are using the latest release.

View full review »
Pure FlashArray X NVMe: Performance
HH
Managing Director at Dr. Netik & Partner GmbH

The most valuable feature is the performance.

One of the best features is the support, which is excellent.

The user interface and reporting are good.

It is easy to deploy and administrate this solution.

View full review »
VP Infrastructure & Security at a financial services firm with 51-200 employees

We needed a flash array to support our core databases for maximum performance. We use SQL. We were using vSAN before, but we were having some problems with it. So, we wanted to isolate the databases with dedicated storage. Rather than using a vSAN solution using servers, we tested a couple of solutions, and we figured out that Pure FlashArray X NVMe was giving us the best performance.

View full review »
Implementation and Support Engineer at PRACSO S.R.L.

The stability has been very good and very reliable. There are no bugs or glitches. It doesn't crash or freeze. Its performance has been good.

View full review »
IBM FlashSystem 9100 NVMe: Performance
Microfinance at a financial services firm with 5,001-10,000 employees

The performance of each server was improved when we implemented this product. The IOPS rate is high with flash storage, compared to other types of disks.

View full review »
General Manager at SinergyHard Ecuador

The high performance and high availability improved our overall processes.

View full review »
Information Technology Senior Administrator at Genpa

Before buying IBM Flash, I tested many flash storage solutions. For example, I tested Pure Storage for about one and a half months, Huawei Storage, and Dell Storage. I tested and compared the performance results. 

I also ran tests with Iometer. When we ran Iometer on NetApp, the performance result was not stable; it immediately goes down and then suddenly goes up. But when we tested Pure Storage, we found it to be very good. It's stable; it stays on the same line. When I tested Dell Storage and removed the controller or a hard drive manually, I saw the performance go down. Dell goes down, and Huawei goes down. When I did the same test on Pure Storage, the performance stayed nearly the same. Because of this, I would recommend Pure Storage. I plan to buy it when I have the budget for it.

View full review »
Dell EMC PowerStore: Performance
CTO at Universita' degli Studi di Pisa

The use case for this solution is based on VMware and that is why we chose PowerStore X. During the first period of the pandemic, we decided to use our VMware infrastructure for HPC workloads. We were looking for high-performance storage that could be inserted inside our VMware environment in an easy way. PowerStore, which had just been announced, seemed like the right solution and so we decided to buy. We have this storage environment inside the virtual HPC.

In our environment we are doing medical analysis related to genomic workloads. The data are acquired remotely from experiments and stored on the PowerStore. The PowerStore is exposed to the user through virtual machines, and the data are analyzed in this environment.

View full review »
BC
Chief Information Officer at a computer software company with 5,001-10,000 employees

We replaced an older, high-performance storage device that was very expensive. With PowerStore, we were able to achieve the IOPS, and we were also able to get a data compression rate significantly above what we had expected. We were able to retire that older, very expensive piece of storage by bringing in the PowerStore. It's been faster and cheaper than we had expected, per terabyte.

Another reason that we were after this machine was PowerStore's VMware integration. We're a very large VMware customer. Some 98 percent of our workload runs on VMware.

View full review »
Founder and CEO at Desktoptowork

The built-in intelligence adapts quickly to changing workload requirements. It works as we expected it to work. It doesn't give you a view into how it's doing it because it's doing it in the backend, but up until now, every process has been working perfectly fine.

The biggest benefit is that we have an enterprise storage solution we can rely on. It does simplify storage operations. It's easy to manage.

In the few months that we have used it, the performance has been great.

View full review »
NetApp NVMe AFF A800: Performance
Team Lead at Adani Enterprises Ltd

The most valuable features are stability and performance.

View full review »
Pavilion HyperParallel Flash Array: Performance
JL
Network Manager at a transportation company with 1,001-5,000 employees

We run a virtualized workload. Right now, we run everything on Pavilion, which includes our high-performance databases and engineering tasks as well as Exchange and file shares. All of that runs on our Pavilion HyperParallel Flash Array.

We use block storage for our VMware infrastructure and are a complete VM shop. All of our file servers run on VMs, which use block storage on the Pavilion device.

View full review »
MP
Manager of Production Systems at a media company with 10,001+ employees

The solution's performance and density are excellent.

Typically, there is a trade-off. You can have incredibly dense storage in a small footprint sometimes, but the trade-off to that is you need a lot of horsepower to access it, which ends up counterbalancing the small footprint. Then, sometimes you can have very fast access to a storage array, but that usually requires a more comprehensive infrastructure.

This kind of balance, somehow fitting it all into one chassis in a 4U rack footprint, is unheard of. You have the processing right there to access the data, and almost a petabyte of flash accessible.

It's a very small footprint, which is important to our type of industry because we don't have massive servers.

We have benefited from this technology because we were able to centralize a lot of workflows. There is normally a trade-off, where you can have very fast local storage on the computer, but in a collaborative environment that's counterproductive because it requires people to share files and then copy them onto their system in order to get the very fast local performance. But with Pavilion, basically, you get that local NVMe performance but over a fabric, which makes it easier to keep things in sync.
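
For readers unfamiliar with "local NVMe performance over a fabric," here is a minimal sketch of how a Linux client typically attaches NVMe-oF storage using the standard nvme-cli tool; the transport, portal address, and subsystem NQN below are example values, not Pavilion-specific settings.

```python
# A minimal sketch of attaching NVMe-over-Fabrics storage from a Linux client
# using nvme-cli. The transport, address, and NQN are made-up examples; the
# actual values come from the array's configuration.
import subprocess

TRANSPORT = "rdma"                       # or "tcp", depending on the fabric
PORTAL = "10.0.0.5"                      # example array data-port IP
SVC_ID = "4420"                          # conventional NVMe-oF service port
NQN = "nqn.2020-01.example:subsystem1"   # example subsystem NQN

def discover() -> None:
    """List the subsystems the array exposes to this host."""
    subprocess.run(
        ["nvme", "discover", "-t", TRANSPORT, "-a", PORTAL, "-s", SVC_ID],
        check=True,
    )

def connect() -> None:
    """Attach the namespace; it then appears as a local /dev/nvmeXnY device."""
    subprocess.run(
        ["nvme", "connect", "-t", TRANSPORT, "-n", NQN, "-a", PORTAL, "-s", SVC_ID],
        check=True,
    )

if __name__ == "__main__":
    discover()
    connect()
```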

We have been able to consolidate storage and as part of a multi-layer storage system, it plays a very important part. For us, it cuts down on costs because we essentially get an NVMe tier that's large enough to hold everyone's data, but the other thing for us is time and collaboration. Flexibility is worth a lot to us, as is creativity, so having the resources to do that is incredibly valuable.

If we wanted to do so, Pavilion could help us create a separation between storage and compute resources. In some environments such a separation is natural, while in other environments there's an inclination to minimize the separation between compute and data. But to that point, Pavilion has the flexibility to allow you to really do whatever you want.

In that sense, you have some workloads where compute is very close to the data, such as iterative stuff, whereas we have some things where we simply want bulk data processing. You can do any of that but for us, that type of separation is not necessarily something we are concerned with, just given our type of workflows. That said, we have that flexibility if necessary.

This system has allowed us to ingest a lot of data in parallel at once, and that has been very useful because it's a parallel system. It's really helped eliminate a lot of the traditional bottlenecks we've had.

Pavilion could allow for running additional virtual machines on existing infrastructure, although in our case, the limitation is the core densities in our hardware. That said, it is definitely useful for handling the storage layer in a lot of our VMs. The problem is that the constraints of our VM deployments are really in just how many other boxes we have to handle the cores and the memory.

View full review »
JB
Manager of Platform Software at a healthcare company with 51-200 employees

Performance-wise, this product is faster than pretty much anything we've seen. In terms of density, our in-house basis for comparison is not very extensive, but based on our research, and on what we have actually used, the density is much higher than anything else we've seen.

We can basically store the entire company's data inside of one unit, when the unit is properly configured. As it is now, it's equivalent to replacing three or four racks of equipment. The density is incredibly high.

This solution provides us with flexibility in our storage operations. It's software-defined storage, so we can allocate capacity however we want. It uses thin provisioning, which is convenient for us, and all sorts of other enterprise features that come with it that we haven't used quite yet. But, we can imagine we'll be taking advantage of them as the usage against the unit rises.
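
As a rough illustration of why thin provisioning is convenient, here is a toy model (not Pavilion's implementation) showing capacity being advertised up front but consumed only as data is written.

```python
# A toy model of thin provisioning: each volume advertises its full size up
# front, but physical capacity is only consumed as data is actually written.
# This is a conceptual sketch, not any vendor's implementation.
class ThinPool:
    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb
        self.written_gb = 0
        self.provisioned_gb = 0   # sum of advertised volume sizes

    def create_volume(self, size_gb: int) -> None:
        # Provisioning is "free": no physical blocks are reserved yet.
        self.provisioned_gb += size_gb

    def write(self, gb: int) -> None:
        # Physical space is consumed only on write.
        if self.written_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: add capacity or reclaim space")
        self.written_gb += gb

pool = ThinPool(physical_gb=10_000)
pool.create_volume(8_000)   # volumes can be oversubscribed...
pool.create_volume(8_000)   # ...16 TB advertised against 10 TB physical
pool.write(500)             # only written data consumes the pool
print(pool.provisioned_gb, pool.written_gb)  # 16000 500
```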

Our use case is primarily about performance, so consolidation has not saved us in terms of costs or capital expenditures. Our implementation of the product is an add-on to what's currently at the company. We've taken data out of the existing infrastructure and just moved it. The migration has allowed us to use it a lot faster, but we haven't gone through a consolidation exercise where we've gotten rid of the old equipment and now just depend on the new unit.

Absolutely, we are able to run more virtual machines on our existing infrastructure. With respect to storage management, we've reduced the amount of work that was required. In fact, we can eliminate most of the staff that has been dedicated to doing that in the old equipment. Now, we need very few people to administer the entire company using Pavilion. We can basically have one person manage all of the company's engineering data.

In terms of cost savings, in our situation, the cost we're saving is not headcount but rather, engineering time spent doing those kinds of activities. Where we may have had to spend a lot more time administering storage and IT equipment, we now have to spend much less time doing it, even though the headcount dedicated to IT is the same. Basically, opportunity costs have improved dramatically, as we've been able to assign staff to more value-added tasks.

We probably had three people spending between 25% and 50% of their time doing related activities, whereas now, we have one person spending perhaps 10% of their time.

View full review »
VAST Data: Performance
HPC CTO at a tech services company with 10,001+ employees

The most valuable feature is the performance. It is flash-based storage.

It is easy to integrate.

The stability is rock-solid.

View full review »
Hitachi Virtual Storage Platform E990: Performance
IT Infrastructure Manager at DISH

I am using the latest update to the solution.

The solution is very good when it comes to performance in the data center.

View full review »
IntelliFlash: Performance
HM
Lead Systems Engineer at a retailer with 5,001-10,000 employees

I wouldn't say I like anything about this solution. We are looking for a replacement with Dell EMC and Pure Storage. Tegile's performance, support, and features are horrible. It's going down.

Multiple companies have bought it. It looked okay at one point in time, like four years ago. Even though it wasn't one of the best, it still looked okay. Since the management has changed several times, it looks like it's going down the drain. 

Performance is horrible now. Our original intent was to buy new storage in about two years, but since it became critically urgent for us, we decided to purchase a new one within two or three months.

It would be better if they improved the codebase. We have issues with their code very often, and I think that is the main pain point. The hardware is also horrible, because we frequently have either a controller failure or a SATADOM failure. Every now and then, we also have a disk failure.

They have to get their act together. They have to make sure their hardware is robust, they have to make sure their code is good, and then we can think about new features and functionality. 

First, make the unit run properly, and then we can think about additions. Obviously, their support has to be knowledgeable, because when I told them, "We have latency issues, come troubleshoot for us," nobody came. But if we tell them that we need to do a firmware upgrade, then they say, "Okay, let's do a firmware upgrade." They will come to do the firmware upgrade, and then they will go. But with the firmware upgrades, you never know when it will work properly and when it won't.

If there is a disk that needs to be replaced, and we ask them to replace it, they'll say, "Okay, just share the remote session with us, and we'll run some commands, and we'll validate which disk is faulty. If it's really faulty, we will send the disk." We do that, and then they find the faulty disk and send a replacement.

They will do these minor things, but that's not what we are looking for. We are looking for more: for example, if there is latency, try to help us out and help the customer find where the latency is. It doesn't necessarily have to be in the SAN storage; it might be a configuration issue, or it might be something else. So, you should help the customer find where the issue is. Unfortunately, that is not what we are getting from them, so they have to improve that a lot.

View full review »
CS
IT Manager at an agriculture company with 1,001-5,000 employees

We used the solution for basically all operations: our ERP, Citrix, email (around 18 terabytes of it), et cetera. The general workloads ran on the hybrid arrays, while our ERP system, which demands performance, ran on the solution's all-flash.

View full review »
Zadara: Performance
CTO at Pratum

One of the main benefits is being able to scale up as needed, on-demand, without having to invest in any sort of hardware costs. If we were to get a large client, a Fortune 500 or Fortune 100, that had a significant number of assets or data they were looking to monitor, being able to scale on-demand and increase the drives behind the scenes is something that we can do in a matter of minutes today. If we were to manage that ourselves, it would take time to spin up those drives or to make those purchases and then get them configured and onboarded. We can now do that with the click of a button.

Zadara performs proactive monitoring and that includes any alerts or support tickets that are created within the system. For instance, if there is some sort of performance issue due to increasing ingestion or increasing storage consumption, or there are any other issues behind the scenes, all that is monitored. It creates an automated ticket that also goes to their team and one of their customer support individuals will reach out. We, obviously, have our own security monitoring on top of that, as well as performance monitoring, but we certainly work closely with Zadara and their support team in responding to any events that are generated. But they have the ability to go in and help mitigate any of the items that do come up.

It always helps having additional monitoring capabilities or individuals, especially when their focus is primarily on the data storage and the volumes behind the scenes, to ensure that everything's healthy and functioning. It's always good to have multiple layers there, in terms of visibility. But one of the key benefits that we have received is being able to respond quicker. We can open up a support ticket and ask them to make a change on our behalf, or to add additional storage, or to increase speed somewhere. They leverage their team to perform those things on our behalf and that saves our team from having to do it. They've been able to reduce the management overhead for us because we can offload some of those responsibilities to them.

It's pretty hard to compare performance levels to when we previously managed things ourselves, since we have grown significantly within the last five years, and especially year-over-year. But we've probably seen an increase in performance of at least 100 to 200 percent. We've shifted workloads from SATA drives all the way to SSDs. We've been able to go from 1Gb to 10Gb networking, so performance-wise we're in a completely different arena. That being said, we've also doubled and tripled our event ingestion count. That has increased year-over-year with constant growth, so the performance demand has grown significantly. We're not only able to keep up with that but to exceed it, year-over-year. We can scale resources, increase the network, decrease latency, and increase the speed, the number of CPUs, and the amount of memory on those virtual private storage arrays, as needed.
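
As a hypothetical illustration of "scaling with a click (or an API call) instead of buying hardware," here is a sketch of what a capacity-expansion request could look like. The endpoint, auth header, and payload are invented for the example; they are not Zadara's documented API.

```python
# Hypothetical illustration of scaling storage up via an API call rather
# than purchasing hardware. The endpoint, token header, and payload are
# invented for this sketch; they are not Zadara's documented API.
import requests

API = "https://vpsa.example.zadara.com/api"   # assumed management endpoint
HEADERS = {"X-Access-Key": "example-token"}   # assumed auth scheme

def expand_pool(pool_id: str, add_capacity_gb: int) -> None:
    """Request additional capacity for a storage pool (illustrative only)."""
    resp = requests.post(
        f"{API}/pools/{pool_id}/expand.json",  # placeholder path
        headers=HEADERS,
        json={"capacity": add_capacity_gb},
        timeout=60,
    )
    resp.raise_for_status()
    print(f"Pool {pool_id} expansion accepted: +{add_capacity_gb} GB")

expand_pool("pool-00000001", 2048)  # grow by 2 TB in one call
```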

We've also been able to leverage some of the private compute. They're scaling up their compute so that you can actually spin up servers and instances closer to your storage, all within Zadara. That has been a tremendous benefit to us in increasing performance and reducing latency. I've been impressed with all of those features, and that capability is fairly new here. Staying cutting edge and providing additional services is something that's been very helpful to us as well.

In terms of data center footprint, we were able to take everything from our security operations center that was on-premises, and all of our co-lo's, and move all that into Zadara's management. That was definitely one of our primary objectives. They've been able to take over all the overhead that goes along with managing the backend infrastructure. It has been tremendously helpful in that regard.

Compared to us trying to do this ourselves, we've probably seen about 50 to 60 percent in cost savings over the last five years.

View full review »
CEO at Momit Srl

Luckily for us, we usually scale up. We have never scaled down, so I haven't tested their ability to shrink and go down. However, scaling up is extremely easy, just a few clicks. This is very good for the financial part of the company because we don't need to spend money today for what we will need in the next month or year. You just pay for what you are using, and this is something that has definitely helped us. This is why we don't have any other solutions today, only Zadara Storage Cloud.

The increase of performance for us means having an elastic system where we are able to scale up and scale out the storage without buying new disks or having to size today what you will need over the next five or 10 years. That sizing is very complicated for a service provider. This is the best part of Zadara Storage Cloud for us.

Speaking just to the storage performance, I don't know if they are better than any other storage array with all-flash and similar features. They perform like others, not better or worse, just differently. It is a different approach.

View full review »
EO
CTO at a tech services company with 51-200 employees

Having dedicated cores and memory absolutely provides us with a single-tenant experience. We have use cases in both categories, but we have customers who have a completely dedicated and private environment and it is particularly important for them. For example, if they are dealing with medical or patient data then they have a dedicated core and a dedicated disk, which is essentially their own private cloud.

It is important that we also have the flexibility for some of the lower-end services that we can have multi-tenant storage because not all of our clients require completely dedicated cores and disk space.

It is very important to us that Zadara provides various drive options because we're more of a niche cloud player, and we don't compete with Azure, AWS, or other large providers. We tend to have bespoke solutions, so having the different drive options gives us the flexibility we need to do that.

Zadara has improved our business with the main benefit being that we generate quite a bit of revenue every month from the services that we provide others, and I don't think that would have been possible without this product. What really attracted us to Zadara was the fact that they have the pay as you grow model.

As a new cloud provider, say three years ago, it would have been quite a large investment for us to take on. Not just in the hardware but also in the skills and the knowledge required to set up and operate it. Zadara was a key enabler for us to be able to enter the cloud business because if it wasn't for them, it would have taken us a lot longer. We would have had to invest in more people and as well in more hardware. As it is now, we are generating revenue and it gives us some credibility with our larger customer base.

Although we only use cloud storage services, Zadara is an agile solution that offers compute and networking, as well. This agility means that they are very quick at turning things around, which is key for us because we're able to implement solutions for customers quite quickly. Ultimately, we can start bringing in revenue for sales quite fast, as opposed to some of our traditional business. 

If you take fiber, for example, it could take up to three months to realize the value of a sale before it actually starts to bill. Whereas with Zadara, it is so agile and so quick and easy to set up that even in a few days, we can turn a sale around into billing. This quick conversion from sale to revenue is also important for the business.

We didn't have a cloud before we had Zadara so, in that regard, it has increased our performance by 100%. In fact, we have been able to redeploy people and budgets to more strategic projects because it helped us to enter the cloud environment and to offer new services to our customers.

View full review »
Chief Technology Officer at Harbor Solution

Our initial application was probably the simplest one. We were sunsetting a product, but we needed to do some movement and we needed some additional storage, but we knew that what we needed was going to change within six months as we got rid of one product and brought in another. To handle this, we started deploying Block storage with Zadara, which we then changed to Object storage and effectively sent back the drives related to the Block storage as we did that migration. This meant that we did not have to invest in new technology or different platforms but rather, we could do it all on one platform and we can manage that migration very easily.

We use Zadara for most of our storage and it provides us with a single-tenant experience. We have a lot more customer environments running on it and although we don't use the compute services at the moment, we do use it for multi-tenant deployment for all of our storage.

I appreciate that they also offer compute services. Although we don't use it at the moment, it is something that we're looking at.

The fact that Zadara provides drive options such as SSD, NL-SAS, and SSD Cache is really useful for us. Much like in the way we can offer different deployments to our customers, having different drive sizes and different drive types means that we can mix and match, depending on customer requirements at the time they come in.

With available protocols including NFS, CIFS, and iSCSI, Zadara supports all of the main things that you'd want to support.
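
To show what consuming one of those protocols looks like in practice, here is a minimal sketch of attaching an iSCSI volume from a Linux host with the standard open-iscsi tooling; the portal address and target IQN are example values, not anything provisioned by Zadara.

```python
# A minimal sketch of consuming block storage over iSCSI from a Linux host
# with open-iscsi's iscsiadm. The portal IP and target IQN are example
# values; real ones come from the provisioned volume.
import subprocess

PORTAL = "192.0.2.10:3260"                    # example iSCSI portal
TARGET = "iqn.2011-04.com.example:vol-0001"   # example target IQN

def discover_targets() -> str:
    """Ask the portal which targets it exposes to this initiator."""
    out = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
        check=True, capture_output=True, text=True,
    )
    return out.stdout

def login() -> None:
    """Log in; the LUN then shows up as a local block device (e.g. /dev/sdX)."""
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"],
        check=True,
    )

if __name__ == "__main__":
    print(discover_targets())
    login()
```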

In terms of integration, Zadara supports all of the public and private clouds that we need it to. I'm not sure if it supports all of them on the market, but it works for everything that we require. This is something that is important to us because of the flexibility we have in that regardless of whether our customers are on-premises, in AWS, or otherwise, we can use Zadara storage to support that.

I would characterize Zadara's solution as elastic in all directions. There clearly are some limits to what technology can do, but from Zadara's perspective, it's very good.

With respect to performance, it was not a major factor for us so I don't know whether Zadara improved it or not. Flexibility around capacity is really the key aspect for us.

Zadara has not actually helped us to reduce our data center footprint but that's because we're adding a lot more customers. Instead, we are growing. It has helped us to redeploy people to more strategic projects. This is not so true with the budget, since it was factored in, but we do focus on more strategic projects.

View full review »
GW
Platform and Infrastructure Manager at a tech services company with 1,001-5,000 employees

We use Zadara as a multi-tenanted experience and it is key to us that we have dedicated resources for each tenant because it maintains a consistent level of performance, regardless of how it scales.

The fact that Zadara provides drive options such as SSD and NL-SAS, as well as SSD Cache, is very important because we need that kind of performance in our recovery environments. For example, when the system is used in anger by a customer, it's critical that it's able to perform there and then. This is a key point for us.

At the moment, we don't use the NFS or CIFS protocols. We are, however, big users of iSCSI and Object, and the ability to just have one single solution that covers all of those areas was important to us. I expect that we will be using NFS and CIFS in the future, but that wasn't a day-one priority for us.

The importance of multi-protocol support stems from the fact that historically, we've had to buy different products to support specific use cases. This meant purchasing equipment from different vendors to support different storage workloads, such as Object or File or Block protocols. Having everything all in one was very attractive to us and furthermore, as we retired old equipment, it can all go onto one central platform.

Another important point is that having a single vendor means it's a lot easier for us to support. Our engineers only need to have experience on one storage platform, rather than the three or four that we've previously had to have.

It is important to us that Zadara integrates with all of the public cloud providers, as well as private clouds because what we're starting to see now, especially in the DR business, is the adoption of hybrid working from our customers. As they move into the cloud, they want to utilize our services in the same way. Because Zadara works exactly the same way in a public cloud as it does on-premises, it's a seamless move for us. We don't have to do anything clever or look at alternative products to support it.

It is important to us that this solution can be configured for on-premises, co-location, and cloud environments because it provides us with a seamless experience. It is really helpful that we have one solution that stretches across on-premises, hybrid, and public cloud systems that looks and works the same.

An example of how Zadara has benefited our company is that during the lockdown due to the global pandemic, we've had a big surge in demand for our products. The ability of Zadara to ramp up quickly and expand the system seamlessly has been a key selling point for us, and it's somewhat fueled our growth. As our customer take-up has grown, Zadara's been the backbone in helping us to cope with that increased demand and that increased capacity.

It's been really easy to do, as well. They've been really easy to work with, and we've substantially increased our usage of Zadara. Even though we've only been using it for just about five months, in that time, we've deployed four Zadara systems across four different data centers. Their servicing capacity has been available within about four weeks of saying, "Can you do this?" and them saying "Yes, we can."

With respect to our recovery solutions, using Zadara has perhaps doubled the performance of what we had before. A bit of that is because it's a newer technology, and a bit of that is also in the way we can scale the engine workload. When the workload is particularly high, we can upgrade the engine, in-place, to be a higher-performance engine, and then when the workload scales down, we can drop back to a lower-performance one. 

That flexibility in the performance of not only being able to take advantage of the latest flash technology but also being able to scale the power of the storage engines, up and down as needed, has been really good for us.

Using Zadara has not at the moment helped to reduce our data center footprint, although I expect that it will do so in the future. In fact, at this point, we've taken up more data center footprint to install Zadara, but within six months we will have removed a lot of the older systems. It takes time to migrate our data but the expectation is that we will probably save between 25% and 30%, compared to our previous footprint.

This solution has had a significant effect on our budgeting. Previously, we would have had to spend money as a capital expense to buy storage. Now, it's an operational expense and I don't need to go and find hundreds of thousands of pounds to buy a new storage system. That's helped tremendously with our budgeting.

Compared to the previous solution, we are expecting a saving of about 40% over five years. When we buy new equipment, our write-down period is five years. So, once we've bought it, it has to earn its keep in that time. Using Zadara has not only saved us money but it will continue to save us money over the five years.

It has saved us in terms of incurring costs because I haven't had to spend the money all upfront, and I'm effectively spreading the cost over the five years. We do see an advantage in that the upfront capital costs are eliminated and overall, we expect between 30% and 40% savings over the lifetime if we'd had to buy the equipment.

View full review »
IA
Chief Information Officer at a tech services company with 201-500 employees

The fact that we have offsite storage that is provided to us using iSCSI as a service has allowed me to offload certain storage-related workloads into Zadara. This means that when I have a planned failover, if I need to maintain the local storage that I have in my data center, I simply shift all of the new incoming traffic into Zadara storage. None of my customers even know that it has happened. In this regard, it allows us to scale in an infinite way because we do not have to keep adding more capacity inside our physical data center, which includes power, networking, footprint, and so on. The fact that Zadara handles all of that for me behind the scenes, somewhere in Virginia, is my biggest selling point.

With its dedicated cores and memory, we feel that Zadara provides us with a single-tenant experience. This is important for us because we are aware that in the actual physical environment, where Zadara is hosting our data, they have other clients. Yet, the fact that we have not had any kind of performance issues, and we don't have the noisy neighbor concept, makes it feel like we are the only ones on that particular storage area network (SAN). That's really important for us.

Zadara provides drive options such as SSD and NL-SAS, as well as SSD cache, and this has been important for us. These options allow us to decide for different volumes, what kind of services we're going to be running on them. For example, if it happens to be a database that requires fast throughput, then we will choose a certain type of drive. If we require volume, but not necessarily performance, then we can choose another drive.

A good thing about Zadara is you do not buy a solution that is fixed at the time of purchase. For instance, if I buy an off-the-shelf storage area network, then whatever that device can do at the time of purchase, give or take one or two upgrades, is where I am. With Zadara, they always improve and they always add more functionalities and more capacities.

One example is that when we became customers, their largest drives were only nine terabytes in size. A year or so later, they improved the technology and now have 14-terabyte drives available, which is a good, almost 50% increase. It is helpful because we were able to take advantage of those higher densities and higher capacities. We were able to migrate our volumes from the nine-terabyte drives to the 14-terabyte drives pretty much without any downtime and without any kind of interruption to service. This type of scalability, and the fact that you are future-proofing your purchase and your operations, is another great advantage that we see with Zadara.

As far as I know, Zadara integrates with all of the public cloud providers. The fact that they are physically located in the vicinity of public cloud regions is a major selling point for them. From my perspective, it is not yet very important because we are not in the public cloud. We have our own private cloud in Miami, and not part of Amazon or Azure. This means that for us, the fact that they happen to be in Virginia next to Amazon does not play a major role. That said, they are in a place where there is a lot of connectivity, so in that regard, there is an advantage. We are not benefiting from the fact that they are playing nice with public clouds, simply because we are not in the public cloud, but I'm sure that's an advantage for many others who are.

Absolutely, we are taking advantage of the fact that they integrate with private clouds.

Zadara saves me money in a couple of ways. One is that my operational costs are very consistent. The second is that the system is consistent and reliable, and this avoids a lot of the headaches that are associated with downtime, reputation, and all of that. So, knowing that we have a reputable, reliable, and consistent vendor on our side, that to me is important.

It is difficult to estimate how much we have saved because it wouldn't be comparing apples to apples. We would be buying a system versus paying for it operationally and I don't really have those kinds of numbers off-hand. Of course, I cannot put a price tag on my reputation.

View full review »