All-Flash Storage Arrays VMware Reviews

Showing reviews of the top-ranking products in All-Flash Storage Arrays that contain the term VMware
NetApp AFF (All Flash FAS): VMware
Chief Information Officer at Mt. San Rafael Hospital

We have a pretty amazing story about using AFF. When I came into this organization, we had a 59% uptime ratio, and at the time we were looking at how to improve efficiency and how to bring good technology initiatives together to make a digital transformation happen. When the Affordable Care Act came out, it mandated that many health care organizations implement an electronic medical record system. Since health care has been behind the curve when it comes to technology, that was a major problem for an organization with a 59% uptime ratio. They wanted to implement an electronic medical record system throughout the facility, and we didn't have the technology in place.

One of my key initiatives at the time was to determine what we wanted to do as a whole organization. We wanted to focus on the digital transformation, and we needed to find good business partners, so we selected NetApp. We were trying to create a better, more efficient process, with very strong security practices as well. We selected an All-Flash FAS solution because we were starting to implement virtual desktop infrastructure with VMware.

We wanted to roll out zero clients throughout the whole organization for the physicians, which allowed them to do single sign-on. A physician would be able to go to a specific office, tap his badge, and sign in to the system from there. His floating profile would come over with him, which created some great efficiencies. The security practices behind the ONTAP solution and the security that we were experiencing with NetApp were absolutely out of this world. I've been very impressed with it. One of the main reasons I started with NetApp was that they have a strong focus on health care initiatives. I was asked to sit on the neural network, a NetApp-facilitated health care advisory group that looked at the overall roadmap of NetApp. A good business partner like NetApp is different from a vendor that is going to come in, sell me a solution, and just call me a year later to say they want us to sign something. I'm not looking for people like that. I'm looking for business partners. What I like to say is, "My success is your success, and your success is ours." That's really a critical point that NetApp has demonstrated.

SG
Storage Engineer at Missile Defense Agency

We don't use NetApp AFF for machine learning or artificial intelligence applications.

With respect to latency, we basically don't have any. If it's there then nobody knows it and nobody can see it. I'm probably the only one that can recognize that it's there, and I barely catch it. This solution is all-flash, so the latency is almost nonexistent.

The RAID-DP protection level is great. You can have three disks fail and still get your data; I think it takes four failures before you can't access data. The snapshot capability is there, which we use a lot, along with other really wonderful tools. We depend very heavily on RAID-DP because it's so reliable. We have not had any data become inaccessible because of any kind of drive failure since we started, and that was with our original FAS8040. This is a pretty robust and reliable system, and we don't worry too much about the data that is on it. In fact, I don't worry about it at all because it just works.

Using this solution has helped us by making things go faster, but we have not yet implemented some of the things that we want to do. For example, we're getting ready to use the VDI capability, where we do virtualization of systems. We're still trying to get the infrastructure in place. We deal with different locations around the world, and rather than shipping hard drives that are not installed in PCs and then re-installing them at the main site, we want to use VDI. With VDI, we turn on a dumb system that has no permanent storage. It goes in, they run the application, and we can control it all from one location, there in our data center. That's what we're moving towards. The reason for the A300 is that its latency is so low that we can do large-scale virtualization. We use VMware a tremendous amount.

NetApp helps us to unify data services across SAN and NAS environments, but I cannot give specifics because the details are confidential.

I have extensive experience with storage systems, and so far, NetApp AFF has not allowed me to leverage data in ways that I have not previously thought of.

Implementing NetApp has allowed us to add new applications without having to purchase additional storage. This is true, in particular, for one of our end customers who spent three years deciding on the necessity of purchasing an A300. Ultimately, the customer ran out of storage space and found that upgrading the existing FAS8040 would have cost three times more. Their current system has quadruple the space of the previous one.

With respect to moving large amounts of data, we are not allowed to move data outside of our data center. However, when we installed the new A300, the moving of data from our FAS8040 was seamless. We were able to move all of the data during the daytime and nobody knew that we were doing it. It ran in the background and nobody noticed.

We have not relocated resources that have been used for storage because I am the only full-time storage resource. I do have some people that are there to help back me up if I need some help or if I go on vacation, but I'm the only dedicated storage guy. Our systems architect, who handles the design for network, storage, and other systems, is also familiar with our storage. We also have a couple of recent hires who will be trained, but they will only be used if I need help or am not available.

Talking about application response time, I know that it has improved since we started using this solution, but I don't think that the users have actually noticed. They know that it is a little bit snappier, but I don't think they understand how much faster it really is. I noticed because I can look at System Manager or Unified Manager to see the performance numbers. I can see where the numbers were higher before, in places where there was a lot of disk I/O. We had a mix of SATA, SAS, and flash, but now we have one hundred percent flash, so the performance graph is barely moving along the bottom. The users have not really noticed yet because they're not really putting a load on it. At least not yet. Give them a chance, though. Once they figure it out, they'll use it. I would say that in another year, they'll figure it out.

NetApp AFF has reduced our data center costs, considering the increase in the amount of data space. Had we moved to the same capacity with our older FAS8040, it would have cost us four and a half million dollars, and we would not have even had new controller heads. With the new A300, it cost under two million, so it was very cost-effective. That, in itself, saved us money. Plus, the fact that it is all solid-state with no spinning disks means that we use less electricity. There may also be savings in terms of cooling in the data center.

As far as worrying about the amount of space, that was the whole reason for buying the A300. Our FAS8040 was a very good unit that did not have a single failure in three years, but when it ran out of space it was time to upgrade.

Systems Engineer at Nordstrom, Inc.

We only evaluated NetApp, and we are slowly looking at VMware, VDI, and the cloud.

We went with this solution primarily because of the stability. I also see it reducing a lot of storage use and cleaning up a lot of stuff. It is pretty good at this.

CJ
Sr Storage Engineer at a financial services firm with 1,001-5,000 employees

We primarily use NetApp AFF for file storage and VMware.

PY
Storage Administrator at an energy/utilities company with 1,001-5,000 employees

We use NetApp AFF to support our VMware environment.

MB
Specialist Senior at a consultancy with 10,001+ employees

Our primary use for this solution is NFS and Fibre Channel mounts for VMware and Solaris.

SB
Director at a tech services company with 11-50 employees

NetApp is very easy to set up.

All of the solutions from different vendors have setup wizards, but NetApp walks you through the steps and it is easy. It has NAS, CIFS, NFS, and block, all at once. Building the LUNs and going through the setup is done step by step. With other vendors, like EMC, you have to get a separate filer. There are a lot more questions that have to be asked on the front end.

NetApp also talks seamlessly with VMware, and most people are on VMware.  

MM
Senior Network Technical Developer and Support Expert at a healthcare company with 10,001+ employees

Prior to bringing in NetApp, we did a lot of Commvault backups. We utilize Commvault, so we were just backing up the data that way, and recovering that way. Utilizing Snapshots and SnapMirror allows us to recover a lot faster. We use it on a daily basis to recover end-users' files that have been deleted. It's a great tool for that.
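
For illustration only, here is a minimal Python sketch of the kind of lookup this recovery workflow relies on, using the ONTAP REST API (available in ONTAP 9.6 and later) to list the snapshots a deleted file could be copied back from; the cluster address, credentials, and volume name are hypothetical placeholders, not the reviewer's actual environment:

    # Minimal sketch: list the ONTAP snapshots of a volume through the
    # REST API (ONTAP 9.6+). The cluster address, credentials, and
    # volume name below are hypothetical placeholders.
    import requests

    CLUSTER = "https://cluster.mgmt.example.com"  # hypothetical management LIF
    AUTH = ("admin", "secret")                    # hypothetical credentials

    def list_snapshots(volume_name):
        # Resolve the volume name to its UUID.
        r = requests.get(CLUSTER + "/api/storage/volumes",
                         params={"name": volume_name}, auth=AUTH, verify=False)
        r.raise_for_status()
        records = r.json()["records"]
        if not records:
            raise LookupError("volume %r not found" % volume_name)
        uuid = records[0]["uuid"]

        # List that volume's snapshots.
        r = requests.get(CLUSTER + "/api/storage/volumes/%s/snapshots" % uuid,
                         auth=AUTH, verify=False)
        r.raise_for_status()
        return [snap["name"] for snap in r.json()["records"]]

    for name in list_snapshots("user_home"):
        print(name)  # e.g. hourly.2020-01-01_0905

From there, recovering a single file is typically just copying it back out of the volume's snapshot directory.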

We use Workflow Automation. Latency is great on our writes, although we do find that with AFF systems, and it may just be what we're doing with them, the read latency is a little bit higher than we would expect from SSDs.

With regard to the simplicity of data protection and data management, it's great. SnapMirror is a breeze to set up and utilize, and SnapVault is the same way.

NetApp absolutely simplifies our IT operations by unifying data services.

The thin provisioning is great, and we have used it in lieu of purchasing additional storage. Talking about the storage efficiencies that we're getting: on VMware, for instance, we are getting seven-to-one on some volumes, which is great.

NetApp has allowed us to move large amounts of data between data centers. We are migrating our data center from on-premises to a hosted data center, so we're utilizing this functionality all the time to move loads of data from one center to another. It has been a great tool for that.

Our application response time has absolutely improved. In terms of latency, before, when we were running Epic on Caché, the latency on our FAS was ten to fifteen milliseconds. Now, running off of the AFFs, we have perhaps one or two milliseconds, so it has greatly improved.

Whether our data center costs are reduced remains to be seen. We've always been told that solid-state is supposed to be cheaper and go down in price, but we haven't been able to see that at all. It's disappointing.

JC
Storage Architect at an energy/utilities company with 10,001+ employees

Our primary use for this solution is for production storage. We have got everything: VMware, SQL servers and file servers. It handles all of them.

RC
Data Protection Engineering at a manufacturing company with 10,001+ employees

This solution reduced our costs by consolidating several types of disparate storage. The savings come mostly in power consumption and density. One of our big data center costs, which was clear when we built our recent data center, is that each space basically has a value tied to it. Going to a flash solution enabled us to have a lower power footprint as well as higher density. This essentially means that we have more capacity in a smaller space. When it costs several hundred million dollars to build a data center, you have to think that each of those spots has a cost associated with it. This means that each server rack in there is worth that much at the end. When we look at those costs and everything else, it saved us money to go to AFF, where we have that really high density. It's getting even better because the newer models that are coming out will have even higher density.

Being able to easily and quickly pull data out of snapshots is something that benefits us. Our times for recovery on a lot of things are going to be in the minutes, rather than in the range of hours. It takes the same amount of time for us to put a FlexClone out with a ten terabyte VM as it does a one terabyte VM. That is really valuable to us. We can provide somebody with a VM, regardless of size, and we can tell them how much time it will take to be able to get on it. This excludes the extra stuff that happens on the back end, like vMotion. They can already touch the VM, so we don't really worry about it.
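
The size-independence described here falls out of copy-on-write cloning. As a toy illustration only (not NetApp's actual implementation), a clone copies block references rather than the blocks themselves, so the work is proportional to metadata, not to the data behind it:

    # Toy model of copy-on-write cloning (illustration only, not NetApp
    # code): a clone shares the parent's blocks, and only a later write
    # allocates new space. That is why cloning a ten terabyte VM takes
    # about as long as cloning a one terabyte VM.
    class Volume:
        def __init__(self, blocks):
            self.blocks = blocks                 # block id -> data

        def clone(self):
            # Copies references to blocks, never the blocks themselves:
            # metadata-sized work no matter how much data is stored.
            return Volume(dict(self.blocks))

        def write(self, block_id, data):
            # Copy-on-write: only now does the clone diverge from its parent.
            self.blocks[block_id] = data

    parent = Volume({i: b"x" * 4096 for i in range(1000)})  # stand-in for 10 TB
    fast_clone = parent.clone()        # instant: shares all 1,000 blocks
    fast_clone.write(0, b"y" * 4096)   # one new block; 999 still shared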

One of the other things that helped us out was the inline efficiencies such as the deduplication, compaction, and compression. That made this solution shine in terms of how we're utilizing the environment and minimizing our footprint.

With respect to how simple this solution is around data protection, I would say that it's in the middle. I think that the data protection services that they offer, like SnapCenter, are terrible. There was an issue in our environment where, if you had a fully qualified domain name that was too long or had too many periods in it, it wouldn't work. They recently fixed this, but clearly, after having a problem like this, the solution is not enterprise-ready. Overall, I see NetApp as really good for data protection, but SnapCenter is the weak point. I'd be much more willing to go with something like Veeam, which utilizes those direct NetApp features. They have the technology, but personally, I don't think that their implementation is there yet on the data protection side.

I think that this solution simplifies our IT operations by unifying data services across SAN and NAS environments. In fact, this is one of the reasons that we wanted to switch to this solution, because of the simplicity that it adds.

In terms of being able to leverage data in new ways because of this solution, I cannot think of anything in particular that is not offered by other vendors. One example of something that is game-changing is in-place snapshotting, but we're seeing that from a lot of vendors.

The thin provisioning capability provided by this solution has absolutely allowed us to add new applications without having to purchase additional storage. I would say that the thin provisioning coupled with the storage efficiencies are really helpful. The one thing we've had to worry about as a result of thin provisioning is our VMware teams, or other teams, thin provisioning on top of our thin provisioning, which you always know is not good. The problem is that you don't really have any insight into how much you're actually utilizing.
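
To make that blind spot concrete, with purely hypothetical numbers: each team only sees its own overcommit ratio, while the end-to-end ratio is the one that can actually run the array out of space.

    # Hypothetical numbers illustrating stacked thin provisioning: the
    # array overcommits physical capacity, and VMware thin-provisions
    # again on top, so no single layer sees the true exposure.
    physical_tb          = 100   # raw usable capacity on the array
    array_provisioned_tb = 250   # volumes/LUNs promised to hosts (thin)
    vm_provisioned_tb    = 400   # virtual disks promised to VMs (thin again)

    array_ratio  = array_provisioned_tb / physical_tb        # 2.5x, storage team's view
    vmware_ratio = vm_provisioned_tb / array_provisioned_tb  # 1.6x, VMware team's view
    end_to_end   = vm_provisioned_tb / physical_tb           # 4.0x, nobody's view

    print("storage sees %.1fx, VMware sees %.1fx, real exposure is %.1fx"
          % (array_ratio, vmware_ratio, end_to_end))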

This solution has enabled us to move lots of data between the data center and cloud without interruption to the business. We have SVM DR relationships between data centers, so for us, even if we lost the whole data center, we could failover.

This solution has improved our application response time, but I was not with the company prior to implementation so I do not have specific metrics.

We have been using this solution's feature that automatically tiers data to the cloud, but it is not to a public cloud. Rather, we store cold data on our private cloud. It's still using object storage, but not on a public cloud.

I would say that this solution has, in a way, freed us from worrying about storage as a limiting factor. The main reason is, as funny as it sounds, that our network is now the limiting factor. We can easily max out links with the all-flash array. Now we are looking at going back and upgrading the rest of the infrastructure to be able to keep up with the flash. I think that right now we don't even have a strong NDMP footprint because we couldn't support it, as we would need far too much speed.

SM
Systems Engineer at Cleveland Clinic

The primary use case for AFF is as SAN storage for our SQL database and VMware environment, which drives our treatment systems. We do not currently use it for AI or machine learning.

We are running ONTAP 9.6.

BC
Storage Manager at State of Nebraska

Switching to AFF has improved the performance of a lot of our virtual machines in a VMware environment. The number of support tickets that we receive has fallen to almost zero because of this, so it's been a real help for our virtual server support team.

System Administrator at Bell Canada

Currently, we are leveraging AFF for our VMware environment. We use it as storage for our customers and are leveraging it to provide a faster storage solution for VMware customers.

We are using it for block-level storage only, as of today.

CP
Unix Engineer at a healthcare company with 5,001-10,000 employees

We've been using AFF for file shares for about 14 years now. So it's hard for me to remember how things were before we had it. For the Windows drives, they switched over before I started with the company, so it's hard for me to remember before that. But for the NFS, I do remember that things were going down all the time and clusters had to be managed like they were very fragile children ready to fall over and break. All of that disappeared the moment we moved to ONTAP. Later on, when we got into the AFF realm, all of a sudden performance problems just vanished because everything was on flash at that point. 

Since we've been growing up with AFF, through the 7-Mode to Cluster Mode transition, and the AFF transition, it feels like a very organic growth that has been keeping up with our needs. So it's not like a change. It's been more, "Hey, this is moving in the direction we need to move." And it's always there for us, or close to being always there for us.

One of the ways we leverage data now that we wouldn't have been able to before — and we're talking simple file shares — is search. One of the things we couldn't do before AFF was really search those things in a reasonable timeframe. We had all this unstructured data out there. We had all these things to search for and see: Do we already have this? Do we have things sitting out there that we should have, or that we shouldn't have? We can do those searches in a reasonable timeframe now, whereas before it took so long that it wasn't even worth bothering.

AFF thin provisioning allows us to survive. Every volume we have is over-provisioned and we use thin provisioning for everything. Things need to see they have a lot of space, sometimes, to function well, from the file servers to VMware shares to our database applications spitting stuff out to NFS. They need to see that they have space even if they're not going to use it. Especially with AFF, because there's a lot of deduplication and compression behind the scenes, that saves us a lot of space and lets us "lie" to our consumers and say, "Hey, you've got all this space. Trust us. It's all there for you." We don't have to actually buy it until later, and that makes it function at all. We wouldn't even be able to do what we do without thin provisioning.

AFF has definitely improved our response time. I don't have data for you — nothing that would be a good quote — but I do know that before AFF, we had complaints about response time on our file shares. After AFF, we don't. So it's mostly anecdotal, but it's pretty clear that going all-flash made a big difference in our organization.

AFF has probably reduced our data center costs. It's been so long since we considered anything other than it that it's hard to say. I do know that doing some of the things that we do without AFF would certainly cost more, because we'd have to buy more storage to pull them off. So with AFF dedupe and compression, and the fact that they work so well on our files, I think it has probably saved us at least ten to twenty percent versus other solutions, if not way more.

JG
Vice President Data Protection Strategy at a computer software company with 1,001-5,000 employees

To a certain extent, we offer the client basic tech support, meaning if a disk drive has failed we can send someone to replace it. NetApp has a very large tech support organization for their premium customers, where they will support third-party products like Rubrik, VMware, and Commvault - all kinds of third-party products that touch NetApp.

Not every storage or NetApp deployment is: open the box, put the NetApp in the rack, turn the power switch on, and click through the wizard. In a hospital environment, it has to interface with the medical imaging department, so in that regard, no product is easier or more difficult than NetApp, other than in how the storage device interfaces with what it's storing.

No tech support is great if they didn't do a good job setting you up, and all tech support is great if they did a great job for you; I've had positive and negative experiences with every manufacturer's tech support. I would rate NetApp as one of the best. It's usually in-country. I have customers in South America, in the United States, in the UK, and in Asia. I don't stay up nights worrying about their tech support.

The partner community, such as myself and my engineering team, usually get involved if there is a tech support issue that is not a manufacturing defect or a bug as we can't control that. We can only control the environment that we helped architect.

SolidFire: VMware
CS
Founder, President and CEO with 201-500 employees

The solution is primarily used as an on-premises, VMware-based application provisioning platform.

Presales Engineer at a tech services company with 10,001+ employees

The performance with the QoS is its most valuable aspect.

The integration with VMware is excellent. There are different plugins to manage the SolidFire storage from the vCenter level. That I really appreciate. 

SolidFire even as a standalone storage platform is excellent. 

I would say in terms of architecture and in terms of functionality, the product is quite good. 

It's block-access storage; however, for block-access storage we have a guarantee of performance.

We have deduplication and encryption with this solution. We have almost all the standards needed for storage with SolidFire. In terms of protection, the level of protection we can set between the SolidFire nodes is very good.

Pure Storage FlashArray: VMware
TS
IT System Engineer at a tech services company with 501-1,000 employees

We put the solution onto the VMware environment and all the Microsoft SQL servers. We do synchronization between two data centers, and it has very low latency. We have just a few milliseconds of latency, which is really good performance, and near perfect.

HPE Nimble Storage: VMware
Lead Infrastructure Architect at ThinkON

Nimble Storage is our primary production storage vendor. We use it with VMware on a daily basis, including a new AFA5000 all-flash array for our DMS system.

Lead Infrastructure Architect at ThinkON

It has allowed us to upgrade our DMS to the latest version and reuse the older array as the DR storage for VMware SRM. The entire DMS system's performance has improved compared to the old one, which was on a previous-generation CS260G.

LL
Product Manager at a comms service provider with 11-50 employees

We are resellers. We provide products to our customers.

Unfortunately, we were not able to sell this product because it is too expensive. We use it in our own cloud. We created a VMware VCPP cloud to provide VMware services to our enterprise customers.

MR
Technical Manager at a tech services company with 11-50 employees

The companies that bought this solution from us use it for VMware. They have also run some Oracle on the Red Hat operating system. It's mostly used for the VMware environment. The companies we provide the solution to are generally medium-sized; one of them is a hospital and the other is an agency that controls the sale of gas. We are partners of HPE Nimble Storage and I'm the technical manager of the company.

LB
Systems Engineer at a tech services company with 51-200 employees

For me, Nimble has two main problems.

There is no active-active controller, which means that we can only have one controller online at a time. Replacing the controller is what I see as the only major issue, although I'm not sure that HPE can do anything about this.

Nimble has a limit on objects. We have it configured for VMware, so if you have a lot of machines then you have a problem because of this limit. Also, if you have a virtual desktop environment with a lot of VMs, such as 2,000 to 3,000, then it's a problem because of that limit.

Team Leader at PT.Helios Informatika Nusantara

HPE Nimble Storage has simple management for the end user. Customers will generally provide us with feedback about performance upon installation of the solution, including in respect of VMware. Our customers find it helpful when the work involves configuring integrations, such as that of vCenter with InfoSight. When we do encounter a tech problem, such as a VM with high latency, it is, surprisingly, very easy to manage.

HPE 3PAR StoreServ: VMware
System Administrator at ON Semiconductor Phils. Inc.

We have deployed HPE 3PAR systems on all database-related storage including MSSQL and Oracle. All of the SQL databases are running on VMware, and the database-related storage is mounted as RDM. The Oracle database is mounted directly to HPE 3PAR with remote-copy enabled.

TK
Sr. Storage Engineer at a manufacturing company with 10,001+ employees

We are using it for Oracle databases. We are also using it for VMware and NetBackup. It's one of the storage solutions for NetBackup.

PS
IT Infrastructure and Operational Lead at a consumer goods company with 201-500 employees

We use the 3PAR to host our SQL Database and Oracle Database. We also use it for VMware and vCenter.

IO
IT Infrastructure & Data Center Operation Engineer at Ministry of Communications and Information Technology (MCIT), Egypt

I use 3PAR as the standard storage. The main production environment is VMware, and it is connected to 3PAR across a fabric switch; the fabric switch between them is an MDS switch and Notebook 8. We also have a Hyper-V environment, which is connected to the same storage. The main service is the Exchange service. I have a public cloud and a private cloud. I use 3PAR as the private cloud.

CP
SAN Consultant at a tech services company with 201-500 employees

You have to be careful about exactly what your usage is. You really need to understand what you want to do with the controller. You need to understand what your total IOPS performance will be before you do any sizing, so that you can size appropriately. Otherwise, as with any storage, you could end up with an undersized controller. That would then cause you frustration, and business units would suffer as you would have slower performance than what you expected.

A lot of times, customers get themselves in trouble because they make a purchase or size a controller and later find that they undersized it. Sometimes the undersizing occurs because a company starts putting more demand on the new storage, wanting to put one more device on the storage product or connect more to it, which wasn't what it was sized for at the time. If you didn't plan for growth in the utilization of the controllers, the product doesn't look like it's performing properly; however, indeed it is. It's just not big enough to handle the new user profile that you have put on it.

Microsoft Exchange, VMware, SQL, and Oracle - those are the types of applications that were used within our environment. We have several different departments or groups that get access to that storage.

LB
Systems Engineer at a tech services company with 51-200 employees

I want to build a MetroCluster in VMware.

Hitachi Virtual Storage Platform F Series: VMware
Solution Architect, IT Consultant at Merdasco - Rayan Merdas Data Prosseccing

We have read in the latest newsletters that in 2020 there is going to be more data, more than four hundred zettabytes. It's a huge amount of data, and you will need the right platform to process it. More than people, we will need a platform that can process that amount of data in less than a millisecond. So we are looking for high-speed enterprise storage, such as the VSP.

If you compare to other solutions, Hitachi is more complex, but the platform is improving and it's not as difficult as it used to be.

We are facing new technologies, such as container technology from Google. It's the new way of application virtualization. I think that Hitachi and other companies should follow these technologies and integrate their products with new, container-based technologies. I think that cloud integration is important, along with vVol technologies from VMware.

There will be many challenges, but we need more integration between the F series storage and new technologies for cloud, vVols, and containers.

I would like the interface to be simplified more than it is. The interface can be improved with new technologies such as HTML5, which is being used by some storage vendors.

Hitachi's interface management is not as easy as that of the EMC Unity series. It would be better to use HTML5 for the management systems.

IBM FlashSystem: VMware
KV
Director Technical at a tech services company with 11-50 employees

We are solution providers. We deploy solutions around VMware. Typically, we deploy data protection and disaster recovery of workloads, both in the cloud and on-premises.

If I need to know about a platform or the base platform on which I'm working, I try to read up on their model. We are also storage integrators and solution providers.

The primary use case is for storage, enterprise workloads, and databases.

Dell EMC Unity XT: VMware
Solution Architect, IT Consultant at Merdasco - Rayan Merdas Data Prosseccing

I'm a data center solution architect at Merdasco and based on our customers' needs, we build solutions for them. This product is very flexible, powerful, and suitable for many environments.

Dell EMC Unity OE provides block LUN, VMware Virtual Volumes (VVols), and NAS file system storage access. Multiple, different storage resources can reside in the same storage pool, and multiple storage pools can be configured within the same DPE/DAE array.

VS
Tech Lead at Complete Enterprise Solutions

This solution is our primary storage for all workloads.

It has good replication and integration with VMware.

Huawei OceanStor Dorado: VMware
MS
Head of Research & Development at a construction company with 1,001-5,000 employees

The OceanStor V3 5000 was, when we started, the first real NVMe all-flash storage solution. With NVMe, the performance, particularly the seek latency, was much more impressive, and that was unmatched. When we later sold our machine, the supplier did not have any in stock. Only later were Dell, with PowerMax, and IBM able to introduce new solutions integrating NVMe.

The other advantage is the HyperMetro functionality, specific to VMware or the virtualized environment, which gives more reliability and higher capability. It is, therefore, possible to have all the data synchronized while using less storage. All the other features inside the system are very reliable, and the installation time was shorter. We use less space for storage now - it decreased from two racks to only four rack units. That is really impressive.

JS
Senior Storage Consultant at a tech services company with 51-200 employees

We are an IT distributor and this is one of the storage solutions that we implement for our clients. The primary use case is for VMware virtualization, although it is also used for database system storage. Oracle and other SQL databases require a lot of performance in terms of I/O per second, which is met by using OceanStor Dorado.

AY
IT Service Manager at a financial services firm with 1,001-5,000 employees

We primarily use it for our VMware and some of our systems.

Dell EMC SC Series: VMware
Managing Director at Consult BenJ Ltd

The solution is used for shared storage for the ESX cluster, VMware, or vCenter cluster. It is virtual machine hosting space for virtual servers. The primary reason we use the solution is to host the core infrastructure, the virtual servers, including file servers, domain controllers, application servers, SQL servers, etc. Basically, the servers that run the business.

Storage Architect at a healthcare company with 10,001+ employees

We use it for multiple databases. We use it for Oracle and for SQL. We also use it for file systems — Oracle, SQL, file system storage. Most of our use cases involve Oracle, SQL, VMware, and large file systems.

I am a storage architect. I have a storage administrator that works with me as well.

Internally, within our company, there are a few dozen employees using this solution. Externally, we literally have millions of people that hit that storage system every day.

As far as our database administrators, they're always looking at the storage performance. Some of them actually have read-only log-ins to the storage array itself. They can log in and look at directly what the storage performance is for their database.

Currently, we are not using this solution extensively because it's becoming a sunset solution. There's no option to increase usage. It's like saying you want to buy a '65 Mustang — no, you can't get one brand new, they don't make those anymore. There's no expansion being done because the product's no longer available. 

TN
Information Technology Operations Manager at Weber Metals

This is our SAN product for use with a traditional VMware setup, using VMware clusters. We have the 150 terabyte enclosure.

Lenovo ThinkSystem DM Series: VMware
FA
IT Solutions Architect at nds Netzwerksysteme GmbH

HPE Nimble, HPE VSA, VMware VSAN, Lenovo DE-Series

Lenovo ThinkSystem DE Series: VMware
DP
Solutions Developer at Next Dimension Inc.

We sell the Lenovo ThinkSystem DE4000 or DE6000 series storage arrays. The clients that we sell the products to are mostly manufacturers, and the use case for ThinkSystem is always production, for storing and operating their virtual environment, which means the units are almost always used for VMware. It is just typical storage for production and virtual machines.

FG
IT Department - System Administration at a healthcare company with 501-1,000 employees

We use the DE series, the DE2000 and DE4000. It's DE all-flash storage.

We use Lenovo with the VMware solution. We have VMware with virtual machines and the production storage is the Lenovo DE4000 Hybrid. The solution for backup is Lenovo Storage 2000.

For the moment, we use the solution with our three hosts, for VMware and for storage.

HPE Primera: VMware
Head of IT Infrastructure Solutions at a tech services company with 51-200 employees

There really isn't any aspect of the solution that needs improvement for the customer other than its price.

It is a very good solution, but the Republic of Georgia is a very small country, and customers in both the government public sector and in the private sector do not have money to purchase enterprise or high-performance solutions. They are looking at mid-range or mid-class solutions.

I can say that they need to simplify the solution. As with SimpliVity, they need a lot of integration with virtualization technologies, for example, putting some add-ons or plugins in vCenter. vCenter is the management software for VMware virtualization.

Secondly, it would be better if they could simplify the deployment of Primera. Thirdly, if you have already purchased Primera and need to scale your infrastructure by buying more hardware disks, you will need to purchase the Rebalance Service from HPE. They need to improve that methodology.

The customers need solutions that do not require a lot of administrative tasks.

Dell EMC PowerMax NVMe: VMware
VF
Presales Engineer Information System and Security at a tech services company with 10,001+ employees

The primary use case is data storage consolidation for mission-critical applications, like billing, the charging system, mobile payment, and the intelligent network. Virtualization and cloud infrastructure are another use case, where the customer is using many solutions for virtualization, like Hyper-V, Oracle Virtual Machine, OpenStack, VMware, Solaris, Linux, Kubernetes, and Docker. Disaster recovery was also a main focus of the customer, to guarantee RPO and RTO. The last use case was a NAS solution through the eNAS provided by PowerMax. The previous eNAS, hosted by a VMAX 10K, had its limits in terms of the size limit for a file system.

CM
Storage Team Manager at a government with 10,001+ employees

What is most valuable to us is the fact that it has multiple engines, and each of those engines works in conjunction in a grid environment. That's important to us because we have so many different use cases. One example might be that a state trooper pulls someone over at 2 o'clock on Sunday morning and wants to go into the LEIN system, which is the law enforcement information network. He wants to see who this person is that he has pulled over and gather as much information as he can on that person. We can't predict when he's going to pull someone over, nor can we predict when backups are actually going to be taken against the volume that he's going to for that information. The PowerMax allows us to do backups of that volume at the same time that he is looking up the data he needs, and there's no impact on performance at all.

The performance is very good. Our predominant workloads are all less than 5 milliseconds and it's most common to have a sub-1-millisecond response time for our applications. In terms of efficiency, we've turned on compression and we're able to get as high as two-to-one compression on our workloads, on average. Some workloads can't compress and some can compress better, but on average, we're a little bit more than two-to-one.

The solution’s built-in QoS capabilities for providing workload congestion protection work pretty well because we actually don't even turn on the service level options. We leave it to the default settings and allow it to decide the performance. We don't enforce the Platinum, Gold, or Silver QoS levels. We just let the array handle it all, and it does so.

We also use VPLEX Metro, which is a separate service offering from Dell EMC. It does SRDF-like things, but it's really SRDF on steroids. Of course it copies data from one data center to the other, but with the VPLEX, not only does it copy it synchronously, but it also has coherent caching between both data centers. That means we are literally in an Active-Active mode. For instance, we can dynamically move a VMware host that is in one data center to another data center, and we're not just doing vMotion with the host. The data is already in there at the other data center as well. It's all seamless. We don't have to stop SRDF and remount it on another drive. It's already there.

Infrastructure Lead at Umbra Ltd.

It was a pretty complex process in the beginning: migrating data, verifying everything is good to go, standing up our volumes, and things of that nature. Once everything got going, it was a lot easier to understand and manage.

Deployment took about two weeks’ time, not including transfer times. With transfer times, it was closer to a month.

We set up our PowerMax, attached the source to VMware, and then migrated all of our VMs off of our old storage array into the new one. Once we verified everything was good, we turned off the old storage array and went from there.

Pure FlashArray X NVMe: VMware
VP Infrastructure & Security at a financial services firm with 51-200 employees

I would absolutely recommend using it. I would also suggest negotiating and testing it. I bought a very small system of 10 terabytes that I put in one of our labs for testing, so that my team could learn it and I could play with it. We tested it, and after we were comfortable with the capabilities of the system and building things in VMware, which is a really critical part of the whole integration, we tested three different solutions, from HP, Dell, etc. After the testing, it was clear to us that the Pure FlashArray X NVMe was the easiest to manage and configure and had the best performance that we had seen in all the arrays. We are not testers, but we could tell. We could see the speed at which the databases came up and everything else. After testing, you will be convinced that Pure FlashArray X NVMe is probably the best box, or right there in terms of performance. We tested in early 2019. There might be another solution that is doing better today.

I would rate Pure FlashArray X NVMe a nine out of ten. The only reason I won't give it a ten is the price. Its feature set is pretty complete. I'm pushing it right now. It is like buying a sports car and then complaining that you don't have a big trunk to put a lot of luggage in. You are complaining about the wrong thing. You bought the thing because it is fast. Similarly, we bought it because it is fast. From that perspective, whether they can address NAS or other things like that is just icing on the cake for me. Its price is a little high right now. Otherwise, I would have given it a ten.

Implementation and Support Engineer at PRACSO S.R.L.

To be able to do the welcome files simultaneously on a lower version would be helpful. 

In general, we don't really have any pain points when dealing with the solution.

The solution should improve its logon requirements.

I'd like to see the product implement active replication for platforms such as VMware.

Dell EMC PowerStore: VMware
CTO at Universita' degli Studi di Pisa

The use case for this solution is based on VMware, and that is why we chose PowerStore X. During the first period of the pandemic, we decided to use our VMware infrastructure for HPC workloads. We were looking for high-performance storage that could be inserted inside our VMware environment in an easy way. PowerStore, which had just been announced, seemed like the right solution, so we decided to buy it. We have this storage environment inside the virtual HPC.

In our environment we are doing medical analysis related to genomic workloads. The data are acquired remotely from experiments and stored on the PowerStore. The PowerStore is exposed to the user through virtual machines, and the data are analyzed in this environment.

BC
Chief Information Officer at a computer software company with 5,001-10,000 employees

We replaced an older, high-performance storage device that was very expensive. With PowerStore, we were able to achieve the IOPS, and we were also able to get a data compression rate significantly above what we had expected. We were able to retire that older, very expensive piece of storage by bringing in the PowerStore. It's been faster and cheaper than we had expected, per terabyte.

Another reason that we were after this machine was PowerStore's VMware integration. We're a very large VMware customer. Some 98 percent of our workload runs on VMware.

Pavilion HyperParallel Flash Array: VMware
JL
Network Manager at a transportation company with 1,001-5,000 employees

We run a virtualized workload. Right now, we run everything on Pavilion, which includes our high-performance databases and engineering tasks as well as Exchange and file shares. All of that runs on our Pavilion HyperParallel Flash Array.

We use block storage for our VMware infrastructure and are a complete VM shop. All of our file servers run on VMs, which use block storage on the Pavilion device.

Hitachi Virtual Storage Platform 5000 Series: VMware
AM
Storage and Backups Manager at a computer software company with 1,001-5,000 employees

We are a data center located in Mexico. We use a wide array of storage solutions from different companies. Our largest storage installation is on Hitachi.

We use this solution to share storage between our cloud environments. We have a VMware cloud in each of our data centers, with two locations in Mexico City. We have a replica between these two sites. We also have another cloud suite for OpenStack.

IntelliFlash: VMware
HM
Lead Systems Engineer at a retailer with 5,001-10,000 employees

We use IntelliFlash for our virtualization environment. We use VMware, and it's used to store the virtual machines.

CS
IT Manager at an agriculture company with 1,001-5,000 employees

Once you started pushing it, it would start to not respond properly and then we would have to reach out to support, to try to figure out where these issues came from. They couldn't tell us where the problems were. Their advice was simply to take some load off and then it would work fine. 

Sometimes there was imprecise information. If there's an issue with the system, it would have been helpful to be able to pinpoint where the issue was coming from, for example, whether it was network latency, a slow link to a VM, or a slow link to some kind of switch.

I know they can't look at all the networks; however, as the solution is connected to VMware, the SAN, and the switch, there should be more information in the system. I can't pinpoint anything, which is a problem. Their reporting needs to be much better.

Zadara: VMware
CEO at Momit Srl

We use iSCSI and Object multi-protocols. These simplify our operations a lot because otherwise we would need a lot of different products or interconnections. With Zadara Storage Cloud, all of this is just one type of connection. It works only with Ethernet, which means no Fibre Channel nor other protocols, like InfiniBand. It is just Ethernet, which is easy and simple. You can just use the protocols that you need. Today, this means we are not using NFS. But, never say never. Probably tomorrow or one day, if someone would just ask us to implement something mounted via NFS, we are ready to go. This is good because we don't need to buy another hardware or additional features. The best part is the fact that the cost covers everything, so you don't have to activate features by license, e.g., we don't need to pay more to activate NFS, CIFS, or iSCSI because we are not using them today. We still have them. So, we are free to use them whenever we want, which is good.

All our customers report the same story when we ask for a case study. With Zadara Storage Cloud, you simplify the management, which is absolutely true. 

Zadara Storage Cloud's agility is the most important part because all customers want agility today. Everyone wants quick answers, support, and features as well as the ability to provide storage with just some clicks or a simple request. 

Zadara Storage Cloud is elastic in all directions. We create a lot of events (marketing events, technical events, and public speaking) with VMware. They have always been available to sponsor, participate, or just share their experience. Even with features: we requested some specific features for the Italian market, and they just put them into the roadmap, which was great.

GW
Platform and Infrastructure Manager at a tech services company with 1,001-5,000 employees

We are a disaster recovery company and we used Zadara as a storage platform for all of our disaster recovery solutions. We do not make use of the computing and networking services they offer. Rather, we only use the storage facility.

Our main environment is Zadara Storage, and then we have multiple VMware and Hyper-V virtual clusters that run the services we provide to our customers. We've also got numerous recovery platforms as well, which we can recover customer's environments onto. Zadara is a key underpinning of that because, without that common storage layer and the services running on top of that, we wouldn't have a business to run.

It's key for us, as a DR specialist, that we have the confidence that all of our systems and services are available all the time. Picking a vendor, be it Zadara or any other vendor, is really important to us because we have to trust that they're going to be there 24/7, every day.

IA
Chief Information Officer at a tech services company with 201-500 employees

One of the most useful features is that they provide iSCSI as a service. That was very useful for us because it allows us to simply mount their storage into our servers and just utilize it as if the storage is local. That's number one.

Number two, their reliability and fault tolerance are really unmatched. We were able to upgrade our storage, add more drives, add volumes, replicate volumes, and change sizes of volumes, all with zero downtime. It's very impressive.

These two features are extremely useful for us: iSCSI as a service, and the fact that the system is highly resilient for fault tolerance.
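
To picture what "mount and use as if local" involves on a Linux host, here is a rough sketch using the standard open-iscsi tools; the portal address and target IQN are hypothetical placeholders, not Zadara's actual values. After login, the remote volume appears as a local block device.

    # Rough sketch of attaching an iSCSI volume on a Linux initiator with
    # the standard open-iscsi tools. The portal address and target IQN are
    # hypothetical placeholders. After login, the LUN shows up as a local
    # block device (e.g. /dev/sdX) that can be partitioned and mounted.
    import subprocess

    PORTAL = "vsa-00000001.example.com:3260"       # hypothetical iSCSI portal
    TARGET = "iqn.2011-04.com.example:vol-000001"  # hypothetical target IQN

    def run(*args):
        print("+", " ".join(args))
        subprocess.run(args, check=True)

    # Discover the targets the portal offers, then log in to ours.
    run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL)
    run("iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login")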

We typically use iSCSI and to a lesser degree, object storage. We have not been using NFS, so I can't comment on that, but the fact that the solution offers all of those is certainly a big differentiator for people who are looking for those kinds of solutions. It means that they don't have to have multiple vendors or multiple systems to put together to support those different solutions.

For example, if I need to have S3, I don't have to go to Amazon or anybody else because I have it available within Zadara. iSCSI is exceptionally rare to find as a service and the fact that they support it is a major competitive advantage, and the same is true with CIFS and NFS. You would need extra plugins and extra add-ons from other vendors like VMware in order for you to do this, but Zadara does it out of the box, which is nice.
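
The S3 point is easy to picture: an S3-compatible object store differs from Amazon's mainly in the endpoint you hand the client. Here is a minimal sketch with boto3, assuming an S3-compatible endpoint; the endpoint URL, credentials, and bucket name are hypothetical.

    # Minimal sketch of using an S3-compatible object store without going
    # to Amazon: point boto3 at the provider's endpoint instead of AWS.
    # The endpoint URL, credentials, and bucket name are hypothetical.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://object-store.example.com",  # hypothetical endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.upload_file("nightly-backup.tar.gz", "dr-backups", "2020/nightly-backup.tar.gz")
    resp = s3.list_objects_v2(Bucket="dr-backups")
    print(resp.get("KeyCount", 0), "objects in bucket")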

Zadara can be configured for on-premises, colocation, and cloud-based deployments and we use a mixed-mode. We provide our services in a cloud capacity but we're not in the public cloud. We're not in AWS or Azure, for example. We have our own private cloud and Zadara is working beautifully in this hybrid mode. They do have an on-premises solution as well, which we have not yet taken advantage of but we are planning to.

The fact that they can provide us services that are outside of our data center is of utmost importance. If something happens to my data center, I know that I have an off-site remote, either backup or remote system that I can tap into and continue my operations. The fact that they can provide me an on-premises solution, which is really the entire stack in my data center, where I need it to be low latency and high capacity storage, is certainly something that we'll be looking into. It's nice to know that those options are available from the same vendor.

At this point, we only use Zadara for storage, but we are about to use some of the other services. In terms of agility, the platform has been working flawlessly. All their SLAs have been met. We have been adding more storage, we have been upgrading from one engine to another, and all of that was happening without any kind of outage.

I would categorize Zadara as elastic in all directions for the fact that we can add more capacity on the fly. We can add more drives or more cache storage. We can increase the engine if we need to have it faster with more memory, or with more CPU power. The fact that we can do all of that with a click of a button and it happens, it's provisioned relatively quickly, and we pay by the hour rather than paying for it outright, allows us to scale without letting them know. It is easy because they don't have to provision special hardware just for us and it can happen fast. For example, if all of a sudden my business experiences an increase, I can react within the hour. Any change to the billing is reflected immediately.

Using this solution has increased performance in our environment because we can offload storage capacity elsewhere, which we know is infinite in size. This alleviates a lot of the headaches, it's been consistent, and it has worked pretty well. It would be difficult to estimate our performance increase because we don't measure it.

Our data center footprint has been reduced using Zadara. We have fewer storage systems today in our data center which means less power consumption, less environmental impact, less heat, and they take up a smaller physical footprint on the racks. I cannot say exactly how much, but it's definitely at least half a rack.

In terms of saving resources and redeploying people to more strategic projects, I can say that it allows us to support more storage with the same number of people. But I cannot say that the next time I bought storage, I would have had to add another person. I really cannot make that kind of distinction. The one exception is that they are helping us in our West Coast data center because, over there, I do not have any staff. Because Zadara is helping me with storage there, I can say that otherwise there would have been a staff member managing the on-premises storage locally. Instead, Zadara is taking care of that, leaving me needing one less person.
