
Top 8 Modular SAN (Storage Area Network) Tools

  1. HPE 3PAR StoreServ: "The solution is easy to install. The technical support is very good."
  2. IBM FlashSystem: "The performance of IBM FlashSystem is very good. The new technology and high throughput have given us more confidence in the solution. The management of the system has improved, and we can control the monitoring system alerts and multiple FlashSystems with the Enterprise Cloud Edition, which is free. The migration of recently stored data to a new flash array is much easier. You can move your data because you can utilize it externally."
  3. NetApp FAS Series: "The solution is easy to use. It can use both SAN and NAS at the same time."
  4. HPE StorageWorks MSA: "The solution is very easy to use; it is not complex for any user, including non-technical individuals. The product is very easy to deploy."
  5. Hitachi Virtual Storage Platform G Series: "We have many different types of replication, such as remote and local replication. All of these features and licenses are already available; they are basic features of the current model. Additionally, the performance has been good in our experience."
  6. Dell EqualLogic PS Series: "The solution can scale. Technical support, when we had access to it, was very good."
  7. Oracle Sun ZFS Storage Appliance: "I like its storage capacity, quick access to the data, speed, and overall storage management."
  8. IBM System Storage DS5000 Series: "The initial setup is pretty straightforward. The stability is excellent."

Advice From The Community

Read answers to top Modular SAN (Storage Area Network) questions. 542,029 professionals have gotten help from our community of experts.
Rony_Sklar
Hi community, what are the key factors that businesses should take into consideration when choosing between traditional SAN and hyper-converged solutions?
Fernando Salado
User

Well, there are many things to consider, but I will start with scalability.


In HCI solutions, scalability is achieved by adding nodes, while in dHCI (disaggregated HCI, i.e. hyper-converged solutions that use a SAN) you can expand the compute nodes or the storage independently. That makes dHCI more flexible, and you can address your compute or storage needs in a tailored way.

The other thing to consider is availability.


HCI solutions base their availability on RAIN (Redundant Array of Inexpensive Nodes). This means you have more than one copy of your data, located on different nodes, so if a node fails your data remains protected and accessible. Moreover, it is extremely easy to set up a stretched cluster.

SAN-based architectures usually include just one copy of your data, unless you use more than one storage system and a replication solution.
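To make the capacity side of that trade-off concrete, here is a minimal sketch in Python (all node and array sizes are illustrative placeholders, not vendor figures) comparing usable capacity in an HCI cluster that keeps two or three copies of each block with a single-copy SAN array:

```python
# Illustrative only: usable-capacity math for HCI replication vs. a single-copy SAN.
# The node/array sizes and RAID overhead below are hypothetical placeholders.

def hci_usable_tb(nodes: int, raw_tb_per_node: float, replication_factor: int) -> float:
    """Usable capacity when every block is stored replication_factor times."""
    return nodes * raw_tb_per_node / replication_factor

def san_usable_tb(raw_tb: float, raid_overhead: float = 0.20) -> float:
    """Usable capacity for a single-copy SAN array, minus an assumed RAID/spare overhead."""
    return raw_tb * (1 - raid_overhead)

if __name__ == "__main__":
    print(hci_usable_tb(nodes=4, raw_tb_per_node=20, replication_factor=2))  # 40.0 TB usable
    print(hci_usable_tb(nodes=4, raw_tb_per_node=20, replication_factor=3))  # ~26.7 TB usable
    print(san_usable_tb(raw_tb=80))                                          # 64.0 TB usable
```

The point is simply that RAIN-style redundancy buys node-level availability by spending raw capacity, while a single-copy SAN needs a second array plus replication software to get comparable protection.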

Another thing to consider is operations. HCI environments are easy to use, set up, and scale. On the other hand, SAN-based solutions require more knowledge and maintenance effort (fabric OS updates, HBA firmware, and so on).

Tim Williams
Real User

Whether to go 3 Tier (aka SAN) or HCI boils down to asking yourself what matters the most to you:

- Customization and tuning (SAN)
- Simplicity and ease of management (HCI)
- Single number to call support (HCI)
- Opex vs Capex
- Pay-as-you-grow (HCI)/scalability
- Budget cycles

If you are a company that only gets budget once every four or five years and can't get incremental capital for storage, pay-as-you-grow becomes less viable, and HCI is designed with that in mind. It doesn't rule out HCI, but it does reduce some of the value gained. Likewise, if you are on a budget cycle that replaces storage and compute at different times, and you have no means to repurpose them, HCI is a tougher sell to upper management: HCI requires you to replace both at the same time, and sometimes capital budgets don't work out that way.
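As a rough illustration of that budget-cycle point, the sketch below compares buying five years of projected capacity up front with adding HCI nodes only as growth happens; every price and growth figure is a made-up placeholder, so only the shape of the comparison matters:

```python
# Hypothetical cost sketch: upfront 3 Tier purchase vs. pay-as-you-grow HCI nodes.
# All prices and growth figures are made up for illustration.

UPFRONT_3TIER_COST = 500_000          # buy 5 years of projected capacity on day one
HCI_NODE_COST = 60_000                # cost of one HCI node (compute + storage)
INITIAL_HCI_NODES = 4

def hci_spend(actual_growth_nodes_per_year: int, years: int = 5) -> int:
    """Total spend when nodes are added only as real growth demands them."""
    return (INITIAL_HCI_NODES + actual_growth_nodes_per_year * years) * HCI_NODE_COST

print(hci_spend(actual_growth_nodes_per_year=2))  # growth as projected: 840000
print(hci_spend(actual_growth_nodes_per_year=0))  # growth never materializes: 240000
print(UPFRONT_3TIER_COST)                         # fixed regardless of growth: 500000
```

The crossover point depends entirely on your own quotes, but it shows why the answer changes with how and when budget is released.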

There are also some workloads that will work better on a 3 Tier solution than on HCI, and vice versa. HCI works very well for anything except VMs with very large storage footprints. One of the key aspects of HCI performance is local reads and writes; a workload that consists of a single large VM will require essentially two full HCI nodes to run and will need far more storage than compute. Video workloads come to mind: body cameras for police, surveillance cameras for businesses and schools, graphics editing. Those workloads don't reduce well and are better suited to a SAN with very few features, such as an HPE MSA.
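A back-of-the-envelope sizing sketch makes the large-VM problem visible; the node capacity below is a hypothetical figure and a replication factor of 2 is assumed:

```python
import math

# Hypothetical HCI node: what one node contributes to the cluster.
NODE_USABLE_TB = 20       # usable capacity per node after formatting/overhead
REPLICATION_FACTOR = 2    # each block is stored on two nodes

def nodes_needed_for_vm(vm_storage_tb: float) -> int:
    """Minimum nodes whose combined capacity can hold the VM's data plus its replica."""
    return math.ceil(vm_storage_tb * REPLICATION_FACTOR / NODE_USABLE_TB)

# A single 38 TB video-surveillance VM consumes the capacity of ~4 nodes,
# even though its CPU/RAM needs might fit on one.
print(nodes_needed_for_vm(38))   # 4
print(nodes_needed_for_vm(8))    # 1 -- a typical general-purpose VM barely registers
```

In other words, the cluster has to be bought for the VM's storage footprint times the replication factor, even if its CPU and RAM would fit on a single node.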

HCI runs VDI exceptionally well, and nobody should ever do 3 Tier for VDI going forward. General server virtualization can realize the value of HCI, as it radically simplifies management.

3 Tier requires complex management and time, as you have to manage the storage, the storage fabric, and the hosts separately and with different toolsets. This also leads to support issues, as you will frequently see the three vendor support teams blame each other. With HCI, you call a single number and they support everything. You can drastically reduce your opex with HCI by simplifying support and management. If you're planning for growth up front and cannot pay as you grow, 3 Tier will probably be cheaper. HCI gives you the opportunity to not spend capital if you end up not meeting growth projections, and to grow past planned growth much more easily, since adding a node is much simpler than expanding storage, networking, and compute independently.

In general, it's best to start with HCI and work to disqualify it rather than the other way around.

ShivendraJha
Real User

There are multiple factors you should look at when selecting one over the other.
1. Price: HCI is cheaper if you are refreshing your complete infrastructure stack (compute/storage/network); however, if you are just buying individual components, such as compute or storage only, then 3-tier infrastructure is cheaper.
2. Scalability: HCI is highly and easily scalable.
3. Support: With a 3-tier architecture, you have multiple vendors/departments to contact for support on the solution, whereas with HCI you contact a single vendor for all your issues.
4. Infrastructure size: For a very small infrastructure, a 3-tier architecture based on an iSCSI SAN can be a little cheaper. However, for a medium or large infrastructure, HCI comes out cheaper every time.
5. Workload type: If you are running VDI, I strongly recommend HCI. Similarly, for a passive secondary site, 3-tier could be OK. Run benchmarking tools to understand your requirements.

I am sure HCI can do everything, though; one rough way to weigh these factors is sketched below.
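The weights and 1-to-5 scores in this sketch are entirely subjective placeholders, not a formal methodology; it just shows one way to turn the list above into a starting point:

```python
# Toy weighted-score helper for the factors listed above.
# Weights and scores are subjective placeholders; adjust them to your situation.

FACTORS = ["price", "scalability", "support", "infra_size_fit", "workload_fit"]

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Sum of score * weight over the factors (scores on a 1-5 scale)."""
    return sum(scores[f] * weights[f] for f in FACTORS)

weights = {"price": 0.3, "scalability": 0.2, "support": 0.2,
           "infra_size_fit": 0.15, "workload_fit": 0.15}

hci   = {"price": 3, "scalability": 5, "support": 5, "infra_size_fit": 4, "workload_fit": 5}
tier3 = {"price": 4, "scalability": 3, "support": 2, "infra_size_fit": 3, "workload_fit": 3}

print("HCI   :", weighted_score(hci, weights))    # 4.25
print("3-tier:", weighted_score(tier3, weights))  # 3.1
```

Adjust the weights to your own priorities (for example, weight price much higher if budget is the binding constraint) and the comparison can flip.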

reviewer1234203 (Pre-sales Engineer at a tech services company with 11-50 employees)
Real User

There are so many variables to consider.

First of all, keep in mind that a trend is not a rule; your needs should be the basis of the decision, so don't choose HCI just because it's the new kid on the block.

To start, think about your budget. A SAN is costly if you are building the infrastructure from scratch: cables, switches, and HBAs all cost more than traditional LAN components. A SAN also requires more experienced experts to manage the connections and troubleshoot issues. On the other hand, a SAN has particular benefits for sharing storage and server functions: you can keep disk and backup on the same SAN and use specialized backup software and features to move data between storage components without directly impacting server traffic.

A SAN has cabling details to consider, such as distance and speed: fiber quality is critical to reach the required distance, the greater the distance the lower the supported speed, and transceiver costs can be your worst nightmare. On the other hand, a SAN can connect storage boxes hundreds of miles apart, while the LAN cabling used by HCI has roughly a 100-meter limit unless you add a WAN, repeaters, or cascaded switches, which introduces some risk into the scenario.

Think about required capacity: do you need terabytes or petabytes? A few dozen TB can be fine on HCI, but if you are talking petabytes, think SAN. What about availability? Several commodity nodes replicating around the world, within the limits of latency, can work with HCI; but if you need the highest availability while replicating a large amount of data, choose a SAN.
As for speed, if that is your pain point: the LAN for HCI starts at a minimum of 10 Gb and can rise to 100 Gb if you have the money, while the SAN is available only up to 32 Gb, and your storage controllers must run at the same speed, which can drive the cost sky-high.
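For a rough feel of the raw speed comparison, converting port line rates to bytes per second looks like the sketch below; it ignores Fibre Channel and Ethernet encoding and protocol overhead, so real throughput is somewhat lower:

```python
# Naive line-rate comparison: gigabits per second to gigabytes per second.
# Ignores FC/Ethernet encoding and protocol overhead, so real throughput is lower.

def line_rate_to_gigabytes_per_sec(line_rate_gbps: float) -> float:
    return line_rate_gbps / 8  # 8 bits per byte

for name, rate_gbps in [("10 GbE", 10), ("25 GbE", 25), ("100 GbE", 100), ("32G FC", 32)]:
    print(f"{name:7s} ~ {line_rate_to_gigabytes_per_sec(rate_gbps):5.2f} GB/s per port")
```

Per-port bandwidth is only part of the picture, though; latency and the number of ports and paths usually matter more in practice.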

Scalability: HCI can have dozens of nodes replicating and adding capacity, performance, and availability around the world. With SAN storage you have a limited number of replicas between storage boxes; depending on the manufacturer, you can normally have up to about four copies of the same volume distributed around the world, and scalability stops at the controllers' limits. It is a scale-up model, whereas HCI grows as a scale-out model.

Functionality: SAN storage can handle things like deduplication, compression, and multiple kinds of traffic (file, block, or object) in hardware; HCI handles just block and needs extra hardware to accelerate processes such as dedupe.

HCI is a way to share storage over the LAN, with dependencies such as the hypervisor and software or hardware accelerators. A SAN is a way to share storage with servers; it is like a VIP lounge, where an exclusive set of server guests shares the buffet and can share the performance of hundreds of drives to support the most critical response times.

Bart Heungens
Reseller

It all depends on how you understand and use HCI:
If you see HCI as an integrated solution where storage is built into the servers and software-defined storage creates a shared pool across compute nodes, performance will be the deciding factor between HCI and a traditional SAN. Most vendors' HCI solutions write data two or three times for redundancy across compute nodes, so there is a performance impact on the applications due to the latency of the network between the nodes. Adding 25Gb networks, as some vendors recommend, is not always a solution, since it is not the bandwidth but the latency of the network that defines performance.
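A simplified model shows why latency rather than bandwidth dominates; the microsecond figures below are illustrative assumptions, not measurements from any product:

```python
# Simplified write-latency model for HCI with synchronous replica writes.
# All latency figures are illustrative assumptions.

LOCAL_FLASH_WRITE_US = 100     # time to commit a write on the local node (microseconds)
NETWORK_RTT_US = 500           # round trip to a peer node over the cluster network

def hci_write_latency_us(extra_copies: int) -> float:
    """Write completes only after every remote copy is acknowledged (written in parallel)."""
    remote = NETWORK_RTT_US + LOCAL_FLASH_WRITE_US if extra_copies > 0 else 0
    return max(LOCAL_FLASH_WRITE_US, remote)

print(hci_write_latency_us(0))   # 100 us: no redundancy (not realistic for HCI)
print(hci_write_latency_us(1))   # 600 us: two copies, dominated by the network round trip
print(hci_write_latency_us(2))   # 600 us: three copies in parallel; bandwidth barely matters
```

Doubling the link speed changes the serialization time of a small block by a few microseconds at most; what the application actually waits for is the network round trip plus the remote commit.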

Low-latency application requirements might push customers to a traditional SAN in this case. If you use HCI for ease of management through a single pane of glass, note that many storage vendors now deliver plugins for server and application software, eliminating the need to use legacy SAN tools to create volumes and present them to the servers. Often it is possible to create a volume directly from within the hypervisor console and attach it to the hypervisor hosts. So for this scenario, I don't see a reason to choose one over the other.

Today there is a vendor (HPE) combining a traditional SAN with an HCI solution, calling it dHCI. It gives you an HCI user experience, independent scalability of storage and compute, and the low latency that is often required. In time I expect other vendors to follow the same path and deliver these kinds of solutions as well.

Cesar Danecke
Real User

Maybe what I say will be a little redundant.

As mentioned earlier, with the new technologies available I don't see why you wouldn't use HCI.

Team size is an important factor: when you have a small team, you end up opting for a fully integrated solution.

HCI is wonderful; it gives you scalability and redundancy, and there are tools that provide agile backup.

The traditional structure makes many analysts more comfortable, but it ends up overloading small teams.

I use both architectures. For large, volatile data volumes I believe a pure investment in HCI comes at a high cost, since adding storage means adding hosts.

As for abandoning the SAN you already have: in my opinion that is very drastic. Each product has its strengths; storage-based replication is still my favorite, even though there are very good replication solutions in HCI.

It's worth analyzing the whole picture: the size of the environment, the technical team and its qualifications, the kind of applications you want to run, and the financial investment. The cheaper option up front can end up more expensive in the end.

I've seen companies connect their SAN to HCI, not always for performance reasons, but because it already exists, because there are low-cost options, or because of capacity requirements.

But when everything is new, with HCI it is possible to buy the minimum, whereas with a SAN you need to pre-size the number of ports, capacity, processing, and speed for its entire growth journey, and this can make the project more expensive.

Krishna Randadath
User

Business-wise, with direct savings across architecture, hardware, software, backup, and recovery, hyperconvergence can transform IT organizations from cost centers into frontline revenue drivers. A major issue with traditional IT architecture is that as complexity rises, the focus shifts from business problems to tech problems. The business's focus should be on what IT can do for the bottom line, not what the bottom line can do for IT.

There are two cost categories to compare:
- Capital expenditures (CAPEX): the one-time purchase and implementation expenses associated with the solution.
- Operational expenditures (OPEX): the running costs of an IT solution, incurred for managing, administering, and updating the existing IT infrastructure; together these feed into the total cost of ownership (TCO).
Considering the separate areas of cost reduction discussed above, organizations can evaluate the expense differential between their traditional infrastructure and an HCI environment.

Hyperconvergence helps meet current and future needs, so it’s essential to calculate the TCO accurately. The TCO of a hyperconverged infrastructure includes annual maintenance fees for data centers and facilities, telecom services, hardware, software, cloud systems, and external vendors. Other costs include staff needed for deployment and maintenance, staff training and efforts to integrate with existing and legacy systems.
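A minimal sketch of that TCO calculation, with line items mirroring the paragraph above and every figure a placeholder to be replaced by your own quotes:

```python
# Placeholder TCO model over a 5-year horizon; swap in your own figures.

YEARS = 5

capex = {
    "hardware": 300_000,
    "software_licenses": 120_000,
    "deployment_staff_and_integration": 60_000,
}

annual_opex = {
    "maintenance_and_support": 45_000,
    "facilities_and_telecom": 20_000,
    "cloud_services": 15_000,
    "staff_training": 5_000,
}

def total_cost_of_ownership(years: int = YEARS) -> int:
    return sum(capex.values()) + years * sum(annual_opex.values())

print(total_cost_of_ownership())          # 905000 with the placeholder numbers
print(total_cost_of_ownership(years=3))   # 735000
```

Running the same model twice, once with traditional-infrastructure line items and once with HCI line items, gives the expense differential described earlier.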

HCI overcomes the enormous wastage of resources and budgets common in the early phases of traditional infrastructure deployments because their scale dwarfs business needs at the time of purchase. HCI lends itself to incremental and granular scaling, allowing IT to add/remove resources as the business grows.

Manjunath V
Real User

Scalability and agility are the main factors in deciding between SAN and HCI. SAN infrastructure requires a huge amount of work when it reaches end-of-support or end-of-life. Budgeting and procurement frequency also play a role.

Also, HCI's limitation of presenting a single datastore in a VMware environment is a problem when disk or data corruption happens.

