Splunk Competitors and Alternatives


Read reviews of Splunk competitors and alternatives

Dannie Combs
Senior Vice President and Chief Information Security Officer at Donnelley Financial Solutions
Video Review
Real User
Top 5
The alert fatigue and false positive rates have just plummeted, which is really exciting.

Pros and Cons

  • "As a result of the automation, we are able to manage SIEM with a small security team. I'm in a unique position where we have been growing the security organization quite rapidly over the last three and a half years. But, as a direct result of the empow transition and legacy collection of tools towards the empow platform, we've been able to keep that head count flat. We've been able to redirect a lot of the security team's time away from the wash, rinse, repeat activities of responding to alarms where we have a high degree of confidence that they will be false positives, adjusting the rules accordingly. This can be a bit frustrating for the analyst when they have to spend hours a day dealing with these types of probable false positives. So, it has helped not only us keep our headcount flat relative to the resources necessary to provide the assurances that our executives expect of us for monitoring, but allows our analyst team to spend the majority of their time doing what they love. They are spending their time meaningfully with a higher degree of confidence and enjoying getting into the incident response type activity."
  • "Relative to keeping up with the sheer pace of cloud-native technologies, it should provide more options for clients to deploy their technologies in unique ways. This is an area that I recommend that they maintain focus."

What is our primary use case?

My organization is in the financial services industry and the majority of services that we offer are financial services centric. We operate in or support almost every industry in the marketplace. We store, process, and transmit highly sensitive information. Sometimes that information is premarket. Other times that information is personally identifiable information, personal health information, etc. It depends on our clients' requirements. Security is a cornerstone of all that we do. It's in our DNA, as we like to say internally. Being in a position to understand when we are at risk of a cyber attack is paramount.

We have a strong desire to understand who did what, where, when, and why internally. empow's near real-time, high-fidelity security monitoring capabilities are our primary use case. Other use cases revolve around:

  • Gaining as much insight from a threat intelligence perspective, being able to correlate that back to an alarm, and doing so in an automated fashion. 
  • The automated mitigation capability. 
  • The general reporting and analytics within the platform.

How has it helped my organization?

We have a significantly higher confidence in our ability to automate mitigations. We've had technologies across SOAR and cyber threat intelligence integrated into our platforms for over four years now. We would like to tell ourselves that we're reasonably experienced with both of those technology categories. 

One of the most impressive accomplishments that we were able to showcase internally was building metrics around the fidelity of our playbooks when they're executed. We have a high degree of confidence that we have the right playbooks in place. It's also worth mentioning that we're a global organization. We are corporate focused, primarily, not consumer focused. We know where our clients are from a geographic perspective, as an example, but our clients travel. We want to be hyperconservative with those mitigation techniques so as not to adversely affect the client experience with our product lines. I was quite surprised that, even though we took a very conservative approach initially, the accuracy was very high and the percentage of false positives was almost zero when the mitigation playbooks were invoked. The enablement of automated mitigations that the empow product line has provided us with is incredibly impressive.

One of the most impressive capabilities of the empow product line to our security analyst team is just how little maintenance is required to ensure that we are focusing on the right threats. The correlation rules themselves require effectively little to no maintenance from a client perspective, which is tremendous. This is leaps forward compared to other product lines and SIEMs over the last 10 years. 

Correlation rules maintenance has been one of the most time consuming bodies of work required. It is one of the areas where we had a higher degree of risk of focusing in the wrong areas. We spent an enormous amount of time being hyperfocused on ensuring that we have the right correlation rules in place, the fidelity of those rules was sound, etc. We just can't begin to mention how pleased we are that, for the most part, this is no longer something we have to be concerned about.

The power of the AI and the natural language processing capability is best measured by the outputs. The fidelity of the alarms that we receive is just night and day compared to SIEM platforms and other platforms we've used in the past. I also feel it is a leading reason (major theme) why our overall alarm volume is significantly lower, because we deal with far less alert fatigue. We are dealing with far fewer false positives as a direct result of the AI and NLP capabilities.

Our overall false positive rates are significantly lower. It's definitely removed about 60 percent of the total volume of alarms that we have needed to respond to each month over the last year. Also, it's worth mentioning that we spent considerable amounts of time in years past focusing on managing correlation rules: ensuring that we had the right prioritization applied to those rules and that the rules took into account our technology deployments, such as a general shift in our portfolio, adding/removing devices, retiring products and services, and adding new innovative solutions for our customers. This was to the extent that we had a 90-minute session twice a month with a partner of ours dedicated just to that work. Today, we don't have any monthly meetings focused on correlation rules, as a direct result of our transition to empow.

Their ability to focus on an event with a high degree of fidelity really drives our level of confidence. Therefore, we are quick to respond with a high degree of urgency when we do receive an alarm because we recognize that there is a very high probability that the alarm is accurate and the fidelity is very high. This enables us to focus on other areas throughout the day. However, once we do receive an alarm from empow, we recognize it's something that needs to be responded to with a high degree of urgency.

The integration between Elastic and empow has been quite impressive for a couple of reasons: 

  1. We're a prime example of an organization who must have a high degree of flexibility in our deployments. We have full cloud-native deployments of products and corporate systems. We have on-prem deployments of both. Our cloud deployments span many cloud providers. Therefore, I need to be able to orchestrate and scale up and down my footprint, depending on geography, cloud providers, the tempos of the business relative to lifecycles with some of our products, and so on and so forth. Having a lot of leverage to pull on Elasticsearch has proven to be very attractive to us for supporting our set of requirements and flexibility. 
  2. They play a big role in making it incredibly easy to plug into other security tools, network platforms, and application platforms, whether they are internally developed or commercial offerings. The API model that the empow product provides has simplified the integration of almost any technology into their product lines.
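
As a rough illustration of what that kind of API-driven integration looks like in practice, the sketch below pushes an event from an internally developed application to a SIEM's HTTP collector. It is a generic sketch under assumed names: the endpoint URL, token, and event fields are placeholders, not empow's documented API.

    # Hypothetical sketch: sending an application event to a SIEM collector
    # over HTTPS. The URL, auth header, and event fields are placeholders.
    import requests

    COLLECTOR_URL = "https://siem-collector.example.internal/ingest"
    API_TOKEN = "replace-me"

    def send_event(event: dict) -> None:
        resp = requests.post(
            COLLECTOR_URL,
            json=event,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()

    send_event({
        "app": "billing-portal",
        "severity": "high",
        "message": "repeated failed logins for admin account",
    })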

empow has impacted our network security posture in a truly dramatic way, particularly in that we have higher confidence, when we are responding to an event, that it is actionable and something we should be concerned about. Secondly, it has positively impacted our network security posture by way of the automated mitigations defined within the system. The playbooks that we define, and can take a conservative approach to, help us avoid any negative impact to our clients. The accuracy of those playbooks defines the automated mitigations, and we have a tremendous amount of confidence in them. Those playbooks are triggered daily, which reduces risk and reduces the amount of time spent to contain and mitigate events. Overall, from a security perspective, there have been quite dramatic steps forward. 

It also directly supports our compliance programs. We're very easily able to measure when we have events and what actions were taken because the vast majority of them are addressed through automation.

I had worked with empow at a previous organization, but our requirements were very different. We are definitely enterprise-focused, but we are also corporate user-focused. Our client community is primarily that of mid to large enterprise organizations across the globe. How well a product organization and its services team respond to support calls is critically important. I give empow very high marks. The responsiveness has been very high, but more important than the responsiveness is the quality and accuracy of their recommended next steps to resolve whatever issue we may have. 

What is most valuable?

  • The automated mitigation capability. 
  • A next generation capability of attack replay, where it walks back from the event, historically, to provide that visualized representation of the attack lifecycle. 
  • The ability to rapidly deploy a comprehensive coverage tool without the need to spend months planning a deployment with emphasis placed on correlation rules. The ability to put aside the need for a high number of correlation rules is extremely advantageous to us, as it saves time and money and drives fidelity higher. It's just a fantastic capability.

When I think about the quality of the dashboard, it's one of the features that is just fantastic to speak about. They designed a dashboard where I can get a quick snapshot with a broad lens over the last seven to 10 days that dives specifically into areas which are a bit of concern. Also, from a SOC analyst perspective, there are many levels within a SOC organization, so whether they are entry level or a new hire, they can find that right altitude of interest relative to the depth of detail that's being presented. The flexibility of the dashboard to quickly drill up or down into an altitude of your choosing is fantastic.

Also, being able to pivot around between various data sets, whether it be:

  • Threat intelligence centric data
  • Alarm data
  • A specific asset
  • Elevating it to a solution level
  • Elevating it to an entity level

The degree of flexibility and speed with which you can change your view is very impressive. Oftentimes, with some of the more legacy SIEMs which have been in the market for a long time, that was one of the major pain points: it took time to refresh views. The limitations of that flexibility were frustrating.

The platform has made mitigation faster, primarily by way of the playbooks we defined (automated mitigation). We have a number of playbooks defined where our empow platform signals directly to the firewall to block traffic. For example, we have no customers in North Korea. Anytime we see an interrogation of our products or our assets from there, we signal to the firewall to drop that traffic systematically. The time saved is not some form of mean time to respond to an event, but really time our analysts get back to focus on other areas.
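
To make the shape of such a playbook concrete, here is a minimal sketch of the logic described above: receive an alarm, check whether the source country is one where we have no customers, and, if so, push a block rule to the firewall. This is not empow's actual implementation (playbooks are configured inside the product); the alarm fields, country list, and firewall endpoint are illustrative assumptions.

    # Hypothetical geo-block mitigation playbook. Field names, the blocked-country
    # list, and the firewall API URL are placeholders, not any vendor's real API.
    import requests

    BLOCKED_COUNTRIES = {"KP"}  # e.g., North Korea: no customers there
    FIREWALL_API = "https://firewall.example.internal/api/v1/block"

    def handle_alarm(alarm: dict) -> None:
        """Ask the firewall to drop traffic from the alarm's source IP
        if it originates from a country where we do no business."""
        src_ip = alarm.get("source_ip")
        src_country = alarm.get("source_country")
        if src_ip and src_country in BLOCKED_COUNTRIES:
            resp = requests.post(
                FIREWALL_API,
                json={"ip": src_ip, "reason": f"geo-block {src_country}"},
                timeout=10,
            )
            resp.raise_for_status()

    # Example with a synthetic alarm:
    handle_alarm({"source_ip": "203.0.113.7", "source_country": "KP"})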

What needs improvement?

empow has a few areas of improvement as with any other technology, such as continuing to drive innovation in the dashboard. While we've been extremely impressed with the dashboard's ease of use, flexibility, ability to drill down deeply, and focus very intently on an area of interest, there will always be opportunities to be more innovative and open it up to a wider audience than just the operations group, for example. 

With reporting, there is always a desire to have custom reporting for every client of empow. 

Relative to keeping up with the sheer pace of cloud-native technologies, it should provide more options for clients to deploy their technologies in unique ways. This is an area where I recommend they maintain focus.

For how long have I used the solution?

I've been using the empow i-SIEM platform for a total of four years across two companies, but for two years in my current company.

What do I think about the stability of the solution?

Someone knock on some wood here if you would, but we haven't had any stability challenges yet. That is directly attributable to the architecture that we've put in place for empow and other solutions that we deploy. We plan for highly available solutions across each deployment site, or per data center, making geo-redundancies available. So far so good, we have not had any significant operational hiccups with the platform.

We have one dedicated resource who is accountable for ensuring that the empow environment is healthy, so, one FTE from a maintenance perspective. We have a team of threat analysts on staff. We have a third-party managed security services partnership in place as well. There's definitely one FTE whose primary role is to ensure that the empow platform is up and running, healthy, and satisfying the needs of our internal clients, which would be our team of cyber threat analysts.

What do I think about the scalability of the solution?

The scalability of empow is endless. I feel that they have an architecture that is highly scalable. It's been proven for on-prem, cloud, and hybrid environments. We presently have a hybrid environment. I suspect they can scale to almost any size needed. The question will be as to what are the unique needs of the organization where they've been deployed and what is their appetite for investing to ensure resiliency either locally, regionally, or globally. Those things play a role in how quickly and complex the architecture must be in order to scale. 

It is the standard for security operations. Anywhere my organization deploys technology assets, empow will be providing coverage, if not already.

The ability for empow to be managed by a single analyst depends on the organization. I don't need a team of 20 to 25 analysts any longer; it's significantly fewer than that. Whether one analyst is enough really depends on the organization and its threshold for risk, which is unique to every organization. It also depends on the size of the organization from a technology perspective: are you dealing with hundreds of servers or tens of thousands? That will be indicative of your resource needs. It takes essentially one, maybe two, resources, regardless of your size, to directly support the care, feeding, capacity management, and monitoring of the empow platform. The simplicity of the architecture is remarkably impressive.

How are customer service and technical support?

The partnership between empow and Elasticsearch has a very positive impact on us from a couple of different angles:

  1. Support. There's one throat to choke. We pick up the phone, we reach out to the empow team, and we have one point of contact, whether we're experiencing an application issue, a data issue, etc. It just simplifies the overall management. 
  2. The licensing negotiation through one organization is more simplified. As it relates to preparing for major upgrades, it makes our lives quite a bit easier when there are fewer parties that we have to interface with.

The partnership between empow and Elastic has a few key benefits:

  1. The simplification and how we have one point of contact for support regardless of what the issue type is. If we're experiencing a concern relative to the application, UI, reporting engines, etc., we have one phone number to call and one lead engineer to reach out to who takes ownership relative to determining if it's an internal empow matter, or if we need to reach out across the boundaries over to Elastic. 
  2. How it relates to our planning for upgrades or expansion of the environment for capacity management purposes, whatever the issue may be. Having that simplified licensing arrangement makes my life easier. As we have one agreement, we have one pricing scheme to work from. It's just really good, which keeps it nice and simple for maintaining the business.
  3. As we look forward to future product lines and other architectural endeavors, having a single point of contact for planning purposes simplifies the process quite a bit as we look to year two and three.

Which solution did I use previously and why did I switch?

It is worth mentioning we were able to retire two other platforms as part of our migration over to empow. We retired a legacy SIEM deployment that we had in place for nearly four years. While it is a great product, I just felt that empow demonstrated more innovation at a lower cost point with a simpler architecture that's more extensible and easier to scale. We were able to retire the SIEM that we had in place for three and a half years, as well as our SOAR platform. We continue to use our existing cyber threat intelligence (CTI) platform, but there is an overlap with the capabilities across empow. However, we still see value in that CTI platform, so we retired the SOAR and our legacy SIEM.

We needed to make a change in large part because the cost of scaling was becoming quite concerning. Also, we went through a series of upgrades a little over a year ago that were problematic. So, anytime we experience something that impactful, we now want to pause and reflect back on what we did well, what our opportunities were, and whether we missed any chances to avoid that situation being realized. We used the outcomes of those reflections to revisit the market and made the decision to really pursue empow as our leading solution for security operations.

empow has significantly been able to reduce the time that we spend on just maintaining the platform, particularly as compared to other product lines that we've previously invested in. The biggest advantage in this regard would be the lack of time spent on managing the correlation rules. The simplified architecture allows us to really lean upon empow's support teams to effectively provide almost end-to-end support of the underlying infrastructure that comprises the platform.

How was the initial setup?

There was complexity to the initial deployment in that we were migrating from an existing, fairly sizable deployment, if not a product line. There were a couple of different solutions in place which comprised our overall enterprise security monitoring solution. We had the SIEM, our SOAR platform, a cyber intelligence interface from a number of different feeds, etc. 

The initial deployment took a couple of weeks and most of it was planning. The actual technical activities were executed quite quickly. Of course, there was the migration of the primary existing data storage that we needed to move from our old SIEM environment, but there was also the body of work to redirect log streams and other data ingestion from our several thousand devices (north of 5,000 devices) for production alone, which takes time. Our primary migration took a couple of weeks, most of which was planning. The migration of the existing data storage took about 14 days, going through three or four different change windows to make sure that it was complete and to wrap up some other activities. Then, the effort to redirect all the various log streams into the empow environment, away from a multitiered architecture to a single destination IP address (just a single collector across the environments), took approximately two months. That was more to ensure that we understood the risks associated with change management collisions; we were hyperfocused on never losing a log throughout those migrations.
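
For the log-stream redirection itself, the idea is simply to point every device and application at one collector address instead of a multitiered chain. As a minimal sketch of what that looks like from an application's side, assuming a standard syslog collector listening on UDP at a single IP (the address and port below are placeholders):

    # Minimal sketch: emitting application logs to one remote syslog collector
    # instead of a local, multi-tiered pipeline. Address and port are placeholders.
    import logging
    import logging.handlers

    collector = logging.handlers.SysLogHandler(address=("10.0.0.50", 514))
    collector.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))

    log = logging.getLogger("payments-service")
    log.setLevel(logging.INFO)
    log.addHandler(collector)

    log.info("log stream redirected to the central collector")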

What about the implementation team?

There was some complexity that several of my teams and the empow team needed to walk through to make sure we mutually understood the goals and technical requirements. There were some business requirements and related reporting that we wanted to make sure we were all aligned on. While there were complexities, I would also give empow very favorable feedback as it relates to them taking ownership of the migration and the overall deployment of their technology. They really looked out for us. There were a number of times when they cautioned me to make some minor adjustments to the plan to ensure that we weren't disruptive to our business, for which I'm very appreciative.

Our overall implementation strategy was bringing both empow and representatives from my teams together to build a plan. We established a few key milestones and aligned those milestones relative to availability of resources on both sides. This ensured we really understood what we were trying to accomplish, not just from an architecture perspective, but also, e.g., taking into account key business timelines that we needed to be mindful of and where we just didn't have an appetite for some major change management activities to occur. We each brought project management resources, a lead architect, and a threat analyst to the table to ensure that we understood each of those perspectives to really comprise that team and ensure that they were set up for success. I would estimate the total team makeup, excluding myself, would be six: two from empow and four from my team.

What was our ROI?

We are saving so much time. We deal with billions of events a month. We are definitely a data-centric organization. Easily, we are able to save 75 percent of the head count for security operations that would otherwise be needed given our scale. Now, we are in a bit of a unique situation where the organization spun off from its parent company just shy of four years ago, so we are still in a growth mode in many respects. While we are still continuing to expand our security organization from an FTE and head count perspective, it's very easy to quantify that without empow we would be looking at seven to 10 more resources being required. This is opposed to the one or two who are focused on the platform today, where "focused on the platform" includes capacity management, general system administration of the environment, and monitoring/responding to alarms that are generated.

As a result of the automation, we are able to manage the SIEM with a small security team. I'm in a unique position where we have been growing the security organization quite rapidly over the last three and a half years. But, as a direct result of the transition from our legacy collection of tools to the empow platform, we've been able to keep that head count flat. We've been able to redirect a lot of the security team's time away from the wash, rinse, repeat activities of responding to alarms where we have a high degree of confidence that they will be false positives, and adjusting the rules accordingly. This can be a bit frustrating for analysts when they have to spend hours a day dealing with these types of probable false positives. So, it has not only helped us keep our headcount flat relative to the resources necessary to provide the assurances that our executives expect of us for monitoring, but it also allows our analyst team to spend the majority of their time doing what they love. They are spending their time meaningfully, with a higher degree of confidence, and enjoying getting into incident response activity.

Our time spent supporting the environment has been reduced by north of 75 percent, covering general system administration, capacity management, overall patching, and administration of the ecosystem. Most notable is the time to maintain the application tier of empow, particularly the correlation rules. That has been reduced by north of 90 percent as compared to other platforms.

Mitigation time has been reduced by north of 75 percent for the vast majority of alarms that we receive. This varies depending on the event type. However, with the automated playbooks that we have defined and our confidence in the fidelity of the alarms, we have been able to enjoy a significant reduction in our mean time to mitigate and mean time to respond.

As we have more alarms as a result of having more logs ingested, we need more analysts to respond to those alarms in order for us to meet our SLAs, because we have very aggressive SLAs. With a higher degree of fidelity in the alarms, we were able to avoid adding additional resources to our teams. Take into account the cost of security resources in the market and the significantly higher fidelity of the alarms that are being generated: this drove down our costs with our MSSP. It drove down my cost for human capital internally. It drove down our need to have multiple resources supporting the underlying infrastructure and health and maintenance of empow as a platform, from several resources down to one. Therefore, human capital costs were significantly reduced. Our operating expenses were significantly reduced. Our capital costs were significantly reduced while we tripled our capacity, and our run rate was reduced. It was almost a "too good to be true" situation. Fortunately, for us, it worked out very nicely.

What's my experience with pricing, setup cost, and licensing?

We were looking at a seven-figure investment being necessary to sustain our growth projections for our log ingestion requirements, just for production. We had a goal of ensuring that we understood who did what, where, when, and why across all assets: production, staging, development, field devices, laptops, iPads, etc. Not only were we able to avoid all those costs relative to growth, but from a point in time forward, the cost structure of empow (as compared to both existing tools) was cheaper for us to migrate to than it was to sustain on our legacy platforms. When it's cheaper to migrate, that's very attractive. 

I no longer have to put up with hypercomplex licensing agreements. Every time I wanted to add some additional compliance-centric or regulation-specific reporting, e.g., GDPR, PCI, or Sarbanes-Oxley, many providers would require an additional license, which felt a bit ridiculous to me. With the simplified licensing architecture, there were no hidden "gotchas" down the road with empow, something I have experienced with other providers that I've worked with in the past.

As it relates to the SOAR side of the toolkit, there was no need to purchase an independent SOAR platform. The innovation that empow brings to the market just addressed all those use cases natively. We were able to just completely retire that toolkit out of our environment.

Which other solutions did I evaluate?

As we do with almost every technology selection, we looked at the market. For this particular technology stack, there were five or six different players whom we looked at intently. Then, as most organizations do, we took the broader view. We narrowed it down to two or three finalists who went into a formalized proof of concept lab environment, which runs for an extended period of time for something as critical as a SIEM or SOAR, since these are the primary reporting capabilities for security purposes. We had an extended evaluation and followed that crawl, walk, run model for our rollout. In hindsight, it was a very structured, formalized process. What was interesting was that the business case was very simple because of the cost savings from cost avoidance alone. As we look forward in our plans and the need to scale up, it more than covered the cost of transitioning in our entirety over to the empow platform.

Some of the other SIEM providers that we looked at when we revisited the market included LogRhythm, Splunk, and empow, to name a few. A pro of empow would be the simplified licensing model. Both of my organizations have had feedback over the years from many clients that licensing is an area of concern, particularly with Splunk. Also, the simplified architecture, particularly leveraging the Elasticsearch technologies, removes the need to have a complex architecture with high-power CPUs in place across each of your footprints where you have log collectors deployed. That was a major value add and very attractive to us. 

We evaluated the partnership between empow and Elastic along a couple of different dimensions. First and foremost, what was our experience like as we were negotiating? Was empow in a position to adequately represent the business terms for Elastic, the support terms, and the other commitments that we needed to work through? The answer was yes. It felt very seamless to us. Secondly, the simplicity of the licensing model made the process of acquiring technologies much simpler and more straightforward than what we experienced in previous relationships.

The decision to partner with Elastic for that strategic partnership to be in place wasn't a Tier 1 criterion for us. However, what was a criterion for us was the outcome and capabilities which the partnership has resulted in, e.g., the speed to rapidly scale up, the ease of scaling down, and the ease of migrating from a primarily on-prem data center strategy to a hybrid 50/50 cloud/on-prem strategy, with long-term plans for pursuing cloud far more aggressively. As we need to pull the levers to keep up with the demands of our business, we wanted to have comfort that it would not be a disruptive series of changes as related to the SIEM, and that we wouldn't have to go back and re-architect, then buy additional licenses for new features and functionality. We wanted to avoid that complex license structure. We want to have confidence in our ability to scale up and down and migrate across multiple feeders as our business needs warrant. empow has done a great job in supporting us in that regard.

What other advice do I have?

If I were to rate empow on a scale of one to 10, I would probably give them a nine and a half. The reason it's so high is that there are no competitors on the market, in my mind, that have transformed the SIEM industry as much as empow. The speed with which they continue to innovate is impressive. Every couple of months, we're excited to learn about the latest and greatest capabilities of the platform. Most of the latest innovations have been centered around their automation capabilities. It's had such a tremendous impact on my organization. They tend to focus on what matters. It has given us high confidence that where we are spending our time is worth doing so. The alert fatigue and false positive rates have just plummeted, which is really exciting. They have transformed the industry, which no one would have expected not that long ago.

I'd like to give them a bit of a shout out: when they have given me commitments around enhancements, such as enhancing their reporting capabilities, some minor adjustments to the dashboard, and those types of feature requests, they've met those commitments as it relates to quality and timeline.

empow, without a doubt, is the most important monitoring tool that we have at our disposal. From a monitoring and incident response perspective, empow is the most valuable asset we have in our toolkit.

The biggest lesson I've learned from using empow would be just how far the technology has come. The orchestration and automation of our mitigations surprised me. The accuracy of the playbooks, and the level of confidence I have in them, is surprisingly high. Another area that surprised me would be the level of confidence that we now have in our ability to scale up and down, as scaling down can sometimes be equally as tricky.

The advice that I would give to anyone looking at empow would be, primarily, to ensure that your planning is sound. When I think about our experiences with empow, it's refreshing to think back on how easy that journey was with such a difficult technology stack. Not only was it surprisingly simple, it should not have been, since we were not just standing up a net new sandbox environment; we needed to build and deploy, then migrate, a very sizeable deployment to this new ecosystem. Inevitably, we expected there to be some bumps along the road, but there were very few. I attribute this back to the quality of planning and the reliability of the technology that empow brings to the table. Therefore, my advice would be to ensure that your planning is sound. While it's exciting to know that the technology is very stable and the integrations are very straightforward with API-driven integrations, they can never fully take into account the uniqueness of your business. Thus, planning is absolutely paramount.

Which deployment model are you using for this solution?

Hybrid Cloud
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
JerryH
Director at a computer software company with 1,001-5,000 employees
Real User
Top 5
Enables us to bring all our data sources into a central hub for quick analysis, helping us focus on priorities in our threat landscape

Pros and Cons

  • "The real-time analytics of security-related data are super. There are a lot of data feeds going into it and it's very quick at pulling up and correlating the data and showing you what's going on in your infrastructure. It's fast. The way that their architecture and technology works, they've really focused on the speed of query results and making sure that we can do what we need to do quickly. Devo is pulling back information in a fast fashion, based on real-time events."
  • "Devo has a lot of cloud connectors, but they need to do a little bit of work there. They've got good integrations with the public cloud, but there are a lot of cloud SaaS systems that they still need to work with on integrations, such as Salesforce and other SaaS providers where we need to get access logs."

What is our primary use case?

Our initial use case is to use Devo as a SIEM. We're using it for security and event logging, aggregation and correlation of security incidents, and triage and response. That's our goal out of the gate.

Their solution is cloud-based and we're deploying some relays on-premise to handle anything that can't send it up there directly. But it's pretty straightforward. We're in a hybrid ecosystem, meaning we're running in both public and private cloud.

How has it helped my organization?

We're very early in the process so it's hard to say what the improvements are. The main reason that we bought this tool is that we were a conglomeration of several different companies. We were the original Qualcomm company way back in the day. After they made billions in IP and wireless, they spun us off to Vista Equity, and we rapidly and in succession bought three or four companies in the 2014/2015 timeframe. Since then, we've acquired three or four more. Unfortunately, we haven't done a very good job of integrating those companies, from a security and business services standpoint.

This tool is going to be our global SIEM and log-aggregation and management solution. We're going to be able to really shore up our visibility across all of our business areas, across international boundaries. We have businesses in Canada and Mexico, so our entire North American operations should benefit from this. We should have a global view into what's going on in our infrastructure for the first time ever.

The solution is enabling us to bring all our data sources into a central hub. That's the goal. If we can have all of our data sources in one hub and are then able to pull them back and analyze that data as fast as possible, and then archive it, that will be helpful. We have a lot of regulatory and compliance requirements as well, because we do business in the EU. Obviously, data privacy is a big concern and this is really going to help us out from that standpoint.

We have a varied array of threat vectors in our environment. We OEM and provide a SaaS service that runs on people's mobiles, plus we provide an in-cab mobile in truck fleets and tractor trailers that are both short- and long-haul. That means our threat surface is quite large, not only from the web services and web-native applications that we expose to our customers, but also from our in-cab and mobile application products that we sell. Being able to pull all that information into one central location is going to be huge for us. Securing that type of landscape is challenging because we have a lot of different moving parts. But it will at least give us some insight into where we need to focus our efforts and get the most bang for the buck.

We've found some insights fairly early in the process but I don't think we've gotten to the point where we can determine that our mean time to resolution has improved. We do expect it to help to reduce our MTTR, absolutely, especially for security incidents. It's critical to be able to find a threat and do something about it sooner. Devo's relationship with Palo Alto is very interesting in that regard because there's a possibility that we will be pushing this as a direct integration with our Layer 4 through Layer 7 security infrastructure, to be able to push real-time actions. Once we get the baseline stuff done, we'll start to evolve our maturity and our capabilities on the platform and use a lot more of the advanced features of Devo. We'll get it hooked up across all of our infrastructure in a more significant way so that we can use the platform to not only help us see what's going on, but to do something about it.

What is most valuable?

So far, the most valuable features are the ease of use and the ease of deployment. We're very early in the process. They've got some nice ways to customize the tool and some nice, out-of-the-box dashboards that are helpful and provide insight, particularly related to security operations.

The UI is clean, easy to use, and intuitive.

They've put a lot of work into the UI. There are a few areas they could probably improve, but they've done a really good job of making it easy to use. For us to get engagement from our engineering teams, it needs to be an easy tool to use and I think they've gone a long way to doing that.

The real-time analytics of security-related data are super. There are a lot of data feeds going into it and it's very quick at pulling up and correlating the data and showing you what's going on in your infrastructure. It's fast. The way that their architecture and technology works, they've really focused on the speed of query results and making sure that we can do what we need to do quickly. Devo is pulling back information in a fast fashion, based on real-time events.

The fact that the real-time analytics are immediately available for query after ingest is super-critical in what we do. We're a transportation management company and we provide a SaaS. We need to be able to analyze logs and understand what's going on in our ecosystem in a very close to real-time way, if not in real time, because we're considered critical infrastructure. And that's not only from a security standpoint, but even from an engineering standpoint. There are things going on in our vehicles, inside of our trucks, and inside of our platform. We need to understand what's going on, very quickly, and to respond to it very rapidly.

Also, the integration of threat intelligence data provides context to an investigation. We've got a lot of data feeds that come in and Devo has its own. They have a partnership with Palo Alto, which is our primary security provider. All of that threat information and intel is very good. We know it's very good. We have a lot of confidence that that information is going to be timely and it's going to be relevant. We're very confident that the threat and intel pieces are right on the money. And it's definitely providing insights. We've already used it to shore up a couple of things in our ecosystem, just based on the proof of concept.

The solution’s multi-tenant, cloud-native architecture doesn't really affect our operations, but it gives us a lot of options for splitting things up by business area or different functional groups, as needed. It's pretty simple and straightforward to do so. You can implement those types of things after the fact. It doesn't really impact us too much. We're trying to do everything inside of one tenant, and we don't expose anything to our customers.

We haven't used the solution's Activeboards too much yet. We're in the process of building some of those out. We'll be building dashboards and customized dashboards and Activeboards based on what those tools are doing in Splunk. Devo's going to help us out with our ProServe to make sure that we do that right, and do it quickly.

Based on what I've seen, its Activeboards align nicely with what we need to see. The visual analytics are nice. There's a lot of customization that you can do inside the tool. It really gives you a clean view of what's going on from both interfaces and topology standpoints. We were able to get network topology on some log events, right out of the gate. The visualization and analytics are insightful, to say the least, and they're accurate, which is really good. It's not only the visualization, but it's also the ability to use the API to pull information out. We do a lot of customization in our backend operations and service management platforms, and being able to pull those logs back in and do something with them quickly is also very beneficial.
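
The "pull information out over the API" workflow described above can be sketched roughly as follows. This is a generic illustration, not Devo's documented query API; the endpoint, query syntax, and auth scheme shown are assumptions that would need to be replaced with the vendor's actual interface.

    # Hypothetical sketch: querying a log-analytics SaaS over HTTP and feeding
    # the results into a back-end workflow. Endpoint, query syntax, and auth
    # are placeholders, not Devo's actual API.
    import requests

    API_URL = "https://logs.example-saas.com/api/query"
    TOKEN = "replace-me"

    def recent_auth_failures(minutes: int = 15) -> list:
        payload = {
            "query": "from auth.logins where result = 'failure'",
            "from": f"now()-{minutes}m",
            "to": "now()",
        }
        resp = requests.post(
            API_URL,
            json=payload,
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("events", [])

    for event in recent_auth_failures():
        print(event.get("user"), event.get("src_ip"))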

The customization helps because you can map it into your business requirements. Everybody's business requirements are different when it comes to security and the risks they're willing to take and what they need to do as a result. From a security analyst standpoint, Devo's workflow allows you to customize, in a granular way, what is relevant for your business. Once you get to that point where you've customized it to what you really need to see, that's where there's a lot of value-add for our analysts and our manager of security.

What needs improvement?

Devo has a lot of cloud connectors, but they need to do a little bit of work there. They've got good integrations with the public cloud, but there are a lot of cloud SaaS systems that they still need to work with on integrations, such as Salesforce and other SaaS providers where we need to get access logs.

We'll find more areas for improvement, I'm sure, as we move forward. But we've got a tight relationship with them. I'm sure we can get anything worked out.

For how long have I used the solution?

This is our first foray with Devo. We started looking at the product this year and we're launching an effort to replace our other technology. We've been using Devo for one month.

What do I think about the stability of the solution?

The stability is good. It hasn't been down yet.

What do I think about the scalability of the solution?

The scalability is unlimited, as far as I can tell. It's just a matter of how much money you have in your back pocket that you're willing to spend. The cost is based on the log ingestion rate and how much retention you need. They're running in the public cloud, meaning capacity is effectively unlimited. And scaling is instantaneous.

Right now, we've got about 22 people in the platform. It will end up being anywhere between 200 and 400 when we're done, including software engineers, systems engineers, security engineers, and network operations teams for all of our mobile and telecommunications platforms. We'll have a wide variety of roles that are already defined. And on a limited basis, our customer support teams can go in and see what's going on.

How are customer service and technical support?

Their technical support has been good. We haven't had to use their operations support too much. We have a dedicated team that's working with us. But they've been excellent. We haven't had any issues with them. They've been very quick and responsive and they know their platform.

Which solution did I use previously and why did I switch?

We were using Splunk but we're phasing it out due to cost.

Our old Splunk rep went to Devo and he gave me a shout and asked me if I was looking to make a change, because he knew of some of the problems that we were having. That's how we got hooked up with Devo. It needed to have a Splunk-like feel, because I didn't want to have a long road or a huge cultural transformation and shock for our engineering teams and our security teams that use Splunk today. 

We liked the PoC. Everything it did was super-simple to use and was very cost-effective. That's really why we went down this path.

Once we got through the PoC and once we got people to take a look at it and give us a thumbs-up on what they'd seen, we moved ahead. From a price standpoint, it made a lot of sense and it does everything we needed to do, as far as we can tell.

How was the initial setup?

We were pulling in all of our firewall logs, throughout the entire company, in less than 60 minutes. We deployed some relay instances out there and it took us longer to go through the bureaucracy and the workflow of getting those instances deployed than it did to actually configure the platform to pull the relevant logs.

In the PoC we had a strategy. We had a set of infrastructure that we were focusing on, infrastructure that we really needed to make sure was going to integrate and that its logs could be pulled effectively into Devo. We hit all of those use cases in the PoC.

We did the PoC with three people internally: a network engineer, a systems engineer, and a security engineer.

Our strategy going forward is getting our core infrastructure in there first—our network, compute, and storage stuff. That is critical. Our network layer for security is critical. Our edge security, our identity and access stuff, including our Active Directory and our directory services—those critical, core security and foundational infrastructure areas—are what we're focusing on first.

We've got quite a few servers for a small to mid-sized company. We're trying to automate the deployment process to hit our Linux and Windows platforms as much as possible. It's relatively straightforward. There is no Linux agent so it's essentially a configuration change in all of our Linux platforms. We're going through that process right now across all our servers. It's a lift because of the sheer volume.
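
The Linux side of that rollout amounts to pushing a single forwarding rule to every host. A minimal sketch of automating it over SSH is below; the relay address, file path, and host list are placeholders, and in practice a configuration-management tool (Ansible, Chef, Puppet) would usually do this rather than a raw SSH loop.

    # Hypothetical sketch: pushing one rsyslog forwarding rule to a list of
    # Linux hosts over SSH. Relay IP, file path, and host names are placeholders.
    import subprocess

    RELAY = "10.0.0.50"
    RULE = f"*.* @@{RELAY}:514"   # rsyslog syntax: forward everything over TCP
    HOSTS = ["app01.example.internal", "db01.example.internal"]

    for host in HOSTS:
        remote_cmd = (
            f"echo '{RULE}' | sudo tee /etc/rsyslog.d/90-relay.conf "
            "&& sudo systemctl restart rsyslog"
        )
        subprocess.run(["ssh", host, remote_cmd], check=True)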

As for maintenance of the Devo platform we literally don't require anybody to do that.

We have a huge plan. We're in the process of spinning up all of our training and trying to get our folks trained as a day-zero priority. Then, as we pull infrastructure in, I want those guys to be trained. Training is a key thing we're working on right now. We're building the e-learning regimen. And Devo provides live, multi-day workshops for our teams. We go in and focus the agenda on what they need to see. Our focus will be on moving dashboards from Splunk and the critical things that we do on a day-to-day basis.

What about the implementation team?

We worked straight with Devo on pretty much everything. We have a third-party VAR that may provide some value here, but we're working straight with Devo.

What was our ROI?

We expect to see ROI from security intelligence and network layer security analysis. Probably the biggest thing will be turning off things that are talking out there that don't need to be talking. We found three of those types of things early in the process, things that were turned on that didn't need to be turned on. That's going to help us rationalize and modify our services to make sure that things are shut down and turned off the way they're supposed to be, and effectively hardened.

And the cost savings over Splunk is about 50 percent.

What's my experience with pricing, setup cost, and licensing?

Pricing is pretty straightforward. It's based on daily log ingestion and retention rate. They keep it simple. They have breakpoints, depending on what your volume is. But I like that they keep it simple and easy to understand.
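
Because the pricing is driven by daily ingestion volume and retention, a quick back-of-the-envelope estimate helps when sizing a contract. The sketch below does that arithmetic; the event rate, average event size, and tier breakpoints are made-up numbers for illustration, not Devo's actual price list.

    # Back-of-the-envelope sizing for an ingestion-based license. All numbers
    # (event rate, event size, tier breakpoints) are illustrative only.
    events_per_second = 2_000
    avg_event_bytes = 600

    daily_gb = events_per_second * avg_event_bytes * 86_400 / 1e9
    print(f"Estimated ingestion: {daily_gb:.1f} GB/day")

    # Hypothetical tier breakpoints: (max GB/day, label)
    tiers = [(50, "tier 1"), (150, "tier 2"), (500, "tier 3")]
    tier = next((label for cap, label in tiers if daily_gb <= cap), "custom quote")
    print(f"Licensing tier: {tier}")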

There were no costs in addition to their standard licensing fees. I don't know if they're still doing this, but we got in early enough that all of the various modules were part of our entitlement. I think they're in the process of changing that model a little bit so you can pick your modules. They're going to split it up and charge by the module. But everything we needed was part of the package, day one.

Which other solutions did I evaluate?

We were looking at ELK Stack and Datadog. Datadog has a security option, but it wasn't doing what we needed it to do. It wasn't hitting a couple of the use cases that we have Splunk doing, from a logging and reporting standpoint. We also looked at Logstash, some of the "roll-your-own" stuff. But when you do the comparison for our use case, having a cloud SaaS that's managed by somebody else, where we're just pushing up our logs, something that we can use and customize, made the most sense for us. 

And from a capability standpoint, Devo was the one that most aligned with our Splunk solution.

What other advice do I have?

Take a look at it. They're really going after Splunk hard. Splunk has a very diverse deployment base, but Splunk really missed the mark with its licensing model, especially when it relates to the cloud. There are options out there, effective alternatives to Splunk and some of the other big tools. But from a SaaS standpoint, if not best-in-breed, Devo is certainly in the top-two or top-three. It's definitely a strong up-and-comer. Devo is already taking market share away from Splunk and I think that's going to continue over the next 24 to 36 months.

Devo's speed when querying across our data is very good. We haven't fully loaded it yet. We'll see when the rubber really hits the road. But based on the demos and the things that we've seen in Devo, I think it's going to be extremely good. The architecture and the way that they built it are for speed, but it's also built for security. Between our DevOps, our SecOps, and our traditional operations, we'll be able to quickly use the tool, provide valuable insights into what we're doing, and bring our teams up to speed very quickly on how to use it and how to get value out of it quickly.

The fact that it manages 400 days of hot data falls a little bit outside of our use case. It's great to have 400 days of hot data, from security, compliance, and regulatory retention standpoints. It makes it really fast to rehydrate logs and go back and get trends from way back in the day and do some long-term trend analysis. Our use case is a little bit different. We just need to keep 90 days hot and we'll be archiving the rest of that information to object-based long-term storage, based on our retention policies. We may or may not need to rehydrate and reanalyze those, depending on what's going on in our ecosystem. Having the ability to be able to reach back and pull logs out of long-term storage is very beneficial, not only from a cost standpoint, but from the standpoint of being able to do some deeper analysis on trends and reach back into different log events if we have an incident where we need to do so.

Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
JeffHaidet
Director of Application Development and Architecture at South Central Power Company
Real User
Top 5
SIEMphonic gives us an expert set of eyes on things, and assistance with rules has been a huge time saver

Pros and Cons

  • "I like EventTracker's dashboard. I see it every time I log in because it's the first thing you get to. We have our own widgets that we use. For the sake of transparency, there are a few widgets that we look at there and then we move out from there... Among the particularly helpful widgets, the not-reporting widget is a big one. The number-of-logs-processed is also a good one."
  • "It would be great if they had a client for phones by which they could push a notification to us, as opposed to via email."

What is our primary use case?

It's a security information and event management (SIEM) platform. The typical use cases that go along with that are alerting and syslog aggregation.

How has it helped my organization?

Their run-and-watch service (now renamed SIEMphonic) has saved us from having to hire at least one FTE. In addition, having an expert set of eyes on things and their assistance with rules has been a huge time saver. They've been a really good partner.

We are logging everything from Windows client workstations through our server stack, through important, critical web and cloud pieces, like Office 365 logs and web server logs. The latter would include IIS and Apache. All of that information is being streamed directly into, and assimilated by, the EventTracker product. It seems to be doing the job quite well. Having that visibility into the data is useful. Their interface is simple enough for us to be able to use but advanced enough that if we wanted to do some more advanced queries — which some of their competitors admittedly do a little better out-of-the-box — it hits the wheelhouse perfectly.

We're signed up for their weekly observations, so if they find something big they're going to notify us immediately. But having a management-level synopsis once a week has allowed us to not only replace the one FTE, but also streamline our prioritization of work, based off that data, as well.

What is most valuable?

Other than the log aggregation and alerting, their reports modules have come a long way. But for the most part, we stay right in the wheelhouse of the product to use it to the fullest extent.

The previous version, version 8, had a somewhat antiquated UI. The new version 9 is much easier to use and brings it into the current realm of development. It's very easy, very sleek, and designed relatively well. The version 8 to version 9 upgrade was complete night-and-day. It's significantly improved, and they're putting resources into it to make sure that they continue to stay up to date.

I like EventTracker's dashboard. I see it every time I log in because it's the first thing you get to. We have our own widgets that we use. For the sake of transparency, there are a few widgets that we look at there and then we move out from there. We're into the product looking more at the log information at that point. Among the particularly helpful widgets, the not-reporting widget is a big one. The number-of-logs-processed is also a good one. We call that log volume. They're helpful, but we try to dig in a little deeper, off the dashboard, more often than not.

What needs improvement?

In terms of advanced queries, I wouldn't say EventTracker is lagging behind its peers. The latter just make it easier to get to them. EventTracker is designed more for a small to medium type business, which is where we fit. With a competitive tool like Splunk or LogRhythm, you're not going to get what you get with these guys out-of-the-box. With EventTracker, you're going to have to build all that yourself from scratch. You're going to have to learn that markup language to do so.

I want to stress: We're very happy with not having to deal with that out-of-the-gate. If we need to, we can always call support and they can assist us in writing those more advanced queries. The functionality exists to do advanced queries, they're just not right in your face like they are in a competitive product. But for us, that's what we want.

There's always room for improvement in terms of performance and alerting options. It would be great if they had a client for phones by which they could push a notification to us, as opposed to via email. But those are all things that they'll grow into over time.

For how long have I used the solution?

We've been using EventTracker for just a smidge over three years.

What do I think about the stability of the solution?

It has been extremely stable. Very rarely do we even realize that it's still running, and that's good.

What do I think about the scalability of the solution?

We did have a few concerns about scalability in the beginning. Our initial concerns were about scaling it out: if we blew it out, would we run into performance issues with their agent using too many resources on the client, or run out of space on the server? But those concerns proved to be unfounded. We have 700 or 800 endpoints streaming data into it without any noticeable performance issues or any other problems.

We're using it almost to its full extent at this point. We're in that 90 percent range. We currently don't have any plans to move away from it. We're utilizing the features that pertain to us. Anytime that there's a patch or release, we look at the new features to see if they're applicable for us.

How are customer service and technical support?

The EventTracker team itself has been great. We can call them for pretty much anything related to their product. They will offer suggestions, advice, and best practices on ways to do things. It's like having another team member here at our disposal, working with their product. I believe that is their standard tech support.

We're paying for the run-and-watch (SIEMphonic) so we're getting an extra set of eyes on things, but when we call in, their support is top-notch. I would give their support team a 10 out of 10. That is a given. Of all the products and vendors that we've used, I've never had a more positive experience with a support team than with EventTracker's support team.

Which solution did I use previously and why did I switch?

We did not have a previous solution. We do annual audits, and the lack of a SIEM showed up in one of our audits as a piece that we needed to start investigating, four or five years ago. We knew that issue was coming. We were too busy dealing with some other things, but when it showed up in the audit, we pushed it up the priority food-chain. We weren't really having any issues by not having a SIEM, but having all the logs in one place sure makes troubleshooting a whole lot easier. If there was an Achilles' heel, that was it.

We were looking for an easy-to-manage SIEM that provided the functionality that we needed. Since we're a relatively small IT staff, the part that really made EventTracker stand out to us was the run-and-watch service (SIEMphonic), where they are an active partner, reviewing the data that we get, so we don't miss anything. They're acting as a backstop to us.

How was the initial setup?

The initial setup was completely painless. They gave us a spec sheet for the on-premises server. We built a VM that matched that spec, and they then installed their software and got it up and running. We could be as involved or as uninvolved as we wanted to be; that was our choice. When it came to deploying the client pieces, they worked with us to identify which machines should get them and when. They took care of pushing that information out. When we started getting the data in, and it came time to start tweaking the rules, they took the lead on that as well. It really, truly was a painless process.

The deployment took less than a week. We had an analyst at that time who was running point on it. I wasn't even involved. I didn't need to be involved in it at that level. One of our entry-level analysts was able to work with them to get everything caught up.

One analyst and I are involved in the day-to-day maintenance of the application. Our entire IT staff, nine people, uses it for log review and incident correlation. We try to put the information out there for the rest of our team members to use.

What was our ROI?

We have been able to save at least one full FTE. The amount we would have to pay that FTE, including benefits, is way more than what we're paying EventTracker for the annual maintenance. It had a positive return on investment almost immediately for us.

What's my experience with pricing, setup cost, and licensing?

Our cost is significantly less than what it would have been for one of the competitor's products, and that includes the run-and-watch service (SIEMphonic). You can go with one-, two-, or three-year agreements. We pay annually for maintenance on the product.

Which other solutions did I evaluate?

When we acquired EventTracker, we went through an assessment process, reviewing five or six different manufacturers of SIEMs. The frontrunners were the typical players: Splunk and LogRhythm. There were a couple of freeware options out there, but what really set EventTracker apart was their SIEMphonic. That was the big differentiator. We were able to get much more value for our money, and it met all the requirements that we had set out when we started the research.

There weren't really major differences between EventTracker and the other players. Ultimately, SIEMs do the same things. They collect logs, they index those logs, and they make them searchable. There's not really a difference on the surface.

What other advice do I have?

The biggest lesson really isn't an EventTracker lesson, it's more of a SIEM lesson. And that lesson is: It's a lot of data. When you have a lot of data, it's going to take a while to study and learn that data, so you can react appropriately. Not all data is actionable.

Be prepared for the data. Be prepared to know what you didn't know before. And be prepared to weed out the noise from the actual data. That's where EventTracker's SIEMphonic becomes very helpful. My advice would be, if you're going to go with EventTracker, to go with the SIEMphonic service and leverage their support team to get your knowledge up to speed. So far, our experience with their support has been top-notch.

In terms of how we view EventTracker, we're typically just in a browser, so it's on whatever our standard is. I've got a couple of 20-inch monitors on my desk. It's sleek enough that it will work on a normal 15-inch laptop screen too. I have not looked at it on mobile yet, given the fact that it's an on-premises service. If I'm in the building, getting VPN'ed in across my phone is a little tough. But that would be the next iteration of the product, if we decided to push up toward the cloud instead of being on-prem. We would definitely be looking for some sort of mobile or tablet-based interface.

We have not integrated EventTracker with other products. Our service-desk tool is Samanage, which was recently acquired by SolarWinds and has been renamed SolarWinds Service Desk. We have not integrated anything with it since SolarWinds acquired it, because we wanted to see what SolarWinds was going to do with it. Integrating it into EventTracker is on the list. We'll do it if it makes sense.

I never rate anything a 10 out of 10, because nothing is ever perfect. But this solution would be at the upper end of that range. This partnership with EventTracker has been one of our better ones.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Sean Moore
Lead Azure Sentinel Architect at a financial services firm with 10,001+ employees
Real User
Top 20
Quick to deploy, good performance, and automatically scales with our requirements

Pros and Cons

  • "The most valuable feature is the performance because unlike legacy SIEMs that were on-premises, it does not require as much maintenance."
  • "If Azure Sentinel had the ability to ingest Azure services from different tenants into another tenant that was hosting Azure Sentinel, and not lose any metadata, that would be a huge benefit to a lot of companies."

What is our primary use case?

Azure Sentinel is a next-generation SIEM, which is purely cloud-based. There is no on-premises deployment. We primarily use it to leverage the machine learning and AI capabilities that are embedded in the solution.

How has it helped my organization?

This solution has helped to improve our security posture in several ways. It includes machine learning and AI capabilities, but it's also got the functionality to ingest threat intelligence into the platform. Doing so can further enrich the events and the data that's in the backend, stored in the Sentinel database. Not only does that improve your detection capability, but also when it comes to threat hunting, you can leverage that threat intelligence and it gives you a much wider scope to be able to threat hunt against.
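To make the enrichment and threat-hunting idea concrete, here is a minimal sketch of a hunt run against a Sentinel workspace from Python. It assumes the azure-monitor-query and azure-identity packages and the commonly used ThreatIntelligenceIndicator and CommonSecurityLog tables; the workspace ID is a placeholder, and the join columns would need adjusting to your own connectors.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Illustrative hunt: join ingested threat-intel network indicators against
# CEF/firewall events to surface traffic to known-bad IPs. Table and column
# names follow the common Sentinel schema; adjust them to your connectors.
HUNT_QUERY = """
ThreatIntelligenceIndicator
| where isnotempty(NetworkIP)
| join kind=inner (
    CommonSecurityLog
    | where TimeGenerated > ago(1d)
  ) on $left.NetworkIP == $right.DestinationIP
| project SourceIP, DestinationIP, ThreatType, Description
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<your-sentinel-workspace-id>",  # placeholder
    query=HUNT_QUERY,
    timespan=timedelta(days=1),
)

# Error handling for partial results is omitted in this sketch.
for table in response.tables:
    for row in table.rows:
        print(row)
```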

The fact that this is a next-generation SIEM is important because everybody's going through a digital transformation at the moment, and there is actually only one true next-generation SIEM. That is Azure Sentinel. There are no competing products at the moment.

The main benefit is that as companies migrate their systems and services into the cloud, especially if they're migrating into Azure, they've got a native SIEM available to them immediately. With the market being predominantly Microsoft, where perhaps 90% of the market uses Microsoft products, there are a lot of Microsoft houses out there and migration to Azure is common.

Legacy SIEMs used to take time in planning and looking at the hardware specifications that were required. Getting an on-premises SIEM in place could take a month, whereas with Azure Sentinel, you can have it available within two minutes.

This product improves our end-user experience because of the enhanced ability to detect problems. What you've got is Microsoft Defender installed on all of the Windows devices, for instance, and the telemetry from Defender is sent to the Azure Defender portal. All of that analysis in Defender, including the alerts and incidents, can be forwarded into Sentinel. This improves the detection methods for the security monitoring team to be able to detect where a user has got malicious software or files or whatever it may be on their laptop, for instance.

What is most valuable?

It gives you that single pane of glass view for all of your security incidents, whether they're coming from Azure, AWS, or even GCP. You can actually expand the toolset from Azure Sentinel out to other Azure services as well.

The most valuable feature is the performance because unlike legacy SIEMs that were on-premises, it does not require as much maintenance. With an on-premises SIEM, you needed to maintain the hardware and you needed to upgrade the hardware, whereas, with Azure Sentinel, it's auto-scaling. This means that there is no need to worry about any performance impact. You can send very large volumes of data to Azure Sentinel and still have the performance that you need.

What needs improvement?

When you ingest data into Azure Sentinel, not all of the events are recognized. The way it works is that they're written to a native Sentinel table, but some events haven't got a native table available to them. In that case, anything Sentinel doesn't recognize is put into a custom table, which is something you need to create. What would be good is an extension of the Azure Sentinel schema to cover a lot more technologies, so that you don't have to have custom tables.
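A small, hypothetical example of what querying such a custom table looks like, again assuming the azure-monitor-query SDK. The table and column names are invented; only the "_CL" and "_s" naming conventions are the standard Log Analytics behavior being illustrated.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# "Acme_Firewall_CL" is a made-up custom table; "_CL" is the Log Analytics
# suffix for custom log tables, and custom string columns typically carry a
# "_s" suffix (hence "Action_s"). Recognized connectors land in built-in
# tables such as Syslog or SecurityEvent instead.
CUSTOM_TABLE_QUERY = """
Acme_Firewall_CL
| where TimeGenerated > ago(1h)
| summarize events = count() by Action_s
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(
    workspace_id="<your-sentinel-workspace-id>",  # placeholder
    query=CUSTOM_TABLE_QUERY,
    timespan=timedelta(hours=1),
)

for table in result.tables:
    print(table.rows)
```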

If Azure Sentinel had the ability to ingest Azure services from different tenants into another tenant that was hosting Azure Sentinel, and not lose any metadata, that would be a huge benefit to a lot of companies.

For how long have I used the solution?

I have been using Azure Sentinel for between 18 months and two years.

What do I think about the stability of the solution?

I work in the UK South region and it very rarely has not been available. I'd say its availability is probably 99.9%.

What do I think about the scalability of the solution?

This is an extremely scalable product and you don't have to worry about that because as a SaaS, it auto-scales.

We have between 20 and 30 people who use it. I lead the delivery team, who are the engineers, and we've got some KQL programmers for developing the use cases. Then, we hand that over to the security monitoring team, who actually use the tool and monitor it. They deal with the alerts and incidents, as well as doing threat hunting and related tasks.

We use this solution extensively and our usage will only increase.

How are customer service and support?

I would rate the Microsoft technical support a nine out of ten.

Support is very good but there is always room for improvement.

Which solution did I use previously and why did I switch?

I have personally used ArcSight, Splunk, and LogRhythm.

Comparing Azure Sentinel with these other solutions, the first thing to consider is scalability. That is something that you don't have to worry about anymore. It's excellent.

ArcSight was very good, although it had its problems the way all SIEMs do.

Azure Sentinel is very good but as it matures, I think it will probably be one of the best SIEMs that we've had available to us. There are too many pros and cons to adequately compare all of these products.

How was the initial setup?

The standard Azure Sentinel setup is very easy. It is just a case of creating a Log Analytics workspace and then enabling Azure Sentinel to sit over the top of it. It's very easy; the challenge is actually getting the events into Azure Sentinel. That's the tricky part.

If you are talking about the actual platform itself, the initial setup is really simple. Onboarding is where the challenge is. Then, once you've onboarded, the other challenge is that you need to develop your use cases using KQL as the query language. You need to have expertise in KQL, which is a very new language.

The actual platform will take approximately 10 minutes to deploy. The onboarding, however, is something that we're still doing now. It's use case development and it's an ongoing process that never ends. You are always onboarding.

It's a little bit like setting up a configuration management platform and only pushing out a single configuration so far.
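For reference, here is a minimal sketch of the first step described above, creating the Log Analytics workspace programmatically. It assumes the azure-mgmt-loganalytics and azure-identity packages; the resource names, region, SKU, and retention value are placeholders, and enabling Sentinel itself is left as a separate onboarding step.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

# All names below are placeholders.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-sentinel-demo"
WORKSPACE_NAME = "law-sentinel-demo"

client = LogAnalyticsManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Step 1 of the setup the reviewer describes: create the Log Analytics
# workspace that Sentinel will sit on top of.
poller = client.workspaces.begin_create_or_update(
    RESOURCE_GROUP,
    WORKSPACE_NAME,
    {
        "location": "uksouth",
        "sku": {"name": "PerGB2018"},
        "retention_in_days": 90,
    },
)
workspace = poller.result()
print("Workspace ready:", workspace.id)

# Step 2, enabling Azure Sentinel "over the top" of the workspace, is a
# separate onboarding action (portal, ARM/Bicep template, or the
# SecurityInsights API) and is not shown here. The ongoing work after that
# is connector onboarding and KQL use-case development.
```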

What was our ROI?

We are getting to the point where we see a return on our investment. We're not 100% yet but getting there.

What's my experience with pricing, setup cost, and licensing?

Azure Sentinel is very costly, or at least it appears to be very costly. The costs vary based on your ingestion and your retention charges. Although it's very costly to ingest and store data, what you've got to remember is that you don't have on-premises maintenance, you don't have hardware replacement, you don't have the software licensing that goes with that, you don't have the configuration management, and you don't have the licensing management. All of these costs that you incur with an on-premises deployment are taken away.

This is not to mention running data centers and the associated costs, including powering them and cooling them. All of those expenses are removed. So, when you consider those costs and you compare them to Azure Sentinel, you can see that it's comparative, or if not, Azure Sentinel offers better value for money.

All things considered, it really depends on how much you ingest into the solution and how much you retain.

Which other solutions did I evaluate?

There are no competitors. Azure Sentinel is the only next-generation SIEM.

What other advice do I have?

This is a product that I highly recommend, for all of the positives that I've mentioned. The transition from an on-premises to a cloud-based SIEM is something that I've actually done, and it's not overly complicated. It doesn't have to be a complex migration, which is something that a lot of companies may be reluctant about.

Overall, this is a good product but there are parts of Sentinel that need improvement. There are some things that need to be more adaptable and more versatile.

I would rate this solution a nine out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
Simon Thornton
Cyber Security Services Operations Manager at an aerospace/defense firm with 501-1,000 employees
Real User
Top 10
Provides a single window into your network, SIEM, network flows, and risk management of your assets

Pros and Cons

  • "The most valuable thing about QRadar is that you have a single window into your network, SIEM, network flows, and risk management of your assets. If you use Splunk, for instance, then you still need a full packet capture solution, whereas the full packet capture solution is integrated within QRadar. Its application ecosystem makes it very powerful in terms of doing analysis."
  • "I'd like them to improve the offense. When QRadar detects something, it creates what it calls offenses. So, it has a rudimentary ticketing system inside of it. This is the same interface that was there when I started using it 12 years ago. It just has not been improved. They do allow integration with IBM Resilient, but IBM Resilient is grotesquely expensive. The most effective integration that IBM offers today is with IBM Resilient, which is an instant response platform. It is a very good platform, but it is very expensive. They really should do something with the offense handling because it is very difficult to scale, and it has limitations. The maximum number of offenses that it can carry is 16K. After 16K, you have to flush your offenses out. So, it is all or nothing. You lose all your offenses up until that point in time, and you don't have any history within the offense list of older events. If you're dealing with multiple customers, this becomes problematic. That's why you need to use another product to do the actual ticketing. If you wanted the ticket existence, you would normally interface with ServiceNow, SolarWinds, or some other product like that."

What is our primary use case?

We're a customer, partner, or reseller. We use QRadar on our own internal SOC. We are also a reseller of QRadar for some of the projects. So, we sell QRadar to customers, and we're also a partner because we have different models. We roll the product out to a customer as part of our service where we own it, but the customer is paying. We also do a full deployment that a customer owns. So, we are actually fulfilling all three roles.

What is most valuable?

The most valuable thing about QRadar is that you have a single window into your network, SIEM, network flows, and risk management of your assets. If you use Splunk, for instance, then you still need a full packet capture solution, whereas the full packet capture solution is integrated within QRadar. Its application ecosystem makes it very powerful in terms of doing analysis.

What needs improvement?

In terms of the GUI, they need to improve the consistency. It has been written by different teams at different times. So, when you go around the interface, you'll find a lot of inconsistencies in terms of the way it works.

I'd like them to improve the offense. When QRadar detects something, it creates what it calls offenses. So, it has a rudimentary ticketing system inside of it. This is the same interface that was there when I started using it 12 years ago. It just has not been improved. They do allow integration with IBM Resilient, but IBM Resilient is grotesquely expensive. The most effective integration that IBM offers today is with IBM Resilient, which is an incident response platform. It is a very good platform, but it is very expensive. They really should do something with the offense handling because it is very difficult to scale, and it has limitations. The maximum number of offenses that it can carry is 16K. After 16K, you have to flush your offenses out. So, it is all or nothing. You lose all your offenses up until that point in time, and you don't have any history within the offense list of older events. If you're dealing with multiple customers, this becomes problematic. That's why you need to use another product to do the actual ticketing. If you want ticketing, you would normally interface with ServiceNow, SolarWinds, or some other product like that.

Their support should also be improved. Their support is very slow, and it is very difficult to find knowledgeable people within IBM.

Its price and licensing should be improved. It is overly expensive and overly complex in terms of licensing. 

For how long have I used the solution?

I have been using this solution for 12 years.

How are customer service and technical support?

Their support is very slow. It is very difficult to find knowledgeable people within IBM. I'm an expert in the use of QRadar, and I know the technical internals of QRadar very well, but it is sometimes very painful to deal with IBM's support and actually get them to do something. Their support is very difficult to work with for some customers.

Which solution did I use previously and why did I switch?

I work with Prelude, which is made by a French company. It is a basic beginner's SIEM. If you've never had a SIEM before and you wanted to experiment, this is where you would start, but you would probably leave it very quickly. I've also worked with ArcSight and Splunk.

My recommendation would depend upon your technical appetite or your technical capability. QRadar is essentially a Linux-based Red Hat appliance. Unfortunately, you still need some Linux knowledge to work with this effectively. Not everything is through the GUI. 

Comparing it with Splunk, in terms of licensing, IBM's model is simpler than Splunk's. Splunk has two models. One is volumetric, so you pay for the number of bytes that are ingested daily. The other one is based upon the number of events per second, which they introduced relatively recently. Splunk can be more expensive than QRadar when you start adding what they call indexes. Basically, you create specific indexes to hold, for instance, logs related to Cisco. This is implicit within QRadar, and it is designed that way, but within Splunk, if you want to get that performance and you have large volumes of logs, you need to create indexes. This is where the cost of Splunk can escalate.
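A quick, purely hypothetical back-of-the-envelope calculation shows how the two licensing dimensions relate; the event size and EPS figures below are assumptions for illustration, not measurements or vendor prices.

```python
# Back-of-the-envelope conversion between the two licensing dimensions
# mentioned above: events per second (EPS) versus daily ingest volume.
# Every number here is a hypothetical example, not vendor pricing.

AVG_EVENT_SIZE_BYTES = 500   # assumed average size of one log event
EVENTS_PER_SECOND = 2_000    # assumed sustained EPS across all log sources
SECONDS_PER_DAY = 86_400

daily_bytes = EVENTS_PER_SECOND * AVG_EVENT_SIZE_BYTES * SECONDS_PER_DAY
daily_gb = daily_bytes / 1024 ** 3

print(f"{EVENTS_PER_SECOND} EPS at ~{AVG_EVENT_SIZE_BYTES} B/event "
      f"is roughly {daily_gb:.1f} GB/day of ingest")
# 2,000 EPS at ~500 bytes/event comes to roughly 80 GB/day, which is the
# figure you would size a volume-based license against, while an EPS-based
# license (Splunk's newer model, or QRadar's) is sized on the 2,000 figure.
```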

How was the initial setup?

Installing QRadar is very simple. You insert a DVD, boot the system, and it runs the installation after asking you a few questions. It runs pretty much automatically, and then you're up and going. From an installation point of view, it is very easy.

The only thing that you have to get right before you do the installation is your architecture, because it has event collectors, event processors, flow collectors, flow processors, and a number of other components. You need to understand where they should be placed. If you want more storage, you need to attach data nodes to the processors. All of this is something that you need to have in mind when you design and deploy.

What's my experience with pricing, setup cost, and licensing?

It is overly expensive and overly complex in terms of licensing. They have many different appliances, which makes it extremely difficult to choose the technology or QRadar components that you should be deploying.

They have improved some of it in the last few years. They have made it slightly easier with the fact that you can now buy virtual versions of all the appliances, which is good, but it is still very fragmented. For instance, on some of the smaller appliances, there is no upgrade path. If you exceed the capacity of the appliance, you have to buy a bigger appliance, which is not helpful because it is quite a major cost. If you want to add more disks to the system, they'll say that you can't. If an older appliance ships with 2-terabyte disks and you point out that 10-terabyte disks are commercially available, they will say it is not possible, even though there is no technical reason why it cannot be done. So, they're not very flexible from that point of view. For IBM, it is good because you basically have to buy new appliances, but from a customer's point of view, it is a very expensive investment.

What other advice do I have?

Make sure that you have the buy-in from different teams in the company because you will need help from the network teams. You will potentially need help from IT. 

You need to have a strategy for how you onboard logs into the SIEM. Do you take a risk-based approach or do you onboard everything? You should take the time to understand the architecture and the implications of design choices. For instance, QRadar components communicate with each other using SSH tunnels. The normal practice in security is that if I put a device in a DMZ, then communication between a device on the normal network, which is a higher security zone, and the DMZ, which is a lower security zone, will be initiated from the high-security zone. You would not expect the device in the DMZ to initiate communication back into the normal network. In the case of QRadar, if you put your processors in the DMZ, they have to communicate with the console, which means that you have to allow the processor to initiate that communication. This has consequences. If you have remote sites, or you plan to use cloud-based processors, collectors, etc., with an internal console, the same communication channels have to exist. So, it requires some careful planning. That's the main thing.
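A simple way to validate that firewall path during design is a basic reachability check from the DMZ segment. The sketch below is generic Python rather than a QRadar tool, and the console hostname is a placeholder.

```python
import socket

# Run from the DMZ-hosted event/flow processor to confirm the firewall allows
# it to initiate the SSH connection back to the QRadar console, which is the
# lower-zone-to-higher-zone flow described above. Hostname is a placeholder.
CONSOLE_HOST = "qradar-console.internal.example.com"
SSH_PORT = 22

try:
    with socket.create_connection((CONSOLE_HOST, SSH_PORT), timeout=5):
        print(f"OK: {CONSOLE_HOST}:{SSH_PORT} is reachable from this segment")
except OSError as exc:
    print(f"Blocked or unreachable: {exc} - check the DMZ-to-internal rule")
```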

I would rate QRadar an eight out of 10 as compared to other products.

Disclosure: I am a real user, and this review is based on my own experience and opinions.