
LogRhythm NextGen SIEM Competitors and Alternatives

Get our free report covering Splunk, Datadog, IBM, and other competitors of LogRhythm NextGen SIEM. Updated: December 2021.

Read reviews of LogRhythm NextGen SIEM competitors and alternatives

Dannie Combs
Senior Vice President and Chief Information Security Officer at Donnelley Financial Solutions
Video Review
Real User
Top 5
The alert fatigue and false positive rates have just plummeted, which is really exciting.

Pros and Cons

  • "As a result of the automation, we are able to manage the SIEM with a small security team. I'm in a unique position where we have been growing the security organization quite rapidly over the last three and a half years, but as a direct result of the transition from our legacy collection of tools to the empow platform, we've been able to keep that head count flat. We've been able to redirect a lot of the security team's time away from the wash-rinse-repeat activities of responding to alarms that we are highly confident will be false positives and adjusting the rules accordingly. It can be frustrating for analysts to spend hours a day dealing with these probable false positives. So it has not only helped us keep our head count flat relative to the resources needed to provide the monitoring assurances our executives expect of us, but it also allows our analyst team to spend the majority of their time doing what they love. They spend their time meaningfully, with a higher degree of confidence, and enjoy getting into incident response activity."
  • "Relative to keeping up with the sheer pace of cloud-native technologies, it should provide more options for clients to deploy their technologies in unique ways. This is an area where I recommend they maintain focus."

What is our primary use case?

My organization is in the financial services industry, and the majority of services that we offer are financial services centric. We operate in or support almost every industry in the marketplace. We store, process, and transmit highly sensitive information. Sometimes that information is premarket; other times it is personally identifiable information, personal health information, etc. It depends on our clients' requirements. Security is a cornerstone of all that we do. It's in our DNA, as we like to say internally. Being in a position to understand when we are at risk of a cyber attack is paramount.

We have a strong desire to understand who did what, where, when, and why internally. empow's near real-time, high fidelity, security monitoring capabilities are our primary use case. Other use cases revolve around:

  • Gaining as much insight as possible from a threat intelligence perspective, being able to correlate that back to an alarm, and doing so in an automated fashion. 
  • The automated mitigation capability. 
  • The general reporting and analytics within the platform.

How has it helped my organization?

We have a significantly higher confidence in our ability to automate mitigations. We've had technologies across SOAR and cyber threat intelligence integrated into our platforms for over four years now. We would like to tell ourselves that we're reasonably experienced with both of those technology categories. 

One of the most impressive accomplishments that we were able to showcase internally was building metrics around the fidelity of our playbooks when they're executed. We have a high degree of confidence that we have the right playbooks in place. It's also worth mentioning that we're a global organization. We are corporate focused, primarily, not consumer focused. We know where our clients are from a geographic perspective, as an example, but our clients travel. We want to be hyperconservative with those mitigation techniques so as not to adversely affect the client experience with our product lines. I was quite surprised that, even though we took a very conservative approach initially, the false positive rate was almost zero when the mitigation playbooks were invoked. The enablement of automated mitigations that the empow product line has provided us with is incredibly impressive.
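The playbook-fidelity metric described above reduces to a simple ratio. The following is a hypothetical sketch, not empow functionality: the `PlaybookRun` records and `fidelity` helper are invented for illustration, with analyst review supplying the true/false-positive label.

```python
from dataclasses import dataclass

@dataclass
class PlaybookRun:
    playbook: str
    was_true_positive: bool  # confirmed by analyst review after execution

def fidelity(runs, playbook):
    """Fraction of a playbook's automated executions confirmed as true
    positives (higher = more trustworthy automation). Returns None if
    the playbook has never run."""
    relevant = [r for r in runs if r.playbook == playbook]
    if not relevant:
        return None
    return sum(r.was_true_positive for r in relevant) / len(relevant)

runs = [
    PlaybookRun("geo-block", True),
    PlaybookRun("geo-block", True),
    PlaybookRun("geo-block", False),
    PlaybookRun("isolate-host", True),
]
print(fidelity(runs, "geo-block"))  # 2 of 3 runs confirmed
```

Tracking this ratio per playbook over time is one way an organization could decide which mitigations are safe to leave fully automated.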

One of the most impressive capabilities of the empow product line, to our security analyst team, is just how little maintenance is required to ensure that we are focusing on the right threats. The correlation rules themselves effectively require little to no maintenance from a client perspective, which is tremendous. This is a huge leap forward compared to other product lines and SIEMs over the last 10 years. 

Correlation rules maintenance has been one of the most time consuming bodies of work required. It is one of the areas where we had a higher degree of risk of focusing in the wrong areas. We spent an enormous amount of time being hyperfocused on ensuring that we have the right correlation rules in place, the fidelity of those rules was sound, etc. We just can't begin to mention how pleased we are that, for the most part, this is no longer something we have to be concerned about.

The power of the AI and the natural language processing capability is best measured by the outputs. The fidelity of the alarms that we receive is just night and day compared to SIEM platforms and other platforms we've used in the past. I also feel it is a leading reason why our overall alarm volume is significantly lower: we deal with far less alert fatigue and far fewer false positives as a direct result of the AI and NLP capabilities.

Our overall false positive rates are significantly lower. It's definitely removed about 60 percent of the total volume of alarms that we have needed to respond to each month over the last year. It's also worth mentioning that we spent considerable amounts of time in years past managing correlation rules: ensuring that we had the right prioritization applied to those rules and that the rules took into account our technology deployments, such as a general shift in our portfolio, adding and removing devices, retiring products and services, and adding new innovative solutions for our customers. This was to the extent that we had a 90-minute session twice a month with a partner of ours dedicated just to that work. Today, we don't have any monthly meetings focused on correlation rules, as a direct result of our transition to empow.

Their ability to focus on an event with a high degree of fidelity really drives our level of confidence. We are quick to respond with a high degree of urgency when we do receive an alarm, because we recognize that there is a very high probability that the alarm is accurate. This enables us to focus on other areas throughout the day, knowing that anything empow raises needs to be responded to urgently.

The integration between Elastic and empow has been quite impressive for a couple of reasons: 

  1. We're a prime example of an organization that must have a high degree of flexibility in its deployments. We have full cloud-native deployments of products and corporate systems, and we have on-prem deployments of both. Our cloud deployments span many cloud providers. Therefore, I need to be able to orchestrate and scale my footprint up and down depending on geography, cloud provider, the tempo of the business relative to the lifecycles of some of our products, and so on. Having a lot of levers to pull with Elasticsearch has proven very attractive to us in supporting our requirements for flexibility. 
  2. They play a big role in making it incredibly easy to plug into other security tools, network platforms, and application platforms, whether they are internally developed or commercial offerings. The API model that the empow product provides has simplified the integration of almost any technology into their product lines.

empow has impacted our network security posture in a truly dramatic way, particularly in that we have higher confidence, when we are responding to an event, that it is actionable and something we should be concerned about. Secondly, it has positively impacted our network security posture by way of the automated mitigations defined within the system. The playbooks that we define, and can take a conservative approach to, help us avoid any negative impact to our clients. Those playbooks define the automated mitigations, and we have a tremendous amount of confidence in their accuracy. They are triggered daily, which reduces risk and reduces the amount of time spent containing and mitigating events. Overall, from a security perspective, these have been quite dramatic steps forward. 

It also directly supports our compliance programs. We're very easily able to measure when we have events and what actions were taken because the vast majority of them are addressed through automation.

I had worked with empow at a previous organization, but our requirements were very different. We are definitely enterprise-focused, but we are also corporate user-focused. Our client community is primarily mid to large enterprise organizations across the globe. How well a product organization and its services team respond to support calls is critically important, and I give empow very high marks. The responsiveness has been very high, but more important than the responsiveness is the quality and accuracy of their recommended next steps to resolve whatever issue we may have. 

What is most valuable?

  • The automated mitigation capability. 
  • A next generation capability of attack replay, where it walks back from the event, historically, to provide that visualized representation of the attack lifecycle. 
  • The ability to rapidly deploy a comprehensive coverage tool without the need to spend months planning a deployment with emphasis placed on correlation rules. The ability to put aside the need for a high number of correlation rules is extremely advantageous to us, as it saves time and money and drives fidelity higher. It's just a fantastic capability.

When I think about the quality of the dashboard, it's one of the features that is just fantastic to speak about. They designed a dashboard where I can get a quick snapshot with a broad lens over the last seven to 10 days, or dive specifically into areas of concern. Also, from a SOC analyst perspective, there are many levels within a SOC organization, so whether someone is entry level or a new hire, they can find the right altitude of interest relative to the depth of detail being presented. The flexibility of the dashboard to quickly drill up or down into an altitude of your choosing is fantastic.

Also valuable is the ability to pivot between various data sets, whether it be:

  • Threat intelligence centric data
  • Alarm data
  • A specific asset
  • Elevating it to a solution level
  • Elevating it to an entity level

The degree of flexibility and the speed with which you can change your view is very impressive. Oftentimes, with some of the more legacy SIEMs that have been in the market for a long time, that was one of the major pain points: It took time to refresh views, and the limitations on that flexibility were frustrating.

The platform has made mitigation faster, primarily by way of the playbooks we have defined (automated mitigation). We have a number of playbooks defined where our empow platform signals directly to the firewall to block traffic. For example, we have no customers in North Korea, so anytime we see an interrogation of our products or assets from there, we signal the firewall to drop that traffic systematically. The time saved is not so much in mean time to respond to an event, but in freeing our analysts to focus on other areas.
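A geo-based blocking playbook like the one described can be sketched roughly as follows. This is purely illustrative: `handle_alarm`, `FakeFirewall`, and the alarm fields are all invented for the example, not taken from empow's actual API.

```python
# Countries where the organization has no legitimate customers; any
# traffic sourced there can be blocked automatically with near-zero
# risk of impacting a real client. "KP" (North Korea) per the example.
BLOCKED_COUNTRIES = {"KP"}

def handle_alarm(alarm, firewall):
    """Conservative automated-mitigation step: block only when the
    source country is known-bad, otherwise escalate to an analyst."""
    if alarm["src_country"] in BLOCKED_COUNTRIES:
        firewall.block(alarm["src_ip"])
        return "blocked"
    return "escalate"

class FakeFirewall:
    """Stand-in for a real firewall integration; records block calls."""
    def __init__(self):
        self.blocked = []
    def block(self, ip):
        self.blocked.append(ip)

fw = FakeFirewall()
print(handle_alarm({"src_country": "KP", "src_ip": "203.0.113.7"}, fw))  # blocked
print(handle_alarm({"src_country": "US", "src_ip": "203.0.113.8"}, fw))  # escalate
```

The conservative design choice here mirrors the review: automation fires only on unambiguous conditions, and everything else still reaches a human.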

What needs improvement?

empow has a few areas of improvement as with any other technology, such as continuing to drive innovation in the dashboard. While we've been extremely impressed with the dashboard's ease of use, flexibility, ability to drill down deeply, and focus very intently on an area of interest, there will always be opportunities to be more innovative and open it up to a wider audience than just the operations group, for example. 

With reporting, there is always a desire to have custom reporting for every client of empow. 

Relative to keeping up with the sheer pace of cloud-native technologies, it should provide more options for clients to deploy their technologies in unique ways. This is an area where I recommend they maintain focus.

For how long have I used the solution?

I've been using the empow i-SIEM platform for a total of four years across two companies, but for two years in my current company.

What do I think about the stability of the solution?

Someone knock on some wood here if you would, but we haven't had any stability challenges yet. That is directly attributable to the architecture that we've put in place for empow and the other solutions we deploy. We plan for highly available solutions across each deployment site, or per data center, making geo-redundancies available. So far, so good; we have not had any significant operational hiccups with the platform.

We have one dedicated resource who is accountable for ensuring that the empow environment is healthy from a maintenance perspective. We have a team of threat analysts on staff, and we have a third-party managed security services partnership in place as well. There's definitely one FTE whose primary role is to ensure that the empow platform is up and running, healthy, and satisfying the needs of our internal clients, which would be our team of cyber threat analysts.

What do I think about the scalability of the solution?

The scalability of empow is endless. I feel that they have an architecture that is highly scalable. It's been proven for on-prem, cloud, and hybrid environments. We presently have a hybrid environment. I suspect they can scale to almost any size needed. The question will be as to what are the unique needs of the organization where they've been deployed and what is their appetite for investing to ensure resiliency either locally, regionally, or globally. Those things play a role in how quickly and complex the architecture must be in order to scale. 

It is the standard for security operations. Anywhere my organization deploys technology assets, empow will be providing coverage, if not already.

The ability for empow to be managed by a single analyst depends on the organization. I don't need a team of 20 to 25 analysts any longer; it's significantly fewer than that. Whether one analyst is enough really depends on the organization and its threshold for risk, which is unique to every organization, and on its size from a technology perspective: Are you dealing with hundreds of servers or tens of thousands? That will be indicative of your resource needs. Regardless of your size, it takes essentially one, maybe two, resources to directly support the care, feeding, capacity management, and monitoring of the empow platform. The simplicity of the architecture is remarkably impressive.

How are customer service and technical support?

The partnership between empow and Elasticsearch has a very positive impact on us from a couple of different angles:

  1. Support. There's one throat to choke. We pick up the phone, we reach out to the empow team, and we have one point of contact, whether we're experiencing an application issue, a data issue, etc. It just simplifies the overall management. 
  2. The licensing negotiation through one organization is simpler. As it relates to preparing for major upgrades, it makes our lives quite a bit easier when there are fewer parties that we have to interface with.

The partnership between empow and Elastic has a few key benefits:

  1. The simplification and how we have one point of contact for support regardless of what the issue type is. If we're experiencing a concern relative to the application, UI, reporting engines, etc., we have one phone number to call and one lead engineer to reach out to who takes ownership relative to determining if it's an internal empow matter, or if we need to reach out across the boundaries over to Elastic. 
  2. How it relates to our planning for upgrades or expansion of the environment for capacity management purposes, whatever the issue may be. Having that simplified licensing arrangement makes my life easier. As we have one agreement, we have one pricing scheme to work from. It's just really good, which keeps it nice and simple for maintaining the business.
  3. As we look forward to future product lines and other architectural endeavors, having a single point of contact for planning purposes simplifies the process quite a bit as we look to year two and three.

Which solution did I use previously and why did I switch?

It is worth mentioning that we were able to retire two other platforms as part of our migration over to empow: the legacy SIEM deployment that we had in place for nearly four years, and our SOAR platform. While the legacy SIEM is a great product, I just felt that empow demonstrated more innovation at a lower cost point, with a simpler architecture that is more extensible and easier to scale. We continue to use our existing cyber threat intelligence (CTI) platform; although there is overlap with empow's capabilities, we still see value in it. So we retired the SOAR and our legacy SIEM.

We needed to make a change in large part because the cost of scaling was becoming quite concerning. Also, we went through a series of upgrades a little over a year ago that were problematic. Anytime we experience something that impactful, we now want to pause and reflect on what we did well, where our opportunities were, and whether we missed any chances to avoid that situation from being realized. We used the outcomes of those reflections to revisit the market and made the decision to pursue empow as our leading solution for security operations.

empow has been able to significantly reduce the time that we spend just maintaining the platform, particularly as compared to other product lines that we've previously invested in. The biggest advantage in this regard would be the lack of time spent managing correlation rules. The simplified architecture allows us to lean on empow's support teams to provide almost end-to-end support of the underlying infrastructure that comprises the platform.

How was the initial setup?

There was complexity to the initial deployment insomuch as we were migrating from an existing, fairly sizeable deployment, if not an entire product line. There were a couple of different solutions in place that comprised our overall enterprise security monitoring solution: the SIEM, our SOAR platform, a cyber intelligence interface drawing from a number of different feeds, etc. 

The initial deployment took a couple of weeks, and most of that was planning; the actual technical activities were executed quite quickly. There was the migration of the existing primary data storage from our old SIEM environment, and there was also the body of work to redirect log streams and other data ingestion from our several thousand devices (north of 5,000 devices for production alone), which takes time. The migration of the existing data storage took about 14 days, across three or four different change windows, to make sure that it was complete and to wrap up some other activities. Then the effort to redirect all the various log streams into the empow environment, away from a multi-tiered architecture to a single destination IP address (just a single collector across the environments), took approximately two months. That was more to ensure that we understood the risks associated with change-management collisions; we were hyperfocused on never losing a log throughout those migrations.

What about the implementation team?

There was some complexity that several of my teams and the empow team needed to walk through to make sure we mutually understood the goals and technical requirements. There were some business requirements and related reporting that we wanted to make sure we were all aligned on. While there were complexities, I would also give empow very favorable feedback as it relates to them taking ownership of the migration and the overall deployment of their technology. They really looked out for us. There were a number of times when they cautioned me to make some minor adjustments to the plan to ensure that we weren't disruptive to our business, for which I'm very appreciative.

Our overall implementation strategy was bringing both empow and representatives from my teams together to build a plan. We established a few key milestones and aligned those milestones relative to availability of resources on both sides. This ensured we really understood what we were trying to accomplish, not just from an architecture perspective, but also, e.g., taking into account key business timelines that we needed to be mindful of and where we just didn't have an appetite for some major change management activities to occur. We each brought project management resources, a lead architect, and a threat analyst to the table to ensure that we understood each of those perspectives to really comprise that team and ensure that they were set up for success. I would estimate the total team makeup, excluding myself, would be six: two from empow and four from my team.

What was our ROI?

We are saving so much time. We deal with billions of events a month; we are definitely a data-centric organization. We are easily able to save 75 percent of the head count for security operations that would otherwise be needed given our scale. Now, we are in a bit of a unique situation where the organization spun off from its parent company just shy of four years ago, so we are still in growth mode in many respects. While we are continuing to expand our security organization from an FTE and head count perspective, it's very easy to quantify that without empow we would be looking at seven to 10 more resources being required, as opposed to the one or two who are focused on the platform today, where "focused on the platform" includes capacity management, general system administration of the environment, and monitoring and responding to the alarms that are generated.

As a result of the automation, we are able to manage the SIEM with a small security team. I'm in a unique position where we have been growing the security organization quite rapidly over the last three and a half years, but as a direct result of the transition from our legacy collection of tools to the empow platform, we've been able to keep that head count flat. We've been able to redirect a lot of the security team's time away from the wash-rinse-repeat activities of responding to alarms that we are highly confident will be false positives and adjusting the rules accordingly. It can be frustrating for analysts to spend hours a day dealing with these probable false positives. So it has not only helped us keep our head count flat relative to the resources needed to provide the monitoring assurances our executives expect of us, but it also allows our analyst team to spend the majority of their time doing what they love. They spend their time meaningfully, with a higher degree of confidence, and enjoy getting into incident response activity.

Our time spent supporting the environment has been reduced by north of 75 percent, from general system administration and capacity management to overall patching of the ecosystem. Most notable would be the time to maintain the application tier of empow, particularly the correlation rules; that has been reduced by north of 90 percent as compared to other platforms.

Mitigation time has been reduced by north of 75 percent for the vast majority of alarms that we receive. This varies depending on the event type. However, with the automated playbooks that we have defined and our confidence in the fidelity of the alarms, we have enjoyed significant reductions in our mean time to mitigate and mean time to respond.

As we ingest more logs, we have more alarms, which means we need more analysts to respond to those alarms in order to meet our very aggressive SLAs. With the higher degree of fidelity in the alarms, we were able to avoid adding additional resources to our teams. Taking into account the cost of security resources in the market and the significantly higher fidelity of the alarms being generated, this drove down our costs with our MSSP. It drove down my cost for human capital internally. It drove down our need to have multiple resources supporting the underlying infrastructure, health, and maintenance of empow as a platform, from several resources down to one. Therefore, human capital costs were significantly reduced, our operating expenses were significantly reduced, and our capital costs were significantly reduced, all while we tripled our capacity and reduced our run rate. It was almost a "too good to be true" situation. Fortunately for us, it worked out very nicely.

What's my experience with pricing, setup cost, and licensing?

We were looking at a seven-figure investment being necessary to sustain our growth projections for our log ingestion requirements, just for production. We had a goal of ensuring that we understood who did what, where, when, and why across all assets: production, staging, development, field devices, laptops, iPads, etc. Not only were we able to avoid all those growth-related costs from a point in time forward, but the cost structure of empow (as compared to both existing tools) made it cheaper for us to migrate than to sustain our legacy platforms. When it's cheaper to migrate, that's very attractive. 

I no longer have to put up with hypercomplex licensing agreements. Every time I wanted to add some additional compliance-centric or regulation-specific reporting, e.g., for GDPR, PCI, or Sarbanes-Oxley, many providers would require an additional license, which felt a bit ridiculous to me. With empow's simplified licensing architecture, there were no hidden "gotchas" down the road, something I have experienced with other providers in the past.

As it relates to the SOAR side of the toolkit, there was no need to purchase an independent SOAR platform. The innovation that empow brings to the market just addressed all those use cases natively. We were able to just completely retire that toolkit out of our environment.

Which other solutions did I evaluate?

As we do with almost every technology selection, we looked at the market. For this particular technology stack, there were five or six different players whom we looked at intently. Then, as with most organizations, we took the broader view and narrowed it down to two or three finalists, who went into a formalized proof-of-concept lab environment. That runs for an extended period of time for something as critical as a SIEM or SOAR, which are the primary reporting capabilities for security purposes. We had an extended evaluation and followed a crawl-walk-run model for our rollout. In hindsight, it was a very structured, formalized process. What was interesting was that the business case was very simple because of the cost savings from the cost avoidance alone. As we look forward in our plans and the need to scale up, it more than covered the cost of transitioning in our entirety over to the empow platform.

Some of the other SIEM providers that we looked at when we revisited the market include LogRhythm, Splunk, and empow, to name a few. A pro of empow would be the simplified licensing model; both of my organizations have heard feedback over the years from many clients that licensing is an area of concern, particularly with Splunk. Also, the simplified architecture, particularly leveraging the Elasticsearch technologies, prevents the need for a complex architecture with high-powered CPUs in place across each of the footprints where you have log collectors deployed. That was a major value add and very attractive to us. 

We evaluate the partnership between empow and Elastic along a couple of different dimensions. First and foremost, what was our experience like as we were negotiating? Was empow in a position to adequately represent the business terms for Elastic, the support terms, and the other commitments that we needed to work through? The answer was yes; it felt very seamless to us. Secondly, the simplicity of the licensing model made the process of acquiring the technologies much simpler and more straightforward than what we experienced in previous relationships.

The decision to partner with Elastic wasn't a Tier 1 criterion for us. However, what was a criterion for us was the outcome and capabilities that the partnership has resulted in, e.g., the speed to rapidly scale up, the ease of scaling down, and the ease of migrating from a primarily on-prem data center strategy to a hybrid 50/50 cloud/on-prem strategy, with long-term plans for pursuing cloud far more aggressively. As we pull the levers to keep up with the demands of our business, we wanted comfort that it would not require a disruptive series of changes to the SIEM, and that we wouldn't have to go back and re-architect, then buy additional licenses for new features and functionality. We wanted to avoid that complex license structure. We want to have confidence in our ability to scale up and down and migrate across multiple providers as our business needs warrant. empow has done a great job of supporting us in that regard.

What other advice do I have?

If I were to rate empow on a scale of one to 10, I would probably give them a nine and a half. The reason it's so high is that there are no competitors on the market, in my mind, that have transformed the SIEM industry as much as empow. The speed with which they continue to innovate is impressive; every couple of months, we're excited to learn about the latest and greatest capabilities of the platform. Most of the latest innovations have been centered around their automation capabilities, which have had a tremendous impact on my organization. They tend to focus on what matters, which has given us high confidence that where we are spending our time is worth doing so. The alert fatigue and false positive rates have just plummeted, which is really exciting. They have transformed the industry, which no one would have expected not that long ago.

I'd like to give them a bit of a shout-out: when they have given me commitments around enhancements, such as enhancing their reporting capabilities and making some minor adjustments to the dashboard, they've met those commitments in terms of both quality and timeline.

empow, without a doubt, is the most important monitoring tool that we have at our disposal. From a monitoring and incident response perspective, empow is the most valuable asset we have in our toolkit.

The biggest lesson I've learned from using empow would be just how far the technology has come. The orchestration and automation of our mitigations was quite surprising, as were the accuracy of the playbooks and how high my level of confidence in them is. Another area that surprised me would be the level of confidence that we now have in our ability to scale up and down, as scaling down can sometimes be equally tricky.

The advice I would give to anyone looking at empow is, primarily, to ensure that your planning is sound. When I think back on our experience with empow, it's refreshing to remember how easy that journey was with such a difficult technology stack. It should not have been so simple: not long ago we were standing up a net new sandbox environment, into which we needed to build, deploy, and then migrate a very sizeable deployment. Inevitably, we expected some bumps along the road, but there were very few. I attribute this to the quality of planning and the reliability of the technology that empow brings to the table. Therefore, my advice would be to ensure that your planning is sound. While it's exciting that the technology is very stable and the API-driven integrations are very straightforward, they can never fully account for the uniqueness of your business. Thus, planning is absolutely paramount.

Which deployment model are you using for this solution?

Hybrid Cloud
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Elizabeth Manemann
Cyber Security Engineer at H&R Block, Inc.
Real User
Top 20
Accepts data in raw format but does not offer their own agent

Pros and Cons

  • "The most valuable feature is definitely the ability that Devo has to ingest data. From the previous SIEM that I came from and helped my company administer, it really was the type of system where data was parsed on ingest. This meant that if you didn't build the parser efficiently or correctly, sometimes that would bring the system to its knees. You'd have a backlog of processing the logs as it was ingesting them."
  • "From our experience, the Devo agent needs some work. They built it on top of OS Query's open-source framework. It seems like it wasn't tuned properly to handle a large volume of Windows event logs. In our experience, there would definitely be some room for improvement. A lot of SIEMs on the market have their own agent infrastructure. I think Devo's working towards that, but I think that it needs some improvement as far as keeping up with high-volume environments."

What is our primary use case?

We have a couple of servers on-premises to gather the logs from our devices. We have a lot of devices including vendor-agnostic collectors that will, for example, collect syslogs from our Linux host. The logs are then sent to the Devo Relay, which encrypts the data and sends it to the Devo Cloud.
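The pipeline described here (devices send raw logs to an on-prem relay, which batches and forwards them, encrypted, to the cloud) can be sketched in miniature. This is a hypothetical illustration of the framing-and-batching side of such a relay, not Devo's actual relay software; the host name and port are invented, and the TLS connection itself is omitted.

```python
# Minimal sketch of a log relay: accept raw syslog lines from the LAN,
# frame each one (RFC 6587 octet counting, so messages survive TCP stream
# boundaries), and batch them for a single encrypted upstream write.
# "collector.example-cloud.net" and port 6514 are hypothetical placeholders.

def frame(message: str) -> bytes:
    """Prefix the message with its byte length, octet-counting style."""
    data = message.encode("utf-8")
    return f"{len(data)} ".encode("ascii") + data

class Relay:
    def __init__(self, upstream_host: str, upstream_port: int):
        self.upstream = (upstream_host, upstream_port)
        self.buffer: list[bytes] = []

    def ingest(self, raw_line: str) -> None:
        # No parsing here -- the relay ships raw lines untouched;
        # parsing happens later, at query time, in the cloud.
        self.buffer.append(frame(raw_line.rstrip("\n")))

    def payload(self) -> bytes:
        """Drain the buffer into one batch, ready for a TLS write."""
        batch, self.buffer = b"".join(self.buffer), []
        return batch

relay = Relay("collector.example-cloud.net", 6514)
relay.ingest("<34>1 2021-12-01T00:00:00Z host app - - - login failed\n")
print(relay.payload())
```

The batching matters in practice: one upstream connection carrying framed messages is far cheaper than a connection per log line.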

What we send to Devo includes all of our Unix-based logs. These are the host logs, as well as logs from a lot of the network devices such as Cisco switches. Currently, we are working with Devo to set up a new agent infrastructure, and the agents will collect Windows event logs.

We were using a beta product that Devo provided for us, based on an open-source platform called osquery. It did not quite work for the volume of logs that we have; it didn't seem able to keep up with the large number of servers, or the large Windows event log volume, in our environment. We're currently working with them to transition to NXLog and use its agents, which work really well for forwarding the logs to Devo.

We also send cloud logs to Devo, and they have their own collector that handles a lot of that. It basically pulls the logs out of our cloud environment. We are sending Office365 management logs, as well as a lot of Azure PaaS service logs. We're sending those through an event hub to Devo. We are currently working on onboarding some AWS logs as well.

We have several corporate locations, with the main location in the US. That is where the majority of our resources are, but we also have Devo relays stood up in Canada, Australia, and India. These locations operate similarly to what is described above, although on a smaller scale. They send all of their Unix devices' syslogs to the relay, and at the moment I believe only Australia is using agents to pull Windows logs. Canada is using a different SIEM, although that contract is about to expire, so we'll then onboard their Windows event logs as well. India does not have any Windows servers that need an agent for collecting logs, so they just send the Linux and Unix logs over the relay to Devo.

Our main use case and customer base are our security operations center analysts. A lot of our process was built up and carried over from our previous SIEM, LogRhythm. We have an alerting structure built out that initiates a standard analyst workflow.

It starts when you get an alert. You drill down in the logs and investigate to see if it's a false positive or not.

We are in the process of onboarding our internal networking team into Devo, and we are gathering a lot of network logs. This means that they can monitor the health of our networking infrastructure, and at some point, maybe set up health alerts for whatever they are looking for.

We have another team using Devo, our internal fraud team. They're very similar to SOC analysts, in that they look for suspicious events. They are especially interested in tax filing and e-filing. We gather logs for that, and they go through a really deep investigative workflow.

How has it helped my organization?

One of the immediate improvements that come to mind is the amount of hot, searchable data. In the SIEM we had before, we were only able to search back 90 days of hot, searchable data, whereas here we have 400 days worth. That definitely has improved our threat hunting capabilities. 

We're also able to ingest quite a bit more data than we were before. We're able to ingest a lot of our net flow data, which if we had sent that to our previous SIEM would have brought it to its knees. So the amount of data that the analysts are able to see and investigate has been a really big beneficial use case. I'd say that's the biggest benefit that it's provided.

I myself do not leverage the 400 days of hot data to look at historical patterns or analyze trends. A lot of the time I look at log volumes and traffic to make sure there are no bottlenecks in how log sources are sending to Devo. The analysts, for certain cases, will definitely go back and retroactively view where a user was logging in, for example. At the moment, we haven't really had a use case that pushes the limit of that 400 days and goes really far back, but we definitely use the past couple of months of data for a lot of the analyst cases.

This is an important feature for our company, especially given the recent SolarWinds attack, which was a big deal. We did not have Devo available at the time, and because that compromise reached so far back in the past, it was a struggle to pull the data needed to look for those IOCs. That was definitely a really big selling point for this platform with our company.

Devo definitely provides us with more clarity when it comes to network endpoint or cloud visibility. We're able to onboard a lot of our net flow logs. We are able to drill down on what the network traffic looks like in our environment. For the cloud visibility, we're still working on trying to conceptualize that data and really get a grasp around it to make sure that we understand what those logs mean and what resources they're looking at. Also, there's a company push to make sure that everything in the cloud is actually logging to Devo. As far as cloud visibility, we as a company need to analyze it and conceptualize it a little bit more. For network visibility, I would say that Devo's definitely helped with that.

The fact that Devo stores the data raw, and doesn't perform any transformation on it, gives us confidence that what we are looking at is accurate. The ability to send a lot of data to Devo without worrying about whether the infrastructure can handle it gives us a bigger and better view of our environment, so when we make decisions, we can address all the different tendencies. We're collecting many more types of log sources than we were before, so we can really see all sides of an issue. With that vast amount of data, we can back a decision up with evidence, and not just random data: we can use a query to display the data in a way that supports the decision we're making.

Devo helps to release the full potential of all our data. The Activeboards, the interactive dashboards that Devo provides, really help us filter our data and build a workflow. There are a lot of different widgets available to visualize the data in different ways. The Activeboards can be a little slow at times, a little difficult to load, and a little heavy on the browser, so the speed of that visualization is not quite as fast as I would like, but it's balanced by the vast number of options we have.

Like all security companies, security departments really tout having a single pane of glass, and the Devo Activeboards really allow us to have that. Being able to visualize the data that way is really important to us as a company. I haven't found that the loading speeds have become a significant roadblock for any of our workflows; faster loading would be an enhancement and a nice-to-have.

We all want everything faster, so it's definitely not a roadblock, but the ability to represent the data in that visualized format is very important to us. It's been really helpful, especially because we have a couple of IT managers, non-technical people, whom I am onboarding into the platform because they just want an overall high-level view, like how many users have been added to a specific group, or how many users have logged in over a given number of days. The ability to give them that high-level view, while still letting them drill down and interact with it, has been super helpful for us as a company.

Devo has definitely saved us time. The SIEM we were on before was completely on-prem, so there were a lot of admin activities I had to do as an engineer that took away from my time contextualizing the data, parsing it out, or fulfilling analyst requests and making enhancements. The fact that Devo is a SaaS platform has saved me a ton of time by taking away all those SIEM admin activities.

I wouldn't say it has really increased the speed of investigations, but it definitely hasn't slowed them down either. The analysts can do a lot more analysis on their own, which cuts down the time spent reaching out to other people. Previously, if you wanted to go back more than 90 days, you had to go through a time-consuming process of restoring archives. The analysts don't have to do that anymore, which also cuts out several days of waiting for the archive restoration to complete. Now the data is just there, searchable. Overall, Devo has definitely saved us a lot of time. On the engineering side, it saves on average about one business day every two weeks, because with on-prem infrastructure there were instances where it would go down and I'd have to stay up half the night, or the whole night, to get it back up. I haven't had to do that with the Devo platform because I'm not managing that infrastructure.

What is most valuable?

We are using some of the other components, such as Relay, which is used to help us ship logs to Devo.

The most valuable feature is definitely Devo's ability to ingest data. The previous SIEM that I came from, and helped my company administer, was the type of system where data was parsed on ingest. This meant that if you didn't build a parser efficiently or correctly, it could bring the system to its knees, and you'd have a backlog of logs waiting to be processed as they were ingested.

One thing that I love about Devo is that it accepts data in raw format; it doesn't try to parse it until you query it. This makes things really flexible for us: if the analysts explain that they need a specific log source, we can just work on the transport side, getting it to Devo, without worrying about parsing it until later. We can see the data in the platform and then use queries to contextualize it, parsing out whatever metadata we need.

I really like the flexibility that the queries offer to parse out the data. Parsing out JSON logs, for example, is very easy. You don't have to mess with regex. It's literally just a point-and-click interface. So that has been incredible. I would say overall in a nutshell, one of my favorite parts is that they really have captured the essence of sending us all your data. You don't have to worry about how to parse it. You can get the data onboard and then you can perform transformations on it later. And the transformations that you can perform on it are super flexible.
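The "accept raw, parse at query time" model described above (often called schema-on-read) can be illustrated with a toy store. Everything below, the `RawStore` class and its `query` helper, is invented for illustration; it is not Devo's API or query language.

```python
import json

# Toy schema-on-read store: raw log lines are kept untouched on ingest,
# and field extraction happens only when a query runs. A bad or changed
# parser therefore never blocks or corrupts ingestion.

class RawStore:
    def __init__(self):
        self.lines: list[str] = []

    def ingest(self, line: str) -> None:
        self.lines.append(line)          # no parsing on ingest

    def query(self, extract, where=lambda rec: True):
        results = []
        for line in self.lines:
            try:
                rec = extract(line)      # parsing happens here, per query
            except (ValueError, KeyError):
                continue                 # unparseable rows are skipped, not fatal
            if where(rec):
                results.append(rec)
        return results

store = RawStore()
store.ingest('{"user": "alice", "event": "login", "ok": true}')
store.ingest('not json at all')          # still ingested, just unparsed
store.ingest('{"user": "bob", "event": "login", "ok": false}')

failed_logins = store.query(
    extract=json.loads,
    where=lambda rec: rec["event"] == "login" and not rec["ok"],
)
print(failed_logins)  # → [{'user': 'bob', 'event': 'login', 'ok': False}]
```

The design trade-off is the one the reviewer describes: ingest can never be brought to its knees by a parser, at the cost of paying the parsing work on every query.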

Devo definitely provides high-speed search capabilities and real-time analytics. The search can be a little slow at times, but for the amount of data we're pulling back, relatively speaking, the speed is very nice. The ability to pull back large amounts of data, and the amount of data they keep hot and searchable for us, is incredible. I would definitely say they provide real-time analytics and search.

I have heard from other customers that the multi-tenancy capabilities are pretty good, but I don't have much experience with that at H&R Block.

What needs improvement?

When it comes to the ease of use for analysts, that's an area that they may need to work on a little bit. Devo offers its version of a case management platform called Devo SecOps. They did offer it to us. It's part of our contract with them. The analysts have found that the workflow isn't very intuitive. There are a couple of bugs within the platform, and so we are actually sticking with our old case management platform right now and trying to work with Devo to help iron out the roadblocks that the analysts are facing. Mostly it seems like they have trouble figuring out where the actual case is. A lot of the search features that are in the main Devo UI don't translate over into their SecOps module. They seem separate and disjointed. So the core of the platform where we have all of the data isn't integrated as well as we would like with their case management system. There's a lot of pivoting back and forth and the analysts can't really stay in the SecOps platform which adds some bumps to their workflow.

The SecOps module also needs improvement. It should be more closely integrated with the original platform that they had. The data search abilities in the SecOps platform should be made more like the data search abilities in the administrator's side of the platform. 

From our experience, the Devo agent needs some work. They built it on top of OS Query's open-source framework. It seems like it wasn't tuned properly to handle a large volume of Windows event logs. In our experience, there would definitely be some room for improvement. A lot of SIEMs on the market have their own agent infrastructure. I think Devo's working towards that, but I think that it needs some improvement as far as keeping up with high-volume environments.

For how long have I used the solution?

We implemented Devo as a PoC last year but only started using it officially a few months ago.

What do I think about the stability of the solution?

It's a Devo-managed SaaS cloud platform. Lately it does seem like they've been having trouble keeping up with a large volume of events, maybe due to other customers besides H&R Block who share their cloud infrastructure; we have noticed some slowness and some downtime. It's definitely not more than a maximum of three hours every two weeks, and usually it's not a lot of downtime or slowness, at least not to the point where we cannot work in the platform, but it does seem to have been picking up a little lately, which is why I average it out at around three hours every two weeks. As far as overall stability goes, the uptime has been really great. When it's "down", it's really just that searches run slowly; you can still get into the platform. It's not that everything is down and you can't look at alerts. That rarely ever happens. Overall, it's pretty stable and it allows our analysts to stay on the platform.

What do I think about the scalability of the solution?

In terms of scalability, we are able to ingest as much or as little data as we want, so that is really awesome. I've been pretty amazed at how much we're able to throw at it. We can expand as much as we want to suit our needs, obviously within the confines of the subscription agreement. There is a data cap, but within that limit, we can really go crazy. The scalability is awesome. It's very scalable.

There are about 50 or so users on the platform right now. We have our SOC analysts at different levels that just perform investigative activities. The majority of our clients on the platform are our security operations center analysts. They have different privileges based on their roles. We give them the ability to create test alerts if they're trying something out.

We have various other team members throughout our corporation using it, only two or three here and there: three individuals from our networking team, and a couple of individuals from IT support who often use the platform to investigate user lockouts and similar issues. Of course, we also have the engineers on the platform, which has been five or six individuals. The main user base is our SOC analysts.

We have a team that does maintenance for our servers and such; they aren't really on the platform at the moment. They have their own tools and use their own graphs to monitor the health of our infrastructure, but that may be something we pursue at some point in the future. The more teams we get into our SIEM, the better, because it really justifies the usage of the tool. Right now, from a maintenance perspective, the only IT staff using it for that sort of thing is our networking team, and we have about three individuals there whom we just recently onboarded, so they're still getting used to the platform.

Devo is mostly being used for security logs. There's a push to start using it not only for security monitoring but for infrastructure health monitoring as well, starting with the networking team. We are still in phase one of fleshing Devo out, adding more enhancements and alerts. My primary role is to support the security operations center, the security aspect of things. We are definitely going to push to utilize Devo more throughout the organization, for health monitoring and for the networking team's use. Perhaps at some point in the future we'll expand our usage further. It's not set in stone yet, but I could definitely see that happening.

How are customer service and technical support?

We have a couple of tickets open with support, mostly for platform health monitoring questions. We do have some regarding alert logic, but nothing so important that we actually had to call Devo tech support.

Their professional services team works really hard on building out active boards for us and helping us make sure that we are monitoring the health of our platform. Overall, I'd say they're definitely really collaborative. They want to hear what's going well for us and what's not. 

Sometimes they're not quite as responsive as I would like, but I think that is also due to the transition process because that was recent. We are giving them some time to get adjusted to our account and get things all set. I would say from a support standpoint, they have definitely been very responsive to our cases that we have open, so that's been really helpful. I've had instances in the past with vendors where it takes several weeks to hear anything back on a case.

Which solution did I use previously and why did I switch?

Prior to Devo, we used the LogRhythm SIEM.

We switched mainly because of the ability to ingest more data. In certain instances with LogRhythm, we had to say no to onboarding certain log sources because the cost-benefit didn't weigh out against the value they offered. LogRhythm got to the point where, if you added too much data, too much ingest volume, it would start breaking; it would start complaining and things would just go bad. The amount of downtime we had with LogRhythm was really the main metric driving our transition to Devo. What really appealed to us about Devo versus other SIEMs was their "give us all your data" model. That was something we were really struggling with and really wanted from a SIEM: we wanted to correlate between as many data sources as possible. Devo offered us that capability, and LogRhythm really did not.

How was the initial setup?

The initial setup was fairly straightforward. I'm not sure whether Devo considers the agent infrastructure part of the offering, because it was beta, but that part was rather complex. We didn't get a lot of detail on the specs needed when we set up the agent manager infrastructure in our environment, so that was pretty complicated. But as far as onboarding the data and offloading all of the alerts we had in our old SIEM, it was pretty straightforward.

One thing that would have made it easier would be for Devo to have more out-of-the-box alerts. Our previous SIEM came with a lot of alerts, drawn from their research and their labs, and we really built on top of those. With Devo, we had to build a lot of our alerts from scratch to transition them from our old SIEM. If Devo had its own, more fully fleshed-out alert library, that aspect would have been a little less time-consuming. Otherwise, the data onboarding and ingestion were very straightforward as far as SIEM transitions go. The relay they provided gave us a single point to send everything to.
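An out-of-the-box alert library is essentially a set of named rules, each a match predicate plus a grouping key and a threshold. As a hedged sketch of what one such rule might look like when built from scratch (the rule name, event fields, and threshold are all invented for illustration, not taken from Devo or LogRhythm):

```python
from collections import Counter
from dataclasses import dataclass
from typing import Callable, Any

# Hypothetical minimal alert rule: count matching events per key and
# fire on any key that crosses the threshold.

@dataclass
class ThresholdRule:
    name: str
    match: Callable[[dict], bool]      # does this event belong to the rule?
    key: Callable[[dict], Any]         # group-by key, e.g. the username
    threshold: int

    def evaluate(self, events):
        """Return (key, count) pairs that meet or exceed the threshold."""
        counts = Counter(self.key(e) for e in events if self.match(e))
        return [(k, n) for k, n in counts.items() if n >= self.threshold]

brute_force = ThresholdRule(
    name="possible-brute-force",
    match=lambda e: e["event"] == "login_failed",
    key=lambda e: e["user"],
    threshold=3,
)

events = [{"event": "login_failed", "user": "bob"}] * 3 + \
         [{"event": "login_failed", "user": "alice"}]
print(brute_force.evaluate(events))  # → [('bob', 3)]
```

A vendor alert library is, at heart, a large curated collection of rules like this, which is why building them all from scratch during a migration is the time-consuming part.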

We really shifted completely from our old SIEM to Devo in about three to four months, which by industry standards was very quick. It was a combination of a lot of hard work and teamwork on our side, but again, the data ingestion, the ease of getting that data into Devo took off a lot of that time. We had a single point to send it to which helped with the transition.

In terms of our implementation strategy, our initial goal was to get everything out of our old SIEM. So first we made sure that all of the log sources that were there would be redirected to Devo. And so we set up the appropriate components to forward it to Devo, and then once all the data was in there we started working on transitioning the alert and building out the alert logic, and making sure that the alert logic matched what we had in our old SIEM. After that was onboarding all the users, making sure that the RBAC controls were in place. 

What about the implementation team?

We did use a consultant. We worked with Novacoast. We had a relationship with them in the past. They're an MSSP. They really helped us with building the alert logic mainly.

What's my experience with pricing, setup cost, and licensing?

You definitely get what you pay for. Devo has offered a lot of extra features to justify the price. Devo worries about managing the infrastructure and how it's going to handle that volume, how it's going to store it, and all those things. It allows us to not require as many engineers and not require as many engineering hours. We can devote that time to other things. That's the biggest benefit for the cost. 

I have seen in the Devo documentation that for certain aggregation tasks that you have running in the background, you could be charged extra for those. I've been meaning to get clarification on that. 

Which other solutions did I evaluate?

We were considering Azure Sentinel, mainly because we're an Azure shop; that was the only other solution we looked at seriously. We sent it out to a couple of vendors, but none of them fulfilled our needs as much as those two, so it really came down to Azure Sentinel and Devo. Azure Sentinel didn't seem to offer as many features as Devo did, which is why we chose Devo for our POC.

From an analyst perspective, something unique about Devo versus other SIEMs was its immense contextualization capability, because the platform is based on querying the data and performing all of the contextualization in the query.

Another consideration was the learning curve for querying and using the UI. Devo's response was to provide a lot of training for us, which really helped the analysts.

Compared with our previous solution, Devo definitely allows us to ingest more data than we could with LogRhythm. I don't think the others had 400 days of hot, searchable data; they did not keep that much available. We definitely have the ability to ingest more data and store it for longer periods. The big next-gen capability of Devo that we were looking for, unlike other SIEMs I've seen or administered, is that you don't parse on ingest: you parse after you get the raw data into the platform. That really removes the roadblock.

What other advice do I have?

I have been with the company for approximately three years and in the engineering space for about two.

If "the more data the better" is the goal for your organization, then Devo is really the way to go. But if you're looking for a super robust analyst interface and a next-gen analyst workflow, I don't think Devo is at that point yet. They're at the point where you can ingest a lot of data and visualize it really well.

One of the things that I really like about Devo is the ability to parse the data, and not just the ability to parse the data after you ingest it. There are so many different ways to do it. 

I would definitely explore trying to parse that out yourself because, for me, the first couple of times it was a little bit difficult to get used to the query language and everything. But now, when someone asks for something to be parsed out in a certain way, it's super easy. Explore the ability to use the queries to parse out data to give you that independence and ability to represent data however you want to represent it.

Devo definitely has all the next-gen concepts that I haven't really seen in any other SIEM, but I do think that they definitely have some more room for improvement. A lot of SIEMs offer their own agent and Devo does not at the moment. I would rate Devo a seven out of ten.

Most of what we saw in our POC with them was the "wow" moment. This platform can address anything. All of the features met my expectations from the POC. The onboarding and integration have definitely improved our workflow, but the "wow" moment was our proof of concept, when we saw what the platform could do, and it has really lived up to that.

Which deployment model are you using for this solution?

Hybrid Cloud
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Jeff Haidet
Director of Application Development and Architecture at South Central Power Company
Real User
Top 5
SIEMphonic gives us an expert set of eyes on things, and assistance with rules has been a huge time saver

Pros and Cons

  • "I like EventTracker's dashboard. I see it every time I log in because it's the first thing you get to. We have our own widgets that we use. For the sake of transparency, there are a few widgets that we look at there and then we move out from there... Among the particularly helpful widgets, the not-reporting widget is a big one. The number-of-logs-processed is also a good one."
  • "It would be great if they had a client for phones by which they could push a notification to us, as opposed to via email."

What is our primary use case?

It's a system incident and event management platform. The typical use cases that go along with that are alerting and syslog aggregation.

How has it helped my organization?

Their run-and-watch service (now renamed SIEMphonic) has saved us from having to hire at least one FTE. In addition, having an expert set of eyes on things, and their assistance with rules, has been a huge time saver. They've been a really good partner.

We are logging everything from Windows client workstations through our server stack, through important, critical web and cloud pieces, like Office 365 logs and web server logs. The latter would include IIS and Apache. All of that information is being streamed directly into, and assimilated by, the EventTracker product. It seems to be doing the job quite well. Having that visibility into the data is useful. Their interface is simple enough for us to be able to use but advanced enough that if we wanted to do some more advanced queries — which some of their competitors admittedly do a little better out-of-the-box — it hits the wheelhouse perfectly.

We're signed up for their weekly observations, so if they find something big they'll notify us immediately. Having a management-level synopsis once a week has allowed us not only to replace that one FTE but also to streamline our prioritization of work based on the data.

What is most valuable?

Other than the log aggregation and alerting, their reports modules have come a long way. But for the most part, we stay right in the wheelhouse of the product to use it to the fullest extent.

The previous version, version 8, had a somewhat antiquated UI. The new version 9 is much easier to use and brings it into the current realm of development. It's very easy, very sleek, and designed relatively well. The version 8 to version 9 upgrade was a complete night-and-day difference. It's significantly improved, and they're putting resources in to make sure it stays up to date.

I like EventTracker's dashboard. I see it every time I log in because it's the first thing you get to. We have our own widgets that we use. For the sake of transparency, there are a few widgets that we look at there and then we move out from there. We're into the product looking more at the log information at that point. Among the particularly helpful widgets, the not-reporting widget is a big one. The number-of-logs-processed is also a good one. We call that log volume. They're helpful, but we try to dig in a little deeper, off the dashboard, more often than not.

What needs improvement?

In terms of advanced queries, I wouldn't say EventTracker is lagging behind its peers; the peers just make it easier to get to them. EventTracker is designed more for a small to medium type of business, which is where we fit. With EventTracker, you're not going to get out-of-the-box what you get with a competitive tool like Splunk or LogRhythm; you're going to have to build all that yourself from scratch, and learn the markup language to do so.

I want to stress: We're very happy with not having to deal with that out-of-the-gate. If we need to, we can always call support and they can assist us in writing those more advanced queries. The functionality exists to do advanced queries, they're just not right in your face like they are in a competitive product. But for us, that's what we want.

There's always room for improvement in terms of performance and alerting options. It would be great if they had a client for phones by which they could push a notification to us, as opposed to via email. But those are all things that they'll grow into over time.

For how long have I used the solution?

We've been using EventTracker for just a smidge over three years.

What do I think about the stability of the solution?

It has been extremely stable. Very rarely do we even realize that it's still running, and that's good.

What do I think about the scalability of the solution?

We did have a few concerns with the scalability in the beginning. Our initial concerns were about scaling it and, if we blew it out, were we going to run into performance issues with their agent piece using too many resources on the client or running out of space on the server? But those concerns proved to be unfounded. We have 700 or 800 endpoints streaming data into it without any noticeable performance or any other issues.

We're using it almost to its full extent at this point. We're in that 90 percent range. We currently don't have any plans to move away from it. We're utilizing the features that pertain to us. Anytime that there's a patch or release, we look at the new features to see if they're applicable for us.

How are customer service and technical support?

The EventTracker team itself has been great. We can call them for pretty much anything related to their product. They will offer suggestions, advice, and best practices on ways to do things. It's like having another team member here at our disposal, working with their product. I believe that is their standard tech support.

We're paying for the run-and-watch (SIEMphonic) so we're getting an extra set of eyes on things, but when we call in, their support is top-notch. I would give their support team a 10 out of 10. That is a given. Of all the products and vendors that we've used, I've never had a more positive experience with a support team than with EventTracker's support team.

Which solution did I use previously and why did I switch?

We did not have a previous solution. We do annual audits, and the lack of a SIEM showed up in one of our audits as a piece that we needed to start investigating, four or five years ago. We knew that issue was coming. We were too busy dealing with some other things, but when it showed up in the audit, we pushed it up the priority food-chain. We weren't really having any issues by not having a SIEM, but having all the logs in one place sure makes troubleshooting a whole lot easier. If there was an Achilles heel, that was it.

We were looking for an easy-to-manage SIEM that provided the functionality that we needed. Since we're a relatively small IT staff, the part that really made EventTracker stand out to us was the run-and-watch service (SIEMphonic), where they are an active partner, reviewing the data that we get, so we don't miss anything. They're acting as a backstop to us.

How was the initial setup?

The initial setup was completely painless. They gave us a spec sheet for the on-premise server. We built a VM that matched that spec, and they then installed their software and got it up and running. We could be as involved or as uninvolved as we wanted to be; that was our choice. When it came to deploying the client pieces, they worked with us to identify which machine should get it and when. They took care of the pushing of that information out. When we started getting the data in, and it came time to start tweaking the rules, they took the lead on that as well. It really, truly was a painless process.

The deployment took less than a week. We had an analyst at that time who was running point on it. I wasn't even involved. I didn't need to be involved in it at that level. One of our entry-level analysts was able to work with them to get everything caught up.

I and one analyst are involved in the day-to-day maintenance of the application. Our entire IT staff, nine people, uses it for log review and incident correlation. We try to put the information out there for the rest of our team members to use.

What was our ROI?

We have been able to save at least one full FTE. The amount we would have to pay that FTE, including benefits, is way more than what we're paying EventTracker for the annual maintenance. It had a positive return on investment almost immediately for us.

What's my experience with pricing, setup cost, and licensing?

Our cost is significantly less than what it would have been for one of the competitor's products, and that includes the run-and-watch service (SIEMphonic). You can go with one-, two-, or three-year agreements. We pay annually for maintenance on the product.

Which other solutions did I evaluate?

When we acquired EventTracker, we went through an assessment process, reviewing five or six different manufacturers of SIEMs. The frontrunners were the typical players: Splunk and LogRhythm. There were a couple of freeware options out there, but what really set EventTracker apart was their SIEMphonic. That was the big differentiator. We were able to get much more value for our money, and it met all the requirements that we had set out when we started the research.

There weren't really major differences between EventTracker and the other players. Ultimately, SIEMs do the same things. They collect logs, they index those logs, and they make them searchable. There's not really a difference on the surface.

What other advice do I have?

The biggest lesson really isn't an EventTracker lesson, it's more of a SIEM lesson. And that lesson is: It's a lot of data. When you have a lot of data, it's going to take a while to study and learn that data, so you can react appropriately. Not all data is actionable.

Be prepared for the data. Be prepared to know what you didn't know before. And be prepared to weed out the noise from the actual data. That's where EventTracker's SIEMphonic becomes very helpful. My advice would be, if you're going to go with EventTracker, to go with the SIEMphonic service and leverage their support team to get your knowledge up to speed. So far, our experience with their support has been top-notch.
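To make the "weed out the noise" point concrete, here is a hypothetical sketch of the kind of suppression rule a tuning exercise produces; the event IDs and the severity floor are invented for the example:

```python
# Suppress known-benign "noise" before alerting. The specific
# event IDs and thresholds here are made up for illustration.
BENIGN_EVENT_IDS = {4634, 4672}   # e.g. routine logoff / privilege-assignment events
ALERT_SEVERITY_FLOOR = 3          # drop informational chatter below this level

def is_actionable(event: dict) -> bool:
    """Keep only events worth an analyst's attention."""
    if event.get("event_id") in BENIGN_EVENT_IDS:
        return False
    return event.get("severity", 0) >= ALERT_SEVERITY_FLOOR

events = [
    {"event_id": 4634, "severity": 5},   # noisy but benign: suppressed
    {"event_id": 4625, "severity": 4},   # failed logon: kept
    {"event_id": 7036, "severity": 2},   # below severity floor: suppressed
]
actionable = [e for e in events if is_actionable(e)]
```

The payoff is exactly the one described above: analysts see a short actionable list instead of the full event stream.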

In terms of how we view EventTracker, we're typically just in a browser, so it's on whatever our standard is. I've got a couple of 20-inch monitors on my desk. It's sleek enough that it will work on a normal 15-inch laptop screen too. I have not looked at it on mobile yet, given the fact that it's an on-premise service. If I'm in the building, getting VPN'ed in across my phone is a little tough. But that would be the next iteration of the product, if we would decide to push up towards the cloud instead of being on-prem. We would definitely be looking for some sort of a mobile or a tablet-based mobile interface.

We have not integrated EventTracker with other products. Our service-desk tool is a tool called Samanage, which was recently acquired by SolarWinds and has been renamed SolarWinds Service Desk. We have not integrated anything with that since SolarWinds acquired it, because we wanted to see what SolarWinds was going to do with it. Integrating it into EventTracker is on the list. We'll do it if it makes sense.

I never rate anything a 10 out of 10, because nothing is ever perfect. But this solution would be at the upper end of that range. This partnership with EventTracker has been one of our better ones.

Which deployment model are you using for this solution?

On-premises
Disclosure: IT Central Station contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Sean Moore
Lead Azure Sentinel Architect at a financial services firm with 10,001+ employees
Real User
Top 20
Quick to deploy, good performance, and automatically scales with our requirements

Pros and Cons

  • "The most valuable feature is the performance because unlike legacy SIEMs that were on-premises, it does not require as much maintenance."
  • "If Azure Sentinel had the ability to ingest Azure services from different tenants into another tenant that was hosting Azure Sentinel, and not lose any metadata, that would be a huge benefit to a lot of companies."

What is our primary use case?

Azure Sentinel is a next-generation SIEM, which is purely cloud-based. There is no on-premises deployment. We primarily use it to leverage the machine learning and AI capabilities that are embedded in the solution.

How has it helped my organization?

This solution has helped to improve our security posture in several ways. It includes machine learning and AI capabilities, but it's also got the functionality to ingest threat intelligence into the platform. Doing so can further enrich the events and the data that's in the backend, stored in the Sentinel database. Not only does that improve your detection capability, but also when it comes to threat hunting, you can leverage that threat intelligence and it gives you a much wider scope to be able to threat hunt against.

The fact that this is a next-generation SIEM is important because everybody's going through a digital transformation at the moment, and there is actually only one true next-generation SIEM. That is Azure Sentinel. There are no competing products at the moment.

The main benefit is that as companies migrate their systems and services into the Cloud, especially if they're migrating into Azure, they've got a native SIEM available to them immediately. With the market being predominately Microsoft, where perhaps 90% of the market uses Microsoft products, there are a lot of Microsoft houses out there and migration to Azure is common.

Legacy SIEMs used to take time in planning and looking at the specifications that were required from the hardware. It could be the case that to get an on-premises SIEM in place could take a month, whereas, with Azure Sentinel, you can have that available within two minutes. 

This product improves our end-user experience because of the enhanced ability to detect problems. What you've got is Microsoft Defender installed on all of the Windows devices, for instance, and the telemetry from Defender is sent to the Azure Defender portal. All of that analysis in Defender, including the alerts and incidents, can be forwarded into Sentinel. This improves the detection methods for the security monitoring team to be able to detect where a user has got malicious software or files or whatever it may be on their laptop, for instance.

What is most valuable?

It gives you that single pane of glass view for all of your security incidents, whether they're coming from Azure, AWS, or even GCP. You can actually expand the toolset from Azure Sentinel out to other Azure services as well.

The most valuable feature is the performance because unlike legacy SIEMs that were on-premises, it does not require as much maintenance. With an on-premises SIEM, you needed to maintain the hardware and you needed to upgrade the hardware, whereas, with Azure Sentinel, it's auto-scaling. This means that there is no need to worry about any performance impact. You can send very large volumes of data to Azure Sentinel and still have the performance that you need.

What needs improvement?

When you ingest data into Azure Sentinel, not all of the events are recognized. The way it works is that they're written to a native Sentinel table, but some events haven't got a native table available to them. In that case, anything Sentinel doesn't recognize gets put into a custom table, which is something you need to create. What would be good is an extension of the Azure Sentinel schema to cover a lot more technologies, so that you don't have to have custom tables.

If Azure Sentinel had the ability to ingest Azure services from different tenants into another tenant that was hosting Azure Sentinel, and not lose any metadata, that would be a huge benefit to a lot of companies.

For how long have I used the solution?

I have been using Azure Sentinel for between 18 months and two years.

What do I think about the stability of the solution?

I work in the UK South region and it very rarely has not been available. I'd say its availability is probably 99.9%.

What do I think about the scalability of the solution?

This is an extremely scalable product and you don't have to worry about that because as a SaaS, it auto-scales.

We have between 20 and 30 people who use it. I lead the delivery team, who are the engineers, and we've got some KQL programmers for developing the use cases. Then, we hand that over to the security monitoring team, who actually use the tool and monitor it. They deal with the alerts and incidents, as well as doing threat hunting and related tasks.

We use this solution extensively and our usage will only increase.

How are customer service and support?

I would rate the Microsoft technical support a nine out of ten.

Support is very good but there is always room for improvement.

Which solution did I use previously and why did I switch?

I have personally used ArcSight, Splunk, and LogRhythm.

Comparing Azure Sentinel with these other solutions, the first thing to consider is scalability. That is something that you don't have to worry about anymore. It's excellent.

ArcSight was very good, although it had its problems the way all SIEMs do.

Azure Sentinel is very good but as it matures, I think it will probably be one of the best SIEMs that we've had available to us. There are too many pros and cons to adequately compare all of these products.

How was the initial setup?

The actual standard Azure Sentinel setup is very easy. It is just a case where you create a log analytics workspace and then you enable Azure Sentinel to sit over the top. It's very easy except the challenge is actually getting the events into Azure Sentinel. That's the tricky part.

If you are talking about the actual platform itself, the initial setup is really simple. Onboarding is where the challenge is. Then, once you've onboarded, the other challenge is that you need to develop your use cases using KQL as the query language. You need to have expertise in KQL, which is a very new language.
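To give a sense of what a use case computes, here is the logic of a simple failed-sign-in detection rule sketched in Python. In Sentinel itself this would be a scheduled KQL query; the accounts, window, and threshold below are invented:

```python
from collections import Counter

# Illustrative sketch of the logic behind a simple analytics rule:
# alert on accounts with too many failed sign-ins in a time window.
FAILURE_THRESHOLD = 5  # invented cutoff for the example

def accounts_over_threshold(signin_events):
    """Return accounts whose failure count meets the threshold."""
    failures = Counter(
        e["account"] for e in signin_events if e["result"] == "failure"
    )
    return sorted(a for a, n in failures.items() if n >= FAILURE_THRESHOLD)

# A toy window of sign-in events standing in for the ingested logs.
window = (
    [{"account": "svc-backup", "result": "failure"}] * 6
    + [{"account": "jdoe", "result": "failure"}] * 2
    + [{"account": "jdoe", "result": "success"}]
)
suspects = accounts_over_threshold(window)
```

The ongoing "use case development" the reviewer describes is essentially writing, tuning, and re-tuning many rules of this shape as new log sources are onboarded.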

The actual platform will take approximately 10 minutes to deploy. The onboarding, however, is something that we're still doing now. It's use case development and it's an ongoing process that never ends. You are always onboarding.

It's a little bit like setting up a configuration management platform where you're only pushing out one configuration.

What was our ROI?

We are getting to the point where we see a return on our investment. We're not 100% yet but getting there.

What's my experience with pricing, setup cost, and licensing?

Azure Sentinel is very costly, or at least it appears to be very costly. The costs vary based on your ingestion and your retention charges. Although it's very costly to ingest and store data, what you've got to remember is that you don't have on-premises maintenance, you don't have hardware replacement, you don't have the software licensing that goes with that, you don't have the configuration management, and you don't have the licensing management. All of these costs that you incur with an on-premises deployment are taken away.

This is not to mention running data centers and the associated costs, including powering them and cooling them. All of those expenses are removed. So, when you consider those costs and you compare them to Azure Sentinel, you can see that it's comparative, or if not, Azure Sentinel offers better value for money.

All things considered, it really depends on how much you ingest into the solution and how much you retain.
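A back-of-the-envelope comparison along the lines described above might look like the following. Every figure is hypothetical; real Sentinel pricing depends on region, ingestion tier, and retention settings:

```python
# Hypothetical monthly cost comparison: cloud SIEM (pay for
# ingestion and retention) vs. on-premises (pay for everything else).
def cloud_siem_monthly_cost(gb_per_day, price_per_gb,
                            retained_gb, retention_price_per_gb):
    return gb_per_day * 30 * price_per_gb + retained_gb * retention_price_per_gb

def on_prem_monthly_cost(hardware, licensing, staff, datacenter):
    return hardware + licensing + staff + datacenter

# All inputs below are invented for illustration only.
cloud = cloud_siem_monthly_cost(gb_per_day=50, price_per_gb=2.5,
                                retained_gb=1500, retention_price_per_gb=0.1)
on_prem = on_prem_monthly_cost(hardware=1200, licensing=2000,
                               staff=1500, datacenter=800)
```

With these made-up inputs the cloud bill looks large in isolation, but once hardware, licensing, staffing, and data-center line items are summed, the on-premises total comes out higher, which is the reviewer's point.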

Which other solutions did I evaluate?

There are no competitors. Azure Sentinel is the only next-generation SIEM.

What other advice do I have?

This is a product that I highly recommend, for all of the positives that I've mentioned. The transition from an on-premises to a cloud-based SIEM is something that I've actually done, and it's not overly complicated. It doesn't have to be a complex migration, which is something that a lot of companies may be reluctant about.

Overall, this is a good product but there are parts of Sentinel that need improvement. There are some things that need to be more adaptable and more versatile.

I would rate this solution a nine out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
Balamurali Vellalath
Practice Head-CyberSecurity at a tech services company with 1,001-5,000 employees
MSP
Top 5
Good support with an intuitive dashboard but the cost is too high

Pros and Cons

  • "The most valuable aspect of the solution is the dashboard. It's very intuitive."
  • "There are a lot of competitive products that are doing better than what Splunk is doing on the analytics side."

What is our primary use case?

Since we are an IT services company, we have also been deploying Splunk at customer locations. Sometimes the customer will come back to us and say that they need to have a SIEM tool, and when we do the benchmarking, we'll do a couple of deployments on the Splunk side at the customer's locations as well.

As an example use case, we deployed Splunk to a banking institution a few years ago. The use case was basically this: the customer wanted to set up a security operation center, and they wanted a pretty large deployment in terms of the number of endpoints, switches, and routers. There were many regional branch offices, and they have data centers and, therefore, many assets in terms of endpoints. They had 30% of their assets running in the cloud and needed a complete solution from an incident monitoring and management perspective. That's why we deployed Splunk.

They wanted to reduce their mean time to resolution (MTTR) and mean time to detection. They didn't want to add more analysts into their SOC as the organization scaled up. They have a plan to scale from 5,000 endpoints to 15-20,000 endpoints. They were very particular about deploying the security operation center.

Splunk has since acquired Phantom as a SOAR platform. Therefore, we have tried to manage the security automation using Phantom with the help of Splunk deployments. It helps us meet the customer's requirements.

How has it helped my organization?

In terms of support, we're able to get the right support at the right time. If there's a break or an appliance issue, they're on top of it.

This is very important during large-scale deployments. It's not easy to address product or appliance-related issues, manage the number of logs coming into each collector, or manage the collectors across branch offices and corporate offices. It's a cumbersome process for us. That's why it's integral that we get the right support at the right time - and they make this happen.

What is most valuable?

The most valuable aspect of the solution is the dashboard. It's very intuitive. 

The reporting is excellent. The team and the SOC analyst are able to easily track the alerts and the correlation is very good compared to other SIEM tools. 

What needs improvement?

There are a lot of competitive products that are doing better than what Splunk is doing on the analytics side.

The automation could be better. Typically, the issue that we face is that it has to go to the analytics engine, then goes to the automation engine, basically. Therefore, if there are no proper analytics, the SOAR module is going to be overloaded, and we are not able to get the expected result out from the SOAR module. If they improve the analytics, I think they'll be able to solve these issues very quickly.

The playbooks which they create and provide to premium users could improve a lot. They should create a common platform where end-customers like us can choose from automation playbooks that are readily available.

In terms of integration with the third-party tools, what we are seeing is that it's very limited compared to the competitive products. Competitive products have a lot of connectors and APIs that they have developed, and that's where the cloud integration, whether it is a public cloud or a private cloud integration comes in. There are a lot of limitations to this product compared to other products.

For how long have I used the solution?

In terms of Splunk, I've been working on it for more than three years in the current company. Prior to that, I worked with it at another company as well. In total, I have been using Splunk for close to six or seven years.

What do I think about the stability of the solution?

The solution is stable; however, sometimes we face a lot of issues with some of the collectors. That said, overall, if you rate it from one to five, I would say that in terms of stability it stands at a three.

What do I think about the scalability of the solution?

The scalability is perfectly fine. It's awesome compared to all the other tools, as we can easily integrate with the log forwarding modules and the collector management appliances or modules. That aspect won't be a problem.

If you look at the SIEM market today, Splunk is expensive compared to other competitive products. I'm also involved in SIEM evaluation in my current role. I've seen many new tools come up in the last year and a half, as well as many other mature tools that are now available. Compared to next-gen SIEM tools, Splunk is expensive. Therefore, it's possible we may not use this in the future or expand on current usage.

How are customer service and technical support?

In terms of technical support, we don't have any issues, as the professional services which they have extended to us are very, very good. We're able to manage many of the critical issues with their support. I'd say we are definitely satisfied with the level of service provided.

How was the initial setup?

In terms of deployment, it's not as complex as the competitive products, and we are able to manage the deployment ourselves. We don't feel there's any problem on the deployment side. In that sense, I don't think deployment is complex for somebody going with Splunk as a tool.

How long it takes to deploy the solution depends on the size of the deployment, basically. Even a large deployment won't take more than a week. When I say deployment, I'm considering all the log collection, log management, and the curation of the incidents, and how incidents are created and routed properly according to prioritization. 

What was our ROI?

In terms of ROI, for example, one of our customers today is managing close to 100 million events per day. With a traditional SIEM handling 100 million events, they would need to manage that environment with at least 25 to 30 people - 30 security analysts would have to be there. However, when Splunk was deployed, a lot of automation was added on top of it, and today we are managing the same environment with close to 15 people. Looked at that way, the ROI is between 30-40%.
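Working those numbers through with hypothetical salary and licensing figures (the review only gives headcounts) shows how a 30-40% ROI can fall out:

```python
# Hypothetical worked example of the staffing-based ROI above.
# Only the headcounts come from the review; every cost is invented.
analysts_before, analysts_after = 28, 15   # midpoint of "25 to 30" vs. 15
cost_per_analyst = 100_000                 # hypothetical fully loaded annual cost
splunk_annual_cost = 400_000               # hypothetical platform + automation cost

before = analysts_before * cost_per_analyst
after = analysts_after * cost_per_analyst + splunk_annual_cost
roi = (before - after) / before            # net savings as a fraction of prior spend
```

With these assumed figures the net saving works out to roughly a third of the prior spend, consistent with the 30-40% range the reviewer cites.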

What's my experience with pricing, setup cost, and licensing?

In terms of a comparison with the rest of the competition, the licensing cost would be, I would say, 30% higher than most.

Which other solutions did I evaluate?

Before choosing Splunk, we have evaluated QRadar and LogRhythm. QRadar is much more expensive. LogRhythm lacked reporting.

We ended up choosing Splunk due to the pricing and the reporting features. It also had the kind of scalability that was required. We felt it would help us in terms of positioning from both a cost perspective and an incident alert perspective.

What other advice do I have?

We're partners. We have a business relationship with Splunk.

We're using the latest version of the solution.

Overall, I would rate the solution at a seven out of ten.

I'd advise potential new users to ensure they do proper sizing before deploying the product. If it's a very large deployment, the number of endpoints will be quite sizeable. You need to figure out the correct number of endpoints as well as endpoint devices, switches, routers, etc.

It's also a good idea to look at use cases. Splunk is very strong in some use cases. It's important to look into deployment scenarios and check out the use cases before deploying anything.

My biggest takeaway after working with the solution is that the environment is very important. You need to be clear about the problem you are addressing and it takes a lot of planning at the outset.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer: partner