Get our free report covering Splunk, Elastic, Wazuh, and other competitors of Graylog. Updated: January 2022.
564,143 professionals have used our research since 2012.

Read reviews of Graylog alternatives and competitors

Jordan Mauriello
SVP of Managed Security at CRITICALSTART
MSP
Top 10
Be cautious of metadata inclusion for log types in pricing. Having the ability to do real-time analytics drives down attacker dwell time.
Pros and Cons
  • "The ability to have high performance, high-speed search capability is incredibly important for us. When it comes to doing security analysis, you don't want to be sitting around waiting to get data back while an attacker is sitting on a network, actively attacking it. You need to be able to answer questions quickly. If I see an indicator of attack, I need to be able to rapidly pivot and find data, then analyze it and find more data to answer more questions. You need to be able to do that quickly. If I'm sitting around just waiting to get my first response, then it ends up moving too slowly to keep up with the attacker. Devo's speed and performance allows us to query in real-time and keep up with what is actually happening on the network, then respond effectively to events."
  • "There is room for improvement in the ability to parse different log types. I would go as far as to say the product is deficient in its ability to parse multiple, different log types, including logs from major vendors that are supported by competitors. Additionally, the time that it takes to turn around a supported parser for customers and common log source types, which are generally accepted standards in the industry, is not acceptable. This has impacted customer onboarding and customer relationships for us on multiple fronts."

What is our primary use case?

We use Devo as a SIEM solution for our customers to detect and respond to things happening in their environment. We are a service provider who uses Devo to provide services to our customers.

We integrate from a source solution externally; we don't work exclusively inside of Devo. We work in our source solution, pivoting into Devo and back out.

How has it helped my organization?

With over 400 days of hot data, we can query and look for patterns historically. We can pivot into past data and look for trends and analytics, without needing to have a change in overall performance nor restore data from cold or frozen data archives to get answers about things that may be long-term trends. Having 400 days of live data means that we can do analytics, both short-term and long-term, with high speed.

The integration of threat intelligence data absolutely provides context to an investigation. Threat intelligence integration provides great contextual data, which has been very important for us in our investigation process as well. The way that the data is integrated and accessible to us is very useful for security analysts. The ability to have the integration of large amounts of threat intelligence data and provide that context dynamically with real time correlation means that, as analysts, we are seeing events as they're happening in customer environments. We are getting the context of whether that is related to something that we're also watching from a threat intelligence perspective, which can help shape an investigation.

What is most valuable?

The ability to have high performance, high-speed search capability is incredibly important for us. When it comes to doing security analysis, you don't want to be sitting around waiting to get data back while an attacker is sitting on a network, actively attacking it. You need to be able to answer questions quickly. If I see an indicator of attack, I need to be able to rapidly pivot and find data, then analyze it and find more data to answer more questions. You need to be able to do that quickly. If I'm sitting around just waiting to get my first response, then it ends up moving too slowly to keep up with the attacker. Devo's speed and performance allows us to query in real-time and keep up with what is actually happening on the network, then respond effectively to events.

The solution’s real-time analytics of security-related data perform incredibly well. All SIEM solutions have struggled to be truly real-time, because events happen out on systems and networks before the data arrives. However, when I look at its overall performance and correlation capabilities, and its ability to then analyze that data rapidly, it has given us performance which is exceptional.

It is incredibly important in security that the real-time analytics are immediately available for query after ingest. One of the most important things that we have to worry about is attacker dwell time, e.g., how long is an attacker allowed to sit on a system after it is compromised and discover more data, then compromise more systems on a network or expand what they currently have. For us, having the ability to do real-time analytics essentially drives down attacker dwell time because we're able to move quickly and respond more effectively. Therefore, we are able to stop the attacker sooner during the attack lifecycle and before it becomes a problem.

The solution speed is excellent for us, especially in regards to attacker dwell time and the speed that we're able to both discover and analyze data as well as respond to it. The fact that the solution is high performance from a query perspective is very important for us.

Another valuable feature would be detection capability. The ability to write high quality detection rules to do correlation in an advanced manner that really works effectively for us. Sometimes, the correlation in certain engines can be hampered by performance, but it also can be affected by an inability to do certain types of queries or correlate certain types of data together. The flexibility and power of Devo has given us the ability to do better detection, so we have better detection capabilities overall.

The UI is very good. They have an implementation of CyberChef, which is very good for security analysts. It allows us to manipulate, transform, and enrich data for analytics in a very fast, effective manner. The query UI is something that most people who have worked with SIEM platforms will be very used to utilizing. It is very similar to things that they've seen before. Therefore, it's not going to take them a long time to learn their way around the platform.

The pieces of the Activeboards that are built into SecOps have been very good and helpful for us.

They have high performance and high-speed search as well as the ability to pivot quickly. These are the things that they do well.

What needs improvement?

There is room for improvement in the ability to parse different log types. I would go as far as to say the product is deficient in its ability to parse multiple, different log types, including logs from major vendors that are supported by competitors. Additionally, the time that it takes to turn around a supported parser for customers and common log source types, which are generally accepted standards in the industry, is not acceptable. This has impacted customer onboarding and customer relationships for us on multiple fronts.

I would like to see Devo rely more on the rules engine, with more of the flow, correlation, and rules engine capabilities making their way into the standardized product. That would allow a lot of those pieces to be part of SecOps, so we could do advanced JOIN rules and capabilities inside of SecOps without flow. That would be great functionality to add.

Devo's pricing mechanism, whereby parsed data is charged after metadata is added to the event itself, has led to unexpected price increases for customers based on new parsers being built. Pricing has not been competitive (log source type by log source type) with other vendors in the SIEM space.

Their internal multi-tenant architecture has not mapped directly to ours the way that it was supposed to nor has it worked as advertised. That has created challenges for us. This is something they are still actively working on, but it is not actually released and working, and it was supposed to be released and working. We got early access to it in the very beginning of our relationship. Then, as we went to market with larger customers, they were not able to enable it for those customers because it was still early access. Unfortunately, it is still not generally available for them. As a result, we don't get to use it to help get improvements on multi-tenant architecture for us.

For how long have I used the solution?

I have been using the solution for about a year.

What do I think about the stability of the solution?

Stability has been a bit of a problem. Although we have not experienced any catastrophic outages within the platform, there have been numerous impacts to customers. This has caused a degradation of service over time, hurting both the customers' perception of the platform's value and of our value as a service provider.

We have full-time security engineers who do maintenance work and upkeep for all our SIEM solutions. However, that may be a little different because we are a service provider. We're looking at multiple, large deployments, so that may not be the same thing that other people experience.

What do I think about the scalability of the solution?

We haven't run into any major scalability problems with the solution. It has continued to scale and perform well for query. The one scalability problem that we have encountered has to do with multi-tenancy at scale for solutions integrating SecOps. Devo is still working to bring to market these features to allow multi-tenancy for us in this area. As a result, we have had to implement our own security, correlation rules, and content. That has been a struggle at scale for us, in comparison to using quality built-in, vendor content for SecOps, which has not yet been delivered for us.

There are somewhere between 45 to 55 security analysts and security engineers who use it daily.

How are customer service and technical support?

Technical support for operational customers has been satisfactory. However, support during onboarding and implementation, including the professional services engagements needed to develop parsers for new log types and troubleshoot onboarding problems, has been severely lacking. Often, tenant setup requests and support requests during onboarding have gone weeks and even months without resolution, and sometimes without reply, which has impacted customer relationships.

Which solution did I use previously and why did I switch?

While we continue to use Splunk as a vendor for the SIEM services that we provide, we have also added Devo as an additional vendor to provide services to customers. We have found similar experiences at both vendors from a support perspective. Although professional services skill level and availability might be better at Devo, the overall experience for onboarding and implementing a customer is still very challenging with both.

How was the initial setup?

The deployment was fairly straightforward. For how we did the setup, we were building an integration with our product, which is a little more complicated, but that's not what most people are going to be doing. 

We were building a full integration with our platform. So, we are writing code to integrate with the APIs.

Not including our coding work that we had to do on the integration side, our deployment took about six weeks.

What about the implementation team?

It was just us and Devo's team building the integration. Expertise was provided from Devo to help work through some things, which was absolutely excellent.

What was our ROI?

In incidents where we are using Devo for analysis, our mean time to remediation for SIEM is lower. We're able to query faster, find the data that we need, and access it, then respond quicker. There is some ROI on query speed.

What's my experience with pricing, setup cost, and licensing?

Based on adaptations that they have made, whereby they now essentially charge for metadata around the events we collect, that extra charge wipes out any price savings between them and Splunk or Azure Sentinel.

Before, the cost was just the data itself, but they have adjusted it so that they now charge even when we parse the data and add in names for fields that come in. For example, say we get a username: if you go to log into Windows, an event says that username tried to log in, and the platform labels the username field with your name. They will charge us for the space that label takes up. This has caused us to lose all of the price savings that were being found before. In fact, in some cases, it is more expensive than the competitors as a result. The charging for metadata on parsed fields has led to significant, unexpected pricing for customers.

Be cautious of metadata inclusion for log types in pricing, as there are some "gotchas" there. This would not be charged by other vendors, like Splunk. For example, Windows logs have a lot of blank space in them, and Splunk essentially compresses that. After they compress and label the data, that is the parsed result you see, but they don't charge you for the white space or for the metadata, whereas Devo does. We want to point out: "Pay attention to ingest charges for new data types, as you will be charged for metadata as a part of the overall license usage."

There are charges for metadata, as Devo counts data after parsing and enrichment and charges it against license usage, whereas other vendors charge the license before parsing and enrichment: you are billed on the raw, compressed data first, then they parse and enrich it, and you don't get charged for that part. That difference is hitting some of our customers in a negative way, especially when there is an unparsed log type that Devo doesn't support. One that is not supported right now is Cisco ASA, which should be supported, as it is a major vendor.

If a customer says, "In Splunk, I'm currently bringing in 50 gigabytes of Cisco ASA logs," they may not consider that parsing adds roughly 25% metadata, which Splunk does not bill for. When they shift that over to Devo, they will actually see a 25% increase: 62.5 gigs billed, because they are now charged for the metadata that they weren't being charged for in Splunk. So even though the price per gig is lower with Devo, the extra billed gigs mean customers end up either net neutral, occasionally saving a little when there is not much metadata, or sometimes actually losing money on events that have a ton of metadata, where the increase can be as much as 50%.
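Using the Cisco ASA figures above, the billing difference can be sketched out quickly (the 25% metadata overhead is the estimate given in this review, not a vendor-published number):

```python
# Sketch of the billing difference described above, using the review's
# own figures: 50 GB of raw Cisco ASA logs and an assumed 25% metadata
# overhead added by parsing/enrichment.

raw_gb = 50.0
metadata_overhead = 0.25  # reviewer's estimate for this log type

# Splunk-style billing (per the review): license counted on raw ingest,
# before parsing and enrichment.
billed_splunk = raw_gb

# Devo-style billing (per the review): license counted after parsing
# and enrichment, so the added metadata is included.
billed_devo = raw_gb * (1 + metadata_overhead)

print(billed_splunk)  # 50.0 GB counted against the license
print(billed_devo)    # 62.5 GB counted against the license
```

Whether that 25% overhead erases the lower per-gig price depends entirely on how metadata-heavy the log type is, which is the "gotcha" being described.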

I have addressed this issue with Devo all the way to the CEO. They are not unaware. I talked to everyone, all the way up the chain of command. Then, our CEO has been having a direct call with their CEO. They have had a biweekly call for the last six weeks trying to get things moving forward in the right direction. Devo's new CEO is trying very hard to move things in the right direction, but customers need to be aware, "It's not there yet." They need to know what they are getting into.

Which other solutions did I evaluate?

We evaluated Graylog as well as QRadar as potential options. Neither of those options met our needs or use cases.

What other advice do I have?

No SIEM deployment is ever going to be easy. You want to attack it in order of priorities for what use cases matter to your business, not just log sources.

The Activeboards are easy to understand and flexible, though we are not using them quite as much as other people might be. I would suggest investing in developing and working with Activeboards. If you lack the internal SIEM expertise to develop correlation rules and content for Devo on your own, wait for a general availability release of SecOps to all customers before using this as a SIEM product.

I would rate this solution as a five out of 10.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Other
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
CharlesNetshivhera
Senior DevOps Engineer at a financial services firm with 10,001+ employees
Real User
Top 5
It is quite comprehensive and you're able to do a lot of tasks
Pros and Cons
  • "The indexes allow you to get your results quickly. The filtering and log parsing is the advantage of Logstash."
  • "We're using the open-source edition for now. I think maybe they can allow their OLED plugin to be open source, as at the moment it is commercialised."

What is our primary use case?

It is deployed as a single instance, but we are currently looking at clusters. We are using it as a logging solution. I'm a developer and act as a server engineer for the DevOps engineers. It's used by developers and mobile developers, and it could be used by quite a few different teams.

How has it helped my organization?

It is quite comprehensive, and you're able to do a lot of tasks. It has dashboards, and we're able to create a lot of search queries. It is not easy to use, but once you get the hang of it, it provides good graphs and visuals. The indexes allow you to get your results quickly. The filtering and log parsing is the advantage of Logstash.
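As a sketch of the kind of filtering and parsing a Logstash pipeline does (the port, grok pattern, and index name here are illustrative assumptions, not this team's actual configuration):

```conf
input {
  # Receive logs shipped from Beats agents (assumed port)
  beats { port => 5044 }
}

filter {
  # Parse a raw web-access log line into named fields
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # Use the log's own timestamp rather than ingest time
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  # Write into a dated index so results can be queried quickly
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
```

It is this filter stage, turning unstructured lines into indexed fields, that makes the fast searching described above possible.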

What is most valuable?

In terms of query resolution, error searching, and finding production issues, we're able to find issues quicker. We don't need to manually obtain the logging reports. All bugs in code are quickly identified in the logs, as they are in one centralized logging location.

What needs improvement?

We're using the open-source edition for now. I think maybe they can allow their OLED plugin to be open source, as at the moment it is commercialised. We are planning to go into production with the enterprise edition; we just wanted to check how this one works first.

I also think the index rotation can be improved. It can be complex to manage how all the logs that have been ingested are rotated across indexes, so that is something they need to work on.

In terms of ingestion, they should look at incorporating all operating systems. It should be easy to collect logs from different sources without a workaround to push the logs into the system. For example, on AIX there's no direct log shipper, so you do need to do a bit of tweaking there.
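For context on the index rotation point: on the Elastic side, rotation is typically configured through an index lifecycle management (ILM) policy. A minimal sketch, where the policy name and the size/age thresholds are illustrative assumptions:

```conf
PUT _ilm/policy/app-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "7d" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

Even with ILM, wiring the policy to index templates and rollover aliases is exactly the kind of multi-step setup the review calls complex.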

For how long have I used the solution?

We have been using ELK Logstash for three years or so. We believe we are using the latest version. 

What do I think about the stability of the solution?

The solution is quite stable, although it does need a bit of maintenance, because there are quite a lot of plugins that come with it. There's a lot of testing involved to ensure that nothing breaks.

What do I think about the scalability of the solution?

The solution is scalable. So you're able to extend it and grow it. For example, you're able to put it in a cluster, so it is quite scalable.

How are customer service and technical support?

I have used the technical support. Their forums are quite good in terms of response; there is quite a big community where you can find similar questions or issues that others have experienced previously. Even the direct support is quite good, and they also have regional support.

Which solution did I use previously and why did I switch?

I have used logging solutions previously, mainly Graylog and ELK. Graylog gives you centralized logging and is built as a logging solution, whereas ELK is designed and built more for big data. If you want to go deeper into analytics, ELK gives you that flexibility and out-of-the-box models. Both solutions are widely used by a lot of bigger clients in the industry, and they've been tried and tested.

How was the initial setup?

With ELK, installation is not really straightforward. There are about three applications to consider. It's quite intense in terms of setup, but once you've done the setup, it's nice and smooth. The implementation took about three weeks, but that is because I was doing it in between other projects. We used an implementation plan: it was deployed to the development environment, then the Proof of Concept (POC) environments, and then into the production environment.

What about the implementation team?

We implemented the solution in-house; there were no third parties involved. For deployment and maintenance, we just need about two to three people handling maintenance and installation.

What's my experience with pricing, setup cost, and licensing?

We're using the open-source solution, so there are no cost implications, but we are planning to use it throughout the organization. We will adopt the open-source model first and, depending on whether there is a need for enterprise features, go down the enterprise route. If you need a lasting solution, you do need to buy the license for the OLED plugin. The free version comes fully standard and has everything that you need. It is easy to deploy, easy to use, and you get everything you need to become operational with it, with nothing further to pay unless you want the OLED plugin.

Which other solutions did I evaluate?

We also have Graylog, which we're using in parallel as a similar solution. At the moment, we're basically comparing the two to see which one is preferred.

What other advice do I have?

Do a POC first. Compare solutions and also look at the different log formats you're trying to ingest, and see how each really fits your use case. This goes for both ELK and Graylog. You can trial the enterprise version. In terms of lessons learned, it does need some time and resources, and it needs adequate planning. You need to follow the documentation clearly and properly. I would give this solution an 8 out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.