We compared Dynatrace and Azure Monitor across five categories, based on real PeerSpot user reviews. Our conclusion, drawn from all of the collected data, follows below.
Comparison Results: Dynatrace is the stronger option, as it offers more advanced monitoring features such as real-user tracking, AIOps automation, a Kubernetes module, and session replay, along with a user-friendly interface, good AI capabilities, and easy deployment. Azure Monitor, in contrast, is easy to set up and maintain but lacks visualization and integration with third-party services, and needs more out-of-the-box functionality and artificial intelligence for event correlation. Dynatrace also delivers better ROI through cost savings from automation and reduced mean time to identification and repair.
"Azure Monitor is very stable."
"The solution integrates well with the Microsoft platform."
"In the last company where I worked about a year ago, it looked very simple."
"Technical support is helpful."
"It has good troubleshooting features."
"A product that is well-integrated for monitoring Microsoft Azure."
"Azure Monitor's best features are its graphs and charts, the different visibility options, and reporting."
"It's a Microsoft native tool, so it works well with other Microsoft technologies, which is predominantly what our customer end-user base is."
"Real-time monitoring helps reduce downtime. It saves a lot of time in determining the likely cause of an issue an end user may be experiencing."
"Triggering gives us warning that the system is getting slow and we need to nail down the issue soon, so it does not impact our business."
"The most valuable feature of the solution would be the level of visibility that you get. I haven't seen anything that gives us that level of visibility yet."
"Its monitoring and key purpose capabilities are the most valuable. It provides the root cause of problems and helps peers join the war room."
"The most valuable feature is the beautiful UI."
"It provides the whole perspective in a single place when trying to guide the right people to go to the right solution at any given point in time."
"We use Dynatrace for performance testing. We use it to dig down for application layer or slowness issues, getting a clear idea of what is causing the issues, then reporting back to the engineering team."
"Data analytics help us to find issues in the short term or long term."
"It might not have all of the capabilities we will need."
"There are a lot of things that take more time to do, such as charting, alerting, and correlation of data, and things like that. Azure Monitor doesn't tell you why something happened. It just tells you that it happened. It should also have some type of AI. Environments and applications are becoming more and more complex every day with hundreds or thousands of microservices. Therefore, having to do a lot of the stuff manually takes a lot of time, and on top of that, troubleshooting issues takes a lot of time. The traditional method of troubleshooting doesn't really work for or apply to this environment we're in. So, having an AI-based system and the ability to automate deployments of your monitoring and configurations makes it much easier."
"The troubleshooting logs need improvement. There should be some improvement there. I have a hard time finding the right logs at the right times whenever there is an issue occurring."
"In comparison to New Relic, which I've used before, it's a bit more complicated. It's not as easy to use. It also took some time to get it working. The implementation needs to be simpler."
"There is room for improvement in stability."
"The length of latency is terrible and needs to be improved."
"They need to work on a more hybrid deployment that will allow us to monitor local on-premise deployments and connect to different systems. I would like to see more integration."
"We encounter some difficulties in monitoring the operating system on its own."
"The container platform could include more value-added features."
"It is not clear what our long-term upgrade strategy should be (AppMon vs. Dynatrace)."
"For AppMon, in order to use the rich client especially, I think you have to be somebody who is in there more often than not. It's not necessarily as intuitive as it could be."
"I would like to see dashboards included, and maybe more possibilities in terms of customization."
"Needs more compatibility of platforms out-of-the-box."
"There were a number of marketing promises made that we do not yet see in the tool."
"It needs a dashboard for cluster events in general, and for Kubernetes specifically."
"They should make hooks into some of the more modern performance testing tools a little easier. I think that would go a long way."
Azure Monitor is ranked 4th in Application Performance Monitoring (APM) and Observability with 44 reviews while Dynatrace is ranked 2nd in Application Performance Monitoring (APM) and Observability with 340 reviews. Azure Monitor is rated 7.6, while Dynatrace is rated 8.8. The top reviewer of Azure Monitor writes "A powerful Kusto query language but the alerting mechanism needs improvement". On the other hand, the top reviewer of Dynatrace writes "AI identifies all the components of a response-time issue or failure, hugely benefiting our triage efforts". Azure Monitor is most compared with Datadog, Prometheus, Sentry, Grafana and New Relic, whereas Dynatrace is most compared with Datadog, New Relic, AppDynamics, Splunk Enterprise Security and Elastic Observability. See our Azure Monitor vs. Dynatrace report.
See our list of best Application Performance Monitoring (APM) and Observability vendors.
We monitor all Application Performance Monitoring (APM) and Observability reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.