
New Research Reveals How Top Performing IT Organizations Mitigate Risk

Posted February 18th, 2017

When it comes to application technology and infrastructure, the world is a very different place than it was a decade ago. So, too, are user expectations. With applications having migrated en masse to web-based interfaces and cloud-based hosting, and with anytime, anywhere data access now the norm, people have come to expect trouble-free engagement and immediate responsiveness (typically in the range of 250 to 500 milliseconds) as they interact and transact with applications on their desktops and mobile devices.

These user expectations are not limited to consumer applications; they apply to work-related applications as well. Whether the application is the product of a third-party solution provider or the company’s own internal software development group, users today have little patience for slow-running applications or even temporary glitches in functionality. They have no tolerance at all for application downtime.

All of which explains why IT organizations are under more pressure than ever to maintain a high, if not flawless, level of performance with both new and existing applications. Having adopted agile development practices, internal software teams now release new and updated applications in rapid, continuous cycles, which only adds to that pressure.

Against this backdrop of sky-high user expectations, unprecedented technology complexity and the proliferation of elastic, distributed cloud environments, it is easy to see why the basic health checks that sufficed in the past are of limited value today. Monitoring for hardware failures by pulling status updates from servers and devices, or by timing how long a switch takes to respond to a ping, may still help meet basic performance management requirements. But today’s sophisticated technology architectures demand a comprehensive, service-wide view of application performance, one that focuses not just on infrastructure and network performance but also on the actual end-user experience.
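
To make the contrast concrete, here is a minimal sketch in Python of the kind of basic health check described above. The host name, port and thresholds are illustrative assumptions, not values from the report.

    import socket
    import time

    def check_port(host, port, timeout=2.0):
        """Return the TCP connect time in milliseconds, or None if unreachable."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (time.monotonic() - start) * 1000.0
        except OSError:
            return None

    # Hypothetical host and threshold, for illustration only.
    latency = check_port("app.example.com", 443)
    if latency is None:
        print("DOWN: host unreachable")
    elif latency > 500:  # the 250-500 ms expectation discussed above
        print("SLOW: %.0f ms" % latency)
    else:
        print("OK: %.0f ms" % latency)

A check like this can confirm that a device answered, but it says nothing about whether the application code behind it is actually serving users well, which is exactly the gap a service-wide view is meant to close.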

This means deploying operations-centric performance management (oAPM) and database-centric performance management (dAPM) solutions that make it easy to monitor hardware performance at a deep-dive component level. Importantly, it also means deploying a code-centric performance management (cAPM) solution that makes it easy to instrument and analyze software code and to detect bottlenecks in that code. The ability to pinpoint defective or inefficient code using a next-generation solution is fast becoming a primary focus area for top-performing IT organizations.
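
As a rough illustration of what code-level instrumentation involves, the sketch below wraps a function so that every call is timed and slow calls are logged. A real cAPM agent instruments applications automatically and far more deeply; the decorator, its thresholds and the example function here are assumptions made purely for illustration.

    import functools
    import logging
    import time

    logging.basicConfig(level=logging.INFO)

    def instrument(threshold_ms=500.0):
        """Wrap a function so each call is timed; log calls that exceed the threshold."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return func(*args, **kwargs)
                finally:
                    elapsed_ms = (time.perf_counter() - start) * 1000.0
                    if elapsed_ms > threshold_ms:
                        logging.warning("%s took %.0f ms (possible bottleneck)",
                                        func.__name__, elapsed_ms)
            return wrapper
        return decorator

    # Hypothetical application function, instrumented for illustration.
    @instrument(threshold_ms=250.0)
    def build_report(rows):
        return sorted(rows)

    build_report(range(1_000_000, 0, -1))

Even this crude form of timing makes the point: the bottleneck is traced to a named function in the code, not merely to a server or a switch.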

According to The 2017 Benchmark Report on Application Performance Management, underwritten by SolarWinds, the ability to identify, diagnose and fix issues (including code-level issues) before they impact business services and user experiences ranks as the top reason organizations are upgrading their APM capabilities. Mitigating the risk of application downtime and other performance issues translates, first and foremost, into increased customer satisfaction and loyalty as well as improved employee satisfaction and productivity.