In the first part of this series, we looked at two main areas that contribute to poor performance: hardware and software. Part 2 continues on the software side, covering other software components that can drag performance down.
It’s STILL a software problem
Let’s face it: websites serve up dynamic content more than ever to ensure a good user experience and to make the website useful. Databases drive that dynamic content to a large degree and can be the cause of slow performance. The reasons are clear: databases can suffer from performance problems stemming from poorly written queries, poorly placed indexes, out-of-control growth, and generally poor planning from the start.
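To make the index point concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table, column names, and data are illustrative assumptions, not anything from a real site; the point is that the database's own query plan reveals whether a query will scan the whole table or use an index.

```python
import sqlite3

# Hypothetical "orders" table, created purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

def plan(query):
    # EXPLAIN QUERY PLAN reports how SQLite intends to execute the query;
    # the fourth column of each row is the human-readable plan detail.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + query))

query = "SELECT * FROM orders WHERE customer_id = 42"
plan_before = plan(query)   # without an index: a full table SCAN

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = plan(query)    # with the index: a SEARCH using idx_orders_customer

print(plan_before)
print(plan_after)
```

Other databases offer the same capability (for example, `EXPLAIN` in MySQL and PostgreSQL); making plan checks a habit catches "poorly placed indexes" before users feel them.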
Middleware, such as load balancers and transaction coordinators, can also contribute to poor performance.
In a load-balanced environment, it is not uncommon for one or more servers to have software out of sync with the others. From first-hand experience and discussions with colleagues, one “bad” server getting out of step during a software push/update is common, and it can be confusing and difficult to diagnose. When this problem occurs, it often manifests as odd, intermittent errors that appear to the user but are not reproducible by support staff. It’s a totally frustrating situation.
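One way to catch a drifted server is to compare a fingerprint of what each machine is actually running. The sketch below is a simplified assumption of that idea: the server names and release strings are invented, and in practice the fingerprint would come from hashing each server's deployed artifacts or querying a version endpoint.

```python
import hashlib
from collections import Counter

def fingerprint(content: bytes) -> str:
    # Hash of the deployed build; any stable identifier would do.
    return hashlib.sha256(content).hexdigest()

# Illustrative fleet: web03 missed the latest software push.
deployed = {
    "web01": fingerprint(b"release-2.4.1"),
    "web02": fingerprint(b"release-2.4.1"),
    "web03": fingerprint(b"release-2.4.0"),
}

def find_outliers(fleet: dict) -> list:
    # Treat the most common fingerprint as the intended release and
    # flag any server whose deployed build differs from it.
    expected, _ = Counter(fleet.values()).most_common(1)[0]
    return sorted(host for host, fp in fleet.items() if fp != expected)

print(find_outliers(deployed))  # ['web03']
```

Running a check like this after every deployment turns the "one bad server" mystery into a routine report instead of weeks of chasing intermittent errors.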
Other middleware that coordinates transactions or business processes can be misconfigured or simply perform slowly, and diagnosing the root cause can take months before a resolution is found.
So how does one detect, monitor, and resolve these types of issues to keep the website as fast and efficient as possible? By monitoring, of course! By monitoring all three levels of a web application (see this blog post for more: Three Levels of Web Application Monitoring), one can catch problems early and resolve them before the customer is ever aware.
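At its simplest, monitoring boils down to collecting response-time samples and flagging anything outside an agreed threshold. The snippet below is a minimal sketch of that idea; the sample values and the 250 ms threshold are assumptions chosen for illustration, and a real monitor would gather these numbers continuously from live traffic or synthetic checks.

```python
import statistics

# Illustrative response-time samples (milliseconds) from a monitor.
samples_ms = [120, 135, 128, 140, 460, 132, 125]

def check_latency(samples, threshold_ms=250):
    # Report the median alongside the count of slow requests, so a single
    # outlier doesn't mask (or masquerade as) a systemic slowdown.
    slow = [s for s in samples if s > threshold_ms]
    return {"median_ms": statistics.median(samples), "slow_count": len(slow)}

report = check_latency(samples_ms)
print(report)  # {'median_ms': 132, 'slow_count': 1}
```

A report like this, gathered at each of the three levels, is what lets you spot a slow query, a drifted server, or a misbehaving middleware layer before the customer does.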