We live in an unforgiving, fast-moving world where the pace of technology development is outstripped only by the impatience of users, who have come to expect − and get − software that is not only efficient but also fast and reliable.
Today’s consumer does not forgive system downtime and failure, as switching to the next supplier is very easy and loyalty is intrinsically linked to service delivery. When software fails, organisations face immediate reputational and financial damage − both of which are hugely avoidable risks if performance is prioritised.
This brings us to the heart of what performance engineering is about − in a nutshell, the continuous assessment and improvement of a product's speed and efficiency, achieved by embedding performance decisions into architecture, design and implementation.
As software scales in complexity, companies are beginning to understand that performance engineering and testing are essential to the development process. Still, there’s a common misconception that performance is only about speed − it’s not.
Performance engineering is proactive, continuous and end-to-end application performance testing and monitoring. It enables seamless collaboration between teams, tools and processes through continuous feedback loops. Here, responsibility for quality assurance rests not only with testers, but also with developers, performance engineers, product owners and business analysts.
Shifting timelines
As with all technology disciplines, there is a fair amount of jargon and ‘tech-speak’ surrounding the subject of performance engineering; therefore, I will attempt to translate this into business application.
By leveraging right-sized tools for developers and engineers, performance engineering enables shift-left performance testing and shift-right application performance monitoring. Shift-right testing (shifting right on the project timeline) lets you test in production and prepare for the unexpected, while shift-left testing is founded on the principle of testing early and often in the lifecycle − an approach that aims to improve the quality of both the process and the features of apps.
It’s difficult to appreciate just how much of a departure performance engineering is from traditional performance testing without understanding the principles of classic performance testing, which is effectively a subset of performance engineering. It usually entails running a single round of load testing as part of the post-development quality assurance (QA) cycle.
Performance testing involves checking the speed, reliability, scalability, stability, response time and resource use of an application under the anticipated workload. Before we get into a discussion around the differences between performance engineering and performance testing, let’s first look at performance testing in detail and why in isolation, it is no longer sustainable.
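To make the idea concrete, here is a minimal sketch of what a load test measures. It is purely illustrative: `call_endpoint` is a hypothetical stand-in that simulates server work with a short sleep, whereas a real test would drive the actual application with a dedicated tool such as JMeter or Gatling.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def call_endpoint() -> float:
    """Stand-in for a real request to the application under test.

    Returns the elapsed time in seconds; the sleep simulates
    roughly 10 ms of server-side work.
    """
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start


def run_load_test(users: int, requests_per_user: int) -> dict:
    """Drive the endpoint with `users` concurrent workers and summarise latency."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [
            pool.submit(call_endpoint)
            for _ in range(users * requests_per_user)
        ]
        timings = [f.result() for f in futures]
    return {
        "requests": len(timings),
        "avg_ms": statistics.mean(timings) * 1000,
        "p95_ms": statistics.quantiles(timings, n=20)[-1] * 1000,
    }


if __name__ == "__main__":
    print(run_load_test(users=10, requests_per_user=5))
```

Even this toy version surfaces the questions performance testing answers: how response times shift as concurrency rises, and where the tail (the 95th percentile) sits relative to the average.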
Performance testing unveiled
Often testing is viewed in isolation and treated as an afterthought that only begins at the end of functional testing. This results in siloed working, which causes large communication gaps between project sub-teams and prevents the collaboration necessary to deliver a high-quality product.
Therefore, by the time performance testing kicks in, organisations have already invested substantial time, effort and finances into an application’s design, development and promotion.
Moreover, performance testing is frequently missing from the 'done' criteria checklist that precedes release. So, at this juncture, the business needs the app in production urgently and expects no delays.
In this context, QA's feedback arrives too late for problems to be fixed properly before release. Inevitably, a high number of performance problems survive − unnecessarily − into the production environment, all permitted just so that the release stays on schedule. Fixing a defect in production is far more expensive and disruptive than doing so early in development.
Finally, traditional performance testing may have been perfect for the Waterfall model (a breakdown of project activities into linear sequential phases, where each phase depends on the deliverables of the previous one and corresponds to a specialisation of tasks) but has no place in today’s DevOps-centric world.
DevOps reduces the failure rate of new releases by shrinking the time between when changes are committed to the system and when the change is placed in production. Continuous integration and delivery ensures the software is always in a releasable state throughout its lifecycle.
DevOps also focuses on realigning organisations to support end-to-end collaboration between stakeholders, functions and tools. Software development needs a more evolved performance testing approach if it is to meet DevOps demands.
What’s the difference?
Performance testing is a quality check of an application’s load handling and responsiveness. It establishes how well the system will bear a production load and anticipates issues that could arise during heavy load conditions.
Performance engineering seeks to design the application from the start with performance metrics and facilitate the discovery of problems early in development.
Furthermore, performance testing is a QA process that usually takes place when a round of software development is complete.
Performance engineering, on the other hand, is a continuous process that is embedded in all phases of the software development cycle − from design, to development, and into the end-user experience.
Performance testing is conducted by the QA team while performance engineering involves research, development and QA.
In the second article in this two-part series, I will explain how, through certain concepts, DevOps and performance engineering deliver consistent production performance results, allowing customers to deploy applications efficiently and with greater confidence, and to roll out high-performing, stable software that fulfils user expectations.