There are several myths and facts in performance testing. A common belief is that if the application turns out to be slow, the performance testing team can be rushed in at the last minute to fix the bottlenecks.
Likewise, in moments of crisis, when teams look for reasonable ways to reduce costs, the prospect of adopting or migrating to open source testing tools quickly becomes more appealing.
Let’s discuss myths vs facts in each test phase.
Test Requirement Phase:
Myth: Project team can determine if Performance testing is needed or not as they are the application owners.
Fact: Include Performance engineers and architects during the Non-functional requirement Assessment to decide the risk and need for Performance testing.
Myth: When there are no Performance Testing NFRs, the development team or Project Managers can describe them.
Fact: The business team determines the NFRs in concurrence with Solution Architects, who design the infrastructure and application to satisfy those NFRs. If the application is already in production, the Performance testing team can help determine the NFRs by examining the production logs or using production monitoring tools.
Myth: Non-functional requirements (NFRs) apply to application response time only.
Fact: Non-functional requirements apply to Key Performance indicators – Security, Scalability, Stability, Reliability, Capacity, Availability, Accessibility, and Usability. It’s not just application response time.
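To make the point that NFRs go beyond response time, here is a sketch of KPI targets phrased as measurable checks. All names and numbers are illustrative assumptions.

```python
# Illustrative NFR targets covering several KPIs, not just response time.
nfr_targets = {
    "response_time_p95_ms": 800,    # performance
    "peak_concurrent_users": 5000,  # scalability / capacity
    "availability_pct": 99.9,       # availability
    "error_rate_max_pct": 0.1,      # reliability
    "soak_duration_hours": 8,       # stability under sustained load
}

# Hypothetical measured values from a test campaign.
measured = {
    "response_time_p95_ms": 720,
    "peak_concurrent_users": 5200,
    "availability_pct": 99.95,
    "error_rate_max_pct": 0.05,
    "soak_duration_hours": 8,
}

# The direction of comparison differs per KPI: for these, higher is better.
higher_is_better = {"peak_concurrent_users", "availability_pct", "soak_duration_hours"}
for kpi, target in nfr_targets.items():
    actual = measured[kpi]
    ok = actual >= target if kpi in higher_is_better else actual <= target
    print(f"{kpi}: target={target}, actual={actual}, {'PASS' if ok else 'FAIL'}")
```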
Test Planning Phase:
Myth: Performance testing can be accomplished in 2 to 4 weeks.
Fact: The performance testing timeline cannot be fixed at ‘x’ weeks; it depends on the purpose of the test and the complexity of the application. Applications with complex architectures can take much longer to be performance tested.
Myth: Performance Testing can be done Just towards the completion of the testing life cycle.
Fact: Invest in a baselining effort upfront, followed by incremental tests to examine whether performance is improving or worsening. For complex engagements, early performance testing can run in parallel with development where appropriate. Performance defects are quite expensive to fix at the end of the SDLC and can force a change in technical design.
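The baseline-plus-incremental approach above can be sketched as a simple regression check: compare each transaction's latest result against the baseline and flag anything that slowed beyond a tolerance. The transactions, timings, and 10% threshold are illustrative assumptions.

```python
# Hypothetical per-transaction response times (ms) from a baseline run
# and a later incremental run.
baseline = {"login": 250, "search": 400, "checkout": 900}
current = {"login": 260, "search": 520, "checkout": 880}

THRESHOLD = 0.10  # flag anything more than 10% slower than baseline

regressions = {
    txn: (baseline[txn], current[txn])
    for txn in baseline
    if current[txn] > baseline[txn] * (1 + THRESHOLD)
}

for txn, (old, new) in regressions.items():
    print(f"{txn}: {old} ms -> {new} ms ({(new - old) / old:+.0%})")
```

Here only `search` (+30%) is flagged; `login` (+4%) and `checkout` (-2%) stay within tolerance. Catching such drifts test-by-test is far cheaper than discovering them at the end of the SDLC.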
Myth: Performance testing can be employed once SIT is in progress.
Fact: Performance test script development can begin while SIT is in progress, but it is advised that test executions begin only after SIT completion, or at least 80% SIT completion with no Severity 1 or 2 defects open.
Test Development Phase:
Myth: Performance Testing is about learning and using a load testing tool.
Fact: It involves much more: derive the workload mix and design realistic scenarios, devise a proper end-to-end PT approach, determine clear objectives for each test type, examine performance from both software and hardware aspects, and identify performance bottlenecks and provide recommendations.
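Deriving a workload mix is a worked calculation, not a tool feature. One common sizing step uses Little's Law: concurrent users = throughput × (response time + think time). The throughput, timings, and mix percentages below are illustrative assumptions.

```python
# Sizing virtual users for a target throughput with Little's Law:
# concurrent users = throughput * (response time + think time).
target_tps = 50        # target transactions per second (assumed)
avg_response_s = 2.0   # expected average response time in seconds (assumed)
think_time_s = 8.0     # scripted think time per iteration in seconds (assumed)

virtual_users = target_tps * (avg_response_s + think_time_s)
print(f"Virtual users needed: {virtual_users:.0f}")  # 500

# Split those users across an assumed workload mix.
workload_mix = {"browse": 0.60, "search": 0.30, "checkout": 0.10}
for txn, share in workload_mix.items():
    print(f"{txn}: {virtual_users * share:.0f} users")
```

This kind of arithmetic precedes any load tool configuration: the tool merely executes the scenario the engineer has already designed.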
Myth: Performance testing is estimating the response time that satisfies the defined SLA.
Fact: Performance testing objectives can involve the assessment of other NFRs such as Scalability, Availability, etc. Response time against the SLA is only one of several significant NFRs.
Myth: Test cases managed by the functional team can be leveraged for Performance Testing.
Fact: Performance testing covers only the crucial transactions: the most frequently used, complex business functions, and system-intensive operations. Functional test cases therefore cannot simply be reused wholesale.
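Selecting the "most frequently used" transactions is often a Pareto exercise: rank transactions by production volume and script the few that cover the bulk of traffic. The transaction names, counts, and 80% cut-off below are illustrative assumptions.

```python
# Hypothetical production transaction counts.
txn_counts = {
    "view_product": 52000,
    "search": 31000,
    "add_to_cart": 9000,
    "checkout": 5000,
    "update_profile": 2000,
    "export_report": 1000,
}

total = sum(txn_counts.values())
selected, covered = [], 0
# Take transactions in descending order of volume until 80% of traffic is covered.
for txn, count in sorted(txn_counts.items(), key=lambda kv: kv[1], reverse=True):
    selected.append(txn)
    covered += count
    if covered / total >= 0.80:
        break

print(selected)  # the handful of transactions worth scripting first
```

Business-critical but low-volume flows (e.g. checkout here) would still be added manually on top of the volume-based selection.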
Test Execution Phase:
Myth: Only developers can tune the application performance.
Fact: Performance Architects provide advice and tuning recommendations, but implementing those suggestions lies with the development / project team.
Myth: Performance issues can be fixed simply by plugging in extra hardware.
Fact: Performance issues can be in Application Code, Third Party libraries, Improper Server Configuration, Infrastructure, etc.
Myth: To define overall application performance, two successful baseline tests are sufficient.
Fact: The number of tests and the test types are defined by the performance objectives; the performance testing team will advise on the applicable test types.
To learn more about Performance testing, connect with our experts at TestUnity.