Software Testing Interviews and Test Driven Development

Guest Author: Jessica

Software testing evaluates and verifies that a software application does everything the developer intended it to do. Many developers and software testers find that software testing interviews require more in-depth preparation than they originally expected. As a software tester, you need a deep understanding of the practice to do well in interviews. This article is a comprehensive overview of that side of interviewing, covering software bugs and how developers ensure that a software application has comprehensive test coverage.

Because software testing is so important in software development, we'll consider the basic approach to testing that fulfills software requirements, types of testing, the bug life cycle, bug reports, metrics, and TDD (Test-Driven Development). For software testing interviews, it's important to develop adequate working knowledge of the testing methodologies that ensure software applications meet defined requirements. Be aware that these methodologies also work toward removing all kinds of bugs and defects from programs.

As you prepare for your interview, note the three methods of testing software. Black-box testing strictly considers external requirements and specifications; there's no need to learn about the internal paths, structures, or implementation of the application under test (AUT). White-box testing, on the other hand, pays attention to the code structures, internal paths, and implementation of the software. You need to know how the code was written to write white-box tests. Gray-box testing falls in between: the tester has minimal awareness of the internal details of the application.
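To make the distinction concrete, here is a minimal sketch in Python. The `shipping_fee` function and its pricing rules are entirely hypothetical; the point is that the black-box assertion is derived only from the external spec, while the white-box assertions deliberately exercise an internal branch the tester can only know about by reading the code.

```python
# Hypothetical function under test: computes a shipping fee.
def shipping_fee(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg < 1:  # internal branch: flat rate for light parcels
        return 2.50
    return 2.50 + (weight_kg - 1) * 1.75

# Black-box style: derived only from the spec ("heavier costs more").
assert shipping_fee(0.5) < shipping_fee(5)

# White-box style: written with knowledge of the internal branch,
# deliberately exercising the light-parcel path and its boundary.
assert shipping_fee(0.99) == 2.50
assert shipping_fee(1) == 2.50
```

A gray-box test would sit between the two: for example, knowing that a boundary exists around 1 kg without having read the exact formula.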

There are generally four levels of testing: unit testing, integration testing, system testing, and acceptance testing. The order is important, as they build on each other. Unit testing is the lowest level, generally testing modules and methods. Integration testing usually tests interfaces and interactions between modules. System testing can examine things like database connections and interfacing. Acceptance testing is the final level, usually done by a human with "user stories" to validate.
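As an illustration of the lowest level, a unit test exercises a single function in isolation. The `slugify` function below is a made-up example, and `unittest` is Python's standard test framework; this is a sketch of the idea, not a prescription.

```python
import unittest

# Hypothetical module-level function to unit-test.
def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Unit tests: each exercises one function in isolation,
    # with no database, network, or other module involved.
    def test_basic_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_whitespace(self):
        self.assertEqual(slugify("  Many   Spaces "), "many-spaces")
```

Run it with `python -m unittest <filename>`. An integration test of the same code would instead check `slugify` working together with, say, the routing layer that consumes its output.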

Testing reveals bugs and defects, and interviews often require a clear explanation of the bug life cycle (also known as the defect life cycle): the trajectory of a defect throughout the lifespan of the software. A bug report is filed when a tester or user experiences the issue and makes the team aware of it; the bug is closed when the developer has ensured the defect can no longer be reproduced or demonstrated. An interviewer might want your explanation of the statuses of a bug, so take a moment to determine the difference between these: New, Assigned, Active, Tested, Verified, and Closed. When a bug is marked Active, a fix is being worked on; once the fix is in, the bug moves to Tested. If the defect reappears during that testing, it is Reopened. An Active bug or defect may also be Rejected or Deferred, depending on the circumstances. Some interviews will even delve into organization- and project-specific bug life cycles and the factors that determine them. Review the impact of factors such as team structure, project timeline, organization policy, and software development model (agile, iterative, waterfall, and so forth).
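One way to internalize the life cycle is to treat it as a small state machine. The sketch below encodes the statuses named above; the exact transition set varies by organization, so the dictionary here is one plausible reading, not a standard.

```python
# A minimal sketch of the bug life cycle as a state machine.
# Status names follow the article; the transition set is illustrative.
ALLOWED = {
    "New":      {"Assigned"},
    "Assigned": {"Active"},
    "Active":   {"Tested", "Rejected", "Deferred"},
    "Tested":   {"Verified", "Reopened"},
    "Reopened": {"Active"},
    "Verified": {"Closed"},
    "Deferred": {"Active"},
    "Rejected": set(),
    "Closed":   set(),
}

def transition(status, new_status):
    """Move a bug to a new status, rejecting illegal jumps."""
    if new_status not in ALLOWED[status]:
        raise ValueError(f"cannot move {status} -> {new_status}")
    return new_status
```

A healthy bug walks New → Assigned → Active → Tested → Verified → Closed; a regression detours through Reopened back to Active.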

This brings us to bug reports. "What is a bug report and what does it do?" A bug report is a tester's diary of their observations, findings, and other information relevant to developers or management. Also called a test record, the bug report ensures that team members understand the issue, outlines the steps and conditions required to reproduce it, and records the resolution once the problem is fixed. Title, description, version, status, steps to reproduce, assignee, and resolution are possible headings for a comprehensive bug report.
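Those headings map naturally onto a simple record type. The sketch below is one hypothetical shape for such a record in Python, with invented field names and sample data, just to show how the pieces fit together.

```python
from dataclasses import dataclass, field

# Hypothetical bug-report record mirroring the headings listed above.
@dataclass
class BugReport:
    title: str
    description: str
    version: str
    steps_to_reproduce: list = field(default_factory=list)
    status: str = "New"          # a new report starts in the New status
    assigned_to: str = ""
    resolution: str = ""

# Illustrative sample report.
report = BugReport(
    title="Login button unresponsive on mobile",
    description="Tapping 'Log in' does nothing on narrow viewports.",
    version="2.3.1",
    steps_to_reproduce=[
        "Open the app on a 375px-wide viewport",
        "Enter valid credentials",
        "Tap 'Log in'",
    ],
)
```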

It's important to consider the vital metrics of software tests. In testing interviews, it's good to provide some insight into the high-level information a tester reports to developers or management and the accompanying action steps. Depending on severity, you may need to point out the total number of identified defects, how many were fixed, and the number of problems due to source code errors as opposed to other factors such as configuration. Other metrics include the bug find-and-fix rate within a defined period, how many bugs exist per feature, the average time to find and fix a bug, total time spent developing new features as opposed to resolving bugs, how many bugs are outstanding before a release, and the number of customer-reported vs. tester-found bugs. A tester often has to determine the conditions under which an application under test meets requirements and works as expected. That's the purpose of test cases, where each test case is a set of conditions or variables that help the tester arrive at their conclusions.
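Several of these metrics are simple aggregations over a defect log. The toy log below is invented data; the point is only how metrics like fix rate or the code-error share fall out of it.

```python
# Toy defect log; the records and numbers are illustrative only.
bugs = [
    {"id": 1, "source": "code",   "fixed": True,  "found_by": "tester"},
    {"id": 2, "source": "config", "fixed": True,  "found_by": "customer"},
    {"id": 3, "source": "code",   "fixed": False, "found_by": "tester"},
    {"id": 4, "source": "code",   "fixed": True,  "found_by": "tester"},
]

total = len(bugs)
fixed = sum(b["fixed"] for b in bugs)                 # bugs fixed so far
from_code = sum(b["source"] == "code" for b in bugs)  # vs. config issues
customer_found = sum(b["found_by"] == "customer" for b in bugs)

print(f"fix rate: {fixed}/{total}")              # fix rate: 3/4
print(f"code-error share: {from_code}/{total}")  # code-error share: 3/4
print(f"customer-reported: {customer_found}")    # customer-reported: 1
```

Real teams pull the same numbers out of a bug tracker rather than a list of dicts, but the arithmetic is the same.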

If you're going to ace the software testing interview, you should understand the difference between functional and non-functional testing. Functional testing is driven by customer requirements, whereas non-functional testing works to meet customer expectations. Non-functional testing describes how the software works (performance, usability, reliability), while functional testing focuses on what the product does (its functionality). Then, your interviewer could ask you to "Differentiate verification and validation in software testing." You show mastery of both concepts by explaining that verification takes place without executing the program or application code: it's a static analysis technique that includes reviews, inspections, and walk-throughs. Explain validation by describing it as testing that requires code execution: it's a dynamic analysis technique that includes functional and non-functional software testing techniques.
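The static/dynamic split can be shown in a few lines. This is a deliberately toy illustration: the `ast` module inspects Python source without running it (verification), while `exec` runs it so we can test behavior (validation).

```python
import ast

# Source code we want to check; a trivial hypothetical example.
source = "def add(a, b):\n    return a + b\n"

# Verification (static): analyze the code WITHOUT executing it,
# e.g. confirm the required function is defined at all.
tree = ast.parse(source)
func_names = [n.name for n in ast.walk(tree)
              if isinstance(n, ast.FunctionDef)]
assert "add" in func_names

# Validation (dynamic): execute the code and test its behavior.
namespace = {}
exec(source, namespace)
assert namespace["add"](2, 3) == 5
```

In practice, verification looks like design reviews and code inspections rather than `ast.parse`, but the dividing line is the same: whether the program runs.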

The V model of the software development life cycle proposes carrying out development and quality assurance (QA) activities simultaneously. There's no discrete testing phase -- an important point to note -- as testing begins in the requirements phase. Verification and validation go hand in hand, so it's normal for software testing professionals to put them together. Modern software development best practices include something known as Test-Driven Development, or TDD. Popularized by Kent Beck in the late 1990s, TDD requires a developer working on a feature to first write a failing test, then write just enough code to make that test pass. A passing test is a green light to write another failing test, followed again by just enough code to pass it. This cycle continues until the feature is fully developed. Where there are external dependencies such as files, databases, or networks, mocking them to isolate the code under test is necessary. Test-Driven Development helps the developer thoroughly think through the feature, clarifying requirements and specifications before they write a single line of production code. Poor requirements make it hard to write good tests, and a failing test points to an issue with the most recent code addition, helping keep debugging time to a minimum.
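Here is what one turn of that cycle might look like, including the mocking of an external dependency. The feature, function name, and API are all hypothetical; in real TDD the test class would be written first and watched fail before `get_display_name` existed.

```python
import unittest
from unittest import mock

# Hypothetical feature: fetch a user's display name from a remote API
# client and title-case it.
def get_display_name(client, user_id):
    data = client.fetch_user(user_id)   # external (network) dependency
    return data["name"].title()

class TestGetDisplayName(unittest.TestCase):
    def test_titlecases_remote_name(self):
        # Mock the network dependency so the test is isolated and fast.
        client = mock.Mock()
        client.fetch_user.return_value = {"name": "ada lovelace"}
        self.assertEqual(get_display_name(client, 42), "Ada Lovelace")
        client.fetch_user.assert_called_once_with(42)
```

Run it with `python -m unittest <filename>`. With this test green, the next step in the cycle is another failing test -- say, for a user record with a missing name -- followed by just enough code to pass it.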

Tests are important for every iteration of software. They preserve software quality and the integrity of prior code. Manual testing helps verify that software functionality is intact. The onus is on the tester, during an interview, to communicate a list of test cases for the interview scenario or case study, highlighting test data where possible. A manual tester takes each case and test-drives the software just like an end user: providing input and manually verifying the output. Since software testing ensures that an application gives the correct output for a given input, a final interview curveball might be, "Does software testing improve software quality?" Iteratively identifying defects in a software product ensures correct output and helps guarantee that it works in line with requirements. In summary, software testing interviews cover the breadth of issues in software development that can impact the final quality of software. Preparation needs to include both a high-level and a detailed understanding of topics such as metrics, requirements, bugs, TDD, and critical aspects of the software development life cycle.

Familiarity with these topics is a surefire way to pass any software testing interview.