GUI Testing: The Beginner's Guide to User Interface (UI) Testing

This guide addresses key questions about GUI testing: What is it? Why is it important? What are the major GUI testing types and techniques? Read this comprehensive guide to discover the answers to these questions, as well as learn how to create a GUI test plan and write GUI test cases.

What is GUI testing?

If the beginning of wisdom is the definition of terms, then an understanding of GUI testing must begin with a definition of the term GUI. This is an acronym for Graphical User Interface, or that part of an application which is visible to a user. A GUI may contain elements such as menus, buttons, text boxes, and images. One of the first successful GUIs was the Apple Macintosh, which popularized the concept of a user “desktop” complete with file folders, a calendar, a trash can, and a calculator.

An early GUI: the Apple Macintosh, released in 1984. Image source: folklore.org, CC license.

In today’s GUI testing environment, the “simple calculator application” is no longer limited to the desktop of a computer. It may be a mobile app that is available on all of the major mobile platforms. Or, it may be a cloud application that must be supported by all of the major browsers. Testers must perform cross-browser and cross-platform testing to identify defects and ensure that the application fulfills all requirements.

Consequently, GUI testing refers to testing the functions of an application that are visible to a user. In the example of a calculator application, this would include verifying that the application responds correctly to events such as clicking on the number and function buttons. GUI testing would also confirm that appearance elements such as fonts and images conform to design specifications.

Is UI testing the same thing as GUI testing?

One challenge to learning about software testing is that there are many terms in the industry, and these terms often have overlapping meanings or are used inconsistently.

For example, user interface (UI) testing is similar to GUI testing, and the two terms are often treated as synonyms. However, UI is a broader concept that can include both GUIs and Command Line Interfaces (CLIs). A CLI allows a user to interact with a computer system through text commands and responses. Although CLIs predate GUIs, they are still in use today and are often preferred by system administrators and developers.

Tip

To get to the command line on a Windows PC, start the Command Prompt desktop application. On a Mac, open the Terminal application.

While there is no universally accepted definition of testing terms, a good source is the International Software Testing Qualifications Board (ISTQB) in their Certified Tester Foundation Level Syllabus.

Where does GUI testing fit in the software development lifecycle?

To understand the role of GUI testing in the development life cycle, it is helpful to think about test levels and test types.

Test Levels

A test level tells you when a test occurs in the development life cycle. Each level corresponds to one phase of the development life cycle. The ISTQB test levels are component testing (also called unit testing), integration testing, system testing, and acceptance testing.

The V-model of software development identifies testing tasks for each stage of development.

Component testing

Component testing checks individual units of code. Component testing is often called unit testing, but may also be called module testing or program testing. Developers write and execute unit tests to find and fix defects in their code as early as possible in the development process. This is critical in agile development environments, where short release cycles require fast test feedback. Unit tests are also white-box tests because they are written with knowledge of the code being checked.
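A minimal unit test can be sketched in a few lines of Python. The `add` function here is a hypothetical stand-in for real application code, used only to illustrate the idea of checking one unit in isolation:

```python
# Unit (component) test sketch: a developer checks a single function
# in isolation. The add() function is a hypothetical stand-in for real code.

def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    # White-box knowledge at work: floats are inexact, so the test
    # compares with a tolerance instead of strict equality.
    assert abs(add(0.1, 0.2) - 0.3) < 1e-9

test_add()
```

In practice such tests would run under a framework like pytest or unittest, which collects the `test_*` functions automatically.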

Integration testing

Integration testing combines individual units and tests their interaction. A common type of integration testing is interface/API testing. An application programming interface (API) is a set of rules that two modules of code use to communicate with each other. Interface/API tests validate this interaction. Because the API rules tend to be very stable in any given application, API tests are good candidates for automation. Interface/API tests are white-box tests because they require knowledge of the code being checked.
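An interface/API test exercises the contract between modules rather than any screen. The following sketch uses a hypothetical in-process `InventoryService`; the class name and its `reserve` contract are assumptions for illustration:

```python
# Interface/API test sketch: verify the contract one module exposes to others.
# InventoryService and its reserve() API are hypothetical examples.

class InventoryService:
    """Module A: exposes a small API that other modules rely on."""
    def __init__(self):
        self._stock = {"sku-001": 5}

    def reserve(self, sku, qty):
        """Contract: decrement stock and return True, or return False."""
        if self._stock.get(sku, 0) >= qty:
            self._stock[sku] -= qty
            return True
        return False

def test_reserve_contract():
    svc = InventoryService()
    assert svc.reserve("sku-001", 3) is True   # enough stock
    assert svc.reserve("sku-001", 3) is False  # only 2 left
    assert svc.reserve("missing", 1) is False  # unknown SKU

test_reserve_contract()
```

Because this contract rarely changes, the test needs little maintenance, which is why API tests automate well.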

System testing

System testing verifies that a complete, integrated system works as designed. As no knowledge of the underlying code is necessary, this is black-box testing. System testing is the level where GUI testing occurs.

Acceptance testing

Acceptance testing is usually performed either by end-users or their proxies, such as a product owner. The goal of user acceptance testing (UAT) is to ensure that the application solves the customer’s need.

Test Types

A test type tells you what is being tested. Below are the ISTQB-defined test types.

Functional testing

Functional testing compares an application to its functional specifications to ensure that the application does what it is supposed to do. In the case of a calculator app, functional testing would ensure that all of the mathematical operations work correctly, and that the memory and recall buttons save and return data properly. Functional testing answers questions such as, “Does the divide by zero error handling work right?” GUI testing is an example of functional testing.
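The divide-by-zero question above can be expressed as a small functional check. The `divide` function and its exact error message are assumptions standing in for the real calculator logic:

```python
# Functional test sketch for the calculator example: does the
# divide-by-zero error handling behave as specified?
# The divide() function and error message are hypothetical.

def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("Cannot divide by zero")
    return a / b

def test_divide_by_zero_shows_error():
    try:
        divide(8, 0)
    except ZeroDivisionError as err:
        assert str(err) == "Cannot divide by zero"
    else:
        raise AssertionError("expected an error for division by zero")

def test_divide_normal_case():
    assert divide(8, 2) == 4

test_divide_by_zero_shows_error()
test_divide_normal_case()
```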

Non-functional testing

Non-functional testing tests how well the system works rather than its specific functions. Non-functional testing considers elements such as usability, responsiveness, and scalability. This type of testing answers questions such as “How easy is it to perform division with this app?” and “Does this app look right on screens of different sizes?” Testing a GUI for usability is an example of non-functional testing.

Structural testing

Structural testing is a white-box approach. It verifies that all components of a system are covered by an appropriate test. If coverage gaps are found, then additional tests can be designed to ensure that each component is tested properly.

Regression testing

Regression testing involves re-running previously-successful tests after code changes, to confirm that no additional defects (regressions) have been introduced. Regression tests are ideal for automation since they are often repeated.

End-to-end testing

End-to-end testing validates the workflow of a system. For example, end-to-end testing of a purchasing app would ensure that a user can search for an item, add it to a cart, enter payment and shipping details, and complete the purchase.
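The purchasing workflow above can be sketched as a single end-to-end test. The `ShopApp` class is a hypothetical in-memory stand-in; in real GUI testing, a driver such as an automation tool would perform these steps against the actual interface:

```python
# End-to-end test sketch: walk the whole purchase workflow in order.
# ShopApp is a hypothetical stand-in for a real application under test.

class ShopApp:
    def __init__(self):
        self.catalog = {"mug": 9.99}
        self.cart = {}
        self.order_placed = False

    def search(self, term):
        return [name for name in self.catalog if term in name]

    def add_to_cart(self, item, qty=1):
        self.cart[item] = self.cart.get(item, 0) + qty

    def checkout(self, payment, address):
        if self.cart and payment and address:
            self.order_placed = True
        return self.order_placed

def test_purchase_end_to_end():
    app = ShopApp()
    results = app.search("mug")                 # 1. search for an item
    assert results == ["mug"]
    app.add_to_cart(results[0])                 # 2. add it to the cart
    assert app.cart == {"mug": 1}
    ok = app.checkout("valid-card", "1 Main St")  # 3. enter payment/shipping
    assert ok and app.order_placed              # 4. purchase completed

test_purchase_end_to_end()
```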

Why is GUI testing important?

In software development, quality is defined as delivering an application which has the functionality and ease-of-use to meet a customer’s need, and which is as free of defects as possible.

It has been said that “you cannot inspect quality into a product. The quality is there or it isn’t by the time it’s inspected.” This quote is about testing a completed product such as an automobile, but the principle applies to software development as well.

To improve quality, development teams seek to build it into their projects from the start. One way of doing this is to move testing earlier in the software development life cycle, an approach also known as Shift Left testing.

Shift left testing: performing unit and interface/API testing during development shifts the overall testing effort earlier in the software development life cycle.

Rather than waiting for system testing after an application is complete, development teams increase the time and resources spent in unit and interface testing. Catching errors early in the development process reduces the costs of resolving them. Also, unit and interface/API tests are well-suited for automation: unit tests can be created by developers as they code, while APIs tend to be very stable and therefore require less maintenance than GUI tests.

With the emphasis on Shift Left testing, it may seem that GUI testing is not important. After all, manual GUI testing can be time-consuming and resource-intensive. And test automation is more challenging for GUIs: Because the user interface can change often, previously-working automated GUI tests may break, requiring significant effort to maintain them.

But unit and interface testing cannot evaluate all areas of a system, especially the critical aspects of workflow and usability. In particular, these tests can only verify code that exists. They cannot evaluate functionality that may be missing or issues with an application’s visual elements and ease-of-use. This is the value of GUI testing, which is performed from the perspective of a user rather than a developer. By analyzing an application from a user’s point of view, GUI testing can provide a project team with information they need to decide whether an application is ready to deploy.

What are the major GUI testing techniques?

As explained above, test levels describe when to test, and test types describe what to test. Testing techniques describe how to test a target application, also known as the “application under test” or AUT. This section looks at three major GUI testing techniques: scripted testing, exploratory testing, and user experience testing.

Both scripted and exploratory testing can be supported by test automation.

Scripted testing

In scripted testing, software testers design and then execute pre-planned scripts to uncover defects and verify that an application does what it is supposed to do. For example, a script might direct a tester through the process of placing a specific order on an online shopping site. The script defines the entries that the tester makes on each screen and the expected outcome of each entry. The tester analyzes the results and reports any defects that are found to the development team. Scripted testing may be performed manually or supported by test automation.

Scripted testing benefits

Because scripted testing is pre-planned and has tangible outputs – the test scripts and testing reports – scripted testing gives product managers and customers confidence that an application has been rigorously tested. By creating test scripts early in the development process, teams can uncover missing requirements or design defects before they make it into the code. While test scripts must be created by a more experienced tester with knowledge of the system, less-experienced/knowledgeable testers can perform the actual testing. Finally, test scripts can be reused in the future for regression testing, and can also be automated for greater efficiency.

Scripted testing challenges

Scripted testing requires a lot of up-front planning, which can cause time pressure especially in agile development environments. Test scripts must be updated as the AUT changes. More importantly, studies have suggested that the rigid structure of test scripts may cause testers to miss defects that would be uncovered by exploratory testing, or that the time required to develop test cases and execute them manually does not deliver payback in terms of the number and severity of defects found relative to exploratory testing. Read more about defect detection rates in scripted vs. exploratory testing.

Exploratory testing

Rather than following pre-written test scripts, exploratory testers draw on their knowledge and experience to learn about the AUT, design tests and then immediately execute the tests. After analyzing the results, testers may identify additional tests to be performed and/or provide feedback to developers.

Although exploratory testing does not use detailed test scripts, there is still pre-planning. For example, in session-based exploratory testing, testers create a document called a test charter to set goals for the planned tests and set a time frame for a period of focused exploratory testing. Sessions of exploratory testing are documented by a session report and reviewed in a follow-up debriefing meeting.

Likewise, during scripted testing, there may be some decisions available to testers including the ability to create new tests on the fly. For that reason, it is helpful to think of scripted testing and exploratory testing as being two ends of a continuum rather than being polar opposites.

Both scripted and exploratory testing can be completely manual, or they can be assisted by automation. For example, an exploratory tester might decide to use test automation to conduct a series of tests over a range of data values.

Exploratory testing benefits

As the time spent planning and writing test cases is reduced, testers have more time to focus on the actual testing of the AUT. Testers who are challenged to use their knowledge, skills, and creativity to identify defects and ensure conformance to requirements may be more engaged and may find more defects than testers who are restricted to scripts written by others.

Exploratory testing challenges

Exploratory testing requires testers to have a deep understanding of the performance requirements of the AUT as well as skill in software testing. Due to the realities of time constraints and resource availability, it may be impractical to try to cover an entire AUT with exploratory testing. Exploratory tests are not as repeatable as scripted tests, which is a major drawback for regression testing. Further, relying on exploratory testing alone can create concern in product managers or customers that code will not be covered and defects will be missed.

User experience testing

In user experience testing, actual end-users or user representatives evaluate an application for its ease of use, visual appeal, and ability to meet their needs. The results of testing may be gathered by real-time observations of users as they explore the application on-site. Increasingly, this type of testing is done virtually using a cloud-based platform. As an alternative, project teams can do beta testing, where a complete or nearly-complete application is made available for ad hoc testing by end users at their location, with responses gathered by feedback forms. By its nature, user experience testing is manual and exploratory.

Don’t confuse user experience testing (UX) with user acceptance testing (UAT). As discussed earlier, UAT is a testing level which verifies that a given application meets requirements. For example, imagine that shortly after an update is released to a shopping website, the site receives many complaints that customers are unable to save items to a wish list. However, UAT verified that pressing the “wish list” button correctly added items to the wish list. So, what’s wrong? UX testing might have revealed that the wish list button was improperly placed on the screen, making it difficult for shoppers to find.

You can conduct UX testing at any point in the development phase where user feedback is needed. It is not necessary to have a completed application prior to involving users. For example, focus groups can respond to screen-mockups or virtual walk-throughs of an application early in development.

User experience testing benefits

UX testing provides the essential perspective of the end-user, and therefore it can identify defects that may be invisible to developers and testers due to their familiarity with a product. For example, a few years ago, a leading web-based e-mail provider developed a social sharing tool. This tool was beta tested by thousands of the provider’s own employees but not by end-users prior to its initial release. Only after the product was released into production did the provider begin receiving end-user feedback, which was overwhelmingly negative due to privacy concerns. Early UX testing would likely have revealed these concerns and saved the provider millions of dollars in development costs.

User experience testing challenges

UX testing requires identification and recruitment of user testers that accurately represent the target user base, such as novices, experienced users, young people, older adults, etc. While it is not necessary to have a very large group of users, it is important to cover the expected user personas. Recruiting user testers may be a time-consuming and potentially costly process. Although it is possible for user experience testing to be done by user proxies such as product owners, it can be difficult for them to set aside their knowledge of the AUT and fully step into the role of an end user. Finally, if UX testing is limited to beta testing, this occurs very late in the development cycle when it is expensive to find and fix defects.

What is the best GUI testing technique for my application?

Scripted testing, exploratory testing, or user experience testing: which is best for your situation? All decisions regarding testing should seek to maximize the value of an application for its users, both by detecting defects and by ensuring functionality and usability. In most cases, achieving this goal will require a combination of test techniques. Choosing the particular combination that best fits your application and development environment is done in the test planning phase, described in the next section.

How to write a GUI test plan

A GUI test plan sets the scope of a test project. Before writing test cases, it is important to have a test plan that identifies the resources available for testing and that prioritizes areas of the application to be tested. Given this information, a testing team can create a test charter for exploratory testing, and test scenarios, test cases and test scripts for scripted testing.

The test plan defines key information including:

  • Anticipated dates of testing
  • Required personnel
  • Required resources, such as physical hardware, virtual or cloud-based servers, and tools such as automation software
  • The target test environments, such as desktop, mobile devices, or web with supported browsers
  • The workflows and events of the AUT to be tested, as well as the AUT’s visual design, usability, and performance.
  • Planned testing techniques, including scripted testing, exploratory testing, and user experience testing.
  • The goals for testing including criteria for determining success or failure of the overall testing effort.

Test plans can be text documents, or you can use a test management tool to develop the test plan and to support analysis and reporting. There are many such tools available, including free server- and cloud-based tools. In the absence of a formal management tool, it is not uncommon to use a spreadsheet to track the progress of testing.

Remember that a GUI test plan is not a full system test plan, which would test other aspects of an AUT such as load testing, security, and backup and recovery.

Identifying the areas to test

There are several ways to identify the areas of the user interface to test. If specification documents are available, this is a good place to start. If specification documents are unavailable or incomplete, a useful approach is to conduct a brainstorming/concept-mapping session to determine areas to test.

The list below can help you start a brainstorming session:

  • Visual Design
  • Functionality
  • Performance
  • Security
  • Usability
  • Compliance

The sample concept map below shows the result of a brainstorming session for a generic application and includes GUI events such as add, edit, delete, save, and close. To create a concept map, testers apply heuristics: their knowledge of the AUT combined with general testing principles.

Partial concept map for GUI testing.

For example, to verify the navigation of a cloud-based application, plan tests such as the following:

Sample areas to test for web navigation

  • Compatibility with all common browsers
  • Proper functioning of the page when the user clicks the back button or the refresh button
  • Page behavior after a user returns to the page using a bookmark or their browser history
  • Page behavior when the user has multiple browser windows open on the AUT at the same time.

Prioritizing test cases for risk-based testing

Because resources for testing are often limited, it can be helpful to prioritize areas to test. Risk-based testing uses an analysis of the relative risk of potential defects to select the priorities for testing. Risk analysis is completed using a matrix similar to the one below. In this matrix, the Frequency column describes how often a user might encounter a potential defect, which includes both how visible the function is and how often it is used. Each of the Impact columns describes the effect of the defect on the user. The combination of frequency and impact determines the level of risk.

Risk assessment matrix.

For example, let’s assume that the password reset process does not work at all. Once a user encounters it, the frequency is “constant” and the effect is “catastrophic” as the user is locked out of the application. Therefore, testing the password reset process is a critical priority. If the testing team determines that there is only enough time to inspect critical and high priority events, then medium events could be addressed through exploratory testing, while low priority events may not be tested at all.
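The frequency-times-impact logic of a risk matrix can be expressed in a few lines. The numeric scores and thresholds below are illustrative assumptions, not values taken from any standard matrix; adapt them to your own risk model:

```python
# Risk-based prioritization sketch: combine frequency and impact into a
# priority level. Scores and thresholds are assumptions for illustration.

FREQ = {"rare": 1, "occasional": 2, "frequent": 3, "constant": 4}
IMPACT = {"minor": 1, "moderate": 2, "major": 3, "catastrophic": 4}

def risk_level(frequency, impact):
    score = FREQ[frequency] * IMPACT[impact]
    if score >= 12:
        return "critical"
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# The password-reset example: once broken, the defect is encountered
# constantly and locks users out, so it lands in the critical band.
assert risk_level("constant", "catastrophic") == "critical"
assert risk_level("rare", "minor") == "low"
```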

Planning regression testing

As mentioned earlier, regression testing helps ensure that new defects haven’t been introduced to previously-working code. Using a test automation tool such as Ranorex Studio can significantly increase the number of regression test cases that can be completed in a testing window. However, even with automation, it may be impractical to repeat all of the previous test cases for a new release. The most important regression test cases to perform are listed below.

The most important regression test cases are those which:

  • Have the highest level of risk
  • Provide the greatest coverage of code
  • Are likely to uncover the greatest number of defects, based on how many defects each test case identified in prior rounds of testing.

To avoid wasting time and effort on an application that is not ready for full testing, a test plan may also include smoke testing and sanity testing.

Smoke and sanity testing

  • Smoke testing checks the basic functionality of an application. For example, smoke testing would verify that the AUT can start and that users can log on. Smoke testing is shallow, because it does not test any one part of the system in depth, and it is also wide because it covers as much of the major functionality as possible. The name comes from the practice of turning on a new piece of hardware to see if it catches fire. If it does not, additional testing can proceed.

  • Sanity testing examines just the new or modified code to ensure that it is not causing any major problems and that it meets specifications. Compared to shallow and wide smoke testing, sanity testing is narrow and deep.

Both smoke and sanity testing are often performed by developers prior to more rigorous review by software testers.

How to write GUI test scenarios

A test scenario is a brief statement of how an application will be used in real-life situations, such as “the user will be able to log in with a valid username and password.” Test scenarios can be written from development documents such as requirements, acceptance criteria, and user stories. In the absence of such documents, scenarios can be developed in consultation with developers and customers/customer representatives.

Scenarios can guide exploratory testing, giving testers an understanding of a GUI event to test, without restricting them to a specific procedure. Scenarios are also increasingly popular in agile environments, as it is much faster to create a brief scenario than to write out a full test case.

Scenarios are not required to create GUI test cases but are helpful in guiding their development. If scenarios are used in scripted testing, then they serve as the base from which test cases can be developed, as shown in the diagram below.

Test scenarios, if used, guide the development of test cases and test scripts.

For example, the “log in” scenario above could have test cases for GUI events such as the following:

  1. User enters a valid username and password
  2. User enters invalid username
  3. User enters valid username but invalid password
  4. User tries to reset the password
  5. User tries to copy the password from or to the password field
  6. User presses the help button
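The first three cases above fit naturally into a data-driven test, where each case becomes one row of inputs and expected outcome. The `check_login` rules and the sample credentials are assumptions for illustration:

```python
# Data-driven sketch of the login test cases: one row per case.
# check_login() and the sample credentials are hypothetical.

VALID_USERS = {"alice": "s3cret!"}

def check_login(username, password):
    if username not in VALID_USERS:
        return "invalid username"
    if VALID_USERS[username] != password:
        return "invalid password"
    return "success"

CASES = [
    ("alice", "s3cret!", "success"),             # case 1: valid credentials
    ("mallory", "s3cret!", "invalid username"),  # case 2: invalid username
    ("alice", "wrong", "invalid password"),      # case 3: valid user, bad password
]

for username, password, expected in CASES:
    assert check_login(username, password) == expected
```

With a framework such as pytest, the same table could drive a parameterized test so that each row reports as its own pass/fail result.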

How to write GUI test cases

To write a GUI test case, start with a description of a GUI event to be tested, such as a login attempt. Then, add the conditions and procedures for executing the test. Finally, identify the expected result of the test and criteria for determining whether the test succeeds or fails.

Whether to write general or detailed procedures depends on factors such as:

  • The experience level of the testers, both with GUI testing in general and with the specific application being tested. Less-experienced testers may need more detailed procedures.
  • How often the user interface changes. If an interface changes frequently, maintaining detailed procedures requires more effort.
  • How much freedom end-users will have when navigating through the application. If users will have a lot of freedom, you could write procedures to cover all of the possible navigation paths, or else rely on the ability of the testers to anticipate the random paths that users might take.

If testers need only general procedures, these could appear in the test case itself. If testers need detailed procedures, putting these in a separate test script may help make your tests more maintainable.

What to include in a GUI test case

The most basic information in a test case is a description of the GUI event to be tested, the conditions for executing the test, and the expected result. To make test cases easier to manage, it can be helpful to include additional information such as links to requirements documents and/or defect tracking systems.

For example, the “valid username and password” test case above could contain information such as:

  • Test Case ID: a unique identifier for the test case.
  • Title: the title of the test case, such as “User enters a valid username and password, maximum length.”
  • Scenario/Requirement ID: a link to the unique ID for the test scenario or requirements document, if applicable.
  • Priority/Risk Level: critical, high, medium, or low.
  • Technique: scripted, exploratory, or UX.
  • Description: a brief explanation of the test case, such as “when the user is not already logged on, ensure that the user can log on with any valid character combination for the username and password, including special characters. Ensure that the password is hidden unless the user chooses to make it visible.”
  • Data Source*: an external spreadsheet or database containing combinations of usernames and passwords to test.
  • Procedure*: the list of steps for the tester to follow when performing the test.
  • Expected Result*: e.g., success; the application main window appears.
  • Actual Result*: completed by the tester after testing.
  • Status: the pass/fail/blocked status of the test case.
  • Defect Cross-Reference*: if a defect is found, enter the code from the defect tracking system here, to connect the test case with the defect.

*If you choose to write test scripts, this information appears in the test script rather than the test case. See below.

Best practices in writing GUI test cases

Applying “best practices” to your test case design can help improve the quality of your tests. Following are suggestions for making your GUI test cases easier to maintain and execute.

Separate test data from test cases

Separating test data from your test cases will make them easier to maintain. For example, the “valid username and password” test case should not include the actual username and password data values. Instead, these data values should be kept in a spreadsheet or database – whether the GUI tests are performed manually or with the help of test automation software. Likewise, it is also a good idea to separate information about the environment from the test case. Should the team decide to test on a new platform, it will not be necessary to change the test case itself.
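The separation can be as simple as reading credentials from a CSV file. The file name, column names, and `attempt_login` logic below are hypothetical; an in-memory `StringIO` stands in for the external spreadsheet so the sketch is self-contained:

```python
# Sketch of separating test data from the test case: credentials live in
# an external CSV rather than in the test itself. io.StringIO simulates
# the file; names and columns are assumptions for illustration.
import csv
import io

# In practice: open("login_data.csv") maintained outside the test case.
CSV_DATA = io.StringIO(
    "username,password,expected\n"
    "alice,s3cret!,success\n"
    "bob,letmein,failure\n"
)

def attempt_login(username, password):
    """Hypothetical system under test."""
    return "success" if (username, password) == ("alice", "s3cret!") else "failure"

for row in csv.DictReader(CSV_DATA):
    assert attempt_login(row["username"], row["password"]) == row["expected"]
```

Adding a new credential combination now means editing the data file, not the test.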

Keep test cases modular

To the extent possible, keep test cases modular so that they can be performed in any order. This more closely mimics the actual user experience, because users don’t always go through an application in the order that developers expect. In the case of a cloud-based shopping application, don’t write one large test case for buying an item. Instead, create separate test cases for events such as searching for items, adding items to a cart, deleting items from a cart, and updating the quantity of items in the cart. This will make it easier to test the combination of events such as a user going back to the search feature after adding several items to the cart, rather than going straight to checkout.
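Order independence can even be checked mechanically by shuffling the tests before running them. The `Cart` class is a hypothetical stand-in; the point is that each test builds its own state rather than depending on a previous test:

```python
# Modularity sketch: each cart event is its own independent test, so the
# suite can run in any order. Cart is a hypothetical stand-in class.
import random

class Cart:
    def __init__(self):
        self.items = {}
    def add(self, item, qty=1):
        self.items[item] = self.items.get(item, 0) + qty
    def remove(self, item):
        self.items.pop(item, None)
    def update_qty(self, item, qty):
        if item in self.items:
            self.items[item] = qty

def test_add():
    cart = Cart(); cart.add("mug")
    assert cart.items == {"mug": 1}

def test_remove():
    cart = Cart(); cart.add("mug"); cart.remove("mug")
    assert cart.items == {}

def test_update_qty():
    cart = Cart(); cart.add("mug"); cart.update_qty("mug", 3)
    assert cart.items == {"mug": 3}

tests = [test_add, test_remove, test_update_qty]
random.shuffle(tests)  # order must not matter: each test sets up its own cart
for t in tests:
    t()
```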

Write positive and negative test cases

Be sure to write both positive test and negative test cases. A positive test case verifies the behavior of the AUT when a user enters valid data. A negative test case verifies the response of the AUT to invalid data. For example, the cloud-based shopping application might have the following test cases:


  • Positive test case: enter a valid credit card in the payment field. The test succeeds if the AUT accepts a valid payment method.
  • Negative test case: enter an invalid credit card in the payment field. The test succeeds if the AUT gives the specified error message.
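A simple way to sketch both cases in code is with a checksum validator. The validator below uses the standard Luhn algorithm for card numbers; treating Luhn validity as "accepted by the AUT" is an assumption standing in for a real payment gateway:

```python
# Positive/negative test sketch for a payment field, using the standard
# Luhn checksum as a stand-in for real payment validation.

def luhn_valid(number):
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Positive test case: a well-formed number is accepted.
assert luhn_valid("4111 1111 1111 1111") is True
# Negative test case: a mistyped number is rejected.
assert luhn_valid("4111 1111 1111 1112") is False
```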

Use testing heuristics

When creating data for test cases, it is useful to draw on testing heuristics. For example, create test data for the maximum and minimum values in a data field. Or, when testing queries against a database, have tests for a query that returns zero rows, one row, or multiple rows. For more examples of testing heuristics, see the Test Heuristics Cheat Sheet by agile testing expert Elisabeth Hendrickson.
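The maximum/minimum heuristic mentioned above is often formalized as boundary-value analysis: test just below, at, and just above each limit. The field range below is an illustrative assumption:

```python
# Boundary-value heuristic sketch: generate the classic test points
# around each limit of a field's valid range.

def boundary_values(lo, hi):
    """Return values just below, at, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# A hypothetical quantity field that accepts 1..99:
values = boundary_values(1, 99)
assert values == [0, 1, 2, 98, 99, 100]
```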

How to write GUI test scripts

A test script provides a clearly-defined procedure for a tester to follow. The test script may include information such as the following:

  • Test Script ID: a unique identifier for the test script.
  • Title: the title of the test script, such as “User enters a valid username and password, maximum length.”
  • Test Case ID: a link to the unique ID for the test case.
  • Test Setup: the requirements for the test environment; could be stored separately in a test data spreadsheet.
  • Data: either the literal values for the tester to enter or a link to an external spreadsheet or database containing combinations of usernames and passwords to test.
  • Procedure: the step-by-step instructions for the tester, such as the following example:
    1. Start the AUT. The log on screen appears.
    2. Click on the username field.
    3. Enter the first username from the spreadsheet.
    4. Enter the password from the spreadsheet.
    5. Click on the Log On button.
  • Actual Result: completed by the tester after testing.
  • Status: the pass/fail/blocked status of the test script.
  • Defect Cross-Reference: if a defect is found, enter the code from the defect tracking system here, to connect the test script with the defect.

Create sufficient test scripts to verify the most common paths that users will take through the AUT.

If you would like to see how test automation can help you create and execute GUI test cases, and report the results of GUI testing, download a trial version of the Ranorex test automation solution. Or, contact one of our test automation experts at [email protected].

Association for Software Testing: https://associationforsoftwaretesting.org

International Software Testing Qualifications Board: http://www.istqb.org
