Do-It-Yourself, Low-Cost Pop-Up Usability Labs for Learning Experience Designers

DOI:10.59668/515.12895
Keywords: Learning Experience Design, Usability Testing, Portable Usability Labs, Human-centered Learning Design, Usability Research Methodology
This article introduces the concept of pop-up usability labs as a practical and cost-effective solution for conducting usability testing in real-world learning experience design (LXD) contexts. Various configurations of pop-up usability labs are presented, including budget-friendly, portable, semi-permanent, and mobile setups, along with recommendations for hardware and software requirements. The importance of selecting appropriate software packages for usability data analysis is emphasized, with suggestions for low-cost, no-cost, and open-source options for quantitative and qualitative data analyses. Pop-up usability labs offer numerous advantages, such as accommodating limited funding and reaching participants in rural areas. However, they also present challenges that require LXD professionals to become familiar with the equipment and data analysis methods. Despite these limitations, pop-up usability labs provide a viable, resource-efficient approach for LXD professionals to conduct usability testing in real-world settings, identify and address design flaws related to usability, and embrace more human-centered design practices in the development of educational and learning technology products, systems, or services.

Introduction

Researchers and practitioners in the field of learning design and technology (LDT) have become increasingly aware of the phenomenon known as learning experience design (LXD). LXD is defined as “a human-centric, theoretically-grounded, and socio-culturally sensitive approach to learning design, intended to propel learners towards identified learning goals, and informed by UXD [user experience design] methods” (Schmidt & Huang, 2021, p. 141). A challenge that exists in the field of LXD has to do with measuring learning experiences. To date, learning experience (LX) designers have borrowed many measurement strategies from the field of user experience (UX). Perhaps most prevalent among these borrowed strategies is usability testing (Lu et al., 2022), which, arguably, is the most widely used method for assessing UX (Albert & Tullis, 2022). Usability testing is a robust and effective user-centered evaluation method that has been broadly used across a range of disciplines to uncover design flaws related to ease of use, utility, and accessibility. It is used to evaluate the effectiveness and efficiency of a product, system, or service (for example, an online course, software application, or user interface) by gathering feedback from users, observing their interactions with the product, and analyzing their experience.

Usability testing plays a vital role in LXD as it enables learner-centered design, enhances engagement, identifies design flaws, and informs design decisions (Schmidt, Earnshaw, et al., 2020). Usability in learning environments includes considerations of accessibility, efficiency, and user satisfaction, which can promote learner engagement and motivation (Jahnke et al., 2020; Nora & Snyder, 2008; Schmidt et al., 2022). Specifically, usability testing in LXD includes the following benefits:

Although there are many reasons that usability testing can contribute positively to the design of learning experiences, usability testing historically has received limited attention in the context of learning and educational technologies (Lu et al., 2022). This is not to say that our field does not trial learning environments with actual learners. For example, Tessmer (1993) advocated for pilot testing in which a product, system, or service is put through a trial run before its full deployment. More recently, the relatively low uptake of usability testing in LDT has been attributed to a lack of sophistication with the method and a general misunderstanding of how it is performed (Lu et al., 2022). Furthermore, formal usability testing, using specialized tools and infrastructure as described in Table 1, is often associated with challenges, such as:

Table 1

Specialized Usability Research Tools and Infrastructure

Devices
  • Eye-tracking devices: Track and record eye movements, gaze patterns, and fixation points, providing insights into user attention and visual perception.
  • Biometric sensors: Measure physiological responses such as heart rate, skin conductance, and facial expressions, providing additional data on user emotional states and engagement levels.
  • Mobile testing devices: Devices such as smartphones and tablets for testing mobile applications and responsive designs to ensure usability across different screen sizes and surfaces.
Software
  • Usability testing software: Assists in conducting and recording usability tests, allowing for easy capture of user interactions, screen recordings, and audio/video recordings.
  • Clickstream analysis tools: Capture and analyze user interactions with a website or software, recording mouse movements, clicks, scrolling behavior, and navigation paths.
  • Usability analysis software: Aids in the analysis of collected data, providing visualizations, heatmaps, and metrics to interpret user behavior patterns and identify usability issues.
Research Apparatus
  • Usability testing kits: Typically include tools such as cameras, microphones, tripods, and recording devices, allowing for easy setup and capturing of user testing sessions.
  • Survey and feedback tools: Enable the collection of quantitative and qualitative data from users, including satisfaction ratings, feedback, and suggestions.
Infrastructure
  • Usability testing rooms: Specifically designed testing environments equipped with one-way mirrors, cameras, and microphones to observe and record user behavior without interfering with their experience.
  • Remote usability testing tools: Facilitate conducting usability tests remotely, allowing testers to observe and interact with users remotely, collect data, and share screens for real-time feedback.

Although the challenges with usability research detailed above are valid, alternative approaches exist that can help mitigate some of them. For example, so-called “quick-and-dirty” usability testing has long been recognized as a low-cost, low-resource approach to usability testing. It can produce useful results for making pragmatic improvements to technology systems both in industry (e.g., Brooke, 1996; Krug, 2009) and in learning design (e.g., Mayes & Fowler, 1999; Reeves et al., 2002). This form of usability testing is often performed in informal spaces using less sophisticated technologies that allow for evaluating the usability of a product, system, or service in a spontaneous and low-cost manner. It typically involves recruiting participants on the spot, in a public setting, and asking them to perform specific tasks with the product, system, or service while observing and recording their behavior and feedback. It is often used as a quick and effective way to gather insights about a product, system, or service’s usability, identify potential issues, and inform design decisions (Krug, 2009).

Researchers and practitioners in LDT can conduct usability testing in their own evaluation contexts, even if they do not have access to sophisticated, high-cost usability laboratories and equipment, such as psychophysiological measurement tools (see Table 1). Indeed, usability research that is conducted with humble resources is well established in industry and, in some cases, is better suited to the evaluation context and able to provide authentic perspectives that simply are not possible in laboratory settings.

The purpose of this article is to provide guidance and lessons learned for LDT researchers to create their own no-cost or low-cost usability research labs. To this end, we propose the notion of “pop-up” usability labs. A pop-up usability lab is a solution for conducting usability testing that is usually limited to a relatively short period of time. In contrast to formal usability labs, pop-up labs are usually low- or zero-cost, require little space, can be supported by one or two trained personnel, and are often highly portable. These pop-up usability labs can allow for conducting educational and learning technology usability research in low resource or impromptu contexts.

Usability Research Methodology

Before presenting the pop-up usability lab configurations, we first provide a short description of usability research methodology, specifically focusing on observations, the think-aloud method, eye tracking, and frequently used usability questionnaires (i.e., the Computer System Usability Questionnaire and the System Usability Scale). These methods can be used together or separately, depending on the specific research goals and context. They are not mutually exclusive and can complement one another to provide a comprehensive understanding of learner experiences and usability issues. While other methods are commonly used to evaluate usability (e.g., heuristic evaluation), the methods included here are intentionally selected because they are most appropriate for use with pop-up usability labs.

Observation

Observation is a fundamental technique employed in usability testing to gather data about users’ interactions, behaviors, and experiences while using a product, system, or service (Albert & Tullis, 2022). In LXD, this technique involves carefully observing and documenting learner actions, verbalizations, and non-verbal cues during the testing session. During the session, researchers usually observe participants as they perform predetermined tasks or scenarios with the product, system, or service. This method allows researchers to directly witness how learners navigate interfaces, make decisions, and encounter any challenges or difficulties. It also provides valuable insights into learner behaviors and subjective experiences, which are essential for evaluating the usability and effectiveness of a design.

Observation often involves a combination of techniques. For example, researchers may use various tools, such as video recording, screen recording, and note-taking, to capture and document users’ actions and behaviors. This allows for later analysis. Researchers should pay close attention to the sequence of steps taken, the time required to complete tasks, the strategies employed, and any errors or points of confusion encountered. Researchers may also note verbalizations, such as participants’ comments, questions, and feedback during the testing session. These insights provide valuable qualitative data, revealing user perceptions, preferences, frustrations, and suggestions for improvement.

Think-Aloud Method

The think-aloud method is frequently used in conjunction with observation, meaning that similar tools are employed (e.g., video cameras, screen recording software). Think aloud is perhaps the most widely-used usability evaluation technique in which a participant verbally expresses their thoughts while interacting with a product, system, or service. Jakob Nielsen, a leading expert in the field, has called it “the single most valuable usability engineering method” (1993, p. 195). The think-aloud method is typically used during the functional prototyping phase, where a single participant is tested at a time. The functional prototyping phase occurs during design and development, in which a high-fidelity prototype is created to test and validate the functionality and performance of a product, system, or service. The prototype is then tested rigorously to identify any design flaws or usability issues and to gather feedback from learners. The participant narrates their thoughts, actions, and emotions verbally while using the prototype or fully functional educational or learning technology artifact (e.g., online course, learning module, interactive activity). For further information on functional prototyping, the reader is referred to Schmidt, Earnshaw, et al. (2020).

Think-aloud user testing is considered a valuable method for capturing user/learner feedback on a user interface; however, it can be unnatural for some participants, so the researcher should encourage participants to continue verbalizing throughout a session. An alternative approach is the retrospective think-aloud, in which participants review the recorded testing session and speak to the researcher about their thoughts during the process. It is important to conduct this as soon after the testing session as possible.

Think-aloud testing is widely used in practice and increasingly gaining acceptance in LDT (cf. Reeves & Hedberg, 2003; Gregg et al., 2020; Lu et al., 2022; Schmidt & Tawfik, 2022) as it allows evaluating new and advanced learning technologies with a relatively small number of participants. Most experts suggest that a small number of participants is sufficient for usability testing, with as few as five being sufficient for prototype testing (Nielsen, 2000). However, in the field of LDT, Schmidt and colleagues (in press a) state, “Given the limited resources provided to learning designers, think-aloud user testing is particularly attractive because it can be conducted with relatively small numbers of participants (5-12 users depending on the complexity of the system) and with open source or free-to-use tools.”

A full discussion of think-aloud methods is beyond the scope of the current paper. However, the U.S. government’s online resources for usability (U.S. Department of Health & Human Services, 2016) provide a high-quality primer on how to conduct think-aloud user testing. Further, Andrea Gregg and her colleagues (2022) provide a webinar on think-aloud methods in LDT, which draws from a book chapter on the same topic (Gregg et al., 2020).

Eye Tracking

For quick-and-dirty usability testing, the inclusion of rudimentary eye-tracking capabilities can greatly enhance the research process. An example of one such device is the Tobii Eye X, discussed in the Portable Pop-Up Usability Lab section below. Eye-tracking technology provides valuable insights into user attention and visual perception, allowing researchers to optimize user experience and understand user interactions. While this approach may not be suitable for those new to usability methodology, integrating eye tracking into usability testing enables the collection of objective data on user gaze patterns, fixation points, and attention focus. Eye tracking can be used in conjunction with the observation and think-aloud methods. Recording gaze patterns via eye tracking can allow researchers to gain deeper insights into learners’ cognitive processes and decision-making while the learners verbalize their thoughts. Analysis of eye-tracking data in conjunction with verbal feedback can allow researchers to connect visual attention patterns and verbalized perceptions or difficulties. This can provide a more comprehensive understanding of user behavior and aids in the interpretation of qualitative data (Conley et al., 2020).
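As an illustration of the kind of post-hoc analysis raw gaze data affords, the sketch below applies a simple dispersion-threshold approach to identifying fixations from a stream of gaze samples. It is a minimal sketch only: the sample format, coordinate units, thresholds, and example data are hypothetical, and real analyses should use parameters appropriate to the specific eye tracker and study design.

```python
# Minimal dispersion-threshold fixation detection sketch.
# Sample format, units, and thresholds are hypothetical.

def detect_fixations(samples, max_dispersion=0.05, min_duration=0.100):
    """samples: list of (timestamp_s, x, y) gaze points in normalized screen coordinates.
    Returns a list of (start_s, end_s, mean_x, mean_y) fixations."""
    fixations = []
    window = []
    for point in samples:
        window.append(point)
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion:
            # The window is no longer a fixation; emit it if it lasted long enough.
            candidate, window = window[:-1], [point]
            if candidate and candidate[-1][0] - candidate[0][0] >= min_duration:
                fixations.append((
                    candidate[0][0],
                    candidate[-1][0],
                    sum(p[1] for p in candidate) / len(candidate),
                    sum(p[2] for p in candidate) / len(candidate),
                ))
    # Handle a fixation that extends to the end of the recording.
    if window and window[-1][0] - window[0][0] >= min_duration:
        fixations.append((
            window[0][0], window[-1][0],
            sum(p[1] for p in window) / len(window),
            sum(p[2] for p in window) / len(window),
        ))
    return fixations

# Example: three tightly clustered samples followed by a large jump (a saccade).
gaze = [(0.00, 0.40, 0.40), (0.05, 0.41, 0.40), (0.15, 0.40, 0.41),
        (0.20, 0.90, 0.10)]
print(detect_fixations(gaze))  # one fixation from 0.00 s to 0.15 s
```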

Commonly Used Quantitative Usability Instruments

A range of validated and reliable instruments has been developed for assessing the usability of a product, system, or service. For an overview, the reader is referred to Lewis (2018a). In the sections below, we present two of the most prevalent instruments, the System Usability Scale (SUS) and the Computer System Usability Questionnaire (CSUQ).

System Usability Scale

The SUS, developed by Brooke (1996) while working at Digital Equipment Corporation, is perhaps the most widely used tool for measuring the usability of a product, system, or service. It consists of a 10-item questionnaire that users complete, rating their agreement on a 5-point Likert scale (strongly disagree to strongly agree) with the statements provided in Table 2. Item scores are adjusted (odd-numbered items are scored as the response minus 1; even-numbered items as 5 minus the response), summed, and multiplied by 2.5 to produce a single overall score between 0 and 100, which can be used to compare the usability of different products or versions of the same product, with scores greater than 68 indicating above-average usability (a minimal scoring sketch, using hypothetical ratings, follows Table 2). The SUS has been found to be a reliable and valid measure of usability (Lewis, 2018b; Peres et al., 2013).

Table 2

System Usability Scale Items

1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able to use this system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with this system.
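
The following sketch illustrates the standard SUS scoring described above: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is multiplied by 2.5 to yield a 0-100 score. The participant ratings in the example are hypothetical.

```python
# Minimal SUS scoring sketch (Brooke, 1996). The ratings below are hypothetical.

def sus_score(responses):
    """responses: list of ten ratings (1-5) for SUS items 1-10, in order."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for item_number, rating in enumerate(responses, start=1):
        if item_number % 2 == 1:   # positively worded (odd-numbered) items
            total += rating - 1
        else:                      # negatively worded (even-numbered) items
            total += 5 - rating
    return total * 2.5

# One hypothetical participant's ratings for items 1-10.
participant = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]
score = sus_score(participant)
print(score)        # 85.0
print(score > 68)   # above the commonly cited average of 68 -> True
```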


Computer System Usability Questionnaire

The Computer System Usability Questionnaire (CSUQ) is a widely used instrument for evaluating the usability of computer systems. Developed in 1995 by James Lewis at IBM, the CSUQ is available in the public domain for researchers to use. One of its key advantages is that it is a post-test questionnaire, which can help to obtain a broader view of the perceived usability of the tested system. The instrument has established internal consistency, validity, and reliability, and assesses overall usability, system usefulness, information quality, and interface quality using a 7-point Likert scale. The CSUQ consists of 19 positively worded statements. Researchers may be particularly interested in the third version of the CSUQ because it has been psychometrically analyzed for factor structure; the reported structure consists of four factors: overall (items 1-16), system usefulness (items 1-6), information quality (items 7-12), and interface quality (items 13-15) (Sauro & Lewis, 2016).

The CSUQ v. 3 items are provided below in Table 3. In addition, readers can use an online, form-fillable version of the CSUQ, available at Gary Perlman’s website. This latter resource is of particular value for pop-up usability labs, as usability evaluators do not need to set up any data collection system, but instead can simply use this pre-existing resource. A minimal subscale-scoring sketch, using hypothetical data, follows Table 3.

Table 3

Computer System Usability Questionnaire Items

1. Overall, I am satisfied with how easy it is to use this system.
2. It was simple to use this system.
3. I could effectively complete the tasks and scenarios using this system.
4. I was able to complete the tasks and scenarios quickly using this system.
5. I was able to efficiently complete the tasks and scenarios using this system.
6. I felt comfortable using this system.
7. It was easy to learn to use this system.
8. I believe I could become productive quickly using this system.
9. The system gave error messages that clearly told me how to fix problems.
10. Whenever I made a mistake using the system, I could recover easily and quickly.
11. The information (such as online help, on-screen messages, and other documentation) provided with this system was clear.
12. It was easy to find the information I needed.
13. The information provided for the system was easy to understand.
14. The information was effective in helping me complete the tasks and scenarios.
15. The organization of information on the system screens was clear.
16. The interface of this system was pleasant.
17. I liked using the interface of this system.
18. This system has all the functions and capabilities I expect it to have.
19. Overall, I am satisfied with this system.
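
For the CSUQ, a common approach is to score each factor as the mean of its items. The sketch below follows the item ranges reported above (overall: items 1-16; system usefulness: 1-6; information quality: 7-12; interface quality: 13-15); the ratings are hypothetical, and researchers should confirm the scoring conventions of the specific CSUQ version they administer.

```python
# Minimal CSUQ subscale-scoring sketch: each factor is the mean of its items.
# Item ranges follow the factor structure described above; ratings are hypothetical.

FACTORS = {
    "overall": range(1, 17),
    "system_usefulness": range(1, 7),
    "information_quality": range(7, 13),
    "interface_quality": range(13, 16),
}

def csuq_subscales(responses):
    """responses: dict mapping item number (1-19) to a 7-point rating."""
    scores = {}
    for factor, items in FACTORS.items():
        ratings = [responses[i] for i in items if i in responses]
        scores[factor] = sum(ratings) / len(ratings)
    return scores

# Hypothetical participant who rated every item a 6.
participant = {item: 6 for item in range(1, 20)}
print(csuq_subscales(participant))
# {'overall': 6.0, 'system_usefulness': 6.0, 'information_quality': 6.0, 'interface_quality': 6.0}
```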


The SUS and CSUQ are validated measures; however, they only measure technological usability. LDT researchers have noted that technological usability alone is insufficient in that it does not fully account for considerations of learning (Jahnke et al., 2020; Mayes & Fowler, 1999). While it may be tempting to adapt these measures to enhance their focus on learning, researchers are advised that this would be methodologically unsound, as this would influence the reliability and validity of the instruments. There is a need in LDT for usability measures that are specifically designed for learning technologies (Lu et al., 2022).

Pop-Up Usability Labs

As we have explained in the sections above, usability testing is an essential part of designing effective, efficient, and appealing educational and learning technologies, but it can be challenging to conduct in real-world contexts due to factors such as limited funding, outdated hardware, and lack of dedicated space. In addition, performing usability tests in controlled lab settings is inauthentic, suggesting that usability testing in the field is needed, a context for which pop-up usability labs are well suited.

In the following sections, we provide four hardware and software configurations that can support usability testing across a range of real-world educational and learning technology evaluation contexts. First, we describe a “budget-friendly” configuration, which requires nothing more than a laptop with a web camera and zero-cost software (e.g., Open Source, freemium). Second, we detail a portable configuration that extends the budget-friendly configuration with low-cost peripherals and more sophisticated analysis software. Third, we outline a semi-permanent configuration that is designed to last for the duration of a given project using low- and no-cost data collection and analysis software. Last, we explain how LDT professionals can create their own mobile learning usability rig. The first three configurations are shown in Table 4.

Table 4

Profiles of Pop-up Usability Labs

Budget-Friendly Pop-Up Usability Lab
  • Wanting to trial method before investing
  • No funding
  • Limited access to hardware/outdated hardware
  • No dedicated space
  • Urgent turnaround needs

Portable Pop-Up Usability Lab
  • Single researcher
  • Brings pop-up usability lab materials to the site where users are located
  • Some costs involved
  • Often multiple sites
  • Limited time at site
  • Ideal for school-based or home-based research
  • Can provide way to include rural participants

Semi-Permanent Pop-Up Usability Lab
  • Single and multiple researchers
  • Shared or limited space
  • Limited time
  • Limited resources
  • More sophisticated usability research needs
  • Need for observation using “virtual two-way mirror”

Hardware and Software Configurations for Pop-Up Usability Labs

This section provides detailed information on different configurations for pop-up usability labs. It begins with the budget-friendly pop-up usability lab configuration, which can be created with a standard laptop or desktop equipped with a webcam. Next, it describes the portable pop-up usability lab configuration, which is useful when a researcher needs to collect data on-site. It then discusses the semi-permanent pop-up usability lab configuration, which is ideal for situations where space is available but only for a limited time. Finally, it presents the mobile learning usability rig, which is designed to evaluate the usability of mobile learning apps.

Budget-Friendly Pop-Up Usability Lab

A budget-friendly pop-up usability lab can be created with nothing more than a standard laptop or desktop computer that is equipped with a webcam. The key consideration with the budget-friendly system is keeping costs at a near-zero price point. It can be situated in a static setting (i.e., the hardware remains in a single location), or it can be portable. Most modern laptops (i.e., those released in the past five years) come equipped with a webcam. However, because camera resolution is not a major concern, low-end, low-cost USB webcams can be used if the computer does not have a webcam or its webcam is not operational. According to Krug (2009), it is very important to use an external mouse for usability data collection, as trackpads can be difficult to use (Figure 1). When working with participants with disabilities, it is also important to accommodate any assistive technologies, such as screen readers, switch input, and screen magnifiers. For a budget-friendly pop-up usability lab, nearly any computer produced within the last five to seven years will work, including Macintosh, Windows, and even Chromebook machines (with some caveats). This means that donated, repurposed, and refurbished computers can all be allocated for this task.

Figure 1

A Computer Equipped With a Webcam and External Mouse is Sufficient for the Budget-Friendly Pop-Up Usability Lab Configuration

A woman sitting at a laptop computer that is equipped with an external mouse


In addition to the product, system, or service that is the focus of the evaluation, researchers using a budget-friendly pop-up usability lab need screen recording software. Freely available options include built-in software, such as QuickTime Player on macOS, and downloadable tools, such as the Zoom video conferencing software (note that the free version of Zoom has limitations). We provide recommendations for high-quality software in Table 5. If the researcher is including the SUS or CSUQ questionnaires as part of their usability study, software for delivering digital forms is recommended. For example, a freely available digital version of the CSUQ is available at Gary Perlman’s website (see the Computer System Usability Questionnaire section above). Researchers can also use free-of-cost form software such as Google Forms to create their own digital forms, although Google tools may not be available to all. A benefit of the latter approach is that data are automatically entered into a spreadsheet when a participant submits their responses.

Table 5

High Quality, No-Cost Screen Recording Software

OBS Studio (Windows/macOS)
  • URL: https://obsproject.com/
  • Application type: Installable package
  • License: Open source (GNU General Public License v2.0)
  • Description: Powerful, highly configurable, cross-platform screen recording and streaming software that can run on low-end hardware.
  • Caveats: Has a challenging learning curve, but tutorials and how-to guides are readily available. On macOS, additional plug-ins may be required to capture system sounds (microphone input is recorded without additional plug-ins).

Screencastify (ChromeOS)
  • URL: https://www.screencastify.com/
  • Application type: Chrome browser extension
  • License: Proprietary/freemium
  • Description: Simple, user-friendly, cloud-based screen recording solution that can run on Chromebooks.
  • Caveats: The free version is limited to 30 minutes of recording, and video export options are limited in the free version.

Portable Pop-Up Usability Lab

With a portable pop-up usability lab, the key intent is portability. This can be particularly useful for when a researcher must be on-site in order to collect data, such as when working in K-12 or industry contexts, in situations where home visits are part of the research, or in rural settings that are geographically distant from a research site. However, there is often no space to house equipment in such contexts, meaning that the researcher must not only bring the equipment to the site, but also remove it when data collection is finished. Given that data is often collected by a single researcher in this type of scenario, it is critical that the entire pop-up usability lab kit be lightweight and highly portable. Therefore, portable storage is an important factor in this pop-up usability lab configuration, as is the use of mobile technologies such as laptops or tablets.

Although a budget-friendly pop-up usability lab can be used as a portable pop-up usability lab, the assumption with the budget-friendly configuration is that there is no funding or support. With a portable pop-up usability lab, the assumption is that portability is the primary requirement, and that there is some provision of funding for equipping hardware and software. Therefore, researchers interested in a portable pop-up usability lab should consider higher-end, more recent, and ultra-portable laptops such as the MacBook Air or the Lenovo ThinkPad X series, as they are both powerful and lightweight.

A key consideration with a portable pop-up usability lab is storage for the various components that are required. Higher-end, commercial solutions might use hard, wheeled cases equipped with interior cut-outs to hold the hardware. For a do-it-yourself, portable pop-up usability lab, a small, rolling suitcase, such as those commonly used as carry-on bags for air travel, provides an ideal, low-cost solution. These provide ample storage for a laptop, mouse, and cables, and can even accommodate additional peripherals such as a small, low-cost eye tracker, high-quality wireless microphones, or game controllers (if conducting playtesting of educational games). In lieu of dedicated cut-outs in a hard case, researchers can use the commercial cardboard packaging for the various hardware components, which typically provides ample protection. Indeed, the first author of this paper has used this approach to transport usability hardware as checked baggage on airlines (Figure 2).

Figure 2

Portable Pop-Up Usability Lab With Laptop, Mouse, and Tablet (Left) and With Laptop and VR Headset (Right)

Pictures of a suitcase containing computer hardware


The portable pop-up usability lab can benefit from the inclusion of a small, rudimentary eye tracker. One low-cost option for rudimentary eye tracking is the Tobii Eye X, available for approximately US$259. Older, used versions can be found for less on platforms like eBay or Facebook Marketplace. It is important to note that the Tobii Eye X is only compatible with Windows-based computers, limiting its use for Mac users. While the Tobii Eye X provides an on-screen indicator of user gaze, it lacks analysis software and does not generate detailed data for fixation and saccade analysis. More sophisticated eye-tracking systems are available for portable labs, but they often require additional hardware and configuration, making setup and breakdown more complex and time-consuming.

A key limitation of the portable pop-up usability lab is that it must be set up and broken down each time it is used. This introduces the possibility of human error, which can lead to data corruption and data loss. For example, simply missing a single configuration option while setting up could lead to audio not being captured in screen recordings, thereby rendering them useless. Failing to pack a cable or dongle can lead to a study not being able to be performed. Therefore, strict quality control protocols and checklists are paramount when using this type of pop-up usability lab (see Figure 3).

Figure 3

Example Setup Checklist for a Portable Pop-Up Usability Lab Used for Mobile App Usability Testing

Setup Checklist
  • All equipment present?
    • Clipboard
    • iPad Mini
    • Power cable
    • USB hub
    • Ziggi document camera
    • Logitech webcam
    • Mac laptop
  • Mac laptop booted?
  • OBS Studio launched?
  • OBS Studio configured correctly?
  • OBS Studio tested?
  • Connected to a guest wireless network?
  • Network connection tested?
  • Prototype loaded and showing launch screen?
  • OBS Studio recording started?

Semi-Permanent Pop-Up Usability Lab

A semi-permanent pop-up usability lab is a useful configuration when one or more researchers have available space, but only for a limited time (e.g., a one-month project), or are using a shared space on an irregular basis (e.g., once a week for an afternoon). In this situation, it is possible to create a semi-permanent pop-up usability lab that can facilitate regular data collection sessions without the challenges of having to set up and break down equipment, as with the portable pop-up usability lab.

A room that is ideal for a semi-permanent pop-up usability lab is one that:

The semi-permanent configuration differs from the budget-friendly and portable pop-up usability lab configurations in one particularly notable way. The same hardware and software from those configurations can be used in the semi-permanent configuration; however, because this configuration has a more permanent space, researchers have the opportunity to create a more sophisticated usability research setup. For example, more powerful desktop (rather than laptop) computers can be used because they do not have to be transported for each session, which can be useful for conducting virtual reality and digital games-based research. Eye tracking using more robust systems is also possible because trackers do not need to be set up and configured every time they are used. Researchers working in higher education contexts might reach out to colleagues in other colleges or departments to ask whether such equipment is available on loan. In this way, a semi-permanent pop-up usability lab can be equipped with relatively sophisticated data collection apparatus for very little cost.

In addition to this, researchers can set up a “virtual two-way mirror” using the semi-permanent pop-up usability lab configuration. This allows observers in another room or location to observe usability studies while they are happening. A virtual two-way mirror can be achieved simply by mounting a smartphone on a tripod near the participant and streaming the usability session to observers using web conferencing software such as Zoom or Microsoft Teams. This can allow the facilitator to focus on conducting the usability study with the participant, while the observers assist with problem identification and recording of field notes.

Mobile Learning Usability Rig

The pop-up usability labs detailed above are all based on traditional laptop and desktop configurations. However, given the prevalence of smartphones and tablets (particularly in mobile learning contexts), researchers may be interested in how to conduct usability research on mobile devices, for example, to evaluate the usability of mobile learning apps. Collecting usability data on mobile devices requires different configurations than traditional computing surfaces. For example, while most smartphones are capable of screen recording, the built-in tools are rudimentary and do not allow for embedding of front-facing camera video. Smartphones are also controlled using a touchscreen, not a mouse. Therefore, it is not possible to observe mouse movements (or, in this case, finger movements, gestures, and taps) using screen recording apps. To overcome these challenges, we designed a low-cost mobile learning usability rig (see Figure 4).

Figure 4

Hardware and Configuration Used for Low-Cost Mobile Usability Rig

Photograph of hardware used for creating a mobile usability testing platform

This hardware configuration uses a USB document camera to record the screen of the mobile device, which allows for capturing finger movements and gestures. It also uses a webcam to record the user. These devices are connected to a USB hub, which is connected to a laptop running OBS Studio (Figure 5).

Figure 5

Output From Mobile Usability Rig, as Recorded Using OBS Studio

Picture depicting the output produced by the mobile usability rig, including an image of a mobile interface superimposed with an image of a user.

The hardware sits on top of a clipboard. All hardware, including the mobile device itself, is affixed to the clipboard using hook-and-loop strips. Video output from this hardware configuration is of high quality and overcomes the challenges of not being able to observe finger movements and gestures or record the user when using mobile screen recording solutions.

Considerations for Pop-Up Usability Lab Data Analysis

The analysis of usability data requires appropriate software packages, which should be selected based on the type(s) of data being analyzed. Usability data is often multi-modal, including webcam videos, screen recordings, audio files, transcriptions, and quantitative data from usability instruments.

For quantitative data, spreadsheet software is generally sufficient for generating descriptive statistics, charts, and data summaries. No-cost options such as Google Sheets and open-source options such as OpenOffice can run on low-end hardware.

For qualitative data, analysis methods vary, but for identifying problems with ease of use, it is common to review videos and mark where usability errors occur. A simple way to do this is to play the video of the usability session using built-in video playback software (e.g., QuickTime Player, VLC) and note the usability errors identified in a spreadsheet, along with timestamps that indicate when the identified errors start and end. The identified usability problems can then be reviewed by the design team and prioritized based on their severity. To this end, readers are referred to Nielsen (1994), who proposes a five-point severity scale ranging from 0 (not a usability issue) to 4 (catastrophic). Another option is to use dedicated video analysis software, which can be costly. Alternatively, the open-source ELAN linguistic annotator is a no-cost solution for annotating video-based data with many built-in analysis features; however, this software is complex and has a substantial learning curve.

Ultimately, a range of low- and no-cost solutions exists for evaluating both quantitative and qualitative usability data; however, the extent of data analysis afforded by these tools is limited. For more sophisticated analyses, dedicated, proprietary software packages are usually necessary.
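As a minimal illustration of the spreadsheet-based approach described above, the sketch below tallies logged usability problems by Nielsen's (1994) severity ratings so that a design team can prioritize the most severe issues first. The file name, column layout, and example call are hypothetical; any spreadsheet exported to CSV with a severity column could be summarized this way.

```python
# Minimal sketch: summarize a usability-error log (exported as CSV) by severity.
# Column names and the file name are hypothetical; severity codes follow
# Nielsen's (1994) 0-4 scale mentioned above.
import csv
from collections import Counter

SEVERITY_LABELS = {
    0: "not a usability issue",
    1: "cosmetic",
    2: "minor",
    3: "major",
    4: "catastrophic",
}

def summarize_errors(csv_path):
    """Expects columns: participant, task, start_time, end_time, description, severity."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[int(row["severity"])] += 1
    # Report the most severe problems first so the design team can prioritize.
    for severity in sorted(counts, reverse=True):
        label = SEVERITY_LABELS.get(severity, "unknown")
        print(f"Severity {severity} ({label}): {counts[severity]} problem(s)")

# Hypothetical usage, assuming the log was exported from a spreadsheet:
# summarize_errors("usability_error_log.csv")
```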

Implications

Pop-up usability labs can serve as a means of providing people with tools to evaluate educational programs and technology and of bringing the benefits of usability testing to LDT evaluation contexts that traditionally may not have had the resources to conduct such learner-centered evaluations. However, a potential misconception that should be addressed is that pop-up usability labs are only useful for evaluating technology-based products, systems, and services. This is incorrect: they can also be used to evaluate more traditional, non-technology educational products. Indeed, usability testing can focus on the clarity of instructions, the organization of content, the accessibility of resources, and the overall learner experience of taking a course.

It is necessary to address the limitations and contextual factors that influence the adoption of usability testing in educational contexts. While usability testing is widely recognized as a valuable approach in product design and user experience research, its application in the field of LXD may vary. Awareness of and familiarity with usability testing among practitioners in educational settings may not be universal, and various factors, including limited resources, time constraints, and a predominant focus on course design rather than product design, may contribute to its limited uptake. Additionally, the low prevalence of pilot testing, which shares similarities with usability testing, further suggests that there are challenges beyond cost that impede the widespread implementation of these methodologies in educational contexts. It is therefore crucial to acknowledge that a range of factors can limit adoption, and we maintain that there is a need for further research in this area. Nevertheless, guidance such as that provided in this article can offer researchers and practitioners in the field of LDT opportunities to conduct usability testing in their own evaluation contexts and/or in situ (i.e., where learners will likely use the product, system, or service), identify and remedy design flaws related to usability, and adopt more human-centered design and development approaches for educational and learning technology products, systems, and services.

Of course, as with any research effort, considerations of data privacy and ethics are critical when using pop-up usability labs. Because this approach involves the collection and storage of personally identifiable data (e.g., webcam recordings, screen recordings, and survey responses), researchers must consider how these data are handled and utilized. To ensure protection of human subjects, it is essential to have clear protocols in place for obtaining informed consent from participants, protecting their privacy, and securely managing the collected data. Researchers must be transparent regarding data usage, outlining how data will be anonymized, stored, and accessed. Additionally, obtaining appropriate ethical clearance or approval from relevant review boards (e.g., an institutional review board) ensures that studies align with ethical guidelines and safeguard participant rights and confidentiality.

While there are a range of benefits associated with using pop-up usability labs, such as accommodating limited funding, outdated hardware, lack of dedicated space, urgent turnaround needs, and the need to reach more rural participants, the approach does suffer from some pitfalls. First, we refer in this article only to moderated, in-person usability studies. Pop-up usability labs, as described here, do not consider remotely moderated studies (although we expect that they could easily be adapted for remote moderation). Next, the hardware and software configurations recommended here require that LXD professionals spend time becoming familiar with operating the equipment, understanding file formats and data outputs, and learning how these various components interoperate. This is not the case with off-the-shelf, commercial solutions, which typically provide more seamless integration. Finally, pop-up usability labs may not provide the same level of control and experimental rigor as more formal usability labs. As we have discussed, pop-up usability labs are designed to be practical and cost-effective solutions for conducting usability testing in real-world contexts, and they may not have all of the specialized equipment or resources of a dedicated usability lab. This may limit the types of usability studies that can be conducted or the level of detail that can be captured in the data. As a result, researchers may need to be more creative and resourceful in designing and conducting studies in these contexts. However, pop-up labs do provide more authenticity because they allow usability testing to be conducted in situ.

Usability testing is a key component of ensuring effective LXD. By raising awareness of the importance of usability testing in LXD and providing guidance on how to conduct it effectively and affordably, LXD professionals have the opportunity to create more impactful and engaging learning experiences that better serve the needs of learners.

References

Albert, B., & Tullis, T. (2022). Measuring the user experience: Collecting, analyzing, and presenting UX metrics (3rd ed.). Elsevier Science.

Brooke, J. (1996). SUS: A “quick and dirty” usability scale. In P. W. Jordan, B. Thomas, B. A. Weerdmeester, & I. L. McClelland (Eds.), Usability evaluation in industry (pp. 189-194). Taylor & Francis.

Carr-Chellman, A., & Savoy, M. (2013). User-design research. In D. Jonassen & M. Driscoll (Eds.), Handbook of research on educational communications and technology (2nd ed., pp. 696-711). Routledge.

Conley, Q., Earnshaw, Y., & McWatters, G. (2020). Examining course layouts in Blackboard: Using eye-tracking to evaluate usability in a learning management system. International Journal of Human-Computer Interaction, 36(4), 373-385. https://doi.org/10.1080/10447318.2019.1644841

Dahleez, K. A., El-Saleh, A. A., Al Alawi, A. M., & Abdel Fattah, F. A. M. (2021). Student learning outcomes and online engagement in time of crisis: The role of e-learning system usability and teacher behavior. The International Journal of Information and Learning Technology, 38(5), 473–492. https://doi.org/10.1108/IJILT-04-2021-0057

Gregg, A., Reid, R., Aldemir, T., Garbrick, A., & Gray, J. (2022). Think-aloud methods: Just-in-time & systematic methods to improve course design [Webinar]. Design and Development Chronicles. https://edtechbooks.org/dd_chronicles/lxd_tao

Gregg, A., Reid, R., Aldemir, T., Gray, J., Frederick, M., & Garbrick, A. (2020). Think-aloud observations to improve online course design: A case example and “how-to” guide. In M. Schmidt, A. A. Tawfik, I. Jahnke, & Y. Earnshaw (Eds.), Learner and user experience research: An introduction for the field of learning design & technology. EdTechBooks. https://edtechbooks.org/ux/15_think_aloud_obser

Jahnke, I., Schmidt, M., Pham, M., & Singh, K. (2020). Sociotechnical-pedagogical usability for designing and evaluating learner experience in technology-enhanced environments. In M. Schmidt, A. A. Tawfik, I. Jahnke, & Y. Earnshaw (Eds.), Learner and user experience research: An introduction for the field of learning design & technology. EdTechBooks. https://edtechbooks.org/ux/sociotechnical_pedagogical_usability

Krug, S. (2009). Rocket surgery made easy: The do-it-yourself guide to finding and fixing usability problems. New Riders.

Lewis, J. R. (2018a). Measuring perceived usability: The CSUQ, SUS, and UMUX. International Journal of Human-Computer Interaction, 34(12), 1148-1156. https://doi.org/10.1080/10447318.2017.1418805

Lewis, J. R. (2018b). The system usability scale: Past, present, and future. International Journal of Human-Computer Interaction, 34(7), 577-590. https://doi.org/10.1080/10447318.2018.1455307

Lu, J., Schmidt, M., Lee, M., & Huang, R. (2022). Usability research in educational technology: A state-of-the-art systematic review. Educational Technology Research and Development, 70, 1951-1992. https://doi.org/10.1007/s11423-022-10152-6

Mayes, J. T., & Fowler, C. J. H. (1999). Learning technology and usability: A framework for understanding courseware. Interacting with Computers, 11(5), 485-497. https://doi.org/10.1016/S0953-5438(98)00065-4

Nielsen, J. (1993). Usability engineering. Morgan Kaufmann.

Nielsen, J. (1994). Heuristic evaluation. In J. Nielsen & R. L. Mack (Eds.), Usability inspection methods (pp. 25–61). John Wiley & Sons. https://www.nngroup.com/books/usability-inspection-methods/

Nielsen, J. (2000, March 18). Why you only need to test with 5 users. Nielsen Norman Group.
https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/

Nora, A., & Snyder, B. P. (2008). Technology and higher education: The impact of e-learning approaches on student academic achievement, perceptions and persistence. Journal of College Student Retention: Research, Theory & Practice, 10(1), 3–19. https://doi.org/10.2190/CS.10.1.b

Peres, S. C., Pham, T., & Phillips, R. (2013). Validation of the system usability scale (SUS): SUS in the wild. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 57(1), 192-196. https://doi.org/10.1177/1541931213571043

Reeves, T. C., Benson, L., Elliott, D., Grant, M., Holschuh, D., Kim, B., Kim, H., Lauber, E., & Loh, S. (2002, June 24-29). Usability and instructional design heuristics for e-learning evaluation. ED-MEDIA 2002 World Conference on Educational Multimedia, Hypermedia & Telecommunications Proceedings, 1653–1659.

Reeves, T. C., & Hedberg, J. C. (2003). Interactive learning systems evaluation. Educational Technology Publications.

Sauro, J., & Lewis, J. R. (2016). Quantifying the user experience: Practical statistics for user research (2nd ed.). Morgan-Kaufmann.

Schmidt, M., Earnshaw, Y., Tawfik, A., & Jahnke, I. (In press a). Evaluation methods for learning experience design. In R. E. West & H. Leary (Eds.), Foundations of learning and instructional design technology (2nd ed.). EdTechBooks. https://edtechbooks.org/foundations_of_learn

Schmidt, M., Earnshaw, Y., Tawfik, A., & Jahnke, I. (2020). Methods of user centered design and evaluation for learning designers. In M. Schmidt, A. A. Tawfik, I. Jahnke, & Y. Earnshaw (Eds.), Learner and user experience research: An introduction for the field of learning design & technology. EdTechBooks.
https://edtechbooks.org/ux/ucd_methods_for_lx

Schmidt, M., & Huang, R. (2021). Defining learning experience design: Voices from the field of learning design & technology. TechTrends, 66, 141-158. https://doi.org/10.1007/s11528-021-00656-y

Schmidt, M., Lu, J., Luo, W., Cheng, L., Lee, M., Huang, R., Weng, Y., Kichler, J. C., Corathers, S. D., Jacobsen, L. M., Albanese-O'Neill, A., Smith, L., Westen, S., Gutierrez-Colina, A. M., Heckaman, L., Wetter, S. E., Driscoll, K. A., & Modi, A. (2022). Learning experience design of an mHealth self-management intervention for adolescents with type 1 diabetes. Educational Technology Research and Development, 70(6), 2171–2209. https://doi.org/10.1007/s11423-022-10160-6

Schmidt, M., Tawfik, A. A., Jahnke, I., Earnshaw, Y., & Huang, R. (2020). Introduction to the edited volume. In M. Schmidt, A. A. Tawfik, I. Jahnke, & Y. Earnshaw (Eds.), Learner and user experience research: An introduction for the field of learning design & technology. EdTechBooks. https://edtechbooks.org/ux/introduction_to_ux_lx_in_lidt

Soloway, E., Guzdial, M., & Hay, K. (1994). Learner-centered design: The challenge for HCI in the 21st century. Interactions, 1(2), 36–48. https://doi.org/10.1145/174809.174813

Tessmer, M. (1993). Planning and conducting formative evaluations: Improving the quality of education and training. Routledge. https://doi.org/10.4324/9780203061978

U.S. Department of Health & Human Services. (2016). Usability testing.
https://www.usability.gov/how-to-and-tools/methods/usability-testing.html

Matthew Schmidt

University of Georgia

Matthew Schmidt, Ph.D., is Associate Professor at the University of Georgia (UGA) in the Department of Workforce Education and Instructional Technology (WEIT). His research interests include design and development of innovative educational courseware and computer software with a particular focus on individuals with disabilities and their families/caregivers, virtual reality and educational gaming, and learning experience design.

Yvonne Earnshaw

Kennesaw State University

Yvonne Earnshaw, PhD, is an Assistant Professor of Instructional Design and Technology in the School of Instructional Technology and Innovation at Kennesaw State University. Dr. Earnshaw has an extensive industry background in technical writing, instructional design, and usability consulting. Her research interests include user/learner experience, online teaching and learning practices in higher education, and workplace preparation.
