usability

Evaluating the Acceptability and Usability of EASEL: A Mobile Application that Supports Guided Reflection for Experiential Learning Activities

Aim/Purpose: To examine the early perceptions (acceptability) and usability of EASEL (Education through Application-Supported Experiential Learning), a mobile platform that delivers reflection prompts and content before, during, and after an experiential learning activity.

Background: Experiential learning is an active learning approach in which students learn by doing and by reflecting on the experience. This approach to teaching is often used in disciplines such as the humanities, business, and medicine. Reflection before, during, and after an experience allows students to analyze what they learn and why it is important, which is vital to helping them understand the relevance of the experience. A just-in-time tool (EASEL) was needed to facilitate this.

Methodology: To inform the development of a mobile application that facilitates real-time guided reflection and to determine the relevant feature set, we conducted a needs analysis with both students and faculty members. Data collected during this stage of the evaluation helped guide the creation of a prototype. The user experience of the prototype and interface interactions were evaluated during the usability phase of the evaluation study.

Contribution: Both the needs analysis and the usability assessment provided justification for continued development of EASEL as well as insight that guides current development.

Findings: The interaction design of EASEL is understandable and usable. Both students and teachers value an application that facilitates real-time guided reflection.

Recommendations for Practitioners: A system such as EASEL can leverage time- and location-based services to support students in field experiences. This technology aligns with evidence that guided reflection provides opportunities for metacognition.

Recommendations for Researchers: Iterative prototyping, testing, and refinement can lead to a deliberate and effective app development process.

Impact on Society: The EASEL platform leverages inherent functionality of mobile devices, such as GPS and persistent network connectivity, to adapt reflection tasks based on location or time. Students using EASEL will engage in guided reflection, which leads to metacognition and can help instructors scaffold learning.

Future Research: We will continue to advance the application through iterative testing and development. When ready, the application will be vetted in larger studies across varied disciplines and contexts.




usability

Development and Validation of an Instrument for Assessing Users’ Views about the Usability of Digital Libraries




usability

Video Learning Object Application System: Beyond the Static Reusability




usability

Usability Issues in Mobile-Wireless Information Systems




usability

Usability and Pedagogical Assessment of an Algorithm Learning Tool: A Case Study for an Introductory Programming Course for High School

An algorithm learning tool was developed for an introductory computer science class in a specialized science and technology high school in Japan. The tool presents lessons and simple visualizations that aim to facilitate the teaching and learning of fundamental algorithms. Written tests and an evaluation questionnaire were administered to the participants alongside the learning tool. The tool's effect on the students' learning performance was examined, and the differences between the two types of visualizations offered by the tool, one with more input and control options and the other with fewer, were analyzed. Based on the evaluation questionnaire, the scales with which the tool can be assessed for usability and pedagogical effectiveness were identified. After using the algorithm learning tool, the students' posttest scores increased, and those who used the visualization with more input and control options scored higher than those who used the one with limited options. The learning objectives used to evaluate the tool correlated with the students' test performance. Properties comprising learning objectives, algorithm visualization characteristics, and interface assessment are proposed for incorporation in evaluating an algorithm learning tool for novice learners.




usability

Does Usability Matter? An Analysis of the Impact of Usability on Technology Acceptance in ERP Settings

Though the field of management information systems, as a sector and a discipline, has produced many guidelines and models, it has been slow to act on the practical implications of interface usability, even though usability can influence end users' attitudes toward, and intention to use, IT. The purpose of this paper was to examine the interface usability of a popular Enterprise Resource Planning (ERP) software system, SAP, and to identify related issues and implications for the Technology Acceptance Model (TAM). A survey was conducted of 112 SAP ERP users from an organization in the heavy metal industry in Bangladesh, and the partial least squares technique was used to analyze the survey data. The findings empirically confirmed that interface usability has a significant impact on users' perceptions of usefulness and ease of use, which ultimately affect attitudes toward and intention to use the ERP software. The research model extends the TAM by incorporating three criteria of interface usability, and it is the first known study to investigate usability criteria as an extension of the TAM.




usability

Learning Pod: A New Paradigm for Reusability of Learning Objects




usability

Towards A Comprehensive Learning Object Metadata: Incorporation of Context to Stipulate Meaningful Learning and Enhance Learning Object Reusability




usability

Expanding the Concept of Usability




usability

Senior Citizens and E-commerce Websites: The Role of Perceived Usefulness, Perceived Ease of Use, and Web Site Usability




usability

A Deliberation Theory-Based Approach to the Management of Usability Guidelines




usability

As COP27 Approaches, Report Recommends New Global Emissions Information Clearinghouse, Steps to Improve Accuracy and Usability of Information

As the U.N. Climate Change Conference (COP27) approaches, a new report recommends steps to improve the accuracy and usability of greenhouse gas emissions information for decision-makers, including creating a global information clearinghouse.




usability

Neurodiverse- and Disability-Inclusive Job Board www.workability.one Launches Updated Website To Improve Usability And Functionality for Job Seekers

85% of college-educated autistic adults are either underemployed or unemployed.




usability

A7: Usability Testing for the WWW

In a follow-up to last year's session, "User testing on a shoestring budget", Emma Tonkin, UKOLN, demonstrates two methods of user testing. One, the cognitive walkthrough, can be carried out by a single evaluator. The second, the think-aloud protocol, is all about observing the way Web visitors interact with your Web site.




usability

Episode 127: Usability with Joachim Machate

This episode is an introduction to user interface design with Joachim Machate of UID. We talk about the importance of user interface design, about its relationship to the overall software engineering process, as well as about UID's process for systematic user interface design.




usability

Episode 428: Matt Lacey on Mobile App Usability

Matt Lacey, author of the book Usability Matters, discusses what mobile app usability is, why it can make or break an app destined for consumers, business users, or in-house users, and what you can do to make the best app possible.




usability

Architect Meet-Up - Part 2 of 3: Mobile Security, Availability, and Usability

The community panel discusses the security, availability, and usability challenges in the evolution of the mobile enterprise, then turns its attention to the evolving role of the software developer.




usability

Testing: User, Usability, and Others You Should Be Using

When it comes to testing your user experiences, there are plenty of methods you can use that will get you the information you need. From interviews to assistive technology testing, these methods offer a more streamlined and beneficial process capable of revealing the insights you need to revolutionize your UX. But how can you know […]





usability

Highly enhanced cationic dye adsorption from water by nitro-functionalized Zn-MOF nano/microparticles and a biomolecular binder for improving the reusability

CrystEngComm, 2024, Advance Article
DOI: 10.1039/D4CE00131A, Paper
Gunasekaran Arunkumar, Mehboobali Pannipara, Abdullah G. Al-Sehemi, Savarimuthu Philip Anthony
Zn-MOF with a nitro-functionalized ligand exhibited enhanced cationic dye adsorption, and binder coating improved the reusability.




usability

Voice Content and Usability

We’ve been having conversations for thousands of years. Whether to convey information, conduct transactions, or simply to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only in the last few millennia have we begun to commit our conversations to writing, and only in the last few decades have we begun to outsource them to the computer, a machine that shows much more affinity for written correspondence than for the slangy vagaries of spoken language.

Computers have trouble because, between spoken and written language, speech is the more primordial. To have successful conversations with us, machines must grapple with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. In the human-to-human scenario, spoken language also has the privilege of face-to-face contact, where we can readily interpret nonverbal social cues.

In contrast, written language immediately concretizes as we commit it to record and retains usages long after they become obsolete in spoken communication (the salutation “To whom it may concern,” for example), generating its own fossil record of outdated terms and phrases. Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand.

Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what. Whether rapid-fire, low-pitched, or high-decibel, whether sarcastic, stilted, or sighing, our spoken language conveys much more than the written word could ever muster. So when it comes to voice interfaces—the machines we conduct spoken conversations with—we face exciting challenges as designers and content strategists.

Voice Interactions

We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too (http://bkaprt.com/vcu36/01-01). Generally, we start up a conversation because:

  • we need something done (such as a transaction),
  • we want to know something (information of some sort), or
  • we are social beings and want someone to talk to (conversation for conversation’s sake).

These three categories—which I call transactional, informational, and prosocial—also characterize essentially every voice interaction: a single conversation from beginning to end that realizes some outcome for the user, starting with the voice interface’s first greeting and ending with the user exiting the interface. Note here that a conversation in our human sense—a chat between people that leads to some result and lasts an arbitrary length of time—could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not necessarily a single voice interaction.

Purely prosocial conversations are more gimmicky than captivating in most voice interfaces, because machines don’t yet have the capacity to really want to know how we’re doing and to do the sort of glad-handing humans crave. There’s also ongoing debate as to whether users actually prefer the sort of organic human conversation that begins with a prosocial voice interaction and shifts seamlessly into other types. In fact, in Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to users’ expectations by mimicking how they interact with other voice interfaces rather than trying too hard to be human—potentially alienating them in the process (http://bkaprt.com/vcu36/01-01).

That leaves two genres of conversations we can have with one another that a voice interface can easily have with us, too: a transactional voice interaction realizing some outcome (“buy iced tea”) and an informational voice interaction teaching us something new (“discuss a musical”).

Transactional voice interactions

Unless you’re tapping buttons on a food delivery app, you’re generally having a conversation—and therefore a voice interaction—when you order a Hawaiian pizza with extra pineapple. Even when we walk up to the counter and place an order, the conversation quickly pivots from an initial smattering of neighborly small talk to the real mission at hand: ordering a pizza (generously topped with pineapple, as it should be).

Alison: Hey, how’s it going?

Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you?

Alison: Can I get a Hawaiian pizza with extra pineapple?

Burhan: Sure, what size?

Alison: Large.

Burhan: Anything else?

Alison: No thanks, that’s it.

Burhan: Something to drink?

Alison: I’ll have a bottle of Coke.

Burhan: You got it. That’ll be $13.55 and about fifteen minutes.

Each progressive disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations have certain key traits: they’re direct, to the point, and economical. They quickly dispense with pleasantries.

Informational voice interactions

Meanwhile, some conversations are primarily about obtaining information. Though Alison might visit Crust Deluxe with the sole purpose of placing an order, she might not actually want to walk out with a pizza at all. She might be just as interested in whether they serve halal or kosher dishes, gluten-free options, or something else. Here, though we again have a prosocial mini-conversation at the beginning to establish politeness, we’re after much more.

Alison: Hey, how’s it going?

Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you?

Alison: Can I ask a few questions?

Burhan: Of course! Go right ahead.

Alison: Do you have any halal options on the menu?

Burhan: Absolutely! We can make any pie halal by request. We also have lots of vegetarian, ovo-lacto, and vegan options. Are you thinking about any other dietary restrictions?

Alison: What about gluten-free pizzas?

Burhan: We can definitely do a gluten-free crust for you, no problem, for both our deep-dish and thin-crust pizzas. Anything else I can answer for you?

Alison: That’s it for now. Good to know. Thanks!

Burhan: Anytime, come back soon!

This is a very different dialogue. Here, the goal is to get a certain set of facts. Informational conversations are investigative quests for the truth—research expeditions to gather data, news, or facts. Voice interactions that are informational might be more long-winded than transactional conversations by necessity. Responses tend to be lengthier, more informative, and carefully communicated so the customer understands the key takeaways.

Voice Interfaces

At their core, voice interfaces employ speech to support users in reaching their goals. But simply because an interface has a voice component doesn’t mean that every user interaction with it is mediated through voice. Because multimodal voice interfaces can lean on visual components like screens as crutches, we’re most concerned in this book with pure voice interfaces, which depend entirely on spoken conversation, lack any visual component whatsoever, and are therefore much more nuanced and challenging to tackle.

Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.

Interactive voice response (IVR) systems

Though written conversational interfaces have been fixtures of computing for many decades, voice interfaces first emerged in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud, as well as speech-enabled in-car systems that gave directions to a user-provided address. With the advent of interactive voice response (IVR) systems, intended as an alternative to overburdened customer service representatives, we became acquainted with the first true voice interfaces that engaged in authentic conversation.

IVR systems allowed organizations to reduce their reliance on call centers but soon became notorious for their clunkiness. Commonplace in the corporate world, these systems were primarily designed as metaphorical switchboards to guide customers to a real phone agent (“Say Reservations to book a flight or check an itinerary”); chances are you will enter a conversation with one when you call an airline or hotel conglomerate. Despite their functional issues and users’ frustration with their inability to speak to an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries (http://bkaprt.com/vcu36/01-02, PDF).

While IVR systems are great for highly repetitive, monotonous conversations that generally don’t veer from a single format, they have a reputation for less scintillating conversation than we’re used to in real life (or even in science fiction).

Screen readers

Parallel to the evolution of IVR systems was the invention of the screen reader, a tool that transcribes visual content into synthesized speech. For Blind or visually impaired website users, it’s the predominant method of interacting with text, multimedia, or form elements. Screen readers represent perhaps the closest equivalent we have today to an out-of-the-box implementation of content delivered through voice.

Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986 (http://bkaprt.com/vcu36/01-03). That same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, later recreated for computers with graphical user interfaces (GUIs) (http://bkaprt.com/vcu36/01-04).

With the rapid growth of the web in the 1990s, the demand for accessible tools for websites exploded. Thanks to the introduction of semantic HTML and especially ARIA roles beginning in 2008, screen readers started facilitating speedy interactions with web pages that ostensibly allow disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, screen readers for the web “provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information,” writes Aaron Gustafson in A List Apart. “At least they do when documents are authored thoughtfully” (http://bkaprt.com/vcu36/01-05).

Though deeply instructive for voice interface designers, there’s one significant problem with screen readers: they’re difficult to use and unremittingly verbose. The visual structures of websites and web navigation don’t translate well to screen readers, sometimes resulting in unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces exacts a cognitive toll.

In Wired, accessibility advocate and voice engineer Chris Maury considers why the screen reader experience is ill-suited to users relying on voice:

From the beginning, I hated the way that Screen Readers work. Why are they designed the way they are? It makes no sense to present information visually and then, and only then, translate that into audio. All of the time and energy that goes into creating the perfect user experience for an app is wasted, or even worse, adversely impacting the experience for blind users. (http://bkaprt.com/vcu36/01-06)

In many cases, well-designed voice interfaces can speed users to their destination better than long-winded screen reader monologues. After all, visual interface users have the benefit of darting around the viewport freely to find information, ignoring areas irrelevant to them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Disabled users who have long had no choice but to employ clunky screen readers may find that voice interfaces, particularly more modern voice assistants, offer a more streamlined experience.

Voice assistants

When we think of voice assistants (the subset of voice interfaces now commonplace in living rooms, smart homes, and offices), many of us immediately picture HAL from 2001: A Space Odyssey or hear Majel Barrett’s voice as the omniscient computer in Star Trek. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And they’re rapidly gaining more attention from accessibility advocates for their assistive potential.

Before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others formulated their vision for a Semantic Web “agent” that would perform typical errands like “checking calendars, making appointments, and finding locations” (http://bkaprt.com/vcu36/01-07, behind paywall). It wasn’t until 2011 that Apple’s Siri finally entered the picture, making voice assistants a tangible reality for consumers.

Thanks to the plethora of voice assistants available today, there is considerable variation in how programmable and customizable certain voice assistants are compared with others (Fig 1.1). At one extreme, everything except vendor-provided features is locked down; for example, at the time of their release, the core functionality of Apple’s Siri and Microsoft’s Cortana couldn’t be extended beyond their existing capabilities. Even today, it isn’t possible to program Siri to perform arbitrary functions, because there’s no means by which developers can interact with Siri at a low level, apart from predefined categories of tasks like sending messages, hailing rideshares, making restaurant reservations, and certain others.

At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, programmable voice assistants that lend themselves to customization and extensibility are becoming increasingly popular for developers who feel stifled by the limitations of Siri and Cortana. Amazon offers the Alexa Skills Kit, a developer framework for building custom voice interfaces for Amazon Alexa, while Google Home offers the ability to program arbitrary Google Assistant skills. Today, users can choose from among thousands of custom-built skills within both the Amazon Alexa and Google Assistant ecosystems.

Fig 1.1: Voice assistants like Amazon Alexa and Google Home tend to be more programmable, and thus more flexible, than their counterpart Apple Siri.
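To make "programmable" concrete, here is a minimal sketch of a custom Alexa skill handler, assuming Amazon's ask-sdk-core package for Node.js. The intent name and the spoken reply are hypothetical, and a real skill would also define its interaction model (invocation name, sample utterances) in the Alexa developer console.

```typescript
// A custom Alexa skill reduced to one intent handler plus a Lambda entry
// point. "TeaOrderIntent" and the reply text are hypothetical examples.
import * as Alexa from 'ask-sdk-core';

const TeaOrderIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    // Only claim requests that are IntentRequests for our hypothetical intent.
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'TeaOrderIntent';
  },
  handle(handlerInput) {
    // The response builder assembles the speech returned to the device.
    return handlerInput.responseBuilder
      .speak('One iced tea, coming right up.')
      .getResponse();
  },
};

// Wire the handler into a skill and export it as an AWS Lambda handler,
// the usual deployment target for custom skills.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(TeaOrderIntentHandler)
  .lambda();
```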

As corporations like Amazon, Apple, Microsoft, and Google continue to stake their territory, they’re also selling and open-sourcing an unprecedented array of tools and frameworks for designers and developers that aim to make building voice interfaces as easy as possible, even without code.

Often by necessity, voice assistants like Amazon Alexa tend to be monochannel—they’re tightly coupled to a device and can’t be accessed on a computer or smartphone instead. By contrast, many development platforms like Google’s Dialogflow have introduced omnichannel capabilities so users can build a single conversational interface that then manifests as a voice interface, textual chatbot, and IVR system upon deployment. I don’t prescribe any specific implementation approaches in this design-focused book, but in Chapter 4 we’ll get into some of the implications these variables might have on the way you build out your design artifacts.
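Though this book stays away from prescribing implementations, a rough sketch helps show what omnichannel means in practice. Assuming Google's @google-cloud/dialogflow client for Node.js, the same detectIntent call can sit behind a voice front end, a textual chatbot, or an IVR system; the project and session identifiers below are placeholders.

```typescript
// One Dialogflow agent serving many channels: the text passed to detectIntent
// could come from a chat widget, an SMS bridge, or transcribed speech.
// projectId and sessionId are placeholders, not real identifiers.
import { SessionsClient } from '@google-cloud/dialogflow';

async function askAgent(projectId: string, sessionId: string, text: string) {
  const client = new SessionsClient();
  const session = client.projectAgentSessionPath(projectId, sessionId);
  const [response] = await client.detectIntent({
    session,
    queryInput: { text: { text, languageCode: 'en-US' } },
  });
  // The same fulfillment text can be rendered as chat copy or synthesized speech.
  return response.queryResult?.fulfillmentText;
}
```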

Voice Content

Simply put, voice content is content delivered through voice. To preserve what makes human conversation so compelling in the first place, voice content needs to be free-flowing and organic, contextless and concise—everything written content isn’t.

Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, we’re most concerned with content delivered auditorily—not as an option, but as a necessity.

For many of us, our first foray into informational voice interfaces will be to deliver content to users. There’s only one problem: any content we already have isn’t in any way ready for this new habitat. So how do we make the content trapped on our websites more conversational? And how do we write new copy that lends itself to voice interactions?

Lately, we’ve begun slicing and dicing our content in unprecedented ways. Websites are, in many respects, colossal vaults of what I call macrocontent: lengthy prose that can extend for infinitely scrollable miles in a browser window, like microfilm viewers of newspaper archives. Back in 2002, well before the present-day ubiquity of voice assistants, technologist Anil Dash defined microcontent as permalinked pieces of content that stay legible regardless of environment, such as email or text messages:

A day’s weather forcast [sic], the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent. (http://bkaprt.com/vcu36/01-08)

I’d update Dash’s definition of microcontent to include all examples of bite-sized content that go well beyond written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a textbot confirmation of a restaurant reservation. Microcontent offers the best opportunity to gauge how your content can be stretched to the very edges of its capabilities, informing delivery channels both established and novel.

As microcontent, voice content is unique because it’s an example of how content is experienced in time rather than in space. We can glance at a digital sign underground for an instant and know when the next train is arriving, but voice interfaces hold our attention captive for periods of time that we can’t easily escape or skip, something screen reader users are all too familiar with.

Because microcontent is fundamentally made up of isolated blobs with no relation to the channels where they’ll eventually end up, we need to ensure that our microcontent truly performs well as voice content—and that means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability.

Fundamentally, the legibility and discoverability of our voice content both have to do with how voice content manifests in perceived time and space.




usability

Usability and Security; Better Together

Divya Sasidharan calls into question the trade-offs often made between security and usability. Does a secure interface by necessity need to be hard to use? Or is it the choice we make based on years of habit? Snow has fallen, snow on snow.


Security is often synonymous with poor usability. We assume that in order for something to be secure, it needs to by default appear impenetrable to disincentivize potential bad actors. While this premise is true in many instances like in the security of a bank, it relies on a fundamental assumption: that there is no room for choice.

With the option to choose, a user almost inevitably picks a more usable system or adapts how they interact with it regardless of how insecure it may be. In the context of the web, passwords are a prime example of such behavior. Though passwords were implemented as a way to drastically reduce the risk of attack, they proved to be marginally effective. In the name of convenience, complex, more secure passwords were shirked in favor of easy to remember ones, and passwords were liberally reused across accounts. This example clearly illustrates that usability and security are not mutually exclusive. Rather, security depends on usability, and it is imperative to get user buy-in in order to properly secure our applications.

Security and Usability; a tale of broken trust

At its core, security is about fostering trust. In addition to protecting user accounts from malicious attacks, security protocols provide users with the peace of mind that their accounts and personal information are safe. Ironically, that peace of mind is contingent on users using the security protocols in the first place, which further relies on them accepting that security is needed. With the increased frequency of cybersecurity threats and data breaches over the last couple of years, users have grown less trusting of security experts and their measures. Security experts have equally become less trusting of users, and see them as “the weakest link in the chain”. This has led to more cumbersome security practices, such as mandatory 2FA and constant re-login flows, which bottleneck users trying to accomplish essential tasks. Because of this breakdown in trust, there is a natural inclination to shortcut security altogether.

Build a culture of trust not fear

Building trust among users requires empowering them to believe that their individual actions have a larger impact on the security of the overall organization. If a user understands that their behavior can put critical resources of an organization at risk, they will more likely behave with security in mind. For this to work, nuance is key. Deeming that every resource needs a similarly high number of checks and balances diminishes how users perceive security and adds unnecessary bottlenecks to user workflows.

In order to lay the foundation for good security, it’s worth noting that risk analysis is the bedrock of security design. Instead of blindly implementing standard security measures recommended by the experts, a better approach is to tailor security protocols to meet specific use cases and adapt as much as possible to user workflows. Here are some examples of how to do just that:

Risk based authentication

Risk-based authentication is a powerful way to perform a holistic assessment of the threats facing an organization. Risks occur at the intersection of vulnerability and threat: a high-risk account is vulnerable and faces the very real threat of a potential breach. Generally, risk-based authentication is about calculating a risk score associated with each account and determining the proper approach to securing it. The score takes into account a combination of the likelihood that the risk will materialize and the impact on the organization should the risk come to pass. With this system, an organization can easily adapt access to resources depending on how critical they are to the business; for instance, internal documentation may not warrant 2FA, while accessing business and financial records may. A minimal sketch of this calculation follows.
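Sketched in code, with hypothetical resource names and an illustrative threshold (a real policy would come out of the risk analysis described above), the likelihood-times-impact calculation might look like this:

```typescript
// Hypothetical risk scoring: likelihood and impact are normalized to [0, 1],
// and their product decides whether a resource warrants a second factor.
type Resource = { name: string; likelihood: number; impact: number };

function riskScore(r: Resource): number {
  return r.likelihood * r.impact;
}

// The 0.3 threshold is illustrative, not a recommendation.
function requiredAuth(r: Resource): 'password' | 'password+2fa' {
  return riskScore(r) > 0.3 ? 'password+2fa' : 'password';
}

console.log(requiredAuth({ name: 'internal docs', likelihood: 0.4, impact: 0.2 }));      // password
console.log(requiredAuth({ name: 'financial records', likelihood: 0.5, impact: 0.9 })); // password+2fa
```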

Dynamically adaptive auth

Similar to risk-based auth, dynamically adaptive auth adjusts to the current situation. Security can be strengthened or slackened as warranted, depending on how risky the access point is. A user accessing an account from a trusted device in a known location may be deemed low risk and therefore not in need of extra security layers. Likewise, a user exhibiting predictable patterns of use should be granted quick and easy access to resources. The ability to adapt authentication based on the most recent security profile of a user significantly improves the experience by reducing unnecessary friction; see the sketch below.
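A hedged sketch of the same idea: count the trust signals present at login and step up the challenge as signals disappear. The signals and tiers are illustrative, not a production policy.

```typescript
// Hypothetical adaptive authentication: count the trust signals missing from
// the current login and step up the challenge accordingly.
interface LoginContext {
  trustedDevice: boolean;       // device the user has signed in from before
  knownLocation: boolean;       // geolocation consistent with recent history
  typicalUsagePattern: boolean; // time and behavior match the user's routine
}

function challengeFor(ctx: LoginContext): 'none' | '2fa' | 'block' {
  const missing = [ctx.trustedDevice, ctx.knownLocation, ctx.typicalUsagePattern]
    .filter((signal) => !signal).length;
  if (missing === 0) return 'none'; // low risk: frictionless access
  if (missing < 3) return '2fa';    // medium risk: require one extra factor
  return 'block';                   // high risk: deny and alert
}
```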

Conclusion

Historically, security failed to take the user experience into account, putting the onus of securing accounts solely on users. Considering the fate of password security, we can rely on neither users nor stringent security mechanisms alone to keep our accounts safe. Instead, we should aim for security measures that adapt to users' workflows while still protecting our accounts from attack. The fate of secure systems lies in the understanding that security is a process that must constantly adapt to face the shifting landscape of user behavior and potential threats.


About the author

Divya is a web developer who is passionate about open source and the web. She is currently a developer experience engineer at Netlify, and believes that there is a better workflow for building and deploying sites that doesn’t require a server—ask her about the JAMstack. You will most likely find her in the sunniest spot in the room with a cup of tea in hand.

More articles by Divya




usability

Applicability of β-lactamase entrapped agarose discs for removal of doripenem antibiotic: reusability and scale-up studies

Environ. Sci.: Water Res. Technol., 2024, Advance Article
DOI: 10.1039/D4EW00572D, Paper
Huma Fatima, Amrik Bhattacharya, Sunil Kumar Khare
Schematic diagram illustrating antibiotic removal via β-lactamase-entrapped agarose discs in a fixed-bed column bioreactor, highlighting the potential for scale-up.




usability

Lava Agni 3 Review | Offers solid everyday usability and features

While the secondary screen’s potential is not fully realised yet, it’s an exciting addition that could be improved with future updates




usability

Highland Management Group Announces Newly Redesigned Website that Improves Usability for Customers

New Simple and Mobile Friendly Website Helps Customers with Management Services




usability

Usability task scenarios: The beating heart of a usability test

Usability tests are unique. We ask people to do real tasks with the system and watch. As the person completes the task, we watch their behaviour and listen to their stream-of-consciousness narrative. But what makes a good usability task scenario?




usability

UX Deep Dive: Usability Testing

Take a deep dive into usability testing techniques and methodologies for user experience (UX) design projects with research expert Amanda Stockwell. In this course, you can learn which types of tests—remote or in person, moderated or unmoderated, task-based or unstructured—to use for specific types of projects and users. Amanda also shares tips on conducting usability tests, from recruiting participants to moderating sessions. Plus, learn how to analyze and present the results of your testing to the rest of your organization.




usability

Let's get mobile. Advancing mobile usability for everyone.

For many people, accessibility and disability are philanthropic efforts that represent requisite components of every company's Corporate Social Responsibility (CSR) portfolio.

More and more users are adopting the mobile platform. It is predicted that the tipping point will be reached in 2013, with mobile devices surpassing the desktop computer as the most common Web access device.




usability

Application of Adult-Learning Principles to Patient Instructions: A Usability Study for an Exenatide Once-Weekly Injection Device

Gayle Lorenzi
Sep 1, 2010; 28:157-162
Bridges to Excellence




usability

Usability Testing for Voice Content

It’s an important time to be in voice design. Many of us are turning to voice assistants in these times, whether for comfort, recreation, or staying informed. As the interest in interfaces driven by voice continues to reach new heights around the world, so too will users’ expectations and the best practices that guide their design.

Voice interfaces (also known as voice user interfaces or VUIs) have been reinventing how we approach, evaluate, and interact with user interfaces. The impact of conscious efforts to reduce close contact between people will continue to increase users’ expectations for the availability of a voice component on all devices, whether that entails a microphone icon indicating voice-enabled search or a full-fledged voice assistant waiting patiently in the wings for an invocation.

But voice interfaces present inherent challenges and surprises. In this relatively new realm of design, the intrinsic twists and turns in spoken language can make things difficult for even the most carefully considered voice interfaces. After all, spoken language is littered with fillers (in the linguistic sense of utterances like hmm and um), hesitations and pauses, and other interruptions and speech disfluencies that present puzzling problems for designers and implementers alike.

Once you’ve built a voice interface that introduces information or permits transactions in a rich way for spoken language users, the easy part is done. Nonetheless, voice interfaces also surface unique challenges when it comes to usability testing and robust evaluation of your end result. But there are advantages, too, especially when it comes to accessibility and cross-channel content strategy. The fact that voice-driven content lies on the opposite extreme of the spectrum from the traditional website confers it an additional benefit: it’s an effective way to analyze and stress-test just how channel-agnostic your content truly is.

The quandary of voice usability

Several years ago, I led a talented team at Acquia Labs to design and build a voice interface for Digital Services Georgia called Ask GeorgiaGov, which allowed citizens of the state of Georgia to access content about key civic tasks, like registering to vote, renewing a driver’s license, and filing complaints against businesses. Based on copy drawn directly from the frequently asked questions section of the Georgia.gov website, it was the first Amazon Alexa interface integrated with the Drupal content management system ever built for public consumption. Built by my former colleague Chris Hamper, it also offered a host of impressive features, like allowing users to request the phone number of individual government agencies for each query on a topic.

Designing and building web experiences for the public sector is a uniquely challenging endeavor due to requirements surrounding accessibility and frequent budgetary challenges. Out of necessity, governments need to be exacting and methodical not only in how they engage their citizens and spend money on projects but also how they incorporate new technologies into the mix. For most government entities, voice is a completely different world, with many potential pitfalls.

At the outset of the project, the Digital Services Georgia team, led by Nikhil Deshpande, expressed their most important need: a single content model across all their content irrespective of delivery channel, as they only had resources to maintain a single rendition of each content item. Despite this editorial challenge, Georgia saw Alexa as an exciting opportunity to open new doors to accessible solutions for citizens with disabilities. And finally, because there were relatively few examples of voice usability testing at the time, we knew we would have to learn on the fly and experiment to find the right solution.

Eventually, we discovered that all the traditional approaches to usability testing that we’d executed for other projects were ill-suited to the unique problems of voice usability. And this was only the beginning of our problems.

How voice interfaces improve accessibility outcomes

Any discussion of voice usability must consider some of the most experienced voice interface users: people who use assistive devices. After all, accessibility has long been a bastion of web experiences, but it has only recently become a focus of those implementing voice interfaces. In a world where refreshable Braille displays and screen readers prize the rendering of web-based content into synthesized speech above all, the voice interface seems like an anomaly. But in fact, the exciting potential of Amazon Alexa for disabled citizens represented one of the primary motivations for Georgia’s interest in making their content available through a voice assistant.

Questions surrounding accessibility with voice have surfaced in recent years due to the perceived user experience benefits that voice interfaces can offer over more established assistive devices. Because screen readers make no exceptions when they recite the contents of a page, they can occasionally present superfluous information and force the user to wait longer than they’re willing. In addition, with an effective content schema, it can often be the case that voice interfaces facilitate pointed interactions with content at a more granular level than the page itself.

Though it can be difficult to convince even the most forward-looking clients of accessibility’s value, Georgia has been not only a trailblazer but also a committed proponent of content accessibility beyond the web. The state was among the first jurisdictions to offer a text-to-speech (TTS) phone hotline that read web pages aloud. After all, state governments must serve all citizens equally—no ifs, ands, or buts. And while these are still early days, I can see voice assistants becoming new conduits, and perhaps more efficient channels, by which disabled users can access the content they need.

Managing content destined for discrete channels

Whereas voice can improve accessibility of content, it’s seldom the case that web and voice are the only channels through which we must expose information. For this reason, one piece of advice I often give to content strategists and architects at organizations interested in pursuing voice-driven content is to never think of voice content in isolation. Siloing it is the same misguided approach that has led to mobile applications and other discrete experiences delivering orphaned or outdated content to a user expecting that all content on the website should be up-to-date and accessible through other channels as well.

After all, we’ve trained ourselves for many years to think of content in the web-only context rather than across channels. Our closely held assumptions about links, file downloads, images, and other web-based marginalia and miscellany are all aspects of web content that translate poorly to the conversational context—and particularly the voice context. Increasingly, we all need to concern ourselves with an omnichannel content strategy that straddles all those channels in existence today and others that will doubtlessly surface over the horizon.

With the advantages of structured content in Drupal 7, Georgia.gov already had a content model amenable to interlocution in the form of frequently asked questions (FAQs). While question-and-answer formats are convenient for voice assistants because queries for content tend to come in the form of questions, the returned responses likewise need to be as voice-optimized as possible.

For Georgia.gov, the need to preserve a single rendition of all content across all channels led us to perform a conversational content audit, in which we read aloud all of the FAQ pages, putting ourselves in the shoes of a voice user, and identified key differences between how a user would interpret the written form and how they would parse the spoken form of that same content. After some discussion with the editorial team at Georgia, we opted to limit calls to action (e.g., “Read more”), links lacking clear context in surrounding text, and other situations confusing to voice users who cannot visualize the content they are listening to.

Here’s a table containing examples of how we converted certain text on FAQ pages to counterparts more appropriate for voice. Reading each sentence aloud, one by one, helped us identify cases where users might scratch their heads and say “Huh?” in a voice context.

Before: Learn how to change your name on your Social Security card.
After: The Social Security Administration can help you change your name on your Social Security card.

Before: You can receive payments through either a debit card or direct deposit. Learn more about payments.
After: You can receive payments through either a debit card or direct deposit.

Before: Read more about this.
After: In Georgia, the Family Support Registry typically pulls payments directly from your paycheck. However, you can send your own payments online through your bank account, your credit card, or Western Union. You may also send your payments by mail to the address provided in your court order.

In areas like content strategy and content governance, content audits have long been key to understanding the full picture of your content, but it doesn’t end there. Successful content audits can run the gamut from automated checks for orphaned content or overly wordy articles to more qualitative analyses of how content adheres to a specific brand voice or certain design standards. For a content strategy truly prepared for channels both here and still to come, a holistic understanding of how users will interact with your content in a variety of situations is a baseline requirement today.
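Some of those automated checks are easy to sketch. Assuming a hypothetical phrase list distilled from an audit like Georgia's, a first pass might flag copy that reads fine on screen but strands a listener in a voice context:

```typescript
// Hypothetical automated pass over content destined for voice. The phrase
// list is illustrative; a real audit would grow it from editorial review.
const VOICE_UNFRIENDLY: RegExp[] = [/read more/i, /learn more/i, /click here/i, /see below/i];

function auditForVoice(content: string): string[] {
  return VOICE_UNFRIENDLY
    .filter((pattern) => pattern.test(content))
    .map((pattern) => `flagged phrase: "${pattern.source}"`);
}

// Example: auditForVoice('Learn more about payments.')
// -> ['flagged phrase: "learn more"']
```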

Other conversational interfaces have it easier

Spoken language is inherently hard. Even the most gifted orators can have trouble with it. It’s littered with mistakes, starts and stops, interruptions, hesitations, and a vertiginous range of other uniquely human transgressions. The written word, because it’s committed instantly to a mostly permanent record, is tame, staid, and carefully considered in comparison.

When we talk about conversational interfaces, we need to draw a clear distinction between the range of user experiences that traffic in written language rather than spoken language. As we know from the relative solidity of written language and literature versus the comparative transience of spoken language and oral traditions, in many ways the two couldn’t be more different from one another. The implications for designers are significant because spoken language, from the user’s perspective, lacks a graphical equivalent to which those scratching their head can readily refer. We’re dealing with the spoken word and aural affordances, not pixels, written help text, or visual affordances.

Why written conversational interfaces are easier to evaluate

One of the privileges that chatbots and textbots enjoy over voice interfaces is the fact that by design, they can’t hide the previous steps users have taken. Any conversational interface user working in the written medium has access to their previous history of interactions, which can stretch back days, weeks, or months: the so-called backscroll. A flight passenger communicating with an airline through Facebook Messenger, for example, knows that they can merely scroll up in the chat history to confirm that they’ve already provided the company with their e-ticket number or frequent flyer account information.

This has outsize implications for information architecture and conversational wayfinding. Since chatbot users can consult their own written record, it’s much harder for things to go completely awry when they make a move they didn’t intend. Recollection is much more difficult when you have to remember what you said a few minutes ago off the top of your head rather than scrolling up to the information you provided a few hours or weeks ago. An effective chatbot interface may, for example, enable a user to jump back to a much earlier, specific place in a conversation’s history. Voice interfaces that live perpetually in the moment have no such luxury.

Eye tracking only works for visual components

In many cases, those who work with chatbots and messaging bots (especially those leveraging text messages or other messaging services like Facebook Messenger, Slack, or WhatsApp) have the unique privilege of benefiting from a visual component. Some conversational interfaces now insert other elements into the conversational flow between a machine and a person, such as embedded conversational forms (like SPACE10’s Conversational Form) that allow users to enter rich input or select from a range of possible responses.

The success of eye tracking in more traditional usability testing scenarios highlights its appropriateness for visual interfaces such as websites, mobile applications, and others. However, from the standpoint of evaluating voice interfaces that are entirely aural, eye tracking serves only the limited (but still interesting from a research perspective) purpose of assessing where the test subject is looking while speaking with an invisible interlocutor—not whether they are able to use the interface successfully. Indeed, eye tracking is only a viable option for voice interfaces that have some visual component, like the Amazon Echo Show.

Think-aloud and concurrent probing interrupt the conversational flow

A well-worn approach for usability testing is think-aloud, which allows for users working with interfaces to present their frequently qualitative impressions of interfaces verbally while interacting with the user experience in question. Paired with eye tracking, think-aloud adds considerable dimension to a usability test for visual interfaces such as websites and web applications, as well as other visually or physically oriented devices.

Another is concurrent probing (CP). Probing involves the use of questions to gather insights about the interface from users, and Usability.gov describes two types: concurrent, in which the researcher asks questions during interactions, and retrospective, in which questions only come once the interaction is complete.

Conversational interfaces that utilize written language rather than spoken language can still be well-suited to think-aloud and concurrent probing approaches, especially for the components in the interface that require manual input, like conversational forms and other traditional UI elements interspersed throughout the conversation itself.

But for voice interfaces, think-aloud and concurrent probing are highly questionable approaches and can catalyze a variety of unintended consequences, including accidental invocations of trigger words (such as Alexa mishearing “selected” as “Alexa”) and introduction of bad data (such as speech transcription registering both the voice interface and test subject). After all, in a hypothetical think-aloud or CP test of a voice interface, the user would be responsible for conversing with the chatbot while simultaneously offering up their impressions to the evaluator overseeing the test.

Voice usability tests with retrospective probing

Retrospective probing (RP), a lesser-known approach for usability testing, is seldom seen in web usability testing due to its chief weakness: the fact that we have awful memories and rarely remember what occurred mere moments earlier with anything that approaches total accuracy. (This might explain why the backscroll has joined the pantheon of rigid recordkeeping currently occupied by cuneiform, the printing press, and other means of concretizing information.)

For users of voice assistants lacking scrollable chat histories, retrospective probing introduces the potential for subjects to include false recollections in their assessments or to misinterpret the conclusion of their conversations. That said, retrospective probing permits the participant to take some time to form their impressions of an interface rather than dole out incremental tidbits in a stream of consciousness, as would more likely occur in concurrent probing.

What makes voice usability tests unique

Voice usability tests have several unique characteristics that distinguish them from web usability tests or other conversational usability tests, but some of the same principles unify both visual interfaces and their aural counterparts. As always, “test early, test often” is a mantra that applies here, as the earlier you can begin testing, the more robust your results will be. Having an individual to administer a test and another to transcribe results or watch for signs of trouble is also an effective best practice in settings beyond just voice usability.

Interference from poor soundproofing or external disruptions can derail a voice usability test even before it begins. Many large organizations will have soundproof rooms or recording studios available for voice usability researchers. For the vast majority of others, a mostly silent room will suffice, though absolute silence is optimal. In addition, many subjects, even those well-versed in web usability tests, may be unaccustomed to voice usability tests in which long periods of silence are the norm to establish a baseline for data.

How we used retrospective probing to test Ask GeorgiaGov

For Ask GeorgiaGov, we used the retrospective probing approach almost exclusively to gather a range of insights about how our users were interacting with voice-driven content. We endeavored to evaluate interactions with the interface early and diachronically. In the process, we asked each of our subjects to complete two distinct tasks that would require them to traverse the entirety of the interface by asking questions (conducting a search), drilling down into further questions, and requesting the phone number for a related agency. Though this would be a significant ask of any user working with a visual interface, the unidirectional focus of voice interface flows, by contrast, reduced the likelihood of lengthy accidental detours.

Here are a couple of example scenarios:

You have a business license in Georgia, but you’re not sure if you have to register on an annual basis. Talk with Alexa to find out the information you need. At the end, ask for a phone number for more information.

You’ve just moved to Georgia and you know you need to transfer your driver’s license, but you’re not sure what to do. Talk with Alexa to find out the information you need. At the end, ask for a phone number for more information.

We also peppered users with questions after the test concluded to learn about their impressions through retrospective probing:

  • “On a scale of 1–5, based on the scenario, was the information you received helpful? Why or why not?”
  • “On a scale of 1–5, based on the scenario, was the content presented clear and easy to follow? Why or why not?”
  • “What’s the answer to the question that you were tasked with asking?”

Because state governments also routinely deal with citizen questions having to do with potentially traumatic issues such as divorce and sexual harassment, we also offered the choice for participants to opt out of certain categories of tasks.

While this testing procedure yielded compelling results that indicated our voice interface was performing at the level it needed to despite its experimental nature, we also ran into considerable challenges during the usability testing process. Restoring Amazon Alexa to its initial state and troubleshooting issues on the fly proved difficult during the initial stages of the implementation, when bugs were still common.

In the end, we found that many of the same lessons that apply to more storied examples of usability testing were also relevant to Ask GeorgiaGov: the importance of testing early and testing often, the need for faithful yet efficient transcription, and the surprising staying power of bugs when integrating disparate technologies. Despite Ask GeorgiaGov’s many similarities to other interface implementations in terms of technical debt and the role of usability testing, we were overjoyed to hear from real Georgians whose engagement with their state government could not be more different from before.

Conclusion

Many of us may have started building interfaces for voice content to experiment with newfangled channels, or to serve disabled people and people newer to the web. Today, voice interfaces are necessities for many others, especially as social distancing practices continue to take hold worldwide. Nonetheless, it’s crucial to keep in mind that voice should be only one component of a channel-agnostic strategy equipped for content ripped away from its usual contexts. Building usable voice-driven content experiences can teach us a great deal about how we should envision our entire milieu of content and its future.

Gone are the days when we could write a page in HTML and call it a day; content now needs to be rendered through synthesized speech, augmented reality overlays, digital signage, and other environments where users may never even touch a personal computer. By focusing on structured content first and foremost, with an eye toward moving past our web-based biases as we develop content for voice and other channels, we can better ensure the effectiveness of our content on any device and in any form factor.

Eight months after we finished building Ask GeorgiaGov in 2017, we conducted a retrospective to inspect the logs amassed in the interim. The results were striking. Vehicle registration, driver’s licenses, and the state sales tax were the most commonly searched topics. 79.2% of all interactions were successful, an achievement for one of the first content-driven Alexa skills in production, and 71.2% of all interactions led to the issuance of a phone number that users could call for further information.

But deep in the logs we implemented for the Georgia team’s convenience, we found a number of perplexing 404 Not Found errors related to a search term that kept being recorded over and over again as “Lawson’s.” After some digging and consulting the native Georgians in the room, we discovered that one of our dear users with a particularly strong drawl was repeatedly pronouncing “license” in her native dialect to no avail.
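
Surfacing that pattern took nothing fancy: counting how often each search term ended in a 404 makes repeat offenders like “Lawson’s” jump out. A minimal sketch in TypeScript, assuming a simplified log-record shape (our actual logs differed):

  // Hypothetical log-record shape; the real Ask GeorgiaGov logs differed.
  interface SearchLogEntry {
    searchTerm: string;
    status: number; // HTTP-style status of the content lookup
  }

  // Count 404s per search term, then sort so the most frequent failures surface first.
  function topFailedSearches(logs: SearchLogEntry[]): Array<[string, number]> {
    const counts = new Map<string, number>();
    for (const entry of logs) {
      if (entry.status === 404) {
        counts.set(entry.searchTerm, (counts.get(entry.searchTerm) ?? 0) + 1);
      }
    }
    return [...counts.entries()].sort((a, b) => b[1] - a[1]);
  }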

As this anecdote highlights, just as no user experience can be truly perfect for everyone, voice content is an environment where imperfections can highlight considerations we missed in developing cross-channel content. And just as we have much to learn when it comes to the new shapes content can take as it jumps off the screen and out the window, it seems our voice interfaces still have a ways to go before they take over the world too.

Special thanks to Nikhil Deshpande for his feedback during the writing process.




usability

Usability and Security; Better Together

Divya Sasidharan calls into question the trade-offs often made between security and usability. Does a secure interface by necessity need to be hard to use? Or is that a choice we make out of years of habit?


Security is often synonymous with poor usability. We assume that for something to be secure, it must appear impenetrable by default in order to disincentivize potential bad actors. While this premise holds in many instances, such as the security of a bank, it relies on a fundamental assumption: that there is no room for choice.

With the option to choose, users almost inevitably pick the more usable system, or adapt how they interact with it, regardless of how insecure it may be. On the web, passwords are a prime example of this behavior. Though passwords were introduced to drastically reduce the risk of attack, they have proved only marginally effective: in the name of convenience, complex, more secure passwords are eschewed in favor of easy-to-remember ones, and passwords are liberally reused across accounts. This example illustrates that usability and security are not mutually exclusive. Rather, security depends on usability, and getting user buy-in is imperative to properly securing our applications.

Security and Usability; a tale of broken trust

At its core, security is about fostering trust. In addition to protecting user accounts from malicious attacks, security protocols give users peace of mind that their accounts and personal information are safe. Ironically, that peace of mind depends on users actually following the security protocols in the first place, which in turn relies on their accepting that security is needed. With the increased frequency of cybersecurity threats and data breaches over the last several years, users have grown less trusting of security experts and their measures. Security experts have equally become less trusting of users, seeing them as “the weakest link in the chain.” This has led to more cumbersome security practices, such as mandatory 2FA and constant re-login flows, that bottleneck users trying to accomplish essential tasks. Because of this breakdown in trust, there is a natural inclination to shortcut security altogether.

Build a culture of trust not fear

Building trust among users requires empowering them to believe that their individual actions have a larger impact on the security of the overall organization. If users understand that their behavior can put an organization’s critical resources at risk, they are more likely to behave with security in mind. For this to work, nuance is key: deeming that every resource needs an equally high number of checks and balances diminishes how seriously users take security and adds unnecessary bottlenecks to their workflows.

Risk analysis is the bedrock of security design. Instead of blindly implementing the standard security measures recommended by experts, a better approach is to tailor security protocols to specific use cases and adapt them as much as possible to user workflows. Here are some examples of how to do just that:

Risk-based authentication

Risk-based authentication is a powerful way to perform a holistic assessment of the threats facing an organization. Risk occurs at the intersection of vulnerability and threat: a high-risk account is vulnerable and faces the very real threat of a potential breach. Generally, risk-based authentication means calculating a risk score for each account and determining the proper approach to securing it. The score combines the likelihood that a given risk will materialize with the impact on the organization should it come to pass. With this system, an organization can easily adapt access to resources depending on how critical they are to the business; for instance, internal documentation may not warrant 2FA, while access to business and financial records may.
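
To make that concrete, here is a minimal sketch of a risk score built from likelihood and impact and mapped to a proportionate authentication requirement; the weights and thresholds are illustrative assumptions, not a standard.

  // A minimal sketch of risk-based authentication; the factors and
  // thresholds below are illustrative assumptions, not a standard.
  interface AccessRequest {
    likelihood: number; // 0–1: chance the risk materializes
    impact: number;     // 0–1: damage to the organization if it does
  }

  type AuthRequirement = 'password' | 'password + 2FA' | 'password + 2FA + manual approval';

  // Risk combines how likely a breach is with how much it would hurt.
  function riskScore(req: AccessRequest): number {
    return req.likelihood * req.impact;
  }

  // Map the score to a proportionate level of friction: internal documentation
  // may need only a password, while financial records may warrant more.
  function requiredAuth(req: AccessRequest): AuthRequirement {
    const score = riskScore(req);
    if (score > 0.5) return 'password + 2FA + manual approval';
    if (score > 0.2) return 'password + 2FA';
    return 'password';
  }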

Dynamically adaptive auth

Similar to risk-based auth, dynamically adaptive auth adjusts to the current situation. Security can be strengthened or slackened as warranted, depending on how risky the access point is. A user accessing an account from a trusted device in a known location may be deemed low risk and therefore not in need of extra security layers; likewise, a user exhibiting predictable patterns of use should be granted quick and easy access to resources. The ability to adapt authentication to the most recent security profile of a user significantly improves the experience by reducing unnecessary friction.
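
A minimal sketch of how such a policy might look; the signals and rules here are illustrative assumptions, and a real system would derive them from device fingerprints, geolocation, and usage history:

  // Illustrative signals for dynamically adaptive auth.
  interface LoginContext {
    trustedDevice: boolean;  // device previously registered by this user
    knownLocation: boolean;  // matches the user's recent sign-in locations
    typicalPattern: boolean; // time of day and behavior match history
  }

  // Extra layers required beyond the password; empty for low-risk
  // access points, preserving quick and easy access.
  function extraAuthLayers(ctx: LoginContext): string[] {
    const layers: string[] = [];
    if (!ctx.trustedDevice) layers.push('one-time code');
    if (!ctx.knownLocation) layers.push('email verification');
    if (!ctx.typicalPattern) layers.push('re-authentication');
    return layers;
  }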

Conclusion

Historically, security has failed to take the user experience into account, putting the onus of securing accounts solely on users. Considering the fate of password security, we can rely on neither users nor ever more stringent security mechanisms to keep our accounts safe. Instead, we should aim for security measures that give users the freedom to bypass them when the risk is low while still protecting our accounts from attack. The fate of secure systems lies in the understanding that security is a process, one that must constantly adapt to the shifting landscape of user behavior and potential threats.


About the author

Divya is a web developer who is passionate about open source and the web. She is currently a developer experience engineer at Netlify, and believes that there is a better workflow for building and deploying sites that doesn’t require a server—ask her about the JAMstack. You will most likely find her in the sunniest spot in the room with a cup of tea in hand.





usability

Designing accessibility instruments: lessons on their usability for integrated land use and transport planning practices / edited by Cecilia Silva, Luca Bertolini and Nuno Pinto

Rotch Library - HT166.D3865 2019