
Strain induced rich planar defects in heterogeneous WS2/WO2 enable efficient nitrogen fixation at low overpotential

J. Mater. Chem. A, 2020, Advance Article
DOI: 10.1039/C9TA13812A, Paper
Ying Ling, Farhad M. D. Kazim, Shuangxiu Ma, Quan Zhang, Konggang Qu, Yangang Wang, Shenglin Xiao, Weiwei Cai, Zehui Yang
Incorporation of WO2 into WS2 nanosheets can efficiently suppress the competitive hydrogen evolution reaction (HER) due to the reduction of edge defects and create new planar defects at heterointerfaces for the nitrogen reduction reaction (NRR).
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry





Hacking life: systematized living and its discontents / Joseph M. Reagle Jr

Browsery HM851.R432 2019





Patently mathematical: picking partners, passwords, and careers by the numbers / Jeff Suzuki

Hayden Library - QA41.S9395 2019





Manipulating trap filling of persistent phosphors upon illumination by using a blue light-emitting diode

J. Mater. Chem. C, 2020, Advance Article
DOI: 10.1039/D0TC01427C, Paper
Qingqing Gao, Chenlin Li, Yichun Liu, Jiahua Zhang, Xiao-jun Wang, Feng Liu
Developing a conceptual “write”/“read” technology for optical information storage of persistent phosphors is necessary but often underestimated.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry





Compact and ultrathin multi-element oxide films grown by temperature-controlled deposition and their surface-potential based transistor theoretical simulation model

J. Mater. Chem. C, 2020, Advance Article
DOI: 10.1039/D0TC00506A, Paper
Jiahui Liu, Zunxian Yang, Shimin Lin, Kang Zheng, Yuliang Ye, Bingqing Ye, Zhipeng Gong, Yinglin Qiu, Lei Xu, Tailiang Guo, Sheng Xu
Thin IMZO films were synthesized by a temperature-controlled approach and applied to TFTs with good performance. A voltage modification factor was introduced to simulate the electrical characteristics of the devices.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry





The New Rules of Marketing and PR: How to Use Content Marketing, Podcasting, Social Media, AI, Live Video, and Newsjacking to Reach Buyers Directly, 7th Edition


 

The seventh edition of the pioneering guide to generating attention for your idea or business, packed with new and updated information

In the Digital Age, marketing tactics seem to change on a day-to-day basis. As the ways we communicate continue to evolve, keeping pace with the latest trends in social media, the newest online videos, the latest mobile apps, and all the other high-tech influences can seem an almost impossible task. How can you keep…








Linear holomorphic partial differential equations and classical potential theory / Dmitry Khavinson, Erik Lundberg

Hayden Library - QA404.7.K43 2018





The Untapped Power of Vulnerability & Transparency in Content Strategy

In marketing, transparency and vulnerability are unjustly stigmatized. The words conjure illusions of being frightened, imperfect, and powerless. And for companies that shove carefully curated personas in front of users, little is more terrifying than losing control of how people perceive the brand.

Let’s shatter this illusory stigma. Authentic vulnerability and transparency are strengths masquerading as weaknesses. And companies too scared to embrace both traits in their content forfeit bona fide user-brand connections for often shallow, misleading engagement tactics that create fleeting relationships.

Transparency and vulnerability are closely entwined concepts, but each one engages users in a unique way. Transparency is how much information you share, while vulnerability is the truth and meaning behind your actions and words. Combining these ideas is the trick to creating empowering and meaningful content. You can’t tell true stories of vulnerability without transparency, and to be authentically transparent you must be vulnerable.

To be vulnerable, your brand and its content must be brave, genuine, humble, and open, all of which are traits that promote long-term customer loyalty. And if you’re transparent with users about who you are and about your business practices, you’re courting 94 percent of consumers who say they’re more loyal to brands that offer complete openness and 89 percent of people who say they give transparent companies a second chance after a bad experience.

For many companies, being completely honest and open with their customers—or employees, in some cases—only happens in a crisis. Unfortunately for those businesses, using vulnerability and transparency only as a crisis management strategy diminishes how sincere they appear and can reduce customer satisfaction.

Unlocking the potential of being transparent and vulnerable with users isn’t a one-off tactic or quick-fix emergency response tool—it’s a commitment to intimate storytelling that embraces a user’s emotional and psychological needs, which builds a meaningful connection between the storyteller and the audience.

The three storytelling pillars of vulnerable and transparent content

In her book, Braving the Wilderness, sociologist Brené Brown explains that vulnerability connects us at an emotional level. She says that when we recognize someone is being vulnerable, we invest in their story and begin to develop an emotional bond. This interwoven connection encourages us to experience the storyteller’s joy and pain, and then creates a sense of community and common purpose between the person being vulnerable and the people who acknowledge that vulnerability.

Three pillars in a company’s lifecycle embrace this bond and provide an outline for telling stories worthy of a user’s emotional investment. The pillars are:

  • the origins of a company, product, idea, or situation;
  • intimate narratives about customers’ life experiences;
  • and insights about product success and failure.

Origin stories

An origin story spins a transparent tale about how a company, product, service, or idea is created. It is often told by a founder, CEO, or industry innovator. This pillar is usually used as an authentic way to provide crisis management or as a method to change how users feel about a topic, product, or your brand.

Customers’ life experiences

While vulnerable origin stories do an excellent job of making users trust your brand, telling a customer’s personal life story is arguably the most effective way to use vulnerability to entwine a brand with someone’s personal identity.

Unlike an origin story, the customer experiences pillar is focused on being transparent about who your customers are, what they’ve experienced, and how those journeys align with values that matter to your brand. Through this lens, you’ll empower your customers to tell emotional, meaningful stories that make users feel vulnerable in a positive way. In this situation, your brand is often a storytelling platform where users share their story with the brand and fellow customers.

Product and service insights

Origin stories make your brand trustworthy in a crisis, and customers’ personal stories help users feel an intimate connection with your brand’s persona and mission. The last pillar, product and service insights, combines the psychological principles that make origin and customer stories successful. The outcome is a vulnerable narrative that rallies users’ excitement about, and emotional investment in, what a company sells or the goals it hopes to achieve.

Vulnerability, transparency, and the customer journey

The three storytelling pillars are crucial to embracing transparency and vulnerability in your content strategy because they let you target users at specific points in their journey. By embedding the pillars in each stage of the customer’s journey, you teach users about who you are, what matters to you, and why they should care.

For our purposes, let’s define the user journey as:

  • awareness;
  • interest;
  • consideration;
  • conversion;
  • and retention.

Awareness

People give each other seven seconds to make a good first impression. We’re not so generous with brands and websites. After discovering your content, users determine if it’s trustworthy within one-tenth of a second.

Page design and aesthetics are often the determining factors in these split-second choices, but the information users discover after that decision shapes their long-term opinions about your brand. This snap judgement is why transparency and vulnerability are crucial within awareness content.

When you only get one chance to make a positive first impression with your audience, what content is going to be more memorable?

Typical marketing “fluff” about how your brand was built on a shared vision and commitment to unyielding customer satisfaction and quality products? Or an upfront, authentic, and honest story about the trials and tribulations you went through to get where you are now?

Buffer, a social media management company that helped pioneer the radical transparency movement, chose the latter option. The outcome created awareness content that leaves a positive lasting impression of the brand.

In 2016, Joel Gascoigne, cofounder and CEO of Buffer, used an origin story to discuss the mistakes he and his company made that resulted in laying off 10 employees.

In the blog post “Tough News: We’ve Made 10 Layoffs. How We Got Here, the Financial Details and How We’re Moving Forward,” Gascoigne wrote about Buffer’s over-aggressive growth choices, lack of accountability, misplaced trust in its financial model, explicit risk appetite, and overenthusiastic hiring. He also discussed what he learned from the experience, the changes Buffer made based on these lessons, the consequences of those changes, and next steps for the brand.

Gascoigne writes about each subject with radical honesty and authenticity. Throughout the article, he’s personable and relatable; his tone and voice make it obvious he’s more concerned about the lives he’s irrevocably affected than the public image of his company floundering. Because Gascoigne is so transparent and vulnerable in the blog post, it’s easy to become invested in the narrative he’s telling. The result is an article that feels more like a deep, meaningful conversation over coffee instead of a carefully curated, PR-approved response.

Yes, Buffer used this origin story to confront a PR crisis, but they did so in a way that encouraged users to trust the brand. Buffer chose to show up and be seen when they had no control over the outcome. And because Gascoigne used vulnerability and transparency to share the company’s collective pain, the company reaped positive press coverage and support on social media—further improving brand awareness, user engagement, and customer loyalty.

However, awareness content isn’t always brand focused. Sometimes, smart awareness content uses storytelling to teach users and shape their worldviews. The 2019 State of Science Index is an excellent example.

The annual State of Science Index evaluates how the global public perceives science. The 2019 report shows that 87 percent of people acknowledge that science is necessary to solve the world’s problems, but 33 percent are skeptical of science and believe that scientists cause as many problems as they solve. Furthermore, 57 percent of respondents are skeptical of science because of scientists’ conflicting opinions about topics they don’t understand.

3M, the multinational science conglomerate that publishes the report, says the solution for this anti-science mindset is to promote intimate storytelling among scientists and layfolk.

3M creates an origin story with its awareness content by focusing on the ins and outs of scientific research. The company is open and straightforward with its data and intentions, eliminating any second guesses users might have about the content they’re digesting.

The company kicked off this strategy on three fronts, and each storytelling medium interweaves the benefits of vulnerability and transparency by encouraging researchers to tell stories that lead with how their findings benefit humanity. Every story 3M tells focuses on breaking through barriers the average person faces when they encounter science and encouraging scientists to be vulnerable and authentic with how they share their research.

First, 3M began a podcast series known as Science Champions. In the podcast, 3M Chief Science Advocate Jayshree Seth interviews scientists and educators about the global perception of science and how science and scientists affect our lives. The show is currently in its second season and discusses a range of topics in science, technology, engineering, and math.

Second, the company worked with science educators, journalist Katie Couric, actor Alan Alda, and former NASA astronaut Scott Kelly to develop the free Scientists as Storytellers Guide. The ebook helps STEM researchers improve how and why they communicate their work with other people—with a special emphasis on being empathetic with non-scientists. The guide breaks down how to develop communications skills, overcome common storytelling challenges, and learn to make science more accessible, understandable, and engaging for others.

Last, 3M created a film series called Beyond the Beaker that explores the day-to-day lives of 3M scientists. In the short videos, scientists give the viewer a glimpse into their hobbies and home life. The series showcases how scientists have diverse backgrounds, hobbies, goals, and dreams.

Unlike Buffer, which benefits directly from its awareness content, 3M’s three content mediums are designed to create a long-term strategy that changes how people understand and perceive science, by spreading awareness through third parties. It’s too early to conclude that the strategy will be successful, but it’s off to a good start. Science Champions often tops “best of” podcast lists for science lovers, and the Scientists as Storytellers Guide is a popular resource among public universities.

Interest

How do you court new users when word-of-mouth and organic search dominate how people discover new brands? Target their interests.

Now, you can be like the hundreds of other brands that create a “10 best things” list and hope people stumble onto your content organically and like what they see. Or, you can use content to engage with people who are passionate about your industry and have genuine, open discussions about the topics that matter to you both.

The latter option is a perfect fit for the product and service insights pillar, and the customers’ life experiences pillar.

To succeed in these pillars you must balance discussing the users’ passions and how your brand plays into that topic against appearing disingenuous or becoming too self-promotional.

Nonprofits have an easier time walking this taut line because people are less judgemental when engaging with NGOs, but it’s rare for a for-profit company to achieve this balance. SpaceX and Thinx are among the few brands that are able to walk this tightrope.

Thinx, a women’s clothing brand that sells period-proof underwear, uses its blog to generate awareness, interest, and consideration content via the customers’ life experiences pillar. The blog, aptly named Periodical, relies on transparency and vulnerability as a cornerstone to engage users about reproductive and mental health.

Toni Brannagan, Thinx’s content editor, says the brand embraces transparency and vulnerability by sharing diverse ideas and personal experiences from customers and experts alike, not shying away from sensitive subjects and never misleading users about Thinx or the subjects Periodical discusses.

As a company focused on women’s healthcare, the product Thinx sells is political by nature and entangles the brand with themes of shame, cultural differences, and personal empowerment. Thinx’s strategy is to tackle these subjects head-on by having vulnerable conversations in its branding, social media ads, and Periodical content.

“Vulnerability and transparency play a role because you can’t share authentic diverse ideas and experiences about those things—shame, cultural differences, and empowerment—without it,” Brannagan says.

A significant portion of Thinx’s website traffic is organic, which means Periodical’s interest-driven content may be a user’s first touchpoint with the brand.

“We’ve seen that our most successful organic content is educational, well-researched articles, and also product-focused blogs that answer the questions about our underwear, in a way that’s a little more casual than what’s on our product pages,” Brannagan says. “In contrast, our personal essays and ‘more opinionated’ content performs better on social media and email.”

Thanks in part to the blog’s authenticity and open discussions about hard-hitting topics, readers who find the brand through organic search drive the most direct conversions.

Conversations with users interested in the industry or topic your company is involved in don’t always have to come from the company itself. Sometimes a single person can drive authentic, open conversations and create endearing user loyalty and engagement.

For a company that relies on venture capital investments, NASA funding, and public opinion for its financial future, crossing the line between being too self-promotional and isolating users could spell doom. But SpaceX has never shied away from difficult or vulnerable conversations. Instead, the company’s founder, Elon Musk, embraces engaging with users’ interests in public forums like Twitter and press conferences.

Musk’s tweets about SpaceX are unwaveringly authentic and transparent. He often tweets about his thoughts, concerns, and the challenges his companies face. Plus, Musk frequently engages with his Twitter followers and provides candid answers to questions many CEOs avoid discussing. This authenticity has earned him a cult-like following.

Musk and SpaceX create conversations that target people’s interests and use vulnerability to equally embrace failure and success. Both the company and its founder give the public and investors an unflinching story of space exploration.

And despite laying off 10 percent of its workforce in January of 2019, SpaceX is flourishing. In May 2019, its valuation had risen to $33.3 billion and reported annual revenue exceeded $2 billion. It also earned global media coverage from launching Musk’s Tesla Roadster into space, recently completed a test flight of its Crew Dragon space vehicle, and cemented multiple new payload contracts.

By engaging with users on social media and through standard storytelling mediums, Thinx and SpaceX bolster customer loyalty and brand engagement.

Consideration

Modern consumers argue that ignorance is not bliss. When users are considering converting with a brand, 86 percent of consumers say transparency is a deciding factor. Transparency remains crucial even after they convert, with 85 percent of users saying they’ll support a transparent brand during a PR crisis.

Your brand must be open, clear, and honest with users; there is no longer another viable option.

So how do you remain transparent while trying to sell someone a product? One solution employed by REI and Everlane is to be openly accountable to your brand and your users via the origin stories and product insights pillars.

REI, a national outdoor equipment retailer, created a stewardship program that behaves as a multifaceted origin story. The program’s content highlights the company’s history and manufacturing policies, and it lets users dive into the nitty-gritty details about its factories, partnerships, product production methods, manufacturing ethics, and carbon footprint.

REI also employs a classic content hub strategy to let customers find the program and explore its relevant information. From a single landing page, users can easily find the program through the website’s global navigation and then navigate to every tangential topic the program encompasses.

REI also publishes an annual stewardship report, where users can learn intimate details about how the company makes and spends its money.

Everlane, a clothing company, is equally transparent about its supply chain. The company promotes an insider’s look into its global factories via product insights stories. These glimpses tell the personal narratives of factory employees and owners, and provide insights into the products manufactured and the materials used. Everlane also published details of how it complies with the California Transparency in Supply Chains Act to guarantee ethical working conditions throughout its supply chain, including refusing to partner with human traffickers.

The crucial quality that Everlane and REI share is they publicize their transparency and encourage users to explore the shared information. On each website, users can easily find information about the company’s transparency endeavors via the global navigation, social media campaigns, and product pages.

The consumer response to transparent brands like REI and Everlane is overwhelmingly positive. Customers are willing to pay price premiums for the additional transparency, which gives them comfort by knowing they’re purchasing ethical products.

REI’s ownership model has further propelled the success of its transparency by using it to create unwavering customer engagement and loyalty. As a co-op where customers can “own” part of the company for a one-time $20 membership fee, REI is beholden to its members, many of whom pay close attention to its supply chain and the brands REI partners with.

After a deadly school shooting in Parkland, Florida, REI members urged the company to refuse to carry CamelBak products because the brand’s parent company manufactures assault-style weapons. Members argued the partnership violated REI’s supply chain ethics. REI listened and halted orders with CamelBak. Members rejoiced and REI earned a significant amount of positive press coverage.

Conversion

Imagine you’ve started incorporating transparency throughout your company, and promote the results to users. Your brand also begins engaging users by telling vulnerable, meaningful stories via the three pillars. You’re seeing great engagement metrics and customer feedback from these efforts, but not much else. So, how do you get your newly invested users to convert?

Provide users with a full-circle experience.

If you combine the three storytelling pillars with blatant transparency and actively promote your efforts, users often transition from the consideration stage into the conversion stage. Best of all, when users convert with a company that already earned their trust on an emotional level, they’re more likely to remain loyal to the brand and emotionally invested in its future.

The crucial step in combining the three pillars is consistency. Your brand’s stories must always be authentic and your content must always be transparent. The outdoor clothing brand Patagonia is among the most popular and successful companies to maintain this consistency and excel with this strategy.

Patagonia is arguably the most vocal and aggressive clothing retailer when it comes to environmental stewardship and ethical manufacturing.

In some cases, the company tells users not to buy its clothing because rampant consumerism harms the environment too much, something the company cares about more than profits. This level of radical transparency and vulnerability skyrocketed the company’s popularity among environmentally-conscious consumers.

In 2011, Patagonia took out a full-page Black Friday ad in the New York Times with the headline “Don’t Buy This Jacket.” In the ad, Patagonia talks about the environmental toll manufacturing clothes requires.

“Consider the R2 Jacket shown, one of our best sellers. To make it required 135 liters of water, enough to meet the daily needs (three glasses a day) of 45 people. Its journey from its origin as 60 percent recycled polyester to our Reno warehouse generated nearly 20 pounds of carbon dioxide, 24 times the weight of the finished product. This jacket left behind, on its way to Reno, two-thirds [of] its weight in waste.”

The ad encourages users to not buy any new Patagonia clothing if their old, ratty clothes can be repaired. To help, Patagonia launched a supplementary subdomain to its e-commerce website to support its Common Thread Initiative, which eventually got rebranded as the Worn Wear program.

Patagonia’s Worn Wear subdomain gets users to engage with the company about causes each party cares about. Through Worn Wear, Patagonia will repair your old gear for free. If you’d rather have new gear, you can instead sell the worn-out clothing to Patagonia, and they’ll repair it and then resell the product at a discount. This interaction encourages loyalty and repeat brand-user engagement.

In addition, the navigation on Patagonia’s main website practically begs users to learn about the brand’s non-profit initiatives and its commitment to ethical manufacturing.

Today, Patagonia is among the most respected, profitable, and trusted consumer brands in the United States.

Retention

Content strategy extends through nearly every aspect of the marketing stack, including ad campaigns, which take a more controlled approach to vulnerability and transparency. To target users in the retention stage and keep them invested in your brand, your goal is to create content using the customers’ life experiences pillar to amplify the emotional bond and brand loyalty that vulnerability creates.

Always took this approach and ended up with one of its most successful social media campaigns.

In June 2014, Always launched its #LikeAGirl campaign to empower adolescent and teenage girls by transforming the phrase “like a girl” from a slur into a meaningful and positive statement.

The campaign is centered on a video in which Always tasked children, teenagers, and adults to behave “like a girl” by running, punching, and throwing while mimicking their perception of how a girl performs the activity. Young girls performed the tasks wholeheartedly and with gusto, while boys and adults performed overly feminine and vain characterizations. The director then challenged the person on their portrayal, breaking down what doing things “like a girl” truly means. The video ends with a powerful, heart-swelling statement:

“If somebody else says that running like a girl, or kicking like a girl, or shooting like a girl is something you shouldn’t be doing, that’s their problem. Because if you’re still scoring, and you’re still getting to the ball in time, and you’re still being first...you’re doing it right. It doesn’t matter what they say.”

This customer story campaign put the vulnerability girls feel during puberty front and center so the topic would resonate with users and give the brand a powerful, relevant, and purposeful role in this connection, according to an Institute for Public Relations campaign analysis.

Consequently, the #LikeAGirl campaign was a rousing success and blew past the KPIs Always established. Initially, Always determined an “impactful launch” for the video meant 2 million video views and 250 million media impressions, the analysis states.

Five years later, the campaign video has more than 66.9 million views and 42,700 comments on YouTube, with more than 85 percent of users reacting positively. Here are a few additional highlights the analysis document points out:

  • Eighty-one percent of women ages 16–24 support Always in creating a movement to reclaim “like a girl” as a positive and inspiring statement.
  • More than 1 million people shared the video.
  • Thirteen percent of users created user-generated content about the campaign.
  • The #LikeAGirl program achieved 4.5 billion global impressions.
  • The campaign received 290 million social impressions, with 133,000 social mentions, and it caused a 195.3 percent increase in the brand’s Twitter followers.

Among the reasons the #LikeAGirl content was so successful is that it aligned with Brené Brown’s concept that experiencing vulnerability creates a connection centered on powerful, shared emotions. Always then amplified the campaign’s effectiveness by using those emotions to encourage specific user behavior on social media.

How do you know if you’re making vulnerable content?

Designing a vulnerability-focused content strategy campaign begins by determining what kind of story you want to tell, why you want to tell it, why that story matters, and how that story helps you or your users achieve a goal.

When you’re brainstorming topics, the most important factor is that you need to care about the stories you’re telling. These tales need to be meaningful because if you’re weaving a narrative that isn’t important to you, it shows. And ultimately, why do you expect your users to care about a subject if you don’t?

Let’s say you’re developing a content campaign for a nonprofit, and you want to use your brand’s emotional identity to connect with users. You have a handful of possible narratives but you’re not sure which one will best unlock the benefits of vulnerability. In a Medium post about telling vulnerable stories, Cayla Vidmar presents a list of seven self-reflective questions that can reveal what narrative to choose and why.

If you can answer each of Vidmar’s questions, you’re on your way to creating a great story that can connect with users on a level unrivaled by other methods. Here’s what you should ask yourself:

  • What meaning is there in my story?
  • Can my story help others?
  • How can it help others?
  • Am I willing to struggle and be vulnerable in that struggle (even with strangers)?
  • How has my story shaped my worldview (what has it made me believe)?
  • What good have I learned from my story?
  • If other people discovered this good from their story, would it change their lives?

While you’re creating narratives within the three pillars, refer back to Vidmar’s list to maintain the proper balance between vulnerability and transparency.

What’s next?

You now know that vulnerability and transparency are an endless fountain of strength, not a weakness. Vulnerable content won’t make you or your brand look weak. Your customers won’t flee at the sight of imperfection. Being human and treating your users like humans isn’t a liability.

It’s time for your brand to embrace its untapped potential. Choose to be vulnerable, have the courage to tell meaningful stories about what matters most to your company and your customers, and overcome the fear of controlling how users will react to your content.

Origin story

Every origin story has six chapters:

  • the discovery of a problem or opportunity;
  • what caused this problem or opportunity;
  • the consequences of this discovery;
  • the solution to these consequences;
  • lessons learned during the process;
  • and next steps.

Customers’ life experiences

Every customer journey narrative has six chapters:

  • plot background to frame the customer’s experiences;
  • the customer’s journey;
  • how the brand plays into that journey (if applicable);
  • how the customer’s experiences changed them;
  • what the customer learned from this journey;
  • and how other people can use this information to improve their lives.

Product and service insights

Narratives about product and service insights have seven chapters:

  • an overview of the product/service;
  • how that product/service affects users;
  • why the product/service is important to the brand’s mission or to users;
  • what about this product/service failed or succeeded;
  • why did that success or failure happen;
  • what lessons did this scenario create;
  • and how are the brand and its users moving forward.

You have the tools and knowledge necessary to be transparent, create vulnerable content, and succeed. And we need to tell vulnerable stories because sharing our experiences and embracing our common connections matters. So go ahead, put yourself out into the open, and see how your customers respond.





Request with Intent: Caching Strategies in the Age of PWAs

Once upon a time, we relied on browsers to handle caching for us; as developers in those days, we had very little control. But then came Progressive Web Apps (PWAs), Service Workers, and the Cache API—and suddenly we have expansive power over what gets put in the cache and how it gets put there. We can now cache everything we want to… and therein lies a potential problem.

Media files—especially images—make up the bulk of average page weight these days, and it’s getting worse. In order to improve performance, it’s tempting to cache as much of this content as possible, but should we? In most cases, no. Even with all this newfangled technology at our fingertips, great performance still hinges on a simple rule: request only what you need and make each request as small as possible.

To provide the best possible experience for our users without abusing their network connection or their hard drive, it’s time to put a spin on some classic best practices, experiment with media caching strategies, and play around with a few Cache API tricks that Service Workers have hidden up their sleeves.

Best intentions

All those lessons we learned optimizing web pages for dial-up became super-useful again when mobile took off, and they continue to be applicable in the work we do for a global audience today. Unreliable or high latency network connections are still the norm in many parts of the world, reminding us that it’s never safe to assume a technical baseline lifts evenly or in sync with its corresponding cutting edge. And that’s the thing about performance best practices: history has borne out that approaches that are good for performance now will continue being good for performance in the future.

Before the advent of Service Workers, we could provide some instructions to browsers with respect to how long they should cache a particular resource, but that was about it. Documents and assets downloaded to a user’s machine would be dropped into a directory on their hard drive. When the browser assembled a request for a particular document or asset, it would peek in the cache first to see if it already had what it needed to possibly avoid hitting the network.

We have considerably more control over network requests and the cache these days, but that doesn’t excuse us from being thoughtful about the resources on our web pages.

Request only what you need

As I mentioned, the web today is lousy with media. Images and videos have become a dominant means of communication. They may convert well when it comes to sales and marketing, but they are hardly performant when it comes to download and rendering speed. With this in mind, each and every image (and video, etc.) should have to fight for its place on the page. 

A few years back, a recipe of mine was included in a newspaper story on cooking with spirits (alcohol, not ghosts). I don’t subscribe to the print version of that paper, so when the article came out I went to the site to take a look at how it turned out. During a recent redesign, the site had decided to load all articles into a nearly full-screen modal viewbox layered on top of their homepage. This meant requesting the article required requests for all of the assets associated with the article page plus all the contents and assets for the homepage. Oh, and the homepage had video ads—plural. And, yes, they auto-played.

I popped open DevTools and discovered the page had blown past 15 MB in page weight. Tim Kadlec had recently launched What Does My Site Cost?, so I decided to check out the damage. Turns out that the actual cost to view that page for the average US-based user was more than the cost of the print version of that day’s newspaper. That’s just messed up.

Sure, I could blame the folks who built the site for doing their readers such a disservice, but the reality is that none of us go to work with the goal of worsening our users’ experiences. This could happen to any of us. We could spend days scrutinizing the performance of a page only to have some committee decide to set that carefully crafted page atop a Times Square of auto-playing video ads. Imagine how much worse things would be if we were stacking two abysmally-performing pages on top of each other!

Media can be great for drawing attention when competition is high (e.g., on the homepage of a newspaper), but when you want readers to focus on a single task (e.g., reading the actual article), its value can drop from important to “nice to have.” Yes, studies have shown that images excel at drawing eyeballs, but once a visitor is on the article page, no one cares; we’re just making it take longer to download and more expensive to access. The situation only gets worse as we shove more media into the page. 

We must do everything in our power to reduce the weight of our pages, so avoid requests for things that don’t add value. For starters, if you’re writing an article about a data breach, resist the urge to include that ridiculous stock photo of some random dude in a hoodie typing on a computer in a very dark room.

Request the smallest file you can

Now that we’ve taken stock of what we do need to include, we must ask ourselves a critical question: How can we deliver it in the fastest way possible? This can be as simple as choosing the most appropriate image format for the content presented (and optimizing the heck out of it) or as complex as recreating assets entirely (for example, if switching from raster to vector imagery would be more efficient).

Offer alternate formats

When it comes to image formats, we don’t have to choose between performance and reach anymore. We can provide multiple options and let the browser decide which one to use, based on what it can handle.

You can accomplish this by offering multiple sources within a picture or video element. Start by creating multiple formats of the media asset. For example, with WebP and JPG, it’s likely that the WebP will have a smaller file size than the JPG (but check to make sure). With those alternate sources, you can drop them into a picture like this:

<picture>
  <source srcset="my.webp" type="image/webp">
  <img src="my.jpg" alt="Descriptive text about the picture.">
</picture>

Browsers that recognize the picture element will check the source element before making a decision about which image to request. If the browser supports the MIME type “image/webp,” it will kick off a request for the WebP format image. If not (or if the browser doesn’t recognize picture), it will request the JPG. 

The nice thing about this approach is that you’re serving the smallest image possible to the user without having to resort to any sort of JavaScript hackery.

You can take the same approach with video files:

<video controls>
  <source src="my.webm" type="video/webm">
  <source src="my.mp4" type="video/mp4">
  <p>Your browser doesn’t support native video playback,
    but you can <a href="my.mp4" download>download</a>
    this video instead.</p>
</video>

Browsers that support WebM will request the first source, whereas browsers that don’t—but do understand MP4 videos—will request the second one. Browsers that don’t support the video element will fall back to the paragraph about downloading the file.

The order of your source elements matters. Browsers will choose the first usable source, so if you specify an optimized alternative format after a more widely compatible one, the alternative format may never get picked up.  

Depending on your situation, you might consider bypassing this markup-based approach and handle things on the server instead. For example, if a JPG is being requested and the browser supports WebP (which is indicated in the Accept header), there’s nothing stopping you from replying with a WebP version of the resource. In fact, some CDN services—Cloudinary, for instance—come with this sort of functionality right out of the box.
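
If you go the server route yourself, the negotiation might look something like the following minimal sketch. It assumes a Node/Express server, a public/ directory of static assets, and a pre-generated .webp sibling for every JPG; none of those specifics come from the original article.

const express = require("express");
const fs = require("fs");
const path = require("path");

const app = express();

// Intercept JPG requests and upgrade them to WebP when the browser allows it.
app.get(/\.jpg$/, (req, res, next) => {
  const accepts = req.headers.accept || "";
  const webpPath = path.join(__dirname, "public", req.path.replace(/\.jpg$/, ".webp"));
  if (accepts.includes("image/webp") && fs.existsSync(webpPath)) {
    res.set("Vary", "Accept");   // caches must key on the Accept header
    res.type("image/webp");
    return res.sendFile(webpPath);
  }
  next(); // fall through to the static JPG
});

app.use(express.static(path.join(__dirname, "public")));
app.listen(8080);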

Offer different sizes

Formats aside, you may want to deliver alternate image sizes optimized for the current size of the browser’s viewport. After all, there’s no point loading an image that’s 3–4 times larger than the screen rendering it; that’s just wasting bandwidth. This is where responsive images come in.

Here’s an example:

<img src="medium.jpg"
  srcset="small.jpg 256w,
    medium.jpg 512w,
    large.jpg 1024w"
  sizes="(min-width: 30em) 30em, 100vw"
  alt="Descriptive text about the picture.">

There’s a lot going on in this super-charged img element, so I’ll break it down:

  • This img offers three size options for a given JPG: 256 px wide (small.jpg), 512 px wide (medium.jpg), and 1024 px wide (large.jpg). These are provided in the srcset attribute with corresponding width descriptors.
  • The src defines a default image source, which acts as a fallback for browsers that don’t support srcset. Your choice for the default image will likely depend on the context and general usage patterns. Often I’d recommend the smallest image be the default, but if the majority of your traffic is on older desktop browsers, you might want to go with the medium-sized image.
  • The sizes attribute is a presentational hint that informs the browser how the image will be rendered in different scenarios (its extrinsic size) once CSS has been applied. This particular example says that the image will be the full width of the viewport (100vw) until the viewport reaches 30 em in width (min-width: 30em), at which point the image will be 30 em wide. You can make the sizes value as complicated or as simple as you want; omitting it causes browsers to use the default value of 100vw.

You can even combine this approach with alternate formats and crops within a single picture.

All of this is to say that you have a number of tools at your disposal for delivering fast-loading media, so use them!

Defer requests (when possible)

Years ago, Internet Explorer 11 introduced a new attribute that enabled developers to de-prioritize specific img elements to speed up page rendering: lazyload. That attribute never went anywhere, standards-wise, but it was a solid attempt to defer image loading until images are in view (or close to it) without having to involve JavaScript.

There have been countless JavaScript-based implementations of lazy loading images since then, but recently Google also took a stab at a more declarative approach, using a different attribute: loading.

The loading attribute supports three values (“auto,” “lazy,” and “eager”) to define how a resource should be brought in. For our purposes, the “lazy” value is the most interesting because it defers loading the resource until it reaches a calculated distance from the viewport.

Adding that into the mix…

<img src="medium.jpg"
  srcset="small.jpg 256w,
    medium.jpg 512w,
    large.jpg 1024w"
  sizes="(min-width: 30em) 30em, 100vw"
  loading="lazy"
  alt="Descriptive text about the picture.">

This attribute offers a bit of a performance boost in Chromium-based browsers. Hopefully it will become a standard and get picked up by other browsers in the future, but in the meantime there’s no harm in including it because browsers that don’t understand the attribute will simply ignore it.

This approach complements a media prioritization strategy really well, but before I get to that, I want to take a closer look at Service Workers.

Manipulate requests in a Service Worker

Service Workers are a special type of Web Worker with the ability to intercept, modify, and respond to all network requests via the Fetch API. They also have access to the Cache API, as well as other asynchronous client-side data stores like IndexedDB for resource storage.

When a Service Worker is installed, you can hook into that event and prime the cache with resources you want to use later. Many folks use this opportunity to squirrel away copies of global assets, including styles, scripts, logos, and the like, but you can also use it to cache images for use when network requests fail.
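
A bare-bones install handler for that kind of priming might look like the sketch below; the cache name and asset paths are placeholders rather than a real manifest.

self.addEventListener( "install", event => {
  event.waitUntil(
    caches.open( "static-v1" ).then( cache =>
      // Illustrative paths; list whatever global assets your site relies on.
      cache.addAll([
        "/css/site.css",
        "/js/site.js",
        "/i/logo.svg",
        "/i/fallbacks/offline.svg"
      ])
    )
  );
});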

Keep a fallback image in your back pocket

Assuming you want to use a fallback in more than one networking recipe, you can set up a named function that will respond with that resource:

function respondWithFallbackImage() {
  return caches.match( "/i/fallbacks/offline.svg" );
}

Then, within a fetch event handler, you can use that function to provide that fallback image when requests for images fail at the network:

self.addEventListener( "fetch", event => {
  const request = event.request;
  if ( request.headers.get("Accept").includes("image") ) {
    event.respondWith(
      return fetch( request, { mode: 'no-cors' } )
        .then( response => {
          return response;
        })
        .catch(
          respondWithFallbackImage
        );
    );
  }
});

When the network is available, users get the expected behavior:

Social media avatars are rendered as expected when the network is available.

But when the network is interrupted, images will be swapped automatically for a fallback, and the user experience is still acceptable:

A generic fallback avatar is rendered when the network is unavailable.

On the surface, this approach may not seem all that helpful in terms of performance since you’ve essentially added an additional image download into the mix. With this system in place, however, some pretty amazing opportunities open up to you.

Respect a user’s choice to save data

Some users reduce their data consumption by entering a “lite” mode or turning on a “data saver” feature. When this happens, browsers will often send a Save-Data header with their network requests. 

Within your Service Worker, you can check for this preference and adjust your responses accordingly. First, you read the saveData flag from the Network Information API (the same signal that triggers the Save-Data header):

let save_data = false;
if ( 'connection' in navigator ) {
  save_data = navigator.connection.saveData;
}

Then, within your fetch handler for images, you might choose to preemptively respond with the fallback image instead of going to the network at all:

self.addEventListener( "fetch", event => {
  const request = event.request;
  if ( request.headers.get("Accept").includes("image") ) {
    event.respondWith(
      if ( save_data ) {
        return respondWithFallbackImage();
      }
      // code you saw previously
    );
  }
});

You could even take this a step further and tune respondWithFallbackImage() to provide alternate images based on what the original request was for. To do that you’d define several fallbacks globally in the Service Worker:

const fallback_avatar = "/i/fallbacks/avatar.svg",
      fallback_image = "/i/fallbacks/image.svg";

Both of those files should then be cached during the Service Worker install event:

return cache.addAll( [
  fallback_avatar,
  fallback_image
]);

Finally, within respondWithFallbackImage() you could serve up the appropriate image based on the URL being fetched. On my site, the avatars are pulled from Webmention.io, so I test for that.

function respondWithFallbackImage( url ) {
  const image = /webmention\.io/.test( url ) ? fallback_avatar
                                             : fallback_image;
  return caches.match( image );
}

With that change, I’ll need to update the fetch handler to pass in request.url as an argument to respondWithFallbackImage(). Once that’s done, when the network gets interrupted I end up seeing something like this:

A webmention that contains both an avatar and an embedded image will render with two different fallbacks when the Save-Data header is present.
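
For completeness, here is a sketch of what the fetch handler might look like once the Save-Data check and the URL-aware fallback are combined. It mirrors the snippets above; treat it as illustrative rather than the exact production code.

self.addEventListener( "fetch", event => {
  const request = event.request;
  if ( request.headers.get("Accept").includes("image") ) {
    // Respect Save-Data before touching the network at all
    if ( save_data ) {
      event.respondWith( respondWithFallbackImage( request.url ) );
      return;
    }
    event.respondWith(
      fetch( request, { mode: 'no-cors' } )
        .then( response => response )
        // pass the request URL along so the right fallback is chosen
        .catch( () => respondWithFallbackImage( request.url ) )
    );
  }
});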

Next, we need to establish some general guidelines for handling media assets—based on the situation, of course.

The caching strategy: prioritize certain media

In my experience, media—especially images—on the web tend to fall into three categories of necessity. At one end of the spectrum are elements that don’t add meaningful value. At the other end of the spectrum are critical assets that do add value, such as charts and graphs that are essential to understanding the surrounding content. Somewhere in the middle are what I would call “nice-to-have” media. They do add value to the core experience of a page but are not critical to understanding the content.

If you consider your media with this division in mind, you can establish some general guidelines for handling each, based on the situation. In other words, a caching strategy.

Media loading strategy, broken down by how critical an asset is to understanding an interface:

  • Critical media — load on a fast connection, when Save-Data is on, and on a slow connection; replace with a placeholder when there is no network.
  • Nice-to-have media — load on a fast connection; replace with a placeholder when Save-Data is on, on a slow connection, and when there is no network.
  • Non-critical media — remove from the content entirely.

When it comes to disambiguating the critical from the nice-to-have, it’s helpful to have those resources organized into separate directories (or similar). That way we can add some logic into the Service Worker that can help it decide which is which. For example, on my own personal site, critical images are either self-hosted or come from the website for my book. Knowing that, I can write regular expressions that match those domains:

const high_priority = [
    /aaron-gustafson.com/,
    /adaptivewebdesign.info/
  ];

With that high_priority variable defined, I can create a function that will let me know if a given image request (for example) is a high priority request or not:

function isHighPriority( url ) {
  // how many high priority links are we dealing with?
  let i = high_priority.length;
  // loop through each
  while ( i-- ) {
    // does the request URL match this regular expression?
    if ( high_priority[i].test( url ) ) {
      // yes, it’s a high priority request
      return true;
    }
  }
  // no matches, not high priority
  return false;
}

Adding support for prioritizing media requests only requires adding a new conditional into the fetch event handler, like we did with Save-Data. Your specific recipe for network and cache handling will likely differ, but here was how I chose to mix in this logic within image requests:

// Check the cache first
  // Return the cached image if we have one
  // If the image is not in the cache, continue

// Is this image high priority?
if ( isHighPriority( url ) ) {

  // Fetch the image
    // If the fetch succeeds, save a copy in the cache
    // If not, respond with an "offline" placeholder

// Not high priority
} else {

  // Should I save data?
  if ( save_data ) {

    // Respond with a "saving data" placeholder

  // Not saving data
  } else {

    // Fetch the image
      // If the fetch succeeds, save a copy in the cache
      // If not, respond with an "offline" placeholder
  }
}
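
Translated into actual code, that outline might look something like the sketch below. It reuses save_data, isHighPriority(), and respondWithFallbackImage() from earlier; the "images" cache name is an assumption, and for simplicity it serves the same fallback in both the offline and saving-data cases.

function fetchAndCacheImage( request ) {
  return fetch( request, { mode: "no-cors" } )
    .then( response => {
      // stash a copy for next time, then hand back the original response
      const copy = response.clone();
      caches.open( "images" )
        .then( cache => cache.put( request, copy ) );
      return response;
    })
    .catch( () => respondWithFallbackImage( request.url ) );
}

function handleImageRequest( request ) {
  // Check the cache first and return the cached image if we have one
  return caches.match( request ).then( cached => {
    if ( cached ) {
      return cached;
    }
    // High priority images always get a network attempt
    if ( isHighPriority( request.url ) ) {
      return fetchAndCacheImage( request );
    }
    // Everything else respects the user's Save-Data preference
    if ( save_data ) {
      return respondWithFallbackImage( request.url );
    }
    return fetchAndCacheImage( request );
  });
}

Inside the image branch of the fetch handler, you would then call event.respondWith( handleImageRequest( event.request ) ).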

We can apply this prioritized approach to many kinds of assets. We could even use it to control which pages are served cache-first vs. network-first.

Keep the cache tidy

The ability to control which resources are cached to disk is a huge opportunity, but it also carries with it an equally huge responsibility not to abuse it.

Every caching strategy is likely to differ, at least a little bit. If we’re publishing a book online, for instance, it might make sense to cache all of the chapters, images, etc. for offline viewing. There’s a fixed amount of content and—assuming there aren’t a ton of heavy images and videos—users will benefit from not having to download each chapter separately.

On a news site, however, caching every article and photo will quickly fill up our users’ hard drives. If a site offers an indeterminate number of pages and assets, it’s critical to have a caching strategy that puts hard limits on how many resources we’re caching to disk. 

One way to do this is to create several different blocks associated with caching different forms of content. The more ephemeral content caches can have strict limits around how many items can be stored. Sure, we’ll still be bound to the storage limits of the device, but do we really want our website to take up 2 GB of someone’s hard drive?

Here’s an example, again from my own site:

const sw_caches = {
  static: {
    name: `${version}static`
  },
  images: {
    name: `${version}images`,
    limit: 75
  },
  pages: {
    name: `${version}pages`,
    limit: 5
  },
  other: {
    name: `${version}other`,
    limit: 50
  }
}

Here I’ve defined several caches, each with a name used for addressing it in the Cache API and a version prefix. The version is defined elsewhere in the Service Worker, and allows me to purge all caches at once if necessary.
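
As a rough sketch, that purge might happen in the activate event by deleting any cache whose name doesn't start with the current prefix; the version value here is an assumption.

const version = "v1:";  // assumption: bumping this invalidates every cache at once

self.addEventListener( "activate", event => {
  event.waitUntil(
    caches.keys().then( keys =>
      Promise.all(
        keys
          // anything that doesn't carry the current prefix is stale
          .filter( key => !key.startsWith( version ) )
          .map( key => caches.delete( key ) )
      )
    )
  );
});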

With the exception of the static cache, which is used for static assets, every cache has a limit to the number of items that may be stored. I only cache the most recent 5 pages someone has visited, for instance. Images are limited to the most recent 75, and so on. This is an approach that Jeremy Keith outlines in his fantastic book Going Offline (which you should really read if you haven’t already—here’s a sample).

With these cache definitions in place, I can clean up my caches periodically and prune the oldest items. Here’s Jeremy’s recommended code for this approach:

function trimCache(cacheName, maxItems) {
  // Open the cache
  caches.open(cacheName)
  .then( cache => {
    // Get the keys and count them
    cache.keys()
    .then(keys => {
      // Do we have more than we should?
      if (keys.length > maxItems) {
        // Delete the oldest item and run trim again
        cache.delete(keys[0])
        .then( () => {
          trimCache(cacheName, maxItems)
        });
      }
    });
  });
}

We can trigger this code to run whenever a new page loads. By running it in the Service Worker, it runs in a separate thread and won’t drag down the site’s responsiveness. We trigger it by posting a message (using postMessage()) to the Service Worker from the main JavaScript thread:

// First check to see if you have an active service worker
if ( navigator.serviceWorker.controller ) {
  // Then add an event listener
  window.addEventListener( "load", function(){
    // Tell the service worker to clean up
    navigator.serviceWorker.controller.postMessage( "clean up" );
  });
}

The final step in wiring it all up is setting up the Service Worker to receive the message:

addEventListener("message", messageEvent => {
  if (messageEvent.data == "clean up") {
    // loop through the caches
    for ( let key in sw_caches ) {
      // if the cache has a limit
      if ( sw_caches[key].limit !== undefined ) {
        // trim it to that limit
        trimCache( sw_caches[key].name, sw_caches[key].limit );
      }
    }
  }
});

Here, the Service Worker listens for inbound messages and responds to the “clean up” request by running trimCache() on each of the cache buckets with a defined limit.

This approach is by no means elegant, but it works. It would be far better to make decisions about purging cached responses based on how frequently each item is accessed and/or how much room it takes up on disk. (Removing cached items based purely on when they were cached isn’t nearly as useful.) Sadly, we don’t have that level of detail when it comes to inspecting the caches…yet. I’m actually working to address this limitation in the Cache API right now.

Your users always come first

The technologies underlying Progressive Web Apps are continuing to mature, but even if you aren’t interested in turning your site into a PWA, there’s so much you can do today to improve your users’ experiences when it comes to media. And, as with every other form of inclusive design, it starts with centering on your users who are most at risk of having an awful experience.

Draw distinctions between critical, nice-to-have, and superfluous media. Remove the cruft, then optimize the bejeezus out of each remaining asset. Serve your media in multiple formats and sizes, prioritizing the smallest versions first to make the most of high latency and slow connections. If your users say they want to save data, respect that and have a fallback plan in place. Cache wisely and with the utmost respect for your users’ disk space. And, finally, audit your caching strategies regularly—especially when it comes to large media files.

Follow these guidelines, and every one of your users—from folks rocking a JioPhone on a rural mobile network in India to people on a high-end gaming laptop wired to a 10 Gbps fiber line in Silicon Valley—will thank you.





Usability Testing for Voice Content

It’s an important time to be in voice design. Many of us are turning to voice assistants in these times, whether for comfort, recreation, or staying informed. As the interest in interfaces driven by voice continues to reach new heights around the world, so too will users’ expectations and the best practices that guide their design.

Voice interfaces (also known as voice user interfaces or VUIs) have been reinventing how we approach, evaluate, and interact with user interfaces. The impact of conscious efforts to reduce close contact between people will continue to increase users’ expectations for the availability of a voice component on all devices, whether that entails a microphone icon indicating voice-enabled search or a full-fledged voice assistant waiting patiently in the wings for an invocation.

But voice interfaces present inherent challenges and surprises. In this relatively new realm of design, the intrinsic twists and turns in spoken language can make things difficult for even the most carefully considered voice interfaces. After all, spoken language is littered with fillers (in the linguistic sense of utterances like hmm and um), hesitations and pauses, and other interruptions and speech disfluencies that present puzzling problems for designers and implementers alike.

Once you’ve built a voice interface that introduces information or permits transactions in a rich way for spoken language users, the easy part is done. Nonetheless, voice interfaces also surface unique challenges when it comes to usability testing and robust evaluation of your end result. But there are advantages, too, especially when it comes to accessibility and cross-channel content strategy. The fact that voice-driven content lies on the opposite extreme of the spectrum from the traditional website confers it an additional benefit: it’s an effective way to analyze and stress-test just how channel-agnostic your content truly is.

The quandary of voice usability

Several years ago, I led a talented team at Acquia Labs to design and build a voice interface for Digital Services Georgia called Ask GeorgiaGov, which allowed citizens of the state of Georgia to access content about key civic tasks, like registering to vote, renewing a driver’s license, and filing complaints against businesses. Based on copy drawn directly from the frequently asked questions section of the Georgia.gov website, it was the first Amazon Alexa interface integrated with the Drupal content management system ever built for public consumption. Built by my former colleague Chris Hamper, it also offered a host of impressive features, like allowing users to request the phone number of individual government agencies for each query on a topic.

Designing and building web experiences for the public sector is a uniquely challenging endeavor due to requirements surrounding accessibility and frequent budgetary challenges. Out of necessity, governments need to be exacting and methodical not only in how they engage their citizens and spend money on projects but also in how they incorporate new technologies into the mix. For most government entities, voice is a completely different world, with many potential pitfalls.

At the outset of the project, the Digital Services Georgia team, led by Nikhil Deshpande, expressed their most important need: a single content model across all their content irrespective of delivery channel, as they only had resources to maintain a single rendition of each content item. Despite this editorial challenge, Georgia saw Alexa as an exciting opportunity to open new doors to accessible solutions for citizens with disabilities. And finally, because there were relatively few examples of voice usability testing at the time, we knew we would have to learn on the fly and experiment to find the right solution.

Eventually, we discovered that all the traditional approaches to usability testing that we’d executed for other projects were ill-suited to the unique problems of voice usability. And this was only the beginning of our problems.

How voice interfaces improve accessibility outcomes

Any discussion of voice usability must consider some of the most experienced voice interface users: people who use assistive devices. After all, accessibility has long been a bastion of web experiences, but it has only recently become a focus of those implementing voice interfaces. In a world where refreshable Braille displays and screen readers prize the rendering of web-based content into synthesized speech above all, the voice interface seems like an anomaly. But in fact, the exciting potential of Amazon Alexa for disabled citizens represented one of the primary motivations for Georgia’s interest in making their content available through a voice assistant.

Questions surrounding accessibility with voice have surfaced in recent years due to the perceived user experience benefits that voice interfaces can offer over more established assistive devices. Because screen readers make no exceptions when they recite the contents of a page, they can occasionally present superfluous information and force the user to wait longer than they’re willing. In addition, with an effective content schema, it can often be the case that voice interfaces facilitate pointed interactions with content at a more granular level than the page itself.

Though it can be difficult to convince even the most forward-looking clients of accessibility’s value, Georgia has been not only a trailblazer but also a committed proponent of content accessibility beyond the web. The state was among the first jurisdictions to offer a text-to-speech (TTS) phone hotline that read web pages aloud. After all, state governments must serve all citizens equally—no ifs, ands, or buts. And while these are still early days, I can see voice assistants becoming new conduits, and perhaps more efficient channels, by which disabled users can access the content they need.

Managing content destined for discrete channels

Whereas voice can improve accessibility of content, it’s seldom the case that web and voice are the only channels through which we must expose information. For this reason, one piece of advice I often give to content strategists and architects at organizations interested in pursuing voice-driven content is to never think of voice content in isolation. Siloing it is the same misguided approach that has led to mobile applications and other discrete experiences delivering orphaned or outdated content to a user expecting that all content on the website should be up-to-date and accessible through other channels as well.

After all, we’ve trained ourselves for many years to think of content in the web-only context rather than across channels. Our closely held assumptions about links, file downloads, images, and other web-based marginalia and miscellany are all aspects of web content that translate poorly to the conversational context—and particularly the voice context. Increasingly, we all need to concern ourselves with an omnichannel content strategy that straddles all those channels in existence today and others that will doubtlessly surface over the horizon.

With the advantages of structured content in Drupal 7, Georgia.gov already had a content model amenable to interlocution in the form of frequently asked questions (FAQs). While question-and-answer formats are convenient for voice assistants because queries for content tend to come in the form of questions, the returned responses likewise need to be as voice-optimized as possible.
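
To make that a bit more tangible, here’s a hypothetical sketch of a single, channel-agnostic FAQ item modeled as structured content. The field names and values are illustrative only; they are not the actual Georgia.gov or Drupal schema:

// Hypothetical structured FAQ item: one rendition of the content, written
// so it reads well both on a page and when spoken aloud, plus a related
// phone number the voice interface can offer at the end of an interaction.
const faqItem = {
  topic: "Driver's licenses",
  question: "How do I renew my driver's license?",
  answer: "You can renew your driver's license online, by mail, or in person at a customer service center.",
  relatedAgencyPhone: "555-0100" // placeholder, not a real number
};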

For Georgia.gov, the need to preserve a single rendition of all content across all channels led us to perform a conversational content audit, in which we read aloud all of the FAQ pages, putting ourselves in the shoes of a voice user, and identified key differences between how a user would interpret the written form and how they would parse the spoken form of that same content. After some discussion with the editorial team at Georgia, we opted to limit calls to action (e.g., “Read more”), links lacking clear context in surrounding text, and other situations confusing to voice users who cannot visualize the content they are listening to.

Here’s a table containing examples of how we converted certain text on FAQ pages to counterparts more appropriate for voice. Reading each sentence aloud, one by one, helped us identify cases where users might scratch their heads and say “Huh?” in a voice context.

Before: Learn how to change your name on your Social Security card.
After: The Social Security Administration can help you change your name on your Social Security card.

Before: You can receive payments through either a debit card or direct deposit. Learn more about payments.
After: You can receive payments through either a debit card or direct deposit.

Before: Read more about this.
After: In Georgia, the Family Support Registry typically pulls payments directly from your paycheck. However, you can send your own payments online through your bank account, your credit card, or Western Union. You may also send your payments by mail to the address provided in your court order.

In areas like content strategy and content governance, content audits have long been key to understanding the full picture of your content, but it doesn’t end there. Successful content audits can run the gamut from automated checks for orphaned content or overly wordy articles to more qualitative analyses of how content adheres to a specific brand voice or certain design standards. For a content strategy truly prepared for channels both here and still to come, a holistic understanding of how users will interact with your content in a variety of situations is a baseline requirement today.

Other conversational interfaces have it easier

Spoken language is inherently hard. Even the most gifted orators can have trouble with it. It’s littered with mistakes, starts and stops, interruptions, hesitations, and a vertiginous range of other uniquely human transgressions. The written word, because it’s committed instantly to a mostly permanent record, is tame, staid, and carefully considered in comparison.

When we talk about conversational interfaces, we need to draw a clear distinction between the range of user experiences that traffic in written language rather than spoken language. As we know from the relative solidity of written language and literature versus the comparative transience of spoken language and oral traditions, in many ways the two couldn’t be more different from one another. The implications for designers are significant because spoken language, from the user’s perspective, lacks a graphical equivalent to which those scratching their head can readily refer. We’re dealing with the spoken word and aural affordances, not pixels, written help text, or visual affordances.

Why written conversational interfaces are easier to evaluate

One of the privileges that chatbots and textbots enjoy over voice interfaces is the fact that by design, they can’t hide the previous steps users have taken. Any conversational interface user working in the written medium has access to their previous history of interactions, which can stretch back days, weeks, or months: the so-called backscroll. A flight passenger communicating with an airline through Facebook Messenger, for example, knows that they can merely scroll up in the chat history to confirm that they’ve already provided the company with their e-ticket number or frequent flyer account information.

This has outsize implications for information architecture and conversational wayfinding. Since chatbot users can consult their own written record, it’s much harder for things to go completely awry when they make a move they didn’t intend. Recollection is much more difficult when you have to remember what you said a few minutes ago off the top of your head rather than scrolling up to the information you provided a few hours or weeks ago. An effective chatbot interface may, for example, enable a user to jump back to a much earlier, specific place in a conversation’s history. Voice interfaces that live perpetually in the moment have no such luxury.

Eye tracking only works for visual components

In many cases, those who work with chatbots and messaging bots (especially those leveraging text messages or other messaging services like Facebook Messenger, Slack, or WhatsApp) have the unique privilege of benefiting from a visual component. Some conversational interfaces now insert other elements into the conversational flow between a machine and a person, such as embedded conversational forms (like SPACE10’s Conversational Form) that allow users to enter rich input or select from a range of possible responses.

The success of eye tracking in more traditional usability testing scenarios highlights its appropriateness for visual interfaces such as websites, mobile applications, and others. However, from the standpoint of evaluating voice interfaces that are entirely aural, eye tracking serves only the limited (but still interesting from a research perspective) purpose of assessing where the test subject is looking while speaking with an invisible interlocutor—not whether they are able to use the interface successfully. Indeed, eye tracking is only a viable option for voice interfaces that have some visual component, like the Amazon Echo Show.

Think-aloud and concurrent probing interrupt the conversational flow

A well-worn approach for usability testing is think-aloud, which allows for users working with interfaces to present their frequently qualitative impressions of interfaces verbally while interacting with the user experience in question. Paired with eye tracking, think-aloud adds considerable dimension to a usability test for visual interfaces such as websites and web applications, as well as other visually or physically oriented devices.

Another is concurrent probing (CP). Probing involves the use of questions to gather insights about the interface from users, and Usability.gov describes two types: concurrent, in which the researcher asks questions during interactions, and retrospective, in which questions only come once the interaction is complete.

Conversational interfaces that utilize written language rather than spoken language can still be well-suited to think-aloud and concurrent probing approaches, especially for the components in the interface that require manual input, like conversational forms and other traditional UI elements interspersed throughout the conversation itself.

But for voice interfaces, think-aloud and concurrent probing are highly questionable approaches and can catalyze a variety of unintended consequences, including accidental invocations of trigger words (such as Alexa mishearing “selected” as “Alexa”) and introduction of bad data (such as speech transcription registering both the voice interface and test subject). After all, in a hypothetical think-aloud or CP test of a voice interface, the user would be responsible for conversing with the chatbot while simultaneously offering up their impressions to the evaluator overseeing the test.

Voice usability tests with retrospective probing

Retrospective probing (RP), a lesser-known approach for usability testing, is seldom seen in web usability testing due to its chief weakness: the fact that we have awful memories and rarely remember what occurred mere moments earlier with anything that approaches total accuracy. (This might explain why the backscroll has joined the pantheon of rigid recordkeeping currently occupied by cuneiform, the printing press, and other means of concretizing information.)

For users of voice assistants lacking scrollable chat histories, retrospective probing introduces the potential for subjects to include false recollections in their assessments or to misinterpret the conclusion of their conversations. That said, retrospective probing permits the participant to take some time to form their impressions of an interface rather than dole out incremental tidbits in a stream of consciousness, as would more likely occur in concurrent probing.

What makes voice usability tests unique

Voice usability tests have several unique characteristics that distinguish them from web usability tests or other conversational usability tests, but some of the same principles unify both visual interfaces and their aural counterparts. As always, “test early, test often” is a mantra that applies here, as the earlier you can begin testing, the more robust your results will be. Having an individual to administer a test and another to transcribe results or watch for signs of trouble is also an effective best practice in settings beyond just voice usability.

Interference from poor soundproofing or external disruptions can derail a voice usability test even before it begins. Many large organizations will have soundproof rooms or recording studios available for voice usability researchers. For the vast majority of others, a mostly silent room will suffice, though absolute silence is optimal. In addition, many subjects, even those well-versed in web usability tests, may be unaccustomed to voice usability tests in which long periods of silence are the norm to establish a baseline for data.

How we used retrospective probing to test Ask GeorgiaGov

For Ask GeorgiaGov, we used the retrospective probing approach almost exclusively to gather a range of insights about how our users were interacting with voice-driven content. We endeavored to evaluate interactions with the interface early and diachronically. In the process, we asked each of our subjects to complete two distinct tasks that would require them to traverse the entirety of the interface by asking questions (conducting a search), drilling down into further questions, and requesting the phone number for a related agency. Though this would be a significant ask of any user working with a visual interface, the unidirectional focus of voice interface flows, by contrast, reduced the likelihood of lengthy accidental detours.

Here are a couple of example scenarios:

You have a business license in Georgia, but you’re not sure if you have to register on an annual basis. Talk with Alexa to find out the information you need. At the end, ask for a phone number for more information.

You’ve just moved to Georgia and you know you need to transfer your driver’s license, but you’re not sure what to do. Talk with Alexa to find out the information you need. At the end, ask for a phone number for more information.

We also peppered users with questions after the test concluded to learn about their impressions through retrospective probing:

  • “On a scale of 1–5, based on the scenario, was the information you received helpful? Why or why not?”
  • “On a scale of 1–5, based on the scenario, was the content presented clear and easy to follow? Why or why not?”
  • “What’s the answer to the question that you were tasked with asking?”

Because state governments also routinely deal with citizen questions having to do with potentially traumatic issues such as divorce and sexual harassment, we also offered the choice for participants to opt out of certain categories of tasks.

While this testing procedure yielded compelling results that indicated our voice interface was performing at the level it needed to despite its experimental nature, we also ran into considerable challenges during the usability testing process. Restoring Amazon Alexa to its initial state and troubleshooting issues on the fly proved difficult during the initial stages of the implementation, when bugs were still common.

In the end, we found that many of the same lessons that apply to more storied examples of usability testing were also relevant to Ask GeorgiaGov: the importance of testing early and testing often, the need for faithful yet efficient transcription, and the surprising staying power of bugs when integrating disparate technologies. Despite Ask GeorgiaGov’s many similarities to other interface implementations in terms of technical debt and the role of usability testing, we were overjoyed to hear from real Georgians whose engagement with their state government could not be more different from before.

Conclusion

Many of us may be building interfaces for voice content to experiment with newfangled channels, or to serve disabled people and people newer to the web. Now, such interfaces are necessities for many others, especially as social distancing practices continue to take hold worldwide. Nonetheless, it’s crucial to keep in mind that voice should be only one component of a channel-agnostic strategy equipped for content ripped away from its usual contexts. Building usable voice-driven content experiences can teach us a great deal about how we should envisage our milieu of content and its future in the first place.

Gone are the days when we could write a page in HTML and call it a day; content now needs to be rendered through synthesized speech, augmented reality overlays, digital signage, and other environments where users will never even touch a personal computer. By focusing on structured content first and foremost with an eye toward moving past our web-based biases in developing our content for voice and others, we can better ensure the effectiveness of our content on any device and in any form factor.

Eight months after we finished building Ask GeorgiaGov in 2017, we conducted a retrospective to inspect the logs amassed over the past year. The results were striking. Vehicle registration, driver’s licenses, and the state sales tax comprised the most commonly searched topics. 79.2% of all interactions were successful, an achievement for one of the first content-driven Alexa skills in production, and 71.2% of all interactions led to the issuance of a phone number that users could call for further information.

But deep in the logs we implemented for the Georgia team’s convenience, we found a number of perplexing 404 Not Found errors related to a search term that kept being recorded over and over again as “Lawson’s.” After some digging and consulting the native Georgians in the room, we discovered that one of our dear users with a particularly strong drawl was repeatedly pronouncing “license” in her native dialect to no avail.

As this anecdote highlights, just as no user experience can be truly perfect for everyone, voice content is an environment where imperfections can highlight considerations we missed in developing cross-channel content. And just as we have much to learn when it comes to the new shapes content can take as it jumps off the screen and out the window, it seems our voice interfaces still have a ways to go before they take over the world too.

Special thanks to Nikhil Deshpande for his feedback during the writing process.




tent

Concern over Akhil Gogoi’s prolonged detention





tent

Science of Ashwagandha: preventive and therapeutic potentials / Sunil C. Kaul, Renu Wadhwa, editors

Online Resource




tent

Pharmacotherapeutic Potential of Natural Products in Neurological Disorders / Amritpal Singh Saroya, Jaswinder Singh

Online Resource




tent

Natural products as source of molecules with therapeutic potential: research & development, challenges and perspectives / editor, Valdir Cechinel Filho

Online Resource




tent

Cross-border tourism in protected areas: potentials, pitfalls and perspectives / Marius Mayer [and 4 others]

Online Resource




tent

Enterprise content and search management for building digital platforms / Shailesh Shivakumar

Online Resource




tent

Learning with Lean: unleashing the potential for sustainable competitive advantage / James Zurn, Perry Mulligan

Online Resource




tent

La grande illusion [videorecording] / Realisations d'art cinématographique présentent ; CANAL+ Image International ; réalisation de Jean Renoir




tent

Media/society : technology, industries, content, and users / David Croteau (Virginia Commonwealth University), William Hoynes (Vassar College)

Croteau, David, author




tent

Television tension programmes : a study based on a content analysis of western, crime, and adventure programmes televised by Melbourne stations, 1960-61 / by David Martin

Martin, David




tent

Media strategies : managing content, platforms and relationships / Jane Johnston & Katie Rowney

Johnston, Jane, 1961- author




tent

Machine learning and medical engineering for cardiovascular health and intravascular imaging and computer assisted stenting: first International Workshop, MLMECH 2019, and 8th Joint International Workshop, CVII-STENT 2019, held in conjunction with MICCAI

Online Resource




tent

Sorption of sulfamethoxazole on biochars of varying mineral content

Environ. Sci.: Processes Impacts, 2020, Advance Article
DOI: 10.1039/D0EM00102C, Paper
Jing Li, Yihui Chen, Liping He, Ni Liang, Lin Wang, Jing Zhao, Bo Pan
Negative charge assisted H-bond played a significant role in sulfamethoxazole sorption for low-temperature biochars (200–300 °C), while Ca-containing mineral content facilitated SMX sorption as pyrolysis temperature increased.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




tent

Influence of climate and biological variables on temporal trends of persistent organic pollutants in Arctic char and ringed seals from Greenland

Environ. Sci.: Processes Impacts, 2020, 22,993-1005
DOI: 10.1039/C9EM00561G, Paper
Frank Rigét, Katrin Vorkamp, Igor Eulaers, Rune Dietz
Climate change may affect temporal trends of persistent organic pollutants (POPs) in Arctic wildlife.
The content of this RSS Feed (c) The Royal Society of Chemistry




tent

Participatory science for coastal water quality: freshwater plume mapping and volunteer retention in a randomized informational intervention

Environ. Sci.: Processes Impacts, 2020, 22,918-929
DOI: 10.1039/C9EM00571D, Paper
Wiley C. Jennings, Sydney Cunniff, Kate Lewis, Hailey Deres, Dan R. Reineman, Jennifer Davis, Alexandria B. Boehm
This study presents a novel framework for estimating safe swimming distances at beaches and is the first participatory environmental science study to experimentally test strategies for increasing volunteer retention.
The content of this RSS Feed (c) The Royal Society of Chemistry




tent

[ASAP] Fragment Exchange Potential for Realizing Pauli Deformation of Interfragment Interactions

The Journal of Physical Chemistry Letters
DOI: 10.1021/acs.jpclett.0c00933




tent

[ASAP] The Potential of Overlayers on Tin-based Perovskites for Water Splitting

The Journal of Physical Chemistry Letters
DOI: 10.1021/acs.jpclett.0c00964




tent

[ASAP] A Single Enzyme Mediates the “Quasi-Living” Formation of Multiblock Copolymers with a Broad Biomedical Potential

Biomacromolecules
DOI: 10.1021/acs.biomac.0c00126




tent

[ASAP] Functionalization of Cellulose Nanocrystals with POEGMA Copolymers via Copper-Catalyzed Azide–Alkyne Cycloaddition for Potential Drug-Delivery Applications

Biomacromolecules
DOI: 10.1021/acs.biomac.9b01713




tent

Frequency in language: memory, attention and learning / Dagmar Divjak

Dewey Library - P128.F73 D58 2019




tent

Reorganising grammatical variation: diachronic studies in the retention, redistribution and refunctionalisation of linguistic variants / edited by Antje Dammel, Matthias Eitelmann, Mirjam Schmuck

Hayden Library - P120.V37 R45 2018




tent

India’s diversity makes content moderation tough

India had 504 million active internet users – who logged onto the Web at least once in the previous month – at the end of November 2019, according to a recent report by the Internet and Mobile Association of India (IAMAI).




tent

Ice worlds of the solar system: their tortured landscapes and biological potential / Michael Carroll

Online Resource




tent

Africa development indicators.: the potential, the problem, the promise.

Online Resource




tent

Why do so many incompetent men become leaders?: (and how to fix it) / Tomas Chamorro-Premuzic

Dewey Library - HD6060.6.C47 2019




tent

Counter mentor leadership: how to unlock the potential of the 4-generation workplace / Kelly Riggs, Robby Riggs

Dewey Library - HD57.7.R537 2018




tent

The Wiley Blackwell Handbook of the Psychology of Recruitment, Selection and Employee Retention






tent

Genes, brains, and human potential: the science and ideology of intelligence / Ken Richardson

Hayden Library - BF431.R4125 2017




tent

The unity of perception: content, consciousness, evidence / Susanna Schellenberg

Hayden Library - BF311.S35 2018




tent

Processes of visuospatial attention and working memory / Timothy Hodgson, editor

Online Resource




tent

[ASAP] Lithium–Oxygen Batteries and Related Systems: Potential, Status, and Future

Chemical Reviews
DOI: 10.1021/acs.chemrev.9b00609




tent

[ASAP] Multireference Electron Correlation Methods: Journeys along Potential Energy Surfaces

Chemical Reviews
DOI: 10.1021/acs.chemrev.9b00496




tent

Tenth International Conference on Computational Electromagnetics (CEM 2019) [electronic journal].




tent

Proceedings Tenth IEEE International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises. WET ICE 2001 [electronic journal].

IEEE Computer Society




tent

Proceedings. Tenth IEEE International Symposium on Distributed Simulation and Real-Time Applications [electronic journal].

IEEE Computer Society




tent

Proceedings IEEE Workshop on Content-Based Access of Image and Video Libraries (CBAIVL 2001) [electronic journal].

IEEE Computer Society




tent

Proceedings. Advanced Industrial Conference on Telecommunications/Service Assurance with partial and Intermittent Resources Telecommunications Workshop [electronic journal].

IEEE Computer Society




tent

Jurnal Pertahanan: Media Informasi tentang Kajian dan Strategi Pertahanan yang Mengedepankan Identity, Nasionalism & Integrity [electronic journal].




tent

First International Conference on Automated Production of Cross Media Content for Multi-Channel Distribution. AXMEDIS 2005 [electronic journal].

IEEE Computer Society




tent

2019 Tenth International Green and Sustainable Computing Conference (IGSC) [electronic journal].

IEEE / Institute of Electrical and Electronics Engineers Incorporated