
Semantics to Screen Readers

As a child of the ’90s, one of my favorite movie quotes is from Harriet the Spy: “there are as many ways to live as there are people in this world, and each one deserves a closer look.” Likewise, there are as many ways to browse the web as there are people online. We each bring unique context to our web experience based on our values, technologies, environments, minds, and bodies.

Assistive technologies (ATs), which are hardware and software that help us perceive and interact with digital content, come in diverse forms. ATs can use a whole host of user input, ranging from clicks and keystrokes to minor muscle movements. ATs may also present digital content in a variety of forms, such as Braille displays, color-shifted views, and decluttered user interfaces (UIs).

One more commonly known type of AT is the screen reader. Programs such as JAWS, Narrator, NVDA, and VoiceOver can take digital content and present it to users through voice output, may display this output visually on the user’s screen, and can have Braille display and/or screen magnification capabilities built in.

If you make websites, you may have tested your sites with a screen reader. But how do these and other assistive programs actually access your content? What information do they use? We’ll take a detailed step-by-step view of how the process works. (For simplicity we’ll continue to reference “browsers” and “screen readers” throughout this article. These are essentially shorthands for “browsers and other applications,” and “screen readers and other assistive technologies,” respectively.)

The semantics-to-screen-readers pipeline

Accessibility application programming interfaces (APIs) create a useful link between user applications and the assistive technologies that wish to interact with them. Accessibility APIs facilitate communicating accessibility information about user interfaces (UIs) to the ATs. The API expects information to be structured in a certain way, so that whether a button is properly marked up in web content or is sitting inside a native app taskbar, a button is a button is a button as far as ATs are concerned. That said, screen readers and other ATs can do some app-specific handling if they wish.

On the web specifically, there are some browser and screen reader combinations where accessibility API information is supplemented by access to DOM structures. For this article, we’ll focus specifically on accessibility APIs as a link between web content and the screen reader. Here’s the breakdown of how web content reaches screen readers via accessibility APIs:
  1. The web developer uses host language markup (HTML, SVG, etc.), and potentially roles, states, and properties from the ARIA suite where needed, to provide the semantics of their content. Semantic markup communicates what type an element is, what content it contains, what state it’s in, etc.
  2. The browser rendering engine (alternatively referred to as a “user agent”) takes this information and maps it into an accessibility API. Different accessibility APIs are available on different operating systems, so a browser that is available on multiple platforms should support multiple accessibility APIs. Accessibility API mappings are maintained on a lower level than web platform APIs, so web developers don’t directly interact with accessibility APIs.
  3. The accessibility API includes a collection of interfaces that browsers and other apps can plumb into, and generally acts as an intermediary between the browser and the screen reader. Accessibility APIs provide interfaces for representing the structure, relationships, semantics, and state of digital content, as well as means to surface dynamic changes to said content. Accessibility APIs also allow screen readers to retrieve and interact with content via the API. Again, web developers don’t interact with these APIs directly; the rendering engine handles translating web content into information useful to accessibility APIs.

Examples of accessibility APIs

The screen reader uses client-side methods from these accessibility APIs to retrieve and handle information exposed by the browser. In browsers where direct access to the Document Object Model (DOM) is permitted, some screen readers may also take additional information from the DOM tree. A screen reader can also interact with apps that use differing accessibility APIs. No matter where they get their information, screen readers can dream up any interaction modes they want to provide to their users (I’ve provided links to screen reader commands at the end of this article). Testing by site creators can help identify content that feels awkward in a particular navigation mode, such as multiple links with the same text (“Learn more”).

Example of this pipeline: surfacing a button element to screen reader users

Let’s suppose for a moment that a screen reader wants to understand what object is next in the accessibility tree (which I’ll explain further in the next section), so it can surface that object to the user as they navigate to it. The flow will go a little something like this:
Diagram illustrating the steps involved in presenting the next object in a document; detailed list follows
  1. The screen reader requests information from the API about the next accessible object, relative to the current object.
  2. The API (as an intermediary) passes along this request to the browser.
  3. At some point, the browser references DOM and style information, and discovers that the relevant element is a non-hidden button: <button>Do a thing</button>.
  4. The browser maps this HTML button into the format the API expects, such as an accessible object with various properties: Name: Do a thing, Role: Button.
  5. The API returns this information from the browser to the screen reader.
  6. The screen reader can then surface this object to the user, perhaps stating “Button, Do a thing.”
Suppose that the screen reader user would now like to “click” this button. Here’s how their action flows all the way back to web content:
Diagram illustrating the steps involved in routing a screen reader click to web content; detailed list follows
  1. The user provides a particular screen reader command, such as a keystroke or gesture.
  2. The screen reader calls a method into the API to invoke the button.
  3. The API forwards this interaction to the browser.
  4. How a browser may respond to incoming interactions depends on the context, but in this case the browser can raise this as a “click” event through web APIs. The browser should give no indication that the click came from an assistive technology, as doing so would violate the user’s right to privacy.
  5. The web developer has registered a JavaScript event listener for clicks; their callback function is now executed as if the user clicked with a mouse.
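From the page’s side, nothing special is required to participate in this flow. A minimal sketch (my own, not from the original article) of such a listener:
<button>Do a thing</button>
<script>
  // This callback cannot tell (and should not know) whether the click
  // originated from a mouse, a keyboard, or a screen reader command.
  document.querySelector("button").addEventListener("click", () => {
    console.log("Button was invoked");
  });
</script>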
Now that we have a general sense of the pipeline, let’s go into a little more detail on the accessibility tree.

The accessibility tree

Dev Tools in Microsoft Edge showing the DOM tree and accessibility tree side by side; there are more nodes in the DOM tree
The accessibility tree is a hierarchical representation of elements in a UI or document, as computed for an accessibility API. In modern browsers, the accessibility tree for a given document is a separate, parallel structure to the DOM tree. “Parallel” does not necessarily mean there is a 1:1 match between the nodes of these two trees. Some elements may be excluded from the accessibility tree, for example if they are hidden or are not semantically useful (think non-focusable wrapper divs without any semantics added by a web developer).

This idea of a hierarchical structure is somewhat of an abstraction. The definition of what exactly an accessibility tree is in practice has been debated and partially defined in multiple places, so implementations may differ in various ways. For example, it’s not actually necessary to generate accessible objects for every element in the DOM whenever the DOM tree is constructed. As a performance consideration, a browser could choose to deal with only a subset of objects and their relationships at a time—that is, however much is necessary to fulfill the requests coming from ATs. The rendering engine could make these computations during all user sessions, or only do so when assistive technologies are actively running.

Generally speaking, modern web browsers wait until after style computation to build up any accessible objects. Browsers wait in part because generated content (such as ::before and ::after) can contain text that can participate in calculation of the accessible object’s name. CSS styles can also impact accessible objects in other various ways: text styling can come through as attributes on accessible text ranges. Display property values can impact the computation of line text ranges. These are just a few ways in which style can impact accessibility semantics.

Browsers may also use different structures as the basis for accessible object computation. One rendering engine may walk the DOM tree and cross-reference style computations to build up parallel tree structures; another engine may use only the nodes that are available in a style tree in order to build up their accessibility tree.

User agent participants in the standards community are currently thinking through how we can better document our implementation details, and whether it might make sense to standardize more of these details further down the road. Let’s now focus on the branches of this tree, and explore how individual accessibility objects are computed.

Building up accessible objects

From API to API, an accessible object will generally include a few things:
  • Role, or the type of accessible object (for example, Button). The role tells a user how they can expect to interact with the control. It is typically presented when screen reader focus moves onto the accessible object, and it can be used to provide various other functionalities, such as skipping around content via one type of object.
  • Name, if specified. The name is an (ideally short) identifier that better helps the user identify and understand the purpose of an accessible object. The name is often presented when screen focus moves to the object (more on this later), can be used as an identifier when presenting a list of available objects, and can be used as a hook for functionalities such as voice commands.
  • Description and/or help text, if specified. We’ll use “Description” as a shorthand. The Description can be considered supplemental to the Name; it’s not the main identifier but can provide further information about the accessible object. Sometimes this is presented when moving focus to the accessible object, sometimes not; this variation depends on both the screen reader’s user experience design and the user’s chosen verbosity settings.
  • Properties and methods surfacing additional semantics. For simplicity’s sake, we won’t go through all of these. For your awareness, properties can include details like layout information or available interactions (such as invoking the element or modifying its value).
Let’s walk through an example using markup for a simple mood tracker. We’ll use simplified property names and values, because these can differ between accessibility APIs.
<form>
  <label for="mood">On a scale of 1–10, what is your mood today?</label>
  <input id="mood" type="range"
       min="1" max="10" value="5"
       aria-describedby="helperText" />
  <p id="helperText">Some helpful pointers about how to rate your mood.</p>
  <!-- Using a div with button role for the purposes of showing how the accessibility tree is created. Please use the button element! -->
  <div tabindex="0" role="button">Log Mood</div>
</form>
First up is our form element. This form doesn’t have any attributes that would give it an accessible Name, and a form landmark without a Name isn’t very useful when jumping between landmarks. Therefore, HTML mapping standards specify that it should be mapped as a group. Here’s the beginning of our tree:
  • Role: Group
Next up is the label. This one doesn’t have an accessible Name either, so we’ll just nest it as an object of role “Label” underneath the form:
  • Role: Group
    • Role: Label
Let’s add the range input, which will map into various APIs as a “Slider.” Due to the relationship created by the for attribute on the label and id attribute on the input, this slider will take its Name from the label contents. The aria-describedby attribute is another id reference and points to a paragraph with some text content, which will be used for the slider’s Description. The slider object’s properties will also store “labelledby” and “describedby” relationships pointing to these other elements. And it will specify the current, minimum, and maximum values of the slider. If one of these range values were not available, ARIA standards specify what should be the default value. Our updated tree:
  • Role: Group
    • Role: Label
    • Role: Slider
      Name: On a scale of 1–10, what is your mood today?
      Description: Some helpful pointers about how to rate your mood.
      LabelledBy: [label object]
      DescribedBy: helperText
      ValueNow: 5
      ValueMin: 1
      ValueMax: 10
The paragraph will be added as a simple paragraph object (“Text” or “Group” in some APIs):
  • Role: Group
    • Role: Label
    • Role: Slider
      Name: On a scale of 1–10, what is your mood today?
      Description: Some helpful pointers about how to rate your mood.
      LabelledBy: [label object]
      DescribedBy: helperText
      ValueNow: 5
      ValueMin: 1
      ValueMax: 10
    • Role: Paragraph
The final element is an example of when role semantics are added via the ARIA role attribute. This div will map as a Button with the name “Log Mood,” as buttons can take their name from their children. This button will also be surfaced as “invokable” to screen readers and other ATs; special types of buttons could provide expand/collapse functionality (buttons with the aria-expanded attribute), or toggle functionality (buttons with the aria-pressed attribute). Here’s our tree now:
  • Role: Group
    • Role: Label
    • Role: Slider
      Name: On a scale of 1–10, what is your mood today?
      Description: Some helpful pointers about how to rate your mood.
      LabelledBy: [label object]
      DescribedBy: helperText
      ValueNow: 5
      ValueMin: 1
      ValueMax: 10
    • Role: Paragraph
    • Role: Button Name: Log Mood
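As an aside on those special button types mentioned above, here’s a quick sketch of what their markup might look like (my own example, not part of the mood tracker; the ids are illustrative):
<!-- Expand/collapse: ATs announce this as a collapsed (or expanded) button. -->
<button aria-expanded="false" aria-controls="mood-history">Show mood history</button>
<div id="mood-history" hidden>...</div>

<!-- Toggle: ATs announce this as a pressed (or unpressed) button. -->
<button aria-pressed="false">Pin this entry</button>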

On choosing host language semantics

Our sample markup mentions that it is preferred to use the HTML-native button element rather than a div with a role of “button.” Our buttonified div can be operated as a button via accessibility APIs, as the ARIA attribute is doing what it should—conveying semantics. But there’s a lot you can get for free when you choose native elements. In the case of button, that includes focus handling, user input handling, form submission, and basic styling. Aaron Gustafson has what he refers to as an “exhaustive treatise” on buttons in particular, but generally speaking it’s great to let the web platform do the heavy lifting of semantics and interaction for us when we can. ARIA roles, states, and properties are still a great tool to have in your toolbelt. Some good use cases for these are
  • providing further semantics and relationships that are not naturally expressed in the host language;
  • supplementing semantics in markup we perhaps don’t have complete control over;
  • patching potential cross-browser inconsistencies;
  • and making custom elements perceivable and operable to users of assistive technologies.
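To make the “free” functionality concrete, here’s a rough sketch (mine, assuming a simple logMood handler) of what our earlier buttonified div needs before it behaves like a native button for keyboard and AT users:
<!-- The native element is focusable, keyboard-operable, and can submit a form, all for free. -->
<button>Log Mood</button>

<!-- The div version has to reimplement all of that by hand. -->
<div tabindex="0" role="button" id="log-mood">Log Mood</div>
<script>
  const divButton = document.getElementById("log-mood");
  divButton.addEventListener("click", logMood);
  divButton.addEventListener("keydown", (event) => {
    // Native buttons activate on Enter and Space; divs do not.
    if (event.key === "Enter" || event.key === " ") {
      event.preventDefault(); // Keep Space from scrolling the page.
      logMood(event);
    }
  });
  function logMood () { /* ... */ }
</script>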

Notes on inclusion or exclusion in the tree

Standards define some rules around when user agents should exclude elements from the accessibility tree. Excluded elements can include those hidden by CSS, or the aria-hidden or hidden attributes; their children would be excluded as well. Children of particular roles (like checkbox) can also be excluded from the tree, unless they meet special exceptions. The full rules can be found in the “Accessibility Tree” section of the ARIA specification. That being said, there are still some differences between implementers, some of which include more divs and spans in the tree than others do.
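By way of illustration, here’s a small sketch (my own markup) of content that browsers would typically exclude from the tree under those rules:
<div style="display: none;">Hidden by CSS: excluded from the accessibility tree.</div>
<p hidden>Hidden by the hidden attribute: excluded.</p>
<div aria-hidden="true">
  Excluded by aria-hidden, <span>along with all of its children</span>.
</div>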

Notes on name and description computation

How names and descriptions are computed can be a bit confusing. Some elements have special rules, and some ARIA roles allow name computation from the element’s contents, whereas others do not. Name and description computation could probably be its own article, so we won’t get into all the details here (refer to “Further reading and resources” for some links). Some short pointers:
  • aria-label, aria-labelledby, and aria-describedby take precedence over other means of calculating name and description (see the sketch just after this list).
  • If you expect a particular HTML attribute to be used for the name, check the name computation rules for HTML elements. In your scenario, it may be used for the full description instead.
  • Generated content (::before and ::after) can participate in the accessible name when said name is taken from the element’s contents. That being said, web developers should not rely on pseudo-elements for non-decorative content, as this content could be lost when a stylesheet fails to load or user styles are applied to the page.
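As a quick illustration of that first pointer (the markup here is my own, not from the article):
<!-- Accessible Name: "Close dialog"; aria-label wins over the element's contents. -->
<button aria-label="Close dialog">X</button>

<!-- Accessible Name: "Search"; aria-labelledby wins over both aria-label and contents. -->
<span id="search-label">Search</span>
<button aria-labelledby="search-label" aria-label="This is ignored">Go</button>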
When in doubt, reach out to the community! Tag questions on social media with “#accessibility.” “#a11y” is a common shorthand; the “11” stands for “11 middle letters in the word ‘accessibility.’” If you find an inconsistency in a particular browser, file a bug! Bug tracker links are provided in “Further reading and resources.”

Not just accessible objects

Besides a hierarchical structure of objects, accessibility APIs also offer interfaces that allow ATs to interact with text. ATs can retrieve content text ranges, text selections, and a variety of text attributes that they can build experiences on top of. For example, if someone writes an email and uses color alone to highlight their added comments, the person reading the email could increase the verbosity of speech output in their screen reader to know when they’re encountering phrases with that styling. However, it would be better for the email author to include very brief text labels in this scenario. The big takeaway here for web developers is to keep in mind that the accessible name of an element may not always be surfaced in every navigation mode in every screen reader. So if your aria-label text isn’t being read out in a particular mode, the screen reader may be primarily using text interfaces and only conditionally stopping on objects. It may be worth your while to consider using text content—even if visually hidden—instead of text via an ARIA attribute. Read more thoughts on aria-label and aria-labelledby.
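A common way to provide such visually hidden text is a utility class; the class name and styles below are a widely used community pattern, not something from this article:
<style>
  /* Moves content out of view without display: none or visibility: hidden,
     so it stays in the accessibility tree and in text interfaces. */
  .visually-hidden {
    position: absolute;
    width: 1px;
    height: 1px;
    overflow: hidden;
    clip: rect(0 0 0 0);
    white-space: nowrap;
  }
</style>
<a href="/news/">
  Learn more<span class="visually-hidden"> about our latest news</span>
</a>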

Accessibility API events

It is the responsibility of browsers to surface changes to content, structure, and user input. Browsers do this by sending the accessibility API notifications about various events, which screen readers can subscribe to; again, for performance reasons, browsers could choose to send notifications only when ATs are active. Let’s suppose that a screen reader wants to surface changes to a live region (an element with role="alert" or aria-live):
Diagram illustrating the steps involved in announcing a live region via a screen reader; detailed list follows
  1. The screen reader subscribes to event notifications; it could subscribe to notifications of all types, or just certain types as categorized by the accessibility API. Let’s assume in our example that the screen reader is at least listening to live region change events.
  2. In the web content, the web developer changes the text content of a live region.
  3. The browser (provider) recognizes this as a live region change event, and sends the accessibility API a notification.
  4. The API passes this notification along to the screen reader.
  5. The screen reader can then use metadata from the notification to look up the relevant accessible objects via the accessibility API, and can surface the changes to the user.
ATs aren’t required to do anything with the information they retrieve. This can make it a bit trickier as a web developer to figure out why a screen reader isn’t announcing a change: it may be that notifications aren’t being raised (for example, because a browser is not sending notifications for a live region dynamically inserted into web content), or the AT is not subscribed or responding to that type of event.
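To ground the flow above in markup, here’s a minimal live region sketch (the id, function name, and text are my own):
<div aria-live="polite" id="save-status"></div>
<script>
  // Step 2 of the flow above: change the text content of a live region.
  // The region should exist in the DOM before it is updated; here we
  // update it once a (hypothetical) save completes, and a subscribed
  // screen reader can then announce "Draft saved."
  function onSaveComplete() {
    document.getElementById("save-status").textContent = "Draft saved.";
  }
</script>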

Testing with screen readers and dev tools

While conformance checkers can help catch some basic accessibility issues, it’s ideal to walk through your content manually using a variety of contexts, such as
  • using a keyboard only;
  • with various OS accessibility settings turned on;
  • and at different zoom levels and text sizes, and so on.
As you do this, keep in mind the Web Content Accessibility Guidelines (WCAG 2.1), which give general guidelines around expectations for inclusive web content. If you can test with users after your own manual test passes, all the better! Robust accessibility testing could probably be its own series of articles. In this one, we’ll go over some tips for testing with screen readers, and catching accessibility errors as they are mapped into the accessibility API in a more general sense.

Screen reader testing

Screen readers exist in many forms: some are pre-installed on the operating system and others are separate applications that in some cases are free to download. The WebAIM screen reader user survey provides a list of commonly used screen reader and browser combinations among survey participants. The “Further reading and resources” section at the end of this article includes full screen reader user docs, and Deque University has a great set of screen reader command cheat sheets that you can refer to. Some actions you might take to test your content:
  • Read the next/previous item.
  • Read the next/previous line.
  • Read continuously from a particular point.
  • Jump by headings, landmarks, and links.
  • Tab around focusable elements only.
  • Get a summary of all elements of a particular type within the page.
  • Search the page for specific content.
  • Use table-specific commands to interact with your tables.
  • Jump around by form field; are field instructions discoverable in this navigational mode?
  • Use keyboard commands to interact with all interactive elements. Are your JavaScript-driven interactions still operable with screen readers (which can intercept key input in certain modes)? WAI-ARIA Authoring Practices 1.1 includes notes on expected keyboard interactions for various widgets.
  • Try out anything that creates a content change or results in navigating elsewhere. Would it be obvious, via screen reader output, that a change occurred?

Tracking down the source of unexpected behavior

If a screen reader does not announce something as you’d expect, here are a few different checks you can run:
  • Does this reproduce with the same screen reader in multiple browsers on this OS? It may be an issue with the screen reader or your expectation may not match the screen reader’s user experience design. For example, a screen reader may choose to not expose the accessible name of a static, non-interactive element. Checking the user docs or filing a screen reader issue with a simple test case would be a great place to start.
  • Does this reproduce with multiple screen readers in the same browser, but not in other browsers on this OS? The browser in question may have an issue, there may be compatibility differences between browsers (such as a browser doing extra helpful but non-standard computations), or a screen reader’s support for a specific accessibility API may vary. Filing a browser issue with a simple test case would be a great place to start; if it’s not a browser bug, the developer can route it to the right place or make a code suggestion.
  • Does this reproduce with multiple screen readers in multiple browsers? There may be something you can adjust in your code, or your expectations may differ from standards and common practices.
  • How do this element’s accessibility properties and structure show up in browser dev tools?

Inspecting accessibility trees and properties in dev tools

Major modern browsers provide dev tools to help you observe the structure of the accessibility tree as well as a given element’s accessibility properties. By observing which accessible objects are generated for your elements and which properties are exposed on a given element, you may be able to pinpoint issues that are occurring either in front-end code or in how the browser is mapping your content into the accessibility API. Let’s suppose that we are testing this piece of code in Microsoft Edge with a screen reader:
<div class="form-row">
  <label>Favorite color</label>
  <input id="myTextInput" type="text" />
</div>
We’re navigating the page by form field, and when we land on this text field, the screen reader just tells us this is an “edit” control—it doesn’t mention a name for this element. Let’s check the tools for the element’s accessible name.
1. Inspect the element to bring up the dev tools.
The Microsoft Edge dev tools, with an input element highlighted in the DOM tree
2. Bring up the accessibility tree for this page by clicking the accessibility tree button (a circle with two arrows) or pressing Ctrl+Shift+A (Windows).
The accessibility tree button activated in the Microsoft Edge dev tools
Reviewing the accessibility tree is an extra step for this particular flow but can be helpful to do. When the Accessibility Tree pane comes up, we notice there’s a tree node that just says “textbox:,” with nothing after the colon. That suggests there’s not a name for this element. (Also notice that the div around our form input didn’t make it into the accessibility tree; it was not semantically useful.)
3. Open the Accessibility Properties pane, which is a sibling of the Styles pane. If we scroll down to the Name property—aha! It’s blank. No name is provided to the accessibility API. (Side note: some other accessibility properties are filtered out of this list by default; toggle the filter button—which looks like a funnel—in the pane to get the full list.)
The Accessibility Properties pane open in Microsoft Edge dev tools, in the same area as the Styles pane
4. Check the code. We realize that we didn’t associate the label with the text field; that is one strategy for providing an accessible name for a text input. We add for="myTextInput" to the label:
<div class="form-row">
  <label for="myTextInput">Favorite color</label>
  <input id="myTextInput" type="text" />
</div>
And now the field has a name:
The accessible Name property set to the value of “Favorite color” inside Microsoft Edge dev tools
In another use case, we have a breadcrumb component, where the current page link is marked with aria-current="page":
<nav class="breadcrumb" aria-label="Breadcrumb">
  <ol>
    <li>
      <a href="/cat/">Category</a>
    </li>
    <li>
      <a href="/cat/sub/">Sub-Category</a>
    </li>
    <li>
      <a aria-current="page" href="/cat/sub/page/">Page</a>
    </li>
  </ol>
</nav>
When navigating onto the current page link, however, we don’t get any indication that this is the current page. We’re not exactly sure how this maps into accessibility properties, so we can reference a specification like Core Accessibility API Mappings 1.2 (Core-AAM). Under the “State and Property Mapping” table, we find mappings for “aria-current with non-false allowed value.” We can check for these listed properties in the Accessibility Properties pane. Microsoft Edge, at the time of writing, maps into UIA (UI Automation), so when we check AriaProperties, we find that yes, “current=page” is included within this property value.
The AriaProperties attribute, including current=page, shown in the Accessibility Properties pane of the Microsoft Edge dev tools
Now we know that the value is presented correctly to the accessibility API, but the particular screen reader is not using the information. As a side note, Microsoft Edge’s current dev tools expose these accessibility API properties quite literally. Other browsers’ dev tools may simplify property names and values to make them easier to read, particularly if they support more than one accessibility API. The important bit is to find if there’s a property with roughly the name you expect and whether its value is what you expect. You can also use this method of checking through the property names and values if mapping specs, like Core-AAM, are a bit intimidating!

Advanced accessibility tools

While browser dev tools can tell us a lot about the accessibility semantics of our markup, they don’t generally include representations of text ranges or event notifications. On Windows, the Windows SDK includes advanced tools that can help debug these parts of MSAA or UIA mappings: Inspect and AccEvent (Accessible Event Watcher). Using these tools presumes knowledge of the Windows accessibility APIs, so if this is too granular for you and you’re stuck on an issue, please reach out to the relevant browser team! There is also an Accessibility Inspector in Xcode on macOS, with which you can inspect web content in Safari. This tool can be accessed by going to Xcode > Open Developer Tool > Accessibility Inspector.

Diversity of experience

Equipped with an accessibility tree, detailed object information, event notifications, and methods for interacting with accessible objects, screen readers can craft a browsing experience tailored to their audiences. In this article, we’ve used the term “screen readers” as a proxy for a whole host of tools that may use accessibility APIs to provide the best user experience possible. Assistive technologies can use the APIs to augment presentation or support varying types of user input. Examples of other ATs include screen magnifiers, cognitive support tools, speech command programs, and some brilliant new app that hasn’t been dreamed up yet. Further, assistive technologies of the same “type” may differ in how they present information, and users who share the same tool may further adjust settings to their liking. As web developers, we don’t necessarily need to make sure that each instance surfaces information identically, because each user’s preferences will not be exactly the same. Our aim is to ensure that no matter how a user chooses to explore our sites, content is perceivable, operable, understandable, and robust. By testing with a variety of assistive technologies—including but not limited to screen readers—we can help create a better web for all the many people who use it.

Further reading and resources





Canary in a Coal Mine: How Tech Provides Platforms for Hate

As I write this, the world is sending its thoughts and prayers to our Muslim cousins. The Christchurch act of terrorism has once again reminded the world that white supremacy’s rise is very real, that its perpetrators are no longer on the fringes of society, but centered in our holiest places of worship. People are begging us to not share videos of the mass murder or the hateful manifesto that the white supremacist terrorist wrote. That’s what he wants: for his proverbial message of hate to be spread to the ends of the earth.

We live in a time where you can stream a mass murder and hate crime from the comfort of your home. Children can access these videos, too.

As I work through the pure pain, unsurprised, observing the toll on Muslim communities (as a non-Muslim, who matters least in this event), I think of the imperative role that our industry plays in this story.

At time of writing, YouTube has failed to ban and to remove this video. If you search for the video (which I strongly advise against), it still comes up with a mere content warning; the same content warning that appears for casually risqué content. You can bypass the warning and watch people get murdered. Even when the video gets flagged and taken down, new ones get uploaded.

Human moderators have to relive watching this trauma over and over again for unlivable wages. News outlets are embedding the video into their articles and publishing the hateful manifesto. Why? What does this accomplish?

I was taught in journalism class that media (photos, video, infographics, etc.) should be additive (a progressive enhancement, if you will) and provide something to the story for the reader that words cannot.

Is it necessary to show murder for our dear readers to understand the cruelty and finality of it? Do readers gain something more from watching fellow humans have their lives stolen from them? What psychological damage are we inflicting upon millions of people, and for what?

Who benefits?

The mass shooter(s) who had a message to accompany their mass murder. News outlets are thirsty for perverse clicks to garner more ad revenue. We, by way of our platforms, give agency and credence to these acts of violence, then pilfer profits from them. Tech is a money-making accomplice to these hate crimes.

Christchurch is just one example in an endless array where the tools and products we create are used as a vehicle for harm and for hate.

Facebook and the Cambridge Analytica scandal played a critical role in the outcome of the 2016 presidential election. The concept of “race realism,” which is essentially a term that white supremacists use to codify their false racist pseudo-science, was actively tested on Facebook’s platform to see how the term would sit with people who are ignorantly sitting on the fringes of white supremacy. Full-blown white supremacists don’t need this soft language. This is how radicalization works.

The strategies articulated in the above article are not new. Racist propaganda predates social media platforms. What we have to be mindful of is that we’re building smarter tools with power we don’t yet fully understand: you can now have an AI-generated human face. Our technology is accelerating at a frightening rate, a rate faster than our reflective understanding of its impact.

Combine the time-tested methods of spreading white supremacy, the power to manipulate perception through technology, and the magnitude and reach that has become democratized and anonymized.

We’re staring at our own reflection in the Black Mirror.

The right to speak versus the right to survive

Tech has proven time and time again that it voraciously protects First Amendment rights above all else. (I will also take this opportunity to remind you that the First Amendment of the United States offers protection to the people from the government abolishing free speech, not from private money-making corporations.)

Evelyn Beatrice Hall writes in The Friends of Voltaire, “I disapprove of what you say, but I will defend to the death your right to say it.” Fundamentally, Hall’s quote expresses that we must protect, possibly above all other freedoms, the freedom to say whatever we want to say. (Fun fact: The quote is often misattributed to Voltaire, but Hall actually wrote it to explain Voltaire’s ideologies.)

And the logical anchor here is sound: We must grant everyone else the same rights that we would like for ourselves. Former 99u editor Sean Blanda wrote a thoughtful piece on the “Other Side,” where he posits that we lack tolerance for people who don’t think like us, but that we must because we might one day be on the other side. I agree in theory.

But what happens when a portion of the rights we grant to one group (let’s say, free speech to white supremacists) means the active oppression of another group’s rights (let’s say, every person of color’s right to live)?

James Baldwin expresses this idea with a clause, “We can disagree and still love each other unless your disagreement is rooted in my oppression and denial of my humanity and right to exist.”

It would seem that we have a moral quandary where two sets of rights cannot coexist. Do we protect the privilege for all users to say what they want, or do we protect all users from hate? Because of this perceived moral quandary, tech has often opted out of this conversation altogether. Platforms like Twitter and Facebook, two of the biggest offenders, continue to allow hate speech to ensue with irregular to no regulation.

When explicitly asked about his platform as a free-speech platform and its consequences for privacy and safety, Twitter CEO Jack Dorsey said,

“So we believe that we can only serve the public conversation, we can only stand for freedom of expression if people feel safe to express themselves in the first place. We can only do that if they feel that they are not being silenced.”

Dorsey and Twitter are most concerned about protecting expression and about not silencing people. In his mind, if he allows people to say whatever they want on his platform, he has succeeded. When asked about why he’s failed to implement AI to filter abuse like, say, Instagram had implemented, he said that he’s most concerned about being able to explain why the AI flagged something as abusive. Again, Dorsey protects the freedom of speech (and thus, the perpetrators of abuse) before the victims of abuse.

But he’s inconsistent about it. In a study by George Washington University comparing white nationalists and ISIS social media usage, Twitter’s freedom of speech was not granted to ISIS. Twitter suspended 1,100 accounts related to ISIS whereas it suspended only seven accounts related to Nazis, white nationalism, and white supremacy, despite the accounts having more than seven times the followers, and tweeting 25 times more than the ISIS accounts. Twitter here made a moral judgment that the fewer, less active, and less influential ISIS accounts were somehow not welcome on their platform, whereas the prolific and burgeoning Nazi and white supremacy accounts were.

So, Twitter has shown that it won’t protect free speech at all costs or for all users. We can only conclude that Twitter is either intentionally protecting white supremacy or simply doesn’t think it’s very dangerous. Regardless of which it is (I think I know), the outcome does not change the fact that white supremacy is running rampant on its platforms and many others.

Let’s brainwash ourselves for a moment and pretend like Twitter does want to support freedom of speech equitably and stays neutral and fair to complete this logical exercise: Going back to the dichotomy of rights example I provided earlier, where either the right to free speech or the right to safety and survival prevail, the rights and the power will fall into the hands of the dominant group or ideologue.

In case you are somehow unaware, the dominating ideologue, whether you’re a flagrant white supremacist or not, is white supremacy. White supremacy was baked into the founding principles of the United States, the country where the majority of these platforms were founded and exist. (I am not suggesting that white supremacy doesn’t exist globally, as it does, evidenced most recently by the terrorist attack in Christchurch. I’m centering the conversation intentionally around the United States as it is my lived experience and where most of these companies operate.)

Facebook attempted to educate its team on white supremacy in order to address how to regulate free speech. A laugh-cry excerpt:

“White nationalism and calling for an exclusively white state is not a violation for our policy unless it explicitly excludes other PCs [protected characteristics].”

White nationalism is a softened synonym for white supremacy so that racists-lite can feel more comfortable with their transition into hate. White nationalism (a.k.a. white supremacy) by definition explicitly seeks to eradicate all people of color. So, Facebook should see white nationalist speech as exclusionary, and therefore a violation of their policies.

Regardless of what tech leaders like Dorsey or Facebook CEO Zuckerberg say or what mediocre and uninspired condolences they might offer, inaction is an action.

Companies that use terms and conditions or acceptable use policies to defend their inaction around hate speech are enabling and perpetuating white supremacy. Policies are written by humans to protect that group of humans’ ideals. The message they use might be that they are protecting free speech, but hate speech is a form of free speech. So effectively, they are protecting hate speech. Well, as long as it’s for white supremacy and not the Islamic State.

Whether the motivation is fear (losing loyal Nazi customers and their sympathizers) or hate (because their CEO is a white supremacist), it does not change the impact: Hate speech is tolerated, enabled, and amplified by way of their platforms.

“That wasn’t our intent”

Product creators might be thinking, Hey, look, I don’t intentionally create a platform for hate. The way these features were used was never our intent.

Intent does not erase impact.

We cannot absolve ourselves of culpability merely because we failed to conceive such evil use cases when we built it. While we very well might not have created these platforms with the explicit intent to help Nazis or imagined it would be used to spread their hate, the reality is that our platforms are being used in this way.

As product creators, it is our responsibility to protect the safety of our users by stopping those that intend to or already cause them harm. Better yet, we ought to think of this before we build the platforms to prevent this in the first place.

The question to answer isn’t, “Have I made a place where people have the freedom to express themselves?” Instead we have to ask, “Have I made a place where everyone has the safety to exist?” If you have created a place where a dominant group can embroil and embolden hate against another group, you have failed to create a safe place. The foundations of hateful speech (beyond the psychological trauma of it) lead to events like Christchurch.

We must protect safety over speech.

The Domino Effect

This week, Slack banned 28 hate groups. What is most notable, to me, is that the groups did not break any parts of their Acceptable Use Policy. Slack issued a statement:

The use of Slack by hate groups runs counter to everything we believe in at Slack and is not welcome on our platform… Using Slack to encourage or incite hatred and violence against groups or individuals because of who they are is antithetical to our values and the very purpose of Slack.

That’s it.

It is not illegal for tech companies like Slack to ban groups from using their proprietary software because it is a private company that can regulate users if they do not align with their vision as a company. Think of it as the “no shirt, no shoes, no service” model, but for tech.

Slack simply decided that supporting the workplace collaboration of Nazis around efficient ways to evangelize white supremacy was probably not in line with their company directives around inclusion. I imagine Slack also considered how their employees of color most ill-affected by white supremacy would feel working for a company that supported it, actively or not.

What makes the Slack example so notable is that they acted swiftly and on their own accord. Slack chose the safety of all their users over the speech of some.

When caught with their enablement of white supremacy, some companies will only budge under pressure from activist groups, users, and employees.

PayPal finally banned hate groups after Charlottesville, and only after the Southern Poverty Law Center (SPLC) explicitly called them out for enabling hate. The SPLC had identified this fact three years prior; PayPal ignored them for all three years.

Unfortunately, taking these “stances” against something as clearly and viscerally wrong as white supremacy is rare for companies to do. The tech industry tolerates this inaction through unspoken agreements.

If Facebook doesn’t do anything about racist political propaganda, YouTube doesn’t do anything about PewDiePie, and Twitter doesn’t do anything about disproportionate abuse against Black women, it says to the smaller players in the industry that they don’t have to either.

The tech industry reacts to its peers. When there is disruption, as was the case with Airbnb, which screened and rejected any guests whom it believed to be partaking in the Unite the Right Charlottesville rally, companies follow suit. GoDaddy cancelled the Daily Stormer’s domain registration, and Google did the same when the site attempted to migrate.

If one company, like Slack or Airbnb, decides to do something about the role it’s going to play, it creates a perverse kind of FOMO for the rest: fear of missing out on doing the right thing and standing on the right side of history.

Don’t have FOMO, do something

The type of activism at those companies all started with one individual. If you want to be part of the solution, I’ve gathered some places to start. The list is not exhaustive, and, as with all things, I recommend researching beyond this abridged summary.

  1. Understand how white supremacy impacts you as an individual.
    Now, if you are a person of color, queer, disabled, or trans, it’s likely that you know this very intimately.

     

    If you are not any of those things, then you, as a majority person, need to understand how white supremacy protects you and works in your favor. It’s not easy work, it is uncomfortable and unfamiliar, but you have the most powerful tools to fix tech. The resources are plentiful, but here is my favorite abridged list:

    1. Seeing White podcast
    2. Ijeoma Oluo’s So You Want to Talk About Race
    3. Reni Eddo-Lodge’s Why I’m No Longer Talking to White People About Race (a very key read for UK folks)
    4. Robin DiAngelo’s White Fragility
  2. See where your company stands: Read your company’s policies, like its acceptable use and privacy policies, and find your CEO’s stance on safety and free speech.
    While these policies are baseline (and in the Slack example, sort of irrelevant), it’s important to know your company’s track record. As an employee, your actions and decisions either uphold the ideologies behind the company or they don’t. Ask yourself if the company’s ideologies are worth upholding and whether they align with your own. Education will help you to flag if something contradicts those policies, or if the policies themselves allow for unethical activity.
  3. Examine everything you do critically on an ongoing basis.
    You may feel your role is small or that your company is immune—maybe you are responsible for the maintenance of one small algorithm. But consider how that algorithm or similar ones can be exploited. Some key questions I ask myself:
    1. Who benefits from this? Who is harmed?
    2. How could this be used for harm?
    3. Who does this exclude? Who is missing?
    4. What does this protect? For whom? Does it do so equitably?
  4. See something? Say something.
    If you believe that your company is creating something that is or can be used for harm, it is your responsibility to say something. Now, I’m not naïve to the fact that there is inherent risk in this. You might fear ostracization or termination. You need to protect yourself first. But you also need to do something.
    1. Find someone who you trust who might be at less risk. Maybe if you’re a nonbinary person of color, find a white cis man who is willing to speak up. Maybe if you’re a white man who is new to the company, find a white man who has more seniority or tenure. But also, consider how you have so much more relative privilege compared to most other people and that you might be the safest option.
    2. Unionize. Find peers who might feel the same way and write a collective statement.
    3. Get someone influential outside of the company (if knowledge is public) to say something.
  5. Listen to concerns, no matter how small, particularly if they’re coming from the most endangered groups.
    If your user or peer feels unsafe, you need to understand why. People often feel like small things can be overlooked, as their initial impact might be less, but it is in the smallest cracks that hate can grow. Allowing one insensitive comment about race is still allowing hate speech. If someone, particularly someone in a marginalized group, brings up a concern, you need to do your due diligence to listen to it and to understand its impact.

I cannot emphasize this last point enough.

What I say today is not new. Versions of this article have been written before. Women of color like me have voiced similar concerns not only in writing, but in design reviews, in closed door meetings to key stakeholders, in Slack DMs. We’ve blown our whistles.

But here is the power of white supremacy.

White supremacy is so ingrained in every single aspect of how this nation was built, how our corporations function, and who is in control. If you are not convinced of this, you are not paying attention or intentionally ignoring the truth.

Queer, Muslim, disabled, trans women and nonbinary folks of color — the marginalized groups most impacted by this — are the ones who are voicing these concerns most vociferously. Speaking up requires us to enter the spotlight and outside of safety—we take a risk and are not heard.

The silencing of our voices is one of many effective tools of white supremacy. Our silencing lives within every microaggression, each time we’re talked over, or not invited to partake in key decisions.

In tech, I feel I am a canary in a coal mine. I have sung my song to warn the miners of the toxicity. My sensitivity to it is heightened, because of my existence.

But the miners look at me and tell me that my lived experience is false. It does not align with their narrative as humans. They don’t understand why I sing.

If the people at the highest echelons of the tech industry—the white, male CEOs in power—fail to listen to its most marginalized people—the queer, disabled, trans, people of color—the fate of the canaries will too become the fate of the miners.





Responsible JavaScript: Part I

By the numbers, JavaScript is a performance liability. If the trend persists, the median page will be shipping at least 400 KB of it before too long, and that’s merely what’s transferred. Like other text-based resources, JavaScript is almost always served compressed—but that might be the only thing we’re getting consistently right in its delivery.

Unfortunately, while reducing resource transfer time is a big part of that whole performance thing, compression has no effect on how long browsers take to process a script once it arrives in its entirety. If a server sends 400 KB of compressed JavaScript, the actual amount browsers have to process after decompression is north of a megabyte. How well devices cope with these heavy workloads depends, well, on the device. Much has been written about how adept various devices are at processing lots of JavaScript, but the truth is, the amount of time it takes to process even a trivial amount of it varies greatly between devices.

Take, for example, this throwaway project of mine, which serves around 23 KB of uncompressed JavaScript. On a mid-2017 MacBook Pro, Chrome chews through this comparably tiny payload in about 25 ms. On a Nokia 2 Android phone, however, that figure balloons to around 190 ms. That’s not an insignificant amount of time, but in either case, the page gets interactive reasonably fast.
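If you’re curious how pages fare on your own devices, one rough way to observe this kind of main-thread congestion is the Long Tasks API, available in Chromium-based browsers (this sketch is mine, not something from the project above):

// Logs any main-thread task longer than 50 ms, a rough proxy for the
// stretches of time during which a page cannot respond to input.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)} ms`);
  }
});
observer.observe({ entryTypes: ["longtask"] });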

Now for the big question: how do you think that little Nokia 2 does on an average page? It chokes. Even on a fast connection, browsing the web on it is an exercise in patience as JavaScript-laden web pages brick it for considerable stretches of time.

Figure 1. A performance timeline overview of a Nokia 2 Android phone browsing on a page where excessive JavaScript monopolizes the main thread.

While devices and the networks they navigate the web on are largely improving, trends suggest we’re eating those gains as quickly as they arrive. We need to use JavaScript responsibly. That begins with understanding what we’re building as well as how we’re building it.

The mindset of “sites” versus “apps”

Nomenclature can be strange in that we sometimes loosely identify things with terms that are inaccurate, yet their meanings are implicitly understood by everyone. Sometimes we overload the term “bee” to also mean “wasp”, even though the differences between bees and wasps are substantial. Those differences can motivate you to deal with each one differently. For instance, we’ll want to destroy a wasp nest, but because bees are highly beneficial and vulnerable insects, we may opt to relocate them.

We can be just as fast and loose in interchanging the terms “website” and “web app”. The differences between them are less clear than those between yellowjackets and honeybees, but conflating them can bring about painful outcomes. The pain comes in the affordances we allow ourselves when something is merely a “website” versus a fully-featured “web app.” If you’re making an informational website for a business, you’re less likely to lean on a powerful framework to manage changes in the DOM or implement client-side routing—at least, I hope. Using tools so ill-suited for the task would not only be a detriment to the people who use that site but arguably less productive.

When we build a web app, though, look out. We’re installing packages which usher in hundreds—if not thousands—of dependencies, some of which we’re not sure are even safe. We’re also writing complicated configurations for module bundlers. In this frenzied, yet ubiquitous, sort of dev environment, it takes knowledge and vigilance to ensure what gets built is fast and accessible. If you doubt this, run npm ls --prod in your project’s root directory and see if you recognize everything in that list. Even if you do, that doesn’t account for third party scripts—of which I’m sure your site has at least a few.

What we tend to forget is that the environment websites and web apps occupy is one and the same. Both are subject to the same environmental pressures that the large gradient of networks and devices impose. Those constraints don’t suddenly vanish when we decide to call what we build “apps”, nor do our users’ phones gain magical new powers when we do so.

It’s our responsibility to evaluate who uses what we make, and accept that the conditions under which they access the internet can be different than what we’ve assumed. We need to know the purpose we’re trying to serve, and only then can we build something that admirably serves that purpose—even if it isn’t exciting to build.

That means reassessing our reliance on JavaScript and how the use of it—particularly to the exclusion of HTML and CSS—can tempt us to adopt unsustainable patterns which harm performance and accessibility.

Don’t let frameworks force you into unsustainable patterns

I’ve been witness to some strange discoveries in codebases when working with teams that depend on frameworks to help them be highly productive. One characteristic common among many of them is that poor accessibility and performance patterns often result. Take the React component below, for example:

import React, { Component } from "react";
import { validateEmail } from "helpers/validation";

class SignupForm extends Component {
  constructor (props) {
    super(props);

    this.handleSubmit = this.handleSubmit.bind(this);
    this.updateEmail = this.updateEmail.bind(this);
    this.state = { email: "" };
  }

  updateEmail (event) {
    this.setState({
      email: event.target.value
    });
  }

  handleSubmit () {
    // If the email checks out, submit
    if (validateEmail(this.state.email)) {
      // ...
    }
  }

  render () {
    return (
      <div>
        <span>Enter your email:</span>
        <input type="text" id="email" onChange={this.updateEmail} />
        <button onClick={this.handleSubmit}>Sign Up</button>
      </div>
    );
  }
}

There are some notable accessibility issues here:

  1. A form that doesn’t use a <form> element is not a form. Indeed, you could paper over this by specifying role="form" in the parent <div>, but if you’re building a form—and this sure looks like one—use a <form> element with the proper action and method attributes. The action attribute is crucial, as it ensures the form will still do something in the absence of JavaScript—provided the component is server-rendered, of course.
  2. <span> is not a substitute for a <label> element, which provides accessibility benefits <span>s don’t.
  3. If we intend to do something on the client side prior to submitting a form, then we should move the action bound to the <button> element’s onClick handler to the <form> element’s onSubmit handler.
  4. Incidentally, why use JavaScript to validate an email address when HTML5 offers form validation controls in almost every browser back to IE 10? There’s an opportunity here to rely on the browser and use an appropriate input type, as well as the required attribute—but be aware that getting this to work right with screen readers takes a little know-how.
  5. While not an accessibility issue, this component doesn’t rely on any state or lifecycle methods, which means it can be refactored into a stateless functional component, which uses considerably less JavaScript than a full-fledged React component.

Knowing these things, we can refactor this component:

import React from "react";

const SignupForm = props => {
  const handleSubmit = event => {
    // Needed in case we're sending data to the server XHR-style
    // (but will still work if server-rendered with JS disabled).
    event.preventDefault();

    // Carry on...
  };
  
  return (
    <form method="POST" action="/signup" onSubmit={handleSubmit}>
      <label htmlFor="email" className="email-label">Enter your email:</label>
      <input type="email" id="email" required />
      <button>Sign Up</button>
    </form>
  );
};

Not only is this component now more accessible, but it also uses less JavaScript. In a world that’s drowning in JavaScript, deleting lines of it should feel downright therapeutic. The browser gives us so much for free, and we should try to take advantage of that as often as possible.

This is not to say that inaccessible patterns occur only when frameworks are used, but rather that a sole preference for JavaScript will eventually surface gaps in our understanding of HTML and CSS. These knowledge gaps will often result in mistakes we may not even be aware of. Frameworks can be useful tools that increase our productivity, but continuing education in core web technologies is essential to creating usable experiences, no matter what tools we choose to use.

Rely on the web platform and you’ll go far, fast

While we’re on the subject of frameworks, it must be said that the web platform is a formidable framework of its own. As the previous section showed, we’re better off when we can rely on established markup patterns and browser features. The alternative is to reinvent them, and invite all the pain such endeavors all but guarantee us, or worse: merely assume that the author of every JavaScript package we install has solved the problem comprehensively and thoughtfully.

SINGLE PAGE APPLICATIONS

One of the tradeoffs developers are quick to make is to adopt the single page application (SPA) model, even if it’s not a good fit for the project. Yes, you do gain better perceived performance with the client-side routing of an SPA, but what do you lose? The browser’s own navigation functionality—albeit synchronous—provides a slew of benefits. For one, history is managed according to a complex specification. Users without JavaScript—be it by their own choice or not—won’t lose access altogether. For SPAs to remain available when JavaScript is not, server-side rendering suddenly becomes a thing you have to consider.

Figure 2. A comparison of an example app loading on a slow connection. The app on the left depends entirely upon JavaScript to render a page. The app on the right renders a response on the server, but then uses client-side hydration to attach components to the existing server-rendered markup.

Accessibility is also harmed if a client-side router fails to let people know what content on the page has changed. This can leave those reliant on assistive technology to suss out what changes have occurred on the page, which can be an arduous task.

Then there’s our old nemesis: overhead. Some client-side routers are very small, but when you start with Reacta compatible router, and possibly even a state management library, you’re accepting that there’s a certain amount of code you can never optimize away—approximately 135 KB in this case. Carefully consider what you’re building and whether a client side router is worth the tradeoffs you’ll inevitably make. Typically, you’re better off without one.

If you’re concerned about the perceived navigation performance, you could lean on rel=prefetch to speculatively fetch documents on the same origin. This has a dramatic effect on improving perceived loading performance of pages, as the document is immediately available in the cache. Because prefetches are done at a low priority, they’re also less likely to contend with critical resources for bandwidth.
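For example, a single line of markup in the current document is enough to hint that a same-origin page will likely be needed next (the URL here is illustrative):

<!-- Hint: this same-origin document will probably be requested soon. -->
<link rel="prefetch" href="/writing/">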

Figure 3. The HTML for the writing/ URL is prefetched on the initial page. When the writing/ URL is requested by the user, the HTML for it is loaded instantaneously from the browser cache.

The primary drawback of link prefetching is that it can be wasteful. Quicklink, a tiny link prefetching script from Google, mitigates this somewhat by checking whether the current client is on a slow connection—or has data saver mode enabled—and by declining to prefetch cross-origin links by default.

Service workers are also hugely beneficial to perceived performance for returning users, whether we use client-side routing or not—provided you know the ropes. When we precache routes with a service worker, we get many of the same benefits as link prefetching, but with a much greater degree of control over requests and responses. Whether you think of your site as an “app” or not, adding a service worker to it is perhaps one of the most responsible uses of JavaScript that exists today.
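As a minimal sketch of route precaching—the cache name and URLs below are hypothetical, and a library like Workbox would handle versioning and updates for you—a service worker might look like this:

// sw.js: precache a few routes at install time.
self.addEventListener("install", event => {
  event.waitUntil(
    caches.open("routes-v1").then(cache =>
      cache.addAll(["/", "/writing/", "/about/"])
    )
  );
});

// Serve cached responses first, falling back to the network.
self.addEventListener("fetch", event => {
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});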

JAVASCRIPT ISN’T THE SOLUTION TO YOUR LAYOUT WOES

If we’re installing a package to solve a layout problem, we should proceed with caution and ask, “What am I trying to accomplish?” CSS is designed to do this job, and requires no abstractions to use effectively. Most layout issues JavaScript packages attempt to solve, like box placement, alignment, and sizing, managing text overflow, and even entire layout systems, are solvable with CSS today. Modern layout engines like Flexbox and Grid are supported well enough that we shouldn’t need to start a project with any layout framework. CSS is the framework. When we have feature queries, progressively enhancing layouts to adopt new layout engines is suddenly not so hard.

/* Your mobile-first, non-CSS grid styles go here */

/* The @supports rule below is ignored by browsers that don't
   support CSS grid, _or_ don't support @supports. */
@supports (display: grid) {
  /* Larger screen layout */
  @media (min-width: 40em) {
    /* Your progressively enhanced grid layout styles go here */
  }
}

Using JavaScript solutions for layout and presentation problems is not new. It was something we did when we lied to ourselves in 2009 that every website had to look exactly the same in IE6 as it did in the more capable browsers of that time. If we’re still developing websites to look the same in every browser in 2019, we should reassess our development goals. There will always be some browser we’ll have to support that can’t do everything those modern, evergreen browsers can. Total visual parity on all platforms is not only a pursuit made in vain, it’s the principal foe of progressive enhancement.

I’m not here to kill JavaScript

Make no mistake, I have no ill will toward JavaScript. It’s given me a career and—if I’m being honest with myself—a source of enjoyment for over a decade. Like any long-term relationship, I learn more about it the more time I spend with it. It’s a mature, feature-rich language that only gets more capable and elegant with every passing year.

Yet, there are times when I feel like JavaScript and I are at odds. I am critical of JavaScript. Or maybe more accurately, I’m critical of how we’ve developed a tendency to view it as a first resort to building for the web. As I pick apart yet another bundle not unlike a tangled ball of Christmas tree lights, it’s become clear that the web is drunk on JavaScript. We reach for it for almost everything, even when the occasion doesn’t call for it. Sometimes I wonder how vicious the hangover will be.

In a series of articles to follow, I’ll offer more practical advice for stemming the encroaching tide of excessive JavaScript, and for wrangling it so that what we build for the web is usable—or at least more so—for everyone everywhere. Some of the advice will be preventative. Some will be mitigating “hair of the dog” measures. In either case, the outcomes will hopefully be the same. I believe that we all love the web and want to do right by it, but I want us to think about how to make it more resilient and inclusive for all.





science and technology

Accessibility for Vestibular Disorders: How My Temporary Disability Changed My Perspective

Accessibility can be tricky. There are plenty of conditions to take into consideration, and many technical limitations and weird exceptions that make it quite hard to master for most designers and developers.

I never considered myself an accessibility expert, but I took great pride in making my projects Web Content Accessibility Guidelines (WCAG) compliant…ish. They would pass most automated tests, show perfectly in the accessibility tree, and work quite well with keyboard navigation. I would even try (and fail) to use a screen reader every now and then.

But life would give me a lesson I would probably never learn otherwise: last October, my abled life took a drastic change—I started to feel extremely dizzy, with a constant sensation of falling or spinning to the right. I was suffering from a bad case of vertigo caused by labyrinthitis that made it impossible to get anything done.

Vertigo can have a wide range of causes, the most common being a viral infection or tiny calcium crystals floating free in the inner ear, which is pretty much our body’s accelerometer. Any disruption in there sends the brain confusing signals about the body’s position, which causes really heavy nausea, dizziness, and headaches. If you’ve ever felt seasick, it’s quite a similar vibe. If not, think about that feeling when you just get off a rollercoaster…it’s like that, only all day long.

For most people, vertigo is something they’ll suffer just once in a lifetime, and it normally goes away in a week or two. Incidence is really high, with some estimates claiming that up to 40% of the population suffers vertigo at least once in their lifetime. Some people live all their lives with it (or with similar symptoms caused by a range of diseases and syndromes grouped under the umbrella term of vestibular disorders), with 4% of US adults reporting chronic problems with balance, and an additional 1.1% reporting chronic dizziness, according to the American Speech-Language-Hearing Association.

In my case, it was a little over a month. Here’s what I learned while going through it.

Slants can trigger vestibular symptoms

It all started as I was out for my daily jog. I felt slightly dizzy, then suddenly my vision got totally distorted. Everything appeared further away, like looking at a fun house’s distortion mirror. I stumbled back home and rested; at that moment I believed I might have over-exercised, and that hydration, food, and rest were all I needed. Time would prove me wrong.

What I later learned was that experiencing vertigo is a constant war between one of your inner ears telling the brain “everything is fine, we’re level and still” and the other ear shouting “oh my God, we’re falling, we’re falling!!!” Visual stimuli can act as an intermediary, supporting one ear’s message or the other’s. Vertigo can also work in the opposite way, with the dizziness interfering with your vision.

I quickly found that when symptoms peaked, staring at a distant object would ease the falling sensation somewhat.

In the same fashion, some visual stimuli would worsen it.

Vertical slants were a big offender in that sense. For instance, looking at a subtle vertical slant (the kind that you’d have to look at twice to make sure it’s not perfectly vertical) on a webpage would instantly trigger symptoms for me. Whether it was a page-long slant used to create some interest beside text or a tiny decoration to mark active tabs, looking at anything with slight slants would instantly send me into the rollercoaster.

Horizontal slants (whatever the degree) and harder vertical slants wouldn’t cause these issues.

My best guess is that slight vertical slants can look like forced perspective and therefore reinforce the falling-from-height sensation, so I would recommend avoiding vertical slants if you can, or make them super obvious. A slight slant looks like perspective, a harder one looks like a triangle.

Target size matters (even on mouse-assisted devices)

After a magnetic resonance imaging (MRI) scan, some tests to discard neurological conditions, and other treatments that proved ineffective, I was prescribed Cinnarizine.

Cinnarizine is a calcium channel blocker—to put it simply, it prevents the malfunctioning inner ear “accelerometer” from sending incorrect info to the brain. 
And it worked wonders. After ten days of being barely able to get out of bed, I was finally getting something closer to my normal life. I would still feel dizzy all the time, with some peaks throughout the day, but for the most part, it was much easier.

At this point, I was finally able to use the computer (but still unable to produce any code at all). To make the best of it, I set out on a mission to self-experiment on accessibility for vestibular disorders. One of the first things that struck me was that I would always miss targets (links and buttons).

I’m from the generation that grew up with desktop computers, so using a mouse is second nature. The pointer is pretty much an extension of my mind, as it is for many who use it regularly. But while Cinnarizine helped with the dizziness, it has a common side effect of negatively impacting coordination and fine motor skills (it is recommended not to drive or operate machinery while under treatment). It was not a surprise when I realized it would be much harder to get the pointer to do what I intended.

The common behavior would be: moving the pointer past the link I intended to click, clicking before reaching it at all, or having to try multiple times to click on smaller targets.

Success Criterion 2.5.5 Target Size (Level AAA) of the World Wide Web Consortium (W3C)’s WCAG recommends bigger target sizes so users can activate them easily. The obvious reason for this is that it’s harder to pinpoint targets on smaller screens with coarser inputs (i.e., touchscreens of mobile devices). A fairly common practice for developers is to set bigger target sizes for smaller viewport widths (assuming that control challenges are only touch-related), while neglecting the issue on big screens expected to be used with mouse input. I know I’m guilty of that myself.

Instead of targeting this behavior for just smaller screen sizes, there are plenty of reasons to create larger target sizes on all devices: it will benefit users with limited vision (when text is scaled up accordingly and colors are of sufficient contrast), users with mobility impairments such as hand tremors, and of course, users with difficulty with fine motor skills.
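As a rough CSS sketch of this idea—the class name is invented, and 44×44 CSS pixels echoes the WCAG guidance on target size:

/* Give interactive targets a comfortable minimum hit area
   on every viewport, not just small ones. */
.button {
  min-width: 44px;
  min-height: 44px;
  padding: 0.5em 1em;
}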

Font size and spacing

Even while “enjoying” the ease of symptoms provided by the treatment, reading anything still proved to be a challenge for the following three weeks.

I was completely unable to use mobile devices while suffering vertigo due to the smaller font sizes and spacing, so I was forced to use my desktop computer for everything.

I can say I was experiencing something similar to users with mild forms of dyslexia or attention disorders: whenever I got to a website that didn’t follow good font styling, I would find myself reading the same line over and over again.

This proves once again that accessibility is intersectional: when we improve things for a particular purpose it usually benefits users with other challenges as well. I used to believe recommendations on font styles were mostly intended for the nearsighted and those who have dyslexia. Turns out they are also critical for those with vertigo, and even for those with some cognitive differences. At the end of the day, everybody benefits from better readability.

Some actions you can take to improve readability (collected in the CSS sketch after this list) are:

  • Keep line height to at least 1.5 times the font size (i.e., line-height: 1.5).
  • Set the spacing between paragraphs to at least 2.0 times the font size. We can do this by adjusting the margins using relative units such as em.
  • Letter spacing should be at least 0.12 times the font size. We can adjust this by using the letter-spacing CSS property, perhaps setting it in a relative unit.
  • Make sure to have good contrast between text and its background.
  • Keep font-weight at a reasonable level for the given font-family. Some fonts have thin strokes that make them harder to read. When using thinner fonts, try to improve contrast and font size accordingly, even more than what WCAG would suggest.
  • Choose fonts that are easy to read. There has been a large and still inconclusive debate on which font styles are better for users, but one thing I can say for sure is that popular fonts (as in fonts that the user might be already familiar with) are generally the least challenging for users with reading issues.
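Taken together, and assuming relative units throughout, those recommendations might translate to something like this:

/* A minimal sketch of the spacing recommendations above. */
body {
  line-height: 1.5;        /* at least 1.5 times the font size */
  letter-spacing: 0.12em;  /* at least 0.12 times the font size */
}

p {
  margin-bottom: 2em;      /* paragraph spacing of at least 2 times */
}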

WCAG recommendations on text are fairly clear and fortunately are the most commonly implemented of recommendations, but even they can still fall short sometimes. So it’s better to follow specific guides on accessible text, plus your best judgment. Passing automated tests does not guarantee actual accessibility.

Another issue on which my experience with vertigo proved to be similar to that of people with dyslexia and attention disorders was how hard it was for me to keep my attention in just one place. In that sense…

Animations are bad (and parallax is pure evil)

Val Head has already covered visually-triggered vestibular disorders in an outstanding article, so I would recommend giving it a good read if you haven’t already.

To summarize, animations can trigger nausea, dizziness, and headaches in some users, so we should use them purposely and responsibly.

While most animations did not trigger my symptoms, parallax scrolling did. I’d never been a fan of parallax to begin with, as I found it confusing. And when you’re experiencing vertigo, the issues introduced by parallax scrolling compound.

Really, there are no words to describe just how bad a simple parallax effect, scrolljacking, or even background-attachment: fixed would make me feel. I would rather jump on one of those 20-G centrifuges astronauts use than look at a website with parallax scrolling.

Every time I encountered it, I would put the bucket beside me to good use and be forced to lie in bed for hours as I felt the room spinning around me, and no meds could get me out of it. It was THAT bad.

Though normal animations did not trigger a reaction as severe, they still posed a big problem. The extreme, conscious, focused effort it took to read would make it such that anything moving on the screen would instantly break my focus, and force me to start the paragraph all over. And I mean anything.

I would constantly find myself reading a website only to have the typical collapsing navigation bar on scroll distract me just enough that I’d totally lose count of where I was at. Autoplaying carousels were so annoying I would delete them using dev tools as soon as they showed up. Background videos would make me get out of the website desperately.

Over time I started using mouse selection as a pointer; a visual indication of what I’d already read so I could get back to it whenever something distracted me. Then I tried custom stylesheets to disable transforms and animations whenever possible, but that meant critical elements on many websites never appeared at all, as they were implemented to start off-screen or otherwise invisible, and show up on scroll.

Of course, deleting stuff via dev tools or using custom stylesheets is not something we can expect 99.99% of our users to even know about.

So if anything, consider reducing animations to a minimum. Provide users with controls to turn off non-essential animations (WCAG 2.3.3 Animation from Interactions) and to pause, stop, or hide them (WCAG 2.2.2 Pause, Stop, Hide). Implement animations and transitions in such a way that if the user disables them, critical elements still display.

And be extra careful with parallax: my recommendation is to, at the very least, try limiting its use to the header (“hero”) only, and be mindful of getting a smooth, realistic parallax experience. My vertigo self would have said, “just don’t freaking use parallax. Never. EVER.” But I guess that might be a hard idea to sell to stakeholders and designers.

Also consider learning how to use the prefers-reduced-motion feature query. This is a newer addition to the specs (it’s part of the Media Queries Level 5 module, which is at an early Editor’s Draft stage) that allows authors to apply selective styling depending on whether the user has asked the system to minimize its use of animation. OS and browser support for it is still quite limited, but the day will come when we will set any moving thing inside a query for when the user has no-preference, blocking animations from those who choose reduce.
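A sketch of that pattern, with invented class and animation names, might look like this:

/* Animate only when the user hasn't asked for reduced motion. */
@media (prefers-reduced-motion: no-preference) {
  .hero {
    animation: slide-in 0.5s ease-out;
  }
}

@keyframes slide-in {
  from { transform: translateY(2em); opacity: 0; }
  to   { transform: translateY(0);   opacity: 1; }
}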

After about a week of wrestling websites to provide a static experience, I remembered something that would prove to be my biggest ally while the vertigo lasted:

Reader mode

Some browsers include a “reader mode” that strips the page of its styling choices, isolates the content from any distraction, and provides a clean, WCAG-compliant layout for the text to maximize readability.

It is extremely helpful to provide a clear and consistent reading experience throughout multiple websites, especially for users with any kind of reading impairment.

I have to confess: before experiencing my vestibular disorder, I had never used Reader Mode (the formal name varies in browsers) or even checked if my projects were compatible with it. I didn’t even think it was such a useful feature, as a quick search for “reader mode” actually returned quite a few threads by users asking how to disable it or how to take the button for it out of Firefox’s address bar. (It seems some people are unwittingly activating it…perhaps the icon is not clear enough.)

Displaying the button to access Reader Mode is toggled by browser heuristics, which are based on the use (or not) of semantic tags in a page’s HTML. Unfortunately this meant not all websites provided such a “luxury.”

I really wish I didn’t have to say this in 2019…but please, please use semantic tags. Correct semantics allow your website to be displayed in Reader Mode, and provide a better experience for users of screen readers. Again, accessibility is intersectional.
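A bare-bones sketch of the kind of structure those heuristics (and screen readers) can latch onto:

<!-- Landmarks and a clearly marked article give reader-mode
     heuristics and assistive technology something to work with. -->
<nav>…site navigation…</nav>
<main>
  <article>
    <h1>Article title</h1>
    <p>Article content…</p>
  </article>
</main>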

Reader Mode proved to be extremely useful while my vertigo lasted. But there was something even better:

Dark color schemes

By the fourth week, I started feeling mostly fine. I opened Visual Studio Code to try to get back to work, and in doing so stumbled upon one more revelation: a light-text-on-dark-background scheme was SO much easier for me to read. (Though I still was not able to return to work at this time.)

I was quite surprised, as I had always preferred light mode, with dark-text-on-light-background, for reading, and dark mode, with light-text-on-dark, for coding. I didn’t know at the time that I was suffering from photophobia (a sensitivity to light), which was one of the reasons I found it hard to read on my desktop and to use my mobile device at all.

As far as I know, photophobia is not a common symptom of vestibular disorders, but there are many conditions that will trigger it, so it’s worth looking into for our projects’ accessibility.

CSS is also getting a media query to switch color schemes. Known as prefers-color-scheme, it allows applying styles based on the user’s stated preference for dark or light theming. It’s also part of the Media Queries Level 5 spec, and at the time of writing this article it’s only available in Safari Technology Preview, with Mozilla planning to ship it in the upcoming Firefox 67. Luckily there’s a PostCSS plugin that allows us to use it in most modern browsers by turning prefers-color-scheme queries into color-index queries, which have much better support.

If PostCSS is not your cup of tea, or for whatever reason you cannot use that approach to automate switching color schemes to a user’s preference, try at least to provide a theming option in your app’s configuration. Theming has become extremely simple since the release of CSS Custom Properties, so implementing this sort of switch is relatively easy and will greatly benefit anyone experiencing photophobia.
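Here is a small sketch of that combination, with theme colors held in Custom Properties and swapped by the media query (all values are illustrative):

:root {
  --bg: #ffffff;
  --text: #222222;
}

/* Swap the palette when the user prefers a dark scheme. */
@media (prefers-color-scheme: dark) {
  :root {
    --bg: #1b1b1b;
    --text: #eeeeee;
  }
}

body {
  background: var(--bg);
  color: var(--text);
}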

Moving on

After a month and some days, the vertigo disappeared completely, and I was able to return to work without needing any meds or further treatment. It should stay that way, as for most people it’s a once-in-a-lifetime occurrence.

I went back to my abled life, but the experience changed my mindset for good.

As I said before, I always cared for making my projects compatible for people using keyboard navigation and screen readers. But I learned the hard way that there are plenty of “invisible conditions” that are just as important to take into consideration: vestibular disorders, cognitive differences, dyslexia, and color blindness, just to name a few. I was totally neglecting those most of the time, barely addressing the issues in order to pass automated tests, which means I was unintentionally annoying some users by making websites inaccessible to them.

After my experience with vertigo, I’ve turned to an accessibility-first approach to design and development. Now I ask myself, “Am I leaving anyone behind with this decision?” before dropping a single line of code. Accessibility should never be an afterthought.

Making sure my projects work from the start for those with difficulties also improves the experience for everyone else. Think about how improving text styles for users with dyslexia, vertigo, or visual problems improves readability for all users, or how being able to control animations or choose a color scheme can be critical for users with attention disorders and photophobia, respectively, while also a nice feature for everybody.

It also turned my workflow into a much smoother development experience, as addressing accessibility issues from the beginning can mean a slower start, but it’s also much easier and faster than trying to fix broken accessibility afterwards.

I hope that by sharing my personal experience with vertigo, I’ve illustrated how we can all design and develop a better web for everybody. Remember, we’re all just temporarily abled.





science and technology

Nothing Fails Like Success

A family buys a house they can’t afford. They can’t make their monthly mortgage payments, so they borrow money from the Mob. Now they’re in debt to the bank and the Mob, live in fear of losing their home, and must do whatever their creditors tell them to do.

Welcome to the internet, 2019.

Buying something you can’t afford, and borrowing from organizations that don’t have your (or your customers’) best interest at heart, is the business plan of most internet startups. It’s why our digital services and social networks in 2019 are a garbage fire of lies, distortions, hate speech, tribalism, privacy violations, snake oil, dangerous idiocy, deflected responsibility, and whole new categories of unpunished ethical breaches and crimes.

From optimistically conceived origins and message statements about making the world a better place, too many websites and startups have become the leading edge of bias and trauma, especially for marginalized and at-risk groups.

Why (almost) everything sucks

Twitter, for instance, needs a lot of views for advertising to pay at the massive scale its investors demand. A lot of views means you can’t be too picky about what people share. If it’s misogynists or racists inspiring others who share their heinous beliefs to bring back the 1930s, hey, it’s measurable. If a powerful elected official’s out-of-control tweeting reduces churn and increases views, not only can you pay your investors, you can even take home a bonus. Maybe it can pay for that next meditation retreat.

You can cloak this basic economic trade-off in fifty layers of bullshit—say you believe in freedom of speech, or that the antidote to bad speech is more speech—but the fact is, hate speech is profitable. It’s killing our society and our planet, but it’s profitable. And the remaining makers of Twitter—the ones whose consciences didn’t send them packing years ago—no longer have a choice. The guy from the Mob is on his way over, and the vig is due.

Not to single out Twitter, but this is clearly the root cause of its seeming indifference to the destruction hate speech is doing to society…and will ultimately do to the platform. (But by then Jack will be able to afford to meditate full-time.)

Other companies do other evil things to pay their vig. When you owe the Mob, you have no choice. Like sell our data. Or lie about medical research.

There are internet companies (like Basecamp, or like Automattic, makers of WordPress.com, where I work) that charge money for their products and services, and use that money to grow their business. I wish more internet companies could follow that model, but it’s hard to retrofit a legitimate business model to a product that started its life as free.

And there are even some high-end news publications, such as The New York Times, The Washington Post, and The Guardian, that survive on a combination of advertising and flexible paywalls. But these options are not available to most digital publications and businesses.

Return with me to those Halcyon days…

Websites and internet startups used to be you and your friends making cool stuff for your other friends, and maybe building new friendships and even small communities in the process. (Even in 2019, that’s still how some websites and startups begin—as labors of love, fashioned by idealists in their spare time.)

Because they are labors of love; because we’ve spent 25 years training people to believe that websites, and news, and apps, and services should be free; because, when we begin a project, we can scarcely believe anyone will ever notice or care about it—for these reasons and more, the things we make digitally, especially on the web, are offered free of charge. We labor on, excited by positive feedback, and delighted to discover that, if we keep at it, our little community will grow.

Most such labors of love disappear after a year or two, as the creators drift out of touch with each other, get “real” jobs, fall in love, start families, or simply lose interest due to lack of attention from the public or the frustrations of spending weekends and holidays grinding away at an underappreciated site or app while their non-internet friends spend those same hours either having fun or earning money.

Along came money

But some of these startup projects catch on. And when they do, a certain class of investor smells ROI. And the naive cofounders, who never expected their product or service to really get anywhere, can suddenly envision themselves rich and Zuckerberg-famous. Or maybe they like the idea of quitting their day job, believing in themselves, and really going for it. After all, that is an empowering and righteous vision.

Maybe they believe that by taking the initial investment, they can do more good—that their product, if developed further, can actually help people. This is often the motivation behind agreeing to an initial investment deal, especially in categories like healthcare.

Or maybe the founders are problem solvers. Existing products or services in a given category have a big weakness. The problem solvers are sure that their idea is better. With enough capital, and a slightly bigger team, they can show the world how to do it right. Most inventions that have moved humankind forward followed exactly this path. It should lead to a better world (and it sometimes does). It shouldn’t produce privacy breaches and fake medicine and election-influencing bots and all the other plagues of our emerging digital civilization. So why does it?

Content wants to be paid

Primarily it is because these businesses have no business model. They were made and given away free. Now investors come along who can pay the founders, buy them an office, give them the money to staff up, and even help with PR and advertising to help them grow faster.

Now there are salaries and insurance and taxes and office space and travel and lecture tours and sales booths at SXSW, but there is still no charge for the product.

And the investor seeks a big return.

And when the initial investment is no longer enough to get the free-product company to scale to the big leagues, that’s when the really big investors come in with the really big bucks. And the company is suddenly famous overnight, and “everybody” is using the product, and it’s still free, and the investors are still expecting a giant payday.

Like I said—a house you can’t afford, so you go into debt to the bank and the Mob.

The money trap

Here it would be easy to blame capitalism, or at least untrammeled, under-regulated capitalism, which has often been a source of human suffering—not that capitalism, properly regulated, can’t also be a force for innovation which ameliorates suffering. That’s the dilemma for our society, and where you come down on free markets versus governmental regulation of businesses should be an intellectual decision, but these days it is a label, and we hate our neighbors for coming down a few degrees to the left or right of us. But I digress and oversimplify, and this isn’t a complaint about late stage capitalism per se, although it may smell like one.

No, the reason small companies created by idealists too frequently turn into consumer-defrauding forces for evil has to do with the amount of profit each new phase of investor expects to receive, and how quickly they expect to receive it, and the fact that the products and services are still free. And you know what they say about free products.

Nothing fails like success

A friend who’s a serial entrepreneur has started maybe a dozen internet businesses over the span of his career. They’ve all met a need in the marketplace. As a consequence, they’ve all found customers, and they’ve all made a profit. Yet his investors are rarely happy.

“Most of my startups have the decency to fail in the first year,” one investor told him. My friend’s business was taking in several million dollars a year and was slowly growing in staff and customers. It was profitable. Just not obscenely so.

And internet investors don’t want a modest return on their investment. They want an obscene profit right away, or a brutal loss, which they can write off their taxes. Making them a hundred million for the ten million they lent you is good. Losing their ten million is also good—they pay a lower tax bill that way, or they use the loss to fold a company, or they make a profit on the furniture while writing off the business as a loss…whatever rich people can legally do under our tax system, which is quite a lot.

What these folks don’t want is to lend you ten million dollars and get twelve million back.

You and I might go, “Wow! I just made two million dollars just for being privileged enough to have money to lend somebody else.” And that’s why you and I will never have ten million dollars to lend anybody. Because we would be grateful for it. And we would see a free two million dollars as a life-changing gift from God. But investors don’t think this way.

We didn’t start the fire, but we roasted our weenies in it

As much as we pretend to be a religious nation, our society worships these investors and their profits, worships companies that turn these profits, worships above all the myth of overnight success, which we use to motivate the hundreds of thousands of workers who will work nights and weekends for the owners in hopes of cashing in when the stock goes big.

Most times, even if the stock does go big, the owner has found a way to devalue it by the time it does. Owners have brilliant advisers they pay to figure out how to do those things. You and I don’t.

A Christmas memory

I remember visiting San Francisco years ago and scoring an invitation to Twitter’s Christmas party through a friend who worked there at the time. Twitter was, at the time, an app that worked via SMS and also via a website. Period.

Some third-party companies, starting with my friends at Iconfactory, had built iPhone apps for people who wanted to navigate Twitter via their newfangled iPhones instead of the web. Twitter itself hadn’t publicly addressed mobile and might not even have been thinking about it.

Although Twitter was transitioning from a fun cult thing—used by bloggers who attended SXSW Interactive in 2007—to an emerging cultural phenomenon, it was still quite basic in its interface and limited in its abilities. Which was not a bad thing. There is art in constraint, value in doing one thing well. As an outsider, if I’d thought about it, I would have guessed that Twitter’s entire team consisted of no more than 10 or 12 wild-eyed, sleep-deprived true believers.

Imagine my surprise, then, when I showed up at the Christmas party and discovered I’d be sharing dinner with hundreds of designers, developers, salespeople, and executives instead of the handful I’d naively anticipated meeting. (By now, of course, Twitter employs many thousands. It’s still not clear to an outsider why so many workers are needed.)

But one thing is clear: somebody has to pay for it all.

Freemium isn’t free

Employees, let alone thousands of them, on inflated Silicon Valley engineer salaries, aren’t free. Health insurance and parking and meals and HR and travel and expense accounts and meetups and software and hardware and office space and amenities aren’t free. Paying for all that while striving to repay investors tenfold means making a buck any way you can.

Since the product was born free and a paywall isn’t feasible, Twitter must rely on that old standby: advertising. Advertising may not generate enough revenue to keep your hometown newspaper (or most podcasts and content sites) in business, but at Twitter’s scale, it pays.

It pays because Twitter has so many active users. And what keeps those users coming back? Too often, it’s the dopamine of relentless tribalism—folks whose political beliefs match and reinforce mine in a constant unwinnable war of words with folks whose beliefs differ.

Of course, half the antagonists in a given brawl may be bots, paid for in secret by an organization that wants to make it appear that most citizens are against Net Neutrality, or that most Americans oppose even the most basic gun laws, or that our elected officials work for lizard people. The whole system is broken and dangerous, but it’s also addictive, and we can’t look away. From our naive belief that content wants to be free, and our inability to create businesses that pay for themselves, we are turning our era’s greatest inventions into engines of doom and despair.

Your turn

So here we are. Now what do we do about it?

It’s too late for current internet businesses (victims of their own success) that are mortgaged to the hilt in investor gelt. But could the next generation of internet startups learn from older, stable companies like Basecamp, and design products that pay for themselves via customer income—products that profit slowly and sustainably, allowing them to scale up in a similarly slow, sustainable fashion?

The self-payment model may not work for apps and sites that are designed as modest amusements or communities, but maybe those kinds of startups don’t need to make a buck—maybe they can simply be labors of love, like the websites we loved in the 1990s and early 2000s.

Along those same lines, can the IndieWeb, and products of IndieWeb thinking like Micro.blog, save us? Might they at least provide an alternative to the toxic aspects of our current social web, and restore the ownership of our data and content? And before you answer, RTFM.

On an individual and small collective basis, the IndieWeb already works. But does an IndieWeb approach scale to the general public? If it doesn’t scale yet, can we, who envision and design and build, create a new generation of tools that will help give birth to a flourishing, independent web? One that is as accessible to ordinary internet users as Twitter and Facebook and Instagram? Tantek Çelik thinks so, and he’s been right about the web for nearly 30 years. (For more about what Tantek thinks, listen to our conversation in Episode № 186 of The Big Web Show.)

Are these approaches mere whistling against a hurricane? Are most web and internet users content with how things are? What do you think? Share your thoughts on your personal website (dust yours off!) or (irony ahoy!) on your indie or mainstream social networks of choice using hashtag #LetsFixThis. I can’t wait to see what you have to say.




science and technology

Everyday Information Architecture: Auditing for Structure

Just as we need to understand our content before we can recategorize it, we need to understand the system before we try to rebuild it.

Enter the structural audit: a review of the site focused solely on its menus, links, flows, and hierarchies. I know you thought we were done with audits back in Chapter 2, but hear me out! Structural audits have an important and singular purpose: to help us build a new sitemap.

This isn’t about recreating the intended sitemap—no, this is about experiencing the site the way users experience it. This audit is meant to track and record the structure of the site as it really works.

Setting up the template

First, we’re gonna need another spreadsheet. (Look, it is not my fault that spreadsheets are the perfect system for recording audit data. I don’t make the rules.)

Because this involves building a spreadsheet from scratch, I keep a “template” at the top of my audit files—rows that I can copy and paste into each new audit (Fig 4.1). It’s a color-coded outline key that helps me track my page hierarchy and my place in the auditing process. When auditing thousands of pages, it’s easy to get dizzyingly lost, particularly when coming back into the sheet after a break; the key helps me stay oriented, no matter how deep the rabbit hole.

Fig 4.1: I use a color-coded outline key to record page hierarchy as I move through the audit. Wait, how many circles did Dante write about?

Color-coding

Color is the easiest, quickest way to convey page depth at a glance. The repetition of black text, white cells, and gray lines can have a numbing effect—too many rows of sameness, and your eyes glaze over. My coloring may result in a spreadsheet that looks like a twee box of macarons, but at least I know, instantly, where I am.

The exact colors don’t really matter, but I find that the familiar mental model of a rainbow helps with recognition—the cooler the row color, the deeper into the site I know I must be.

The nested rainbow of pages is great when you’re auditing neatly nested pages—but most websites color outside the lines (pun extremely intended) with their structure. I leave my orderly rainbow behind to capture duplicate pages, circular links, external navigation, and other inconsistencies like:

  • On-page navigation. A bright text color denotes pages that are accessible via links within page content—not through the navigation. These pages are critical to site structure but are easily overlooked. Not every page needs to be displayed in the navigation menus, of course—news articles are a perfect example—but sometimes this indicates publishing errors.
  • External links. These are navigation links that go to pages outside the domain. They might be social media pages, or even sites held by the same company—but if the domain isn’t the one I’m auditing, I don’t need to follow it. I do need to note its existence in my spreadsheet, so I color the text as the red flag that it is. (As a general rule, I steer clients away from placing external links in navigation, in order to maintain a consistent experience. If there’s a need to send users offsite, I’ll suggest using a contextual, on-page link.)
  • Files. This mostly refers to PDFs, but can include Word files, slide decks, or anything else that requires downloading. As with external links, I want to capture anything that might disrupt the in-site browsing experience. (My audits usually filter out PDFs, but for organizations that overuse them, I’ll audit them separately to show how much “website” content is locked inside.)
  • Unknown hierarchy. Every once in a while, there’s a page that doesn’t seem to belong anywhere—maybe it’s missing from the menu, while its URL suggests it belongs in one section and its navigation scheme suggests another. These pages need to be discussed with their owners to determine whether the content needs to be considered in the new site.
  • Crosslinks. These are navigation links for pages that canonically live in a different section of the site—in other words, they’re duplicates. This often happens in footer navigation, which may repeat the main navigation or surface links to deeper-but-important pages (like a Contact page or a privacy policy). I don’t want to record the same information about the page twice, but I do need to know where the crosslink is, so I can track different paths to the content. I color these cells gray so they don’t draw my attention.

Note that coloring every row (and indenting, as you’ll see in a moment) can be a tedious process—unless you rely on Excel’s Format Painter. That tool applies all the right styles in just two quick clicks.

Outlines and page IDs

Color-coding is half of my template; the other half is the outline, which is how I keep track of the structure itself. (No big deal, just the entire point of the spreadsheet.)

Every page in the site gets assigned an ID. You are assigning this number; it doesn’t correspond to anything but your own perception of the navigation. This number does three things for you:

  1. It associates pages with their place in the site hierarchy. Decimals indicate levels, so the page ID can be decoded as the page’s place in the system.
  2. It gives each page a unique identifier, so you can easily refer to a particular page—saying “2.4.1” is much clearer than “you know that one page in the fourth product category?”
  3. You can keep using the ID in other contexts, like your sitemap. Then, later, when your team decides to wireframe pages 1.1.1 and 7.0, you’ll all be working from the same understanding.

Let me be completely honest: things might get goofy sometimes with the decimal outline. There will come a day when you’ll find yourself casually typing out “1.2.1.2.1.1.1,” and at that moment, a fellow auditor somewhere in the universe will ring a tiny gong for you.

In addition to the IDs, I indent each level, which reinforces both the numbers and the colors. Each level down—each digit in the ID, each change in color—gets one indentation.

I identify top-level pages with a single number: 1.0, 2.0, 3.0, etc. The next page level in the first section would be 1.1, 1.2, 1.3, and so on. I mark the homepage as 0.0, which is mildly controversial—the homepage is technically a level above—but, look: I’ve got a lot of numbers to write, and I don’t need those numbers to tell me they’re under the homepage, so this is my system. Feel free to use the numbering system that works best for you.
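For instance, a hypothetical slice of an audit outline (page names invented) might read:

0.0 Home
1.0 About
  1.1 Team
  1.2 History
2.0 Products
  2.1 Product category
    2.1.1 Product page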

Criteria and columns

So we’ve got some secret codes for tracking hierarchy and depth, but what about other structural criteria? What are our spreadsheet columns (Fig 4.2)? In addition to a column for Page ID, here’s what I cover:

  • URL. I don’t consistently fill out this column, because I already collected this data back in my automated audit. I include it every twenty entries or so (and on crosslinks or pages with unknown hierarchy) as another way of tracking progress, and as a direct link into the site itself.
  • Menu label/link. I include this column only if I notice a lot of mismatches between links, labels, and page names. Perfect agreement isn’t required; but frequent, significant differences between the language that leads to a page and the language on the page itself may indicate inconsistencies in editorial approach or backend structures.
  • Name/headline. Think of this as “what does the page owner call it?” It may be the H1, or an H2; it may match the link that brought you here, or the page title in the browser, or it may not.
  • Page title. This is for the name of the page in the metadata. Again, I don’t use this in every audit—particularly if the site uses the same long, branded metadata title for every single page—but frequent mismatches can be useful to track.
  • Section. While the template can indicate your level, it can’t tell you which area of the site you’re in—unless you write it down. (This may differ from the section data you applied to your automated audit, taken from the URL structure; here, you’re noting the section where the page appears.)
  • Notes. Finally, I keep a column to note specific challenges, and to track patterns I’m seeing across multiple pages—things like “Different template, missing subnav” or “Only visible from previous page.” My only caution here is that if you’re planning to share this audit with another person, make sure your notes are—ahem—professional. Unless you enjoy anxiously combing through hundreds of entries to revise comments like “Wow haha nope” (not that I would know anything about that).

Fig 4.2: A semi-complete structural audit. This view shows a lot of second- and third-level pages, as well as pages accessed through on-page navigation.

Depending on your project needs, there may be other columns, too. If, in addition to using this spreadsheet for your new sitemap, you want to use it in migration planning or template mapping, you may want columns for new URLs, or template types. 

You can get your own copy of my template as a downloadable Excel file. Feel free to tweak it to suit your style and needs; I know I always do. As long as your spreadsheet helps you understand the hierarchy and structure of your website, you’re good to go.

Gathering data

Setting up the template is one thing—actually filling it out is, admittedly, another. So how do we go from a shiny, new, naive spreadsheet to a complete, jaded, seen-some-stuff spreadsheet? I always liked Erin Kissane’s description of the process, from The Elements of Content Strategy:

Big inventories involve a lot of black coffee, a few late nights, and a playlist of questionable but cheering music prominently featuring the soundtrack of object-collecting video game Katamari Damacy. It takes quite a while to exhaustively inventory a large site, but it’s the only way to really understand what you have to work with.

We’re not talking about the same kind of exhaustive inventory she was describing (though I am recommending Katamari music). But even our less intensive approach is going to require your butt in a seat, your eyes on a screen, and a certain amount of patience and focus. You’re about to walk, with your fingers, through most of a website.

Start on the homepage. (We know that not all users start there, but we’ve got to have some kind of order to this process or we’ll never get through it.) Explore the main navigation before moving on to secondary navigation structures. Move left to right, top to bottom (assuming that is your language direction) over each page, looking for the links. You want to record every page you can reasonably access on the site, noting navigational and structural considerations as you go.

My advice as you work:

  • Use two monitors. I struggle immensely without two screens in this process, which involves constantly switching between spreadsheet and browser in rapid, tennis-match-like succession. If you don’t have access to multiple monitors, find whatever way is easiest for you to quickly flip between applications.
  • Record what you see. I generally note all visible menu links at the same level, then exhaust one section at a time. Sometimes this means I have to adjust what I initially observed, or backtrack to pages I missed earlier. You might prefer to record all data across a level before going deeper, and that would work, too. Just be consistent to minimize missed links.
  • Be alert to inconsistencies. On-page links, external links, and crosslinks can tell you a lot about the structure of the site, but they’re easy to overlook. Missed on-page links mean missed content; missed crosslinks mean duplicate work. (Note: the further you get into the site, the more you’ll start seeing crosslinks, given all the pages you’ve already recorded.)
  • Stick to what’s structurally relevant. A single file that’s not part of a larger pattern of file use is not going to change your understanding of the structure. Neither is recording every single blog post, quarterly newsletter, or news story in the archive. For content that’s dynamic, repeatable, and plentiful, I use an x in the page ID to denote more of the same. For example, a news archive with a page ID of 2.8 might show just one entry beneath it as 2.8.x; I don’t need to record every page up to 2.8.791 to understand that there are 791 articles on the site (assuming I noted that fact in an earlier content review).
  • Save. Save frequently. I cannot even begin to speak of the unfathomable heartbreak that is Microsoft Excel burning an unsaved audit to the ground.  

Knowing which links to follow, which to record, and how best to untangle structural confusion—that improves with time and experience. Performing structural audits will not only teach you about your current site, but will help you develop fluency in systems thinking—a boon when it comes time to document the new site.




science and technology

Trans-inclusive Design

Late one night a few years ago, a panicked professor emailed me: “My transgender student’s legal name is showing on our online discussion board. How can I keep him from being outed to his classmates?” Short story: we couldn’t. The professor created an offline workaround with the student. Years later this problem persists not just in campus systems, but in many systems we use every day.

To anyone who’d call that an unusual situation, it’s not. We are all already designing for trans users—1 in 250 people in the US identifies as transgender or gender non-binary (based on current estimates), and the number is rising.

We are web professionals; we can do better than an offline workaround. The choices we make impact the online and offline experiences of real people who are trans, non-binary, or gender-variant—choices that can affirm or exclude, uplift or annoy, help or harm.

The rest of this article assumes you agree with the concept that trans people are human beings who deserve dignity, respect, and care. If you are seeking a primer on trans-related vocabulary and concepts, please read up and come back later.

I’m going to cover issues touching on content, images, forms, databases, IA, privacy, and AI—just enough to get you thinking about the decisions you make every day and some specific ideas to get you started.

“Tried making a Bitmoji again, but I always get disillusioned immediately by their binary gender model from literally step 1 and end up not using it. I don’t feel represented.”

Editorial note: All personal statements quoted in this article have been graciously shared with the express consent of the original authors.

How we can get things right

Gender is expansively misconstrued as some interchangeable term for anatomical features. Unlike the constellation of human biological forms (our sex), gender is culturally constructed and varies depending on where you are in the world. It has its own diversity.

Asking for gender when it is not needed; limiting the gender options users can select; assuming things about users based on gender; or simply excluding folks from our designs are all ways we reify the man-woman gender binary in design decisions.
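When a design genuinely needs to ask for gender at all, one sketch of a field that avoids hard-coding a binary (the markup and wording here are illustrative, not a universal answer) is to make it optional and self-described:

<!-- A sketch only: gender as an optional, self-described text field
     rather than a forced binary choice. -->
<label for="gender">Gender (optional)</label>
<input type="text" id="gender" name="gender">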

Names are fundamentally important

If we do nothing else, we must get names right. Names are the difference between past and present, invalidation and affirmation, and sometimes safety and danger.

Yet, many of the systems we use and create don’t offer name flexibility.

Many programmers and designers have a few misconceptions about names, such as assuming people have one moniker that they go by all the time, despite how common it is for names to change over a lifetime. People might update them after a change in marital status, family situation, or gender, or perhaps someone is known by a nickname, westernized name, or variation on a first name.

In most locales, legally changing a name is extremely difficult, extremely expensive, contingent on medical documentation, or completely out of the question.

Changes to name and gender marker are even more complicated; they tend to be two separate, long-drawn-out processes. To make matters worse, laws vary from state to state within the U.S., and most only recognize two genders—man and woman—rather than allowing non-binary options.

Not all trans people change their names, but for those who do, it’s a serious and significant decision that shouldn’t be sabotaged. We can design systems that protect the lives and privacy of our users, respect the fluid nature of personal identity, and act as an electronic curb cut that helps everyone in the process.

Deadnaming

One need only search Twitter for “deadname app” to get an idea of how apps can leave users in the lurch. Some of the most alarming examples involve apps and sites that facilitate real-life interactions (which already involve a measure of risk for everyone).

“Lyft made it completely impossible for me to change my name on its app even when it was legally changed. I reached out to their support multiple times and attempted to delete the account and start over with no result. I was completely dependent on this service for groceries, appointments, and work, and was emotionally exhausted every single time I needed a ride. I ended up redownloading Uber - even though there was a strike against the service - which I felt awful doing. But Uber allowed me to change my name without any hoops to jump through, so for the sake of my mental health, I had to.”

Trans people are more likely to experience financial hardship, so using payment apps to ask for donations is often necessary. Some of these services may reveal private information as a matter of course, leaving them exposed and potentially at risk.

There are also ramifications when linked services rely on our data sources for name information, instigating an unpredictable cascade effect with little or no recourse to prevent the sharing of sensitive details.

These are examples of deadnaming. Deadnaming is what happens when someone’s previous or birth name is used, rather than the name the person uses now. Deadnaming is invalidating at the least, even as a faux pas, but can be psychologically devastating at the other extreme, even putting lives at risk.

The experiences of trans, non-binary, or gender-variant folk can vary widely, and they live in disparate conditions throughout the world. Many are thriving and creating new and joyful ways to resist and undo gender norms, despite the common negative narrative around trans lives. Others can face hardship; trans people are more likely to be unstably housed, underemployed, underpaid, and targets of violence in and out of their homes, workplaces, and intimate relationships. The ramifications are amplified for people of color and those with disabilities, as well as those in precarious living/working situations and environments where exposure can put them in harm’s way.

Design for name changes

Here’s what we can do:

Design for renaming. Emma Humphries’ talk on renaming covers this nicely. Airbnb has developed policies and procedures for users who’ve transitioned, allowing users to keep their review histories intact with amended names and/or pronouns.

Get rid of “real name” requirements. Allow people to use names they go by rather than their legal first names.

Clarify when you actually need a legal name, and only use that in conjunction with a display name field.

Have a name change process that allows users to change their names without legal documentation. (It’s likely that you have procedures for marriage-related name changes already.)

Ensure users can still change their display names when connecting with other data sources to populate users’ names.

Don’t place onerous restrictions on changes. Once someone creates a username, web address, or profile URL, allow them to change it.

Draft a code of conduct if you’re part of an online community, and make sure to include policies around deadnaming. Twitter banned deadnaming last year.

Allow people to be forgotten. When people delete their accounts for whatever reason, help them make sure that their data is not lingering in your systems or in other places online.

Update the systems users don’t see, too

Identity management systems can be a mess, and name changes can reveal the failures among those systems, including hidden systems that users don’t see.

One Twitter user’s health insurance company kept their ID number between jobs but changed their gender. Another user updated their display name but got an email confirmation addressed to their legal name.

Hidden information can also undermine job opportunities:

“At a university as a student, I transitioned and changed my name and gender to be a woman. TWELVE YEARS later after being hired to work in the Libraries, the Libraries HR coordinator emailed me that I was listed as male still in the database. He changed it on my asking, but I have to wonder how long… was it a factor in my being turned down for jobs I applied to… who had seen that..?”

Emma Humphries details the hidden systems that can carry out-of-date information about users. Her tips for database design include:

  • Don’t use emails as unique IDs.
  • Use an invariant user ID internally, and link the user’s current email and display name to it (see the sketch below).
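
Putting those two tips together might look something like this minimal sketch (the field names are mine, purely illustrative, and not from Humphries’ talk):

// A user record where the internal ID never changes, but everything
// a human sees can. All field names here are hypothetical.
const user = {
  id: "8f2b-41c7",            // invariant internal ID; never exposed or reused
  email: "star@example.com",  // current email, freely updatable
  displayName: "Star",        // the name the person actually goes by
  legalName: null             // collected only when genuinely required
};

// Join and look up on user.id everywhere; treat email and names as mutable data.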

Images

Visuals should allow room for representation and imagination rather than a narrow subset of the usual suspects: figures who appear to be straight, cisgender, able-bodied, and white/Caucasian.

What we can do is feature a variety of gender presentations, and avoid assuming someone’s gender identity based on what they buy.

Some companies, like Wildfang and Thinx, offer a broad array of images representing different races, body sizes, and gender expressions on their websites and in their ads.

Many are also choosing not to hire models, allowing room for imagination and versatility:

“I got a catalog for a ‘classic menswear company’ that features zero photos of any person of any gender. Now if only I could afford an $800 blazer...”

Here's what we can do:

Actively recruit diverse groups of models for photos. And pay them!

If you can’t shoot your own photos, Broadly has recently launched a trans-inclusive stock photo collection free for wide use. Avataaars allows users to create an avatar without selecting a gender.

Information architecture

How we organize information is a political act and a non-neutral decision (librarians have said this for a while). This applies to gender-based classifications.

Many companies that sell consumer goods incorporate gender into their product design and marketing, no matter what. The product itself might be inherently gender-neutral (such as clothing, toys, bikes, or even wine), but these design and marketing decisions can directly impact the information architecture of websites.

Here's what we can do:

Evaluate why any menus, categories, or tags are based on gender, and how it can be done differently:

“Nike has a ‘gender neutral’ clothing category, yet it’s listed under ‘men’ and ‘women’ in the website architecture.”

Forms

Forms, surveys, and other types of data gathering are surefire ways to include or exclude people. If you ask for information you don’t need or limit the options that people can select, you risk losing them as users.

Here's what we can do:

Critically evaluate why you are asking for personal information, including gender. Will that information be used to help someone, or sell things to your advertisers?

"Why does the @CocaCola site make me select a gender just to make a purchase? Guess my family isn't getting personalized Coke bottles for Christmas."

If you are asking users for their gender, you’d better have a good reason and options that include everyone. A gender field should have more than two options, or should ask for pronouns instead. When including more than binary options, actually record the selections in your databases instead of reclassifying answers as male/female/null, otherwise you risk losing trust when disingenuous design decisions become public.
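
As a rough illustration, a pronoun field might look something like this sketch (the option list is a starting point, not a standard):

<label for="pronouns">Pronouns (optional)</label>
<select id="pronouns" name="pronouns">
  <option value="">Prefer not to say</option>
  <option value="she/her">she/her</option>
  <option value="he/him">he/him</option>
  <option value="they/them">they/them</option>
  <option value="self-describe">Let me type my own</option>
</select>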

Honorifics are infrequently used these days, but it takes little work to add gender-inclusive titles to a list. For English-language sites, “Mx.” can go alongside “Mr.” and “Ms.” without fuss. United Airlines debuted this option earlier this year.

Content

Here's what we can do:

Avoid inappropriately gendered language. Your style guide should include singular “they” instead of “he/she” or “s/he,” and exclude frequently used words and phrases that exclude trans folks. Resources such as this transgender style guide are a quick way to check your language and benchmark your own content guidelines.

Check assumptions about gender and biology. Not everyone who can have a period, get pregnant, or breastfeed identifies as a woman or mother—just as not everyone who identifies as a woman or mother can have periods, get pregnant, or breastfeed. Thinx, a company that sells period underwear, has an inclusive tagline: “For people with periods.”

Avoid reinforcing the binary. Groups of people aren’t “ladies and gentlemen” or “boys and girls.” They are folks, people, colleagues, “y’all,” or even “all y’all.”

Pronouns aren’t “preferred”—they’re just pronouns. Calling pronouns preferred suggests that they’re optional and are replacing a “true” pronoun.

Avoid reinforcing stereotypes about trans people. Not all trans people are interested in medically transitioning, or in “passing.” They also aren’t fragile or in need of a savior. Gender is separate from sexual orientation. You can’t “tell” someone is trans.

Privacy, surveillance, and nefarious AI

We’ve heard the story of algorithms identifying a pregnant teen before her parents knew. What if an algorithm predicts or reveals information about your gender identity?

Inferences. Users’ genders are assumed based on their purchase/browsing history.

Recommendations. A user bought something before they transitioned and it shows up in “recommended because you bought X.”

Predictions. Users’ genders are not only inferred but used to predict something else based on characteristics of that gender. Even if you don’t tell big websites what your gender is, they assume one for you based on your interests. That kind of reductive essentialism can harm people of all genders. One of this article’s peer readers summed this up:

“Gender markers are a poor proxy for tastes. I like dresses, cute flats, and Raspberry Pis.”

Flashbacks. “On this day” algorithms remind users of the past, sometimes for better (“I’ve come so far”) or for worse (“don’t remind me”).

AI-based discrimination

AI and surveillance software can also reinforce norms about what men’s and women’s bodies should look like, resulting in harrowing airline travel experiences and creating AI-based discrimination for trans people.

So, too, can trans folks’ public data be used for projects that they don’t consent to. Just because we can use AI for something—like determining gender based on a face scan—doesn’t mean we should.

Here's what we can do:

Read up and proactively mitigate bias. AI and algorithms can reflect developers’ biases and perpetuate stereotypes about how people’s bodies should look. Use AI to challenge the gender binary rather than reinforce it. Design for privacy first. Hire more types of people who represent different lived experiences.

Toward a gender-inclusive web

The ideas I’ve offered here are only starting points. How you choose to create space for trans folks is going to be up to you. I don’t have all the solutions here, and there is no singular trans experience. Also, language, definitions, and concepts change rapidly.

We shouldn’t use any of these facts as excuses to keep us from trying.

When we start to think about design impact on trans folks, the ideas we bring into question can benefit everyone. Our designs should go beyond including—they should affirm and validate. Ideally, they will also reflect organizational cultures that support diversity and inclusion.

Here's what we can do:

Keep learning. Learn how to be a good ally. Pay trans user research participants to help validate your design assumptions. Hire trans people on your team and don't hang them out to dry or make them do all the hard work around inclusion and equity. Make it everyone’s job to build a more just web and world for everybody.


This article is stronger and wiser thanks to Mica McPheeters at A List Apart and the following peer readers. Thank you.

Jake Atchison
Katherine Deibel, Ph.D.
Justina F. Hall
Austyn Higgs
Emma Humphries
Tara Robertson
Levi R. Walter




science and technology

Daily Ethical Design

Suddenly, I realized that the people next to me might be severely impacted by my work. I was having a quick lunch in the airport. A group of flight attendants sat down at the table next to me and started to prepare for their flight. For a while now, our design team had been working on futuristic concepts for the operations control center of these flight attendants’ airline, pushing ourselves to come up with innovative solutions enabled by the newest technologies. As the control center deals with all activities around flying planes, our concepts touched upon everything and everyone within the airline.

How was I to know what the impact of my work would be on the lives of these flight attendants? And what about the lives of all the other people working at the airline? Ideally, we would have talked to all the types of employees in the company and tested our concepts with them. But, of course, there was no budget (or time) allocated to do so, not to mention we faced the hurdle of convincing (internal) stakeholders of the need.

Not for the first time, I felt frustrated: practical, real-world constraints prevented me from assessing the impact and quality of my work. They prevented me from properly conducting ethical design.

What is ethical design?

Right, good question. A very comprehensive definition of ethical design can be found at Encyclopedia.com:
Design ethics concerns moral behavior and responsible choices in the practice of design. It guides how designers work with clients, colleagues, and the end users of products, how they conduct the design process, how they determine the features of products, and how they assess the ethical significance or moral worth of the products that result from the activity of designing.
In other words, ethical design is about the “goodness”—in terms of benefit to individuals, society, and the world—of how we collaborate, how we practice our work, and what we create. There’s never a black-and-white answer for whether design is good or bad, yet there are a number of areas for designers to focus on when considering ethics.

Usability

Nowadays usability has secured its place as a basic requirement for every interface; unusable products are considered design failures. And rightly so; we have a moral obligation as designers to create products that are intuitive, safe, and free from possibly life-threatening errors. We were all reminded of usability’s importance by last year’s accidental nuclear strike warning in Hawaii. What if, instead of a false positive, the operator had broadcast a false negative?

Accessibility

Like usability, inclusive design has become a standard item in the requirement list of many designers and companies. (I will never forget that time someone tried to use our website with a screen reader—and got absolutely stuck at the cookie message.) Accessible design benefits all, as it attempts to cover as many needs and capabilities as possible. Yet for each design project, there are still a lot of tricky questions to answer. Who gets to benefit from our solutions? Who is (un)intentionally left out? Who falls outside the “target customer segment”?

Privacy

Another day, another Facebook privacy scandal. As we’re progressing into the Data Age, the topic of privacy has become almost synonymous with design ethics. There’s a reason why more and more people use DuckDuckGo as an alternative search engine to Google. Corporations have access to an abundance of personal information about consumers, and as designers we have the privilege—and responsibility—of using this information to shape products and services. We have to consider how much information is strictly necessary and how much people are willing to give up in exchange for services. And how can we make people aware of the potential risks without overloading them?

User involvement

Overlapping largely with privacy, this focus area is about how we deal with our users and what we do with the data that we collect from them. IDEO has recently published The Little Book of Design Research Ethics, which provides a comprehensive overview of the core principles and guidelines we should follow when conducting design research.

Persuasion

Ethics related to persuasion concerns the extent to which we may influence the behavior and thoughts of our users. It doesn’t take much to push acceptable, “white hat” persuasion into gray or even dark territory. Conversion optimization, for example, can easily turn into “How do we squeeze more revenue out of our customers by turning their unconscious against them?” Prime examples include Netflix, which convinces us to watch, watch, and watch even more, and Booking.com, which barrages our senses with urgency and social pressure.

Focus

The current digital landscape is addictive and distracting, with everything competing for our attention. Designing for focus is about responsibly handling people’s most valuable resource: time. Our challenge is to limit everything that disrupts our users’ attention, lower the addictiveness of products, and create calmness. The Center for Humane Technology has started a useful list of resources for this purpose.

Sustainability

What’s the impact of our work on the world’s environment, resources, and climate? Instead of continuously adding new features in the unrelenting scrum treadmill, how could we design for fewer? We’re in a position to create responsible digital solutions that enable sustainable consumer behavior and prevent overconsumption. For example, apps such as Optimiam and Too Good To Go allow people to order leftover food that would normally be thrown away. Or consider Mutum and Peerby, whose peer-to-peer platforms promote the sharing and reuse of owned products.

Society

The Center for Humane Technology’s Ledger of Harms is a work-in-progress collection of the negative impacts that digital technology has on society, covering topics such as relationships, mental health, and democracy. Designers who are mindful of society consider the impact of their work on the global economy, communities, politics, and health.
The focus areas of design ethics. That’s a lot to consider!

Ethics as an inconvenience

Ideally, in every design project, we should assess the potential impact in all of the above-mentioned areas and take steps to prevent harm. Yet there are many legitimate, understandable reasons why we often neglect to do so. It’s easy to have moral principles, yet in the real world, with the constraints that our daily life imposes upon us, it’s seldom easy to act according to those principles. We might simply say it’s inconvenient at the moment. That there’s a lack of time or budget to consider all the ethical implications of our work. That there are many more pressing concerns that have priority right now. We might genuinely believe it’s just a small issue, something to consider later, perhaps. Mostly, we are simply unaware of the possible consequences of our work. And then there’s the sheer complexity of it all: it’s simply too much to simultaneously focus on. When short on time, or in the heat of approaching deadlines and impatient stakeholders, how do you incorporate all of design ethics’ focus areas? Where do you even start?

Ethics as a structural practice

For these reasons, I believe we need to elevate design ethics to a more practical level. We need to find ways to make ethics not an afterthought, not something to be considered separately, but rather something that’s so ingrained in our process that not doing it means not doing design at all. The only way to overcome the “inconvenience” of acting ethically is to practice daily ethical design: ethics structurally integrated into our daily work, processes, and tools as designers.

No longer will we have to rely on the exceptions among us: those principled few who are brave enough to stand up against the system no matter what kind of pressure is put upon them. Because the system will be on our side. By applying ethics daily and structurally in our design process, we’ll be able to identify and neutralize potential mistakes and misuse at a very early stage. We’ll increase the quality of our design and our practices simply because we’ll think things through more thoroughly, in a more conscious and structured manner.

But perhaps most important is that we’ll establish a new standard for design. A standard that we can sell to our clients as the way design should be done, with ethical design processes and deliverables already included. A standard that can be taught to design students so that the newest generation of designers doesn’t know any better than to apply ethics, always.

How to practice daily ethical design?

At this point we’ve arrived at the question of how we can structurally integrate ethics into our design process. How do we make sure that our daily design decisions will result in a product that’s usable and accessible; protects people’s privacy, agency, and focus; and benefits both society and nature? I want to share with you some best practices that I’ve identified so far, and how I’ve tried to apply them during a recent project at Mirabeau. The goal of the project was to build a web application that provides a shaver manufacturer’s factory workers insight into the real-time availability of production materials.

Connect to your organization’s mission and values

By connecting our designs to the mission and values of the companies we work for, we can structurally use our design skills in a strategic manner, for moral purposes. We can challenge the company to truly live up to its promises and support it in carrying out its mission. This does, however, require you to be aware of the company’s values, and to compare these to your personal values. As I had worked with our example client before, I knew it was a company that takes care of its employees and has a strong focus on creating a better world. During the kick-off phase, we used a strategy pyramid to structure the client’s mission and values, and to agree upon success factors for the project. We translated the company’s customer-facing brand guidelines to employee-focused design principles that maintained the essence of the organization.

Keep track of your assumptions

Throughout our entire design process, we make assumptions for each decision that we take. By structurally keeping track of these assumptions, you’ll never forget about the limitations of your design and where the potential risks lie in terms of (harmful) impact on users, the project, the company, and society. In our example project, we listed our assumptions about user goals, content, and functionalities for each page of the application. If we were not fully sure about the value for end users, or the accuracy of a user goal, we marked it as a value assumption. When we were unsure if data could be made available, we marked this as a data (feasibility) assumption. If we were not sure whether a feature would add to the manufacturer’s business, we marked it as a scope assumption. Every week, we tested our assumptions with end users and business stakeholders through user tests and sprint demos. Each design iteration led to new questions and assumptions to be tested the next week.

Aim to be proven wrong

While our assumptions are the known unknowns, there are always unknown unknowns that we aren’t aware of but could be a huge risk for the quality and impact of our work. The only way we can identify these is by applying the scientific principle of falsifiability: seeking actively to be proven wrong. Only outsiders can point out to us what we miss as an individual or as a team. In our weekly user tests, we included factory workers and stakeholders with different disciplines, from different departments, and working in different contexts, to identify the edge cases that could break our concept. On one occasion, this made us reconsider the entirety of our concept. Still, we could have done better: although scalability to other factories was an important success factor, we were unable to gather input from those other factories during the project. We felt our only option was to mention this as a risk (“limit to scalability”).

Use the power of checklists

Let’s face it: we forget things. (Without scrolling up the page, can you name all the focus areas of design ethics?) This is where checklists help us out: they provide knowledge in the world, so that we don’t have to process it in our easily overwhelmed memory. Simple yet powerful, a checklist is an essential tool for practicing daily ethical design. In our example project, we used checklists to maintain an overview of questions and assumptions to user test, to check whether we had included our design principles properly, and to assess whether we complied with the client’s values, design principles, and the agreed-upon success factors. In hindsight, we could also have taken a moment during the concept phase to go through the list of focus areas for design ethics, and taken a more structural approach to checking accessibility guidelines.

The main challenge for daily ethical design

Most ethics focus areas are quite tangible, where design decisions have immediate, often visible effects. While certainly challenging in their own right, they’re relatively easy to integrate in our daily practice, especially for experienced designers. Society and the environment, however, are more intangible topics; the effects of our work in these areas are distant and uncertain. I’m sure that when Airbnb was first conceived, the founders did not consider the magnitude of its disruptive impact on the housing market. The same goes for Instagram, as its role in creating demand for fast fashion must have been hard to foresee. Hard, but not impossible. So how do we overcome this challenge and make the impact that we have on society and the environment more immediate, more daily?

Conduct Dark Reality sessions

The ancient Greek philosopher Socrates used a series of questions to gradually uncover the invalidity of people’s beliefs. In a very similar way, we can uncover the assumptions and potentially disastrous consequences of our concepts in a “Dark Reality” session, a form of speculative design that focuses on stress-testing a concept with challenging questions.

We have to ask ourselves—or even better, have somebody outside our team ask us—questions such as: “What is the lifespan of your product? What if the user base is in the millions? What are the long-term effects on the economy, society, and the environment? Who benefits from your design? Who loses? Who is excluded? And perhaps most importantly, how could your design be misused?” (For more of these questions, Alan Cooper provided a great list in his keynote at Interaction 18.)

The back-and-forth Q&A of the Dark Reality session will help us consider and identify our concept’s weaknesses and potential consequences. As it is a team effort, it will spark discussion and uncover differences in team members’ ethical values. Moreover, the session will result in a list of questions and assumptions that can be tested with potential users and subject matter experts. In the project for the airline control center, it resulted in more consideration for the human role in automation and how digital interfaces can continue to support human capabilities (instead of replacing them), as well as reflection on the role of airports in future society.

The Dark Reality session is best conducted during the convergent parts of the double diamond, as these are the design phases in which we narrow down to realistic ideas. It’s vital to have a questioner from outside the team with strong interviewing skills who doesn’t easily accept an answer as sufficient. There are helpful tools available to help structure the session, such as the Tarot Cards of Tech and these ethical tools.

Take a step back to go forward

As designers, we’re optimists by nature. We see the world as a set of problems that we can solve systematically and creatively if only we try hard enough. We intend well. However, merely having the intention to do good is not going to be enough. Our mindset comes with the pitfall of (dis)missing potentially disastrous consequences, especially under the pressure of daily constraints. That’s why we need to regularly, systematically take a step back and consider the future impact of our work. My hope is that the practical, structural approach to ethics introduced in this article will help us agree on a higher standard for design.




science and technology

Resilient Management, An Excerpt

In Tuckman’s Stages of Group Development, the Storming stage happens as a group begins to figure out how to work together. Previously, each person had been doing their own thing as individuals, so necessarily a few things need to be ironed out: how to collaborate, how to hit goals, how to determine priorities. Of course there may be some friction here!

But even if your team doesn’t noticeably demonstrate this kind of internal Storming as they begin to gel, there might be some outside factors at play in your work environment that create friction. During times of team scaling and organizational change—the water we in the web industry are often swimming in—managers are responsible for things like strategy-setting, aligning their team’s work to company objectives, and unblocking the team as they ship their work.

In addition to these business-context responsibilities, managers need to be able to help their teammates navigate this storm by helping them grow in their roles and support the team’s overall progress. If you and your teammates don’t adapt and evolve in your roles, it’s unlikely that your team will move out of the Storming stage and into the Norming stage of team dynamics.

To spur this course-correction and growth in your teammates, you’ll end up wearing four different hats:

  • Mentoring: lending advice and helping to problem solve based on your own experience.
  • Coaching: asking open questions to help your teammate reflect and introspect, rather than sharing your own opinions or quickly problem solving.
  • Sponsoring: finding opportunities for your teammate to level up, take on new leadership roles, and get promoted.
  • Delivering feedback: observing behavior that is or isn’t aligned to what the team needs to be doing and sharing those observations, along with praise or suggestions.

Let’s dive into how to choose, and when to use, each of these skills as you grow your teammates, and then talk about what it looks like when teammates support the overarching direction of the team.

Mentoring

When I talk to managers, I find that the vast majority have their mentor hats on ninety percent of the time when they’re working with their teammates. It’s natural!

In mentoring mode, we’re doling out advice, sharing our perspective, and helping someone else problem solve based on that information. Our personal experiences are often what we can talk most confidently about! For this reason, mentorship mode can feel really good and effective for the mentor. Having that mentor hat on can help the other person overcome a roadblock or know which next steps to take, while avoiding drastic errors that they wouldn’t have seen coming otherwise.

As a mentor, it’s your responsibility to give advice that’s current and sensitive to the changing dialog happening in our industry. Advice that might work for one person (“Be louder in meetings!” or “Ask your boss for a raise!”) may undermine someone else, because members of underrepresented groups are unconsciously assessed and treated differently. For example, research has shown that “when women are collaborative and communal, they are not perceived as competent—but when they emphasize their competence, they’re seen as cold and unlikable, in a classic ‘double bind’”.

If you are not a member of a marginalized group, and you have a mentee who is, please be a responsible mentor! Try to be aware of the way members of underrepresented groups are perceived, and the unconscious bias that might be at play in your mentee’s work environment. When you have your mentor hat on, do lots of gut checking to make sure that your advice is going to be helpful in practice for your mentee.

Mentoring is ideal when the mentee is new to their role or to the organization; they need to learn the ropes from someone who has firsthand experience. It’s also ideal when your teammate is working on a problem and has tried out a few different approaches, but still feels stumped; this is why practices like pair coding can help folks learn new things.

As mentors, we want our mentees to reach beyond us, because our mentees’ success is ultimately our success. Mentorship relationships evolve over time, because each party is growing. Imaginative, innovative ideas often come from people who have never seen a particular challenge before, so if your mentee comes up with a creative solution on their own that you wouldn’t have thought of, be excited for them—don’t just focus on the ways that you’ve done it or seen it done before.

Managers often default to mentoring mode because it feels like the fastest way to solve a problem, but it falls short in helping your teammate connect their own dots. For that, we’ll look to coaching.

Coaching

In mentoring mode, you’re focused on both the problem and the solution. You’ll share what you as the mentor would do or have done in this situation. This means you’re more focused on yourself, and less on the person who is sitting in front of you.

In coaching mode—an extremely powerful but often underutilized mode—you’re doing two primary things:

  1. Asking open questions to help the other person explore more of the shape of the topic, rather than staying at the surface level.
  2. Reflecting, which is like holding up a mirror for the other person and describing what you see or hear, or asking them to reflect for themselves.

These two tools will help you become your teammate’s fiercest champion.

Open Questions

“Closed” questions can only be answered with yes or no. Open questions often start with who, what, when, where, why, and how. But the best open questions are about the problem, not the solution. Questions that start with why tend to make the other person feel judged, and questions that start with how tend to go into problem solving mode—both of which we want to avoid while in coaching mode.

However, what questions can be authentically curious! When someone comes to you with a challenge, try asking questions like:

  • What’s most important to you about it?
  • What’s holding you back?
  • What does success look like?

Let’s say my teammate comes to me and says they’re ready for a promotion. Open questions could help this teammate explore what this promotion means and demonstrate to me what introspection they’ve already done around it. Rather than telling them what I think is necessary for them to be promoted, I could instead open up this conversation by asking them:

  • What would you be able to do in the new level that you can’t do in your current one?
  • What skills are required in the new level? What are some ways that you’ve honed those skills?
  • Who are the people already at that level that you want to emulate? What about them do you want to emulate?

Their answers would give me a place to start coaching. These questions might push my teammate to think more deeply about what this promotion means, rather than allowing them to stay surface level and believe that a promotion is about checking off a lot of boxes on a list. Their answers might also open my eyes to things that I hadn’t seen before, like a piece of work that my teammate had accomplished that made a huge impact. But most important, going into coaching mode would start a two-way conversation with this teammate, which would help make an otherwise tricky conversation feel more like a shared exploration.

Open questions, asked from a place of genuine curiosity, help people feel seen and heard. However, if the way you ask your questions comes across as judgy or like you’ve already made some assumptions, then your questions aren’t truly open (and your teammate can smell this on you!). Practice your intonation to make sure your open questions are actually curious and open.

By the way, forming lots of open questions (instead of problem solving questions, or giving advice) is tremendously hard for most people. Don’t worry if you don’t get the hang of it at first; it takes a lot of practice and intention over time to default to coaching mode rather than mentoring mode. I promise, it’s worth it.

Reflections

Just like open questions, reflections help the other person feel seen and heard, and to explore the topic more deeply.

It’s almost comical how rarely we get the sense that the person we’re talking to is actively listening to us, or focusing entirely on helping us connect our own dots. Help your teammates reflect by repeating back to them what you hear them say, as in:

  • “What I’m hearing you say is that you’re frustrated with how this project is going. Is that right?”
  • “What I know to be true about you is how deeply you care about your teammates’ feelings.”

In each of these examples, you are holding up a metaphorical mirror to your teammate, and helping them look into it. You can coach them to reflect, too:

  • “How does this new architecture project map to your goals?”
  • “Let’s reflect on where you were this time last year and how far you’ve come.”

Occasionally, you might get a reflection wrong; this gives the other person an opportunity to realize something new about their topic, like the words they’re choosing aren’t quite right, or there’s another underlying issue that should be explored. So don’t be worried about giving a bad reflection; reflecting back what you’re hearing will still help your teammate.

The act of reflecting can help the other person do a gut check to make sure they’re approaching their topic holistically. Sometimes the act of reflection forces (encourages?) the other person to do some really hard work: introspection. Introspection creates an opportunity for them to realize new aspects of the problem, options they can choose from, or deeper meanings that hadn’t occurred to them before—which often ends up being a nice shortcut to the right solution. Or, even better, the right problem statement.

When you have your coaching hat on, you don’t need to have all the answers, or even fully understand the problem that your teammate is wrestling with; you’re just there as a mirror and as a question-asker, to help prompt the other person to think deeply and come to some new, interesting conclusions. Frankly, it may not feel all that effective when you’re in coaching mode, but I promise, coaching can generate way more growth for that other person than just giving them advice or sharing your perspective.

Choose coaching when you’re looking to help someone (especially an emerging leader) hone their strategic thinking skills, grow their leadership aptitude, and craft their own path forward. Coaching mode is all about helping your teammate develop their own brain wrinkles, rather than telling them how you would do something. The introspection and creativity it inspires create deeper and longer-lasting growth.

Sponsoring

While you wear the mentoring and coaching hats around your teammates, the sponsor hat is more often worn when they’re not around, like when you’re in a 1:1 with your manager, in a sprint planning meeting, or in another environment where someone’s work might be recognized. You might hear about an upcoming project to acquire a new audience and recommend that a budding user researcher take it on, or you’ll suggest to an All Hands meeting organizer that a junior designer should give a talk about a new pattern they’ve introduced to the style guide.

Sponsorship is all about feeling on the hook for getting someone to the next level. As someone’s sponsor, you’ll put their name in the ring for opportunities that will get them the experience and visibility necessary to grow in their role and at the organization. You will put your personal reputation on the line on behalf of the person you’re sponsoring, to help get them visible and developmental assignments. It’s a powerful tool, and the one most effective at helping someone get to the next level (way more so than mentoring or coaching!).

The Center for Talent Innovation routinely measures the career benefits of sponsorship (PDF). Their studies have found that when someone has a sponsor, they are way more likely to have access to career-launching work. They’re also more likely to take actions that lead to even more growth and opportunities, like asking their manager for a stretch assignment or a raise.

When you’re in sponsorship mode, think about the different opportunities you have to offer up someone’s name. This might look like:

  • giving visible/public recognition (company “shout outs,” having them present a project demo, thanking them in a launch email, giving someone’s manager feedback about their good work);
  • assigning stretch tasks and projects that are just beyond their current skill set, to help them grow and have supporting evidence for a future promotion; or
  • opening the door for them to write blog posts, give company or conference talks, or contribute open-source work.

Remember that members of underrepresented groups are typically over-mentored, but under-sponsored. These individuals get lots of advice (often unsolicited), coffee outings, and offers to teach them new skills. But it’s much rarer for them to see support that looks like sponsorship.

This isn’t because sponsors intentionally ignore marginalized folks, but because of in-group bias. Because of how our brains (and social networks) work, the people we’re closest to tend to look mostly like us—and we draw from that same pool when we nominate people for projects, for promotions, and for hires. Until I started learning about bias in the workplace, most of the people I sponsored were white, cisgender women, like myself. Since then, I’ve actively worked to sponsor people of color and nonbinary people. It takes effort and intention to combat our default behaviors—but I know you can do it!

Take a look at the daily communications you participate in: your work chat logs, the conversations you have with others, the process for figuring out who should fix a bug or work on a new project, and the processes for making your teams’ work visible (like an architecture review, code review, launch calendar, etc.). You’ll be surprised how many moments there are to sponsor someone throughout an average day. Please put in the time and intention to ensure that you’re sponsoring members of underrepresented groups, too.




science and technology

Responsible JavaScript: Part II

You and the rest of the dev team lobbied enthusiastically for a total re-architecture of the company’s aging website. Your pleas were heard by management—even up to the C-suite—who gave the green light. Elated, you and the team started working with the design, copy, and IA teams. Before long, you were banging out new code.

It started out innocently enough with an npm install here and an npm install there. Before you knew it, though, you were installing production dependencies like an undergrad doing keg stands without a care for the morning after.

Then you launched.

Unlike the aftermath of most copious boozings, the agony didn’t start the morning after. Oh, no. It came months later in the ghastly form of low-grade nausea and headache of product owners and middle management wondering why conversions and revenue were both down since the launch. It then hit a fever pitch when the CTO came back from a weekend at the cabin and wondered why the site loaded so slowly on their phone—if it indeed ever loaded at all.

Everyone was happy. Now no one is happy. Welcome to your first JavaScript hangover.

It’s not your fault

When you’re grappling with a vicious hangover, “I told you so” would be a well-deserved, if fight-provoking, rebuke—assuming you could even fight in so sorry a state.

When it comes to JavaScript hangovers, there’s plenty of blame to dole out. Pointing fingers is a waste of time, though. The landscape of the web today demands that we iterate faster than our competitors. This kind of pressure means we’re likely to take advantage of any means available to be as productive as possible. That means we’re more likely—but not necessarily doomed—to build apps with more overhead, and possibly use patterns that can hurt performance and accessibility.

Web development isn't easy. It’s a long slog we rarely get right on the first try. The best part of working on the web, however, is that we don’t have to get it perfect at the start. We can make improvements after the fact, and that’s just what the second installment of this series is here for. Perfection is a long ways off. For now, let’s take the edge off of that JavaScript hangover by improving your site’s, er, scriptuation in the short term.

Round up the usual suspects

It might seem rote, but it’s worth going through the list of basic optimizations. It’s not uncommon for large development teams—particularly those that work across many repositories or don’t use optimized boilerplate—to overlook them.

Shake those trees

First, make sure your toolchain is configured to perform tree shaking. If tree shaking is new to you, I wrote a guide on it last year you can consult. The short of it is that tree shaking is a process in which unused exports in your codebase don’t get packaged up in your production bundles.

Tree shaking is available out of the box with modern bundlers such as webpack, Rollup, or Parcel. Grunt or gulp—which are not bundlers, but rather task runners—won’t do this for you. A task runner doesn’t build a dependency graph like a bundler does. Rather, they perform discrete tasks on the files you feed to them with any number of plugins. Task runners can be extended with plugins to use bundlers to process JavaScript. If extending task runners in this way is problematic for you, you’ll likely need to manually audit and remove unused code.

For tree shaking to be effective, the following must be true:

  1. Your app logic and the packages you install in your project must be authored as ES6 modules. Tree shaking CommonJS modules isn’t practically possible.
  2. Your bundler must not transform ES6 modules into another module format at build time. If this happens in a toolchain that uses Babel, the @babel/preset-env configuration must specify modules: false to prevent ES6 code from being converted to CommonJS (see the sketch after this list).
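
Here’s a minimal sketch of that second point: a hypothetical Babel configuration that leaves ES6 modules intact so the bundler can shake them:

// babel.config.js (a sketch; adjust to your own toolchain)
module.exports = {
  presets: [
    ["@babel/preset-env", {
      modules: false // let the bundler handle modules so tree shaking can work
    }]
  ]
};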

On the off chance tree shaking isn’t occurring during your build, getting it to work may help. Of course, its effectiveness varies on a case-by-case basis. It also depends on whether the modules you import introduce side effects, which may influence a bundler’s ability to shake unused exports.

Split that code

Chances are good that you’re employing some form of code splitting, but it’s worth re-evaluating how you’re doing it. No matter how you’re splitting code, there are two questions that are always worth asking yourself:

  1. Are you deduplicating common code between entry points?
  2. Are you lazy loading all the functionality you reasonably can with dynamic import()?

These are important because reducing redundant code is essential to performance. Lazy loading functionality also improves performance by lowering the initial JavaScript footprint on a given page. On the redundancy front, using an analysis tool such as Bundle Buddy can help you find out if you have a problem.

Bundle Buddy can examine your webpack compilation statistics and determine how much code is shared between your bundles.

Where lazy loading is concerned, it can be a bit difficult to know where to start looking for opportunities. When I look for opportunities in existing projects, I’ll search for user interaction points throughout the codebase, such as click and keyboard events, and similar candidates. Any code that requires a user interaction to run is a potentially good candidate for dynamic import().
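
For instance, here’s a sketch of loading a hypothetical chart module only when a user clicks a button (./chart.js and renderChart are placeholders, not real modules):

const button = document.querySelector("#show-chart");

button.addEventListener("click", () => {
  // The chart code is fetched only once the user asks for it.
  import("./chart.js")
    .then((module) => module.renderChart("#chart-container"))
    .catch((error) => console.error("Couldn't load the chart:", error));
});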

Of course, loading scripts on demand brings the possibility that interactivity could be noticeably delayed, as the script necessary for the interaction must be downloaded first. If data usage is not a concern, consider using the rel=prefetch resource hint to load such scripts at a low priority that won’t contend for bandwidth with critical resources. Support for rel=prefetch is good, but nothing will break if it’s unsupported, as such browsers will simply ignore markup they don’t understand.
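
A prefetch hint for the hypothetical chart script above might look like this:

<!-- Fetched at idle priority, so it's (hopefully) cached by the time it's needed: -->
<link rel="prefetch" href="/js/chart.js" as="script">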

Externalize third-party hosted code

Ideally, you should self-host as many of your site’s dependencies as possible. If for some reason you must load dependencies from a third party, mark them as externals in your bundler’s configuration. Failing to do so could mean your website’s visitors will download both locally hosted code and the same code from a third party.

Let’s look at a hypothetical situation where this could hurt you: say that your site loads Lodash from a public CDN. You’ve also installed Lodash in your project for local development. However, if you fail to mark Lodash as external, your production code will end up loading a third-party copy of it in addition to the bundled, locally hosted copy.
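
Marking Lodash as an external is a one-liner in webpack. This sketch assumes the CDN copy exposes the usual _ global:

// webpack.config.js (a sketch)
module.exports = {
  // ...the rest of your configuration...
  externals: {
    lodash: "_" // imports of "lodash" resolve to the existing global instead of being bundled
  }
};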

This may seem like common knowledge if you know your way around bundlers, but I’ve seen it get overlooked. It’s worth your time to check twice.

If you aren’t convinced to self-host your third-party dependencies, then consider adding dns-prefetch, preconnect, or possibly even preload hints for them. Doing so can lower your site’s Time to Interactive and—if JavaScript is critical to rendering content—your site’s Speed Index.
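
These hints are plain markup; cdn.example.com below stands in for whichever third-party host you use:

<link rel="dns-prefetch" href="https://cdn.example.com">
<link rel="preconnect" href="https://cdn.example.com" crossorigin>
<!-- Or, for a specific critical script: -->
<link rel="preload" href="https://cdn.example.com/js/vendor.js" as="script">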

Smaller alternatives for less overhead

Userland JavaScript is like an obscenely massive candy store, and we as developers are awed by the sheer amount of open source offerings. Frameworks and libraries allow us to extend our applications to quickly do all sorts of stuff that would otherwise take loads of time and effort.

While I personally prefer to aggressively minimize the use of client-side frameworks and libraries in my projects, their value is compelling. Yet, we do have a responsibility to be a bit hawkish when it comes to what we install. When we’ve already built and shipped something that depends on a slew of installed code to run, we’ve accepted a baseline cost that only the maintainers of that code can practically address. Right?

Maybe, but then again, maybe not. It depends on the dependencies used. For instance, React is extremely popular, but Preact is an ultra-small alternative that largely shares the same API and retains compatibility with many React add-ons. Luxon and date-fns are much more compact alternatives to moment.js, which is not exactly tiny.

Libraries such as Lodash offer many useful methods. Yet, some of them are easily replaceable with native ES6. Lodash’s compact method, for example, is replaceable with the filter array method. Many more can be replaced without much effort, and without the need for pulling in a large utility library.
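
To illustrate (assuming Lodash is already imported as _):

const values = [0, 1, false, 2, "", 3];

_.compact(values);      // [1, 2, 3]
values.filter(Boolean); // [1, 2, 3] (same result, no library required)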

Whatever your preferred tools are, the idea is the same: do some research to see if there are smaller alternatives, or if native language features can do the trick. You may be surprised at how little effort it may take you to seriously reduce your app’s overhead.

Differentially serve your scripts

There’s a good chance you’re using Babel in your toolchain to transform your ES6 source into code that can run on older browsers. Does this mean we’re doomed to serve giant bundles even to browsers that don’t need them, until the older browsers disappear altogether? Of course not! Differential serving helps us get around this by generating two different builds of your ES6 source:

  • Bundle one, which contains all the transforms and polyfills required for your site to work on older browsers. You’re probably already serving this bundle right now.
  • Bundle two, which contains little to none of the transforms and polyfills because it targets modern browsers. This is the bundle you’re probably not serving—at least not yet.

Achieving this is a bit involved. I’ve written a guide on one way you can do it, so there’s no need for a deep dive here. The long and short of it is that you can modify your build configuration to generate an additional but smaller version of your site’s JavaScript code, and serve it only to modern browsers. The best part is that these are savings you can achieve without sacrificing any features or functionality you already offer. Depending on your application code, the savings could be quite significant.

A webpack-bundle-analyzer analysis of a project’s legacy bundle (left) versus one for a modern bundle (right).

The simplest pattern for serving these bundles to their respective platforms is brief. It also works a treat in modern browsers:

<!-- Modern browsers load this file: -->
<script type="module" src="/js/app.mjs"></script>
<!-- Legacy browsers load this file: -->
<script defer nomodule src="/js/app.js"></script>

Unfortunately, there’s a caveat with this pattern: legacy browsers like IE 11—and even relatively modern ones such as Edge versions 15 through 18—will download both bundles. If this is an acceptable trade-off for you, then worry no further.

On the other hand, you'll need a workaround if you’re concerned about the performance implications of older browsers downloading both sets of bundles. Here’s one potential solution that uses script injection (instead of the script tags above) to avoid double downloads on affected browsers:

var scriptEl = document.createElement("script");

if ("noModule" in scriptEl) {
  // Set up modern script
  scriptEl.src = "/js/app.mjs";
  scriptEl.type = "module";
} else {
  // Set up legacy script
  scriptEl.src = "/js/app.js";
  scriptEl.defer = true; // type="module" defers by default, so set it here.
}

// Inject!
document.body.appendChild(scriptEl);

This script infers that if a browser supports the nomodule attribute in the script element, it understands type="module". This ensures that legacy browsers only get legacy scripts and modern browsers only get modern ones. Be warned, though, that dynamically injected scripts load asynchronously by default, so set the async attribute to false if dependency order is crucial.

Transpile less

I’m not here to trash Babel. It’s indispensable, but lordy, it adds a lot of extra stuff without your ever knowing. It pays to peek under the hood to see what it’s up to. Some minor changes in your coding habits can have a positive impact on what Babel spits out.

https://twitter.com/_developit/status/1110229993999777793

To wit: default parameters are a very handy ES6 feature you probably already use:

function logger(message, level = "log") {
  console[level](message);
}

The thing to pay attention to here is the level parameter, which has a default of “log.” This means if we want to invoke console.log with this wrapper function, we don’t need to specify level. Great, right? Except when Babel transforms this function, the output looks like this:

function logger(message) {
  var level = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : "log";

  console[level](message);
}

This is an example of how, despite our best intentions, developer conveniences can backfire. What was a handful of bytes in our source has ballooned into something much larger in our production code. Uglification can't do much about it either, as the arguments object can't be mangled. Oh, and if you think rest parameters might be a worthy antidote, Babel's transforms for them are even bulkier:

// Source
function logger(...args) {
  const [level, message] = args;

  console[level](message);
}

// Babel output
function logger() {
  for (var _len = arguments.length, args = new Array(_len), _key = 0; _key < _len; _key++) {
    args[_key] = arguments[_key];
  }

  const level = args[0],
        message = args[1];
  console[level](message);
}

Worse yet, Babel transforms this code even for projects with a @babel/preset-env configuration targeting modern browsers, meaning the modern bundles in your differentially served JavaScript will be affected too! You could use loose transforms to soften the blow—and that’s a fine idea, as they’re often quite a bit smaller than their more spec-compliant counterparts—but enabling loose transforms can cause issues if you remove Babel from your build pipeline later on.
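If you do decide to try them, enabling loose transforms is a one-line change. A minimal sketch, again assuming a babel.config.js setup:

// babel.config.js (a sketch): loose mode trades spec compliance
// for noticeably smaller transform output.
module.exports = {
  presets: [
    ["@babel/preset-env", { loose: true }]
  ]
};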

Regardless of whether you decide to enable loose transforms, here’s one way to cut the cruft of transpiled default parameters:

// Babel won't touch this
function logger(message, level) {
  console[level || "log"](message);
}

Of course, default parameters aren’t the only feature to be wary of. For example, spread syntax gets transformed, as do arrow functions and a whole host of other stuff.
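As a quick illustration, here's roughly what a simple array spread becomes when Babel targets browsers without support for it (the output shown is approximate, and the body of the _toConsumableArray helper Babel injects is omitted):

// Source
const merged = [...listA, ...listB];

// Approximate Babel output (injected helper body omitted)
var merged = [].concat(_toConsumableArray(listA), _toConsumableArray(listB));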

If you don’t want to avoid these features altogether, you have a couple ways of reducing their impact:

  1. If you’re authoring a library, consider using @babel/runtime in concert with @babel/plugin-transform-runtime to deduplicate the helper functions Babel puts into your code.
  2. For polyfilled features in apps, you can include them selectively with @babel/polyfill via @babel/preset-env's useBuiltIns: "usage" option, as sketched just below this list.
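Here's a minimal sketch of that second option (the corejs value below is an assumption; match it to the core-js version installed in your project):

// babel.config.js (a sketch): pull in only the polyfills your code uses.
module.exports = {
  presets: [
    ["@babel/preset-env", {
      useBuiltIns: "usage",
      corejs: 3
    }]
  ]
};

With useBuiltIns: "usage", Babel adds polyfill imports in each file for just the features that file touches, rather than shipping the entire polyfill to everyone.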

This is solely my opinion, but I believe the best choice is to avoid transpilation altogether in bundles generated for modern browsers. That’s not always possible, especially if you use JSX, which must be transformed for all browsers, or if you’re using bleeding edge language features that aren’t widely supported. In the latter case, it might be worth asking if those features are really necessary to deliver a good user experience (they rarely are). If you arrive at the conclusion that Babel must be a part of your toolchain, then it’s worth peeking under the hood from time to time to catch suboptimal stuff Babel might be doing that you can improve on.

Improvement is not a race

As you massage your temples wondering when this horrid JavaScript hangover is going to lift, understand that it's precisely when we rush to get something out there as fast as we possibly can that the user experience can suffer. As the web development community obsesses over iterating faster in the name of competition, it's worth your time to slow down a little bit. You'll find that by doing so, you may not be iterating as fast as your competitors, but your product will be faster than theirs.

As you take these suggestions and apply them to your codebase, know that progress doesn't spontaneously happen overnight. Web development is a job. The truly impactful work is done when we're thoughtful and dedicated to the craft for the long haul. Focus on steady improvements. Measure, test, and repeat, and your site's user experience will improve, getting faster bit by bit over time.

Special thanks to Jason Miller for tech editing this piece. Jason is the creator and one of the many maintainers of Preact, a vastly smaller alternative to React with the same API. If you use Preact, please consider supporting Preact through Open Collective.




science and technology

Optimizing Scratch 3 Pen Blocks

Earlier this year, we shared our work on the launch of Scratch 3.0, a major version of the visual programming environment for children of all ages. The new version of Scratch marked a complete rewrite of the runtime in JavaScript leveraging open web APIs. In our previous post, we enumerated the many performance optimizations that […]




science and technology

Take the MDN Developer & Designer Needs Survey

Today, MDN announced their first-ever needs assessment survey for web developers and designers. The survey takes about 20 minutes and asks a variety of questions aimed at understanding the joys, frustrations, needs and wants of everyday web-makers. Mozilla have committed to making the results of the survey public later this year, and the survey itself […]




science and technology

Lessons Learned from a Year of Testing the Web Platform

The web-platform-tests project is a massive suite of tests (over one million in total) which verify that software (mostly web browsers) correctly implement web technologies. It’s as important as it is ambitious: the health of the web depends on a plurality of interoperable implementations. Although Bocoup has been contributing to the web-platform-tests, or “WPT,” for […]




science and technology

Contributing to the ARIA Authoring Practices Guide

We believe that inclusivity and accessibility are of utmost importance to the open web platform. One of the ways that we empower the full diversity of Internet users is by ensuring that those with permanent disabilities and temporary limitations can browse websites with assistive technologies. Writing accessible code improves the experience of browsing a website […]




science and technology

Experiments in Faster Scratch 3 Loading with Texture Atlases

One of the best parts of the Scratch community is the diversity of Scratch projects. Community members have used the Scratch programming language to create many different kinds of interactive applications, from full game engines to music sequencers. One genre is especially unique: Multiple Animator Projects, or MAPs. These Scratch projects compile animations from many […]




science and technology

How to scribe at TPAC

Next week is TPAC in Fukuoka, Japan. This is an annual conference for all working groups in the W3C to meet face-to-face. Naturally, there is a desire to have a record of what is said in these meetings. This is done by people in the meeting taking turns to scribe. Even if you have attended […]




science and technology

Introducing a JavaScript library for exploring Scratch projects: sb-util

Introduction We’re excited to introduce sb-util, a new JavaScript library that makes it easy to query Scratch projects via .sb3 files. This npm library allows developers (or even teachers and students) to parse and introspect Scratch projects for a range of purposes, from data visualization to custom tooling. Previously, working with Scratch project files required […]




science and technology

The ECMAScribes

Did you know that in the process of standardizing JavaScript, TC39 publishes notes for each of their regular meetings? Every other month, over 50 “delegates” convene to discuss the future of the language, and the minutes they publish provide an incredible view into their discussions. Here’s what you can expect to find: a list of […]




science and technology

Launching Test262 Results on MDN Web Docs

We are excited to announce support for report embedding on test262.report, along with a new MDN collaboration to bring up-to-date information about ECMAScript feature conformance to MDN Web Docs. Starting today, you can view test results from Test262 Report, updated daily and embedded directly on MDN pages for the newest ECMAScript features where interoperability and […]




science and technology

Bocoup & Open Standards: A (Very Full) Year in Review

We’ve had a very productive year making web standards more open, predictable, and inclusive. As our standards liaison, my job is to spot opportunities for us to do that work externally, and to see where more support is needed. We still have a lot to do, but it’s nice to reflect on our accomplishments over […]




science and technology

Facile fabrication of a hybrid polymer electrolyte via initiator-free thiol–ene photopolymerization for high-performance all-solid-state lithium metal batteries

Polym. Chem., 2020, 11, 2732-2739
DOI: 10.1039/D0PY00203H, Paper
Cai Zuo, Binghua Zhou, Ye Hyang Jo, Sibo Li, Gong Chen, Shaoqiao Li, Wen Luo, Dan He, Xingping Zhou, Zhigang Xue
The article reports the facile fabrication of a solid polymer electrolyte via initiator-free thiol–ene photopolymerization for all-solid-state lithium metal batteries.




science and technology

A polymerization-induced self-assembly process for all-styrenic nano-objects using the living anionic polymerization mechanism

Polym. Chem., 2020, 11, 2635-2639
DOI: 10.1039/D0PY00296H, Communication
Chengcheng Zhou, Jian Wang, Peng Zhou, Guowei Wang
By combining the living anionic polymerization (LAP) mechanism with the polymerization-induced self-assembly (PISA) technique, LAP PISA based on the all-styrenic diblock copolymer poly(p-tert-butylstyrene)-b-polystyrene (PtBS-b-PS) was successfully developed.




science and technology

Tin(II) 2-ethylhexanoate catalysed methanolysis of end-of-life poly(lactide)

Polym. Chem., 2020, 11, 2625-2629
DOI: 10.1039/D0PY00292E, Communication
Melanie Hofmann, Christoph Alberti, Felix Scheliga, Roderich R. R. Meißner, Stephan Enthaler
The depolymerisation of end-of-life poly(lactide) (PLA) goods was studied as part of the chemical recycling of PLA.




science and technology

Correction: Dynamic covalent polymer networks via combined nitroxide exchange reaction and nitroxide mediated polymerization

Polym. Chem., 2020, 11, 2761-2761
DOI: 10.1039/D0PY90053B, Correction
Open Access
  This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
Yixuan Jia, Yannick Matt, Qi An, Isabelle Wessely, Hatice Mutlu, Patrick Theato, Stefan Bräse, Audrey Llevot, Manuel Tsotsalas




science and technology

Correction: Block copolymer hierarchical structures from the interplay of multiple assembly pathways

Polym. Chem., 2020, 11, 2762-2762
DOI: 10.1039/D0PY90057E, Correction
Open Access
  This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
Alessandro Ianiro, Meng Chi, Marco M. R. M. Hendrix, Ali Vala Koç, E. Deniz Eren, Michael Sztucki, Andrei V. Petukhov, Gijsbertus de With, A. Catarina C. Esteves, Remco Tuinier




science and technology

A covalently crosslinked silk fibroin hydrogel using enzymatic oxidation and chemoenzymatically synthesized copolypeptide crosslinkers consisting of a GPG tripeptide motif and tyrosine: control of gelation and resilience

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00187B, Paper
Hiromitsu Sogawa, Takuya Katashima, Keiji Numata
A covalently crosslinked silk fibroin hydrogel was successfully formed via an enzymatic crosslinking reaction using copolypeptides, which consist of a glycine–proline–glycine tripeptide motif and tyrosine, as linker molecules.




science and technology

Visible light-mediated ring-opening polymerization of lactones based on the excited state acidity of ESPT molecules

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00369G, Paper
Xun Zhang, Siping Hu, Qiang Ma, Saihu Liao
A visible light-regulated ring-opening polymerization of lactones has been developed based on the excited state acidity of ESPT molecules.




science and technology

Synthesis and enantioseparation of proline-derived helical polyacetylenes as chiral stationary phases for HPLC

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00205D, Paper
Ge Shi, Xiao Dai, Yue Zhou, Jie Zhang, Jun Shen, Xinhua Wan
Proline-derived aliphatically substituted polyacetylenes with stable helical conformations exhibit an excellent enantioseparation ability as chiral stationary phases of HPLC.




science and technology

Synthesis of star-shaped polyzwitterions with adjustable UCST and fast responsiveness by a facile RAFT polymerization

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00318B, Paper
Zhi Li, Hao Li, Zhonghe Sun, Botao Hao, Tung-Chun Lee, Anchao Feng, Liqun Zhang, San H. Thang
We describe crosslinking of polyzwitterions for the formation of novel star-shaped polymers with low polydispersities and dual-responsiveness using RAFT polymerization.




science and technology

Injectable dual cross-linked adhesive hyaluronic acid multifunctional hydrogel scaffolds for potential applications in cartilage repair

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00371A, Paper
Chenxi Yu, Huichang Gao, Qingtao Li, Xiaodong Cao
A dual-crosslinked hydrogel was designed and prepared using the Diels–Alder click reaction, and it possessed good mechanical strength, injectability, and adhesion.




science and technology

Turning natural δ-lactones to thermodynamically stable polymers with triggered recyclability

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00270D, Paper
Open Access
  This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
Linnea Cederholm, Peter Olsén, Minna Hakkarainen, Karin Odelius
Extending the use of natural δ-lactones in circular materials via a synthetic strategy yielding thermodynamically stable polyesters with triggered recyclability.




science and technology

Highly elastic, strong, and reprocessable cross-linked polyolefin elastomers enabled by boronic ester bonds

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00235F, Paper
Fei Yang, Li Pan, Zhe Ma, Yahui Lou, Yuanyuan Li, Yuesheng Li
Despite the development of new catalysts and synthetic strategies to prepare polyolefin elastomers (POEs), less progress has been made in balancing their elastomeric properties with processability and resistance to heat and solvents.




science and technology

Poly(alanine-nylon-alanine) as a bioplastic: chemoenzymatic synthesis, thermal properties and biological degradation effects

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00137F, Paper
Open Access
Prashant G. Gudeangadi, Kei Uchida, Ayaka Tateishi, Kayo Terada, Hiroyasu Masunaga, Kousuke Tsuchiya, Hitoshi Miyakawa, Keiji Numata
Poly(amino acids) such as polypeptides and proteins are attractive biomass-based polymers that can potentially contribute to a circular economy for plastics.




science and technology

Self-crosslinking smart hydrogels through direct complexation between benzoxaborole derivatives and diols from hyaluronic acid

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00308E, Paper
Tamiris Figueiredo, Yu Ogawa, Jing Jing, Vanina Cosenza, Isabelle Jeacomine, Johan D. M. Olsson, Thibaud Gerfaud, Jean-Guy Boiteau, Craig Harris, Rachel Auzély-Velty
By tailoring the structure of benzoxaborole (BOR), self-crosslinking hydrogels based on hyaluronic acid (HA) modified with BOR derivatives are obtained for the first time through the direct BOR-HA diol complexation at physiological pH.




science and technology

Photo-controlled RAFT polymerization mediated by organic/inorganic hybrid photoredox catalysts: enhanced catalytic efficiency

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00171F, Paper
Wulong Wang, Sheng Zhong, Guicheng Wang, Hongliang Cao, Yun Gao, Weian Zhang
Photo-controlled RAFT polymerization mediated by an organic/inorganic hybrid photoredox catalyst (ZnTPP–POSS) was performed and showed enhanced catalytic efficiency compared with the ZnTPP photocatalyst.




science and technology

Poly(hydroxy acids) derived from the self-condensation of hydroxy acids: from polymerization to end-of-life options

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00088D, Review Article
Elena Gabirondo, Ainara Sangroniz, Agustin Etxeberria, Sergio Torres-Giner, Haritz Sardon
Poly(hydroxy acids) derived from the self-condensation of hydroxy acids are biodegradable and can be fully recycled in a circular economy approach.




science and technology

Cross-linker control of vitrimer flow

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00233J, Paper
Bassil M. El-Zaatari, Jacob S. A. Ishibashi, Julia A. Kalow
The rate of stress relaxation in a vitrimer can be modulated by changing solely the structure of the cross-linker electrophile.




science and technology

Surfactant-free synthesis of layered double hydroxide-armored latex particles

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00140F, Paper
Open Access
  This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
X. G. Qiao, P.-Y. Dugas, V. Prevot, E. Bourgeat-Lami
MgAl-layered double hydroxide (LDH)-armored latexes were produced by Pickering emulsion polymerization of styrene using 2-hydroxyethyl methacrylate (HEMA) and methyl methacrylate (MMA) as auxiliary comonomers.




science and technology

Miniemulsion polymerization of styrene using carboxylated graphene quantum dots as surfactant

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00404A, Paper
Le N. M. Dinh, Lakshmi N. Ramana, Vipul Agarwal, Per B. Zetterlund
Carboxylated graphene quantum dots (cGQDs) were synthesized from dextrose and sulfuric acid via a hydrothermal process, and subsequently used as sole surfactant in miniemulsion polymerization of styrene.




science and technology

Using the dynamic behavior of macrocyclic monomers with a bis(hindered amino)disulfide linker for the preparation of end-functionalized polymers

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00366B, Paper
Hirogi Yokochi, Rikito Takashima, Daisuke Aoki, Hideyuki Otsuka
End-functionalized polymers were synthesized by simply heating a mixture of a macrocyclic compound with one bis(2,2,6,6-tetramethylpiperidin-1-yl)disulfide (BiTEMPS) moiety and bifunctional acyclic BiTEMPS compounds as sources of repeat units and terminal groups, respectively.




science and technology

Rapid metal-free synthesis of pyridyl-functionalized conjugated microporous polymers for visible-light-driven water splitting

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00249F, Communication
Zhonghua Cheng, Lei Wang, Yan He, Xingjia Chen, Xiaojun Wu, Hangxun Xu, Yaozu Liao, Meifang Zhu
A rapid metal-free synthetic approach was applied to synthesize pyridyl-functionalized conjugated microporous polymers (PCMPs) via an aminative cyclization between aryl aldehydes and ketones without any metal catalysts.




science and technology

Synthesis of acrylamide-based block-copolymer brushes under flow: monitoring real-time growth and surface restructuring upon drying

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00219D, Paper
Open Access
Joydeb Mandal, Andrea Arcifa, Nicholas D. Spencer
Block-copolymer brushes of water-soluble acrylamides have been synthesised by SI-ATRP under continuous flow and their growth monitored in situ by means of a quartz-crystal microbalance with dissipation (QCM-D).




science and technology

Preparation of aryl polysulfonates via a highly efficient SuFEx click reaction, their controllable degradation and functionalized behavior

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00435A, Communication
Zhenlei Cao, Feng Zhou, Pei-Yang Gu, Dongyun Chen, Jinghui He, John R. Cappiello, Peng Wu, Qingfeng Xu, Jianmei Lu
Polysulfonates obtained from the SuFEx click reaction can be degraded using DBU. The degradation process was further utilized to prepare functional polymers.




science and technology

Highly active bimetallic nickel catalysts for alternating copolymerization of carbon dioxide with epoxides

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00174K, Paper
Yu-Chia Su, Chih-Hsiang Tsui, Chen-Yen Tsai, Bao-Tsan Ko
Catalyst 1 was reported for the first time to be effective for nickel-catalyzed CO2/CHO copolymerization at 1 atm CO2 pressure.




science and technology

Enhanced thermomechanical property of a self-healing polymer via self-assembly of a reversibly cross-linkable block copolymer

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00310G, Paper
Hyang Moo Lee, Suguna Perumal, Gi Young Kim, Jin Chul Kim, Young-Ryul Kim, Minsoo P. Kim, Hyunhyup Ko, Yecheol Rho, In Woo Cheong
Introduction of a self-healable block copolymer improves mechanical properties whilst maintaining self-healing efficiency.