scr

Pune-based scriptwriter of Marathi movie 'Slambook' talks about his journey

It took four years for Dhalgade to get a project, yet he believes that everyone must follow their passion, however tough it seems at the start.




scr

After outrage, Pune University scraps 'vegetarians only' gold medal

The Vice-Chancellor says the Savitribai Phule Pune University administration is also reviewing the eligibility conditions for 40 other awards.




scr

[ASAP] Scrabbling around in Synthetic Nuances Managing Sodium Compounds: Bisphenol/Bisnaphthol Synthesis by Hydroxyl Group Masking

Inorganic Chemistry
DOI: 10.1021/acs.inorgchem.0c00310




scr

[ASAP] Complexation of Chiral Zinc(II) Porphyrin Tweezer with Chiral Guests: Control, Discrimination and Rationalization of Supramolecular Chirality

Inorganic Chemistry
DOI: 10.1021/acs.inorgchem.0c00877




scr

[ASAP] Discrete Supracrystalline Heterostructures from Integrative Assembly of Nanocrystals and Porous Organic Cages

ACS Nano
DOI: 10.1021/acsnano.9b09686




scr

Evidence and documents laid before the Committee on Alleged German Outrages (Appendix to the Report which has been published separately). Containing details of outrages on civil population in Belgium and France; the use of civilians as a screen; offences

London : Printed under the authority of His Majesty's Stationery Office by Hayman, Christy & Lilly, Ltd. E.C. To be purchased, either directly or through any bookseller, from Wyman & Sons, Ltd., 29, Breams Buildings, Fetter Lane, E.C., and 54, St. Mary Street, Cardiff; or H.M. Stationery Office (Scottish Branch), 23, Forth Street, Edinburgh; or E. Ponsonsby, Ltd., 116, Grafton Street, Dublin; or from the agencies in the British colonies and dependencies, the United States of America, and other foreign countries of T. Fisher Unwin, London, W.C., 1915.




scr

Screening Scarlett Johansson [Electronic book] : Gender, Genre, Stardom / Janice Loreck, Whitney Monaghan, Kirsten Stevens, editors.

Cham : Palgrave Macmillan US, 2020.




scr

The pearl of greatest price : Mormonism's most controversial scripture [Electronic book] / Terryl Givens and Brian Hauglid.

New York, NY : Oxford University Press, 2019.




scr

Locke and biblical hermeneutics [Electronic book] : conscience and scripture / Luisa Simonutti, editor.

Cham : Springer, c2019.




scr

Laboratory astrophysics [Electronic book] / Guillermo M. Muñoz Caro, Rafael Escribano, editors.

Cham, Switzerland : Springer, [2018]




scr

The finger of the scribe : how scribes learned to write the Bible [Electronic book] / William M. Schniedewind.

New York, NY : Oxford University Press, 2019.




scr

Clean hands? : philosophical lessons from scrupulosity [Electronic book] / Jesse S. Summers and Walter Sinnott-Armstrong.

New York, NY : Oxford University Press, 2019.




scr

FSIS Directive 7150.1 Descriptive Designation for Needle or Blade Tenderized Raw Beef Products as Required by 9 CFR 317.2(e)(3)




scr

The tentmakers of Cairo: Ṣunnāʻ al-khiyām fī al-Qāhirah / Non 'D' Script & Kim Beamish present

Rotch Library - NK9288.T46 2015




scr

Junction 48 / a Metro Communications ... [and others] production ; screenplay, Oren Moverman & Tamer Nafar ; director, Udi Aloni

Rotch Library - PN1997.2.J86 2017




scr

Adhesive particle flow: a discrete-element approach / Jeffrey S. Marshall, Shuiqing Li

Barker Library - TA357.5.G47 M37 2014




scr

Understanding the discrete element method: simulation of non-spherical particles for granular and multi-body systems / Hans-Georg Matuttis, Jian Chen

Barker Library - TA357.5.G47 M38 2014




scr

Fluid-structure interaction: modeling, adaptive discretizations and solvers / edited by Stefan Frei, Bärbel Holm, Thomas Richter, Thomas Wick, Huidong Yang

Barker Library - TA357.5.F58 F56 2017




scr

Adulthood is a myth: a "Sarah's Scribbles" collection / Sarah Andersen

Hayden Library - PN6727.A5243 A38 2016




scr

Big mushy happy lump: a "Sarah's Scribbles" collection / Sarah Andersen ; editor Grace Suh

Hayden Library - PN6727.A47 B54 2017




scr

Yes, Roya: an erotic graphic novel / script by C. Spike Trotman ; art by Emilee Denich

Hayden Library - PN6727.T77 Y47 2016




scr

Snotgirl / script, Bryan Lee O'Malley ; art, Leslie Hung ; colors, Mickey Quinn ; lettering, Maré Odomo ; created by Bryan Lee O'Malley & Leslie Hung

Hayden Library - PN6728.S567 O63 2017




scr

Black cloud / story, Jason Latour; script, Ivan Brandon; art, Greg Hinkle; color, Matt Wilson

Hayden Library - PN6728.B5175 L37 2017




scr

New Online: Persian Manuscripts

In celebration of the Persian New Year, also known as Nowruz, the Library of Congress has digitized and made available online for the first time the Rare Persian-Language Manuscript Collection, which sheds light on scientific, religious, philosophical and literary topics that are highly valued in the Persian speaking lands.

This collection, including 150 manuscripts with some dating back to the 13th century, also reflects the diversity of religious and confessional traditions within the Persian culture.

From the 10th century to the present, Persian has been the cultural language for a large region stretching from West Asia to Central and South Asia. Today, Persian is the native language spoken in Iran, Afghanistan, Tajikistan and some regions of Central and South Asia and the Caucasus.





scr

Hispanic Resources: News & Events: "Soy Cubana": Documentary Screening and Discussion

The documentary Soy Cubana charts the daily lives of four middle-aged women from Santiago de Cuba and their efforts to draw on a broad repertoire of musical genres in creating their own a cappella style in an era of studio production and hi-tech sounds. Dr. Joseph Scarpaci, Director of the Center for the Study of Cuban Culture and the Economy, is the co-producer, creator, and translator/interpreter of the documentary. He will provide a short introduction before the screening and a Q&A will follow.

Date and Time: Wednesday, April 3, 2019--4:00 p.m.
Location:
Hispanic Reading Room (LJ-240), Hanke Room (conference room) / Thomas Jefferson Building, 2nd floor





scr

25 Useful Resources for Creating Tooltips With JavaScript or CSS

Tooltips are awesome, there’s simply no denying it. They provide a simple, predictable and straightforward way to provide your users with useful, context-sensitive information, and they look cool to boot. We all agree on how great tooltips are, but how we go about implementing them can differ dramatically. If you’re at square one, looking for […]




scr

History API: Scroll Restoration




scr

Semantics to Screen Readers

As a child of the ’90s, one of my favorite movie quotes is from Harriet the Spy: “there are as many ways to live as there are people in this world, and each one deserves a closer look.” Likewise, there are as many ways to browse the web as there are people online. We each bring unique context to our web experience based on our values, technologies, environments, minds, and bodies. Assistive technologies (ATs), which are hardware and software that help us perceive and interact with digital content, come in diverse forms. ATs can use a whole host of user input, ranging from clicks and keystrokes to minor muscle movements. ATs may also present digital content in a variety of forms, such as Braille displays, color-shifted views, and decluttered user interfaces (UIs). One more commonly known type of AT is the screen reader. Programs such as JAWS, Narrator, NVDA, and VoiceOver can take digital content and present it to users through voice output, may display this output visually on the user’s screen, and can have Braille display and/or screen magnification capabilities built in. If you make websites, you may have tested your sites with a screen reader. But how do these and other assistive programs actually access your content? What information do they use? We’ll take a detailed step-by-step view of how the process works. (For simplicity we’ll continue to reference “browsers” and “screen readers” throughout this article. These are essentially shorthands for “browsers and other applications,” and “screen readers and other assistive technologies,” respectively.)

The semantics-to-screen-readers pipeline

Accessibility application programming interfaces (APIs) create a useful link between user applications and the assistive technologies that wish to interact with them. Accessibility APIs facilitate communicating accessibility information about user interfaces (UIs) to the ATs. The API expects information to be structured in a certain way, so that whether a button is properly marked up in web content or is sitting inside a native app taskbar, a button is a button is a button as far as ATs are concerned. That said, screen readers and other ATs can do some app-specific handling if they wish. On the web specifically, there are some browser and screen reader combinations where accessibility API information is supplemented by access to DOM structures. For this article, we’ll focus specifically on accessibility APIs as a link between web content and the screen reader. Here’s the breakdown of how web content reaches screen readers via accessibility APIs:
  1. The web developer uses host language markup (HTML, SVG, etc.), and potentially roles, states, and properties from the ARIA suite where needed to provide the semantics of their content. Semantic markup communicates what type an element is, what content it contains, what state it’s in, etc.
  2. The browser rendering engine (alternatively referred to as a “user agent”) takes this information and maps it into an accessibility API. Different accessibility APIs are available on different operating systems, so a browser that is available on multiple platforms should support multiple accessibility APIs. Accessibility API mappings are maintained on a lower level than web platform APIs, so web developers don’t directly interact with accessibility APIs.
  3. The accessibility API includes a collection of interfaces that browsers and other apps can plumb into, and generally acts as an intermediary between the browser and the screen reader. Accessibility APIs provide interfaces for representing the structure, relationships, semantics, and state of digital content, as well as means to surface dynamic changes to said content. Accessibility APIs also allow screen readers to retrieve and interact with content via the API. Again, web developers don’t interact with these APIs directly; the rendering engine handles translating web content into information useful to accessibility APIs.
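For a concrete (and hypothetical) illustration of the first step in that breakdown, the markup below leans on a native element for its role and on ARIA only for state the host language can't express; the IDs and labels here are invented for this sketch.

<!-- The button element already maps to a Role of "Button"; no ARIA needed. -->
<button type="button">Save draft</button>

<!-- ARIA supplies state the markup alone doesn't express:
     a disclosure's expanded/collapsed state. -->
<button type="button" aria-expanded="false" aria-controls="shipping-details">
  Show shipping details
</button>
<div id="shipping-details" hidden>…</div>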

Examples of accessibility APIs

Common accessibility APIs include UI Automation (UIA), Microsoft Active Accessibility (MSAA), and IAccessible2 on Windows; NSAccessibility (the AX API) on macOS; and ATK/AT-SPI on Linux. The screen reader uses client-side methods from these accessibility APIs to retrieve and handle information exposed by the browser. In browsers where direct access to the Document Object Model (DOM) is permitted, some screen readers may also take additional information from the DOM tree. A screen reader can also interact with apps that use differing accessibility APIs. No matter where they get their information, screen readers can dream up any interaction modes they want to provide to their users (I’ve provided links to screen reader commands at the end of this article). Testing by site creators can help identify content that feels awkward in a particular navigation mode, such as multiple links with the same text (“Learn more”), as one example.
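As a hypothetical example of that last point, every link below surfaces simply as "Learn more" in a list-of-links view; giving each one a distinguishing accessible name makes that mode usable. The URLs are placeholders.

<!-- Awkward: each of these reads as "Learn more". -->
<a href="/pricing">Learn more</a>
<a href="/support">Learn more</a>

<!-- Better: each link has a distinct accessible name. -->
<a href="/pricing">Learn more about pricing</a>
<a href="/support" aria-label="Learn more about support">Learn more</a>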

Example of this pipeline: surfacing a button element to screen reader users

Let’s suppose for a moment that a screen reader wants to understand what object is next in the accessibility tree (which I’ll explain further in the next section), so it can surface that object to the user as they navigate to it. The flow will go a little something like this:
Diagram illustrating the steps involved in presenting the next object in a document; detailed list follows
  1. The screen reader requests information from the API about the next accessible object, relative to the current object.
  2. The API (as an intermediary) passes along this request to the browser.
  3. At some point, the browser references DOM and style information, and discovers that the relevant element is a non-hidden button: <button>Do a thing</button>.
  4. The browser maps this HTML button into the format the API expects, such as an accessible object with various properties: Name: Do a thing, Role: Button.
  5. The API returns this information from the browser to the screen reader.
  6. The screen reader can then surface this object to the user, perhaps stating “Button, Do a thing.”
Suppose that the screen reader user would now like to “click” this button. Here’s how their action flows all the way back to web content:
Diagram illustrating the steps involved in routing a screen reader click to web content; detailed list follows
  1. The user provides a particular screen reader command, such as a keystroke or gesture.
  2. The screen reader calls a method into the API to invoke the button.
  3. The API forwards this interaction to the browser.
  4. How a browser may respond to incoming interactions depends on the context, but in this case the browser can raise this as a “click” event through web APIs. The browser should give no indication that the click came from an assistive technology, as doing so would violate the user’s right to privacy.
  5. The web developer has registered a JavaScript event listener for clicks; their callback function is now executed as if the user clicked with a mouse.
Now that we have a general sense of the pipeline, let’s go into a little more detail on the accessibility tree.

The accessibility tree

Dev Tools in Microsoft Edge showing the DOM tree and accessibility tree side by side; there are more nodes in the DOM tree
The accessibility tree is a hierarchical representation of elements in a UI or document, as computed for an accessibility API. In modern browsers, the accessibility tree for a given document is a separate, parallel structure to the DOM tree. “Parallel” does not necessarily mean there is a 1:1 match between the nodes of these two trees. Some elements may be excluded from the accessibility tree, for example if they are hidden or are not semantically useful (think non-focusable wrapper divs without any semantics added by a web developer).

This idea of a hierarchical structure is somewhat of an abstraction. The definition of what exactly an accessibility tree is in practice has been debated and partially defined in multiple places, so implementations may differ in various ways. For example, it’s not actually necessary to generate accessible objects for every element in the DOM whenever the DOM tree is constructed. As a performance consideration, a browser could choose to deal with only a subset of objects and their relationships at a time—that is, however much is necessary to fulfill the requests coming from ATs. The rendering engine could make these computations during all user sessions, or only do so when assistive technologies are actively running.

Generally speaking, modern web browsers wait until after style computation to build up any accessible objects. Browsers wait in part because generated content (such as ::before and ::after) can contain text that can participate in calculation of the accessible object’s name. CSS styles can also impact accessible objects in other various ways: text styling can come through as attributes on accessible text ranges. Display property values can impact the computation of line text ranges. These are just a few ways in which style can impact accessibility semantics.

Browsers may also use different structures as the basis for accessible object computation. One rendering engine may walk the DOM tree and cross-reference style computations to build up parallel tree structures; another engine may use only the nodes that are available in a style tree in order to build up their accessibility tree. User agent participants in the standards community are currently thinking through how we can better document our implementation details, and whether it might make sense to standardize more of these details further down the road. Let’s now focus on the branches of this tree, and explore how individual accessibility objects are computed.
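Before we do, here is a small hypothetical fragment annotated with the accessible objects a browser might (or might not) compute for it, just to make the "parallel, not 1:1" idea concrete; the exact result varies by browser and accessibility API.

<div class="layout-wrapper">   <!-- no semantics of its own: may be left out of the tree -->
  <main>                       <!-- landmark object -->
    <h1>Mood tracker</h1>      <!-- heading object, Name: "Mood tracker" -->
    <button>Refresh</button>   <!-- button object, Name: "Refresh" -->
  </main>
</div>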

Building up accessible objects

From API to API, an accessible object will generally include a few things:
  • Role, or the type of accessible object (for example, Button). The role tells a user how they can expect to interact with the control. It is typically presented when screen reader focus moves onto the accessible object, and it can be used to provide various other functionalities, such as skipping around content via one type of object.
  • Name, if specified. The name is an (ideally short) identifier that better helps the user identify and understand the purpose of an accessible object. The name is often presented when screen reader focus moves to the object (more on this later), can be used as an identifier when presenting a list of available objects, and can be used as a hook for functionalities such as voice commands.
  • Description and/or help text, if specified. We’ll use “Description” as a shorthand. The Description can be considered supplemental to the Name; it’s not the main identifier but can provide further information about the accessible object. Sometimes this is presented when moving focus to the accessible object, sometimes not; this variation depends on both the screen reader’s user experience design and the user’s chosen verbosity settings.
  • Properties and methods surfacing additional semantics. For simplicity’s sake, we won’t go through all of these. For your awareness, properties can include details like layout information or available interactions (such as invoking the element or modifying its value).
Let’s walk through an example using markup for a simple mood tracker. We’ll use simplified property names and values, because these can differ between accessibility APIs.
<form>
  <label for="mood">On a scale of 1–10, what is your mood today?</label>
  <input id="mood" type="range"
       min="1" max="10" value="5"
       aria-describedby="helperText" />
  <p id="helperText">Some helpful pointers about how to rate your mood.</p>
  <!-- Using a div with button role for the purposes of showing how the accessibility tree is created. Please use the button element! -->
  <div tabindex="0" role="button">Log Mood</div>
</form>
First up is our form element. This form doesn’t have any attributes that would give it an accessible Name, and a form landmark without a Name isn’t very useful when jumping between landmarks. Therefore, HTML mapping standards specify that it should be mapped as a group. Here’s the beginning of our tree:
  • Role: Group
Next up is the label. This one doesn’t have an accessible Name either, so we’ll just nest it as an object of role “Label” underneath the form:
  • Role: Group
    • Role: Label
Let’s add the range input, which will map into various APIs as a “Slider.” Due to the relationship created by the for attribute on the label and id attribute on the input, this slider will take its Name from the label contents. The aria-describedby attribute is another id reference and points to a paragraph with some text content, which will be used for the slider’s Description. The slider object’s properties will also store “labelledby” and “describedby” relationships pointing to these other elements. And it will specify the current, minimum, and maximum values of the slider. If one of these range values were not available, ARIA standards specify what should be the default value. Our updated tree:
  • Role: Group
    • Role: Label
    • Role: Slider
      Name: On a scale of 1–10, what is your mood today?
      Description: Some helpful pointers about how to rate your mood.
      LabelledBy: [label object]
      DescribedBy: helperText
      ValueNow: 5
      ValueMin: 1
      ValueMax: 10
The paragraph will be added as a simple paragraph object (“Text” or “Group” in some APIs):
  • Role: Group
    • Role: Label
    • Role: Slider
      Name: On a scale of 1–10, what is your mood today?
      Description: Some helpful pointers about how to rate your mood.
      LabelledBy: [label object]
      DescribedBy: helperText
      ValueNow: 5
      ValueMin: 1
      ValueMax: 10
    • Role: Paragraph
The final element is an example of when role semantics are added via the ARIA role attribute. This div will map as a Button with the name “Log Mood,” as buttons can take their name from their children. This button will also be surfaced as “invokable” to screen readers and other ATs; special types of buttons could provide expand/collapse functionality (buttons with the aria-expanded attribute), or toggle functionality (buttons with the aria-pressed attribute). Here’s our tree now:
  • Role: Group
    • Role: Label
    • Role: Slider
      Name: On a scale of 1–10, what is your mood today?
      Description: Some helpful pointers about how to rate your mood.
      LabelledBy: [label object]
      DescribedBy: helperText
      ValueNow: 5
      ValueMin: 1
      ValueMax: 10
    • Role: Paragraph
    • Role: Button
      Name: Log Mood

On choosing host language semantics

Our sample markup mentions that it is preferred to use the HTML-native button element rather than a div with a role of “button.” Our buttonified div can be operated as a button via accessibility APIs, as the ARIA attribute is doing what it should—conveying semantics. But there’s a lot you can get for free when you choose native elements. In the case of button, that includes focus handling, user input handling, form submission, and basic styling. Aaron Gustafson has what he refers to as an “exhaustive treatise” on buttons in particular, but generally speaking it’s great to let the web platform do the heavy lifting of semantics and interaction for us when we can. ARIA roles, states, and properties are still a great tool to have in your toolbelt. Some good use cases for these are
  • providing further semantics and relationships that are not naturally expressed in the host language;
  • supplementing semantics in markup we perhaps don’t have complete control over;
  • patching potential cross-browser inconsistencies;
  • and making custom elements perceivable and operable to users of assistive technologies.
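As a rough sketch of that last use case (the element ID and handler names are invented here, and a native button element would give you most of this for free), ARIA plus a little scripting can make a custom toggle perceivable and operable:

<div id="mute-toggle" role="button" tabindex="0" aria-pressed="false">Mute</div>

<script>
  const toggle = document.getElementById("mute-toggle");

  const toggleMute = () => {
    const pressed = toggle.getAttribute("aria-pressed") === "true";
    toggle.setAttribute("aria-pressed", String(!pressed));
  };

  // A native button handles clicks and keyboard activation for us;
  // a div with role="button" needs both wired up by hand.
  toggle.addEventListener("click", toggleMute);
  toggle.addEventListener("keydown", (event) => {
    if (event.key === "Enter" || event.key === " ") {
      event.preventDefault();
      toggleMute();
    }
  });
</script>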

Notes on inclusion or exclusion in the tree

Standards define some rules around when user agents should exclude elements from the accessibility tree. Excluded elements can include those hidden by CSS, or the aria-hidden or hidden attributes; their children would be excluded as well. Children of particular roles (like checkbox) can also be excluded from the tree, unless they meet special exceptions. The full rules can be found in the “Accessibility Tree” section of the ARIA specification. That being said, there are still some differences between implementers, some of which include more divs and spans in the tree than others do.
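For instance, each of the following hypothetical elements is typically left out of the accessibility tree (exact behavior can differ between implementations):

<p hidden>Excluded: the hidden attribute.</p>
<p aria-hidden="true">Excluded: aria-hidden.</p>
<p style="display: none;">Excluded: hidden via CSS.</p>
<div aria-hidden="true">
  <a href="/contact">Also excluded: a child of an aria-hidden element.</a>
</div>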

Notes on name and description computation

How names and descriptions are computed can be a bit confusing. Some elements have special rules, and some ARIA roles allow name computation from the element’s contents, whereas others do not. Name and description computation could probably be its own article, so we won’t get into all the details here (refer to “Further reading and resources” for some links). Some short pointers:
  • aria-label, aria-labelledby, and aria-describedby take precedence over other means of calculating name and description.
  • If you expect a particular HTML attribute to be used for the name, check the name computation rules for HTML elements. In your scenario, it may be used for the full description instead.
  • Generated content (::before and ::after) can participate in the accessible name when said name is taken from the element’s contents. That being said, web developers should not rely on pseudo-elements for non-decorative content, as this content could be lost when a stylesheet fails to load or user styles are applied to the page.
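Two hypothetical snippets illustrating the first pointer; the IDs and text are made up, and the precise rules live in the accessible name and description computation specifications:

<!-- aria-label takes precedence over the element's contents:
     the accessible Name is "Close dialog", not the visible "x". -->
<button aria-label="Close dialog">x</button>

<!-- aria-labelledby takes precedence over both aria-label and contents:
     the Name comes from the referenced heading. -->
<h2 id="billing-heading">Billing address</h2>
<section aria-labelledby="billing-heading">…</section>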
When in doubt, reach out to the community! Tag questions on social media with “#accessibility.” “#a11y” is a common shorthand; the “11” stands for “11 middle letters in the word ‘accessibility.’” If you find an inconsistency in a particular browser, file a bug! Bug tracker links are provided in “Further reading and resources.”

Not just accessible objects

Besides a hierarchical structure of objects, accessibility APIs also offer interfaces that allow ATs to interact with text. ATs can retrieve content text ranges, text selections, and a variety of text attributes that they can build experiences on top of. For example, if someone writes an email and uses color alone to highlight their added comments, the person reading the email could increase the verbosity of speech output in their screen reader to know when they’re encountering phrases with that styling. However, it would be better for the email author to include very brief text labels in this scenario. The big takeaway here for web developers is to keep in mind that the accessible name of an element may not always be surfaced in every navigation mode in every screen reader. So if your aria-label text isn’t being read out in a particular mode, the screen reader may be primarily using text interfaces and only conditionally stopping on objects. It may be worth your while to consider using text content—even if visually hidden—instead of text via an ARIA attribute. Read more thoughts on aria-label and aria-labelledby.
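One common way to provide such visually hidden text is a "visually hidden" utility class; the pattern below is a widely used sketch rather than something from this article, and the class name is arbitrary. The text stays in the accessibility tree and in text-reading modes but is not drawn on screen.

<style>
  .visually-hidden {
    position: absolute;
    width: 1px;
    height: 1px;
    margin: -1px;
    padding: 0;
    overflow: hidden;
    clip: rect(0 0 0 0);
    white-space: nowrap;
    border: 0;
  }
</style>

<button>
  <svg aria-hidden="true" focusable="false" width="16" height="16"></svg>
  <span class="visually-hidden">Search</span>
</button>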

Accessibility API events

It is the responsibility of browsers to surface changes to content, structure, and user input. Browsers do this by sending the accessibility API notifications about various events, which screen readers can subscribe to; again, for performance reasons, browsers could choose to send notifications only when ATs are active. Let’s suppose that a screen reader wants to surface changes to a live region (an element with role="alert" or aria-live):
Diagram illustrating the steps involved in announcing a live region via a screen reader; detailed list follows
  1. The screen reader subscribes to event notifications; it could subscribe to notifications of all types, or just certain types as categorized by the accessibility API. Let’s assume in our example that the screen reader is at least listening to live region change events.
  2. In the web content, the web developer changes the text content of a live region.
  3. The browser (provider) recognizes this as a live region change event, and sends the accessibility API a notification.
  4. The API passes this notification along to the screen reader.
  5. The screen reader can then use metadata from the notification to look up the relevant accessible objects via the accessibility API, and can surface the changes to the user.
ATs aren’t required to do anything with the information they retrieve. This can make it a bit trickier as a web developer to figure out why a screen reader isn’t announcing a change: it may be that notifications aren’t being raised (for example, because a browser is not sending notifications for a live region dynamically inserted into web content), or the AT is not subscribed or responding to that type of event.
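Here is a minimal, hypothetical live region for illustration: the container exists in the initial markup, and it is the later change to its text content that raises the notification described above.

<div id="status" role="status" aria-live="polite"></div>

<script>
  const status = document.getElementById("status");

  // Changing the region's text content is what triggers the live region change event.
  const announce = (message) => {
    status.textContent = message;
  };

  announce("Mood logged.");
</script>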

Testing with screen readers and dev tools

While conformance checkers can help catch some basic accessibility issues, it’s ideal to walk through your content manually using a variety of contexts, such as
  • using a keyboard only;
  • with various OS accessibility settings turned on;
  • and at different zoom levels and text sizes, and so on.
As you do this, keep in mind the Web Content Accessibility Guidelines (WCAG 2.1), which give general guidelines around expectations for inclusive web content. If you can test with users after your own manual test passes, all the better! Robust accessibility testing could probably be its own series of articles. In this one, we’ll go over some tips for testing with screen readers, and catching accessibility errors as they are mapped into the accessibility API in a more general sense.

Screen reader testing

Screen readers exist in many forms: some are pre-installed on the operating system and others are separate applications that in some cases are free to download. The WebAIM screen reader user survey provides a list of commonly used screen reader and browser combinations among survey participants. The “Further reading and resources” section at the end of this article includes full screen reader user docs, and Deque University has a great set of screen reader command cheat sheets that you can refer to. Some actions you might take to test your content:
  • Read the next/previous item.
  • Read the next/previous line.
  • Read continuously from a particular point.
  • Jump by headings, landmarks, and links.
  • Tab around focusable elements only.
  • Get a summary of all elements of a particular type within the page.
  • Search the page for specific content.
  • Use table-specific commands to interact with your tables.
  • Jump around by form field; are field instructions discoverable in this navigational mode?
  • Use keyboard commands to interact with all interactive elements. Are your JavaScript-driven interactions still operable with screen readers (which can intercept key input in certain modes)? WAI-ARIA Authoring Practices 1.1 includes notes on expected keyboard interactions for various widgets.
  • Try out anything that creates a content change or results in navigating elsewhere. Would it be obvious, via screen reader output, that a change occurred?

Tracking down the source of unexpected behavior

If a screen reader does not announce something as you’d expect, here are a few different checks you can run:
  • Does this reproduce with the same screen reader in multiple browsers on this OS? It may be an issue with the screen reader or your expectation may not match the screen reader’s user experience design. For example, a screen reader may choose to not expose the accessible name of a static, non-interactive element. Checking the user docs or filing a screen reader issue with a simple test case would be a great place to start.
  • Does this reproduce with multiple screen readers in the same browser, but not in other browsers on this OS? The browser in question may have an issue, there may be compatibility differences between browsers (such as a browser doing extra helpful but non-standard computations), or a screen reader’s support for a specific accessibility API may vary. Filing a browser issue with a simple test case would be a great place to start; if it’s not a browser bug, the developer can route it to the right place or make a code suggestion.
  • Does this reproduce with multiple screen readers in multiple browsers? There may be something you can adjust in your code, or your expectations may differ from standards and common practices.
  • How does this element’s accessibility properties and structure show up in browser dev tools?

Inspecting accessibility trees and properties in dev tools

Major modern browsers provide dev tools to help you observe the structure of the accessibility tree as well as a given element’s accessibility properties. By observing which accessible objects are generated for your elements and which properties are exposed on a given element, you may be able to pinpoint issues that are occurring either in front-end code or in how the browser is mapping your content into the accessibility API. Let’s suppose that we are testing this piece of code in Microsoft Edge with a screen reader:
<div class="form-row">
  <label>Favorite color</label>
  <input id="myTextInput" type="text" />
</div>
We’re navigating the page by form field, and when we land on this text field, the screen reader just tells us this is an “edit” control—it doesn’t mention a name for this element. Let’s check the tools for the element’s accessible name.
1. Inspect the element to bring up the dev tools.
The Microsoft Edge dev tools, with an input element highlighted in the DOM tree
2. Bring up the accessibility tree for this page by clicking the accessibility tree button (a circle with two arrows) or pressing Ctrl+Shift+A (Windows).
The accessibility tree button activated in the Microsoft Edge dev tools
Reviewing the accessibility tree is an extra step for this particular flow but can be helpful to do. When the Accessibility Tree pane comes up, we notice there’s a tree node that just says “textbox:,” with nothing after the colon. That suggests there’s not a name for this element. (Also notice that the div around our form input didn’t make it into the accessibility tree; it was not semantically useful).
3. Open the Accessibility Properties pane, which is a sibling of the Styles pane. If we scroll down to the Name property—aha! It’s blank. No name is provided to the accessibility API. (Side note: some other accessibility properties are filtered out of this list by default; toggle the filter button—which looks like a funnel—in the pane to get the full list).
The Accessibility Properties pane open in Microsoft Edge dev tools, in the same area as the Styles pane
4. Check the code. We realize that we didn’t associate the label with the text field; that is one strategy for providing an accessible name for a text input. We add for="myTextInput" to the label:
<div class="form-row">
  <label for="myTextInput">Favorite color</label>
  <input id="myTextInput" type="text" />
</div>
And now the field has a name:
The accessible Name property set to the value of “Favorite color” inside Microsoft Edge dev tools
In another use case, we have a breadcrumb component, where the current page link is marked with aria-current="page":
<nav class="breadcrumb" aria-label="Breadcrumb">
  <ol>
    <li>
      <a href="/cat/">Category</a>
    </li>
    <li>
      <a href="/cat/sub/">Sub-Category</a>
    </li>
    <li>
      <a aria-current="page" href="/cat/sub/page/">Page</a>
    </li>
  </ol>
</nav>
When navigating onto the current page link, however, we don’t get any indication that this is the current page. We’re not exactly sure how this maps into accessibility properties, so we can reference a specification like Core Accessibility API Mappings 1.2 (Core-AAM). Under the “State and Property Mapping” table, we find mappings for “aria-current with non-false allowed value.” We can check for these listed properties in the Accessibility Properties pane. Microsoft Edge, at the time of writing, maps into UIA (UI Automation), so when we check AriaProperties, we find that yes, “current=page” is included within this property value.
The AriaProperties attribute, including current=page, shown in the Accessibility Properties pane of Microsoft Edge dev tools
Now we know that the value is presented correctly to the accessibility API, but the particular screen reader is not using the information. As a side note, Microsoft Edge’s current dev tools expose these accessibility API properties quite literally. Other browsers’ dev tools may simplify property names and values to make them easier to read, particularly if they support more than one accessibility API. The important bit is to find if there’s a property with roughly the name you expect and whether its value is what you expect. You can also use this method of checking through the property names and values if mapping specs, like Core-AAM, are a bit intimidating!

Advanced accessibility tools

While browser dev tools can tell us a lot about the accessibility semantics of our markup, they don’t generally include representations of text ranges or event notifications. On Windows, the Windows SDK includes advanced tools that can help debug these parts of MSAA or UIA mappings: Inspect and AccEvent (Accessible Event Watcher). Using these tools presumes knowledge of the Windows accessibility APIs, so if this is too granular for you and you’re stuck on an issue, please reach out to the relevant browser team! There is also an Accessibility Inspector in Xcode on MacOS, with which you can inspect web content in Safari. This tool can be accessed by going to Xcode > Open Developer Tool > Accessibility Inspector.

Diversity of experience

Equipped with an accessibility tree, detailed object information, event notifications, and methods for interacting with accessible objects, screen readers can craft a browsing experience tailored to their audiences. In this article, we’ve used the term “screen readers” as a proxy for a whole host of tools that may use accessibility APIs to provide the best user experience possible. Assistive technologies can use the APIs to augment presentation or support varying types of user input. Examples of other ATs include screen magnifiers, cognitive support tools, speech command programs, and some brilliant new app that hasn’t been dreamed up yet. Further, assistive technologies of the same “type” may differ in how they present information, and users who share the same tool may further adjust settings to their liking. As web developers, we don’t necessarily need to make sure that each instance surfaces information identically, because each user’s preferences will not be exactly the same. Our aim is to ensure that no matter how a user chooses to explore our sites, content is perceivable, operable, understandable, and robust. By testing with a variety of assistive technologies—including but not limited to screen readers—we can help create a better web for all the many people who use it.

Further reading and resources




scr

Responsible JavaScript: Part I

By the numbers, JavaScript is a performance liability. If the trend persists, the median page will be shipping at least 400 KB of it before too long, and that’s merely what’s transferred. Like other text-based resources, JavaScript is almost always served compressed—but that might be the only thing we’re getting consistently right in its delivery.

Unfortunately, while reducing resource transfer time is a big part of that whole performance thing, compression has no effect on how long browsers take to process a script once it arrives in its entirety. If a server sends 400 KB of compressed JavaScript, the actual amount browsers have to process after decompression is north of a megabyte. How well devices cope with these heavy workloads depends, well, on the device. Much has been written about how adept various devices are at processing lots of JavaScript, but the truth is, the amount of time it takes to process even a trivial amount of it varies greatly between devices.

Take, for example, this throwaway project of mine, which serves around 23 KB of uncompressed JavaScript. On a mid-2017 MacBook Pro, Chrome chews through this comparably tiny payload in about 25 ms. On a Nokia 2 Android phone, however, that figure balloons to around 190 ms. That’s not an insignificant amount of time, but in either case, the page gets interactive reasonably fast.

Now for the big question: how do you think that little Nokia 2 does on an average page? It chokes. Even on a fast connection, browsing the web on it is an exercise in patience as JavaScript-laden web pages brick it for considerable stretches of time.

Figure 1. A performance timeline overview of a Nokia 2 Android phone browsing on a page where excessive JavaScript monopolizes the main thread.

While devices and the networks they navigate the web on are largely improving, we’re eating those gains as trends suggest. We need to use JavaScript responsibly. That begins with understanding what we’re building as well as how we’re building it.

The mindset of “sites” versus “apps”

Nomenclature can be strange in that we sometimes loosely identify things with terms that are inaccurate, yet their meanings are implicitly understood by everyone. Sometimes we overload the term “bee” to also mean “wasp”, even though the differences between bees and wasps are substantial. Those differences can motivate you to deal with each one differently. For instance, we’ll want to destroy a wasp nest, but because bees are highly beneficial and vulnerable insects, we may opt to relocate them.

We can be just as fast and loose in interchanging the terms “website” and “web app”. The differences between them are less clear than those between yellowjackets and honeybees, but conflating them can bring about painful outcomes. The pain comes in the affordances we allow ourselves when something is merely a “website” versus a fully-featured “web app.” If you’re making an informational website for a business, you’re less likely to lean on a powerful framework to manage changes in the DOM or implement client-side routing—at least, I hope. Using tools so ill-suited for the task would not only be a detriment to the people who use that site but arguably less productive.

When we build a web app, though, look out. We’re installing packages which usher in hundreds—if not thousands—of dependencies, some of which we’re not sure are even safe. We’re also writing complicated configurations for module bundlers. In this frenzied, yet ubiquitous, sort of dev environment, it takes knowledge and vigilance to ensure what gets built is fast and accessible. If you doubt this, run npm ls --prod in your project’s root directory and see if you recognize everything in that list. Even if you do, that doesn’t account for third party scripts—of which I’m sure your site has at least a few.

What we tend to forget is that the environment websites and web apps occupy is one and the same. Both are subject to the same environmental pressures that the large gradient of networks and devices impose. Those constraints don’t suddenly vanish when we decide to call what we build “apps”, nor do our users’ phones gain magical new powers when we do so.

It’s our responsibility to evaluate who uses what we make, and accept that the conditions under which they access the internet can be different than what we’ve assumed. We need to know the purpose we’re trying to serve, and only then can we build something that admirably serves that purpose—even if it isn’t exciting to build.

That means reassessing our reliance on JavaScript and how the use of it—particularly to the exclusion of HTML and CSS—can tempt us to adopt unsustainable patterns which harm performance and accessibility.

Don’t let frameworks force you into unsustainable patterns

I’ve been witness to some strange discoveries in codebases when working with teams that depend on frameworks to help them be highly productive. One characteristic common among many of them is that poor accessibility and performance patterns often result. Take the React component below, for example:

import React, { Component } from "react";
import { validateEmail } from "helpers/validation";

class SignupForm extends Component {
  constructor (props) {
    super(props);

    this.handleSubmit = this.handleSubmit.bind(this);
    this.updateEmail = this.updateEmail.bind(this);
    this.state = { email: "" };
  }

  updateEmail (event) {
    this.setState({
      email: event.target.value
    });
  }

  handleSubmit () {
    // If the email checks out, submit
    if (validateEmail(this.state.email)) {
      // ...
    }
  }

  render () {
    return (
      <div>
        <span>Enter your email:</span>
        <input type="text" id="email" onChange={this.updateEmail} />
        <button onClick={this.handleSubmit}>Sign Up</button>
      </div>
    );
  }
}

There are some notable accessibility issues here:

  1. A form that doesn’t use a <form> element is not a form. Indeed, you could paper over this by specifying role="form" in the parent <div>, but if you’re building a form—and this sure looks like one—use a <form> element with the proper action and method attributes. The action attribute is crucial, as it ensures the form will still do something in the absence of JavaScript—provided the component is server-rendered, of course.
  2. <span> is not a substitute for a <label> element, which provides accessibility benefits <span>s don’t.
  3. If we intend to do something on the client side prior to submitting a form, then we should move the action bound to the <button> element's onClick handler to the <form> element’s onSubmit handler.
  4. Incidentally, why use JavaScript to validate an email address when HTML5 offers form validation controls in almost every browser back to IE 10? There’s an opportunity here to rely on the browser and use an appropriate input type, as well as the required attribute—but be aware that getting this to work right with screen readers takes a little know-how.
  5. While not an accessibility issue, this component doesn't rely on any state or lifecycle methods, which means it can be refactored into a stateless functional component, which uses considerably less JavaScript than a full-fledged React component.

Knowing these things, we can refactor this component:

import React from "react";

const SignupForm = props => {
  const handleSubmit = event => {
    // Needed in case we're sending data to the server XHR-style
    // (but will still work if server-rendered with JS disabled).
    event.preventDefault();

    // Carry on...
  };
  
  return (
    <form method="POST" action="/signup" onSubmit={handleSubmit}>
      <label htmlFor="email" className="email-label">Enter your email:</label>
      <input type="email" id="email" required />
      <button>Sign Up</button>
    </form>
  );
};

Not only is this component now more accessible, but it also uses less JavaScript. In a world that’s drowning in JavaScript, deleting lines of it should feel downright therapeutic. The browser gives us so much for free, and we should try to take advantage of that as often as possible.

This is not to say that inaccessible patterns occur only when frameworks are used, but rather that a sole preference for JavaScript will eventually surface gaps in our understanding of HTML and CSS. These knowledge gaps will often result in mistakes we may not even be aware of. Frameworks can be useful tools that increase our productivity, but continuing education in core web technologies is essential to creating usable experiences, no matter what tools we choose to use.

Rely on the web platform and you’ll go far, fast

While we’re on the subject of frameworks, it must be said that the web platform is a formidable framework of its own. As the previous section showed, we’re better off when we can rely on established markup patterns and browser features. The alternative is to reinvent them, and invite all the pain such endeavors all but guarantee us, or worse: merely assume that the author of every JavaScript package we install has solved the problem comprehensively and thoughtfully.

SINGLE PAGE APPLICATIONS

One of the tradeoffs developers are quick to make is to adopt the single page application (SPA) model, even if it’s not a fit for the project. Yes, you do gain better perceived performance with the client-side routing of an SPA, but what do you lose? The browser’s own navigation functionality—albeit synchronous—provides a slew of benefits. For one, history is managed according to a complex specification. Users without JavaScript—be it by their own choice or not—won’t lose access altogether. For SPAs to remain available when JavaScript is not, server-side rendering suddenly becomes a thing you have to consider.

Figure 2. A comparison of an example app loading on a slow connection. The app on the left depends entirely upon JavaScript to render a page. The app on the right renders a response on the server, but then uses client-side hydration to attach components to the existing server-rendered markup.

Accessibility is also harmed if a client-side router fails to let people know what content on the page has changed. This can leave those reliant on assistive technology to suss out what changes have occurred on the page, which can be an arduous task.

Then there’s our old nemesis: overhead. Some client-side routers are very small, but when you start with React, a compatible router, and possibly even a state management library, you’re accepting that there’s a certain amount of code you can never optimize away—approximately 135 KB in this case. Carefully consider what you’re building and whether a client-side router is worth the tradeoffs you’ll inevitably make. Typically, you’re better off without one.

If you’re concerned about the perceived navigation performance, you could lean on rel=prefetch to speculatively fetch documents on the same origin. This has a dramatic effect on improving perceived loading performance of pages, as the document is immediately available in the cache. Because prefetches are done at a low priority, they’re also less likely to contend with critical resources for bandwidth.
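In markup, that can be a single resource hint; the /writing/ path below is just an example (it matches the figure that follows):

<link rel="prefetch" href="/writing/">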

Figure 3. The HTML for the writing/ URL is prefetched on the initial page. When the writing/ URL is requested by the user, the HTML for it is loaded instantaneously from the browser cache.

The primary drawback with link prefetching is that you need to be aware that it can be potentially wasteful. Quicklink, a tiny link prefetching script from Google, mitigates this somewhat by checking if the current client is on a slow connection—or has data saver mode enabled—and avoids prefetching links on cross-origins by default.

Service workers are also hugely beneficial to perceived performance for returning users, whether we use client-side routing or not—provided you know the ropes. When we precache routes with a service worker, we get many of the same benefits as link prefetching, but with a much greater degree of control over requests and responses. Whether you think of your site as an “app” or not, adding a service worker to it is perhaps one of the most responsible uses of JavaScript that exists today.
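Here’s a deliberately tiny sketch of that idea using the Cache API directly; the cache name and route list are invented, and a production setup would also need to handle cache versioning and updates:

// sw.js
const CACHE_NAME = "precache-v1";
const PRECACHED_ROUTES = ["/", "/writing/", "/about/"];

self.addEventListener("install", (event) => {
  // Precache a handful of routes at install time.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHED_ROUTES))
  );
});

self.addEventListener("fetch", (event) => {
  // Serve from the cache when possible; fall back to the network otherwise.
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});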

JAVASCRIPT ISN’T THE SOLUTION TO YOUR LAYOUT WOES

If you’re installing a package to solve a layout problem, proceed with caution and ask “what am I trying to accomplish?” CSS is designed to do this job, and requires no abstractions to use effectively. Most layout issues JavaScript packages attempt to solve, like box placement, alignment, and sizing, managing text overflow, and even entire layout systems, are solvable with CSS today. Modern layout engines like Flexbox and Grid are supported well enough that we shouldn’t need to start a project with any layout framework. CSS is the framework. When we have feature queries, progressively enhancing layouts to adopt new layout engines is suddenly not so hard.

/* Your mobile-first, non-CSS grid styles go here */

/* The @supports rule below is ignored by browsers that don't
   support CSS grid, _or_ don't support @supports. */
@supports (display: grid) {
  /* Larger screen layout */
  @media (min-width: 40em) {
    /* Your progressively enhanced grid layout styles go here */
  }
}

Using JavaScript solutions for layout and presentation problems is not new. It was something we did when we lied to ourselves in 2009 that every website had to look exactly the same in IE6 as it did in the more capable browsers of that time. If we’re still developing websites to look the same in every browser in 2019, we should reassess our development goals. There will always be some browser we’ll have to support that can’t do everything those modern, evergreen browsers can. Total visual parity on all platforms is not only a pursuit made in vain, it’s the principal foe of progressive enhancement.

I’m not here to kill JavaScript

Make no mistake, I have no ill will toward JavaScript. It’s given me a career and—if I’m being honest with myself—a source of enjoyment for over a decade. Like any long-term relationship, I learn more about it the more time I spend with it. It’s a mature, feature-rich language that only gets more capable and elegant with every passing year.

Yet, there are times when I feel like JavaScript and I are at odds. I am critical of JavaScript. Or maybe more accurately, I’m critical of how we’ve developed a tendency to view it as a first resort to building for the web. As I pick apart yet another bundle not unlike a tangled ball of Christmas tree lights, it’s become clear that the web is drunk on JavaScript. We reach for it for almost everything, even when the occasion doesn’t call for it. Sometimes I wonder how vicious the hangover will be.

In a series of articles to follow, I’ll be giving more practical advice to follow to stem the encroaching tide of excessive JavaScript and how we can wrangle it so that what we build for the web is usable—or at least more so—for everyone everywhere. Some of the advice will be preventative. Some will be mitigating “hair of the dog” measures. In either case, the outcomes will hopefully be the same. I believe that we all love the web and want to do right by it, but I want us to think about how to make it more resilient and inclusive for all.





scr

Responsible JavaScript: Part II

You and the rest of the dev team lobbied enthusiastically for a total re-architecture of the company’s aging website. Your pleas were heard by management—even up to the C-suite—who gave the green light. Elated, you and the team started working with the design, copy, and IA teams. Before long, you were banging out new code.

It started out innocently enough with an npm install here and an npm install there. Before you knew it, though, you were installing production dependencies like an undergrad doing keg stands without a care for the morning after.

Then you launched.

Unlike the aftermath of most copious boozings, the agony didn’t start the morning after. Oh, no. It came months later in the ghastly form of low-grade nausea and headache of product owners and middle management wondering why conversions and revenue were both down since the launch. It then hit a fever pitch when the CTO came back from a weekend at the cabin and wondered why the site loaded so slowly on their phone—if it indeed ever loaded at all.

Everyone was happy. Now no one is happy. Welcome to your first JavaScript hangover.

It’s not your fault

When you’re grappling with a vicious hangover, “I told you so” would be a well-deserved, if fight-provoking, rebuke—assuming you could even fight in so sorry a state.

When it comes to JavaScript hangovers, there’s plenty of blame to dole out. Pointing fingers is a waste of time, though. The landscape of the web today demands that we iterate faster than our competitors. This kind of pressure means we’re likely to take advantage of any means available to be as productive as possible. That means we’re more likely—but not necessarily doomed—to build apps with more overhead, and possibly use patterns that can hurt performance and accessibility.

Web development isn't easy. It’s a long slog we rarely get right on the first try. The best part of working on the web, however, is that we don’t have to get it perfect at the start. We can make improvements after the fact, and that’s just what the second installment of this series is here for. Perfection is a long ways off. For now, let’s take the edge off of that JavaScript hangover by improving your site’s, er, scriptuation in the short term.

Round up the usual suspects

It might seem rote, but it’s worth going through the list of basic optimizations. It’s not uncommon for large development teams—particularly those that work across many repositories or don’t use optimized boilerplate—to overlook them.

Shake those trees

First, make sure your toolchain is configured to perform tree shaking. If tree shaking is new to you, I wrote a guide on it last year you can consult. The short of it is that tree shaking is a process in which unused exports in your codebase don’t get packaged up in your production bundles.

Tree shaking is available out of the box with modern bundlers such as webpack, Rollup, or Parcel. Grunt or gulp—which are not bundlers, but rather task runners—won’t do this for you. A task runner doesn’t build a dependency graph like a bundler does. Rather, they perform discrete tasks on the files you feed to them with any number of plugins. Task runners can be extended with plugins to use bundlers to process JavaScript. If extending task runners in this way is problematic for you, you’ll likely need to manually audit and remove unused code.

For tree shaking to be effective, the following must be true:

  1. Your app logic and the packages you install in your project must be authored as ES6 modules. Tree shaking CommonJS modules isn’t practically possible.
  2. Your bundler must not transform ES6 modules into another module format at build time. If this happens in a toolchain that uses Babel, @babel/preset-env configuration must specify modules: false to prevent ES6 code from being converted to CommonJS.
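For example, a Babel configuration along these lines (a sketch; your presets and options will differ) leaves import/export statements intact so the bundler can do the shaking:

// babel.config.js
module.exports = {
  presets: [
    ["@babel/preset-env", { modules: false }]
  ]
};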

On the off chance tree shaking isn’t occurring during your build, getting it to work may help. Of course, its effectiveness varies on a case-by-case basis. It also depends on whether the modules you import introduce side effects, which may influence a bundler’s ability to shake unused exports.

Split that code

Chances are good that you’re employing some form of code splitting, but it’s worth re-evaluating how you’re doing it. No matter how you’re splitting code, there are two questions that are always worth asking yourself:

  1. Are you deduplicating common code between entry points?
  2. Are you lazy loading all the functionality you reasonably can with dynamic import()?

These are important because reducing redundant code is essential to performance. Lazy loading functionality also improves performance by lowering the initial JavaScript footprint on a given page. On the redundancy front, using an analysis tool such as Bundle Buddy can help you find out if you have a problem.

Bundle Buddy can examine your webpack compilation statistics and determine how much code is shared between your bundles.
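
If an analysis like that turns up modules duplicated across bundles, most bundlers can be configured to factor the shared code into common chunks. In webpack, the optimization.splitChunks setting does this. Here’s a minimal, illustrative sketch in which the entry names are assumptions for the example:

// webpack.config.js (illustrative sketch)
module.exports = {
  entry: {
    home: "./src/home.js",
    checkout: "./src/checkout.js"
  },
  optimization: {
    // Pull modules shared between the home and checkout entry points
    // into separate chunks so they aren't bundled twice.
    splitChunks: {
      chunks: "all"
    }
  }
};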

Where lazy loading is concerned, it can be a bit difficult to know where to start looking for opportunities. When I look for opportunities in existing projects, I’ll search for user interaction points throughout the codebase, such as click and keyboard events, and similar candidates. Any code that requires a user interaction to run is a potentially good candidate for dynamic import().
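
To make that concrete, here’s a hedged sketch that lazily loads a charting module only once a button is clicked. The element ID, module path, and renderChart export are all invented for the example:

// Only fetch the charting code once someone asks for it.
const chartButton = document.getElementById("show-chart");

chartButton.addEventListener("click", async () => {
  // The chart module is downloaded and evaluated on first click.
  const { renderChart } = await import("./chart.js");
  renderChart(chartButton.dataset.metric);
});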

Of course, loading scripts on demand brings the possibility that interactivity could be noticeably delayed, as the script necessary for the interaction must be downloaded first. If data usage is not a concern, consider using the rel=prefetch resource hint to load such scripts at a low priority that won’t contend for bandwidth with critical resources. Support for rel=prefetch is good, but nothing will break if it’s unsupported, as such browsers will simply ignore markup they don’t understand.

Externalize third-party hosted code

Ideally, you should self-host as many of your site’s dependencies as possible. If for some reason you must load dependencies from a third party, mark them as externals in your bundler’s configuration. Failing to do so could mean your website’s visitors will download both locally hosted code and the same code from a third party.

Let’s look at a hypothetical situation where this could hurt you: say that your site loads Lodash from a public CDN. You've also installed Lodash in your project for local development. However, if you fail to mark Lodash as external, your production code will end up loading a third party copy of it in addition to the bundled, locally hosted copy.
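
In webpack, guarding against that double download is a small configuration change. This sketch assumes the CDN copy of Lodash exposes its usual _ global:

// webpack.config.js (illustrative sketch)
module.exports = {
  // ...the rest of your configuration...
  externals: {
    // import/require calls for "lodash" resolve to the global `_`
    // provided by the CDN script instead of being bundled locally.
    lodash: "_"
  }
};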

This may seem like common knowledge if you know your way around bundlers, but I’ve seen it get overlooked. It’s worth your time to check twice.

If you aren’t convinced to self-host your third-party dependencies, then consider adding dns-prefetch, preconnect, or possibly even preload hints for them. Doing so can lower your site’s Time to Interactive and—if JavaScript is critical to rendering content—your site’s Speed Index.

Smaller alternatives for less overhead

Userland JavaScript is like an obscenely massive candy store, and we as developers are awed by the sheer number of open source offerings. Frameworks and libraries allow us to quickly extend our applications to do all sorts of stuff that would otherwise take loads of time and effort.

While I personally prefer to aggressively minimize the use of client-side frameworks and libraries in my projects, their value is compelling. Yet, we do have a responsibility to be a bit hawkish when it comes to what we install. When we’ve already built and shipped something that depends on a slew of installed code to run, we’ve accepted a baseline cost that only the maintainers of that code can practically address. Right?

Maybe, but then again, maybe not. It depends on the dependencies used. For instance, React is extremely popular, but Preact is an ultra-small alternative that largely shares the same API and retains compatibility with many React add-ons. Luxon and date-fns are much more compact alternatives to moment.js, which is not exactly tiny.
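
For instance, here’s a common moment.js call next to its date-fns equivalent (the format tokens shown assume date-fns v2):

// moment.js: the whole library comes along for the ride.
import moment from "moment";
const stampA = moment().format("YYYY-MM-DD");

// date-fns: only the format function is imported, and it tree-shakes cleanly.
import { format } from "date-fns";
const stampB = format(new Date(), "yyyy-MM-dd");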

Libraries such as Lodash offer many useful methods. Yet, some of them are easily replaceable with native ES6. Lodash’s compact method, for example, is replaceable with the filter array method. Many more can be replaced without much effort, and without the need for pulling in a large utility library.
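
As a quick example, both of the lines below produce the same result, but only one of them needs a library:

import { compact } from "lodash-es";

const values = [0, 1, false, 2, "", 3];

compact(values);        // [1, 2, 3]
values.filter(Boolean); // [1, 2, 3], no library required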

Whatever your preferred tools are, the idea is the same: do some research to see if there are smaller alternatives, or if native language features can do the trick. You may be surprised at how little effort it may take you to seriously reduce your app’s overhead.

Differentially serve your scripts

There’s a good chance you’re using Babel in your toolchain to transform your ES6 source into code that can run on older browsers. Does this mean we’re doomed to serve giant bundles even to browsers that don’t need them, until the older browsers disappear altogether? Of course not! Differential serving helps us get around this by generating two different builds of your ES6 source:

  • Bundle one, which contains all the transforms and polyfills required for your site to work on older browsers. You’re probably already serving this bundle right now.
  • Bundle two, which contains little to none of the transforms and polyfills because it targets modern browsers. This is the bundle you’re probably not serving—at least not yet.

Achieving this is a bit involved. I’ve written a guide on one way you can do it, so there’s no need for a deep dive here. The long and short of it is that you can modify your build configuration to generate an additional but smaller version of your site’s JavaScript code, and serve it only to modern browsers. The best part is that these are savings you can achieve without sacrificing any features or functionality you already offer. Depending on your application code, the savings could be quite significant.

A webpack-bundle-analyzer analysis of a project's legacy bundle (left) versus one for a modern bundle (right).
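
To give a flavor of the approach, the sketch below exports two webpack configurations, one per bundle. Treat it as a rough outline rather than a drop-in config; the entry point, output paths, and browser targets are all assumptions for the example:

// webpack.config.js (illustrative sketch)
const path = require("path");

// Shared babel-loader rule; only the browser targets differ per build.
const babelRule = (targets) => ({
  test: /\.js$/,
  exclude: /node_modules/,
  use: {
    loader: "babel-loader",
    options: {
      presets: [["@babel/preset-env", { targets, modules: false }]]
    }
  }
});

const modernConfig = {
  entry: "./src/index.js",
  output: { path: path.resolve(__dirname, "dist"), filename: "js/app.mjs" },
  module: { rules: [babelRule({ esmodules: true })] }
};

const legacyConfig = {
  entry: "./src/index.js",
  output: { path: path.resolve(__dirname, "dist"), filename: "js/app.js" },
  module: { rules: [babelRule({ ie: "11" })] }
};

module.exports = [modernConfig, legacyConfig];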

The simplest pattern for serving these bundles to their respective platforms is brief. It also works a treat in modern browsers:

<!-- Modern browsers load this file: -->
<script type="module" src="/js/app.mjs"></script>
<!-- Legacy browsers load this file: -->
<script defer nomodule src="/js/app.js"></script>

Unfortunately, there’s a caveat with this pattern: legacy browsers like IE 11—and even relatively modern ones such as Edge versions 15 through 18—will download both bundles. If this is an acceptable trade-off for you, then worry no further.

On the other hand, you'll need a workaround if you’re concerned about the performance implications of older browsers downloading both sets of bundles. Here’s one potential solution that uses script injection (instead of the script tags above) to avoid double downloads on affected browsers:

var scriptEl = document.createElement("script");

if ("noModule" in scriptEl) {
  // Set up modern script
  scriptEl.src = "/js/app.mjs";
  scriptEl.type = "module";
} else {
  // Set up legacy script
  scriptEl.src = "/js/app.js";
  scriptEl.defer = true; // type="module" defers by default, so set it here.
}

// Inject!
document.body.appendChild(scriptEl);

This script infers that if a browser supports the nomodule attribute in the script element, it understands type="module". This ensures that legacy browsers only get legacy scripts and modern browsers only get modern ones. Be warned, though, that dynamically injected scripts load asynchronously by default, so set the async attribute to false if dependency order is crucial.
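
If your scripts do need to execute in order, the fix is a single extra line applied before the element is injected:

// Injected scripts are async by default; this preserves execution order.
scriptEl.async = false;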

Transpile less

I’m not here to trash Babel. It’s indispensable, but lordy, it adds a lot of extra stuff without your ever knowing. It pays to peek under the hood to see what it’s up to. Some minor changes in your coding habits can have a positive impact on what Babel spits out.

https://twitter.com/_developit/status/1110229993999777793

To wit: default parameters are a very handy ES6 feature you probably already use:

function logger(message, level = "log") {
  console[level](message);
}

The thing to pay attention to here is the level parameter, which has a default of “log.” This means if we want to invoke console.log with this wrapper function, we don’t need to specify level. Great, right? Except when Babel transforms this function, the output looks like this:

function logger(message) {
  var level = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : "log";

  console[level](message);
}

This is an example of how, despite our best intentions, developer conveniences can backfire. What was a handful of bytes in our source has ballooned into something much larger in our production code. Uglification can’t do much about it either, as the arguments object can’t be mangled. Oh, and if you think rest parameters might be a worthy antidote, Babel’s transforms for them are even bulkier:

// Source
function logger(...args) {
  const [level, message] = args;

  console[level](message);
}

// Babel output
function logger() {
  for (var _len = arguments.length, args = new Array(_len), _key = 0; _key < _len; _key++) {
    args[_key] = arguments[_key];
  }

  const level = args[0],
        message = args[1];
  console[level](message);
}

Worse yet, Babel transforms this code even for projects with a @babel/preset-env configuration targeting modern browsers, meaning the modern bundles in your differentially served JavaScript will be affected too! You could use loose transforms to soften the blow—and that’s a fine idea, as they’re often quite a bit smaller than their more spec-compliant counterparts—but enabling loose transforms can cause issues if you remove Babel from your build pipeline later on.

Regardless of whether you decide to enable loose transforms, here’s one way to cut the cruft of transpiled default parameters:

// Babel won't touch this
function logger(message, level) {
  console[level || "log"](message);
}

Of course, default parameters aren’t the only feature to be wary of. For example, spread syntax gets transformed, as do arrow functions and a whole host of other stuff.

If you don’t want to avoid these features altogether, you have a couple ways of reducing their impact:

  1. If you’re authoring a library, consider using @babel/runtime in concert with @babel/plugin-transform-runtime to deduplicate the helper functions Babel puts into your code.
  2. For polyfilled features in apps, you can include them selectively with @babel/polyfill via @babel/preset-env’s useBuiltIns: "usage" option.
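
For the second option, a minimal sketch of the relevant @babel/preset-env settings might look like the following. It assumes core-js 3 is installed as a dependency; adjust it to whatever your project actually uses:

// babel.config.js (illustrative sketch)
module.exports = {
  presets: [
    [
      "@babel/preset-env",
      {
        // Inject only the polyfills the code actually uses,
        // rather than importing all of @babel/polyfill up front.
        useBuiltIns: "usage",
        corejs: 3
      }
    ]
  ]
};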

This is solely my opinion, but I believe the best choice is to avoid transpilation altogether in bundles generated for modern browsers. That’s not always possible, especially if you use JSX, which must be transformed for all browsers, or if you’re using bleeding edge language features that aren’t widely supported. In the latter case, it might be worth asking if those features are really necessary to deliver a good user experience (they rarely are). If you arrive at the conclusion that Babel must be a part of your toolchain, then it’s worth peeking under the hood from time to time to catch suboptimal stuff Babel might be doing that you can improve on.

Improvement is not a race

As you massage your temples wondering when this horrid JavaScript hangover is going to lift, understand that it’s precisely when we rush to get something out there as fast as we possibly can that the user experience can suffer. As the web development community obsesses over iterating faster in the name of competition, it’s worth your time to slow down a little bit. You’ll find that by doing so, you may not be iterating as fast as your competitors, but your product will be faster than theirs.

As you take these suggestions and apply them to your codebase, know that progress doesn’t spontaneously happen overnight. Web development is a job. The truly impactful work is done when we’re thoughtful and dedicated to the craft for the long haul. Focus on steady improvements. Measure, test, repeat, and your site’s user experience will improve, and you’ll get faster bit by bit over time.

Special thanks to Jason Miller for tech editing this piece. Jason is the creator and one of the many maintainers of Preact, a vastly smaller alternative to React with the same API. If you use Preact, please consider supporting Preact through Open Collective.




scr

Optimizing Scratch 3 Pen Blocks

Earlier this year, we shared our work on the launch of Scratch 3.0, a major version of the visual programming environment for children of all ages. The new version of Scratch marked a complete rewrite of the runtime in JavaScript leveraging open web APIs. In our previous post, we enumerated the many performance optimizations that […]




scr

Experiments in Faster Scratch 3 Loading with Texture Atlases

One of the best parts of the Scratch community is the diversity of Scratch projects. Community members have used the Scratch programming language to create many different kinds of interactive applications, from full game engines to music sequencers. One genre is especially unique: Multiple Animator Projects, or MAPs. These Scratch projects compile animations from many […]




scr

How to scribe at TPAC

Next week is TPAC in Fukuoka, Japan. This is an annual conference for all working groups in the W3C to meet face-to-face. Naturally, there is a desire to have a record of what is said in these meetings. This is done by people in the meeting taking turns to scribe. Even if you have attended […]




scr

Introducing a JavaScript library for exploring Scratch projects: sb-util

Introduction We’re excited to introduce sb-util, a new JavaScript library that makes it easy to query Scratch projects via .sb3 files. This npm library allows developers (or even teachers and students) to parse and introspect Scratch projects for a range of purposes, from data visualization to custom tooling. Previously, working with Scratch project files required […]




scr

The ECMAScribes

Did you know that in the process of standardizing JavaScript, TC39 publishes notes for each of their regular meetings? Every other month, over 50 “delegates” convene to discuss the future of the language, and the minutes they publish provide an incredible view into their discussions. Here’s what you can expect to find: a list of […]




scr

Extracting profit: imperialism, neoliberalism and the new scramble for Africa / Lee Wengraf

Dewey Library - HC800.W46 2018




scr

Sequential and alternating RAFT single unit monomer insertion: model trimers as the guide for discrete oligomer synthesis

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00390E, Paper
Ruizhe Liu, Lei Zhang, Zixuan Huang, Jiangtao Xu
A complete set of model trimers and their synthetic kinetics are established to guide the synthesis of diverse sequence-defined polymers.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




scr

Wet cells, dry cells, fuel cells [videorecording] : an introduction / producer/director, Bernard Motut ; script, Bernard Motut, John Davis




scr

Follow-up Testing Among Children With Elevated Screening Blood Lead Levels

Interview with Alex R. Kemper, MD, MPH, MS, author of Follow-up Testing Among Children With Elevated Screening Blood Lead Levels, published in the May 11 issue of JAMA, the Journal of the American Medical Association. Summary Points: 1. About half the children (six years and younger) with elevated blood lead levels did not receive follow up testing. 2. Nonwhite children, and those living in urban as well as high-risk lead settings, were less likely to receive follow up testing compared to their counterparts. 3. Follow-up testing for children with high blood lead levels is essential for managing lead poisoning and for maximizing cognitive development. 4. Interventions are needed to overcome disparities in care.




scr

Clinical Decision Support and Appropriateness of Antimicrobial Prescribing: A Randomized Trial

Interview with Matthew H. Samore, MD, author of Clinical Decision Support and Appropriateness of Antimicrobial Prescribing: A Randomized Trial, published in the November 9 issue of JAMA, the Journal of the American Medical Association. Summary points: 1. Repetitive use of a diagnostic and treatment algorithm to ingrain new prescribing habits was a valuable part of this practice change intervention. 2. Clinical decision support systems (CDSS) are feasibly implemented in practice settings that lack electronic medical records, including rural communities. 3. CDSS needs to be integrated with tools that save clinicians' time to be sustainable.




scr

Nucleoside-based fluorescent carbon dots for discrimination of metal ions

J. Mater. Chem. B, 2020, 8,3640-3646
DOI: 10.1039/C9TB02758K, Paper
Tieli Zhou, Jinyi Zhang, Biwu Liu, Shihong Wu, Peng Wu, Juewen Liu
Using nucleosides and citrate as starting materials, a series of fluorescent carbon dots were synthesized showing different quenching properties by metal ions for their detection by a sensor array.
The content of this RSS Feed (c) The Royal Society of Chemistry




scr

Discrimination of cysteamine from mercapto amino acids through isoelectric point-mediated surface ligand exchange of β-cyclodextrin-modified gold nanoparticles

J. Mater. Chem. B, 2020, Advance Article
DOI: 10.1039/D0TB00462F, Paper
Quanbao Ma, Xun Fang, Junting Zhang, Lili Zhu, Xiabing Rao, Qi Lu, Zhijun Sun, Huan Yu, Qunlin Zhang
A pI-mediated R6G-β-CD@AuNPs system was designed for the first time for the discrimination of CA from GSH/Cys/Hcy in human serum samples.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




scr

Essential psychopharmacology : the prescriber's guide / Stephen M. Stahl ; editorial assistant, Meghan M. Grady ; with illustrations by Nancy Muntner

Stahl, S. M




scr

PDR for nonprescription drugs, dietary supplements, and herbs




scr

Frequently prescribed medications : drugs you need to know / [edited by] Michael A. Mancano and Jason C. Gallagher




scr

Culpeper's complete herbal : consisting of a comprehensive description of nearly all herbs with their medicinal properties and directions for compounding the medicines extracted from them

Culpeper, Nicholas, 1616-1654




scr

Pharmacology case studies for nurse prescribers / edited by Donna Scholefield, Alan Sebti and Alison Harris




scr

Paediatric pharmacopoeia : pocket prescriber




scr

Frequently prescribed medications : drugs you need to know / [edited by] Michael A. Mancano, PharmD, Jason C. Gallagher, PharmD




scr

Discrete geometry and symmetry : dedicated to Károly Bezdek and Egon Schulte on the Occasion of their 60th Birthdays / Marston D. E. Conder, Antoine Deza, Asia Ivić Weiss, editors