
25 Useful Resources for Creating Tooltips With JavaScript or CSS

Tooltips are awesome, there’s simply no denying it. They provide a simple, predictable and straightforward way to provide your users with useful, context-sensitive information, and they look cool to boot. We all agree on how great tooltips are, but how we go about implementing them can differ dramatically. If you’re at square one, looking for […]





Responsible JavaScript: Part I

By the numbers, JavaScript is a performance liability. If the trend persists, the median page will be shipping at least 400 KB of it before too long, and that’s merely what’s transferred. Like other text-based resources, JavaScript is almost always served compressed—but that might be the only thing we’re getting consistently right in its delivery.

Unfortunately, while reducing resource transfer time is a big part of that whole performance thing, compression has no effect on how long browsers take to process a script once it arrives in its entirety. If a server sends 400 KB of compressed JavaScript, the actual amount browsers have to process after decompression is north of a megabyte. How well devices cope with these heavy workloads depends, well, on the device. Much has been written about how adept various devices are at processing lots of JavaScript, but the truth is, the amount of time it takes to process even a trivial amount of it varies greatly between devices.

Take, for example, this throwaway project of mine, which serves around 23 KB of uncompressed JavaScript. On a mid-2017 MacBook Pro, Chrome chews through this comparably tiny payload in about 25 ms. On a Nokia 2 Android phone, however, that figure balloons to around 190 ms. That’s not an insignificant amount of time, but in either case, the page gets interactive reasonably fast.

Now for the big question: how do you think that little Nokia 2 does on an average page? It chokes. Even on a fast connection, browsing the web on it is an exercise in patience as JavaScript-laden web pages brick it for considerable stretches of time.

Figure 1. A performance timeline overview of a Nokia 2 Android phone browsing on a page where excessive JavaScript monopolizes the main thread.

While devices and the networks they navigate the web on are largely improving, trends suggest we’re consuming those gains as quickly as they arrive. We need to use JavaScript responsibly. That begins with understanding what we’re building as well as how we’re building it.

The mindset of “sites” versus “apps”

Nomenclature can be strange in that we sometimes loosely identify things with terms that are inaccurate, yet their meanings are implicitly understood by everyone. Sometimes we overload the term “bee” to also mean “wasp”, even though the differences between bees and wasps are substantial. Those differences can motivate you to deal with each one differently. For instance, we’ll want to destroy a wasp nest, but because bees are highly beneficial and vulnerable insects, we may opt to relocate them.

We can be just as fast and loose in interchanging the terms “website” and “web app”. The differences between them are less clear than those between yellowjackets and honeybees, but conflating them can bring about painful outcomes. The pain comes in the affordances we allow ourselves when something is merely a “website” versus a fully-featured “web app.” If you’re making an informational website for a business, you’re less likely to lean on a powerful framework to manage changes in the DOM or implement client-side routing—at least, I hope. Using tools so ill-suited for the task would not only be a detriment to the people who use that site but arguably less productive.

When we build a web app, though, look out. We’re installing packages which usher in hundreds—if not thousands—of dependencies, some of which we’re not sure are even safe. We’re also writing complicated configurations for module bundlers. In this frenzied, yet ubiquitous, sort of dev environment, it takes knowledge and vigilance to ensure what gets built is fast and accessible. If you doubt this, run npm ls --prod in your project’s root directory and see if you recognize everything in that list. Even if you do, that doesn’t account for third party scripts—of which I’m sure your site has at least a few.

What we tend to forget is that the environment websites and web apps occupy is one and the same. Both are subject to the same environmental pressures that the large gradient of networks and devices impose. Those constraints don’t suddenly vanish when we decide to call what we build “apps”, nor do our users’ phones gain magical new powers when we do so.

It’s our responsibility to evaluate who uses what we make, and accept that the conditions under which they access the internet can be different than what we’ve assumed. We need to know the purpose we’re trying to serve, and only then can we build something that admirably serves that purpose—even if it isn’t exciting to build.

That means reassessing our reliance on JavaScript and how the use of it—particularly to the exclusion of HTML and CSS—can tempt us to adopt unsustainable patterns which harm performance and accessibility.

Don’t let frameworks force you into unsustainable patterns

I’ve witnessed some strange discoveries in codebases when working with teams that depend on frameworks to help them be highly productive. A characteristic common to many of them is that poor accessibility and performance patterns often result. Take the React component below, for example:

import React, { Component } from "react";
import { validateEmail } from "helpers/validation";

class SignupForm extends Component {
  constructor (props) {
    super(props);

    this.handleSubmit = this.handleSubmit.bind(this);
    this.updateEmail = this.updateEmail.bind(this);
    this.state = { email: "" };
  }

  updateEmail (event) {
    this.setState({
      email: event.target.value
    });
  }

  handleSubmit () {
    // If the email checks out, submit
    if (validateEmail(this.state.email)) {
      // ...
    }
  }

  render () {
    return (
      <div>
        <span>Enter your email:</span>
        <input type="text" id="email" onChange={this.updateEmail} />
        <button onClick={this.handleSubmit}>Sign Up</button>
      </div>
    );
  }
}

There are some notable accessibility issues here:

  1. A form that doesn’t use a <form> element is not a form. Indeed, you could paper over this by specifying role="form" in the parent <div>, but if you’re building a form—and this sure looks like one—use a <form> element with the proper action and method attributes. The action attribute is crucial, as it ensures the form will still do something in the absence of JavaScript—provided the component is server-rendered, of course.
  2. <span> is not a substitute for a <label> element, which provides accessibility benefits <span>s don’t.
  3. If we intend to do something on the client side prior to submitting a form, then we should move the action bound to the <button> element's onClick handler to the <form> element’s onSubmit handler.
  4. Incidentally, why use JavaScript to validate an email address when HTML5 offers form validation controls in almost every browser back to IE 10? There’s an opportunity here to rely on the browser and use an appropriate input type, as well as the required attribute—but be aware that getting this to work right with screen readers takes a little know-how.
  5. While not an accessibility issue, this component doesn't rely on any state or lifecycle methods, which means it can be refactored into a stateless functional component, which uses considerably less JavaScript than a full-fledged React component.

Knowing these things, we can refactor this component:

import React from "react";

const SignupForm = props => {
  const handleSubmit = event => {
    // Needed in case we're sending data to the server XHR-style
    // (but will still work if server-rendered with JS disabled).
    event.preventDefault();

    // Carry on...
  };
  
  return (
    <form method="POST" action="/signup" onSubmit={handleSubmit}>
      <label htmlFor="email" className="email-label">Enter your email:</label>
      <input type="email" id="email" required />
      <button>Sign Up</button>
    </form>
  );
};

Not only is this component now more accessible, but it also uses less JavaScript. In a world that’s drowning in JavaScript, deleting lines of it should feel downright therapeutic. The browser gives us so much for free, and we should try to take advantage of that as often as possible.

This is not to say that inaccessible patterns occur only when frameworks are used, but rather that a sole preference for JavaScript will eventually surface gaps in our understanding of HTML and CSS. These knowledge gaps will often result in mistakes we may not even be aware of. Frameworks can be useful tools that increase our productivity, but continuing education in core web technologies is essential to creating usable experiences, no matter what tools we choose to use.

Rely on the web platform and you’ll go far, fast

While we’re on the subject of frameworks, it must be said that the web platform is a formidable framework of its own. As the previous section showed, we’re better off when we can rely on established markup patterns and browser features. The alternative is to reinvent them, and invite all the pain such endeavors all but guarantee us, or worse: merely assume that the author of every JavaScript package we install has solved the problem comprehensively and thoughtfully.

SINGLE PAGE APPLICATIONS

One of the tradeoffs developers are quick to make is to adopt the single page application (SPA) model, even if it’s not a fit for the project. Yes, you do gain better perceived performance with the client-side routing of an SPA, but what do you lose? The browser’s own navigation functionality—albeit synchronous—provides a slew of benefits. For one, history is managed according to a complex specification. Users without JavaScript—be it by their own choice or not—won’t lose access altogether. For SPAs to remain available when JavaScript is not, server-side rendering suddenly becomes a thing you have to consider.

Figure 2. A comparison of an example app loading on a slow connection. The app on the left depends entirely upon JavaScript to render a page. The app on the right renders a response on the server, but then uses client-side hydration to attach components to the existing server-rendered markup.

Accessibility is also harmed if a client-side router fails to let people know what content on the page has changed. This can leave those reliant on assistive technology to suss out what changes have occurred on the page, which can be an arduous task.

Then there’s our old nemesis: overhead. Some client-side routers are very small, but when you start with React, a compatible router, and possibly even a state management library, you’re accepting that there’s a certain amount of code you can never optimize away—approximately 135 KB in this case. Carefully consider what you’re building and whether a client-side router is worth the tradeoffs you’ll inevitably make. Typically, you’re better off without one.

If you’re concerned about the perceived navigation performance, you could lean on rel=prefetch to speculatively fetch documents on the same origin. This has a dramatic effect on improving perceived loading performance of pages, as the document is immediately available in the cache. Because prefetches are done at a low priority, they’re also less likely to contend with critical resources for bandwidth.
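To give a sense of how little this takes, prefetching a same-origin document such as the writing/ URL in Figure 3 below is a single line of markup in the initial page:

<link rel="prefetch" href="/writing/">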

Figure 3. The HTML for the writing/ URL is prefetched on the initial page. When the writing/ URL is requested by the user, the HTML for it is loaded instantaneously from the browser cache.

The primary drawback of link prefetching is that it can be wasteful. Quicklink, a tiny link prefetching script from Google, mitigates this somewhat by checking whether the current client is on a slow connection—or has data saver mode enabled—and by avoiding prefetching cross-origin links by default.
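If Quicklink fits your project, wiring it up is minimal. Here’s a sketch assuming version 2 of the library and its listen export (consult Quicklink’s documentation for the current API):

import { listen } from "quicklink";

// After the page loads, observe same-origin links as they enter the
// viewport and prefetch them when the browser is idle.
window.addEventListener("load", () => {
  listen();
});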

Service workers are also hugely beneficial to perceived performance for returning users, whether we use client-side routing or not—provided you know the ropes. When we precache routes with a service worker, we get many of the same benefits as link prefetching, but with a much greater degree of control over requests and responses. Whether you think of your site as an “app” or not, adding a service worker to it is perhaps one of the most responsible uses of JavaScript that exists today.
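As one sketch of what route precaching can look like, here’s a service worker using Workbox’s precaching module. The URLs and revision hashes are hypothetical; in practice, a build plugin would generate this list for you:

// sw.js: assumes the service worker itself is run through a bundler,
// since bare module specifiers aren't resolvable by browsers directly.
import { precacheAndRoute } from "workbox-precaching";

// Cache these routes at install time and serve them from the cache
// on subsequent navigations.
precacheAndRoute([
  { url: "/index.html", revision: "d41d8cd9" },
  { url: "/writing/index.html", revision: "8f14e45f" }
]);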

JAVASCRIPT ISN’T THE SOLUTION TO YOUR LAYOUT WOES

If we’re installing a package to solve a layout problem, proceed with caution and ask “what am I trying to accomplish?” CSS is designed to do this job, and requires no abstractions to use effectively. Most layout issues JavaScript packages attempt to solve (box placement, alignment, and sizing, managing text overflow, and even entire layout systems) are solvable with CSS today. Modern layout engines like Flexbox and Grid are supported well enough that we shouldn’t need to start a project with any layout framework. CSS is the framework. When we have feature queries, progressively enhancing layouts to adopt new layout engines is suddenly not so hard.

/* Your mobile-first, non-CSS grid styles go here */

/* The @supports rule below is ignored by browsers that don't
   support CSS grid, _or_ don't support @supports. */
@supports (display: grid) {
  /* Larger screen layout */
  @media (min-width: 40em) {
    /* Your progressively enhanced grid layout styles go here */
  }
}

Using JavaScript solutions for layout and presentation problems is not new. It was something we did in 2009, when we lied to ourselves that every website had to look in IE6 exactly as it did in the more capable browsers of that time. If we’re still developing websites to look the same in every browser in 2019, we should reassess our development goals. There will always be some browser we’ll have to support that can’t do everything those modern, evergreen browsers can. Total visual parity on all platforms is not only a pursuit made in vain, it’s the principal foe of progressive enhancement.

I’m not here to kill JavaScript

Make no mistake, I have no ill will toward JavaScript. It’s given me a career and—if I’m being honest with myself—a source of enjoyment for over a decade. Like any long-term relationship, I learn more about it the more time I spend with it. It’s a mature, feature-rich language that only gets more capable and elegant with every passing year.

Yet, there are times when I feel like JavaScript and I are at odds. I am critical of JavaScript. Or maybe more accurately, I’m critical of how we’ve developed a tendency to view it as a first resort to building for the web. As I pick apart yet another bundle not unlike a tangled ball of Christmas tree lights, it’s become clear that the web is drunk on JavaScript. We reach for it for almost everything, even when the occasion doesn’t call for it. Sometimes I wonder how vicious the hangover will be.

In a series of articles to follow, I’ll be giving more practical advice on stemming the encroaching tide of excessive JavaScript and how we can wrangle it so that what we build for the web is usable—or at least more so—for everyone everywhere. Some of the advice will be preventative. Some will be mitigating “hair of the dog” measures. In either case, the outcomes will hopefully be the same. I believe that we all love the web and want to do right by it, but I want us to think about how to make it more resilient and inclusive for all.






Responsible JavaScript: Part II

You and the rest of the dev team lobbied enthusiastically for a total re-architecture of the company’s aging website. Your pleas were heard by management—even up to the C-suite—who gave the green light. Elated, you and the team started working with the design, copy, and IA teams. Before long, you were banging out new code.

It started out innocently enough with an npm install here and an npm install there. Before you knew it, though, you were installing production dependencies like an undergrad doing keg stands without a care for the morning after.

Then you launched.

Unlike the aftermath of most copious boozings, the agony didn’t start the morning after. Oh, no. It came months later in the ghastly form of low-grade nausea and headache of product owners and middle management wondering why conversions and revenue were both down since the launch. It then hit a fever pitch when the CTO came back from a weekend at the cabin and wondered why the site loaded so slowly on their phone—if it indeed ever loaded at all.

Everyone was happy. Now no one is happy. Welcome to your first JavaScript hangover.

It’s not your fault

When you’re grappling with a vicious hangover, “I told you so” would be a well-deserved, if fight-provoking, rebuke—assuming you could even fight in so sorry a state.

When it comes to JavaScript hangovers, there’s plenty of blame to dole out. Pointing fingers is a waste of time, though. The landscape of the web today demands that we iterate faster than our competitors. This kind of pressure means we’re likely to take advantage of any means available to be as productive as possible. That means we’re more likely—but not necessarily doomed—to build apps with more overhead, and possibly use patterns that can hurt performance and accessibility.

Web development isn't easy. It’s a long slog we rarely get right on the first try. The best part of working on the web, however, is that we don’t have to get it perfect at the start. We can make improvements after the fact, and that’s just what the second installment of this series is here for. Perfection is a long ways off. For now, let’s take the edge off of that JavaScript hangover by improving your site’s, er, scriptuation in the short term.

Round up the usual suspects

It might seem rote, but it’s worth going through the list of basic optimizations. It’s not uncommon for large development teams—particularly those that work across many repositories or don’t use optimized boilerplate—to overlook them.

Shake those trees

First, make sure your toolchain is configured to perform tree shaking. If tree shaking is new to you, I wrote a guide on it last year you can consult. The short of it is that tree shaking is a process in which unused exports in your codebase don’t get packaged up in your production bundles.

Tree shaking is available out of the box with modern bundlers such as webpack, Rollup, or Parcel. Grunt and gulp—which are not bundlers, but rather task runners—won’t do this for you. A task runner doesn’t build a dependency graph the way a bundler does. Rather, it performs discrete tasks on the files you feed it, using any number of plugins. Task runners can be extended with plugins that use bundlers to process JavaScript. If extending task runners in this way is problematic for you, you’ll likely need to manually audit and remove unused code.

For tree shaking to be effective, the following must be true:

  1. Your app logic and the packages you install in your project must be authored as ES6 modules. Tree shaking CommonJS modules isn’t practically possible.
  2. Your bundler must not transform ES6 modules into another module format at build time. If this happens in a toolchain that uses Babel, your @babel/preset-env configuration must specify modules: false to prevent ES6 code from being converted to CommonJS (a minimal configuration sketch follows this list).
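Assuming a JSON-based Babel configuration file such as .babelrc, that requirement boils down to a couple of lines:

{
  "presets": [
    ["@babel/preset-env", {
      "modules": false
    }]
  ]
}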

On the off chance tree shaking isn’t occurring during your build, getting it to work may help. Of course, its effectiveness varies on a case-by-case basis. It also depends on whether the modules you import introduce side effects, which may influence a bundler’s ability to shake unused exports.

Split that code

Chances are good that you’re employing some form of code splitting, but it’s worth re-evaluating how you’re doing it. No matter how you’re splitting code, there are two questions that are always worth asking yourself:

  1. Are you deduplicating common code between entry points?
  2. Are you lazy loading all the functionality you reasonably can with dynamic import()?

These are important because reducing redundant code is essential to performance. Lazy loading functionality also improves performance by lowering the initial JavaScript footprint on a given page. On the redundancy front, using an analysis tool such as Bundle Buddy can help you find out if you have a problem.

Bundle Buddy can examine your webpack compilation statistics and determine how much code is shared between your bundles.

Where lazy loading is concerned, it can be a bit difficult to know where to start looking for opportunities. When I look for opportunities in existing projects, I’ll search for user interaction points throughout the codebase, such as click and keyboard events, and similar candidates. Any code that requires a user interaction to run is a potentially good candidate for dynamic import().
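As a sketch of the pattern, the following lazy loads a hypothetical lightbox module only when the user first interacts (the selector, module path, and openLightbox export are all stand-ins):

document.querySelector(".gallery").addEventListener("click", async (event) => {
  // The module is fetched on the first click; later calls to import()
  // resolve instantly from the module cache.
  const { openLightbox } = await import("./lightbox.js");
  openLightbox(event.target);
});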

Of course, loading scripts on demand brings the possibility that interactivity could be noticeably delayed, as the script necessary for the interaction must be downloaded first. If data usage is not a concern, consider using the rel=prefetch resource hint to load such scripts at a low priority that won’t contend for bandwidth with critical resources. Support for rel=prefetch is good, but nothing will break if it’s unsupported, as such browsers will simply ignore markup they don’t understand.

Externalize third-party hosted code

Ideally, you should self-host as many of your site’s dependencies as possible. If for some reason you must load dependencies from a third party, mark them as externals in your bundler’s configuration. Failing to do so could mean your website’s visitors will download both locally hosted code and the same code from a third party.

Let’s look at a hypothetical situation where this could hurt you: say that your site loads Lodash from a public CDN. You've also installed Lodash in your project for local development. However, if you fail to mark Lodash as external, your production code will end up loading a third party copy of it in addition to the bundled, locally hosted copy.
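In webpack, for instance, the fix is a one-liner. A sketch, assuming the CDN copy exposes Lodash as the global _:

// webpack.config.js
module.exports = {
  // ...
  externals: {
    // Resolve imports of "lodash" to the existing global `_` instead
    // of bundling a second, locally installed copy.
    lodash: "_"
  }
};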

This may seem like common knowledge if you know your way around bundlers, but I’ve seen it get overlooked. It’s worth your time to check twice.

If you aren’t convinced to self-host your third-party dependencies, then consider adding dns-prefetch, preconnect, or possibly even preload hints for them. Doing so can lower your site’s Time to Interactive and—if JavaScript is critical to rendering content—your site’s Speed Index.
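Each of those hints is a single line of markup. A sketch, with cdn.example.com standing in for whichever third-party host you depend on:

<!-- Resolve the host's DNS early... -->
<link rel="dns-prefetch" href="https://cdn.example.com">
<!-- ...or go further and open a connection ahead of time. -->
<link rel="preconnect" href="https://cdn.example.com">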

Smaller alternatives for less overhead

Userland JavaScript is like an obscenely massive candy store, and we as developers are awed by the sheer amount of open source offerings. Frameworks and libraries allow us to extend our applications to quickly do all sorts of stuff that would otherwise take loads of time and effort.

While I personally prefer to aggressively minimize the use of client-side frameworks and libraries in my projects, their value is compelling. Yet, we do have a responsibility to be a bit hawkish when it comes to what we install. When we’ve already built and shipped something that depends on a slew of installed code to run, we’ve accepted a baseline cost that only the maintainers of that code can practically address. Right?

Maybe, but then again, maybe not. It depends on the dependencies used. For instance, React is extremely popular, but Preact is an ultra-small alternative that largely shares the same API and retains compatibility with many React add-ons. Luxon and date-fns are much more compact alternatives to moment.js, which is not exactly tiny.

Libraries such as Lodash offer many useful methods. Yet, some of them are easily replaceable with native ES6. Lodash’s compact method, for example, is replaceable with the filter array method. Many more can be replaced without much effort, and without the need for pulling in a large utility library.
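To illustrate with compact, the native equivalent is a one-liner:

// Lodash: _.compact([0, 1, false, 2, "", 3]) returns [1, 2, 3].
// Native ES6 does the same by filtering out falsey values:
const compacted = [0, 1, false, 2, "", 3].filter(Boolean);
// => [1, 2, 3]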

Whatever your preferred tools are, the idea is the same: do some research to see if there are smaller alternatives, or if native language features can do the trick. You may be surprised at how little effort it may take you to seriously reduce your app’s overhead.

Differentially serve your scripts

There’s a good chance you’re using Babel in your toolchain to transform your ES6 source into code that can run on older browsers. Does this mean we’re doomed to serve giant bundles even to browsers that don’t need them, until the older browsers disappear altogether? Of course not! Differential serving helps us get around this by generating two different builds of your ES6 source:

  • Bundle one, which contains all the transforms and polyfills required for your site to work on older browsers. You’re probably already serving this bundle right now.
  • Bundle two, which contains little to none of the transforms and polyfills because it targets modern browsers. This is the bundle you’re probably not serving—at least not yet.

Achieving this is a bit involved. I’ve written a guide on one way you can do it, so there’s no need for a deep dive here. The long and short of it is that you can modify your build configuration to generate an additional but smaller version of your site’s JavaScript code, and serve it only to modern browsers. The best part is that these are savings you can achieve without sacrificing any features or functionality you already offer. Depending on your application code, the savings could be quite significant.

A webpack-bundle-analyzer analysis of a project's legacy bundle (left) versus one for a modern bundle (right).

The simplest pattern for serving these bundles to their respective platforms is brief. It also works a treat in modern browsers:

<!-- Modern browsers load this file: -->
<script type="module" src="/js/app.mjs"></script>
<!-- Legacy browsers load this file: -->
<script defer nomodule src="/js/app.js"></script>

Unfortunately, there’s a caveat with this pattern: legacy browsers like IE 11—and even relatively modern ones such as Edge versions 15 through 18—will download both bundles. If this is an acceptable trade-off for you, then worry no further.

On the other hand, you'll need a workaround if you’re concerned about the performance implications of older browsers downloading both sets of bundles. Here’s one potential solution that uses script injection (instead of the script tags above) to avoid double downloads on affected browsers:

var scriptEl = document.createElement("script");

if ("noModule" in scriptEl) {
  // Set up modern script
  scriptEl.src = "/js/app.mjs";
  scriptEl.type = "module";
} else {
  // Set up legacy script
  scriptEl.src = "/js/app.js";
  scriptEl.defer = true; // type="module" defers by default, so set it here.
}

// Inject!
document.body.appendChild(scriptEl);

This script infers that if a browser supports the nomodule attribute in the script element, it understands type="module". This ensures that legacy browsers only get legacy scripts and modern browsers only get modern ones. Be warned, though, that dynamically injected scripts load asynchronously by default, so set the async attribute to false if dependency order is crucial.

Transpile less

I’m not here to trash Babel. It’s indispensable, but lordy, it adds a lot of extra stuff without your ever knowing. It pays to peek under the hood to see what it’s up to. Some minor changes in your coding habits can have a positive impact on what Babel spits out.

https://twitter.com/_developit/status/1110229993999777793

To wit: default parameters are a very handy ES6 feature you probably already use:

function logger(message, level = "log") {
  console[level](message);
}

The thing to pay attention to here is the level parameter, which has a default of “log.” This means if we want to invoke console.log with this wrapper function, we don’t need to specify level. Great, right? Except when Babel transforms this function, the output looks like this:

function logger(message) {
  var level = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : "log";

  console[level](message);
}

This is an example of how, despite our best intentions, developer conveniences can backfire. What was a handful of bytes in our source has been transformed into something much larger in our production code. Uglification can’t do much about it either, as arguments can’t be reduced. Oh, and if you think rest parameters might be a worthy antidote, Babel’s transforms for them are even bulkier:

// Source
function logger(...args) {
  const [level, message] = args;

  console[level](message);
}

// Babel output
function logger() {
  for (var _len = arguments.length, args = new Array(_len), _key = 0; _key < _len; _key++) {
    args[_key] = arguments[_key];
  }

  const level = args[0],
        message = args[1];
  console[level](message);
}

Worse yet, Babel transforms this code even for projects with a @babel/preset-env configuration targeting modern browsers, meaning the modern bundles in your differentially served JavaScript will be affected too! You could use loose transforms to soften the blow—and that’s a fine idea, as they’re often quite a bit smaller than their more spec-compliant counterparts—but enabling loose transforms can cause issues if you remove Babel from your build pipeline later on.

Regardless of whether you decide to enable loose transforms, here’s one way to cut the cruft of transpiled default parameters:

// Babel won't touch this
function logger(message, level) {
  console[level || "log"](message);
}

Of course, default parameters aren’t the only feature to be wary of. For example, spread syntax gets transformed, as do arrow functions and a whole host of other stuff.

If you don’t want to avoid these features altogether, you have a couple ways of reducing their impact:

  1. If you’re authoring a library, consider using @babel/runtime in concert with @babel/plugin-transform-runtime to deduplicate the helper functions Babel puts into your code.
  2. For polyfilled features in apps, you can include them selectively with @babel/polyfill via @babel/preset-env’s useBuiltIns: "usage" option (a configuration sketch follows this list).
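For the second item above, the relevant change to @babel/preset-env is similarly small. A sketch; your browser targets and other options will differ:

{
  "presets": [
    ["@babel/preset-env", {
      "modules": false,
      "useBuiltIns": "usage"
    }]
  ]
}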

This is solely my opinion, but I believe the best choice is to avoid transpilation altogether in bundles generated for modern browsers. That’s not always possible, especially if you use JSX, which must be transformed for all browsers, or if you’re using bleeding edge language features that aren’t widely supported. In the latter case, it might be worth asking if those features are really necessary to deliver a good user experience (they rarely are). If you arrive at the conclusion that Babel must be a part of your toolchain, then it’s worth peeking under the hood from time to time to catch suboptimal stuff Babel might be doing that you can improve on.

Improvement is not a race

As you massage your temples wondering when this horrid JavaScript hangover is going to lift, understand that it’s precisely when we rush to get something out there as fast as we possibly can that the user experience can suffer. As the web development community obsesses on iterating faster in the name of competition, it’s worth your time to slow down a little bit. You’ll find that by doing so, you may not be iterating as fast as your competitors, but your product will be faster than theirs.

As you take these suggestions and apply them to your codebase, know that progress doesn’t spontaneously happen overnight. Web development is a job. The truly impactful work is done when we’re thoughtful and dedicated to the craft for the long haul. Focus on steady improvements. Measure, test, repeat, and your site’s user experience will improve, and you’ll get faster bit by bit over time.

Special thanks to Jason Miller for tech editing this piece. Jason is the creator and one of the many maintainers of Preact, a vastly smaller alternative to React with the same API. If you use Preact, please consider supporting Preact through Open Collective.





Introducing a JavaScript library for exploring Scratch projects: sb-util

Introduction We’re excited to introduce sb-util, a new JavaScript library that makes it easy to query Scratch projects via .sb3 files. This npm library allows developers (or even teachers and students) to parse and introspect Scratch projects for a range of purposes, from data visualization to custom tooling. Previously, working with Scratch project files required […]





Wet cells, dry cells, fuel cells [videorecording] : an introduction / producer/director, Bernard Motut ; script, Bernard Motut, John Davis





PDR for nonprescription drugs, dietary supplements, and herbs





Culpeper's complete herbal : consisting of a comprehensive description of nearly all herbs with their medicinal properties and directions for compounding the medicines extracted from them

Culpeper, Nicholas, 1616-1654





Parsing an RSS News Feed with a Bash Script

I am involved in several free software projects, including one or two where I maintain the website. For one of those projects, we currently are updating the website. Ours is probably similar to other free software projects. We use a hosting service for several key services, including news, but we run our website on a web server that we own. In our case, we run most of our project on SourceForge and run the website on a third-party service, so the news and website are on different systems.

Not surprisingly, our project uses an RSS feed to pull news items from SourceForge to display on the project website.

complete article





Kill the Newsletter Converts Newsletter Subscriptions Into RSS Feeds

Newsletters are not all bad, but getting them in your email can be a bit disruptive. RSS is a good place for them because you are used to seeing a lot of content in that kind of feed, and you can look back on it whenever you want. Kill the Newsletter creates a fake email address for you, then creates an RSS feed for any newsletter you send to that email address. It is a super easy-to-use system that works really well for anyone still holding onto RSS.

complete article





New Google News Drops RSS Feed Subscription Buttons

With the new Google News, they did not just drop the standout tag and editors' picks; it seems the direct method of subscribing via RSS to Google News and to Google News keyword searches is gone as well. There are still ways to subscribe, but the buttons seem to be gone in the new design.

complete article





Why podcasting companies are getting more into scripted shows

Podcasting is going Hollywood.

Over the past year, HowStuffWorks, Gimlet and Wondery have stepped up their search for content that can be adapted into film, television, or even books as production studios snap up podcasts for high-profile films and cable TV series with A-list talent like Julia Roberts, Connie Britton and Eric Bana.

Gimlet Media, the startup that began selling film and TV rights to its shows last year, has already exceeded every 2018 goal it had set for Gimlet Pictures, said its head, Chris Giliberti.

complete article





The Definitive Guide to AdonisJs: Building Node.js Applications with JavaScript / by Christopher Pitt

Online Resource





Real-time quantification of fusion transcripts with ligase chain reaction by direct ligation of adjacent DNA probes at fusion junction

Analyst, 2020, Advance Article
DOI: 10.1039/D0AN00163E, Paper
Fengxia Su, Jianing Ji, Pengbo Zhang, Fangfang Wang, Zhengping Li
A fusion transcript assay is developed based on direct ligation and ligase chain reaction.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry





Antibody free ELISA-like assay for the detection of transcription factors based on double-stranded DNA thermostability

Analyst, 2020, 145,3339-3344
DOI: 10.1039/C9AN02631B, Paper
Yue Sun, Zhiyan Li, Choiwan Lau, Jianzhong Lu
Transcription factors (TFs) play critical roles in gene expression regulation and disease development. Herein we report a chemiluminescence assay for the detection of transcription factor based on double-stranded DNA thermostability.
The content of this RSS Feed (c) The Royal Society of Chemistry





[ASAP] Engineering Prokaryotic Transcriptional Activator XylR as a Xylose-Inducible Biosensor for Transcription Activation in Yeast

ACS Synthetic Biology
DOI: 10.1021/acssynbio.0c00122





[ASAP] Transcript Barcoding Illuminates the Expression Level of Synthetic Constructs in E. coli Nissle Residing in the Mammalian Gut

ACS Synthetic Biology
DOI: 10.1021/acssynbio.0c00040





Caught in a whirlwind: a cultural history of Ottoman Baghdad as reflected in its illustrated manuscripts / by Melis Taner

Rotch Library - ND3239.I72 B338 2020





Learn HTML5 and Javascript for iOS / Scott Preston

Preston, Scott, 1969-





Exam ref 70-480 : programming in HTML5 with Javascript and CSS3 / Rick Delorme

Delorme, Rick, author





The Rumi prescription: how an ancient mystic poet changed my modern manic life / Melody Moezzi

Online Resource





The mark of theory: inscriptive figures, poststructuralist prehistories / Andrea Bachner

Hayden Library - BD161.B33 2018





Web Directions Code ’20 session spotlight–JavaScript debugging the hard way

JavaScript debugging the hard way Marcin Szczepanski, Principal Developer Atlassian Error on line 1, column 6532112 of bundle.js? Out of memory error trying to load a CPU profile into the Chrome debugger? Two minutes to wait and see if a change you made fixed a bug? While upgrading our complex web application from Webpack […]

The post Web Directions Code ’20 session spotlight–JavaScript debugging the hard way appeared first on Web Directions.





Surface coating and particle size are main factors explaining the transcriptome-wide responses of the earthworm Lumbricus rubellus to silver nanoparticles

Environ. Sci.: Nano, 2020, 7,1179-1193
DOI: 10.1039/C9EN01144G, Paper
Dick Roelofs, Sunday Makama, Tjalf E. de Boer, Riet Vooijs, Cornelis A. M. van Gestel, Nico W. van den Brink
We present transcriptome responses of earthworms exposed to differently sized and coated silver nanoparticles (AgNPs), which are used in important industrial and biomedical applications.
The content of this RSS Feed (c) The Royal Society of Chemistry





ACSM's resource manual for guidelines for exercise testing and prescription / American College of Sports Medicine ; senior editor, David P. Swain ; section editors, Clinton A. Brawner ... [et al.]





ACSM's guidelines for exercise testing and prescription / senior editor, Deborah Riebe, PhD, FACSM, ACSM EP-C, Associate Dean, College of Health Sciences, Professor, Department of Kinesiology, University of Rhode Island, Kingston, Rhode Island ; assoc

American College of Sports Medicine, author, issuing body





The joy of pixeling and building pixel tools with HTML5 canvas and JavaScript

Some people knit, others do puzzles, and yet others find calm by colouring. Me, I love pixeling. My computer career started with a super basic computer. It didn’t even have a way to store what I programmed. So, every day, I would write myself a small program that allows me to paint on the screen […]





JavaScript Picture-in-Picture API

As a huge fan of media on the web, I’m always excited about enhancements to how we can control our media. Maybe I get excited about simple things like the <video> tag and its associated elements and attributes because media on the web started with custom codecs, browser extensions, and Flash. The latest awesome media […]

The post JavaScript Picture-in-Picture API appeared first on David Walsh Blog.





Beyond the script : take 3 : drama in the English and literacy classroom / Robyn Ewing and Jennifer Simons with Margery Hertzberg and Victoria Campbell

Ewing, Robyn (Robyn Ann), 1955- author





Real-Time Search in JavaScript

What I meant was scanning the DOM of a page for text equivalents and showing the actual parts of the page, as well as hiding the irrelevant ones. I came up with the technique when I was designing Readerrr’s FAQ page. Take a look at the example:

I have also implemented the solution here on my blog.

How it works

It’s all simple. Let’s take the FAQ page as an example. Here’s the typical markup:

<h1>FAQ</h1>
<div class="faq">
	<input type="search" value="" placeholder="Type some keywords (e.g. giza, babylon, colossus)" />
	<ul>
		<li id="faq-1">
			<h2><a href="#faq-1">Great Pyramid of Giza</a></h2>
			<div>
				<p>The Great Pyramid of Giza <!-- ... --></p>
				<!-- ... -->
			</div>
		</li>
		<li id="faq-2">
			<h2><a href="#faq-2">Hanging Gardens of Babylon</a></h2>
			<div>
				<p>The Hanging Gardens of Babylon <!-- ... --></p>
				<!-- ... -->
			</div>
		</li>
		<!-- ... -->
	</ul>
	<div class="faq__notfound"><p>No matches were found.</p></div>
</div>

I wrote a tiny piece of JavaScript code to handle the interaction, and this is how it works (a rough sketch follows the list):

  1. When the page loads, the script indexes the content of all li’s into the browser’s memory.
  2. When a user types text into the search field, the script searches for matches among the indexed data and hides the li’s where none were found. If nothing is found, a message is shown.
  3. The script highlights the matching text by replacing phrases; for example, babylon becomes <span class="highlight">babylon</span>.
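Here’s a rough, dependency-free sketch of steps 1 and 2 (not the exact script from the demo):

var items = [].slice.call( document.querySelectorAll( '.faq > ul > li' ) );
var index = items.map( function( li )
{
	return li.textContent.toLowerCase();
});
var notFound = document.querySelector( '.faq__notfound' );

document.querySelector( '.faq input[type="search"]' ).addEventListener( 'input', function( e )
{
	var query = e.target.value.trim().toLowerCase();
	var matches = 0;

	items.forEach( function( li, i )
	{
		var isMatch = index[ i ].indexOf( query ) !== -1;
		li.style.display = isMatch ? '' : 'none';
		if ( isMatch ) matches++;
	});

	// Show the "No matches were found." message only when nothing matches.
	notFound.style.display = matches ? 'none' : 'block';
});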

Now, try it yourself:

Demo

Taking it further

Since I chose an FAQ page as the example, there are some issues to deal with.

Toggling the answers

It is a good practice to hide the answers by default and show them only when the user needs them, that is to say, when they click the question:

.faq > ul > li:not( .is-active ) > div
{
	display: none;
}

$( document ).on( 'click', '.faq h2 a', function( e )
{
	e.preventDefault();
	$( this ).parents( 'li' ).toggleClass( 'is-active' );
});

In the CSS part I use the child combinator > because I don’t want to select, and therefore hide, the inner elements of an answer, which may contain lists and div’s.

What if JavaScript is disabled

The user won’t be able to see the answers. Unless you show them by default or develop a JavaScript-less solution. To do this, take a closer look at these fragments of the markup:

  • <li id="faq-1">
  • <a href="#faq-1">

The usage of fragment identifiers enables us to take advantage of CSS’s :target pseudo-class:

.faq > ul > li:not( :target ) > div
{
	display: none;
}

Furthermore, the real-time search is not possible either. But you can either provide a server-side search option or hide the search field so as not to confuse the user:

<html class="no-js">
	<head>
		<!-- remove this if you use Modernizr -->
		<script>(function(e,t,n){var r=e.querySelectorAll("html")[0];r.className=r.className.replace(/(^|\s)no-js(\s|$)/,"$1$2")})(document,window,0);</script>
	</head>
</html>

I added the class name no-js to the <html> element. The <script> part removes that class name. If JavaScript support is disabled in a browser, the class name won’t be removed; therefore:

.no-js .faq input
{
	display: none;
}

The no-js class is a very handy technique; you can use it site-wide.

Improving UX

If there is only one list item that matches the user’s query, it is a good practice to automatically show the content of that item, without requiring a click on the title. To see what I mean, head over to the GIF at the beginning of the post.

Hidden keywords

Here on my blog I have a filterable list of blog post titles only. Each post has some related keywords assigned. So, during the search, how do I make an item discoverable even if its title does not contain a particular keyword? For example, how can I make the entry “Real-Time Search in JavaScript” visible if a user entered “jquery”? Yes, exactly: by adding keywords and hiding them with CSS:

<li>
	<h2><a href="/real-time-search-in-javascript">Real-Time Search in JavaScript</a></h2>
	<p class="hidden-keywords" aria-hidden="true">jquery filter input html css</p>
</li>

.hidden-keywords
{
	display: none;
}

A simple trick but not always that obvious.


You will find two versions of the code in the source of the demo: one without dependencies and one that is jQuery-dependent. These versions are also divided into three groups of code so you can adapt only what your project needs.

Demo





How to Add a CSS and JavaScript Sticky Menu to Your Site

See the two ways to add a sticky horizontal menu to your site, plus 7 beautiful examples of this pattern out in the wild.





Smooth Scrolling HTML Bookmarks using JavaScript

See how to use native JavaScript to create smooth scrolling HTML bookmark links inside the page, and for those that need legacy browser support, using jQuery instead.





Flex Cards Accordion script

jQuery script that uses CSS flexbox to create cards that, when clicked on, expand to show a copious amount of information in a compact, manageable manner.





[ASAP] Most Influential Physicochemical and In Vitro Assay Descriptors for Hepatotoxicity and Nephrotoxicity Prediction

Chemical Research in Toxicology
DOI: 10.1021/acs.chemrestox.0c00040





Dynamic description technology of fractured vuggy carbonate gas reservoirs / Tongwen Jiang, Hedong Sun, Xingliang Deng

Online Resource





Patron Services: CORRECTION - Orientation to the Manuscript Division

Join the Manuscript Division for a focused research orientation to resources located in the Manuscript Reading Room. Learn how to find materials for your research projects and how to utilize the Manuscript Reading Room’s resources in-person and remotely. The session includes general information on conducting research in the Manuscript Reading Room and time for Q&A about research strategies or steps on specific research projects. All researchers are welcome.

 Date: Saturday, November 16, 2019, 10:00 AM – 11:30 AM EST

 Location: Library of Congress Thomas Jefferson Building, Room LJ-139B

 Click here for more information and to register.

Request ADA accommodations five days in advance at (202) 707-6362 or ADA@loc.gov.





Patron Services: Civil Rights in the 20th Century: Personal Papers and Organizational Records in the Manuscript Division

In this session, Manuscript Reference Librarian Edith Sandler will demonstrate how to search for and access personal papers and organizational records documenting the history of the civil rights movement in the 20th century. Time will be included at the end of the session for Q&A about research strategies or steps on specific research projects. All researchers are welcome.

Please note that the maximum class size is 30 researchers unless otherwise indicated.

Individuals requiring accommodations for any of these events are requested to submit a request at least five business days in advance by contacting (202) 707-6362 or ADA@loc.gov.

Patrons are encouraged to arrive 15 minutes prior to the orientation. Seating is available on a first-come basis. Registration does not guarantee entry after the orientation start time.

For more information, please visit: https://www.loc.gov/rr/main/satorient/

Date: Saturday, January 25, 10:00 AM – 11:30 AM EST

Location: Library of Congress Thomas Jefferson Building, Room LJ-139B

Click here for more information and to register.

Request ADA accommodations five days in advance at (202) 707-6362 or ADA@loc.gov.





Patron Services: Saturday Research Orientation: Manuscript Division

Join Manuscript Reference Librarian Lara Szypszak for a focused orientation to resources located in the Manuscript Reading Room. This session will share the letters of love from members of congress to their spouses, writers to their paramours, artists to their muses, and more. Celebrate Valentine’s Day (better late than never) with notes of romance found in the Manuscript Division’s collections, and also learn how to find materials for your research projects utilizing the Manuscript Reading Room’s resources in-person and remotely. The session includes general information on conducting research in the Manuscript Reading Room and time for Q&A about research strategies or steps on specific research projects. All researchers are welcome. See the following link for Maps and Floor Plans in the Jefferson Building: https://www.loc.gov/visit/maps-and-floor-plans/thomas-jefferson-building/first-floor/

Date: Saturday, February 15, 10:00 am - 11:30 am EST

Location: Library of Congress Jefferson Building, Room 139B

Click here for more information and to register.

Request ADA accommodations five business days in advance at (202) 707-6362 or email ADA@loc.gov.





Making games: with JavaScript / Christopher Pitt

Online Resource





Building a 2D game physics engine: using HTML5 and JavaScript / Michael Tanaya, Huaming Chen, Jebediah Pavleas, Kelvin Sung

Online Resource





The advanced game developer's toolkit: create amazing Web-based games with JavaScript and HTML5 / Rex van der Spuy

Online Resource





Introducing JavaScript game development: build a 2D game from the ground up / Graeme Stuart

Online Resource





Let's Build a Multiplayer Phaser Game: With TypeScript, Socket.IO, and Phaser.

Online Resource





Phonetics: Transcription, Production, Acoustics, and Perception, 2nd Edition

An accessible yet in-depth introductory textbook on the basic concepts of phonetics, fully updated and revised

This broad, interdisciplinary textbook investigates how speech can be written down, how speech is produced, its acoustic characteristics, and how listeners perceive speech. Phonetics: Transcription, Production, Acoustics, and Perception introduces readers to the fundamental concepts of the discipline, providing coverage of all four areas of



Read More...





Essential ASP.NET Web Forms Development: Full Stack Programming with C#, SQL, Ajax, and JavaScript / Beasley, Robert

Online Resource





American epic / a production of BBC Arena, Lo-Max Films Ltd., Wildwood Enterprises, and Thirteen Productions LLC for WNET ; directed by Bernard MacMahon ; story by Bernard MacMahon & Allison McGourty & Duke Erikson ; telescript by William Morgan

Browsery DVD ML3790.A44 2017





Responsible JavaScript: Part III

You’ve done everything you thought was possible to address your website’s JavaScript problem. You relied on the web platform where you could. You sidestepped Babel and found smaller framework alternatives. You whittled your application code down to its most streamlined form possible. Yet, things are just not fast enough. When websites fail to perform the way we as designers and developers expect them to, we inevitably turn on ourselves:

“What are we failing to do?” “What can we do with the code we have written?” “Which parts of our architecture are failing us?”

These are valid inquiries, as a fair share of performance woes do originate from our own code. Yet, assigning blame solely to ourselves blinds us to the unvarnished truth that a sizable onslaught of our performance problems comes from the outside.

When the third wheel crashes the party

Convenience always has a price, and the web is wracked by our collective preference for it. JavaScript, in particular, is employed in a way that suggests a rapidly increasing tendency to outsource whatever it is that we (the first party) don’t want to do. At times, this is a necessary decision; it makes perfect financial and operational sense in many situations.

But make no mistake, third-party JavaScript is never cheap. It’s a devil’s bargain where vendors seduce you with solutions to your problem, yet conveniently fail to remind you that you have little to no control over the side effects that solution introduces. If a third-party provider adds features to their product, you bear the brunt. If they change their infrastructure, you will feel the effects of it. Those who use your site will become frustrated, and they aren’t going to bother grappling with an intolerable user experience. You can mitigate some of the symptoms of third parties, but you can’t cure the ailment unless you remove the solutions altogether—and that’s not always practical or possible.

In this installment of Responsible JavaScript, we’ll take a slightly less technical approach than in the previous installment. We are going to talk more about the human side of third parties. Then, we’ll go down some of the technical avenues for how you might go about tackling the problem.

Hindered by convenience

When we talk about the sorry state of the web today, some of us are quick to point out the role of developer convenience in contributing to the problem. While I share the view that developer convenience has a tendency to harm the user experience, it’s not the only kind of convenience that can turn a website into a sluggish, janky mess.

Operational conveniences can become precursors to a very thorny sort of technical debt. These conveniences are what we reach for when we can’t solve a pervasive problem on our own. They represent third-party solutions that address problems in the absence of architectural flexibility and/or adequate development resources.

Whenever an inconvenience arises, that is the time to have the discussion around how to tackle it in a way that’s comprehensive. So let’s talk about what it looks like to tackle that sort of scenario from a more human angle.

The problem is pain

The reason third parties come into play in the first place is pain. When a decision maker in an organization has felt enough pain around a certain problem, they’re going to do a very human thing, which is to find the fastest way to make that pain go away.

Markets will always find ways to address these pain points, even if the way they do so isn’t sustainable or even remotely helpful. Web accessibility overlays—third-party scripts that purport to automatically fix accessibility issues—are among the worst offenders. First, you fork over your money for a fix that doesn’t fix anything. Then you pay a wholly different sort of price when that “fix” harms the usability of your website. This is not a screed to discredit the usefulness of the tools some third-party vendors provide, but to illustrate how the adoption of third-party solutions happens, even those that are objectively awful.

A Chrome performance trace of a long task kicked off by a third party’s web accessibility overlay script. The task occupies the main thread for roughly 600 ms on a 2017 Retina MacBook.

So when a vendor rolls up and promises to solve the very painful problem we’re having, there’s a good chance someone is going to nibble. If that someone is high enough in the hierarchy, they’ll exert downward pressure on others to buy in—if not circumvent them entirely in the decision-making process. Conversely, adoption of a third-party solution can also occur when those in the trenches are under pressure and lack sufficient resources to create the necessary features themselves.

Whatever the catalyst, it pays to gather your colleagues and collectively form a plan for navigating and mitigating the problems you’re facing.

Create a mitigation plan

Once people in an organization have latched onto a third-party solution, however ill-advised, the difficulty you’ll encounter in forcing a course change will depend on how urgent a need that solution serves. In fact, you shouldn’t try to convince proponents of the solution that their decision was wrong. Such efforts almost always backfire and can make people feel attacked and more resistant to what you’re telling them. Even worse, those efforts could create acrimony where people stop listening to each other completely, and that is a breeding ground for far worse problems to develop.

Grouse and commiserate amongst your peers if you must—as I myself have often done—but put your grievances aside and come up with a mitigation plan to guide your colleagues toward better outcomes. The nooks and crannies of your specific approach will depend on the third parties themselves and the structure of the organization, but the bones of it could look like the following series of questions.

What problem does this solution address?

There’s a reason why a third-party solution was selected, and this question will help you suss out whether the rationale for its adoption is sound. Remember, there are times decisions are made when all the necessary people are not in the room. You might be in a position where you have to react to the aftermath of that decision, but the answer to this question will lead you to a natural follow-up.

How long do we intend to use the solution?

This question will help you identify the solution’s shelf life. Was it introduced as a bandage, with the intent to remove it once the underlying problem has been addressed, such as in the case of an accessibility overlay? Or is the need more long-term, such as the data provided by an A/B testing suite? The other possibility is that the solution can never be effectively removed because it serves a crucial purpose, as in the case of analytics scripts. It’s like throwing a mattress in a swimming pool: it’s easy to throw in, but nigh impossible to drag back out.

In any case, you can’t know if a third-party script is here to stay if you don’t ask. Indeed, if you find out the solution is temporary, you can form a plan to eventually remove it from your site once the underlying problem it addresses has been resolved.

Who’s the point of contact if issues arise?

When a third-party solution is put into place, someone must be the point of contact for when—not if—issues arise.

I’ve seen what happens (far too often) when a third-party script gets out of control. For example, when a tag manager or an A/B testing framework’s JavaScript grows slowly and insidiously because marketers aren’t cleaning out old tags or completed A/B tests. It’s for precisely these reasons that responsibility needs to be attached to a specific person in your organization for third-party solutions currently in use on your site. What that responsibility entails will differ in every situation, but could include:

  • periodic monitoring of the third-party script’s footprint;
  • maintenance to ensure the third-party script doesn’t grow out of control;
  • occasional meetings to discuss the future of that vendor’s relationship with your organization;
  • identification of overlaps in functionality between multiple third parties, and whether potential redundancies can be removed;
  • and ongoing research, especially to identify speedier alternatives that may act as better replacements for slow third-party scripts.

The idea of responsibility in this context should never be an onerous, draconian obligation you yoke your teammates with, but rather an exercise in encouraging mindfulness in your colleagues. Because without mindfulness, a third-party script’s ill effects on your website will be overlooked until it becomes a grumbling ogre in the room that can no longer be ignored. Assigning responsibility for third parties can help to prevent that from happening.

Ensuring responsible usage of third-party solutions

If you can put together a mitigation plan and get everyone on board, the work of ensuring the responsible use of third-party solutions can begin. Luckily for you, the actual technical work will be easier than trying to wrangle people. So if you’ve made it this far, all it will take to get results is time and persistence.

Load only what’s necessary

It may seem obvious, but load only what’s necessary. Judging by the amount of unused first-party JavaScript I see loaded—let alone third-party JavaScript—it’s clearly a problem. It’s like trying to clean your house by stuffing clutter into the closets. Regardless of whether they’re actually needed, it’s not uncommon for third-party scripts to be loaded on every single page, so refer to your point of contact to figure out which pages need which third-party scripts.

As an example, one of my past clients used a popular third-party tool across multiple brand sites to get a list of retailers for a given product. It demonstrated clear value, but that script only needed to be on a site’s product detail page. In reality, it was frequently loaded on every page. Culling this script from pages where it didn’t belong significantly boosted performance for non-product pages, which ostensibly reduced the friction on the conversion path.
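In code, that kind of gatekeeping can be as simple as the following sketch, where the body class and the script URL are hypothetical placeholders:

// Load the retailer locator only on product detail pages.
// The "product-detail" class and the script URL are made up for illustration.
if (document.body.classList.contains("product-detail")) {
  const scriptEl = document.createElement("script");
  scriptEl.defer = true;
  scriptEl.src = "https://cdn.example.com/retailer-locator.js";
  document.body.append(scriptEl);
}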

Figuring out which pages need which third-party scripts requires you to do some decidedly untechnical work. You’ll actually have to get up from your desk and talk to the person who has been assigned responsibility for the third-party solution you’re grappling with. This is very difficult work for me, but it’s rewarding when good-faith collaboration happens, and good outcomes are realized as a result.

Self-host your third-party scripts

This advice isn’t a secret by any stretch. I even touched on it in the previous installment of this series, but it needs to be shouted from the rooftops at every opportunity: you should self-host as many third-party resources as possible. Whether this is feasible depends on the third-party script in question.

Is it some framework you’re grabbing from Google’s hosted libraries, cdnjs, or other similar provider? Self-host that sucker right now.
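The swap itself is usually trivial. Here's a minimal sketch using lodash from cdnjs as a stand-in example; the local path is hypothetical:

<!-- before: fetched from a third-party CDN -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.11/lodash.min.js"></script>

<!-- after: the same file copied to, and served from, your own origin -->
<script src="/assets/js/lodash.min.js"></script>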

Casper found a way to self-host their Optimizely script and significantly reduced their start render time for their trouble. It really drives home the point that the mere existence of third-party resources on other servers is one of the worst performance bottlenecks we encounter.

If you’re looking to self-host an analytics solution or a similar sort of script, there’s a higher level of difficulty to contend with. You may find that some third-party scripts simply can’t be self-hosted, but that doesn’t mean it isn’t worth the trouble to find out. If you find that self-hosting isn’t an option for a third-party script, don’t fret. There are other mitigations you can try.

Mask latency of cross-origin connections

If you can’t self-host your third-party scripts, the next best thing is to preconnect to servers that host them. WebPageTest’s Connection View does a fantastic job of showing you which servers your site gathers resources from, as well as the latency involved in establishing connections to them.

WebPageTest’s Connection View shows all the different servers a page requests resources from during load.

Preconnections are effective because they establish connections to third-party servers before the browser would otherwise discover them. Parsing HTML takes time, and parsers are often blocked by stylesheets and other scripts. Wherever you can’t self-host third-party scripts, preconnections make perfect sense.
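As a sketch, a preconnect hint is a one-liner in your document head. The host below is just an example, and the crossorigin attribute is only needed when the resource will be fetched in CORS mode (as scripts with a crossorigin attribute are):

<link rel="preconnect" href="https://connect.facebook.net" crossorigin>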

Maybe don’t preload third-party scripts

Preloading resources is one of those things that sounds fantastic at first—until you consider its potential to backfire, as Andy Davies points out. If you’re unfamiliar with preloading, it’s similar to preconnecting but goes a step further by instructing the browser to fetch a particular resource far sooner than it ordinarily would.

The drawback of preloading is that while it’s great for ensuring a resource gets loaded as soon as possible, it changes the discovery order of that resource. Whenever we do this, we’re implicitly saying that other resources are less important—including resources crucial to rendering or even core functionality.

It’s probably a safe bet that most of your third-party code is not as crucial to the functionality of your site as your own code. That said, if you must preload a third-party resource, ensure you’re only doing so for third-party scripts that are critical to page rendering.
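For the rare case where that's warranted, a preload hint looks like the following sketch, with a hypothetical URL:

<link rel="preload" href="https://third-party.example.com/critical-widget.js" as="script">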

If you do find yourself in a position where your site’s initial rendering depends on a third-party script, refer to your mitigation plan to see what you can do to eliminate or ameliorate your dependence on it. Depending on a third party for core functionality is never a good position to be in, as you’re relinquishing a lot of control to others who might not have your best interests in mind.

Lazy load non-essential third-party scripts

The best request is no request. If you have a third-party script that doesn’t need to be loaded right away, consider lazy loading it with an Intersection Observer. Here’s what it might look like to lazy load a Facebook Like button when it’s scrolled into the viewport:


let loadedFbScript = false;

const intersectionListener = new IntersectionObserver((entries, observer) => {
  entries.forEach(entry => {
    // isIntersecting isn't available in some older implementations,
    // so fall back to checking intersectionRatio
    if ((entry.isIntersecting || entry.intersectionRatio > 0) && !loadedFbScript) {
      // set the flag immediately, so rapid successive callbacks
      // can't inject the script twice
      loadedFbScript = true;

      const scriptEl = document.createElement("script");
      scriptEl.defer = true;
      scriptEl.crossOrigin = "anonymous";
      scriptEl.src = "https://connect.facebook.net/en_US/sdk.js#xfbml=1&version=v3.0";

      document.body.append(scriptEl);

      // once the script has been requested, there's no need to keep observing
      observer.unobserve(entry.target);
    }
  });
});

intersectionListener.observe(document.querySelector(".fb-like"));

In the above snippet, we first set a variable to track whether we’ve requested the Facebook SDK JavaScript. After that, an IntersectionObserver is created that checks whether the observed element is in the viewport and whether the SDK has already been requested. If it hasn’t, a script element referencing it is injected into the DOM, which kicks off a request for it, and the element is unobserved.

You’re not going to be able to lazy load every third-party script. Some of them simply need to do their work at page load time, or otherwise can’t be deferred. Regardless, do the detective work to see if it’s possible to lazy load at least some of your third-party JavaScript.

One of the common concerns I hear from coworkers when I suggest lazy loading third-party scripts is how it can delay whatever interactions the third party provides. That’s a reasonable concern, because when you lazy load anything, a noticeable delay may occur as the resource loads. You can get around this to some extent with resource prefetching. This is different from preloading, which we discussed earlier. Prefetching consumes a comparable amount of data, yes, but prefetched resources are given lower priority and are less likely to contend for bandwidth with critical resources.
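A prefetch hint for the Facebook SDK used in the earlier example could look like this sketch:

<link rel="prefetch" href="https://connect.facebook.net/en_US/sdk.js" as="script">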

Staying on top of the problem

Keeping an eye on your third-party JavaScript requires mindfulness bordering on hypervigilance. When you recognize poor performance for the technical debt that it truly is, you’ll naturally slip into a frame of mind where you’ll recognize and address it as you would any other kind of technical debt.

Staying on top of third parties is refactoring—a sort that requires you to periodically perform tasks such as cleaning up tag managers and A/B tests, consolidating third-party solutions, eliminating any that are no longer needed, and applying the coding techniques discussed above. Moreover, you’ll need to work with your team to address this technical debt on a cyclical basis. This kind of work can’t be automated, so yes, you’ll need to knuckle down and have face-to-face, synchronous conversations with actual people.

If you’re already in the habit of scheduling “cleanup sprints” on some interval, then that is the time and space for you to address performance-related technical debt, regardless of whether it involves third- or first-party code. There’s a time for feature development, but that time should not comprise the whole of your working hours. Development shops that focus only on feature development are destined to be wholly consumed by the technical debt that will inevitably result.

So it will come to pass that in the fourth and final installment of this series we’ll discuss what it means to do the hard work of using JavaScript responsibly in the context of process. Therein, we’ll explore what it takes to unite your organization under the banner of making your website faster and more accessible, and therefore more usable for everyone, everywhere.




script

Exporting modules in JavaScript

In my latest entry I explain the differences in exporting a module between server-side or CLI environments such as Nashorn, SpiderMonkey, and JSC, microcontroller and embedded engines such as Duktape, Espruino, and KinomaJS, and the desktop UI space via GJS.
Using this is a universal way to attach and export properties, but it breaks down with ES2015 modules, which are incompatible with CommonJS and run with an undefined execution context, so there is no this to attach anything to.
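As a rough illustration of that attach-to-this idea (not code from the entry itself; the module name is made up), the pattern looks like the following sketch. Note that top-level this is undefined inside an ES2015 module, which is exactly the incompatibility mentioned:

// attach exports to whatever "this" refers to: the global object in
// browsers and most CLI engines, module.exports in Node's module scope
(function (exports) {
  exports.myModule = {
    hello: function (name) {
      return 'Hello, ' + name;
    }
  };
}(this));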
Enjoy




script

The missing analysis in JavaScript "Real" Mixins

I love hacks and unusual patterns! As a logical consequence, I loved this post about "Real" Mixins!!!
The only hitch with that post is that I believe it reads more like a "gonna sell you my idea" pitch than a dispassionate analysis.
Let's start this counter-analysis by remembering what classes actually are in the latest JavaScript standard, so that we can then explain what's missing there.

JavaScript embraces prototypal inheritance

It doesn't matter if ES6 made the previously reserved class keyword usable; at the end of the day we're dealing with a special syntactical shortcut to enrich a generic prototype object.

// class in ES2015
class A {
  constructor() {}
  method() {}
  get accessor() {}
  set accessor(value) {}
}

// where are those methods and properties defined?
console.log(
  Object.getOwnPropertyNames(A.prototype)
  // ["constructor", "method", "accessor"]
);

Accordingly, the class declaration above is effectively shorthand for the following procedure:

function A() {}
Object.defineProperties(
  A.prototype,
  {
    // constructor is implicitly defined
    method: {
      configurable: true,
      writable: true,
      value: function method() {}
    },
    accessor: {
      configurable: true,
      get: function get() {},
      set: function set(value) {}
    }
  }
);

If you don't trust me, trust what a transpiler would do, summarized in the following code:

var A = (function () {
  // the constructor
  function A() {
    _classCallCheck(this, _temporalAssertDefined(A, "A", _temporalUndefined) && A);
  }
  // the enriched prototype
  _createClass(_temporalAssertDefined(A, "A", _temporalUndefined) && A, [{
    key: "method",
    value: function method() {}
  }, {
    key: "accessor",
    get: function get() {},
    set: function set(value) {}
  }]);

  return _temporalAssertDefined(A, "A", _temporalUndefined) && A;
})();

If there were public static properties in the definition, their assignment to the constructor would be the second bypassed step.

The super case

The extra bit of syntax that makes ES6 classes special is the super keyword. Since multiple inheritance isn't possible in JavaScript, we can think of super as a static reference to the directly extended prototype. For a class B that extends A, we can think of the super variable inside B as if it were defined like this:

// note: super is a reserved word, so this is illustrative pseudocode
// rather than valid JavaScript

// used within the constructor
let super = (...args) => A.apply(this, args);

// used within any other method
super.method = (...args) => A.prototype.method.apply(this, args);

// used as accessor
Object.defineProperty(super, 'accessor', {
  get: () => Object.getOwnPropertyDescriptor(
    A.prototype, 'accessor'
  ).get.call(this),
  set: (value) => Object.getOwnPropertyDescriptor(
    A.prototype, 'accessor'
  ).set.call(this, value)
});

Now that we have a decent understanding of how inheritance works in JavaScript and what it means to declare a class, let's talk about a few misleading points sold as pros or cons in the mentioned article.

Prototypes are always modified anyway!

We've just seen that defining a class technically means enriching its prototype object. This alone somewhat invalidates Justin's point, but there's more to consider.
When Justin exposes his idea on why current solutions are bad, he says that:
When using mixin libraries against prototype objects, the prototypes are directly mutated. This is a problem if the prototype is used anywhere else that the mixed-in properties are not wanted.
The way Justin describes this issue is quite misleading, because mutating prototypes at runtime is a well-known bad practice.
Indeed, I believe every single library he mentioned in that post (and he also forgot mine) is not designed to mutate class prototypes at runtime ... like: not at all!
Every single mixin proposal that implements mixins via classes is designed to define those classes at definition time, not at runtime!
Moreover, whatever solution Justin proposed will not guard any class from being modified at runtime later on!
The same way he defines his final classes at definition time, mixins-for-classes oriented libraries have exactly the same goal: you define your class and its mixins at class definition time!
The fact that mixins add properties to a prototype is a completely hidden matter that, at class definition time, is anything but bad.
Also, no property is modified in place, because mixins are there to enrich, not to modify ... and having an enriched prototype also means it's easier to spot name clashes and method or property conflicts ... but I'll come back to that later ...

super actually should NOT work!

The main bummer about the article is that it starts in a very reasonable way, describing mixins and classes, and also analyzing their role in a program.
The real, and only, difference between a mixin and normal subclass is that a normal subclass has a fixed superclass, while a mixin definition doesn't yet have a superclass.
Justin starts off right at the very beginning, then degenerates into all sorts of contradictions, before finally returning to Sanity Village with the following sentence:
super calls can be a little unintuitive for those new to mixins because the superclass isn't known at mixin definition, and sometimes developers expect super to point to the declared superclass (the parameter to the mixin), not the mixin application.
And on top of that, Justin talks about constructors too:
Constructors are a potential source of confusion with mixins. They essentially behave like methods, except that overridden methods tend to have the same signature, while constructors in an inheritance hierarchy often have different signatures.
In case you're not convinced yet of how messed up the situation could get, I'd like to add an extra example to the plate.
Let's consider the word area and its multiple meanings:
  • any particular extent of space or surface
  • a geographical region
  • any section reserved for a specific function
  • extent, range, or scope
  • field of study, or a branch of a field of study
  • a piece of unoccupied ground; an open space
  • the space or site on which a building stands
Now you really have to tell me: if you implement a basic Shape mixin with an area() method, what the heck would you expect when invoking super? Moreover, you should tell me whether, for every single method you write within a mixin, you're also going to blindly invoke super with an arbitrary number of arguments ...

So here's my quick advice about blindly calling super: NO, followed by DON'T, and eventually NEVER!

Oversold super ability

No kidding, and I can't stress this enough ... I've never, ever in my life written a single mixin that blindly trusted a super call. That would eventually be an application built on mixins, but that's a completely different story.
My feeling is that Justin tried to combine different concepts at all costs, probably misled by his Dart background (mentioned as a reference), where composition is indeed class-based and the language itself exposes native mixins as classes ... but here, again, we are in JavaScript!

instanceof what?

Another oversold point in Justin's article is that instanceof works.
This one was easy to spot ... I mean, if you create a new class at runtime every time the mixin is invoked, what exactly are you capable of "instanceoffing", and why would that benefit anyone in any way?
I'm writing down his very same examples here, which will obviously all fail:

// a new anonymous class is created each time
// who's gonna benefit from the instanceof?
let MyMixin = (superclass) => class extends superclass {
  foo() {
    console.log('foo from MyMixin');
  }
};

// a base class to mix into (not defined in Justin's snippet)
class MyBaseClass {}

// let's try this class
class MyClass extends MyMixin(MyBaseClass) {
  /* ... */
}

// Justin says it's cool that instanceof works ...
(new MyClass) instanceof MyMixin;
// ... but this actually throws a TypeError: MyMixin is an arrow
// function with no prototype property, so it can't even be used
// on the right-hand side of instanceof!

Accordingly, and unless I've misunderstood Justin's point, in which case I apologize in advance, I'm not sure what the exact point is of having instanceof work. Yes, sure, the intermediate class is there, but every time the mixin is used it will create a different class, so there's absolutely no advantage in having instanceof work there ... am I right?

Improving Object Composition

In his Improving the Syntax paragraph, Justin exposes a very nice API, summarized as follows:

let mix = (superclass) => new MixinBuilder(superclass);

class MixinBuilder {
  constructor(superclass) {
    this.superclass = superclass;
  }

  with(...mixins) {
    return mixins.reduce((c, mixin) => mixin(c), this.superclass);
  }
}
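As a usage sketch of that API, with a made-up base class and mixin:

let SerializableMixin = (superclass) => class extends superclass {
  serialize() {
    return JSON.stringify(this);
  }
};

class MyBaseClass {}

class MyClass extends mix(MyBaseClass).with(SerializableMixin) {}

console.log(new MyClass().serialize()); // "{}"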
Well, this was actually the part I liked the most about his article: it's a very simple and semantic API, and it doesn't need classes at all to be implemented for any kind of JS object!
How? Simply by creating objects from objects instead:

let mix = (object) => ({
  with: (...mixins) => mixins.reduce(
    (c, mixin) => Object.create(
      c, Object.getOwnPropertyDescriptors(mixin)
    ),
    object
  )
});
It could surely be improved to deal with classes too, but you get the idea:

let a = {a: 'a'};
let b = {b: 'b'};
let c = {c: 'c'};
let d = mix(c).with(a, b);
console.log(d);
Since the main trick in Justin's proposal is to place an intermediate class in the inheritance chain, defining the same class and its prototype at runtime each and every time, I've done something different here: it doesn't need to create a new class and prototype each time, and it preserves the original objects' functionality without affecting them.

The result is less RAM to use, a native Object.getOwnPropertyDescriptors (hopefully landing soon in ES7) that should make descriptor extraction faster, and the ability to use the pattern with pretty much everything out there, modern or old.
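As a side note, while waiting for native support, a minimal fallback could look like the following sketch (it ignores symbol keys, which the real proposal covers):

if (!Object.getOwnPropertyDescriptors) {
  Object.getOwnPropertyDescriptors = function (object) {
    return Object.getOwnPropertyNames(object).reduce(function (descriptors, name) {
      descriptors[name] = Object.getOwnPropertyDescriptor(object, name);
      return descriptors;
    }, {});
  };
}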
The gist is here, feel free to reuse.

In Summary ...

Wrapping up this post: with the latter proposal we can actually achieve everything Justin did with his intermediate-classes approach, but following different goals:
  1. Mixins are added to the prototype chain.
  2. Mixins are applied without modifying existing objects.
  3. Mixins do no magic, and don't define new semantics on top of the core language.
  4. super.foo property access hopefully won't work within mixins, but it will within subclass methods.
  5. super() calls hopefully won't work in mixin constructors, because you have no idea what kind of arguments you're going to receive. Subclasses still work as expected.
  6. Mixins are able to extend other mixins.
  7. instanceof has no reason to even be considered in this scenario, since we are composing objects.
  8. Mixin definitions do not require library support - they can be written in a universal style and be compatible with non-class-based engines too.
  9. bonus: less memory consumption overall, since there's no runtime duplication of the same logic each time.
I still want to thank Justin, because he made it quite clear that not everyone fully understands mixins yet, and that there's surely a real-world need, or better, a demand, for them in the current JavaScript community.

Let's hope the next version of ECMAScript will let all of us compose in a standard way, one that doesn't include the footgun that super calls through intermediate class definitions could be.
Thanks for your patience reading through this!




script

JavaScript Interfaces

In this Implementing Interfaces in JavaScript blog entry I'll show a new way to enrich prototypal inheritance by layering functionality apart, without modifying prototypes at all: a different, alternative, and in some cases even better, approach to mixins.




Online Resource