Semantics to Screen Readers

As a child of the ’90s, one of my favorite movie quotes is from Harriet the Spy: “there are as many ways to live as there are people in this world, and each one deserves a closer look.” Likewise, there are as many ways to browse the web as there are people online. We each bring unique context to our web experience based on our values, technologies, environments, minds, and bodies.

Assistive technologies (ATs), which are hardware and software that help us perceive and interact with digital content, come in diverse forms. ATs can use a whole host of user input, ranging from clicks and keystrokes to minor muscle movements. ATs may also present digital content in a variety of forms, such as Braille displays, color-shifted views, and decluttered user interfaces (UIs).

One more commonly known type of AT is the screen reader. Programs such as JAWS, Narrator, NVDA, and VoiceOver can take digital content and present it to users through voice output, may display this output visually on the user’s screen, and can have Braille display and/or screen magnification capabilities built in.

If you make websites, you may have tested your sites with a screen reader. But how do these and other assistive programs actually access your content? What information do they use? We’ll take a detailed step-by-step view of how the process works.

(For simplicity we’ll continue to reference “browsers” and “screen readers” throughout this article. These are essentially shorthands for “browsers and other applications” and “screen readers and other assistive technologies,” respectively.)

The semantics-to-screen-readers pipeline

Accessibility application programming interfaces (APIs) create a useful link between user applications and the assistive technologies that wish to interact with them. Accessibility APIs facilitate communicating accessibility information about user interfaces (UIs) to the ATs. The API expects information to be structured in a certain way, so that whether a button is properly marked up in web content or is sitting inside a native app taskbar, a button is a button is a button as far as ATs are concerned. That said, screen readers and other ATs can do some app-specific handling if they wish.

On the web specifically, there are some browser and screen reader combinations where accessibility API information is supplemented by access to DOM structures. For this article, we’ll focus specifically on accessibility APIs as a link between web content and the screen reader.

Here’s the breakdown of how web content reaches screen readers via accessibility APIs:
  1. The web developer uses host language markup (HTML, SVG, etc.), and potentially roles, states, and properties from the ARIA suite where needed, to provide the semantics of their content. Semantic markup communicates what type an element is, what content it contains, what state it’s in, etc.
  2. The browser rendering engine (alternatively referred to as a “user agent”) takes this information and maps it into an accessibility API. Different accessibility APIs are available on different operating systems, so a browser that is available on multiple platforms should support multiple accessibility APIs. Accessibility API mappings are maintained on a lower level than web platform APIs, so web developers don’t directly interact with accessibility APIs.
  3. The accessibility API includes a collection of interfaces that browsers and other apps can plumb into, and generally acts as an intermediary between the browser and the screen reader. Accessibility APIs provide interfaces for representing the structure, relationships, semantics, and state of digital content, as well as means to surface dynamic changes to said content. Accessibility APIs also allow screen readers to retrieve and interact with content via the API.
Again, web developers don’t interact with these APIs directly; the rendering engine handles translating web content into information useful to accessibility APIs.

Examples of accessibility APIs

The screen reader uses client-side methods from these accessibility APIs to retrieve and handle information exposed by the browser. In browsers where direct access to the Document Object Model (DOM) is permitted, some screen readers may also take additional information from the DOM tree. A screen reader can also interact with apps that use differing accessibility APIs. No matter where they get their information, screen readers can dream up any interaction modes they want to provide to their users (I’ve provided links to screen reader commands at the end of this article). Testing by site creators can help identify content that feels awkward in a particular navigation mode, such as multiple links with the same text (“Learn more”).
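For instance, repeated link text can be disambiguated so that each link has a distinct accessible name. One common approach is sketched below; the URLs and label text are illustrative:

```html
<!-- Ambiguous when a screen reader lists all links out of context: -->
<a href="/pricing">Learn more</a>
<a href="/support">Learn more</a>

<!-- One possible fix: give each link a distinct accessible name. -->
<a href="/pricing" aria-label="Learn more about pricing">Learn more</a>
<a href="/support" aria-label="Learn more about support">Learn more</a>
```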

Example of this pipeline: surfacing a button element to screen reader users

Let’s suppose for a moment that a screen reader wants to understand what object is next in the accessibility tree (which I’ll explain further in the next section), so it can surface that object to the user as they navigate to it. The flow will go a little something like this:
Diagram illustrating the steps involved in presenting the next object in a document; detailed list follows
  1. The screen reader requests information from the API about the next accessible object, relative to the current object.
  2. The API (as an intermediary) passes along this request to the browser.
  3. At some point, the browser references DOM and style information, and discovers that the relevant element is a non-hidden button: <button>Do a thing</button>.
  4. The browser maps this HTML button into the format the API expects, such as an accessible object with various properties: Name: Do a thing, Role: Button.
  5. The API returns this information from the browser to the screen reader.
  6. The screen reader can then surface this object to the user, perhaps stating “Button, Do a thing.”
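The mapping in step 4 can be sketched in JavaScript. This is a toy model, not a real browser API; the function and property names are hypothetical and heavily simplified:

```javascript
// Minimal sketch of step 4: mapping an element description to an
// accessible object. Property names are simplified; real accessibility
// APIs differ by platform.
function toAccessibleObject(element) {
  const roleMap = { button: "Button", a: "Link", input: "Edit" };
  return {
    role: roleMap[element.tag] || "Group",
    // A button's accessible name comes from its contents.
    name: element.textContent,
  };
}

const accessible = toAccessibleObject({ tag: "button", textContent: "Do a thing" });
// The screen reader can now announce: "Button, Do a thing."
```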
Suppose that the screen reader user would now like to “click” this button. Here’s how their action flows all the way back to web content:
Diagram illustrating the steps involved in routing a screen reader click to web content; detailed list follows
  1. The user provides a particular screen reader command, such as a keystroke or gesture.
  2. The screen reader calls a method into the API to invoke the button.
  3. The API forwards this interaction to the browser.
  4. How a browser may respond to incoming interactions depends on the context, but in this case the browser can raise this as a “click” event through web APIs. The browser should give no indication that the click came from an assistive technology, as doing so would violate the user’s right to privacy.
  5. The web developer has registered a JavaScript event listener for clicks; their callback function is now executed as if the user clicked with a mouse.
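The final step is ordinary web code. A minimal sketch, using Node’s built-in EventTarget to stand in for the DOM button, shows that the developer’s listener fires the same way regardless of where the click originated:

```javascript
// Sketch of step 5: the developer's listener runs identically whether
// the click came from a mouse or was raised on behalf of an AT.
// EventTarget here stands in for the real DOM button element.
const button = new EventTarget();
let logged = null;

button.addEventListener("click", () => {
  logged = "Mood logged!";
});

// The browser raises the click; nothing marks it as AT-initiated.
button.dispatchEvent(new Event("click"));
```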
Now that we have a general sense of the pipeline, let’s go into a little more detail on the accessibility tree.

The accessibility tree

Dev Tools in Microsoft Edge showing the DOM tree and accessibility tree side by side; there are more nodes in the DOM tree
The accessibility tree is a hierarchical representation of elements in a UI or document, as computed for an accessibility API. In modern browsers, the accessibility tree for a given document is a separate, parallel structure to the DOM tree. “Parallel” does not necessarily mean there is a 1:1 match between the nodes of these two trees. Some elements may be excluded from the accessibility tree, for example if they are hidden or are not semantically useful (think non-focusable wrapper divs without any semantics added by a web developer).

This idea of a hierarchical structure is somewhat of an abstraction. The definition of what exactly an accessibility tree is in practice has been debated and partially defined in multiple places, so implementations may differ in various ways. For example, it’s not actually necessary to generate accessible objects for every element in the DOM whenever the DOM tree is constructed. As a performance consideration, a browser could choose to deal with only a subset of objects and their relationships at a time—that is, however much is necessary to fulfill the requests coming from ATs. The rendering engine could make these computations during all user sessions, or only do so when assistive technologies are actively running.

Generally speaking, modern web browsers wait until after style computation to build up any accessible objects. Browsers wait in part because generated content (such as ::before and ::after) can contain text that can participate in calculation of the accessible object’s name. CSS styles can also impact accessible objects in various other ways: text styling can come through as attributes on accessible text ranges, and display property values can impact the computation of line text ranges. These are just a few ways in which style can impact accessibility semantics.

Browsers may also use different structures as the basis for accessible object computation. One rendering engine may walk the DOM tree and cross-reference style computations to build up parallel tree structures; another engine may use only the nodes that are available in a style tree in order to build up their accessibility tree.

User agent participants in the standards community are currently thinking through how we can better document our implementation details, and whether it might make sense to standardize more of these details further down the road.

Let’s now focus on the branches of this tree, and explore how individual accessibility objects are computed.

Building up accessible objects

From API to API, an accessible object will generally include a few things:
  • Role, or the type of accessible object (for example, Button). The role tells a user how they can expect to interact with the control. It is typically presented when screen reader focus moves onto the accessible object, and it can be used to provide various other functionalities, such as skipping around content via one type of object.
  • Name, if specified. The name is an (ideally short) identifier that better helps the user identify and understand the purpose of an accessible object. The name is often presented when screen focus moves to the object (more on this later), can be used as an identifier when presenting a list of available objects, and can be used as a hook for functionalities such as voice commands.
  • Description and/or help text, if specified. We’ll use “Description” as a shorthand. The Description can be considered supplemental to the Name; it’s not the main identifier but can provide further information about the accessible object. Sometimes this is presented when moving focus to the accessible object, sometimes not; this variation depends on both the screen reader’s user experience design and the user’s chosen verbosity settings.
  • Properties and methods surfacing additional semantics. For simplicity’s sake, we won’t go through all of these. For your awareness, properties can include details like layout information or available interactions (such as invoking the element or modifying its value).
Let’s walk through an example using markup for a simple mood tracker. We’ll use simplified property names and values, because these can differ between accessibility APIs.
<form>
  <label for="mood">On a scale of 1–10, what is your mood today?</label>
  <input id="mood" type="range"
       min="1" max="10" value="5"
       aria-describedby="helperText" />
  <p id="helperText">Some helpful pointers about how to rate your mood.</p>
  <!-- Using a div with button role for the purposes of showing how the accessibility tree is created. Please use the button element! -->
  <div tabindex="0" role="button">Log Mood</div>
</form>
First up is our form element. This form doesn’t have any attributes that would give it an accessible Name, and a form landmark without a Name isn’t very useful when jumping between landmarks. Therefore, HTML mapping standards specify that it should be mapped as a group. Here’s the beginning of our tree:
  • Role: Group
Next up is the label. This one doesn’t have an accessible Name either, so we’ll just nest it as an object of role “Label” underneath the form:
  • Role: Group
    • Role: Label
Let’s add the range input, which will map into various APIs as a “Slider.” Due to the relationship created by the for attribute on the label and id attribute on the input, this slider will take its Name from the label contents. The aria-describedby attribute is another id reference and points to a paragraph with some text content, which will be used for the slider’s Description. The slider object’s properties will also store “labelledby” and “describedby” relationships pointing to these other elements. And it will specify the current, minimum, and maximum values of the slider. If one of these range values were not available, ARIA standards specify what should be the default value. Our updated tree:
  • Role: Group
    • Role: Label
    • Role: Slider
      Name: On a scale of 1–10, what is your mood today?
      Description: Some helpful pointers about how to rate your mood.
      LabelledBy: [label object]
      DescribedBy: helperText
      ValueNow: 5
      ValueMin: 1
      ValueMax: 10
The paragraph will be added as a simple paragraph object (“Text” or “Group” in some APIs):
  • Role: Group
    • Role: Label
    • Role: Slider
      Name: On a scale of 1–10, what is your mood today?
      Description: Some helpful pointers about how to rate your mood.
      LabelledBy: [label object]
      DescribedBy: helperText
      ValueNow: 5
      ValueMin: 1
      ValueMax: 10
    • Role: Paragraph
The final element is an example of when role semantics are added via the ARIA role attribute. This div will map as a Button with the name “Log Mood,” as buttons can take their name from their children. This button will also be surfaced as “invokable” to screen readers and other ATs; special types of buttons could provide expand/collapse functionality (buttons with the aria-expanded attribute), or toggle functionality (buttons with the aria-pressed attribute). Here’s our tree now:
  • Role: Group
    • Role: Label
    • Role: Slider
      Name: On a scale of 1–10, what is your mood today?
      Description: Some helpful pointers about how to rate your mood.
      LabelledBy: [label object]
      DescribedBy: helperText
      ValueNow: 5
      ValueMin: 1
      ValueMax: 10
    • Role: Paragraph
    • Role: Button
      Name: Log Mood
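The slider’s portion of this tree can be sketched as a small computation over the markup’s relationships. This is a toy model with simplified property names; `byId` stands in for document.getElementById:

```javascript
// Sketch: computing the slider's accessible object from id-based
// relationships in the mood tracker markup. Property names are
// simplified and differ between real accessibility APIs.
const byId = {
  helperText: { textContent: "Some helpful pointers about how to rate your mood." },
};

function mapSlider(input, label) {
  return {
    role: "Slider",
    name: label.textContent, // via the label's for / input's id pairing
    description: byId[input.describedBy].textContent, // via aria-describedby
    valueNow: input.value,
    valueMin: input.min,
    valueMax: input.max,
  };
}

const slider = mapSlider(
  { describedBy: "helperText", value: 5, min: 1, max: 10 },
  { textContent: "On a scale of 1–10, what is your mood today?" }
);
```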

On choosing host language semantics

Our sample markup mentions that it is preferred to use the HTML-native button element rather than a div with a role of “button.” Our buttonified div can be operated as a button via accessibility APIs, as the ARIA attribute is doing what it should—conveying semantics. But there’s a lot you can get for free when you choose native elements. In the case of button, that includes focus handling, user input handling, form submission, and basic styling. Aaron Gustafson has what he refers to as an “exhaustive treatise” on buttons in particular, but generally speaking it’s great to let the web platform do the heavy lifting of semantics and interaction for us when we can. ARIA roles, states, and properties are still a great tool to have in your toolbelt. Some good use cases for these are
  • providing further semantics and relationships that are not naturally expressed in the host language;
  • supplementing semantics in markup we perhaps don’t have complete control over;
  • patching potential cross-browser inconsistencies;
  • and making custom elements perceivable and operable to users of assistive technologies.
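As an illustration of that “free” behavior: a native button element activates on both Enter and Space automatically, while our buttonified div must reimplement this by hand. A minimal sketch, with a hypothetical logMood application function:

```javascript
// Sketch: keyboard activation that a native <button> provides for free,
// but a div[role="button"] must implement itself. logMood is a
// hypothetical application function.
let moodLogged = false;
function logMood() {
  moodLogged = true;
}

function onKeydown(event) {
  // Native buttons activate on both Enter and Space; replicate that here.
  if (event.key === "Enter" || event.key === " ") {
    logMood();
  }
}

// In a real page: divButton.addEventListener("keydown", onKeydown);
onKeydown({ key: "x" });     // ignored
onKeydown({ key: "Enter" }); // activates
```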

Notes on inclusion or exclusion in the tree

Standards define some rules around when user agents should exclude elements from the accessibility tree. Excluded elements can include those hidden by CSS, or the aria-hidden or hidden attributes; their children would be excluded as well. Children of particular roles (like checkbox) can also be excluded from the tree, unless they meet special exceptions. The full rules can be found in the “Accessibility Tree” section of the ARIA specification. That being said, there are still some differences between implementers, some of which include more divs and spans in the tree than others do.
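These exclusion rules can be roughly sketched as a recursive walk over a simplified DOM-like structure (a toy model; the authoritative rules live in the ARIA specification):

```javascript
// Sketch: hidden nodes are excluded along with their subtrees, and
// non-semantic wrapper divs are flattened so their children attach to
// the parent accessible object.
function buildAccessibilityTree(node) {
  if (node.hidden) return []; // hidden: excluded, children too
  const children = (node.children || []).flatMap(buildAccessibilityTree);
  if (node.tag === "div" && !node.role) return children; // flattened wrapper
  return [{ role: node.role || node.tag, children }];
}

const dom = {
  tag: "div", // wrapper without semantics: flattened away
  children: [
    { tag: "button", role: "Button" },
    { tag: "p", role: "Paragraph", hidden: true }, // excluded
  ],
};
const tree = buildAccessibilityTree(dom);
// tree: [{ role: "Button", children: [] }]
```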

Notes on name and description computation

How names and descriptions are computed can be a bit confusing. Some elements have special rules, and some ARIA roles allow name computation from the element’s contents, whereas others do not. Name and description computation could probably be its own article, so we won’t get into all the details here (refer to “Further reading and resources” for some links). Some short pointers:
  • aria-label, aria-labelledby, and aria-describedby take precedence over other means of calculating name and description.
  • If you expect a particular HTML attribute to be used for the name, check the name computation rules for HTML elements. In your scenario, it may be used for the full description instead.
  • Generated content (::before and ::after) can participate in the accessible name when said name is taken from the element’s contents. That being said, web developers should not rely on pseudo-elements for non-decorative content, as this content could be lost when a stylesheet fails to load or user styles are applied to the page.
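A drastically simplified sketch of that precedence (the real algorithm, the accessible name and description computation, has many more steps; `byId` stands in for document.getElementById):

```javascript
// Sketch: simplified accessible-name precedence. aria-labelledby wins
// over aria-label, which wins over a host-language label, which wins
// over name-from-contents (only allowed for some roles).
function accessibleName(el, byId) {
  if (el.ariaLabelledby) return byId[el.ariaLabelledby].textContent;
  if (el.ariaLabel) return el.ariaLabel;
  if (el.label) return el.label.textContent; // host-language label element
  return el.textContent || "";
}

const byId = { title: { textContent: "Close dialog" } };
// aria-labelledby wins over aria-label and contents:
const name = accessibleName(
  { ariaLabelledby: "title", ariaLabel: "Close", textContent: "X" },
  byId
);
```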
When in doubt, reach out to the community! Tag questions on social media with “#accessibility.” “#a11y” is a common shorthand; the “11” stands for “11 middle letters in the word ‘accessibility.’” If you find an inconsistency in a particular browser, file a bug! Bug tracker links are provided in “Further reading and resources.”

Not just accessible objects

Besides a hierarchical structure of objects, accessibility APIs also offer interfaces that allow ATs to interact with text. ATs can retrieve content text ranges, text selections, and a variety of text attributes that they can build experiences on top of. For example, if someone writes an email and uses color alone to highlight their added comments, the person reading the email could increase the verbosity of speech output in their screen reader to know when they’re encountering phrases with that styling. However, it would be better for the email author to include very brief text labels in this scenario. The big takeaway here for web developers is to keep in mind that the accessible name of an element may not always be surfaced in every navigation mode in every screen reader. So if your aria-label text isn’t being read out in a particular mode, the screen reader may be primarily using text interfaces and only conditionally stopping on objects. It may be worth your while to consider using text content—even if visually hidden—instead of text via an ARIA attribute. Read more thoughts on aria-label and aria-labelledby.
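One widely used community pattern for visually hidden text is a small CSS utility class like the following (the class name is arbitrary, and this is a convention rather than a standard):

```css
/* "Visually hidden" utility: the text stays in the accessibility tree
   and in text interfaces, but is removed from the visual layout.
   Adapt to your codebase's conventions. */
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  overflow: hidden;
  clip: rect(0 0 0 0);
  clip-path: inset(50%);
  white-space: nowrap;
}
```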

Accessibility API events

It is the responsibility of browsers to surface changes to content, structure, and user input. Browsers do this by sending the accessibility API notifications about various events, which screen readers can subscribe to; again, for performance reasons, browsers could choose to send notifications only when ATs are active. Let’s suppose that a screen reader wants to surface changes to a live region (an element with role="alert" or aria-live):
Diagram illustrating the steps involved in announcing a live region via a screen reader; detailed list follows
  1. The screen reader subscribes to event notifications; it could subscribe to notifications of all types, or just certain types as categorized by the accessibility API. Let’s assume in our example that the screen reader is at least listening to live region change events.
  2. In the web content, the web developer changes the text content of a live region.
  3. The browser (provider) recognizes this as a live region change event, and sends the accessibility API a notification.
  4. The API passes this notification along to the screen reader.
  5. The screen reader can then use metadata from the notification to look up the relevant accessible objects via the accessibility API, and can surface the changes to the user.
ATs aren’t required to do anything with the information they retrieve. This can make it a bit trickier as a web developer to figure out why a screen reader isn’t announcing a change: it may be that notifications aren’t being raised (for example, because a browser is not sending notifications for a live region dynamically inserted into web content), or the AT is not subscribed or responding to that type of event.
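The notification flow above can be sketched as a tiny publish/subscribe model, with the accessibility API acting as the intermediary (all names here are hypothetical):

```javascript
// Sketch: the event pipeline as pub/sub. The "API" relays browser-raised
// notifications to subscribed ATs.
const accessibilityAPI = {
  subscribers: {},
  subscribe(type, handler) {
    (this.subscribers[type] = this.subscribers[type] || []).push(handler);
  },
  notify(type, detail) {
    for (const handler of this.subscribers[type] || []) handler(detail);
  },
};

// 1. The screen reader subscribes to live region change events.
const announced = [];
accessibilityAPI.subscribe("live-region-changed", (detail) => {
  // 5. The screen reader looks up the object and surfaces the change.
  announced.push(detail.newText);
});

// 2-4. The developer changes a live region; the browser recognizes this
// and raises a notification through the API.
accessibilityAPI.notify("live-region-changed", { newText: "Mood logged!" });
```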

Testing with screen readers and dev tools

While conformance checkers can help catch some basic accessibility issues, it’s ideal to walk through your content manually using a variety of contexts, such as
  • using a keyboard only;
  • with various OS accessibility settings turned on;
  • and at different zoom levels and text sizes, and so on.
As you do this, keep in mind the Web Content Accessibility Guidelines (WCAG 2.1), which give general guidelines around expectations for inclusive web content. If you can test with users after your own manual test passes, all the better! Robust accessibility testing could probably be its own series of articles. In this one, we’ll go over some tips for testing with screen readers, and catching accessibility errors as they are mapped into the accessibility API in a more general sense.

Screen reader testing

Screen readers exist in many forms: some are pre-installed on the operating system and others are separate applications that in some cases are free to download. The WebAIM screen reader user survey provides a list of commonly used screen reader and browser combinations among survey participants. The “Further reading and resources” section at the end of this article includes full screen reader user docs, and Deque University has a great set of screen reader command cheat sheets that you can refer to. Some actions you might take to test your content:
  • Read the next/previous item.
  • Read the next/previous line.
  • Read continuously from a particular point.
  • Jump by headings, landmarks, and links.
  • Tab around focusable elements only.
  • Get a summary of all elements of a particular type within the page.
  • Search the page for specific content.
  • Use table-specific commands to interact with your tables.
  • Jump around by form field; are field instructions discoverable in this navigational mode?
  • Use keyboard commands to interact with all interactive elements. Are your JavaScript-driven interactions still operable with screen readers (which can intercept key input in certain modes)? WAI-ARIA Authoring Practices 1.1 includes notes on expected keyboard interactions for various widgets.
  • Try out anything that creates a content change or results in navigating elsewhere. Would it be obvious, via screen reader output, that a change occurred?

Tracking down the source of unexpected behavior

If a screen reader does not announce something as you’d expect, here are a few different checks you can run:
  • Does this reproduce with the same screen reader in multiple browsers on this OS? It may be an issue with the screen reader or your expectation may not match the screen reader’s user experience design. For example, a screen reader may choose to not expose the accessible name of a static, non-interactive element. Checking the user docs or filing a screen reader issue with a simple test case would be a great place to start.
  • Does this reproduce with multiple screen readers in the same browser, but not in other browsers on this OS? The browser in question may have an issue, there may be compatibility differences between browsers (such as a browser doing extra helpful but non-standard computations), or a screen reader’s support for a specific accessibility API may vary. Filing a browser issue with a simple test case would be a great place to start; if it’s not a browser bug, the developer can route it to the right place or make a code suggestion.
  • Does this reproduce with multiple screen readers in multiple browsers? There may be something you can adjust in your code, or your expectations may differ from standards and common practices.
  • How do this element’s accessibility properties and structure show up in browser dev tools?

Inspecting accessibility trees and properties in dev tools

Major modern browsers provide dev tools to help you observe the structure of the accessibility tree as well as a given element’s accessibility properties. By observing which accessible objects are generated for your elements and which properties are exposed on a given element, you may be able to pinpoint issues that are occurring either in front-end code or in how the browser is mapping your content into the accessibility API. Let’s suppose that we are testing this piece of code in Microsoft Edge with a screen reader:
<div class="form-row">
  <label>Favorite color</label>
  <input id="myTextInput" type="text" />
</div>
We’re navigating the page by form field, and when we land on this text field, the screen reader just tells us this is an “edit” control—it doesn’t mention a name for this element. Let’s check the tools for the element’s accessible name.
1. Inspect the element to bring up the dev tools.
The Microsoft Edge dev tools, with an input element highlighted in the DOM tree
2. Bring up the accessibility tree for this page by clicking the accessibility tree button (a circle with two arrows) or pressing Ctrl+Shift+A (Windows).
The accessibility tree button activated in the Microsoft Edge dev tools
Reviewing the accessibility tree is an extra step for this particular flow but can be helpful to do. When the Accessibility Tree pane comes up, we notice there’s a tree node that just says “textbox:,” with nothing after the colon. That suggests there’s no name for this element. (Also notice that the div around our form input didn’t make it into the accessibility tree; it was not semantically useful.)
3. Open the Accessibility Properties pane, which is a sibling of the Styles pane. If we scroll down to the Name property—aha! It’s blank. No name is provided to the accessibility API. (Side note: some other accessibility properties are filtered out of this list by default; toggle the filter button—which looks like a funnel—in the pane to get the full list.)
The Accessibility Properties pane open in Microsoft Edge dev tools, in the same area as the Styles pane
4. Check the code. We realize that we didn’t associate the label with the text field; that is one strategy for providing an accessible name for a text input. We add for="myTextInput" to the label:
<div class="form-row">
  <label for="myTextInput">Favorite color</label>
  <input id="myTextInput" type="text" />
</div>
And now the field has a name:
The accessible Name property set to the value of “Favorite color” inside Microsoft Edge dev tools
In another use case, we have a breadcrumb component, where the current page link is marked with aria-current="page":
<nav class="breadcrumb" aria-label="Breadcrumb">
  <ol>
    <li>
      <a href="/cat/">Category</a>
    </li>
    <li>
      <a href="/cat/sub/">Sub-Category</a>
    </li>
    <li>
      <a aria-current="page" href="/cat/sub/page/">Page</a>
    </li>
  </ol>
</nav>
When navigating onto the current page link, however, we don’t get any indication that this is the current page. We’re not exactly sure how this maps into accessibility properties, so we can reference a specification like Core Accessibility API Mappings 1.2 (Core-AAM). Under the “State and Property Mapping” table, we find mappings for “aria-current with non-false allowed value.” We can check for these listed properties in the Accessibility Properties pane. Microsoft Edge, at the time of writing, maps into UIA (UI Automation), so when we check AriaProperties, we find that yes, “current=page” is included within this property value.
The AriaProperties property shown in the Accessibility Properties pane of Microsoft Edge dev tools, with “current=page” included in its value
Now we know that the value is presented correctly to the accessibility API, but the particular screen reader is not using the information. As a side note, Microsoft Edge’s current dev tools expose these accessibility API properties quite literally. Other browsers’ dev tools may simplify property names and values to make them easier to read, particularly if they support more than one accessibility API. The important bit is to find if there’s a property with roughly the name you expect and whether its value is what you expect. You can also use this method of checking through the property names and values if mapping specs, like Core-AAM, are a bit intimidating!

Advanced accessibility tools

While browser dev tools can tell us a lot about the accessibility semantics of our markup, they don’t generally include representations of text ranges or event notifications. On Windows, the Windows SDK includes advanced tools that can help debug these parts of MSAA or UIA mappings: Inspect and AccEvent (Accessible Event Watcher). Using these tools presumes knowledge of the Windows accessibility APIs, so if this is too granular for you and you’re stuck on an issue, please reach out to the relevant browser team! There is also an Accessibility Inspector in Xcode on macOS, with which you can inspect web content in Safari. This tool can be accessed by going to Xcode > Open Developer Tool > Accessibility Inspector.

Diversity of experience

Equipped with an accessibility tree, detailed object information, event notifications, and methods for interacting with accessible objects, screen readers can craft a browsing experience tailored to their audiences. In this article, we’ve used the term “screen readers” as a proxy for a whole host of tools that may use accessibility APIs to provide the best user experience possible. Assistive technologies can use the APIs to augment presentation or support varying types of user input. Examples of other ATs include screen magnifiers, cognitive support tools, speech command programs, and some brilliant new app that hasn’t been dreamed up yet. Further, assistive technologies of the same “type” may differ in how they present information, and users who share the same tool may further adjust settings to their liking. As web developers, we don’t necessarily need to make sure that each instance surfaces information identically, because each user’s preferences will not be exactly the same. Our aim is to ensure that no matter how a user chooses to explore our sites, content is perceivable, operable, understandable, and robust. By testing with a variety of assistive technologies—including but not limited to screen readers—we can help create a better web for all the many people who use it.

Further reading and resources




Ultrasound-controlled drug release is a very promising technique for controlled drug delivery due to the unique advantages of ultrasound as the stimulus.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Pillar[5]arene-based self-assembled linear supramolecular polymer driven by guest halogen–halogen interactions in solid and solution states

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00327A, Paper
Talal F. Al-Azemi, Mickey Vinodh
A pillar[5]arene-based linear supramolecular polymer mediated by guest halogen–halogen interactions (C–Br⋯Br–C) was studied in both the solution and solid states.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Polymerization of dopamine accompanying its coupling to induce self-assembly of block copolymer and application in drug delivery

Polym. Chem., 2020, 11,2811-2821
DOI: 10.1039/D0PY00085J, Paper
Yudian Qiu, Zongyuan Zhu, Yalei Miao, Panke Zhang, Xu Jia, Zhongyi Liu, Xubo Zhao
The polymerization of dopamine and its coupling occur in succession, which synergistically induces the self-assembly of block copolymer to yield ordered structures, including micelles and vesicles.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Vesicular assemblies of thermoresponsive amphiphilic polypeptide copolymers for guest encapsulation and release

Polym. Chem., 2020, 11,2889-2903
DOI: 10.1039/D0PY00135J, Paper
Mahammad Anas, Somdeb Jana, Tarun K. Mandal
Thermoresponsive amphiphilic polypeptide copolymers are synthesized via different polymerization techniques for their self-assembly into vesicular aggregates for guest encapsulation and release.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Hierarchical self-assembled nanostructures of lactone-derived thiobarbiturate homopolymers for stimuli-responsive delivery applications

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00367K, Paper
Piyali Mandal, Diptendu Patra, Raja Shunmugam
Hierarchical self-assembled nanostructures of lactone-derived thiobarbiturate homopolymers for stimuli-responsive delivery applications are shown.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Self-assembly of strawberry-like organic–inorganic hybrid particle clusters with directionally distributed bimetal and facile transformation of the core and corona

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00237B, Paper
Shuxing Mei, Mingwang Pan, Juan Wang, Xiaopeng Zhang, Shaofeng Song, Chao Li, Gang Liu
Controllable structure of organic–inorganic hybrid particle clusters were successfully fabricated by self-assembly which derived from the strong interaction between carboxyl groups of the organic particles and amino groups of the inorganic particles.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Epoxy-functional diblock copolymer spheres, worms and vesicles via polymerization-induced self-assembly in mineral oil

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00380H, Paper
Philip J. Docherty, Chloé Girou, Matthew J. Derry, Steven P. Armes
Epoxy-functional poly(stearyl methacrylate)-poly(glycidyl methacrylate) spheres, worms or vesicles can be prepared by RAFT dispersion polymerization of glycidyl methacrylate in mineral oil at 70 °C.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Room Temperature Synthesis of Block Copolymer Nano-Objects with Different Morphologies via Ultrasound Initiated RAFT Polymerization-Induced Self-Assembly (Sono-RAFT-PISA)

Polym. Chem., 2020, Accepted Manuscript
DOI: 10.1039/D0PY00461H, Paper
Jing Wan, Bo Fan, Yiyi Liu, Tina Hsia, Kaiyuan Qin, Tanja Junkers, Boon M. Teo, San Thang
Polymerization-induced self-assembly (PISA), which allows scalable synthesis of nano-objects, has drawn significant research attention in the past decade. However, the initiation methods in most of the current reported PISA are...
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Effects of tacticity and chiral center-to-dipole distance on mesogen-free liquid crystalline self-assembly of sulfonyl-containing comb-like polymers

Polym. Chem., 2020, 11,3018-3031
DOI: 10.1039/D0PY00199F, Paper
Caleb A. Bohannon, Man-Hin Kwok, Ruipeng Li, Lei Zhu, Bin Zhao
Mesogen-free comb-like polyethers bearing strongly interacting mono- and di-sulfonylated side chains exhibit well-defined liquid crystalline self-assembly.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Development and disassembly of single and multiple acid-cleavable block copolymer nanoassemblies for drug delivery

Polym. Chem., 2020, 11,2934-2954
DOI: 10.1039/D0PY00234H, Review Article
Arman Moini Jazani, Jung Kwon Oh
Acid-degradable block copolymer-based nanoassemblies are promising intracellular candidates for tumor-targeting drug delivery as they exhibit the enhanced release of encapsulated drugs through their dissociation.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Synthesis, thermoresponsivity and multi-tunable hierarchical self-assembly of multi-responsive (AB)mC miktobrush-coil terpolymers

Polym. Chem., 2020, 11,3003-3017
DOI: 10.1039/D0PY00245C, Paper
Xiaomin Zhu, Jian Zhang, Cheng Miao, Siyu Li, Youliang Zhao
Stimuli-responsive miktobrush-coil terpolymers can exhibit unique physical properties and hierarchical self-assembly behaviors dependent on composition, concentration and external stimuli.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Secondary structure drives self-assembly in weakly segregated globular protein–rod block copolymers

Polym. Chem., 2020, 11,3032-3045
DOI: 10.1039/C9PY01680E, Paper
Helen Yao, Kai Sheng, Jialing Sun, Shupeng Yan, Yingqin Hou, Hua Lu, Bradley D. Olsen
Imparting secondary structure to the polymer block can drive self-assembly in globular protein–helix block copolymers, increasing the effective segregation strength between blocks with weak or no repulsion.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Polymerization-induced self-assembly for the fabrication of polymeric nano-objects with enhanced structural stability by cross-linking

Polym. Chem., 2020, Accepted Manuscript
DOI: 10.1039/D0PY00368A, Review Article
Wen-Jian Zhang, Jamshid Kadirkhanov, Chang-Hui Wang, Sheng-Gang Ding, Chun-Yan Hong, Fei Wang, Ye-Zi You
Polymerization-induced self-assembly (PISA) has been established as a robust strategy to synthesize block copolymer nano-objects with varying morphologies, size, and surface chemistry, which greatly enlarges the library of functional nano-objects...
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Corporate Taxprep Seminars - Autumn 2014

We are excited to announce that our annual fall Corporate Taxprep seminars will consist of a separate morning and afternoon session at each location to better suit your professional development needs.

The morning session will provide detailed information on the tax, form and software changes, while during the afternoon session, our professional and knowledgeable presenters will cover corporate tax compliance and a detailed discussion of various features in Corporate Taxprep.

Available Sessions for this Seminar:

December 10, 2014 9:00 AM - 4:00 PM EST
December 10, 2014 9:00 AM - 12:00 PM EST
December 12, 2014 9:00 AM - 4:00 PM EST
December 12, 2014 9:00 AM - 12:00 PM EST




sem

Cantax Productivity Seminars - Fall 2014

Experience the productivity-boosting power of this full-day seminar! These interactive, information-packed sessions are an excellent opportunity for both new and experienced Cantax users to brush up on their "Cantax know-how" and get valuable information they need to prepare T1 personal and T2 corporate returns efficiently.

Available Sessions for this Seminar:

December 10, 2014 9:00 AM - 4:00 PM EST
December 11, 2014 9:00 AM - 4:00 PM EST




sem

Semba: intermediate to advanced

Hayden Library - GV1713.A5 S46 2015




sem

A perylenetetracarboxylic dianhydride and aniline-assembled supramolecular nanomaterial with multi-color electrochemiluminescence for a highly sensitive label-free immunoassay

J. Mater. Chem. B, 2020, 8,3676-3682
DOI: 10.1039/C9TB02368B, Paper
Wei Zhang, Yue Song, Yunyun Wang, Shuijian He, Lei Shang, Rongna Ma, Liping Jia, Huaisheng Wang
A novel multi-color ECL nanomaterial assembled from 3,4,9,10-perylenetetracarboxylic dianhydride (PTCDA) and aniline (An) was used for highly sensitive label-free CEA detection.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Toehold-regulated competitive assembly to accelerate the kinetics of graphene oxide-based biosensors

J. Mater. Chem. B, 2020, 8,3683-3689
DOI: 10.1039/C9TB02454A, Paper
Huan Du, Junbo Chen, Jie Zhang, Rongxing Zhou, Peng Yang, Xiandeng Hou, Nansheng Cheng
With toehold-regulation, the kinetics of graphene oxide-based biosensors can be accelerated.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Diblock Copolypeptoids: A Review of Phase Separation, Self-Assembly and Biological Applications

J. Mater. Chem. B, 2020, Accepted Manuscript
DOI: 10.1039/D0TB00477D, Review Article
Sunting Xuan, Ronald N Zuckermann
Polypeptoids are biocompatible, synthetically accessible, chemically and enzymatically stable, chemically diverse, and structurally controllable. As a bioinspired and biomimetic material, it has attracted considerable attention due to its great potential...
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Celebrities endorsement earnings on social media

Since January, more than 200,000 posts per month on Instagram, a picture-sharing app owned by Facebook, have been tagged with #ad,  #sp or #sponsored, according to Captiv8, an analytics platform that connects brands to social media influencers. Hiring such influencers allows companies to reach a vast network of potential customers: Mr Ronaldo has a combined following of 240m people across Facebook, Instagram and Twitter.

complete article




sem

Report of the Parliamentary Delegation to the 39th AIPA General Assembly, September 2018

Australia. Parliament. Delegation to the ASEAN Inter-Parliamentary Assembly General Assembly




sem

Bioinformatics and phylogenetics: seminal contributions of Bernard Moret / Tandy Warnow, editor

Online Resource





sem

Critical discourse analysis of Chinese advertisement: case studies of household appliance advertisements from 1981 to 1996 / Chong Wang

Online Resource




sem

[ASAP] Efficient Low-Cost All-Flexible Microcavity Semitransparent Polymer Solar Cells Enabled by Polymer Flexible One-Dimensional Photonic Crystals

ACS Applied Materials & Interfaces
DOI: 10.1021/acsami.0c03508




sem

[ASAP] Genetic Engineering-Facilitated Coassembly of Synthetic Bacterial Cells and Magnetic Nanoparticles for Efficient Heavy Metal Removal

ACS Applied Materials & Interfaces
DOI: 10.1021/acsami.0c04512




sem

[ASAP] Preorganization Increases the Self-Assembling Ability and Antitumor Efficacy of Peptide Nanomedicine

ACS Applied Materials & Interfaces
DOI: 10.1021/acsami.0c02572




sem

[ASAP] Aggregation-Dependent Photoreactive Hemicyanine Assembly as a Photobactericide

ACS Applied Materials & Interfaces
DOI: 10.1021/acsami.0c03894




sem

Selective host–guest chemistry, self-assembly and conformational preferences of m-xylene macrocycles probed by ion-mobility spectrometry mass spectrometry

Phys. Chem. Chem. Phys., 2020, 22,9290-9300
DOI: 10.1039/C9CP06938K, Paper
Benjamin A. Link, Ammon J. Sindt, Linda S. Shimizu, Thanh D. Do
Ion-mobility spectrometry mass spectrometry successfully captures selective host–guest chemistry of m-xylene macrocycles; notably, a tetrahedral, dimeric Zn complex.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Monolayer Bi2Se3−xTex: novel two-dimensional semiconductors with excellent stability and high electron mobility

Phys. Chem. Chem. Phys., 2020, 22,9685-9692
DOI: 10.1039/D0CP00729C, Paper
Yifan Liu, Yuanfeng Xu, Yanju Ji, Hao Zhang
The bandgaps for monolayers Bi2Se3, Bi2Se2Te and Bi2SeTe2 decrease under moderate strains ranging from −4% to 10%, and the predicted electron mobilities are high, reaching 2708 cm2 V−1 s−1 for Bi2SeTe2.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

[ASAP] Oxaazabicyclooctene Oxides, Another Type of Bridgehead Nitrones: Diastereoselective Assembly from Acetylene Gas, Ketones, and Hydroxyl Amine

The Journal of Organic Chemistry
DOI: 10.1021/acs.joc.0c00742




sem

Accelerated non-crosslinking assembly of DNA-functionalized nanoparticles in alcoholic solvents: for application in the identification of clear liquors

Analyst, 2020, 145,3229-3235
DOI: 10.1039/D0AN00029A, Paper
Luyang Wang, Guoqing Wang, Yali Shi, Lan Zhang, Ran An, Tohru Takarada, Mizuo Maeda, Xingguo Liang
Accelerated aggregation of DNA-functionalized gold nanoparticles is discovered in alcoholic solvents upon adding full-match DNA and is potentially useful for the identification of Baijiu.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

[ASAP] A Modular Method for Directing Protein Self-Assembly

ACS Synthetic Biology
DOI: 10.1021/acssynbio.9b00504




sem

[ASAP] Correction to Toggling Preassembly with Single-Site Mutation Switches the Cytotoxic Mechanism of Cationic Amphipathic Peptides

Journal of Medicinal Chemistry
DOI: 10.1021/acs.jmedchem.0c00608




sem

[ASAP] Treating Cancer by Spindle Assembly Checkpoint Abrogation: Discovery of Two Clinical Candidates, BAY 1161909 and BAY 1217389, Targeting MPS1 Kinase

Journal of Medicinal Chemistry
DOI: 10.1021/acs.jmedchem.9b02035




sem

[ASAP] Ruthenium(II) Complex Containing a Redox-Active Semiquinonate Ligand as a Potential Chemotherapeutic Agent: From Synthesis to <italic toggle="yes">In Vivo</italic> Studies

Journal of Medicinal Chemistry
DOI: 10.1021/acs.jmedchem.0c00431




sem

[ASAP] Assembling Pharma Resources to Tackle Diseases of Underserved Populations

ACS Medicinal Chemistry Letters
DOI: 10.1021/acsmedchemlett.0c00051




sem

Simultaneous co-assembly of fenofibrate and ketoprofen peptide for the dual-targeted treatment of nonalcoholic fatty liver disease (NAFLD)

Chem. Commun., 2020, 56,4922-4925
DOI: 10.1039/D0CC00513D, Communication
Zhongyan Wang, Chuanrui Ma, Yuna Shang, Lijun Yang, Jing Zhang, Cuihong Yang, Chunhua Ren, Jinjian Liu, Guanwei Fan, Jianfeng Liu
An ingenious co-assembled nanosystem based on fenofibrate and ketoprofen peptide for the dual-targeted treatment of NAFLD by reducing hepatic lipid accumulation and inflammatory responses.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Noncovalent structural locking of thermoresponsive polyion complex micelles, nanowires, and vesicles via polymerization-induced electrostatic self-assembly using an arginine-like monomer

Chem. Commun., 2020, 56,4954-4957
DOI: 10.1039/D0CC00427H, Communication
Qingqing Zhao, Qizhou Liu, Chao Li, Lei Cao, Lei Ma, Xiyu Wang, Yuanli Cai
The noncovalent locking of nanostructured thermoresponsive polyion complexes can be achieved via polymerization-induced electrostatic self-assembly (PIESA) using an arginine-like cationic monomer.
The content of this RSS Feed (c) The Royal Society of Chemistry




sem

Graphene-modulated assembly of zinc phthalocyanine on BiVO4 nanosheets for efficient visible-light catalytic conversion of CO2

Chem. Commun., 2020, 56,4926-4929
DOI: 10.1039/D0CC01518K, Communication
Ji Bian, Jiannan Feng, Ziqing Zhang, Jiawen Sun, Mingna Chu, Ling Sun, Xin Li, Dongyan Tang, Liqiang Jing
Graphene-modulated ZnPc/BiVO4 Z-scheme heterojunctions for efficient visible-light catalytic CO2 conversion are achieved by increasing the optimized amount of highly dispersed ZnPc.
The content of this RSS Feed (c) The Royal Society of Chemistry