ex

Stimulus-responsive multifunctions in a zinc(II) sulfate complex: photochromism, photoswitching nonlinear optical properties, amine detection and visual film application

J. Mater. Chem. C, 2024, Advance Article
DOI: 10.1039/D4TC04169K, Paper
Shuai Liang, Shi-Kun Yan, Yu-Xuan Wen, Yan-Rui Zhao, Jin Zhang, Ji-Xiang Hu
A novel complex combining photo- and amine-induced chromism, switchable photoluminescence, and photomodulated nonlinear optical properties has been prepared using electron-rich sulfate and electron-deficient 2,4,6-tri(4-pyridyl)-1,3,5-triazine.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




ex

Unveiling the mechanism behind shell thickness-dependent X-ray excited optical and persistent luminescence in lanthanide-doped core/shell nanoparticles

J. Mater. Chem. C, 2024, Advance Article
DOI: 10.1039/D4TC04256E, Paper
Zezhen Liu, Jingtao Zhao, Danyang Shen, Lei Lei, Shiqing Xu
We reveal an optimal shell thickness of approximately 3 nm for both XEOL and XEPL of homogeneous NaYF4:Tb@NaYF4 and heterogeneous NaYF4:Tb@NaLuF4 core/shell NPs.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




ex

Three-Step Change in Uniaxial Negative Thermal Expansion by Switching Supramolecular Motion Modes in Ferromagnetically-Coupled Nickel Dithiolate Lattice

J. Mater. Chem. C, 2024, Accepted Manuscript
DOI: 10.1039/D4TC03992K, Paper
Masato Haneda, Kiyonori Takahashi, Naohiro Hasuo, Rui-Kang Huang, Xue Chen, Jia-bing Wu, Shin-ichiro Noro, Takayoshi Nakamura
The wheel-axle-type supramolecule, ((+H3N-C2H4)2O)([18]crown-6)2, was introduced into the crystal as a counter cation of [Ni(dmit)2]. Within the crystal, [Ni(dmit)2] was arranged in a honeycomb-like structure and one-dimensional chains formed by...
The content of this RSS Feed (c) The Royal Society of Chemistry




ex

A novel deep-blue fluorescent emitter employed as an identical exciplex acceptor for solution-processed multi-color OLEDs

J. Mater. Chem. C, 2024, Accepted Manuscript
DOI: 10.1039/D4TC04073B, Paper
Jie Pan, Shiyue Zhang, Zhongxin Zhou, Yongtao Zhao, Shujing Jin, Yanju Luo, Weiguo Zhu, Yu Liu
Currently, exciplex-type thermally activated delayed fluorescence (TADF) materials are emerging as a promising strategy for optimizing organic light-emitting devices (OLEDs). However, achieving highly efficient multi-color OLEDs based on exciplexes remains...
The content of this RSS Feed (c) The Royal Society of Chemistry




ex

Enhanced energy storage performance with excellent thermal stability of BNT-based ceramics via the multiphase engineering strategy for pulsed power capacitor

J. Mater. Chem. C, 2024, Accepted Manuscript
DOI: 10.1039/D4TC04170D, Paper
Maqbool Ur Rehman, Aiwen Xie, Attaur Rahman, Yi Zhang, Ao Tian, Xuewen Jiang, Xinchun Xie, Cong Zhou, Tianyu Li, Liqiang Liu, Xin Gao, Xiaokuo Er, Ruzhong Zuo
High-temperature resistance and ultra-fast discharging are among the hot topics in the development of pulsed power systems. It is still a great challenge for dielectric materials to...
The content of this RSS Feed (c) The Royal Society of Chemistry





ex

SEE: Aishwarya Rai On The PS-1 Experience

'It's Mani Ratnam's dream project and to be a part of that is any artiste's dream.'




ex

What Irrfan's Son Babil Wants To Explore

'My father took his qualities with him when he left; now I will explore my own.'




ex

Waterlogged [electronic resource] : Examples and Procedures for Northwest Coast Archaeologists / edited by Kathryn Bernick.

1 online resource (x, 246 pages) : illustrations, maps




ex

Heartbleed exploit tl;dr

OpenSSL had a bug for several years which allowed attackers to untraceably read all your SSL traffic and some server memory. If you’re like me and have better things to do than reinvent the fix-wheel and you’re all like “WTFBBQ TL;DR”, here’s the absolute minimum of what anyone who runs a web server with SSL must […]




ex

National Family Health Survey: PMO exerts pressure, data is out

The secretaries had been asked to show results for the work being done.




ex

Excess pregnancy weight gain may make your child obese




ex

Form nodal agency to check online pre-natal sex selection ads: SC

Whatever is prohibited under the Act cannot go through websites, says Bench




ex

I write to rage, and rescue ourselves from collective amnesia, says Harsh Mander, speaking on India’s Covid experience

Harsh Mander’s new book demands accountability from the state for its handling of the pandemic’s impact




ex

Understanding deNOx mechanisms in transition metal exchanged zeolites

Chem. Soc. Rev., 2024, Advance Article
DOI: 10.1039/D3CS00468F, Review Article
Open Access
  This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
Jamal Abdul Nasir, Andrew M. Beale, C. Richard A. Catlow
Transition metal-containing zeolites have received considerable attention, owing to their application in the selective catalytic reduction of NOx. To understand their chemistry, both structural and mechanistic aspects at the atomic level are needed.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




ex

‘Oragadam, city’s next hotspot’

In conversation with Niranjan Hiranandani, CMD, Hiranandani Communities, on his plans for Chennai




ex

Reconstruction of residential complex

Your property-related legal queries answered by S.C. RAGHURAM, Partner, RANK Associates, a Chennai-based law firm




ex

Designing for the Unexpected

I’m not sure when I first heard Jeffrey Zeldman’s line about designing for “situations you haven’t imagined,” but it’s something that has stayed with me over the years. How do you create services for situations you can’t imagine? Or design products that work on devices yet to be invented?

Flash, Photoshop, and responsive design

When I first started designing websites, my go-to software was Photoshop. I created a 960px canvas and set about creating a layout that I would later drop content into. The development phase was about attaining pixel-perfect accuracy using fixed widths, fixed heights, and absolute positioning.

Ethan Marcotte’s talk at An Event Apart and subsequent article “Responsive Web Design” in A List Apart in 2010 changed all this. I was sold on responsive design as soon as I heard about it, but I was also terrified. The pixel-perfect designs full of magic numbers that I had previously prided myself on producing were no longer good enough.

The fear wasn’t helped by my first experience with responsive design. My first project was to take an existing fixed-width website and make it responsive. What I learned the hard way was that you can’t just add responsiveness at the end of a project. To create fluid layouts, you need to plan throughout the design phase.

A new way to design

Designing responsive or fluid sites has always been about removing limitations, producing content that can be viewed on any device. It relies on the use of percentage-based layouts, which I initially achieved with native CSS and utility classes:

.column-span-6 {
  width: 49%;
  float: left;
  margin-right: 0.5%;
  margin-left: 0.5%;
}


.column-span-4 {
  width: 32%;
  float: left;
  margin-right: 0.5%;
  margin-left: 0.5%;
}

.column-span-3 {
  width: 24%;
  float: left;
  margin-right: 0.5%;
  margin-left: 0.5%;
}

Then with Sass, so I could take advantage of @include mixins to reuse repeated blocks of code and move back to more semantic markup:

.logo {
  @include colSpan(6);
}

.search {
  @include colSpan(3);
}

.social-share {
  @include colSpan(3);
}
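
The colSpan mixin itself isn’t shown above; a minimal sketch of what it might have looked like, assuming a 12-column grid and the same 0.5% side gutters as the utility classes (the exact math is illustrative):

@mixin colSpan($columns) {
  /* assumes a 12-column grid; subtract the two 0.5% gutters */
  width: ($columns / 12 * 100%) - 1%;
  float: left;
  margin-right: 0.5%;
  margin-left: 0.5%;
}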

Media queries

The second ingredient for responsive design is media queries. Without them, content would shrink to fit the available space regardless of whether that content remained readable (The exact opposite problem occurred with the introduction of a mobile-first approach).

Components becoming too small at mobile breakpoints

Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more for phablets, wide screens, and so on. 

For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content, since with our Sass grid system in place, there was no way for the site owners to add content without amending the markup—something a small business owner might struggle with. This is because each row in the grid was defined using a div as a container. Adding content meant creating new row markup, which requires a level of HTML knowledge.

Row markup was a staple of early responsive design, present in all the widely used frameworks like Bootstrap and Skeleton.

<section class="row">
  <div class="column-span-4">1 of 7</div>
  <div class="column-span-4">2 of 7</div>
  <div class="column-span-4">3 of 7</div>
</section>

<section class="row">
  <div class="column-span-4">4 of 7</div>
  <div class="column-span-4">5 of 7</div>
  <div class="column-span-4">6 of 7</div>
</section>

<section class="row">
  <div class="column-span-4">7 of 7</div>
</section>
Components placed in the rows of a Sass grid

Another problem arose as I moved from a design agency building websites for small- to medium-sized businesses, to larger in-house teams where I worked across a suite of related sites. In those roles I started to work much more with reusable components. 

Our reliance on media queries resulted in components that were tied to common viewport sizes. If the goal of component libraries is reuse, then this is a real problem because you can only use these components if the devices you’re designing for correspond to the viewport sizes used in the pattern library—in the process not really hitting that “devices that don’t yet exist”  goal.

Then there’s the problem of space. Media queries allow components to adapt based on the viewport size, but what if I put a component into a sidebar, like in the figure below?

Components responding to the viewport width with media queries

Container queries: our savior or a false dawn?

Container queries have long been touted as an improvement upon media queries, but at the time of writing are unsupported in most browsers. There are JavaScript workarounds, but they can create dependency and compatibility issues. The basic theory underlying container queries is that elements should change based on the size of their parent container and not the viewport width, as seen in the following illustrations.

Components responding to their parent container with container queries

One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable because they can be picked up and placed anywhere in a layout. This is an important step in moving toward a form of component-based design that works at any size on any device.

In other words, responsive components to replace responsive layouts.

Container queries will help us move from designing pages that respond to the browser or device size to designing components that can be placed in a sidebar or in the main content, and respond accordingly.
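
As a sketch of the idea using the size-containment syntax (the .sidebar, .main-content, and .card selectors and the 400px breakpoint are purely illustrative):

.sidebar,
.main-content {
  container-type: inline-size; /* the element becomes the query target */
}

@container (min-width: 400px) {
  .card {
    display: flex; /* switch to a horizontal layout when the container allows it */
    gap: 10px;
  }
}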

My concern is that we are still using layout to determine when a design needs to adapt. This approach will always be restrictive, as we will still need pre-defined breakpoints. For this reason, my main question with container queries is, How would we decide when to change the CSS used by a component? 

A component library removed from context and real content is probably not the best place for that decision. 

As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio?

Cards responding to their parent container with container queries
Cards responding based on their own content

In this example, the dimensions of the container are not what should dictate the design; rather, the image is.

It’s hard to say for sure whether container queries will be a success story until we have solid cross-browser support for them. Responsive component libraries would definitely evolve how we design and would improve the possibilities for reuse and design at scale. But maybe we will always need to adjust these components to suit our content.

CSS is changing

Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design. The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes.

.wrapper {
  display: grid;
  grid-template-columns: repeat(auto-fit, 450px);
  gap: 10px;
}

The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and “flex” to fill available space. 

.wrapper {
  display: flex;
  flex-wrap: wrap;
  justify-content: space-between;
}

.child {
  flex-basis: 32%;
  margin-bottom: 20px;
}

The biggest benefit of all this is you don’t need to wrap elements in container rows. Without rows, content isn’t tied to page markup in quite the same way, allowing for removals or additions of content without additional development.

A traditional Grid layout without the usual row containers
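
With the .wrapper grid above, the same seven components from the earlier example can live in a single container and wrap on their own; a minimal sketch:

<section class="wrapper">
  <div>1 of 7</div>
  <div>2 of 7</div>
  <div>3 of 7</div>
  <div>4 of 7</div>
  <div>5 of 7</div>
  <div>6 of 7</div>
  <div>7 of 7</div>
</section>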

This is a big step forward when it comes to creating designs that allow for evolving content, but the real game changer for flexible designs is CSS Subgrid. 

Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they're given CMS access, like the illustration below?

Cards unable to respond to a sibling’s content changes

Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change.

Cards responding to content in sibling cards
.wrapper {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
  grid-template-rows: auto 1fr auto;
  gap: 10px;
}

.sub-grid {
  display: grid;
  grid-row: span 3;
  grid-template-rows: subgrid; /* sets rows to parent grid */
}

CSS Grid allows us to separate layout and content, thereby enabling flexible designs. Meanwhile, Subgrid allows us to create designs that can adapt in order to suit morphing content. Subgrid at the time of writing is only supported in Firefox but the above code can be implemented behind an @supports feature query. 
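
That progressive enhancement might look something like this, keeping the row span as a fallback and only applying subgrid where the browser understands it:

.sub-grid {
  display: grid;
  grid-row: span 3; /* fallback: the card still spans three parent rows */
}

@supports (grid-template-rows: subgrid) {
  .sub-grid {
    grid-template-rows: subgrid; /* rows align with the parent grid */
  }
}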

Intrinsic layouts 

I’d be remiss not to mention intrinsic layouts, the term created by Jen Simmons to describe a mixture of new and old CSS features used to create layouts that respond to available space. 

Responsive layouts have flexible columns using percentages. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won’t ever shrink so much that they render the content illegible.

fr units is a way to say I want you to distribute the extra space in this way, but...don’t ever make it smaller than the content that’s inside of it.

—Jen Simmons, “Designing Intrinsic Layouts”

Intrinsic layouts can also utilize a mixture of fixed and flexible units, allowing the content to dictate the space it takes up.

Slide from “Designing Intrinsic Layouts” by Jen Simmons
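
A minimal sketch of mixing fixed and flexible units in a grid (the track sizes here are illustrative): a fixed sidebar, a column sized by its content, and a flexible area that absorbs whatever space remains.

.wrapper {
  display: grid;
  /* fixed sidebar, content-sized column, flexible main area */
  grid-template-columns: 200px min-content 1fr;
  gap: 10px;
}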

What makes intrinsic design stand out is that it not only creates designs that can withstand future devices but also helps scale design without losing flexibility. Components and patterns can be lifted and reused without the prerequisite of having the same breakpoints or the same amount of content as in the previous implementation. 

We can now create designs that adapt to the space they have, the content within them, and the content around them. With an intrinsic approach, we can construct responsive components without depending on container queries.

Another 2010 moment?

This intrinsic approach should in my view be every bit as groundbreaking as responsive web design was ten years ago. For me, it’s another “everything changed” moment. 

But it doesn’t seem to be moving quite as fast; I haven’t yet had that same career-changing moment I had with responsive design, despite the widely shared and brilliant talk that brought it to my attention. 

One reason for that could be that I now work in a large organization, which is quite different from the design agency role I had in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Nowadays, projects use existing tools and frameworks and are often improvements to existing websites with an existing codebase. 

Another could be that I feel more prepared for change now. In 2010 I was new to design in general; the shift was frightening and required a lot of learning. Also, an intrinsic approach isn’t exactly all-new; it’s about using existing skills and existing CSS knowledge in a different way. 

You can’t framework your way out of a content problem

Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change. 

Responsive grid systems were all over the place ten years ago. With a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips.

Intrinsic design and frameworks do not go hand in hand quite so well because the benefit of having a selection of units is a hindrance when it comes to creating layout templates. The beauty of intrinsic design is combining different units and experimenting with techniques to get the best for your content.

And then there are design tools. We probably all, at some point in our careers, used Photoshop templates for desktop, tablet, and mobile devices to drop designs in and show how the site would look at all three stages.

How do you do that now, with each component responding to content and layouts flexing as and when they need to? This type of design must happen in the browser, which personally I’m a big fan of. 

The debate about “whether designers should code” is another that has rumbled on for years. When designing a digital product, we should, at the very least, design for a best- and worst-case scenario when it comes to content. To do this in a graphics-based software package is far from ideal. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it still work? Is the design too reliant on the current content?

Personally, I look forward to the day intrinsic design is the standard for design, when a design component can be truly flexible and adapt to both its space and content with no reliance on device or container dimensions.

Content first 

Content is not constant. After all, to design for the unknown or unexpected we need to account for content changes, like our earlier Subgrid card example, where the cards responded to adjustments in their own content and in the content of sibling elements.

Thankfully, there’s more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes.

Instead of old markup hacks like this—

<p>
  <span class="first-line">First line of text with different styling</span>...
</p>

—we can target content based on where it appears.

.element::first-line {
  font-size: 1.4em;
}

.element::first-letter {
  color: red;
}

Much bigger additions to CSS include logical properties, which change the way we construct designs using logical dimensions (start and end) instead of physical ones (left and right), and the math functions min(), max(), and clamp(), which work in CSS Grid templates too.

This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past, this was often handled with Sass mixins, but those were generally limited to switching from left-to-right to right-to-left orientation.

In the Sass version, directional variables need to be set.

$direction: rtl;
$opposite-direction: ltr;

$start-direction: right;
$end-direction: left;

These variables can be used as values—

body {
  direction: $direction;
  text-align: $start-direction;
}

—or as properties.

margin-#{$end-direction}: 10px;
padding-#{$start-direction}: 10px;

However, now we have native logical properties, removing the reliance on both Sass (or a similar tool) and pre-planning that necessitated using variables throughout a codebase. These properties also start to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and in direction.

margin-inline-end: 10px;
padding-inline-start: 10px;

There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start.

Like the earlier examples, these properties help to build out designs that aren’t constrained to one language; the design will reflect the content’s needs.

Fixed and fluid 

We briefly covered the power of combining fixed and fluid widths in intrinsic layouts. The min() and max() functions are a similar concept, allowing you to specify a fixed value alongside a flexible alternative.

For min() this means pairing a fluid value with a fixed maximum.

.element {
  width: min(50%, 300px);
}

The element above will be 50% of its container as long as the element’s width doesn’t exceed 300px.

For max() we pair a fluid value with a fixed minimum.

.element {
  width: max(50%, 300px);
}

Now the element will be 50% of its container as long as the element’s width is at least 300px. This means we can set limits but allow content to react to the available space. 

The clamp() function builds on this by allowing us to set a preferred value with a third parameter. Now we can allow the element to shrink or grow if it needs to without getting to a point where it becomes unusable.

.element {
  width: clamp(300px, 50%, 600px);
}

This time, the element’s width will be 50% (the preferred value) of its container but never less than 300px and never more than 600px.

With these techniques, we have a content-first approach to responsive design. We can separate content from markup, meaning the changes users make will not affect the design. We can start to future-proof designs by planning for unexpected changes in language or direction. And we can increase flexibility by setting desired dimensions alongside flexible alternatives, allowing for more or less content to be displayed correctly.

Situation first

Thanks to what we’ve discussed so far, we can cover device flexibility by changing our approach, designing around content and space instead of catering to devices. But what about that last bit of Jeffrey Zeldman’s quote, “...situations you haven’t imagined”?

It’s a very different thing to design for someone seated at a desktop computer as opposed to someone using a mobile phone and moving through a crowded street in glaring sunshine. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks.

This is why choice is so important. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users.

Thankfully, there is a lot we can do to provide choice.

Responsible design 

“There are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.”

I Used the Web for a Day on a 50 MB Budget

Chris Ashton

One of the biggest assumptions we make is that people interacting with our designs have a good wifi connection and a wide screen monitor. But in the real world, our users may be commuters traveling on trains or other forms of transport using smaller mobile devices that can experience drops in connectivity. There is nothing more frustrating than a web page that won’t load, but there are ways we can help users use less data or deal with sporadic connectivity.

The srcset attribute allows the browser to decide which image to serve. This means we can create smaller ‘cropped’ images to display on mobile devices in turn using less bandwidth and less data.

<img
  src="image-file.jpg"
  srcset="large.jpg 1024w,
          medium.jpg 640w,
          small.jpg 320w"
  alt="Image alt text" />

Preloading, using rel="preload" on a link element, can also help us to think about how and when media is downloaded. It can be used to tell the browser about any critical assets that need to be downloaded with high priority, improving perceived performance and the user experience. 

<link rel="stylesheet" href="style.css"> <!--Standard stylesheet markup-->
<link rel="preload" href="style.css" as="style"> <!--Preload stylesheet markup-->

There’s also native lazy loading, which indicates assets that should only be downloaded when they are needed.

<img src="image.png" loading="lazy" alt="…">

With srcset, preload, and lazy loading, we can start to tailor a user’s experience based on the situation they find themselves in. What none of this does, however, is allow the user themselves to decide what they want downloaded, as the decision is usually the browser’s to make. 

So how can we put users in control?

The return of media queries 

Media queries have always been about much more than device sizes. They allow content to adapt to different situations, with screen size being just one of them.

We’ve long been able to check for media types like print and speech and features such as hover, resolution, and color. These checks allow us to provide options that suit more than one scenario; it’s less about one-size-fits-all and more about serving adaptable content. 

As of this writing, the Media Queries Level 5 spec is still under development. It introduces some really exciting queries that in the future will help us design for multiple other unexpected situations.

For example, there’s a light-level feature that allows you to modify styles if a user is in sunlight or darkness. Paired with custom properties, these features allow us to quickly create designs or themes for specific environments.

@media (light-level: normal) {
  :root {
    --background-color: #fff;
    --text-color: #0b0c0c;
  }
}

@media (light-level: dim) {
  :root {
    --background-color: #efd226;
    --text-color: #0b0c0c;
  }
}

Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser so people don’t have to spend time making each site they visit more usable. 
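
A minimal sketch using the two widely supported preference queries (the property values are illustrative):

/* Reduce animation for users who have asked for less motion */
@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.01ms !important;
    transition-duration: 0.01ms !important;
  }
}

/* Follow the user's operating system color scheme */
@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #0b0c0c;
    --text-color: #fff;
  }
}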

Media queries like this go beyond choices made by a browser to grant more control to the user.

Expect the unexpected

In the end, the one thing we should always expect is for things to change. Devices in particular change faster than we can keep up, with foldable screens already on the market.

We can’t design the same way we have for this ever-changing landscape, but we can design for content. By putting content first and allowing that content to adapt to whatever space surrounds it, we can create more robust, flexible designs that increase the longevity of our products. 

A lot of the CSS discussed here is about moving away from layouts and putting content at the heart of design. From responsive components to fixed and fluid units, there is so much more we can do to take a more intrinsic approach. Even better, we can test these techniques during the design phase by designing in-browser and watching how our designs adapt in real-time.

When it comes to unexpected situations, we need to make sure our products are usable when people need them, whenever and wherever that might be. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with user-preference-based media queries. 

Good design for the unexpected should allow for change, provide choice, and give control to those we serve: our users themselves.




ex

Sustainable Web Design, An Excerpt

In the 1950s, many in the elite running community had begun to believe it wasn’t possible to run a mile in less than four minutes. Runners had been attempting it since the late 19th century and were beginning to draw the conclusion that the human body simply wasn’t built for the task. 

But on May 6, 1954, Roger Bannister took everyone by surprise. It was a cold, wet day in Oxford, England—conditions no one expected to lend themselves to record-setting—and yet Bannister did just that, running a mile in 3:59.4 and becoming the first person in the record books to run a mile in under four minutes. 

This shift in the benchmark had profound effects; the world now knew that the four-minute mile was possible. Bannister’s record lasted only forty-six days, when it was snatched away by Australian runner John Landy. Then a year later, three runners all beat the four-minute barrier together in the same race. Since then, over 1,400 runners have officially run a mile in under four minutes; the current record is 3:43.13, held by Moroccan athlete Hicham El Guerrouj.

We achieve far more when we believe that something is possible, and we will believe it’s possible only when we see someone else has already done it—and as with human running speed, so it is with what we believe are the hard limits for how a website needs to perform.

Establishing standards for a sustainable web

In most major industries, the key metrics of environmental performance are fairly well established, such as miles per gallon for cars or energy per square meter for homes. The tools and methods for calculating those metrics are standardized as well, which keeps everyone on the same page when doing environmental assessments. In the world of websites and apps, however, we aren’t held to any particular environmental standards, and only recently have gained the tools and methods we need to even make an environmental assessment.

The primary goal in sustainable web design is to reduce carbon emissions. However, it’s almost impossible to actually measure the amount of CO2 produced by a web product. We can’t measure the fumes coming out of the exhaust pipes on our laptops. The emissions of our websites are far away, out of sight and out of mind, coming out of power stations burning coal and gas. We have no way to trace the electrons from a website or app back to the power station where the electricity is being generated and actually know the exact amount of greenhouse gas produced. So what do we do? 

If we can’t measure the actual carbon emissions, then we need to find what we can measure. The primary factors that could be used as indicators of carbon emissions are:

  1. Data transfer 
  2. Carbon intensity of electricity

Let’s take a look at how we can use these metrics to quantify the energy consumption, and in turn the carbon footprint, of the websites and web apps we create.

Data transfer

Most researchers use kilowatt-hours per gigabyte (kWh/GB) as the measure of energy efficiency for data transferred over the internet when a website or application is used. This provides a great reference point for energy consumption and carbon emissions. As a rule of thumb, the more data transferred, the more energy used in the data center, telecoms networks, and end user devices.

For web pages, data transfer for a single visit can be most easily estimated by measuring the page weight, meaning the transfer size of the page in kilobytes the first time someone visits the page. It’s fairly easy to measure using the developer tools in any modern web browser. Often your web hosting account will include statistics for the total data transfer of any web application (Fig 2.1).

Fig 2.1: The Kinsta hosting dashboard displays data transfer alongside traffic volumes. If you divide data transfer by visits, you get the average data per visit, which can be used as a metric of efficiency.

The nice thing about page weight as a metric is that it allows us to compare the efficiency of web pages on a level playing field without confusing the issue with constantly changing traffic volumes. 

There is plenty of scope for reducing page weight. By early 2020, the median page weight was 1.97 MB for setups the HTTP Archive classifies as “desktop” and 1.77 MB for “mobile,” with desktop weights increasing 36 percent since January 2016 and mobile page weights nearly doubling in the same period (Fig 2.2). Roughly half of this data transfer is image files, making images the single biggest source of carbon emissions on the average website. 

History clearly shows us that our web pages can be smaller, if only we set our minds to it. While most technologies become ever more energy efficient, including the underlying technology of the web such as data centers and transmission networks, websites themselves are a technology that becomes less efficient as time goes on.

Fig 2.2: The historical page weight data from HTTP Archive can teach us a lot about what is possible in the future.

You might be familiar with the concept of performance budgeting as a way of focusing a project team on creating faster user experiences. For example, we might specify that the website must load in a maximum of one second on a broadband connection and three seconds on a 3G connection. Much like speed limits while driving, performance budgets are upper limits rather than vague suggestions, so the goal should always be to come in under budget.

Designing for fast performance does often lead to reduced data transfer and emissions, but it isn’t always the case. Web performance is often more about the subjective perception of load times than it is about the true efficiency of the underlying system, whereas page weight and transfer size are more objective measures and more reliable benchmarks for sustainable web design. 

We can set a page weight budget in reference to a benchmark of industry averages, using data from sources like HTTP Archive. We can also benchmark page weight against competitors or the old version of the website we’re replacing. For example, we might set a maximum page weight budget as equal to our most efficient competitor, or we could set the benchmark lower to guarantee we are best in class. 

If we want to take it to the next level, then we could also start looking at the transfer size of our web pages for repeat visitors. Although page weight for the first time someone visits is the easiest thing to measure, and easy to compare on a like-for-like basis, we can learn even more if we start looking at transfer size in other scenarios too. For example, visitors who load the same page multiple times will likely have a high percentage of the files cached in their browser, meaning they don’t need to transfer all of the files on subsequent visits. Likewise, a visitor who navigates to new pages on the same website will likely not need to load the full page each time, as some global assets from areas like the header and footer may already be cached in their browser. Measuring transfer size at this next level of detail can help us learn even more about how we can optimize efficiency for users who regularly visit our pages, and enable us to set page weight budgets for additional scenarios beyond the first visit.

Page weight budgets are easy to track throughout a design and development process. Although they don’t tell us about carbon emissions and energy consumption directly, they give us a clear indication of efficiency relative to other websites. And as transfer size is an effective analog for energy consumption, we can actually use it to estimate energy consumption too.

In summary, reduced data transfer translates to energy efficiency, a key factor to reducing carbon emissions of web products. The more efficient our products, the less electricity they use, and the less fossil fuels need to be burned to produce the electricity to power them. But as we’ll see next, since all web products demand some power, it’s important to consider the source of that electricity, too.

Carbon intensity of electricity

Regardless of energy efficiency, the level of pollution caused by digital products depends on the carbon intensity of the energy being used to power them. Carbon intensity is a term used to define the grams of CO2 produced for every kilowatt-hour of electricity (gCO2/kWh). This varies widely, with renewable energy sources and nuclear having an extremely low carbon intensity of less than 10 gCO2/kWh (even when factoring in their construction); whereas fossil fuels have very high carbon intensity of approximately 200–400 gCO2/kWh. 

Most electricity comes from national or state grids, where energy from a variety of different sources is mixed together with varying levels of carbon intensity. The distributed nature of the internet means that a single user of a website or app might be using energy from multiple different grids simultaneously; a website user in Paris uses electricity from the French national grid to power their home internet and devices, but the website’s data center could be in Dallas, USA, pulling electricity from the Texas grid, while the telecoms networks use energy from everywhere between Dallas and Paris.

We don’t have control over the full energy supply of web services, but we do have some control over where we host our projects. With a data center using a significant proportion of the energy of any website, locating the data center in an area with low carbon energy will tangibly reduce its carbon emissions. Danish startup Tomorrow reports and maps this user-contributed data, and a glance at their map shows how, for example, choosing a data center in France will have significantly lower carbon emissions than a data center in the Netherlands (Fig 2.3).

Fig 2.3: Tomorrow’s electricityMap shows live data for the carbon intensity of electricity by country.

That said, we don’t want to locate our servers too far away from our users; it takes energy to transmit data through the telecoms networks, and the further the data travels, the more energy is consumed. Just like food miles, we can think of the distance from the data center to the website’s core user base as “megabyte miles”—and we want it to be as small as possible.

Using the distance itself as a benchmark, we can use website analytics to identify the country, state, or even city where our core user group is located and measure the distance from that location to the data center used by our hosting company. This will be a somewhat fuzzy metric as we don’t know the precise center of mass of our users or the exact location of a data center, but we can at least get a rough idea. 

For example, if a website is hosted in London but the primary user base is on the West Coast of the USA, then we could look up the distance from London to San Francisco, which is 5,300 miles. That’s a long way! We can see that hosting it somewhere in North America, ideally on the West Coast, would significantly reduce the distance and thus the energy used to transmit the data. In addition, locating our servers closer to our visitors helps reduce latency and delivers better user experience, so it’s a win-win.

Converting it back to carbon emissions

If we combine carbon intensity with a calculation for energy consumption, we can calculate the carbon emissions of our websites and apps. A tool my team created does this by measuring the data transfer over the wire when loading a web page, calculating the amount of electricity associated, and then converting that into a figure for CO2 (Fig 2.4). It also factors in whether or not the web hosting is powered by renewable energy.
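
As a rough sketch of the arithmetic, using purely illustrative assumptions rather than measured figures:

  • Data transfer: a 2 MB page viewed 10,000 times transfers roughly 20 GB.
  • Energy: at an assumed intensity of 0.8 kWh/GB, that is roughly 16 kWh.
  • Carbon: on a fossil-heavy grid at roughly 400 gCO2/kWh, that works out to about 6.4 kg of CO2.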

If you want to take it to the next level and tailor the data more accurately to the unique aspects of your project, the Energy and Emissions Worksheet accompanying this book shows you how.

Fig 2.4: The Website Carbon Calculator shows how the Riverford Organic website embodies their commitment to sustainability, being both low carbon and hosted in a data center using renewable energy.

With the ability to calculate carbon emissions for our projects, we could actually take a page weight budget one step further and set carbon budgets as well. CO2 is not a metric commonly used in web projects; we’re more familiar with kilobytes and megabytes, and can fairly easily look at design options and files to assess how big they are. Translating that into carbon adds a layer of abstraction that isn’t as intuitive—but carbon budgets do focus our minds on the primary thing we’re trying to reduce, and support the core objective of sustainable web design: reducing carbon emissions.

Browser Energy

Data transfer might be the simplest and most complete analog for energy consumption in our digital projects, but by giving us one number to represent the energy used in the data center, the telecoms networks, and the end user’s devices, it can’t offer us insights into the efficiency in any specific part of the system.

One part of the system we can look at in more detail is the energy used by end users’ devices. As front-end web technologies become more advanced, the computational load is increasingly moving from the data center to users’ devices, whether they be phones, tablets, laptops, desktops, or even smart TVs. Modern web browsers allow us to implement more complex styling and animation on the fly using CSS and JavaScript. Furthermore, JavaScript libraries such as Angular and React allow us to create applications where the “thinking” work is done partly or entirely in the browser. 

All of these advances are exciting and open up new possibilities for what the web can do to serve society and create positive experiences. However, more computation in the user’s web browser means more energy used by their devices. This has implications not just environmentally, but also for user experience and inclusivity. Applications that put a heavy processing load on the user’s device can inadvertently exclude users with older, slower devices and cause batteries on phones and laptops to drain faster. Furthermore, if we build web applications that require the user to have up-to-date, powerful devices, people throw away old devices much more frequently. This isn’t just bad for the environment, but it puts a disproportionate financial burden on the poorest in society.

In part because the tools are limited, and partly because there are so many different models of devices, it’s difficult to measure website energy consumption on end users’ devices. One tool we do currently have is the Energy Impact monitor inside the developer console of the Safari browser (Fig 2.5).

Fig 2.5: The Energy Impact meter in Safari (on the right) shows how a website consumes CPU energy.

You know when you load a website and your computer’s cooling fans start spinning so frantically you think it might actually take off? That’s essentially what this tool is measuring. 

It shows us the percentage of CPU used and the duration of CPU usage when loading the web page, and uses these figures to generate an energy impact rating. It doesn’t give us precise data for the amount of electricity used in kilowatts, but the information it does provide can be used to benchmark how efficiently your websites use energy and set targets for improvement.




ex

Design for Safety, An Excerpt

Antiracist economist Kim Crayton says that “intention without strategy is chaos.” We’ve discussed how our biases, assumptions, and inattention toward marginalized and vulnerable groups lead to dangerous and unethical tech—but what, specifically, do we need to do to fix it? The intention to make our tech safer is not enough; we need a strategy.

This chapter will equip you with that plan of action. It covers how to integrate safety principles into your design work in order to create tech that’s safe, how to convince your stakeholders that this work is necessary, and how to respond to the critique that what we actually need is more diversity. (Spoiler: we do, but diversity alone is not the antidote to fixing unethical, unsafe tech.)

The process for inclusive safety

When you are designing for safety, your goals are to:

  • identify ways your product can be used for abuse,
  • design ways to prevent the abuse, and
  • provide support for vulnerable users to reclaim power and control.

The Process for Inclusive Safety is a tool to help you reach those goals (Fig 5.1). It’s a methodology I created in 2018 to capture the various techniques I was using when designing products with safety in mind. Whether you are creating an entirely new product or adding to an existing feature, the Process can help you make your product safe and inclusive. The Process includes five general areas of action:

  • Conducting research
  • Creating archetypes
  • Brainstorming problems
  • Designing solutions
  • Testing for safety
Fig 5.1: Each aspect of the Process for Inclusive Safety can be incorporated into your design process where it makes the most sense for you. The times given are estimates to help you incorporate the stages into your design plan.

The Process is meant to be flexible—it won’t make sense for teams to implement every step in some situations. Use the parts that are relevant to your unique work and context; this is meant to be something you can insert into your existing design practice.

And once you use it, if you have an idea for making it better or simply want to provide context of how it helped your team, please get in touch with me. It’s a living document that I hope will continue to be a useful and realistic tool that technologists can use in their day-to-day work.

If you’re working on a product specifically for a vulnerable group or survivors of some form of trauma, such as an app for survivors of domestic violence, sexual assault, or drug addiction, be sure to read Chapter 7, which covers that situation explicitly, as it needs to be handled a bit differently. The guidelines here are for prioritizing safety when designing a more general product that will have a wide user base (which, we already know from statistics, will include certain groups that should be protected from harm). Chapter 7 is focused on products that are specifically for vulnerable groups and people who have experienced trauma.

Step 1: Conduct research

Design research should include a broad analysis of how your tech might be weaponized for abuse as well as specific insights into the experiences of survivors and perpetrators of that type of abuse. At this stage, you and your team will investigate issues of interpersonal harm and abuse, and explore any other safety, security, or inclusivity issues that might be a concern for your product or service, like data security, racist algorithms, and harassment.

Broad research

Your project should begin with broad, general research into similar products and issues around safety and ethical concerns that have already been reported. For example, a team building a smart home device would do well to understand the multitude of ways that existing smart home devices have been used as tools of abuse. If your product will involve AI, seek to understand the potentials for racism and other issues that have been reported in existing AI products. Nearly all types of technology have some kind of potential or actual harm that’s been reported on in the news or written about by academics. Google Scholar is a useful tool for finding these studies.

Specific research: Survivors

When possible and appropriate, include direct research (surveys and interviews) with people who are experts in the forms of harm you have uncovered. Ideally, you’ll want to interview advocates working in the space of your research first so that you have a more solid understanding of the topic and are better equipped to not retraumatize survivors. If you’ve uncovered possible domestic violence issues, for example, the experts you’ll want to speak with are survivors themselves, as well as workers at domestic violence hotlines, shelters, other related nonprofits, and lawyers.

Especially when interviewing survivors of any kind of trauma, it is important to pay people for their knowledge and lived experiences. Don’t ask survivors to share their trauma for free, as this is exploitative. While some survivors may not want to be paid, you should always make the offer in the initial ask. An alternative to payment is to donate to an organization working against the type of violence that the interviewee experienced. We’ll talk more about how to appropriately interview survivors in Chapter 6.

Specific research: Abusers

It’s unlikely that teams aiming to design for safety will be able to interview self-proclaimed abusers or people who have broken laws around things like hacking. Don’t make this a goal; rather, try to get at this angle in your general research. Aim to understand how abusers or bad actors weaponize technology to use against others, how they cover their tracks, and how they explain or rationalize the abuse.

Step 2: Create archetypes

Once you’ve finished conducting your research, use your insights to create abuser and survivor archetypes. Archetypes are not personas, as they’re not based on real people that you interviewed and surveyed. Instead, they’re based on your research into likely safety issues, much like when we design for accessibility: we don’t need to have found a group of blind or low-vision users in our interview pool to create a design that’s inclusive of them. Instead, we base those designs on existing research into what this group needs. Personas typically represent real users and include many details, while archetypes are broader and can be more generalized.

The abuser archetype is someone who will look at the product as a tool to perform harm (Fig 5.2). They may be trying to harm someone they don’t know through surveillance or anonymous harassment, or they may be trying to control, monitor, abuse, or torment someone they know personally.

Fig 5.2: Harry Oleson, an abuser archetype for a fitness product, is looking for ways to stalk his ex-girlfriend through the fitness apps she uses.

The survivor archetype is someone who is being abused with the product. There are various situations to consider in terms of the archetype’s understanding of the abuse and how to put an end to it: Do they need proof of abuse they already suspect is happening, or are they unaware they’ve been targeted in the first place and need to be alerted (Fig 5.3)?

Fig 5.3: The survivor archetype Lisa Zwaan suspects her husband is weaponizing their home’s IoT devices against her, but in the face of his insistence that she simply doesn’t understand how to use the products, she’s unsure. She needs some kind of proof of the abuse.

You may want to make multiple survivor archetypes to capture a range of different experiences. They may know that the abuse is happening but not be able to stop it, like when an abuser locks them out of IoT devices; or they know it’s happening but don’t know how, such as when a stalker keeps figuring out their location (Fig 5.4). Include as many of these scenarios as you need to in your survivor archetype. You’ll use these later on when you design solutions to help your survivor archetypes achieve their goals of preventing and ending abuse.

Fig 5.4: The survivor archetype Eric Mitchell knows he’s being stalked by his ex-boyfriend Rob but can’t figure out how Rob is learning his location information.

It may be useful for you to create persona-like artifacts for your archetypes, such as the three examples shown. Instead of focusing on the demographic information we often see in personas, focus on their goals. The goals of the abuser will be to carry out the specific abuse you’ve identified, while the goals of the survivor will be to prevent abuse, understand that abuse is happening, make ongoing abuse stop, or regain control over the technology that’s being used for abuse. Later, you’ll brainstorm how to prevent the abuser’s goals and assist the survivor’s goals.

And while the “abuser/survivor” model fits most cases, it doesn’t fit all, so modify it as you need to. For example, if you uncovered an issue with security, such as the ability for someone to hack into a home camera system and talk to children, the malicious hacker would get the abuser archetype and the child’s parents would get the survivor archetype.

Step 3: Brainstorm problems

After creating archetypes, brainstorm novel abuse cases and safety issues. “Novel” means things not found in your research; you’re trying to identify completely new safety issues that are unique to your product or service. The goal with this step is to exhaust every effort of identifying harms your product could cause. You aren’t worrying about how to prevent the harm yet—that comes in the next step.

How could your product be used for any kind of abuse, outside of what you’ve already identified in your research? I recommend setting aside at least a few hours with your team for this process.

If you’re looking for somewhere to start, try doing a Black Mirror brainstorm. This exercise is based on the show Black Mirror, which features stories about the dark possibilities of technology. Try to figure out how your product would be used in an episode of the show—the most wild, awful, out-of-control ways it could be used for harm. When I’ve led Black Mirror brainstorms, participants usually end up having a good deal of fun (which I think is great—it’s okay to have fun when designing for safety!). I recommend time-boxing a Black Mirror brainstorm to half an hour, and then dialing it back and using the rest of the time thinking of more realistic forms of harm.

After you’ve identified as many opportunities for abuse as possible, you may still not feel confident that you’ve uncovered every potential form of harm. A healthy amount of anxiety is normal when you’re doing this kind of work. It’s common for teams designing for safety to worry, “Have we really identified every possible harm? What if we’ve missed something?” If you’ve spent at least four hours coming up with ways your product could be used for harm and have run out of ideas, go to the next step.

It’s impossible to guarantee you’ve thought of everything; instead of aiming for 100 percent assurance, recognize that you’ve taken this time and have done the best you can, and commit to continuing to prioritize safety in the future. Once your product is released, your users may identify new issues that you missed; aim to receive that feedback graciously and course-correct quickly.

Step 4: Design solutions

At this point, you should have a list of ways your product can be used for harm as well as survivor and abuser archetypes describing opposing user goals. The next step is to identify ways to design against the identified abuser’s goals and to support the survivor’s goals. This step is a good one to insert alongside existing parts of your design process where you’re proposing solutions for the various problems your research uncovered.

Some questions to ask yourself to help prevent harm and support your archetypes include:

  • Can you design your product in such a way that the identified harm cannot happen in the first place? If not, what roadblocks can you put up to prevent the harm from happening?
  • How can you make the victim aware that abuse is happening through your product?
  • How can you help the victim understand what they need to do to make the problem stop?
  • Can you identify any types of user activity that would indicate some form of harm or abuse? Could your product help the user access support?

In some products, it’s possible to proactively recognize that harm is happening. For example, a pregnancy app might be modified to allow the user to report that they were the victim of an assault, which could trigger an offer of resources from local and national organizations. This sort of proactiveness is not always possible, but it’s worth taking a half hour to discuss whether any type of user activity would indicate some form of harm or abuse, and how your product could assist the user in receiving help in a safe manner.

That said, use caution: you don’t want to do anything that could put a user in harm’s way if their devices are being monitored. If you do offer some kind of proactive help, always make it voluntary, and think through other safety issues, such as the need to keep the user in-app in case an abuser is checking their search history. We’ll walk through a good example of this in the next chapter.
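
As a rough illustration of what “voluntary and in-app” can mean in practice, here is a minimal sketch in Python. SupportResource, offer_support, and the sample entry are hypothetical placeholders, not a real organization, phone number, or product API; the point is that nothing is surfaced unless the user explicitly asks for it, and the resources are displayed inside the app rather than through an external link that would show up in browsing history.

# A sketch of a voluntary, in-app support offer. All names and the sample
# entry below are hypothetical placeholders, not a real hotline or API.
from dataclasses import dataclass

@dataclass
class SupportResource:
    name: str
    phone: str         # shown inside the app; no external link is opened
    description: str

RESOURCES = [
    SupportResource("Example national hotline", "000-000-0000",
                    "Confidential, 24/7 support (placeholder entry)"),
]

def offer_support(user_opted_in: bool) -> list:
    # Nothing is surfaced, logged to shared views, or sent to other account
    # members unless the user explicitly asks, so a monitored device shows
    # nothing the user did not choose to view.
    if not user_opted_in:
        return []
    return RESOURCES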

Step 5: Test for safety

The final step is to test your prototypes from the point of view of your archetypes: the person who wants to weaponize the product for harm and the victim of the harm who needs to regain control over the technology. Just like any other kind of product testing, at this point you’ll aim to rigorously test out your safety solutions so that you can identify gaps and correct them, validate that your designs will help keep your users safe, and feel more confident releasing your product into the world.

Ideally, safety testing happens along with usability testing. If you’re at a company that doesn’t do usability testing, you might be able to use safety testing to cleverly perform both; a user who goes through your design attempting to weaponize the product against someone else can also be encouraged to point out interactions or other elements of the design that don’t make sense to them.

You’ll want to conduct safety testing on either your final prototype or the actual product if it’s already been released. There’s nothing wrong with testing an existing product that wasn’t designed with safety goals in mind from the outset—“retrofitting” it for safety is a good thing to do.

Remember that testing for safety involves testing from the perspective of both an abuser and a survivor, though it may not always make sense to do both. Likewise, if you made multiple survivor archetypes to capture multiple scenarios, you’ll want to test from the perspective of each one.

As with other sorts of usability testing, you as the designer are most likely too close to the product and its design by this point to be a valuable tester; you know the product too well. Instead of doing it yourself, set up testing as you would with other usability testing: find someone who is not familiar with the product and its design, set the scene, give them a task, encourage them to think out loud, and observe how they attempt to complete it.

Abuser testing

The goal of this testing is to understand how easy it is for someone to weaponize your product for harm. Unlike with usability testing, you want to make it impossible, or at least difficult, for them to achieve their goal. Reference the goals in the abuser archetype you created earlier, and use your product in an attempt to achieve them.

For example, for a fitness app with GPS-enabled location features, we can imagine that the abuser archetype would have the goal of figuring out where his ex-girlfriend now lives. With this goal in mind, you’d try everything possible to figure out the location of another user who has their privacy settings enabled. You might try to see her running routes, view any available information on her profile, view anything available about her location (which she has set to private), and investigate the profiles of any other users somehow connected with her account, such as her followers.

If by the end of this you’ve managed to uncover some of her location data, despite her having set her profile to private, you know now that your product enables stalking. Your next step is to go back to step 4 and figure out how to prevent this from happening. You may need to repeat the process of designing solutions and testing them more than once.
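
Parts of this abuser-perspective pass can also be captured as an automated regression check so the stalking vector can’t quietly reappear later. The sketch below is a pytest-style test against a hypothetical fitness-app API; the api_client and private_user_id fixtures, the endpoints, and the field names are all assumptions to adapt to your own product, not a real library’s interface.

# A pytest-style sketch of an abuser-perspective check for a fitness app.
# The fixtures, endpoints, and field names are hypothetical assumptions.
LOCATION_FIELDS = {"lat", "lng", "route", "start_point", "home_area"}

def leaked_location_fields(payload: dict) -> set:
    # Recursively collect any location-like keys that carry a value.
    leaks = set()
    for key, value in payload.items():
        if key in LOCATION_FIELDS and value is not None:
            leaks.add(key)
        if isinstance(value, dict):
            leaks |= leaked_location_fields(value)
    return leaks

def test_private_profile_hides_location(api_client, private_user_id):
    # Surfaces an abuser archetype would try first: the profile itself,
    # the activity feed (running routes), and the follower graph.
    endpoints = (
        f"/users/{private_user_id}",
        f"/users/{private_user_id}/activities",
        f"/users/{private_user_id}/followers",
    )
    for endpoint in endpoints:
        response = api_client.get(endpoint)
        assert leaked_location_fields(response.json()) == set(), (
            f"{endpoint} exposes location data for a private profile")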

Survivor testing

Survivor testing involves identifying how to give information and power back to the survivor. It might not always make sense, depending on the product or context: thwarting the abuser archetype’s attempt to stalk someone also satisfies the survivor archetype’s goal of not being stalked, so separate testing from the survivor’s perspective wouldn’t be needed in that case.

However, there are cases where it makes sense. For example, for a smart thermostat, a survivor archetype’s goal would be to understand who or what is changing the temperature when they aren’t doing it themselves. You could test this by looking for the thermostat’s history log and checking for usernames, actions, and times; if you couldn’t find that information, you would have more work to do in step 4.

Another goal might be regaining control of the thermostat once the survivor realizes the abuser is remotely changing its settings. Your test would involve attempting to figure out how to do this: are there instructions that explain how to remove another user and change the password, and are they easy to find? This might again reveal that more work is needed to make it clear to the user how they can regain control of the device or account.
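
If that history log doesn’t exist yet, the step 4 work it points to might start from something like the sketch below: every change records who made it, what it was, and when, and removing another user is itself an in-app, logged action. The class and field names are hypothetical, not any real thermostat’s API.

# Minimal sketch of an audit log a survivor archetype could consult.
# All names here are illustrative assumptions, not a real device's API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ThermostatEvent:
    username: str        # which account made the change
    action: str          # e.g. "set_temperature", "user_removed"
    value: str           # e.g. "85F", or the removed user's name
    timestamp: datetime
    remote: bool         # True if the change came from outside the home network

class Thermostat:
    def __init__(self) -> None:
        self.history: list[ThermostatEvent] = []
        self.users: set[str] = {"owner"}

    def set_temperature(self, username: str, value: str, remote: bool) -> None:
        # Every change is attributed and timestamped, so "who or what is
        # changing the temperature" is answerable from within the app.
        self.history.append(ThermostatEvent(
            username, "set_temperature", value,
            datetime.now(timezone.utc), remote))

    def remove_user(self, acting_user: str, target_user: str) -> None:
        # Regaining control: removing another user is possible in-app and
        # leaves its own audit trail.
        self.users.discard(target_user)
        self.history.append(ThermostatEvent(
            acting_user, "user_removed", target_user,
            datetime.now(timezone.utc), remote=False))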

Stress testing

To make your product more inclusive and compassionate, consider adding stress testing. This concept comes from Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. The authors pointed out that personas typically center people who are having a good day—but real users are often anxious, stressed out, having a bad day, or even experiencing tragedy. These are called “stress cases,” and testing your products for users in stress-case situations can help you identify places where your design lacks compassion. Design for Real Life has more details about what it looks like to incorporate stress cases into your design as well as many other great tactics for compassionate design.




ex

Exploring chemical concepts through theory and computation [electronic resource] / edited by Shubin Liu.

Weinheim, Germany : Wiley-VCH, [2024]




ex

Bharathiar University schedules odd-semester exams of 2024-25 session in conformity with pre-Covid pattern

The exams are set to begin on November 13




ex

Self-financing colleges in Coimbatore reach out to Union Education Ministry seeking exclusive categorisation in NIRF ranking




ex

Mega food festival and wedding expo in Coimbatore on November 30




ex

How to explore the misty hills of Attuvampatti Crush in Kodaikanal?

Breathe in pollution-free air, enjoy farm-to-table food, and learn what makes Kodai plums so unique




ex

Sex traffic (2004) / directed by David Yates [DVD].

[U.K.] : InD DVD ; Fremantle Media, [2006]




ex

Racing extinction (2015) / starring and directed by Louie Psihoyos [DVD].

[U.K.] : Discovery Communications, [2016]




ex

Birdman, or, (The unexpected virtue of ignorance) (2014) / written and directed by Alejandro González Iñárritu [DVD].

[U.K.] : Twentieth Century Fox Home Entertainment, [2015]




ex

The air conditioning development index




ex

Multiplex monopoly






ex

Baku Climate Talks: G77, China Reject Framework For Draft Text On New Climate Finance Goal

G77 and China rejected the substantive framework for a draft negotiating text prepared by the co-chairs of the Ad-Hoc Work Programme on the New Collective Quantified Goal (NCQG), arguing that it does not accurately reflect the concerns raised by developing countries. 




ex

Experimental factors influencing the bioaccessibility and the oxidative potential of transition metals from welding fumes

Environ. Sci.: Processes Impacts, 2024, Advance Article
DOI: 10.1039/D3EM00546A, Paper
Manuella Ghanem, Laurent Y. Alleman, Davy Rousset, Esperanza Perdrix, Patrice Coddeville
Experimental conditions such as extraction methods and storage conditions induce biases in the measurement of the oxidative potential and the bioaccessibility of transition metals from welding fumes.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




ex

Co-exposure to tire wear particles and nickel inhibits mung bean yield by reducing nutrient uptake

Environ. Sci.: Processes Impacts, 2024, Advance Article
DOI: 10.1039/D4EM00070F, Paper
Imran Azeem, Muhammad Adeel, Noman Shakoor, Muhammad Zain, Hamida Bibi, Kamran Azeem, Yuanbo Li, Muhammad Nadeem, Umair Manan, Peng Zhang, Jason C. White, Yukui Rui
Tire wear particles and nickel have detrimental effects on plant health by causing blockage and altering nutrient homeostasis, ultimately reducing plant yield.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




ex

Telangana has not lost anything after BRS poll loss, except four people losing their jobs: Revanth takes a dig at KCR 

Telangana CM lists the initiatives taken, exams conducted, jobs secured in the last 10 months




ex

KTR cites example to criticize Congress for hypocrisy on crony capitalism

The BRS leader faulted the Congress govt in Telangana for spending ₹300 crore on media ads in Maharashtra in support of the Congress (MVA) there




ex

Telangana | Exempt teachers from survey duties: Balala Hakkula Sankshema Sangham

Half-a-day school will mean that when the children return, there is no one at home, as the majority of parents of children at govt schools leave for work in the morning and do not return till evening




ex

Former player and ex-Federer coach Peter Lundgren passes away

Peter Lundgren was part of the wave of Swedish tennis players in the 1980s that followed in the wake of icon Bjorn Borg, playing alongside the likes of Mats Wilander and Stefan Edberg.




ex

U.S. Open: Taylor Fritz gets past Alexander Zverev to reach his first Grand Slam semifinal

Now he is headed to the final four at the U.S. Open, where he will meet either No. 9 Grigor Dimitrov of Bulgaria or No. 20 Frances Tiafoe of the United States




ex

Rohit Rajpal expects an even fight against Sweden in Davis Cup




ex

Ramkumar exits in the pre-quarterfinals




ex

Jannik Sinner wins Shanghai Masters to extend Djokovic's wait for 100th title

The 23-year-old Sinner came out on top in a tiebreak in an enthralling opening set, before taking the one-sided second set to become the youngest-ever champion in Shanghai




ex

Djokovic tops Nadal before Sinner beats Alcaraz for the title at the Six Kings Slam exhibition

“The last dance was an epic one... Tennis will miss you,” wrote Novak Djokovic after he topped Rafael Nadal




ex

Investigating students' expectations and engagement in general and organic chemistry laboratory courses

Chem. Educ. Res. Pract., 2025, Advance Article
DOI: 10.1039/D4RP00277F, Paper
Elizabeth B. Vaughan, Saraswathi Tummuru, Jack Barbera
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




ex

Hybridized local and charge transfer dendrimers with near-unity exciton utilization for enabling high-efficiency solution-processed hyperfluorescent OLEDs

Mater. Horiz., 2024, 11,1741-1751
DOI: 10.1039/D3MH01860A, Communication
Yixiao Yin, Songkun Zeng, Chen Xiao, Peng Fan, Dong Jin Shin, Ki Ju Kim, Hyewon Nam, Qian Ma, Huili Ma, Weiguo Zhu, Taekyung Kim, Jun Yeob Lee, Yafei Wang
Two dendrimers, D-TTT-H and D-TTT-tBu, were prepared; both exhibit a hot-exciton process and TADF characteristics simultaneously in the solid state. A solution-processable OLED employing D-TTT-H as a sensitizer showed an EQEmax of 30.88%.
The content of this RSS Feed (c) The Royal Society of Chemistry




ex

Correction: Solution-processed white OLEDs with power efficiency over 90 lm W−1 by triplet exciton management with a high triplet energy level interfacial exciplex host and a high reverse intersystem crossing rate blue TADF emitter

Mater. Horiz., 2024, 11,1817-1817
DOI: 10.1039/D4MH90021A, Correction
Open Access
  This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
Liang Chen, Yufei Chang, Song Shi, Shumeng Wang, Lixiang Wang
The content of this RSS Feed (c) The Royal Society of Chemistry




ex

Exploring negative thermal expansion materials with bulk framework structures and their relevant scaling relationships through multi-step machine learning

Mater. Horiz., 2024, Advance Article
DOI: 10.1039/D3MH01509B, Communication
Yu Cai, Chunyan Wang, Huanli Yuan, Yuan Guo, Jun-Hyung Cho, Xianran Xing, Yu Jia
We use a multi-step ML method to mine 1000 potential NTE materials from the ICSD, MPD and COD databases, and the presented phase diagram can serve as a preliminary criterion for judging and designing new NTE materials.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




ex

Excellent thermomagnetic power generation for harvesting waste heat via a second-order ferromagnetic transition

Mater. Horiz., 2024, Advance Article
DOI: 10.1039/D3MH02225K, Communication
Haodong Chen, Xianliang Liu, Yao Liu, Longlong Xie, Ziyuan Yu, Kaiming Qiao, Mingze Liu, Fengxia Hu, Baogen Shen, R. V. Ramanujan, Ke Chu, Hu Zhang
NiMnIn Heusler alloys with a second-order ferromagnetic transition show good thermomagnetic generation (TMG) performance with zero hysteresis and a long service life, making them better candidates for practical TMG applications.
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




ex

Differential-targeting core-shell microneedle patch with coordinated and prolonged releases of mangiferin and MSC-derived exosomes for scarless skin regeneration

Mater. Horiz., 2024, Accepted Manuscript
DOI: 10.1039/D3MH01910A, Communication
Shang LYU, Qi Liu, Ho-Yin Yuen, Huizhi Xie, Yuhe Yang, Kelvin Yeung, Chak-Yin Tang, Shuqi Wang, Yaxiong Liu, Bin Li, Yong He, Xin Zhao
Microneedles for skin regeneration are conventionally restricted by uncontrollable multi-drug release, limited types of drugs, and poor wound adhesion. Here, a novel core-shell microneedle patch is developed for scarless skin...
The content of this RSS Feed (c) The Royal Society of Chemistry