Swine flu: 3-member team for Telangana (www.thehindu.com, 21 Jan 2015, Telangana)
Affordable swine flu vaccine that never made it (www.thehindu.com, 29 Jan 2015, Andhra Pradesh)
Reasons for surge in H1N1 cases yet to be identified (www.thehindu.com, 14 Feb 2015, Policy & Issues). People cautioned against self-medication.
13 more test positive for swine flu in Jammu and Kashmir (www.thehindu.com, 19 Feb 2015, Other States)
9 IPS trainees test positive for swine flu at Hyderabad police academy (www.thehindu.com, 21 Feb 2015, Hyderabad). Three out of the nine officers, who were admitted to various private hospitals in Hyderabad, have already been discharged while the remaining are in stable condition.
Avoid sitting for long hours for a healthy heart (www.thehindu.com, 7 Mar 2015, Policy & Issues)
‘Nurses must be allowed to perform abortions’ (www.thehindu.com, 4 Apr 2015, Other States). The government’s proposal has drawn sharp criticism from medical associations.
Residents in Chennai hope for clear waterways and relief from mosquito menace (www.thehindu.com, 12 Aug 2015, Chennai)
Show cause notices sent to 10 Indian doctors for receiving payment from drug companies (www.thehindu.com, 28 Oct 2015, Health)
AIDS epidemic worse than ever before, says Mark Feinberg (www.thehindu.com, 12 Sep 2016, Policy & Issues)
Centre forms high-level panel to monitor bird flu situation (www.thehindu.com, 26 Oct 2016, India)
Form nodal agency to check online pre-natal sex selection ads: SC (www.thehindu.com, 16 Nov 2016, India). Whatever is prohibited under the Act cannot go through websites, says Bench.
Biden stresses case for computer chips before crucial Senate vote (www.thehindu.com, 26 Jul 2022, Technology). President Joe Biden is asking Congress to send him a bipartisan bill designed to boost the country’s computer chips industry.
Nanomaterial-based regulation of redox metabolism for enhancing cancer therapy (Chem. Soc. Rev., 2024, Advance Article; DOI: 10.1039/D4CS00404C; Review Article). Xiaodan Jia, Yue Wang, Yue Qiao, Xiue Jiang, Jinghong Li. This review provides a comprehensive summary of the dysregulation of redox metabolism in cancer cells and the advantages and latest advances in nanomaterial-assisted redox metabolic regulation therapy.
Chemical strategies for antisense antibiotics (Chem. Soc. Rev., 2024, Advance Article; DOI: 10.1039/D4CS00238E; Tutorial Review, Open Access, CC BY-NC 3.0). Mathijs J. Pals, Alexander Lindberg, Willem A. Velema. Antibacterial resistance is a severe threat to modern medicine and human health; antisense technology offers an attractive modality for future antibiotics.
Nanoplasmonic biosensors for environmental sustainability and human health (Chem. Soc. Rev., 2024, 53, 10491-10522; DOI: 10.1039/D3CS00941F; Review Article). Wenpeng Liu, Kyungwha Chung, Subin Yu, Luke P. Lee. This review examines recent developments in nanoplasmonic biosensors that identify analytes from the environment and human physiological parameters for monitoring sustainable global healthcare for humans, the environment, and the earth.
PEDOT-based stretchable optoelectronic materials and devices for bioelectronic interfaces (Chem. Soc. Rev., 2024, 53, 10575-10603; DOI: 10.1039/D4CS00541D; Review Article). Weizhen Li, Yiming Li, Ziyu Song, Yi-Xuan Wang, Wenping Hu. This review summarizes strategies and mechanisms for improving the conductivity, mechanical properties, and stability of PEDOT:PSS, as well as reliable micropatterning technologies and optoelectronic devices applied at bio-interfaces.
Black titanium oxide: synthesis, modification, characterization, physiochemical properties, and emerging applications for energy conversion and storage, and environmental sustainability (Chem. Soc. Rev., 2024, 53, 10660-10708; DOI: 10.1039/D4CS00420E; Review Article, Open Access, CC BY 3.0). Xuelan Hou, Yiyang Li, Hang Zhang, Peter D. Lund, James Kwan, Shik Chi Edman Tsang. The current synthesis methods, modifications, and characterizations of black titanium oxide (B-TiOx) are reviewed, along with a nuanced understanding of its physicochemical properties and applications in green energy and the environment.
Current development, optimisation strategies and future perspectives for lead-free dielectric ceramics in high field and high energy density capacitors (Chem. Soc. Rev., 2024, 53, 10761-10790; DOI: 10.1039/D4CS00536H; Review Article, Open Access, CC BY 3.0). Hareem Zubairi, Zhilun Lu, Yubo Zhu, Ian M. Reaney, Ge Wang. This review highlights the remarkable advancements and future trends in bulk ceramics, MLCCs and ceramic thin films for lead-free high field and high energy density capacitors.
Multidimensionally ordered mesoporous intermetallics: Frontier nanoarchitectonics for advanced catalysis (Chem. Soc. Rev., 2024, Advance Article; DOI: 10.1039/D4CS00484A; Tutorial Review). Hao Lv, Ben Liu. This perspective summarizes recent progress in the rational design and synthesis of multidimensionally ordered mesoporous intermetallics, and proposes new frontier nanoarchitectonics for designing high-performance functional nanocatalysts.
Harnessing DNA computing and nanopore decoding for practical applications: from informatics to microRNA-targeting diagnostics (Chem. Soc. Rev., 2024, Advance Article; DOI: 10.1039/D3CS00396E; Tutorial Review, Open Access, CC BY 3.0). Sotaro Takiguchi, Nanami Takeuchi, Vasily Shenshin, Guillaume Gines, Anthony J. Genot, Jeff Nivala, Yannick Rondelez, Ryuji Kawano. This tutorial review provides fundamentals on DNA computing and nanopore-based decoding, highlighting recent advances towards microRNA-targeting diagnostic applications.
Intrinsic immunomodulatory hydrogels for chronic inflammation (Chem. Soc. Rev., 2024, Advance Article; DOI: 10.1039/D4CS00450G; Tutorial Review). Yuna Qian, Jiayi Ding, Rui Zhao, Yang Song, Jiyoung Yoo, Huiyeon Moon, Seyoung Koo, Jong Seung Kim, Jianliang Shen. This tutorial review presents the development of advanced immunomodulatory hydrogels strategically designed to address chronic inflammation through their intrinsic properties.
Lignin-based porous carbon adsorbents for CO2 capture (Chem. Soc. Rev., 2024, Advance Article; DOI: 10.1039/D4CS00923A; Review Article, Open Access, CC BY 3.0). Daniel Barker-Rothschild, Jingqian Chen, Zhangmin Wan, Scott Renneckar, Ingo Burgert, Yong Ding, Yi Lu, Orlando J. Rojas. This review covers the state of the art in the production of lignin-based carbon adsorbents for CO2 capture, discussing lignin chemistry and properties, traditional and emerging synthesis approaches, and fundamentals for rational design.
Recent synthetic strategies for the functionalization of fused bicyclic heteroaromatics using organo-Li, -Mg and -Zn reagents (Chem. Soc. Rev., 2024, 53, 11045-11099; DOI: 10.1039/D4CS00369A; Review Article, Open Access, CC BY 3.0). Vasudevan Dhayalan, Vishal S. Dodke, Marappan Pradeep Kumar, Hatice Seher Korkmaz, Anja Hoffmann-Röder, Pitchamuthu Amaladass, Rambabu Dandela, Ragupathy Dhanusuraman, Paul Knochel. This review presents various new strategies for the functionalization of five- and six-membered fused heteroaromatics; these strategies enable rapid access to complex heterocyclic compounds.
Light/X-ray/ultrasound activated delayed photon emission of organic molecular probes for optical imaging: mechanisms, design strategies, and biomedical applications (Chem. Soc. Rev., 2024, 53, 10970-11003; DOI: 10.1039/D4CS00599F; Review Article). Rui Qu, Xiqun Jiang, Xu Zhen. Versatile energy inputs, including light, X-ray and ultrasound, activate organic molecular probes to undergo different delay mechanisms, including charge separation, triplet exciton stabilization and chemical trap, for delayed photon emission.
On May 25, who will Angela Merkel root for? (www.thehindubusinessline.com, 8 May 2013, B Baskar)
Spare a thought for this man too (www.thehindubusinessline.com, 7 Nov 2013, B Baskar)
For the love of heritage (www.thehindu.com, 1 Jul 2016, Property Plus). Chennai lags behind almost all major cities in heritage management, with no conservation policies, laws, or popular participation.
‘Best time for smart investors’ (www.thehindu.com, 1 Jul 2016, Property Plus). A. S. Sivaramakrishnan, Head-Residential Services, CBRE India, says the Chennai market is ripe for a turnaround in fortunes.
An investment guide for commercial properties (www.thehindu.com, 8 Jul 2016, Property Plus). There are over 30 important technical specifications that such properties must meet.
Why the monsoons are the optimal time for home purchases (www.thehindu.com, 22 Jul 2016, Property Plus). It is the ideal time to judge the potential investment value of a new property.
Emerging affordable destinations in Chennai (www.thehindu.com, 29 Jul 2016, Property Plus). Even as the sale velocity for mid-income and luxury housing in Chennai fluctuates, the city’s affordable housing segment has been witnessing steady demand on the outskirts.
Transforming urban spaces (www.thehindu.com, 8 Aug 2016, Property Plus). By Nandhini Sundar. How many of our cities have witnessed active intervention by urban planners and architects? Interestingly, the scene in most cities worldwide is in stark contrast to that prevailing here.
Charges for a sale deed (www.thehindu.com, 8 Aug 2016, Property Plus). G. Shyam Sundar writes on stamp duty and registration fee charges for a sale deed across States.
Norms for inclusive development (www.thehindu.com, 2 Sep 2016, Property Plus). Piyush Gandhi writes on development norms for the differently-abled and senior citizens.
A checklist for women buyers (www.thehindu.com, 21 Oct 2016, Property Plus). A comprehensive guide for the single woman buying her first home.
RERA: A blessing in disguise for realty sector (www.thehindu.com, 21 Oct 2016, Property Plus). By Shrinivas Rao. It is expected to increase pan-India sales besides counteracting lengthy and cost-intensive dispute resolution mechanisms.
For a winter makeover (www.thehindu.com, 11 Nov 2016, Property Plus). Make your home cosy with these nine tips.
Grills for character (www.thehindu.com, 4 Nov 2022, Real Estate). Window grills are in vogue; they can be made with different materials to enhance the building facade.
‘₹45 lakh cap does not work for big cities’ (www.thehindu.com, 17 Mar 2023, Real Estate). With land and construction costs going up, the government needs to reinvent its affordable housing policies.
Affordable housing breaks ground (www.thehindu.com, 17 Mar 2023, Real Estate). In spite of a massive dip in 2020 due to the pandemic and lockdown, 2021 saw the launch of nearly 2.37 lakh units in this segment, which significantly powers the country’s economic growth.
NDA candidate Krishnakumar slams UDF, LDF for ignoring development in Palakkad (www.thehindu.com, 12 Nov 2024, Kerala)
Wayanad bypoll: 595 polling booths for over 6 lakh voters in Nilambur, Wandoor, Eranad (www.thehindu.com, 12 Nov 2024, Kerala)
for Designing for the Unexpected By Published On :: 2021-07-15T13:00:00+00:00 I’m not sure when I first heard this quote, but it’s something that has stayed with me over the years. How do you create services for situations you can’t imagine? Or design products that work on devices yet to be invented? Flash, Photoshop, and responsive design When I first started designing websites, my go-to software was Photoshop. I created a 960px canvas and set about creating a layout that I would later drop content in. The development phase was about attaining pixel-perfect accuracy using fixed widths, fixed heights, and absolute positioning. Ethan Marcotte’s talk at An Event Apart and subsequent article “Responsive Web Design” in A List Apart in 2010 changed all this. I was sold on responsive design as soon as I heard about it, but I was also terrified. The pixel-perfect designs full of magic numbers that I had previously prided myself on producing were no longer good enough. The fear wasn’t helped by my first experience with responsive design. My first project was to take an existing fixed-width website and make it responsive. What I learned the hard way was that you can’t just add responsiveness at the end of a project. To create fluid layouts, you need to plan throughout the design phase. A new way to design Designing responsive or fluid sites has always been about removing limitations, producing content that can be viewed on any device. It relies on the use of percentage-based layouts, which I initially achieved with native CSS and utility classes: .column-span-6 { width: 49%; float: left; margin-right: 0.5%; margin-left: 0.5%; } .column-span-4 { width: 32%; float: left; margin-right: 0.5%; margin-left: 0.5%; } .column-span-3 { width: 24%; float: left; margin-right: 0.5%; margin-left: 0.5%; } Then with Sass so I could take advantage of @includes to re-use repeated blocks of code and move back to more semantic markup: .logo { @include colSpan(6); } .search { @include colSpan(3); } .social-share { @include colSpan(3); } Media queries The second ingredient for responsive design is media queries. Without them, content would shrink to fit the available space regardless of whether that content remained readable (The exact opposite problem occurred with the introduction of a mobile-first approach). Components becoming too small at mobile breakpoints Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more for phablets, wide screens, and so on. For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content, since with our Sass grid system in place, there was no way for the site owners to add content without amending the markup—something a small business owner might struggle with. This is because each row in the grid was defined using a div as a container. Adding content meant creating new row markup, which requires a level of HTML knowledge. Row markup was a staple of early responsive design, present in all the widely used frameworks like Bootstrap and Skeleton. 
<section class="row"> <div class="column-span-4">1 of 7</div> <div class="column-span-4">2 of 7</div> <div class="column-span-4">3 of 7</div> </section> <section class="row"> <div class="column-span-4">4 of 7</div> <div class="column-span-4">5 of 7</div> <div class="column-span-4">6 of 7</div> </section> <section class="row"> <div class="column-span-4">7 of 7</div> </section> Components placed in the rows of a Sass grid Another problem arose as I moved from a design agency building websites for small- to medium-sized businesses, to larger in-house teams where I worked across a suite of related sites. In those roles I started to work much more with reusable components. Our reliance on media queries resulted in components that were tied to common viewport sizes. If the goal of component libraries is reuse, then this is a real problem because you can only use these components if the devices you’re designing for correspond to the viewport sizes used in the pattern library—in the process not really hitting that “devices that don’t yet exist” goal. Then there’s the problem of space. Media queries allow components to adapt based on the viewport size, but what if I put a component into a sidebar, like in the figure below? Components responding to the viewport width with media queries Container queries: our savior or a false dawn? Container queries have long been touted as an improvement upon media queries, but at the time of writing are unsupported in most browsers. There are JavaScript workarounds, but they can create dependency and compatibility issues. The basic theory underlying container queries is that elements should change based on the size of their parent container and not the viewport width, as seen in the following illustrations. Components responding to their parent container with container queries One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable because they can be picked up and placed anywhere in a layout. This is an important step in moving toward a form of component-based design that works at any size on any device. In other words, responsive components to replace responsive layouts. Container queries will help us move from designing pages that respond to the browser or device size to designing components that can be placed in a sidebar or in the main content, and respond accordingly. My concern is that we are still using layout to determine when a design needs to adapt. This approach will always be restrictive, as we will still need pre-defined breakpoints. For this reason, my main question with container queries is, How would we decide when to change the CSS used by a component? A component library removed from context and real content is probably not the best place for that decision. As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio? Cards responding to their parent container with container queries Cards responding based on their own content In this example, the dimensions of the container are not what should dictate the design; rather, the image is. It’s hard to say for sure whether container queries will be a success story until we have solid cross-browser support for them. Responsive component libraries would definitely evolve how we design and would improve the possibilities for reuse and design at scale. 
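As a practical aside, the @container syntax has since shipped in the major browsers. Below is a minimal sketch of a component-level breakpoint; the .card naming and the 400px threshold are illustrative assumptions rather than values taken from this article:

/* The parent opts in as an inline-size query container. */
.card-wrapper {
  container-type: inline-size;
}

.card {
  display: flex;
  flex-direction: column; /* stacked by default in narrow containers */
}

/* Respond to the width of the container, not the viewport. */
@container (min-width: 400px) {
  .card {
    flex-direction: row;
  }
}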
But maybe we will always need to adjust these components to suit our content. CSS is changing Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design. The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes. .wrapper { display: grid; grid-template-columns: repeat(auto-fit, 450px); gap: 10px; } The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and “flex” to fill available space. .wrapper { display: flex; flex-wrap: wrap; justify-content: space-between; } .child { flex-basis: 32%; margin-bottom: 20px; } The biggest benefit of all this is you don’t need to wrap elements in container rows. Without rows, content isn’t tied to page markup in quite the same way, allowing for removals or additions of content without additional development. A traditional Grid layout without the usual row containers This is a big step forward when it comes to creating designs that allow for evolving content, but the real game changer for flexible designs is CSS Subgrid. Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they're given CMS access, like the illustration below? Cards unable to respond to a sibling’s content changes Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change. Cards responding to content in sibling cards .wrapper { display: grid; grid-template-columns: repeat(auto-fit, minmax(150px, 1fr)); grid-template-rows: auto 1fr auto; gap: 10px; } .sub-grid { display: grid; grid-row: span 3; grid-template-rows: subgrid; /* sets rows to parent grid */ } CSS Grid allows us to separate layout and content, thereby enabling flexible designs. Meanwhile, Subgrid allows us to create designs that can adapt in order to suit morphing content. Subgrid at the time of writing is only supported in Firefox but the above code can be implemented behind an @supports feature query. Intrinsic layouts I’d be remiss not to mention intrinsic layouts, the term created by Jen Simmons to describe a mixture of new and old CSS features used to create layouts that respond to available space. Responsive layouts have flexible columns using percentages. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won’t ever shrink so much that they render the content illegible. fr units is a way to say I want you to distribute the extra space in this way, but...don’t ever make it smaller than the content that’s inside of it. —Jen Simmons, “Designing Intrinsic Layouts” Intrinsic layouts can also utilize a mixture of fixed and flexible units, allowing the content to dictate the space it takes up. Slide from “Designing Intrinsic Layouts” by Jen Simmons What makes intrinsic design stand out is that it not only creates designs that can withstand future devices but also helps scale design without losing flexibility. 
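To make the idea concrete, here is a small sketch of an intrinsic grid that mixes content-based, flexible, and capped track sizes in the spirit of those examples; the specific values are placeholders rather than recommendations:

.intrinsic-layout {
  display: grid;
  /* content-sized label column, fluid middle column with a sensible floor,
     and a reading column that grows only up to a comfortable measure */
  grid-template-columns: min-content minmax(10rem, 1fr) minmax(auto, 60ch);
  gap: 1rem;
}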
Components and patterns can be lifted and reused without the prerequisite of having the same breakpoints or the same amount of content as in the previous implementation. We can now create designs that adapt to the space they have, the content within them, and the content around them. With an intrinsic approach, we can construct responsive components without depending on container queries. Another 2010 moment? This intrinsic approach should in my view be every bit as groundbreaking as responsive web design was ten years ago. For me, it’s another “everything changed” moment. But it doesn’t seem to be moving quite as fast; I haven’t yet had that same career-changing moment I had with responsive design, despite the widely shared and brilliant talk that brought it to my attention. One reason for that could be that I now work in a large organization, which is quite different from the design agency role I had in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Nowadays, projects use existing tools and frameworks and are often improvements to existing websites with an existing codebase. Another could be that I feel more prepared for change now. In 2010 I was new to design in general; the shift was frightening and required a lot of learning. Also, an intrinsic approach isn’t exactly all-new; it’s about using existing skills and existing CSS knowledge in a different way. You can’t framework your way out of a content problem Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change. Responsive grid systems were all over the place ten years ago. With a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips. Intrinsic design and frameworks do not go hand in hand quite so well because the benefit of having a selection of units is a hindrance when it comes to creating layout templates. The beauty of intrinsic design is combining different units and experimenting with techniques to get the best for your content. And then there are design tools. We probably all, at some point in our careers, used Photoshop templates for desktop, tablet, and mobile devices to drop designs in and show how the site would look at all three stages. How do you do that now, with each component responding to content and layouts flexing as and when they need to? This type of design must happen in the browser, which personally I’m a big fan of. The debate about “whether designers should code” is another that has rumbled on for years. When designing a digital product, we should, at the very least, design for a best- and worst-case scenario when it comes to content. To do this in a graphics-based software package is far from ideal. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it still work? Is the design too reliant on the current content? Personally, I look forward to the day intrinsic design is the standard for design, when a design component can be truly flexible and adapt to both its space and content with no reliance on device or container dimensions. Content first Content is not constant. After all, to design for the unknown or unexpected we need to account for content changes like our earlier Subgrid card example that allowed the cards to respond to adjustments to their own content and the content of sibling elements. 
Thankfully, there’s more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes. Instead of old markup hacks like this— <p> <span class="first-line">First line of text with different styling</span>... </p> —we can target content based on where it appears. .element::first-line { font-size: 1.4em; } .element::first-letter { color: red; } Much bigger additions to CSS include logical properties, which change the way we construct designs using logical dimensions (start and end) instead of physical ones (left and right), something CSS Grid also does with functions like min(), max(), and clamp(). This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past, this was often achieved with Sass mixins but was often limited to switching from left-to-right to right-to-left orientation. In the Sass version, directional variables need to be set. $direction: rtl; $opposite-direction: ltr; $start-direction: right; $end-direction: left; These variables can be used as values— body { direction: $direction; text-align: $start-direction; } —or as properties. margin-#{$end-direction}: 10px; padding-#{$start-direction}: 10px; However, now we have native logical properties, removing the reliance on both Sass (or a similar tool) and pre-planning that necessitated using variables throughout a codebase. These properties also start to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and in direction. margin-block-end: 10px; padding-block-start: 10px; There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start. Like the earlier examples, these properties help to build out designs that aren’t constrained to one language; the design will reflect the content’s needs. Fixed and fluid We briefly covered the power of combining fixed widths with fluid widths with intrinsic layouts. The min() and max() functions are a similar concept, allowing you to specify a fixed value with a flexible alternative. For min() this means setting a fluid minimum value and a maximum fixed value. .element { width: min(50%, 300px); } The element in the figure above will be 50% of its container as long as the element’s width doesn’t exceed 300px. For max() we can set a flexible max value and a minimum fixed value. .element { width: max(50%, 300px); } Now the element will be 50% of its container as long as the element’s width is at least 300px. This means we can set limits but allow content to react to the available space. The clamp() function builds on this by allowing us to set a preferred value with a third parameter. Now we can allow the element to shrink or grow if it needs to without getting to a point where it becomes unusable. .element { width: clamp(300px, 50%, 600px); } This time, the element’s width will be 50% (the preferred value) of its container but never less than 300px and never more than 600px. With these techniques, we have a content-first approach to responsive design. We can separate content from markup, meaning the changes users make will not affect the design. We can start to future-proof designs by planning for unexpected changes in language or direction. 
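One further, widely used application of clamp() is fluid typography, where type scales with the viewport between a floor and a ceiling; a minimal sketch with illustrative values:

.element {
  /* never smaller than 1rem, never larger than 1.5rem, fluid in between */
  font-size: clamp(1rem, 0.75rem + 1vw, 1.5rem);
}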
And we can increase flexibility by setting desired dimensions alongside flexible alternatives, allowing for more or less content to be displayed correctly. Situation first Thanks to what we’ve discussed so far, we can cover device flexibility by changing our approach, designing around content and space instead of catering to devices. But what about that last bit of Jeffrey Zeldman’s quote, “...situations you haven’t imagined”? It’s a very different thing to design for someone seated at a desktop computer as opposed to someone using a mobile phone and moving through a crowded street in glaring sunshine. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks. This is why choice is so important. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users. Thankfully, there is a lot we can do to provide choice. Responsible design “There are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.”“I Used the Web for a Day on a 50 MB Budget”Chris Ashton One of the biggest assumptions we make is that people interacting with our designs have a good wifi connection and a wide screen monitor. But in the real world, our users may be commuters traveling on trains or other forms of transport using smaller mobile devices that can experience drops in connectivity. There is nothing more frustrating than a web page that won’t load, but there are ways we can help users use less data or deal with sporadic connectivity. The srcset attribute allows the browser to decide which image to serve. This means we can create smaller ‘cropped’ images to display on mobile devices in turn using less bandwidth and less data. <img src="image-file.jpg" srcset="large.jpg 1024w, medium.jpg 640w, small.jpg 320w" alt="Image alt text" /> The preload attribute can also help us to think about how and when media is downloaded. It can be used to tell a browser about any critical assets that need to be downloaded with high priority, improving perceived performance and the user experience. <link rel="stylesheet" href="style.css"> <!--Standard stylesheet markup--> <link rel="preload" href="style.css" as="style"> <!--Preload stylesheet markup--> There’s also native lazy loading, which indicates assets that should only be downloaded when they are needed. <img src="image.png" loading="lazy" alt="…"> With srcset, preload, and lazy loading, we can start to tailor a user’s experience based on the situation they find themselves in. What none of this does, however, is allow the user themselves to decide what they want downloaded, as the decision is usually the browser’s to make. So how can we put users in control? The return of media queries Media queries have always been about much more than device sizes. They allow content to adapt to different situations, with screen size being just one of them. We’ve long been able to check for media types like print and speech and features such as hover, resolution, and color. These checks allow us to provide options that suit more than one scenario; it’s less about one-size-fits-all and more about serving adaptable content. As of this writing, the Media Queries Level 5 spec is still under development. It introduces some really exciting queries that in the future will help us design for multiple other unexpected situations. 
For example, there’s a light-level feature that allows you to modify styles if a user is in sunlight or darkness. Paired with custom properties, these features allow us to quickly create designs or themes for specific environments. @media (light-level: normal) { --background-color: #fff; --text-color: #0b0c0c; } @media (light-level: dim) { --background-color: #efd226; --text-color: #0b0c0c; } Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser so people don’t have to spend time making each site they visit more usable. Media queries like this go beyond choices made by a browser to grant more control to the user. Expect the unexpected In the end, the one thing we should always expect is for things to change. Devices in particular change faster than we can keep up, with foldable screens already on the market. We can’t design the same way we have for this ever-changing landscape, but we can design for content. By putting content first and allowing that content to adapt to whatever space surrounds it, we can create more robust, flexible designs that increase the longevity of our products. A lot of the CSS discussed here is about moving away from layouts and putting content at the heart of design. From responsive components to fixed and fluid units, there is so much more we can do to take a more intrinsic approach. Even better, we can test these techniques during the design phase by designing in-browser and watching how our designs adapt in real-time. When it comes to unexpected situations, we need to make sure our products are usable when people need them, whenever and wherever that might be. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with user-preference-based media queries. Good design for the unexpected should allow for change, provide choice, and give control to those we serve: our users themselves. Full Article
for Design for Safety, An Excerpt By Published On :: 2021-08-26T15:01:43+00:00 Antiracist economist Kim Crayton says that “intention without strategy is chaos.” We’ve discussed how our biases, assumptions, and inattention toward marginalized and vulnerable groups lead to dangerous and unethical tech—but what, specifically, do we need to do to fix it? The intention to make our tech safer is not enough; we need a strategy. This chapter will equip you with that plan of action. It covers how to integrate safety principles into your design work in order to create tech that’s safe, how to convince your stakeholders that this work is necessary, and how to respond to the critique that what we actually need is more diversity. (Spoiler: we do, but diversity alone is not the antidote to fixing unethical, unsafe tech.) The process for inclusive safety When you are designing for safety, your goals are to: identify ways your product can be used for abuse,design ways to prevent the abuse, andprovide support for vulnerable users to reclaim power and control. The Process for Inclusive Safety is a tool to help you reach those goals (Fig 5.1). It’s a methodology I created in 2018 to capture the various techniques I was using when designing products with safety in mind. Whether you are creating an entirely new product or adding to an existing feature, the Process can help you make your product safe and inclusive. The Process includes five general areas of action: Conducting researchCreating archetypesBrainstorming problemsDesigning solutionsTesting for safety Fig 5.1: Each aspect of the Process for Inclusive Safety can be incorporated into your design process where it makes the most sense for you. The times given are estimates to help you incorporate the stages into your design plan. The Process is meant to be flexible—it won’t make sense for teams to implement every step in some situations. Use the parts that are relevant to your unique work and context; this is meant to be something you can insert into your existing design practice. And once you use it, if you have an idea for making it better or simply want to provide context of how it helped your team, please get in touch with me. It’s a living document that I hope will continue to be a useful and realistic tool that technologists can use in their day-to-day work. If you’re working on a product specifically for a vulnerable group or survivors of some form of trauma, such as an app for survivors of domestic violence, sexual assault, or drug addiction, be sure to read Chapter 7, which covers that situation explicitly and should be handled a bit differently. The guidelines here are for prioritizing safety when designing a more general product that will have a wide user base (which, we already know from statistics, will include certain groups that should be protected from harm). Chapter 7 is focused on products that are specifically for vulnerable groups and people who have experienced trauma. Step 1: Conduct research Design research should include a broad analysis of how your tech might be weaponized for abuse as well as specific insights into the experiences of survivors and perpetrators of that type of abuse. At this stage, you and your team will investigate issues of interpersonal harm and abuse, and explore any other safety, security, or inclusivity issues that might be a concern for your product or service, like data security, racist algorithms, and harassment. 
Broad research Your project should begin with broad, general research into similar products and issues around safety and ethical concerns that have already been reported. For example, a team building a smart home device would do well to understand the multitude of ways that existing smart home devices have been used as tools of abuse. If your product will involve AI, seek to understand the potentials for racism and other issues that have been reported in existing AI products. Nearly all types of technology have some kind of potential or actual harm that’s been reported on in the news or written about by academics. Google Scholar is a useful tool for finding these studies. Specific research: Survivors When possible and appropriate, include direct research (surveys and interviews) with people who are experts in the forms of harm you have uncovered. Ideally, you’ll want to interview advocates working in the space of your research first so that you have a more solid understanding of the topic and are better equipped to not retraumatize survivors. If you’ve uncovered possible domestic violence issues, for example, the experts you’ll want to speak with are survivors themselves, as well as workers at domestic violence hotlines, shelters, other related nonprofits, and lawyers. Especially when interviewing survivors of any kind of trauma, it is important to pay people for their knowledge and lived experiences. Don’t ask survivors to share their trauma for free, as this is exploitative. While some survivors may not want to be paid, you should always make the offer in the initial ask. An alternative to payment is to donate to an organization working against the type of violence that the interviewee experienced. We’ll talk more about how to appropriately interview survivors in Chapter 6. Specific research: Abusers It’s unlikely that teams aiming to design for safety will be able to interview self-proclaimed abusers or people who have broken laws around things like hacking. Don’t make this a goal; rather, try to get at this angle in your general research. Aim to understand how abusers or bad actors weaponize technology to use against others, how they cover their tracks, and how they explain or rationalize the abuse. Step 2: Create archetypes Once you’ve finished conducting your research, use your insights to create abuser and survivor archetypes. Archetypes are not personas, as they’re not based on real people that you interviewed and surveyed. Instead, they’re based on your research into likely safety issues, much like when we design for accessibility: we don’t need to have found a group of blind or low-vision users in our interview pool to create a design that’s inclusive of them. Instead, we base those designs on existing research into what this group needs. Personas typically represent real users and include many details, while archetypes are broader and can be more generalized. The abuser archetype is someone who will look at the product as a tool to perform harm (Fig 5.2). They may be trying to harm someone they don’t know through surveillance or anonymous harassment, or they may be trying to control, monitor, abuse, or torment someone they know personally. Fig 5.2: Harry Oleson, an abuser archetype for a fitness product, is looking for ways to stalk his ex-girlfriend through the fitness apps she uses. The survivor archetype is someone who is being abused with the product. 
There are various situations to consider in terms of the archetype’s understanding of the abuse and how to put an end to it: Do they need proof of abuse they already suspect is happening, or are they unaware they’ve been targeted in the first place and need to be alerted (Fig 5.3)? Fig 5.3: The survivor archetype Lisa Zwaan suspects her husband is weaponizing their home’s IoT devices against her, but in the face of his insistence that she simply doesn’t understand how to use the products, she’s unsure. She needs some kind of proof of the abuse. You may want to make multiple survivor archetypes to capture a range of different experiences. They may know that the abuse is happening but not be able to stop it, like when an abuser locks them out of IoT devices; or they know it’s happening but don’t know how, such as when a stalker keeps figuring out their location (Fig 5.4). Include as many of these scenarios as you need to in your survivor archetype. You’ll use these later on when you design solutions to help your survivor archetypes achieve their goals of preventing and ending abuse. Fig 5.4: The survivor archetype Eric Mitchell knows he’s being stalked by his ex-boyfriend Rob but can’t figure out how Rob is learning his location information. It may be useful for you to create persona-like artifacts for your archetypes, such as the three examples shown. Instead of focusing on the demographic information we often see in personas, focus on their goals. The goals of the abuser will be to carry out the specific abuse you’ve identified, while the goals of the survivor will be to prevent abuse, understand that abuse is happening, make ongoing abuse stop, or regain control over the technology that’s being used for abuse. Later, you’ll brainstorm how to prevent the abuser’s goals and assist the survivor’s goals. And while the “abuser/survivor” model fits most cases, it doesn’t fit all, so modify it as you need to. For example, if you uncovered an issue with security, such as the ability for someone to hack into a home camera system and talk to children, the malicious hacker would get the abuser archetype and the child’s parents would get survivor archetype. Step 3: Brainstorm problems After creating archetypes, brainstorm novel abuse cases and safety issues. “Novel” means things not found in your research; you’re trying to identify completely new safety issues that are unique to your product or service. The goal with this step is to exhaust every effort of identifying harms your product could cause. You aren’t worrying about how to prevent the harm yet—that comes in the next step. How could your product be used for any kind of abuse, outside of what you’ve already identified in your research? I recommend setting aside at least a few hours with your team for this process. If you’re looking for somewhere to start, try doing a Black Mirror brainstorm. This exercise is based on the show Black Mirror, which features stories about the dark possibilities of technology. Try to figure out how your product would be used in an episode of the show—the most wild, awful, out-of-control ways it could be used for harm. When I’ve led Black Mirror brainstorms, participants usually end up having a good deal of fun (which I think is great—it’s okay to have fun when designing for safety!). I recommend time-boxing a Black Mirror brainstorm to half an hour, and then dialing it back and using the rest of the time thinking of more realistic forms of harm. 
After you’ve identified as many opportunities for abuse as possible, you may still not feel confident that you’ve uncovered every potential form of harm. A healthy amount of anxiety is normal when you’re doing this kind of work. It’s common for teams designing for safety to worry, “Have we really identified every possible harm? What if we’ve missed something?” If you’ve spent at least four hours coming up with ways your product could be used for harm and have run out of ideas, go to the next step. It’s impossible to guarantee you’ve thought of everything; instead of aiming for 100 percent assurance, recognize that you’ve taken this time and have done the best you can, and commit to continuing to prioritize safety in the future. Once your product is released, your users may identify new issues that you missed; aim to receive that feedback graciously and course-correct quickly. Step 4: Design solutions At this point, you should have a list of ways your product can be used for harm as well as survivor and abuser archetypes describing opposing user goals. The next step is to identify ways to design against the identified abuser’s goals and to support the survivor’s goals. This step is a good one to insert alongside existing parts of your design process where you’re proposing solutions for the various problems your research uncovered. Some questions to ask yourself to help prevent harm and support your archetypes include: Can you design your product in such a way that the identified harm cannot happen in the first place? If not, what roadblocks can you put up to prevent the harm from happening?How can you make the victim aware that abuse is happening through your product?How can you help the victim understand what they need to do to make the problem stop?Can you identify any types of user activity that would indicate some form of harm or abuse? Could your product help the user access support? In some products, it’s possible to proactively recognize that harm is happening. For example, a pregnancy app might be modified to allow the user to report that they were the victim of an assault, which could trigger an offer to receive resources for local and national organizations. This sort of proactiveness is not always possible, but it’s worth taking a half hour to discuss if any type of user activity would indicate some form of harm or abuse, and how your product could assist the user in receiving help in a safe manner. That said, use caution: you don’t want to do anything that could put a user in harm’s way if their devices are being monitored. If you do offer some kind of proactive help, always make it voluntary, and think through other safety issues, such as the need to keep the user in-app in case an abuser is checking their search history. We’ll walk through a good example of this in the next chapter. Step 5: Test for safety The final step is to test your prototypes from the point of view of your archetypes: the person who wants to weaponize the product for harm and the victim of the harm who needs to regain control over the technology. Just like any other kind of product testing, at this point you’ll aim to rigorously test out your safety solutions so that you can identify gaps and correct them, validate that your designs will help keep your users safe, and feel more confident releasing your product into the world. Ideally, safety testing happens along with usability testing. 
If you’re at a company that doesn’t do usability testing, you might be able to use safety testing to cleverly perform both; a user who goes through your design attempting to weaponize the product against someone else can also be encouraged to point out interactions or other elements of the design that don’t make sense to them. You’ll want to conduct safety testing on either your final prototype or the actual product if it’s already been released. There’s nothing wrong with testing an existing product that wasn’t designed with safety goals in mind from the onset—“retrofitting” it for safety is a good thing to do. Remember that testing for safety involves testing from the perspective of both an abuser and a survivor, though it may not make sense for you to do both. Alternatively, if you made multiple survivor archetypes to capture multiple scenarios, you’ll want to test from the perspective of each one. As with other sorts of usability testing, you as the designer are most likely too close to the product and its design by this point to be a valuable tester; you know the product too well. Instead of doing it yourself, set up testing as you would with other usability testing: find someone who is not familiar with the product and its design, set the scene, give them a task, encourage them to think out loud, and observe how they attempt to complete it. Abuser testing The goal of this testing is to understand how easy it is for someone to weaponize your product for harm. Unlike with usability testing, you want to make it impossible, or at least difficult, for them to achieve their goal. Reference the goals in the abuser archetype you created earlier, and use your product in an attempt to achieve them. For example, for a fitness app with GPS-enabled location features, we can imagine that the abuser archetype would have the goal of figuring out where his ex-girlfriend now lives. With this goal in mind, you’d try everything possible to figure out the location of another user who has their privacy settings enabled. You might try to see her running routes, view any available information on her profile, view anything available about her location (which she has set to private), and investigate the profiles of any other users somehow connected with her account, such as her followers. If by the end of this you’ve managed to uncover some of her location data, despite her having set her profile to private, you know now that your product enables stalking. Your next step is to go back to step 4 and figure out how to prevent this from happening. You may need to repeat the process of designing solutions and testing them more than once. Survivor testing Survivor testing involves identifying how to give information and power to the survivor. It might not always make sense based on the product or context. Thwarting the attempt of an abuser archetype to stalk someone also satisfies the goal of the survivor archetype to not be stalked, so separate testing wouldn’t be needed from the survivor’s perspective. However, there are cases where it makes sense. For example, for a smart thermostat, a survivor archetype’s goals would be to understand who or what is making the temperature change when they aren’t doing it themselves. You could test this by looking for the thermostat’s history log and checking for usernames, actions, and times; if you couldn’t find that information, you would have more work to do in step 4. 
Another goal might be regaining control of the thermostat once the survivor realizes the abuser is remotely changing its settings. Your test would involve attempting to figure out how to do this: are there instructions that explain how to remove another user and change the password, and are they easy to find? This might again reveal that more work is needed to make it clear to the user how they can regain control of the device or account. Stress testing To make your product more inclusive and compassionate, consider adding stress testing. This concept comes from Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. The authors pointed out that personas typically center people who are having a good day—but real users are often anxious, stressed out, having a bad day, or even experiencing tragedy. These are called “stress cases,” and testing your products for users in stress-case situations can help you identify places where your design lacks compassion. Design for Real Life has more details about what it looks like to incorporate stress cases into your design as well as many other great tactics for compassionate design. Full Article
Mobile-First CSS: Is It Time for a Rethink? (published 2022-06-09)

The mobile-first design methodology is great—it focuses on what really matters to the user, it’s well-practiced, and it’s been a common design pattern for years. So developing your CSS mobile-first should be great, too…right?

Well, not necessarily. Classic mobile-first CSS development is based on the principle of overwriting style declarations: you begin your CSS with default style declarations, and overwrite and/or add new styles as you add breakpoints with min-width media queries for larger viewports (for a good overview see “What is Mobile First CSS and Why Does It Rock?”). But all those exceptions create complexity and inefficiency, which in turn can lead to an increased testing effort and a code base that’s harder to maintain. Admit it—how many of us willingly want that?

On your own projects, mobile-first CSS may yet be the best tool for the job, but first you need to evaluate just how appropriate it is in light of the visual design and user interactions you’re working on. To help you get started, here’s how I go about tackling the factors you need to watch for, and I’ll discuss some alternate solutions if mobile-first doesn’t seem to suit your project.

Advantages of mobile-first
Some of the things to like with mobile-first CSS development—and why it’s been the de facto development methodology for so long—make a lot of sense:
Development hierarchy. One thing you undoubtedly get from mobile-first is a nice development hierarchy—you just focus on the mobile view and get developing.
Tried and tested. It’s a tried and tested methodology that’s worked for years for a reason: it solves a problem really well.
Prioritizes the mobile view. The mobile view is the simplest and arguably the most important, as it encompasses all the key user journeys, and often accounts for a higher proportion of user visits (depending on the project).
Prevents desktop-centric development. As development is done using desktop computers, it can be tempting to initially focus on the desktop view. But thinking about mobile from the start prevents us from getting stuck later on; no one wants to spend their time retrofitting a desktop-centric site to work on mobile devices!

Disadvantages of mobile-first
Setting style declarations and then overwriting them at higher breakpoints can lead to undesirable ramifications:
More complexity. The farther up the breakpoint hierarchy you go, the more unnecessary code you inherit from lower breakpoints.
Higher CSS specificity. Styles that have been reverted to their browser default value in a class name declaration now have a higher specificity. This can be a headache on large projects when you want to keep the CSS selectors as simple as possible.
Requires more regression testing. Changes to the CSS at a lower view (like adding a new style) require all higher breakpoints to be regression tested.
The browser can’t prioritize CSS downloads. At wider breakpoints, classic mobile-first min-width media queries don’t leverage the browser’s capability to download CSS files in priority order.

The problem of property value overrides
There is nothing inherently wrong with overwriting values; CSS was designed to do just that. Still, inheriting incorrect values is unhelpful and can be burdensome and inefficient.
It can also lead to increased style specificity when you have to overwrite styles to reset them back to their defaults, something that may cause issues later on, especially if you are using a combination of bespoke CSS and utility classes. We won’t be able to use a utility class for a style that has been reset with a higher specificity.

With this in mind, I’m developing CSS with a focus on the default values much more these days. Since there’s no specific order, and no chains of specific values to keep track of, this frees me to develop breakpoints simultaneously. I concentrate on finding common styles and isolating the specific exceptions in closed media query ranges (that is, any range with a max-width set).

This approach opens up some opportunities, as you can look at each breakpoint as a clean slate. If a component’s layout looks like it should be based on Flexbox at all breakpoints, it’s fine and can be coded in the default style sheet. But if it looks like Grid would be much better for large screens and Flexbox for mobile, these can both be done entirely independently when the CSS is put into closed media query ranges (a sketch of this appears a little later in this section).

Also, developing simultaneously requires you to have a good understanding of any given component in all breakpoints up front. This can help surface issues in the design earlier in the development process. We don’t want to get stuck down a rabbit hole building a complex component for mobile, and then get the designs for desktop and find they are equally complex and incompatible with the HTML we created for the mobile view!

Though this approach isn’t going to suit everyone, I encourage you to give it a try. There are plenty of tools out there to help with concurrent development, such as Responsively App, Blisk, and many others.

Having said that, I don’t feel the order itself is particularly relevant. If you are comfortable with focusing on the mobile view, have a good understanding of the requirements for other breakpoints, and prefer to work on one device at a time, then by all means stick with the classic development order. The important thing is to identify common styles and exceptions so you can put them in the relevant stylesheet—a sort of manual tree-shaking process! Personally, I find this a little easier when working on a component across breakpoints, but that’s by no means a requirement.

Closed media query ranges in practice

In classic mobile-first CSS we overwrite the styles, but we can avoid this by using media query ranges. To illustrate the difference (I’m using SCSS for brevity), let’s assume there are three visual designs:

- smaller than 768px
- from 768px to below 1024px
- 1024px and anything larger

Take a simple example where a block-level element has a default padding of “20px,” which is overwritten at tablet to be “40px” and set back to “20px” on desktop.

Classic min-width mobile-first:

  .my-block {
    padding: 20px;

    @media (min-width: 768px) {
      padding: 40px;
    }

    @media (min-width: 1024px) {
      padding: 20px;
    }
  }

Closed media query range:

  .my-block {
    padding: 20px;

    @media (min-width: 768px) and (max-width: 1023.98px) {
      padding: 40px;
    }
  }

The subtle difference is that the mobile-first example sets the default padding to “20px” and then overwrites it at each breakpoint, setting it three times in total. In contrast, the second example sets the default padding to “20px” and only overrides it at the relevant breakpoint where it isn’t the default value (in this instance, tablet is the exception). The goal is to only set styles when needed, and not to set them with the expectation of overwriting them later on, again and again.
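Here is the sketch of the Flexbox-for-mobile, Grid-for-large-screens case mentioned a little earlier. The .product-list class name and the specific layout values are hypothetical, and the ranges reuse the 1024px breakpoint from the example above; it is meant to illustrate the clean-slate idea, not to prescribe a layout:

  .product-list {
    // Small screens: a simple Flexbox column; nothing here needs undoing later.
    @media (max-width: 1023.98px) {
      display: flex;
      flex-direction: column;
      gap: 16px;
    }

    // Large screens: an independent Grid layout; no Flexbox properties to reset.
    @media (min-width: 1024px) {
      display: grid;
      grid-template-columns: repeat(3, 1fr);
      gap: 24px;
    }
  }

Because each range is scoped, neither layout inherits declarations from the other, which is exactly what makes it possible to limit regression testing to the breakpoint you actually edited.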
To this end, closed media query ranges are our best friend. If we need to make a change to any given view, we make it in the CSS media query range that applies to the specific breakpoint. We’ll be much less likely to introduce unwanted alterations, and our regression testing only needs to focus on the breakpoint we have actually edited.

Taking the above example, suppose we find that the .my-block spacing on desktop is already accounted for by the margin at that breakpoint, and we want to remove the padding there altogether. We could do this by setting the mobile padding in a closed media query range:

  .my-block {
    @media (max-width: 767.98px) {
      padding: 20px;
    }

    @media (min-width: 768px) and (max-width: 1023.98px) {
      padding: 40px;
    }
  }

The browser default padding for our block is “0,” so instead of adding a desktop media query and using unset or “0” for the padding value (which we would need with mobile-first), we can wrap the mobile padding in a closed media query (since it is now also an exception) so it won’t get picked up at wider breakpoints. At the desktop breakpoint, we won’t need to set any padding style, as we want the browser default value.

Bundling versus separating the CSS

Back in the day, keeping the number of requests to a minimum was very important due to the browser’s limit of concurrent requests (typically around six). As a consequence, the use of image sprites and CSS bundling was the norm, with all the CSS being downloaded in one go, as one stylesheet with highest priority.

With HTTP/2 and HTTP/3 now on the scene, the number of requests is no longer the big deal it used to be. This allows us to separate the CSS into multiple files by media query. The clear benefit of this is that the browser can now request the CSS it currently needs with a higher priority than the CSS it doesn’t. This is more performant and can reduce the overall time page rendering is blocked.

Which HTTP version are you using?

To determine which version of HTTP you’re using, go to your website and open your browser’s dev tools. Next, select the Network tab and make sure the Protocol column is visible. If “h2” is listed under Protocol, it means HTTP/2 is being used.

Note: to view the Protocol column in your browser’s dev tools, go to the Network tab, reload your page, right-click any column header (e.g., Name), and check the Protocol column.

Note: for a summarized comparison, see ImageKit’s “HTTP/2 vs. HTTP/1.”

Also, if your site is still using HTTP/1...WHY?!! What are you waiting for? There is excellent user support for HTTP/2.

Splitting the CSS

Separating the CSS into individual files is a worthwhile task. Linking the separate CSS files using the relevant media attribute allows the browser to identify which files are needed immediately (because they’re render-blocking) and which can be deferred. Based on this, it allocates each file an appropriate priority.

In the following example of a website visited on a mobile breakpoint, we can see the mobile and default CSS are loaded with “Highest” priority, as they are currently needed to render the page. The remaining CSS files (print, tablet, and desktop) are still downloaded in case they’ll be needed later, but with “Lowest” priority.
With bundled CSS, the browser has to download and parse the entire CSS file before rendering can start. With the CSS separated into different files, linked and marked up with the relevant media attribute, the browser can prioritize the files it currently needs. Using closed media query ranges allows the browser to do this at all widths, as opposed to classic mobile-first min-width queries, where the desktop browser would have to download all the CSS with Highest priority. We can’t assume that desktop users always have a fast connection. For instance, in many rural areas, internet connection speeds are still slow.

The media queries and number of separate CSS files will vary from project to project based on project requirements, but might look similar to the example below.

Bundled CSS

  <link href="site.css" rel="stylesheet">

This single file contains all the CSS, including all media queries, and it will be downloaded with Highest priority.

Separated CSS

  <link href="default.css" rel="stylesheet">
  <link href="mobile.css" media="screen and (max-width: 767.98px)" rel="stylesheet">
  <link href="tablet.css" media="screen and (min-width: 768px) and (max-width: 1083.98px)" rel="stylesheet">
  <link href="desktop.css" media="screen and (min-width: 1084px)" rel="stylesheet">
  <link href="print.css" media="print" rel="stylesheet">

Separating the CSS and specifying a media attribute value on each link tag allows the browser to prioritize what it currently needs. Out of the five files listed above, two will be downloaded with Highest priority: the default file, and the file that matches the current media query. The others will be downloaded with Lowest priority.

Depending on the project’s deployment strategy, a change to one file (mobile.css, for example) would only require the QA team to regression test on devices in that specific media query range. Compare that to the prospect of deploying the single bundled site.css file, an approach that would normally trigger a full regression test.

Moving on

The uptake of mobile-first CSS was a really important milestone in web development; it has helped front-end developers focus on mobile web applications, rather than developing sites on desktop and then attempting to retrofit them to work on other devices. I don’t think anyone wants to return to that development model again, but it’s important we don’t lose sight of the issue it highlighted: that things can easily get convoluted and less efficient if we prioritize one particular device—any device—over others.

For this reason, focusing on the CSS in its own right, always mindful of what is the default setting and what’s an exception, seems like the natural next step. I’ve started noticing small simplifications in my own CSS, as well as other developers’, and that testing and maintenance work is also a bit more simplified and productive. In general, simplifying CSS rule creation whenever we can is ultimately a cleaner approach than going around in circles of overrides.

But whichever methodology you choose, it needs to suit the project. Mobile-first may—or may not—turn out to be the best choice for what’s involved, but first you need to solidly understand the trade-offs you’re stepping into.

Full Article
for Personalization Pyramid: A Framework for Designing with User Data By Published On :: 2022-12-08T15:00:00+00:00

As a UX professional in today’s data-driven landscape, it’s increasingly likely that you’ve been asked to design a personalized digital experience, whether it’s a public website, user portal, or native application. Yet while there continues to be no shortage of marketing hype around personalization platforms, we still have very few standardized approaches for implementing personalized UX.

That’s where we come in. After completing dozens of personalization projects over the past few years, we gave ourselves a goal: could we create a holistic personalization framework specifically for UX practitioners? The Personalization Pyramid is a designer-centric model for standing up human-centered personalization programs, spanning data, segmentation, content delivery, and overall goals. By using this approach, you will be able to understand the core components of a contemporary, UX-driven personalization program (or at the very least know enough to get started).

Growing tools for personalization: According to a Dynamic Yield survey, 39% of respondents felt support is available on-demand when a business case is made for it (up 15% from 2020). Source: “The State of Personalization Maturity – Q4 2021.” Dynamic Yield conducted its annual maturity survey across roles and sectors in the Americas (AMER), Europe and the Middle East (EMEA), and the Asia-Pacific (APAC) regions. This marks the fourth consecutive year publishing our research, which includes more than 450 responses from individuals in the C-Suite, Marketing, Merchandising, CX, Product, and IT.

Getting Started

For the sake of this article, we’ll assume you’re already familiar with the basics of digital personalization. A good overview can be found here: Website Personalization Planning. While UX projects in this area can take on many different forms, they often stem from similar starting points.

Common scenarios for starting a personalization project:

- Your organization or client purchased a content management system (CMS) or marketing automation platform (MAP) or related technology that supports personalization
- The CMO, CDO, or CIO has identified personalization as a goal
- Customer data is disjointed or ambiguous
- You are running some isolated targeting campaigns or A/B testing
- Stakeholders disagree on personalization approach
- Mandate of customer privacy rules (e.g. GDPR) requires revisiting existing user targeting practices

Workshopping personalization at a conference.

Regardless of where you begin, a successful personalization program will require the same core building blocks. We’ve captured these as the “levels” on the pyramid. Whether you are a UX designer, researcher, or strategist, understanding the core components can help make your contribution successful.

From the ground up: Soup-to-nuts personalization, without going nuts.

From top to bottom, the levels include:

- North Star: What larger strategic objective is driving the personalization program?
- Goals: What are the specific, measurable outcomes of the program?
- Touchpoints: Where will the personalized experience be served?
- Contexts and Campaigns: What personalization content will the user see?
- User Segments: What constitutes a unique, usable audience?
- Actionable Data: What reliable and authoritative data is captured by our technical platform to drive personalization?
- Raw Data: What wider set of data is conceivably available (already in our setting) allowing you to personalize?
We’ll go through each of these levels in turn. To help make this actionable, we created an accompanying deck of cards to illustrate specific examples from each level. We’ve found them helpful in personalization brainstorming sessions, and will include examples for you here.

Personalization pack: Deck of cards to help kickstart your personalization brainstorming.

Starting at the Top

The components of the pyramid are as follows:

North Star

A north star is what you are aiming for overall with your personalization program (big or small). The North Star defines the (one) overall mission of the personalization program. What do you wish to accomplish? North Stars cast a shadow. The bigger the star, the bigger the shadow. Examples of North Stars might include:

- Function: Personalize based on basic user inputs. Examples: “Raw” notifications, basic search results, system user settings and configuration options, general customization, basic optimizations
- Feature: Self-contained personalization componentry. Examples: “Cooked” notifications, advanced optimizations (geolocation), basic dynamic messaging, customized modules, automations, recommenders
- Experience: Personalized user experiences across multiple interactions and user flows. Examples: Email campaigns, landing pages, advanced messaging (i.e. C2C chat) or conversational interfaces, larger user flows and content-intensive optimizations (localization).
- Product: Highly differentiating personalized product experiences. Examples: Standalone, branded experiences with personalization at their core, like the “algotorial” playlists by Spotify such as Discover Weekly.

North star cards. These can help orient your team towards a common goal that personalization will help achieve; also, they are useful for characterizing the end-state ambition of the presently stated personalization effort.

Goals

As in any good UX design, personalization can help accelerate designing with customer intentions. Goals are the tactical and measurable metrics that will prove the overall program is successful. A good place to start is with your current analytics and measurement program and metrics you can benchmark against. In some cases, new goals may be appropriate. The key thing to remember is that personalization itself is not a goal; rather, it is a means to an end. Common goals include:

- Conversion
- Time on task
- Net promoter score (NPS)
- Customer satisfaction

Goal cards. Examples of some common KPIs related to personalization that are concrete and measurable.

Touchpoints

Touchpoints are where the personalization happens. As a UX designer, this will be one of your largest areas of responsibility. The touchpoints available to you will depend on how your personalization and associated technology capabilities are instrumented, and should be rooted in improving a user’s experience at a particular point in the journey. Touchpoints can be multi-device (mobile, in-store, website) but also more granular (web banner, web pop-up, etc.). Here are some examples:

Channel-level Touchpoints
- Email: Role
- Email: Time of open
- In-store display (JSON endpoint)
- Native app
- Search

Wireframe-level Touchpoints
- Web overlay
- Web alert bar
- Web banner
- Web content block
- Web menu

Touchpoint cards. Examples of common personalization touchpoints: these can vary from narrow (e.g., email) to broad (e.g., in-store).

If you’re designing for web interfaces, for example, you will likely need to include personalized “zones” in your wireframes.
The content for these can be presented programmatically in touchpoints based on our next step, contexts and campaigns.

Targeted Zones: Examples from Kibo of personalized “zones” on page-level wireframes occurring at various stages of a user journey (Engagement phase at left and Purchase phase at right). Source: “Essential Guide to End-to-End Personalization” by Kibo.

Contexts and Campaigns

Once you’ve outlined some touchpoints, you can consider the actual personalized content a user will receive. Many personalization tools will refer to these as “campaigns” (so, for example, a campaign on a web banner for new visitors to the website). These will programmatically be shown at certain touchpoints to certain user segments, as defined by user data.

At this stage, we find it helpful to consider two separate models: a context model and a content model. The context helps you consider the level of engagement of the user at the personalization moment, for example a user casually browsing information vs. doing a deep-dive. Think of it in terms of information retrieval behaviors. The content model can then help you determine what type of personalization to serve based on the context (for example, an “Enrich” campaign that shows related articles may be a suitable supplement to extant content).

Personalization Context Model:
- Browse
- Skim
- Nudge
- Feast

Personalization Content Model:
- Alert
- Make Easier
- Cross-Sell
- Enrich

We’ve written extensively about each of these models elsewhere, so if you’d like to read more you can check out Colin’s Personalization Content Model and Jeff’s Personalization Context Model.

Campaign and Context cards: This level of the pyramid can help your team focus around the types of personalization to deliver end users and the use-cases in which they will experience it.

User Segments

User segments can be created prescriptively or adaptively, based on user research (e.g. via rules and logic tied to set user behaviors or via A/B testing). At a minimum you will likely need to consider how to treat the unknown or first-time visitor, the guest or returning visitor for whom you may have a stateful cookie (or equivalent post-cookie identifier), or the authenticated visitor who is logged in. Here are some examples from the personalization pyramid:

- Unknown
- Guest
- Authenticated
- Default
- Referred
- Role
- Cohort
- Unique ID

Segment cards. Examples of common personalization segments: at a minimum, you will need to consider the anonymous, guest, and logged-in user types. Segmentation can get dramatically more complex from there.

Actionable Data

Every organization with any digital presence has data. It’s a matter of asking what data you can ethically collect on users, its inherent reliability and value, and how you can use it (sometimes known as “data activation”). Fortunately, the tide is turning to first-party data: a recent study by Twilio estimates some 80% of businesses are using at least some type of first-party data to personalize the customer experience.

Source: “The State of Personalization 2021” by Twilio. Survey respondents were n=2,700 adult consumers who have purchased something online in the past 6 months, and n=300 adult manager+ decision-makers at consumer-facing companies that provide goods and/or services online. Respondents were from the United States, United Kingdom, Australia, and New Zealand. Data was collected from April 8 to April 20, 2021.
First-party data offers multiple advantages on the UX front, including being relatively simple to collect, more likely to be accurate, and less susceptible to the “creep factor” of third-party data. So a key part of your UX strategy should be to determine the best form of data collection for your audiences. Here are some examples:

Figure 1.1.2: Example of a personalization maturity curve, showing progression from basic recommendations functionality to true individualization. Credit: https://kibocommerce.com/blog/kibos-personalization-maturity-chart/

There is a progression of profiling when it comes to recognizing and making decisions about different audiences and their signals. It tends to move towards more granular constructs about smaller and smaller cohorts of users as time, confidence, and data volume grow.

While some combination of implicit and explicit data (more commonly referred to as first-party and third-party data) is generally a prerequisite for any implementation, ML efforts are typically not cost-effective directly out of the box. This is because a strong data backbone and content repository is a prerequisite for optimization. But these approaches should be considered as part of the larger roadmap and may indeed help accelerate the organization’s overall progress.

Typically at this point you will partner with key stakeholders and product owners to design a profiling model. The profiling model includes defining an approach to configuring profiles, profile keys, profile cards, and pattern cards: a multi-faceted approach to profiling that makes it scalable.

Pulling it Together

While the cards comprise the starting point to an inventory of sorts (we provide blanks for you to tailor your own), a set of potential levers and motivations for the style of personalization activities you aspire to deliver, they are more valuable when thought of in a grouping. In assembling a card “hand,” one can begin to trace the entire trajectory from leadership focus down through strategic and tactical execution. It is also at the heart of the way both co-authors have conducted workshops in assembling a program backlog, which is a fine subject for another article.

In the meantime, what is important to note is that each colored class of card is helpful to survey in understanding the range of choices potentially at your disposal; the real work is threading through them and making concrete decisions about for whom this decisioning will be made, and where, when, and how.

Scenario A: We want to use personalization to improve customer satisfaction on the website. For unknown users, we will create a short quiz to better identify what the user has come to do. This is sometimes referred to as “badging” a user in onboarding contexts, to better characterize their present intent and context.

Lay Down Your Cards

Any sustainable personalization strategy must consider near-, mid-, and long-term goals. Even with the leading CMS platforms like Sitecore and Adobe or the most exciting composable CMS DXP out there, there is simply no “easy button” wherein a personalization program can be stood up and immediately deliver meaningful results. That said, there is a common grammar to all personalization activities, just like every sentence has nouns and verbs. These cards attempt to map that territory.

Full Article
for Opportunities for AI in Accessibility By Published On :: 2024-02-07T14:00:00+00:00

In reading Joe Dolson’s recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has for AI in general as well as for the ways that many have been using it. In fact, I’m very skeptical of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.

I’d like you to consider this a “yes… and” piece to complement Joe’s post. I’m not trying to refute any of what he’s saying but rather provide some visibility to projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I’m not saying that there aren’t real risks or pressing issues with AI that need to be addressed—there are, and we’ve needed to address them, like, yesterday—but I want to take a little time to talk about what’s possible in hopes that we’ll get there one day.

Alternative text

Joe’s piece spends a lot of time talking about computer-vision models generating alternative text. He highlights a ton of valid issues with the current state of things. And while computer-vision models continue to improve in the quality and richness of detail in their descriptions, their results aren’t great. As he rightly points out, the current state of image analysis is pretty poor—especially for certain image types—in large part because current AI systems examine images in isolation rather than within the contexts that they’re in (which is a consequence of having separate “foundation” models for text analysis and image analysis). Today’s models aren’t trained to distinguish between images that are contextually relevant (that should probably have descriptions) and those that are purely decorative (which might not need a description) either.

Still, I think there’s potential in this space. As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point for alt text—even if that starting point might be a prompt saying What is this BS? That’s not right at all… Let me try to offer a starting point—I think that’s a win.

Taking things a step further, if we can specifically train a model to analyze image usage in context, it could help us more quickly identify which images are likely to be decorative and which ones likely require a description. That will help reinforce which contexts call for image descriptions and it’ll improve authors’ efficiency toward making their pages more accessible.

While complex images—like graphs and charts—are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT4 announcement points to an interesting opportunity as well. Let’s suppose that you came across a chart whose description was simply the title of the chart and the kind of visualization it was, such as: Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year. (That would be a pretty awful alt text for a chart since that would tend to leave many questions about the data unanswered, but then again, let’s suppose that that was the description that was in place.)
If your browser knew that that image was a pie chart (because an onboard model concluded this), imagine a world where users could ask questions like these about the graphic:

- Do more people use smartphones or feature phones?
- How many more?
- Is there a group of people that don’t fall into either of these buckets? How many is that?

Setting aside the realities of large language model (LLM) hallucinations—where a model just makes up plausible-sounding “facts”—for a moment, the opportunity to learn more about images and data in this way could be revolutionary for blind and low-vision folks as well as for people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in educational contexts to help people who can see these charts, as is, to understand the data in the charts.

Taking things a step further:

- What if you could ask your browser to simplify a complex chart?
- What if you could ask it to isolate a single line on a line graph?
- What if you could ask your browser to transpose the colors of the different lines to work better for the form of color blindness you have?
- What if you could ask it to swap colors for patterns?

Given these tools’ chat-based interfaces and our existing ability to manipulate images in today’s AI tools, that seems like a possibility.

Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, perhaps it could turn that pie chart (or better yet, a series of pie charts) into more accessible (and useful) formats, like spreadsheets. That would be amazing!

Matching algorithms

Safiya Umoja Noble absolutely hit the nail on the head when she titled her book Algorithms of Oppression. While her book was focused on the ways that search engines reinforce racism, I think that it’s equally true that all computer models have the potential to amplify conflict, bias, and intolerance. Whether it’s Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our ideas of what natural bodies look like, we know that poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and build them. When these platforms are built with inclusivity baked in, however, there’s real potential for algorithm development to help people with disabilities.

Take Mentra, for example. They are an employment network for neurodivergent people. They use an algorithm to match job seekers with potential employers based on over 75 data points. On the job-seeker side of things, it considers each candidate’s strengths, their necessary and preferred workplace accommodations, environmental sensitivities, and so on. On the employer side, it considers each work environment, communication factors related to each job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it came to typical employment sites. They use their algorithm to propose available candidates to companies, who can then connect with job seekers that they are interested in, reducing the emotional and physical labor on the job-seeker side of things.

When more people with disabilities are involved in the creation of algorithms, that can reduce the chances that these algorithms will inflict harm on their communities. That’s why diverse teams are so important.
Imagine that a social media company’s recommendation engine was tuned to analyze who you’re following and to prioritize follow recommendations for people who talk about similar things but who are different in some key ways from your existing sphere of influence. For example, if you were to follow a bunch of nondisabled white male academics who talk about AI, it could suggest that you follow academics who are disabled or aren’t white or aren’t male who also talk about AI. If you took its recommendations, perhaps you’d get a more holistic and nuanced understanding of what’s happening in the AI field.

These same systems should also use their understanding of biases about particular communities—including, for instance, the disability community—to make sure that they aren’t recommending any of their users follow accounts that perpetuate biases against (or, worse, spew hate toward) those groups.

Other ways that AI can help people with disabilities

If I weren’t trying to put this together between other tasks, I’m sure that I could go on and on, providing all kinds of examples of how AI could be used to help people with disabilities, but I’m going to make this last section into a bit of a lightning round. In no particular order:

- Voice preservation. You may have seen the VALL-E paper or Apple’s Global Accessibility Awareness Day announcement, or you may be familiar with the voice-preservation offerings from Microsoft, Acapela, or others. It’s possible to train an AI model to replicate your voice, which can be a tremendous boon for people who have ALS (Lou Gehrig’s disease) or motor-neuron disease or other medical conditions that can lead to an inability to talk. This is, of course, the same tech that can also be used to create audio deepfakes, so it’s something that we need to approach responsibly, but the tech has truly transformative potential.
- Voice recognition. Researchers like those in the Speech Accessibility Project are paying people with disabilities for their help in collecting recordings of people with atypical speech. As I type, they are actively recruiting people with Parkinson’s and related conditions, and they have plans to expand this to other conditions as the project progresses. This research will result in more inclusive data sets that will let more people with disabilities use voice assistants, dictation software, and voice-response services as well as control their computers and other devices more easily, using only their voice.
- Text transformation. The current generation of LLMs is quite capable of adjusting existing text content without injecting hallucinations. This is hugely empowering for people with cognitive disabilities who may benefit from text summaries or simplified versions of text or even text that’s prepped for Bionic Reading.

The importance of diverse teams and data

We need to recognize that our differences matter. Our lived experiences are influenced by the intersections of the identities that we exist in. These lived experiences—with all their complexities (and joys and pain)—are valuable inputs to the software, services, and societies that we shape. Our differences need to be represented in the data that we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that foster more equitable outcomes.

Want a model that doesn’t demean or patronize or objectify people with disabilities?
Make sure that you have content about disabilities that’s authored by people with a range of disabilities, and make sure that that’s well represented in the training data.

Want a model that doesn’t use ableist language? You may be able to use existing data sets to build a filter that can intercept and remediate ableist language before it reaches readers. That being said, when it comes to sensitivity reading, AI models won’t be replacing human copy editors anytime soon.

Want a coding copilot that gives you accessible recommendations from the jump? Train it on code that you know to be accessible.

I have no doubt that AI can and will harm people… today, tomorrow, and well into the future. But I also believe that we can acknowledge that and, with an eye towards accessibility (and, more broadly, inclusion), make thoughtful, considerate, and intentional changes in our approaches to AI that will reduce harm over time as well. Today, tomorrow, and well into the future.

Many thanks to Kartik Sawhney for helping me with the development of this piece, Ashley Bischoff for her invaluable editorial assistance, and, of course, Joe Dolson for the prompt.

Full Article
for Conjugated polymers for organic electronics [electronic resource] : design and synthesis / Andrew Grimsdale and Paul Dastoor. By darius.uleth.ca Published On :: Cambridge, United Kingdom ; New York : Cambridge University Press, 2024. Full Article
for Needed, a visionary leadership for Tamil Nadu By www.thehindubusinessline.com Published On :: Sun, 12 Jan 2014 13:20:07 +0530 Full Article N Ramakrishnan