## A highly sensitive, selective and renewable carbon paste electrode based on a unique acyclic diamide ionophore for the potentiometric determination of lead ions in polluted water samples

RSC Adv., 2020, 10, 17552-17560. DOI: 10.1039/D0RA01435D. Paper, Open Access (CC BY 3.0).
M. A. Zayed, Walaa H. Mahmoud, Ashraf A. Abbas, Aya E. Ali, Gehad G. Mohamed

Lead(II) is toxic to all living organisms: it damages the central nervous system and leads to circulatory system and brain disorders. The development of effective and selective lead(II) ionophores for its detection is therefore very important.

## Electrochemical reduction of CO2 to ethylene on Cu/CuxO-GO composites in aqueous solution

RSC Adv., 2020, 10, 17572-17581. DOI: 10.1039/D0RA02754E. Paper, Open Access (CC BY 3.0).
Nusrat Rashid, Mohsin Ahmad Bhat, U. K. Goutam, Pravin Popinand Ingole

Herein, we present the fabrication of graphene oxide-supported Cu/CuxO nano-electrodeposits, which can efficiently and selectively electroreduce CO2 to ethylene with a faradaic efficiency of 34% and a conversion rate of 194 mmol g−1 h−1 at −0.985 V vs. RHE.

## Correction: Influence of co-cultures of Streptococcus thermophilus and probiotic lactobacilli on quality and antioxidant capacity parameters of lactose-free fermented dairy beverages containing Syzygium cumini (L.) Skeels pulp

RSC Adv., 2020, 10, 16905-16905. DOI: 10.1039/D0RA90046J. Correction, Open Access (CC BY 3.0).
Sabrina Laís Alves Garcia, Gabriel Monteiro da Silva, Juliana Maria Svendsen Medeiros, Anna Paula Rocha de Queiroga, Blenda Brito de Queiroz, Daniely Rayane Bezerra de Farias, Joyceana Oliveira Correia, Eliane Rolim Florentino, Flávia Carolina Alonso Buriti

## Preparation of phosphorus-doped porous carbon for high performance supercapacitors by one-step carbonization

RSC Adv., 2020, 10, 17768-17776. DOI: 10.1039/D0RA02398A. Paper, Open Access (CC BY 3.0).
Guanfeng Lin, Qiong Wang, Xuan Yang, Zhenghan Cai, Yongzhi Xiong, Biao Huang

P-doped porous carbon can be prepared by one-step carbonization using biomass sawdust impregnated with a small amount of phosphoric acid.

## A chitosan-based edible film with clove essential oil and nisin for improving the quality and shelf life of pork patties in cold storage

RSC Adv., 2020, 10, 17777-17786. DOI: 10.1039/D0RA02986F. Paper, Open Access (CC BY-NC 3.0).
Karthikeyan Venkatachalam, Somwang Lekjing

This study assessed chitosan (CS)-based edible films with clove essential oil (CO) and nisin (NI), singly or in combination, for improving the quality and shelf life of pork patties stored in cold conditions.

## Effect of Zn doping on phase transition and electronic structures of Heusler-type Pd2Cr-based alloys: from normal to all-d-metal Heusler

RSC Adv., 2020, 10, 17829-17835. DOI: 10.1039/D0RA02951C. Paper, Open Access (CC BY 3.0).
Xiaotian Wang, Mengxin Wu, Tie Yang, Rabah Khenata

The effect of Zn doping on the phase transition and electronic structure of the Heusler alloys Pd2CrZ (Z = Al, Ga, In, Tl, Si, Sn, P, As, Sb, Bi, Se, Te, Zn) has been studied in this work by first-principles calculations.

## Synthesis, characterization and corrosion inhibition behavior of 2-aminofluorene bis-Schiff bases in circulating cooling water

RSC Adv., 2020, 10, 17816-17828. DOI: 10.1039/D0RA01903H. Paper, Open Access (CC BY-NC 3.0).
Wenchang Wei, Zheng Liu, Chuxin Liang, Guo-Cheng Han, Jiaxing Han, Shufen Zhang

Two new bis-Schiff bases, 2-bromoisophthalaldehyde-2-aminofluorene (M1) and glutaraldehyde 2-aminofluorene (M2), were synthesized and characterized; potentiodynamic polarization curves confirmed that they are anodic-type inhibitors.

## Effect of new carbonyl cyanide aromatic hydrazones on biofilm inhibition against methicillin resistant Staphylococcus aureus

RSC Adv., 2020, 10, 17854-17861. DOI: 10.1039/D0RA03124K. Paper, Open Access (CC BY 3.0).
Xueer Lu, Ziwen Zhang, Yingying Xu, Jun Lu, Wenjian Tang, Jing Zhang

Compounds 2e and 2j, with strongly electron-withdrawing p-NO2 and p-CF3 groups at the phenyl ring, had the lowest MICs against S. aureus and MRSA. 2e displayed unaided or synergistic efficacy against MRSA, especially when combined with ofloxacin. EM revealed that 2e destroys biofilms and cell membranes.

## Research on the controllable degradation of N-methylamido and dialkylamino substituted at the 5th position of the benzene ring in chlorsulfuron in acidic soil

RSC Adv., 2020, 10, 17870-17880. DOI: 10.1039/D0RA00811G. Paper, Open Access (CC BY-NC 3.0).
Fan-Fei Meng, Lei Wu, Yu-Cheng Gu, Sha Zhou, Yong-Hong Li, Ming-Gui Chen, Shaa Zhou, Yang-Yang Zhao, Yi Ma, Zheng-Ming Li

These results will provide valuable information for discovering tailored SUs with controllable degradation properties to meet the needs of individual crops.

## Nitrogen-doped RuS2 nanoparticles containing in situ reduced Ru as an efficient electrocatalyst for hydrogen evolution

RSC Adv., 2020, 10, 17862-17868. DOI: 10.1039/D0RA02530E. Paper, Open Access (CC BY-NC 3.0).
Yan Xu, Xiaoping Gao, Jingyan Zhang, Daqiang Gao

N-doping combined with in situ reduced Ru metal enhances the performance of N-RuS2/Ru for the HER.

## Lithium metal deposition/dissolution under uniaxial pressure with high-rigidity layered polyethylene separator

RSC Adv., 2020, 10, 17805-17815. DOI: 10.1039/D0RA02788J. Paper, Open Access (CC BY 3.0).
Shogo Kanamori, Mitsuhiro Matsumoto, Sou Taminato, Daisuke Mori, Yasuo Takeda, Hoe Jin Hah, Takashi Takeuchi, Nobuyuki Imanishi

The use of a high-rigidity separator and the application of an appropriate amount of pressure are effective approaches to control lithium metal growth and improve its cycle performance.

## Synthesis of heteroatom-containing pyrrolidine derivatives based on Ti(O-iPr)4 and EtMgBr-catalyzed carbocyclization of allylpropargyl amines with Et2Zn

RSC Adv., 2020, 10, 17881-17891. DOI: 10.1039/D0RA02677H. Paper, Open Access (CC BY 3.0).
Rita N. Kadikova, Ilfir R. Ramazanov, Azat M. Gabdullin, Oleg S. Mozgovoj, Usein M. Dzhemilev

The Ti(O-iPr)4- and EtMgBr-catalyzed regio- and stereoselective carbocyclization of N-allyl-substituted 2-alkynylamines with Et2Zn, followed by deuterolysis or hydrolysis, affords the corresponding methylenepyrrolidine derivatives in high yields.

## Nanoporous materials with predicted zeolite topologies

RSC Adv., 2020, 10, 17760-17767. DOI: 10.1039/D0RA01888K. Paper, Open Access (CC BY 3.0).
Vladislav A. Blatov, Olga A. Blatova, Frits Daeyaert, Michael W. Deem

Topological exploration of crystal structures demonstrates the presence of known zeolites, inorganics, and MOFs in a database of predicted materials.

## Glossary format definition list

nicolasgallagher.com · Published Mon, 01 Jun 2009

Bruce Lawson recently asked for ways to style a definition list in the common glossary format. This is one way to do it.

Bruce's original post – css challenge – describes what he is after: a "glossary style" appearance with the term on the left and the definitions on the right. Some terms will have multiple definitions, definitions of varying length, and each new term should appear on a new line. A definition list is semantically correct for this kind of information, so there was to be no fiddling around with the HTML, and the browser requirements were for it to work in all modern browsers and IE 6+.

You can skip straight to the demo, where some additional classes are included in the HTML in order to highlight each term–definition association.

### The basic HTML

The basic HTML structure is a simple definition list and nothing more. There are some short, long, and multiple definitions for each term.

```html
<h1>Styling definition lists</h1>
<dl>
  <dt>Cheese</dt>
  <dd>
    <p>Velit esse cillum dolore in reprehenderit in voluptate duis aute irure dolor. Consectetur adipisicing elit, excepteur sint occaecat sunt in culpa. Velit esse cillum dolore eu fugiat nulla pariatur. Ut aliquip ex ea commodo consequat.</p>
    <p>Mollit anim id est laborum. Ut enim ad minim veniam, consectetur adipisicing elit, ullamco laboris nisi. Lorem ipsum dolor sit amet, sunt in culpa quis nostrud exercitation.</p>
  </dd>
  <dd>yummy!</dd>
  <dt>Building flexibility through spreading knowledge and self-organization, exploiting the productive lifecycle to experience a profound paradigm shift. Through the adoption of a proactive stance, the astute manager can adopt a position at the vanguard.</dt>
  <dd>balderdash</dd>
  <dd>poppycock</dd>
  <dt>Aardvark</dt>
  <dd>never hurt anyone</dd>
</dl>
```

### The styles

In order to get the required appearance in all browsers I had to use negative margins and a few conditional styles to get IE7 and IE6 to play along. For the purposes of the demo I've placed all the styles in `<style>` blocks in the head of the document.

```html
<style>
  dl {padding-left:300px;}
  dt {clear:both; float:left; width:260px; padding:10px; margin:0 0 2em -300px; font-weight:bold; color:#686663;}
  dd {float:left; width:100%; padding:10px 0; margin:0 0 2em;}
</style>

<!--[if lte IE 7]>
<style>
  dt {display:inline; margin-bottom:0;}
  dd {float:none; width:auto;}
</style>
<![endif]-->
```

That's it. The widths of the `<dt>` can be set in ems or percentages if the layout requires. The complete code is available in the demo and you are free to use this code.

## Multiple Backgrounds and Borders with CSS 2.1

nicolasgallagher.com · Published Wed, 09 Jun 2010

Using CSS 2.1 pseudo-elements to provide up to 3 background canvases, 2 fixed-size presentational images, and multiple complex borders for a single HTML element. This method of progressive enhancement works for all browsers that support CSS 2.1 pseudo-elements and their positioning. No CSS3 support required.

Demo: Multiple Backgrounds with CSS 2.1
Demo: Multiple Borders with CSS 2.1
Support: Firefox 3.5+, Safari 4+, Chrome 4+, Opera 10+, IE8+.

### How does it work?

Essentially, you create pseudo-elements using CSS (:before and :after) and treat them similarly to how you would treat HTML elements nested within your target element. But they have distinct benefits – beyond semantics – over the use of nested HTML elements.

To provide multiple backgrounds and/or borders, the pseudo-elements are pushed behind the content layer and pinned to the desired points of the HTML element using absolute positioning. The pseudo-elements contain no true content and are absolutely positioned. This means that they can be stretched to sit over any area of the "parent" element without affecting its content. This can be done using any combination of values for the top, right, bottom, left, width, and height properties and is the key to their flexibility.

### What effects can be achieved?

Using just one element you can create parallax effects, multiple background colours, multiple background images, clipped background images, image replacement, expandable boxes using images for borders, fluid faux columns, images existing outside the box, the appearance of multiple borders, and other popular effects that usually require images and/or the use of presentational HTML. It is also possible to include 2 extra presentational images as generated content.

The Multiple Backgrounds with CSS 2.1 and Multiple Borders with CSS 2.1 demo pages show how several popular examples of these effects can be achieved with this technique.

Most structural elements will contain child elements. Therefore, more often than not, you will be able to gain a further 2 pseudo-elements to use in the presentation by generating them from the first child (and even last-child) element of the parent element. In addition, you can use style changes on :hover to produce complex interaction effects.

### Example code: multiple background images

Using this technique it is possible to reproduce multiple-background parallax effects like those found on the Silverback site using just one HTML element.

The element gets its own background image and any desired padding. By relatively positioning the element it acts as the reference point when absolutely positioning the pseudo-elements. The positive z-index will allow for the correct z-axis positioning of the pseudo-elements.

```css
#silverback {
  position: relative;
  z-index: 1;
  min-width: 200px;
  min-height: 200px;
  padding: 120px 200px 50px;
  background: #d3ff99 url(vines-back.png) -10% 0 repeat-x;
}
```

Both pseudo-elements are absolutely positioned and pinned to each side of the element. The z-index value of -1 moves the pseudo-elements behind the content layer. This way the pseudo-elements sit on top of the element's background and border but all the content is still selectable or clickable.

```css
#silverback:before,
#silverback:after {
  position: absolute;
  z-index: -1;
  top: 0;
  left: 0;
  right: 0;
  bottom: 0;
  padding-top: 100px;
}
```

Each pseudo-element then has a repeated background-image set.
This is all that is needed to reproduce the parallax effect. The content property lets you add an image as generated content. With two pseudo-elements you can add 2 further images to an element. They can be crudely positioned within the pseudo-element box by varying other properties such as text-align and padding.

```css
#silverback:before {
  content: url(gorilla-1.png);
  padding-left: 3%;
  text-align: left;
  background: transparent url(vines-mid.png) 300% 0 repeat-x;
}

#silverback:after {
  content: url(gorilla-2.png);
  padding-right: 3%;
  text-align: right;
  background: transparent url(vines-front.png) 70% 0 repeat-x;
}
```

The finished product is part of the Multiple Backgrounds with CSS 2.1 demo.

### Example code: fluid faux columns

Another application is creating equal height fluid columns without images or extra nested containers. The HTML base is very simple. I've used specific classes on each child div rather than relying on CSS 2.1 selectors that IE6 does not support. If you don't require IE6 support you don't actually need the classes.

```html
<div id="faux">
  <div class="main">[content]</div>
  <div class="supp1">[content]</div>
  <div class="supp2">[content]</div>
</div>
```

The percentage-width container is once again relatively positioned and a positive z-index is set. Applying overflow:hidden gets the element to wrap its floated children and will hide the overflowing pseudo-elements. The background colour will provide the colour for one of the columns.

```css
#faux {
  position: relative;
  z-index: 1;
  width: 80%;
  margin: 0 auto;
  overflow: hidden;
  background: #ffaf00;
}
```

By using relative positioning on the child div's you can also control the order of the columns independently of their source order.

```css
#faux div {
  position: relative;
  float: left;
  width: 30%;
}

#faux .main {
  left: 35%;
}

#faux .supp1 {
  left: -28.5%;
}

#faux .supp2 {
  left: 8.5%;
}
```

The other two full-height columns are produced by creating, sizing, and positioning pseudo-elements with backgrounds. These backgrounds can be (repeating) images if the design requires.

```css
#faux:before,
#faux:after {
  content: "";
  position: absolute;
  z-index: -1;
  top: 0;
  right: 0;
  bottom: 0;
  left: 33.333%;
  background: #f9b6ff;
}

#faux:after {
  left: 66.667%;
  background: #79daff;
}
```

The finished product is part of the Multiple Backgrounds with CSS 2.1 demo.

### Example code: multiple borders

Multiple borders are produced in much the same way. Using them can avoid the need for images to produce simple effects. An element must be relatively positioned and have sufficient padding to contain the width of the extra border you will be creating with pseudo-elements.

```css
#borders {
  position: relative;
  z-index: 1;
  padding: 30px;
  border: 5px solid #f00;
  background: #ff9600;
}
```

The pseudo-elements are positioned at specific distances away from the edge of the element's box, moved behind the content layer with the negative z-index, and given the border and background values you want.

```css
#borders:before {
  content: "";
  position: absolute;
  z-index: -1;
  top: 5px;
  left: 5px;
  right: 5px;
  bottom: 5px;
  border: 5px solid #ffea00;
  background: #4aa929;
}

#borders:after {
  content: "";
  position: absolute;
  z-index: -1;
  top: 15px;
  left: 15px;
  right: 15px;
  bottom: 15px;
  border: 5px solid #00b4ff;
  background: #fff;
}
```

That's all there is to it. The finished product is part of the Multiple Borders with CSS 2.1 demo.

### Progressive enhancement and legacy browsers

IE6 and IE7 have no support for CSS 2.1 pseudo-elements and will ignore all :before and :after declarations.
They get none of the enhancements but are left with the basic usable experience.

### A warning about Firefox 3.0

Firefox 3.0 supports CSS 2.1 pseudo-elements but does not support their positioning. Due to this partial support, you should avoid declaring display:block for absolutely positioned pseudo-elements that explicitly declare width or height values. However, when using borders there is no graceful fallback for Firefox 3.0, although sometimes an improved appearance in Firefox 3.0 can be achieved by adding display:block to pseudo-element hacks that use borders.

### Enhancing with CSS3

All the applications included in this article could be further enhanced to take advantage of present-day CSS3 implementations. Using border-radius, rgba, transforms, and CSS3 multiple background images in tandem with pseudo-elements can produce even more complex presentations that I hope to include in a future article. Currently there is no browser support for the use of CSS3 transitions or animations on pseudo-elements.

### In the future: CSS3 pseudo-elements

The proposed extensions to pseudo-elements in the CSS3 Generated and Replaced Content Module include the addition of nested pseudo-elements (::before::before), multiple pseudo-elements (::after(2)), wrapping pseudo-elements (::outside), and the ability to insert pseudo-elements into later parts of the document (::alternate). These changes would provide a near limitless number, and arrangement, of pseudo-elements for all sorts of complex effects and presentations using just one element.

### Let me know what you've done

I've focused on just a few applications and popular effects. If you find other applications, limitations, or want to share how you've applied this technique please leave a comment below or let me know on Twitter (@necolas).

Translations: 使用css2.1实现多重背景、多重边框效果 (Chinese)

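As a rough illustration of the "Enhancing with CSS3" section above, the same pseudo-element approach can be layered with border-radius, rgba, and transforms. This is only a sketch; the selector reuses the `#borders` example, but the values are illustrative and not taken from the original demos.

```css
/* Illustrative only: CSS3 layered onto the pseudo-element technique */
#borders:before {
  content: "";
  position: absolute;
  z-index: -1;
  top: 10px;
  right: 10px;
  bottom: 10px;
  left: 10px;
  border: 5px solid rgba(0, 180, 255, 0.5); /* semi-transparent "inner" border */
  border-radius: 10px;                      /* rounded corners on the inner box */
  transform: rotate(2deg);                  /* slight offset rotation */
}
```
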
## Jump links and viewport positioning

nicolasgallagher.com · Published Fri, 12 Nov 2010

Using within-page links presses the jumped-to content right at the very top of the viewport. This can be a problem when using a fixed header. With a bit of hackery, there are some CSS methods to insert space between the top of the viewport and the target element within a page.

Demo: Jump links and viewport positioning
Known support: varies depending on method used.

This experiment is the result of a post Chris Coyier made on Forrst. Chris' method was to add an empty span element to the target element, shift the id attribute onto the span, and then absolutely position the span somewhere above its parent element. That method works but it requires changes to the HTML. The comments on Chris' post suggested the use of pseudo-elements or padding. This experiment expands on, and combines, some of those suggestions to show the limitations of each method and document their browser support.

### Simplest method

If you need to jump to an element with simple styling then using the :before pseudo-element is a quick and simple approach.

```css
#target:before {
  content: "";
  display: block;
  height: 50px;
  margin: -30px 0 0;
}
```

The drawbacks are that it requires browser support for pseudo-elements and it will fail if the target element has a background colour, a repeated background image, padding-top, or border-top as part of its rule set.

### More robust method

The more robust method uses a transparent border, negative margin, and the background-clip property. If a top border is required then it can be mimicked using a pseudo-element, as described in Multiple Backgrounds and Borders with CSS 2.1.

```css
#target {
  position: relative;
  border-top: 52px solid transparent;
  margin: -30px 0 0;
  -webkit-background-clip: padding-box;
  -moz-background-clip: padding;
  background-clip: padding-box;
}

#target:before {
  content: "";
  position: absolute;
  top: -2px;
  left: 0;
  right: 0;
  border-top: 2px solid #ccc;
}
```

There are still drawbacks: it requires browser support for background-clip if there is a background color, gradient, or repeating image set on the target element; it requires browser support for pseudo-elements and their positioning if a top border is desired; and it interferes with the standard use of margins.

To see these methods in action – as well as more details on the code, browser support, and drawbacks – have a look at the demo page. Please let me know if you know of better techniques.

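For context, a minimal sketch of the markup both methods assume – a within-page link and its jumped-to target element. The id, heading, and copy are illustrative; only `#target` is taken from the CSS above.

```html
<!-- A jump link somewhere in the page -->
<a href="#target">Jump to the section below</a>

<!-- The target element that the #target rules above style -->
<div id="target">
  <h2>Section heading</h2>
  <p>Content that should not end up hidden behind a fixed header.</p>
</div>
```
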
## CSS drop-shadows without images

nicolasgallagher.com · Published Fri, 10 Dec 2010

Drop-shadows are easy enough to create using pseudo-elements. It's a nice and robust way to progressively enhance a design. This post is a summary of the technique and some of the possible appearances.

Demo: CSS drop-shadows without images
Known support: Firefox 3.5+, Chrome 5+, Safari 5+, Opera 10.6+, IE 9+

I'll be looking mainly at a few details involved in making this effect more robust. Divya Manian covered the basic principle in her article Drop Shadows with CSS3 and Matt Hamm recently shared his Pure CSS3 box-shadow page curl effect. After a bit of back-and-forth on Twitter with Simurai, and proposing a couple of additions to Divya's and Matt's demos using jsbin, I felt like documenting and explaining the parts that make up this technique.

### The basic technique

There is no need for extra markup; the effect can be applied to a single element. A couple of pseudo-elements are generated from an element and then pushed behind it.

```css
.drop-shadow {
  position: relative;
  width: 90%;
}

.drop-shadow:before,
.drop-shadow:after {
  content: "";
  position: absolute;
  z-index: -1;
}
```

The pseudo-elements need to be positioned and given explicit or implicit dimensions.

```css
.drop-shadow:before,
.drop-shadow:after {
  content: "";
  position: absolute;
  z-index: -1;
  bottom: 15px;
  left: 10px;
  width: 50%;
  height: 20%;
}
```

The next step is to add a CSS3 box-shadow and apply CSS3 transforms. Different types of drop-shadow can be produced by varying these values and the types of transforms applied.

```css
.drop-shadow:before,
.drop-shadow:after {
  content: "";
  position: absolute;
  z-index: -1;
  bottom: 15px;
  left: 10px;
  width: 50%;
  height: 20%;
  box-shadow: 0 15px 10px rgba(0, 0, 0, 0.7);
  transform: rotate(-3deg);
}
```

One of the pseudo-elements then needs to be positioned on the other side of the element and rotated in the opposite direction. This is easily done by overriding only the properties that need to differ.

```css
.drop-shadow:after {
  right: 10px;
  left: auto;
  transform: rotate(3deg);
}
```

The final core code is as shown below. There is just one more addition – max-width – to prevent the drop-shadow from extending too far below very wide elements.

```css
.drop-shadow {
  position: relative;
  width: 90%;
}

.drop-shadow:before,
.drop-shadow:after {
  content: "";
  position: absolute;
  z-index: -1;
  bottom: 15px;
  left: 10px;
  width: 50%;
  height: 20%;
  max-width: 300px;
  box-shadow: 0 15px 10px rgba(0, 0, 0, 0.7);
  transform: rotate(-3deg);
}

.drop-shadow:after {
  right: 10px;
  left: auto;
  transform: rotate(3deg);
}
```

### No Firefox 3.0 problems this time

Some pseudo-element hacks require a work-around to avoid looking broken in Firefox 3.0 because that browser does not support the positioning of pseudo-elements. This usually involves implicitly setting their dimensions using offsets. However, as Divya Manian pointed out to me, in this case we're only using box-shadow – which Firefox 3.0 doesn't support – and Firefox 3.0 will ignore the position:absolute declaration for the pseudo-elements. This leaves them with the default display:inline style. As a result, there is no problem explicitly setting the pseudo-element width and height because it won't be applied to the pseudo-elements in Firefox 3.0.

### Further enhancements

From this base there are plenty of ways to tweak the effect by applying skew to the pseudo-elements and modifying the styles of the element itself. A great example of this was shared by Simurai.
By adding a border-radius to the element you can give the appearance of page curl.

```css
.drop-shadow {
  border-radius: 0 0 120px 120px / 0 0 6px 6px;
}
```

I've put together a little demo page with a few of the drop-shadow effects, including those that build on the work of Divya Manian and Matt Hamm. If you've got your own improvements, please send them to me on Twitter.

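As one example of the "further enhancements" mentioned above, skew can be combined with the same two pseudo-elements to change the shape of the shadows. The class name `.skew-shadow` and the values below are illustrative additions, not part of the original demo.

```css
/* Illustrative only: skewing the shadows for a different lifted-corner shape */
.skew-shadow:before {
  transform: skew(-8deg) rotate(-3deg);
}

.skew-shadow:after {
  transform: skew(8deg) rotate(3deg);
}
```
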
## Better conditional classnames for hack-free CSS

nicolasgallagher.com · Published Thu, 19 May 2011

Applying conditional classnames to the html element is a popular way to help target specific versions of IE with CSS fixes. It was first described by Paul Irish and is a feature of the HTML5 Boilerplate. Despite all its benefits, there are still a couple of niggling issues. Here are some hacky variants that side-step those issues.

An article by Paul Irish, Conditional stylesheets vs CSS hacks? Answer: Neither!, first proposed that conditional comments be used on the opening html tag to help target legacy versions of IE with CSS fixes. Since its inclusion in the HTML5 Boilerplate project, contributors have further refined the technique. However, there are still some niggling issues with the "classic" conditional comments approach, which Mathias Bynens summarized in a recent article on safe CSS hacks:

- The Compatibility View icon is displayed in IE8 and IE9 if you are not setting the X-UA-Compatible header in a server config.
- The character encoding declaration might not be fully contained within the first 1024 bytes of the HTML document if you need to include several attributes on each version of the opening html tag (e.g. Facebook xmlns junk).

You can read more about the related discussions in issue #286 and issue #378 at the HTML5 Boilerplate GitHub repository.

### The "bubble up" conditional comments method

Although not necessarily recommended, it looks like both of these issues can be avoided with a bit of trickery. You can create an uncommented opening html tag upon which any shared attributes (so no class attribute) can be set. The conditional classes are then assigned in a second html tag that appears after the `<meta http-equiv="X-UA-Compatible">` tag in the document. The classes will "bubble up" to the uncommented tag.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <meta charset="utf-8">
  <!--[if lt IE 7]><html class="no-js ie6"><![endif]-->
  <!--[if IE 7]><html class="no-js ie7"><![endif]-->
  <!--[if IE 8]><html class="no-js ie8"><![endif]-->
  <!--[if gt IE 8]><!--><html class="no-js"><!--<![endif]-->
  <title>Document</title>
</head>
<body>
</body>
</html>
```

Fork the Gist

The result is that IE8 and IE9 won't ignore the `<meta http-equiv="X-UA-Compatible">` tag, the Compatibility View icon will not be displayed, and the amount of repeated code is reduced. Obviously, including a second html tag in the head isn't pretty or valid HTML.

If you're using a server-side config to set the X-UA-Compatible header (instead of the meta tag), then you can still benefit from the DRYer nature of using two opening html tags and it isn't necessary to include the conditional comments in the head of the document. However, you might still want to do so if you risk not containing the character encoding declaration within the first 1024 bytes of the document.

```html
<!DOCTYPE html>
<html lang="en">
<!--[if lt IE 7]><html class="no-js ie6"><![endif]-->
<!--[if IE 7]><html class="no-js ie7"><![endif]-->
<!--[if IE 8]><html class="no-js ie8"><![endif]-->
<!--[if gt IE 8]><!--><html class="no-js"><!--<![endif]-->
<head>
  <meta charset="utf-8">
  <title>Document</title>
</head>
<body>
</body>
</html>
```

Fork the Gist

### The "preemptive" conditional comments method

Another method to prevent the Compatibility View icon from showing was found by Julien Wajsberg. It relies on including a conditional comment before the DOCTYPE.
Doing this seems to help IE recognise the `<meta http-equiv="X-UA-Compatible">` tag. This method isn't as DRY and doesn't have the character encoding declaration as high up in the document, but it also doesn't use 2 opening html elements.

```html
<!--[if IE]><![endif]-->
<!DOCTYPE html>
<!--[if lt IE 7]><html class="no-js ie6"><![endif]-->
<!--[if IE 7]><html class="no-js ie7"><![endif]-->
<!--[if IE 8]><html class="no-js ie8"><![endif]-->
<!--[if gt IE 8]><!--><html class="no-js"><!--<![endif]-->
<head>
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <meta charset="utf-8">
  <title>Document</title>
</head>
<body>
</body>
</html>
```

Fork the Gist

While it's interesting to explore these possibilities, the "classic" method is still generally the most understandable. It doesn't create invalid HTML, doesn't risk throwing IE into quirks mode, and you won't have a problem with the Compatibility View icon if you use a server-side config. If you find any other approaches, or problems with those posted here, please leave a comment but also consider adding what you've found to the relevant issues in the HTML5 Boilerplate GitHub repository.

Thanks to Paul Irish for feedback and suggestions.

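To show how the conditional classes are then consumed, here is a small illustrative example of scoping CSS fixes to the `ie6` and `ie7` classes set by the conditional comments above. The component class and the specific fixes are hypothetical, though the `zoom` and `height` declarations are well-known legacy IE workarounds.

```css
/* Base rule for all browsers */
.box {
  display: inline-block;
}

/* Fixes that only apply when the html element carries a conditional class */
.ie7 .box {
  zoom: 1;          /* trigger hasLayout in IE7 */
  display: inline;  /* emulate inline-block on block-level elements */
}

.ie6 .box {
  height: 1%;       /* the old "Holly hack" to trigger hasLayout in IE6 */
}
```
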
## Mac OS X bootable backup drive with rsync

nicolasgallagher.com · Published Sun, 14 Aug 2011

I've started using a backup strategy based on that originally described by Jamie Zawinski and subsequently covered in Jeff Atwood's What's your backup strategy? article. It works by incrementally backing up your data to a bootable clone of your computer's internal drive, in order to replace the internal drive when it fails.

This script is maintained in my dotfiles repo. Please report problems or improvements in the issue tracker.

This post is mainly to document – for myself as much as anything – the process I went through in order to implement an incremental backup strategy in OS X 10.6+. Use at your own risk. Feel free to suggest improvements if you know of any.

### Formatting and partitioning the drive

With your backup drive in its enclosure, connect the drive to your Mac and open the Disk Utility application.

1. Click on the disk's name. This should bring up a "Partition" tab in the right panel. Click on the "Partition" tab.
2. Under "Volume scheme" select the number of partitions you need. Probably "1 partition" if it is to match your internal disk.
3. Under "Name" enter the volume name you want to use, e.g., "Backup".
4. Under "Format" select "Mac OS X Extended (Journaled)", which is necessary if the disk is to be bootable.
5. Click "Options" and check that "GUID Partition Table" is selected.
6. Click "Apply". This will format and partition the disk.

The partition(s) should now show up in the Finder and on the Desktop.

### Enable ownership permissions

The new partition needs permissions to be enabled to avoid chown errors when using rsync. To do this, select the partition and view its information page (using "Get Info" or Command+I). Expand the "Ownership & Permissions" section and uncheck "Ignore ownership on this volume".

### Backup script

The backup script uses rsync – a fast and versatile file copying tool – to manage the copying and moving of data between volumes. You need to install rsync 3 (this is easily done using Homebrew: `brew install https://raw.github.com/Homebrew/homebrew-dupes/master/rsync.rb`). Rsync offers a wide variety of options and only copies the differences between the source files and the existing files in the destination, making it ideal for incremental backups. You can find out more about rsync in the rsync documentation.

The following is the contents of a script I've named `backup`. I'm using it to back up all of the data on my internal disk, with a specified set of exceptions contained within a file called `.backupignore`.

```bash
#!/bin/bash

# Disc backup script
# Requires rsync 3

# Ask for the administrator password upfront
sudo -v

# IMPORTANT: Make sure you update the `DST` variable to match the name of the
# destination backup drive
DST="/Volumes/Macintosh HD/"
SRC="/"
EXCLUDE="$HOME/.backupignore"
PROG=$0

# --acls             update the destination ACLs to be the same as the source ACLs
# --archive          turn on archive mode (recursive copy + retain attributes)
# --delete           delete any files that have been deleted locally
# --delete-excluded  delete any files (on DST) that are part of the list of excluded files
# --exclude-from     reference a list of files to exclude
# --hard-links       preserve hard-links
# --one-file-system  don't cross device boundaries (ignore mounted volumes)
# --sparse           handle sparse files efficiently
# --verbose          increase verbosity
# --xattrs           update the remote extended attributes to be the same as the local ones

if [ ! -r "$SRC" ]; then
  logger -t $PROG "Source $SRC not readable - Cannot start the sync process"
  exit;
fi

if [ ! -w "$DST" ]; then
  logger -t $PROG "Destination $DST not writeable - Cannot start the sync process"
  exit;
fi

logger -t $PROG "Start rsync"

sudo rsync --acls --archive --delete --delete-excluded --exclude-from=$EXCLUDE --hard-links --one-file-system --sparse --verbose --xattrs "$SRC" "$DST"

logger -t $PROG "End rsync"

# Make the backup bootable
sudo bless -folder "$DST"/System/Library/CoreServices

exit 0
```

Adapted from the rsync script at Automated OSX backups with launchd and rsync.

This is the contents of the `.backupignore` file:

```
.Spotlight-*/
.Trashes
/afs/*
/automount/*
/cores/*
/dev/*
/Network/*
/private/tmp/*
/private/var/run/*
/private/var/spool/postfix/*
/private/var/vm/*
/Previous Systems.localized
/tmp/*
/Volumes/*
*/.Trash
```

Adapted from the excludes file at Automated OSX backups with launchd and rsync.

Every time the script runs, messages will be written to the system log.

Check that the source (SRC) and destination (DST) paths in the script are correct and match the volume name that you chose when partitioning the disk. Wrapping the $SRC and $DST variables in double quotes ensures that the script will work even if your volume names contain spaces (e.g. "Macintosh Backup"). The command option --exclude-from tells the script where to find the file containing the exclude patterns. Make sure you either have .backupignore in the home directory or that you update this part of the command to reference the full path of the excludes file.

### Running the backup script

You can run the script from the command line, or make it executable from the Finder or the Desktop:

1. Type the following into the command line to ensure that you have permission to execute the script: `chmod +x /path/to/rsync_backup.sh`
2. Remove the .sh extension from the script.
3. Create an alias of the script and move it to the Desktop.
4. Double click the icon to run the backup script.

It's important to run the script regularly in order to keep the backup in sync with your internal disk. If you have a desktop computer, or you never turn off your laptop, you can automate the running of the script by setting up a cron job.

### Checking the disk is bootable

Once you've run the backup script, you should test that the backup disk is bootable. To do this, restart your computer and hold down the Alt/Option key. Your backup disk should be presented, with the volume name you chose, as a bootable disk.

When I first booted my backup, the terminal displayed the following line:

```
dyld: shared cached file was build against a different libSystem.dylib, ignoring cache
```

According to this article, the fix for this is to update the cache by entering the following into the terminal:

```
sudo update_dyld_shared_cache -force
```

That should be everything you need to start implementing an incremental backup strategy when using OS X.

## Quick tip: git-checkout specific files from another branch

nicolasgallagher.com · Published Wed, 12 Oct 2011

The git-checkout command can be used to update specific files or directories in your working tree with those from another branch, without merging in the whole branch. This can be useful when working with several feature branches or using GitHub Pages to generate a static project site.

The git-checkout manual page describes how the git checkout command is not just useful for switching between branches:

> When <paths> or --patch are given, git checkout does not switch branches. It updates the named paths in the working tree from the index file or from a named <tree-ish> (most often a commit)… The <tree-ish> argument can be used to specify a specific tree-ish (i.e. commit, tag or tree) to update the index for the given paths before updating the working tree.

In git, a tree-ish is a way of referring to a particular commit or tree. This can be a partial sha or the branch, remote, and tag name pointers.

The syntax for using git checkout to update the working tree with files from a tree-ish is as follows:

```
git checkout [-p|--patch] [<tree-ish>] [--] <pathspec>…
```

Therefore, to update the working tree with files or directories from another branch, you can use the branch name pointer in the git checkout command.

```
git checkout <branch_name> -- <paths>
```

As an example, this is how you could update your gh-pages branch on GitHub (used to generate a static site for your project) to include the latest changes made to a file that is on the master branch.

```
# On branch master
git checkout gh-pages
git checkout master -- myplugin.js
git commit -m "Update myplugin.js from master"
```

The need to update my gh-pages branch with specific files from my master branch was how I first found out about the other uses of the checkout command. It's worth having a read of the rest of the git-checkout manual page and experimenting with the options.

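Because any tree-ish is accepted in that position, the same pattern also works with tags and remote-tracking branches. A couple of illustrative examples (the file and directory names are hypothetical):

```
# Restore a single file as it was at the v1.0 tag
git checkout v1.0 -- README.md

# Pull a directory's contents from a remote-tracking branch without merging
git checkout origin/master -- docs/
```
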
## "Mobile first" CSS and getting Sass to help with legacy IE

nicolasgallagher.com · Published Mon, 28 Nov 2011

Taking a "mobile first" approach to web development poses some challenges if you need to provide a "desktop" experience for legacy versions of IE. Using a CSS pre-processor like Sass can help. As of Sass 3.2, there is another way of catering for IE, described by Jake Archibald.

One aspect of a "mobile first" approach to development is that your styles are usually gradually built up from a simple base. Each new "layer" of CSS adds presentational adjustments and complexity, via CSS3 Media Queries, to react to and make use of additional viewport space. However, IE 6/7/8 do not support CSS3 Media Queries. If you want to serve IE 6/7/8 something more than just the base CSS, then you need a solution that exposes the "enhancing" CSS to those browsers.

### Popular existing options

An existing option is the use of a CSS3 Media Query polyfill, such as Respond.js. However, there are some drawbacks to this approach (see the project README), such as the introduction of a JavaScript dependency and the XHRing of your style sheets, which may introduce performance or cross-domain security issues. Furthermore, adding support for CSS3 Media Queries is probably not necessary for these legacy browsers. The main concern is exposing the "enhancing" CSS.

Another method, which Jeremy Keith has described in his post on Windows mobile media queries, is to use separate CSS files: one basic global file, and an "enhancing" file that is referenced twice in the `<head>` of the document. The "enhancing" file is referenced once using a media attribute containing a CSS3 Media Query value. This prevents it being downloaded by browsers (such as IE 6/7/8) which do not support CSS3 Media Queries. The same file is then referenced again, this time wrapped in an IE conditional comment (without the use of a CSS3 Media Query value) to hide it from modern browsers. However, this approach becomes somewhat cumbersome, and introduces multiple HTTP requests, if you have multiple breakpoints in your responsive design.

### Getting Sass to help

Sass 3.1 provides some features that help make this second approach more flexible. The general advantages of the Sass-based approach I've used are:

- You have full control over how your style sheets are broken up and reassembled.
- It removes the performance concerns of having to reference several separate style sheets for each breakpoint in the responsive design, simply to cater for IE 6/7/8.
- You can easily repeat large chunks of CSS in separate compiled files without introducing maintenance problems.

The basic idea is to produce two versions of your compiled CSS from the same core code. One version of your CSS includes CSS3 @media queries and is downloaded by modern browsers. The other version is only downloaded by IE 6/7/8 in a desktop environment and contains no CSS3 @media queries.

To do this, you take advantage of the fact that Sass can import and compile separate .scss/.sass files into a single CSS file. This allows you to keep the CSS rules used at any breakpoint completely separate from the @media query that you might want it to be a part of.

This is not a CSS3 Media Query polyfill, so one assumption is that IE 6/7/8 users will predominantly be using mid-size screens and should receive styles appropriate to that environment.
Therefore, in the example below, I am making a subjective judgement by including all the breakpoint styles up to a width of 960px but withholding those for any breakpoints beyond that.

The ie.scss file imports numerous other files, each containing a layer of CSS that builds upon the previous one. No CSS3 @media queries are contained within the files or the ie.scss file. It then compiles to a single CSS file that is designed to be served only to IE 6/7/8.

```scss
// ie.scss

@import "base";
@import "320-up";
@import "480-up";
@import "780-up";
@import "960-up";
```

The style.scss file imports the code for each breakpoint involved in the design (including any beyond the limit imposed for legacy versions of IE) but nests them within the relevant CSS3 @media query. The compiled version of this file is served to all browsers apart from IE 6/7/8 and IEMobile.

```scss
// style.scss

@import "base";

@media (min-width:320px) {
  @import "320-up";
}

@media (min-width:480px) {
  @import "480-up";
}

@media (min-width:780px) {
  @import "780-up";
}

@media (min-width:960px) {
  @import "960-up";
}

@media (min-width:1100px) {
  @import "1100-up";
}
```

The resulting CSS files can then be referenced in the HTML. It is important to hide the ie.css file from any IE-based mobile browsers. This ensures that they do not download the CSS meant for desktop versions of IE.

```html
<!--[if (gt IE 8) | (IEMobile)]><!-->
<link rel="stylesheet" href="/css/style.css">
<!--<![endif]-->

<!--[if (lt IE 9) & (!IEMobile)]>
<link rel="stylesheet" href="/css/ie.css">
<![endif]-->
```

This Sass-enabled approach works just as well if you need to serve a basic style sheet for mobiles without CSS3 Media Query support, and prevent those devices from downloading the CSS used to adapt the layout to wider viewports. For example, you can avoid importing base.scss into the ie.scss and style.scss files. It can then be referenced separately in the HTML.

```html
<link rel="stylesheet" href="/css/base.css">
<link rel="stylesheet" href="/css/style.css" media="(min-width:320px)">

<!--[if (lt IE 9) & (!IEMobile)]>
<link rel="stylesheet" href="/css/ie.css">
<![endif]-->
```

You'll notice that I didn't wrap the style.css reference in a conditional comment to hide it from legacy versions of IE. It's not necessary this time because the value of the media attribute is not understood by legacy versions of IE, and the style sheet will not be downloaded. In different circumstances, different combinations of style sheets and media attribute values will be more appropriate.

### Summary

Even if you don't want to use any of the Sass or SCSS syntax, the pre-processor itself can help you to write your CSS in a "mobile first" manner (with multiple breakpoints), provide a "desktop" experience for IE 6/7/8, and avoid some of the performance or maintenance concerns that are sometimes present when juggling the two requirements.

I'm relatively new to using Sass, so there may be even better ways to achieve the same result or even to prevent the inclusion of IE-specific CSS unless the file is being compiled into a style sheet that only IE will download.

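To make the compilation step concrete, here is a rough sketch of what the two compiled files might contain for one rule, assuming a hypothetical `_480-up.scss` partial whose contents are invented purely for illustration:

```css
/* Hypothetical partial _480-up.scss contains:
   .page-title { font-size: 1.5em; }              */

/* Compiled ie.css — the rule is included with no media query */
.page-title {
  font-size: 1.5em;
}

/* Compiled style.css — the same rule wrapped in the media query */
@media (min-width: 480px) {
  .page-title {
    font-size: 1.5em;
  }
}
```
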
## CSS: the cascade, specificity, and inheritance

nicolasgallagher.com · Published Sun, 04 Mar 2012

### What is the cascade?

The cascade is a mechanism for determining which styles should be applied to a given element, based on the rules that have cascaded down from various sources. The cascade takes importance, origin, specificity, and source order of style rules into account. It assigns a weight to each rule. When multiple rules apply to a given element, the rule with the greatest weight takes precedence. The result is an unambiguous way to determine the value of a given element/property combination.

Browsers apply the following sorting logic:

1. Find all declarations that apply to a given element/property combination, for the target media type.
2. Sort declarations according to their importance (normal or important) and origin (author, user, or user agent). From highest to lowest precedence:
   - user !important declarations
   - author !important declarations
   - author normal declarations
   - user normal declarations
   - user agent declarations
3. If declarations have the same importance and source, sort them by selector specificity.
4. Finally, if declarations have the same importance, source, and specificity, sort them by the order they are specified in the CSS. The last declaration wins.

### What is specificity?

Specificity is a method of conflict resolution within the cascade. Specificity is calculated in a very particular way, based on the values of 4 distinct categories. For explanatory purposes, the CSS2 spec represents these categories using the letters a, b, c, and d. Each has a value of 0 by default.

- a is equal to 1 if the declaration comes from a style attribute in the HTML ("inline styles") rather than a CSS rule with a selector.
- b is equal to the number of ID attributes in a selector.
- c is equal to the number of other attributes and pseudo-classes in a selector.
- d is equal to the number of elements and pseudo-elements in a selector.

The specificity is given by concatenating all 4 resulting numbers. More specific selectors take precedence over less specific ones.

For example, the selector `#id .class[href] element:hover` contains:

- 1 ID (b is 1)
- 1 class, 1 attribute selector, and 1 pseudo-class (c is 3)
- 1 element (d is 1)

Therefore, it has a specificity of 0,1,3,1.

Note that a selector containing a single ID (0,1,0,0) will have a higher specificity than one containing any number of other attributes or elements (e.g., 0,0,10,20). This is one of the reasons why many modern CSS architectural patterns avoid using IDs for styling purposes.

### What is inheritance?

Inheritance is distinct from the cascade and involves the DOM tree. Inheritance is the process by which elements inherit the values of properties from their ancestors in the DOM tree. Some properties, e.g. color, are automatically inherited by the children of the element to which they are applied. Each property defines whether it will be automatically inherited. The inherit value can be set for any property and will force a given element to inherit its parent element's property value, even if the property is not normally inherited.

### About !important

The above should make it apparent that !important is a separate concept to specificity. It has no effect on the specificity of a rule's selector. An !important declaration has a greater precedence than a normal declaration (see the previously mentioned cascade sorting logic), even declarations contained in an element's style attribute.
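A small worked example may help tie the sorting logic and specificity values together. The selectors and colours below are illustrative only:

```css
/* Specificity 0,0,1,1 — one class, one element */
.sidebar p { color: green; }

/* Specificity 0,0,0,1 — loses to the rule above despite appearing later */
p { color: blue; }

/* Specificity 0,1,0,1 — one ID, one element; wins over both normal rules above */
#content p { color: red; }

/* An author !important declaration outranks all of the normal declarations,
   regardless of selector specificity */
p { color: black !important; }
```

A paragraph matched by all four rules would be black; without the !important declaration, the ID-based rule would win on specificity.
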
[CSS terminology reference]

Translations: CSS: каскад, специфика и наследование (Russian)

## About HTML semantics and front-end architecture

nicolasgallagher.com · Published Wed, 14 Mar 2012

A collection of thoughts, experiences, ideas that I like, and ideas that I have been experimenting with over the last year. It covers HTML semantics, components and approaches to front-end architecture, class naming patterns, and HTTP compression.

### About semantics

Semantics is the study of the relationships between signs and symbols and what they represent. In linguistics, this is primarily the study of the meaning of signs (such as words, phrases, or sounds) in language. In the context of front-end web development, semantics are largely concerned with the agreed meaning of HTML elements, attributes, and attribute values (including extensions like Microdata). These agreed semantics, which are usually formalised in specifications, can be used to help programmes (and subsequently humans) better understand aspects of the information on a website. However, even after formalisation, the semantics of elements, attributes, and attribute values are subject to adaptation and co-option by developers. This can lead to subsequent modifications of the formally agreed semantics (and is an HTML design principle).

### Distinguishing between different types of HTML semantics

The principle of writing "semantic HTML" is one of the foundations of modern, professional front-end development. Most semantics are related to aspects of the nature of the existing or expected content (e.g. h1 element, lang attribute, email value of the type attribute, Microdata).

However, not all semantics need to be content-derived. Class names cannot be "unsemantic". Whatever names are being used: they have meaning, they have purpose. Class name semantics can be different to those of HTML elements. We can leverage the agreed "global" semantics of HTML elements, certain HTML attributes, Microdata, etc., without confusing their purpose with those of the "local" website/application-specific semantics that are usually contained in the values of attributes like the class attribute.

Despite the HTML5 specification section on classes repeating the assumed "best practice" that…

> …authors are encouraged to use [class attribute] values that describe the nature of the content, rather than values that describe the desired presentation of the content.

…there is no inherent reason to do this. In fact, it's often a hindrance when working on large websites or applications.

- Content-layer semantics are already served by HTML elements and other attributes.
- Class names impart little or no useful semantic information to machines or human visitors unless they are part of a small set of agreed upon (and machine readable) names – Microformats.
- The primary purpose of a class name is to be a hook for CSS and JavaScript. If you don't need to add presentation and behaviour to your web documents, then you probably don't need classes in your HTML.
- Class names should communicate useful information to developers. It's helpful to understand what a specific class name is going to do when you read a DOM snippet, especially in multi-developer teams where front-enders won't be the only people working with HTML components.

Take this very simple example:

```html
<div class="news">
  <h2>News</h2>
  [news content]
</div>
```

The class name news doesn't tell you anything that is not already obvious from the content. It gives you no information about the architectural structure of the component, and it cannot be used with content that isn't "news".
Tying your class name semantics tightly to the nature of the content has already reduced the ability of your architecture to scale or be easily put to use by other developers.

### Content-independent class names

An alternative is to derive class name semantics from repeating structural and functional patterns in a design. The most reusable components are those with class names that are independent of the content.

We shouldn't be afraid of making the connections between layers clear and explicit rather than having class names rigidly reflect specific content. Doing this doesn't make classes "unsemantic", it just means that their semantics are not derived from the content. We shouldn't be afraid to include additional HTML elements if they help create more robust, flexible, and reusable components. Doing so does not make the HTML "unsemantic", it just means that you use elements beyond the bare minimum needed to markup the content.

### Front-end architecture

The aim of a component/template/object-oriented architecture is to be able to develop a limited number of reusable components that can contain a range of different content types. The important thing for class name semantics in non-trivial applications is that they be driven by pragmatism and best serve their primary purpose – providing meaningful, flexible, and reusable presentational/behavioural hooks for developers to use.

### Reusable and combinable components

Scalable HTML/CSS must, by and large, rely on classes within the HTML to allow for the creation of reusable components. A flexible and reusable component is one which neither relies on existing within a certain part of the DOM tree, nor requires the use of specific element types. It should be able to adapt to different containers and be easily themed. If necessary, extra HTML elements (beyond those needed just to markup the content) can be used to make the component more robust. A good example is what Nicole Sullivan calls the media object.

Components that can be easily combined benefit from the avoidance of type selectors in favour of classes. The following example prevents the easy combination of the btn component with the uilist component. The problems are that the specificity of .btn is less than that of .uilist a (which will override any shared properties), and the uilist component requires anchors as child nodes.

```css
.btn { /* styles */ }
.uilist { /* styles */ }
.uilist a { /* styles */ }
```

```html
<nav class="uilist">
  <a href="#">Home</a>
  <a href="#">About</a>
  <a class="btn" href="#">Login</a>
</nav>
```

An approach that improves the ease with which you can combine other components with uilist is to use classes to style the child DOM elements. Although this helps to reduce the specificity of the rule, the main benefit is that it gives you the option to apply the structural styles to any type of child node.

```css
.btn { /* styles */ }
.uilist { /* styles */ }
.uilist-item { /* styles */ }
```

```html
<nav class="uilist">
  <a class="uilist-item" href="#">Home</a>
  <a class="uilist-item" href="#">About</a>
  <span class="uilist-item">
    <a class="btn" href="#">Login</a>
  </span>
</nav>
```

### JavaScript-specific classes

Using some form of JavaScript-specific classes can help to reduce the risk that thematic or structural changes to components will break any JavaScript that is also applied. An approach that I've found helpful is to use certain classes only for JavaScript hooks – js-* – and not to hang any presentation off them.
<a href="/login" class="btn btn-primary js-login"></a> This way, you can reduce the chance that changing the structure or theme of components will inadvertently affect any required JavaScript behaviour and complex functionality. Component modifiers Components often have variants with slightly different presentations from the base component, e.g., a different coloured background or border. There are two main patterns used to create these component variants. I’m going to call them the “single-class” and “multi-class” patterns. The “single-class” pattern .btn, .btn-primary { /* button template styles */ } .btn-primary { /* styles specific to primary button */ } <button class="btn">Default</button> <button class="btn-primary">Login</button> The “multi-class” pattern .btn { /* button template styles */ } .btn-primary { /* styles specific to primary button */ } <button class="btn">Default</button> <button class="btn btn-primary">Login</button> If you use a pre-processor, you might use Sass’s @extend functionality to reduce some of the maintenance work involved in using the “single-class” pattern. However, even with the help of a pre-processor, my preference is to use the “multi-class” pattern and add modifier classes in the HTML. I’ve found it to be a more scalable pattern. For example, take the base btn component and add a further 5 types of button and 3 additional sizes. Using a “multi-class” pattern you end up with 9 classes that can be mixed-and-matched. Using a “single-class” pattern you end up with 24 classes. It is also easier to make contextual tweaks to a component, if absolutely necessary. You might want to make small adjustments to any btn that appears within another component. /* "multi-class" adjustment */ .thing .btn { /* adjustments */ } /* "single-class" adjustment */ .thing .btn, .thing .btn-primary, .thing .btn-danger, .thing .btn-etc { /* adjustments */ } A “multi-class” pattern means you only need a single intra-component selector to target any type of btn-styled element within the component. A “single-class” pattern would mean that you may have to account for any possible button type, and adjust the selector whenever a new button variant is created. Structured class names When creating components – and “themes” that build upon them – some classes are used as component boundaries, some are used as component modifiers, and others are used to associate a collection of DOM nodes into a larger abstract presentational component. It’s hard to deduce the relationship between btn (component), btn-primary (modifier), btn-group (component), and btn-group-item (component sub-object) because the names don’t clearly surface the purpose of the class. There is no consistent pattern. In early 2011, I started experimenting with naming patterns that help me to more quickly understand the presentational relationship between nodes in a DOM snippet, rather than trying to piece together the site’s architecture by switching back-and-forth between HTML, CSS, and JS files. The notation in the gist is primarily influenced by the BEM system’s approach to naming, but adapted into a form that I found easier to scan. Since I first wrote this post, several other teams and frameworks have adopted this approach. 
MontageJS modified the notation into a different style, which I prefer and currently use in the SUIT framework: /* Utility */ .u-utilityName {} /* Component */ .ComponentName {} /* Component modifier */ .ComponentName--modifierName {} /* Component descendant */ .ComponentName-descendant {} /* Component descendant modifier */ .ComponentName-descendant--modifierName {} /* Component state (scoped to component) */ .ComponentName.is-stateOfComponent {} This is merely a naming pattern that I’m finding helpful at the moment. It could take any form. But the benefit lies in removing the ambiguity of class names that rely only on (single) hyphens, or underscores, or camel case. A note on raw file size and HTTP compression Related to any discussion about modular/scalable CSS is a concern about file size and “bloat”. Nicole Sullivan’s talks often mention the file size savings (as well as maintenance improvements) that companies like Facebook experienced when adopting this kind of approach. Further to that, I thought I’d share my anecdotes about the effects of HTTP compression on pre-processor output and the extensive use of HTML classes. When Twitter Bootstrap first came out, I rewrote the compiled CSS to better reflect how I would author it by hand and to compare the file sizes. After minifying both files, the hand-crafted CSS was about 10% smaller than the pre-processor output. But when both files were also gzipped, the pre-processor output was about 5% smaller than the hand-crafted CSS. This highlights how important it is to compare the size of files after HTTP compression, because minified file sizes do not tell the whole story. It suggests that experienced CSS developers using pre-processors don’t need to be overly concerned about a certain degree of repetition in the compiled CSS because it can lend itself well to smaller file sizes after HTTP compression. The benefits of more maintainable “CSS” code via pre-processors should trump concerns about the aesthetics or size of the raw and minified output CSS. In another experiment, I removed every class attribute from a 60KB HTML file pulled from a live site (already made up of many reusable components). Doing this reduced the file size to 25KB. When the original and stripped files were gzipped, their sizes were 7.6KB and 6KB respectively – a difference of 1.6KB. The actual file size consequences of liberal class use are rarely going to be worth stressing over. How I learned to stop worrying… The experience of many skilled developers, over many years, has led to a shift in how large-scale websites and applications are developed. Despite this, for individuals weaned on an ideology where “semantic HTML” means using content-derived class names (and even then, only as a last resort), it usually requires you to work on a large application before you can become acutely aware of the impractical nature of that approach. You have to be prepared to discard old ideas, look at alternatives, and even revisit ways that you may have previously dismissed. Once you start writing non-trivial websites and applications that you and others must not only maintain but actively iterate upon, you quickly realise that despite your best efforts, your code starts to get harder and harder to maintain. 
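To make the structured naming pattern above concrete, here is a rough sketch of how the btn / btn-primary / btn-group / btn-group-item classes discussed earlier might look in the SUIT-style notation. The Button and ButtonGroup names are illustrative stand-ins, not code taken from the article or from the SUIT framework itself.

```css
/* Component */
.Button { /* button template styles */ }
/* Component modifier */
.Button--primary { /* styles specific to the primary button */ }
/* Component */
.ButtonGroup { /* group layout styles */ }
/* Component descendant */
.ButtonGroup-item { /* spacing for each item in the group */ }
```

```html
<div class="ButtonGroup">
  <span class="ButtonGroup-item"><button class="Button">Default</button></span>
  <span class="ButtonGroup-item"><button class="Button Button--primary">Login</button></span>
</div>
```

Read back from the markup alone, the component boundaries (ButtonGroup, Button), the modifier (Button--primary), and the sub-object (ButtonGroup-item) are now unambiguous, which is the point of the notation.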
It’s well worth taking the time to explore the work of some people who have proposed their own approaches to tackling these problems: Nicole’s blog and Object Oriented CSS project, Jonathan Snook’s Scalable Modular Architecture CSS, and the Block Element Modifier method that Yandex have developed. When you choose to author HTML and CSS in a way that seeks to reduce the amount of time you spend writing and editing CSS, it involves accepting that you must instead spend more time changing HTML classes on elements if you want to change their styles. This turns out to be fairly practical, both for front-end and back-end developers – anyone can rearrange pre-built “lego blocks”; it turns out that no one can perform CSS-alchemy. Full Article
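As a footnote to the article above: the “media object” it mentions is a useful worked example of a content-independent, combinable component. The following is a minimal sketch of one common formulation of the pattern; the class names and property choices are illustrative rather than a quotation of Nicole Sullivan’s original OOCSS code.

```css
.media { overflow: hidden; /* contain the floated image */ }
.media-img { float: left; margin-right: 10px; }
.media-body { overflow: hidden; /* new block formatting context, so text does not wrap under the image */ }
```

```html
<div class="media">
  <a class="media-img" href="#"><img src="avatar.png" alt=""></a>
  <div class="media-body">Any content can go here – text, a list, or even another component.</div>
</div>
```

Because none of the class names refer to what the content is, the same block can hold a user avatar and bio, a product thumbnail and description, or anything else with an image-beside-content layout.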
it A simple Git deployment strategy for static sites By nicolasgallagher.com Published On :: Wed, 01 Jan 2014 16:00:00 -0800 This is how I am deploying the build of my static website to staging and production domains. It requires basic use of the CLI, Git, and SSH. But once you’re set up, a single command will build and deploy. TL;DR: Push the static build to a remote, bare repository that has a detached working directory (on the same server). A post-receive hook checks out the files in the public directory. Prerequisites A remote web server to host your site. SSH access to your remote server. Git installed on your remote server (check with git --version). Generate an SSH key if you need one. On the server Set up password-less SSH access First, you need to SSH into your server, and provide the password if prompted. ssh user@hostname If there is no ~/.ssh directory in your user’s home directory, create one: mkdir ~/.ssh. Next, you need to copy your public SSH key (see “Generate an SSH key” above) to the server. This allows you to connect via SSH without having to enter a password each time. From your local machine – assuming your public key can be found at ~/.ssh/id_rsa.pub – enter the following command, with the correct user and hostname. It will append your public key to the authorized_keys file on the remote server. ssh user@hostname 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub If you close the connection, and then attempt to establish SSH access, you should no longer be prompted for a password. Create the remote directories You need to have 2 directories for each domain you want to host. One for the Git repository, and one to contain the checked out build. For example, if your domain were example.com and you also wanted a staging environment, you’d create these directories on the server: mkdir ~/example.com ~/example.git mkdir ~/staging.example.com ~/staging.example.git Initialize the bare Git repository Create a bare Git repository on the server. This is where you will push the build assets to, from your local machine. But you don’t want the files served here, which is why it’s a bare repository. cd ~/example.git git init --bare Repeat this step for the staging domain, if you want. Write a post-receive hook A post-receive hook allows you to run commands after the Git repository has received commits. In this case, you can use it to change Git’s working directory from example.git to example.com, and check out a copy of the build into the example.com directory. The location of the working directory can be set on a per-command basis using GIT_WORK_TREE, one of Git’s environment variables, or the --work-tree option. cat > hooks/post-receive #!/bin/sh WEB_DIR=/path/to/example.com # remove any untracked files and directories git --work-tree=${WEB_DIR} clean -fd # force checkout of the latest deploy git --work-tree=${WEB_DIR} checkout --force Make sure the file permissions on the hook are correct. chmod +x hooks/post-receive If you need to exclude some files from being cleaned out by Git (e.g., a .htpasswd file), you can do that using the --exclude option. This requires Git 1.7.3 or above to be installed on your server. git --work-tree=${WEB_DIR} clean -fd --exclude=<pattern> Repeat this step for the staging domain, if you want. On your local machine Now that the server configuration is complete, you want to deploy the build assets (not the source code) for the static site. The build and deploy tasks I’m using a Makefile, but use whatever you feel comfortable with. 
What follows is the basic workflow I wanted to automate. Build the production version of the static site. make build Initialize a new Git repo in the build directory. I don’t want to try and merge the new build into previous deploys, especially for the staging domain. git init ./build Add the remote to use for the deploy. cd ./build git remote add origin ssh://user@hostname/~/example.git Commit everything in the build repo. cd ./build git add -A git commit -m "Release" Force-replace the remote master branch, creating it if missing. cd ./build git push -f origin +master:refs/heads/master Tag the checked-out commit SHA in the source repo, so I can see which SHA’s were last deployed. git tag -f production Using a Makefile: BUILD_DIR := ./build STAGING_REPO = ssh://user@hostname/~/staging.example.git PROD_REPO = ssh://user@hostname/~/example.git install: npm install # Deploy tasks staging: build git-staging deploy @ git tag -f staging @ echo "Staging deploy complete" prod: build git-prod deploy @ git tag -f production @ echo "Production deploy complete" # Build tasks build: clean # whatever your build step is # Sub-tasks clean: @ rm -rf $(BUILD_DIR) git-prod: @ cd $(BUILD_DIR) && git init && git remote add origin $(PROD_REPO) git-staging: @ cd $(BUILD_DIR) && git init && git remote add origin $(STAGING_REPO) deploy: @ cd $(BUILD_DIR) && git add -A && git commit -m "Release" && git push -f origin +master:refs/heads/master .PHONY: install build clean deploy git-prod git-staging prod staging To deploy to staging: make staging To deploy to production: make prod Using Make, it’s a little bit more hairy than usual to force push to master, because the cd commands take place in a sub-process. You have to make sure subsequent commands are on the same line. For example, the deploy task would force push to your source code’s remote master branch if you failed to join the commands with && or ;! I push my site’s source code to a private repository on BitBucket. One of the nice things about BitBucket is that it gives you the option to prevent deletions or history re-writes of branches. If you have any suggested improvements, let me know on Twitter. Full Article
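Pulling the server-side pieces of the article above together, here is what the setup for the staging domain might look like as a single sequence, including the --exclude option mentioned in passing. This is a sketch under the article’s assumptions: the home-directory layout, the user account, and the .htpasswd pattern are placeholders to adjust for your own server.

```sh
# On the server: bare repository plus a detached work tree for staging
mkdir -p ~/staging.example.com ~/staging.example.git
cd ~/staging.example.git
git init --bare

# Write the post-receive hook (quoted heredoc keeps ${WEB_DIR} literal in the file)
cat > hooks/post-receive <<'EOF'
#!/bin/sh
WEB_DIR=/home/user/staging.example.com

# remove any untracked files and directories, but keep the password file
# (the --exclude option requires Git 1.7.3 or above)
git --work-tree=${WEB_DIR} clean -fd --exclude=.htpasswd

# force checkout of the latest deploy
git --work-tree=${WEB_DIR} checkout --force
EOF

chmod +x hooks/post-receive
```

After this, a make staging run from the local machine pushes the build to staging.example.git and the hook checks it out into staging.example.com.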
it Redux modules and code-splitting By nicolasgallagher.com Published On :: Thu, 01 Feb 2018 16:00:00 -0800 Twitter Lite uses Redux for state management and relies on code-splitting. However, Redux’s default API is not designed for applications that are incrementally-loaded during a user session. This post describes how I added support for incrementally loading the Redux modules in Twitter Lite. It’s relatively straightforward and proven in production over several years. Redux modules Redux modules consist of a reducer, actions, action creators, and selectors. Organizing redux code into self-contained modules makes it possible to create APIs that don’t involve directly referencing the internal state of a reducer – this makes refactoring and testing a lot easier. (More about the concept of redux modules.) Here’s an example of a small “redux module”. // data/notifications/index.js const initialState = []; let notificationId = 0; const createActionName = name => `app/notifications/${name}`; // reducer export default function reducer(state = initialState, action = {}) { switch (action.type) { case ADD_NOTIFICATION: return [...state, { ...action.payload, id: notificationId += 1 }]; case REMOVE_NOTIFICATION: return state.slice(1); default: return state; } } // selectors export const selectAllNotifications = state => state.notifications; export const selectNextNotification = state => state.notifications[0]; // actions export const ADD_NOTIFICATION = createActionName('ADD_NOTIFICATION'); export const REMOVE_NOTIFICATION = createActionName('REMOVE_NOTIFICATION'); // action creators export const addNotification = payload => ({ payload, type: ADD_NOTIFICATION }); export const removeNotification = () => ({ type: REMOVE_NOTIFICATION }); This module can be used to add and select notifications. Here’s an example of how it can be used to provide props to a React component. // components/NotificationView/connect.js import { connect } from 'react-redux'; import { createStructuredSelector } from 'reselect'; import { removeNotification, selectNextNotification } from '../../data/notifications'; const mapStateToProps = createStructuredSelector({ nextNotification: selectNextNotification }); const mapDispatchToProps = { removeNotification }; export default connect(mapStateToProps, mapDispatchToProps); // components/NotificationView/index.js import connect from './connect'; export class NotificationView extends React.Component { /*...*/ } export default connect(NotificationView); This allows you to import specific modules that are responsible for modifying and querying specific parts of the overall state. This can be very useful when relying on code-splitting. However, problems with this approach are evident once it comes to adding the reducer to a Redux store. // data/createStore.js import { combineReducers, createStore } from 'redux'; import notifications from './notifications'; const initialState = /* from local storage or server */ const reducer = combineReducers({ notifications }); const store = createStore(reducer, initialState); export default store; You’ll notice that the notifications namespace is defined at the time the store is created, and not by the Redux module that defines the reducer. If the “notifications” reducer name is changed in createStore, all the selectors in the “notifications” Redux module no longer work. Worse, every Redux module needs to be imported in the createStore file before it can be added to the store’s reducer. 
This doesn’t scale and isn’t good for large apps that rely on code-splitting to incrementally load modules. A large app could have dozens of Redux modules, many of which are only used by a few components and unnecessary for initial render. Both of these issues can be avoided by introducing a Redux reducer registry. Redux reducer registry The reducer registry enables Redux reducers to be added to the store’s reducer after the store has been created. This allows Redux modules to be loaded on-demand, without requiring all Redux modules to be bundled in the main chunk for the store to correctly initialize. // data/reducerRegistry.js export class ReducerRegistry { constructor() { this._emitChange = null; this._reducers = {}; } getReducers() { return { ...this._reducers }; } register(name, reducer) { this._reducers = { ...this._reducers, [name]: reducer }; if (this._emitChange) { this._emitChange(this.getReducers()); } } setChangeListener(listener) { this._emitChange = listener; } } const reducerRegistry = new ReducerRegistry(); export default reducerRegistry; Each Redux module can now register itself and define its own reducer name. // data/notifications/index.js import reducerRegistry from '../reducerRegistry'; const initialState = []; let notificationId = 0; const reducerName = 'notifications'; const createActionName = name => `app/${reducerName}/${name}`; // reducer export default function reducer(state = initialState, action = {}) { switch (action.type) { case ADD_NOTIFICATION: return [...state, { ...action.payload, id: notificationId += 1 }]; case REMOVE_NOTIFICATION: return state.slice(1); default: return state; } } reducerRegistry.register(reducerName, reducer); // selectors export const selectAllNotifications = state => state[reducerName]; export const selectNextNotification = state => state[reducerName][0]; // actions export const ADD_NOTIFICATION = createActionName('ADD_NOTIFICATION'); export const REMOVE_NOTIFICATION = createActionName('REMOVE_NOTIFICATION'); // action creators export const addNotification = payload => ({ payload, type: ADD_NOTIFICATION }); export const removeNotification = () => ({ type: REMOVE_NOTIFICATION }); Next, we need to replace the store’s combined reducer whenever a new reducer is registered (e.g., after loading an on-demand chunk). This is complicated slightly by the need to preserve initial state that may have been created by reducers that aren’t yet loaded on the client. By default, once an action is dispatched, Redux will throw away state that is not tied to a known reducer. To avoid that, reducer stubs are created to preserve the state. // data/createStore.js import { combineReducers, createStore } from 'redux'; import reducerRegistry from './reducerRegistry'; const initialState = /* from local storage or server */ // Preserve initial state for not-yet-loaded reducers const combine = (reducers) => { const reducerNames = Object.keys(reducers); Object.keys(initialState).forEach(item => { if (reducerNames.indexOf(item) === -1) { reducers[item] = (state = null) => state; } }); return combineReducers(reducers); }; const reducer = combine(reducerRegistry.getReducers()); const store = createStore(reducer, initialState); // Replace the store's reducer whenever a new reducer is registered. reducerRegistry.setChangeListener(reducers => { store.replaceReducer(combine(reducers)); }); export default store; Managing the Redux store’s reducer with a registry should help you better code-split your application and modularize your state management. Full Article
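The article above ends with the store wiring, so here is a hedged sketch of the step it implies but does not show: pulling a Redux module in from a code-split chunk on demand. It assumes a bundler that supports dynamic import() (webpack, for example) and the file layout used in the post; the component, the chunk name, and the dispatch wiring are illustrative assumptions rather than Twitter Lite’s actual code.

```js
// components/NotificationButton/index.js
import React from 'react';
import store from '../../data/createStore';

export default class NotificationButton extends React.Component {
  handleClick = () => {
    // Loading the chunk evaluates the module body, which calls
    // reducerRegistry.register('notifications', reducer) as a side effect.
    // The registry's change listener then swaps in a new combined reducer
    // via store.replaceReducer(), so the slice exists before we dispatch.
    import(/* webpackChunkName: "notifications" */ '../../data/notifications').then(
      ({ addNotification }) => {
        store.dispatch(addNotification({ text: 'Loaded on demand' }));
      }
    );
  };

  render() {
    return <button onClick={this.handleClick}>Notify</button>;
  }
}
```

The same pattern works for route-level splitting: whichever chunk first imports data/notifications causes the reducer to be registered, and any component rendered afterwards can use the module’s selectors as usual.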
it Odisha to expedite chariot construction for Rath Yatra By www.thehindu.com Published On :: Fri, 08 May 2020 19:08:57 +0530 The Home Ministry had on Thursday allowed chariot construction with a condition that no religious congregation should take place around the Ratha Khala. Full Article Other States
it Coronavirus | Indore remains worst hit in Madhya Pradesh with 3 more deaths By www.thehindu.com Published On :: Sat, 09 May 2020 02:21:20 +0530 Bhopal, by comparison, has so far reported 679 cases and 24 deaths, with 354 patients, or more than half of those infected, having recovered. Full Article Other States
it Coronavirus lockdown | With no work or food, workers brave the long march home from Uttar Pradesh By www.thehindu.com Published On :: Sat, 09 May 2020 10:06:56 +0530 "We don’t want anything from the government. We just want to be dropped home," says a migrant worker from Chhattisgarh. Full Article Other States
it PNB scam: HC rejects bail plea of accused who tested positive for COVID-19 By www.thehindu.com Published On :: Sat, 09 May 2020 01:43:26 +0530 Court says Hemant Bhatt needs to be treated at a govt. hospital Full Article Other States
it Coronavirus | 30 more test positive in J&K, cases mount to 823 By www.thehindu.com Published On :: Sat, 09 May 2020 02:01:16 +0530 Bandipora tops the list with 134 cases, followed by Srinagar at 129 Full Article Other States
it First special train with migrant workers leaves from Mumbai’s LTT By www.thehindu.com Published On :: Sat, 09 May 2020 02:25:02 +0530 All 1,111 passengers underwent thermal screening at the station before departing for Basti in U.P. Full Article Other States
it Ganjam spared community spread as migrants stay put at quarantine centres By www.thehindu.com Published On :: Sat, 09 May 2020 02:08:17 +0530 All returnees are taken to centres from buses and trains Full Article Other States
it Muslim villagers help with Hindu woman's last rites By www.assamtimes.org Published On :: Thu, 26 Mar 2020 11:38:47 +0000 Full Article
it Illicit liquor racket busted By www.assamtimes.org Published On :: Tue, 31 Mar 2020 03:51:28 +0000 Full Article
it COVID19 positive cases go up to 16 By www.assamtimes.org Published On :: Wed, 01 Apr 2020 16:25:21 +0000 Full Article
it BTR at war with COVID19 with full force By www.assamtimes.org Published On :: Sat, 04 Apr 2020 10:00:05 +0000 Full Article
it Strawberry cultivator’s hope blighted with frustration By www.assamtimes.org Published On :: Sat, 04 Apr 2020 11:37:04 +0000 Full Article
it Hail storm hits state hard By www.assamtimes.org Published On :: Fri, 10 Apr 2020 04:43:56 +0000 Full Article
it Legislators named in delimitation panel By www.assamtimes.org Published On :: Thu, 07 May 2020 10:35:18 +0000 Full Article
it Stranded in the Nyiri Desert [electronic resource] : a group case study / Matthew J. Drake ; Aimee A. Kane and Mercy Shitemi By prospero.murdoch.edu.au Published On :: Drake, Matthew, author Full Article
it Strategic excellence in the architecture, engineering, and construction industries [electronic resource] : how AEC firms can develop and execute strategy using lean Six Sigma / Gerhard Plenert and Joshua J. Plenert By prospero.murdoch.edu.au Published On :: Plenert, Gerhard Johannes, author Full Article
it Strategic risk management [electronic resource] : new tools for competitive advantage in an uncertain age / Paul C. Godfrey, [and three others] By prospero.murdoch.edu.au Published On :: Godfrey, Paul C., author Full Article
it Strategisches IT-Management [electronic resource] / Josephine Hofmann, Matthias Knoll (Hrsg.) By prospero.murdoch.edu.au Published On :: Full Article
it Strategisches management für KMU [electronic resource] : unternehmenswachstum durch (r)evolutionäre Unternehmensführung / Gerrit Hamann By prospero.murdoch.edu.au Published On :: Hamann, Gerrit, author Full Article
it Stress less. achieve more [electronic resource] : simple ways to turn pressure into a positive force in your life / Aimee Bernstein By prospero.murdoch.edu.au Published On :: Bernstein, Aimee Full Article
it The stress test every business needs [electronic resource] : a capital agenda for confidently facing digital disruption, difficult investors, recessions and geopolitical threats / Jeffrey R. Greene, Steve Krouskos, Julie Hood, Harsha Basnayake, William Ca By prospero.murdoch.edu.au Published On :: Greene, Jeffrey R., author Full Article
it Stressbewältigung, Empathie und Zufriedenheit in der Partnerschaft [electronic resource] / Bente Klein By prospero.murdoch.edu.au Published On :: Klein, Bente Full Article
it The subjective well-being module of the American Time Use Survey [electronic resource] : assessment for its continuation / Panel on Measuring Subjective Well-Being in a Policy-Relevant Framework, Committee on National Statistics, Division of Behavioral an By prospero.murdoch.edu.au Published On :: Full Article
it Succeeding with SOA [electronic resource] : realizing business value through total architecture / Paul C. Brown By prospero.murdoch.edu.au Published On :: Brown, Paul C Full Article
it Successes and failures of knowledge management [electronic resource] / edited by Jay Liebowitz, Distinguished Chair of Applied Business and Finance, Harrisburg University of Science and Technology, Harrisburg, Pennsylvania By prospero.murdoch.edu.au Published On :: Full Article