
Sustainable groundwater management: a comparative analysis of French and Australian policies and implications to other countries / Jean-Daniel Rinaudo [and 3 others], editors

Online Resource





Atlas of paediatric surgical imaging: a clinical and diagnostic approach / Robert Carachi, editor

Online Resource





Perinatal palliative care: a clinical guide / Erin M. Denney-Koelsch, Denise Côté-Arsenault, editors

Online Resource





Endovascular abdominal aortic repair-endoleak treatment: a case-based approach / Stevo Duvnjak

Online Resource





Health in diversity -- diversity in health: (forced) migration, social diversification, and health in a changing world / Katharina Crepaz, Ulrich Becker, Elisabeth Wacker, editors

Online Resource





Plastic and cosmetic surgery of the male breast / Adriana Cordova, Alessandro Innocenti, Francesca Toia, Massimiliano Tripoli, editors

Online Resource





Cardiac surgery: a complete guide / Shahzad G. Raja, editor

Online Resource





Fetal and neonatal eye pathology / Robert M. Verdijk, Martina C. Herwig-Carl

Online Resource





Eyelid and Conjunctival Tumors: In Vivo Confocal Microscopy / edited by Mathilde Kaspi, Elisa Cinotti, Jean-Luc Perrot, Thibaud Garcin

Online Resource





Dimensions on nursing teaching and learning: supporting nursing students in learning nursing / Sheila Cunningham, editor

Online Resource





History of professional nursing in the United States: toward a culture of health / Arlene W. Keeling, Michelle C. Hehman, John C. Kirchgessner

Hayden Library - RT41.K44 2018





Evidence-based critical care: a case study approach / Robert C. Hyzy, Jakob McSparron, editors

Online Resource





Canary in a Coal Mine: How Tech Provides Platforms for Hate

As I write this, the world is sending its thoughts and prayers to our Muslim cousins. The Christchurch act of terrorism has once again reminded the world that white supremacy’s rise is very real, that its perpetrators are no longer on the fringes of society, but centered in our holiest places of worship. People are begging us to not share videos of the mass murder or the hateful manifesto that the white supremacist terrorist wrote. That’s what he wants: for his proverbial message of hate to be spread to the ends of the earth.

We live in a time when you can stream a mass murder and hate crime from the comfort of your home. Children can access these videos, too.

As I work through the pure pain, unsurprised, observing the toll on Muslim communities (as a non-Muslim, who matters least in this event), I think of the imperative role that our industry plays in this story.

At the time of writing, YouTube has failed to ban and remove this video. If you search for the video (which I strongly advise against), it still comes up with a mere content warning, the same content warning that appears for casually risqué content. You can bypass the warning and watch people get murdered. Even when the video gets flagged and taken down, new copies get uploaded.

Human moderators have to relive watching this trauma over and over again for unlivable wages. News outlets are embedding the video into their articles and publishing the hateful manifesto. Why? What does this accomplish?

I was taught in journalism class that media (photos, video, infographics, etc.) should be additive (a progressive enhancement, if you will) and provide something to the story for the reader that words cannot.

Is it necessary to show murder for our dear readers to understand the cruelty and finality of it? Do readers gain something more from watching fellow humans have their lives stolen from them? What psychological damage are we inflicting upon millions of people, and for what?

Who benefits?

The mass shooter(s) who had a message to accompany their mass murder. News outlets are thirsty for perverse clicks to garner more ad revenue. We, by way of our platforms, give agency and credence to these acts of violence, then pilfer profits from them. Tech is a money-making accomplice to these hate crimes.

Christchurch is just one example in an endless array where the tools and products we create are used as a vehicle for harm and for hate.

Facebook and the Cambridge Analytica scandal played a critical role in the outcome of the 2016 presidential election. The concept of “race realism,” a term white supremacists use to codify their false, racist pseudo-science, was actively tested on Facebook’s platform to see how it would sit with people sitting ignorantly on the fringes of white supremacy. Full-blown white supremacists don’t need this soft language. This is how radicalization works.

The strategies articulated in the above article are not new. Racist propaganda predates social media platforms. What we have to be mindful of is that we’re building smarter tools with power we don’t yet fully understand: you can now have an AI-generated human face. Our technology is accelerating at a frightening rate, a rate faster than our reflective understanding of its impact.

Combine the time-tested methods of spreading white supremacy, the power to manipulate perception through technology, and the magnitude and reach that have become democratized and anonymized.

We’re staring at our own reflection in the Black Mirror.

The right to speak versus the right to survive

Tech has proven time and time again that it fiercely protects First Amendment rights above all else. (I will also take this opportunity to remind you that the First Amendment of the United States protects the people from the government abolishing free speech; it does not bind private, money-making corporations.)

Evelyn Beatrice Hall writes in The Friends of Voltaire, “I disapprove of what you say, but I will defend to the death your right to say it.” Fundamentally, Hall’s quote expresses that we must protect, possibly above all other freedoms, the freedom to say whatever we want to say. (Fun fact: The quote is often misattributed to Voltaire, but Hall actually wrote it to explain Voltaire’s ideologies.)

And the logical anchor here is sound: we must grant everyone else the same rights that we would like for ourselves. Former 99U editor Sean Blanda wrote a thoughtful piece on the “Other Side,” where he posits that we lack tolerance for people who don’t think like us, but that we must cultivate it, because we might one day be on the other side. I agree in theory.

But what happens when a portion of the rights we grant to one group (let’s say, free speech for white supremacists) means the active oppression of another group’s rights (let’s say, every person of color’s right to live)?

Writer Robert Jones, Jr., known as Son of Baldwin, expresses this idea in a line often misattributed to James Baldwin: “We can disagree and still love each other unless your disagreement is rooted in my oppression and denial of my humanity and right to exist.”

It would seem that we have a moral quandary in which two sets of rights cannot coexist. Do we protect the privilege of all users to say what they want, or do we protect all users from hate? Because of this perceived moral quandary, tech has often opted out of the conversation altogether. Platforms like Twitter and Facebook, two of the biggest offenders, continue to allow hate speech to flourish with irregular or no moderation.

When asked explicitly about Twitter as a free-speech platform and the consequences for privacy and safety, Twitter CEO Jack Dorsey said,

“So we believe that we can only serve the public conversation, we can only stand for freedom of expression if people feel safe to express themselves in the first place. We can only do that if they feel that they are not being silenced.”

Dorsey and Twitter are most concerned about protecting expression and about not silencing people. In his mind, if he allows people to say whatever they want on his platform, he has succeeded. When asked why he has failed to implement AI to filter abuse as, say, Instagram has, he said that he’s most concerned about being able to explain why the AI flagged something as abusive. Again, Dorsey protects the freedom of speech (and thus the perpetrators of abuse) before the victims of abuse.

But he’s inconsistent about it. In a George Washington University study comparing white nationalist and ISIS social media usage, Twitter’s freedom of speech was not granted to ISIS. Twitter suspended 1,100 accounts related to ISIS, whereas it suspended only seven accounts related to Nazis, white nationalism, and white supremacy, despite those accounts having more than seven times the followers and tweeting 25 times more than the ISIS accounts. Twitter made a moral judgment that the fewer, less active, and less influential ISIS accounts were somehow not welcome on its platform, whereas the prolific and burgeoning Nazi and white supremacy accounts were.

So, Twitter has shown that it won’t protect free speech at all costs or for all users. We can only conclude that Twitter is either intentionally protecting white supremacy or simply doesn’t think it’s very dangerous. Regardless of which it is (I think I know), the outcome does not change the fact that white supremacy is running rampant on its platform and many others.

Let’s brainwash ourselves for a moment and pretend that Twitter does want to support freedom of speech equitably and stays neutral and fair, just to complete the logical exercise: going back to the dichotomy of rights I described earlier, where either the right to free speech or the right to safety and survival prevails, the rights and the power will fall into the hands of the dominant group or ideology.

In case you are somehow unaware, the dominant ideology, whether you’re a flagrant white supremacist or not, is white supremacy. White supremacy was baked into the founding principles of the United States, the country where the majority of these platforms were founded and exist. (I am not suggesting that white supremacy doesn’t exist globally; it does, as evidenced most recently by the terrorist attack in Christchurch. I’m centering the conversation intentionally on the United States, as it is my lived experience and where most of these companies operate.)

Facebook attempted to educate its team on white supremacy in order to address how to regulate free speech. A laugh-cry excerpt:

“White nationalism and calling for an exclusively white state is not a violation for our policy unless it explicitly excludes other PCs [protected characteristics].”

White nationalism is a softened synonym for white supremacy, one that lets racists-lite feel more comfortable with their transition into hate. White nationalism (a.k.a. white supremacy) by definition explicitly seeks to eradicate all people of color. So Facebook should see white nationalist speech as exclusionary, and therefore a violation of its policies.

Regardless of what tech leaders like Dorsey or Facebook CEO Mark Zuckerberg say, or what mediocre and uninspired condolences they might offer, inaction is an action.

Companies that use terms and conditions or acceptable use policies to defend their inaction around hate speech are enabling and perpetuating white supremacy. Policies are written by humans to protect those humans’ ideals. The message may be that they are protecting free speech, but hate speech is a form of free speech. So, effectively, they are protecting hate speech. Well, as long as it comes from white supremacists and not the Islamic State.

Whether the motivation is fear (losing loyal Nazi customers and their sympathizers) or hate (because their CEO is a white supremacist), it does not change the impact: Hate speech is tolerated, enabled, and amplified by way of their platforms.

“That wasn’t our intent”

Product creators might be thinking, Hey, look, I don’t intentionally create a platform for hate. The way these features were used was never our intent.

Intent does not erase impact.

We cannot absolve ourselves of culpability merely because we failed to conceive of such evil use cases when we built these platforms. While we might not have created them with the explicit intent to help Nazis, or imagined they would be used to spread hate, the reality is that our platforms are being used in this way.

As product creators, it is our responsibility to protect the safety of our users by stopping those who intend to cause them harm, or already do. Better yet, we ought to think of this before we build the platforms, to prevent harm in the first place.

The question to answer isn’t, “Have I made a place where people have the freedom to express themselves?” Instead we have to ask, “Have I made a place where everyone has the safety to exist?” If you have created a place where a dominant group can stoke and embolden hatred of another group, you have failed to create a safe place. The foundations of hateful speech (beyond the psychological trauma of it) lead to events like Christchurch.

We must protect safety over speech.

The Domino Effect

This week, Slack banned 28 hate groups. What is most notable, to me, is that the groups did not break any part of Slack’s Acceptable Use Policy. Slack issued a statement:

The use of Slack by hate groups runs counter to everything we believe in at Slack and is not welcome on our platform… Using Slack to encourage or incite hatred and violence against groups or individuals because of who they are is antithetical to our values and the very purpose of Slack.

That’s it.

It is not illegal for a tech company like Slack to ban groups from using its proprietary software: a private company can regulate users who do not align with its vision as a company. Think of it as the “no shirt, no shoes, no service” model, but for tech.

Slack simply decided that supporting the workplace collaboration of Nazis around efficient ways to evangelize white supremacy was probably not in line with its directives around inclusion. I imagine Slack also considered how its employees of color, the people worst affected by white supremacy, would feel working for a company that supported it, actively or not.

What makes the Slack example so notable is that they acted swiftly and on their own accord. Slack chose the safety of all their users over the speech of some.

When caught enabling white supremacy, some companies will budge only under pressure from activist groups, users, and employees.

PayPal finally banned hate groups after Charlottesville and after the Southern Poverty Law Center (SPLC) explicitly called it out for enabling hate. The SPLC had been pointing this out for three years; PayPal ignored it the entire time.

Unfortunately, it is rare for companies to take these “stances” against something as clearly and viscerally wrong as white supremacy. The tech industry tolerates this inaction through unspoken agreements.

If Facebook doesn’t do anything about racist political propaganda, YouTube doesn’t do anything about PewDiePie, and Twitter doesn’t do anything about disproportionate abuse against Black women, it says to the smaller players in the industry that they don’t have to either.

The tech industry reacts to its peers. When there is disruption, as when Airbnb screened and rejected guests it believed to be participating in the Unite the Right rally in Charlottesville, companies follow suit. GoDaddy cancelled the Daily Stormer’s domain registration, and Google did the same when the site attempted to migrate there.

If one company, like Slack or Airbnb, decides to do something about the role it’s going to play, it creates a perverse kind of FOMO for the rest: fear of missing out on doing the right thing and standing on the right side of history.

Don’t have FOMO, do something

The activism at each of those companies started with one individual. If you want to be part of the solution, I’ve gathered some places to start. The list is not exhaustive, and, as with all things, I recommend researching beyond this abridged summary.

  1. Understand how white supremacy impacts you as an individual.
    Now, if you are a person of color, queer, disabled, or trans, it’s likely that you know this very intimately.

    If you are not any of those things, then you, as a majority person, need to understand how white supremacy protects you and works in your favor. It’s not easy work; it is uncomfortable and unfamiliar. But you have the most powerful tools to fix tech. Resources abound, but here is my abridged list of favorites:

    1. Seeing White podcast
    2. Ijeoma Oluo’s So you want to talk about race
    3. Reni Eddo-Lodge’s Why I’m no longer talking to white people about race (a key read for UK folks)
    4. Robin DiAngelo’s White Fragility
  2. See where your company stands: Read your company’s policies, like acceptable use and privacy policies, and find your CEO’s stance on safety and free speech.
    While these policies are baseline (and in the Slack example, sort of irrelevant), it’s important to know your company’s track record. As an employee, your actions and decisions either uphold the ideologies behind the company or they don’t. Ask yourself whether the company’s ideologies are worth upholding and whether they align with your own. Education will help you to flag if something contradicts those policies, or if the policies themselves allow for unethical activity.
  3. Examine everything you do critically on an ongoing basis.
    You may feel your role is small or that your company is immune—maybe you are responsible for the maintenance of one small algorithm. But consider how that algorithm or similar ones can be exploited. Some key questions I ask myself:
    1. Who benefits from this? Who is harmed?
    2. How could this be used for harm?
    3. Who does this exclude? Who is missing?
    4. What does this protect? For whom? Does it do so equitably?
  4. See something? Say something.
    If you believe that your company is creating something that is or can be used for harm, it is your responsibility to say something. Now, I’m not naïve to the fact that there is inherent risk in this. You might fear ostracization or termination. You need to protect yourself first. But you also need to do something.
    1. Find someone who you trust who might be at less risk. Maybe if you’re a nonbinary person of color, find a white cis man who is willing to speak up. Maybe if you’re a white man who is new to the company, find a white man who has more seniority or tenure. But also, consider how you have so much more relative privilege compared to most other people and that you might be the safest option.
    2. Unionize. Find peers who might feel the same way and write a collective statement.
    3. Get someone influential outside of the company (if knowledge is public) to say something.
  5. Listen to concerns, no matter how small, particularly if they’re coming from the most endangered groups.
    If your user or peer feels unsafe, you need to understand why. People often feel like small things can be overlooked, as their initial impact might be less, but it is in the smallest cracks that hate can grow. Allowing one insensitive comment about race is still allowing hate speech. If someone, particularly someone in a marginalized group, brings up a concern, you need to do your due diligence to listen to it and to understand its impact.

I cannot emphasize this last point enough.

What I say today is not new. Versions of this article have been written before. Women of color like me have voiced similar concerns not only in writing, but in design reviews, in closed-door meetings with key stakeholders, in Slack DMs. We’ve blown our whistles.

But here is the power of white supremacy.

White supremacy is ingrained in every single aspect of how this nation was built, how our corporations function, and who is in control. If you are not convinced of this, you are either not paying attention or intentionally ignoring the truth.

Queer, Muslim, disabled, trans women and nonbinary folks of color, the marginalized groups most impacted by this, are the ones voicing these concerns most vociferously. Speaking up requires us to step into the spotlight and out of safety; we take a risk, and we are not heard.

The silencing of our voices is one of many effective tools of white supremacy. Our silencing lives within every microaggression, each time we’re talked over, or not invited to partake in key decisions.

In tech, I feel I am a canary in a coal mine. I have sung my song to warn the miners of the toxicity. My sensitivity to it is heightened because of my existence.

But the miners look at me and tell me that my lived experience is false. It does not align with their narrative as humans. They don’t understand why I sing.

If the people at the highest echelons of the tech industry—the white, male CEOs in power—fail to listen to its most marginalized people—the queer, disabled, trans people of color—the fate of the canaries will become the fate of the miners, too.





The age of sustainability: just transitions in a complex world / Mark Swilling

Dewey Library - HC79.E5 S9144 2020





The dark side of nudges / by Maria Alejandra Caporale Madi

Dewey Library - HB74.P8 C365 2020





Possessive individualism: a crisis of capitalism / Daniel W. Bromley

Dewey Library - HB501.B76 2019





Organizational Mindset of Entrepreneurship: Exploring the Co-Creation Pathways of Structural Change and Innovation / edited by Veland Ramadani, Ramo Palalić, Léo-Paul Dana, Norris Krueger, Andrea Caputo

Online Resource





Markets and people: Romania country economic memorandum.

Online Resource





Why we need a citizen's basic income: the desirability, feasibility and implementation of an unconditional income / Malcolm Torry

Dewey Library - HC260.I5 T672 2018




Amma Canteens




Tata Consulting Engineers




All India Council for Technical Education




North Karanpura coalfield




Mega Cities




IMRA Committee




Dalmia Cement




Mini Ratna company




Ayoleeza Consultants




South Karanpura coalfield




Wardha Coal Field




Bauma China




All India Council for Technical Education




Scania Commercial Vehicles India




Hanita Coatings




Portland Pozzolana Cement




Brahmaputra Cracker and Polymer





Correction: Dynamic covalent polymer networks via combined nitroxide exchange reaction and nitroxide mediated polymerization

Polym. Chem., 2020, 11, 2761-2761
DOI: 10.1039/D0PY90053B, Correction
Open Access
  This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
Yixuan Jia, Yannick Matt, Qi An, Isabelle Wessely, Hatice Mutlu, Patrick Theato, Stefan Bräse, Audrey Llevot, Manuel Tsotsalas





A covalently crosslinked silk fibroin hydrogel using enzymatic oxidation and chemoenzymatically synthesized copolypeptide crosslinkers consisting of a GPG tripeptide motif and tyrosine: control of gelation and resilience

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00187B, Paper
Hiromitsu Sogawa, Takuya Katashima, Keiji Numata
A covalently crosslinked silk fibroin hydrogel was successfully formed via an enzymatic crosslinking reaction using copolypeptides, which consist of a glycine–proline–glycine tripeptide motif and tyrosine, as linker molecules.





Rapid production of block copolymer nano-objects via continuous-flow ultrafast RAFT dispersion polymerisation

Polym. Chem., 2020, Advance Article
DOI: 10.1039/D0PY00276C, Paper
Open Access
  This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
Sam Parkinson, Stephen T. Knox, Richard A. Bourne, Nicholas J. Warren
Continuous-flow reactors are exploited for conducting ultrafast RAFT dispersion polymerisation for the preparation of diblock copolymer nanoparticles.





Synthesis of well-defined heteroglycopolymers via combining sequential click reactions and PPM: the effects of linker and heterogeneity on Con A binding

Polym. Chem., 2020, 11, 3054-3065
DOI: 10.1039/D0PY00302F, Paper
Meina Liu, Xingyou Wang, Dengyun Miao, Caiyun Wang, Wei Deng
A versatile post-polymerization modification strategy to synthesize well-defined glycopolymers via the combination of RAFT polymerization and sequential CuAAC and thiol–ene click reactions was developed.





Synthesis of conjugated polymers via cyclopentannulation reaction: promising materials for iodine adsorption

Polym. Chem., 2020, 11, 3066-3074
DOI: 10.1039/D0PY00286K, Paper
Open Access
Noorullah Baig, Suchetha Shetty, Saleh Al-Mousawi, Bassam Alameddine
A new class of conjugated polymers is prepared by means of a versatile palladium-catalyzed cyclopentannulation reaction using a series of specially designed diethynyl aryl synthons with the commercially available 9,10-dibromoanthracene (DBA) monomer.





A Copper(I)-Catalyzed Azide-Alkyne Click Chemistry Approach towards Multifunctional Two-Way Shape-Memory Actuators

Polym. Chem., 2020, Accepted Manuscript
DOI: 10.1039/D0PY00217H, Paper
Zhong-Cheng Liu, Bo Zuo, Hai-Feng Lu, Meng Wang, Shuai Huang, Xu-Man Chen, Baoping Lin, Hong Yang
Nowadays, two-way shape-memory polymeric materials with reversible shape-morphing capability exhibit extraordinary application prospects in robotic, biomedical, and intelligent-material technologies, and have attracted extensive scientific attention. However, exploration of...





Polymerization of epoxide monomers promoted by tBuP4 phosphazene base: A comparative study of kinetic behavior.

Polym. Chem., 2020, Accepted Manuscript
DOI: 10.1039/D0PY00437E, Paper
Valentin Puchelle, Haiqin Du, Nicolas Illy, Philippe Guegan
Kinetics of the anionic ring-opening polymerizations (AROP) of epoxide monomers, 1,2-epoxybutane (BO), 1,2-epoxypropane (PO), tert-butyl glycidyl ether (tBuGE), allyl glycidyl ether (AGE), benzyl glycidyl ether (BnGE), ethoxyethyl glycidyl ether (EEGE)...





Chancery papermaking at the University of Iowa Center for the Book, 2013.

Hayden Library - TS1124.5.C43 2013





Renewable power : a case study into selected renewable energy sectors in Australia for the inquiry into developing Australia's non-fossil fuel energy industry : background information : interim report / House of Representatives, Standing Committee on Industry and Resources

Australia. Parliament. House of Representatives. Standing Committee on Industry and Resources





Renewable energy cannot sustain a consumer society / by Ted Trainer

Trainer, Ted





Thin-film terrestrial photovoltaic (PV) modules : design qualification and type approval = Modules photovoltaïques (PV) en couches minces pour application terrestre : qualification de la conception et homologation





Systèmes photovoltaïques (PV) autonomes - vérification de la conception = Photovoltaic (PV) stand-alone systems - design verification





Germany's energy transition : a comparative perspective / Carol Hager, Christoph H. Stefes, editors





Near-IR oxime-based solvatochromic perylene diimide probe as a chemosensor for Pd species and Cu²⁺ ions in water and live cells

Photochem. Photobiol. Sci., 2020, 19, 504-514
DOI: 10.1039/C9PP00487D, Paper
Poonam Sharma, Sandeep Kaur, Satwinderjeet Kaur, Prabhpreet Singh
A near-IR colorimetric and fluorescent perylene diimide probe for detection of Pd⁰ (7.9 × 10⁻⁸ M) and Cu²⁺ (3.4 × 10⁻⁷ M) in water and live cells with solvatochromic properties is designed and synthesized.