Clinical care of the runner: assessment, biomechanical principles, and injury management / edited by Mark A. Harrast. library.mit.edu, 29 Mar 2020. Online Resource.
Improving healthcare services: coproduction, codesign and operations / Sharon J. Williams, Lynne Caley. library.mit.edu, 5 Apr 2020. Online Resource.
Fetal and neonatal eye pathology / Robert M. Verdijk, Martina C. Herwig-Carl. library.mit.edu, 5 Apr 2020. Online Resource.
Endocrinology of physical activity and sport / Anthony C. Hackney, Naama W. Constantini, editors. library.mit.edu, 5 Apr 2020. Online Resource.
Clinical cases in dermatopathology / Dong-Lin Xie, editor. library.mit.edu, 5 Apr 2020. Online Resource.
The art and science of filler injection: based on clinical anatomy and the pinch technique / Giwoong Hong, Seungmin Oh, Bongcheol Kim, Yongwoo Lee. library.mit.edu, 5 Apr 2020. Online Resource.
The caring heirs of Dr. Samuel Bard: profiles of selected distinguished graduates of Columbia University, College of Physicians and Surgeons / Peter Wortsman. library.mit.edu, 26 Apr 2020. Hayden Library - R690.W67 2019.
Landscapes of activism: civil society and HIV and AIDS care in northern Mozambique / Joel Christian Reed. library.mit.edu, 26 Apr 2020. Hayden Library - RA644.A25 R436 2018.
Transition to diagnosis-related group (DRG) payments for health: lessons from case studies / Caryn Bredenkamp, Sarah Bales, and Kristiina Kahur, editors. library.mit.edu, 26 Apr 2020. Online Resource.
International handbook of health expectancies / Carol Jagger, Eileen M. Crimmins, Yasuhiko Saito, Renata Tiene De Carvalho Yokota, Herman Van Oyen, Jean-Marie Robine, editors. library.mit.edu, 26 Apr 2020. Online Resource.
Rest uneasy: sudden infant death syndrome in twentieth century America / Brittany Cowgill. library.mit.edu, 26 Apr 2020. Hayden Library - RJ320.S93 C69 2018.
Pathological realities: essays on disease, experiments, and history / Mirko D. Grmek; edited, translated, and with an introduction by Pierre-Olivier Méthot; foreword by Hans-Jörg Rheinberger. library.mit.edu, 26 Apr 2020. Hayden Library - R133.G76 2019.
Perioperative care of the orthopedic patient / C. Ronald MacKenzie, Charles N. Cornell, Stavros G. Memtsoudis, editors. library.mit.edu, 26 Apr 2020. Online Resource.
Eyelid and Conjunctival Tumors: In Vivo Confocal Microscopy / edited by Mathilde Kaspi, Elisa Cinotti, Jean-Luc Perrot, Thibaud Garcin. library.mit.edu, 26 Apr 2020. Online Resource.
Clinical trials / Timothy M. Pawlik, Julie A. Sosa, editors. library.mit.edu, 26 Apr 2020. Online Resource.
Local wound care for dermatologists / Afsaneh Alavi, Howard I. Maibach, editors. library.mit.edu, 26 Apr 2020. Online Resource.
Unaffordable: American healthcare from Johnson to Trump / Jonathan Engel. library.mit.edu, 26 Apr 2020. Hayden Library - RA395.A3 E546 2018.
From hysteria to hormones: a rhetorical history / Amy Koerber. library.mit.edu, 26 Apr 2020. Hayden Library - RA564.85.K655 2018.
Multilingual healthcare: a global view on communicative challenges / Christiane Hohenstein, Magdalène Lévy-Tödter, editors. library.mit.edu, 26 Apr 2020. Online Resource.
Statistical remedies for medical researchers / Peter F. Thall. library.mit.edu, 26 Apr 2020. Online Resource.
Children and drug safety: balancing risk and protection in twentieth century America / Cynthia A. Connolly. library.mit.edu, 26 Apr 2020. Hayden Library - RJ560.C66 2018.
Oculoplastic Surgery: A Practical Guide to Common Disorders / edited by Essam A. El Toukhy. library.mit.edu, 26 Apr 2020. Online Resource.
Carving a niche: the medical profession in Mexico, 1800-1870 / Luz María Hernández Sáenz. library.mit.edu, 26 Apr 2020. Hayden Library - R465.H47 2018.
The Cambridge companion to Hippocrates / edited by Peter E. Pormann. library.mit.edu, 26 Apr 2020. Hayden Library - R126.H8 C36 2018.
Dr. Arthur Spohn: surgeon, inventor, and Texas medical pioneer / Jane Clements Monday and Frances Brannen Vick; with Charles W. Monday Jr.; introduction by Kenneth L. Mattox. library.mit.edu, 26 Apr 2020. Hayden Library - R154.S66 M66 2018.
Ethical issues in clinical forensic psychiatry / Artemis Igoumenou, editor. library.mit.edu, 26 Apr 2020. Online Resource.
Medicine, religion, and magic in early Stuart England: Richard Napier's medical practice / Ofer Hadass. library.mit.edu, 26 Apr 2020. Hayden Library - R489.N37 H33 2018.
Handbook of lower extremity reconstruction: clinical case-based review and flap atlas / Scott T. Hollenbeck, Peter B. Arnold, Dennis P. Orgill, editors. library.mit.edu, 3 May 2020. Online Resource.
Pediatric gender identity: gender-affirming care for transgender & gender diverse youth / edited by Michelle Forcier, Gerrit Van Schalkwyk, Jack L. Turban. library.mit.edu, 3 May 2020. Online Resource.
Evidence-based critical care: a case study approach / Robert C. Hyzy, Jakob McSparron, editors. library.mit.edu, 3 May 2020. Online Resource.
TRAPPIST-1 exoplanets could harbour significant amounts of water. feedproxy.google.com, 13 Feb 2018. All seven worlds circling a red dwarf could be habitable, say astronomers.
Pistachio trees 'talk' to their neighbours, reveals statistical physics. feedproxy.google.com, 19 Feb 2018. Ising model could account for nut production of pistachio orchards.
Nuclear excitation by electron capture seen at long last. feedproxy.google.com, 20 Feb 2018. Breakthrough could lead to new type of energy source.
Build a Sliding Client Testimonials Carousel With jQuery. designshack.net, 15 Jan 2014. Many portfolio websites include a list of previous clients to build trust from other potential customers. Reading what other people have said about a service or product is one way to garner support from visitors who have never heard about your company before. (Of course, this design technique only works if you have previous clients […] Tags: JavaScript, animation, carousel, jQuery, slider.
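The tutorial body isn't reproduced in this feed, but as a rough illustration of the rotation logic such a jQuery testimonials carousel involves, a minimal sketch follows. The markup, the #testimonials and .slide names, the jQuery version, and the timing values are my own assumptions, not taken from the article, and this sketch fades between quotes rather than sliding them as the tutorial's title suggests:

```html
<!-- Minimal testimonials rotator sketch; assumes jQuery is available. -->
<div id="testimonials">
  <blockquote class="slide">"Great service!" (Client A)</blockquote>
  <blockquote class="slide">"Highly recommended." (Client B)</blockquote>
  <blockquote class="slide">"Would hire again." (Client C)</blockquote>
</div>
<script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>
<script>
  $(function () {
    var $slides = $('#testimonials .slide');
    var current = 0;
    $slides.hide().eq(0).show(); // show only the first quote on load
    setInterval(function () {
      var next = (current + 1) % $slides.length; // wrap around at the end
      $slides.eq(current).fadeOut(400, function () {
        $slides.eq(next).fadeIn(400); // reveal the next quote once the fade completes
      });
      current = next;
    }, 4000); // advance every four seconds
  });
</script>
```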
Tab Discarding in Chrome: a Memory-Saving Experiment. feedproxy.google.com, 1 Sep 2015.
Updates to the service worker cache API. feedproxy.google.com, 3 Sep 2015.
AAP MLA Prakash Jarwal arrested in Delhi doctor suicide case. 9 May 2020. A Delhi court had on May 8 issued a non-bailable warrant against Jarwal and his close aide Kapil Nagar.
Canary in a Coal Mine: How Tech Provides Platforms for Hate. feedproxy.google.com, 19 Mar 2019.

As I write this, the world is sending its thoughts and prayers to our Muslim cousins. The Christchurch act of terrorism has once again reminded the world that white supremacy's rise is very real, that its perpetrators are no longer on the fringes of society, but centered in our holiest places of worship. People are begging us to not share videos of the mass murder or the hateful manifesto that the white supremacist terrorist wrote. That's what he wants: for his proverbial message of hate to be spread to the ends of the earth.

We live in a time where you can stream a mass murder and hate crime from the comfort of your home. Children can access these videos, too.

As I work through the pure pain, unsurprised, observing the toll on Muslim communities (as a non-Muslim, who matters least in this event), I think of the imperative role that our industry plays in this story.

At time of writing, YouTube has failed to ban and to remove this video. If you search for the video (which I strongly advise against), it still comes up with a mere content warning; the same content warning that appears for casually risqué content. You can bypass the warning and watch people get murdered. Even when the video gets flagged and taken down, new ones get uploaded. Human moderators have to relive watching this trauma over and over again for unlivable wages. News outlets are embedding the video into their articles and publishing the hateful manifesto. Why? What does this accomplish?

I was taught in journalism class that media (photos, video, infographics, etc.) should be additive (a progressive enhancement, if you will) and provide something to the story for the reader that words cannot. Is it necessary to show murder for our dear readers to understand the cruelty and finality of it? Do readers gain something more from watching fellow humans have their lives stolen from them? What psychological damage are we inflicting upon millions of people, and for what? Who benefits? The mass shooter(s) who had a message to accompany their mass murder. News outlets are thirsty for perverse clicks to garner more ad revenue. We, by way of our platforms, give agency and credence to these acts of violence, then pilfer profits from them.

Tech is a money-making accomplice to these hate crimes.

Christchurch is just one example in an endless array where the tools and products we create are used as a vehicle for harm and for hate.

Facebook and the Cambridge Analytica scandal played a critical role in the outcome of the 2016 presidential election. The concept of "race realism," which is essentially a term that white supremacists use to codify their false racist pseudo-science, was actively tested on Facebook's platform to see how the term would sit with people who are ignorantly sitting on the fringes of white supremacy. Full-blown white supremacists don't need this soft language. This is how radicalization works.

The strategies articulated in the above article are not new. Racist propaganda predates social media platforms. What we have to be mindful of is that we're building smarter tools with power we don't yet fully understand: you can now have an AI-generated human face. Our technology is accelerating at a frightening rate, a rate faster than our reflective understanding of its impact.

Combine the time-tested methods of spreading white supremacy, the power to manipulate perception through technology, and the magnitude and reach that has become democratized and anonymized. We're staring at our own reflection in the Black Mirror.

The right to speak versus the right to survive

Tech has proven time and time again that it voraciously protects first amendment rights above all else. (I will also take this opportunity to remind you that the first amendment of the United States offers protection to the people from the government abolishing free speech, not from private money-making corporations.)

Evelyn Beatrice Hall writes in The Friends of Voltaire, "I disapprove of what you say, but I will defend to the death your right to say it." Fundamentally, Hall's quote expresses that we must protect, possibly above all other freedoms, the freedom to say whatever we want to say. (Fun fact: the quote is often misattributed to Voltaire, but Hall actually wrote it to explain Voltaire's ideologies.)

And the logical anchor here is sound: we must grant everyone else the same rights that we would like for ourselves. Former 99u editor Sean Blanda wrote a thoughtful piece on the "Other Side," where he posits that we lack tolerance for people who don't think like us, but that we must because we might one day be on the other side.

I agree in theory. But what happens when a portion of the rights we grant to one group (let's say, free speech to white supremacists) means the active oppression of another group's rights (let's say, every person of color's right to live)? James Baldwin expresses this idea with a clause, "We can disagree and still love each other unless your disagreement is rooted in my oppression and denial of my humanity and right to exist."

It would seem that we have a moral quandary where two sets of rights cannot coexist. Do we protect the privilege for all users to say what they want, or do we protect all users from hate?

Because of this perceived moral quandary, tech has often opted out of this conversation altogether. Platforms like Twitter and Facebook, two of the biggest offenders, continue to allow hate speech to ensue with irregular to no regulation.

When explicitly asked about his platform as a free-speech platform and its consequence to privacy and safety, Twitter CEO Jack Dorsey said, "So we believe that we can only serve the public conversation, we can only stand for freedom of expression if people feel safe to express themselves in the first place. We can only do that if they feel that they are not being silenced."

Dorsey and Twitter are most concerned about protecting expression and about not silencing people. In his mind, if he allows people to say whatever they want on his platform, he has succeeded. When asked about why he's failed to implement AI to filter abuse like, say, Instagram had implemented, he said that he's most concerned about being able to explain why the AI flagged something as abusive. Again, Dorsey protects the freedom of speech (and thus, the perpetrators of abuse) before the victims of abuse.

But he's inconsistent about it. In a study by George Washington University comparing white nationalist and ISIS social media usage, Twitter's freedom of speech was not granted to ISIS. Twitter suspended 1,100 accounts related to ISIS whereas it suspended only seven accounts related to Nazis, white nationalism, and white supremacy, despite those accounts having more than seven times the followers, and tweeting 25 times more than the ISIS accounts.

Twitter here made a moral judgment that the fewer, less active, and less influential ISIS accounts were somehow not welcome on their platform, whereas the prolific and burgeoning Nazi and white supremacy accounts were.

So, Twitter has shown that it won't protect free speech at all costs or for all users. We can only conclude that Twitter is either intentionally protecting white supremacy or simply doesn't think it's very dangerous. Regardless of which it is (I think I know), the outcome does not change the fact that white supremacy is running rampant on its platforms and many others.

Let's brainwash ourselves for a moment and pretend like Twitter does want to support freedom of speech equitably and stays neutral and fair, to complete this logical exercise: going back to the dichotomy-of-rights example I provided earlier, where either the right to free speech or the right to safety and survival prevails, the rights and the power will fall into the hands of the dominant group or ideology.

In case you are somehow unaware, the dominating ideology, whether you're a flagrant white supremacist or not, is white supremacy. White supremacy was baked into the founding principles of the United States, the country where the majority of these platforms were founded and exist. (I am not suggesting that white supremacy doesn't exist globally, as it does, evidenced most recently by the terrorist attack in Christchurch. I'm centering the conversation intentionally around the United States, as it is my lived experience and where most of these companies operate.)

Facebook attempted to educate its team on white supremacy in order to address how to regulate free speech. A laugh-cry excerpt:

"White nationalism and calling for an exclusively white state is not a violation for our policy unless it explicitly excludes other PCs [protected characteristics]."

White nationalism is a softened synonym for white supremacy, so that racists-lite can feel more comfortable with their transition into hate. White nationalism (a.k.a. white supremacy) by definition explicitly seeks to eradicate all people of color. So, Facebook should see white nationalist speech as exclusionary, and therefore a violation of their policies.

Regardless of what tech leaders like Dorsey or Facebook CEO Zuckerberg say, or what mediocre and uninspired condolences they might offer, inaction is an action.

Companies that use terms and conditions or acceptable use policies to defend their inaction around hate speech are enabling and perpetuating white supremacy. Policies are written by humans to protect that group of humans' ideals. The message they use might be that they are protecting free speech, but hate speech is a form of free speech. So effectively, they are protecting hate speech. Well, as long as it's for white supremacy and not the Islamic State.

Whether the motivation is fear (losing loyal Nazi customers and their sympathizers) or hate (because their CEO is a white supremacist), it does not change the impact: hate speech is tolerated, enabled, and amplified by way of their platforms.

"That wasn't our intent"

Product creators might be thinking, "Hey, look, I don't intentionally create a platform for hate. The way these features were used was never our intent."

Intent does not erase impact. We cannot absolve ourselves of culpability merely because we failed to conceive of such evil use cases when we built it.

While we very well might not have created these platforms with the explicit intent to help Nazis, or imagined they would be used to spread their hate, the reality is that our platforms are being used in this way.

As product creators, it is our responsibility to protect the safety of our users by stopping those that intend to cause them harm or already do. Better yet, we ought to think of this before we build the platforms, to prevent this in the first place.

The question to answer isn't, "Have I made a place where people have the freedom to express themselves?" Instead we have to ask, "Have I made a place where everyone has the safety to exist?" If you have created a place where a dominant group can embroil and embolden hate against another group, you have failed to create a safe place. The foundations of hateful speech (beyond the psychological trauma of it) lead to events like Christchurch.

We must protect safety over speech.

The Domino Effect

This week, Slack banned 28 hate groups. What is most notable, to me, is that the groups did not break any parts of their Acceptable Use Policy. Slack issued a statement:

"The use of Slack by hate groups runs counter to everything we believe in at Slack and is not welcome on our platform… Using Slack to encourage or incite hatred and violence against groups or individuals because of who they are is antithetical to our values and the very purpose of Slack."

That's it.

It is not illegal for tech companies like Slack to ban groups from using their proprietary software, because a private company can regulate users who do not align with its vision as a company. Think of it as the "no shoes, no socks, no service" model, but for tech.

Slack simply decided that supporting the workplace collaboration of Nazis around efficient ways to evangelize white supremacy was probably not in line with their company directives around inclusion. I imagine Slack also considered how their employees of color, most ill-affected by white supremacy, would feel working for a company that supported it, actively or not.

What makes the Slack example so notable is that they acted swiftly and of their own accord. Slack chose the safety of all their users over the speech of some.

When caught in their enablement of white supremacy, some companies will only budge under pressure from activist groups, users, and employees. PayPal finally banned hate groups after Charlottesville and after the Southern Poverty Law Center (SPLC) explicitly called them out for enabling hate. SPLC had identified this fact for three years prior. PayPal had ignored them for all three years.

Unfortunately, taking these "stances" against something as clearly and viscerally wrong as white supremacy is rare for companies to do. The tech industry tolerates this inaction through unspoken agreements. If Facebook doesn't do anything about racist political propaganda, YouTube doesn't do anything about PewDiePie, and Twitter doesn't do anything about disproportionate abuse against Black women, it says to the smaller players in the industry that they don't have to either.

The tech industry reacts to its peers. When there is disruption, as was the case with Airbnb, which screened and rejected any guests who it believed to be partaking in the Unite the Right Charlottesville rally, companies follow suit. GoDaddy cancelled Daily Stormer's domain registration, and Google did the same when the site attempted migration.

If one company, like Slack or Airbnb, decides to do something about the role it's going to play, it creates a perverse kind of FOMO for the rest: fear of missing out on doing the right thing and standing on the right side of history.

Don't have FOMO, do something

The type of activism at those companies all started with one individual. If you want to be part of the solution, I've gathered some places to start. The list is not exhaustive, and, as with all things, I recommend researching beyond this abridged summary.

Understand how white supremacy impacts you as an individual. Now, if you are a person of color, queer, disabled, or trans, it's likely that you know this very intimately. If you are not any of those things, then you, as a majority person, need to understand how white supremacy protects you and works in your favor. It's not easy work, it is uncomfortable and unfamiliar, but you have the most powerful tools to fix tech. The resources are aplenty, but my favorite abridged list:
- Seeing White podcast
- Ijeoma Oluo's So you want to talk about race
- Reni Eddo-Lodge's Why I'm no longer talking to white people about race (a very key read for UK folks)
- Robin DiAngelo's White Fragility

See where your company stands. Read your company's policies, like acceptable use and privacy policies, and find your CEO's stance on safety and free speech. While these policies are baseline (and in the Slack example, sort of irrelevant), it's important to know your company's track record. As an employee, your actions and decisions either uphold the ideologies behind the company or they don't. Ask yourself if the company's ideologies are worth upholding and whether they align with your own. Education will help you to flag if something contradicts those policies, or if the policies themselves allow for unethical activity.

Examine everything you do critically on an ongoing basis. You may feel your role is small or that your company is immune; maybe you are responsible for the maintenance of one small algorithm. But consider how that algorithm or similar ones can be exploited. Some key questions I ask myself:
- Who benefits from this? Who is harmed?
- How could this be used for harm?
- Who does this exclude? Who is missing?
- What does this protect? For whom? Does it do so equitably?

See something? Say something. If you believe that your company is creating something that is or can be used for harm, it is your responsibility to say something. Now, I'm not naïve to the fact that there is inherent risk in this. You might fear ostracization or termination. You need to protect yourself first. But you also need to do something:
- Find someone who you trust who might be at less risk. Maybe if you're a nonbinary person of color, find a white cis man who is willing to speak up. Maybe if you're a white man who is new to the company, find a white man who has more seniority or tenure. But also, consider how you have so much more relative privilege compared to most other people and that you might be the safest option.
- Unionize.
- Find peers who might feel the same way and write a collective statement.
- Get someone influential outside of the company (if knowledge is public) to say something.

Listen to concerns, no matter how small, particularly if they're coming from the most endangered groups. If your user or peer feels unsafe, you need to understand why. People often feel like small things can be overlooked, as their initial impact might be less, but it is in the smallest cracks that hate can grow. Allowing one insensitive comment about race is still allowing hate speech. If someone, particularly someone in a marginalized group, brings up a concern, you need to do your due diligence to listen to it and to understand its impact.

I cannot emphasize this last point enough.

What I say today is not new. Versions of this article have been written before. Women of color like me have voiced similar concerns not only in writing, but in design reviews, in closed-door meetings to key stakeholders, in Slack DMs. We've blown our whistles. But here is the power of white supremacy.

White supremacy is so ingrained in every single aspect of how this nation was built, how our corporations function, and who is in control. If you are not convinced of this, you are not paying attention or are intentionally ignoring the truth.

Queer, Muslim, disabled, trans women and nonbinary folks of color (the marginalized groups most impacted by this) are the ones who are voicing these concerns most voraciously. Speaking up requires us to enter the spotlight and step outside of safety; we take a risk and are not heard.

The silencing of our voices is one of many effective tools of white supremacy. Our silencing lives within every microaggression, each time we're talked over, or not invited to partake in key decisions.

In tech, I feel I am a canary in a coal mine. I have sung my song to warn the miners of the toxicity. My sensitivity to it is heightened, because of my existence. But the miners look at me and tell me that my lived experience is false. It does not align with their narrative as humans. They don't understand why I sing.

If the people at the highest echelons of the tech industry (the white, male CEOs in power) fail to listen to its most marginalized people (the queer, disabled, trans people of color), the fate of the canaries will too become the fate of the miners.
Daily Ethical Design. feedproxy.google.com, 30 May 2019.

Suddenly, I realized that the people next to me might be severely impacted by my work.

I was having a quick lunch in the airport. A group of flight attendants sat down at the table next to me and started to prepare for their flight. For a while now, our design team had been working on futuristic concepts for the operations control center of these flight attendants' airline, pushing ourselves to come up with innovative solutions enabled by the newest technologies. As the control center deals with all activities around flying planes, our concepts touched upon everything and everyone within the airline.

How was I to know what the impact of my work would be on the lives of these flight attendants? And what about the lives of all the other people working at the airline? Ideally, we would have talked to all the types of employees in the company and tested our concepts with them. But, of course, there was no budget (or time) allocated to do so, not to mention we faced the hurdle of convincing (internal) stakeholders of the need. Not for the first time, I felt frustrated: practical, real-world constraints prevented me from assessing the impact and quality of my work. They prevented me from properly conducting ethical design.

What is ethical design?

Right, good question. A very comprehensive definition of ethical design can be found at Encyclopedia.com:

"Design ethics concerns moral behavior and responsible choices in the practice of design. It guides how designers work with clients, colleagues, and the end users of products, how they conduct the design process, how they determine the features of products, and how they assess the ethical significance or moral worth of the products that result from the activity of designing."

In other words, ethical design is about the "goodness" (in terms of benefit to individuals, society, and the world) of how we collaborate, how we practice our work, and what we create. There's never a black-and-white answer for whether design is good or bad, yet there are a number of areas for designers to focus on when considering ethics.

Usability

Nowadays usability has conquered a spot as a basic requirement for each interface; unusable products are considered design failures. And rightly so; we have a moral obligation as designers to create products that are intuitive, safe, and free from possibly life-threatening errors. We were all reminded of usability's importance by last year's accidental nuclear strike warning in Hawaii. What if, instead of a false positive, the operator had broadcast a false negative?

Accessibility

Like usability, inclusive design has become a standard item in the requirement list of many designers and companies. (I will never forget the time someone tried to use our website with a screen reader and got absolutely stuck at the cookie message.) Accessible design benefits all, as it attempts to cover as many needs and capabilities as possible. Yet for each design project, there are still a lot of tricky questions to answer. Who gets to benefit from our solutions? Who is (un)intentionally left out? Who falls outside the "target customer segment"?

Privacy

Another day, another Facebook privacy scandal. As we're progressing into the Data Age, the topic of privacy has become almost synonymous with design ethics. There's a reason why more and more people use DuckDuckGo as an alternative search engine to Google. Corporations have access to an abundance of personal information about consumers, and as designers we have the privilege, and the responsibility, of using this information to shape products and services. We have to consider how much information is strictly necessary and how much people are willing to give up in exchange for services. And how can we make people aware of the potential risks without overloading them?

User involvement

Overlapping largely with privacy, this focus area is about how we deal with our users and what we do with the data that we collect from them. IDEO has recently published The Little Book of Design Research Ethics, which provides a comprehensive overview of the core principles and guidelines we should follow when conducting design research.

Persuasion

Ethics related to persuasion is about the extent to which we may influence the behavior and thoughts of our users. It doesn't take much to bring acceptable, "white hat" persuasion into gray or even dark territories. Conversion optimization, for example, can easily turn into "How do we squeeze out more revenue from our customers by turning their unconsciousness against them?" Prime examples include Netflix, which convinces us to watch, watch, and watch even more, and Booking.com, which barrages our senses with urgency and social pressure.

Focus

The current digital landscape is addictive, distracting, and competing for attention. Designing for focus is about responsibly handling people's most valuable resource: time. Our challenge is to limit everything that disrupts our users' attention, lower the addictiveness of products, and create calmness. The Center for Humane Technology has started a useful list of resources for this purpose.

Sustainability

What's the impact of our work on the world's environment, resources, and climate? Instead of continuously adding new features in the unrelenting scrum treadmill, how could we design for fewer? We're in the position to create responsible digital solutions that enable sustainable consumer behavior and prevent overconsumption. For example, apps such as Optimiam and Too Good To Go allow people to order leftover food that would normally be trashed. Or consider Mutum and Peerby, whose peer-to-peer platforms promote the sharing and reuse of owned products.

Society

The Ledger of Harms of the Center for Humane Technology is a work-in-progress collection of the negative impacts that digital technology has on society, including topics such as relationships, mental health, and democracy. Designers who are mindful of society consider the impact of their work on the global economy, communities, politics, and health.

[Figure: The focus areas of design ethics. That's a lot to consider!]

Ethics as an inconvenience

Ideally, in every design project, we should assess the potential impact in all of the above-mentioned areas and take steps to prevent harm. Yet there are many legitimate, understandable reasons why we often neglect to do so. It's easy to have moral principles, yet in the real world, with the constraints that our daily life imposes upon us, it's seldom easy to act according to those principles.

We might simply say it's inconvenient at the moment. That there's a lack of time or budget to consider all the ethical implications of our work. That there are many more pressing concerns that have priority right now. We might genuinely believe it's just a small issue, something to consider later, perhaps. Mostly, we are simply unaware of the possible consequences of our work. And then there's the sheer complexity of it all: it's simply too much to focus on simultaneously. When short on time, or in the heat of approaching deadlines and impatient stakeholders, how do you incorporate all of design ethics' focus areas? Where do you even start?

Ethics as a structural practice

For these reasons, I believe we need to elevate design ethics to a more practical level. We need to find ways to make ethics not an afterthought, not something to be considered separately, but rather something that's so ingrained in our process that not doing it means not doing design at all.

The only way to overcome the "inconvenience" of acting ethically is to practice daily ethical design: ethics structurally integrated into our daily work, processes, and tools as designers. No longer will we have to rely on the exceptions among us: those extremely principled designers who are brave enough to stand up against the system no matter what kind of pressure is put upon them. Because the system will be on our side.

By applying ethics daily and structurally in our design process, we'll be able to identify and neutralize, at a very early stage, the potential for mistakes and misuse. We'll increase the quality of our design and our practices simply because we'll think things through more thoroughly, in a more conscious and structured manner.

But perhaps most important is that we'll establish a new standard for design. A standard that we can sell to our clients as the way design should be done, with ethical design processes and deliverables already included. A standard that can be taught to design students so that the newest generation of designers doesn't know any better than to apply ethics, always.

How to practice daily ethical design?

At this point we've arrived at the question of how we can structurally integrate ethics into our design process. How do we make sure that our daily design decisions will result in a product that's usable and accessible; protects people's privacy, agency, and focus; and benefits both society and nature? I want to share with you some best practices that I've identified so far, and how I've tried to apply them during a recent project at Mirabeau. The goal of the project was to build a web application that gives a shaver manufacturer's factory workers insight into the real-time availability of production materials.

Connect to your organization's mission and values

By connecting our designs to the mission and values of the companies we work for, we can structurally use our design skills in a strategic manner, for moral purposes. We can challenge the company to truly live up to its promises and support it in carrying out its mission. This does, however, require you to be aware of the company's values, and to compare these to your personal values.

As I had worked with our example client before, I knew it was a company that takes care of its employees and has a strong focus on creating a better world. During the kick-off phase, we used a strategy pyramid to structure the client's mission and values, and to agree upon success factors for the project. We translated the company's customer-facing brand guidelines into employee-focused design principles that maintained the essence of the organization.

Keep track of your assumptions

Throughout our entire design process, we make assumptions with each decision that we take. By structurally keeping track of these assumptions, you'll never forget about the limitations of your design and where the potential risks lie in terms of (harmful) impact on users, the project, the company, and society.

In our example project, we listed our assumptions about user goals, content, and functionalities for each page of the application. If we were not fully sure about the value for end users, or the accuracy of a user goal, we marked it as a value assumption. When we were unsure if data could be made available, we marked this as a data (feasibility) assumption. If we were not sure whether a feature would add to the manufacturer's business, we marked it as a scope assumption. Every week, we tested our assumptions with end users and business stakeholders through user tests and sprint demos. Each design iteration led to new questions and assumptions to be tested the next week.

Aim to be proven wrong

While our assumptions are the known unknowns, there are always unknown unknowns that we aren't aware of but that could be a huge risk for the quality and impact of our work. The only way we can identify these is by applying the scientific principle of falsifiability: actively seeking to be proven wrong. Only outsiders can point out to us what we miss as an individual or as a team.

In our weekly user tests, we included factory workers and stakeholders with different disciplines, from different departments, and working in different contexts, to identify the edge cases that could break our concept. On one occasion, this made us reconsider the entirety of our concept. Still, we could have done better: although scalability to other factories was an important success factor, we were unable to gather input from those other factories during the project. We felt our only option was to mention this as a risk ("limit to scalability").

Use the power of checklists

Let's face it: we forget things. (Without scrolling up the page, can you name all the focus areas of design ethics?) This is where checklists help us out: they provide knowledge in the world, so that we don't have to process it in our easily overwhelmed memory. Simple yet powerful, a checklist is an essential tool for practicing daily ethical design.

In our example project, we used checklists to maintain an overview of questions and assumptions to user test, to check whether we had included our design principles properly, and to assess whether we complied with the client's values, design principles, and the agreed-upon success factors. In hindsight, we could also have taken a moment during the concept phase to go through the list of focus areas for design ethics, as well as taken a more structural approach to checking accessibility guidelines.

The main challenge for daily ethical design

Most ethics focus areas are quite tangible: design decisions have immediate, often visible effects. While certainly challenging in their own right, they're relatively easy to integrate into our daily practice, especially for experienced designers. Society and the environment, however, are more intangible topics; the effects of our work in these areas are distant and uncertain.

I'm sure that when Airbnb was first conceived, the founders did not consider the magnitude of its disruptive impact on the housing market. The same goes for Instagram, as its role in creating demand for fast fashion must have been hard to foresee. Hard, but not impossible. So how do we overcome this challenge and make the impact that we have on society and the environment more immediate, more daily?

Conduct Dark Reality sessions

The ancient Greek philosopher Socrates used a series of questions to gradually uncover the invalidity of people's beliefs. In a very similar way, we can uncover the assumptions and potential disastrous consequences of our concepts in a "Dark Reality" session, a form of speculative design that focuses on stress-testing a concept with challenging questions.

We have to ask ourselves, or even better, have somebody outside our team ask us, questions such as: What is the lifespan of your product? What if the user base will be in the millions? What are the long-term effects on the economy, society, and the environment? Who benefits from your design? Who loses? Who is excluded? And perhaps most importantly, how could your design be misused? (For more of these questions, Alan Cooper provided a great list in his keynote at Interaction 18.)

The back-and-forth Q&A of the Dark Reality session will help us consider and identify our concept's weaknesses and potential consequences. As it is a team effort, it will spark discussion and uncover differences in team members' ethical values. Moreover, the session will result in a list of questions and assumptions that can be tested with potential users and subject matter experts. In the project for the airline control center, it resulted in more consideration for the human role in automation and how digital interfaces can continue to support human capabilities (instead of replacing them), and in reflection on the role of airports in future society.

The Dark Reality session is best conducted during the convergent parts of the double diamond, as these are the design phases in which we narrow down to realistic ideas. It's vital to have a questioner from outside the team with strong interviewing skills who doesn't easily accept an answer as sufficient. There are helpful tools available to help structure the session, such as the Tarot Cards of Tech and these ethical tools.

Take a step back to go forward

As designers, we're optimists by nature. We see the world as a set of problems that we can solve systematically and creatively if only we try hard enough. We intend well. However, merely having the intention to do good is not going to be enough. Our mindset comes with the pitfall of (dis)missing potential disastrous consequences, especially under the pressure of daily constraints. That's why we need to regularly, systematically take a step back and consider the future impact of our work. My hope is that the practical, structural approach to ethics introduced in this article will help us agree on a higher standard for design.
Class history and class practices in the periphery of capitalism / edited by Paul Zarembka. library.mit.edu, 1 Mar 2020. Dewey Library - HB501.C53 2019.
The Canadian environment in political context / Andrea Olive. library.mit.edu, 8 Mar 2020. Dewey Library - HC120.E5 O45 2019.
The Russian job: the forgotten story of how America saved the Soviet Union from ruin / Douglas Smith. library.mit.edu, 8 Mar 2020. Dewey Library - HC340.F3 S55 2019.
Uncanny valley: a memoir / Anna Wiener. library.mit.edu, 8 Mar 2020. Dewey Library - HC107.C2 H5335 2020.
Mongrel firebugs and men of property: capitalism and class conflict in American history / Steve Fraser. library.mit.edu, 8 Mar 2020. Dewey Library - HC110.C3 F73 2019.
The ethical algorithm: the science of socially aware algorithm design / Michael Kearns and Aaron Roth. library.mit.edu, 8 Mar 2020. Dewey Library - HC79.I55 K43 2020.
Predatory value extraction: how the looting of the business corporation became the U.S. norm and how sustainable prosperity can be restored / William Lazonick and Jang-Sup Shin. library.mit.edu, 8 Mar 2020. Dewey Library - HB201.L39 2020.
Laid waste!: the culture of exploitation in early America / John Lauritz Larson. library.mit.edu, 8 Mar 2020. Dewey Library - HC103.7.L36 2020.
The dark side of nudges / by Maria Alejandra Caporale Madi. library.mit.edu, 8 Mar 2020. Dewey Library - HB74.P8 C365 2020.
Italy's economic revolution: integration and economy in Republican Italy / Saskia T. Roselaar. library.mit.edu, 8 Mar 2020. Dewey Library - HC39.R67 2019.
Possessive individualism: a crisis of capitalism / Daniel W. Bromley. library.mit.edu, 8 Mar 2020. Dewey Library - HB501.B76 2019.