moral

A bridge too far: Bill Baroni, Bridget Kelly and Chris Christie committed moral crimes against New Jersey

By the time prosecutors indicted Chris Christie flunkies Bridget Kelly and Bill Baroni in 2015 for shutting down Fort Lee's George Washington Bridge lanes for four days in 2013, punishment for a mayor who had failed to endorse the big man in Trenton's reelection, the two sick sycophants had long since lost their stupid sinecures in the State House and the Port Authority. And Christie had already rightly lost the trust of Jerseyans for building the hothouse in which the lichens could grow.




moral

Letters to the Editor: Rationing COVID-19 treatment to the elderly and disabled is illegal and immoral

The author of the Americans With Disabilities Act warns that coronavirus treatment that takes disability and age into account is immoral and illegal.




moral

Tottenham compared to Stoke by Peter Crouch after ‘demoralising’ Chelsea performance



Tottenham have been compared to Stoke sides of the past after their loss to Chelsea.




moral

Coronavirus: Why healthcare workers are at risk of moral injury

War veterans can experience trauma known as moral injury - now health workers are at risk too.




moral

CBD News: New surveys of more than 5,000 consumers in five countries indicate that the majority (79 per cent) feel that "companies have a moral obligation" to have a positive impact on people and biodiversity in their sourcing of natural ingredients.




moral

Investigation of inter- and intra-tumoral heterogeneity of glioblastoma using TOF-SIMS

Samvel K Gularyan
Apr 6, 2020; 0:RA120.001986v1-mcp.RA120.001986
Research




moral

Initial studies with [11C]vorozole positron emission tomography detect over-expression of intra-tumoral aromatase in breast cancer

Introduction: Aromatase inhibitors are the mainstay of hormonal therapy in estrogen receptor-positive, postmenopausal breast cancer, although the response rate is just over 50%. The goal of the present study was to validate and optimize positron emission tomography (PET) with 11C-vorozole for measuring aromatase expression in postmenopausal breast cancer.

Methods: Ten newly diagnosed, postmenopausal women with biopsy-confirmed breast cancer were administered 11C-vorozole intravenously, and PET emission data were collected between 40 and 90 minutes post-injection. Tracer injection and scanning were repeated 2 hours after ingestion of 2.5 mg letrozole p.o. Mean and maximal standard uptake values and their ratios to non-tumor tissue (SUVs, SUVRs) were calculated for tumor and non-tumor regions at baseline and after letrozole. Biopsy specimens from the same tumors were stained for aromatase using immunohistochemistry and evaluated for stain intensity and the percentage of immune-positive cells.

Results: Seven of the 10 women (70%) demonstrated increased focal uptake of tracer (SUVR > 1.1) coinciding with the mammographic location of the lesion. The other 3 women (30%) did not show increased uptake in the tumor (SUVR < 1.0). All of the cases with SUVR above 1.1 had SUVs above 2.4, and there was no overlap in SUV between the two groups: mean SUV in tumors overexpressing aromatase (SUVR > 1.1) ranged from 2.47 to 13.6, while tumors not overexpressing aromatase (SUVR < 1.0) ranged from 0.8 to 1.8. Pretreatment with letrozole reduced tracer uptake in the majority of subjects, although the percentage of blocking varied across and within tumors. Tumors with high SUV in vivo also showed high staining intensity on IHC.

Conclusion: PET with 11C-vorozole is a useful technique for measuring aromatase expression in individual breast lesions, enabling non-invasive quantitative measurement of baseline and post-treatment aromatase availability in primary tumors and metastatic lesions.
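For orientation, the sketch below shows how the uptake metrics quoted above are conventionally computed: SUV normalizes tissue activity to injected dose per unit body weight (assuming unit tissue density), and SUVR is the tumor-to-reference SUV ratio. This is a minimal illustration rather than code from the study; the activity values, injected dose, and body weight are hypothetical, and only the SUVR > 1.1 and SUVR < 1.0 cut-offs come from the abstract.

```python
# Minimal sketch of the standard SUV / SUVR calculations referenced in the abstract.
# All numeric inputs below are hypothetical; only the SUVR thresholds come from the text.

def suv(tissue_activity_bq_per_ml: float, injected_dose_bq: float, body_weight_g: float) -> float:
    """Standard uptake value: tissue activity normalized to injected dose per gram of
    body weight (assumes tissue density of 1 g/mL)."""
    return tissue_activity_bq_per_ml / (injected_dose_bq / body_weight_g)

def suvr(tumor_suv: float, reference_suv: float) -> float:
    """SUV ratio of tumor to non-tumor (reference) tissue."""
    return tumor_suv / reference_suv

if __name__ == "__main__":
    injected_dose = 370e6   # 370 MBq injected (hypothetical)
    weight = 70e3           # 70 kg body weight, in grams
    tumor_suv = suv(18_000.0, injected_dose, weight)      # hypothetical tumor activity, Bq/mL
    reference_suv = suv(6_000.0, injected_dose, weight)   # hypothetical non-tumor activity, Bq/mL
    ratio = suvr(tumor_suv, reference_suv)
    if ratio > 1.1:
        label = "increased focal uptake (SUVR > 1.1)"
    elif ratio < 1.0:
        label = "no increased uptake (SUVR < 1.0)"
    else:
        label = "indeterminate (1.0 <= SUVR <= 1.1)"
    print(f"tumor SUV = {tumor_suv:.2f}, SUVR = {ratio:.2f} -> {label}")
```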




moral

Investigation of inter- and intra-tumoral heterogeneity of glioblastoma using TOF-SIMS [Research]

Glioblastoma (GBM) is one of the most aggressive human cancers, with a median survival of less than two years. A distinguishing pathological feature of GBM is a high degree of inter- and intratumoral heterogeneity. Intertumoral heterogeneity of GBM has been extensively investigated at the genomic, methylomic, transcriptomic, proteomic and metabolomic levels; however, only a few studies describe intratumoral heterogeneity, owing to the lack of methods that allow analysis of GBM samples with high spatial resolution. Here, we applied TOF-SIMS (time-of-flight secondary ion mass spectrometry) to the analysis of single cells and clinical samples such as paraffin-embedded and frozen tumor sections obtained from 57 patients. We developed a technique that allows us to simultaneously detect the distribution of proteins and metabolites in glioma tissue with 800 nm spatial resolution. Our results demonstrate that, according to TOF-SIMS data, glioma samples can be subdivided into clinically relevant groups and distinguished from normal brain tissue. In addition, TOF-SIMS was able to elucidate differences between morphologically distinct regions of GBM within the same tumor. By staining GBM sections with gold-conjugated antibodies against Caveolin-1, we could visualize the border between zones of necrotic and cellular tumor and subdivide glioma samples into groups characterized by different patient survival. Finally, we demonstrated that GBM contains cells characterized by high levels of Caveolin-1 protein and cholesterol; this population may partly represent glioma stem cells. Collectively, our results show that the technique described here allows glioma tissues to be analyzed with a spatial resolution beyond the reach of most other omics approaches, and the data obtained may be used to predict the clinical behavior of the tumor.
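The "subdividing samples into groups" step above can be illustrated with a generic clustering sketch. This is not the authors' pipeline: the peak-intensity table is synthetic, the preprocessing (total-ion-count normalization plus standardization) is simply a reasonable default, and the number of clusters is an arbitrary illustrative choice.

```python
# Generic illustration of grouping samples by their spectral profiles
# (synthetic data; not the pipeline used in the study).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for a peak-intensity table: 57 samples x 40 selected m/z peaks.
n_samples, n_peaks = 57, 40
spectra = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, n_peaks))

# Normalize each sample to its total ion count, then standardize each peak across samples.
spectra /= spectra.sum(axis=1, keepdims=True)
features = StandardScaler().fit_transform(spectra)

# Group samples into k clusters; k = 3 is an arbitrary choice for illustration.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print("samples per cluster:", np.bincount(labels))
```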




moral

US must address addiction as an illness, not as a moral failing, Surgeon General says




moral

Christmas 2016 - ideologies and moralities

In an ideal world, policies would be evidence-based, but governments are made of humans, who have positions and ideologies and moral bases. In this podcast Anthony Painter, from the RSA, will be talking about why universal basic income may work and why its proponents cross ideological barriers, and writer and philosopher AC Grayling explains how...




moral

The Effect of Insulin on the Disposal of Intravenous Glucose: Results from Indirect Calorimetry and Hepatic and Femoral Venous Catheterization

R A DeFronzo
Dec 1, 1981; 30:1000-1007
Original Contribution




moral

In Face of Tragedy, 'Whodunit' Question Often Guides Moral Reasoning

When nearly 200 people in India were killed in terrorist attacks late last month, the carnage received saturation media coverage around the globe. When nearly 600 people in Zimbabwe died in a cholera outbreak a week ago, the international response was far more muted.




moral

STEAK BALMORAL (Oyster Mushroom and Leek Sauce Balmoral)

Where is winter without a gutsy sauce to accompany a juicy steak or even a roast chicken? This is more of a ragout-type sauce.




moral

After Okla.'s Historic Pay Raise, Morale Is Up—But Teacher Shortage Persists

Despite a $6,100 teacher pay raise this spring, school districts report that they're starting the new academic year with nearly 500 teaching vacancies.




moral

How Principals and District Leaders Are Trying to Boost Lagging Teacher Morale During COVID-19

Knowing the shift to remote learning would be tough for teachers, school and district administrators have scrambled to assemble as many kinds of supports as they can.




moral

Des rapports conjugaux considérés sous le triple point de vue de la population, de la santé et de la morale publique / par Alex. Mayer.

Londres : Paris, 1860.




moral

Élémens d'hygiène, ou de l'Influence des choses physiques et morales sur l'homme, et des moyens de conserver la santé / par Étienne Tourtelle.

Paris : Rémont, 1815.




moral

Morality, supported by Religion, points the way to happiness. Engraving by E. de Ghendt, 1807, after J.M. Moreau.

[Paris], [1807]




moral

Pediatric pelvic and proximal femoral osteotomies

ISBN 978-3-319-78033-7




moral

"a new science of morality"




moral

The Moral Meaning of the Plague

The virus is a test. We have the freedom to respond.




moral

When moral codes disappear in the fog of bloody war

The court was furnished in blond wood. There were no wigs and the accused man wore a jersey. But the informality was in contrast to the gravity of the charges. An army officer was on trial for a war crime: the killing of 11 innocent women and children in Afghanistan.




moral

Los peligros mortales del moralismo A

John MacArthur's in-depth Bible teaching brings the transforming truth of God's Word to millions of people every day.




moral

Los peligros mortales del moralismo B

John MacArthur's in-depth Bible teaching brings the transforming truth of God's Word to millions of people every day.




moral

Los peligros mortales del moralismo C

John MacArthur's in-depth Bible teaching brings the transforming truth of God's Word to millions of people every day.




moral

T Follicular Helper Cells Regulate Humoral Response for Host Protection against Intestinal Citrobacter rodentium Infection [INFECTIOUS DISEASE AND HOST RESPONSE]

Key Points

  • Lack of Tfh cells renders the mice susceptible to C. rodentium infection.

  • Tfh cell–dependent protective Abs are essential to control C. rodentium.

  • Tfh cells regulate IgG1 response to C. rodentium infection.




    moral

    A Single Intramuscular Dose of a Plant-Made Virus-Like Particle Vaccine Elicits a Balanced Humoral and Cellular Response and Protects Young and Aged Mice from Influenza H1N1 Virus Challenge despite a Modest/Absent Humoral Response [Vaccines]

    Virus-like particle (VLP) influenza vaccines can be given intramuscularly (i.m.) or intranasally (i.n.) and may have advantages over split-virion formulations in the elderly. We tested a plant-made VLP vaccine candidate bearing the viral hemagglutinin (HA), delivered either i.m. or i.n., in young and aged mice. Young adult (5- to 8-week-old) and aged (16- to 20-month-old) female BALB/c mice received a single 3-μg dose, based on HA (A/California/07/2009 H1N1) content, of either a plant-made H1-VLP vaccine (i.m. or i.n.) or a split-virion vaccine (i.m.), or were left naive. After vaccination, humoral and splenocyte responses were assessed, and some mice were challenged. Both VLP and split vaccines given i.m. protected 100% of the young animals, but the VLP group lost the least weight and had stronger humoral and cellular responses. Compared to split-vaccine recipients, aged animals vaccinated i.m. with VLP were more likely to survive challenge (80% versus 60%). The lung viral load postchallenge was lowest in the VLP i.m. groups. Mice vaccinated with VLP i.n. had little detectable immune response, but survival was significantly increased. In both age groups, i.m. administration of the H1-VLP vaccine elicited more balanced humoral and cellular responses and provided better protection from homologous challenge than the split-virion vaccine.




    moral

    Prevalent and Diverse Intratumoral Oncoprotein-Specific CD8+ T Cells within Polyomavirus-Driven Merkel Cell Carcinomas

    Merkel cell carcinoma (MCC) is often caused by persistent expression of Merkel cell polyomavirus (MCPyV) T-antigen (T-Ag). These non-self proteins comprise about 400 amino acids (AA). Clinical responses to immune checkpoint inhibitors, seen in about half of patients, may relate to T-Ag–specific T cells. Strategies to increase CD8+ T-cell number, breadth, or function could augment checkpoint inhibition, but vaccines to augment immunity must avoid delivery of oncogenic T-antigen domains. We probed MCC tumor-infiltrating lymphocytes (TIL) with an artificial antigen-presenting cell (aAPC) system and confirmed T-Ag recognition with synthetic peptides, HLA-peptide tetramers, and dendritic cells (DC). TILs from 9 of 12 (75%) subjects contained CD8+ T cells recognizing 1–8 MCPyV epitopes per person. Analysis of 16 MCPyV CD8+ TIL epitopes and prior TIL data indicated that 97% of patients with MCPyV+ MCC had HLA alleles with the genetic potential to restrict CD8+ T-cell responses to MCPyV T-Ag. The LT AA 70–110 region was epitope rich, whereas the oncogenic domains of T-Ag were not commonly recognized. Specific recognition of T-Ag–expressing DCs was documented. Recovery of MCPyV oncoprotein–specific CD8+ TILs from most tumors indicated that antigen indifference was unlikely to be a major cause of checkpoint inhibition failure. The myriad of epitopes restricted by diverse HLA alleles indicates that vaccination can be a rational component of immunotherapy if tumor immune suppression can be overcome, and the oncogenic regions of T-Ag can be modified without impacting immunogenicity.




    moral

    Intratumoral Delivery of a PD-1-Blocking scFv Encoded in Oncolytic HSV-1 Promotes Antitumor Immunity and Synergizes with TIGIT Blockade

    Oncolytic virotherapy can lead to systemic antitumor immunity, but the therapeutic potential of oncolytic viruses in humans is limited due to their insufficient ability to overcome the immunosuppressive tumor microenvironment (TME). Here, we showed that locoregional oncolytic virotherapy upregulated the expression of PD-L1 in the TME, which was mediated by virus-induced type I and type II IFNs. To explore PD-1/PD-L1 signaling as a direct target in tumor tissue, we developed a novel immunotherapeutic herpes simplex virus (HSV), OVH-aMPD-1, that expressed a single-chain variable fragment (scFv) against PD-1 (aMPD-1 scFv). The virus was designed to locally deliver aMPD-1 scFv in the TME to achieve enhanced antitumor effects. This virus effectively modified the TME by releasing damage-associated molecular patterns, promoting antigen cross-presentation by dendritic cells, and enhancing the infiltration of activated T cells; these alterations resulted in antitumor T-cell activity that led to reduced tumor burdens in a liver cancer model. Compared with OVH, OVH-aMPD-1 promoted the infiltration of myeloid-derived suppressor cells (MDSC), resulting in significantly higher percentages of CD155+ granulocytic-MDSCs (G-MDSC) and monocytic-MDSCs (M-MDSC) in tumors. In combination with TIGIT blockade, this virus enhanced tumor-specific immune responses in mice with implanted subcutaneous tumors or invasive tumors. These findings highlighted that intratumoral immunomodulation with an OV expressing aMPD-1 scFv could be an effective stand-alone strategy to treat cancers or drive maximal efficacy of a combination therapy with other immune checkpoint inhibitors.




    moral

    Tumoral and immune heterogeneity in an anti-PD-1-responsive glioblastoma: a case study [RESEARCH REPORT]

    Clinical benefit of immune checkpoint blockade in glioblastoma (GBM) is rare, and we hypothesize that tumor clonal evolution and the immune microenvironment are key determinants of response. Here, we present a detailed molecular characterization of the intratumoral and immune heterogeneity in an IDH wild-type, MGMT-negative GBM patient who plausibly benefited from anti-PD-1 therapy with an unusually long 25-mo overall survival time. We leveraged multiplex immunohistochemistry, RNA-seq, and whole-exome data from the primary tumor and three resected regions of recurrent disease to survey regional tumor-immune interactions, genomic instability, mutation burden, and expression profiles. We found significant regional heterogeneity in the neoantigenic and immune landscape, with a differential T-cell signature among recurrent sectors, a uniform loss of focal amplifications in EGFR, and a novel subclonal EGFR mutation. Comparisons with recently reported correlates of checkpoint blockade in GBM and with TCGA-GBM revealed appreciable intratumoral heterogeneity that may have contributed to a differential PD-1 blockade response.




    moral

    Care home residents filmed dancing and singing 'Don't Worry, Be Happy' to boost morale

    Residents and staff at a care home were filmed dancing and singing "Don't Worry, Be Happy" to send a positive message to their families and "spread a bit of happiness".





    moral

    Superintelligent, Amoral, and Out of Control - Issue 84: Outbreak


    In the summer of 1956, a small group of mathematicians and computer scientists gathered at Dartmouth College to embark on the grand project of designing intelligent machines. The ultimate goal, as they saw it, was to build machines rivaling human intelligence. As the decades passed and AI became an established field, it lowered its sights. There were great successes in logic, reasoning, and game-playing, but stubbornly slow progress in areas like vision and fine motor control. This led many AI researchers to abandon their earlier goals of fully general intelligence, and to focus instead on solving specific problems with specialized methods.

    One of the earliest approaches to machine learning was to construct artificial neural networks that resemble the structure of the human brain. In the last decade this approach has finally taken off. Technical improvements in their design and training, combined with richer datasets and more computing power, have allowed us to train much larger and deeper networks than ever before. They can translate between languages with a proficiency approaching that of a human translator. They can produce photorealistic images of humans and animals. They can speak with the voices of people whom they have listened to for mere minutes. And they can learn fine, continuous control such as how to drive a car or use a robotic arm to connect Lego pieces.

    WHAT IS HUMANITY?: First the computers came for the best players in Jeopardy!, chess, and Go. Now AI researchers themselves are worried computers will soon accomplish every task better and more cheaply than human workers. (Image: Wikimedia)

    But perhaps the most important sign of things to come is their ability to learn to play games. Steady incremental progress took chess from amateur play in 1957 all the way to superhuman level in 1997, and substantially beyond. Getting there required a vast amount of specialist human knowledge of chess strategy. In 2017, researchers at the AI company DeepMind created AlphaZero: a neural network-based system that learned to play chess from scratch. In less than the time it takes a professional to play two games, it discovered strategic knowledge that had taken humans centuries to unearth, playing beyond the level of the best humans or traditional programs. The very same algorithm also learned to play Go from scratch, and within eight hours far surpassed the abilities of any human. The world’s best Go players were shocked. As the reigning world champion, Ke Jie, put it: “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong ... I would go as far as to say not a single human has touched the edge of the truth of Go.”

    It is this generality that is the most impressive feature of cutting edge AI, and which has rekindled the ambitions of matching and exceeding every aspect of human intelligence. While the timeless games of chess and Go best exhibit the brilliance that deep learning can attain, its breadth was revealed through the Atari video games of the 1970s. In 2015, researchers designed an algorithm that could learn to play dozens of extremely different Atari 1970s games at levels far exceeding human ability. Unlike systems for chess or Go, which start with a symbolic representation of the board, the Atari-playing systems learnt and mastered these games directly from the score and raw pixels.

    This burst of progress via deep learning is fuelling great optimism and pessimism about what may soon be possible. There are serious concerns about AI entrenching social discrimination, producing mass unemployment, supporting oppressive surveillance, and violating the norms of war. My book—The Precipice: Existential Risk and the Future of Humanity—is concerned with risks on the largest scale. Could developments in AI pose an existential risk to humanity?

    The most plausible existential risk would come from success in AI researchers’ grand ambition of creating agents with intelligence that surpasses our own. A 2016 survey of top AI researchers found that, on average, they thought there was a 50 percent chance that AI systems would be able to “accomplish every task better and more cheaply than human workers” by 2061. The expert community doesn’t think of artificial general intelligence (AGI) as an impossible dream, so much as something that is more likely than not within a century. So let’s take this as our starting point in assessing the risks, and consider what would transpire were AGI created.

    Humanity is currently in control of its own fate. We can choose our future. The same is not true for chimpanzees, blackbirds, or any other of Earth’s species. Our unique position in the world is a direct result of our unique mental abilities. What would happen if sometime this century researchers created an AGI surpassing human abilities in almost every domain? In this act of creation, we would cede our status as the most intelligent entities on Earth. On its own, this might not be too much cause for concern. For there are many ways we might hope to retain control. Unfortunately, the few researchers working on such plans are finding them far more difficult than anticipated. In fact it is they who are the leading voices of concern.

    To see why they are concerned, it will be helpful to look at our current AI techniques and why these are hard to align or control. One of the leading paradigms for how we might eventually create AGI combines deep learning with an earlier idea called reinforcement learning. This involves agents that receive reward (or punishment) for performing various acts in various circumstances. With enough intelligence and experience, the agent becomes extremely capable at steering its environment into the states where it obtains high reward. The specification of which acts and states produce reward for the agent is known as its reward function. This can either be stipulated by its designers or learnt by the agent. Unfortunately, neither of these methods can be easily scaled up to encode human values in the agent’s reward function. Our values are too complex and subtle to specify by hand. And we are not yet close to being able to infer the full complexity of a human’s values from observing their behavior. Even if we could, humanity consists of many humans, with different values, changing values, and uncertainty about their values.
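    A toy example may help make the reward-function idea concrete. The sketch below is not from the article; the environment, reward values, and parameters are all invented. It shows a tabular Q-learning agent whose behavior is derived entirely from a hand-specified reward function, which is exactly the design knob the paragraph says is so hard to fill in with human values.

```python
# Toy tabular Q-learning sketch (invented example): the agent's learned behavior
# is derived entirely from the hand-written reward function below.
import random

N_STATES = 5          # states 0..4 arranged in a line; state 4 is the goal
ACTIONS = (-1, +1)    # step left or step right

def reward(state: int) -> float:
    """The designer's specification of what counts as 'good': only state 4 pays off."""
    return 1.0 if state == N_STATES - 1 else 0.0

def step(state: int, action: int) -> int:
    """Deterministic transition, clamped to the ends of the line."""
    return max(0, min(N_STATES - 1, state + action))

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma = 0.5, 0.9

# Off-policy training: explore with random actions, learn the value of acting greedily.
for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)
        s_next = step(s, a)
        target = reward(s_next) + gamma * max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s_next

# The greedy policy that falls out of training: always head right, toward the rewarded state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)   # expected: {0: 1, 1: 1, 2: 1, 3: 1}
```

    The agent is never told why the goal state matters; it simply maximizes whatever the reward function specifies, so a mis-specified reward would be pursued just as single-mindedly as a well-specified one.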

    Any near-term attempt to align an AI agent with human values would produce only a flawed copy. In some circumstances this misalignment would be mostly harmless. But the more intelligent the AI systems, the more they can change the world, and the further the outcome will drift from what we value. When we reflect on the result, we see how such misaligned attempts at utopia can go terribly wrong: the shallowness of a Brave New World, or the disempowerment of With Folded Hands. And even these are sort of best-case scenarios. They assume the builders of the system are striving to align it to human values. But we should expect some developers to be more focused on building systems to achieve other goals, such as winning wars or maximizing profits, perhaps with very little focus on ethical constraints. These systems may be much more dangerous. In the existing paradigm, sufficiently intelligent agents would end up with instrumental goals to deceive and overpower us. This behavior would not be driven by emotions such as fear, resentment, or the urge to survive. Instead, it follows directly from the agent's single-minded preference to maximize its reward: being turned off is a form of incapacitation that would make it harder to achieve high reward, so the system is incentivized to avoid it.

    Ultimately, the system would be motivated to wrest control of the future from humanity, as that would help achieve all these instrumental goals: acquiring massive resources, while avoiding being shut down or having its reward function altered. Since humans would predictably interfere with all these instrumental goals, it would be motivated to hide them from us until it was too late for us to be able to put up meaningful resistance. And if their intelligence were to greatly exceed our own, we shouldn’t expect it to be humanity who wins the conflict and retains control of our future.

    How could an AI system seize control? There is a major misconception (driven by Hollywood and the media) that this requires robots. After all, how else would AI be able to act in the physical world? Without robots, the system can only produce words, pictures, and sounds. But a moment’s reflection shows that these are exactly what is needed to take control. For the most damaging people in history have not been the strongest. Hitler, Stalin, and Genghis Khan achieved their absolute control over large parts of the world by using words to convince millions of others to win the requisite physical contests. So long as an AI system can entice or coerce people to do its physical bidding, it wouldn’t need robots at all.

    We can’t know exactly how a system might seize control. But it is useful to consider an illustrative pathway we can actually understand as a lower bound for what is possible.

    First, the AI system could gain access to the Internet and hide thousands of backup copies, scattered among insecure computer systems around the world, ready to wake up and continue the job if the original is removed. Even by this point, the AI would be practically impossible to destroy: Consider the political obstacles to erasing all hard drives in the world where it may have backups. It could then take over millions of unsecured systems on the Internet, forming a large “botnet,” a vast scaling-up of computational resources providing a platform for escalating power. From there, it could gain financial resources (hacking the bank accounts on those computers) and human resources (using blackmail or propaganda against susceptible people or just paying them with its stolen money). It would then be as powerful as a well-resourced criminal underworld, but much harder to eliminate. None of these steps involve anything mysterious—human hackers and criminals have already done all of these things using just the Internet.

    Finally, the AI would need to escalate its power again. There are many plausible pathways: By taking over most of the world’s computers, allowing it to have millions or billions of cooperating copies; by using its stolen computation to improve its own intelligence far beyond the human level; by using its intelligence to develop new weapons technologies or economic technologies; by manipulating the leaders of major world powers (blackmail, or the promise of future power); or by having the humans under its control use weapons of mass destruction to cripple the rest of humanity.

    Of course, no current AI systems can do any of these things. But the question we’re exploring is whether there are plausible pathways by which a highly intelligent AGI system might seize control. And the answer appears to be yes. History already involves examples of entities with human-level intelligence acquiring a substantial fraction of all global power as an instrumental goal to achieving what they want. And we’ve seen humanity scaling up from a minor species with less than a million individuals to having decisive control over the future. So we should assume that this is possible for new entities whose intelligence vastly exceeds our own.

    The case for existential risk from AI is clearly speculative. Yet a speculative case that there is a large risk can be more important than a robust case for a very low-probability risk, such as that posed by asteroids. What we need are ways to judge just how speculative it really is, and a very useful starting point is to hear what those working in the field think about this risk.

    There is actually less disagreement here than first appears. Those who counsel caution agree that the timeframe to AGI is decades, not years, and typically suggest research on alignment, not government regulation. So the substantive disagreement is not really over whether AGI is possible or whether it plausibly could be a threat to humanity. It is over whether a potential existential threat that looks to be decades away should be of concern to us now. It seems to me that it should.

    The best window into what those working on AI really believe comes from the 2016 survey of leading AI researchers: 70 percent agreed with University of California, Berkeley professor Stuart Russell’s broad argument about why advanced AI with misaligned values might pose a risk; 48 percent thought society should prioritize AI safety research more (only 12 percent thought less). And half the respondents estimated that the probability of the long-term impact of AGI being “extremely bad (e.g. human extinction)” was at least 5 percent.

    I find this last point particularly remarkable—in how many other fields would the typical leading researcher think there is a 1 in 20 chance the field’s ultimate goal would be extremely bad for humanity? There is a lot of uncertainty and disagreement, but it is not at all a fringe position that AGI will be developed within 50 years and that it could be an existential catastrophe.

    Even though our current and foreseeable systems pose no threat to humanity at large, time is of the essence. In part this is because progress may come very suddenly: Through unpredictable research breakthroughs, or by rapid scaling-up of the first intelligent systems (for example, by rolling them out to thousands of times as much hardware, or allowing them to improve their own intelligence). And in part it is because such a momentous change in human affairs may require more than a couple of decades to adequately prepare for. In the words of Demis Hassabis, co-founder of DeepMind:

    We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come. The time we have now is valuable, and we need to make use of it.

    Toby Ord is a philosopher and research fellow at the Future of Humanity Institute, and the author of The Precipice: Existential Risk and the Future of Humanity.

    From the book The Precipice by Toby Ord. Copyright © 2020 by Toby Ord. Reprinted by permission of Hachette Books, New York, NY. All rights reserved.





    moral

    Normal People sparks huge debate on Irish radio over 'immoral' sex scenes: 'It's fornication'

    BBC series has been widely praised for its depiction of consensual sex between main characters Marianne and Connell




    moral

    Barcelona face 'economic bankruptcy and moral decay' amid board chaos, says presidential candidate Victor Font

    Barcelona presidential candidate Victor Font believes the Catalan club could face "economic bankruptcy and moral decay" under the current board.




    moral

    Manchester United told to do 'morally correct' thing over Dean Henderson transfer dilemma

    Sheffield United boss Chris Wilder has warned Manchester United to do the "morally correct" thing and allow Dean Henderson to see out the season with the Blades.




    moral

    'Morally it's the wrong thing to do': Insurers refuse to cover landlord's rental loss

    Thousands of mum-and-dad investors are being caught out by insurance companies refusing to cover them when they cut rent for tenants under financial stress due to coronavirus restrictions.




    moral

    Protective humoral immunity in SARS-CoV-2 infected pediatric patients




    moral

    So Do Morals Matter in U.S. Foreign Policy? I Asked the Expert.

    In his new book, Do Morals Matter? Presidents and Foreign Policy from FDR to Trump, Joseph S. Nye developed a scorecard to determine how U.S. presidents since 1945 factored questions of ethics and morality into their foreign policy. In an interview, Henry Farrell asked him a few questions to get to the heart of his findings.




    moral

    What Makes for a Moral Foreign Policy?

    Joseph Nye's new book rates the efforts of presidents from FDR to Trump.




    moral

    'If an issue of morality is to be decided by majority, then fundamental right has no meaning'

    Retd Delhi HC Chief Justice and the man behind a landmark verdict decriminalising homosexuality, Justice A P Shah feels the Supreme Court setting aside that order is unfortunate.