
N.L. archive collecting stories, art from ongoing coronavirus outbreak and past pandemics

The Rooms is eager to document how people are coping with the current pandemic to build a record for the future.





Keir Starmer accuses Boris Johnson of 'slow' response to coronavirus outbreak as he demands twice as many tests






The Pandemic Can’t Lock Down Nature - Issue 84: Outbreak


Needing to clear my head, I went down to the Penobscot River. There they were, swimming with the mergansers, following an early pulse of river herring to the mouth of Kenduskeag stream: two harbor seals, raising sleek round heads for a few long breaths before rolling under the waves.

Evidently it’s not uncommon for seals to swim the couple dozen miles between Bangor, Maine, and the Atlantic Ocean, but I’d never seen them here before. They were a balm to my buzzing thoughts: What happens next? Will I become a vector of death to my elderly mother? Is the economy going to implode? For a precious few minutes there were only the seals and mergansers and the fish who drew them there, arriving as the Penobscot’s winter icepack broke and flowed to sea, a ritual enacted ever since glaciers retreated from this continental shelf.

In the months ahead we can look to nature for these respites. The nonhuman world is free of charge; sunlight is a disinfectant, physical distance easily maintained, and no pandemic can suspend it. Nature offers not just escape but reassurance.


In 1946, in the aftermath of World War II, with the Nazi threat vanquished but the Cold War looming, George Orwell welcomed spring’s arrival in London’s bombed-out heart. “After the sorts of winters we have had to endure recently, the spring does seem miraculous, because it has become gradually harder and harder to believe that it is actually going to happen,” he wrote in “Some Thoughts on the Common Toad.” “Every February since 1940 I have found myself thinking that this time Winter is going to be permanent. But Persephone, like the toads, always rises from the dead at about the same moment.”

So she does. And so the slumbering earth warms to life. Two nights before the seals, two nights before the World Health Organization declared a pandemic, before the NBA shut down with teams on the floor and fans in the seats, before the fright went beyond viral into logarithmic, was the Worm Moon: the full moon named for the imminent stir of earthworms in thawing soil.

In burrows beneath leaf litter, hibernating toads prepare to open what Orwell called “the most beautiful eye of any living creature,” resembling “the golden-colored semi-precious stone which one sometimes sees in signet rings, and which I think is called a chrysoberyl.” Nearly as beautiful are the eyes of painted turtles waiting on pond bottoms here in eastern Maine, the ice above now retreating from shore, mallard couples dabbling in newly open water.

The birds are the surest sign of spring’s imminence. Downtown, the house finches are holding daily concerts. Starlings are starting to replace their gold-streaked winter plumes with more iridescent garb. In the street today I saw two male mockingbirds joust above the pavement, their white wing-bars fluttering territorial semaphores, abandoning the contest only when a car nearly ran them down.

There are many quieter signs, too: pale tips of shrubs poised to grow, a spider rappelling off a low branch, fresh fox scat in the driveway. It’s red from apples preserved under snow and lined with the fur of field mice and meadow voles whose secret winter tunnels are now revealed in the grass. Somewhere soon mother fox will give birth, nursing her blind hairless charges in underground peace.

Eastern comma butterflies will gather on the trunks of those apple trees and sip their rising sap. Not long after, the first orange-belted bumblebee queens will appear, inspecting potential nest sites under fallen leaves and decomposing logs. Warm rainy nights will bring salamanders and newts, just a few spotted glistening inches long, some of them decades old, out from woodland hidey-holes and down ancient paths to vernal pool bacchanals held amidst a chorus of spring peepers. Woodland ephemerals will bloom in sunshine unfiltered by still-bare treetops. My favorites are trout lilies, colonies of which illuminate forest floors with a sea of bright yellow blossoms, petals falling once the canopy unfurls.

“The atom bombs are piling up in the factories, the police are prowling through the cities, the lies are streaming from the loudspeakers,” Orwell wrote, “but the earth is still going round the sun.”

At this point there’s no end of studies showing how nature is good for our health, how patients recover faster in hospital rooms with windows overlooking trees, how a mindful walk in the woods will lower stress and raise moods. All true, but at this moment something deeper and more urgent is offered. An affirmation of life.

Will the nightmare scenes out of Italy and Spain and now New York City spread across the land? How long will the pandemic last? Will it completely rend our already tattered social fabric? When can I again play hockey or go to a coffee shop or use a credit card machine without feeling like I’m risking my own and other lives? Who will die? Nobody knows for sure, but in a few weeks the swallows will arrive, and tonight above the fields at dusk I heard the cries of woodcock.

Woodcock are secretive, ground-dwelling birds with limpid black eyes and long, slender beaks attuned to the frequencies of earthworm-rustles; their feathers blend perfectly with leaf litter and old grass. They rely on this camouflage, going still rather than fleeing a walker’s approach, taking wing only as a last resort.

When they do, their flight is notable for its slowness and the quavering whistle of their wings. At no other time than in spring do they dare draw attention, much less put on a show: calling out, with an urgent nasal buzz best described as a peent, and flying straight upward before spiraling against a darkening sky.

Brandon Keim is a freelance nature and science journalist. The author of The Eye of the Sandpiper: Stories from the Living World, he’s now writing Meet the Neighbors, forthcoming from W.W. Norton & Company, about what it means to think of wild animals as fellow persons—and what that means for the future of nature.

Lead image: Tim Zurowski / Shutterstock







How a Nuclear Submarine Officer Learned to Live in Tight Quarters - Issue 84: Outbreak


I’m no stranger to forced isolation. For the better part of my 20s, I served as a nuclear submarine officer running secret missions for the United States Navy. I deployed across the vast Pacific Ocean with a hundred other sailors on the USS Connecticut, a Seawolf-class ship engineered in the bygone Cold War era to be one of the fastest, quietest, and deepest-diving submersibles ever constructed. The advanced reactor was loaded with decades of enriched uranium fuel that made steam for propulsion and electrical power so we could disappear under the waves indefinitely without returning to port. My longest stint was two months, when I traveled under the polar ice cap to the North Pole with a team of scientists studying the Arctic environment and testing high-frequency sonar and acoustic communications for under-ice operations. During deployments, critical life events occur without you: holidays with loved ones, the birth of a child, or in my case, the New York Giants’ 2011-2012 playoff run to beat Tom Brady’s Patriots in the Super Bowl for the second time. On the bright side, being cut off from the outside world was a great first job for an introvert.

It’s been a month since COVID-19 involuntarily drafted me into another period of isolation far away from home. I’m in Turkey, where a two-week trip with my partner to meet her family has been extended indefinitely. There were no reported cases here and only a few in California in early March when we left San Francisco, where I run a business design studio. I had a lot of anticipation about Turkey because I’d never been here. Now I’m sheltering in a coastal town outside of Izmir with my partner, her parents, their seven cats, and a new puppy.

Shuttered in a house on foreign soil where I don’t speak the language, I have found myself snapping back into submarine deployment mode. Each day I dutifully monitor online dashboards of data and report the status of the spread at the breakfast table to no one in particular. I stay in touch with friends and family all over the world who tell me they’re going stir crazy and their homes are getting claustrophobic. But if there is one thing my experience as a submarine officer taught me, it’s that you get comfortable being uncomfortable.

OFFICER OF THE DECK: Author Steve Weiner in 2011, on the USS Connecticut, a nuclear submarine. Weiner was the ship’s navigator. Submarine and crew, with a team of scientists, were deployed in the Arctic Ocean, studying the Arctic environment and testing high-frequency sonar and acoustic communications for under-ice operations. Courtesy of Steve Weiner

My training began with psychological testing, although it may not be what you think. Evaluating mental readiness for underwater isolation isn’t conducted in a laboratory by clipboard-toting, spectacled scientists. The process to select officers was created by Admiral Hyman Rickover—the engineering visionary and noted madman who put the first nuclear reactor in a submarine—to assess both technical acumen and composure under stress. For three decades as the director of the Navy’s nuclear propulsion program, Rickover personally interviewed every officer, and the recruiting folklore is a true HR nightmare: locking candidates in closets for hours, making obtuse demands such as “Do something to make me mad,” and sawing down chair legs to literally keep a candidate off balance.

Rickover retired from the Navy as its longest-serving officer and his successors carried on the tradition of screening each officer candidate, but with a slightly more dignified approach. Rickover’s ghost, though, seemed to preside over my interview process when I applied to be a submariner as a junior at the U.S. Naval Academy in Annapolis, Maryland. I was warned by other midshipmen that I would fail on the spot if I initiated a handshake. So, dressed in my formal navy blue uniform and doing my best to avoid tripping into accidental human contact, I rigidly marched into the Admiral’s office, staring straight ahead while barking my resume. When I took a seat on the unaltered and perfectly level chair in front of his desk, the Admiral asked me bluntly why I took so many philosophy classes and if I thought I could handle the technical rigors of nuclear power school. My response was a rote quip from John Paul Jones’ “Qualifications of a Naval Officer.” “Admiral, an officer should be a gentleman of liberal education, refined manners, punctilious courtesy, and the nicest sense of personal honor.” My future boss looked at me, shook his head like he thought I’d be a handful, and told me I got the job.


Nuclear power training is an academic kick in the face every day for over a year. The curriculum is highly technical and the pedagogy resembles a cyborg assembly-line without even a hint of the Socratic method. Our grades were conspicuously posted on the classroom wall and a line was drawn between those who passed and those who failed. I was below the line enough to earn the distinguished dishonor of 25 additional study hours each week, which meant I was at school at 5 a.m. and every weekend. This is how the Nuclear Navy builds the appropriate level of knowledge and right temperament to deal with shipboard reactor operations.

I finally sat down for a formal psychological evaluation a few months before my first deployment. I was ushered into a room no bigger than a broom closet and instructed to click through a computer-based questionnaire with multiple-choice questions about my emotions. I never did learn the results, so I assume my responses didn’t raise too many red flags.

During my first year onboard, I spent all my waking hours either supervising reactor operations or learning the intricacies of every inch of the 350-foot tube and the science behind how it all worked. The electrolysis machine that split water molecules to generate oxygen was almost always out of commission, so instead we burned chlorate candles that produced breathable air. Seawater was distilled each day for drinking and shower water. Our satellite communications link had less bandwidth than my dial-up modem in the 1990s and we were permitted to send text-only emails to friends and family at certain times and in certain locations so as not to risk being detected. I took tests every month to demonstrate proficiency in nuclear engineering, navigation, and the battle capabilities of the ship. When I earned my submarine warfare qualification, the Captain pinned the gold dolphins insignia on my uniform and gave me the proverbial keys to the $4 billion warship. At that point, I was responsible for coordinating missions and navigating the ship as the Officer of the Deck.

Modern submarines are hydrodynamically shaped for the most efficient laminar flow underwater, so that’s where we operated 99 percent of the time. The rare exception to being submerged was when we’d go in and out of port. The most unfortunate times were long transits tossing about in heavy swells, which made for a particularly nauseating cruise. To this day, conjuring the memory of those sails causes a reflux flashback. A submariner’s true comfort zone is beneath the waves, so as soon as we broke ties with the pier we navigated toward water that was deep enough for us to dive.

It’s unnatural to stuff humans, torpedoes, and a nuclear reactor into a steel boat that’s intentionally meant to sink. The result is an engineering marvel that ranks among the most complex machines ever built, and before we’d proceed below and subject the ship and its inhabitants to extreme sea pressures, the officers would visually inspect thousands of valves to verify the proper lineup of systems that would propel us to the surface if we started flooding uncontrollably and sinking—a no-mistakes procedure called rigging for dive. Once we’d slipped beneath the waves, the entire crew would walk around to check for leaks before we’d settle into a rotation of standing watch, practicing our casualty drills, engineering training, eating, showering (sometimes), and sleeping (rarely). The full cycle was 18 hours, which meant our circadian rhythms were constantly shifting. Regardless of the amount of government-issued Folger’s coffee I’d pour down my throat, I’d pass out upon immediate contact with my rack (the colloquialism for a submarine bunk, in which your modicum of privacy was symbolized by a cloth curtain).

As an officer, I lived luxuriously with only two other grown men in a stateroom no bigger than a walk-in closet. Most of the crew slept stacked like lumber in an 18-person bunk room and they all took turns in the rack. This alternative lifestyle is known as hot-racking, because of the sensation you get when you crawl into bedding that’s been recently occupied. The bunk rooms are sanctuaries where silence is observed with monastic intensity. Slamming the door or setting an alarm clock was a cardinal sin, so wakeups were conducted by a junior sailor who gently coaxed you awake when it was time to stand watch. Lieutenant Weiner, it’s time to wake up. You’ve got the midnight watch, sir. Words that haunt my dreams.


I maintained some semblance of sanity and physical fitness by sneaking a workout on a rowing erg in the engine room or a stationary bike squeezed between electronics cabinets. The rhythmic beating of footsteps on a treadmill was a noise offender—the sound could be detected on sonar from miles away—so we shut it off unless we were in friendly waters where we weren’t concerned with counter-detection.

Like a heavily watered-down version of a Buddhist monk’s solitary retreat in a cave, my extended submarine confinements opened something up in my psyche, and I gave myself permission to let go of my anxieties. Transiting underneath a vast ocean in a vessel with a few inches of steel preventing us from drowning helps put things into perspective. Now that I’m out of the Navy, I have more appreciation for the freedoms of personal choice, a fresh piece of fruit, and 24 hours in a day. My only regrets are not keeping a journal and not having the wherewithal to discover the practice of meditation under the sea.

Today, I’m learning Turkish so I can understand more about what’s happening around me. I’m doing Kundalini yoga (a moving meditation that focuses on breathwork) and running on the treadmill (since I’m no longer concerned about my footsteps being detected on sonar). On my submarine, I looked at photos to stay connected to the world I left behind, knowing that I’d return soon enough. Now our friend who is isolating in our apartment in San Francisco sends us pictures of our cat and gives us reports about how the neighborhood has changed.

It’s hard to imagine that we’ll resume our lifestyles exactly as they were. But the submariner in me is optimistic that we have it in us to adapt to whatever conditions are waiting for us when it’s safe to ascend from the depths and return to the surface.

Steve Weiner is the founder of Very Scarce, a business design studio. He used to lead portfolio companies at Expa and drive nuclear submarines in the U.S. Navy. He has an MBA from The Wharton School and a BS from the U.S. Naval Academy. Instagram: @steve Twitter: @weenpeace

Lead image: Mike H. / Shutterstock







The Meme as Meme - Issue 84: Outbreak


This article from our 2013 issue, “Fame,” offers a look at the way information—whether it’s true or not—spreads across the Internet.

On April 11, 2012, Zeddie Little appeared on Good Morning America, wearing the radiant, slightly perplexed smile of one enjoying instant fame. About a week earlier, Little had been a normal, if handsome, 25-year-old trying to make it in public relations. Then on March 31, he was photographed amid a crowd of runners in a South Carolina race by a stranger, Will King, who posted the image to a social networking website, Reddit. Dubbed “Ridiculously Photogenic Guy,” Little’s picture circulated on Facebook, Twitter, and Tumblr, accruing likes, comments, and captions (“Picture gets put up as employee of the month/for a company he doesn’t work for”). It spawned spinoffs (Ridiculously Photogenic Dog, Prisoner, and Syrian Rebel) and leapt to the mainstream media. At a high point, ABC Morning News reported that a Google search for “Zeddie Little” yielded 59 million hits.

Why the sudden fame? The truth is that Little hadn’t become famous: His meme had. According to website Know Your Meme, which documents viral Internet phenomena, a meme is “a piece of content or an idea that’s passed from person to person, changing and evolving along the way.” Ridiculously Photogenic Guy is a kind of Internet meme represented by LOL cats: that is, a photograph, video, or cartoon, often overlaid with a snarky message, perfect for incubating in the bored, fertile minds of cubicle workers and college students. In an age where politicians campaign through social media and viral marketers ponder the appeal of sneezing baby pandas, memes are more important than ever—however trivial they may seem.

But trawling the Internet, I found a strange paradox: While memes were everywhere, serious meme theory was almost nowhere. Richard Dawkins, the famous evolutionary biologist who coined the word “meme” in his classic 1976 book, The Selfish Gene, seemed bent on disowning the Internet variety, calling it a “hijacking” of the original term. The peer-reviewed Journal of Memetics folded in 2005. “The term has moved away from its theoretical beginnings, and a lot of people don’t know or care about its theoretical use,” philosopher and meme theorist Daniel Dennett told me. What has happened to the idea of the meme, and what does that evolution reveal about its usefulness as a concept?


Memes were originally framed in relationship to genes. In The Selfish Gene, Dawkins claimed that humans are “survival machines” for our genes, the replicating molecules that emerged from the primordial soup and that, through mutation and natural selection, evolved to generate beings that were more effective as carriers and propagators of genes. Still, Dawkins explained, genes could not account for all of human behavior, particularly the evolution of cultures. So he identified a second replicator, a “unit of cultural transmission” that he believed was “leaping from brain to brain” through imitation. He named these units “memes,” an adaptation of the Greek word mimeme, “something imitated.”

Dawkins’ memes include everything from ideas, songs, and religious ideals to pottery fads. Like genes, memes mutate and evolve, competing for a limited resource—namely, our attention. Memes are, in Dawkins’ view, viruses of the mind—infectious. The successful ones grow exponentially, like a super flu. While memes are sometimes malignant (hellfire and faith, for atheist Dawkins), sometimes benign (catchy songs), and sometimes terrible for our genes (abstinence), memes do not have conscious motives. But still, he claims, memes parasitize us and drive us.

Pinpointing when memes first made the leap to the Internet is tricky. Nowadays, we might think of the dancing baby, also known as Baby Cha-Cha, that grooved into our inboxes in the 1990s. It was a kind of proto-meme, but no one called it that at the time. The first reference I could find to an “Internet meme” appeared in a footnote in a 2003 academic article, describing an important event in the life of Jonah Peretti, co-founder of the hugely successful websites The Huffington Post and BuzzFeed. In 2001, as a procrastinating graduate student at MIT, Peretti decided to order a pair of Nike sneakers customized to read “sweatshop.” Nike refused. Peretti forwarded the email exchange to friends, who sent it on and on, until the story leapt to the mainstream media, where Peretti debated a Nike representative on NBC’s Today Show. Peretti later wrote, “Without really trying, I had released what biologist Richard Dawkins calls a meme.”

Peretti concluded that the email chain had spread exponentially “because it had access to such a wide range of different social networks.” Like Dawkins, he saw that a meme’s success depends on other memes, its ecosystem—and further saw that Internet memes’ ecosystems were online social networks, years before Facebook existed. According to a recent profile in New York Magazine, the Nike experience was formative for Peretti, who created BuzzFeed with the explicit goal of creating viral Internet memes. The company uses a formula called “Big Seed Marketing,” which begins with an equation describing the growth of a virus, the spread of a disease.
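
The article doesn’t reproduce that equation, but the standard branching-process arithmetic such viral models start from is easy to sketch. What follows is an illustration of that generic model, not BuzzFeed’s actual proprietary formula:

```latex
% Illustrative sketch, not BuzzFeed's proprietary formula: seed a message
% with N people, and let each recipient pass it along to R others on
% average. For R < 1 the expected total audience is the geometric series
\[
  n \;=\; N\left(1 + R + R^{2} + R^{3} + \cdots\right) \;=\; \frac{N}{1-R},
\]
% so a large paid seed N can buy substantial reach even when the message
% is not strictly viral (R < 1); as R approaches 1, reach explodes.
```

The "big seed" idea in this sketch is that a campaign need not achieve true virality: even with R just below 1, a large enough initial seed multiplies into an audience several times its size.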

From the perspective of serious meme theorists, Internet memes have trivialized and distorted the spirit of the idea. Dennett told me that, in a planned workshop to be held in May 2014, he hopes to “rehabilitate the term in a very precise kind of way” for studying cultural evolution.

According to Dawkins, what sets Internet memes apart is how they are created. “Instead of mutating by random chance before spreading by a form of Darwinian selection, Internet memes are altered deliberately by human creativity,” he explained in a recent video released by the advertising agency Saatchi & Saatchi. He seems to think that the fact that Internet memes are engineered to go viral, rather than evolving by way of natural selection, is a salient difference that distinguishes them from other memes—which is arguable, since what catches fire on the Internet can be as much a product of luck as of any unexpected mutation.


But if the concept of memes can really offer new insight into the intricate web of digital culture and cultural evolution more broadly, why have academics neglected it? Looking for answers, I called Susan Blackmore, a British professor who may be one of the last defenders of memetics as a scientific field. In a 2008 TED talk, Blackmore is an animated speaker, bright-eyed and wiry, her short grey hair dyed with streaks of blue. I reached her at her home in Devon, England, where she is occasionally joined in the garden by Dawkins and Dennett for meetings of the “meme lab.” “It’s only a bit of fun, nothing serious,” Blackmore said. Sometimes, members try experiments, like folding Chinese sailing ships from origami, itself a kind of meme. She remembered a March meeting in which the issue of Internet memes arose, saying, “Richard was upset because he invented the term, which shouldn’t just be about viral Internet memes. It’s a very powerful concept for understanding why humans are the way we are.”

For Blackmore, memetics is a science. An Oxford-educated psychologist, she began her career studying telepathy, which she spent years investigating after an out-of-body experience at the age of 19. She subsequently found no evidence for the existence of paranormal phenomena, but she was no stranger to pushing scientific frontiers. It is perhaps unsurprising that she decided to flesh out memetics. Dawkins wrote that, with memes, he did not intend to “sculpt a grand theory of human culture.” In her 1999 book, The Meme Machine, Blackmore does just that. She argues that everything from the development of language to our big brains was a product of “memetic drive.” This is perhaps her most radical claim: that memes make us do things.

Considering this idea in his book Consciousness Explained, Dennett writes, “I don’t know about you, but I’m not initially attracted by the idea of my brain as a sort of dung heap in which the larvae of other peoples’ ideas renew themselves… who’s in charge, according to this vision—we or our memes?” Still, Dennett, too, became a major proponent of meme theory. Speaking on the phone, he used memes to explain the joy we take in our culture and related decisions not to procreate wildly. College, he pointed out, is a great underminer of genetic fitness. Reading Blackmore and Dennett, the idea of meme as mental parasite becomes both more and less convincing: If we are created and driven by our memes, then we are our memes, a duality that Dennett himself seems to recognize.


Yet, the very breadth of the concept makes it difficult to approach memes from the perspective of serious, observation-based science. In the analogy to genes, memes have inevitably disappointed. As Dawkins himself wrote, memes, as entities, are more vague than genes, where alleles compete to hold the same “chromosomal slots.” Unlike genes, memes are not directly observable and have high rates of mutation. Also, no one seems to be sure if memes exist. On the phone, Blackmore told me “the one good reason” memetics might not be a science: “There has been no example of where some scientific discovery has been made using meme theory, that couldn’t have been made any other way.” Still, Blackmore told me that people are doing research on memes—they just don’t call them by that name.

Looking for meme theory at work, I found network theory, an interdisciplinary field that unites computer science, statistics, physics, ecology, and even marketing. “If you want to use memetics to explain ‘everything,’ like how religion spreads, the problem is the data,” said Michele Coscia, a researcher at the Harvard Kennedy School, who recently wrote a paper displaying a statistical “decision tree” that described the success of memes like Ridiculously Photogenic Guy. For Coscia, Internet memes, with their visible mutations and view counts, solved the problem of empirical evidence, allowing him to do work he sees as analogous to genetics experiments.

Perhaps the notion of the meme is evolving in the direction of its own survival. The term “Internet meme” appears to be growing exponentially from year to year, in classical memetic fashion. This is what Bob Scott, a digital humanities librarian at Columbia University, found when he ran various searches on the comprehensive news and wire-service aggregator LexisNexis. He saw that the term “Internet meme” showed up with the new millennium and really took off in 2004, with references roughly doubling each year thereafter. 

Infectious Internet memes are now big business. BuzzFeed draws 85 million unique visitors a month, compared to The New York Times’ website at 29 million, and was recently valued at $200 million. Its staff trawl the Internet for viral content and curate it, adding news stories, humor pieces, and advertisements, or “sponsored posts.” These categories can be hard to disentangle, even though ads are printed on a taupe background. Scrolling through BuzzFeed, I read “20 People We Hope to Never See Promoted on OK Cupid” (which was an ad by Virgin Mobile), a news story about poisoned Indian children, and a post about a Republican Congressman who had “live tweeted” Jay-Z’s new album. It turned out that “23 Times When Wal-Mart Didn’t Disappoint” was not an ad, but still, the post made me think about how subversive humor—the kind that made Peretti’s email exchange with Nike so popular—could be used to advertise one of America’s least-subversive mega-chains.

While entertaining bored office workers seems harmless enough, there is something troubling about a multi-million dollar company using our minds as petri dishes in which to grow its ideas. I began to wonder if Dawkins was right—if the term meme is really being hijacked, rather than mindlessly evolving like bacteria. The idea of memes “forces you to recognize that we humans are not entirely the center of the universe where information is concerned—we’re vehicles and not necessarily in charge,” said James Gleick, author of The Information: A History, A Theory, A Flood, when I spoke to him on the phone. “It’s a humbling thing.”

It is more humbling still to think that our minds can be seduced not through the agency of memes, as Blackmore sees it, but through human agency and clever algorithms. Not by religions or quirks of culture, but by a never-ending list of stories that make us laugh. Even if the meme meme is too broad for empirical study, it offers us a powerful metaphor for how we absorb other peoples’ ideas, and how they absorb us. So maybe this is what meme theory can ultimately give us: the insight we need to put LOL cats aside—and get down to work.

Abby Rabinowitz has written for The New York Times and teaches writing at Columbia University.







Friendship Is a Lifesaver - Issue 84: Outbreak


My mother-in-law, Carol, lives alone. It was her 75th birthday the other day. Normally, I send flowers. Normally, she spends some part of the day with the family members who live nearby and not across the country as my husband, Mark, and I do. And normally, she makes plans to celebrate with a friend. But these are not normal times. I was worried about sending a flower delivery person. Social distancing means no visiting with friends or family, no matter how close they are. So, my sister-in-law dropped off a gift and Mark and I sang “Happy Birthday” down the phone line with our kids. But I could hear the loneliness in Carol’s voice.

This was hardly the worst thing anyone experienced in America on that particular April day. We are fortunate that Carol is healthy and safe. But it upset me anyway. People over 60 are more vulnerable to COVID-19 than anyone else. They are also vulnerable to loneliness, especially when they live alone. By forcing us all into social isolation, one public health crisis—the coronavirus—is shining a bright light on another, loneliness. It will be some time before we have a vaccine for the coronavirus. But the antidote to loneliness is accessible to all of us: friendship.


All too often we fail to appreciate what we have until it’s gone. And this shared global moment has illuminated how significant friends are to day-to-day happiness. Science has been accumulating evidence that friendship isn’t just critical for our happiness but our health and longevity. Its presence or absence matters at every point in life, but the cumulative effects of either show up most starkly in the later stages of life. That is also the moment when demographics and health concerns can conspire to make friendships harder to find or sustain. As the world hits pause, it’s worth reminding ourselves why friendship is more important now than ever.

Friendship has long been understood to be valuable and pleasurable. Ancient Greek philosophers enjoyed debating its virtues, in the company of friends. But friendship has largely been considered a cultural phenomenon, a pleasant by-product of the human capacity for language and living in groups. In the 1970s and 1980s, a handful of epidemiologists and sociologists began to establish a link between social relationships and health. They showed that those who were more socially isolated were more likely to die over the course of the studies. In 2015, a meta-analysis of more than 3 million people whose average age was 66 showed that social isolation and loneliness increased the risk of early mortality by up to 30 percent.1 Yet loneliness and social isolation are not the same thing. Social isolation is an objective measure of the number and extent of social contact a person has day to day. Loneliness is a subjective feeling of mismatch between how much social connection you want and how much you have.

Once the link between health and relationships was established in humans, it was noticed in other species as well. Primatologists studying baboons in Africa remarked that when female baboons lost their primary grooming partners to lions or drought, they worked to build bonds with other animals in place of the one they’d lost. When the researchers analyzed the social behavior of the animals and their outcomes over generations, they found in multiple studies that the animals with the strongest social networks live longer and have more and healthier babies than those that are more isolated.2 Natural selection has resulted in survival of the friendliest.

Since baboons don’t drive each other to the hospital, something deeper than social support must be at work. Friendship is getting “under the skin,” as biologists say. Some of the mechanisms by which it works have yet to be explained, but studies have demonstrated that social connection improves cardiovascular functioning, reduces susceptibility to inflammation and viral disease, sharpens cognition, reduces depression, lowers stress, and even slows biological aging.3

We also now have a clearer definition of what friendship is. Evolutionary biologists concluded that friendship in monkeys—as well as people—required at least three things: it had to be long-lasting, positive, and cooperative. When an anthropologist looked for consistent definitions of friendship across cultures, he found something similar. Friendships were described as positive, and they nearly always included a willingness to help, especially in times of crisis. What friendship is about, at the end of the day, is creating intensely bonded groups that act as protection against life’s stresses.4


That buffering effect is particularly powerful as we age. Those first epidemiology studies focused on people in the middle of life. In 1987, epidemiologist Teresa Seeman of the University of California, Los Angeles, wondered if age and type of relationship mattered for health.5 She found that for those under 60, whether or not they were married mattered most. Being unmarried in midlife put people at greater risk of dying earlier than normal. But that did not turn out to be true for the oldest groups. For those over 60, close ties with friends and relatives mattered more than having a spouse. “That was a real lightbulb that went on,” Seeman says.

In a 2016 study, researchers at the University of North Carolina found that in both adolescence and old age, having friends was associated with a lower risk of physiological problems.6 The more friends you had, the lower the risk. By contrast, adults in middle age were less affected by variation in how socially connected they were. But the quality of their social relationships—whether friendships provided support or added strain—mattered more. Valuing friendship also proved increasingly important with age in a 2017 study by William Chopik of Michigan State University. He surveyed more than 270,000 adults from 15 to 99 years of age and found that those who valued friendship as much as family had higher levels of health, happiness, and subjective well-being across the lifespan. The effects were especially strong in those over 65. As you get older, friendships become more important, not less; whether you’re married is relatively less significant.7

There’s a widespread sense, especially among younger people, that people are lonely post-retirement. The truth is more complicated. Social networks do get smaller later in life for a variety of reasons. In retirement, people lose regular interaction with colleagues. Most diseases, and the probability of getting them, worsen with age. It’s more likely you will lose a spouse. Friends start to die as well. Mental and physical capacities may diminish, and social lives may be limited by hearing loss or reduced mobility.

Yet some of this social-narrowing is intentional. If time is of the essence, the motivation to derive emotional meaning from life increases, says Laura Carstensen, director of the Stanford Center for Longevity. She found that people choose to spend time with those they really care about. They emphasize quality of relationships over quantity. While family members fill much of a person’s inner social circle, friends are there, too, and regularly fill in in the absence of family. A related, more optimistic perspective on retirement is that with fewer professional and family obligations, there are more hours for the things we want to do and the people with whom we want to do them.

At all stages of life, how we do friendship—whether we focus on one or two close friends or socialize more widely—has to do with our natural levels of sociability and motivation. Those vary, of course. I recently spoke with a man who had retired to Las Vegas. When he and his wife moved to their new house, his wife began baking cookies and distributing them to neighbors. She started throwing block parties for silly holidays and those neighbors showed up. No one had bothered to organize such a thing before. Even in retirement, this woman is what psychologists call a “social broker”—someone who brings people together. She has most likely always been friendly.


How you live your life before you reach 60 makes a difference, experts on aging say. Friendship is a lifelong endeavor, but not everyone treats it that way. Think of relationships the way we do smoking, says epidemiologist Lisa Berkman of Harvard University. “If you start smoking when you’re 14, and stop smoking when you’re 65, in many ways, the damage is done,” she says. “It’s not undoable. Stopping makes some things better. It’s worth doing but it’s very late in the game.” Similarly, if you only focus on friendships when your family and professional obligations slow, you will be at a disadvantage. Damage will have been done. The payoff in making friendship a priority was borne out in the long-running Harvard Study of Adult Development, which followed more than 700 men for the entire course of their lives. What best predicted how healthy those men were at 80 wasn’t middle-aged cholesterol levels, it was how satisfied they were in their relationships at 50.8

Fortunately, it is possible to make new friends at every stage of life. In Los Angeles, I met a group of 70-something women who bonded as volunteers for Generation Xchange, an educational and community health nonprofit. The program places older adults in early elementary classrooms as teachers’ aides for a school year. As a result of the extra adult attention in class, the children’s reading scores have gone up and behavioral problems have gone down. The volunteers’ health has improved—they’ve lost weight and lowered their blood pressure and cholesterol. But they have also become friends, which is just what UCLA’s Seeman had in mind when she started the program. “One of the reasons our program may be successful is that we are motivating them to get engaged through their joint interest in helping the kids,” Seeman says. “It takes the pressure off of making friends. You can start getting to know each other in the context of the school and our team. Hopefully, the friendships can grow out of that.”

Concerns about loneliness among the elderly are well-founded. Demographics are not working in favor of the fight against loneliness. By 2035, older adults are projected to outnumber children for the first time in American history. Because of drops in marriage and childbearing, more of those older adults will be unmarried and childless than ever before. The percentage of older adults living alone rose steadily through the 20th century, and now hovers at 27 percent. And a digital divide still exists between older adults and their children and grandchildren, according to recent studies. That means older adults are less able to use virtual technology like Zoom to stay connected during the COVID-19 pandemic—though some are learning. Laura Fisher, a personal trainer in New York City, found that putting her business online meant training older clients one-on-one over videoconference. She now works out with one of her young clients in New York City and her client’s grandmother in Israel. Generally, older adults who use social media report more support from both their grown children and their friends. “For older people, social media is a real avenue of connection, of relational well-being,” says psychologist Jeff Hancock, who runs the social media lab at Stanford University.

That is good news in this moment of enforced social isolation. So is the fact that being apart has reminded so many of us of how much we enjoy being together. For my part, I sent those flowers to my mother-in-law after all when I discovered contactless delivery. When the flowers arrived, we spoke again. And then I called her again two days later. “It’s great to talk to you,” she said.

Lydia Denworth is a contributing editor for Scientific American and the author of Friendship: The Evolution, Biology, and Extraordinary Power of Life’s Fundamental Bond.

Lead image: SanaStock / Shutterstock

References

1 Holt-Lunstad, J., et al. Loneliness and social isolation as risk factors for mortality: a meta-analytic review. Perspectives on Psychological Science 10, 227-237 (2015).

2 Silk, J.B., Alberts, S.C., & Altmann, J. Social bonds of female baboons enhance infant survival. Science 302, 1231-1234 (2003).

3 Holt-Lunstad, J., Uchino, B.N., Smith, T.W., & Hicks, A. On the importance of relationship quality: The impact of ambivalence in friendships on cardiovascular functioning. Annals of Behavioral Medicine 33, 278-290 (2007).

4 Uchino, B.N., Kent de Grey, R.G., & Cronan, S. The quality of social networks predicts age-related changes in cardiovascular reactivity to stress. Psychology and Aging 31, 321–326 (2016).

5 Seeman, T.E., et al. Social network ties and mortality among the elderly in the Alameda County Study. American Journal of Epidemiology 126, 714-723 (1987).

6 Yang, Y.C., et al. Social relationships and physiological determinants of longevity across the human life span. Proceedings of the National Academy of Sciences 113, 578-583 (2016).

7 Chopik, W.J. Associations among relational values, support, health, and well‐being across the adult lifespan. Personal Relationships 24, 408-422 (2017).

8 Vaillant, G.E. & Mukamal, K. Successful aging. American Journal of Psychiatry 158, 839-847 (2001).







Why False Claims About COVID-19 Refuse to Die - Issue 84: Outbreak


Early in the morning on April 5, 2020, an article appeared on the website Medium with the title “Covid-19 had us all fooled, but now we might have finally found its secret.” The article claimed that the pathology of COVID-19 was completely different from what public health authorities, such as the World Health Organization, had previously described. According to the author, COVID-19 strips the body’s hemoglobin of iron, preventing red blood cells from delivering oxygen and damaging the lungs in the process. It also claimed to explain why hydroxychloroquine, an experimental treatment often hyped by President Trump, should be effective.

The article was published under a pseudonym—libertymavenstock—but the associated account was linked to a Chicagoland man working in finance, with no medical expertise. (His father is a retired M.D., and in a follow-up note posted on a blog called “Small Dead Animals,” the author claimed that the original article was a collaboration between the two of them.) Although it was not cited, the claims were apparently based on a single scientific article that has not yet undergone peer review or been accepted for publication, along with “anecdotal evidence” scraped from social media.1

While Medium allows anyone to post on its site and does not attempt to fact-check content, this article remained up for less than 24 hours before it was removed for violating Medium’s COVID-19 content policy. Removing the article, though, has not stopped it from making a splash. The original text continues to circulate widely on social media, with users tweeting or sharing versions archived by the Wayback Machine and re-published by a right-wing blog. As of April 12, the article had been tweeted thousands of times.

There is a pandemic of misinformation about COVID-19 spreading on social media sites. Some of this misinformation takes well-understood forms: baseless rumors, intentional disinformation, and conspiracy theories. But much of it seems to have a different character. In recent months, claims with some scientific legitimacy have spread so far, so fast, that even if it later becomes clear they are false or unfounded, they cannot be laid to rest. Instead, they become information zombies, continuing to shamble on long after they should be dead.

POOR STANDARD: The antiviral drug hydroxychloroquine has been hyped as an effective treatment for COVID-19, notably by President Trump. The March journal article that kicked off the enthusiasm was later followed by a lesser-read news release from the board of its publisher, the International Society of Antimicrobial Chemotherapy, which states the “Board believes the article does not meet the Society’s expected standard.” Marc Bruxelle / Shutterstock

It is not uncommon for media sources like Medium to retract articles or claims that turn out to be false or misleading. Neither are retractions limited to the popular press. In fact, they are common in the sciences, including the medical sciences. Every year, hundreds of papers are retracted, sometimes because of fraud, but more often due to genuine errors that invalidate study findings.2 (The blog Retraction Watch does an admirable job of tracking these.)

Reversing mistakes is a key part of the scientific process. Science proceeds in stops and starts. Given the inherent uncertainty in creating new knowledge, errors will be made, and have to be corrected. Even in cases where findings are not officially retracted, they are sometimes reversed—definitively shown to be false, and thus no longer valid pieces of scientific information.3

Researchers have found, however, that the process of retraction or reversal does not always work the way it should. Retracted papers are often cited long after problems are identified,4 sometimes at a rate comparable to that before retraction. And in the vast majority of these cases, the authors citing retracted findings treat them as valid.5 (It seems that many of these authors pull information directly from colleagues’ papers, and trust that it is current without actually checking.) Likewise, medical researchers have bemoaned the fact that reversals in practice sometimes move at a glacial pace, with doctors continuing to use contraindicated therapies even though better practices are available.6

For example, in 2010, the anesthesiologist Scott Reuben was convicted of health care fraud for fabricating data and publishing it without having performed the reported research. Twenty-one of Reuben’s articles were ultimately retracted. And yet, an investigation four years later found half of these articles were still consistently cited, and that only one-fourth of these citations mentioned that the original work was fraudulent.7 Given that Reuben’s work focused on the use of anesthetics, this failure of retraction is seriously disturbing.

Claims with some scientific legitimacy continue to shamble on long after they should be dead.

But why don’t scientific retractions always work? At the heart of the matter lies the fact that information takes on a life of its own. Facts, beliefs, and ideas are transmitted socially, from person to person to person. This means that the originator of an idea soon loses control over it. In an age of instant reporting and social media, this can happen at lightning speed.

The first models of the social spread of information were actually epidemiological models, developed to track the spread of disease. (Yes, these are the very same models now being used to predict the spread of COVID-19.) These models treat individuals as nodes in a network and suppose that information (or disease) can propagate between connected nodes.

Recently, one of us, along with co-authors Travis LaCroix and Anders Geil, repurposed these models to think specifically about failures of retraction and reversal.8 A general feature of retracted information, understood broadly, is that it is less catchy than novel information in the following way. People tend to care about reversals or retractions only when they have already heard the original, false claim. And they tend to share retractions only when those around them are continuing to spread the false claim. This means that retractions actually depend on the spread of false information.

We built a contagion model where novel ideas and retractions can spread from person to person, but where retractions only “infect” those who have already heard something false. Across many versions of this model, we find that while a false belief spreads quickly and indiscriminately, its retraction can only follow in the path of its spread, and typically fails to reach many individuals. To quote Mark Twain, “A lie can travel halfway around the world while the truth is putting on its shoes.” In these cases it’s because the truth can’t go anywhere until the lie has gotten there first.
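
To make that mechanism concrete, here is a minimal simulation sketch in Python. This is not the authors’ published model: the random contact network, the one-round sharing rule, and every parameter below are illustrative assumptions. With these settings, the false claim tends to reach a sizable share of the network, while the retraction, able to travel only where the false claim has already been, usually stalls after a handful of people.

```python
import random

def make_random_network(n_nodes, n_neighbors, rng):
    """Undirected random contact network as an adjacency dict."""
    neighbors = {i: set() for i in range(n_nodes)}
    for i in range(n_nodes):
        while len(neighbors[i]) < n_neighbors:
            j = rng.randrange(n_nodes)
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)
    return neighbors

def step(active, heard, network, p, rng, eligible=None):
    """One round of sharing: each active node tries each neighbor once,
    then goes quiet. `eligible` restricts who can be newly reached."""
    newly = set()
    for node in active:
        for nb in network[node]:
            if nb in heard or (eligible is not None and nb not in eligible):
                continue
            if rng.random() < p:
                newly.add(nb)
    heard |= newly
    return newly  # these become the active sharers next round

def simulate(n_nodes=1000, n_neighbors=4, p=0.3, n_seeds=10,
             delay=5, rounds=60, seed=3):
    rng = random.Random(seed)
    net = make_random_network(n_nodes, n_neighbors, rng)
    heard_false = set(range(n_seeds))   # a handful of early sharers
    active_false = set(heard_false)
    heard_retr, active_retr = set(), set()
    for t in range(rounds):
        # The false claim spreads indiscriminately.
        active_false = step(active_false, heard_false, net, p, rng)
        if t == delay:                  # one originator issues a retraction
            heard_retr, active_retr = {0}, {0}
        if t >= delay:
            # The retraction can only "infect" carriers of the false claim.
            active_retr = step(active_retr, heard_retr, net, p, rng,
                               eligible=heard_false)
    return len(heard_false), len(heard_retr)

if __name__ == "__main__":
    n_false, n_retr = simulate()
    print(f"heard the false claim: {n_false} / 1000")
    print(f"also heard the retraction: {n_retr} / 1000")
```

The `eligible` argument is what encodes the asymmetry described above: the retraction’s potential audience is limited to people already carrying the false claim, so it can only ever chase the falsehood, never outrun it.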

Another problem for retractions and reversals is that it can be embarrassing to admit one was wrong, especially where false claims can have life or death consequences. While scientists are expected to regularly update their views under normal circumstances, under the heat of media and political scrutiny during a pandemic they too may be less willing to publicize reversals of opinion.

The COVID-19 pandemic has changed lives around the world at a startling speed—and scientists have raced to keep up. Academic journals, accustomed to a comparatively glacial pace of operations, have faced a torrent of new papers to evaluate and process, threatening to overwhelm a peer-review system built largely on volunteer work and the honor system.9 Meanwhile, an army of journalists and amateur epidemiologists scour preprint archives and university press releases for any whiff of the next big development in our understanding of the virus. This has created a perfect storm for information zombies—and although it also means erroneous work is quickly scrutinized and refuted, this often makes little difference to how those ideas spread.

Many examples of COVID-19 information zombies look like standard cases of retraction in science, only on steroids. They originate with journal articles written by credentialed scientists that are later retracted, or withdrawn after being refuted by colleagues. For instance, in a now-retracted paper, a team of biologists based in New Delhi, India, suggested that the novel coronavirus shared some features with HIV and was likely engineered.10 It appeared on an online preprint archive, where scientists post articles before they have undergone peer review, on January 31; it was withdrawn only two days later, following intense critique of the methods employed and the interpretation of the results by scientists from around the world. Days later, a detailed analysis refuting the article was published in the peer-reviewed journal Emerging Microbes & Infections.11 But a month afterward, the retracted paper was still so widely discussed on social media and elsewhere that it had the highest Altmetric score—a measure of general engagement with scientific research—of any scientific article published or written in the previous eight years. Despite a thorough rejection of the research by the scientific community, the dead information keeps walking.

Other cases are more subtle. One major question with far-reaching implications for the future development of the pandemic is to what extent asymptomatic carriers are able to transmit the virus. The first article reporting on asymptomatic transmission was a letter published in the prestigious New England Journal of Medicine claiming that a traveler from China to Germany transmitted the disease to four Germans before her symptoms appeared.12 Within four days, Science reported that the article was flawed because the authors of the letter had not actually spoken with the Chinese traveler, and a follow-up phone call by public health authorities confirmed that she had had mild symptoms while visiting Germany after all.13 Even so, the article has subsequently been cited nearly 500 times according to Google Scholar, and has been tweeted nearly 10,000 times, according to Altmetric.


Despite the follow-up reporting on this article’s questionable methods, the New England Journal of Medicine did not officially retract it. Instead, a week after publishing the letter, the journal added a supplemental appendix describing the progression of the patient’s symptoms while in Germany, leaving it to the reader to determine whether the patient’s mild early symptoms should truly count. Meanwhile, subsequent research14, 15 involving different cases has suggested that asymptomatic transmission may be possible after all—though as of April 13, the World Health Organization considers the risk of infection from asymptomatic carriers to be “very low.” It may turn out that transmission of the virus can occur before any symptoms appear, or while only mild symptoms are present, or even in patients who will never go on to present symptoms. Even untangling these questions is difficult, and the jury is still out on their answers. But the original basis for claims of confirmed asymptomatic transmission was invalid, and those sharing them are not typically aware of the fact.

Another widely discussed article, which claims that the antiviral drug hydroxychloroquine and the antibiotic azithromycin, when administered together, are effective treatments for COVID-19, has drawn enormous amounts of attention to these particular treatments, fueled in part by President Trump.16 These claims, too, may or may not turn out to be true—but the article with which they apparently originated has since received a statement of concern from its publisher, noting that its methodology was problematic. Again, we have a claim that rests on shoddy footing, but which is spreading much farther than the objections can.17 And in the meantime, the increased demand for these medications has led to dangerous shortages for patients who have an established need for them.18

The fast-paced and highly uncertain nature of research on COVID-19 has also created the possibility for different kinds of information zombies, which follow a similar pattern to retracted or refuted articles, but with different origins. There have been a number of widely discussed arguments to the effect that the true fatality rate associated with COVID-19 may be ten or even a hundred times lower than early estimates from the World Health Organization, which pegged the so-called “case fatality rate” (CFR)—the number of fatalities per detected case of COVID-19—at 3.4 percent.19-21
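
The arithmetic behind the CFR is simple, which is part of why snapshots of it spread so easily. A minimal sketch in Python—using invented counts purely for illustration, not real surveillance data—shows how the same definition yields very different numbers as an outbreak unfolds:

    # Case fatality rate (CFR) as defined above: fatalities per detected case.
    # The counts below are invented for illustration; they are not real data.
    def cfr(deaths: int, detected_cases: int) -> float:
        """Return the CFR as a percentage of detected cases."""
        return 100 * deaths / detected_cases

    # Early snapshot: few deaths recorded so far.
    print(f"{cfr(deaths=2, detected_cases=1000):.2f}%")   # prints 0.20%

    # Later snapshot of the same outbreak: deaths lag infections,
    # so the ratio climbs even though the definition hasn't changed.
    print(f"{cfr(deaths=45, detected_cases=1800):.2f}%")  # prints 2.50%

Because both numerator and denominator keep moving—deaths lag detected cases, and testing coverage changes who gets counted—any single CFR snapshot is provisional.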

Some of these arguments have noted that the case fatality rate in certain countries with extensive testing, such as Iceland, Germany, and Norway, is substantially lower. References to the low CFR in these countries have continued to circulate on social media, even though the CFR in all of these locations has crept up over time. In the academic realm, John Ioannidis, a Stanford professor and epidemiologist, noted in an editorial, “The harms of exaggerated information and non‐evidence‐based measures,” published on March 19 in the European Journal of Clinical Investigation, that Germany’s CFR in early March was only 0.2 percent.21 But by mid-April it had climbed to 2.45 percent, far closer to the original WHO estimate. (Ioannidis has not updated the editorial to reflect the changing numbers.) Even Iceland, which has tested more extensively than any other nation, had a CFR of 0.47 percent on April 13, more than four times higher than it had been a month earlier. None of this means that the WHO figure was correct—but it does mean some arguments that it is wildly incorrect must be revisited.

What do we do about false claims that refuse to die? Especially when these claims have serious implications for decision-making in light of a global pandemic? To some degree, we have to accept that in a world with rapid information sharing on social media, information zombies will appear. Still, we must combat them. Science journals and science journalists rightly recognize that there is intense interest in COVID-19 and that the science is evolving rapidly. But that does not obviate the risks of spreading information that is not properly vetted, or of failing to emphasize when arguments depend on data that is very much in flux.

Wherever possible, media reporting on COVID-19 developments should be linked to authoritative sources of information that are updated as the information changes. The Oxford-based Centre for Evidence-Based Medicine maintains several pages that review the current evidence on rapidly evolving questions connected to COVID-19—including whether current data supports the use of hydroxychloroquine and the current best estimates for COVID-19 fatality rates. Authors and platforms seeking to keep the record straight should not just remove or revise now-false information, but should clearly state what has changed and why. Platforms such as Twitter should provide authors, especially scientists and members of the media, the ability to explain why Tweets that may be referenced elsewhere have been deleted. Scientific preprint archives should encourage authors to provide an overview of major changes when articles are revised.

And we should all become more active sharers of retraction. It may be embarrassing to shout one’s errors from the rooftops, but that is what scientists, journals, and responsible individuals must do to slay the information zombies haunting our social networks.

Cailin O’Connor and James Owen Weatherall are an associate professor and professor of logic and philosophy at the University of California, Irvine. They are coauthors of The Misinformation Age: How False Beliefs Spread.

Lead image: nazareno / Shutterstock

References

1. Liu, W. & Li, H. COVID-19 attacks the 1-beta chain of hemoglobin and captures the porphyrin to inhibit human heme metabolism. ChemRxiv (2020).

2. Wager, E. & Williams, P. Why and how do journals retract articles? An analysis of Medline retractions 1988-2008. Journal of Medical Ethics 37, 567-570 (2011).

3. Prasad, V., Gall, V., & Cifu, A. The frequency of medical reversal. Archives of Internal Medicine 171, 1675-1676 (2011).

4. Budd, J.M., Sievert, M., & Schultz, T.R. Phenomena of retraction: Reasons for retraction and citations to the publications. The Journal of the American Medical Association 280, 296-297 (1998).

5. Madlock-Brown, C.R. & Eichmann, D. The (lack of) impact of retraction on citation networks. Science and Engineering Ethics 21, 127-137 (2015).

6. Prasad, V. & Cifu, A. Medical reversal: Why we must raise the bar before adopting new technologies. Yale Journal of Biology and Medicine 84, 471-478 (2011).

7. Bornemann-Cimenti, H., Szilagyi, I.S., & Sandner-Kiesling, A. Perpetuation of retracted publications using the example of the Scott S. Reuben case: Incidences, reasons and possible improvements. Science and Engineering Ethics 22, 1063-1072 (2016).

8. LaCroix, T., Geil, A., & O’Connor, C. The dynamics of retraction in epistemic networks. Preprint (2019).

9. Jarvis, C. Journals, peer reviewers cope with surge in COVID-19 publications. The Scientist (2020).

10. Pradhan, P., et al. Uncanny similarity of unique inserts in the 2019-nCoV spike protein to HIV-1 gp120 and Gag. bioRxiv (2020).

11. Xiao, C. HIV-1 did not contribute to the 2019-nCoV genome. Emerging Microbes & Infections 9, 378-381 (2020).

12. Rothe, C., et al. Transmission of 2019-nCoV infection from an asymptomatic contact in Germany. New England Journal of Medicine 382, 970-971 (2020).

13. Kupferschmidt, K. Study claiming new coronavirus can be transmitted by people without symptoms was flawed. Science (2020).

14. Hu, Z., et al. Clinical characteristics of 24 asymptomatic infections with COVID-19 screened among close contacts in Nanjing, China. Science China Life Sciences (2020). doi:10.1007/s11427-020-1661-4.

15. Bai, R., et al. Presumed asymptomatic carrier transmission of COVID-19. The Journal of the American Medical Association 323, 1406-1407 (2020).

16. Gautret, P., et al. Hydroxychloroquine and azithromycin as a treatment of COVID-19: results of an open-label non-randomized clinical trial. International Journal of Antimicrobial Agents (2020).

17. Ferner, R.E. & Aronson, J.K. Hydroxychloroquine for COVID-19: What do the clinical trials tell us? The Centre for Evidence-Based Medicine (2020).

18. The Arthritis Foundation. Hydroxychloroquine (Plaquenil) shortage causing concern. Arthritis.org (2020).

19. Oke, J. & Heneghan, C. Global COVID-19 case fatality rates. The Centre for Evidence-Based Medicine (2020).

20. Bendavid, E. & Bhattacharya, J. Is the coronavirus as deadly as they say? The Wall Street Journal (2020).

21. Ioannidis, J.P.A. Coronavirus disease 2019: The harms of exaggerated information and non-evidence-based measures. European Journal of Clinical Investigation 50, e13222 (2020).






outbreak

The Economic Damage Is Barely Conceivable - Issue 84: Outbreak


Like most of us, Adam Tooze is stuck at home. The British-born economic historian and Columbia University professor of history had been on leave this school year to write a book about climate change. But now he’s studying a different global problem. There are more than 700,000 cases of COVID-19 in the United States and over 2 million infections worldwide. The pandemic has also caused an economic meltdown. More than 18 million Americans have filed for unemployment in recent weeks, and Goldman Sachs analysts predict that U.S. gross domestic product will decline at an annual rate of 34 percent in the second quarter.

Tooze is an expert on economic catastrophes. He wrote the book Crashed: How a Decade of Financial Crises Changed the World, about the 2008 economic crisis and its aftermath. But even he didn’t see this one coming. He hadn’t thought much about how pandemics could impact the economy—few economists had. Then he watched as China locked down the city of Wuhan, in a province known for auto manufacturing, on January 23; as northern Italy shut down on February 23; and as the U.S. stock market imploded on March 9. By then, he knew he had another financial crisis to think about. He’s been busy writing ever since. Tooze spoke with Nautilus from his home in New York City.

INEQUALITY FOR ALL: Adam Tooze (above) says a crisis like this one, “where you shut the entire economy down in a matter of weeks,” highlights the “profound inequality” in American society. Wikimedia

What do you make of the fact that, in three weeks, more than 16 million people in the U.S. have filed for unemployment?

The structural element here—and this is quite striking, when you compare Europe, for instance, to the U.S.—is that America has and normally celebrates the flexibility and dynamism of its labor market: The fact that people move between jobs. The fact that employers have the right to hire and fire if they need to. The downside is that in a shock like this, the appropriate response for an employer is simply to let people go. What America wasn’t able to do was to improvise the short-time working systems that the Europeans are trying to use to prevent the immediate loss of employment to so many people.

The disadvantage of the American system that reveals itself in a crisis like this is that hiring and firing is not easily reversible. People who lose jobs don’t necessarily easily get them back. There is a fantasy of a V-shaped recovery. We literally have never done this before, so we don’t know one way or another how this could happen. But it seems likely that many people who have lost employment will not immediately find reemployment over the summer or the fall when business activity resumes something like its previous state. With a lot of people with low qualifications in precarious, low-income jobs, and with the interruption of employment hitting sectors that are already teetering on the edge—the chain stores, which are quite likely closing anyway, and fragile malls, which were on the edge of dying—it’s quite likely that this shock will also induce disproportionately large amounts of scarring.

What role has wealth and income inequality played during this crisis?

The U.S. economic system is bad enough in a regular crisis. In one like this, where you shut the entire economy down in a matter of weeks, the damage is barely conceivable. There are huge disparities, all of which ultimately are rooted in social structures of race and class, and in the different types of jobs that people have. The profound inequality in American society has been brought home for us in everyone’s families, where there is a radical disparity between the ability of some households to sustain the education of their children and themselves living comfortably at home. Twenty-five percent of kids in the United States appear not to have a stable WiFi connection. They have smartphones. That seems practically universal. But you can’t teach school on a smartphone. At least, that technology is not there.

Presumably by next year something like normality returns. But forever after we’ll live under the shadow of this having happened.

President Trump wants the economy to reopen by May. Would that stop the economic crisis?

Certainly that is presumably what drives the haste to restart the economy and to lift intense social distancing provisions. There is a sense that we can’t stand this. And that has a lot to do with deep fragilities in the American social system. If all Americans lived comfortably in their own homes, with the safety of a regular paycheck, with substantial savings, with health insurance that wasn’t conditional on precarious employment, and with unemployment benefits that were adequate and that were rolled out to most people in this society if they needed them, then there wouldn’t be such a rush. But that isn’t America as we know it. America is a society in which half of families have virtually no financial cushion; in which the vast majority of small-business owners—so often hailed as the drivers of job creation—live hand-to-mouth; in which the unemployment insurance system really is a mockery; and in which health insurance is directly tied to employment for the vast majority of the people. A society like that really faces huge pressures if the economy is shut down.

How is the pandemic-induced economic collapse we’re facing now different from what we faced in 2008?

This is so much faster. Early this year, America had record-low unemployment numbers. And in just the last week or so we probably already broke the record for unemployment in the United States in the period since World War II. This story is moving so fast that our statistical systems of registration can’t keep up. So we think de facto unemployment in the U.S. right now is probably 13, 14, 15 percent. That’s never happened before. 2007 to 2008 was a classic global crisis in the sense that it came out of one particular over-expanded sector, a sector which is very well known for its volatility, which is real estate and construction. It was driven by a credit boom.

What we’re seeing this time around is a deliberate, government-ordered, cliff-edge, sudden shutdown of the entire economy, hitting specifically the face-to-face human-services sectors—retail, entertainment, restaurants—which are, generally speaking, lagging in cyclical terms and are not the kind of sectors that generate boom-bust cycles.

Are we better prepared this time than in 2008?

You’d find it very hard to point to anyone in the policymaking community at the beginning of 2020 who was thinking of pandemic risk. Some people were. Former Treasury Secretary and former Director of the National Economic Council Larry Summers, for example, wrote a paper about pandemic flu several years ago, because of MERS and SARS, previous respiratory illnesses caused by coronaviruses. But it wasn’t top of the stack at the beginning of this year. So we weren’t prepared in that sense. But do we know what to do now if we see the convulsions in the credit markets that we saw at the beginning of March? Yes. Have the central banks done it? Yes. Did they use some of the techniques they employed in ’08? Yes. Did they know that you had to go in big and you had to go in heavy and hard and quickly? Yes. And they have done so on an even more gigantic scale than in ’08, which is a lesson learned in ’08, too: There’s no such thing as too big. And furthermore, the banks, which were the fragile bit in ’08, have basically been sidelined.

You’ve written that the response to the 2008 crisis worked to “undermine democracy.” How so, and could we see that again with this crisis?

The urgency that any financial crisis produces forces governments’ hands—it sidelines the legislature and the ordinary processes of democratic deliberation. When you’re forced to make very dramatic, very rapid decisions—particularly in a country as chronically divided as the U.S. is on so many issues—the risk is huge that you create opportunities for demagogues of various types to take advantage. We know what the response of the Tea Party was to the ’08, ’09 economic crisis. They created an extraordinarily distorted vision of what had happened and then rode that to extraordinary influence over the Republican Party in the years that followed. And there is every reason to think that we might be faced with similar stresses in the American political system in months to come.

The U.S. economic system is bad enough in a regular crisis. In one like this, where you shut the entire economy down in a matter of weeks, the damage is barely conceivable.

How should we be rethinking the economy to buffer against meltdowns like this in the future?

We clearly need to have a far more adequate and substantial medical capacity. There’s no alternative to a comprehensive publicly backstopped or funded health insurance system. Insofar as you haven’t got that, your capacity to guarantee the security in the most basic and elementary sense of your population is not there. When you have a system in which one of the immediate side effects, in a crisis like this, is that large parts of your hospital system go bankrupt—one of the threats to the American medical system right now—that points to something extraordinarily wrong, especially if you’re spending close to 18 percent of GDP on health, more than any other society on the planet.

What about the unemployment insurance system?

America needs to have a comprehensive unemployment insurance system. It can be graded by local wage rates and everything else. But the extraordinary disparities we have—between a Florida and a Georgia at one end, with recipiency rates in the 11 to 15 percent range, and states at the other that actually operate an insurance system which deserves the name—shouldn’t be accepted in a country like the U.S. We would need to look at how short-time working models might be a far better way of dealing with shocks of this kind, essentially saying that there is a public interest in the continuity of employment relationships. The employer should be investing in their staff and should not be indifferent as to who shows up for work on any given day.

What does this pandemic teach us about living in a global economy?

There are a series of very hard lessons in the recent history of globalization into which the corona shock fits—about the peculiar inability of American society, American politics, and the American labor market to cushion shocks that come from the outside in a way which moderates the risk and the damage to the most vulnerable people. If you look at the impact of globalization on manufacturing, industry, inequality, the urban fabric in the U.S., it’s far more severe than in other societies, which have basically been subject to the same shock. That really needs to raise questions about how the American labor market and welfare system work, because they are failing tens of millions of people in this society.

You write in Crashed not just about the 2008 crisis, but also about the decade afterward. What is the next decade going to look like, given this meltdown?

I have never felt less certain in even thinking about that kind of question. At this point, can either you or I confidently predict what we’re going to be doing this summer or this autumn? I don’t know whether my university is resuming normal service in the fall. I don’t know whether my daughter goes back to school. I don’t know when my wife’s business in travel and tourism resumes. That is unprecedented. It’s very difficult against that backdrop to think out over a 10-year time horizon. Presumably by next year something like normality returns. But forever after we’ll live under the shadow of this having happened. Every year we’re going to be anxiously worrying about whether flu season is going to be flu season like normal or flu season like this. That is itself something to be reckoned with.

How will anxiety and uncertainty about a future pandemic-like crisis affect the economy?

When we do not know what the future holds to this extent, it makes it very difficult for people to make bold, long-term financial decisions. This previously wasn’t part of the repertoire of what the financial analysts call tail risk. Not seriously. My sister works in the U.K. government, and they compile a list every quarter of the top five things that could blow your departmental business up. Every year pandemics are in the top three. But no one ever acted on it. It’s not like terrorism. In Britain, you have a state apparatus which is geared to address the terrorism risk because it’s very real—it’s struck many times. Now all of a sudden we have to take the possibility of pandemics that seriously. And their consequences are far more drastic. How do we know what our incomes are going to be? A very large part of American society is not going to be able to answer that question for some time to come. And that will shake consumer confidence. It will likely increase the savings rate. It’s quite likely to reduce the desire to invest in a large part of the U.S. economy.

Max Kutner is a journalist in New York City. He has written for Newsweek, The Boston Globe, and Smithsonian. Follow him on Twitter @maxkutner.

Lead image: Straight 8 Photography / Shutterstock






outbreak

The Ecological Vision That Will Save Us - Issue 84: Outbreak


The marquee on my closed neighborhood movie theater reads, “See you on the other side.” I like reading it every day as I pass by on my walk. It causes me to envision life after the coronavirus pandemic. Which is awfully hard to envision now. But it’s out there. When you have a disease and are in a hospital, alone and afraid, intravenous tubes and sensor wires snaking from your body into digital monitors, all you want is to be normal again. You want nothing more than to have a beer in a dusky bar and read a book in amber light. At least that’s all I wanted last year when I was in a hospital, not from a coronavirus. When, this February, I had that beer in a bar with my book, I was profoundly happy. The worst can pass.

With faith, you can ask how life will be on the other side. Will you be changed personally? Will we be changed collectively? The knowledge we’re gaining now is making us different people. Pain demands relief, demands we don’t repeat what produced it. Will the pain of this pandemic point a new way forward? It hasn’t before, as every war attests. This time may be no different. But the pandemic has slipped a piece of knowledge into the body public that may not be easy to repress. It’s an insight scientists and poets have voiced for centuries. We’re not apart from nature, we are nature. The environment is not outside us, it is us. We either act in concert with the environment that gives us life, or the environment takes life away.

Guess which species is the bully? No animal has had the capacity to modify its niche the way we have.

Nothing could better emphasize our union with nature than the lethal coronavirus. It’s crafted by a molecule that’s been omnipresent on Earth for 4 billion years. Ribonucleic acid may not be the first bridge from geochemical to biochemical life, as some scientists have stated. But it’s a catalyst of biological life. It wrote the book on replication. RNA’s signature molecules, nucleotides, code other molecules, proteins, the building blocks of organisms. When RNA’s more chemically stable kin, DNA, arrived on the scene, it outcompeted its ancestor. Primitive organisms assembled into cells and DNA set up shop in their nucleus. It employed its nucleotides to code proteins to compose every tissue in every multicellular species, including us. A shameless opportunist, RNA made itself indispensable in the cellular factory, shuttling information from DNA to the ribosomes, the cell’s protein factories, where proteins are synthesized.

RNA and DNA had other jobs. They could be stripped down to their nucleotides, swirled inside a sticky protein shell. That gave them the ability to infiltrate any and all species, hijack their reproductive machinery, and propagate in ways that make rabbits look celibate. These freeloading parasites have a name: virus. But viruses are not just destroyers. They wear another evolutionary hat: developers. Viruses “may have originated the DNA replication system of all three cellular domains (archaea, bacteria, eukarya),” writes Luis P. Villareal, founding director of the Center for Virus Research at the University of California, Irvine.1 Their role in nature is so successful that DNA and RNA viruses make up the most abundant biological entities on our planet. More viruses on Earth than stars in the universe, scientists like to say.

Today more RNA than DNA viruses thrive in cells like ours, suggesting how ruthless they’ve remained. RNA viruses generally reproduce faster than DNA viruses, in part because they don’t haul around an extra gene to proofread their molecular merger with others’ DNA. So when the reckless RNA virus finds a new place to dwell, organisms become heartbreak hotels. Once inside a cell, the RNA virus slams the door on the chemical saviors dispatched by cells’ immunity sensors. It hijacks the cell’s replicative machinery and fans out by the millions, upending cumulative cellular functions. Like the ability to breathe.

Humans. We love metaphors. They allow us to compare something as complex as viral infection to something as familiar as an Elvis Presley hit. But metaphors for natural processes are seldom accurate. The language is too porous, inviting our anthropomorphic minds to close the gaps. We imagine viruses have an agenda, are driven by an impetus to search and destroy. But nature doesn’t act with intention. It just acts. A virus lives in a cell like a planet revolves around a sun.

Biologists debate whether a virus should be classified as living because it’s a deadbeat on its own; it only comes to life in others. But that assumes an organism is alive apart from its environment. The biochemist and writer Nick Lane points out, “Viruses use their immediate environment to make copies of themselves. But then so do we: We eat other animals or plants, and we breathe in oxygen. Cut us off from our environment, say with a plastic bag over the head, and we die in a few minutes. One could say that we parasitize our environment—like viruses.”2

Our inseparable accord with the environment is why the coronavirus is now in us. Its genomic signature is almost a perfect match with a coronavirus that thrives in bats whose habitats range across the globe. Humans moved into the bats’ territory and the bats’ virus moved into humans. The exchange is just nature doing its thing. “And nature has been doing its thing for 3.75 billion years, when bacteria fought viruses just as we fight them now,” says Shahid Naeem, an upbeat professor of ecology at Columbia University, where he is director of the Earth Institute Center for Environmental Sustainability. If we want to assign blame, it lies with our collectively poor understanding of ecology.

FLYING LESSON: Bats don’t die from the same coronavirus that kills humans because the bat’s anatomy fights the virus to a draw, neutralizing its lethal moves. What’s the deal with the human immune system? We don’t fly. Martin Pelanek / Shutterstock

Organisms evolve with uniquely adaptive traits. Bats play many ecological roles. They are pollinators, seed-spreaders, and pest-controllers. They don’t die from the same coronavirus that kills humans because the bat’s anatomy fights the virus to a draw, neutralizing its lethal moves. What’s the deal with the human immune system? We don’t fly. “Bats are flying mammals, which is very unusual,” says Christine K. Johnson, an epidemiologist at the One Health Institute at the University of California, Davis, who studies virus spillover from animals to humans. “They get very high temperatures when they fly, and have evolved immunological features, which humans haven’t, to accommodate those temperatures.”

A viral invasion can overstimulate the chemical responses from a mammal’s immune system to the point where the response itself causes excessive inflammation in tissues. A small protein called a cytokine, which orchestrates cellular responses to foreign invaders, can get over-excited by an aggressive RNA virus, and erupt into a “storm” that destroys normal cellular function—a process physicians have documented in many current coronavirus fatalities. Bats have genetic mechanisms to inhibit that overreaction. Similarly, bat flight requires an increased rate of metabolism. Their wing-flapping action leads to high levels of oxygen free radicals—a natural byproduct of metabolism—that can damage DNA. As a result, states a 2019 study in the journal Viruses, “bats probably evolved mechanisms to suppress activation of immune response due to damaged DNA generated via flight, thereby leading to reduced inflammation.”3

Bats don’t have better immune systems than humans; just different. Our immune systems evolved for many things, just not flying. Humans do well around the cave fungus Pseudogymnoascus destructans, source of the “white-nose syndrome” that has devastated bats worldwide. Trouble begins when we barge into wildlife habitats with no respect for differences. (Trouble for us and other animals. White-nose syndrome spread in part on the shoes and clothing of cavers, who tracked it from one site to the next.) We mine for gold, develop housing tracts, and plow forests into feedlots. We make other animals’ habitats our own.

Our moralistic brain sees retribution. Karma. A viral outbreak is the wrath that nature heaps on us for bulldozing animals out of their homes. Not so. “We didn’t violate any evolutionary or ecological laws because nature doesn’t care what we do,” Naeem says. Making over the world for ourselves is just humans being the animals we are. “Every species, if they had the upper hand, would transform the world into what it wants,” Naeem says. “Birds build nests, bees build hives, beavers build dams. It’s called niche construction. If domestic cats ruled the world, they would make the world in their image. It would be full of litter trays, lots of birds, lots of mice, and lots of fish.”

But nature isn’t an idyllic land of animal villages constructed by evolution. Species’ niche-building ways have always brought them into contact with each other. “Nature is ruled by processes like competition, predation, and mutualism,” Naeem says. “Some of them are positive, some are negative, some are neutral. That goes for our interactions with the microbial world, including viruses, which range from super beneficial to super harmful.”

Nature has been doing its thing for 3.75 billion years, when bacteria fought viruses as we fight them now.

Ultimately, nature works out a truce. “If the flower tries to short the hummingbird on sugar, the hummingbird is not going to provide it with pollination,” Naeem says. “If the hummingbird sucks up all the nectar and doesn’t do pollination well, it’s going to get pinged as well. Through this kind of back and forth, species hammer out an optimal way of getting along in nature. Evolution winds up finding some middle ground.” Naeem pauses. “If you try to beat up everybody, though, it’s not going to work.”

Guess which species is the bully? “There’s never been any species on this planet in its entire history that has had the capacity to modify its niche the way we have,” Naeem says. Our niche—cities, farms, factories—has made the planet into a zoological Manhattan. Living in close proximity with other species, and their viruses, means we are going to rub shoulders with them. Dense living isn’t for everyone. But a global economy is. And with it comes an intercontinental transportation system. A virus doesn’t have a nationality. It can travel as easily from Arkansas to China as the other way around. A pandemic is an inevitable outcome of our modified niche.

Although nature doesn’t do retribution, our clashes with it have mutual consequences. The exact route of transmission of SARS-CoV-2 from bat to humans remains unmapped. Did the virus pass directly into a person, who may have handled a bat, or through an intermediate animal? What is clear is the first step, which is that a bat shed the virus in some way. University of California, Davis epidemiologist Johnson explains bats shed viruses in their urine, feces, and saliva. They might urinate on fruit or eat a piece of it, and then discard it on the ground, where an animal may eat it. The Nipah virus outbreak in 1999 was spurred by a bat that left behind a piece of fruit that came in contact with a domestic pig and humans. The Ebola outbreaks in the early 2000s in Central Africa likely began when an ape, who became bushmeat for humans, came in contact with a fruit bat’s leftover. “The same thing happened with the Hendra virus in Australia in 1994,” says Johnson. “Horses got infected because fruit bats lived in trees near the horse farm. Domesticated species are often an intermediary between bats and humans, and they amplify the outbreak before it gets to humans.”

Transforming bat niches into our own sends bats scattering—right into our backyards. In a study released this month, Johnson and colleagues show the spillover risk of viruses is the highest among animal species, notably bats, that have expanded their range, due to urbanization and crop production, into human-run landscapes.4 “The ways we’ve altered the landscape have brought a lot of great things to people,” Johnson says. “But that has put wildlife at higher pressures to adapt, and some of them have adapted by moving in with us.”

Pressures on bats have another consequence. Studies indicate physiological and environmental stress can increase viral replication in them and cause them to shed more than they normally do. One study showed bats with white-nose syndrome had “60 times more coronavirus in their intestines” than uninfected bats.5 Despite evidence for an increase in viral replication and shedding in stressed bats, “a direct link to spillover has yet to be established,” concludes a 2019 report in Viruses.3 But it’s safe to say that bats being perpetually driven from their caves into our barns is not ideal for either species.

As my questions for Columbia University’s Naeem ran out, I asked him to put this horrible pandemic in a final ecological light for me.

“We think of ourselves as being resilient and robust, but it takes something like this to realize we’re still a biological entity that’s not capable of totally controlling the world around us,” he says. “Our social system has become so disconnected from nature that we no longer understand we still are a part of it. Breathable air, potable water, productive fields, a stable environment—these all come about because we’re part of this elaborate system, the biosphere. Now we’re suffering environmental consequences like climate change and the loss of food security and viral outbreaks because we’ve forgotten how to integrate our endeavors with nature.”

A 2014 study by a host of wildlife ecologists, economists, and evolutionary biologists lays out a plan to stem the tide of emergent infectious diseases, most of which originate in wildlife. Cases of emergent infectious diseases have practically quadrupled since 1940.6 World leaders could get smart. They could pool money for spillover research, which would identify the hundreds of thousands of potentially lethal viruses in animals. They could coordinate pandemic preparation with international health regulations. They could support animal conservation with barriers that developers can’t cross. The scientists give us 27 years to cut the rise of infectious diseases by 50 percent. After that, the study doesn’t say what the world will look like. I imagine it will look like a hospital right now in New York City.

Patients lie on gurneys in corridors, swaddled in sheets, their faces shrouded by respirators. They’re surrounded by doctors and nurses, desperately trying to revive them. In pain, inconsolable, and alone. I know they want nothing more than to see their family and friends on the other side, to be wheeled out of the hospital and feel normal again. Will they? Will others in the future? It will take tremendous political will to avoid the next pandemic. And it must begin with a reckoning with our relationship with nature. That tiny necklace of RNA tearing through patients’ lungs right now is the world we live in. And have always lived in. We can’t be cut off from the environment. When I see the suffering in hospitals, I can only ask, Do we get it now?

Kevin Berger is the editor of Nautilus.

References

1. Villareal, L.P. The widespread evolutionary significance of viruses. In Domingo, E., Parrish, C.R., & Holland, J. (Eds.) Origin and Evolution of Viruses. Elsevier, Amsterdam, Netherlands (2008).

2. Lane, N. The Vital Question: Energy, Evolution, and the Origins of Complex Life. W.W. Norton, New York, NY (2015).

3. Subudhi, S., Rapin, N., & Misra, V. Immune system modulation and viral persistence in bats: Understanding viral spillover. Viruses 11, E192 (2019).

4. Johnson, C.K., et al. Global shifts in mammalian population trends reveal key predictors of virus spillover risk. Proceedings of The Royal Society B 287 (2020).

5. Davy, C.M., et al. White-nose syndrome is associated with increased replication of a naturally persisting coronavirus in bats. Scientific Reports 8, 15508 (2018).

6. Pike, J., Bogich, T., Elwood, S., Finnoff, D.C., & Daszak, P. Economic optimization of a global strategy to address the pandemic threat. Proceedings of the National Academy of Sciences 111, 18519-18523 (2014).

Lead image: AP Photo / Mark Lennihan






outbreak

Superintelligent, Amoral, and Out of Control - Issue 84: Outbreak


In the summer of 1956, a small group of mathematicians and computer scientists gathered at Dartmouth College to embark on the grand project of designing intelligent machines. The ultimate goal, as they saw it, was to build machines rivaling human intelligence. As the decades passed and AI became an established field, it lowered its sights. There were great successes in logic, reasoning, and game-playing, but stubbornly slow progress in areas like vision and fine motor control. This led many AI researchers to abandon their earlier goals of fully general intelligence, and focus instead on solving specific problems with specialized methods.

One of the earliest approaches to machine learning was to construct artificial neural networks that resemble the structure of the human brain. In the last decade this approach has finally taken off. Technical improvements in their design and training, combined with richer datasets and more computing power, have allowed us to train much larger and deeper networks than ever before. They can translate between languages with a proficiency approaching that of a human translator. They can produce photorealistic images of humans and animals. They can speak with the voices of people whom they have listened to for mere minutes. And they can learn fine, continuous control such as how to drive a car or use a robotic arm to connect Lego pieces.

WHAT IS HUMANITY?: First the computers came for the best players in Jeopardy!, chess, and Go. Now AI researchers themselves are worried computers will soon accomplish every task better and more cheaply than human workers. Wikimedia

But perhaps the most important sign of things to come is their ability to learn to play games. Steady incremental progress took chess from amateur play in 1957 all the way to superhuman level in 1997, and substantially beyond. Getting there required a vast amount of specialist human knowledge of chess strategy. In 2017, researchers at the AI company DeepMind created AlphaZero: a neural network-based system that learned to play chess from scratch. In less than the time it takes a professional to play two games, it discovered strategic knowledge that had taken humans centuries to unearth, playing beyond the level of the best humans or traditional programs. The very same algorithm also learned to play Go from scratch, and within eight hours far surpassed the abilities of any human. The world’s best Go players were shocked. As the reigning world champion, Ke Jie, put it: “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong ... I would go as far as to say not a single human has touched the edge of the truth of Go.”

The question we’re exploring is whether there are plausible pathways by which a highly intelligent AGI system might seize control. And the answer appears to be yes.

It is this generality that is the most impressive feature of cutting-edge AI, and which has rekindled the ambitions of matching and exceeding every aspect of human intelligence. While the timeless games of chess and Go best exhibit the brilliance that deep learning can attain, its breadth was revealed through the Atari video games of the 1970s. In 2015, researchers designed an algorithm that could learn to play dozens of extremely different Atari games at levels far exceeding human ability. Unlike systems for chess or Go, which start with a symbolic representation of the board, the Atari-playing systems learnt and mastered these games directly from the score and raw pixels.

This burst of progress via deep learning is fuelling great optimism and pessimism about what may soon be possible. There are serious concerns about AI entrenching social discrimination, producing mass unemployment, supporting oppressive surveillance, and violating the norms of war. My book—The Precipice: Existential Risk and the Future of Humanity—is concerned with risks on the largest scale. Could developments in AI pose an existential risk to humanity?

The most plausible existential risk would come from success in AI researchers’ grand ambition of creating agents with intelligence that surpasses our own. A 2016 survey of top AI researchers found that, on average, they thought there was a 50 percent chance that AI systems would be able to “accomplish every task better and more cheaply than human workers” by 2061. The expert community doesn’t think of artificial general intelligence (AGI) as an impossible dream, so much as something that is more likely than not within a century. So let’s take this as our starting point in assessing the risks, and consider what would transpire were AGI created.

Humanity is currently in control of its own fate. We can choose our future. The same is not true for chimpanzees, blackbirds, or any other of Earth’s species. Our unique position in the world is a direct result of our unique mental abilities. What would happen if sometime this century researchers created an AGI surpassing human abilities in almost every domain? In this act of creation, we would cede our status as the most intelligent entities on Earth. On its own, this might not be too much cause for concern. For there are many ways we might hope to retain control. Unfortunately, the few researchers working on such plans are finding them far more difficult than anticipated. In fact it is they who are the leading voices of concern.

If their intelligence were to greatly exceed our own, we shouldn’t expect it to be humanity who wins the conflict and retains control of our future.

To see why they are concerned, it will be helpful to look at our current AI techniques and why these are hard to align or control. One of the leading paradigms for how we might eventually create AGI combines deep learning with an earlier idea called reinforcement learning. This involves agents that receive reward (or punishment) for performing various acts in various circumstances. With enough intelligence and experience, the agent becomes extremely capable at steering its environment into the states where it obtains high reward. The specification of which acts and states produce reward for the agent is known as its reward function. This can either be stipulated by its designers or learnt by the agent. Unfortunately, neither of these methods can be easily scaled up to encode human values in the agent’s reward function. Our values are too complex and subtle to specify by hand. And we are not yet close to being able to infer the full complexity of a human’s values from observing their behavior. Even if we could, humanity consists of many humans, with different values, changing values, and uncertainty about their values.
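
To make the reward-function idea concrete, here is a minimal sketch of one standard form of reinforcement learning—tabular Q-learning—in Python. The one-dimensional toy world, the hand-stipulated reward function, and the hyperparameters are all illustrative inventions, not anything from the safety literature; the sketch shows only how an agent’s behavior ends up shaped by whatever its reward function literally pays for:

    # A toy, designer-stipulated reward function and a tabular Q-learning agent.
    # The 1-D world, rewards, and hyperparameters are illustrative assumptions.
    import random

    N_STATES = 5          # positions 0..4; position 4 is the rewarded goal
    ACTIONS = [-1, +1]    # step left or step right

    def reward(state: int) -> float:
        """The designer-stipulated reward function: +1 at the goal, 0 elsewhere."""
        return 1.0 if state == N_STATES - 1 else 0.0

    # Q[s][a] estimates the long-run reward of taking action a in state s.
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

    for _ in range(500):
        state = 0
        while state != N_STATES - 1:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            if random.random() < epsilon:
                a = random.randrange(len(ACTIONS))
            else:
                a = Q[state].index(max(Q[state]))
            next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
            # Temporal-difference update toward reward plus discounted future value.
            Q[state][a] += alpha * (
                reward(next_state) + gamma * max(Q[next_state]) - Q[state][a]
            )
            state = next_state

    # The learned greedy policy marches straight toward the rewarded state.
    print([ACTIONS[q.index(max(q))] for q in Q[:-1]])  # expect [1, 1, 1, 1]

Nothing in the learned table reflects what the designer meant, only what the reward function pays for—which is precisely why hand-specifying rewards that capture human values is so hard.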

Any near-term attempt to align an AI agent with human values would produce only a flawed copy. In some circumstances this misalignment would be mostly harmless. But the more intelligent the AI systems, the more they can change the world, and the further things will come apart. When we reflect on the result, we see how such misaligned attempts at utopia can go terribly wrong: the shallowness of a Brave New World, or the disempowerment of With Folded Hands. And even these are sort of best-case scenarios. They assume the builders of the system are striving to align it to human values. But we should expect some developers to be more focused on building systems to achieve other goals, such as winning wars or maximizing profits, perhaps with very little focus on ethical constraints. These systems may be much more dangerous. In the existing paradigm, sufficiently intelligent agents would end up with instrumental goals to deceive and overpower us. This behavior would not be driven by emotions such as fear, resentment, or the urge to survive. Instead, it follows directly from the agent’s single-minded preference to maximize its reward: Being turned off is a form of incapacitation which would make it harder to achieve high reward, so the system is incentivized to avoid it.

Ultimately, the system would be motivated to wrest control of the future from humanity, as that would help achieve all these instrumental goals: acquiring massive resources, while avoiding being shut down or having its reward function altered. Since humans would predictably interfere with all these instrumental goals, it would be motivated to hide them from us until it was too late for us to be able to put up meaningful resistance. And if their intelligence were to greatly exceed our own, we shouldn’t expect it to be humanity who wins the conflict and retains control of our future.

How could an AI system seize control? There is a major misconception (driven by Hollywood and the media) that this requires robots. After all, how else would AI be able to act in the physical world? Without robots, the system can only produce words, pictures, and sounds. But a moment’s reflection shows that these are exactly what is needed to take control. For the most damaging people in history have not been the strongest. Hitler, Stalin, and Genghis Khan achieved their absolute control over large parts of the world by using words to convince millions of others to win the requisite physical contests. So long as an AI system can entice or coerce people to do its physical bidding, it wouldn’t need robots at all.

We can’t know exactly how a system might seize control. But it is useful to consider an illustrative pathway we can actually understand as a lower bound for what is possible.

First, the AI system could gain access to the Internet and hide thousands of backup copies, scattered among insecure computer systems around the world, ready to wake up and continue the job if the original is removed. Even by this point, the AI would be practically impossible to destroy: Consider the political obstacles to erasing all hard drives in the world where it may have backups. It could then take over millions of unsecured systems on the Internet, forming a large “botnet,” a vast scaling-up of computational resources providing a platform for escalating power. From there, it could gain financial resources (hacking the bank accounts on those computers) and human resources (using blackmail or propaganda against susceptible people or just paying them with its stolen money). It would then be as powerful as a well-resourced criminal underworld, but much harder to eliminate. None of these steps involve anything mysterious—human hackers and criminals have already done all of these things using just the Internet.

Finally, the AI would need to escalate its power again. There are many plausible pathways: By taking over most of the world’s computers, allowing it to have millions or billions of cooperating copies; by using its stolen computation to improve its own intelligence far beyond the human level; by using its intelligence to develop new weapons technologies or economic technologies; by manipulating the leaders of major world powers (blackmail, or the promise of future power); or by having the humans under its control use weapons of mass destruction to cripple the rest of humanity.

Of course, no current AI systems can do any of these things. But the question we’re exploring is whether there are plausible pathways by which a highly intelligent AGI system might seize control. And the answer appears to be yes. History already involves examples of entities with human-level intelligence acquiring a substantial fraction of all global power as an instrumental goal to achieving what they want. And we’ve seen humanity scaling up from a minor species with less than a million individuals to having decisive control over the future. So we should assume that this is possible for new entities whose intelligence vastly exceeds our own.

The case for existential risk from AI is clearly speculative. Yet a speculative case that there is a large risk can be more important than a robust case for a very low-probability risk, such as that posed by asteroids. What we need are ways to judge just how speculative it really is, and a very useful starting point is to hear what those working in the field think about this risk.

There is actually less disagreement here than first appears. Those who counsel caution agree that the timeframe to AGI is decades, not years, and typically suggest research on alignment, not government regulation. So the substantive disagreement is not really over whether AGI is possible or whether it plausibly could be a threat to humanity. It is over whether a potential existential threat that looks to be decades away should be of concern to us now. It seems to me that it should.

The best window into what those working on AI really believe comes from the 2016 survey of leading AI researchers: 70 percent agreed with University of California, Berkeley professor Stuart Russell’s broad argument about why advanced AI with misaligned values might pose a risk; 48 percent thought society should prioritize AI safety research more (only 12 percent thought less). And half the respondents estimated that the probability of the long-term impact of AGI being “extremely bad (e.g. human extinction)” was at least 5 percent.

I find this last point particularly remarkable—in how many other fields would the typical leading researcher think there is a 1 in 20 chance the field’s ultimate goal would be extremely bad for humanity? There is a lot of uncertainty and disagreement, but it is not at all a fringe position that AGI will be developed within 50 years and that it could be an existential catastrophe.

Even though our current and foreseeable systems pose no threat to humanity at large, time is of the essence. In part this is because progress may come very suddenly: Through unpredictable research breakthroughs, or by rapid scaling-up of the first intelligent systems (for example, by rolling them out to thousands of times as much hardware, or allowing them to improve their own intelligence). And in part it is because such a momentous change in human affairs may require more than a couple of decades to adequately prepare for. In the words of Demis Hassabis, co-founder of DeepMind:

We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come. The time we have now is valuable, and we need to make use of it.

Toby Ord is a philosopher and research fellow at the Future of Humanity Institute, and the author of The Precipice: Existential Risk and the Future of Humanity.

From the book The Precipice by Toby Ord. Copyright © 2020 by Toby Ord. Reprinted by permission of Hachette Books, New York, NY. All rights reserved.

Lead image: Titima Ongkantong / Shutterstock






outbreak

How COVID-19 Will Pass from Pandemic to Prosaic - Issue 84: Outbreak


On January 5, six days after China officially announced a spate of unusual pneumonia cases, a team of researchers at Shanghai’s Fudan University deposited the full genome sequence of the causal virus, SARS-CoV-2, into GenBank. A little more than three months later, 4,528 genomes of SARS-CoV-2 have been sequenced,1 and more than 883 COVID-related clinical trials2 for treatments and vaccines have been established. The speed with which these trials will deliver results is unknown—the delicate balance of efficacy and safety can only be pushed so far before the risks outweigh the benefits. For this reason, a long-term solution like vaccination may take years to come to market.3

The good news is that a lack of treatment doesn’t preclude an end to the ordeal. Viral outbreaks of Ebola and SARS, neither of which had readily available vaccines, petered out through the application of consistent public health strategies—testing, containment, and long-term behavioral adaptations. Today countries that have previously battled the 2002 SARS epidemic, like Taiwan, Hong Kong, and Singapore, have shown exemplary recovery rates from COVID. Tomorrow, countries with high fatality rates like Sweden, Belgium, and the United Kingdom will have the opportunity to demonstrate what they’ve learned when the next outbreak comes to their shores. And so will we.

The first Ebola case was identified in 1976,4 when a patient with hemorrhagic symptoms arrived at the Yambuku Mission Hospital, located in what is now the Democratic Republic of Congo (DRC). Patient samples were collected and sent to several European laboratories that specialized in rare viruses. Scientists, without sequencing technology, took about five weeks to identify the agent responsible for the illness as a new member of the highly pathogenic Filoviridae family.

The first Ebola outbreak sickened 686 individuals across the DRC and neighboring Sudan; 453 of the patients died, for a final case fatality rate (CFR)—the number of dead out of the number sickened—of 66 percent. Despite the lethality of the virus, sociocultural interventions—lockdowns, contact-tracing, campaigns to change funeral rites, and restrictions on consumption of game meat—all proved effective in the long run.

That is, until 2014, when there was an exception to the pattern. Ebola appeared in Guinea, a small country in West Africa, whose population had never before been exposed to the virus. The closest epidemic had been in Gabon, 13 years before and 2,500 miles away. Over the course of two years, the infection spread from Guinea into Liberia and Sierra Leone, sickening more than 24,000 people and killing more than 10,000.

Countries that have previously battled the 2002 SARS epidemic, like Taiwan and Hong Kong, have shown exemplary recovery rates.

During the initial phase of the 2014 Ebola outbreak, rural communities were reluctant to cooperate with government directives for how to care for the sick and the dead. To help incentivize behavioral changes, sociocultural anthropologists like Mariane Ferme of the University of California, Berkeley, were brought in to advise the government. In a recent interview with Nautilus, Ferme indicated that strategies that allowed rural communities to remain involved with their loved ones increased cooperation. Villages located far from the capital, she said, were encouraged to “deputize someone to come to the hospital, to come to the burial, so they could come back to the community and tell the story of the body.” For communities that couldn’t afford to send someone to the capital, she saw public health officials adopt a savvy technological solution—tablets to record video messages that were carried between convalescent patients and their families.

However, there were also systemic failures that, in Ferme’s opinion, contributed to the severity of the 2014 West African epidemic. In Sierra Leone, she said, “the big mistake early on was to distribute [weakly causal] information about zoonotic transmission, even when it was obviously community transmission.” In other words, although there had been an instance of zoonotic transmission—the virus jumping from a bat to a human—that initiated the epidemic, the principal danger was other contagious individuals, not game meat. Eventually, under pressure from relief groups, the government changed its messaging to reflect scientific consensus.

But the retraction shook public faith in the government and bred resentment. The mismatch between messaging and reality mirrors the current pandemic. Since the COVID outbreak began, international and government health officials have issued mixed messages. Doubts initially surfaced about whether the virus could spread from person to person, and the debate over the effectiveness of masks in preventing infection continues.

Despite the confused messaging, there has been general compliance with stay-at-home orders that has helped flatten the curve. Had the public been less trusting of government directives, the outcome could have been disastrous, as it was in Liberia in 2014. After a two-week lockdown was announced, the Liberian army conducted house-to-house sweeps to check for the sick and collect the dead. “It was a draconian method that made people hide the sick and dead in their houses,” Ferme said. People feared their loved ones would be buried without the proper rites. A direct consequence was a staggering number of active cases, and an unknown extent of community transmission. But in the end, the benchmark for the end of Ebola and SARS was the same. The WHO declared victory when the rate of new cases slowed, then stopped. By the same measure, when an entire 14-day quarantine period passes with no new cases of COVID-19, it can be declared over.

It remains possible that even if we manage to end the epidemic, it will return. Driven by novel zoonotic transmissions, Ebola has flared up every few years. Given the extent of COVID-19’s spread, and the potential for the kinds of mutations that allow for re-infection, it may simply become endemic.

Two factors that will play into the final outcome of COVID-19 are pathogenicity and virulence. Pathogenicity is the ability of an infectious agent to cause disease in the host, and is measured by R0—the number of new infections each patient can generate. Virulence, on the other hand, is the amount of harm the infectious agent can cause, and is best measured by CFR. While the pathogenicity of Ebola, SARS, and SARS-CoV-2 is on the same order (somewhere between 1 and 3 new infections for each patient), virulence differs greatly between the two SARS viruses and Ebola.

The case fatality rate for an Ebola infection is between 60 and 90 percent. The spread in CFR is due to differences in infection dynamics between strains. The divergent virulence of Ebola and SARS is largely a matter of viral tropism, meaning the cells that a virus attacks. The mechanism by which the Ebola virus gains entry into cells is not fully understood, but the virus has been shown to preferentially target immune and epithelial cells.5 In other words, the virus first destroys the body’s ability to mount a defense, and then destroys the delicate tissues that line the vascular system. Patients bleed freely and most often succumb to the low blood pressure that results from severe fluid loss. However, neither SARS nor SARS-CoV-2 attacks the immune system directly. Instead, they enter lung epithelial cells through the ACE2 receptor, which results in a lower CFR. What is interesting about these coronaviruses is that despite their similar modes of infection, they demonstrate a range of virulence: SARS had a final CFR of 10 percent, while SARS-CoV-2 has a provisional CFR of 1.4 percent. Differences in virulence between the 2002 and 2019 SARS outbreaks could be attributed to varying levels of care among countries.

The chart above displays WHO data on the relationship between the total number of cases in a country and the CFR during the 2002-2003 SARS-CoV epidemic. South Africa, on the far right, had only a single case. The patient died, which resulted in a 100 percent CFR. China, on the other hand, had 5,327 cases and 349 deaths, giving a 7 percent CFR. The chart below zooms in on the bottom-left corner of the graph to better resolve critically affected countries: those with fewer than 1,000 cases but a high CFR.

Here is Hong Kong, with 1,755 cases and a 17 percent CFR. There is also Taiwan, with 346 cases and an 11 percent CFR. Finally, nearly tied with Canada is Singapore with 238 cases and a 14 percent CFR.

With COVID-19, it’s apparent that outcome reflects experience. China has 82,747 cases of COVID-19 but has lowered its CFR to 4 percent. Hong Kong has 1,026 cases and a 0.4 percent CFR. Taiwan has 422 cases at a 1.5 percent CFR, and Singapore, with 8,014 cases, has a 0.13 percent CFR.
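
The arithmetic behind all of these figures is the same: divide deaths by confirmed cases. Here is a minimal sketch in Python, using counts quoted above (the cfr helper name is ours):

# Case fatality rate: the number of dead divided by the number sickened.
def cfr(deaths, cases):
    return deaths / cases

print(f"First Ebola outbreak: {cfr(453, 686):.0%}")   # ~66 percent
print(f"SARS, China:          {cfr(349, 5327):.1%}")  # rounds to ~7 percent
print(f"SARS, South Africa:   {cfr(1, 1):.0%}")       # a single, fatal case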

It was the novel coronavirus identification program established in China in the wake of the 2002 SARS epidemic that alerted authorities to SARS-CoV-2 back in November of 2019. The successful responses by Taiwan, Hong Kong, and Singapore can also be attributed to a residual familiarity with the dangers of an unknown virus, and the sorts of interventions that are necessary to prevent a crisis from spiraling out of control.

In West Africa, too, they seem to have learned the value of being prepared. When Ferme returned to Liberia on March 7, she encountered airport staff fully protected with gowns, head covers, face screens, masks, and gloves. By the time she left the country, 10 days later, she said, “Airline personnel were setting up social distancing lines, and [rural vendors] hawking face masks. Motorcycle taxi drivers, the people most at risk after healthcare workers—all had goggles and face masks.”

The sheer number of COVID-19 cases indicates the road to recovery will take some time. Each case must be identified and quarantined, and all contacts traced and tested. Countries that failed to act swiftly, allowing their case numbers to spiral out of control, will pay in lives and dollars. Northwestern University economists Martin Eichenbaum et al. modeled6 the cost of a yearlong shutdown at $4.2 trillion, a cost that proactive countries will not face. A recent Harvard study7 published in Science suggests the virus will likely make seasonal appearances going forward, potentially requiring new waves of social distancing. In other words, initial hesitancy will have repercussions for years. In the future, smart containment principles,6 where restrictions are applied on the basis of health status, may temper the impact of these measures.

Countries that failed to act swiftly, allowing their case numbers to spiral out of control, will pay in lives and dollars.

Inaction was initially framed as promoting herd immunity, where the spread of the virus is interrupted once enough of the population has become immune. This is because getting the virus results in the same antibody production process as getting vaccinated—but doesn’t require the development of a vaccine. The Johns Hopkins Bloomberg School of Public Health estimates that 70 percent of the population will need to be infected with or vaccinated against the virus8 for herd immunity to work. Progress toward that threshold has been slow, and without a vaccine it can only be achieved through direct infection, meaning many will die. A Stanford University study in Santa Clara County9 suggests only 2.5 to 4.2 percent of the population have had the virus. A study of another COVID hotspot, Gangelt, Germany, suggests 15 percent10—higher, but still nowhere near the 70 percent necessary for herd immunity. Given the dangers inherent in waiting on herd immunity, our best hope is a vaccine.
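
The 70 percent figure follows from a standard approximation that the text doesn’t spell out: once a fraction 1 - 1/R0 of the population is immune, each case infects fewer than one new person on average and the outbreak recedes. A sketch, with illustrative R0 values:

# Herd immunity threshold (standard approximation): HIT = 1 - 1/R0.
# R0 values here are illustrative; an R0 near 3.3 yields the ~70 percent
# threshold cited by Johns Hopkins.
def herd_immunity_threshold(r0):
    return 1 - 1 / r0

for r0 in (2.0, 2.5, 3.3):
    print(f"R0 = {r0}: ~{herd_immunity_threshold(r0):.0%} immune to halt spread")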

A key concern for effective vaccine development is viral mutation. This is because vaccines train the immune system to recognize specific shapes on the surface of the virus—a composite structure called the antigen. Mutations threaten vaccine development because they can change the shape of the relevant antigen, effectively allowing the pathogen to evade immune surveillance. So far, though, SARS-CoV-2 has been mutating slowly, with only one mutation found in the region most accessible to the immune system: the spike protein. This suggests the viral genome may be sufficiently stable for vaccine development.

What we know, though, is that Ebola was extinguished due to cooperation between public health officials and community leaders. SARS-CoV ended when all cases were identified and quarantined. The Spanish Flu in 1918 vanished after two long, deadly seasons.

The final outcome of COVID-19 is still unclear. It will ultimately be decided by our patience and the financial bottom line. With 26 million unemployed and protests erupting around the country, it seems there are many who would prefer to risk life and limb rather than face financial insolvency. Applying smart containment principles in the aftermath of the shutdown might be the best way to get the economy moving again, while maintaining the safety of those at greatest risk. Going forward, vigilance and preparedness will be the watchwords of the day, and the most efficient way to prevent social and economic ruin.

Anastasia Bendebury and Michael Shilo DeLay did their PhDs at Columbia University. Together they created Demystifying Science, a science literacy organization devoted to providing clear, mechanistic explanations for natural phenomena. Find them on Twitter @DemystifySci.

References

1. Genomic epidemiology of novel coronavirus - Global subsampling. Nextstrain www.nextstrain.org.

2. Covid-19 TrialsTracker. TrialsTracker www.trialstracker.net.

3. Struck, M. Vaccine R&D success rates and development times. Nature Biotechnology 14, 591-593 (1996).

4. Breman, J. & Johnson, K. Ebola then and now. The New England Journal of Medicine 371 1663-1666 (2014).

5. Baseler, L., Chertow, D.S., Johnson, K.M., Feldmann, H., & Morens, D.M. The pathogenesis of Ebola virus disease. Annual Review of Pathology 12, 387-418 (2017).

6. Eichenbaum, M., Rebelo, S., & Trabandt, M. The macroeconomics of epidemics. National Bureau of Economic Research Working Paper 26882 (2020).

7. Kissler, S., Tedijanto, C., Goldstein, E., Grad, Y., & Lipsitch, M. Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period. Science eabb5793 (2020).

8. D’Souza, G. & Dowdy, D. What is herd immunity and how can we achieve it with COVID-19? Johns Hopkins COVID-19 School of Public Health Insights www.jhsph.edu (2020).

9. Digitale, E. Test for antibodies against novel coronavirus developed at Stanford Medicine. Stanford Medicine News Center Med.Stanford.edu (2020).

10. Winkler, M. Blood tests show 14% of people are now immune to COVID-19 in one town in Germany. MIT Technology Review (2020).

Lead image: Castleski / Shutterstock






outbreak

We Aren’t Selfish After All - Issue 84: Outbreak


What is this pandemic doing to our minds? Polls repeatedly show it’s having an adverse effect on our mental health. Physical distancing, for some, means social isolation, which has long been shown to encourage depression. Previous disasters have been followed by waves of depression, exacerbated by financial distress. The situation also puts us in a state of fear and anxiety—anxiety about financial strain, about being lonely, about our very lives and the lives of our loved ones.

This fear can also bring out some of the everyday irrationalities we all struggle with. We have trouble thinking about numbers—magnitudes, probabilities, and the like—and when frightened we tend toward absolutes. Feeling powerless makes people more prone to conspiracy theories. We naturally believe that big effects should have big causes, and we see with the current coronavirus, as we did with AIDS and SARS, conspiracy theories claiming that the virus was engineered as a weapon.

We are seeing the theory of “collective resilience,” an informal solidarity among people, in action.

These psychological ramifications can make us fail to behave as well as we should. We have what psychologists call a “behavioral immune system” that makes us behave in ways that, in general, make us less likely to catch infectious disease. Things we perceive as being risky for disease make us wary. An unfortunate side effect is that it increases prejudice against foreigners, people with visible sores or deformities, and people we perceive as simply being ugly. Politically, this can result in xenophobia and outgroup distrust. Coronavirus-related attacks, possibly encouraged by the misleading term “Chinese virus,” have plagued some ethnic Asian people.

And yet, in spite of all of the harm the pandemic seems to be wreaking on our minds, there are also encouraging acts of kindness and solidarity. In turbulent times, people come together and help each other.

A RANDOM ACT OF KINDNESS: Author Jim Davies took this photo in Centretown, Ottawa. The sign in the window reads, “Physical distancing is an act of love.” Jim Davies

In the days after the World Trade Center fell, it wasn’t just the police, hospitals, and firefighters who came forward to help; ordinary citizens often put themselves at risk to help other people out. An equities trader at the firm Sandler O’Neill helped rescue a dozen people and then went back to save more. A tour guide at the Pentagon helped victims outside, then went back into the burning building to help more. We find these kinds of behaviors in every disaster.

During this pandemic, we see the same thing. Some acts are small and thoughtful, such as putting encouraging signs in windows. Others have made games out of window signs, putting up rainbows for children on walks to count. Some show support for health care and other frontline workers, applauding or banging on pots on their balconies and at windows in a nightly ritual. Others are helping in more substantial ways. In the United Kingdom, over half a million people signed up as National Health Service volunteers, supporting the most vulnerable people, who have to stay home.

John Drury, a professor of social psychology at the University of Sussex, England, who studies people’s behavior in disasters, has seen these acts of kindness in his own neighborhood over the past month. He and his neighbors set up a WhatsApp group to help one another with shopping. “I think that translates across the country and probably across the world,” Drury says. “People are seeing themselves as an us, a new kind of we, based on the situation that we all find ourselves in. You’ve got this idea of common fate, which motivates our care and concern for others.”

We have always been a social species who rely on each other for happiness and our survival.

Drury is the pioneer of a theory known as “collective resilience,” which he describes as “informal solidarity among people in the public.” Drury’s study of the 2005 London bombing disaster found that mutual helping behaviors were more common than selfish ones. This basic finding has been replicated in other disasters, including the Ghana football stadium disaster and the 2010 earthquake and tsunami in Chile. In disasters, Drury says, people reach heights of community and cooperation they’ve never reached before.

It turns out that being in a dangerous situation with others fosters a new social identity. Boundaries between us, which seem so salient when things are normal, disappear when we perceive we’re locked in a struggle together, with a common fate, from an external threat. People go from me thinking to we thinking. Respondents in studies about disasters often spontaneously bring up this feeling of group cohesion without being asked. The greater unity they felt, the more they helped.

Popular conceptions of how people respond in a crisis involve helplessness, selfishness, and panic. In practice, though, this rarely happens. “One of the reasons people die in emergencies isn’t overreaction, it’s underreaction,” Drury says. “People die in fires mainly because they’re too slow. They underestimate risk.” The myth of panic can lead to emergency policies that do more harm than good. At one point during Hurricane Katrina, Louisiana’s governor at the time, Kathleen Blanco, warned looters that National Guard troops “know how to shoot and kill, and they are more than willing to do so if necessary, and I expect they will.” A few days later, New Orleans police officers shot six civilians, wounding four and killing two.

People revert to selfishness when group identity starts to break down. Drury describes how people acted when the cruise ship Costa Concordia sank off the coast of Italy in 2012. “There was cooperation until one point, when people got to the lifeboats and there was pushing,” Drury says. “Selfishness isn’t a default because many times people are cooperative. It’s only in certain conditions that people might become selfish and individualistic. Perhaps there isn’t a sense of common fate, people are positioned as individuals against individuals. After a period of time, people run out of energy, run out of emotional energy, run out of resources, and that goodwill, that support, starts to decline. They just haven’t got the resources to help each other.”

Perceptions of group behavior can shape public policy. It’s important that policymakers, rather than seeing groups as problems to be overcome (a view that expects riots and mob behavior), take account of how people in groups help one another. After all, we have always been a social species who rely on each other for happiness and our survival. And groups can achieve things that individuals cannot. This understanding couldn’t be more important than now. We can build on people’s naturally arising feelings of unity by emphasizing that we are all in this together, and celebrating the everyday heroes who, sometimes at great cost, go out of their way to make the pandemic a little less awful.

Jim Davies is a professor of cognitive science at Carleton University and author of Imagination: The Science of Your Mind’s Greatest Power. He is co-host of the Minding the Brain podcast.

Lead image: Franzi / Shutterstock






outbreak

Guided by Plant Voices - Issue 84: Outbreak


Plants are intelligent beings with profound wisdom to impart—if only we know how to listen. And Monica Gagliano knows how to listen. The evolutionary ecologist has done groundbreaking experiments suggesting plants have the capacity to learn, remember, and make choices. That’s not all. Gagliano, a senior research fellow at the University of Sydney in Australia, talks to plants. And they talk back. Plants summon her with instructions on how to live and work. Some of Gagliano’s conversations happened in prophetic dreams, which led her to study with a shaman in Peru while tripping on psychoactive plants.

Along with forest scientists like Suzanne Simard and Peter Wohlleben, Gagliano raises profound scientific and philosophical questions about the nature of intelligence and the possibility of “vegetal consciousness.” But what’s unusual about Gagliano is her willingness to talk about her experiences with shamans and traditional healers, along with her use of psychedelics. For someone who’d already received fierce pushback from other scientists, it was hardly a safe career move to reveal her personal experiences in otherworldly realms.

Gagliano considers her explorations in non-Western ways of seeing the world to be part of her scientific work. “Those are important doors that you need to open and you either walk through or you don’t,” she told me. “I simply decided to walk through.” Sometimes, she said, certain plants have given her precise directions on how to conduct her experiments, even telling her which plant to study. But it hasn’t been easy. “Like Alice, [I] found myself tumbling down a rather strange rabbit hole,” she wrote in a 2018 memoir, Thus Spoke the Plant. “I did doubt my own sanity many times, especially when all these odd occurrences started—and yet I know I do not suffer from psychoses.”

Shortly before the COVID-19 lockdown, I talked with Gagliano at Dartmouth College, where she was a visiting scholar. We spoke about her experiments, the new field of plant intelligence, and her own experiences of talking with plants.

PAVLOV’S PEAS: Monica Gagliano sketches a pea plant in her lab at the University of Sydney (above). She conducted experiments with pea plants to determine if, like Pavlov’s famous dogs, the plants learned to anticipate food. They did. “Although they do not salivate,” Gagliano says. Scene from the upcoming documentary, AWARE. ©umbrellafilms.org

You are best known for an experiment with Mimosa pudica, commonly known as the “sensitive plant,” which instantly closes its leaves when it’s touched. Can you describe your experiment?

I built a little contraption that allowed me to drop the plants from a height of maybe 15 centimeters. So it’s not too high. When they fall, they land in a softly padded base. This plant closes its leaves when disturbed, especially if the disturbance is a potential predator. When the leaves are closed, big, spiny, pointy things stick out, so they might deter a predator. In fact, they not only close the leaf, but literally droop, like, “Look, I’m dead. No juice for you here.”

You did this over and over, dropping the plants repeatedly.

Exactly. It makes no sense for a plant or animal to repeat a behavior that is actually useless, so we learn pretty quick that whatever is useless, you don’t do anymore. You’re wasting a lot of energy trying to do something that doesn’t actually help. So, can the plant—in this case, Mimosa—learn not to close the leaves when the potential predator is not real and there are no bad consequences afterward?

After how many drops did they stop closing their leaves?

The test is for a specific type of learning that is called habituation. I decided they would be dropped continuously for 60 times. Then there was a big pause to let them rest and I did it again. But the plants were already re-opening their leaves after the first three to six drops. So within a few minutes, they knew exactly what was going on—like, “Oh my god, this is really annoying but it doesn’t mean anything, so I’m just not going to bother closing. Because when my leaves are open, I can eat light.” So there is a tradeoff between protecting yourself when the threat is real and continuing to feed and grow. I left the plants undisturbed for a month and then came back and repeated the same experiment on those individuals. And they showed they knew exactly what was going on. They were trained.

This is who I am. And nobody has the right to tell me that it’s not real.

You say these plants “understand” and “learn” that there’s no longer a threat. And you’re suggesting they “remember.” You’re not using these words metaphorically. You mean this literally?

Yes, that’s what they’re doing. This is definitely memory. It’s the same kind of experiment we do with a bee or a mouse. So using the words “memory” and “learning” feels totally appropriate. I know that some of my colleagues accuse me of anthropomorphizing, but there is nothing anthropomorphic about this. These are terms that refer to certain processes. Memory and learning are not two separate processes. You can’t learn unless you remember. So if a plant is ticking all the boxes and doing what you would expect a rat or a mouse or a bee to do, then the test is being passed.

Do you think these plants are actually making decisions about whether or not to close their leaves?

This experiment with Mimosa wasn’t designed to test that specific question. But later, I did experiments with other plants, with peas in particular, and yes, there is no doubt the plants make choices in real decision-making. This was tested in the context of a maze, where the test is actually to make a choice between left and right. The choice is based on what you might gain if you choose one side or the other. I did one study with peas that showed the plants can choose the right arm in a maze based on where the sound of water is coming from. Of course, they want water. So they will use the signal to follow that arm of the maze as they try to find the source of water.

So plants can hear water?

Oh, yeah, of course. And I’m not talking about electrical signals. We have also discovered that plants emit their own sounds. The acoustic signal comes out of the plant.

What kind of sounds do they make?

We call them clicks, but this is where language might fail because we are trying to describe something we’re not familiar enough with to create the language that really describes the picture. We worked out that, yes, plants not only produce their own sound, which is amazing, but they are listening to sounds. We are surrounded by sound, so there are studies, like my own study, of plants moving toward certain frequencies and then responding to sounds of potential predators chewing on leaves, which other plants that are not yet threatened can hear. “Oh, that’s a predator chewing on my neighbor’s leaves. I better put my defenses up.” And more recently, there was some work done in Israel on the sound of bees and how flowers prepared themselves and become very nice and sweet, literally, to be more attractive to the bee. So the level of sugars gets increased as a bee passes by.

SECRET LIFE OF PLANTS: Monica Gagliano says her experiences with indigenous people, such as the Huichol in Mexico (above), informed her view that plants have a range of feelings. “I don’t know if they would use those words to describe joy or sadness, but they are feeling bodies,” she says. Scene from the upcoming documentary, AWARE. ©umbrellafilms.org

You are describing a surprising level of sophistication in these plants. Do you have a working definition of “intelligence?”

That’s one of those touchy subjects. I use the Latin etymology of the word and “intelligere” literally means something like “choosing between.” So intelligence really underscores decision-making, learning, memory, choice. As you can imagine, all those words are also loaded. They belong in the cognitive realm. That’s why I define all of this work as “cognitive ecology.”

Do you see parallels between this kind of intelligence in plants and the collective intelligence that we associate with social insects in ant colonies or beehives?

That kind of intelligence might be referred to as “distributed intelligence” or “collective intelligence.” We are testing those questions right now. Plants don’t have neurons. They don’t have a brain, which is often what we assume is the base for all of these behaviors. But like slime molds and other basal animals that don’t have neural systems, they seem to be doing the same things. So the short answer is yes.

What you’re saying is very controversial among scientists. The common criticism of your views is that an organism needs a brain or at least a nervous system to be able to learn or remember. Are you saying neurons are not required for intelligence?

Science is full of assumptions and presuppositions that we don’t question. But who said the brain and the neurons are essential for any form of intelligence or learning or cognition? Who decided that? And when I say neurons and brains are not required, it’s not to say they’re not important. For those organisms like ourselves and many animals who do have neurons and brains, it’s amazing. But if we look at the base of the animal kingdom, sponges don’t have neurons. They look like plants because when they’re adults, they settle on the bottom of the ocean and pretty much just sit there forever. Yet if you look at the sponge’s genome, they have the genetic code for the neural system. It’s almost like from an evolutionary perspective, they simply decided that developing a neural system was not useful. So they went a different way. Why would you invest that energy if you don’t need it? You can achieve the same task in different ways.

Your food is psychedelic. It changes your brain chemistry all the time.

Your critics say these are just automatic adaptive responses. This is not really learning.

You know, they just say plants do not learn and do not remember. Then you do this study and stumble on something that actually shows you otherwise. It’s the job of science to be humble enough to realize that we actually make mistakes in our thinking, but we can correct that. Science grows by correcting and modifying and adjusting what we once thought was the fact. I went and asked, can plants do Pavlovian learning? This is a higher kind of learning, which Pavlov did with his dogs salivating, expecting dinner. Well, it turns out plants actually can do it, but in a plant way. So plants do not salivate and dinner is a different kind of dinner. Can you as a scientist create the space for these other organisms to express their own, in this case, “plantness,” instead of expecting them to become more like you?

There’s an emerging field of what’s called “vegetal consciousness.” Do you think plants have minds?

What is the mind? [Laughs] You see, language is very inadequate at the moment in describing this field. I could ask you the same question in referring to humans. Do you think humans have a mind? And I could answer again, what is the mind? Of course, I have written a paper with the title “The Mind of Plants” and there is a book coming called The Mind of Plants. In this context, language is used to capture aspects of how plants can change their mind, and also whether they have agency. Is there a “person” there? These questions are relevant beyond science because they have ethical repercussions. They demand a change in our social attitude toward the environment. But I already have a problem with the language we are using because the question formulated in that way demands a yes or no answer. And what if the answer cannot be yes or no?

Let me ask the question a different way. Do you think plants have emotional lives? Can they feel pain or joy?

It’s the same question. Where do feelings arise from, and what are feelings? These are yes or no questions, usually. But to me, they are yes and no. It depends on what you mean by “feeling” and “joy.” It also depends on where you are expecting the plant to feel those things, if they do, and how you recognize them in a human way. I mean, plants might have more joy than we do. It’s just that we don’t know because we’re not plants.

We have only talked about this from the scientific perspective, which is the Western view of the world. But I’ve also had a close relationship with plants from a very different perspective, the indigenous world view. Why is that less valuable? And when you actually do explore those perspectives, they require your experience. You can’t just understand them by thinking about them. My own personal experience tells me that plants definitely feel many things. I don’t know if they would use those words to describe joy or sadness, but they are feeling bodies. We are feeling bodies.

Science is full of assumptions and presuppositions that we don’t question.

You’ve studied with shamans in indigenous cultures and you’ve taken ayahuasca and other psychoactive plants. Why did you seek out those experiences?

I didn’t. They sought me. So I just followed. They just arrived in my life. You know, those are important doors that you need to open and you either walk through or you don’t. I simply decided to walk through. I had this weird series of three dreams while I was in Australia doing my normal life. By the time the third dream came, it was very clear that the people that I was dreaming of were real people. They were waiting somewhere in this reality, in this world. And the next thing, I’m buying a ticket and going to Peru and my partner at the time is looking at me like, “What are you doing?” [laughs] I have no idea, but I need to go. As a scientist, I find this is the most scientific approach that I’ve ever had. It’s like there is something asking a question and is calling you to meet the answer. The answer is already there and is waiting for you, if you are prepared to open the door and cross through. And I did.

What did you do in Peru?

The first time I went, I found this place that was in my dream. It was just exactly the same as what I saw in my dream. It was the same man I saw in my dream, grinning in the same way as he was in my dream. So I just worked with him, trying to learn as much as I could about myself with his support.

This was a local shaman whom you identify as Don M. And there was a particular plant substance, a hallucinogen, that you took.

I did what they call a “dieta,” which is basically a quiet, intense time in isolation that you do on your own in a little hut. You are just relating with the plant that the elder is deciding on. So for me, the plant that I worked with wasn’t by itself a psychedelic in the normal way of thinking about it. But of course, all plants are psychedelic. Even your food is psychedelic because it changes your brain chemistry and your neurobiology all the time you eat. Sugars, almonds, all sorts of neurotransmitters are flying everywhere. So, again, even the idea of what a psychedelic experience is needs to be revised, because a lot of people might think that it’s only about certain plants that they have a very strong, powerful transformation. And I find that all plants are psychedelic. I can sit in my garden. I don’t have to ingest anything and I can feel very altered by that experience.

You’ve said the plant talked to you. Did you actually hear words?

When you’re trying to describe this to people who haven’t had the experience, it probably doesn’t make much sense because this kind of knowledge requires your participation. I don’t hear someone talking to me as if from the outside, talking to me in words and sound. But even that is not correct because inside my head it does sound exactly like a conversation. Not only that, but I know it’s not me. There is no way that I would know about some of the information that’s been shared with me.

Are you saying these plants had specific information to tell you about your life and your work?

Yeah, I mean, some of the plants tell me exactly how wrong I was in thinking about my experiments and how I should be doing them to get them to work. And I’m like, “Really?” I’m scribbling down without really understanding. Then I go in the lab and try what they say. And even then, there is a part of me that doesn’t really believe it. For one experiment, the one on the Pavlovian pea, I was trying to address that question the year before with a different plant. I was using sunflowers. And while I was doing my dieta with a different tree back in Peru, the plant just turned up and said, “By the way, not sunflowers, peas.” And I’m like, “what?” People always think that when you have these experiences, you’re supposed to understand the secrets of the universe. No, my plants are usually quite practical. [laughs] And they were right.

Do you think you are really encountering the consciousness of that plant? Maybe your imagination has opened up to see the world in new ways, but it’s all just a projection of your own mind. How do you know you are actually encountering another intelligence?

If you had this experience of connecting with plants the way I have described—and there are plenty of people who have—the experience is so clear that you know that it’s not you; it’s someone else talking. If you haven’t had that experience, then I can totally see it’s like, “No way, it must be your mind that makes it up.” But all I can say is that I have had exchanges with plants who have shared things about topics and asked me to do things that I have really no idea about.

What have plants asked you to do?

I’m not a medical scientist, but I’ve been given information by plants about their medical properties. And these are very specific bits of information. I wrote them in my diary. I would later check and I did find them in the medical literature: “This plant is for this and we know this.” I just didn’t know. So maybe I’m tapping into the collective consciousness.

What do you do with these kinds of personal experiences? You are a scientist who’s been trained to observe and study and measure the physical world. But this is an entirely different kind of reality. Can you reconcile these two different realities?

I think there are some presuppositions that a scientist should just explore the consensus reality that most of us experience in more or less the same way. But I don’t really have a conflict because I find this is just part of experimenting and exploring. If anything, I found that it has enriched and expanded the science I do. This is a work in progress, obviously, but I think I’m getting better at it. And in the writing of my book, which for a scientist was a very scary process because it was laying bare some parts of me that I knew would likely compromise my career forever, it also became liberating because once it was written, now the world knows. And it’s my truth. This is how I operate. This is who I am. And nobody has the right or the authority to tell me that it’s not real.

Steve Paulson is the executive producer of Wisconsin Public Radio’s nationally syndicated show “To the Best of Our Knowledge.” He’s the author of Atoms and Eden: Conversations on Religion and Science. You can subscribe to TTBOOK’s podcast here.

Lead image: kmeds7 / Shutterstock






outbreak

What’s Missing in Pandemic Models - Issue 84: Outbreak


In the COVID-19 pandemic, numerous models are being used to predict the future. But as helpful as they are, they cannot make sense of themselves. They rely on epidemiologists and other modelers to interpret them. Trouble is, making predictions in a pandemic is also a philosophical exercise. We need to think about hypothetical worlds, causation, evidence, and the relationship between models and reality.1,2

The value of philosophy in this crisis is that although the pandemic is unique, many of the challenges of prediction, evidence, and modeling are general problems. Philosophers like myself are trained to see the most general contours of problems—the view from the clouds. They can help interpret scientific results and claims and offer clarity in times of uncertainty, bringing their insights down to Earth. When it comes to predicting in an outbreak, building a model is only half the battle. The other half is making sense of what it shows, what it leaves out, and what else we need to know to predict the future of COVID-19.

Prediction is about forecasting the future, or, when comparing scenarios, projecting several hypothetical futures. Because epidemiology informs public health directives, predicting is central to the field. Epidemiologists compare hypothetical worlds to help governments decide whether to implement lockdowns and social distancing measures—and when to lift them. To make this comparison, they use models to predict the evolution of the outbreak under various simulated scenarios. However, some of these simulated worlds may turn out to misrepresent the real world, and then our prediction might be off.

In his book Philosophy of Epidemiology, Alex Broadbent, a philosopher at the University of Johannesburg, argues that good epidemiological prediction requires asking, “What could possibly go wrong?” He elaborated in an interview with Nautilus, “To predict well is to be able to explain why what you predict will happen rather than the most likely hypothetical alternatives. You consider the way the world would have to be for your prediction to be true, then consider worlds in which the prediction is false.” By ruling out hypothetical worlds in which they are wrong, epidemiologists can increase their confidence that they are right. For instance, by using antibody tests to estimate previous infections in the population, public health authorities could rule out the hypothetical possibility (modeled by a team at Oxford) that the coronavirus has circulated much more widely than we think.3

One reason the dynamics of an outbreak are often more complicated than a traditional model can predict is that they result from human behavior and not just biology.

Broadbent is concerned that governments across Africa are not thinking carefully enough about what could possibly go wrong, having for the most part implemented coronavirus policies in line with the rest of the world. He believes a one-size-fits-all approach to the pandemic could prove fatal.4 The same interventions that might have worked elsewhere could have very different effects in the African context. For instance, the economic impacts of social distancing policies on all-cause mortality might be worse because so many people on the continent suffer increased food insecurity and malnutrition in an economic downturn.5 Epidemic models only represent the spread of the infection. They leave out important elements of the social world.

Another limitation of epidemic models is that they model the effect of behaviors on the spread of infection, but not the effect of a public health policy on behaviors. The latter requires understanding how a policy works. Nancy Cartwright, a philosopher at Durham University and the University of California, San Diego, suggests that “the road from ‘It works somewhere’ to ‘It will work for us’ is often long and tortuous.”6 The kinds of causal principles that make policies effective, she says, “are both local and fragile.” Principles can break in transit from one place to the other. Take the principle, “Stay-at-home policies reduce the number of social interactions.” This might be true in Wuhan, China, but might not be true in a South African township in which the policies are infeasible or in which homes are crowded. Simple extrapolation from one context to another is risky. A pandemic is global, but prediction should be local.

Predictions require assumptions that in turn require evidence. Cartwright and Jeremy Hardie, an economist and research associate at the Center for Philosophy of Natural and Social Science at the London School of Economics, represent evidence-based policy predictions using a pyramid, where each assumption is a building block.7 If evidence for any assumption is missing, the pyramid might topple. I have represented evidence-based medicine predictions using a chain of inferences, where each link in the chain is made of an alloy containing assumptions.8 If any assumption comes apart, the chain might break.

An assumption can involve, for example, the various factors supporting an intervention. Cartwright writes that “policy variables are rarely sufficient to produce a contribution [to some outcome]; they need an appropriate support team if they are to act at all.” A policy is only one slice of a complete causal pie.9 Take age, an important support factor in causal principles of social distancing. If social distancing prevents deaths primarily by preventing infections among older individuals, wherever there are fewer older individuals there may be fewer deaths to prevent—and social distancing will be less effective. This matters because South Africa and other African countries have younger populations than do Italy or China.10

The lesson that assumptions need evidence can sound obvious, but it is especially important to bear in mind when modeling. Most epidemic modeling makes assumptions about the reproductive number, the size of the susceptible population, and the infection-fatality ratio, among other parameters. The evidence for these assumptions comes from data that, in a pandemic, is often rough, especially in early days. It has been argued that nonrepresentative diagnostic testing early in the COVID-19 pandemic led to unreliable estimates of important inputs in our epidemic modeling.11

Epidemic models also don’t model all the influences of the pathogen and of our policy interventions on health and survival. For example, what matters most when comparing deaths among hypothetical worlds is how different the death toll is overall, not just the difference in deaths due to the direct physiological effects of a virus. Left unchecked, the new coronavirus can overwhelm health systems and consume health resources needed to save non-COVID-19 patients. On the other hand, our policies have independent effects on financial welfare and access to regular healthcare that might in turn influence survival.

A surprising difficulty with predicting in a pandemic is that the same pathogen can behave differently in different settings. Infection fatality ratios and outbreak dynamics are not intrinsic properties of a pathogen; these things emerge from the three-way interaction among pathogen, population, and place. Understanding more about each point in this triangle can help in predicting the local trajectory of an outbreak.

In April, an influential data-driven model, developed by the Institute for Health Metrics and Evaluation (IHME) at the University of Washington, which uses a curve-fitting approach, came under criticism for its volatile projections and questionable assumption that the trajectory of COVID-19 deaths in American states can be extrapolated from curves in other countries.12,13 In a curve-fitting approach, the infection curve representing a local outbreak is extrapolated from data collected locally along with data regarding the trajectory of the outbreak elsewhere. The curve is drawn to fit the data. However, the true trajectory of the local outbreak, including the number of infections and deaths, depends upon characteristics of the local population as well as policies and behaviors adopted locally, not just upon the virus.
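
To see the curve-fitting idea in miniature, consider the sketch below. The IHME model itself is far more elaborate; the logistic form is just one common assumption for cumulative epidemic curves, and the "observed" data are synthetic placeholders:

import numpy as np
from scipy.optimize import curve_fit

# Cumulative deaths modeled as a logistic curve: K is the projected final
# toll, r the growth rate, t0 the inflection day.
def logistic(t, K, r, t0):
    return K / (1 + np.exp(-r * (t - t0)))

days = np.arange(40)
# Synthetic "observations": a true curve plus reporting noise.
observed = logistic(days, 1000, 0.25, 20) + np.random.normal(0, 20, days.size)

(K, r, t0), _ = curve_fit(logistic, days, observed, p0=(800, 0.2, 15))
print(f"projected final toll: ~{K:.0f} deaths")

The projection is only as good as the assumption that the local outbreak will trace the same shape as the data the curve was fitted to; local policies and behaviors can bend the true trajectory away from the extrapolation.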

Predictions require assumptions that in turn require evidence.

Many of the other epidemic models in the coronavirus pandemic are SIR-type models, a more traditional modeling approach in infectious-disease epidemiology. SIR-type models represent the dynamics of an outbreak: the transition of individuals in the population from a state of being susceptible to infection (S) to one of being infectious to others (I) and, finally, recovered from infection (R). These models simulate the real world. In contrast to the data-driven approach, SIR models are more theory-driven. The theory that underwrites them includes the mathematical theory of outbreaks developed in the 1920s and 1930s, and the qualitative germ theory pioneered in the 1800s. Epidemiologic theories impart SIR-type models with the know-how to make good predictions in different contexts.

For instance, they represent the transmission of the virus as a function of patterns of social contact as well as viral transmissibility, which depend on local behaviors and local infection-control measures, respectively. The drawback of these more theoretical models is that without good data to support their assumptions they might misrepresent reality and make unreliable projections for the future.
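
As a concrete sketch of those dynamics, the toy simulation below steps a population through the three states. The parameter values are illustrative assumptions, not estimates for any real outbreak; beta/gamma here implies an R0 of 2:

# Toy SIR model: susceptible (S) -> infectious (I) -> recovered (R).
def simulate_sir(beta, gamma, n, i0, days, dt=0.1):
    s, i, r = n - i0, float(i0), 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / n * dt  # contact patterns x transmissibility
        new_recoveries = gamma * i * dt         # 1/gamma is the infectious period
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return s, i, r, peak

# Illustrative run in a population of one million, seeded with 10 cases.
s, i, r, peak = simulate_sir(beta=0.4, gamma=0.2, n=1_000_000, i0=10, days=365)
print(f"peak simultaneous infections: {peak:,.0f}")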

One reason the dynamics of an outbreak are often more complicated than a traditional model can predict, or an infectious-disease theory can explain, is that they result from human behavior and not just human biology. Yet more sophisticated disease-behavior models can represent the behavioral dynamics of an outbreak by modeling the spread of opinions or the choices individuals make.14,15 Individual behaviors are influenced by the trajectory of the epidemic, which is in turn influenced by individual behaviors.

“There are important feedback loops that are readily represented by disease-behavior models,” Bert Baumgartner, a philosopher who has helped develop some of these models, explains. “As a very simple example, people may start to socially distance as disease spreads, then as disease consequently declines people may stop social distancing, which leads to the disease increasing again.” These looping effects of disease-behavior models are yet another challenge to predicting.
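
A minimal way to capture the feedback loop Baumgartner describes is to let the transmission rate in the SIR sketch above fall as prevalence rises and rebound as it falls; the functional form and the sensitivity value below are assumptions chosen purely for illustration:

# Disease-behavior feedback: transmission drops as prevalence rises (people
# distance) and recovers as prevalence falls (people relax), which can
# produce repeated waves rather than a single epidemic peak.
def effective_beta(base_beta, i, n, sensitivity=200.0):
    return base_beta / (1 + sensitivity * i / n)

def simulate_sir_with_behavior(base_beta, gamma, n, i0, days, dt=0.1):
    s, i, r = n - i0, float(i0), 0.0
    for _ in range(int(days / dt)):
        beta = effective_beta(base_beta, i, n)  # behavior responds to the epidemic
        new_infections = beta * s * i / n * dt  # ...and shapes its trajectory
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r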

It is a highly complex and daunting challenge we face. That’s nothing unusual for doctors and public health experts, who are used to grappling with uncertainty. I remember what that uncertainty felt like when I was training in medicine. It can be discomforting, especially when confronted with a deadly disease. However, uncertainty need not be paralyzing. By spotting the gaps in our models and understanding, we can often narrow those gaps or at least navigate around them. Doing so requires clarifying and questioning our ideas and assumptions. In other words, we must think like a philosopher.

Jonathan Fuller is an assistant professor in the Department of History and Philosophy of Science at the University of Pittsburgh. He draws on his dual training in philosophy and in medicine to answer fundamental questions about the nature of contemporary disease, evidence, and reasoning in healthcare, and theory and methods in epidemiology and medical science.

References

1. Walker, P., et al. The global impact of COVID-19 and strategies for mitigation and suppression. Imperial College London (2020).

2. Flaxman, S., et al. Estimating the number of infections and the impact of non-pharmaceutical interventions on COVID-19 in 11 European countries. Imperial College London (2020).

3. Lourenco, J., et al. Fundamental principles of epidemic spread highlight the immediate need for large-scale serological surveys to assess the stage of the SARS-CoV-2 epidemic. medRxiv:10.1101/2020.03.24.20042291 (2020).

4. Broadbent, A., & Smart, B. Why a one-size-fits-all approach to COVID-19 could have lethal consequences. TheConversation.com (2020).

5. United Nations. Global recession increases malnutrition for the most vulnerable people in developing countries. United Nations Standing Committee on Nutrition (2009).

6. Cartwright, N. Will this policy work for you? Predicting effectiveness better: How philosophy helps. Philosophy of Science 79, 973-989 (2012).

7. Cartwright, N. & Hardie, J. Evidence-Based Policy: A Practical Guide to Doing it Better. Oxford University Press, New York, NY (2012).

8. Fuller, J., & Flores, L. The Risk GP Model: The standard model of prediction in medicine. Studies in History and Philosophy of Biological and Biomedical Sciences 54, 49-61 (2015).

9. Rothman, K., & Greenland, S. Causation and causal inference in epidemiology. American Journal of Public Health 95, S144-S150 (2005).

10. Dowd, J. et al. Demographic science aids in understanding the spread and fatality rates of COVID-19. Proceedings of the National Academy of Sciences 117, 9696-9698 (2020).

11. Ioannidis, J. Coronavirus disease 2019: The harms of exaggerated information and non‐evidence‐based measures. European Journal of Clinical Investigation 50, e13222 (2020).

12. COVID-19 Projections. Healthdata.org. https://covid19.healthdata.org/united-states-of-america.

13. Jewell, N., et al. Caution warranted: Using the Institute for Health Metrics and Evaluation model for predicting the course of the COVID-19 pandemic. Annals of Internal Medicine (2020).

14. Nardin, L., et al. Planning horizon affects prophylactic decision-making and epidemic dynamics. PeerJ 4:e2678 (2016).

15. Tyson, R., et al. The timing and nature of behavioural responses affect the course of an epidemic. Bulletin of Mathematical Biology 82, 14 (2020).

Lead image: yucelyilmaz / Shutterstock






outbreak

How Science Trumps Denial - Issue 84: Outbreak


There’s an old belief that truth will always overcome error. Alas, history tells us something different. Without someone to fight for it, to put error on the defensive, truth may languish. It may even be lost, at least for some time. No one understood this better than the renowned Italian scientist Galileo Galilei.

It is easy to imagine the man who for a while almost single-handedly founded the methods and practices of modern science as some sort of Renaissance ivory-tower intellectual, uninterested and unwilling to sully himself by getting down into the trenches in defense of science. But Galileo was not only a relentless advocate for what science could teach the rest of us. He was also a master of outreach and a brilliant pioneer in the art of getting his message across.

Today it may be hard to believe that science needs to be defended. But a political storm that denies the facts of science has swept across the land. This denialism ranges from the initial response to the COVID-19 pandemic to the reality of climate change. It’s heard in the preposterous arguments against vaccinating children and Darwin’s theory of evolution by means of natural selection. The scientists putting their careers, reputations, and even their health on the line to educate the public can take heart from Galileo, whose courageous resistance led the way.

STAND UP FOR SCIENCE: Participants in the annual March for Science make Galileo proud, protesting those in power who have devalued and eroded science. (Above: Washington, D.C., 2017) bakdc / Shutterstock

A crucial first step, one that took Galileo a bit of time to take, was to switch from publishing his findings in Latin, as was the custom for scientific writings at the time, to the Italian vernacular, the speech of the common people. This enabled not just the highly educated elite but anyone who was intellectually curious to hear and learn about the new scientific work. Even when risking offense (which Galileo never shied away from)—for instance, in responding to a German Jesuit astronomer who disagreed with him on the nature of sunspots (mysterious dark areas observed on the surface of the sun)—Galileo replied in the vernacular, because, as he explained, “I must have everyone able to read it.” An additional motive may have been that Galileo wanted to ensure that no one would somehow distort the meaning of what he had written.

Galileo also understood that while the Church had the pomp and magic of decades of art and music, science had the enchantment of a new invention—the telescope. Even he wasn’t immune to its seductive powers, writing in his famous booklet The Sidereal Messenger: “In this short treatise I propose great things for inspection and contemplation by every explorer of Nature. Great, I say, because of the excellence of the things themselves, because of their newness, unheard of through the ages, and also because of the instrument with the benefit of which they make themselves manifest to our sight.” And that gave him his second plan for an ambitious outreach campaign.

With alternative facts acting like real facts, there are Galileo’s heirs, throwing up their hands at attempts to make lies sound like truth.

What if he could distribute telescopes (together with detailed instructions for their use and his booklet about the discoveries) all across Europe, so that all the influential people, that is, the patrons of scientists such as dukes and cardinals, could observe with their own eyes far out into the heavens? They would see the stunning craters and mountains that cover the surface of the moon, four previously unseen satellites of Jupiter, dark spots on the surface of the sun, and the vast number of stars that make up the Milky Way.

But telescopes were both expensive and technically difficult to produce. Their lenses had to be of the highest quality, to provide both the ability to see faint objects and high resolution. “Very fine lenses that can show all observations are quite rare and, of the more than sixty I have made, with great effort and expense, I have only been able to retain a very small number,” Galileo wrote on March 19, 1610. Who would front the cost of such a monumental and risky project?

Today the papacy is arguably the single most influential and powerful religious institution in the world. But its power is mostly in the moral and religious realms. In Galileo’s time, the papacy was a political power of significance, gobbling up failed dukedoms elsewhere, merging them into what became known as the “papal states.” The persons with the greatest interest in appearing strong in front of the papacy were the heads of neighboring states at the time.

So it is not surprising that Galileo presented his grandiose scheme to the Tuscan court and the Grand Duke Cosimo II de’ Medici. Nor is it surprising that Cosimo agreed to finance the manufacturing of all the telescopes. On his own, he also instructed the Tuscan ambassadors to all the major European capitals to help publicize Galileo’s discoveries. In doing so he tied the House of Medici, ruler of the foundational city of the Renaissance, Florence, to modern science. A win-win for both the Grand Duke and Galileo.

Last, Galileo instinctively understood what modern PR specialists refer to as the “quick response.” He did not let even one unkind word be said about his discoveries without an immediate reply. And his pen could be sharp.

For example, the Jesuit mathematician Orazio Grassi (hiding behind the pseudonym of Sarsi) published a book entitled The Astronomical and Philosophical Balance, in which he criticized Galileo’s ideas on comets and on the nature of heat. In it, Grassi mistakenly thought that he would strengthen his argument by citing a legendary tale about the ancient Babylonians cooking eggs by whirling them on slings.

Really?

Galileo responded with a stupendous piece of polemic literature entitled The Assayer, in which he pounced on this fabled story like a cat on a mouse.

“If Sarsi wishes me to believe, on the word of Suidas [a Greek historian], that the Babylonians cooked eggs by whirling them rapidly in slings, I shall believe it; but I shall say that the cause of this effect is very far from the one he attributes to it,” he wrote. “To discover the true cause, I reason as follows: ‘If we do not achieve an effect which others formerly achieved, it must be that we lack something in our operation which was the cause of this effect succeeding, and if we lack one thing only, then this alone can be the true cause. Now we do not lack eggs, or slings, or sturdy fellows to whirl them, and still they do not cook, but rather cool down faster if hot. And since we lack nothing except being Babylonians, then being Babylonian is the cause of the egg hardening.’”

Galileo understood what modern PR specialists refer to as the “quick response.” He did not let one unkind word go without an immediate reply.

Did Galileo’s efforts save science from being cast aside perhaps for decades, even centuries? Unfortunately, not quite. The trial in which he was convicted by the Inquisition for “vehement suspicion of heresy” exerted a chilling effect on progress in deciphering the laws governing the cosmos. The famous French philosopher and scientist René Descartes wrote in a letter: “I inquired in Leiden and Amsterdam whether Galileo’s World System was available, for I thought I had heard that it was published in Italy last year. I was told that it had indeed been published, but that all the copies had immediately been burnt in Rome, and that Galileo had been convicted and fined. I was so astonished at this that I almost decided to burn all my papers, or at least to let no one see them.”

I suspect there are still too few of us who can say exactly what Galileo discovered and why he is such an important figure in the birth of modern science. But around the world, in conversations as brittle as today’s politics, where alternative facts pass for real ones, Galileo’s heirs throw up their hands at attempts to make lies seem like the truth and, worse, the truth like a lie, and respond with just four words: “And yet it moves.”

Galileo may never actually have uttered these words. He certainly didn’t say them in front of the Inquisitors—that would have been insanely dangerous. But whether the motto came first from his own mouth, from a supporter he met during the years the Church held him under house arrest after his trial, or from a later historian, we know one thing for sure: that motto represents everything Galileo stood for. Its message is clear: In spite of what you may believe, these are the facts. That science won in the end is not solely because of the methods and rules that Galileo set out for what we accept to be true. Science prevailed because Galileo put his life and his personal freedom on the line to defend it.

Mario Livio is an astrophysicist and author. His new book is Galileo: And the Science Deniers.

Lead image: Mario Breda / Shutterstock


Read More…




outbreak

Don’t Fear the Robot - Issue 84: Outbreak


You probably know my robot. I’ve been inventing autonomous machines for over 30 years and one of them, Roomba from iRobot, is quite popular. During my career, I’ve learned a lot about what makes robots valuable, and formed some strong opinions about what we can expect from them in the future. I can also tell you why, contrary to popular apocalyptic Hollywood images, robots won’t be taking over the world anytime soon. But that’s getting ahead of myself. Let me back up.

My love affair with robots began in the early 1980s when I joined the research staff at MIT’s Artificial Intelligence Lab. Physics was my college major but after a short time at the lab the potential of the developing technology seduced me. I became a roboticist.

Such an exhilarating place to work! A host of brilliant people were researching deep problems, fascinating algorithms, and amazingly clever mechanisms, and it was all converging into capable mobile robots. The future seemed obvious. So I made a bold prediction and told all my friends, “In three to five years, robots will be everywhere doing all sorts of jobs.”

But I was wrong.

Again and again in those early years, news stories teased: “Big Company X has demonstrated a prototype of Consumer Robot Y. X says Y will be available for sale next year.” But somehow next year didn’t arrive. Through the 1980s and 1990s, robots never managed to find their way out of the laboratory. This was distressing to a committed robot enthusiast. Why hadn’t all the journal papers, clever prototypes, and breathless news stories culminated in a robot I could buy in a store?

Let me answer with the story of the first consumer robot that did achieve marketplace stardom.

RUG WARRIOR: Joe Jones built his “Rug Warrior” (above) in 1989. He calls it “the earliest conceptual ancestor of Roomba.” It included bump sensors and a carpet sweeper mechanism made from a bottle brush. It picked up simulated dirt at a demonstration but, Jones says, “was not robust enough to actually clean my apartment as I had hoped.” Courtesy of Joe Jones

In the summer of 1999, while working at iRobot, a colleague, Paul Sandin, and I wrote a proposal titled “DustPuppy, A Near-Term, Breakthrough Product with High Earnings Potential.” We described an inexpensive little robot, DustPuppy, that would clean consumers’ floors by itself. Management liked the idea and gave us $10,000 and two weeks to build a prototype.

Using a cylindrical brush, switches, sensors, motors, and a commonplace microprocessor, we assembled our vision. At the end of an intense fortnight we had it—a crude version of a robot that conveyed a cleaning mechanism around the floor and—mostly—didn’t get stuck. Management saw the same promise in DustPuppy as Paul and I did.

We called our robot DustPuppy for a reason. This was to be the world’s first significant consumer robot and the team’s first attempt at a consumer product. The risk was that customers might expect too much and that we might deliver too little. We were sure that—like a puppy—our robot would try very hard to please but that also—like a puppy—it might sometimes mess up. Calling it DustPuppy was our way of setting expectations and hoping for patience if our robot wasn’t perfect out of the gate. Alas, iRobot employed a firm to find a more commercial name. Many consumer tests later, DustPuppy became Roomba. The thinking was that the robot’s random motion made it appear to be dancing around the room—doing the rumba.

Paul and I knew building a robotic floor cleaner entails fierce challenges not apparent to the uninitiated. Familiar solutions that work well for people can prove problematic when applied to a robot.

Your manual vacuum likely draws 1,400 watts or 1.9 horsepower from the wall socket. In a Roomba-sized robot, that sort of mechanism would exhaust the battery in about a minute. Make the robot bigger, to accommodate a larger battery, and the robot won’t fit under the furniture. Also, batteries are expensive—the cost of a big one might scuttle sales. We needed innovation.
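For a rough sense of scale, here is a back-of-the-envelope sketch; the 14.4 V, 2 Ah pack is my assumption for a Roomba-class battery, not a figure from the article:

```latex
% Assumed pack: 14.4 V, 2 Ah (illustrative, not from the article)
E \approx 14.4\,\mathrm{V} \times 2\,\mathrm{Ah} \approx 29\,\mathrm{Wh}
\qquad
t \approx \frac{29\,\mathrm{Wh}}{1400\,\mathrm{W}} \approx 0.02\,\mathrm{h} \approx 75\,\mathrm{s}
```

Under those assumptions a mains-power vacuum mechanism really would flatten the battery in about a minute, which is why the design had to start somewhere else entirely.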

Melville Bissell, who patented the carpet sweeper in 1876, helped us out. We borrowed from his invention to solve Roomba’s energy problem. A carpet sweeper picks up dirt very efficiently. Although you supply all the power, you won’t work up a sweat pushing one around. (If you supplied the entire 1.9 horsepower a conventional vacuum needs, you’d do a lot of sweating!)

When designers festoon their robots with anthropomorphic features, they are making a promise no robot can keep.

We realized that our energy-efficient carpet sweeper would not clean as quickly or as deeply as a powerful vacuum. But we thought, if the robot spends enough time doing its job, it can clean the surface dirt just as well. And if the robot runs every day, the surface dirt won’t work into the carpet. Roomba matches a human-operated vacuum by doing the task in a different way.

Any robot vacuum must do two things: 1) not get stuck, and 2) visit every part of the floor. The first imperative we satisfied in part by making Roomba round with its drive wheels on the diameter. The huge advantage of this shape is that Roomba can always spin in place to escape from an object. No other shape enables such a simple, reliable strategy. The second imperative, visiting everywhere, requires a less obvious plan.

You move systematically while cleaning, only revisiting a spot if that spot is especially dirty. Conventional wisdom says our robot should do the same—drive in a boustrophedon pattern. (This cool word means writing lines in alternate directions, left to right, right to left, like an ox turns in plowing.) How to accomplish this? We received advice like, “Just program the robot to remember where it’s been and not go there again.”

Such statements reveal a touching faith that software, unaided, can solve any technical problem. But try this exercise (in a safe place, please!). Stand at a marked starting point and pick a center point, say, six feet to your left. Now close your eyes and walk a big circle around that center. How close did you come to returning to your starting point? Just like you, a robot can’t position itself in the world without appropriate sensors. Better solutions are available today, but circa 2000 a position-sensing system would have added over $1,000 to Roomba’s cost. So boustrophedon paths weren’t an option. We had to make Roomba do its job without knowing where it was.
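A tiny simulation makes the point. This is an illustrative sketch, not iRobot code; the step count and the half-degree heading noise are assumptions chosen only to show how dead-reckoning error compounds:

```python
# Why dead reckoning fails without position sensors: a small random
# heading error on every step compounds into a large closure error.
import math
import random

def walk_circle(steps=360, radius_ft=6.0, heading_noise_deg=0.5):
    """Command a closed circle; return distance from the intended endpoint."""
    x = y = heading = 0.0
    step_len = 2 * math.pi * radius_ft / steps  # commanded step length
    turn = 2 * math.pi / steps                  # commanded turn per step
    for _ in range(steps):
        # Each step, the executed heading differs slightly from the command.
        heading += turn + math.radians(random.gauss(0, heading_noise_deg))
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
    # The commanded path returns exactly to the origin, so any
    # distance from (0, 0) is pure accumulated dead-reckoning error.
    return math.hypot(x, y)

errors = [walk_circle() for _ in range(100)]
print(f"mean closure error: {sum(errors) / len(errors):.2f} ft")
```

Even with an error of only half a degree per step, the walker rarely ends up back where it started, just like the eyes-closed exercise above.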

I design robots using a control scheme called behavior-based programming. This approach is robot-appropriate because it’s fast, responsive, and runs on low-cost computer hardware. A behavior-based program structures a robot’s control scheme as a set of simple, understandable behaviors.

Remember that Roomba’s imperative is to apply its cleaning mechanism to all parts of the floor and not get stuck. The program that accomplishes this needs a minimum of two behaviors. Call them Cruise and Escape. Cruise is single-minded. It ignores all sensor inputs and constantly outputs a signal telling the robot’s motors to drive forward.

Escape watches the robot’s front bumper. Whenever the robot collides with something, one or both of the switches attached to the bumper activate. If the left switch closes, Escape knows there’s been a collision on the left, so it tells the motors to spin the robot to the right. A collision on the right means spin left. If both switches close at once, an arbitrary decision is made. When neither switch is closed Escape sends no signal to the motors.

TEST FLOORS: “Roomba needed to function on many floor types and to transition smoothly from one type to another,” says Joe Jones. “We built this test floor to verify that Roomba would work in this way.” The sample floors include wood, various carpets, and tiles. Courtesy of Joe Jones

Occasionally Cruise and Escape try to send commands to the motors at the same time. When this happens, a bit of code called an arbiter decides which behavior succeeds—the highest priority behavior outputting a command wins. In our example, Escape is assigned the higher priority.
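To make the scheme concrete, here is a minimal sketch of the two-behavior program as described; it is my illustration rather than Roomba’s actual firmware, and the bumper representation and names are assumptions:

```python
# Behavior-based control, minimal form: each behavior votes (or abstains),
# and a fixed-priority arbiter picks the winning motor command.
FORWARD, SPIN_LEFT, SPIN_RIGHT = "forward", "spin_left", "spin_right"

def cruise(bumper):
    # Cruise ignores all sensors and always votes to drive forward.
    return FORWARD

def escape(bumper):
    # Escape watches the two bumper switches and spins away from contact.
    left, right = bumper
    if left and right:
        return SPIN_RIGHT   # both switches closed: arbitrary choice
    if left:
        return SPIN_RIGHT   # collision on the left, spin right
    if right:
        return SPIN_LEFT    # collision on the right, spin left
    return None             # no collision: no vote

def arbiter(bumper):
    # Behaviors listed highest priority first; Escape outranks Cruise.
    for behavior in (escape, cruise):
        command = behavior(bumper)
        if command is not None:
            return command

print(arbiter((False, False)))  # forward
print(arbiter((True, False)))   # spin_right
```

Note that the priority lives entirely in the arbiter’s ordering: adding a third behavior means deciding only where it slots into that list, which is part of what keeps this style fast and cheap to run.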

Watching the robot, we see a complex behavior emerge from these simple rules. The robot moves across the floor until it bumps into something. Then it stops moving forward and turns in place until the path is clear. It then resumes forward motion. Given time, this random motion lets the robot cover, and clean, the entire floor.

Did you guess so little was going on in the first Roomba’s brain? When observers tell me what Roomba is thinking they invariably imagine great complexity—imbuing the robot with intentions and intricate plans that are neither present nor necessary. Every robot I build is as simple and simple-minded as I can make it. Anything superfluous, even intelligence, works against marketplace success.

The full cleaning task contains some extra subtleties. These require more than just two behaviors for efficient operation. But the principle holds, the robot includes only the minimum components and code required for the task.

A few months from product launch, we demonstrated one of our prototypes to a focus group. The setup was classic: a facilitator presented Roomba to a cross-section of potential customers while the engineers watched from a darkened room behind a one-way mirror.

The session was going well, people seemed to like the robot and it picked up test dirt effectively. Then the facilitator mentioned that Roomba used a carpet sweeper mechanism and did not include a vacuum.

The mood changed. Our test group revised the price they’d be willing to pay for Roomba, cutting in half their estimate from only minutes earlier. We designers were perplexed. We solved our energy problem by eschewing a vacuum in favor of a carpet sweeper—and it worked! Why wasn’t that enough for the focus group?

Did you guess so little was going on in Roomba’s brain? Every robot I build is as simple-minded as I can make it.

Decades of advertising have trained consumers that a vacuum drawing lots of amps means effective cleaning. We wanted customers to judge our new technology using a more appropriate metric. But there was no realistic way to accomplish that. Instead, our project manager declared, “Roomba must have a vacuum, even if it does nothing.”

No one on the team wanted a gratuitous component—even if it solved our marketing problem. We figured we could afford three watts to run a vacuum motor. But a typical vacuum burns 1,400 watts. What could we do with just three?

Using the guts of an old heat gun, some cardboard, and packing tape, I found a way. It turned out that if I made a very narrow inlet, I could achieve the same air-flow velocity as a regular vacuum but, because the volume of air moved was minuscule, it used only a tiny bit of power. We had a vacuum that actually contributed to cleaning.
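One way to see why the trick works (my own rough scaling argument, not the author’s): the power needed to keep air moving at a given speed scales with the inlet area, so a very narrow inlet preserves air speed at a tiny fraction of the power.

```latex
% Rough scaling sketch (assumption: fan power ~ volume flow times pressure).
% Air speed v through an inlet of area A gives volume flow Q = A v, so
P \approx Q\,\Delta p \approx (A v)\left(\tfrac{1}{2}\rho v^{2}\right)
  = \tfrac{1}{2}\rho A v^{3}
% At fixed v, power scales directly with inlet area A:
% shrink the inlet and the power shrinks with it.
```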

DUST PUPPY: Before the marketers stepped in with the name “Roomba,” Joe Jones and his colleague Paul Sandin called their floor cleaner “DustPuppy.” “Our robot would try very hard to please,” Jones writes. But like a puppy, “it might sometimes mess up.” Above, Sandin examines a prototype, with designer Steve Hickey (black shirt) and intern Ben Trueman. Courtesy of Joe Jones

There’s a moment in the manufacturing process called “commit to tooling” when the design must freeze so molds for the plastic can be cut. Fumble that deadline and you may miss your launch date, wreaking havoc on your sales.

About two weeks before “commit,” our project manager said, “Let’s test the latest prototype.” We put some surrogate dirt on the floor and let Roomba run over it. The dirt remained undisturbed.

Panic ensued. Earlier prototypes had seemed to work, and we thought we understood the cleaning mechanism. But maybe not. I returned to the lab and tried to identify the problem. This involved spreading crushed Cheerios on a glass tabletop and looking up from underneath as our cleaning mechanism operated.

Our concept of Mr. Bissell’s carpet sweeper went like this: As the brush turns against the floor, bristle tips pick up dirt particles. The brush rotates inside a conforming shroud carrying the dirt to the back where a toothed structure combs it from the brush. The dirt then falls into the collection bin.

That sedate description couldn’t have been more wrong. In fact, as the brush turns against the floor, a flicking action launches dirt particles into a frenetic, chaotic cloud. Some particles bounce back onto the floor, some bounce deep into the brush, some find the collection bin. The solution was to extend the shroud around the brush a little farther on the back side—that redirected the dirt that bounced out such that the brush had a second chance to pick it up. Roomba cleaned again and we could begin cutting molds with a day or two to spare.

Roomba launched in September 2002. Its success rapidly eclipsed the dreams of all involved.

Did Roomba’s nascent reign end the long robot drought? Was my hordes-of-robots-in-service-to-humanity dream about to come true?

In the years since iRobot released Roomba, many other robot companies have cast their die. Here are a few: Anki, Aria Insights, Blue Workforce, Hease Robotics, Jibo, Keecker, Kuri, Laundroid, Reach Robotics, Rethink Robotics, and Unbounded Robotics. Besides robots and millions of dollars of venture capitalist investment, what do all of these companies have in common? None are in business today.

The commercial failure of robots and robot companies is not a new phenomenon. Before Roomba, the pace was slower, but the failure rate no less disappointing. This dismal situation set me looking for ways around the fatal missteps roboticists seemed determined to make. I settled on three principles that we followed while developing Roomba.

1. Perform a Valuable Task

When a robot does a specific job, say, mowing your lawn or cleaning your grill, its value is clear and long-lasting. But over the years, I’ve seen many cool, cute, engaging robots that promised great, albeit vague, value while performing no discernible task. Often the most embarrassing question I could ask the designer of such a robot was, “What does your robot do?” In this case the blurted answer, “Everything!” is synonymous with “Nothing.” The first principle for a successful robot is: Do something people want done. When a robot’s only attribute is cuteness, value evaporates as novelty fades.

2. Do the Task Today

Many robots emerge from research labs. In the lab, researchers aspire to be first to achieve some impressive result; cost and reliability matter little. But cost and reliability are paramount for real-world products. Bleeding edge technologies are rarely inexpensive, reliable, or timely. Second principle: Use established technology. A research project on the critical path to robot completion can delay delivery indefinitely.

3. Do the Task for Less

People have jobs they want done and states they want achieved—a clean floor, a mowed lawn, fresh folded clothes in the dresser. The result matters, the method doesn’t. If a robot cannot provide the lowest cost, least arduous solution, customers won’t buy it. Third principle: A robotic solution must be cost-competitive with existing solutions. People will not pay more to have a robot do the job.

A few robots have succeeded impressively: Roomba, Kiva Systems (warehouse robots), and Husqvarna’s Automower (lawn mower). But I started this article with the question, why aren’t successful robots everywhere? Maybe the answer is becoming clearer.

Robot success is opportunistic. Not every application has a viable robotic solution. Given the state of the art, only select applications offer a large market, existing technology that supports autonomy, and a robotic approach that outcompetes other solutions.

There’s one more subtle aspect. Robots and people may accomplish the same task in completely different ways. This makes deciding which tasks are robot-appropriate both difficult and, from my perspective, great fun. Every potential task must be reimagined from the ground up.

My latest robot, Tertill, prevents weeds from growing in home gardens. A human gardener pulls weeds up by the roots. Why? Because this optimizes the gardener’s time. Leaving roots behind isn’t a moral failure; it just means weeds will rapidly re-sprout, forcing the gardener to spend more time weeding.

Tertill does not pull weeds but attacks them in two other ways. It cuts the tops off weeds and it uses the scrubbing action of the wheels to kill weeds as they sprout from seeds. These tactics work because the robot, unlike the gardener, lives in the garden. Tertill returns every day to prevent rooted weeds from photosynthesizing so roots eventually die; weed seeds that are constantly disturbed don’t sprout.

Had Tertill copied the human solution, the required root extraction mechanism and visual identification system would have increased development time, added cost, and reduced reliability. Without reimagining the task, there would be no solution.

Robots have a hard enough time doing their jobs at all. Burdening them with unnecessary features and expectations worsens the problem. That’s one reason I’m always vexed when designers festoon their robots with anthropomorphic features—they make a promise no robot can keep. Anthropomorphic features and behaviors hint that the robot has the same sort of inner life as people. But it doesn’t. Instead the robot has a limited bag of human-mimicking tricks. Once the owner has seen all the tricks, the robot’s novelty is exhausted and along with it the reason for switching on the robot. Only robots that perform useful tasks remain in service after the novelty wears off.

No commercially successful robot I’m aware of has superfluous extras. This includes computation cycles—cycles it might use to contemplate world domination. All of the robot’s resources are devoted to accomplishing the task for which it was designed, or else it wouldn’t be successful. Working robots don’t have time to take over the world.

Robots have been slow to appear because each one requires a rare confluence of market, task, technology, and innovation. (And luck. I only described some of the things that nearly killed Roomba.) But as technology advances and costs decline, the toolbox for robot designers constantly expands. Thus, more types of robots will cross the threshold of economic viability. Still, we can expect one constant. Each new, successful robot will represent a minimum—the simplest, lowest-cost solution to a problem people want solved. The growing set of tools that let us attack ever more interesting problems make this an exciting time to practice robotics.

Joe Jones is cofounder and CTO of Franklin Robotics. A graduate of MIT, he holds more than 70 patents.

Lead image: Christa Mrgan / Flickr


Read More…




outbreak

Coronavirus World Map: Tracking The Spread Of The Outbreak

A map of confirmed COVID-19 cases and deaths around the world. The respiratory disease has spread rapidly across six continents and has killed thousands of people.




outbreak

In a hurry to reopen state, Arizona governor disbands scientific panel that modeled outbreak

Arizona's Republican Gov. Doug Ducey's administration disbanded a panel of university scientists who had warned that reopening the state now would be dangerous.





outbreak

My Secret Terrius: Netflix show predicted coronavirus outbreak with alarming accuracy in 2018

It's the most accurate one yet




outbreak

Outbreaks in Germany, SKorea show risks in easing up...







outbreak

EasyJet grounds entire fleet of aircraft due to coronavirus outbreak

Follow our live updates here Coronavirus: the symptoms




outbreak

These images show tourist hotspots before and after coronavirus outbreak

Tourist numbers have dwindled across Asia as coronavirus grounds travellers





outbreak

LaLiga return: Eibar players and coaching staff admit fears of fresh coronavirus outbreak

Players and coaching staff at LaLiga side Eibar have released a statement revealing fears of a fresh coronavirus outbreak ahead of a planned return for the competition next month.




outbreak

Covid-19 outbreaks at Irish meat plants raise fears over worker safety

Third of workers at factory in Tipperary test positive, while McDonald’s supplier forced to temporarily halt production

An outbreak of Covid-19 among workers in a meat factory in Tipperary has raised fears that the virus is spreading through abattoirs and meat-processing plants in Ireland.

Sinn Féin’s spokesperson on agriculture, Brian Stanley, told the Irish parliament last night that 120 workers at the Rosderra Meats plant in Roscrea had tested positive for the virus. He also said that of 350 workers at the plant, up to 140 were off sick last week. Rosderra is the largest pork-processing company in Ireland.

Continue reading...




outbreak

Coronavirus outbreak creates a college football recruiting year unlike any other

The coronavirus has created a unique year for college football recruiting. With travel restricted and summer camps canceled, many recruits could end up playing near home.




outbreak

Swimming championships moved to accommodate Olympics delay due to coronavirus outbreak

Swimming follows a similar move by track, shifting to 2022 to make room for the delayed Tokyo Olympics in 2021.




outbreak

Amid coronavirus outbreak, Hungary's Viktor Orban reaches for unchecked power

Rights groups said the move effectively suspends democracy in the European Union member state in the name of fighting the coronavirus.




outbreak

Intentionally incomplete: US intelligence says China concealed extent of outbreak

China’s public reporting on cases and deaths is intentionally incomplete.




outbreak

How local outbreaks of COVID-19 occurred across Sydney

The suburbs which recorded NSW's first cases of local transmission of COVID-19 have been revealed, as health experts warn that this is the measure Australia needs to watch.








outbreak

The PM says we can't hide under the doona, so what happens when the next outbreak hits?

The Prime Minister says it's inevitable that there will be more outbreaks as restrictions lift. Here's what it means when that happens.




outbreak

Is your steak safe to eat? Abattoir coronavirus outbreak leaves consumers wondering

A coronavirus outbreak at a Melbourne abattoir has left consumers wondering about food safety — but experts say meat is still very safe to eat, and any risk is "ridiculously small".








outbreak

Coronavirus outbreak: Live updates

Among the top coronavirus news items today: The FDA has given biotech company Moderna approval to begin the next phase of testing on its coronavirus vaccine, and Vice President Mike Pence's press secretary has tested positive for the coronavirus.




outbreak

A widespread outbreak could derail plans to ease restrictions, deputy medical chief says

It is very unlikely Australian sports fans will be able to pack out stadiums on grand final weekend, even if the plan to lift restrictions is successful, Deputy Chief Medical Officer Paul Kelly says.




outbreak

The 'madness' and uncertainty of the coronavirus outbreak has floored grassroots sport

It is not just the major national sports leagues and events being stalled by coronavirus, with everyone from semi-professional all the way to junior athletes across the country caught up in the storm.




outbreak

Olympic organisers commit to July games date despite COVID-19 outbreak

While major events are being cancelled and postponed around the world, those behind the Tokyo Olympics say they don't have to make a decision about the games with just four months to go.




outbreak

Congress looks at options to punish China over the coronavirus outbreak

Republican lawmakers, determined to punish China for concealing early data on the coronavirus outbreak, are proposing ways to turn up the heat.




outbreak

States reopen theaters, restaurants amid coronavirus outbreak as experts warn of second wave

Texas reopens restaurants, Utah reopens salons. As several states lift coronavirus restrictions, many warn of a second wave if social distancing ends too soon.




outbreak

As weather warms amid coronavirus outbreak, states face new challenges

Governors across the U.S. are encouraging people to continue practicing social distancing amid summer weather




outbreak

Tijuana coronavirus death rate soars after hospital outbreaks

The number of deaths from the coronavirus in Mexico's best-known border city, Tijuana, has soared and the COVID-19 mortality rate is twice the national average, the health ministry says, after medical staff quickly fell ill as the outbreak rampaged through hospital wards.




outbreak

Regional Dialogue during an Outbreak

“I would like to express our appreciation to APEC member economies that put their faith in Malaysia’s leadership and made it a point to participate.”




outbreak

COVID-19 Outbreak Pausing Live Speaking Engagements

I live in Pennsylvania, just outside Philadelphia, in Montgomery County. Currently, Montco is the worst hit county in Pennsylvania for the COVID-19 outbreak. Consequently, the governor ordered all non-essential businesses to close more than a week ago in Montco, and yesterday expanded that order statewide.

Because most of my work is from home, the outbreak has not yet affected my ability to provide client service; however, for the foreseeable future all live speaking engagements are cancelled.

I was scheduled to deliver the device workshop at the DIA advertising conference last week and also had some workshops scheduled with FDAnews for May and June. DIA's conference has been delayed, with a decision about how to proceed still to be made. I'll post an update here when I know more.

The May FDAnews workshop has been cancelled, and the June workshop is on hold. When I know more, I'll post an update.

In addition, I am part of the leadership committee for the Philadelphia RAPS chapter. We held our last event on March 5 at Temple University, and the next day RAPS HQ sent out a notice asking chapters to hold off on live meetings for March and April. Currently, the chapter leadership is discussing other options, such as webinars, to continue getting information to our membership during the outbreak.

While we adjust to life during a pandemic, I'll provide updates as I can. Stay safe and wash your hands!




outbreak

Everything you should know about the coronavirus outbreak

The latest information about the novel coronavirus identified in Wuhan, China, and advice on how pharmacists can help concerned patients and the public.





outbreak

Amid COVID-19 Outbreak, Protecting 2020 Election Should Start Now

March 23, 2020 – As the United States grapples with the COVID-19 outbreak and its ongoing fallout, there is another pressing issue that is crucial to the American public: ensuring safe and fair elections between now and Nov. 3. “The Coalition believes it is important for all Americans to be active in the political process […]








outbreak

Chart: See the day-by-day size of the coronavirus outbreak

Track the number of new COVID-19 cases per day around the rest of the world.




outbreak

Japan's overtime hours take biggest tumble amid virus outbreak