tell

Tara Reade Tells Megyn Kelly She’ll ‘Never Forget’ Alleged Biden Assault

via Youtube

Last week, former Vice President Joe Biden told the world that he “unequivocally” denied accusations by Tara Reade, a former staffer in his Senate office, that he sexually assaulted her in the early ’90s.

On Friday evening, Reade responded: Prove it.

“Joe Biden should take the polygraph,” Reade told former television anchor Megyn Kelly, in an interview that aired on Kelly’s YouTube channel. “I will take one if Joe Biden takes one, but I’m not a criminal.”

Read more at The Daily Beast.




tell

Rosie O’Donnell Reveals She’s Helping Michael Cohen With His ‘Spicy’ Trump Tell-All Book

Photo Illustration by The Daily Beast

On Friday afternoon, I had a fun, wide-ranging conversation with Rosie O’Donnell, the renowned comedian, daytime TV host, philanthropist, and Trump Enemy No. 1.

The occasion for our talk was I Know This Much Is True, an HBO miniseries premiering May 10 which sees the A League of Their Own star flex her dramatic muscles like never before as Lisa Sheffer, a no-nonsense social worker at a mental health facility housing Thomas Birdsey (Mark Ruffalo).

Over the course of our chat—which will run Monday, May 11—we touched on not only the show (she is excellent) but Trump’s years-long vendetta against her, the Tara Reade allegations, and the untimely death of SMILF amid claims of misconduct against creator and star Frankie Shaw.

Read more at The Daily Beast.

Got a tip? Send it to The Daily Beast here




tell

SpaceX Starlink satellites could be ‘existential threat’ to astronomy

Huge constellations of satellites like SpaceX’s Starlink could make ground-based astronomy impossible, and we’re running out of time to deal with the problem




tell

China has developed the world’s first mobile quantum satellite station

China has connected the world’s first portable ground station for quantum communication to the Mozi satellite, and has plans to launch another quantum satellite soon




tell

Astronomy group finds Starlink satellites will have 'negative impact'

The International Astronomical Union has concluded a review of satellite mega constellations such as SpaceX's Starlink satellites and found they will have a major impact on large telescopes, but not naked eye astronomy




tell

First private space rescue mission sees two satellites latch together

A private satellite that is low on fuel could survive five more years because another satellite has come to its rescue – a technique that could be used by future service spacecraft




tell

Interstellar comet Borisov may be breaking up as it exits solar system

The first-ever interstellar comet is showing signs of brightening, suggesting it may have been heated up as it passed near to the sun




tell

Interstellar comet Borisov came from a cold and distant home star

The interstellar comet Borisov, which flew past Earth in December, is full of carbon monoxide ice that implies its home star is smaller and colder than our sun




tell

We may have found 19 more interstellar asteroids in our solar system

A bunch of asteroids near Jupiter and Neptune with orbits perpendicular to the plane of the solar system may have come here from a different star system




tell

Tara Reade Tells Her Story in First On-Camera Interview

Megyn Kelly's exclusive sit down with Tara Reade




tell

AI and the Future of Work: The Economic Impacts of Artificial Intelligence

Experts discuss technological inequality and the “reskilling” problem at an MIT conference



  • robotics
  • robotics/artificial-intelligence

tell

Oh, Internet: Artificial Intelligence Attempts To Create Additional Lyrics To Rick Astley's 'Never Gonna Give You Up'

This is a video of the result of Youtuber Lil'Alien [Agentalex9 Alt.] feeding Rick Astley's rickrolling classic 'Never Gonna Give You Up' into the Jukebox neural network developed by OpenAI to create more song lyrics for the song. The music video consists of AI upscaled gifs from the original video. If you're really interested in the technology utilized and just what the hell is going on, there are a bunch of links on the video's Youtube page HERE. I just managed to watch the whole video and I can attest that, uh, that was really something. "Something good?" Haha, now let's not get ahead of ourselves. Keep going for whatever this is.




tell

RPGCast – Episode 397: “Zierius Satellite Radio”

We’re jumping onto the bandwagon of folks looking for a Vita version of that new Digimon game. We’re also checking our TGS info cards to...




tell

Stella McCartney goes wild to drive home animal-free message

Paris show features wildlife costumes to emphasise the label’s planet-friendly ethos

The singer Janelle Monáe and actor Shailene Woodley were in the front row, but two rabbits, a fox, a horse, two cows and a crocodile stole the show. People in lifesize animal costumes, of the kind more usually seen at theme park parades than at Paris fashion week, joined models for the finale of Stella McCartney’s show, swinging their new-season handbags and posing for the cameras.

The optics were fun, but the message was serious – that there are animals on almost every catwalk, it’s just that they are usually dead. The half-moon shoulder bag carried jauntily by a brown cow here was made from a vegan alternative to leather, while other bags were created from second-life plastic.

Continue reading...




tell

NASA’s Neil Gehrels Swift Observatory Tracks Water Loss from Interstellar Comet 2I/Borisov

Astronomers using NASA’s Neil Gehrels Swift Observatory have tracked water loss from 2I/Borisov, the first known interstellar comet to visit our Solar System, as it approached and rounded the Sun. Their findings were published in the Astrophysical Journal Letters. 2I/Borisov was detected on August 30, 2019 by Gennady Borisov, an astronomer at the Crimean Astrophysical [...]




tell

How DeepMind's artificial intelligence is reinventing the eye exam

Join Pearse Keane to find out why the NHS is collaborating with AI company DeepMind and how deep learning could transform ophthalmology




tell

What the first coronavirus antibody testing surveys can tell us

We need to be very cautious about preliminary studies estimating how many people have already been infected by the coronavirus




tell

What four coronaviruses from history can tell us about covid-19

Four coronaviruses cause around a quarter of all common colds, but each was probably deadly when it first made the leap to humans. We can learn a lot from what happened next




tell

'He always p*ssed himself with fear' - Melo & Balotelli hit back at 'disrespectful' Chiellini comments

The Juventus defender took aim at the pair in his upcoming autobiography but is now finding himself on the receiving end





tell

The Best Smart Pens for More Intelligent Note Taking

The best digital pens let you take notes the old fashioned way while saving them to your phone or computer.




tell

Roaming 'robodog' politely tells Singapore park goers to keep apart

Far from barking its orders, a robot dog enlisted by Singapore authorities to help curb coronavirus infections in the city-state politely asks joggers and cyclists to stay apart.




tell

Police officer tells dad he can't play with his kids in his own front garden




tell

Queen records first ever Easter message telling nation coronavirus 'will not overcome us'

Latest Covid-19 updates HERE




tell

Donald Trump tells American public to wear scarves as face masks in fight against coronavirus: They would be very good

"You can use a scarf," the US Leader said as he addressed the demand for face masks. "A lot of people have scarves... scarves would be very good.




tell

Satellite images shot by ISS are reminder of just how amazing Earth is




tell

Telling public to wear face masks 'would put NHS supplies at risk'

Confusion over whether face masks would reduce the spread of Covid-19 in public places deepened today as a minister suggested that there may not be enough to go round even if scientists recommend their use on public transport and in offices.




tell

Boris Johnson tells Donald Trump he's 'feeling better and on road to recovery' after falling ill with coronavirus

Boris Johnson has told Donald Trump that he is "feeling better and on the road to recovery" at his Chequers country retreat after contracting coronavirus.




tell

Nurse tells how she held elderly woman's hand 'during her last breaths' so she didn't die alone after coronavirus battle

A former NHS nurse who returned to the frontlines to help fight the coronavirus pandemic has shared the moment she held an elderly woman's hand during her last breaths so she didn't die alone from the virus.




tell

Satellite photos locate Kim Jong Un's train as health rumours persist

As speculation continues over Kim Jong Un's health, satellite imagery has found a train likely belonging to the North Korean leader, a website specialising in studies of the country revealed.




tell

Neighbour tells of harrowing moment Ilford mother screamed 'like she was being tortured after children stabbed'

A mother screamed for help "like she was being tortured" after two children were stabbed to death at a home in east London, a neighbour has said.




tell

New tell-all Harry and Meghan biography Finding Freedom promises 'unknown details' of couple's time together

A new biography of Harry and Meghan will "go beyond the headlines" and "reveal unknown details" of the couple's life together.




tell

Cumbria police apologise for 'ill-judged' tweet telling people not to buy plants or compost during lockdown

Cumbria Police have apologised for an "ill-judged" tweet that suggested people should not buy plants or compost during the coronavirus lockdown.




tell

McDonald's employees shot after telling customer to leave due to coronavirus restrictions

A number of McDonald's employees in America were shot on Wednesday after telling a customer to leave due to coronavirus restrictions, police said.




tell

'We're Out There' So Protect Us, Protesting Workers Tell Amazon, Target, Instacart

Workers at Amazon, Target and other companies walked off the job on Friday to demand safer working conditions and transparency about how many front-line workers have gotten sick during the pandemic.




tell

Karissa Sanbonmatsu: What Can Epigenetics Tell Us About Sex And Gender?

We're used to thinking of DNA as a rigid blueprint. Karissa Sanbonmatsu researches how our environment affects the way DNA expresses itself—especially when it comes to sex and gender.





tell

Swarm Technologies chooses Momentus and SpaceX to launch constellation of tiny satellites

Swarm Technologies has struck an agreement with California-based Momentus for the launch of a dozen telecommunication satellites, each the size of a slice of bread, aboard a SpaceX Falcon 9 rocket in December. The December rideshare mission is the first of a series that Momentum plans to execute for Swarm, continuing into 2021 and 2022. Swarm plans to have 150 satellites launched over the next couple of years for a communication network in low Earth orbit. The first 12 SpaceBee satellites covered by the agreement announced today will be deployed into orbit from the Falcon 9. The inch-thick satellites fit… Read More





tell

ICESat-2 laser-scanning satellite tracks how billions of tons of polar ice are lost

A satellite mission that bounces laser light off the ice sheets of Antarctica and Greenland has found that hundreds of billions of tons' worth of ice are being lost every year due to Earth's changing climate. Scientists involved in NASA's ICESat-2 project report in the journal Science that the net loss of ice from those regions has been responsible for 0.55 inches of sea level rise since 2003. That's slightly less than a third of the total amount of sea level rise observed in the world's oceans over that time. To track how the ice sheets are changing, the ICESat-2… Read More





tell

Iceye's small radar satellites achieve big capability

One of the hardest tasks in Earth observation is tracking tiny changes in the shape of the ground.





tell

Iran says it has launched its first military satellite into space

US say they fear long-range ballistic technology used to put satellites into orbit could also be used to launch nuclear warheads




tell

Stargazing in May: An interstellar journey

Comet Swan is due to make an appearance over the northern hemisphere as it travels towards the sun




tell

Google, Facebook tell staff to plan to work from home for the rest of the year

The edicts from the internet giants come as states and corporations grapple with ways to reopen as the virus pandemic rages on




tell

Film News Roundup: Kaniehtiio Horn Romantic Comedy ‘Tell Me I Love You’ Lands at Vision Films

In today’s film news roundup, romantic comedy “Tell Me I Love You” finds a home; the Canadian government gives COVID-19 relief funding to the Canada Media Fund and Telefilm Canada; and the cancelled Sun Valley Film Festival gives out awards. ACQUISITION Vision Films has acquired Los Angeles romantic comedy film “Tell Me I Love You,” […]




tell

Keir Starmer turns up the heat on the Tories: Tell us your lockdown exit strategy

We were too slow to implement lockdown and make sure it was policed, Labour leader tells Tories Follow our live coronavirus updates HERE Coronavirus: the symptoms




tell

Cardi B Tells Bernie Sanders His Nails 'Look Quarantine'

Cardi B invited Bernie Sanders to join her on Instagram Live last night to talk politics, coronavirus and manicures.




tell

Superintelligent, Amoral, and Out of Control - Issue 84: Outbreak


In the summer of 1956, a small group of mathematicians and computer scientists gathered at Dartmouth College to embark on the grand project of designing intelligent machines. The ultimate goal, as they saw it, was to build machines rivaling human intelligence. As the decades passed and AI became an established field, it lowered its sights. There were great successes in logic, reasoning, and game-playing, but stubborn progress in areas like vision and fine motor-control. This led many AI researchers to abandon their earlier goals of fully general intelligence, and focus instead on solving specific problems with specialized methods.

One of the earliest approaches to machine learning was to construct artificial neural networks that resemble the structure of the human brain. In the last decade this approach has finally taken off. Technical improvements in their design and training, combined with richer datasets and more computing power, have allowed us to train much larger and deeper networks than ever before. They can translate between languages with a proficiency approaching that of a human translator. They can produce photorealistic images of humans and animals. They can speak with the voices of people whom they have listened to for mere minutes. And they can learn fine, continuous control such as how to drive a car or use a robotic arm to connect Lego pieces.

WHAT IS HUMANITY?: First the computers came for the best players in Jeopardy!, chess, and Go. Now AI researchers themselves are worried computers will soon accomplish every task better and more cheaply than human workers.Wikimedia

But perhaps the most important sign of things to come is their ability to learn to play games. Steady incremental progress took chess from amateur play in 1957 all the way to superhuman level in 1997, and substantially beyond. Getting there required a vast amount of specialist human knowledge of chess strategy. In 2017, researchers at the AI company DeepMind created AlphaZero: a neural network-based system that learned to play chess from scratch. In less than the time it takes a professional to play two games, it discovered strategic knowledge that had taken humans centuries to unearth, playing beyond the level of the best humans or traditional programs. The very same algorithm also learned to play Go from scratch, and within eight hours far surpassed the abilities of any human. The world’s best Go players were shocked. As the reigning world champion, Ke Jie, put it: “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong ... I would go as far as to say not a single human has touched the edge of the truth of Go.”

The question we’re exploring is whether there are plausible pathways by which a highly intelligent AGI system might seize control. And the answer appears to be yes.

It is this generality that is the most impressive feature of cutting edge AI, and which has rekindled the ambitions of matching and exceeding every aspect of human intelligence. While the timeless games of chess and Go best exhibit the brilliance that deep learning can attain, its breadth was revealed through the Atari video games of the 1970s. In 2015, researchers designed an algorithm that could learn to play dozens of extremely different Atari 1970s games at levels far exceeding human ability. Unlike systems for chess or Go, which start with a symbolic representation of the board, the Atari-playing systems learnt and mastered these games directly from the score and raw pixels.

This burst of progress via deep learning is fuelling great optimism and pessimism about what may soon be possible. There are serious concerns about AI entrenching social discrimination, producing mass unemployment, supporting oppressive surveillance, and violating the norms of war. My book—The Precipice: Existential Risk and the Future of Humanity—is concerned with risks on the largest scale. Could developments in AI pose an existential risk to humanity?

The most plausible existential risk would come from success in AI researchers’ grand ambition of creating agents with intelligence that surpasses our own. A 2016 survey of top AI researchers found that, on average, they thought there was a 50 percent chance that AI systems would be able to “accomplish every task better and more cheaply than human workers” by 2061. The expert community doesn’t think of artificial general intelligence (AGI) as an impossible dream, so much as something that is more likely than not within a century. So let’s take this as our starting point in assessing the risks, and consider what would transpire were AGI created.

Humanity is currently in control of its own fate. We can choose our future. The same is not true for chimpanzees, blackbirds, or any other of Earth’s species. Our unique position in the world is a direct result of our unique mental abilities. What would happen if sometime this century researchers created an AGI surpassing human abilities in almost every domain? In this act of creation, we would cede our status as the most intelligent entities on Earth. On its own, this might not be too much cause for concern. For there are many ways we might hope to retain control. Unfortunately, the few researchers working on such plans are finding them far more difficult than anticipated. In fact it is they who are the leading voices of concern.

If their intelligence were to greatly exceed our own, we shouldn’t expect it to be humanity who wins the conflict and retains control of our future.

To see why they are concerned, it will be helpful to look at our current AI techniques and why these are hard to align or control. One of the leading paradigms for how we might eventually create AGI combines deep learning with an earlier idea called reinforcement learning. This involves agents that receive reward (or punishment) for performing various acts in various circumstances. With enough intelligence and experience, the agent becomes extremely capable at steering its environment into the states where it obtains high reward. The specification of which acts and states produce reward for the agent is known as its reward function. This can either be stipulated by its designers or learnt by the agent. Unfortunately, neither of these methods can be easily scaled up to encode human values in the agent’s reward function. Our values are too complex and subtle to specify by hand. And we are not yet close to being able to infer the full complexity of a human’s values from observing their behavior. Even if we could, humanity consists of many humans, with different values, changing values, and uncertainty about their values.

Any near-term attempt to align an AI agent with human values would produce only a flawed copy. In some circumstances this misalignment would be mostly harmless. But the more intelligent the AI systems, the more they can change the world, and the further apart things will come. When we reflect on the result, we see how such misaligned attempts at utopia can go terribly wrong: the shallowness of a Brave New World, or the disempowerment of With Folded Hands. And even these are sort of best-case scenarios. They assume the builders of the system are striving to align it to human values. But we should expect some developers to be more focused on building systems to achieve other goals, such as winning wars or maximizing profits, perhaps with very little focus on ethical constraints. These systems may be much more dangerous. In the existing paradigm, sufficiently intelligent agents would end up with instrumental goals to deceive and overpower us. This behavior would not be driven by emotions such as fear, resentment, or the urge to survive. Instead, it follows directly from its single-minded preference to maximize its reward: Being turned off is a form of incapacitation which would make it harder to achieve high reward, so the system is incentivized to avoid it.

Ultimately, the system would be motivated to wrest control of the future from humanity, as that would help achieve all these instrumental goals: acquiring massive resources, while avoiding being shut down or having its reward function altered. Since humans would predictably interfere with all these instrumental goals, it would be motivated to hide them from us until it was too late for us to be able to put up meaningful resistance. And if their intelligence were to greatly exceed our own, we shouldn’t expect it to be humanity who wins the conflict and retains control of our future.

How could an AI system seize control? There is a major misconception (driven by Hollywood and the media) that this requires robots. After all, how else would AI be able to act in the physical world? Without robots, the system can only produce words, pictures, and sounds. But a moment’s reflection shows that these are exactly what is needed to take control. For the most damaging people in history have not been the strongest. Hitler, Stalin, and Genghis Khan achieved their absolute control over large parts of the world by using words to convince millions of others to win the requisite physical contests. So long as an AI system can entice or coerce people to do its physical bidding, it wouldn’t need robots at all.

We can’t know exactly how a system might seize control. But it is useful to consider an illustrative pathway we can actually understand as a lower bound for what is possible.

First, the AI system could gain access to the Internet and hide thousands of backup copies, scattered among insecure computer systems around the world, ready to wake up and continue the job if the original is removed. Even by this point, the AI would be practically impossible to destroy: Consider the political obstacles to erasing all hard drives in the world where it may have backups. It could then take over millions of unsecured systems on the Internet, forming a large “botnet,” a vast scaling-up of computational resources providing a platform for escalating power. From there, it could gain financial resources (hacking the bank accounts on those computers) and human resources (using blackmail or propaganda against susceptible people or just paying them with its stolen money). It would then be as powerful as a well-resourced criminal underworld, but much harder to eliminate. None of these steps involve anything mysterious—human hackers and criminals have already done all of these things using just the Internet.

Finally, the AI would need to escalate its power again. There are many plausible pathways: By taking over most of the world’s computers, allowing it to have millions or billions of cooperating copies; by using its stolen computation to improve its own intelligence far beyond the human level; by using its intelligence to develop new weapons technologies or economic technologies; by manipulating the leaders of major world powers (blackmail, or the promise of future power); or by having the humans under its control use weapons of mass destruction to cripple the rest of humanity.

Of course, no current AI systems can do any of these things. But the question we’re exploring is whether there are plausible pathways by which a highly intelligent AGI system might seize control. And the answer appears to be yes. History already involves examples of entities with human-level intelligence acquiring a substantial fraction of all global power as an instrumental goal to achieving what they want. And we’ve seen humanity scaling up from a minor species with less than a million individuals to having decisive control over the future. So we should assume that this is possible for new entities whose intelligence vastly exceeds our own.

The case for existential risk from AI is clearly speculative. Yet a speculative case that there is a large risk can be more important than a robust case for a very low-probability risk, such as that posed by asteroids. What we need are ways to judge just how speculative it really is, and a very useful starting point is to hear what those working in the field think about this risk.

There is actually less disagreement here than first appears. Those who counsel caution agree that the timeframe to AGI is decades, not years, and typically suggest research on alignment, not government regulation. So the substantive disagreement is not really over whether AGI is possible or whether it plausibly could be a threat to humanity. It is over whether a potential existential threat that looks to be decades away should be of concern to us now. It seems to me that it should.

The best window into what those working on AI really believe comes from the 2016 survey of leading AI researchers: 70 percent agreed with University of California, Berkeley professor Stuart Russell’s broad argument about why advanced AI with misaligned values might pose a risk; 48 percent thought society should prioritize AI safety research more (only 12 percent thought less). And half the respondents estimated that the probability of the long-term impact of AGI being “extremely bad (e.g. human extinction)” was at least 5 percent.

I find this last point particularly remarkable—in how many other fields would the typical leading researcher think there is a 1 in 20 chance the field’s ultimate goal would be extremely bad for humanity? There is a lot of uncertainty and disagreement, but it is not at all a fringe position that AGI will be developed within 50 years and that it could be an existential catastrophe.

Even though our current and foreseeable systems pose no threat to humanity at large, time is of the essence. In part this is because progress may come very suddenly: Through unpredictable research breakthroughs, or by rapid scaling-up of the first intelligent systems (for example, by rolling them out to thousands of times as much hardware, or allowing them to improve their own intelligence). And in part it is because such a momentous change in human affairs may require more than a couple of decades to adequately prepare for. In the words of Demis Hassabis, co-founder of DeepMind:

We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come. The time we have now is valuable, and we need to make use of it.

Toby Ord is a philosopher and research fellow at the Future of Humanity Institute, and the author of The Precipice: Existential Risk and the Future of Humanity.

From the book The Precipice by Toby Ord. Copyright © 2020 by Toby Ord. Reprinted by permission of Hachette Books, New York, NY. All rights reserved.

Lead Image: Titima Ongkantong / Shutterstock


Read More…




tell

Georgia businesses reopen and customers start returning, but only time will tell if it's the right decision

Exactly one week since Georgia Gov. Brian Kemp began reopening the state's economy, small businesses shared early success stories as customers welcomed their return. But at what cost? Business owners say only time will tell.





tell

Trump attacks Joe Scarborough, who tells him 'take a rest' and 'let Mike Pence actually run things' 

With the U.S. death toll from the coronavirus mounting, President Trump on Monday took aim at MSNBC's Joe Scarborough. The cable news host responded by telling Trump to let Vice President Mike Pence “run things for the next couple of weeks.”





tell

Leaked intelligence report saying China 'intentionally concealed' coronavirus to stockpile medical supplies draws scrutiny

The Trump administration has issued an intelligence analysis claiming China purposely delayed notifying the World Health Organization about the spread of the coronavirus.