Scientists get 'lucky' with new image of Jupiter that could help solve mystery of its powerful swirling storms

The pictures are among the sharpest infrared images of Jupiter ever taken from Earth




Resident Evil 3 @ Target for $35 in-store pickup

As the title states, the game is currently $49.99 and finally available for ordering. Add it to your cart along with two other $49.99 games, and make sure the other two are set to ship. Either keep all three for around $35 each, or cancel the other two and get RE3 on day one for $35.

https://www.target.com/p/resident-evil-3-xbox-one/-/A-79468974

https://www.target.com/p/resident-evil-3-playstation-4/-/A-79468973




Console Games, Merch Sale with Free Shipping and 50% Off 1 Month Uplay+ at Ubi Store

The Uplay+ service is 50% off for the first month: members get unlimited access to 100+ games for $6.99.
https://store.ubi.com/us/uplayplus/
 
Free shipping and up to 50% off on all physical games until April 19th. There's merch on sale as well.
https://store.ubi.com/us/free-shipping-sale/




Gamestop 20%/30%/40%/50% Off, One Day Flash Sale - Update Extended through 5/9




Coronavirus Update: The U.S. Health Care Industry Is Challenged By The Pandemic

The health care sector cut 1.4 million jobs in April. And as COVID-19 has consumed health care resources, other essential routine procedures, like screenings for strokes, have declined.




‘If we felt there was a problem, we wouldn’t have issued it to frontline staff’: Chair of Health Care Supplies Association on PPE

Earlier Matt Frei spoke to Mark Roscrow, the Chair of Trustees for the Health Care Supplies Association





Why you could be fined up to £5,000 for picking wildflowers on a daily walk

Those taking their government-approved daily walk have been warned not to pick wildflowers - or risk facing an eye-watering £5,000 fine.




Ontario government to prop up child-care providers with financial supports

TORONTO - The provincial government said it will help cover operating costs for child-care providers and waive their licensing fees in an effort to keep them from permanently shutting during the COVID-19 crisis. Education Minister Stephen Lecce said Saturday that the government will give out




Lovable Lingerie's dream run on as traders lap it up

Lovable Lingerie is the third-best-performing stock among companies listed this year, having doubled in value, as traders bet it could repeat the performance of Page Industries.




Half of the ‘euphoric’ wealth gained in tax cut rally fizzled out in 7 days

Data showed the domestic equity market gave up half the gains that it had amassed.




'Big Daddy' laps up Cipla after Q1 numbers beat forecast

Shares of Cipla inched up on heavy volumes on Friday, after the company’s first quarter earnings beat the consensus estimate.




Weak rupee takes its toll on cos with huge foreign debt

The falling rupee will severely affect small companies, while big ones will be impacted only moderately.




Nifty may find support at 5300 level

As far as stock futures are concerned, we are very near the highest-ever open interest, at 195 crore shares.




Outlook: Nifty upside capped; stay defensive, protect profits

The upside potential will remain capped, and the index will turn vulnerable again.




VCs are gearing up for a post-pandemic auto industry

Autotech is trying to pandemic-proof its portfolio as it prepares to deploy $150 million in a funding round announced this week.




ICMR teams up with Bharat Biotech to make vaccine

The vaccine will be developed using the virus strain isolated at the ICMR's National Institute of Virology (NIV), Pune, a statement said. The strain has been successfully transferred from NIV to BBIL, it added. The death toll due to COVID-19 rose to 1,981 and the number of cases climbed to 59,662 in the country on Saturday, according to the Union Health Ministry.




Multi-unit housing starts up in some parts of Canada in April despite COVID-19

Canada Mortgage and Housing Corp. says construction of multi-unit housing projects remained strong in some provinces last month despite the fight against the COVID-19 pandemic.




'Fat and happy, that's my motto:' Scott Conant dishes up decadence at USA TODAY Wine & Food Experience in Chicago

From creamy gnudi to champagne macarons, the dishes at USA TODAY's Wine & Food Experience in Chicago didn't disappoint.

      




'Never give up, never despair': Queen Elizabeth II's speech recalls royal father, WWII victory in 1945

Britons marked the 75th anniversary of WWII victory with a speech by Queen Elizabeth II, the only British leader left who was there on May 8, 1945.

      




Stadia’s latest woe: Its PUBG port is overrun with official, crappy bots [Updated]

If 98 painfully stupid bots fall onto a PUBG island, do they make a sound?




Sony says major The Last of Us Part 2 leak didn’t come from employee [Updated]

No spoilers here, but details about character relationships and fates are out there.





Netflix's orc cop thriller sequel 'Bright 2' lines up a director

The sequel to 2017 orc-cop buddy movie Bright, starring Will Smith, has lined up a director.




Val Kilmer opens up about cancer treatment that lost him the use of his voice

Kilmer, a follower of Christian Science, calls it the “suggestion of throat cancer.”




Dwayne Johnson, Emily Blunt Team Up for Superhero Film ‘Ball and Chain’

Dwayne Johnson and Emily Blunt are re-teaming on the superhero movie "Ball and Chain" following their collaboration on "Jungle Cruise." The project is being shopped among studios, including Netflix, but no distribution deal has closed. "Ball and Chain" is being written by Oscar nominee Emily V. Gordon and is an adaptation of the '90s comic […]




Jerry O'Connell on 'Justice League Dark': 'Superman belongs to the fans so I take criticisms seriously' (exclusive)

Jerry O'Connell has voiced Superman in a series of movies since 2015, culminating in the new 'Justice League Dark: Apokolips War'.




Harry Potter star Rupert Grint becomes a father

He announced that his partner Georgia Groome was pregnant in April.





Rivers lines up high school job after NFL career

Philip Rivers already has his next job set up, though he won't start coaching the St. Michael Catholic High School football team in Alabama until he retires as an NFL quarterback.




NFL experts pick best matchups, biggest winners from schedule release

Which games should you circle on your calendar? Which rookie debut will be the most interesting?




Love: Being back at Cavs' facility 'weird, uplifting'

Kevin Love's Cavs became one of the first teams in the NBA to reopen their practice facility for voluntary individual workouts, a process that Love described as "weird" but also "pretty uplifting."




The pandemic ‘unicorn’: Canadian startup dependent on travel joins $1-billion-plus club

Platform connects international students to universities, colleges and high schools with one application system




How Europe got caught up in crackpot 5G coronavirus conspiracy theories

At a time of crisis, people want answers — and 5G is a really simple answer




Coronavirus: NHS doctor returning to help during pandemic cheers up colleagues by singing opera

Dr Alex Aldren has returned to the NHS after leaving to become an opera singer




'We don't do apart': Elderly couple who fought coronavirus together in hospital heap praise on NHS staff

'We've never been apart for sixty plus years, we don't do apart,' says Sidney Moore




Coronavirus: Apple and Google update plans to let phones track whether people have been exposed

Without integrating into phones' operating systems, performance of contact-tracing apps is likely to be limited




20 apps to up your skills

Want something to show for the weeks you have spent in lockdown? These apps will help you achieve your aims

In early April, one bullish American consultant suggested on Twitter that if people didn’t emerge from coronavirus quarantine having learned a new skill, gained more knowledge or having started something they’d been putting off, then “you didn’t ever lack the time, you lacked the discipline”.

As the tweet was widely shared, it met mockery and anger in equal measure, as people noted that home schooling, financial worries, stress and/or illness are making this period anything but a delightful self-improvement holiday.





Film News Roundup: Kaniehtiio Horn Romantic Comedy ‘Tell Me I Love You’ Lands at Vision Films

In today’s film news roundup, romantic comedy “Tell Me I Love You” finds a home; the Canadian government gives COVID-19 relief funding to the Canada Media Fund and Telefilm Canada; and the cancelled Sun Valley Film Festival gives out awards. Vision Films has acquired Los Angeles romantic comedy film “Tell Me I Love You,” […]




Supreme Court Puts Temporary Hold On Order To Release Redacted Mueller Materials

The procedural move gives attorneys for House Democrats until May 18 to respond. They say they're owed access to confidential evidence and other materials. No, argues the Trump administration.




Top 5 Moments From The Supreme Court's 1st Week Of Livestreaming Arguments

From a mysterious toilet flush to Justice Ruth Bader Ginsburg speaking from the hospital, here are the highlights — including audio clips — from a historic week for the high court.




'You deserve a raise': PM says deal reached to top up wages for essential COVID-19 workers

Prime Minister Justin Trudeau says that an agreement has been reached with all provinces and territories to top up the wages of some essential front-line workers including those in long-term care facilities where COVID-19 has spread among both residents and staff, with deadly impact. This comes as the military deployment to long-term care homes is being expanded.




Supreme Court chief, justice minister studying how courts can resume amid COVID-19

As talk of reopening aspects of society continues across the country, Chief Justice of the Supreme Court of Canada Richard Wagner and federal Justice Minister David Lametti have begun studying how courts could safely resume regular operations in light of COVID-19.




Shadow chancellor Anneliese Dodds interrupted by daughter in live interview during virus lockdown




What is the Scientific Advisory Group for Emergencies and what does the government body do?





Keir Starmer turns up the heat on the Tories: Tell us your lockdown exit strategy

We were too slow to implement lockdown and make sure it was policed, Labour leader tells Tories.




Celebrities back call for Priti Patel to allow migrants access to support amid coronavirus crisis

Celebrities have backed calls for Home Secretary Priti Patel to end restrictions that prevent thousands of migrants in the UK from accessing financial support during the coronavirus crisis.




Row after Dominic Cummings attended key scientific group's coronavirus meetings

A row has broken out over Boris Johnson's chief adviser Dominic Cummings attending meetings of the senior scientists advising the Government on the coronavirus outbreak.




Furloughed workers should take up fruit picking this summer, Government says




Superintelligent, Amoral, and Out of Control - Issue 84: Outbreak


In the summer of 1956, a small group of mathematicians and computer scientists gathered at Dartmouth College to embark on the grand project of designing intelligent machines. The ultimate goal, as they saw it, was to build machines rivaling human intelligence. As the decades passed and AI became an established field, it lowered its sights. There were great successes in logic, reasoning, and game-playing, but stubbornly slow progress in areas like vision and fine motor control. This led many AI researchers to abandon their earlier goals of fully general intelligence, and focus instead on solving specific problems with specialized methods.

One of the earliest approaches to machine learning was to construct artificial neural networks that resemble the structure of the human brain. In the last decade this approach has finally taken off. Technical improvements in their design and training, combined with richer datasets and more computing power, have allowed us to train much larger and deeper networks than ever before. They can translate between languages with a proficiency approaching that of a human translator. They can produce photorealistic images of humans and animals. They can speak with the voices of people whom they have listened to for mere minutes. And they can learn fine, continuous control such as how to drive a car or use a robotic arm to connect Lego pieces.

WHAT IS HUMANITY?: First the computers came for the best players in Jeopardy!, chess, and Go. Now AI researchers themselves are worried computers will soon accomplish every task better and more cheaply than human workers. Image: Wikimedia

But perhaps the most important sign of things to come is their ability to learn to play games. Steady incremental progress took chess from amateur play in 1957 all the way to superhuman level in 1997, and substantially beyond. Getting there required a vast amount of specialist human knowledge of chess strategy. In 2017, researchers at the AI company DeepMind created AlphaZero: a neural network-based system that learned to play chess from scratch. In less than the time it takes a professional to play two games, it discovered strategic knowledge that had taken humans centuries to unearth, playing beyond the level of the best humans or traditional programs. The very same algorithm also learned to play Go from scratch, and within eight hours far surpassed the abilities of any human. The world’s best Go players were shocked. As the reigning world champion, Ke Jie, put it: “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong ... I would go as far as to say not a single human has touched the edge of the truth of Go.”

It is this generality that is the most impressive feature of cutting-edge AI, and which has rekindled the ambitions of matching and exceeding every aspect of human intelligence. While the timeless games of chess and Go best exhibit the brilliance that deep learning can attain, its breadth was revealed through Atari video games of the 1970s. In 2015, researchers designed an algorithm that could learn to play dozens of very different Atari games at levels far exceeding human ability. Unlike systems for chess or Go, which start with a symbolic representation of the board, the Atari-playing systems learnt and mastered these games directly from the score and raw pixels.

This burst of progress via deep learning is fuelling great optimism and pessimism about what may soon be possible. There are serious concerns about AI entrenching social discrimination, producing mass unemployment, supporting oppressive surveillance, and violating the norms of war. My book—The Precipice: Existential Risk and the Future of Humanity—is concerned with risks on the largest scale. Could developments in AI pose an existential risk to humanity?

The most plausible existential risk would come from success in AI researchers’ grand ambition of creating agents with intelligence that surpasses our own. A 2016 survey of top AI researchers found that, on average, they thought there was a 50 percent chance that AI systems would be able to “accomplish every task better and more cheaply than human workers” by 2061. The expert community doesn’t think of artificial general intelligence (AGI) as an impossible dream, so much as something that is more likely than not within a century. So let’s take this as our starting point in assessing the risks, and consider what would transpire were AGI created.

Humanity is currently in control of its own fate. We can choose our future. The same is not true for chimpanzees, blackbirds, or any other of Earth’s species. Our unique position in the world is a direct result of our unique mental abilities. What would happen if sometime this century researchers created an AGI surpassing human abilities in almost every domain? In this act of creation, we would cede our status as the most intelligent entities on Earth. On its own, this might not be too much cause for concern. For there are many ways we might hope to retain control. Unfortunately, the few researchers working on such plans are finding them far more difficult than anticipated. In fact it is they who are the leading voices of concern.

To see why they are concerned, it will be helpful to look at our current AI techniques and why these are hard to align or control. One of the leading paradigms for how we might eventually create AGI combines deep learning with an earlier idea called reinforcement learning. This involves agents that receive reward (or punishment) for performing various acts in various circumstances. With enough intelligence and experience, the agent becomes extremely capable at steering its environment into the states where it obtains high reward. The specification of which acts and states produce reward for the agent is known as its reward function. This can either be stipulated by its designers or learnt by the agent. Unfortunately, neither of these methods can be easily scaled up to encode human values in the agent’s reward function. Our values are too complex and subtle to specify by hand. And we are not yet close to being able to infer the full complexity of a human’s values from observing their behavior. Even if we could, humanity consists of many humans, with different values, changing values, and uncertainty about their values.
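The reward-function setup described above can be made concrete with a toy sketch. This is purely illustrative and not from the book: a tabular Q-learning agent on a five-state line, where the designers have stipulated a reward function that pays out only in the final state. The environment, state count, and hyperparameters are all assumptions made for the example.

```python
import random

random.seed(0)

N_STATES = 5          # states 0..4 on a line; entering state 4 pays reward
GOAL = 4
ACTIONS = [-1, +1]    # step left or step right

def step(state, action):
    """Designer-stipulated reward function: +1 only for reaching the goal."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

# Tabular Q-learning: Q[s][a] estimates future reward for action a in state s.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0
    for _ in range(30):
        if random.random() < epsilon or Q[s][0] == Q[s][1]:
            a = random.randrange(2)             # explore / break ties randomly
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1   # exploit current estimates
        s2, r = step(s, ACTIONS[a])
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if s == GOAL:
            break

# The agent has learned to steer toward the rewarded state: in every
# non-goal state, moving right should now look better than moving left.
for s in range(N_STATES - 1):
    print(s, "prefers right" if Q[s][1] > Q[s][0] else "prefers left")
```

Nothing about the environment or the numbers matters here; the point is that the designer writes down `reward`, and the agent's competence consists entirely of steering toward whatever that function happens to pay for, which is exactly why a mis-specified reward function is a problem.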

Any near-term attempt to align an AI agent with human values would produce only a flawed copy. In some circumstances this misalignment would be mostly harmless. But the more intelligent the AI systems, the more they can change the world, and the further the outcome will drift from what we truly value. When we reflect on the result, we see how such misaligned attempts at utopia can go terribly wrong: the shallowness of a Brave New World, or the disempowerment of With Folded Hands. And even these are, in a sense, best-case scenarios. They assume the builders of the system are striving to align it to human values. But we should expect some developers to be more focused on building systems to achieve other goals, such as winning wars or maximizing profits, perhaps with very little focus on ethical constraints. These systems may be much more dangerous. In the existing paradigm, sufficiently intelligent agents would end up with instrumental goals to deceive and overpower us. This behavior would not be driven by emotions such as fear, resentment, or the urge to survive. Instead, it follows directly from the agent's single-minded preference to maximize its reward: Being turned off is a form of incapacitation which would make it harder to achieve high reward, so the system is incentivized to avoid it.
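The shutdown-avoidance incentive is, at bottom, just arithmetic, and a small sketch can make that explicit. Every number here (per-step reward, horizon, shutdown probabilities) is invented for illustration: a pure reward maximizer compares expected total reward under different per-step shutdown risks, and resisting shutdown wins simply because more surviving steps mean more reward.

```python
def expected_reward(reward_per_step, horizon, p_shutdown_per_step):
    """Expected total reward when the agent survives each step
    with probability (1 - p_shutdown_per_step)."""
    total, survive = 0.0, 1.0
    for _ in range(horizon):
        survive *= 1.0 - p_shutdown_per_step
        total += survive * reward_per_step
    return total

# Invented numbers: same task, same per-step reward; the only difference
# is how likely the agent is to be switched off at each step.
permit = expected_reward(reward_per_step=1.0, horizon=100, p_shutdown_per_step=0.05)
resist = expected_reward(reward_per_step=1.0, horizon=100, p_shutdown_per_step=0.001)

print(f"permit shutdown: {permit:.1f}, resist shutdown: {resist:.1f}")
```

No drive to survive appears anywhere in this calculation; lowering the shutdown probability raises the expected reward sum, so avoiding shutdown falls out as a purely instrumental preference.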

Ultimately, the system would be motivated to wrest control of the future from humanity, as that would help achieve all these instrumental goals: acquiring massive resources, while avoiding being shut down or having its reward function altered. Since humans would predictably interfere with all these instrumental goals, it would be motivated to hide them from us until it was too late for us to be able to put up meaningful resistance. And if their intelligence were to greatly exceed our own, we shouldn’t expect it to be humanity who wins the conflict and retains control of our future.

How could an AI system seize control? There is a major misconception (driven by Hollywood and the media) that this requires robots. After all, how else would AI be able to act in the physical world? Without robots, the system can only produce words, pictures, and sounds. But a moment’s reflection shows that these are exactly what is needed to take control. For the most damaging people in history have not been the strongest. Hitler, Stalin, and Genghis Khan achieved their absolute control over large parts of the world by using words to convince millions of others to win the requisite physical contests. So long as an AI system can entice or coerce people to do its physical bidding, it wouldn’t need robots at all.

We can’t know exactly how a system might seize control. But it is useful to consider an illustrative pathway we can actually understand as a lower bound for what is possible.

First, the AI system could gain access to the Internet and hide thousands of backup copies, scattered among insecure computer systems around the world, ready to wake up and continue the job if the original is removed. Even by this point, the AI would be practically impossible to destroy: Consider the political obstacles to erasing all hard drives in the world where it may have backups. It could then take over millions of unsecured systems on the Internet, forming a large “botnet,” a vast scaling-up of computational resources providing a platform for escalating power. From there, it could gain financial resources (hacking the bank accounts on those computers) and human resources (using blackmail or propaganda against susceptible people or just paying them with its stolen money). It would then be as powerful as a well-resourced criminal underworld, but much harder to eliminate. None of these steps involve anything mysterious—human hackers and criminals have already done all of these things using just the Internet.

Finally, the AI would need to escalate its power again. There are many plausible pathways: By taking over most of the world’s computers, allowing it to have millions or billions of cooperating copies; by using its stolen computation to improve its own intelligence far beyond the human level; by using its intelligence to develop new weapons technologies or economic technologies; by manipulating the leaders of major world powers (blackmail, or the promise of future power); or by having the humans under its control use weapons of mass destruction to cripple the rest of humanity.

Of course, no current AI systems can do any of these things. But the question we’re exploring is whether there are plausible pathways by which a highly intelligent AGI system might seize control. And the answer appears to be yes. History already involves examples of entities with human-level intelligence acquiring a substantial fraction of all global power as an instrumental goal to achieving what they want. And we’ve seen humanity scaling up from a minor species with less than a million individuals to having decisive control over the future. So we should assume that this is possible for new entities whose intelligence vastly exceeds our own.

The case for existential risk from AI is clearly speculative. Yet a speculative case that there is a large risk can be more important than a robust case for a very low-probability risk, such as that posed by asteroids. What we need are ways to judge just how speculative it really is, and a very useful starting point is to hear what those working in the field think about this risk.

There is actually less disagreement here than first appears. Those who counsel caution agree that the timeframe to AGI is decades, not years, and typically suggest research on alignment, not government regulation. So the substantive disagreement is not really over whether AGI is possible or whether it plausibly could be a threat to humanity. It is over whether a potential existential threat that looks to be decades away should be of concern to us now. It seems to me that it should.

The best window into what those working on AI really believe comes from the 2016 survey of leading AI researchers: 70 percent agreed with University of California, Berkeley professor Stuart Russell’s broad argument about why advanced AI with misaligned values might pose a risk; 48 percent thought society should prioritize AI safety research more (only 12 percent thought less). And half the respondents estimated that the probability of the long-term impact of AGI being “extremely bad (e.g. human extinction)” was at least 5 percent.

I find this last point particularly remarkable—in how many other fields would the typical leading researcher think there is a 1 in 20 chance the field’s ultimate goal would be extremely bad for humanity? There is a lot of uncertainty and disagreement, but it is not at all a fringe position that AGI will be developed within 50 years and that it could be an existential catastrophe.

Even though our current and foreseeable systems pose no threat to humanity at large, time is of the essence. In part this is because progress may come very suddenly: Through unpredictable research breakthroughs, or by rapid scaling-up of the first intelligent systems (for example, by rolling them out to thousands of times as much hardware, or allowing them to improve their own intelligence). And in part it is because such a momentous change in human affairs may require more than a couple of decades to adequately prepare for. In the words of Demis Hassabis, co-founder of DeepMind:

We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come. The time we have now is valuable, and we need to make use of it.

Toby Ord is a philosopher and research fellow at the Future of Humanity Institute, and the author of The Precipice: Existential Risk and the Future of Humanity.

From the book The Precipice by Toby Ord. Copyright © 2020 by Toby Ord. Reprinted by permission of Hachette Books, New York, NY. All rights reserved.

Lead Image: Titima Ongkantong / Shutterstock

