big

How big is the universe? The shape of space-time could tell us

We may never know what lies beyond the boundaries of the observable universe, but the fabric of the cosmos can tell us whether the universe is infinite or not




big

Why did humans evolve big brains? A new idea bodes ill for our future

Recent fossil finds suggest that big brains weren't an evolutionary asset to our ancestors but evolved by accident – and are likely to shrink again in the near future




big

The physicist who wants to build a telescope bigger than Earth

Alex Lupsasca plans to extend Earth's largest telescope network beyond the atmosphere with a space-based dish. It could spot part of a black hole we've never seen before – and perhaps discover new physics




big

How clues in honey can help fight our biggest biodiversity challenges

There are secrets aplenty in a pot of honey – from information about bees' "micro-bee-ota" to DNA from the environment – that can help us fight food fraud and even monitor shifts in climate




big

Can we solve quantum theory’s biggest problem by redefining reality?

With its particles in two places at once, quantum theory strains our common sense notions of how the universe should work. But one group of physicists says we can get reality back if we just redefine its foundations




big

Is the world's biggest fusion experiment dead after new delay to 2035?

ITER, a €20 billion nuclear fusion reactor under construction in France, will now not switch on until 2035, a delay of 10 years. With smaller commercial fusion efforts on the rise, is it worth continuing with this gargantuan project?




big

We may finally know what caused the biggest cosmic explosion ever seen

The gamma-ray burst known as GRB221009A is the biggest explosion astronomers have ever glimpsed, and we might finally know what caused the blast




big

A slight curve helps rocks make the biggest splash

Researchers were surprised to find that a very slightly curved object produces a more dramatic splash than a perfectly flat one




big

Another blow for dark matter as biggest hunt yet finds nothing

The hunt for particles of dark matter has been stymied once again, with physicists placing constraints on this mysterious substance that are 5 times tighter than the previous best








big

The COP16 biodiversity summit was a big flop for protecting nature

Although the COP16 summit in Colombia ended with some important agreements, countries still aren’t moving fast enough to stem biodiversity loss




big

Why falling birth rates will be a bigger problem than overpopulation

Birth rates are projected to have fallen below the replacement level of 2.1 children per woman in more than three-quarters of countries by 2050








big

Obesity in Youth Could Be Big Risk Factor for MS

Category: Health News
Created: 8/26/2020 12:00:00 AM
Last Editorial Review: 8/27/2020 12:00:00 AM




big

Israeli Study Shows Pfizer Booster Gives Seniors Big Rise in Immunity

Category: Health News
Created: 8/23/2021 12:00:00 AM
Last Editorial Review: 8/23/2021 12:00:00 AM




big

Long COVID, Big Bills: Grim Legacy of Even Short Hospital Stays

Category: Health News
Created: 8/25/2021 12:00:00 AM
Last Editorial Review: 8/26/2021 12:00:00 AM




big

3 Big Pharmacy Chains Must Pay $650 Million to Ohio Counties for Role in Opioid Crisis

Category: Health News
Created: 8/18/2022 12:00:00 AM
Last Editorial Review: 8/18/2022 12:00:00 AM




big

When Removing a Big Kidney Stone, Get the Little Ones, Too

Category: Health News
Created: 8/11/2022 12:00:00 AM
Last Editorial Review: 8/11/2022 12:00:00 AM




big

Legal big gun to help Blues

The man who helped Justin Hodges escape a charge before last year’s grand final will try to save Wade Graham.




big

Turnbull: ‘It is a big economic shock’

PM Malcolm Turnbull says Australians should vote to keep a stable majority government in uncertain economic times, as the fallout from Brexit continues.




big

Take-Two are selling Private Division and closing Roll7 and Intercept, because they're in "the business of making great big hits"

Take-Two Interactive have sold their publishing label Private Division to an unnamed party, along with five of Private Division's "live and unreleased titles". The GTA 6 publisher have also finally confirmed that they have shut down OlliOlli World and Rollerdrome devs Roll7 together with Kerbal Space Program 2 creators Intercept Games, months after performing mass layoffs at both studios.





big

Our galaxy may host strange black holes born just after the big bang

The Milky Way may be home to strange black holes from the first moments of the universe, and the best candidates are the three closest black holes to Earth




big

Why the T in ChatGPT is AI's biggest breakthrough - and greatest risk

AI companies hope that feeding ever more data to their models will continue to boost performance, eventually leading to human-level intelligence. Behind this hope is the "transformer", a key breakthrough in AI, but what happens if it fails to deliver?




big

A simple driving trick could make a big dent in cars' carbon emissions

An AI-powered model found that approaching intersections more slowly could lower yearly US carbon emissions by up to around 123 million tonnes




big

AIs get worse at answering simple questions as they get bigger

Using more training data and computational power is meant to make AIs more reliable, but tests suggest large language models actually get less reliable as they grow




big

Oceans could be used for carbon capture on a big scale

In this week's issue of our environment newsletter, we look at the carbon capture potential of the world's oceans and what effect beavers are having in the Arctic (spoiler: it's not good).




big

Score big on Amazon Black Friday 2024 with my insider tips

Amazon's Black Friday sales event starts Friday, Nov. 22. Kurt the CyberGuy offers some tips on how to get the best deals on merchandise.






big

Exclusive: Liz Truss urged to act with Britain facing biggest loss of sports facilities in a generation




big

Perseid meteor shower peaks Sunday night, potentially giving stargazers big show

The annual Perseid meteor shower is set to peak on Sunday night into early Monday morning, giving stargazers the chance to see hundreds of meteors.




big

Rejecting standard cancer treatment like Elle Macpherson is a big risk

People with cancer may have understandable reasons to follow Australian supermodel Elle Macpherson in declining chemotherapy, but the odds aren’t in their favour, warns Elle Hunt




big

Andrew Ng: Unbiggen AI



Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.


The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?

Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.


It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
—Andrew Ng, CEO & Founder, Landing AI

I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.

Ng: I think so, yes.

Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”


How do you define data-centric AI, and why do you consider it a movement?

Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make them a systematic engineering discipline.

The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
—Andrew Ng

For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
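The tooling Ng describes could be sketched in a few lines of Python. This is only an illustrative sketch, not Landing AI's actual implementation; the annotation format and function name are assumptions, and it simply treats any annotator disagreement as a flag:

```python
from collections import Counter

def flag_inconsistent(annotations):
    """Return image ids whose annotators disagree, least-consistent
    first, so a reviewer can relabel those images before retraining."""
    flagged = []
    for image_id, labels in annotations.items():
        top_count = Counter(labels).most_common(1)[0][1]
        agreement = top_count / len(labels)
        if agreement < 1.0:  # any disagreement at all
            flagged.append((agreement, image_id))
    return [image_id for _, image_id in sorted(flagged)]

# Toy annotations: three annotators per image
annotations = {
    "img_001": ["scratch", "scratch", "scratch"],
    "img_002": ["dent", "scratch", "dent"],
    "img_003": ["pit", "dent", "scratch"],
    "img_004": ["ok", "ok", "ok"],
}
print(flag_inconsistent(annotations))  # ['img_003', 'img_002']
```

Sorting by agreement surfaces the most contested images first, so a reviewer's limited relabeling time goes to the examples most likely to be confusing the model.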

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
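The car-noise story is an instance of slice-based error analysis: tag each evaluation example with a condition, then compare error rates per slice. A minimal sketch, with the tags and numbers invented for illustration:

```python
def error_rate_by_slice(results):
    """results: iterable of (slice_tag, was_correct) pairs.
    Returns the error rate per slice, so the worst slice can be
    targeted for extra data collection instead of collecting
    more of everything."""
    totals, errors = {}, {}
    for tag, correct in results:
        totals[tag] = totals.get(tag, 0) + 1
        errors[tag] = errors.get(tag, 0) + (0 if correct else 1)
    return {tag: errors[tag] / totals[tag] for tag in totals}

# Invented evaluation results for a speech recogniser
results = ([("quiet", True)] * 95 + [("quiet", False)] * 5
           + [("car_noise", True)] * 70 + [("car_noise", False)] * 30)
rates = error_rate_by_slice(results)
worst = max(rates, key=rates.get)
print(worst, rates[worst])  # car_noise 0.3
```

In this toy data the car-noise slice errs six times as often as the quiet slice, which is exactly the signal to collect more car-noise recordings specifically.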


What about using synthetic data, is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
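That targeted step might look like the toy sketch below, where a simple jitter function stands in for a real synthetic-data generator; the function names, labels, and feature vectors are all invented for illustration:

```python
import random

def oversample_class(dataset, target_label, factor, generate):
    """Append `factor` generated variants of every example in the
    underperforming class, leaving the other classes untouched."""
    extra = []
    for features, label in dataset:
        if label == target_label:
            extra.extend((generate(features), label) for _ in range(factor))
    return dataset + extra

random.seed(0)

def jitter(features):
    # Stand-in for real synthetic-data generation.
    return [x + random.uniform(-0.01, 0.01) for x in features]

data = [([0.1, 0.2], "scratch"), ([0.3, 0.4], "pit_mark"), ([0.5, 0.6], "dent")]
augmented = oversample_class(data, "pit_mark", 4, jitter)
print(len(augmented))  # 3 originals + 4 new pit_mark examples = 7
```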

“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
—Andrew Ng

Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.


To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us, we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
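One simple form such a drift flag could take is a z-test on the mean of a monitored feature. The feature, values, and threshold below are invented for illustration; a production system would track many statistics, not one:

```python
import statistics

def drifted(train_values, live_values, threshold=3.0):
    """Flag drift when the live mean of a monitored feature sits more
    than `threshold` standard errors from the training mean."""
    mu = statistics.mean(train_values)
    se = statistics.stdev(train_values) / len(train_values) ** 0.5
    return abs(statistics.mean(live_values) - mu) / se > threshold

# Invented image-brightness readings from a factory camera
train = [0.48, 0.50, 0.52, 0.49, 0.51, 0.50, 0.47, 0.53, 0.50, 0.50]
print(drifted(train, [0.50, 0.49, 0.51]))  # False: lighting unchanged
print(drifted(train, [0.80, 0.82, 0.79]))  # True: lighting has shifted
```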

In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.


This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”




big

Microsoft reports big profits amid massive AI investments

Xbox hardware sales dropped 29 percent, but that barely made a dent.




big

Amazon Sale: Up to 50% Off Rebecca Minkoff Handbags at the Big Summer Sale

Shop these deep discounts of up to 50% off of loads of styles of Rebecca Minkoff handbags at Amazon's Big Summer Sale.





big

Amazon Sale: Up to 60% Off Kate Spade Purses, Jewelry and More at the Big Summer Sale

Shop discounts of up to 60% off of stylish Kate Spade handbags, jewelry, watches and accessories from Amazon's Big Summer Sale.






big

The PS5 Pro’s biggest problem is that the PS5 is already very good

For $700, I was hoping for a much larger leap in visual impact.




big

Martin Garrix set to perform in ‘world’s biggest Holi celebration’ in India

Tickets for the event will go on sale on November 10, 2024, via BookMyShow




big

Castle Crumble for Apple Vision Pro Is Now Available on Apple Arcade Alongside Big Updates for Puyo Puyo Puzzle Pop, Crayola Adventures, and More

This week, Apple Arcade sees a major game update for Apple Vision Pro and a few notable updates. Castle Crumble …




big

We’ve been missing a big part of game industry’s digital revolution

NPD “restatement” shows consistent spending growth as digital sales dominate.




big

Tesla posts bigger-than-expected loss, bigger-than-expected revenue [Updated]

Company expects to be cash flow positive in the next two quarters.




big

Why thousands gathering around rancid 'dead whale' by world's biggest lake...







big

'YELLOWSTONE' First Episode Without Costner Scores Biggest Premiere Night Audience...







big

Once-big Portland tech employer lays off dozens

A year after going private in a $6.5 billion private equity deal, software maker New Relic is into its second round of layoffs this year.




big

Jon Stewart names the one big problem with claiming the Democrats were ‘too woke’

‘They acted like Republicans for the last four months,’ Stewart lamented in a scathing monologue




big

What Are the Biggest Lakes in the U.S.?

The United States is home to some truly spectacular lakes. Whether considering the massive Great Lakes themselves or deep alpine gems like Lake Tahoe, with its crystal-clear waters, America is well-stocked with many sizable bodies of water.




big

Sonos Arc Ultra review: New tech powers a big audio upgrade

2024 has been a rough year for Sonos. The company’s would-be triumphant entry into the crowded headphones market was overshadowed by a disastrous app redesign. In the fallout of the botched software update, the company decided to delay products that were ready to be shipped to give itself more time to right the course. Consumer trust eroded, and people who already owned Sonos gear were living in a cycle of constant frustration.

Thanks to a number of rumors, we already knew that one of the pending product releases was the Arc Ultra ($999). A few weeks ago the company decided not to wait any longer to reveal it. While the design is mostly unchanged from the Arc that debuted in 2020, there are several key changes on the inside that make this a better all-in-one solution for people who don’t want to add more speakers to their living room setup. Sonos is promising better bass performance thanks to new speaker tech that’s debuting in the Arc Ultra, but just how good can it be?

The refined design of the original Arc was a massive upgrade from that of the Playbar, and showed a progression from Sonos’ compact Beam soundbar. Honestly, the aesthetic is pretty timeless, in my opinion, and it’s a look that should age well for years to come. That said, it makes sense that Sonos would keep the design for the Arc Ultra, only making some minor changes to the exterior.

The Arc Ultra still comes in both black and white options, allowing you to choose what looks best in your living room or home theater. Sonos updated the controls to mirror what’s available on the newer Era 100 and Era 300 speakers, moving them to a top-facing bar at the back. There’s a volume slider on the right with play/pause and skip controls in the center. On the left side, Sonos gives you a microphone control so you can mute the built-in mics as needed.

And that’s really it in terms of design changes that you can see. The Arc Ultra is slightly shorter than the Arc and a little wider than its predecessor. Neither change makes a huge difference, and neither will drastically affect how you position the speaker beneath your TV.

Sonos' new Sound Motion woofer is situated on the right side of the soundbar
Sonos

Inside, Sonos has re-engineered the Arc Ultra to improve audio performance. The biggest piece of this overhaul is the new Sound Motion woofer that enables better bass performance before you add a standalone wireless sub. The achievement here, thanks to the acquisition of audio company Mayht, is that the new component lies flat, taking up less room than a traditional cone-shaped woofer. The Sound Motion driver also helps deliver increased clarity and depth, on top of doubling the bass output of the original Arc.

Sonos redesigned the entire acoustic architecture of the Arc Ultra during the process of adding the Sound Motion woofer. The soundbar now houses three more drivers than the Arc, a list that includes seven tweeters (two of which are up-firing), six mid-range drivers (midwoofers, as Sonos calls them) and the aforementioned woofer for a total of 14. The company also employs 15 Class-D digital amplifiers along with far-field mics for tuning and voice control.

There’s still only a single HDMI (eARC) port, which lets most modern TVs control the soundbar’s volume and mute options from your TV remote. Moreover, the Arc Ultra is compatible with Wi-Fi 6, and newly added Bluetooth 5.3 connectivity allows you to stream from any device. And of course, AirPlay 2 is still on the spec sheet. One last thing I’ll mention here is that the Arc Ultra doesn’t ship with an optical adapter if you prefer that connection. The company will sell you one for $25.

Since the Arc Ultra is a Sonos product, there are a lot of core features that are the same as they are on the company’s other devices. You can use the soundbar as part of a multiroom setup and Trueplay tuning is here to adjust the audio to the acoustics of the room. There’s still an adjustable EQ with options for bass, treble and loudness and a Night Sound mode makes things less boomy when someone in your house may be trying to work or sleep.

While Trueplay will give you the best sound customization for the sonic characteristics of your living room, Sonos is enabling a Quick Tune feature for the first time on the Arc Ultra. Here, the soundbar will use its internal mics, as opposed to your phone, to offer a certain degree of improvement. The company says it wanted to give people the option of something quicker than Trueplay, although the full-fledged tuning process doesn’t take very long at all.

Speech Enhancement has been a handy feature on Sonos soundbars for a while, giving you the ability to improve dialog clarity as needed. Before now, it was an all-or-nothing feature, but on the Arc Ultra, the company introduced three levels of speech boost to give you more options to better suit your needs. This means the soundbar can help you hear clearly over background noise or simply follow along better by elevating dialog above the rest of the soundtrack mix.

Sonos moved the controls to a bar along the back
Billy Steele for Engadget

The trademark feature of Sonos’ Ace headphones is the ability to beam the audio from a compatible soundbar to the cans for a private home theater. That TV Audio Swap tool is available on the Arc Ultra, so you can instantly send the sound to the headphones with the press of a button. In fact, Sonos bundles the Arc Ultra and Ace headphones in a $1,373 set. What’s more, the Ace supports spatial audio with dynamic head tracking, so you can count on immersive sound even when you’re listening solo.

Despite all of the problems Sonos has had with its app, some of which it's still working to resolve, my testing went almost entirely without a hitch. The software crashed on me once as the Trueplay tuning process completed, but the calibration had already run its course and I didn’t have to repeat it. Other than that, the app worked reliably over the last week while I’ve been putting the Arc Ultra through its paces. Most importantly, the software is stable and the full suite of controls for the new soundbar is available at launch.

The original Arc already sounded great, so Sonos really had its work cut out for it to further improve the audio quality for the Arc Ultra. Because the Sound Motion woofer delivers better bass from a smaller footprint, the company says it was able to overhaul the mid-range and high-frequency components as well. By using multiple sizes of mid-range drivers and tweeters, Sonos was able to tweak the speaker positioning inside the soundbar for improved projection and more immersive sound.

Beyond the enhanced bass performance, the improvements to dimensional sound were immediately apparent on the Arc Ultra. Whether it was a quidditch match in a Harry Potter movie or zooming F1 cars in Drive to Survive, the soundbar now has better directional, immersive sound than its predecessor. Sonos says the Arc Ultra now renders Dolby Atmos content in a 9.1.4-channel setup, versus 5.0.2 with the Arc, which further contributes to the enveloping audio. Interestingly, I haven’t seen any of the competition claim four up-firing channels from the soundbar itself like Sonos does here (those that do are usually accounting for up-firing drivers in the rear speakers). You can really hear the difference from the second you fire up the Arc Ultra, and the effect is consistent across content sources.

There's still just one HDMI port, but Bluetooth connectivity is now included
Billy Steele for Engadget

The increased bass performance makes the Arc Ultra a much better speaker for music without a separate subwoofer. You won’t get the bombastic low-end tone the newly updated Sonos Sub 4 can produce, but there’s enough from the soundbar to give Kaytranada’s Timeless, Phantogram’s Memory of a Day and Bilmuri’s American Motor Sports plenty of booming backbone when a track demands it. There’s also still the trademark Sonos clarity I’ve come to expect over the years, which means finer details like the texture of synths, layered guitars and the nuance of acoustic instruments cut through the mix cleanly.

And speaking of clarity, the company’s new Speech Enhancement settings are also a big improvement. Being able to choose how much of a boost the feature applies, based on either my needs in the moment or the overall mix of the content, is really nice. It let me max out the dialog when watching movies after my toddler was asleep, so speech didn’t get lost when sound effects swelled during intense scenes of The Hobbit: An Unexpected Journey.

There’s no doubt the Arc Ultra packs in deeper, more immersive sound than its predecessor, but some people will still want a more robust setup to wring every ounce of audio out of a Sonos living room setup. The high-end choice for this is what Sonos calls the Ultimate Immersive Set, which includes the Arc Ultra, two Era 300s and the Sub 4. Right now, that will cost you $2,561. For something less expensive, you can get the Arc Ultra and the new Sub 4 (normally $799) for $1,708 (Premium Entertainment Set). And therein lies the biggest problem with Sonos soundbars: expanding your living room setup to get the most immersive experience gets very pricey very quickly when the centerpiece is already $999.

If you can live without all the conveniences of Sonos products, you can get an all-in-one package from Samsung for $1,500. With the Q990D, you’ll get the soundbar, two rear speakers and a wireless subwoofer in the same box. The setup offers 11.1.4 audio for excellent Atmos sound, thanks in part to up-firing drivers in the rear speakers. Samsung includes a host of handy features, among them Q-Symphony audio with TV speakers, SpaceFit Sound Pro room calibration, Adaptive Sound audio enhancement and a dedicated gaming mode. The Q990D remains my top pick for the best soundbars for a lot of reasons, a key one being that everything you could need comes in one all-inclusive package.

The Arc Ultra is an obvious improvement over the Arc in the sonic department. The new technology delivers on its promise to boost bass, clarity and immersiveness before you start adding extra components. Expanded features like Speech Enhancement and a quick-tune option offer new tools for dialing in the sound, while the stock Sonos experience remains intact. And thankfully, that includes an app that’s more stable than it was a few months ago. The Arc Ultra is still pricey at $999, but it exhibits a lot more sonic prowess than its predecessor for only $100 more.

This article originally appeared on Engadget at https://www.engadget.com/audio/speakers/sonos-arc-ultra-review-new-tech-powers-a-big-audio-upgrade-130011149.html?src=rss



