British woman busted at Los Angeles airport with meth-soaked T-shirts: police

Myah Saakwa-Mante, a 20-year-old British university student, was caught at Los Angeles International Airport and arrested after allegedly attempting to smuggle T-shirts soaked with methamphetamine.



Betsy DeVos joins Trump’s call to 'disband' the Department of Education and 're-empower' families

Former Education Secretary Betsy DeVos discusses what a second Trump term could mean for U.S. education on "The Story with Martha MacCallum."



Mark Cuban runs to 'less hateful' social media platform after scrubbing X account of Harris support

Dallas Mavericks minority owner Mark Cuban returned to the Bluesky social media platform with a post after weeks of contentious X posts.



Oregon man defaced synagogue with antisemitic graffiti multiple times: DOJ

A man from Eugene, Oregon, pleaded guilty to federal hate crimes on Tuesday after he spray-painted antisemitic graffiti on a synagogue in 2023 and 2024.



Deion Sanders said he would tell NFL teams son Shedeur Sanders won't play for them if it's not the right fit

Just like Eli Manning in 2004, Deion Sanders said he would tell NFL teams his son, Shedeur Sanders, won't play for them if it's not the right fit.



SEAN HANNITY: America's massive bureaucracy will soon face a very heavy dose of reality again

Fox News host Sean Hannity says the "decentralization of power as our founders intended is very much on its way to DC."



Georgia on outside of College Football Playoff bracket as wild week brings rankings shakeup

Georgia's loss to Ole Miss Saturday brought a wild shakeup to the college football rankings, and the Bulldogs find themselves out of the playoff picture.



Man arrested in NYC strangulation death of woman found outside Times Square hotel

Authorities arrested a man accused of strangling a woman found outside a Times Square hotel; she later died of her injuries, police said Tuesday.



Trump selects South Dakota Gov Kristi Noem to run Department of Homeland Security

President-elect Trump announced on Tuesday that Kristi Noem is his pick for secretary of the Department of Homeland Security.



Republican Gabe Evans wins Colorado's 8th Congressional District, beating incumbent Yadira Caraveo

The Associated Press has declared a winner in Colorado's 8th Congressional District, which has been one of the most closely watched races in the country.



Rick Scott gains new Senate endorsements out of candidate forum on eve of leader election

Senate Republicans met on Tuesday night to hear from the three candidates to succeed Mitch McConnell, and Rick Scott left with two new endorsements.



Republican David Valadao wins re-election to US House in California's 22nd Congressional District

Incumbent Republican David Valadao is projected to win re-election in California's 22nd Congressional District. The highly contested race was considered a tossup.



Senator-elect Jim Justice's team clarifies report claiming famous pooch Babydog banned from Senate floor

Senator-elect Jim Justice's office has clarified reports that his famous pooch Babydog was banned from the Senate floor, saying Justice never intended to bring the dog onto the floor.



Country star Darius Rucker donates to ETSU’s NIL fund after 'awkward' appearance at football game

Country music star Darius Rucker paid East Tennessee State University's NIL fund $10 for every minute he was on the field Saturday after what he called an "awkward" appearance.



Bev Priestman out as Canadian women's head soccer coach following Olympic drone scandal probe

The Canadian women's soccer team was implicated in a drone scandal this past summer, but an investigation determined that drone use against opponents predated the Paris Olympics.



Atomically Thin Materials Significantly Shrink Qubits



Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality.

IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor are feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.

Now researchers at MIT have been able to reduce the size of the qubits, and to do so in a way that reduces the interference between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

Just like other capacitors, the capacitors in these superconducting circuits consist of an insulating material sandwiched between two metal plates. The big difference is that these superconducting circuits operate only at extremely low temperatures, less than 0.02 degrees above absolute zero (-273.15 °C).

Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. [Photo: Nathan Fiske/MIT]

In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.

As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
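The geometry trade-off can be made concrete with the ideal parallel-plate formula. The sketch below compares the quoted 100-by-100-micrometer coplanar plates against a hypothetical few-nanometer hBN sandwich; the hBN permittivity, gap thickness, and target capacitance are assumed values for illustration, not figures from the MIT work:

```python
# Back-of-the-envelope comparison (illustrative numbers, not from the paper):
# ideal parallel-plate capacitance C = eps0 * eps_r * A / d.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(eps_r, area_m2, gap_m):
    """Capacitance of an ideal parallel-plate capacitor, in farads."""
    return EPS0 * eps_r * area_m2 / gap_m

# Coplanar design: ~100 x 100 um plates, as quoted above.
coplanar_area = 100e-6 * 100e-6  # m^2

# Stacked hBN design: all values below are assumptions for illustration.
eps_hbn = 3.5      # assumed out-of-plane relative permittivity of hBN
gap_hbn = 5e-9     # ~15 monolayers of hBN, assumed
target_C = 70e-15  # ~70 fF, a typical transmon shunt capacitance (assumed)

# Plate area needed to reach the same capacitance with the hBN sandwich:
area_needed = target_C * gap_hbn / (EPS0 * eps_hbn)
side_um = (area_needed ** 0.5) * 1e6
ratio = coplanar_area / area_needed
print(f"hBN plate side for {target_C * 1e15:.0f} fF: ~{side_um:.1f} um")
print(f"footprint reduction vs. 100 x 100 um plates: ~{ratio:.0f}x")
```

Even with these rough assumptions, a nanometer-scale dielectric shrinks the plate side from about 100 micrometers to a few micrometers, which is the kind of density gain the article describes.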

In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.

“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics.

On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.

“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.


How AI Will Change Chip Design



The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

How is AI currently being used to design the next generation of chips?

Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

Heather Gorr [Photo: MathWorks]

Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

What are the benefits of using AI for chip design?

Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.

So it’s like having a digital twin in a sense?

Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you can tweak and tune, trying different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

So, it’s going to be more efficient and, as you said, cheaper?

Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

We’ve talked about the benefits. How about the drawbacks?

Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

How can engineers use AI to better prepare and extract insights from hardware or sensor data?

Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
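As a rough illustration of the resampling and frequency-domain exploration Gorr mentions, the sketch below interpolates jittered sensor readings onto a uniform grid and then picks out the dominant frequency with a plain DFT; the signal and sampling scheme are invented for the example:

```python
# Resample unevenly sampled sensor data, then inspect the frequency domain.
import cmath
import math

def resample_uniform(times, values, n):
    """Linearly interpolate (times, values) onto n evenly spaced points."""
    t0, t1 = times[0], times[-1]
    grid = [t0 + i * (t1 - t0) / (n - 1) for i in range(n)]
    out, j = [], 0
    for t in grid:
        while j < len(times) - 2 and times[j + 1] < t:
            j += 1
        frac = (t - times[j]) / (times[j + 1] - times[j])
        out.append(values[j] + frac * (values[j + 1] - values[j]))
    return grid, out

def dominant_frequency(samples, sample_rate):
    """Strongest nonzero bin of a plain O(n^2) DFT, in hertz."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        s = sum(samples[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * sample_rate / n

# Fake vibration data: a 5 Hz sine sampled at slightly jittered instants.
times = [i * 0.01 + 0.001 * math.sin(i) for i in range(200)]
values = [math.sin(2 * math.pi * 5.0 * t) for t in times]

grid, uniform = resample_uniform(times, values, 200)
rate = (len(grid) - 1) / (grid[-1] - grid[0])
print(f"dominant frequency ~ {dominant_frequency(uniform, rate):.1f} Hz")
```

In practice one would reach for an FFT library rather than an O(n^2) DFT, but the two-step pattern (synchronize/resample first, then go to the frequency domain) is the point here.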

One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.

What should engineers and designers consider when using AI for chip design?

Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

How do you think AI will affect chip designers’ jobs?

Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

How do you envision the future of AI and chip design?

Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.




Andrew Ng: Unbiggen AI



Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.

The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?

Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.

It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
—Andrew Ng, CEO & Founder, Landing AI

I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.

Ng: I think so, yes.

Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”

How do you define data-centric AI, and why do you consider it a movement?

Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
—Andrew Ng

For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
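A toy version of such a consistency-flagging tool (our illustration, not Landing AI's actual tooling) might rank items by annotator agreement so the most contested ones get relabeled first:

```python
# Flag items whose annotators disagree, most-contested first.
from collections import Counter

def inconsistent_items(labels_by_item):
    """labels_by_item: item id -> labels from different annotators.
    Returns ids with any disagreement, lowest agreement first."""
    flagged = []
    for item_id, labels in labels_by_item.items():
        counts = Counter(labels)
        if len(counts) > 1:
            agreement = counts.most_common(1)[0][1] / len(labels)
            flagged.append((agreement, item_id))
    return [item_id for _, item_id in sorted(flagged)]

labels = {
    "img_001": ["scratch", "scratch", "scratch"],  # unanimous: not flagged
    "img_002": ["scratch", "dent", "scratch"],
    "img_003": ["dent", "pit", "scratch"],         # three-way disagreement
}
print(inconsistent_items(labels))  # most contested first
```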

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
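The car-noise anecdote is an instance of error analysis by slice. A minimal sketch (tags and results invented for illustration) groups evaluation examples by a metadata tag and finds the slice with the worst accuracy, which is where extra data collection would pay off most:

```python
# Error analysis by slice: find the metadata tag with the worst accuracy.
from collections import defaultdict

def worst_slice(examples):
    """examples: list of (tag, correct) pairs. Returns (tag, accuracy)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for tag, correct in examples:
        totals[tag] += 1
        hits[tag] += int(correct)
    accuracies = {t: hits[t] / totals[t] for t in totals}
    tag = min(accuracies, key=accuracies.get)
    return tag, accuracies[tag]

evals = [
    ("quiet", True), ("quiet", True), ("quiet", True), ("quiet", False),
    ("car_noise", False), ("car_noise", False), ("car_noise", True),
]
tag, acc = worst_slice(evals)
print(f"collect more data for: {tag} (accuracy {acc:.2f})")
```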

What about using synthetic data, is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
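In spirit, and drastically simplified, that targeted generation step looks like the sketch below, which jitters feature vectors for the weak class instead of rendering synthetic defect images; everything here is invented for illustration:

```python
# Targeted augmentation in miniature: add synthetic examples only for the
# weak class, leaving the rest of the data set alone.
import random

def augment_class(dataset, weak_label, n_new, noise=0.05, seed=0):
    """dataset: list of (features, label) pairs. Appends n_new jittered
    copies of randomly chosen weak-class examples; other classes untouched."""
    rng = random.Random(seed)
    pool = [(feats, lab) for feats, lab in dataset if lab == weak_label]
    synthetic = []
    for _ in range(n_new):
        feats, lab = rng.choice(pool)
        synthetic.append(([x + rng.gauss(0.0, noise) for x in feats], lab))
    return dataset + synthetic

data = [([0.2, 0.9], "scratch"), ([0.8, 0.1], "pit_mark")]
bigger = augment_class(data, "pit_mark", n_new=3)
print(len(bigger), sum(1 for _, lab in bigger if lab == "pit_mark"))
```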

“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
—Andrew Ng

Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.


To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us, we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
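Ng mentions tools that flag significant data drift. As a minimal, hypothetical sketch (not Landing AI's actual tooling), a drift check might compare a recent window of some summary statistic, say mean image brightness, against a reference window from training time:

```python
# Hypothetical sketch: flag data drift by comparing a recent window of a
# feature (e.g., mean pixel brightness per image) against a reference window.
import statistics

def drift_score(reference, recent):
    """Shift of the recent mean, in units of the reference std deviation."""
    mu, sigma = statistics.mean(reference), statistics.stdev(reference)
    return abs(statistics.mean(recent) - mu) / sigma

reference = [100, 102, 98, 101, 99, 100, 103, 97]  # from training time
recent = [120, 118, 122, 119]  # e.g., lighting in the factory changed

if drift_score(reference, recent) > 3.0:  # threshold is a judgment call
    print("drift detected: review data, relabel, retrain")
```

Production systems typically use richer statistics (histograms, population stability index, per-class score distributions), but the alert-then-retrain workflow is the same one Ng describes.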

In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.


This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”




an

Multiband Antenna Simulation and Wireless KPI Extraction



In this upcoming webinar, explore how to leverage the state-of-the-art high-frequency simulation capabilities of Ansys HFSS to innovate and develop advanced multiband antenna systems.

Overview

This webinar will explore how to leverage the state-of-the-art high-frequency simulation capabilities of Ansys HFSS to innovate and develop advanced multiband antenna systems. Attendees will learn how to optimize antenna performance and analyze installed performance within wireless networks. The session will also demonstrate how this approach enables users to extract valuable wireless and network KPIs, providing a comprehensive toolset for enhancing antenna design, optimizing multiband communication, and improving overall network performance. Join us to discover how Ansys HFSS can transform your approach to wireless system design and network efficiency.

What Attendees will Learn

  • How to design interleaved multiband antenna systems using the latest capabilities in HFSS
  • How to extract network key performance indicators (KPIs)
  • How to run simulations and extract RF channels for dynamic environments

Who Should Attend

This webinar is valuable to anyone involved in antenna design, R&D, product design, or wireless networks.

Register now for this free webinar!




an

New Carrier Fluid Makes Hydrogen Way Easier to Transport



Imagine pulling up to a refueling station and filling your vehicle’s tank with liquid hydrogen, as safe and convenient to handle as gasoline or diesel, without the need for high-pressure tanks or cryogenic storage. This vision of a sustainable future could become a reality if a Calgary, Canada–based company, Ayrton Energy, can scale up its innovative method of hydrogen storage and distribution. Ayrton’s technology could make hydrogen a viable, one-to-one replacement for fossil fuels in existing infrastructure like pipelines, fuel tankers, rail cars, and trucks.

The company’s approach is to use liquid organic hydrogen carriers (LOHCs) to make it easier to transport and store hydrogen. The method chemically bonds hydrogen to carrier molecules, which absorb hydrogen molecules and make them more stable—kind of like hydrogenating cooking oil to produce margarine.

A researcher pours a sample of Ayrton’s LOHC fluid into a vial. Ayrton Energy

The approach would allow liquid hydrogen to be transported and stored at ambient conditions, rather than in the high-pressure, cryogenic tanks (holding it at about −253 °C) currently required for keeping hydrogen in liquid form. It would also be a big improvement on gaseous hydrogen, which is highly volatile and difficult to keep contained.

Founded in 2021, Ayrton is one of several companies across the globe developing LOHCs, including Japan’s Chiyoda and Mitsubishi, Germany’s Covalion, and China’s Hynertech. But toxicity, energy density, and input energy issues have limited LOHCs as contenders for making liquid hydrogen feasible. Ayrton says its formulation eliminates these trade-offs.

Safe, Efficient Hydrogen Fuel for Vehicles

Conventional LOHC technologies used by most of the aforementioned companies rely on substances such as toluene, which forms methylcyclohexane when hydrogenated. These carriers pose safety risks due to their flammability and volatility. Hydrogenious LOHC Technologies in Erlangen, Germany, and other hydrogen fuel companies have shifted toward dibenzyltoluene, a more stable carrier that holds more hydrogen per unit volume than methylcyclohexane, though it requires higher temperatures (and thus more energy) to bind and release the hydrogen. Dibenzyltoluene hydrogenation occurs at between 3 and 10 megapascals (30 and 100 bar) and 200–300 °C, compared with 10 MPa (100 bar) and just under 200 °C for methylcyclohexane.

Ayrton’s proprietary oil-based hydrogen carrier not only captures and releases hydrogen with less input energy than is required for other LOHCs, but also stores more hydrogen than methylcyclohexane can—55 kilograms per cubic meter compared with methylcyclohexane’s 50 kg/m³. Dibenzyltoluene holds more hydrogen per unit volume (up to 65 kg/m³), but Ayrton’s approach to infusing the carrier with hydrogen atoms promises to cost less. Hydrogenation or dehydrogenation with Ayrton’s carrier fluid occurs at 0.1 megapascal (1 bar) and about 100 °C, says founder and CEO Natasha Kostenuk. And as with the other LOHCs, after hydrogenation it can be transported and stored at ambient temperatures and pressures.
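A quick comparison using the densities quoted above makes the trade-off concrete. The 30 m³ tanker volume is illustrative, not a figure from the article:

```python
# Worked arithmetic from the article's figures: hydrogen carried by a
# hypothetical 30 m^3 tanker load for each carrier.
densities = {  # kg of hydrogen per m^3 of carrier fluid
    "methylcyclohexane": 50,
    "ayrton_carrier": 55,
    "dibenzyltoluene": 65,
}
volume_m3 = 30  # illustrative tanker volume, not from the article

loads = {carrier: d * volume_m3 for carrier, d in densities.items()}
for carrier, kg in sorted(loads.items(), key=lambda kv: kv[1]):
    print(f"{carrier}: {kg} kg of hydrogen")
```

Dibenzyltoluene wins on raw capacity, which is why Ayrton's pitch rests on the lower hydrogenation temperature and pressure rather than density alone.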

“Judges described [Ayrton’s approach] as a critical technology for the deployment of hydrogen at large scale.” —Katie Richardson, National Renewable Energy Lab

Ayrton’s LOHC fluid is as safe to handle as margarine, but it’s still a chemical, says Kostenuk. “I wouldn’t drink it. If you did, you wouldn’t feel very good. But it’s not lethal,” she says.

Kostenuk and fellow Ayrton cofounder Brandy Kinkead (who serves as the company’s chief technical officer) were originally trying to bring hydrogen generators to market to fill gaps in the electrical grid. “We were looking for fuel cells and hydrogen storage. Fuel cells were easy to find, but we couldn’t find a hydrogen storage method or medium that would be safe and easy to transport to fuel our vision of what we were trying to do with hydrogen generators,” Kostenuk says. During the search, they came across LOHC technology but weren’t satisfied with the trade-offs demanded by existing liquid hydrogen carriers. “We had the idea that we could do it better,” she says. The duo pivoted, adjusting their focus from hydrogen generators to hydrogen storage solutions.

“Everybody gets excited about hydrogen production and hydrogen end use, but they forget that you have to store and manage the hydrogen,” Kostenuk says. Incompatibility with current storage and distribution has been a barrier to adoption, she says. “We’re really excited about being able to reuse existing infrastructure that’s in place all over the world.” Ayrton’s hydrogenated liquid has fuel-cell-grade (99.999 percent) hydrogen purity, so there’s no advantage in using pure liquid hydrogen with its need for subzero temperatures, according to the company.

The main challenge the company faces is the set of issues that come along with any technology scaling up from pilot-stage production to commercial manufacturing, says Kostenuk. “A crucial part of that is aligning ourselves with the right manufacturing partners along the way,” she notes.

Asked about how Ayrton is dealing with some other challenges common to LOHCs, Kostenuk says Ayrton has managed to sidestep them. “We stayed away from materials that are expensive and hard to procure, which will help us avoid any supply chain issues,” she says. By performing the reactions at such low temperatures, Ayrton can get its carrier fluid to withstand 1,000 hydrogenation-dehydrogenation cycles before it no longer holds enough hydrogen to be useful. Conventional LOHCs are limited to a couple of hundred cycles before the high temperatures required for bonding and releasing the hydrogen break down the fluid and diminish its storage capacity, Kostenuk says.

Breakthrough in Hydrogen Storage Technology

In acknowledgement of what Ayrton’s nontoxic, oil-based carrier fluid could mean for the energy and transportation sectors, the U.S. National Renewable Energy Lab (NREL) at its annual Industry Growth Forum in May named Ayrton an “outstanding early-stage venture.” A selection committee of more than 180 climate tech and cleantech investors and industry experts chose Ayrton from a pool of more than 200 initial applicants, says Katie Richardson, group manager of NREL’s Innovation and Entrepreneurship Center, which organized the forum. The committee based its decision on the company’s innovation, market positioning, business model, team, next steps for funding, technology, capital use, and quality of pitch presentation. “Judges described Ayrton’s approach as a critical technology for the deployment of hydrogen at large scale,” Richardson says.

As a next step toward enabling hydrogen to push gasoline and diesel aside, “we’re talking with hydrogen producers who are right now offering their customers cryogenic and compressed hydrogen,” says Kostenuk. “If they offered LOHC, it would enable them to deliver across longer distances, in larger volumes, in a multimodal way.” The company is also talking to some industrial site owners who could use the hydrogenated LOHC for buffer storage to hold onto some of the energy they’re getting from clean, intermittent sources like solar and wind. Another natural fit, she says, is energy service providers that are looking for a reliable method of seasonal storage beyond what batteries can offer. The goal is to eventually scale up enough to become the go-to alternative (or perhaps the standard) fuel for cars, trucks, trains, and ships.




an

Honor a Loved One With an IEEE Foundation Memorial Fund



As the philanthropic partner of IEEE, the IEEE Foundation expands the organization’s charitable body of work by inspiring philanthropic engagement that ignites a donor’s innermost interests and values.

One way the Foundation does so is by partnering with IEEE units to create memorial funds, which pay tribute to members, family, friends, teachers, professors, students, and others. This type of giving honors someone special while also supporting future generations of engineers and celebrating innovation.

Below are three recently created memorial funds that not only have made an impact on their beneficiaries and perpetuated the legacy of the namesake but also have a deep meaning for those who launched them.

EPICS in IEEE Fischer Mertel Community of Projects

The EPICS in IEEE Fischer Mertel Community of Projects was established to support projects “designed to inspire multidisciplinary teams of engineering students to collaborate and engineer solutions to address local community needs.”

The fund was created by the children of Joe Fischer and Herb Mertel to honor their fathers’ passion for mentoring students. Longtime IEEE members, Fischer and Mertel were active with the IEEE Electromagnetic Compatibility Society. Fischer was the society’s 1972 president and served on its board of directors for six years. Mertel served on the society’s board from 1979 to 1983 and again from 1989 to 1993.

“The EPICS in IEEE Fischer Mertel Community of Projects was established to inspire and support outstanding engineering ideas and efforts that help communities worldwide,” says Tina Mertel, Herb’s daughter. “Joe Fischer and my father had a lifelong friendship and excelled as engineering leaders and founders of their respective companies [Fischer Custom Communications and EMACO]. I think that my father would have been proud to know that their friendship and work are being honored in this way.”

The nine projects supported thus far have the potential to impact more than 104,000 people because of the work and collaboration of 190 students worldwide. The projects funded are intended to represent at least two of the EPICS in IEEE’s focus categories: education and outreach; human services; environmental; and access and abilities.


IEEE AESS Michael C. Wicks Radar Student Travel Grant

The IEEE Michael C. Wicks Radar Student Travel Grant was established by IEEE Fellow Michael Wicks prior to his death in 2022. The grant provides travel support for graduate students who are the primary authors on a paper being presented at the annual IEEE Radar Conference. Wicks was an electronics engineer and a radar industry leader who was known for developing knowledge-based space-time adaptive processing. He believed in investing in the next generation, and he wanted to provide an opportunity for that to happen.

Ten graduate students have been awarded the Wicks grant to date. This year two students from Region 8 (Africa, Europe, Middle East) and two students from Region 10 (Asia and Pacific) were able to travel to Denver to attend the IEEE Radar Conference and present their research. The papers they presented are “Target Shape Reconstruction From Multi-Perspective Shadows in Drone-Borne SAR Systems” and “Design of Convolutional Neural Networks for Classification of Ships from ISAR Images.”

Life Fellow Fumio Koyama and IEEE Fellow Constance J. Chang-Hasnain proudly display their IEEE Nick Holonyak, Jr. Medal for Semiconductor Optoelectronic Technologies at this year’s IEEE Honors Ceremony. They are accompanied by IEEE President-Elect Kathleen Kramer and IEEE President Tom Coughlin. Robb Cohen

IEEE Nick Holonyak Jr. Medal for Semiconductor Optoelectronic Technologies

The IEEE Nick Holonyak Jr. Medal for Semiconductor Optoelectronic Technologies was created with a memorial fund supported by some of Holonyak’s former graduate students to honor his work as a professor and mentor. Presented on behalf of the IEEE Board of Directors, the medal recognizes outstanding contributions to semiconductor optoelectronic devices and systems including high-energy-efficiency semiconductor devices and electronics.

Holonyak was a prolific inventor and longtime professor of electrical engineering and physics. In 1962, while working as a scientist at General Electric’s Advanced Semiconductor Laboratory in Syracuse, N.Y., he invented the first practical visible-spectrum LED and laser diode. His innovations are the basis of the devices now used in high-efficiency light bulbs and laser diodes. He left GE in 1963 to join the University of Illinois Urbana-Champaign as a professor of electrical engineering and physics at the invitation of John Bardeen, his Ph.D. advisor and a two-time Nobel Prize winner in physics. Holonyak retired from UIUC in 2013 but continued research collaborations at the university with young faculty members.

“In addition to his remarkable technical contributions, he was an excellent teacher and mentor to graduate students and young electrical engineers,” says Russell Dupuis, one of his doctoral students. “The impact of his innovations has improved the lives of most people on the earth, and this impact will only increase with time. It was my great honor to be one of his students and to help create this important IEEE medal to ensure that his work will be remembered in the future.”

The award was presented for the first time at this year’s IEEE Honors Ceremony, in Boston, to IEEE Fellow Constance Chang-Hasnain and Life Fellow Fumio Koyama for “pioneering contributions to vertical cavity surface-emitting laser (VCSEL) and VCSEL-based photonics for optical communications and sensing.”

Establishing a memorial fund through the IEEE Foundation is a gratifying way to recognize someone who has touched your life while also advancing technology for humanity. If you are interested in learning more about memorial and tribute funds, reach out to the IEEE Foundation team: donate@ieee.org.




an

Touchscreens Are Out, and Tactile Controls Are Back



Tactile controls are back in vogue. Apple added two new buttons to the iPhone 16, home appliances like stoves and washing machines are returning to knobs, and several car manufacturers are reintroducing buttons and dials to dashboards and steering wheels.

With this “re-buttonization,” as The Wall Street Journal describes it, demand for Rachel Plotnick’s expertise has grown. Plotnick, an associate professor of cinema and media studies at Indiana University in Bloomington, is the leading expert on buttons and how people interact with them. She studies the relationship between technology and society with a focus on everyday or overlooked technologies, and wrote the 2018 book Power Button: A History of Pleasure, Panic, and the Politics of Pushing (The MIT Press). Now, companies are reaching out to her to help improve their tactile controls.

You wrote a book a few years ago about the history of buttons. What inspired that book?

Rachel Plotnick: Around 2009, I noticed there was a lot of discourse in the news about the death of the button. This was a couple years after the first iPhone had come out, and a lot of people were saying that, as touchscreens were becoming more popular, eventually we weren’t going to have any more physical buttons to push. This started to happen across a range of devices like the Microsoft Kinect, and after films like Minority Report had come out in the early 2000s, everyone thought we were moving to this kind of gesture or speech interface. I was fascinated by this idea that an entire interface could die, and that led me down this big wormhole, to try to understand how we came to be a society that pushed buttons everywhere we went.

Rachel Plotnick studies the ways we use everyday technologies and how they shape our relationships with each other and the world. Rachel Plotnick

The more that I looked around, the more that I saw not only were we pressing digital buttons on social media and to order things from Amazon, but also to start our coffee makers and go up and down in elevators and operate our televisions. The pervasiveness of the button as a technology pitted against this idea of buttons disappearing seemed like such an interesting dichotomy to me. And so I wanted to understand an origin story, if I could come up with it, of where buttons came from.

What did you find in your research?

Plotnick: One of the biggest observations I made was that a lot of fears and fantasies around pushing buttons were the same 100 years ago as they are today. I expected to see this society that wildly transformed and used buttons in such a different way, but I saw these persistent anxieties over time about control and who gets to push the button, and also these pleasures around button pushing that we can use for advertising and to make technology simpler. That pendulum swing between fantasy and fear, pleasure and panic, and how those themes persisted over more than a century was what really interested me. I liked seeing the connections between the past and the present.


We’ve experienced the rise of touchscreens, but now we might be seeing another shift—a renaissance in buttons and physical controls. What’s prompting the trend?

Plotnick: There was this kind of touchscreen mania, where all of a sudden everything became a touchscreen. Your car was a touchscreen, your refrigerator was a touchscreen. Over time, people became somewhat fatigued with that. That’s not to say touchscreens aren’t a really useful interface; I think they are. But on the other hand, people seem to have a hunger for physical buttons, both because you don’t always have to look at them—you can feel your way around for them when you don’t want to directly pay attention to them—but also because they offer a greater range of tactility and feedback.

If you look at gamers playing video games, they want to push a lot of buttons on those controls. And if you look at DJs and digital musicians, they have endless amounts of buttons and joysticks and dials to make music. There seems to be this kind of richness of the tactile experience that’s afforded by pushing buttons. They’re not perfect for every situation, but I think increasingly, we’re realizing the merit that the interface offers.

What else is motivating the re-buttoning of consumer devices?

Plotnick: Maybe screen fatigue. We spend all our days and nights on these devices, scrolling or constantly flipping through pages and videos, and there’s something tiring about that. The button may be a way to almost de-technologize our everyday existence, to a certain extent. That’s not to say buttons don’t work with screens very nicely—they’re often partners. But in a way, it’s taking away the priority of vision as a sense, and recognizing that a screen isn’t always the best way to interact with something.

When I’m driving, it’s actually unsafe for my car to be operated through a touchscreen. It’s hard to generalize and say, buttons are always easy and good, and touchscreens are difficult and bad, or vice versa. Buttons tend to offer you a really limited range of possibilities in terms of what you can do. Maybe that simplicity of limiting our field of choices offers more safety in certain situations.

It also seems like there’s an accessibility issue when prioritizing vision in device interfaces, right?

Plotnick: The blind community had to fight for years to make touchscreens more accessible. It’s always been funny to me that we call them touchscreens. We think about them as a touch modality, but a touchscreen prioritizes the visual. Over the last few years, we’re seeing Alexa and Siri and a lot of these other voice-activated systems that are making things a little bit more auditory as a way to deal with that. But the touchscreen is oriented around visuality.

It sounds like, in general, having multiple interface options is the best way to move forward—not that touchscreens are going to become completely passé, just like the button never actually died.

Plotnick: I think that’s accurate. We see paradigm shifts over time with technologies, but for the most part, we often recycle old ideas. It’s striking that if we look at the 1800s, people were sending messages via telegraph about what the future would look like if we all had this dashboard of buttons at our command where we could communicate with anyone and shop for anything. And that’s essentially what our smartphones became. We still have this dashboard menu approach. I think it means carefully considering what the right interface is for each situation.


Several companies have reached out to you to learn from your expertise. What do they want to know?

Plotnick: I think there is a hunger out there from companies designing buttons or consumer technologies to try to understand the history of how we used to do things, how we might bring that to bear on the present, and what the future looks like with these interfaces. I’ve had a number of interesting discussions with companies, including one that manufactures push-button interfaces. I had a conversation with them about medical devices like CT machines and X-ray machines, trying to imagine the easiest way to push a button in that situation, to save people time and improve the patient encounter.

I’ve also talked to people about what will make someone use a defibrillator or not. Even though it’s really simple to go up to these automatic machines, if you see someone going into cardiac arrest in a mall or out on the street, a lot of people are terrified to actually push the button that would get this machine started. We had a really fascinating discussion about why someone wouldn’t push a button, and what would it take to get them to feel okay about doing that.

In all of these cases, these are design questions, but they’re also social and cultural questions. I like the idea that people who are in the humanities studying these things from a long-term perspective can also speak to engineers trying to build these devices.

So these companies also want to know about the history of buttons?

Plotnick: I’ve had some fascinating conversations around history. We all want to learn what mistakes not to make and what worked well in the past. There’s often this narrative of progress, that things are only getting better with technology over time. But if we look at these lessons, I think we can see that sometimes things were simpler or better in a past moment, and sometimes they were harder. Often with new technologies, we think we’re completely reinventing the wheel. But maybe these concepts existed a long time ago, and we haven’t paid attention to that. There’s a lot to be learned from the past.





an

Boston Dynamics’ Latest Vids Show Atlas Going Hands On



Boston Dynamics is the master of dropping amazing robot videos with no warning, and last week, we got a surprise look at the new electric Atlas going “hands on” with a practical factory task.

This video is notable because it’s the first real look we’ve had at the new Atlas doing something useful—or doing anything at all, really, as the introductory video from back in April (the first time we saw the robot) was less than a minute long. And the amount of progress that Boston Dynamics has made is immediately obvious, with the video showing a blend of autonomous perception, full body motion, and manipulation in a practical task.

We sent over some quick questions as soon as we saw the video, and we’ve got some extra detail from Scott Kuindersma, senior director of Robotics Research at Boston Dynamics.


If you haven’t seen this video yet, what kind of robotics person are you, and also here you go:

Atlas is autonomously moving engine covers between supplier containers and a mobile sequencing dolly. The robot receives as input a list of bin locations to move parts between.

Atlas uses a machine learning (ML) vision model to detect and localize the environment fixtures and individual bins [0:36]. The robot uses a specialized grasping policy and continuously estimates the state of manipulated objects to achieve the task.

There are no prescribed or teleoperated movements; all motions are generated autonomously online. The robot is able to detect and react to changes in the environment (e.g., moving fixtures) and action failures (e.g., failure to insert the cover, tripping, environment collisions [1:24]) using a combination of vision, force, and proprioceptive sensors.
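The pipeline described above (re-detect the fixtures and bins, grasp, place, repeat) can be caricatured in a few lines. Everything here is illustrative; Boston Dynamics has not published this interface, and `detect_bins` stands in for the ML vision model:

```python
# Hypothetical sketch of the autonomous loop: take a list of bin moves as
# input, re-detect fixtures each cycle (they may move), grasp, and place.
def detect_bins(image):
    # Stand-in for the ML vision model that localizes fixtures and bins.
    # A real model would return poses estimated from camera images.
    return {"supplier": (0, 0), "dolly": (5, 2)}

def run_task(bin_moves, image):
    log = []
    for src, dst in bin_moves:
        bins = detect_bins(image)  # re-detect: the environment can change
        log.append(f"grasp cover at {bins[src]}, place at {bins[dst]}")
    return log

print(run_task([("supplier", "dolly")], image=None))
```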

Eagle-eyed viewers will have noticed that this task is very similar to what we saw hydraulic Atlas (Atlas classic?) working on just before it retired. We probably don’t need to read too much into the differences between how each robot performs that task, but it’s an interesting comparison to make.

For more details, here’s our Q&A with Kuindersma:

How many takes did this take?

Kuindersma: We ran this sequence a couple times that day, but typically we’re always filming as we continue developing and testing Atlas. Today we’re able to run that engine cover demo with high reliability, and we’re working to expand the scope and duration of tasks like these.

Is this a task that humans currently do?

Kuindersma: Yes.

What kind of world knowledge does Atlas have while doing this task?

Kuindersma: The robot has access to a CAD model of the engine cover that is used for object pose prediction from RGB images. Fixtures are represented more abstractly using a learned keypoint prediction model. The robot builds a map of the workcell at startup which is updated on the fly when changes are detected (e.g., moving fixture).

Does Atlas’s torso have a front or back in a meaningful way when it comes to how it operates?

Kuindersma: Its head/torso/pelvis/legs do have “forward” and “backward” directions, but the robot is able to rotate all of these relative to one another. The robot always knows which way is which, but sometimes the humans watching lose track.

Are the head and torso capable of unlimited rotation?

Kuindersma: Yes, many of Atlas’s joints are continuous.

How long did it take you folks to get used to the way Atlas moves?

Kuindersma: Atlas’s motions still surprise and delight the team.

OSHA recommends against squatting because it can lead to workplace injuries. How does Atlas feel about that?

Kuindersma: As might be evident by some of Atlas’s other motions, the kinds of behaviors that might be injurious for humans might be perfectly fine for robots.

Can you describe exactly what process Atlas goes through at 1:22?

Kuindersma: The engine cover gets caught on the fabric bins and triggers a learned failure detector on the robot. Right now this transitions into a general-purpose recovery controller, which results in a somewhat jarring motion (we will improve this). After recovery, the robot retries the insertion using visual feedback to estimate the state of both the part and fixture.
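Kuindersma's detect-failure, recover, retry sequence can be sketched as a simple loop. The failure simulation below is a stand-in (it snags once, then succeeds), not the robot's actual learned detector or controller:

```python
# Hypothetical sketch of the recovery behavior described above: a learned
# failure detector triggers a recovery controller, then the insertion is
# retried with visual feedback.
def attempt_insert(state):
    state["tries"] += 1
    return state["tries"] > 1  # stand-in: first attempt snags on the bin

def recover():
    # Stand-in for the general-purpose recovery controller, after which
    # the robot re-estimates part and fixture pose from vision.
    pass

def place_cover(max_retries=3):
    state = {"tries": 0}
    for _ in range(max_retries):
        if attempt_insert(state):
            return f"inserted after {state['tries']} attempt(s)"
        recover()
    return "failed"

print(place_cover())
```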

Were there other costume options you considered before going with the hot dog?

Kuindersma: Yes, but marketing wants to save them for next year.

How many important sensors does the hot dog costume occlude?

Kuindersma: None. The robot is using cameras in the head, proprioceptive sensors, IMU, and force sensors in the wrists and feet. We did have to cut the costume at the top so the head could still spin around.

Why are pickles always causing problems?

Kuindersma: Because pickles are pesky, polarizing pests.





Oceans Lock Away Carbon Slower Than Previously Thought



Research expeditions conducted at sea using a rotating gravity machine and microscope found that the Earth’s oceans may not be absorbing as much carbon as researchers have long thought.

Oceans are believed to absorb roughly 26 percent of global carbon dioxide emissions by drawing down CO2 from the atmosphere and locking it away. In this system, CO2 enters the ocean, where phytoplankton and other organisms consume about 70 percent of it. When these organisms eventually die, their soft, small structures sink to the bottom of the ocean in what looks like an underwater snowfall.

This “marine snow” pulls carbon away from the surface of the ocean and sequesters it in the depths for millennia, which enables the surface waters to draw down more CO2 from the air. It’s one of Earth’s best natural carbon-removal systems. It’s so effective at keeping atmospheric CO2 levels in check that many research groups are trying to enhance the process with geoengineering techniques.

But the new study, published on 11 October in Science, found that the sinking particles don’t fall to the ocean floor as quickly as researchers thought. Using a custom gravity machine that simulated marine snow’s native environment, the study’s authors observed that the particles produce mucus tails that act like parachutes, putting the brakes on their descent—sometimes even bringing them to a standstill.

The physical drag leaves carbon lingering in the upper hydrosphere, rather than being safely sequestered in deeper waters. Living organisms can then consume the marine snow particles and respire their carbon back into the sea. Ultimately, this impedes the rate at which the ocean draws down and sequesters additional CO2 from the air.

The implications are grim: Scientists’ best estimates of how much CO2 the Earth’s oceans sequester could be way off. “We’re talking roughly hundreds of gigatonnes of discrepancy if you don’t include these marine snow tails,” says Manu Prakash, a bioengineer at Stanford University and one of the paper’s authors. The work was conducted by researchers at Stanford, Rutgers University in New Jersey, and Woods Hole Oceanographic Institution in Massachusetts.

Oceans Absorb Less CO2 Than Expected

Researchers for years have been developing numerical models to estimate marine carbon sequestration. Those models will need to be adjusted for the slower sinking speed of marine snow, Prakash says.

The findings also have implications for startups in the fledgling marine carbon geoengineering field. These companies use techniques such as ocean alkalinity enhancement to augment the ocean’s ability to sequester carbon. Their success depends, in part, on using numerical models to prove to investors and the public that their techniques work. But their estimates are only as good as the models they use, and the scientific community’s confidence in them.

“We’re talking roughly hundreds of gigatonnes of discrepancy if you don’t include these marine snow tails.” —Manu Prakash, Stanford University

The Stanford researchers made the discovery on an expedition off the coast of Maine. There, they collected marine samples by hanging traps from their boat 80 meters deep. After pulling up a sample, the researchers quickly analyzed the contents while still on board the ship using their wheel-shaped machine and microscope.

The researchers built a microscope with a spinning wheel that simulates marine snow falling through sea water over longer distances than would otherwise be practical. Prakash Lab/Stanford

The device simulates the organisms’ vertical travel over long distances. Samples go into a wheel about the size of a vintage film reel. The wheel spins constantly, allowing suspended marine-snow particles to sink while a camera captures their every move.

The apparatus adjusts for temperature, light, and pressure to emulate marine conditions. Computational tools assess flow around the sinking particles and custom software removes noise in the data from the ship’s vibrations. To accommodate for the tilt and roll of the ship, the researchers mounted the device on a two-axis gimbal.
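As an illustration of the vibration-removal step—a generic sketch, not the Prakash Lab's actual software—ship vibration shows up as fast jitter on top of a particle's slow settling drift, so even a simple moving-average low-pass filter can separate the two:

```python
import numpy as np

# Synthetic particle track: slow settling drift plus fast ship vibration
rng = np.random.default_rng(42)
t = np.arange(500)
sink = 0.02 * t                               # slow drift (settling)
jitter = 0.5 * np.sin(2 * np.pi * t / 7)      # fast vibration, period 7 frames
track = sink + jitter + rng.normal(0, 0.05, t.size)

# Moving-average low-pass filter: a window spanning whole vibration
# periods averages the jitter to zero while preserving the drift
window = 35
smooth = np.convolve(track, np.ones(window) / window, mode="valid")
# smooth[i] now estimates the drift at frame i + (window - 1) / 2
```

The team's custom software is surely more sophisticated, but the principle—suppress the high-frequency motion, keep the slow sinking trend—is the same.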

Slower Marine Snow Reduces Carbon Sequestration

With this setup, the team observed that sinking marine snow generates an invisible halo-shaped comet tail made of viscoelastic transparent exopolymer—a mucus-like parachute. They discovered the invisible tail by adding small beads to the seawater sample in the wheel, and analyzing the way they flowed around the marine snow. “We found that the beads were stuck in something invisible trailing behind the sinking particles,” says Rahul Chajwa, a bioengineering postdoctoral fellow at Stanford.

The tail introduces drag and buoyancy, doubling the amount of time marine snow spends in the upper 100 meters of the ocean, the researchers concluded. “This is the sedimentation law we should be following,” says Prakash, who hopes to get the results into climate models.
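For a rough sense of the numbers involved—using illustrative particle properties, not data from the paper—Stokes' law gives a baseline settling speed for a small sphere, and the reported effect of the tail is then a doubling of the time spent in the upper 100 meters:

```python
def stokes_velocity(radius_m, rho_particle, rho_seawater=1025.0,
                    mu=1.4e-3, g=9.81):
    """Settling speed (m/s) of a small rigid sphere in seawater.
    Marine snow is porous and irregular, so this is only a baseline."""
    return 2.0 / 9.0 * (rho_particle - rho_seawater) * g * radius_m**2 / mu

# Assumed aggregate: 0.25 mm radius with slight excess density (porous)
v = stokes_velocity(radius_m=0.25e-3, rho_particle=1027.0)
days_bare = 100.0 / (v * 86400)     # days to clear the upper 100 m
days_with_tail = 2.0 * days_bare    # the mucus tail doubles residence time
print(f"{v * 86400:.0f} m/day: {days_bare:.1f} -> {days_with_tail:.1f} days")
```

With these assumed values the particle sinks on the order of tens of meters per day, so doubling its residence time leaves its carbon exposed to consumption near the surface for days longer.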

The study will likely help models project carbon export—the process of transporting CO2 from the atmosphere to the deep ocean, says Lennart Bach, a marine biochemist at the University of Tasmania in Australia, who was not involved with the research. “The methodology they developed is very exciting and it’s great to see new methods coming into this research field,” he says.

But Bach cautions against extrapolating the results too far. “I don’t think the study will change the numbers on carbon export as we know them right now,” because these numbers are derived from empirical methods that would have unknowingly included the effects of the mucus tail, he says.

Marine snow may be slowed by “parachutes” of mucus while sinking, potentially lowering the rate at which the global ocean can sequester carbon in the depths. Prakash Lab/Stanford

Prakash and his team came up with the idea for the microscope while conducting research on a human parasite that can travel dozens of meters. “We would make 5- to 10-meter-tall microscopes, and one day, while packing for a trip to Madagascar, I had this ‘aha’ moment,” says Prakash. “I was like: Why are we packing all these tubes? What if the two ends of these tubes were connected?”

The group turned their linear tube into a closed circular channel—a hamster wheel approach to observing microscopic particles. Over five expeditions at sea, the team further refined the microscope’s design and fluid mechanics to accommodate marine samples, often tackling the engineering while on the boat and adjusting for flooding and high seas.

In addition to the sedimentation physics of marine snow, the team also studies other plankton that may affect climate and carbon-cycle models. On a recent expedition off the coast of Northern California, the group discovered a cell with silica ballast that makes marine snow sink like a rock, Prakash says.

The crafty gravity machine is one of Prakash’s many frugal inventions, which include an origami-inspired paper microscope, or “foldscope,” that can be attached to a smartphone, and a paper-and-string biomedical centrifuge dubbed a “paperfuge.”





U.S. Chip Revival Plan Chooses Sites



Last week the organization tasked with running the biggest chunk of the U.S. CHIPS Act’s US $13 billion R&D program made some significant strides: The National Semiconductor Technology Center (NSTC) released a new strategic plan and selected the sites of two of its three planned facilities. The locations of the two sites—a “design and collaboration” center in Sunnyvale, Calif., and a lab devoted to advancing the leading edge of chipmaking, in Albany, N.Y.—build on an existing ecosystem at each location, experts say. The location of the third planned center—a chip prototyping and packaging site that could be especially critical for speeding semiconductor startups—is still a matter of speculation.

“The NSTC represents a once-in-a-generation opportunity for the U.S. to accelerate the pace of innovation in semiconductor technology,” Deirdre Hanford, CEO of Natcast, the nonprofit that runs the NSTC centers, said in a statement. According to the strategic plan, which covers 2025 to 2027, the NSTC is meant to accomplish three goals: extend U.S. technology leadership, reduce the time and cost to prototype, and build and sustain a semiconductor workforce development ecosystem. The three centers are meant to do a mix of all three.

New York gets extreme ultraviolet lithography

NSTC plans to direct $825 million into the Albany project. The site will be dedicated to extreme ultraviolet lithography, a technology that’s essential to making the most advanced logic chips. The Albany Nanotech Complex, which has already seen more than $25 billion in investments from the state and industry partners over two decades, will form the heart of the future NSTC center. It already has an EUV lithography machine on site and has begun an expansion to install a next-generation version, called high-NA EUV, which promises to produce even finer chip features. Working with a tool recently installed in Europe, IBM, a long-time tenant of the Albany research facility, reported record yields of copper interconnects built every 21 nanometers, a pitch several nanometers tighter than possible with ordinary EUV.
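The reason high-NA EUV prints finer features comes down to the Rayleigh criterion, which sets the minimum printable half-pitch at CD = k1 × λ / NA. A quick comparison (the k1 value here is an assumed, process-dependent factor, not an IBM figure):

```python
# Rayleigh criterion: minimum half-pitch CD = k1 * wavelength / NA
wavelength_nm = 13.5   # EUV light
k1 = 0.4               # assumed process factor; varies by process

cd_euv = k1 * wavelength_nm / 0.33       # standard EUV optics
cd_high_na = k1 * wavelength_nm / 0.55   # high-NA EUV optics
print(f"EUV: ~{cd_euv:.1f} nm, high-NA EUV: ~{cd_high_na:.1f} nm half-pitch")
```

Raising the numerical aperture from 0.33 to 0.55 shrinks the printable half-pitch by roughly 40 percent, which is why the next-generation tool promises finer chip features.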

“It’s fulfilling to see that this ecosystem can be taken to the national and global level through CHIPS Act funding,” said Mukesh Khare, general manager of IBM’s semiconductors division, speaking from the future site of the NSTC EUV center. “It’s the right time, and we have all the ingredients.”

While only a few companies are capable of manufacturing cutting-edge logic using EUV, the impact of the NSTC center will be much broader, Khare argues. It will extend down as far as early-stage startups with ideas or materials for improving the chipmaking process. “An EUV R&D center doesn’t mean just one machine,” says Khare. “It needs so many machines around it… It’s a very large ecosystem.”

Silicon Valley lands the design center

The design center is tasked with conducting advanced research in chip design, electronic design automation (EDA), chip and system architectures, and hardware security. It will also host the NSTC’s design enablement gateway—a program that provides NSTC members with a secure, cloud-based access to design tools, reference processes and designs, and shared data sets, with the goal of reducing the time and cost of design. Additionally, it will house workforce development, member convening, and administration functions.

Situating the design center in Silicon Valley, with its concentration of research universities, venture capital, and workforce, seems like the obvious choice to many experts. “I can’t think of a better place,” says Patrick Soheili, co-founder of interconnect technology startup Eliyan, which is based in Santa Clara, Calif.

Abhijeet Chakraborty, vice president of engineering in the technology and product group at Silicon Valley-based Synopsys, a leading maker of EDA software, sees Silicon Valley’s expansive tech ecosystem as one of its main advantages in landing the NSTC’s design center. The region concentrates companies and researchers involved in the whole spectrum of the industry from semiconductor process technology to cloud software.

Access to such a broad range of industries is increasingly important for chip design startups, he says. “To design a chip or component these days you need to go from concept to design to validation in an environment that takes care of the entire stack,” he says. It’s prohibitively expensive for a startup to do that alone, so one of Chakraborty’s hopes for the design center is that it will help startups access the design kits and other data needed to operate in this new environment.

Packaging and prototyping still to come

A third promised center for prototyping and packaging is still to come. “The big question is where does the packaging and prototyping go?” says Mark Granahan, cofounder and CEO of Pennsylvania-based power semiconductor startup Ideal Semiconductor. “To me that’s a great opportunity.” He points out that because there is so little packaging technology infrastructure in the United States, any ambitious state or region should have a shot at hosting such a center. One of the original intentions of the act, after all, was to expand the number of regions of the country that are involved in the semiconductor industry.

But that hasn’t stopped some already tech-heavy regions from wanting it. “Oregon offers the strongest ecosystem for such a facility,” said a spokesperson for Intel, whose technology development is done in the state. “The state is uniquely positioned to contribute to the success of the NSTC and help drive technological advancements in the U.S. semiconductor industry.”

As NSTC makes progress, Granahan’s concern is that bureaucracy will expand with it and slow efforts to boost the U.S. chip industry. Already the layers of control are multiplying. The Chips Office at the National Institute of Standards and Technology executes the Act. The NSTC is administered by the nonprofit Natcast, which directs the EUV center, which is in a facility run by another nonprofit, NY CREATES. “We want these things to be agile and make local decisions,” Granahan says.





Students Tackle Environmental Issues in Colombia and Türkiye



EPICS in IEEE, a service learning program for university students supported by IEEE Educational Activities, offers students opportunities to engage with engineering professionals and mentors, local organizations, and technological innovation to address community-based issues.

The following two environmentally focused projects demonstrate the value of teamwork and direct involvement with project stakeholders. One uses smart biodigesters to better manage waste in Colombia’s rural areas. The other is focused on helping Turkish olive farmers protect their trees from climate change effects by providing them with a warning system that can identify growing problems.

No time to waste in rural Colombia

Proper waste management is critical to a community’s living conditions. In rural La Vega, Colombia, the lack of an effective system has led to contaminated soil and water, an especially concerning issue because the town’s economy relies heavily on agriculture.

The Smart Biodigesters for a Better Environment in Rural Areas project brought students together to devise a solution.

Vivian Estefanía Beltrán, a Ph.D. student at the Universidad del Rosario in Bogotá, addressed the problem by building a low-cost anaerobic digester that uses an instrumentation system to break down microorganisms into biodegradable material. It reduces the amount of solid waste, and the digesters can produce biogas, which can be used to generate electricity.

“Anaerobic digestion is a natural biological process that converts organic matter into two valuable products: biogas and nutrient-rich soil amendments in the form of digestate,” Beltrán says. “As a by-product of our digester’s operation, digestate is organic matter that can’t be transferred into biogas but can be used as a soil amendment for our farmers’ crops, such as coffee.

“While it may sound easy, the process is influenced by a lot of variables. The support we’ve received from EPICS in IEEE is important because it enables us to measure these variables, such as pH levels, temperature of the reactor, and biogas composition [methane and hydrogen sulfide]. The system allows us to make informed decisions that enhance the safety, quality, and efficiency of the process for the benefit of the community.”
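A monitoring system like the one Beltrán describes boils down to comparing each measured variable against a safe operating range. The thresholds below are hypothetical illustrations, not the team's actual setpoints:

```python
# Hypothetical safe operating ranges for an anaerobic digester
LIMITS = {"ph": (6.5, 7.8), "temp_c": (30.0, 40.0), "h2s_ppm": (0.0, 200.0)}

def out_of_range(reading):
    """Return the measured variables that fall outside their safe range."""
    return [name for name, (lo, hi) in LIMITS.items()
            if not lo <= reading[name] <= hi]

alerts = out_of_range({"ph": 5.9, "temp_c": 36.0, "h2s_ppm": 250.0})
print(alerts)  # acidic pH and high hydrogen sulfide both flagged
```

An acidic reactor or excess hydrogen sulfide would each degrade biogas quality, so flagging either early is what lets operators intervene before the process stalls.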

The project was a collaborative effort among Universidad del Rosario students, a team of engineering students from Escuela Tecnológica Instituto Técnico Central, Professor Carlos Felipe Vergara, and members of Junta de Acción Comunal (Vereda La Granja), which aims to help residents improve their community.

“It’s been a great experience to see how individuals pursuing different fields of study—from engineering to electronics and computer science—can all work and learn together on a project that will have a direct positive impact on a community.” —Vivian Estefanía Beltrán

Beltrán worked closely with eight undergraduate students and three instructors—Maria Fernanda Gómez, Andrés Pérez Gordillo (the instrumentation group leader), and Carlos Felipe Vergara-Ramirez—as well as IEEE Graduate Student Member Nicolás Castiblanco (the instrumentation group coordinator).

The team constructed and installed their anaerobic digester system in an experimental station in La Vega, a town located roughly 53 kilometers northwest of Bogotá.

“This digester is an important innovation for the residents of La Vega, as it will hopefully offer a productive way to utilize the residual biomass they produce to improve quality of life and boost the economy,” Beltrán says. Soon, she adds, the system will be expanded to incorporate high-tech sensors that automatically monitor biogas production and the digestion process.

“For our students and team members, it’s been a great experience to see how individuals pursuing different fields of study—from engineering to electronics and computer science—can all work and learn together on a project that will have a direct positive impact on a community. It enables all of us to apply our classroom skills to reality,” she says. “The funding we’ve received from EPICS in IEEE has been crucial to designing, proving, and installing the system.”

The project also aims to support the development of a circular economy, which reuses materials to enhance the community’s sustainability and self-sufficiency.

Protecting olive groves in Türkiye

Türkiye is one of the world’s leading producers of olives, but the industry has been challenged in recent years by unprecedented floods, droughts, and other destructive forces of nature resulting from climate change. To help farmers in the western part of the country monitor the health of their olive trees, a team of students from Istanbul Technical University developed an early-warning system to identify irregularities including abnormal growth.

“Almost no olives were produced last year using traditional methods, due to climate conditions and unusual weather patterns,” says Tayfun Akgül, project leader of the Smart Monitoring of Fruit Trees in Western Türkiye initiative.

“Our system will give farmers feedback from each tree so that actions can be taken in advance to improve the yield,” says Akgül, an IEEE senior member and a professor in the university’s electronics and communication engineering department.

“We’re developing deep-learning techniques to detect changes in olive trees and their fruit so that farmers and landowners can take all necessary measures to avoid a low or damaged harvest,” says project coordinator Melike Girgin, a Ph.D. student at the university and an IEEE graduate student member.

Using drones outfitted with 360-degree optical and thermal cameras, the team collects optical, thermal, and hyperspectral imaging data through aerial methods. The information is fed into a cloud-based, open-source database system.
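The team's detection models are deep-learning based, but a classical baseline for spotting stressed trees in this kind of multispectral data is the normalized difference vegetation index (NDVI): healthy canopy reflects far more near-infrared (NIR) light than red. The reflectance values below are illustrative:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from two reflectance bands."""
    return (nir - red) / (nir + red)

healthy = ndvi(nir=0.50, red=0.08)    # dense canopy: strong NIR reflectance
stressed = ndvi(nir=0.30, red=0.20)   # stressed canopy: NIR drops, red rises
print(f"healthy ~{healthy:.2f}, stressed ~{stressed:.2f}")
```

A per-tree time series of an index like this is one simple way an early-warning system can surface irregularities before they are visible on the ground.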

Akgül leads the project and teaches the team skills including signal and image processing and data collection. He says regular communication with community-based stakeholders has been critical to the project’s success.

“There are several farmers in the village who have helped us direct our drone activities to the right locations,” he says. “Their involvement in the project has been instrumental in helping us refine our process for greater effectiveness.

“For students, classroom instruction is straightforward, then they take an exam at the end. But through our EPICS project, students are continuously interacting with farmers in a hands-on, practical way and can see the results of their efforts in real time.”

Looking ahead, the team is excited about expanding the project to encompass other fruits besides olives. The team also intends to apply for a travel grant from IEEE in hopes of presenting its work at a conference.

“We’re so grateful to EPICS in IEEE for this opportunity,” Girgin says. “Our project and some of the technology we required wouldn’t have been possible without the funding we received.”

A purpose-driven partnership

The IEEE Standards Association sponsored both of the proactive environmental projects.

“Technical projects play a crucial role in advancing innovation and ensuring interoperability across various industries,” says Munir Mohammed, IEEE SA senior manager of product development and market engagement. “These projects not only align with our technical standards but also drive technological progress, enhance global collaboration, and ultimately improve the quality of life for communities worldwide.”

For more information on the program or to participate in service-learning projects, visit EPICS in IEEE.

On 7 November, this article was updated from an earlier version.





Azerbaijan Plans Caspian-Black Sea Energy Corridor



Azerbaijan next week will garner much of the attention of the climate tech world, and not just because it will host COP29, the United Nations’ giant annual climate change conference. The country is promoting a grand, multi-nation plan to generate renewable electricity in the Caucasus region and send it thousands of kilometers west, under the Black Sea, and into energy-hungry Europe.

The transcontinental connection would start with wind, solar, and hydropower generated in Azerbaijan and Georgia, and off-shore wind power generated in the Caspian Sea. Long-distance lines would carry up to 1.5 gigawatts of clean electricity to Anaklia, Georgia, at the east end of the Black Sea. An undersea cable would move the electricity across the Black Sea and deliver it to Constanta, Romania, where it could be distributed further into Europe.

The scheme’s proponents say this Caspian-Black Sea energy corridor will help decrease global carbon emissions, provide dependable power to Europe, modernize developing economies at Europe’s periphery, and stabilize a region shaken by war. Organizers hope to build the undersea cable within the next six years at an estimated cost of €3.5 billion (US $3.8 billion).

To accomplish this, the governments of the involved countries must quickly overcome a series of technical, financial, and political obstacles. “It’s a huge project,” says Zviad Gachechiladze, a director at Georgian State Electrosystem, the agency that operates the country’s electrical grid, and one of the architects of the Caucasus green-energy corridor. “To put it in operation [by 2030]—that’s quite ambitious, even optimistic,” he says.

Black Sea Cable to Link Caucasus and Europe

The technical lynchpin of the plan falls on the successful construction of a high voltage direct current (HVDC) submarine cable in the Black Sea. It’s a formidable task, considering that it would stretch across nearly 1,200 kilometers of water, most of which is over 2 km deep, and, since Russia’s invasion of Ukraine, littered with floating mines. By contrast, the longest existing submarine power cable—the North Sea Link—carries 1.4 GW across 720 km between England and Norway, at depths of up to 700 meters.

As ambitious as Azerbaijan’s plans sound, longer undersea connections have been proposed. The Australia-Asia PowerLink project aims to produce 6 GW at a vast solar farm in Northern Australia and send about a third of it to Singapore via a 4,300-km undersea cable. The Morocco-U.K. Power Project would send 3.6 GW over 3,800 km from Morocco to England. A similar attempt by Desertec to send electricity from North Africa to Europe ultimately failed.

Building such cables involves laying and stitching together lengths of heavy submarine power cables from specialized ships—the expertise for which lies with just two companies in the world. In an assessment of the Black Sea project’s feasibility, the Milan-based consulting and engineering firm CESI determined that the undersea cable could indeed be built, and estimated that it could carry up to 1.5 GW—enough to supply over 2 million European households.
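That households figure is easy to sanity-check. The average per-household draw below is an assumption (roughly 5.3 MWh per year), not a number from CESI's study:

```python
capacity_kw = 1.5e6        # the cable's 1.5-GW capacity, in kW
avg_household_kw = 0.6     # assumed average continuous draw (~5.3 MWh/yr)
households = capacity_kw / avg_household_kw
print(f"~{households / 1e6:.1f} million households")
```

Under that assumption the cable covers about 2.5 million households, consistent with CESI's "over 2 million" estimate.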

But to fill that pipe, countries in the Caucasus region would have to generate much more green electricity. For Georgia, that will mostly come from hydropower, which already generates over 80 percent of the nation’s electricity. “We are a hydro country. We have a lot of untapped hydro potential,” says Gachechiladze.

Azerbaijan and Georgia Plan Green Energy Corridor

Generating hydropower can also generate opposition, because of the way dams alter rivers and landscapes. “There were some cases when investors were not able to construct power plants because of opposition of locals or green parties” in Georgia, says Salome Janelidze, a board member at the Energy Training Center, a Georgian government agency that promotes and educates around the country’s energy sector.

“It was definitely a problem and it has not been totally solved,” says Janelidze. But “to me it seems it is doable,” she says. “You can procure and construct if you work closely with the local population and see them as allies rather than adversaries.”

For Azerbaijan, most of the electricity would be generated by wind and solar farms funded by foreign investment. Masdar, the renewable-energy developer of the United Arab Emirates government, has been investing heavily in wind power in the country. In June, the company broke ground on a trio of wind and solar projects with 1 GW capacity. It intends to develop up to 9 GW more in Azerbaijan by 2030. ACWA Power, a Saudi power-generation company, plans to complete a 240-MW solar plant in the Absheron and Khizi districts of Azerbaijan next year and has struck a deal with the Azerbaijani Ministry of Energy to install up to 2.5 GW of offshore and onshore wind.

CESI is currently running a second study to gauge the practicality of the full breadth of the proposed energy corridor—from the Caspian Sea to Europe—with a transmission capacity of 4 to 6 GW. But that beefier interconnection will likely remain out of reach in the near term. “By 2030, we can’t claim our region will provide 4 GW or 6 GW,” says Gachechiladze. “1.3 is realistic.”

COP29: Azerbaijan’s Renewable Energy Push

Signs of political support have surfaced. In September, Azerbaijan, Georgia, Romania, and Hungary created a joint venture, based in Romania, to shepherd the project. Those four countries in 2022 inked a memorandum of understanding with the European Union to develop the energy corridor.

The involved countries are in the process of applying for the cable to be selected as an EU “project of mutual interest,” making it an infrastructure priority for connecting the union with its neighbors. If selected, “the project could qualify for 50 percent grant financing,” says Gachechiladze. “It’s a huge budget. It will improve drastically the financial condition of the project.” The commissioner responsible for EU enlargement policy projected that the union would pay an estimated €2.3 billion ($2.5 billion) toward building the cable.

Whether next week’s COP29, held in Baku, Azerbaijan, will help move the plan forward remains to be seen. In preparation for the conference, advocates of the energy corridor have been taking international journalists on tours of the country’s energy infrastructure.

Looming over the project are security issues that threaten to thwart it. Shipping routes in the Black Sea have become less dependable and safe since Russia’s invasion of Ukraine. To the south, tensions between Armenia and Azerbaijan remain after the recent war and ethnic violence.

In order to improve relations, many advocates of the energy corridor would like to include Armenia. “The cable project is in the interests of Georgia, it’s in the interests of Armenia, it’s in the interests of Azerbaijan,” says Agha Bayramov, an energy geopolitics researcher at the University of Groningen, in the Netherlands. “It might increase the chance of them living peacefully together. Maybe they’ll say, ‘We’re responsible for European energy. Let’s put our egos aside.’”





Video Friday: Robot Dog Handstand



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids 2024: 22–24 November 2024, NANCY, FRANCE

Enjoy today’s videos!

Just when I thought quadrupeds couldn’t impress me anymore...

[ Unitree Robotics ]

Researchers at Meta FAIR are releasing several new research artifacts that advance robotics and support our goal of reaching advanced machine intelligence (AMI). These include Meta Sparsh, the first general-purpose encoder for vision-based tactile sensing that works across many tactile sensors and many tasks; Meta Digit 360, an artificial fingertip-based tactile sensor that delivers detailed touch data with human-level precision and touch-sensing; and Meta Digit Plexus, a standardized platform for robotic sensor connections and interactions that enables seamless data collection, control and analysis over a single cable.

[ Meta ]

The first bimanual Torso created at Clone includes an actuated elbow, cervical spine (neck), and anthropomorphic shoulders with the sternoclavicular, acromioclavicular, scapulothoracic and glenohumeral joints. The valve matrix fits compactly inside the ribcage. Bimanual manipulation training is in progress.

[ Clone Inc. ]

Equipped with a new behavior architecture, Nadia navigates and traverses many types of doors autonomously. Nadia also demonstrates robustness to failed grasps and door opening attempts by automatically retrying and continuing. We present the robot with pull and push doors, four types of opening mechanisms, and even spring-loaded door closers. A deep neural network and door plane estimator allow Nadia to identify and track the doors.

[ Paper preprint by authors from Florida Institute for Human and Machine Cognition ]

Thanks, Duncan!

In this study, we integrate the musculoskeletal humanoid Musashi with the wire-driven robot CubiX, capable of connecting to the environment, to form CubiXMusashi. This combination addresses the shortcomings of traditional musculoskeletal humanoids and enables movements beyond the capabilities of other humanoids. CubiXMusashi connects to the environment with wires and drives by winding them, successfully achieving movements such as pull-up, rising from a lying pose, and mid-air kicking, which are difficult for Musashi alone.

[ CubiXMusashi, JSK Robotics Laboratory, University of Tokyo ]

Thanks, Shintaro!

An old boardwalk seems like a nightmare for any robot with flat feet.

[ Agility Robotics ]

This paper presents a novel learning-based control framework that uses keyframing to incorporate high-level objectives in natural locomotion for legged robots. These high-level objectives are specified as a variable number of partial or complete pose targets that are spaced arbitrarily in time. Our proposed framework utilizes a multi-critic reinforcement learning algorithm to effectively handle the mixture of dense and sparse rewards. In the experiments, the multi-critic method significantly reduces the effort of hyperparameter tuning compared to the standard single-critic alternative. Moreover, the proposed transformer-based architecture enables robots to anticipate future goals, which results in quantitative improvements in their ability to reach their targets.

[ Disney Research paper ]

Human-like walking where that human is the stompiest human to ever human its way through Humanville.

[ Engineai ]

We present the first static-obstacle avoidance method for quadrotors using just an onboard, monocular event camera. Quadrotors are capable of fast and agile flight in cluttered environments when piloted manually, but vision-based autonomous flight in unknown environments is difficult in part due to the sensor limitations of traditional onboard cameras. Event cameras, however, promise nearly zero motion blur and high dynamic range, but produce a large volume of events under significant ego-motion and further lack a continuous-time sensor model in simulation, making direct sim-to-real transfer impossible.

[ Paper from the University of Pennsylvania and the University of Zurich ]

Cross-embodiment imitation learning enables policies trained on specific embodiments to transfer across different robots, unlocking the potential for large-scale imitation learning that is both cost-effective and highly reusable. This paper presents LEGATO, a cross-embodiment imitation learning framework for visuomotor skill transfer across varied kinematic morphologies. We introduce a handheld gripper that unifies action and observation spaces, allowing tasks to be defined consistently across robots.

[ LEGATO ]

The 2024 Xi’an Marathon has kicked off! STAR1, the general-purpose humanoid robot from Robot Era, joins runners in this ancient yet modern city for an exciting start!

[ Robot Era ]

In robotics, there are valuable lessons for students and mentors alike. Watch how the CyberKnights, a championship-winning FIRST Robotics team sponsored by RTX, faced challenges after a poor performance and, with the encouragement of their RTX mentor, scrapped their robot to build a new one in just nine days.

[ CyberKnights ]

In this special video, PAL Robotics takes you behind the scenes of our 20th-anniversary celebration, a memorable gathering with industry leaders and visionaries from across robotics and technology. From inspiring speeches to milestone highlights, the event was a testament to our journey and the incredible partnerships that have shaped our path.

[ PAL Robotics ]

Thanks, Rugilė!





This Mobile 3D Printer Can Print Directly on Your Floor



Waiting for each part of a 3D-printed project to finish, taking it out of the printer, and then installing it on location can be tedious for multi-part projects. What if there was a way for your printer to print its creation exactly where you needed it? That’s the promise of MobiPrint, a new 3D printing robot that can move around a room, printing designs directly onto the floor.

MobiPrint, designed by Daniel Campos Zamora at the University of Washington, consists of a modified off-the-shelf 3D printer atop a home vacuum robot. First, it autonomously maps its space—be it a room, a hallway, or an entire floor of a house. Users can then choose from a prebuilt library or upload their own design to be printed anywhere in the mapped area. The robot then traverses the room and prints the design.

It’s “a new system that combines robotics and 3D printing that could actually go and print in the real world,” Campos Zamora says. He presented MobiPrint on 15 October at the ACM Symposium on User Interface Software and Technology.

Campos Zamora and his team started with a Roborock S5 vacuum robot and installed firmware that allowed it to communicate with the open source program Valetudo. Valetudo disconnects personal robots from their manufacturer’s cloud, connecting them to a local server instead. Data collected by the robot, such as environmental mapping, movement tracking, and path planning, can all be observed locally, enabling users to see the robot’s LIDAR-created map.

Campos Zamora built a layer of software that connects the robot’s perception of its environment to the 3D printer’s print commands. The printer, a modified Prusa Mini+, can print on carpet, hardwood, and vinyl, with maximum printing dimensions of 180 by 180 by 65 millimeters. The robot has printed pet food bowls, signage, and accessibility markers as sample objects.
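The bridge layer described above has to relate two coordinate frames: the robot's lidar map, where the user picks a location, and the printer bed, where G-code coordinates live. Here is a small sketch of that idea, expressing a map-frame target in the bed frame after the robot "parks" and checking it against the stated 180 × 180 × 65 mm build volume. The frame convention and function names are hypothetical illustrations, not MobiPrint's actual code.

```python
import math

# Stated MobiPrint build volume in millimeters.
BED_X, BED_Y, BED_Z = 180.0, 180.0, 65.0

def map_to_bed(target_xy_mm, robot_pose_mm, robot_yaw_rad):
    """Express a map-frame target point in the printer-bed frame.

    Assumes (for illustration) that the bed center coincides with the
    robot's parked pose and that the bed axes are rotated by the
    robot's heading.
    """
    dx = target_xy_mm[0] - robot_pose_mm[0]
    dy = target_xy_mm[1] - robot_pose_mm[1]
    c, s = math.cos(-robot_yaw_rad), math.sin(-robot_yaw_rad)
    # Rotate into the robot/bed frame, then shift to the bed's corner origin.
    bx = c * dx - s * dy + BED_X / 2
    by = s * dx + c * dy + BED_Y / 2
    return bx, by

def fits_on_bed(bx, by, design_w, design_h, design_z):
    """True if a design footprint placed at (bx, by) fits the build volume."""
    return (0 <= bx and bx + design_w <= BED_X and
            0 <= by and by + design_h <= BED_Y and
            design_z <= BED_Z)
```

A check like `fits_on_bed` is what makes "park and print" a planning constraint: any design wider than the bed, such as the mobility ramp mentioned below, fails it and would require moving the base mid-print.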

MakeabilityLab/YouTube

Currently, MobiPrint can only “park and print.” The robot base cannot move during printing to make large objects, like a mobility ramp. Printing designs larger than the robot is one of Campos Zamora’s goals in the future. To learn more about the team’s vision for MobiPrint, Campos Zamora answered a few questions from IEEE Spectrum.

What was the inspiration for creating your mobile 3D printer?

Daniel Campos Zamora: My lab is focused on building systems with an eye towards accessibility. One of the things that really inspired this project was looking at the tactile surface indicators that help blind and low vision users find their way around a space. And so we were like, what if we made something that could automatically go and deploy these things? Especially in indoor environments, which are generally a little trickier and change more frequently over time.

We had to step back and build this entirely different thing, using the environment as a design element. We asked: how do you integrate the real world environment into the design process, and then what kind of things can you print out in the world? That’s how this printer was born.

What were some surprising moments in your design process?

Campos Zamora: When I was testing the robot on different surfaces, I was not expecting the 3D printed designs to stick extremely well to the carpet. It stuck way too well. Like, you know, just completely bonded down there.

I think there’s also just a lot of joy in seeing this printer move. When I was doing a demonstration of it at this conference last week, it almost seemed like the robot had a personality. A vacuum robot can seem to have a personality, but this printer can actually make objects in my environment, so I feel a different relationship to the machine.

Where do you hope to take MobiPrint in the future?

Campos Zamora: There’s several directions I think we could go. Instead of controlling the robot remotely, we could have it follow someone around and print accessibility markers along a path they walk. Or we could integrate an AI system that recommends objects be printed in different locations. I also want to explore having the robot remove and recycle the objects it prints.





Stranded Astronauts Set to Come Home After SpaceX Capsule With Extra Seats Reaches ISS

Two astronauts relinquished their seats on a four-person spacecraft so that their colleagues could return to Earth from the ISS, where they’ve been stuck since June.





We Can Thank Deep-Space Asteroids for Helping Start Life on Earth

Samples from the asteroid Ryugu contain key ingredients in the biological cookbook.






You Won’t Want to Miss October’s Rare Comet Sighting. Here’s How and When You Can See It

A “once-in-a-lifetime” comet is expected to light up the night sky as it passes by Earth.





The Elegance and Awkwardness of NASA’s New Moon Suit, Designed by Axiom and Prada

A collaboration between a space company and a fashion company yields something elegant.





4 Astronauts Return to Earth After Being Delayed by Boeing’s Capsule Trouble and Hurricane Milton

A SpaceX capsule carrying the crew parachuted before dawn into the Gulf of Mexico just off the Florida coast.





It’s Time to Redefine What a Megafire Is in the Climate Change Era

It's not the reach of a fire that matters most; it's the speed. Understanding this can help society better prepare.









Charger recall spells more bad news for Humane’s maligned AI Pin

Humane first reported overheating problems with the portable charger in June.








Photos: Hail blankets Saudi Arabian desert creating winter-like landscape