tell

At The Opera, Guglielmo Tell (1979), July 8, 2023

Tune in at 8pm to hear the last opera of Gioachino Rossini, Guglielmo Tell (William Tell). This 1979 recording stars Luciano Pavarotti, Mirella Freni, and Sherrill Milnes.






tell

Can You Tell the Kiss From These Black Films? (Quiz)



Can you match the kiss to the film?



  • BET Star Cinema

tell

AM Best Assigns A- Rating To Martello

AM Best has assigned a Financial Strength Rating of A- (Excellent) and a Long-Term Issuer Credit Rating of “a-” (Excellent) to Martello Re Limited. The outlook assigned to these ratings is stable. A statement from the ratings agency said, “The ratings reflect Martello Re’s balance sheet strength, which AM Best assesses as very strong, as […]




tell

A Spider Stellar Engine Could Move Binary Stars Halfway Across a Galaxy

Eventually, every stellar civilization will have to migrate to a different star. The habitable zone around all stars changes as they age. If long-lived technological civilizations are even plausible in our Universe, migration will be necessary, eventually. Could Extraterrestrial Intelligences (ETIs) use stars themselves as stellar engines in their migrations? In broad terms, a stellar …

The post A Spider Stellar Engine Could Move Binary Stars Halfway Across a Galaxy appeared first on Universe Today.




tell

It's official: hit action game Stellar Blade is coming to PC in 2025

The wait is almost over!



  • Sony
  • Action games
  • PC games

tell

How to get all the Apple Intelligence features if it's unavailable on your iPhone

The launch of Apple's AI features has proved useless for almost the entire world, and for Russia especially. Few people have yet upgraded to an iPhone 15 Pro or newer, the features are supported only on devices set to US English, ChatGPT integration will almost certainly be cut off in Russia, and in Europe and China Apple Intelligence isn't available at all. There aren't many features to begin with, but the benefit...




tell

Apple plans to release a smart home camera with Apple Intelligence support

Analyst Ming-Chi Kuo has described two new products to expect from Apple in the coming years. In 2026 the company plans to release a wireless smart home camera with support for Apple Intelligence and Siri. According to Kuo, Apple aims to sell tens of millions of the cameras over the long term. New AirPods are also due in 2026, and they will get...





tell

Apple Seeds Second Public Betas of iOS 18.2, iPadOS 18.2 and macOS Sequoia 15.2 With New Apple Intelligence Features

Apple today seeded the second public betas of upcoming iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2 updates, allowing the public to continue testing new features ahead of when the software launches. The public betas come a day after Apple provided developers with new betas.


Public beta testers can download the updates from the Settings app on each device after opting into the beta through Apple's public beta testing website. Note that Apple has also released public betas for watchOS 11.2, tvOS 18.2, and the latest HomePod software.

iOS 18.2, iPadOS 18.2, and ‌macOS Sequoia‌ introduce the next Apple Intelligence features, including the first image generation capabilities.

The update adds Image Playground, a new app for creating images based on text descriptions. You can enter anything you want, though Apple will suggest costumes, locations, items, and more to add to an image. There are options to create characters that resemble your friends and family, and you can choose a photo for ‌Image Playground‌ to use as inspiration to create a related image. Elements added to ‌Image Playground‌ creations are previewed, and there is a preview history so you can undo a change and go back to a prior version.

While ‌Image Playground‌ is a standalone app, it is also integrated into Messages, Notes, Freeform, and more. ‌Image Playground‌ does not make photorealistic images and is instead limited to animation or illustration styles.

The update also adds Genmoji, which are customizable emoji characters that you can create based on descriptions and phrases. Like ‌Image Playground‌ creations, you can base them on your friends and family, with the data pulled from the People album in Photos. You can also make characters using basic elements, and you'll get multiple ‌Genmoji‌ suggestions to choose from. You can create ‌Genmoji‌ using the emoji keyboard.

‌Genmoji‌ are limited to iOS 18.2 and iPadOS 18.2 right now, and will be coming to ‌macOS Sequoia‌ later.

Siri in iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2 gains ChatGPT integration. If Siri is unable to answer a question, it can hand the request over to ChatGPT, but only after asking the user's permission. ChatGPT then answers the question and relays the information back through Siri.

ChatGPT can be used to create content from scratch, including text and images. No account is required to use ChatGPT integration, and Apple and OpenAI do not store requests.
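
The handoff flow described above can be sketched in a few lines of Python. The function names and the permission prompt here are hypothetical stand-ins for illustration, not Apple's actual API:

```python
# Illustrative sketch of the Siri -> ChatGPT fallback with a permission gate.
# All names are invented for this example.

def answer_with_fallback(question, siri_answer, ask_permission, chatgpt_answer):
    """Try the on-device assistant first; fall back to ChatGPT only
    if it cannot answer AND the user explicitly grants permission."""
    answer = siri_answer(question)
    if answer is not None:
        return answer, "siri"
    if not ask_permission(question):      # user must approve the handoff
        return None, "declined"
    return chatgpt_answer(question), "chatgpt"

# Example stand-ins: "Siri" knows timers, ChatGPT handles open-ended requests.
siri = lambda q: "Timer set." if "timer" in q else None
permit = lambda q: True
gpt = lambda q: f"ChatGPT's answer to: {q}"

print(answer_with_fallback("set a timer", siri, permit, gpt))
print(answer_with_fallback("write a haiku", siri, permit, gpt))
```

The key design point mirrored here is that the fallback never fires silently: a declined permission prompt short-circuits the request entirely.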

If you have an iPhone 16, there's a Visual Intelligence feature in iOS 18.2 that provides information about what's around you. Open up the camera and point it at a restaurant to get reviews, or point it at an item to search Google for it.

Some other Visual Intelligence capabilities include reading text out loud, detecting phone numbers and addresses to add them to Contacts, copying text, and summarizing text.

Apple added Writing Tools in iOS 18.1, but in iOS 18.2, you can more freely describe the tone or content change that you want to make, such as adding more action words, or turning an email into a poem.

‌Apple Intelligence‌ now supports localized English in Australia, Canada, New Zealand, South Africa, Ireland, and the UK in addition to U.S. English.

Wait List


If you've already been testing ‌Apple Intelligence‌ and are opted in, you will have access to Writing Tools, ChatGPT integration, and Visual Intelligence automatically.

There is a secondary waiting list for early access to use ‌Genmoji‌, ‌Image Playground‌, and Image Wand. You can sign up to get access in ‌Image Playground‌ or in the areas where you access ‌Genmoji‌ or Image Wand.

When you request access, you are added to a wait list for all three capabilities and you'll get a notification when the features are available for you to use. Apple will roll out access over time.

Availability and Compatibility


The public betas are available on all devices, but the ‌Apple Intelligence‌ features require a device capable of ‌Apple Intelligence‌.

Apple is still working on refining the new ‌Apple Intelligence‌ tools, and the company warns that ‌Genmoji‌, Image Wand, and ‌Image Playground‌ can sometimes give you results you weren't expecting. Apple is collecting feedback on these experiences and will refine them over time.

Release Date


Apple is expected to release the iOS 18.2, iPadOS 18.2, ‌macOS Sequoia‌ 15.2, watchOS 11.2, tvOS 18.2, and visionOS 2.2 updates in early December.

This article, "Apple Seeds Second Public Betas of iOS 18.2, iPadOS 18.2 and macOS Sequoia 15.2 With New Apple Intelligence Features" first appeared on MacRumors.com





tell

How to Tell If You're Using a Slow iPhone Charger

In iOS 18, Apple has introduced a clever new way to identify if your iPhone charging setup is running at less than optimal speeds. The new feature appears directly in Settings, making it easy to spot when you're not getting the fastest possible charge.


The Battery section displays a Slow Charger message when your iPhone detects a "slow" charger in use. You'll also see charging periods with an orange bar. This visual indicator appears in both the 24-hour and 10-day battery usage views.

What Makes a Charger "Slow"?



  • Wired chargers providing 7.5W of power or less

  • Standard Qi1 wireless chargers (less than 10W)

  • USB ports in cars or hubs

  • Chargers with multiple connected devices sharing power
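
As a rough illustration, the thresholds listed above could be expressed as a simple check. The cutoffs are just the ones cited here; iOS makes this determination internally:

```python
# Illustrative only: a rough classifier mirroring the article's thresholds
# for when iOS shows the "Slow Charger" label.

def charger_is_slow(watts, wireless=False):
    """Return True if a charger would likely trigger the Slow Charger label."""
    if wireless:
        return watts < 10      # standard Qi1 pads deliver less than 10W
    return watts <= 7.5        # wired chargers at 7.5W or less

print(charger_is_slow(5))                    # old 5W cube -> True
print(charger_is_slow(20))                   # USB-C PD brick -> False
print(charger_is_slow(7.5, wireless=True))   # Qi1 pad -> True
print(charger_is_slow(15, wireless=True))    # MagSafe/Qi2 -> False
```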


Common Causes of Slow Charging


Several situations can slow down your iPhone's charging speed. A counterfeit charger could be the culprit, for example. Even some authentic third-party wireless chargers claim MagSafe compatibility but only deliver standard Qi charging speeds.


If you keep accessories like headphones connected during wireless charging, your device automatically limits power to 7.5W to meet safety standards. Running demanding apps, playing graphics-intensive games, or streaming video at high brightness while charging can also reduce charging speeds as your iPhone manages power and heat. Lastly, it's worth bearing in mind that charging in a warm environment may cause your iPhone to temporarily pause charging until the temperature falls.

Get Faster Charging Speeds


To get the fastest possible charging speeds, you'll want to use a USB-C Power Delivery charger along with the appropriate cable (USB-C for iPhone 15 and later, or USB-C to Lightning for earlier models). Alternatively, you can opt for either Apple's MagSafe Charger or any Qi2-certified wireless charger, both of which provide significantly faster charging than standard Qi chargers.
This article, "How to Tell If You're Using a Slow iPhone Charger" first appeared on MacRumors.com





tell

The mainstream media and the Democratic Party and the intelligence agencies and the tech monopolies are your enemies. Like fascists they are misleading you with propaganda so that you will obey.

The real threat is collusion. When journalists strike secret alliances with the very people they're supposed to be holding accountable, we are in deep trouble. Lies go unchallenged. Democracy cannot function. And that's what we're watching right now.







tell

US Climate Official Tells COP29 Oil Boom Aids Energy Transition




tell

I Am Not Real: Artificial Intelligence in the Needlework World

The topic of AI in the needlework world has been on my radar for well over a year. But I’ve …




tell

Vivek Ramaswamy Tells DREAMers To Pound Sand On Mass Deportations

The cruelty is the point with these disgusting excuses for human beings. Maybe they can do something about illegal immigrant Elon while they're at it. The incompetent first Trump administration got their rear ends handed to them during their last attempt to deport all of the DREAMers who were living in the United States under the Deferred Action for Childhood Arrivals program, otherwise known as DACA.

Now they're ready to try it again with Trump's plans for mass deportations, which would destroy the United States' economy.

Trump supporter Vivek Ramaswamy, who could end up with a job in the Trump administration, appeared on ABC's This Week and was asked whether Trump would actually follow through on his threat. When host Jonathan Karl brought up the DREAMers, who were brought here as children, Ramaswamy basically told them all to pound sand.

KARL: Now, obviously, Trump's promised --and you've talked a lot about this, the -- you know, mass deportation of undocumented immigrants.






tell

Webinar: Learn How Storytelling Can Make Cybersecurity Training Fun and Effective

Let’s face it—traditional security training can feel as thrilling as reading the fine print on a software update. It’s routine, predictable, and, let’s be honest, often forgotten the moment it's over. Now, imagine cybersecurity training that’s as unforgettable as your favorite show. Remember how "Hamilton" made history come alive, or how "The Office" taught us CPR (Staying Alive beat, anyone?)?




tell

Medievalist William Chester Jordan receives Barry Prize for Distinguished Intellectual Achievement

Jordan will also receive the American Historical Association's Award for Scholarly Distinction in January.




tell

Digital Storytelling with ArcGIS StoryMaps

This workshop will introduce participants to the primary features of ArcGIS StoryMaps and the necessary preparation to publish an effective StoryMaps project. As a member of the Princeton community, you have access to ArcGIS Online and its many apps like StoryMaps. Skills taught or addressed include: pairing maps, multimedia, and text; geolocation; embedding content; digital map making; using ArcGIS templates and layouts; digital storytelling strategies. Please bring a laptop. If you have not already activated your Princeton ArcGIS Online account, you are encouraged to do so beforehand.




tell

Students tell local climate stories in NOVA filmmaking program

Students across the country are participating in NOVA's film production program to make videos about climate change solutions in their local communities.




tell

Louisiana schools use Artificial Intelligence to help young children learn to read

In Louisiana, more than 100,000 students are using an AI tutor that is helping to raise reading scores.




tell

Covid was like a daily terror attack, doctor tells inquiry

Covid inquiry hears harrowing testimony from ex-adviser in emergency preparedness at NHS England.




tell

Somebody moved UK's oldest satellite, and no-one knows who or why

Britain's oldest satellite is in the wrong part of the sky, but no-one's really sure who moved it.




tell

How To Tell Hardwood From Softwood Firewood?

When it comes to heating your home or enjoying a cozy evening by the fireplace, the type of firewood you choose can make a significant difference in both the efficiency and quality of your fire. One critical distinction in the world of firewood is whether it is hardwood or softwood. While the terms “hardwood” and […]

The post How To Tell Hardwood From Softwood Firewood? appeared first on Patriot Outdoor News.




tell

Iowa Pediatrician Blistered for Telling Trump Voters that He Hopes Their Children Die

A scumbag, left-wing (is there any other kind?) doctor in Iowa is facing an uncertain future at his hospital after he began posting obscene wishes on social media saying that he hopes the children of Trump voters are murdered. The creep in question is one Dr. Mayank Sharma, 35, a pediatric cardiology fellow for the […]

The post Iowa Pediatrician Blistered for Telling Trump Voters that He Hopes Their Children Die appeared first on The Lid.




tell

Would-Be Reagan Assassin John Hinckley Tells Leftists to Stop Asking Him to Murder Donald Trump

This is how disgusting Democrats are… the man who tried to murder Ronald Reagan in 1981 is now telling Democrats to stop asking him to assassinate Donald Trump. If you are a Democrat, you can’t be hated enough. Hinckley had to tell Democrats to stop with their negativity on social media, WAVY-TV reported. “I’m a […]

The post Would-Be Reagan Assassin John Hinckley Tells Leftists to Stop Asking Him to Murder Donald Trump appeared first on The Lid.




tell

News24 | US climate action won't end with Trump, envoy tells COP29

Washington's top climate envoy sought to reassure countries at the COP29 talks Monday that Donald Trump's re-election would not end US efforts to tackle global warming.




tell

velocityconf: RT @suzaxtell: #WomeninTech You're invited to a women's meetup on Tues May 28 in SF w/ @courtneynash @mjawili, more http://t.co/MsMZ0IK8L2





tell

5 Helpful Uses for Apple Intelligence on Mac, iPhone, & iPad

Apple Intelligence is here on Mac, iPhone, and iPad, and while the system requirements are strict, the Apple devices that are new and powerful enough to use the AI tools now gain some really fantastic features. We’re going to show you five helpful Apple Intelligence features and uses that you’ll find beneficial to your workflow, ...




tell

Andrea Davis Pinkney: storyteller and more

It’s difficult to encapsulate the impact of Andrea Davis Pinkney on readers and in publishing for young readers. She is an award-winning author, accomplished editor, visionary publisher, and now the co-curator of a museum exhibition.




tell

Back to Elementary School With Storytelling

Engaging in storytelling gives students an opportunity to connect with each other and understand classroom expectations. Teacher Matthew James Friday says, "I tell a story every day for the first two or three weeks. I also suggest that the students can become storytellers themselves. All they need to do is write a story at home. After a few weeks of my telling stories, something magical always happens: A student brings in a story."





tell

Biden Asked If He'll Get a Hostage Deal by the End of His Term, and His Response Is Telling

Legendary Christian writer C.S. Lewis gave us an analogy to help explain otherwise inexplicable moments such as this. Tolerance, Lewis once wrote, “parodies love as flippancy parodies merriment.” In a […]

The post Biden Asked If He'll Get a Hostage Deal by the End of His Term, and His Response Is Telling appeared first on The Western Journal.




tell

Newsroom: Insider Intelligence Slashes Ad Spending Forecast for Russia and Eastern Europe Amid Conflict

Total media ad spend in Russia to drop nearly 50%. March 30, 2022 (New York, NY) – Insider Intelligence expects the ongoing war in Ukraine to have a significant […]




tell

Undercurrents: Episode 10 - Artificial Intelligence in International Affairs, and Women Drivers in Saudi Arabia




tell

Artificial Intelligence and the Public: Prospects, Perceptions and Implications




tell

Undercurrents: Summer Special - Allison Gardner on Artificial Intelligence




tell

Undercurrents: Episode 48 - UK Intelligence Agencies, and Paying for Climate Action




tell

Artificial Intelligence Apps Risk Entrenching India’s Socio-economic Inequities

Expert comment, 14 March 2018

Artificial intelligence applications will not be a panacea for addressing India’s grand challenges. Data bias and unequal access to technology gains will entrench existing socio-economic fissures.

Participants at an AI event in Bangalore. Photo: Getty Images.

Artificial intelligence (AI) is high on the Indian government’s agenda. Some days ago, Prime Minister Narendra Modi inaugurated the Wadhwani Institute for Artificial Intelligence, reportedly India’s first research institute focused on AI solutions for social good. In the same week, Niti Aayog CEO Amitabh Kant argued that AI could potentially add $957 billion to the economy and outlined ways in which AI could be a ‘game changer’.

During his budget speech, Finance Minister Arun Jaitley announced that Niti Aayog would spearhead a national programme on AI; with the near doubling of the Digital India budget, the IT ministry also announced the setting up of four committees for AI-related research. An industrial policy for AI is also in the pipeline, expected to provide incentives to businesses for creating a globally competitive Indian AI industry.

Narratives on the emerging digital economy often suffer from technological determinism — assuming that the march of technological transformation has an inner logic, independent of social choice and capable of automatically delivering positive social change. However, technological trajectories can and must be steered by social choice and aligned with societal objectives. Modi’s address hit all the right notes, as he argued that the ‘road ahead for AI depends on and will be driven by human intentions’. Emphasising the need to direct AI technologies towards solutions for the poor, he called upon students and teachers to identify ‘the grand challenges facing India’ – to ‘Make AI in India and for India’.

Doing so will undoubtedly require substantial investments in R&D, digital infrastructure, and education and re-skilling. But two other critical issues must be simultaneously addressed: data bias and access to technology gains.

While computers have been mimicking human intelligence for some decades now, a massive increase in computational power and the quantity of available data are enabling a process of ‘machine learning’. Instead of coding software with specific instructions to accomplish a set task, machine learning involves training an algorithm on large quantities of data to enable it to self-learn, refining and improving its results through multiple iterations of the same task. The quality of data sets used to train machines is thus a critical concern in building AI applications.
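
The contrast between explicit instructions and iterative self-learning can be made concrete with a minimal sketch in plain Python, no ML library needed: instead of hard-coding the rule y = 2x, a single parameter refines itself over repeated passes through the data:

```python
# A minimal sketch of iterative refinement: the parameter w is not
# programmed with the rule y = 2x, it learns the rule from examples
# via gradient descent on squared error.

data = [(x, 2 * x) for x in range(1, 6)]   # training set: inputs and targets
w = 0.0                                    # the model's single parameter
lr = 0.01                                  # learning rate

for step in range(1000):                   # many iterations of the same task
    for x, y in data:
        error = w * x - y                  # how wrong the current guess is
        w -= lr * error * x                # nudge w to reduce the error

print(round(w, 3))                         # converges to ~2.0, learned not coded
```

Real machine-learning systems fit millions of such parameters, but the loop is the same: the model's quality is bounded by the data it is refined against, which is exactly why biased data sets matter.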

Much recent research shows that applications based on machine learning reflect existing social biases and prejudice. Such bias can occur if the data set the algorithm is trained on is unrepresentative of the reality it seeks to represent. If for example, a system is trained on photos of people that are predominantly white, it will have a harder time recognizing non-white people. This is what led a recent Google application to tag black people as gorillas.

Alternatively, bias can also occur if the data set itself reflects existing discriminatory or exclusionary practices. A recent study by ProPublica found, for example, that software being used to assess the risk of recidivism in criminals in the United States was twice as likely to mistakenly flag black defendants as being at higher risk of committing future crimes.
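
A toy simulation, with entirely synthetic numbers and no resemblance to any real system, shows how an unrepresentative training set produces exactly this kind of skewed error rate:

```python
# Toy illustration of data bias: a model tuned on data dominated by one
# group can look accurate overall while failing the group it rarely saw.
import random
random.seed(0)

def make_example(group):
    # The true feature/label relationship differs between the two groups.
    x = random.uniform(0, 5)
    label = x > 1.0 if group == "A" else x > 3.0
    return group, x, label

# Training set: 95% group A, 5% group B -- unrepresentative by construction.
train = [make_example("A") for _ in range(950)] + [make_example("B") for _ in range(50)]

# "Train" a one-parameter model: pick the decision threshold that makes
# the fewest mistakes on the (skewed) training data.
threshold = min((t / 10 for t in range(51)),
                key=lambda t: sum((x > t) != y for _, x, y in train))

# Evaluate on a balanced test set, reporting the error rate per group.
test = [make_example(g) for g in ("A", "B") for _ in range(500)]
error = {g: sum((x > threshold) != y for grp, x, y in test if grp == g) / 500
         for g in ("A", "B")}
print(threshold, error)   # group B's error rate is several times group A's
```

The model minimises average error, and the average is dominated by the majority group; nothing in the training objective ever penalises it for failing the minority.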

The impact of such data bias can be seriously damaging in India, particularly at a time of growing social fragmentation. It can contribute to the entrenchment of social bias and discriminatory practices, while rendering both invisible and pervasive the processes through which discrimination occurs. Consider gender: women are 34 per cent less likely to own a mobile phone than men – only 14 per cent of women in rural India own one – and women account for just 30 per cent of India’s internet users.

Women’s participation in the labour force, currently at around 27 per cent, is also declining, and is one of the lowest in South Asia. Data sets used for machine learning are thus likely to have a marked gender bias. The same observations are likely to hold true for other marginalized groups as well.

According to a 2014 report, Muslims, Dalits and tribals make up 53 per cent of all prisoners in India; National Crime Records Bureau data from 2016 shows that in some states, the percentage of Muslims in the incarcerated population was almost three times the percentage of Muslims in the overall population. If AI applications for law and order are built on this data, they are likely to be prejudiced against these groups.

(It is worth pointing out that the recently set-up national AI task force comprises mostly Hindu men – only two women are on the task force, and no Muslims or Christians. A recent article in the New York Times talked about AI’s ‘white guy problem’; will India suffer from a ‘Hindu male bias’?)

Yet improving the quality, or diversity, of data sets may not solve the problem by itself. The processes of machine learning and reasoning involve a quagmire of mathematical functions, variables and permutations, the logic of which is not readily traceable or predictable. The dazzle of AI-enabled efficiency gains must not blind us to the fact that while AI systems are being integrated into key socio-economic systems, their accuracy and logic of reasoning have not been fully understood or studied.

The other big challenge stems from the distribution of AI-led technology gains. Even if estimates of AI contribution to GDP are correct, the adoption of these technologies is likely to be in niches within the organized sector. These industries are likely to be capital- rather than labour-intensive, and thus unlikely to contribute to large-scale job creation.

At the same time, AI applications can most readily replace low- to medium-skilled jobs within the organized sector. This is already being witnessed in the outsourcing sector – where basic call and chat tasks are now automated. Re-skilling will be important, but it is unlikely that those who lose their jobs will also be those who are being re-skilled – the long arch of technological change and societal adaptation is longer than that of people’s lives. The contractualization of work, already on the rise, is likely to further increase as large industries prefer to have a flexible workforce to adapt to technological change. A shift from formal employment to contractual work can imply a loss of access to formal social protection mechanisms, increasing the precariousness of work for workers.

The adoption of AI technologies is also unlikely in the short- to medium-term in the unorganized sector, which engages more than 80 per cent of India’s labour force. The cost of developing and deploying AI applications, particularly in relation to the cost of labour, will inhibit adoption. Moreover, most enterprises within the unorganized sector still have limited access to basic, older technologies – two-thirds of the workforce are employed in enterprises without electricity. Eco-system upgrades will be important but incremental. Given the high costs of developing AI-based applications, most start-ups are unlikely to be working towards creating bottom-of-the-pyramid solutions.

Access to AI-led technology gains is thus likely to be heavily differentiated – a few high-growth industries can be expected, but these will not necessarily result in the welfare of labour. Studies show that labour share of national income, especially routine labour, has been declining steadily across developing countries.

We should be clear that new technological applications themselves are not going to transform or disrupt this trend – rather, without adequate policy steering, these trends will be exacerbated.

Policy debates about AI applications in India need to take these two issues seriously. AI applications will not be a panacea for addressing ‘India’s grand challenges’. Data bias and unequal access to technology gains will entrench existing socio-economic fissures, even making them technologically binding.

In addition to developing AI applications and creating a skilled workforce, the government needs to prioritize research that examines the complex social, ethical and governance challenges associated with the spread of AI-driven technologies. Blind technological optimism might entrench rather than alleviate the grand Indian challenge of inequity and growth.

This article was originally published in the Indian Express.




tell

Woman tells Dave Ramsey that her husband has been unemployed for 13 years — and he delivered some hard truths




tell

Doctor’s ‘pizza topping’ trick to tell the difference between hemorrhoids and a sign of colon cancer




tell

Rage Against the Algorithm: the Risks of Overestimating Military Artificial Intelligence

27 August 2020

Yasmin Afina

Research Assistant, International Security Programme
Increasing dependency on artificial intelligence (AI) for military technologies is inevitable, and efforts to develop these technologies for use on the battlefield are proceeding apace. However, developers and end-users must ensure the reliability of these technologies, writes Yasmin Afina.


F-16 SimuSphere HD flight simulator at Link Simulation in Arlington, Texas, US. Photo: Getty Images.

AI holds the potential to replace humans for tactical tasks in military operations beyond current applications such as navigation assistance. For example, in the US, the Defense Advanced Research Projects Agency (DARPA) recently held the final round of its AlphaDogfight Trials where an algorithm controlling a simulated F-16 fighter was pitted against an Air Force pilot in virtual aerial combat. The algorithm won by 5-0. So what does this mean for the future of military operations?

The agency’s deputy director remarked that these tools are now ‘ready for weapons systems designers to be in the toolbox’. At first glance, the dogfight suggests that AI-enabled air combat would provide tremendous military advantages, including the absence of the survival instincts inherent to humans, the ability to operate consistently under high acceleration stress beyond the limits of the human body, and high targeting precision.

The outcome of these trials, however, does not mean that this technology is ready for deployment in the battlefield. In fact, an array of considerations must be taken into account prior to their deployment and use – namely the ability to adapt in real-life combat situations, physical limitations and legal compliance.

Testing environment versus real-life applications

First, as with all technologies, the performance of an algorithm in its testing environment is bound to differ from real-life applications, as in the case of cluster munitions. For instance, Google Health developed an algorithm to help with diabetic retinopathy screening. While the algorithm’s accuracy rate in the lab was over 90 per cent, it did not perform well out of the lab: because it had been trained on high-quality scans, it rejected more than a fifth of the real-life scans, which were deemed below the quality threshold required. As a result, the process ended up being as time-consuming and costly as traditional screening, if not more so.

Similarly, virtual environments akin to the AlphaDogfight Trials do not reflect the extent of risks, hazards and unpredictability of real-life combat. In the dogfight exercise, for example, the algorithm had full situational awareness and was repeatedly trained to the rules, parameters and limitations of its operating environment. But in a real-life, dynamic battlefield, the list of variables is long and will inevitably fluctuate: visibility may be poor, extreme weather could affect operations and the performance of the aircraft, and the behaviour and actions of adversaries will be unpredictable.

Every single eventuality would need to be programmed in line with the commander’s intent in an ever-changing situation; otherwise the algorithm’s performance, including in target identification and firing precision, would be drastically affected.

Hardware limitations

Another consideration relates to the limitations of the hardware that AI systems depend on. Algorithms depend on hardware to operate equipment such as sensors and computer systems – each of which are constrained by physical limitations. These can be targeted by an adversary, for example, through electronic interference to disrupt the functioning of the computer systems which the algorithms are operating from.

Hardware may also be affected involuntarily. For instance, a ‘pilotless’ aircraft controlled by an algorithm can indeed undergo higher accelerations, and thus, higher g-force than the human body can endure. However, the aircraft in itself is also subject to physical limitations such as acceleration limits beyond which parts of the aircraft, such as its sensors, may be severely damaged which in turn affects the algorithm’s performance and, ultimately, mission success. It is critical that these physical limitations are factored into the equation when deploying these machines especially when they so heavily rely on sensors.
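
The point can be illustrated with a trivial sketch. The 9g figure below is an invented placeholder for whatever limit the airframe and its sensors actually tolerate:

```python
# Hypothetical sketch: even if the control algorithm has no g-force limit
# of its own, commanded manoeuvres must be clamped to the hardware
# envelope or sensors and structure degrade. The number is invented.

AIRFRAME_LIMIT_G = 9.0   # assumed airframe/sensor tolerance, not the pilot's

def clamp_command(requested_g):
    """Limit a commanded acceleration to what the hardware tolerates."""
    return min(requested_g, AIRFRAME_LIMIT_G)

print(clamp_command(12.5))  # algorithm requests 12.5g; hardware allows 9.0
print(clamp_command(6.0))   # within the envelope, passed through unchanged
```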

Legal compliance

Another major, and perhaps the greatest, consideration relates to the ability to rely on machines for legal compliance. The DARPA dogfight exclusively focused on the algorithm’s ability to successfully control the aircraft and counter the adversary, however, nothing indicates its ability to ensure that strikes remain within the boundaries of the law.

In an armed conflict, the deployment and use of such systems in the battlefield are not exempt from international humanitarian law (IHL), most notably its customary principles of distinction, proportionality and precautions in attack. The system would need to be able to differentiate between civilians, combatants and military objectives, calculate whether its attacks would be proportionate to the set military objective, produce live collateral damage estimates, and take the necessary precautions to ensure the attacks remain within the boundaries of the law – including the ability to abort if necessary. This would also require the machine to stay within the rules of engagement for that particular operation.

It is therefore critical to incorporate IHL considerations from the conception and throughout the development and testing phases of algorithms to ensure the machines are sufficiently reliable for legal compliance purposes.

It is also important that developers address the 'black box' issue, whereby the algorithm’s calculations are so complex that it is impossible for humans to understand how it came to its results. Addressing the algorithm’s opacity is not only necessary to improve its performance over time; it is also key for accountability and investigation purposes in cases of incidents and suspected violations of applicable laws.

Reliability, testing and experimentation

Algorithms are becoming increasingly powerful and there is no doubt that they will confer tremendous advantages to the military. Over-hype, however, must be avoided: it comes at the expense of the machine’s reliability on the technical front as well as for legal compliance purposes.

The testing and experimentation phases are key, as this is when developers have the ability to fine-tune the algorithms. Developers must, therefore, be held accountable for ensuring the reliability of machines by incorporating considerations pertaining to performance and accuracy, hardware limitations and legal compliance. This could help prevent real-life incidents that result from overestimating the capabilities of AI in military operations.




tell

MIRD Pamphlet No. 31: MIRDcell V4--Artificial Intelligence Tools to Formulate Optimized Radiopharmaceutical Cocktails for Therapy

Visual Abstract




tell

Problem Notes for SAS®9 - 66537: SAS Customer Intelligence Studio becomes non-responsive when you delete a calculated variable from the Edit Value dialog box

In SAS Customer Intelligence Studio, you might notice that the user interface becomes unresponsive when you delete a calculated variable from the Edit Value dialog box.




tell

Problem Notes for SAS®9 - 66539: A new calculated variable that you create in the Edit Value dialog box is not available for selection in SAS Customer Intelligence Studio

In SAS Customer Intelligence Studio, you can choose to create a new calculated variable in the Edit Value dialog box when you populate a treatment custom detail. Following creation of the new calculated




tell

Problem Notes for SAS®9 - 66544: You cannot clear warnings for decision campaign nodes in SAS Customer Intelligence Studio

In SAS Customer Intelligence Studio, you might notice that you cannot clear warnings for decision campaign nodes by selecting either the Clear Warnings  option or the Clear All Warnin