intelligence

How to get all the Apple Intelligence features if it is not available on your iPhone

Apple's launch of its AI capabilities has turned out to be useless for almost the entire world, and for Russia in particular. Few people have yet upgraded to an iPhone 15 Pro or newer, the features are supported only on systems set to US English, ChatGPT integration will almost certainly be cut off in Russia, and in Europe and China Apple Intelligence is not available at all. There are not many features to begin with, but the benefit...




intelligence

Apple plans to release a smart home camera with Apple Intelligence support

Analyst Ming-Chi Kuo has described two new products to expect from Apple in the coming years. In 2026 the company plans to release a wireless smart home camera with support for Apple Intelligence and Siri. According to Kuo, Apple intends to sell the camera in the tens of millions of units over the long term. New AirPods are also due in 2026, and they will get...





intelligence

Apple Seeds Second Public Betas of iOS 18.2, iPadOS 18.2 and macOS Sequoia 15.2 With New Apple Intelligence Features

Apple today seeded the second public betas of upcoming iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2 updates, allowing the public to continue testing new features ahead of when the software launches. The public betas come a day after Apple provided developers with new betas.


Public beta testers can download the updates from the Settings app on each device after opting into the beta through Apple's public beta testing website. Note that Apple has also released public betas for watchOS 11.2, tvOS 18.2, and the latest HomePod software.

iOS 18.2, iPadOS 18.2, and ‌macOS Sequoia‌ introduce the next Apple Intelligence features, including the first image generation capabilities.

The update adds Image Playground, a new app for creating images based on text descriptions. You can enter anything you want, though Apple will suggest costumes, locations, items, and more to add to an image. There are options to create characters that resemble your friends and family, and you can choose a photo for ‌Image Playground‌ to use as inspiration to create a related image. Elements added to ‌Image Playground‌ creations are previewed, and there is a preview history so you can undo a change and go back to a prior version.

While ‌Image Playground‌ is a standalone app, it is also integrated into Messages, Notes, Freeform, and more. ‌Image Playground‌ does not make photorealistic images and is instead limited to animation or illustration styles.

The update also adds Genmoji, which are customizable emoji characters that you can create based on descriptions and phrases. Like ‌Image Playground‌ creations, you can base them on your friends and family, with the data pulled from the People album in Photos. You can also make characters using basic elements, and you'll get multiple ‌Genmoji‌ suggestions to choose from. You can create ‌Genmoji‌ using the emoji keyboard.

‌Genmoji‌ are limited to iOS 18.2 and iPadOS 18.2 right now, and will be coming to ‌macOS Sequoia‌ later.

Siri in iOS 18.2, iPadOS 18.2, and ‌macOS Sequoia‌ 15.2 has ChatGPT integration. If ‌Siri‌ is unable to provide an answer to a question, ‌Siri‌ will hand the request over to ChatGPT, though ‌Siri‌ will need user permission first. ChatGPT will answer the question and relay the information back through ‌Siri‌.

ChatGPT can be used to create content from scratch, including text and images. No account is required to use ChatGPT integration, and Apple and OpenAI do not store requests.

If you have an iPhone 16, there's a Visual Intelligence feature in iOS 18.2 that provides information about what's around you. Open up the camera and point it at a restaurant to get reviews, or point it at an item to search Google for it.

Some other Visual Intelligence capabilities include reading text out loud, detecting phone numbers and addresses to add them to Contacts, copying text, and summarizing text.

Apple added Writing Tools in iOS 18.1, but in iOS 18.2, you can more freely describe the tone or content change that you want to make, such as adding more action words, or turning an email into a poem.

‌Apple Intelligence‌ now supports localized English in Australia, Canada, New Zealand, South Africa, Ireland, and the UK in addition to U.S. English.

Wait List


If you've already been testing ‌Apple Intelligence‌ and are opted in, you will have access to Writing Tools, ChatGPT integration, and Visual Intelligence automatically.

There is a secondary waiting list for early access to use ‌Genmoji‌, ‌Image Playground‌, and Image Wand. You can sign up to get access in ‌Image Playground‌ or in the areas where you access ‌Genmoji‌ or Image Wand.

When you request access, you are added to a wait list for all three capabilities and you'll get a notification when the features are available for you to use. Apple will roll out access over time.

Availability and Compatibility


The public betas are available on all devices, but the ‌Apple Intelligence‌ features require a device capable of ‌Apple Intelligence‌.

Apple is still working on refining the new ‌Apple Intelligence‌ tools, and the company warns that ‌Genmoji‌, Image Wand, and ‌Image Playground‌ can sometimes give you results you weren't expecting. Apple is collecting feedback on these experiences and will refine them over time.

Release Date


Apple is expected to release the iOS 18.2, iPadOS 18.2, ‌macOS Sequoia‌ 15.2, watchOS 11.2, tvOS 18.2, and visionOS 2.2 updates in early December.

This article, "Apple Seeds Second Public Betas of iOS 18.2, iPadOS 18.2 and macOS Sequoia 15.2 With New Apple Intelligence Features" first appeared on MacRumors.com





intelligence

The mainstream media and the Democratic Party and the intelligence agencies and the tech monopolies are your enemies. Like fascists they are misleading you with propaganda so that you will obey.

The real threat is collusion. When journalists strike secret alliances with the very people they're supposed to be holding accountable, we are in deep trouble. Lies go unchallenged. Democracy cannot function. And that's what we're watching right now.




intelligence

I Am Not Real: Artificial Intelligence in the Needlework World

The topic of AI in the needlework world has been on my radar for well over a year. But I’ve …




intelligence

Louisiana schools use Artificial Intelligence to help young children learn to read

In Louisiana, more than 100,000 students are using an AI tutor that is helping to raise reading scores.




intelligence

5 Helpful Uses for Apple Intelligence on Mac, iPhone, & iPad

Apple Intelligence is here on Mac, iPhone, and iPad, and while the system requirements are strict, the Apple devices that are new and powerful enough to use the AI tools now gain some really fantastic features. We're going to show you five helpful Apple Intelligence features and uses that you'll find beneficial to your workflow ...




intelligence

Newsroom: Insider Intelligence Slashes Ad Spending Forecast for Russia and Eastern Europe Amid Conflict

Total media ad spend in Russia to drop nearly 50%. March 30, 2022 (New York, NY) – Insider Intelligence expects the ongoing war in Ukraine to have a significant […]




intelligence

Undercurrents: Episode 10 - Artificial Intelligence in International Affairs, and Women Drivers in Saudi Arabia




intelligence

Artificial Intelligence and the Public: Prospects, Perceptions and Implications




intelligence

Undercurrents: Summer Special - Allison Gardner on Artificial Intelligence




intelligence

Undercurrents: Episode 48 - UK Intelligence Agencies, and Paying for Climate Action




intelligence

Artificial Intelligence Apps Risk Entrenching India’s Socio-economic Inequities

Expert comment, 14 March 2018

Artificial intelligence applications will not be a panacea for addressing India’s grand challenges. Data bias and unequal access to technology gains will entrench existing socio-economic fissures.

Participants at an AI event in Bangalore. Photo: Getty Images.

Artificial intelligence (AI) is high on the Indian government’s agenda. Some days ago, Prime Minister Narendra Modi inaugurated the Wadhwani Institute for Artificial Intelligence, reportedly India’s first research institute focused on AI solutions for social good. In the same week, Niti Aayog CEO Amitabh Kant argued that AI could potentially add $957 billion to the economy and outlined ways in which AI could be a ‘game changer’.

During his budget speech, Finance Minister Arun Jaitley announced that Niti Aayog would spearhead a national programme on AI; with the near doubling of the Digital India budget, the IT ministry also announced the setting up of four committees for AI-related research. An industrial policy for AI is also in the pipeline, expected to provide incentives to businesses for creating a globally competitive Indian AI industry.

Narratives on the emerging digital economy often suffer from technological determinism — assuming that the march of technological transformation has an inner logic, independent of social choice and capable of automatically delivering positive social change. However, technological trajectories can and must be steered by social choice and aligned with societal objectives. Modi’s address hit all the right notes, as he argued that the ‘road ahead for AI depends on and will be driven by human intentions’. Emphasising the need to direct AI technologies towards solutions for the poor, he called upon students and teachers to identify ‘the grand challenges facing India’ – to ‘Make AI in India and for India’.

Doing so will undoubtedly require substantial investments in R&D, digital infrastructure, education and re-skilling. But two other critical issues must be addressed at the same time: data bias and access to technology gains.

While computers have been mimicking human intelligence for some decades now, a massive increase in computational power and the quantity of available data are enabling a process of 'machine learning'. Instead of coding software with specific instructions to accomplish a set task, machine learning involves training an algorithm on large quantities of data to enable it to self-learn, refining and improving its results through multiple iterations of the same task. The quality of the data sets used to train machines is thus a critical concern in building AI applications.

Much recent research shows that applications based on machine learning reflect existing social biases and prejudices. Such bias can occur if the data set the algorithm is trained on is unrepresentative of the reality it seeks to model. If, for example, a system is trained on photos of people who are predominantly white, it will have a harder time recognizing non-white people. This is what led a recent Google application to tag black people as gorillas.
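
To make the mechanism concrete, here is a minimal, hypothetical sketch (Python with numpy and scikit-learn), not drawn from any of the systems mentioned above: a classifier trained on data that under-represents one group ends up markedly less accurate for that group. The groups, features and numbers are synthetic stand-ins chosen purely for illustration.

# Illustrative only: synthetic data, not any real system discussed in this article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n_per_class, centre):
    # Two-feature synthetic data; the two classes sit around different means.
    X = np.vstack([rng.normal(centre, 1.0, (n_per_class, 2)),
                   rng.normal(centre + 2.0, 1.0, (n_per_class, 2))])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(500, centre=0.0)
Xb, yb = make_group(10, centre=4.0)
model = LogisticRegression(max_iter=1000).fit(np.vstack([Xa, Xb]),
                                              np.concatenate([ya, yb]))

# Balanced held-out sets expose the accuracy gap between the two groups.
Xa_test, ya_test = make_group(200, centre=0.0)
Xb_test, yb_test = make_group(200, centre=4.0)
print("accuracy on well-represented group A:", model.score(Xa_test, ya_test))
print("accuracy on under-represented group B:", model.score(Xb_test, yb_test))

In this toy setup the decision boundary is fitted almost entirely to group A, so roughly half of group B is misclassified; richer models and real data fail in subtler ways, but the underlying dynamic is the same.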

Alternatively, bias can also occur if the data set itself reflects existing discriminatory or exclusionary practices. A recent study by ProPublica found for example that software that was being used to assess the risk of recidivism in criminals in the United States was twice as likely to mistakenly flag black defendants as being at higher risk of committing future crimes.

The impact of such data bias can be seriously damaging in India, particularly at a time of growing social fragmentation. It can contribute to the entrenchment of social bias and discriminatory practices, while rendering the processes through which discrimination occurs both invisible and pervasive. Consider India's digital gender divide: women are 34 per cent less likely to own a mobile phone than men – only 14 per cent of women in rural India own one – and only 30 per cent of India's internet users are women.

Women’s participation in the labour force, currently at around 27 per cent, is also declining, and is one of the lowest in South Asia. Data sets used for machine learning are thus likely to have a marked gender bias. The same observations are likely to hold true for other marginalized groups as well.

According to a 2014 report, Muslims, Dalits and tribals make up 53 per cent of all prisoners in India; National Crime Records Bureau data from 2016 show that in some states the percentage of Muslims in the incarcerated population was almost three times the percentage of Muslims in the overall population. If AI applications for law and order are built on these data, it is not unlikely that they will be prejudiced against these groups.

(It is worth pointing out that the recently set-up national AI task force is made up mostly of Hindu men – only two women are on the task force, and no Muslims or Christians. A recent article in the New York Times talked about AI's 'white guy problem'; will India suffer from a 'Hindu male bias'?)

Yet improving the quality, or diversity, of data sets may not solve the problem. The processes of machine learning and reasoning involve a quagmire of mathematical functions, variables and permutations, the logic of which is not readily traceable or predictable. The dazzle of AI-enabled efficiency gains must not blind us to the fact that while AI systems are being integrated into key socio-economic systems, their accuracy and logic of reasoning have not been fully understood or studied.

The other big challenge stems from the distribution of AI-led technology gains. Even if estimates of AI contribution to GDP are correct, the adoption of these technologies is likely to be in niches within the organized sector. These industries are likely to be capital- rather than labour-intensive, and thus unlikely to contribute to large-scale job creation.

At the same time, AI applications can most readily replace low- to medium-skilled jobs within the organized sector. This is already being witnessed in the outsourcing sector, where basic call and chat tasks are now automated. Re-skilling will be important, but it is unlikely that those who lose their jobs will also be those who are re-skilled – the arc of technological change and societal adaptation is longer than that of people's working lives. The contractualization of work, already on the rise, is likely to increase further as large industries prefer a flexible workforce that can adapt to technological change. A shift from formal employment to contractual work can imply a loss of access to formal social protection mechanisms, increasing the precariousness of work for workers.

The adoption of AI technologies is also unlikely in the short to medium term in the unorganized sector, which engages more than 80 per cent of India's labour force. The cost of developing and deploying AI applications, particularly relative to the cost of labour, will inhibit adoption. Moreover, most enterprises within the unorganized sector still have limited access to basic, older technologies – two-thirds of the workforce are employed in enterprises without electricity. Ecosystem upgrades will be important but incremental. Given the high costs of developing AI-based applications, most start-ups are unlikely to be working towards creating bottom-of-the-pyramid solutions.

Access to AI-led technology gains is thus likely to be heavily differentiated – a few high-growth industries can be expected, but these will not necessarily result in the welfare of labour. Studies show that labour share of national income, especially routine labour, has been declining steadily across developing countries.

We should be clear that new technological applications themselves are not going to transform or disrupt this trend – rather, without adequate policy steering, these trends will be exacerbated.

Policy debates about AI applications in India need to take these two issues seriously. AI applications will not be a panacea for addressing ‘India’s grand challenges’. Data bias and unequal access to technology gains will entrench existing socio-economic fissures, even making them technologically binding.

In addition to developing AI applications and creating a skilled workforce, the government needs to prioritize research that examines the complex social, ethical and governance challenges associated with the spread of AI-driven technologies. Blind technological optimism might entrench rather than alleviate the grand Indian challenge of inequity and growth.

This article was originally published in the Indian Express.




intelligence

Rage Against the Algorithm: the Risks of Overestimating Military Artificial Intelligence

27 August 2020

Yasmin Afina

Research Assistant, International Security Programme
Increasing dependency on artificial intelligence (AI) for military technologies is inevitable, and efforts to develop these technologies for use on the battlefield are proceeding apace. However, developers and end-users must ensure the reliability of these technologies, writes Yasmin Afina.


F-16 SimuSphere HD flight simulator at Link Simulation in Arlington, Texas, US. Photo: Getty Images.

AI holds the potential to replace humans for tactical tasks in military operations beyond current applications such as navigation assistance. For example, in the US, the Defense Advanced Research Projects Agency (DARPA) recently held the final round of its AlphaDogfight Trials where an algorithm controlling a simulated F-16 fighter was pitted against an Air Force pilot in virtual aerial combat. The algorithm won by 5-0. So what does this mean for the future of military operations?

The agency's deputy director remarked that these tools are now 'ready for weapons systems designers to be in the toolbox'. At first glance, the dogfight suggests that AI-enabled air combat would provide tremendous military advantages, including freedom from the survival instincts inherent to humans, the ability to operate consistently under acceleration stress beyond the limits of the human body, and high targeting precision.

The outcome of these trials, however, does not mean that this technology is ready for deployment in the battlefield. In fact, an array of considerations must be taken into account prior to their deployment and use – namely the ability to adapt in real-life combat situations, physical limitations and legal compliance.

Testing environment versus real-life applications

First, as with all technologies, the performance of an algorithm in its testing environment is bound to differ from real-life applications, as in the case of cluster munitions. For instance, Google Health developed an algorithm to help with diabetic retinopathy screening. While the algorithm's accuracy rate in the lab was over 90 per cent, it did not perform well out of the lab: because the algorithm had been trained on high-quality scans, it rejected more than a fifth of the real-life scans, which were deemed to be below the required quality threshold. As a result, the process ended up being as time-consuming and costly – if not more so – than traditional screening.
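
The same train/deploy mismatch can be sketched in a few lines. The quality-score distributions and threshold below are invented for illustration (they are not Google Health's figures); the point is simply that a cut-off tuned on clean lab data can reject a large share of noisier field data.

# Illustrative only: assumed quality-score distributions, not real clinical data.
import numpy as np

rng = np.random.default_rng(1)

# Treat scan "quality" as a 0-1 score: lab scans skew high, field scans skew lower.
lab_quality = rng.beta(8, 2, size=10_000)     # mean around 0.8
field_quality = rng.beta(3, 3, size=10_000)   # mean around 0.5

# A threshold chosen so that roughly 95% of lab scans pass...
threshold = np.quantile(lab_quality, 0.05)

# ...rejects far more of the field scans, slowing the whole screening pipeline.
print(f"lab scans rejected:   {np.mean(lab_quality < threshold):.1%}")
print(f"field scans rejected: {np.mean(field_quality < threshold):.1%}")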

Similarly, virtual environments akin to the AlphaDogfight Trials do not reflect the extent of risks, hazards and unpredictability of real-life combat. In the dogfight exercise, for example, the algorithm had full situational awareness and was repeatedly trained on the rules, parameters and limitations of its operating environment. But in a real-life, dynamic battlefield, the list of variables is long and will inevitably fluctuate: visibility may be poor, extreme weather could affect operations and the performance of aircraft, and the behaviour and actions of adversaries will be unpredictable.

Every single eventuality would need to be programmed in line with the commander's intent in an ever-changing situation; otherwise the performance of the algorithms – including in target identification and firing precision – would be drastically affected.

Hardware limitations

Another consideration relates to the limitations of the hardware that AI systems depend on. Algorithms depend on hardware to operate equipment such as sensors and computer systems, each of which is constrained by physical limitations. These can be targeted by an adversary, for example through electronic interference that disrupts the functioning of the computer systems the algorithms are operating from.

Hardware may also be affected involuntarily. For instance, a 'pilotless' aircraft controlled by an algorithm can indeed undergo higher accelerations, and thus higher g-forces, than the human body can endure. However, the aircraft itself is also subject to physical limitations, such as acceleration limits beyond which parts of the aircraft, like its sensors, may be severely damaged – which in turn affects the algorithm's performance and, ultimately, mission success. It is critical that these physical limitations are factored into the equation when deploying these machines, especially when they rely so heavily on sensors.

Legal compliance

Another major, and perhaps the greatest, consideration relates to the ability to rely on machines for legal compliance. The DARPA dogfight focused exclusively on the algorithm's ability to control the aircraft and counter the adversary; nothing, however, indicates its ability to ensure that strikes remain within the boundaries of the law.

In an armed conflict, the deployment and use of such systems on the battlefield are not exempt from international humanitarian law (IHL), most notably its customary principles of distinction, proportionality and precautions in attack. A system would need to be able to differentiate between civilians, combatants and military objectives, calculate whether its attacks will be proportionate to the set military objective, produce live collateral damage estimates, and take the necessary precautions to ensure the attacks remain within the boundaries of the law – including the ability to abort if necessary. It would also need to stay within the rules of engagement for that particular operation.

It is therefore critical to incorporate IHL considerations from the conception and throughout the development and testing phases of algorithms to ensure the machines are sufficiently reliable for legal compliance purposes.

It is also important that developers address the 'black box' issue, whereby the algorithm's calculations are so complex that it is impossible for humans to understand how it came to its results. Addressing the algorithm's opacity is not only necessary to improve its performance over time; it is also key for accountability and investigation purposes in cases of incidents and suspected violations of applicable laws.
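
Interpretability tooling is one place to start on that opacity problem. The sketch below uses permutation importance, a generic, model-agnostic probe, on synthetic data; it illustrates the kind of analysis meant and is not a description of any military system or of what DARPA did.

# Generic interpretability probe on a synthetic dataset (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# the features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")

Such probes do not fully open the black box, but recording them alongside test results gives investigators something concrete to examine after an incident.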

Reliability, testing and experimentation

Algorithms are becoming increasingly powerful, and there is no doubt that they will confer tremendous advantages on the military. Over-hype, however, must not come at the expense of the machine's reliability, on the technical front as well as for legal compliance purposes.

The testing and experimentation phases are key, as they give developers the ability to fine-tune the algorithms. Developers must therefore be held accountable for ensuring the reliability of machines by incorporating considerations pertaining to performance and accuracy, hardware limitations and legal compliance. This could help prevent real-life incidents that result from overestimating the capabilities of AI in military operations.




intelligence

MIRD Pamphlet No. 31: MIRDcell V4--Artificial Intelligence Tools to Formulate Optimized Radiopharmaceutical Cocktails for Therapy

Visual Abstract




intelligence

Problem Notes for SAS®9 - 66537: SAS Customer Intelligence Studio becomes non-responsive when you delete a calculated variable from the Edit Value dialog box

In SAS Customer Intelligence Studio, you might notice that the user interface becomes unresponsive.




intelligence

Problem Notes for SAS®9 - 66539: A new calculated variable that you create in the Edit Value dialog box is not available for selection in SAS Customer Intelligence Studio

In SAS Customer Intelligence Studio, you can choose to create a new calculated variable in the Edit Value dialog box when you populate a treatment custom detail. Following creation of the new calculated




intelligence

Problem Notes for SAS®9 - 66544: You cannot clear warnings for decision campaign nodes in SAS Customer Intelligence Studio

In SAS Customer Intelligence Studio, you might notice that you cannot clear warnings for decision campaign nodes by selecting either the Clear Warnings  option or the Clear All Warnin




intelligence

Problem Notes for SAS®9 - 66527: Updating counts in a Link node in SAS Customer Intelligence Studio produces the error "Link: MAIQService:executeFastPath:"

In SAS Customer Intelligence Studio, the following error is displayed when you update a new Link node in a diagram: "Link: MAIQService:executeFastPath:"




intelligence

Problem Notes for SAS®9 - 66511: A Russian translation shows the same value for two different variables in the Define Value dialog box for the Reply node in SAS Customer Intelligence Studio

In SAS Customer Intelligence Studio, when you add Reply-node variable values in the Define Value dialog box, you might notice that two identically labeled data-grid variables are




intelligence

Artificial Intelligence Prediction and Counterterrorism

Research paper, 6 August 2019

The use of AI in counterterrorism is not inherently wrong, and this paper suggests some necessary conditions for legitimate use of AI as part of a predictive approach to counterterrorism on the part of liberal democratic states.

Surveillance cameras manufactured by Hangzhou Hikvision Digital Technology Co. at a testing station near the company’s headquarters in Hangzhou, China. Photo: Getty Images

Summary

  • The use of predictive artificial intelligence (AI) in countering terrorism is often assumed to have a deleterious effect on human rights, generating spectres of ‘pre-crime’ punishment and surveillance states. However, the well-regulated use of new capabilities may enhance states’ abilities to protect citizens’ right to life, while at the same time improving adherence to principles intended to protect other human rights, such as transparency, proportionality and freedom from unfair discrimination. The same regulatory framework could also contribute to safeguarding against broader misuse of related technologies.
  • Most states focus on preventing terrorist attacks, rather than reacting to them. As such, prediction is already central to effective counterterrorism. AI allows higher volumes of data to be analysed, and may perceive patterns in those data that would, for reasons of both volume and dimensionality, otherwise be beyond the capacity of human interpretation. The impact of this is that traditional methods of investigation that work outwards from known suspects may be supplemented by methods that analyse the activity of a broad section of an entire population to identify previously unknown threats.
  • Developments in AI have amplified the ability to conduct surveillance without being constrained by resources. Facial recognition technology, for instance, may enable the complete automation of surveillance using CCTV in public places in the near future.
  • The current way predictive AI capabilities are used presents a number of interrelated problems from both a human rights and a practical perspective. Where limitations and regulations do exist, they may have the effect of curtailing the utility of approaches that apply AI, while not necessarily safeguarding human rights to an adequate extent.
  • The infringement of privacy associated with the automated analysis of certain types of public data is not wrong in principle, but the analysis must be conducted within a robust legal and policy framework that places sensible limitations on interventions based on its results.
  • In future, broader access to less intrusive aspects of public data, direct regulation of how those data are used – including oversight of activities by private-sector actors – and the imposition of technical as well as regulatory safeguards may improve both operational performance and compliance with human rights legislation. It is important that any such measures proceed in a manner that is sensitive to the impact on other rights such as freedom of expression, and freedom of association and assembly.




intelligence

Secrets and Spies: UK Intelligence Accountability After Iraq and Snowden

Book, 15 January 2020

How can democratic governments hold intelligence and security agencies to account when what they do is largely secret? Jamie Gaskarth explores how intelligence professionals view accountability in the context of 21st century politics.

Using the UK as a case study, this book provides the first systematic exploration of how accountability is understood inside the secret world. It is based on new interviews with current and former UK intelligence practitioners, as well as extensive research into the performance and scrutiny of the UK intelligence machinery.

The result is the first detailed analysis of how intelligence professionals view their role, what they feel keeps them honest, and how far external overseers impact on their work.

The UK gathers material that helps inform global decisions on such issues as nuclear proliferation, terrorism, transnational crime, and breaches of international humanitarian law. On the flip side, the UK was a major contributor to the intelligence failures leading to the Iraq war in 2003, and its agencies were complicit in the widely discredited U.S. practices of torture and ‘rendition’ of terrorism suspects. UK agencies have come under greater scrutiny since those actions, but it is clear that problems remain.

Secrets and Spies is the result of a British Academy funded project (SG151249) on intelligence accountability. The book is published as part of the Insights series.

Praise for Secrets and Spies

Open society is increasingly defended by secret means. For this reason, oversight has never been more important. This book offers a new exploration of the widening world of accountability for UK intelligence, encompassing informal as well as formal mechanisms. It substantiates its claims well, drawing on an impressive range of interviews with senior figures. This excellent book offers both new information and fresh interpretations. It will have a major impact.

Richard Aldrich, Professor of International Security, University of Warwick, UK

About the author

Jamie Gaskarth is Professor of Foreign Policy and International Relations at The Open University. He was previously senior lecturer at the University of Birmingham where he taught strategy and decision-making. His research focused on the ethical dilemmas of leadership and accountability in intelligence, foreign policy, and defence. He is author/editor or co-editor of six books and served on the Academic Advisory panel for the 2015 UK National Security Strategy and Strategic Defence and Security Review.





intelligence

Who gains from artificial intelligence?

27 February 2023 — 5:30PM to 6:30PM, Chatham House and Online

What implications will AI have on fundamental rights and how can societies benefit from this technology revolution?

In recent months, the latest developments in artificial intelligence (AI) have attracted much media attention. These technologies hold a wealth of potential for a wide range of applications. For example, the recent release of OpenAI's ChatGPT, a text-generation model, has shed light on the opportunities such applications hold, including advancing scientific research and discovery, enhancing search engines and improving key commercial applications.

Yet, instead of generating an evidence-based public debate, this increased interest has also led to discussions on AI technologies which are often alarmist in nature, and in a lot of cases, misleading. They carry the risk of shifting public and policymakers’ attention away from critical societal and legal risks as well as concrete solutions.

This discussion, held in partnership with Microsoft and Sidley Austin LLP, provides an expert-led overview of where the technology stands in 2023. Panellists also reflect on the implications of implementing AI on fundamental rights, the enforcement of current and upcoming legislation and multi-stakeholder pathways to address relevant issues in the AI space.

More specifically, the panel explores:

  • What is the current state of the art in the AI field?
  • What are the opportunities and challenges presented by generative AI and other innovations?
  • What are some of the key, and potentially most disruptive, AI applications to monitor in the near- and mid-term? 
  • Which applications would benefit from greater public policy/governance discussions?
  • How can current and future policy frameworks ensure the protection of fundamental rights in this new era of AI?
  • What is the role of multi-stakeholder collaboration?
  • What are the pathways to achieving inclusive and responsible governance of AI?
  • How can countries around the world work together to develop frameworks for responsible AI that uphold democratic values and advance AI collaboration across borders?

As with all member events, questions from the audience drive the conversation.





intelligence

Trump taps John Ratcliffe, ex-national intelligence chief, for CIA director

President-elect Donald Trump announced his choice Tuesday for CIA Director, tapping his former intelligence chief John Ratcliffe, whom he called a "warrior of truth."




intelligence

How High Intelligence Affects Drinking Habits (M)

Your intelligence could influence how much alcohol you consume.




intelligence

Artificial Intelligence in K-12: The Right Mix for Learning or a Bad Idea?

The rapid shift to tech-driven, remote learning this spring has infused more technology into K-12 education, but AI tools still remain on the fringe.




intelligence

How Artificial Intelligence Is Making 2,000-Year-Old Scrolls Readable Again

When Mount Vesuvius erupted in 79 C.E., it covered the ancient cities of Pompeii and Herculaneum under tons of ash. Millennia later, in the mid-18th century, archeologists began to unearth Herculaneum, including its famed libraries, but the scrolls they found were too fragile to be unrolled and read; their contents were thought to be lost forever. Only now, thanks to the advent of artificial intelligence and machine learning, have scholars of the ancient world partnered with computer programmers to unlock the contents of these priceless documents.

In this episode of "There's More to That," science journalist and Smithsonian contributor Jo Marchant tells us about the yearslong campaign to read these scrolls. And Youssef Nader, one of the three winners of last year's "Vesuvius Challenge" to make these clumps of vulcanized ash readable, tells us how he and his teammates achieved their historic breakthrough.

Read Smithsonian's coverage of the Vesuvius Challenge and the Herculaneum scrolls here (https://www.smithsonianmag.com/smart-news/three-students-decipher-first-passages-2000-year-old-scroll-burned-vesuvius-eruption-180983738/), here (https://www.smithsonianmag.com/history/buried-ash-vesuvius-scrolls-are-being-read-new-xray-technique-180969358/), and here (https://www.smithsonianmag.com/history/archaeologoists-only-just-beginning-reveal-secrets-hidden-ancient-manuscripts-180967455/). Find prior episodes of our show here (https://www.smithsonianmag.com/podcast/).

There's More to That is a production of Smithsonian magazine and PRX Productions. From the magazine, our team is Chris Klimek, Debra Rosenberg and Brian Wolly. From PRX, our team is Jessica Miller, Adriana Rosas Rivera, Genevieve Sponsler, Rye Dorsey, and Edwin Ochoa. The Executive Producer of PRX Productions is Jocelyn Gonzales. Fact-checking by Stephanie Abramson. Episode artwork by Emily Lankiewicz. Music by APM Music.




intelligence

Scientists Who Developed the Building Blocks of Artificial Intelligence Win Nobel Prize in Physics

John Hopfield and Geoffrey Hinton shared the award for their work on artificial neural networks and machine learning




intelligence

New Macs with Apple Intelligence, the next Apple Vision Pro on the AppleInsider Podcast

The first reviews of the New Mac mini, iMac, and MacBook Pro, are in — and surprisingly range from delight to strange cynicism. Plus there are yet more rumors of the next Apple Vision Pro, but you need not hold your breath.


A Mac mini seen on an iMac screen

Typical. You wait ages for a new Mac and three of them turn up — to mixed reviews. That's not mixed as in some reviews are critical while others are not; it's that some are fulsome while others are begrudging.







intelligence

New in iOS 18.2 developer beta 3: Changes to Apple Intelligence, video playback, and more

The third developer beta of iOS 18.2 is now available for all compatible iPhone models, as Apple Intelligence testing continues. Here's everything you need to know about the update.


iOS 18.2 developer beta 3 introduces enhancements to existing features.

On Monday, Apple released iOS 18.2 developer beta 3, with build number 22C5131e, up from the previous 22C5125e. While the update is compatible with devices as old as the iPhone XS and iPhone XR, the software includes a variety of Apple Intelligence features that only work on iPhone 15 Pro, iPhone 15 Pro Max, and the iPhone 16 range.

The iOS 18.2 update introduces support for Image Playground, Genmoji, Visual Intelligence, and ChatGPT integration via Siri. There's also a new Find My feature that helps users locate lost luggage or AirTags.






intelligence

Contest explores artificial intelligence’s strengths, flaws for medical diagnoses

Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI) will host “Diagnose-a-thon,” a competition that aims to uncover the power and potential dangers of using generative AI for medical inquiries. The virtual event will take place Nov. 11-17 with top prizes of $1,000.  




intelligence

Trump taps John Ratcliffe, his former director of national intelligence, to lead CIA

President-elect Donald Trump announced that John Ratcliffe, his former director of national intelligence, is his pick to lead the CIA.




intelligence

News24 Business | Say hi to 'Sandton 2.0', as swanky suburb beefs up security with artificial intelligence

While some were writing Sandton's obituary during the height of the Covid-19 pandemic, when the work-from-home phenomenon became the norm, others - like its property owners and businesses - were planning its revival.




intelligence

Can Artificial Intelligence Help Teachers Find the Right Lesson Plans?

The IBM Foundation has launched a website called Teacher Advisor with Watson, which uses artificial intelligence to find high-quality elementary math resources and lessons.




intelligence

Spotlight on SiriusDecisions: Artificial Intelligence

Kerry Cunningham of SiriusDecisions talks about how AI can power the B2B revenue engine




intelligence

State Releases Guidance on Generative Artificial Intelligence in Classrooms

The Delaware Department of Education has developed guidance for districts and charter schools on generative artificial intelligence (AI) in the classroom.




intelligence

SAS Customer Intelligence 360: Introduction to marketing data management

No matter what your brand's level of marketing maturity is, SAS can help you move from data to insight to action with rich functionality for adaptive planning, journey activation and an embedded real-time decision engine – all fueled by powerful analytics and artificial intelligence (AI) capabilities. Let's begin with a [...]





intelligence

SAS Customer Intelligence 360: Identity management, profiling and unified data model for hybrid marketing

Customer data platforms (CDPs), data management platforms (DMPs), people-based marketing, identity graphs, and more overlapping topics represent an important ingredient of any martech brainstorming session in 2020. As your brand spreads out across touchpoints — from web to mobile applications, as well as call centers, email and direct mail — [...]





intelligence

SAS Customer Intelligence 360: Behavioral event tracking, targeting & engagement analysis

There's no question that we're all increasingly, and often exclusively, interacting with brands digitally. Consumers are now online through countless mechanisms – from laptops and mobile apps to AI-enabled voice assistants and sensor-based wearables. Engagement is diversifying in fascinating new ways. And when organizations can't see their customers interacting in [...]





intelligence

SAS Customer Intelligence 360: Enriching analysis with device usage data

Over the last 20+ years, global society has adopted digital devices at scale, and consumer interaction behaviors continually evolve and mature. As analysts, we're uniquely positioned to notice worldwide trends, country-specific nuances and localized market behaviors that can have significant impact on our brand's business goals. This global scope is [...]





intelligence

SAS Customer Intelligence 360: Automated explanation and supervised segmentation

One of the wonderful aspects about my client-facing role at SAS is the breadth of audiences that I get to work with. No matter where you fall on this list: Data engineer. Business or marketing analyst. Citizen data scientist. Data scientist. Statistician. Executive. One topic is certain: We all love [...]





intelligence

SAS Customer Intelligence 360: Make better decisions with analytically driven marketing

According to the SAS Experience 2030 global study, by the year 2030 67% of in-person customer engagements (think sales assistance and information queries) will be completed by smart machines rather than humans. And while it may seem a bit ironic, the most personalized customer experiences could involve no people at [...]





intelligence

SAS Customer Intelligence 360: Visual analytics, sankey diagrams and customer journeys

We live in the age of data. From global warming stats to customer behavior patterns, new technologies have made it easier to collect, store, access and analyze information. But our use of these technologies has also eroded our attention spans and fueled post-truth misunderstandings. To combat these trends, the question [...]





intelligence

SAS Customer Intelligence 360: Data visualization, location analytics and geospatial insights

Everything happens somewhere, and much of our customer data includes location information. Websites include x, y coordinates in semi-structured click streams, and the mobile apps your prospects depend on frequently support device location to provide a personalized, targeted experience. As my SAS peer Robby Powell said: "Human brains are hardwired [...]





intelligence

Artificial Intelligence: Accelerating Knowledge in the Digital Age!

In an era of abundant and constantly evolving information, the challenge is not just accessing knowledge but understanding and applying it effectively. AI is a transformative technology that is reshaping how we learn, work, and grow. In this blog, we’ll explore how AI accelerates our knowledge acquisition and understand how it can relate to the process of learning, which connects with our daily lives.

The role of AI is to accelerate knowledge by personalizing learning experiences, providing instant access to information, and offering data-driven insights. AI empowers us to learn more efficiently and effectively in many ways. I won't go into much detail, since we are already busy exploring what AI means and what it can do; however, I want to share one inspiring fact about AI: it can analyze vast amounts of data in seconds, making sense of complex information and providing instant, actionable insights or concise answers. We are all looking to speed things up, and that can help us understand technology better and perform our tasks faster.

The main reason AI is in focus is its ability to perform tasks faster than ever. We aim to enhance the performance of all our products, including the everyday household electronic items we use. Shouldn't we similarly strive to accelerate the learning process? I am committed to helping with that, and one such method is concise, minute-long videos.

In today's fast-paced world, where attention spans are shorter than ever, the rise of social media platforms has made it easier for anyone to create and share short videos. This is where minute videos come in. These bite-sized clips offer a quick and engaging way to deliver information to the audience with a significant impact. Understanding the definitions of technical terms in VLSI Design can often be accomplished in just a minute.

Below are the definitions of the essential stages in the RTL2GDSII Flow. For further reference, these definitions are also accessible on YouTube.

What is RTL Coding in VLSI Design?

     

What is Digital Verification?

     

What Is Synthesis in VLSI Design?

     

What Is Logic Equivalence Checking in VLSI Design?

     

What Is DFT in VLSI Design?

     

What is Digital Implementation?

     

What is Power Planning?

     

What are DRC and LVS in Physical Verification?

     

What are On-Chip Variations?  

     

Want to Learn More?

The Cadence RTL-to-GDSII Flow training is available in both "Blended" and "Live" formats. Please reach out to Cadence Training for further information.

And don't forget to obtain your Digital Badge after completing the training!

Related Blogs

Training Insights – Why Is RTL Translated into Gate-Level Netlist?

Did You Miss the RTL-to-GDSII Webinar? No Worries, the Recording Is Available!

It’s the Digital Era; Why Not Showcase Your Brand Through a Digital Badge!

Binge on Chip Design Concepts this Weekend!




intelligence

iOS 18.2 beta 3: 4 Apple Intelligence features you can test now

Apple's latest beta for iOS 18.2 features some important AI features you can test out now.




intelligence

Apple Intelligence on Mac: 5 AI-powered features you can test right now

With the recent macOS Sequoia launch, Apple released some Apple Intelligence features. Here's what you can try out now.




intelligence

China’s Push Into Artificial Intelligence―How Should the United States Respond?

Thu, 05/03/2018 - 16:49





intelligence

Catching Up in a Technology War—China's Challenge in Artificial Intelligence

Tue, 06/16/2020 - 11:20
