artificial Former Caltech and Google scientists win physics Nobel for pioneering artificial intelligence By www.latimes.com Published On :: Tue, 8 Oct 2024 12:24:33 GMT John Hopfield dreamed up the modern neural network while at Caltech. Geoffrey Hinton built on it, creating an AI firm that Google bought for $44 million. Full Article
artificial Issues of the Environment: U-M works toward sustainable implementation of new artificial intelligence tool By www.wemu.org Published On :: Wed, 30 Oct 2024 05:51:55 -0400 The University of Michigan is forging ahead and working towards being a leader in generative artificial intelligence with its U-M-GPT program. As it does, there are environmental concerns to be addressed. The initiative is part of Michigan’s broader effort to integrate AI into its academic and administrative infrastructure, enhancing learning, teaching, and research. But, AI consumes a great deal of energy. WEMU's David Fair spoke with the Vice President for Information Technology and Chief Information Officer at U-M, Dr. Ravi Pendse, about how U-M is dealing with the environmental ramifications of AI. Full Article
artificial The challenges of artificial intelligence in Colombia By www.spreaker.com Published On :: Thu, 04 Aug 2022 22:59:00 +0000 On the occasion of Prisa Media's new project, panelists argued that the challenges lie in financing, the training of programmers and the ethical front. Full Article
artificial Innovation and artificial intelligence: how do they contribute to the Colombian Caribbean? By www.spreaker.com Published On :: Fri, 31 Mar 2023 23:52:09 +0000 During the Reto Regiones Caribe, experts discussed the importance of developing technology and innovation across the private sector, the public sector and academia. Full Article
artificial What fears does artificial intelligence stir up? By www.spreaker.com Published On :: Fri, 05 May 2023 23:50:00 +0000 Experts analyzed the alarms raised over the risks posed by artificial intelligence; they consider it important to make progress on regulation. Full Article
artificial Artificial intelligence: the challenges for employment and education By www.spreaker.com Published On :: Wed, 14 Jun 2023 23:56:42 +0000 Panelists analyzed the risks and opportunities these two sectors face given advances in AI such as ChatGPT. Full Article
artificial Artificial intelligence, social media and the internet: the challenges for 2024 By www.spreaker.com Published On :: Fri, 22 Dec 2023 02:25:15 +0000 Panelists discussed whether AI will face a slowdown or further growth next year. They also raised the limitations posed by the forthcoming legislation regulating AI. Full Article
artificial Artificial intelligence: where is the roadmap headed in Latin America? By www.spreaker.com Published On :: Wed, 07 Aug 2024 02:00:00 +0000 Three ministers and one expert analyzed the outlook the region faces in terms of governance, regulation and digital infrastructure. Full Article
artificial An artificial heart. By www.spreaker.com Published On :: Thu, 20 Aug 2020 22:22:36 +0000 An artificial heart. Full Article
artificial SANAMENTE - THROMBOCYTOPENIC PURPURA AND ARTIFICIAL HEART, 4 OCTOBER By www.spreaker.com Published On :: Thu, 06 Oct 2022 21:05:00 +0000 Full Article
artificial How can we understand and interact with artificial intelligence? By www.spreaker.com Published On :: Wed, 11 Oct 2023 20:50:00 +0000 Full Article
artificial Points of the National Agreement, and Petro says oil must be swapped for artificial intelligence By www.spreaker.com Published On :: Tue, 08 Oct 2024 00:19:00 +0000 Listen to this Monday 7 October's programme. La Luciérnaga, Caracol Radio's humour and opinion show that has accompanied listeners on their way home for 31 years. Full Article
artificial Artificial intelligence is key to mapping natural vegetation By www.spreaker.com Published On :: Sun, 29 May 2022 17:40:00 +0000 Full Article
artificial Artificial intelligence is already in our lives; how do we recognize it? By www.spreaker.com Published On :: Sat, 30 Jul 2022 20:17:23 +0000 Artificial intelligence is already in our lives; how do we recognize it? Full Article
artificial Robots with artificial intelligence are already in Colombia By www.spreaker.com Published On :: Sat, 06 Aug 2022 17:39:16 +0000 Robots with artificial intelligence are already in Colombia Full Article
artificial Could artificial intelligence have feelings? By www.spreaker.com Published On :: Sat, 13 Aug 2022 21:07:00 +0000 Full Article
artificial Artificial intelligence applications made in Colombia to improve quality of life By www.spreaker.com Published On :: Sat, 27 Aug 2022 17:00:00 +0000 Full Article
artificial Inteligencia no tan artificial: Álvaro Montes receives the Accenture Journalism Award By www.spreaker.com Published On :: Sat, 03 Sep 2022 19:06:00 +0000 Full Article
artificial Art created with artificial intelligence? By www.spreaker.com Published On :: Sat, 10 Sep 2022 17:10:00 +0000 Full Article
artificial Flora, the Colombian influencer created with artificial intelligence By www.spreaker.com Published On :: Sat, 17 Sep 2022 19:43:00 +0000 Full Article
artificial Inteligencia no tan artificial: Saturday, 24 September By www.spreaker.com Published On :: Sat, 24 Sep 2022 18:30:00 +0000 Full Article
artificial How advanced is Colombia in artificial intelligence? By www.spreaker.com Published On :: Sat, 01 Oct 2022 17:23:00 +0000 Full Article
artificial Inteligencia no tan artificial: what are 'gamers' and why have they gained so much ground in the market? By www.spreaker.com Published On :: Sat, 15 Oct 2022 17:53:00 +0000 Full Article
artificial 3 interesting things the government is doing with artificial intelligence By www.spreaker.com Published On :: Sat, 12 Nov 2022 22:49:00 +0000 Full Article
artificial Music created from waste and artificial intelligence By www.spreaker.com Published On :: Sat, 03 Dec 2022 15:12:00 +0000 Full Article
artificial AI image generators: the new trend on social media By www.spreaker.com Published On :: Sat, 17 Dec 2022 20:15:00 +0000 Full Article
artificial Artificial intelligence: the star of this weekend's CES trade show By www.spreaker.com Published On :: Sat, 07 Jan 2023 19:08:00 +0000 Full Article
artificial Inteligencia no tan artificial: software for lawyers is created By www.spreaker.com Published On :: Sun, 15 Jan 2023 23:20:00 +0000 Full Article
artificial Inteligencia no tan artificial: the ChatGPT revolution By www.spreaker.com Published On :: Sat, 21 Jan 2023 20:35:00 +0000 Full Article
artificial The FICCI 2023 poster was created with artificial intelligence By www.spreaker.com Published On :: Sat, 28 Jan 2023 17:03:00 +0000 Full Article
artificial Inteligencia no tan artificial: googling will never be the same By www.spreaker.com Published On :: Sat, 11 Feb 2023 18:42:00 +0000 Full Article
artificial Inteligencia no tan artificial: killer robots? By www.spreaker.com Published On :: Sat, 04 Mar 2023 18:26:00 +0000 Full Article
artificial 'Tirando x Colombia' launches an adolescent sexual health campaign using artificial intelligence By www.spreaker.com Published On :: Sun, 05 Mar 2023 18:21:00 +0000 Full Article
artificial IV Artificial Intelligence Summit: what has been the impact of ChatGPT? By www.spreaker.com Published On :: Sun, 18 Jun 2023 21:27:00 +0000 Full Article
artificial AI for Good, the United Nations conference on the responsible use of artificial intelligence By www.spreaker.com Published On :: Sat, 08 Jul 2023 16:11:00 +0000 Full Article
artificial Artificial intelligence recreates missing persons By www.spreaker.com Published On :: Sat, 05 Aug 2023 17:04:00 +0000 Full Article
artificial Artificial intelligence confirms that people are being scammed on dating apps By www.spreaker.com Published On :: Mon, 11 Sep 2023 20:37:00 +0000 Full Article
artificial Artificial Intelligence, Scientific Discovery, and Product Innovation By aidantr.github.io Published On :: 2024-11-13T05:47:01+00:00 Aidan Toner-Rodgers† MIT November 6, 2024 This paper studies the impact of artificial intelligence on innovation, exploiting the randomized introduction of a new materials discovery technology to 1,018 scientists in the R&D lab of a large U.S. firm. AI-assisted researchers discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation. These compounds possess more novel chemical structures and lead to more radical inventions. However, the technology has strikingly disparate effects across the productivity distribution: while the bottom third of scientists see little benefit, the output of top researchers nearly doubles. Investigating the mechanisms behind these results, I show that AI automates 57% of “idea-generation” tasks, reallocating researchers to the new task of evaluating model-produced candidate materials. Top scientists leverage their domain knowledge to prioritize promising AI suggestions, while others waste significant resources testing false positives. Together, these findings demonstrate the potential of AI-augmented research and highlight the complementarity between algorithms and expertise in the innovative process. Survey evidence reveals that these gains come at a cost, however, as 82% of scientists report reduced satisfaction with their work due to decreased creativity and skill underutilization. Full Article
artificial The Candy Whips Deliver New Album 'Artificial Melodies' By www.antimusic.com Published On :: The Candy Whips talk about their new angular synthpop album Artificial Melodies, which is out today via Kitten Robot Records Full Article
artificial I Am Not Real: Artificial Intelligence in the Needlework World By www.needlenthread.com Published On :: Mon, 28 Oct 2024 15:00:00 +0000 The topic of AI in the needlework world has been on my radar for well over a year. But I’ve … Full Article Uncategorized miscellaneous embroidery musings
artificial Louisiana schools use Artificial Intelligence to help young children learn to read By www.npr.org Published On :: Wed, 30 Oct 2024 05:08:10 -0400 In Louisiana, more than 100,000 students are using an AI tutor that is helping to raise reading scores. Full Article
artificial Undercurrents: Episode 10 - Artificial Intelligence in International Affairs, and Women Drivers in Saudi Arabia By f1.media.brightcove.com Published On :: Fri, 15 Jun 2018 00:00:00 +0100 Full Article
artificial Artificial Intelligence and the Public: Prospects, Perceptions and Implications By f1.media.brightcove.com Published On :: Fri, 28 Jun 2019 00:00:00 +0100 Full Article
artificial Undercurrents: Summer Special - Allison Gardner on Artificial Intelligence By f1.media.brightcove.com Published On :: Thu, 08 Aug 2019 00:00:00 +0100 Full Article
artificial Artificial Intelligence Apps Risk Entrenching India’s Socio-economic Inequities By www.chathamhouse.org Published On :: Wed, 14 Mar 2018 15:35:52 +0000 Artificial Intelligence Apps Risk Entrenching India’s Socio-economic Inequities Expert comment 14 March 2018 Artificial intelligence applications will not be a panacea for addressing India’s grand challenges. Data bias and unequal access to technology gains will entrench existing socio-economic fissures. — Participants at an AI event in Bangalore. Photo: Getty Images. Artificial intelligence (AI) is high on the Indian government’s agenda. A few days ago, Prime Minister Narendra Modi inaugurated the Wadhwani Institute for Artificial Intelligence, reportedly India’s first research institute focused on AI solutions for social good. In the same week, Niti Aayog CEO Amitabh Kant argued that AI could potentially add $957 billion to the economy and outlined ways in which AI could be a ‘game changer’. During his budget speech, Finance Minister Arun Jaitley announced that Niti Aayog would spearhead a national programme on AI; with the near doubling of the Digital India budget, the IT ministry also announced the setting up of four committees for AI-related research. An industrial policy for AI is also in the pipeline, expected to provide incentives to businesses for creating a globally competitive Indian AI industry. Narratives on the emerging digital economy often suffer from technological determinism — assuming that the march of technological transformation has an inner logic, independent of social choice and capable of automatically delivering positive social change. However, technological trajectories can and must be steered by social choice and aligned with societal objectives. Modi’s address hit all the right notes, as he argued that the ‘road ahead for AI depends on and will be driven by human intentions’.
Emphasising the need to direct AI technologies towards solutions for the poor, he called upon students and teachers to identify ‘the grand challenges facing India’ – to ‘Make AI in India and for India’. Doing so will undoubtedly require substantial investments in R&D, digital infrastructure and education and re-skilling. But two other critical issues must be simultaneously addressed: data bias and access to technology gains. While computers have been mimicking human intelligence for some decades now, a massive increase in computational power and the quantity of available data are enabling a process of ‘machine learning’. Instead of coding software with specific instructions to accomplish a set task, machine learning involves training an algorithm on large quantities of data to enable it to self-learn, refining and improving its results through multiple iterations of the same task. The quality of the data sets used to train machines is thus a critical concern in building AI applications. Much recent research shows that applications based on machine learning reflect existing social biases and prejudice. Such bias can occur if the data set the algorithm is trained on is unrepresentative of the reality it seeks to represent. If, for example, a system is trained on photos of people that are predominantly white, it will have a harder time recognizing non-white people. This is what led a recent Google application to tag black people as gorillas. Alternatively, bias can also occur if the data set itself reflects existing discriminatory or exclusionary practices. A recent study by ProPublica found, for example, that software being used to assess the risk of recidivism in criminals in the United States was twice as likely to mistakenly flag black defendants as being at higher risk of committing future crimes. The impact of such data bias can be seriously damaging in India, particularly at a time of growing social fragmentation.
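The training-data failure mode described above can be reduced to a few lines. The sketch below is purely illustrative (a toy nearest-centroid classifier over a single made-up feature; all numbers are hypothetical, and no real recognition system works on one scalar): a model that never sees examples from an under-represented group misclassifies that group at test time, no matter how well it scores on the majority group.

```python
# Illustrative toy only: a nearest-centroid classifier trained on data
# that under-represents one group. All feature values are hypothetical.

def centroid(values):
    """Mean of a list of feature values."""
    return sum(values) / len(values)

def train(samples):
    """samples: list of (feature, label) pairs. Returns one centroid per label."""
    by_label = {}
    for x, label in samples:
        by_label.setdefault(label, []).append(x)
    return {label: centroid(xs) for label, xs in by_label.items()}

def predict(model, x):
    """Assign the label whose centroid is closest to x."""
    return min(model, key=lambda label: abs(model[label] - x))

# Training data drawn almost entirely from group A: its faces cluster
# around feature value 2.0. Group B faces (around 7.0) never appear.
training_set = [
    (1.8, "face"), (2.0, "face"), (2.2, "face"), (2.0, "face"),
    (5.8, "not_face"), (6.0, "not_face"), (6.2, "not_face"),
]

model = train(training_set)

# At test time, group B faces land nearer the "not_face" centroid,
# so every one of them is misclassified.
group_b_faces = [6.5, 7.0, 7.2]
correct = sum(predict(model, x) == "face" for x in group_b_faces)
print(f"{correct} of {len(group_b_faces)} group-B faces recognized")
```

The model's accuracy on group A is perfect, so aggregate metrics hide the failure entirely; the bias enters through the sampling of the training set, before any algorithmic choice is made.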
Such data bias can contribute to the entrenchment of social bias and discriminatory practices, while rendering both invisible and pervasive the processes through which discrimination occurs. Women are 34 per cent less likely to own a mobile phone than men – manifested in only 14 per cent of women in rural India owning a mobile phone, while only 30 per cent of India’s internet users are women. Women’s participation in the labour force, currently at around 27 per cent, is also declining, and is one of the lowest in South Asia. Data sets used for machine learning are thus likely to have a marked gender bias. The same observations are likely to hold true for other marginalized groups as well. According to a 2014 report, Muslims, Dalits and tribals make up 53 per cent of all prisoners in India; National Crime Records Bureau data from 2016 shows that in some states, the percentage of Muslims in the incarcerated population was almost three times the percentage of Muslims in the overall population. If AI applications for law and order are built on this data, it is not unlikely that they will be prejudiced against these groups. (It is worth pointing out that the recently set-up national AI task force is comprised of mostly Hindu men – only two women are on the task force, and no Muslims or Christians. A recent article in the New York Times talked about AI’s ‘white guy problem’; will India suffer from a ‘Hindu male bias’?) Yet, improving the quality, or diversity, of data sets may not be able to solve the problem. The processes of machine learning and reasoning involve a quagmire of mathematical functions, variables and permutations, the logic of which is not readily traceable or predictable. The dazzle of AI-enabled efficiency gains must not blind us to the fact that while AI systems are being integrated into key socio-economic systems, their accuracy and logic of reasoning have not been fully understood or studied.
The other big challenge stems from the distribution of AI-led technology gains. Even if estimates of AI's contribution to GDP are correct, the adoption of these technologies is likely to be in niches within the organized sector. These industries are likely to be capital- rather than labour-intensive, and thus unlikely to contribute to large-scale job creation. At the same time, AI applications can most readily replace low- to medium-skilled jobs within the organized sector. This is already being witnessed in the outsourcing sector – where basic call and chat tasks are now automated. Re-skilling will be important, but it is unlikely that those who lose their jobs will also be those who are being re-skilled – the long arc of technological change and societal adaptation is longer than that of people’s lives. The contractualization of work, already on the rise, is likely to further increase as large industries prefer to have a flexible workforce to adapt to technological change. A shift from formal employment to contractual work can imply a loss of access to formal social protection mechanisms, increasing the precariousness of work for workers. The adoption of AI technologies is also unlikely in the short- to medium-term in the unorganized sector, which engages more than 80 per cent of India’s labour force. The cost of developing and deploying AI applications, particularly in relation to the cost of labour, will inhibit adoption. Moreover, most enterprises within the unorganized sector still have limited access to basic, older technologies – two-thirds of the workforce are employed in enterprises without electricity. Eco-system upgrades will be important but incremental. Given the high costs of developing AI-based applications, most start-ups are unlikely to be working towards creating bottom-of-the-pyramid solutions.
Access to AI-led technology gains is thus likely to be heavily differentiated – a few high-growth industries can be expected, but these will not necessarily translate into gains for labour. Studies show that the labour share of national income, especially for routine labour, has been declining steadily across developing countries. We should be clear that new technological applications themselves are not going to transform or disrupt this trend – rather, without adequate policy steering, these trends will be exacerbated. Policy debates about AI applications in India need to take these two issues seriously. AI applications will not be a panacea for addressing ‘India’s grand challenges’. Data bias and unequal access to technology gains will entrench existing socio-economic fissures, even making them technologically binding. In addition to developing AI applications and creating a skilled workforce, the government needs to prioritize research that examines the complex social, ethical and governance challenges associated with the spread of AI-driven technologies. Blind technological optimism might entrench rather than alleviate the grand Indian challenge of inequity and growth. This article was originally published in the Indian Express. Full Article
artificial Rage Against the Algorithm: the Risks of Overestimating Military Artificial Intelligence By www.chathamhouse.org Published On :: Thu, 27 Aug 2020 14:13:18 +0000 27 August 2020 Yasmin Afina Research Assistant, International Security Programme @afinayasmin LinkedIn Increasing dependency on artificial intelligence (AI) for military technologies is inevitable, and efforts to develop these technologies for use on the battlefield are proceeding apace; however, developers and end-users must ensure the reliability of these technologies, writes Yasmin Afina. F-16 SimuSphere HD flight simulator at Link Simulation in Arlington, Texas, US. Photo: Getty Images. AI holds the potential to replace humans for tactical tasks in military operations beyond current applications such as navigation assistance. For example, in the US, the Defense Advanced Research Projects Agency (DARPA) recently held the final round of its AlphaDogfight Trials, where an algorithm controlling a simulated F-16 fighter was pitted against an Air Force pilot in virtual aerial combat. The algorithm won by 5-0. So what does this mean for the future of military operations? The agency’s deputy director remarked that these tools are now ‘ready for weapons systems designers to be in the toolbox’. At first glance, the dogfight shows that AI-enabled air combat would provide tremendous military advantage, including the lack of survival instincts inherent to humans, the ability to consistently operate with high acceleration stress beyond the limitations of the human body and high targeting precision. The outcome of these trials, however, does not mean that this technology is ready for deployment in the battlefield.
In fact, an array of considerations must be taken into account prior to their deployment and use – namely the ability to adapt in real-life combat situations, physical limitations and legal compliance. Testing environment versus real-life applications First, as with all technologies, the performance of an algorithm in its testing environment is bound to differ from real-life applications, as in the case of cluster munitions. For instance, Google Health developed an algorithm to help with diabetic retinopathy screening. While the algorithm’s accuracy rate in the lab was over 90 per cent, it did not perform well out of the lab: because the algorithm had been trained on high-quality scans, it rejected more than a fifth of the real-life scans, which were deemed to be below the required quality threshold. As a result, the process ended up being as time-consuming and costly as – if not more so than – traditional screening. Similarly, virtual environments akin to the AlphaDogfight Trials do not reflect the extent of risks, hazards and unpredictability of real-life combat. In the dogfight exercise, for example, the algorithm had full situational awareness and was repeatedly trained to the rules, parameters and limitations of its operating environment. But in a real-life, dynamic battlefield, the list of variables is long and will inevitably fluctuate: visibility may be poor, extreme weather could affect operations and the performance of aircraft, and the behaviour and actions of adversaries will be unpredictable. Every single eventuality would need to be programmed in line with the commander’s intent in an ever-changing situation, or it would drastically affect the performance of algorithms, including in target identification and firing precision. Hardware limitations Another consideration relates to the limitations of the hardware that AI systems depend on.
Algorithms depend on hardware to operate equipment such as sensors and computer systems – each of which is constrained by physical limitations. These can be targeted by an adversary, for example through electronic interference, to disrupt the functioning of the computer systems from which the algorithms operate. Hardware may also be affected involuntarily. For instance, a ‘pilotless’ aircraft controlled by an algorithm can indeed undergo higher accelerations, and thus higher g-force, than the human body can endure. However, the aircraft itself is also subject to physical limitations, such as acceleration limits beyond which parts of the aircraft, such as its sensors, may be severely damaged, which in turn affects the algorithm’s performance and, ultimately, mission success. It is critical that these physical limitations are factored into the equation when deploying these machines, especially when they rely so heavily on sensors. Legal compliance Another major, and perhaps the greatest, consideration relates to the ability to rely on machines for legal compliance. The DARPA dogfight exclusively focused on the algorithm’s ability to successfully control the aircraft and counter the adversary; however, nothing indicates its ability to ensure that strikes remain within the boundaries of the law. In an armed conflict, the deployment and use of such systems in the battlefield are not exempt from international humanitarian law (IHL), most notably its customary principles of distinction, proportionality and precautions in attack. A system would need to be able to differentiate between civilians, combatants and military objectives, calculate whether its attacks will be proportionate against the set military objective, produce live collateral damage estimates and take the necessary precautions to ensure the attacks remain within the boundaries of the law – including the ability to abort if necessary.
This would also require the machine to have the ability to stay within the rules of engagement for that particular operation. It is therefore critical to incorporate IHL considerations from the conception and throughout the development and testing phases of algorithms, to ensure the machines are sufficiently reliable for legal compliance purposes. It is also important that developers address the 'black box' issue, whereby the algorithm’s calculations are so complex that it is impossible for humans to understand how it arrived at its results. Addressing the algorithm’s opacity is not only necessary to improve its performance over time; it is also key for accountability and investigation purposes in cases of incidents and suspected violations of applicable laws. Reliability, testing and experimentation Algorithms are becoming increasingly powerful and there is no doubt that they will confer tremendous advantages on the military. Over-hype, however, must not come at the expense of the machine’s reliability, both on the technical front and for legal compliance purposes. The testing and experimentation phases are key, as it is during these that developers have the ability to fine-tune the algorithms. Developers must therefore be held accountable for ensuring the reliability of machines by incorporating considerations pertaining to performance and accuracy, hardware limitations as well as legal compliance. This could help prevent real-life incidents that result from overestimating the capabilities of AI in military operations. Full Article
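The lab-versus-field gap in the retinopathy example above comes down to a simple mechanism, sketched below with made-up numbers (the threshold value and quality scores are assumptions for illustration, not Google Health's actual figures): a quality gate that never fires on curated lab data can reject a large share of field data, and that rejection rate is invisible until deployment.

```python
# Toy sketch with hypothetical numbers: a model trained exclusively on
# high-quality inputs applies a quality gate at inference time, so
# lower-quality real-world inputs are rejected before scoring.

QUALITY_THRESHOLD = 0.8  # assumed minimum quality score the model will accept

def screen(scans, threshold=QUALITY_THRESHOLD):
    """Split scans into those the model will score and those it rejects."""
    accepted = [s for s in scans if s >= threshold]
    rejected = [s for s in scans if s < threshold]
    return accepted, rejected

# Lab conditions: every scan clears the bar.
lab_scans = [0.90, 0.92, 0.88, 0.95, 0.91]

# Field conditions: quality varies, and a sizeable share falls short.
field_scans = [0.90, 0.85, 0.70, 0.95, 0.60, 0.88, 0.75, 0.90, 0.82, 0.50]

_, lab_rejected = screen(lab_scans)
_, field_rejected = screen(field_scans)

print(f"lab: {len(lab_rejected)} rejected; "
      f"field: {len(field_rejected)} of {len(field_scans)} rejected")
```

The point is not the gate itself but where it is measured: every lab metric looks fine, and the failure only surfaces once the input distribution changes in the field.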
artificial MIRD Pamphlet No. 31: MIRDcell V4--Artificial Intelligence Tools to Formulate Optimized Radiopharmaceutical Cocktails for Therapy By jnm.snmjournals.org Published On :: 2024-10-24T11:58:49-07:00 Visual Abstract Full Article
artificial Artificial Intelligence Prediction and Counterterrorism By www.chathamhouse.org Published On :: Tue, 06 Aug 2019 10:46:13 +0000 Artificial Intelligence Prediction and Counterterrorism Research paper 6 August 2019 The use of AI in counterterrorism is not inherently wrong, and this paper suggests some necessary conditions for legitimate use of AI as part of a predictive approach to counterterrorism on the part of liberal democratic states. — Surveillance cameras manufactured by Hangzhou Hikvision Digital Technology Co. at a testing station near the company’s headquarters in Hangzhou, China. Photo: Getty Images Summary The use of predictive artificial intelligence (AI) in countering terrorism is often assumed to have a deleterious effect on human rights, generating spectres of ‘pre-crime’ punishment and surveillance states. However, the well-regulated use of new capabilities may enhance states’ abilities to protect citizens’ right to life, while at the same time improving adherence to principles intended to protect other human rights, such as transparency, proportionality and freedom from unfair discrimination. The same regulatory framework could also contribute to safeguarding against broader misuse of related technologies. Most states focus on preventing terrorist attacks, rather than reacting to them. As such, prediction is already central to effective counterterrorism. AI allows higher volumes of data to be analysed, and may perceive patterns in those data that would, for reasons of both volume and dimensionality, otherwise be beyond the capacity of human interpretation. The impact of this is that traditional methods of investigation that work outwards from known suspects may be supplemented by methods that analyse the activity of a broad section of an entire population to identify previously unknown threats. Developments in AI have amplified the ability to conduct surveillance without being constrained by resources.
Facial recognition technology, for instance, may enable the complete automation of surveillance using CCTV in public places in the near future. The current way predictive AI capabilities are used presents a number of interrelated problems from both a human rights and a practical perspective. Where limitations and regulations do exist, they may have the effect of curtailing the utility of approaches that apply AI, while not necessarily safeguarding human rights to an adequate extent. The infringement of privacy associated with the automated analysis of certain types of public data is not wrong in principle, but the analysis must be conducted within a robust legal and policy framework that places sensible limitations on interventions based on its results. In future, broader access to less intrusive aspects of public data, direct regulation of how those data are used – including oversight of activities by private-sector actors – and the imposition of technical as well as regulatory safeguards may improve both operational performance and compliance with human rights legislation. It is important that any such measures proceed in a manner that is sensitive to the impact on other rights such as freedom of expression, and freedom of association and assembly. 2019-08-07-AICounterterrorism (PDF) Full Article
artificial Artificial pancreases for type 1 diabetes: Better access is “watershed moment”—but delivery is key By www.bmj.com Published On :: Tuesday, January 23, 2024 - 10:06 Full Article
artificial Who gains from artificial intelligence? By www.chathamhouse.org Published On :: Mon, 06 Feb 2023 14:12:13 +0000 Who gains from artificial intelligence? 27 February 2023 — 5:30PM TO 6:30PM 6 February 2023 Chatham House and Online What implications will AI have on fundamental rights and how can societies benefit from this technology revolution? In recent months, the latest developments in artificial intelligence (AI) have attracted much media attention. These technologies hold a wealth of potential for a wide range of applications; for example, the recent release of OpenAI’s ChatGPT, a text generation model, has shed light on the opportunities these applications hold, including advancing scientific research and discovery, enhancing search engines and improving key commercial applications. Yet, instead of generating an evidence-based public debate, this increased interest has also led to discussions of AI technologies that are often alarmist in nature and, in many cases, misleading. They carry the risk of shifting public and policymakers’ attention away from critical societal and legal risks as well as concrete solutions. This discussion, held in partnership with Microsoft and Sidley Austin LLP, provides an expert-led overview of where the technology stands in 2023. Panellists also reflect on the implications of implementing AI on fundamental rights, the enforcement of current and upcoming legislation and multi-stakeholder pathways to address relevant issues in the AI space. More specifically, the panel explores: What is the current state of the art in the AI field? What are the opportunities and challenges presented by generative AI and other innovations? What are some of the key, and potentially most disruptive, AI applications to monitor in the near- and mid-term? Which applications would benefit from greater public policy/governance discussions?
How can current and future policy frameworks ensure the protection of fundamental rights in this new era of AI? What is the role of multi-stakeholder collaboration? What are the pathways to achieving inclusive and responsible governance of AI? How can countries around the world work together to develop frameworks for responsible AI that upholds democratic values and advance AI collaboration across borders? As with all member events, questions from the audience drive the conversation. Read the transcript. Full Article