artificial intelligence

[ M.3381 (01/22) ] - Requirements for energy saving management of 5G radio access network (RAN) systems with artificial intelligence (AI)





artificial intelligence

[ M.3382 (06/22) ] - Requirements for work order processing in telecom management with artificial intelligence





artificial intelligence

[ M.3383 (04/23) ] - Requirements for log analysis in telecom management with artificial intelligence





artificial intelligence

[ M.3384 (04/23) ] - Intelligence levels of artificial intelligence enhanced telecom operation and management





artificial intelligence

[ M.3385 (04/23) ] - Intelligence levels evaluation framework of artificial intelligence enhanced telecom operation and management





artificial intelligence

[ Y.3178 (07/21) ] - Functional framework of artificial intelligence-based network service provisioning in future networks including IMT-2020





artificial intelligence

Issue No.1 - The impact of Artificial Intelligence on communication networks and services





artificial intelligence

XSTR-SEC-AI - Guidelines for security management of using artificial intelligence technology





artificial intelligence

U4SSC - Guiding principles for artificial intelligence in cities





artificial intelligence

YSTP.AIoT - Challenges of and guidelines to standardization on artificial intelligence of things





artificial intelligence

[ F.Sup4 (04/21) ] - Overview of convergence of artificial intelligence and blockchain





artificial intelligence

[ F.749.13 (06/21) ] - Framework and requirements for civilian unmanned aerial vehicle flight control using artificial intelligence





artificial intelligence

[ F.749.4 (06/21) ] - Use cases and requirements for multimedia communication enabled vehicle systems using artificial intelligence





artificial intelligence

[ F.742.1 (12/22) ] - Requirements for smart class based on artificial intelligence





artificial intelligence

[ F.748.17 (12/22) ] - Technical specification for artificial intelligence cloud platform - Artificial intelligence model development





artificial intelligence

[ F.747.12 (12/22) ] - Requirements for artificial intelligence based machine vision system in smart logistics warehouse





artificial intelligence

[ L.1305 (11/19) ] - Data centre infrastructure management system based on big data and artificial intelligence technology





artificial intelligence

QuantumPay (QTP) represents an ambitious technological initiative that blends blockchain technology and artificial intelligence (AI) to create a secure, efficient, and transparent digital transaction - StreetInsider.com





artificial intelligence

The possibilities of artificial intelligence in the realm of healthcare

By Brian Phillips, freelance writer.

As data in healthcare continues to grow in complexity, the integration of artificial intelligence (AI) is becoming more prevalent. Across payers, care providers, and life sciences firms, various forms of AI are already in use.




artificial intelligence

Federal Executive Forum Artificial Intelligence & Machine Learning Strategies in Government Progress and Best Practices 2024

How are AI/ML strategies evolving to meet tomorrow’s mission?





artificial intelligence

Federal Executive Forum Chief Data Officers Profiles in Excellence in Government 2024: Data Analytics & Artificial Intelligence Trends

What strategies and technology are driving data strategy in government?





artificial intelligence

Federal Executive Forum Artificial Intelligence Strategies in Government Progress and Best Practices 2024

How are agencies refining their AI strategy?





artificial intelligence

Episode 18: Marketing artificial intelligence (AI) solutions to federal agencies

In this episode of Market Chat!, a top artificial intelligence (AI) official at the General Services Administration discusses how the federal AI marketplace is maturing, and a panel of three senior marketing officials discuss how their companies are tuning their AI marketing messages to address evolving government needs and challenges.





artificial intelligence

Delivering on artificial intelligence’s potential

As agencies move beyond artificial intelligence and machine learning pilots, we find out what it takes to be successful. We talk with experts from the Army, CISA, DoD, GSA, NGA, NRO, NTIS, OSTP and from ABBYY, DataRobot, H2O, MarkLogic and Red Hat.





artificial intelligence

Securiti launches Gencore AI solution to build secure artificial intelligence systems

Gencore AI provides the same capability for the safe construction of AI tools that its core platform has provided from its inception.




artificial intelligence

Artificial Intelligence and Real Writers

Generative artificial intelligence is one of the issues I've been working on with the National Writers Union and other allies.

Travel writers and others may be interested in the presentation on Artificial Intelligence and Real Writers that I gave to the Bay Area Travel Writers at our virtual meeting in September:

Additional resources mentioned in my presentation:




artificial intelligence

Risks of artificial intelligence

Location: Engineering Library, TA347.A78M85 2016




artificial intelligence

Former Caltech and Google scientists win physics Nobel for pioneering artificial intelligence

John Hopfield dreamed up the modern neural network while at Caltech. Geoffrey Hinton built on it, creating an AI firm that Google bought for $44 million.




artificial intelligence

Issues of the Environment: U-M works toward sustainable implementation of new artificial intelligence tool

The University of Michigan is forging ahead and working towards being a leader in generative artificial intelligence with its U-M-GPT program. As it does, there are environmental concerns to be addressed. The initiative is part of Michigan’s broader effort to integrate AI into its academic and administrative infrastructure, enhancing learning, teaching, and research. But AI consumes a great deal of energy. WEMU's David Fair spoke with the Vice President for Information Technology and Chief Information Officer at U-M, Dr. Ravi Pendse, about how U-M is dealing with the environmental ramifications of AI.




artificial intelligence

Artificial Intelligence, Scientific Discovery, and Product Innovation

Aidan Toner-Rodgers, MIT, November 6, 2024

This paper studies the impact of artificial intelligence on innovation, exploiting the randomized introduction of a new materials discovery technology to 1,018 scientists in the R&D lab of a large U.S. firm. AI-assisted researchers discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation. These compounds possess more novel chemical structures and lead to more radical inventions. However, the technology has strikingly disparate effects across the productivity distribution: while the bottom third of scientists see little benefit, the output of top researchers nearly doubles. Investigating the mechanisms behind these results, I show that AI automates 57% of “idea-generation” tasks, reallocating researchers to the new task of evaluating model-produced candidate materials. Top scientists leverage their domain knowledge to prioritize promising AI suggestions, while others waste significant resources testing false positives. Together, these findings demonstrate the potential of AI-augmented research and highlight the complementarity between algorithms and expertise in the innovative process. Survey evidence reveals that these gains come at a cost, however, as 82% of scientists report reduced satisfaction with their work due to decreased creativity and skill underutilization.




artificial intelligence

I Am Not Real: Artificial Intelligence in the Needlework World

The topic of AI in the needlework world has been on my radar for well over a year. But I’ve …




artificial intelligence

Louisiana schools use Artificial Intelligence to help young children learn to read

In Louisiana, more than 100,000 students are using an AI tutor that is helping to raise reading scores.




artificial intelligence

Undercurrents: Episode 10 - Artificial Intelligence in International Affairs, and Women Drivers in Saudi Arabia




artificial intelligence

Artificial Intelligence and the Public: Prospects, Perceptions and Implications




artificial intelligence

Undercurrents: Summer Special - Allison Gardner on Artificial Intelligence




artificial intelligence

Artificial Intelligence Apps Risk Entrenching India’s Socio-economic Inequities

Expert comment, 14 March 2018

Artificial intelligence applications will not be a panacea for addressing India’s grand challenges. Data bias and unequal access to technology gains will entrench existing socio-economic fissures.

Participants at an AI event in Bangalore. Photo: Getty Images.

Artificial intelligence (AI) is high on the Indian government’s agenda. Some days ago, Prime Minister Narendra Modi inaugurated the Wadhwani Institute for Artificial Intelligence, reportedly India’s first research institute focused on AI solutions for social good. In the same week, Niti Aayog CEO Amitabh Kant argued that AI could potentially add $957 billion to the economy and outlined ways in which AI could be a ‘game changer’.

During his budget speech, Finance Minister Arun Jaitley announced that Niti Aayog would spearhead a national programme on AI; with the near doubling of the Digital India budget, the IT ministry also announced the setting up of four committees for AI-related research. An industrial policy for AI is also in the pipeline, expected to provide incentives to businesses for creating a globally competitive Indian AI industry.

Narratives on the emerging digital economy often suffer from technological determinism — assuming that the march of technological transformation has an inner logic, independent of social choice and capable of automatically delivering positive social change. However, technological trajectories can and must be steered by social choice and aligned with societal objectives. Modi’s address hit all the right notes, as he argued that the ‘road ahead for AI depends on and will be driven by human intentions’. Emphasising the need to direct AI technologies towards solutions for the poor, he called upon students and teachers to identify ‘the grand challenges facing India’ – to ‘Make AI in India and for India’.

To do so will undoubtedly require substantial investments in R&D, digital infrastructure, education and re-skilling. But two other critical issues must be simultaneously addressed: data bias and access to technology gains.

While computers have been mimicking human intelligence for some decades now, a massive increase in computational power and the quantity of available data are enabling a process of ‘machine learning.’ Instead of coding software with specific instructions to accomplish a set task, machine learning involves training an algorithm on large quantities of data to enable it to self-learn; refining and improving its results through multiple iterations of the same task. The quality of data sets used to train machines is thus a critical concern in building AI applications.
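The contrast the paragraph draws – explicit instructions versus an algorithm that refines itself over repeated passes through labelled data – can be illustrated with a toy sketch. This example is mine, not the article's: a perceptron-style learner stands in for 'machine learning' in general, and the decision rule it recovers ('values above 5 are positive') appears nowhere in the code, only implicitly in the training data.

```python
# Toy illustration: instead of hand-coding the decision rule,
# fit it from labelled examples over multiple iterations.

def train_threshold(examples, labels, epochs=100, lr=0.1):
    """Learn w, b so that sign(w*x + b) matches the labels (+1 / -1)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):                  # multiple iterations of the same task
        for x, y in zip(examples, labels):
            pred = 1 if w * x + b > 0 else -1
            if pred != y:                    # refine only on mistakes
                w += lr * y * x
                b += lr * y
    return w, b

# The training data implicitly encode the rule "x above 5 is positive".
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [-1, -1, -1, -1, 1, 1, 1, 1]
w, b = train_threshold(xs, ys)
print(w, b)  # learned parameters separating the two groups
```

Because the learner only ever sees the data, whatever the data encode – including, as the article argues next, social bias in unrepresentative data sets – is exactly what the model learns.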

Much recent research shows that applications based on machine learning reflect existing social biases and prejudice. Such bias can occur if the data set the algorithm is trained on is unrepresentative of the reality it seeks to represent. If, for example, a system is trained on photos of people who are predominantly white, it will have a harder time recognizing non-white people. This is what led a recent Google application to tag black people as gorillas.

Alternatively, bias can also occur if the data set itself reflects existing discriminatory or exclusionary practices. A recent study by ProPublica found for example that software that was being used to assess the risk of recidivism in criminals in the United States was twice as likely to mistakenly flag black defendants as being at higher risk of committing future crimes.

The impact of such data bias can be seriously damaging in India, particularly at a time of growing social fragmentation. It can contribute to the entrenchment of social bias and discriminatory practices, while rendering both invisible and pervasive the processes through which discrimination occurs. Consider the gender digital divide: women are 34 per cent less likely to own a mobile phone than men – only 14 per cent of women in rural India own one – and only 30 per cent of India’s internet users are women.

Women’s participation in the labour force, currently at around 27 per cent, is also declining, and is one of the lowest in South Asia. Data sets used for machine learning are thus likely to have a marked gender bias. The same observations are likely to hold true for other marginalized groups as well.

According to a 2014 report, Muslims, Dalits and tribals make up 53 per cent of all prisoners in India; National Crime Records Bureau data from 2016 shows that in some states the percentage of Muslims in the incarcerated population was almost three times the percentage of Muslims in the overall population. If AI applications for law and order are built on these data, it is not unlikely that they will be prejudiced against these groups.

(It is worth pointing out that the recently set-up national AI task force is composed mostly of Hindu men – only two women are on the task force, and no Muslims or Christians. A recent article in the New York Times talked about AI’s ‘white guy problem’; will India suffer from a ‘Hindu male bias’?)

Yet, improving the quality, or diversity, of data sets may not be able to solve the problem. The processes of machine learning and reasoning involve a quagmire of mathematical functions, variables and permutations, the logic of which are not readily traceable or predictable. The dazzle of AI-enabled efficiency gains must not blind us to the fact that while AI systems are being integrated into key socio-economic systems, their accuracy and logic of reasoning have not been fully understood or studied.

The other big challenge stems from the distribution of AI-led technology gains. Even if estimates of AI contribution to GDP are correct, the adoption of these technologies is likely to be in niches within the organized sector. These industries are likely to be capital- rather than labour-intensive, and thus unlikely to contribute to large-scale job creation.

At the same time, AI applications can most readily replace low- to medium-skilled jobs within the organized sector. This is already being witnessed in the outsourcing sector – where basic call and chat tasks are now automated. Re-skilling will be important, but it is unlikely that those who lose their jobs will also be those who are being re-skilled – the long arch of technological change and societal adaptation is longer than that of people’s lives. The contractualization of work, already on the rise, is likely to further increase as large industries prefer to have a flexible workforce to adapt to technological change. A shift from formal employment to contractual work can imply a loss of access to formal social protection mechanisms, increasing the precariousness of work for workers.

The adoption of AI technologies is also unlikely in the short- to medium-term in the unorganized sector, which engages more than 80 per cent of India’s labour force. The cost of developing and deploying AI applications, particularly in relation to the cost of labour, will inhibit adoption. Moreover, most enterprises within the unorganized sector still have limited access to basic, older technologies – two-thirds of the workforce are employed in enterprises without electricity. Ecosystem upgrades will be important but incremental. Given the high costs of developing AI-based applications, most start-ups are unlikely to be working towards creating bottom-of-the-pyramid solutions.

Access to AI-led technology gains is thus likely to be heavily differentiated – a few high-growth industries can be expected, but these will not necessarily result in the welfare of labour. Studies show that labour share of national income, especially routine labour, has been declining steadily across developing countries.

We should be clear that new technological applications themselves are not going to transform or disrupt this trend – rather, without adequate policy steering, these trends will be exacerbated.

Policy debates about AI applications in India need to take these two issues seriously. AI applications will not be a panacea for addressing ‘India’s grand challenges’. Data bias and unequal access to technology gains will entrench existing socio-economic fissures, even making them technologically binding.

In addition to developing AI applications and creating a skilled workforce, the government needs to prioritize research that examines the complex social, ethical and governance challenges associated with the spread of AI-driven technologies. Blind technological optimism might entrench rather than alleviate the grand Indian challenge of inequity and growth.

This article was originally published in the Indian Express.




artificial intelligence

Rage Against the Algorithm: the Risks of Overestimating Military Artificial Intelligence

27 August 2020

Yasmin Afina

Research Assistant, International Security Programme
Increasing dependency on artificial intelligence (AI) for military technologies is inevitable, and efforts to develop these technologies for use in the battlefield are proceeding apace; however, developers and end-users must ensure the reliability of these technologies, writes Yasmin Afina.


F-16 SimuSphere HD flight simulator at Link Simulation in Arlington, Texas, US. Photo: Getty Images.

AI holds the potential to replace humans for tactical tasks in military operations beyond current applications such as navigation assistance. For example, in the US, the Defense Advanced Research Projects Agency (DARPA) recently held the final round of its AlphaDogfight Trials where an algorithm controlling a simulated F-16 fighter was pitted against an Air Force pilot in virtual aerial combat. The algorithm won by 5-0. So what does this mean for the future of military operations?

The agency’s deputy director remarked that these tools are now ‘ready for weapons systems designers to be in the toolbox’. At first glance, the dogfight shows that AI-enabled air combat would provide tremendous military advantages, including freedom from the survival instincts inherent to humans, the ability to consistently operate under high acceleration stress beyond the limitations of the human body, and high targeting precision.

The outcome of these trials, however, does not mean that this technology is ready for deployment in the battlefield. In fact, an array of considerations must be taken into account prior to their deployment and use – namely the ability to adapt in real-life combat situations, physical limitations and legal compliance.

Testing environment versus real-life applications

First, as with all technologies, the performance of an algorithm in its testing environment is bound to differ from its performance in real-life applications, as has been seen with cluster munitions. For instance, Google Health developed an algorithm to help with diabetic retinopathy screening. While the algorithm’s accuracy rate in the lab was over 90 per cent, it did not perform well out of the lab: because the algorithm had been trained on high-quality scans, it rejected more than a fifth of the real-life scans, which were deemed to be below the quality threshold required. As a result, the process ended up being as time-consuming and costly as traditional screening, if not more so.

Similarly, virtual environments akin to the AlphaDogfight Trials do not reflect the extent of risks, hazards and unpredictability of real-life combat. In the dogfight exercise, for example, the algorithm had full situational awareness and was repeatedly trained to the rules, parameters and limitations of its operating environment. But in a real-life, dynamic battlefield, the list of variables is long and will inevitably fluctuate: visibility may be poor, extreme weather could affect operations and the performance of aircraft, and the behaviour and actions of adversaries will be unpredictable.

Every single eventuality would need to be programmed in line with the commander’s intent in an ever-changing situation; otherwise, the performance of the algorithms – including in target identification and firing precision – would be drastically affected.

Hardware limitations

Another consideration relates to the limitations of the hardware that AI systems depend on. Algorithms depend on hardware to operate equipment such as sensors and computer systems – each of which are constrained by physical limitations. These can be targeted by an adversary, for example, through electronic interference to disrupt the functioning of the computer systems which the algorithms are operating from.

Hardware may also be affected involuntarily. For instance, a ‘pilotless’ aircraft controlled by an algorithm can indeed undergo higher accelerations, and thus higher g-force, than the human body can endure. However, the aircraft itself is also subject to physical limitations, such as acceleration limits beyond which parts of the aircraft, such as its sensors, may be severely damaged – which in turn affects the algorithm’s performance and, ultimately, mission success. It is critical that these physical limitations are factored into the equation when deploying these machines, especially when they rely so heavily on sensors.

Legal compliance

Another major, and perhaps the greatest, consideration relates to the ability to rely on machines for legal compliance. The DARPA dogfight focused exclusively on the algorithm’s ability to successfully control the aircraft and counter the adversary; nothing, however, indicates its ability to ensure that strikes remain within the boundaries of the law.

In an armed conflict, the deployment and use of such systems in the battlefield are not exempt from international humanitarian law (IHL), most notably its customary principles of distinction, proportionality and precautions in attack. Such a system would need to be able to differentiate between civilians, combatants and military objectives; calculate whether its attacks will be proportionate to the set military objective; produce live collateral damage estimates; and take the necessary precautions to ensure the attacks remain within the boundaries of the law – including the ability to abort if necessary. This would also require the machine to have the ability to stay within the rules of engagement for that particular operation.

It is therefore critical to incorporate IHL considerations from the conception and throughout the development and testing phases of algorithms to ensure the machines are sufficiently reliable for legal compliance purposes.

It is also important that developers address the 'black box' issue whereby the algorithm’s calculations are so complex that it is impossible for humans to understand how it came to its results. It is not only necessary to address the algorithm’s opacity to improve the algorithm’s performance over time, it is also key for accountability and investigation purposes in cases of incidents and suspected violations of applicable laws.

Reliability, testing and experimentation

Algorithms are becoming increasingly powerful and there is no doubt that they will confer tremendous advantages on the military. Over-hype, however, must be avoided: it comes at the expense of the machine’s reliability on the technical front as well as for legal compliance purposes.

The testing and experimentation phases are key: it is during these that developers have the ability to fine-tune the algorithms. Developers must, therefore, be held accountable for ensuring the reliability of machines by incorporating considerations pertaining to performance and accuracy, hardware limitations, and legal compliance. This could help prevent incidents in real life that result from overestimating the capabilities of AI in military operations.




artificial intelligence

MIRD Pamphlet No. 31: MIRDcell V4--Artificial Intelligence Tools to Formulate Optimized Radiopharmaceutical Cocktails for Therapy

Visual Abstract




artificial intelligence

Artificial Intelligence Prediction and Counterterrorism

Research paper, 6 August 2019

The use of AI in counterterrorism is not inherently wrong, and this paper suggests some necessary conditions for legitimate use of AI as part of a predictive approach to counterterrorism on the part of liberal democratic states.

Surveillance cameras manufactured by Hangzhou Hikvision Digital Technology Co. at a testing station near the company’s headquarters in Hangzhou, China. Photo: Getty Images

Summary

  • The use of predictive artificial intelligence (AI) in countering terrorism is often assumed to have a deleterious effect on human rights, generating spectres of ‘pre-crime’ punishment and surveillance states. However, the well-regulated use of new capabilities may enhance states’ abilities to protect citizens’ right to life, while at the same time improving adherence to principles intended to protect other human rights, such as transparency, proportionality and freedom from unfair discrimination. The same regulatory framework could also contribute to safeguarding against broader misuse of related technologies.
  • Most states focus on preventing terrorist attacks, rather than reacting to them. As such, prediction is already central to effective counterterrorism. AI allows higher volumes of data to be analysed, and may perceive patterns in those data that would, for reasons of both volume and dimensionality, otherwise be beyond the capacity of human interpretation. The impact of this is that traditional methods of investigation that work outwards from known suspects may be supplemented by methods that analyse the activity of a broad section of an entire population to identify previously unknown threats.
  • Developments in AI have amplified the ability to conduct surveillance without being constrained by resources. Facial recognition technology, for instance, may enable the complete automation of surveillance using CCTV in public places in the near future.
  • The current way predictive AI capabilities are used presents a number of interrelated problems from both a human rights and a practical perspective. Where limitations and regulations do exist, they may have the effect of curtailing the utility of approaches that apply AI, while not necessarily safeguarding human rights to an adequate extent.
  • The infringement of privacy associated with the automated analysis of certain types of public data is not wrong in principle, but the analysis must be conducted within a robust legal and policy framework that places sensible limitations on interventions based on its results.
  • In future, broader access to less intrusive aspects of public data, direct regulation of how those data are used – including oversight of activities by private-sector actors – and the imposition of technical as well as regulatory safeguards may improve both operational performance and compliance with human rights legislation. It is important that any such measures proceed in a manner that is sensitive to the impact on other rights such as freedom of expression, and freedom of association and assembly.




artificial intelligence

Who gains from artificial intelligence?

27 February 2023, 5:30PM to 6:30PM, Chatham House and Online

What implications will AI have on fundamental rights and how can societies benefit from this technology revolution?

In recent months, the latest developments in artificial intelligence (AI) have attracted much media attention. These technologies hold a wealth of potential for a wide range of applications. For example, the recent release of OpenAI’s ChatGPT, a text-generation model, has shed light on the opportunities these applications hold, including advancing scientific research and discovery, enhancing search engines and improving key commercial applications.

Yet, instead of generating an evidence-based public debate, this increased interest has also led to discussions of AI technologies that are often alarmist in nature and, in many cases, misleading. They carry the risk of shifting public and policymakers’ attention away from critical societal and legal risks as well as concrete solutions.

This discussion, held in partnership with Microsoft and Sidley Austin LLP, provides an expert-led overview of where the technology stands in 2023. Panellists also reflect on the implications of implementing AI on fundamental rights, the enforcement of current and upcoming legislation and multi-stakeholder pathways to address relevant issues in the AI space.

More specifically, the panel explores:

  • What is the current state of the art in the AI field?
  • What are the opportunities and challenges presented by generative AI and other innovations?
  • What are some of the key, and potentially most disruptive, AI applications to monitor in the near- and mid-term? 
  • Which applications would benefit from greater public policy/governance discussions?
  • How can current and future policy frameworks ensure the protection of fundamental rights in this new era of AI?
  • What is the role of multi-stakeholder collaboration?
  • What are the pathways to achieving inclusive and responsible governance of AI?
  • How can countries around the world work together to develop frameworks for responsible AI that uphold democratic values and advance AI collaboration across borders?

As with all member events, questions from the audience drive the conversation.

Read the transcript.




artificial intelligence

Artificial Intelligence in K-12: The Right Mix for Learning or a Bad Idea?

The rapid shift to tech-driven, remote learning this spring has infused more technology into K-12 education, but AI tools still remain on the fringe.




artificial intelligence

How Artificial Intelligence Is Making 2,000-Year-Old Scrolls Readable Again

When Mount Vesuvius erupted in 79 C.E., it buried the ancient cities of Pompeii and Herculaneum under tons of ash. Millennia later, in the mid-18th century, archaeologists began to unearth Herculaneum, including its famed library, but the scrolls they found were too fragile to be unrolled and read; their contents were thought to be lost forever. Only now, thanks to the advent of artificial intelligence and machine learning, have scholars of the ancient world partnered with computer programmers to unlock the contents of these priceless documents.

In this episode of “There’s More to That,” science journalist and Smithsonian contributor Jo Marchant tells us about the yearslong campaign to read these scrolls. And Youssef Nader, one of the three winners of last year’s “Vesuvius Challenge” to make these carbonized scrolls readable, tells us how he and his teammates achieved their historic breakthrough.

Read Smithsonian’s coverage of the Vesuvius Challenge and the Herculaneum scrolls here (https://www.smithsonianmag.com/smart-news/three-students-decipher-first-passages-2000-year-old-scroll-burned-vesuvius-eruption-180983738/) , here (https://www.smithsonianmag.com/history/buried-ash-vesuvius-scrolls-are-being-read-new-xray-technique-180969358/) , and here (https://www.smithsonianmag.com/history/archaeologoists-only-just-beginning-reveal-secrets-hidden-ancient-manuscripts-180967455/) . Find prior episodes of our show here (https://www.smithsonianmag.com/podcast/) .

There’s More to That is a production of Smithsonian magazine and PRX Productions. From the magazine, our team is Chris Klimek, Debra Rosenberg and Brian Wolly. From PRX, our team is Jessica Miller, Adriana Rosas Rivera, Genevieve Sponsler, Rye Dorsey, and Edwin Ochoa. The Executive Producer of PRX Productions is Jocelyn Gonzales. Fact-checking by Stephanie Abramson. Episode artwork by Emily Lankiewicz. Music by APM Music.




artificial intelligence

Scientists Who Developed the Building Blocks of Artificial Intelligence Win Nobel Prize in Physics

John Hopfield and Geoffrey Hinton shared the award for their work on artificial neural networks and machine learning




artificial intelligence

Contest explores artificial intelligence’s strengths, flaws for medical diagnoses

Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI) will host “Diagnose-a-thon,” a competition that aims to uncover the power and potential dangers of using generative AI for medical inquiries. The virtual event will take place Nov. 11-17 with top prizes of $1,000.  




artificial intelligence

News24 Business | Say hi to 'Sandton 2.0', as swanky suburb beefs up security with artificial intelligence

While some were writing Sandton's obituary during the height of the Covid-19 pandemic, when the work-from-home phenomenon became the norm, others - like its property owners and businesses - were planning its revival.




artificial intelligence

Can Artificial Intelligence Help Teachers Find the Right Lesson Plans?

The IBM Foundation has launched a website called Teacher Advisor with Watson, which uses artificial intelligence to find high-quality elementary math resources and lessons.




artificial intelligence

Spotlight on SiriusDecisions: Artificial Intelligence

Kerry Cunningham of SiriusDecisions talks about how AI can power the B2B revenue engine




artificial intelligence

State Releases Guidance on Generative Artificial Intelligence in Classrooms

The Delaware Department of Education has developed guidance for districts and charter schools on generative artificial intelligence (AI) in the classroom.




artificial intelligence

Artificial Intelligence: Accelerating Knowledge in the Digital Age!

In an era of abundant and constantly evolving information, the challenge is not just accessing knowledge but understanding and applying it effectively. AI is a transformative technology that is reshaping how we learn, work, and grow. In this blog, we’ll explore how AI accelerates knowledge acquisition and how it connects to the way we learn in our daily lives.

AI accelerates knowledge by personalizing learning experiences, providing instant access to information, and offering data-driven insights, empowering us to learn more efficiently and effectively. I won’t go into much detail, since definitions of AI and its capabilities are easy to find; instead, I want to share one inspiring fact: AI can analyze vast amounts of data in seconds, making sense of complex information and delivering actionable insights or concise answers almost instantly. That speed helps us understand technology better and perform our tasks faster.

The main reason AI is in focus is its ability to perform tasks faster than ever. We aim to enhance the performance of all our products, including the everyday household electronics we use. Can we similarly accelerate the learning process? I am committed to helping you do so, and one such method is concise, minute-long videos.

In today's fast-paced world, where attention spans are shorter than ever, the rise of social media platforms has made it easier for anyone to create and share short videos. This is where minute videos come in. These bite-sized clips offer a quick and engaging way to deliver information to the audience with a significant impact. Understanding the definitions of technical terms in VLSI Design can often be accomplished in just a minute.

Below are the definitions of the essential stages in the RTL2GDSII Flow. For further reference, these definitions are also accessible on YouTube.

  • What is RTL Coding in VLSI Design?
  • What is Digital Verification?
  • What Is Synthesis in VLSI Design?
  • What Is Logic Equivalence Checking in VLSI Design?
  • What Is DFT in VLSI Design?
  • What is Digital Implementation?
  • What is Power Planning?
  • What are DRC and LVS in Physical Verification?
  • What are On-Chip Variations?

Want to Learn More?

The Cadence RTL-to-GDSII Flow training is available in both "Blended" and "Live" formats. Please reach out to Cadence Training for further information.

And don't forget to obtain your Digital Badge after completing the training!

Related Blogs

Training Insights – Why Is RTL Translated into Gate-Level Netlist?

Did You Miss the RTL-to-GDSII Webinar? No Worries, the Recording Is Available!

It’s the Digital Era; Why Not Showcase Your Brand Through a Digital Badge!

Binge on Chip Design Concepts this Weekend!




artificial intelligence

China’s Push Into Artificial Intelligence―How Should the United States Respond?

3 May 2018

East-West Wire
News, Commentary, and Analysis

The East-West Wire is a news, commentary, and analysis service provided by the East-West Center in Honolulu. Any part or all of the Wire content may be used by media with attribution to the East-West Center or the person quoted. To receive East-West Center Wire media releases via email, subscribe here.

For links to all East-West Center media programs, fellowships and services, see www.eastwestcenter.org/journalists.
