MIT has developed a database on the dangers of AI

MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has launched the Artificial Intelligence Risk Repository, a database of over 700 documented AI risks. According to the CSAIL announcement, the database is the first of its kind and will be continuously updated so that it can be used as an active resource.

The project was initiated due to concerns that the global adoption of AI is outstripping the ability of people and organizations to understand the risks of its implementation. The data shows that the use of artificial intelligence in US industries grew from 3.7% to 5.45% – an increase of 47% – between September 2023 and February 2024. CSAIL researchers and MIT’s FutureTech Lab found that “even the most comprehensive framework overlooks about 30% of the risks identified across all frameworks examined.”

The fragmented risk literature can make it difficult for policymakers, risk assessors and others to gain a complete picture of the issues they face. “It is difficult to find specific risk studies in some specialized areas where AI is used, such as weapons and military decision-support systems,” said Taniel Yusef, a Cambridge researcher not associated with the project. “Without reference to these studies, it can be difficult to talk about the technical aspects of AI risks with non-experts. This repository helps us do that.”

Without such a database, some risks may go unnoticed and remain unaddressed, the release introducing the project explains. “Given that the literature on the risks of artificial intelligence is scattered in scientific journals, preprints, and industry reports, and is quite varied, I worry that decision-makers may inadvertently consult incomplete overviews, miss important concerns and develop collective blind spots,” said Dr. Peter Slattery from FutureTech Lab.

To address this issue, MIT researchers collaborated with counterparts from other institutions, including the University of Queensland, the Future of Life Institute, KU Leuven and Harmony Intelligence, to create the database. The repository aims to provide “an accessible overview of the AI risk landscape” and act as a universal reference point that can be used by everyone from researchers and developers to businesses and policymakers.

To create it, the researchers reviewed 43 existing risk classification frameworks, searching academic records and databases and consulting many experts. After extracting more than 700 risks from these 43 frameworks, the researchers categorized each risk by cause (when or why it occurs), domain and subdomain (such as “Misinformation” and “False or misleading information”).
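The categorization above amounts to a simple tabular data model. The sketch below shows how such entries might be represented and queried; the field and category names are assumptions drawn from the article's examples, not the repository's actual column names.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One hypothetical entry in a risk repository."""
    title: str
    cause: str      # when or why the risk occurs, e.g. "post-deployment"
    domain: str     # high-level category, e.g. "Misinformation"
    subdomain: str  # finer category, e.g. "False or misleading information"

# Illustrative entries, not taken from the actual repository.
entries = [
    RiskEntry("Hallucinated facts", "post-deployment",
              "Misinformation", "False or misleading information"),
    RiskEntry("Biased hiring model", "post-deployment",
              "Discrimination and toxicity", "Unfair discrimination"),
]

# The kind of query a risk assessor might run: filter by domain.
misinfo = [e for e in entries if e.domain == "Misinformation"]
print(len(misinfo))  # 1
```

Keeping cause, domain and subdomain as separate fields is what makes the cross-framework comparisons described later (e.g. what share of risks arise post-deployment) straightforward to compute.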

Risks range from discrimination and misrepresentation to fraud, targeted manipulation and unsafe use. “The most frequently cited areas of risk,” the release explains, “include AI system safety, failures and limitations (76% of documents), socioeconomic and environmental harms (73%), discrimination and toxicity (71%), privacy and security (68%), and malicious actors and misuse (68%).”

The researchers found that human-computer interaction and misinformation were the least addressed areas in the risk frameworks. 51% of the risks analyzed were attributed to AI systems, versus 34% attributed to humans, and 65% of the risks arose after an AI system was deployed rather than during its development.

Problems such as discrimination, privacy violations and lack of capability were the most discussed, appearing in more than 50% of the documents the researchers reviewed. Concerns about AI damaging our information ecosystems were mentioned far less often, in only 12% of papers. MIT hopes the repository will help policymakers better navigate and address the risks posed by AI, especially as so many AI governance initiatives rapidly emerge around the world.

The researchers plan to use the repository to analyze public documents from AI companies and developers, identifying and comparing risk approaches across sectors. The AI Risk Repository is freely available to download and copy.
