
Policy makers should plan for superintelligent AI, even if it never happens

Bulletin, By Zachary Kallenborn | December 21, 2023

Experts from around the world are sounding alarm bells to signal the risks artificial intelligence poses to humanity. Earlier this year, hundreds of tech leaders and AI specialists signed a one-sentence letter released by the Center for AI Safety that read “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” In a 2022 survey, half of researchers indicated they believed there’s at least a 10 percent chance human-level AI causes human extinction. In June, at the Yale CEO summit, 42 percent of surveyed CEOs indicated they believe AI could destroy humanity in the next five to 10 years.

These concerns mainly pertain to artificial general intelligence (AGI), systems that can rival human cognitive skills, and artificial superintelligence (ASI), machines with the capacity to exceed human intelligence. Currently, no such systems exist. However, policymakers should take these warnings, including the potential for existential harm, seriously.

Because the timeline and form of artificial superintelligence are uncertain, the focus should be on identifying and understanding potential threats and on building the systems and infrastructure necessary to monitor, analyze, and govern those risks, both individually and as part of a holistic approach to AI safety and security. Even if artificial superintelligence does not manifest for decades or even centuries, or at all, the magnitude and breadth of potential harm warrants serious policy attention. For if such a system does indeed come to fruition, a head start of hundreds of years might not be enough.

Prioritizing artificial superintelligence risks, however, does not mean ignoring immediate risks like biases in AI, propagation of mass disinformation, and job loss. An artificial superintelligence unaligned with human values and goals would supercharge those risks, too…

The threat. Traditional existential threats like nuclear or biological warfare can directly harm humanity, but artificial superintelligence could create catastrophic harm in myriad ways… more: https://thebulletin.org/2023/12/policy-makers-should-plan-for-superintelligent-ai-even-if-it-never-happens/?utm_source=Newsletter&utm_medium=Email&utm_campaign=MondayNewsletter12252023&utm_content=DisruptiveTechnologies_SuperintelligentAI_12212023

December 29, 2023 - Posted by | technology
