Last week, UN Secretary-General António Guterres announced the creation of a new advisory body to tackle AI. The group has 38 members, including government officials and industry executives, and will be co-chaired by Google’s James Manyika and Carme Artigas, Spain’s official in charge of AI.
In a speech at the UK's AI Safety Summit, Guterres said the group's first task will be "to examine models of technology governance that have worked in the past, with a view to identifying forms that could work for AI governance now and in the future."
Those findings will be released in a preliminary report by the end of this year, with a final report expected next year. With nearly every major international body, it seems, recognizing the need for AI regulation, the UN doesn't want to get left behind. That said, while the UN has plenty of experience reining in dangerous technologies such as nuclear and chemical weapons, it has far less authority over the threats posed by powerful or rogue software.