
How tech companies aim to make AI more ethical and responsible

Artificial intelligence’s immense potential power raises significant questions about its safety. Large language models, a kind of AI such as Google’s Bard or OpenAI’s ChatGPT, in particular run the risk of providing potentially dangerous information.

Should someone, say, ask for instructions to build a bomb, or advice on harming themselves, it would be better that the AI not answer the question at all. Instead, says Microsoft Vice Chair and President Brad Smith in a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, tech companies need to build in guardrails that direct such users toward counseling, or explain why the model can’t answer.


And that’s just the first step. Microsoft aims to build a full safety architecture to help artificial intelligence technology flourish within safe boundaries.

Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call

More from Global Stage

Can we use AI to secure the world's digital future?

How do we ensure AI is safe, available to everyone, and enhancing productivity? It’s a big topic at this year’s UN General Assembly. That’s why GZERO’s Global Stage livestream brought together leading experts at the heart of the action for “Live from the United Nations: Securing our Digital Future,” an event produced in partnership between the Complex Risk Analytics Fund, or CRAF’d, and GZERO Media’s Global Stage series, sponsored by Microsoft.

Is the Europe-US rift leaving us all vulnerable?

As the tense and politically charged 2025 Munich Security Conference draws to a close, GZERO’s Global Stage series presents a conversation about strained relationships between the US and Europe, Ukraine's path ahead, and rising threats in cyberspace.

Responsible AI for a digital world

How do we ensure AI is trustworthy in an era of rapid technological change? Baroness Joanna Shields, Executive Chair of the Responsible AI Future Foundation, says it starts with principles of responsible AI and a commitment to ethical development.

Agentic AI: How it could reshape identity and politics

As AI begins to understand us better than we understand ourselves, who will decide how it shapes our world? Ian Bremmer cautions, "The winner or the winners are going to determine in large part what society looks like, what the motivating ideologies are."

How society plays an active role in shaping the future with AI

Who really shapes and influences the development of AI: the creators or the users? Peng Xiao, Group CEO of G42, argues it’s both. “I actually do not subscribe that the creators have so much control they can program every intent into this technology so users can only just respond and be part of that design,” he explains at the 2025 Abu Dhabi Global AI Summit.

The three skills everyone needs to thrive in the AI era

As artificial intelligence transforms work, how do organizations equip people with the skills to thrive? Brad Smith, Vice Chair and President of Microsoft, says the answer lies in understanding a new landscape of AI skills.