
Looking inside the black box

Looking into the code.

DPA via Reuters
One of the biggest challenges facing artificial intelligence companies is that they don’t fully understand their own algorithms. This so-called black box problem is exacerbated by the fact that deep learning models do precisely what their name suggests: they learn, and as they learn, they change. They take in enormous troves of data, detect patterns, and spit something out: how a sentence should read, what an image should look like, how a voice should sound.

But now researchers at Anthropic, the AI startup behind the chatbot Claude, claim a breakthrough in understanding their own model. In a blog post, Anthropic researchers disclosed that they’ve identified 10 million “features” of their Claude 3 Sonnet language model: patterns that activate when a user inputs something the model recognizes. They’ve also been able to map features that sit close to one another: one for the Golden Gate Bridge, for example, sits near features for Alcatraz Island, the Golden State Warriors, California Governor Gavin Newsom, and the Alfred Hitchcock film Vertigo, which is set in San Francisco. Knowing about these features allows Anthropic to turn them up or down, steering the model out of its typical mold.
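The idea of “nearby” features and of turning a feature on can be pictured in miniature. This is an illustrative sketch only, not Anthropic’s method (which trains a sparse autoencoder on Claude’s internal activations): every feature name, vector, and number below is a made-up stand-in, and the toy activation space has just three dimensions.

```python
import math

# Two hypothetical "feature" directions in a toy 3-dimensional
# activation space. Related concepts end up pointing similar ways.
bridge_feature = [1.0, 0.0, 0.0]    # stand-in for "Golden Gate Bridge"
alcatraz_feature = [0.9, 0.1, 0.0]  # a nearby, related feature

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, 0.0 unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# "Close" features have high cosine similarity in activation space.
print(round(cosine(bridge_feature, alcatraz_feature), 3))  # → 0.994

# "Turning a feature on" amounts to scaling its contribution before the
# model continues computing, biasing the output toward that concept.
strength = 10.0
steered = [strength * x for x in bridge_feature]
```

In the real system the dictionary has millions of learned directions rather than two hand-written ones, but the geometry is the same: relatedness shows up as small angles between feature vectors, and steering shows up as amplifying one vector’s contribution.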

This development offers hope that the companies behind powerful generative AI models will soon have much more control over their creations, as MIT professor Jacob Andreas told the New York Times. “In the same way that understanding basic things about how people work has helped us cure diseases,” Andreas said, “understanding how these models work will both let us recognize when things are about to go wrong and let us build better tools for controlling them.”
