
Here’s what’s really going on inside an LLM’s neural network

posted on May 23, 2024
by l33tdawg
Ars Technica

With most computer programs—even complex ones—you can meticulously trace through the code and memory usage to figure out why that program generates any specific behavior or output. That's generally not true in the field of generative AI, where the non-interpretable neural networks underlying these models make it hard for even experts to figure out precisely why, for instance, they often confabulate information.

Now, new research from Anthropic offers a window into what's going on inside the Claude LLM's "black box." The company's paper, "Extracting Interpretable Features from Claude 3 Sonnet," describes a powerful method for at least partially explaining how the model's millions of artificial neurons fire to create surprisingly lifelike responses to general queries.
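
The article doesn't walk through the technique itself, but work in this area typically relies on dictionary learning: training a sparse autoencoder to rewrite a model's internal activations as sparse combinations of a much larger set of candidate "features" that are easier to label. The PyTorch sketch below illustrates only that general idea; the class, layer sizes, loss weights, and random training data are all assumptions for the example, not anything taken from Anthropic's setup.

```python
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Toy dictionary-learning sketch: re-express dense neuron activations
    as a sparse combination of a much larger set of feature directions."""

    def __init__(self, n_neurons: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(n_neurons, n_features)  # activations -> feature coefficients
        self.decoder = nn.Linear(n_features, n_neurons)  # features -> reconstructed activations

    def forward(self, activations: torch.Tensor):
        # ReLU keeps only positively "firing" features; combined with the L1
        # penalty below, this pushes most coefficients to zero for each input.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction


# All sizes and data here are illustrative stand-ins.
sae = SparseAutoencoder(n_neurons=512, n_features=4096)
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-3)

activations = torch.randn(64, 512)  # placeholder for real model activations
features, reconstruction = sae(activations)

# Reconstruction loss plus a sparsity penalty: the classic dictionary-learning trade-off.
loss = ((reconstruction - activations) ** 2).mean() + 1e-3 * features.abs().mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
```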

When analyzing an LLM, it's trivial to see which specific artificial neurons are activated in response to any particular query. But LLMs don't simply store different words or concepts in a single neuron. Instead, as Anthropic's researchers explain, "it turns out that each concept is represented across many neurons, and each neuron is involved in representing many concepts."
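
The distinction is easy to see in a toy model. The NumPy sketch below (all sizes, indices, and weights invented for illustration) packs more "concepts" than neurons into a single activation vector: reading off any individual neuron tells you little, but projecting the activation onto a concept's direction recovers what is being represented.

```python
import numpy as np

rng = np.random.default_rng(0)

# More concepts than neurons forces the representation to share neurons.
n_neurons, n_concepts = 50, 200

# Each concept is a (roughly random) unit direction spread across all 50 neurons.
concept_directions = rng.normal(size=(n_concepts, n_neurons))
concept_directions /= np.linalg.norm(concept_directions, axis=1, keepdims=True)

# An input that strongly expresses concept 7 and weakly expresses concept 42
# yields an activation vector in which essentially every neuron fires a bit.
activation = 3.0 * concept_directions[7] + 0.5 * concept_directions[42]

# Inspecting individual neurons is easy but uninformative on its own...
print("most active neurons:", np.argsort(-np.abs(activation))[:5])

# ...whereas projecting onto the concept directions recovers what was represented
# (concept 7 should rank first; 42 usually lands near the top as well).
scores = concept_directions @ activation
print("best-matching concepts:", np.argsort(-scores)[:5])
```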
