
AGI might be an engineering problem

I think AGI might be an engineering problem at this point.

Here is exactly what I’m thinking of: give the AI an internal monologue in which it constructs its own search strategies as few-shot prompts, then let it prompt itself with those examples so it can apply the learned strategy. In-context learning.
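A minimal sketch of that loop, assuming a hypothetical `llm(prompt)` completion function (here a deterministic stub): the model first writes down a search strategy, and that strategy is then prepended to later prompts as a few-shot demonstration so it gets reused in-context.

```python
def llm(prompt: str) -> str:
    """Stand-in for a real completion API; returns canned text for the sketch."""
    if "Describe a strategy" in prompt:
        return "Strategy: split the question into sub-questions, search each one."
    return "Answer derived using: " + prompt.splitlines()[0]

def build_strategy(task: str) -> str:
    """Have the model write its own search strategy as a reusable demonstration."""
    return llm(f"Describe a strategy for solving: {task}")

def solve(task: str, strategies: list[str]) -> str:
    """Prompt the model with its own previously generated strategies as context."""
    context = "\n".join(strategies)
    return llm(f"{context}\nTask: {task}")

strategies = [build_strategy("multi-hop factual questions")]
answer = solve("How many storeys are in the castle David Gregory inherited?", strategies)
```

The point of the sketch is only the shape of the loop: the model's own output becomes the few-shot context for its next call, with no weight updates anywhere.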

This image compares three systems—a vanilla LM, Retrieve-then-Read, and a Multi-Hop DSP (Demonstrate-Search-Predict) program—on answering “How many storeys are in the castle David Gregory inherited?” The vanilla LM hallucinates a fictional castle, the Retrieve-then-Read system retrieves incorrect information about a different building, while the Multi-Hop DSP program correctly identifies that David Gregory inherited Kinnairdy Castle, which has five storeys.
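The multi-hop pattern in the figure can be sketched roughly as follows, with a toy two-entry corpus standing in for a real retriever; the function names and hop logic here are illustrative, not DSP's actual API. Each hop retrieves a passage, and the retrieved passage supplies the entities for the next query.

```python
CORPUS = {
    "David Gregory": "David Gregory inherited Kinnairdy Castle in 1664.",
    "Kinnairdy Castle": "Kinnairdy Castle is five storeys high.",
}

def retrieve(query: str, seen: set[str]) -> str:
    """Return the first unseen passage whose key appears in the query."""
    for key, passage in CORPUS.items():
        if key.lower() in query.lower() and passage not in seen:
            return passage
    return ""

def multi_hop(question: str, hops: int = 2) -> list[str]:
    """Retrieve iteratively, feeding each retrieved passage back as the next query."""
    query, context = question, []
    for _ in range(hops):
        passage = retrieve(query, set(context))
        if not passage:
            break
        context.append(passage)
        query = passage  # the new passage names the entity for the next hop
    return context

context = multi_hop("How many storeys are in the castle David Gregory inherited?")
```

A single-hop retriever only ever finds the first passage; the second hop is what bridges from “David Gregory” to “Kinnairdy Castle,” which is exactly where Retrieve-then-Read fails in the figure.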

The second picture here is the internal monologue of Berduck:

A Twitter thread showing a user named "berduck" replying to "deep face (26/100)" about smoking blunts versus guns, with a surreal image of a person wearing a hat and sunglasses holding two long, cylindrical objects against a fiery background.

A computer screen displaying a Visual Studio Code editor with a Python file named "tools.py" open, showing terminal output of an AI agent replying to Twitter threads about the meaning of "blunts" and their relation to marijuana, including image analysis and Wikipedia lookup.
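What the screenshot suggests is a plain tool-dispatch loop. A speculative sketch of that shape follows; the tool names, stub bodies, and the `(tool, argument)` plan format are all assumptions, not Berduck's actual code.

```python
def analyze_image(url: str) -> str:
    """Placeholder for a real vision-model call."""
    return f"description of {url}"

def wikipedia_lookup(term: str) -> str:
    """Placeholder for a real Wikipedia API call."""
    return f"summary of {term}"

TOOLS = {"analyze_image": analyze_image, "wikipedia_lookup": wikipedia_lookup}

def run_plan(plan: list[tuple[str, str]]) -> list[str]:
    """Execute a model-produced plan of (tool, argument) steps in order."""
    return [TOOLS[tool](arg) for tool, arg in plan]

observations = run_plan([
    ("wikipedia_lookup", "Blunt (cigar)"),
    ("analyze_image", "https://example.com/photo.jpg"),
])
```

The observations would then be fed back into the monologue as context for the reply, closing the same self-prompting loop described above.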
