Discussion about this post

Clyde Wright:

o1-pro got this https://chatgpt.com/share/67a53a03-b8b8-8002-98a9-6d15194b4518

Would you like to work on CryptoBench together to push the boundaries of models on this topic?

Charlie:

> I base this belief on two facts: classic cipher systems are language-based, which eliminates the complaint that the LLM is given problems outside its area of expertise

While this seems intuitively correct, LLM architectures actually abstract words into "tokens" rather than processing them letter by letter. This is the source of the infamous "two r's in strawberry" failure: the LLM doesn't know that strawberry is a word composed of letters; it knows it as a numeric token that only loosely corresponds to a word. It's still reasonable to assume that a rational agent should be able to solve a cryptogram, but this objection remains wide open, counterintuitive as it may be.
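The point about tokenization can be illustrated with a toy sketch. This is not a real LLM tokenizer; the vocabulary and ids below are hypothetical, and the greedy longest-match loop only stands in for byte-pair-style subword tokenization. The key observation is that the id sequence the model consumes carries no letter-level structure:

```python
# Toy illustration (NOT a real tokenizer): a subword vocabulary maps
# whole pieces to integer ids, so the model never sees single letters.
toy_vocab = {"straw": 1042, "berry": 2717}  # hypothetical subword ids

def toy_tokenize(word, vocab):
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        # Try the longest remaining prefix first, shrinking until a match.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(vocab[word[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token covers {word[i:]!r}")
    return tokens

ids = toy_tokenize("strawberry", toy_vocab)
print(ids)                      # [1042, 2717]
print("strawberry".count("r"))  # 3 -- visible only at the character level
# From [1042, 2717] alone there is no way to recover how many r's the
# word contains, which is why letter-counting trips up token-based models.
```

A character-level question like counting r's is trivial on the raw string but opaque on the token ids, which is the asymmetry the comment describes.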
