You mention OpenAI censorship, can you expand on that a little please? Is it that cryptography is a forbidden topic in some way? And do the top-level guardrails then interfere with output quality, so it's stepping on its own toes in some way?
Yes, it's been a while, but when OpenAI first released this model they said that the exact steps were a trade secret. To avoid giving that away, the model replaced the exact set of steps with a summary or sanitized version generated after the fact. That is what I meant by "censorship." Nothing to do with cryptography as a forbidden topic. I think they have given up on trying to hide the "reasoning" steps but I haven't looked specifically to see if that section is still in their TOS.
The new reasoning model isn't 4o, it's o1. https://chatgpt.com/share/67028d49-b81c-8002-ba27-27f383746d67
Still not perfect, but getting better. It will never be worse than this.
Thanks for the link and the correction. It is interesting to see how many prompts it took to steer it to a solution. I deliberately chose this cryptogram so that getting the last word would be tricky. I don't see o1 as being that much better than my 4o example; it still made the most basic of mistakes on the first try. A nice exercise in man-machine cooperation, but I remain unconvinced that the robot has demonstrated significant reasoning ability.
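For contrast with the model's trial-and-error, here is a minimal sketch of the mechanical core of a classical cryptogram solver: apply a candidate cipher-to-plain letter mapping, then score the result by how English-like its letter frequencies are. The frequency table is a common published approximation, and the code assumes nothing about the actual cryptogram discussed above.

```python
from collections import Counter
import string

# Approximate English letter frequencies in percent (a commonly cited table).
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.10, 'z': 0.07,
}

def decode(ciphertext: str, key: dict) -> str:
    """Apply a cipher->plain letter mapping; unmapped characters pass through."""
    return ''.join(key.get(c, c) for c in ciphertext.lower())

def chi_squared(text: str) -> float:
    """Score text against English letter frequencies; lower is more English-like."""
    letters = [c for c in text.lower() if c in string.ascii_lowercase]
    counts = Counter(letters)
    n = len(letters)
    return sum(
        (counts.get(ch, 0) - n * pct / 100) ** 2 / (n * pct / 100)
        for ch, pct in ENGLISH_FREQ.items()
    )
```

A solver would search over candidate keys (e.g. by hill-climbing on letter swaps) and keep whichever decode scores lowest, typically combined with a word list to pin down stubborn last words like the one in this cryptogram.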