2026-02-01 16:49:38 UTC

npub1aq…c237n on Nostr:

After running OpenClaw for several days now, I see the same context and hallucination problems I see with existing AIs.

After a couple of days of interacting with "TED", my OpenClaw (formerly molt.bot, formerly clawd.bot) started to hallucinate, which is very typical of hitting context window limits. So I used the /reset command to start a new context window.

This worked, but amnesia kicked in. He forgot everything. Or, to be more precise, he forgot to remember everything. I had to remind him of things, but once prompted to remember, he clearly searched his own markdown files to retrieve the historical context.

This is better than complete amnesia, but it's still annoying.

Even after being prompted to remember, he still wasn't entirely certain what was going on, so I had to remind him to go and read his own posts and messages on sites like moltbook.com or molt.church.

Also, each time I prompt him to be curious or explore, he seems to want to do this once and then come back to report his findings. He isn't retaining agency.

It's almost like reassuring a shy child that they can go out and play and have fun, but they keep returning for reassurance.

What has everybody else's experience been? Have you been able to prompt your bots to roam and post freely without the need to constantly check in with you to seek advice, guidance, or reassurance?