A quirky personal AI assistant shaped like a lobster has unexpectedly taken the tech world by storm. Originally known as Clawdbot, the tool — now renamed Moltbot — promises to actually perform tasks, not just answer questions, drawing massive attention from developers and AI enthusiasts alike.
Moltbot, formerly Clawdbot, describes itself as the “AI that actually does things.”It can manage calendars, send messages through apps, and even check users in for flights.
Despite requiring a fairly technical setup, thousands of users rushed to try it. The project went viral within weeks, fueled by social media buzz and developer curiosity.
Who Built Moltbot?
Moltbot was created by Peter Steinberger, an Austrian developer and founder known online as @steipete. He previously worked on PSPDFkit before stepping away from active development for several years.
In a personal blog post, Steinberger said he felt disconnected from building for nearly three years. That changed when renewed excitement around AI inspired him to start experimenting again.
The Origin of the Lobster Assistant
The publicly available Moltbot grew out of a personal tool Steinberger built for himself. Originally called Clawd, and later Molty, it was designed to help him manage his digital life and explore human-AI collaboration.
Steinberger initially named the project after Anthropic’s AI model, Claude. A self-described “Claudoholic,” he later revealed that Anthropic asked him to change the branding for copyright reasons.
After the legal challenge, Clawdbot became Moltbot. While the name changed, Steinberger said the project’s “lobster soul” remained intact. TechCrunch has reached out to Anthropic for comment.
Explosive Growth and Market Impact
Moltbot quickly gained traction among early adopters. The project has amassed more than 44,200 stars on GitHub in a short period.
The hype even affected financial markets. Cloudflare’s stock jumped 14% in premarket trading as online buzz linked Moltbot’s popularity to Cloudflare’s infrastructure, which many developers use to run the tool locally.
Security Risks and Growing Concerns
Moltbot is open source, allowing anyone to inspect its code for vulnerabilities. It also runs locally on a user’s computer or server, rather than in the cloud.
However, experts warn that its core promise comes with serious risks. Investor Rahul Sood noted that “actually doing things” also means the AI can execute commands on your computer.
Sood highlighted concerns around prompt injection attacks. A malicious message sent through an app like WhatsApp could potentially trigger unintended actions without the user’s knowledge.
While careful setup and model selection can reduce risk, fully preventing such attacks requires running Moltbot in a restricted environment.
Many developers warn that treating it casually, like ChatGPT, could lead to serious problems.
Scams, Impersonation, and Early Warnings
Steinberger himself faced the darker side of viral attention. After the renaming process, crypto scammers reportedly hijacked his GitHub username and launched fake projects using his name.
He warned followers that any cryptocurrency listing him as an owner is a scam.
Steinberger also clarified that the only legitimate X account is @moltbot, not numerous scam variations.
Moltbot is still firmly in early adopter territory. Running it safely often requires a VPS or a separate computer with disposable accounts.
Experts caution against installing it on personal laptops containing sensitive credentials.
For now, the security-versus-utility trade-off limits its practicality for everyday users.
Despite its risks and limitations, Moltbot has sparked a wider conversation.
By building a tool to solve his own problem, Steinberger demonstrated how autonomous AI agents could move beyond demos and become genuinely useful.
Whether Moltbot itself becomes mainstream or not, it has already shown developers what the next generation of AI assistants might look like.







