A warning for those looking to proactively automate day‑to‑day workflows

We are currently in a time when new technologies are being adopted in days and weeks, rather than months and years. This has been demonstrated with the recent and rapid adoption of AI automation tools that manage how we interact and communicate, which is going to change the way we fundamentally interact with technology.

In the past few days, OpenClaw (aka ClawdBot, aka MoltBot), a popular self‑hosted (i.e., it can run locally on your PC) open‑source AI assistant, has quickly gained attention – it’s possible that you’ve seen it on the news or in your social feeds. These types of tools are designed to deeply integrate with your computer and automate tasks that have traditionally required human intervention, e.g., reading and replying to emails, sorting your calendar, or talking to your team on Slack.

I am in favour of automating and digitising routine, repetitive tasks. However, I’m not sure everyone is fully aware of the implications or risks involved with this new type of AI assistant, which has the potential to open the door to a lot of your personal information. This is especially important for business users who may unknowingly expose confidential or sensitive company and client data to AI ecosystems that are still new, unproven, and rapidly evolving.

The key risk is that all‑encompassing, always‑on, always‑acting tools are inherently vulnerable to exploitation (in this case via prompt injection). They can run with broad privileges capable of performing admin‑level actions, and many of them may operate without adhering to common cybersecurity standards.

Prompt injection is, at its core, a way of tricking an AI system into doing something harmful by manipulating the instructions it receives. Think of it like writing in hidden ink or adding text to an email in the same colour as the background, which you can’t see, but an AI agent would include in its instructions. When a proactive agent scans your emails, files, chats, and websites, it doesn’t know to exclude this text. A malicious actor can hide commands in an incoming email or document as legitimate actions that can lead to data leaks, sending of unauthorised messages, files being deleted, or potentially financial actions you never intended to transact.

A major concern of mine is that new users of AI don’t know what prompt injection means, or its implications, and with such tools reaching a whole new level of adoption, it’s becoming a real issue. I had two individuals reach out to me recently because their systems were compromised while experimenting with tools like OpenClaw. One lost access to their Gmail and was unable to work, while the other experienced unintended actions on their digital currency accounts after their agent executed tasks they didn’t expect.

For clarity, I advocate and recommend engaging in automation. It has amazing productivity benefits. However, my general recommendation is to stay away from unsupervised AI tools until you’ve done appropriate research and testing, which I appreciate is becoming increasingly difficult with such rapidly evolving technology.

In terms of practical takeaways, this means: –

  • Automate processes but make sure you retain control of your data.
  • Restrict access to tools that are used and deployed within your businesses.
  • Engage with professionals where needed to ensure you are appropriately informed about new technologies.

Looking ahead, I remain optimistic but believe 2026 will reshape how we interact with each other, our technology, and our data. I just hope that there aren’t too many casualties along the way due to bad actors…