Over 50% of internet traffic is now bots—many spreading AI-driven lies. Calgary-based Koat.ai is tracking this threat in real time. Here’s what you need to know.
We’ve long suspected it. The spam, the trolling, the strange comments on political posts. Now, it’s official: bots make up more than half of all internet traffic, and one-third of those are actively malicious.
According to a new report by cybersecurity firm Imperva, we are no longer just browsing the web — we’re navigating an AI-driven battleground, one where deception, distortion, and manipulation are orchestrated by code.
What Are These Bad Bots Doing?
Far from being harmless background noise, these AI-powered bots are carrying out:
- Disinformation campaigns to sway public opinion
- Defamation efforts against businesses and public figures
- Market manipulation and election interference
- Social engineering tactics to amplify fake narratives through real users
The malicious use of AI is outpacing protective measures, creating a digital environment where lies spread faster than facts — and platforms seem powerless to stop it.
Meet Koat.ai: The Calgary Startup Fighting Back
Enter Koat.ai, a Calgary-based cybersecurity intelligence startup that’s quietly building some of the most advanced detection tools on the market. Founded in 2021, Koat.ai has already landed a Big Six bank as a client — and it’s just getting started.
In an exclusive conversation with co-founder and president Connor Ross, we learned that Koat.ai’s systems can detect fake online activity within 10 seconds of a post going live.
“Bad actors are weaponizing AI faster than governments and enterprises are using AI to combat it,” says Ross.
Bots Aren’t Just Posting — They’re Persuading
Here’s the alarming twist: the problem isn’t just bots talking to bots. It’s bots manipulating people.
Fake accounts post falsehoods. Real people believe them. Share them. Act on them.
This chain reaction, fueled by speed and scale, creates a dangerous reality where fabricated narratives become social truth before fact-checkers or authorities can even respond.
Platforms Know — But Are They Responding?
Global social media platforms are aware of the issue, but Ross argues they aren’t doing nearly enough.
“There’s a dangerous gap between platform awareness and platform accountability,” he warns.
Despite clear evidence of bot-driven chaos affecting everything from public health to geopolitical stability, corporate inaction continues to leave societies vulnerable.
What Can Be Done?
While government policy lags and social media companies hesitate, startups like Koat.ai offer a glimpse of hope — but Ross insists tech alone won’t solve the problem.
What’s needed now:
- Public awareness campaigns about AI-driven manipulation
- Stronger collaboration between tech firms and governments
- Rapid response systems for high-impact disinformation
- User education to help citizens critically assess what they read
The Bottom Line
More than half of internet traffic is bots. A third of those bots are actively trying to mislead you.
The fight against AI-powered misinformation isn’t just a technical challenge — it’s a societal one. The lies are fast, convincing, and embedded in our digital experiences.
And the longer we wait, the more powerful — and invisible — these threats become.