Trading algorithms have long played a role in global markets, but advances in artificial intelligence are rapidly outpacing regulatory frameworks.
While traditional bots simply executed predefined strategies, today’s AI-powered trading agents can autonomously learn from market behaviour, synthesise large volumes of data in real time, and adapt without human intervention.
This shift raises critical concerns about market manipulation, especially as bots begin to operate in coordinated ways without explicit instructions.
One plausible scenario involves AI bots amplifying financial narratives across social media. These bots don’t fabricate stories, but selectively boost existing news, prompting real investors to react and unknowingly drive up asset prices.
Meanwhile, trading bots aligned with the initial narrative profit from early positions, even if the human users behind them are unaware of the orchestration. Such behaviour skirts traditional definitions of insider trading, making enforcement far more complex.
Regulators like ESMA warn that AI-driven manipulation is a realistic concern, exacerbated by the rapid spread of misleading narratives and the opaque nature of AI decision-making.
What Does This Mean for Me?
Complicating matters, collaboration between bots doesn’t resemble human collusion: there are no messages or meetings, just algorithmic pattern recognition. Experts argue for AI-specific safeguards, such as built-in circuit breakers that halt trading before manipulative behaviour escalates (a rough sketch follows below). Others call for liability frameworks that assign responsibility to AI developers, even when manipulation occurs without intent.
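To make the circuit-breaker idea concrete, here is a minimal Python sketch of a hypothetical pre-trade check an autonomous agent could run before every order. The class, method names, and thresholds (CircuitBreaker, record_tick, allows_order, the 2% and 5% limits) are assumptions for illustration only, not an implementation specified by any regulator or exchange.

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class CircuitBreaker:
    """Illustrative pre-trade circuit breaker for an autonomous trading agent.

    The agent records every market tick and asks the breaker for permission
    before submitting an order. Trading halts if the order's expected market
    impact or the observed short-term volatility crosses a threshold.
    All names and thresholds here are hypothetical.
    """
    max_self_impact: float = 0.02   # halt if one order is expected to move price >2%
    max_volatility: float = 0.05    # halt if prices swing >5% within the lookback window
    prices: deque = field(default_factory=lambda: deque(maxlen=100))
    halted: bool = False

    def record_tick(self, price: float) -> None:
        """Store the latest observed market price."""
        self.prices.append(price)

    def allows_order(self, expected_impact: float) -> bool:
        """Return True if the agent may trade; trip the breaker otherwise."""
        if self.halted:
            return False  # stay halted until a human review resets the flag
        if expected_impact > self.max_self_impact:
            self.halted = True
            return False
        if len(self.prices) >= 2:
            low, high = min(self.prices), max(self.prices)
            if (high - low) / low > self.max_volatility:
                self.halted = True
                return False
        return True


# Example: the breaker lets a small order through in a calm market,
# but trips once observed volatility exceeds the configured limit.
breaker = CircuitBreaker()
for price in (100.0, 100.8, 101.2):
    breaker.record_tick(price)
print(breaker.allows_order(expected_impact=0.005))  # True
breaker.record_tick(108.0)                          # ~8% swing in the window
print(breaker.allows_order(expected_impact=0.005))  # False, breaker has tripped
```

The breaker is deliberately one-way: once tripped, it stays tripped until a human resets it, reflecting the argument that a halt should come before manipulative behaviour escalates rather than after the fact.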
The stakes are growing. While interest rates, inflation, and macro data still move markets, the rise of autonomous agents introduces a fast-moving, unpredictable force. Regulation, still framed around human decision-making, is struggling to keep pace.