What happened when an AI played 11 million games of Monopoly?
Two weeks of training to agree with humans. Relatable.
Someone built an AI that played 11.2 million games of Monopoly against itself.
If a human had done that, it would have taken roughly 1,600 years. A creator called b2studios put the whole thing on YouTube (link at the bottom) and even open-sourced the code on GitHub. It’s one of the more entertaining AI stories I’ve come across in a while, not because of the Monopoly insights per se, but because of what it reveals about how AI actually learns.
The creator started simple: a basic bot that buys everything, trades nothing, and builds houses when it can. Four of them, playing a million games. What he found was that without trading, 70% of four-player games ended in a stalemate. No one ever acquired enough to bankrupt anyone else. The game just... went on forever. It’s a useful reminder that a system optimizing purely for individual acquisition, with no exchange, doesn’t produce winners. It produces gridlock.
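To make the baseline concrete, here’s a minimal sketch of that policy in Python. This is my own illustration, not code from the actual project; the Property and Player fields (price, house_cost, cash and so on) are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Property:
    price: int        # purchase price
    house_cost: int   # cost per house
    owner: object = None
    houses: int = 0

@dataclass
class Player:
    cash: int

def baseline_move(player, landed_on, completed_sets):
    """The 'dumb' baseline: buy anything affordable, never trade,
    build houses whenever a completed colour set allows it."""
    # Rule 1: buy any unowned property you can afford.
    if landed_on.owner is None and player.cash >= landed_on.price:
        return ("buy", landed_on)
    # Rule 2: never trade. There is deliberately no trade branch.
    # Rule 3: build on completed sets whenever cash allows (5 = hotel).
    for prop in completed_sets:
        if prop.houses < 5 and player.cash >= prop.house_cost:
            return ("build", prop)
    return ("pass", None)
```

Run four copies of that against each other and everyone hoards, nobody liquidates, and most games simply never end.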
Then came NEAT (NeuroEvolution of Augmenting Topologies), an algorithm that breeds neural networks through evolution, rewarding the ones that perform best and building the next generation from them. The AI started from absolute scratch. First discovery? Paying to get out of jail is bad. Fine. Reasonable enough start. Then it decided to bid $3,000 on every single property and lost constantly. Then it found building houses. Then it started winning, and things got strange.
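Before we get to the strange part, it’s worth seeing the shape of that generational loop. Real NEAT also evolves the network’s topology and uses speciation to protect new structures, which this toy version skips entirely; treat it as a sketch of the select-and-breed cycle, with every function and parameter name invented for illustration.

```python
import random

def crossover(a, b):
    """Uniform crossover: each gene comes from one parent at random."""
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def mutate(genome, rate):
    """Nudge each gene with small Gaussian noise, at the given rate."""
    return [g + random.gauss(0, 0.5) if random.random() < rate else g
            for g in genome]

def evolve(population, fitness, generations=100,
           survival_rate=0.2, mutation_rate=0.1):
    """Core evolutionary loop: score every genome (e.g. by win rate
    over a batch of self-play games), keep the top performers, and
    breed the next generation from the survivors."""
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: max(2, int(len(scored) * survival_rate))]
        children = []
        while len(survivors) + len(children) < len(population):
            a, b = random.sample(survivors, 2)
            children.append(mutate(crossover(a, b), mutation_rate))
        population = survivors + children
    return max(population, key=fitness)

# Toy usage: evolve 5-gene genomes toward the target vector of all ones.
pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(50)]
best = evolve(pop, fitness=lambda g: -sum((x - 1) ** 2 for x in g))
```

In the Monopoly case the fitness function is the interesting bit: a genome only "knows" it did well because it won more self-play games than its siblings.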
At one point the AI decided the optimal strategy was to mortgage everything it owned. Every game ended in a draw. It had accidentally discovered mutually assured stagnation. It figured out pretty quickly that draws don’t win tournaments and snapped itself out of it. Then it started doing something no one expected: auctioning almost everything, bidding next to nothing, and winning anyway, because it had developed a precise sense of when a property was actually worth acquiring and at what price.
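Nobody published the learned network’s internals, so any reconstruction of that auction logic is guesswork, but the shape of it is probably something like a value-threshold policy: estimate what the property is worth to you right now, and never bid past it. A hypothetical sketch, with every threshold invented:

```python
def auction_bid(value_estimate, current_bid, my_cash, min_increment=1):
    """Value-threshold bidding: keep outbidding only while the price
    stays under our own estimate of the property's worth, and never
    commit the cash reserve we need to survive rent shocks."""
    ceiling = min(value_estimate, my_cash * 0.8)  # the 0.8 reserve is invented
    next_bid = current_bid + min_increment
    return next_bid if next_bid <= ceiling else None  # None = drop out
```

The interesting part isn’t the arithmetic, it’s where value_estimate comes from; that’s the thing 11 million games of self-play actually taught it.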
After two weeks of training and 11.2 million games, the AI’s final favourite properties were the orange set, the railroads (especially the first one), and the reds. The boring brown set it had briefly loved early on? Abandoned completely once it had real trading tools available. And when it did decide to build, it went hard: the optimal house count it landed on was 8-9 houses per property set, a notably specific finding that lines up with what experienced human players have always argued about the “housing shortage” strategy (the bank stocks only 32 houses, so building heavily and never upgrading to hotels starves everyone else of houses to buy).
The conclusion the video lands on is almost funny in its modesty: two weeks of intensive computation, 11.2 million games, and the AI basically agreed with what experienced human players already knew.
Which is kind of the point, isn’t it. The value wasn’t in some revolutionary discovery. It was in CONFIRMING what works, through a process rigorous enough that you could actually trust the output. That’s the difference between an AI that’s generating plausible-sounding answers and one that’s been properly tested against reality at scale.
We think about this constantly at FOMO.ai. The brands winning in AI Search right now aren’t the ones who plugged a keyword list into ChatGPT and hit publish. They’re the ones building systems around the AI, with proper feedback loops, human oversight, and enough iterations that the output actually holds up. The Monopoly AI didn’t become good by rolling the dice once. Neither will your content strategy.
Further Reading:
The full code is open source: github.com/b2developer/MonopolyNEAT
The stalemate problem is worth sitting with. In a four-player game without trading, 70% of games never resolved. This maps surprisingly well onto marketing teams running AI tools in isolation from each other, each optimizing their own channel, never exchanging data or insight, and wondering why nothing ever compounds.
The AI’s brief “mortgage everything” phase is a well-documented failure mode in reinforcement learning, often called reward hacking: the agent finds a strategy that satisfies the letter of its reward signal (never lose) while missing the point entirely (win). It’s also how you get content that scores well on AI detection benchmarks but does nothing for actual customers.
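You can reproduce that trap in one function. If a draw is scored as anything close to “not losing,” a population of agents can settle into all-draws and stay there; score it near zero and the stagnation strategy collapses. A minimal sketch, with the scoring values made up for illustration:

```python
def tournament_fitness(results, penalize_draws=True):
    """Average score over a tournament record. Scoring draws at 0.5
    makes mutual stagnation a stable strategy; scoring them at 0
    pushes the population back toward actually winning."""
    draw_score = 0.0 if penalize_draws else 0.5
    table = {"win": 1.0, "draw": draw_score, "loss": 0.0}
    return sum(table[r] for r in results) / len(results)

record = ["draw"] * 10  # the 'mortgage everything' bot's record
print(tournament_fitness(record, penalize_draws=False))  # 0.5: looks fine
print(tournament_fitness(record, penalize_draws=True))   # 0.0: worthless
```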
Dax Hamman is the CEO & Co-Founder at FOMO.ai, the author of 84Futures and dax.fyi. If you’re ready to win with AI Search (SEO, AEO, Google, ChatGPT), schedule a free call and we will make you a custom $5,000 Playbook.