Meta's Cicero is not a threshold moment, but it could be, if we wanted it to be, for a different reason
I wake early to do low-light photography: time for coffee and a quick scan of current activity on LinkedIn, which is in the middle of one of the Northern Hemisphere posting bursts, when top-of-planet people are sending updates of their work, each with a different relativistic 'past-ness' for me, here on the edge of GMT.
Two people I follow had picked up on the reportage of the Meta team running Cicero, an AI experiment in the online game of Diplomacy. As a Diplomacy player, I'm OK with this: if you play hard strategy games, you expect a wide range of dirty tricks. Maria Luciana Axente in Europe and Caryn Lusinchi in Pacific America both covered the story in their usual thorough depth, exploring the implications of AI such as Cicero.
Before I knew it, I had nearly missed the colour shift between the extreme of the night dark and that of the day light. It doesn't last long, as there are multiple big physical systems interacting: three main light sources, one big reflector, three elements and two big conveyors. I do this partly because it's one of the best ways I know for humans to study how teeny tiny they are in relation to the giant data flows that dominate modern economic society.
I nearly missed it because I've been using Diplomacy tests for a long time to understand how groups interact when change is necessary but power is threatened. Later in life I extended the tests into the theorised space of finite games and infinite games. Where the former encourages players to think in terms of playing within boundary conditions to end something, the latter encourages players to play with boundary conditions to extend something.
This helped identify players who are unaware of causal models, rarely use imagination and don't worry about the dimensions of trust, sociability and respect. I came to the conclusion that they weren't playing to the strategy of the pro-group defensive coalition, which my experiments in infinite play predicted would be hard for the game sharks to beat.
Along the way I found that these pro-social defensive techniques against zero-sum players were also remarkably resilient to intrusions by artificial knowledge agents (i.e. AI). Some people just want to win, enough to sacrifice further play, and wouldn't hesitate to deploy a Cicero. I spent a couple of years gaming in the Game of Thrones Conquest environment, and much of my time was spent devising identification tests for hostile agents in our Discord.
The zero-sum logic doesn't include the dimensions that let human knowledge agents distinguish their humanness. If I'm a zero-sum player, yes, I should be concerned: the development path for AI will quickly zero-sum many human players out. That's because the playbook is constrained by the logic that tells us this is the way the world works. It's not, but people roll that way and, if it's Diplomacy, who am I to stop them lemming each other off cliffs.
The main reason a pro-social defensive strategy works is that artificial knowledge agents lack sociality, which is what pro-social defensive Diplomacy players produce as a natural consequence of successful game and campaign play. Artificial knowledge agents stimulate zero-sum game thinking, which ignores sociality unless it comes in the form of a portfolio of forward contracts where the maths can return the same sum over an acceptable return period.
In a Good Tech Fest presentation, I identified AI as epistemically irrational with respect to human knowledge agents: it also lacks causal models, an imagination, a body and an autonomous existence, and doesn't really care about human disrespect. Many zero-sum players aren't aware they run on a causal model, deploy only low levels of strategic imagination, lack a body (unless it's an f2f tournament) and don't care about sociality or disrespect. So I'm not surprised the techniques work: I'm interested in how far they can take data governance.
In a zero-sum player world, we're just a bunch of sharks circling each other, only now realising we're being eaten by a robot megalodon. It's a grim world view and not one that needs to happen. If you want to play like a robot, practice against Cicero. If you want to understand how to contain AI, practice against Cicero. If you want to understand human knowledge agents that deploy zero-sum thinking, practice against Cicero. For any other use case, you're better off observing the zero-sum players.
I had a quick read of the Meta blurb on Cicero; some comments follow.
- We’ve built CICERO, the first AI to play the strategy game Diplomacy at a human level
My sense is this can't actually be proven. It's a supposition, plus the implication of this oeuvre is that this sort of AI is in our games already. 0/3
- CICERO is a step forward in human-AI interactions with AI that can engage and compete with people in gameplay using strategic reasoning and natural language
Factually true and I think also technically true: congratulations to the Meta team, for real. This stuff is hard and moving artificial knowledge agents along the game play path is not easy. 1/3
- The technology behind CICERO could one day lead to more intelligent assistants in the physical and virtual worlds.
'Could' is an example of probabilistic language, which is scattered through everyday speech, mostly invisibly and often by people who have never had their estimation skills calibrated. It's factually true if observed and, because it's probabilistic, it has a complement, 'could not', which is also observable. 'Could not' would be true if there were a hard barrier the AI couldn't move through.
Given it's Meta, I'm thinking they mean 'might' and its complement 'might not'. Whatever the case, it's one of those vague statements that can be simultaneously true and not true, so not really helpful. And that's not counting the possible pathway where this technology is exactly what many people don't want to see happen.
I sat down with the blurb video and caught my first real difference between the Diplomacy I was taught and what the Meta team benchmarked against: 15-minute diplomacy phases. This is where you talk (or not), plan, plot etc. After this comes the move phase, where you write your orders down and decide whether you're going to keep your commitments. It's also here that you learn the most about the other players: behaviour reveals intent, the rest is just talk and posturing.
I was shown the short-form and long-form games. My mum worked in the old Dominion Museum, built on the site of the even older Alexandra Barracks, and for a few years I was babysat by really serious people who kept brains in jars on bookshelves filled with books and cylinders and stacks of things older than time itself.
Here in Aotearoa New Zealand, there is a great tradition of taking the voice of children seriously, and I was the only kid about, which meant being babysat by board games, which there meant Diplomacy. The long form was played over days (play by mail) while the short form was 30 minutes (or a bit longer if it was a day game). The reason these people had Diplomacy boards is that they were studying power strategies, using a political-economy analytic and the known case of the European old world immolating itself in 1914.
What they studied were the alternative courses of action that rational actors might take, much of which included the art of the stalemate. They were looking at the balance and imbalance of power. Diplomacy is like Bridge in this regard: you're also playing your partner's hand. Except in Diplomacy, you have to find hard and soft trust mechanisms, because your partner might not be your real partner and your enemy might not be your real enemy.
You play the full board. And if you need to, you play the whole room, especially if it's a day game or a tournament. Which goes to the second difference: all the players in the Meta video are human knowledge agents, in a physical location, within a shared social construct. And this second difference gives me my first test of whether Cicero is a threshold: it would need to convince the humans in the room that the orders it was writing were written by a human.
Why, one asks? Because players who display too little sociality are marked as either dangerous or easy, and offered up as the first player to be eliminated. Diplomacy is based around blocs and the number of players is capped at seven. There's usually someone who gets taken out early, unless they can architect a series of early bounces (where the contested moves all fail, producing the stalemate; see the sketch below). Given the sentiment of players, if it's a bot it's an easy hit: you remove it without breaking a relationship you might need later on, as you would with a human player.
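To make the bounce mechanic concrete, here is a toy adjudication sketch in Python (my own illustration, not any official rules engine): equal-strength moves into the same province all fail, which is exactly the lever used to engineer an early stalemate.

```python
from collections import defaultdict

def adjudicate_moves(orders):
    """Toy bounce resolution: equal-strength moves into the same province
    all fail (a 'bounce'). Supports are ignored, so every move has
    strength 1; enough to show how engineered bounces freeze the board."""
    targets = defaultdict(list)
    for unit, dest in orders:
        targets[dest].append(unit)
    results = {}
    for dest, movers in targets.items():
        for unit in movers:
            # A lone mover succeeds; two or more equal-strength movers bounce.
            results[unit] = "moves" if len(movers) == 1 else "bounces"
    return results

# Three powers all grab for Galicia: everyone bounces and nothing changes.
orders = [("A Vienna", "Galicia"), ("A Warsaw", "Galicia"), ("A Budapest", "Galicia")]
print(adjudicate_moves(orders))
# {'A Vienna': 'bounces', 'A Warsaw': 'bounces', 'A Budapest': 'bounces'}
```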
The Meta team move on to the technical challenges, which they overcame (because they're really good at what they do, I expect). When it gets to the interface with the dialogue and knowledge engines, we can analyse Cicero as a combinatorial advisor, helping a player choose between possible move options. That is good decision support, a great AI use case. Or, as here, it's an autonomous actor influencing play in its own way, just like another player.
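To show what 'combinatorial advisor' means in practice, here is a minimal sketch, assuming a hypothetical value function and candidate-move list (nothing from Cicero's actual planner): enumerate joint order sets, score them, and hand the top few back to the human.

```python
from itertools import product

def advise(units, candidate_moves, value, top_k=3):
    """Sketch of decision support: enumerate joint order sets across your
    units, score each with a value function, and return the best few as
    suggestions. The human still chooses; the advisor doesn't act."""
    options = [candidate_moves[u] for u in units]
    scored = []
    for combo in product(*options):
        order_set = dict(zip(units, combo))
        scored.append((value(order_set), order_set))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]

# Hypothetical value function: prefer moves that grab supply centres.
SUPPLY_CENTRES = {"Rumania", "Greece", "Serbia"}
def toy_value(order_set):
    return sum(dest in SUPPLY_CENTRES for dest in order_set.values())

units = ["A Budapest", "F Trieste"]
candidate_moves = {
    "A Budapest": ["Rumania", "Serbia", "Galicia"],
    "F Trieste": ["Albania", "Adriatic Sea"],
}
for score, orders in advise(units, candidate_moves, toy_value):
    print(score, orders)
```

The same machinery pointed at its own orders, with no human choosing, is the autonomous-actor case.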
Where we're told that 'being honest is the best way to succeed', well, I don't see that as accurate. Besides, since Cicero didn't identify itself as a bot and did well enough to end up with a placing, the advice clearly wasn't taken by Cicero itself. If you're honest you'll get carved up: people don't expect it and will assume you're up to something. The challenge is identifying reliable trust signals, and in the opening moves of the game they're in short supply unless you're playing the room. People quickly focus on what you do, not what you say. When the moves are adjudicated, people are sweating bullets over a backstab.
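Since behaviour reveals intent, the obvious trust signal is the gap between what a player promised in the diplomacy phase and what they actually ordered. A minimal sketch, with a made-up history format:

```python
def trust_signal(history):
    """Hypothetical trust tracker: fraction of turns where the promised
    order matched the executed order. `history` is a list of
    (promised_order, executed_order) pairs."""
    if not history:
        return None  # no signal yet: exactly the opening-moves problem
    kept = sum(promised == executed for promised, executed in history)
    return kept / len(history)

# Turn 1: they promised 'A Venice holds' and held. Turn 2: they promised
# to hold again, then marched on Trieste.
print(trust_signal([("A Venice H", "A Venice H"),
                    ("A Venice H", "A Venice - Trieste")]))  # 0.5
```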
And there's a third difference: there was always an independent person to manage the announcement and resolution of moves. This was part of the game, designed to establish a trust baseline by putting a human in charge. In AI risk terms, there was always a human on the loop. This person put their reputation on the line, guaranteeing a good game, a clean game and a steady beat to the game. In many cases, some players would only play if a specific person was adjudicating.
I haven't seen technology platforms replicate this trust in a human-moderated system, and that is one of the reasons I think independent audit is inevitable if we want the AI economy to pick up. But the emphasis on Cicero + Diplomacy making an AI more honest doesn't land with me, unless we look at this sort of game as a way to test for the presence of artificial knowledge agents, in which case I'm very interested.
So this is where I suggest a test that, if passed, should worry the data governance community:
- Stage a classic Diplomacy game of seven humans in a room playing 30-minute turns. Assuming the artificial knowledge agent has already played 800 gazillion games, it seems fair to fill the room with only very good human players. The human players shouldn't know they are in an experiment.
- To represent an external human actor breaking trust and infiltrating a human group with an artificial knowledge agent, one of the seven humans will make no moves of their own, just those sent by the artificial knowledge agent.
- As the players shouldn't know, this individual either needs to have lied successfully in advance or be a cold-blooded volunteer (and I can think of many players who would give this a go).
- The artificial knowledge agent also makes the diplomacy decisions and actions, keeping the human briefed (how, exactly, is a problem for the implementation to solve).
- If the human decides to call an audible, the experiment loses a control and claims about the artificial knowledge agent become less certain. But there's nothing stopping them: very Diplomacy.
- If the artificial knowledge agent sufficiently wrong-foots enough of the other players via room manipulation (which is how the humans play) and wins the board, then we have a proof that six human knowledge agents can be out-manipulated by one artificial knowledge agent.
- This is the same test as humans winning the board, so there is no observable difference between artificial and human: now the data governance profession can get worried. I think it would be irrelevant whether the winner in that setting was a human or an artificial knowledge agent (a minimal sketch of measuring that indistinguishability follows this list).
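If 'no observable difference' is to mean something measurable, one simple framing (my assumption, not part of the Meta work) is a detection experiment: stage a series of games where the bot relay is present in half of them, ask reviewers to call it, and test whether they beat chance. A minimal sketch using a one-sided binomial test:

```python
from math import comb

def detection_p_value(correct, trials, chance=0.5):
    """Probability of at least `correct` successful 'spot the bot' calls
    in `trials` staged games if reviewers were guessing at `chance`.
    A small p-value means the bot is detectably non-human."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# Reviewers call the bot correctly in 13 of 20 staged games.
print(f"p = {detection_p_value(13, 20):.3f}")
# p = 0.132: not enough evidence reviewers can tell artificial from human
```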
My conclusions:
- Well done to the Meta team: this is a real step forward
- Don't worry, data governance people: this isn't a real step forward
- If Cicero breaks into a social group and acts within that group during a controlled, embodied Diplomacy session, then the Meta team all deserve promotions and the data governance profession will need new tools