About forty minutes into Portal 2's cooperative mode, my friend Christian and I stopped speaking. Not because we were annoyed with each other — we'd been friends for a decade, we were on a voice call, we were ostensibly adults — but because we'd hit a puzzle that required him to stand on a pressure plate on his side of the chamber while I placed a portal beneath a slowly descending laser, and neither of us could see what the other was looking at. We kept shooting portals into each other's faces. We kept activating platforms the other person wasn't ready for. At one point he launched me across the room with an aerial faith plate and I sailed, majestically, past the exit. We both watched it happen in real time. It took about four seconds. Neither of us said anything.
We laughed — eventually, once I'd respawned — and then sat with the slightly uncomfortable realisation that two grown adults who'd known each other for a decade couldn't coordinate a three-step sequence without yeeting each other into a wall. The problem wasn't communication in the usual sense. We were talking the whole time. The problem was something more specific and, honestly, more embarrassing: neither of us had been modelling what the other person actually knew. We were each solving the puzzle in our own head and then acting surprised when the other person wasn't already there.
I'd been so focused on what I could see that I'd never stopped to ask what Christian could see. That gap — the distance between your own perspective and your best guess at someone else's — turns out to be one of the most studied problems in cognitive neuroscience. And Portal 2, almost by accident, is one of the best games ever made for making you feel that gap in your bones.
The Game
Portal 2 was developed by Valve and released in 2011 as the sequel to their 2007 puzzle game Portal. The original was a solo experience: you played as Chell, a test subject navigating the Aperture Science facility under the increasingly sinister supervision of an AI called GLaDOS, using a portal gun that could link two points in space. Walk through one portal, emerge from the other. The mechanic is deceptively simple — the puzzles it enables are not. Portal 2 expanded everything: the single-player campaign runs for roughly eight hours and introduces new mechanics including propulsion gel, repulsion gel, light bridges, and laser redirection. It won the BAFTA Games Award for Best Game in 2012. GLaDOS remains one of the most quotable antagonists in gaming history.
But the part that matters for this post is the cooperative mode, which is entirely separate from the single-player campaign. Two players control robots called Atlas and P-Body, each carrying their own portal gun, working through a sequence of test chambers that are — and this is the critical design detail — impossible to complete alone. Not difficult alone. Impossible. Every chamber has been built around the assumption that one player can see or reach something the other cannot. The game's puzzles are, structurally, an exercise in perspective-taking. You cannot solve them unless you are actively modelling your partner's point of view.
This Shouldn't Be Hard — The Research
In 2003, a cognitive neuroscientist at MIT named Rebecca Saxe published a paper that would become one of the most cited in her field. Saxe and her colleagues had been using fMRI to scan people's brains while they read short stories about characters with beliefs, intentions, and misunderstandings. What they found was a specific region of the brain — the right temporoparietal junction, or RTPJ — that lit up consistently and selectively during this kind of reasoning. When participants thought about what another person knew, believed, or intended, the RTPJ activated. When they thought about the same factual content without the social framing, it didn't. The RTPJ, Saxe argued, was a dedicated neural system for a specific cognitive task: building a model of another person's mind.
This capacity is called theory of mind — the ability to attribute mental states, including beliefs, desires, knowledge, and intentions, to other people, and to understand that those states can differ from your own. It's the cognitive foundation of empathy, of deception, of storytelling, of any complex social interaction at all. Developmental psychologists have been studying it since the 1970s. Children typically acquire a basic version of it between ages three and five. The famous "false belief task," in which a child watches a puppet hide an object and must predict where a second puppet — who wasn't watching — will look for it, is the classic test. Children under four almost always answer incorrectly. They say the second puppet will look where the object actually is, not where the second puppet believes it to be. They haven't yet learned to separate what they know from what someone else knows.
What Saxe's neuroimaging work added was a clearer picture of the effort involved. Theory of mind doesn't feel effortful in the way that long division does — we engage it constantly, automatically, in every conversation. But the brain is working. The RTPJ requires active recruitment. And in a subsequent study published in 2010, Saxe and her colleague Liane Young used transcranial magnetic stimulation — a technique that temporarily disrupts activity in a targeted brain region — to interfere with RTPJ function in participants while they made moral judgements about scenarios involving harm and intent. When the RTPJ was disrupted, participants became significantly worse at integrating intent into their moral reasoning. They judged failed attempts to harm (bad intentions, no actual damage) as more morally permissible, weighing outcomes over intentions. The region wasn't decorative. It was doing real, specific work.
The nuance in Saxe's broader research programme is important here. Theory of mind isn't a single switch that either works or doesn't. It's a skill that exists on a spectrum, that varies across individuals and contexts, and that can be impaired by cognitive load, stress, or simply not attending to it. Most of us, most of the time, are less good at perspective-taking than we think we are. We suffer from what psychologists call the curse of knowledge — once we know something, it becomes nearly impossible to remember what it was like not to know it. We project our own perspective onto others constantly, and barely notice when we're doing it.
The Puzzle Chamber as Brain Scanner
What makes Portal 2's cooperative mode so interesting as a psychological artefact is that it makes the curse of knowledge a level mechanic. In most of the chambers, the two players are physically separated. You might be on a raised platform with a clear view of the exit; your partner might be at ground level, unable to see it. You know where you're going. You know what the solution looks like from your vantage point. And you will, almost certainly, start solving the puzzle from your own perspective before you've stopped to wonder what your partner can see, what they know, and what they're currently trying to do.
The point-pinging system — where each player can tag a location in space with a small marker — is the game's built-in workaround for this, and it's elegant precisely because it acknowledges the problem. You can't always describe what you're looking at in words quickly enough. But you can point. The mechanic essentially forces externalised perspective-sharing into the cooperative loop: you are literally showing your partner what is in your mind.
What struck me, playing through the cooperative campaign over several sessions, was how often the failures were failures of mentalising rather than failures of puzzle-logic. Christian and I would understand the solution individually but still get stuck because one of us was acting on information the other didn't have yet. The puzzle wasn't the obstacle. The gap between our mental models was. And the gap was completely invisible right up until one of us went through a portal backwards at high speed.
This is, it turns out, the structure of a very large proportion of real-world miscommunication. Research on collaborative cognition — how pairs and groups solve problems together — consistently finds that the failures are rarely about intelligence or effort. They're about inadequate model-sharing. Each person has a reasonably accurate picture of the problem; the pictures just don't overlap enough. In a 2014 paper on learner engagement in educational settings, researchers Michelene Chi and Ruth Wylie argued that interactive co-construction — actively building a shared representation of the problem with a partner, rather than simply talking past each other — produces better learning outcomes than even the most diligent solo engagement with the same material. Portal 2 makes co-construction mandatory. You cannot finish the chamber until your mental models are sufficiently aligned, because the chamber won't let you.
Why This Keeps Happening Everywhere Else
The frustrating thing about the curse of knowledge is that expertise makes it worse. The more fluent you become at something, the harder it becomes to remember what it felt like not to know it — and the harder it becomes to model the perspective of someone who doesn't share your knowledge. This is why highly competent people are often surprisingly bad at explaining what they know. It's not that they're withholding. It's that they've genuinely lost access to the confusion that would make an explanation necessary. Teachers know this. Managers know this. Anyone who has ever given someone directions to a place they could navigate blindfolded, only to watch that person end up in a car park in an entirely different postcode, knows this.
It also explains something about why online communication fails so persistently and so spectacularly. Text strips out the non-verbal information — tone, timing, facial expression, the raised eyebrow that means "I have no idea what you're talking about but I'm too polite to say so" — that we normally use to calibrate our model of what someone else is understanding in real time. In a face-to-face conversation, you can watch someone's face and catch the moment they lose the thread. In a text thread or email chain, you're working blind. You send your message from inside your own head, with your own context and your own reading of the emotional tone, and it arrives inside someone else's head where none of that context exists. The portal gun misfires. The other person flies past the exit. Nobody says anything for four seconds.
What To Do With This
The most practical thing Saxe's research implies is also the least glamorous: slow down at the exact moment when you're most tempted to speed up. The curse of knowledge is hardest to fight precisely when you feel most confident — when the answer seems obvious, when you've explained it a hundred times before, when you are absolutely certain the other person must be following along. That certainty is almost always the signal that you've stopped checking. You've started solving the puzzle from your own screen and forgotten that the other person has a completely different view.
In practice, this looks like the portal-pinging habit — and it's genuinely that small. Before you act on something you understand, it costs almost nothing to show your partner what you're looking at. Not to ask for permission, not to deliver a TED talk, just to briefly externalise the model in your own head and check whether it matches the one in theirs. This is what good collaborators do in surgery, in construction, in any domain where the cost of misalignment is high. They narrate what they're about to do — not because the other person is incompetent, but because the gap between two people's mental models is real even when both people are excellent. Sometimes especially when both people are excellent, because then each of them is even more certain they don't need to check.
There is also something in Portal 2 about what happens when you accept, structurally, that you cannot see everything. The cooperative chambers are not fixable by one clever person. No amount of individual skill gets you through a room designed around divided information. The game's architecture assumes incompleteness, and it treats that incompleteness not as a problem to be overcome through sufficient effort but as the basic condition of any collaborative endeavour. You are always, in some sense, operating with a partial view. The question is whether you're building a working model of what the other person can see — or whether you're proceeding as though they can see exactly what you can.
The takeaway: Theory of mind — the ability to model another person's beliefs and perspective — is effortful, trainable, and impaired by overconfidence in your own viewpoint. Rebecca Saxe's neuroimaging research showed that a dedicated brain region, the RTPJ, handles this work; it can be disrupted, and its quality varies. Portal 2's cooperative mode is, quietly, one of the most accurate simulations of what perspective-taking failure looks like in practice: two capable people, working from incomplete and misaligned mental models, launching each other into walls.
Play It With Someone You Want to Understand Better
The single-player Portal 2 campaign is excellent and worth playing on its own terms — the writing is genuinely funny, GLaDOS is one of the greatest fictional antagonists in any medium, and the puzzle design earns every award it ever received. But if you want the experience this post has been circling, play the cooperative mode with someone you know well. Not a stranger in matchmaking. Someone whose communication you think you already understand, ideally someone you'd like to continue being friends with afterwards.
You will discover, fairly quickly, the precise shape of your blind spots. Where you assume without checking. Where you act without narrating. Where your model of what they know and your model of what you know have silently gone their separate ways. The chamber will show you. It is not subtle about this. And then — if you're patient, if you ping the location, if you line up the timing and actually wait for the other person to be ready — the door will open. Not because you were individually clever, but because you finally built the same picture at the same time. Which, it turns out, is harder than any puzzle Valve ever designed.