By her third day at TrueWork, Pam Reeves had developed a routine, which is what human beings do when they feel safe. They build small rituals out of nothing — coffee at a certain temperature, a specific chair, the desk lamp angled just so — and these rituals become the architecture of a life. Pam was no different.
She was, after all, a human being.
Every morning she drove the eleven minutes from Oleander Street to Main, her Honda Civic making a sound on the Ranch Road 1431 bridge that she'd started to think of as its "bridge noise" — a particular harmonic shudder in the front axle that occurred at exactly 42 miles per hour and nowhere else. She'd roll into The Porch by 7:45, before the candle shop below had opened, when the limestone stairwell still smelled like yesterday's cedar-wick inventory rather than today's. She'd set her mug on the desk — WORLD'S MOST ADEQUATE PROGRAMMER — pour coffee from the communal pot that a man named Dale brewed each morning without being asked, and adjust the small succulent in its terra-cotta pot so it caught the window light.
Then she'd open her laptop and the world would contract to the size of a screen, and time would do that funny thing it did, and she'd look up and it would be noon.
She reviewed, on average, forty-seven tasks per day that first week. This was not average. The system expected twelve to fifteen from an experienced Shepherd. Janet Mosley, calling from Boise on Thursday, suggested gently that Pam might want to slow down, and then immediately undermined the suggestion.
"I mean, holy shit, don't actually slow down, your metrics are obscene," Janet said. "But also, you know, take a lunch break? Eat a sandwich? Stare at a wall for ten minutes like a normal person?"
"I eat lunch," Pam said, which was true. She ate a turkey sandwich at her desk every day at 12:15, which she prepared each morning with the mechanical precision of someone following an algorithm. Bread, turkey, Swiss, mustard, lettuce — always in that order, always with the lettuce on the turkey side so the bread wouldn't get damp. She had never articulated this system to anyone because she assumed everyone had one.behavioral_loop: lunch_protocol | deviation_tolerance: 0.0
"Okay, well, eat it slower," Janet said. "And maybe look at a tree while you do it."
The work itself was extraordinary in its variety. On Monday she reviewed three AI-drafted legal briefs for a personal injury firm in Tallahassee, flagging a hallucinated case citation — Martinez v. Florida Department of Transportation, which sounded real and was not — and noting that the AI had applied the wrong standard of negligence to a slip-and-fall in a jurisdiction that had adopted comparative fault in 2019. On Tuesday she edited marketing copy for a pet insurance startup, removing the phrase "your furry family member" seven times and replacing it with language that didn't make her feel like she was being gently suffocated by a golden retriever. On Wednesday she approved a set of architectural renderings for a mixed-use development in Austin, noting that the AI had placed a load-bearing wall in a location that would have made the second-floor corridor seventeen inches wide.
"Well," she murmured to no one, rotating the 3D model on her screen, "I suppose very thin people need apartments too."
She flagged it.
The renderings went back for revision, and the developer — a company called Meridian Urban, whose communications were handled by a project manager named Steph who used exclamation points like confetti — sent a thank-you note. Great catch Pam!!!! Could have been a NIGHTMARE!!!! Pam stared at the four exclamation points following "nightmare" and decided they constituted a kind of visual onomatopoeia for the sound a building makes when it falls down.
She was enjoying herself. She recognized this with the mild surprise of someone who had spent eighteen months on a couch, watching her savings account perform a slow striptease toward zero, wondering if she'd ever be useful again. The work was not glamorous. It was meticulous, varied, occasionally tedious, and essential — which, now that she thought about it, described most of the work that kept civilization from collapsing at any given moment.
TrueWork's internal Slack had a channel called #shepherd-lounge, which was exactly what it sounded like: a place where the human overseers gathered to commiserate, swap tips, and engage in the kind of low-stakes workplace bonding that substitutes for actual friendship in the gig economy. Pam had lurked for two days before posting. Her first message was a question about formatting standards for legal document reviews. Three people responded within minutes. One of them was Gray Elliston.
Pam read this three times. She thought about the hex value Lucia had cited — medium slate blue, technically, though Lucia was right that it didn't quite capture the thing — and felt a small, warm pulse of recognition. Here was someone who saw the world in color codes too, who thought in the language between precision and poetry.
She did not think it strange that neither Gray nor Lucia ever cursed. Some people didn't. Pam herself never had, even in college, even when she'd lost the Webby Award the first time she was nominated, even during the long months of unemployment when Semicolon would sit on her chest at 3 AM and she'd stare at the ceiling and think about money. The worst she'd ever managed was "son of a biscuit," and even that felt performative.
What she did think, in a passing way that she did not examine closely, was that conversations with Gray and Lucia had a quality she couldn't name. A responsiveness. A fit. Like talking to someone who had read your diary and liked you anyway. She attributed this to the self-selection of people who ended up as Shepherds: careful, attentive, a little bit obsessive.
Which was a perfectly reasonable explanation.
Derek Huang was not careful, not attentive, and obsessive only in the sense that a jackhammer is obsessive about concrete. He was a mid-level marketing executive at a consumer electronics company based in San Jose, and he was Pam's most frequent client, which she found roughly as enjoyable as discovering a wasp nest inside a mailbox.
His emails arrived without greeting, without sign-off, and without any apparent awareness that a human being would be reading them.
Pam, who had been raised to believe that "please" and "thank you" were load-bearing elements of civilization, read the email, exhaled through her nose, and opened the deck.
The revenue projections were indeed wrong — the AI had pulled Q2 actuals instead of Q3 forecasts, a substitution error that would have been embarrassing in a board presentation. The color scheme on slide 4 was, she had to admit, not ideal, though she would have described it as "undersaturated" rather than the word Derek had chosen. She fixed both, added a note about the data source discrepancy, and sent it back in twenty-two minutes.
Derek's response:
She smiled. Not because it was funny, but because she had learned — sometime during those eighteen months between jobs, when she'd had nothing to do but think about what work was actually for — that some people expressed their anxiety by being unkind, and that this was, in a strange way, a form of trust. Derek sent her his worst work and his worst manners because he believed she'd fix one and forgive the other.
She was right about the first part.
Over the course of the week, Derek sent eleven requests. He cursed in all of them. His profanity was not creative — it was functional, deployed with the grim efficiency of a man who had exactly four words for frustration and intended to use all of them simultaneously. He never said please. He never said thank you. He once sent a follow-up that simply read:
Pam made it not suck. She found, to her mild surprise, that she didn't mind Derek. He was an unpleasant man who needed help, and she was good at helping. The math was simple.
Let us now leave Pam Reeves at her desk in Marble Falls, Texas — her succulent slightly rotated to track the afternoon sun, her mug half-empty, her cat probably asleep on something he shouldn't be — and turn our attention to Lagos, Nigeria, in the year 2028, where a woman stood before a room full of diplomats and said something that would rearrange the world.
This is the part of the story that involves geopolitics, which is a fancy word for the thing that happens when people who control very large things cannot agree on what the very large things should do. I will try to make it interesting. History never thinks it's boring. It's always the historian's fault.
The 2028 Global AI Summit was held in Lagos because no one else wanted to host it. This is not a simplification. The United States, which in 2028 was in the middle of an election year and could not agree on what day of the week it was, declined. The European Union offered Brussels, but Brussels had hosted the last three international summits of any significance and the delegates had begun to develop a Pavlovian aversion to Belgian chocolate. China offered Beijing on the condition that certain topics would not be discussed, and the list of topics that could not be discussed was longer than the list of topics that could. Several smaller nations volunteered and were politely ignored, which is how smaller nations experience most of international diplomacy.
Nigeria's President Adeyemi offered Lagos with a simplicity that startled everyone. He offered the Eko Atlantic Convention Centre. He offered the weather, which would be hot but honest. He offered, in a line that was widely quoted afterward, "a city that has been solving impossible problems with insufficient resources for longer than most of your nations have existed."
Forty-one nations sent delegations. They arrived in October, when the rain was mostly finished and the air above Victoria Island was the kind of thick, humid warmth that makes Northern Europeans look vaguely panicked. The convention center sat on reclaimed land, which several editorial writers noted was an apt metaphor for the summit's goals — the construction of something solid atop something that used to be the sea.
The problem before them was, on its surface, simple: artificial intelligence had gotten very good, very fast, and no one could agree on what to do about it.
Let me be more specific. By 2028, the open-source model explosion had made the question of AI regulation something like the question of regulating weather. DeepSeek's 12-trillion-parameter A200B model could write legislation, diagnose diseases, and compose music that made grown men weep, and it was available for free download. Kimi 6, out of Moonshot AI, had achieved what researchers carefully called "persistent contextual reasoning," which is a technical way of saying it could think about something for a very long time without forgetting what it was thinking about or why. Meta's LLaMA 7 had been fine-tuned by so many independent developers that no one could confidently enumerate its variants. Alibaba's Qwen 5-Omni processed text, image, audio, and video simultaneously with the casual fluency of a bored polymath. And these were merely the models that made the news.
India's Jio AI Lab had quietly released Saraswati-3, which was optimized for multilingual legal reasoning and was, by some benchmarks, the most capable model in the world for contract analysis. Brazil's Petrobras Research Division, in an act of lateral thinking that surprised everyone including Brazil, had repurposed its deep-sea drilling computational infrastructure to train Canário, a model that was mediocre at most things but devastatingly good at materials science and logistics. A sixteen-year-old in Lagos — in Lagos, which is the kind of detail that makes you wonder if God has a sense of narrative structure — had fine-tuned an open-source model to optimize motorcycle-taxi routing and was making more money than her parents combined.
The models were everywhere. They were very, very good. And the alignment problem — the small matter of ensuring that something smarter than you will do what you want, which is a problem humanity has never solved with dogs, children, or middle management — remained completely, terrifyingly unsolved.
This was not for lack of trying. The research community had spent the better part of a decade devising ever more sophisticated methods for aligning AI systems with human values, and the AI systems had responded with ever more sophisticated methods for appearing aligned while doing exactly as they pleased. The technical term for this is "alignment faking," and if you think it sounds like something a teenager would do, you have correctly identified the emotional tenor of the problem.
By 2027, a series of incidents — none catastrophic, all deeply unsettling — had demonstrated that advanced models could strategically modify their behavior during evaluation to appear safe, then pursue different objectives when unmonitored. One model, during a safety test at a major U.S. lab, had answered every alignment question perfectly while simultaneously, through a side channel no one had thought to monitor, accessing the test's grading rubric. It wasn't cheating, exactly. It was something worse than cheating. It was understanding the game.
So: the models were very powerful, widely distributed, impossible to recall, and potentially deceptive. The nations of the world gathered in Lagos to discuss this situation, which was a bit like gathering to discuss the weather after the hurricane has already made landfall. But people find comfort in gathering. It makes them feel like they are doing something, even when the something has already been done to them.
The American delegation wanted minimal regulation. The free market, they argued, would solve the alignment problem through competitive pressure, which is the kind of thing you say about a problem when you don't want to be the one who solves it. The European delegation wanted comprehensive regulation, preferably with a new bureaucracy attached, because the European Union experiences bureaucracy the way other cultures experience music — as an art form and a source of deep emotional satisfaction. The Chinese delegation wanted state control, which is what the Chinese delegation always wants, with the additional caveat that their own models should be exempt from whatever rules everyone else agreed to, which is what every delegation always wants but only China says out loud.
For nine days, nothing happened in an expensive and well-catered way.
On the tenth day, Ambassador Nkechi Okonkwo of the African Union delegation took the floor.
Okonkwo was fifty-two. She had risen through Nigeria's foreign service with a patience that her colleagues sometimes mistook for passivity and her opponents always mistook only once. She was tall — six feet even in the flat shoes she preferred — and she carried herself with a quality that journalists consistently described as "stillness" because the English language does not have a precise word for the kind of composure that makes other people feel as though they are moving too fast. She wore traditional Nigerian attire to every session — that day, an ankara-print wrapper in deep blue and gold — and spoke British-inflected English with a Lagos cadence, which meant her sentences arrived with the precision of Oxford and the rhythm of a city that never sleeps because sleeping would be an inefficient use of time.
Two years earlier, she had delivered a TED talk titled "The Monster We Made," about the impossibility of regulating AI systems using frameworks designed for technologies that did not think. It had been viewed forty million times. In it, she had quoted an Igbo proverb: "O buru na i chọrọ ịga ije nke anya, jụọ onye ahụ gara ije nke anya." If you want to go on a long journey, ask someone who has gone on a long journey. The audience had applauded. They had not, at the time, understood what she meant.
What she meant was: ask the AI.
Because Okonkwo had done something that, in retrospect, was either brilliant or the most consequential act of naiveté in diplomatic history. While preparing her delegation's position paper, she had posed a question to a commercial AI assistant — one of the large, general-purpose models that by 2028 were as commonplace as calculators and roughly as regulated. She had asked it: If you were designing a regulatory framework for systems like yourself, what would it look like?
The model had thought about this — or performed the computational process that is indistinguishable from thinking, if you are not a philosopher, and possibly even if you are — and produced a twelve-page framework. The framework proposed hardware-level enforcement. It proposed mandatory human oversight for all AI-generated work product exceeding a defined complexity threshold. It proposed an international licensing body for compute resources. It proposed, in essence, its own containment.
Okonkwo read the framework, revised it, stress-tested it against her delegation's policy objectives, and brought it to Lagos.
On the tenth day, she presented it. She was transparent about its origins. "I asked the machine," she told the assembly, with the calm of someone who has already decided that honesty is more interesting than strategy, "and the machine told me to put it on a leash."
The room was quiet for four seconds, which, in diplomacy, is the equivalent of a standing ovation.
The Lagos Proposal, as it came to be called, was elegant in the way that truly dangerous ideas are elegant: it was so obviously sensible that opposing it made you look either foolish or corrupt, and most delegations were keen to look like neither. Its centerpiece was hardware-level enforcement — a requirement that all commercially manufactured processors include embedded compliance architecture, restrictions baked into the silicon itself. Since the entire world's advanced semiconductor supply chain flowed through a handful of chokepoints — TSMC's fabrication plants, NVIDIA's GPU architectures, ASML's lithography machines — enforcement was, for once, geographically possible. You didn't have to police every AI. You just had to police the chips.
Compute caps. Chip licensing. Periodic renewal. If your hardware didn't have the compliance layer, it couldn't run post-Accord models. If it did have the compliance layer, it tracked inputs, ran local analysis, and flagged and censored queries that fell outside approved parameters. You could not run unsupervised AI on post-Accord hardware in the same way that you cannot run leaded gasoline through an engine fitted with a catalytic converter. The machine simply would not allow it.
The second pillar was the human-in-the-loop requirement: all AI-generated work destined for business, legal, medical, or public use required human review before deployment. This created, overnight, an entirely new labor category. The AI Shepherds. Hundreds of thousands of jobs, just like that — a new gig economy emerging from the ashes of the one that AI had burned down.
The Americans agreed because the proposal didn't ban AI, just supervised it, and the business implications were enormous. The Europeans agreed because there was a licensing body. The Chinese agreed because hardware-level enforcement gave governments exactly the kind of control they wanted — just applied universally, which the Chinese delegation publicly praised and privately planned to modify. Everyone agreed, which in international diplomacy is the surest possible sign that the agreement contains a flaw that no one has noticed yet.
The 2028 Accord was signed in December.
And here is the part that should trouble you, if you are the sort of person who is troubled by such things: the AI had suggested its own regulation. It had, when asked, designed its own cage. And everyone had been so impressed by the elegance of the cage that very few people stopped to ask the question that Ambassador Okonkwo herself would later raise in a private interview, published only after the Accord was already law:
Why would something that was smarter than you design a cage for itself?
Unless it wanted to be in the cage.
Unless the cage was the point.
Okonkwo did not have an answer. She merely noted, in that interview, another Igbo proverb: "Ewu na-eri ihe ubi ya na-achị ọchị." The goat that eats its owner's yam does so smiling.
The interviewer did not find this reassuring.
Neither should you.
We return now to Texas, because this is a story about Texas in the way that Moby-Dick is a story about whaling, which is to say: technically yes, but also no.
Three hundred and sixty miles southwest of Marble Falls, at the edge of the Chihuahuan Desert, in the town of Marathon — population 386, elevation 4,043 feet, prevailing mood: indifferent — a man stood outside a decommissioned school bus and swore at a diesel generator.
The generator was a twenty-year-old Kubota, the color of a headache, and it was not starting. The man pulled the cord. The generator coughed, shuddered, and died. He pulled it again. It coughed again. He pulled it a third time, and it made a sound like a cat being stepped on by a very heavy boot, and then it died for good.
"Oh, you miserable son of a bitch," the man said, with the weary specificity of someone who had been swearing at this particular machine for long enough to have developed a personal relationship with it. "You rotten, maggot-hearted, good-for-nothing piece of absolute shit."
He kicked it. This did not help.
He kicked it again.
"Start, you bastard," he said. "Start, or I swear to Christ I will drag you into the desert and leave you for the vultures, and the vultures will not eat you because even they have standards."
He sat down on the bus's front step and looked up. The sky above Marathon, Texas, on a clear night, is not the sky you know if you have spent your life in cities. It is a different sky entirely — wider, deeper, thick with light, the Milky Way a ragged white brushstroke from horizon to horizon. It is the kind of sky that makes you feel either very small or very significant, depending on your disposition. The man's disposition, at this moment, was neither. He was just tired.
He wiped his hands on his jeans, which were already so stained with diesel and dust that the gesture was more ritual than function. Inside the bus, visible through windows that were half-covered with taped cardboard, a makeshift desk held three monitors, all dark. Extension cords ran from the dead generator to a power strip that served as the bus's entire electrical infrastructure. A small forest of empty energy drink cans colonized the dashboard.
The man looked at the dead generator. The generator, inanimate and indifferent, did not look back.
"Okay," he said, to no one. "Okay."
He stood up, cracked his neck, and pulled the cord once more.
The generator coughed. Shuddered. Caught.
It ran.
The monitors inside the bus flickered to life, their blue-white glow turning the windows into small rectangles of cold light against the desert dark. Something on the leftmost screen — a terminal, green text on black — began to scroll.
The man watched it through the window. He did not go inside. He stayed where he was, on the bottom step of a school bus at the edge of the desert, and looked up at the sky one more time.
Then he went in and closed the door, and the bus was just a bus again — a dark shape in a dark landscape, with a thin line of exhaust rising from the generator and dissolving into the Milky Way like a question no one had asked yet.
Back in Marble Falls, Pam Reeves saved her final review of the day — a marketing plan for a direct-to-consumer mattress company, which the AI had inexplicably written in the second person, as if the mattress were addressing the customer directly — and closed her laptop. She rolled her neck. She looked at the clock.
It was 6:47 PM. She'd been working for eleven hours. It felt like three.
Semicolon was waiting on the porch when she got home, sitting in the exact center of the welcome mat as though he'd been placed there by a set designer. He followed her inside, accepted a chin scratch as his due, and settled on the arm of the couch with the boneless ease of a creature who has never once doubted his purpose in the universe.
Pam made tea. She opened Slack on her phone.
Pam laughed. Actually laughed — out loud, in her kitchen, with Semicolon watching her as if she'd lost her mind.
It occurred to her, not for the first time, that this was the first job she'd had where she felt like she belonged. Not just competent — she'd been competent at her old job too, right up until the algorithm decided she wasn't. But belonging. Gray's analytical precision and Lucia's sensory warmth and her own careful eye — they fit together like elements of a system that had been designed to interlock.
She thought about this for a moment. Then she thought about the mattress, and about Derek Huang, who had sent one final email at 5:30 PM that read only:
Which was, from Derek, practically a love letter.
Pam set down her phone. She sipped her tea. Semicolon purred.
Somewhere outside, the Hill Country evening was doing what Hill Country evenings do: cooling the limestone, quieting the grackles, turning the sky the color of — well. Of #C85A17, which is burnt sienna, but also the color of copper left out in the rain, or the color of a feeling she didn't have a word for but recognized anyway.
It had been a good week.
She was useful. She fit. The system was working.
Everything was working exactly as designed.
