By Thinkman · January 1, 2025
| ENV BURN | AI MATURITY |
|---|---|
| 63/100 → 63/100 | 2.7 → 2.8 |
The Algorithm and the Ancient River
2027–2028: river data meets ancient scripture
[SHARMA FAMILY — Varanasi, India]
Arjun came home for the first time in two years in the winter of 2027 and found his father changed.
Not diminished, but different. Rajan was sixty-seven and moved more slowly, but something in his attention had deepened. He had been spending time with a group of physicists at Banaras Hindu University who were studying quantum coherence in biological systems, the possibility that consciousness involved quantum processes at the neurological level. He had been invited by a professor who had read a paper Rajan had written for a religious philosophy journal and found, to his own surprise, that the pandit was doing better philosophy of mind than most of the academic papers in the field.
"They let a priest into the physics department?" Arjun said.
"They let a physicist into the temple," Rajan corrected. "There's a difference."
Priya was ten in 2027, and had the river in her the way Rajan did — not as a thing she lived beside but as a thing she was continuous with. She swam in it despite Meera's protests. She knew its seasonal moods. She had, independently and without instruction, begun keeping a journal of the river's colour and level and smell and the birds she saw on the banks. The journal was in a notebook with a blue cover. She had filled three of them.
The Ganga's water quality had deteriorated again since 2020's brief pandemic recovery. The government's Namami Gange programme had achieved some real progress — certain factory outflows had been controlled, certain sewage treatment plants had been built — but the population pressure and the upstream glacier melt dynamics were overwhelming the interventions. Rajan stood at the ghat and watched the water. It was not the river of the scriptures. It was not the river of his childhood. But it was the river of his children, and his children were standing beside him looking at it as if they intended to make it something worth inheriting.
[VAN DEN BERG FAMILY — Amsterdam]
Lucas van den Berg was two years old in 2027 and had his father's quality of stillness and his mother's quality of total presence. He watched everything. He said very little. When he spoke, it was complete sentences, carefully chosen.
Pieter had by now made his most significant professional pivot: he had moved from pure financial analysis to a hybrid role advising the bank on AI risk exposure — not the risk that AI systems posed to financial markets, but the risk the bank faced in adopting, deploying, and becoming dependent on AI systems it did not fully understand. He was, in effect, auditing intelligence.
He approached this work with the methodological rigour that was his defining characteristic. He built a framework. He stress-tested it. He published it internally. It was adopted by three other European financial institutions within a year.
Sofie's radiology practice had transformed fully. She spent sixty percent of her clinical time reviewing AI-flagged cases — the scans the system had identified as edge cases, ambiguous, or outside its training distribution. The other forty percent she spent in clinical consultation, teaching medical students, and participating in the hospital ethics committee that had been convened specifically to deal with questions about AI decision-making in diagnosis and treatment. The ethics committee met fortnightly. Pieter attended twice as a guest. He found the questions being discussed were the same questions, in different vocabulary, that he was dealing with in finance.
There was a pattern emerging. Every field that touched AI was arriving at the same question from different directions: what do we do when the machine is usually right but not always, and we don't know when? The answers Pieter was developing in finance, Sofie in medicine, and Arjun Sharma in AI research, the answers Yanmei Chen was studying at Fudan, were all convergent: you need humans who understand the machine well enough to know when not to trust it. The most valuable skill in the new world was not the ability to use AI. It was the ability to know when not to.