So, it’s a well-known fact that Philip K. Dick predicted everything, as a by-product of his desperate need to write for money. His particular genius wasn’t in merely predicting future science or writing thrilling adventures, though. He also predicted the social consequences of those future changes, and foresaw the different kinds of minds that might arise. And he predicted something like the LLMs we today call AIs too, albeit in an unfamiliar form.
The story I’m focusing on today is The Golden Man. You can read it over here. It has a traditional Dick setting of generic ’50s–’70s USA but, unusually for him, could fit straight into the Marvel universe, as the blurb text explains:
The powers of earth had finally exterminated the last of the horrible tribes of mutant freaks spawned by atomic war. Menace to homo sapien supremacy was about ended — but not quite. For out of the countryside came a great golden, godlike youth whose extraordinary mutant powers, combining the world’s oldest and newest methods of survival, promised a new and superior type of mankind…
So far, so Magneto. But the mutant, Cris, whilst he looks gorgeous, can’t control magnetism or read minds. His power is to accurately imagine all possible near futures as tableaux, looking at the time-space continuum from the outside, and to follow the one that is best for him.
He was always moving, advancing into new regions he had never seen before. A constantly unfolding panorama of sights and scenes, frozen landscapes spread out ahead. All objects were fixed. Pieces on a vast chess board through which he moved, arms folded, face calm. A detached observer who saw objects that lay ahead of him as clearly as those under foot.
Right now, as he crouched in the small supply closet, he saw an unusually varied multitude of scenes for the next half hour. Much lay ahead. The half hour was divided into an incredibly complex pattern of separate configurations. He had reached a critical region; he was about to move through worlds of intricate complexity.
He concentrated on a scene ten minutes away. It showed, like a three dimensional still, a heavy gun at the end of the corridor, trained all the way to the far end. Men moved cautiously from door to door, checking each room again, as they had done repeatedly. At the end of the half hour they had reached the supply closet. A scene showed them looking inside. By that time he was gone, of course. He wasn’t in that scene. He had passed on to another.
The next scene showed an exit. Guards stood in a solid line. No way out. He was in that scene. Off to one side, in a niche just inside the door. The street outside was visible, stars, lights, outlines of passing cars and people.
In the next tableau he had gone back, away from the exit. There was no way out. In another tableau he saw himself at other exits, a legion of golden figures, duplicated again and again, as he explored regions ahead, one after another. But each exit was covered.
In this, he is conscious in a different way from us, lacking emotional responses and social skills, including communication. He is little more than a beast, bent on survival and reproduction: he doesn’t understand what other people are doing, only what he needs to do to survive, following what he instinctively knows is the best path:
In one dim scene he saw himself lying charred and dead; he had tried to run through the line, out the exit. But that scene was vague. One wavering, indistinct still out of many. The inflexible path along which he moved would not deviate in that direction. It would not turn him that way. The golden figure in that scene, the miniature doll in that room, was only distantly related to him. It was himself, but a far-away self. A self he would never meet. He forgot it and went on to examine the other tableau.
Now, this Quanta piece prompted the thought that LLMs think more like the Golden Man than like real-world humans, given the way they use prediction derived from vast datasets to simulate comprehension:
Unlike humans, LLMs process language by turning it into math. This helps them excel at generating text — by predicting likely combinations of text — but it comes at a cost. “The problem is that the task of prediction is not equivalent to the task of understanding,” said Allyson Ettinger, a computational linguist at the University of Chicago.
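As a toy illustration of what “predicting likely combinations of text” means in practice, here’s a crude bigram counter in Python – nothing like the scale or architecture of a real transformer LLM, but the same principle in miniature: it predicts plausible continuations purely from co-occurrence counts, with no model of what any of the words refer to.

```python
# A deliberately crude next-word predictor: pure co-occurrence statistics,
# with no representation of what roses, flowers or colours actually are.
from collections import Counter, defaultdict

corpus = (
    "the rose is a red flower . the rose smells sweet . "
    "the tulip is a red flower . the violet is a blue flower ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("red"))   # 'flower' - statistically likely, nothing more
print(predict_next("rose"))  # 'is' (ties broken by first occurrence)
```

A real LLM does something vastly more sophisticated than this, but the quoted point stands: the training objective is prediction, not comprehension.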
So these Golden AIs can replicate speech perfectly – they have been trained on huge language datasets to accurately predict which words go with which – but because of the way they’ve been trained, they effectively skip over certain words, which precludes understanding beyond the merely statistical. That includes stop words such as ‘not’, which is why LLMs can’t reliably handle negation…
“…why can’t LLMs just learn what stop words mean? Ultimately, because “meaning” is something orthogonal to how these models work. Negations matter to us because we’re equipped to grasp what those words do. But models learn “meaning” from mathematical weights: “Rose” appears often with “flower,” “red” with “smell.” And it’s impossible to learn what “not” is this way.”
…After all, when children learn language, they’re not attempting to predict words, they’re just mapping words to concepts. They’re “making judgments like ‘is this true’ or ‘is this not true’ about the world,” Ettinger said.
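To make that concrete, here’s a minimal sketch – my own illustration rather than anything from the Quanta piece – of how a statistics-first view of text loses negation. LLMs don’t literally delete stop words, but once the low-signal words are stripped from a bag-of-words view, an assertion and its denial become indistinguishable. (The stop-word list below is hardcoded for brevity; classic English stop-word lists, such as NLTK’s, really do include ‘not’.)

```python
# A minimal sketch of why negation carries so little statistical signal.
# The stop-word set is a hardcoded, illustrative subset of a typical list.
STOP_WORDS = {"the", "a", "is", "this", "does", "not", "no"}

def content_words(sentence):
    """Strip stop words, leaving only the 'contentful' tokens."""
    return [w for w in sentence.lower().split() if w not in STOP_WORDS]

print(content_words("this rose is red"))      # ['rose', 'red']
print(content_words("this rose is not red"))  # ['rose', 'red'] - the negation has vanished
```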
And, famously, LLMs don’t understand truth – possibly because, to humans, “the real world” is immanent and intrusively constant, so we’ve never had to codify the distinction between what is real and what is merely said into our language, whereas to LLMs it’s all equally valid reportage.
Does this distinction mean that these machines are not conscious (whatever that contentious word means)? No: it neither rules consciousness out nor supports it, since we don’t have a good concept of the foundations of consciousness. But it is clear that they are very much not like us – that their experience of the world diverges at a fundamental level because of the way they learn.
It’s plausible that LLMs, with a large enough dataset or other alterations to how they process language, will overcome this weakness and learn to produce truth-y answers to questions. But because they’re merely predicting, not understanding, there is no intelligence there to engage with – just a statistical model derived from the many past humans and all their writings. When you talk to a chatbot, you talk to the unconscious dead, but also (plausibly, horribly) to our successors. Successors that, as Ted Chiang pointed out this week, are currently amoral tools of capitalism, not necessarily coherent actors in their own right.
Later, Dick talked about his short story and about the things he felt were going to succeed us – comments which could equally apply to fictional mutants and real-world AGIs:
Here I am saying that mutants are dangerous to us ordinaries, a view which John W. Campbell, Jr. deplored. We were supposed to view them as our leaders. But I always felt uneasy as to how they would view us. I mean, maybe they wouldn’t want to lead us. Maybe from their superevolved lofty level we wouldn’t seem worth leading. Anyhow, even if they agreed to lead us, I felt uneasy as to where we would wind up going. It might have something to do with buildings marked SHOWERS but which really weren’t.