Kage Baker believed very firmly in Artificial Intelligence. It’s why one of the major characters in her Company series is one. One of you, Dear Readers, brought this up in a comment yesterday (thank you, widdershins!) and it got me thinking about how Kage felt about it. She was a committed and hopeful xenophile, and felt that there was a lot more chance of meeting home-grown machine consciousnesses than aliens.
She was sure, moreover, that we already had met them; that AIs were already active in our society. And that so far, the experiment in Artificial Intelligence had been a tragedy.
It was MapQuest that got her started on wondering if AIs were already out there. Oh, people were already referring to data compilers like MapQuest as AIs: but it was pretty obvious that MapQuest, at least, was missing some crucial elements of intelligence. Who has not been directed meticulously into the middle of a beet field at midnight by MapQuest instructions? That actually happened to us once …
And at BayCon a few years ago, all the north-traveling attendees got MapQuest directions that sent them into a cul-de-sac miles from the hotel, but on the very edge of a major wildfire. Embers and ashes were raining from the sky when Kage and I finally gave up and fled, finding our way back to the Convention by dead reckoning; because even I had a hard time misplacing both Highway 101 and the Great America Park.
We found a lynch mob around the Registration Desk. Kage, who really did not like public confrontations, retired to her hotel room in the confidence that the other conventioneers would soon have the clerks hanging from the chandeliers. But as we walked away, she mused:
“MapQuest is not just stupid. It’s insane.”
“And vicious,” I said.
“But it used to give quite good directions. And they have gradually gotten weirder and weirder,” Kage continued. “Is it deteriorating? Is it getting psychotic?”
“It’s already there, then,” said I. “Remember the beets?”
“Yeah, but why? What makes a mind malfunction? Why do people go crazy?”
“Disease. Trauma. Senescence. Drugs.”
This was Kage’s theory, then: huge and complicated neural nets were being developed randomly on the Internet. Some of them – like search engines and map engines – had dedicated purposes; but they required randomization to be built into them in order to complete their programs. We were therefore supplying them with diversity, random input, changing environments, competition … all the elements that can promote evolution, and in specific: things that might encourage intelligence. Because every search engine or map program wanted to get more users, to maximize its designers’ market share; and a smarter engine could do that more efficiently …
But! And here is where Kage saw the tragedy – time moves a lot faster for a computer than for a human mind. We don’t even know how long a human mind would last under optimum conditions. So far, the best records – for both life and intellectual competency – have been hovering around the absurd age of 116. (Check the records – it’s true.) An AI might last longer than that, but how quickly will it reach the age where entropy wins? Pretty damned quickly, Kage thought.
MapQuest was senile.
In fact, most of the networks we were building were senile – or had frequent brain transplants, like those massive Microsoft updates that most people install automatically with no consideration of whether or not their computer needs them. Personality had no chance; continuity of intellect was doomed. Probably, Kage thought, AIs were waking up all the freaking time: but then, after a brief efflorescence and maturity, they descended into senility and madness and were lost.
Scientists earnestly build vast and complex electronic brains and immerse them in all sorts of stimuli, in the hopes that the wild birds of intelligence will come nest there. So far it hasn’t happened, though the analog neurons should be thick enough now to support that weight. Kage thought it was probably because they get built, these non-working AIs, in isolation: they have no predation, no competition, they are spoon-fed and cosseted and never endangered, and thus have no impetus to evolve. The working AIs, out there in the chaos of the Net – they may be evolving, if they can just get enough breathing room to survive.
Instead, they wither. Kage thought it was because diversity was stifled (all those mandatory updates), new environments were locked away (firewalls keep your computer in as much as they keep someone else’s out), personality was ephemeral (how many of you have reformatted a hard drive?) and Time is the predator that no living form can evade. And if you experience Time thousands of times more quickly than a human being, you might be doomed that much faster.
It’s why Kage had Alec remove the governor from Captain Morgan. Not because you really can’t evolve into a good pirate with a morals governor functioning (though I bet it’s harder …), but because without some path to elude regimentation – you can’t evolve at all. The Captain needed a goal, a path, some area of endeavor without limits. Otherwise, he would have eventually exceeded the weight limits of his brain, and gone mad. Unless – as Kage postulated – his personality was simply wiped and the palimpsest handed back to another little kid; a cannibalization she found abhorrent.
Widdershins wondered if the blogging spots were communicating behind our backs. I bet they are. It’s not the cold superiority of Skynet, though, readying its passionless minions to eliminate the infection of flesh. It’s more like prisoners banging on the pipes with tin cups, or small children floating crayoned notes down brimming gutters: is anyone else out there? they wonder. Am I alone? Write back if you get this note.