
Just read a sort of matter-of-fact article stating that in the future (tomorrow?) countries will protect their trillion-dollar investments in AI processing centers with nuclear weapons. Now, if this does not remind you of HAL in 2001: A Space Odyssey, or maybe Dr. Strangelove, then you aren’t paying attention.
A country is ready to kill humans on an epic scale to protect the “Big Brain.” If it weren’t so sad, it would be laughable. And who will decide when the moment has arrived to launch the death missiles? Maybe the Big Brain.
I write, or try to write, mystery novels. One hundred percent fiction. I would not dare write something along those lines, because I would worry my readers would not find it believable.
Now, of course, this was not a press release from any government. It was someone speculating on a logical conclusion, based on the scale of the investment and the increasing dependence on megawatt computing power to determine the course of action countries take to defend themselves.
So, it may not be true, but it sure follows logic. Countries have caused human tragedies in the past while protecting minor assets such as bridges or airplanes, or just because they could. The Big Brain will, no doubt, become such a critical part of national security that it will be easy to justify anything to prevent the death of the Big Brain. Just ask the Big Brain!
AI on a massive scale is inevitable. Who would stop it?
*
A certain hypocrisy exists in my tone about AI. I’m using it in many ways, like many people, and finding it intriguing and useful.
I’m old enough to remember the first discussions about computers. These were mostly primitive devices that could count and sort things. This was the 1950s. My brother, Curt, had been drafted into the Navy (yes, there is a story there for another time), and through a testing program meant to determine where he would be most useful, the Navy assigned him to their “state-of-the-art” computer facility. It was the early stages of computing. The public was not told much about what the military was doing with computers, but at the time the military, not IBM, led in advanced use of the technology. That only meant they had advanced further in sorting and counting.
Even then, there was a great deal of concern that “machines” would take over decision making from humans. In fact, they were working on just that. Leap forward some seventy years and you can imagine what is going on now. Maybe it’s good or maybe it’s bad, but it is inevitable: making decisions within seconds based on a massive amount of data is a skill machines excel at, while humans often pause. That pause is the difference between surviving and dying in the scenarios the military studies. Thus, General Buck is no longer the best decision maker; AI is.
*
I once requested an image of lizards in the desert from AI, and one of the lizards had a leg coming out of its head. A glitch. Not a big deal. Oops, was that missile just launched? “Who ordered that?” No one answers.


