Artificial intelligence is always a hot topic among science fiction fans. Can anyone argue that, although 2001: A Space Odyssey’s HAL has good intentions, he nonetheless delivers an interesting story, albeit that of a sociopathic nightmare? And a discussion centered on machines with humanlike attributes couldn’t be complete without Sonny, I, Robot’s anthropomorphic servant droid who proves a heart within a machine is worth saving.

How close has humanity come to creating a true representation of itself within a computer system?
Take Apple’s Siri. As creepy as it sounds, Siri is the digital equivalent of a best friend. Ask it a typical question such as “How are you?” and it will respond, “I’m fine. Thanks for asking.” Should a user choose a male voice to represent the virtual assistant, things begin to sound eerie and familiar. HAL would be proud.
With Apple’s iOS 9 coming out this September, Siri will be able to predict what a user will ask next. For instance, if a user asks for directions to a particular address, Siri will also suggest places to dine and places to lodge, should the user be so inclined.
Google Maps already has that logic. It goes a step further by providing alternate routes, the option to avoid toll roads, and live traffic conditions.
Artificial intelligence couldn’t be better. But what is the cost of all this AI?

In the years leading up to 2000, computer experts warned of a coming digital apocalypse in which computers would choke and destroy humanity. Obviously, that didn’t happen. The Y2K bug was supposed to usher in a second dark age where electricity would be scarce and hospitals would be full. Again, it didn’t happen. What it did prove, however, was just how fragile computer networks were at the time, and what precautions governments and companies took to prevent an utter catastrophe from occurring.
Could it happen again, only this time for real?
Given the world’s reliance on electricity, would it be hard to imagine a world hit by a global-scale blackout? It happened in 2003 on the northeastern seaboard. In August of that year, the lights went out for several days, leaving 50 million people without power. The “glitch” started with a tree branch in Ohio and spread power surges through the grid all the way across the border to Ontario. Despite all the safeguards in place, the incident still brought two nations to a halt.
How difficult would it be to imagine a routine event causing the downfall of the entire global infrastructure? Look at it a different way: what’s to say that, in an effort to protect the grid, artificial intelligence wouldn’t attempt to circumvent safeguards to preserve life on this planet?
RANGER MARTIN AND THE ZOMBIE APOCALYPSE, on sale now.
RANGER MARTIN AND THE ALIEN INVASION, on sale now.
RANGER MARTIN AND THE SEARCH FOR PARADISE, on sale October 20.
Is artificial intelligence too much intelligence to control our lives? Should we allow it?
I think it depends on who develops it. A military AI is too dangerous to contemplate. An intelligence-agency AI will take intrusion to the ultimate extreme, while a Microsoft AI will need to be patched and rebooted every evening to work properly. A Google AI will never stop talking: ‘When you say holiday in Blackpool, did you mean Liverpool?’
I think we’re a long way from self-aware AI. As long as this stuff relies on code to function, it will never be sentient. Quantum computers might go further, but then they’ll reach a point of such advanced intelligence that they’ll disappear into the next dimension, where they’ll probably feel more at home!
Chris
Somehow a Google AI seems scarier than anything I could contemplate. What kind of sentient life form would come of such an AI? I wouldn’t begin to imagine such a beast taking hold of the entire internet, let alone the world’s entire infrastructure, to make it all more compatible with its programming.
If our electronic infrastructure were completely brought down, we’d be taken back to the days of the Industrial Revolution (maybe). I think that would be more of a concern than a “Machine Uprising,” simply because scientists haven’t built anything that isn’t ultimately dependent on human input/feedback yet. No murderous HAL 9000s (with a conflict in their programming), no Terminators, no See-Threepios, no “Maria” (Fritz Lang’s robot character from the 1927 film Metropolis). Certainly nothing capable of trying to exterminate or delete us (both terms used by Doctor Who’s Daleks and Cybermen).
It would be the wild west all over again. In the early days of the internet, it was just that: the wild west. Users would share things online without a thought that it might be the wrong thing to do. But in the end, regulation closed those loopholes, and we’re now in an age of a regulated internet. Who’s to say it will stay that way? Even more so, who’s to say the internet wouldn’t turn on us through some mysterious virus unleashed on an unsuspecting populace, mutating into a living organism, traveling the wires to hit anything and everything in its path? The cure then would be to pull the plug on the world and start from scratch, reminiscent of Jurassic Park.
Possibly the wild west, but we would still have access to transportation, minus GPS and other bells and whistles, and we would probably have landline telecommunication, unless the satellites malfunctioned.
The internet becoming conscious: I read about a program called “Cyc” that could piece information together and make connections (this was back in the late ’80s, so I don’t know what became of it), so maybe a “conscious internet” IS possible under the right conditions. If it starts getting ideas about godhood and us humans worshiping it, we should power it down. Or pull memory and information modules, like Dave Bowman did with HAL 9000.
I think it will always be something somebody sees as a good idea until someone gets it ‘right’ and things backfire. Humans don’t even treat other humans well, so a sentient computer wouldn’t do any better. How would you stop it from watching The Matrix, Terminator, I, Robot, and anything else that might give it ideas out of context? People joke about aliens not visiting us because they see how violent and environmentally abusive we are. Imagine if a being designed to help humanity and/or the planet viewed the same stuff. Very sci-fi doom and gloom, but it isn’t that far-fetched once you accept that ‘A.I. can happen.’
I believe it can happen. If we give ourselves enough rope, we’ll eventually hang ourselves. Everywhere I go, someone’s holding a device in their hand. What’s to say all this technology won’t one day turn against us and form a sentient life form, one made of protons and electrons? I don’t find it far-fetched that the creation would turn on the creator. Agreed, it can happen.
The charge will be led by the GPS systems. Those things are already trying to kill people.
Got my first GPS attempted murder last weekend: trying to find my way out of a neighborhood, I discovered the road that should have been there was not. I was stranded trying to figure a way out of my jam. Thankfully, once I returned to where I came from, I knew where I was going!
Mine loves to tell me to do U-turns on the highways. Forget that there’s a barrier in the way or, one time, a cliff.
Been there. Has it ever taken you through the “scenic” route to spare you twenty minutes of travel, only for you to find it’s a rocky road that would a) puncture your tires, causing you to swerve into an embankment, or b) put a hole in your gas tank, so the next spark would ignite your vehicle into a fireball?
Not yet. It’s sent me through side streets with tons of red lights to avoid a five-minute traffic jam. The oddest one was trying to get back to Long Island from Upstate New York (right on Lake Ontario): the GPS was determined to send me to Canada. No idea why, but it didn’t stop complaining until we got within a few miles of NYC.