An astronaut in deep space finishes up some repairs to the parabolic antenna on his spacecraft’s exterior. Through his helmet’s microphone, he commands the ship’s controlling supercomputer, HAL 9000, “Open the pod bay doors, HAL.” A second later he gets a calm, cold response in his helmet: “I’m sorry, Dave, I’m afraid I can’t do that.”

This chilling scene of machine overriding man came in 1968 in Stanley Kubrick’s “2001,” the landmark sci-fi film that first sowed nightmares of ruthless artificial intelligence (AI) in the public imagination.

Well, for some of us: Flash forward to 2004, and the co-founder of hip new internet venture Google, Sergey Brin, tells an interviewer how “the ultimate search engine” would be a lot like HAL. “Hopefully,” said Brin, ever the optimist, “it would never have a bug like HAL did where he killed the occupants of the spaceship. But that’s what we are striving for, and I think we’ve made it part of the way there.”

Yesterday’s nightmares morphed into tomorrow’s dreams? Maybe. Another sci-fi film, 2009’s “Moon,” imagined GERTY, a less rogue, more helpful AI of the type Brin would no doubt approve. But even a companion AI that purrs “I’m here to keep you safe, Sam” just doesn’t sound that reassuring when it’s voiced by Kevin Spacey. Sam, the film’s sole lunar contractor, winds up mad as a hatter, and the film hints that GERTY is really looking after his employer’s interests.

You have to wonder: Just about every movie that’s ever explored AI — including the latest, “Ex Machina,” on release this month — has left us with dire warnings about opening Pandora’s box, yet the techies seemingly haven’t got the message.

Just this week, The Japan Times ran an article headlined “Tech moguls declare era of AI,” in which IBM CEO Ginni Rometty stated that within five years “cognitive AI will influence every decision made.”

“Ex Machina” features a beardy tech CEO, played by Oscar Isaac, who says, “one day the AIs will look back at us the same way we look at fossil skeletons.” That reflects the dogma of Silicon Valley, heavily influenced by the cult-like beliefs of futurist (and former Google engineer) Ray Kurzweil. Kurzweil insists that “the singularity” — the moment when superintelligent machines reach some kind of critical mass and emerge into their own consciousness, like Johnny Depp in “Transcendence” — will happen around 2045, and that it will be “the destiny of the human-machine civilization.” Vernor Vinge, the sci-fi author and theorist who first promoted the notion of the singularity, has said that once we create super-intelligence, “shortly after, the human era will be ended.”

Remember “The Terminator”? That’s the movie where a national security supercomputer called Skynet attains self-awareness after spreading into millions of servers. When people freak and try to pull the plug on it, Skynet — acting on the imperative that self-preservation is essential to performing its mission — launches nuclear strikes to take out humanity first. In a perfect execution of military logic, Skynet has to destroy the village in order to save it.

Ah yes, but that’s just Hollywood, you say. Who would actually be stupid enough to let machines have the power to make such critical decisions? Try asking Wall Street, where so much of the trading is now automated that the exchanges maintain circuit breakers for when the algorithms spiral into a downward sell-off. Or ask the U.S. National Security Agency, which is working on a top-secret cyber-weapon system called MonsterMind that — according to Edward Snowden — can intercept any digital communication within the U.S. and automatically launch cyber-strikes against any identified threat.

Then there’s Google — now known as Alphabet — which has buried itself under layers of corporate entities, much as shady military contractor Blackwater once did, and whose entire business model seems to be, “Wouldn’t it be cool to be Skynet?” As if algorithmically harvesting all our personal data and scanning every word entering or leaving Gmail weren’t scary enough, Google has acquired AI developer DeepMind, whose AlphaGo program, built on deep neural networks, recently made headlines by beating Korean go master Lee Sedol, and Boston Dynamics, whose Atlas robot bears an eerie resemblance to a fleshless Terminator.

No need to worry, though. DeepMind and Google have set up an AI Ethics advisory panel to prevent the project from going all dark side … they just refuse to tell anyone who’s on the panel. (Smart money is on Dick Cheney, Sean Parker and/or Megatron.)

Of course, if AI doesn’t annihilate the planet, we might just submit willingly. The best example of that scenario is the movie “Her,” in which hapless Joaquin Phoenix is enchanted by Scarlett Johansson’s sultry-voiced virtual assistant, Samantha. While Apple’s Siri is about as close to passing the Turing test as a washing machine, efforts to produce a human-like AI interface continue apace. Amazon has over 1,000 employees working on Alexa, the voice service behind its Echo speaker, and in her clipped inflection it’s hard not to hear a trace of Mother, the computer aboard the starship Nostromo in the movie “Alien.”

While Alicia Vikander’s svelte android (complete with wetware yoni) in “Ex Machina” may still be just a glimmer in the otaku’s eye, virtual J-pop idol Hatsune Miku is on tour, and Geminoid F, the disturbingly lifelike fembot with 65 different expressions created by Hiroshi Ishiguro, is ready to greet your clients. The “basic pleasure model” of “Blade Runner” surely can’t be far behind. We may not know what a super-intelligence might want, but human nature remains fairly predictable.

