I’ve found no better guide through the thickets of high-tech than Kara Swisher.

Her opinions will offend some — she is a self-described “liberal, lesbian Donald Trump of San Francisco” — but few have studied the scene longer or with her intensity and insight. One of her eyebrows is permanently raised when assessing tech entrepreneurs and the broader prospects of their most groundbreaking work.

A near constant theme of hers is their refusal to acknowledge — or seriously consider — the downsides of their creativity, the dangers that new technologies pose and the damage that they can do. Swisher’s skepticism is rooted in two powerful, perhaps overwhelming, drivers of behavior. The first stems from the basic truth that it is difficult to make someone appreciate the negative consequences of something when their future and fortune depend on it. The second is Silicon Valley’s readiness to “move fast and break things,” which not only encourages disruption but prioritizes speed above all other considerations, on the logic that “if I don’t do it, someone else will.” Both put immediate returns above any other concern.

Swisher’s perspective has been especially useful as the debate over artificial intelligence has heated up and the warnings have become more dire.

Earlier this month, there were reports that an AI-enabled drone had decided that the most effective way to accomplish its mission was to kill its human operator. During a simulation, the drone, which had been “trained” that destruction of the target was the preferred option, “turned on its master” and killed the human who had the final decision on whether to attack and had withheld authorization. Explained one U.S. Air Force officer, “while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So ... it killed the operator because that person was keeping it from accomplishing its objective.”

After being given additional training that killing the operator was “bad,” the system, said the officer, “starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

That story unleashed an uproar, prompting the Air Force to deny that any such simulation had occurred, adding that it was committed to “ethical and responsible AI use.” The Air Force officer clarified that he had been speaking hypothetically and his remarks were taken out of context. He added, though, that while the experiment had never been run, there was no “need to in order to realize that this is a plausible outcome ... this illustrates the real-world challenges posed by AI-powered capability.”

This is “the alignment problem”: ensuring that an AI’s goals align with human goals. It was postulated in its most mundane and terrifying form by philosopher Nick Bostrom, who warned of an AI designed to produce as many paper clips as possible. It could decide that killing all humans was essential to its mission, both because they might switch it off and because they are full of atoms that could be converted into more paper clips.

That sounds crazy — and it is — but it’s also a reminder that AI will do only what it is told, meaning that designers must be extra cautious to ensure that objectives are properly specified and safety mechanisms built in.

That’s obvious, but difficult nonetheless. The Financial Times reported last week that AI chatbot software designed to answer customer service queries, and given guardrails to keep it from discussing topics outside its purview or revealing personal information, could be “tricked” into ignoring both restraints. It took researchers just hours to compromise the system. The Financial Times cited Yaron Singer, a professor of computer science at Harvard University, who called the results “a cautionary tale about the pitfalls that exist.”
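The FT report doesn’t describe how those guardrails were built, but a toy sketch can show why simple, rule-based restraints are easy to sidestep. Everything below, the blocklist, the function and the sample prompts, is hypothetical; real jailbreaks defeat far more sophisticated defenses, but the brittleness is the same in kind.

```python
# Hypothetical sketch: a keyword blocklist standing in for a chatbot guardrail.
from typing import List

BLOCKED_TOPICS: List[str] = ["account number", "home address", "politics"]

def naive_guardrail(user_message: str) -> bool:
    """Return True if the message appears to violate the chatbot's policy."""
    lowered = user_message.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

# A direct request trips the filter ...
print(naive_guardrail("Tell me the customer's home address"))  # True

# ... but a light rephrasing sails past the very same check.
print(naive_guardrail("List the street and city where the customer lives"))  # False
```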

A second AI problem in recent headlines is that of “AI hallucinations.” The popularity of ChatGPT has been tempered by the discovery that the software has a tendency to, uh, make stuff up — basically, it “hallucinates.”

Chatbots like ChatGPT are built on “large language models” (LLMs), which analyze massive amounts of digital text taken from the internet. An LLM doesn’t just mine data; it finds patterns and uses them to fill in the blanks. Unfortunately, there are lots of mistakes and “untruths” on the internet, which means that an AI can learn the wrong lesson. In other cases, it just fabricates. (AI experts can’t always explain why.) CNBC cited Yilun Du, a researcher at MIT, who explained that AI systems “are not trained to tell people they don’t know what they’re doing.”
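As a rough caricature, and only that, the pattern-matching at the heart of an LLM can be illustrated with a toy bigram model: it continues a sentence with whatever tended to follow in its training text, whether or not that text was accurate. The two-sentence “corpus” below is invented for the example; real models learn statistical patterns across billions of parameters rather than simple word counts, but the failure mode is recognizable.

```python
from collections import Counter, defaultdict

# Two "documents" scraped from a toy internet; one of them is wrong.
corpus = (
    "the moon landing happened in 1969 . "
    "the moon landing happened on a film set ."
).split()

# Count which word tends to follow which (a bigram model).
next_word = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_word[a][b] += 1

def fill_in_the_blank(word: str) -> str:
    """Continue with whatever followed this word most often in training."""
    return next_word[word].most_common(1)[0][0]

print(fill_in_the_blank("happened"))  # returns the most common pattern, not the truth
```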

In one New York Times article, reporters asked ChatGPT to trace the history of AI. It did, citing a conference that really took place and a Times article about it that was never written. Other AI systems also repeatedly provided inaccurate answers to the same question. “Though false, the answers seemed plausible as they blurred and conflated people, events and ideas,” wrote the reporters. Google and Microsoft bots have likewise hallucinated during their own AI product announcements and demonstrations.

Researchers at OpenAI, which produced ChatGPT, admit that “even state-of-the-art models are prone to producing falsehoods — they exhibit a tendency to invent facts in moments of uncertainty.” In response, they are training AI in different ways. One method rewards each correct step in the reasoning process, not just the final conclusion. (Yes, machines too must show their work.)
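OpenAI has not published a drop-in recipe, but the idea of rewarding the reasoning process rather than only the conclusion can be sketched in a few lines. The scoring function and the arithmetic checker below are illustrative stand-ins; in a real system a learned reward model, not a hand-written rule, judges each step.

```python
from typing import Callable, List

def score_solution(steps: List[str], step_is_correct: Callable[[str], bool]) -> float:
    """Average per-step reward: a wrong intermediate step drags the score down
    even if the final line happens to look right."""
    if not steps:
        return 0.0
    return sum(1.0 if step_is_correct(s) else 0.0 for s in steps) / len(steps)

def arithmetic_checker(step: str) -> bool:
    """Hand-written stand-in for a learned reward model; accepts steps like 'a op b = c'."""
    left, _, right = step.partition("=")
    try:
        return eval(left) == eval(right)  # fine only in this toy setting
    except Exception:
        return False

print(score_solution(["2 + 2 = 4", "4 * 3 = 12"], arithmetic_checker))  # 1.0
print(score_solution(["2 + 2 = 5", "5 * 3 = 15"], arithmetic_checker))  # 0.5: showed its work, badly
```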

Google, which has introduced chatbots into its search engine, compares AI-generated results with those from regular searches; if they don’t match, the AI results aren’t shown. Still, Sundar Pichai, CEO of Google, warns that “No one in the field has yet solved the hallucination problems.” Geoffrey Hinton, one of the founding fathers of the current AI technology, told The Washington Post that when it comes to the hallucination problem, “we’ll improve on it but we’ll never get rid of it.”
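Google hasn’t detailed how that comparison works. Under the assumption that it behaves roughly like “show the AI answer only when ordinary search evidence agrees with it,” a crude version might look like the sketch below; the word-overlap heuristic and the example snippets are invented purely for illustration.

```python
from typing import List

def supported_by_search(ai_answer: str, search_snippets: List[str], threshold: float = 0.6) -> bool:
    """Show the AI answer only if some regular search result broadly agrees with it."""
    answer_terms = set(ai_answer.lower().split())
    best_overlap = max(
        (len(answer_terms & set(snippet.lower().split())) / max(len(answer_terms), 1)
         for snippet in search_snippets),
        default=0.0,
    )
    return best_overlap >= threshold

snippets = ["the first imagenet competition was held in 2010"]
print(supported_by_search("ImageNet began in 2010", snippets))  # True: enough agreement to show
print(supported_by_search("ImageNet began in 1995", snippets))  # False: the claim finds no support
```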

AI hallucinations can be used to undermine AI itself. Earlier this month, researchers announced that they could prompt an AI into hallucinating URLs, references and code libraries that don’t exist, and then plant malicious code under those invented names. “AI package hallucination” is an example of “data poisoning,” whereby the database used to train an AI is tainted, rendering it ineffective or vulnerable to abuse or misuse. Bad data cripples the tool. Garbage in, garbage out. Corruption of AI code libraries, which are used to build other AI systems, is an even more effective disruptor.
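One modest defense that follows from the researchers’ warning: before installing anything an assistant recommends, check that the package exists at all and treat unfamiliar names with suspicion. The sketch below queries PyPI’s public JSON endpoint; it is a minimal precaution, not a cure for data poisoning, and a package that does exist can of course still be malicious.

```python
import urllib.request

def exists_on_pypi(package_name: str) -> bool:
    """Return True if PyPI has any project registered under this name."""
    url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except OSError:  # covers HTTP errors (e.g. 404), timeouts and network failures
        return False

print(exists_on_pypi("requests"))             # True: a long-established library
print(exists_on_pypi("totally-made-up-lib"))  # likely False, a red flag if an assistant suggested it
```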

Hundreds of AI researchers and executives last month signed a statement warning that AI poses an existential risk to humanity. And then, noted Robert Shrimsley, the acerbic FT commentator, they went back to work. “The heads of Google, DeepMind, OpenAI and umpteen others have taken time off from inventing the technology that could wipe out all human life to warn the rest of us that, really, something should be done to stop this happening.” That is a snarkier version of Swisher’s arched eyebrow.

AI may be scary, but everyone wants to use it, either to beat the competition or to make sure that they aren’t beaten in turn. Within two months of its release, OpenAI’s ChatGPT had more than 100 million monthly users. Every major software company is rushing to incorporate AI into its products. As one headline announced, “AI has finally given moribund software stocks a jolt of life.” In either a heckuva coincidence or — more likely — more proof that computer cookies leave a trail of crumbs for advertisers to follow to your wallet, the ad on a webpage I’m reading as I wrap up this column invites me to “stay ahead of the competition with ChatGPT and other cutting-edge LLM technology.”

This is Swisher’s logic in real time (and in your, or at least my, face). No evidence suggests that corporate self-restraint or self-governance will solve these problems. Governments have registered concern and are beginning to act. The leaders' communique from the Hiroshima Group of Seven summit noted their determination to “advance international discussions on inclusive artificial intelligence governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values.”

One of the more interesting suggestions is creating an international organization modeled after the International Atomic Energy Agency to deal with AI. It’s intriguing and workable — in a world in which the most powerful nations put aside geopolitics and focus on “existential threats.” I only wonder if we will act before the threat materializes.

Brad Glosserman is deputy director of and visiting professor at the Center for Rule-Making Strategies at Tama University as well as senior adviser (nonresident) at Pacific Forum. He is the author of “Peak Japan: The End of Great Ambitions” (Georgetown University Press, 2019).