In his signature black turtleneck and blue jeans, Apple CEO Steve Jobs introduced the first multitouch iPhone in 2007, with the proud declaration, “Every once in a while, a revolutionary product comes along that changes everything.”

That’s great marketing copy — and even better when it turns out to be true.

The touchscreen smartphone is now our ubiquitous companion, and with each new product release, phone makers unveil new innovations supported by longer-lasting batteries and mightier chips. When connected to the cloud, these minicomputers become even more game-changing, allowing us to capture and share our lives and our creations with the world.

Or so the breathless promoters say. Most users, though, probably take these incremental changes for granted. They have become so entrenched in our daily lives that their disruptiveness is often forgotten.

Undoubtedly, photography has grown to become one of the core functions of smartphones. Over time, camera specs have become more prominent, both in the hardware and in the underlying software that intelligently and silently adjusts aperture, shutter speed and focus in real time. In fact, shoppers often see mobile phones as cameras first, with browsing and messaging as secondary features.

Mobile photography and videography have caused undeniable disruption. Most notably, they have made everyone a creator, capable of producing viral content — eyewitness reports, clever TikToks, impromptu comedy sketches. Smartphone makers' marketing often celebrates the cinematic creativity these devices put in the hands of pros and amateurs alike. Mobile imaging has democratized the visual arts and rerouted the networks of promotion and distribution.

Some say that the smartphone's photographic innovations have plateaued, but last month Google raised eyebrows with the release of the Reimagine tool, introduced with the Pixel 9 series of phones. Harnessing the Promethean technology of AI, the software allows users to alter the textures, materials and styles of objects. While Adobe Photoshop granted mortals similar powers back in 1990, this is different: The bar to image manipulation has been lowered dramatically. The tool has safeguards against harmful or offensive prompts, but users have reportedly been able to skirt them.

The increasing availability of easy-to-use software could end up hardwiring skepticism into our perceptions. The mindset behind belief in a staged moon landing lives on in doubters who no longer see a new photo as a reliable depiction of reality and cry, "It's AI!" News media now have to call in forensic photo experts to verify images of, say, political rallies.

Image manipulation has been around for a while. Photo retouching and composites were not uncommon in the 19th century. Long before Photoshop and Instagram filters, photographers used a variety of methods to manipulate "the truth." Although the beautification process of yesterday was considerably more labor intensive, it was no doubt motivated by the same desire that modern-day Snapchat users have when they give themselves plumper lips or smoother complexions. Even the portrait painters who predate photographers recognized the importance of flattering a patron.

Around 1865, a printmaker collaged the head of Abraham Lincoln onto the body of South Carolina politician John Calhoun. The composite went undetected for 105 years, until an archivist revealed it to be a fake in 1970. Fast-forward to spring of this year, when Kate Middleton, the Princess of Wales, was called out for poorly doctoring a family photo.

We're savvier now, but some saw this recent incident as a turning point for public trust, with people becoming more suspicious of images shared even by legacy institutions. Indeed, a side effect of this is the "liar's dividend," a phenomenon in which the prevalence of AI-generated disinformation allows people to dismiss legitimate scandals as fake, undermining genuine truths and eroding accountability.

What is definitely new about AI tools is their ability to manufacture influence at scale, with great speed and ease, over a relatively short period of time. The distribution networks of social media are the enablers. The platforms claim they are doing everything they can to police bad actors spewing disinformation, but can they? Ultimately, they depend on profits from advertisers who need engaged users.

Like image distortion, disinformation has been employed by pharaohs, kings, popes and dictators throughout history. And subtle propaganda has indeed been effective in making history — or making it up, that is. How many people still believe that Marie Antoinette uttered the words "Let them eat cake," or that a young George Washington said, "I cannot tell a lie"? Your Google results might surprise you.

So yes, we’ve seen this before. But in a year with so many crucial elections around the world taking place and so many more ways to influence opinions, the fears of targeted campaigns, turbocharged by new tools and strategies, are very much warranted.

The goalposts have moved, but the good news is that local and national governments around the world, with varying levels of sophistication, are beginning to hammer out regulations and guidelines to address potential risks.

The EU in particular has been aggressive in establishing a legal framework for the development and use of AI technologies that focuses on risk management, transparency and accountability. Japan also took an early interest in AI and adopted a more entrepreneur-first approach, but the ruling Liberal Democratic Party is expected to introduce stricter safeguards early next year.

The key will be to balance regulatory legislation with nations’ and corporations’ need to remain competitive in a lucrative and constantly evolving sector. By now, most of the major business players have thrown their hats into the ring with promises and early products, but in general, it still feels like one giant beta test — and that’s scary.

Apple's fall product launch this week was highly anticipated, primarily because market watchers expected CEO Tim Cook to unveil various uses of AI on a shiny new device. To many fanboys' dismay, the exact details weren't all there yet. Users will eventually be able to enjoy ChatGPT integration with the helpful yet still basic Siri, but devices in Japan probably won't get Apple Intelligence (not a typo) until sometime next year.

Another announcement that failed to materialize concerned Image Playground, an image-generation feature that can produce cartoonish variations of photos. It appears that this tool and other enhancements to Apple devices' photo-editing capabilities might not surface until next year.

It’s hard to gauge what exactly is behind the delay, but in this particular case, maybe due diligence and a thorough investigation of ethical concerns will make it worth the wait. Apple has nurtured a reputation as being a socially conscious company and this is an important step into new territory. As the phone maker with the largest market share, its ripples can create waves.

Apple has taken the initiative to safeguard users' privacy and develop tech in sustainable and safe ways. Interestingly, though, the company is not a member of the Coalition for Content Provenance and Authenticity, whose steering committee includes Adobe, Google, Meta, Microsoft and OpenAI. This global initiative is creating a path toward a unified industry standard. While the coalition can neither thwart misuse of a photo's metadata nor penalize wrongdoers, it does establish a crucial baseline in this ever-changing landscape. The genie is out of the bottle and won't go back in, but it can be given guardrails.

Notably, a number of media companies are part of the coalition. Their inclusion is important in that it can not only help forge shared protocols for various forms of AI-generated content but also educate and protect the public by nurturing media literacy.

Achieving a balance between gung-ho tech companies and concerned lawmakers is essential, but part of the responsibility clearly lies with individual viewers and users. In 1972, the influential critic John Berger wrote: “The relation between what we see and what we know is never settled.” In other words, our perception of the world is constantly influenced and shaped by what we experience, our preconceived notions and so on. Seeing isn’t a purely objective act; it’s subjective and colored by our knowledge, culture and personal biases. At the time, Berger wasn’t addressing mobile photography and AI-assisted content creation, but his observation about the fluid relationship between the viewer and the object applies. Our tools of perception are more than just our eyes.

Perhaps the devices of the future will better equip us to reveal truth and guard against distortions, rather than allow us to simply remix reality.

The Japan Times Editorial Board