Over two hundred years ago, an inventor named Wolfgang von Kempelen unveiled his latest creation in Vienna: a chess-playing robot known as the Mechanical Turk.
At the time, the device was viewed as a masterful automaton, but it was later revealed to be a clever illusion, controlled by a hidden human operator.
Today, we have our own modern-day Mechanical Turks. Devices like Amazon’s Echo, Google’s Home, Siri on your iPhone, or Bixby on a Samsung dazzle with their ability to answer voice commands and questions in a way few thought possible just a few years ago.
Few of us have much idea about how they work - but they certainly seem to operate using artificial intelligence, even if they are prone to the odd error or mistranslation.
Increasingly, however, it feels like we are being sold a myth. From Shoreditch to Shenzhen, technology companies parrot the power of their AI as a central selling point of every new gizmo.
While smart speakers may seem intelligent and secure, nearly all of our smart devices are linked to systems in which, somewhere in the world, human contractors or employees of tech firms judge the quality of voice recordings - and listen to what we say in private.
Tech giants protest that human workers only listen to a tiny portion of recordings, but it has taken growing external pressure and whistleblowers to extract even that admission.
Just two weeks ago, Amazon’s Alexa privacy notice provided no mention that recordings could be reviewed by humans. The retail giant previously said, somewhat opaquely, that it uses Alexa requests to “improve our services”. Now, in plain English on its privacy page, it states “humans review an extremely small sample of requests”.
Such revelations are all the more worrying as Amazon signs deals with the NHS to answer health queries using its AI.
Amazon now lets you switch off this human review, stressing that it is carried out by employees held to strict privacy standards. Yet when I dug out the tab buried in its privacy page, I found my account was still opted in, leaving open the possibility that my recordings could have been checked.
It is not just Amazon. Apple, always first to signal its support for the right to privacy, has for years had humans monitoring Siri requests for accuracy.
It is not just bluff on their part, but recklessness. Last month, recordings from one Google review team were leaked to the Dutch press. Meanwhile, tech website Motherboard was sent recordings of real conversations from Microsoft’s Skype that were supposedly being used to correct its translation technology.
And social networks like Facebook are kept clean by tens of thousands of contractors monitoring and taking down posts - “yes, that is a real ISIS beheading video, not a clip from a movie, delete” - which train its algorithms.
While big tech peddles this pseudo-AI to consumers, who may not fully understand its implications, further questions are emerging over the true state of AI knowledge.
“It is not the panacea people think it is,” says Alan Woodward, a computing professor at the University of Surrey. “We are still learning about this.”
This goes beyond ordinary people asking their Echo speaker what the weather is like, and Alexa confusing Boston for Bolton. Misgivings about, and misunderstandings of, artificial intelligence could affect the development of new technologies such as human-level intelligent machines, AI-driven medicine and autonomous transport.
In one controversial paper last year, AI expert Gary Marcus, formerly head of Uber’s deep learning lab, launched a stinging rebuke of the shortcomings in the field. He called on leading figures to “temper some irrational exuberance” if they want to see real progress.
“Six decades into the history of AI, our bots do little more than play music, sweep floors, and bid on advertisements,” he wrote.
If we can’t even trust technology companies to get the basics right in terms of the privacy and transparency of consumer technology, we should be sceptical of their role in bigger projects.
Some of the biggest tech firms such as Apple and Google have now suspended their human review of voice data, but only after it was widely reported in the national media (earlier warnings were, tellingly, ignored) and it was clear some of their contractors had been speaking to the press.
Those in the tech industry maintain that such AI training data in the form of voice recordings is necessary to ensure the technology works as promised, and so it can improve over time.
There is no doubt that artificial intelligence can be a powerful force for good. But as tech firms once again reveal a complete lack of forethought, there is a risk that this scientific innovation will be derailed.
Every time our fears about tech companies are proved right, they seem to go one step further.
At the end of its life, the enigmatic Mechanical Turk was destroyed in a fire. If technology firms are not more open about the capabilities, and shortcomings, of their own creations, they risk going up in a puff of smoke as well.