When AI goes rogue - the strange case of "vegetative electron microscopy"

Modern industrial tools made our lives faster and more efficient. We reached a point where it seemed that only human error stood in the way of further great advances for civilization.


Then came our current fourth industrial revolution and the rise of AI tools to take us to the next level. The glass ceiling was shattered, and it appeared that even human error could now be removed from the equation. AI was going to take us to the stars.

However, we've since found that AI also makes some serious mistakes of its own. I'm not talking about the bizarre so-called hallucinations, where a chatbot LLM (large language model) will start spouting gibberish, abstract word salad or even outright fabricated lies. Those early teething problems can be ironed out.

The case of "vegetative electron microscopy"
I'm talking about the stranger phenomenon of AI LLMs ingesting misread data in the first place and then passing that error on to others. Or of two unrelated errors compounding, resulting in the amplification and perpetuation of a new error.

In one case, two science papers from the 1950s, published in the journal Bacteriological Reviews, were scanned and digitised. During the process, the word "vegetative" from one column was accidentally joined to the phrase "electron microscopy" in the adjacent column, producing a new, unheard-of term: "vegetative electron microscopy". No such thing exists.
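To see how a digitisation step can manufacture such a phrase, here's a minimal sketch (in Python, with invented column text, not the actual 1950s papers) of what happens when a naive OCR pass reads a two-column page row by row instead of column by column:

```python
# Toy illustration of how row-wise OCR over a two-column page layout
# can fuse text from adjacent columns. The column contents here are
# invented for illustration; they are not the actual papers.
left_column = [
    "the cells were",
    "vegetative",           # a word ending a sentence in the left column
    "at this stage",
]
right_column = [
    "examined under the",
    "electron microscopy",  # a phrase continuing in the right column
    "images obtained",
]

def read_column_wise(left, right):
    """Correct reading order: finish the left column, then the right."""
    return " ".join(left + right)

def read_row_wise(left, right):
    """Naive OCR: read straight across each row of the page."""
    return " ".join(l + " " + r for l, r in zip(left, right))

print(read_column_wise(left_column, right_column))
# ...vegetative at this stage examined under the electron microscopy...
print(read_row_wise(left_column, right_column))
# ...vegetative electron microscopy...  <- the phantom term is born
```

Read in the correct column order, the two words never meet; read row by row, they fuse into the phantom term.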

Decades later, in 2017 and 2019, the term appeared again in two Iranian science papers, presumably because the Farsi words for "vegetative" and "scanning" are nearly identical in written form, differing by only a single dot. It was a simple translation error.
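For the curious, the closeness of the two words can be shown character by character. A minimal sketch, assuming the written forms reported in the coverage of this story (رویشی for "vegetative" and روبشی for "scanning"; I'm relying on that reporting, not on any knowledge of Farsi):

```python
# Compare the two Farsi words character by character. The word forms
# are taken from news coverage of this story and are assumptions on
# my part, not my own translation.
vegetative = "رویشی"  # reported Farsi for "vegetative"
scanning   = "روبشی"  # reported Farsi for "scanning"

diffs = [
    (i, a, b)
    for i, (a, b) in enumerate(zip(vegetative, scanning))
    if a != b
]
print(f"length {len(vegetative)} vs {len(scanning)}, "
      f"{len(diffs)} differing position(s): {diffs}")
# Only one position differs: the letters ی and ب, whose mid-word
# written shapes differ mainly in dot placement.
```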

However, AI chatbots have since quoted and shared these papers, nonsense phrase included, with users. As a result, the absurd concocted term now appears in 22 other scientific papers, which anyone can verify with a Google Scholar search.
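Repeating that check takes one line of code. A minimal sketch that builds an exact-phrase Google Scholar query URL (Scholar has no official API, so this just constructs a link to open in a browser; the result count will drift over time):

```python
# Build an exact-phrase Google Scholar search URL for the phantom term.
# Open the printed URL in a browser to see where the phrase appears.
from urllib.parse import urlencode

phrase = '"vegetative electron microscopy"'
url = "https://scholar.google.com/scholar?" + urlencode({"q": phrase})
print(url)
# https://scholar.google.com/scholar?q=%22vegetative+electron+microscopy%22
```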

One might say that this false scientific term was originally the result of human error, though I'm not so sure. And even if it was, it shows how AI - as helpful as it is in revolutionizing our lives - can perpetuate and amplify errors in our collective knowledge base.

If an LLM is trained on faulty data, tracing an error back through those enormous datasets is apparently not always as easy as one would like. As a result, the corrupted data simply gets repeated by the unwitting AI.
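To make that mechanism concrete, here's a toy "language model" sketch: a simple bigram predictor trained on a tiny corpus containing the corrupted phrase. A real LLM is vastly more sophisticated, but it shares the core trait that it reproduces the word patterns present in its training data, corrupted ones included:

```python
# Toy bigram model: it predicts the next word that most often followed
# the current word in its training text. The corpus is invented for
# illustration and deliberately contains the corrupted phrase.
from collections import Counter, defaultdict

corpus = (
    "samples were studied using vegetative electron microscopy . "
    "the team applied vegetative electron microscopy to the cells ."
).split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def continue_from(word, steps=3):
    """Greedily extend a phrase using the most frequent next word."""
    out = [word]
    for _ in range(steps):
        nxt = follows[out[-1]].most_common(1)
        if not nxt:
            break
        out.append(nxt[0][0])
    return " ".join(out)

print(continue_from("vegetative"))
# vegetative electron microscopy .   <- the error is faithfully repeated
```

The model has no idea the phrase is nonsense; it only knows the phrase occurs in its data.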

A similar thing occurs when the engineers who build a chatbot deliberately curate its training data so that it gives out biased answers. The difference is that such bias is deliberate and malicious, while the "vegetative electron microscopy" kind of error is accidental and goes unnoticed until someone stumbles upon the erroneous info.

And the "vegetative electron microscopy" phrase is just one such error that happens to have been caught so far. How many other strange or absurd words, phrases or bits of data are out there being disseminated as truth to an unsuspecting world?

As revolutionary and valuable as it is, the rise of AI LLMs may also end up further embedding such errors into our global knowledge base. The scary part is that it's being done via a process that no single person controls, while the blind robot carries on the errors in ignorance.

The more we come to rely on and utilize these AI tools, the more a small error at the start can lead to a massive deviation at the end of the line. It appears that AI is far from safe or foolproof, even in the best of hands.
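A toy calculation illustrates the compounding. Suppose each generation of models is partly trained on text produced by the previous one, and a corrupted phrase is reproduced slightly more often than it is corrected. All the numbers below are invented purely for illustration:

```python
# Toy compounding: a corrupted phrase starts in a tiny fraction of the
# corpus, and each training generation reproduces it slightly more
# often than it removes it. Every number here is an assumption.
rate = 0.0001          # initial share of documents carrying the error
amplification = 1.5    # assumed net growth per generation

for generation in range(1, 11):
    rate = min(rate * amplification, 1.0)
    print(f"generation {generation:2d}: {rate:.4%} of documents affected")
# Under these invented assumptions, a tiny seed error grows roughly
# 58-fold in ten generations.
```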

How much more dangerous might it be in the hands of bad actors with malicious intent from the start?

It now appears that alignment, the vitally important idea of building moral or ethical guardrails into an LLM so that it has a human-like conscience and respect for life and human rights, is not the only vulnerability. That challenge alone is a mountain to climb.

But now we also have to be constantly vigilant about what we take as truth or factual info from these AI chatbots. And they're only going to get more powerful. In the coming few years, maybe even next year, the chatbot will be a mobile, autonomous robot wandering around our kitchen with a sharp knife, making our meals and watering our plants, or whatever your imagination can conjure up. What guarantee do we have that this machine will be safe in every way? None at all.

I just hope we're ready for what we've unleashed upon ourselves, because by giving so much power to the machine, we could be building our own prison and its wardens to corral us like cattle. Science fiction literature is full of just such dystopian futures. And it seems as if that once distant future has just arrived.

Reference: https://www.sciencealert.com/a-strange-phrase-keeps-turning-up-in-scientific-papers-but-why

Image: https://pixabay.com/illustrations/ai-generated-rpg-fantasy-game-8926360/

Written and uploaded from my mobile device to the Hive blockchain for those interested in our AI revolution.


