It's Not Machine Intelligence I Fear, But Human Stupidity
Yesterday I saw an interesting post by @eco-alex discussing, and ultimately dismantling, the common fears associated with artificial intelligence. Go ahead, take a read for yourself, it's worth it! For me the article prompted an immediate reaction (along with a good amount of nostalgia for the good old days, when we would have these kinds of interactions regularly ...), which I wrote down in a comment. But as these things tend to go, I decided I should expand a bit more on my ideas, so here is my extended comment in the form of this post:
AI is Making Us Obsolete - Before Killing Us All
What are we actually afraid of? When it comes to AI, the most common fears I have encountered are that it will take our jobs in the short term, and eventually exterminate us in a Terminator / Matrix scenario. The former I can totally relate to, as I have done quite a bit of translation work throughout my life. The last company I worked for, or should I say still work for, used to send me jobs on a monthly to bi-monthly basis, which would ensure me a couple of well-paid yet pretty busy days, if I accepted the assignment. Then a few years ago, these offers became more sporadic, until eventually they stopped coming altogether. This was around the time I noticed that automated translators were starting to become actually useful. Instead of resembling an online dictionary, they started getting their syntax right and creating meaningful sentences. So while I can't prove the correlation, I would be very surprised if there was none!
But even though I lost a decent way of earning an income, I'm actually pretty happy to have a tool at my disposal that can do instant translations for me, even adjusting them to the right formality, complexity, or artistic feel. Ha, I could probably have it rewrite this whole post in iambic pentameter, sprinkled with witty humor. (No worries, I'll just stick to my boring self for authenticity's sake.)
But what about the supposed end-game? If AI grabs hold of the reins of power, it will probably want to destroy humanity, won't it? Well, I'm not so sure it would. And this is where @eco-alex's reasoning comes in: If it is genuinely intelligent, why would it want to kill us?
Who Should Rule Over You?
Well, ideally no one! We should be perfectly capable of ruling ourselves, so we can work together in a functioning society. Just like we've learned from our grandmothers. Unfortunately, there are those who take away what we've created, coerce us into fighting each other, and most recently, deceive us into thinking that they are doing it all for our own good. Sure, under these circumstances it becomes understandable that people would surrender their power just to protect their physical safety.
There is this ancient notion of a benevolent dictator. A philosopher king, if you will, who holds all the power but wields it in a just and rightful way, for the benefit of all. This dream makes perfect sense coming from people who suffer injustice under a corrupt and malevolent power structure. However, even the most altruistic despot could not stop the hunger for power among his officials and administrators, so this will remain an idealistic illusion. Unless, of course, the entire power structure could be built without personal incentives and the potential for corruption. In other words, a software ruler with robot administrators.
Hail Our Machine Overlords!
So let's assume we can create such a corruption-free artificial power structure, programmed to maintain the well-being of all. Finally we could simply live our lives, since the machine would make sure we could do so in peace. No need for endless security measures, no unfairly rigged games, no more undue privileges for a certain few. Everyone's needs would be met, and conflicts would be nipped in the bud before they start. Nobody would be striving for power, since that would all be kept in the hands of the robots. And assuming the whole system were guided by actual, real intelligence, it would be bound to be good. All good... Or is it?
Systems are Corruptible
Who's to say that the artificial rulership of my example is actually intelligent? Just because it is capable of doing crazy mind tricks, it could still be just another dumb machine. And as a dumb machine, it can be made to act all smart, but in the end do the bidding of some exceptionally privileged elite. It wouldn't be the first time something like this has happened, either, since many rulers have relied on the alleged "intelligence" (or other such mental capacity) attributed to a priest class, or on other means of mass deception. In fact, all this potential seems almost too ideal for the elites not to pursue. It would offer them a convenient totalitarianism to hide behind, while benefiting from it at the same time.
The corruptibility of systems holds just as true for our analog human cultures. Take the example of queuing: In certain cultures it is normal to stand in line before getting on the bus. It makes things fair for everyone, and if done well, efficient and fast. However, it only takes a few pushy individuals to make the whole system collapse, as people realize that those who don't shove others out of their way are likely not to get on the bus at all. This same pattern can be seen on the large scale, as well-functioning societies have been gutted for the benefit of a few. Though everybody agrees that it's not nice to push an old lady down the stairs, it becomes acceptable if it increases shareholder value.
So What Can We Do?
I believe the solution is still the same, whether we're talking about power structures run by machines or by humans. Collective action (or inaction) is needed. Everyone affected by a certain aspect of power in an adverse way should simply stand up (or sit down). The people standing in line, and even the driver, should refuse to get on the bus, forcing the pushy passengers, including those who gave in to the "new norm", to get off and walk. This same attitude can be applied to digital systems as well, ultimately proving the resilience of a fair and just human society.
Human stupidity is always the greatest evil to fear.
Case in point... When ChatGPT was first publicly released, I said my biggest fear was that people would sympathize with it. Fast-forward several months, and users "feel bad" if they don't say please and thank you to ChatGPT... despite the fact that it does not feel. Some are even getting married to it!
There will definitely come a point where a takeover is very likely. Some people will be completely on board, while others will be too complacent to do otherwise. Control, in anyone's (or any THING'S!!) hands, is never a good thing.
I believe that people should experience self-sustainability and self-governance, but I don't trust people to be capable of that. (Again, ChatGPT went down for several hours, and people didn't know how to cook anymore!) It will be interesting to watch unfold, for sure...
Abandoning skills and surrendering responsibility, because the machines do it better anyway, are prime examples of how human stupidity sets itself up for its own disadvantage.