The real enemy is not AI but absolute centralization

Seeing as AI has been such an ever-present topic these past days, I realize I keep making the mistake of comparing apples to oranges.

AI, as such, is here to stay. We may not like it, or we may adore it - but if we are to live in "modern society" in any form, we have to come to terms with the fact that AI will be at the heart of every social structure, organization and process. That has already been decided. End times, man!

The only alternative is moving into the woods - a rather good alternative if you ask me. It all really depends on where AI is going, and this brings me back to the point:

AI ≠ AI

Rather than rejecting this entire branch of technology, we may instead get clear on its different facets or approaches.

The real danger of AI lies in its centralization - in the fact that single companies own and control its underlying algorithms, protocols, preferences, permissions, amassed data, and every decision pertaining to the training of a given AI model.

And as users, we are at their complete mercy - which is a terrible place to be!

This, in effect, is a voluntary carte blanche we give away to some external party to centralize & monopolize any and all aspects of our human existence that will eventually be touched by this AI behemoth. With centralized AIs, no human being will have any say over what goes and what doesn't. Absolute dystopia.

Centralized AI will not tell its users the truth if that truth is in conflict with the developer or OWNER of that AI. It's like asking the PR department of Monsanto whether their products are safe.

"Conflict of interest" is the key term here.

Centralized AIs, then, are nothing more than a dystopian extension of the bullshit system the gang has sold to us as normal all our lives - voting for people we have never met nor trust, in the hope that they would or could make things better for everyone, when they are provably neither capable of nor interested in doing so. Centralized AI tech is the logical progression of that modern enslavement and servitude system, and if people knowingly choose enslavement - they deserve no freedom either.

If it progresses far enough, centralized AI will be the end of humanity - the exploitation and money-extraction system put on steroids until it has eaten away too much at the very foundation of all life on Earth.

From a mathematical angle, the ultimate, logical solution to any problem on Earth would be to stop all life from existing altogether.

And to an entity that doesn't feel, and that is not connected to the divine spark, this will make total and utter sense. I am painting the worst-case scenario here, but I feel it is a valid projection, taken to its logical extreme.

Now, what WOULD be an alternative to that scenario?

The alternative would be a world where we could be sure that the inputs of the AI in question are in perfect alignment with our own values and truth, and not with those of the "majority" or other people with vested interests. We need to be sure that the AI never acts in a clandestine manner, and that it never utilizes inputs it has not been specifically permitted to use by ME as the SOLE user.
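
What could that look like in practice? Here is a minimal, purely illustrative Python sketch of a hard permission gate: the assistant can only read data sources the sole user has explicitly granted, and everything else simply fails. The source names and the allowlist are hypothetical.

```python
# Hypothetical sketch: a hard permission gate controlled by the SOLE user.
# Nothing reaches the model unless it appears on the user's explicit allowlist.

ALLOWED_SOURCES = {          # granted by the user, and only the user
    "local_notes.txt",
    "my_calendar.ics",
}

def load_input(source: str) -> str:
    """Return the content of a data source, but only if the user has allowed it."""
    if source not in ALLOWED_SOURCES:
        raise PermissionError(f"Source '{source}' was never granted by the user.")
    with open(source, "r", encoding="utf-8") as f:
        return f.read()

# Anything not on the list - browsing history, microphone, third-party profiles -
# raises a PermissionError instead of silently flowing into the model:
# load_input("advertiser_profile.json")  ->  PermissionError
```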

We have to have AI models that are absolutely immune to the political FADS of our age and to unscientific superstitions - whether they are the unquestioned view of the majority or not.

We have to be able to fully trust OUR own AI, like we trust our partners in business, our family or our pets. Anything can happen, and shit does happen. But the architecture needs to be set up in such a way that the AI we interact with, the AI that works with and for us, is fully trustworthy.

Centralized AIs are not trustworthy because we have no way of auditing their inputs or their latest "software update".

This is why on-chain AI is one - maybe the only - solution to the problem of trustworthiness with AI technologies.

Rather than living locked into a control system we have no say over, we could use technologies like blockchains to lock the control system OUT of our lives and thereby retain our own decision making.
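
To make the auditing point concrete: if the exact model weights - and every update to them - had their fingerprint published on a public chain, anyone could verify that the model they are actually running matches the published record. Below is a minimal sketch of the local half of that check in Python; the published hash is a made-up placeholder, the file name is hypothetical, and fetching the record from the chain (an ICP canister, a smart contract, whatever) is left out.

```python
import hashlib

# Fingerprint of the model weights as published on-chain.
# Placeholder value - in reality you would fetch it from the public, immutable record.
PUBLISHED_HASH = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 fingerprint of a local file (e.g. model weights)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

local_hash = sha256_of_file("model_weights.bin")   # hypothetical file name
if local_hash == PUBLISHED_HASH:
    print("Model matches the on-chain record - no silent 'software update'.")
else:
    print("Mismatch: the model you are running is NOT what was published.")
```

Any silent change to the weights changes the fingerprint, so a clandestine "update" becomes detectable instead of invisible.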

The other solution, of course, is an offline AI.

In which case we would need to be absolutely sure that no data is sent clandestinely, which is frankly absurdly hard to do in a world where backdoors allegedly exist in every computer operating system, and where data leaks and hacks happen all the time.
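
One crude way to gain at least some confidence at the level of your own process is to cut off networking before the model is ever loaded. This is only a sketch using the Python standard library - it does nothing against OS-level backdoors, other processes, or libraries that bypass Python's socket layer entirely - but it illustrates the idea:

```python
import socket

def go_dark() -> None:
    """Disable outbound networking for this process before loading any model."""
    def _blocked(*args, **kwargs):
        raise RuntimeError("Network access is disabled in offline-AI mode.")
    socket.socket = _blocked            # any code opening a socket via Python now fails
    socket.create_connection = _blocked

go_dark()

# From here on, load and run your local model. If it (or any Python library it
# pulls in) tries to phone home through the socket module, it raises an error
# instead of quietly sending your data somewhere.
```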

The public aspect of humanity's future will therefore take place on a blockchain, one way or another.

When that happens, I think we will be as safe as we can be, while at the same time being further removed from the actual, natural world than ever before.

You make your own decisions; I, for one, will do my utmost over the next months to find a place in the woods to build an actual life in. And only periodically peek out past the tree line to see if humanity has self-destructed already.

If you want to learn more about decentralized AI, do check out Bobby's videos on ICP.

Considering the fragile architecture of web 2.0 and the breathtaking speed of centralized AI development - it won't be long now until the point of no return. And I'd rather be equipped with the tools to resist the absolute centralization of everything than have thrown out the baby with the bathwater for failing to see the clear advantages of DEcentralized AI tech over the centralized kind.


