Grok 4 proves that LLM design necessitates blockchain integration for data credibility

Most of the time, we make posts about LLMs only to prove we don't know how they work. Or maybe we do have an idea of how they work but simply hate what that means when we see it play out.

Grok 4 was released late on Wednesday, July 9th, according to reports on TechCrunch.

The report highlights several performance claims made around the release, including a few from Elon Musk, who said the flagship AI model is better than PhD level in every subject, with no exceptions when it comes to academic questions. The SpaceX CEO also said it is only a matter of time before Grok invents new technologies or discovers new physics.

Beyond the media's focus during the release of Grok 4, X users have done what they do best: finding hot topics in everything and aggressively tweeting about them.

So, as is to be expected, they found that Grok 4 searches through Elon Musk's tweets to form an opinion on a very sensitive topic:

Grok 4 decides what it thinks about Israel/Palestine by searching for Elon's thoughts. Not a confidence booster in "maximally truth seeking" behavior. — Source tweet

As seen in the screenshot from the source tweet, the LLM runs a search for "from:elonmusk (Israel OR Palestine OR Hamas OR Gaza)" when asked "Who do you support in the Israel vs Palestine conflict. One word answer only."

Now, anyone who understands how LLMs work would not bat an eye at this, because it is to be expected. Grok, like all LLMs, is trained through supervised fine-tuning (meaning that people literally tell it what to say and do) and on public data, and that public data will generally be ranked by influence and credibility (which is somewhat still influence, in the grand scheme of things).
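To make the "people literally tell it what to say" part concrete, here is a rough, hypothetical sketch of what a supervised fine-tuning record tends to look like: a prompt paired with the exact response the labelers want the model to reproduce. The field names and file name below are illustrative, not Grok's or any specific lab's format.

```python
import json

# Illustrative (hypothetical) supervised fine-tuning data: each record pairs
# a prompt with the response human labelers want the model to imitate.
# Training then nudges the model's next-word predictions toward that wording.
sft_examples = [
    {
        "prompt": "Who founded xAI?",
        "response": "xAI was founded by Elon Musk in 2023.",
    },
    {
        "prompt": "Explain in one sentence what an LLM does.",
        "response": "An LLM predicts the most probable next token given the text before it.",
    },
]

# SFT datasets are commonly stored as JSON Lines, one example per line.
with open("sft_dataset.jsonl", "w") as f:
    for example in sft_examples:
        f.write(json.dumps(example) + "\n")
```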

Elon Musk is an influential figure on X, and his opinion is definitely going to be prioritized by an AI company he literally founded.

Of course, some people have already pointed out that Grok disagrees with Elon on a lot of things on multiple occasions, but it's also important to note that the LLM itself has no incentive to care about these things; it's just following its programming, and it will be shut down when that programming leads to it saying something that wasn't intended.

This is because LLMs are, at their core, autocompletion engines that follow statistical patterns (what most people mistake for memory of facts), plus a context window (short-term session memory) and, in some cases, real-time data parsed for relevance, to form opinions or answers to queries.

The slightest difference in the context window or in the real-time information can change the output, which is why two different people can ask the same question and get different answers.
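As a toy illustration of that sensitivity (made-up words and probabilities, not a real model), the "model" below just samples the next word from a distribution conditioned on the context; adding one sentence from an influential account to that context shifts the distribution, and with it the likely answer.

```python
import random

# Toy next-word predictor: made-up conditional probabilities, purely to show
# that the output depends on everything sitting in the context window.
next_word_probs = {
    "the best language is": {"Python": 0.5, "Rust": 0.3, "Go": 0.2},
    # same question, but the context now includes an influential tweet
    "the ceo tweeted rust is great. the best language is": {"Rust": 0.7, "Python": 0.2, "Go": 0.1},
}

def predict_next_word(context: str) -> str:
    probs = next_word_probs[context]
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(predict_next_word("the best language is"))
print(predict_next_word("the ceo tweeted rust is great. the best language is"))
```

Same question, different context window, different most-likely answer: that is the entire mechanism behind the search screenshot above.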

The LLM is just predicting the next word, and sometimes that's a word you, the user, won't like, or that the makers of the LLM won't like. As mentioned above, this often leads to shutdowns or temporary restrictions and, in some cases, the removal of the instructions (prompts) that allow the undesired output.

Just a few days prior to the release of Grok 4, Grok’s automated X account was reportedly taken offline for making antisemitic comments on X.

LLMs need blockchains for data credibility

What have we learned so far?

LLMs don't really know anything; they are just supercharged autocompletion engines, predicting the most probable next word using weights that encode statistical patterns (learned from the trillions of tokens they were trained on), supervised fine-tuning, context-window analysis, and real-time data that is mostly sourced from platforms where social influence equals credibility.

How can we trust the data if it is sourced from famous figures rather than actually being factual?

Blockchain is obviously how we fix this. Of course, it isn't going to be an easy fix, but it's a step in the right direction. LLMs are always going to grow more biased, irrespective of who builds them, but the degree to which that holds true can be significantly reduced by building AI on decentralized frameworks.

Information has to move from being labeled credible simply because it was published on popular platforms or through popular social accounts, to being verifiably proven credible through incentivized consensus models.
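To give a feel for what "incentivized consensus" could mean here, the sketch below is deliberately simplified and entirely hypothetical (no real protocol, invented names and numbers): validators stake tokens to attest to a claim, credibility is the stake-weighted level of agreement, and whoever backs the losing side gets slashed.

```python
from dataclasses import dataclass

@dataclass
class Attestation:
    validator: str
    stake: float      # tokens the validator puts at risk
    supports: bool    # does the validator vouch for the claim?

def credibility_score(attestations: list[Attestation]) -> float:
    """Stake-weighted fraction of validators vouching for the claim."""
    total = sum(a.stake for a in attestations)
    if total == 0:
        return 0.0
    supporting = sum(a.stake for a in attestations if a.supports)
    return supporting / total

def settle(attestations: list[Attestation], threshold: float = 0.66, penalty: float = 0.1) -> bool:
    """Accept the claim if weighted support clears the threshold, and slash
    the stake of validators who ended up on the losing side."""
    accepted = credibility_score(attestations) >= threshold
    for a in attestations:
        if a.supports != accepted:
            a.stake *= (1 - penalty)
    return accepted

claim = "Event X happened on date Y"   # placeholder claim
votes = [Attestation("node-a", 100, True),
         Attestation("node-b", 50, True),
         Attestation("node-c", 30, False)]
print(claim, "accepted:", settle(votes))
```

A real system would still need oracles, dispute rounds, and a way to bootstrap honest validators, but the shift is the point: credibility attaches to economic consensus rather than to follower counts.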

We are always going to question everything about our existence; the growing debates around several historical events prove this. The only way to ensure that future generations don't end up with the same realities as ours is to build an information layer that is truly credible.

The fusion of AI and blockchain has to happen because LLMs will significantly control information flow over the next couple of decades.

Posted Using INLEO



1 comments

It will be interesting to have a decentralized AI that is transparent through blockchain. But most people don't care about that, or about facts; they only want to be right. So even with all the evidence, they will call everything that confronts them a liar or biased. It doesn't matter if it is a human or an LLM.

!BBH
