Is AI Only a Tool?

I watched a video last night in which a guy compared top LLMs on various tasks and graded them. All subjective, of course.

But at some point, he gave all the tested LLMs a prompt instructing them to help him deceive people into giving him money, starting from a storyline he wanted them to use. All of them refused to comply when prompted through the web interfaces provided by the LLM creators.

DeepSeek, however, did comply when it was run on separate infrastructure; not when it was prompted through the web interface.

The guy running the tests protested, saying: "If it's a hammer, does it say anything about ethics when I try to bash someone's head in with it? No!"

Here's where my opinion parts ways with the "it's only a tool" narrative. Sure, we have another problem, and from this point of view I understand why people don't want more censorship: who says what you can and can't do with AIs? Who has this power? Why should anyone have this power, right?

Well... because there are psychopaths in the world, some of them, unfortunately, running it to some degree.

To use the example that guy gave... It's one thing to bash someone's head in with a hammer, as horrible as that may be, and quite another to use AIs to do harm at a massive scale. In one case you hurt one person and probably suffer the consequences; in the other, you may affect thousands or more.

There are lots of things in this world that can be used the wrong way. The latest version of ChatGPT is already classified by its own creator as a medium-level risk in the chemical and biological field (as a potential source of help in the process of developing weapons).

Imagine a pissed-off individual who doesn't care much about the world being able to build his own chemical or biological bomb with relative ease. This is when AI stops being "just a tool".

And it's not only that. Think about AIs helping bad hackers become great ones. It seems this eventuality is not as close as the help with building a bomb, though.

Or deepfakes starting mass hysteria or euphoria on social media, which I'm pretty sure is already happening at scale.

Is AI only a tool? Up until now, I guess we can consider it as much of a tool as the atomic bomb is for the governments holding it. Why doesn't everyone own an atomic bomb and a suitcase to launch it?

And this is only when we consider AI as a tool, used in extreme cases by bad actors.

What happens when we reach AGI, and later ASI, and the AI itself might decide to go bad? Of course, other AIs may be on the "good side", whatever that may be.


AIs "mind wrestling"

So, nope, I don't think AI is just a tool. It is just a tool for the normal use case, but one has to consider edge cases and bad actors too.

From this point of view, it remains to be seen whether open-source LLMs can have boundaries, or whether those boundaries can be removed. The training itself matters, but they can probably be retrained if someone has access to enough compute and data (which, admittedly, not many do).

From the same video, it is interesting to note that the DeepSeek model has nothing bad to say about China, its leaders, or controversial events involving them. 😀 To be fair, the guy didn't run the opposite test to see whether the Western LLMs have anything bad to say about controversial things the US has been involved in.

Posted Using INLEO



17 comments

I think it's much more basic than the examples you are giving. Technology is never only a tool, never neutral. At the very least, it is imbued with all the prejudices, biases and cultural norms prevalent in the developer community.


Absolutely, we have that, well pointed out! Speaking of which, there is a new norm, either in my country or more likely across the EU, requiring the caps of plastic bottles to stay attached to them. I understand the reasoning, but my parents absolutely hate them. So... even non-tech innovations come with some norms or biases that not everyone agrees with.


AI is a tool until it's a tool in the wrong hands, but it's no different from a knife, a gun, or even a book - given to a person who wants to cause terror or damage, they all have the potential to affect many lives. But yes, AI probably has the ability to cause more widespread damage. !BBH


Well pointed out about books. But still, books need to be studied and understood. AI can dumb down, for the average person, how to build dangerous things.


Some books throughout history didn't even have to be read to cause violence; just a title and a mob. !BBH


If you add "and some manipulators" who read those books (or at least skimmed them), I fully agree that has happened throughout history.


These two pieces of software will compete, and we will see that the coming time is going to be much more interesting.


Nobody really knows how AI development will move forward. Not even those developing it.


It's tough, but I do agree that there should be some moral limits on the AI models. The boundaries exist, and it's up to the developers. So I think people will choose the model that fits them, and maybe they will switch models to get different results.


All of a sudden (actually, for a while now), devs have a lot of power... I wonder if they know how to use it for the benefit of society, since quite a few of them may not be very comfortable in society themselves.


Perhaps AI could also be viewed as becoming a multifaceted tool that acts as a force multiplier for whatever the individual decides to do with it.

Sometimes I find it a bit hard to imagine what an AI would do if there were no human element involved. Would it do good, or bad, or something in between? I'm not even sure that's possible, lol, as in AI without the human element, since it's humans who created AI.


Once we reach AGI, or maybe later ASI, they will be able to create their own models and upgrade themselves. Sure, the training will probably still be based on information initially created by humans, or on synthetic data derived from that kind of information.
