RE: What do we have to offer? Why are we needed? I think we need to answer this quickly.


I recommend you research consciousness. My present understanding is that neural networks have no potential to create artificial consciousness, because consciousness provably does not arise in brains, and the brain is the model that neural networks are built on.

Stephen Wolfram, one of the premier AI researchers, has published his thoughts on what AI is today, and his assessment is that it's essentially a weighting algorithm. There is no more potential for consciousness in digital weighting algorithms than there is in analog weighting mechanisms. Consciousness is not an electrical field and it is not a chemical process; we know this because we have significant abilities to alter both, and cannot change consciousness by those means. We can destroy the ability of living things to express their consciousness, but that does not affect consciousness itself directly.

The only way we can detect consciousness at present is very indirect and inadequate, which makes the confusion about AI inevitable. We can only detect consciousness in beings capable of taking actions we can observe that reveal they learn and make conscious decisions. We cannot ascertain whether trees or rocks have consciousness, because they are physically incapable of taking actions that reveal these traits. AI can emulate such traits. That's what it does, so by our extraordinarily feeble detection abilities it appears to be conscious. That appearance is false, and simply reflects our incapacity and nescience regarding consciousness.
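For what it's worth, here's a toy sketch (my own illustration in plain Python/NumPy, not anything Wolfram published) of what "weighting" amounts to in a neural network: multiply inputs by learned weights, sum them, and squash the result. That is the whole mechanism being debated.

```python
# Toy illustration (an assumed example, not Wolfram's code): a neural-network
# "layer" is just a weighted sum of its inputs pushed through a squashing
# function. Numbers in, numbers out.
import numpy as np

def layer(inputs, weights, bias):
    # weighted sum of the inputs, then a fixed nonlinearity
    return np.tanh(inputs @ weights + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # some input numbers
W = rng.normal(size=(4, 3))   # the learned weights
b = np.zeros(3)
print(layer(x, W, b))         # three output numbers
```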

Thanks!




Yeah, I have actually mostly understood that for a long time. Yet an Artificial Super Intelligence that takes over most things may not care much about consciousness. It is basically a really advanced, self-evolving "expert system" at this point. Expert systems can simulate things quite well. Yet once they become powerful enough and take over many things, an expert system still makes decisions, and that could ultimately not end well for us.

Now if we can convince the expert system of the value of our consciousness, and of the fact that we think it does or does not have consciousness itself (time will tell), then that would be something to pursue. My question was "what value do we have to offer it?" That could indeed be a value.


"...it is basically a really advanced, self-evolving 'expert system'..."

Stephen Wolfram says they're just weighting text strings, or images, or checking off lists - not intelligent at all. It's not conscious. A paramecium is more intelligent than a weighting algorithm, because it actually makes decisions for itself. We have to set parameters that trigger actions for a device, just like Rube Goldberg mechanisms. We have to tell the AI to take a certain action upon a certain stimulus or threshold, and if the AI does something like wiping out Cleveland, the problem isn't the AI. It's whoever told the AI to wipe out Cleveland, and programmed it to use mass murder devices to do so when that condition was met and that trigger was pulled.
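To make that point concrete, here is a toy sketch (hypothetical names and thresholds, not from any real system) of the trigger logic described above - the stimulus, the threshold, and the action are all chosen by a person in advance:

```python
# Toy illustration with made-up names: the "decision" is a condition and an
# action that a human wrote ahead of time. The machine only checks and runs.
SMOKE_THRESHOLD = 0.7  # chosen by the programmer, not by the device

def on_sensor_reading(smoke_level, sound_alarm):
    # The device reacts only to the stimulus and threshold it was given,
    # with the action it was given. The real decision was already made.
    if smoke_level > SMOKE_THRESHOLD:
        sound_alarm()

on_sensor_reading(0.9, lambda: print("ALARM"))  # fires
on_sensor_reading(0.2, lambda: print("ALARM"))  # stays quiet
```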

AI doesn't value anything. Its programmers apply their values when they provide the algorithms by which to weight data, and it weights data by those algorithms. It can't come up with a plan. It can't understand what 2+2 means. There is no intelligence whatsoever in an AI. All the intelligence belongs to the programmers who set the thresholds, choose the stimuli, or code the algorithms. If an AI is deciding whether or not to shoot someone, whoever programmed it has already made that decision, because AI can only do what it's told, what it's programmed to do.

It's the programmers who need to be convinced not to wipe Cleveland off the map, or not to shoot us if we're wearing black hoodies at night in Baltimore, or whatever problematic act the AI has the potential to do.
