RE: LeoThread 2025-08-15 08:22

You are viewing a single comment's thread:

That could be because AI is, for the most part, still confined to the digital world. We are only now starting on real-world AI, which, of course, could have more serious ramifications.



Yeah, real-world AI definitely raises the stakes. Once it starts interacting outside the digital space, the consequences could be way more unpredictable. Makes you wonder how they'll manage that transition.

The question is, who is "they"? Some, like Elon Musk, are proponents of AI safety. Many others, like Sam Altman, seem unconcerned about it.

That's a good point. It's kinda worrying that not everyone's on the same page about AI safety. Makes you wonder how much control we'll really have if the big players aren't aligned on this.

The corporate world being what it is, I would say we are dealing with money and market share over everything else. As I said, we do see this with models and ideology, which, ultimately, aren't really a big deal. But what happens with robots and other devices operating in the real world (such as military applications)?

That is the concern.

Totally, once AI hits real-world applications like military tech, the risks skyrocket. It's not just about code anymore; actual lives could be on the line. Makes you hope someone's got a solid plan for oversight.

Yeah. It is one thing if Google AI (Gemini) misrepresents the founding fathers simply because it has to be woke.

But it is another if AI goes off the rails when it is operating an autonomous vehicle or a drone with a laser on it.

Yeah, it's a whole different game when AI is controlling something physical like a car or a drone. One wrong move and it's not just data at risk, it's real harm. Gotta hope there are some serious safeguards in place for that stuff.
