AI, You're Wasting My Time!

Let me start by saying that I am actually somewhat excited about the potential of AI!

When used appropriately, it has the potential to be a great research and help tool. Just consider something like medical diagnostics, where AI can pull together millions of white papers and provide a diagnostician with the most likely outcomes in far greater depth than any human being could hope to match.

[Image: CC0117-Shadow.jpg]

At the moment, though, I am really annoyed and frustrated by stupid humans with AI!

In this case, I'm referring to people who use AI to create fairly convincing-looking "news stories" that serve little purpose other than promoting their point of view, rather than actually sharing tangible news.

I think my favorite comment under one such "news story" (that I have run into so far) was one that went "I'll take 'things that never happened' for $800, please, Alex," referring to the Jeopardy! TV game show.

While I can appreciate that it may be fun to play around with this new tool, I find it annoying how much time I have to spend doing my own research to determine whether something that "sounds interesting" actually is interesting, or just a made-up piece of fiction.

[Image: CCP-000-Considers.JPG]

It may seem "innocent enough" if you take the stance that "it's just entertainment" but spoofing a public figure as having passed legislation that supports some fringe fantasy you have is not humorous.

You might ask why I bother protesting, but my personal sense of ethics and of what constitutes right action dictates that I'm not going to share garbage with other people online without having vetted it first. This, in spite of the fact that these stories were sent to me by people who should know better!

Sadly, in the hands of somebody with fewer scruples, it would seem that AI could become not only a giant time sink, but also a significant way to spread misinformation about politics, religion, finance, healthcare, and other such topics.

It seems to me like we're on the road to a place where it will become closer and closer to impossible to verify whether any given piece of information we receive actually reflects reality, or is just conjured up by someone's imagination or wishful thinking.

I have better things to do with my time!

[Image: CCP-026-Window.JPG]

But I'm also concerned that there are other people out there who are of the same opinion that they have better things to do with their time, but who instead choose to take whatever information they get more or less at face value.

Of course, it's not really AI that is wasting my time so much as sketchy people with dubious motivations.

Thank you for listening to my (non-AI!) rant!

Feel free to leave a comment — this IS "social" media, after all!

As always, a 10% @commentrewarder bonus is active on this post!

=^..^=



16 comments
---

I find AI useful for fact checking. It can scan entire books. This is really helpful when I want to present an opinion about a book.

Since it is a large language model, I can get insight into how authors were using a term.

AI does a good job summarizing current scholarly sentiment about subjects.

Lazy people who don't engage with the AI in writing a piece and just publish AI slush are doing the world a disservice.

!WINE

---

If you want a summary of a book, please go find one written by a human. The amount of stuff I've seen people quote from AI that is just straight up wrong makes my scholarly side weep.

Any answer AI gives you is a mess of stuff written by other people. Doing the extra work of reading material by actual humans will give you much better insights than just reading what an AI, which doesn't actually understand it, says.

And if you are giving an opinion based on an AI summary, you are not giving your opinion of the book, as you've not read it. Why not just say "I haven't read it"?

---

I was talking about specific queries in which people have falsely attributed an idea to an author. This is a common problem.

For me to assert that an author did not say something, I have to buy every single work of the author to show that the idea was misattributed.

It can take a year and several thousand dollars to answer this type of question.

Since AI has access to the life's work of an author, I can ask "Did the author say 'X', or a logical equivalent of X?"

If the author did say "X", then it gives me a citation. If the author did not say "X", the AI is often able to find out who actually said "X".

The AI world is developing alternatives to generative AI that are even better at asking such questions.

---

A Google search is just as reliable. Probably more so, as AI has given incorrect answers.

Saying you'd need to personally read everything an author wrote is being silly.

---

I should note first that Google Search is an AI. It has always been an AI, and it adds new analytical tools with each release.

Google searches cannot answer the type of question I just asked.

The core Google algorithm has been getting worse.

The current Google Search uses AI.

A better analogy would be that I could answer the questions that I asked with GREP. Even then, I could only answer questions with the library that I owned and that was encoded on my computer.

The answers I would get are only as good as the library I owned.

It would cost me hundreds of thousands of dollars and tens of thousands of man hours to build a library that could answer the simple question: did author A say X? If not then who actually said X?

"Saying you'd need to personally read everything an author wrote is being silly."

To answer the question "Did author A say X?", I have to be familiar with every work of the author and be able to summarize each of the writings.

I do not like Marx. I do not like Hegel and I do not like Kant. I don't want to have to read this mush again! AI helps in sorting out what these jokers actually said.

The Talmud is like 6,000 pages. It takes rabbis seven and a half years to read. I have no intention of doing that. There are crazy Baptist preachers who like to misquote the Talmud. I feel comfortable with AI's refutation of the things the crazy Baptists say.

---

Why do you want to discuss stuff you can't even be bothered to do real research on?

Also, if you can't see the difference between an algorithm that finds resources and one that comes to conclusions for you, well, that's a serious flaw in logic.

---

Short version: AI is not a substitute for your brain, it is simply an information gathering tool and you still have to think for yourself!

---

The people who are using AI to create the so-called slush are definitely making life more cumbersome and time consuming for everybody else, at least those of us who care to double check whether a story is at least somewhat factual.

AI is great for historical research on something you don't necessarily know a lot about, but only to the extent of coming up with a suggested reading list of the original sources. Sadly, it seems like it is being misused a lot for people to further their agendas rather than to actually learn something.

---

While there are some great use cases for AI, it indeed makes things harder in life. When I read something, it always makes me wonder whether it is real or not. There was a time when we could just trust what we read, to a certain point of course.
But now, sometimes complete bullshit is broadcast around the world.
Let's take funny cat videos as an example. When we made one of those 5 years ago, it was real. When I see one now, I doubt that it is real.

---

It is definitely sad that so much has become complete rubbish. And what's also sad is that I don't really have time to research whether everything is fact or pure fiction; but by making that choice not to research, it feels like I contribute to a growing level of ignorance out there. Which means our information system is filled with more and more garbage, rather than actual information.

I've noticed a lot of AI-generated cat videos recently; most of them seem to be designed as clickbait for people to capitalize on getting views for something that isn't even real. How annoying!

---

I personally think the bulk of generative AI is terrible. Between the ethical issues of it stealing work in a whole new way, the sheer resources it takes to run, and stuff like Grok being used to make child porn and revenge porn, the redeeming feature of "well, it lets anyone make cool art" is not really worth it, given that it's not a person making the art. It's a machine making it. If a person gives me the prompt of "a butterfly riding a turtle" and I draw it, the person who prompted me didn't make the art.

Even before generative AI made it far too easy to make slop content, there was already a horrifying amount of slop content made by people. I feel like we're heading closer and closer to dead internet theory, and it's just gonna be a sea of AI bots regurgitating each other's content and becoming less and less comprehensible.

That said, AI being used in medicine (with a lot of checks to make sure shitty biases haven't screwed things up), or in cases like AI-run drones helping protect wildlife, like the rhino reserve, is cool.

When it comes down to it, AI should be doing things that either (a) fill in gaps of human ability or (b) do stuff humans don't want to do anyway.

I don't want AI making movies and taking creative jobs away from people, but I would be fine with it doing my taxes like TurboTax does.

---

I think our current experience and opinions about AI and its impact on information also has a lot to say about society as a whole: people are getting lazier, and they feel entitled to have the answers and benefits without working for them. And so, AI enters the picture.

Let's say I have a cat blog. I can take the time to get out my phone or camera and take videos of two cats playing, then splice together the good bits and create an edited product that I feel good about, and that represents actual cats playing. Or I can give AI a prompt and have it create a video of two cats playing based on some arbitrary thing that I tell it. And then I claim it's my own work; that's the part I object to.

Sadly, an AI-generated video called "you've never seen two cats do this!" is more likely to get lots of page views than my genuine video of "my cats playing, look how cute they are!" And, sadly, part two: we live in a world where popularity has become more important than authenticity, and short-term gratification outweighs long-term value.

---

I think some of the willingness to offload thinking to AI is that most people are just plain exhausted. Folks are working too much, distracted too much, and even thinking about stuff like what to make for dinner is exhausting.

The need for clickbait titles in most algorithms makes me mad. It means that folks need to use misleading titles to get any attention.

---

"It has the potential to be a great research tool and help tool" — until they start to use your historical chat data to recommend a product... As a tool to help choose something to buy, it's still better to stick with the reviews. What do you think?

---

I would definitely be very cautious about using AI to recommend products to buy. I might use AI to gather information about the reliability of a product and complaints about a product I'm considering buying, but I wouldn't allow AI to make the actual recommendation.
