Is your AI hallucinating...?

Your AI probably comes across as confident: a source of neutral, legitimate and factually accurate knowledge. The problem is that it sometimes doesn't just get things wrong, it plain makes things up altogether!

I've seen this first-hand when querying what the role of the Regulator of Social Housing is compared to the Housing Ombudsman - feel free to fall asleep now...


I've seen the AI results, both Google's and ChatGPT's, literally tell me a neighbour can go to the Housing Ombudsman if they are unhappy with the way a housing association has dealt with their complaint about an antisocial neighbour next door.

The problem with this information is that they cannot - at least not according to the advisors I've spoken to at the Housing Ombudsman over the phone.

Part of the problem is that their website is not crystal clear about who CAN'T use their services. These organisations are keen to come across as positive, so a list of negatives isn't something they want to publish.

And then AI just invents stuff, and presents it as true, and with confidence.

I've also seen clients use obviously AI-generated text in their emails to us at work - typically list-style emails, hundreds of words long, listing grievances against institutions according to this or that law or statutory duty.

Often these are wrong as well. The AI misinterprets what duties a council actually has towards a homeless person, for example - offering temporary housing options is VERY different to finding a preferred housing option close to where it is wanted.

AI is leading these people into thinking they have more rights than they do, giving them a false sense of what they are owed by the organisation they have a grievance against. I am not surprised, because AI does seem to have a habit of wanting to please you!

Final Thoughts...

One has to be very careful with AI, and the less you know about an area, the more careful you should be...

At the very least click through to the sources and then check the authors of those sources.

Of course that may not work in 5 years as the authors might themselves be AI generated and it could be hard to tell!



1 comment
Of course it is folly to rely on AI alone, and what you say about 'sources' has a ring of truth to it as well: you don't find true journalists anymore, most copy from each other, and the original source may well be an AI-generated article by a so-called 'journalist'.
I do use AI for artistic creations, because as a surrealist I deliberately write prompts in such a way as to confuse the AI and make it hallucinate on purpose. The results are fantastic!
There is a good article about the subject on IBM: https://www.ibm.com/think/topics/ai-hallucinations
