We Have Way Too Much Faith in What AI Says
Lately I've noticed how easy it is to believe whatever artificial intelligence tells us. We open a chatbot, pose a question, and the answer comes back with confidence and speed. It sounds so right that we simply believe it. I've done it myself. Our minds seem to want to trust anything that sounds intelligent and articulate, even when it's completely made up.
That bothers me. A reading list created by an AI was published with books that don't exist. Lawyers have gotten into trouble for submitting filings built on fabricated citations from chatbots. Even a government report on health relied on research that was never conducted. These are not small mistakes; they are being made in high-stakes places where facts count.
What worries me most is not the AI itself but how quickly we've opened our doors to it without asking enough questions. I think we assume the machine is smarter than we are, maybe because it talks as if it is. The truth is that AI doesn't know anything. It just arranges words in a way that sounds right.
I have nothing against using AI. It is convenient and, on occasion, genuinely impressive. But we need to get good at slamming on the brakes and asking ourselves: is this actually true? I remind myself that just because something is fast or sounds intelligent doesn't mean it's correct. We still need our own judgment, the same gut instinct we use when something doesn't add up in the real world. That part of being human is still worth something, maybe more than ever.