How do you determine whether ChatGPT (and its AI counterparts) are actually reporting facts or just making things up?
An AI can also assemble something from fragments of facts it has learned that ultimately amounts to nonsense. In other words, anyone who takes everything at face value can easily fall into a nasty trap.
How do you protect yourself against this? Do you fact-check, or do you just believe everything?
With your own brain. By having some idea of the topic (and learning about it yourself if in doubt) and checking the output it spits out for accuracy. 🤷
That's the case with any text written by someone else (human or machine). If you use it and pass it off as your own, you're also responsible for its content. So you should think carefully about what you adopt "blindly".
Note: nothing and no one relieves you of your own work or your own thinking. No machine does.
Right; but I fear people will be less and less willing (or able) to make that effort.
Yes, unfortunately.
Such a machine does take a lot of work, and a lot of one's own thinking, off your hands.
I haven't used it much yet… common sense and prior knowledge were still needed. Bottom line: I'm aware that a language AI primarily generates plausible-sounding sentences, and the content should at least be questioned.
In principle, I like Bing's approach of letting you adjust the creativity. In some cases it also provides links for further reading.
I just can't use it seriously if verifying an answer takes as much time as working out the answer myself through my own research. In this context, that is essentially my understanding of "reliability" as well. For that reason I'd rather do without ChatGPT (which is probably down to my main interests and ChatGPT's lack of ability in those areas).
I can only fully agree with that.
Well, ChatGPT is only human, after all… And that's exactly how I'd treat it. I don't blindly believe anything other people tell me either. Asking an LLM for facts is a category error. But you can have a wonderful conversation with it and pick up some suggestions. And it's at least a good starting point for further research.
That's one way to look at it. 😉
Funnily enough, one way is supposedly to ask the AI directly whether what it said is true, or whether it's telling the truth. I've never tried it, but I've read about it.
Of course, you can easily check certain "facts" with the help of Google and so on.
That's putting the fox in charge of the henhouse… 😉 I can't imagine anything more "truthful" coming out of that.
No, but the AI is supposed to say when it is making something up. I don't remember exactly where I read this, but it was a fairly serious source, which is why it stuck with me.
It doesn't work.
Pity
ChatGPT has no capacity for assessing itself.
The same way you can tell whether a website is telling the truth or whether its author has made something up. Or whether a colleague is relating facts or just hearsay.
By researching reliable sources on the topic.
For scientific topics there is, for example, Google Scholar, a search engine for academic papers.
For other topics it's best to track down the primary source. You then have to judge for yourself whether you trust it or not.
As always when you get an answer to a question: by cross-checking it.
Alex