How do you determine whether ChatGPT (and other AIs) is actually reporting facts or just making things up?

An AI can also assemble something from fragments of facts it has learned that ultimately amounts to nonsense. In other words, anyone who takes everything at face value can easily fall into a nasty trap.

How do you protect yourself against this? Do you fact-check, or do you just believe everything?

17 Answers
Waldmensch70
10 months ago

With your own brain. By having an idea of the topic (or learning it yourself if in doubt) and checking the output for accuracy. 🤷

That's how it is with texts that may have been written by others (human or machine). If you use them and pass them off as your own, you are also responsible for their content. So you should think carefully about what you might be adopting "blindly".

Note: nobody takes your own work, or your own thinking, off your hands. No machine does.

Waldmensch70
10 months ago
Reply to  CatsEyes

Yes, unfortunately.

apophis
10 months ago
Reply to  Waldmensch70

"Note: nobody takes your own work, or your own thinking, off your hands. No machine does."

But such a calculator does take a lot of work, and a lot of one's own thinking, off one's hands.

kmkcl
10 months ago

I haven't used it much yet… common sense and prior knowledge were needed. In any case, I am aware that a language AI primarily generates good-sounding sentences and that the content should at least be questioned.

In principle, I like Bing's approach to creativity. In some cases it also provides links for further reading.

evtldocha
10 months ago

"How do you protect yourself?"

I simply don't use it seriously if verifying an answer takes me as much time as it would have taken to work out the answer myself through my own research. In this context, that is essentially also my understanding of "reliability". That's why I do without ChatGPT (which is probably due to my main interests and ChatGPT's lack of skill in those areas).

segler1968
10 months ago

Well, ChatGPT is just like a person… and that's exactly how I treat it. I don't blindly believe anything other people say either. Asking an LLM for facts is a category error. But you can have a wonderful conversation with it and pick up suggestions. And at the very least it is a good starting point for further research.

esisthalbzwei
10 months ago

Funnily enough, one way is supposedly to ask the AI directly whether what it said is true. I've never tried it, but I've read about it.
Of course, you can also easily check certain "facts" yourself with the help of Google etc.

esisthalbzwei
10 months ago
Reply to  CatsEyes

No, but the AI is supposedly able to say whether it made something up. I don't remember exactly where I read this, but it was a fairly reputable source, which is why it stuck with me.

segler1968
10 months ago
Reply to  esisthalbzwei

It doesn't work.

esisthalbzwei
10 months ago
Reply to  segler1968

Pity

apophis
10 months ago

The same way you can tell whether a website is telling the truth or whether its author made something up, or whether a colleague at work is telling you facts or hearsay:
by researching reliable sources on the topic.

For scientific topics there is, for example, Google Scholar, a search engine for scholarly publications.
For other topics it is best to track down the primary source. You then have to judge for yourself whether you trust it or not.

EinAlexander
9 months ago

"How do you determine whether ChatGPT (and other AIs) is actually reporting facts or just making things up?"

The same way as always when you get an answer to a question: by cross-checking it.

Alex