11 Answers
DerRoll
10 months ago

When it comes to mathematical topics, ChatGPT is really bad. That’s because it can’t apply any logic of its own; it only generates sentences that sound good but are, unfortunately, complete nonsense.

ChatGPT and Mathematics – YouTube

ChatGPT and Logic – YouTube

KuarThePirat
10 months ago

That’s what a group of my students thought; they used ChatGPT for exam preparation because they figured it would be faster. Sorry to say, every single one of them failed.

AdvanPadawan
10 months ago

Yeah, it’s really quite good; you can prepare for things better with it.

ILoveCode
10 months ago

I think you should worry about it before it’s too late. It’s serious. This can also backfire.

atoemlein
10 months ago

It’s more that it’s overestimated. Or misjudged. You just need to know how “it” ticks and what it was made for.
You have to know that it can hallucinate (i.e. wrap the greatest nonsense in beautiful words), or that it can be off by a factor of 1000 (!) in a scientific calculation.

apachy
10 months ago

Apparently it always depends on the topic. I’ve tested it for software development here and there. As a rule, I spent 45 minutes talking at the thing to get a solution I could have worked out myself in 3 minutes.

Then there were several solutions that were simply invented, especially the answers after I said something didn’t work. Then comes a “sorry, that’s for version X; in version Y it has to be called this”. The functions didn’t exist at all; they came from an external library of a completely different language that was simply slipped in.

But I had to research that myself first, tell the dear chatbot, and then practically dictate the solution to it until I finally got it out after 45 minutes.

Beyond that there are other problems. Most chatbots, for example, have no integration into the common IDEs used in software development, and the rest don’t support the most common ones. Then there’s the online-only requirement for any that would actually be effective, and the fact that you have to feed the thing plenty of context, which is often not possible in a work environment, or at least something the employer or customer certainly doesn’t like to see, etc.

In the end, ChatGPT delivers answers that sound logical in terms of language flow. That is, after all, the job of an LLM. But we want to use it for problems that require facts. I believe a lot, a lot, a lot of time will pass before the thing spits out reasonable results so reliably that it actually takes work off our hands, even for the person in front of the monitor who doesn’t already know the solution. And even then it gets difficult when it comes to questions of liability, etc.

For images, audio, etc., this is all well and good. Those are creative processes with no right and wrong. For transferring money, or the software in hospital devices, a rocket, an airplane, or a car, it’s a completely different story.

tunik123
10 months ago

When it comes to facts, ChatGPT is really bad. You can’t blame ChatGPT for that, though, because that’s not what it was programmed for.

But for tasks like “describe an old German oak”, ChatGPT is really good.

FelixSH
10 months ago

Great answer options, completely neutral.

Yes, ChatGPT can write correct things. It can also produce complete nonsense, and you can really only tell which by checking. And it doesn’t help that it’s right in 80% of cases if you can be caught out by a case where it’s nonsense.

It’s not useless, but if you can’t at least fundamentally verify whether an answer is correct, I’d keep my hands off it. It can simply invent things. It’s not that you never get anything meaningful out of it, but that it can write things that are wrong or completely invented, yet look good enough to be believed.

EinTyppie
10 months ago

Lol, what a fair, unbiased selection of poll options.

ChatGPT is absolutely not reliable, ESPECIALLY when it comes to science. Every AI chat model has had training data on history and science, yet it still makes up each new word in the sentence. You can only hope that what it says was taken from the record during training, but you can never be sure.

apophis
10 months ago

ChatGPT is far too often overestimated.

The chatbot constantly makes mistakes, especially when it comes to factual knowledge, mathematics, and logic.
In itself that’s completely fine, since it’s pointed out that the chatbot can make mistakes.

It becomes problematic when users take every word from ChatGPT at face value and accept it as fact without checking its accuracy in any way.

A few weeks ago there was a question in which the OP claimed ChatGPT was lying to him.
In another question, someone was upset because a chatbot was supposedly spreading reputation-damaging lies about politicians.
Neither of them knew, nor wanted to understand, how such a chatbot works, why it makes these mistakes, and that there is no intent behind its answers.

Tamtamy
10 months ago

ChatGPT can give a good overview of many things.
The only catch: you can’t rely on the information being correct.

In two of my queries I’ve seen ChatGPT fantasize and, without batting a virtual eyelash, simply invent something that proved treacherous because it was plausible but not true. And the fatal part was that the answer contained not a hint of doubt.