Is ChatGPT underrated?
I've researched a wide variety of topics and errors are really rare. It's complete nonsense that ChatGPT is unreliable (at least when it comes to scientific topics).
When it comes to mathematical topics, ChatGPT is really bad. That's because it cannot follow its own logic; it only generates plausible-sounding but, unfortunately, completely nonsensical sentences.
ChatGPT and Mathematics – YouTube
ChatGPT and the Logic – YouTube
That's what a group of my students thought: they used ChatGPT for exam preparation because they figured it would be faster. I'm sorry to say every single one of them failed.
Yeah, it's really good; you can prepare for things better with it.
I think you should worry about that before it's too late. It's serious. This can also backfire.
It's more overrated. Or misjudged. You just need to know how "he" ticks and what he was made for.
You need to know that he can hallucinate (meaning he can wrap the greatest nonsense in beautiful words), and that he can be off by a factor of 1000 (!) in a scientific calculation.
Apparently it always depends on the topic. I've tested it for software development here and there. As a rule, I spent 45 minutes talking to the thing to get a solution I could have worked out myself in 3 minutes.
Then there were several proposed solutions that were simply made up. Especially the answers after I said it didn't work: then comes a "sorry, that was for version X; in version Y it has to be called like this." Some of the functions didn't exist at all; they came from an external library of a completely different language that was simply slipped in.
But I had to research all that myself, tell the dear chatbot, and then practically dictate the solution to it until I finally got it out after 45 minutes.
There are other problems, too. Most chatbots, for example, have no integration into the common IDEs used in software development, and those that do don't support the most common ones. Then there's the online requirement. And for any of this to be effective, you have to feed the thing with proper context, which is often not possible in a work environment, and which the employer or customer certainly doesn't like to see, etc.
In the end, ChatGPT provides answers that sound logical in their flow of language. That is, after all, the job of an LLM. But we want to use it for problems that require facts. I believe a lot, a lot of time will pass before the thing spits out reasonable results so reliably that it actually takes work off our hands and that someone who doesn't already know the solution can sit in front of the monitor and use it. And even then, it gets difficult once questions of liability etc. come up.
For pictures, audio, etc., this is all well and good. Those are creative processes where there is no right and wrong. For transferring money, the software in hospital devices, a rocket, an airplane, or a car, it's a completely different story.
When it comes to facts, ChatGPT is really bad. You can't blame ChatGPT for that, because that's not what it was programmed for.
But for tasks like "describe an old German oak," ChatGPT is really good.
Great answers, completely neutral.
Yes, ChatGPT can get things right. It can also produce complete nonsense, and you really only know which it is once you check. And being right in 80% of cases is worthless if you can't catch the cases where it's nonsense.
It's not useless, but if you can't at least fundamentally verify whether an answer is correct, keep your hands off it. It can simply invent things. The point isn't that it never produces anything meaningful, but that it can write things that are wrong or completely invented, yet look good enough to be believed.
Lol, what a fair, unbiased poll selection.
ChatGPT is absolutely not reliable, ESPECIALLY when it comes to science. Every AI chat model has had training data on history and science, yet it still invents every new word in a sentence anew. You can only hope that a statement was taken verbatim from the training data, but you can never be sure.
ChatGPT is far too often overestimated.
The chatbot constantly makes mistakes, especially when it comes to factual knowledge, mathematics, and logic.
In itself, that's completely fine, since users are explicitly warned that the chatbot can make mistakes.
It becomes problematic when users take every word from ChatGPT at face value and accept it as fact without checking its accuracy in any way.
A few weeks ago there was a question here in which the asker claimed ChatGPT was lying to him.
In another question, someone was upset because a chatbot was supposedly spreading defamatory lies about politicians.
Neither of them knew, nor wanted to understand, how such a chatbot works, why it makes these mistakes, and that there is no intent behind its answers.
ChatGPT can give a good overview of many things.
The only catch: you cannot rely on the information being correct.
In two of my inquiries, I experienced ChatGPT fantasizing and, without batting a virtual eyelash, simply inventing something that turned out to be nonsense, because it was plausible but not true. And the fatal thing was that the answer expressed no doubt whatsoever.