7 Answers
Ireeb
3 months ago

Language models do not actually learn during a conversation.

Even when it claims to have learned something, the language model has already made a mistake in that very answer, so it should come as no surprise that other things are incorrect as well.

ChatGPT has no self-awareness, is not introspective, and does not know how it works, what "learning" means, or what is right and wrong. It can only assemble sentences more or less meaningfully based on the data it was trained on. Any knowledge ChatGPT has about itself is information it received through its training data.

If the training data includes conversations in which a person, after being pointed to an error, replied that they had learned something, then the AI picks that up. It learns what a typical response looks like when someone points out a mistake. And since people say they have learned something from such a hint, the AI says so too, regardless of whether it has actually learned anything (tip: it hasn't).

Language models are based on statistics: they string together the words that are most probable in that order. This also means the AI does not even know how meaningful its own statement is.
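To make that concrete, here is a minimal toy sketch (nothing like ChatGPT's actual model, and the word probabilities are invented for illustration): the model simply samples the next word from a learned probability distribution over what tends to follow the current context, with no notion of truth.

```python
import random

# Invented toy probabilities: given the last two words, how likely is
# each candidate next word. A real model learns these from huge corpora.
next_word_probs = {
    ("I", "have"): {"learned": 0.6, "seen": 0.3, "eaten": 0.1},
    ("have", "learned"): {"something": 0.7, "nothing": 0.3},
}

def next_word(context, probs):
    """Sample the next word from the distribution for the last two words."""
    dist = probs[tuple(context[-2:])]
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights)[0]

sentence = ["I", "have"]
sentence.append(next_word(sentence, next_word_probs))
```

The point of the sketch: "I have learned" comes out often simply because it is probable in the table, not because anything was learned.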

This is also one of the biggest dangers of language models: they almost always sound confident, even when they are wrong. Many users lack the understanding that the AI actually has no idea what it is outputting, and that you cannot simply believe an AI. The same applies to statements the AI makes about itself. You could also manipulate ChatGPT's training data so that it vehemently claims to be a kobold in a library who quickly looks up the answers. That would of course be nonsense, but if it is in the training data, the AI will reproduce it.

Erzesel
3 months ago
Reply to  Ireeb

Excellently explained 👍

ajkcdajefiu
3 months ago

seemingly new features

OpenAI has probably added a function to ChatGPT that lets it draw on your learning interactions in future conversations.

ajkcdajefiu~

zebra997
3 months ago

It makes mistakes sometimes, but corrects them when you point them out. That's how it learns.

DasFloYT
3 months ago

Sure. I've been using ChatGPT for a long time, and while the software isn't perfect, it does learn.

When I read ChatGPT's answers (sometimes also in the context of previous answers) and notice errors, I ask whether I am mistaken or whether it is an error on ChatGPT's part.

Spoiler: it's mostly an error in the software.