27 Answers
ralphdieter
4 months ago

Please ask it for a C program that prints π exactly to 50 digits. The result is easy to check (e.g. here).

The first version of ChatGPT was hopelessly out of its depth with this and couldn't get anywhere even with clear hints about its mistake.
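
For reference, a minimal sketch of what such a program could look like, assuming the MPFR library (my choice for the illustration, not necessarily what ChatGPT used); the 50 digits it prints are easy to check against any published value of π:

#include <stdio.h>
#include <mpfr.h>

/* Sketch: print pi to 50 decimal digits with MPFR.
 * Build with: gcc pi50.c -lmpfr -lgmp */
int main(void)
{
    mpfr_t pi;
    mpfr_init2(pi, 200);              /* ~200 bits comfortably covers 50 decimal digits */
    mpfr_const_pi(pi, MPFR_RNDN);     /* pi, rounded to nearest */
    mpfr_printf("pi = %.50Rf\n", pi); /* 50 digits after the decimal point */
    mpfr_clear(pi);
    return 0;
}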

ralphdieter
4 months ago

Impressive! Unfortunately, the program does not compile (mpz_init_set_si() and mpz_set_ui() return void).

Once you fix that, you get

π = 3.141592653589793238462643327950288419716767885485

The digits printed in bold are correct. Only with N=4 does everything come out right.

Still, things have improved enormously in the last few years. But if my assistant handed me code that didn't compile and, once repaired, spat out wrong results, I'd throw them out.
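
For context, both of those GMP functions really are declared void, so any generated code that tries to use their return value will be rejected by the compiler. A minimal sketch of correct usage (a hypothetical illustration, not the generated program):

#include <gmp.h>

/* Hypothetical illustration: mpz_init_set_si() and mpz_set_ui() return void,
 * so something like  if (mpz_set_ui(n, 10) != 0)  cannot compile.
 * They have to be called as plain statements. */
void example(void)
{
    mpz_t n;
    mpz_init_set_si(n, 4);  /* initialize n and set it to 4 */
    mpz_set_ui(n, 10);      /* later, assign a new unsigned value */
    mpz_clear(n);           /* release the storage */
}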

Feel free to point out its mistakes to ChatGPT and tell me how it deals with them now. Two years ago it was very receptive to criticism but never learned anything from it.

Windoofs10
4 months ago

I can't say anything about the math side, but GPT-4o is still missing some controversial information.

If you ask it, for example, what the latest episode of series XY was, it still fails, despite having internet access.

The same goes for political topics, landmarks and so on.

But: a lot has happened since GPT 3.5.

Windoofs10
4 months ago
Reply to  GefaAa

O1?

Is that a new version?

I’m sorry, I misunderstood that.

What does it do?

NicolasHelbig
4 months ago

Try asking “how many r’s are there in strawberry” – before o1, not a single model managed to give the right answer, namely three. I don't know whether they have fixed that in o1 now. You can also ask it to roll a die; on the first attempt, GPT models always came up with four.

The question is whether it answers such things on its own with the language model or whether it knows it should use a code interpreter for them. That kind of reasoning, i.e. knowing its own limitations, is not a strength of language models, and that doesn't change just because it can now philosophize to itself for twenty seconds before a correct answer appears. It may get better, but it isn't solved.

Otherwise, just ask a specific question about any topic you know really well, and it will more or less instantly say something somewhere that isn't quite right.
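
The letter-counting case is exactly the kind of task a code interpreter would handle trivially; a short sketch of my own (not anything a model produced) shows how little is involved:

#include <stdio.h>

/* Illustration only: count the letter 'r' in "strawberry" (prints 3). */
int main(void)
{
    const char *word = "strawberry";
    int count = 0;
    for (const char *p = word; *p != '\0'; p++)
        if (*p == 'r')
            count++;
    printf("'r' appears %d times in \"%s\"\n", count, word);
    return 0;
}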

ralphdieter
4 months ago
Reply to  NicolasHelbig

“on the first attempt, always four”

I guess it got that from here 🙂

NicolasHelbig
4 months ago
Reply to  ralphdieter

Of course there is a matching xkcd comic for it… ^^
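
For anyone who does not know the reference: the comic is presumably xkcd 221 (“Random Number”), whose entire joke is a function roughly like this:

int getRandomNumber(void)
{
    return 4;  /* chosen by fair dice roll.
                  guaranteed to be random. */
}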

beelee
4 months ago

Ask ChatGPT something impossible, like tonight's lottery numbers.

Irgendjemand783
4 months ago

According to ChatGPT, there are three R's in the word “herdberry” when I ask it.

Irgendjemand783
4 months ago
Reply to  GefaAa

For me it's sometimes right. Once it even said 1 R.

WilliamDeWorde
4 months ago

I don't know the individual versions, but it has a very hard time with counting and rhyming.

When I ask for a text with 800 words, I get drivel with 350 words. If I then tell it what's wrong, all that comes out is garbage plus a thousand apologies.

WilliamDeWorde
4 months ago
Reply to  GefaAa

Well, counting words is something any spell-checking program can do, or finding words with the same ending to make rhymes out of them. On the longer texts, though, I stand by what I said.
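
And counting words really is that cheap; a minimal sketch of my own that counts whitespace-separated words from standard input:

#include <stdio.h>
#include <ctype.h>

/* Illustration only: count whitespace-separated words read from stdin. */
int main(void)
{
    int c, words = 0, in_word = 0;
    while ((c = getchar()) != EOF) {
        if (isspace(c)) {
            in_word = 0;
        } else if (!in_word) {
            in_word = 1;
            words++;
        }
    }
    printf("%d words\n", words);
    return 0;
}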

WilliamDeWorde
4 months ago

The connection is clear, though, and ChatGPT has confirmed it itself, even when asked directly. You can try asking it.

WilliamDeWorde
4 months ago

For me it already fails because of the PC's memory. When I use ChatGPT, the PC freezes up completely more and more quickly, even though I keep the conversations short.