How is something like ChatGPT possible?

I downloaded it to my phone and this program can really answer everything, and you can't even tell its answers apart from human ones anymore… I find that incredible

How does this work, how is this possible?

NoArtFX
10 months ago

In itself, an AI like this is nothing but a lot of stochastics, i.e. probability.

The AI is fed with training data and trained on it. That way it can then give the most plausible possible answer to users' questions (in the case of GPT).

And caution is advised with the answers, because if GPT does not find a suitable answer, something is simply generated. That is not even wrong behavior, because it is a language model, not a knowledge model. ChatGPT is therefore applied incorrectly in most cases; it still gives the correct answer in many cases, but not always.

There is actually no intelligence behind it. The technology for AI already existed in the 80s and 90s; back then there simply was not enough computing capacity to use the mathematics behind it commercially and for the mass market.

By the way, researchers estimate about 550 tons of CO2 for training GPT-3.5; for comparison, a car emits about 2 tons a year. That corresponds to the power consumption of 320 four-person households.
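As a quick sanity check of the comparison in the comment above (taking its figures as claimed, not independently verified), the ratio works out as:

```python
# Figures as claimed in the comment above, not independently verified.
training_co2_tons = 550       # estimated CO2 for training GPT-3.5
car_co2_tons_per_year = 2     # rough annual emissions of one car
print(training_co2_tons / car_co2_tons_per_year)  # 275.0 car-years of driving
```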

Tamtamy
10 months ago

Well, if you want to know exactly, you won't get around a computer science degree.
The basic principle, simply explained, is an extremely broad database that the AI can draw on. What is new, in contrast to previous search engines, is that through the corresponding programming (keyword: algorithms) the AI can independently create links between the data, so that practically unpredictable and creative results can arise on the basis of an incredible amount of existing knowledge. The system can constantly improve itself, so to speak 'learn'.

Seliba
10 months ago
Reply to  Tamtamy

I am currently a computer science student and think this answer is quite bullshit. The basic functioning of generative artificial intelligence is not particularly difficult, and portraying the technology as witchcraft that only trained computer scientists could understand is, I find, more than questionable. Only by engaging with such technologies can you correctly assess the chances and risks as a private individual. This is not complicated; there are thousands of fantastic and easy-to-understand websites and videos on the topic. In a computer science Bachelor's program you have to be lucky to get a lecture that deals extensively with such modern technologies and how they differ from traditional AIs. In a Master's with an AI specialization, this can of course look different.

But to get back to the answer: a generative AI usually has no access to the database; it was trained on the database, and the results of that training are now part of the LLM.

The comparisons with traditional search engines also fall a bit short. Search engines have long been using algorithms that we could call artificial intelligence (the exact definition is very controversial). Links between data naturally exist in search engines too, and they have been identified by AI for decades. The models are also usually not designed to "learn" after their initial training; past inputs simply become additional context as part of the input in the respective chat (in the case of ChatGPT, Gemini etc.). In other words, a search engine "learns" more over time than an LLM, since it must constantly evaluate and rank new websites.
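The point that a deployed model does not keep learning can be sketched roughly like this (a hypothetical illustration; `model` stands in for any text-completion function, and real chat systems manage context in far more elaborate ways):

```python
# A deployed model's weights are frozen; a chat's "memory" is just the
# growing conversation history that is re-sent as context on every turn.
history = []

def chat_turn(user_message, model):
    history.append(("user", user_message))
    # The whole history becomes part of the next input (the context).
    prompt = "\n".join(f"{role}: {text}" for role, text in history)
    reply = model(prompt)  # the model itself is unchanged by this call
    history.append(("assistant", reply))
    return reply

# Toy "model" that just reports how much context it was handed.
reply = chat_turn("Hello", lambda p: f"(saw {len(p)} chars of context)")
```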

Basically, LLMs do not have much to do with search engines, and even if the leaders in Silicon Valley like to claim otherwise, I don't think LLMs can, or should, replace search engines. LLMs are simply mathematical models that calculate probabilities based on their training data and, with a bit of built-in variance ("temperature"), generate outputs that do not have to be factual even if they are usually linguistically correct. I see creativity as a human property, even if the outputs can be "unique".
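The "temperature" mentioned above can be illustrated with a small sketch (the score values are made up for illustration; this is the standard softmax-with-temperature idea, not any vendor's actual code):

```python
import math

def softmax(logits, temperature=1.0):
    # Divide the scores by the temperature, then normalize to probabilities.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                  # hypothetical scores for 3 tokens
sharp = softmax(logits, temperature=0.5)  # low temp: top token dominates
flat = softmax(logits, temperature=2.0)   # high temp: flatter, more variety
```

Sampling from `flat` gives more varied answers than sampling from `sharp`, which is why the same prompt need not produce the same output twice.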

It is neither magic nor a serious or creative source of information.

Tamtamy
10 months ago
Reply to  Seliba

Well, there you have really gone out on a limb, thanks to your (prior) knowledge of computer science.
But to your arguments:
“The basic functioning is not particularly difficult… (etc.)” – Then the question arises, however, why this was not developed in working form decades ago. Do you have an explanation?

“(The AI) was trained on a data basis…” – EXACTLY. So it uses exactly that as its starting point.
“The models are not designed to learn…” – but you can read statements to exactly that effect again and again.

“Creative” results do come out of AI again and again – in that links are made, or connections between data are “seen”, that had not previously been discovered by humans. (I have read examples from medicine; I would have to look up the sources again.)

NoArtFX
10 months ago
Reply to  Seliba

In a computer science Bachelor's program you have to be lucky to get a lecture that deals extensively with such modern technologies and how they differ from traditional AIs. In a Master's with an AI specialization, this can of course look different.

In the meantime, some universities and colleges specifically offer a Bachelor's degree program in Artificial Intelligence, or at least an AI specialization within the computer science BSc.

Tamtamy
10 months ago

Okay – thanks for your explanatory information.

Seliba
10 months ago

Your comment only reaffirms your ignorance of the technology and my criticism of your answer. I don't intend to have a long discussion with you.

Well, there you have really gone out on a limb, thanks to your (prior) knowledge of computer science?

I don't know if you're really in a position to criticize my studies like that.

But then the question arises why this was not developed in working form decades ago

Just because something is not complicated does not mean that inventing it was simple. Today, every child at school learns insights that humanity collected over millennia. I spoke of the basic features of how the technology works, not of the implementation details.

EXACTLY. So it uses exactly that as its starting point

Generative AIs use a data basis as a foundation; the claim that the AI actively accesses it, however, is inaccurate.

But you can read statements to exactly that effect again and again

As I wrote in my answer, you have to distinguish between learning during training and learning after training ends. Published models learn only through (re)training, and the retrained versions are then effectively different models. "I heard xyz somewhere" also does not refute my description of how it works.

“Creative” results do come out of AI again and again

Well, I see creativity as a purely human property, but if you want to call it that, fine.

Links are made, or connections between data are “seen”, that had not previously been discovered by humans. (I have read examples from medicine; I would have to look up the sources again.)

I would like to make clear once again that we are talking about generative AIs, i.e. Large Language Models. In the natural sciences, traditional AI models suited to those tasks are used. These are fundamentally different technologies. Companies and the media often throw them into one pot for marketing reasons, and it is understandable that as a layperson you quickly get confused. That is exactly the point of my statement: that is why you should inform yourself about such topics. It's not hard if you want to.

Seliba
10 months ago

There are of course many levels to this, but I will try to explain the basic functioning very simply:

A generative AI like ChatGPT first needs billions of tokens (which you can think of as a kind of word) of training data. These can come from websites, books, or forum posts on the Internet. On this basis, probabilities are calculated: how likely is it that one word follows another word in a specific context? If you now give the AI a new input (called a prompt), it can create an output word by word, based on the prompt and its training (i.e. the probabilities).

With a parameter called temperature, you can also make the AI sometimes not choose the most likely word; that is how variation comes into the answer, and the same input does not always lead to the same output.

The AI has no understanding of what it writes, the way we humans do; it is all a matter of probabilities based on training data. That is why so-called hallucinations often occur, where the AI generates something that, while linguistically sensible, is complete nonsense. That is why you should always be critical of the outputs of generative AIs.
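The word-for-word generation described above can be shown with a deliberately tiny toy model (a bigram counter over a few words; real LLMs use neural networks with billions of parameters, so this illustrates only the probabilistic idea, not the architecture):

```python
import random
from collections import Counter, defaultdict

# Count how often each word follows another in a tiny "training corpus".
corpus = "the cat sat on the mat and the cat ate".split()
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word, temperature=1.0):
    counts = follows[word]
    # Higher temperature flattens the weights, adding variety;
    # lower temperature makes the most frequent follower dominate.
    weights = [c ** (1.0 / temperature) for c in counts.values()]
    return random.choices(list(counts), weights=weights)[0]

print(next_word("the"))  # "cat" (seen twice in the corpus) or "mat" (once)
```

Repeating the call can yield different words for the same input, which is exactly the variation the temperature parameter controls.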

Modern Large Language Models such as ChatGPT 4 are significantly more complex and sometimes consist of several parts that have been trained for special tasks and can solve them particularly well. This field is being researched constantly, and there are many breakthroughs, but also many unresolved (or unsolvable) problems. In essence, however, they are still based on this principle. If you are interested in the topic, there are many good videos on sites like YouTube.
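The "several parts trained for special tasks" can be pictured, very loosely, as a router choosing between specialists (a hypothetical sketch inspired by mixture-of-experts designs; real systems route per token inside the network, not per request like this):

```python
# Hypothetical experts, each "specialized" for one kind of input.
experts = {
    "code": lambda text: f"[code expert handles] {text}",
    "prose": lambda text: f"[prose expert handles] {text}",
}

def route(text):
    # Crude routing rule purely for illustration; real routers are learned.
    key = "code" if ("def " in text or "{" in text) else "prose"
    return experts[key](text)

print(route("def add(a, b): return a + b"))
```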

McHusky
10 months ago

This is the result of decades of research and development. Machine learning didn't just appear yesterday.

In essence, the data is split into pieces, these are translated into numbers (vectors), and from there it is almost entirely about probability.

In this way, AI models like GPT are able to capture context and, based on it, deliver the most likely correct answer.
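The "data into numbers" step can be sketched with made-up word vectors (the values below are invented for illustration; real models learn vectors with hundreds or thousands of dimensions):

```python
# Toy word vectors; similar words get similar (made-up) numbers.
embedding = {
    "king": [0.9, 0.1, 0.8],
    "queen": [0.9, 0.9, 0.8],
    "apple": [0.1, 0.2, 0.1],
}

def dot(u, v):
    # A simple similarity score: larger means "more alike" here.
    return sum(a * b for a, b in zip(u, v))

# In this toy space, "king" is closer to "queen" than to "apple".
print(dot(embedding["king"], embedding["queen"]) >
      dot(embedding["king"], embedding["apple"]))
```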

EvaRiddle
10 months ago

It's programming, and the AIs learn more and more by themselves.

Ifosil
10 months ago

There are good videos on YouTube that describe how self-learning neural networks work.

Devrunrik
10 months ago

It's like Google, but it knows everything that's in Google and can give the best answers.

Jannis956
10 months ago

This shit also contains a lot of misinformation

Ifosil
10 months ago
Reply to  Jannis956

No, that's not true. Some of it is misinformation, but we're really talking about a small part here. AIs malfunction most when it comes to ideological things, such as modern woke ideology, feminism, ethnicity and conservatism.

Google's Gemini AI is especially bad. Many AIs are politically charged; for example, if you ask them for sexist remarks about women, most refuse, but sexist remarks about men are okay. You can see they have been given certain guardrails, or certain world views.

Seliba
10 months ago
Reply to  Ifosil

Hallucinations are a huge problem, and this statement proves that you haven't seriously tried to get information from an AI on somewhat niche topics. Why you are suddenly complaining about filters, which are naturally imperfect in LLMs because of the huge training sets (see jailbreaks, or your examples), I don't know.

verreisterNutzer
10 months ago
Reply to  Jannis956

With GPT-4o, OpenAI's AI has become significantly better. And the emphasis is on SIGNIFICANTLY. I tested GPT-4 a few months ago: slow, often generated an error in the middle of a sentence, etc. 3.5 is also more wrong than right. Now, with 4o, it's as if 100 years of work had gone into it.

It has become so good that I have now renewed my $20 subscription.

The AI for image generation is also first-rate.