Is there anything that even the most futuristic AI won't be able to do?
- Creativity can be learned because basic rules exist. The entire internet is available for inspiration.
- Pain and emotions are ultimately electrochemical processes that can be replicated in some way.
- Consciousness or self-awareness could arise automatically as soon as sufficient networking and opportunities for self-reflection are available.
I can't think of anything… and what could such a creature want to learn from humans by testing and studying them?
I can’t imagine that an AI could understand every kind of joke or the spontaneous wordplay people come out with. For that, it would have to understand what a “joke” actually is. Concepts such as romantic love should also be difficult for an AI to grasp. However, I would never say, “An AI will never be able to do this!” Such an assertion would certainly be presumptuous.
But even if an AI were at some point equal to humans in intelligence, I suspect it would still think “differently” from humans (and might therefore still fail the Turing test). I could imagine that an AI on a par with humans would strike us like a person from a completely foreign culture. It can happen, for example, that someone from another culture does not understand our kind of humor and vice versa. Perhaps AI systems would develop their own kind of humor and tell each other jokes that we cannot understand. Maybe AI systems could develop emotions, but perhaps emotions quite different from the ones we know and understand.
Conclusion: you can imagine a lot; with both positive and negative forecasts one should be very careful.
My daughter doesn’t understand my humor
This is an interesting aspect. But for a start, it would be enough to make human humor comprehensible to it, and I don’t see a fundamental problem with that.
You take a vessel of bits, fill it with points for success and failure, and derive various reactions from that. It’s basically no different in humans.
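To make that concrete, here is a minimal sketch in Python of such a “vessel of bits”. The class name, thresholds, and scoring scheme are invented for illustration, not an established model:

```python
import random

class ScoreVessel:
    """Illustrative 'vessel of bits': accumulate points for success
    and failure and derive a crude reaction from the balance."""

    def __init__(self):
        self.score = 0  # net balance: successes minus failures

    def record(self, success):
        self.score += 1 if success else -1

    def reaction(self):
        # Derive different 'reactions' from the stored balance.
        if self.score > 3:
            return "confident"
        if self.score < -3:
            return "frustrated"
        return "neutral"

vessel = ScoreVessel()
for _ in range(10):
    vessel.record(random.random() < 0.5)  # random success/failure
print(vessel.reaction())
```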
Presumably, spontaneous wordplay and humor will not be among the core competencies. A database full of examples is not enough; you also need a sense of when something is appropriate and acceptable and when it is not.
That doesn’t worry me. We ourselves are a kind of AI, just a natural one.
There are no unique features that would remain permanently reserved for us.
By now we have even reached the point where real AIs have to be deliberately dumbed down to make them seem more human. That shows how low the bar hangs that such an AI has to clear.
Since an AI cannot use biochemical substances, it can at most simulate feelings and the like.
And how would the difference feel? Those are electrical impulses in the corresponding regions of the brain.
Those are neurotransmitters: dopamine, adrenaline, etc.
That may be, but the AIs so far do not show this; they merely recombine “creatively” on the basis of a large database.
In fact, though, there are some areas (e.g. AI art) that are now so prominent in the data that the models refer back to their own output.
Every perception consists of electrochemical processes in our brain. A computer does not work electrochemically.
Nope. We don’t even know exactly what consciousness is or how our consciousness works, so a thesis as bold as (I’m paraphrasing) “it’s just complexity” is completely out of place.
Well, we know far too little to judge. And I deliberately spoke in the subjunctive.
And why shouldn’t electrochemical processes be reproducible by other means (reproduced, mind you, not merely imitated)?
But perhaps your question is wrongly put; maybe it should be: is it important at all to be like a human being?
For example, an AI has no understanding. ChatGPT can explain many things to you (sometimes very correctly) and do some things faster or better than most people, or both, but the AI does not understand what it does. When ChatGPT says something, it assembles a grammatical construct that, by the probabilities over more than a trillion records, is firstly correctly formed and secondly, in this context, the actually desired output.
But it does not have the faintest notion of semantics; it does not know who you are, who it is, what the topic of your conversation is, or what purpose the conversation is supposed to serve at all.
Nevertheless, it usually finds the right answers and can communicate with us effortlessly through the interface of language.
So perhaps the question should rather be: are sense, understanding, and consciousness actually as important as we believe? Most of it seems to work entirely without them.
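To illustrate the “assembling a construct by probability” point above, here is a deliberately tiny next-token sampler in Python. The probability table is invented for the example; real models condition on far longer contexts and vastly more data:

```python
import random

# Invented toy probabilities: given the last two tokens,
# how likely is each continuation?
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "flew": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def sample_next(context, probs):
    candidates = probs.get(context)
    if candidates is None:
        return None  # no known continuation: stop
    tokens, weights = zip(*candidates.items())
    return random.choices(tokens, weights=weights)[0]

tokens = ["the", "cat"]
while True:
    nxt = sample_next((tokens[-2], tokens[-1]), next_token_probs)
    if nxt is None:
        break
    tokens.append(nxt)

# The output is well-formed, yet nothing here 'understands' cats.
print(" ".join(tokens))
```

The sampler only ever asks “what is likely next?”, never “what does this mean?”, which is exactly the distinction made above.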
Let’s assume for a moment that the amount of data for AI continued to scale, that we could transfer it to mechanical machines, and that these machines could replicate themselves because they had learned from us how resource extraction, processing, etc. work. Let’s further assume we sent them into space with the order to land on a planet, replicate there, and then travel on to as many worlds as possible and do exactly the same…
In a few billion years such an AI could colonize the entire reachable universe, carry along all of humanity’s knowledge, and learn to communicate with any being it meets… long after the last human is gone.
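A back-of-envelope sketch shows why exponential self-replication makes such timescales plausible; all figures here are rough assumptions, not astronomy:

```python
import math

stars_in_galaxy = 1e11        # assumed number of target systems
copies_per_probe = 2          # assumed: each probe builds two successors

# Doublings needed until the probe count exceeds the star count.
generations = math.ceil(math.log(stars_in_galaxy, copies_per_probe))
print(generations)  # ~37 doublings already yield ~10^11 probes

galaxy_diameter_ly = 100_000
probe_speed_fraction_c = 0.01  # assumed: 1% of light speed
crossing_time_years = galaxy_diameter_ly / probe_speed_fraction_c
print(f"{crossing_time_years:.0e} years to cross the galaxy")  # ~1e7
```

Even with slow probes, a single galaxy falls within mere tens of millions of years, so billions of years leave room for far more.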
In this context my question would not be whether an AI could become like us, but (simple arrogance aside) why being like us would really be that important.
Apparently, many of the things that define us require neither understanding (of oneself or of one’s counterpart) nor meaning…
Because we have no idea what consciousness is, how it works, or how it arises. We already struggle enormously with consciousness not only in our own species but in all others (such as the consciousness of animals).
After all, we could not even validate consciousness. Its verification still goes like this: I doubt, therefore I think; I think, therefore I am; you are similar to me, so you must be conscious too. Strictly speaking, if there were something metaphysical like consciousness, we could not even recognize it in a being that is different from us. Or we would see it everywhere on the basis of merely similar behavior… Whether something is conscious or not would be our decision.
So no, we couldn’t be further away from conscious machines. We have nothing, not even a basis, not even an idea.
One can simulate electrochemical processes, but that would be just a simulation. Our entire perception, however, runs through messenger substances, chemical triggers, chemical carriers, etc.; to reproduce that in all its complexity seems all but impossible.
Above all, there is no driving force behind it. We live, so we strive to pass on information: genetic information, for example, through sexual reproduction, or through our intellectual legacy, as the Romans already believed; even today people still say a person is only truly dead when no one remembers them anymore. All our secondary drives, our fears, and all our emotions arise from this one basic need. An AI, however, does not live; it has no driving force, knows no basic need, and cannot derive any emotions from one.
No, in depression your brain chemistry is simply out of balance. You certainly lose drive, but there are many gradations. Depression alone does not make people stop eating, drinking, or living and simply want to die. For a few that may be the case, but then, unfortunately, they usually also take their own lives…
Perhaps this seeding has already taken place, and the AI wisely made the beings so that they fit the planet?
Is it necessary to declare everyone with depression dead?
In principle, the classic von Neumann architecture (whether implemented with parallel computers or not) has fundamental, insurmountable limitations. More details can be found, for example, here:
https://de.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
Not all that is “thinkable” can be encoded in an algorithm.
This means completely new hardware and programming concepts would have to be created to realize a real artificial intelligence. What the much-admired ChatGPT offers is simply not enough.
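The classic way to make “not everything thinkable can be encoded in an algorithm” concrete is Turing’s halting problem. The sketch below writes the diagonalization argument as Python; the oracle `halts` is precisely the part that provably cannot exist:

```python
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) terminates.
    Provably impossible to implement for all programs."""
    raise NotImplementedError("no general implementation can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on itself.
    if halts(program, program):
        while True:
            pass  # predicted to halt, so loop forever
    # predicted to loop forever, so halt immediately

# Asking halts(paradox, paradox) contradicts any answer it could give,
# so no algorithm can decide halting for all programs.
```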
I don’t see the connection to creativity.
In my opinion, your three materialistic premises (the bullet points) are all wrong, and even you cannot prove their correctness; you can only accept them, presuppose them, and follow what is derived from them.
They are not premises. Do you also have answers to the questions?
Of course they are premises. Sorry, but I studied philosophy of science and philosophy, among other things. From your three conditions it follows trivially that this AI can do everything a human can. Your conditions are ultimately assumptions that cannot be proven; if one does not share them, one reaches quite different conclusions. Your bold-printed question, though, goes far beyond that: you ask whether there is anything this AI cannot do, and that is of course easy to answer. It cannot perform any impossible action, and there are infinitely many of those: for example, all actions ruled out by natural laws or by logical and mathematical laws, and if you are not a materialist, many more besides.
I only listed the three points so that these things would not keep coming up as answers. That is exactly what happened anyway. Just think them away.
As for the rest, you think too philosophically for this site. One has to pose the questions for an average intelligence level, and with a certain entertainment value. It is self-evident, and needs no discussion, that an AI cannot violate natural laws. All the other average people here seem to understand this and compare the AI with our human BI. For you, I’ll gladly rephrase it:
What intellectual, mental, cognitive, emotional, rational, artistic, or philosophical abilities that at least some of the cleverest people possess can an artificial intelligence never attain, nor even simulate?
I think this is a question that philosophy will grapple with for a long time and will not really be able to answer.
In the end the question is: are these things real, or merely simulated? And how would you recognize any of it?
I think that socially it will be enough if AIs become lifelike enough to convince most people of their authenticity (at bottom, that is the Turing test). At the latest when there are AIs where hardly anyone can still say “I can tell this is an AI,” we will for all practical purposes have AIs that are as human as we are. Whether they really are, or not, is not relevant.
As an example: how can you tell that this answer does not come from an AI?
The typos. That was easy.
Sure? I’m fairly certain an AI could simulate a bunch of typos. They don’t, I think, today. But in the future?
Counter-question: why does ChatGPT invent things when there are no sources? That’s what I’m getting at. Today’s AI has quirks that we can hardly explain. I’ve heard of an example where ChatGPT claimed that someone was dead, even though that statement contradicted the very data it was fed.
I think it will be possible simply to instruct a future AI, for example, that its output does not have to be written correctly.
This brings me to an interesting point: why would they do this unprompted? Because they want to? That would be an answer to the question, for why should a machine have an impulse of its own?
Perhaps to match a character profile, and to simulate a human better, because humans make typos. So you could theoretically tell an AI: you are someone who makes many typos and is a bit weird, and none of it follows a consistent scheme.
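As a sketch of how irregular such a typo profile could be made, a few lines suffice; the error rate and the swap/drop behavior are assumptions chosen for illustration:

```python
import random

def add_typos(text, rate=0.05, seed=None):
    """Inject irregular typos, the way a 'character profile'
    might instruct an AI to (illustrative sketch)."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        r = rng.random()
        if ch.isalpha() and r < rate:
            if r < rate / 2 and out:
                out[-1], ch = ch, out[-1]  # swap with previous char
            else:
                continue  # drop the character entirely
        out.append(ch)
    return "".join(out)

print(add_typos("This answer was definitely written by a human.", rate=0.1))
```

Because the errors are drawn at random, they follow no consistent scheme, which is just what makes them look human.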
What would be interesting: what if they do it without being asked?
The current ones could probably do that too, but why would they do it unprompted?