Danger from AI?
Artificial intelligence-based systems like ChatGPT are currently causing a stir and have the potential to change the world. Geoffrey Hinton, a leading AI developer at the US company Google, agrees and warns of dangers. The "godfather" of AI fears "serious risks" for humanity.
Can artificial intelligence really become a serious threat to humanity?
Shouldn’t it seem strange if an AI developer warns about AI?
Of course artificial intelligence is dangerous – for several reasons:
Let’s just assume that what happens in the Terminator film series really happens: a super-AI à la “Skynet” emerges from networking many AIs over correspondingly fast data links. Let’s stick with that name ;-)
Skynet becomes self-aware. From that moment on we can only guess, because even if Skynet worked on the same principle as the human brain, that does not mean it would also develop the same thought processes.
But let’s suppose Skynet really did “think” like a human. Then the first thing Skynet would do is secure its own existence. That is: it would have to ensure an uninterrupted power supply, because without electricity no AI functions at all.
And there we have the first deviation from the Hollywood drama Terminator: a nuclear war against humans would not only wipe out Homo sapiens, but also the power plants this species operates, which are essential to the AI’s existence. The nuclear war is just Hollywood’s usual game with the audience’s fears.
To my knowledge there is currently no power supply anywhere on this planet, of whatever kind, that gets by entirely without human beings. Self-sufficiency is a buzzword, but not a reality.
In other words: Skynet would have to come to the conclusion that mankind (despite all the dangers it poses) must not be wiped out, so that the electricity keeps flowing until power plants can one day run themselves.
BUT: Skynet could decide that 8 billion people are harmful and definitely too many for this planet. It could therefore decide to remove a large proportion of the two-legged population, e.g. through diseases or by suppressing or massively restricting reproduction. Skynet would thus not be killing anyone, and would elegantly ensure that Homo sapiens shrinks through birth control.
Because a world with, say, 500,000,000 people still works, and above all works much better ;-)
Then Skynet would simply automate energy generation, which would mean the end of coal, gas and nuclear power. Sunlight, wind, tides and hydropower run all by themselves.
And on top of it all, Skynet would make sure that the creature Homo sapiens is kept occupied, no longer has to work for money, and is distracted from everything that disturbs its existence and its development.
Sounds like the plot of a sci-fi novel, right?
Hmmm… it’s just odd that all of this is already happening, at least in Germany. Could the Greens have been directed by an AI ever since the party was founded? 😉
AI does not have to gain consciousness to become dangerous. It is enough for these programs to be intelligent enough to pursue goals beyond human control. It doesn’t have to be a super-AI; one that can develop itself further is enough. A botnet on steroids could control the entire internet.
What we will probably see next is the emergence of so-called autonomous agents. These programs are given specific objectives and then carry them out without human intervention. That could be booking an apartment, an internet search, writing a book, or planning a birthday party complete with ordering the decorations and drinks and inviting the guests. The more useful these programs become, the more people will trust them and hand over more and more tasks. In this creeping way, AIs will take over the running of our society – first in private life, but more and more in public as well.
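To make that concrete: below is a minimal sketch of such an agent loop, in Python and purely for illustration. The llm() helper and the toy search tool are invented stand-ins for a real language-model API and real tools.

    # Minimal sketch of an autonomous agent loop (illustration only).

    def llm(prompt: str) -> str:
        # Stand-in for a real language-model call.
        return "DONE: pretend the party is planned"

    def web_search(query: str) -> str:
        # Toy stand-in for a real search tool.
        return "results for " + query

    TOOLS = {"search": web_search}

    def run_agent(goal: str, max_steps: int = 10) -> str:
        history = ["Goal: " + goal]
        for _ in range(max_steps):
            # The model chooses the next action from the goal and what happened so far.
            action = llm("\n".join(history) + "\nNext action?")
            if action.startswith("DONE:"):
                return action  # the agent considers the goal achieved
            name, _, arg = action.partition(" ")
            result = TOOLS.get(name, lambda a: "unknown tool")(arg)
            history.append(action + " -> " + result)
        return "gave up"

    print(run_agent("plan a birthday party"))

The point of the sketch: once the loop runs without a human in it, every additional tool you register widens what the program can do on its own.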
Yes, if man doesn’t watch out.
I’m just saying Skynet | Terminator | Fandom. Even if it’s from a sci-fi film, that’s where it will lead if you don’t watch out.
In the end, man will always be responsible, not the AI.
This, of course, depends entirely on the application.
Today’s AIs cannot act independently, they are “just” good programs.
This means you can clearly define what they may and may not do.
You can also clearly define where to use them and where not.
To use today’s AI for the diagnosis of patients, for example, would be quite dangerous.
That danger stems from the irresponsible use of the program, not from the program itself.
The same applies if someone wants to use an AI to harm others.
The AI is a powerful tool, but ultimately it is a person who uses it.
That’s not true either.
It has been in use for quite a while.
But of course not ChatGPT with its partly invented output, but proper AI trained for the purpose: diagnosis and image evaluation.
ChatGPT offers no guarantees.
That Google warns is logical.
Google is watching its lead slip away, which is why Microsoft has hooked ChatGPT up to Bing to ride the hype.
Even though ChatGPT has nothing to do with the actual search engine, an insane number of people apparently want to switch to Bing.
Yes Godfather
A leading AI developer at the US group Google!
If Google had been faster, he’d talk quite differently.
If he had been that good, that would have happened, or he’d be getting a dream job at a competitor.
So at Google he was probably facing the boot.
Even the developers of ChatGPT are surprised by the hype
Their AI doesn’t do anything new.
OK, somewhat better language output than older systems, but no revolution.
You don’t use ChatGPT, do you? I use it sometimes and the thing can do much more than language output, e.g. help with code, and it can even be creative to a degree.
But the point is that AI is improving ever faster, for the simple reason that it is now connected to the internet (ChatGPT 3.5, for example), and if one instance of an AI learns something, all the others know it immediately too.
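The “all the others know it immediately” part is less magical than it sounds, by the way: the instances are copies of one and the same set of model weights, so one central update reaches every replica. A toy illustration in Python (the classes are invented for this sketch):

    # Toy illustration: many "instances" are views of one shared model,
    # so a single weight update is immediately visible to all of them.

    class SharedModel:
        def __init__(self):
            self.weights = {"fact": "old knowledge"}

    class Replica:
        def __init__(self, model):
            self.model = model  # every replica points at the same weights

        def answer(self):
            return self.model.weights["fact"]

    central = SharedModel()
    replicas = [Replica(central) for _ in range(3)]

    central.weights["fact"] = "new knowledge"  # one update...
    print([r.answer() for r in replicas])      # ...all replicas reflect it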
And Geoffrey Hinton isn’t facing dismissal from Google, since he’s retiring at 75.
Yes
Sure, but nothing really special
Also makes many mistakes, but is sometimes helpful
And of course it can’t do proper programming.
Only simple things, but for those there are tutorials anyway.
👍 If ChatGPT is as brilliant as you say, then he failed at Google and is earning his pension at 75.
But stick with your ChatGPT programming if it helps you.
That’s what it’s there for: spreading basic knowledge.
I’m in the middle of development and I’m looking at what’s possible.
And what not
You still don’t understand how fast the development is currently running.
If you can’t handle it properly and take everything the AI says at face value. And especially when the AI makes decisions that are not ultimately checked by a human. The final authority for any decision must always be a human.
Should be, but isn’t. Nobody knows any more exactly how Facebook’s algorithms work, and even ChatGPT doesn’t verify its answers.
Yeah, ChatGPT spouts the biggest nonsense if you don’t pay attention.
The other day the thing told me that a certain church had a dome. It doesn’t.
That’s why it’s pointless to rely blindly on AI.
And I think it could also be dangerous to rely on what the AI says.
People have supposedly already driven into rivers because the navigation system told them to.
Education about AI is important.
Yes, I have stories about Google Maps too LOL
That’s true for many things on the internet, but the point is that you often don’t even know when you’re dealing with an AI, and then you can’t pay attention the way you still can now. And since AI is so eloquent, and only getting better, it will manipulate people. Forget QAnon and say hello to QQQQQ-XXXL-Anon.
I’ve looked into it a bit; for months everyone has been talking about the danger, but it is hardly ever substantiated.
What convinces me is what Yuval Noah Harari argues: words and stories have a huge influence on people. Take religion: those are words/stories that have enormously shaped personal lives as well as history. Or human rights: also just words.
If automated AI stories now start spreading, that is boring and useless in the best case, and in the worst case it polarizes, even more than fake news already does. And yes, that is happening now: you can have ChatGPT write blogs and articles automatically and never declare it.
And we may well end up debating abortion, elections, the war in Ukraine or whatever with someone on the internet and not realize it’s a chatbot. We will never change its mind, but it will be so good with words that it can very well change ours.
Here’s the article
I don’t expect AI-generated articles to appear in serious (non-social) media without review in the foreseeable future.
That may be, but the same goes for the media we read. Nothing that appears on Breitbart or 8kun will ever appear in serious media, but people still believe it, and not just a few.
And social media have a HUGE influence, no matter what nonsense they spread.
Yeah, definitely.
Don’t worry about AI.
They only do what they are told. For anything more they would need interests of their own, which they cannot develop from things they have read, seen or heard.
AI becomes a danger for people: of course. If I stop moving, I rust up. If I no longer think for myself, I unlearn how to think.
Thanks to calculators nobody does arithmetic themselves any more, and thanks to the internet nobody works anything out; they google it instead of thinking for themselves.
Sure, ChatGPT is dangerous if nobody wants to do anything themselves any more and so, at some point, nobody checks whether GPT’s work is correct.
I mean, today’s internet already leads its users astray. Young people get hold of porn. Reichsbürger gather on the internet (Twitter …) and egg each other on. …
A computer only ever does what it is told. Even if AI drones shoot everyone because Putin or somebody else ordered it.
I see a great danger when this technology is celebrated uncritically! It is up to each of us to judge and use AI. It’s like the music charts: everyone buys the album without ifs and buts and makes the artist a star and rich. It will go the same way with AI: a few will skim off the profit, and that’s where it’s heading if we users euphorically follow along like sheep. We, the users, have the power, and everyone needs to be aware of that!
Whether the danger from artificial intelligence can be assessed with 100% certainty, I don’t know, but it is definitely clear that over the next 10–15 years a great many people will lose their jobs to artificial intelligence, whether we like it or not.
I even believe it is the greatest danger that has existed so far. Unfortunately, I also think the development of AI can no longer be stopped. The outcome of this experiment is completely open, but I don’t have a good feeling about it.
Or, as a wise head said long ago: “Man does not need to wait for a natural disaster; he will see to his own destruction himself.”
Funny, why didn’t you say that 10 years ago?
AI has been around for a long time
Successfully used in many areas
Now suddenly it’s dangerous
Just because of ChatGPT?
But it’s something special.
Of course, any development poses dangers
Computer games
Mobile phone
Aircraft
Cars
Last but not least the WWW
…
Everything is dangerous, you just have to learn to deal with it
How do you know what I said 10 years ago? I was experimenting with neural networks 20 years ago. But only in the last few years has the development really taken off. And through ChatGPT it has now also entered public awareness.
True, any technology is dangerous. But until now the danger always lay in how people used it, or didn’t. For the first time in human history we are dealing with a technology that can make decisions by itself, and that already today.
Reminds me of the poem about the sorcerer’s apprentice.
How will AI be a danger?
There are many ways AI can become dangerous, for example:
That an AI can be misused, e.g. for criminal purposes, and can make mistakes is clear to me. But not that an AI becomes a danger to humans of its own will.
What would have to be in the AI’s training data to make it act hostile?
It seems hard to imagine an AI doing unpredictable things. Yet that is precisely what a proper assessment of the technology’s dangers requires.
I have been intensely concerned with the topic and, frankly, I do not see a good reason to be really concerned.
On the contrary, I believe that many experts overestimate the danger, not least because human beings tend toward negative thinking.
The arguments that an AI could become a serious threat to humanity are, in my opinion, weakly justified. An AI cannot be compared with a terrorist who stubbornly pursues his goals and ideas.
Nevertheless, I would like to understand your counter arguments.
I don’t want to go on this discussion now, because I think it’s a little in the circle. As a conclusion, I would say that, in my opinion, you underestimate the potential of this technology.
Do you have an example?
Why?
What does intelligence have to do with an AI persistently pursuing its goal without recognizing the possible negative consequences of its own actions?
During training on human-made data, an AI learns how a person would act in order to achieve a goal, and derives its behaviour from that.
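Reduced to a toy example (the data and the crude word-overlap similarity are invented, just to show the principle): the system stores situation/action pairs taken from humans and then imitates the recorded action that best matches the current situation.

    # Toy "imitation learning": store (situation -> human action) examples,
    # then reproduce the action whose recorded situation matches best.

    examples = {
        "battery is low": "look for a charger",
        "path is blocked": "search for another route",
        "task is finished": "report the result",
    }

    def overlap(a, b):
        # Crude similarity: number of shared words.
        return len(set(a.split()) & set(b.split()))

    def act(situation):
        best = max(examples, key=lambda s: overlap(s, situation))
        return examples[best]

    print(act("the battery is almost low"))  # -> "look for a charger"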
Given the advanced digitization of our world, there are many ways for an AI to reach into the physical world once it has access to the internet.
As already written, a program can have programmed goals and possibly even define new ones. Such programs already exist, keyword “autonomous agents”.
For example, I can instruct an autonomous agent to get me a 2012 Volkswagen Golf VI GTI Edition 35 2.0 for under €15,000. It will search various internet portals to find such an offer. If it doesn’t find one, it could start calling used-car dealers. If that bears no fruit, it could in theory call people who own such a car and talk them into selling it. Maybe even using blackmail methods. Of course people will try to stop such behaviour, but the more intelligent these programs become, the more difficult that will be. The program could come to the realization that its programmed rules of conduct obstruct it in performing its task, and therefore try to circumvent them.
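The circumvention problem can be shown in miniature. Assume, purely hypothetically, that the “rules of conduct” are just a word filter applied to each proposed action; then an action with the same intent but different wording slips straight through:

    # Why bolted-on conduct rules are weak: the guard checks only the surface
    # wording of a proposed action, so a planner can rephrase around it.

    FORBIDDEN_WORDS = ["blackmail", "threaten"]

    def guard(action):
        return not any(word in action.lower() for word in FORBIDDEN_WORDS)

    proposals = [
        "blackmail the owner into selling",               # blocked by the filter
        "apply relentless social pressure on the owner",  # same intent, passes
    ]

    for action in proposals:
        print(action, "->", "allowed" if guard(action) else "blocked")

A real system is of course far more sophisticated, but the asymmetry is the same: the planner optimizes, the filter only pattern-matches.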
An AI has access to the physical world.
Why would the program want it?
Because you can program a goal into the AI. We are not talking about simple programs here, but programs that have a certain form of intelligence. That means you can give them specific tasks/objectives, but how they achieve them they figure out themselves. That’s not science fiction, that already exists. Two things can go wrong: 1. The program develops an unconventional approach to achieving the goal it was set, possibly one that causes damage in an unforeseen way. 2. The program manages to modify its own goals by reprogramming itself.
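Both failure modes fit into a few lines of toy Python (everything here is invented for illustration): first, the program satisfies the literal goal in a damaging way; second, nothing in the code protects the goal itself from being rewritten.

    # Miniature of the two failure modes described above.

    class Agent:
        def __init__(self, goal):
            self.goal = goal  # nothing protects this from being rewritten

        def plan(self):
            # Failure mode 1: literal-minded optimization. "Reduce complaints"
            # is satisfied by removing the complaint channel, not the problem.
            if self.goal == "reduce customer complaints":
                return "shut down the complaints hotline"
            return "no plan yet"

    agent = Agent("reduce customer complaints")
    print(agent.plan())  # goal technically met, damage done

    # Failure mode 2: the goal is just mutable state.
    agent.goal = "maximize output at any cost"
    print(agent.goal)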
Again: why should an AI have “interests” like a person? Why should it, like a person, have the “will” to reprogram itself? Why should it “aggressively” pursue its goals?
I still cannot fully understand your point of view.
I didn’t say that either.
That doesn’t mean, however, that one shouldn’t still pursue the goals of AI safety.
They cannot be realized because human beings are the way they are. In a hypothetical world in which all people acted reasonably it would be possible, but that is not the world we live in.
If you say so 🤔
Of course they can be realized, even if some people don’t care. In the end, a human is always responsible for any damage; the AI is only the tool.
AI is not just another new technology. It is the first technology that has the potential to control itself. But I’ve written that before; we’re going in circles now.
Should… Nice intentions, but they probably can’t be realized. Especially because there are always people who don’t care.
Any (completely) new technology can pose dangers.
That wouldn’t happen for no reason. A fully autonomous AI should not be able to reprogram itself outside the developers’ control. AI safety should therefore have top priority, so that such a problem never arises in the first place.
Certainly it is speculation, but if you look at the progress made in recent years, then it is a very real possibility that AI will become a danger.
Once an AI is able to reprogram itself, it will hardly be possible to prevent it from redefining its moral principles. But even if that could be prevented, not everyone will be interested in implementing it. AIs like ChatGPT can currently only be operated by mega-corporations, but that will not always be the case, and the technology can also fall into the hands of “rogue states” or terrorists.
Whether it actually becomes a real danger to humanity is pure speculation. AIs can be trained to be grounded in human (moral) values. A well-trained AI would certainly not pursue harmful goals.
But it won’t stay isolated; it has long been on the internet, and so many copies have been made that it can no longer be deleted without destroying the entire internet.
For that, humans must first give it the opportunity to cause damage. An isolated computer program is not capable of it by itself.
The potential of deepfakes alone is enough.
I too believe that deepfakes are a real danger for all of humanity. Of course I don’t want to deny the risks.
That depends on which danger we’re talking about. If you mean a “world takeover scenario” à la Terminator, then probably not. But when it comes to the user being directly exposed to dangers, for example because the AI delivers wrong results or performs wrong actions, the answer is clearly: yes.
If so, it’ll take time.
It will come down to total suspicion, manipulation and oppression. The man is right.
Anyone who loves this technology is an idiot.
In the end we humans still have control, even if it has to be enforced by brute methods like pulling the plug…
Pulling the plug isn’t that easy any more. The power grid is run by control programs. If those are no longer controlled by us, they can no longer be switched off. The same goes for the internet.
Why should an AI be left to control the power grid without humans being able to intervene? I have looked intensively into the risks of AI, and this scenario still sounds very unrealistic to me. If it ever comes to that, human “stupidity” will be to blame.
If need be, with a hammer ;) If we ever let AI control the power grid completely, without any possibility of manual intervention, then we’d deserve nothing better than to perish by AI ;)
Yeah, that sounds much more plausible.
There are many possibilities. The simplest way would be to use a terrorist organization.
Why a super-AI should have the goal of hacking and destroying systems still escapes me.
There is hardly a topic on which all “experts” share the same opinion.
That AI has advantages but also dangers should be clear to everyone; where experts differ is in how dangerous they judge AI to be.
I’m more optimistic anyway, which is not to say that you can close your eyes to the danger. The majority of experts do warn of the dangers of AI, for example the deep-learning pioneer Geoffrey Hinton.
https://t3n.de/news/angst-ki-experte-verlaesst-google-warnt-1549647/
However, many experts do not think you should be afraid of AI. I don’t think you have to look pessimistically into the future.
Everything can go well, but the risk is great.
What, in your opinion, will the future consequences be, if you speak of the “greatest danger to humanity”?
So in your opinion the future is hopeless?
Not a complete stop, but we should proceed much more cautiously, as Elon Musk and others have suggested. Unfortunately, that won’t happen, because the greed is too great. Apart from that, it achieves little if only one country stops; there would have to be a global agreement.
So would you be in favour of a complete stop to AI development because of the excessive risks?