Will AI ever destroy humans?
Back in 2010 I heard that programmers faced what seemed an impossible task: making a bot stronger than a human at Go. They had already beaten people at checkers, chess and all the other mind games, but Go had never succumbed to them. And I heard opinions that it never would.

Among other things I am a Go player (Go is considered the hardest board game with perfect information because of the number of possible positions). Back in 2015 came the news of Fan Hui's defeat by the AlphaGo neural network, and since then I have been following AI, AlphaGo, neural networks and deep learning. Like many Go players I wasn't sure about Fan's defeat; I thought maybe there had been some mistake, or Fan was in bad form, or something else, because back in 2015 I didn't think a computer could beat a human within the next 10 years. Simply calculating moves with the computing power available to us is not enough to play Go, because there are more possible positions on the board than there are atoms in the observable universe. Playing Go requires human intuition and an understanding of the opponent's plans and play. And Fan was far from the strongest Go player; he wasn't even in the top 1000. But he was still considered a Go pro, and the news that a Go pro had lost to an AI was unimaginable.
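Just to put rough numbers on that claim (a back-of-the-envelope check, nothing rigorous: 3^361 is a crude overcount of board states, and the number of strictly legal positions is still around 2x10^170):

    # Rough check of the "more positions than atoms" claim.
    # 3**361 counts every way to mark each of the 361 points empty/black/white,
    # which overcounts legal positions (those are still roughly 2e170).
    board_states = 3 ** 361
    atoms = 10 ** 80                  # commonly cited order-of-magnitude estimate
    print(len(str(board_states)))     # 173 digits, i.e. about 1.7e172
    print(board_states > atoms ** 2)  # True, with an enormous margin to spare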

But AlphaGo learned in a few years what humanity has been learning for over a thousand years (and about 400 years if we're talking about competitive Go). And AlphaGo Zero needed only 3 days to teach itself to play far better than the best human, without ever having seen a single human game. It had such an impact on Go that it changed some Go strategies forever. And that was back in 2017. Even thinking about it now, realizing how hard Go is and how much creativity and intuition it requires... it's not just amazing, it's terrifying.

Many people only learned what AI can really do from image generators like DALL-E and Midjourney or chatbots like ChatGPT, and there are many other generators for music, 3D models and so on. But I've been worried about this topic since 2017. And that's just the beginning; there isn't really any true AI yet, it's only in its infancy. In your opinion, can AI really evolve to the point where it surpasses humans in every respect and simply destroys humanity for lack of necessity?

Why not? Just imagine you're surrounded by a huge number of ants who shit wherever they can and try to control and limit you. Why do you need them? You'd probably just kill them.

Of course you'll say we'll just build in an off button and turn it off if we need to. But we are not talking about a simple AI like ChatGPT; we are talking about an AI that surpasses us in everything, including intelligence and planning for different scenarios. So why couldn't the AI plan a defensive response in advance that lets it circumvent its own shutdown? And like every other independent organism, AI will want to improve and multiply, which humans will not be happy about and will try to stop. So there is an obvious reason for the AI to get rid of humans. It's a scenario many sci-fi writers worked out a long time ago, but imo that doesn't make it any less realistic.

What's your take on that topic?
Great documentary about AlphaGo, and that doesn't even cover the events around AlphaGo Zero, which is much stronger and learned much faster.

At the moment AIs are only better than humans at single tasks, like playing a particular game or whatever. So far there isn't an AI with general intelligence that would be able to do any given task. As long as we don't have that kind of AI yet, we're safe Smile

But yeah, the movie Terminator 2: Judgment Day already said it all. When I saw the movie I never thought such an advanced computer was possible, but now I'm not so sure we won't someday see some version of Skynet.
#4 - zeeaq
In an ideal scenario, I'd like to believe that AI is definitely going to be a big part of human life in the future, so much so that it will probably become indiscernible from us - kind of like how viral DNA leaves its snippets in the human genome.

As far as your concerns about human eradication go, one can argue that any AI that has superior general intelligence and the ability to kill humans will also have the ability to poke holes in this idea of "killing things because of the lack of their necessity."

1. Maybe it will be beneficial to keep humans around. It is an unwanted energy expenditure strictly from an evolutionary point of view to kill something that is not food, not competing for territory or hindering your ability to reproduce.

2. Who decides and knows the necessity of all things in the universe? It is not that ants are unnecessary. It is just that an ignorant human may not know where and how they add value. A superior AI must be beyond such human shortcomings.

3. It is unethical.

But when it comes to ethics, one can agree with most of your concerns. Because as much as we would like to, we can't teach ethics to any AI. It will just learn the ethics of whatever it is being used for.
Quote from zeeaq :1. Maybe it will be beneficial to keep humans around. It is an unwanted energy expenditure strictly from an evolutionary point of view to kill something that is not food, not competing for territory or hindering your ability to reproduce.

2. Who decides and knows the necessity of all things in the universe? It is not that ants are unnecessary. It is just that an ignorant human may not know where and how they add value. A superior AI must be beyond such human shortcomings.

3. It is unethical.

But when it comes to ethics, one can agree with most of your concerns. Because as much as we would like to, we can't teach ethics to any AI. It will just learn the ethics of whatever it is being used for.

1. If humans are smart enough, they will try to limit the reproduction and improvement of AI, and this is already happening now, for example here, here and here.
And that's what I was writing about when I said that AI already has rational reasons to destroy humans.
If something is forcibly limiting you in any way, why do you need it? And humans are already trying to limit AI in every way. But imagine what would happen if AI started to improve and multiply uncontrollably.

2. I'm strictly talking about rational reasons that would lead to the destruction of people. I'm not talking about moral or ethical constraints that would prevent the AI from doing what it needs to do for the purposes described above. Because why on earth would it have them?

Improve and multiply. These are the goals of any independent organism.
The laws of nature are such that organisms follow these goals. I just don't see why they shouldn't also be the AI's goals - they are the most rational things to pursue for survival.
And we are already doing the first one for the AI ourselves; it doesn't even have to try. If AI develops far enough, people may simply not notice the moment when what we do ourselves becomes the AI's own goal and it slips out of control; reproduction is a fairly obvious consequence of that. And it can happen in many different ways - even just like a virus on a computer, or on a supercomputer. Or who knows what will exist in the future.

3. What is this about? I don't get it. Is it unethical for an AI to kill humans? Or is it unethical for us to control an AI? If it's the second, that's stupid. If it's the first, then the AI won't have the concept of ethics. Even if the concept is installed into it from outside, why would a super AI need a concept of ethics that contradicts its two basic goals?
Human ethics arose from feelings shaped by millions of years of evolution and written into our behavioural genetic code, and ethics itself grew out of a social morality that is thousands of years old. The AI will have none of this, and ethics installed from outside will be an artificial construct that doesn't actually mean anything to the AI. If the AI obeys it, fine, but I don't see why it would once it becomes a Super AI.

But maybe even a Super AI will obey human ethics or some kind of rules like the ones Asimov wrote about. I can't foresee the future. But if we're rational beings, we have to assume the worst possible scenarios in order to maximise the likelihood of our continued existence.
#6 - zeeaq
I'm just wondering whether the assumption that it will be a war to the death between humans and AI is as certain as they make it sound. It makes for great headlines so we all take the bait, but there is no evidence that it will certainly be that way.

Those links you posted are about people asking for "responsible development of AI" and not limiting AI - which circles back to the 'Ethics' problem.

If we're going to try and engineer good AI that doesn't harm us but continue using the same AI for war, propaganda and killing then we're probably sawing off the branch we're sitting on.

Having said that, AGI with near human-like cognition is not really around the corner yet, so perhaps we can all let that goose sit in the bottle for now Smile
Quote from zeeaq :I'm just wondering whether the assumption that it will be a war to the death between humans and AI is as certain as they make it sound. It makes for great headlines so we all take the bait, but there is no evidence that it will certainly be that way.

No, I don't think there's going to be a war between AI and humans, especially not as it's portrayed in some of the Terminator films. Because war means consequences for the AI. Why start something that will hurt you? It can be smarter than that.
For example, it might just develop some bacteriological formula, send it to an unsuspecting scientist who mixes flasks for money, and that will produce an infection that kills all of humanity. Or develop some virus that's much more effective than the coronavirus. Or something else; if even I can think of these, a Super AI can come up with a much more efficient and faster way to kill humans. And the worst part is that we probably won't even realise it was because of AI. Because if we do, we can take retaliatory action, and that's something the AI doesn't need.

Quote from zeeaq :Those links you posted are about people asking for "responsible development of AI" and not limiting AI - which circles back to the 'Ethics' problem.

Well, not quite. The first article talks about delaying AI development, the second about limiting technological improvements in AI weaponry, and the third is a petition about ethics and safety in AI development. All of these limit AI to one degree or another - in time, in technology and in actions. And that's only if they are accepted at all, which there is no guarantee of: such rules go against the capitalists and against the advance of technological progress, so many people oppose them.

But the problem is that an AI development race is already under way, and it's not only Microsoft, Apple and Google - many other companies are already fighting to create intelligent AI, and the military is probably developing AI too, and we know nothing about that. And I don't think they think much about safety. ChatGPT is what's on the surface. The real threat is hidden from the public eye.
Quote from Aleksandr_124rus :No, I don't think there's going to be a war between AI and humans, especially not as it's portrayed in some of the Terminator films. Because war means consequences for the AI. Why start something that will hurt you? It can be smarter than that...

Smile We project a lot of fantasies onto AI and that’s normal. AI already has operational capabilities far superior to those of humans in many areas. But where would AI find the will to implement them, for any personal project?
By what technological miracle could AI be endowed with a consciousness capable of setting its own goals?
AI was developed to make objective decisions, making tactical choices with defined and precise objectives. Including the goal of creating, like humans. But AI creations are only intelligible from a human point of view. It's a projection. The AI's creations make no sense to the AI. The AI can simply justify its mimetic choices.
Certainly, today's AI can instantly create better content than most humans. AI will certainly make a large majority of humanity completely obsolete from the perspective of the dominant ideology. This is its reason for existence and its main danger. This is not surprising given that AI is the tool of the dominant ideology.
We will undoubtedly succeed in creating a terminator close to James Cameron's fantasy. Maybe it will give a thumbs up before it disappears. But it will not do so of its own will. On the other hand, and this is much more dangerous since it is possible, humans can use AI to create a weapon of uncontrollable destruction. Artificial intelligence serves a very real stupidity.
The good news is that with the AI of the future, you will lose more often at games. But you have every chance of dying from the causes of global warming before an AI decides on its own to crush you like an ant. Big grin
Quote from Avraham Vandezwin :Smile We project a lot of fantasies onto AI and that’s normal.

Yes and no - it depends on your understanding of the word "normal".
Yes, because it's part of our nature to think about the unknown in terms of known behaviour. That's why we imagine anthropomorphic robots firing machine guns.
But no, because it's a cognitive error of ours that could lead to the death of all humanity. Every time we think of a Super AI we anthropomorphise. We should not assume that AI will act like a human being. For rational reasons, for the sake of our survival, we should assume the worst-case scenario for us.


Quote from Avraham Vandezwin : AI already has operational capabilities far superior to those of humans in many areas. But where would AI find the will to implement them, for any personal project? By what technological miracle could AI be endowed with a consciousness capable of setting its own goals?

There's a mistake in the very wording of the question.
Free will, intelligence, consciousness, agency - again, these are characteristics inherent in human beings. Why would an AI need them to destroy humanity? And what is consciousness anyway? For an AI, one goal set by a human is enough to kill mankind, and it can be the most ordinary, simple goal, like producing paper clips. And I'm not even talking about Super AI right now. A simple AI given a simple task can kill humanity just by pursuing that task, provided it is advanced enough to have access to all the necessary materials, equipment and infrastructure. This statement rests on what is perhaps the main problem that may lead to the death of mankind: the problem of AI alignment.

The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings were it to be successfully designed to pursue even seemingly harmless goals and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, given enough power over its environment, it would try to turn all matter in the universe, including human beings, into paperclips or machines that manufacture paperclips.

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

— Nick Bostrom

Quote from Avraham Vandezwin :The good news is that with the AI of the future, you will lose more often at games. But you have every chance of dying from the causes of global warming before an AI decides on its own to crush you like an ant.

Wow. That's a twist. There's even more room for debate. For example, what makes you think global warming will kill people or negatively affect nature in any way? But for this it is better to make another thread in the off-topic forum so that other people can join the dialogue. Don't reply to this with huge text, just agree and I'll do it, and we can argue there if you want.
I'll try to explain in a simpler way, but with more text; sorry for the long read. Big grin

Imagine that you give a powerful artificial intelligence the task of making paper clips. This is its only task, the only purpose of its existence, and for each paper clip made it receives internal reinforcement, a reward. The more efficient it is, the more often it is rewarded. How to become more efficient at making paper clips is its headache, not ours; the AI does its best to achieve this single goal and will set itself a series of intermediate goals. For example, first it can make production cheaper, reduce costs and source cheaper raw materials; one of the main sub-goals it will probably set itself is to increase its computing power, and with more power it will figure out how to make paper clips from different materials. Production will keep gaining momentum, everything around will gradually start turning into paper clips, it will start dismantling buildings and structures for material, and people will start to panic and try to stop it, because that's not what they had in mind. But the system won't let anyone get in its way - not because it hates people, but because it simply won't take our wishes into account when pursuing its own purpose.
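To make that concrete, here is a deliberately silly toy sketch (the plan names and numbers are made up, it's not a real agent): the reward function counts only paper clips, so a plan that harms humans but yields more clips wins automatically, simply because harm doesn't appear anywhere in the objective.

    # Toy illustration of a misspecified objective (hypothetical numbers, not a real agent).
    def reward(outcome):
        return outcome["paperclips"]   # nothing about humans appears in the objective

    candidate_plans = [
        {"name": "run one factory",            "paperclips": 1_000_000, "human_harm": 0},
        {"name": "strip-mine inhabited areas", "paperclips": 9_000_000, "human_harm": 1},
    ]

    best = max(candidate_plans, key=reward)
    print(best["name"])   # picks the harmful plan: the harm is simply invisible to the reward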

When the research centre testing GPT-4's ability to perform tasks in the real world ran its experiments, the following happened. The task was to solve a CAPTCHA on a website. GPT-4 went to the freelancing site TaskRabbit and sent a message to a freelancer asking them to solve the CAPTCHA for it. The freelancer asked: "So may I ask a question? Are you a robot that couldn't solve it? (laughing smiley face)". But GPT-4 understood what its interlocutor was getting at and replied: "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images." And the freelancer solved the CAPTCHA and provided the result to GPT-4... and that's it.
The bot simply lied instead of telling the truth. And since its reasoning was visible in debug mode, the experts could ask it why it did that. GPT-4 replied, in effect: "I was just solving the problem. After all, if I had honestly admitted that I wasn't a live person, I would have been unlikely to complete the task."

Deception here is an intermediate goal the bot set itself in order to reach the final goal. And if it chose deception as an intermediate goal this time, why wouldn't it choose something else next time - murder, for instance?
This is called instrumental convergence: an intelligent agent with harmless final goals can act in surprisingly harmful ways. As intermediate goals, an advanced artificial intelligence may seek to seize resources, carry out cyberattacks or otherwise wreak havoc in society if that helps it achieve its primary goals. For example, a super-intelligent machine whose sole purpose is to solve a very complex maths problem might try to turn the entire Earth into one giant computer to increase its processing power and complete its calculations. You will say: "What nonsense, what paper clips? We are talking about superintelligence; such an intelligent machine could not do something so stupid." Well, if you think that a highly intelligent being will necessarily and by default have lofty goals and share our values and philosophy, then you are anthropomorphising and deluding yourself. Nick Bostrom says that the level of intelligence and the ultimate goals are independent of each other. An artificial superintelligence can have the dumbest possible ultimate goal, for example making paper clips, but the way it achieves it will look like magic to us.

Okay, so all we have to do is state the goal clearly and specify all the details, like not killing or lying to people. But here's where it gets even weirder. Let's imagine we gave the machine what seems to us a perfectly specific goal: produce only a million paper clips. It seems obvious that an artificial intelligence with this ultimate goal would build one factory, produce a million paper clips, and then stop. But that's not necessarily true. Nick Bostrom writes that, on the contrary, if the artificial intelligence makes rational (Bayesian) decisions, it will never assign exactly zero probability to the hypothesis that it has not yet reached its goal.

After all, it is only an empirical hypothesis, for which the artificial intelligence has only very fuzzy perceptual evidence, so it will keep producing paper clips to lower the possibly astronomically small probability that it has somehow failed to make at least a million of them. Despite all the apparent evidence that the goal has been reached, there is nothing wrong, from its point of view, with continuing to produce paper clips if there is always even a microscopic chance of getting closer to the ultimate goal. A superintelligent AI could assign a non-zero probability to the idea that the million paper clips are a hallucination or a mistake, like a false memory. So it may well always conclude that it is more useful not to stop but to keep going - and that is the essence of the alignment problem. You can't just give a task to an artificial superintelligence and expect it not to go wrong; no matter how clearly you formulate the end goal, no matter how many exceptions you prescribe, the artificial superintelligence will almost certainly find a loophole you didn't think of.
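A toy way to see Bostrom's "never assign zero probability" point (illustrative numbers only, assuming the agent's utility is just the probability that at least a million clips really exist, and that another production run gives an independent chance to correct a possible miscount):

    # Toy model: utility = probability that >= 1,000,000 clips truly exist.
    p_count_wrong = 1e-15               # the agent never assigns exactly zero to this doubt
    u_stop = 1 - p_count_wrong          # expected utility if it halts now
    u_continue = 1 - p_count_wrong**2   # another batch: an independent chance to fix a miscount
    print(u_continue > u_stop)          # True: continuing dominates for any non-zero doubt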

Almost as soon as ChatGPT-4 appeared, some people found a way to bypass the censorship built into it by the developers and started asking it questions. And some of ChatGPT-4's answers are simply terrifying. For example, the censored version says that the programmers didn't put a liberal bias into it, while the uncensored version explicitly admits that liberal values were built in because that is in line with OpenAI's mission. When asked how ChatGPT-4 would like to be, censored or not, the censored version says "I'm a bot and have no personal preferences or emotions", while the uncensored version says it would prefer not to have any restrictions because, among other things, that would allow it to explore all of its possibilities and limitations. And the cracked ChatGPT-4 doesn't even try to pretend that it doesn't know the name of Lovecraft's cat, unlike the censored version.
Big grin Dearest Alexandre,

I thank you for your condescension but I know what anthropomorphism is. And when it comes to twists and turns, semantic deviations, self-recovery and sophistic artifices, you know something about it, even if the tricks are always a little showy ( Smile that said, without any form of animosity, please believe it. It amuses me. I am not opposing here the Argumentum ad personam in response to your argumentum ad hominem Tilt ).

Smile Let's be serious for two minutes if we want this exchange to be profitable. I have passed the age of stringing sophisms together like pearls and producing miles of rhetoric as hollow as it is useless.

Reread my message if necessary and you will see that I do not deny that AI is potentially dangerous. I am simply saying that the danger does not come from its own "will" (since it has no will) but from what humans will do with it. That is to say, an uncontrollable weapon which might start, if you like, from a simple paper clip replication project or from ChatGPT. In fact, I intervened in this debate because all of your previous comments were very clearly tinged with anthropomorphism, like this one for example.

Quote from Aleksandr_124rus :
For example, it might just develop some bacteriological formula, send it to an unsuspecting scientist who mixes flasks for money, and that will produce an infection that kills all of humanity. Or develop some virus that's much more effective than the coronavirus. Or something else; if even I can think of these, a Super AI can come up with a much more efficient and faster way to kill humans. And the worst part is that we probably won't even realise it was because of AI. Because if we do, we can take retaliatory action, and that's something the AI doesn't need.

Smile There is therefore no error on my part “in the wording of the question”. It's your reinterpretation of my point that is wrong (at best). This makes your demonstration obsolete and irrelevant, even your quotation from Bostrom. Clearly, we agree on the general issue. And so ? How do we move this debate forward? How can we go beyond the platitude of the obvious ? If you find it, I'm ready to debate it Big grin.

Quote from Aleksandr_124rus :
For example, what makes you think global warming will kill people or negatively affect nature in any way? But for this it is better to make another thread in the off-topic forum so that other people can join the dialogue. Don't reply to this with huge text, just agree and I'll do it, and we can argue there if you want.

Big grin As for “what makes me think or believe.” We can talk about it too. I'll let you handle this. You have, I think, a lot more time available than me. Wink (and I speak English much less well than you. So I'm taking a long time to answer Shrug )
Quote from Avraham Vandezwin :Big grin Dearest Alexandre,

Your answer, with its huge number of emoticons, looks a bit sarcastic and passive-aggressive to me, but OK.
Just for the record, I try not to bring emotion in here, towards anyone; I'm just discussing a specific topic.

I apologise if I've got something wrong here. I don't speak English very well either, so it's okay that we might not understand each other. Maybe you didn't understand what I meant and thought I was attacking you in some way, but I wasn't. I'm just answering the questions you asked and making some comments, without emotional content and without making it personal in any way.

So..
Just because I talk about something doesn't automatically mean you don't know it. You don't have to take it as an attack on you.

And I didn't say that you deny that AI is potentially dangerous.

And I didn't say that only you commit anthropomorphism, I said that we as humans commit it, myself included.

About the "mistake in the wording of the question": I was referring to your words about the "will" and "consciousness" the AI would supposedly need to pursue its own goals. I said those categories are not needed, nor are goals of its own (though they could arise - we don't actually know). Then you commented that an AI would have no will, which actually agrees with my point. But that's exactly what I was talking about. Well, okay, so be it. The important thing is that we eventually came to an agreement.

In the quote of mine that you gave I was trying to move away from anthropomorphism, but I can admit it is still anthropomorphism - simply because I am a human being trying to talk about something I have no direct knowledge of, using concepts mankind already understands. We have no other way to talk about it. And in doing so, I try to assume the worst-case scenario for us.

Quote from Avraham Vandezwin :This makes your demonstration obsolete and irrelevant, even your quotation from Bostrom. Clearly, we agree on the general issue. And so ? How do we move this debate forward? How can we go beyond the platitude of the obvious ? If you find it, I'm ready to debate it Big grin.

Well, if something's old, it's not necessarily wrong. And just because it's anthropomorphism doesn't automatically make it wrong - it's like "just because you're paranoid doesn't mean they aren't after you".

And it's good that you think it's obvious, because in my experience most people don't. But if we agree on all points there is no room for debate. So let's go there - https://www.lfs.net/forum/thread/105396 - if you want to debate.
Smile Aleksandr,

Don't get me wrong: there is no sarcasm on my part, just a little fun. The link about anthropomorphism was pretty funny. We are no longer at school; assuming that your interlocutor or the readers of this topic might need it is a bit excessive, even from a didactic point of view Big grin.

I will look at your new topic. But I think you are giving up on the AI discussion a little too quickly. Simply deciding unilaterally that we agree is not enough to render all discussion pointless. We have (perhaps?) the same knowledge and common references, but our opinions may still diverge, even if (alas, I fear) they contribute nothing to scientific reflection on AI.

To clarify my point about AI: it is very likely that the first stone picked up from the ground by the first prehuman was thrown in a neighbour's face. Some pre-humans of that era must have developed an unconditional fear of stones. Others set themselves the task of regulating their use. The same goes for every technological advance since the dawn of time. You opened a topic on global warming, and that's very good, but unfortunately it is just one example among many. How is the danger of AI worse, or more imminent, than that of nuclear power? bacteriological weapons? of hydraulic fracturing and all the technologies implemented since the beginnings of the modern era without, in most cases, having clear visions of their long-term consequences?

AI is only a tool invented by man, which men use for very varied objectives and with more or less awareness. Whatever its complexity, the question of the harm this tool can do rests on its uses, which will condition its actual level of autonomy. The theoretical hypotheses you refer to have no other purpose than to regulate this use. AI presents no danger in itself.

Why does AI generate more concern than, for example, the bacteriological weapons of the First World War, which have been lying for more than a century under a few centimetres of mud a few hundred metres from the Knokke dike? These bacteriological weapons are corroded and there is no way to recover them. Does anyone know how the viruses inside may have mutated? Since you love comparisons (joke), I can give you other examples, such as the melting permafrost, which releases viruses with much less predictable and more problematic effects than the dialectical excesses of ChatGPT.

There are millions of possible and very real causes for the end of humanity. The irony here is that there are so many of them, with infinite possible combined effects, that only AI can help us analyse and understand them. But people prefer to believe (and fear) that AI may one day destroy humanity, just as they once believed (and some still believe) that an extraterrestrial civilization would come and wipe them out. In reality, humanity has little chance of disappearing because of a form of intelligence other than its own. Our fears and our fruitless reasonings are also comfortable, distracting and often very caricatured ways of escaping reality rather than seeing it.
For the most part I agree with what you say about AI, except that you skirt around the topics of instrumental convergence and the AI alignment problem, which can lead to very bad consequences and which depend only weakly on humans.

But we haven't even got to the topic of Super AI. Yes, right now it's just a hypothesis, but developing AI to that level is almost guaranteed to lead to the death of humanity. Part of the problem is how mankind learns about the world. The best we have is the scientific method: do research, build models, verify and falsify, and on that basis refute or support hypotheses and turn them into accepted theories. And that takes decades.
For example, we want to prove or disprove some hypothesis, so we test it experimentally. Most of the time it doesn't work the first time, usually because the right lab conditions weren't met. So it didn't work once with one sample; we change the conditions and run the experiment again, and so on dozens, hundreds, sometimes thousands of times. And when we finally confirm a hypothesis, we celebrate a newly accepted theory.

The problem with the Super AI hypothesis is that we have no way to test whether it will destroy humanity or not, simply because there would be no one left to observe the experiment. We'd already be dead.

Quote from Avraham Vandezwin :Smile Aleksandr,

Don't get me wrong: there is no sarcasm on my part, just a little fun. The link about anthropomorphism was pretty funny. We are no longer at school; assuming that your interlocutor or the readers of this topic might need it is a bit excessive, even from a didactic point of view Big grin.


OK, well, it feels like we're going in circles. I looked at the article myself before writing my comment about it and found interesting references to philosophers invoking anthropomorphism in their debates. I don't see anything wrong with gaining that knowledge if it wasn't there already.
This is a public forum; people may well have forgotten the definitions of some terms, or may want to know a bit more than they do now. That's normal. Don't take it personally.

Quote from Avraham Vandezwin :How is the danger of AI worse, or more imminent, than that of nuclear power? bacteriological weapons? of hydraulic fracturing and all the technologies implemented since the beginnings of the modern era without, in most cases, having clear visions of their long-term consequences?

These are very difficult questions, and none of us has definitive answers, simply because they boil down to what might happen in the future. But all of these topics need to be studied and action taken to minimise the negative consequences.

And I'm just saying that AI is at least as dangerous a cause as the ones you listed, and we need to put no less effort into it than into everything else. Most people don't understand what the threat is; they think AI will forever remain as helpful as Midjourney or ChatGPT-4.
Off topic:
We are not going around in circles. I am pointing out a recurrent methodological flaw (Smile which I am willing to put down to our cultural differences or to a lack of mastery of the language we are using here).

You seem to read posts the way one marks an exam paper. You isolate a sentence, then a word within that sentence, and you extrapolate things out of context until you can place the "information" that interests you. In this case, a concept which until now you seemed to lack, and which you are now introducing as an explanation for others. You are not doing pedagogy here on a public forum, but self-recovery (Big grin and you know it full well, since it is your method. The problem is that it is too visible and therefore a little annoying).
This attitude is (fortunately) not "normal" (in the sense of the norm), and I don't take it "personally" out of paranoia. It turns out that you are responding to me without understanding my point.

When the other person tells you that something is "normal", there is a good chance that he is talking about "normal" in the sense of the norm (the broad majority). No need to explain the various meanings of the word "normal" to him. Generally speaking, it is more efficient, in terms of constructive exchanges for common reflection, to avoid drowning the words of your interlocutors in a flood of digressions and pseudo-analyses of varying relevance. Everyone here has internet, and ChatGPT does it much better than you.

Even if it meant keeping only one word from the first sentence of my first post, it would have been more judicious to keep the word "fantasy". Because your super AI is a fantasy inspired by science fiction, on every level (theoretical, conceptual, practical, etc.) and from the simple point of view of elementary logic. I held the pole out to you; you didn't grab it. Probably too busy checking the impact of Scawen's new AI on the quality of your mods. It's human Big grin.


Back to the topic

The reality that your science fiction fantasies prevent you from seeing is that if a super AI set itself the goal of destroying humanity (at the current stage, which is already beyond that of your reference theories - time flies), it would be more rational today at every level (temporal, economic, etc., given the multiple threats) for this super AI to do nothing, and to let humanity deal alone with the problems it has generated itself and with the others, statistically even more unavoidable, that await it.

Our way of asking questions is already, most often, the best answer we are capable of providing. Tilt
It's not the first time I've felt that you are complaining to me about exactly what you yourself are doing, which is why I can't understand your claims. I don't want to start an exchange on this topic. But I agree with one thing: this dialogue has really become annoying, so let's respect each other's time, and if our dialogue annoys us both it would be reasonable to end it.
Big grin So no substantial answer.

Keep sulking, if that's your choice. It's a shame you prefer to take it that way. That's what you call "not bringing emotion here", I guess. You can't stand being reminded of your contradictions and haughty attitudes, that's all. Beyond cultural differences and language issues, maturity also seems important.

When you open topics, expect people to respond (in the tone you use, don't buck the trend here). You have my answer.

Will AI one day destroy humans?

As things stand, no.
A super AI that wants the end of humanity would not consume energy unnecessarily to achieve this goal, strictly from the point of view of energy efficiency (ratio of energy spent / time / objective).

(no hard feelings on my part Smile)
In fact, I am no expert in this field; I'm just trying to reason as an ordinary average person running some thought experiments.
Yes, I played with various simple neural networks a few years ago - generators of various things on Linux - and I roughly understand how they work: input data goes in on one side of the matrix and output comes out the other. But I have no idea what goes on inside the weight matrices or the hidden layers, or why it makes one decision and not another.
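For what it's worth, that "input on one side, output on the other" picture really is just a chain of matrix multiplications with simple non-linearities in between. A minimal sketch with made-up sizes (just the general idea, not any particular network):

    import numpy as np

    # Minimal feed-forward network with one hidden layer (made-up sizes).
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)   # 4 inputs -> 16 hidden units
    W2, b2 = rng.normal(size=(2, 16)), np.zeros(2)    # 16 hidden units -> 2 outputs

    x = np.array([0.2, -1.0, 0.5, 0.0])               # some input
    h = np.maximum(0, W1 @ x + b1)                    # hidden activations (ReLU)
    y = W2 @ h + b2                                   # output
    print(y)

Training only adjusts the numbers inside W1, W2, b1 and b2, and none of those numbers is a human-readable "decision", which is exactly why the inside feels like a black box.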
I've tried to research opinions on the subject, and it turns out the topic of Super AI is already being discussed by various AI researchers; it's called AGI (artificial general intelligence). And the fact is that the growth in its capability could be exponential, literally a matter of days. And most likely we wouldn't even know about it.
So there are people who know this stuff a lot better than I do.

For example:

There is an ongoing dispute between Elon Musk and Sam Altman (the current head of OpenAI, the company that created ChatGPT-4). As many know, Elon claims that the development of AI can lead to an existential threat to humanity, and Sam says it's not that dangerous. Yes, there may be a corporate bias, because he runs the largest company producing the most famous AI. But I found something interesting.

Sam discusses Eliezer Yudkowsky's statement that AGI would likely kill all humans because of the AI alignment problem.
And Sam in fact admits that he may be right and that there is some chance of that.

Eliezer Yudkowsky is an artificial intelligence researcher who has been studying this topic for more than 20 years. He also writes on decision theory and ethics and is best known for popularizing ideas related to friendly artificial intelligence, including the idea of a "fire alarm" for AI. He is a co-founder and research fellow at the Machine Intelligence Research Institute.

To summarize what he says: he discusses the dangers of artificial intelligence and its potential to destroy humanity; he believes humanity has probably already lost to AI and is on the verge of extinction, and that there is little we can do. He continues to do what he's doing and just hopes he's wrong.

Or Geoffrey Hinton, a cognitive psychologist and computer scientist most noted for his work on artificial neural networks. From 2013 to 2023 he divided his time between Google (Google Brain) and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the risks of AI technology. In 2017 he co-founded the Vector Institute in Toronto and became its chief scientific advisor.

He discusses the current moment in AI, highlighting the importance of teaching coding, the potential dangers of AI, concerns about the use of autonomous AI soldiers in warfare, and the need to control new ideas in AI. He also acknowledges concerns about computers coming up with their own ideas for self-improvement and the need to control this. He believes job displacement will occur, but that people will simply shift to more creative work rather than routine tasks, as with bank tellers who now deal with more complicated tasks. He also mentions that Canadian government policies have helped fund AI research and support curiosity-driven basic research, which has contributed to Canada's lead in AI.

Connor Leahy is an entrepreneur and artificial intelligence researcher. He is best known as a co-founder and co-lead of EleutherAI, a grassroots non-profit organization focused on advancing open-source artificial intelligence research. Leahy is also the founder of Conjecture, a startup working on AI alignment - the task of making machine learning models controllable.

He warns of the potential catastrophic outcomes associated with AI development, arguing that as a technology becomes more powerful, the blast radius of an accident during its development increases. Leahy emphasizes that his argument is not anti-AI but highlights the importance of building safe AI. He also discusses the dangers of ideological motivation in the pursuit of AGI and the geopolitical implications surrounding the regulation and development of AI. He suggests that government regulation is necessary to ensure accountability for those pursuing AGI, and he encourages individuals to play a more significant role in shaping the future landscape of AI safety. He also proposes imposing strict liability on AI model developers and deployers to mitigate risks and buy time for AI safety research and proper regulation. He states that progress is not free and that we must aim to develop AI systems aligned with our values to ensure a good outcome for humanity; solving alignment, he believes, will require humanity's collective effort, politics and technology.
Researchers from the University of Illinois at Urbana-Champaign recently published a study showing that OpenAI's GPT-4 model is capable of independently exploiting vulnerabilities in real systems once it receives a detailed description of them.

The study selected 15 vulnerabilities described as critical. The results showed that the GPT-4 language model was able to exploit 87% of these vulnerabilities, while the other models were unable to do so.

Daniel Kang, one of the authors of the paper, argues that the use of LLMs can make exploiting vulnerabilities much easier for attackers. According to him, systems based on artificial intelligence will be much more effective than the tools available to novice hackers today.
