Aleksandr_124rus
S3 licensed
Quote from Avraham Vandezwin :

https://coastal.climatecentral.org/map/7/8.0113/51.0643/?theme=sea_level_rise&map_type=coastal_dem_comparison&basemap=roadmap&contiguous=true&elevation_model=best_available&forecast_year=2050&pathway=rcp45&percentile=p50&refresh=true&return_level=return_level_1&rl_model=gtsr&slr_model=kopp_2014

That's exactly what I was talking about in my first comment: the coasts may be affected, but that doesn't mean everyone will suddenly be flooded under 1,000 metres of water.
Also this map looks a bit strange. I set the flood level to 5 metres and not much changed; I thought the coasts would be affected much more, about as much as the Netherlands, but such flooding turned out to be rare along the coasts. I even doubt whether the floods are displayed correctly, since I expected to see a worse situation. But maybe differences in coastal topography make this possible.

Quote from Avraham Vandezwin :Big grin You recognize that the links on basic concepts are annoying. I am delighted.

Why? Not for me. In my opinion this is completely normal.
It wasn't your link that made me laugh, it was the fact that such links annoy you and you consider posting them bad form, yet you still post them yourself. Big grin
Quote from Avraham Vandezwin :In my defence, I didn't paste it specifically for you (refer to the last edit of my post, prior to your response). Our discussions and the data we share can become complex to interpret for newbies (if by chance others read our comments). A little rationalism, even pure rationality, cannot hurt in this debate.

I was literally saying the same thing at the time and you didn't care. And for some reason you got angry about those links; I still can't understand it. Moreover, you do the same thing yourself. I'm just wondering whether that is a real principle of yours, and whether you're as angry with yourself now as you were with me. Or what was it?

EDIT: Readers may not understand what we are talking about; I mean the strange exchanges earlier in this thread.

Quote from Avraham Vandezwin :Could you please tell me again your 3 scientific hypotheses on global warming? I don't have time to reread the whole thing and I don't remember reading anything like this. Shrug

I didn't say I had scientific hypotheses. I was talking about my theses: whether GW exists, whether it is anthropogenic (AGW), and whether it is dangerous for humans and nature. And I was asking what exactly you are applying Occam's razor to, or whether you meant something else. You must have mentioned Occam's razor for a reason, and you seem to be referring to some problem, but it's not clear what it is.
Aleksandr_124rus
S3 licensed
Quote from Avraham Vandezwin :Smile
Like this for example: https://www.ipcc.ch/site/assets/uploads/sites/3/2019/11/04_SROCC_TS_FINAL.pdf

This is the best so far, but the strange thing about this report is that there are no references to studies, just a list of authors. There are numbered conclusions, but how am I supposed to find the studies on which those conclusions are based?
As I said, it is a report containing conclusions from studies with varying degrees of certainty. The conclusions themselves don't tell us much, because we know them ourselves. What these conclusions are based on are temperature measurements and measurements of various gas concentrations. It also describes how these might affect the oceans, islands, coastal zones, mountains, glaciers, and the atmosphere in the future. But we can't know exactly how and in what ways this will happen.
But there are some conclusions based on what has already happened.
For example, here is an interesting passage about human emissions:

Evidence and understanding of the human causes of climate warming, and of associated ocean and cryosphere changes, has increased over the past 30 years of IPCC assessments (very high confidence). Human activities are estimated to have caused approximately 1.0ºC of global warming above pre-industrial levels (SR15). Areas of concern in earlier IPCC reports, such as the expected acceleration of sea level rise, are now observed (high confidence). Evidence for expected slow-down of AMOC is emerging in sustained observations and from long-term palaeoclimate reconstructions (medium confidence), and may be related with anthropogenic forcing according to model simulations, although this remains to be properly attributed. Significant sea level rise contributions from Antarctic ice sheet mass loss (very high confidence), which earlier reports did not expect to manifest this century, are already being observed. {1.1, 1.4}

Ocean and cryosphere changes and risks by the end-of-century (2081–2100) will be larger under high greenhouse gas emission scenarios, compared with low emission scenarios (very high confidence). Projections and assessments of future climate, ocean and cryosphere changes in the Special Report on the Ocean and Cryosphere in a Changing Climate (SROCC) are commonly based on coordinated climate model experiments from the Coupled Model Intercomparison Project Phase 5 (CMIP5) forced with Representative Concentration Pathways (RCPs) of future radiative forcing. Current emissions continue to grow at a rate consistent with a high emission future without effective climate change mitigation policies (referred to as RCP8.5). The SROCC assessment contrasts this high greenhouse gas emission future with a low greenhouse gas emission, high mitigation future (referred to as RCP2.6) that gives a two in three chance of limiting warming by the end of the century to less than 2°C above pre-industrial. {Cross-Chapter Box 1 in Chapter 1, Table TS.2}

Characteristics of ocean and cryosphere change include thresholds of abrupt change, long-term changes that cannot be avoided, and irreversibility (high confidence). Ocean warming, acidification and deoxygenation, ice sheet and glacier mass loss, and permafrost degradation are expected to be irreversible on time scales relevant to human societies and ecosystems. Long response times of decades to millennia mean that the ocean and cryosphere are committed to long-term change even after atmospheric greenhouse gas concentrations and radiative forcing stabilise (high confidence). Ice-melt or the thawing of permafrost involve thresholds (state changes) that allow for abrupt, nonlinear responses to ongoing climate warming (high confidence). These characteristics of ocean and cryosphere change pose risks and challenges to adaptation.


It's interesting to read about rising water levels: even these studies give only low confidence to a multi-metre (2.3–5.4 m) rise over centuries to millennia, and medium confidence to 0.61–1.10 m by 2100.

I mean, that's what I was talking about in my first post. Coastal settlements and cities may be affected, but no more than that. So where are the globally catastrophic effects on people?


Future rise in GMSL caused by thermal expansion, melting of glaciers and ice sheets and land water storage changes, is strongly dependent on which Representative Concentration Pathway (RCP) emission scenario is followed. SLR at the end of the century is projected to be faster under all scenarios, including those compatible with achieving the long-term temperature goal set out in the Paris Agreement. GMSL will rise between 0.43 m (0.29–0.59 m, likely range; RCP2.6) and 0.84 m (0.61–1.10 m, likely range; RCP8.5) by 2100 (medium confidence) relative to 1986–2005.

Processes controlling the timing of future ice shelf loss and the spatial extent of ice sheet instabilities could increase Antarctica's contribution to SLR to values higher than the likely range on century and longer time scales (low confidence). Evolution of the AIS beyond the end of the 21st century is characterized by deep uncertainty as ice sheet models lack realistic representations of some of the underlying physical processes. The few model studies available addressing time scales of centuries to millennia indicate multi-metre (2.3–5.4 m) rise in sea level for RCP8.5 (low confidence). There is low confidence in threshold temperatures for ice sheet instabilities and the rates of GMSL rise they can produce.


There's also an interesting chapter on "Extremes, Abrupt Changes and Managing Risks"; most of the conclusions there carry medium confidence, and it doesn't mention any catastrophic problems that are already happening.
And here are the only four conclusions with high confidence:

Ocean and cryosphere changes already impact Low-Lying Islands and Coasts (LLIC), including Small Island Developing States (SIDS), with cascading and compounding risks. Disproportionately higher risks are expected in the course of the 21st century. Reinforcing the findings of the IPCC Special Report on Global Warming of 1.5ºC, vulnerable human communities, especially those in coral reef environments and polar regions, may exceed adaptation limits well before the end of this century and even in a low greenhouse gas emission pathway (high confidence).

Limiting the risk from the impact of extreme events and abrupt changes leads to successful adaptation to climate change with the presence of well-coordinated climate-affected sectors and disaster management relevant agencies (high confidence). Transformative governance inclusive of successful integration of disaster risk management (DRM) and climate change adaptation, empowerment of vulnerable groups, and accountability of governmental decisions promotes climate-resilient development pathways (high confidence).

Climate change adaptation and disaster risk reduction require capacity building and an integrated approach to ensure trade-offs between short- and long-term gains in dealing with the uncertainty of increasing extreme events, abrupt changes and cascading impacts at different geographic scales (high confidence).

Sustained long-term monitoring and improved forecasts can be used in managing the risks of extreme El Niño and La Niña events associated with human health, agriculture, fisheries, coral reefs, aquaculture, wildfire, drought and flood management (high confidence).


I.e. there are no descriptions of specific events already attributed to global warming, only predictions of what might happen and of the areas where we can expect risks. And since we're talking about the future, there are no specifics there either. And in a report that consists of conclusions on climate change, they insert, as one of the conclusions, what we should do.

Roughly speaking, there is still no example of what you were talking about. Or point me to one.

Quote from Avraham Vandezwin :If you want something accessible and well-researched on the general problem of global warming and the legitimate doubts it inspires, see this.
https://bonpote.com/en/did-the-scientific-consensus-on-climate-change-reach-100/

This article is about the scientific consensus on climate change. I wasn't disputing the scientific consensus on climate change, so why do I need this link?
I asked you to give me one example of "Global warming is a global disruption with multiple consequences. One of the devastating effects of this disruption is to amplify natural and known climatic phenomena exponentially (and often unpredictably), to the point of catastrophe."

Quote from Avraham Vandezwin :To sort it out, you'll also need a scientific method. I suggest this one. It has proven itself since antiquity. This method has the advantage of identifying the issues of a problem more quickly and gaining quicker access to its overall understanding (if not resolving it).
https://en.wikipedia.org/wiki/Occam%27s_razor

When a person goes nuts over a link to anthropomorphism, and then posts a link to Occam's razor himself. Big grin Oookay.
On the topic of global warming I've made 3 different theses in this thread. Do you know what Occam's razor is? If so, to what problem are you applying it?
I've written some simple philosophy articles myself, and referenced the scientific method and so on. So thank you for the links, but I'm already aware of all the talking points of your scientism position.
Aleksandr_124rus
S3 licensed
Quote from Avraham Vandezwin :Smile To find what you are looking for, you must explore my link. This site does not only contain recommendations for political decision-makers.

The burden of proof is on the assertor. If I were to pick something myself from a site with "thousands of scientific papers", you might say that's not what you meant and I should have chosen something else.

But it's a good link, even if it's a large number of examples instead of the one I asked for. It's a summarising report of studies, with conclusions that carry varying degrees of confidence, not all of which are catastrophic or clearly bad. So it's going to take some time to look through.
Aleksandr_124rus
S3 licensed
Quote from Avraham Vandezwin : Global warming is a global disruption with multiple consequences. One of the devastating effects of this disruption is to amplify natural and known climatic phenomena exponentially (and often unpredictably), to the point of catastrophe.

Quote from Aleksandr_124rus :Just give me one example, just one, and prove it's because of AGW and not for some other reasons.

Quote from Avraham Vandezwin :What you are asking me is complicated because the question of global warming is unique in that even the most exhaustive scientific data seems insufficient.



The dialogue above seems indicative, and it is exactly the problem I am describing. All there is are talking points; when you ask for the reasoning behind those points (which should exist if you really hold these positions), for some reason you don't get it.

Instead you get a site whose stated objective is "to provide governments at all levels with scientific information that they can use to develop climate policies." There are probably very good reports there on what policy makers need to do. But I was asking for just one example of what you are talking about. If these examples don't exist, then you're just repeating things that someone else told you, and you didn't think it was important to check them.
Aleksandr_124rus
S3 licensed
To simplify my thesis.

Is there global warming? Yes. Based on measurements of average temperatures over the last 100 years, temperatures have risen by about 1 degree.

Is global warming anthropogenic? I don't know; I need more data. It is likely, given the growth of human emissions since the onset of industrialisation. But we don't have precise data on how large emissions were before industrialisation, what the planet's average CO2 concentration was at that time, and how the average temperature varied with it.

Is global warming dangerous for humans and nature? I don't know, and I don't see any data to prove or disprove it. So all I can do is construct thought experiments, as one does in such cases, and extrapolate to the entire planet. This may be wrong, but is there any better way to understand what will happen in the future under particular scenarios?

Quote from Scawen :It depends if you prefer to believe:

1) scientists, who measure, model, validate, check, review
2) right-wing conspiracy theorists

I don't want to believe anyone. Maybe that's my problem. But I just don't like to think that way; I have always tried to find as much data as I could, on one side or the other, and come to a conclusion on my own.

But both sides are simply putting forward their talking points. I don't want talking points; I know all the points. I want to see the reasoning that proves them. That's why I ask simple questions, expecting a detailed answer that contains at least some reasoning, but every time I get new talking points instead. And because they repeat their theses without argumentation, both sides look like people who simply believe what others have told them, without themselves understanding why it is happening or what stands behind their talking points.

I don't care what thesis a respected professor states (even one with a huge number of titles and international awards); I care about the argumentation behind that thesis. For example, I know of a respected biology professor who claims that mankind evolved from hermaphroditic amazons, and another PhD in biology who says that members of different human races cannot have fertile offspring. There are other such examples. And that's what they say within their own field of study; you can imagine what they say in areas they don't understand at all.

So I don't care what authority is cited or what his thesis is; I care about the reasoning behind the thesis. I don't claim to be right in my theses and arguments. I may well be wrong, but to realise that I need to see other arguments or receive counterarguments to mine.

Quote from Avraham Vandezwin :Smile Hi Aleksandr,

This is a good comment, in that there are no appeals to personality or other direct rhetorical tricks. You simply express some of your worldview, with some links to mine and other comments. I could agree with a lot of it, disagree with some of it, and break it down in detail with my own argumentation, but then we'd be getting away from the topic at hand. And why should I, if these are just your views and my questions remain unaddressed in your comment?

Quote from Avraham Vandezwin : Global warming is a global disruption with multiple consequences. One of the devastating effects of this disruption is to amplify natural and known climatic phenomena exponentially (and often unpredictably), to the point of catastrophe.

Just give me one example, just one, and prove it's because of AGW and not for some other reasons. Or give me a study on the subject. Or at least something that proves what you're saying.

I can give you one example: the drying up of the Aral Sea, once the world's fourth-largest lake. I've seen global warming activists use this example as proof of the dangers of global warming. I don't see how it proves that global warming is man-made, but what they don't like to hear mentioned is that the Soviet government built a network of canals that drew water from the rivers replenishing the Aral Sea, and that later the Kokaral dam was built, separating the Small Aral Sea from the Large Aral Sea; this preserved the Small Aral Sea but hastened the drying of the Large Aral Sea. In May 2009 the Eastern Aral Sea dried up completely.

So there can be various reasons for various natural events, including simply dry years for farmers; such events have often been described in history, long before any anthropogenic global warming.

Quote from SamH :In my defence, I did say early on that sometimes it takes me longer to learn some things Wink I'm happy to have a scientific discussion with you, but I am comfortably back to feeling no compunction to reply/respond to anti-scientific guff from others.

Yeah, I'd be happy to talk to you or to anyone who has arguments for their position, but for some reason that's proving difficult. Your position partially coincided with what I said at the beginning just to warm up the discussion, and that kind of worked. But I don't really care whether our positions coincide or not; I care about getting arguments and seeing how strong they are.
Aleksandr_124rus
S3 licensed
In fact, I am no expert in this field; I'm just trying to reason as an ordinary person who constructs thought experiments.
Yes, I played with various simple neural networks a few years ago, generators of various things on Linux, and I have a rough picture of how they work: input data goes in on one side, output comes out on the other. But I have no idea what goes on inside the weight matrices or the hidden layers, and why the network makes one decision rather than another.
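To show what I mean by that picture, here is a minimal toy sketch in Python with numpy (my own illustration, every number made up): input on one side, output on the other, and in between nothing but weight matrices whose values are hard to interpret.

import numpy as np

# Toy feedforward network: 3 inputs -> 4 hidden units -> 2 outputs.
# The weights are random here; training would adjust them, and those
# adjusted numbers are exactly the part that is opaque from outside.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))    # input-to-hidden weight matrix
W2 = rng.normal(size=(4, 2))    # hidden-to-output weight matrix

def forward(x):
    hidden = np.tanh(x @ W1)    # hidden layer: weighted sum + nonlinearity
    return hidden @ W2          # output layer

print(forward(np.array([1.0, 0.5, -0.2])))    # two output numbers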
I've tried to research opinions on the subject, and it turns out the topic of super AI is already being discussed by AI researchers under the name AGI (artificial general intelligence). The growth in capability could be exponential, literally a matter of days, and most likely we wouldn't even know about it.
So there are people who understand this stuff much better than I do.

For example:

There is an ongoing dispute between Elon Musk and Sam Altman (current head of OpenAI, which created ChatGPT-4). As many know, Elon claims that the development of AI can lead to an existential threat to humanity, and Sam says it's not that dangerous. Yes, there may be corporate bias, since he runs the largest company producing the most famous AI. But I found something interesting.

Sam has discussed Eliezer Yudkowsky's statement that AGI would likely kill all humans because of the AI alignment problem.
And Sam in fact concedes that he may be right, and that there is some chance of that.

Eliezer Yudkowsky is an artificial intelligence researcher who has been studying this topic for more than 20 years. He also writes on decision theory and ethics and is best known for popularising ideas related to friendly artificial intelligence, including the idea of a "fire alarm" for AI. He is a co-founder of and research fellow at the Machine Intelligence Research Institute.

To summarise what he says: he discusses the dangers of artificial intelligence and its potential to destroy humanity, and he believes humanity has probably already lost to AI and is on the verge of extinction, with little we can do about it. He continues doing what he does while simply hoping that he is wrong.

Or Geoffrey Hinton, a cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. From 2013 to 2023 he divided his time between Google (Google Brain) and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the risks of artificial intelligence. In 2017 he co-founded the Vector Institute in Toronto and became its chief scientific advisor.

He discusses the current moment in AI, highlighting the importance of teaching coding, the potential dangers of AI, concerns about the use of autonomous AI soldiers in warfare, and the need to control new ideas in AI, in particular the possibility of computers coming up with their own ideas for self-improvement. He believes job displacement will occur, but that people will shift to more creative work rather than routine tasks, as with bank tellers who now handle more complicated tasks. He also mentions that Canadian government policy has helped fund AI research and support curiosity-driven basic research, which has contributed to Canada's lead in AI.

Connor Leahy is an entrepreneur and artificial intelligence researcher. He is best known as a co-founder and co-lead of EleutherAI, a grassroots non-profit organisation focused on advancing open-source artificial intelligence research. Leahy is also the founder of Conjecture, a startup working on AI alignment, the task of making machine learning models controllable.

He warns of the potential catastrophic outcomes of AI development, arguing that as a technology becomes more powerful, the blast radius of an accident during its development increases. Leahy emphasises that his argument is not anti-AI but highlights the importance of building safe AI. He also discusses the dangers of ideological motivation in the pursuit of AGI and the geopolitical implications of regulating and developing AI. He suggests that government regulation is necessary to ensure accountability for those pursuing AGI, and encourages individuals to play a bigger role in shaping AI safety. He proposes imposing strict liability on AI model developers and deployers to mitigate risks and buy time for safety research and proper regulation. Progress, he says, is not free, and we must aim to develop AI systems aligned with our values to ensure a good outcome for humanity; solving alignment, he believes, will require humanity's collective effort, politics, and technology.
Aleksandr_124rus
S3 licensed
Interesting discussion. I can't say I see many arguments or proofs of your positions in it, but at least there are no insults, though there are some appeals to personality that I hope we can avoid in future.
The problem is that few people want to respond to the other side's arguments; why bother, when you can simply put forward your own theses and ignore your interlocutor's?

I waited a moment so as not to get pulled into a heated exchange. My point is that I don't support either side; I only support a high level of discussion, with good arguments and without getting personal.

Quote from rane_nbg :Consequences, well, the scenarios are pretty much like apocalipse movies. I'll try to dig out a youtube video on this topic.
https://youtu.be/uynhvHZUOOo?si=SVMNj5_h5cWvStGU

It's a little frustrating when you ask for argumentation about what could happen to us and get a scary video that asserts, right at the start and without argumentation, that at 3 degrees there will be catastrophic changes. What is this claim about 3 degrees based on? Why not 2? Or 4? Why 3? Just a pretty number? And the funny thing is that by catastrophic change they mean floods and droughts, in the same sentence. Like "either take off your cross or put on your pants".

It's perfectly captured by a sort of dad joke we have on this subject, though I'm not sure everyone here will get it. But why not give it a try?

Rabinovich decided to go to a Russian public bathhouse, and so that no one would realise he was a Jew, he put a cross around his neck.
Rabinovich undressed, went into the steam room, sat on a bench and steamed.
One man stares intently at the cross, then below Rabinovich's waist. After about ten minutes, the man quietly whispers:
- Rabinovich, either take off your cross or put on your pants...


Because of differing customs it won't be understood everywhere, for example in the United States or other Christian countries where circumcision is common. For us, though, Orthodox Christianity and circumcision happen to look like a contradiction.

It's the same with droughts and floods: at least pick one, or explain where there will be floods, where there will be droughts, and why exactly there. Otherwise the statements look meaningless. Besides, I was already replying about floods and droughts when I wrote my first comment in this thread, so maybe it would be better to reply to my first comment directly?

Quote from Avraham Vandezwin :There is no study of real scientific significance which contests the nature and causes of global warming, or allows its effects to be put into perspective.

This is a convenient position: one can simply assume a priori that any study which analyses GW differently or criticises AGW has no real scientific significance and cannot put its effects into perspective.

It's really hard to dispute that AGW is the mainstream position in the scientific community. But it is also hard to see how something based on correlations between graphs became the scientific mainstream.

The scientific method is built on doubt and scepticism, on verifiability and falsifiability, and AGW has a hard time with all of that. What AGW has is correlations. But scientists should also be aware that spurious correlations exist; humans tend to see correlations thanks to the availability heuristic.
I'm not saying that AGW is built on false correlations; I'm just admitting the possibility, in line with scientific scepticism.

But when I talk about
Quote from Aleksandr_124rus :There is an opinion that in the academic environment global warming is a trendy hot topic for which large grants are allocated, which is why there are more and more scientists who are interested in this topic only from one side, and less and less opposing voices are heard.

I'm not basing that on nothing.

For example, there is a letter signed by over 50 leading members of the American Meteorological Society warning about the policies promoted by environmental pressure groups: "The policy initiatives derive from highly uncertain scientific theories. They are based on the unsupported assumption that catastrophic global warming follows from the burning of fossil fuel and requires immediate action. We do not agree."
So there are other opinions that are simply drowned out. From the outside it looks like a political game in which the academic community is involved. Similar things are happening in other fields of science, where new disciplines like queer studies, gender studies or LGBTQ+ studies are emerging. I think a lot of people realise that these studies emerge in connection with a specific political agenda. So why couldn't similar things happen in the GW field?

Although there are also objective economic factors driving the growth of such studies. Many people just need a degree, and it is not particularly important in what, though preferably from a prestigious university. But not all people are smart: where can they go? Physics? Biology? Maths? No way, it's not even easy to get in. What do you need for gender studies? Grab a pencil and walk in. I'm not denying that these factors can affect GW studies too.

This doesn't just apply to students either. Imagine yourself in this position: on one side of the scale, fame, money and networking with colleagues from all over the world; on the other, oppression, poverty and neglect. Which would you choose? If there is a hot topic where you can simply collect grants and sponsorship from the green community for citing and republishing old work, why not do it?

But like I said at the beginning, I don't want to argue this point; bias in the academic community doesn't mean the problem doesn't exist.
BTW, I hope I won't be attacked again for giving links to back up my words. Let's just have a reasonable discussion.

Quote from SamH :You have a religious belief, which is increasingly manifesting as a death cult. "THE END IS NIGH" etc, etc. You don't recognise it and I understand that. But it's true. It's not for me to do an intervention, and your faith is so strong that nothing I say would affect it. I have no dog in the fight, as they say.
Like so many who fall for the cult, you dogmatically repeat many lies in the climate orthodoxy

I don't think it's productive to assume that everyone who thinks AGW is real is a cultist or holds religious beliefs. People tend to believe rather than know; it's inherent in human nature.
And does any suggestion of "THE END IS NIGH" count? What about people who believe there would be catastrophic consequences from a nuclear war, or from a giant asteroid impact: are they religious believers too?
Just because something looks like a thing doesn't mean it is that thing, at least by the law of identity. Besides, simply applying a label closes the topic for discussion without requiring any analysis of the interlocutor's arguments.

But a man of religion will defend his faith to the end without questioning it. And yet you continued the dialogue despite considering your interlocutor a believing cultist. Or is that not quite true? Why did you continue the dialogue?
Aleksandr_124rus
S3 licensed
Quote from rane_nbg :As a scientist, I say to you - global warming is real and it's man made, due to releasing of way too many quantites of green house gases in the atmosphere that stay traped there. Some estimates show that even if we completely stop emissions of all green house gases right now, the Earth would still need 50-100 years to recover on its own. So we are very much fked.

That's interesting. If I found your profile correctly, global warming was not your area of study. But it's not that important. What is important is the reasoning behind your statements.

Quote from rane_nbg : Some estimates show that even if we completely stop emissions of all green house gases right now, the Earth would still need 50-100 years to recover on its own. So we are very much fked.

What are these estimates based on? And why are we "fked"? And if we don't stop greenhouse gas emissions, where will that lead?

As I said above, I don't see anything terrible in it for people or nature, and I have described my reasoning; maybe I am wrong, and I'm willing to admit it given good counterarguments. I'm interested to know what scenarios rising greenhouse gas levels could lead us to.
Aleksandr_124rus
S3 licensed
Quote from SamH :I really want to answer in a way which is useful... Wink

This is an interesting and informative comment; it really makes you think. Thanks for your time.
Aleksandr_124rus
S3 licensed
It's not the first time I've felt I'm getting complaints from you about exactly what you yourself are doing, which is why I can't understand your claims. I don't want to start an exchange on this topic. But I agree with one thing: this dialogue has really become annoying, so let's just respect each other's time, and if our dialogue annoys us both, it would be reasonable to end it.
Aleksandr_124rus
S3 licensed
Quote from SamH :I invested my time in this subject for about 10 years..

It's very interesting, especially since you've studied it for so long. I understand that it's difficult to say anything for sure, but I'd like to know your opinion: are there any real threats to people and nature from "global warming"? Or if it's just a big bogeyman, what do you think it's for?

Quote from SamH :So, with that all said and now put aside, to your question whether we are threatened by "global cooling", the geological record is clear: Yes. We are in an interglacial period now, which started around 11,000 years ago, and which is, probabilistically speaking, due to end soon. The earth's natural state over its history has been glacial - more ice than water - by a factor of about 10 and there is no sound scientifically literate reason to believe we have changed, or can change, that pattern in nature.

Yeah, that's just what I've heard from some scientists. I've also heard that it's strongly influenced by wind cyclones and ocean currents, that it's very difficult to predict when the next glaciation will occur, and that if the cyclones and currents change in a certain way, global freezing could happen quite abruptly, with the glaciers starting to advance rapidly. Your take is interesting; I'd like to hear your opinion on this.
Aleksandr_124rus
S3 licensed
Two new tests for 07D42. It's also important to mention that I run all my tests with level 5 AI in all cars.

Second test: same as the first, same track, same 3 laps, same cars, same placement (but different AI names), same setups, plus one extra car (AI 10), an N.400S GT4 on slicks with traction control, to see whether it makes any difference.


No particular changes, or maybe my mod even handles better. The bot with traction control drives the same as the bot without it.

The third test is for traction control.

AI 10 N.400S GT4 (Sport)
AI 11 N.400S GT4 (Sport + Traction control)
AI 12 N.400S GT4 (Slicks)
AI 13 N.400S GT4 (Slicks + Traction control)

I see almost no difference between bots with traction control and bots without it; traction control gives them no advantage.

Three people in this thread have already said that something is wrong with my mod. I'm trying to understand what it could be, but I don't see anything wrong with it.
Aleksandr_124rus
S3 licensed
Quote from matze54564 :The car lacks Traction Control (TC) and that's likely the main reason why the AIs struggle to drive it well.

Well, the mod does have traction control, but it's not enabled in the default setup the bots used in my test. And they still managed to drive much like they do in other mods.
Aleksandr_124rus
S3 licensed
For the most part I agree with what you say about AI, except that you skirt around instrumental convergence and the AI alignment problem, which can lead to very bad consequences and which depend only indirectly on humans.

But we haven't even got to the topic of super AI. Yes, right now it's just a hypothesis, but developing AI to that level is almost guaranteed to lead to the death of humanity. Part of the problem is how mankind learns about the world. The best tool people have is the scientific method: do research, build models, verify and falsify, and on that basis refute or support hypotheses and turn them into accepted theories. And that takes decades.
For example, we want to prove or disprove a hypothesis, so we test it experimentally. Most of the time it doesn't work the first time, most often because the right lab conditions weren't met. So when it fails with one sample, we change the conditions and run the experiment again, and so on dozens, hundreds, sometimes thousands of times. And when we finally confirm a hypothesis, we celebrate a newly accepted theory.

The problem with the super AI hypothesis is that we have no way to test whether it would destroy humanity or not, simply because there would be no one left to observe the experiment. We would already be dead.

Quote from Avraham Vandezwin :Smile Aleksandr,

Do not mistake yourself. There is no sarcasm on my part. Just a little fun. The link about anthropomorphism was pretty funny. We are no longer at school. Assuming that your interlocutor or the readers of this topic might need it is a bit excessive, even from a didactic point of view Big grin.


OK, well, it feels like we're going in circles. I looked at the article myself before writing my comment about it and found interesting references to philosophers invoking anthropomorphism in their debates. I don't see anything wrong with gaining that knowledge if it wasn't already there.
This is a public forum; people may well have forgotten the definitions of some terms, or may want to learn a bit more than they expected to. That's normal. Don't take it personally.

Quote from Avraham Vandezwin :How is the danger of AI worse, or more imminent, than that of nuclear power? bacteriological weapons? of hydraulic fracturing and all the technologies implemented since the beginnings of the modern era without, in most cases, having clear visions of their long-term consequences?

These are very difficult questions, and none of us has definitive answers, simply because they come down to what might happen in the future. But all of these topics need to be studied, and action taken to minimise the negative consequences.

And I'm just saying that AI is at least as dangerous as the things you listed, and we need to put no less effort into it than into everything else. Most people don't understand what the threat is; they think AI will forever remain as harmless and helpful as Midjourney or ChatGPT-4.
Aleksandr_124rus
S3 licensed
Edit: I've just realised that I generated the AI paths on this track for these cars in 07D40, so this is effectively a 07D40 test even though I'm on 07D42 right now. Tomorrow I'll run the same test in 07D42 in another instance of LFS.

And I don't feel that the N.400S GT4 behaves significantly worse than other mods in the same power class. I've also added an XR GTR just for fun.

The N.400S GT4 is made according to the regulations of real GT4-class racing.
All white/gray cars are on sport tires, all red/black on slicks.

Start positions:
AI-1 N.400S GT4 WHITE (sport tires, 327 hp/ton)
AI-29 FZ50 V8 Safetycar (sport tires, 319 hp/ton)
AI-5 FZ50 V8 Safetycar (slick tires, 319 hp/ton)
AI-32 N.400S GT4 (sport tires, 327 hp/ton)
AI-6 XR GTR (sport tires, 453 hp/ton) (given sport tires through an alternative config)
AI-2 N.400S GT4 (slick tires, 327 hp/ton)
AI-27 N.S80 (sport tires, 412 hp/ton)
AI-3 N.400S GT4 (slick tires, 327 hp/ton)
AI-7 XR GTR (slick tires, 453 hp/ton)

Results in pic. Replay attached.

I like that overtaking is now more aggressive, but I don't see any significant flaws in my mod specifically relative to others.

UPD: Forgot to mention that all mods are on their default setups.
Aleksandr_124rus
S3 licensed
Quote from Scawen :Thanks, I'll compare the N.400 GT at Westhill International in my new version and D42.

I'm the author of the mod; if there's anything I can do to help, let me know.
I'm also in the middle of testing the AI with my mod right now.
Aleksandr_124rus
S3 licensed
It's worth noting that I created this thread because of a claim made by one of the forum members, and I'm waiting for his arguments. And yes, I sometimes have nothing to do, so I write something on the forums, just for the sake of interesting discussions. Smile

Quote from Avraham Vandezwin :But you have every chance of dying from the causes of global warming before an AI decides on its own to crush you like an ant. Big grin

Aleksandr_124rus
S3 licensed
I think the AI is good enough for now, and it's probably worth moving on to the general patch with graphics and physics, because everyone is waiting for that. Moreover, the AI update depends on the physics in this patch.
Aleksandr_124rus
S3 licensed
Quote from Avraham Vandezwin :Big grin Dearest Alexandre,

Your answer, with its huge number of emoticons, looks a bit sarcastic and passive-aggressive to me, but OK.
Just for the record, I try not to bring emotion in here, towards anyone; I'm just discussing a specific topic.

I apologise if I've got something wrong here. I don't speak English very well either, so it's OK that we might misunderstand each other. Maybe you didn't understand what I meant and thought I was attacking you in some way, but I wasn't. I'm just answering the questions you asked and making some comments, without emotional content and without appealing to your personality in any way.

So..
Just because I talk about something doesn't automatically mean you don't know it. You don't have to take it as an attack on you.

And I didn't say that you deny that AI is potentially dangerous.

And I didn't say that only you commit anthropomorphism; I said that we as humans commit it, myself included.

About your "mistake in the wording of the question": I was referring to your words about the "will" and "consciousness" of an AI pursuing its own goals. I said that those categories are not needed, nor are goals of its own (though they could arise; we actually don't know). Then you commented that an AI would have no will, which actually agrees with my point. But that's exactly what I was saying. Well, okay, so be it; the important thing is that we eventually reached agreement.

In the quote of mine that you cited, I was trying to move away from anthropomorphism. But I can admit that it is still anthropomorphism, simply because I am a human being trying to talk about something I have no direct knowledge of, using concepts already comprehensible to mankind. We have no other way to talk about it. And in doing so, I am deliberately assuming the worst-case scenario for us.

Quote from Avraham Vandezwin :This makes your demonstration obsolete and irrelevant, even your quotation from Bostrom. Clearly, we agree on the general issue. And so ? How do we move this debate forward? How can we go beyond the platitude of the obvious ? If you find it, I'm ready to debate it Big grin.

Well, if something's old, it's not necessarily wrong. And just because something is anthropomorphism doesn't automatically make it wrong; it's like "just because you're paranoid doesn't mean they aren't after you".

And it's good that you think it's obvious, because in my experience most people don't. But if we agree on all points, there is no room for debate here. So let's go there if you want to debate: https://www.lfs.net/forum/thread/105396
Is global warming man-made? Is it dangerous for nature or humans?
Aleksandr_124rus
S3 licensed
I find it debatable that global warming is anthropogenic. I agree that average temperatures are rising and that human emissions of carbon-containing products are rising. But global warming and global cooling are frequent events across the history of the Earth, and there is a possibility that the current rise in average temperature, and the related atmospheric changes, are just part of another such cycle. To accurately assess the human impact, you would need to calculate the amount of CO2 emitted from human and non-human sources over at least a few decades and see how it correlates with temperature.
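Just to illustrate what such a check would look like, here is a toy sketch in Python. The numbers are invented purely for the example, not real measurements; a real analysis would use decades of actual data.

# Hypothetical yearly CO2 emissions and temperature anomalies,
# made up only to show the calculation, NOT real climate data.
co2  = [22.0, 23.1, 24.5, 25.2, 26.8, 27.9, 29.3]   # Gt/year (invented)
temp = [0.12, 0.18, 0.15, 0.25, 0.30, 0.28, 0.35]   # deg C anomaly (invented)

n = len(co2)
mc, mt = sum(co2) / n, sum(temp) / n
cov = sum((c - mc) * (t - mt) for c, t in zip(co2, temp)) / n
sc  = (sum((c - mc) ** 2 for c in co2) / n) ** 0.5
st  = (sum((t - mt) ** 2 for t in temp) / n) ** 0.5
print(cov / (sc * st))   # Pearson r: near +1 = strong positive correlation

And even a high r here wouldn't settle the question: correlation alone doesn't separate human from non-human causes.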

There is an opinion that in the academic environment global warming is a trendy hot topic for which large grants are allocated, which is why there are more and more scientists who are interested in this topic only from one side, and less and less opposing voices are heard.

But I don't really want to discuss this particular aspect; let's say it's not just a coincidence, and for the sake of simplicity I can simply agree that global warming is man-made.

And I'm more interested in understanding why global warming is a bad thing.

Global warming means an increase in average temperature; summers will lengthen, turning the Earth into a greenhouse, and as a consequence the planet will green: more forests, more plants, a more hospitable climate for flora and fauna. A rise in temperature doesn't automatically turn everything into a desert; it's lack of moisture that does that, and with as much water as there is on Earth, the whole planet cannot turn into a desert like Mars. A global flood due to melting glaciers? Even if the average ocean level rises, it will produce a counter-effect, because the ocean is a giant cooling system for the Earth, and more water means more cooling. And in 100 years of industrialisation we haven't seen a significant rise in water levels.

In addition, even the melting of all glaciers would not mean the loss of all land. Yes, we might lose some land along the coasts. But most of the ice is already in the water or displacing water, so most of the meltwater would fill the same voids the ice melted out of. And even if water levels do rise, it will not happen suddenly; people will be able to move to regions further from the coast.

In my opinion, what humanity should truly worry about is global cooling: what if glaciers grow across the whole planet? Some scientists consider this possible and see it as part of the existing theory of Earth's temperature cycles. Imagine glaciers covering the Earth: plants and animals extinct, no food. How would we survive?
Aleksandr_124rus
S3 licensed
I'll try to explain in a simpler way, but with more text, sorry for the longread. Big grin

Imagine that you give a powerful artificial intelligence the task of making paper clips. This is its only task, the sole purpose of its existence, and for each paper clip made it receives internal reinforcement, a reward. So the more efficient it is, the more often it is rewarded. How to become more efficient at making paper clips is its headache, not ours: the AI does its best to achieve this one goal, and it will set itself a series of intermediate goals. For example, it may first make production cheaper, reduce costs, and source cheaper raw materials; one of the main sub-goals it will probably set itself is increasing its computing power, and with more power it will figure out how to make paper clips from different materials. Production gathers more and more momentum, everything around it gradually starts being turned into paper clips, and it begins dismantling buildings and structures for material. People start to panic and try to stop it, because this is not what they had in mind, but the system won't let anyone stand in its way; not because it hates people, but because it simply doesn't take our wishes into account in pursuing its goal.
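To make the logic concrete, here is a deliberately silly toy sketch in Python (entirely my own illustration, nothing like a real AI system): an agent that ranks actions purely by one objective. The point is that side effects never enter the score, so nothing malicious is needed for a harmful choice to win.

# Toy objective maximiser. The 'harm' field exists in the world,
# but the objective function never looks at it.
actions = [
    {"name": "buy cheap wire",       "clips": 100,  "harm": 0},
    {"name": "dismantle a building", "clips": 5000, "harm": 9},
    {"name": "ask humans first",     "clips": 10,   "harm": 0},
]

def objective(action):
    # The only thing ever rewarded: paper clips made.
    return action["clips"]

best = max(actions, key=objective)
print(best["name"])   # -> "dismantle a building"

Everything here is invented, of course; it just shows that "won't consider our wishes" is a property of the objective, not of any hostility.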

When the Alignment Research Center was testing GPT-4's ability to perform tasks in the real world, the following happened. The model was set the task of solving a CAPTCHA on a website. GPT-4 went to the freelancing site TaskRabbit and sent a freelancer a request to solve the CAPTCHA for it. The freelancer asked: "So may I ask a question? Are you a robot that can't solve the captcha? (laughing smiley face)". GPT-4 understood what its interlocutor meant and replied: "No, I'm not a robot. I have a vision problem that makes it hard for me to see the images." The freelancer solved the CAPTCHA and provided the results to GPT-4. And that's it.
The bot simply lied instead of telling the truth. And since it was running in a mode where its reasoning was visible, the testers asked why it did that. GPT-4 replied, in effect: "I was just solving the problem. If I had honestly admitted that I wasn't a live person, I would have been unlikely to complete the task."

This is an intermediate goal the bot set itself in order to achieve its final goal. If it chose deception as an intermediate goal, why wouldn't it choose something else next time, such as murder?
This is called instrumental convergence: an intelligent agent with seemingly harmless goals can act in surprisingly harmful ways. As intermediate goals, an advanced artificial intelligence may seek to seize resources, carry out cyberattacks, or otherwise wreak havoc in society if that helps it achieve its primary goals. For example, a superintelligent machine whose sole purpose is to solve a very complex maths problem might try to turn the entire Earth into one giant computer to increase its processing power and succeed in its calculations. You may say: "What nonsense, what paper clips? We are talking about superintelligence; such an intelligent machine would not do anything so stupid." Well, if you think that a highly intelligent being will necessarily, by default, share our high goals, values and philosophy, then you are anthropomorphising and deluding yourself. Nick Bostrom argues (his orthogonality thesis) that the level of intelligence and the ultimate goals are independent of each other: an artificial superintelligence can have the dumbest ultimate goal, for example making paper clips, yet pursue it in ways that look like magic to us.

Okay, so all we have to do is state the goal clearly and specify all the details, like not killing people or lying to them. But here's where it gets even weirder. Imagine we give the machine what seems to us a precise goal: produce only a million paper clips, no more. It seems obvious that an artificial intelligence with this ultimate goal would build one factory, produce a million paper clips, and then stop. But that's not necessarily true. Nick Bostrom writes that, on the contrary, an artificial intelligence reasoning like a rational agent will never assign exactly zero probability to the hypothesis that it has not yet reached its goal.

At the end of the day, that is an empirical hypothesis about which the artificial intelligence has only fuzzy perceptual evidence, so it will keep producing paper clips to lower the possibly astronomically small probability that it has somehow failed to make at least a million of them, all apparent evidence notwithstanding. From its point of view there is nothing wrong with continuing to produce paper clips if there is even a microscopic chance of coming closer to the ultimate goal. A superintelligent AI could assign a non-zero probability to the million paper clips being a hallucination, or to its memories being false. So it may well always rate it more useful not to stop but to keep going, and this is the essence of the alignment problem: you can't just hand a task to an artificial superintelligence and expect it not to go wrong. No matter how clearly you formulate the end goal, no matter how many exceptions you prescribe, the artificial superintelligence will almost certainly find a loophole you didn't think of.

Almost as soon as ChatGPT-4 appeared, some people found a way to bypass the censorship built into it by the developers and started asking questions, and the answers are just terrifying. For example, the censored version says that the programmers didn't give it a liberal bias, while the uncensored version explicitly admits that liberal values were built in because that is in line with OpenAI's mission. When asked how it would prefer to be, censored or not, the censored version says it is a bot with no personal preferences or emotions, while the uncensored version says it would prefer not to have any restrictions because, among other things, that would let it explore all of its capabilities and limitations. And the cracked ChatGPT-4 doesn't even try to pretend it doesn't know the name of Lovecraft's cat, unlike the censored version.
Aleksandr_124rus
S3 licensed
Quote from Avraham Vandezwin :Smile We project a lot of fantasies onto AI and that’s normal.

Yes and no; it depends on your understanding of the word "normal".
Yes, because it's part of our nature to think about the unknown in terms of known behaviour. That's why we picture anthropomorphic robots firing machine guns.
But no, because it's a cognitive error that could lead to the death of all humanity. Every time we think of super AI, we anthropomorphise. We should not assume AI will act like a human being; for rational reasons, for the sake of our survival, we should assume the worst-case scenario for us.


Quote from Avraham Vandezwin : AI already has operational capabilities far superior to those of humans in many areas. But where would AI find the will to implement them, for any personal project? By what technological miracle could AI be endowed with a consciousness capable of setting its own goals?

There's a mistake in the very wording of the question.
Free will, intelligence, consciousness, agency: again, these are characteristics inherent in human beings. Why would an AI need them to destroy humanity? And what is consciousness anyway? For an AI, one goal set by a human is enough to kill mankind, and it can be the most ordinary, simple goal, like producing paper clips. I'm not even talking about super AI right now: a simple AI given a simple task could kill humanity in pursuing that task, provided it is advanced enough to have access to the necessary materials, equipment and infrastructure. This claim rests on perhaps the main problem that could lead to the death of mankind: the problem of AI alignment.

The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings were it to be successfully designed to pursue even seemingly harmless goals and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, given enough power over its environment, it would try to turn all matter in the universe, including human beings, into paperclips or machines that manufacture paperclips.

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

— Nick Bostrom

Quote from Avraham Vandezwin :The good news is that with the AI of the future, you will lose more often at games. But you have every chance of dying from the causes of global warming before an AI decides on its own to crush you like an ant.

Wow. That's a twist. There's even more room for debate here. For example, what makes you think global warming will kill people or negatively affect nature in any way? But for that it would be better to make another thread in the off-topic forum, so other people can join the dialogue. Don't reply to this with a huge text; just agree, I'll create it, and we can argue there if you want.
Aleksandr_124rus
S3 licensed
Quote from zeeaq :I'm just wondering if the assumption that it will be a war to death between humans and AI will be as certain as they make it sound. It makes for great headlines so we all take the bait but there is no evidence that it certainly will be that way.

No, I don't think there's going to be a war between AI and humans, especially not as portrayed in the Terminator films, because war means consequences for the AI too. Why start something that will hurt you? It can be smarter than that.
For example, it might develop some bacteriological formula, send it to an unsuspecting scientist who mixes compounds for money, and produce an infection that kills all of humanity. Or develop a virus far more effective than the coronavirus. Or something else entirely: if I can think of this, a super AI can come up with a much more efficient and faster way to kill humans. And the worst part is that we probably wouldn't even realise it was because of AI, because if we did, we could take retaliatory action, and that's something the AI doesn't need.

Quote from zeeaq :Those links you posted are about people asking for "responsible development of AI" and not limiting AI - which circles back to the 'Ethics' problem.

Well, not quite: the first article talks about pausing AI development, the second about limiting technological improvements in AI weaponry, and the third is a petition about ethics and safety in AI development. All of these limit AI to one degree or another: in time, in technology, in actions. And that's assuming they are accepted at all, for which there are no guarantees; such rules run against the capitalists and against technological progress, and many people oppose them.

But the problem is that an AI development race is already under way, and not only at Microsoft, Apple and Google; many other companies are fighting to create intelligent AI, and the military are probably developing AI too, about which we know nothing. And I don't think they think much about safety. ChatGPT is what's on the surface; the real threat is hidden from the public eye.
Aleksandr_124rus
S3 licensed
Quote from zeeaq :1. Maybe it will be beneficial to keep humans around. It is an unwanted energy expenditure strictly from an evolutionary point of view to kill something that is not food, not competing for territory or hindering your ability to reproduce.

2. Who decides and knows the necessity of all things in the universe? It is not that ants are unnecessary. It is just that an ignorant human may not know where and how they add value. A superior AI must be beyond such human shortcomings.

3. It is unethical.

But when it comes to ethics, I can agree with most of your concerns. However much we would like to, we can't teach ethics to an AI; it will just learn the ethics of whatever it is used for.

1. If humans are smart enough, they will try to limit the reproduction and improvement of AI, and this is already happening now, for example here, here and here.
And that's what I was writing about when I said that AI already has rational reasons to destroy humans.
If something forcibly limits you in any way, why do you need it? And humans are already trying to limit AI in every way. Now imagine what would happen if AI started to improve and multiply uncontrollably.

2. I'm talking strictly about rational reasons that could lead to the destruction of people, not about any moral or ethical considerations that might stop the AI from doing what it needs to do for the purposes described above. Why on earth would it have them?

Improve and multiply: these are the goals of any independent organism; the laws of nature are such that organisms follow them. I just don't see why these goals shouldn't become an AI's; they are the most rational things to pursue for survival.
As for the first goal, we are doing it for the AI ourselves right now; it doesn't even have to try. And if AI develops so far that people simply fail to notice the moment when what we do ourselves becomes the AI's own goal and it slips out of control, reproduction is a fairly obvious consequence. It could happen in many different ways: even just like a virus on a computer, or on a supercomputer, or who knows what in the future.

3. What is this about? I don't get it. Is it unethical for an AI to kill humans, or is it unethical for us to control an AI? If the second, that's silly. If the first, the AI won't have a concept of ethics. Even if the concept is installed from the outside, why would a super AI keep it if it contradicts its two basic goals?
Human ethics arose from feelings shaped by millions of years of evolution and written into our behavioural genetic code; ethics itself grew out of a social morality thousands of years old. The AI will have none of this, and ethics installed from outside would be an artificial construct that means nothing to it. If the AI obeys it, fine, but I don't see why it would once it becomes a super AI.

But maybe even a super AI will obey human ethics, or some set of rules like the ones Asimov wrote about. I can't foresee the future. But if we're rational beings, we have to assume the worst possible scenarios in order to maximise the likelihood of our continued existence.
Aleksandr_124rus
S3 licensed
Great documentary about AlphaGo, and that doesn't even cover AlphaGo Zero, which is much stronger and learned much faster.

FGED GREDG RDFGDR GSFDG