Artificial Intelligence in Racing Simulations Project


I didn't know whether to put this in Off-topic or here, but for the past 8 hours I have been thinking about developing the most realistic AI racing sim I can. I would like to discuss it here because there are a few programmers around and an obvious passion for racing. The idea comes from some tests I've been running in LFS; I won't go into them for fear of flaming, since what I'd have to say is purely my own thoughts and opinions based on what I've witnessed. I don't have the code, so I can't be sure what is actually happening.

Regardless, I want to make an AI that can take in only the amount of information that a human can. Information will be limited to reference points. Track Reference Points will be placed every 20 meters or so on the left and right sides of the track. General Reference Points consist of buildings, billboards and other large/obvious objects along the track, and Detail Reference Points will consist of points close to where the AI driver is supposed to brake, turn in or hit the apex.

However, the driver will not get the reference point as an exact position. Instead they will receive a vector pointing in the direction of the reference point, with a length close to the distance between the viewpoint and the reference point - but it could be longer or shorter. The error scales with how far away the reference point is: closer points are easier to judge, and therefore more accurate.

These reference points are also "culled" by a field of view each frame, so the driver needs to turn their view to see more. This FOV can change based on the driver's concentration level. For example, if the driver suddenly starts spinning and losing control, they will likely have a narrower FOV and less accurate points of reference.
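
A minimal sketch of how that perception step could look, assuming simple 2D vectors; the struct/function names and the 5% error figure are illustrative assumptions, not anything from an existing codebase:

#include <cmath>
#include <cstdlib>

struct Vec2 { float x, y; };

struct PerceivedPoint {
    Vec2  direction;          // unit vector from the eye toward the reference point
    float estimatedDistance;  // noisy distance; closer points are judged better
};

// Returns false when the point is outside the field of view (or too close to judge).
// viewDirection must be a unit vector; fovRadians narrows as concentration drops.
bool Perceive(const Vec2& eye, const Vec2& viewDirection, float fovRadians,
              const Vec2& referencePoint, PerceivedPoint& out)
{
    const Vec2  offset   = { referencePoint.x - eye.x, referencePoint.y - eye.y };
    const float distance = std::sqrt(offset.x * offset.x + offset.y * offset.y);
    if (distance < 0.001f)
        return false;

    const Vec2 toPoint = { offset.x / distance, offset.y / distance };

    // Cull anything outside the current field of view.
    const float cosAngle = viewDirection.x * toPoint.x + viewDirection.y * toPoint.y;
    if (cosAngle < std::cos(fovRadians * 0.5f))
        return false;

    // Error grows with distance: roughly +/-5% of the range here (assumed figure).
    const float noise = (std::rand() / (float)RAND_MAX) * 2.0f - 1.0f;
    out.direction         = toPoint;
    out.estimatedDistance = distance + distance * 0.05f * noise;
    return true;
}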

Other information the driver will be able to gather:
  • Reference Points
  • Velocity of the vehicle
  • Useful Car/Dashboard/Gauges Information: RPM, Fuel, Gear etc.
  • Traction Information (FULLGRIP, NEARLIMIT, ATLIMIT, OVERLIMIT)
  • G-Meter (Represents "feel of the seat")
A very important aspect of this project is making the AI use the car controls realistically, instead of simply moving the car with different physics than the player gets, as several games have done before. This includes having the driver "PrepareTo" shift before he actually wants to shift; otherwise shifting incurs a built-in lag. Not large, perhaps 150 to 200 milliseconds or less.
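
A small sketch of how that prepared/unprepared shift lag could be modelled, assuming a fixed-timestep update loop; the class, member names and the 175 ms figure are illustrative assumptions only:

// Gear 0 is used as a "no shift queued" sentinel purely for this sketch.
class GearboxInput {
public:
    GearboxInput() : mPrepared(false), mPendingGear(0), mShiftTime(0.0f) {}

    void PrepareToShift() { mPrepared = true; }

    void RequestShift(int targetGear, float currentTime)
    {
        // Unprepared shifts pay the reaction/hand-movement penalty.
        const float delay = mPrepared ? 0.0f : 0.175f;   // seconds, assumed figure
        mPendingGear = targetGear;
        mShiftTime   = currentTime + delay;
        mPrepared    = false;
    }

    // Called once per simulation step; returns true when the shift completes.
    bool Update(float currentTime, int& gearOut)
    {
        if (mPendingGear != 0 && currentTime >= mShiftTime) {
            gearOut      = mPendingGear;
            mPendingGear = 0;
            return true;
        }
        return false;
    }

private:
    bool  mPrepared;
    int   mPendingGear;
    float mShiftTime;
};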

---------

Development has not started on this, although I have started working on a writeup of the particular AI problems I am trying to solve, and eventually how I am going to go about solving those. I am looking for a few people who are interested in the project to contact me. I will be working with Visual C++ 2008 Express Edition. I want to keep the environment simple and to the point. The physics of the racing do not need to be 100% accurate or even near it, but the AI is the thing to tackle.

This artificial intelligence system will not be dependent on the physics. If done properly, the driver should be able to drive near the limits. That is not to say that physics won't affect the AI; a bad traction model could cause issues, though with any luck some rudimentary game physics will work.

See attached file for some existing AI problems.


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Youtube: AIRS: Computing the Racing Line
Youtube: AIRS: Driver Predictions
Youtube: AIRS Driving XRG at FE1 (2014)
Youtube: AIRS Playing with Genetic Algorithms
Youtube: AIRS Driver Lapping XRG at FE1 (early 2016)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Attached files
Racing Artificial Intelligence.pdf - 85.1 KB
IMO the big difference between the LFS AI and a human is that a human can anticipate all the time.

This problem is clear on the jumps in BL2R (reversed Blackwood Rallycross). The AI tries to maintain the racing line but doesn't anticipate the jump, and every time it jumps it goes off the racing line or off the track.

And I think that is the only reason the AI is not ever faster than a human.
#3 - amp88
Probably the biggest problem for getting the best performance from an AI driver is how to sense the cornering speeds you can achieve from the car. Making it able to follow the track and react to things like cars on the racing line etc is by no means a trivial task, but the ideas involved in that would appear to be simpler than finding out the current potential of the car.

There are so many things to take into account, including tyre grip (based on tyre size, type, wear level and temperatures), downforce levels (where applicable), weight distribution (static/dynamic) and car setup (suspension, gearing). If you are writing the sim yourself you obviously have all (or most?) of this information available to you, but if you only want to use information that a human would have available then you're going to have to program your AI to "sense" the data somehow, and if you want it to be quick it'll have to be dynamic enough to take into account changes which will affect the performance. It will be interesting to see how you achieve this.
I don't know if I made it clear enough in the first post, but creating a super fast and difficult AI is not the goal, although I would like to see how well I can do it. The main point of the exercise is using only data that the player can use. For instance, a player does NOT know the heat of the tire, the size of the tire or any of that information - but just as well, we do know how to sense the limits of the tire by listening to the sound, and by seat-of-the-pants feel, which is lacking in a simulation environment.

Weight distribution doesn't really matter either - if the physics side of things simulates it then great, but all a human is concerned with is the tires being at the limit without going over. You're correct that the AI will need to sense the data somehow - through the reference points, which will be placed in some sort of editor or something.

It is likely I will work on this in a 2D environment, although I have not finalized my plan. I've been thinking about this quite a bit, and yes, it is a big project. No, it will not be easy. And most importantly, it will likely be far from perfect. I have reason to think that some of my techniques will help the AI a bit, but there are still a few lacking ideas. I don't have a 'best' line on the track, and am going to try using only the track-side reference points.

I don't think the AI needs to know all the physics of the simulator to be fast. Actually, I think traditionally developers go this route just because it is proven.

For instance: put yourself in an unknown setup, even in a car you haven't driven in some time, but on a course whose reference points you know. (You generally brake *near* the 50m mark in most cars; in this car you might brake slightly after, or before - but you know the 50m reference point, and that is all that is needed.) You also know that turning the steering wheel left will turn the car left, but by exactly how much you don't know; the setup could be strange there. I think the example makes the point clear: you don't need to know the dynamics of the setup or the car. The only thing the driver is aware of, as far as physics is concerned, is the traction level of each tire.

If the front tires lose grip, try braking more (or less if already hard on the brakes) and use less steering input. Now, as far as getting the AI to find and drive the perfect line, and avoid the other cars when it only knows estimated points, and things like that - it will be interesting. This may work great, or it may be terrible.

"All our dreams can come true, if we have the courage to pursue them." -Walt Disney
#5 - Vain
The difficult thing in AI development isn't acquiring the data, it's making smart use of it. Even when generic AIs are given more knowledge than the human opponent, the human normally still outsmarts the AI's algorithms. The unrealistic availability of data is a form of soft cheating that allows the AI to be more competitive without breaking essential game rules.
Alas, that doesn't get most AIs very far, and to even beat an average player they require some sort of rule-breaking cheat (money, sight, acceleration, whatever the genre requires).

This problem is down to the fact that interesting games are not mathematically trivial. In a good game the best solution should be difficult to find, thus making the game challenging. This directly equates to the AI being bad at it and thus uncompetitive.

If you want to go on with this project I suggest choosing a boring and monotonous (trivial) game. In the racing genre, for example, restrict your vehicle simulation to a one-wheeler on a two-dimensional track. Humans will be under-challenged, but your AI can flex its awesome data-acquisition algorithms.

Vain
Heh, I do know the scale of what I am diving into. While I don't have experience with AI exactly, I do have experience designing code and solving problems with it. I have a fair understanding of what I am asking of myself. Yes, I am skeptical about the project, but more because I don't want to waste time making the environment and physics; I am far more concerned with the world and physics than with the AI challenges. Even a 2D world still needs to be made, same with the physics - it's required. No matter how basic they are, they need to be written before the AI, and that is actually my main concern and the reason I have not started the project.

Like I said, the goal here isn't an AI that you would look at and say "OMGWTFBBQ"; it's more about tackling the situation in a different way. Most games allow the AI to control its car completely, thus adding hacks or cheats (in my opinion).

And Vain, you're right that the easy part is acquiring the data and the hard part is using that data - made especially easy in this case since there is less data. But the idea here is to make an AI using a human-like approach. I sat and envisioned myself driving a few laps, figuring out what is needed and what goes on in our little brains. Besides having FAR more reference points than the AI will get, we actually get very little information. The sounds of the tires represent stages of traction, and even then it's not an exact number. But it's the reference points I am after.

The one problem I am having with the AI - mostly because I haven't looked further into it, nor developed the code to start trying ideas - is how to take said reference points and make use of them. As said in the other posts, I haven't committed to the project quite yet, but I am not afraid of the math and problem solving it presents. Actually, I look at it as a nice challenge to see what comes of it. It could totally fail, but then I'll know I tried, and I'll know why it failed. I just don't feel motivated to work on the world/physics - which are required for the AI to be tested.

I assume the concerns raised here are to make sure I am aware of the challenges, and that you don't want to see someone set themselves up for failure - and I appreciate that concern, thank you. But do understand, I wouldn't look at failure as a bad thing; I would have learned exactly what about the technique does not work. I've actually started writing an article on the problems I see with *most* current AI approaches, and how the system I want to try would address them.
I kinda see what you're trying to do, but what's the point of it? It seems like an exercise that is doomed to fail. How is restricting the AI's input going to result in a more realistic (i.e., more human-like) AI?

Inducing driving errors by only giving the AI vague input will IMHO just result in an AI that can at best inconsistently traverse the track at not even close to competitive speeds, having increasingly worse performance the more abstract you make the calculations and the less you "cheat" by using existing data. So if you want to realistically simulate what a newbie driver is going to do you might have success with that, but if even achieving fast laptimes is going to be a major conceptual problem (that you seem to brush away very easily with the assumption the AI will be able to drive at the physical limit), just how are you going to create a realistic AI?

What is your goal with this? A mental exercise? Exploring AI concepts? Because it's surely not a realistically driving AI. My definition of that would be an AI that you could put in place of a human driver without anyone noticing for a good amount of time. Achieving that is extremely difficult. Artificially limiting your data input is not going to make that easier, more efficient or convincing than other concepts.

Or do you want to write an AI that manages to traverse a track with only the same input a human gets? Okay, wait, maybe I have misunderstood you and that is indeed what you're trying to do (re-reading your OP as I write this seems to indicate this)? If so then ignore half of what I said. I wish you good luck even though I still don't quite understand why you're doing this.


E: I see you posted in the meantime making things a bit more clear
I'm available for hire to sit in a box under a desk with a wheel + pedals and drive while you claim "HERE IS THIS REMARKABLE AI!!!!"

Except that would yield worse results than just jamming the throttle with a stick and letting the force feedback do the steering.

Touché.

Clear Answers
Quote from AndroidXP :What is your goal with this? A mental exercise? Exploring AI concepts? Because it's surely not a realistically driving AI. My definition of that would be an AI that you could put in place of a human driver without anyone noticing for a good amount of time. Achieving that is extremely difficult. Artificially limiting your data input is not going to make that easier, more efficient or convincing than other concepts.

My goal, if pursued, is to make an artificial intelligence circle a track using only the known reference points. That doesn't mean there won't be sub-goals, but I am leaving it at that for now, since my posts seem to keep hiding what I'm saying.

Quote :
Or do you want to write an AI that manages to traverse a track with only the same input a human gets? Okay, wait, maybe I have misunderstood you and that is indeed what you're trying to do (re-reading your OP as I write this seems to indicate this)? If so then ignore half of what I said. I wish you good luck even though I still don't quite understand why you're doing this.

Just to be clear, yes, I want to traverse the track with only the information a human can get, and with only the controls a human has. Now that you know what I am doing, the reason is simply that I want to. I talked with a few friends about what is lacking from games these days - graphics have certainly become the thing far too many people use to choose new games, and it is extremely annoying. The general consensus was AI. With further thought on what I had been told, it hit me what the entire problem with AI in games actually is. However, it also hit me that that problem doesn't exist in racing AI; the problem with racing AI is simple: the human generally feels cheated.
  • Catchup: where the AI goes fast when you're in front and slow when you're behind. Great for arcade, console and pick 'em up racing, but cheesy for a simulator, because real people don't slow down for their competitors...
  • AI Controlled Physics: this is when the AI has different physics than the player's car. The player's input travels through the steering column to the wheels and eventually turns the car via the physics, while the AI just rotates its car in the direction it wants. Sometimes you notice it by hitting them: they don't budge while you fly out of control. Not to be confused with AI Super Control.
  • AI Super Control: the AI can calculate 'exactly' where it wants to place the controls at an 'exact' point in time, and it is capable of doing it. The fastest human can't do this. It takes at least a tenth of a second to shift with an H-shifter, with both hands starting from the wheel, unless you prepare for it. And if you're prepared for shifting, it takes at least a tenth of a second to turn the wheel more than 180 degrees, since that requires the second hand... The AI has no delays.
There are a few other things, and if you have any additional feedback on how AI cheats in racing games, please bring it forth; I would like to read it and perhaps include it in the paper I have written. Also, you may understand my intentions better when I finish the other paper. I will be sure to post it here when finished.

Now of course I will play around with the AI after getting it to traverse a track with the human inputs and outputs. It would be nice to see what kind of performance the AI can actually achieve; I have high hopes for its level of performance.

Quote :It seems like an exercise that is doomed to fail.

Not at all, even if I can't get the AI to traverse, I will have learned something. To me, learning is an invaluable tool that humans are capable of, and you can't learn without being prepared to fail. After all, learning why something didn't work is sometimes more important than doing something correct on the first try and taking it for granted.

Quote :How is restricting the AI's input going to result in a *more realistic* (i.e., more human-like) AI?

Two ways to answer the question.
  1. First, as you and everyone should now know, my goal isn't the most realistic AI possible, though it would be an amazing place to end up...
  2. By using these reference points the AI should be more realistic, because it should be able to drive NEAR the same line each time, but not the EXACT same line each time. *I get that this might not make them fast.* But the point here is that the AI shouldn't post the exact same lap time each lap; it will tend to misjudge where a reference point is now and again. Want harder AI? Just make the reference points more accurate. I think this part of the question will be answered in a lot more detail by my write-up.
I really hope this helps clear up some details. Anyone wanna join the cause of the project? Might wanna ask yourself if you are interested, and answer truthfully. It is new. Why be scared?
(this post is kinda off the main topic of this page)

Sorry, sadly I'm just learning how to code in C# and my skills are near zero. But on the other hand, good luck mate; I hope you'll come up with something unique, or even develop a rally sim, like Scawen and his team did.
Quote from blackbird04217 :
  • Catchup;
  • AI Controlled Physics;

Thankfully these aren't the case with LFS anyway
Quote :
  • AI Super Control;

A symptom of this is for example AI drivers that are hard to ram or spin out, since they'll immediately do the necessary counter steering as you hit them. However, that seems like just a matter of implementing proper AI handicaps and reaction times - hardly anything you need to write a whole new AI concept for.


That said, if you're still in search of a platform, why not use LFS? Of course, maybe it is too complex to deal with on the physics side, but it would already give you everything you need for this project:
  1. Graphics / physics engine
  2. Tracks & cars
  3. InSim & OutGauge
The only thing you'd likely have to invent additionally is a virtual device that can emulate analogue steering, throttle, brake and clutch input. In the worst case you could fall back on the mouse-steering mode in LFS, since manipulating the mouse cursor and emitting button clicks or key presses is easier, in my experience.

Via InSim you can then get the actual position of the car (and whatever other variables you need and deem 'fair' to use) and map this to your internal track markers, using the same coordinate system LFS uses. Together with OutGauge (mainly used for "what does the RPM gauge say?") there's nothing stopping you from writing an add-on AI that can only use limited input.
Quote from AndroidXP :Via InSim you can then get the actual position of the car

Disclaimer: I'm not a programmer or even a tech.

I can't help but think this point alone is a rather large shortcut. Humans do know exactly where they are on the track and what's coming up ahead, I guess due to a) what they see, b) having analysed the track map, and c) experience. If a scripting app like AutoIt v3 can be used to detect pixel colours in real time in a specific window region (and react to it), then mayhap a similar approach could be used, i.e. car position and orientation from visual input. Obviously far more complex than a fishing bot, but the point was that if it's possible to get the visual input, imo it should be used.
Somewhere you have to draw the line between simply knowing things and simulating the complex functioning of a human brain. As a driver I know where I am on the track, without consciously triangulating my position from known reference points all the time.

If you implement it like that, all you get is
1) a lot of wasted CPU time
2) an accurate position of your car
3) ??? some mystical benefit from calculating the position in a roundabout way

All you're going to use that "cheated" position for is to place it on your virtual track marker map, which will, as blackbird described, already consist of track bounds, "turn here" vector zones and whatever else. These markers are also cheating, if you put it like that. Why not build visual recognition of what the track is? Of what a corner is? Of why slowing down and turning the wheel will help you traverse this corner? You can go more and more basic and abstract until you end up with an AI that 100% simulates a human brain. I very much doubt that is the goal of this project.

Anyway, this cheated position inside the cloud of track markers would then only be used to compute which track markers you can see, where they are in relation to your position and how to react to this information. In my opinion this is just an inefficient way to calculate a line through the corner which, if you think it is too perfect, you could alter with some Math.Random() here and there to get the same net outcome - but that's blackbird's decision.
I completely missed the vector zones part. I think this is sufficient; essentially it's the same principle as reference points (or zones, to prevent the super-control effect). Still, during dynamic AI control input, the data by which choices (increase steering/braking/etc.) are made is different from what we use if track nodes are used. Blackbird said the challenge was to use the same data as we use, and like he said, we don't operate by actual mathematical values. Apart from a few fortunate ones, we rely purely on visual and audio data. I couldn't tell what RPM I'm at to within 2k, but I can still tell when I'm over-revving, at the shift point, in the powerband, etc. If the AI doesn't use the same data inputs, then really maybe it is as good as just hard-coding behaviour and adding a Math.Random().

I do know what you're saying about the abstraction. I'm not saying simulate the human brain's neural network at a subatomic level, just that the AI should use the same inputs as we do.
But what is "the same" input? You have to define what level of input you're talking about.

Input:
- I'm oversteering.
or
- The visually perceived angular momentum of the car is greater than anticipated from prior experience given my current input, combined with a weakened/reversed force feedback that is a result of the suspension geometry and aligning torque of the front wheels when the car points in a different direction than the direction of travel, combined with the increasing loudness and pitch of the sound that the tyres generate due to micro-vibrations of the rubber sliding and hopping on the track surface.


The former is the simple input that you, or your consciousness, receives (if at all - when "in the zone" you tend to short-circuit your consciousness anyway and let the completely automatic subconscious do the driving); the latter is roughly the deeper-level input your subconscious uses to figure out the former in a sim environment.

Just to what level of "only human input" do you want to go, and does it make sense or make any difference to go to that level?

If all you want the AI to know is "you're oversteering," then there's no difference whether you come to that conclusion the complicated way, by analysing how the track markers move relative to how they should move, etc., or by simply asking InSim what the car vector is in relation to the movement vector. One way just uses significantly more CPU cycles.
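
For illustration, a sketch of that simpler route: compare the car's heading vector with its movement (velocity) vector and flag a slide when the angle between them gets large. The vectors, the threshold and the names are assumptions; a real implementation would source the two vectors from the sim:

#include <cmath>

struct Vec2 { float x, y; };

// heading must be a unit vector pointing where the car is aimed;
// velocity is the direction and speed of travel.
bool IsSliding(const Vec2& heading, const Vec2& velocity, float thresholdRadians)
{
    const float speed = std::sqrt(velocity.x * velocity.x + velocity.y * velocity.y);
    if (speed < 1.0f)                       // ignore near-standstill noise
        return false;

    float cosAngle = (heading.x * velocity.x + heading.y * velocity.y) / speed;
    if (cosAngle > 1.0f)  cosAngle = 1.0f;  // clamp against rounding error
    if (cosAngle < -1.0f) cosAngle = -1.0f;

    return std::acos(cosAngle) > thresholdRadians;   // e.g. ~0.15 rad of slip (assumed)
}
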
I have a lot of posts in this one - sorry:

Quote from AndroidXP :That said, if you're still in search for a platform, why not use LFS?

Because I don't have the much-needed reference points, and if I made them it would take tons of time for what I am trying to achieve. I believe you can send commands to set an axis position, though I could be wrong. Either way, LFS would be a great platform to use; I wish Scawen would open a few possibilities to me, but it won't happen for security reasons.

Quote from AndroidXP :hardly anything you need to write a whole new AI concept for.

True and false. Remember, the objective is not to write the most outstanding AI ever, but to do it as close to human limitations as possible. Some parts of this will still need to cheat, I think, but I am trying to figure out what and why. Just because something is the norm doesn't make it best. That's what invention is: sometimes the new way actually is better. Not saying this is or isn't - that will be found out.

Quote from NotAnIllusion :Humans do know exactly where they are on the track and what's coming up ahead

Do you really? Go sit on a track and tell me exactly where you are. You know the billboard to your left is at coordinate <X, Y> and that the cone somewhere in front of you is at coordinate <X+23, Y-14> (numbers pulled from nowhere, with no meaning here). The point is, you do not know your exact distance from either of those. Humans do not know where they are, nor do they need to. We can know that we are 10 to 15 meters from that cone, and that this spot is where we begin turning. If the cone is blocked from view by a car ahead, we use the billboard on our left; we know that when it is almost directly to our left is where we start turning. These are very rough estimations.

Quote from NotAnIllusion :what they see

The problem with using a 2D image that LFS or a sim generates is that it is 2D. The algorithm to reverse engineer that entire image would be very complex, slow and umm, a challenge a lot harder than what I wanna achieve...

Quote from AndroidXP :Somewhere you have to draw the line between simply knowing things and simulating the complex functioning of a human brain. As a driver I know where I am on the track, without consciously triangulating my position from known reference points all the time.

As for your position on track, you don't know it exactly, as I mentioned above. I waited to write this here though: more importantly, your exact position is quite useless. Your position compared to these reference points is everything; it is all the brain uses to, as you said, triangulate your position. So yes, you know your position relative to the reference points, but it's not your position that tells you to turn - it's the reference points. This was an exercise that I did, and am writing up in more detail in my paper:

Think of the two corners after the long straight at BL1 (normal direction). Without starting LFS I can envision this perfectly, and I would bet you can too. You know you have a right-hand turn coming up because you remember the bridge and the hill telling you a right comes after the straight. (If you had never played LFS and had no mini-map, you would have no idea what comes next.) That, combined with knowledge of the wide corner approach, tells you to get to the left of the track. You see the small meter signs counting down and start braking hard at the appropriate spot - which again comes from those reference points. Then you see the tires and turn in towards them.

Now imagine the world being completely empty: no track, tires, distance markers or bridge. Drive the corner knowing your exact position, but not knowing anything else. You want to get from position <X,Y> to <X+10, Y+10> but you can't drive straight there. You can't do it - well, with math you can, but it isn't how you drive. Driving is all based on reference points and estimations.

The Math.Random() will be called in the pre-AI step, in the vision sensor where the AI is estimating the distances to each object. I am hoping these estimations will allow the AI to brake at a slightly different spot each time. Have you watched the super 'judgement' of the AI in LFS or 90% of games? They turn in, brake and accelerate at the same spots on the track, never making a mistake (until you enter traffic, which is not being talked about yet, so it's irrelevant here).
About oversteering/understeering: you're 100% right that the detection can be simple and not cost CPU time, unlike approaches using the reference points, which would take time. There are two ideas for doing this. The one I will likely use is tire sounds, from each of the tires. The other idea, less obvious but still a thought, is using four reference points that the AI knows to a good degree of accuracy: previous and current positions, and previous and current directions. That doesn't necessarily mean collecting the position from the physics, though it could; it could also use the two closest reference points (remember, closer reference points are more accurate). Whatever it is, remember them between two frames. That will tell us our direction of travel, and should tell us the direction we are aiming.

However, the more I think about it, the way I will probably choose is the tire traction data from "sounds". The physics will tell the AI sensor each tire's traction state: UNDER LIMIT, NEAR LIMIT, AT LIMIT and OVER LIMIT. If your front tires are over the limit and the rear tires are under/at the limit, you are understeering. If the rear tires are over the limit and the fronts are under/at, you are oversteering. Pretty simple and very fast to compute. More importantly, it follows the logic a human can use - though there may be some more known values here, depending.
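
That logic maps almost directly to code; a small sketch using the four traction stages named above (the enum and function names are illustrative, not from any existing project):

enum Traction { UNDER_LIMIT, NEAR_LIMIT, AT_LIMIT, OVER_LIMIT };

enum Balance { NEUTRAL, UNDERSTEER, OVERSTEER };

Balance JudgeBalance(Traction frontLeft, Traction frontRight,
                     Traction rearLeft,  Traction rearRight)
{
    const bool frontOver = (frontLeft == OVER_LIMIT || frontRight == OVER_LIMIT);
    const bool rearOver  = (rearLeft  == OVER_LIMIT || rearRight  == OVER_LIMIT);

    if (frontOver && !rearOver) return UNDERSTEER;   // front slides, rear holds
    if (rearOver && !frontOver) return OVERSTEER;    // rear slides, front holds
    return NEUTRAL;                                  // both or neither over the limit
}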

This area is likely the only place where the AI needs to ask the physics system what is happening. It is the only place a human really knows the physics of the world too, besides knowing that turning the wheel turns the car. I'm still wondering about that process for the AI, but it will come.
Quote from NotAnIllusion :If the AI doesn't use the same data inputs, then really maybe it is as good as just hard-coding behaviour and adding a math.random()

I almost missed your post; you must have written it while I wrote mine. You're probably correct here, but you can also look at your RPM gauge, so in this example the AI can do that. Of course, you did just add another point to my simulation - thanks - which is that the AI can tell the engine speed to within some margin. I can usually guess accurately to within about 1000 RPM depending on the car, and would bet people with more sensitive hearing can get to 500 RPM. But anyway, you have added a piece of input to the AI for constant updating: RPM +/- 750, unless the AI chooses to observe the gauges, which gives a very accurate RPM reading.
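
As a tiny sketch of that input, using the +/-750 RPM margin mentioned above (the function name and the uniform error model are assumptions):

#include <cstdlib>

// Engine speed "by ear" carries roughly +/-750 RPM of error; glancing at the
// gauge returns the exact value.
float SenseRPM(float actualRPM, bool lookingAtGauge)
{
    if (lookingAtGauge)
        return actualRPM;                                     // gauge is accurate

    const float noise = (std::rand() / (float)RAND_MAX) * 2.0f - 1.0f;
    return actualRPM + 750.0f * noise;                        // estimate by sound
}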

If the car has a shift-light I would say the AI can know immediately that it is on. Even in peripheral vision that is easy to detect.
#21 - w126
Quote from AndroidXP :That said, if you're still in search for a platform, why not use LFS?

That could be interesting. Maybe it would be possible to generate reference points based on SMX and PTH files. OutSim also contains car positions.

Quote from AndroidXP :The only thing you'd likely have to additionally invent is a virtual device that can emulate analogue steering, throttle, brake and clutch input.

PPJoy may be used for that, as described here: http://ppjoy.bossstation.dnsal ... iagrams/Virtual/IOCTL.htm
Quote from blackbird04217 :Because I don't have the much-needed reference points, and if I made them it would take tons of time for what I am trying to achieve. I believe you can send commands to set an axis position, though I could be wrong. Either way, LFS would be a great platform to use; I wish Scawen would open a few possibilities to me, but it won't happen for security reasons.

What exactly are you missing? Again, assuming you use the same coordinate system as LFS:
  1. Drive along the track outline and let your program automatically place track markers (your virtual marker consisting of a single coordinate on your virtual track map) every time you pass a track node. Do this for the outer and inner path of the track and you have your track marker cloud.
  2. Make a simple InSim button interface that lets you spawn brake points, "go there" vector zones, etc. on the current position & direction. You do all this on your virtual track map so you don't need anything from LFS/InSim other than the current coordinate/vector.
Once you have this, you use InSim to get your current position. You don't have to let the AI know this position; just use it to calculate from your track map which reference points the AI is able to see. From there on you can do all your fancy abstract algorithms to figure out the rest. This is all the input your AI needs, as far as I understood (reference points at different places in the virtual field of view of the AI).
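
A rough sketch of that recording step, assuming the program already receives the car's position and the current track-node index each update (via InSim/OutSim or however); the types and the node-change test are illustrative:

#include <vector>

struct Vec2 { float x, y; };

struct TrackMarkerRecorder {
    std::vector<Vec2> markers;   // one recording pass for the outer edge, one for the inner
    int lastNode;

    TrackMarkerRecorder() : lastNode(-1) {}

    // Call once per update with the current car position and track node index.
    void Update(const Vec2& carPosition, int currentNode)
    {
        if (currentNode != lastNode) {
            markers.push_back(carPosition);   // new node crossed: drop a marker here
            lastNode = currentNode;
        }
    }
};
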
Quote from blackbird04217 :Have you watch the super 'judgment' of an AI in LFS or 90% of the games? They turn-in, brake and accelerate at the same spots on the track, not making a mistake - (until you enter traffic which is not being talked about yet so irrelevant here).

Yes, but that's just the missing sensible application of Math.Random(). Once you have an AI that can perfectly follow a line to the best capabilities of the car, there is nothing hindering you from making that line less than perfect.

Using limited input and inaccurate knowledge to make the AI drive less than perfect just seems so... complicated for achieving a simple effect.
#23 - w126
Quote from AndroidXP :
  1. Drive along the track outline and let your program automatically place track markers (your virtual marker consisting of a single coordinate on your virtual track map) every time you pass a track node. Do this for the outer and inner path of the track and you have your track marker cloud.

It seems this is not even necessary. PTH files (http://www.lfs.net/file_lfs.php?name=SMX_PTH_S2Y.zip) contain this information. I'll quote their specification below because I could not find it anywhere apart from the zip file.
Quote :PTH VERSION 0 - Path files for Live for Speed S2
=============


The nodes are given by a fixed point position (X, Y, Z) and a
floating point direction (X, Y, Z)

The node can be considered as a line perpendicular to its direction.

Outer and driving area left and right limits are given.


TYPES :
=======

1) X,Y,Z int : 32-bit fixed point world coordinates (1 metre = 65536)

X and Y are ground coordinates, Z is up.

2) float : 32 bit floating point number


FILE DESCRIPTION :
==================

num unit offset description
--- ---- ------ -----------

HEADER BLOCK :

6 char 0 LFSPTH : do not read file if no match
1 byte 6 version : 0 - do not read file if > 0
1 byte 7 revision : 0 - do not read file if > 0
1 int 8 num nodes : number
1 int 12 finish line : number
......NODE BLOCKS


NODE BLOCK :

1 int 0 centre X : fp
1 int 4 centre Y : fp
1 int 8 centre Z : fp
1 float 12 dir X : float
1 float 16 dir Y : float
1 float 20 dir Z : float
1 float 24 limit left : outer limit
1 float 28 limit right : outer limit
1 float 32 drive left : road limit
1 float 36 drive right : road limit
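
A rough sketch of reading those node blocks in C++, following the layout quoted above; it assumes a little-endian machine, 32-bit int and float, and packed structs, and the struct/function names are my own:

#include <cstdio>
#include <cstring>
#include <vector>

#pragma pack(push, 1)
struct PthHeader {
    char          magic[6];    // "LFSPTH" - do not read the file if no match
    unsigned char version;     // 0
    unsigned char revision;    // 0
    int           numNodes;
    int           finishLine;
};

struct PthNode {
    int   centreX, centreY, centreZ;   // fixed point: 1 metre = 65536
    float dirX, dirY, dirZ;            // floating point direction
    float limitLeft, limitRight;       // outer limits
    float driveLeft, driveRight;       // driving area (road) limits
};
#pragma pack(pop)

// Loads every node block; returns an empty vector on any failure.
std::vector<PthNode> LoadPath(const char* filename)
{
    std::vector<PthNode> nodes;
    FILE* file = std::fopen(filename, "rb");
    if (!file)
        return nodes;

    PthHeader header;
    if (std::fread(&header, sizeof(header), 1, file) == 1 &&
        std::memcmp(header.magic, "LFSPTH", 6) == 0 &&
        header.version == 0 && header.numNodes > 0)
    {
        nodes.resize(header.numNodes);
        if (std::fread(&nodes[0], sizeof(PthNode), nodes.size(), file) != nodes.size())
            nodes.clear();
    }
    std::fclose(file);
    return nodes;
}
// A node centre in metres is centreX / 65536.0f (same for Y and Z).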

Quote from AndroidXP :What exactly are you missing? Again, assuming you use the same coordinate system as LFS:

[Snip]

Using limited input and inaccurate knowledge to make the AI drive less than perfect just seems so... complicated for achieving a simple effect.

Hmm, it seems my objective is still being missed; it will probably be understood when I finish this paper.

On the LFS stuff: I could get the left-side/right-side info with almost as much work as making a quick environment myself, I think. But I would be missing the important reference points - the cones and other trackside things like the billboards. It's hard to say exactly what I mean, but I think these other reference points are more important than the track itself. It's also extremely important to note that I want line of sight on the reference points.

-----

Please keep the responses coming. I am going to try writing my paper as quickly as I can so that you hopefully understand where the project is going and why. The problem isn't a simple effect, and more to the point, I don't have an existing AI system that I am working with. It's not like I already have AI working the way LFS does; if I did and was talking about rewriting everything to test something, that would be absurd - especially late in development. I am, however, proposing a new way of doing racing AI, for better or worse.

How would many of the algorithms known today have been made if someone hadn't tried them first? Man used to make fire from only sticks, so we are told, and we would still be doing it that way if the person who saw sparks from stones hadn't *tried* lighting a fire from those sparks... That is probably the most primitive analogy I can find.
Quote from AndroidXP :
Input:
- I'm oversteering.
or
- The visually perceived angular momentum of the car is greater than anticipated from prior experience given my current input, combined with a weakened/reversed force feedback that is a result of the suspension geometry and aligning torque of the front wheels when the car points in a different direction than the direction of travel, combined with the increasing loudness and pitch of the sound that the tyres generate due to micro-vibrations of the rubber sliding and hopping on the track surface.


The former is the simple input that you, or your consciousness, receives (if at all - when "in the zone" you tend to short-circuit your consciousness anyway and let the completely automatic subconscious do the driving); the latter is roughly the deeper-level input your subconscious uses to figure out the former in a sim environment.

Just to what level of "only human input" do you want to go, and, does it make sense or any difference in going to that level?

Well, I don't know the intricacies of the implementation, so it's hard to say where to draw the "makes sense" line. That said, the latter is more intuitive. In RL it's still your subconscious drawing from several data inputs and making choices (bad wording, it's not exactly what I want to say) according to them. There's far less emphasis on visual data; you can distinguish various car states even with your eyes closed in RL. The aim is a sim-racing AI though, so I guess whether the data inputs are taken from a sim-racing PoV or from an RL PoV applied to a sim is an issue for the designer. Edit: I kind of forgot to consider FF wheels because I don't use one anymore, but yeah, properly set up, that will increase data quality without over-emphasising visual input, which I guess is a biggie ^^

Quote from blackbird04217 :Do you really? Go sit on a track and tell me exactly where you are. You know the billboard to your left is at coordinate <X, Y> and that the cone somewhere in front of you is at coordinate <X+23, Y-14> (numbers pulled from nowhere, with no meaning here). The point is, you do not know your exact distance from either of those. Humans do not know where they are, nor do they need to. We can know that we are 10 to 15 meters from that cone, and that this spot is where we begin turning. If the cone is blocked from view by a car ahead, we use the billboard on our left; we know that when it is almost directly to our left is where we start turning. These are very rough estimations.


The problem with using a 2D image that LFS or a sim generates is that it is 2D. The algorithm to reverse engineer that entire image would be very complex, slow and umm, a challenge a lot harder than what I wanna achieve...

I think my later post cleared these points up a bit. I agree that we don't know exact coordinates and distances, but since we know when we brake too late or too early, it's possible to know where we are with respect to where we should be, in a complex, relative way. I can't say anything more about the visual input thing; I don't know enough about the hows of implementation. If a sim's coordinate system can be used to achieve a practically sound system, then I'm not saying no.

Quote from blackbird04217 :Shift light, RPM gauge

The shift light is a good and realistic way, provided it's a multi-step one. A single light is a problem, because if you merely react to it you're losing time. Imo it's perfectly possible, with sufficient experience, to shift closer to the optimum shift point by prediction rather than by waiting for the light, i.e. you start moving your hand before the light, so that the actual shift happens as close to the light coming on as possible. If you have a multi-stage light system like in F1, it's easier to predict the shift point based on how the lights change.
