blackbird04217
S3 licensed
Quote from Gutholz :
To me the visual sensor approach only makes sense in two scenarios:
1) If the input data is not perfectly accurate. (as you now mentioned)
For example if the driver eyes misjudge the distance to a point but he still has to make the turn.
Of course one could introduce a random factor, but it would be a bit artificial?

It may be artificially added through the Visual Sensor, but it is (or will be) added for an important reason: our vision gives only estimates. It might be 98% accurate, or maybe 90%, but either way it was important to me that the driver works backwards from the sensors (Visual, Physical and Car) to work out where he is.

This will cause the driver to have variation, as a real driver would, by "messing up" his estimations.
blackbird04217
S3 licensed
I should probably make a diagram, but for now try drawing it in your head as you follow along.

AIRS is the world the driver lives in; it contains all the information the driver needs. It has several parts, the most important being the "Scanner", which is how the AIRS world gets information about the outside world.

LFS is obviously the physical simulation for the car the driver controls. The "LFSScanner" creates connections to the LFS InSim, OutSim and OutGauge protocols, and reads layout files, car info, car mesh files and even the provided terrain mesh files to gather all this information and feed it (through the Scanner) to the AIRS world.

The driver then uses his sensors (Visual Sensor, Physical Sensor and Car Sensor) to know what is happening; those sensors are fed data by the Scanner.

Ultimately, if I created my own racing simulator, I could write a Scanner to feed AIRS and the driver would be able to control that too. Now, I know I will make some assumptions based on 'reasonably realistic physics', and I'll probably unknowingly make assumptions based on the LFS physics, so I don't know how well Jared would do just switching simulations like that. In theory, though, it is possible.

--------------------------------------------------------------------------------

The Visual Sensor will take each ReferencePoint and test it against the driver's transform matrix. If it is within the field of view (85 degrees left/right of straight ahead), it will then test whether the ReferencePoint is hidden. If it is still visible, it will bring the direction/distance into driverSpace using the driver's transform matrix. In the future it will also fluctuate the direction/distance based on estimation so that the driver doesn't have 100% accuracy, but I'm skipping that for quite some time. The visible points are stored with information about the direction (in driverSpace) and the distance from the driver's eye to the visible point.

The final step of the Visual Sensor takes these visible points, the ones in driverSpace and possibly not 100% accurate, and transforms them back to perceive the driver's position in world space. This will include the estimation error once I get that far in the project.
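
In rough code, the field-of-view check is something like this (a simplified sketch testing against a cone around the forward direction; Vec3 and the helpers are stand-ins for the actual AIRS types):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float Length(const Vec3& a) { return std::sqrt(Dot(a, a)); }

// True if the reference point lies within 85 degrees of straight ahead.
// driverForward is assumed to be unit length.
bool IsWithinFieldOfView(const Vec3& driverEye, const Vec3& driverForward, const Vec3& referencePoint)
{
    const Vec3 toPoint = Sub(referencePoint, driverEye);
    const float cosAngle = Dot(toPoint, driverForward) / Length(toPoint);
    return cosAngle >= std::cos(85.0f * 3.14159265f / 180.0f);
}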

The MemoryUnit will then track 10 snapshots, with each snapshot containing up to 32 visible reference points, the perceived position, and some other information from the Physical Sensor: directionOfMotion and feltForces.
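
In rough C++-ish terms, a snapshot looks something like this (simplified, not the exact AIRS layout; the field names are stand-ins):

#include <array>
#include <cstddef>

struct Vec3 { float x, y, z; };

struct VisibleReferencePoint
{
    Vec3 directionInDriverSpace;    // unit vector from the driver's eye
    float distance;                 // estimated distance in meters
};

struct MemorySnapshot
{
    std::array<VisibleReferencePoint, 32> visiblePoints;
    std::size_t visiblePointCount = 0;
    Vec3 perceivedPosition;         // from the Visual Sensor
    Vec3 directionOfMotion;         // from the Physical Sensor
    Vec3 feltForces;                // from the Physical Sensor
};

struct MemoryUnit
{
    std::array<MemorySnapshot, 10> snapshots;   // the 10 most recent snapshots
    std::size_t snapshotCount = 0;
};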

The PredictionUnit will use the snapshots in the MemoryUnit to predict the course the car will follow. My line of thought here is that instead of making Jared drive from point to point (like I did in my previous attempt halfway through the thread), he will attempt to stay on track based on the prediction unit. Once the car is moving, the prediction unit will have a vector for where it thinks the car will be in 1 second. If that point falls off the track, the driver will (hopefully) be able to take appropriate action to slow the car down, if needed, and to turn and stay on track.

Does this make sense as to where the project currently stands?

As far as genetic learning goes, it comes up a lot, but I think I will let it sit on the bench: I can't speed up the simulation, and I think it would add more complexity trying to tune it. Can't wait until Jared does his first complete lap.

Anyone have any guesses as to a lap time? (XRG at FE1) I'm going to guess 1:45
blackbird04217
S3 licensed
Yea, that is pretty close to what I derived (for a 2D solution) in post 316, at least as I understood it, except you used only point A. There are technically three coordinate systems: LFS World, AIRS World and AIRS Driver. For most purposes (including this one) the LFS coordinate system can be ignored.

Basically everything you said is exactly correct, except I don't give the driver a compass, which does make it difficult to continue. I did successfully compute the 'compass' direction by using two points, A and B: the angle between these points in driverSpace vs worldSpace is the rotation of the driver. That was the reason I used two points.
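
In code it boils down to something like this (a flat 2D sketch in the x/z plane, simplified from what the Visual Sensor actually does):

#include <cmath>

struct Vec3 { float x, y, z; };

// The driver's rotation about the up axis: how far the driver-space A->B
// vector must rotate to line up with the world-space A->B vector.
float ComputeHeading(const Vec3& worldA, const Vec3& worldB,
                     const Vec3& driverA, const Vec3& driverB)
{
    const float worldAngle = std::atan2(worldB.z - worldA.z, worldB.x - worldA.x);
    const float driverAngle = std::atan2(driverB.z - driverA.z, driverB.x - driverA.x);
    return worldAngle - driverAngle;
}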

Your solution would also work if the driver already knew the compass heading, because then he could undo that rotation. Long story short, the driver is doing this now, but simply using the transform. It holds 75% true to the project, since the driver doesn't get fed his position or rotation and can only calculate his position (he currently knows nothing about rotation) from the visual estimations to each reference point. The other 25% is taken away because it uses the transform directly to do so, which, as I state in the following paragraph, could mostly be computed anyway, minus roll.

I do fully understand matrices, and know how to build one using a forward direction: cross it with an up vector (0, 1, 0) to get the right vector, then cross those to get the actual up vector. However, my head tells me that doing this in this situation allows for more error than I can accept; it removes the roll from the car's transform by assumption. It is possible I am wrong and that error would be insignificant; in any case, the Visual Sensor has a way to perceive the position based on the visual input.
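
For reference, the construction I mean is roughly this (a sketch, assuming a Y-up, right-handed frame with forward along -Z; the cross-product order flips with handedness):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 Normalize(const Vec3& v)
{
    const float length = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / length, v.y / length, v.z / length };
}

struct Basis { Vec3 right, up, forward; };

// Builds an orientation from a forward direction alone; any roll the car
// actually has is lost because the world up (0, 1, 0) is assumed.
Basis BasisFromForward(const Vec3& forward)   // forward assumed unit length
{
    const Vec3 worldUp = { 0.0f, 1.0f, 0.0f };
    const Vec3 right = Normalize(Cross(forward, worldUp));
    const Vec3 up = Cross(right, forward);
    return { right, up, forward };
}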

I actually don't use sin/cos much at all; honestly I use 4x4 matrices and Vector3s far more, as it is simply easier. Post 316 was completely my thought process as I looked through a problem without searching for a solution, just my raw thoughts as I worked through it in two dimensions.

//--------------------------------------------------------------------

I'm currently working on getting the driver to drive back to the track from wherever the car is at a given point in time, at least as well as I can. There are going to be a lot of issues, primarily walls and objects that block the car from driving where the driver wants to go.

The driver, who my girlfriend named Jared, can currently get in the car. The car must be H-shifter only at the moment, using the XRG. LFS is not using auto clutch, so he can stall the car. In the StartFromStopAction, the driver can get the car moving from a stop: if the car is off he will first turn it on, put it in first gear, rev the engine a bit, and fade off the clutch / onto the throttle until moving. The prediction unit requires motion to project where the car might be in the future.

Once the car is moving at 25mph, Jared will panic. An action called PanicStopAction will force him to put both feet in, clutch and brake at 100%, and release all other controls until the car is stopped, at which point he will put the car in neutral.

Both of these actions use other mini actions to help shift, start the car etc.
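
The shape I have in mind is roughly this (a sketch of the idea, not the actual AIRS classes):

#include <cstddef>
#include <memory>
#include <vector>

class DriverAction
{
public:
    virtual ~DriverAction() {}
    virtual void Update(float deltaTime) = 0;   // nudge the controls a little each step
    virtual bool IsComplete() const = 0;        // true once the goal is reached
};

// A bigger action (StartFromStopAction, PanicStopAction) can run its mini
// actions in sequence: TurnOnCarAction, ShiftToGearAction, and so on.
class SequenceAction : public DriverAction
{
public:
    void Update(float deltaTime) override
    {
        if (mCurrentAction < mActions.size())
        {
            mActions[mCurrentAction]->Update(deltaTime);
            if (mActions[mCurrentAction]->IsComplete()) { ++mCurrentAction; }
        }
    }
    bool IsComplete() const override { return mCurrentAction >= mActions.size(); }

    std::vector<std::unique_ptr<DriverAction>> mActions;
    std::size_t mCurrentAction = 0;
};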

The new action will be GetOnTrackAction, which will attempt to pick the point on the racing line nearest the driver, move along the racing line a short distance ahead, and drive toward that point. Once the car is detected as on the track, this action will end. This action is not yet developed.
blackbird04217
S3 licensed
Hello Todd,

How are you?

I do actually know about matrices and vectors. I'll admit to having trouble from time to time dealing with different coordinate systems, but overall I would classify myself as above beginner. The thing about post #316 is that I was attempting to figure out, without the car's transform matrix, where the car was. I solved it on paper for 2D, without much need for resources, simply as a challenge to myself, which is why that post reads a bit lost.

I am not sure it is possible to solve in 3D using this approach, as the error from setting/assuming the up vector to (0, 1, 0) could be far too large.

Therefore I have abandoned that approach. The Visual Sensor will perceive the position of the car using the following steps. (The Visual Sensor has the transform matrix readily available, required for the field-of-vision tests, but the driver never gets that information (the transform).)

- The Visual Sensor will take the points within three sections of the track (current, next and previous from where the racecar is actually located).
- The Visual Sensor will test that the vector from the driver's eye to the reference point is within the field of vision, throwing it away if it is not.
- The Visual Sensor will then test that this vector does not pass through any terrain (and eventually other cars); if it does, the point gets thrown away.
- The Visual Sensor will then take this vector (currently in world space) and transform it into driver space via the racecar transform. It will (eventually) apply some estimation error to the direction and distance. (Currently 100% accurate for simplicity.)
- After all visible reference points have been detected, the Visual Sensor will invert the racecar transform matrix to bring those directions back into world space, and reverse the process to get the perceived position (see the sketch below).

Note: once the estimation error has been injected, the perceived position will no longer be 100% accurate; currently it is, as expected.
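
The last step, in rough code, looks something like this (a sketch only: the averaging is a simplification of what the sensor actually does, and driverToWorldRotation stands in for the rotation part of the racecar transform):

#include <vector>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };   // rotation taking driver-space directions to world space

static Vec3 Rotate(const Mat3& r, const Vec3& v)
{
    return { r.m[0][0] * v.x + r.m[0][1] * v.y + r.m[0][2] * v.z,
             r.m[1][0] * v.x + r.m[1][1] * v.y + r.m[1][2] * v.z,
             r.m[2][0] * v.x + r.m[2][1] * v.y + r.m[2][2] * v.z };
}

struct VisiblePoint
{
    Vec3 worldPosition;            // known world position of the reference point
    Vec3 directionInDriverSpace;   // unit vector, later with estimation error
    float distance;                // estimated distance, later with estimation error
};

// Steps backwards from each reference point along the (world space) direction
// by the estimated distance, then averages. Assumes at least one visible point.
Vec3 PerceivePosition(const Mat3& driverToWorldRotation, const std::vector<VisiblePoint>& points)
{
    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    for (const VisiblePoint& point : points)
    {
        const Vec3 worldDirection = Rotate(driverToWorldRotation, point.directionInDriverSpace);
        sum.x += point.worldPosition.x - worldDirection.x * point.distance;
        sum.y += point.worldPosition.y - worldDirection.y * point.distance;
        sum.z += point.worldPosition.z - worldDirection.z * point.distance;
    }
    const float count = static_cast<float>(points.size());
    return { sum.x / count, sum.y / count, sum.z / count };
}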

I do realize the undertaking this project is, and I have been in the game industry for 5+ years. I love programming and racing, and am just doing this project on the side for fun, to see where it goes.

EDIT: The primary reason for the slow development speed is me dropping the project for other things, then coming back to it for a bit.
blackbird04217
S3 licensed
Hello, and the video I've been promising has finally been completed.

Youtube: Artificial Intelligen ... ations: Driver Prediction

*The driver is not actually moving the car, only predicting the path of the car based on the sensors, which are gathering information from a replay.
blackbird04217
S3 licensed


Okay, thanks for the support guys! I'm going to take the chance to talk a little technical, as I've found a great reason for writing in this thread is to think through the problems. Usually if I find a way to explain something to someone, I can make it happen. In this case I'm a little stuck on a pretty difficult problem that I know must be possible.

*Warning: this post might be a little technical. Hopefully it is explained well, but if you get lost don't worry. I wrote this post while coming up with the algorithm for what I'm trying to do, so it contains a lot of unfinished/unpolished raw thoughts as I worked out the problem.

The artificial driver can see two reference points, A at world(50, 0, 0) and B at world(100, 0, 0).
The driver knows the distance and direction to each of these reference points... *note 1* ...relative to the driver's point of view. A cone directly ahead would be in the direction (0, 0, -1) regardless of the world direction the driver is facing. In this example I am going to place the driver at world(30, 0, 40), looking straight at reference point B.

Therefore the driver knows that point B is in the direction (0, 0, -1) at a distance of Magnitude(100-30, 0-0, 0-40) = 80.6 meters. Point A is a little more difficult, as subtracting the values like that won't give the direction in relation to the driver, which is how I noticed the issue described in note 1. I had simply done driverPosition - conePosition to get the direction/distance, but that direction remained in world space. Doing this above for point B was simple because I defined the driver to be looking straight at that point, knowing straight ahead is (0, 0, -1).

I now need to figure out the direction from the driver to point A, relative to the driver. As a complete and utter guess, based on estimating distances between those points on graph paper, A seems to be just under 40 meters in front of the driver and about 25 meters to the right. I will continue this post with these estimated values, so don't be disappointed if the math doesn't come out exactly to the expected driver position, world(30, 0, 40).

The distance to point A (from my estimation) is about ~47.2 meters. The actual world-space distance (to check my work) is ~44.7 meters, so my estimation is within 3 meters; not bad for graph paper at 10m per square!

///////////////////////////////////// Therefore the driver knows the following:

Reference Point A is 47.2 meters in the direction of driver(0.53, 0.0, -0.85) which would be positioned driver(25, 0, -40)
Reference Point B is 80.6 meters in the direction of driver(0, 0, -1) which would be positioned driver(0, 0, -80.6)

The driver will be given the world space position of each reference point, as he is trying to compute his own world space position.

Reference Point A is world(50, 0, 0)
Reference Point B is world(100, 0, 0)

/////////////////////////////////////


Given only that information, the driver needs to figure out that he is actually at world(30, 0, 40)...

If I take driverB - driverA, that will give me the vector from A to B in driver space: driver(-25, 0, -40.6). He is trying to line that up with the world-space vector from A to B, which is world(50, 0, 0).

I will admit to being slightly lost and currently rambling about numbers and positions that are known, trying to find some mathematical function to get from what the driver knows to where the driver IS.

My line of thought is something along these lines: we have a triangle. In world space we see it as reference point A, to reference point B, to driver: world(50, 0, 0) to world(100, 0, 0) to world(30, 0, 40). The driver also sees a triangle, but it currently exists in driver space as follows: driver(25, 0, -40) to driver(0, 0, -80.6) to driver(0, 0, 0). Since the driver has the vector from reference point A to reference point B in both world and driver space, he should be able to figure out his position in world space. Let's see, trigonometry...

Getting the angle between worldAtoB and driverAtoB is going to be easy and critical. The question then is what to do with that angle, and my brain isn't connecting the dots.

With just a moment's (20 minute) break I figured it out. The angle between worldAtoB and driverAtoB is 122 degrees, which makes perfect sense given my diagram. If we then find the vector that is 122 degrees from the vector (driverB to driver) and subtract it from worldB, the result will be the driver's position in world space. I didn't quite follow what I just said, but...

driverB to driver = (0, 0, 80.6). It is not negative here because the direction is not from the driver to point B in driver space; it is from point B to the driver in driver space.

The vector which is 122 degrees from this is: (0 * cos(122) + 80.6 * sin(122), 0, -0 * sin(122) + 80.6 * cos(122))

cos(122) = -0.530, sin(122) = 0.848

which is: (68.3, 0, -42.7)
This added to worldB (0, 0, 100) should give us... (something failed in my logic...) (68.3, 0, 57.3), which is obviously not where we expect the driver. I suspect something is only slightly wrong with my rotation vector...


I've decided to try using driverA to driver instead of driverB, only because driverB has the special case above where x is 0. The following also only solves the position in 2D space, so I may need to just bite the bullet and multiply the vector from driverA to driver by the driver's local coordinate matrix (grabbed directly from LFS, and something I was trying to avoid, but I currently see no way to avoid it and still accurately compute this while including the dimension of height). Ignoring height...

driverA to driver = (-25, 0, -40) (Again negative of driverA since it points back to the driver)
(-25 * cos(122) + -40 * sin(122), 0, -(-25) * sin(122) + -40 * cos(122))
= (13.25 + -33.92, 0, 21.2 + 21.2) = (-20.67, 0, 42.4)

(-20.67, 0, 42.4) + worldA(50, 0, 0) = (29.33, 0, 42.4), which is within expectations given my estimation above of the driverToA vector in driver space...

But now I'm a little confused, because that didn't work for driverB to driver?? I believe it has something to do with the annoyance of -Z being the forward direction. Negating the Z in driverA to driver worked flawlessly, and if I instead take driver to driverB, run it through and then add it, I get (-68.3, 0, 42.7); adding that to worldB(100, 0, 0) gives (31.8, 0, 42.7), which is again within the acceptable range of error due to my estimations. I am a little unsure why the vector (driverA to driver) works but the vector (driverB to driver) needs to be negated; I can only assume I didn't negate a Z when I should have.
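
For anyone who wants to play with the numbers, here is a minimal, self-contained 2D sketch of the recovery I'm after. It assumes a right-handed, Y-up frame where the driver looks down -Z, so the X sign of the driver-space estimates may come out mirrored compared with my graph paper; with exact (rather than estimated) inputs under that convention it recovers world(30, 40) in the x/z plane.

#include <cmath>
#include <cstdio>

struct Vec2 { float x, z; };

// Recovers the driver's world position from two reference points known in
// world space (wA, wB) and observed in driver space (dA, dB).
Vec2 TriangulateDriverPosition(Vec2 wA, Vec2 wB, Vec2 dA, Vec2 dB)
{
    // The rotation that takes the driver-space A->B vector onto the
    // world-space A->B vector; this is, in effect, the driver's heading.
    const float worldAngle = std::atan2(wB.z - wA.z, wB.x - wA.x);
    const float driverAngle = std::atan2(dB.z - dA.z, dB.x - dA.x);
    const float theta = worldAngle - driverAngle;

    const float c = std::cos(theta);
    const float s = std::sin(theta);

    // Rotate the driver-space observation of A into world orientation, then
    // step backwards from A's known world position.
    const Vec2 rotatedA = { dA.x * c - dA.z * s, dA.x * s + dA.z * c };
    return { wA.x - rotatedA.x, wA.z - rotatedA.z };
}

int main()
{
    const Vec2 worldA = { 50.0f, 0.0f };
    const Vec2 worldB = { 100.0f, 0.0f };
    // Exact driver-space positions for a driver at world(30, 40) looking
    // straight at B, under the stated convention.
    const Vec2 driverA = { -24.8f, -37.2f };
    const Vec2 driverB = { 0.0f, -80.6f };

    const Vec2 position = TriangulateDriverPosition(worldA, worldB, driverA, driverB);
    std::printf("Perceived driver position: (%.1f, %.1f)\n", position.x, position.z);   // ~(30.0, 40.0)
    return 0;
}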

I may come back to solving this, but as said in the notes I may just leave it in world space, still with estimated world-space directions and distances, as it would be a severe pain to compute in the third dimension... Ultimately I'd like the driver to have all information from within driver space, so I may give it an attempt at least.


Note 1: This is where I learned that although the driver currently only knows the distance and direction, he knows the direction in world space. That means finding his world position is a matter of taking the reference point position and moving backwards along that direction by the estimated distance. Initially I planned that the driver would only have knowledge of the direction in driver space.

Meaning, if the driver saw a cone 50 meters in front of him, regardless of the direction he is facing, the direction would be (0, 0, -1) with a distance of 50m. And if he saw a cone 25 meters to the right and 25 meters forward, he would know the direction as (~0.707, 0, -0.707) with a distance of ~35m.

I'm not sure this matters in the big picture of things, but I am going to attempt to figure it out; if it breaks the prediction unit, I may leave it as it is. Back to how I would compute it if the direction were in the right space...


Note 2: I did manage to get these into driver space and, as I predicted, this breaks the prediction unit completely. Worse than I expected: it was obvious to me that the prediction unit would then visualize the car's movement always along (nearly) the world Z, since the visualization is rendered in world space, but I didn't expect it to break the reported speed. I may change this up a little so the prediction unit runs using the 'estimated world space positions' that the visual sensor uses, which means storing that estimated position in the driver's memory unit. That is probably an overall optimization anyway, because it will be a bit computational...

Note 3: Sorry this one got really technical. Maybe someone sees the mistake I made with driverB to driver, but this post was written as I figured out, by hand and thought, how to compute the driver's world position given only driver-space information and two world points.
blackbird04217
S3 licensed
I worry too much sometimes.

I do completely understand that I am dealing with something very technical, and I am making my best attempt at explaining it so it is understandable to a not-so-technical mind. There are times, as anyone following along has witnessed, where the technical bits get to be more than I can handle, and usually I stumble along and it becomes clear again.

That said, I try to answer any questions and enjoy doing so as it makes me think about the process, and sometimes simply explaining something differently can spark new ideas or improvements.
Can you hear me now?
blackbird04217
S3 licensed
Is anyone still reading? I will continue writing regardless, if only for my own benefit of tracking the progress, but knowing others are reading will make me feel less awful about making new post after new post as the only one talking.
blackbird04217
S3 licensed
I have finally started working on the driver. Currently there isn't much driving happening; however, the driver will "enter a car". Meaning, as soon as AIRS connects to LiveForSpeed (it currently assumes there is a player car to use), the driver will be placed into an action, "EnterCarAction". This action will stop the car if it was already moving, which it almost always won't be. The artificial driver will then put the car into neutral, hold the brake at 25%, and let all other controls be at rest.

I don't have time this weekend to get the driver doing more than this, and even then I still want to design the action system a little better. I am hoping to allow the driver to multi-task with small, reusable actions. The current actions available are: "EnterCarAction", "ShiftToGearAction", "TurnOnCarAction".

Taking a break now to work on the video I've promised regarding each of the sensors and the prediction unit.
blackbird04217
S3 licensed
I've finally gathered all the information I need from LFS (cars, track environment, and current car/driver state) for all the sensors (as currently defined) for the artificial driver.

Currently the driver can detect/retrieve this information:
  • Distance and direction to reference points that are visible from the driver's view. (This is estimated from the car position; a better estimate could probably be made using the LFS camera position packet and forcing cockpit view, which I may do in the future.)
  • Forces of motion felt by the driver, relative to the driver.
  • Direction the car is physically pointing and moving.
I've gathered much more information than this, including car dimensions, torqueRPM, powerRPM, idleRPM, limiterRPM, numberOfForwardGears, speedometer reading, tachometer reading, etc. However, the artificial driver does not yet have access to any of that data.

-------------------------------------------

I have just started hooking up a virtual controller using PPJoy. I suspect this will take the rest of this weekend to get working and calibrated, and maybe next weekend I can start working on the actual driver. I still have not allowed the driver to estimate his position in world space yet, but I suspect that is the direction I will take, as I can't find any way it goes against the ideas behind the project, and doing it in any other way would be extremely computationally heavy for the same sort of result.

I should make a video of the current state of the sensors and prediction unit; it is pretty impressive how it uses the Visual and Physics sensors to predict the car's slip angle and path of motion for the next second.

-------------------------------------------

I had some trouble setting up PPJoy, and it was a massive pain to get it working the first time long ago. While looking through what I needed to do to get it working again, I found vJoy. Much better. Still not documented well, but I now have the ability to set the steering, throttle, brake and clutch input through vJoy to LiveForSpeed. Next up is a few essential buttons, ignition being the primary one. I might actually get to start on the driver today after all!

The buttons took longer than expected, but the virtual controller is now hooked up to shiftUp/shiftDown, ignition, and the H-shifter. Car reset might get on the list as well.
blackbird04217
S3 licensed
I don't know much about Fanaleds, but I would assume you want LFS to send OutGauge packets rather than InSim packets. In the LFS cfg.txt there are some options for OutGauge.
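
From memory, the relevant block in cfg.txt looks something like this (double check the option names against the LFS documentation, and use whatever IP/port Fanaleds expects):

OutGauge Mode 1
OutGauge Delay 1
OutGauge IP 127.0.0.1
OutGauge Port 30000
OutGauge ID 0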
blackbird04217
S3 licensed
I am sort of battling with how I want the artificial driver to 'see' the racing line. On one hand, I've been arguing to make him see it based completely on triangulating his position from what the visual sensor gives him. On the other hand, while that is the more 'human thought process' approach (something I've been aiming at), it also adds some extreme complexity and processing effort to get that information.

I have been considering GIVING the artificial driver the entire racing line in world coordinates, something I earlier argued against, probably more heavily than I should have. But this does not mean I will GIVE the artificial driver his position in world coordinates; that will still come via the visual sensor. So basically, instead of finding some complex way of deriving the racing line without world coordinates, I think I will allow the artificial driver to know the world coordinates of all the reference points.

By knowing the world coordinates of each reference point, the driver can then estimate his position in the world using the directions and distances provided by those reference points within view. This will still be an estimation, and with a sloppy visual sensor it should produce sloppy results. This should still achieve the effect I am going for, although it does so a little differently than my initial thought. Again, I am not giving the artificial driver his exact position in the world; I'm making him guess based on what he sees.

I think after that the only thing remaining is to put the driver into the car and start making him drive, hopefully intelligently!
blackbird04217
S3 licensed
I suggest you pick up a copy of Visual C# Express, or some other development tools, and start using the wonderful Google to learn how to program with your chosen tools. After a considerable amount of effort has been put into learning a new, useful and amazing skill, you will be able to create your own cruise program meeting all conditions exactly how you want.

What goes into programming even a simple application is vastly underestimated. Even great programmers constantly underestimate the effort it takes to create a simple project.
blackbird04217
S3 licensed
No, it isn't actually learning anything. I'm going to avoid the complexity of AI learning when I can help it, just so I don't need to keep tweaking and hoping. So unfortunately nothing else would happen by using the WR replay.

Actually there is no concept of a driver in the AIRS project at this moment. I have the sensors, and a memory unit that the driver will use to make his decisions, but no driver at this time.

I'm currently trying to decide whether I should make the next video explanation of the project, or attempt to use the physics sensor to calculate the current, and predict future, grip/slip angles: oversteer/understeer situations. I would really like to use the current slip angle to determine (guess) the possible cause of an oversteer/understeer situation; however, since I have no steering input available, I am not sure it is possible to guess the cause.

Understeer: Input overload (Steering too hard)
Understeer: Weight transfer (Braking too hard)
Oversteer: Input overload (Too much throttle, Turning too hard)
Oversteer: Weight transfer (Too much braking / Too little throttle)

There are probably more driver-caused reasons for these situations that I am not thinking of at the moment, but maybe I can eliminate all but steering, and blame steering if all others fail. (For example: if the car is shown in an oversteer state, and the throttle is not pressed (much), and the brake is not pressed (much), then the oversteer must be caused by turning too hard/sharply.)

Not sure how to determine under or over steer without knowing the steering angle though!

//-------------------------------------------------------- A little later

Although I don't yet know how to use this to estimate the current oversteer/understeer (it may not be possible without knowing the steering input), I do have the prediction unit running an analysis of the slip angle to predict whether it will get worse or better, given the slip angles of the previous few memory states. By slip angle, I mean the angle between the direction the car points and the direction it is actually moving. So if the car were reversing, the angle would be 180*.
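
In rough code, the measurement is just the angle between two vectors (a sketch; Vec3 and the helpers are placeholders, and the 15 degree threshold is the value that still needs adjusting):

#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float Length(const Vec3& a) { return std::sqrt(Dot(a, a)); }

// Angle, in degrees, between where the car points and where it is moving:
// 0 when rolling straight ahead, 180 when reversing.
float SlipAngleDegrees(const Vec3& carForward, const Vec3& velocity)
{
    const float denominator = Length(carForward) * Length(velocity);
    if (denominator < 0.0001f) { return 0.0f; }   // not moving; call it no slip
    float cosAngle = Dot(carForward, velocity) / denominator;
    if (cosAngle > 1.0f) { cosAngle = 1.0f; }
    if (cosAngle < -1.0f) { cosAngle = -1.0f; }
    return std::acos(cosAngle) * 180.0f / 3.14159265f;
}

// const bool showRed = SlipAngleDegrees(forward, velocity) >= 15.0f;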

Prediction with slip angle

Red means a slip angle of near or over 15* (which will need adjustments)
Green means a no slip, or a slip angle nearing 0*
blackbird04217
S3 licensed
Dygear; It certainly would be useful (and should be shown visually) for the path to get less certain as the prediction runs further into the future. That shouldn't be too hard to add, so I'm going to give it a whirl.

EDIT: Added screenshots.

Prediction Path with Uncertainty 1
Prediction Path with Uncertainty 2
Prediction Path with Uncertainty 3

The AI driver is not yet driving. I'm still using a replay of me driving a few laps around FE1, looping over and over, to pull all the information from InSim, OutSim and OutGauge. I have not yet made the virtual controller interface (again) and have not attempted to make a driver get into a car. I am delaying this for as long as I can so that the information gathered is useful, somewhat accurate, and ready to be used by the driver.
Last edited by blackbird04217, . Reason : Added screenshots.
blackbird04217
S3 licensed
Another step of awesomeness. It works like a charm; it's a little hard to see in the first image, but the second has a digital speedometer and matches perfectly. This estimation comes only from the current and previous memory, so when I add more it should, in theory, be more accurate; and/or maybe less accurate under acceleration.

Visual Speed Detection 1
Visual Speed Detection 2

Well, that gave me a great idea of the current speed of the car, and I can use it to get the current acceleration of the car, and both of these values are reasonably accurate in a straight line. This makes it easy to predict the path of the car in a straight line based on the current speed and acceleration. However, race tracks have corners, and I'd like the prediction unit to predict reasonably well where the car will be given its current state. Meaning, if the car is currently making a turn, it will predict the car's position if it continues turning at that rate. I definitely have my work cut out for me here!

//------------------------------------------------ Later

Prediction 1

This shows the line the car will continue to follow if it keeps its current velocity. Since I haven't been able to estimate rotation/turning from the visuals, this won't work very well in a turn, which is why the image is taken along the straight!

A little later... And now the prediction is based on the average change in distance to the visible cones (velocity), as well as the change in that change (acceleration). This creates a fair prediction/estimation of turns. Not perfect, but I think it's a pretty damn good prediction using only the visual sensor. It may be worth trying to create one based on the physics sensor, or maybe combining the feel (acceleration) of the physics sensor with the velocity from the visual sensor.

Path Prediction 2
Path Prediction 3
Path Prediction 4

The driver is predicting the path ONLY using the visual information (direction and distance) to each cone in the field of view.

Youtube: Path Prediction in Action

Please, enjoy.
Last edited by blackbird04217, . Reason : added more content/updates
blackbird04217
S3 licensed
Keling: I do understand what you're getting at with the AI learning where to shift; however, I'm attempting to avoid the complexity of teaching a machine to learn. I also feel a racing driver would actually know the power band of the car, maybe not the exact values, but then, that is why I've given the range. For your second point, that is easily fixed by making sure the shiftUpRPM is less than the limiterRPM.

Degats; I believe you'd be right for a semi-serious driver with a lot of experience in a single car/setup. However, if the car/gearing changes often, I don't think the driver would go out thinking "5500 for first, 5600 for second, 5300 for third..." I do feel they would understand that the best torque/power range of the car is between 5200 and 5500, and attempt to keep the car within that range while accelerating, regardless of setup. Of course, there is some assumption that the next gear will be within a reasonable range of the current one.

Gutholz; I have managed to load the car mesh files from LFS and will use these to determine the width/dimensions of the car. It will be the width of the model, but it should be close enough for the driver's needs.

As for the reasons behind predicting from the visuals rather than using the actual position provided by LFS, Dygear is correct: I want to make as few LiveForSpeed assumptions as possible and keep the driver generic. Meaning, if I could get the same kind of information from another simulation (perhaps one I make someday), the AI should be ready to drive there reasonably well. That said, I am sure there will be assumptions and things tweaked to be optimal in the LFS environment. And though this is a valid thing in the back of my mind, it is not the primary reason.

The primary reason I want the AI driver to use a visual sensor to predict where the car will end up is that this is exactly how we would go about it. I haven't given the AI driver his position in the world, and we don't know our position in the LFS world when driving. We can estimate that we are about 2 meters from the track edge, 20 meters from start/finish, and so on, and in what direction those objects are from our point of view. This is how I'm trying to make the AI driver think closer to how we would think, with the information we have.

Of course, I don't promise it will perform well, or that the computation will be light enough to use in a game, or anything really! But by using this visual sensor to estimate the location of each reference point seen, I can later go in and tweak the visual sensor to give faulty information; faulty meaning imperfect estimations. Doing this should create some degree of error without actually programming the AI to make mistakes.

Though, I'm working with perfect sensors for now, and will be until the car is driving fairly well.

---------------------------------------------------------------------

Speaking of which, I finally have the physics sensor feeling the acceleration and linear velocity correctly. That took a lot of time. Once I am done cleaning up all the failed attempts and broken bits, hopefully without breaking it, I will be able to begin the memory and prediction units for the driver.

The memory unit will store the positions of around 32 reference points. Each update will change the memory, throwing away any 'memories' older than 1 second (or so) and adding the new ones. It will have a way to use the remembered reference points to get the average velocity between one memory (A) and another (B). If you do this again and get the average velocity between (B) and (C), you can then use those two average velocities to get the average acceleration (visually).

Of course, as I said much earlier in the thread, this might prove harder than I make it sound, as the car could spin slightly and the reference points far away will seem to move a long way while the closer ones won't. I will need to find a way to get rotational values from the view to correct for this.
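
As a sketch of what I mean (simplified: it assumes both memories saw the same reference points in the same order, and ignores the rotation problem above):

#include <cstddef>
#include <vector>

struct RememberedPoint
{
    int referenceId;    // which reference point this is
    float distance;     // distance to it when the memory was taken
};

struct Memory
{
    float timeStamp;    // seconds
    std::vector<RememberedPoint> points;
};

// Average speed between two memories, taken from how much the distances to
// the shared reference points changed over the elapsed time.
float AverageSpeed(const Memory& older, const Memory& newer)
{
    float totalChange = 0.0f;
    const std::size_t count = older.points.size();
    for (std::size_t i = 0; i < count; ++i)
    {
        totalChange += older.points[i].distance - newer.points[i].distance;
    }
    const float deltaTime = newer.timeStamp - older.timeStamp;
    return (totalChange / static_cast<float>(count)) / deltaTime;
}

// With three memories a, b, c the (visual) acceleration is then roughly:
// (AverageSpeed(b, c) - AverageSpeed(a, b)) / ((c.timeStamp - a.timeStamp) * 0.5f)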

Let's see how it works out. Here is a shot of the physics sensor working.

Physics Sensor
blackbird04217
S3 licensed
Okay, FINALLY I have velocity and acceleration in both (AIRS) world and car spaces. I couldn't convert heading/pitch/roll the way I was trying to above, or at least I got super confused trying; instead I just created rotation matrices and multiplied them in the order from Scawen's quote, and finally things started working as expected.
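
For anyone following along at home, the matrix build amounts to something like this (a sketch, assuming heading rotates about Z, pitch about X and roll about Y in LFS's Z-up frame; check the InSim docs before trusting my axes):

#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };   // row-major; columns are the car's axes in world space

// Applies roll, then pitch, then heading: M = Rz(heading) * Rx(pitch) * Ry(roll).
Mat3 CarOrientation(float heading, float pitch, float roll)
{
    const float ch = std::cos(heading), sh = std::sin(heading);
    const float cp = std::cos(pitch),   sp = std::sin(pitch);
    const float cr = std::cos(roll),    sr = std::sin(roll);

    Mat3 o;
    o.m[0][0] = ch * cr - sh * sp * sr;  o.m[0][1] = -sh * cp;  o.m[0][2] = ch * sr + sh * sp * cr;
    o.m[1][0] = sh * cr + ch * sp * sr;  o.m[1][1] =  ch * cp;  o.m[1][2] = sh * sr - ch * sp * cr;
    o.m[2][0] = -cp * sr;                o.m[2][1] =  sp;       o.m[2][2] =  cp * cr;
    return o;
}

// World to car space: dot each car axis (a column of the orientation) with
// the world-space velocity or acceleration, as in Scawen's description.
Vec3 WorldToCar(const Mat3& o, const Vec3& w)
{
    return { o.m[0][0] * w.x + o.m[1][0] * w.y + o.m[2][0] * w.z,
             o.m[0][1] * w.x + o.m[1][1] * w.y + o.m[2][1] * w.z,
             o.m[0][2] * w.x + o.m[1][2] * w.y + o.m[2][2] * w.z };
}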

See here

The light red/blue lines are the world right and forward directions; the dark red and blue lines are the car's right and forward directions.

The light green line is the world acceleration and the cyan circle is the car's local acceleration; the driver will feel the opposite of this, since this is the actual acceleration value.

The light pinkish line is the world velocity, and the yellow circle is the cars local velocity.

I'm not entirely sure about the local/world nature of AngVel, but I believe the value I am using (the pink line on the physics sensor at the top-left of the screen) is working to some degree as I expect. If this value turns out to be important for the AI driver, I will come back and make sure I fully understand it.

Now to clean up all that mess.
blackbird04217
S3 licensed
Quote from Victor :something quoted from scawen I had lying around:
Quote :heading, pitch and roll are in radians.
acceleration is in metres per second squared.
velocity is in metres per second.
velocity and acceleration are indeed in the "world" coordinate system.

to work out a car's orientation matrix, the calculation order is : roll,
pitch, heading

to work out the acceleration or velocity in the car's coordinate system, a
programmer should create horizontal (x) forward (y) and up (z) vectors
(using the heading, pitch and roll) and take the dot product (scalar
product) of these with the acceleration or velocity vectors supplied in the
outsim packet.


amp88 found this for me and it has been extremely useful. I think I'm very close to a solution; my only problem now is the difference between the coordinate systems of LFS and AIRS, which I should be able to handle. Heh.
blackbird04217
S3 licensed
Okay, I'm having some trouble again with this exact packet. I'm not exactly sure anymore, and I feel it is a little less documented than it could/should be, especially if people are using it for motion sensors.

Is OutSimPack.Accel in world coordinate space, or local to the car? Originally I had evidence suggesting it was in world space, and thus I made the above computations to translate it to car space based on the heading value. However, I am now finding evidence that the initial assessment may have been wrong and that OutSimPack.Accel is already in coordinate space local to the car...

Can anyone who has dealt with this packet confirm this, or let me know any other information about this packet that is known?
blackbird04217
S3 licensed
I am going to try to avoid opening the setup files or gathering every piece of information if I can help it, although that would allow me to compute the exact values, which I could then blur a little as mentioned.

I'm actually wondering if I can do something with the 4 values I already have, which should probably be enough: IdleRPM, LimiterRPM, TorqueRPM and PowerRPM. I figure the driver will want to keep the RPM as close to the TorqueRPM/PowerRPM range as possible, likely erring on the side of staying below the range rather than over it.

I'm thinking something like the following:


// percentage will be < 0 below torqueRPM, 0 at torqueRPM, 1 at powerRPM and > 1 over powerRPM.
const float percentage = (currentRPM - torqueRPM) / (powerRPM - torqueRPM);
if (percentage > 1.2f) { shiftUp(); }
else if (percentage < -1.8f) { shiftDown(); }

With a little more logic to prevent shifting from first to neutral when starting out, and maybe some tweaking of the 1.2 / -1.8 values, this should have the driver attempt to keep the engine in the power/torque range.
blackbird04217
S3 licensed
Ooh, that might explain it then. It feels a bit strange to give the heading in two completely different ways, but I will give this a shot this weekend when I have more time to work on the project, and I will certainly post an update, as it will probably help.
blackbird04217
S3 licensed
Well, I have loaded the car mesh from the LiveForSpeed CMX file. Interestingly, the car seems to be missing its rims? Already it is an improvement over my old mesh, which had no suspension or tires... and I can use this to compute the bounding area of the car, and will use it for the visuals from now on.
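
Getting the width out of the mesh should be as simple as a bounding box over the vertices, something like this (assuming the loader hands back a vertex list and that local X is the car's sideways axis):

#include <vector>

struct Vec3 { float x, y, z; };

// Width of the car: the extent of the mesh along its local X axis.
// Assumes the vertex list is not empty.
float CarWidthFromMesh(const std::vector<Vec3>& vertices)
{
    float minX = vertices[0].x;
    float maxX = vertices[0].x;
    for (const Vec3& vertex : vertices)
    {
        if (vertex.x < minX) { minX = vertex.x; }
        if (vertex.x > maxX) { maxX = vertex.x; }
    }
    return maxX - minX;
}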

I probably will not skin it, as I have not created a texture loader, but it still looks neat: see the FOX here
blackbird04217
S3 licensed
I did see the shift light, and I thought about using that, except I felt a player wouldn't wait until the shift light is on. I want the driver to be able to prepare for / anticipate the shift based on the current RPM and the pattern of the RPM over the last few moments.

That said, the next part of your answer does get the calculated optimum shift point for whatever the setup is, and since I don't want to give the driver an unfair advantage, I will probably keep looking. I guess what I can do is just input that I usually upshift at 7200 in the LX6, and downshift at some other value.

Do any aliens (or anyone, really) want to weigh in here and give me a reasonable shiftDownRPM and shiftUpRPM for each car? Please?

I hope to have the car widths calculated, probably via the CMX file, by the weekend, and then I will start working on the MemoryUnit and PredictionUnit for the driver's brain.
blackbird04217
S3 licensed
Dygear; Yea, I was thinking of doing exactly that for the limiter values. Do you by chance have these values available already? If not, I'll do it a little later and post them for all to use.

I was surprised to see that the wheelbase is available in the car info, but no width. I wonder if Scawen could bump the version and add that information quickly. :-) Or how I would even go about requesting that; I'm pretty sure Improvement Ideas is ignored.

Edit:

Idle to limiter RPM (note: these values have been truncated to the lowest whole number, i.e. the floating point part dropped):
UF1: 969 to 6983
XFG: 950 to 7978
XRG: 951 to 6980
LX4: 934 to 8974
LX6: 934 to 8975
RB4: 958 to 7480
FXO: 955 to 7482
XRT: 958 to 7480
RAC: 964 to 6985
FZ5: 951 to 7971
UFR: 1430 to 8978
XFR: 1424 to 7979
FXR: 1408 to 7492
XRR: 1408 to 7492
FZR: 1425 to 8474
MRT: 966 to 12917
FBM: 1879 to 9179
FOX: 1867 to 7481
FO8: 1852 to 9476
BF1: 3452 to 19912
Last edited by blackbird04217, . Reason : Added information about RPM values of cars.