Keling: I do understand what you're getting at with the AI learning where to shift, but I'm trying to avoid the complexity of teaching a machine to learn. I also feel a racing driver would actually know the power band of the car, maybe not the exact values, which is why I've given a range rather than a single number. Your second point is easily handled by making sure that shiftUpRPM is always less than limiterRPM.
Degats: I believe you'd be right for a semi-serious driver with a lot of experience in a single car/setup. However, if the car/gearing changes often, I don't think the driver would go out thinking "5500 for first, 5600 for second, 5300 for third..." and so on. I feel they would instead understand that the best torque/power range of the car is between, say, 5200 and 5500 RPM, and attempt to keep the car within that range while accelerating, regardless of setup. Of course, this assumes the next gear will land within some reasonable range of the current one.
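Roughly, the shift rule I have in mind looks something like this. All the names (PowerBand, chooseShift) and the 200 RPM safety margin are made up for the sketch, and the band values are just the example numbers from above:

```cpp
#include <algorithm>

// Sketch of a shift rule that keeps the engine inside a known power band
// instead of memorizing a per-gear shift RPM. All names are placeholders.
struct PowerBand {
    float lowRPM;   // e.g. 5200, bottom of the useful torque/power range
    float highRPM;  // e.g. 5500, top of the useful range
};

// Returns +1 to shift up, -1 to shift down, 0 to hold the current gear.
int chooseShift(float currentRPM, const PowerBand& band, float limiterRPM)
{
    // Keep the upshift trigger safely below the limiter, so shiftUpRPM is
    // always less than limiterRPM (the fix mentioned above).
    float shiftUpRPM = std::min(band.highRPM, limiterRPM - 200.0f);

    if (currentRPM >= shiftUpRPM)  return +1; // past the band, take the next gear
    if (currentRPM <  band.lowRPM) return -1; // fell below the band, drop a gear
    return 0;                                 // inside the band, stay put
}
```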
Gutholz: I have managed to load the car mesh files from LFS and will use these to determine the width/dimensions of the car. It will be the width of the visual model rather than the physical body, but it should be close enough for the driver's needs.
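Getting the width out of the mesh is just a bounding-box pass over the vertices. A minimal sketch, with a made-up Vertex type standing in for whatever the mesh loader actually produces:

```cpp
#include <vector>
#include <limits>

// Hypothetical vertex type; the real LFS mesh loader fills these in.
struct Vertex { float x, y, z; };

// Approximate the car's width as the model's extent along its local
// x (left/right) axis.
float carWidthFromMesh(const std::vector<Vertex>& vertices)
{
    float minX = std::numeric_limits<float>::max();
    float maxX = std::numeric_limits<float>::lowest();
    for (const Vertex& v : vertices) {
        if (v.x < minX) minX = v.x;
        if (v.x > maxX) maxX = v.x;
    }
    return maxX - minX;
}
```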
As far as the reasons for predicting from the visuals instead of using the actual position provided by LFS: Dygear is correct in that I want to make as few LiveForSpeed assumptions as possible and keep the driver generic. Meaning, if I could get the same sort of information from another simulation (perhaps one I make someday), the AI should be ready to drive there reasonably well. I am sure there will still be assumptions and things tweaked to be optimal in the LFS environment, and though this is a valid thing in the back of my mind, it is not the primary reason.
The primary reason I want the AI driver to use a visual sensor to predict where the car will end up is that this is exactly how we would go about it. I haven't given the AI driver its position in the world; we don't know our own position in the LFS world when driving either. We can estimate that we are about 2 meters from the track edge, 20 meters from start/finish, etc., and in what direction those objects lie from our point of view. This way the AI driver thinks closer to how we would think, with the information we actually have.
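To make that concrete: the simulation knows the car's world pose, but all the driver ever receives is a range and a bearing per reference point. A sketch, with my own names and nothing LFS-specific:

```cpp
#include <cmath>

// World-space pose of the car: known to the simulation, NOT given to the driver.
struct Pose { float x, y, heading; };        // heading in radians

// All the driver receives: "how far" and "in what direction from my view".
struct Observation { float range, bearing; };

// Project a world-space reference point into the driver's egocentric frame.
Observation observe(const Pose& car, float pointX, float pointY)
{
    float dx = pointX - car.x;
    float dy = pointY - car.y;
    Observation obs;
    obs.range   = std::sqrt(dx * dx + dy * dy);
    float raw   = std::atan2(dy, dx) - car.heading;
    obs.bearing = std::atan2(std::sin(raw), std::cos(raw)); // wrap to [-pi, pi]
    return obs;
}
```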
Of course, I don't promise it will perform well, or that the computation will be light enough to use in a game, or anything really! But by using this visual sensor to estimate the location of each reference point it sees, I can later go in and tweak the sensor to give faulty information, faulty meaning not perfectly accurate estimations. That should create some degree of natural error without actually programming the AI to make mistakes.
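The degradation step could be as simple as Gaussian noise on each estimate. The noise magnitudes below are made-up placeholders:

```cpp
#include <random>

struct Observation { float range, bearing; }; // same shape as the sketch above

// Perturb a perfect observation with Gaussian noise. Range error grows with
// distance (far objects are harder to judge); magnitudes are placeholders.
Observation degrade(Observation perfect, std::mt19937& rng)
{
    // Small floor keeps the standard deviation positive at zero range.
    std::normal_distribution<float> rangeNoise(0.0f, 0.05f * perfect.range + 0.01f); // ~5% of distance
    std::normal_distribution<float> bearingNoise(0.0f, 0.017f);                      // ~1 degree

    perfect.range   += rangeNoise(rng);
    perfect.bearing += bearingNoise(rng);
    return perfect;
}
```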
For now, though, I'm working with perfect sensors, and will be until the car is driving fairly well.
---------------------------------------------------------------------
Speaking of sensors: I finally have the physics sensor feeling the acceleration and linear velocity correctly. That took a lot of time. Once I am done cleaning up all the failed attempts and broken bits, hopefully without breaking anything that works, I will be able to begin the memory and prediction units for the driver.
The memory unit will store the positions of around 32 reference points. Each update will change the memory, throwing away any 'memories' older than 1 second (or so) and adding the new ones. It will also provide a way to use the remembered reference points to get the average velocity between one memory (A) and another (B). Do this again between (B) and (C) and you have two average velocities, from which you can get the average acceleration, all visually.
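In code, the memory for a single reference point might look roughly like this (all names are placeholders for the sketch):

```cpp
#include <deque>

struct Vec2 { float x, y; };

// One remembered sighting of a reference point: when and where it was seen.
struct Memory { float time; float x, y; };

class PointMemory {
public:
    // Add the newest sighting and forget anything older than ~1 second.
    void update(float now, float x, float y)
    {
        memories.push_back({now, x, y});
        while (!memories.empty() && now - memories.front().time > 1.0f)
            memories.pop_front();
    }

    // Average velocity between two remembered sightings A and B.
    static Vec2 averageVelocity(const Memory& a, const Memory& b)
    {
        float dt = b.time - a.time;
        return { (b.x - a.x) / dt, (b.y - a.y) / dt };
    }

    // Average acceleration from three sightings: velocity over A->B, velocity
    // over B->C, divided by the time between the midpoints of the two spans.
    static Vec2 averageAcceleration(const Memory& a, const Memory& b, const Memory& c)
    {
        Vec2 vAB = averageVelocity(a, b);
        Vec2 vBC = averageVelocity(b, c);
        float dt = 0.5f * (c.time - a.time); // midpoint(B,C) - midpoint(A,B)
        return { (vBC.x - vAB.x) / dt, (vBC.y - vAB.y) / dt };
    }

    std::deque<Memory> memories;
};
```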
Of course, as I said much earlier in the thread, this might prove harder than I make it sound: if the car rotates even slightly, the reference points far away will seem to move a long way while the closer ones won't. I will need to find a way to get rotational values from the view to correct for this.
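One idea I may try, completely untested: a pure yaw change shifts the bearing to every reference point by the same angle, while translation mostly disturbs the bearings of nearby points, so averaging the bearing change over only the distant points should give a usable yaw estimate:

```cpp
#include <cmath>
#include <vector>

struct Sighting { float range, bearing; };

// Untested idea: estimate the car's yaw change between two views by
// averaging the bearing change of far-away points, where translation
// has little effect and rotation dominates.
float estimateYawChange(const std::vector<Sighting>& before,
                        const std::vector<Sighting>& after,
                        float minRange = 50.0f) // only trust distant points
{
    float sum = 0.0f;
    int count = 0;
    for (size_t i = 0; i < before.size() && i < after.size(); ++i) {
        if (before[i].range < minRange) continue;
        float d = after[i].bearing - before[i].bearing;
        d = std::atan2(std::sin(d), std::cos(d)); // wrap to [-pi, pi]
        sum += d;
        ++count;
    }
    // Bearings rotate opposite to the car, so negate the average.
    return count > 0 ? -(sum / count) : 0.0f;
}
```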
Let's see how it works out. Here is a shot of the physics sensor working.
Physics Sensor