I'm not so sure that I am actually prototyping; I'm certainly trying something new, but the prototyping happened much earlier in this thread, when I proved that I could get all the data I needed from LFS, and that I could control a car, to some degree, in LFS using a Virtual Controller.
The geometric inverse is not what will introduce the error at all; if it did, it would be a bad inverse. The only exception is error from the floating point computations.
It is already done this way, and I'm not going to remove it simply to move the world coordinate for estimation. Actually, as it stands at this moment, I want the AI to have 100% accuracy with his Visual Sensor when perceiving his world space position, to eliminate any problems. This is exactly how it works right now, with a note of where error will be added in the future.
coneA, coneB, coneC, coneD ... coneN all have positions in worldSpace.
The Visual Sensor knows the worldSpace position and orientation of the car/driver.
The Visual Sensor tests each cone for field of view; some cones will fail here and not continue.
Then the cone is tested for visibility; cones hidden behind terrain will fail here and not continue.
Finally, the Visual Sensor takes the cone and brings it into driverSpace, computing the direction and distance to the cone from the driver's eye.
There will be an estimation step here, which doesn't exist yet. The distance and direction in driverSpace are stored.
Only once all cones have gone through that process will the Visual Sensor perceive the world space position, by applying the inverse steps to each distance and direction and averaging the results together to get the perceived position in worldSpace. It is done this way so that once the estimation randomness is added, the position is not simply given.
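To make those steps concrete, here is a minimal sketch of how I read that pipeline. The Vector3/Matrix3 helpers and the test functions (isInFieldOfView, isOccludedByTerrain) are made-up placeholder names, not the actual AIRS code or signatures, just illustration. Note that with no estimation error applied, every candidate collapses back to the exact eye position, which is the 100% accuracy I mentioned above.

#include <cmath>
#include <cstddef>
#include <vector>

// Minimal sketch types; not the real AIRS math library.
struct Vector3 { float x = 0, y = 0, z = 0; };
struct Matrix3 { float m[3][3]; };   // rotation from driverSpace to worldSpace

Vector3 operator+(const Vector3& a, const Vector3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vector3 operator-(const Vector3& a, const Vector3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vector3 operator*(const Vector3& a, float s)          { return { a.x * s, a.y * s, a.z * s }; }

// Rotate a driverSpace vector into worldSpace.
Vector3 rotate(const Matrix3& r, const Vector3& v)
{
    return { r.m[0][0] * v.x + r.m[0][1] * v.y + r.m[0][2] * v.z,
             r.m[1][0] * v.x + r.m[1][1] * v.y + r.m[1][2] * v.z,
             r.m[2][0] * v.x + r.m[2][1] * v.y + r.m[2][2] * v.z };
}

// Rotate a worldSpace vector into driverSpace (the inverse of a rotation is its transpose).
Vector3 rotateInverse(const Matrix3& r, const Vector3& v)
{
    return { r.m[0][0] * v.x + r.m[1][0] * v.y + r.m[2][0] * v.z,
             r.m[0][1] * v.x + r.m[1][1] * v.y + r.m[2][1] * v.z,
             r.m[0][2] * v.x + r.m[1][2] * v.y + r.m[2][2] * v.z };
}

// Placeholder sensor tests; hypothetical names standing in for the real checks.
bool isInFieldOfView(const Vector3& /*driverSpacePoint*/)      { return true; }
bool isOccludedByTerrain(const Vector3& /*coneWorldPosition*/) { return false; }

// For each visible cone: sense direction and distance in driverSpace, then apply the
// inverse step against the cone's known worldSpace position to get one candidate for
// the driver's worldSpace position, and average all candidates together.
Vector3 perceiveDriverWorldPosition(const std::vector<Vector3>& coneWorldPositions,
                                    const Vector3& driverEyeWorld,
                                    const Matrix3& driverToWorldRotation)
{
    Vector3 sum;
    std::size_t count = 0;

    for (const Vector3& coneWorld : coneWorldPositions)
    {
        Vector3 coneDriver = rotateInverse(driverToWorldRotation, coneWorld - driverEyeWorld);
        if (!isInFieldOfView(coneDriver)) continue;      // field-of-view test
        if (isOccludedByTerrain(coneWorld)) continue;    // terrain visibility test

        float distance = std::sqrt(coneDriver.x * coneDriver.x +
                                   coneDriver.y * coneDriver.y +
                                   coneDriver.z * coneDriver.z);
        if (distance <= 0.0f) continue;
        Vector3 direction = coneDriver * (1.0f / distance);

        // The estimation error will eventually be applied to direction and distance here.

        // Inverse step: the driver's eye must sit at the cone's worldSpace position
        // minus the sensed offset rotated back into worldSpace.
        Vector3 candidate = coneWorld - rotate(driverToWorldRotation, direction) * distance;
        sum = sum + candidate;
        ++count;
    }

    return (count > 0) ? sum * (1.0f / count) : driverEyeWorld;
}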
This is the way my brain thought to solve the problem while sticking to my own beliefs in the project. It sure would be easy to say:
perceivedPosition = worldPosition + estimationError;
But then it wouldn't be based on the same estimation errors as the Visual Sensor, and, I feel, the position would just be fed to the AI. I want to make it clear that I'm currently working without any error algorithm in the Visual Sensor, and will continue doing so, as it will be a hard enough job without adding the errors.
As you've pointed out earlier, there are much more difficult, challenging, and interesting things about the project than perceiving the worldSpace position. I think it became a big deal because I sat down to attempt solving it without resources before deciding to use the transform matrix, which, oddly enough, I was already using before Todd mentioned it, and I explained that right after he responded. I admit I should have made another post before that explaining the solution I came up with; I usually do, and in this case it came later, but it was explained in post #322.
I am sorry if I seem stubborn on this point, but I don't see a reason to change something that is already working, especially when I feel the alternative is less true to the overall idea of the project. I may just be bad at explaining my overall ideas.
I do appreciate the thoughts, ideas and conversation this is sparking.
I have found a new problem, one I always had in the back of my mind, that will need to be solved before I can go much further. Early indications were that the sensors jumped around, and visually in AIRS the car would jump around. The cause is simply the delay in input from LFS, networking, etc. But it exists; I've gone on ignoring it, knowing it would add a little (unintentional) error to the AI, but figured it could work.
Anyway, I recently sped up the rate at which the AI driver senses the world and controls the car to 100 Hz (from something like 20 Hz), and the problem magnified itself. The Prediction Unit can no longer create curved prediction paths, and sometimes even the straight paths get messed up. As you can see in the video of the Prediction Unit, it is very jumpy because of this input problem.
The reason for the problem is that the Memory Unit can hold multiple memory states that are identical. So when averageVelocity is computed from the information in memory, it has a lot of error, enough to start confusing the Prediction Unit.
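Roughly what goes wrong, as a hypothetical one-dimensional sketch (MemorySample and averageVelocity here are illustrative names, not the actual Memory Unit code): when the AI samples at 100 Hz but LFS hasn't delivered a new packet yet, consecutive memory entries are identical, so the per-step velocity computes to zero, and then spikes far too high whenever a genuinely new packet arrives.

#include <cstddef>
#include <vector>

// Hypothetical illustration of the duplicate-memory problem; not the real Memory Unit.
// One-dimensional for brevity: 'position' is simply distance travelled in meters.
struct MemorySample { float time; float position; };

float averageVelocity(const std::vector<MemorySample>& samples)
{
    float sum = 0.0f;
    std::size_t count = 0;
    for (std::size_t i = 1; i < samples.size(); ++i)
    {
        // At 100 Hz sampling with much slower fresh data, samples[i].position often
        // equals samples[i - 1].position: that step contributes zero velocity even
        // though the car is moving, and the next fresh packet contributes a step
        // several times too large, so the per-step velocities are extremely noisy.
        float dt = samples[i].time - samples[i - 1].time;   // AI step, ~0.01 s
        if (dt <= 0.0f) continue;
        sum += (samples[i].position - samples[i - 1].position) / dt;
        ++count;
    }
    return (count > 0) ? sum / count : 0.0f;
}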
The best solution I know of will be somewhat difficult to add, and it belongs on the LFS Scanning side of the project: adding client-side prediction to what the LFS Scanner reports to the AIRS project. I figure I'll track the last three OutSimPackets and the time between each packet, and then either interpolate between them (delaying the AI's knowledge further) or predict the future path of the car (which could be inaccurate when a new packet comes in). It may need to be a combination of the two: interpolate until the packet time is reached, then predict.
Of note, I do already have LFS sending me OutSimPackets as fast as it will send them.
I figure I'll try the prediction first; it will be built similarly to the Prediction Unit but applied well before the AI ever sees the data. Doing this should smooth out the results and help the Memory Unit keep different values for each memory.
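A rough sketch of the interpolate-until-time-reached-then-predict idea, using only the last two packets for brevity (I would actually track three), and a simplified PacketSample struct that stands in for the real OutSimPack, which carries more fields and different types:

#include <cstddef>

// Simplified stand-in for the OutSim data; the real OutSimPack has more fields
// (orientation, angular velocity, acceleration, ...) and a different layout.
struct PacketSample { float time; float posX, posY, posZ; float velX, velY, velZ; };

// Estimate the car position at 'queryTime' from the two most recent packets:
// interpolate while queryTime is still inside the packet interval, otherwise
// extrapolate forward along the latest velocity until a fresh packet arrives.
void estimateCarPosition(const PacketSample& previous, const PacketSample& latest,
                         float queryTime, float& outX, float& outY, float& outZ)
{
    if (queryTime <= latest.time)
    {
        // Interpolate between the last two packets (this delays the AI's knowledge slightly).
        float span = latest.time - previous.time;
        float t = (span > 0.0f) ? (queryTime - previous.time) / span : 1.0f;
        outX = previous.posX + (latest.posX - previous.posX) * t;
        outY = previous.posY + (latest.posY - previous.posY) * t;
        outZ = previous.posZ + (latest.posZ - previous.posZ) * t;
    }
    else
    {
        // Predict past the newest packet using its velocity; this can be wrong
        // when the next packet arrives and the car did something unexpected.
        float dt = queryTime - latest.time;
        outX = latest.posX + latest.velX * dt;
        outY = latest.posY + latest.velY * dt;
        outZ = latest.posZ + latest.velZ * dt;
    }
}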