blackbird04217
S3 licensed
Woooh!! Whooo! (Yes, I am every bit excited!) AMAZING!! Whoo!

Proof attached to this post!

I have successfully driven my AI to a target! Okay, when you watch the replay you will likely die of laughter, as I nearly did the first time I saw this behavior the other day... However, the AI driver successfully hits the blue cone, which is where I set him to go! Now to fix some behavioral issues! ENJOY!

EDIT: Ignore the 30sec penalty. That is _my_ fault, as the AI depends on me to tell it when the lights turn green. LFS has no accurate way to detect the light change, so I substituted an input key for the time being.
Last edited by blackbird04217, .
blackbird04217
S3 licensed
You don't need to allocate any memory to pull this off. Simply define it as text[240] inside the InSim packet; however, when sending it, take the length of your string and pad it to a multiple of 4 bytes. The struct IS fixed size, but what you SEND to LFS is variable sized; the reason is likely to save bandwidth, because a lot of InSim applications have lots of buttons going back and forth.
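A rough sketch of what I mean. The struct here is a cut-down stand-in, not the full IS_BTN definition from InSim.txt, and the helper function name is made up; the field layout follows the documented 12 bytes of header before the text:

```cpp
#include <cstring>
#include <cstddef>

// Cut-down stand-in for the InSim button packet; field names follow
// InSim.txt, but this is a sketch, not the full definition.
struct IS_BTN
{
    unsigned char Size;      // total bytes actually sent
    unsigned char Type;
    unsigned char ReqI;
    unsigned char UCID;
    unsigned char ClickID;
    unsigned char Inst;
    unsigned char BStyle;
    unsigned char TypeIn;
    unsigned char L, T, W, H;
    char Text[240];          // fixed in the struct, variable on the wire
};

// Hypothetical helper: bytes to actually send = header + text
// (including the NUL terminator) rounded up to a multiple of 4,
// capped at the full 240-byte text field.
std::size_t ButtonSendSize(const char* text)
{
    std::size_t len = std::strlen(text) + 1;        // include terminator
    len = (len + 3) & ~static_cast<std::size_t>(3); // pad to multiple of 4
    if (len > 240) len = 240;
    return offsetof(IS_BTN, Text) + len;
}
```

So you declare the full struct once, fill it in, and only pass the first ButtonSendSize(text) bytes to your send call.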
blackbird04217
S3 licensed
In this current example I am using the MCI packet, which contains the CompCar structs. This is the way that gets me the data available on all the cars, not just the one being driven; of course the OutSim packet has other information that may be usable for the _actual_ driver. But at this time I am using the MCI pack, so yes, I am sure I am getting this value as an unsigned short, as described in the InSim quote above.

As far as my world coordinates go, I am fully used to them that way; doing it this way allows me to change from 2D to 3D without changing the coordinates. In my 3D worlds Z is always forward, X is to the right and Y is up and down, hence the convention here. Since the AI project will have a life outside of LFS, I have used my convention. It seems I may have some issues in my LFS -> AI World conversion interface.

If not stated already, I have named the project Artificial Intelligence for Racing Simulators, or A.I.R.S. for short. I will need to look further into the rotation issue. For some reason, when I was reading the InSim document I understood Y as back to forward; maybe that explains some of my issues. I will get back to this when I figure out the problem, though I am still willing to accept help if my math above is converting something the wrong way...
blackbird04217
S3 licensed
I don't know, I would likely aim more towards 'real drivers', since I have hopes that sim-racing will eventually get closer to that. Motion simulators do exist; it will only take time before a version small enough for the home is built. And you don't need to feel the full forces; even partial forces would give you the level of feedback needed. Of course, it will never be quite the same...

Anyway, I came back because I am having some issues with my world coordinates now. This was one of the things I could foresee, and it should be something that can be overcome with a little effort, so I am asking for help before I beat my head against the keyboard.


In my world the following apply;
X-Axis: 1, 0, 0 (RIGHT)
Y-Axis: 0, 1, 0 (UP)
Z-Axis: 0, 0, 1 (FORWARD)

Quote from Insim.txt :
// If ISF_MCI flag is set, a set of IS_MCI packets is sent...

struct CompCar // Car info in 28 bytes - there is an array of these in the MCI (below)
{
word Node; // current path node
word Lap; // current lap
byte PLID; // player's unique id
byte Position; // current race position : 0 = unknown, 1 = leader, etc...
byte Sp2;
byte Sp3;
int X; // X map (65536 = 1 metre)
int Y; // Y map (65536 = 1 metre)
int Z; // Z alt (65536 = 1 metre)
word Speed; // speed (32768 = 100 m/s)
word Direction; // direction of car's motion : 0 = world y direction, 32768 = 180 deg
word Heading; // direction of forward axis : 0 = world y direction, 32768 = 180 deg

short AngVel; // signed, rate of change of heading : (16384 = 360 deg/s)
};

// NOTE 1) Heading : 0 = world y axis direction, 32768 = 180 degrees, anticlockwise from above
// NOTE 2) AngVel : 0 = no change in heading, 8192 = 180 degrees per second anticlockwise

struct IS_MCI // Multi Car Info - if more than 8 in race then more than one of these is sent
{
byte Size; // 4 + NumP * 28
byte Type; // ISP_MCI
byte ReqI; // 0 unless this is a reply to an TINY_MCI request
byte NumC; // number of valid CompCar structs in this packet

CompCar Info[8]; // car info for each player, 1 to 8 of these (NumC)
};


namespace LFS_to_AIRS
{

void ConvertToOrientation(ice::iceVector3 *vDir, const unsigned short usAngle)
{
float val = ice::MathConverters::DegreesToRadians((usAngle / 32768.0f) * 180.0f);
vDir->m_fX = sin(val);
vDir->m_fY = 0.0f;
vDir->m_fZ = cos(val);

ice::Math::Vector3Normalize(vDir, vDir);
}
}

The "ConvertToOrientation" function takes in the orientation values from the MCI packet, so 32768 = 180 degrees. All my Math:: code has been proven to work on several projects, so I already know it is not the suspect; though I am trying to make sure this function is working. (The car is not going where I thought it would, and the rudimentary display I made wasn't quite what I expected, leading me to believe this is the problem; however, I am still checking my display code to make sure the units are set up correctly there, as that is new code as well.) I am hoping someone could check that code.

I would want;
X = 0, Y = 0, Z = 1 when the car is oriented with the world Z (0,0,1), which should be an orientation of 0, I assume; it's not listed, but other rotational values in InSim.txt say rotation is anticlockwise. Of course, in LFS Z is the up axis and Y is forward, though this should be converting that just fine. Help?
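For what it's worth, here is a standalone sketch of the same conversion using plain floats (no ice:: types; the function name and the negateX flag are mine, just for testing). The one thing I would suspect: InSim headings increase anticlockwise viewed from above, so if the converted car comes out mirrored left/right, negating the sine term is the likely fix.

```cpp
#include <cmath>

// Standalone port of ConvertToOrientation with plain floats.
// negateX toggles the sign of the X term: InSim headings increase
// anticlockwise from above, so a mirrored left/right result would
// suggest the sine term needs negating in this Y-up convention.
void HeadingToDirection(unsigned short usAngle, bool negateX,
                        float& x, float& y, float& z)
{
    const float pi = 3.14159265358979f;
    float radians = (usAngle / 32768.0f) * pi;  // 32768 == 180 degrees
    x = std::sin(radians) * (negateX ? -1.0f : 1.0f);
    y = 0.0f;                                   // AIRS: Y is up
    z = std::cos(radians);                      // AIRS: Z is forward
}
```

With heading 0 both versions give (0, 0, 1); the difference only shows at 90 degrees, where anticlockwise rotation should point the car toward negative X in this convention.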

EDIT: It is listed, and not an assumption, I just never read the 'notes' before apparently! Either way it still isn't quite working for me at this time. Though I will keep checking to see if there is something wrong with my world coordinates in the AI Project.
Last edited by blackbird04217, .
blackbird04217
S3 licensed
Sure, I don't know Java syntax directly, but I think I would be able to pick up the ideas. I finished reading the NN chapter in that book, all except the source code implementation. I like some of the ideas, but I would certainly need to test them outside the realm of the racing simulation, on something very simple and then one step beyond simple, without over-complicating things. I now see that implementing an NN is considered the easy part of the process; the hard, time-consuming, challenging part is getting the required results.

I do understand that X,Y,Z inputs are patterns which, if you answer with A, will always lead to B; but when it comes to handling traffic and other things, I think this will lead to less than desired results. In other words, I can't think of any way to accomplish training the NN, which is the hard part of NNs!
blackbird04217
S3 licensed
Like I mentioned, you would allow everyone to finish their race; the timer keeps going, and a special InSim-based application could be used to retrieve the time at which each person finished the distance. So even if LFS thinks you are done racing, you still need to do the additional 2 laps you were lapped by.
blackbird04217
S3 licensed
Ok, well, from my 5 minutes spent reading in another AI book I bought (not my college book), I have learned quite a bit about my own brain... Some of it, as you said, was astonishing; I looked at the individual letters on the page for a moment and thought: damn, this reading thing is perfect for NNs due to the patterns presented.

However, although I am kinda excited about continuing this read, I am currently starting to feel more negative about NNs in the AI here. I am not so sure that a racing simulation has patterns in the way NNs deal with them. That said, I do need to keep reading, and yes, racing has a pattern: go around the track. But a moment-to-moment pattern I am less sure about. I have to get deeper into how it would be implemented before I can really decide whether it is useful or not.
blackbird04217
S3 licensed
Not that I want to waste your time, especially as I have not yet read it myself; which is something I promise I will do at some point, likely today if I can find the time. Considering my current state I should be able to, as I am currently keeping track of the AI thread and working on the AI project itself, attempting to get the car to drive towards a point. Interestingly enough, it starts to; and then goes the opposite direction.

Sorry, I got side-tracked while trying to form a question. I get the input/output idea, but I don't understand very well what happens in the hidden layer. Obviously that is likely where the meat is. At least, I am thinking so... I am trying to think of an easier example than driving for you to use to explain that layer; and again, don't bother if it is something I will really need to read up on to understand in the first place. I can likely follow any other simple example you have, even if it doesn't involve four tires, a couple of pedals and a steering wheel!

I am quite lost on fitting this into the project, though that is understandable since I need to do some reading. The brief mention of this topic in college (including the article I read and wrote about) has long escaped my memory... Going to dig out the books now.
blackbird04217
S3 licensed
Quote from bunder9999 :nice.

can someone explain to me how his car could survive such a huge jump, when this ford escort couldn't even survive a jump half the size? they reinforced the whole rear end... it landed on the right side first, but somehow the left suspension is busted.

http://www.youtube.com/watch?v=xUqk8Idcgcs#t=2m40s
http://www.youtube.com/watch?v=f631N_HLfCo#t=2m12s
http://www.youtube.com/watch?v=LtWi0-ACkmQ#t=2m41s

This is actually a simple one to explain: it has to do with the angle of the landing. The planned jump took a lot of things into account; this is very evident while watching the angles clip above. You can see that he had reached speed and feathered it to maintain that speed while launching. The hard part about the jump, as stated in my previous posts, was making sure X speed was achieved and maintained for launch; after that it was down to gravity (which doesn't magically pull harder now and again). Of course, wind plays another role, and tends to upset things of this nature.

That said, the Escort video showed them slamming down onto flat ground, the worst possible landing situation. This is why in extreme sports you find people unable to ride/skate/continue away from a landing after a big jump when it is not on a ramp: the force from the impact. When landing on a slope, the impact is absorbed a lot more easily than on a flat. Hence, less damage.
blackbird04217
S3 licensed
Unfortunately, at this time it is not possible to detect whether a wheel is locked or spinning rapidly via InSim, OutSim or OutGauge. I know this because I have been looking for a way to do it for my AI project.

That said, I really like the idea. I've been hoping for FFB pedals for quite some time, though the brake is really the one that could use FFB; especially with brake fade, when the pedal just drops to the floor! That would make even a simulated experience scary!
blackbird04217
S3 licensed
Ahhh yes, that makes sense now. Basically, everyone does 100 laps (or some high number) at event 1. Then at event 2 everyone does another number of laps at another combo. They continue driving after the race has finished until each driver has 100 laps. Take the times and add them up; the winner should be somewhere close to 168 hrs. It is a distance race more than a time race, though the distance given is something that would take roughly 168 hrs to complete.
blackbird04217
S3 licensed
I am certainly hopeful that it becomes 'human-like'. I assume the reason a lot of people have trouble understanding my intentions is that they may have changed (though I am not sure they did), and there are probably several layers of intentions that conflict with each other. I am guessing here, as it seems no one understands what my intention is. (This includes the possibility that perhaps I don't understand my intentions well enough to explain them.)

That said, I do know my intention is to remove some things that racing AI have done for years: what is, in my mind, cheating. This is acceptable for games, but not for simulations. If I could make the AI behave believably like a person within the constraints I set out for myself, then that would be amazing. I would call 'competitiveness' a sub-goal though, especially when it comes time for the car to judge traffic and handle scenarios where a human can just change input and everything is okay; there are too many special cases.

That said, I am trying to get the AI to work within human constraints. We don't have super fast reaction skills, while most AI algorithms (including that of LFS) do. We can't spin the steering wheel from 90 to 180 degrees in 0.0001 seconds, whereas some AI algorithms can, and do. Things like this will come from an interface the AI needs to go through to input the controls.

Here is something like what my idea is...
Simulation -> Reference Points & Information -> AIWorld -> AI Sensors -> AIDriver -> Decision Layers -> Desired Control Output -> Reaction Time Checker -> Real Control Output -> Simulation -> loop

The decision layers and reaction time checker are the only things left untouched at this moment. Everything else has a structure within my simulation, even if it is not fully linked up to LFS or other things yet. The reaction time layer will behave a bit like this (in case you missed that post somewhere here).

Scenario A: RPM 4500; the driver wants to shift up; 0.25 seconds later the driver shifts up. (Simulating an H-shifter, where the driver takes a hand from the steering wheel to the shifter. Actual times would need to be tested against human limits.) In this case the shift actually happened around ~4800 RPM, depending on acceleration.

Scenario B: RPM 3900; the driver prepares to shift (signals that the arm is moving toward the shifter, which takes 0.25 seconds to complete). RPM hits 4500; the driver shifts, which takes about 0.05 seconds since the hand is already on the shifter. (Again, times need to be worked out.)

This scenario is for H-shifters; cars with paddles obviously don't need the hand to move, but I wanted to point out the level of detail I want to put into this layer of AI control/reaction time. Some sort of "prepare for twitch steering" could take place as well: when the AI knows it is on the verge of the limits, it could use that to successfully catch a slide. For a human this comes with experience, knowing that the car may be upset during a particular corner and preparing for it. However, it takes concentration, and I do want the AI to have some form of emotion gauge to wear it down over time or something. This is also the area where AI difficulty could be changed: more or less ability to concentrate / judge distances... Like I said, I still need to work on the game plan here.
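The two scenarios above could be modeled with something as small as this (class name, method names and the times are all placeholders, nothing that exists in the project yet):

```cpp
// Sketch of the "prepare, then shift" delay idea for an H-shifter.
// The two constants are placeholder times that would need tuning
// against real human limits.
class ShiftHandModel
{
public:
    // Driver signals the arm is moving toward the shifter (Scenario B).
    void PrepareToShift() { m_preparing = true; }

    // Advance time; the hand arrives after kTimeToShifter of preparing.
    void Update(float deltaSeconds)
    {
        if (m_preparing && !m_prepared)
        {
            m_handTravel += deltaSeconds;
            if (m_handTravel >= kTimeToShifter)
                m_prepared = true;
        }
    }

    // Delay between deciding to shift and the gear actually changing:
    // short if the hand is already on the shifter, long otherwise.
    float ShiftDelay() const
    {
        return m_prepared ? kTimeToShift : kTimeToShifter + kTimeToShift;
    }

private:
    static constexpr float kTimeToShifter = 0.25f; // hand to shifter
    static constexpr float kTimeToShift   = 0.05f; // lever movement
    float m_handTravel = 0.0f;
    bool  m_preparing  = false;
    bool  m_prepared   = false;
};
```

Scenario A is just calling ShiftDelay() cold; Scenario B is PrepareToShift() early, then a much shorter delay once the hand has arrived.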
blackbird04217
S3 licensed
I'm going to say that the title is very misleading.

Quote :7 races will be done where each last 24 hours (or 14 events with 12 hours)

From this statement I gather that he wants a championship, decided by points or some other form, possibly even where gaps are kept from race to race... But his idea is (from my understanding) to do an endurance race that lasts 1 week in virtual time, where each race/event adds to that virtual timer. It isn't a week straight.

14 races * 12 hours = 7 * 24 hours. These 14 races could be spread out over the entire year, yet his idea of a "1 week race" could still be happening. I don't know, maybe I am misunderstanding it. The idea seems fine in concept, though I don't know if it really differs from DoP or MoE except that he states there aren't multiple car classes; though a possibility of the car changing?? I like the car changing from event to event, if that is what is meant by the statement.
blackbird04217
S3 licensed
Hmm. I should probably do some reading on NNs then; I've heard of them, including a mention earlier in this thread, though my objective is not an AI that 'learns'. Teaching the AI is not the idea here, and I don't see it fitting very well; however, since you say very human-like output is achieved, I should at least give it a read over. Hopefully, if it is useful, it fits into the design of my AI already, since I've spent some time on that.

Update: The AI now has true access to the gauge sensor. When waiting at the grid it will shift up into first and step on the throttle. Also, it can now hold a steady speed, which I have down to 10 mph, since at that speed I shouldn't exceed any limits and needn't worry about 'driving at the limit'. So the next step is to get the physical sensor and visual sensor working so the driver can drive to a destination; then it will be down to making a layout around FE1 so I can try getting a car to drive around the track.
blackbird04217
S3 licensed
Hmm, you bring up a good point about the hills, though my AI (as of now) is developed with little to no information about hills in LFS. Think of it as flattening the track and just knowing where you should be. This could bring up issues in the future, bridges and other things, but I think it may be negligible for now.
blackbird04217
S3 licensed
Well, Android, you certainly brought up an area I hadn't thought of, though I don't know how one would go about implementing it. If it could be implemented, it could _possibly_ give me some 'idea' of the tire states I am looking for, although I see some issues with the attempt:

First, we don't know wheel speed; although, like you stated, for driven wheels we can take a delayed guess, since the speedometer has a built-in delay. This could have an effect where the AI is driving along, hitting the throttle just fine, and then thinks "I'm spinning my drive wheels" even though the issue has already been corrected or traction was regained.

Time 0: Car starts accelerating from 0 mph. (Throttle input: 100%)
Time passes: Car hits some grass while accelerating, causing excessive wheel spin. (Throttle input: 100%)
Time passes: Car gets back on tarmac and the wheel spin sorts itself out. (Throttle input: 100%)
Time passes: A sensor detects that wheel speed is greater than car speed. The AI uses this to reduce throttle input. (Throttle input: 70%)
Time passes: Eventually the sensor tells the AI everything matches great. (Throttle input: 100%)

So in this example the AI behaves a little wonky when it hits a small patch of grass. Off the top of my head I can't think of many other places where this comes into play, although I have a feeling that more often than not the delayed speedometer reading would bite me. There would also need to be information in the GripSensor that shouldn't be there: how much throttle/braking is going on. That would be needed for the GripSensor to report accurately while slowing down or speeding up, since the speedometer is delayed.
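The delayed-reading problem in that timeline can be sketched like this (the class is mine and the delay is an arbitrary sample count, nothing measured from LFS): a short wheel-spin spike only shows up in the sensor's output after the spin is already over.

```cpp
#include <cstddef>
#include <deque>

// Models the timeline above: the grip sensor only ever sees a delayed
// copy of the true wheel speed, so a brief spin spike is reported late.
class DelayedSpeedSensor
{
public:
    explicit DelayedSpeedSensor(std::size_t delaySamples)
        : m_delay(delaySamples) {}

    // Push the true wheel speed; get back what the sensor reports now.
    float Sample(float trueWheelSpeed)
    {
        m_history.push_back(trueWheelSpeed);
        if (m_history.size() <= m_delay)
            return m_history.front();   // not enough history yet
        float delayed = m_history.front();
        m_history.pop_front();
        return delayed;
    }

private:
    std::size_t m_delay;
    std::deque<float> m_history;
};
```

Feed it 10, 50 (the grass spike), 10, 10 with a two-sample delay and the 50 only comes back out on the fourth sample, after traction has already been regained; exactly the "wonky" reaction described above.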

-----------------

That said, I might be getting ahead of myself. The first thing required is to get the AI to go around the track. I wish I hadn't messed up my copy of LFS right now, as I want to run a test in the XFG to see what the 'optimum' speed is to drive Fern Bay Club WITHOUT going over the limits. Which is to say, at what speed would you only need steering input and never have to worry about hitting the brakes? Once a car goes around a track successfully, I think the project will stand in a much different position, because even then I can start adding my tests with reference points.
blackbird04217
S3 licensed
Yes, the state tracking is a lot more memory and less flexible, which is why I was asking; that way, if you were planning that route, I'd have given you a hint, assuming of course it was still early enough to do so. But it looks like you already have the more flexible way to do it, and that is nice.

FlameCZE - NICE! Very nice. Thanks much.
blackbird04217
S3 licensed
I was going to say that even automatic transmissions still have neutral. And in my first car (an auto, unfortunately, and I won't go back to one) you could even slam it from drive into park, passing through both neutral and reverse to do so; when that happens the wheels lock up and STOP. I haven't been in this type of situation, though I've had my clutch pedal stick before and pop back up after a small delay. But like stated, you can't do much in a panic state of mind; that said, how do you have time to make a 911 call and not think more clearly about the situation at hand?

It's hard to think that a cop, with any sort of vehicle training, would panic in such a situation. And half a mile is at least 15 seconds at 120 mph; I don't know how long it took for the car to stop, but that call was certainly long enough to have thought of other ideas, especially when the operator mentioned the ignition. (Keyless or not, I do believe there is a big "stop" button, though I don't know if it would help much, as the car has enough momentum to keep the engine firing.) I don't know; it is sad to see people lose their lives, but death happens; a population of 8 billion (or whatever it is) is not affected if 25 people die. Sure, the emotions of friends and families are affected, but people die every day. It is a fact of life that will not go away; life == death.

EDIT: And whereas it is "sad" that people lose their lives like this, it is far more sad that the media points fingers, lawsuits fly, and statements are made like "Toyota knew this was a problem for two years and could have saved these people", as if it were Toyota's fault. That is sad.
Last edited by blackbird04217, .
blackbird04217
S3 licensed
@AndroidXP: I do believe the AI needs to know that information about each of the tires, as that is the information that keeps the AI driving at the limit and handling what happens when it goes over the limit. This, I think, is key to getting the AI to be fast without knowing the exact physical constraints; although I will likely be proven wrong.

As far as getting the AI to drive around the track at 10 mph, I am pretty sure this information is not required. And for braking, like you mentioned earlier, just use cars with ABS for now, or even turn the brake aid on in LFS. However, that doesn't help when dealing with wheel spin, like my current LX6 issue, and again back to my previous point about cornering at the limit. As for having only four values (UNDER, NEAR, AT and OVER_LIMIT), I think it might need more, certainly not less; however, this may be debatable at another time, since it might be important to know the actual percentage of the limit the tire is at. For example, on a 0-100% scale: 0-80% = Under Limit, 80-95% = Near Limit, 95-100% = At Limit, and >100% = Over Limit.
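Those thresholds could be as simple as the following (the enum matches the four states named above; the cutoff values are the guesses from the text and would need tuning later):

```cpp
// Four tire states as discussed above.
enum TireLimitState { UNDER_LIMIT, NEAR_LIMIT, AT_LIMIT, OVER_LIMIT };

// Maps a grip fraction (1.0 == exactly at the limit of traction) onto
// the four states using the suggested cutoffs: under 80%, 80-95%,
// 95-100%, and beyond 100%. The percentages are placeholders.
TireLimitState ClassifyGrip(float gripFraction)
{
    if (gripFraction > 1.00f)  return OVER_LIMIT;
    if (gripFraction >= 0.95f) return AT_LIMIT;
    if (gripFraction >= 0.80f) return NEAR_LIMIT;
    return UNDER_LIMIT;
}
```

Keeping the raw fraction around and only classifying at the last moment would also leave the door open for finer-grained states later.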

@logitekg25: Yeah, no wheel speed available; I am out of ideas at the moment, but I am sure something will come up. As said to AndroidXP, if I wanted the car to aim for a speed of ~10 to 15 mph, I believe I could get it to navigate a track; however, that is obviously not very competitive...

On the other hand, if a set were made that limited a car to 15 mph because of gearing, it might be quite interesting, and thus competitive in a very _slow_ way. (Let the driver with the shortest overall line win.)
blackbird04217
S3 licensed
Even in Visual Basic you wouldn't be able to do that. I understood the logic you were coming from, and that sndskid was used as an example. The only way this would work is if you had the source to the project and could access the sound directly, regardless of syntax/language issues.

So your teacher was able to do this because he had some sort of object named sndskid. At any time in the code, with that object, you could check: if sndskid.isEnabled() then doSomething(); But since I don't, and won't ever, have the LFS source code, I cannot detect its sound objects and test in that sense.

It was worth mentioning, but hopefully you now understand how it works and its limitation.

----------------------

Back on topic: can anyone think of a way, using position, direction, heading and angular velocity, to detect _which_ tires are UNDER, NEAR, AT or OVER_LIMIT? I don't think this is even possible, though I am still trying to think outside the box. Perhaps there are other bits of information LFS has. Android and I have discussed detecting oversteer/understeer using these, and in those specific cases it would be as simple as detecting oversteer and setting both rear tires to OVER_LIMIT (and vice versa for understeer). However, there are situations, like hard braking/accelerating, where the car is not understeering/oversteering but the tires are OVER_LIMIT, and this (as of now) is what I am trying to detect using what LFS gives me; as pointed out by both Android and myself, it may not have a fool-proof solution.
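The one straightforward piece of this is the drift angle itself: the signed difference between the CompCar Direction and Heading values (both 16-bit angles where 65536 == 360 degrees). Everything past that, which individual tire is sliding, is the open question. A sketch (the function name is mine):

```cpp
// Signed difference between the car's direction of motion and the
// direction its nose points, both as InSim 16-bit angles
// (65536 == 360 degrees). The cast to a signed 16-bit value handles
// wraparound, so the result is always in (-180, 180] degrees.
// A large |drift| hints that the rear tires are past the limit, but
// which individual tire is sliding cannot be recovered from this alone.
float DriftAngleDegrees(unsigned short direction, unsigned short heading)
{
    short diff = static_cast<short>(
        static_cast<unsigned short>(direction - heading));
    return diff * (360.0f / 65536.0f);
}
```

Combined with AngVel and throttle/brake state this could at least separate "rotating under power" from "sliding straight on under braking", even if it never pins down a single wheel.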

EDIT2: If anyone does find a reasonable way I will be paying close attention here, and if shared would be extremely happy. Even if the code is in another language as long as I can understand what math is going on, and the theory behind it I would be willing to try it!

EDIT: Well said AndroidXP.
Last edited by blackbird04217, .
blackbird04217
S3 licensed
I don't have "sndskid" to check whether it is enabled or not. I can't magically do things with sounds. Since you are new to programming, let this be a quick lesson you will learn shortly: you can only work within X constraints, not X+1. Meaning I can only do so much with LFS: I am limited to InSim, OutSim and OutGauge for detecting race restarts, car position, etc.; I am limited to a layout file to read information for my reference points; and I am also limited to PPJoy for the virtual controller.

While I am making my AI work within tight limits, using LFS tightens the limits a bit more; though it is worth continuing because of the physical accuracy in LFS and the potential. That said, sound processing as you are suggesting would need to be done more like the following:

Get sound input from line-out or something that records the speaker output. Kinda like a microphone, but a different channel.
Take this raw sound data and compare it, using some tricky algorithm, against a skid sound, to detect which tire(s) are involved and how much skidding is going on.
Then hand the AI that value through its physical sensor: a value that a physics system could tell me directly, and one I can't currently get from LFS, although I am hoping to fake it well enough with the information LFS does give me.
blackbird04217
S3 licensed
Yes, I am completely sure I do not want to write something that detects sounds and then processes them to see if there is a tire squeal, and if so, which tire and how much grip is lost. A simple number from the physics department goes a long way, and still keeps this within the limits of my project.

Example: why make a sound processor to detect engine speed when the RPM value is easily available? Just use the value, perhaps add some inaccuracy as described above, and the AI seems to 'hear' the engine even though the value comes from the car. The point of this project is not to work on sound processing and/or image processing.
blackbird04217
S3 licensed
No problem about the misunderstanding, especially since it is now cleared up! But I think you now see how using LFS limits me in ways I don't want it to.

As for a "Beginner" to "Pro" difficulty range, I think that will be tweaked by changing the accuracy of the driver's estimations. Of course, this assumes I can even achieve some competitive level of driving, which, as said, I have my doubts about; but I certainly have my hopes as well.
blackbird04217
S3 licensed
The thing is that the AI needs to get some information from the physical side, no matter how you look at it. As said, I do not want exact values; but most importantly, I do not want the AI to "physically move or control the car". That is my objective in detaching the AI from physics. Some things must come from the physics side of a game/simulator in order to make the AI work in the first place: the amount of traction at each tire, the current position and orientation of the car, and the car's velocity and angular rotation. All these physical things make up the "AIPhysicalSensor", which tells the AIDriver what is currently happening to the car.

Like stated, I am not aiming to make sound detection, image processors or anything obscure; just taking a different approach to how racing AI is done, and I am aiming this approach at AI in simulations vs. AI in games. Big difference, as we know. The AI in games needs to be fun for the player, while the AI in a simulation does not aim to be fun to drive with. (If it is actually competitive for someone, then GREAT, but I don't have my hopes that high yet, especially for the LFS side of this that has been started.)

EDIT: About the RPM and the sensors for car information: this isn't to simulate a beginner vs. an experienced racer. The RPM from the GaugeSensor could be exactly like our ears, since it could have an inaccuracy of +/- 200 RPM, I would guess. (That is a range/accuracy within 400 RPM from hearing, while the driver could also look at the gauge and get a better reading.) Before I start adding a lot of inaccuracy to the project, I first need to get the AI to successfully navigate a track, even if it stays well below the limits.
Last edited by blackbird04217, .
blackbird04217
S3 licensed
Thanks for the very kind offer, but I have a feeling it won't help my AI too much :P The values are what I am after. That sounds great; adding undo support to an editor _really_ helps the editing process! (Are you using a chain of commands to support undo/redo, or just keeping track of the last state and undoing to that?)