Here is a good question for you then: "Why do you need good grades?"
A few of the outcomes I could think of:
- Because you are aiming for X job and that requires good academics.
- Because you have plans to learn/study more on a subject, requiring good understanding of the basics.
Both of those mean you have some underlying bigger picture; otherwise you could take any old job that pays the bills and "survive", because that is all anyone 'needs'. Material objects are not needed, though they are desired and sometimes important to each of us.
---
As for the last year's thing, I don't know how realistic a goal that is for anyone to make. Sure, it's likely something that is desired, maybe wanted to extreme levels - but it's the same idea as me saying I want to win the lottery in 2010. It's not a very realistic goal. I don't know how to describe it other than it's not something you can actually achieve just by "trying hard", and that is what makes it an unrealistic goal in my mind.
EDIT: My own goal above, "Find a job in the Game Industry", is actually not fully in my hands, as it depends greatly on the job openings in the industry and on someone wanting my expertise. I can only give so much effort to this and still have a possibility of failing for reasons out of my control.
(Winning the lottery is actually a poor example, because if you take that goal to the extreme you would simply buy all the tickets and thus win. Though you would probably lose more money than you gained from the lottery.)
You wrote while I was writing. There are certain things you can't plan for - like your example. But a good plan is a flexible plan: unseen events or challenges will occur, and the plan should be able to change. Everything will have its share of unseen challenges and risks. When confronted with those challenges, do you climb over the hurdle, or turn around and say "that plan failed"?
I'd agree that it is stupid when the goals/plans fall under these categories:
Randomly picked - with no true meaning for you.
No reward for completing the goal, no satisfaction.
Something that is out of reach or unrealistic. These can actually have a negative impact.
---
As for me, I am, and am becoming more and more of, a goal-oriented person. It's important to know what you want to achieve and break it into smaller steps. Paying off my small loan is something I have to do to stay out of debt, so it's not a goal - but finishing it off in 2010 versus making minimum payments will allow me to achieve future goals of building my own race car / owning a Dodge Viper by the age of 35 (July 2020). So breaking it down into reasonable chunks gets me there.
I do say I am becoming more and more goal oriented because one year ago I started planning for my adventure. I didn't have any money saved for it, and was actually just a bit better off financially than I am right now, since in 2008 I bought my motorcycle and stuff. But in four months, with hard work, I managed to do something that "I would never get a chance to do". This all depends on a person's mindset though, and not everyone works well with goals. I have discovered that it works great for me, but it is important to keep them realistic and purposeful/meaningful.
When 90% of people choose "Lose weight", "Get a life" or "Quit smoking", most, but not all, just say these things without purpose. My mom states she will quit smoking every year, yet I've not witnessed a single attempt towards it. When goals/plans are made 'for the hell of it', with no plan to actually follow through, then I agree with you - those goals are pointless.
What are your plans / goals for 2010? As you probably know, I completed a major life goal in 2009, and as a goal-oriented person I have set out to accomplish several small goals for 2010.
Here are a handful of small goals I have:
Financial:
Get back into the Game Industry.
Finish paying off the small student loan: $11k
Get my backup savings restored. (Depleted now due to my trip and lack of a job.)
Personal:
Read at least 24 books, including at least 6 fiction and 6 non-fiction.
Write a one-page story at least once a week (52 pieces of writing). These work toward an old, almost forgotten goal of writing my own book(s).
I know I have a few other categories and goals but they seem to have run away for the moment. So what are your goals/plans for 2010?
-I will try to remember to bump this thread at the end of 2010 and we can all discuss which of our goals were actually completed successfully or the reasons why they weren't.
Well, it would be the same race to race; my line on a track that I have enough experience/practice with is pretty close to the same from race to race. It's the small differences that add up per lap: "I went slightly wide here because of some reason" - boiling down to: I missed a reference point by just a little bit (too early or too late).
I'm not ignoring your ideas, but it is important that I hear from some of the others who have been a large part of this thread on the above discussion. I am trying to find out if adding these "DriveTo" points goes against my original goals in a way that means it should not be followed. It is hard for me to decide either way, so I want the opinions of others who have been following...
Okay, I am back and going to *try* explaining a question in the correct form so that it is understood. I find the wording is likely to be taken the wrong way, since some people never figured out my actual intention with the project; really it's likely I have been unclear - and also likely that the intention changes slightly... But besides that.
As a reminder, the main intention was to try doing AI in a different way, removing some of the "cheats" that are okay for game AI but not really okay for simulation AI, and in doing so I figured I would try using only the information a human gets.
With that in mind, and the difficult task of using these reference points as instructions / memories, I have been putting some thought into how to achieve this. So I started driving some more laps in LFS, just for fun really. And I realized something pretty important - but I also have the feeling it goes against the original intent of this project.
As I was driving around Westhill Rev, I noticed that the perception of the left and right edges of the track is just as important as the reference points. This is not a new discovery, as I pointed it out somewhere on page 1. The left/right edges are important when it comes to forming the 'best line' through a corner or around the track.

Then I began wondering if it would be possible to form the driving line from knowing only the edges of the track. The basic line would of course just take all the corners as wide as possible, though this is not exactly the best possible line. This doesn't sound like an easy task, but I believe it would be mathematically possible to smooth the curve based on the left/right edges of the track... (A rough sketch of that edge-smoothing idea is below.) And after thinking about that, I began asking a question that a few others did: why not give the AI knowledge of this line, either precomputed or by giving them the information from an editor of some sort...
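Just to make the "smooth the curve based on the left/right edges" thought concrete, here is a minimal sketch of one possible way to do it - not the project's code, and all names/constants are my own guesses. It starts on the centerline, repeatedly pulls each point toward the midpoint of its neighbours (which shortens the path and cuts toward the apexes), and clamps each point back between the track edges. It minimizes path length rather than lap time, so it is only a rough first line:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

static Vec2 lerp(const Vec2& a, const Vec2& b, float t)
{
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
}

// leftEdge[i] and rightEdge[i] are assumed to be matching samples across the
// track, and assumed never to coincide (otherwise the projection divides by zero).
std::vector<Vec2> SmoothLineFromEdges(const std::vector<Vec2>& leftEdge,
                                      const std::vector<Vec2>& rightEdge,
                                      int iterations = 200)
{
    const std::size_t count = std::min(leftEdge.size(), rightEdge.size());
    std::vector<Vec2> line(count);
    for (std::size_t i = 0; i < count; ++i)
        line[i] = lerp(leftEdge[i], rightEdge[i], 0.5f);     // start on the centerline

    for (int iter = 0; iter < iterations; ++iter)
    {
        for (std::size_t i = 1; i + 1 < count; ++i)
        {
            // Pull toward the midpoint of the neighbours: shortens the path,
            // which naturally straightens the line and cuts toward the apexes.
            Vec2 target = { 0.5f * (line[i - 1].x + line[i + 1].x),
                            0.5f * (line[i - 1].y + line[i + 1].y) };
            line[i].x += 0.3f * (target.x - line[i].x);
            line[i].y += 0.3f * (target.y - line[i].y);

            // Clamp back onto the track by projecting onto the left->right cross-section.
            const Vec2& a = leftEdge[i];
            const Vec2& b = rightEdge[i];
            const float abx = b.x - a.x, aby = b.y - a.y;
            float t = ((line[i].x - a.x) * abx + (line[i].y - a.y) * aby)
                      / (abx * abx + aby * aby);
            t = std::clamp(t, 0.05f, 0.95f);
            line[i] = lerp(a, b, t);
        }
    }
    return line;
}
```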
Thinking about giving the AI knowledge of the driving line goes against some things I have said/planned since the beginning of the project. But as I was thinking about this, a human knows the line through instinct or experience, and I was thinking the AI would use it the same way. The most important thing to know is that I do NOT want the AI to drive from point A to point B, then from point B to point C... That is the way current AI systems generally work, and I feel the placement of the AI car is too exact; their actions are like a robot and they rarely make mistakes in the same manner a human does. So I began thinking that using a driving line, but in a manner where the AI doesn't drive TO a point but rather TOWARDS a point, might work.
The Idea:
--------------------------
Using the general ideas of the normal approach, but with a twist. The driver does not get to know the exact destination point; instead they triangulate the point using their reference points. Since the driver's reference points are estimated/approximate, the destination point will change slightly from lap to lap. (How slightly depends on the driving situation and the other factors that the reference point estimation uses.)
Unlike the normal approach I will try using fewer "DriveTo" points, meaning instead of several per corner the AI will get at most 3 per corner: Entry, Apex and Exit. There will be none per straight, because the next "Entry" marker works well enough for where to go. I feel this is going against the initial goals, but I also wonder if it really is, since the driver has to estimate where to drive to based on their reference points. The more reference points they can use to triangulate the position, the more accurate their point. Meaning, if the reference points close to the DriveTo point are blocked, then the DriveTo point becomes less accurate and harder to hit.
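A hypothetical sketch of how that could look in code (all names are mine, not the project's): the DriveTo point is stored as offsets from a few nearby reference points, and each lap the driver rebuilds it from their *estimated* positions of those points, weighted by confidence. Blocked markers contribute nothing, so the target gets fuzzier exactly as described above. The estimates are assumed to be given in the same order as the stored offsets:

```cpp
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

struct ReferencePointEstimate
{
    Vec2  estimatedPosition;   // where the driver *thinks* the marker is (noisy)
    float confidence;          // 0..1, lower when far away or partially blocked
};

struct DriveToPoint
{
    // Offsets from each associated reference point, recorded when the point was defined.
    std::vector<Vec2> offsetsFromReferences;
};

// Average the positions suggested by each visible reference point, weighted by
// the driver's confidence in that estimate.
Vec2 EstimateDriveToPoint(const DriveToPoint& target,
                          const std::vector<ReferencePointEstimate>& references)
{
    Vec2 sum = { 0.0f, 0.0f };
    float totalWeight = 0.0f;

    for (std::size_t i = 0; i < references.size() && i < target.offsetsFromReferences.size(); ++i)
    {
        const float w = references[i].confidence;
        sum.x += (references[i].estimatedPosition.x + target.offsetsFromReferences[i].x) * w;
        sum.y += (references[i].estimatedPosition.y + target.offsetsFromReferences[i].y) * w;
        totalWeight += w;
    }

    if (totalWeight <= 0.0f)
        return sum;            // nothing visible: caller would fall back to memory
    return { sum.x / totalWeight, sum.y / totalWeight };
}
```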
In this situation I don't think the AI uses reference points as 'braking' points anymore, although perhaps it does and I am missing a step. Perhaps what I am describing happens on a layer above the "Attempt Layer" in my previous idea. Perhaps this is a "Direction Layer", and the "Attempt Layer" tries following the direction as fast as it can.
----------
Now this presents a very interesting problem when it comes to a blind hill, like AS1. In a real situation a human would practice that corner a few times at a slower speed, then speed it up. It's a blind spot, full of unknowns. The memory builds, and reference points let you know what's behind the hill and where to aim - even if you can't see it exactly. So I wonder if my new "DriveTo" point, although hidden behind a hill, would still be estimated based on the Tree-RP and, when visible, the TireStack-RP, which would allow the AI to judge better where they are aiming. Perhaps it fails altogether; but I still need to get this working as an idea before I try coding it!
This is about as much thought as I have been able to put into this lately. My main worry is that it goes against the initial ideas - which I am becoming less sure about, to be honest. I thought I had a clear idea, but apparently I don't, since I am starting to wonder whether this fits with the initial goal or not.
For those who have followed the thread, and *actually understand the question*, please, I want your input!
I'm going to have to agree with Android on this one: the only thing that matters is you know you're oversteering. Any one thing, or multiple combined things, in that "second way of thought" is how a human can perceive that they are oversteering. The most important effects are physical feelings and knowledge of tire traction (via sound/feel through the steering wheel etc.). That is where the AI's sensing will stop in this situation.
Your point about the humans going through the process is slightly skewed. I am sure that during a surprise instance it takes a moment of "wtf happened and what do I do", but when you're driving at the limits you know which tires are likely to start sliding, and you are already countering that, subconsciously or not, while driving at the limit. Take my favorite example: Westhill Rev with the LX6. (You'll see another post about this shortly - still writing it in the other window.) After the chicane, and before the final split, the car is traveling uphill, and it is very important to keep at the limits of the rear tires without going over. It is an extremely balanced act of throttle and steering input, and for fast laps it's important not to go over the threshold of the tires, as that loses time. The sim racer uses visuals to interpret as "physical feeling", but being in the car would give you much more accurate information than just that. The tires also produce sounds, and in this situation you are listening closely to the rear tires, to keep them at the limit without going past it.
Now of course there are situations, several times each lap in that section, where the driver goes above the traction limit and starts sliding. And chances are that even before the slide, actions have been taken to correct it. If you read a bit of the second page you will see I have a "Prevention Layer" and a "Correction Layer" which separate these things. As a driver you know you want to stay at the limit and go up that hill. Your own prevention layer eases the foot off the throttle a bit so that traction stays, and the correction layer is used when you're slipping too much. Of course in the human brain this happens so quickly that we can't really say it happens on multiple levels.
And Android, to answer your question if I haven't yet: I want to use a "Physical Sensor" for the AI to have a feeling of their motion and of the estimated traction of the tires (each one, or front and back only).
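For what it's worth, here is a rough sketch of what that sensor data might look like - the field names are my own guesses, not the actual AIPhysicalSensor. The idea is that the AI only ever sees estimated, human-like quantities (forces felt, how close each tire feels to its limit), never exact physics-engine numbers:

```cpp
struct AIPhysicalSensor
{
    float longitudinalG;      // felt acceleration/braking, in g
    float lateralG;           // felt cornering force, in g
    float yawRate;            // how fast the car feels like it is rotating

    // Estimated fraction of available grip in use per tire, 0.0 (coasting)
    // to ~1.0 (at the limit); above 1.0 means the tire is sliding.
    // Could just as well be two values: front and rear.
    float tireGripUsage[4];
};
```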
I will be back to post a few questions about my progress as soon as I finish trying to figure out the best way to ask. Some things have been taken the wrong way in the thread, probably my own confusion added to the mix, but I want to try explaining it correctly.
Not exactly. I haven't started on the coding front, as I am still working on the technical design. I've been a bit busy the last couple of days with family issues. I pretty much stand at the same spot where I ran into the brick wall, but for an update, this is what I've got.
Most of the high-level, non-technical stuff has been decided.
I started working on the technical design, to prove how things interact with other parts.
I am surprised that I was able to make a single point of entry: AIWorld.
- Once during loading, all reference points are placed into the AIWorld by exact location. (Possibly a terrain mesh, and car meshes for visual obstruction.)
- At the beginning of the frame the AIWorld is notified about the AIPhysicalSensor for each car; including tire traction, g-forces etc.
- At the end of the frame the AIWorld gives back the AICarController information.
The only thing needed, which probably happens at load time, is to load a memory of the track into the AIDriver's memory.
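To picture that single point of entry, here is a bare-bones sketch based on the steps listed above - the method names and the trivial bodies are my own, not the actual code, and the AI "thinking" step that would fill in the decisions is omitted. Everything the game knows goes in through AIWorld; everything the AI decides comes back out as plain controller input:

```cpp
#include <map>
#include <vector>

struct ReferencePoint       { int id; float x, y, z; };
struct AIPhysicalSensor     { float tireGrip[4]; float lateralG, longitudinalG; };
struct AICarControllerInput { float steering, throttle, brake; };

class AIWorld
{
public:
    // Called once while loading the track: every reference point (and possibly
    // the terrain/car meshes used for visibility checks) gets registered here.
    void AddReferencePoint(const ReferencePoint& point) { mPoints.push_back(point); }

    // Called at the start of each frame for every AI car.
    void UpdatePhysicalSensor(int carId, const AIPhysicalSensor& sensor)
    {
        mSensors[carId] = sensor;
        // ...the AIDriver for carId would do its thinking here or shortly after...
    }

    // Called at the end of the frame: the only thing handed back to the game.
    AICarControllerInput GetControllerInput(int carId) const
    {
        auto it = mDecisions.find(carId);
        return it != mDecisions.end() ? it->second : AICarControllerInput{};
    }

private:
    std::vector<ReferencePoint>         mPoints;
    std::map<int, AIPhysicalSensor>     mSensors;
    std::map<int, AICarControllerInput> mDecisions;
};
```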
--------
Which leads me to the large problem of how to make the memory - whether the memory is actually 'memories' or 'instructions'. I use both terms as I haven't decided how it is to be used. But instructions mean the AI will learn from an example that a consistent developer-controlled car will give them: some special mode to let the AIDriver spectate a developer and build their knowledge of what to do and where to do it...
The problem I am having is that by recording a developer's actions, I will likely get information that I don't want the AI to have: "correct this little oversteer here" or "tap the brake lightly to stabilize the car here". I don't know if this information is good for the AI or not, and there are so many changes in input that I can't just register every change.
I have thought of adding some sort of reference point that tells the AI to aim near it when it passes another one that it was aiming at, so there would be one of these per corner - however that is edging toward what I wanted to remove: the "drive to here, then to here" line. But I've been trying to define this type of point only in terms of the other reference points around it, so the instruction is not "DRIVE HERE" but rather "aim to get these X reference points into this orientation".
So a few problems remain:
Getting the memory/instructions, and interpreting where they say to go. Will post later; I have to go deal with family issues and people are rushing me a lot right now.
I don't think I was around for that exploit; if so, I don't remember it well enough - possibly back when I was on the demo?
Anyway, even without this fix, I am not sure how the lifting effect on the BF1 while upside down is not the same as the downward force that pushes it against the road. Surely the same force is causing both effects here; I can't think of another reason for it.
A wing, as designed on the BF1, at speed presses downwards. If you keep the speed the same but turn the BF1 upside down, the wing now presses upwards. That is where I am coming from, so I don't see how it's not the same force?
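A tiny sketch of the point being made (illustrative numbers only, nothing from LFS): the wing force grows with speed squared and always pushes along the car's own "down" axis, so flipping the car flips where that force points in the world:

```cpp
struct Vec3 { float x, y, z; };

// F = 0.5 * airDensity * (Cl * area) * v^2, applied along the car's local down axis.
Vec3 WingForceWorld(float speed, const Vec3& carDownInWorld)
{
    const float airDensity = 1.2f;          // kg/m^3, roughly sea level
    const float liftCoeffTimesArea = 4.0f;  // made-up value for illustration
    const float magnitude = 0.5f * airDensity * liftCoeffTimesArea * speed * speed;

    // Right side up: carDownInWorld = (0,-1,0) -> force presses the car onto the road.
    // Upside down:   carDownInWorld = (0,+1,0) -> the same force now lifts it skyward.
    return { carDownInWorld.x * magnitude,
             carDownInWorld.y * magnitude,
             carDownInWorld.z * magnitude };
}
```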
You just completely confused me on several levels. The thing bugging me most first: the "no horizon" statement. What I meant was no visible horizon of mountains, trees etc. - just grass forever.
About the "we don't actually see points" I wasn't going into the miracle's of how the eye works and how the brain interprets these signals. Though if you wanna turn it around, in the game/simulation you see points; they are called pixels. But besides that, I wasn't being 100% literal with the "we see points" but you can see how I broke it down into "we see points", if not ask again and I will try explaining both those situations above again.
That was exactly the point I was making: a human with no appropriate shadows would be very confused; add a few dots and they would at least know where to stop running before they slammed into the wall again, even if the dots are not on the wall they run towards. My point there was that points are used for reference. No other knowledge needed.
Well, then that might be the case for this project, but who really knows the outcome.
What does position mean if it is not relative to something else? Nothing. I am trying to make it so the AI doesn't care about its absolute position but DOES care about its position relative to other objects: how close am I? What direction is it in? I am not saying the AI won't triangulate its position based on the points - that is a very plausible thing to do - though it will still be estimated.
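A small sketch of what "caring about position relative to objects" could look like in code (my own names and noise model, not the project's): the AI never reads its world coordinates, it only gets an estimated distance and direction to each visible reference point, with the estimate getting fuzzier the further away the point is:

```cpp
#include <cmath>
#include <cstdlib>

struct Vec2 { float x, y; };

struct RelativeObservation
{
    float estimatedDistance;   // "how close am I?"
    float estimatedBearing;    // "what direction is it in?", radians from car heading
};

static float Noise(float scale)   // crude placeholder noise, +/- scale
{
    return scale * (2.0f * (std::rand() / static_cast<float>(RAND_MAX)) - 1.0f);
}

RelativeObservation Observe(const Vec2& carPos, float carHeading, const Vec2& point)
{
    const float dx = point.x - carPos.x;
    const float dy = point.y - carPos.y;
    const float trueDistance = std::sqrt(dx * dx + dy * dy);
    const float trueBearing  = std::atan2(dy, dx) - carHeading;

    // Error grows with distance: nearby cones are judged well, far ones poorly.
    return { trueDistance + Noise(0.05f * trueDistance),
             trueBearing  + Noise(0.02f) };
}
```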
This didn't tell me how you consider it cheating. My earlier posts said I want to get rid of information such as a direct racing line to follow - not that a developer couldn't place a cone that holds reference point information. In either case, I don't see how knowing these reference points gives the AI more information than the player, so I still don't see how it causes the AI to have more information than the player. Please don't dodge the question if you have an example of it cheating.
No, I understand 100% what you mean: if you're on a peak and comparing something you tried, you will consider it worse than what you already have - even when you're not at the top of Everest yet. But I don't get how having a "starting point" in this situation helps at all, as it could be that the trainer / starting point is actually at that smaller peak - therefore the AI doesn't go anywhere else.
It is important to know that I haven't had major plans for the AI actually learning the track; I've toyed around with it here and there - and that's where that comment originally came from: me toying with the idea of the AI learning from nothing, or spectating a human trainer that gives them their info.
It seems to me that to get closer to something that might work, you have to stop thinking so much like a human, and more like an AI
Possibly, but currently I have been thinking on a higher level, and have dipped into the AI level a few times. If you've noticed, a few posts up I wrote that I have it all formed together except how to achieve these instructions/memories - whichever it is. At that level, the AI is bringing up a reference point and going "this is where I turn".
I really think it's a shame that you're not interested in pursuing visual processing. If I had the time, that's where the interesting stuff is.
This isn't remotely interesting to me. Actually I despise graphics coding, and even though this isn't graphics coding, it is the reverse of it. My interest is in the AI part of this equation, though it was more of a passing thought.
To be honest, I am becoming more doubtful about the development stage of the project. If I can come up with a clear way to use these reference points and train the AI somehow, then I might be more inclined, but I still have a lot of thinking to do there and it is becoming the brick wall. I know there are ways of doing it, but I am still considering the best way to achieve it. So far my ideas fail in some regards even if they work in others.
It's certainly not a problem with the model (not since the high nose fix at least), as there's some very noticeable lift if you flip the BF1 upside down at high speeds.
The BF1 'flying' while upside down is just the downforce from the wings acting. That force is still applied exactly the same as if the car were right-side up, but since the car is upside down it flies. It may not be a problem with the model; as Android pointed out, it could be set to 0, eliminating the lift for those cars without downforce. It would be interesting to race when the car gets lighter the faster you go!
blackbird04217 has stated that he wants to build an AI that has only the same inputs available as a human. Anything else would be cheating.
IMO for this rule to be followed, any system of synthetic markers placed by the developer would be a 'cheat'.
Personally, I would do as you suggest, using all available data on the track and other cars that is available from the game engine. However, that is not what this project is about!
As someone said on the first page: you need to know the limits of the track. Whilst I am trying to give the AI what the player has to work with, I also know there are *some* constraints to deal with. I have considered going about it by using each vertex in the terrain as a reference point - far more reference points in the world for the AI to work with, thus giving it more "player-like input" - but it does not need to visually see. One of my examples above shows the bare minimum needed to go around the Blackwood turn: go under the bridge and start moving to the outside; look for the distance markers and brake; turn towards the tires, getting as close to them at the apex as you can. This turn could be taken with 3 to 5 reference points alone - in a completely black environment.
I'm not here to try reading back screen data; we all know computing power can't handle that in the least, so it would be a massive disadvantage to the AI. Mainly I want to remove "HERE, DRIVE HERE" from the AI. I want to make them MORE AWARE of the environment, and use this combined with memories or instructions (or something undefined) to go around the track. This is where some issues are playing with the design, and I am still looking for ways to solve them. I've been looking for about three days, but you know, I was looking for a challenge.
And FWIW, your eyes don't reference things by points at all. One of the problems of dealing with this kind of AI project is that what people consciously think they are doing and considering, and what information is actually being subconsciously processed are two completely different things.
Col
I kinda beg to differ. I completely agree that what everyone thinks they do is different from what they actually do when it comes to subconscious levels. But you can't tell me we don't work on points. Imagine you were placed in a room, completely white, with the lighting set up *just* perfectly to make every surface the same exact white; there were no bumps in this room, no surface changes - 100% flat, everything the same color - and you didn't cast a shadow on any surface... This is a very hypothetical situation, but you know the result because it can be tested in the computer world. You would be completely confused. The only thing telling you that you are on the bottom of the room is the feeling of gravity. You could run full speed from one end, and you would never know where you would SMACK into the other wall. Put just a small handful of irregularities in the room, and you can now tell where you are. These irregularities are reference POINTS.
I see why you think we don't see in points, though. Out in a massive field with no horizon, only grass below and empty blue sky above, you can still walk in a circle and stand where you once were, based on millions of these reference points, because each blade of grass is different and our mind is powerful enough to compute the differences and say "here is where we want to be". I hope this illustrates how we actually do use reference points, and the only downside for the AI will be that it has fewer points than the player.
So back to the first quote you made, where these 'synthetic points' are considered cheating in your mind. Why would making a single point with minimal information - a position in the world and a name/id - be cheating? You see a cone and identify it as a different cone from the others, on a very subconscious level, and you can estimate where you are by estimating the direction the cone is in and how far away it is from you. I don't see how it is cheating to turn the cone into a point of information/interest, if that is roughly what it already is. Surely the brain sees a single cone and can interpret multiple points immediately: at least four on the base and one at the top, and any dirty spots become good points as well. I just want to know how turning that into a point for the AI to detect means the AI is cheating, when it is not giving the AI a direct "GO TO HERE BY DOING THIS" line. Removing that line is one of many goals of the project.
No. An AI that can start from scratch will be MUCH more difficult than one that is given a close approximation as a starting point.
Well, take what I said there loosely, as the ending was semi-sarcastic but mostly pointing out that either route is difficult. I don't see why a learning algorithm that has the AI start from scratch would be any different from having no learning algorithm and starting from an example. I get your mountain example very clearly, but I don't see how having a starting point fixes that issue. Either way, it won't be easy to capture data from a human player driving a track, because I need to try separating the layers of driving: Attempt, Prevention and Correction. The only layer the AI should record is Attempt, stored as little memories/instructions based on reference points; Prevention is second nature, and Correction tries keeping the AI on track when the car starts sliding out from under them. How do you define a braking point? First contact of the brakes? What if it's only used to keep the car balanced over some form of bump? That would be the prevention layer. That is where it becomes "as difficult as" teaching the AI to go from scratch. Again, take this loosely.
Already been asked, at the start of this page or the end of the first...
The only question left remaining of how to do this is the reference points themselves - well, the instructions/memories that are attached to them. Of course that is the most important part of the project, but it is the point where I am scratching my head.
I am fiddling with the idea mentioned above of having a driver race around recording the data - but that seems almost trickier than somehow telling the AI to just go. So yeah, thanks for pointing out the obvious, but I have already been getting caught up in that knot, though I have most of everything else planned out - in a nice clean way, so the AI is quite independent of the other systems involved. The car needs to talk to the PhysicalSensor every frame / often, the car needs to be talked to by the AIController, and the GameWorld needs to talk to the AIWorld once at load-up of the track.
But none of this will be set in motion until I can find a way for the AI to be instructed, be it pre-recording a lap or learning from scratch. I like the sound of pre-recording, but I have a lot to look into. How do you mark an event as important for recording? I.e. a driver changes throttle/braking input constantly, but I need to record only the large changes.
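A first-guess sketch (not the project's code) of that "record only the large changes" idea: watch the recorded inputs and emit an event only when a control has moved by more than some threshold since the last recorded event. The thresholds are made-up numbers to tune, and this would still capture some Prevention/Correction twitches, so it's only a starting point:

```cpp
#include <cmath>
#include <vector>

struct DriverInput { float throttle, brake, steering; };
struct InputEvent  { float time; DriverInput input; };

class SignificantChangeRecorder
{
public:
    void Sample(float time, const DriverInput& in)
    {
        // Compare against the input at the *last recorded event*, not the last
        // frame, so slow drifts eventually register but tiny jitter never does.
        if (std::fabs(in.throttle - mLast.throttle) > 0.25f ||
            std::fabs(in.brake    - mLast.brake)    > 0.25f ||
            std::fabs(in.steering - mLast.steering) > 0.15f)
        {
            mEvents.push_back({ time, in });
            mLast = in;
        }
    }

    const std::vector<InputEvent>& Events() const { return mEvents; }

private:
    DriverInput mLast = { 0.0f, 0.0f, 0.0f };
    std::vector<InputEvent> mEvents;
};
```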
I see. Yeah, I've been thinking that the AI would be 'pre-trained' by watching a developer race the track for 5 laps or so, averaging where and what they do with some magical method that I have no ideas about yet, and using that as their baseline "intent" layer. Meaning that is what the AI tries to achieve, and the rest just happens. This could later be developed into recording the player's laps, so the AI takes on a form of the player, though that might be less desirable. The 5 laps I was talking about would be someone very consistent driving to set a good example.
Sorry if that sounded smug; that wasn't the intention - more sarcasm, though the last bit does sound smug even when I read it back. I was simply curious.
As for the implementation, I do not expect an easy challenge, as I've said before. But I also don't expect the AI to be challenging to a human either. I originally had different ideas for the project, and I will keep those ideas open, but I have deviated from the original plan for a few reasons.
I agree the brain processes a crapload more than a computer ever can. But we process more from our previous experiences, searching for the one(s) that most match the situation and then learning from those mistakes - all in an extremely short time. As I said, I don't plan to get anywhere near OMG AI. As I was saying a few posts up about the layering of decisions, the brain does these layers all at once. It is best described, and likely implemented, in layers because that abstracts things - in practice things can change.
It's meant as an exercise, practice, a learning experience, and most of all an interesting project. If the AI is miraculously amazing then I will be more than delighted, but I do not expect such results - actually I expect the AI will hardly go around the track correctly, let alone at the limit of the car!
I have started building a very, very simple world while I continue developing my information and ideas. The world will consist of a very simple, flat track and several cones, which indicate reference points. The track can be turned off and on and is mostly there for the human's perspective. I still have a few hurdles and problems to get over, but I will be that much closer if I can get a small environment with a car and *very* basic physics. These physics will not include collision for the time being, and will likely not deal with weight transfer. I do hope for some form of friction to test oversteer / understeer characteristics, but beyond that I don't know how far it will get.
I may well be underestimating things to a degree, but don't overestimate the project here. Even if I am underestimating, I am not worried about doing so; the worst-case scenario for me is that it doesn't work. It's not like a multimillion-dollar project depends on this - if it did, I would certainly take the safe route versus trying new things.
By coding in a manner that completes the specific goals. The technical side of this is still being researched, and will then be developed. Currently the discussion is going great for the general ideas, and once I narrow things down it will transition into more technical ideas. Although, in all seriousness, a lot of technical things have been discussed already in a very non-technical way, which seems to make great sense for everyone.
Are you worried about how it is pulled off? I don't get where the question comes from.
Wow! First, I can't believe I missed the wall of text after looking for it twice before this. (The last post on the first page was apparently missed while I responded to Dygear, and was skipped each time I searched for it.)
AndroidXP - great input in that wall of text; it was actually where I was heading with my thoughts, except I went with a slightly different version of the same thing. I don't know if you saw my post above about the technical ways of doing the thing you were talking about. Regardless, I will explain the idea I've had for a little while now, though it will sound quite like your wall of text.
I think there are several "layers" going on at the same time while driving. Perhaps layers is not what is actually going on, but it is a good way to think of it, and better yet it might be useful during implementation. There may be more layers than I describe, but for now I will keep it simple, I hope: Attempt, Prevention, Correction. I don't think these names fit, but I couldn't find anything better, as this is the first time I am writing the thought down.
The Attempt layer is where memories and instructions are stored, processed and attempted. We've agreed that at point X we know to brake; that would be included in this layer. This is the driver's overall goal: to follow the ideal line. This layer does not care how much traction is really available, what condition the car is in or whether the tires are warmed up. It just knows, "this is what I want to attempt to do".
The Prevention layer is a bit different, as it knows what is being attempted, but it is the layer saying that this attempt is likely not the best-case scenario. Prevention is where the brain processes the actions before they are executed, trying to prevent the brakes locking up or throttle oversteer. The prevention layer will use the grip levels of the tires and know to try keeping them below the limits - not by much, but trying not to push too hard.
The Correction layer is what happens when it all goes wrong, which it will. Okay, now we are oversteering; let's figure out why and change something. I think Android's conditions -> errors -> resolve actions would work perfectly on this level. Also, this level of thinking may require reaction time / pre-thinking. Near the limits you know the correction layer may be required, and in certain situations you already know how to react before you need to, in small twitch movements - that would be preparing / pre-thinking. And then you have the moments where, oh my, that was an oil spot and my rear tires just lost grip completely. It will take a moment - not long, but a moment - to react.
So far we have been focusing on driving around the track, but I would likely add another layer: Car Control. What I mean by this layer is: Do I need to shift? Will I need to shift soon, therefore moving my hand to the shifter to prepare? Is fuel okay, should I request a pitstop? Etc.
Car Control might even be the layer where the finalized attempts are computed into actual actions. I did notice there is a difference between where I want to go and what I need to do to get there, and I think these layers combine to make what I want to do actually happen, even if the input to do so is different. Man, I need to learn how to say what I want without being overly confusing.
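Maybe a tiny sketch helps show what I mean by the layers stacking - the names and the placeholder logic here are just illustrative guesses (Car Control is left out to keep it short), not a real design. Each layer takes the output of the one above it and adjusts it:

```cpp
#include <algorithm>

struct CarState     { float speed; float rearSlip; float frontSlip; };
struct DriverIntent { float steering, throttle, brake; };

// Attempt: "this is what I want to do", based purely on memories/instructions.
DriverIntent AttemptLayer(const CarState&)
{
    return { 0.1f, 1.0f, 0.0f };                 // placeholder: slight turn, full throttle
}

// Prevention: tone the attempt down before it spins the tires or locks the brakes.
DriverIntent PreventionLayer(const CarState& car, DriverIntent intent)
{
    if (car.rearSlip > 0.9f)                     // close to the rear grip limit
        intent.throttle = std::min(intent.throttle, 0.8f);
    return intent;
}

// Correction: react after it has already gone wrong (oversteer, lock-up...).
DriverIntent CorrectionLayer(const CarState& car, DriverIntent intent)
{
    if (car.rearSlip > 1.0f)                     // actually sliding: countersteer, lift
    {
        intent.steering -= 0.2f;                 // sign kept simple; depends on slide direction
        intent.throttle  = 0.0f;
    }
    return intent;
}

DriverIntent ThinkOneStep(const CarState& car)
{
    DriverIntent intent = AttemptLayer(car);
    intent = PreventionLayer(car, intent);
    return CorrectionLayer(car, intent);
}
```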
I think we are getting close to something that will be very usable and helpful to me. All the input in here has been great to read and very helpful in deciding what to do.
I would have thought it would be possible to make an AI system that could use multiple 'systems' for driving with. Perhaps allowing you to turn some off, introduce new ones, or run with several at the same time...
That way you can try several ways of coding the AI, compare each method (not just in AI ability, but also in CPU load), and perhaps learn what is best and why....
Off topic, sort of: Is that what they call OOP, with each 'method' being an 'object'?
That was actually the exact thought that started this project originally. But I didn't find the coders I was hoping for, other than that TORCS thing, which gives me a wary feeling. My original idea had a few programmers working with each other to create a realistic and challenging AI driver, and while creating it we would be competing our ideas against each other... But then I realized that I wanted to prove my ideas first; as was said earlier, I may have publicly mentioned this project too soon.
You're close to the right idea with OOP. Put it this way:
Driver is an object. Player is a Driver, and AIDriver is a Driver, and they can all do the same things if designed that way. So when Driver::GetInput() is called, the G25/KB or whatever is polled in the Player's implementation, while the AI performs its own logic and returns the input. The code calling Driver::GetInput() doesn't care where the input comes from, as long as it gets it. In much the same way, in your idea above OOP could be used to create AIAggressive, AICoward etc...
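A bare-bones sketch of that relationship in C++ - the names beyond Driver::GetInput() and the placeholder return values are my own, just to show the shape of the idea:

```cpp
struct DriverInput { float steering, throttle, brake; };

class Driver
{
public:
    virtual ~Driver() = default;
    virtual DriverInput GetInput() = 0;   // the only thing the caller cares about
};

class Player : public Driver
{
public:
    DriverInput GetInput() override
    {
        // Poll the G25 / keyboard here; placeholder values stand in for real polling.
        return { 0.0f, 0.0f, 0.0f };
    }
};

class AIDriver : public Driver
{
public:
    DriverInput GetInput() override
    {
        // Run the AI's own thinking (reference points, layers, ...) instead.
        return { 0.0f, 0.5f, 0.0f };      // placeholder decision
    }
};
```

Swapping in an AIAggressive or AICoward would just mean another class deriving from Driver (or from AIDriver), and nothing else in the game has to change.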
Whereas I would like to aim this question at everyone in general, I don't think that will be possible, so I turn to the programmers around to see if they have ideas.
Two cones on the outside of a corner show where the corner starts and ends. One cone at the halfway point on the inside of the corner shows the middle. With nothing else, including no other traffic, how can I go about solving this as close to the way a human would? (See the left side of the attached image; the right side is for those with a lack of imagination. It should be seen, at least mostly, as only dots - and it is not required that this is a 90-degree turn.)
You can assume that the AI driver has been through here before, so maybe each reference point tells the driver to do something. But in technical terms, what? And how? This is where I am stumped now. I do have a solid structure, I think, for the rest of the project. The AI sensors are the easy part, so you have these points (as estimations, of course) and you may have memories/instructions attached to them - but how?
Here are my thoughts, mostly from this thread.
Take Dot1, which is the closest, outside point and is also approximately where the driver should be anyway, just based on previous knowledge and car placement. That point will tell me to brake when I get to a certain distance from it, say 70 meters. So I am driving at X speed, closing in on the dot. Once I estimate I am closer than 70 meters, I begin braking as hard as I can. My next instruction tells me to turn, and likely the direction I want to aim, so when the distance drops below that threshold I begin turning. (Some of the turning will go into 'natural behavior', which keeps the car under control while at the limits.) The next point would be the apex point; I try to get as close to it as I can in the given circumstances: speed, traction, lucky/unlucky corner estimations... But I know that once that point is 90 degrees from my driving perspective, it is time to start unwinding the car to set up for the next corner - get on the throttle again and behave as if I were on a straight.
Okay, so that was literally the first time I thought of it and wrote it out, and I don't really know how it sounds; I'll re-read after I post. But you can see the idea. The driver has a list of instructions to follow. Each instruction is a major change in car behavior, attached to a reference point. Each instruction should also be attached to other reference points, so that if one is blocked by another vehicle the instruction can still fire. It should be noted that distance is not always the best trigger for an instruction; it could even have 'ranges' to fire on both distance and direction/angle. Theorizing here.
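In code, a throwaway sketch of that instruction list might look something like this - everything here is hypothetical naming on my part, just to show the shape of "an instruction watches a reference point and fires when the estimated distance or angle crosses its trigger":

```cpp
#include <vector>

enum class ActionType { BrakeHard, TurnIn, Unwind, FullThrottle };

struct Instruction
{
    int        referencePointId;   // primary point (backup points could be added)
    float      triggerDistance;    // fire when estimated distance drops below this...
    float      triggerAngle;       // ...or when the point passes this angle off the nose
    ActionType action;
};

struct PointEstimate { int id; float distance; float angleOffNose; };

// Returns the actions whose trigger conditions are met this frame, given the
// driver's current (noisy) estimates of the reference points they can see.
std::vector<ActionType> FiredInstructions(const std::vector<Instruction>& instructions,
                                          const std::vector<PointEstimate>& estimates)
{
    std::vector<ActionType> fired;
    for (const Instruction& inst : instructions)
        for (const PointEstimate& est : estimates)
            if (est.id == inst.referencePointId &&
                (est.distance < inst.triggerDistance ||
                 est.angleOffNose > inst.triggerAngle))
                fired.push_back(inst.action);
    return fired;
}
```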
Any ideas? The rest of this seems doable; this is my main challenge. And remember, I have no objections to training the AI by having a human make a few runs with the AI attached to the car, gathering information about the control changes at specific reference points. But a problem exists here in that I want few instructions, not every small change in throttle control or braking.
I am trying to make a list of what is important to this project and how to go about solving it. I will make a post again shortly.
I also wanted to mention my plans regarding your clouded vision idea. I 110% agree that situations need to make the driver estimate more or less clearly, and that is planned in this project. A situation where the driver has lost control will overload the vision sensor and the estimates will likely be very wrong. I plan for this area to be where the AI can become more or less difficult, simply by changing what affects the driver and by how much. A driver behind adding pressure can decrease the senses as well, and my plan is for the AI to have things to pay attention to...
I believe it was said somewhere in the thread, but the driver will need to turn their 'head' to focus on different objects, so if they want to check the mirrors, that will make other parts of their vision temporarily less accurate. I am still searching for a way to pull this off the way I want, because it is important to me to have this as accurate as I can. Meaning, we humans can shut our eyes for a split second and still know what the scene will look like when we open them; we pre-compute the positions based on our knowledge of the previous two (or more) interpretations. If you do this with the brakes on, or under hard acceleration, it is much harder to line your vision up with what you expected upon opening your eyes. Of course a driver doesn't close their eyes, but I am mostly referring to the knowledge of our blind spots. Writing this post put a few things in perspective for me as far as the prediction thing goes.
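A quick sketch of that "looking at the mirrors makes the rest less accurate" idea - purely hypothetical numbers and names. Each tracked reference point gets a confidence that decays while the driver's focus is elsewhere, and while unobserved its position is predicted forward from the last known relative motion:

```cpp
struct Vec2 { float x, y; };

struct TrackedPoint
{
    Vec2  lastSeenPosition;    // relative to the car when last actually looked at
    Vec2  lastSeenVelocity;    // how it was moving relative to the car
    float confidence;          // 1.0 right after looking, decays toward 0
};

void UpdateTrackedPoint(TrackedPoint& p, bool currentlyInFocus,
                        const Vec2& observedPos, const Vec2& observedVel, float dt)
{
    if (currentlyInFocus)
    {
        // Fresh observation: snap to it and restore full confidence.
        p.lastSeenPosition = observedPos;
        p.lastSeenVelocity = observedVel;
        p.confidence = 1.0f;
    }
    else
    {
        // Not being looked at: predict forward and let confidence decay.
        // The decay could be faster under braking/acceleration, as described above.
        p.lastSeenPosition.x += p.lastSeenVelocity.x * dt;
        p.lastSeenPosition.y += p.lastSeenVelocity.y * dt;
        p.confidence -= 0.5f * dt;
        if (p.confidence < 0.0f) p.confidence = 0.0f;
    }
}
```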
I am not assuming that humans are incapable of knowing the physics at all; actually, what you do with the car depends on your knowledge of and assumptions about the physical world around you. However, you do not know numbers, you know relations. You know gravity pulls down, and that going over a crest will give you less traction. This is based on your experience of a car getting loose in this situation, but you can apply it any time in the future that you interpret a crest situation.
The act of removing the dependencies between the AI and physics systems is mostly because of two things: I feel they should not be related, and a lot of common AI techniques rely on this dependency. You are completely correct in saying we know the physical world around us, but I think you are wrong about our knowledge of physics. Take the golf example, although let's simplify it a bit.
A golfer hits a ball: depending on the angle of the swing, the ball will aim towards the left or right. Depending on how hard the hit is, the ball will go far or stay near. And finally, depending on the length of contact with the club, the ball will travel higher and stay airborne longer. (I don't actually play golf, so I could be wrong; either way this still works for my example.) So a good golfer can take these three actions with the ball and use them to get a hole in one every time. Then we add some wind: a golfer that has experienced wind will know to aim away from the hole in order to account for the wind, whereas another golfer wouldn't have that experience yet.
I believe this is set up exactly the same as the situation you described, just slightly more detailed. Now let's move from Earth to, say, the Moon, or Mars, or some other place... The gravitational pull is different; the air resistance is different or completely non-existent. If we took an identical golf course in both places, then the golfer who had never played in the new environment would likely overshoot the hole even if aimed correctly. But as you stated, they would learn the new physical environment, probably faster than you or I even think possible. BUT here is the point that I am making:
Even with the different environment, the controls the golfer has available are still identical. Aiming the swing changes the direction the ball travels, hitting the ball harder increases distance, and keeping the ball in contact with the club longer increases height. So the physics completely changed, but the player still knows how to play the game; they just need to take a few shots to get settled in the new physical environment. Does this explain what I am trying to do a bit better?
As racers we know what to do in understeer/oversteer situations depending on what caused them in the first place, and really we could move to a slippery surface, a grippy surface or a low-gravity situation and our actions would still be similar; we still only have a steering wheel and pedals to act with. The physics is irrelevant to our knowledge, but it is *everything* to our experience.