wien
S3 licensed
Yeah, I did consider that briefly. Syncing across multiplayer especially could lead to a pretty big bandwidth overhead if you want any kind of accuracy in the tracks left by other cars. The current position packets wouldn't be enough to ensure the same track surface for all players, so you'd probably have to settle for something less accurate in multiplayer.

I'm also pretty sure low res physics would be out the window as you need cars at the opposite end of the track to leave their mark, but LFS already runs full physics for all cars in single player with AI so that's not necessarily a huge problem. It will elevate the system requirements of course, but that's pretty much going to happen no matter how clever you get at this level.

But yeah, there are a ton of "little" issues like this when you get down into the details, which is why high level hand waving is so much fun.
wien
S3 licensed
Quote from shadow2kx :If you want a compromise between cpu and memory why not use a culling method? While driving part of the track data will be loaded (like it's done now) as well as deformation data. So even if you need a huge file containing the data, it wouldn't take a lot of space in memory.

Err, you'd obviously cull irrelevant geometry when rendering or doing physics, but the data needs to be persistent. You can't just drop it once you've driven past. The next lap the damage you did to the track needs to still be there.

Quote from shadow2kx :Another approach would be to have only spaced points of deformation with a radius of action

Well, that's what the vertices would be if you went the way of Sega Rally. But you can't spread them out very far because then you wouldn't get defined tyre tracks and other high frequency detail. Having a bump every other meter isn't good enough.
Last edited by wien, .
wien
S3 licensed
Quote from Shotglass :or if the nordschleife were covered in mud and 10m wide and you wanted a 64 bit 1cm mesh youd still only need about 160mb

Hmm, you sure about this one? I just did the math, and I'm getting 15.5 GB (1000 * 100 vertices per metre * 20810 metres * 8 bytes).

EDIT: Foiled.

EDIT2: But to take a relevant example: Aston GP (?) at 8.8 km, 10 metres wide with a 5 cm mesh and 16 bits per vertex would result in a 67 MB dataset. Hardly unreasonable. The gravel tracks in LFS are obviously shorter, but the same model could probably be used for weather effects on tarmac as well.
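For reference, the arithmetic can be sketched in a few lines of Python (treating the surface as a dense 2D vertex grid; the function name and layout are my own illustration):

```python
def heightmap_bytes(length_m, width_m, cell_m, bytes_per_vertex):
    """Size of a dense 2D height grid covering a track surface."""
    along = length_m / cell_m   # vertices along the track
    across = width_m / cell_m   # vertices across the track
    return along * across * bytes_per_vertex

# Nordschleife: 20810 m long, 10 m wide, 1 cm mesh, 8 bytes per vertex
print(heightmap_bytes(20810, 10, 0.01, 8) / 2**30)  # ~15.5 (GB)

# Aston GP: 8.8 km long, 10 m wide, 5 cm mesh, 16 bits (2 bytes) per vertex
print(heightmap_bytes(8800, 10, 0.05, 2) / 2**20)   # ~67 (MB)
```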
Last edited by wien, .
wien
S3 licensed
Well, that's it then. No fancy tricks required. 16 bits per vertex should be enough to store height, density and moisture, I reckon. Those parameters combined with global variables like temperature should give you a huge variation in possible conditions.

Guess I got thrown off by the 1.6GB estimate earlier in the thread, though that was obviously a much denser mesh/texture.

You'd need more info than this per vertex of course, but that could easily be deduced from a lower-res polygon mesh, with the height then applied as an offset from that as you render the thing. This is easy to do using stream-out in a geometry shader or DX11 tessellation, but it probably wouldn't be too bad to do on the CPU either on lesser hardware.
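A sketch of what such a 16-bit vertex could look like (the 8/4/4 bit split between height, density and moisture is purely my guess for illustration):

```python
def pack_vertex(height, density, moisture):
    """Pack height (0-255), density (0-15) and moisture (0-15) into 16 bits."""
    assert 0 <= height < 256 and 0 <= density < 16 and 0 <= moisture < 16
    return (height << 8) | (density << 4) | moisture

def unpack_vertex(v):
    """Reverse of pack_vertex: split 16 bits back into the three fields."""
    return (v >> 8) & 0xFF, (v >> 4) & 0xF, v & 0xF
```

Round-tripping through pack/unpack is lossless, so the whole track dataset stays at two bytes per vertex.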
wien
S3 licensed
Quote from wien :...which is why I was wondering how Sega did it...

According to this video (~4:10) they use "a 6 cm polygon mesh". So they're actually deforming the track mesh itself instead of overlaying a texture, but it's the same general idea I guess. Seems like they're using relatively short tracks though, so that probably helps a bit. Still surprised they can fit it all in the Xbox's memory (not having done any math on it to get an actual number).
wien
S3 licensed
Ah, then we're on the same page. With the current LFS tyre model that wouldn't work very well, no. Adding this stuff would probably require a major overhaul of the model anyway though, so I don't know how relevant it would be. But yeah, it's something that would have to be fixed.

It will be exciting to see if the new tyre model will deliver any improvements here.
wien
S3 licensed
Quote from Shotglass :keep in mind that consoles usually have a shared ram architecture between the cpu and gpu with ram thats magnitudes faster than what you have on pcs

Oh sure, what I was worrying about was the grand total lack of space. You'd have to stream to/from disk, and some Xboxes don't even have disks, which is why I was wondering how Sega did it...

Quote from Shotglass :oh yes... one dimensional single contact tyres cant simulate grooves in the driving surface

Well, the shape of the tyre is known, so as long as you have a one-dimensional load sample and the tyre angle, you can easily deduce how much of the surface to displace, the width of the groove, etc.
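Something like this, roughly (the linear load-to-depth model and the stiffness constant are made up for illustration; a real sim would feed in the surface's density/moisture data instead):

```python
import math

def groove(load_n, tyre_width_m, camber_rad, surface_stiffness):
    """Approximate groove width and depth from a single load sample.

    surface_stiffness is a made-up constant standing in for the
    density/moisture data a real surface model would supply.
    Softer surface (lower stiffness) -> deeper rut.
    """
    width = tyre_width_m * math.cos(camber_rad)   # projected contact width
    depth = load_n / (surface_stiffness * width)  # linear sink-in model
    return width, depth
```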

EDIT: Wait, did I misunderstand your point? Did you mean the tyres can't react properly to the ruts in the surface? That's a very good point I completely missed, which is par for the course for me.
Last edited by wien, .
wien
S3 licensed
Quote from Shotglass :i doubt that this is feasible in any way for a race longer than say 3 laps as the requirements of calculating the effects of say 20 cars with 4 wheels each for 30 laps over and over again would quickly balloon into something thats just not doable
and since those have to be applied in the correct order of time its impossible to parallelise too

All good points. You'd obviously have to fade out older "tracks" as the race progresses. Possibly combine the old ones into larger and more sparsely sampled tracks as the simulation runs. It won't be anywhere near 100% accurate of course, but it may be good enough. Storing 50+ laps and playing them back as you go is obviously unreasonable.

And while you're right that it'd be hard to parallelise the data generation itself, you could very easily offload the whole process to another core. Also, by tiling "upcoming" track sectors you could easily have many cores working simultaneously to generate the data you'll need in the near future.
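The fade-and-consolidate idea might look roughly like this (the two-lap cutoff and the every-other-sample downsampling are arbitrary choices of mine):

```python
def consolidate(laps, keep_recent=2):
    """Keep the newest laps at full sample rate; thin out older ones.

    Each lap is a list of (position, force) samples. Older laps are
    downsampled to every other sample, roughly halving their storage
    while keeping a coarse record of the surface damage they caused.
    """
    recent = laps[-keep_recent:]
    older = [lap[::2] for lap in laps[:-keep_recent]]
    return older + recent
```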

Quote from Shotglass :also i rather doubt that the amount of data to store wheel position wheel forces and wheel rotation lap after lap will be any less than for a detailed height map if you look at races of realistic length and number of drivers

Oh it'd be a huge chunk of data, no doubt about it, but my initial gut feel is that it'd still be cheaper than a giant texture containing the required data. Especially across the longer tracks. You can always reduce the sample frequency of the tyre-tracks and interpolate if it becomes a problem as well.

Hard to say either way without experimenting though. Anyone know how the Sega Rally game did this? They even did it on the consoles, so they'd have to do a huge amount of streaming in and out of memory if they used a monolithic texture.

Quote from Shotglass :also theres a far more fundamental problem which is that lfs' tyres arent three dimensional (currently lets see what the new tyre model brings to the table)

I don't see that as a problem to be honest. As I imagine it you'd store simple point samples with load information and probably wheel speed/angle. Based on that it should be possible to reasonably approximate what the tyre was doing at the time. At least come close enough. Or am I forgetting something crucial?

Quote from Shotglass :rigs of rods has recently gone open source and has tyres that are a lot more 3d than those in lfs

Yeah, that could be interesting to look into I guess. Never actually looked at that code. It's Not Invented Here though (Oh the horror!)
wien
S3 licensed
You mean to say you've not been off sipping mimosas in your Barbados mansion? Great news. Looking very much forward to giving it a whirl.
wien
S3 licensed
Quote from NotAnIllusion :Is there nothing in the DX or graphic driver APIs that would reduce the need for storing very similar or identical pixels/bitmaps/meshes/etc in such a resource intensive way? Like cloning of some sort, based on some parameters specific to that (group of) pixels/etc

(I have no idea, never even looked at graphics programming, don't shoot the noob)

Well there's different types of texture compression, but when you also need to process the texture on the CPU I'm not sure how well they'd work. Either way they can't perform magic. You need unique data for the entire surface of the track however you look at it, so the dataset will be huge.

But of course, there are lots of ways you could do this. The one I mentioned is one way. I'm sure actual smart people (like Scawen) could come up with something better. In general though it's a memory/CPU tradeoff. You can save memory by using clever data structures, compression and algorithms, but this will usually require more CPU to compensate. Striking the right balance is critical.
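As a concrete example of that tradeoff, you could store only the tiles of the track that have actually been deformed; untouched surface costs nothing in memory, at the price of an extra lookup on every access. The tile size and dict-of-tiles layout here are just illustrative choices of mine:

```python
class SparseHeightmap:
    """Height data stored per 64x64 tile, allocated only when written.

    Untouched track stays at zero memory cost; the price is an extra
    dictionary lookup (CPU) on every read and write.
    """
    TILE = 64

    def __init__(self):
        self.tiles = {}  # (tile_x, tile_y) -> flat list of heights

    def _slot(self, x, y):
        key = (x // self.TILE, y // self.TILE)
        idx = (y % self.TILE) * self.TILE + (x % self.TILE)
        return key, idx

    def set(self, x, y, h):
        key, idx = self._slot(x, y)
        tile = self.tiles.setdefault(key, [0.0] * self.TILE * self.TILE)
        tile[idx] = h

    def get(self, x, y):
        key, idx = self._slot(x, y)
        tile = self.tiles.get(key)
        return tile[idx] if tile else 0.0
```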

Jeez, now I want to sit down and give this a try... Probably need an actual sim first though.

Quote from Crashgate3 :Things like that wouldn't need so high a pixel desity though and they wouldn't need to be kept in the same bitmap

Good point. Some of this data doesn't require nearly the same resolution. It's still more data than just height though.
wien
S3 licensed
Also consider that you need more parameters than "height" in order to do a proper surface simulation. Things like density and moisture of the surface can also play a huge part, so you'd need some bits for that stuff as well.
wien
S3 licensed
Meh, did the world turn brown since Crysis? Looking at the latest videos, everything seems to be rendered using the Quake 1 colour palette. Hopefully there's a story-based reason for this look? Last I checked palm trees weren't brown.
wien
S3 licensed
I doubt you could use a simple heightmap for this though. To cover an entire track at a reasonable resolution it'd have to be absolutely huge. More reasonable would probably be to record all tyre/ground interaction for a given track sector, not unlike what LFS does with skidmarks currently, and then just re-apply the forces the track surface has been under as you drive past. You can still use a height map for the actual calculations/visual effects of course, but you'd regenerate one or two smaller maps as you move around instead of using one huge one covering the entire track.
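In sketch form, the record-and-replay idea might look like this (the sample format and window logic are my assumptions, not anything LFS actually does):

```python
def rebuild_window(samples, window_start, window_end, cell_m=0.05):
    """Regenerate a small heightmap strip from recorded wheel samples.

    samples: list of (distance_along_track_m, lateral_m, displacement_m)
    recorded as cars drove over this sector. Only samples inside the
    window around the player are replayed into the local map.
    """
    cells = {}
    for along, lateral, disp in samples:
        if window_start <= along < window_end:
            key = (round(along / cell_m), round(lateral / cell_m))
            cells[key] = cells.get(key, 0.0) + disp  # ruts accumulate
    return cells
```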
wien
S3 licensed
Quote from Mattesa :I am also curious why NKP isn't getting more attention?

No I am not considering the "he disappeared and pissed us off" argument.

Well, for me it was a simple inability to get it to run properly. Due to outright bugs and extremely poor performance it was impossible to use, especially online, and with no AI that's pretty much it. It got better with 1.0.3 but at that point I was pretty much done with it. Especially considering the time that had passed. You won't get a community if people can't play your sim.

Anyway, the 1.1 patch: the FPS problem I was complaining about earlier suddenly fixed itself for no apparent reason, and with it went a lot of other problems I was having. I was actually able to have a number of bug-free laps around Aviano, and I must say it seems pretty good so far. Compared to the clean and sterile world of LFS this is very rough and gritty. It feels more alive and not as artificially grainy as it did before. I'm certainly not the one to evaluate the realism of the physics, but nothing feels obviously wrong to me, so that's good. If Tristan says it's close, I'm inclined to believe him.

Overall a step in the right direction, but I must admit I'm a bit hesitant to get too involved in case more showstopper bugs suddenly appear. I'll wait and see how the remaining betas go (if applicable).
Last edited by wien, . Reason : Comma overload
wien
S3 licensed
Quote from The Moose :Update your graphics card driver. Quite a few of us had this problem with the 1.04 beta we got to test. Updating the drivers sorted it. I'm getting 120-150 FPS on a prehistoric rig. It runs beautifully.

Already using the latest drivers. I guess I could downgrade to some older ones, but to be honest I can't be bothered. An older version of Netkar (1.02 I think) runs fine with 250+ FPS and so does everything else I care about. Obviously something is wrong with the 1.1 beta.
wien
S3 licensed
Great. Runs horribly on my 3 GHz quad core with a 3850. It actually seems to be limited programmatically, since it's rock solid at 31 FPS regardless of settings and resolution (no, I don't have vsync enabled). Why he would do that is anybody's guess. Trying to emulate the Xbox racing experience in all its 30 FPS glory perhaps? Serious regression though, whatever it is.

The tyre squeal is completely crazy. If you even think about braking, the tyres scream bloody murder. The wheels don't seem to lock up, but the squeal is deafening. Impossible to know where the limit is when they're like that.

In summary, I did one lap and that was enough for me.
wien
S3 licensed
Quote from DarkTimes :I'm not sure how to make changes to it, so I might need to read the manual for that, but anyway I've made some progress.

Edit your local working copy, right click on the file(s)/folder(s) you changed, select commit, write a descriptive log message, job done.
wien
S3 licensed
Quote from james12s :tbh it dont matter if its a style or not, it is stupid cos it achieves nothing other than weighing the car down, creating drag, and looking utterly stupid

And if the style is, in fact, to "[look] utterly stupid"? If they want to make a comedy number out of their car, who are you to judge? It's a good laugh and makes people smile. Seems like a good thing to me.
wien
S3 licensed
Quote from Bob Smith :I've heard about git but can't say I've ever tried it. Maybe git is better at dealing with conflicts but it's not often I get then with svn and then the conflict editor usually gets me sorted in no time at all.

It's not so much how it deals with conflicts (which it does about the same as Subversion), but rather how it avoids conflicts in the first place.

I remember having a hell of a time trying to maintain two separate branches of development on my 3D framework in Subversion. Merging changes from trunk into the branch always seemed to create a mess of conflicts all over every piece of changed code. Merging the branch back to trunk was also a real bitch, which resulted in me having to manually resolve a boatload of conflicts. And if I wanted to cherry-pick one change from one branch to another? More conflicts. And this was with me as the sole developer. Add a few dozen more, and it gets really hairy.

Git on the other hand makes this trivially easy by nature of how it is structured. You just branch and merge at will and it magically Just Works. It really changes the way you work if you're used to Subversion. Doing a small experiment? Branch and do your changes, merge back if it's good. Fixing a bug? Branch, fix and merge. Need one commit from another branch that fixes a problem you're seeing? Cherry-pick it into your current branch, and you're done. Easy.

The few cases where I've actually had to manually resolve a conflict in Git, it's always been an actual conflicting change and something that needs programmer intervention. Add to that its distributed nature (which means you have the entire change history in your local working copy, as well as the ability to commit while working offline), and you've got yourself a winner.

If you've got an hour: Linus Torvalds on why you're stupid if you use Subversion.
Last edited by wien, .
wien
S3 licensed
http://svnbook.red-bean.com/ ?

Not sure how this ties into the Codeplex stuff though. I'm sure they've made it more complex than need be somehow.

EDIT: Easiest is probably to just initialise a repository on your local hard drive using TortoiseSVN (right click on a folder -> TortoiseSVN -> Create repository here) and experiment on that. Once you get the concepts it's probably easy to figure out the Codeplex stuff.

EDIT2: Oh, and yeah; Git is indeed much better in a multitude of ways, but it does lack some of the good GUI tools SVN has, as well as integration with things like Codeplex apparently. You'll probably get by fine with SVN though. Just don't rely too heavily on branching and stuff like that, as merging code back from a branch is a right pain in the arse.
Last edited by wien, .
wien
S3 licensed
Quote from sam93 :I know loads of people go on about engineering and how you have to know about it before building say a race car, but it must all be self teachable because the first person to discover this stuff had to self teach himself lol.

The first person to discover this stuff wasn't the first at all. He built on the knowledge accumulated by everyone who went before him. You don't concoct ideas out of thin air. You research and understand what's been done before and then use your newfound knowledge to find a better way to solve the problem at hand. It's called education.

The alternative is to reinvent all of modern engineering from scratch through trial and error, which I'm going to assume right now you won't have the noodle to do. Very few people do.
wien
S3 licensed
Went on a fishing/hiking/drinking trip with a buddy last weekend. Got a few good shots (and a lot of really rubbish ones).

wien
S3 licensed
Quote from Crashgate3 :It'd be three passes at the resolution of one screen compared to one pass at the resolution of all three.

Resolution has zero impact on the CPU load involved in rendering a frame. What's producing this CPU load is generating and sending the actual commands to the GPU (API and driver code), and that's the same no matter what the resolution is.

So, if rendering makes up 50% of the current CPU load (I think it's a lot more), you're looking at a 100% increase in total CPU load. That means that in CPU-bound situations, which is most of the time on recent hardware, three viewpoints could halve your framerate compared to one view. That's significant in my book.

Now this CPU load could obviously be significantly reduced by rendering code that uses modern GPUs and 3D APIs more appropriately, but then we're most likely talking a significant code rewrite, and not a simple fix.
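Spelling out that arithmetic (the 50% render share is the assumed figure from the paragraph above, not a measured one):

```python
def framerate_factor(render_share, views):
    """Relative framerate when the render portion of the CPU work is
    repeated once per view while the rest (physics etc.) stays fixed."""
    new_load = (1 - render_share) + render_share * views
    return 1 / new_load

print(framerate_factor(0.5, 3))  # 0.5 -> framerate halves with 3 views
```

If rendering is actually more than 50% of the load, the hit is correspondingly worse.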
wien
S3 licensed
Quote from Crashgate3 :I doubt it would decrease performance significantly over normal 3-screen use, as the only difference would be a call to directx to move the camera 3 times a frame as each viewport is rendered.

Whoa there, not that simple. There's a huge difference between drawing one really wide view into a single framebuffer and drawing three separate views, with different transformations, across the same framebuffer. For one, you'd have to do three passes over all the geometry in the scene instead of just one. For a game which is already quite CPU-bound in scenes with lots of graphics detail, this added overhead would probably hurt badly.

That said, I'd still like the option even if the performance was bad.
wien
S3 licensed
Quote from mrodgers :WOW! The new FireFox 3.5 is TERRIBLE at displaying pictures. They look horrible and nothing like they did in Photoshop or my old Firefox install.

Firefox now does colour management by default (like it should), so that's probably what you're seeing. Here's how you change the setting: http://blog.duber.cz/misc/firefox-color-management-madness