Yeah, I did consider that briefly. Syncing across multiplayer especially could add a pretty big bandwidth overhead if you want any kind of accuracy in the tracks left by other cars. The current position packets would not be enough to ensure the same track surface for all players, so you'd probably have to settle for something less accurate in multiplayer.
I'm also pretty sure low res physics would be out the window as you need cars at the opposite end of the track to leave their mark, but LFS already runs full physics for all cars in single player with AI so that's not necessarily a huge problem. It will elevate the system requirements of course, but that's pretty much going to happen no matter how clever you get at this level.
But yeah, there are a ton of "little" issues like this when you get down into the details, which is why high level hand waving is so much fun.
Err, you'd obviously cull irrelevant geometry when rendering or doing physics, but the data needs to be persistent. You can't just drop it once you've driven past. The next lap the damage you did to the track needs to still be there.
Well, that's what the vertices would be if you went the way of Sega Rally. But you can't spread them out very far because then you wouldn't get defined tyre tracks and other high frequency detail. Having a bump every other meter isn't good enough.
Hmm, you sure about this one? I just did the math, and I'm getting 15.5 GB (1000 * 100 vertices per metre * 20,810 metres * 8 bytes.)
EDIT: Foiled.
EDIT2: But to take a relevant example: Aston GP (?) at 8.8 km, 10 metres wide with a 5 cm mesh and 16 bits per vertex would result in a 67 MB dataset. Hardly unreasonable. The gravel tracks in LFS are obviously shorter, but the same model could probably be used for weather effects on tarmac as well.
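Quick sanity check of that number in code form, in case anyone wants to poke at it (only the figures from the post above go in):

```cpp
#include <cstdio>

int main() {
    // Rough numbers for Aston GP from the post above.
    const double length_m  = 8800.0;     // ~8.8 km of track
    const double width_m   = 10.0;       // assumed track width
    const double spacing_m = 0.05;       // 5 cm mesh
    const double bytes_per_vertex = 2.0; // 16 bits per vertex

    const double vertices = (length_m / spacing_m) * (width_m / spacing_m);
    const double mib = vertices * bytes_per_vertex / (1024.0 * 1024.0);
    std::printf("%.1fM vertices, %.0f MB\n", vertices / 1e6, mib); // 35.2M vertices, 67 MB
    return 0;
}
```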
Well, that's it then. No fancy tricks required. 16 bits per vertex should be enough to store height, density and moisture, I reckon. Those parameters combined with global variables like temperature should give you a huge variation in possible conditions.
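Something like this is what I have in mind; the 8/4/4 split is just a guess on my part at what's "enough" precision, not anything definitive:

```cpp
#include <cstdint>

// One way to spend the 16 bits per vertex. The split below is purely
// illustrative: 256 height steps, 16 levels each of density and moisture.
struct SurfaceVertex {
    std::uint16_t height   : 8; // offset from the base mesh
    std::uint16_t density  : 4; // how compacted the surface is
    std::uint16_t moisture : 4; // how wet it is
};

static_assert(sizeof(SurfaceVertex) == 2, "must stay at 16 bits per vertex");
```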
Guess I got thrown off by the 1.6GB estimate earlier in the thread, though that was obviously a much denser mesh/texture.
You'd need more info than this per vertex of course, but that could easily be deduced from a lower-res polygon mesh, applying the stored height as an offset from it as you render the thing. This is easy to do using stream out in a geometry shader or using DX11 tessellation, but it probably wouldn't be too bad to do on the CPU either on lesser hardware.
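In CPU terms that displacement step is roughly this (the mesh and sampler types are made up for illustration):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Vertex { Vec3 position; Vec3 normal; }; // coarse base-mesh vertex

// Push every base-mesh vertex out along its normal by the stored surface
// height. sampleHeight would filter the dense 16-bit grid; it's a
// stand-in here, not a real API.
void displaceMesh(std::vector<Vertex>& mesh,
                  float (*sampleHeight)(float x, float z)) {
    for (Vertex& v : mesh) {
        const float h = sampleHeight(v.position.x, v.position.z);
        v.position.x += v.normal.x * h;
        v.position.y += v.normal.y * h;
        v.position.z += v.normal.z * h;
    }
}
```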
According to this video (~4:10) they use "a 6 cm polygon mesh". So they're actually deforming the track mesh itself instead of overlaying a texture then, but same general idea I guess. Seems like they're using relatively short tracks though, so that probably helps a bit. Still surprised they can fit it all in the Xbox's memory (not having done any math on it to get an actual number).
Ah, then we're on the same page. With the current LFS tyre model that wouldn't work very well, no. Adding this stuff would probably require a major overhaul of the model anyway though, so I don't know how relevant it would be. But yeah, it's something that would have to be fixed.
It will be exciting to see if the new tyre model will deliver any improvements here.
Oh sure, what I was worrying about was the grand total lack of space. You'd have to stream to/from disk. Some Xboxes don't even have disks, which is why I was wondering how Sega did it...
Well, the shape of the tyre is known, so as long as you have a one-dimensional load sample and tyre angle, you can easily deduce how much of the surface to displace, the width of the groove, etc.
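As a toy example of the kind of deduction I mean (the linear load-to-depth mapping and parameters are invented, just to make it concrete):

```cpp
#include <cmath>

struct Groove {
    float width_m; // width of the strip of surface to displace
    float depth_m; // how far to push it down
};

// Toy model only: the groove narrows as the wheel leans over (camber),
// and sinks deeper with load on softer surfaces.
Groove grooveFromContact(float tyreWidth_m, float camber_rad,
                         float load_N, float softness_m_per_N) {
    Groove g;
    g.width_m = tyreWidth_m * std::cos(camber_rad);
    g.depth_m = load_N * softness_m_per_N;
    return g;
}
```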
EDIT: Wait, did I misunderstand your point? Did you mean the tyres can't react properly to the ruts in the surface? That's a very good point I completely missed, which is par for the course for me.
All good points. You'd obviously have to fade out older "tracks" as the race progresses. Possibly combine the old ones into larger and more sparsely sampled tracks as the simulation runs. It won't be anywhere near 100% accurate of course, but it may be good enough. Storing 50+ laps and playing them back as you go is obviously unreasonable.
And while you're right that it'd be hard to parallelise the data generation itself, you could very easily throw the whole process off to another core. Also, by tiling "upcoming" track sectors you could easily have many cores working simultaneously to generate the data you will need in the near future.
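Sketched with std::async as a stand-in for whatever job system the engine really has (all names hypothetical):

```cpp
#include <future>
#include <vector>

struct HeightmapTile { /* regenerated surface data for one sector */ };

// Hypothetical per-sector generation: replay everything recorded for
// this sector into a fresh local tile.
HeightmapTile generateTile(int /*sector*/) { return HeightmapTile{}; }

// Fire off the next few sectors as independent tasks and let them run
// on whatever cores are free.
std::vector<std::future<HeightmapTile>> prefetchSectors(int current, int count) {
    std::vector<std::future<HeightmapTile>> pending;
    for (int i = 1; i <= count; ++i)
        pending.push_back(std::async(std::launch::async,
                                     generateTile, current + i));
    return pending;
}
```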
Oh it'd be a huge chunk of data, no doubt about it, but my initial gut feel is that it'd still be cheaper than a giant texture containing the required data. Especially across the longer tracks. You can always reduce the sample frequency of the tyre-tracks and interpolate if it becomes a problem as well.
Hard to say either way without experimenting though. Anyone know how the Sega Rally game did this? They even did it on the consoles, so they'd have to do a huge amount of streaming in and out of memory if they used a monolithic texture.
I don't see that as a problem to be honest. As I imagine it you'd store simple point samples with load information and probably wheel speed/angle. Based on that it should be possible to reasonably approximate what the tyre was doing at the time. At least come close enough. Or am I forgetting something crucial?
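Concretely, I'm picturing something this small per sample (the field choices are my guess at the minimum you'd need to approximate the contact patch afterwards):

```cpp
#include <cstdint>

// One recorded tyre/ground contact sample.
struct ContactSample {
    float         x, y;       // position on the track surface
    float         load;       // vertical load at that instant
    float         wheelSpeed; // for spin/lock-up effects
    float         wheelAngle; // slip/steering angle
    std::uint32_t timeMs;     // lets you fade out old samples later
};
```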
Yeah, that could be interesting to look into I guess. Never actually looked at that code. It's Not Invented Here though (Oh the horror!)
Well, there are different types of texture compression, but when you also need to process the texture on the CPU I'm not sure how well they'd work. Either way, they can't perform magic. You need unique data for the entire surface of the track however you look at it, so the dataset will be huge.
But of course, there are lots of ways you could do this. The one I mentioned is one way. I'm sure actual smart people (like Scawen) could come up with something better. In general though it's a memory/CPU tradeoff. You can save memory by using clever data structures, compression and algorithms, but this will usually require more CPU to compensate. Striking the right balance is critical.
Jeez, now I want to sit down and give this a try... Probably need an actual sim first though.
Good point. Some of this data doesn't require nearly the same resolution. It's still more data than just height though.
Also consider that you need more parameters than "height" in order to do a proper surface simulation. Things like density and moisture of the surface can also play a huge part, so you'd need some bits for that stuff as well.
Meh, did the world turn brown since Crysis? Looking at the latest videos, everything seems to be rendered using the Quake 1 colour palette. Hopefully there's a story-based reason for this look? Last I checked palm trees weren't brown.
I doubt you could use a simple heightmap for this though. To cover an entire track in reasonable resolution it'd have to be absolutely huge. More reasonable would probably be to record all tire/ground interaction for a given track sector, not unlike what LFS does with skidmarks currently, and then just re-apply the forces the track surface has been under as you drive past. You can still use a height map for the actual calculations/visual effects of course, but you'd regenerate one or two smaller maps as you move around instead of using one huge one covering the entire track.
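A rough sketch of that record/replay idea, with all names hypothetical, just to make the shape of it concrete:

```cpp
#include <map>
#include <vector>

struct ContactSample { float x, y, load; }; // trimmed-down sample, as above

struct LocalHeightmap {
    void applyForce(const ContactSample& s) { (void)s; /* deform around (x, y) */ }
};

// Tyre/ground interactions, bucketed per track sector as the cars drive.
std::map<int, std::vector<ContactSample>> recorded;

void recordContact(int sector, const ContactSample& s) {
    recorded[sector].push_back(s);
}

// When a sector comes back into range, rebuild a small local heightmap
// by replaying everything that piece of track has been through.
LocalHeightmap regenerateSector(int sector) {
    LocalHeightmap map;
    for (const ContactSample& s : recorded[sector])
        map.applyForce(s);
    return map;
}
```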
Well, for me it was a simple inability to get it to run properly. Due to outright bugs and extremely poor performance it was impossible to use, especially online, and with no AI that's pretty much it. It got better with 1.0.3 but at that point I was pretty much done with it. Especially considering the time that had passed. You won't get a community if people can't play your sim.
Anyway, the 1.1 patch: the FPS problem I was complaining about earlier suddenly fixed itself for no apparent reason, and with it went a lot of other problems I was having. I was actually able to have a number of bug free laps around Aviano and I must say it seems pretty good so far. Compared to the clean and sterile world of LFS this is very rough and gritty. Feels more alive and not as artificially grainy as it did before. I'm certainly not the one to evaluate the realism of the physics but nothing feels obviously wrong to me, so that's good. If Tristan says it's close I'm inclined to believe him.
Overall a step in the right direction but I must admit I'm a bit hesitant of getting too involved in case more showstopper bugs suddenly appear. I'll wait and see how the remaining betas go (if applicable.)
Already using the latest drivers. I guess I could downgrade to some older ones, but to be honest I can't be bothered. An older version of Netkar (1.02 I think) runs fine with 250+ FPS and so does everything else I care about. Obviously something is wrong with the 1.1 beta.
Great. Runs horribly on my 3 GHz quad core with a 3850. It actually seems to be limited programmatically since it's rock solid at 31 FPS regardless of settings and resolution (no, I don't have vsync enabled). Why he would do that is anybody's guess. Trying to emulate the Xbox racing experience in all its 30 FPS glory perhaps? Serious regression though, whatever it is.
The tire squeal is completely crazy. If you even think about braking, the tires scream bloody murder. The wheels don't seem to lock up, but the squeal is deafening. Impossible to know where the limit is when they're like that.
In summary, I did one lap and that was enough for me.
And if the style is, in fact, to "[look] utterly stupid"? If they want to make a comedy number out of their car, who are you to judge? It's a good laugh and makes people smile. Seems like a good thing to me.
It's not so much how it deals with conflicts (which it does about the same as Subversion), but rather how it avoids conflicts in the first place.
I remember having a hell of a time trying to maintain two separate branches of development on my 3D framework in Subversion. Merging changes from trunk into the branch always seemed to create a mess of conflicts all over every piece of changed code. Merging the branch back to trunk was also a real bitch, which resulted in me having to manually solve a boatload of conflicts. And if I wanted to cherry-pick one change from one branch to another? More conflicts. And this was with me as the sole developer. Add a few dozen more, and it gets really hairy.
Git on the other hand makes this trivially easy by nature of how it is structured. You just branch and merge at will and it magically Just Works. It really changes the way you work if you're used to Subversion. Doing a small experiment? Branch and do your changes, merge back if it's good. Fixing a bug? Branch, fix and merge. Need one commit from another branch that fixes a problem you're seeing? Cherry-pick it into your current branch, and you're done. Easy.
The few cases where I've actually had to manually resolve a conflict in git, it's always been an actual conflicting change and something that needed programmer intervention. Add to that its distributed nature (which means you have the entire change history in your local working copy, as well as the ability to commit while working offline), and you've got yourself a winner.
Not sure how this ties into the Codeplex stuff though. I'm sure they've made it more complex than need be somehow.
EDIT: Easiest is probably to just initialise a repository on your local hard drive using TortoiseSVN (right click on folder -> TortoiseSVN -> Create repository here) and experiment on that. Once you get the concepts it's probably easy to figure out the Codeplex stuff.
EDIT2: Oh, and yeah; Git is indeed much better in a multitude of ways, but it does lack some of the good GUI tools SVN has, as well as integration with things like Codeplex apparently. You'll probably get by fine with SVN though. Just don't rely too heavily on branching and stuff like that, as merging code back from a branch is a right pain in the arse.
The first person to discover this stuff wasn't the first at all. He built on the knowledge accumulated by everyone that went before him. You don't concoct ideas out of thin air. You research and understand what's been done before and then use your newfound knowledge to find a better way to solve the problem at hand. It's called education.
The alternative is to reinvent all of modern engineering from scratch through trial and error, which I'm going to assume right now you won't have the noodle to do. Very few people do.
Resolution has zero impact on the CPU load involved in rendering a frame. What's producing this CPU load is generating and sending the actual commands to the GPU (API and driver code), and that's the same no matter what the resolution is.
So, if rendering makes up 50% of the current CPU load (I think it's a lot more), tripling that half means you're looking at a 100% increase in total CPU load. That means that in CPU-bound instances, which is most of the time on recent hardware, three viewpoints could halve your framerate compared to one view. That's significant in my book.
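Spelled out, with the 50% figure being just the assumption above:

```cpp
#include <cstdio>

int main() {
    const double renderShare = 0.5;              // assumed share of CPU frame time
    const double rest        = 1.0 - renderShare;
    const double threeViews  = rest + 3.0 * renderShare;
    std::printf("relative CPU load: %.1fx\n", threeViews); // 2.0x -> ~half the FPS
    return 0;
}
```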
Now this CPU load could obviously be significantly reduced by rendering code that uses modern GPUs and 3D APIs more appropriately, but then we're most likely talking a significant code rewrite, and not a simple fix.
Whoa there, not that simple. There's a huge difference between drawing one really wide view into a single framebuffer and drawing three separate views, with different transformations, across the same framebuffer. For one, you'd have to do three passes over all the geometry in the scene instead of just one. For a game which is already quite CPU-bound in scenes with lots of graphics detail, this added overhead would probably hurt badly.
That said, I'd still like the option even if the performance was bad.