Yes it is indeed the same engine. Great to see them in such a big production. It's odd I couldn't find a mention of it over at the RoR home page though (only had a quick look).
The way I code things is to have each object update itself independently of frames, in its own thread, at its own rate, regardless of FPS. I found that with this (multithreaded) approach a single object can lag without killing FPS (depending on the thread's load and priority). The rate can be controlled either with interrupts per cycle or by sleeping the thread.
It also means each object runs its own infinite loop for updating, and it requires care: the thread must destroy itself (by checking application state, such as an "I Must Now End" variable) or be terminated globally some other way.
The great part about it is that FPS is less likely to be affected by complicated calculations in game object updates.
In my projects I never put update math on the same thread my rendering code runs on.
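That pattern might look roughly like this in Python. This is only a sketch of the idea, not anyone's actual engine code, and the class name, `rate_hz` parameter and `stop` flag are my own invention: each object owns a thread running its own update loop, rate-limited by sleeping, with an "I Must Now End" check so the thread can destroy itself cleanly.

```python
import threading
import time

class PhysicsObject:
    """Updates itself in its own thread, at its own rate, regardless of FPS."""

    def __init__(self, rate_hz):
        self.period = 1.0 / rate_hz
        self.stop = threading.Event()   # the "I Must Now End" flag
        self.ticks = 0
        self.thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Each object runs its own "infinite" update loop and is
        # responsible for ending its own thread by checking the flag.
        while not self.stop.is_set():
            self.ticks += 1             # the object's update math goes here
            time.sleep(self.period)     # rate controlled by sleeping the thread

    def start(self):
        self.thread.start()

    def shutdown(self):
        self.stop.set()
        self.thread.join()

obj = PhysicsObject(rate_hz=100)
obj.start()
time.sleep(0.1)        # the render loop would be running elsewhere, unaffected
obj.shutdown()
```

A slow update in one object's thread only delays that object; the render thread keeps drawing at full FPS.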
In fairness, my original post you are quoting was from 2006, long before multi-core systems had established themselves as the de facto standard, so multi-threaded programming still wasn't that common and I had never done it at that time.
The change in software architecture as a result is really quite massive. I've not been coding all that much on the application/game side of things in the interim, but on the whole I am inclined to agree with your approach.
However, I still feel there is a place for delta timing, for much the same reason as before: there is no need to calculate something twice if you do not use its result twice. That's just insane, and yet it is standard practice.
Further, a locked physics rate of, say, 100fps might provide smooth gameplay at 25, 50 or 100fps. But a framerate of 30fps, which would normally be playable, will now appear less smooth and clunkier.
Locked rates are still evil; the only difference is that multi-threading has made the practice
A) More Common
B) The Easy Solution
I don't do things because they are easy. Easy things bore me. So do conventions.
no, they are fundamentally necessary for correct and consistent physics simulations
otherwise you end up with silly crap like being able to jump further than anyone else at 125fps in Quake
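The usual way a locked physics rate coexists with a variable framerate is the accumulator pattern. Here's a minimal sketch of it (the 128 Hz rate and the function name are my own choices; a power-of-two rate keeps this toy example exact in floating point):

```python
FIXED_DT = 1.0 / 128.0   # physics locked at 128 Hz

def physics_steps(frame_times):
    """Count fixed-size physics steps taken over a sequence of variable frames."""
    accumulator = 0.0
    steps = 0
    for frame_dt in frame_times:        # however long each rendered frame took
        accumulator += frame_dt
        while accumulator >= FIXED_DT:  # advance physics in fixed steps only
            steps += 1
            accumulator -= FIXED_DT
    return steps

# One second of game time is always 128 physics steps,
# whether it was rendered as 32 slow frames or 128 fast ones.
print(physics_steps([1 / 32] * 32))    # 128
print(physics_steps([1 / 128] * 128))  # 128
```

Because every player's simulation takes identical steps per game second, nobody gets a longer jump at a higher framerate.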
If you dig back to the 2006 parts of this thread you'll note that I do not multiply or divide with a delta; I stick to addition and subtraction. Multiplications, divisions and some other operators must be done at locked rates, or performed 'delta' number of times per use.
Some calculations cannot be done on a delta, but part of why I think delta has always worked so well for me as a technique is that I simplify things very well, which enables them to be delta'd.
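For what it's worth, here's a toy illustration of that distinction (the drag factor and rates are invented for the example, not taken from anyone's game): a purely additive term scales fine with a delta, but a per-tick multiplication has to happen at the locked rate, i.e. be performed 'delta' number of times.

```python
DRAG = 0.99        # multiplicative drag, defined per tick of the locked rate
TICK_RATE = 100    # the locked rate those ticks run at

def drag_per_frame(seconds, fps):
    """Wrong: applying the multiplication once per rendered frame."""
    v = 10.0
    for _ in range(int(seconds * fps)):
        v *= DRAG                        # result now depends on the framerate
    return v

def drag_per_tick(seconds, fps):
    """Consistent: applying it 'delta' number of times at the locked rate."""
    v = 10.0
    ticks_per_frame = TICK_RATE / fps    # locked ticks covered by each frame
    accumulator = 0.0
    for _ in range(int(seconds * fps)):
        accumulator += ticks_per_frame
        while accumulator >= 1.0:
            v *= DRAG                    # same number of drags per game second
            accumulator -= 1.0
    return v

print(drag_per_frame(1, 25), drag_per_frame(1, 50))   # differ: framerate-dependent
print(drag_per_tick(1, 25) == drag_per_tick(1, 50))   # True
```

The additive position update `x += v * dt` has no such problem, which is exactly why sticking to addition and subtraction makes a delta safe.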
The standard way of doing physics is to simulate as much of a real car as possible using real world calculations, many of which are available online. I would never take that approach, because scientifically accurate equations do three things:
A) Are slow
B) Are still rarely truly accurate as some factor is always overlooked
C) Miss the point
The important thing is that the car feels right, and this can be done without using physics white papers from automotive engineers. I say this despite my brother - and co-worker on so many computer games in the past - being an automotive engineer.
In a computer game the most important thing is that the car feels right. rFactor proved this: there are a couple of brilliantly modded cars that handle well, the physics come straight off a look-up table, and none of its many players are bothered that the physics are not calculated in real time.
I'm not saying I would necessarily use a look-up table for significant aspects of the physics. Personally, when it comes to making games I take the view that if I can see it or feel it then I should simulate it, and the simulation should use the simplest and most efficient method possible. I certainly don't start by reading up complex equations on Wikipedia.
So I'm not wrong in my approach, just different. And the best bit is that these days some bugger pays me an awful lot of money to approach problems the way that I do.
Two thoughts:
1. Delta times are different on different computers. If you want two computers to share common data without sending all relevant data over the wire rather often, you'll have to stick with a calculation method that is guaranteed to reproduce exactly the same results on any computer.
2. Oscillating systems such as suspensions can't be accurately simulated below a certain critical update frequency that depends on their physical properties. A momentary drop in framerate (or increase in delta-time) should not affect any relevant forces.
you don't get it, there is a fundamental mathematical and physical reason to use a small and preferably constant timestep
it's basically an application of the Nyquist theorem, and breaking it will lead to instabilities in the simulation (basically exactly what happens whenever two cars crash and get catapulted halfway across the map in almost every simulator out there)
tons of games have tried that approach and all of them have failed miserably
most notably in recent history, iRacing
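That instability is easy to reproduce in a few lines (the spring constant and step sizes below are invented for illustration): explicit Euler on a stiff undamped spring drifts a little at a small timestep but explodes outright once the step is large relative to the oscillation frequency, which is the catapulted-cars effect in miniature.

```python
import math

def spring_amplitude(dt, steps):
    """Explicit Euler on an undamped spring x'' = -k*x; returns final amplitude."""
    k = 1000.0                 # stiff spring, natural frequency ~5 Hz
    x, v = 1.0, 0.0            # start at amplitude 1
    for _ in range(steps):
        a = -k * x
        x += v * dt            # explicit Euler always gains energy,
        v += a * dt            # catastrophically so at large timesteps
    return math.hypot(x, v / math.sqrt(k))

print(spring_amplitude(0.001, 1000))   # 1000 Hz for 1 s: modest drift
print(spring_amplitude(1 / 30, 30))    # 30 Hz for 1 s: cars over the map
```

Real engines use better integrators than this, but the lesson is the same: the stiffer the system, the smaller (and more constant) the timestep has to be.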
you completely misunderstand what lookup tables are used for
they are used to speed up getting the same result that a complex physics simulation would arrive at
the results of using the lookup table still need to be integrated into a physics engine that runs at a constant rate (which rFactor does, to my knowledge)
so in theory, within the table's scope, you would get the same result using the lookup table or a correct complex physics simulation
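A sketch of that division of labour (the tyre-curve numbers, names and masses here are all made up for illustration): the table stands in for the expensive calculation, but its output still feeds an integrator running at a constant rate.

```python
import bisect

# Precomputed offline by the expensive model: slip angle (deg) -> lateral force (N).
SLIP  = [0.0, 2.0, 4.0, 6.0, 8.0]
FORCE = [0.0, 1500.0, 2600.0, 3000.0, 2800.0]   # made-up tyre curve

def lateral_force(slip_deg):
    """Linear interpolation into the lookup table (clamped at the ends)."""
    slip_deg = max(SLIP[0], min(SLIP[-1], slip_deg))
    i = min(max(bisect.bisect_left(SLIP, slip_deg), 1), len(SLIP) - 1)
    t = (slip_deg - SLIP[i - 1]) / (SLIP[i] - SLIP[i - 1])
    return FORCE[i - 1] + t * (FORCE[i] - FORCE[i - 1])

FIXED_DT = 1.0 / 100.0    # the table's output still feeds a constant-rate engine

def step_lateral(v, slip_deg, mass=1200.0):
    """One fixed-rate physics step using the table instead of the full model."""
    return v + (lateral_force(slip_deg) / mass) * FIXED_DT

print(lateral_force(3.0))   # 2050.0 -- halfway between the 2 and 4 degree entries
```

The table lookup is cheap enough to call every tick, and because the tick rate is constant the integration stays stable and deterministic.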
all cars using lookup-table values that aren't based on solid physics feel like shit
Yep, lookup tables are the fastest way to do the sums, but there always has to be a fixed clock somewhere, even if you're running different speeds for every single component, both physics and game functions.