well everything else about LFS is so completely under user control... then you run into this cone. just nothing you can do about the cone. the cone is permanent.
want to agree but have too much respect for devs to even ask
doing a lot of 20 lap races at fe2r/xfg trying to get under 30min total time raised my race pace a lot.
rather than focusing on slowing down (well yes, compared to hotlapping), it was the focus on pushing hard, constantly, over a half hour that led to more consistent lap times.
over a few weeks i went from thinking it was impossible to wondering how far under 30min that damn xfg would go
they may be fantasy tracks, but you definitely get the impression it's a real place. that must be pretty hard to achieve, and it must start with a serious look at the terrain and how the road would follow it.
the only small complaint i have is that the lines painted in south city don't make much sense, but that is a very minor thing overall
thank you, thank you, thank you. where other developers would have decided the time investment was too much or would have given up after x time without the result they wanted, you remain stubbornly determined to achieve what will push simulated driving forward.
with all due respect and as much as i loathe coming to the defence of microsoft here, it does cost them money to support back versions of code on their platform. just like LFS doesn't play back old versions of replays (although it could) because this would add complexity and weight and because people have a workaround.
i do agree with you fully that it would have been a much better choice for microsoft to retain full backward compatibility with DX9 at least for a few more years. perhaps code lyoko is right and some people have no problems running their latest AAA DX9 title, but in my experience "some people might have x problem" is MS-speak for "yeah this doesn't work at all".
thanks for the clarification, it helps raise my level of knowledge above "next to nothing"
what does it mean to invert a set of model parameters?
what i mean by "simulating the dynamics" instead of doing interpolation of static data points is keeping track of a few points in the carcass along the circumference of the tire and giving them some basic interactions with each other.
the idea would be to capture some of the subtleties that get missed when the forces those points represent are summed into a single vector.
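a minimal sketch of the kind of thing i'm imagining, assuming a ring of point masses connected by spring-dampers to their neighbours and to the hub. every name and constant here is made up for illustration; it's not anything LFS actually does:

```python
import math

# toy carcass model: N point masses on a ring, each connected to its two
# neighbours and to the hub by spring-dampers. all constants are arbitrary.
N = 16          # number of carcass points
R = 0.3         # nominal tire radius (m)
K_RADIAL = 5e4  # hub-to-point spring stiffness (N/m)
K_TANG = 2e4    # neighbour-to-neighbour stiffness (N/m)
C_DAMP = 50.0   # damping (N*s/m)
MASS = 0.2      # mass per point (kg)

# state: position (x, y) and velocity (vx, vy) of each point, hub at the origin
pos = [(R * math.cos(2 * math.pi * i / N), R * math.sin(2 * math.pi * i / N)) for i in range(N)]
vel = [(0.0, 0.0)] * N

def step(dt, external_forces):
    """advance the ring one time step with explicit Euler integration.
    external_forces is a list of (fx, fy) per point, e.g. from the contact patch."""
    global pos, vel
    new_pos, new_vel = [], []
    for i in range(N):
        x, y = pos[i]
        vx, vy = vel[i]
        fx, fy = external_forces[i]

        # radial spring-damper pulling the point back towards the nominal radius
        r = math.hypot(x, y)
        nx, ny = x / r, y / r
        fr = -K_RADIAL * (r - R)
        fx += fr * nx - C_DAMP * vx
        fy += fr * ny - C_DAMP * vy

        # tangential springs to the two neighbours keep the spacing roughly constant
        rest = 2 * R * math.sin(math.pi / N)   # rest length of a neighbour link
        for j in (i - 1, (i + 1) % N):
            dx, dy = pos[j][0] - x, pos[j][1] - y
            d = math.hypot(dx, dy)
            f = K_TANG * (d - rest)
            fx += f * dx / d
            fy += f * dy / d

        ax, ay = fx / MASS, fy / MASS
        new_vel.append((vx + ax * dt, vy + ay * dt))
        new_pos.append((x + vx * dt, y + vy * dt))
    pos, vel = new_pos, new_vel
```

summing the per-point forces would still give the single tire force vector the rest of the car model needs, but the distribution around the ring is where the extra information lives.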
the neural net idea isn't bad either. i assume it would be possible to train one to take all input torques and current state and predict if the tire is currently exhibiting non-linear behaviour. a few of these nets might allow the simulation to switch between members in a family of functions.
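to make that a bit more concrete, here is a toy sketch of such a gate: a one-layer logistic model standing in for a real trained net, with invented weights and feature names. the real thing would be trained on recorded tire data; nothing here is claimed to be how LFS works:

```python
import math

# stand-in for a trained classifier: in reality these weights would come from
# training on recorded tire data, the numbers here are placeholders
WEIGHTS = [0.8, 1.5, -0.3]   # slip ratio, slip angle, load deviation
BIAS = -1.0

def nonlinear_probability(slip_ratio, slip_angle, load_dev):
    """probability that the tire is in its non-linear region (logistic model)."""
    z = (WEIGHTS[0] * slip_ratio + WEIGHTS[1] * slip_angle
         + WEIGHTS[2] * load_dev + BIAS)
    return 1.0 / (1.0 + math.exp(-z))

def lateral_force(slip_angle, cornering_stiffness, peak_force,
                  slip_ratio=0.0, load_dev=0.0):
    """pick between two members of a family of force functions based on the gate."""
    linear = cornering_stiffness * slip_angle
    saturated = peak_force * math.tanh(cornering_stiffness * slip_angle / peak_force)
    p = nonlinear_probability(slip_ratio, abs(slip_angle), load_dev)
    # blend rather than hard-switch so the force stays continuous at the boundary
    return (1.0 - p) * linear + p * saturated
```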
from the Milliken book i'm reading (race car vehicle dynamics) it seems like the state of the art technology is to take a tire and scrub it against a moving belt at various angles and speeds and come up with tables of static data points. plotting these points can give you a bunch of curves for different variables, e.g. lateral force vs slip angle.
i believe LFS uses a set of fitted polynomials made to match these curves.
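for reference, the textbook example of a formula fitted to that kind of belt-rig data is Pacejka's "magic formula". i don't know what exact curve fit LFS uses, so treat the coefficients below as purely illustrative:

```python
import math

def pacejka_lateral(slip_angle_deg, B=10.0, C=1.9, D=4000.0, E=0.97):
    """Pacejka 'magic formula': lateral force (N) vs slip angle.
    B = stiffness factor, C = shape factor, D = peak force, E = curvature factor.
    coefficient values here are illustrative, not measured from any real tire."""
    x = math.radians(slip_angle_deg)
    return D * math.sin(C * math.atan(B * x - E * (B * x - math.atan(B * x))))

# sweep slip angle to reproduce the classic lateral force vs slip angle curve
for a in range(0, 13, 2):
    print(f"{a:2d} deg -> {pacejka_lateral(a):7.1f} N")
```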
as Keling implies, that static-curve approach totally misses any subtle nonlinear effects. in reality the traction the tire obtains comes from a lot of dynamic effects, and what we're missing now is how the traction changes as the tire moves from one angle to another, or comes back from over the limit onto the limit.
the static data point interpolation gives a useful estimate, but to get closer we need to simulate the dynamics. one example of something that needs simulating is the ridge of pressure that exists in a small part of the contact patch and that accounts for a lot of the lateral force that is generated. this ridge moves around as you switch from accelerating to braking and otherwise go from one extreme to the other.
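as a rough illustration of what simulating the patch could look like, here is a very simplified brush-style model: strips of rubber deflect as they travel through the contact patch, a parabolic pressure distribution caps how much force each one can carry, and saturation starts near the trailing edge. again, all numbers and names are made up for illustration, not LFS internals:

```python
import math

# toy brush model of the contact patch: divide the patch into strips ("bristles"),
# each deflects with slip until the local friction limit gives way
PATCH_LENGTH = 0.15         # contact patch length (m)
N_BRISTLES = 20
BRISTLE_STIFFNESS = 2.0e6   # lateral stiffness per metre of patch (N/m^2)
MU = 1.2                    # friction coefficient
LOAD = 4000.0               # vertical load (N)

def lateral_force(slip_angle_rad):
    """sum bristle forces along the patch; deflection grows from the leading edge,
    but each strip is clamped by the friction available from the local pressure."""
    total = 0.0
    dx = PATCH_LENGTH / N_BRISTLES
    for i in range(N_BRISTLES):
        x = (i + 0.5) * dx                      # distance from the leading edge
        deflection = math.tan(slip_angle_rad) * x
        elastic = BRISTLE_STIFFNESS * deflection * dx
        # parabolic pressure distribution: zero at the edges, peak in the middle
        s = x / PATCH_LENGTH
        pressure = 6.0 * (LOAD / PATCH_LENGTH) * s * (1.0 - s)
        friction_limit = MU * pressure * dx
        total += min(elastic, friction_limit)   # sliding strips saturate
    return total

for deg in (1, 2, 4, 8, 12):
    print(f"{deg:2d} deg -> {lateral_force(math.radians(deg)):7.1f} N")
```

even in this static form you can see where the force concentrates along the patch; making that pressure distribution itself respond to braking and accelerating is the part that needs the dynamic treatment.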
thank you for holding this release until you are all satisfied about the quality. LFS stands out as one of the sexiest, most satisfying software experiences on the market.
+1 for Becky's informational posts... good stuff to know for planning long-term strategies with a view to a dynamic track environment. thankfully the graphics engine can be worked on completely separately with no impact on the user base.
+2 for access roads being driveable. from my point of view, a major gift from people who stay in touch with what's going on in the community.
note from 1:30 to 2:00 how points appear and disappear.
the simulation undertaken in this video is of a different order of computational cost from the one LFS can afford for tire rubber calculations, but the concept of maintaining a set of the most influential points could be useful.
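as a sketch of that idea, you could keep only the K points whose contribution to the total force is currently the largest and re-evaluate the set every few physics steps. the "influence" measure and the data layout below are invented just to show the shape of it:

```python
import heapq

def most_influential(points, k):
    """keep only the k points with the largest influence value.
    `points` is a list of (point_id, influence) pairs; both are hypothetical."""
    return heapq.nlargest(k, points, key=lambda p: p[1])

# example: 100 candidate contact points, but only the 8 strongest are tracked
candidates = [(i, abs(50 - i) * 0.7) for i in range(100)]
active_set = most_influential(candidates, 8)
print([pid for pid, _ in active_set])
```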
a potentiometer in my G25 shifter has gotten some jitter and this results in occasional double shifts when using it in sequential mode.
it would help if lfs could allow me to set a minimum shift time so that two button presses within a given time frame would be treated as one.
for example, if i enabled the setting and chose 50ms, then while racing, if i press forward on the sequential lever and the shifter sends two button presses 14ms apart, the car would shift from 3rd to 2nd. currently it shifts from 3rd to 1st.
i would imagine this would only be useful in sequential mode.
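a minimal sketch of the debounce logic i'm suggesting, using the 50ms window from the example above; the function names are mine, just to show the idea:

```python
import time

MIN_SHIFT_INTERVAL = 0.050   # seconds; the "minimum shift time" setting
_last_shift_time = 0.0

def on_sequential_shift(direction):
    """ignore a second shift press arriving inside the debounce window, so a
    jittery potentiometer can't turn one pull of the lever into a double shift."""
    global _last_shift_time
    now = time.monotonic()
    if now - _last_shift_time < MIN_SHIFT_INTERVAL:
        return False                  # treated as the same physical press
    _last_shift_time = now
    do_shift(direction)               # hypothetical game-side shift call
    return True

def do_shift(direction):
    print("shift", "up" if direction > 0 else "down")
```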
hmm, i don't know why i thought that was the biggest reason. it's one of the reasons.. i went looking and learned that electrical signals travel at only about half the speed of light, so if a circuit is to be synchronised to the same clock, it can only be about 5cm across at 3 GHz. it's amazing to think we are close to this physical limit.
but as you say processors could go much quicker if it weren't for the heat problems and the limitations of how long it takes the switches to go from on to off, not to mention many other factors.
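just to show where the 5cm figure comes from, assuming signals at half the speed of light and one full traversal of the circuit per clock cycle:

```python
C = 3.0e8                 # speed of light (m/s)
SIGNAL_SPEED = 0.5 * C    # rough propagation speed of an electrical signal
CLOCK_HZ = 3.0e9          # 3 GHz

period = 1.0 / CLOCK_HZ              # one clock cycle, about 0.33 ns
max_size = SIGNAL_SPEED * period     # distance a signal can cover in one cycle
print(f"{max_size * 100:.1f} cm")    # ~5.0 cm
```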
remember a while ago, say 8 years, processors were always faster and faster? we had 400 MHz pentiums, then 800 MHz pentiums, then next thing computers were 1.6 gigahertz?
all that stopped some years back. we hit just over 3, maybe 4 gigahertz and just stopped.
the reason is - get this - the limitation of the speed of light. because the wires are a certain length in the computer, it takes a certain amount of time (speed of light) for the electrical signals to make their way from one end to the other.
we just can't go faster. if intel or AMD could have made an 8 gigahertz processor they would have.
XP on a 3 gigahertz processor runs lfs the same speed as windows 9 will on the same processor.
adding multiple cores and doing parallel computing works great, but only if the computation can be divided, calculated, and put back together without overhead. in real life, because only part of the work can be split up, you often only get something like a 20% gain from having 2 processors instead of 1.
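the usual way to put a number on that is Amdahl's law. the parallel fraction below is just an example figure picked to land near that 20%, not a measurement of LFS:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup when only part of the work can be split."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# example: if only a third of a frame's work parallelises, 2 cores give ~1.2x,
# i.e. roughly the 20% gain mentioned above
print(amdahl_speedup(1.0 / 3.0, 2))   # ~1.2
```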
Scawen might cook up something like Id software did using Binary Space Partition trees in Doom, but processors aren't going to get much faster.
i think the biggest difference is tessellation. tessellation is when you send a circle or part of a circle to the video card and say "draw this in 8 pieces" or "draw this in 32 pieces". as you ask it to break the circle into smaller pieces it looks smoother but takes longer to draw.
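as a plain illustration of that trade-off, here's roughly how a circle gets broken into N straight segments before anything is drawn. this isn't real Direct3D code, it just generates the vertices:

```python
import math

def tessellate_circle(radius, segments):
    """approximate a circle with `segments` straight edges; returns the vertices.
    more segments -> smoother outline, but more geometry for the card to draw."""
    return [(radius * math.cos(2 * math.pi * i / segments),
             radius * math.sin(2 * math.pi * i / segments))
            for i in range(segments)]

coarse = tessellate_circle(1.0, 8)    # visibly angular
smooth = tessellate_circle(1.0, 32)   # looks round, 4x the geometry
print(len(coarse), len(smooth))
```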
in a nutshell, video cards became a big cash cow, but to sell video cards to kids you want to make them "better" and "cooler", so they added features, and to use those features, a new API.
really LFS does its magic directly on the CPU and only uses the basic features of video cards.