My best guess so far (most of it worked out yesterday, but I just spotted something extra in one of the screens a few mins ago)
Colour code:
Black - Original
Red - New, with good views from the screens. I'm pretty sure I'm close
Blue - Wild guess, I haven't spotted any decent clues yet. Not sure where else National would go though
Yellow - I'm pretty sure this has been re-profiled a little, but I haven't attempted to work it out yet.
This has always happened after anything to do with the control inputs changes - the update probably triggered it, but it always happens to me after running LFS without my wheel plugged in or with a different controller.
As Daniel already said, go to axis options, hit recalibrate, move all axes to their full extents, then hit lock. This will save the calibration until whatever caused it happens again (which will be very rare if you always have the wheel plugged in before starting LFS).
It seems to run fine in wine (1.6.2) for me, after installing the dll.
Framerate seems to be the same or slightly better with DX9 (50-90 FPS on an empty track), even with AA'd mirrors.
AMD A10-7700K APU using onboard graphics. I did find that the official AMD driver works better than the open source one on this machine (for several games, not just LFS).
On the subject of the DCon, ever since we switched to the DCon on our servers (or possibly the new network code, I can't remember), we've been noticing a *lot* of "UDP ERROR (20) : WOULDBLOCK" messages in the log (the number varies, usually 20-30) - so many that we can get more WOULDBLOCK messages than anything else when the server is busy or the CPU is running something else as well.
We've tried all sorts of things to solve the issue, including tweaking kernel buffer sizes, but it seems like some other buffer somewhere is getting full for a tiny fraction of a second - it's next to impossible to catch the system buffer with more than a handful of bytes in it because it gets cleared so quickly. Because the buffer is emptied so fast, there don't seem to be any noticeable ill effects other than an almost constant flood of messages in the log.
The cause could be LFS, WINE or something crazy in the custom version of the linux kernel that our host uses, but we gave up trying to solve it ages ago. I'm pretty sure that turning InSim off didn't make a significant difference.
Would it be possible to have an option to remove these messages from the debug log? As I understand it, UDP WOULDBLOCK isn't actually an error anyway and there don't seem to be any problems caused other than the error messages themselves.
As a side note, has anyone else had this issue, or is it just something funky with our server/setup?
(Running 0.6E DCon on Ubuntu 12.04.4 via WINE 1.4, haven't tried this latest DCon version yet, will do later)
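For what it's worth, WOULDBLOCK on a non-blocking socket really just means "no data right now" (or "buffer momentarily full" when sending), not a failure. A minimal Python sketch of the receive side - obviously not LFS's actual code, just the general mechanism:

```python
import errno
import socket

# A non-blocking UDP socket, the same mode the DCon appears to use.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))   # any free port
sock.setblocking(False)

got_wouldblock = False
try:
    sock.recvfrom(2048)       # nothing has been sent to us yet
except BlockingIOError as e:
    # EWOULDBLOCK/EAGAIN: not a real error, just "try again later"
    got_wouldblock = e.errno in (errno.EWOULDBLOCK, errno.EAGAIN)
finally:
    sock.close()

print("got WOULDBLOCK:", got_wouldblock)
```

That's why logging every single one seems excessive to me - the normal response is simply to retry on the next poll.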
Round: 2
Grid: 2
Session of Incident: Race 1
Protestor's LFS Lap: 2
Timecode of Incident: MPR ~2:50.00
Your Car number: 083
Protested Car number(s): 105
Brief description of incident:
We're entering turn 5 side by side, with me on the inside and 105 attempting to overtake around the outside. 105 takes a normal/tight racing line to cut the curb, ignoring the fact that my car is already on the inside. I see him there and try to move over to leave more room, but it seems that even leaving the track completely isn't enough to avoid his car, and we collide at the apex.
This results in my car spinning out, which causes some chaos to the cars behind and I fall to last place.
------
Round: 2
Grid: 2
Session of Incident: Race 2
Protestor's LFS Lap: 1
Timecode of Incident: MPR ~0:40.00
Your Car number: 083
Protested Car number(s): 112
Brief description of incident:
112 completely cuts turn 1, carrying far more speed than the cars around. 112 doesn't slow down enough and taps the rear left of my car, unsettling it. I have to slow down briefly to regain control, but 112 has accelerated again, resulting in a couple more taps and both of us going off track.
112 then rejoins the track aggressively and hits the side of 056.
AFAIK, the jumpiness of keyboard drivers is because of the low packet rate. LFS uses prediction to make movement seem smooth, which works pretty well for analogue inputs, as it takes time for those inputs to change.
Keyboard (especially Kn) inputs tend to change much faster and at any one moment don't really represent where the driver intends the car to go. By the time the next packet arrives 167-250ms (plus network latency) later, the prediction can be quite far out.
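A toy sketch of that prediction error - the numbers and the dead-reckoning function are made up by me, not LFS's actual algorithm:

```python
def predict(pos, vel, dt):
    """Simple linear dead reckoning from the last received packet."""
    return pos + vel * dt

dt = 0.2     # ~200 ms until the next position packet arrives

# Analogue input: the steering (and hence velocity) changes gradually,
# so the actual position stays close to the extrapolation.
analog_error = abs(predict(0.0, 2.0, dt) - (0.0 + 1.9 * dt))

# Keyboard input: full lock released the instant after the packet,
# so the car stopped drifting sideways but the prediction kept going.
keyboard_error = abs(predict(0.0, 2.0, dt) - 0.0)

print(analog_error, keyboard_error)   # keyboard error is far larger
```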
Technically, with the /me command, the whole line (including the name) is the actual text/message, whereas with normal messages the name just tells others who is speaking. The "player name" doesn't really exist in the same way in /me messages - it's treated like any other word (colours removed).
I guess the "after player name" bit could be confusing, but InSim itself is behaving as expected IMO.
Fraps is notoriously delicate when it comes to games going in and out of fullscreen/changing resolution/alt-tabbing. I've heard many cases of it crashing/causing games to crash/stopping or corrupting the recording when you try it, or even when some games/engines (*cough*unity*cough*) change scenes.
It probably didn't happen in older versions of LFS because Fraps uses software capture for DX8 rather than DirectX hooks.
It may be possible for LFS to work around some of Fraps' failings, but I dunno if it's worth the bother - I guess Scawen can decide that one.
FWIW, OBS can handle LFS going in and out of fullscreen, changing resolutions, changing monitors or even completely closing down and starting again with no problems at all. Even when using Game Capture mode (DX hooks, not software mode).
Pretty certain the acceleration forces are along the world axis. (My old code corrects for that and the output looks perfect)
Edit:
I wrote my code over 6 years ago, so I can't remember exactly how it did things, but I did have to work out what most of the values represented the hard way.
I can glean some info from reading the code and looking at the output display, however some outputs are correct and others aren't so it's going to need a bit of testing to work out exactly what's going on.
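For anyone wanting to do the same correction, here's a sketch of the idea in Python - the exact axis conventions (heading anticlockwise from Y+, positive longitudinal = accelerating) are my assumption, so verify against the F9 display before trusting it:

```python
import math

def world_to_local(ax, ay, heading):
    """Rotate a world-frame XY acceleration into the car's frame.
    heading: radians anticlockwise from the world Y+ axis (OutSim style)."""
    forward = (-math.sin(heading), math.cos(heading))  # car's nose in world XY
    right = (math.cos(heading), math.sin(heading))     # car's right-hand side
    longitudinal = ax * forward[0] + ay * forward[1]   # + = accelerating
    lateral = ax * right[0] + ay * right[1]            # + = pushed right
    return longitudinal, lateral

# Car facing world X- (heading +90 deg), world accel pointing X+:
# in the car's own frame that is pure braking.
lon, lat = world_to_local(3.0, 0.0, math.pi / 2)
print(round(lon, 6), round(lat, 6))
```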
I guess it all depends on what you're trying to achieve (accuracy vs realism vs type of driver/driving).
Peak torque/power @rpm values probably aren't actually all that useful for picking decent shift points, although using peak power probably isn't going to be too terrible for up-shifts.
Your percentage calculation seems like a nice simple way of trying to keep in the power band, but how well it would translate to decent shift points would depend on the car's specific torque curve.
Using Keling's method is delving into machine learning and could get quite complex (in both programming and processing), but would indeed represent someone getting into a new car blind so to speak.
If you want it to be like a vaguely serious racing driver, finding out more or less where the shift points are for a specific gear ratio set should be fairly normal.
You probably have two ways of working out the *actual* ideal shift points (other than trying to use the auto box, which would only be reliable for up-shifts anyway):
1. Pre-compute the torque curves for every car (IIRC there have been a couple of pretty good attempts on the forum already) and read the ratios from the setup. This would be the cheaty way that I suppose you're trying to avoid.
2. When the AI first starts driving, attempt to calculate a rough torque curve in one gear (preferably not traction limited) based on acceleration readings. Work out rough relative ratios by comparing acceleration at the same rpm at different gears.
This would be a compromise between human style complex learning and fully cheating using external/pre-cooked resources, with the added benefit of being generic without having to pre-calculate stuff.
As I said, which method you use depends on what you're ultimately trying to achieve.
AFAIK, the auto gearbox mode uses the optimum shift points, so it might be easier to measure against than the AI drivers.
I guess you'd have to use the up-shift values to work out at what points a kick-down is needed, as the auto box's downshifts while decelerating are pretty shocking for actual racing (mostly because there is no single optimum down-shift point when braking - it can vary massively based on engine braking, limits of traction, etc.).
To reduce the advantage of knowing the optimal shift points, you could add a bit of randomness to it.
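As a rough sketch of option 1 (the torque curve and gear ratios below are completely made up by me): the ideal up-shift point is the rpm where the next gear starts putting more torque at the wheels than the current one.

```python
def torque(rpm):
    """Made-up engine torque curve (Nm), peaking around 5500 rpm."""
    return max(0.0, 250.0 - 0.00002 * (rpm - 5500.0) ** 2)

def best_shift_rpm(ratio_cur, ratio_next, redline=7500, step=10):
    """First rpm at which shifting up gains wheel torque."""
    for rpm in range(2000, redline, step):
        rpm_after = rpm * ratio_next / ratio_cur   # engine rpm after the shift
        if torque(rpm_after) * ratio_next >= torque(rpm) * ratio_cur:
            return rpm
    return redline   # next gear never wins - shift at the limiter

print(best_shift_rpm(2.0, 1.5))   # well past the 5500 rpm torque peak
```

Note the answer lands way above peak torque (and above peak power), which is why shifting at the peak-torque rpm is usually wrong.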
Regarding the Adaptive AA;
I can't reproduce the bug on my system, although mine only calls it "Adaptive Anti-Aliasing" not "Adaptive Multi-Sample Anti-Aliasing", which might have some bearing on it. It does have two quality settings though.
Radeon HD4890; Driver Package 8.97.100.7-121116a-151639C-ATI; Catalyst 13.1 (legacy, latest version for the card); D3D Version 7.12.10.0911
This may be a reason the bug hasn't been fixed. If DX10 doesn't even support the alpha test method, it may be that it is now more common (perhaps even in DX9) to do the same thing with pixel shader trickery. If that's the case, I would guess a shader method would have much better performance, considering how fast shaders are on modern GPUs.
Pure wild speculation on my part though.
The InSim (CompCar/MCI) Heading is an integer (a word, 0-65535 covering the full circle) with zero looking down Y+ and 32768 looking down Y-
OutSim Heading isn't really a heading at all; it's a float representing radians left (positive value) or right (negative value) of the Y+ axis.
If you convert them both to degrees:
Between 0 and 180 degrees they're identical
InSim 181 = OutSim -179
InSim 359 = OutSim -1
If you want them both the same:
<?php
if ($OutSim_deg < 0) $OutSim_deg += 360;
// or, working in radians:
if ($OutSim_rad < 0) $OutSim_rad += 2 * M_PI;
?>
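The same conversion in a fuller form (Python here, purely for illustration), mapping both headings onto 0-360 degrees:

```python
import math

def insim_heading_to_deg(heading_word):
    """CompCar/MCI heading: 0-65535 covers the full circle from Y+."""
    return heading_word * 360.0 / 65536.0

def outsim_heading_to_deg(heading_rad):
    """OutSim heading: radians, negative = right of Y+; wrap to 0-360."""
    deg = math.degrees(heading_rad)
    return deg + 360.0 if deg < 0 else deg

print(insim_heading_to_deg(16384))                        # 90.0
print(round(outsim_heading_to_deg(math.radians(-179))))   # 181
```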
Probably because Aero requires DX9 hardware - it probably has to run DX8 stuff in a limited mode.
From my limited subjective testing so far, I seem to be able to get better in-game framerate now, while recording 1080p30 fullscreen in OBS, than I did before with no recording at all.
I haven't had any bugs/crashes as yet.
I haven't seen any documentation to confirm other than the only other acceleration values mentioned in insim.txt are m/s^2 (in CarContact).
However, I also tried dividing by 9.81 and the outputs seem to match the G-Forces displayed in the in-game F9 view perfectly.
So, no documented proof, but I don't know what other unit they could possibly be.
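i.e. the check I did amounts to nothing more than this (9.81 being the value I divided by):

```python
STANDARD_GRAVITY = 9.81   # m/s^2 - the value that made the numbers line up

def ms2_to_g(accel):
    """Convert an OutSim-style m/s^2 acceleration to g, as shown in F9."""
    return accel / STANDARD_GRAVITY

print(ms2_to_g(19.62))   # 2.0 g, matching the in-game readout
```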
According to this post the August 2007 DirectX SDK was the last one to include Direct3D8.
I haven't had time to verify that yet, but if it's true then it's probably impossible to debug D3D8 in Win 7 (which came out 2 years later than that SDK version).
I add 10 and 11 together, because 11 includes 10 in the same way that 9 includes 8. I have a DX10.1 GPU, but I have DX11 installed, not DX10.
The difference is that 9 and 10+ are fully incompatible - 10 is unique in that it was a rewrite, rather than an upgrade (10.1, 11, 11.1, 11.2 are all upgrades). Vista/7/8 have both 9 and 10+ installed separately, but they don't have individual versions of 10, 10.1, 11 etc installed at the same time, because they are all part of the same thing.
Also, I was not suggesting that Scawen move to DX11 any time soon; I said there's no point. I just wanted the statistics to be represented properly.
Yeah, that sucks and I have no idea why they removed that feature.
However, the GPU manufacturers came to the rescue eventually - every 5000 series and later AMD GPU supports at least 2 monitors as one screen ("Eyefinity"), and nVidia began to support 3+ ("Surround") soon after.
So you can do it in W7, but you need the right hardware (the situation still sucks compared to XP, but at least it's not impossible)
That one was HP's decision, nothing to do with W7. HP do provide full, separate W7 drivers for some printers, but for many (especially simpler or older ones) they only bother to provide simple, feature-limited, often broken ones via Windows Update.