The online racing simulator
Searching in All forums
(25 results)
hetner
S3 licensed
Quote from nacim :I don't think so, but since you can run a 60Hz display at 50Hz, some people like to do this in LFS to remove the judder. I personally prefer disabling VSync and playing at 100 FPS.

Actually, I also prefer disabling V-sync and capping LFS at 100 FPS, but if you do that on a 60Hz monitor you will get screen tearing at random positions all over the screen. Changing the monitor refresh to 50Hz keeps the tear line steady in the picture, which imo is far less noticeable and annoying. And at 60Hz part of the screen will still have the judder (constant micro stutter), whereas at 50Hz it is smooth as butter, only eye-hurting at the tear lines.

That said, running with V-sync when the monitor is at 50Hz is also very nice, as long as the frame rate in general is steady. I usually set the LFS priority to "realtime" in Task Manager and disable all SpeedStep and power saving options in the BIOS to keep the CPU clock steady; this almost eliminates the remaining V-sync-related stutter when running 50Hz, and it also helps if you run V-sync on a 60Hz monitor.

I used this tool to change Monitor update: https://www.monitortests.com/forum/Thread-Custom-Resolution-Utility-CRU

Far from all monitors will run at 50Hz, though. Therefore, it is not a general solution if you want ultra-smooth frame updates.

Sorry if I went too far off topic here, but a good solution for this is just way higher on my wish list than tyre physics (which is already the best and most immersive physics I have tried) :-)
Last edited by hetner, .
hetner
S3 licensed
Quote from Scawen :
2) Finish some graphical updates that will help with new tracks.

Any chance that you have an improvement for the physics clock (100Hz) to screen clock (typically 60Hz) micro stutter issue in mind?

I know that many don't see the micro stutter as a problem; in fact some claim they don't have it, which cannot be true unless they of course run 20, 25, 50 or 100Hz on their display. But after I found a way to clock my main monitor down to 50Hz and saw the smoothness of a constant image change frequency, it hurts my eyes to look at the jitter caused by the 100Hz-to-(60Hz or 75Hz) "latest frame" sampling which LFS currently does. And I would bet many would see it as a very big improvement in immersion if this issue were addressed somehow, someday.

Actually, this issue is why I have not dared to buy a VR headset yet: I would intend to use it almost exclusively with LFS, but I am afraid the 100Hz-to-90Hz jitter would ruin my experience because I know it could be smoother. And the eyes are very sensitive to jitter when the image is scrolling; the brain knows it is fake and not a real-world image.
This actually stimulates your balance centre the wrong way compared to a real-world image, which the balance centre uses, among other things, to keep track of your balance and keep you from becoming dizzy. Of course, maybe the micro stutter is not making anyone dizzy, but it sure is noticeable when compared to no stutter, and it is much more important for immersion than getting the lighting of the scene or the 3D models more photorealistic, or even getting the tyre physics (almost back on topic :-) ) more realistic than it already is.

The funny thing is that implementing an interpolation between physics and graphics, with each thread running on a separate CPU core, would actually yield more headroom for both physics and graphics improvements and solve the micro stutter at the same time. Of course, this is easier said than done on a SW architecture I would guess was developed before multi-core CPUs became mainstream.
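The interpolation idea above can be sketched in a few lines. This is a minimal toy illustration, not LFS code: the `advance` step and the 0.25 m per tick figure (90 km/h at a 10 ms step) are stand-ins, and the loop is the classic fixed-timestep-with-render-interpolation pattern.

```python
# Minimal sketch (not LFS code) of a fixed 100 Hz physics step with the
# renderer blending between the two most recent physics states.

PHYSICS_DT = 0.01  # 100 Hz physics step, in seconds

def advance(pos):
    """Stand-in physics step: 0.25 m per 10 ms, i.e. 90 km/h."""
    return pos + 0.25

def frame(prev, curr, acc, frame_dt):
    """One render frame: consume frame_dt, stepping physics as many whole
    steps as fit, then blend the last two states by the leftover fraction."""
    acc += frame_dt
    while acc >= PHYSICS_DT:
        prev, curr = curr, advance(curr)
        acc -= PHYSICS_DT
    alpha = acc / PHYSICS_DT            # 0..1 fraction into the next step
    shown = prev + (curr - prev) * alpha
    return prev, curr, acc, shown
```

Sampled at 60 Hz, the displayed position then advances by exactly 25 m/s x 1/60 s every frame, instead of the uneven jumps that "latest frame" sampling produces.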

PS: nice bikes.. same story here, I quit smoking and began bicycling 5-6 years ago :-D Only road biking though.
hetner
S3 licensed
Quote from Scawen :hetner, the improvement of the synchronisation between physics frames and vertical sync had to wait for another update

I totally understand your priorities. And thanks for yet another great update, it has been like getting new HW for free!!
hetner
S3 licensed
Quote from AnnieC :
Yeah, I know that completely stutter free gaming is not possible with the current system but if you check my screenshot that is far from the random little stutter that you are referring to.

Well, I could understand that you had improved the stutter since that screenshot was taken, as you say here in a later post:
Quote from AnnieC :because it's stuttering less when the minimum processor speed is set to 100%

So I was assuming "less" means only one frame more or less per physics clock, which you can't avoid even with 1000 FPS of headroom, and it is still very annoying and very noticeable, especially now that the general frame rate is very high and smooth.
Quote from AnnieC :
Regarding the interpolation, isn't some kind of interpolation already used in calculating the positions of other cars in multiplayer? Or does that system only use the last know parameters for the calculations?

If anything it would be extrapolation and/or prediction of position, and as I understand it, it has nothing to do with graphics. But there is a synchronization between the local and server game clocks going on, which could change how these random stutter patterns behave on your PC when switching between offline and online gaming. It is probably not something most people will notice, though :-)
hetner
S3 licensed
Quote from AnnieC :
Quote from DANIEL-CRO :I just tried turning on VSYNC and got crazy stutters. Soon I found the issue, my CPU underclocked to 1000 MHz from original 3200 MHz (saw on LFSLazy gadgets). Normally when VSYNC is off I get 250+ FPS, so I guess same problem is on your PC.

It seems that it's something like that because it's stuttering less when the minimum processor speed is set to 100% in the Power Settings. But the stuttering is still there sometimes so I guess the other power saving options are working in the background. I never thought I'd see the day when LFS uses so little CPU power that my low spec computer goes into power saving while playing Smile
(I tried LFSLazy and it seems that because it uses some CPU power it smoothes out LFS Big grin) Thanks for the advice Smile

You will always have the risk of random stutter because the game clock and the display refresh clock are not in sync, as I described here: https://www.lfs.net/forum/post/1892764#post1892764

The only way to get completely stutter-free is to resample/interpolate from game time to graphics time, or get HW that supports G-sync, FreeSync or similar. Or make LFS sync to the refresh rate clock, if that is possible.
hetner
S3 licensed
Quote from Scawen :
Quote from hetner :You probably have a good reason why its not already in-sync, and maybe it has already been tried but found not possible. I just wanted to know if this idea have been considered?

Good thoughts you had there and it all makes sense. No, I have not considered this properly, but I'll consider it a bit. I'm not sure how easy it really is to do in offline mode. It is obviously complicated by the variety of refresh rates and the varying pattern of the number of physics updates per frame required to stay in sync. It would need to 'loosely synchronise' with the refresh rate so that for example if a refresh was missed it could catch up with real time. There is an additional complication online, as there are micro adjustments to the local game clock to stay in sync with the game clock on the server. I see some complication in trying to make micro adjustments to stay in sync with the monitor refresh rate while also making conflicting adjustments to stay in sync with the server clock. I'll consider this more but it might be too much to rush in for this week as I want to get that full version out this weekend.

Just thinking out loud.. please ignore me if not relevant :-)
If a hard clock-locking approach (not micro adjusting) is an option, then dropping or adding one frame once in a while to stay in sync with server time is not really an issue, because a one-frame timing error, say every 10+ seconds, is hardly detected by the eyes; it is when a stream of errors occurs that it really appears as stutter, imo. So ideally, online, the game clock should be kept aligned so that the jitter doesn't cause stutter, and if possible frames should be dropped or added when needed to stay in sync with the server. Just like video capture programs do when synchronizing to a VCR frame clock to keep video and audio in sync throughout the capture, which is usually 25 FPS and goes completely unnoticed by most people :-)

At a 50Hz refresh rate offline, the sequence would look like this:
<wait for next V-sync>
<wait(5ms)> // wait to align edges from colliding because of jitter
<DoGameCalc>
<wait((10ms)-(timetoDoGameCalc))>
<DoGameCalc>
<start sequence over>// approximately 20-15=5ms to next v-sync and 10ms to next DoGameCalc

At a 60Hz refresh rate offline, the sequence would look like this:
<wait for next V-sync>
<wait(5ms)>
<DoGameCalc>
<wait((10ms)-(timetoDoGameCalc))>
<DoGameCalc>
<wait((10ms)-(timetoDoGameCalc))>
<DoGameCalc>
<wait((10ms)-(timetoDoGameCalc))>
<DoGameCalc>
<wait((10ms)-(timetoDoGameCalc))>
<DoGameCalc>
<start sequence over>// approximately (3*(1000/60))-(5+(4*10))=5ms to next v-sync and 10ms to next DoGameCalc

At a 50Hz refresh rate, the pseudo code for frame adding/dropping would look like this:
<wait for next V-sync>
IF (Serverclockdif>=max_time_behind)
{ /*Drop video frame*/
<wait(4ms)>
<DoGameCalc>
<wait((6ms)-(timetoDoGameCalc))>
<DoGameCalc>
<wait((7ms)-(timetoDoGameCalc))>
<DoGameCalc>
<start sequence over>// approximately 20-(4+6+7)=3ms to next v-sync and 8ms to next DoGameCalc
}
else
IF (ServerclockDif<=max_time_ahead)
{ /*Add video frame*/
<wait(10ms)>
<DoGameCalc>
<start sequence over>// approximately 20-10=10ms to next v-sync and 15ms to next DoGameCalc
}
else
<wait(5ms)>
<DoGameCalc>
<wait((10ms)-(timetoDoGameCalc))>
<DoGameCalc>
<start sequence over>

Of course it will be more complicated at other refresh rates, but if the game clock can accept a momentary jitter of max +/- 5ms, then I believe this approach could work. Maybe some longer and more complex sequences could be made to optimize the jitter. Otherwise I find it hard to solve syncing to both the server and the refresh rate at the same time; then resample/interpolate is the only option.
Last edited by hetner, .
hetner
S3 licensed
Well, now I have been enjoying the new improved performance, and with the new frame graph something just seems so obvious now.

And that is the answer to this question:
Why is it that I can reduce quality options and resolution and get 600-700 FPS, yet when I turn on V-sync I still get temporally random stuttering?

Answer:
Well, with the new frame graph (display refresh rate at 50Hz) it shows every frame with 2 physics calculations until the stutter begins; then one frame can have 1 physics calc and the next 3, and so on, randomly.
I am pretty sure it is the physics clock and display refresh rate going out of sync for a moment. When looking at the picture with V-sync off and the LFS FPS limit at 50, I can see one screen tear, relatively steady but slowly moving up or down, representing the difference between the two clocks. I can also see a fast jitter of 20-30 pixels up and down in the tear position, representing the jitter between the two clocks.
Now, whenever this screen tear (with V-sync off) is in the middle of the screen and I turn on V-sync, everything is always perfectly smooth. But if the tear (with V-sync off) is at the upper or lower edge of the screen and I turn on V-sync, I get terrible stutter, which makes sense because the frames change rapidly between 1, 2 and 3 physics calcs per frame as the clock "edges" overtake each other randomly. I hope you understand what I mean :-)

And what use are high frame rates when frames come randomly with bad timing (stutter), which ruins the illusion of the virtual world?

So, my idea or question to you, Scawen: why not sync the physics clock to the refresh clock when V-sync is on?

And sorry if my lack of insight makes you tired of getting this question.

It "just" needs to micro-adjust (0.01Hz) the physics clock to keep the sync steady and aligned, so the jitter between the clocks doesn't generate the stuttering. It's not like the physics clock is (or needs to be) more precise than the refresh rate, is it? This would be a major improvement in smoothness/anti-stutter, without screen tear, at all refresh rates and on all HW, without having to make a complete resample/interpolation implementation.
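As a toy illustration of the micro-adjustment idea (a hypothetical controller with an illustrative gain, not LFS code, and assuming a measured refresh period): slewing the physics period a tiny amount each refresh drives the phase offset between the two clocks to zero.

```python
def micro_adjust(phys_hz=100.0, refresh_hz=50.1, gain=0.05, frames=500):
    """Toy clock-locking loop: each refresh, nudge the physics period so
    that `steps` physics updates span exactly one refresh interval,
    shrinking the phase offset geometrically."""
    steps = round(phys_hz / refresh_hz)   # e.g. 2 physics steps per frame
    refresh_t = 1.0 / refresh_hz
    phase = 0.004                         # start 4 ms out of alignment
    history = []
    for _ in range(frames):
        period = refresh_t / steps - gain * phase / steps
        phase += steps * period - refresh_t   # equals (1 - gain) * phase
        history.append(abs(phase))
    return history

errors = micro_adjust()
print(f"phase error after {len(errors)} frames: {errors[-1]:.2e} s")
```

Note the locked physics rate ends up at exactly steps x refresh rate (100.2 Hz for a 50.1 Hz display here), which is the kind of tiny deviation from nominal 100 Hz that the post argues is acceptable.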

I made this table just to see how often the physics clock could be synchronized with different refresh rates:
Refresh rate (Hz) | physics calcs between sync | frames between sync
 50Hz             |  2                         | 1
 60Hz             |  5                         | 3
 75Hz             |  4                         | 3
 90Hz             | 10                         | 9
100Hz             |  1                         | 1
120Hz             |  5                         | 6
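The numbers in the table follow from simple arithmetic: the two clocks realign after the least common multiple of their periods. A quick sketch to reproduce it (assuming integer refresh rates):

```python
from math import gcd

def sync_counts(refresh_hz, physics_hz=100):
    """Physics steps and display frames between exact realignments of a
    100 Hz physics clock and an integer display refresh clock."""
    g = gcd(refresh_hz, physics_hz)
    return physics_hz // g, refresh_hz // g

for hz in (50, 60, 75, 90, 100, 120):
    phys, frames = sync_counts(hz)
    print(f"{hz:3d} Hz: {phys:2d} physics calcs every {frames} frame(s)")
```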

Of course micro stutter at anything other than 50 and 100Hz will still occur, but it is really not as big a problem as the other stutter, created by the jitter and out-of-sync clocks, imo.

You probably have a good reason why it's not already in sync, and maybe it has already been tried but found not possible; I just wanted to know if this idea has been considered. Especially now that you have made this frame info graph, which seems to have all the info needed to keep the physics clock in sync with the refresh rate. I could be wrong :-)
hetner
S3 licensed
Quote from DANIEL-CRO :
Quote from hetner :
Ofcause it would not be a backward compatible solution and therefore out of scope for now(until new physics maybe). But i cant see why 8.333ms step should be any different from 10ms to handle?Uhmm

ATM is simply checks if checkpoint has been crossed in last physics update.

I still don't see why checkpoint timing should be limited to 1ms resolution.
Quote from DANIEL-CRO :
Quote from hetner :i think maybe it is using highperformance timing which have much better resolution than 1ms. It could be based on QueryPerformanceFrequency and QueryPerformanceCounter calls which usually have better than ~500ns resolution or the intrinsic instruction RDTSCP which have the resolution of your CPU clock(theoretically a 3GHz would have 333,333ps resolution)

These timers have nothing to do with LFS time. They are mostly only used to measure time needed to complete some tasks (parts of code).

OK, maybe you have some insight I don't have, but my point was just that there is plenty of timing resolution in computers today, so making an 8.333ms clock with the same accuracy as a 10ms one should not be a problem. Of course, it depends on how the current implementation looks whether it is an easy change or not.
But to be honest, I think the best thing to do is to find the most optimal clock for physics and then solve graphics smoothness by interpolation/resampling; this would also give a steady input lag.

Quote from DANIEL-CRO :
Quote from hetner :Usually Sleep(1) take less than 1ms Uhmm typically 0,98ish ms if i recall correctly. maybe you forgot to take your performance timers calls overhead into account??

None of my tests confirmed that ... Yesterdays tests were done in other Project which run tenths of other threads very light in fact, but I'm sure it had some effect on final results. Todays results are even better. Overhead is really low in this case as you can see in table below. 1000 samples for each.

Hmm, well, it could be that your computer and mine are different, or I could be remembering wrong, but I am pretty sure I both read it and measured it on my PCs a while back when I was coding some microsecond-timing software at work. I can't seem to find anything about it now, though.
hetner
S3 licensed
Quote from Lucas McFly :monitor driver does not filter what modes are available?

Yes, but apparently there is a difference between DirectX 9 and 10; it could be the nVidia driver making that difference, or maybe it is something DX9 does not support.

EDIT: OK, I just did some research and found this very cool tool:
http://www.monitortests.com/fo ... om-Resolution-Utility-CRU

With this I could add/modify a "Detailed resolution" in my display driver directly in the registry :-) and after a display restart (use restart.exe in the zip), 50Hz is now available in LFS :-) It's fantastic, and the new graph made it very easy to adjust the refresh rate, which can be set with high resolution in CRU.exe, e.g. 50.005Hz, if you want to get closer to sync, or deliberately slightly out of sync so the jitter between the physics clock and refresh rate never makes more than one frame timing error jump when "passing" the critical point. For example, if I set my monitor to 50.1Hz I will get that 10ms time jump once every ~5 seconds, which is hardly noticeable compared to 60Hz, which does it once every 3 frames.
Last edited by hetner, .
hetner
S3 licensed
Quote from Scawen :
Quote from hetner :But never mind that i would just be happy if LFS could accept the Refresh rate of 50Hz in full screen mode.Shrug It is like there is some kind of display dependent limit of the Refresh rate range in LFS. ex. my monitor can be set successfully in windows desktop between 50Hz and 68Hz in custom mode, but LFS only display the range 56Hz to 68Hz to select from. And another Monitor and PC i have tried i can set it in windows to 41Hz to 62Hz an in LFS i only get the range 50Hz to 60Hz i really dont get why it should not show the whole range supported by the driver? I could understand it if it just did not support custom rates but that is obviously not the caseBig Eye

I have no idea why the 50 Hz modes is not displaying in LFS on your computer. LFS doesn't filter out modes unless their size is less than 640x480. All refresh rates reported by DirectX are listed for you to select. And some people reported they can see 50 Hz modes in LFS. There must be something different on your computer or the method you are using to create extra modes.

OK, that got me thinking, so I did some quick testing of DirectX samples from the Microsoft DirectX SDK, and it looks like all the DirectX 10 samples and up report the 50Hz mode on my system, while the DirectX 9 samples only report the same as LFS :-(

I wonder why, but this removes LFS as a suspect for that problem :-)
hetner
S3 licensed
Quote from Rotareneg :Ah, I hadn't seen that new graph! This time I dropped the frame rate down to 15 fps to make it more visible, and also included a 100 fps sample. Also, I recorded this with vsync and the frame rate limiter both off in LFS and used my capture software (Dxtory) to synchronize the display frame rate with the recording frame rate. Oh, and this was the AI driving each run new, not from a replay.

This is a great demo of the benefits of the display being in sync with physics, even if you need to go down in FPS to be in sync. But the graphs don't show the minor problems at 50/100Hz that would show up in a real-life situation, because the physics is not 100% in sync like it is in your captured video.
hetner
S3 licensed
Quote from Rotareneg :My monitor doesn't present any 50 Hz resolutions on my system, but I can add custom resolutions with the nVidia control panel which LFS picks up fine.

I also used the nVidia Control Panel to make a custom resolution. My issue is that LFS doesn't accept the same range of refresh rates as Windows ;-) On my work laptop I can also create and choose down to 50Hz, but not on my private desktop (which is the PC I use for LFS). I can create the mode all right, but I can't choose it in LFS if it is below 56Hz.
Quote from Rotareneg :
Here's a video demonstration the difference between 50 Hz and 60 Hz in LFS, made by taking two recordings and changing the frame rate to 30 Hz for both to make the difference more visible. For me at least, the video playback in Firefox tended to stutter slightly, ruining the effectiveness of the demo, but Chrome played it smoothly so that you can see the difference.

Ha ha, I just had some problems with the playback smoothness of this video because my monitor was set to 50Hz ;-) But now I have seen it at 60Hz, and the every-3rd-frame time jump is very visible at 60Hz, and that's what I mean is very noticeable in real time too; at least you notice the big difference when the jitter is gone at 50Hz.
Could you do this video test with the new frame graph on, so it would be visible when the 10ms time jumps occur? Time jumps would be visible in the physics graph as changes in thickness.
Last edited by hetner, .
hetner
S3 licensed
Quote from DANIEL-CRO :
Quote from hetner :
Actually if the physics would only just be changed to 120Hz any 60Hz monitor would run LFS silky smooth as long as frames never drop below 60Hz and v-sync is on.

There are 75, 100Hz, 144Hz monitors... Solution should be universal for all refresh rates.

All these monitors would be able to run 60Hz, and a gamer's 144Hz monitor would be able to put out 120Hz. And the frame timing jitter would be 1.67ms less on all systems ;-)
Quote from DANIEL-CRO :
Another issue is time step in 120 Hz physics. Current 100Hz physics relates to 0.01s time precision on splits. 120Hz physics would have 8.333ms time step, how to handle this because 10!=8.333 ? There isn't any interpolation in LFS ATM.

Of course it would not be a backward compatible solution and is therefore out of scope for now (until new physics, maybe). But I can't see why an 8.333ms step should be any different from 10ms to handle. I can't be sure because I don't know how the physics timing is implemented, but I think maybe it uses high-performance timing, which has much better resolution than 1ms. It could be based on QueryPerformanceFrequency and QueryPerformanceCounter calls, which usually have better than ~500ns resolution, or the intrinsic instruction RDTSCP, which has the resolution of your CPU clock (theoretically a 3GHz CPU would have 333.333ps resolution). And we all know how bad Sleep(x) timing is, so I am pretty sure that is not how physics is timed, or else we would have way more problems with frame timing, I think.

Quote from DANIEL-CRO :
Quote from Scawen :For some reason the Sleep command gives up less CPU time than requested.

Quickly measured time to complete Sleep(1); using high performance timers.
Minimum=1.00861ms
Average=2.02136ms
Maximum=6.17521ms

Usually Sleep(1) takes less than 1ms, typically 0.98-ish ms if I recall correctly; maybe you forgot to take the overhead of your performance timer calls into account?
And it is commonly known that it is not wise to use Sleep() for critical timing, because it is only a "wish" to the system that it is OK for your code to let the system handle other stuff for x milliseconds.
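For what it's worth, this kind of measurement is easy to reproduce. Here is a sketch using Python's `time.sleep` and `time.perf_counter` as stand-ins for Win32 Sleep and QueryPerformanceCounter; the results depend entirely on the OS timer resolution, so treat the numbers as illustrative only.

```python
import time

def measure_sleep(ms, samples=50):
    """Time sleep(ms) with the high-resolution counter, subtracting the
    (tiny) overhead of the counter calls themselves."""
    # Estimate the cost of a pair of perf_counter() calls.
    t0 = time.perf_counter()
    for _ in range(1000):
        time.perf_counter()
    overhead = (time.perf_counter() - t0) / 1000

    durations = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(ms / 1000.0)
        durations.append(time.perf_counter() - start - overhead)
    return min(durations), sum(durations) / len(durations), max(durations)

lo, avg, hi = measure_sleep(1)
print(f"min={lo*1e3:.3f} ms  avg={avg*1e3:.3f} ms  max={hi*1e3:.3f} ms")
```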

But never mind that; I would just be happy if LFS could accept a refresh rate of 50Hz in full screen mode. It is like there is some display-dependent limit on the refresh rate range in LFS. E.g. my monitor can successfully be set in the Windows desktop between 50Hz and 68Hz in custom mode, but LFS only displays the range 56Hz to 68Hz to select from. On another monitor and PC I have tried, I can set 41Hz to 62Hz in Windows, but in LFS I only get the range 50Hz to 60Hz. I really don't get why it doesn't show the whole range supported by the driver. I could understand it if it just didn't support custom rates, but that is obviously not the case.

And can I just say: anybody who thinks that a 60Hz display refresh rate is better than 50Hz for LFS, because 60 is higher than 50, should really try to see the difference with their own eyes :-) Remember you have a ~20Hz micro jitter which your eye can detect easily when things are moving/scrolling steadily, or by looking at the wheel spokes not spreading evenly when turning fast. Running LFS at 50Hz is as smooth as having G-sync/FreeSync at ~50 FPS. The only problem is that eventually you can get a 10ms time jump, but at least you don't get it as often as with 60Hz.

And sorry if I am on the edge of the topic, but it is these very cool new things, like improved FPS and the frame graph, that have made it possible to get these smooth frames on my system, and it is really cool and addictive.
hetner
S3 licensed
Quote from DANIEL-CRO :
Quote from hetner :I just wanted to try 50Hz refresh rate by making a custom resolution and only changing Refresh rate from 60Hz to 50Hz but the resolution option do not show up in LFS setting???

I can set 23, 24, 30, 50, ... Hz in LFS so it shouldn't be a problem. I guess you made custom resolution in a program like AMD CCC, in that case LFS will not display that resolution/refresh rate as an option. Only options are screen modes reported by Windows (that you can configure using just Control Panel).

I actually use the nVidia Control Panel, and if I make the exact same custom resolution with the refresh rate set to 56Hz, it appears in LFS to select and it works; but if I lower it to 55Hz or below, LFS does not show it as an option in the settings. Pretty annoying, actually, when I have just experienced 50Hz being very smooth on the eye in windowed mode, because there is no 10ms time jitter every 3rd frame (like 60Hz has).

Actually, if the physics were just changed to 120Hz, any 60Hz monitor would run LFS silky smooth, as long as frames never drop below 60 FPS and V-sync is on.

In my windowed mode at 50Hz I can see that the physics and display refresh rate are very close to being in sync, because the screen tearing is moving very slowly and is almost steady at one point. That would mean the 10ms time jump error would not happen very often, if at all, if only it would work one way (windowed V-sync) or the other (LFS accepting 50Hz).
hetner
S3 licensed
I just wanted to try a 50Hz refresh rate by making a custom resolution, only changing the refresh rate from 60Hz to 50Hz, but the resolution option does not show up in the LFS settings. If I change it to 56Hz it does show, but not if I go below 56Hz.
Is this an LFS bug/feature, not allowing refresh rates below 56Hz? I would think 50Hz would actually be a good option to minimize the frame timing jitter, for a smooth experience even on old HW.

EDIT: OK, I just did some testing in windowed mode with the desktop set to 50Hz. It is smoother than ever.
The only problem is screen tearing, because V-sync does not work in windowed mode; is that intended?
Last edited by hetner, .
hetner
S3 licensed
Quote from matze54564 :Smoother Graphics when setup "Maximum Buffered Frames" to 2 @50 Hz, Vsync on 50,0 fps. Thank you very much.

Aston is now 100% smooth at my AMD A6 APU, Westhill is at the problem zones not smooth....

If the monitor and physics clocks are not synchronized, you will get a 1/100Hz = 0.01 second time jump error in the flow of frames, even if you have 2 frame buffers. The question is how often you get the time jump error, and that depends on how far out of sync the two clocks are. It is basically the same thing that happens at a 60Hz monitor rate, where you make a 0.01s time jump every 3rd frame (3/60Hz = every 50ms). If we say physics is the master clock, and relative to that the monitor refresh rate is 50.1Hz, then you will get that 0.01s time jump every 250 frames (250/50.1 ≈ 5 sec). This is maybe not detectable, or maybe the clocks are much closer, so the time jump occurs even less frequently, I don't know; but you will have some timing error eventually unless it is dealt with somehow.
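The arithmetic above generalizes. A small sketch, assuming the physics clock is the master and plain "latest state" sampling (the refresh rates are just examples):

```python
def slip_interval(refresh_hz, physics_hz=100):
    """Seconds between 10 ms time-jump errors when the display clock
    drifts against the physics clock (refresh must not divide physics
    exactly, or there is no drift at all)."""
    steps = round(physics_hz / refresh_hz)        # ideal physics steps/frame
    drift = abs(steps / physics_hz - 1.0 / refresh_hz)  # seconds per frame
    frames = (1.0 / physics_hz) / drift           # frames to slip one step
    return frames / refresh_hz

print(f"60 Hz: a jump every {slip_interval(60)*1000:.0f} ms")
print(f"50.1 Hz: a jump every {slip_interval(50.1):.1f} s")
```

This reproduces both numbers in the post: a jump every 50 ms (every 3rd frame) at 60 Hz, and roughly every 5 seconds at 50.1 Hz.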
hetner
S3 licensed
Quote from DANIEL-CRO :
Exactly!
Moreover using VSYNC (60 FPS) while game runs at 100Hz causes discontinuous move of a world around you. For example you are driving at 90 km/h, that makes 25 m every second or 0.25 m every game loop (0.01 s). World around you will move by 0.25m first frame, 0.5m 2nd frame, 0.25m 3rd frame, ... which should be easily noticeable if you look at the side while driving fast.

So the experience might be smoother if you change your monitor to 50Hz and add V-sync. The only problem is that the 100Hz physics clock is not synchronized with the monitor clock. The only right way to get smooth frames between two unsynchronized clocks is to resample all frames in time, but I guess that would be really complex and take a performance hit, not to mention lag. It is not impossible, though, since to my knowledge digital TVs have been doing something similar in real time for some time now. Otherwise a technology like G-sync should be used, but that can't be used with the Oculus Rift or other HW not built for it.
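The discontinuous movement DANIEL-CRO describes is easy to reproduce on paper. A sketch with "latest state" sampling of a 100 Hz physics clock at 60 Hz, using 0.25 m per physics step (90 km/h) as in his example:

```python
PHYSICS_HZ, DISPLAY_HZ = 100, 60
STEP_M = 0.25  # metres per 10 ms physics step at 90 km/h

def steps_done(frame_index):
    """Completed physics steps when display frame N samples the latest
    physics state (integer math, so the pattern is exact)."""
    return (frame_index * PHYSICS_HZ) // DISPLAY_HZ

positions = [steps_done(i) * STEP_M for i in range(8)]
jumps = [b - a for a, b in zip(positions, positions[1:])]
print(jumps)  # [0.25, 0.5, 0.5, 0.25, 0.5, 0.5, 0.25]
```

So the world jumps by an uneven 1-2-2-step pattern every three frames instead of a constant distance per frame, which is exactly the judder being discussed.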
hetner
S3 licensed
Nice performance boost, thank you very much!

And I like the graph; it is really helpful when adjusting graphics on old HW to find a compromise between detail and performance.

However, I did find a "funny" missing-texture error once at WE1X, see attached.

I found it in both H and H5. I did try to reproduce it in a second run without luck, but it is happening in the attached replay (time 2.51). I am not sure if this is a known bug; I just wanted to report it.
hetner
S3 licensed
I got a freeze when disabling Haze and trying to run a replay.

The replay loads, but nothing is rendered except a black screen and the replay transport bar.

I get the same error popup + freeze as ACCAkut and Sobis when starting single player.

I am on Win7 Enterprise 64-bit, 1920x1200.

The same thing happens in windowed mode.
Last edited by hetner, .
hetner
S3 licensed
Quote from PeterN :Can you vary the strength of the blur so it only affects the edges?

Maybe this is beginning to turn off topic, because this would be difficult to optimize for a real-time implementation. But I tried varying a Gaussian kernel size so that it approximately fits the scale change caused by the distortion, and then using that as the weight for calculating the target pixel. The result is actually sharper than the pre-blur method and more anti-aliased than simple bilinear interpolation.

There are a lot of parameters to tweak for this to be perfect, and a one-pixel-wide grid is very unforgiving to try to anti-alias when distortion like this is applied. But this method would be able to resize and distort at the same time, and hence save a processing step, while applying anti-aliasing in any amount one would like or has CPU/GPU headroom for. Then again, it is not as straightforward to implement as the two separate steps, AA and then distortion.
hetner
S3 licensed
Quote from PeterN :Can you vary the strength of the blur so it only affects the edges?

Not easily, and that would definitely not do anything we would like to the performance.

But it would be possible to pre-calculate an interpolation table that weights all the source pixels perfectly according to the distortion; it would still demand quite a few real-time multiplications per pixel, though. I am not sure if there is an off-the-shelf technology that does this already.
hetner
S3 licensed
Of course you're right; it was just a brain fart on my side. I just did some tests with a simple black and white grid, and it is very clear that simple 2D bilinear interpolation, or even bicubic for that matter, does not handle sharp edges very well when the source resolution gets too high, as you said.

Here is the result, if anyone is interested:


But then I could not help myself and tried blurring the image with a 3x3 Gaussian kernel before processing. That of course removed the artifacts from the end result, but I don't think that approach is an option here, for many reasons.
Last edited by hetner, .
hetner
S3 licensed
The final distortion is probably done with a 2D bilinear interpolation table, and if I were writing that code I would make it do the resizing (50%) as well, so that the only extra workload needed would be the big render. I actually think that would also reduce the artifacts, because it has more precise data to interpolate from.

But I do not know where the final distortion is done in this case, and if it is inside an API, maybe this is not possible.
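The one-pass idea can be sketched like this. Pure illustration: a tiny pure-Python bilinear remap with a precomputed table that folds a hypothetical distortion function and a 50% resize together; nothing here is the actual LFS or Rift SDK code.

```python
def bilinear(src, x, y):
    """Bilinearly sample a 2-D grid `src` at fractional (x, y)."""
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, len(src[0]) - 1), min(y0 + 1, len(src) - 1)
    fx, fy = x - x0, y - y0
    top = src[y0][x0] * (1 - fx) + src[y0][x1] * fx
    bot = src[y1][x0] * (1 - fx) + src[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def build_table(dst_w, dst_h, distort, scale=2.0):
    """One source coordinate per destination pixel: the distortion and the
    50% resize (scale=2 source pixels per dest pixel) share one lookup."""
    return [[distort(x * scale, y * scale) for x in range(dst_w)]
            for y in range(dst_h)]

def remap(src, table):
    """Apply the precomputed table: one bilinear fetch per output pixel."""
    return [[bilinear(src, x, y) for (x, y) in row] for row in table]

src = [[float(4 * r + c) for c in range(4)] for r in range(4)]
identity = lambda x, y: (x, y)       # stand-in for a real lens distortion
table = build_table(2, 2, identity)
print(remap(src, table))  # [[0.0, 2.0], [8.0, 10.0]]
```

With the table precomputed, the per-frame cost stays at one bilinear fetch per output pixel, whether it encodes distortion alone or distortion plus resize, which is the saving being described.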
Last edited by hetner, .
hetner
S3 licensed
Quote from Scawen :
All I want to hear from is a graphics programmer, telling me how to run a debug version of DX8 or DX9 in Windows 7.

Maybe you already know this, but it could be what you need to do:
Run the DirectX Control Panel. Mine is located here (Windows 7 Pro):
C:\Program Files (x86)\Microsoft DirectX SDK (June 2010)\Utilities\bin\x86\dxcpl.exe

Go to the tab called "Direct3D 9", select "Use Debug Version of Direct3D 9" and then click Apply.



I haven't been able to test whether skipping this would keep debug builds from executing.
Last edited by hetner, .
hetner
S3 licensed
It is funny. It's like watching bullfighting "Olé!"
FGED GREDG RDFGDR GSFDG