Can you have a look into how LFS chooses between the integrated GPU and the dedicated one? Could you add an option to manually select between the Intel HD 4600 and the GTX 850M in my case? No matter what I tried in the NVIDIA control panel or with tips and tricks off the internet, it still uses the integrated GPU. I'm not saying the HD 4600 isn't providing enough framerate, but at some point, with more graphical updates, it may no longer be enough to sustain a constant 60 fps.
I don't see this on my computer, in Windows 7. The CPU usage (on the Performance tab in Task Manager) drops each time I go to full screen with vertical sync enabled and goes higher when in a window (because of the unlimited frame rate). So it seems my computer doesn't use CPU power while waiting for the vertical sync.
Could it be a graphics card driver option, somehow forcibly disabling Multisampling AA? I am trying to imagine how DX9 could report that AA settings are not available. If there are any such options then the usual one should be "application controlled".
I don't know how that would be done. Is it Optimus? If so, I don't think LFS can control it. As far as I know, LFS can only select the graphics card if you have a choice of adapters, on the Graphics Options screen in LFS. Near the bottom of the screen where you see the card name, try clicking that button and see if you get a choice.
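For anyone curious what "a choice of adapters" means at the API level, here is a minimal sketch (not LFS's actual code) of how a Direct3D 9 application sees the installed adapters. On Optimus laptops the runtime typically exposes only the Intel IGP, which would explain why no choice appears. NVIDIA also documents an exported NvOptimusEnablement variable that hints the driver to prefer the dedicated GPU; whether LFS exports anything like it is not something this thread confirms.

```cpp
// Minimal sketch (not LFS code): list the adapters Direct3D 9 exposes.
// On an Optimus laptop this usually prints only the Intel IGP, because the
// dedicated GPU sits behind the driver rather than appearing as a separate adapter.
#include <windows.h>
#include <d3d9.h>
#include <cstdio>
#pragma comment(lib, "d3d9.lib")

// Documented NVIDIA hint: exporting this symbol asks the Optimus driver to
// prefer the high-performance GPU. Shown here purely as an illustration.
extern "C" __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d)
        return 1;

    UINT count = d3d->GetAdapterCount();
    for (UINT i = 0; i < count; i++)
    {
        D3DADAPTER_IDENTIFIER9 id;
        if (SUCCEEDED(d3d->GetAdapterIdentifier(i, 0, &id)))
            printf("Adapter %u: %s\n", i, id.Description);
    }

    d3d->Release();
    return 0;
}
```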
I don't really get any choice, it's just highlighted as you can see in the screenshot. Clicking it also does nothing. I tried right-clicking the LFS shortcut -> Run with graphics processor -> High-performance NVIDIA GPU, which is already set as the default, and that doesn't change anything either. It's weird. It just doesn't want to run with the GTX.
It's weird. It just doesn't want to run with the GTX.
Windows 8.1, right? There are various topics/games on the internet with complaints about this behavior.
---
Windows 8.1, right? There are various topics/games on the internet with complaints about this behavior.
Oh HI DAVE!
No, it's actually a fresh, up-to-date Win7 x64 Home Edition. Ultimate has a lot of rubbish preinstalled, so no thanks. Doesn't your laptop have integrated + dedicated graphics too? I know you have a dedi in an MXM slot, but no idea if you have an integrated one as well. What drivers are you using?
My processor seems to have an Intel HD 3000 built in, but I think HP permanently disabled it in the BIOS. It doesn't appear anywhere in the device listing. Maybe you need to block / switch yours off in the BIOS and/or device listing. I indeed have a dedi GPU in an MXM slot, which I think is awehzom.
That's because the screen of your laptop is directly connected to the IGP of your CPU (the Intel HD). It's not the same as on desktop PCs, where the dedicated GPU has its own outputs to plug the screen into (VGA, DVI, HDMI...).
On laptops, a dedicated GPU works like a co-GPU that takes the heavy work off the Intel IGP; that's why you can't disable the Intel HD from the BIOS... eh, it's hard to explain with my English.
But you can set the default from the NVIDIA Control Panel:
3D Settings -> Manage 3D Settings
Tab "Global Settings"
Preferred graphics processor
Select: High-performance NVIDIA processor
Just for your information, in my case the IGP really is switched off because it cannot work with the 30-bit display (HP calls it DreamColor; other manufacturers like Dell have their own branding for it). But that's a specific situation.
I have (almost) the same situation as Dave. On my Lenovo IdeaPad Y500 I have 2x GT 650M 2GB (each) in SLI paired with an i5-3210M which, according to ark.intel.com, has an Intel HD 4000 built into it. However, for some reason Lenovo COMPLETELY disabled my IGP. I can't see it anywhere, even in the BIOS. I tried encoding with Intel Quick Sync in OBS, but it says I have no Intel IGP installed.
This way, I have no problems with games, they always run on the 2 high-performance 650M's.
In your case, is there any noticeable change from H to H2 / H2 to H3 or does it all seem much the same?
I wanted to do some screen recordings here to show the differences, but for some reason it's not so easily reproducible at the moment... But this machine has a bit more power. The streaming computer I cannot use any more because of screen lag (what's the proper name for this?). That machine has an i7 720QM... It's a bit weird to buy a different system just for LFS, only to see that single-thread performance of much newer processors is also questionable... It's also the wrong approach to "fixing" a problem.
I will do some random streaming to show what issues I'm talking about. Will link to the videos later.
I also wonder: it affects the screen and makes it hard to drive, but will it affect physics/steering input as well? I guess so. Will it affect the Rift experience? I think so too, because that device is even more hardware hungry...
I'm confused by your text. Not really sure what you mean. It's not clear to me if you are saying there is a difference between H / H2 / H3 on your computer.
Anyway, when you say "screen lag" maybe you are talking about the effect normally called "input lag" which is when there is a noticeable delay between (e.g.) moving the controller or mouse and the on-screen steering wheel moving. This can happen when the CPU is not loaded but the GPU is drawing quite a lot. Then the GPU can get several frames behind the CPU (while trying to work through its list of things to draw and never catching up to real time) causing this strange effect. This effect can be quite extreme in some cases.
LFS has a prevention method for this (checks that the card is finished before sending more instructions to the card) so most people would never have seen that. It should be better in H3 than it was in previous versions, because it now uses an event query (DX9 function) instead of a lock (DX8 style). Though on most systems this shouldn't make a noticeable difference.
One test result I heard suggests that the input lag prevention mechanism may not have worked properly on SLI setups before H3 but should be OK now. Though LFS doesn't get any benefit from SLI anyway and may run faster with SLI disabled.
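To illustrate the "event query instead of a lock" point above, here is a rough sketch of the DX9 technique as described, written from the description rather than from LFS source: issue a query after each frame's draw calls, then block before the next frame until the GPU reports it has reached that point, so the GPU can never queue several frames behind the CPU.

```cpp
// Rough sketch (my interpretation, not LFS code) of input-lag prevention
// using a D3D9 event query. The query acts as a marker at the end of the
// frame's command stream; waiting on it keeps CPU and GPU in step.
#include <windows.h>
#include <d3d9.h>

static IDirect3DQuery9* g_frameQuery = NULL;

void InitFrameSync(IDirect3DDevice9* device)
{
    device->CreateQuery(D3DQUERYTYPE_EVENT, &g_frameQuery);
}

void PresentAndWaitForGpu(IDirect3DDevice9* device)
{
    g_frameQuery->Issue(D3DISSUE_END);        // marker after this frame's commands
    device->Present(NULL, NULL, NULL, NULL);

    // GetData returns S_FALSE while the GPU has not yet reached the marker.
    while (g_frameQuery->GetData(NULL, 0, D3DGETDATA_FLUSH) == S_FALSE)
        Sleep(0);                             // yield instead of a hard lock
}
```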
I have (almost) the same situation as Dave. On my Lenovo IdeaPad Y500, I have 2x GT 650M 2GB (each) in SLI
OMFG - SLI on a laptop??? Wow... :-O
... It's a bit weird to buy a different system just for LFS, only to see that single-thread performance of much newer processors is also questionable ...
And yes, single-thread performance hasn't advanced very much in the last several years. Worse on AMD CPUs than Intel but even Intel are massively off the curve.
Though LFS doesn't get any benefit from SLI anyway.
Oh. I thought SLI was "transparent" - that it didn't need app support to work, but that the device drivers just did the sharing under the hood...
Though LFS doesn't get any benefit from SLI anyway.
Oh. I thought SLI was "transparent" - that it didn't need app support to work, but that the device drivers just did the sharing under the hood...
That one was new for me, too!
OMFG - SLI on a laptop??? Wow... :-O
[bit offtopic]
Yeah, quite surprising but it's real - it has a slot called "UltraBay" and it's possible to buy either a HDD, an additional fan or a graphics card (GT650M or GT750M) that works together with the primary graphics card in SLI - so I have a GT650M in it.
Though LFS doesn't get any benefit from SLI anyway.
Oh. I thought SLI was "transparent" - that it didn't need app support to work, but that the device drivers just did the sharing under the hood...
Not really, it depends on how the program does its things. There are many things that can slow down SLI or remove any benefit from it. The best SLI setup is where one card does one frame and the other card does the next frame. So there has to be good overlap, to get any benefit.
Actually LFS's input lag prevention probably does prevent SLI helping at some times (at some points where frame rate is GPU bound). I could modify it to help a bit more. But as the frame rate in LFS is held back by CPU usage most of the time, that is another reason why SLI can't help. I think it might be possible to get higher frame rates with SLI by making LFS take less care to eliminate objects that don't need to be drawn (saving CPU) and just sending more to the cards to deal with it their way.
If you have to add many numbers together from a piece of paper in front of you, you will not be any faster if you use two calculators. It's a bit like that for LFS and SLI at the moment.
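One hedged guess at what "modify it to help a bit more" could look like (purely speculation on my part, nothing Scawen has committed to): instead of waiting on every frame's query, keep a small ring of queries and only block when the GPUs fall more than one frame behind, which would let AFR overlap two frames at the cost of a little extra latency.

```cpp
// Hypothetical variation, not anything announced for LFS: allow up to
// MAX_FRAMES_IN_FLIGHT frames to be queued before blocking. With two frames
// in flight an AFR pair can work on consecutive frames in parallel, at the
// cost of roughly one extra frame of input lag.
#include <windows.h>
#include <d3d9.h>

const int MAX_FRAMES_IN_FLIGHT = 2;
static IDirect3DQuery9* g_queries[MAX_FRAMES_IN_FLIGHT] = { NULL, NULL };
static int g_frameIndex = 0;

void PresentWithFramesInFlight(IDirect3DDevice9* device)
{
    IDirect3DQuery9*& q = g_queries[g_frameIndex % MAX_FRAMES_IN_FLIGHT];
    if (!q)
        device->CreateQuery(D3DQUERYTYPE_EVENT, &q);

    q->Issue(D3DISSUE_END);
    device->Present(NULL, NULL, NULL, NULL);
    g_frameIndex++;

    // Only wait for the frame issued MAX_FRAMES_IN_FLIGHT frames ago, so one
    // GPU (or the second SLI GPU) can still be working on the newest frame.
    IDirect3DQuery9* oldest = g_queries[g_frameIndex % MAX_FRAMES_IN_FLIGHT];
    if (oldest)
        while (oldest->GetData(NULL, 0, D3DGETDATA_FLUSH) == S_FALSE)
            Sleep(0);
}
```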
Well... in a sense they are not properly utilized... if you mean that LFS isn't taking advantage of multiple cores? I'd like to do physics and graphics on separate threads at some point, but it's a big restructure and is some way down the (imaginary) priority list.
On the other hand, if we accept that this is a single-threaded program for now, it does seem that a lot of CPU is used at some points, and this can use all available CPU power, limiting the frame rate without fully loading the GPU. It seems to be related to the large number of objects at Westhill, and LFS doesn't yet have the optimisations to deal with that. I'll have a closer look again at those specific places.
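For context on "eliminating objects that don't need to be drawn": the per-object visibility test itself costs CPU time, which a generic sketch like this shows (a textbook frustum test, nothing to do with LFS's real Westhill object system):

```cpp
// Generic illustration of CPU-side visibility culling (not LFS's actual
// object code): each object's bounding sphere is tested against the six
// view-frustum planes before its draw call is submitted. With tens of
// thousands of track objects, this loop alone becomes a real CPU cost.
struct Plane  { float nx, ny, nz, d; };           // plane: n . p + d = 0
struct Sphere { float x, y, z, radius; };

bool SphereVisible(const Sphere& s, const Plane frustum[6])
{
    for (int i = 0; i < 6; i++)
    {
        float dist = frustum[i].nx * s.x + frustum[i].ny * s.y
                   + frustum[i].nz * s.z + frustum[i].d;
        if (dist < -s.radius)
            return false;                         // fully outside this plane
    }
    return true;                                  // inside or intersecting the frustum
}
```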
In your case, is there any noticeable change from H to H2 / H2 to H3 or does it all seem much the same?
Having multi-threading as a low priority is at odds with your usual support for people with older computers, as making use of multiple cores is the main thing you could do to support them. I have a seven-year-old 2.4 GHz quad core that works well with all but the very newest games, yet in races my frame rate drops to 20 fps on the first and last corners of the main track, and as low as 15 fps in races around the roads. I realise you can make improvements to help performance as it is, but surely use of multiple cores would give headroom so even unoptimised tracks are at least playable. Presumably all future tracks released will be of this quality, so is there going to be a situation where people have to wait for ages after a new content release before it's optimised enough to be playable?
Having multi-threading as a low priority is at odds with your usual support for people with older computers, as making use of multiple cores is the main thing you could do to support them.
The reason multithreading is not at the top of the imaginary priority list (although as you quoted, I did say "I'd like to do physics and graphics on separate threads at some point") is because it is very difficult and requires a huge restructure of the graphics and physics systems. It could take months, after dealing with the can of worms it would open. It simply has lower priority than the S3 content, track object shaders, the dynamic reflections on the cars, the tyre physics and Rift support.
As far as I know, LFS has a higher frame rate than other games, and it just isn't a serious problem at the moment.
Not only could it take months, but due to a historical mistake, I have two separate versions of LFS, a development version and a public version. I have to manually merge in all the changes from one into the other. Separate physics and graphics thread requires some complicated things like multiple, interpolatable, snapshots of previous physics states. It is so difficult and complicated that embarking on such a project, in full knowledge that I would have to merge the two versions at a later point, would be worse than insanity. So it seems like a thing for after the new tyre physics is released and I have a single code base to work with.
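To make the "multiple, interpolatable snapshots of previous physics states" idea more concrete, here is a very simplified sketch of the usual pattern (the generic technique, not LFS code): the physics thread publishes snapshots at its fixed rate, and the render thread blends between the two most recent ones.

```cpp
// Very simplified illustration of the snapshot/interpolation pattern a
// separate physics thread would need (generic technique, not LFS code).
#include <mutex>

struct CarState { float x, y, heading; };         // toy physics state

struct SnapshotBuffer
{
    std::mutex lock;
    CarState   prev{}, curr{};                    // two most recent snapshots

    // Physics thread: called once per fixed physics step.
    void Publish(const CarState& s)
    {
        std::lock_guard<std::mutex> g(lock);
        prev = curr;
        curr = s;
    }

    // Render thread: alpha in [0,1] is how far we are between the snapshots.
    // (Ignores angle wrap-around and uses a coarse lock, for brevity.)
    CarState Sample(float alpha)
    {
        std::lock_guard<std::mutex> g(lock);
        CarState out;
        out.x       = prev.x       + (curr.x       - prev.x)       * alpha;
        out.y       = prev.y       + (curr.y       - prev.y)       * alpha;
        out.heading = prev.heading + (curr.heading - prev.heading) * alpha;
        return out;
    }
};
```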
The reason multithreading is not at the top of the imaginary priority list [...]
TL;DR: Multithreading is nice, S3 content and real-time reflections are better.
[...] I have to manually merge in all the changes from one into the other. [...]
@Scawen: I know it can feel strange to change now, when you have years of experience working like that, but I really think that switching to SVN or GitHub (or any other version control system) would be better for you. You'd have much more control over backups (for example hosting an SVN server on a home-made NAS to protect you from data loss on your PC), branches (for example S3 as a branch of the current LFS development version), etc.
At least do some research on it; I'm sure you can benefit from what it offers.
I'm confused by your text. Not really sure what you mean. It's not clear to me if you are saying there is a difference between H / H2 / H3 on your computer.
Anyway, when you say "screen lag" maybe you are talking about the effect normally called "input lag" which is when there is a noticeable delay between (e.g.) moving the controller or mouse and the on-screen steering wheel moving. This can happen when the CPU is not loaded but the GPU is drawing quite a lot.
On my daily computer the CPU-thread-overload issue doesn't show up as clearly as on my older machine, which I use as a backup and for streaming from my parents' place in NL. I'm living in third-world-internet-country Germany with amazing 1 Mbit upload technology, but that's a completely different story.
Anyway, this streaming laptop shows the 'screen lag' effect pretty well. It's like suddenly getting 20 FPS in the sections that are marked red on Abone's aerial Westhill map overview. It feels like 20 FPS all of a sudden, but the LFS FPS indicator doesn't drop that low. I would almost say that the FPS indicator is wrong in these situations.
Sad to hear that multi-threading is extremely difficult to do. Ehrm, well, yeah... H, H2, H3 doesn't matter for this current machine; it frequently runs into the 12.5% limit (one thread of eight fully loaded), and although I cannot easily detect a problem, it can't be good that the thread is overloaded, I think.
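On the FPS indicator possibly being "wrong": an FPS value averaged over a period can hide individual slow frames, which would match the "feels like 20 FPS but the counter says 58" impression. A small sketch of the difference (a generic frame-time measurement, not necessarily how LFS's counter works):

```cpp
// Generic illustration (not necessarily how LFS's counter works): an FPS
// value averaged over a second can look fine even when individual frames
// spike. Tracking the worst frame time in the same window exposes stutter.
#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

static Clock::time_point lastFrame = Clock::now();
static double worstMs = 0.0, totalMs = 0.0;
static int frames = 0;

void OnFrame()
{
    Clock::time_point now = Clock::now();
    double ms = std::chrono::duration<double, std::milli>(now - lastFrame).count();
    lastFrame = now;

    worstMs = (ms > worstMs) ? ms : worstMs;
    totalMs += ms;
    frames++;

    if (totalMs >= 1000.0)                        // report once per second
    {
        // A 60 fps average with a 50 ms worst frame still looks like stutter.
        printf("avg %.1f fps, worst frame %.1f ms\n",
               frames * 1000.0 / totalMs, worstMs);
        worstMs = totalMs = 0.0;
        frames = 0;
    }
}
```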
---
OK, here we go... The streaming computer is streaming what's going on at CG right now, on Westhill at the moment, for 1.5 hours: http://www.twitch.tv/cargamenl/
I will let it run for a while, it's not my electricity bill *cough* ... But eh, yeah, you can see in various sections of the track that the frame rate goes down massively... Not because the GPU is overloaded (thanks Daniel for making this visible in LFS) but because the CPU thread gets overloaded. I have set the affinity to thread 7; OBS (streaming) is multithreaded and uses the other threads except 7. So I think most of the available power goes to LFS (H3 patch).
Now, there actually are drops reported from 60 -> 30 FPS in various sections, but even in parts where no frame drop is reported (currently locked at 58 FPS in the LFS settings) the stream TV camera is not smooth. And it's not OBS dropping frames.
I will make a video later when I'm physically there with my DSLR, so interference from any other app is out of the question. In short: the panning stutters, even at ~60 FPS... And that's also noticeable while driving in-car or behind the car. The environment around the track stutters.
Yeah, quite surprising but it's real - it has a slot called "UltraBay" and it's possible to buy either a HDD, an additional fan or a graphics card (GT650M or GT750M) that works together with the primary graphics card in SLI - so I have a GT650M in it.
Mmmm, nice. I don't like burning my lap, but getting an extra fan would be... just tragic. I love that you get to choose whether to make your lap hotter or colder!
The best SLI setup is where one card does one frame and the other card does the next frame. So there has to be good overlap, to get any benefit.
Ah hell, yes, I was thinking of the original meaning of SLI (scan line interleaving).
Thanks for those links. I see that the closest thing to interleaving is nowadays called SFR (split frame rendering), at least by nvidia.
I can totally understand why lag could be an issue with AFR (alternate frame rendering); in fact I can't see how you could possibly avoid it cos lag is basically built into the process when you are rendering two (or more!) frames in overlapping time periods. I'm inferring that if two rigs with identical CPUs have the same frame rate, but one is using SLI-AFR and the other has a single GPU, then by definition the SLI-AFR rig has more lag between the physics calculation and the pixels hitting the screen. That seems kinda sucky, and I would've thought it only stops being noticeable at high enough frame rates that you could get by with just one of the SLI cards anyway... Edit: Better way to put that: an SLI-AFR rig has exactly the same input lag as with only one of the GPUs doing any work; the only benefit is more frames per second which is a totally cosmetic improvement and will not help to keep you on the road...
So yes, please don't waste your time on making SLI-AFR efficient with LFS [Caveat: your idea about off-loading work from the CPU to the GPU is a special case where you might indeed reduce lag. Complicated shiz...]
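To put rough numbers on that (back-of-the-envelope reasoning, not measured data): a single GPU that takes 16.7 ms per frame delivers 60 fps with about one frame (16.7 ms) of render latency. Two GPUs in AFR that each take 33.3 ms per frame can also deliver 60 fps, because they alternate, but any given frame still spends 33.3 ms on its GPU, so the input-to-screen latency is roughly that of a 30 fps card even though the counter reads 60.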
Not only could it take months, but due to a historical mistake, I have two separate versions of LFS, a development version and a public version. I have to manually merge in all the changes from one into the other.
Ohhhhhh..... I'll say nothing. (But Victor, if you're reading this, we need a smiley for sucking teeth )
Switched back to the LFS_H executable and eehh... It's much, much worse, so H3 does make improvements.
Same FPS reports, but the "screen is lagging" so much it's a pain for the eyes. (And yes, I again configured LFS_H to thread 7 and the other CPU-consuming programs avoid this thread.)
edit1: Do I need to test H2 also? Hmm
edit2: OK, as far as my personal experience goes, I get the feeling H2 scores a bit better than H, but the biggest change is actually H3. Strangely enough, H3 reports the biggest frame drops in some sections, but maybe that's because this benchmark is more accurate in H3? I know, lots of vague talk; it's just a feeling.
Same FPS reports, but the "screen is lagging" so much it's a pain for the eyes. (And yes, I again configured LFS_H to thread 7 and the other CPU-consuming programs avoid this thread.)
I'm certainly confused about your "lagging screen" problem - I've never seen anything like it (the key bit being that the fps indicator doesn't drop). That DSLR vid (or even mobile phone vid!) you thought about making would perhaps help.
Btw, your CPU affinity settings just might be making things worse... (If you want to confirm the thread is consuming the equivalent of a whole CPU there are other ways.) You may want to see how it compares without being pinned...
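On the "other ways" to confirm a single thread is eating a whole core (my own suggestion of one such way, not necessarily what was meant): compare the process's CPU time against wall-clock time with GetProcessTimes. For a mostly single-threaded program, a ratio near 1.0 means one core is saturated regardless of how affinity is set.

```cpp
// One possible check (my suggestion): measure how many core-equivalents a
// process such as LFS.exe uses over a 5-second sample, without any pinning.
#include <windows.h>
#include <cstdio>
#include <cstdlib>

static ULONGLONG FileTimeTo100ns(const FILETIME& ft)
{
    ULARGE_INTEGER u;
    u.LowPart = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart;
}

int main(int argc, char** argv)
{
    DWORD pid = (argc > 1) ? (DWORD)atoi(argv[1]) : 0;   // pass the game's PID
    HANDLE proc = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, pid);
    if (!proc)
        return 1;

    FILETIME creation, exitTime, kern1, user1, kern2, user2;
    GetProcessTimes(proc, &creation, &exitTime, &kern1, &user1);
    Sleep(5000);                                         // sample for 5 seconds
    GetProcessTimes(proc, &creation, &exitTime, &kern2, &user2);

    double cpuSec = (FileTimeTo100ns(kern2) - FileTimeTo100ns(kern1)
                   + FileTimeTo100ns(user2) - FileTimeTo100ns(user1)) / 1e7;

    // ~1.00 means the equivalent of one full core was busy during the sample.
    printf("CPU cores used on average: %.2f\n", cpuSec / 5.0);
    CloseHandle(proc);
    return 0;
}
```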
Yeah, I will be there on Monday to do some real video... Streaming complicates things too much. My DSLR can do 720p @ 50 FPS; my phone is not really state of the art (because I don't care about phones).
Today I decided to put my personal textures into 0.6H3 and I ran into a problem with the textures being the "wrong format" (see screenshot). This changed my FPS for some reason, but the textures work normally. I have these in 0.6H without any issues.