Are you sure there is nothing else wrong with that computer? The HD 4600 can run Battlefield 3 and Skyrim just fine, so it should breeze through LFS without breaking a sweat. I had a laptop with the two-generations-older HD 3000 and all Source engine-based games worked perfectly. The only thing the HD 4600 might struggle with is high AA.
This has nothing to do with .NET timers. The receive buffer fills up because the application stops picking up data over InSim due to a bug in LFS_External. I'm not sure how .NET library linkage and symbol resolution works, but if you believe a newer LFS_External will resolve the issue, can't you just replace the DLL with the 1.1.1.7 version?
There is no other way than the DirectInput way. If you are really out of options, you can try a proxy dinput8.dll that pipes the parameters and results of the relevant DInput calls to some debugging output. This might give you some idea of what's going on.
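For illustration, here is a minimal sketch of what such a proxy could look like (my own sketch, not an existing tool): a 32-bit DLL named dinput8.dll dropped next to LFS.exe that forwards DirectInput8Create to the real system DLL and logs each call so it can be watched in DebugView or a debugger.

#include <windows.h>
#include <cstdio>
#include <cstring>

// Proxy dinput8.dll sketch: forward DirectInput8Create to the genuine system
// DLL and log the call and its result via OutputDebugString.
typedef HRESULT (WINAPI *DirectInput8Create_t)(HINSTANCE, DWORD, REFIID, LPVOID *, LPUNKNOWN);
static DirectInput8Create_t real_DirectInput8Create = nullptr;

static void load_real_dinput8()
{
    if (real_DirectInput8Create)
        return;
    char path[MAX_PATH];
    GetSystemDirectoryA(path, MAX_PATH);            // load the real DLL, not ourselves
    strcat_s(path, "\\dinput8.dll");
    HMODULE real = LoadLibraryA(path);
    real_DirectInput8Create = (DirectInput8Create_t)GetProcAddress(real, "DirectInput8Create");
}

extern "C" __declspec(dllexport)
HRESULT WINAPI DirectInput8Create(HINSTANCE hinst, DWORD version, REFIID riid,
                                  LPVOID *out, LPUNKNOWN outer)
{
    load_real_dinput8();
    HRESULT hr = real_DirectInput8Create(hinst, version, riid, out, outer);

    char msg[128];
    sprintf_s(msg, "DirectInput8Create(version=0x%lx) -> 0x%lx\n", version, (unsigned long)hr);
    OutputDebugStringA(msg);                        // shows up in DebugView / a debugger
    return hr;
}

Note that this only logs the creation call; to see the actual device traffic you would also have to wrap the COM interfaces it returns, and depending on what the game imports you may need a .def file for the undecorated export name plus forwarding of the other dinput8.dll exports.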
Would you please stop repeating this nonsense? A lot of devices _do_ have full FFB support on Linux and even more of them are supported enough for the needs of LFS.
I assume the character you are going for is 'BLACK STAR' (U+2605)? As far as I know, the only codepages supported by LFS that contain this character are CP950 and CP936, for Traditional and Simplified Chinese respectively.
LFS does not support any form of Unicode encoding, so saving the text in the database as a UTF-8 encoded string is IMO not a good idea. You should be using a raw byte array instead - that way you avoid a lot of the problems that stem from converting text from one encoding to another. When you send text to LFS via InSim, everything is assumed to be in CP1252 AFAIK. The special character "^" (0x5E in CP1252) then tells LFS to use a different encoding for the part of the string from the "^" sequence onward. There is no "switch to another encoding" sequence in your string, which is why the characters are displayed as if they were in CP1252 instead of whatever encoding you actually want to use.
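As a rough sketch of the raw-byte approach (Windows-only, using WideCharToMultiByte; the "^H" marker below is just a placeholder for whatever codepage-switch sequence LFS actually expects for CP950 text - check the InSim docs for the real one):

#include <windows.h>
#include <cstdio>
#include <cstring>
#include <vector>

// Encode a wide string into raw bytes for LFS: a codepage-switch marker
// followed by the text converted into that codepage. The result should be
// stored and sent as-is, without any further re-encoding.
std::vector<char> encode_for_lfs(const wchar_t *text, UINT codepage, const char *marker)
{
    std::vector<char> out(marker, marker + strlen(marker));

    int len = WideCharToMultiByte(codepage, 0, text, -1, nullptr, 0, nullptr, nullptr);
    std::vector<char> buf(len);
    WideCharToMultiByte(codepage, 0, text, -1, buf.data(), len, nullptr, nullptr);

    out.insert(out.end(), buf.begin(), buf.end());   // includes the trailing '\0'
    return out;
}

int main()
{
    // BLACK STAR (U+2605) as CP950 bytes, prefixed with the placeholder marker
    std::vector<char> bytes = encode_for_lfs(L"\u2605", 950, "^H");
    for (char c : bytes)
        printf("%02X ", (unsigned char)c);
    printf("\n");
}

Storing exactly these bytes (e.g. as a BLOB) and sending them unchanged avoids the CP1252 round trip that mangles the text.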
Long story short, Russia is trying to seize the opportunity to take over Crimea and possibly other parts of eastern Ukraine while the country is unstable after (or more likely during) a revolution. Russia's main motivation is its military bases in Crimea, which it has leased from the former Ukrainian government. The new Ukrainian government that is currently being formed would most likely not allow Russia to use those facilities anymore. Since the "new" Ukraine would likely be more EU-oriented, it would become much less of a "buffer zone" between Russia and the West.
The situation is obviously much more complex than that.
Can you actually calculate it like that? I've had some trouble finding a definitive answer to this, but it seems to me that the VRAM of a dedicated GPU is not mapped into the CPU's physical address space. Here's some info from my PowerXpress laptop with an HD 3000 and a Radeon 6400M.
This output might seem a bit incomprehensible at first, but it's actually quite simple to read. Let's start with the Radeon card: the output says there are three memory regions mapped to that card:
These are the memory blocks the CPU can reach directly. However, the graphics driver reports a VRAM size of 1024 MB for the Radeon.
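Incidentally, on Linux the same ranges show up in /proc/iomem, which lists every physical address range the CPU can see, PCI BARs included. A trivial way to dump it (run as root, otherwise the addresses are zeroed out):

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    // /proc/iomem describes the CPU-visible physical address space,
    // including the graphics card's BAR regions
    std::ifstream iomem("/proc/iomem");
    std::string line;
    while (std::getline(iomem, line))
        std::cout << line << '\n';
}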
[ Tech - why is it like that: ]
It is my understanding that the CPU cannot access VRAM directly - and it makes sense. The GPU has its own MMU, and VRAM works in a somewhat different fashion than system RAM. Those 256 MB the CPU can see are what is called BARs (Base Address Registers). When the CPU wants to talk to a PCI Express device such as a graphics card, it reads or writes data through these regions. This, however, is *not* the GPU's VRAM.
[ /Tech ]
The upshot of all this is that the VRAM size does not eat into the physical address space. If you want to check what does eat up the address space in Windows, you can look at the memory ranges in use, for example in Device Manager (View -> Resources by type -> Memory).
Also keep in mind that it is the chipset that creates the physical memory map, so if the chipset cannot address more than 4 GB you're screwed anyway. Some of Gustavo Duarte's blog posts cover this topic pretty well.
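As a rough illustration (my own sketch, not from those articles), the following compares the RAM the firmware reports as installed with what Windows can actually use; on a 32-bit system the difference is roughly the address space eaten by device mappings such as the BARs above. It needs Vista SP1 or newer for GetPhysicallyInstalledSystemMemory.

#include <windows.h>
#include <cstdio>

int main()
{
    ULONGLONG installed_kb = 0;
    GetPhysicallyInstalledSystemMemory(&installed_kb);   // installed RAM as reported by SMBIOS, in kB

    MEMORYSTATUSEX status = { sizeof(status) };
    GlobalMemoryStatusEx(&status);                       // physical memory the OS can actually use

    ULONGLONG installed_mb = installed_kb / 1024;
    ULONGLONG usable_mb   = status.ullTotalPhys / (1024 * 1024);

    printf("Installed RAM : %llu MB\n", installed_mb);
    printf("Usable RAM    : %llu MB\n", usable_mb);
    printf("Difference    : %llu MB (roughly what devices and firmware reserve)\n",
           installed_mb - usable_mb);
}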
I find it much more ironic that they were boasting about various Russian accomplishments throughout its entire history during the ceremony - even about those that weren't really Russian - but they can't get drinking water to the hotels...
By "installed AMD Mantle" you mean the 14.1 beta driver? Are you using the Direct3D9 version of LFS? This is probably a bug in the AMD driver or perhaps some texture filtering option is set incorrectly.
I think the question you should have asked yourself before starting this thread was "Why would someone want to code for me?" You do realize that no one is going to spend days or weeks working on this when you offer nothing in return? People who employ the pseudo-Kickstarter tactics with donation money at least try...
You might want to consider modifying some open source Cruise app, that shouldn't be too difficult.
It doesn't seem to be anything like Mantle. The way I understand it, Mantle gives you more direct control over the rendering process and it gives you access to the GPU memory so you can manage your resources more efficiently. High-level APIs are more like "I need this rendered, make it so... somehow".
NVAPI just lets you query some GPU parameters (clock, temperature, number of physical GPUs etc.) and control some features such as G-Sync, SLI and video output. Part of NVAPI is under NDA, so god knows what else it can do. Even if the NDA-covered part were comparable to Mantle (which it most likely isn't), it wouldn't make much of a difference, because it's under NDA.
Unless AMD is willing to provide an open source implementation for Mesa, it will be a setback for the FOSS 3D graphics stack.
I was talking about a hypothetical situation where Mantle is the no. 1 API. Game devs will naturally optimize for the latest graphics hardware available at the time. This might result in those games performing considerably worse on the previous generation of hardware and even on the next one. With high-level APIs you have the driver doing some generic optimizations which don't let you reach the GPU's full potential, but they also let you write common code for all GPUs that should perform reasonably well in most cases.
Some time ago I read an article by a guy who specialized in low-level CPU optimizations. The article was from the ancient times when 3D acceleration was very new, so you had to do most of the calculations on the CPU. I remember how he sometimes had to bend over backwards just to make sure there were no "bubbles" in the pipeline, how the CPU cache on certain AMD K6 CPUs wasn't entirely random access, and other sorts of quite crazy stuff. Imagine this minus the common ground that is the x86 architecture...
I agree, but doesn't Mantle make the problem even worse? OpenGL is a set of common functions plus vendor-specific extensions. Mantle as it is now is completely vendor- (and architecture-) specific, and even if it gets wide adoption, I'm afraid that what I wrote above might apply. Would it not make more sense to implement the most desired Mantle features in the form of API extensions as well, instead of creating something entirely new?
Whatever is a step away from vendor lock-in is a good thing, but I'm not convinced that going from D3D to Mantle would do more than change the "vendor" part while keeping the "lock-in" part.
BTW, NVAPI is something completely different and apparently you cannot write a 3D engine with it.
Mantle is a rather weird idea. Although the basic premise of giving programmers more control over the GPU is good, I'm quite concerned that it might hurt the PC as a gaming platform in the long run. Here's why:
- It's Windows-only and it will most likely stay that way for some time. This is particularly unfortunate because Intel and AMD have been making huge advancements with their open source 3D graphics stack. If this development continued, I think there was a real chance that SteamOS, for example, could've been running without proprietary GPU drivers in a year or two. Unless somebody is willing to pour huge resources into a FOSS Mantle implementation, this is not going to happen.
- Coding closer to the hardware means the need for more device-specific optimizations. Even if nVidia and Intel provided their own implementations of Mantle, programmers would still have to write different rendering paths for different GPUs. This could in extreme cases cause old games to perform poorly on new generations of GPUs. Even the various kinds of x86 CPUs - which are all fully backwards compatible with the Intel 8086 from 1978 - perform differently under different workloads, and operating systems, compilers and programmers implement CPU-specific code to achieve optimal performance. GPU architectures are much more diverse, so the extent of the problem could be worse.
- If nVidia decides to battle Mantle with a low-ish level API of their own, we'll basically regress 20 years back to the age of no 3D API standards. This will be another reason for the game developers to target consoles.
If there is any chance to use our GPUs more effectively, I'm all for it, but I'd rather see this done by extensions and updates to the APIs we already have than by introducing something completely new and possibly proprietary.
Yeah, the amount of work involved even in simple projects is terribly underestimated. Assuming I could write something like this in two weeks at 3 EUR per hour, it'd come to approximately 180 EUR (10 days, 6 hrs per day). I could probably make more money washing dishes...
WINE depends on the libjpeg (or in my case libjpeg-turbo) library to do some JPEG file processing. Perhaps your WINE has been built with libjpeg support disabled, or the version it was built against does not match the one installed on your system?
FWIW, this is the framerate I can squeeze outta my GPUs, everything was run at 1600x900:
The recommended way of going multiplatform is to
1) Move to SDL2
2) Move from D3D to OpenGL
3) Sort out any remaining issues
Rewriting everything from WinAPI to SDL2 would be a herculean task.
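Just to give an idea of what steps 1) and 2) buy you, here is a minimal sketch of the SDL2 + OpenGL skeleton such a port would be built around (assuming the SDL2 development files are installed; there is nothing LFS-specific in it):

#include <SDL.h>
#include <SDL_opengl.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO | SDL_INIT_JOYSTICK);     // SDL replaces the WinAPI window/input code

    SDL_Window *window = SDL_CreateWindow("port sketch",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        1600, 900, SDL_WINDOW_OPENGL);

    SDL_GLContext ctx = SDL_GL_CreateContext(window); // OpenGL context replaces the D3D device

    bool running = true;
    while (running)
    {
        SDL_Event ev;
        while (SDL_PollEvent(&ev))
            if (ev.type == SDL_QUIT)
                running = false;

        glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        SDL_GL_SwapWindow(window);                    // works the same on Windows and Linux
    }

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}

The window, event and context code above is identical on Windows and Linux, which is exactly the point of moving away from raw WinAPI and D3D.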
OpenGL drivers on Windows are fine. id Tech engines used to be OpenGL-only, and when Valve was porting the Source engine to OpenGL, the OpenGL version outperformed the D3D one even on Windows - despite the fact that there is an internal D3D->OpenGL wrapper in the engine.
The problem with even top-of-their-field specialists is that without access to MSC's medical data they can either state the obvious or take their best guess. Journalists might also - either on purpose or due to a lack of understanding of the subject - mangle what the doctor actually said. The most sensible thing to do is wait for an official statement...