No, the problem isn't with head movements but with the lack of g-forces. When you try it without the moving platform, try slowly stopping the car while looking out of the side window. That is the absolute worst for me.
My PC runs LFS very well (4.5 GHz i7, GTX 770), so that's not the problem. The problem when stopping the car is that I anticipate the deceleration ending and the nudge forwards it causes. You know how, when you stop a real car, all the passengers nudge forward. When that doesn't happen, my eyes go blurry, my head feels dizzy and I get an instant hit of nausea in my stomach. I have thought that a motion platform with even the slightest amount of front & back tilt could fix the issue and make it comfortable. At high speeds I have no problems whatsoever, but the slower I go the more pronounced the effects are, and stopping the car is the most extreme case.
From what I have read about the issue, it comes from a conflict in the "fusion" of the vestibular system and the visual system. There's plenty of research done for military sims available online, and the conclusion seems to be just that. Another conclusion was that the less realistic the experience, the less simulation sickness, and vice versa.
Interesting. I have been on the verge of investing in such a setup, but one reason I haven't is the tracking issue. Which system are you using?
Do you feel that you get less nauseous when using the moving setup? I get a really bad hit of nausea and head dizziness when stopping the car and at slow speeds in general.
As far as I know, it should be the Oculus SDK that handles that, i.e. figuring out the platform orientation.
If LFS fixed the head orientation from the platform info, I suspect it would lead to very jittery movements, because the Oculus positional tracking uses the IMU too. When accelerating, you lean backwards on such a platform while in-game you should stay still, but the Rift's IMU will sense the backward acceleration, think the headset is being moved, and that movement will show up on screen. The SDK would need to know how the platform is moving so it could take the additional movement into account in the sensor fusion. If the camera (mounted on the platform) sees the headset staying in place but the IMU senses movement, there is a conflict, and I have no idea how it would be handled. If the positional tracking were done only with the camera, this would be a much easier problem.
There are no good solutions for this yet. The easiest (to my logic at least) would be to attach a second IMU, identical to the one in the Rift, to the moving platform. Then the Oculus SDK would know how the platform is oriented and could adjust the "down" vector accordingly. The camera could be on the platform or off it; it wouldn't matter as long as it's implemented that way.
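To make that concrete, here is a minimal sketch of the idea (entirely my own, with made-up C++ types; none of this is the Oculus SDK's or LFS's actual API): with a reference IMU bolted to the platform, you can subtract what the platform itself is doing from what the headset IMU senses, so only the rider's own head motion is left for the fusion, and the platform IMU alone tells you where "down" really is.

```cpp
// Hypothetical sketch: cancel platform motion out of the headset IMU signal
// using a second, platform-mounted IMU. All types and names are made up.

#include <cstdio>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

struct ImuSample {
    Vec3 accel;  // linear acceleration (m/s^2), includes gravity
    Vec3 gyro;   // angular velocity (rad/s)
};

// Both IMUs feel gravity plus whatever the platform is doing (tilt for braking
// cues, vibration), so the difference is just the head's motion relative to
// the platform. The platform IMU on its own then provides the "down" vector.
// A real implementation would first rotate the platform sample into the
// headset's frame and match timestamps; that is skipped here.
ImuSample HeadRelativeToPlatform(const ImuSample& headset, const ImuSample& platform)
{
    return { headset.accel - platform.accel, headset.gyro - platform.gyro };
}

int main()
{
    // Platform pitches back to fake deceleration: both sensors feel the same
    // gravity (y) and surge (z); only the headset has a little extra head motion (x).
    ImuSample headset  { { 0.1f, 9.81f, 2.0f }, { 0.0f, 0.0f, 0.05f } };
    ImuSample platform { { 0.0f, 9.81f, 2.0f }, { 0.0f, 0.0f, 0.05f } };

    ImuSample head = HeadRelativeToPlatform(headset, platform);
    std::printf("head-only accel: %.2f %.2f %.2f\n",
                head.accel.x, head.accel.y, head.accel.z);
}
```

Of course the two streams would need a common clock and calibration, which is why I think this belongs inside the SDK's sensor fusion rather than in LFS.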
I have no idea if it is related, but I noticed that when I touch my monitor, the head location jumps about half a meter. It doesn't happen every time, and it only works for a while; then the monitor needs to "recharge", hehe. Some magnetic / electrical interference might be causing the jumps.
I agree with this, but only partly. I feel the overall experience is best with VSync, but the jittery head motion during small movements is better with very high FPS, probably due to fewer motion prediction issues. Timewarp to the rescue, please =)
It does stay in place in space, not relative to your head, so you can look around it. I was surprised it behaved that way, because I had read exactly what you are saying, so it might have changed between SDK versions.
Edit: or wait a second, do I remember this all wrong? The background was moving while the overlay "stayed in place", so that might be why it was comfortable. I need to check this.
Edit 2: sorry for the confusion, it really is as you describe: stuck to your face, not to the world. It is transparent, and that made it comfortable in contrast to, say, loading screens that are totally static and make you feel ill. Your implementation of it is much, much better.
I'm not sure if I'm helping here, but the result would be "just" 2D relative to each eye position, just like we have a 2D image per eye in the Rift itself. In other words what you are describing is actually all that is needed.
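Just to illustrate what I mean (a rough sketch with made-up names, not how LFS or the SDK actually does it): a head-locked overlay is simply a quad at a fixed offset in each eye's own space, so the head pose never enters its transform, only the normal per-eye projection.

```cpp
// Hypothetical sketch: a head-locked 2D overlay defined once in eye space.
// Because the coordinates are relative to the eye, the same quad is reused
// every frame for each eye, no matter where the head is looking.

#include <array>
#include <cstdio>

struct Vec3 { float x, y, z; };

struct EyeOverlay {
    Vec3  offset;        // meters in front of the eye, e.g. 0.8 m straight ahead
    float width, height; // quad size in meters
};

// Four corners of the overlay quad in eye space; only the usual per-eye
// projection is applied afterwards, never the head pose.
std::array<Vec3, 4> OverlayCorners(const EyeOverlay& e)
{
    const float hw = e.width * 0.5f, hh = e.height * 0.5f;
    return {{
        { e.offset.x - hw, e.offset.y - hh, e.offset.z },
        { e.offset.x + hw, e.offset.y - hh, e.offset.z },
        { e.offset.x + hw, e.offset.y + hh, e.offset.z },
        { e.offset.x - hw, e.offset.y + hh, e.offset.z },
    }};
}

int main()
{
    // Same eye-space quad for both eyes; the eye views themselves supply the IPD.
    EyeOverlay overlay { { 0.0f, 0.0f, -0.8f }, 0.6f, 0.4f };
    for (const Vec3& c : OverlayCorners(overlay))
        std::printf("corner: %.2f %.2f %.2f\n", c.x, c.y, c.z);
}
```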
Has anyone else noticed that the positional tracking does something weird with very small movements? I feel like it could be the prediction, because it feels like my viewpoint is moved too far for a very short moment. With bigger and longer motions it's perfect, but in the menus, for example, it's very easy to notice and it makes me feel a bit ill. Edit: I didn't notice this in the Oculus demo scene with the desk.
Would timewarp perhaps help if the prediction "predicts too much"? Timewarp would update the view just before presenting, so it would be much closer to where my head really is.
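As a toy example of what I mean (numbers made up, nothing measured from LFS or the SDK): the frame is rendered for a pose predicted a whole frame ahead, so a noisy velocity estimate during tiny movements gets multiplied by roughly 30 ms; a timewarp-style late correction re-samples the pose a few milliseconds before scanout, so only a small fraction of that bad prediction survives.

```cpp
// Toy 1-D illustration of prediction overshoot and a timewarp-style late fix.
// All numbers are invented for the example.

#include <cstdio>

int main()
{
    // Suppose the head yaws at 5 deg/s but the tracker's velocity estimate is
    // noisy and momentarily reads 20 deg/s (easy with very small movements).
    const float trueVel     = 5.0f;    // deg/s, actual head motion
    const float noisyVel    = 20.0f;   // deg/s, what the prediction uses
    const float renderAhead = 0.030f;  // s, predict a whole frame + scanout ahead
    const float warpAhead   = 0.003f;  // s, time left when the warp samples the pose

    const float trueYawAtPhoton = trueVel * renderAhead;              // where the head really is
    const float renderedYaw     = noisyVel * renderAhead;             // what the frame was drawn for
    const float warpedYaw       = trueVel * (renderAhead - warpAhead) // fresh measurement...
                                + noisyVel * warpAhead;               // ...plus a tiny prediction

    std::printf("error without timewarp: %.2f deg\n", renderedYaw - trueYawAtPhoton);
    std::printf("error with timewarp:    %.2f deg\n", warpedYaw   - trueYawAtPhoton);
}
```

In this made-up case the error drops by an order of magnitude, which is roughly what I would hope to feel in the menus.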
Btw, awesome work Scawen! In every other way your Rift integration is near perfect!
As far as I know it should be in there already, but you need to enable V-sync for it to work.
Now if only the tracking volume were bigger, we could walk around the car and then hop in. Then, in some future version when Oculus brings out their hand trackers, we could roll down the window and give the finger to passing drivers