Would a pixel shader program work properly if it used three images to calculate blur?
Of course the GPU load of quadratically 'interpolating' three 1280x1024 images would be insane, but I'm just wondering whether it would work. (I say 'interpolating' for lack of a better word: I mean quadratically approximating the direction of movement of each pixel and computing the image that should show the blurred scene at the time of frame two, as it changes from frame one to frame three.)
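To make the 'quadratic' part concrete, here is roughly the idea for a single pixel, sketched on the CPU rather than in a shader. This is only one possible reading of it: it fits a quadratic through the three colour samples of a pixel and averages it over a shutter interval around frame two, so it blends colours per pixel instead of actually estimating motion; the Color type and the [0.5, 1.5] shutter interval are just placeholders I made up.

```cpp
#include <cstdio>

// Hypothetical colour type; a real pixel shader would work on texture samples.
struct Color { float r, g, b; };

// Fit c(t) = a*t^2 + b*t + c through samples taken at t = 0, 1, 2
// (frames one, two, three) and average c(t) over the shutter interval
// [0.5, 1.5] centred on frame two. Note: this is per-pixel colour
// blending, not true per-pixel motion estimation.
Color quadraticBlur(Color p0, Color p1, Color p2) {
    auto blend1 = [](float c0, float c1, float c2) {
        // Quadratic coefficients from the three samples.
        float a = 0.5f * (c0 - 2.0f * c1 + c2);
        float b = 0.5f * (-3.0f * c0 + 4.0f * c1 - c2);
        float c = c0;
        // Average of a*t^2 + b*t + c over t in [0.5, 1.5]:
        // the integral of t^2 over that interval is 13/12, of t is 1, of 1 is 1.
        return a * (13.0f / 12.0f) + b + c;
    };
    return { blend1(p0.r, p1.r, p2.r),
             blend1(p0.g, p1.g, p2.g),
             blend1(p0.b, p1.b, p2.b) };
}

int main() {
    // A pixel that brightens linearly over the three frames keeps roughly
    // its frame-two value, as expected for a shutter centred on frame two.
    Color out = quadraticBlur({0.0f, 0.0f, 0.0f}, {0.5f, 0.0f, 0.0f}, {1.0f, 0.0f, 0.0f});
    std::printf("%f\n", out.r);  // ~0.5
}
```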
Have the shader program keep a buffer of frames, wait until three images have reached the buffer, quadratically approximate the blurred image, draw it, then drop the oldest frame from the buffer, shift the other two down, push the newest sharp frame on top, and repeat: interpolate, drop, shift, push, interpolate, and so on.
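The buffering part would look roughly like the following CPU-side sketch. Frame, renderSharpFrame, the dummy frame contents and the fixed 0.25/0.5/0.25 weights are all placeholders of mine; on the GPU the blend step would of course be the pixel shader sampling the three buffered frames as textures (and could use the quadratic blend from the sketch above instead of fixed weights).

```cpp
#include <cstdio>
#include <deque>
#include <vector>

// Hypothetical frame type standing in for a render target; one float per
// pixel keeps the sketch small.
using Frame = std::vector<float>;

// Placeholder for rendering the scene normally into a texture.
Frame renderSharpFrame(int frameIndex) {
    return Frame(4, static_cast<float>(frameIndex));  // 4-"pixel" dummy frame
}

// Placeholder for the blend; on the GPU this would be the pixel shader
// reading the three buffered frames. Crude fixed weights are used here.
Frame blurFromThree(const Frame& f0, const Frame& f1, const Frame& f2) {
    Frame out(f1.size());
    for (size_t i = 0; i < out.size(); ++i)
        out[i] = 0.25f * f0[i] + 0.5f * f1[i] + 0.25f * f2[i];
    return out;
}

int main() {
    std::deque<Frame> buffer;  // oldest frame at the front
    for (int frame = 0; frame < 10; ++frame) {
        // Push the newest sharp frame on top of the buffer.
        buffer.push_back(renderSharpFrame(frame));

        if (buffer.size() < 3)
            continue;  // wait until three frames have reached the buffer

        // Blur frame two using frames one and three, then "draw" it.
        Frame blurred = blurFromThree(buffer[0], buffer[1], buffer[2]);
        std::printf("frame %d -> blurred value %f\n", frame, blurred[0]);

        // Drop the oldest frame; the other two shift down, and the next
        // sharp frame goes on top in the next iteration.
        buffer.pop_front();
    }
}
```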
Vain