You're 100% correct in that assumption; how much the brakes fluctuate depends greatly on the car's brake system, though (like Shotglass said, tubing is one of the big keys).
Last edited by der_jackal, .
Reason : Someday spelling and I will get along. Until then, there is an edit feature!
Actually they do in the case of the V8 Supercars. If I have any time this week I'll pull a chase camera shot and you can see where they just blip the brake lights for a second as they tap the brake pedal.
Not really... it's just something they do. I wish I had TiVo'd some of last year's American Le Mans Series. They showed a full lap of Mid-Ohio with a Porsche driver doing the same thing, and they gave a really great explanation of why he was doing it in areas where loading the suspension wasn't really required (which is why you would normally do it on a long straight; over a big bump you tend to trail-brake to load the suspension and keep the car settled *g*).
Easiest way to think of it is this: the pads and pistons are vibrated, shoved and pushed around while racing. The pads in those V8 Supercars "float" in the caliper so they can be changed on the fly (required in some races), and as such they can be pushed back farther from the rotor face, or move around just ever so slightly to where they aren't "true" to the rotor face.
What that technique does is push the pistons in the caliper lightly forward, seat the pads to give maximum coverage, and ensure the whole braking system is primed to work as the driver needs it.
Welcome to Volkswagen / Audi-land. At least the Logi shifter's reverse is off the 5th / 6th gear gate; try going down to first and fearing a reverse engagement...
But then again, the shifter in the actual Audi isn't that easy to push down. :P
That's vague at best; processors do what they are instructed to do.
Again, any and all multithreaded applications inherit the usage of both cores from the OS (and you're hard pressed to find any commercially available software today that ISN'T multithreaded). Anytime an application calls CreateThread from within its execution space, the OS determines which hardware thread it goes to.
For an application to be OPTIMIZED for multiple hardware threads, it has to determine and then set affinity for the hardware thread executor it wants to use for each thread, and for it to really be optimized (ensure 100% CPU utilization on both cores) it has to do more to guarantee those threads aren't stumbling over each other for shared resources (amongst other things)...but more on that later.
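To make that concrete, here's a minimal sketch of worker threads pinning themselves to specific cores. On Win32 the call would be SetThreadAffinityMask; this sketch uses Python's os.sched_setaffinity, the Linux equivalent (guarded, since it doesn't exist on every OS). The worker function, thread count, and workload are invented for illustration, not anything LFS actually does.

```python
import os
import threading

def worker(results, idx):
    # Pin the calling thread to one CPU (pid 0 = "this thread").
    # os.sched_setaffinity is Linux-only, hence the guard; on Windows
    # you'd call SetThreadAffinityMask instead.
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {idx % os.cpu_count()})
    results[idx] = sum(range(100_000))  # stand-in CPU-bound work

def run_pinned(n_threads=2):
    results = [None] * n_threads
    threads = [threading.Thread(target=worker, args=(results, i))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Without the affinity call, the OS is free to schedule each thread on whichever core is available; with it, the application has taken that decision away from the scheduler.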
CPU clock cycles don't have much play other than being the basis for the thread quantum (which is 2 clock tick intervals on Win2k and above, or 12 if you're running Server platforms, btw). However, a high priority (DPC - High / RealTime) thread will run until it's completed, regardless of the quantum.
Ever had your single core system "stall" or "hang" for a second or two while something was running? Generally you were the victim of a high priority thread running until it was done, which in turn stalled (on queue) all other lower priority threads (keyboard input, mouse input, etc).
Lower priority threads run on that quantum: they are scheduled, then run until they are done, are interrupted by a higher priority thread, or their quantum is reached. If their quantum is reached or they are interrupted, the context is stored off and the thread is placed back on the queue until its turn comes up again. The vast majority of user mode software runs at < DISPATCH level (levels go from 0-31, 0 being the lowest, 2 being DISPATCH).
Are you thinking only 1 thread can run at a time and that the OS is using parallel processing to split that thread across both cores? That's actually not how it works.
Each core was processing a different thread at the time of death (each core is noted by the n: kd>). And actually core 0 was in an idle state when the BugCheck happened.
So, you have three threads on the queue. All three w/ the same priority level. All three from one application. The Thread Scheduler walks the queue, finds that thread1 is in a ready state, sees that processor 0 is running the idle thread, and submits thread1 to processor 0. The Scheduler returns to the queue, sees thread2 is in a ready state, sees processor 0 is busy and the thread quantum hasn't been reached, looks at processor 1, sees it's idle and submits thread2 to processor 1. The Scheduler walks the queue and sees thread3 is in its ready state, looks at processor 0, sees that thread1 has reached its quantum, stores the context off for thread1, places it back on the queue and starts thread3.
The Scheduler walks the queue and sees that thread1 is in a ready state, looks at processor 0 and sees that thread3 hasn't reached its quantum, looks at processor 1 and sees that thread2 has reached its quantum. It stores the context off for thread2, drops it back on the queue and submits thread1 to processor 1. Meanwhile, thread3 completes and unloads; the Scheduler is notified, walks the queue, sees that thread2 is in a ready state, and submits thread2 to processor 0.
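That walkthrough can be sketched as a toy simulation. To be clear, the two-unit quantum, the job weights, and the strict round-robin are illustration values, not real Windows internals (the real scheduler is priority-driven and preemptive).

```python
from collections import deque

QUANTUM = 2  # abstract work units per turn (toy value, not the real quantum)

def schedule(jobs, n_cpus=2):
    """Toy round-robin scheduler. jobs maps thread name -> remaining work.
    Returns the order of (cpu, thread, units_run) slices."""
    queue = deque(jobs.items())
    timeline = []
    while queue:
        # Hand a thread to each free processor.
        running = []
        for cpu in range(n_cpus):
            if queue:
                running.append((cpu, queue.popleft()))
        for cpu, (name, remaining) in running:
            ran = min(QUANTUM, remaining)
            timeline.append((cpu, name, ran))
            if remaining - ran > 0:
                # Quantum reached: context stored off, back on the queue.
                queue.append((name, remaining - ran))
    return timeline
```

Running `schedule({'t1': 3, 't2': 2, 't3': 1})` reproduces the shape of the story above: t1 and t2 go out to the two processors, t2 finishes, t1 hits its quantum and requeues, t3 takes its slot, and t1 comes back to finish.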
Your perception of how it works, based on the processor utilization shown in Task Manager, is kind of skewed. Multiple processor utilization issues are generally bound by threads stalling on synchronization objects, i.e., a lock is being held on one thread for a resource required by the other; anything causing contention will limit CPU utilization across two cores.
Unless stalled by higher priority threads, Thread[0] will run on processor 0 and Thread[1] on processor 1.
If Foo1 calls or holds any synchronization objects for a chunk of memory, those two threads will be in constant contention and subsequently stalling until the contention is released.
That's one way you end up at 50/50 processor utilization. An optimized MP/MC application will clear / reduce those contentions as well as define processor affinity for its threads, which is where you will see closer to 100% utilization.
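A hedged sketch of the contention point: two counting designs that produce identical results, one funneling every increment through a shared lock (so the threads constantly stall on each other, like Foo1's synchronization objects above), and one giving each thread its own counter and combining once at the end. The function names and workloads are invented for illustration.

```python
import threading

def count_shared(n_threads=2, per_thread=50_000):
    # Contended design: every increment takes the one global lock.
    total = 0
    lock = threading.Lock()
    def work():
        nonlocal total
        for _ in range(per_thread):
            with lock:           # threads serialize here, stalling each other
                total += 1
    ts = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in ts: t.start()
    for t in ts: t.join()
    return total

def count_partitioned(n_threads=2, per_thread=50_000):
    # Reduced contention: each thread owns its own slot, no shared lock.
    counters = [0] * n_threads
    def work(i):
        for _ in range(per_thread):
            counters[i] += 1
    ts = [threading.Thread(target=work, args=(i,)) for i in range(n_threads)]
    for t in ts: t.start()
    for t in ts: t.join()
    return sum(counters)
```

Both return the same count; the difference is that the second version leaves both threads free to run flat out on their own cores instead of queuing up behind one lock.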
And I believe I answered you in that thread, LFS is not a single threaded application, you'd be hard pressed to find any game today that is single threaded.
By Aaron Cohen & Mike Woodring 1st Edition December 1997
So what is multithreaded programming? Basically, multithreaded programming is implementing software so that two or more activities can be performed in parallel within the same application. This is accomplished by having each activity performed by its own thread. A thread is a path of execution through the software that has its own call stack and CPU state. Threads run within the context of a process, which defines an address space within which code and data exist, and threads execute. This is what most people think of when they refer to "multithreaded programming," but there really is a lot more to programming in a multithreaded environment.
But if I understand you right, you're telling me that because LFS's main thread is on processor 1, any subsequent threads are also launched on processor 1? Or that LFS runs all its threads in a synchronous order and they are all run on the same hardware thread as the main LFS thread?
Meaning Thread1 is launched, LFS waits until it's completed and then Thread2 is launched...
Figured, it's just a pet peeve I'm suffering lately.
Ahh, see, you're not really doing anything different from what LFS (or any other normal application) is doing currently: just spawning threads and waiting for them to complete. You will see performance gains there, but you would probably see even more in an optimized environment (this really depends on the "weight" of the threads in your app).
Targeting specific processors would probably yield some different results, both bad and good depending on how you did this.
HyperThreading is really more a product of the processor; programmatically you'd access both thread executors the same way (GetProcessAffinityMask, SetThreadAffinityMask, etc.).
Good point.
Yup yup! Part of design is determining what you can and can't split off and how best to load balance between the hardware thread executors.
Oh lord no, running at level 31 ("real time") is a huge system blocker. Running at 2 (DISPATCH in kernel land) would prevent you from being interrupted as frequently by threads with lower priorities, but you'd also block any of your other threads that have affinity for that processor from running on it until your thread is done.
Maybe we should spawn a thread in the programming forum (pun intended).
I hate to nitpick here, but LFS is a multi-threaded application already. What LFS is not, is optimized for multiple hardware thread environments.
Interesting data...
How were you targeting your threads? I'm assuming you were using SetThreadAffinityMask, but what priority levels did you have your threads set at?
You have to remember the Windows scheduler works on a quantum; if your threads are set at the lowest level they will be interrupted more frequently, not only by higher priority threads, but by all other threads running at the same level as you.
CAD/3D, photo editing, audio and video editing tools have been targeting specific processors for years. Some games have as well: Quake 3, Quake 4 (with a patch), Tribes 2, Serious Sam, and Serious Sam 2 are multi-core/proc "aware" (there are quite a few more, but some require command line args, e.g. start /jediknight.exe +set r_smp 1).
But quite a few game engines being created now are going to be multi-core/proc aware (CryENGINE2, etc.).
Quake 3 had limited support for SMP, meaning it would do some work to queue threads on to specific processors, but it did not really take full advantage of the power of the system (I have a feeling he was still seeing a bottleneck from the AGP / GPU side). Serious Sam 2, on the other hand, is a great example of a multi-core/proc utilization game engine.
You're right though, one big problem with specifying thread affinity is load balancing. If you're throwing threads down to CPU1 and overloading it compared to CPU0 you're going to find yourself in a world of performance hurt. You can generally uncover such a hit during testing, make a small tweak in the threading model and voila. Either that or create a dynamic load balancing engine...*shudder*
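Short of a full dynamic load balancing engine, even a greedy static split helps. Here's a toy sketch: each work item carries a cost estimate (an assumption; a real app has to measure or model thread "weight"), and each item goes to whichever CPU currently has the least load, so you don't dogpile CPU1 while CPU0 idles.

```python
def balance(work_items, n_cpus=2):
    """Toy static load balancer.
    work_items: list of (item, weight) pairs, weight = estimated cost.
    Returns (buckets, loads): items assigned per CPU, and total load per CPU."""
    buckets = [[] for _ in range(n_cpus)]
    loads = [0] * n_cpus
    # Heaviest items first, each to the currently lightest-loaded CPU.
    for item, weight in sorted(work_items, key=lambda x: -x[1]):
        target = loads.index(min(loads))
        buckets[target].append(item)
        loads[target] += weight
    return buckets, loads
```

For example, `balance([('a', 3), ('b', 2), ('c', 2)])` puts 'a' alone on CPU0 and 'b' + 'c' on CPU1, keeping the totals as even as the weights allow.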
So I tell you what, prove to me that 100% of LFS threads are only running on ONE processor / core.
Re-read the post. I never said LFS was the one utilizing both thread executors, did I?
LFS is no doubt a multi-threaded application, the OS schedules those threads out to processors as they become available. Sometimes they go to CPU1 sometimes they go to CPU0, sometimes...*shudder* two threads can go out at the same time to CPU1 and CPU0!
If you want to deny the usage of an executor to an application, establish processor affinity for that application. Or go ahead and deny that processor to pretty much everybody by creating an app that runs an infinite thread at DISPATCH_LEVEL on one processor, thus locking out anything running at < DISPATCH_LEVEL from being executed on that processor.
The Windows kernel (NT, 2k, XP and beyond) schedules thread objects based on a multi-level feedback queue algorithm using thread priorities 0-31 (31 being the highest priority).
The dispatcher traverses the priority queues searching for any thread that is in its standby state. When one is found, the dispatcher must determine whether there is a CPU available that the thread has an affinity for. If there is, then the thread is allocated that processor. If no CPU is currently available, but the thread has a higher priority than any of the currently running threads, it will preempt the lowest priority thread and begin execution on that CPU. If it is unable to preempt a thread it will be skipped, and the dispatcher will continue its traversal. If no thread can be found to execute, the dispatcher will execute a special thread called the idle thread.
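One pass of that dispatch logic can be sketched as a toy (affinity checks and requeueing of the preempted thread are deliberately left out for brevity, and the data shapes are invented for illustration):

```python
def dispatch(ready, running, n_cpus=2):
    """Toy single pass of the dispatcher described above.
    ready:   list of (priority, name) standby threads, any order
    running: list of (priority, name) per CPU, or None for the idle thread
    Returns the new per-CPU running list."""
    running = list(running)
    for prio, name in sorted(ready, reverse=True):   # highest priority first
        idle = [i for i, r in enumerate(running) if r is None]
        if idle:
            running[idle[0]] = (prio, name)          # free CPU: take it
            continue
        # No free CPU: preempt the lowest-priority running thread, if we beat it.
        low = min(range(n_cpus), key=lambda i: running[i][0])
        if prio > running[low][0]:
            running[low] = (prio, name)
        # Otherwise this thread is skipped and the traversal continues.
    return running
```

So with CPU0 idle and a priority-3 thread on CPU1, a standby priority-9 thread takes the idle CPU and a standby priority-5 thread preempts the priority-3 one.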
It's been this way since NT.
Whether you want to believe it or not, Windows uses all available cores for you. Unless you have an AV monitor that runs all its threads at > DISPATCH_LEVEL (btw, that's only level 2 on the aforementioned scale), other threads which are running will get executed on that processor. And btw, you always have *shit* running in the background; networking, system services, etc.
Some of this may actually increase CPU intensive game performance because all those other threads aren't as readily stuck waiting on queue! *gasp*
The OS thread scheduler will toss threads out to both "CPUs" during execution, you will see an increase in performance over a single core or HyperThreaded machine.
If the application was optimized for SMP you'd see even more of a perf increase as the app could / would target threads to processors rather than letting Windows just pick a free proc to shove a thread on to.
If Renault lands Kimi (not going to happen, he's Ferrari bound, and I hope they have fun w/ him) then Fisi'll be #2; otherwise any other driver they can poach will sit #2 to Fisi.
Sad, but true.
There are only 3 drivers with seats currently that are worth a top seat at a top team, one is leaving for McLaren, one will never leave Ferrari, and the last is on his way out of McLaren.
There is no spare Renault seat guys...the only spare seat is the test seat.
Heikki Kovalainen will be the #2 driver at Renault unless Kimi jumps to Renault. (Btw, buh-bye Kimi... your brain fart this weekend pretty much sealed your McLaren fate.)