
Can drivers affect over-clocking?


Zenoth
I was trying to over-clock my GTS with RivaTuner a few days ago, right after I bought it (last weekend). At the time I used the 158.22 drivers from nVidia.com, but whenever I OC'ed the Core/Memory above 550/860 (respectively), I experienced a bunch of hangs and freezes in a few of my games (not all of them), such as S.T.A.L.K.E.R. and Warhammer 40,000: Dawn of War, for example.

Then, with the same drivers, I tried ATiTool, but whenever I went above 560/860 (only 10 extra MHz on the Core compared to RivaTuner), it also hung here and there in a few games. And ATiTool wasn't, and still isn't, stable enough for me anyway, so I moved back to RivaTuner for OC'ing purposes, hoping to find out why some of my games hung and froze randomly.

Two days ago I installed the 158.27 drivers and tried to OC again via RivaTuner. The results were different, but not necessarily better. With those drivers it seemed I could take the Core up to 568 (the exact number I could reach without issues) and the Memory up to 870 or so before the hangs and freezes returned. No actual crashes to the Desktop, though, only in-game freezes: the game stops, freezes, and nothing responds anymore, not even Ctrl+Alt+Del.

Then, just yesterday, I installed the 160.02s (WHQL, from Station Drivers), and now it's a completely different story...

In RivaTuner and ATiTool (both 0.26 and 0.27 Beta 1) I could easily over-clock the Core to 575 and the Memory to 875 without a single issue whatsoever. Even though ATiTool is still unstable, I managed to run a 30-minute 3D View test to see if I could provoke a system hang/freeze. Usually that test is a good indicator of GPU stability for OC'ing purposes; I used it all the time with my X1800XL. The test produced no errors and kept going, and although the temperature reached a shocking 77ºC, nothing froze.

After that, I started an Artifact-Scanning test, again for around 30 minutes, and it found nothing at 575MHz/875MHz. Then I ran three consecutive 3DMark06 tests, one of them with 8x AA and 16x AF, and no errors whatsoever! That's not all! After that I had a two-hour, non-stop gaming session in S.T.A.L.K.E.R., then at least 30 minutes in Oblivion, and around an hour or so in Medieval II: Total War.

So now, in RivaTuner, using the 160.02 drivers, I have my Core and Memory set at 575MHz and 875MHz respectively for 3D Performance, and everything looks stable and runs smoother than ever before. I'm pretty sure I could do much, much better, and I will try later today.

My own conclusion, based on my experience, is that OC'ing may well be affected by drivers, but that's a subjective view; I don't know if it's technically true, technically possible. Is it?
 
I don't trust ATITool or RivaTuner; never did, never will. You're trusting software that has flaws of its own, and it can't compensate for or learn from its mistakes or successes. You and I can do that. Thus you make a better overclocker than any software.


I use NiBitor to OC my BIOS, then flash the card, run 3DMark06, play a game, and if it all passes with flying colors, it's good enough for me. I OC some more if I want to and repeat the process. That eliminates the drivers from the equation altogether. Process of elimination, my dear boy; we were tought that in 8th grade science class.
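
In plain terms, the loop looks something like this (just a rough sketch in Python; set_clock and run_stress_test are made-up stand-ins for the NiBitor edit, the flash, and the 3DMark06/gaming runs, not real tools):

Code:
# A minimal sketch of the bump-and-test loop. set_clock and run_stress_test
# are hypothetical stand-ins for the real steps (a NiBitor edit plus a flash,
# then 3DMark06 and a gaming session); they are not a real API.

def find_max_stable(start_mhz, step_mhz, set_clock, run_stress_test):
    """Raise the clock in small steps until the stress test fails,
    then report the last speed that passed."""
    last_good = None
    clock = start_mhz
    while True:
        set_clock(clock)            # flash the BIOS / apply the OC
        if not run_stress_test():   # 3DMark06, games, artifact scan...
            return last_good        # first failure: back off to the last pass
        last_good = clock
        clock += step_mhz

# Dummy run: pretend the silicon gives out above 600 MHz.
if __name__ == "__main__":
    state = {"mhz": 0}
    best = find_max_stable(
        start_mhz=550,
        step_mhz=5,
        set_clock=lambda mhz: state.update(mhz=mhz),
        run_stress_test=lambda: state["mhz"] <= 600,
    )
    print(f"Last stable clock: {best} MHz")  # -> 600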




Side note to myself:
Firefox's spell checker doesn't recognize the properly spelled word "tought".
 
To answer the question in your post: yes, drivers affect overclockability, at least for me, because depending on which drivers I have installed I can OC my GTO to 690/800, or 695/805, etc.
 
Wow, OK, after some cautious over-clocking (in 5MHz increments) I OC'ed my GTS to a 600MHz Core and 900MHz Memory. It's unbelievable how stable the card is at those speeds without a single voltage adjustment. I had all the trouble in the world OC'ing my previous X1800XL by just 50 extra MHz on the Core before hell broke loose. But this GTS of mine is... simply outstanding.

I know some people have managed to push their own GTSes much further without breaking a sweat, but still, for me it's very nice. It's free extra performance, after all! Going from 513MHz to 600MHz on the Core and from 792MHz to 900MHz on the Memory, with no extra voltage, means stable, not-much-higher temperatures plus extra performance: I go from around 65GB/s of memory bandwidth to around 76GB/s at the new frequencies.
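
For the curious, that bandwidth figure is just bus width times effective data rate. A quick Python sanity check, assuming the GTS's 320-bit bus (the exact numbers depend on how RivaTuner rounds and reports the effective memory clock, which is why my figures above come out slightly different):

Code:
# Back-of-the-envelope memory bandwidth, assuming the 8800 GTS's 320-bit bus.
# GDDR3 is double data rate, so effective transfers per second = 2 x clock.

def bandwidth_gb_s(mem_clock_mhz, bus_bits=320):
    """GB/s = (bytes per transfer) x (millions of transfers per second) / 1000."""
    bytes_per_transfer = bus_bits / 8   # 40 bytes across a 320-bit bus
    effective_mts = mem_clock_mhz * 2   # DDR: two transfers per clock
    return bytes_per_transfer * effective_mts / 1000

print(f"stock: {bandwidth_gb_s(792):.1f} GB/s")  # ~63.4 GB/s
print(f"OC'ed: {bandwidth_gb_s(900):.1f} GB/s")  # ~72.0 GB/s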

However, at those speeds I am now reaching the very limits of my system.

Currently my X2 4400+ at 2.42GHz is tapped out, and it is certainly bottlenecking the GTS to a significant extent. I scored 8586 in 3DMark06 at 575MHz/875MHz, then 8612 at 600MHz/900MHz. It won't go much higher. Now I see what a bottleneck really is. This summer will be full of surprises, since I plan on upgrading the rest of my system: a new Motherboard of course, probably AM2+, to welcome a brand new Barcelona Quad-Core from AMD, and then I'll take a serious look at DDR3 Memory. Only then will my GTS be able to show its full potential in my games! Mouahahah! I can't wait.
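
Doing the math on those two runs shows just how hard that wall is (a quick check using the numbers above):

Code:
# What the extra GPU clock actually bought, using the scores quoted above.
core_gain  = (600 - 575) / 575 * 100     # ~4.3% more core clock
mem_gain   = (900 - 875) / 875 * 100     # ~2.9% more memory clock
score_gain = (8612 - 8586) / 8586 * 100  # ~0.3% higher 3DMark06 score

print(f"core +{core_gain:.1f}%, mem +{mem_gain:.1f}%, score +{score_gain:.1f}%")
# A ~4% GPU overclock returning ~0.3% more score: the CPU is the wall.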

Oh, and, as surprising as it may seem, that's all on stock cooling, which is so good that I don't think I'm going to change it as I had planned. Not to mention that I've read the cooler on the GTS/GTX is very difficult to remove without risking physical damage to the board itself, which I surely don't want to do.
 
UglyChild said:
I don't trust ATITool or RivaTuner; never did, never will. You're trusting software that has flaws of its own, and it can't compensate for or learn from its mistakes or successes. You and I can do that. Thus you make a better overclocker than any software.


What do you mean? I think he's actually just using the programs to set the clocks, not letting them find the max clock for him. That shouldn't present any problems.

I use NiBitor to OC my BIOS, then flash the card, run 3DMark06, play a game, and if it all passes with flying colors, it's good enough for me. I OC some more if I want to and repeat the process. That eliminates the drivers from the equation altogether. Process of elimination, my dear boy; we were tought that in 8th grade science class.

The only way to rule out drivers hindering an OC is to actually test the card with another set; I don't really understand your logic there :confused:.


Side note to myself:
Firefox's spell checker doesn't recognize the properly spelled word "tought".

It doesn't, because it should be "taught" ;).

@ Zenoth:

I think drivers can be a big factor when it comes to overclocking, as the card basically relies on them for stability. If a driver is more stable, then it should be easier for the card to stay stable at higher clocks.



dan
 
Zenoth said:
Wow, OK, after some cautious over-clocking (in 5MHz increments) I OC'ed my GTS to a 600MHz Core and 900MHz Memory. It's unbelievable how stable the card is at those speeds without a single voltage adjustment. I had all the trouble in the world OC'ing my previous X1800XL by just 50 extra MHz on the Core before hell broke loose. But this GTS of mine is... simply outstanding.

I know some people have managed to push their own GTSes much further without breaking a sweat, but still, for me it's very nice. It's free extra performance, after all! Going from 513MHz to 600MHz on the Core and from 792MHz to 900MHz on the Memory, with no extra voltage, means stable, not-much-higher temperatures plus extra performance: I go from around 65GB/s of memory bandwidth to around 76GB/s at the new frequencies.

However, at those speeds I am now reaching the very limits of my system.

Currently my X2 4400+ at 2.42GHz is tapped out, and it is certainly bottlenecking the GTS to a significant extent. I scored 8586 in 3DMark06 at 575MHz/875MHz, then 8612 at 600MHz/900MHz. It won't go much higher. Now I see what a bottleneck really is. This summer will be full of surprises, since I plan on upgrading the rest of my system: a new Motherboard of course, probably AM2+, to welcome a brand new Barcelona Quad-Core from AMD, and then I'll take a serious look at DDR3 Memory. Only then will my GTS be able to show its full potential in my games! Mouahahah! I can't wait.

Oh, and, as surprising as it may seem, that's all on stock cooling, which is so good that I don't think I'm going to change it as I had planned. Not to mention that I've read the cooler on the GTS/GTX is very difficult to remove without risking physical damage to the board itself, which I surely don't want to do.


Not to push Intel or anything, but have you seen the article comparing 2 quad-core Xeons (8 cores and 24GHz of combined computing power) vs 4 Barcelona quad-cores (16 cores and 28.8GHz combined)? The Intel setup scored about 20% better in POV-Ray with roughly 20% less aggregate clock... and it's not looking like AMD is going to be able to get the speeds up on their quads. Which sucks, because I welcome competition... I think a Penryn version of the Q6600 (or just a dual core) and a new P35 board with DDR3... that would be some serious pwnage, lol.

Anyway, back on topic: most 8800GTSes top out around what you're doing... mine will do 630/1000.
 
Dan.

You're right, it's "taught"... whoops.


He uses ATITool to scan for artifacts, and that itself has many flaws and shouldn't be relied upon to begin with.

The only way I see a driver affecting over-clocking is the over-clocking software itself not being compatible with a certain driver version to begin with.

A driver is just software, nothing more. Whereas the Core and Memory are hardware.

Hardware is in no way limited by software (a driver), thus a driver has no effect on the hardware's limits. If there were no such thing as a driver, what would limit the over-clocking?

Hardware is only limited by the quality with which it was manufactured. The Black Pearl is a good example.

Thus the Driver is eliminated from the equation.
 
He uses ATITool to scan for artifacts, and that itself has many flaws and shouldn't be relied upon to begin with.

The way I see it, he was confirming whether his OC was stable by running games.

Zenoth said:
Then, with the same drivers, I tried ATiTool, but whenever I went above 560/860 (only 10 extra MHz on the Core compared to RivaTuner), it also hung here and there in a few games. And ATiTool wasn't, and still isn't, stable enough for me anyway, so I moved back to RivaTuner for OC'ing purposes, hoping to find out why some of my games hung and froze randomly.

----

The only way I see a driver affecting over-clocking is the over-clocking software itself not being compatible with a certain driver version to begin with.

A driver is just software, nothing more. Whereas the Core and Memory are hardware.

Well, it's just software, like you say. But the hardware wouldn't be able to operate without it. And it also determines stability, whether the card is OC'ed or not.

Hardware is in no way limited by software (a driver), thus a driver has no effect on the hardware's limits.

Oh, how wrong you are. Would you also say the first set of drivers for the X2900XT weren't limiting its performance/stability?

Some cards are even unstable at stock clocks because of drivers (i.e. crashes, bluescreens...). Take the GF8 and its earliest drivers as an example; imagine trying to get things stable after OC'ing on top of that.

If there were no such thing as a driver, what would limit the over-clocking?

I'm going to leave that up to you (since it's such a trivial question from my point of view).

Hardware is only limited by the quality with which it was manufactured. The Black Pearl is a good example.

OK, then I guess we all do volt-mods, buy better cooling, and whatnot with no real purpose in mind. :bang head

Thus the Driver is eliminated from the equation.

Only if you've actually tried every existing driver set for a specific product and can confirm that stability is equal across all of them.

dan
 
Hmmm, let's see if I can add fuel to the fire.
First off, I understand the "calculus" problem above (which was actually a simple logic problem), and I agree with what was said. However, Ugly has a non-real-world point. DOS uses a bare minimum of "drivers" to run a part, so it has fewer compatibility issues and fewer computational instructions in play, meaning clocking can go further and more accurately than under a modern OS; only pure machine language would use no drivers at all. The problem is that an OS needs to make calculations and adjustments on the fly to read and stabilize a part so it works with EVERYTHING the OS can run (not gonna go into detail about it).
Regardless, nowadays (I kinda miss the ol' DOS days) everything is run from an OS (even DOS isn't a true DOS anymore), therefore complicated drivers are required. And Dan's statement holds pretty true to that.

Also, using an engine is a horrible example, but assuming the engine is the video card: DOS is a carburetor, and Windows drivers are an EMS (engine management system). Let's say you want to get the most power out of your card/engine. The engine is only as good as its weakest part, correct? Well, at one static temperature, one static BAR of pressure, and one static fuel type, you can get X power before the weakest part goes; however, if you change any of the above factors, the motor can blow earlier. That's all good, but in the real world those factors will never stay consistent, so tuning via carburetor is a pain in the *** and doesn't auto-correct itself, whereas an EMS can correct its values based on those changes. Some EMSes take those inputs and compute them differently, which can cause delays in response or slightly different outputs from the exact same inputs. Different cars need different EMSes to work properly, just like different hardware and software need to be compatible, so drivers compute things differently depending on the variables.

Hope some of that made sense; I left halfway through to do some actual work, so I may have screwed up what I was trying to say.
 
You have yet to put any effort into explaining it to me in simple terms so I can understand. Instead you throw y-2 and X and Y at me. All you've done so far is quote me with nonsense and no relevant argument, thus provoking an argument. It's not the first time you've done it.


I'll try one more time.

Let's say you have an 8800GTX running at a max of 600/1800 with driver 100.00, because driver 100.00 only lets you run your card at that speed. Driver 100.00 is of the highest quality ever made or that ever will be made.

Are you with me so far?

Now, I have a Black Pearl that is clocked at 626/2000. Using your theory, that means my Black Pearl will never be able to run clocks higher than 600/1800 with the 100.00 driver, because 100.00 is of the highest quality ever made and allows for the highest clocks ever.

So how am I supposed to run my Black Pearl at 626/2000 with the 100.00 drivers? Or do I have to volt-mod it and get better cooling?
 
Here, try this one on for size.

Using your theory:

Let's say Nvidia came out with driver 110.00. Amazing driver, best IQ ever, compatible with everything; the driver is just money. But you can't OC your card at all, no matter what software you use. So you're stuck at stock clocks of 575/1800.

How in the world is anyone supposed to run their Black Pearl or Ultra/GTX cards with the 110.00 driver in the first place?

If you're saying the driver is what dictates the OC of the card, then everyone who bought a factory-OC'ed card would have to underclock it to stock speeds just to use the 110.00 drivers.

Think about it.
 
I'd like to step in here and say neither of those situations would happen. One point is that every card series or type has its own needs and drivers. Another is that they wouldn't bother selling cards above a certain clock if the drivers couldn't even allow it. Third, in your first situation, if the driver only allowed 600/1800, then volt-modding and cooling probably couldn't help you much.

PS: To explain Dan's example from above...
Driver #1 uses 1 shader to do a task.
Driver #2 uses 2 shaders to do the same exact task.
Thus, driver #2 is putting more stress on the card.
 
UglyChild said:
You have yet to put any effort into explaining it to me in simple terms so I can understand. Instead you throw y-2 and X and Y at me. All you've done so far is quote me with nonsense and no relevant argument, thus provoking an argument. It's not the first time you've done it.


I'll try one more time.

Let's say you have an 8800GTX running at a max of 600/1800 with driver 100.00, because driver 100.00 only lets you run your card at that speed. Driver 100.00 is of the highest quality ever made or that ever will be made.

Are you with me so far?

Now, I have a Black Pearl that is clocked at 626/2000. Using your theory, that means my Black Pearl will never be able to run clocks higher than 600/1800 with the 100.00 driver, because 100.00 is of the highest quality ever made and allows for the highest clocks ever.

So how am I supposed to run my Black Pearl at 626/2000 with the 100.00 drivers? Or do I have to volt-mod it and get better cooling?

What the hell are you talking about? What driver has ever been the highest quality that will ever be made? You'd have to go back to like an MX3200 for that. And what the hell is a "Black Pearl"? They got some new Pirates of the Caribbean model out?
 
I'd like to step in and say that rather than arguing uselessly about who is 100% right, you should realize you're both right and both views are applicable. Hardware itself will naturally be one of the limits on OC'ing to a stable setting, and software will be as well. The way an OC can be stable for one app but not another, and how it can change with a driver revision, proves the software side; the hardware side is obvious.
 
reclaimer122 said:
I'd like to step in here and say neither of those situations would happen. One point is that every card series or type has its own needs and drivers. Another is that they wouldn't bother selling cards above a certain clock if the drivers couldn't even allow it. Third, in your first situation, if the driver only allowed 600/1800, then volt-modding and cooling probably couldn't help you much.

PS: To explain Dan's example from above...
Driver #1 uses 1 shader to do a task.
Driver #2 uses 2 shaders to do the same exact task.
Thus, driver #2 is putting more stress on the card.

Right...

The hardware is set; you cannot change it.
The software is not set; you can change it.

Therefore, the software determines how well the card operates. We will never have perfect software or perfect hardware. Drivers help achieve better performance and thus better overclocks. There's no problem with the theory, and it could be right; the problem is we will never see anything like it, and all the OP wanted to know was whether drivers affect the card's OC capabilities.
So, yes, they do. By offering better efficiency and stability, they let the card OC more.

Can't all of you guys chill? :shrug:
 
reclaimer122 said:
I'd like to step in here and say neither of those situations would happen. One point is that every card series or type has its own needs and drivers. Another is that they wouldn't bother selling cards above a certain clock if the drivers couldn't even allow it. Third, in your first situation, if the driver only allowed 600/1800, then volt-modding and cooling probably couldn't help you much.

PS: To explain Dan's example from above...
Driver #1 uses 1 shader to do a task.
Driver #2 uses 2 shaders to do the same exact task.
Thus, driver #2 is putting more stress on the card.


Then how would I run a factory-OC'ed card in either situation?

If the driver dictates the OC'ing ability of the card, then why do some drivers allow better OCs than others? That's what I'm trying to understand here.
 
All my nvidia drivers have overclocked practically the same. Personally, my opinion is that it's b.s., and that drivers have no effect on overclocking.
I've always forced my overclocks with SysTool and never noticed a difference in that last squeezable amount of MHz across many drivers. At most it was 10MHz (too little to attribute to the drivers, imho, and more likely due to temperature limits).
 
Different drivers allow better overclocks because they tell the card how to do what the program needs using different techniques. Like it or not, Dan said it correctly.
Think of the driver as the translator between what the program wants and what the card can do.
Let's say the program says "I need 4 dots, 10 pixels apart." Driver 1.0 says: put a dot down, then put a dot 10 pixels from the first, then put the next dot 20 pixels from the first, then the next 30 pixels from the first. Driver 1.1 says it would be faster to put one down, then one 10 pixels further, then the next 10 pixels past that one, and so on. Then the next driver goes: wait... we have a function called offset... we don't need to keep recalculating the distance for each dot... we'll just offset each one by 10. And when the program decides to draw a matrix, we can use just two offset commands instead of complex math.
In the long run, the more math that's used, the harder the video card has to work, and the chances of artifacts and instability increase. With better-written drivers the card can do more, more accurately, with less chance of artifacts and instability.
So to sum it up: a card has a max potential limited by its hardware; however, better drivers can make better use of that potential. A card can only reach X clocks at Y voltage and do Z processes until it fails. Drivers can create better efficiency, which means more stability and better framerates, though sometimes improving one thing makes something else worse. If a card is unstable at a certain speed because of the way it processes info, you have to turn it down (in essence, it isn't clocked as fast as it could be, thanks to the driver being complicated). But if a driver makes that same calculation easier to process, the card can be turned up. And if a driver were theoretically perfect at every calculation, the card could potentially max out its hardware capabilities. So yes, drivers can and do help overclocking (by placing less stress on the video card); if a driver is already very efficient, a new revision may not make much of a difference.
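
To make the dots example concrete, here it is in miniature (a toy Python sketch, purely illustrative; real drivers work nothing like this):

Code:
# Toy version of the dots example: the same four dots, three ways.

SPACING, COUNT, START = 10, 4, 0

# "Driver 1.0": recompute each position from the first dot (a multiply per dot).
dots_v10 = [START + i * SPACING for i in range(COUNT)]

# "Driver 1.1": walk forward, one addition per dot.
dots_v11 = []
pos = START
for _ in range(COUNT):
    dots_v11.append(pos)
    pos += SPACING

# "Next driver": hand the whole pattern to a single offset primitive.
def offset_run(start, step, count):
    return list(range(start, start + step * count, step))

dots_v20 = offset_run(START, SPACING, COUNT)

# Same output every time, less work per dot with each revision. That saved
# work is headroom the card can spend on stability at higher clocks.
assert dots_v10 == dots_v11 == dots_v20 == [0, 10, 20, 30]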
 
UglyChild said:
Then how would I run a factory-OC'ed card in either situation?

If the driver dictates the OC'ing ability of the card, then why do some drivers allow better OCs than others? That's what I'm trying to understand here.
OK, I think what Akkuma said above is probably the best explanation we have so far. Basically, factory-OC'ed cards are probably set at speeds already known to be stable with the current driver. Normally, as drivers advance, their stability increases, so the maker shouldn't have to worry about all their cards suddenly becoming unstable.

And again, it's all a matter of efficiency. If Driver 1 is maxing out the card's shaders (let's say it has 100), you'll probably see lag and drops in FPS because the card is under a lot of stress. But now let's say there is a Driver 2, written with a new technique for displaying the same image, and it only uses 90 shaders to do the job that took Driver 1 all 100. That leaves the other 10 shaders unused, or available for other work. If those 10 shaders remain unused, that means less work for the GPU, as it is only running at 90% capacity.

PS: Yes, it is pretty complicated :) There are a lot of other factors, such as temperature and voltage. However, driver efficiency definitely adds another one.

I just thought of what seems like a pretty good real-world example. Let's take video games. Players 1 and 2 are both playing the exact same game on the exact same hardware (and drivers, lol). Player 1 is a lot better than Player 2, but only because he understands the gameplay better. So just think of the players as "drivers": Player 1 knows how to best use the game's resources to win, and Player 2 is just shooting wherever. It may seem irrelevant, but I think it's a pretty good picture of what's going on.
 