Now that I use your -l K8x32,K8x32 setting it runs a bit cooler AND faster:
511 KH/s, with core one at 84 and core two at 85 degrees. Still a bit too hot for my taste for everyday use.
I have the card in a regular desktop case, though. My final settings are:
cudaminer.exe -H 1 -C 2 -t 1 -i 1 -l K8x32,K8x32
Any other suggestions? Also, this is with interactive mode on, but I notice my PC becomes very laggy.
Not yet, but I'm finally managing to reproduce your thermal overload problem on my own setup with 2x GTX 690s. Here's the relevant slice of nvidia-smi output:
| Fan  Temp  Perf  Pwr:Usage/Cap |      Memory-Usage | GPU-Util  Compute M. |
| 82%   90C   N/A    N/A /  N/A | 1087MiB / 2047MiB |      N/A     Default |
| 57%   78C   N/A    N/A /  N/A | 1087MiB / 2047MiB |      N/A     Default |
| 57%   79C   N/A    N/A /  N/A | 1087MiB / 2047MiB |      N/A     Default |
| 54%   75C   N/A    N/A /  N/A | 1087MiB / 2047MiB |      N/A     Default |
Toasty. That 90C isn't good unless you're planning on making tea on your computer.
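If you want to keep an eye on the temperatures yourself without staring at nvidia-smi, something along these lines works. It's just a rough sketch using NVML (the library nvidia-smi is built on); the poll interval, sample count, and fan-speed readout are arbitrary choices of mine. Compile with a C++ compiler and link against -lnvidia-ml.

// Rough sketch: poll GPU temperature and fan speed via NVML.
// Build: g++ -std=c++11 temp_watch.cpp -lnvidia-ml   (filename is just an example)
#include <cstdio>
#include <chrono>
#include <thread>
#include <nvml.h>

int main()
{
    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "NVML init failed\n");
        return 1;
    }

    unsigned int count = 0;
    nvmlDeviceGetCount(&count);

    for (int iter = 0; iter < 12; ++iter) {               // about a minute of samples
        for (unsigned int i = 0; i < count; ++i) {
            nvmlDevice_t dev;
            unsigned int temp = 0, fan = 0;
            if (nvmlDeviceGetHandleByIndex(i, &dev) != NVML_SUCCESS)
                continue;
            nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
            nvmlDeviceGetFanSpeed(dev, &fan);
            printf("GPU %u: %u C, fan %u%%\n", i, temp, fan);
        }
        std::this_thread::sleep_for(std::chrono::seconds(5));  // arbitrary poll interval
    }

    nvmlShutdown();
    return 0;
}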
My kernel is going to make your display laggy even in interactive mode, unfortunately. The only thing I can think of to reduce both power draw and lagginess without changing the code is to try -l K2x32,K2x32 or something similar. Have you given that a shot? It should shorten each kernel launch and increase the relative amount of time interactive mode spends telling the GPU not to mine. Could you let me know how that works?
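To make that concrete, here's a very rough sketch of the idea behind interactive mode. This is not the actual cudaminer kernel; the toy hashing loop, the grid/block sizes, and the 10 ms sleep are all made-up stand-ins. The point is just that many short launches with a yield between them give the display driver frequent chances to get the GPU back, at the cost of hash rate.

// Sketch of the "interactive mode" idea: instead of one long-running kernel,
// launch many short kernels and yield to the OS between them so the display
// stays responsive.  Everything here is illustrative only.
#include <cstdio>
#include <chrono>
#include <thread>
#include <cuda_runtime.h>

// Toy kernel standing in for one small slice of mining work.
__global__ void work_slice(unsigned int *out, unsigned int base)
{
    unsigned int idx = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int h = base + idx;
    for (int i = 0; i < 1000; ++i)           // burn a little time per slice
        h = h * 2654435761u + i;
    out[idx] = h;
}

int main()
{
    const int blocks = 2, threads = 32;       // think "-l K2x32": small slices
    unsigned int *d_out;
    cudaMalloc(&d_out, blocks * threads * sizeof(unsigned int));

    for (unsigned int slice = 0; slice < 100; ++slice) {
        work_slice<<<blocks, threads>>>(d_out, slice * blocks * threads);
        cudaDeviceSynchronize();               // wait for the short launch to finish
        // Yield between slices; interactive mode trades hash rate for
        // responsiveness in roughly this way.
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }

    cudaFree(d_out);
    printf("done\n");
    return 0;
}

A smaller launch config like K2x32 shrinks each slice, so the sync-and-yield points come around more often and the desktop should feel less laggy.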
I'm tied up for a while, but now that I have my 690 running I'll see if I can figure out any efficiency gains for it. Don't hold your breath, though: the 690 is pretty similar to the Grid K2 that I was optimizing for before. I think there are gains to be had for GK110 devices, but maybe not GK104.
-Dave