Overclocking multiple Nvidia graphics cards on Linux

Hello all – I hope your mining efforts are going well, especially since the price of Ethereum is going up. No doubt many of you have tweaked, adjusted, upgraded and maybe even built multi-GPU rigs and multiple rigs by now.

I wanted to create a side post to my 9x GTX 1050 Ti ASRock H110 PRO BTC+ build about overclocking and adjusting power consumption on the Linux platform. In that post I briefly mentioned the utilities included with the Nvidia drivers that allow you to overclock and control the power consumption of your Nvidia cards.

If you’re a Windows user, you’ve been having an easy life thanks to the likes of the MSI Afterburner overclocking utility – it truly is the “gold standard” for overclocking your cards with ease. You get a nice GUI, can easily pick which card to overclock and have really cool sliders – you don’t get that in Linux (well, not from what I’ve seen anyway).

Before we begin

Disclaimer: This is not a step-by-step guide for Linux newbies – proceed at your own risk as you could damage your cards somehow.

  • Make sure you’re using a Linux instance that has a desktop interface.
  • I’m going to base my screenshot examples on Ubuntu. Don’t fret though, the commands are the same on other Linux distros (Debian, CentOS, etc.).
  • Make sure you’re using the latest drivers directly from the Nvidia website (a quick way to check your installed version follows this list).
  • Only certain Nvidia cards can be overclocked – keep reading to find out more about this.
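
On the driver point above: if you’re unsure which version you’re running, these two read-only checks should tell you (nvidia-smi ships with the driver, and the /proc file is exposed by the Nvidia kernel module):

nvidia-smi --query-gpu=driver_version,name --format=csv
cat /proc/driver/nvidia/version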

Step 1 – Check if we’re ready to overclock

Firstly, open a new terminal / SSH session.

We want to make sure your Nvidia drivers are installed correctly – we’ll run a few commands to verify we’re good to go:

nvidia-smi

nvidia-smi is like the monitoring tool for your GPU. It will display power consumption, temps and fan speed for your GPU(s).

We need nvidia-smi because we’re going to use it to control the power levels for your GPU(s).
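
Side note: beyond the default table, nvidia-smi has a query mode that’s handy for polling just the values miners care about. A minimal example (field names as I know them from the nvidia-smi docs; older drivers may not support every field):

nvidia-smi --query-gpu=index,name,power.draw,power.limit,temperature.gpu,fan.speed --format=csv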

Go ahead and run the following command in your terminal window & hit enter.

 nvidia-smi

You should get very similar output to the screenshot below, listing all your Nvidia GPU(s) – if you get errors, then you have Nvidia driver problems (google the error message for solutions).

[Screenshot: nvidia-smi output on Linux listing the GPUs]

In the screenshot above, you can see the 9x GTX 1050 Tis listed (0,1,2,3,4,5,6,7,8).
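
If you ever want that list without the big table, nvidia-smi can also print one line per card (index, model and UUID):

nvidia-smi -L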

Excellent, if you made it here, then we’re very close to having the perfect environment.

nvidia-settings

nvidia-settings is the equivalent of the Nvidia control panel on Windows – allowing you to make some adjustments to your cards, such as colours and v-sync settings, in a very nice GUI.

It also shows you the GPU card temps and clock speeds.

We’re going to use nvidia-settings to adjust the clock speeds.
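
nvidia-settings can also be driven entirely from the terminal, which is exactly what we’ll do later in this guide. As a taster, these read-only queries (attribute names as I know them from the driver, so double-check on your version) list your GPUs and read the first card’s temperature:

nvidia-settings -q gpus
nvidia-settings -q [gpu:0]/GPUCoreTemp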

When you run the following command, it should open the nvidia-settings GUI after a few seconds:

 nvidia-settings

[Screenshot: the nvidia-settings GUI]

That wraps up our requirements so far: the correct Nvidia drivers installed and working.

Step 2 – Unlock the overclocking settings

By default Nvidia doesn’t allow you to adjust the clock / memory settings of your GPU. This is because Nvidia classifies tweaking clock speeds as a sort of “expert setting”, therefore they don’t openly support it.

However, some people, even after “unlocking the overclocking settings”, still can’t adjust the core / memory speeds of their GPU(s) – this means the GPU / driver doesn’t support overclocking.

Coolbits – unlocking overclocking

Coolbits is a feature of the Nvidia driver that allows you to enable overclocking settings for your GPU. It’s similar to tweaking or enabling a game’s setting by modifying its configuration file.

We’re going to do the same thing – we need to edit a Linux configuration file to unlock overclocking settings by adding “Coolbits” to the configuration file.

Coolbits was originally a Windows registry hack for Nvidia’s Windows graphics drivers that allowed tweaking of features (including overclocking) via the Nvidia driver control panel.

https://en.wikipedia.org/wiki/Coolbits

For those wondering, the configuration file we’ll be adding the “Coolbits” setting to is called xorg.conf:

The file xorg.conf is a file used for configuring the X.Org Server. …

For a long time, editing xorg.conf was necessary for advanced input devices and multiple monitor output to work correctly. … Some devices still require manual editing; notably, components utilizing proprietary drivers may require explicit configuration in order for Xorg to load them.

https://en.wikipedia.org/wiki/Xorg.conf

By default on Ubuntu / Debian, the xorg.conf file is located here:

/etc/X11/xorg.conf

I’m going to help you cheat a little here. Long story short: by default, the xorg.conf file will only have one of your GPUs listed, and usually this is OK as most people overclocking have one GPU (FYI: it’s usually the GPU that has the display plugged into it).

We’re going to need to add the remaining GPU(s) to the file in order to add the “Coolbits” setting to every GPU in the rig.

Most people do this manually, i.e. copy, paste and adjust the GPU number while adding the “Coolbits” setting at the same time… but use the following command instead to do the hard part for you:

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration
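
Why 28? As I understand it (the bit meanings below are from the Arch wiki, so treat them as indicative rather than official Nvidia documentation), “Coolbits” is a bitmask of unlockable features:

# 4  = manual fan control
# 8  = core / memory clock offsets (the overclocking bit this guide needs)
# 16 = overvoltage
# 28 = 4 + 8 + 16; 31 would turn on every available bit
sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration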

Now check the newly modified xorg.conf file in your favourite editor (vi/vim/nano etc). The above command should have added the required “Coolbits” to the configuration.

sudo nano /etc/X11/xorg.conf

The contents should look like the following now (this is not a complete file, please do not copy and paste this into your xorg.conf file!):


# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 384.69 (buildmeister@swio-display-x86-rhel47-06) Wed Aug 16 20:57:01 PDT 2017

....

Section "Screen"
    Identifier "Screen0"
    Device "Device0"
    Monitor "Monitor0"
    DefaultDepth 24
    Option "AllowEmptyInitialConfiguration" "True"
    Option "Coolbits" "28"
    SubSection "Display"
        Depth 24
    EndSubSection
EndSection

.... (Screen1 through Screen7 are identical apart from the incrementing numbers – one Section per GPU) ....

Section "Screen"
    Identifier "Screen8"
    Device "Device8"
    Monitor "Monitor8"
    DefaultDepth 24
    Option "AllowEmptyInitialConfiguration" "True"
    Option "Coolbits" "28"
    SubSection "Display"
        Depth 24
    EndSubSection
EndSection

Note: If there is a # in front of the “Coolbits” line, remove the #, i.e.:

#Option "Coolbits" "28"

should become

Option "Coolbits" "28"

Save the file if you needed to make the above changes & finally restart the system.

Making the xorg.conf changes stick on reboot

Some people have commented that the changes do not stick after a system reboot. One way to avoid this is to set the permissions of the conf file to read-only and immutable (make sure you jot down the current permissions first in case you encounter problems):

sudo chmod 444 /etc/X11/xorg.conf && sudo chattr +i /etc/X11/xorg.conf
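
To undo this later (for example when a driver update legitimately needs to rewrite the file), reverse both flags – the immutable bit must come off before the file can be changed. The 644 below assumes the stock permissions; restore whatever you jotted down:

sudo chattr -i /etc/X11/xorg.conf && sudo chmod 644 /etc/X11/xorg.conf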

Step 3 – Check if overclocking is unlocked

Now that you have restarted the system, let’s open the nvidia-settings control panel to see if we have unlocked overclocking.

 nvidia-settings

You’ll probably see a gazillion “X Screens” and all your GPU(s). Open any of the GPU(s) and click on PowerMizer:

[Screenshot: the nvidia-settings PowerMizer page with the overclocking fields highlighted]

Highlighted in yellow are the overclocking fields – yay. Note these values are offsets, meaning they are added on top of the stock clock / memory speed.

For example, when your GPU(s) are mining away, they will be using the “Level 2” performance level. In the above screenshot the card’s default Max Memory Transfer Rate (memory clock) is 7008MHz.

If I increase the Memory Transfer Rate Offset to 550, that means 7008MHz + 550MHz, so 7558MHz for the memory clock speed in total.
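
If you’d rather read the stock per-level clocks from the terminal than squint at the GUI, this read-only query should list each performance level with its clock ranges (the exact output format varies between driver versions):

nvidia-settings -q [gpu:0]/GPUPerfModes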

You can manually adjust the clocks for each card, but that takes time, and the settings aren’t saved when you restart your rig. So let’s automate this process just a little with a bash script (and set the power limit too).

Step 4 – Automatic Overclock script

Let’s create a new bash script that will essentially apply individual overclocking settings as you would do if manually using the nvidia-settings GUI.

Use your favourite editor or, for the purpose of this guide, go to your home folder and create the overclocking script:

cd && touch overclock.sh && chmod +x overclock.sh && nano overclock.sh

Then paste in the following, but make sure you apply changes relevant to you, i.e. remove the additional nvidia-settings -a [gpu:N] lines or add more if needed, making adjustments where N = the number of your GPU.


#!/bin/bash

# Script needs to run as sudo for nvidia-smi settings to take effect.
[ "$UID" -eq 0 ] || exec sudo bash "$0" "$@"

# Setting a terminal variable called memoryOffset

# Since all my cards are the same, I'm happy with using the same Memory Transfer Rate Offset
memoryOffset="300"

# Enable nvidia-smi settings so they are persistent the whole time the system is on.
nvidia-smi -pm 1

# Set the power limit for each card (note this value is in watts, not percent!)
nvidia-smi -i 0,1,2,3,4,5,6,7,8 -pl 53

## Apply overclocking settings to each GPU
nvidia-settings -a [gpu:0]/GpuPowerMizerMode=1
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[2]=$memoryOffset

nvidia-settings -a [gpu:1]/GpuPowerMizerMode=1
nvidia-settings -a [gpu:1]/GPUMemoryTransferRateOffset[2]=$memoryOffset

nvidia-settings -a [gpu:2]/GpuPowerMizerMode=1
nvidia-settings -a [gpu:2]/GPUMemoryTransferRateOffset[2]=$memoryOffset

nvidia-settings -a [gpu:3]/GpuPowerMizerMode=1
nvidia-settings -a [gpu:3]/GPUMemoryTransferRateOffset[2]=$memoryOffset

nvidia-settings -a [gpu:4]/GpuPowerMizerMode=1
nvidia-settings -a [gpu:4]/GPUMemoryTransferRateOffset[2]=$memoryOffset

nvidia-settings -a [gpu:5]/GpuPowerMizerMode=1
nvidia-settings -a [gpu:5]/GPUMemoryTransferRateOffset[2]=$memoryOffset

nvidia-settings -a [gpu:6]/GpuPowerMizerMode=1
nvidia-settings -a [gpu:6]/GPUMemoryTransferRateOffset[2]=$memoryOffset

nvidia-settings -a [gpu:7]/GpuPowerMizerMode=1
nvidia-settings -a [gpu:7]/GPUMemoryTransferRateOffset[2]=$memoryOffset

nvidia-settings -a [gpu:8]/GpuPowerMizerMode=1
nvidia-settings -a [gpu:8]/GPUMemoryTransferRateOffset[2]=$memoryOffset
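
If you’d rather not repeat the same two lines for every card, here’s an equivalent loop-based sketch of the script above. It assumes what the original does: all cards take the same offset, and performance level 2 is the right one (as it is on these 1050 Tis). Either version behaves the same.

#!/bin/bash

# Script needs to run as sudo for the nvidia-smi settings to take effect.
[ "$UID" -eq 0 ] || exec sudo bash "$0" "$@"

memoryOffset="300"   # same Memory Transfer Rate Offset for every card
powerLimit="53"      # watts, not percent

# Enable persistence mode, then apply the power limit to every GPU at once.
nvidia-smi -pm 1
nvidia-smi -pl "$powerLimit"

# Count the GPUs and apply the PowerMizer mode and memory offset to each one.
gpuCount=$(nvidia-smi --list-gpus | wc -l)
for ((i = 0; i < gpuCount; i++)); do
    nvidia-settings -a "[gpu:$i]/GpuPowerMizerMode=1"
    nvidia-settings -a "[gpu:$i]/GPUMemoryTransferRateOffset[2]=$memoryOffset"
done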

Once this file has been edited, save it and run it:

 cd && ./overclock.sh 

You should get a similar output to the following.

[Screenshot: output of running the nvidia-smi and nvidia-settings overclocking script]

And that’s pretty much it. You have now overclocked and set the power limit for all your Nvidia GPU(s) in your rig.

p.s. You’ll have to run your overclock.sh script whenever your system restarts – I never got around to automating it to run on each reboot.
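
If you do want it to run automatically, one common approach is a cron @reboot entry for your desktop user – treat the following as an untested sketch: the sleep gives X time to come up, DISPLAY=:0 and the .Xauthority path are assumptions for a typical single-seat Ubuntu login, and the sudo re-exec inside overclock.sh will need passwordless sudo to work unattended.

# Edit your user’s crontab with: crontab -e
# Then add (adjust YOUR_USER and paths to match your setup):
@reboot sleep 60 && export DISPLAY=:0 XAUTHORITY=/home/YOUR_USER/.Xauthority && /home/YOUR_USER/overclock.sh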

64 comments

    1. hello,

      Nope – this rig unfortunately doesn’t even hit 14MH/s… These cards were able to hit 14.5/15MH/s easily in Windows with a different setup.

      How much power? Here’s an approximation:

      13x 1050 Ti mini @ Max 75W = 975W just for the cards.

      so you’re looking at a 1100W PSU minimum (also don’t forget your PSU needs to have enough power connectors for your risers)




  1. I really liked the way you did things here; however, unless my monitor is connected, GPU settings are not shown in “nvidia-settings”. I have a headless setup, i.e. I set up the machine and am now connecting via remote desktop.
    I have two 1060 6GB cards and right now I am testing things out, but I’m stuck as I can only see the one GPU which is connected to the monitor.




    1. Saahib,

      My screenshot showing all the GPUs is a bit misleading – they will only appear in nvidia-settings after running this command, which enables all your GPUs & registers ‘fake’ attached screens in xorg.conf:

      sudo nvidia-xconfig -a --allow-empty-initial-configuration




        1. Hello Hector – hmm, I’m not seeing the error – I haven’t got Linux installed to double check, can you let me know? Here’s what those two options do:

          --allow-empty-initial-configuration

          Allow the X server to start even if no connected display devices
          could be detected.

          -a, --enable-all-gpus
          Configure an X screen on every GPU in the system.




    2. You need Xorg installed and running with the nvidia module, even on an SSH server, otherwise the cards stay in P8 (lowest perf mode) anyway.

      You can just get away with it by installing xorg-server and lightdm, for instance.

      Also, when you’re running from SSH you have to tell nvidia-settings which screen Xorg runs on with the “-c” argument.

      Eg:

      nvidia-settings -c :0 -a [gpu:7]/GPUMemoryTransferRateOffset[2]=$memoryOffset




  2. What system config are you mining with at 140 MH/s @ 565 watts, sire?

    Is this the config ?

    ASRock H110 PRO BTC+
    Intel Celeron G3900 51W TDP
    8GB DDR 4
    Western Digital 300GB Velociraptor
    850W Evga Power supply – 80 Plus Gold rating
    8x Zotac GTX 1050 Ti Mini (single fan)
    1x Gigabyte GTX 1050 Ti OC (dual fan)
    Linux Debian 9 + GNOME
    Claymore miner 9.8
    nvidia-settings & nvidia-smi GPU Overclocking / power management
    Latest Linux Nvidia drivers




  3. So 8x Zotac GTX 1050 Ti Mini (single fan), 1x Gigabyte GTX 1050 Ti OC (dual fan) and a GTX 1060 3GB.

    Both the Zotac and Gigabyte 1050s are 4GB?




  4. Hi, I just wanted to inform those who have the same problem as I had on Arch/Manjaro: you need to edit /etc/X11/xorg.conf.d/20-nvidia.conf; it should contain the same thing that xorg.conf contains. I also used some of the settings from https://pastebin.com/vgAkJLsR – not sure how important they are. Everything else is pretty much the same as in this guide. Thanks to the author, by the way. I think a Linux mining rig is much better because it won’t nag you to death about updates and won’t reboot on you unexpectedly.




    1. Hey there – thanks for your comment and information. Very true about Windows updates – but my Linux rig does sometimes go down, and it takes out my entire network connected to the same switch lol (a reboot fixes it).




  5. Nice setup!
    Newbie here, got tons of questions:
    1. “Memory offset”? Don’t you want to overclock the graphics clock MHz instead? Or is that what this setting does? After all, that is what is doing the calculations, right?
    2. Power limit – is that the same as over-/under-volting? If I set a lower power limit than my card’s normal power usage, is it under-volting? If I set it higher than the normal power usage, is that over-volting?
    3. How do you come up with the number for the power limit for a specific card? Do you calculate it based on a percentage of max power? Is there an optimal power-limit/overclock ratio?
    What I basically want to know is: why did you specifically choose a 300MHz memory transfer offset and a 53W power limit?
    Thankful for any answer or links.




    1. Hello Bob,

      Thanks – I’ll try to answer your questions to the best of my ability.

      1) Overclocking – Memory vs Core:
      When you start to experiment with mining you’ll find out it depends on the type of coin you’re mining. It comes down to the coin’s mining algorithm. Certain algorithms will produce a higher hash rate with either higher core clock or memory clock. Ethash (used for mining Ethereum), in my experience benefits from higher memory clocks over core clock speed. Whereas another algorithm will produce a higher hash rate by overclocking the core clock instead of the memory clock.

      2) Power limit:
      Kinda… It’s a bit more complicated than that, since cards dynamically control clocks/voltage/power consumption. Based on my observations with my Nvidia cards, when setting the power limit below 100% (e.g. max power at 75%), the card starts to under-volt / throttle its clock speed. If I set the power limit to, say, 50% (some cards won’t even go this low), the reported clock speed may halve (i.e. a card that usually runs at 1607MHz might only run at 800MHz due to insufficient power).

      When I set the power limit to 120% (and nothing else), the voltage does not go above the stock value. So there’s a bit more to it when over-volting.

      Simple answer: lowering the power limit under-volts / under-clocks the card whenever the card detects it’s hitting your specified TDP limit (power consumption).

      This is evident in my posts where I show the difference in hash power vs power consumption. Limiting power reduces the card performance.

      3) Magic overclocking numbers :
      It’s trial and error. My 1050 Ti’s lowest power limit, iirc, is 70% (53W / 52.5W or thereabouts) and my GTX 1080 Ti’s lowest power limit is 50%.

      You find the optimal power limit by reducing the value to the point where performance degradation is negligible when mining: try the lowest power setting possible without overclocking & with a hash rate you are happy with (I said 53W because that’s the lowest power setting I could set my cards to, since I care about power).

      Now that you’re happy with the card’s power consumption when mining, you gradually increase the memory / core clock speed at the current power setting to figure out where instability begins.

      That being said, I haven’t really hit walls from combining a memory overclock with a power limit. I only hit walls because the card’s memory simply won’t clock higher and stay stable, i.e. I could have +350MHz on the memory clock with the power limit at 53W or 72.5W – it will still be unstable regardless of max power.

      To confuse things further, my 1050 Tis’ max memory clock offset was a lot lower on Linux than on Windows. So you have drivers and operating system to contend with too!
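
      For reference, you can read a card’s allowed power-limit range straight from nvidia-smi rather than guessing – on the drivers I’ve used, the POWER query prints “Min Power Limit” / “Max Power Limit” lines:

      nvidia-smi -q -d POWER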




      1. Wow, I never expected such a detailed answer! Thanks for that!
        So, the way you come up with the values is trial and error. Makes sense, since there cannot be universal values that work for every card.
        I just think that the power limit value is a little bit fuzzy, since power is a result of many things, and it can limit performance even though you have specified a higher overclock value…
        It would be simpler with just a voltage value that you can change, and the power would be what it would be.
        But anyhow, I will do what you wrote: lower the power limit until it impacts hashrate, then increase it a little, then start to overclock either core or memory (depending on the algorithm) until it gets unstable, then lower it a little. I may have to fine-tune the values a bit, but this should work.
        Thanks.




  6. Hello, this setup was working for me until I wanted to use the monitor on the integrated graphics chip of the H110 Pro BTC. Somehow the xorg.conf gets overwritten on reboot, disabling the Coolbits setting :/ and setting xorg.conf to read-only locked me out of the system.




    1. Hello Leozolt,

      Damnit, I was going to suggest read-only, but you said it locks you out.

      Can’t help I’m afraid – post a thread on https://bitcointalk.org/ – there are many miners on there with lots of Linux experience.

      Let me know how you get on.




  7. Just using one of the GPUs for the display (nice to see the artefacts when clocking too high)… As it seems I can choose between integrated graphics or Nvidia acceleration in the driver settings… so why not be able to use both? I think it is the driver trying to manage something it should not.




  8. Awesome writeup. Quick question: what’s the property name for adjusting the clock offset? nvidia-settings -q all doesn’t list anything with “offset” in the name (including the memory offset property). I have a 1050 Ti as well. I tried nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[2]=300, and it locked my system up hard. I couldn’t Ctrl-Alt switch to another console, and ssh’ing in to reboot just ignored me (i.e. I could ssh in and issue the command, but no reboot).

    Also, are you supposed to actually be able to adjust the offset settings in the nvidia-settings GUI? Putting numbers in there doesn’t do anything, but setting the memory offset via the CLI and then launching the GUI shows the new setting.

    Again, great write-up.




    1. Hello Jason,

      Thanks – I think there are a few holes in the article that need to be filled, but your feedback is much appreciated.

      You’re right about nvidia-settings -q all – it reveals nothing about the properties directly linked with overclocking, nor does the manual page (man nvidia-settings). p.s. you can also get a nicely formatted list of the possible attributes with this command: nvidia-settings --describe=list

      Apparently man nvidia-settings used to explain all the attributes, including GPUGraphicsClockOffset! Nvidia conspiracy.

      Question 1: GPUGraphicsClockOffset is correct for setting the clock offset. I’ve had a few lockups too when overclocking too aggressively – it resulted in me doing a hard reset (i.e. ssh access froze and going to the rig itself with a screen wasn’t very responsive). To be honest I don’t tinker with the graphics clock – it yields me the least gain for mining ETH.

      To confirm: the way you are setting the clock offset is correct.

      Question 2: being able to directly adjust the clocks in the GUI – make sure you hit ENTER after selecting a value. You can confirm the change by looking at “Graphics Clock” at the top of the PowerMizer Information panel:

      hth!




      1. You nailed it on both counts, which I figured out while waiting for your reply. I guessed the right property for the clock from the command line, but my card freezes at anything above 150, and even 150 is prone to shutting my miner down. I also figured out the Enter thing for the GUI just by guessing, but I appreciate the confirmation for those who follow after.

        Incidentally, I’ve been able to push the mem transfer rate all the way to 2000 with no stability issues or temperature concerns, and I went from a base hash rate of ~12 Mh/s to topping out at ~15.5 Mh/s. That’s a 29% increase. Not too shabby.

        I’m with you on the Graphics clock. What little bit I could push it, 100 MHz, yielded no appreciable difference in mining rate.




        1. Hey Jason,

          Glad you sussed it out! Sometimes the best approach is the sledgehammer. And my god, man – that’s an incredible increase. I’m going to have to review my settings – my 1050 Tis never went above 13.5 before crying and dying back when I set them up – it’s been a while since I’ve updated the drivers or the miner on the rig, mind you.

          I’ll update this post if I have anything conclusive to report.




          1. Have you ever tried this method with different cards? I’ve got a 1050 Ti and a 1050. When I just had the Ti in there, Mint by default wouldn’t have an xorg.conf file at all. To get the settings working, I had to place the relevant sections in a file under /etc/X11/xorg.conf.d/20-nvidia.conf. Now with two cards, I run through your steps, and Mint generates and keeps an xorg.conf file. But when I boot with that setup, Cinnamon keeps crashing and X returns to fallback mode. In fallback mode, I get all the overclock settings, but it’s crap as an OS GUI.

            The offending line seems to be the Screen 1 line in the ServerLayout section. If I comment that out, X works fine, but there are no overclock settings. If I leave it in, then I get OC but the crap fallback GUI.

            Any thoughts on getting this going?




  9. Hi, can you help me with a question?
    I have the same rig as you: 10x MSI 1050 Ti and the H110 Pro BTC+ motherboard.
    I generated xorg.conf, but it is reset after rebooting every time.

    Does it happen to you?
    Thank you very much.




    1. Hello Harris,

      This does not happen to me. Somebody mentioned a similar problem in an earlier comment – check the comments on this post; they may have mentioned the fix.




        1. Hi Harris

          I have the same issue with the H110 Pro BTC+. I’ve got 4x GTX 1080 Ti and I can’t find a way to enable overclocking in the Nvidia GUI. The Nvidia drivers are 384.111.
          I tried using the motherboard DVI and now I’m connected to the first card of the rig, with no luck. If you find a solution I would like to hear about it.

          Noob Miner, excellent post.

          Thanks




      1. Hello again Harris,

        I hope you were able to solve the issue.

        I had the monitor connected to one of the 1050 Ti cards – I’ve read about and seen people connect to the onboard VGA without any issues though.

        hth




  10. Hi Noob Miner!
    Thank you very much for writing this guide.
    I tried to follow your steps one by one: I manage to modify the xorg.conf file at the right path, and I confirm that the lines with “Coolbits” “28” were added for all screens when I open the file with gedit. But when I reboot the system, the PowerMizer features aren’t editable, so I check the xorg.conf file again and I see that it’s been modified automatically at startup, and the “Coolbits” “28” lines aren’t there any more.
    Do you have any idea what I am doing wrong?

    Thank you so much for the effort of helping out us newbies!




    1. Hello Raijard,

      Thanks for the comments. I’m uncertain, but if you’ve read the comments without luck, I would suggest changing the ownership of the xorg.conf file to your user and also setting the file to read-only mode – that should be good enough to stop the system from overwriting it. You can always change this back if it causes problems down the line; just note down who owns that file first and what the permissions are.

      p.s. I haven’t checked my rig since I’ve gone back to Windows, now that it supports more than 8 GPUs.

      hth!




      1. Thank you very much for your answer, Noob Miner.
        I’ve checked the comments, and the solution of using one of the mining cards as the display device seems to be the only option to avoid xorg.conf being overwritten at start-up. Nevertheless, it doesn’t look like a clean solution to me.
        Well, what you say about the Windows Nvidia drivers being able to manage more than 8 video cards nowadays was totally unknown to me. I’m trying to set up a 13 Nvidia card rig with an ASRock H110 Pro BTC+ mobo. Do you think that would be possible to achieve under W10?
        By the way, I’ve always found issues when plugging one of the cards into the PCIE-16x slot, even under Linux, which prevents the system from starting. Am I missing an important BIOS setting for this motherboard? (I left all BIOS settings at their defaults.)
        Thank you very much for your help!




        1. Hello again Raijard,

          I am mining with 11 GPUs now – 9x 1050 Ti & 2x 1060 on the ASRock H110 Pro BTC+ and Windows 10 – so yes, it is possible. Previously there was a Windows / Nvidia driver limit where you couldn’t mine with more than 8x Nvidia cards, i.e. Windows Device Manager would show 10 cards, but the miners were only able to use 8 of them – that was a Windows / Nvidia driver issue.

          That has now been resolved since the latest Windows update, so you can mine with more than 8 cards.

          As for the problem with directly connecting the card to the PCIE-16x slot, I had no problems – all the BIOS settings were left at default. Maybe try another card.

          Also, is the system actually failing to start? No power at all? Does it power down? It could be that the display output has been moved to another GPU. Try using the on-board VGA connector instead if you haven’t tried that already. Other than that, I’m out of solutions!

          hth




  11. Hey Noob Miner,

    I had already used this tutorial to get my 1050 Ti overclocked with no issues, but unfortunately it didn’t work with the new 1070 right off the bat.

    When I put in your nvidia-xconfig line, it would only apply to my first GPU. I tried editing xorg.conf manually, but it kept giving me boot-up issues. So after trying some random tips from other forums, I found this:
    https://gist.github.com/myuriy/b8547b876233363514ed2a4c9a524649

    Since I already had my PC setup with the drivers for nvidia and packages for mining, I only used these lines:
    sudo rm -rf /etc/X11/xorg.conf.d

    sudo mkdir /etc/X11/xorg.conf.d

    sudo nvidia-xconfig --allow-empty-initial-configuration --enable-all-gpus --cool-bits=31 -o /etc/X11/xorg.conf.d/20-nvidia.conf

    I think “--enable-all-gpus” was the key, because it made me realize that in my nvidia-smi readout it said my 1070 was “off”, but in your screenshot they are all “on”.

    So I hope that saves someone else a bit of trouble.

    Thanks for the great tutorial!




    1. Hello Irish Sausage (cool name lol),

      Good work finding the problem and posting your findings! This will be helpful to others who have experienced similar issues.

      p.s. if anyone’s wondering what those commands do: they completely remove the xorg configuration folder (settings wiped), re-create that folder, and then create/initialise a new config file called 20-nvidia.conf with all the GPUs enabled (--enable-all-gpus) and overclocking unlocked (--cool-bits=31).




  12. Realmente es el mejor tuto q he encontrado para OC las Nvidias…… Es lo q me faltaba para terminar mi Rig de 6x GPU Nvidia GTX 1060

    Translation:

    “It really is the best tutorial that I have found for OC the Nvidias …… It is what I needed to finish my Rig of 6x Nvidia GTX 1060 GPU”




  13. Hey man, good tutorial. When I set the memory clock offset and hit enter in the GUI, nothing happens. The cards just mine at the same hash rate. I’m mining the Neoscrypt algo.




  14. hello

    please help, I get this error. BTW I have a GTX 1050 Ti and I read that mining ETH + Sia needs 65W – should I change the .sh file??

    mine-rig-2@minerig2-EP43-UD3L:~$ cd && ./overclock.sh
    Persistence mode is already Enabled for GPU 00000000:01:00.0.
    Persistence mode is already Enabled for GPU 00000000:03:00.0.
    All done.
    Power limit for GPU 00000000:01:00.0 was set to 53.00 W from 65.00 W.
    Power limit for GPU 00000000:03:00.0 was set to 53.00 W from 65.00 W.
    All done.

    (process:2490): Gtk-WARNING **: Locale not supported by C library.
    Using the fallback ‘C’ locale.
    Failed to connect to Mir: Failed to connect to server socket: No such file or directory
    Unable to init server: Could not connect: Connection refused

    ERROR: The control display is undefined; please run `nvidia-settings --help` for usage information.

    (process:2491): Gtk-WARNING **: Locale not supported by C library.
    Using the fallback ‘C’ locale.
    Failed to connect to Mir: Failed to connect to server socket: No such file or directory
    Unable to init server: Could not connect: Connection refused

    ERROR: The control display is undefined; please run `nvidia-settings --help` for usage information.

    here is the overclock.sh

    #!/bin/bash

    # Script needs to run as sudo for nvidia-smi settings to take effect.
    [ "$UID" -eq 0 ] || exec sudo bash "$0" "$@"

    # Setting a terminal variable called memoryOffset

    # Since all my cards are the same, I’m happy with using the same Memory Transfer Rate Offset
    memoryOffset="300"

    # Enable nvidia-smi settings so they are persistent the whole time the system is on.
    nvidia-smi -pm 1

    # Set the power limit for each card (note this value is in watts, not percent!)
    nvidia-smi -i 0,1 -pl 53

    ## Apply overclocking settings to each GPU
    nvidia-settings -a [gpu:0]/GpuPowerMizerMode=1
    nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[2]=$memoryOffset

    nvidia-settings -a [gpu:1]/GpuPowerMizerMode=1
    nvidia-settings -a [gpu:1]/GPUMemoryTransferRateOffset[2]=$memoryOffset




    1. The warning looks to be specific to a missing locale (also known as a language pack).

      Gtk-WARNING **: Locale not supported by C library.

      You probably need to install the language pack, check this link out:

      https://askubuntu.com/questions/359753/gtk-warning-locale-not-supported-by-c-library-when-starting-apps-from-th

      The error:

      ERROR: The control display is undefined; please run `nvidia-settings --help` for usage information.

      Try running this command to fix it: sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

      You also need to make sure you’re using Linux with a desktop interface – not headless.

      If the above fails, then your best bet is Google




  15. Hi guys, I’m trying hard to overclock my 1080 Tis.
    But any command I insert errors out for me. That xorg.conf seems to be missing on my Linux install. Please help me with a detailed guide, or better, a video. Any help is greatly appreciated.




      1. miner@miner:~$ sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration
        [sudo] password for miner:

        Using X configuration file: "/etc/X11/xorg.conf".
        Option "AllowEmptyInitialConfiguration" "True" added to Screen "Screen0".
        Option "AllowEmptyInitialConfiguration" "True" added to Screen "Screen1".
        Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.backup'

        ERROR: Unable to open the file "/etc/X11/xorg.conf" for writing (Operation not
        permitted).

        ERROR: Unable to write file "/etc/X11/xorg.conf"; please use the
        "--output-xconfig" commandline option to specify an alternative output
        file.




        1. That looks like a permission problem for the file – it can’t write to it. Is it set to read-only? Make it writable and it should be able to save it.




  16. I’m having a hard time with the config file. I load in the Coolbits settings for each GPU with your command and reboot, but the config file resets back to its original state. How do I make the modification stick?




    1. Sledgehammer approach – after setting up your GPUs, change the file permissions to read-only & immutable:

      chmod 444 filename
      chattr +i filename

      p.s. there is a “proper” way to do this, with custom configuration files – I would head over to bitcointalk.org.
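
      p.p.s. for those wondering what that “proper” way might look like: a couple of commenters here put the options in a drop-in file instead of xorg.conf. A minimal sketch of such a file (untested on my rig, and the exact sections you need may vary by distro / driver version):

      # /etc/X11/xorg.conf.d/20-nvidia.conf (illustrative only; one Device section per GPU)
      Section "Device"
          Identifier "Device0"
          Driver "nvidia"
          Option "Coolbits" "28"
      EndSection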




  17. How do we prevent the Russians from hacking into our mining rig? Every few days my miner suddenly stops mining and I get a weird message on my terminal in Russian.
    Please help – I spent $3,000 and I can’t have this continue to happen. Will post a screenshot when I get home; I am at work right now.




    1. Hello Harvey,

      Hmm, well I wouldn’t single out Russians in general, but hackers…

      So that’s a tough one.

      The usual precautions you’ll see posted online:

      • If you’re using Claymore, make sure you haven’t enabled the remote management feature.
      • Install wallets in VM environment
      • Antivirus software
      • Tighten your software or hardware firewall rules
      • Sell your mining rig and just buy coins instead




  18. Hello, do you experience slow boot times with 9 GPUs??? On my system the boot hangs and takes far too long (with 4 it’s fast), and when I change parameters it’s very slow. Why?




