
Tuesday, January 4, 2022

Cooling a Nvidia Tesla M40

    Graphics cards with a lot of video memory are difficult to come by without paying exorbitant prices. That's why I was intrigued when I found out about the uncommon practice of using data-center accelerator cards for the large CUDA workloads I was seeing in my research. When I saw a cheap eBay listing for an Nvidia Tesla M40 24GB, an accelerator card with 24 gigabytes of video memory and roughly the performance of a Titan X, I bought one. Here is my experience with trying to use it and, more importantly, cool the beast.

Link to Assembly

Setting the Stage

    The Nvidia Tesla M40 24GB card is a monster at nearly 11 inches in length, and it feels like it weighs a ton compared to the measly GTX 1650 and GTX 950 that I previously owned. It is based on the GM200 die with 3072 CUDA cores, which it shares with the Titan X and GTX 980 Ti (an important fact that we will come back to later), surrounded by a huge amount of memory. It is also still supported by CUDA, unlike Kepler-based GPUs such as the popular Tesla K80 [1].

    There have been many methods of cooling such a unique card. The Youtube channel Craft Computing has made several videos on the Tesla series of cards [2], both experimenting with blowing air through the card [3] and making custom mounts to fit CPU air-coolers on the card (on an M60, which has 2 GM200 GPUs) [4]. Wendell from Level1Techs has also investigated the Tesla K40, an older Kepler-based GK180 GPU from 2013 (note: Nvidia made an actively cooled K40c model) [5]. There have also been various attempts (too numerous to list) to fit 980 Ti coolers, liquid coolers such as the NZXT Kraken G12, and high-pitched server-style blower coolers, detailed on the Level1Techs, ExtremeHW, and Daz3D forums and on Reddit.

    It would also be negligent of me not to mention that the Youtube channel RaidOwl streamed the cooling of a Tesla M40 on November 28th, 2021, using the exact same method I outline here, although I was unaware of this until writing this post [6].

  1. "Nvidia confirms driver support for Kepler GPUs will end in October" - Techspot
  2. "$220 for Titan X Performance TODAY????" - Craft Computing
  3. "How do you cool an nVidia Tesla GPU?" - Craft Computing
  4. "I'll make my own heatsink - Just add Blackjack" - Craft Computing
  5. "Gaming, on my Tesla, more likely than you think" - Level1Techs
  6. "Cooling an NVIDIA Tesla M40 w/ AIO - ID-COOLING ICEFLOW" - RaidOwl

Arrival & Hot Phase

    So I bought one. 

    It arrived in an extremely nondescript envelope and was far heavier than I had ever anticipated. I also ordered the requisite power adapter for the card, a dual 8-pin PCI-E to 8-pin EPS connector. While an 8-pin PCI-E connector is the same shape as the EPS connector you would see on a motherboard for a power-hungry CPU, there are pinout differences that could potentially fry my new card. At the same time I ordered a new PCI-E bracket, as the one that came with the card was meant for some server chassis and would not screw into my case. I also 3D printed an adapter to fit a 40mm fan onto the back of the card to blow air through the cooler.

 

The Tesla M40 with original Nvidia heatsink, 3D printed fan mount and 40mm fan (and yes, that is electrical tape).

    Unlike the previous generation of Kepler cards, the heatsink on the M40 cards was closed at the top, preventing the common method of just taping a few fans to the stock heatsink.

    The anemic 40mm fan from Microcenter was never going to realistically cool the card, but it did hold it at a reasonable idle temperature of 32-34 C, enough for small bits of testing code to run. After 5 minutes of running at 80-90% load, however, the card would approach 89 C, its throttling temperature. I was going to have to find a radically different solution to cooling the card that hopefully wasn't a server-grade blower fan. Relying on some board photos from the ExtremeHW forums [1] and some vague descriptions that the board was essentially laid out like a GTX 980 Ti, I took a leap of faith and ordered an AIO liquid cooler off of Amazon.

  1. "Trying to improve a Tesla M40" - ExtremeHW Forums

Liquid Phase

    The ID-COOLING Iceflow 240 VGA is a reasonably cheap all-in-one liquid cooler designed for graphics cards that I picked up off of Amazon for $100 on sale. Its appeal, compared to the NZXT Kraken G12, was that its pump is built into the radiator, letting me mount the radiator on the bottom of my case without any concern for pump lifetime or air bubbles, and it was actually in stock, unlike many of the liquid coolers compatible with the Kraken. I also didn't have to source my own VRAM and VRM heatsinks and thermal pads - it was an all-in-one solution.


    That isn't to say that there aren't faults with the cooler - the directions are woefully inadequate if you have not already watched several videos about what components are on a graphics card or how to put a liquid cooler on one. The thermal pads were extremely thin, although quite sticky, and the cables and adapters were an absolute rat's nest. No bags of screws were labelled, and some screws were so similar that I had to guess at one point.

But if I'm writing this, then it must have worked, right?

Assembly

Note: These are not intended to be instructions for how to put a liquid cooler on any graphics card, but a description of how I put this specific liquid cooler on this specific graphics card. I recommend watching several videos on liquid cooling before attempting it.

    I started by removing the cooling shroud, unscrewing the four TR8 screws on the front of the card. Then I removed the eight TR6 screws on the top and bottom of the card.



  
    Removing the shroud shows the big difference between this card and the previous generation of Teslas: the closed heatsink. Air can only be blown from one end of the card to the other.
 

    Then fifteen (yes, fifteen! I counted them all and then promptly could only find fourteen of the little buggers) Phillips-head screws on the backplate are removed, loosening both the backplate and the main die heatsink on the front.


    The backplate hides half of the memory and is made of aluminum, so it was important to me to include it for some VRAM cooling in the final assembly. The stock thermal pads are quite thick, possibly even 1.5mm, and I kept the originals for the memory on the back.


    Flipping over the card reveals that the massive heatsink is actually in two pieces - a large aluminum and copper section for the GPU and a smaller all-aluminum section for the power delivery part of the board. This second section required quite a bit of wiggling and, once removed, showed that there is a significant number of thermal pads on the power delivery section of this 250 Watt card. While the heatsink (top) is flipped in the second photo above, it appears that the VRMs may not even touch the cooler, as they do not have any thermal pads; it is also possible that they make direct contact with the cooler. There are the same thick thermal pads for the memory and the resistors around the VRMs. Unfortunately, this heatsink is also what the backplate and the PCI-E bracket screw into via threaded inserts, and in the future I would like to 3D print some adapters to replace the current zip-tie solution.

    After cleaning off the die with some isopropyl alcohol, there was another problem with the ID-COOLING kit. While the VRAM chips on the back get some heat dissipation from the backplate, the Iceflow kit only comes with eight 16mm x 13mm x 3mm heatsinks, which the instructions designate for VRAM cooling. However, it also includes eight 16mm x 13mm x 5mm heatsinks for VRM cooling. Only four of these fit on the Tesla M40 VRMs, so the other four were available to use for the memory.

    The following photo shows the backplate being zip-tied back on before the heatsinks were applied to the front of the card.

    Originally I intended to use the stock thermal pads from Nvidia, and I did for the first part of the assembly. However, they were not sticky enough without compression from screws, and I replaced the thermal pads on the front with the ones included in the Iceflow kit. The next photo shows the difference in thickness between the two - if I had to guess, the Iceflow pads are about 0.5mm.

    The Iceflow cooler comes with a fan designed to blow air over the remaining components and a copper block for the GPU die connected to a 240mm radiator and pump combo. It also comes with a set of two connected 120mm fans on one fan header.




    Fitting the compatible mounting hardware to the cooler was perhaps the simplest part of the assembly, but I managed to not read the instructions for this part - the arrows on the mounting brackets must point towards the cooler. I used the brackets compatible with GTX 9XX series cards and the "long" spring-loaded screws, which fit despite there being no apparent difference between the "long" and "short" screws. In addition, the thermal paste supplied with the kit was sufficient, as my Arctic Silver paste had evolved legs and run off.


    From then on there weren't many photos, as the card had to be assembled with the cooler on the table and the card lowered down onto it. Due to the short length of the tubing, the radiator had to sit in my lap while I tightened down the spring-loaded screws in a cross-wise pattern.


    Other than having to zip tie the PCI-E bracket back onto the card, the card was ready to be put into the case.

Conclusions


     The Tesla M40 now idles at 20 C and under load has not exceeded 34 C while running the Blender Benchmark Suite from opendata.blender.org, even at 100% utilization.

In the Blender Benchmark, bmw27 took 91 seconds and classroom took 267 seconds. Compared to a GTX 1080, which takes ~82 seconds and ~260 seconds respectively, this is respectable, and with three times as much VRAM. The Tesla M40 is also about 3-3.5 times faster than the GTX 1650 originally in my system.
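
    For context, a quick way to watch temperatures and utilization during a run like this is nvidia-smi's query mode. This is just a minimal example - the fields queried and the polling interval are my own choices, not part of the benchmark:

        # poll GPU name, temperature, utilization, and power draw every 5 seconds
        nvidia-smi --query-gpu=name,temperature.gpu,utilization.gpu,power.draw --format=csv -l 5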

If I Did It (Again)

    The use of zip-ties really bothers me. I think if I did this again I would go for a hybrid approach: a CPU liquid cooler with the pump built into the radiator, such as one of the Arctic units, for cooling the die. I would keep the smaller section of the stock cooler for the VRMs with a modified 3D-printed adapter, since a 40mm fan should be sufficient to cool those. Another time...

Monday, August 30, 2021

Using Github for Scientific Computing (Part 1)

 Large data files and code with filenames like "module_12-24-2017_2018_updated_v2_working" can be quite a pitfall in scientific computing. Version control can simplify scientific computing projects by saving the history of their development, as well as providing a simple backup. The most popular option is the combination of Git and Github: Git is the version control software, and Github is an online service that backs up the files and their version history. Git and Github can be quite intimidating, as many guides introduce them with opaque terminal commands and obtuse explanations. The following steps show how I set up Github to manage versions of my research project.

My initial project folder looked a lot like this.

Backups were occasionally taken when there was important data to be saved and looked at later, or when a large amount of code was changed. If there were small changes, the particular lines that were replaced were commented out until they were forgotten about or deleted later. This was not a good way to manage a project that was growing larger and larger the longer I worked on it.

So I signed up with Github using my .edu email account (this gets students some free services) and created a repository.

Repositories are what version control software calls its managed folders. While my code and data are hosted on Github, there is a private repository option so that only I, and whomever I choose, can view them.

To manage git on my computer, I downloaded the Github Desktop software and signed in using my Github account. Then I set up my new repository by "cloning" it into a folder of my choosing. To make sure that nothing got deleted, I made a backup of my original files and then copied them back into my new folder/repository.
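
Cloning through the desktop app does the same thing as the command-line equivalent. As a minimal sketch (the URL is a placeholder for your own repository name):

    # copy the (private) repository from Github into a local folder
    git clone https://github.com/your-username/research-project.git
    cd research-project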

The Github desktop app sees all the changes that are made in this folder and keeps track of them. But let's say that I have finished writing and running code for the day and I want to save all these changes. I will have to "commit" them to save a version of them in Git. Before committing, these changes show up in the desktop app on the left, where a small commit description is required (just to jog your memory).

Pressing "Commit to main" commits these changes to the version control software. Pushing these changes communicates to Github and syncs the version control and files with Github.

One small snag that I ran into was that I had many files over 100MB in size. Syncing these requires another piece of software called "git lfs" (Git Large File Storage), which needs to be installed and configured before Github Desktop recognizes that it is there and allows you to sync these large files.

Installation and configuration of Git LFS
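
As a rough sketch of that configuration (the *.h5 pattern is only an example - track whichever large file types your project actually uses):

    # one-time setup per machine
    git lfs install
    # store files matching this pattern with LFS (example pattern)
    git lfs track "*.h5"
    # the tracking rules live in .gitattributes, so commit that file too
    git add .gitattributes
    git commit -m "Track large data files with Git LFS"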

 In the next post, I will detail a shell script that I have written that manages all of this for me automatically when I run it at the end of the day.
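
For the impatient, a minimal sketch of what such an end-of-day script could look like (placeholder paths and messages only - the real details will be in that post):

    #!/bin/bash
    # end_of_day.sh - commit and push the day's work (illustrative example)
    cd /path/to/my/repository || exit 1
    git add .
    git commit -m "End of day: $(date +%Y-%m-%d)"
    git push origin main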







Wednesday, March 3, 2021

Old Model Rockets

In 2016 and 2017, I launched several model rockets at a NAR club in western Pennsylvania. Here are some videos of those launches.

February 19th, 2017 launch of the Estes Star Orbiter. This rocket was modified from the original design with the addition of a payload bay which carried a home-built Arduino altimeter. Apogee height was 1512 ft.

 

 February 19th, 2017 launch of the Estes Partizon. This rocket was the largest rocket I have launched to date.

 2016 launch of the Estes Ventris with payload bay (empty for this launch).

 

2016 launch of the Apogee Aspire on F10-8 motor. This rocket was simulated using OpenRocket to go ~5000 ft and was never seen again.

 

2016 launch of the Estes Extreme 12 dual-stage rocket.