Graphics cards with a lot of video memory are difficult to come by without paying exorbitant prices. That's why I was intrigued when I found out about the uncommon practice of using data-center accelerator cards for the large CUDA workloads I was seeing in my research. I spotted a cheap eBay listing for an Nvidia Tesla M40 24GB, an accelerator card with 24 gigabytes of video memory and roughly the performance of a Titan X, and subsequently bought one. Here is my experience with trying to use it and, more importantly, cool the beast.
Setting the Stage
The Nvidia Tesla M40 24GB card is a monster at nearly 11 inches in length, and it feels like it weighs a ton compared to the measly GTX 1650 and GTX 950 that I previously owned. It is based on the GM200 die with 3072 CUDA cores, shared with the Titan X and GTX 980 Ti (an important fact that we will come back to later), surrounded by a huge amount of memory. It is also still supported by CUDA, unlike Kepler-based GPUs such as the popular Tesla K80 [1].
There have been many methods of cooling such a unique card. The YouTube channel Craft Computing has had several videos on the Tesla series of cards [2], both experimenting with blowing air through the card [3] and making custom mounts to fit CPU air-coolers onto the card (on an M60, which has two GM204 GPUs) [4]. Wendell from Level1Techs has also investigated the Tesla K40, an older Kepler-based GK180 GPU from 2013 (note: Nvidia made an actively cooled K40c model) [5]. There have also been various attempts (too numerous to list) to fit 980 Ti coolers, liquid coolers such as the NZXT Kraken G12, and high-pitched server-style blower coolers, detailed on the Level1Techs, ExtremeHW, and Daz3D forums and on Reddit.
It would also be remiss of me not to mention that the YouTube channel RaidOwl streamed the cooling of a Tesla M40 on November 28th, 2021, using the exact same method I outline here, although I was unaware of this until writing this post [6].
- "Nvidia confirms driver support for Kepler GPUs will end in October" - TechSpot
- "$220 for Titan X Performance TODAY????" - Craft Computing
- "How do you cool an nVidia Tesla GPU?" - Craft Computing
- "I'll make my own heatsink - Just add Blackjack" - Craft Computing
- "Gaming, on my Tesla, more likely than you think" - Level1Techs
- "Cooling an NVIDIA Tesla M40 w/ AIO - ID-COOLING ICEFLOW" - RaidOwl
Arrival & Hot Phase
So I bought one.
It arrived in an extremely nondescript envelope and was far heavier than I had anticipated. I also ordered the requisite power adapter for the card, a dual 8-pin PCI-E to single 8-pin EPS adapter. While an 8-pin PCI-E connector is the same shape as the EPS connector you would see on a motherboard for a power-hungry CPU, the pinouts differ, and plugging in the wrong one could potentially fry my new card. At the same time I ordered a new PCI-E bracket, as the one that came with the card was from some server chassis and would not screw into my case. I also 3D printed an adapter to fit a 40mm fan onto the back of the card to blow air through the cooler.
The Tesla M40 with original Nvidia heatsink, 3D printed fan mount and 40mm fan (and yes, that is electrical tape).
Unlike on the previous generation of Kepler cards, the heatsink on the M40 is closed at the top, preventing the common method of just taping a few fans to the stock heatsink.
The anemic 40mm fan from Micro Center was never going to realistically cool the card, but it did hold it at a reasonable idle temperature of 32-34 C, enough for small bits of test code to run. After 5 minutes at 80-90% utilization, however, the card would approach 89 C, its throttling temperature. I was going to have to find a radically different solution to cooling the card, hopefully one that wasn't a server-grade blower fan. Relying on some board photos from the ExtremeHW forums [1] and some vague descriptions that the board was laid out essentially like a GTX 980 Ti, I took a leap of faith and ordered an AIO liquid cooler off of Amazon.
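To keep an eye on how close the card was running to that 89 C throttle point, a small script polling nvidia-smi does the job. This is just a sketch, assuming nvidia-smi is on the PATH; the helper names (parse_temp, current_temp, watch) are my own, not part of any tool.

```python
# Poll nvidia-smi and warn as the GPU approaches its throttle temperature.
import subprocess
import time

THROTTLE_C = 89  # the M40's throttling temperature observed above

def parse_temp(line: str) -> int:
    """Parse one line of nvidia-smi CSV output, e.g. '34'."""
    return int(line.strip())

def current_temp(gpu_index: int = 0) -> int:
    # --query-gpu=temperature.gpu with csv,noheader prints a bare number.
    out = subprocess.check_output(
        ["nvidia-smi", "-i", str(gpu_index),
         "--query-gpu=temperature.gpu", "--format=csv,noheader"],
        text=True,
    )
    return parse_temp(out)

def watch(interval_s: float = 5.0) -> None:
    while True:
        t = current_temp()
        flag = "  <-- near throttle!" if t >= THROTTLE_C - 5 else ""
        print(f"GPU temp: {t} C{flag}")
        time.sleep(interval_s)
```

Running `watch()` in a spare terminal while a CUDA job runs makes it obvious whether a cooling setup is holding up or slowly losing the race.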
Liquid Phase
The ID-COOLING Iceflow 240 VGA is a reasonably cheap all-in-one liquid cooler designed for graphics cards that I picked up off of Amazon for $100 on sale. Its appeal, compared to the NZXT Kraken G12, was that its pump was built into the radiator, letting me mount the radiator on the bottom of my case without any concern for pump lifetime or air bubbles, and it was actually in stock, unlike many of the liquid coolers compatible with the Kraken. I also didn't have to source my own VRAM and VRM heatsinks and thermal pads - it was an all-in-one solution.
That isn't to say the cooler is without faults - the directions are woefully inadequate unless you have already watched several videos about what components are on a graphics card and how to put a liquid cooler on one. The thermal pads were extremely thin, although quite sticky, and the cables and adapters were an absolute rat's nest. No bags of screws were labelled, and some screws were so similar that I guessed at one point.
But if I'm writing this, then it must have worked, right?
Assembly
Note: These are not intended as instructions for putting a liquid cooler on any graphics card, but rather a description of how I put this specific liquid cooler on this specific graphics card. I recommend watching several videos on liquid cooling before attempting this.
I started by removing the cooling shroud, unscrewing the four TR8 screws on the front of the card. Then I removed the eight TR6 screws on the top and bottom of the card.
Then fifteen (yes, fifteen! I counted them all and then promptly could only find fourteen of the little buggers) Phillips-head screws on the backplate release both the backplate and the main die heatsink on the front.
The backplate covers half of the memory and is made of aluminum, so it was important to me to include it in the final assembly for some VRAM cooling. The stock thermal pads are quite thick, possibly even 1.5mm, and I kept the originals for the memory on the back.
Flipping over the card reveals that the massive heatsink is actually two pieces - a large aluminum-and-copper section for the GPU and a smaller all-aluminum section for the power delivery part of the board. This second section required quite a bit of wiggling, and removing it revealed a significant number of thermal pads on the power delivery section of this 250-watt card. While in the second photo above the heatsink (top) is flipped, the VRMs themselves have no thermal pads, so they either make direct contact with the cooler or do not touch it at all. The same thick thermal pads cover the memory and the resistors around the VRMs. Unfortunately, this heatsink also carries the threaded inserts that secure the backplate as well as the PCI-E bracket, and in the future I would like to 3D print some adapters to replace my current zip-tie solution.
After cleaning off the die with some isopropyl alcohol, I ran into another problem with the ID-COOLING kit. While the VRAM chips on the back get some heat dissipation from the backplate, the Iceflow kit only comes with eight 16mm x 13mm x 3mm heatsinks, which the instructions designate for VRAM cooling. However, it also includes eight 16mm x 13mm x 5mm heatsinks for VRM cooling. Only four of these fit on the Tesla M40's VRMs, so the other four were free to use on the memory.
The following photo shows the backplate being zip-tied back on before the heatsinks were applied to the front of the card.
Originally I intended to use the stock thermal pads from Nvidia, and I did for the first part of the assembly. However, they were not sticky enough without compression from screws, so I replaced the pads on the front with the ones included in the Iceflow kit. The next photo shows the difference in thickness between the two - if I had to guess, the Iceflow pads were about 0.5mm.
The Iceflow cooler comes with a fan designed to blow air over the remaining components and a copper block for the GPU die connected to a 240mm radiator-and-pump combo. The radiator also comes with two 120mm fans joined as a single unit on one fan header.
Fitting the compatible mounting hardware to the cooler was perhaps the simplest part of the assembly, but I managed to not read the instructions for this part - the arrows on the mounting brackets must point towards the cooler. I used the brackets compatible with GTX 9XX-series cards and the "long" spring-loaded screws, and they fit, despite there being no apparent difference between the "long" and "short" screws. The thermal paste supplied with the kit was also sufficient, as my Arctic Silver paste had evolved legs and run off.
From then on there weren't many photos, as the card had to be assembled with the cooler on the table and the card lowered down onto it. Due to the short tubing, the radiator had to sit in my lap while I tightened down the spring-loaded screws in a cross-wise pattern.
Other than having to zip tie the PCI-E bracket back onto the card, the card was ready to be put into the case.
Conclusions
The Tesla M40 now idles at 20 C, and under load it has not exceeded 34 C while running the Blender Benchmark Suite from opendata.blender.org, even at 100% utilization.
In the Blender Benchmark, bmw27 took 91 seconds and classroom took 267. Compared to a GTX 1080, which takes ~82 seconds and ~260 seconds respectively, this is respectable, especially with three times as much VRAM. The Tesla M40 is also about 3-3.5 times faster than the GTX 1650 originally in my system.
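For what it's worth, the relative performance implied by the render times above can be worked out directly (these are only the numbers quoted in this post, not a rigorous benchmark):

```python
# Render times from the Blender Benchmark runs quoted above (seconds; lower is better).
times = {
    "bmw27":     {"m40": 91.0, "gtx1080": 82.0},
    "classroom": {"m40": 267.0, "gtx1080": 260.0},
}

# How much longer the M40 takes relative to the GTX 1080 per scene.
ratios = {scene: t["m40"] / t["gtx1080"] for scene, t in times.items()}

for scene, r in ratios.items():
    print(f"{scene}: the M40 takes {r:.2f}x as long as the GTX 1080")
```

So the M40 lands within roughly 3-11% of a GTX 1080 on these scenes while carrying far more memory.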