35 Comments

  • Beaver M. - Friday, November 29, 2019 - link

    That bubbling would drive me crazy.
  • valinor89 - Friday, November 29, 2019 - link

    But no deafening fan noise. I wonder if this will allow for quieter operation in dense server rooms. Server fans are LOUD.
  • firewrath9 - Friday, November 29, 2019 - link

    They still need to cool the heated vapor, so they would need giant condensers, which would require big fans.
  • The Chill Blueberry - Friday, November 29, 2019 - link

    Yes, but they can use bigger, slower fans rather than the tiny ear-splitting fans needed to fit in the racks.
  • qlum - Saturday, November 30, 2019 - link

    Except bigger fans take up more space, and noise really is not a big issue in a server environment, so loud fans generally make a lot of sense here.
  • PeachNCream - Saturday, November 30, 2019 - link

    I spend enough time working around rack-mounted hardware that I bring my own hearing protection. It does not take a lot of that sort of noise to damage your hearing, and you never can get that back once it's gone. Things that can reduce server fan noise would be helpful, and if the cooling itself is more efficient in terms of power and removal of waste heat, it's good in many ways. Now if we could just develop hardware that doesn't produce as much heat to begin with, that'd be even better.
  • mode_13h - Tuesday, December 3, 2019 - link

    More efficient HW will just make it affordable for datacenters to grow even larger.

    I'm not saying not to care about energy efficiency, but demand for compute is forecast to significantly outstrip any energy efficiency improvements on the horizon.
  • rahvin - Saturday, November 30, 2019 - link

    You'd immerse the whole rack (in fact the whole row of racks) and cycle the vapor to a chiller on the roof. It would actually be quite a bit more efficient than the hot/cold aisles of the current design if it weren't for all the complications the system would bring.
  • rbanffy - Monday, December 2, 2019 - link

    Noise may not bother humans who spend most of their time outside the datacenter, but the vibration affects the machines the fans are attached to. You know - screaming at hard disks increases latencies.
  • mode_13h - Tuesday, December 3, 2019 - link

    Fan noise is the sound of wasted energy. By definition, a very low-PUE setup cannot be particularly loud.
  • sharath.naik - Sunday, December 1, 2019 - link

    I heard there is a similar but better way to use liquid for cooling without all the hassles. It's called heatpipes; from what I hear they are very efficient.
  • Dragonstongue - Sunday, December 1, 2019 - link

    Even better is liquid within the CPU or GPU core itself. IBM did this design many years back; it still has not hit "mainstream" and likely will once full optical comes into play.

    I would <3 to see the bubbles

    Nothing says it cannot be liquid to heatpipe to a free-air radiator, i.e. no fans required; it just uses the difference between the heat-producing part and the cooler ambient air around it (even just a small TEC unit to provide small bursts of cooling per heatpipe or heatsink).
  • mode_13h - Tuesday, December 3, 2019 - link

    In a 2-phase immersion cooling setup, you could just run with a bare die. Perhaps the die could even be textured in some way, to assist convection & nucleation.
  • destorofall - Monday, December 2, 2019 - link

    Heat pipes and vapor chambers are great, but you still rely on relatively large mass flow rates of air to remove the heat from the fin stack in such a confined and restricted space.
  • mode_13h - Tuesday, December 3, 2019 - link

    Funny thing is that both heat pipes and this two-phase liquid cooling work by basically the same principle. The main difference is that heat pipes have a low vapor pressure (i.e. near vacuum) and very little fluid, while these are mostly full of fluid.

    If you tried to build a computer enclosure like a heatpipe (i.e. with low vapor pressure and little fluid), the failure mode would be much worse (and more likely). Also, you'd need to run capillaries so the fluid could move from the condensation sites to the heat sources.
  • mode_13h - Tuesday, December 3, 2019 - link

    BTW, I know that's not what you were proposing, but I thought it was an interesting observation & tangent.
  • Duncan Macdonald - Friday, November 29, 2019 - link

    Looking at the top picture, I would not trust this cooling as shown: the bottom of the chip would be far better cooled than the top. Much of the top half of the chip is covered by gas bubbles and will have poorer cooling than the bottom. For the cooling to be good, it needs a pump that supplies liquid to the hot surface quickly enough that it does not get insulated by bubbles of gas.
  • azfacea - Friday, November 29, 2019 - link

    Nah, small problem, if a problem at all. The question is whether all the components can take it for the duration of their life. If so, this could be a big deal for power and space efficiency.
  • PVG - Friday, November 29, 2019 - link

    If that's really a concern, a slight angle of the board would suffice to resolve the issue.
  • azfacea - Friday, November 29, 2019 - link

    Even vertically it's not clear that it's a problem. The additional turbulence on top might compensate for less contact with the liquid. You'd have to measure it, but yes, your approach could work if it is a problem.
  • FullmetalTitan - Friday, November 29, 2019 - link

    Looks to be operating in the nucleate boiling regime, the most efficient for heat transfer. It would be a problem if you saw it transition to film boiling, but I'm guessing the fluids are designed with a viscosity and boiling point that maximize heat transfer in the expected operational window. Pretty basic undergrad-level heat transfer equations to find those values.
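The "undergrad-level heat transfer" point above can be sketched with Newton's law of cooling. The coefficients below are illustrative textbook order-of-magnitude values, not measured figures for any specific fluid, and the chip area and superheat are assumptions:

```python
# Rough comparison of heat removal from a chip surface in liquid natural
# convection vs. nucleate boiling, using q = h * A * (Ts - Tsat).
# Heat transfer coefficients are illustrative order-of-magnitude values.

def heat_removed_watts(h, area_m2, superheat_k):
    """Newton's law of cooling: q = h * A * dT."""
    return h * area_m2 * superheat_k

AREA = 0.0016          # assumed 40 mm x 40 mm lid, in m^2
SUPERHEAT = 15.0       # surface temperature above fluid saturation, K

H_NATURAL_CONVECTION = 500.0   # W/(m^2*K), liquid natural convection
H_NUCLEATE_BOILING = 10_000.0  # W/(m^2*K), typical nucleate boiling range

q_conv = heat_removed_watts(H_NATURAL_CONVECTION, AREA, SUPERHEAT)
q_boil = heat_removed_watts(H_NUCLEATE_BOILING, AREA, SUPERHEAT)

print(f"natural convection: {q_conv:.0f} W")  # 12 W
print(f"nucleate boiling:   {q_boil:.0f} W")  # 240 W
```

With these assumed coefficients, boiling moves roughly 20x the heat of single-phase convection at the same surface temperature, which is why staying in the nucleate regime matters.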
  • Santoval - Saturday, November 30, 2019 - link

    The minor thermal insulation of the gas bubbles is more than compensated for by the convection of the liquid these bubbles drive upward. The CPU placement is vertical by design, because this is two-phase immersion cooling. If the CPU were horizontal, then heat would only be released by the gas bubbles, and that would be inefficient.

    In this design heat is released by both the gas bubbles and the liquid that these bubbles drag over the CPU. So, unlike how it might seem, the very bottom (the bottom few mm) of the chip is most likely cooled a little *worse* than the rest of the chip, because those few mm are cooled only by the gas bubbles and not by the convective liquid flow.
  • mode_13h - Tuesday, December 3, 2019 - link

    Phase change is how most of the energy is removed from the chip. So, you'd have to see whether enough liquid was contacting the top of the chip. If so, then you're good.

    But yeah, I had the same reaction. There are a range of possible solutions, including convection, varying the viscosity or boiling point of the fluid, and increasing the surface area of the chip's heat spreader.
  • mode_13h - Tuesday, December 3, 2019 - link

    Also, the heat spreader should conduct heat itself, so some heat should be conducted from the top to the bottom of the chip.
  • eachus - Friday, November 29, 2019 - link

    "That being said, given the technology behind them, I wouldn’t be surprised if a 2PILC rack would cost 10x of a standard air-cooled rack."

    May be true, but what is the packing density of the 2PILC rack? In the supercomputing realm, replacing 100 racks with 10 racks would be a no-brainer. I think ten times the density would be unlikely, but upping the CPU density by 4x or 5x and leaving the disk storage untouched might be very useful. In practice, though, Cray is going for boards with water cooling of CPUs, memory, and VRMs. This will be at least twice the density of air-cooled. An advantage of water cooling is that the water can be pumped through a radiator on the roof. For 2PILC, the heat will need to be transferred to air or water in the server room.
  • ksec - Saturday, November 30, 2019 - link

    Excuse my ignorance, but why would rack density be a problem when the CPU count is exactly the same, i.e. the interconnect is still a problem?

    Rent per square foot should hardly make a difference in TCO.
  • destorofall - Monday, December 2, 2019 - link

    In a normal 2PILC system the vapor is condensed on an array of condenser coils positioned just above the liquid, and the condenser water is pumped to a liquid-air heat exchanger. The water outlet temps can run at 50°C if the conditions are right. Assuming a working fluid of FC-72, that means you could potentially set up a system to run in a desert, provided adequate airflow is present and fouling is low. With Tj-Tf resistances being around 0.04°C/W, that can put Tj around 66°C at 250W.
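The Tj figure above checks out as a one-line thermal resistance calculation, assuming the fluid sits at FC-72's saturation temperature (~56°C at 1 atm per 3M's published properties; the resistance and load are the figures from the comment):

```python
# Sanity check of the junction temperature estimate: Tj = Tsat + R_jf * P.
# Tsat ~56 C is the approximate boiling point of FC-72 at 1 atm;
# 0.04 C/W and 250 W are the values quoted in the comment.

T_SAT_FC72 = 56.0   # C, approximate saturation temperature of FC-72
R_JF = 0.04         # C/W, junction-to-fluid thermal resistance
POWER = 250.0       # W, chip dissipation

t_junction = T_SAT_FC72 + R_JF * POWER
print(f"Tj ~= {t_junction:.0f} C")  # Tj ~= 66 C
```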
  • mode_13h - Tuesday, December 3, 2019 - link

    The downside of water-cooling everything is all that tubing that has to be over-built to minimize the chance of leaks. This way, you have minimal overhead and just dunk the whole thing in fluid. Then, you need just one big heat exchanger to remove heat from the entire unit.

    I don't think it's a given that these units would be more expensive than Cray's approach.
  • GreenReaper - Friday, November 29, 2019 - link

    Only two phases? I smell a Gillette-style opening for plasma-cooled components!
  • mode_13h - Tuesday, December 3, 2019 - link

    Why stop there? Let's throw in some ices!
  • sharath.naik - Sunday, December 1, 2019 - link

    So if the load pushes the whole liquid reservoir above 59°C, it all evaporates? Not a smart thing to do; it is wiser to have thermal headroom up to 100°C. Thermal cooling is more efficient at a higher temperature difference anyway.
  • destorofall - Monday, December 2, 2019 - link

    Under optimal conditions one would want these liquids to be at the saturation temperature (boiling point) to take full advantage of the nucleate boiling region of the boiling curve. Provided the condensers can condense the amount of vapor being produced by the load, there is never the possibility of all the fluid boiling away.
  • mode_13h - Tuesday, December 3, 2019 - link

    Forget what you know about PC cooling. It would take a tremendous amount of energy to boil off all the fluid at once. Most of the energy transfer, in this type of cooling, happens during the phase-change from liquid to gas, so it's not like your PC case temperatures which can gradually increase until they exceed some threshold.

    Basically, you have to know the upper range of how much heat the chips can dissipate, and then make sure your heat exchanger can more than keep up. In the event that something in the chain fails, the CPUs/GPUs/etc. will thermally throttle, worst case.
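The point above about the energy needed to boil off the fluid can be made concrete with rough numbers. The FC-72 properties below are approximate datasheet values, and the tank size and rack power are assumptions for illustration:

```python
# Back-of-the-envelope estimate: how long would a rack take to boil off
# its entire reservoir if the condenser failed completely?
# FC-72 density ~1680 kg/m^3 and latent heat ~88 kJ/kg are approximate
# datasheet values; the 100 L tank and 10 kW load are assumed.

TANK_LITERS = 100.0
DENSITY = 1680.0        # kg/m^3
LATENT_HEAT = 88_000.0  # J/kg, heat of vaporization
RACK_POWER = 10_000.0   # W, sustained heat load

mass_kg = TANK_LITERS / 1000.0 * DENSITY  # 168 kg of fluid
energy_j = mass_kg * LATENT_HEAT          # ~14.8 MJ just to vaporize it
minutes = energy_j / RACK_POWER / 60.0

print(f"energy to boil off: {energy_j / 1e6:.1f} MJ")
print(f"time at full load:  {minutes:.0f} minutes")  # ~25 minutes
```

Even with zero condensation and the fluid already at saturation, this assumed setup has tens of minutes before dry-out, which is plenty of time for the chips to throttle or shut down.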
  • alufan - Monday, December 2, 2019 - link

    Weird, I can remember this was all the rage for high-end systems about 10 years ago, and then it went away due to the unobtainable cost of the liquid from 3M. Mineral oil was a big thing as well, but only for the board components, and of course it's difficult to remove the heat soaked into the oil without a big reservoir. Anyone else remember the Zalman Reserator? You needed something similar to keep the oil in check after a while, or a very specific case with lots of fins.
  • destorofall - Monday, December 2, 2019 - link

    If only the board and power supply manufacturers could make their products denser, then fluid cost would become less of a concern.
