If they are using the S5 chips, those would have to be some pretty high-CFM fans to cool it. The S5 has heatsinks twice the size and they run pretty hot.
So, just going over the photos in detail now, I have some comments:
- What happened with the monolithic heatsink design? Why start using off-the-shelf heatsinks with these? Are these a low production run?
- At 75W max rating per PCI-e connector, those hashboards are bound to consume 225W each. With 3 boards per unit, that is 675W per unit, which would put the upper consumption bound at 2025W (rough numbers sketched below)!!!! How are you running it at ~3400W? Wouldn't that be pushing the connectors past spec and probably lead to burnt plastic casings? (Not that this hasn't happened before; the S5 runs past spec as well.)
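A quick back-of-the-envelope sketch of that math, assuming 3 connectors per board and 9 boards total (3 units of 3, as implied above), against the ~3400W figure quoted in this thread:

```python
# Rough power-budget check (assumed: 9 hashboards, 3 PCI-e connectors per
# board, ~3400W wall draw as quoted in this thread).
PCIE_SPEC_W = 75           # official PCI-e 6-pin spec limit per connector
CONNECTORS_PER_BOARD = 3
BOARDS = 9
WALL_DRAW_W = 3400

rated_ceiling = PCIE_SPEC_W * CONNECTORS_PER_BOARD * BOARDS
per_board = WALL_DRAW_W / BOARDS
per_connector = per_board / CONNECTORS_PER_BOARD

print(f"spec ceiling:  {rated_ceiling}W")        # 2025W
print(f"per board:     {per_board:.0f}W")        # ~378W
print(f"per connector: {per_connector:.0f}W")    # ~126W, roughly 1.7x the 75W spec
```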
The original curved thin-fin heatsinks have great efficiency while not costing an arm and a leg to manufacture. Thicker, shorter-finned heatsinks aren't quite as good but can be crammed into a much smaller space. It's not going to be a problem.
75W is the official PCI-E spec's max; it's not the real-world max by a long shot. Most miners have been running 150-200W per PCI-E connector, and plenty more require 250W+. Most people ran the S5 off 2 PCI-E connectors.
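For a sense of why the 75W figure is conservative, here is a rough per-pin current estimate. It assumes three +12V pins per 6-pin connector and terminals rated somewhere around 8A each; that rating is an assumption on my part, not something from this thread, and the actual limit depends on the terminal series and wire gauge:

```python
# Per-pin current estimate for a 6-pin PCI-e connector at various loads.
# Assumptions: three +12V pins per connector and Mini-Fit-Jr-style terminals
# rated somewhere around 8A each (varies with terminal and wire gauge).
VOLTS = 12.0
PINS_12V = 3
ASSUMED_PIN_RATING_A = 8.0

for watts in (75, 150, 200, 250):
    amps_per_pin = watts / VOLTS / PINS_12V
    headroom = ASSUMED_PIN_RATING_A - amps_per_pin
    print(f"{watts:3d}W -> {amps_per_pin:.1f}A per pin "
          f"(headroom vs ~{ASSUMED_PIN_RATING_A:.0f}A rating: {headroom:.1f}A)")
```

Even at 250W the per-pin current works out to roughly 7A, which is why well-built connectors and cabling tolerate far more than the spec's 75W.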
The worst thing about the monster is that you need to deploy 28 friggin' PCI-e connectors. A 2880W PSU breakout from J4bberwock has at most 20 connectors, Sidehack's 750W PSU breakout has room for 4, and the DPS2000 breakout has 12. You could power this monstrosity with a 2880W PSU + a 750W one, but then you have to find out whether that setup can be used with current sharing... omg... this thing is crazy on all levels
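A quick sketch of the two-PSU combinations, using only the wattages and connector counts quoted above (not verified against the actual breakout boards):

```python
# Check which two-PSU breakout combos cover both the connector count and the
# wattage. Figures are the ones quoted in this post, not verified specs.
from itertools import combinations_with_replacement

BREAKOUTS = {            # name: (watts, PCI-e connectors)
    "2880W (J4bberwock)": (2880, 20),
    "750W (Sidehack)":    (750, 4),
    "DPS2000":            (2000, 12),
}
NEED_CONNECTORS = 28
NEED_WATTS = 3400

for combo in combinations_with_replacement(BREAKOUTS, 2):
    watts = sum(BREAKOUTS[name][0] for name in combo)
    conns = sum(BREAKOUTS[name][1] for name in combo)
    print(f"{' + '.join(combo):40s} {watts:5d}W, {conns:2d} connectors "
          f"(watts ok: {watts >= NEED_WATTS}, connectors ok: {conns >= NEED_CONNECTORS})")
```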
9 boards, 27 PCI-E connectors, 3400W = ~380W per board. You could easily use just 2 PCI-E connectors per board; there are simply more than required (like on the S3 and S5) if you want to use them or want to OC significantly.
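Under those numbers, the per-connector load splits roughly like this depending on whether 2 or 3 of a board's PCI-e inputs are populated (a rough sketch, not measured figures):

```python
# Per-connector load for one ~380W hashboard (3400W / 9 boards, as above),
# depending on how many of its 3 PCI-e inputs you actually populate.
BOARD_W = 3400 / 9

for populated in (2, 3):
    per_connector = BOARD_W / populated
    print(f"{populated} connectors populated -> {per_connector:.0f}W each "
          f"({per_connector / 75:.1f}x the 75W spec figure)")
```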