all 16 comments

[–]VA_Network_NerdModerator | Infrastructure Architect 29 points30 points  (9 children)

You want to buy all 1200mm cabinets and not 1070mm cabinets.
This PDF explains why:


Some asshole will probably say something like "But we don't use Cisco UCS Blades..."
That point is not relevant. The UCS enclosures are just one representative example.

1070mm cabinets are too shallow to support SOME modern devices without blocking access to PDU outlets & cable management space. This makes them an invalid investment.

Pound on this point over and over again:

IT equipment is refreshed every 5 to 10 years. But these cabinets should last 3 to 5 times that.
These cabinets will live for 30+ years of equipment advancements before you replace them.

1200mm depth is your minimum requirement.

600mm width vs 750mm width is a valid argument.

SOME cabinets, especially anything that will be cable-dense should be deployed using 750mm cabinets.
The wider cabinets will always make cable management easier, will always help with air-flow for side-breathers, and will always provide more room for 4+ PDUs per cabinet.

But you don't always need those capabilities in every cabinet. For most cabinets, 2 x L6-30 PDUs is enough.
If you make everything a 750mm cabinet, you might lose an entire cabinet per row.
Horizontal floor space is finite, so make the best possible use of vertical space.
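To put numbers on "lose an entire cabinet per row" (a quick sketch; the 12m row length is a made-up figure, not from this thread):

```python
# Row-width arithmetic: five 600 mm cabinets occupy the same floor length
# as four 750 mm cabinets, so an all-750 row gives up one cabinet
# for every five 600 mm positions.
assert 5 * 600 == 4 * 750 == 3000  # both spans are exactly 3 m

# Hypothetical row length -- plug in your own room dimension:
row_mm = 12_000
print(row_mm // 600)  # 20 cabinets at 600 mm wide
print(row_mm // 750)  # 16 cabinets at 750 mm wide
```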

Buy the tallest cabinets you can safely fit in your equipment room without violating fire code. (48U if you can).

[–]sryan2k1 9 points10 points  (0 children)

Oh god this, so much this. The facility we are in always allows us to install our own deep racks; it's so nice to actually fit zero-U PDUs without having to smash them up against the rails or have them not fit.

We can occasionally also get the wide racks, which, combined with the extra depth, make you never want a normal-sized cabinet again.

Or to put it in the words of Step Brothers: "So much room for activities!"

[–]KantLockeMeInex-Cisco Geek 3 points4 points  (3 children)

And when you get the tall cabinets, buy stepladders. Not everyone is 6'4"

[–]Strahd414 2 points3 points  (1 child)

We almost moved from Equinix to Raging Wire and I was drooling over the opportunity to mount >50U racks. There were some really tall racks in some cages there. Nothing like seeing a fully populated UCS Chassis racked seven or eight feet in the air.

[–]KantLockeMeInex-Cisco Geek 4 points5 points  (0 children)

For me, I don't think I'll ever be able to get the density to actually need >42U. Most of the equipment I'm using draws around 725W per RU, so I run out of power before space.
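Rough math on why power runs out first (a sketch only; the 208 V feed and the 80% continuous-load derating are my assumptions, not stated in the comment):

```python
# Back-of-envelope: power runs out long before rack units do.
# Assumptions (not from the thread): 208 V feed, 80% continuous derating.
watts_per_ru = 725
rack_units = 42

pdu_watts = 208 * 30 * 0.8   # one L6-30 circuit: ~4992 W usable
cab_watts = 2 * pdu_watts    # two PDUs per cabinet: ~9984 W

usable_ru = int(cab_watts // watts_per_ru)
print(usable_ru)             # ~13 of the 42 RU before the circuits are full
```

At ~725 W/RU, even a pair of 30 A circuits caps out around a third of the cabinet, which is the "out of power before space" point.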

My nephew just started working for a municipal electric company that has a new Raging Wire datacenter being built in its territory, and his first project is to upgrade the transmission lines to provide capacity to feed the DC. He said that when the datacenter is fully occupied, they expect it alone to consume 50% more power than the entire city needed to generate before the datacenter was built.

[–]mhnet360 1 point2 points  (0 children)

I’m 6’2 and even 42U is a pain to work with up top at times. We have some 45U open frame for networking equipment. I can’t imagine 48U.

[–]Sunstealer73 0 points1 point  (0 children)

The new HPE Synergy frames barely fit in the 1070mm too.

[–]wordsarelouderDataCenter/SAN/Storage -1 points0 points  (2 children)

Upvote because we had to order new racks for one row in our data center. We only needed a few cabinets, but you can't have 19 cabs with 5 of them sticking out further than the rest; it's a real eyesore and everyone runs into it. (We upgraded slowly, though.)

[–]sryan2k1 0 points1 point  (1 child)

Our facility allows it. shrug

[–]wordsarelouderDataCenter/SAN/Storage 0 points1 point  (0 children)


[–]zanfar 4 points5 points  (1 child)

The APC NetShelter brochures have pretty good diagrams with dimensions and examples of large network hardware. I would poke around there.

[–]JudgeTredCCNP[S] 0 points1 point  (0 children)

Thank you, I'll give it a shot

[–]Show_Me_Your_Packets 1 point2 points  (0 children)

Another proponent of always 1200 deep. Plenty of stuff even in the network world gets reasonably deep (5585X, the newest F5 series is super deep, among others).

Wider is certainly great when possible for cable management. Do ALL the cable management: wire managers, fingers, front-to-back, brush panels. You'll always be happier in the end for at least trying to keep people from making things spaghetti.

I’m in the midst of a greenfield with 20+ Vertiv 48U 1200x800 cabinets, and they’re awesome. Combined with the modular PDUs (BRMs), it’s great being able to pick and choose which racks need what when you’re building really dense stuff.

[–]Poulito 1 point2 points  (0 children)

As others have stated, deep and wide for the win

One difference I have seen between server and network racks is that some network racks have special baffles for side-to-side cooling of certain chassis switches. Also, tapped vs. punched holes in the rails: network racks typically have round, tapped holes so the screws can go right in when mounting big, heavy switches, while server racks have square-punched holes to which you attach tool-less rails or ‘destroy your thumbnails’ cage nuts.

This is not to say there aren’t plenty of square-punched racks in use for network gear; it’s just that when I hear a LV contractor talk about the differences, this is what comes up. Also, for telecom rooms/IDFs, I sometimes see shallow ‘network’ cabinets. You’re typically not mounting a full-depth fabric interconnect in an IDF. But then you have to screw around with short-depth UPSs, and that sucks.

[–]parkprimus 0 points1 point  (0 children)

Three words: Tower of Cool

[–]packetthriller 0 points1 point  (0 children)

Piggybacking on /u/VA_Network_Nerd: 1200 deep or deeper. My end-of-row racks are always wider to accommodate chassis agg switches and all of the cable management necessary to keep things clean. The server racks can be less wide, but still just as deep.

Keeping the depth consistent makes it easier to do in-row cooling for a pod design.