
What is inside a data centre?

Data centre infrastructure
People who have never seen a data centre wonder what is inside it: what equipment is used, and how it looks from the inside. So let us explore the internal infrastructure of a data centre.

Here is a typical data centre infrastructure diagram covering almost all components. We have not shown the protection and surveillance (CCTV) equipment used to secure a data centre; we will cover these in upcoming posts. Some big data centre operators (like Google and Facebook) follow other architectures, based on the location of the data centre and the surrounding environment.

Here we have shown a data centre where the floor for equipment is raised and the cold air from the PAC (Precision Air Conditioner) is fed from below the raised floor, which requires pressure from the fans inside the PAC. As cold air (blue arrows) is denser than hot air, this force is needed to push the cold air up towards the fronts of the racks holding the IT equipment. All IT devices (servers, network switches etc.) have internal cooling fans that pull cold air in from the front of the device; the hot air (red arrows), after cooling the components inside, comes out from the back. The hot air then returns to the PAC, where it is cooled again and circulated back to the rack fronts through the raised floor. There are other designs where cold air is dropped from the top through a duct and hot air is returned through another duct. This method reduces the power the PAC fans consume in pushing cold air up through the raised floor: since cold air is dense, it falls easily from the duct with minimal effort.
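To put rough numbers on this airflow loop, here is a minimal sketch of how much air a PAC must move for a given IT heat load. The formula (heat load divided by air density, specific heat and the front-to-back temperature rise) is standard; the 10 kW load and 12 °C rise below are assumed example figures, not from this post.

```python
# Sketch: airflow a PAC must push through the raised floor for a given
# IT heat load. Assumes dry air near sea level (density ~1.2 kg/m^3,
# specific heat ~1005 J/(kg*K)); all figures are illustrative.

def required_airflow_m3_per_s(heat_load_w: float, delta_t_c: float,
                              air_density: float = 1.2,
                              specific_heat: float = 1005.0) -> float:
    """Volumetric airflow so the cold-aisle-to-hot-aisle temperature
    rise across the IT equipment stays at delta_t_c."""
    return heat_load_w / (air_density * specific_heat * delta_t_c)

# Example: 10 kW of IT load with a 12 degree C rise front-to-back.
flow = required_airflow_m3_per_s(10_000, 12)
print(f"{flow:.2f} m^3/s (about {flow * 2118.88:.0f} CFM)")
```

Halving the allowed temperature rise doubles the required airflow, which is why containment (discussed below) matters: less mixing means the PAC can work with a larger rise and move less air.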

Let us discuss each component of this type of data centre infrastructure.

Almost all the equipment and cables used inside the data centre are fire retardant. All cables and plastic items are of LHLS (Low Halogen Low Smoke) grade, meaning they release less smoke and less halogen gas when burnt.

Real Floor: This is the normal concrete floor of the building on which the data centre is built. The floor must be strong enough to hold the entire equipment load of the data centre. As a best practice, data centres are not built on the ground floor or underground, where there is a threat of flooding. Similarly, the building should be earthquake resistant if the data centre is located in a seismic zone. These precautions protect the data centre from natural calamities like floods and earthquakes.
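As a rough sanity check on floor strength, the averaged equipment load can be compared with the floor's rated capacity. This is a hypothetical sketch; the rack weight, rack count, hall area and rating below are assumed figures, and a real structural check would also consider concentrated point loads under rack feet.

```python
def average_floor_load_ok(rack_count: int, rack_weight_kg: float,
                          room_area_m2: float, rating_kg_per_m2: float) -> bool:
    """True if the averaged equipment load stays within the floor rating.
    Real designs also check concentrated (point) loads under rack feet."""
    return (rack_count * rack_weight_kg) / room_area_m2 <= rating_kg_per_m2

# Example: 40 racks of ~900 kg each in a 200 m^2 hall rated 1200 kg/m^2.
print(average_floor_load_ok(40, 900, 200, 1200))  # average 180 kg/m^2
```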

Raised Floor: A frame of metal studs is erected on the real concrete floor, creating a platform on which heavy tiles (normally 600 mm × 600 mm) are laid in an array to form a false raised floor where server racks can be placed. The area between the racks is filled with ventilated tiles that allow cold air to enter from below and cool the IT devices in the racks. The metal studs are made of good-quality steel coated with another metal (normally zinc) to protect against corrosion, as data centres are designed to work for decades without changes. The height of the raised floor may vary from 1 foot to even 3-4 feet. Large bundles of copper cables and optical fibre cables run below this raised floor, interconnecting the racks to form the network. A higher raised floor makes it easier for data centre engineers to work below it when a cable needs to be replaced or new cables laid for expansion. On the other hand, it also creates extra space that must itself be cooled before the cold air is pushed up above the raised floor into the IT devices.

Racks: Big metal enclosures placed in rows, which hold the IT devices we listed briefly in the previous post. They come in different heights measured in 'U', the basic unit of rack height (1U = 1.75 inches, or 44.45 mm), just as the metre is the unit of length. Racks come in different heights such as 9U, 19U and 42U; the maximum commonly available height is 42U, used in data centres, while smaller ones serve small server rooms in small offices. Racks are placed in rows with the fronts of two rows facing each other, so that the cold air supplied between them serves the racks in both rows. This area between the fronts of two rack rows is called the "Cold Aisle". The rear side of both rows, where hot air comes out from the working IT devices, is called the "Hot Aisle". Rack width is standard, i.e. 19 inches, as all IT equipment manufacturers make devices of standard width to fit in racks. Rack depth comes in 1000 mm or 1200 mm.
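The 'U' arithmetic above is simple enough to sketch. This assumes the standard 1U = 44.45 mm figure; the example device heights are hypothetical.

```python
U_MM = 44.45  # 1U = 1.75 inches = 44.45 mm

def rack_height_mm(u_count: int) -> float:
    """Mounting height of a rack, from its height in U."""
    return u_count * U_MM

def devices_fit(rack_u: int, device_heights_u: list) -> bool:
    """True if the listed devices (heights in U) fit in the rack."""
    return sum(device_heights_u) <= rack_u

print(f"{rack_height_mm(42):.1f} mm")     # mounting space in a 42U rack
print(devices_fit(42, [1, 2, 2, 4, 1]))   # 10U used of 42U
```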

Cold Aisle Containment: Some portion of the cold air supplied between the rack rows is used to cool the devices, but much of it escapes over the top and at the two ends of the row. This escaping air mixes with the hot air coming from the back, so the supplied cold air is wasted and the PAC has to work harder to actually cool the IT devices running in the racks. To avoid this, the top of the area between the rack rows is covered with transparent sheets, so that sufficient light is available for data centre engineers to work while cold air does not escape. Similarly, doors at the entry and exit ends of the cold aisle prevent air escaping there. The doors are fitted with automatic door closers, so that whenever someone enters or exits, the door closes by itself and cold air loss is minimised.

Server Rack
IT Equipment: The core of the data centre, i.e. the IT equipment that processes all the data, is hosted in the racks placed on the raised floor. Almost all IT equipment is mounted in the rack from the front side, as shown here; this two-dimensional drawing is called a "Rack Elevation Diagram". Since these devices work 24 × 7, all year round, cold air is supplied from the front of the rack to take away the heat generated by the components inside. LIUs (Line Interface Units), patch panels, network switches, routers, firewalls, servers, storage, and ATS (Automatic Transfer Switch) or STS (Static Transfer Switch) units are all mounted from the front. Wherever there are blank spaces, a plastic cover known as a blanking panel is placed. This prevents cold air passing through the blank spaces to the back, and hot air from the back coming to the front, mixing with the cold air and raising the rack-front temperature. So, along with cold aisle containment, blanking panels are a must to avoid hot and cold air mixing; in fact, any leakage point should be closed appropriately. This reduces the load on the PACs in maintaining the required temperature.
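A rack elevation is really just a map of which U slots are occupied, so finding where blanking panels are needed can be sketched as below. The rack size, device names and positions are hypothetical examples, not from any real elevation diagram.

```python
def blanking_panel_slots(rack_u: int, devices: dict) -> list:
    """devices maps a device name to (bottom U position, height in U).
    Returns the empty U slots that should get blanking panels so hot
    air cannot leak from the back of the rack to the front."""
    occupied = set()
    for start, height in devices.values():
        occupied.update(range(start, start + height))
    return [u for u in range(1, rack_u + 1) if u not in occupied]

# Hypothetical 9U rack: a 1U switch at U1, a 2U server at U3, a patch panel at U9.
print(blanking_panel_slots(9, {"switch": (1, 1), "server": (3, 2), "patch": (9, 1)}))
```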

PDUs (Power Distribution Units) are mounted at the back of the rack and power all the equipment mounted in the front. They are high-quality power strips with multiple sockets for connecting IT equipment, mounted vertically at the back of the rack as shown here. Horizontal PDUs are also available, but they occupy U-space in the rack that could otherwise hold an IT device; based on requirements, either type can be used. Some PDUs are simple power strips with multiple sockets, while others have monitoring capability to measure voltage, current load and power load. They can also connect temperature, humidity, universal (temperature + humidity), rack-door-status and other sensors. They have a network port and can be configured with an IP address so that all these sensor parameters can be monitored remotely over the network. Two PDUs are used in each rack, taking power from two different UPS sources. Almost all IT equipment in the rack has two separate SMPS (Switched Mode Power Supply) units; if one SMPS or its power feed fails, the device operates normally on the other. So these two SMPS units are connected to the two different PDUs for power redundancy. There are high-end devices with four or six SMPS units as well; in such cases, half of the SMPS units are connected to one PDU and the rest to the second.
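The dual-PDU arrangement has a sizing consequence worth making explicit: if one feed fails, the surviving PDU must carry the whole rack load alone. A minimal sketch, with assumed PDU ratings and loads (the 7.4 kW figure corresponds to a 32 A single-phase feed at 230 V):

```python
def feeds_redundant(load_a_w: float, load_b_w: float,
                    pdu_capacity_w: float) -> bool:
    """With dual-corded equipment on PDU-A and PDU-B, either PDU must be
    able to carry the combined rack load alone if the other feed fails."""
    return (load_a_w + load_b_w) <= pdu_capacity_w

# Example: each PDU rated 7.4 kW (32 A at 230 V), each carrying 3 kW now.
print(feeds_redundant(3000, 3000, 7400))  # 6 kW fits on one PDU
print(feeds_redundant(4000, 4000, 7400))  # 8 kW would overload a lone PDU
```

This is why well-run facilities keep each PDU below half its rating in normal operation, even though each looks lightly loaded.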

Precision Air Conditioner (PAC): As mentioned earlier and in the last topic, we need to maintain 21-23 degrees Celsius in front of the racks and a relative humidity (RH) of 45% to 55% inside the data centre. Lower humidity means drier air, which tends to build static charge on equipment, metal surfaces and even non-metallic surfaces. These charges are dangerous for the delicate IT equipment running in the data centre and can damage its components, leading to downtime. Higher humidity means high moisture content, which, once inside the devices, damages PCBs and components through corrosion. To achieve this balance, PACs are used to maintain both the desired temperature and the humidity level of the air in the data centre. These PACs also have built-in heaters, used during dehumidification: when the relative humidity rises above 55% (or the set limit), the cooling coil condenses excess moisture out of the air, and the heater reheats the over-cooled air back to the set temperature. Similarly, they have humidifiers, either built in or connected externally. The humidifiers are connected directly to water tanks and use heaters (or ultrasonic transducers) to add water vapour to the air when the relative humidity falls below 45% (or the set limit). Some data centres maintain RH between 40% and 60%, based on their requirements and power savings.
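The humidity logic described above boils down to simple set-point control. This is a deliberately crude sketch of the decision (real PACs use dead bands, PID loops and staged capacity); the set points are the 45%/55% figures from the text.

```python
def pac_humidity_action(rh_percent: float,
                        low: float = 45.0, high: float = 55.0) -> str:
    """Crude bang-bang control: dehumidify above the high set point,
    humidify below the low set point, otherwise stay idle."""
    if rh_percent > high:
        return "dehumidify"   # condense moisture on the coil, then reheat
    if rh_percent < low:
        return "humidify"     # add water vapour from the humidifier
    return "idle"

print(pac_humidity_action(60))  # above the band
print(pac_humidity_action(50))  # inside the band
print(pac_humidity_action(40))  # below the band
```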

Uninterrupted Power Supply (UPS): Since the data centre must run continuously without fail, we need 100% power availability, which is provided by the UPS (uninterrupted power supply). There are two UPS sources feeding two different rails of the power path called BBT (Bus Bar Trunking). These are nothing but thick copper bars that carry the large currents in the data centre to feed each rack PDU; thick bars are used rather than thick power cables. It is easy to connect additional rack PDUs on the go just by snapping tap-off boxes onto these bus bars. Tap-off boxes are sockets connected to the bus bar that feed each PDU in the racks. Each UPS setup has a separate rechargeable battery bank kept in a separate room, to avoid corrosion of UPS components from the fumes given off by the battery chemicals during the charge-discharge process.
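How long those battery banks can actually hold the load is a back-of-envelope calculation. The bank size, load and derating factors below are assumed illustrative values, not figures from this post.

```python
def ups_runtime_minutes(battery_wh: float, load_w: float,
                        usable_fraction: float = 0.8,
                        inverter_efficiency: float = 0.92) -> float:
    """Rough battery runtime: usable stored energy divided by the load,
    derated for allowed depth of discharge and inverter losses."""
    return battery_wh * usable_fraction * inverter_efficiency / load_w * 60

# Example: a 40 kWh battery bank behind a 20 kW IT load.
print(f"{ups_runtime_minutes(40_000, 20_000):.1f} minutes")
```

In practice, vendor battery curves are non-linear at high discharge rates, so real runtimes at heavy load come out shorter than this linear estimate.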

Air Purifier Unit (APU): In some locations, the air may contain harmful gases from factories or heavy traffic near the data centre, such as sulphur dioxide (SO2) and hydrogen sulphide (H2S). When these gases enter the devices mixed with the cold air, over a period of time they react with the metallic parts of small electronic components and the copper layers of the PCBs. Slowly these metallic parts corrode, and the component or the PCB itself fails, bringing the device down. To prevent this, Air Purifier Units (APUs) are used in the data centre to filter out these harmful gases using chemical filters inside the APU. These filters absorb the harmful gases, making the air suitable to be circulated in the data centre. The filters are replaced at regular intervals, depending on the concentration of such gases in the air at that particular location.

AC Grid Supply: To maintain the backup and charge the UPS batteries, mains AC supply is required for the data centre. It feeds the online UPS discussed above. While the mains AC grid supply is available, the UPS batteries are kept charged so they can take over when the grid supply fails.

Diesel Generators: As a further power backup behind the UPS, a set of diesel generators is used; they kick in, usually within 30 seconds, when the main grid supply fails. This ensures 100% availability of the data centre to outside users.
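That roughly 30-second start gap is exactly what the UPS batteries must bridge, so it sets a floor on battery sizing. A minimal sketch with assumed values: the 100 kW load is illustrative, and the 2× safety margin (covering slow starts or a failed first start attempt) is an assumption, not a stated design rule.

```python
def bridge_energy_wh(load_w: float, start_gap_s: float = 30.0,
                     safety_margin: float = 2.0) -> float:
    """Energy the UPS batteries must supply while the generators start,
    padded for slow starts or a failed first start attempt."""
    return load_w * start_gap_s / 3600.0 * safety_margin

# Example: a 100 kW facility load bridging a 30 s generator start.
print(f"{bridge_energy_wh(100_000):.0f} Wh")
```

The number is tiny compared with the runtime sizing above, which is why generator start time rarely drives battery capacity; ride-through of longer grid events does.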

Isolation Transformer: Since various harmonics can be present in the mains AC grid supply, isolation transformers are used to cut off such harmonics, which may interfere with the delicate IT equipment and lead to malfunction. The location of the isolation transformer varies: it can be before the UPS, after the UPS, or at both locations, depending on requirements and the level of harmonics present in the AC supply of that area.
