What is new in our datacenter, what are we working on and what are we planning? Everything revolves around cooling and preparing the new 10 Gbps backbone network.
For a long time, nothing much was happening in the datacenter itself; we were mainly focused on our hosting services. Now we finally have some news.
Hardware
So far we have made do with 2 rack cabinets – the server room still looks rather empty and bare. However, those cabinets are now completely full, so during this month we will purchase about 4 more cabinets and a lot of new hardware from Fujitsu.
Power
We have recently completed the modifications to the electrical supply, so we have sufficient capacity for some time to come. With the current electrical load, the lightly loaded UPS can keep everything running for a few hours, and the generator carries a few days' worth of fuel.
In 8 months of operating our server room, there has been only one real power outage from EON, lasting 1 hour. Otherwise the generator has run only during the regular tests we do once a week, and as a precaution during our wiring modifications and various experiments.
Connectivity
In mid-April we finally upgraded the second optical route to 1 Gbps, so we now have fully redundant connectivity at that capacity.
Preparations are currently underway to upgrade one route to 10 Gbps. Telefónica O2 has already installed the necessary optical equipment in its exchange in our building, and the 10 Gbps line to Prague should be operational within a few days or weeks. We are getting ready on our side as well – we have selected the necessary routers and switches and will test them intensively in May. Of course, we will bring you more detailed information, reviews and our test results. The goal is that by June our backbone network will run entirely on 10 Gbps Ethernet.
In parallel, preparations are underway for a second 10 Gbps route, which will be provided by ČD-Telematika. However, this cannot be done without construction work and the installation of several kilometres of fibre optic cables.
Cooling
Most of the news revolves around cooling. Several air conditioning units have been sitting in the server room for half a year, but until now they were switched off. Until mid-April, direct freecooling – i.e. cooling by blowing cold outside air directly in and exhausting warm air out – was fully sufficient.
However, rising temperatures forced us to plug in one air conditioning unit. To do this, the outdoor units had to be placed behind our building in the “garden”. This is only a temporary location; the final solution is to put the outdoor units on the roof. There is still some construction work to be done, though – a special support structure must be built and installed, and soundproofing must be prepared.
For now, one connected air conditioning unit is quite sufficient. So far we have only used it during one warm weekend; otherwise we are still freecooling. The building has a lot of thermal mass, so it cools down overnight and we can get by on the stored cold during the day. This works as long as outside temperatures stay below roughly 18-20 degrees. Once it stays above 18 degrees for a longer period, we will run the air conditioning – but even then only for a few hours a day; at night, freecooling is sufficient again. A simplified sketch of this decision logic is below.
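For illustration, here is a minimal sketch of that decision in Python. The 18-degree threshold comes from the paragraph above; the function name, the day/night flag and the rest are our own simplification, not our actual control system.

```python
def cooling_mode(outside_temp_c: float, is_daytime: bool) -> str:
    """Pick a cooling mode for the server room (simplified illustration)."""
    FREECOOLING_LIMIT_C = 18.0  # rough threshold mentioned above

    if not is_daytime or outside_temp_c < FREECOOLING_LIMIT_C:
        # Cool outside air (or the cold stored in the building overnight)
        # is enough to keep the room temperature down.
        return "freecooling"
    # Warm daytime hours above the threshold: run the air conditioning.
    return "air_conditioning"


if __name__ == "__main__":
    print(cooling_mode(14.0, is_daytime=True))   # freecooling
    print(cooling_mode(22.0, is_daytime=True))   # air_conditioning
    print(cooling_mode(22.0, is_daytime=False))  # freecooling at night
```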
And why are we so “economical” with the air conditioning? It is a big electricity guzzler. When running at half power it draws about 3× 40 A, with the compressors consuming the most – that is what makes cooling so expensive. So the less the air conditioning runs, the better. A rough calculation is below.
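As a back-of-the-envelope estimate of what that current means in kilowatts – assuming roughly 230 V per phase and ignoring power factor, which are our assumptions rather than measured figures:

```python
# Rough estimate of the air conditioning draw at half power.
# Assumes a ~230 V phase voltage and unity power factor – a
# simplification for illustration, not a measurement.

PHASES = 3
CURRENT_PER_PHASE_A = 40.0   # per phase, as quoted above
PHASE_VOLTAGE_V = 230.0      # assumed nominal value

power_kw = PHASES * CURRENT_PER_PHASE_A * PHASE_VOLTAGE_V / 1000.0
print(f"Approximate draw at half power: {power_kw:.1f} kW")  # ~27.6 kW
```

Even at half power that is on the order of tens of kilowatts, which is why every hour of freecooling matters.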
For this reason, we are also working hard to expand the freecooling. If everything goes according to plan, we will not need the air conditioning at all during the colder months of the year, even with the server room completely filled with equipment.