One of the racks in Hong Kong is migrating

24 January 2022

As the saying goes, a move is worse than a fire. So it happened that at some point we needed to migrate one of our racks in Hong Kong. We won't go into the details of why it happened, because all sorts of things happen in business (even with good suppliers), but for us it's not a problem, it's a challenge!

Where to start? With the search for a provider, of course. Equinix is particularly strict about electricity: you can't quietly draw an extra 1-2 kVA and hope nobody notices, because any excess consumption shows up as a hefty bill. The task was to find a reliable supplier with a reasonable price for a rack with two 5 kVA feeds. The search wasn't easy and negotiations with different suppliers lasted about a month, but in the end we received the best offer from one of our upstream providers, PCCW, for which we are very grateful.
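
For a sense of what those two 5 kVA feeds mean in practice, here is a rough power-budget sketch. The device list and wattages are purely illustrative, not the actual contents of our rack; the point is that with redundant A+B feeds the whole load has to fit on a single feed, or losing one input takes the rack down.

```python
# Rough power-budget check for a rack fed by two redundant 5 kVA inputs (A+B).
# All device counts and wattages are made-up examples, not our real inventory,
# and W is treated as roughly equal to VA (power factor close to 1).

FEED_CAPACITY_VA = 5000   # each feed is rated at 5 kVA
HEADROOM = 0.2            # don't plan to load a feed beyond 80%

devices_w = {
    "OOB router + console server": 150,
    "core switches (x2)": 2 * 300,
    "scrubbing servers (x8)": 8 * 350,
}

planned_load = sum(devices_w.values())
usable_per_feed = FEED_CAPACITY_VA * (1 - HEADROOM)

print(f"Planned load: {planned_load} W")
print(f"Usable per feed with {HEADROOM:.0%} headroom: {usable_per_feed:.0f} VA")

# With A+B power, either feed alone must carry the whole rack,
# otherwise losing one input means losing the rack.
if planned_load <= usable_per_feed:
    print("OK: the rack survives the loss of one feed.")
else:
    print("Over budget: shed load or order bigger feeds.")
```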

[Photos: new Hong Kong rack]

Where do we go from here? Of course, we have to redo all the cross connects. Since our part in this process is purely financial, our involvement was limited to filling out the forms and paying the bills. Not very exciting, but important paperwork. Equinix and its specialists do the rest. They aren't particularly quick about it, so we had to wait, and meanwhile not forget to pay for an empty rack.

Now that the destination of the migration is set, the most interesting stage begins.

Preparation. Every migration begins with planning. We spent a lot of time in meetings so as not to miss a single detail, because the whole rack had to be moved in no more than 8 hours. The neighboring departments sat down together several times to work out the role of each participant. We put together a minute-by-minute moving schedule, the steps for powering equipment off and on, an inspection plan, and a list of everyone's contacts. I'd like to note that the guys did a great job: everyone understood that the success of the operation depended on them and gave it their all.

Beginning. Since the rack is in production, the first thing we did was notify the customers who might be affected by the maintenance and switch them over to neighboring devices. The process went without any problems. In parallel, the on-site technicians began labeling all the cables and devices.

[Photos: new Hong Kong rack]

This was an important step that would allow us to put everything back together quickly later. We managed to do it in 4 hours. At this point the preparation was over, and everyone was waiting in anticipation.

Migration. It starts at 9 a.m. Hong Kong time: we begin shutting down the equipment, the filtering capacity in Asia is temporarily lower than normal, and we redistribute the attack traffic to other geolocations. Everything stays stable, so we can go on. An hour later the technicians arrive at the data center; the equipment is already powered down and waiting to be disassembled and loaded onto a trolley. We disassemble it and move it to the other rack.
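
For the curious, redistributing traffic away from a location before maintenance essentially means withdrawing its BGP announcements so the anycast prefixes are served from the other PoPs. The sketch below illustrates the idea; it assumes the BIRD routing daemon and made-up session names, since the post doesn't describe our actual routing stack, so treat it purely as an illustration.

```python
import subprocess

# Hypothetical sketch: gracefully drain a PoP before maintenance by disabling
# its upstream BGP sessions, so anycast prefixes stop being announced from here
# and traffic shifts to the remaining locations. Assumes BIRD; the session
# names are invented for the example.
UPSTREAM_SESSIONS = ["upstream1", "upstream2"]

def birdc(*args: str) -> str:
    # birdc is BIRD's control CLI; "disable <protocol>" shuts a session down.
    result = subprocess.run(["birdc", *args], capture_output=True, text=True, check=True)
    return result.stdout

def drain_pop() -> None:
    for session in UPSTREAM_SESSIONS:
        print(birdc("disable", session))
    # Confirm the sessions are actually down before touching the hardware.
    print(birdc("show", "protocols"))

if __name__ == "__main__":
    drain_pop()
```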

[Photos: new Hong Kong rack]

We've arrived and we are assembling the rack.

[Photos: new Hong Kong rack]

Since we have prepared well, from this point the most boring part of the move begins for the back office, and at the same time the most fun part for the technicians. The equipment is mounted in the new rack according to the plan, with all cables and patch cords still disconnected. First we connect the out-of-band network and console management, then the main network. Everything is neatly routed, fastened, and tied down. Of course, the technicians don't forget about the lunch break. As the saying goes, you can postpone a war, but never a lunch.

The magic is done and the rack is ready. The process took about 6.5 hours, with two technicians involved.

[Photos: new Hong Kong rack]

The last stage: we power up the equipment according to the plan. First the out-of-band router and console server, then the main network devices, then the servers (yes, we scrub traffic on x86 servers with our own software). Surprisingly, the check-up showed that only 2 connections didn't come up. That's not much; we expected worse. The problem was solved by replacing the faulty patch cords, and we started preparing the network to resume traffic. This process took a few more hours, as it had to be done carefully and with minimal impact on customer traffic.
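
The kind of check-up mentioned above can be as simple as walking the devices in their power-on order and reporting anything that doesn't answer. The sketch below is a simplified illustration with invented hostnames, not our actual tooling.

```python
import subprocess

# Hypothetical post-migration check: ping devices in the same order they are
# powered up and report anything that stays silent. Hostnames are examples only.
POWER_ON_ORDER = [
    "oob-router.hkg",
    "console-server.hkg",
    "core-sw1.hkg",
    "core-sw2.hkg",
    "scrub-01.hkg",
    "scrub-02.hkg",
]

def is_up(host: str) -> bool:
    # One ICMP echo with a 2-second timeout (Linux iputils ping flags).
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

failed = [host for host in POWER_ON_ORDER if not is_up(host)]
if failed:
    print("Did not come up:", ", ".join(failed))  # time to go check the patch cords
else:
    print("All devices reachable, ready to resume traffic.")
```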

As a result, we have a new rack with more kVA for less money, and our team gained great experience. Thanks to everyone involved, and thanks to our customers for their understanding and support.
