
Mastering Data Migration: How to Minimise Downtime and Keep Your Engineers Productive

TL;DR: Data migration can be managed with minimal disruption if you plan thoroughly, use tools like Git to keep engineers productive during downtime, and schedule migrations strategically, such as over weekends. Dry runs and clear communication are essential to ensure everyone knows their tasks and to identify issues early. Invest time in preparation and involve experts when needed to keep your team working smoothly throughout the process.

Resource: https://nkdagility.com/resources/tzmbqdEULUY

When it comes to data migration, one of the most pressing concerns for organisations is often the potential for downtime. However, I’ve learned through experience that this concern can sometimes be overstated, especially in environments with a large number of software engineers. Let me share some insights from my journey that might help you navigate this complex process.

Understanding the Reality of Downtime

In a typical scenario, if you have a collection of 5,000 software engineers, the idea of them being unable to work due to downtime can sound alarming. But let’s unpack that a bit. Even if TFS or Azure DevOps goes offline, your engineers can still continue their work. Sure, collaboration becomes a bit trickier, but it’s not impossible.

  • Git as a Lifeline: If your team is using Git as their source control system, they can still share code and work on their tasks offline. This is reminiscent of how Linux was developed: without a central source control system, developers communicated and shared patches via email. Git supports this kind of decentralised collaboration beautifully (there is a short sketch of the workflow after this list).

  • Work Items and Context: While engineers can continue coding, they won’t have access to work items during the downtime. This means they need to be well-informed about their tasks beforehand. Clear communication and planning are essential here, and taking a snapshot of active work items before the outage helps (sketched below).
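
As a minimal sketch of what that decentralised flow can look like, engineers can exchange work with git bundle or git format-patch while the central server is offline. The branch and file names below are hypothetical:

    # Package recent commits into a single file that can be shared over
    # email, a file share, or chat while the server is down.
    git bundle create my-feature.bundle main..my-feature

    # A colleague verifies the bundle, then fetches the branch from it.
    git bundle verify my-feature.bundle
    git fetch my-feature.bundle my-feature:review/my-feature

    # Alternatively, share commits as mail-friendly patch files, the way
    # early Linux development worked (the patch file name is illustrative).
    git format-patch main..my-feature
    git am 0001-example-change.patch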

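One way to soften that work-item gap, sketched here with the Azure DevOps CLI, is to snapshot each team’s active work items before the outage begins. This assumes the azure-devops extension is installed; the organisation, project, query, and output file name are placeholders:

    # Capture active work items before taking the server offline so
    # engineers have an offline reference for what they are working on.
    az boards query \
      --org https://dev.azure.com/your-organisation \
      --project YourProject \
      --wiql "SELECT [System.Id], [System.Title], [System.AssignedTo], [System.State] FROM WorkItems WHERE [System.State] <> 'Closed'" \
      --output table > work-items-snapshot.txt
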
Planning for Minimal Downtime

From my experience, if you plan your migration effectively, downtime can be kept to an absolute minimum. I recall one of the largest migrations I managed, which involved moving a staggering 2.5 terabytes of data from an on-premises setup in Europe to Azure DevOps. Here’s how we achieved minimal disruption:

  • Strategic Timing: We scheduled the final migration to take place over a weekend. We took the system offline at 5:00 p.m. on Friday and were back online by Sunday morning. This allowed engineers to validate the migration over the weekend, ensuring everything was in order.

  • Thorough Preparation: This migration wasn’t a spur-of-the-moment decision. It took us 3 to 6 months of meticulous planning, dry runs, and validations to ensure everything would go smoothly. Dry runs are crucial: they allow you to test the process and identify potential pitfalls before the actual migration (see the sketch after this list).

  • Support from Experts: Having Microsoft on hand during the migration was invaluable. Their expertise helped us navigate any issues that arose, ensuring a seamless transition.
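
To make the dry-run point concrete: moving 2.5 terabytes inside a Friday-evening-to-Sunday-morning window of roughly 40 hours means sustaining on the order of 2.5 TB ÷ 144,000 s ≈ 17 MB/s across export, upload, and import, which is exactly the kind of number a practice run either confirms or disproves. As a hedged sketch, Microsoft’s Data Migration Tool for Azure DevOps offers a validate command along these lines (the collection URL is a placeholder, and the exact switches vary by tool version):

    # Check the collection against the import requirements well before
    # migration day, and again as part of every dry run.
    Migrator validate /collection:http://tfs.example.local:8080/tfs/DefaultCollection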

The Outcome

In the end, we managed to migrate a collection that supported around 5,500 software engineers with minimal downtime. While there was some unavoidable downtime for engineers in different regions, we did everything possible to minimise the impact. The key takeaway here is that with careful planning and the right tools, you can significantly reduce downtime during data migrations.

Final Thoughts

Data migration doesn’t have to be a daunting task. By leveraging tools like Git and investing time in thorough planning and dry runs, you can ensure that your team remains productive, even in the face of potential downtime. Remember, it’s all about preparation and communication. If you approach your migration with these principles in mind, you’ll find that the process can be much smoother than you might expect.

So, the next time you’re faced with a data migration, take a deep breath, plan meticulously, and trust in the capabilities of your team and the tools at your disposal. You might just surprise yourself with how well it goes!

Transcript

When you’re doing a migration of data, downtime is often a great concern for organisations. If you’ve got 5,000 software engineers in your collection, you don’t want it to be down for an extended period of time with your engineers not able to work. I’m going to put “not able to work” in air quotes, because it’s not really true for that period of time.

So, there are a couple of things you need to understand in this context. Even if TFS or Azure DevOps is down, like offline, your engineers can still work. It’s just more difficult for them to collaborate. So, if they’re using Git as the source control system, which is the primary source control system in Azure DevOps and TFS, then they’re able to share code in a way that works within the context of the tool, even when they’re offline. That’s how Linux was created: there was no central source control system, and developers sent patches to each other over email.

Right? So, Git fully supports that. Obviously, they wouldn’t have access to the work items, so they would need to know what they’re working on for the time that it’s down. But I will point out that if you plan it right, downtime can be absolutely minimal. The largest migration we have done was 2.5 terabytes: a collection that we moved from on-prem in Europe up to Azure DevOps.

We took the system offline because it needs to be offline for the final part of the migration, when you’re actually moving up to the cloud. We took it offline at 5:00 p.m. on Friday, and we were back online Sunday morning. The engineers came in over the weekend to validate that things looked good. They did their cursory checks: everything’s in the right place, that’s working, this is working, that kind of thing.

And they were back up and running Monday morning. So, we’re probably one of the very few out there that have collections that big. But if you plan it right, it can be done; to do a 2.5 terabyte system in that time, we took probably 3 to 6 months of planning and dry runs and validations, making sure everything in the data was good.

Dry runs are really important for that. Sorry, that’s practice runs, right? You need to get the data out of the data centre in a timely manner, get it up to the cloud in a timely manner, and get it processed, because that can be quite failure-prone. You want to have done a dry run so that you know that’s going to work.

And perhaps have Microsoft on hand to help out if there are any issues. So, that was minimal downtime. I think that was about 5,500 software engineers, and they really had almost no downtime: offline at 5:00 p.m. on Friday, back up Sunday morning. But they were a global company, so there was some downtime for some engineers in some regions, right? That’s just unavoidable. But we minimised it as much as possible.
