Data migration projects have traditionally adhered to the standard waterfall model of project delivery. While some practitioners may argue that their approach is flexible and pragmatic, most project structures still follow the all-too-familiar stages:
- Step 1: Gather Requirements
- Step 2: Design
- Step 3: Implementation
- Step 4: Test & Verify
- Step 5: Maintenance
Waterfall Data Migration – What’s the Problem?
On short projects with relatively small volumes of data, low complexity, good data quality, a straightforward path to the target system and a generous window of opportunity in which to migrate the data, a Waterfall approach will serve you fine.
The problem is that, back in the real world, many modern data migrations involve large volumes of data, complex models and architectures, historical data quality issues and a complicated data landscape that may include multiple source and target systems, cloud data stores and MDM hubs to contend with.
Oh, and the business runs 24/7 too, so no downtime is permitted.
Another challenge of modern times is that we simply cannot afford to wait around for 18 months migrating data. The business needs to adapt faster than this.
For example, several years ago I was involved in a utilities migration where the initial focus was on moving data using equipment as the primary unit of migration, because it made good sense technically. Then the regulator stepped in and suddenly the plan was to move all the data one building at a time. Overnight, the entire migration strategy changed.
So the long, drawn-out Waterfall model leaves you exposed to risk because, by the time you're implementing and testing, the requirements have already changed, often dramatically.
Incremental Data Migration Tactics
There’s been a lot of talk about Agile methods in recent years but you don’t need to be an Agile purist to apply some basic common sense tactics to improve your data migration strategy. Essentially you just want to implement a more incremental approach with shorter release cycles. Fortunately, this fits in well with the modern demands of complex data migration initiatives.
The first tactic I always recommend is the Pre-Migration Impact Assessment (PMIA).
The PMIA is effectively a rapid migration simulation carried out in a few short weeks or even days. The idea is to confirm whether the migration is feasible, what the costs are likely to be, what technology will be required, which skills will be critical and where the big pitfalls are going to emerge. It won't find everything but, in my experience, it has saved many a project from failure further down the line.
Perhaps the biggest benefit of the PMIA is the creation of a much richer set of requirements than would typically be available at the start of a project.
Use the Experian Pandora Free Data Profiler to kick-start your assessment and gain greater insight into the make-up of your data.
You can approach this in different ways, but I find it works well to focus the early increments on identifying the core subject model areas for migration, then incrementally carry out cycles of ever-increasing data profiling, data discovery and data quality assessment, combined with business and technical workshops.
The workshops identify the subject areas for migration, while the profiling, discovery and assessment validate assumptions and provide a richer view of how data is actually being used (or abused) on the ground.
Using an incremental approach, you can discount inaccurate or incomplete datasets straight out of the gate, so that design work isn't based on them and alternative sources are hunted down instead.
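As a minimal sketch of what an early profiling cycle might look like (the field names, sample records and 90% completeness threshold are all illustrative assumptions, not part of any specific tool), you could flag datasets too incomplete to base design work on:

```python
# Illustrative early-cycle profiling sketch. Flags fields whose
# completeness falls below a hypothetical threshold so design work
# isn't based on them and alternative sources can be hunted down.

def completeness(records, fields):
    """Return the fraction of non-empty values per field."""
    total = len(records)
    if total == 0:
        return {f: 0.0 for f in fields}
    return {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / total
        for f in fields
    }

def discount_dataset(records, fields, threshold=0.9):
    """Return the fields failing the completeness threshold; a
    non-empty result suggests discounting this dataset early."""
    scores = completeness(records, fields)
    return {f: s for f, s in scores.items() if s < threshold}

# Hypothetical sample: a utilities-style customer extract.
customers = [
    {"id": 1, "postcode": "AB1 2CD", "meter_id": "M-100"},
    {"id": 2, "postcode": "", "meter_id": "M-101"},
    {"id": 3, "postcode": None, "meter_id": "M-102"},
]
failing = discount_dataset(customers, ["id", "postcode", "meter_id"])
# "postcode" is only one-third populated, so it fails the threshold
```

Running this across each candidate source in an early increment gives you a cheap, repeatable signal for which datasets to rule out before any mapping design begins.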
As classic Big Bang style migrations become less frequent, incremental strategies come into their own. Businesses may decide to migrate their data by specific units of migration that are far more palatable and easier to manage than shunting the whole enchilada across in one go.
Migrating business functions and staff incrementally is often more complex. For example, if the business chooses to move by region or customer account type, how will calls be routed after the migration?
These are all challenges that need to be worked out, but they are surmountable. In many ways they are easier to manage because it's mostly a matter of business logistics, and you're still spreading the load far more than before.
Also, if you hit a snag it’s far easier to roll back a single region or customer segment instead of the entire business operation as you would in a traditional Big Bang style migration.
Where Else Can Incremental Migration Tactics be Applied?
Data Quality Management is an excellent place for incremental tactics during data migration.
For example, you don't need to profile hundreds of attributes against 100+ rules if the core mapping relationships between two disparate systems are flawed. By taking an incremental approach and assessing the most critical rules first, in a descending hierarchy, one cycle at a time, you can save considerable time and reduce risk.
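The descending-hierarchy idea can be sketched as follows (a hedged illustration: the tier names, rules and sample records are invented for the example, not drawn from any particular methodology or product):

```python
# Sketch of incremental rule assessment: evaluate rule tiers in
# descending criticality and stop the cycle as soon as a tier fails,
# rather than profiling every attribute against every rule up front.

def assess_in_tiers(records, tiers):
    """tiers: list of (tier_name, [(rule_name, predicate)]) pairs,
    most critical first. Returns per-tier failures, stopping after
    the first tier that has any."""
    results = []
    for tier_name, rules in tiers:
        failures = {}
        for rule_name, predicate in rules:
            failed = [r for r in records if not predicate(r)]
            if failed:
                failures[rule_name] = failed
        results.append((tier_name, failures))
        if failures:  # critical tier failed: fix this before going deeper
            break
    return results

# Hypothetical data and rules: core mapping first, attribute quality second.
accounts = [
    {"account_id": "A1", "target_key": "T9", "email": "x@example.com"},
    {"account_id": "A2", "target_key": None, "email": ""},
]
tiers = [
    ("core mapping", [("has_target_key", lambda r: bool(r["target_key"]))]),
    ("attribute quality", [("has_email", lambda r: bool(r["email"]))]),
]
outcome = assess_in_tiers(accounts, tiers)
# stops after "core mapping" because A2 has no target system key,
# so the attribute-quality rules are never evaluated this cycle
```

Each cycle then becomes short and focused: fix the failing tier, re-run, and only descend to the next tier once the more critical rules pass.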
Johny Morris has cited a possible issue with data-quality-dependent sprints and incremental tactics: the business often sits on big data cleanup challenges for over 10 weeks at a time, rendering short 30-day cycles unrealistic in some cases. I understand that concern, but with better technical support and appropriate prioritisation I don't see why shorter release cycles can't become a reality.
Even if you’re just creating a cycle that gives the business something to simulate in their testing, it’s far better than waiting for the entire migration to complete.
So where can you go from here? As a data migration leader or influencer, I think it’s well within your remit to enquire about agile or incremental tactics. I’ve outlined some potential areas in this article but perhaps you’re aware of others? Share your views in the comments below.
This is a guest post from Dylan Jones as part of the Insight Series where we invite leading experts to share their views on Data Quality, Data Migration and Data Governance. Dylan is the founder and editor of Data Quality Pro and Data Migration Pro.