Your Data Migration Questions Answered: Ask the Expert Q&A Panel

The next installment in Galen's complimentary educational webcast series

Data migrations are complicated. If the right questions aren’t asked at the start of the project, it may be executed in a way that does not meet the needs of all stakeholders – resulting in time-consuming and often expensive rework or project cost overruns. We want to simplify the migration process by giving you the chance to drive the content of our upcoming webcast. Submit any of your data migration questions and we’ll answer them live!

Expert Panel

Our moderators have extensive experience in data migration efforts, having supported 250+ projects and migrated 40MM+ patient records and 7K+ providers. The conversion process itself is often quite quick, but it has to be well thought out in order to succeed. Your project team and/or vendor must take the time to ask many questions at the beginning and think carefully about the impact of each answer. From changes in workflows and items to consider when migrating data to knowing what to migrate versus archive, our experts have the answers to your questions.

Missed the webcast? We’ve got you covered!

We’re sharing a recording of the webcast, along with the slides used. Perpetually Learn & Share: it’s one of our 5 Main Things. And be sure to check out our roster of future webcasts here.

Webcast Recording

Slides

Q&A

Q: In the example where you said you were not able to migrate tasks, what system limited it? Was it your process? Or was it one of the systems?

A: From our experience, when we’ve tried to migrate tasks, the system we’ve been loading the data into doesn’t support the consumption of open tasks in a discrete manner. So it hasn’t been our process that’s led to the inability, but rather the receiving system’s capacity to take that data in.

Q: Is there ever a system that is too small to migrate?

A: Yes, we do have general guidelines in place based on our experience – how many patients, and what volume, tend to make the cost and effort worth the actual outcome. Maybe it’s more cost effective to manually abstract those patients instead of putting the effort and cost toward a data migration. Generally, we say the cutoff is under 10,000 patients, though other factors come into play. If you are looking at a system that might only have a couple thousand patients, it might be more cost effective to put a manual abstraction process in place.

Q: How does your approach differ for different EHR vendors?

A: Our overall approach to the migration remains pretty much the same regardless of the legacy EHR or the EHR that we’re migrating to. As long as the two projects have similar scopes, we’re going to have a similar project plan with many of the same steps, including a fair amount of testing and validation. One of the key differences on the technical side is the manner in which we may need to obtain the legacy data. Our preferred method is direct access to the legacy EHR, which allows us to extract the data directly into our Galen ETL platform. However, this isn’t always attainable for numerous reasons, so we may need to get creative and use other methods for getting that data – but ultimately the data needs to be loaded into Galen ETL.
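For illustration only, here’s a minimal sketch of that preferred direct-access extraction step, assuming the legacy EHR exposes a relational database. The table, columns, and sample row are hypothetical – not any vendor’s actual schema – and the Galen ETL staging side isn’t shown.

```python
import sqlite3

# Stand-in for direct access to a legacy EHR database; the schema and
# sample row below are hypothetical, not any vendor's actual layout.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (patient_id, last_name, first_name, dob, active)")
conn.execute("INSERT INTO patients VALUES ('12345', 'DOE', 'JANE', '1980-01-01', 1)")

# Extraction step: pull the discrete fields that would be staged in the
# ETL platform for mapping, transformation, and validation.
for row in conn.execute(
    "SELECT patient_id, last_name, first_name, dob FROM patients WHERE active = 1"
):
    print(row)
```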

A recent example of needing to get creative was a system that would only supply us with CCDs. The client tried loading the CCDs directly into the new EHR, without any manipulation, but they weren’t happy with the results: many of the critical components of the CCD were missing, or specific data elements within it were incomplete. Our solution was to enhance our Galen ETL platform to parse the legacy CCD into discrete data elements, then fill in the blanks, where data was missing, using mapping logic. We then rebuilt a stronger and more complete CCD using a standard feature of our conversion platform, and delivered and loaded that CCD into the new EHR – with results the customer was much happier with.
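To make the parse-and-fill-in-the-blanks idea concrete, here’s a toy sketch. The CCD fragment is drastically simplified – real CCDs are namespaced HL7 CDA XML – and the name-to-RxNorm mapping table is a made-up stand-in for real mapping logic.

```python
import xml.etree.ElementTree as ET

# Drastically simplified CCD-like fragment; real CDA documents are far
# richer and use the HL7 v3 namespace throughout.
ccd_xml = """
<section>
  <medication>
    <name>Lisinopril 10 MG Oral Tablet</name>
    <code codeSystem="RxNorm" codeValue=""/>
  </medication>
</section>
"""

# Hypothetical mapping table used to fill in the blanks when the legacy
# CCD omits a standard code for a known medication name.
NAME_TO_RXNORM = {"Lisinopril 10 MG Oral Tablet": "314076"}

root = ET.fromstring(ccd_xml)
for med in root.iter("medication"):
    name = med.findtext("name")
    code = med.find("code")
    if code is not None and not code.get("codeValue"):
        # Apply mapping logic where the discrete element was missing.
        code.set("codeValue", NAME_TO_RXNORM.get(name, ""))

# The enriched tree can then be serialized back into a more complete CCD.
print(ET.tostring(root, encoding="unicode"))
```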

We took a similar approach recently on another project where the legacy system could only provide data in HL7 format. We enhanced Galen ETL to parse the HL7 into discrete data, applied mappings, built a CCD from the HL7 data, and then loaded that CCD into the new EHR.
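Here’s an equally simplified sketch of pulling discrete data out of an HL7 v2 message by hand. The message below is fabricated for illustration; real feeds carry many more segments and fields.

```python
# Fabricated HL7 v2 fragment: a patient identity (PID) segment and an
# allergy (AL1) segment.
hl7_message = (
    "PID|1||12345^^^MRN||DOE^JANE||19800101|F\r"
    "AL1|1||^PENICILLIN|SV|RASH\r"
)

# HL7 v2 is pipe-delimited: split each segment into fields, then split
# fields into components on the caret.
for segment in filter(None, hl7_message.split("\r")):
    fields = segment.split("|")
    if fields[0] == "PID":
        mrn = fields[3].split("^")[0]        # PID-3: patient identifier list
        last, first = fields[5].split("^")[:2]  # PID-5: patient name
        print("Patient:", mrn, last, first)
    elif fields[0] == "AL1":
        allergen = fields[3].split("^")[1]   # AL1-3: allergen description
        print("Allergy:", allergen, "reaction:", fields[5])
```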

Q: You mentioned CCD and HL7 a lot during your presentation, but what if these formats aren’t supported by the new EHR?  Are there other options that you’ve worked with?

A: We discussed earlier that a lot of our migrations are to Epic, which uses CCD and HL7 interfaces as the primary method for migrating data. However, we’ve also worked with other vendors that use different methods for importing data. Some have required flat files with discrete data; some have required working directly with the back-end database and making calls against it to import data; some have even required us to call the specific APIs that they’ve made available for importing data. We’ve built support for all of these into our Galen ETL platform, and we continue to build support for new methods as we work with new systems or are faced with new challenges. So, if you have a system that you’re migrating to that may not fit what we’ve discussed today, it’s still worthwhile to reach out and talk about the formats the data needs to be transformed into to work with your new EHR.
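As a rough illustration of how the target system changes only the load step, here’s a sketch with two hypothetical loaders – one writing a flat file of discrete data, one posting records to an import API. The endpoint, payload shape, and field names are assumptions for the example, not any vendor’s real interface.

```python
import csv
import json
import urllib.request

def load_flat_file(records, path):
    # Some target EHRs accept delimited flat files of discrete data.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["mrn", "last", "first", "dob"])
        writer.writeheader()
        writer.writerows(records)

def load_via_api(records, url):
    # Others expose an import API; this endpoint and payload shape are
    # entirely hypothetical.
    for record in records:
        req = urllib.request.Request(
            url,
            data=json.dumps(record).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

records = [{"mrn": "12345", "last": "DOE", "first": "JANE", "dob": "1980-01-01"}]
load_flat_file(records, "patients.csv")
# load_via_api(records, "https://example.invalid/import")  # hypothetical endpoint
```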

Q: Do you use automated validation tools or sampling techniques in your validation step?

A: With regard to automated validation tools, we do have some, and they are dependent on the EHR that we are migrating into. For example, for data migrations to EHRs where we are directly inserting data into the database (such as Allscripts TouchWorks), we have processes in our Galen ETL platform that compare the legacy data to the data that was migrated into TouchWorks, identify any differences, and report them out to the project team. We also have some tools that, regardless of the EHR, we can use to validate or even automate the mappings. The validation and mapping tools are stronger when standard code sets (SNOMED, RxNorm, LOINC, etc.) exist, but we can also validate or auto-map based on name alone.
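In spirit, that legacy-versus-migrated comparison looks something like the sketch below. The record shapes and counts are made up for illustration; the actual tooling lives inside Galen ETL rather than in a loop like this.

```python
# Made-up extracts keyed by patient identifier: per-patient counts from the
# legacy system versus what actually landed in the new EHR.
legacy = {"12345": {"allergies": 3, "medications": 7}}
migrated = {"12345": {"allergies": 3, "medications": 6}}

# Compare field by field and report discrepancies for the project team.
for mrn, expected in legacy.items():
    actual = migrated.get(mrn, {})
    for field, value in expected.items():
        if actual.get(field) != value:
            print(f"MISMATCH {mrn}.{field}: legacy={value}, migrated={actual.get(field)}")
```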

As for sampling techniques, each of our data migrations uses a hybrid approach in selecting patients for validation. For small-scale validation, which is our first round, we usually select 25 patients because of something unique on their charts. It may be that during the data evaluation process we determined that they had the largest medication list or the largest note. Those 25 patients are not usually a random selection; they are chosen because of something specific that we are looking to validate.
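Here’s a minimal sketch of that targeted selection, using made-up per-patient statistics. The point is that the charts are chosen deliberately for their edge cases – largest medication list, largest note – rather than at random.

```python
# Made-up per-patient statistics gathered during data evaluation.
patients = [
    {"mrn": "101", "med_count": 42, "note_chars": 1200},
    {"mrn": "102", "med_count": 3, "note_chars": 98000},
    {"mrn": "103", "med_count": 17, "note_chars": 5600},
]

# Pick edge cases on purpose: the charts most likely to stress the migration.
targets = {
    "largest_med_list": max(patients, key=lambda p: p["med_count"]),
    "largest_note": max(patients, key=lambda p: p["note_chars"]),
}
for reason, patient in targets.items():
    print(reason, "->", patient["mrn"])
```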

Then for our large- and full-scale rounds of validation, we use statistics to determine our testing population, and those patients are selected at random. Generally, we test around 300 patients per database. Based on our experience, we run into very few new issues by the time we hit full-scale validation, which is our last round.
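The random draw itself is conceptually as simple as the sketch below, with a fabricated patient population standing in for a real database.

```python
import random

# Fabricated patient population for one database.
population = [f"MRN{n:06d}" for n in range(250_000)]

# Large- and full-scale rounds draw a random sample; roughly 300 patients
# per database is the ballpark mentioned above.
sample = random.sample(population, k=300)
print(sample[:5])
```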

To learn how we can help with your data migration, visit our website, or contact us below:

