Monday, March 18, 2024

Do Your Data Systems Speak the Same Language?


Multiple data systems are common in an enterprise due to organic growth, M&A or divestiture activities, system upgrades and other key business changes. This dynamic can create significant business information issues: systems often communicate poorly or not at all, producing a risk of rampant errors, and even systems that do communicate can generate misinformation when data is used improperly.

This scenario increases an enterprise’s overall data and business risk, as interoperability of data is critical for mitigating costly data quality issues, meeting business agility needs across an organization, and ensuring a true enterprise information delivery model.

Many companies seek to achieve “one source of the truth” for their data, regardless of how many systems are at play, but this concept can be a broken model in large organizations. A single source of the truth is very difficult to apply in practice because not all data can be, or should be, syndicated uniformly across all systems.

Why? Raw data may look a certain way in one source system, but only certain elements, or specific ways of presenting them, may be relevant to another department’s system. The contextual use of data that is not native to a system is a major factor. As a best practice, there should be a reliable path for data, and the information derived from it, to be quickly and meaningfully cross-referenced across multiple systems, especially in large global organizations. IT leaders and data stewards must come together to prioritize strategies and develop information governance programs that address this often-pervasive issue.

Evolving from One Source of the Truth to One View of the Truth

To move beyond the “one source of the truth” concept, enterprises must establish how multiple source systems each provide value to the organization and support the overall business model. They must also embrace the idea that systems, data and business information are never actually in a “one source of the truth” state, due to the constant churn of system upgrades, M&A, divestitures, and organic growth.

From there, data teams can take a hard look at the architecture of data interoperability within the enterprise, establish priority elements as master data objects (such as customers, materials, vendors and employees), and accept that these objects are syndicated throughout different systems but used for different purposes within them. With these concepts in mind, establishing central cross-references that support “one view of the truth” at a central location, such as an Enterprise Data Warehouse, database repository or other data mart, becomes an overarching principle of enterprise information management.
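To make the cross-reference idea concrete, here is a minimal sketch in Python. The system names (erp, crm, billing) and the master IDs are hypothetical placeholders, not a prescribed implementation; in practice such a mapping would live in the Enterprise Data Warehouse or a master data management tool.

```python
# Minimal sketch of a central master-data cross-reference, as might live in
# an Enterprise Data Warehouse. System names and IDs are hypothetical.

# Each master record maps one enterprise-wide key to the local keys used by
# the individual source systems that hold a version of that object.
CUSTOMER_XREF = {
    "MDM-CUST-0001": {"erp": "C-10045", "crm": "ACCT-88731", "billing": "B-2291"},
    "MDM-CUST-0002": {"erp": "C-10046", "crm": "ACCT-88902", "billing": "B-2292"},
}

def local_id(master: str, system: str) -> str | None:
    """Resolve an enterprise master ID to a given system's local ID."""
    return CUSTOMER_XREF.get(master, {}).get(system)

def master_id(system: str, local: str) -> str | None:
    """Reverse lookup: find the master ID behind a system-local ID."""
    for mid, local_ids in CUSTOMER_XREF.items():
        if local_ids.get(system) == local:
            return mid
    return None

if __name__ == "__main__":
    # Cross-reference a CRM account to its ERP counterpart without assuming
    # the two systems store the customer identically.
    mid = master_id("crm", "ACCT-88731")
    print(mid, "->", local_id(mid, "erp"))  # MDM-CUST-0001 -> C-10045
```

The value of the registry is that each system keeps its own keys and its own view of the object, while the central mapping is the only place that asserts they refer to the same real-world entity.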

This model can co-exist with all of the data and metadata changes that will occur within an ever-shifting enterprise architecture, and it allows many of the business benefits of “one source of the truth” to be realized long before that state is actually (if ever) achieved. Taking the time to establish this type of foundational model for organizing corporate data helps companies maintain clear and accurate data on a global scale over extended periods of time and change, because there is an underlying understanding, appreciation and basic architecture of how data and information are used across systems and, as a result, across the enterprise.

Examining How to Share Data

Different types of systems help business units or departments collect and present data, and the types of information they hold are usually driven by the specific business processes that provide value.

For example, an HR system helps track personnel data, payroll, compensation and other HR-specific elements that are relevant to HR functions. However, if HR system data is then shared with the enterprise’s corporate email system to create uniform email signatures across the company, a data translation issue arises: some personnel may have different titles for internal versus external use, and it may be unclear which one should appear in email.

In this type of case, data interoperability can be accomplished successfully, but only if translation and applicability issues are identified ahead of time, and if governance and operational processes are put in place that support maintenance of not only the technical interface but also the business use of the data.
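To illustrate, here is a minimal sketch of how the title-translation rule from the HR example might be codified. The field names and the "prefer the external title" policy are assumptions made for illustration; the actual rule would be decided by the governance program, not the interface.

```python
# A minimal sketch of a data-translation rule for the HR-to-email example
# above. Field names and the "external title wins" policy are hypothetical;
# the real rule would come from the information governance program.

def signature_title(hr_record: dict) -> str:
    """Pick the title to publish in a corporate email signature.

    Email signatures are externally visible, so an agreed governance rule
    might prefer the external title and fall back to the internal one.
    """
    return hr_record.get("external_title") or hr_record["internal_title"]

employee = {
    "name": "A. Example",
    "internal_title": "SW Eng III",          # grade-level title used by HR
    "external_title": "Software Engineer",   # customer-facing title
}

print(f'{employee["name"]} | {signature_title(employee)}')
# A. Example | Software Engineer
```

The point is not the three lines of logic but where they live: encoding the rule once, at the interface, keeps every downstream consumer of the HR feed from inventing its own answer.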

Understanding these essentials and setting expectations for applying data to varying business needs is imperative to ensuring correct interoperability of data across enterprise systems.

Considering Predictive Analytics

Companies often rely on data and reporting to drive predictive analytics, but what if the core data isn’t being used properly when the analytics are applied? This is a common problem among enterprises, and one that stems from modeling common data elements together across inappropriate systems.

For example, companies may analyze materials, vendor or customer records but decide to pull this data from multiple systems, assuming that all data concerning these elements is uniform and applicable for all departments. This can create major issues if, for instance, incomplete or outdated customer data is pulled from a materials distribution center system that gathers customer data only as an ancillary reference for its materials accountability function.

Also, different departments benchmark their data against varying KPIs, so there will inevitably be different versions of similar data sets. As part of its overall information governance program, the organization must establish which systems are to be used for specific types of source information so predictive analytics can be executed successfully and accurately.
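As an illustration, the sketch below shows a simple "system of record" registry that an analytics pipeline could check before pulling data. The domain and system names are hypothetical; the point is that the governance program, not the individual analytics job, decides which system is authoritative for each data domain.

```python
# Minimal sketch of a "system of record" registry, the kind of governance
# artifact the text describes. Domain and system names are hypothetical.

SYSTEM_OF_RECORD = {
    "customer": "crm",   # full, actively maintained customer master
    "material": "erp",   # materials are mastered in the ERP
    "vendor":   "erp",
}

# Systems that hold a domain only as an ancillary reference and must not
# feed analytics for it (e.g., the distribution center's customer stubs).
NOT_AUTHORITATIVE = {("customer", "distribution_center")}

def validate_source(domain: str, system: str) -> None:
    """Fail fast if an analytics job reads a domain from the wrong system."""
    if (domain, system) in NOT_AUTHORITATIVE or system != SYSTEM_OF_RECORD.get(domain):
        raise ValueError(
            f"{system!r} is not the system of record for {domain!r}; "
            f"use {SYSTEM_OF_RECORD.get(domain)!r}"
        )

validate_source("customer", "crm")  # OK: CRM is authoritative for customers
try:
    validate_source("customer", "distribution_center")
except ValueError as err:
    print(err)  # rejected: ancillary customer data, not fit for analytics
```

A check like this turns the governance decision into something enforceable, so a model is never silently trained on the distribution center’s incomplete customer stubs.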

Investing in Data Interoperability and Quality

Every industry has different operational models with varying costs versus ROI, but investing in data quality standards across multiple systems is a best practice in all industries. With billions of dollars at stake, and given the cyclical nature of data, teams that proactively invest in qualified resources and technologies to set up and maintain sustainable data interoperability, along with the ongoing information governance that supports it, will be well positioned not only to operate at a much higher level of efficiency and value but also to significantly reduce operational costs and gain competitive advantage. They will also see the benefits of proper customer modeling and long-term data health, as information will continue to be shared across systems but with the understanding that not all information can be consumed as received.

In the end, enterprises that take steps to establish source systems and then build interoperability governance practices into their business processes are setting themselves up to leverage quality data for higher margins and overall business health. The more organizations understand how data can be applied meaningfully across departments, the higher value they will receive from overall data initiatives. Making the investment in and commitment to data quality—and sustaining it through proper interfacing practices—is critical for enterprises to get the most out of their data management programs.

About the author:

John Danos, Vice President – Delivery Services Strategy, BackOffice Associates

