How is ETL used to build trusted data pipelines?

Enterprises have always relied on some sort of data to aid decision-making. Way back when, businesses would collect data from face-to-face customer surveys or telephone marketing initiatives. Whatever the method, this information helped them understand their customers better. Today, with the rise of online transactions, social media, and the Internet of Things (sensors that collect and broadcast data), the volume and variety of data available to businesses have increased dramatically.

To ensure quality and utility, this data needs to be organized and analyzed in a timely way, thus necessitating procedures like ETL. In this article, we zoom in on this integral aspect of data pipelines.

What is ETL? 

ETL stands for Extract, Transform, Load. This three-part process forms an essential part of data management. After all, merely capturing data isn’t the key to customer insights – it needs to be organized and analyzed first. This is where ETL plays an integral role: the process transfers data from a source file to a target database, commonly the organization’s data warehouse. Here’s how each stage of the process works:

  • Extract: data is taken, or extracted, from a source file or database.
  • Transform: the data is converted, or transformed, into a format that is optimized for analysis.
  • Load: the transformed data is stored, or loaded, into a destination data warehouse or system.

The source systems can hold any kind of structured data or semi-structured file format, such as XML or JSON; even Excel files can serve as ETL sources. The target system is typically a relational database, such as a data warehouse, or an analytical tool. Equally, it can be a transactional system such as an ERP or CRM, where data is integrated, migrated, synchronized or reconciled into a master record for customers or products. The sketch below walks through all three stages with a toy example.
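
To make the three stages concrete, here is a minimal sketch in Python. It assumes a hypothetical orders.csv source file with order_id, order_date (in DD/MM/YYYY format) and product_code columns, and uses a local SQLite database as a stand-in for the target data warehouse:

    import csv
    import sqlite3
    from datetime import datetime

    def extract(path):
        # Extract: read raw rows from the source file.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(rows):
        # Transform: standardize dates to ISO 8601 and clean up product codes.
        for row in rows:
            row["order_date"] = datetime.strptime(row["order_date"], "%d/%m/%Y").date().isoformat()
            row["product_code"] = row["product_code"].strip().upper()
        return rows

    def load(rows, db_path="warehouse.db"):
        # Load: write the cleaned rows into the target table.
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT, order_date TEXT, product_code TEXT)")
        con.executemany("INSERT INTO orders VALUES (:order_id, :order_date, :product_code)", rows)
        con.commit()
        con.close()

    load(transform(extract("orders.csv")))

In production, the same pattern runs inside an ETL tool or scheduler rather than a standalone script, but the division of labor between the three functions stays the same.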

Let’s further illustrate this process with a couple of examples. A telecom operator might need to transfer all kinds of customer usage data (calls, messages, internet) into its billing and customer service applications; here, ETL is an essential part of company operations. Equally, a company might want to optimize revenue by transferring website log files into a database to cross-reference transactions with visitors’ mouse clicks and other granular customer data.
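
As a toy illustration of that second example, the snippet below joins a handful of hypothetical click records to a transaction by visitor ID, so each purchase carries the click path that led to it (all field names and values are invented for illustration):

    from collections import defaultdict

    # Hypothetical records; in practice these come from the extract step.
    clicks = [
        {"visitor_id": "v42", "page": "/pricing", "ts": "2024-05-01T10:02:11"},
        {"visitor_id": "v42", "page": "/checkout", "ts": "2024-05-01T10:05:40"},
        {"visitor_id": "v07", "page": "/home", "ts": "2024-05-01T11:14:03"},
    ]
    transactions = [{"visitor_id": "v42", "amount": 99.0, "ts": "2024-05-01T10:06:02"}]

    # Group the click path per visitor, then attach it to each transaction.
    paths = defaultdict(list)
    for c in sorted(clicks, key=lambda c: c["ts"]):
        paths[c["visitor_id"]].append(c["page"])

    enriched = [{**t, "click_path": paths[t["visitor_id"]]} for t in transactions]
    # enriched[0]["click_path"] == ["/pricing", "/checkout"]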

Whatever the variables or objectives involved, the ETL process captures, converts and uploads the data so it can be searched, analyzed and stored in a logical fashion. ETL is generally implemented and executed using a commercial ETL tool; equally, it might be hand-coded with scripts, or built as a hybrid of the two.

Typically, multiple ETL processes are orchestrated simultaneously to construct the end-to-end data pipelines needed to integrate all enterprise data. These processes usually run in incremental batches, while some time-sensitive data can be processed closer to real time. For example, if a company needs to monitor machinery or a power grid, it needs to know about critical issues as soon as they happen.
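
One common way to implement incremental batches is a "watermark": each run extracts only the rows changed since the previous run and remembers how far it got. The sketch below assumes a hypothetical source_events table with an updated_at timestamp column:

    import sqlite3

    def incremental_extract(con, last_watermark):
        # Pull only the rows that changed since the previous batch.
        rows = con.execute(
            "SELECT id, payload, updated_at FROM source_events "
            "WHERE updated_at > ? ORDER BY updated_at",
            (last_watermark,),
        ).fetchall()
        # Persist the new watermark so the next scheduled run resumes
        # exactly where this one stopped.
        new_watermark = rows[-1][2] if rows else last_watermark
        return rows, new_watermark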

Purposes and best practice

An enterprise ETL process is typically designed by the IT team based on well-defined business objectives. Today, it’s advisable for larger organizations to initiate a formal cross-functional data governance committee that identifies these objectives and guidelines. Such rules and requirements, a few of which are sketched in code after this list, can include:

  • Filtering unwanted records.
  • Aggregating rows.
  • Standardizing some attributes, such as date formats and product codes.
  • Unifying customer and supplier naming conventions.
  • Validating that addresses are complete, with correct postal codes.
  • And any other transformation or calculation that improves the data quality and value. 
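
Here is a rough sketch of how a few of these rules might look in code; the field names, date format and five-digit postal-code rule are all assumptions for illustration:

    import re
    from datetime import datetime

    def apply_rules(rows):
        clean = []
        for row in rows:
            # Filter unwanted records, e.g. test entries.
            if row.get("status") == "TEST":
                continue
            # Standardize the date format to ISO 8601.
            row["date"] = datetime.strptime(row["date"], "%m/%d/%Y").date().isoformat()
            # Unify supplier naming conventions.
            row["supplier"] = row["supplier"].strip().title()
            # Validate postal codes; flag failures rather than dropping them.
            if not re.fullmatch(r"\d{5}", row.get("postal_code", "")):
                row["quality_flag"] = "invalid_postal_code"
            clean.append(row)
        return clean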

A well-designed ETL process should execute fast enough to deliver quality data in a timely fashion. It should also maintain data lineage, recording each data element’s origin and the list of ETL steps it passed through from end to end.
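
A minimal way to keep that lineage, assuming records are plain dictionaries (real ETL tools record this in a metadata catalog instead), is to stamp every record as each step touches it:

    import uuid
    from datetime import datetime, timezone

    def with_lineage(record, source, step):
        # Attach metadata recording where the record came from and
        # which ETL step last touched it.
        lineage = record.setdefault("_lineage", {"id": str(uuid.uuid4()), "steps": []})
        lineage["steps"].append({
            "source": source,
            "step": step,
            "processed_at": datetime.now(timezone.utc).isoformat(),
        })
        return record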

Why ETL is integral to quality data

Data isn’t an end in itself; far from being a business intelligence crystal ball, it needs to be properly organized, managed and delivered. Thus, ETL is a key computational process in any enterprise, big or small. The process facilitates the construction of reliable data pipelines that move and improve data across disparate systems and databases.

Moreover, the importance of ETL is growing as the variety and quantity of data increase and, concurrently, our dependence on that data grows too. As such, it is imperative that larger organizations implement a formal data management initiative to ensure their data pipelines are up to par. By drawing up a framework of best practice, companies can ensure their data is trusted by end users and meets current and future business requirements.

My mission is to help organizations succeed in their digital transformation journeys by ensuring they have the required data management capabilities, smart applications and efficient data-driven business processes.
