3. Techniques

3.10 Extract, Transform, and Load (ETL)

Guide to Business Data Analytics

3.10.1 Purpose

Extract, Transform, and Load (ETL) refers to a data sourcing and curation approach used in application development, business intelligence, and analytics. ETL is often used to make data from a variety of sources available in a target repository that acts as a single, standardized source of data for further analysis and insight.
3.10.2 Description

The core principles of ETL are:

  • Identify high-quality data, free of data integrity and consistency errors, from a variety of sources (for example, data warehouses, data marts, data lakes, transactional databases, external websites, or mobile data).
  • Transition this data to a target repository, creating a “single source of truth,” while applying data governance guidelines and verifying business and data rules.
  • Provide easy access to this repository of data to support analytics-related work.
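The following is a minimal sketch of these principles using Python and pandas. The sources, file, table, and column names (for example, sales.csv, crm.db, analytics.db) are hypothetical, and a real initiative would typically rely on dedicated ETL tooling rather than a short script.

```python
import sqlite3
import pandas as pd

# --- Extract: pull data from two hypothetical sources ---
sales = pd.read_csv("sales.csv")                       # flat-file export of sales data
with sqlite3.connect("crm.db") as crm:
    customers = pd.read_sql("SELECT * FROM customers", crm)

# --- Transform: reconcile formats and apply simple business and data rules ---
sales["order_date"] = pd.to_datetime(sales["order_date"])
sales["revenue"] = sales["price"] * sales["units"]     # derived value
assert (sales["units"] >= 0).all(), "data rule violated: negative units"
curated = sales.merge(customers, on="customer_id", how="left")

# --- Load: write the curated data to a single target repository ---
with sqlite3.connect("analytics.db") as target:
    curated.to_sql("curated_sales", target, if_exists="replace", index=False)
```

Whatever the tooling, the same three stages and governing principles apply; only the scale and the technology change.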
Big data and streaming data frequently follow different data architectures and definitions than those of smaller-scale business data analytics initiatives. This can create a perception that ETL mechanics are irrelevant. However, the governing principles of ETL, when applied in a business data analytics context, remain the same regardless of the tools used or the approaches followed.

When applied to significantly large or diverse data, such as big data, the underlying architecture and processes used to handle the data change. For example, when using big data approaches and tools, the order of the transform and load tasks may change or be delegated to the analytics tools or solutions. Many of the tools and methods used for ETL with traditional data sources rely on structured languages such as SQL to interact with relational databases or with structured, multi-dimensional data from data warehouses.

Various architectures, such as Lambda and Kappa, exist for handling unstructured and distributed data. While knowledge of these architectures and of distributed environments for data storage and processing is important, analysts should also understand their conceptual differences, advantages, and limitations. The finer points of big data technologies, concepts, and architectures fall outside the scope of this guide.

Most analytics solutions transition through a proof of concept stage, where traditional ETL processes are most relevant to analytics; advanced big data or streaming data implementations are more likely to be used when deploying advanced analytics solutions (for example, analytics model development versus analytics model deployment). Most often, analysts conceptualize and interpret analytical solutions at the proof of concept stage.
3.10.3 Elements

.1    Extract Data

Data extraction is the first stage in the ETL process; it enables the identification and aggregation of data across multiple heterogeneous or homogeneous sources.

Key steps include:

  1. Identify data sources and types: Analysts may suggest relevant data sources and the type of data required based on the business problem to be solved. Data classifications, metadata descriptions, and schemas are usually developed through a top-down approach, starting at coarse granularity and drilling down to finer granularity as the engagement progresses. For example, if the business problem involves tracking the success of a marketing campaign, the typical sources may be customer relationship management (CRM) data, sales data, and channel data. These can be drilled down to customer data, engagement data, in-store sales, online sales, marketing automation platform data, loyalty program data, web analytics, and so on.
  2. Create a universal classification of data: Different data sources follow different conventions and data dictionaries. When aggregating data from varied sources, the definitions, descriptions, and data formats need to be reconciled so that a uniform scheme for extraction can be used.
  3. Verify data integrity: ETL tools can automatically verify the integrity of extracted data during extraction through a universal schema that maps source data elements to extracted data elements (checking conformity of data sizes and formats, redundancy, and data transmission losses). However, many data integrity checks can also be performed manually by analysts, who verify minimum values, maximum values, identifiers, and valid and expected values, or who use data sampling (see the sketch following this list).
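As a hypothetical illustration of such manual checks, the sketch below verifies identifiers, minimum and maximum values, and valid values on a sampled extract using Python and pandas. The column names and expected ranges are assumptions, not prescriptions.

```python
import pandas as pd

def verify_integrity(df: pd.DataFrame) -> list:
    """Return a list of integrity issues found in an extracted data set.
    The column names and expected ranges below are hypothetical examples."""
    issues = []

    # Identifiers: should be present and unique
    if df["customer_id"].isna().any():
        issues.append("missing customer_id values")
    if df["customer_id"].duplicated().any():
        issues.append("duplicate customer_id values")

    # Minimum and maximum values
    if (df["units"] < 0).any():
        issues.append("units below the expected minimum of 0")
    if (df["age"] > 120).any():
        issues.append("age above the expected maximum of 120")

    # Valid and expected values
    if not df["channel"].isin(["in-store", "online"]).all():
        issues.append("unexpected values in channel")

    return issues

# Spot-check a random sample of the extract (or pass the full extract for complete checks)
extract = pd.read_csv("extract.csv")
print(verify_integrity(extract.sample(frac=0.1, random_state=0)))
```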
.2    Transform Data

Transform Data is the translation of extracted data into a usable and accurate format. It ensures the data follows sound business logic. Analysts may be involved in verifying or prescribing the following common transformation types:

  • Reconciling encoded values from the source to the target format. For example, date formats or lists of values.
  • Deriving calculated values. For example, sales = price x units.
  • Standardizing values of a variable. For example, rescaling.
  • Dropping redundant attributes of an entity.
  • Masking attributes. For example, social security number or credit card numbers.
  • Translating text data to word vectors, specifically for Natural Language Processing (NLP) applications.
  • Binning data. For example, age-to-age groups.
  • Splitting strings into different columns (for example, USIL12@# into Country|State|ID).
  • Imputing missing data.
  • Performing joins, merges, pivots, and roll-ups.
Analysts play an important role in verifying that the relevant business rules are not contradicted while transforming data.
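The sketch below illustrates several of these transformation types with Python and pandas. The column names, bin edges, masking format, and imputation choice are hypothetical examples; the appropriate rules come from the business and data rules of the initiative.

```python
import pandas as pd

df = pd.read_csv("extracted_customers.csv")   # hypothetical extract from the staging area

# Reconcile encoded values: parse source date strings into a single target format
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")

# Derive calculated values: sales = price x units
df["sales"] = df["price"] * df["units"]

# Standardize (rescale) a variable to the 0-1 range
df["income_scaled"] = (df["income"] - df["income"].min()) / (df["income"].max() - df["income"].min())

# Mask a sensitive attribute, keeping only the last four digits
df["card_number"] = "************" + df["card_number"].astype(str).str[-4:]

# Bin ages into age groups
df["age_group"] = pd.cut(df["age"], bins=[0, 18, 35, 55, 120],
                         labels=["<18", "18-34", "35-54", "55+"])

# Impute missing values with the median
df["income"] = df["income"].fillna(df["income"].median())

# Drop a redundant attribute
df = df.drop(columns=["legacy_code"])
```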

.3    Load Data

Load data involves transitioning the transformed data to a target repository (database, data warehouse, or data lake) that serves the business or analytics applications. In most cases, analytics applications interact directly with the target repository to receive and execute analytical models or to generate reports based on analysis of the data. Many target databases involve different internal controls; some data warehouses allow only an incremental load, which checks the delta between existing and incoming data and then adds only the changed data from the sources. This type of incremental load allows faster generation of business intelligence (BI) reports. On the other hand, a full extraction from sources may correspond to a complete replacement of the data on the target system, which better supports predictive and prescriptive analytics tasks.
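The contrast between a full load and an incremental (delta) load can be sketched as follows with Python, pandas, and SQLite. The staging file, table name, and key column are hypothetical, and the delta logic shown detects only new keys, not changed rows.

```python
import sqlite3
import pandas as pd

# Transformed data waiting in a hypothetical staging area
curated = pd.read_csv("staging/curated_sales.csv")

with sqlite3.connect("warehouse.db") as target:
    # Full load: replace the target table entirely
    # (better supports predictive and prescriptive analytics tasks)
    # curated.to_sql("sales_fact", target, if_exists="replace", index=False)

    # Incremental (delta) load: append only rows whose keys are not yet in the target
    # (allows faster refresh of BI reports)
    existing = pd.read_sql("SELECT order_id FROM sales_fact", target)
    delta = curated[~curated["order_id"].isin(existing["order_id"])]
    delta.to_sql("sales_fact", target, if_exists="append", index=False)
    print(f"Appended {len(delta)} new rows; {len(existing)} rows were already present.")
```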

The typical tasks in the load process involve:

  • Reviewing the target data format and load types in alignment with the business need.
  • Performing the data load from the staging area to the target. The staging area is a logical placeholder for data that allows easy transformation.
  • Generating audit trails of changed or replaced data.
  • Standardizing the workflows (for example, data pipelines), which enables repeatability of the data load process. For example, when generating quarterly sales reports, once the metrics computation and data transformation requirements are finalized, there would be refreshed data available each quarter to re-generate the report.
Many big data technologies utilize an extract, load, and then transform (ELT) sequence rather than ETL, so that transformation tasks can be done at a later stage. The data is first stored in a distributed file system, and data processing activities follow. This allows many big data analytics applications to use the data and experiment quickly. Analysts may provide oversight in verifying and validating whether data loaded through such a scheme is reliable enough for business use.
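Distributed file systems and big data tooling are outside the scope of this guide, so the following only sketches the ELT ordering at a small scale with Python, pandas, and SQLite: raw data is loaded unchanged into a raw table, and transformation happens later when an analytics application reads it. The file, table, and column names are hypothetical.

```python
import sqlite3
import pandas as pd

# Load first: store the extracted data unchanged in a "raw" table
raw = pd.read_csv("clickstream_export.csv")           # hypothetical raw extract
with sqlite3.connect("lake.db") as lake:
    raw.to_sql("raw_clickstream", lake, if_exists="append", index=False)

    # Transform later, when an analytics application actually needs the data
    events = pd.read_sql("SELECT * FROM raw_clickstream", lake)
    events["event_time"] = pd.to_datetime(events["event_time"])
    sessions = (events.groupby("session_id").size()
                .rename("event_count").reset_index())
```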
3.10.4 Usage Considerations

.1    Strengths

  • Provides well-established process steps for analytics tasks, primarily for descriptive analytics problems.
  • Many of the ETL tools provide a graphical view of the operation and connections to enterprise data sources that facilitate easy data sourcing and self-service analytics.
  • Eases data maintenance by keeping automatic traces to data sources and audit trails.
  • Serves as a robust pre-analysis prior to modelling activities by providing a summary of the nature of the data.
.2    Limitations

  • Heterogeneous and streaming data are difficult to process through traditional ETL tools and technologies.
  • Moving high-volume data through ETL requires significant planning and effort to maintain consistency.
  • ETL is not suitable for near real-time interventions through algorithmic decision-making.