
Loading Fact Tables

When populating fact tables, the foreign key values must be known before a fact row can be inserted. These foreign key values, which are the primary key values of the dimension tables, may not be known initially. This is most often the case when new dimension rows are discovered and your warehouse design requires that dimension table primary key values be assigned automatically by the database. To overcome this situation, a multi-step design must be applied to processing the fact rows.

Processing fact rows in a multi-step manner exploits the unique capabilities of DataStage to integrate operating system and database features within a DataStage job.

  1. Process fact rows without regard to dimension key values, instead retaining dimension column values. These dimension column values will later be used to determine dimension key values.

  2. For each dimension table, create a temporary dimension table in your database whose structure is similar to the dimension table (the database work in steps 2 through 6 is sketched in SQL after this list).

  3. Populate the temporary dimension tables using the retained dimension column values from step 1, setting the dimension key column value to NULL.

  4. Join the temporary dimension tables with the dimension tables, updating the dimension key column in each temporary dimension table.

  5. For all rows in the temporary dimension tables with a dimension key column value of NULL, insert the row into its dimension table.

  6. Join the temporary dimension tables with the dimension tables for all rows in the temporary dimension tables with a dimension key column value of NULL, updating the dimension key column in each temporary dimension table.

  7. Create a hash file for each temporary dimension table whose key columns are all columns other than the dimension key column.

  8. Populate the temporary dimension hash files with the rows from the temporary dimension tables.

  9. Process the fact rows created in step 1, performing reference lookups to the temporary dimension hash files, resolving the dimension key values, and creating an output file compatible with your database’s bulk loader (e.g. SQLLDR).

  10. Execute your database’s bulk loader using the file created in step 9 as input (a sample invocation follows this list).
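
As a concrete illustration, the database work in steps 2 through 6 can be expressed in plain SQL along the following lines. This is a minimal sketch, assuming a single hypothetical dimension DIM_CUSTOMER with a database-assigned surrogate key CUSTOMER_KEY, a natural key CUSTOMER_ID, and a sequence CUSTOMER_SEQ; these names, and the Oracle-style syntax, are assumptions chosen for illustration only, not part of the original design.

    -- Step 2: temporary dimension table mirroring DIM_CUSTOMER (names are illustrative).
    CREATE TABLE TMP_DIM_CUSTOMER (
        CUSTOMER_KEY   NUMBER,          -- surrogate key, unknown at this point
        CUSTOMER_ID    VARCHAR2(20),    -- natural key retained from the fact feed
        CUSTOMER_NAME  VARCHAR2(100)
    );

    -- Step 3 is a DataStage job that loads TMP_DIM_CUSTOMER from the retained
    -- dimension column values, leaving CUSTOMER_KEY set to NULL.

    -- Step 4: resolve keys for dimension rows that already exist.
    UPDATE TMP_DIM_CUSTOMER t
       SET CUSTOMER_KEY = (SELECT d.CUSTOMER_KEY
                             FROM DIM_CUSTOMER d
                            WHERE d.CUSTOMER_ID = t.CUSTOMER_ID);

    -- Step 5: insert the dimension rows that were not found, letting the database
    -- assign the surrogate key (here via the assumed sequence CUSTOMER_SEQ; an
    -- identity column or trigger would serve the same purpose).
    INSERT INTO DIM_CUSTOMER (CUSTOMER_KEY, CUSTOMER_ID, CUSTOMER_NAME)
    SELECT CUSTOMER_SEQ.NEXTVAL, t.CUSTOMER_ID, t.CUSTOMER_NAME
      FROM TMP_DIM_CUSTOMER t
     WHERE t.CUSTOMER_KEY IS NULL;

    -- Step 6: repeat the key resolution for the rows just inserted.
    UPDATE TMP_DIM_CUSTOMER t
       SET CUSTOMER_KEY = (SELECT d.CUSTOMER_KEY
                             FROM DIM_CUSTOMER d
                            WHERE d.CUSTOMER_ID = t.CUSTOMER_ID)
     WHERE CUSTOMER_KEY IS NULL;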
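
For steps 9 and 10, the exact file format and invocation depend on your database. If the target is Oracle, the output file from step 9 might be described by a SQL*Loader control file such as the one below; the file names (fact.ctl, fact.dat), the table name FACT_SALES, and the column list are purely illustrative assumptions.

    -- fact.ctl (illustrative): describes the comma-delimited file produced in step 9.
    LOAD DATA
    INFILE 'fact.dat'
    APPEND
    INTO TABLE FACT_SALES
    FIELDS TERMINATED BY ','
    (CUSTOMER_KEY, PRODUCT_KEY, SALE_DATE DATE "YYYY-MM-DD", SALE_AMOUNT)

Step 10 then reduces to running the loader, for example from a shell script or an after-job routine:

    sqlldr userid=scott/tiger control=fact.ctl log=fact.log bad=fact.bad

where the credentials are placeholders and the log and bad parameters name the loader’s log and rejected-row files.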

The implementation of this multi-step process is simpler than its description. The entire process can be implemented as three DataStage jobs and two database scripts.
