Commit
Closes #41

The challenge is that we're using a MERGE statement that successfully deduplicates against what already exists in the target table, but within the batch of records in the stage there are also dupes. The test was failing because no data existed in the destination table, so we weren't updating any records, only inserting; within our staging file we had multiple rows for primary keys 1 and 2, so they all got inserted and the result was duplicates in the destination table.

The way I fixed it in this PR is by adding a `QUALIFY row_num = 1` clause to deduplicate within our staging-file SELECT query. It uses the SEQ8 function, which I've never used before, to order the records based on their place in the file, i.e. the bottom of the table takes precedence over the top. It looks to work as expected, but it feels a little sketchy; I wonder if unsorted streams would have issues where the wrong record gets selected. Ideally the user would tell us a sort-by column so we know how to take the latest.

---------

Co-authored-by: Ken Payne <[email protected]>
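A minimal sketch of the approach described above, assuming hypothetical table and column names (`target_table`, `staged_records`, `id`, `value` are illustrative, not the actual schema): `ROW_NUMBER()` partitions by the primary key, and `SEQ8()` (Snowflake's per-row sequence generator) stands in for position in the file, so ordering by it descending keeps the last occurrence of each key.

```sql
-- Hypothetical names; the real MERGE in the PR differs.
MERGE INTO target_table AS t
USING (
    SELECT *
    FROM staged_records
    -- Keep only the last row per primary key within the batch:
    -- SEQ8() increases with row position, so DESC prefers the
    -- bottom of the file over the top.
    QUALIFY ROW_NUMBER() OVER (
        PARTITION BY id
        ORDER BY SEQ8() DESC
    ) = 1
) AS s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.value = s.value
WHEN NOT MATCHED THEN INSERT (id, value) VALUES (s.id, s.value);
```

Note the caveat from above applies here too: `SEQ8()` only reflects file order if the rows are scanned in order, which is not guaranteed for unsorted or parallel-loaded streams.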