# Let's Build a Connector - Part 6 - Custom Operators

In this tutorial, we'll implement some custom binary operators and aggregate functions for our SQLite connector.

So far, we've only required a single scalar type, which we called `any`, for all of our columns, because we didn't have any way to distinguish different column types. Now that we are adding operators, we will need different types to attach them to. So first of all, let's change our configuration file to track the type of each column:

```json
{
    "tables": [
        {
            "tableName": "albums",
            "columns": {
                "id": {
                    "type": "numeric"
                },
                "title": {
                    "type": "string"
                },
                "artist_id": {
                    "type": "numeric"
                }
            }
        },
        ...
    ]
}
```

Our configuration type for a column will change accordingly:

```typescript
type Column = {
    type: string;
};
```
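
For context, here is a rough sketch of how this column type fits into the rest of the configuration. The surrounding type names follow the earlier parts of this series and should be treated as assumptions if your code differs:

```typescript
// Sketch only: the surrounding type names are assumed from earlier parts of this series.
type Configuration = {
    tables: TableConfiguration[];
};

type TableConfiguration = {
    tableName: string;
    columns: { [k: string]: Column };
};

type Column = {
    type: string;
};
```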

As you can see above, we'll have two new scalar types, called `string` and `numeric`. For strings, we'll implement a `LIKE` binary operator and a `csv` aggregation. For numeric columns, we'll implement a `total` aggregation. We can add these to our `scalar_types` definitions in the schema response:

```typescript
let scalar_types: { [k: string]: ScalarType } = {
    'string': {
        aggregate_functions: {
            'csv': {
                result_type: {
                    type: 'named',
                    name: 'string'
                }
            }
        },
        comparison_operators: {
            'eq': {
                type: 'equal'
            },
            'like': {
                type: 'custom',
                argument_type: {
                    type: 'named',
                    name: 'string'
                }
            }
        },
    },
    'numeric': {
        aggregate_functions: {
            'total': {
                result_type: {
                    type: 'named',
                    name: 'numeric'
                }
            }
        },
        comparison_operators: {
            'eq': {
                type: 'equal'
            },
        }
    }
};
```
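
These scalar types are returned as part of the schema response. A sketch of where they fit, assuming the `SchemaResponse` type from the TypeScript SDK used in this series, with the other fields built as in the earlier parts:

```typescript
// Sketch: where scalar_types fits in the schema response.
// object_types and collections are built as in earlier parts of this series.
const schema: SchemaResponse = {
    scalar_types,
    object_types,
    collections,
    functions: [],
    procedures: []
};
```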

We also need to copy over type information for each column from the configuration to the schema response:

```typescript
for (const columnName in table.columns) {
    let column = table.columns[columnName];

    fields[columnName] = {
        type: {
            type: 'named',
            name: column.type
        }
    };
}
```
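
This loop runs once per configured table when building the object types for the schema response. A sketch of the enclosing code, with variable names assumed from earlier parts of this series:

```typescript
// Sketch: building one object type per configured table.
// Variable and type names are assumptions based on earlier parts of this series.
let object_types: { [k: string]: ObjectType } = {};

for (const table of configuration.tables) {
    let fields: { [k: string]: ObjectField } = {};

    for (const columnName in table.columns) {
        let column = table.columns[columnName];

        fields[columnName] = {
            type: {
                type: 'named',
                name: column.type
            }
        };
    }

    object_types[table.tableName] = { fields };
}
```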

If we run `ndc-test` now, we will see some new failing aggregate tests, since we have not implemented our new aggregate functions. If we look at our logs, we'll see queries with aggregates of type `single_column`, so let's go and implement that section of our `fetch_aggregates` function:

```typescript
case 'single_column':
    switch (aggregate.function) {
        case 'total':
            target_list.push(`TOTAL(${aggregate.column}) AS ${aggregateName}`);
            break;
        case 'csv':
            target_list.push(`GROUP_CONCAT(${aggregate.column}, ',') AS ${aggregateName}`);
            break;
        default:
            throw new BadRequest("Unknown aggregate function");
    }
```

Here, we switch on the `function` property of the aggregate and implement each new aggregate function by generating the appropriate SQL. If we see a function that we don't expect, we throw a `BadRequest` exception.
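
For example, the aggregates portion of a query request might look like this (the aggregate names `total_id` and `title_csv` are purely illustrative):

```json
{
  "aggregates": {
    "total_id": {
      "type": "single_column",
      "column": "id",
      "function": "total"
    },
    "title_csv": {
      "type": "single_column",
      "column": "title",
      "function": "csv"
    }
  }
}
```

With the code above, this would contribute `TOTAL(id) AS total_id` and `GROUP_CONCAT(title, ',') AS title_csv` to the generated target list.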

Now let's implement our new comparison operator. In the `visit_expression` function, we'll add a new case to the `binary_comparison_operator` section:

```typescript
case 'like':
    return `${visit_comparison_target(expr.column)} LIKE ${visit_comparison_value(parameters, expr.value)}`;
```

This is almost a copy of the existing `eq` operator implementation.
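
For reference, the surrounding operator dispatch might look something like this; the `eq` case is paraphrased from the earlier parts of this series:

```typescript
// Sketch of the binary_comparison_operator dispatch.
// The eq case is paraphrased from earlier parts of this series.
switch (expr.operator) {
    case 'eq':
        return `${visit_comparison_target(expr.column)} = ${visit_comparison_value(parameters, expr.value)}`;
    case 'like':
        return `${visit_comparison_target(expr.column)} LIKE ${visit_comparison_value(parameters, expr.value)}`;
    default:
        throw new BadRequest("Unknown comparison operator");
}
```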

That's all that's needed to add a new operator to our connector.

If we rebuild and retest, we can see that `ndc-test` now exercises the code for our new aggregates. However, `ndc-test` does not generate tests for the new comparison operator, because it does not know enough about our custom `like` operator to generate sensible inputs.

Instead, we can add a custom test ourselves by copying an existing test from our snapshots directory and modifying it.

Create a file called `snapshots/query/custom_operator/request.json`. We'll query albums whose title starts with the letter A:

```json
{
  "collection": "albums",
  "query": {
    "fields": {
      "title": {
        "type": "column",
        "column": "title",
        "fields": null
      }
    },
    "limit": 10,
    "predicate": {
      "type": "binary_comparison_operator",
      "column": {
        "type": "column",
        "name": "title",
        "path": []
      },
      "operator": "like",
      "value": {
        "type": "scalar",
        "value": "A%"
      }
    }
  },
  "arguments": {},
  "collection_relationships": {}
}
```

Now create a placeholder response file containing an empty JSON array:

```sh
echo '[]' > snapshots/query/custom_operator/expected.json
```

Running `ndc-test` now fails against this placeholder, but the failure details show us the actual response from the connector:

```
npm run test

> test
> ndc-test replay --endpoint http://localhost:8080 --snapshots-dir snapshots


├ Query ...
│ ├ custom_operator ... FAIL
├ Mutation ...
Failed with 1 test failures:

[1] custom_operator
  in Query
  in custom_operator
Details: snapshot did not match file snapshots/query/custom_operator/expected.json: [
  {
    "rows": [
      {
        "title": "Audioslave"
      },
      {
        "title": "Alcohol Fueled Brewtality Live! [Disc 1]"
      },
      {
        "title": "Alcohol Fueled Brewtality Live! [Disc 2]"
      },
      {
        "title": "Afrociberdelia"
      },
      {
        "title": "Acústico MTV [Live]"
      },
      {
        "title": "Axé Bahia 2001"
      }
    ]
  }
]
```

If we copy this response JSON into our `expected.json` file and rerun `npm run test`, we will see that our custom test now passes.

In this way, we can use `ndc-test` to build up a combination of generated and manual tests, making sure we have coverage for all of our connector's features.