Commit 76090ec: fixes misspellings (#2679)
* Fixes spelling across repository

* Update CHANGELOG.md

* Revert CHANGELOG.md Line 1723

- included -> include

* Update CHANGELOG.md

Co-authored-by: Jennifer Tran <[email protected]>

* walkthrough -> walk through or walk-through

* Fix Arn -> ARN where appropriate

* Update allowlist for node-fetch while we wait for patches upstream.

* Revert "Update allowlist for node-fetch while we wait for patches upstream."

This reverts commit 286424c.

Co-authored-by: Elison Crum <[email protected]>
Co-authored-by: Jennifer Tran <[email protected]>
3 people authored Jan 24, 2022
1 parent 1c43069 commit 76090ec
Showing 81 changed files with 97 additions and 97 deletions.
8 changes: 4 additions & 4 deletions CHANGELOG.md
@@ -704,7 +704,7 @@ releases.

- **[PR2224](https://github.com/nasa/cumulus/pull/2244)**
- Changed timeout on `sfEventSqsToDbRecords` Lambda to 60 seconds to match
-timeout for Knex library to acquire dataase connections
+timeout for Knex library to acquire database connections
- **CUMULUS-2208**
- Moved all `@cumulus/api/es/*` code to new `@cumulus/es-client` package
- Changed timeout on `sfEventSqsToDbRecords` Lambda to 60 seconds to match
@@ -727,7 +727,7 @@ releases.
[1.6.2](https://cdn.earthdata.nasa.gov/umm/granule/v1.6.2/umm-g-json-schema.json)
- **CUMULUS-2472**
- Renamed `@cumulus/earthdata-login-client` to more generic
-`@cumulus/oauth-client` as a parnt class for new OAuth clients.
+`@cumulus/oauth-client` as a parent class for new OAuth clients.
- Added `@cumulus/oauth-client/CognitoClient` to interface with AWS cognito login service.
- **CUMULUS-2497**
- Changed the `@cumulus/cmrjs` package:
@@ -1664,7 +1664,7 @@ new `update-granules-cmr-metadata-file-links` task.
- Update reports to return breakdown by Granule of files both in DynamoDB and S3
- **CUMULUS-2123**
- Added `cumulus-rds-tf` DB cluster module to `tf-modules` that adds a
-severless RDS Aurora/ PostgreSQL database cluster to meet the PostgreSQL
+serverless RDS Aurora/PostgreSQL database cluster to meet the PostgreSQL
requirements for future releases.
- Updated the default Cumulus module to take the following new required variables:
- rds_user_access_secret_arn:
@@ -1939,7 +1939,7 @@ the [release page](https://github.com/nasa/cumulus/releases)
result in a "Client not connected" exception being thrown.
- Instances of `@cumulus/ingest/SftpProviderClient` no longer implicitly
disconnect from the SFTP server when `list` is called.
-- Instances of `@cumulus/sftp-client/SftpClient` must now be expclicitly closed
+- Instances of `@cumulus/sftp-client/SftpClient` must now be explicitly closed
by calling `.end()`
- Instances of `@cumulus/sftp-client/SftpClient` no longer implicitly connect to
the server when `download`, `unlink`, `syncToS3`, `syncFromS3`, and `list` are
2 changes: 1 addition & 1 deletion docs/configuration/lifecycle-policies.md
@@ -14,7 +14,7 @@ This document will outline, in brief, how to set data lifecycle policies so that

## Examples

-### Walkthrough on setting time-based S3 Infrequent Access (S3IA) bucket policy
+### Walk-through on setting time-based S3 Infrequent Access (S3IA) bucket policy

This example will give step-by-step instructions on updating a bucket's lifecycle policy to move all objects in the bucket from the default storage to S3 Infrequent Access (S3IA) after a period of 90 days. Below are instructions for walking through configuration via the command line and the management console.
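As a rough sketch of the policy this walk-through produces (illustrative only, not part of the commit; the bucket name and rule ID below are placeholders), the same 90-day S3IA transition can be applied with the AWS SDK for JavaScript:

```js
// Sketch: transition every object in a bucket to S3 Infrequent Access
// (STANDARD_IA) after 90 days.
const { S3 } = require('aws-sdk');

const s3 = new S3();

(async () => {
  await s3.putBucketLifecycleConfiguration({
    Bucket: 'my-protected-bucket', // placeholder bucket name
    LifecycleConfiguration: {
      Rules: [
        {
          ID: 'move-to-s3ia-after-90-days', // placeholder rule ID
          Status: 'Enabled',
          Filter: { Prefix: '' }, // empty prefix applies the rule to all objects
          Transitions: [{ Days: 90, StorageClass: 'STANDARD_IA' }],
        },
      ],
    },
  }).promise();
})();
```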

2 changes: 1 addition & 1 deletion docs/data-cookbooks/about-cookbooks.md
@@ -10,7 +10,7 @@ The following data cookbooks are documents containing examples and explanations

## Setup

-The data cookbooks assume you can configure providers, collections, and rules to run workflows. Visit [Cumulus data management types](../configuration/data-management-types) for information on how to conifgure Cumulus data management types.
+The data cookbooks assume you can configure providers, collections, and rules to run workflows. Visit [Cumulus data management types](../configuration/data-management-types) for information on how to configure Cumulus data management types.

## Adding a page

2 changes: 1 addition & 1 deletion docs/data-cookbooks/queue-post-to-cmr.md
@@ -4,7 +4,7 @@ title: Queue PostToCmr
hide_title: false
---

-In this document, we walktrough handling CMR errors in workflows by queueing PostToCmr. We assume that the user already has an ingest workflow setup.
+In this document, we walk through handling CMR errors in workflows by queueing PostToCmr. We assume that the user already has an ingest workflow setup.

## Overview

2 changes: 1 addition & 1 deletion docs/data-cookbooks/throttling-queued-executions.md
@@ -4,7 +4,7 @@ title: Throttling queued executions
hide_title: false
---

-In this entry, we will walkthrough how to create an SQS queue for scheduling executions which will be used to limit those executions to a maximum concurrency. And we will see how to configure our Cumulus workflows/rules to use this queue.
+In this entry, we will walk through how to create an SQS queue for scheduling executions which will be used to limit those executions to a maximum concurrency. And we will see how to configure our Cumulus workflows/rules to use this queue.

We will also review the architecture of this feature and highlight some implementation notes.

2 changes: 1 addition & 1 deletion docs/deployment/components.md
@@ -74,4 +74,4 @@ documentation:
The recommended approach for handling remote state with Cumulus is to use the [S3 backend](https://www.terraform.io/docs/backends/types/s3.html).
This backend stores state in S3 and uses a DynamoDB table for locking.

-See the deployment documentation for a [walkthrough of creating resources for your remote state using an S3 backend](README.md#create-resources-for-terraform-state).
+See the deployment documentation for a [walk-through of creating resources for your remote state using an S3 backend](README.md#create-resources-for-terraform-state).
2 changes: 1 addition & 1 deletion docs/docs-how-to.md
@@ -41,7 +41,7 @@ hide_title: false

#### Versioning Docs

-We lean heavily on Docusaurus for versioning. Their suggestions and walkthrough can be found [here](https://docusaurus.io/docs/en/versioning). It is worth noting that we would like the Documentation versions to match up directly with release versions. Cumulus versioning is explained in the [Versioning Docs](https://github.com/nasa/cumulus/tree/master/docs/development/release.md).
+We lean heavily on Docusaurus for versioning. Their suggestions and walk-through can be found [here](https://docusaurus.io/docs/en/versioning). It is worth noting that we would like the Documentation versions to match up directly with release versions. Cumulus versioning is explained in the [Versioning Docs](https://github.com/nasa/cumulus/tree/master/docs/development/release.md).

#### Search

2 changes: 1 addition & 1 deletion docs/features/reports.md
@@ -27,7 +27,7 @@ This report shows the following data:
The Cumulus Dashboard offers an interface to create, manage and view these inventory reports.

The Reconciliation Reports Overview page shows a full list of existing reports and the option to create a new report.
-![Screenshot of the Dashboard Rconciliation Reports Overview page](assets/rec_reports_overview.png)
+![Screenshot of the Dashboard Reconciliation Reports Overview page](assets/rec_reports_overview.png)

Viewing an inventory report will show a detailed list of collections, granules and files.
![Screenshot of an Inventory Report page](assets/inventory_report.png)
2 changes: 1 addition & 1 deletion docs/upgrade-notes/upgrading-tf-version-0.13.6.md
@@ -8,7 +8,7 @@ hide_title: false

Cumulus pins its support to a specific version of Terraform [see: deployment documentation](../deployment/README.md#install-terraform). The reason for only supporting one specific Terraform version at a time is to avoid deployment errors than can be caused by deploying to the same target with different Terraform versions.

-Cumulus is upgrading its supported version of Terraform from **0.12.12** to **0.13.6**. This document contains instructions on how to perform the uprade for your deployments.
+Cumulus is upgrading its supported version of Terraform from **0.12.12** to **0.13.6**. This document contains instructions on how to perform the upgrade for your deployments.

### Prerequisites

2 changes: 1 addition & 1 deletion docs/workflows/docker.md
@@ -83,7 +83,7 @@ ENTRYPOINT ["/work/process.py"]
CMD ["input", "output"]
```

-When this Dockerfile is built, docker will first use the latest cumulus-base image. It will then copy the entire GitHub repository (the processing required for a single data collection is a repository) to the `/work` directory which will now contain all the code necessary to process this data. In thie case, a C file is compiled to convert the supplied hdf5 files to NetCDF files. Note that this also requires installing the system libraries `nco` and `libhdf5-dev` via `apt-get`. Lastly, the Dockerfile sets the entrypoint to the processing handler, so that this command is run when the image is run. It expects two arguments to be handed to it: 'input' and 'output' meaning the input and output directories.
+When this Dockerfile is built, docker will first use the latest cumulus-base image. It will then copy the entire GitHub repository (the processing required for a single data collection is a repository) to the `/work` directory which will now contain all the code necessary to process this data. In this case, a C file is compiled to convert the supplied hdf5 files to NetCDF files. Note that this also requires installing the system libraries `nco` and `libhdf5-dev` via `apt-get`. Lastly, the Dockerfile sets the entrypoint to the processing handler, so that this command is run when the image is run. It expects two arguments to be handed to it: 'input' and 'output' meaning the input and output directories.

## Process Handler

2 changes: 1 addition & 1 deletion example/README.md
@@ -202,7 +202,7 @@ Run `terraform apply`.

These steps should be performed in the `example` directory.

-Copy `.env.sample` to `.env`, filling in approriate values for your deployment.
+Copy `.env.sample` to `.env`, filling in appropriate values for your deployment.

Set the `DEPLOYMENT` environment variable to match the `prefix` that you
configured in your `terraform.tfvars` files.
2 changes: 1 addition & 1 deletion example/spec/helpers/testUtils.js
@@ -149,7 +149,7 @@ async function deleteFolder(bucket, folder) {
}

/**
-* Returns execution ARN from a statement machine Arn and executionName
+* Returns execution ARN from a statement machine ARN and executionName
*
* @param {string} executionArn - execution ARN
* @returns {string} return aws console url for the execution
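The helper documented above builds an execution ARN from a state machine ARN and an execution name. As an illustrative sketch (the function name here is made up, not the repository's implementation), the derivation follows the standard Step Functions ARN layout:

```js
// Sketch: arn:aws:states:<region>:<account>:stateMachine:<name> becomes
// arn:aws:states:<region>:<account>:execution:<name>:<executionName>
const buildExecutionArn = (stateMachineArn, executionName) =>
  `${stateMachineArn.replace(':stateMachine:', ':execution:')}:${executionName}`;

// Example:
// buildExecutionArn(
//   'arn:aws:states:us-east-1:123456789012:stateMachine:IngestGranule',
//   'my-execution'
// ) === 'arn:aws:states:us-east-1:123456789012:execution:IngestGranule:my-execution'
```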
@@ -1267,7 +1267,7 @@ describe('The S3 Ingest Granules workflow', () => {
if (subTestSetupError) fail(subTestSetupError);
});

-it('returns a list of exeuctions', () => {
+it('returns a list of executions', () => {
failOnSetupError([beforeAllError, subTestSetupError]);
expect(executions.results.length).toBeGreaterThan(0);
});
2 changes: 1 addition & 1 deletion packages/api/lambdas/create-reconciliation-report.js
@@ -590,7 +590,7 @@ exports.reconciliationReportForGranules = reconciliationReportForGranules;
* @param {number} [params.recReportParams.StartTimestamp]
* @param {number} [params.recReportParams.EndTimestamp]
* @param {string} [params.recReportparams.collectionIds]
-* @returns {Promise<Object>} - a reconcilation report
+* @returns {Promise<Object>} - a reconciliation report
*/
async function reconciliationReportForCumulusCMR(params) {
log.info(`reconciliationReportForCumulusCMR with params ${JSON.stringify(params)}`);
@@ -808,7 +808,7 @@ test.serial('Generates valid reconciliation report when there are both extra ES
});

test.serial(
-'With input time params, generates a valid filtered reconcilation report, when there are extra cumulus/ES and CMR collections',
+'With input time params, generates a valid filtered reconciliation report, when there are extra cumulus/ES and CMR collections',
async (t) => {
const { startTimestamp, endTimestamp, ...setupVars } = await setupElasticAndCMRForTests({ t });

@@ -856,7 +856,7 @@ test.serial(
);

test.serial(
-'With location param as S3, generates a valid reconcilation report for only S3 and DynamoDB',
+'With location param as S3, generates a valid reconciliation report for only S3 and DynamoDB',
async (t) => {
const dataBuckets = range(2).map(() => randomId('bucket'));
await Promise.all(dataBuckets.map((bucket) =>
@@ -914,7 +914,7 @@ test.serial(
);

test.serial(
-'With location param as CMR, generates a valid reconcilation report for only Cumulus and CMR',
+'With location param as CMR, generates a valid reconciliation report for only Cumulus and CMR',
async (t) => {
const params = {
numMatchingCollectionsOutOfRange: 0,
4 changes: 2 additions & 2 deletions packages/db/src/models/execution.ts
@@ -42,11 +42,11 @@ class ExecutionPgModel extends BasePgModel<PostgresExecution, PostgresExecutionR
* @param {Knex | Knex.Transaction} knexOrTrx -
* DB client or transaction
* @param {Array<number>} executionCumulusIds -
-* single execution cumulus_id or array of exeuction cumulus_ids
+* single execution cumulus_id or array of execution cumulus_ids
* @param {Object} [params] - Optional object with addition params for query
* @param {number} [params.limit] - number of records to be returned
* @param {number} [params.offset] - record offset
-* @returns {Promise<Array<number>>} An array of exeuctions
+* @returns {Promise<Array<number>>} An array of executions
*/
async searchByCumulusIds(
knexOrTrx: Knex | Knex.Transaction,
2 changes: 1 addition & 1 deletion packages/ingest/README.md
@@ -28,7 +28,7 @@ LOCALSTACK_HOST=localhost npm test

All modules are accessible using require: `require('@cumulus/ingest/<MODULE_NAME>')` or import: `import <MODULE_NAME> from '@cumulus/ingest/<MODULE_NAME>'`.

-- [`consumer`](./consumer.js) - comsumer for SQS messages
+- [`consumer`](./consumer.js) - consumer for SQS messages
- [`crypto`](./crypto.js) - provides encryption and decryption methods with a consistent API but differing mechanisms for dealing with encryption keys
- [`ftp`](./ftp.js) - for accessing FTP servers
- [`granule`](./granule.js) - discovers and ingests granules
4 changes: 2 additions & 2 deletions packages/integration-tests/api/executions.js
@@ -14,7 +14,7 @@ const executionsApi = require('@cumulus/api-client/executions');
* @returns {Promise<Object>} - the execution fetched by the API
*/
const getExecution = async (params) => {
-deprecate('@cumulus/integration-tests/exeuctions.getExecution', '1.21.0', '@cumulus/api-client/ems.getExecution');
+deprecate('@cumulus/integration-tests/executions.getExecution', '1.21.0', '@cumulus/api-client/ems.getExecution');
return await executionsApi.getExecution(params);
};

@@ -43,7 +43,7 @@ const getExecutions = async (params) => {
* @returns {Promise<Object>} - the execution status fetched by the API
*/
async function getExecutionStatus(params) {
-deprecate('@cumulus/integration-tests/exeuctions.getExecutionStatus', '1.21.0', '@cumulus/api-client/executions.getExecutionStatus');
+deprecate('@cumulus/integration-tests/executions.getExecutionStatus', '1.21.0', '@cumulus/api-client/executions.getExecutionStatus');
return await executionsApi.getExecutionStatus(params);
}
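Both wrappers above only emit a deprecation notice and delegate, so callers can use `@cumulus/api-client/executions` directly. A hedged sketch, assuming the client takes a deployment prefix and an execution ARN:

```js
// Sketch: call the api-client package instead of the deprecated
// integration-tests wrappers (parameter names are assumed).
const executionsApi = require('@cumulus/api-client/executions');

async function fetchExecutionStatus(executionArn) {
  return await executionsApi.getExecutionStatus({
    prefix: 'my-cumulus-stack', // placeholder deployment prefix
    arn: executionArn,
  });
}
```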

4 changes: 2 additions & 2 deletions packages/integration-tests/sfnStep.js
@@ -68,7 +68,7 @@ class SfnStep {
* If there are multiple executions of a step, we currently assume a retry and return
* either the first passed execution or the last execution if no passing executions exist
*
-* @param {string} executionArn - Arn of the workflow execution
+* @param {string} executionArn - ARN of the workflow execution
* @param {string} stepName - name of the step
* @returns {List<Object>} objects containing a schedule event, start event, and complete
* event if exists for each execution of the step, null if cannot find the step
@@ -164,7 +164,7 @@ class SfnStep {
/**
* Get the output payload from the step, if the step succeeds
*
-* @param {string} workflowExecutionArn - Arn of the workflow execution
+* @param {string} workflowExecutionArn - ARN of the workflow execution
* @param {string} stepName - name of the step
* @param {string} eventType - expected type of event, should be 'success' or 'failure'
* @returns {Object} object containing the payload, null if error
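In integration tests this step-output lookup is typically reached through the `LambdaStep` subclass exported from the same module. A usage sketch under that assumption (class, method, and step names are from memory, not from this diff):

```js
// Sketch: fetch the output payload of a named step once the workflow
// execution has finished (names assumed).
const { LambdaStep } = require('@cumulus/integration-tests/sfnStep');

async function getQueueGranulesOutput(workflowExecutionArn) {
  const lambdaStep = new LambdaStep();
  // 'QueueGranules' is an example step name; a third argument of 'failure'
  // would look for a failed event instead of a successful one.
  return await lambdaStep.getStepOutput(workflowExecutionArn, 'QueueGranules');
}
```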
2 changes: 1 addition & 1 deletion tf-modules/README.md
@@ -55,6 +55,6 @@ If the module has been integrated into the Cumulus module or the example Cumulus
If the module is a standalone module that should not be integrated as a submodule (e.g. [`data-persistence`](https://github.com/nasa/cumulus/blob/master/tf-modules/data-persistence/outputs.tf)), then you will need to follow these steps to include it in the CI/CD pipeline:

1. Add a reference implementation for using your module in the `example` directory. See the [reference implementation for the `data-persistence` module](https://github.com/nasa/cumulus/blob/master/example/data-persistence-tf).
-- Make sure to include a [`provider` configuration](https://www.terraform.io/docs/configuration/providers.html) in your `.tf` files, which defines what provider will be interpret the Terraform reosources
+- Make sure to include a [`provider` configuration](https://www.terraform.io/docs/configuration/providers.html) in your `.tf` files, which defines what provider will be interpret the Terraform resources
2. Update the [CI Terraform deployment script](https://github.com/nasa/cumulus/blob/master/bamboo/bootstrap-tf-deployment.sh) to deploy your module.
- Make sure to add remote state handling for deploying your module so that each CI build only update the existing deployment as necessary, because local Terraform state in the CI will not persist between builds.
@@ -7,7 +7,7 @@ original_id: setup

# Setup

-### Getting setup to work with data-cookboooks
+### Getting setup to work with data-cookbooks

In the following data cookbooks we'll go through things like setting up workflows, making configuration changes, and interacting with CNM. The point of this section is to set up, or at least better understand, collections, providers, and rules and how they are configured.

2 changes: 1 addition & 1 deletion website/versioned_docs/version-1.11.0/docs-how-to.md
@@ -42,7 +42,7 @@ hide_title: true # So the title of the Doc doesn't show up at the top of the

### Versioning Docs

-We lean heavily on Docusaurus for versioning. Their suggestions and walkthrough can be found [here](https://docusaurus.io/docs/en/versioning). It is worth noting that we would like the Documentation versions to match up directly with release versions. Cumulus versioning is explained in the [Versioning Docs](https://github.com/nasa/cumulus/tree/master/docs/development/release.md).
+We lean heavily on Docusaurus for versioning. Their suggestions and walk-through can be found [here](https://docusaurus.io/docs/en/versioning). It is worth noting that we would like the Documentation versions to match up directly with release versions. Cumulus versioning is explained in the [Versioning Docs](https://github.com/nasa/cumulus/tree/master/docs/development/release.md).

### Search

2 changes: 1 addition & 1 deletion website/versioned_docs/version-1.11.0/upgrade/1.9.0.md
@@ -11,7 +11,7 @@ original_id: 1.9.0

## Additional Functionality

-Cumulus 1.9 uses versioned collections to support granules from different collections and verison numbers.
+Cumulus 1.9 uses versioned collections to support granules from different collections and version numbers.
These granules usually come from PDRs where objects are defined with a DATA_VERSION and DATA_TYPE
The associated collections should reflect the same version and dataType. CMR would also need to use the same dataType.
If a dataType is not provided, Cumulus will use the collection name.
@@ -7,7 +7,7 @@ original_id: setup

# Setup

-### Getting setup to work with data-cookboooks
+### Getting setup to work with data-cookbooks

In the following data cookbooks we'll go through things like setting up workflows, making configuration changes, and interacting with CNM. The point of this section is to set up, or at least better understand, collections, providers, and rules and how they are configured.

@@ -87,7 +87,7 @@ Our goal here is to create a rule through the Cumulus dashboard that will define

The `Executions` page presents a list of all executions, their status (running, failed, or completed), to which workflow the execution belongs, along with other information. The rule defined in the previous section should start an execution of its own accord, and the status of that execution can be tracked here.

-To get some deeper information on the execution, click on the value in the `Name` column of your execution of interest. This should bring up a visual representation of the worklfow similar to that shown above, execution details, and a list of events.
+To get some deeper information on the execution, click on the value in the `Name` column of your execution of interest. This should bring up a visual representation of the workflow similar to that shown above, execution details, and a list of events.

## Summary

@@ -7,7 +7,7 @@ original_id: setup

# Setup

-## Getting setup to work with data-cookboooks
+## Getting setup to work with data-cookbooks

In the following data cookbooks we'll go through things like setting up workflows, making configuration changes, and interacting with CNM. The point of this section is to set up, or at least better understand, collections, providers, and rules and how they are configured.
