
Refactor softmax templates to use outer dims #844

Closed
wants to merge 2 commits

Conversation


@int3 int3 commented Jul 25, 2023

Summary:
Previously, the softmax templates assumed that reduction would always be done over the last dim, so the only parameter passed to the templates was the rank of the tensor. To set the stage for generalizing softmax, we pass the reduction dim instead.

The output is functionally identical, though the codegen changes slightly in the case where all the inner dimensions are 1: we now pass only the outer dimensions to the function call, dropping the redundant inner dimension parameters.

For the `tail_shapes_all_1_bf16` softmax test case, we have

Before:
```
      softmax_0(
         X,
         Y,
         &input_batch,
         &X_dim_1,
         &X_dim_2,
         stream
      );
```

After:
```
      softmax_0(
         X,
         Y,
         &input_batch,
         stream
      );
```
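The dimension-dropping behavior can be sketched roughly as follows (the helper name and signature are hypothetical; the actual AITemplate codegen is template-based and differs in detail):

```python
def emit_dim_params(dim_names, static_sizes, reduction_dim):
    """Sketch: choose which dynamic-dim arguments to emit for the
    generated softmax call.

    Outer dims (before the reduction dim) are always passed; inner
    dims whose size is statically 1 carry no information and can be
    dropped from the generated function call.
    """
    # Always keep the outer dims.
    params = [f"&{name}" for name in dim_names[:reduction_dim]]
    # Keep inner dims only if they are not statically 1.
    params += [
        f"&{name}"
        for name, size in zip(
            dim_names[reduction_dim:], static_sizes[reduction_dim:]
        )
        if size != 1
    ]
    return params
```

For a shape like `(batch, 1, 1)` with `reduction_dim=1`, this yields only `["&input_batch"]`, matching the "After" call above.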

Differential Revision: D47732859

@facebook-github-bot added the CLA Signed and fb-exported labels Jul 25, 2023
@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D47732859

int3 added a commit to int3/AITemplate that referenced this pull request Jul 25, 2023

fbshipit-source-id: 604ef9901036b913d5f8bc0ce779fb9c3f4b6aa2

int3 added 2 commits July 25, 2023 10:28
Summary:
Very minor cleanups I did while familiarizing myself with the code.

Aside from whitespace changes, I also removed a few unnecessary automatic variables.

Differential Revision: D47732846

fbshipit-source-id: 88635358cf795f21ce1c582dea85185db6c5ac07
Summary:
Pull Request resolved: facebookincubator#844


Reviewed By: aakhundov

Differential Revision: D47732859

fbshipit-source-id: 9c081a21338ba5d6cb6386b06b113171611746e4

@facebook-github-bot

This pull request has been merged in ecf6037.
