
[PROPOSAL] Code quality analysis #307

Open
zethuman opened this issue Apr 18, 2023 · 11 comments
Labels
enhancement New feature or request good first issue Good for newcomers

Comments

@zethuman
Contributor

What/Why

What are you proposing?

Analyze overall code quality in the automated pipeline.

What problems are you trying to solve?

When a contributor opens a pull request, they want the proposed code to be checked automatically, so they can fix latent errors and add test coverage.

Are there any security considerations?

I haven't researched this deeply. If there are any security concerns, let's discuss them.

Are there any breaking changes to the API?

No; this is purely a convenience for developers.

Why should it be built? Any reason not to?

Describe the value that this feature will bring to the OpenSearch community, as well as what impact it has if it isn't built, or new risks if it is. Highlight opportunities for additional research.

What will it take to execute?

We should integrate a static code quality analyzer such as SonarQube.
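To make the proposal concrete, here is a minimal sketch of what wiring SonarQube's hosted variant (SonarCloud) into GitHub Actions could look like. This is an illustration, not a workflow that exists in this repo: the `SONAR_TOKEN` secret and the job layout are assumptions.

```yaml
# Hypothetical SonarCloud workflow sketch -- the SONAR_TOKEN secret
# and project setup are assumed, not part of this repo yet.
name: SonarCloud
on:
  push:
    branches: [main]
  pull_request:

jobs:
  sonarcloud:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history improves "new code" analysis
      - name: SonarCloud Scan
        uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}  # assumed repo secret
```

Running it as an Action like this would also answer the "where to host a server" question raised later in the thread: SonarCloud hosts the server side.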

@dblock
Member

dblock commented Apr 18, 2023

+1 on any kind of static analysis tooling

@dblock dblock added enhancement New feature or request good first issue Good for newcomers and removed untriaged labels Apr 18, 2023
@Jakob3xD
Collaborator

I have a few questions related to that.

  • How do we define code quality?
  • Do we only use the defaults from the action provider, or do we define them ourselves?
  • If so, how would we define them?
  • What do we do if some defaults don't match the code quality we want, or we simply don't like some of them?

@zethuman
Contributor Author

If your questions are about Codacy, my arguments are below. If you are asking about analysis in general, there is a lot to discuss, since there are different tools and different approaches. When I looked into this, the two top tools were Codacy and SonarQube. SonarQube is more popular and could suit us; the only question is where to host a server for it. Or is it possible to run it in Actions?

Why Codacy?

  • Judging by the documentation, it integrates with the widespread tools already present in the ecosystem
  • As far as I understood from the documentation, you can only use the built-in rule sets
  • Skipping...
  • The documentation only shows configuration through the UI, which I admit is inconvenient for open-source projects; nevertheless, the possibility exists, and I even tried ignoring some rules. Link for review

In general, I am open to any other tools that are suggested; I am ready to dig deeper and do the integration.

@Jakob3xD
Collaborator

Does it scan the PR's code quality or the whole repo's?

I am generally okay with adding it and seeing how it works, but I don't think it makes sense in the current state of this repo, as we first need to handle #65. I want to avoid contributors starting to resolve quality alerts now, before we change the whole lib, so they don't waste their time on code that will change anyway.

Besides, we could also replace the current linter with https://github.com/golangci/golangci-lint-action.
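For reference, a typical use of that action looks roughly like the sketch below. The Go and action versions shown are examples only; check the action's README for the currently supported ones.

```yaml
# Sketch of a golangci-lint workflow -- versions are illustrative.
name: Lint
on: [push, pull_request]

jobs:
  golangci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v4
        with:
          go-version: '1.21'
      - name: golangci-lint
        uses: golangci/golangci-lint-action@v3
        with:
          version: v1.54  # pin the golangci-lint version explicitly
```

Note that the action version (`@v3`) and the `version:` input (the golangci-lint binary) are two separate version numbers, which comes up again later in this thread.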

@tannerjones4075
Contributor

I am interested in helping with this issue :)

@VachaShah
Collaborator

Thank you @tannerjones4075! Looking forward to your contributions :)

@tannerjones4075
Contributor

tannerjones4075 commented Oct 13, 2023

It seems this issue should be broken into several issues:

I can start with the linter if that is the priority, and then security scanning.

@Jakob3xD
Collaborator

After defining which Code Quality Objectives we want, we can create sub-tasks / issues.

Two ideas:

  • Test Coverage Report that comments to the PR
  • govulncheck

4. Replace/update linter:
The current linter workflow: https://github.com/opensearch-project/opensearch-go/blob/main/.github/workflows/lint.yml
It currently runs golangci-lint v1.53.3 via https://github.com/golangci/golangci-lint-action.

To clarify the version numbers: 3.7.0 is the version of the workflow action itself, while v1.53.3 is the actual golangci-lint version it runs. The latest golangci-lint is therefore v1.54.2.
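The govulncheck idea above could also run as a workflow. A minimal sketch, assuming the standard `govulncheck` tool from `golang.org/x/vuln` (the job name and versions are illustrative):

```yaml
# Sketch of a govulncheck job; govulncheck is the official Go
# vulnerability scanner (golang.org/x/vuln/cmd/govulncheck).
name: Vulncheck
on: [push, pull_request]

jobs:
  govulncheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v4
        with:
          go-version: '1.21'
      - name: Run govulncheck
        run: |
          go install golang.org/x/vuln/cmd/govulncheck@latest
          govulncheck ./...
```

By default govulncheck prints findings to stdout and fails the step on reachable vulnerabilities, so no write-access token is needed unless a PR comment or report upload is wanted.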

@tannerjones4075
Contributor

tannerjones4075 commented Oct 24, 2023

For SonarQube we could leverage SonarCloud. This would allow us to create a workflow and implement SonarQube through a GitHub Action. For test coverage I came across this. We could have a report generated locally and when a PR is created.

govulncheck: What do we want the output to be? Do we want a report to be generated? That would require a token with write access.

@tannerjones4075
Contributor


It looks like SonarQube has been suggested and researched before. The conclusion was to update the current Go linter, which has been done.

@Jakob3xD
Collaborator

Removed the govulncheck workflow, as it reports vulnerabilities in imported libs that we have no control over and that can only be resolved by updating the Go version. Since we are a lib and don't build anything, users need to update their own Go version.
#421
