Re: [DISCUSS] Adding static code analysis into our review process


Dmitry Kovalev <dk.g...@...>
 

My recent PR https://github.com/JanusGraph/janusgraph/pull/1483 must be one of the first victims of this change, and I thought I should provide some feedback from the contributor's perspective while it is fresh :)

Without heading straight into discussion about the actual usefulness of 99.9% of the issues that this Codacy tool has found, I would like to check what people think about the way they are presented.

I think that the idea of posting each "issue" as a comment on PR leads to this bot completely hijacking any meaningful discussion(s) on the PR. And it is clearly unable to clean up after itself when the "issues" are addressed. 

I can see that some of the "stale" comments are deleted by "JanusGraph" (another bot?), but others remain even though they have actually been addressed. It also seems to repeat the same comments with every new commit if the committer "dared" to ignore them, so it has the potential to snowball to insane levels unless it is pacified completely and each and every "issue" is resolved to its satisfaction.

I hope that the mode where each and every "issue" it finds is posted as a comment can be switched off; otherwise IMO this bot does much more harm than good and is basically just a big nuisance. There is a link to the full report from the "check" entry, and I think that is the only thing required on the PR page.

Coming back to the usefulness of the "issues" it reports - so far I have seen three broad categories:
1) "issues" which are absolutely useless in terms of code quality or readability, but at least are easy to "fix"
2) "issues" which could be helpful (such as method complexity) if they were tuned to more reasonable thresholds
3) "issues" which directly contradict the intention/design, and dismiss it as something to avoid, without providing any meaningful explanation or alternative

I am sure that it is also capable of detecting actual issues such as coding errors, deficiencies and potential runtime errors - but as currently configured, those are likely to be buried under the immense amount of useless spam it generates.

Any thoughts?

Many thanks,
Dmitry


On Friday, 1 March 2019 13:08:03 UTC, Florian Hockmann wrote:
We now have Codacy activated for all repos in the JanusGraph organization. The dashboard for the main repo can be found here:


Codacy automatically checks pull requests to verify that they don't introduce new issues. By default, it only adds that information as a check, just as Travis reports the build result. I have now also configured Codacy to add inline comments directly where a new issue is introduced in the PR, plus a summary comment.
We can of course change that if we want less integration of Codacy in the PR process.

Codacy will likely complain about a lot of issues that we don't really care about - false positives, basically. I therefore suggest that we don't treat Codacy as a gatekeeper that prevents pull requests from being merged, especially in the beginning.
Instead, we should determine which checks don't make sense for us so we can disable them. So, if you spot anything Codacy complains about that it shouldn't in your opinion, just write here and we can remove it.
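For reference, checks can also be disabled in the repository itself rather than only through the Codacy UI. The sketch below is based on my reading of Codacy's configuration-file docs and is an assumption, not something we have set up yet - the exact keys and engine names should be verified against Codacy's documentation before committing such a file:

```yaml
# .codacy.yml - hypothetical sketch of a repo-level Codacy configuration.
# Engine names and keys are assumptions; check Codacy's docs for the real ones.
---
engines:
  # Disable an analysis tool entirely if its findings are mostly noise for us:
  duplication:
    enabled: false
  # Keep a tool enabled but scope it to the languages we actually use:
  pmd:
    enabled: true
    languages:
      - java
# Don't analyse generated code, docs, or test fixtures:
exclude_paths:
  - "docs/**"
  - "**/target/**"
  - "**/*.min.js"
```

Tuning thresholds for individual patterns (e.g. method complexity) would still have to happen per tool, either in the Codacy UI or in that tool's own config file.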

Codacy will hopefully make the review process easier as reviewers don't have to look for issues that Codacy can already find and contributors get quicker feedback if they introduce new issues.

Committers / TSC members: If you want to be able to configure Codacy for JanusGraph, then please just send me an email and I will add you: f...@.... (I have already invited those of you whose email addresses I have.)

Am Mittwoch, 29. August 2018 15:50:47 UTC+2 schrieb Florian Hockmann:
JanusGraph currently uses Coverity as a static code analysis tool. While this has already led to some findings, it unfortunately cannot be used to analyse code changes from pull requests. I wondered whether we could improve our review process with a service that analyses changes in pull requests and comments directly on the PRs to report its findings. This would hopefully make reviews more efficient, as reviewers wouldn't have to comment on obvious style issues, for example, and it could find problems before they reach production branches.

I already searched for a service that we could use. In my opinion, it should fulfil these requirements:
  1. Support for the languages we currently use / might add in the near future: Java, C#, Python, JavaScript / TypeScript.
  2. Integration into code reviews in GitHub.
  3. Free for open source projects with enough scans per day (some services limit these to 5 per day, which might not be enough).
GitHub already lists services that can be used for code reviews here. Of those listed there, Codacy seems to be the only service that fulfils all three requirements. I wanted to try out Codacy and therefore already let it analyse forks of JanusGraph and the version of JanusGraph.Net that is currently in review:

So, what do others think about the idea of integrating a code analysis service into our review process in general? And are there any other suggestions for such a service besides Codacy?
