How an analysis happens
Sending a Query
When triggering an analysis, the IDE gets the list of rules for the current files.
It then sends a query over HTTPS to the service at analysis.codiga.io. The query is specified as follows:
Here are the details for each field:
filename: the path of the file being analyzed. The path is relative to the project path.
language: the programming language being analyzed
fileEncoding: the file encoding in the IDE (utf-8 works most of the time)
codeBase64: the code in the IDE, encoded in Base64
rules: an array of rules, each one having the following attributes
id: the full identifier of the rule
language: the language of the rule. It must match the top-level definition in the rule; otherwise, the rule is ignored
type: whether the rule checks the Abstract Syntax Tree (ast) or a pattern (pattern)
entityChecked: ONLY FOR TYPE ast, the AST node/entity being checked
pattern: ONLY FOR TYPE pattern, the pattern being checked in the code
logOutput: if true, the rule output is being captured
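Putting the fields above together, a query payload might be assembled like the sketch below. Only the field names come from this document; the helper name, the example rule identifier, and the exact JSON layout are assumptions for illustration.

```python
import base64

def build_query(filename, language, code, rules, log_output=False):
    """Assemble an analysis query from the fields described above.

    The wire format is an assumption; field names follow this document.
    """
    return {
        "filename": filename,      # path relative to the project
        "language": language,      # language being analyzed
        "fileEncoding": "utf-8",   # encoding of the file in the IDE
        "codeBase64": base64.b64encode(code.encode("utf-8")).decode("ascii"),
        "rules": rules,            # each rule: id, language, type, ...
        "logOutput": log_output,   # capture rule output when True
    }

# Hypothetical rule entry using the attributes listed above.
rule = {
    "id": "python-security/avoid-eval",  # invented identifier
    "language": "python",
    "type": "ast",                       # "ast" or "pattern"
    "entityChecked": "functioncall",     # only for type "ast"
}
query = build_query("module1/subdir1/myfile.py", "python", "eval(x)", [rule])
```

The code is Base64-encoded before sending, so the service can decode it regardless of the characters it contains.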
Getting the results
The server response has the following schema. For instance, a violation may carry a message such as "there is an important error!", and a fix a description such as "there is a fix for you".
errors: the list of potential execution errors for this rule (one of a fixed set of values)
violations: the list of violations returned by the rule; each violation has
message: a message to show in the editor about the issue
start: the position where highlighting starts in the IDE (line and col)
end: the position where highlighting ends in the IDE (line and col)
severity: the severity of the violation, used to select how to display it (one of a fixed set of values)
category: the category of the violation (one of a fixed set of values)
fixes: a list of fixes, each containing
description: the description of the fix to show in the IDE
edits: the list of edits to apply sequentially; each edit has
editType: the action to apply (one of a fixed set of values)
start: the position where the edit starts
end: the position where the edit ends
content: what to add or remove in the code
executionError: if the error error-execution is set, this attribute contains a message that explains why the execution failed
output: if the request has logOutput set to true, this attribute contains the output of the rule (e.g. what is written on the standard output)
errors: the list of errors raised when processing the request. These errors apply to the request as a whole, not to a single rule.
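As a concrete sketch of consuming such a response, the snippet below walks the violations of a hypothetical parsed response. The per-violation field names are the ones listed above; the ruleResponses wrapper, the identifier attribute, and all example values (severity, category, editType) are assumptions for illustration.

```python
# Hypothetical parsed response; field names follow this document.
response = {
    "ruleResponses": [  # wrapper structure is an assumption
        {
            "identifier": "python-security/avoid-eval",  # invented
            "errors": [],
            "violations": [
                {
                    "message": "there is an important error!",
                    "start": {"line": 1, "col": 1},
                    "end": {"line": 1, "col": 8},
                    "severity": "ERROR",     # value name is an assumption
                    "category": "SECURITY",  # value name is an assumption
                    "fixes": [
                        {
                            "description": "there is a fix for you",
                            "edits": [
                                {
                                    "editType": "update",  # assumption
                                    "start": {"line": 1, "col": 1},
                                    "end": {"line": 1, "col": 8},
                                    "content": "safe_call(x)",
                                }
                            ],
                        }
                    ],
                }
            ],
        }
    ],
    "errors": [],  # request-level errors
}

def iter_violations(response):
    """Yield (rule identifier, violation) pairs to annotate in the editor."""
    for rule_response in response.get("ruleResponses", []):
        for violation in rule_response.get("violations", []):
            yield rule_response.get("identifier"), violation

for identifier, violation in iter_violations(response):
    print(f'{identifier}: {violation["message"]} '
          f'at line {violation["start"]["line"]}')
```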
Triggering an analysis
The analysis should be triggered after the user stops typing. Some IDEs provide a hook for triggering such an analysis. If not, the plugin should detect that the user has stopped writing code for at least 500ms and then trigger the analysis by:
- sending the code with all rules
- getting the results and annotating the code
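The 500ms pause detection can be sketched as a debounce timer. This is a minimal sketch: the class name and the run_analysis callback are hypothetical, and real IDEs often provide their own scheduling hooks instead.

```python
import threading

DEBOUNCE_MS = 500  # trigger after the user stopped typing for 500ms

class AnalysisDebouncer:
    """Restart a timer on every keystroke; fire once typing pauses."""

    def __init__(self, run_analysis):
        self._run_analysis = run_analysis  # sends the code, annotates results
        self._timer = None
        self._lock = threading.Lock()

    def on_keystroke(self, code):
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()  # the user is still typing
            self._timer = threading.Timer(
                DEBOUNCE_MS / 1000.0, self._run_analysis, args=(code,)
            )
            self._timer.daemon = True
            self._timer.start()
```

Each keystroke cancels the pending timer, so run_analysis fires only once the editor has been idle for the full 500ms.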
For each file opened in the editor, we should cache the list of rules that apply. A background job takes care of updating the rules for a specific file. The rules are retrieved from the Codiga API.
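The per-file cache with background refresh described above can be sketched as follows; this is a minimal sketch, and fetch_rules_from_api is a hypothetical stand-in for the Codiga API call.

```python
import threading

class RuleCache:
    """Serve cached rules instantly; refresh them off the UI thread."""

    def __init__(self, fetch_rules_from_api):
        self._fetch = fetch_rules_from_api
        self._rules = {}  # filename -> list of rules
        self._lock = threading.Lock()

    def get_rules(self, filename):
        """Never blocks: returns whatever is currently cached (possibly empty)."""
        with self._lock:
            return self._rules.get(filename, [])

    def refresh(self, filename):
        """Fetch in a background thread so the IDE never waits on the network."""
        def worker():
            rules = self._fetch(filename)
            with self._lock:
                self._rules[filename] = rules
        threading.Thread(target=worker, daemon=True).start()
```

The editor always reads from the cache and never calls the API directly, which keeps annotation instantaneous even when the network is slow.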
There are two cases to fetch the rules:
- a .codiga file is present in the repository
- no .codiga file is present in the repository

.codiga file is present
To find the .codiga file, we walk the directory tree backwards, from the edited file toward the project root.
Imagine we have the following file hierarchy:
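The hierarchy used in the example can be pictured as (derived from the paths mentioned below):

```
project/
├── .codiga
├── module1/
│   ├── .codiga
│   └── subdir1/
│       └── myfile.py
└── module2/
    └── subdir2/
        └── myfile2.py
```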
If we edit the file module1/subdir1/myfile.py, the file module1/.codiga will be used to get the rules.
If we edit the file module2/subdir2/myfile2.py, the file .codiga at the project root will be used to get the rules.
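The upward walk can be sketched as below; the function name is hypothetical.

```python
from pathlib import Path

def find_codiga_file(file_path, project_root):
    """Walk from the edited file's directory up toward the project root,
    returning the first .codiga file found, or None if there is none."""
    directory = Path(file_path).parent
    root = Path(project_root)
    while True:
        candidate = directory / ".codiga"
        if candidate.is_file():
            return candidate
        # Stop at the project root (or the filesystem root as a safety net).
        if directory == root or directory == directory.parent:
            return None
        directory = directory.parent
```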
.codiga file structure
The .codiga file is a YAML file. Its rulesets element lists all the rulesets being used by the IDE.
A rule in a ruleset can be disabled by adding enabled: false to that rule.
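A hypothetical .codiga file following this structure; the ruleset and rule names are invented, and the exact nesting used for per-rule options is an assumption:

```yaml
rulesets:
  - python-security
  - my-team-ruleset:
      rules:
        - unsafe-eval:
            - enabled: false
```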
.codiga file is absent
When the .codiga file is absent, the IDE fetches the default rules for the language from the Codiga API.
The background job continues to poll for the presence of the file.
To debug and troubleshoot our analyzer, we can specify rules directly in the IDE. This is done with a .codiga.debug file, a JSON file that contains all the rule information to send to the analysis service.
Here is the definition of such a file:
When such a file is present, we take its content and send the rules directly to the analysis service.
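A hypothetical .codiga.debug file, reusing the rule attributes from the query described earlier; the top-level rules key and all identifiers and values below are invented for illustration:

```json
{
  "rules": [
    {
      "id": "my-ruleset/my-ast-rule",
      "language": "python",
      "type": "ast",
      "entityChecked": "functioncall"
    },
    {
      "id": "my-ruleset/my-pattern-rule",
      "language": "python",
      "type": "pattern",
      "pattern": "eval(...)"
    }
  ]
}
```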
If the user has an API token specified, the token is sent along with the query.
Fetching rules occurs in the IDE and may incur some latency. For this reason, fetching data from the API must always be done asynchronously and must never introduce lag in the IDE.
In the IDE, we should keep the following preferences:
- API Token: the API token must be stored securely. Some IDEs provide a way to securely store a password/API token; when available, such a service must be used.
- Enabled/disabled: we should provide a way to enable/disable the analysis.