How Dyte Uses Linting to Code Fast and Clean

When working in a team, it's important to have a common language to code fast and clean. That's where linting helps. Learn how we set it up at Dyte.


By Jesús Leganés-Combarro

Dyte is a growing developer tribe that writes thousands of lines of code every day. We began to encounter conflicts when merging due to everyone's distinct code writing style. So, we decided to implement a unified linting code style across all Dyte projects (including a Jira ticket!) to address this issue.

Read this blog to learn how we optimized code quality rather than simply applying a linter to our source code.


Linting


As our linting engine we use eslint, currently one of the most popular and configurable linters for JavaScript and TypeScript. The @dyte-in/eslint-config package is the central element of our unified linting scheme, since it hosts the shareable config with all our customized linting rules. Each project can later extend these rules with its own project-specific ones, although we've tuned the common rules to be usable without any extra customization.

The shareable config is defined in the .eslintrc.js file, which also works as the package's main export. It can feel strange to use a hidden file as a package export, but that's on purpose: it's one of the filenames eslint looks for by default when searching for a project config, so we can reuse it to lint the shareable-config project itself. That's also why we export it as a JavaScript file instead of a static JSON one: eslint doesn't lint JSON files by default.
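As a hypothetical sketch (the actual package manifest isn't shown in this post), the package.json of the shareable config would simply point its main entry at that hidden file:

```json
{
  "name": "@dyte-in/eslint-config",
  "main": ".eslintrc.js"
}
```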

The shareable config extends and enables rules from the eslint:recommended and import/recommended configs, and for TypeScript files it additionally uses the typescript-eslint/recommended and typescript-eslint/recommended-requiring-type-checking configs. The latter makes use of the actual tsconfig.json file used by the project, so our config also looks it up automatically in the project root, including custom-named ones like the tsconfig.eslint.json file we use specifically for linting, since eslint picks the first one it finds alphabetically.
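A minimal sketch of what such a shareable config might look like, assuming the standard typescript-eslint parser and plugin names (the exact rules and globs in our package differ):

```javascript
// Hypothetical .eslintrc.js sketch of the shareable config
module.exports = {
  extends: ['eslint:recommended', 'plugin:import/recommended'],
  overrides: [
    {
      files: ['*.ts', '*.tsx'],
      parser: '@typescript-eslint/parser',
      extends: [
        'plugin:@typescript-eslint/recommended',
        'plugin:@typescript-eslint/recommended-requiring-type-checking',
      ],
      parserOptions: {
        // A glob lets eslint pick up tsconfig.eslint.json before tsconfig.json,
        // since matches are resolved in alphabetical order
        project: ['tsconfig*.json'],
      },
    },
  ],
};
```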

Regarding the rules that we’ve enabled or customized to our usage habits, the most interesting ones are:

  • consistent-return: enforces that functions either always return a value or never do. This has helped us find A LOT of bugs where we were returning undefined to signal failure instead of throwing an exception (or, in this case, returning a rejected Promise), probably due to some legacy code from the EduMeet project that Dyte used in its origins.
  • import/order: ensures all import statements are sorted alphabetically, for tidiness and to prevent duplicates or misconfigurations.
  • max-len: restricts line length; we've customized it to allow unlimited length only on comment lines containing a URL, so we don't break them.
  • no-restricted-globals and no-restricted-properties: configured to prevent usage of the deprecated global event variable (a bug in most cases) and of setInterval() (for performance and runtime execution stability).
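As a hypothetical illustration of the pattern consistent-return catches (findUser and findUserStrict are made-up names, not code from our codebase):

```javascript
// Sometimes returns a value, sometimes falls through to an implicit
// `return undefined`: consistent-return flags this mix.
function findUser(users, id) {
  for (const user of users) {
    if (user.id === id) return user;
  }
  // implicit `return undefined` here; callers can mistake it for success
}

// The lint-clean version makes failure explicit by throwing instead:
function findUserStrict(users, id) {
  const user = users.find((u) => u.id === id);
  if (!user) throw new Error(`User ${id} not found`);
  return user;
}
```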

And for TypeScript-specific rules, the most interesting ones are:

  • @typescript-eslint/ban-ts-comment: forbids usage of ts-comments to disable TypeScript checks without a detailed reason why they have been disabled.
  • Multiple rules to notify users of unsafe usages of the any type.
  • no-restricted-syntax: configured to prevent usage of the TypeScript private keyword in favor of native JavaScript #private class members.
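A hypothetical sketch of that last rule entry (the selector and message are illustrative, not our exact config):

```javascript
// Hypothetical no-restricted-syntax entry banning the TS `private` keyword
'no-restricted-syntax': [
  'error',
  {
    selector:
      ':matches(PropertyDefinition, MethodDefinition)[accessibility="private"]',
    message:
      'Use native #private class members instead of the TypeScript private keyword.',
  },
],
```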

Using the eslint-config package is just like using any other eslint shareable config, and it only takes two steps:
1. Install the package as a devDependency:

npm install --save-dev @dyte-in/eslint-config

2. Extend the eslint config from your project's .eslintrc file. Since the package follows the @<scope>/eslint-config naming convention, eslint will automatically detect and locate it in the node_modules folder, so only the scope name is required:

{
  "extends": ["@dyte-in"]
}

Semantic releases

After finishing the eslint-config package, it was time to publish it so that all the other projects could use it as a devDependency. At Dyte we use the semantic-release tool to automate publishing and releasing new versions of packages, but until now we had been copying its configuration around by hand. It felt ironic to unify and automate the linting rules while still hand-copying the semantic-release configuration, so we decided to fix that too. As a bonus, this made semantic-release easier to use: there's no need to install all the release-process dependencies, since they are already installed by the semantic-release-config package itself.

Similar to the eslint-config package, we created a shareable configuration to store all our common release settings. It lives in a release.config.js file that's exported as the package's main entry, for the same reasons we did something similar with eslint-config's .eslintrc.js: being able to use the config on the project itself lets us automate the semantic releases of the semantic-release-config package itself.

In contrast to eslint-config, the rules are more generic: analyze the commits, generate the release notes and the changelog (and bump the package version number), and publish the package release both to the GitHub Packages Registry and as an asset of the GitHub release. The only interesting points are that we detect the environment we are running in and flag the release as a prerelease when running from our staging (preproduction) environment, and that we don't commit the version-bump changes back to the source code unless all the previous steps have succeeded.
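A minimal sketch of what such a release.config.js might look like, assuming the standard semantic-release plugins (the branch names and exact plugin list are assumptions, not our actual config):

```javascript
// Hypothetical release.config.js sketch
module.exports = {
  branches: [
    'main',
    // Releases cut from the staging (preproduction) branch are prereleases
    { name: 'staging', prerelease: true },
  ],
  plugins: [
    '@semantic-release/commit-analyzer',
    '@semantic-release/release-notes-generator',
    '@semantic-release/changelog',
    '@semantic-release/npm',
    '@semantic-release/github',
    // Committing the version bump runs last, so it only happens
    // if every previous step succeeded
    '@semantic-release/git',
  ],
};
```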

To use the semantic-release-config package, you only need two steps:

1. Install the package as a devDependency:
npm install --save-dev @dyte-in/semantic-release-config

2. Configure semantic-release in the .releaserc file:

{
  "extends": "@dyte-in/semantic-release-config"
}

To run the semantic-release command, it's recommended to install the semantic-release package as a devDependency and add a semantic-release script to the package.json file:

{
  "scripts": {
    "semanic-release": "semantic-release"
  }
}

Releases in GitHub Actions

Now that we have created the packages for the unified eslint and semantic-release configs, it's time to use them in all the other projects. (And yes, @dyte-in/eslint-config and @dyte-in/semantic-release-config are devDependencies of each other, but I'm not talking about that; I mean the rest of our projects.)

Since we use Kubernetes to deploy our systems (and by extension, Docker), we needed some way to access private packages in the GitHub Packages Registry from inside Docker containers. We used to do it by providing a custom NPM_TOKEN environment variable (which in fact holds a GitHub Personal Access Token) and using it as the authToken for the GitHub Packages Registry entry in the project's .npmrc file, the file that tells npm to fetch the project's dependencies from the GitHub Packages Registry. The problem with this approach is that it requires a token with elevated permissions (including writing packages) for all npm-related operations, even when those permissions are not needed (most notably, installing packages). That rules out fine-grained access control and opens a security hole. Worse, it prevented developers from easily installing packages in local environments, since it bypasses the standard location for per-user npmrc authentication. This caused us problems in the past, both when setting up local environments and when installing packages on GitHub Actions CI servers, since tokens had to be granted read access by hand on every repo that needed them.
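The old setup looked roughly like this .npmrc (a hypothetical reconstruction; only the scope name comes from this post):

```
@dyte-in:registry=https://npm.pkg.github.com
//npm.pkg.github.com/:_authToken=${NPM_TOKEN}
```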

For that reason, we decided to take an approach more aligned with how GitHub Actions works, letting us simplify the process by removing useless environment variables and making it more secure. The first step was, obviously, to remove the usage of NPM_TOKEN from the project's .npmrc file and from any other variable set in the GitHub environment, setting it instead as an env entry of the npm install and npm ci GitHub Actions steps, and to replace it in all the project config files with GITHUB_TOKEN, the GitHub Actions standard default token with lowered permissions (mostly just read access to the repos). In local environments, the standard global ~/.npmrc auth config is used instead, so developers only need to authenticate against the GitHub Packages Registry once and can forget about it.

- run: npm install
  env:
    NODE_AUTH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

On the other hand, for the GitHub Actions steps that publish the packages, we use the standard NODE_AUTH_TOKEN environment variable of the npm-related GitHub Actions with the same GitHub Personal Access Token we were using before, since it's only needed to publish packages. We also assign that token to the GITHUB_TOKEN environment variable to create the GitHub release itself. There's a small catch, though: Docker containers are fully isolated from the host environment, so the registry-url field of the standard setup-node action will not work. We need to replicate what setup-node does, which is, in fact, to overwrite and update the content of the project's .npmrc file to explicitly add the line with the token, just the same way we were doing before 😅 only using the standard NODE_AUTH_TOKEN environment variable, and doing it on the checked-out code without committing and pushing it afterward, so the modification is safe. In our case we do it inside the Docker container itself, so we are equally safe here 🙂

RUN echo //npm.pkg.github.com/:_authToken=$NODE_AUTH_TOKEN >> .npmrc
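In a Dockerfile, that line might sit in an excerpt like this (a hypothetical sketch; note that build args still leak into image metadata, so a token with minimal scopes is advisable):

```dockerfile
# The token is passed in at build time, e.g. --build-arg NODE_AUTH_TOKEN=...
ARG NODE_AUTH_TOKEN
COPY package*.json ./
RUN echo "//npm.pkg.github.com/:_authToken=${NODE_AUTH_TOKEN}" >> .npmrc \
 && npm ci \
 && rm -f .npmrc
```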

Bonus: Print secrets in GitHub Actions

GitHub Actions Trick

GitHub Actions detects when you try to print a secret on the console and prevents it from being logged by replacing the secret strings with ***. Sometimes we need to print them for debugging purposes, so we need to trick it. The simplest way is to pipe the output through the UNIX sed command to insert a space after each character of the secret:

run: echo ${{ secrets.YOUR_SECRET }} | sed 's/./& /g'

This way, GitHub Actions cannot match the output against any of the stored secrets, and the secret gets printed with a space after each character.

Disclaimer: please don't do this with your production secrets; use secrets dedicated to testing purposes, ideally one-use-only throwaway ones.

Conclusion

In this blog post, we have seen how we improved our code quality standards by unifying our linting style in a portable and replicable way, and how we simplified our CI systems to make them easier to use for all our coworkers.

If you haven’t heard about Dyte yet, head over to https://dyte.io to learn how we are revolutionizing live video calling through our SDKs and libraries and how you can get started quickly on your 10,000 free minutes which renew every month. If you have any questions, you can reach us at support@dyte.io or ask our developer community.
