
Building and testing Go apps + monorepo + speed – Wattpad Engineering – Medium

 6 years ago
source link: https://medium.com/wattpad-engineering/building-and-testing-go-apps-monorepo-speed-9e9ca4978e19

Building and testing Go apps + monorepo + speed

Or, how we test and build Go code in a monorepo, with TravisCI, and deploy to Docker, quickly and easily.

It will come as no surprise that Wattpad adopted Docker some time ago. Implementation bugs aside, we are deriving great benefit from the technology and its ecosystem.

Like many others, when we started deploying Docker images to production, they layered on top of ubuntu or golang base images, weighed hundreds of MBs, and took a while to build and start.

Around the same time we also started migrating to a monorepo, at least for backend Go apps. We still had some services running Go 1.3 and didn’t want to prioritize updating their build and runtime environment, so the Go 1.5 vendor experiment wasn’t an option, not to mention that we didn’t want to replicate all shared dependencies for each app.

v1: Jenkins is powerfully stupid

Jenkins is a really powerful tool, which enabled us to do complex things like create a sequence of jobs so:

  1. a pull-request can be tested,
  2. a merge to master can be tested,
  3. the result packaged as a Docker image,
  4. then deployed, with vaguely informative status updates in Slack.

Unfortunately, that also meant that adding a new service that needs testing, building, and deploying required logging into the Jenkins web app, cloning a set of jobs, and making small changes to each job carefully. Missing a job or forgetting to update a setting wouldn’t just make the new service not build properly, it could actually break an existing service. Additionally, some of the settings that needed tweaking were hidden behind cryptically named “Advanced” buttons that littered the job configuration screen.

Also, few people knew how Jenkins worked, so when a dev needed something updated, they would ask a test ops person for help rather than do it themselves.

v2: TravisCI is simple. Maybe too simple.

To escape this hell, we moved to TravisCI, where build/test config is all in a yaml file at the root of the repo, and some helper scripts.
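As a sketch of what that first version looked like (the Go version and helper script name here are illustrative assumptions, not Wattpad's actual config):

```yaml
# .travis.yml at the repo root — one file drives the whole monorepo
language: go
go: "1.7"            # note: a single Go version for the entire repo
sudo: required       # Docker builds require the VM-based infrastructure
services:
  - docker
script:
  - ./ci/run.sh      # hypothetical helper script checked into the repo
```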

On the plus side, changing the config is now entirely under the control of the people who work on the repo, which is such a big win it cannot be overstated. On the other hand, the first version of the config used a single Go version for the whole repo’s tests, even though different services get built and deployed with different Go versions in production. Scary dangerous.

Merges use Docker to build images, which means not being able to use Travis’ really fast booting containers, but instead a more traditional VM. Each and every PR tests the entire repo, and every merge builds the entire repo, which makes feedback loops a lot slower than they could be if it was more targeted.

This is one of the downsides to monorepos: build and test tooling is optimized for a repo per app, so the default behaviours don’t align with the needs of a monorepo. Even though it all generally worked, we were averaging a 30 minute CI cycle, where ideally we’re counting our build times in single digit minutes or seconds.

v3: making CI great again

A very brief prelude about abstraction: The biggest benefit we get from Docker is that it affords us an abstraction on applications by providing a consistent packaging and execution pattern that anything we want to deploy can satisfy. The abstraction sometimes leaks a bit when we need to inject data with various Docker fairy dust such as volume mounts, configs with environment variables, and specify ports to bind to. At least we have a common language for doing all that and can use tools like Kubernetes to further let developers control these details.

The monorepo demands a similar abstraction on how to test, build, and package services. Not only should there be a common way to perform these actions, but we wish to maximize code reuse to make it as painless as possible for devs adding or maintaining components in the repo. make to the rescue! A base Makefile containing "standard" targets like test, build, image, and image_push, with behaviours largely dictated by variables that have reasonable default values, means that a new service can be added to the repo with a Makefile containing only 1–2 lines. So simple as to be trivial, yet a component that needs different behaviours can override the targets that constitute the contract of the abstraction and still be tested, built, and deployed like any other.
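As a sketch of the pattern (variable names, registry URL, and build flags are illustrative assumptions, not the actual Wattpad Makefiles):

```makefile
# base.mk — shared targets; every variable is an overridable default
SERVICE    ?= $(notdir $(CURDIR))
GIT_SHA    ?= $(shell git rev-parse --short HEAD)
IMAGE      ?= registry.example.com/$(SERVICE)   # hypothetical registry

test:
	go test ./...

build:
	CGO_ENABLED=0 go build -o bin/$(SERVICE) .

image: build
	docker build -t $(IMAGE):$(GIT_SHA) .

image_push: image
	docker push $(IMAGE):$(GIT_SHA)

.PHONY: test build image image_push
```

```makefile
# services/api/Makefile — the entire file for a typical service
SERVICE = api
include ../../base.mk
```

A component with unusual needs redefines the relevant targets in its own Makefile; the contract (the target names) stays the same, so the CI pipeline treats every service identically.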

The details of the Makefiles and Dockerfiles will be specific to a given monorepo and organization, but some examples are available for inspiration at github.com/jharlap/affected_example_monorepo.

Partial builds FTW!

New Go developers tend to celebrate the speed of its unit tests, largely a result of fast build times, but a large enough code base will still impose build and test times that can try one’s patience. More importantly, the longer the feedback delay in the edit/test loop, the more flow is interrupted, and our precious time is wasted. The monorepo will inevitably grow until a single go test ./... at the root of the repo will take far too long to be pleasing.

affected (github.com/jharlap/affected) computes, for a given git commit range, all the packages that may be affected by the change. It does so by finding the set of packages wherein files are modified by the commits and then augmenting the affected package list by finding all packages that import an affected package, until no new packages can be added to the affected set.
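The fixed-point computation is the interesting part, and can be sketched in a few lines of Go. This is a simplified illustration of the idea behind affected, not its actual code; the package names and import graph below are hypothetical.

```go
package main

import "fmt"

// affected takes a package import graph (pkg -> list of packages it imports)
// and the set of packages directly changed by a commit range. It repeatedly
// adds any package that imports an already-affected package, until no new
// packages can be added (a fixed point).
func affected(imports map[string][]string, changed []string) map[string]bool {
	result := map[string]bool{}
	for _, p := range changed {
		result[p] = true
	}
	for {
		grew := false
		for pkg, deps := range imports {
			if result[pkg] {
				continue
			}
			for _, d := range deps {
				if result[d] {
					result[pkg] = true
					grew = true
					break
				}
			}
		}
		if !grew {
			return result
		}
	}
}

func main() {
	// Hypothetical repo: service/api imports lib/auth; service/feeds imports lib/db.
	imports := map[string][]string{
		"service/api":   {"lib/auth"},
		"service/feeds": {"lib/db"},
		"lib/auth":      {},
		"lib/db":        {},
	}
	got := affected(imports, []string{"lib/auth"})
	fmt.Println(len(got), got["service/api"], got["service/feeds"])
	// → 2 true false: a change in lib/auth affects itself and service/api,
	// but service/feeds does not need to be rebuilt.
}
```

In the real tool the import graph comes from parsing the Go source tree rather than a hand-built map, but the propagation logic is the same.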

Using affected to reduce the set of things to build works perfectly in combination with make_shard, which splits the resulting work across parallel CI jobs, so a large monorepo can now run partial builds, in parallel, with high confidence that all affected components in the repo are tested and built.
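One simple way to shard work like this (a sketch of the general technique, not the actual make_shard script; package names and shard count are illustrative) is to hash each affected package's path and keep only those that land on the current job's shard index:

```shell
#!/bin/sh
# Assign each affected package to one of NUM_SHARDS parallel CI jobs by
# hashing its path, so every job gets a stable, roughly even subset.
NUM_SHARDS=${NUM_SHARDS:-3}
SHARD_INDEX=${SHARD_INDEX:-0}

shard_for() {
  # cksum gives a stable checksum; modulo maps it to a shard index
  echo "$1" | cksum | awk -v n="$NUM_SHARDS" '{print $1 % n}'
}

# Pretend this list came from running affected over the commit range.
for pkg in services/api services/feeds libs/auth; do
  if [ "$(shard_for "$pkg")" = "$SHARD_INDEX" ]; then
    echo "building $pkg on shard $SHARD_INDEX"
    # make -C "$pkg" test build   # uncomment in a real pipeline
  fi
done
```

Because the hash depends only on the package path, every shard independently computes the same assignment with no coordination between jobs.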

tl;dr: A monorepo does require some new tooling, but you don’t need a self-hosted cluster of bare metal or a complex tool like Pants or Buck. Simple tools (affected, make, a shell script) work great.
