Ask HN: How do you work with Dependabot?

source link: https://news.ycombinator.com/item?id=32436316
8 points by dynamite-ready 1 hour ago | 16 comments
I like the idea of Dependabot. A tool that actively tracks down dependency updates can be useful. Where I work, we have a daily CI job that creates a PR for each new dependency and runs a build in both our UI (JavaScript) and API (Python) projects. If the build passes, "Happy Days", we can merge the PR, and the app is all the more secure and effective for it.
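For reference, this daily cadence comes from a config along these lines (a minimal sketch of a `.github/dependabot.yml`; the directory paths are assumptions):

```yaml
# .github/dependabot.yml -- minimal sketch; directory paths are assumptions
version: 2
updates:
  - package-ecosystem: "npm"   # UI project
    directory: "/"             # location of package.json
    schedule:
      interval: "daily"
  - package-ecosystem: "pip"   # API project
    directory: "/"             # location of requirements.txt / pyproject.toml
    schedule:
      interval: "daily"
```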

What I've noticed in practice, however, is that occasionally this process will allow an upgrade to a dependency that passes the automated build and test step but introduces the wildest runtime error into the application. Usually at the time when we aim to deliver something.

Dependency 'spam' is also a very real issue - https://news.ycombinator.com/item?id=27929596 - the daily deluge of often insignificant updates is a slog to deal with, especially when coupled with the risk of these sly runtime errors.

Dependabot is a great idea, and no one appears to have anything bad, or practical, to say against it. But it clearly does have flaws.

I don't think I'd want to switch the bot off, but I would be interested in hearing how other people get on with the tool.

Thanks. :]

Same feelings. I like the idea but in practice I don’t trust it.

Or rather, I don't trust package maintainers to adhere to semver. I prefer to manually go through dependencies, updating one at a time and reading the changelogs. I usually do this in batches. Peace of mind is worth more than the hour saved every week or two.

I do really like the tool that flags security issues with packages though.

> What I've noticed in practice, however, is that occasionally this process will allow an upgrade to a dependency that passes the automated build and test step but introduces the wildest runtime error into the application. Usually at the time when we aim to deliver something.

Sounds like dependabot is very useful for uncovering insufficient test coverage or missing integration tests :)

That would be a shallow reading, however. Of the last two major runtime issues, one was an update that broke the test runner and ignored a number of tests. The other was a Python/Django-specific sub-dependency that broke the admin interface, which, obviously, we don't explicitly test.

On the other hand, very recently we had to abort a release because of an outdated dependency that Dependabot DID actually raise.

Which is why I don't want to throw the baby out with the bathwater, as one or two people have suggested.

But I can say that the reality of working with Dependabot is not very well reflected in popular online articles.

> the admin interface, which, obviously, we don't explicitly test

This... is not obvious. It broke a part of your software which you care about. You care, it caused a problem, so it should be tested.

You routinely write unit tests for dependency packages in your apps?
> one that broke the test runner and ignored a number of tests

That's unfortunate! For the project I'm working on, we've "solved" that by showing the number of tests that ran and the difference from the number that ran on main.

FWIW, at previous jobs, upgrading Java dependencies was a major pain because they were all outdated and the latest versions introduced too many breaking changes for us. At my current job, we pretty much instantly merge all PRs from Dependabot because we trust our CI. Upgrades rarely introduce problems, and when they do, they are easy to fix.

Was it an update to the test runner itself, or to test-specific packages, that broke the test runner? I would ignore infrastructure/testability/tooling packages in Dependabot and handle them manually to prevent these errors.
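Dependabot's `ignore` option can express that; a sketch (the package names here are just hypothetical examples):

```yaml
# Excerpt from .github/dependabot.yml -- package names are hypothetical examples
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    ignore:
      - dependency-name: "jest*"    # leave test-runner updates for manual review
      - dependency-name: "webpack*" # likewise build tooling
```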
> a Python/Django-specific sub-dependency that broke the admin interface, which, obviously, we don't explicitly test

There's your problem.

To address this issue we designed a static analysis that can check whether an upgrade is likely to break the application. Here are some details of the work: "Effective Static Checking of Library Updates" - https://dl.acm.org/doi/abs/10.1145/3236024.3275535

When using the analysis, a PR for upgrading the dependencies would look like this - https://github.com/tmroberts56/java-maven/pull/3

At a previous job, we loved the idea of Dependabot, but in practice it didn't match the way we work or review PRs. And as you said, just because the tests passed doesn't mean that the update is 100% safe.

So instead, we identified our critical dependencies, then added a dependency-update task to the list of tech-debt tasks we handle manually every week.

Yes. This is exactly what I would prefer to do. My current plan is to round up all the Dependabot PRs at the start of each new sprint (every two weeks) and make merging them the first development task in the sprint.

The problem, though, is that so far I can't point to any literature online to support this idea.

I would still like to keep Dependabot, because the diagnostic step it performs is useful. But introducing new dependency upgrades daily, even minor ones, seems like a recipe for trouble. Minor upgrades are just as likely to introduce a vulnerability as they are to patch one, after all.
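For what it's worth, the cadence itself is configurable. As far as I know, Dependabot only offers daily/weekly/monthly intervals (nothing fortnightly), plus a cap on open PRs. A sketch:

```yaml
# Excerpt from .github/dependabot.yml
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"        # daily/weekly/monthly are the documented options
    open-pull-requests-limit: 5 # keeps the sprint-start queue bounded
```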

We ended up developing a (naive) internal tool that let us see how many versions (and days) behind each dependency is, sorted by how much we care about each package.

This gave us a quick dashboard to check before running `yarn upgrade-interactive --latest`.
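A naive version of such a tool can be sketched in a few lines against the npm registry. Everything here (the priority map, the rough "days behind" arithmetic) is a hypothetical illustration, not the actual internal tool:

```typescript
// versions-behind.ts -- naive sketch of the dashboard idea described above.
// Assumes Node 18+ (global fetch); the priority map is hypothetical.
import { readFileSync } from "node:fs";

const priority: Record<string, number> = { react: 1, express: 2 }; // how much we care

async function versionsBehind(name: string, current: string) {
  const meta = await (await fetch(`https://registry.npmjs.org/${name}`)).json();
  const versions = Object.keys(meta.versions); // roughly publish order
  const behind = versions.length - 1 - versions.indexOf(current); // naive count
  const published = new Date(meta.time[current]).getTime();
  const daysBehind = Math.round((Date.now() - published) / 86_400_000); // approximation
  return { name, current, latest: meta["dist-tags"].latest, behind, daysBehind };
}

async function main() {
  const pkg = JSON.parse(readFileSync("package.json", "utf8"));
  const deps = Object.entries(pkg.dependencies ?? {}) as [string, string][];
  const rows = await Promise.all(
    deps.map(([n, v]) => versionsBehind(n, v.replace(/^[~^]/, ""))),
  );
  rows.sort((a, b) => (priority[a.name] ?? 99) - (priority[b.name] ?? 99));
  console.table(rows); // quick look before `yarn upgrade-interactive --latest`
}

main();
```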

My main gripe is that Dependabot can end up raising multiple PRs for the same dependency bump in the same repo (especially with Dockerfiles). I really wish I could tell it to do rollups e.g. `@dependabot rollup #1234 #1235 #1236` or something like that.

To save having to do multiple rounds of merge PR, rebase next PR, wait for CI... I end up doing my own rollup PRs by merging the various Dependabot branches. At least Dependabot is smart enough to close all of the original PRs when the rollup is merged.
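(GitHub has since added grouped version updates to Dependabot, which covers roughly this wish. A sketch of the config, assuming the Docker case above:)

```yaml
# Excerpt from .github/dependabot.yml -- groups all Docker bumps into one rollup PR
updates:
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      docker-rollup:
        patterns:
          - "*"   # every matching dependency lands in a single grouped PR
```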

I set it up on a repo, then delete the emails I receive from it. I ignore everything it says; for me, it's just noise.
Someone knowledgeable about the codebase reviews the changelogs of individual packages. They merge all the simple cases and flag any breaking changes and anything that affects features used in the codebase. These can then be tested/fixed by others not so deeply familiar with the codebase, which fuels knowledge transfer.