Ask HN: How do you keep track of software requirements and test them?
source link: https://news.ycombinator.com/item?id=31083131
105 points by lovehatesoft 5 hours ago | 72 comments

I'm a junior dev who recently joined a small team that doesn't seem to have much in place for tracking requirements or how they're tested, and I was wondering if anybody has recommendations.
Specifically, I'd like to track what the requirements/specifications are and how we'll test to make sure they're met, which I imagine could be a mix of unit and integration/regression tests? Honestly, though, if this is the wrong track to take, I'd appreciate feedback on what we could be doing instead.
I used IBM Rational DOORS at a previous job and thought it really helped with this, but with a small team I don't think it's likely they'll spring for it. Are there open source options out there, or something else that's easy? I thought we could maybe keep track in a spreadsheet (to loosely match DOORS?) or some other file, but I'm sure there would be issues with that as we added to it. Thanks for any feedback!
Agile iteration is just as much about how you carve up work as how you decide what to do next. For example you could break up a task into cases it handles.
> WidgetX handles foobar in main case
> WidgetX handles foobar when exception case arises (More Foo, than Bar)
> WidgetX works like <expected> when zero WidgetY present
Those could be 3 separate iterations on the same software, fully tested and integrated individually, and accumulated over time. And the feedback loop could come internally as in "How does it function amongst all the other requirements?", "How is it contributing to problems achieving that goal?"
If builders built buildings the way programmers write programs, we’d have progressed from wattle-and-daub through wood and reinforced concrete to molecular nanotechnology construction in the first two generations of humans building occupied structures.
Bad analogy is bad because programs and buildings aren't remotely similar or comparable.
We largely used waterfall in GEOINT and I think it was a great match; our processes started to break down and fail when the government started to insist we embrace Agile methodologies to emulate commercial best practices. Software capabilities of ground processing systems are at least somewhat intrinsically coupled to the hardware capabilities of the sensor platforms, and those are known and planned years in advance and effectively immutable once a vehicle is in orbit. The algorithmic capabilities are largely dictated by physics, not by user feedback. When user feedback is critical, e.g. UI components, by all means, be Agile. But if you're developing something like the control software for a thruster system, and the physical capabilities and limitations of the thruster system are known in advance and not subject to user feedback, use waterfall. You have hard requirements, so don't pretend you don't.
I don’t see why you can’t maintain the spirit of agile and develop iteratively while increasing fidelity, in order to learn these things as early as possible.
I like issue tracking that is central to code browsing/change request flows (e.g. Github Issues). These issues can then become code change requests to the requirements testing code, and then to the implementation code, then accepted and become part of the project. As products mature, product ownership folks must periodically review and prune existing requirements they no longer care about, and devs can then refactor as desired.
I don't like overwrought methodologies built around external issue trackers. I don't like tests that are overly concerned with implementation detail or don't have any clear connection to a requirement that product ownership actually cares about. "Can we remove this?" "Who knows, here's a test from 2012 that needs that, but no idea who uses it." "How's the sprint board looking?" "Everything is slipping like usual."
* https://github.com/doorstop-dev/doorstop
* https://github.com/strictdoc-project/strictdoc
Of course requirements can be linked to test cases and test execution reports, based on a defined and described process.
How to build test cases is another story.
The PO has to make the hard decision about what to work on and when. He/She must understand the product deeply and be able to make the hard decisions. Also the PO should be able to test the system to accept the changes.
Furthermore, you don't really need endless lists of requirements. The most important thing is to know what you have to work on next.
Moreover, if your PO can't define the goals, and what needs to be tested to get there, well you have a problem. Assuming the team is committed to some form of Agile and you have such a thing as a PO.
However, I also disagree with the main thrust of this comment. A PO should have responsibility, sure. But if that gets translated into an environment where junior devs on the team are expected to not know requirements, or be able to track them, then you no longer have a team. You have a group with overseers or minions.
There's a gray area between responsibility and democracy. Good luck navigating.
In some work environments, there may be unspoken requirements, or requirements that the people who want the work done don't know they have.
For example, in an online shopping business the head of marketing wants to be able to allocate a free gift to every customer's first order. That's a nice simple business requirement, clearly expressed and straight from the user's mouth.
But there are a bunch of other requirements:
* If the gift item is out of stock, it should not appear as a missing item on the shipping manifest
* If every other item is out of stock, we should not send a shipment with only the gift.
* If we miss the gift from their first order, we should include it in their second order.
* The weight of an order should not include the gift when calculating the shipping charge for the customer, but should include it when printing the shipping label.
* If the first order the customer places is for a backordered item, and the second order they place will arrive before their 'first' order, the gift should be removed from the 'first' order and added to the 'second' order, unless the development cost of that feature is greater than $3000 in which case never mind.
* The customer should not be charged for the gift.
* If the gift item is also available for paid purchase, orders with a mix of gift and paid items should behave sensibly with regard to all the features above.
* Everything above should hold true even if the gift scheme is ended between the customer checking out and their order being dispatched.
* The system should be secure, not allowing hackers to get multiple free gifts, or to get arbitrary items for free.
* The software involved in this should not add more than, say, half a second to the checkout process. Ideally a lot less than that.
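Several of these narrower requirements are mechanically checkable. As one illustration, the weight rule above (exclude the gift from the customer's shipping charge, but include it on the label) could be pinned down in a test; all names here (`Item`, `chargeable_weight`, the gift flag) are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Item:
    weight_kg: float
    is_gift: bool = False

def chargeable_weight(items):
    """Weight used to compute the customer's shipping charge (gift excluded)."""
    return sum(i.weight_kg for i in items if not i.is_gift)

def label_weight(items):
    """Actual parcel weight printed on the shipping label (gift included)."""
    return sum(i.weight_kg for i in items)

order = [Item(1.2), Item(0.3, is_gift=True)]
print(chargeable_weight(order))  # 1.2
print(label_weight(order))       # 1.5
```

A test asserting both numbers turns the unspoken requirement into an explicit, regression-checked one.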
Who is responsible for turning the head of marketing's broad requirement into that list of many more, much narrower requirements?
Depending on the organisation it could be a business analyst, a product owner, a project manager, an engineer as part of planning the work, an engineer as part of the implementation, or just YOLO into production and wait for the unspoken requirements to appear as bug reports.
That is just restating the problem that the "PO can't define the goals."
It's a bigger problem in the industry. Somehow, the Agile marketing campaign succeeded, and now everyone is Agile, regardless of whether the team is following one of the myriad paradigms.
I can rattle off dozens of orgs doing Scrum, but maybe 1 or 2 that actually are. Maybe doing two weeks of work and calling it a sprint, then doing another two weeks of work...and so on. No defined roles. It's just a badge word on the company's culture page.
The companies that are really doing something Agile are the consultancies that are selling an Agile process.
Or maybe I just need to do better testing myself? There are no code reviews around here, nor much emphasis on writing issues, or any emphasis on testing that I've noticed. So it's kind of tough figuring out what I can do.
- we track work (doesn't matter where), each story has a list of "acceptance criteria", for example: 'if a user logs in, there's a big red button in the middle of the screen, and if the user clicks on it, then it turns to green'
- there's one pull request per story
- each pull request contains end-to-end (or other, but mostly e2e) tests that prove that all ACs are addressed, for example the test logs in as a user, finds the button on the screen, clicks it, then checks whether it turned green
- even side effects like outgoing emails are verified
- if the reviewers can't find tests that prove that the ACs are met, then the PR is not merged
- practically no manual testing as anything that a manual tester would do is likely covered with automated tests
- no QA team
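The example acceptance criterion above maps directly onto an executable test. A minimal sketch in plain Python with a toy UI model (a real suite would drive a browser with something like Playwright or Selenium; `Screen` and its fields are invented for illustration):

```python
class Screen:
    """Toy stand-in for the UI state after a user logs in."""
    def __init__(self):
        self.button_color = "red"

    def click_button(self):
        self.button_color = "green"

def test_big_red_button_turns_green():
    screen = Screen()                      # "if a user logs in..."
    assert screen.button_color == "red"    # "...there's a big red button..."
    screen.click_button()                  # "...and if the user clicks on it..."
    assert screen.button_color == "green"  # "...then it turns to green"

test_big_red_button_turns_green()
print("AC verified")
```

The point is the one-to-one shape: each clause of the acceptance criterion becomes one step or assertion, so a reviewer can check the PR's tests against the ticket's ACs line by line.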
And we have a system that provides us a full report of all the tests and links between tests and tickets.
We run all the tests for all the pull requests; that's currently something like 5000 end-to-end tests (that exercise the whole system) and many more tests of other types. One test run for one PR requires around 50 hours of CPU time to finish, so we use pretty big servers.
All this might sound a bit tedious, but it enables practically CI/CD for a medical system. The test suite is the most complete and valid specification for the system.
(we're hiring :) )
That said, I'm unsure about the whole "testing part", especially running all the tests for each PR on typical projects.
My opinion would be to not use all the fancy features that automatically tie issues to merge requests, releases, epics, pipelines etc... it's way too much for a small team that is not doing any type of management.
Just use some basic labels, like "bug" or "feature" and then use labels to denote where they are in the cycle such as "sprinted", "needs testing" etc. Can use the Boards feature if you want something nice to look at. Can even assign weights and estimates.
You can tie all the issues of a current sprint to a milestone, call the milestone a version or w/e and set a date. Now you have history of features/bugs worked on for a version.
In terms of testing, obviously automated tests are best and should be built into every requirement. Sometimes, though, tests must be done manually, and in that case attach a Word doc or use the comments feature on an issue for the "test plan".
* It seems like this would have been a milestone?
* So then maybe a few issues for the different classes or requirements?
* For each issue, after/during development I would note what tests are needed, maybe in the comments section of the issue? Maybe in the description?
* And then automated tests using junit?
I think the common answer is you don't use a requirements management tool, unless it's a massive system, with systems engineers whose whole job is to manage requirements.
Some combination of tech specs and tests are the closest you'll get. Going back to review the original tech spec (design doc, etc) of a feature is a good way to understand some of the requirements, but depending on the culture it may be out of date.
Good tests are a bit closer to living requirements. They can serve to document the expected behavior, and check the system for that behavior.
For simple microservice-type projects I've found a .md file, or even mentioning the requirements in the main README.md to be sufficient.
I think it's important to track requirements over the lifetime of the project. Otherwise you'll find devs flip-flopping between different solutions. E.g. in a recent project we were using an open-source messaging system but it wasn't working for us so we moved to a cloud solution. I noted in the requirements that we wanted a reliable system, and cost and cloud-independence wasn't an important requirement. Otherwise, in two years if I'm gone and a new dev comes on board, they might ask "why are we using proprietary tools for this, why don't we use open source" and spend time refactoring it. Then two years later when they're gone a new dev comes along "this isn't working well, why aren't we using cloud native tools here"....
Also important to add things that aren't requirements, so that you can understand the tradeoffs made in the software. (In the above case, for example, cost wasn't a big factor, which will help future devs understand "why didn't they go for a cheaper solution?")
Also, if there's a bug, is it even a bug? How do you know if you don't know what the system is supposed to do in the first place?
Jira tickets describe individual changes to the system. That's fine for a brand new system. But after the system is 10 years old, you don't want to have to go through all the tickets to work out what the current desired state is.
However, what would be missing from this is discussions for each requirement specified. Or would you want to include that as well?
It would be nice having a dedicated directory for requirements, src, infra, tests and docs. Which would make things easier to track over long period of time I think
Here are some options that I've seen in practice.
A: put everything in your repository in a structured way.

Pros:
- consistent
- actually used in practice by the engineers

Cons:
- hard to work with for non-developers
- too much detail for audits
- hard to combine with documents / e-signatures

B: keep separate Word documents.

Pros:
- high-level, readable documentation overview
- works with auditor workflows
- PMs can work with these documents as well

Cons:
- grows to be inconsistent with your actual detailed requirements
- hard to put in a CI/CD pipeline
A whole different story is the level of detail that you want to put in the requirements. Too much detail and developers feel powerless; too little detail and the QA people feel powerless.
Another useful tool to use in conjunction to the above is running code coverage on each branch to ensure you don't have new code coming in that is not covered by unit tests.
Whatever that feature is meant to be, it definitely isn't requirements management. Requirements don't stop being requirements after you've written the code.
For requirements, use any kind of issue tracker and connect your commits with issues. Jira, which people here hate for various reasons, gets the job done. Otherwise GitHub Issues would work (there are problems with GitHub Issues, e.g. no cross-repo issue tracking in a single place, but that's another story).
For QA, you want your QA to be part of the progress tracking and have it reflected in Jira/GitHub commits.
One thing I think is of equal importance, if not more, is how the code you delivered is used in the wild. Some sort of analytics.
Zoom out a bit: a requirement is what you THINK the user wants. QA is about whether your code CAN do what you think the user wants, plus some safeguards. Analytics is how the user actually behaves in the real world.
A bit off topic here, but QA and analytics are really two sides of the same coin. Yet people treat them as two different domains, with two sets of tools. On one hand, requirements are verified manually through hand-crafted test cases. On the other hand, production behavioural insight is not transformed into future dev/test cases effectively. It is still done manually, if at all.
Think about how many times a user wanders into an untested, undefined interaction that escalates into a support ticket. I'm building a single tool to bridge the gap between product (the requirement and production phases) and quality (testing).
See the post at https://jiby.tech/post/gherkin-features-user-requirements/
I made this into a series of posts about gherkin, where I introduce people to Cucumber tooling and BDD ideals, and show a low-tech alternative to Cucumber using test comments.
As for actually doing the tracking of feature->test, aside from pure Cucumber tooling, I recommend people have a look at sphinxcontrib-needs:
https://sphinxcontrib-needs.readthedocs.io/en/latest/index.h...
Define in docs a "requirement" block with freeform text (though I put gherkin in it), then define more "specifications", "tests" etc. with links to each other, and the tool builds the graph!
Combined with the very alpha sphinx-collections, it allows Jinja templates from arbitrary data: write gherkin in the features/ folder, and make the template generate, for each file under that folder, a sphinxcontrib-needs entry with the gherkin source quoted!
Spec-like BDD tests (RSpec, Jest, Spock, et al. - most languages except Python seem to have a good framework) have all the advantages of forcing behavioral thinking without having to maintain a layer of regex redirects.
I wish this wasn't the case but it's been the reality in my experience and I've been developing software for 20+ years. I'm the rare developer that will ask questions and write things down. And if it seems necessary I will even model it formally and write proofs.
Some industries it is required in some degree. I've worked in regulated industries where it was required to maintain Standard Operating Procedures documents in order to remain compliant with regulators. These documents will often outline how requirements are gathered, how they are documented, and include forms for signing off that the software version released implements them, etc. There are generally pretty stiff penalties for failing to follow procedure (though for some industries I don't think those penalties are high enough to deter businesses from trying to cut corners).
In those companies that had to track requirements we used a git repository to manage the documentation and a documentation system generated using pandoc to do things like generate issue-tracker id's into the documentation consistently, etc.
A few enterprising teams at Microsoft and Amazon are stepping up and building tooling that automates the process of checking a software implementation against a formal specification. For them, mistakes that lead to security vulnerabilities or missed service level objectives can spell millions of dollars in losses. As far as I'm aware it's still novel and not a lot of folks are talking about it yet.
I consider myself an advocate for formal methods but I wouldn't say that it's a common practice. The opinions of the wider industry about formal methods are not great (and that might have something to do with the legacy of advocates past over-promising and under-delivering). If anything at least ask questions and write things down. The name of the game is to never be fooled. The challenge is that you're the easiest person to fool. Writing things down, specifications and what not, is one way to be objective with yourself and overcome this challenge.
As an example, you could have a test that calls some public API and checks that you get the expected response. Assuming your requirement cares about the public API, or the functionality it provides.
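As a hedged sketch of that idea, here the "public API" is a hypothetical in-process handler; a real test would hit the deployed endpoint over HTTP, and the route and data below are invented:

```python
import json

def handle_get_user(user_id):
    """Stand-in for GET /users/<id>; the data here is made up."""
    users = {42: {"id": 42, "name": "Ada"}}
    if user_id in users:
        return 200, json.dumps(users[user_id])
    return 404, json.dumps({"error": "not found"})

def test_get_user_ok():
    status, body = handle_get_user(42)
    assert status == 200
    assert json.loads(body)["name"] == "Ada"

def test_get_user_missing():
    status, _ = handle_get_user(7)
    assert status == 404

test_get_user_ok()
test_get_user_missing()
print("API contract checks passed")
```

The tests only see status codes and response bodies, never internals, which is exactly the level at which the requirement is stated.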
I've tried to be as detailed as I can without knowing much about your application: assumptions were made, apply salt as needed.
Personally, I like having a test-suite be the documentation for what requirements exist. Removing or significantly modifying a test should always be a business decision. Your local Jira guru will probably disagree
2. Architecture and design docs explain the “how” to engineering.
3. The work gets broken down to stories and sub-tasks and added to a Scrum/Kanban board. I like Jira, but have also used Asana and Trello.
Testing is just another sub-task, and part of the general definition of done for a story. For larger projects, a project-specific test suite may be useful. Write failing tests. Once they all pass, you have an indication that the project is nearly done.
You can skip to #3 if everyone is aligned on the goals and how you’ll achieve them.
Of course, don't make the mistake I am guilty of sometimes making, and think you now know better than everyone else just because you've read some things others have not. Gain knowledge, but stay focused on loving the people around you. ("Loving" meaning in the Christian sense of respect, not being selfish, etc; sorry if that is obvious)
Why? Why would you like that? Why you?
If it's not happening, the business doesn't care. Your company is clearly not in a tightly regulated industry. What does the business care about? Better to focus on that instead of struggling to become a QA engineer when the company didn't hire you for that.
Generally, if the team wants to start caring about that, agree to:
1. noting whatever needs to be tested in your tracker
2. writing tests for those things alongside the code changes
3. having code reviews include checking that the right tests were added, too
4. bonus points for making sure code coverage never drops (so no new untested code was introduced)
What sort of tests you want depends a lot on your system. If you're working on some data processing system where you can easily generate many examples of test input then you'll probably get lots of ROI from setting up lots of regression tests that cover loads of behaviour. If it's a complex system involving hardware or lots of clicking in the UI then it can be very good to invest in that setup but it can be expensive in time and cost. In that case, focus on edge or corner cases.
Then in terms of how you use it, you have a few options depending of the types of test:
- you can run through the tests manually every time you do a release (i.e. manual QA) - just make a copy of the spreadsheet and record the results as you go and BAM you have a test report
- if you have some automated tests like pytest going on, then you could use the mark decorator and tag your tests with the functional test ID(s) that they correspond to, and even generate a HTML report at the end with a pass/fail/skip for your requirements
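The same requirement-to-test mapping can be sketched without pytest in plain Python: a decorator records which requirement IDs each test covers, and a small report maps each ID to pass/fail. The IDs and test names are invented for illustration; with pytest you would reach for a custom marker plus a reporting hook instead.

```python
from collections import defaultdict

REGISTRY = []  # list of (requirement_ids, test_function)

def covers(*req_ids):
    """Tag a test with the requirement IDs it verifies."""
    def wrap(fn):
        REGISTRY.append((req_ids, fn))
        return fn
    return wrap

@covers("REQ-101")
def test_login_succeeds():
    assert 1 + 1 == 2  # placeholder check

@covers("REQ-101", "REQ-204")
def test_logout_clears_session():
    assert "session" not in {}  # placeholder check

def run_report():
    """Run every registered test and group outcomes by requirement ID."""
    results = defaultdict(list)
    for req_ids, fn in REGISTRY:
        try:
            fn()
            outcome = "PASS"
        except AssertionError:
            outcome = "FAIL"
        for rid in req_ids:
            results[rid].append((fn.__name__, outcome))
    return dict(results)

print(run_report())
```

The report is exactly the traceability matrix a spreadsheet would hold, except it regenerates itself on every run.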
I know it doesn't get much love on here, but a feature per requirement is a good level to start at. I'd recommend using `Examples` tables for testing each combination.
Having your features run on every PR is worth its weight in gold, and being able to deal with variations in branches relieves most of the headaches from having your requirements outside of the repo.
I have never seen a requirements tracking software that worked well for large systems with lots of parts. Tracing tests to requirements and monitoring requirements coverage is hard. For projects of the size I work on I think more and more that writing a few scripts that work on some JSON files may be less effort and more useful than customizing commercial systems.
I really like end-to-end tests for this, because it tests the system from a user perspective, which is how many requirements are actually coming in, not how they are implemented internally. I also like to write tests for things that can't actually break indirectly. But it makes it so that someone who changes e.g. some function and thus breaks the test realizes that this is an explicit prior specification that they are about to invalidate and might want to double check with someone.
The product owner writes User Stories in a specific human-and-machine readable format (Given/when/then). The engineers build the features specified. Then the test author converts the “gherkin” spec into runnable test cases. Usually you have these “three amigos” meet before the product spec is finalized to agree that the spec is both implementable and testable.
You can have a dedicated “test automation” role or just have an engineer build the Acceptance Tests (I like to make it someone other than the person building the feature so you get two takes on interpreting the spec). You keep the tester “black-box” without knowing the implementation details. At the end you deliver both tests and code, and if the tests pass you can feel pretty confident that the happy path works as intended.
The advantage with this system is product owners can view the library of Gherkin specs to see how the product works as the system evolves. Rather than having to read old spec documents, which could be out of date since they don’t actually get validated against the real system.
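As a toy illustration of the Given/When/Then flow in plain Python (real projects would use Cucumber, behave, or pytest-bdd; the step patterns and cart scenario here are invented):

```python
import re

STEPS = []  # list of (compiled pattern, step function)

def step(pattern):
    """Register a step implementation under a regex pattern."""
    def wrap(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return wrap

@step(r"a cart with (\d+) items")
def given_cart(ctx, n):
    ctx["cart"] = int(n)

@step(r"the user adds (\d+) items")
def when_add(ctx, n):
    ctx["cart"] += int(n)

@step(r"the cart holds (\d+) items")
def then_holds(ctx, n):
    assert ctx["cart"] == int(n)

def run_scenario(lines):
    """Match each Given/When/Then line to a registered step and run it."""
    ctx = {}
    for line in lines:
        text = re.sub(r"^(Given|When|Then|And)\s+", "", line)
        for pattern, fn in STEPS:
            m = pattern.fullmatch(text)
            if m:
                fn(ctx, *m.groups())
                break
        else:
            raise LookupError(f"no step matches: {line}")
    return ctx

ctx = run_scenario([
    "Given a cart with 2 items",
    "When the user adds 3 items",
    "Then the cart holds 5 items",
])
print("scenario passed, cart =", ctx["cart"])
```

This is the "layer of regex redirects" the spec-style-BDD comment below alludes to: the scenario text stays readable to a product owner, while each line dispatches to code.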
A good book for this is “Growing Object-Oriented Software, Guided by Tests” [1], which is one of my top recommendations for junior engineers as it also gives a really good example of OOP philosophy.
The main failure mode I have seen here is not getting buy-in from Product, so the specs get written by Engineering and never viewed by anyone else. It takes more effort to get the same quality of testing with Gherkin, and this is only worthwhile if you are reaping the benefit of non-technical legibility.
All that said, if you do manual release testing, a spreadsheet with all the features and how they are supposed to work, plus a link to where they are automatically tested, could be a good first step if you have high quality requirements. It will be expensive to maintain though.
1: https://smile.amazon.com/Growing-Object-Oriented-Software-Ad...
Based on our experience with some heavyweight requirements management tools, we set out to develop quite the opposite: a simple requirements management tool. It is not open source, but at least it has an open JSON format (good for git/svn), integration with Jira, ReqIF export/import, quick definition of requirements, attributes, and links, and various views. See https://reqview.com
For a small team you can probably build a workable process in Microsoft Access. I use access to track my own requirements during the drafting stage.
TLDR adding structure isn’t always the answer. Your team/org needs to be open to that.
Your job is to do what is being asked of you and not screw it up too much.
If they wanted to track requirements, they'd already track them.
People have very fragile egos - if you come in as a junior dev and start suggesting shit - they will not like that.
If you come in as a senior dev and start suggesting shit, they'll not like it, unless your suggestion is 'how about I do your work for you on top of my work, while you get most or all of the credit'.
That is the only suggestion most other people are interested in.
Source: been working for a while.
However there is a spectrum, and if it turns from "listen rather than speak" in a respectful, learning sort of mentality to "shut up and do as I say, no questions", then requirements tools are not going to address the real problems.
In my experience, having requirements and processes and tools being used in a mindful way can be wonderful, but all that pales in comparison with the effectiveness of a well-working team. But that's the human factor and the difficult part.
Source: also been working a while. Seen good teams that were very democratic and also good teams that were very top-heavy militaristic (happy people all around in both scenarios).
So I _was_ trying to do only what was asked of me, just writing the code, but I guess I thought what I did at my previous job could have helped, which was keeping track of what was needed and then how I planned to accomplish and test it.
But yeah, you've got me thinking about how or whether I should broach this topic; I think my lead is great, seems open to ideas, wants things to work well, so maybe I'll just ask what they think about how to avoid these kinds of mistakes.
Top-level requirements are system requirements and each of them should be tested through system tests. This usually then drips through the implementation layers from system tests to integration tests, to unit tests.
Regression testing really is just running your test suite every time something changes in order to check that everything still works fine.
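A cheap way to get that regression coverage for a data-processing function is to pin invariants over many generated inputs and rerun them on every change. A sketch with a hypothetical `normalize` function (names and cases invented):

```python
def normalize(values):
    """Hypothetical function under test: scale values into [0, 1]."""
    lo, hi = min(values), max(values)
    if lo == hi:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Pin behaviour over several generated inputs; a real suite would also
# snapshot known-good outputs so refactors can't silently change them.
cases = [[1, 2, 3], [5, 5, 5], [-2, 0, 2], list(range(100))]
for case in cases:
    out = normalize(case)
    assert all(0.0 <= v <= 1.0 for v in out)        # stays in range
    if len(set(case)) > 1:
        assert min(out) == 0.0 and max(out) == 1.0  # endpoints are hit
print("regression invariants hold")
```

Running exactly this on every change is the regression suite: if an edit breaks an invariant, the failure points at the requirement it violated.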