
Ask HN: How do you keep track of software requirements and test them?

 2 years ago
source link: https://news.ycombinator.com/item?id=31083131


105 points by lovehatesoft | 72 comments

I'm a junior dev who recently joined a small team that doesn't seem to have much in the way of tracking requirements or how they're tested, and I was wondering if anybody has recommendations.

Specifically, I would like to track what the requirements/specifications are and how we'll test to make sure they're met. Maybe this could be a mix of unit and integration/regression tests? Honestly, though, if this is the wrong track to take entirely, I'd appreciate feedback on what we could be doing instead.

I used IBM Rational DOORS at a previous job and thought it really helped with this, but with a small team I don't think it's likely they'll spring for it. Are there open source options out there, or something else that's easy? I thought we could maybe keep track in a spreadsheet (to loosely mimic DOORS?) or some other file, but I'm sure there would be issues with that as we added to it. Thanks for any feedback!

In a safety-critical industry, requirements tracking is very important. At my current employer, all of our software has to be developed and verified in accordance with DO-178 [0]. We have a dedicated systems engineering team who develop the system requirements from which we, the software development team, develop the software requirements; we have a dedicated software verification team (separate from the development team) who develop and execute the test suite for each project. We use Siemens's Polarion to track the links between requirements, code, and tests, and it's all done under the supervision of an in-house FAA Designated Engineering Representative. Boy is it all tedious, but there's a clear point to it and it catches all the bugs.

[0] https://en.wikipedia.org/wiki/DO-178C

Just wanted to ask, this pretty much ensures you're doing waterfall development, as opposed to agile, right?
Not sure how the parent concretely operates, but there's no reason you can't do Agile this way.

Agile iteration is just as much about how you carve up work as how you decide what to do next. For example you could break up a task into cases it handles.

> WidgetX handles foobar in main case

> WidgetX handles foobar when exception case arises (More Foo, than Bar)

> WidgetX works like <expected> when zero WidgetY present

Those could be 3 separate iterations on the same software, fully tested and integrated individually, and accumulated over time. And the feedback loop could come internally as in "How does it function amongst all the other requirements?", "How is it contributing to problems achieving that goal?"

If builders built buildings the way programmers write programs, then the first woodpecker that came along would destroy civilization. ~ Gerald Weinberg, Weinberg's Second Law

https://www.mindprod.com/jgloss/unmain.html

> If builders built buildings the way programmers write programs, then the first woodpecker that came along would destroy civilization.

If builders built buildings the way programmers write programs, we’d have progressed from wattle-and-daub through wood and reinforced concrete to molecular nanotechnology construction in the first two generations of humans building occupied structures.

Bad analogy is bad because programs and buildings aren't remotely similar or comparable.

Still, I feel like your analogy is the better one: things are moving very fast. With declarative infra and reproducible builds you're pumping out high-quality, well-tested buildings at record speed.

On that path a lot of people would have died due to building collapses and fires, though.

Waterfall and Agile are tools. If you need to hang a photo, a hammer and a nail will do. Cutting down a tree? Maybe not the hammer and the nail.

Could you use both to good effect? Waterfall to make a plan, schedule, and budget; then basically disregard all that, execute using Agile, and see how you fare. Of course there would be a reckoning, as you would end up building the system they want rather than what was spec'd out.

You can and will make changes along the way, but every change is extremely expensive, so it's better to keep changes to a minimum.

And... is your team consistently hitting the estimated product delivery schedules? (honest question)
Waterfall is a great methodology where warranted. It ensures you're doing things in a principled, predictable, repeatable manner. We see all this lamenting about, and effort to implement, reproducibility in science and in build systems, yet we seem to embrace chaos in certain types of engineering practice.

We largely used waterfall in GEOINT, and I think it was a great match; our processes started to break down and fail when the government started to insist we embrace Agile methodologies to emulate commercial best practices. The software capabilities of ground processing systems are at least somewhat intrinsically coupled to the hardware capabilities of the sensor platforms, and those are known and planned years in advance and effectively immutable once a vehicle is in orbit. The algorithmic capabilities are largely dictated by physics, not by user feedback. When user feedback is critical, e.g. for UI components, by all means be Agile. But if you're developing something like the control software for a thruster system, and the physical capabilities and limitations of that system are known in advance and not subject to user feedback, use waterfall. You have hard requirements, so don't pretend you don't.

Even with “hard” requirements in advance, things are always subject to change, or unforeseen requirements additions/modifications will be needed.

I don't see why you can't maintain the spirit of agile and develop iteratively while increasing fidelity, in order to surface these things as early as possible.

When it's technically feasible, I like every repo having alongside it tests for the requirements from an external business user's point of view. If it's an API, then the requirements/tests should be specified in terms of the API, for instance. If it's a UI, then the requirements should be specified in terms of the UI. You can either have documentation blocks next to tests that describe things in human terms, or use one of the DSLs that make the terms and the code the same thing, if you find that ergonomic for your team.
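For example, a minimal sketch of what that can look like in a Python/pytest codebase (the endpoint, requirement ID, and fields here are invented purely for illustration):

    import requests

    BASE_URL = "https://staging.example.com/api"  # hypothetical test environment

    def test_req_042_order_total_includes_tax():
        """REQ-042: The order total returned by the API must include sales tax.

        Phrased and asserted from the API consumer's point of view; nothing
        here depends on how tax is calculated internally.
        """
        resp = requests.post(f"{BASE_URL}/orders",
                             json={"items": [{"sku": "ABC-1", "qty": 2}]})
        assert resp.status_code == 201
        order = resp.json()
        assert order["total"] == order["subtotal"] + order["tax"]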

I like issue tracking that is central to the code browsing/change request flow (e.g. GitHub Issues). These issues then become change requests to the requirements-testing code, then to the implementation code, and once accepted they become part of the project. As products mature, product ownership folks must periodically review and prune existing requirements they no longer care about, and devs can then refactor as desired.

I don't like overwrought methodologies built around external issue trackers. I don't like tests that are overly concerned with implementation detail or don't have any clear connection to a requirement that product ownership actually cares about. "Can we remove this?" "Who knows, here's a test from 2012 that needs that, but no idea who uses it." "How's the sprint board looking?" "Everything is slipping like usual."

I review software for at least 3-5 companies per week as part of FDA submission packages. The FDA requires traceability between requirements and validation. While many small companies just use Excel spreadsheets for traceability, the majority of large companies seem to use Jira tickets alongside Confluence. Those aren't the only methods, but they account for about 90% of the packages I review.

I would love to see how other companies do it. I understand the need for traceability, but the implementation in my company is just terrible. We have super expensive systems that are very tedious to use. The processes are slow and clunky. There must be a better way.

We have been working on software for FDA submissions as well. We use Jama https://www.jamasoftware.com/ for requirements management and traceability to test cases.

Health tech - we also use this combo. The Jira test management plugin Xray is pretty good if you need more traceability.

Xray and R4J plugins make it pretty nice in JIRA... as far as traceability goes it's MUCH more user friendly than DOORS.

Exactly the same process for us, also in healthcare and medical devices.

Hi, we're trying to build a validated software environment for an ELN tool. I would be interested in learning more about your experience with this software review process if you could spare a few minutes -- [email protected]
Zooming into "requirements management" (and out of "developing test cases") there's a couple of Open Source projects that address specifically this important branch of software development. I like both approaches and I think they might be used in different situations. By the way, the creators of these two projects are having useful conversations on aspects of their solutions so you might want to try both and see what's leading from your point of view.

* https://github.com/doorstop-dev/doorstop

* https://github.com/strictdoc-project/strictdoc

Of course requirements can be linked to test cases and test execution reports, based on a defined and described process.

How to build test cases is another story.

Let the product owner (PO) handle them.

The PO has to make the hard decisions about what to work on and when. He/she must understand the product deeply. The PO should also be able to test the system in order to accept the changes.

Furthermore, you don't really need endless lists of requirements. The most important thing is to know what the next thing you have to work on is.

This actually has a nugget of wisdom. I wish I was more open to soaking up wisdom - and less likely to argue a point - when I was a junior dev. Or still now, really.

Moreover, if your PO can't define the goals, and what needs to be tested to get there, well you have a problem. Assuming the team is committed to some form of Agile and you have such a thing as a PO.

However, I also disagree with the main thrust of this comment. A PO should have responsibility, sure. But if that gets translated into an environment where junior devs on the team are expected to not know requirements, or be able to track them, then you no longer have a team. You have a group with overseers or minions.

There's a gray area between responsibility and democracy. Good luck navigating.

> Moreover, if your PO can't define the goals, and what needs to be tested to get there, well you have a problem.

In some work environments, there may be unspoken requirements, or requirements that the people who want the work done don't know they have.

For example, in an online shopping business the head of marketing wants to be able to allocate a free gift to every customer's first order. That's a nice simple business requirement, clearly expressed and straight from the user's mouth.

But there are a bunch of other requirements:

* If the gift item is out of stock, it should not appear as a missing item on the shipping manifest

* If every other item is out of stock, we should not send a shipment with only the gift.

* If we miss the gift from their first order, we should include it in their second order.

* The weight of an order should not include the gift when calculating the shipping charge for the customer, but should include it when printing the shipping label.

* If the first order the customer places is for a backordered item, and the second order they place will arrive before their 'first' order, the gift should be removed from the 'first' order and added to the 'second' order, unless the development cost of that feature is greater than $3000 in which case never mind.

* The customer should not be charged for the gift.

* If the gift item is also available for paid purchase, orders with a mix of gift and paid items should behave sensibly with regard to all the features above.

* Everything above should hold true even if the gift scheme is ended between the customer checking out and their order being dispatched.

* The system should be secure, not allowing hackers to get multiple free gifts, or to get arbitrary items for free.

* The software involved in this should not add more than, say, half a second to the checkout process. Ideally a lot less than that.

Who is responsible for turning the head of marketing's broad requirement into that list of many more, much narrower requirements?

Depending on the organisation it could be a business analyst, a product owner, a project manager, an engineer as part of planning the work, an engineer as part of the implementation, or just YOLO into production and wait for the unspoken requirements to appear as bug reports.

> there may be unspoken requirements, or requirements that the people who want the work done don't know they have

That is just restating the problem that the "PO can't define the goals."

It's a bigger problem in the industry. Somehow, the Agile marketing campaign succeeded, and now everyone is Agile, regardless of whether the team is following one of the myriad paradigms.

I can rattle off dozens of orgs that say they're doing Scrum, but maybe 1 or 2 that actually are. Maybe they're doing two weeks of work and calling it a sprint, then doing another two weeks of work... and so on. No defined roles. It's just a badge word on the company's culture page.

The companies that are really doing something Agile are the consultancies that are selling an Agile process.

That would be nice, and maybe I should have clarified why I asked the question. I was asked to add a large new feature, and some bugs popped up along the way. I thought better testing could have helped, and then I thought it might also help to list the requirements so I can determine which tests to write/perform. And really, I thought I could have been writing those myself: the PO tells me what's needed in general terms, and I try to determine what's important from there.

Or maybe I just need to do better testing myself? There are no code reviews around here, nor much emphasis on writing issues, nor any emphasis on testing that I've noticed. So it's kind of tough figuring out what I can do.

This is a LOT to put on a PO. I hope they have help.
What we do:

- we track work (doesn't matter where), each story has a list of "acceptance criteria", for example: 'if a user logs in, there's a big red button in the middle of the screen, and if the user clicks on it, then it turns to green'

- there's one pull request per story

- each pull request contains end-to-end (or other, but mostly e2e) tests that prove that all ACs are addressed, for example the test logs in as a user, finds the button on the screen, clicks it, then checks whether it turned green

- even side effects like outgoing emails are verified

- if the reviewers can't find tests that prove that the ACs are met, then the PR is not merged

- practically no manual testing as anything that a manual tester would do is likely covered with automated tests

- no QA team

And we have a system that provides us a full report of all the tests and links between tests and tickets.

We run all the tests for every pull request; that's currently something like 5000 end-to-end tests (which exercise the whole system) and many more tests of other types. One test run for one PR requires around 50 hours of CPU time to finish, so we use pretty big servers.

All this might sound a bit tedious, but it enables practically full CI/CD for a medical system. The test suite is the most complete and valid specification of the system.
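To make the AC-to-test link concrete, here is a rough sketch of what a test for the "red button turns green" acceptance criterion above could look like. A Python + Playwright stack is purely my assumption here; the actual setup may be completely different.

    # Sketch of an e2e test for a hypothetical ticket ABC-123, AC:
    # "if a user logs in, there's a big red button in the middle of the screen,
    #  and if the user clicks on it, then it turns to green"
    from playwright.sync_api import sync_playwright

    def test_abc_123_red_button_turns_green_when_clicked():
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto("https://staging.example.com/login")  # hypothetical URL
            page.fill("#username", "test-user")
            page.fill("#password", "s3cret")
            page.click("button[type=submit]")
            button = page.locator("#big-button")             # hypothetical selector
            assert "red" in (button.get_attribute("class") or "")
            button.click()
            assert "green" in (button.get_attribute("class") or "")
            browser.close()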

(we're hiring :) )

A dream setup for any project. It's already hard to get people to understand one PR per PBI/US, or that we/they shouldn't start working on a PBI/US without acceptance criteria.

Beyond that, I'm unsure about the whole testing part, especially running all the tests for each PR on typical projects.

GitLab. Just use Issues; you can do everything with the free tier. (It's called the "Issues workflow" - GitLab goes a little overboard, but I'd look at pictures of people's issue lists to get examples.)

My opinion would be to not use all the fancy features that automatically tie issues to merge requests, releases, epics, pipelines, etc... it's way too much for a small team that isn't doing any kind of formal management.

Just use some basic labels, like "bug" or "feature", and then use labels to denote where items are in the cycle, such as "sprinted", "needs testing", etc. You can use the Boards feature if you want something nice to look at, and you can even assign weights and estimates.

You can tie all the issues of a current sprint to a milestone, call the milestone a version or whatever, and set a date. Now you have a history of the features/bugs worked on for each version.

In terms of testing, obviously automated tests are best and should just be built in as part of every requirement. Sometimes, though, tests must be done manually, and in that case attach a Word doc or use the comments on the issue for the "test plan".

If possible, could I get your opinion on a specific example? In my current situation, I was asked to add a feature which required a few (Java) classes. So -

* It seems like this would have been a milestone?

* So then maybe a few issues for the different classes or requirements?

* For each issue, after/during development I would note what tests are needed, maybe in the comments section of the issue? Maybe in the description?

* And then automated tests using junit?

I was at Lockheed Martin for a few years, where Rational DOORS was used. Now I'm at a smaller startup (quite happy to never touch DOORS again).

I think the common answer is that you don't use a requirements management tool unless it's a massive system, with systems engineers whose whole job is to manage requirements.

Some combination of tech specs and tests are the closest you'll get. Going back to review the original tech spec (design doc, etc) of a feature is a good way to understand some of the requirements, but depending on the culture it may be out of date.

Good tests are a bit closer to living requirements. They can serve to document the expected behavior and check the system for that behavior.

I think it's important to keep requirements in Git along with the source code. That way when you implement a new feature you can update the requirements and commit it along with the code changes. When the PR is merged, code and requirements both get merged (no chance to forget to update e.g. a Confluence document). Each branch you check out is going to have the requirements that the code in that branch is supposed to implement.

For simple microservice-type projects I've found a .md file, or even mentioning the requirements in the main README.md to be sufficient.
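As a sketch of what such a file can look like (the IDs and wording are invented, loosely based on the messaging example further down):

    # requirements.md

    ## Functional
    - REQ-001: Messages must be delivered at least once; duplicate delivery is acceptable.
    - REQ-002: End-to-end delivery latency should stay low at current volumes.

    ## Non-requirements / accepted trade-offs
    - Cloud-independence is NOT a requirement; managed/proprietary services are fine.
    - Cost is not a primary driver at current volumes.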

I think it's important to track requirements over the lifetime of the project. Otherwise you'll find devs flip-flopping between different solutions. E.g. in a recent project we were using an open-source messaging system but it wasn't working for us so we moved to a cloud solution. I noted in the requirements that we wanted a reliable system, and cost and cloud-independence wasn't an important requirement. Otherwise, in two years if I'm gone and a new dev comes on board, they might ask "why are we using proprietary tools for this, why don't we use open source" and spend time refactoring it. Then two years later when they're gone a new dev comes along "this isn't working well, why aren't we using cloud native tools here"....

Also important to add things that aren't requirements, so that you can understand the tradeoffs made in the software. (In the above case, for example, cost wasn't a big factor, which will help future devs understand "why didn't they go for a cheaper solution?")

Also, if there's a bug, is it even a bug? How do you know if you don't know what the system is supposed to do in the first place?

Jira tickets describe individual changes to the system. That's fine for a brand new system. But after the system is 10 years old, you don't want to have to go through all the tickets to work out what the current desired state is.

I really like this idea.

However, what would be missing from this is the discussion around each requirement. Or would you want to include that as well?

It would be nice to have dedicated directories for requirements, src, infra, tests and docs, which I think would make things easier to track over a long period of time.
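Something like this, as a sketch of the layout idea (the names are just illustrative):

    project/
      requirements/   # one .md file per area, requirement IDs inside
      src/            # implementation
      infra/          # deployment / infrastructure as code
      tests/          # automated tests, referencing the requirement IDs
      docs/           # design docs, decision records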

This is super interesting and incredibly difficult. In some regulated environments, like medical devices, you MUST keep track of requirements in your product's technical documentation. I work on a Software Medical Device product and have seen tons of workflows at similar companies. There are many different approaches to this and none that I have seen work really well. In my view this field is ripe for disruption and would benefit from standardization and better tooling.

Here are some options that I've seen in practice.

A: put everything in your repository in a structured way:

pros:
- consistent
- actually used in practice by the engineers

cons:
- hard to work with for non-developers
- too much detail for audits
- hard to combine with documents / e-signatures

B: keep separate Word documents

pros:
- high level, readable documentation overview
- works with auditor workflows
- PMs can work with these documents as well

cons:
- grows to be inconsistent with your actual detailed requirements
- hard to put in a CI/CD pipeline

A whole different story is the level of details that you want to put in the requirements. Too much detail and developers feel powerless, too little detail and the QA people feel powerless.

For option A, how do you put the requirements in the repo? Another user mentioned the possibility of having a "req" folder at the same level as e.g. "src" and "test". Maybe the file structure would match that of the other directories? And what do you use - Excel files, Word docs, .md files, something else?
We use an issue tracking system like Jira, Trello, Asana, etc., and each "ticket" has a unique identifier followed by a brief description. You can add all other sorts of labels, descriptions, etc. to better map to the requirements you get. Next, all git branches are named exactly the same way as the corresponding ticket. Unit tests are created under the same branch. After getting PR'd in, the code and unit tests can always be matched up to the ticket and therefore the requirement. For us, this system is good enough to replace the usual plethora of documentation the military requires. It does require strict discipline that can take extra time sometimes, but all devs on my team prefer it to writing more robust documentation.

Another useful tool to use in conjunction with the above is running code coverage on each branch to ensure you don't have new code coming in that is not covered by unit tests.
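As a sketch of how that gate can be wired up on a Python project (pytest-cov and diff-cover are my suggestions here, not necessarily what the parent team uses):

    # Fail the run if overall coverage drops below a chosen threshold
    pytest --cov=myproject --cov-report=xml --cov-fail-under=80

    # Stricter variant: fail if any lines changed on this branch are uncovered
    diff-cover coverage.xml --compare-branch=origin/main --fail-under=100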

We have Word documents for requirements and (manual) test cases, plus a self-written audit tool that checks the links between them and converts them into hyperlinked, searchable HTML. It's part of the daily build. We are mostly happy with it. It is nice to know that we can switch to a better tool at any time (after all, our requirements have an "API"), but we still have not found a better one.
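The core of such an audit check can be quite small. A toy sketch of the idea (not the parent's actual tool; the folder layout and REQ-#### ID convention are invented, and real Word documents would need something like python-docx to extract the text first):

    import re
    from pathlib import Path

    REQ_ID = re.compile(r"\bREQ-\d{4}\b")

    def ids_under(folder: str) -> set[str]:
        """Collect every requirement ID mentioned in the text files under a folder."""
        found: set[str] = set()
        for path in Path(folder).rglob("*.txt"):
            found |= set(REQ_ID.findall(path.read_text(errors="ignore")))
        return found

    defined = ids_under("requirements")   # IDs defined in the requirement documents
    covered = ids_under("testcases")      # IDs referenced from test case documents

    print("Requirements with no test case:", sorted(defined - covered))
    print("Test cases referencing unknown requirements:", sorted(covered - defined))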
I just tried that feature (GitLab's requirements management). I added a requirement. I could only add a title and description, which wasn't great. The requirement appeared in the Issues list, which was a bit odd, and when I closed the issue the requirement disappeared from the requirements list.

Whatever that feature is meant to be, it definitely isn't requirements management. Requirements don't stop being requirements after you've written the code.

GitLab employee here - we list Requirements Management as at "minimal" maturity. I'm sure the team would love to hear more about why the feature didn't work for you - you can learn more about the direction and the team here: https://about.gitlab.com/direction/plan/certify/#requirement...
It depends where you are in your career and what the industry at the time offers.

For requirements, use any kind of issue tracker and connect your commits with issues. Jira - people here hate it for various reasons, but it gets the job done. Otherwise GitHub Issues would work (there are problems with GitHub Issues, e.g. no cross-repo issue tracking in a single place, but that's another story).

For QA, you want your QA to be part of the progress tracking and have it reflected in Jira/GitHub commits.

One thing I think is of equal importance, if not more, is how the code you delivered is used in the wild. Some sort of analytics.

Zooming out a bit: a requirement is what you THINK the user wants. QA is about whether your code CAN do what you think the user wants, plus some safeguards. Analytics is how users actually behave in the real world.

A bit off topic here, but QA and analytics are really two sides of the same coin, yet people treat them as two different domains with two sets of tools. On one hand, requirements are verified manually through hand-crafted test cases. On the other hand, production behavioural insight is not effectively transformed into future dev/test cases; it is still done manually, if at all.

Think about how many times a user wanders into an untested, undefined interaction that escalates into a support ticket. I'm building a single tool to bridge the gap between product (the requirements and production phases) and quality (testing).

Having had a similar discussion at work recently, I've written in favour of using Gherkin Features to gather high-level requirements (and sometimes a bit of specification), mostly stored in Jira Epics to clarify what's being asked.

See the post at https://jiby.tech/post/gherkin-features-user-requirements/

I made this into a series of posts about Gherkin, where I introduce people to Cucumber tooling and BDD ideals, and show a low-tech alternative to Cucumber: Gherkin in test comments.
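The low-tech version is basically just keeping the Gherkin text right next to the test that exercises it, e.g. in a docstring. A sketch (the feature and the helper functions are hypothetical):

    def test_registered_user_gets_single_reset_link():
        """
        Feature: Password reset
          Scenario: Registered user requests a reset link
            Given a registered user with a verified email address
            When they request a password reset
            Then exactly one single-use reset link is emailed to them
        """
        user = create_verified_user()            # hypothetical test helper
        request_password_reset(user.email)       # hypothetical application call
        assert len(sent_reset_links(user)) == 1  # hypothetical outbox inspection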

As for actually doing the tracking of feature->test, aside from pure Cucumber tooling, I recommend people have a look at sphinxcontrib-needs:

https://sphinxcontrib-needs.readthedocs.io/en/latest/index.h...

Define in your docs a "requirement" block with freeform text (though I put Gherkin in it), then define further "specifications", "tests" etc. with links to each other, and the tool builds the graph!

Combined with the very alpha sphinx-collections, it allows Jinja templates to be driven from arbitrary data:

Write Gherkin in a features/ folder, and make the template generate, for each file under that folder, a sphinxcontrib-needs entry quoting the Gherkin source!

https://sphinx-collections.readthedocs.io/en/latest/

I have never met a dev who ever enjoyed Cucumber/Gherkin stuff. There's a lot of decorative overhead to make code look friendly to non-coders. Non-coders who eventually never look at the "pretty" code.

Spec-like BDD tests (RSpec, Jest, Spock, et al. - most languages except Python seem to have a good framework) have all the advantages of forcing behavioral thinking without having to maintain a layer of regex redirects.

Depends on the industry. In most web services, applications, and desktop software shops, you don't. You track them informally through various tests your team may or may not maintain (ugh), and you'll hardly ever encounter any documentation or specification, formal or informal, of any kind, ever.

I wish this wasn't the case but it's been the reality in my experience and I've been developing software for 20+ years. I'm the rare developer that will ask questions and write things down. And if it seems necessary I will even model it formally and write proofs.

In some industries it is required to some degree. I've worked in regulated industries where it was required to maintain Standard Operating Procedure documents in order to remain compliant with regulators. These documents will often outline how requirements are gathered, how they are documented, and include forms for signing off that the released software version implements them, etc. There are generally pretty stiff penalties for failing to follow procedure (though for some industries I don't think those penalties are high enough to deter businesses from trying to cut corners).

In those companies that had to track requirements, we used a git repository to manage the documentation and a pandoc-based documentation system to do things like consistently generate issue-tracker IDs into the documentation.

A few enterprising teams at Microsoft and Amazon are stepping up and building tooling that automates the process of checking a software implementation against a formal specification. For them, mistakes that lead to security vulnerabilities or missed service level objectives can spell millions of dollars in losses. As far as I'm aware it's still novel and not a lot of folks are talking about it yet.

I consider myself an advocate for formal methods but I wouldn't say that it's a common practice. The opinions of the wider industry about formal methods are not great (and that might have something to do with the legacy of advocates past over-promising and under-delivering). If anything at least ask questions and write things down. The name of the game is to never be fooled. The challenge is that you're the easiest person to fool. Writing things down, specifications and what not, is one way to be objective with yourself and overcome this challenge.

I'd go for integration or end-to-end tests, depending on your application. Name each test after a requirement and make sure the test ensures the entirety of that requirement is fulfilled as intended (but avoid testing the implementation).

As an example, you could have a test that calls some public API and checks that you get the expected response. Assuming your requirement cares about the public API, or the functionality it provides.
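For instance (a sketch; the endpoint and the requirement wording are invented):

    import requests

    def test_search_results_are_ordered_by_relevance():
        """Requirement: search results are returned in descending relevance order.

        Only the public API response is checked; the ranking implementation
        is deliberately not asserted on.
        """
        resp = requests.get("https://api.example.com/search", params={"q": "widget"})
        assert resp.status_code == 200
        scores = [hit["score"] for hit in resp.json()["results"]]
        assert scores == sorted(scores, reverse=True)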

I've tried to be as detailed as I can without knowing much about your application: assumptions were made, apply salt as needed.

Personally, I like having a test-suite be the documentation for what requirements exist. Removing or significantly modifying a test should always be a business decision. Your local Jira guru will probably disagree

1. Start with a product/project brief that explains the who, why, and what of the project at a high level to ensure the business is aligned.

2. Architecture and design docs explain the “how” to engineering.

3. The work gets broken down to stories and sub-tasks and added to a Scrum/Kanban board. I like Jira, but have also used Asana and Trello.

Testing is just another sub-task, and part of the general definition of done for a story. For larger projects, a project-specific test suite may be useful. Write failing tests. Once they all pass, you have an indication that the project is nearly done.

You can skip to #3 if everyone is aligned on the goals and how you’ll achieve them.

Since you mention you're a junior dev, I wanted to suggest taking the long road and (1) listening to what others say (you're already doing that by asking here, but don't overlook coworkers much closer to you) and (2) starting to read on the subject. Might I suggest Eric Evans' "Domain-Driven Design" as a starting point? Don't stop there. Reading is not a quick, easy path, but you will benefit from those that have gone before you.

Of course, don't make the mistake I am guilty of sometimes making, and think you now know better than everyone else just because you've read some things others have not. Gain knowledge, but stay focused on loving the people around you. ("Loving" meaning in the Christian sense of respect, not being selfish, etc; sorry if that is obvious)

> I would like to track what the requirements/specifications are, and how we'll test to make sure they're met

Why? Why would you like that? Why you?

If it's not happening, the business doesn't care. Your company is clearly not in a tightly regulated industry. What does the business care about? Better to focus on that instead of struggling to become a QA engineer when the company didn't hire you for that.

Generally, if the team wants to start caring about that, agree to:

1. noting whatever needs to be tested in your tracker

2. writing tests for those things alongside the code changes

3. having code reviews include checking that the right tests were added, too

4. bonus points for making sure code coverage never drops (so no new untested code was introduced)

Given the large, monolithic legacy nature of our backend, we use JIRA for feature tracking, and each story gets a corresponding functional test implemented in CucumberJS, with the expectation that once a ticket is closed as complete, its test is already part of 'the test suite' we run during releases. Occasionally the tests flake (it's all just WebDriver under the hood), so they require maintenance, but covering the entire codebase with manual tests, even well documented ones, would take days, so this is by far our preferred option.

As a bonus, we run the suite throughout the day as a sort of canary for things breaking upstream, which we've found to be almost as useful as our other monitoring as far as signalling failures goes.
At my work we’ve needed a QMS and requirements traceability. We first implemented it in google docs via AODocs. Now we’ve moved to Jira + Zephyr for test management + Enzyme. I can’t say I recommend it.
In a small team, I have found that a simple spreadsheet of tests can go a long way. Give it a fancy name like "Subcomponent X Functional Test Specification" and have one row per requirement. Give them IDs (e.g. FNTEST0001).

What sort of tests you want depends a lot on your system. If you're working on some data processing system where you can easily generate many examples of test input then you'll probably get lots of ROI from setting up lots of regression tests that cover loads of behaviour. If it's a complex system involving hardware or lots of clicking in the UI then it can be very good to invest in that setup but it can be expensive in time and cost. In that case, focus on edge or corner cases.

Then in terms of how you use it, you have a few options depending of the types of test:

- you can run through the tests manually every time you do a release (i.e. manual QA) - just make a copy of the spreadsheet and record the results as you go and BAM you have a test report

- if you have some automated tests like pytest going on, then you could use the mark decorator and tag your tests with the functional test ID(s) that they correspond to, and even generate a HTML report at the end with a pass/fail/skip for your requirements
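For the pytest route, that can look roughly like this (the marker name, IDs, and function under test are invented; pytest-html is one way to get the report):

    # pytest.ini - register the marker so pytest doesn't warn about it:
    #   [pytest]
    #   markers =
    #       fntest(id): links a test to a functional test spec ID

    import pytest

    @pytest.mark.fntest("FNTEST0001")
    def test_rejects_uploads_over_the_size_limit():
        assert upload(b"x" * (MAX_UPLOAD_BYTES + 1)) is None  # hypothetical function and constant

    # Then e.g.:
    #   pytest --html=report.html --self-contained-html
    # gives a pass/fail/skip report you can file next to the spreadsheet.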

This is where Cucumber is great.

I know it doesn't get much love on here, but a feature per requirement is a good level to start at. I'd recommend using `Examples` tables for testing each combination.

Having your features run on every PR is worth its weight in gold, and being able to deal with variations across branches relieves most of the headaches that come from keeping your requirements outside of the repo.
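For anyone who hasn't seen it, an `Examples` table lets one scenario outline cover every combination, something like this (an invented feature, just to show the shape):

    Feature: Shipping charge
      Scenario Outline: Free shipping above a threshold
        Given a cart with a subtotal of <subtotal>
        When the customer checks out
        Then the shipping charge is <charge>

        Examples:
          | subtotal | charge |
          | 19.99    | 4.95   |
          | 49.99    | 4.95   |
          | 50.00    | 0.00   |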

We use a system called Cockpit. It’s terrible to say the least.

I have never seen a requirements tracking software that worked well for large systems with lots of parts. Tracing tests to requirements and monitoring requirements coverage is hard. For projects of the size I work on I think more and more that writing a few scripts that work on some JSON files may be less effort and more useful than customizing commercial systems.

For smaller teams/projects I like to track as much of the requirements as possible as code, because of how hard it is to keep anything written down in natural language up to date and to have a useful history of it.

I really like end-to-end tests for this, because they test the system from a user perspective, which is how many requirements actually come in, not how they are implemented internally. I also like to write tests for things that can't actually break indirectly, because it means that someone who changes e.g. some function and thus breaks the test realizes that this is an explicit prior specification they are about to invalidate, and might want to double check with someone.

One framework that is appealing but requires organizational discipline is Acceptance Testing with Gherkin.

The product owner writes User Stories in a specific human-and-machine readable format (Given/when/then). The engineers build the features specified. Then the test author converts the “gherkin” spec into runnable test cases. Usually you have these “three amigos” meet before the product spec is finalized to agree that the spec is both implementable and testable.

You can have a dedicated "test automation" role or just have an engineer build the acceptance tests (I like to make it someone other than the person building the feature, so you get two takes on interpreting the spec). You keep the tester "black-box", without knowledge of the implementation details. At the end you deliver both tests and code, and if the tests pass you can feel pretty confident that the happy path works as intended.

The advantage with this system is product owners can view the library of Gherkin specs to see how the product works as the system evolves. Rather than having to read old spec documents, which could be out of date since they don’t actually get validated against the real system.

A good book for this is “Growing Object-Oriented Software, Guided by Tests” [1], which is one of my top recommendations for junior engineers as it also gives a really good example of OOP philosophy.

The main failure mode I have seen here is not getting buy-in from Product, so the specs get written by Engineering and never viewed by anyone else. It takes more effort to get the same quality of testing with Gherkin, and this is only worthwhile if you are reaping the benefit of non-technical legibility.

All that said, if you do manual release testing, a spreadsheet with all the features and how they are supposed to work, plus a link to where they are automatically tested, could be a good first step if you have high quality requirements. It will be expensive to maintain though.

1: https://smile.amazon.com/Growing-Object-Oriented-Software-Ad...

disclosure: I'm involved in the product mentioned - https://reqview.com

Based on our experience with some heavyweight requirements management tools, we tried to develop quite the opposite: a simple requirements management tool. It is not open source, but at least it has an open JSON format (good for git/svn), integration with Jira, ReqIF export/import, quick definition of requirements, attributes and links, and various views. See https://reqview.com

We used to use MKS and switched to Siemens Polarion a few years ago. I like Polarion. It has a very slick document editor with a decent process for working on links between risks, specifications, and tests. Bonus points for its ability to refresh your login and not lose data if you forget to save and leave a tab open for a long time.

For a small team you can probably build a workable process in Microsoft Access. I use Access to track my own requirements during the drafting stage.

Even in the strictest settings, documentation has a shelf life. I don't trust anything that's not a test.
If you are interested in a formal approach, Sparx Enterprise Architect is relatively inexpensive, and can model requirements, and provide traceability to test cases, or anything else you want to trace.
Write stories/tasks in such a way that each acceptance criterion is testable, then have a matching acceptance test for each criterion. Using something like Cucumber helps match the test to the criterion since you can describe steps in a readable format.
I write Gherkin use cases. It works well as it is plain English. This makes it easy to have in a wiki while also being part of a repo.
You should probably first assess whether or not your organization is open to that kind of structure. Smaller companies sometimes opt for looser development practices since it's easier to know who did what, and the flexibility of looser systems is nice.

TLDR adding structure isn’t always the answer. Your team/org needs to be open to that.

As a junior dev, this isn't your job.

Your job is to do what is being asked of you and not screw it up too much.

If they wanted to track requirements, they'd already track them.

People have very fragile egos - if you come in as a junior dev and start suggesting shit - they will not like that.

If you come in as a senior dev and start suggesting shit, they'll not like it, unless your suggestion is 'how about I do your work for you on top of my work, while you get most or all of the credit'.

That is the only suggestion most other people are interested in.

Source: been working for a while.

Sensing some sarcasm, but I agree there is some wisdom in "keeping your place." Not very popular to say that these days, but boy, I wish I had taken that advice more as a junior dev. Still need to do that more.

However there is a spectrum, and if it turns from "listen rather than speak" in a respectful, learning sort of mentality to "shut up and do as I say, no questions", then requirements tools are not going to address the real problems.

In my experience, having requirements and processes and tools being used in a mindful way can be wonderful, but all that pales in comparison with the effectiveness of a well-working team. But that's the human factor and the difficult part.

Source: also been working a while. Seen good teams that were very democratic and also good teams that were very top-heavy militaristic (happy people all around in both scenarios).

Well, the reason I asked this question is that I did screw up a bit, and I think it could have been caught had I done sufficient testing - but I didn't, because it doesn't seem to be part of the culture here, and neither are peer reviews.

So I _was_ trying to do only what was asked of me, just writing the code, but I guess I thought what I did at my previous job could have helped - which is keeping track of what was needed and then how I planned to accomplish and test it.

But yeah, you've got me thinking about how or whether I should broach this topic; I think my lead is great, seems open to ideas, wants things to work well, so maybe I'll just ask what they think about how to avoid these kinds of mistakes.

"give it a quick test and ship it out, our customers are better at finding bugs than we are" - lecture from the CEO of a company I used to work for who didn't want me to waste any time testing and didn't want to pay me to do testing. I left soon after that to find a place with a different culture, trying to change it was way too hard
It's very useful to keep track of changes and to have text to describe and explain things, so for me the simplest tool would not be a spreadsheet but a git repo with one file per requirement, which can be grouped into categories through simple folders. You can still have a spreadsheet at the top level to summarise, as long as you remember to keep it up to date.

Top-level requirements are system requirements and each of them should be tested through system tests. This usually then drips through the implementation layers from system tests to integration tests, to unit tests.

Regression testing really is just running your test suite every time something changes in order to check that everything still works fine.
