
The Code Etymologist, Part 4: Product requirements


The Code Etymologist is a multi-part series that covers typical software knowledge that deserves to be documented and shared within development teams.

  1. Preface
  2. Part 1: Project setup
  3. Part 2: Coding guidelines
  4. Part 3: Development workflows
  5. Part 4: Product requirements
  6. Part 5: UI Components library
  7. Part 6: Data structures
  8. Part 7: Technical decisions
  9. Closing thoughts

Specifications

Software projects have requirements. Some of them are small, others are huge. We often use various terminology to refer to them depending on their size and scope: acceptance criteria, use case, user story, feature, epic, etc.

These features are usually described in software requirements specification (SRS) documents before the development phase, to make sure that everybody involved in the delivery of the feature is on the same page.

However, new features often interact with, impact, or change existing features. Different teams might end up working on the same code base, changing existing behavior in different ways.

So, the big question is: how do we keep the current list of features up to date? The theory is simple: we just update the SRS documents constantly. But as simple as it might be in theory, practice demonstrates that written documentation gets outdated easily. The reasons can be many: lack of ownership, unclear responsibility for maintenance, deprioritisation in favor of other urgent tasks, poor communication of feature changes, etc.

Automated tests

Now, I'm sorry that I use tests and documentation in the same sentence, knowing that both of them are usually dreaded by most software developers, but that's the truth. Automated tests provide feature documentation. They show how the software currently works.

In contrast to written documentation, tests don't get outdated so easily. Well, they do if we don't execute them. But if we run them regularly in CI, the tests will alert us as soon as they get outdated, so we know when to update them.

Some larger features or complex scenarios that integrate multiple features might be more suitable for end-to-end tests, even if those don't cover all edge cases and acceptance criteria. Other smaller features or business-logic details might be better tested in isolation. The level of isolation or integration used in testing a specific business requirement should be debated on a case-by-case basis.
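To make this concrete, here is a minimal sketch of an isolated test whose descriptions read like a specification. The business rule (free shipping above a threshold) and the Vitest test runner are hypothetical choices for the example, not something prescribed by this series:

```ts
import { describe, it, expect } from 'vitest';

// Hypothetical business rule, used only for illustration:
// orders of 100 EUR or more ship for free, everything else costs 5 EUR.
function shippingCost(orderTotal: number): number {
  return orderTotal >= 100 ? 0 : 5;
}

// The test names double as a living specification of the acceptance criteria.
describe('shipping cost', () => {
  it('is free for orders of 100 EUR or more', () => {
    expect(shippingCost(100)).toBe(0);
    expect(shippingCost(250)).toBe(0);
  });

  it('is 5 EUR for orders below 100 EUR', () => {
    expect(shippingCost(99.99)).toBe(5);
  });
});
```

Reading only the test descriptions already tells us how the feature currently behaves, and the CI run tells us when that description stops being true.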


Indeed, not every feature can be tested automatically. We cannot test "everything". For instance, interacting with a <canvas> element and asserting the result is not trivial at all. Depending on 3rd-party services that we don't control will likely force us to focus on isolated tests, leaving the integration for manual testing. That's completely fine: just because we cannot reach 100%, it doesn't mean we shouldn't settle for less.
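For illustration, here is a minimal sketch of such an isolated test, assuming a hypothetical payment provider hidden behind an interface we own; the names and the Vitest test runner are assumptions made for the example:

```ts
import { describe, it, expect } from 'vitest';

// Hypothetical abstraction over a 3rd-party payment provider we don't control.
interface PaymentGateway {
  charge(amountInCents: number): Promise<{ status: 'ok' | 'declined' }>;
}

// Hypothetical business rule: an order is confirmed only if the charge succeeds.
async function confirmOrder(gateway: PaymentGateway, totalInCents: number) {
  const result = await gateway.charge(totalInCents);
  return result.status === 'ok' ? 'confirmed' : 'rejected';
}

// The real provider is left for manual or end-to-end testing;
// here we only verify our own logic, using a fake gateway.
describe('order confirmation', () => {
  it('rejects the order when the payment is declined', async () => {
    const decliningGateway: PaymentGateway = {
      charge: async () => ({ status: 'declined' }),
    };
    expect(await confirmOrder(decliningGateway, 4999)).toBe('rejected');
  });
});
```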

Test maintenance cost

In the end, it's a matter of measuring the effort. What's more cumbersome?

  • Writing, executing, and maintaining automated tests?
  • Or writing and updating SRS documents?

SRS documents are definitely mandatory before development. But in the long term, they are non-trivial to maintain. And these documents should be reliable: developers, testers, and managers should be able to use them to understand how the product works. The question is: what level of detail should be used to describe the specifications?

  • We can maintain a superficial list containing only the use cases with their acceptance criteria, skipping the test cases and corner cases. This might be easier to digest, but it lacks some details.
  • Or we could include comprehensive details regarding what buttons to click and what data to input in forms. This might include all the required information, but it would be a nightmare to read, digest, search, and maintain.

Ultimately, the question of having automated tests is simply a matter of cost. However, there is no formula, tool, or LLM to calculate this cost. Therefore, we have to rely on our best judgement and experience to estimate whether an automated test makes sense for a specific use case or not.

Diagrams

While written documentation and automated tests provide a great level of detail, it's often difficult to "visualize" the feature as a whole. This is typically problematic in situations where we have either a sequence of linear steps or multiple chained decisions resulting in a multitude of logical paths.

Sequence diagrams

To help us visualize a particular scenario of a use case, we can use sequence diagrams. They are useful to describe the data exchange between different parts of a system and to understand the communication between two or more actors.

One such example is the account binding sequence diagram from Slack tutorials. Such a diagram can fully replace a written specification document, as it encapsulates all the important details. Additionally, it helps us understand in which part of the system each action takes place.

Flow charts

To better visualize and understand a feature that includes a multitude of scenarios based on various decisions, flow charts are a great solution. One popular example is the notification decision flow documented by the Slack engineering team.

Imagine explaining this entire graph using only written text. While text offers a higher degree of detail, it can spread across a multitude of pages. As a result, the readers might be unable to see the forest for the trees. Flow charts enable us to see the entire jungle and gradually dive into the details.

State charts

State machines are powerful tools, capable of modelling and defining logical flows between a finite number of states. State machines are typically defined using a programming language.

State charts are the graphical extension of state machines. In addition, SCXML is a W3C standard that describes a formal format for defining state machines.

The benefit of state charts is that, from a single source of truth defined in a standard format, we can generate both the visual diagram that helps us understand the state machine and the executable code for it.

In addition, since all the behavior is encapsulated in the formal definition, we can create interactive diagrams to simulate and experience the behavior of the state machine. Some tools that support interactive state diagrams include Stately Studio and StateSmith.
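For illustration, here is a minimal sketch of such a single source of truth, defined with XState, the library behind Stately Studio; the checkout states and events are made up for the example:

```ts
import { createMachine } from 'xstate';

// Hypothetical checkout flow, used only to illustrate the idea of a single
// source of truth: this definition is executable code, and tools such as
// Stately Studio can render it as an interactive state chart.
export const checkoutMachine = createMachine({
  id: 'checkout',
  initial: 'cart',
  states: {
    cart: {
      on: { CHECKOUT: 'payment' },
    },
    payment: {
      on: {
        PAYMENT_SUCCEEDED: 'confirmed',
        PAYMENT_FAILED: 'cart',
      },
    },
    confirmed: {
      type: 'final',
    },
  },
});
```

The same definition can be executed in the application and visualized or simulated with the tools mentioned above.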

Continue reading Part 5: UI Components library

