Using Cucumber in split discipline teams


Back when I worked in a team that was adopting behaviour-driven development (BDD) practices, there was a choice we struggled with, due to our relative inexperience with BDD at the time. We were a large team, large enough that our developers were split into front-end and back-end duties. Naturally, the front-end developers were experienced in JavaScript but had little experience of Java, and the back-end developers were experienced in Java but had little experience of JavaScript.

Whilst this wasn’t an issue with the development of production code, it did raise some questions around our automated acceptance tests. Who should own them? Where should we store them? What language should they be written in? As we were new to BDD, and at my insistence at the time, we split the difference and decided that the test team should take ownership of the acceptance tests.

Unfortunately, this turned out to be the wrong choice. Teams should have ownership of tests, not an individual or a set of individuals based on their roles. However, we lacked the experience with tooling to work out a way that could allow both the front-end and back-end developers to easily use the captured examples as automated acceptance tests.

If I were in that position again, I would do things differently. And since this is Hindsight Software, let me break down how I would have solved the issue using the features of Cucumber and Behave Pro.

Where should we store our acceptance tests?

To ensure team ownership of tests we, as a team, had to work out where to store the acceptance tests. The product we were delivering was built as Software as a Service (SaaS), which meant we had multiple repositories for our front-end and back-end services, and it was unclear where the tests should be stored.

On the one hand, since our code was split across repositories that each contained a part of the product rather than the whole, it was not possible to store our automated acceptance tests in just one chosen repository, as no single repo represented the whole product. On the other hand, neither was it possible to split the acceptance tests across each repo, as maintaining a separate framework and setting up / tearing down environments for every repository would have been too much work.

Ideally, we wanted our tests in one single place to encourage the team to access and maintain them. This left us with two options:

  • Create a single, specific repository dedicated to our automated acceptance tests

This would sit alongside the other repositories, and the benefit would be that this repository would clearly be the main source of our acceptance tests for everyone in the team. We would also have been able to integrate different tooling for front-end and back-end with ease (as we’ll learn about in a minute).

The downside would have been that there would be a degree of separation between the tests and the production code, meaning developers would have to work with two repositories at the same time when developing new features. This type of overhead can bias teams towards cutting corners, causing the acceptance tests to fall behind the latest version of the product and result in increased maintenance.

  • Create a mono-repo and place the automated acceptance tests in the repo

One of the benefits of a mono-repo is that you can have all your tests and production code under one roof, but still keep a degree of separation by utilising subfolders for each service. A mono-repo approach would have resulted in our tests being much closer to our production code, and we would have been able to take advantage of tooling to spin up local instances and run our acceptance tests against them. However, getting every service to run locally takes time to set up.
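As a rough sketch (all folder and service names here are hypothetical), a mono-repo along these lines keeps each service in its own subfolder while the acceptance tests sit alongside them:

```
product/
├── frontend/              # UI service code
├── backend/               # API service code
└── acceptance-tests/
    └── features/          # shared Gherkin feature files for the whole product
```

The key design choice is that the feature files describe the product as a whole, while each service keeps its own build and deployment concerns in its own subfolder.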

Regardless of which option is chosen, the goal should be to create a single location where our tests live for the team to access. If we were facing the same challenge now, we could leverage the Git integration features of Behave Pro. These features would allow us to collaborate around, and manage, our captured examples in Jira, whilst ensuring the results of our conversations were copied directly into the repository of our choice to begin work on.

What language should they be written in?

With the question of where the tests should be stored resolved, we still needed to address how both front-end and back-end developers could use the tests to aid their delivery. As I described earlier, we had clearly defined front-end and back-end roles and the languages each was comfortable using. So, to ensure both roles would get the most out of the acceptance tests, there were two things we should have set out:

1. Setting boundaries for each role’s definition of done

Ideally, modern teams that practise DevOps encourage developers to take full-stack ownership of a feature or product. However, this depends on team and company organisation and the available skill set, meaning not all teams can take advantage of this way of working. It is therefore important to set clear boundaries around what is being delivered if the work is split across multiple individuals. In our example, the boundary was between the HTTP API endpoints and the UI.

If the back-end developers were responsible for delivering API endpoints to the UI, then we could have used Cucumber to automate acceptance tests against the API endpoints to demonstrate they deliver business value, even if that wasn’t the interface customers were going to interact with. On the flip side, our UI developers would have been able to create automated acceptance tests against their UI with a stubbed back end. Both roles then have a clearly defined boundary: they use tests to help deliver what they are responsible for, and either ignore or stub out the areas outside that boundary.

Another benefit of this approach is that whilst both sides would get the guidance they required, the UI tests could also be run with a full stack version of the application to confirm the integration of all features.
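To make this concrete, here is a hypothetical captured example (the scenario wording is invented): the same scenario can be driven through the API endpoints by the back-end developers, and through the UI with a stubbed back end by the front-end developers, without changing a word of the Gherkin.

```gherkin
Feature: Account sign-up

  Scenario: A new customer creates an account
    Given no account exists for "anna@example.com"
    When a sign-up is submitted for "anna@example.com"
    Then an account exists for "anna@example.com"
```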

2. Using the glue code location feature of Cucumber

So with boundaries agreed, we needed tooling that would allow us to create automated acceptance tests from a single source. Fortunately, it’s possible to have multiple flavours of Cucumber running in the same repository and sharing the same feature files, by telling each one where its glue code lives. Using the glue or require options in each command-line interface, we tell Cucumber where the step definitions a developer wants to use are located. For example, we could have an instance of CucumberJS that points at a UI folder of step definitions, and an instance of Cucumber-JVM that points at an API folder of step definitions, like so:

[Diagram: a single set of feature files shared by separate UI and API step definition folders - cucumber acceptance tests]

Both instances would use the same folder of feature files by running commands like these:

Cucumber-JVM:

mvn test -Dcucumber.options="location/of/feature/files --glue ui"

CucumberJS:

cucumber-js location/of/features/files -r features/ui_step_defs
If you want to learn more about using the Cucumber glue location feature across different languages, check out our help guide.
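To see why pointing each Cucumber instance at a different folder works, it helps to remember what glue code is: a set of patterns, each paired with a function, and the runner executes whichever set it was pointed at. The sketch below is not the Cucumber API; it is a simplified, hypothetical illustration of that matching idea, showing two glue sets answering the same step text.

```javascript
// Simplified illustration of Cucumber-style "glue": step definitions pair
// a pattern with a function, and the runner uses whichever set of
// definitions it was pointed at. This is NOT the real Cucumber API.
function makeRegistry() {
  const defs = [];
  return {
    // register a step definition: a regex plus the function to run
    given(pattern, fn) { defs.push({ pattern, fn }); },
    // find the first definition whose pattern matches the step text
    run(stepText) {
      for (const { pattern, fn } of defs) {
        const m = stepText.match(pattern);
        if (m) return fn(...m.slice(1));
      }
      throw new Error(`Undefined step: ${stepText}`);
    },
  };
}

// Two glue sets for the same step text: one driving a (stubbed) UI,
// one calling a (stubbed) API. All names here are hypothetical.
const uiGlue = makeRegistry();
uiGlue.given(/^the user "(.+)" logs in$/, (name) => `ui:login:${name}`);

const apiGlue = makeRegistry();
apiGlue.given(/^the user "(.+)" logs in$/, (name) => `api:POST /sessions:${name}`);

// The same feature step runs against whichever glue set is selected
console.log(uiGlue.run('the user "anna" logs in'));
console.log(apiGlue.run('the user "anna" logs in'));
```

The feature file stays identical in both cases; only the registry (the glue location) changes, which is exactly what the `--glue` and `-r` options select in the real tools.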


In the end, the answer to our problem was threefold:

  1. Agree on a single location to store our tests, preferably as close as possible to the production code.

  2. Set the boundaries of what is being delivered by specific team members, based on their skillsets and the team’s context.

  3. Leverage Cucumber’s options for setting the location of glue code and feature files, so that different instances of Cucumber can share one single location.

So if you find yourself in a similar situation, spend time learning about Cucumber’s feature set and understanding the role of automated acceptance tests in guiding development, to help you reach an agreement and an implementation that works for everyone in the team.
