Ninja QA

Collaborative Development - using Jira, Bitbucket and Git to work effectively

Like many Quality Analysts, the first form of source control I was exposed to was something like SVN or CVS. They were simple tools to work with: you check code out, you check code in. Many of these tools, however, were problematic when it came to merging and collaborating in a busy development environment.

The company I work for has been using Bitbucket for a while now for the development aspect of our product. To briefly describe this process...

Our base product lives in the master branch in Git (the equivalent of 'trunk' in other source control systems).
When a new feature needs to be developed, we assign the Jira task to a developer, who then creates a 'feature branch' from master.
This effectively gives the developer an identical copy of the base product, dedicated to the development of that one feature. As time goes on and the feature is completed, it is ready to be merged back into the master branch. This process should have unit tests or some other quality check in place to ensure that nothing dodgy makes it into master; we use a mix of unit tests and peer reviews. The merge process is called a 'pull request' - we are effectively requesting that 'master' pull our changes into itself.
In a tool such as Bitbucket, the pull request system is very configurable. You can require one or more reviewers and approvals, and you can also require that the code builds successfully.
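The whole cycle can be sketched from the command line. The demo below runs in a throwaway local repository; in real life you would clone from Bitbucket, and the final merge would happen through an approved pull request rather than a local `git merge`. All branch, file and commit names are illustrative:

```shell
# Throwaway local repo standing in for the Bitbucket-hosted one.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com
main=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on git version
git commit -q --allow-empty -m "base product"

# Developer branches off for the Jira task (branch name is illustrative)
git checkout -q -b feature/PROJ-123-appium
echo "appium integration" > Appium.cs
git add Appium.cs
git commit -qm "PROJ-123: add Appium support"

# In Bitbucket, the approved pull request performs this merge for you:
git checkout -q "$main"
git merge -q --no-ff feature/PROJ-123-appium -m "Merged in feature/PROJ-123-appium (pull request)"
git log --oneline
```

The `--no-ff` merge mirrors what Bitbucket records when a pull request is merged: a dedicated merge commit, so the feature's history stays visible as a unit.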


Generally speaking, when two or more developers are working in the same system, they try to stay away from the class files the other is changing. This is just a courtesy to avoid disturbing code that another developer may need to work on.

Short of telepathy or the invention of a hive mind for developers, there is no way to prevent two developers from occasionally ending up working on the same code.
This is where 'merge' conflicts may arise. Tools such as Bitbucket and Git provide features to mitigate the pain of merging; Git offers options for intelligent merging and so on. That said, Git is just a tool and will never be able to tell whether the merged code is up to standard. The typical way to handle this is for the developer who is about to raise a pull request to perform a Git 'rebase'. This replays their branch's commits on top of the latest master, so any new changes in master are incorporated into their own modified code. The onus is on the developer raising the pull request to ensure that the code is properly merged and no conflicts remain.
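A minimal sketch of that rebase flow, played out in a throwaway local repository (branch and file names are illustrative):

```shell
# Demonstrates rebasing a feature branch before raising a pull request.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com
main=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on git version

echo "base" > Product.cs && git add . && git commit -qm "base product"

# The feature branch diverges...
git checkout -q -b feature/redis
echo "redis client" > Redis.cs && git add . && git commit -qm "add Redis client"

# ...while master moves on in the meantime
git checkout -q "$main"
echo "hotfix" > Hotfix.cs && git add . && git commit -qm "hotfix on master"

# Before the pull request, replay the feature work on top of the latest master
git checkout -q feature/redis
git rebase -q "$main"
git log --oneline   # the feature commit now sits on top of the hotfix
```

If the two branches had touched the same lines, the rebase would stop and ask the developer to resolve the conflict before continuing - which is exactly where that onus on the pull request author comes in.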

So what has this got to do with Quality Assurance or Automation?

As I said above, this is the process that many development organizations work to. We are now trialling it internally for Automation and Quality Assurance in general.
I had mentioned previously that I have a framework of my own design that I use for Automation within the company I work for.
A dependency diagram of it is shown below...

You're probably thinking - Holy Crap!!

That's a lot of modules.
You're right... it is.
The framework was developed to be modular, decoupled and highly cohesive. However, as it grew larger, issues emerged around maintenance and how to manage its future development. This is where the collaborative development process above comes into play, along with coding standards and other processes for managing code quality.
Full disclosure: when I first developed this framework, I was its only author. It grew organically, and it was all done on the Git master branch. No branches for features...

Now that other people within the organization are contributing to the framework, we need a quality control mechanism to ensure that bad code cannot be committed to master.
Borrowing from the process our developers use, we introduced the feature branching strategy along with a peer review and approval process for pull requests.
In addition, we are in the process of adding unit test projects for each feature. These get built and executed by TeamCity, which will either pass or fail the build. Failed builds or test runs are not deployed to our Artifactory server and will not be consumed by users of the automation framework via NuGet. (Each feature is a separate NuGet package.)

As an example, the process might play out like this...

The framework exists, but we want to add Appium functionality; we also want to add support for communicating with a Redis database.

Sally is free to work on one of the features, so she branches the framework into an Appium feature branch - she will build the Appium integration.
James is going to work on the Redis functionality, so he creates a Redis branch.

They both work on their respective features: getting the functionality working, making sure everything builds, running the existing unit tests to guard against regressions, and writing new tests so their own feature never breaks either. James finishes first; he performs a rebase to pull down any changes that made it into master while he was working on his branch. No changes are detected, so he commits to his branch.
He goes into Bitbucket and raises a pull request from his branch into master - Sally and John are listed as reviewers.

Sally is busy finishing her Appium functionality, but she has enough time to perform a quick review of the pull request. She examines the code and approves it; John performs a review as well. The pull request is approved and makes it into the master branch.
Sally now wants to get her own work into master. She performs a rebase and finds that James modified one of the classes she was working with. She reviews his change and merges it manually to ensure there are no compatibility issues, then runs the unit tests to confirm everything passes. It all looks fine, so she commits to her branch and raises a pull request.
John and James now review her work, approve it and merge it into master.

TeamCity then builds their respective features and deploys them to Artifactory as NuGet packages, which become available to anyone in the organization via the private NuGet repository URL.

This way, features can be developed collaboratively, tested, deployed and consumed by people across the organization.

Once the code is merged into master, you should be able to close out the Jira task that the feature belonged to, and the feature branch can be deleted and cleaned up - it is no longer relevant, since its code now exists in master.
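That cleanup step can be sketched like so (throwaway local repo again; on Bitbucket you would also delete the remote branch, e.g. with `git push origin --delete`, or by ticking 'delete branch' when merging the pull request):

```shell
# Deleting a feature branch once its commits exist in master.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "base product"

git branch feature/redis          # a branch with no unmerged commits
git branch -d feature/redis       # -d refuses to delete unmerged work; -D forces it
git branch --list "feature/*"     # prints nothing - the branch is gone
```

The lowercase `-d` is the safe choice here: Git will refuse to delete the branch if it still holds commits that have not made it into the current branch.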

Framework Design - Statement of Intent

Before you start constructing your automation framework, you need to figure out what you are looking to achieve, so you can assess later whether you have succeeded. In the Agile world, this is called defining your 'Definition of Done'.
Frameworks have a habit of growing organically, and while this can be a great thing when growth follows a design pattern, it can be horrible when it happens randomly and without planning.
It is essentially the difference between an oak tree growing upwards towards the sky and a cancer growing out of control: one follows a pattern of behavior while the other is random and serves no benefit.


Let's write our 'statement of intent' - what we want to achieve with our framework.

'I want an automation framework that can be installed into a test project and that lets the user start writing automation code almost instantly, with minimal setup involved. The framework should be modular and decoupled to facilitate maintainability. The framework will be built using .Net C# so our developers can use the same framework to write their unit tests. BAs currently write their specifications using the Gherkin syntax, so the framework will have Specflow at its heart to allow mapping of tests to requirements. The framework should promote a solution / project layout to help keep its users consistent in their approach.'

From the above statement we get:

  1. Must be a quick installation process
  2. Modular and decoupled
  3. Using .Net C#
  4. Using Specflow
  5. Framework promotes a pattern/layout

While these are by no means the only requirements we care about, these are the ones that must be met for our definition of done to be achieved. This is how we determine whether our framework is ready for consumption.

Now that we have our needs or 'requirements' - we can start planning how we want to achieve these at a high level.

Requirement #1 could be achieved using NuGet packages, for instance. NuGet packages allow you to attach example files, class files and documentation that get installed into a project when the package is consumed. We could have a NuGet package called 'Automation.Framework.Base', for example.

Requirement #2 could also be achieved using NuGet packages, but instead of packaging the whole framework, we would separate each piece of functionality into its own NuGet package. This means that updating one piece of functionality should not necessarily impact another.

Requirement #3 is achieved simply by using C# as our language of choice. Training and documentation will be important if your QA staff are inexperienced in .Net C#.

Requirement #4 requires that we use Specflow as our BDD language interpreter. By choosing C# we have ruled out Cucumber and the other Gherkin BDD tools that target different ecosystems; Specflow remains the logical choice for C#-based test frameworks. Installing it is twofold - we need to install the NuGet package for Specflow.NUnit.Runners as well as the Specflow Visual Studio extension. Without the extension, Visual Studio will interpret .feature files as plain text files.
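To illustrate what Specflow consumes, here is a minimal .feature file written in standard Gherkin syntax (the scenario itself is hypothetical); Specflow parses this and binds each step to a C# step-definition method:

```gherkin
Feature: User login
    As a registered user
    I want to log in to the application
    So that I can access my account

    Scenario: Successful login with valid credentials
        Given I am on the login page
        When I enter a valid username and password
        Then I should be taken to my dashboard
```

Because the BAs already write specifications in this Given/When/Then form, files like this become both the requirement and the executable test.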

Requirement #5 can also be facilitated through NuGet. A properly configured .nuspec file can create folder structures, add example files and set up app.config entries seamlessly, without requiring any input from the user.
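As a sketch of what that .nuspec might look like (the package id, version, dependency version and file paths are all illustrative, not the real framework's values):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>Automation.Framework.Base</id>
    <version>1.0.0</version>
    <authors>QA Team</authors>
    <description>Base package for the automation framework.</description>
    <dependencies>
      <dependency id="SpecFlow" version="2.0.0" />
    </dependencies>
  </metadata>
  <files>
    <!-- Files under 'content' are copied into the consuming project on install,
         creating the folder structure and example files automatically. -->
    <file src="content\Features\Example.feature" target="content\Features\Example.feature" />
    <file src="content\Steps\ExampleSteps.cs" target="content\Steps\ExampleSteps.cs" />
    <!-- A .transform file merges settings into the project's existing app.config -->
    <file src="content\app.config.transform" target="content\app.config.transform" />
  </files>
</package>
```

The `content` folder convention is what gives us the "promoted layout": every project that installs the package starts from the same Features/Steps structure.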

Our above statement has given us 5 requirements which can be solved through various tools and strategies.
The next post will delve into Visual Studio and show how to get started with our decoupled framework.