Category Archives: Automation

Goldilocks and Test Automation

I’ve struggled with the right level of automation for a long time and alluded to a level of salvation here.  Since that post, I’ve moved on to engage automation using Given-When-Then (and its collateral ideas) much like a sailor engages a lighthouse at night.  In my most recent experience, it was more like the sailor was on the bridge watching the light while I was below deck providing interpretation and direction.

My testing team created an excellent suite of automated regression tests for the stored procedures used in our application.  But I’m struggling with the number of tests created for this suite.

It might be that my definition of “regression tests” is more economical than that of my teammates, and I own that communication error: I probably was not clear about what I thought this task should produce.  But it also goes to the heart of what I believe about automation tests.

In 2007, I, like many others, was enamored with automated tests against the UI of my applications.  A steady diet of automation tools over the succeeding years (starting with HP QTP) was both valuable and frustrating.  I eventually gave in to the frustration because the tests I, and many others, wrote were fragile and numerous.  My response was to limit what I automated because inflicting such a level of maintenance on a product team was just wrong.

I watched as other teams continued to indulge not only in creating these tests but in developing frameworks around frameworks that provided some theoretical value and, in my opinion, dubious claims of a positive ROI.  I was unimpressed by what appeared to be the emperor (in their “new clothes”) dining happily on over-heated porridge.

With training and some practice in Given-When-Then and its description of business outcomes, I started to see practicality again in test automation.  A test that evaluates business outcome is, in my opinion, more stable, valuable, and maintainable than several poorly designed automated UI tests.  Then, there are two questions.  Always.  Two.  Questions.

Can I automate this test?  Of course, I can automate just about any test.  The avalanche of software utilities that pops up around test automation tools is evidence of over-indulgence in coding.  Not testing.  The second question is the more important one:  Should I automate this test?

I advocate for tests designed for simplicity, tests that exercise isolated business outcomes.  The tests should be motivated by risk, value, and maintenance.  By reviewing these criteria, I stand the best chance of balancing the tests (the most value) and the code (that which must be maintained to deliver the tests).
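For illustration only, that weighing of risk, value, and maintenance can be sketched as a back-of-the-envelope score.  The weights, scale, and threshold below are my own hypothetical choices, not a formula from any book or team standard:

```python
# Illustrative only: a rough way to weigh "Should I automate this test?"
# Scores run 1 (low) to 5 (high); the threshold is a hypothetical choice.

def should_automate(risk: int, value: int, maintenance_cost: int) -> bool:
    """Automate when the expected benefit outweighs the ongoing upkeep."""
    return (risk + value) > 2 * maintenance_cost

# A high-risk, high-value check with cheap upkeep is worth automating:
print(should_automate(risk=5, value=4, maintenance_cost=2))   # True
# A fragile UI test with modest value likely is not:
print(should_automate(risk=2, value=2, maintenance_cost=4))   # False
```

The exact numbers matter far less than forcing the conversation: every proposed test gets asked about its upkeep, not just its coverage.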

My team and I will meet to review our regression suite.  I look forward to exploring differing definitions of “regression test”.  I welcome their perspectives that could help all of us discover the combination of test and code that is “just right”.

My GWT Identity Crisis

There are a few things I’ve not tried yet.  Sadly, possibly, I’ve not tried karaoke, and I’ve not tried kite boarding.  I am, however, getting my first experience applying Given-When-Then to describe product behaviors on a real project.  I’m very excited at the possibility of merging business behavior with automation.

I engaged Specification By Example, I attended hands-on training, and I participated in periodic drive-by creations of GWT scenarios.  I felt very ready.  Hurtling into some of our project’s first story cards, I was shocked to review GWT that contained behavior AND, take a deep breath, implementation details.

As an example, I offer a scenario from an API I wrote for a website.

GIVEN active flights exist,
WHEN the website requests a list of active flights,
THEN the reply includes all active flights
AND their unique identifier,
AND their altitude,
AND their last update.

This GWT represents my interpretation of what I have learned and, as noted above, my limited experience.  Here is how the same scenario might appear if I violate one of the most fundamental rules.

GIVEN the active_flight field indicates true,
WHEN the website executes GET against the activelist API,
THEN the response contains an HTML table detailing the flight’s GUID,
AND the flight’s altitude as an integer,
AND the last update in a UTC format.

In my opinion, this is more than a violation of the “no implementation details” rule.  This description locks the behavior to a specific database column, defines the HTTP verb and API name, prescribes a specific rendering method, and calls out individual field data types.  It’s as if a developer provided an analyst with the information they needed to implement the scenario.

While this is clearly an extreme example, I recently read such a scenario.  When I asked, the reason implementation details were included was because this is a “technical story card” (different from a “business story card”), and because it is easier to provide all the information in one place.

(ahem) Bull.  Shit.

In my opinion, this method of practicing and segregating GWT ignores one of its most essential and valuable aspects: collaboration.  Yes, I read Specification By Example and attended training.  I learned the mechanics, but in both cases I left with something more.

I came away with an overwhelming feeling that during a conversation about product details between an analyst, a tester, and a developer, these few people begin to share not only their ideas but their understanding.  It IS NOT the GWT itself that encourages automation or reduces defects; rather, it is those precious minutes in which an honest, introspective, and respectful dialog clarifies the behavior of a simple product.

Just writing GWT provides a small window into a product.  Collaborating over the value of a specific business behavior encourages teamwork and quality where everyone can take pride and feel their contribution matters.
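To make the distinction concrete, the behavior-level scenario above can be bound to automation without ever naming a column, a verb, or a rendering format.  Here is a minimal sketch in Python; `Flight`, `FlightService`, and their methods are hypothetical stand-ins for the system under test, not the project’s real API:

```python
# Hypothetical sketch: automating the behavior-level scenario without
# leaking implementation details. Flight and FlightService are stand-ins.

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Flight:
    identifier: str
    altitude: int
    last_update: datetime
    active: bool


class FlightService:
    """Stand-in for whatever serves the website's flight list."""

    def __init__(self, flights):
        self._flights = flights

    def active_flights(self):
        return [f for f in self._flights if f.active]


def test_active_flight_list_reports_required_fields():
    # GIVEN active flights exist
    now = datetime.now(timezone.utc)
    service = FlightService([
        Flight("F100", 32000, now, active=True),
        Flight("F200", 0, now, active=False),
    ])
    # WHEN the website requests a list of active flights
    reply = service.active_flights()
    # THEN the reply includes all active flights and their fields
    assert [f.identifier for f in reply] == ["F100"]
    for flight in reply:
        assert flight.identifier
        assert isinstance(flight.altitude, int)
        assert flight.last_update is not None


test_active_flight_list_reports_required_fields()
```

Notice the test never mentions `active_flight` columns, GET, GUIDs, or HTML tables; the team remains free to change all of those without touching this check of the business behavior.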


Lead Testers and Lead Testing

As I exit a project with disappointing results for me personally, I’m motivated to re-invent my role by combining select experiences from past projects.  At the start of my next project, my conversation with project team members might take on this flavor.

My role as test lead is to facilitate testing, advocate for smart testing, and guide testing within our project.  Developers may create and execute unit tests, and analysts may evaluate product definitions; both are testing activities.  I encourage them to collaborate early with project testers for feedback, to augment the testing, and to identify automation opportunities.  The goal is not a fully tested product; the goal is verifying business behavior that delivers value at a rapid pace.

I go into the project with a position of achieving 100% automation.  I’m not advocating for 100%; rather, I’m setting a goal of 100%.  I want to use automation to demonstrate behavior not just after a product is complete, but frequently as we make changes to the product.  I want our automation to provide confidence to our team so they can continue to add functionality, explore different implementations, optimize the product for performance, and verify that we still deliver business value.

I will foster a sense of community as we explore the details of the design.  Together, I want us to create definitions of our products that we can build and evaluate quickly: products that we are proud to demonstrate, value that distinguishes us in the marketplace.  As equals at the drawing board, I want our team to collaborate, to challenge, and to bring their diverse talents to a common purpose.

Our project will likely require services and integration with other parts of our enterprise.  I want to meet with those teams soon, share our story, and engage them to assist us.  I believe a proactive approach helps them prepare better, makes them more willing to participate, and contributes to our mutual success.

The Shift Left
The role of Test Lead is no longer confined to leading a team of testers; it must lead all testing within the project.  With “Shift Left” continuing to gain momentum, the Test Lead’s focus and responsibility grow to guide the project’s testing such that product definitions are clear, small bits of code are exercised, and value is demonstrated.  Success will depend more on your ability to support team building and encourage effective collaboration than on testing or technical skill.

Test Automation is an Investment

I submit that test automation can improve your project’s pace.  I realize that some believe it helps detect defects or that it can be developed independently from project execution.  I recommend you reconsider those positions.

Ever since I read Specification By Example, my understanding of and attitude toward test automation have changed.  It did not change how I design or build test automation; it changed how I see its purpose.  For this post, I refer to test automation as a spectrum ranging from evaluating newly checked-in code to regression testing.

I’ve been pitching a notion around test automation that I believe is familiar in testing circles.  But it was suggested to me that I provide some context.  Put simply:

The purpose of test automation is to enable frequent evaluation of application behavior in the face of change.

I did not arrive at this conclusion by myself.  Adzic’s book was a great catalyst for thinking more about test automation and for discussing it with other test engineers.  A context took shape for me, and its present form is my declaration above.

Defect Detection is a Narrow Focus
Test automation costs too much for short-term defect detection alone to justify it.  Indeed, test automation developers often find defects while they are developing (not executing) the automation.  That means that by the time the test automation is completed and verified, someone else could have located that defect AND corrected it.

I recommend you invest in your test automation today.  You will realize the real value when it detects a defect in a succeeding iteration, or a succeeding project.  When it does, it demonstrates how recent changes have introduced errors in existing behavior.  In that manner, it is acting to preserve your business intent.

Independent Development Costs More
Let’s say you wait to develop test automation.  At some point in the future (the next iteration or the next project), the testing team discovers defects in code that was completed AND working.  The current pace begins to slow:

  • Defects in existing code must be corrected so testing can evaluate the newly deployed code.
  • There is a good chance that correcting defects will introduce more defects.
  • When the code is operating again, you may find defects in the newly deployed code.

With an investment in test automation, frequent execution would detect changes to existing functionality at the time they were introduced.  That is, defects would be detected when developers execute the test automation, and they would correct them.  This saves time because the project team continues building products rather than fixing them; the project maintains its pace.

Your Investment Pays You Back
Test automation pays dividends in protecting your investments in building and expanding business functionality, and detecting unintentional changes to business behavior early.

  • Time is saved because the team executes test automation often to detect behavior changes early – before the products are deployed for testing.
  • System and integration testing is more effective because simple defects are corrected before deployment for testing.

Lastly, the project team builds confidence in its products because the testing team can spend more time exploring the products broadly, deeply, and for risk.