Testing is like…

This post was inspired by a thread on The Club at Ministry of Testing. Join the best testing community on the internet at Ministry of Testing!

Testing is like searching for clues to solve a mystery. It’s almost like those old black and white detective movies where the hero is up against some dangerous foe. The foe may have somehow compromised the forces of good, corrupting their work.

We join our hero staying late at the office as they are running their last test…

Without any motion detected in the office area, the lights turned off automatically and I was left in the glow of my laptop’s screen. I stared into the web page, thinking it held clues to despicable behaviors. If our customers saw them, they’d leave in droves. It was up to me. I had to find the culprit; he was known as Bugsy.

Previously, Bugsy hinted at his mischievousness on multiple pages, causing spelling errors, massive misalignment of fields on small screens, and API errors. We found his accomplices, the demented NoCodeReview and the slothful CutNPaste, earlier today. The project team arrested them and sent them away for good. I sought the one who had ventured into the new API and tainted its reputation.

I reviewed the log for any hint that would unveil the darkness brought into our innocent API. The timestamps aligned with my test but the log message said only “Transaction Started”. Nothing else. What would cause the API to just stop?

I held my head in my hands above the keyboard. There had to be something else. What had I forgotten?

With a start, I straightened up in my chair. The sudden motion tripped the office lights on. I logged into source control and reviewed the configuration file.

“You vile demon!”

In a moment, I realized Bugsy had teamed up with the captivating enchantress iForgot. The developer must have surrendered to her siren song as the project team worked quickly to deploy the API to a new environment. With incorrect credentials in the configuration file, the API just stopped.

I called the developer and explained my scenario. They logged on, sheepishly apologized, and updated the credentials. I sent iForgot up the river. Bugsy, unfortunately, may have slipped away. I wondered if we would meet again.

Epilogue

I walked towards the exit with my backpack and a knowing smile. We would be a better team because of this experience. Together, I thought, we will continue to fight and succeed against Bugsy. Through the door and into the night, I welcomed a Summer evening’s cool air on my face and victory in my heart.

Test Engineering vs Automation Engineering – 1

I see a difference between Test Engineering and Automation Engineering. As a Test Engineer, you work with people who test to improve the efficiency and effectiveness of their testing. These improvements span a spectrum of solutions from improving testability to providing automation. In my opinion, an Automation Engineer has a smaller scope. They design, develop, and maintain test automation.

I recently reviewed an email sent to our in-house test engineering community. It asked for assistance with an automation tool. The tool was unable to detect a modal dialog in a desktop application. The email asked if anyone had a solution.

There were two possible solutions. First, there was a suggestion to investigate a specific debug mode. Second, there was a suggestion to improve the testability by changing the design so that the modal dialog was not needed. Both suggestions are valid but have different perspectives, different costs, and different impacts on project team dynamics.

Rather than framing the issue as a tool constraint, I suggest that considering the testing challenge itself could illuminate other possibilities.

Sometimes it is not an automation challenge to quell through technical prowess; it is a testability challenge to be explored with a project team.

Testing and Modeling

We have seen the news and read the articles on testing.  These include the amount of testing, the type of testing, the testing results, debates on how much testing is needed, and who should be tested.  I have been in testing for about a decade, and the information I see and hear is occasionally frustrating in its characterization and often presented without the context that could provide clarity.

Testing
Testing is an information collection activity.  While that may be a simple explanation, it grows complex as you begin to explore what is tested, how testing is performed, and what test results mean.

The most important question in testing is: what do you want to learn?  There are many answers because there are many people asking the same question.  Some things to learn are:

  • About the percentage of the population infected
  • If a person is infected
  • If some actions help reduce infection

Learning about the percentage of the population infected is important because it informs us of the spread of the infection.  Learning about this frequently is more valuable because it informs us on the pace of the spread.  That is, it can answer a secondary question of how fast the infection travels through the population.
Learning if I am infected is important because it informs me that my future actions could be harmful to others and I may become ill.  Learning about this frequently is more valuable because it informs us that our actions may be helping to reduce the pace of the spread.  Alternatively, it may inform me that my present actions are preventing infection in me.
Learning if my actions reduce infection, say in the case of drugs administered to address the infection, is important because it informs me that my actions are correct.  Learning about this frequently is more valuable because we can inform others who may benefit from the same actions.

Once you determine what you want to learn, you ask what should be tested.  Since the infection lives in bodily fluids (such as mucous or blood), the answer seems clear.  There are challenges to collecting fluids because fluids can become contaminated and render the test invalid.  Collection methods must be sterile and, very importantly, those who collect the fluids must be protected.
Once the fluid is collected, the test occurs.  But how is the fluid tested?  We hear some tests attempt to detect the infection, and some tests attempt to detect antibodies.  My guess is the detection technique differs between the two.  Each test provides different information and answers different questions.  One test helps you learn if you might be infected; the other helps you learn if you might have been infected.
Note that I said “might be” and “might have”.  An important aspect of testing is accuracy.  An accurate testing result depends on an uncontaminated fluid sample, collected in a sterile manner, and evaluated by a highly accurate method.  If a test for infection reports no infection and that result is wrong, it is called a “false negative.”  Manufacturers of these tests must demonstrate the ability to produce accurate results, and this ability requires time.

Lastly, the infection progression inside the body seems to show symptoms after many days.  Since the growth rate appears slow, a test may show a negative result one day (not infected), and a positive result the next (infection present).  This demonstrates test sensitivity.  That is, there must be some threshold of infection present in the collected fluid for the test to render a positive result.  A better sensitivity may detect infection sooner.

Models
I have also read and heard information about models.  In Information Technology, we are familiar with models and their uses.  Recently, I have heard how wrong some models are.  This conclusion seems uninformed and amateurish.

Weather forecasters use models to predict weather.  The models are based on information collected over a long period of time.   This history of information helps build an understanding of weather patterns.  Presently, models predict local weather patterns with pretty good accuracy.  As we all know, the weather forecaster occasionally makes an inaccurate prediction.

Models for infections are built in a similar manner.  The primary difference between weather and a new infection is the amount of information available.  With very little information about an infection, the prediction accuracy varies greatly.  As more information is collected, the infection model is updated and accuracy may improve but it improves very slowly.

The model presents a number representing a prediction for infections.  The number is not, and was never meant to be, exact.  It is a guess, an estimate.  The model may be inaccurate but I would not consider it wrong.  The inaccuracy helps identify things to consider.  These things can improve the accuracy for the next model.

A Short WFH Primer

I admit that when I started working from home, it seemed like a break.  It also provided my wife some peace of mind knowing that one day per week (usually Friday), I was home with the kids.

For the first few times, I realized I didn’t have to be “in the office” until 7 AM so I could sleep until almost 6:55 and still make it “in”.  By the time I had breakfast, I probably wasn’t all that available or effective until 7:30 or later.  Also, I felt sluggish throughout the day.  I needed to try something different.

On my next WFH day, I did what I usually do: I got up early to exercise, showered, and ate breakfast.  I felt more ready when I “arrived” and more awake during the day.  As I grew into this, I found the speaker and microphone on laptops suck.  Having a headset or puck helps me hear the conversation better, and helps the people on the call understand me.

Do I take breaks?  Yes!  After a meeting, I might go up and down the stairs a few times, or go get the mail.  Were I at the office, I would normally walk somewhere to get water, collaborate with a peer, or oblige nature calls.

By treating the experience as I would for any other work day, I found I was just as effective when Working From Home.

The Decline of Civil Conversation

Perhaps I’ve been in too many meetings but I would like to call a truce on conversation artifacts such as frequent interruptions, unanswered questions, and inattention.

I generally wait for a lull in a conversation to add new information or ask for clarification.  When someone else is speaking, I practice some active listening or I take a note.  Mostly, I wait.  What I’ve noticed is someone will be talking or I will be talking and someone interrupts.  Blatantly interrupts.  Sometimes they want to clarify something or add their information.  I welcome the contribution, but an interruption seems rude.

Many times the conversation is going along and someone asks a question.  I would expect a short period of silence as everyone considers the question and a response.  What I’ve noticed is someone will start speaking about the same topic or another topic apparently clueless that a question had been posed.  I want to hear what they want to say but many times I want to hear an answer to the question.  An unanswered question feels like friction to the conversation.

Lastly, during those lulls or short periods of silence, someone looks up from their laptop and begins talking about a topic already covered or asking a question already answered.  This is exceptionally frustrating, and it takes time to provide the information they missed.

I blame social media for these annoying artifacts.  Anyone can post an update on social media sites at anytime and this practice may be carrying over into everyday conversation.

Listen twice as much as you speak, consider questions carefully, close your laptop unless you are reviewing a shared screen.  Enjoy a topic, collaborate, and move your conversations forward faster.  Who knows, it may even shorten your meeting!

Necessary Conversations – 1

Thanks for meeting with me <fill in Project Manager’s name>!  I appreciate this opportunity to explore the testing approaches for this project.  I realize we have just started planning and have no product designs yet, but this is the best time to have this conversation.

Soon, our team will begin to absorb and understand business needs, and transform them into viable products.  The Testing team, Testers and Test Engineers, look forward to collaborating with the team on that effort.

Yes, I can see where you might believe there is nothing to test.  Many are mistaken in the belief that there must be a product before testing can begin.  I would argue that the business needs will require some scrutiny, and I believe Testers are best equipped to ask questions about assumptions and details.  These questions will help clarify the needs, clarify product definitions, and reduce the number of defects resulting from misunderstood or poorly written definitions.

Why the Test Engineers, then?  I’m glad you asked.  They listen to conversations for opportunities to make testing smoother for everyone.  By everyone, I mean Testers and Developers.  Basically, we have been practicing Shift Left, that is, moving evaluations closer to product construction.  We believe it can make a difference on this project.

How?  When the project team delivers unit tests with their minimal viable products, we know requirements are probably met.  This helps the Testing team focus on risks.  The time to complete testing is likely shorter.  That improves Pace.

Also, unit tests provide a first look at the quality of the product.  Both Testers and Developers review these tests.  More scenarios may be added to the unit test suite so the quality improves.  Unit tests are a leading indicator of quality and improve confidence in the quality of the product.

The unit tests execute with every build.  If the build fails, the project team stops until they know the build passes.  No sense in letting defects deploy, because when they do, it means both unplanned downtime for the team and an administrative defect management exercise.

Lastly, a growing suite of unit tests becomes the regression suite.  We don’t need a large testing event (saving time and improving pace).  Additionally, the product team can confidently refactor and maintain the product in the long term, which supports the Team.

Thanks for indulging me!  I look forward to a great project!

Unit Test Advocacy

Unit tests – those tests created, executed, and maintained by developers – are no longer optional or a luxury.  In a world that demands CI/CD environments, unit tests are a necessity that keeps product updates flowing into production.  Unit tests are not solely the domain of developers; they are the domain of quality advocacy.  They are part of how a team accepts and pursues quality products AS a team.

Unit Tests Impact Pace, Quality, the Build, and the Team

  • Unit tests aid pace by detecting defects early and reducing post-construction testing.
  • Unit tests improve quality by encouraging collaboration with the testing team to identify multiple scenarios and execute them early.
  • Unit tests help determine the status of the product by executing tests before check-in and after a build.
  • Lastly, unit tests demonstrate respect for your future self and your teammates. Everyone depends on prompt, valid, and valuable information provided by unit tests.
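
To make this concrete, here is a minimal sketch of what such a unit test might look like.  It assumes xUnit, and the DiscountCalculator class and its thresholds are purely illustrative, not from any real product:

```csharp
using System;
using Xunit;

// Hypothetical product code under test (illustrative only).
public class DiscountCalculator
{
    // Returns the discount rate for a given order total.
    public decimal RateFor(decimal orderTotal)
    {
        if (orderTotal < 0m)
            throw new ArgumentOutOfRangeException(nameof(orderTotal));
        if (orderTotal >= 1000m) return 0.10m;
        if (orderTotal >= 100m) return 0.05m;
        return 0m;
    }
}

// Unit tests that run with every build; a failing test stops the pipeline.
public class DiscountCalculatorTests
{
    private readonly DiscountCalculator _calculator = new DiscountCalculator();

    [Fact]
    public void NoDiscountBelowFirstThreshold()
    {
        Assert.Equal(0m, _calculator.RateFor(50m));
    }

    [Fact]
    public void FivePercentAtFirstThreshold()
    {
        Assert.Equal(0.05m, _calculator.RateFor(100m));
    }

    [Fact]
    public void TenPercentAtSecondThreshold()
    {
        Assert.Equal(0.10m, _calculator.RateFor(1000m));
    }

    [Fact]
    public void NegativeTotalsAreRejected()
    {
        Assert.Throws<ArgumentOutOfRangeException>(() => { _calculator.RateFor(-1m); });
    }
}
```

Boundary scenarios like the ones above are exactly where collaboration between Testers and Developers tends to add cases the original author had not considered.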

Testing as an Advocacy

When you work in testing long enough, you develop or adopt positions on many aspects of testing.  Some positions may be influenced by your environment, some by your peers (respected and otherwise), and some by practice.  Work in testing longer and you may want to share your positions and occasionally encourage others to adopt them.  We see this with testing models or techniques, testing schools of thought, and testing automation.  I believe it is testing positions and the varied interpretations, discussions, and debates that make the craft a satisfying career choice.

I believe it is becoming something more.

With the introduction and practice of agile methodologies (Scrum, DevOps, et al), the practice of testing inside those methodologies has grown in importance.  Testing must not “wait until the end”, or plan large testing events “after the code is deployed to the test server”.  Waiting impedes project pace through the time spent waiting and through the late discovery of defects that were sitting in the code all the while.
The demands of business are driving the adoption of CI/CD, and testing can not only help with that, it can drive it.  Testing must drive project pace, and to drive project pace, we must become advocates.

Testers advocate for testability, shift left, and buying more than building.

Testability
Testers must advocate for testability in requirements and designs.  Testing can no longer afford to wait for information.  Rather, it must influence requirements toward a single, shared clarity and collaborate with team members to maintain that clarity.
Additionally, testers must actively participate in product designs (high level and detailed) to influence them for testability.  Request the design be transparent with key information and behaviors by using some form of logging, and request the design be controllable by having the ability to mock product objects.  When a product has good testability, it is easier to test and can be tested earlier and more quickly.
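
As a rough sketch of what that can look like in code: a dependency expressed as an interface can be mocked in tests, and logging key behaviors makes the product transparent.  The OrderService and IPaymentGateway names below are hypothetical, and the Microsoft.Extensions.Logging abstraction is just one way to provide that transparency:

```csharp
using Microsoft.Extensions.Logging;

// Hypothetical dependency expressed as an interface so tests can substitute a mock or stub.
public interface IPaymentGateway
{
    bool Charge(string accountId, decimal amount);
}

// Hypothetical product object designed for testability:
// its dependencies are injected, and its key behaviors are logged.
public class OrderService
{
    private readonly IPaymentGateway _gateway;
    private readonly ILogger<OrderService> _logger;

    public OrderService(IPaymentGateway gateway, ILogger<OrderService> logger)
    {
        _gateway = gateway;
        _logger = logger;
    }

    public bool PlaceOrder(string accountId, decimal amount)
    {
        _logger.LogInformation("Placing order for {AccountId}, amount {Amount}", accountId, amount);

        var charged = _gateway.Charge(accountId, amount);
        if (!charged)
        {
            // Transparency: the reason for a failed order is visible in the log.
            _logger.LogWarning("Charge declined for {AccountId}", accountId);
        }

        return charged;
    }
}
```

In a test, a fake IPaymentGateway can force the declined path without touching a real payment system, and the log output tells the tester what the service decided and why.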

Shift Left
Testers must advocate for Shift Left.  Shift Left encourages new and changed products to be evaluated closer to construction.  In many cases, this means more unit tests and deeper unit tests.  In some cases, the unit tests become the regression suite that can execute at any time.  Regression testing no longer needs to occur at the end of development!  We need to know as soon as possible if a recent change has impacted the application.
If much of the testing is shifted left, what do testers do?  They are reviewing the unit tests and suggesting more, they are exploring risks, and they are exploring environmental dependencies such as security, configurations, and connectivity.

Buy More Than You Build
Testers must advocate for buying tools and utilities rather than requesting that they be built.  There are certainly many cases where building a tool or utility makes sense.  In many other cases, buying tools and utilities can get testers testing quickly.

I believe by advocating some or all of these ideas, testers become project drivers and CI/CD supporters.

Lead on, Testers!

An RPA Journey

Last Summer, I was assigned to a Robotic Process Automation (RPA) project as the Lead Developer. RPA is a growing field because the automation can improve work throughput with greater accuracy.

We recently completed the development and introduced our automated process.  A process that once required tens of hours was completed in minutes.  While it was very gratifying to deliver this product, I look forward to returning to test engineering soon.  As I take leave of this RPA world, I thought about the development experience and wanted to share some thoughts.

From Test Engineer to RPA Developer?
I was looking for a different gig within my company.  One platform was experimenting with RPA and thought a Test Engineer (that is, a person with automation experience) would be the best fit.  When I investigated the job a little more, there was some chance that I could explore AI.  I was offered the position and started learning about RPA and an RPA tool.

Emperor’s New Clothes
RPA is marketed as a set of tools that can save labor, improve accuracy, and increase productivity at a lower cost, all through automation.  In my opinion, all of that is true.  I also believe that RPA tools are marketed more towards the business side of an enterprise.  The sirenic user interface of an RPA tool may have you believe that anyone can create process automation.  The reality is that these tools are used to write programs, and writing programs – even RPA programs – is challenging.

The RPA tool is actually just another Integrated Development Environment (IDE).  Make no mistake – it is used to create scripts that interact with your applications in the same way as your employees.  In that sense, the creation of a script is very much a software development effort and must be treated as such.  There are no robots here and the use of the word “robot” is very misleading and should be considered window dressing.

If you believe that someone with Excel macro experience can use RPA tools to deliver high quality, error free automation, you will be, in many cases, profoundly disappointed.  Excel macros and RPA programs are on opposite ends of a development complexity spectrum.  Without an understanding of software development methodologies, basic programming concepts, data quality, or detailed process definitions, it is possible to realize significantly less benefit than expected.  In my opinion, the “best practices” recommended by RPA tool companies are a sad substitute for software development experience.

Early in the project, I found many similarities between what I could do in Visual Studio (also an IDE) in C# and the RPA tool.  With my experience in software development, I was able to learn the RPA tool quickly.  The RPA user interface is but an abstraction of basic programming concepts, and it reminded me of the Lego Mindstorms IDE used by many children to build and program small mechanical machines (these machines can perform some sophisticated activities as demonstrated annually by many First Lego League teams).
In my opinion, placed into the hands of a professional developer (or developers), the RPA tools can deliver great products that can benefit your enterprise.  However, that benefit can be realized only with a strong working partnership between the development team and your business team.

Tests and Testing Tell the Story
The first process we automated was complex.  It was as complex as a new API and warranted the normal approach of decomposing it into components.  The components were described in multiple story cards and we started construction.

At the end of a story card, we found it valuable to create unit tests for the components.  Even the tests were built in the RPA tool.  In this manner, we could evaluate multiple scenarios easily.  When we had a critical mass of components, the development team met in what we called an “Integration Session” to assemble the components into the automated process.

We evaluated the automated process with sample input data, and we were able to provide diverse scenarios that exercised the process.  We found defects and corrected them.  We were also able to demonstrate the automated process to our business team members frequently.

Discoveries Along the Way
The description of our development approach should sound familiar to agile practitioners.  Agile provided us the flexibility we needed to learn and adapt.

For example, we wrote code to collect a set of information for a person from an application.  We discovered that, sometimes, there is more than one person.  We refactored to collect a set of information per person.
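
In code terms, that refactor was roughly a move from a single record to a collection of records, one per person.  This is only a sketch of the shape of the change, and the names are hypothetical:

```csharp
using System.Collections.Generic;

// Hypothetical record of the information we collect from the application.
public record PersonInfo(string Name, string CaseNumber);

public class ApplicationReader
{
    // Before: the code assumed exactly one person per application.
    // public PersonInfo ReadPerson(string applicationId) { ... }

    // After: return one set of information per person found.
    public IReadOnlyList<PersonInfo> ReadPeople(string applicationId)
    {
        var people = new List<PersonInfo>();
        // ...read the application and add a PersonInfo for each person found...
        return people;
    }
}
```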

We wrote code to place numbers into a spreadsheet and retrieve formula results.  We discovered that, sometimes, the spreadsheet provides feedback and the formulas require a macro to make adjustments.  We refactored to detect the feedback and run the macro.

We continued in this way until we deployed the process and started using production data.  While we still made discoveries and refactored, we also realized that the cost of some changes may not have enough benefit to justify the change.  We discovered “done” and, as hard as it was, we needed to say “enough”.  That doesn’t mean our discoveries are not addressed; rather, it means the project met its goals.  The discoveries and enhancements will be addressed over time.

Right Size Your Team to the Process
When considering processes for RPA, the complexity and effort should help set the level of project management.
Our team is a handful of developers and business consultants.  To resolve details for our first process, we required information from our business consultants frequently.  Additionally, we relied on a separate team for test data.  Since RPA is a software development effort and the process complexity was high, I appreciated the benefits our project manager brought to our project.
However, the second process was far simpler.  In my opinion, it was simple enough that two developers could complete it without the need for project management.

Celebrate The Team
Over the course of my project, the developers and I worked with some wonderfully gifted and passionate business people.  They knew their business processes, they knew some of the challenges, and they believed in the promise of process automation to improve their productivity.  Together, we defined, built, tested, and celebrated the products we created.