
A Journey of Using Automated Testing to Save Time and Increase Quality


"We just need the system to do XYZ when the user types "A" into the textbox." We have all been given requirements like these. Tasks like these are commonplace for web development, and with ASP.NET Core MVC, its very simple to pull this off. The steps would look like:

  1. Create the frontend Razor markup
  2. Create/update the view model for the property
  3. Drop the code into a class and make the class part of your business rules/services
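
As a rough sketch of what steps 2 and 3 might look like (OrderViewModel, OrderRulesService, and ApplyXyz are hypothetical names for illustration, not our actual project code):

    // Hypothetical sketch of steps 2 and 3 above; names are illustrative only.
    public class OrderViewModel
    {
        // Bound to the textbox in the Razor markup, e.g. <input asp-for="Code" />
        public string Code { get; set; }
    }

    public class OrderRulesService
    {
        // The single rule: when the user types "A", do XYZ.
        public void ApplyRules(OrderViewModel model)
        {
            if (model.Code == "A")
            {
                ApplyXyz(model);
            }
        }

        private void ApplyXyz(OrderViewModel model)
        {
            // ...whatever XYZ means for the business...
        }
    }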

Testing a scenario like this is easy too, right? Navigate to the site in a browser, type "A" into the textbox, and XYZ should happen. It's a basic approach to verify something works once (i.e., right now). What happens when you start introducing new rules for when the user types "B"? What about rules like typing "B" when another value is already set? Your basic test approach just got more difficult (in many cases the number of scenarios grows exponentially), takes more time to verify, and is subject to a person remembering to test for it.
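
To see why the effort grows, picture the same hypothetical service after a few of those requests; every new rule that interacts with existing state adds another combination to walk through in a browser:

    // Inside the hypothetical OrderRulesService from the earlier sketch.
    // PreviousCode, RevertXyz, and ApplyAbc are illustrative additions.
    public void ApplyRules(OrderViewModel model)
    {
        if (model.Code == "A")
        {
            ApplyXyz(model);
        }
        else if (model.Code == "B")
        {
            // New rule: "B" behaves differently when "A" was already set.
            if (model.PreviousCode == "A")
            {
                RevertXyz(model);
            }
            ApplyAbc(model);
        }
        // Each new rule multiplies the scenarios a person must remember to test.
    }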

Usually, on smaller projects, when this happens, you code it, test it quickly via the browser, and move on. On a recent project, we had a simple area of the code where we did just that. Since our project stakeholders were working closely with us, in an agile team fashion, they saw the screen and started asking questions (mostly amongst themselves) very early on. They asked for changes, introduced new business rules, and turned this code into a very complicated area of logic. It became very apparent that testing via the browser was not going to be enough and would be time-intensive. We saw the following issues:

  • Testing required intimate knowledge of what the rules were
  • Setting up scenarios where each rule would apply took real effort
  • Exercising all the rules relied on someone remembering, and taking the time, to do it
  • Recognizing false positives on screen required setting up scenarios perfectly
  • Testing all the rules took a lot of TIME!

Automated Testing to the Rescue!

Enter unit tests and integration tests. At Omnitech, most of our projects are set up with an automated build and deployment pipeline. Azure DevOps makes this incredibly easy, and in most cases it can be done with 5-10 minutes of work. The build pipeline in particular has a lot you can do within it; of interest for this article is running automated tests. These automated tests can assert that logic is working as intended and can "fail" the build if the assertions do not hold. On our project, we had the build and release pipelines set up, but we had no tests to speak of. We knew that writing the tests, getting the framework set up, scripting scenarios, and mocking objects was going to take a lot of effort and time. After weighing that time against the labor-intensive, repetitive, and error-prone web page testing, we took the gamble that writing automated tests would pay off in the end as the logic continued to evolve.
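
For reference, wiring tests into an Azure DevOps build pipeline can be a single step; here is a minimal sketch (the trigger, pool image, and project glob are assumptions, not our actual pipeline):

    # Minimal Azure Pipelines sketch: run every *Tests project and fail the
    # build if any assertion fails. Values here are illustrative.
    trigger:
      - main

    pool:
      vmImage: 'windows-latest'

    steps:
      - task: DotNetCoreCLI@2
        displayName: 'Run automated tests'
        inputs:
          command: 'test'
          projects: '**/*Tests.csproj'
          arguments: '--configuration Release'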

Our first task was determining how best to approach adding tests. This turned out to be less daunting than it sounded, as Omnitech recently held a book club around "The Art of Unit Testing" by Roy Osherove, which gave us many approaches for adding tests. In addition, ASP.NET Core, with its built-in dependency injection (DI), already gives us a head start on making our code testable. Our service layer, which contained our business rules, was loosely coupled with the frontend controllers and our data/repository layer (see the constructor-injection sketch after the list below). With this setup, our code was already testable but not entirely "unit testable." I say this because a trustworthy unit test should be all of the following:

  • It should be easy to understand
  • It should be able to run consistently
  • It should run quickly
  • It should have full control of the unit under test
  • It should be fully isolated
  • It should be easy to pinpoint the problem should it fail
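
Constructor injection is what makes those properties attainable: when a dependency arrives through an interface, the test can replace it with a fake and take full control of the unit under test. A minimal sketch, with hypothetical names:

    // Hypothetical service shaped the way ASP.NET Core's DI encourages.
    public interface IOrderRepository
    {
        Order GetOrder(int id);
    }

    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    public class DiscountService
    {
        private readonly IOrderRepository _orders;

        // The repository is injected, never constructed here, so a unit test
        // decides exactly what data the service sees.
        public DiscountService(IOrderRepository orders)
        {
            _orders = orders;
        }

        public decimal ApplyLoyaltyDiscount(int orderId)
        {
            var order = _orders.GetOrder(orderId);
            return order.Total >= 100m ? order.Total * 0.9m : order.Total;
        }
    }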

So we started down the road of refactoring our code using the methodology from the book and the discussions that spun out of the book club. Our goal was to get as close to 100% coverage as possible from our unit tests and integration tests for ALL services/business rules (not just the really complicated areas). Our technical road map looked pretty easy once we got into the swing of things: the MSTest framework, the EF Core in-memory database, and FakeItEasy for mocking. MSTest is robust and did everything we wanted in terms of attribute decoration and seamless integration with Visual Studio 2019. EF Core's in-memory database capability, combined with the fact that our DB context is also injected via DI, allowed us to consistently build up and tear down our database for each logic scenario. With FakeItEasy, we were able to control any dependencies outside the unit under test within each test. Overall, very little code had to be refactored to make the testing work.
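
Put together, a test in this style might look like the following. It reworks the hypothetical DiscountService from the earlier sketch to read from an EF Core context and to call an audit dependency, which the test fakes with FakeItEasy; all names remain illustrative:

    using System.Linq;
    using FakeItEasy;
    using Microsoft.EntityFrameworkCore;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical EF Core context and audit dependency, for illustration only.
    public class AppDbContext : DbContext
    {
        public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
        public DbSet<Order> Orders => Set<Order>();
    }

    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    public interface IAuditLog
    {
        void Record(string message);
    }

    public class DiscountService
    {
        private readonly AppDbContext _db;
        private readonly IAuditLog _audit;

        public DiscountService(AppDbContext db, IAuditLog audit)
        {
            _db = db;
            _audit = audit;
        }

        public decimal ApplyLoyaltyDiscount(int orderId)
        {
            var order = _db.Orders.Single(o => o.Id == orderId);
            _audit.Record($"Discount evaluated for order {orderId}");
            return order.Total >= 100m ? order.Total * 0.9m : order.Total;
        }
    }

    [TestClass]
    public class DiscountServiceTests
    {
        [TestMethod]
        public void ApplyLoyaltyDiscount_OrderOver100_Takes10PercentOff()
        {
            // A fresh in-memory database per test keeps runs isolated and repeatable.
            var options = new DbContextOptionsBuilder<AppDbContext>()
                .UseInMemoryDatabase("DiscountTest")
                .Options;

            using var db = new AppDbContext(options);
            db.Orders.Add(new Order { Id = 1, Total = 200m });
            db.SaveChanges();

            // FakeItEasy stands in for the non-targeted dependency.
            var audit = A.Fake<IAuditLog>();
            var service = new DiscountService(db, audit);

            var result = service.ApplyLoyaltyDiscount(1);

            Assert.AreEqual(180m, result);
            A.CallTo(() => audit.Record(A<string>._)).MustHaveHappenedOnceExactly();
        }
    }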

The Payoff

In the end, due to time constraints, we ultimately had to settle for a lesser degree of coverage, focused on the most business-critical and convoluted areas of logic. The payoff from the tests cannot be overstated, however. Once written for a business case, the tests allow us to:

  • Exercise logic automatically during builds, ensuring consistent, repeatable results.
  • Document our business rules in code for future developers.
  • Deliver a higher-quality product before QA testing begins.
  • Run hundreds of tests in seconds on every build, giving us the quick feedback we need to not feel bogged down.

We're Learning and Improving at Omnitech

We are just scratching the surface of testing our projects in an automated fashion. Omnitech is looking further into how we can provide value to our customers through automated tests not only on business logic and compiled code, but also on frontend frameworks and backend database stored procedures/functions.

Omnitech Automated Testing Manifesto

At Omnitech, we value solving problems for customers with high-quality solutions in an efficient way. We believe that solutions cannot be of high quality unless they are tested. Therefore, we value testing that is automated, maintainable, consistent, and durable over manual testing, which can be tedious, repetitive, slow, and error-prone, and which requires recurring human time.