Random Test Inputs – part 2

If you saw my last post, Random inputs in unit testing, you’ll remember that I advocated the benefits of using random test data in your unit tests.

One of the bits of feedback I received (and it seems to be the main complaint from people who are against the idea) is that you need repeatable test input so that you can recreate failures.

I don’t think these two concepts are mutually exclusive. Let me explain.

A decent test framework will help you recreate failures

I use testtools as my preferred test framework for Python. One of its benefits is that its so-called Matchers will output useful information on failure, including the data used as input. As a very basic example:

import testtools 
 
class TestMe(testtools.TestCase): 
    def test_failure(self): 
        self.assertThat(5, testtools.matchers.Equals(4))

This produces the output:

$ python -m testtools.run fail.py       
Tests running... 
====================================================================== 
FAIL: fail.TestMe.test_failure 
---------------------------------------------------------------------- 
Traceback (most recent call last): 
  File "fail.py", line 7, in test_failure 
    self.assertThat(5, testtools.matchers.Equals(4)) 
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 435, in assertThat 
    raise mismatch_error 
testtools.matchers._impl.MismatchError: 4 != 5

You can see that the inputs are clearly listed. If this is not clear enough, you can add explanatory text that is output on failure. Changing your assertion to:

def test_failure(self):
    self.assertThat(5, testtools.matchers.Equals(4), "Expected 4, got 5")

Results in:

testtools.matchers._impl.MismatchError: 4 != 5: Expected 4, got 5

Thus, if you get a failure from a random input, it’s easy to see what data caused that.

Making your data recognisable

Once you start randomising a lot of input data, it becomes hard to identify which input is which when a test with many inputs fails. Take this example, which I have adapted from my days as the MAAS engineering lead:

from itertools import imap, islice, repeat
import testtools
import random
import string


class TestMe(testtools.TestCase):
    random_letters = imap( 
        random.choice, repeat(string.letters + string.digits))

    def make_string(self, size=10):
        return "".join(islice(self.random_letters, size))

    def test_large_comparison(self):
        dict1 = dict(
            person=self.make_string(),
            age=self.make_string(),
            weight=self.make_string()
        )
        expected_dict = dict(person="foo", age="10", weight="200")

        self.assertThat(dict1, testtools.matchers.Equals(expected_dict))

This results in:

====================================================================== 
FAIL: fail.TestMe.test_large_comparison 
---------------------------------------------------------------------- 
Traceback (most recent call last): 
  File "fail.py", line 26, in test_large_comparison 
    self.assertThat(dict1, testtools.matchers.Equals(expected_dict)) 
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 435, in assertThat 
    raise mismatch_error 
testtools.matchers._impl.MismatchError: !=: 
reference = {'age': '10', 'person': 'foo', 'weight': '200'} 
actual    = {'age': 'HL8qw3xnJO', 'person': 'IuhoaGzQHB', 'weight': 'Rfi3lxsiHf'}

My first question on seeing output like this is “where did that data come from?” It could have leaked from buggy code and be totally unrelated to your intended input.

This has an easy answer if you make a small change to the test harness:

    def make_string(self, prefix="", size=10): 
        return prefix + "".join(islice(self.random_letters, size)) 
 
    def test_large_comparison(self): 
        dict1 = dict( 
            person=self.make_string("person"), 
            age=self.make_string("age"), 
            weight=self.make_string("weight") 
        ) 
        expected_dict = dict(person="foo", age="10", weight="200") 
 
        self.assertThat(dict1, testtools.matchers.Equals(expected_dict))

Here, the make_string() function has been changed to accept a fixed prefix for the string you want generated. This has the following effect on the output:

reference = {'age': '10', 'person': 'foo', 'weight': '200'} 
actual    = {'age': 'ageb8BIwdwZrM', 
 'person': 'personBSUhvYxFTU', 
 'weight': 'weightngdjfm7ef1'}

You can instantly see that your random input was in fact generated in the way you intended (or not, as the case may be), and it is easy to identify its source!
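If you’re on Python 3, where string.letters and itertools.imap no longer exist, the same prefix trick needs nothing beyond the standard library. Here is a hypothetical stand-alone version of the make_string() helper above:

```python
import random
import string


def make_string(prefix="", size=10):
    """Return `prefix` followed by `size` random letters and digits."""
    chars = string.ascii_letters + string.digits
    return prefix + "".join(random.choice(chars) for _ in range(size))


# Every value is random, but each one wears its origin on its sleeve.
record = {
    "person": make_string("person"),
    "age": make_string("age"),
    "weight": make_string("weight"),
}
```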

I hope this was useful. Leave me feedback if this helped you at all!
