Last Updated: December 25, 2018
Brian Zeligson

Generator-driven property tests: why and how

Why:

When your code runs in the real world, you don't know the inputs to a function. The values come from up the call stack, or from external sources like user input or remote systems. If that weren't the case, your software would be a completely closed system, and you could use a proof assistant instead of a general-purpose programming language.

Assuming this is not your scenario, why, then, do you test your software as if you do know the inputs? Tests that pass known inputs and assert on outputs are completely anemic. What is the type of your input? A string? A number? Some product containing a mix of other types? How many values can inhabit that type? Hundreds? Thousands? Infinitely many? Almost certainly more than you will ever enumerate by hand.

So what do your five to twenty assertions against the results of your five to twenty hand-picked known inputs really tell you about your code? Nothing meaningful, but you can say you tried.

If you want meaningful assurance that your software is correct, you test it the way it is called: with unknown inputs. If you haven't done this before (and even if you have), it's not immediately obvious how, and it's harder still to do without shoehorning in the old approach, or re-implementing the code under test just to reach a finish line that amounts to asserting a tautology.

How:

Testing code with unknown inputs looks different for every function under test, and requires identifying observable properties of that code, which are broader in scope than the typical "give it x, it returns y" assertions. This is why generator-driven testing (inputs are generated randomly instead of hard-coded) is almost always paired with property testing (asserting on properties of the kind just described).
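
To make the contrast concrete, here is a minimal generator-driven property in Haskell using QuickCheck (my choice of library; the post names none). The framework invents the inputs instead of the author:

    import Test.QuickCheck

    -- Hard-coded input would be: assert (abs (-3) == 3).
    -- Generated input: QuickCheck supplies a hundred random Integers.
    prop_absNonNegative :: Integer -> Bool
    prop_absNonNegative n = abs n >= 0

    main :: IO ()
    main = quickCheck prop_absNonNegative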

Although the demands vary with the code under test, there is a formula that generally yields highly effective assurances, incidentally surfacing meaningful properties of the code under test along the way. It's stupidly simple (a sketch in code follows the steps):

  1. Pass the inputs to the function under test.
  2. Observe the output.
  3. Assert on the inputs based on the output.
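
Here is what those three steps can look like, again with QuickCheck. The function under test, keepEvens, is my stand-in, not something from the post:

    import Data.List (isSubsequenceOf, (\\))
    import Test.QuickCheck

    -- Stand-in function under test: keep only the even elements.
    keepEvens :: [Int] -> [Int]
    keepEvens = filter even

    prop_keepEvens :: [Int] -> Property
    prop_keepEvens xs =
      let ys = keepEvens xs   -- steps 1 and 2: pass generated input, observe output
      in  conjoin             -- step 3: assert on the input based on the output
            [ counterexample "everything kept is even" (all even ys)
            , counterexample "kept elements appear in input order" (ys `isSubsequenceOf` xs)
            , counterexample "everything dropped is odd" (all odd (xs \\ ys))
            ]

    main :: IO ()
    main = quickCheck prop_keepEvens

Note that the last assertion describes the input (whatever was dropped must have been odd) in terms of the observed output, rather than recomputing the expected output.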

The key is that instead of trying to know things about the result based on the input (as typically found in tests built on statically known inputs), you invert the relationship.

Assertions here will pair what happened (the state of the result given by the function under test) with why it happened (the aspects of the inputs that can be deduced from the observed properties of the result).
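
As a sketch of that pairing, assuming QuickCheck and another made-up function under test, the test can branch on what happened and assert why it must have happened:

    import Test.QuickCheck

    -- Stand-in function under test: division that refuses a zero divisor.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv n d = Just (n `div` d)

    prop_safeDiv :: Int -> Int -> Property
    prop_safeDiv n d =
      case safeDiv n d of
        -- what happened: no result; why: the divisor was zero
        Nothing -> counterexample "Nothing implies a zero divisor" (d == 0)
        -- what happened: a quotient; why: it recombines with the inputs
        Just q  -> counterexample "Just q implies it recombines with the inputs"
                     (d /= 0 && q * d + n `mod` d == n)

    main :: IO ()
    main = quickCheck prop_safeDiv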

The end result is a fairly precise representation of the observable properties of a function across a meaningful set of inputs. Along the way, the process will lead you to further constrain and/or liberate the inputs the function accepts and the outputs it produces, so that such a set of observable properties can be defined at all. Functions like this are reliable, understandable, and safe to build upon.
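
As one sketch of that constraining step, continuing the hypothetical safeDiv above: once the properties reveal that a zero divisor is the only failure case, the constraint can migrate into the input type, and the Maybe disappears. I reuse QuickCheck's NonZero wrapper here for brevity; a real codebase would define its own non-zero type:

    import Test.QuickCheck

    -- The constrained input type now rules out the zero divisor.
    divExact :: Int -> NonZero Int -> Int
    divExact n (NonZero d) = n `div` d

    prop_divExact :: Int -> NonZero Int -> Bool
    prop_divExact n nz@(NonZero d) = divExact n nz * d + n `mod` d == n

    main :: IO ()
    main = quickCheck prop_divExact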

In short, your tests should link inputs to outputs by observing outputs as a means of describing inputs, so that the description of the input explains why the output was produced as it was.

2 Responses

Basically your function is input -> output, your properties are input <- output. Properties are in essence dual to what they describe.

over 1 year ago

Concretely, for some function under test f :: a -> b, a set of properties can be defined by p :: (a -> c, b -> c) when fst p == (snd p) . f.
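
A concrete instance (my example, not the commenter's): take f = reverse and project length on both sides, since length xs == length (reverse xs):

    import Test.QuickCheck

    f :: [Int] -> [Int]
    f = reverse

    -- fst p == (snd p) . f, i.e. length == length . reverse
    p :: ([Int] -> Int, [Int] -> Int)
    p = (length, length)

    prop_lengthPreserved :: [Int] -> Bool
    prop_lengthPreserved xs = fst p xs == (snd p . f) xs

    main :: IO ()
    main = quickCheck prop_lengthPreserved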

Interestingly, given an additional g :: a -> x, h :: b -> y, i :: c -> x -> a, and j :: c -> y -> b such that (\a -> i ((fst p) a) (g a)) == id and (\b -> j ((snd p) b) (h b)) == id, then i and j effectively capture the data loss from a -> c and b -> c. You know exactly what is and is not covered by your defined properties, and you can surgically tune what is included and left out, balancing against how close you come to re-implementing the code under test, in order to preserve as much resolution as possible.
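
One concrete instance of this construction (again my example): let f swap a pair while the property observes only the Int component; then g and h recover exactly the String the property discards, and i and j witness the two identities:

    -- f swaps; the property observes only the Int on each side.
    f :: (Int, String) -> (String, Int)
    f (n, s) = (s, n)

    p :: ((Int, String) -> Int, (String, Int) -> Int)
    p = (fst, snd)   -- fst p == (snd p) . f

    -- g and h capture what the property throws away (the String);
    -- i and j rebuild the originals: i ((fst p) a) (g a) == a and
    -- j ((snd p) b) (h b) == b, so the data loss is exactly the String.
    g :: (Int, String) -> String
    g = snd

    h :: (String, Int) -> String
    h = fst

    i :: Int -> String -> (Int, String)
    i n s = (n, s)

    j :: Int -> String -> (String, Int)
    j n s = (s, n)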

over 1 year ago