When your code runs in the real world, you don't know the inputs to a function. The values come from up the call stack, or from external sources like user input or remote systems. If that weren't the case, your software would be a completely closed system, and you could use a proof assistant instead of a general-purpose programming language.
Assuming this is not your scenario, why, then, do you test your software as if you do know the inputs? Tests that pass known inputs and assert on outputs are completely anemic. What is the type of your input? A string? A number? Some product containing a mix of other types? How many values can inhabit that type? Hundreds? Thousands? Infinitely many? Probably that last one.
So what do your five to twenty assertions against the results of your five to twenty hand-picked known inputs really tell you about your code? Nothing meaningful, but you can say you tried.
If you want meaningful assurance that your software is correct, you test it the way it is called: with unknown inputs. If you haven't done this before (and even if you have), it's not immediately obvious how, and it's even more challenging to accomplish without shoehorning in the old approach, or re-implementing the code under test to reach a finish line that amounts to asserting a tautology.
Testing code with unknown inputs is unique to the code under test, and requires identifying observable properties of that code, which are broader in scope than the typical "give it x, it returns y" variety. This is why generator-driven testing (where inputs are generated randomly instead of hard-coded) is almost always paired with property testing (asserting the kind of observable properties just described).
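To make the pairing concrete, here is a minimal sketch using only the Python standard library (dedicated libraries such as Hypothesis automate the generation and shrink failing cases, but the idea is the same). The function under test here, `sorted`, is chosen purely for illustration: rather than asserting one hand-picked output, we assert properties that must hold for any input.

```python
import random
from collections import Counter

def test_sorted_properties():
    """Generator-driven property test: for randomly generated lists,
    sorted() must return an ordered permutation of its input."""
    for _ in range(200):
        # Generate an unknown input rather than hard-coding one.
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        out = sorted(xs)
        # Property: the output is ordered.
        assert all(a <= b for a, b in zip(out, out[1:]))
        # Property: the output is a permutation of the input.
        assert Counter(out) == Counter(xs)
```

Neither assertion names a specific value; each describes a relationship that must hold across the whole input space being sampled.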
Although the demands vary with the code under test, there is a formula that generally yields highly effective assurances, incidentally surfacing meaningful properties of the code under test along the way. It's stupidly simple:
- Pass the inputs to the function under test.
- Observe the output.
- Assert on the inputs based on the output.
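The three steps above can be sketched as follows. Note that `keep_positive` is a hypothetical stand-in for code under test, not a function from this article; the point is the shape of the assertions.

```python
import random

def keep_positive(xs):
    """Hypothetical function under test: keep the strictly positive values."""
    return [x for x in xs if x > 0]

def test_keep_positive():
    for _ in range(200):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 30))]
        out = keep_positive(xs)  # 1. pass the inputs to the function under test
        # 2. observe the output; 3. assert on the inputs based on the output:
        # what happened (x appears in the result) is paired with why it
        # happened (x was a positive member of the input).
        for x in out:
            assert x > 0 and x in xs
        # And every positive input must account for an element of the output.
        assert len(out) == sum(1 for x in xs if x > 0)
```

The assertions run in the opposite direction from a conventional test: instead of predicting the output from a known input, they use the observed output to make claims about the input that produced it.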
The key is that instead of trying to know things about the result based on the input (as is typical of tests built on statically known inputs), you invert the relationship.
Assertions here pair what happened (the state of the result given by the function under test) with why it happened (the aspects of the inputs that can be deduced from the properties of the result being observed).
The end result is a fairly precise representation of the observable properties of a function across a meaningful set of inputs. The process will also lead you to further constrain and/or liberate the inputs the function accepts and the outputs it produces, so as to support the definition of such a set of observable properties. Functions like this are reliable, understandable, and safe to build upon.
In short, your tests should link inputs to outputs by observing outputs as a means of describing inputs, such that the description of the input explains why the output was produced as it was.