Note that many of these classifications match on similar values to try
to thwart potential optimizations,
present or future;
these approaches
may need further adjustment as new optimizations are introduced (or a
way to explicitly inhibit them).
These tests are also written a bit lazily,
given the difficulties in matching comprehensively;
that ought to be fixed in the future.
The old classification system would interpret missing values as $0$,
which could inadvertently trigger a match.
The new classification system will always yield \tparam{FALSE},
regardless of the predicate,
when a value is undefined.
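The difference can be sketched as follows;
this is a hypothetical Python model of the described semantics,
and the function names and predicate representation are illustrative,
not the actual system's API:

```python
def classify_legacy(values, key, predicate):
    """Legacy semantics: a missing value is interpreted as 0,
    so a predicate satisfied by 0 can still trigger a match."""
    return predicate(values.get(key, 0))


def classify_new(values, key, predicate):
    """New semantics: an undefined value always yields False,
    regardless of the predicate."""
    if key not in values:
        return False
    return predicate(values[key])


# A zero-matching predicate triggers on a *missing* value under the
# legacy semantics, but never under the new ones.
is_zero = lambda v: v == 0
print(classify_legacy({}, "x", is_zero))  # True
print(classify_new({}, "x", is_zero))     # False
```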
Certain behavior is shared between the old and new systems---%
in particular,
when the shorter match appears first.
The legacy system is frighteningly problematic when the matrix of
lesser column length appears after the first match---%
the commutative properties of the system are lost,
and the value from the previous match falls through!
The legacy classification system does something terrible when
the second match is the shorter of the two---%
it discards the indexes entirely!
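These two failure modes can be modeled as follows;
this is a speculative Python sketch of the observable behavior just
described,
not a reproduction of the legacy system's internals,
and all names are illustrative:

```python
def legacy_matrix_combine(first, second):
    """Combine two boolean matrices element-wise (logical AND).

    When a row of ``second`` has fewer columns, the value from the
    previous match (``first``) falls through for the missing columns,
    so the operation is not commutative.
    """
    result = []
    for arow, brow in zip(first, second):
        row = []
        for j, a in enumerate(arow):
            if j < len(brow):
                row.append(a and brow[j])
            else:
                row.append(a)  # previous match's value falls through
        result.append(row)
    return result


def legacy_vector_combine(first, second):
    """Combine two boolean vectors element-wise (logical AND).

    ``zip`` truncates to the shorter operand: when the second match
    is the shorter, the trailing indexes of the first are discarded
    entirely.
    """
    return [a and b for a, b in zip(first, second)]


# Order matters: with the shorter matrix second, values fall through;
# swapped, the extra columns simply vanish.
print(legacy_matrix_combine([[1, 1]], [[1]]))    # [[1, 1]]
print(legacy_matrix_combine([[1]], [[1, 1]]))    # [[1]]
# With the shorter vector second, the third index is discarded.
print(legacy_vector_combine([1, 1, 1], [1, 0]))  # [1, 0]
```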