<?xml version="1.0"?>
<!--
Copyright (C) 2014-2021 Ryan Specialty Group, LLC.

This file is part of tame-core.

tame-core is free software: you can redistribute it and/or modify it
under the terms of the GNU General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
-->
<package xmlns="http://www.lovullo.com/rater"
xmlns:c="http://www.lovullo.com/calc"
x/0=0 with global flag for new classification system
This was originally my plan with the new classification system, but it was
undone because I had hoped to punt on the somewhat controversial
issue. Unfortunately, I see no other way. Here I attempt to summarize the
reasons why, many of which are specific to the design decisions of TAME.
Keep in mind that TAME is a domain-specific language (DSL) for writing
insurance rating systems. It should act intuitively for our use case, while
still being mathematically sound.
If you still aren't convinced, please see the link at the bottom.
Target Language Semantics (ECMAScript)
--------------------------------------
First: let's establish what happens today. TAME compiles into ECMAScript,
which uses IEEE 754-2008 floating-point arithmetic. Here we have:
x/0 = Infinity, x > 0;
x/0 = -Infinity, x < 0;
0/0 = NaN, x = 0.
This is immediately problematic: TAME's calculations must produce concrete
real numbers, always. NaN is not valid in its domain, and Infinity is of no
practical use in our computational model (TAME is built for insurance rating
systems, and one will never have infinite premium). Put plainly: the
behavior is undefined in TAME when any of these values are yielded by an
expression.
Furthermore, we have _three different possible situations_ depending on
whether the numerator is positive, negative, or zero. This makes it more
difficult to reason about the behavior of the system, for values we do not
want in the first place.
We then have these issues in ECMAScript:
Infinity * 0 = NaN.
-Infinity * 0 = NaN.
NaN * 0 = NaN.
These are of particular concern because of how predicates work in TAME,
which will be discussed further below. But it is also problematic because
of how it propagates: once you have NaN, you'll always have NaN, unless you
break out of the situation with some control structure that avoids using it
in an expression at all.
Let's now consider predicates:
NaN > 0 = false.
NaN < 0 = false.
NaN === 0 = false.
NaN === NaN = false.
These will be discussed in terms of classification predicates (matches).
We also have issues of serialization:
JSON.stringify(Infinity) = "null".
JSON.stringify(NaN) = "null".
This means that these values are difficult to transfer between systems,
even if we wanted them.
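To make the above concrete, here is a small plain-ECMAScript snippet
(runnable as-is in Node or a browser console; it is illustrative only and is
not TAME output) observing each of these semantics:

  // Division by zero under IEEE 754-2008 / ECMAScript:
  console.log( 1 / 0);   // Infinity
  console.log(-1 / 0);   // -Infinity
  console.log( 0 / 0);   // NaN

  // NaN and Infinity propagate through arithmetic:
  console.log(Infinity * 0);   // NaN
  console.log(NaN * 0);        // NaN
  console.log(100 + NaN);      // NaN

  // Every comparison involving NaN is false, including with itself:
  console.log(NaN > 0, NaN < 0, NaN === 0, NaN === NaN);   // false false false false

  // Serialization replaces non-finite values with null:
  console.log(JSON.stringify(Infinity));          // "null"
  console.log(JSON.stringify([NaN, Infinity]));   // "[null,null]"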
TAME's Predicates
-----------------
TAME has a classification system based on first-order logic, where ⊥ is
represented by 0 and ⊤ is represented by 1. These classifications are used
as predicates to calculations via the @class attribute of a rate block. For
example:
<rate-each class="property" generates="propValue" index="k">
<c:quotient>
<c:value-of name="buildingTiv" index="k" />
<c:value-of name="tivPropDivisor" index="k" />
</c:quotient>
</rate-each>
As can be observed via the Summary Page, this calculation compiles into the
following mathematical expression:
∑ₖ(pₖ(tₖ/dₖ)),
that is—the quotient is then multiplied by the value of the `property`
classification, which is either 0 or 1 for that index.
Let's say that tivPropDivisor were defined in this way:
<rate-each class="property" generates="tivPropDivisor" index="k">
<!-- ... logic here ... -->
</rate-each>
It does not matter what the logic here is. Observe that the predicate here
is `property` as well, which means that, if this risk is not a property
risk, then `tivPropDivisor` will be `0`.
Looking back at `propValue`, let's say that we do have a property risk, and
that `buildingTiv` is `[100_000, 200_000]` and `tivPropDivisor` is 1000. We
then have:
1(100,000 / 1000) + 1(200,000 / 1000) = 300.
Consider instead what happens if `property` is 0. Since we have no property
locations, we have `[0, 0]` as `buildingTiv` and `tivPropDivisor` is 0.
0(0/0) + 0(0/0) = 0(NaN) + 0(NaN) = NaN.
This is clearly not what was intended. The predicate is expected to be
_strongly_ zero, as if using an Iverson bracket:
((0/0)[0] + (0/0)[0]) = 0.
Of course, one option is to redefine TAME such that we use Iverson's
convention in place of summation; however, this is neither necessary nor
desirable given that
(a) NaN is not valid within the domain of any TAME expression, and
(b) Summation is elegantly generalized and efficiently computed using
vector arithmetic and SIMD functions.
That is: there's no use in messing with TAME's computational model for a
value that should be impossible to represent.
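Purely to illustrate the intended "strongly zero" behavior, here is a sketch
in ECMAScript; the helper name applyPredicate is made up for this example
and is not something the compiler emits:

  // A "strongly zero" predicate skips the guarded expression entirely,
  // so a NaN inside it can never leak out.
  const applyPredicate = (p, thunk) => (p === 0 ? 0 : p * thunk());

  applyPredicate(1, () => 100000 / 1000);   // 100
  applyPredicate(0, () => 0 / 0);           // 0, even though 0/0 is NaN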
Short-Circuiting Computation
----------------------------
There's another way to look at it, though: that we intended to skip the
computation entirely, and so it doesn't matter what the quotient is. If the
compiler were smart enough (and maybe one day it will be), it would know
that the predicates of `tivPropDivisor` and `propValue` are the same and so
there is no circumstance under which we would compute `propValue` and have
`tivPropDivisor` be 0.
The problem is: that short-circuiting is employed as an _optimization_, and
is an implementation detail. Mathematically, the expression is unchanged,
and is still invalid within TAME's domain. It is unrepresentable, and so
this is not an out.
But let's pretend that it was defined that way, which would yield this:
            { ∑ₖ(pₖ(tₖ/dₖ)),  ∀x∈p(x = 1);
propValue = <
            { 0,              otherwise.
This is the optimization that is employed, but it's still not mathematically
correct! What happens if p₀ = 1, but p₁ = 0? Then we have:
1(100,000/1000) + 0(0/0) = 100 + NaN = NaN,
but the _intent_ was clearly to have 100 + 0 = 100, and so we return to the
original problem once again.
Classification Predicates and Intent
------------------------------------
Classifications are used as predicates for equations, but classifications
_themselves_ have predicates in the form of _matches_. Consider, for
example, a classification that may be used in an assertion to prevent
negative premium from being generated:
<t:assert failure="premBuilding must not be negative for any index">
<t:match-gte on="premBuilding" value="#0" />
</t:assert>
Simple enough—the system will fail if the premium for a given building is
below $0.
But what happens if `premBuilding` is calculated like so?
<rate-each class="property" yields="premBuildingTotal"
generates="premBuilding" index="k">
<c:product>
<c:value-of name="propValue" index="k" />
<c:value-of name="propRate" index="k" />
</c:product>
</rate-each>
Alas, if `property` is false for any index, then we know that `propValue` is
NaN, and NaN * x = NaN, and so `premBuilding` is NaN.
The above assertion will compile the match into the first-order sentence
∀x∈b(x ≥ 0).
Unfortunately, NaN is not greater than, less than, or equal to 0 (or
anything else), and so _this assertion will trigger_. This causes
practical problems with the `_premium_` template, which has an
`@allow-zero@` argument to permit zero premium.
Consider this real-world case that I found (variables renamed), to avoid a
strawman:
<t:premium class="loc" round="cent"
yields="locInitialTotal"
generates="locInitial" index="k"
allow-zero="true"
desc="...">
<c:value-of name="premAdditional" />
<c:quotient>
<c:value-of name="premLoc" index="k" />
<c:value-of name="premTotal" />
</c:quotient>
</t:premium>
This appears to be responsible for splitting up `premAdditional` relative to
the total premium contribution of each location. It explicitly states that
it wants to permit a zero value. The intent of this block is clear: a value
of 0 is explicitly permitted and _expected_.
But if `premTotal` is for whatever reason 0—whether it be due to a test
case or some unexpected input—then it'll yield a NaN and make the entire
expression NaN. Or if `premAdditional` or `premLoc` are tainted by a NaN,
the same result will occur. The assertion will trigger. And, indeed, this
is what I'm seeing with test cases against the new classification system.
What about Infinity? Is it intuitive that, should `propValue` in the
previous example be positive and `propRate` be 0, that we would, rather than
producing a very small value, produce an infinitely large one? Does that
match intuition? Remember, this system is a domain-specific language for
_our_ purposes—it is not intended to be used to model infinities.
For example, say we had this submission because the premium exceeds our
authority to write with some carrier:
<t:submit reason="Premium exceeds authority">
<t:match-gt name="premBuilding" value="#100k" />
</t:submit>
If we had
(100,000 / 0) = ∞,
then this submit reason would trigger. Surely that was not intended, since
we have `property` as a predicate and `propRate` with the same predicate,
implying that the answer we _actually_ want is 0! In that case, what we
_probably_ want to trigger is something like
<rate yields="premFinal">
<t:maxreduce>
<c:value-of name="premBuildingTotal" />
<c:value-of name="#500" />
</t:maxreduce>
</rate>,
in order to apply a minimum premium of $500. But if `premBuildingTotal` is
Infinity, then you won't get that—you'll get Infinity, which is of course
nonsense.
And never mind -Infinity.
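Again in plain ECMAScript terms (illustrative only; the maxreduce above is
just a max over its arguments):

  const premBuilding = 100000 / 0;             // Infinity
  console.log(premBuilding > 100000);          // true: the submit reason triggers
  console.log(Math.max(premBuilding, 500));    // Infinity: the $500 minimum does nothing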
Why Wasn't This a Problem Before?
---------------------------------
So why bring this up now? Why have we survived a decade without this?
We haven't, really—these bugs have been hidden. But the old classification
system covered them up; predicates would implicitly treat missing values as
0 by enclosing them in `(x||0)` in the compiled code. Observe this
ECMAScript code:
NaN || 0 = 0.
Consequently, the old classification system absorbed bad values and treated
them implicitly as 0. But that was a bug, and had to be removed; it meant
that missing indexes in classifications would trigger predicates that were
not intended to be triggered whenever they matched against 0, or against
a value less than some positive number. (See
`core/test/core/class` for examples.)
The new classification system does not perform such defaulting. _But it
also does not expect to receive values outside of its valid domain._
Consequently, _NaN and Infinity lead to undefined behavior_, and the
current implementation causes the predicate to match (NaN < 0) and therefore
fail.
The reason is that this implementation is intended to convey precisely the
computation necessary for the classification system, as formally defined, so
that it can later be optimized even further. Checking for values outside the
domain should not only be unnecessary; doing so would prevent such future
optimizations.
Furthermore, parameters used to compile into (param||0), to account for
missing values or empty strings. This changed somewhat recently with
5a816a4701211adf84d3f5e09b74c67076c47675, which pre-cast all inputs and
allowed relaxing many of those casts since they were both wasteful and no
longer necessary.
Given that, for all practical purposes, 0/0=0 in the system until less than a
year ago.
Infinity, of course, is a different story, since (Infinity||0)=Infinity;
this one has always been a problem.
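Concretely, in ECMAScript:

  // The old (x||0) guard silently swallows NaN (and empty inputs),
  // but not Infinity:
  console.log(NaN || 0);        // 0
  console.log("" || 0);         // 0
  console.log(Infinity || 0);   // Infinity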
Let's Just Fail
---------------
Okay, so we cannot produce a valid expression when dividing by zero, so let's
just fail.
We could mean that in two different ways:
1. Fail at runtime if we divide by 0; or
2. Fail at compile-time if we _could_ divide by 0.
Both of these have their own challenges.
Let's dismiss #2 right off the bat for now, because until we have TAMER,
that's not really feasible. We need something today. We will discuss that
in the future.
For #1—we cannot just throw an error and halt computation, because if the
`canterm` flag passed into the system is `false`, then _computation must
proceed and return all results_. Terminating classifications are checked
after returning rather than throwing errors.
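For illustration, failing at runtime would amount to something like the
following hypothetical guard (not something the compiler emits), and throwing
here is exactly what a canterm=false run cannot tolerate:

  // Hypothetical strict quotient: halts the entire computation on a bad divide.
  function divStrict(n, d) {
    if (d === 0) throw new Error("division by zero");
    return n / d;
  }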
Since we have to proceed with computation, then the computations have to be
valid, and so we're left with the same problem again—we cannot have
undefined behavior.
One could argue that, okay, we have undefined behavior, but we're going to
fail because of the assertion anyway! That's potentially defensible, but it
is at the moment undesirable, because we get so many failures. And,
relative to the section below, it's not clear to me what benefit we get from
that behavior other than making things more difficult for ourselves.
Furthermore, such an assertion would have to be defined for every
calculation that performs a quotient, and would have to set some
intermediate flag in the calculation which would then have to be checked for
after-the-fact. This muddies the generated calculation, which causes
problems for optimizations, because it requires peering into state of the
calculation that may be hidden or optimized away.
If we decide that calculations must be valid because we cannot fail, and we
have to stick with the domain of calculations, then `x/0` must be
_something_ within that domain.
x/0=0 Makes Sense With the Current System
-----------------------------------------
Let's take a step back. Consider a developer who is unaware that
NaN/Infinity are permitted in the system—they just know that division by
zero is a bad thing to do because that's what they learned, and they want to
avoid it in their code.
Consider that they started with this:
<rate-each class="property" generates="propValue" index="k">
<c:quotient>
<c:value-of name="buildingTiv" index="k" />
<c:value-of name="tivPropDivisor" index="k" />
</c:quotient>
</rate-each>
They have inspected the output of `tivPropDivisor` and see that it is
sometimes 0. They understand that `property` is a predicate for the
calculation, and so reasonably think that they could do something like this:
<classify as="nonzero-tiv-prop-divisor" ...>
<t:match-ne on="tivPropDivisor" value="#0" />
</classify>
and then change the rate-each to
<rate-each class="property nonzero-tiv-prop-divisor" ...>.
Except that, of course, we know that will have no effect, because a NaN is a
NaN. This is not intuitive.
So they'd have to do this:
<rate-each class="property" generates="propValue" index="k">
<c:cases>
<c:case>
<t:when-ne name="tivPropDivisor" value="#0" />
<c:quotient>
<c:value-of name="buildingTiv" index="k" />
<c:value-of name="tivPropDivisor" index="k" />
</c:quotient>
</c:case>
<c:otherwise>
<c:value-of name="#0" />
</c:otherwise>
</c:cases>
</rate-each>.
But for what purpose? What have we gained over simply having x/0=0, which
does this for you?
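In other words, all of that boilerplate reduces to the per-index guard below,
which is exactly what x/0=0 gives you implicitly (illustrative ECMAScript
with made-up sample inputs):

  const buildingTiv    = [100000, 200000];
  const tivPropDivisor = [1000, 0];

  // What the c:cases workaround amounts to for each index k:
  const propValue = buildingTiv.map((tiv, k) =>
    tivPropDivisor[k] !== 0 ? tiv / tivPropDivisor[k] : 0);   // [100, 0]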
The reason this is so unintuitive is that 0 is the default case in
every other part of the system. If something doesn't match a predicate, the
value becomes 0. If a value at an index is not defined, it is implicitly
zero. A non-matching predicate is 0.
This is exploited for reducing values using summation. So the behavior of
the system with regards to 0 is always on the mind of the developer. If we
add it in another spot, they would think nothing of it.
It would be nice if it acted as an identity in a monoidal operation,
e.g. as 0 for sums but as 1 for products, but that's not how the system
works at all today. And indeed such a thing could be introduced using a
special template in place of `c:value-of` that copies the predicates of the
referenced value and does the right thing.
The _danger_, of course, is that this is _not_ how the system has worked, and
so changing the behavior risks breaking something that has relied on this
undefined behavior for so long. This is indeed a risk, but I take some
comfort in (a) all of the test cases for our system passing despite a
significant number of x/0=0 cases being triggered due to limited inputs, and
(b) these situations being _not correct today_, resulting in `null` in
serialized result data because `JSON.stringify([NaN, Infinity]) === "[null,null]"`.
Given all of that, predictable incorrect behavior is better than undefined
behavior.
So x/0=0 Isn't Bad?
-------------------
No, and it's mathematically sound. This decision isn't unprecedented—
Coq, Lean, Agda, and other theorem provers define x/0=0. APL originally
defined x/0=1, but later switched to 0. Other languages do their own thing
depending on what is right for their particular situation.
Division is normally derived from
a × a⁻¹ = 1, a ≠ 0.
We're simply not using that definition—when we say "quotient", or use the
`/` symbol, we mean a _different_ function (`div`, in the compiled JS),
where we have an _additional_ axiom that
a / 0 = 0.
And, similarly,
0⁻¹ = 0.
So we've taken a _normally undefined_ case and given it a definition. No
inconsistency arises.
In fact, this makes _sense_ to do, because _this is what we want_. The
alternative, as mentioned above, is a lot of boilerplate—checking for 0 any
time we want to do division. Complicating the compiler to check for those
cases. And so on. It's easier to simply state that, in TAME, quotients
have this extra convenient feature whereby you don't have to worry about
your denominator being zero because it'll act as though you enclosed it in a
case statement, and because of that, all your code continues to operate in
an intuitive way.
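Conceptually, then, the compiled quotient behaves like the function below; a
sketch of the semantics only, not necessarily the literal emitted code:

  // TAME's quotient: a total function with the additional axiom x/0 = 0.
  const div = (n, d) => (d === 0 ? 0 : n / d);

  div(100000, 1000);   // 100
  div(100000, 0);      // 0, rather than Infinity
  div(0, 0);           // 0, rather than NaN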
I really recommend reading this blog post regarding the Lean theorem prover:
https://xenaproject.wordpress.com/2020/07/05/division-by-zero-in-type-theory-a-faq/
xmlns:t="http://www.lovullo.com/rater/apply-template"
core="true"
desc="Base features">
The \pkgself~package exposes common and internal
definitions. Ideally, this package will be included automatically by
the compiler to remove repetitive, boilerplate imports. Importing
this package isn't necessary if none of these definitions are
needed.
<section title="Internal Constants">
\ref{_CMATCH_} is a magic constant that contains the result of
a~classification match. This is used implicitly by
\ref{rate-each}.\footnote{The symbol is \Xi~because it looks like
a sideways array.}

\todo{Remove in favor of a local variable or generated
classification; there is no need (anymore) for this to be magic.}

<const name="_CMATCH_" type="boolean" sym="\Xi"
desc="Classification match vector (applicability)">
<item value="0"
desc="Dummy value; this set is populated upon entering
each rate block" />
</const>
The runtime is responsible for populating \ref{__DATE_YEAR__} with
a proper value representing the current year.

\todo{TAME is deterministic with this one exception; remove it and
have users use the params from {\tt datetime} instead if they need this
datum.}

<const name="__DATE_YEAR__" magic="true"
value="0" type="integer"
desc="Current year"
sym="\widehat{D^\gamma}" />
</section>
<section title="Primitive Types">
Primitives are defined internally; these definitions simply
provide symbols to permit their use.

<typedef name="integer"
desc="Any value in the set of integers"
sym="\mathbb{I}">
<base-type />
</typedef>

<typedef name="float"
desc="Any real number (represented as a float)"
sym="\mathbb{R}">
<base-type />
</typedef>

\ref{empty} does not have much use outside of the compiler.

<typedef name="empty"
desc="Empty set"
sym="\emptyset">
<base-type />
</typedef>
</section>
<section title="Boolean and Unknown">
\ref{boolean} contains the boolean \ref{TRUE} and~\ref{FALSE} values,
which map to~$1$ and~$0$ respectively.
The \ref{maybe} type is the union of \ref{boolean} and \ref{NOTHING},
with a value of~$-1$;\footnote{
This is similar in spirit to the Haskell \tt{Maybe} type,
or the OCaml \tt{Option} type.
}this is commonly used to represent an unknown state or missing
value.\footnote{
The \ref{nothing}~type is used for the sake of the union;
it should not be used directly.}

<typedef name="maybe" desc="Boolean or unknown value">
<union>
<typedef name="nothing" desc="Unknown value">
<enum type="integer">
<item name="NOTHING" value="-1" desc="Unknown or missing value" />
</enum>
</typedef>

<typedef name="boolean" desc="Boolean values">
<enum type="integer">
<item name="TRUE" value="1" desc="True" />
<item name="FALSE" value="0" desc="False" />
</enum>
</typedef>
</union>
</typedef>

The constant \ref{UNKNOWN} is also defined as~$-1$ to serve as an
alternative to the term~``nothing''.

<const name="UNKNOWN" value="-1"
desc="Unknown or missing value" />
</section>
<section title="Convenience">
$0$~is a~common value. Where a value is required (such
as a~template argument), \ref{ZERO} may be used. TAME now
supports a~constant-scalar syntax ({\tt #0}; \todo{reference this
in documentation}), making this largely unnecessary.

This is declared as a float to provide compatibility with all
types of expressions.

<const name="ZERO" value="0.00"
desc="Zero value" />

In the case where classifications are required, but a~static
assumption about the applicability of the subject can be made, we
have values that are always~true and always~false. The use
of~\ref{never} may very well be a~code smell, but let us not rush
to judgment.\footnote{\ref{never} has been added as an analog
to~\ref{always}; its author has never had use for it. Oh, look,
we just used ``never''.}

<classify as="always"
desc="Always true"
yields="alwaysTrue"
keep="true" />

<classify as="never"
any="true"
desc="Never true"
yields="neverTrue"
keep="true" />
</section>
<section title="Work-In-Progress">
\ref{_todo_} formalizes TODO items and may optionally yield a
value~\tt{@value@} for use within calculations.%
\footnote{This is different from its previous behavior of always
yielding a scalar~$0$.}
All uses of the \ref{_todo_} template will produce a warning composed of
its description~\tt{@desc@}.

<template name="_todo_"
desc="Represents work that needs to be done">
<param name="@desc@" desc="TODO desc">
<text>TODO</text>
</param>

<param name="@value@" desc="Placeholder value" />
<param name="@index@" desc="Placeholder value index">
<text></text>
</param>

<unless name="@value@">
<unless name="@index@" eq="">
<error>Using @index@ without @value@</error>
</unless>
</unless>

<warning>
TODO: <param-value name="@desc@" />
</warning>

<if name="@value@">
<c:value-of name="@value@" index="@index@" />
</if>
</template>

The \ref{_ignore_} template serves as a~block
comment.\footnote{This is useful since XML does not support nested
comments, which makes it difficult to comment out code that
already has XML comments.} It may be useful for debugging, but is
discouraged for use otherwise. The \ref{_ignore_/@desc@} param
should be used to describe intent.

<template name="_ignore_"
desc="Removes all child nodes (as if commented out)">
<param name="@values@" desc="Nodes to comment out" />
<param name="@desc@" desc="Reason for ignore" />

<warning>Ignored block!</warning>
</template>
</section>
<section title="Calculations">
These templates represent calculations that used to be defined as XSLT
templates before TAME's template system existed.

<template name="_yield_"
desc="Final scalar result provided to caller">
<param name="@values@" desc="Yield calculation" />

<rate yields="___yield" local="true">
<param-copy name="@values@" />
</rate>
</template>
<template name="_rate-each_"
desc="Convenience template that expands to a lv:rate block summing over
the magic _CMATCH_ set with the product of its value">
<param name="@values@"
desc="Yield calculation" />

<param name="@generates@" desc="Generator name (optional)">
<text></text>
</param>

<param name="@yields@" desc="Yield (optional)">
<text>_</text>
<param-value name="@generates@" />
</param>

<!-- at least one of generates or yields is required -->
<if name="@yields@" eq="">
<if name="@generates@" eq="">
<error>must provide at least one of @generates or @yields</error>
</if>
</if>

<param name="@class@"
desc="Space-delimited classifications for predicated iteration" />
<param name="@no@"
desc="Space-delimited classifications for predicated iteration to prevent matches">
<text></text>
</param>

<param name="@index@"
desc="Generator index" />

<param name="@dim@" desc="Dim (optional)">
<text></text>
</param>

<param name="@gensym@" desc="Generator TeX symbol">
<text></text>
</param>

<rate class="@class@" no="@no@" yields="@yields@"
gentle-no="true"
desc="Total {@yields@} premium">
<c:sum of="_CMATCH_" dim="@dim@" sym="@gensym@"
generates="@generates@" index="@index@"
desc="Set of individual {@yields@} premiums">
<c:product>
<c:value-of name="_CMATCH_" index="@index@"
label="One if {@class@} and not {@no@} (if provided), otherwise zero" />
<param-copy name="@values@" />
</c:product>
</c:sum>
</rate>
</template>
</section>
<section title="Feature Flags">
These templates alter the behavior of the TAME compiler or runtime.
They will be removed at some point in the future.

<section title="Classification System">
The template \tt{_use-new-classification-system_} sets a compile-time
flag that will cause all following sibling classifications to be
compiled using the new classification system.
Once the feature is enabled by default,
this template will become a noop and will begin to emit a warning,
before eventually being removed.

It is possible to mix both old and new classifications within the same
package,
though such behavior may lead to confusion in certain cases.
For more information on where the new and old systems differ,
see the \tt{core/test/core/class} specification.

<template name="_use-new-classification-system_"
desc="Compile following-sibling::lv:classify using the new
classification system">
<!-- Even though this is a template param-meta, it will only affect
following-sibling for performance reasons -->
<param-meta name="___feature-newclassify" value="1" />
<t:todo desc="remove _use-new-classification-system_ application;
the new classification system is enabled by default
and this template no longer has any effect" />
</template>
</section>
</section>
</package>