This sets us up to be able to determine how `Dangling` expressions will be
rooted into templates.
This new strategy isn't yet handling `Dangling`; I wanted to get this
committed first so that the `Dangling` refactoring is clearer.
DEV-13708
Expressions were previously tied to packages. This prepares for using a
`Tpl` as a container for expressions.
This does not yet handle the situation of auto-rooting dangling expressions
within the container.
DEV-13708
This results in less useful debug output, but it'll be needed for using
a (possibly-anonymous) template as evidence.
This evidence is simply for debugging, and to require some sort of value
during development to make it evident when something may be done
incorrectly (if no obvious value exists).
DEV-13708
This is more of the same refactoring that has been happening. This
extraction also helps emphasize the relationship between imported objects,
and isolates the growing number of test cases. This parser will only grow.
DEV-13708
Just as was done with the expression parser (`AirExprAggregate`), which
this will utilize. This initializes it, but doesn't yet make use of it.
Refactoring was definitely needed; decomposing this is quite a bit of work,
in no small part because of the complexity. This helps significantly.
DEV-13708
This works around limitations of Rust's borrow checker as of the time of
writing. See the provided documentation for more information.
The branch context is not yet exposed to the `delegate` family of methods;
it will be added only as needed in the future.
DEV-13708
This delegates expression parsing to `AirExprAggregate`, in an effort to
both begin to simplify the understanding and maintenance of `AirAggregate`;
and allow for parser composition for template parsing.
This utilizes the prior changes for token sum types to precisely define the
subset of AIR tokens supported by the expression parser. This differs from
prior approaches which delegated until a dead state, relying on runtime
information to determine if a parser has finished. This allows us to
determine that statically.
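To illustrate the shape of this (with hypothetical variant names, not the
actual AIR tokens):

```rust
enum AirToken {
    ExprStart,
    ExprEnd,
    PkgStart,
}

enum AirExprToken {
    ExprStart,
    ExprEnd,
}

fn delegate(tok: AirToken) {
    // Only the expression subset is convertible into the child parser's
    // token type; everything else is statically known to be this
    // parser's responsibility, so no runtime dead state is required.
    match tok {
        AirToken::ExprStart => expr_parser(AirExprToken::ExprStart),
        AirToken::ExprEnd => expr_parser(AirExprToken::ExprEnd),
        AirToken::PkgStart => { /* handled by this parser */ }
    }
}

fn expr_parser(_tok: AirExprToken) { /* ... */ }
```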
I do want to eliminate the dead state from the parser so we can get rid
of the `unreachable!`, but I need to move on; I had tried to do that in
the past too, and it ended up adding a bit of complexity. I'll have to
consider my options in the future, including whether the dead state
transition can be entirely eliminated in favor of the combination of
these sum types and recovery; the parsing framework decisions were made
while recovery was still an open question, at least in practice.
DEV-13708
This was a rather frustrating thing to encounter. I was working on
refactoring `AirAggregate`, and found that my tests were hanging despite no
apparent cause in the parser itself.
As it turns out, rather than failing with a `FinalizeError` as I
expected (since I was mid-refactor), `collect()` was allocating space for an
endless stream of errors. This was easily verified by adding a `take(x)`
and observing the assertion failure (in this case, in `close_pkg_mid_expr`).
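A contrived sketch of the technique (not the actual test):

```rust
use std::iter;

#[test]
fn bounded_collect_fails_instead_of_hanging() {
    // Hypothetical stand-in for a parser stuck yielding errors forever.
    let sut = iter::repeat(Err::<(), &str>("unexpected token"));

    // A bare `collect()` would buffer errors without end; `take`
    // bounds the stream so the assertion fails visibly instead.
    let results: Vec<_> = sut.take(10).collect();
    assert!(results.iter().all(|r| r.is_ok()));
}
```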
This happens to be the first time in a long time that I actually had to
debug---the combination of robust types as proofs and tests to fill in the
gaps means that runtime issues are caught at build time in all but
exceptional cases (like this one).
It's also worth noting that, because of my policy of iterating only at the
higher levels of the program, it was clear that this must somehow be
Parser-related, since that's the only part of the system that has the
potential for unbounded recursion due to its cyclic state machines.
DEV-13708
This introduces a new macro `sum_ir!` to help with a long-standing problem
of not being able to easily narrow types in Rust without a whole lot of
boilerplate. This patch includes a bit of documentation, so see that for
more information.
This was not a welcome change---I jumped down this rabbit hole trying to
decompose `AirAggregate` so that I can share portions of parsing with the
current parser and a template parser. I can now proceed with that.
This is not the only implementation that I had tried. I previously
inverted the approach, as I've been doing by hand for some time: create
types to hold the sets of variants, and then create a sum type to hold
those types. That works, but it resulted in a mess for systems that have to use
the IR, since now you have two enums to contend with. I didn't find that to
be appropriate, because we shouldn't complicate the external API for
implementation details.
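Roughly, that rejected shape looked like this (hypothetical names,
simplified):

```rust
enum AirExpr {
    ExprOpen,
    ExprClose,
}

enum AirPkg {
    PkgOpen,
    PkgClose,
}

// The public IR is now a sum of the groups, and every consumer pays
// for that grouping with nested matches.
enum Air {
    Expr(AirExpr),
    Pkg(AirPkg),
}

fn describe(tok: Air) -> &'static str {
    match tok {
        Air::Expr(AirExpr::ExprOpen) => "open expression",
        Air::Expr(AirExpr::ExprClose) => "close expression",
        Air::Pkg(AirPkg::PkgOpen) => "open package",
        Air::Pkg(AirPkg::PkgClose) => "close package",
    }
}
```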
The enum for IRs is supposed to be like a bytecode---a list of operations
that can be performed with the IR. They can be grouped if it makes sense
for a public API, but in my case, I only wanted subsets for the sake of
delegating responsibilities to smaller subsystems, while retaining the
context that `match` provides via its exhaustiveness checking but does not
expose as something concrete (which is deeply frustrating!).
Anyway, here we are; this'll be refined over time, hopefully, and
portions of it can be generalized for removing boilerplate from other IRs.
Another thing to note is that this syntax is really a compromise---I had to
move on, and I was spending too much time trying to get creative with
`macro_rules!`. It isn't the best, and it doesn't seem very Rust-like in
some places and is therefore not necessarily all that intuitive. This can
be refined further in the future. But the end result, all things
considered, isn't too bad.
DEV-13708
This sets the stage for template parsing, and finally decides how we're
going to represent templates on the ASG. This is going to start simple,
since my original plans for improving how templates are
handled (conceptually) are going to have to wait.
This is the last difficult object type to figure out, with respect to graph
representation and derivation, so I wanted to get it out of the way.
DEV-13708
I wasn't initially sure whether I'd want separate tokens for different types
of identifying operations, but it's now clear from the current state of
the parser that there's no need.
This matches the name of the token in NIR.
DEV-13708
The previous commit demonstrated the amount of boilerplate necessary for
introducing new `ObjectKind`s; this abstracts away a lot of that
boilerplate, and allows for declarative relationship definition for the
ASG's ontology.
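To give a rough sense of the boilerplate being abstracted (hypothetical
trait and object names, not the actual ASG types):

```rust
// Each valid edge in the ontology previously needed its own marker
// impl; a declarative macro can expand a single adjacency description
// into all of these.
trait ObjectRelTo<T> {}

struct Pkg;
struct Ident;
struct Expr;

impl ObjectRelTo<Ident> for Pkg {}
impl ObjectRelTo<Expr> for Pkg {}
impl ObjectRelTo<Expr> for Ident {}
impl ObjectRelTo<Expr> for Expr {}
```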
DEV-13708
There's quite a bit of boilerplate here that'll eventually need factoring
out. But it's also clear that it is somewhat onerous to add new object
types.
Note that a good chunk of this burden is _intentional_, via exhaustiveness
checks---adding a new type of object is an exceptional occurrence (well, in
principle, but we haven't added them all yet, so it'll be more common
initially), and we'd rather be safe and ensure that every part of the
system properly considers how the new type of object interacts with it.
Let's not confuse coupling with safety---the latter causes a burden because
of the former, not because of itself; it provides a service to us.
But, nonetheless, we'll want to reduce this burden somewhat since there are
a number more to add.
DEV-13708
Just as `rate` is a `sum`, `classify` is an `all` by default. The `@any`
attribute will change that interpretation, though I only intend to recognize
that in parsing later on, not emit that in XMLI.
DEV-13708
Let's start to be explicit about what's missing as we continue to add new
tokens; the exhaustiveness checks throughout the system will guide the
changes that need to be made.
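The pattern looks roughly like this (hypothetical variants):

```rust
enum Air {
    ExprOpen,
    ExprClose,
    // Newly added; not yet handled everywhere.
    TplOpen,
}

fn lower(tok: Air) {
    // No catch-all `_` arm: adding a variant breaks the build at
    // every site that must consider it, and `todo!` marks known gaps.
    match tok {
        Air::ExprOpen => { /* ... */ }
        Air::ExprClose => { /* ... */ }
        Air::TplOpen => todo!("template parsing"),
    }
}
```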
DEV-13708
The element only, no attributes yet.
I'll keep forming boilerplate until abstraction points become obvious with
more variety; this is still pretty close to what was already supported.
DEV-13708
We already had `TreeContext`, and I'm passing the same arguments around, so
this uses it to lift arguments out of these functions, like partial
application.
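Roughly (with hypothetical fields, not the actual context):

```rust
struct TreeContext<'a> {
    depth: usize,
    output: &'a mut Vec<String>,
}

impl<'a> TreeContext<'a> {
    // Previously something like `emit_expr(depth, output, name)`;
    // the shared arguments are now implicit in `self`.
    fn emit_expr(&mut self, name: &str) {
        self.output
            .push(format!("{}{}", "  ".repeat(self.depth), name));
    }
}
```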
DEV-13708
This tidies this method up into a decent state that I'm fairly content
with. This goes to emphasize my dislike of returns, which muddy control
flow and make the code more difficult to read at a glance, increasing the
likelihood of logic bugs.
`match` statements in tail position, on the other hand, are very clear, and
less cognitively burdensome since you can see each individual code path at a
glance.
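A contrived example of the contrast:

```rust
// Early returns scatter the exit points throughout the body:
fn classify_with_returns(n: i32) -> &'static str {
    if n < 0 {
        return "negative";
    }
    if n == 0 {
        return "zero";
    }
    "positive"
}

// A `match` in tail position shows every path at a glance:
fn classify_with_tail_match(n: i32) -> &'static str {
    match n {
        _ if n < 0 => "negative",
        0 => "zero",
        _ => "positive",
    }
}
```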
DEV-13708
This begins to develop a pattern for doing these transformations. I had
tried a number of things using iterators, but I wasn't satisfied with how
they were turning out: I had to fight too much with the type system or
resort to heap allocations. Sticking with an explicit
`push`/`push_all` for now works just fine.
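A hypothetical sketch of the pattern:

```rust
struct TokenStack(Vec<&'static str>);

impl TokenStack {
    fn push(&mut self, tok: &'static str) {
        self.0.push(tok);
    }

    // Push a statically-sized group of tokens at once; no iterator
    // chains, no boxed trait objects.
    fn push_all<const N: usize>(&mut self, toks: [&'static str; N]) {
        for tok in toks {
            self.push(tok);
        }
    }
}
```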
Almost done cleaning up `AsgTreeToXirf::parse_token`, and then I can move on
to introducing more objects.
DEV-13708
This is generic over the source, just as it is over the target, defaulting
to `ObjectIndex` just the same.
This allows us to use only the edge information provided rather than having
to perform another lookup on the graph and then assert that we found the
correct edge. In this case, we're dealing with an `Ident->Expr` edge, of
which there is only one, but in other cases, there may be many such edges,
and it wouldn't be possible to know _which_ was referred to without also
keeping context of the previous edge in the walk.
So, in addition to avoiding more indirection and being more immune to logic
bugs, this also allows us to avoid states in `AsgTreeToXirf` for the purpose
of tracking previous edges in the current path. And it means that the tree
walk can seed further traversals in conjunction with it, if that is so
needed for deriving sources.
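A simplified sketch of the shape (not the actual definitions):

```rust
struct ObjectIndex(usize);
struct Ident;
struct Expr;

// Both endpoints are type parameters defaulting to the dynamic index.
struct DynObjectRel<S = ObjectIndex, T = ObjectIndex> {
    source: S,
    target: T,
}

// Narrowed: the type itself records which edge was walked, so no
// second graph lookup or assertion is needed.
type IdentExprRel = DynObjectRel<Ident, Expr>;
```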
More cleanup will be needed, but this does well to set us up for moving
forward; I was too uncomfortable with having to do the separate
lookup. This is also a more intuitive API.
But it does have the awkward effect that now I don't need the pair---I just
need the `Object`---but I'm not going to remove it because I suspect I may
need it in the future. We'll see.
The TODO references the fact that I'm using a convenient `resolve_oi_pairs`
instead of resolving only the target first and then the source only in the
code path that needs it. I'll want to verify that Rust will properly
optimize to avoid the source resolution in branches that do not need it.
DEV-13708
This makes the inner `Object` type generic (but defaulting to the same inner
types as before) so that it can be used as a sum type for various types
where `ObjectKind`-based narrowing is required.
In this case, it's used to narrow `ObjectIndex` alongside the inner
`ObjectKind` so that the two are definitely in sync. This not only results
in cleaner code and a more intuitive API that's approachable to people
less familiar with the system, but it also helps to eliminate logic bugs
that might result from manually narrowing (as was done before this change).
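A simplified, hypothetical illustration of the idea:

```rust
use std::marker::PhantomData;

struct Expr;
struct Ident;
struct ObjectIndex<O>(usize, PhantomData<O>);

// The sum type is generic over what each variant holds, defaulting to
// the plain objects as before.
enum Object<E = Expr, I = Ident> {
    Expr(E),
    Ident(I),
}

// The same sum, now pairing each object with its equally-narrowed
// index; matching on a variant narrows both at once.
type ObjectPair = Object<(ObjectIndex<Expr>, Expr), (ObjectIndex<Ident>, Ident)>;
```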
DEV-13708
This was a fairly simple addition, since rate blocks already lower into sum
expressions; these are just non-identified.
This does emphasize that the nir::parse `ele_parse!` abstraction I spent so
much time on ended up not being a perfect fit, as it now has some
boilerplate after it was stripped of much of its capabilities some time ago.
Don't worry, `nir::air` and `asg::graph::xmli` will get cleaned up.
DEV-13708
This extends the POC a bit by beginning to reconstruct rate blocks (note
that NIR isn't producing sub-expressions yet).
Importantly, this also adds the first system tests, now that we have an
end-to-end system. This not only gives me confidence that the system is
producing the expected output, but serves as a compromise: writing unit or
integration tests for this program derivation would be a great deal of work,
and wouldn't even catch the bugs I'm worried most about. The lowering
operation can be written in such a way as to give me high confidence in its
correctness without those more granular tests, or in conjunction with unit
or integration tests for a smaller portion.
DEV-13708
This provides a test harness for running shell-based system tests. The
first of such tests will be introduced in the following commit.
This is done in place of integration tests written in Rust because it will
invoke the final binary exactly as the user or build system (using TAMER)
will, providing greater confidence. Besides, a lot of things are simply
more convenient to do in shell. ...though some of you may debate that.
DEV-13708
The intent is to source this in shell scripts, like tests.
This exposes feature flags to shell scripts, but it doesn't do so in quite
the same way that Rust does---it doesn't apply the dependencies. While this
isn't needed now, it does make me a little uncomfortable, and so I may take
a different approach in the future.
DEV-13708
Just some final POC setup for how this'll work; it's nothing
significant. This just emits an `@xmlns` on the `package` element to
demonstrate use of the stack.
With that, it's time to formalize this.
I also need to document at some point why I still choose `ArrayVec`
over `Vec`---it's not a micro-optimization. It's intended to simplify the
runtime, keeping execution simple with fewer code paths and making it more
amenable to analysis. Memory allocation is a pretty complex thing and
muddies execution. It's also another point of failure, though practically
speaking, I'm not worried about that---this is replacing a system that
consumes many GiB of memory (XSLT-based compiler) with one that consumes 10s
of MiB.
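A rough sketch of the usage (assuming the `arrayvec` crate):

```rust
use arrayvec::ArrayVec;

fn demo() {
    // Fixed capacity, stack-allocated: pushing a token can never
    // trigger an allocator call, and exceeding capacity is an
    // explicit, analyzable failure rather than a hidden reallocation.
    let mut stack = ArrayVec::<&str, 16>::new();
    stack.push("package");
    stack.push("xmlns");

    while let Some(tok) = stack.pop() {
        println!("{tok}");
    }
}
```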
DEV-13708
This (a) holds the state of a stack that I can populate with tokens, rather
than introducing a state for every single e.g. attribute and such on
elements (so, more like the `xmle` XIR lowering); and (b) hides the obvious
awkwardness of the `&mut &'a Asg`, though that is not its intent.
DEV-13708
This is just a special case of lowering with a context, and maintaining two
separate implementations has resulted in divergence. I don't recall why I
didn't do this previously, though it's possible that the lowering pipeline
was in a state that made it more difficult to do (e.g. with error
handling).
DEV-13708
Technically, an "acceptor" in the context of state machines is itself a
state machine; the terminology here instead describes the configuration of
the state machine (`XirToXirf`) as an acceptor.
This change comes with significant documentation of the rationale and why
this is important; see that for more information.
This change is necessary so that we can enforce finalization on all parsers
in the lowering pipeline, which is not currently being done. If we were to
do that now, then `tameld` would fail because it halts parsing of the token
stream at the end of the `xmlo` header.
This is also quite the type soup, but I'm not going to refine this further
right now, since my focus is elsewhere (XMLI lowering).
DEV-13708
This has been a long time coming. The wiring of it all together is a little
rough around the edges right now, but this commit represents a working POC
to begin to fill in the gaps for the entire lowering pipeline.
I had hoped to be at this point a year ago. Yeah.
This marks a significant milestone in the project because this allows me to
begin to observe the implementation end-to-end, testing it on real-life
inputs as part of a production build pipeline.
...and now, with that, we can begin. So much work has gone into this
project so far, but aside from the linker (which has been in production for
years), most of this work has been foundational. It's been a significant
investment that I intend to have pay off in many different ways.
(All this outputs right now is `<package/>`.)
DEV-13708
This replaces the stub `derive_xmli` with the same result (well, minus a
space before the '/' in the output) using what will become the lowering
pipeline. Once again, this is quite verbose, and the lowering pipeline in
general needs to be further abstracted away.
Unlike the rest of the pipeline, an error during the derivation process will
immediately terminate with an unrecoverable error, because we do not want to
write partial files. This does not remove the garbage file, because the
build system ought to do that itself (e.g. `make`)...but that is certainly
open for debate.
DEV-13708
The reader previously yielded a `ParsedResult`, presumably to simplify
lowering operations. But the reader is not a `ParseState`, and does not
otherwise use the parsing API, so this was an inappropriate and confusing
coupling.
This resolves that, introducing a new `lowerable` which will translate an
iterator into something that can be placed in a lowering pipeline.
See the previous commit for more information.
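A hypothetical sketch of the adapter's role (not the actual signatures):

```rust
struct Lowerable<I>(I);

// Wrap an ordinary iterator so its items look like the output of a
// parser, which is what each stage of the pipeline expects as input.
impl<I: Iterator> Iterator for Lowerable<I> {
    type Item = Result<I::Item, std::convert::Infallible>;

    fn next(&mut self) -> Option<Self::Item> {
        self.0.next().map(Ok)
    }
}
```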
DEV-13708
The token type was previously hard-coded to `UnknownToken`, since the use
case was the beginning of the lowering pipeline at the start of the program,
where there was no token type because the first parser (`XirReader`,
currently) is responsible for producing the first token type.
But when we're lowering from the graph (so, the other side of the lowering
pipeline), we _do_ have token types to deal with.
This also emphasizes the inappropriate coupling of `<XirReader as
Iterator>::Item` with `ParsedResult`; I'd like to follow the same approach
that I'm about to introduce with `tamec`, so see a future commit.
DEV-13708
This was missed (because it was not used) when EOF tokens were originally
introduced via `ParseState::eof_tok`---`LowerIter` also needs to consider
the token.
This separation between the two iterators is a maintenance burden that needs
to be taken care of; I knew that at the time, and then I forgot about it,
and here we are.
This was caught while beginning to wire together a POC graph lowering
pipeline to emit derived sources.
DEV-13708
This parser does exactly what it says it does. Its implementation is
simple, but I added a test anyway just to prove that it works, and the test
seems more complicated than the implementation itself, given the types
involved.
DEV-13708
This introduces a `Token` in place of the original tuple for
`TreePreOrderDfs` so that it can be used as input to a parser that will
lower into XIRF.
This requires that various things be describable (using `Display`), which
this also adds. This is an example of where the parsing framework itself
enforces system observability by ensuring that every part of the system can
describe its state.
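For example (hypothetical token type):

```rust
use std::fmt::{self, Display};

enum Token {
    Object(usize),
    Depth(usize),
}

// Every token must be able to describe itself, so diagnostics can
// always say what the system was doing when something went wrong.
impl Display for Token {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Token::Object(oi) => write!(f, "object #{oi}"),
            Token::Depth(d) => write!(f, "depth {d}"),
        }
    }
}
```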
DEV-13708
This lowering operation is intended to allow me to write a more concise and
clear mapping from the graph to XIRF, without having to worry about
balancing tags, which really complicated the implementation.
This has detailed docs; see those for more information.
I can't help but be reminded of Wisp (the whitespace-based Lisp-like
syntax). Which is unfortunate, because I'm not fond of Wisp; I like my
parentheses.
DEV-13708
The `TreePreOrderDfs` iterator needed to expose additional edge context to
the caller (specifically, the `Span`). This was getting a bit messy, so
this consolidates everything into a new `DynObjectRel`, which also
emphasizes that it is in need of narrowing.
Packing everything up like that also allows us to return more information to
the caller without complicating the API, since the caller does not need to
be concerned with all of those values individually.
Depth is kept separate, since that is a property of the traversal and is not
stored on the graph. (Rather, it _is_ a property of the graph, but it's not
calculated until traversal. Depth will also vary for a given node
because of cross edges, and so we cannot store any concrete depth on the
graph for a given node. Not even a canonical one, because once we start
doing inlining and common subexpression elimination, there will be shared
edges that are _not_ cross edges (the node is conceptually part of _both_
trees). Okay, enough of this rambling parenthetical.)
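A simplified sketch of the consolidated shape (hypothetical fields):

```rust
struct Span;
struct ObjectIndex(usize);

// Both endpoints of the edge and its contextual span travel together.
struct DynObjectRel {
    source: ObjectIndex,
    target: ObjectIndex,
    ctx_span: Option<Span>,
}

// Depth remains a property of the traversal, yielded alongside the
// relationship rather than stored on the graph.
struct Depth(usize);
type WalkItem = (DynObjectRel, Depth);
```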
DEV-13708
This information is necessary to be able to reconstruct the tree, since
the `ObjectIndex` alone does not give you enough information. Even if you
inspected the graph, it _still_ wouldn't give you enough information, since
you don't know the current path of the traversal for nodes that may have
multiple incoming edges. (Any assumptions you could make today won't
always be valid in the future.)
DEV-13708