Comments ought not have any more semantic meaning than whitespace. Other
languages may have conventions that allow for various types of things in
comments, like annotations, but those are symptoms of language
limitations---we control the source language here.
DEV-7145
This properly integrates the trampoline into `ele_parse!`. The
implementation leaves some TODOs, most notably broken mixed text handling
since we can no longer intercept those tokens before passing to the
child. That is temporarily marked as incomplete; see a future commit.
The test `ParseState`s introduced here were to help me reason about the
system intuitively as I struggled to track down some type errors in the
monstrosity that is `ele_parse!`. They will fail to compile if those
invariants are violated. (In the end, the problems were pretty simple to
resolve, and the
struggle was the type system doing its job in telling me that I needed to
step back and try to reason about the problem again until it was intuitive.)
This keeps around the NT states for now, which are quickly used to
transition to the next NT state, like a couple of bounces on a trampoline:
NT -> Dead -> Parent -> Next NT
This could be optimized in the future, if it's worth doing.
This also makes no attempt to implement tail calls; that would have to come
after fixing mixed content and really isn't worth the added complexity
now. I (desperately) need to move on, and still have a bunch of cleanup to
do.
I had hoped for a smaller commit, but that was too difficult to do with all
the types involved.
DEV-7145
I had previously used `Context` to hold the parser configuration for
repetition, since that was the easier option. But I now want to use the
`Context` as a stack for the superstate trampoline, and I don't want to
have to deal with the awkwardness of that repetition configuration while
doing so, since it requires that the configuration be created during
delegation rather than simply being passed through to all child parsers.
This adds to a mess that needs cleaning up, but I'll do that after
everything is working.
DEV-7145
And here's the thing that I've been dreading, partly because of the
`macro_rules!` issues involved. But it's not too terrible.
This module was already large and complex, and this just adds to it---it's
in need of refactoring, but I want to be sure it's fully working and capable
of handling NIR before I go spending time refactoring only to undo it.
_This does not yet use trampolining in place of the call stack._ That'll
come next; I just wanted to get the macro updated, the superstate generated,
and tests passing. This does convert into the
superstate (`ParseState::Super`), but then converts back to the original
`ParseState` for backwards compatibility with the existing
composition-based delegation. That will go away; delegation will then use
the equivalent of CPS, with the superstate+`Parser` as a trampoline. This
will require an explicit stack
via `Context`, like XIRF. And it will allow for tail calls, with respect to
parser delegation, if I decide it's worth doing.
The root problem is that source XML requires recursive parsing (for
expressions and statements like `<section>`), which results in recursive
data structures (`ParseState` enum variants). Resolving this with boxing is
not appropriate, because that puts heap indirection in an extremely hot code
path, and may also inhibit the aggressive optimizations that I need Rust to
perform to optimize away the majority of the lowering pipeline.
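To make the tension concrete, here's a rough sketch (made-up names, not
the actual `ele_parse!` output): a state that holds its child state inline
is a recursive type, which Rust only accepts with heap indirection,
whereas the trampoline keeps a flat superstate and pushes the return state
onto an explicit stack held in the `Context`:

    // Inline child state makes the type recursive; without the Box this
    // is rejected with error[E0072] (recursive type has infinite size).
    enum ExprState {
        Ready,
        ParsingChild(Box<ExprState>),
    }

    // Trampoline alternative: one flat superstate enum, with the state
    // to return to pushed onto an explicit stack in the Context.
    enum SuperState {
        Expr(FlatExprState),
        // ...one variant per ParseState in the composition
    }

    enum FlatExprState {
        Ready,
        ParsingChild, // the parent to return to lives on the Context stack
    }

    struct Context {
        stack: Vec<SuperState>,
    }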
Once this is sorted out, this should be the last big thing for the
parser. This unfortunately has been a nagging and looming issue for
months, one that I was hoping to avoid, and in retrospect that was naive.
DEV-7145
"Mixed content" is the XML term representing element nodes mixed with text
nodes. For example, `foo <strong>bar</strong> baz` is mixed.
TAME supports text nodes as documentation, intended to be in a literate
style but never fully realized. In any case, we need to permit them, and I
wanted to do more than just ignore the nodes.
This takes a different approach than typical parser delegation---it has the
parent parser _preempt_ the child by intercepting text before delegation
takes place, rather than having the child reject the token (or possibly
interpret it itself!) and have to handle an error or dead state.
And while this makes state machine stitching more confusing, it does make
sense, in that the parent parser is really what "owns" the text node---the
parent delegates _element_ parsing only, and asserts authority when
necessary to take back control where it shouldn't be delegated.
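As a toy illustration of that preemption (the names here are made up and
the generated parsers are far more involved), the parent's step amounts to
matching text before it ever considers delegating:

    enum Token {
        Text(&'static str),
        Element(&'static str),
    }

    enum Owner {
        Parent(&'static str),
        Child(&'static str),
    }

    fn parent_step(tok: Token) -> Owner {
        match tok {
            // Preempt: the parent owns text nodes, so the child never
            // has to reject them or yield a dead state.
            Token::Text(text) => Owner::Parent(text),

            // Only _element_ parsing is delegated.
            Token::Element(name) => Owner::Child(name),
        }
    }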
DEV-7145
Previously a `Depth` was provided only for `Open` and `Close`. This depth
information, for example, will be used by NIR to quickly determine whether a
given parser ought to assert ownership of a text/comment token rather than
delegating it.
This involved modifying a number of test cases, but it's worth repeating in
these commits that this is intentional---I've been bitten in the past using
`..` in contexts where I really do want to know if variant fields change so
that I can consider whether and how that change may affect the code
utilizing that variant.
DEV-7145
Recent changes regarding whitespace were all to support this change (though
it was also needed for XIRF, pre- and post-root).
Now I'll have to contend with how I want to handle text nodes in various
circumstances, in terms of `ele_parse!`.
DEV-7145
This teaches XIRF to optionally refine Text into RefinedText, which
determines whether the given SymbolId consists entirely of whitespace.
This is something I've been putting off for some time, but now that I'm
parsing source language for NIR, it is necessary, in that we can only permit
whitespace Text nodes in certain contexts.
The idea is to capture the most common whitespace as preinterned
symbols. Note that this heuristic ought to be determined from scanning a
codebase, which I haven't done yet; this is just an initial list.
The fallback is to look up the string associated with the SymbolId and
perform a linear scan, aborting on the first non-whitespace character. This
combination of checks should be sufficiently performant for now considering
that this is only being run on source files, which really are not all that
large. (They become large when template-expanded.) I'll optimize further
if I notice it show up during profiling.
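Roughly, the check looks like this (a sketch with made-up names; the real
implementation operates on interned symbols):

    type SymbolId = u32;

    // The most common whitespace strings are preinterned, so this fast
    // path is a cheap comparison that never touches the string itself.
    const PREINTERNED_WS: [SymbolId; 3] = [1, 2, 3]; // e.g. " ", "\n", "\n  "

    fn is_whitespace(
        sym: SymbolId,
        lookup_str: impl Fn(SymbolId) -> String,
    ) -> bool {
        PREINTERNED_WS.contains(&sym)
            // Fallback: linear scan of the interned string, aborting at
            // the first non-whitespace character.
            || lookup_str(sym).chars().all(char::is_whitespace)
    }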
This also frees XIR itself from being concerned with Whitespace. Initially I
had used quick-xml's whitespace trimming, but it messed up my span
calculations, and those were a pain in the ass to implement to begin with,
since I had to resort to pointer arithmetic. I'd rather avoid tweaking it.
tameld will not check for whitespace, since it's not important---xmlo files,
if malformed, are the fault of the compiler; we can ignore text nodes except
in the context of code fragments, where they are never whitespace (unless
that's also a compiler bug).
Onward and yonward.
DEV-7145
We need to be able to export generated identifiers. Trying to figure out a
syntax for this was a bit tricky considering how much is generated, so I
just settled on something that's reasonably clear and easy to parse with
`macro_rules!`.
I had intended to just make everything public by default and encapsulate
using private modules, but that then required making everything else that it
uses public (e.g. error and token objects), which would have been a bizarre
thing to do in e.g. test cases.
DEV-7145
The tests had certain things in scope, but now that I'm trying to use them
outside of those modules, some fixes are needed.
This is admittedly a sloppy commit, with a number of miscellaneous fixes. I
didn't bother separating it more because most of them are type fixes, and
the `From<Attr>` stuff is going to have to change into, likely,
`TryFrom<Attr>` so that parse failures can occur when attributes do not
match certain patterns.
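To show the shape of that change (hypothetical types, not the actual
ones), the conversion needs to become fallible:

    use std::convert::TryFrom;

    struct Attr(String);
    struct IdentName(String);
    struct InvalidIdent(String);

    impl TryFrom<Attr> for IdentName {
        type Error = InvalidIdent;

        fn try_from(attr: Attr) -> Result<Self, Self::Error> {
            // e.g. an identifier must be non-empty and must not contain
            // whitespace; anything else is a parse failure.
            if attr.0.is_empty() || attr.0.contains(char::is_whitespace) {
                return Err(InvalidIdent(attr.0));
            }

            Ok(IdentName(attr.0))
        }
    }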
DEV-7145
The only additional information needed was opening spans so that we can
provide useful information regarding closing tags.
This uses a generic Span in place of {Open,Close}Span because the latter
wasn't necessary, but more descriptive types would be nice; it may be
beneficial later on to introduce newtypes for each of the spans generated by
{Open,Close}Span.
DEV-7145
This allows an element to be repeated by the parent NT. The easiest way I
saw to implement this for now was to abuse the Context to provide a runtime
configuration that would allow the state machine to reset after it has
completed parsing.
This also influences error recovery, in that if we're expecting zero or more
of something, we cannot provide an error for an unexpected name, and instead
must emit a dead state so that the caller can determine what to do.
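Sketched out (made-up names; the generated code differs), the difference
in recovery is just this:

    enum Step {
        Accept,
        Dead,  // return the token to the caller as lookahead
        Error,
    }

    fn step(repeat: bool, expected: &str, qname: &str) -> Step {
        match (qname == expected, repeat) {
            (true, _) => Step::Accept,
            // Zero-or-more: an unexpected name may simply mean the
            // repetition has ended, so the caller decides what to do.
            (false, true) => Step::Dead,
            (false, false) => Step::Error,
        }
    }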
DEV-7145
This introduces `Nt := (A | ... | Z);`, where `Nt` is the name of the
nonterminal and `A ... Z` are the inner nonterminals---it produces a parser
that provides a choice between a set of nonterminals.
This is implemented efficiently by understanding the QName that is accepted
by each of the inner nonterminals and delegating that token immediately to
the appropriate parser. This is a benefit of using a parser generator macro
over parser combinators---we do not need to implement backtracking by
letting inner parsers fail, because we know ahead of time exactly what
parser we need.
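Concretely (an illustrative sketch, not the generated code), the sum
nonterminal boils down to a `match` on the element's QName:

    enum Choice {
        A,
        Z,
        // The error for this case can list every expected QName, since
        // they are all known at compile time.
        Unexpected,
    }

    fn choose(qname: &str) -> Choice {
        match qname {
            "a" => Choice::A,
            "z" => Choice::Z,
            _ => Choice::Unexpected,
        }
    }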
This _does not_ verify that each of the inner parsers accepts a unique
QName; maybe at a later time I can figure out something for that. However,
because this compiles into a `match`, there is no ambiguity---like a PEG
parser, there is precedence in the face of an ambiguous token, and the
first one wins. Consequently, if two inner nonterminals did accept the
same QName, tests would surely fail, since the latter would never be able
to be parsed.
This also demonstrates how we can have good error suggestions for this
parsing framework: because the inner nonterminals and their QNames are known
at compile time, error messages simply generate a list of QNames that are
expected.
The error recovery strategy is the same as previously noted, and subject to
the same concerns, though it may be more appropriate here: it is desirable
for the inner parser to fail rather than retrying, so that the sum parser is
able to fail and, once the Kleene operator is introduced, retry on another
potential element. But again, that recovery strategy may happen to work in
some cases but will fail miserably in others (e.g. placing an unknown element
at the head of a block that expects a sequence of elements would potentially
fail the entire block rather than just the invalid one). But more to come
on that later; it's not critical at this point. I need to get parsing
completed for TAME's input language.
DEV-7145
This adds the ability to bind identifiers to represent `OpenSpan` and
`CloseSpan`, available to the `@` and `/` maps. Since identifiers in TAME
originate from attributes, this may not get a whole lot of use, but it's
important that it be available.
There is some awkwardness in that the opening span appears to be scoped to
the entire nonterminal, but it's actually only available in the `@`
mapping. I'll change this if it's actually needed; this keeps things simple
for now.
DEV-7145
Since the parsers produce streaming IRs, we need to be able to emit tokens
representing closing delimiters, where they are important.
This notably doesn't use spans; I'll add those next, since they're also
needed for the previous work.
DEV-7145
This begins generating parsers that are capable of parsing elements. I need
to move on, so this abstraction isn't going to go as far as it could, but
let's see where it takes me.
This was the work that required the recent lookahead changes, which have
been detailed in previous commits.
This initial support is basic, but robust. It supports parsing elements
with attributes and children, but it does not yet support the equivalent of
the Kleene star (`*`). Such support will likely be added by supporting
parsers that are able to recurse on their own definition in tail position,
which will also require supporting parsers that do not add to the stack.
This generates parsers that, like all the other parsers, use enums to
provide a typed stack. Stitched parsers produce a nested stack that is
always bounded in size. Fortunately, expressions---which can nest
deeply---do not need to maintain ancestor context on the stack, and so this
should work fine; we can get away with this because XIRF ensures proper
nesting for us. Statements that _do_ need to maintain such context are not
nested.
This also does not yet support emitting an object on closing tag, which
will be necessary for NIR---a streaming IR that is "near" to the source
XML in structure. This will then be used to lower into AIR for
the ASG, which gives structure needed for further analysis.
More information to come; I just want to get this committed to serve as a
mental synchronization point and clear my head, since I've been sitting on
these changes for so long and have to keep stashing them as I tumble down
rabbit holes covered in yak hair.
DEV-7145