This invokes clippy as part of `make check` now, which I had previously
avoided doing (I'll elaborate on that below).
This commit represents the changes needed to resolve all the warnings
presented by clippy. Many changes have been made where I found the lints
useful and agreeable, but there are a number of lints, with rationale given
in `src/lib.rs`, that I found disagreeable. I have provided that rationale
primarily for those wondering why I desire to deviate from the default
lints, though it does feel backward to rationalize why certain lints ought
not to be applied, when really the reverse should be true.
With that said, this did catch some legitimate issues, and it was also
helpful in getting some older code up-to-date with newer language additions
that I had used in new code but hadn't gone back and applied to older
code. My goal was to get clippy working without errors so that, in the
future, when others get into TAMER and are still getting used to Rust,
clippy is able to help guide them in the right direction.
One of the reasons I went without clippy for so long (though I admittedly
forgot I wasn't using it for a period of time) was because there were a
number of suggestions that I found disagreeable, and I didn't take the time
to go through them and determine what I wanted to follow. Furthermore, it
was hard to make that judgment when I was new to the language and lacked
the necessary experience to do so.
One thing I would like to comment further on is the use of `format!` with
`expect`, which is also what the diagnostic system convenience methods
do (which clippy does not cover). Because of all the work I've done trying
to understand Rust---looking at disassemblies and seeing what it
optimizes---I falsely assumed that Rust would convert such things into
conditionals in my otherwise-pure code...but apparently that's not the case
when `format!` is involved.
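To illustrate, here's a minimal sketch of the pitfall (the `lookup`
function and symbol table are hypothetical, not TAMER's code); this is
what clippy's `expect_fun_call` lint targets:

```
use std::collections::HashMap;

fn lookup(table: &HashMap<u32, String>, key: u32) -> &str {
    // Eager: `format!` allocates and formats even when the value is
    // present, because `expect`'s argument is an ordinary expression:
    //   table.get(&key).expect(&format!("missing symbol: {key}"))

    // Deferred: the closure runs only on the failure path.
    table
        .get(&key)
        .map(String::as_str)
        .unwrap_or_else(|| panic!("missing symbol: {key}"))
}
```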
I noticed that, after making the suggested fix with `get_ident`, Rust
proceeded to inline it into each call site and apply further
optimizations. It was also previously invoking the thread lock (for the
interner) unconditionally, along with the `Display` implementation. That
is not at all what I intended, despite knowing the eager semantics of
function calls in Rust.
Anyway, possibly more to come on that, I'm just tired of typing and need to
move on. I'll be returning to investigate further diagnostic messages soon.
Really, with a C background, I should have known that `write` may not
write all bytes---and I'm pretty sure I was aware of that---so I'm not sure
how it slipped my mind for every call. But it's not a great default, and I
do feel like `write_all` ought to be the default behavior, despite the
syscall and C library function of the same name.
It shouldn't take clippy to warn about something so significant.
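A short sketch of the distinction (the `emit` function here is
hypothetical); clippy flags the former via `unused_io_amount`:

```
use std::io::{self, Write};

fn emit(out: &mut impl Write, buf: &[u8]) -> io::Result<()> {
    // `write` may write only a prefix of `buf`, returning the count;
    // ignoring that count silently drops bytes:
    //   let _n = out.write(buf)?;

    // `write_all` loops until the entire buffer has been written.
    out.write_all(buf)
}
```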
This removes quite a bit of work, and work that was difficult to reason
about. While I'm disappointed that that hard work is lost (aside from
digging it up in the commit history), I am happy that it was able to be
removed, because the extra complexity and cognitive burden was significant.
This removes more `memcpy`s than the sum state could have hoped to, since
aggregation is no longer necessary. Given that, there is a slight
performance improvement. The re-introduction of required and duplicate
checks later on should be more efficient than this was, and so this should
be a net win overall in the end.
DEV-13346
This cleans up the old implementation now that it's no longer used (as of
the previous commit) by `ele_parse!`. It also removes the two error
variants that no longer apply: required attributes and duplicate
attributes.
DEV-13346
This handles the bulk of the integration of the new `attr_parse_stream!` as
a replacement for `attr_parse!`, which moves from aggregate attribute
objects to a stream of attribute-derived tokens. Rationale for this change
is in the preceding commit messages.
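As a toy illustration of that shift (these types are illustrative only,
not TAMER's actual NIR or attribute APIs): the aggregate approach buffers
all attributes into a single object, while the streaming approach yields a
token per attribute as it is encountered.

```
// Aggregate: all attributes are collected before one object is
// emitted, requiring buffering plus required/duplicate checks here.
// Streaming: each attribute maps to its own token immediately.
struct Attr<'a> {
    name: &'a str,
    value: &'a str,
}

enum NirToken<'a> {
    BindIdent(&'a str),
    Desc(&'a str),
    Unexpected(&'a str),
}

fn attr_to_token<'a>(attr: Attr<'a>) -> NirToken<'a> {
    match attr.name {
        "name" => NirToken::BindIdent(attr.value),
        "desc" => NirToken::Desc(attr.value),
        other => NirToken::Unexpected(other),
    }
}
```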
The first striking change here is how it affects the test cases: nearly all
`Incomplete`s are removed. Note that the parser has an existing
optimization whereby `Incomplete` with lookahead causes immediate recursion
within `Parser`, since those situations are used only for control flow and
to keep recursion out of `ParseState`s.
Next: this removes types from `nir::parse`'s grammar for attributes. The
types will instead be derived from NIR tokens later in the lowering
pipeline. This simplifies NIR considerably, since adding types into the mix
at this point was taking an already really complex lowering phase and making
it ever more difficult to reason about and get everything working together
the way that I needed.
Because of `attr_parse_stream!`, there are no more required attribute
checks. Those will be handled later in the lowering pipeline, if they're
actually needed in context, with possibly one exception: namespace
declarations. Those are really part of the document and they ought to be
handled _earlier_ in the pipeline; I'll do that at some point. It's not
required for compilation; it's just required to maintain compliance with the
XML spec.
We also lose checks for duplicate attributes. This is also something that
ought to be handled at the document level, and so earlier in the pipeline,
since XML cares, not us---if we get a duplicate attribute that results in an
extra NIR token, then the next parser will error out, since it has to check
for those things anyway.
A bunch of cleanup and simplification is still needed; I want to get the
initial integration committed first. It's a shame I'm getting rid of so
much work, but this is the right approach, and results in a much simpler
system.
DEV-13346
This really does need documentation.
With that said, this changes things up a bit: the value is now derived from
an `SPair` rather than an `Attr`, given that the name is redundant. We do
not need the attribute name span, since the philosophy is that we're
stripping the document and it should no longer be important beyond the
current context.
It does call into question errors, but my intent in the future is to be able
to have the lowering pipeline augment errors with its current state---since
we're streaming, then an error that is encountered during lowering of an
element will still have the element parser in the state representing the
parsing of that element; so that information does not need to be propagated
down the pipeline, but can be augmented as it bubbles back up.
More on that at some point in the future; not right now.
DEV-13346
As I talked about in the previous commit, this is going to be the
replacement for the aggregate `attr_parse!`; the next commit will integrate
it into `ele_parse!` so that I can begin to remove the old one.
It is disappointing, since I did put a bit of work into this and I think the
end result was pretty neat, even if it was never fully utilized. But, this
simplifies things significantly; no use in maintaining features that serve
no purpose but to confound people.
DEV-13346
Also: Revert "tamer: nir::desugar::interp: Token {SPair=>Attr}"
This reverts commit 7fd60d6cdafaedc19642a3f10dfddfa7c7ae8f53.
This reverts commit 12a008c66414c3d628097e503a98c80687e3c088.
This has been quite a tortured experience, trying to figure out how to best
fit desugaring into the existing system. The truth is that it ultimately
failed because I was not sticking with my intuition---I was trying to get
things out quickly by compromising on the design, and in the end, it saved
me nothing.
But I wouldn't say that it was a waste of time---the path was a dead end,
but it was full of experiences.
More to come, but interpolation is back to operating on NIR directly, and I
chose to treat it as a source-to-source mapping and not represent it using
the type system---interpolation can be an optional feature when writing TAME
frontends (the principal one being the XML-based one), and it's up to later
checks to assert that identifiers match a given domain.
I am disappointed by the additional context we lose here, but that can
always be introduced in the future differently, e.g. by maintaining a
dictionary of additional context for spans that can be later referenced for
diagnostic purposes. But let's worry about that in the future; it doesn't
make sense to further complicate IRs for such a thing.
DEV-13346
This changes the input token from a more generic `SPair` to `Attr`, which
reflects the new target integration point in the `attr_parse!`
parser-generator.
This is a compromise---I'd like for it to remain generic and have stitching
deal with all integration concerns, but I have spent far too much time on
this and need to keep moving.
With that said, we do benefit from knowing where this must fit in---it's
easier to reason about in a more concrete way, and we can take advantage of
the extra information rather than being burdened by its presence and
ignoring it. We need to be able to convert back into `XirfToken` (see a
recent commit that discusses that) for `StitchExpansion`, which is why
`Attr` is here. And since it is, we can use it to explain to the user not
just the interpolation specification used to derive params, but also the
attribute it is associated with. This is what TAME (in XSLT) does today,
IIRC (I wrote it, I just forget exactly). It also means that I can name the
parameters after the attribute.
So, that'll be in a following commit; I was disappointed when my prior
approach with `SPair` didn't give me enough information to be able to do
that, since I think it's important that the system be as descriptive as
possible in how it derives information. Of course, traces would reveal how
the parser came about the derivation, but that requires recompilation in a
special tracing mode.
DEV-13156
This was a substantial change. Design and rationale are documented on
`AttrFieldSum` and related as part of this change, so please review the diff
for more information there.
If you're a Ryan employee, DEV-13209 gives plenty of profiling information,
including raw data and visualizations from kcachegrind. For everyone else:
you can easily produce your own from this commit and the previous one by
comparing the `__memcpy_avx_unaligned_erms` calls. The reduction is
significant in this commit (~90%), and the number of Parsers invoking it has
been reduced. Rust has been able to optimize more aggressively, and
compound some of those optimizations, with the smaller `NirParseState`
width.
It is also worth noting that `malloc` calls do not change at all between
these two commits, so when we refer to memory, we're referring to
pre-allocated memory on the stack, as TAMER was designed to utilize.
DEV-13209
This introduces the concept of sugared NIR and provides the boilerplate for
a desugaring pass. The earlier commits dealing with cleaning up the
lowering pipeline were to support this work, in particular to ensure that
reporting and recovery properly applied to this lowering operation without
adding a ton more boilerplate.
DEV-13158
This helps to clarify the situations under which these errors can occur, and
the generality also helps to show why the inner types are as they
are (e.g. use of `String`).
But more importantly, this allows for an error type in `finalize` that is
detached from the `ParseState`, which will be able to be utilized in the
lowering pipeline as a more general error distinguishable from other
lowering errors. At the moment I'm maintaining BC, but a following commit
will demonstrate the use case to introduce recoverable vs. non-recoverable
errors.
DEV-13158
A type alias was added for BC before errors were hoisted out in a previous
commit, but it is unnecessary because of the associated type on
`ParseState`.
This also corrects the long-existing issue of using generated identifiers in
tests.
DEV-7145
This moves `paste::paste!` up a line and reduces a level of indentation,
since it's so squished. Aside from docblock reformatting, there are no
other changes.
DEV-7145
This slims out the macro even further. It does result in an
awkwardly-placed `PhantomData` because I don't want to add another variant
that isn't actually used (since they represent states).
DEV-7145
This is in preparation for hoisting out the common states, as was done with
the Sum NT in a previous commit.
I also think that organizing states in this way is more clear. The previous
embedding of the variants named after the NTs themselves was because the
parser was storing the child state within it, before the introduction of the
superstate trampoline.
DEV-7145
Everything except for one state was already accounted for. We can now have
confidence that the parser will never panic due to state transitions (beyond
legitimate error conditions).
There are some `unreachable!`s to contend with still.
DEV-7145
This is the same as the previous commits, but for non-sum NTs.
This also extracts errors into a separate module, which I had hoped to do in
a separate commit, but it's not worth separating them. My _original_ reason
for doing so was debugging (I'll get into that below), but I had wanted to
trim down `ele.rs` anyway, since that mess is large and a lot to grok.
My debugging was trying to figure out why Rust was failing to derive
`PartialEq` on `NtError` because of `AttrParseError`. As it turns out,
`AttrParseError::InvalidValue` was failing, thus the introduction of the
`PartialEq` trait bound on `AttrParseState::ValueError`. Figuring this out
required implementing `PartialEq` myself without `derive` (well, using LSP,
which did all the work for me).
I'm not sure why this was not failing previously, which is a bit of a
concern, though perhaps in the context of the macro-expanded code, Rust was
able to properly resolve the types.
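A hedged sketch of the bound in question (other bounds elided; this is not
TAMER's full trait definition):

```
trait AttrParseState {
    // Requiring `PartialEq` here allows error types that embed a
    // `Self::ValueError` (e.g. `AttrParseError::InvalidValue`) to
    // `derive(PartialEq)` themselves.
    type ValueError: PartialEq;
}
```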
DEV-7145
The `ele_parse!` macro is a monstrosity, and expands into many different
identifiers. The hope is that chipping away at things like this will not
only make the template easier to understand by framing portions of the
problem in terms of more traditional Rust code, but will also hopefully
reduce compile times by reducing the amount of code that is expanded by the
macro.
DEV-7145
This introduces NIR, but only as an accepting grammar; it doesn't yet emit
the NIR IR, beyond TODOs.
This modifies `tamec` to, while copying XIR, also attempt to lower NIR to
produce parser errors, if any. It does not yet fail compilation, as I just
want to be cautious and observe that everything's working properly for a
little while as people use it, before I potentially break builds.
This is the culmination of months of supporting effort. The NIR grammar is
derived from our existing TAME sources internally, which I use for now as a
test case until I introduce test cases directly into TAMER later on (I'd do
it now, if I hadn't spent so much time on this; I'll start introducing tests
as I begin emitting NIR tokens). This is capable of fully parsing our
largest system with >900 packages, as well as `core`.
`tamec`'s lowering is a mess; that'll be cleaned up in future commits. The
same can be said about `tameld`.
NIR's grammar has some initial documentation, but this will improve over
time as well.
The generated docs still need some improvement, too, especially with
generated identifiers; I just want to get this out here for testing.
DEV-7145
This includes the case where we are on the last state, expecting a close.
Previously, there were a couple major issues:
1. After parsing an NT, we can't allow preemption because we must emit a
dead state so that we can remove the NT from the stack, otherwise
they'll never close (until the parent does) and that results in
unbounded stack growth for a lot of siblings. Therefore, we cannot
preempt on `Text`, which causes the NT to receive it, emit a dead
state, transition away from the NT, and not accept another NT of the
same type after `Text`.
2. When encountering an unknown element, the error message stated that a
closing tag was expected rather than one of the elements accepted by the
final NT.
For #1, this was solved by allowing the parent to transition back to the NT
if it would have been matched by the previous NT. A future change may
therefore allow us to remove repetition handling entirely and allow the
parent to deal with it (maybe).
For #2, the trouble is with the parser generator macro---we don't have a
good way of knowing the last NT, and the last NT may not even exist if none
was provided. This solution is a compromise, after having tried and failed
at many others; I desperately need to move on, and this results in the
correct behavior and doesn't sacrifice performance. But it can be done
better in the future.
It's also worth noting for #2 that the behavior isn't _entirely_ desirable,
but in practice it is mostly correct. Specifically, if we encounter an
unknown token, we're going to blow through all NTs until the last one, which
will be forced to handle it. After that, we cannot return to a previous NT,
and so we've forfeited the ability to parse anything that came before it.
NIR's grammar is such that sequences are rare and, if present, there are
really only ever two NTs, so this awkward behavior will rarely cause
practical issues. With that said, it ought to be improved in the future,
but let's wait to see if other parts of the lowering pipeline provide more
appropriate places to handle some of these things (even though it really
ought to be handled at the grammar level).
But I'm well out of time to spend on this. I have to move on.
DEV-7145
`ele_parse!` was recently converted to accept zero-or-more for every NT to
simplify the parser-generator, since NIR isn't going to be able to
accurately determine whether child requirements are met anyway (because of
the template system).
This ensures that `Close` can be accepted when we're expecting an
element. It also adds a test for a scenario that's causing me some trouble
in stashed code so that I can ensure that it doesn't break.
DEV-7145
This sets the maximum depth to 64, which is still arbitrary, but
unfortunately the sum types introduce multiple levels of nesting, in
particular for template applications, so nested applications can result in a
fairly large stack.
I have various ideas to improve upon that---limited a bit in that
repetition, as currently implemented, inhibits tail calls---but they're not
worth pursuing just yet relative to other priorities. The impact of this
change is not significant.
DEV-7145
This removes support for configurable repetition.
What? Why?
As it turns out, the complexity that repetition adds is quite significant
and is not worth the effort. The truth is that NIR is going to have to
allow zero-or-more matches on virtually everything _anyway_ because template
application is allowed virtually anywhere---it is not possible to fully
statically analyze TAME's sources because templates can expand into just
about anything. Given that, AIR (or something down the line) is going to
have to supply the necessary invariants instead.
It does suck, though, that this removes a lot of code that I fairly recently
wrote, and spent a decent amount of time on. But it's important to know
when to cut your losses.
Perhaps I could have planned better, but deriving this whole system has
been quite the experiment.
DEV-7145
If attributes fail to parse (e.g. missing required attribute) and parsing
reaches a dead state, this will recover by ignoring the entire element. It
previously panicked with a TODO.
DEV-7145
These were initially used to prevent conflicts with generated variants, but
we are no longer generating such variants since they're being jumped to via
the trampoline.
DEV-7145
I'm starting to clean up some TODOs, and this was a glaring one causing
panics when encountered. The recovery for this is simple, because we have
no choice: just stop parsing; leave it to the next lowering operation(s) to
complain that we didn't provide what was necessary. They'll have to,
anyway, since templates mean that NIR cannot ever have enough information to
guarantee that a document is well-formed, relative to what would expand from
the template.
DEV-7145
This allows for a construction like this:
```
ele_parse! {
    [...]

    StmtX := QN_X {
        [...]
    };

    StmtY := QN_Y {
        [...]
    };

    ExprA := QN_A {
        [...]
    };

    ExprB := QN_B {
        [...]
    };

    Expr := (ExprA | ExprB);
    Stmt := (StmtX | StmtY);

    // This previously was not allowed:
    StmtOrExpr := (Stmt | Expr);
}
```
There were initially two barriers to doing so:
1. Efficiently matching; and
2. Outputting diagnostic information about the union of all expected
elements.
The first was previously resolved with the introduction of `NT::matches`,
which is macro-expanded in a way that Rust will be able to optimize a
bit. Worst case, it's effectively a linear search, but our Sum NTs are not
that deep in practice, so I don't expect that to be a concern.
The concern I was trying to avoid was heap-allocated `NodeMatcher`s, which
recursive data structures would have required, since that would have put
heap access in a very hot code path; that is not an option.
That left problem #2, which ended up being the harder problem. The solution
was detailed in the previous commit, so you should look there, but it
amounts to being able to format individual entries as if they were a part
of a list by making them a function of not just the matcher itself, but also
the number of items in (recursively) the sum type and the position of the
matcher relative to that list. The list length is easily
computed (recursively) at compile-time using `const`
functions (`NT::matches_n`).
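A hedged sketch of the compile-time counting idea (names and shapes are
assumed for illustration, not TAMER's actual definitions):

```
// A leaf NT expects exactly one element; a Sum NT's count is the sum
// of its members' counts, all evaluable at compile time.
const fn leaf_matches_n() -> usize {
    1
}

const fn sum_matches_n(members: &[usize]) -> usize {
    let mut total = 0;
    let mut i = 0;
    while i < members.len() {
        total += members[i];
        i += 1;
    }
    total
}

// Each member can then render itself as "item i of n" when
// formatting the union of expected elements for diagnostics.
const MEMBERS: [usize; 2] = [leaf_matches_n(), leaf_matches_n()];
const EXPR_N: usize = sum_matches_n(&MEMBERS);
```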
And with that, NIR can be abstracted in sane ways using Sum NTs without a
bunch of duplication that would have been a maintenance burden and an
inevitable source of bugs (from having to duplicate NT references).
DEV-7145
This allows using a `[attr]` special form to stream attributes as they are
encountered rather than aggregating a static attribute list. This is
necessary in particular for short-hand template application and short-hand
function application, since the attribute names are derived from template
and function parameter lists, which are runtime values.
The syntax for this is a bit odd since there's a semi-useless and confusing
`@ {} => obj` still, but this is only going to be used by a couple of NTs
and it's not worth the time to clean this up, given the rather significant
macro complexity already.
DEV-7145
This uses the same mechanism that was introduced for handling `Text` nodes
in mixed content, allowing for arbitrary element `Open` matches for
preemption by the superstate.
This will be used to allow for template expansion virtually
anywhere. Unlike the existing TAME, it'll even allow for it at the root,
though whether that's ultimately permitted really depends on how I approach
template expansion; it may fail during a later lowering operation.
This is interesting because this approach is only possible because of the
CPS-style trampoline implementation. With the previous composition-based
approach, each and every parser would have had to perform this check, as we
previously did with `Text` nodes.
As usual, this is still adding to the mess a bit, and it'll need some future
cleanup.
DEV-7145
This introduces the concept of superstate node preemption generally, which I
hope to use for template application as well, since templates can appear in
essentially any (syntactically valid, for XML) position.
This implements mixed content handling by defining the mapping on the
superstate itself, which really simplifies the problem but forgoes
fine-grained text handling. I had hoped to avoid that, but oh well.
This pushes the responsibility of whether text is semantically valid at that
position to NIR->AIR lowering (which we're not transitioning to yet), which is
really the better place for it anyway, since this is just the grammar. The
lowering to AIR will need to validate anyway given that template expansion
happens after NIR.
Moving on!
DEV-7145
This was accepting an early EOF when the active child `ParseState` was in
an accepting state, because it was not ensuring that everything on the
stack was also accepting.
Ideally, there should be nothing on the stack, and hopefully in the future
that's what happens. But with how things are today, it's important that, if
anything is on the stack, it is accepting.
Since `is_accepting` on the superstate is only called during finalization,
and because the check terminates early, and because the stack practically
speaking will only have a couple things on it max (unless we're in tail
position in a deeply nested tree, without TCO [yet]), this shouldn't be an
expensive check.
Implementing this did require that we expose `Context` to `is_accepting`,
which I had hoped to avoid having to do, but here we are.
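A toy model of the check (not TAMER's types; illustrative only):

```
struct Context {
    stack: Vec<State>,
}

enum State {
    Accepting,
    Parsing,
}

impl State {
    fn is_accepting(&self) -> bool {
        matches!(self, State::Accepting)
    }
}

// Early EOF was previously accepted whenever `active` accepted; now
// every frame remaining on the stack must accept too.  `all`
// terminates early on the first non-accepting frame.
fn superstate_is_accepting(active: &State, ctx: &Context) -> bool {
    active.is_accepting() && ctx.stack.iter().all(State::is_accepting)
}
```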
DEV-7145
Along with this change we also had to change how we handle dead states in
the superstate. So there were two problems here:
1. Sum states were not yielding a dead state after recovery, which meant
that parsing was unable to continue (we still have a `todo!`); and
2. The superstate considered it an error when there was nothing left on
the stack, because I assumed that ought not happen.
Regarding #2---it _shouldn't_ happen, _unless_ we have extra input after we
have completed parsing. Which happens to be the case for this test case,
but more importantly, we shouldn't be panicking with errors about TAMER bugs
if somebody puts extra input after a closing root tag in a source file.
DEV-7145
This does two things:
1. Places the expected list on a separate help line as a footnote where
it'll be a bit more tolerable when it inevitably overflows the terminal
width in certain contexts (we may wrap in the future); and
2. Removes angled brackets from the element names so that they (a) better
correspond with the span which highlights only the element name and (b)
do not imply that the elements take no attributes.
DEV-7145
When we match a QName against a namespace, we ought to store the matching
QName for use (a) in error messages and (b) as a binding. The former is
necessary for sensible errors (rather than saying
that it's e.g. expecting a closing `t:*`) and the latter is necessary for
e.g. getting the template name out of `t:foo`.
DEV-7145
This allows matching on a namespace prefix by providing a `Prefix` instead
of a `QName`. This works, but is missing a couple notable things (and
possibly more):
1. Tracking the QName that is _actually_ matched so that it can be used in
messages stating what the expected closing tag is; and
2. Making that QName available via a binding.
This will be used to match on `t:*` in NIR. If you're wondering how
attribute parsing is supposed to work with that (of course you're wondering
that, random person reading this)---that'll have to work differently for
those matches, since template shorthand application contains argument names
as attributes.
DEV-7145
This introduces `NodeMatcher`, with the intent of introducing wildcard QName
matches for e.g. `t:*` nodes. It's not yet clear whether I'll expand this
to support text nodes, or whether I'll convert text nodes into elements to
re-use the existing system (which I had initially planned on doing, but
didn't because of the work and expense (token expansion) involved in the
conversion).
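A hedged sketch of the concept (TAMER's actual `QName` and `Prefix` types
differ; plain strings are used here to keep it self-contained):

```
enum NodeMatcher {
    /// Match a specific element, e.g. `t:foo`.
    QName(&'static str, &'static str),
    /// Match any element within a namespace, e.g. `t:*`.
    Prefix(&'static str),
}

impl NodeMatcher {
    fn matches(&self, prefix: &str, local: &str) -> bool {
        match self {
            Self::QName(p, l) => *p == prefix && *l == local,
            Self::Prefix(p) => *p == prefix,
        }
    }
}

// `NodeMatcher::Prefix("t")` matches both `t:foo` and `t:bar`,
// whereas `NodeMatcher::QName("t", "foo")` matches only `t:foo`.
```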
DEV-7145