This was removed in a previous commit while working on simplifying the
implementation, with the hope of returning to it once things were in a
better place. They are, so let's bring it back.
DEV-12151
`SpanLabel` was created during a very early refactoring of this system, and
I've just been fighting with it since. This removes it, and simplifies
some things in the process.
It also makes clear that `Level` is never optional and removes the awkward
`Level::default` that was there previously; the default is now the lowest
level, which will always be able to be escalated.
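Roughly, the idea is as follows (a sketch only---the variant names and
ordering here are illustrative, not the verbatim definition): deriving
`Default` on the lowest variant makes escalation a simple `max`.

    #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Default)]
    pub enum Level {
        /// Lowest severity and the default,
        ///   which can therefore always be escalated.
        #[default]
        Help,
        Note,
        Warning,
        Error,
    }

    impl Level {
        /// Escalate to the more severe of the two levels.
        pub fn escalate(self, other: Self) -> Self {
            self.max(other)
        }
    }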
DEV-12151
This does what the original proof-of-concept implementation did---skip a
span that was just processed, since it'll be squashed into the previous
anyway. These duplicate spans originate from the diagnostic system when
producing supplemental help information.
DEV-12151
Tests are large and will be getting larger. The source will also grow as
it's better documented and cleaned up. It's getting more difficult to
navigate efficiently and concurrently modify implementation and tests, and
parsing via LSP is getting slower with certain types of changes.
DEV-12151
Alright, starting to settle on an abstraction now, and things are coming
together. This gives us line numbers in the previously-empty gutter, and
widens the gutter to accommodate. Gutters are normalized across
sections. Sections are not yet collapsed for sequential line numbers in the
same context.
Exciting!
Here's an example, on an xmlo file:
error: expected closing tag for `preproc:symtable`
  --> /home/.../foo.xmlo:16:4
      |
   16 | <preproc:symtable xmlns:map="http://www.w3.org/2005/xpath-functions/map">
      | ----------------- note: element `preproc:symtable` is opened here
  --> /home/.../foo.xmlo:11326:4
      |
11326 | </preproc:wrong>
      | ^^^^^^^^^^^^^^^^ error: expected `</preproc:symtable>`
DEV-12151
The `Section` itself is now responsible for outputting the gutter, which
puts us in a position to be able to apply consistent formatting without
having to propagate width data to every line variant.
Now `SourceLine` _does_ actually correspond to a line of output, which will
allow for better formatting (e.g. collapsing padding) and, importantly,
proper management of gutters.
Note that the seemingly unnecessary `SectionSourceLine` allows for subtle,
consistent formatting across all variants' gutters in `SectionLine`, which will
allow us to hoist that rendering out in the next commit. The other option
was to include a trailing space for padding and marks, but that is not only
sloppy and undesirable, but asking for confusion, especially in editors (like
mine) that trim trailing whitespace.
DEV-12151
If a column isn't present, it degrades to displaying labels like footnotes
anyway, so this simplifies the system rather than catering to a rare
case. With that said, this does lose functionality, since it does not
render the source line at all, even though we _could_ do so.
I may re-introduce that rendering after some further refactoring,
specifically for gutters.
DEV-12151
Using a byte vector just makes life more difficult with regard to preparing
the diagnostic reports. We're already validating UTF-8 data for column
generation, which is necessary for a robust report, so let's just store it
as a String to begin with.
DEV-12151
Note that, if a span is first encountered with a mark but with _no_ label,
the first label (if collapsed) will be on the next line. This allows a span
to be marked without extra visual noise if it's not necessary, and to be
able to trust that it'll stay that way.
Until coloring is introduced, this may or may not be easier to read
depending on context.
This is also not yet taking into account where on the line it begins, and so
may render poorly if the span is at the end of a line. That will be fixed
later on.
DEV-12151
This is now visible in the diagnostic output. Example at this point in
time, on an xmlo file for one of our smallest systems:
error: expected closing tag for `preproc:symtable`
  --> /home/.../foo.xmlo:16:4
   |
   | <preproc:symtable xmlns:map="http://www.w3.org/2005/xpath-functions/map">
   | -----------------
   = note: element `preproc:symtable` is opened here
  --> /home/.../foo.xmlo:11326:4
   |
   | </preproc:wrong>
   | ^^^^^^^^^^^^^^^^
   = error: expected `</preproc:symtable>`
DEV-12151
Looking more and more Rust-like. Shameless copy.
TBH I forget what character it uses for help, but it's easy enough to
change.
Also, to be clear: this is modeled after Rust, but it's not a requirement of
mine that it look exactly like it. I just like the general style; I'll
surely deviate over time, as appropriate (or as I feel like it).
DEV-12151
This has the effect of highlighting the columns of the source lines using
'^' as an underline.
The next step will be to have the underline character depend on the
`Level`.
If this commit message doesn't sound all that exciting, given what it
finally achieved after all this time, it's because I'm exhausted, and my
prototype has already taken my excitement. But this is significant, given
all the work leading up to it.
There is some code cleanup needed and some unit tests that ought to be
written rather than relying on integration, but considering how much this is
being refactored, I don't want to add to that refactoring cost just yet
before gutters are introduced and I know things are settled for now.
DEV-12151
This has been a lot of refactoring for something that I prototyped a week
ago, and the prototype is still further along in its output formatting (it
has line numbering in gutters and span markings).
But, this has come a long way, and I'm happy with it overall, though I'm not
happy with my slow pace and struggle to maintain focus. But those are
personal issues.
This leaves a lot to be desired, but at the same time is still really
helpful. There are a couple of notable TODOs regarding pointless allocation
and UTF-8 re-checking, but otherwise, the feature-related steps are:
- Gutters with line numbers; and
- Marking columns associated with the span.
DEV-12151
Rather than squashing as a separate operation, and explicitly denoting when
it occurred, we'll just always squash, as was done before these changes. It
doesn't really make sense to make this optional and there's not any value in
keeping the decision around.
This also sets us up favorably for future changes: it creates a vector of
labels, which can be analyzed later to determine how to best lay out marks
and labels.
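The squashing itself amounts to grouping adjacent labels that share a span;
as a sketch (the types here are stand-ins for the real ones):

    fn squash<S: PartialEq, L>(labels: Vec<(S, L)>) -> Vec<(S, Vec<L>)> {
        let mut out: Vec<(S, Vec<L>)> = Vec::new();

        for (span, label) in labels {
            // Same span as the previous entry: squash into it.
            if let Some((prev, ls)) = out.last_mut() {
                if *prev == span {
                    ls.push(label);
                    continue;
                }
            }

            // Different span: begin a new entry with its own label vector.
            out.push((span, vec![label]));
        }

        out
    }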
DEV-12151
Just renames the lifetime to refer to the `Diagnostic`, rather than a
`Label` returned by it, which was all `'l` was previously used for.
Note that many labels have a `'static` lifetime; this doesn't change that or
somehow cause it to reallocate; the label must live _for at least `'d`_.
DEV-12151
Rather than rendering the diagnostic `Display` message to a string only to
copy it to yet another buffer later on, this simply stores a reference to
the `Diagnostic` that was provided. This also adds a type to the `Report`
associating it with the provided `Diagnostic`, which does seem appropriate,
given that the report was produced for it.
I should probably rename '{l=>d} now.
DEV-12151
Rather than writing to the provided `Write` object, this produces a `Report`
object. While a lifetime still exists for the diagnostic data (labels,
specifically), I was able to remove the other lifetime resulting from
`ResolvedSpan` by transferring ownership of the data to the `Report`
itself. Once actual source lines are integrated shortly, `Report` will
include those as well.
This has been a tedious process, but it's coming together. Hopefully these
commits documenting the progressive and ugly refactoring are found useful by
some reader in the future.
DEV-12151
The line number was getting special treatment that is simply not worth the
cost (with regards to how burdensome it is on the type definitions). This
simplifies things quite a bit.
If we want header customization in the future, we can worry about that in a
different way, or allow the header as a whole to be swapped out, rather than
its constituents.
DEV-12151
`HeadingColNum` is no longer constructed by `HeadingLineNum`. This both
narrows the types and required data (e.g. removing dummy values in test
cases), and reduces the coupling (by favoring composition, but still coupled
with the concrete type).
DEV-12151
I'm unhappy with the current state of this, which is why I haven't settled
on docs or unit tests for these changes yet (though note that the
integration tests do cover these changes)---this is still a prototype
refactoring.
In particular, this needs to do more lowering---the `ResolvedSpan` and
`MaybeResolvedSpan` need to be eliminated and lowered into exactly what is
needed so that we can stop reasoning about them and propagating them.
Further, having lines and columns lazily evaluate themselves for
display---based on `MaybeResolvedSpan`---adds extra generics that shouldn't
be necessary; they should be pre-computed and store the concrete data they
need in variants. Display shouldn't involve computation beyond formatting
of pre-computed data.
That was always the plan, but this refactoring has been incremental.
Anyway: this is in a working and integration-tested state, but it's going to
change.
DEV-12151
This generalizes the types a bit more and introduces unit tests. Note that
these are still also covered by integration tests.
The next step will be to finish generalizing
`<VisualReporter as Reporter>::render`, after which I'll get back to the
task of outputting the source line along with markings and labels.
DEV-12151
This is just to provide clarity. `ctx` is not so widely used that we
benefit from such a short identifier, and it's not worth the cognitive
burden for people unfamiliar with what it may mean.
DEV-12151
This is redundant with the `Endpoints` variant, although it did read
better. It's just another case to have to handle.
I was originally going to use `std::ops::RangeInclusive` for `Endpoints`,
however that struct also contains an extra bool indicating whether it was
exhausted (as an iterator), which isn't appropriate for this.
DEV-12151
This logic is still covered by the integration tests; I'll be adding unit
tests once it's decoupled to the point where that's possible, which should
be shortly, and after I make sure this is the route I do want to go down.
DEV-12151
This simplifies types and error handling since we will always have at least
one line, provided that the span is within the range of the context. To
ensure that, this patch introduces a new error.
DEV-12151
I did not initially introduce lifetimes because I wasn't sure how the system
was going to evolve, but now lifetimes are going to be needed in a number of
contexts. The core of TAMER is able to avoid lifetimes in most instances
because of its internment system, but its use is not appropriate for the
diagnostic system's buffers (beyond sourcing strings from already-interned
data).
DEV-12151
Determining the column number is not as simple as performing byte
arithmetic, because certain characters have different widths. Even if we
only accepted ASCII, control characters aren't visible to the user.
This uses the unicode-width crate as an alternative to POSIX wcwidth, to
determine (hopefully) the number of fixed-width cells that a unicode
character will take up on a terminal. For example, control characters are
zero-width, while an emoji is likely double-width. See test cases for more
information on that.
There is also the unicode-segmentation crate, which can handle extended
grapheme clusters and such, but (a) we'll be outputting the line to the
terminal and (b) there's no guarantee that the user's editor displays
grapheme clusters as a single column. LSP measures in UTF-16,
apparently. I use both Emacs and Vim from a terminal, so unicode-width
applies to me. There's too much variation to try to solve that right now.
The columns can be considered a visual span---this gives us enough
information to draw line annotations, which will happen soon.
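To make the width calculation concrete, here's a minimal sketch using
unicode-width (the function is illustrative, not the actual API):

    use unicode_width::UnicodeWidthChar;

    /// Number of fixed-width terminal cells occupied by `line` up to
    /// byte offset `end`.
    fn visual_width(line: &str, end: usize) -> usize {
        line[..end]
            .chars()
            // Control characters yield `None`; count them as zero cells.
            .map(|c| c.width().unwrap_or(0))
            .sum()
    }

    fn main() {
        assert_eq!(visual_width("abc", 3), 3); // ASCII: one cell each
        assert_eq!(visual_width("a😀", 5), 3); // emoji: two cells
        assert_eq!(visual_width("a\x08b", 3), 2); // control: zero cells
    }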
Here are some useful links:
- https://hsivonen.fi/string-length/
- https://unicode.org/reports/tr29/
- https://github.com/rust-analyzer/rowan/issues/17
- https://www.reddit.com/r/rust/comments/gpw2ra/how_is_the_rust_compiler_able_to_tell_the_visible/
DEV-10935
This does not yet resolve columns, and omits the length of the span, but
it's starting to come together.
This is particularly exciting for me to see because I've been wanting line
numbers in TAME error messages for over a decade.
DEV-10935
This adds support for rewinding the underlying buffer when necessary to
read a span that occurs earlier within the same context (which could also
include the same span read twice).
As part of this change, I cleaned up the code a bit. Working with this
system can be confusing with the different meanings of the byte offsets and
the different ways of interpreting lines relative to the span that is
provided. There's not a lot of code here, but it represents a lot of work
to get right.
This works, but it's ugly and requires some cleanup. It shows that there
are some interesting considerations when determining how to best represent
the location of spans to the user in a way that is intuitive.
This is not yet integrated with the reporter, which will require a layer to
load a `Context` from disk.
DEV-10935
This is a POC, minimal-effort integration that also creates the TamecError
sum type analogous to TameldError.
I'll work on reducing the boilerplate in the future.
A note regarding the type and boilerplate vs. dynamic dispatch, for any
future readers: the purpose of this is to be explicit about the error types
so that the system is self-documenting and it forces an understanding of
its error conditions. `Box<dyn Error>` is basically "eh idk anything can
happen!", which is not what I'm interested in having.
DEV-10935
This is a working concept that will continue to evolve. I wanted to start
with some basic output before getting too carried away, since there's a lot
of potential here.
This is heavily influenced by Rust's helpful diagnostic messages, but will
take some time to realize a lot of the things that Rust does. The next step
will be to resolve line and column numbers, and then possibly include
snippets and underline spans, placing the labels alongside them. I need to
balance this work with everything else I have going on.
This is a large commit, but it converts the existing Error Display impls
into Diagnostic. This separation is a bit verbose, so I'll see how this
ends up evolving.
Diagnostics are tied to Error at the moment, but I imagine in the future
that any object would be able to describe itself, error or not, which would
be useful both for the Summary Page and for query
functionality, to help developers understand the systems they are writing
using TAME.
Output is integrated into tameld only in this commit; I'll add tamec
next. Examples of what this outputs are available in the test cases in this
commit.
DEV-10935
We can just use PathSymbolId directly and simplify things. Typing can (and
should) happen on the symbol itself, and if we want a separate symbol type,
it ought to have its own interner.
For now, it doesn't, and having this extra type is just a PITA.
DEV-10935
There's no use in complicating the error handling here when we'd just
default to `UNKNOWN_SPAN` anyway when trying to render it. `UNKNOWN_SPAN`
didn't exist at the time of writing.
DEV-10935
This entirely removes the old XmloReader that has since been replaced with a
XIR-based reader.
I had been holding off on this because the new reader is slower, pending
performance optimizations (which I'll do a little later on), however the
performance loss is of no practical consideration and only affects the
linker, which is still fast.
Therefore, it's better to get this old code out of the way to simplify
refactoring going forward. In particular, I'm working on the diagnostic
system.
This is a little sad, in a way---this is some of my first Rust code that I'm
deleting.
DEV-10935
This does not deal directly with XIRF (that's composed into a pipeline
outside of this parser).
I'd like to clean up further...perhaps I should retire the
wip-xmlo-xir-reader flag now, despite the minor performance regression (see
previous recent commits for explanation).
DEV-10935
This aggregates all non-panic errors that can occur during link time, making
`Box<dyn Error>` unnecessary. I've been wanting to do this for a long time,
so it's nice seeing this come together. This is a powerful tool, in that we
know, at compile time, all errors that can occur, and properly report on
them and compose them. This method of error composition ensures that all
errors have a chance to be handled within their context, though it'll take
time to do so in a decent way.
This just maintains compatibility with the dynamic dispatch that was
previously occurring. This work is being done to introduce the initial
diagnostic system, which was really difficult/confusing to do without proper
error types at the top level, considering the toplevel is responsible for
triggering the diagnostic reporting.
The cycle error is in particular going to be interesting once the system is
in place, especially once it provides spans in the future, since it will
guide the user through the code to understand how the cycle formed.
More to come.
DEV-10935
tamec and tameld will now both introduce a `Context` to XIR, which will use
it to create spans.
Here's an example of an error, now that it's all working well together:
  $ target/release/tameld --emit xmle -o /dev/null path/to/package.xmlo
  error: invalid preproc:sym/@dim `9` at [/../path/to/package.xmlo offset 1175451-1175452]
A future task will make this human-readable by producing line and column
numbers, and perhaps even a snippet (if not now, then eventually).
It's exciting to see this coming together finally.
DEV-10934
There's a bit to unpack here. Some of the spans originate from quick-xml's
error handling, but in coming up with test cases to try to trigger errors, I
found that quick-xml is far too permissive in what it accepts, and
outright dangerous in some situations.
I feel like the writing is on the wall for quick-xml, but I'll probably wait
until replacing `xmlo` with a more efficient format before deciding whether
to use a different library or implement parsing ourselves. There's a lot of
factors to consider, and a library would have to not only be correct and
performant, but provide useful information for span generation.
But for now, I have other more important things to work on, like a
functioning compiler. So while quick-xml is around, I'll just have to do
the best I can to provide a correct parser with useful errors.
DEV-10934
This is a large change, and was a bit of a tedious one, given the
comprehensive tests.
This introduces proper offsets and lengths for spans, with the exception of
some quick-xml errors that still need proper mapping. Further, this still
uses `UNKNOWN_CONTEXT`, which will be resolved shortly.
This also introduces `SpanlessError`, which `Error` explicitly _does not_
implement `From<SpanlessError>` for---this forces the caller to provide a
span before the error is compatible with the return value, ensuring that
spans will actually be available rather than forgotten for errors. This is
important, given that errors are generally less tested than the happy path,
and errors are when users need us the most (so, need span information).
Further, I had to use pointer arithmetic in order to calculate many of the
spans, because quick-xml does not provide enough information. There are no
safety considerations here, and the comprehensive unit test will ensure
correct behavior if the implementation changes in the future.
I would like to introduce typed spans at some point---I made some
opinionated choices when it comes to what the spans ought to
represent. Specifically, whether to include the `<` or `>` with the open
span (depends), whether to include quotes with attribute values (no),
and some other details highlighted in the test cases. If we provide typed
spans, then we could, knowing the type of span, calculate other spans on
request, e.g. to include or omit quotes for attributes. Different such
spans may be useful in different situations when presenting information to
the user.
This also highlights gaps in the tokens emitted by XIR, such as whitespace
between attributes, the `=` between name and value, and so on. These are
important when it comes to code formatting, so that we can reliably
reconstruct the XML tree, but it's not important right now. I anticipate
future changes would allow the XIR reader to be configured (perhaps via
generics, like a strategy-type pattern) to optionally omit these tokens if
desired.
Anyway, more to come.
DEV-10934
When wip-frontends is on, this will parse the input file using XIR and then
immediately output it again. This makes the necessary changes to be able to
read every source file we have in our largest project, such that the output
is identical after having been formatted with `xmllint --format -` (there
are differences because e.g. whitespace between attributes is not yet
maintained).
This is performant too, with times remaining essentially identical despite
the additional work.
DEV-10413
This resolves the performance issues caused by Rust's failure to elide the
ElementStack (ArrayVec) memcpys on move.
Since XIRF is invoked tens of millions of times in some cases for larger
systems, prior to this change, failure to optimize away moves for XIRF
resulted in tens of millions of memcpys. This resulted in linking of one
program going from 1s -> ~15s. This change reduces it to ~2.5s with the
wip-xmlo-xir-reader flag on, with the extra time coming from elsewhere (the
subject of future changes).
In particular, this change introduces a new mutable reference to
`ParseState::parse_token`, which is a reference to a `Context` owned by the
caller (e.g. `Parser`). In the case of XIRF, this means that
`Parser<flat::State, _>` will own the `ElementStack`/`ArrayVec` instead of
`flat::State`; this allows the latter to remain pure and benefit from Rust's
move optimizations, without sacrificing the otherwise-pure implementation.
ParseStates that do not need a mutable context can use `NoContext` and
remain pure.
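As a rough sketch of the resulting shape (names approximate the real types
and are not verbatim):

    pub struct Transition<S>(pub S);

    pub struct TransitionResult<S: ParseState>(
        pub Transition<S>,
        pub Result<(), <S as ParseState>::Error>,
    );

    pub trait ParseState: Sized {
        type Token;
        type Error;

        /// Mutable state owned by the caller (e.g. `Parser`),
        ///   so that `Self` remains a small, pure value that is
        ///   cheap to move.
        type Context: Default;

        fn parse_token(
            self,
            tok: Self::Token,
            ctx: &mut Self::Context,
        ) -> TransitionResult<Self>;
    }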
DEV-12024
This makes the necessary tweaks to have the entire linker work end-to-end
and produce a compatible xmle file (that is, identical except for
nondeterministic topological ordering). That's good, and finally that can
get off of my plate.
What's disappointing, and what I'll have more information on in future
commits, is how slow it is.
The linking of our largest package goes from ~1s -> ~15s with this
change. The reason is because of tens of millions of `memcpy` calls. Why?
The ParseState abstraction is pure and passes an owned `self` around, and
Parser replaces its own reference using this:
    let result;
    TransitionResult(Transition(self.state), result) =
        take(&mut self.state).parse_token(tok);
Naively, this would store a copy of the old state in `result`, allocate a
new ParseState for `self.state`, pass the original or a copy to
`parse_token`, and then overwrite `self.state` with the new ParseState that
is returned once it is all over.
Of course, that'd be devastating. What we want to happen is for Rust to
realize that it can just pass a reference to `self.state` and perform no
copying at all.
For certain parsers, this is exactly what happens. Great!
But for XIRF, we have this:
    /// Stack of element [`QName`] and [`Span`] pairs,
    /// representing the current level of nesting.
    ///
    /// This storage is statically allocated,
    /// allowing XIRF's parser to avoid memory allocation entirely.
    type ElementStack<const MAX_DEPTH: usize> = ArrayVec<(QName, Span), MAX_DEPTH>;

    /// XIRF document parser state.
    ///
    /// This parser is a pushdown automaton that parses a single XML document.
    #[derive(Debug, Default, PartialEq, Eq)]
    pub enum State<const MAX_DEPTH: usize, SA = AttrParseState>
    where
        SA: FlatAttrParseState,
    {
        /// Document parsing has not yet begun.
        #[default]
        PreRoot,

        /// Parsing nodes.
        NodeExpected(ElementStack<MAX_DEPTH>),

        /// Delegating to attribute parser.
        AttrExpected(ElementStack<MAX_DEPTH>, SA),

        /// End of document has been reached.
        Done,
    }
ParseState contains an ArrayVec, and its implementation details cause
LLVM _not_ to elide the `memcpy`s. And there are a lot of them.
Considering that ParseState is supposed to use only statically allocated
memory and be zero-copy, this is rather ironic.
Now, this _could_ be potentially fixed by not using ArrayVec; removing
it (and the corresponding checks for balanced tags) gets us down to
2s (which still needs improvement), but we can't have a core abstraction in
our system resting on a house of cards. What if the optimization changes
between releases and suddenly linking / building becomes shit slow? That's
too much of a risk.
Further, having to limit what abstractions we use just to appease the
compiler to optimize away moves is very restrictive.
The better option seems to be to go back to what I used to do: pass around
`&mut self`. I had moved to an owned `self` to force consideration of _all_
state transitions, but I can try to do the same thing in a different type of
way using mutable references, and then we avoid this problem. The
abstraction isn't pure (in the functional sense) anymore, but it's safe and
isn't relying on delicate inlining and optimizer implementation details to
have a performant system.
More information to come.
DEV-10863
This concludes the bulk of the header parsing, though there are surely going
to be other issues when I try to read a real xmlo file, such as
whitespace. That is something I expect that I'd rather handle as part of
XIRF, but maybe I'll initially ignore it here just to get it working. We'll
see.
DEV-10863
This parses the symbol dependency list (adjacency list).
I'm noticing some glaring issues in error handling, particularly that the
token being parsed while an error occurs is not returned and so recovery is
impossible. I'll have to address that later on, after I get this parser
completed.
Another previous question that I had a hard time answering in prior months
was how I was going to compose boilerplate parsers, e.g. handling the
parsing of single-attribute elements and such. A pattern is clearly taking
shape, and with the composition of parsers more formalized, that'll be able
to be abstracted away. But again, that's going to wait until after this
parser is actually functioning. Too many delays so far.
DEV-10863
Ideally this would just be an attribute, but I guess I never got around to
making that change in the compiler and I don't want a detour right now.
DEV-10863
I clearly was not paying attention to what was correct behavior here, since
the tests also verified the wrong behavior: rather than taking the last
processed attribute span, we should be taking the span of the opening
tag for the `preproc:sym` node.
DEV-10863
This simply removes boilerplate.
This will receive concrete examples once I come up with docs for the entire
module; there's boilerplate involved in testing and documenting this in
isolation and the time investment is not worth it yet until I'm certain that
this will not be changed.
DEV-10863
This integrates much of the work done so far to parse into a
`XmloEvent::SymDecl`. The attribute parsing _is_ verbose, and I do intend
to abstract it away later on, but I'm going to wait on that for now.
The new reader should be finishing up soon, which is really exciting, since
I started working on this months ago (before having to take a break on
TAMER); I'm anticipating strong performance gains in the reader, and this is
a test that will tell us how the compiler will perform moving forward with
the abstractions that I've spent so much time on.
DEV-10863
This introduces a new method similar to the previous `delegate`, but with
another closure that allows for handling lookahead tokens from the child
parser.
Admittedly, this isn't exactly what I was going for---a list of arguments
isn't exactly self-documenting, especially with the brevity when the
arguments line up---but this was easy to do and so I'll run with this for
now.
This also modified `delegate` to accept a context, even though it wasn't
necessary, both for consistency with its lookahead counterpart and for brevity
with the `into` argument (allowing, in our case, to just pass the name of
the variant, rather than a closure).
I'm not going to handle the actual starting and accepting state stitching
abstraction for now; I'd like to observe future boilerplate more before I
consider the best way to handle it, though I do have some ideas.
DEV-10863
This is the delegation portion of what I've come to call "state
stitching"---wiring together two state machines that recognize the same
input tokens.
This handles the delegation of tokens once the parser has been entered, but
does not yet handle the actual stitching part of it: wiring the start and
accepting states of the child parser to the parent.
This is indirectly tested by the XmloReader, but it will receive its own
tests once I further finalize this concept. I'm playing around with some
ideas. With that said, a quick visual inspection together with the
guarantees provided by the type system should convince any familiar reader
of its correctness.
DEV-10863
This wasn't the simplest thing to start with, but I wanted to explore
something with a higher level of complexity. There is some boilerplate to
observe here, including:
1. The state stitching (as I guess I'm calling it now) of SymtableState
with XmloReaderState is all boilerplate and requires no lookahead,
presenting an abstraction opportunity that I was holding off on
previously (attr parsing for XIRF requires lookahead).
2. This is simply collecting attributes into a struct. This can be
abstracted away in the future.
3. Creating stub parsers to verify that generics are stitched rather than
being tightly coupled with another state is boilerplate that maybe can
be abstracted away after a pattern is observed in future tests.
DEV-10863
This does some cleanup and adds `parse::Object` for use in disambiguating
`From` for `ParseStatus`, allowing the `Transition` API to be much more
flexible in the data it accepts and automatically converts. This allows us
to concisely provide raw output data to be wrapped, or provide `ParseStatus`
directly when more convenient.
There aren't yet examples in the docs; I'll do so once I make sure this API
is actually utilized as intended.
DEV-10863
This replaces u8 and will be used for the new XmloReader.
Previously I wasn't sure what direction TAMER was going to go in with
regards to dimensionality, but I do not expect that higher dimensions will
be supported, and if they are, they'd very likely compile down to lower ones
and create an illusion of higher-dimensionality.
Whatever the future holds, it's not used today, and I'd rather these types
be correct.
ASG needs changing too, but one step at a time.
DEV-10863
This converts the tuple type alias into a newtype, so that we may provide
our own implementations.
This differs from a previous approach that I took, which involved making
this type `Result<(S, T), (S, E)>` so that the return values composed well
with other functions. But the reality is that this is used only by other
`ParseState`s and `Parser`, so it's unnecessary.
However, this is also an attempt to utilize the new Try and FromResidual
traits; note how the Try associated types match precisely what I was trying
to do before, though they're used as intermediate types. I'll see how this
evolves.
DEV-10863
This allows the Results to compose and, importantly, is compatible with
`?` without having to put in any extra effort.
This puts the caller in an awkward spot, so I introduced a utility
function `result_tup0_invert` for now; we'll see if that stays or evolves
differently.
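For reference, here's a guess at its shape (the actual signature may
differ): it hoists the `Result` out of the tuple's first position so that
`?` applies to the whole thing.

    fn result_tup0_invert<A, B, E>(tup: (Result<A, E>, B)) -> Result<(A, B), E> {
        let (r, b) = tup;
        r.map(|a| (a, b))
    }

    fn demo() -> Result<(), &'static str> {
        // `?` now short-circuits on an error in position 0.
        let (n, state) = result_tup0_invert((Ok(42), "state"))?;
        assert_eq!((n, state), (42, "state"));
        Ok(())
    }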
DEV-10863
Since this is the object produced by this parser, this is likely the most
useful first thing to present as a summary of what `XmloReader` actually
does.
DEV-10863
This removes the flag from most of the code, which also resolves the
indentation. Not only was it bothering me, but I don't want (a) every line
modified when the module body is hoisted and (b) `rustfmt` to reformat
everything when that happens.
This means that everything will be built, even though it's not used, when
the flag is off, but I see that as a good thing.
DEV-10863
Finally we get to do some actual parsing with all of the preparatory work!
This means that we're finally ready to fully replace the old XmloReader,
provided that I'm okay with some boilerplate / lack of abstractions for
now (and I am, because all I've been doing is working on abstractions to
prepare lowering operations).
DEV-10863
This makes more sense for pattern matching. Encapsulation of these fields
is not necessary, given that it's passed around as an owned value and its
`new` method constructs it verbatim; the individual fields are
self-validating.
DEV-10863
This introduces a WIP lowering operation, abstracting away quite a bit of
the manual wiring work, which is really important to providing an API that
provides the proper level of abstraction for actually understanding what the
system is doing.
This does not yet have tests associated with it---I had started, but it's a
lot of work and boilerplate for something that is going to
evolve. Generally, I wouldn't use that as an excuse, but the robust type
definitions in play, combined with the tiny amount of actual logic, provide
a pretty high level of confidence. It's very difficult to wire these types
together and produce something incorrect without doing something obviously
bad.
Similarly, I'm holding off on proper docs too, though I did write some
information here.
More to come, after I actually get to work on the XmloReader.
On a side note: I'm happy to have made progress on this, since this wiring
is something I've been dreading and wondering about since before the Parser
abstraction even existed.
Note also that this makes parser::feed_toks private again---I don't intend
to support push parsers yet, since they're only needed internally. Maybe
for error recovery, but I'll wait to decide until it's actually needed.
DEV-10863
This begins to transition XmloReader into a ParseState. Unlike previous
changes where ParseStates were composed into a single ParseState, this is
instead a lowering operation that will take the output of one Parser and
provide it to another.
The mess in ld::poc (...which still needs to be refactored and removed)
shows the concept, which will be abstracted away. This won't actually get
to the ASG in order to test that this works with the
wip-xmlo-xir-reader flag on (development hasn't gotten that far yet), but
since it type-checks, it should conceptually work.
Wiring lowering operations together is something that I've been dreading for
months, but my approach of only abstracting after-the-fact has helped to
guide a sane approach for this. For some definition of "sane".
It's also worth noting that AsgBuilder will too become a ParseState
implemented as another lowering operation, so:
XIR -> XIRF -> XMLO -> ASG
These steps will all be streaming, with iteration happening only at the
topmost level. For this reason, it's important that ASG not be responsible
for doing that pull, and further we should propagate Parsed::Incomplete
rather than filtering it out and looping an indeterminate number of times
outside of the toplevel.
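As a self-contained sketch of that shape (every name below is a stand-in,
not the actual lowering API):

    struct XirTok;
    struct XirfTok;
    struct XmloEvent;
    struct AsgEvent;

    fn xirf(src: impl Iterator<Item = XirTok>) -> impl Iterator<Item = XirfTok> {
        src.map(|_| XirfTok)
    }

    fn xmlo(src: impl Iterator<Item = XirfTok>) -> impl Iterator<Item = XmloEvent> {
        src.map(|_| XmloEvent)
    }

    fn asg(src: impl Iterator<Item = XmloEvent>) -> impl Iterator<Item = AsgEvent> {
        src.map(|_| AsgEvent)
    }

    // Iteration happens only here, at the topmost level;
    //   every stage in between streams.
    fn link(src: impl Iterator<Item = XirTok>) {
        for _event in asg(xmlo(xirf(src))) { /* build the graph */ }
    }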
One final note: the choice of 64 for the maximum depth is entirely
arbitrary and should be more than generous; it'll be finalized at some point
in the future once I actually evaluate what maximum depth is reasonable
based on how the system is used, with some added growing room.
DEV-10863
This introduces a (still-private) way to _push_ tokens into the parser,
rather than relying purely on a pull-based interface. Not only does this
simplify the iterator, but this is also preparing to make the new `feed_tok`
public so that parsers can be composed in more contexts. I suspect that
this method may also be useful for error recovery, since it can be used to
inject tokens into arbitrary points of a token stream.
I kept the new method private for now so that I can introduce the new API
and docs separate from this refactoring.
DEV-10863
The parsing framework originally created for XIR is now more general and
useful to other things. We'll see how this evolves.
This needs additional documentation, but I'd like to see how it changes as
I implement XmloReader and then some of the source readers first.
DEV-10863
This adds a `Token` type to `ParseState`. Everything uses `xir::Token`
currently, but `XmloReader` will use `xir::flat::Object`.
Now that this has been generalized beyond XIR, the parser ought to be
hoisted up a level.
DEV-10863
This does a couple of things: it ensures that documents have one and only
one root node, and it properly handles dead transitions once parsing is
complete (allowing it to be composed).
This should make XIRF feature-complete for the time being. It does rely on
the assumption that the reader is stripping out any trailing whitespace, so
I guess we'll see if that's true as we proceed.
DEV-10863
I'm not rendering errors yet in practice, so this wouldn't have been
noticed, but we want error messages to reference the final byte in a file on
EOF, not the offset of the last-encountered token, which would be confusing.
This doesn't _directly_ pertain to what I'm working on; I just happened to
notice it.
DEV-10863
XIRF introduced the concept of `Transition` to help document code and
provide mental synchronization points that make it easier to reason about
the system. I decided to hoist this into XIR's parser itself, and have
`parse_token` accept an owned state and require a new state to be returned,
utilizing `Transition`.
Together with the convenience methods introduced on `Transition` itself,
this produces much clearer code, as is evidenced by tree::Stack (XIRT's
parser). Passing an owned state is something that I had wanted to do
originally, but I thought it'd lead to more concise code to use a mutable
reference. Unfortunately, that concision led to code that was much more
difficult than necessary to understand, and ended up having a net negative
benefit by leading to some more boilerplate for the nested types (granted,
that could have been alleviated in other ways).
This also opens up the possibility to do something that I wasn't able to
before, which was continue to abstract away parser composition by stitching
their state machines together. I don't know if this'll be done immediately,
but because the actual parsing operations are now able to compose
functionally without mutability getting in the way, the previous state coupling
issues with the parent parser go away.
DEV-10863
This introduces XIR Flat (XIRF), which is conceptually between XIR and
XIRT. This provides a more appropriate level of abstraction for further
lowering operations to parse against, and removes the need for other parsers
to perform their own validations (inappropriately) to ensure well-formed
XML.
There is still some cleanup worth doing, including moving some of the
parsing responsibility up a level back into the XIR parser.
DEV-10863
This behavior is unchanged, but it allows us to create more constant spans
for testing. For example:
    const S: Span = DUMMY_SPAN.offset_add(1).unwrap();
This, in turn, will allow for removing lazy_static! for tests that use it
for span generation.
DEV-10863
Petgraph was previously held back due to petgraph-graphml. I'd like to
transition away from that at some point, given that it's tied to petgraph
and also pulls in xmlns, on top of quick-xml and our XIR, but that can come
down the line.
The Options here are awkward and will be able to go away in the new reader
and in AsgBuilder once it has a proper state machine.
This gets rid of some of the initial migratory work for the new reader,
because PackageAttrs is gone. I'm going to wait to update this to the new
way until I get further into this.
DEV-11449
I'm finally back to TAMER development.
The original plan, some time ago, was to gate an entirely new XmloReader
behind a feature flag (wip-xmlo-xir-reader), and go from there, leaving the
existing implementation untouched. Unfortunately, it became too difficult
and confusing to marry the old aggregate API with the new streaming one.
AsgBuilder is the only system interacting with XmloReader, so I decided (see
previous commits) to just go the route of refactoring the existing
one. I'm not yet sure if I'll continue to progressively refactor this one
and eliminate the two separate implementations behind the flag, or if I'll
get this API similar and then keep the flag and reimplement it. But I'll
know soon.
DEV-11449
This is simply not worth it; the size is not going to be the bottleneck (at
least any time soon) and the generic not only pollutes all the things that
will use ASG in the near future, but is also incompatible with the SymbolId
default that is used everywhere; if we have to force it to 32 bits anyway,
then we may as well just default it right off the bat.
I thought that this seemed like a good idea at the time, and saving bits is
certainly tempting, but it was premature.
It's a bit odd that I've done next to nothing with TAMER for the past week
or so, and decided to do this one small thing before I go on break for the
holidays, but I felt compelled to do _something_. Besides, this gets me in
a better spot for the inevitable mental planning and writing I'll be doing
over the holidays.
This move was natural, given what this has evolved into---it has nothing to
do with the concept of a "tree", and the modules imports emphasized that
fact given the level of inappropriate nesting.
Now that the parser has been simplified by removing attributes, we can
further simplify the state transitions to make it more clear what further
refactoring can be done.
DEV-11339
More information can be found in the prior commit message, but I'll
summarize here.
This token was introduced to create a LL(0) parser---no tokens of
lookahead. This allowed the underlying TokenStream to be freely passed to
the next system that needed it.
Since then, Parser and ParseState were introduced, along with
ParseStatus::Dead, which introduces the concept of lookahead for a single
token---an LL(1) grammar.
I had always suspected that this would happen, given the awkwardness of
AttrEnd; it was just a matter of time before the right abstraction
manifested itself to handle lookahead.
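Conceptually (a sketch only---the real ParseStatus carries more than this):

    enum ParseStatus<T, O> {
        /// Token consumed; no object ready yet.
        Incomplete,
        /// Token consumed; an object was produced.
        Object(O),
        /// Accepting state with no transition for this token:
        ///   hand it back to the parent as one token of lookahead.
        Dead(T),
    }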
DEV-11339
Note that AttrParse{r=>}State needs renaming, and Stack will get a better
name down the line too. This commit message is accurate, but confusing.
This performs the long-awaited task of trying to observe, concretely, how to
combine two automata. This has the effect of stitching together the state
machines, such that the union of the two is equivalent to the original
monolith.
The next step will be to abstract this away.
There are some important things to note here. First, this introduces a new
"dead" state concept, where here a dead state is defined as an _accepting_
state that has no state transitions for the given input token. This is more
strict than a dead state as defined in, for example, the Dragon Book, where
backtracking may occur.
The reason I chose to make the Dead state accepting is simple: it represents
a lookahead situation. It says, "I don't know what this token is, but I've
done my job, so it may be useful in a parent context". The "I've done my
job" part is only applicable in an accepting state.
If the parser is _not_ in an accepting state, then an unknown token is
simply an error; we should _not_ try to backtrack or anything of the sort,
because we want only a single token of lookahead.
The reason this was done is because it's otherwise difficult to compose the
two parsers without requiring that AttrEnd exist in every XIR stream; this
has always been an awkward delimiter that was introduced to make the parser
LL(0), but I tried to compromise by saying that it was optional. Of course,
I knew that decision caused awkward inconsistencies, I had just hoped that
those inconsistencies wouldn't manifest in practical issues.
Well, now it did, and the benefits of AttrEnd that we had in the previous
construction do not exist in this one. Consequently, it makes more sense to
simply go from LL(0) to LL(1), which makes AttrEnd unnecessary, and a future
commit will remove it entirely.
All of this information will be documented, but I want to get further in
the implementation first to make sure I don't change course again and
therefore waste my time on docs.
DEV-11268
These were missed from a couple of commits ago, after I recalled that I
could now simplify the Stack variants; they were made more complicated due
to isolated attribute parsing.
These progressive refactorings do a good job illustrating why composing
parsers is better than a monolith---the complexity of the parsers is
significantly reduced, and the number of combinations of states are also
greatly reduced, which allows us to reason about them in isolation.
DEV-11268
This was added only for isolated attribute parsing. Of course, this does
mean that a new union type will be needed when combining the two parsers,
depending on the desired resolution, but that'll come at a later time and
possibly in a more general way.
DEV-11268
This nearly completely integrates the new Parser with xir::tree, but does
not yet compose AttrParseState. I also need to determine what to do with
`parse()` and, further, make `parser_from` generic as part of mod parse.
If we take a moment to reflect on all of the changes, this struggle has been
a roundabout way of converting tree's parser into parse::Parser; providing
a trait for Stack (as ParseState); beginning parser decomposition; and
moving some common logic into Parser. The composition of parsers is the
final piece to be realized.
This could have been a lot less work if I really understood exactly what I
wanted to do up front, but as was mentioned in previous commits, I was
really confusing myself trying to maintain API BC in ways that I should not
have for XmloReader. More on that will be coming soon as well.
DEV-11268
This will allow Parser to operate on both owned and &mut values, and is the
same approach that Rust's built-in iterators take.
This is at first quite surprising, and I often forget that this is a
feature, and, as a bonus, an attractive way to avoid lifetimes in struct
definitions when generics are used for the type that may become a
reference.
DEV-11268
This isn't currently used by anything, and this is collecting, which does
not fit well with the streaming model. AttrList was originally written for
Element parsing, and the isolated attr parser was written for test cases,
before it was fully decided how this system ought to work.
Instead, if AttrList is in fact needed, we can either collect (ideally not)
or implement Extend for AttrList. (Or create TryExtend.)
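If it does come back, the Extend impl might look like this sketch (`Attr`
here is a stand-in, not the actual XIR type):

    #[derive(Debug)]
    pub struct Attr;

    #[derive(Debug, Default)]
    pub struct AttrList(Vec<Attr>);

    impl Extend<Attr> for AttrList {
        fn extend<I: IntoIterator<Item = Attr>>(&mut self, iter: I) {
            self.0.extend(iter);
        }
    }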
DEV-11268
This removes the layer of encapsulation that was hiding Stack, which is the
actual parser. The new layer of encapsulation is parse::Parser, which will
be introduced here soon. Baby steps, so it's clear how this evolves.
DEV-11268
The old Parsed was renamed to ParseStatus to be used by Parser, and Parser
converts it into Parsed, which has the same variants as it did before, save
for the Done variant, since it's not possible for Parser to yield
it.
DEV-11268
This removes Option from ParseState, as mentioned in previous commits.
This is ideal because it not only removes a layer of abstraction, but also
makes the intent very clear; the use of None was too tied to the concept of
an Iterator, which is the concern of Parser, _not_ ParseState.
This is now similar to tree::Parsed, which will help with that refactoring
shortly.
The Done variant is not accessible outside of Parser, since it always
converts it to None (to halt iteration); given that, we should have another
public-facing type, as was also mentioned in a previous commit.
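The relationship, roughly (variant sets simplified):

    /// Yielded by ParseState; internal to the parsing framework.
    enum ParseStatus<O> {
        Incomplete,
        Object(O),
        Done,
    }

    /// Public-facing equivalent yielded by Parser as an Iterator.
    enum Parsed<O> {
        Incomplete,
        Object(O),
    }

    impl<O> ParseStatus<O> {
        /// `Done` halts iteration and so never escapes Parser.
        fn into_parsed(self) -> Option<Parsed<O>> {
            match self {
                Self::Incomplete => Some(Parsed::Incomplete),
                Self::Object(o) => Some(Parsed::Object(o)),
                Self::Done => None,
            }
        }
    }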
DEV-11268
This also renames related types.
See previous commits for more information. In essence, this trait
represents the reification of all parser state. The omission of "r" in the
name ParseState is intentional, since it indicates the state of a current
parse. We'll see whether that naming ends up being too confusing; it's easy
enough to change.
DEV-11268
This just leaves Parser, which is what I started with, but I wasn't sure how
far I was going to take this. I went against my usual judgment in creating
a trait that I may not need, in an attempt to try to reason about the API
that I wanted, because it wasn't yet clear at the time whether the Parser
ought to be generic.
Since then (as detailed in the last commit), this has become more of a
coordinator/mediator, and the real parser is actually TokenStreamState,
which will be renamed shortly.
DEV-11268
This begins to integrate the isolated AttrParser. The next step will be
integrating it into the larger XIRT parser.
There's been considerable delay in getting this committed, because I went
through quite the struggle with myself trying to determine what balance I
want to strike between Rust's type system; convenience with parser
combinators; iterators; and various other abstractions. I ended up being
confounded by trying to maintain the current XmloReader abstraction, which
is fundamentally incompatible with the way the new parsing system
works (streaming iterators that do not collect or perform heap
allocations).
There'll be more information on this to come, but there are certain things
that will be changing.
There are a couple problems highlighted by this commit (not in code, but
conceptually):
1. Introducing Option here for the TokenParserState doesn't feel right, in
the sense that the abstraction is inappropriate. We should perhaps
introduce a new variant Parsed::Done or something to indicate intent,
rather than leaving the reader to have to read about what None actually
means.
2. This turns Parsed into more of a statement influencing control
flow/logic, and so should be encapsulated, with an external equivalent
of Parsed that omits variants that ought to remain encapsulated.
3. The name TokenStreamState is accurate, but these really are the actual
parsers; TokenStreamParser is more of a coordinator, and helps to abstract away
some of the common logic so lower-level parsers do not have to worry
about it. But calling it TokenStreamState is both a bit
confusing and is an understatement---it _does_ hold the state, but it
also holds the current parsing stack in its variants.
Another thing that is not yet entirely clear is whether this AttrParser
ought to care about detection of duplicate attributes, or if that should be
done in a separate parser, perhaps even at the XIR level. The same can be
said for checking for balanced tags. By pushing it to TokenStream in XIR,
we would get a guaranteed check regardless of what parsers are used, which
is attractive because it reduces the (almost certain-to-otherwise-occur)
risk that individual parsers will not sufficiently check for semantically
valid XML. But it does _potentially_ make error recovery more
complicated. But at the same time, perhaps more specific parsers ought not
care about recovery at that level.
Anyway, point being, more to come, but I am disappointed how much time I'm
spending considering parsing, given that there are so many things I need to
move onto. I just want this done right and in a way that feels like it's
working well with Rust while it's all in working memory, otherwise it's
going to be a significant effort to get back into.
DEV-11268
This stores the last seen Span and uses that when reporting EOF, so that the
user will be able to be notified of where exactly the problem occurred.
When I get into creating combinators, it'll be the responsibility of those
combinators to ensure that any None return value will be supplemented by its
own last span.
DEV-11268
This permits retrieving a Span from any Token variant. To support this,
rather than having this return an Option, Token::AttrEnd was augmented with
a Span; this results in a much simpler and friendlier API.
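The result is a total `span` accessor; sketched here with an abbreviated
variant set (the real Token uses interned names rather than these
stand-ins):

    #[derive(Debug, Clone, Copy)]
    pub struct Span;

    pub enum Token {
        Open(&'static str, Span),
        Close(Option<&'static str>, Span),
        AttrEnd(Span),
    }

    impl Token {
        /// Span associated with this token.
        ///
        /// This is total:
        ///   every variant,
        ///   including AttrEnd,
        ///   now carries a Span.
        pub fn span(&self) -> Span {
            use Token::*;

            match self {
                Open(_, span) | Close(_, span) | AttrEnd(span) => *span,
            }
        }
    }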
DEV-11268
This removes XIRT support for attribute fragments. The reason is that
this is a write-only operation---fragments are used to concatenate
SymbolIds without reallocation, which can only happen if we are generating
XIR internally.
Given that this cannot happen during read, it was a mistake to complicate
the parsers. But it makes sense why I did originally, given that the XIRT
parser was written for simplifying test cases. But now that we want parsers
for real, and are writing production-quality parsers, this extra complexity
is very undesirable.
As a bonus, we also avoid any potential for heap allocations related to
attributes. Granted, they didn't _really_ exist to begin with, but it was
part of XIRT, and was ugly.
DEV-11268
The XIRT parser was initially written for test cases, so that unit tests
could more easily assert on generated token streams (XIR). While it was
planned, it wasn't clear what the eventual needs would be, which were
expected to differ. Indeed, loading everything into a generic tree
representation in memory is not appropriate---we should prefer streaming and
avoiding heap allocations when they're not necessary, and we should parse
into an IR rather than a generic format, which ensures that the data follow
a proper grammar and are semantically valid.
When parsing attributes in an isolated context became necessary for the
aforementioned task, the state machine of the XIRT parser was modified to
accommodate. The opposite approach should have been taken---instead of
adding complexity and special cases to the parser, and from a complex parser
extracting a simple one (an attribute parser), we should be composing the
larger (full XIRT) parser from smaller ones (e.g. attribute, child
elements).
A combinator, when used in a functional sense, refers not to combinatory
logic but to the composition of more complex systems from smaller ones. The
changes made as part of this commit begin to work toward combinators, though
it's not necessarily evident yet (to you, the reader) how that'll work,
since the code for it hasn't yet been written; this commit is simply
getting my work thus far introduced so I can do some light refactoring before
continuing on it.
TAMER does not aim to introduce a parser combinator framework in its usual
sense---it favors, instead, striking a proper balance with Rust's type
system that permits the convenience of combinators only in situations where
they are needed, to avoid having to write new parser
boilerplate. Specifically:
1. Rust's type system should be used as combinators, so that parsers are
automatically constructed from the type definition.
2. Primitive parsers are written as explicit automata, not as primitive
combinators.
3. Parsing should directly produce IRs as a lowering operation below XIRT,
rather than producing XIRT itself. That is, target IRs should consume
XIRT and parse themselves immediately, during streaming.
In the future, if more combinators are needed, they will be added; maybe
this will eventually evolve into a more generic parser combinator framework
for TAME, but that is certainly a waste of time right now. And, to be
honest, I'm hoping that won't be necessary.
There are a number of reasons for this, where the benefits do not make up
for the losses.
First: this is actually invoking cargo. Not only is this not necessary, but
it's not desirable: cargo by default hits the network and does all sorts of
other stuff, when all we want to do is invoke the executable. So the tests
aren't really testing the right thing in that sense. See the previous
commit for more information.
The way it invokes cargo is different than the way the Makefile invokes
cargo, so on my system, it's actually invoking a _different cargo_! This is
causing problems, in particular with lock files, which causes my tests to
fail.
Importantly, this also removes a _lot_ of dependencies, which removes a lot
of supply chain risk and a lot of code to audit. This provides
significant security benefits, especially given that what was being tested
was rather small, and could be done in a shell script.
TAMER will receive significant system testing later on. But for now, none
of this was worth it.
Further audits of dependencies will come later on. I've always been fairly
insistent on keeping the dependency graph small and auditable, but recent
supply chain attacks have given me a better way to rationalize the security
risk. Further, I'm the only one on this project right now.
Cargo's default behavior is unfortunately to issue network calls each time
it is invoked in order to check for dependency updates. This is not only
bad for reproducibility and privacy, but it's also a concern for supply
chain attacks, since most developers are unaware that this is occurring.
Instead, we pin to the lockfile. Installing dependencies can be done with
`cargo fetch` and updating dependencies must be explicitly done by the
developer, with the lockfile updated.
Well, parse to the extent that it was being parsed before, anyway.
The core of this change demonstrates how well TAMER's abstractions work
together. (As long as you have an e.g. LSP to help you make sense of all of
the inference, I suppose.)
Token::Open(QN_LV_PACKAGE | QN_PACKAGE, _) => {
    return Ok(XmloEvent::Package(
        attr_parser_from(&mut self.reader)
            .try_collect_ok()??,
    ));
}
This finally makes use of `attr_parser_from` and `try_collect_ok`. All of
the types are inferred---from the iterator transformations, to the error
conversions, to the destination PackageAttrs type.
DEV-10863
This was forgotten when the attribute parser was introduced, and led to the
parser continuing to the token following AttrEnd, which properly caused a
failure given that the parser was in the Done state.
There is a future task I have in my backlog to properly address the Done
state, but this is sufficient for now.
To maintain a proper abstraction, this cannot be the responsibility of the
caller; most callers should not know that fragments exist, let alone how to
handle them.
Like previous commits, this replaces the explicit escaping context with the
convention that all values retrieved from `xir` are unescaped on read and
escaped on write.
Comments are a notable TODO, since we must escape only `--`.
CData is also an issue. I had _expected_ to use it as a means to avoid
unescaping fragments, but I had forgotten that quick_xml hard-codes escaping
on read, so that it can re-use BytesStart! That is terribly unfortunate,
and may result in us having to re-implement our own read method in the
future to avoid this nonsense. So I'm just leaving it as a TODO for now.
DEV-11081
This adds a constant `ST_COUNT` representing the number of statically
allocated symbols, and uses that to estimate an initial capacity for the
`CachingEscaper`.
This is just a guess (and is certainly too low), but we can adjust later on
after profiling, if it ever comes up.
This rewrites a good portion of the previous commit.
Rather than explicitly storing whether a given string has been escaped, we
can instead assume that all SymbolIds leaving or entering XIR are unescaped,
because there is no reason for any other part of the system to deal with
such details of XML documents.
Given that, we need only unescape on read and escape on write. This is
customary, so why didn't I do that to begin with?
The previous commit outlines the reason, mainly being an optimization for
the echo writer that is upcoming. However, this solution will end up being
better---it's not implemented yet, but we can have a caching layer, such
that the Escaper records a mapping between escaped and unescaped SymbolIds
to avoid work the next time around. If we share the Escaper between _all_
readers and the writer, the result is that
1. Duplicate strings between source files and object files (many of which
are read by both the linker and compiler) avoid re-unescaping; and
2. Writers can use this cache to avoid re-escaping when we've already seen
the escaped variant of the string during read.
The alternative would be a global cache, like the internment system, but I
did not find that to be appropriate here, since this is far less
fundamental and is much easier to compose.
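To make that concrete, here's a minimal sketch of the caching layer (names
and types are hypothetical stand-ins; the real Escaper would perform actual
escaping and interning on a cache miss):

use std::collections::HashMap;

type SymbolId = u32; // stand-in for TAMER's interned symbol

#[derive(Default)]
struct CachingEscaper {
    to_escaped: HashMap<SymbolId, SymbolId>,
    to_unescaped: HashMap<SymbolId, SymbolId>,
}

impl CachingEscaper {
    fn escape(&mut self, unescaped: SymbolId) -> SymbolId {
        // On a miss, escape and intern for real; this is a placeholder.
        let escaped = *self
            .to_escaped
            .entry(unescaped)
            .or_insert(unescaped /* placeholder escape */);
        // Record the reverse mapping so a later read of the escaped
        // form avoids re-unescaping.
        self.to_unescaped.entry(escaped).or_insert(unescaped);
        escaped
    }
}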
DEV-11081
I'm not fond of this implementation, which is why it's not fully
completed. I wanted to commit this for future reference, and take the
opportunity to explain why I don't like it.
First: this task started as an idea to implement a third variant to
AttrValue and friends that indicates that a value is fixed, in the sense of
a fixed-point function: escaped or unescaped, its value is the same. This
would allow us to skip wasteful escape/unescape operations.
In doing so, it became obvious that there's no need to leak this information
through the API, and indeed, no part of the system should care. When we
read XML, it should be unescaped, and when we write, it should be
escaped. The reason that this didn't quite happen to begin with was an
optimization: I'll be creating an echo writer in place of the current
filesystem-based copy in tamec shortly, and this would allow streaming XIR
directly from the reader to the writer without any unescaping or
re-escaping.
When we unescape, we know the value that it came from, so we could simply
store both symbols---they're 32-bit, so it results in a nicely compressed
64-bit value, so it's essentially cost-free, as long as we accept the
expense of internment. This is `XirString`. Then, when we want to escape
or unescape, we first check to see whether a symbol already exists and, if
so, use it.
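In sketch form (field layout illustrative), that pairing is just:

// Two 32-bit SymbolIds pack into a single 64-bit value.
struct XirString {
    unescaped: u32, // SymbolId of the unescaped value
    escaped: u32,   // SymbolId of the escaped value
}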
While this works well for echoing streams, it won't work all that well in
practice: the unescaped SymbolId will be taken and the XirString discarded,
since nothing after XIR should be coupled with it. Then, when we later
construct a XIR stream for writing, XirString will no longer be available
and our previously known escape is lost, so the writer will have to
re-escape.
Further, if we look at XirString's generic for the XirStringEscaper---it
uses phantom, which hints that maybe it's not in the best place. Indeed,
I've already acknowledged that only a reader unescapes and only a writer
escapes, and that the rest of the system works with normal (unescaped)
values, so only readers and writers should be part of this process. I also
already acknowledged that XirString would be lost and only the unescaped
SymbolId would be used.
So what's the point of XirString, then, if it won't be a useful optimization
beyond the temporary echo writer?
Instead, we can take the XirStringWriter and implement two caches on that:
mapping SymbolId from escaped->unescaped and vice-versa. These can be
simple vectors; since SymbolId is a 32-bit value, we will not have much
wasted space for symbols that never get read or written. We could even
optimize for preinterned symbols using markers, though I'll probably not do
so, and I'll explain why later.
If we do _that_, we get even _better_ optimizations through caching that
_will_ apply in the general case (so, not just for echo), and we're able to
ditch XirString entirely and simply use a SymbolId. This makes for a much
more friendly API that isn't leaking implementation details, though it
_does_ put an onus on the caller to pass the encoder to both the reader and
the writer, _if_ it wants to take advantage of a cache. But that burden is
not significant (and is, again, optional if we don't want it).
So, that'll be the next step.
This is intended to alleviate what will be some common boilerplate because
of the Rust compiler error described therein.
This will evolve over time, I'm sure.
DEV-10863
This provides convenience methods atop of the already-existing
functions. These are a bit more ergonomic since they (a) remove a variable
and its generics and (b) are conveniently suggested via LSP (with
e.g. rust-analyzer) if the iterator is of the right type, even if the trait
is not yet imported. This should help with discoverability as well.
These traits augment Rust's built-in traits to handle failure scenarios,
which will allow us to encapsulate lowering logic into discrete,
self-parsing units that enforce e.g. schemas (the example alludes to my
intentions).
The previous implementation took ownership over the provided iterator, which
was an oversight, considering that this is intended to be used in contexts
where doing so is not possible. This is a good example of how isolated
test cases don't necessarily paint the correct picture.
`scan` takes owned values, so this instead uses the same parsing method as
`parse_attrs`, but using a `FromFn` iterator to avoid having to create a
whole new iterator type. This will work well so long as we don't need to
store the type returned by this (while also wanting to avoid boxing).
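A sketch of that approach (token and attribute types simplified and
hypothetical):

#[derive(Debug)]
struct Attr(String, String);

// Borrow the underlying token stream rather than taking ownership,
// so the caller can keep using it after attribute parsing ends.
fn attr_parser_from<'a, I: Iterator<Item = (String, String)> + 'a>(
    toks: &'a mut I,
) -> impl Iterator<Item = Attr> + 'a {
    std::iter::from_fn(move || toks.next().map(|(k, v)| Attr(k, v)))
}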
DEV-11062
See the previous commit. There is no sense in some common "IR" namespace,
since those IRs should live close to whatever system whose data they
represent.
In the case of these, they are general IRs that can apply to many different
parts of the system. If that proves to be a false statement, they'll be
moved.
DEV-10863
Calling it "legacyir" is just confusing. The original hope, when beginning
TAMER, was that I'd be able to use a new object format in the near future to
help speed up the compilation process. But that's far from our list of
priorities now, and so seeing "legacy" all over the place is really
confusing considering that it implies that perhaps it shouldn't be used for
new code.
This helps to clear up that cognitive dissonance by remaining neutral on the
topic. And the reality is that it won't be "legacy" for some time.
DEV-10863
The IRs really ought to live where they are owned, especially given that
"IR" is so generic that it makes no sense for there to be a single location
for them; they're just data structures coupled with different phases of
compilation.
This will be renamed next commit; see that for details.
This also removes some documentation describing the lowering process,
because it's undergone a number of changes and needs to be accurately
re-summarized in another location. That will come at a later time after the
work is further along so that I don't have to keep spending the time
rewriting it.
DEV-10863
This was previously gated behind the negation of the wip-xmlo-xir-reader
flag, which meant that it was not being compiled or picked up by LSP. Both
of those things are inconvenient and far from ideal.
DEV-10863
This allows for the lazy parsing of attributes, and makes the necessary
changes to the parser to be able to do so safely without getting into a bad
context.
When XIRT was originally conceived, this concept existed somewhat, but it
was done in a way that would allow the parser to accept invalid input. This
avoids that problem.
This also introduces the concept of "Done", primarily because we had to for
the AttrEnd token. This will evolve in following commit(s), which will
allow carrying out the important check of ensuring that the parser has ended
parsing in a valid accepting state (in terms of a state machine).
DEV-11062
This produces an `AttrList` independent from a containing
`Element`. Upcoming changes may further permit the parser to yield smaller
components that are not part of an aggregate.
DEV-10863
This allows Rust to carry out its exhaustiveness check for when we add new
tokens. It further ensures that we understand what we missed, or chose not
to handle.
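For example (tokens illustrative): with no wildcard arm, adding a variant
becomes a compile-time error rather than silently unhandled behavior:

enum Token {
    Open,
    Close,
    Text,
}

fn describe(tok: Token) -> &'static str {
    match tok {
        Token::Open => "open",
        Token::Close => "close",
        // No `_` arm: adding a new Token variant fails to compile
        // here until it is explicitly handled (or explicitly ignored).
        Token::Text => "text",
    }
}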
DEV-10863
This allows AttrList not only to be lazily initialized (which is less of a
problem at the moment with Vec, but may become one in the future), but also
leaves a space open for attributes to be added _after_ having been
parsed. It further leaves room to _take_ attributes from their `Element`.
This is important because the next commit will re-introduce the ability to
parse attributes independently, allowing us to put the parser in a state
where we can parse AttrList without an Element context. To re-use that
parsing under an Element context, we can simply attach an AttrList after it
has been parsed.
Option adds no additional size cost to Vec, so we get this for free (except
for the tiny change that initializes the attribute list when we try to push
to it).
I also think this reads better ("attrs: None"). Though it makes the API
slightly more of a pain to work with.
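The "for free" claim is easy to verify, since `Vec`'s non-null pointer
gives `Option` a niche to occupy:

use std::mem::size_of;

fn main() {
    // Option<Vec<T>> occupies no more space than Vec<T> itself.
    assert_eq!(size_of::<Vec<u32>>(), size_of::<Option<Vec<u32>>>());
}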
DEV-10863
The purpose of this token is to implement a lazy streaming attribute
collection operation without a token of lookahead, which would complicate
parsing or require that a TokenStream provide a `peek` method.
This is only required for readers to produce, since readers will be feeding
data to parsers. I have the writer ignoring it. If you're looking back at
this commit, the question is whether this was a bad idea: it introduces
inconsistencies into the token stream depending on the context, which can be
confusing and error-prone.
The intent is to have the parser throw an explicit error if the new token is
missing in the context in which it is required, which will safely handle the
issue, but does defer it to runtime. But only readers need auditing, and
there's only one XIR reader at the moment.
DEV-10863
There isn't a whole lot here, but there is additional work needed in various
places to support upcoming changes, and so I want to get this committed to
ease the cognitive burden of what I have thus far. And to stop stashing. We
have a feature flag for a reason.
DEV-10863
This macro was previously using the path of wherever the template expanded
into, which I found to be unexpected considering that I thought the macros
were hygienic and the names bound to the environment in which they were
defined.
In any case, this solves the problem in all cases.
DEV-10863
This was forgotten in the previous commit and exists simply to ensure that
the TripIter doesn't add any significant overhead. The tests are
a handful of nanoseconds apart, on my machine.
See the documentation in this commit for more information.
This is pretty significant, in that it's been a long-standing question for
me how I'd like to join together `Result` iterators without having
unnecessarily complex APIs, and also allow for error recovery. This solves
both of those problems.
It should be noted, however, that this does not yet explicitly implement
error recovery, beyond being able to observe the failure as the result of
the provided callback function. Proper recovery will be implemented once
there's a use-case.
DEV-11006
This moves the Iterator impl and From<B> back into `quickxml`. The type of
the new reader is different, taking an iterator instead of a BufRead. This
will allow us to easily mock for unit tests, without the clusterfuckery that
has ensued previously with quick-xml mocking.
DEV-10863
The original plan was to modify the existing reader to use the new
XmlXirReader, but that's going to be a lot of ongoing uncommitted work, with
both tests and implementation. The better option seems to be to reimplement
it, since so many things are changing.
This flag will be short-lived and removed as soon as the implementation is
complete.
DEV-10863
Comments re-use Text, but they are _not_ escaped, so we need to take care
with the type to ensure that, if the value were ever used with a
Token::Text, that we don't end up injecting XML.
quick_xml provides us the value escaped, so we can just handle this the same
way as Text for now.
In the future, we may want to distinguish between the two so that we can
reconstruct an identical XML document, but at the moment CData isn't used at
all in TAME sources or outputs, and so I'm not going to worry about it for
now.
DEV-10863
It's nice being able to breeze through changes, since that's been a pretty
rare thing so far, given all the foundational work that has been needed.
This should get us pretty damn close to being able to parse the `xmlo` files
for the linker, if we're not there already.
DEV-10863
This is quick-and-dirty; refactoring can be done later on. This is also
intended to demonstrate the ease with which additional events can be
added---the hard work is done.
This is an initial working concept for the reader which handles, so far,
just a single attribute. But extending it to completion will not be all
that much more work.
This does not have namespace support---that will be added later as part of
XIRT, which is responsible for semantic analysis. This allows XIR to stay
wonderfully simple, and won't have any impact on the writer (which expects
that QNames are unresolved and contain the namespace prefix to be written).
This is the safe version of the existing intern_utf8_unchecked, and exists
as a performance optimization.
We're about to introduce a XIR reader, which is going to intern a _lot_ of
duplicate strings, since it will intern node and attribute names as
well. Given that, we do not want to spend a lot of time performing UTF-8
checks that have already been performed.
We know that, if an intern is in the pool, it's either already UTF-8 or that
check was bypassed when it was initially interned. Therefore, if we find an
existing symbol, that can be returned without having to perform any
check. Otherwise, we intern as we usually would after attempting to convert
the byte slice into a string.
This allows us to continue to have good performance for interning without
sacrificing safety for strings.
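A minimal sketch of that fast path (structure hypothetical; TAMER's
interner is arena-backed and differs in detail):

use std::collections::HashMap;
use std::str::Utf8Error;

#[derive(Default)]
struct Interner {
    map: HashMap<Vec<u8>, u32>,
    next: u32,
}

impl Interner {
    fn intern_utf8(&mut self, bytes: &[u8]) -> Result<u32, Utf8Error> {
        // Fast path: an existing intern was either validated when it
        // was first interned or knowingly bypassed, so skip the check.
        if let Some(&sym) = self.map.get(bytes) {
            return Ok(sym);
        }

        std::str::from_utf8(bytes)?; // validate only new interns
        let sym = self.next;
        self.next += 1;
        self.map.insert(bytes.to_vec(), sym);
        Ok(sym)
    }
}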
The intent of this is to demonstrate how significant an impact checking
byte arrays for UTF-8 validity will have, since the existing tests do not
make that clear (a static string in Rust is always valid UTF-8).
These benchmarks show that the cost when re-interning an already existing
value is +50%.
This is important, because the new reader will be interning a _lot_ of
duplicate strings, whereas the existing reader operates on byte arrays
without interning unless necessary. And, when it does, it does so
unchecked. But we'd rather not do that, since we cannot guarantee that
those XML files are valid (and not modified in some way).
Upcoming commits will have what I think is a reasonable compromise to this,
based on the fact that we'll be encountering _many_ duplicate strings in
parsing XML files.
DEV-10920
This provides a child `raw` module that exposes a SymbolId representing the
inner value of each of the static newtypes. This is needed in situations
where the type must match and the type of the static symbol is not
important.
In particular, when comparing against runtime-allocated symbols in `match`
expressions.
It is also worth noting that this commit managed to hit a bug in Rustc that
was fixed on 10/1/2021. We use nightly, and judging from bug reports, it
doesn't seem that this occurred in stable.
- https://github.com/rust-lang/rust/issues/89393
- 5ab1245303
- Original issue: https://github.com/rust-lang/rust/issues/72476
The error was:
compiler/rustc_mir_build/src/thir/pattern/deconstruct_pat.rs:1191:22:
Unexpected type for `Single` constructor: <u32 as sym::symbol::SymbolIndexSize>::NonZero
thread 'rustc' panicked at 'Box<dyn Any>', compiler/rustc_errors/src/lib.rs:1146:9
This occurred because we were trying to use `SymbolId` as the type, which
uses a projected type as its inner value: `SymbolId<Ix: SymbolIndexSize>(Ix::NonZero)`.
This was not a problem with the static newtypes because their inner type was
simply `SymbolId<Ix>`, which is not projected.
This is one of the risks of using nightly.
But, the point is: if you receive this error, upgrade your toolchain.
Tbh, I was unaware that this was supported by tuple variants until reading
over the Rustc source code for something. (Which I had previously read, but
I must have missed it.)
This is more proper, in the sense that in a lot of cases we care not only
about how many values a tuple has: if we explicitly match on them using
`_`, then any time we modify the number of values, it would _break_ any code
doing so. Using this method, we improve maintainability by not causing
breakages under those circumstances.
But, consequently, it's important that we use this only when we _really_
don't care and don't want to be notified by the compiler.
I did not use `..` as a prefix, even where supported, because the intent is
to append additional information to tuples. Consequently, I also used `..`
in places where no additional fields currently exist, since they may in the
future (e.g. introducing `Span` for `IdentObject`).
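A small illustration (variant hypothetical) of the suffix style:

enum Object {
    // A Span may be appended to this variant in the future.
    Ident(String),
}

fn name(obj: &Object) -> &str {
    match obj {
        // `..` tolerates any fields appended after `name`, so this
        // match keeps compiling if the variant grows.
        Object::Ident(name, ..) => name,
    }
}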
In particular, `name` needn't return an `Option`. `fragment` also returns a
copy, since it's just a `SymbolId`. (It really ought to be a newtype rather
than an alias, but we'll worry about that some other time.)
These changes allow us to remove some runtime panics.
DEV-10859
This moves the logic that sorts identifiers into sections into Sections
itself, and introduces XmleSections to allow for mocking for testing.
This then allows us to narrow the types significantly, eliminating some
runtime checks. The types can be narrowed further, but I'll be limiting the
work I'll be doing now; this'll be inevitably addressed as we use the ASG
for the compiler.
This also handles moving Sections tests, which was a TODO from the previous
commit.
DEV-10859
This is the appropriate place to be, now that we've begun narrowing the
types. We'll be able to do so further; this is just the first step.
This does not yet move the tests, but the code is still tested because it's
tightly coupled with `sort`. Those will move in the next commit(s).
DEV-10859
xmle sections will only ever contain an object of one type, so there is no
use in making this generic.
I think the original plan was to have this represent, generically, sections
of some object file (like ELF), but doing so would require a significant
redesign anyway, so it makes no sense. This is easier to reason about.
DEV-10859
This has always been a lowering operation, but it was not phrased in terms
of it, which made the process a bit more confusing to understand.
The implementation hasn't changed, but this is an incremental refactoring
and so exposes BaseAsg and its `graph` field temporarily.
DEV-10859
Sections, as written, are specific to xmle files.
I think the intent originally was to have this be more generic, but that
doesn't really make sense.
By explicitly coupling it with `xmle` files, that will allow us to turn this
into a proper lowering operation with its own validations that will allow
`xmle::xir` to do its job without having to validate anything itself.
This outputs enough information to be a little bit useful in the event of an
error. In the future, we'll want to provide a (likely non-Display)
implementation that provides line number and source file context with
the problem characters indicated, like Rust.
This is a significant departure from my original plans---this makes it
_easy_ to display symbol values, despite me not wanting that to occur unless
absolutely necessary.
The reality is, based on the design of the system, they will only occur in
these situations:
1. Writing to files;
2. Displaying errors;
3. Tests; or
4. People not following the design of the system.
The fourth one is the most risky as people begin to contribute in the
future, but the reality is that those can be fixed as they are encountered,
since if they're not showing up in a profiler, then they must not be causing
much of a problem.
This removes `SymbolStr` in favor of, simply, `&'static str`.
The abstraction provided no additional safety since the slice was trivially
extracted (and commonly, in practice), and was inconvenient to work with.
This is part of a process of relaxing lookups so that symbols can be
conveniently displayed in errors; rather than trying to prevent the
developer from doing something bad, we'll just rely on conventions, hope
that it doesn't happen, and if it does, address it either at that time or
when it shows up in the profiler.
The docs still need to be improved, but they can be touched as we go.
This concludes the initial development of XIR. That was much more involved
than I had originally intended, but the result is good.
DEV-10561
This generalizes it a bit and provides tests, which was always the intent;
the existing code was POC to determine if this could be done without
performance degradation (see that commit for more information).
The intent is to support the composition and decomposition of spans such
that (A, B) is as documented here. This only performs the trivial case for
the sake of providing a convenient API when the developer would otherwise
just type (S, S).
This is intended to represent the sections written to the final xmle file,
and there was unnecessary complexity in separating everything.
By reducing this IR further, we can begin to constrain its types to
eliminate some of the runtime panics and error checking we have/had in the
writer.
The new writer has reached parity with the old, with the exception of some
edge case explicit error handling that should never occur (which will be
added), and cleanup/docs.
Removing this flag now allows me to perform that cleanup without having to
worry about updating the now-old implementation.
I ran `tameld` with the new writer against our production system with
numerous programs and a significant number of test cases, and diff'd the old
and new xmle files, and everything looks good.
This is a significant milestone, in the sense that it is the culmination of
the past month or so of work to prove that an Iterator-based XIR will be
viable for the system.
This barely had any impact on performance compared with the previous
commit's profiling. This performs at least as well as the quick-xml
based writer. In isolated benchmarks, it performs better, but in the real
world, the linker spends most of its time reading xmlo files, and so minor
differences in writing do not have a significant overall impact.
With that said, a lot of cleanup and documentation is still needed. That is
the subject of the upcoming commits, before this writer can be finalized.
The previous iterators had to be used in a certain order because they mixed
concerns for the sake of performance. This attempts to chain even more
iterators to see how it may perform.
To be clear: this will be cleaned up. This was just an experiment.
Here were profiles on the average of 50 runs of linking our largest program:
Baseline, pre-XIR (with fragments removed from output) 0.8082s
XIR writer, pre-ElemWrap, no #[inline] 0.7844s
XIR writer, ElemWrap, no #[inline] 0.7918s
XIR writer, ElemWrap, inlines in obj::xmle::xir 0.7892s
XIR writer, ElemWrap, inlines in obj::xmle::xir and ir::asg::section 0.7858s
XIR writer, ElemWrap, inline in only ir::asg::section 0.781s
Pre-ElemWrap, inlines in ir::asg::section 0.7772s
These profiles are difficult, because they hit the filesystem so much. I
write to /dev/null, but it reads 100s of xmlo files from disk.
It's clear that the impact is fairly modest and within a margin of error; as
such, I will continue down the path of writing code that's easier to grok
and maintain, since not doing so would be a micro-optimization relative to
the concerns of the rest of the system at this point.
But the purpose of all of this work was to determine whether an
iterator-based XIR would be viable. It seems to be competitive. I'll
finish up the writer reimplementation and move on.
This contains some awkward coupling for opening and closing tags to reduce
the complexity of the `Iterator` types that must be manually
specified. That may be addressed shortly.
This was creating a heap-allocated `Vec` for each map symbol despite not
actually needing it. We do have multiple `from`s for return map values.
But by the time we may want this type of thing, we'll have a different IR
for it anyway.
See the docs for a much deeper discussion. In summary: traits do not
support static methods, and this is the workaround, which relies on unstable
nightly constant function features.
This implementation is tested using `qname_const!`, and will be utilized
with a new static type in a following commit.
This is to support two things:
1. Early switch to 2021 Edition, which is stable Oct 21; and
2. To make use of unstable const features.
The rationale is that switching to nightly does not really have any
significant downside for us, given that TAMER is used only by us and
the only risk is that unstable features may change a bit, which can be
mitigated with certain precautions.
The rationale for each unstable feature will be documented as they are used,
including documentation on what would be required to remove it and what
functionality would be lost / need to change in doing so.
This is far from fully documented; it's just a start. I'll document fully
once the implementation is done, to ensure I don't waste time documenting
things that may change.
These are getting large and messy.
And I now notice that I never completed the header test after
prototyping. Shame on me.
Also, errata from the previous commit message: the diffs are identical
_except for attribute escaping_ that is unnecessary; we're outputting data
read directly from existing XML files (output by Saxon), so characters are
already escaped as needed.
DEV-10561
The `l:dep` section of the `xmle` file, after formatting (since XIR writes
without newlines and indentation), is now identical to the existing xmle
writer. I can now move on to the other sections.
Note that the attribute movement in this commit is simply to get the diff to
properly align. Once the current xmle writer is removed, I'll organize them
a bit more sensibly.
`obj::xmle::xir` also needs documentation, now that it's shown to be viable.
The new xmle writer was having to intern before write, which did not make
sense.
This continues with consistently using symbols throughout the system, and
is a smaller size than `String` as a bonus.
`IdentKind` needs to be written to `xmle` files and displayed in error
messages. String slices were used when quick-xml was used for writing,
which will be going away with the new writer.
This has been a long time coming, and has been repeatedly stashed as other
parts of the system have evolved to support it. The introduction of the XIR
tree was to write tests for this (which are sloppy atm).
This currently writes out the `xmle` header and _most_ of the `l:dep`
section; it's missing the object-type-specific attributes. There is,
relatively speaking, not much more work to do here.
The feature flag `wip-xir-xmle-writer` was introduced to toggle this system
in place of `XmleWriter`. Initial benchmarks show that it will be
competitive with the quick-xml-based writer, but remember that is not the
goal: the purpose of this is to test XIR in a production system before we
continue to implement it for a frontend, and to refactor so that we do not
have multiple implementations writing XML files (once we echo the source XML
files).
I'm excited to get this done with so that I can move on. This has been
rather exhausting.
The 16-bit interner at present will be used only for span contexts. In the
future, this interner may become specialized specifically for that, but for
now let's just re-use what we already have so that I can move on.
DEV-10733
I want to make it clear in the assertion that the problem could be caused by
duplicate strings. We do not sort by string, because in part we may in the
future want to group certain symbols together in some arbitrary way so we
can compare ranges (using the markers).
If that doesn't end up happening, it may be better to just sort by string
to obviate the problem.
It's really awkward not having them caps, when not only are constants
expected to be, but also that we cannot maintain consistency between the
string and the identifier name in even the simplest of cases.
(We could use `r#`, but that's too cumbersome.)
`StaticSymbolId` was created before the more specific types, which render it
unnecessary. If we need a generic type, it can be re-introduced, but using
`static_symbol_newtypes!`.
This is the interner that is intended to be used with the majority of the
system; the 16-bit interner is left around for the moment, but will likely
later become specialized.
The writing was on the wall for this, the same as for the `'i` interner
lifetime that came before it. It was too much of a maintenance burden trying to
accommodate both 16-bit and 32-bit symbols generically.
There is a situation where we do still want 16-bit symbols---the
`Span`. Therefore, I have left generic support for symbol sizes, as well as
the different global interners, but `SymbolId` now defaults to 32-bit, as
does `Asg`. Further, the size parameter has been removed from the rest of
the code, with the exception of `Span`.
This cleans things up quite a bit, and is much nicer to work with. If we
want 16-bit symbols in the future for packing to increase CPU cache
performance, we can handle that situation then in that specific case; it's a
premature optimization that's not at all worth the effort here.
We'll see how the syntax evolves over time. It's not ideal to have to
specify the type, rather than having the compiler infer it, but I don't much
feel like getting into my first procedural macro right now, so we'll stick
with this approach for the time being.
This will set the stage to be able to safely e.g. create QNames statically
at compile-time and would allow us to make any attempts to bypass it
unsafe.
Previously, we were allocating only u32 versions of `SymbolId` for the
statically allocated symbols. This introduces a new symbol type with a very
small datatype (8 bits) that is able to cast into any `SymbolId`. This is
explained in the docs.
We'll be taking this typing further in future commits so that static symbols
are better-suited for compile-time guarantees for static newtype
construction.
DEV-10710
This is the beginning of static symbols, which are becoming increasingly
necessary as it's quite a pain to have to deal with interning static strings
any place they're used.
It's _more_ of a pain to do that in conjunction with newtypes (e.g. `QName`,
`AttValue`, etc) that make use of `SymbolId`; this will allow us to
construct _those_ statically as well, and additional work to support that
will be coming up.
DEV-10701
These were using GiB of memory, which is ...unnecessary.
I reduced the iteration count significantly, but it was still wasting a lot
of time and memory and needed `with_capacity` to reduce the number of copies
after reallocation.
It is not typical that a buffer would contain this much information.
This broke when I removed `SelfClose`. I used to run
`make all fmt check bench` before every push, but it takes a while to run,
in part because it uses nightly and has to recompile too.
But it looks like I need to be more diligent again.
This is exactly what I said I was _not_ going to do in the previous commit,
but apparently hacking late at night had me forget the whole reason that
XIRT is being introduced now---unit tests. I'll be emitting a XIR stream
and I need to parse it for convenience in the tests.
So, here's a good start. Next will be some generalizations that are useful
for the tests as well. This is pretty bare, but accomplishes the task.
See docs for more info.
The `tree` module is getting more difficult to navigate. The tests still
remain where they were, since a bunch of concerns are mixed together. Any
tests specific only to this module will be added here.
This is implemented only for the writer, since its use case is to be able to
concatenate strings without copying during writing.
It doesn't really make sense to support this in XIR Tree, since a reader
should never produce this. But if we ever run into this (e.g. due to some
internal processing pipeline), we'll address it then; XIR Tree might have to
do copying, then, but should probably wait until encountering all fragments
before interning. That'd be a distraction right now.
This commit will make more sense once the broader context is committed, but
it's needed for lowering from `Sections` into a XIR stream.
This will also change once we pre-allocate symbols, like rustc, when the
interner is initialized.
This is my first use of the `paste` crate, which is used to generate
identifiers. So this is partly an experiment, and it seems much better than
having to write a proc macro, at least at this point in time. If this code
stays around, it'll probably be generalized further and used elsewhere, but
I'd prefer not to go this route long-term.
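For flavor, a sketch of the sort of generation `paste` enables (macro
hypothetical, not the actual code):

use paste::paste;

macro_rules! static_symbols {
    ($($name:ident),*) => {
        paste! {
            $(
                // Generates e.g. `PACKAGE` from `package`.
                pub const [<$name:upper>]: &str = stringify!($name);
            )*
        }
    };
}

static_symbols!(package, symtable);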
This moves some logic into `ElementStack` (which would be part of `Stack` if
variants were their own types), rather than peering so deeply into its
data.
This correctly retains and restores the parent stack after processing an
attribute for a child element.
This does increase the size of [`Stack`] a bit, but we can evaluate whether
it's too large at a later time. It's currently 832 bits with `Ix=u32`,
which is large, but the question is whether it matters; we'll see as we
begin to use it.
This moves most of the parsing logic into `Stack`, which rightfully owns the
stack manipulation and state transitions. `ParserState` becomes exactly
what it says it is---a management of the persistent state of the parser, and
is also responsible for digesting tokens and dispatching their data to the
proper event.
This approach has a number of benefits over the old design: it's
self-documenting, making the intent clear; and it is easier to reason about
the subset of states (for both humans and Rust) than a large match of
transitions.
This contains a number of TODO items that will be addressed shortly. It
also made obvious that the previous commit was incomplete---it doesn't persist
`pstack` for attributes on child elements! That'll be fixed too.
This modifies the tree parser to handle child elements. It's mostly
proof-of-concept code; the next commit will clean it up a bit so that it's
largely self-documenting.
This removes `SelfClose` and merges it with `Close` by making the first
parameter an `Option`. This isn't really ideal, but it really simplifies
pattern matching, especially for the next commit. I'll have more details
there.
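In sketch form (types simplified to stand-ins), the merged token looks
like:

type QName = String; // stand-ins for the sketch
type Span = usize;

enum Token {
    Open(QName, Span),
    // None indicates a self-closing tag (`/>`); Some(name) is a
    // balanced closing tag (`</name>`).
    Close(Option<QName>, Span),
}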
The primary motivation was lack of stabilization for binding after `@` in
matches, e.g. `Foo(name, ele) | ele @ Element { name, .. }`. It looks like
it's ready, though; maybe next Rust release?
https://github.com/rust-lang/rust/issues/65490
I don't know if I'll revert this change after then. This seems plenty
clear, albeit more verbose.
This introduces parser errors, but does not yet support error recovery; that
problem will be discussed in a commit in the near future, after the writer
is sorted out a bit more.
DEV-10561
The idea, previously, was that parsing could begin at attributes selectively
and be parsed independently. But that's really awkward with `Tree`, since
it effectively allows orphan attributes as children of an
`Element`. Nonsense.
Instead, if we truly only want an attribute list, we can offer a function to
create a parser with an empty `Stack::BuddingElement` that can accumulate
them.
Previously, `parser_from` was a simple wrapper around `parse`; now, this
provides a more convenient API where `next` will yield the next parsed
object.
See docs for much more information and rationale.
These traits are intended to eliminate boilerplate, primarily in tests, in
situations where from/into is not expected to fail.
Given that TAMER must only panic for internal compiler errors, this should
not often be used outside of test cases. Further, there may be better
options in the future (e.g. QNames could be statically compiled rather than
trying to convert at runtime, in this case).
This begins to introduce the XIR tree. I was originally going to wait on
this until after implementing the xmle writer in terms of XIR, but writing
unit tests is too much of a pain on the stream, so now is as good of a time
as any.
This has very limited support so far; it'll be added to as time goes on.
These groups happen to correspond with the sections of the xmle file, which
suggests again that this lives in the wrong place. But I should really have
my focus elsewhere right now, so I don't know if I'll go any further right
now. I guess we'll see as the writer is reimplemented.
`SectionsIter` was introduced to remove that responsibility from xmle
writer, since that's currently being reimplemented using XIR.
The existing iterator has been renamed SectionIter{ator=>} for a more
idiomatic name for iterator structs, and now has a static type rather than
relying on dynamic dispatch. The author of that code wasn't sure how to
handle it otherwise. (Which is understandable, since we were both still
getting acquainted with Rust.) There's no notable change in performance in
my benchmarking.
This abstraction is a bit awkward, in that it's named for object file
sections, but they aren't. Further, it's coupled with the ASG via
`SortableAsg` and perhaps should be generalized into a sorting routine that
takes a function for sorting, so that `Sections` can be moved into xmle's
packages.
The return value has no meaningful side-effects at all; the write operation
failing isn't worth pointing out, since it has to be used regardless.
The normal `write` does have useful side-effects, of course.
This change was primarily intended to clean up unit tests. Since it
allocates and returns a new buffer, I do not expect this to have much use
within TAMER itself in the near future. Maybe in later tooling.
If this is abused, person from the future: add `#[cfg(test)]` to its
definition.
I decided not to do this in a previous commit because I had documented
"NodeStream" elsewhere, so I'd like it to be in the Git history to
understand its evolution.
This never was a "Node" stream beyond the initial concept phase, because it
represents tokens that aren't themselves nodes. It is intended to generate
XML nodes, but may need to accommodate non-nodes (e.g. XML declarations) in
the future.
The name originated from `Node`, which was a tree-based IR that was
initially conceived, but removed because it's not yet needed. What we need
is a streaming IR for xmle writing, and then for reading and echoing back
out XML for the new frontend.
This is a working streaming IR for XML. I want to get this committed before
I go further cleaning it up and integrating it into the xmle writer.
This is lacking detailed documentation, and the names of things may end up
changing.
Initial benchmarks do show that it has a ~2x performance improvement over
quick-xml when dealing with two attributes on a node, and I suspect that
improvement will increase with the number of attributes. We will see how it
compares in real-world benchmarks once the linker has been modified to use
it.
The goal isn't to _avoid_ quick-xml---it'll be used in the future for things
like escaping that would be a huge waste to implement ourselves. It just so
happened that quick-xml was not beneficial for these changes; indeed, its
own writer is fairly simple for the portions that were implemented here, so
there's no use in fighting with its API, particularly around attributes and
our need to explicitly control whitespace (with the intent of handling code
formatters in the future).
To put this into perspective: the reason this work is being done isn't to
refactor the linker, or to speed it up, but to generalize XML writing and
provide a suitable IR for use in the compiler. The first step of the
frontend is to essentially echo the XML token stream back out so we can
incrementally parse it and do something useful, to incrementally rewrite the
compiler in Rust.
This adds benchmarking for the memchr crate. It is used primarily by
quick-xml at the moment, but the question is whether to rely on it for
certain operations for XIR.
The benchmarking on an Intel Xeon system shows that memchr and Rust's
contains() perform very similarly on small inputs, matching against a single
character, and so Rust's built-in should be preferred in that case so that
we're using APIs that are familiar to most people.
When larger inputs are compared against, there's a greater benefit (a little
under ~2x).
When comparing against two characters, they are again very close. But look
at when we compare two characters against _multiple_ inputs:
running 24 tests
test large_str::one::memchr_early_match ... bench: 4,938 ns/iter (+/- 124)
test large_str::one::memchr_late_match ... bench: 81,807 ns/iter (+/- 1,153)
test large_str::one::memchr_non_match ... bench: 82,074 ns/iter (+/- 1,062)
test large_str::one::rust_contains_one_byte_early_match ... bench: 9,425 ns/iter (+/- 167)
test large_str::one::rust_contains_one_byte_late_match ... bench: 123,685 ns/iter (+/- 3,728)
test large_str::one::rust_contains_one_byte_non_match ... bench: 123,117 ns/iter (+/- 2,200)
test large_str::one::rust_contains_one_char_early_match ... bench: 9,561 ns/iter (+/- 507)
test large_str::one::rust_contains_one_char_late_match ... bench: 123,929 ns/iter (+/- 2,377)
test large_str::one::rust_contains_one_char_non_match ... bench: 122,989 ns/iter (+/- 2,788)
test large_str::two::memchr2_early_match ... bench: 5,704 ns/iter (+/- 91)
test large_str::two::memchr2_late_match ... bench: 89,194 ns/iter (+/- 8,546)
test large_str::two::memchr2_non_match ... bench: 85,649 ns/iter (+/- 3,879)
test large_str::two::rust_contains_two_char_early_match ... bench: 66,785 ns/iter (+/- 3,385)
test large_str::two::rust_contains_two_char_late_match ... bench: 2,148,064 ns/iter (+/- 21,812)
test large_str::two::rust_contains_two_char_non_match ... bench: 2,322,082 ns/iter (+/- 22,947)
test small_str::one::memchr_mid_match ... bench: 4,737 ns/iter (+/- 842)
test small_str::one::memchr_non_match ... bench: 5,160 ns/iter (+/- 62)
test small_str::one::rust_contains_one_byte_non_match ... bench: 3,930 ns/iter (+/- 35)
test small_str::one::rust_contains_one_char_mid_match ... bench: 3,677 ns/iter (+/- 618)
test small_str::one::rust_contains_one_char_non_match ... bench: 5,415 ns/iter (+/- 221)
test small_str::two::memchr2_mid_match ... bench: 5,488 ns/iter (+/- 888)
test small_str::two::memchr2_non_match ... bench: 6,788 ns/iter (+/- 134)
test small_str::two::rust_contains_two_char_mid_match ... bench: 6,203 ns/iter (+/- 170)
test small_str::two::rust_contains_two_char_non_match ... bench: 7,853 ns/iter (+/- 713)
Yikes.
With that said, we won't be comparing against such large inputs
short-term. The larger strings (fragments) are copied verbatim, and not
compared against---but they _were_ prior to the previous commit that stopped
unescaping and re-escaping.
So: Rust built-ins for inputs that are expected to be small.
Fragments' text was unescaped on reading, producing an owned String and
spending time parsing the text to unescape. We were then copying that into
an internment pool (so, copying twice, effectively).
Further, we were then _re-escaping_ on write.
This was all wasteful, since we do not do any manipulation of the fragment
before outputting to the xmle file; we know that Saxon produced properly
escaped XML to begin with, and can trust to propagate it.
This also introduces a new global `clone_uninterned_utf8_unchecked` method.
In profiling this change, I tested (a) before this change, (b) after writing
without escaping, and (c) after both reading escaped and writing without
escaping.
  (a)              (b)              (c)
 sec    mem (B)   sec    mem (B)   sec    mem (B)
0:00.95 47896 -> 0:00.91 47988 -> 0:00.87 48288
0:00.40 30176 -> 0:00.37 25656 -> 0:00.36 25788
0:00.39 45672 -> 0:00.37 45756 -> 0:00.35 34952
0:00.39 20716 -> 0:00.38 19604 -> 0:00.36 19956
0:00.33 16836 -> 0:00.32 16988 -> 0:00.31 16892
0:00.23 15268 -> 0:00.23 15236 -> 0:00.22 15312
0:00.44 20780 -> 0:00.44 20048 -> 0:00.41 20148
0:00.54 44516 -> 0:00.50 36964 -> 0:00.49 36728
0:00.62 55976 -> 0:00.57 46204 -> 0:00.54 41468
0:00.31 28016 -> 0:00.30 27308 -> 0:00.28 23844
0:00.23 15388 -> 0:00.22 15316 -> 0:00.21 15304
0:00.05 4888 -> 0:00.05 4760 -> 0:00.05 4948
0:00.41 19756 -> 0:00.41 19852 -> 0:00.40 19992
0:00.47 20828 -> 0:00.46 20844 -> 0:00.44 20968
0:00.27 18152 -> 0:00.26 18184 -> 0:00.25 18312
Interestingly, the peak memory usage increases very slightly between the
second and third steps (though decreases from the first), likely because the
raw (encoded) text is larger than the unencoded text (e.g. `&gt;` takes more
space than `>`).
Fragments were previously represented by `String` to avoid the cost of
interning (hashing and copying). This change modifies it to use uninterned
symbols, which does still have a copy overhead but it does not hash.
Initial tests shows a small performance decrease of about 15% and a small
memory increase of similar proportion. However, once I realized that I was
not clearing buffers from quick_xml events and implemented that change in a
previous commit, this change ended up being approximately on par with
`String`, despite the copying of some pretty large fragments.
YMMV, though, and perhaps on less powerful systems time may increase
slightly.
The upcoming XIR (XML IR) was originally going to support both owned strings
and symbols, but now we'll just use uninterned symbols; I can't rationalize
complicating the API at this time when it will provide an almost
imperceptible performance benefit. If ever that changes in the future,
that change will be entertained.
The end result is that the fate of a fragment's underlying memory is
determined by whatever is processing the data, _not_ by the API itself---the
API was previously forcing use of a String, whereas now it's up to the
caller to determine whether we want comparable interns. For fragments,
that's not likely ever to be the case, especially considering that the
representation will change so drastically in the future.
This clears the buffers used by quick_xml, which was apparently forgotten
during initial development (I think I expected it to re-use the previously
allocated space automatically).
This has significant effects in some cases. For example, one of our UI
builds drops from ~9KiB to ~5KiB peak memory usage. Other builds for larger
suppliers are only slightly affected because of some of their massive
fragments.
This adds support for uninterned symbols. This came about as I was creating
Xir (not yet committed) where I had to decide if I wanted `SymbolId` for all
values, even though some values (e.g. large text blocks like compiled code
fragments for xmle files) will never be compared, and so would be wastefully
hashed.
Previous IRs used `String`, but that was clumsy; see documentation in this
commit for rationale.
This is an initial implementation optimized for expected use
cases. Hopefully that pans out and doesn't come back to bite me.
Regarding the context: it only allows for interned paths atm, which are
strings (and so must be valid UTF-8, which is fine for us, but sucks for
something more general-purpose). I'll be curious if the context needs
extension later on, or if different contexts will be stored in IRs (e.g. to
store a template application site as well as the location of the expansion
within the template body).
SymbolIds must only be constructed by interners, otherwise we lose
confidence in the type.
This offers an associated function to construct raw SymbolIds from integers
for testing purposes.
This is a major change, and I apologize for it all being in one commit. I
had wanted to break it up, but doing so would have required a significant
amount of temporary work that was not worth doing while I'm the only one
working on this project at the moment.
This accomplishes a number of important things, now that I'm preparing to
write the first compiler frontend for TAMER:
1. `Symbol` has been removed; `SymbolId` is used in its place.
2. Consequently, symbols use 16 or 32 bits, rather than a 64-bit pointer.
3. Using symbols no longer requires dereferencing.
4. **Lifetimes no longer pollute the entire system! (`'i`)**
5. Two global interners are offered to produce `SymbolStr` with `'static`
lifetimes, simplifying lifetime management and borrowing where strings
are still needed.
6. A nice API is provided for interning and lookups (e.g. "foo".intern())
which makes this look like a core feature of Rust.
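A sketch of how a thread-local global interner yields that API (trait,
storage, and symbol type are simplified stand-ins):

use std::cell::RefCell;
use std::collections::HashMap;

thread_local! {
    static INTERNER: RefCell<HashMap<String, u32>> =
        RefCell::new(HashMap::new());
}

trait Intern {
    fn intern(&self) -> u32;
}

impl Intern for str {
    fn intern(&self) -> u32 {
        INTERNER.with(|pool| {
            let mut map = pool.borrow_mut();
            let next = map.len() as u32;
            // Equal strings always resolve to the same symbol.
            *map.entry(self.to_owned()).or_insert(next)
        })
    }
}

fn main() {
    assert_eq!("foo".intern(), "foo".intern());
}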
Unfortunately, making this change required modifications to...virtually
everything. And that serves to emphasize why this change was needed:
_everything_ used symbols, and so there's no use in not providing globals.
I implemented this in a way that still provides for loose coupling through
Rust's trait system. Indeed, Rustc offers a global interner, and I decided
not to go that route initially because it wasn't clear to me that such a
thing was desirable. It didn't become apparent to me, in fact, until the
recent commit where I introduced `SymbolIndexSize` and saw how many things
had to be touched; the linker evolved so rapidly as I was trying to learn
Rust that I lost track of how bad it got.
Further, this shows how the design of the internment system was a bit
naive---I assumed certain requirements that never panned out. In
particular, everything using symbols stored `&'i Symbol<'i>`---that is, a
reference (usize) to an object containing an index (32-bit) and a string
slice (128-bit). So it was a reference to a pretty large value, which was
allocated in the arena alongside the interned string itself.
But, that was assuming that something would need both the symbol index _and_
a readily available string. That's not the case. In fact, it's pretty
clear that interning happens at the beginning of execution, that `SymbolId`
is all that's needed during processing (unless an error occurs; more on that
below); and it's not until _the very end_ that we need to retrieve interned
strings from the pool to write either to a file or to display to the
user. It was horribly wasteful!
So `SymbolId` solves the lifetime issue in itself for most systems, but it
still requires that an interner be available for anything that needs to
create or resolve symbols, which, as it turns out, is still a lot of
things. Therefore, I decided to implement them as thread-local static
variables, which is very similar to what Rustc does itself (Rustc's are
scoped). TAMER does not use threads, so the resulting `'static` lifetime
should be just fine for now. Eventually I'd like to implement `!Send` and
`!Sync`, though, to prevent references from escaping the thread (as noted in
the patch); I can't do that yet, since the feature has not yet been
stabilized.
In the end, this leaves us with a system that's much easier to use and
maintain; hopefully easier for newcomers to get into without having to deal
with so many complex lifetimes; and a nice API that makes it a pleasure to
work with symbols.
Admittedly, the `SymbolIndexSize` adds some complexity, and we'll see if I
end up regretting that down the line, but it exists for an important reason:
the `Span` and other structures that'll be introduced need to pack a lot of
data into 64 bits so they can be freely copied around to keep lifetimes
simple without wreaking havoc in other ways, but a 32-bit symbol size needed
by the linker is too large for that. (Actually, the linker doesn't yet need
32 bits for our systems, but it's going to in the somewhat near future
unless we optimize away a bunch of symbols...but I'd really rather not have
the linker hit a limit that requires a lot of code changes to resolve).
Rustc uses interned spans when they exceed 8 bytes, but I'd prefer to avoid
that for now. Most systems can just use one of the `PkgSymbolId` or
`ProgSymbolId` type aliases and not have to worry about it. Systems that
are actually shared between the compiler and the linker do, though, but it's
not like we don't already have a bunch of trait bounds.
Of course, as we implement link-time optimizations (LTO) in the future, it's
possible most things will need the size and I'll grow frustrated with that
and possibly revisit this. We shall see.
Anyway, this was exhausting...and...onward to the first frontend!
Oh boy. What a mess of a change.
This demonstrates some significant issues we have with Symbol. I had
originally modelled the system a bit after Rustc's, but deviated in certain
regards:
1. This has a configurable base type to enable better packing without bit
twiddling and potentially unsafe tricks I'd rather avoid unless
necessary; and
2. The lifetime is not static, and there is no global, singleton interner;
and
3. I pass around references to a Symbol rather than passing around an
index into an interner.
For #3---this is done because there's no singleton interner and therefore
resolving a symbol requires a direct reference to an available interner. It
also wasn't clear to me (and still isn't, in fact) whether more than one
interner may be used for different contexts.
But, that doesn't preclude removing lifetimes and just passing around
indexes; in fact, I plan to do this in the frontend where the parser and
such will have direct interner access and can therefore just look up based
on a symbol index. We could reserve references for situations where
exposing an interner would be undesirable.
Anyway, more to come...
As mentioned in the previous commit, this flips the types such that the base
type is the primitive and the associated type is the `NonZero*` type; this
is much more natural, concise, and allows Rust to infer the proper type in
most every situation.
The next step will be to stop defaulting the index type for SymbolIndex and
related, since we are about to care very much what size it is (compiler
vs. linker).
This was previously a NonZeroU32, but it was intended to support NonZeroU16
as well for packages, so that we can fit symbols into smaller spaces. In
particular, the upcoming Span wants to fit within 8 bytes, and so requires a
smaller SymbolIndex type.
I'm unhappy with this current implementation, and so comments are unfinished
and there are a couple ignores for dead code warnings. I want to flip the
`SupportedSymbolIndex` trait so that users can specify the primitive rather
than the NonZero* type, which is really awkward-looking and verbose,
especially if you have to do `SymbolIndex::<NonZeroU32>::from_int` or
something. It also prevents (at least in the cases I've observed) Rust from
inferring the proper type for you based on the argument you provide.
So, the goal will be `SymbolIndex::<u32>::from_int(n)`, for example.
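A sketch of that flip (trait and method names are illustrative): the
primitive is the trait implementor and the `NonZero*` type becomes an
associated type, so callers never have to name it:

  use std::num::{NonZeroU16, NonZeroU32};

  trait SymbolIndexSize: Sized {
      type NonZero;
      fn to_nonzero(self) -> Option<Self::NonZero>;
  }

  impl SymbolIndexSize for u16 {
      type NonZero = NonZeroU16;
      fn to_nonzero(self) -> Option<NonZeroU16> { NonZeroU16::new(self) }
  }

  impl SymbolIndexSize for u32 {
      type NonZero = NonZeroU32;
      fn to_nonzero(self) -> Option<NonZeroU32> { NonZeroU32::new(self) }
  }

  struct SymbolIndex<Ix: SymbolIndexSize>(Ix::NonZero);

  impl<Ix: SymbolIndexSize> SymbolIndex<Ix> {
      // The goal stated above: `SymbolIndex::<u32>::from_int(n)`.
      fn from_int(n: Ix) -> Option<Self> {
          n.to_nonzero().map(SymbolIndex)
      }
  }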
The first step in the process is to emit the raw XML events that can then be
immediately output again to echo the results into another file. This will
then allow us to begin parsing the input incrementally, and begin to morph
the output into a real `xmlo` file.
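As a sketch, the echo amounts to a read/write loop over quick-xml events
(this uses the quick-xml API of this era, whose exact signatures vary by
version; it is not the actual frontend code):

  use quick_xml::events::Event;
  use quick_xml::{Reader, Writer};
  use std::io::Cursor;

  fn echo(src: &str) -> Result<Vec<u8>, quick_xml::Error> {
      let mut reader = Reader::from_str(src);
      let mut writer = Writer::new(Cursor::new(Vec::new()));
      let mut buf = Vec::new();

      loop {
          match reader.read_event(&mut buf)? {
              Event::Eof => break,
              // Emit each event exactly as it was read.
              ev => { writer.write_event(ev)?; }
          }
          buf.clear();
      }

      Ok(writer.into_inner().into_inner())
  }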
This introduces the beginnings of frontends for TAMER, gated behind a
`wip-features` flag.
This will be introduced in stages:
1. Replace the existing copy with a parser-based copy (echo back out the
tokens), when the flag is on.
2. Begin to parse portions of the source, augmenting the output xmlo (xmli
at the moment). The XSLT-based compiler will be modified to skip
compilation steps as necessary.
As portions of the compilation are implemented in TAMER, they'll be placed
behind their own feature flags and stabilized, which will incrementally
remove the compilation steps from the XSLT-based system. The result should
be substantial incremental performance improvements.
Short-term, the priorities for loading identifiers into an IR are (though
the order may change):
1. Echo
2. Imports
3. Extern declarations.
4. Simple identifiers (e.g. param, const, template, etc.).
5. Classifications.
6. Documentation expressions.
7. Calculation expressions.
8. Template applications.
9. Template definitions.
10. Inline templates.
After each of those are done, the resulting xmlo (xmli) will have fully
reconstructed the source document from the IR produced during parsing.
This was incorrect to begin with---it does not make sense that an input
mapping should depend upon the identifier that it maps to, in the sense that
we make use of these dependencies. If we add weak symbol references in the
future, then this can be reintroduced.
By removing this, we free tameld from having to perform the check itself.
.rev-xmlo bumped to force rebuilding of object files since the linker now
expects that no such dependencies will exist within them.
This is something that changed when the TAMER POC was initially created, as
I was learning Rust. I don't recall the original reason why this was moved,
but it could have been moved back long ago.
In our systems, constants can hold tables (as matrices) with tens or
hundreds of thousands of rows, and there are a number of them in certain
projects. As an example, the YAML-based test cases for one of our systems
went from ~2m30s to ~45s after this change was made. Much of the cost
savings comes from saving GC.
A previous commit used a rustdoc tool lint, but that support wasn't added
until 1.52.0 (2021-05-06).
Note that this represents the minimum _required_ version to build TAMER; you
can use a later version.
This checks explicitly for unresolved objects while sorting and provides an
explicit error for them. For example, this will catch externs that have no
concrete resolution.
This previously fell all the way through to the unreachable! block. The old
POC implementation was catching unresolved objects, albeit with a debug
error.
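Reduced to a sketch (names hypothetical; the real types differ), the change
turns that fall-through into a first-class error:

  enum Object {
      Ident(String),
      Extern(String),
  }

  enum SortError {
      UnresolvedObject(String),
  }

  fn emit(obj: &Object) -> Result<&str, SortError> {
      match obj {
          Object::Ident(name) => Ok(name.as_str()),
          // Previously this hit `unreachable!`; an extern with no
          // concrete resolution is now reported explicitly.
          Object::Extern(name) => {
              Err(SortError::UnresolvedObject(name.clone()))
          }
      }
  }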
This will be used for the next commit, but this change has been isolated
both because it distracts from the implementation change in the next commit,
and because it cleans up the code by removing the need for a type parameter
on `AsgError`.
Note that the sort test cases now use `unwrap` instead of having
`{,Sortable}AsgError` support one or the other---this is because that does
not currently happen in practice, and there is not supposed to be a
hierarchy; they are siblings (though perhaps their names may imply otherwise).
The only reason this function was a method of `BaseAsg` was because of
`self.graph`, which is accessible within the scope of this
module. `check_cycles` is logically associated with `SortableAsg`, and so
should exist alongside it (though it can't exist as an associated function
of that trait).
We want to be able to build a representation of the dependency graph so
we can easily inspect it.
We do not want to generate GraphML by default; it is better to use a tool.
We use "petgraph-graphml".
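With the petgraph-graphml crate, producing that representation on demand
looks something like this (per that crate's documented API; how it is
actually wired into the linker here may differ):

  use petgraph::Graph;
  use petgraph_graphml::GraphMl;

  fn to_graphml(graph: &Graph<String, ()>) -> String {
      GraphMl::new(graph)
          .pretty_print(true)
          // Include node weights so identifiers appear in the output.
          .export_node_weights_display()
          .to_string()
  }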
This was originally omitted because there wasn't a use case for it. Now
that we're adding context to errors, however, an owned value is highly
desirable.
This adds almost no measurable overhead to the internment system in
benchmarks (largely within the margin of error).
This is a union (sum type) of three other error types, plus errors specific
to this builder.
This commit does a good job demonstrating the boilerplate, as well as a need
for additional context (in the case of `IdentKindError`), that we'll want to
work on abstracting away.
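The boilerplate looks roughly like this (variant names are illustrative),
and it repeats for each wrapped error type:

  use std::fmt;

  #[derive(Debug)]
  enum AsgBuilderError {
      Io(std::io::Error),
      // The variant that wants more context, per above.
      IdentKind(String),
      UnexpectedToken(String),
  }

  impl From<std::io::Error> for AsgBuilderError {
      fn from(e: std::io::Error) -> Self {
          Self::Io(e)
      }
  }

  impl fmt::Display for AsgBuilderError {
      fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
          match self {
              Self::Io(e) => write!(f, "I/O error: {}", e),
              Self::IdentKind(s) => write!(f, "bad ident kind: {}", s),
              Self::UnexpectedToken(s) => write!(f, "unexpected token: {}", s),
          }
      }
  }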
The `Debug` bound is inconvenient and requires propagation to any types that
use it. Further, it's really awkward having `Display` depend on `Debug`; if
we want to render a useful display here, we can write one.
To be clear: IndexType implements Debug.
For now, this is pretty-printed by another part of the code, which we don't
want to implement in `Display` because it requires looking things up from
the graph.
This flips the API from using XmloWriter as the context to using Asg and
consuming anything that can produce XmloResults. This not only makes more
sense, but avoids having to create a trait for XmloReader, and simplifies
the trait bounds we have to concern ourselves with.
This just tidies things up a little bit before I get into some further
refactoring. I wrote the original code when I was just learning Rust not
too long ago, so it's interesting to see how my understanding has changed
over that relatively short period of time.
This abstracts away the canonicalizer and solves the problem whereby
canonicalization was not being performed prior to recording whether a path
has been visited. This ensures that multiple relative paths to the same
file will be properly recognized as visited.
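The fix, in miniature (the real code abstracts the canonicalizer behind its
own type so it can be swapped out; this sketch inlines `Path::canonicalize`
for brevity):

  use std::collections::HashSet;
  use std::io;
  use std::path::{Path, PathBuf};

  // Canonicalize *before* consulting the visited set so that relative
  // and absolute paths to the same file share a single entry.
  fn try_visit(path: &Path, visited: &mut HashSet<PathBuf>) -> io::Result<bool> {
      let canon = path.canonicalize()?;
      Ok(visited.insert(canon))
  }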
This will be entirely replaced in an upcoming commit. See that for
details. I don't feel like dealing with the conflicts for rearranging and
squashing these commits.
This also includes an implementation to visit paths only once. Note that it
does not yet canonicalize the path before visiting, so relative paths to the
same file can slip through, and relative paths to _different_ files could be
erroneously considered to have been visited.
This will be fixed in an upcoming commit.
This serves as a constructor for the time being, decoupling from POC. We
may do something better once we have a better idea of how the various
abstractions around this will evolve.
Add a stub executable that will eventually become a full-featured TAME
compiler. The first implementation will only copy the source file to an
intermediary file that will be compiled by the XSLT compiler.
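In sketch form, the stub is little more than this (the argument handling is
hypothetical):

  use std::{env, fs, io};

  fn main() -> io::Result<()> {
      let mut args = env::args().skip(1);
      let src = args.next().expect("missing input path");
      let dest = args.next().expect("missing output path");

      // Copy the source verbatim; the XSLT compiler takes over from
      // the intermediary file.
      fs::copy(&src, &dest)?;
      Ok(())
  }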
This is an awkward system that I'd like to remove at some point. It adds
complexity. For the meantime, overrides have been arbitrarily restricted to
a single override (no override-override). But it's needed for the time
being until we
rework maps and can handle the illusion of overrides using the template
system.
Benchmark performance for this method is still substantially slower. And
oddly, this nearly doubled the speed of the other two calls (granted, at
that speed, it doesn't matter).
All of these refactoring commits to arrive at this one final change: the
ability to store the source location for externs so that we can report on
what package is expecting an identifier to be defined.
Phew. Goodnight.
This undoes work I did earlier today...but now we'll be able to support a
Source on an extern.
There is duplicate code between `BaseAsg::declare{,_extern}` that will be
resolved in an upcoming commit. Upcoming commits will also simplify
terminology and clean up methods on ObjectState.
There is some duplication here with `declare` that will be cleared up in a
following commit. Reintroducing this method is necessary so that Source can
be used to represent the source location of the extern itself; it's
currently None to indicate an extern in `declare`.
This is the first step in a more incremental refactoring than the previous
commits to undo the optional Source in `ObjectState::ident`. This provides
an explicit transition to an extern, with the intent of requiring an initial
missing state. This will simplify logic on the ASG.
Note that the Source provided to this new method is not yet used. That too
will come in a following commit and will represent the source of the defined
extern rather than the concrete identifier.
This properly verifies extern types, and cleans up Asg's API a little so
that externs aren't handled much differently than other declarations.
With that said, after making src optional, I realized that we will indeed
want source information for externs themselves so we can direct the user to
what package is expecting that symbol (as the old linker does). So this
approach will not work, and I'll have to undo some of those changes.
This is essential to clarify what exactly the different object types
represent with the new generic abstractions. For example, we will have
expressions as an object type.
There's a lot here to make the object stored on the `Asg` generic. This
introduces `ObjectState` for state transitions and `ObjectData` for pure
data retrieval. This will allow not only for mocking, but will be useful to
enforce compile-time restrictions on the type of objects expected by the
linker vs. the compiler (e.g. the linker will not have expressions).
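The split, roughly (method names are placeholders; the real signatures
differ):

  // Pure data retrieval, with no ability to mutate state.
  trait ObjectData {
      fn name(&self) -> Option<&str>;
  }

  // State transitions layered atop the data; a transition consumes
  // the old state and yields the new one (or a transition error).
  trait ObjectState: ObjectData + Sized {
      type Error;

      fn resolve(self, name: String) -> Result<Self, Self::Error>;
  }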
This commit intentionally leaves the corresponding tests in their original
location to prove that the functionality has not changed; they'll be moved
in a future commit.
This also leaves the names as "Object" to reduce the cognitive overhead of
this commit. It will be renamed to something like "IdentObject"
in the near future to clarify the intent of the current object type and to
open the way for expressions and a type that marries both of them in the
future.
Once all of this is done, we'll finally be able to make changes to the
compatibility logic in state transitions to implement extern compatibility
checks during resolution.
DEV-7087
The next commit will generalize this further. This moves logic out of
BaseAsg so that we can implement more sophisticated transitions for
compatibility checks.
The logic is still tested as part of BaseAsg; the next commit will change
that as it's generalized further.
* tamer/src/ir/asg/base.rs: Extract object transitions.
* tamer/src/ir/asg/graph.rs (AsgError)[IncompatibleIdent]: New variant.
(From<TransitionError> for AsgError): Basic type translation.
* tamer/src/ir/asg/object.rs (TransitionResult): New type.
(impl Object): Transition methods.
(TransitionError): New enum.
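The type translation amounts to a `From` impl along these lines (variant
payloads are illustrative):

  #[derive(Debug, PartialEq)]
  pub enum TransitionError {
      Incompatible(String),
  }

  #[derive(Debug, PartialEq)]
  pub enum AsgError {
      IncompatibleIdent(String),
  }

  impl From<TransitionError> for AsgError {
      fn from(err: TransitionError) -> Self {
          match err {
              TransitionError::Incompatible(msg) => {
                  Self::IncompatibleIdent(msg)
              }
          }
      }
  }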
This variant is unnecessary, as it was used only by the indexer to represent
the absence of a node, for which we can simply use `None` in the containing
`Option`.
* tamer/Cargo.toml: Add `lazy_static`.
* tamer/Cargo.lock: Update.
* tamer/src/ir/asg/base.rs (with_capacity): Use `None` in place of
`Some(Object::Empty)`.
* tamer/src/ir/asg/object.rs: Adjust state machine graphic.
(Empty): Remove variant.
(Missing): Remove reference to removed variant.
* tamer/src/lib.rs: Import `lazy_static` for test builds.
* tamer/obj/xmle/writer/writer.rs (Section::iter): Remove `Object::Empty`
from documentation.
(test::): Remove references to `Object::Missing`. `lazy_static!` used
here.
* tamer/obj/xmle/writer/xmle.rs (test::write_section_catch_missing): Replace
reference to `Object::Missing`.
This still isn't comprehensive. Further, it won't be able to be, because
we'd have to rely on Petgraph implementation details: there are potentially
many acceptable orderings for a given graph.
Create a trait that sorts a graph into `Sections` that can then be used
as an IR. The `BaseAsg` should implement the trait using what was
originally in the POC.
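In sketch form (the `Sections` payload and node index type are simplified
stand-ins for the real ones):

  struct Sections {
      order: Vec<String>,
  }

  trait SortableAsg {
      type Error;

      // Topologically sort the graph reachable from `roots` into the
      // ordered `Sections` consumed by the xmle writer.
      fn sort(&self, roots: &[usize]) -> Result<Sections, Self::Error>;
  }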
If we cannot set a fragment, we need to display the error to the user.
We are currently ignoring "___head", "___tail", and objects that are
both virtual and overridden. Those will be corrected with future
changes.
We want to add an option to set the output file to the linker so we do
not need to redirect output to awk any longer.
This also adds integration tests for tameld.
This begins to introduce the ASG, backed by Petgraph. The API will continue
to evolve, and Petgraph will likely be encapsulated so that our
implementation can vary independently from it (or even remove it in the
future).
This introduces the reader for xmlo files produced by the XSLT-based
compiler. It is an initial implementation but is not complete; see future
commits.
One of the benefits of storing a reference to the interned string on the
symbol itself is that we can get its underlying value essentially for
free.
This ordering will simplify streaming processing of xmlo files in
TAMER. Specifically, we know that symbols will have been declared by the
time dependencies are added to the graph (and so we should only be creating
edges to existing nodes); and we can halt reading as soon as the closing
fragments tag is encountered, avoiding parsing the entirety of these massive
XML files.
On one particularly large program, this cuts time down from ~0.333s to
~0.300s in the POC linker.
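The early halt reduces to something like this with era-appropriate
quick-xml (error handling and the surrounding processing are elided; the
element name matches the xmlo convention but is otherwise illustrative):

  use quick_xml::events::Event;
  use quick_xml::Reader;

  fn read_xmlo(src: &str) -> Result<(), quick_xml::Error> {
      let mut reader = Reader::from_str(src);
      let mut buf = Vec::new();

      loop {
          match reader.read_event(&mut buf)? {
              // Nothing after the fragments is needed, so stop here
              // instead of parsing the rest of the (massive) file.
              Event::End(ref e) if e.name() == b"preproc:fragments" => break,
              Event::Eof => break,
              _ => {} // symbol decls, deps, and fragments handled here
          }
          buf.clear();
      }

      Ok(())
  }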
Contrary to what I said previously, this replaces the previous
implementation with an arena-backed internment system. The motivation for
this change was investigating how Rustc performed its string interning, and
why they chose to associate integer identifiers with symbols.
The intent was originally to use Rustc's arena allocator directly, but that
crate pulled in far too many dependencies and depended on nightly
Rust. Bumpalo provides a very similar implementation to Rustc's
DroplessArena, so I went with that instead.
Rustc also relies on a global, singleton interner. I do not do that
here. Instead, the returned Symbol carries a lifetime of the underlying
arena, as well as a pointer to the interned string.
Now that this is put to rest, it's time to move on.
For strings of any notable length, Fx Hash outperforms FNV. Rustc also
moved to this hash function and noticed performance
improvements. Fortunately, as was accounted for in the design, this was a
trivial switch.
Here are some benchmarks to back up that claim:
test hash_set::fnv::with_all_new_1000 ... bench: 133,096 ns/iter (+/- 1,430)
test hash_set::fnv::with_all_new_1000_with_capacity ... bench: 82,591 ns/iter (+/- 592)
test hash_set::fnv::with_all_new_rc_str_1000_baseline ... bench: 162,073 ns/iter (+/- 1,277)
test hash_set::fnv::with_one_new_1000 ... bench: 37,334 ns/iter (+/- 256)
test hash_set::fnv::with_one_new_rc_str_1000_baseline ... bench: 18,263 ns/iter (+/- 261)
test hash_set::fx::with_all_new_1000 ... bench: 85,217 ns/iter (+/- 1,111)
test hash_set::fx::with_all_new_1000_with_capacity ... bench: 59,383 ns/iter (+/- 752)
test hash_set::fx::with_all_new_rc_str_1000_baseline ... bench: 98,802 ns/iter (+/- 1,117)
test hash_set::fx::with_one_new_1000 ... bench: 42,484 ns/iter (+/- 1,239)
test hash_set::fx::with_one_new_rc_str_1000_baseline ... bench: 15,000 ns/iter (+/- 233)
test hash_set::with_all_new_1000 ... bench: 137,645 ns/iter (+/- 1,186)
test hash_set::with_all_new_rc_str_1000_baseline ... bench: 163,129 ns/iter (+/- 1,725)
test hash_set::with_one_new_1000 ... bench: 59,051 ns/iter (+/- 1,202)
test hash_set::with_one_new_rc_str_1000_baseline ... bench: 37,986 ns/iter (+/- 771)
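That trivial switch falls out of the interner being generic over its
`BuildHasher`; in sketch form (type names hypothetical, the `fnv` and
`fxhash` crates assumed):

  use std::collections::HashMap;
  use std::hash::BuildHasher;

  // The interner never names a concrete hasher, so FNV vs. Fx is a
  // one-line change to an alias rather than to the implementation.
  struct Interner<S: BuildHasher> {
      map: HashMap<String, u32, S>,
  }

  // Before: type DefaultInterner = Interner<fnv::FnvBuildHasher>;
  type DefaultInterner = Interner<fxhash::FxBuildHasher>;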
This will be used for generating the common tests between HashSet and
HashMap implementations.
This is my first macro in Rust. There does not seem to be a way to
concatenate identifiers (!), so I'm placing them within modules
instead. That ended up working out just fine, since then I can use a type
to provide the SUT.
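The module workaround looks roughly like this (the test body is a trivial
stand-in):

  // `macro_rules!` cannot concatenate identifiers, so each invocation
  // instead gets its own module, and the module binds the system
  // under test to a fixed type name.
  macro_rules! common_tests {
      ($name:ident, $sut:ty) => {
          #[cfg(test)]
          mod $name {
              type Sut = $sut;

              #[test]
              fn starts_empty() {
                  assert_eq!(<Sut>::new().len(), 0);
              }
          }
      };
  }

  common_tests!(set_like, std::collections::HashSet<String>);
  common_tests!(map_like, std::collections::HashMap<String, u32>);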
This is missing two key things that I'll add shortly: a HashMap-based one
for use in the ASG for node mapping, and an entry-based system for
manipulations.
This has been a nice start for exploring various aspects of Rust
development, as well as conventions that I'd like to implement. In
particular:
- Robust documentation intended to guide people through learning the
necessary material about the compiler, as well as related work to
rationalize design decisions;
- Benchmarks;
- TDD;
- And just getting used to Rust in general.
I've beat this one to death, so I'll commit this and make smaller changes
going forward to show how easily it can evolve.
(This module was originally named `intern` but this commit and those that
follow rewrote it to `sym`.)
This is enabled by default in nightly, and is not available at all in
stable. Considering the PITA that it will be to go back and rewrite docs to
use the new format, and how important of a feature this is, we will just
make use of it now.
Given that developers should be doing TDD and therefore running this target
frequently, this has the effect of providing immediate feedback when
formatting is needed and outputting a diff. Developers will then quickly
understand what changes need to be made to avoid future issues (and can run
`cargo fmt` to fix it), at which point they'll rarely ever encounter
formatting errors.
The original purpose was to ensure pipelines fail when the formatter has not
been run.
This makes use of Petgraph for representing the dependency graph and uses a
separate data structure for both string interning and indexing by symbol
name.
This is garbage code. Do not use it. It is intentionally throwaway.
While I've researched Rust, I haven't actually _used_ it for a project, so
this is a combination of me exploring various ways of accomplishing the
problem and forcing myself to learn certain aspects of the language.
I'll likely be using petgraph, and this also currently lacks symbol
abstractions. This commit also performs far too much heap allocation,
copying strings around. But it _does_ perform the topological sort.
Since this only stores the symbol name, it lacks enough information about
the symbol to perform a proper linking.