This introduces `xmlo-experimental` for `--emit`, allowing the new parser to
be toggled selectively for individual packages. This has a few notable
benefits:
1. We'll be able to conditionally compile packages as they are
supported (TAMER will target specific packages in our system to try to
achieve certain results more quickly);
2. This cleans up the code a bit by removing awkward gated logic, allowing
natural abstractions to form; and
3. Removing the compile-time feature flag ensures that the new features
are always built and tested; there are fewer configuration combinations
to test.
DEV-13162
This is an initial implementation that does not yet produce errors on
cycles. Documentation is not yet complete.
The implementation is fairly basic, and similar to Petgraph's DFS.
A terminology note: the DFS will be ontology-aware (or at least aware of
edge metadata) to avoid traversing edges that would introduce cycles in
situations where they are permitted, which effectively performs a
topological sort on an implicitly _filtered_ graph.
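A rough sketch of the idea, with an illustrative edge ontology (this
is a sketch of the approach, not the actual implementation):

  use petgraph::graph::{DiGraph, NodeIndex};
  use petgraph::visit::EdgeRef;

  // Hypothetical edge metadata: `Ordered` edges participate in the
  // sort; `CrossRef` edges are permitted to form cycles and so are
  // not traversed.
  enum EdgeKind {
      Ordered,
      CrossRef,
  }

  fn filtered_postorder(
      g: &DiGraph<&'static str, EdgeKind>,
      node: NodeIndex,
      visited: &mut [bool],
      out: &mut Vec<NodeIndex>,
  ) {
      if visited[node.index()] {
          return;
      }
      visited[node.index()] = true;

      for edge in g.edges(node) {
          if matches!(edge.weight(), EdgeKind::Ordered) {
              filtered_postorder(g, edge.target(), visited, out);
          }
      }

      // Post-order emit: dependencies appear before dependents,
      // which is a topological sort of the filtered graph.
      out.push(node);
  }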
This will end up replacing ld::xmle::lower::sort.
DEV-13162
This commit is what I've been sitting on for testing some of the recent
changes; it is a very basic demonstration of lowering all the way down
from source XML files into the ASG. This can be run on real files to
observe, beyond unit tests, how the system reacts.
Once this outputs data from the graph, we'll finally have tamec end-to-end
and can just keep filling the gaps.
I'm hoping to roll the desugaring process into NirToAir rather than having a
separate process as originally planned a couple of months back.
This also introduces the `wip-nir-to-air` feature flag. Currently,
interpolation will cause a `Nir::BindIdent` to be emitted in blocks that
aren't yet emitting NIR, and so results in an invalid parse.
DEV-13159
This was originally created to populate Neo4J for querying, but it has
not been utilized. It's become a maintenance burden as I try to change
the graph's API and encapsulate it, which is important for upholding
its invariants.
This feature, or one like it, will return in the future. I have other
related plans; we'll see if they materialize.
The graph can't be encapsulated fully just yet because of the linker; those
commits will come in the following days.
DEV-13597
This always annoys me when I add a dependency and I don't know where I ought
to put it.
Anyway, I was originally going to add the `regex` crate, but with further
planning, I may not end up having use for it. Nonetheless, at least this is
consistent.
This flag allows toggling the parser trace that was previously only
available to tests. Unfortunately, at the time of writing, Cargo cannot
enable flags in profiles, so I have to check for either `test` or this flag
being set to enable relevant features.
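A sketch of that gating (the feature name here is illustrative):

  // Compiled in when testing or when the flag is explicitly set.
  #[cfg(any(test, feature = "parser-trace-stderr"))]
  fn trace_tok(tok: &impl std::fmt::Debug) {
      eprintln!("[parser] {tok:?}");
  }

  // No-op otherwise, so callers need no gating of their own.
  #[cfg(not(any(test, feature = "parser-trace-stderr")))]
  fn trace_tok(_tok: &impl std::fmt::Debug) {}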
This trace is useful as I start to run the parser against existing code
written in TAME so that our existing systems can help to guide my
development. Unlike the current tests, it also allows seeing real-world
data as part of the lowering pipeline, where multiple `Parser`s are in
play.
Having this feature flag also makes this feature more easily discoverable to
those wishing to observe how the lowering pipeline works.
DEV-7145
This is no longer needed after the previous commit, with static spans
having been replaced by `const` spans.
This used to be required before Rust acquired better const features, and
before I had preinterned symbols.
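For illustration only (hypothetical types), this is the kind of
construction that `const fn` now permits, where `lazy_static!` or a
runtime constructor would previously have been needed:

  #[derive(Debug, Clone, Copy, PartialEq, Eq)]
  struct Span {
      offset: u32,
      len: u16,
  }

  impl Span {
      const fn new(offset: u32, len: u16) -> Self {
          Self { offset, len }
      }
  }

  // Evaluated at compile time; no static initialization needed.
  const SPAN_UNKNOWN: Span = Span::new(0, 0);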
DEV-7145
Determining the column number is not as simple as performing byte
arithmetic, because certain characters have different widths. Even if we
only accepted ASCII, control characters aren't visible to the user.
This uses the unicode-width crate as an alternative to POSIX wcwidth, to
determine (hopefully) the number of fixed-width cells that a unicode
character will take up on a terminal. For example, control characters are
zero-width, while an emoji is likely double-width. See test cases for more
information on that.
There is also the unicode-segmentation crate, which can handle extended
grapheme clusters and such, but (a) we'll be outputting the line to the
terminal and (b) there's no guarantee that the user's editor displays
grapheme clusters as a single column. LSP measures in UTF-16,
apparently. I use both Emacs and Vim from a terminal, so unicode-width
applies to me. There's too much variation to try to solve that right now.
The columns can be considered a visual span---this gives us enough
information to draw line annotations, which will happen soon.
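A minimal sketch of the width calculation using unicode-width (the
function itself is illustrative):

  use unicode_width::UnicodeWidthChar;

  /// Number of fixed-width terminal cells occupied by `s`.
  fn display_width(s: &str) -> usize {
      s.chars()
          // Control characters report `None`; treat as zero-width.
          .map(|c| UnicodeWidthChar::width(c).unwrap_or(0))
          .sum()
  }

  fn main() {
      assert_eq!(display_width("abc"), 3); // one cell per ASCII char
      assert_eq!(display_width("🚀"), 2); // emoji: double-width
      assert_eq!(display_width("\u{8}"), 0); // control: zero-width
  }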
Here are some useful links:
- https://hsivonen.fi/string-length/
- https://unicode.org/reports/tr29/
- https://github.com/rust-analyzer/rowan/issues/17
- https://www.reddit.com/r/rust/comments/gpw2ra/how_is_the_rust_compiler_able_to_tell_the_visible/
DEV-10935
This entirely removes the old XmloReader that has since been replaced with a
XIR-based reader.
I had been holding off on this because the new reader is slower,
pending performance optimizations (which I'll do a little later on);
however, the performance loss is of no practical consequence and only
affects the linker, which is still fast.
Therefore, it's better to get this old code out of the way to simplify
refactoring going forward. In particular, I'm working on the diagnostic
system.
This is a little sad, in a way---this is some of my first Rust code that I'm
deleting.
DEV-10935
Petgraph was previously held back due to petgraph-graphml. I'd like to
transition away from that at some point, given that it's tied to petgraph
and also pulls in xmlns, on top of quick-xml and our XIR, but that can come
down the line.
There are a number of reasons for this; the benefits do not make up
for the losses.
First: this is actually invoking cargo. Not only is this not necessary, but
it's not desirable: cargo by default hits the network and does all sorts of
other stuff, when all we want to do is invoke the executable. So the tests
aren't really testing the right thing in that sense. See the previous
commit for more information.
The way it invokes cargo is different from the way the Makefile
invokes cargo, so on my system, it's actually invoking a _different
cargo_! This is
causing problems, in particular with lock files, which causes my tests to
fail.
Importantly, this also removes a _lot_ of dependencies, which removes
a lot of supply chain risk and a lot of code to audit. This provides
significant security benefits, especially given that what was being tested
was rather small, and could be done in a shell script.
TAMER will receive significant system testing later on. But for now, none
of this was worth it.
Further audits of dependencies will come later on. I've always been fairly
insistent on keeping the dependency graph small and auditable, but recent
supply chain attacks have given me a better way to rationalize the security
risk. Further, I'm the only one on this project right now.
I'm not fond of this implementation, which is why it's not fully
completed. I wanted to commit this for future reference, and take the
opportunity to explain why I don't like it.
First: this task started as an idea to implement a third variant to
AttrValue and friends that indicates that a value is fixed, in the sense of
a fixed-point function: escaped or unescaped, its value is the same. This
would allow us to skip wasteful escape/unescape operations.
In doing so, it became obvious that there's no need to leak this information
through the API, and indeed, no part of the system should care. When we
read XML, it should be unescaped, and when we write, it should be
escaped. The reason that this didn't quite happen to begin with was an
optimization: I'll be creating an echo writer in place of the current
filesystem-based copy in tamec shortly, and this would allow streaming XIR
directly from the reader to the writer without any unescaping or
re-escaping.
When we unescape, we know the value that it came from, so we could
simply store both symbols---they're 32-bit, which yields a nicely
compressed 64-bit value that is essentially cost-free, as long as we
accept the expense of internment. This is `XirString`. Then, when we
want to escape
or unescape, we first check to see whether a symbol already exists and, if
so, use it.
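Roughly, the packing looks like this (types illustrative):

  #[derive(Debug, Clone, Copy, PartialEq, Eq)]
  struct SymbolId(u32);

  // Two 32-bit symbols pack into a single 64-bit value.
  #[derive(Debug, Clone, Copy)]
  struct XirString {
      unescaped: SymbolId,
      escaped: SymbolId, // known fixed escape, reusable on write
  }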
While this works well for echoing streams, it won't work all that well in
practice: the unescaped SymbolId will be taken and the XirString discarded,
since nothing after XIR should be coupled with it. Then, when we later
construct a XIR stream for writing, XirString will no longer be available
and our previously known escape is lost, so the writer will have to
re-escape.
Further, if we look at XirString's generic for the XirStringEscaper---it
uses phantom, which hints that maybe it's not in the best place. Indeed,
I've already acknowledged that only a reader unescapes and only a writer
escapes, and that the rest of the system works with normal (unescaped)
values, so only readers and writers should be part of this process. I also
already acknowledged that XirString would be lost and only the unescaped
SymbolId would be used.
So what's the point of XirString, then, if it won't be a useful optimization
beyond the temporary echo writer?
Instead, we can take the XirStringWriter and implement two caches on that:
mapping SymbolId from escaped->unescaped and vice versa. These can be
simple vectors; since SymbolId is a 32-bit value, we will not have
much wasted space for symbols that never get read or written. We could
even
optimize for preinterned symbols using markers, though I'll probably not do
so, and I'll explain why later.
If we do _that_, we get even _better_ optimizations through caching that
_will_ apply in the general case (so, not just for echo), and we're able to
ditch XirString entirely and simply use a SymbolId. This makes for a much
more friendly API that isn't leaking implementation details, though it
_does_ put an onus on the caller to pass the encoder to both the reader and
the writer, _if_ it wants to take advantage of a cache. But that burden is
not significant (and is, again, optional if we don't want it).
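A sketch of those caches (illustrative, under the assumption that
SymbolId is a dense 32-bit index):

  #[derive(Debug, Clone, Copy, PartialEq, Eq)]
  struct SymbolId(u32);

  #[derive(Default)]
  struct EscaperCache {
      escaped_of: Vec<Option<SymbolId>>,   // unescaped -> escaped
      unescaped_of: Vec<Option<SymbolId>>, // escaped -> unescaped
  }

  impl EscaperCache {
      fn cached_escape(&self, unescaped: SymbolId) -> Option<SymbolId> {
          self.escaped_of
              .get(unescaped.0 as usize)
              .copied()
              .flatten()
      }
  }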
So, that'll be the next step.
The original plan was to modify the existing reader to use the new
XmlXirReader, but that's going to be a lot of ongoing uncommitted work, with
both tests and implementation. The better option seems to be to reimplement
it, since so many things are changing.
This flag will be short-lived and removed as soon as the implementation is
complete.
DEV-10863
The new writer has reached parity with the old, with the exception of some
edge case explicit error handling that should never occur (which will be
added), and cleanup/docs.
Removing this flag now allows me to perform that cleanup without having to
worry about updating the now-old implementation.
I ran `tameld` with the new writer against our production system with
numerous programs and a significant number of test cases, and diff'd the old
and new xmle files, and everything looks good.
This has been a long time coming, and has been repeatedly stashed as other
parts of the system have evolved to support it. The introduction of the XIR
tree was to write tests for this (which are sloppy atm).
This currently writes out the `xmle` header and _most_ of the `l:dep`
section; it's missing the object-type-specific attributes. There is,
relatively speaking, not much more work to do here.
The feature flag `wip-xir-xmle-writer` was introduced to toggle this system
in place of `XmleWriter`. Initial benchmarks show that it will be
competitive with the quick-xml-based writer, but remember that this is not the
goal: the purpose of this is to test XIR in a production system before we
continue to implement it for a frontend, and to refactor so that we do not
have multiple implementations writing XML files (once we echo the source XML
files).
I'm excited to get this done with so that I can move on. This has been
rather exhausting.
This commit will make more sense once the broader context is committed, but
it's needed for lowering from `Sections` into a XIR stream.
This will also change once we pre-allocate symbols, like rustc, when the
interner is initialized.
This is my first use of the `paste` crate, which is used to generate
identifiers. So this is partly an experiment, and it seems much better than
having to write a proc macro, at least at this point in time. If this code
stays around, it'll probably be generalized further and used elsewhere, but
I'd prefer not to go this route long-term.
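For reference, this is the sort of identifier generation that `paste`
makes possible (the macro here is illustrative, not the actual code):

  use paste::paste;

  macro_rules! static_symbols {
      ($($name:ident),* $(,)?) => {
          paste! {
              $(
                  pub const [<$name:upper _SYM>]: &str =
                      stringify!($name);
              )*
          }
      };
  }

  static_symbols![package, rate, classify];

  fn main() {
      assert_eq!(PACKAGE_SYM, "package"); // generated identifier
  }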
This adds benchmarking for the memchr crate. It is used primarily by
quick-xml at the moment, but the question is whether to rely on it for
certain operations for XIR.
The benchmarking on an Intel Xeon system shows that memchr and Rust's
contains() perform very similarly on small inputs, matching against a single
character, and so Rust's built-in should be preferred in that case so that
we're using APIs that are familiar to most people.
When larger inputs are compared against, there's a greater benefit (a little
under ~2x).
When comparing against two characters on small inputs, the two are
again very close. But look at what happens when we search for two
characters in the _large_ inputs:
running 24 tests
test large_str::one::memchr_early_match ... bench: 4,938 ns/iter (+/- 124)
test large_str::one::memchr_late_match ... bench: 81,807 ns/iter (+/- 1,153)
test large_str::one::memchr_non_match ... bench: 82,074 ns/iter (+/- 1,062)
test large_str::one::rust_contains_one_byte_early_match ... bench: 9,425 ns/iter (+/- 167)
test large_str::one::rust_contains_one_byte_late_match ... bench: 123,685 ns/iter (+/- 3,728)
test large_str::one::rust_contains_one_byte_non_match ... bench: 123,117 ns/iter (+/- 2,200)
test large_str::one::rust_contains_one_char_early_match ... bench: 9,561 ns/iter (+/- 507)
test large_str::one::rust_contains_one_char_late_match ... bench: 123,929 ns/iter (+/- 2,377)
test large_str::one::rust_contains_one_char_non_match ... bench: 122,989 ns/iter (+/- 2,788)
test large_str::two::memchr2_early_match ... bench: 5,704 ns/iter (+/- 91)
test large_str::two::memchr2_late_match ... bench: 89,194 ns/iter (+/- 8,546)
test large_str::two::memchr2_non_match ... bench: 85,649 ns/iter (+/- 3,879)
test large_str::two::rust_contains_two_char_early_match ... bench: 66,785 ns/iter (+/- 3,385)
test large_str::two::rust_contains_two_char_late_match ... bench: 2,148,064 ns/iter (+/- 21,812)
test large_str::two::rust_contains_two_char_non_match ... bench: 2,322,082 ns/iter (+/- 22,947)
test small_str::one::memchr_mid_match ... bench: 4,737 ns/iter (+/- 842)
test small_str::one::memchr_non_match ... bench: 5,160 ns/iter (+/- 62)
test small_str::one::rust_contains_one_byte_non_match ... bench: 3,930 ns/iter (+/- 35)
test small_str::one::rust_contains_one_char_mid_match ... bench: 3,677 ns/iter (+/- 618)
test small_str::one::rust_contains_one_char_non_match ... bench: 5,415 ns/iter (+/- 221)
test small_str::two::memchr2_mid_match ... bench: 5,488 ns/iter (+/- 888)
test small_str::two::memchr2_non_match ... bench: 6,788 ns/iter (+/- 134)
test small_str::two::rust_contains_two_char_mid_match ... bench: 6,203 ns/iter (+/- 170)
test small_str::two::rust_contains_two_char_non_match ... bench: 7,853 ns/iter (+/- 713)
Yikes.
With that said, we won't be comparing against such large inputs
short-term. The larger strings (fragments) are copied verbatim and
not compared against---but they _were_ prior to the previous commit,
which stopped decoding and re-encoding them.
So: Rust built-ins for inputs that are expected to be small.
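For clarity, the two APIs being compared (functions illustrative):

  fn has_delim_rust(s: &str) -> bool {
      s.contains('<') // built-in: preferred for small inputs
  }

  fn has_delim_memchr(s: &str) -> bool {
      memchr::memchr(b'<', s.as_bytes()).is_some()
  }

  fn has_either_memchr(s: &str) -> bool {
      // memchr2 scans for either of two bytes in a single pass,
      // which is where the large-input numbers above diverge.
      memchr::memchr2(b'"', b'&', s.as_bytes()).is_some()
  }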
Oh boy. What a mess of a change.
This demonstrates some significant issues we have with Symbol. I had
originally modelled the system a bit after Rustc's, but deviated in certain
regards:
1. This has a configurable base type to enable better packing without bit
twiddling and potentially unsafe tricks I'd rather avoid unless
necessary; and
2. The lifetime is not static, and there is no global, singleton interner;
and
3. I pass around references to a Symbol rather than passing around an
index into an interner.
For #3---this is done because there's no singleton interner and therefore
resolving a symbol requires a direct reference to an available interner. It
also wasn't clear to me (and still isn't, in fact) whether more than one
interner may be used for different contexts.
But, that doesn't preclude removing lifetimes and just passing around
indexes; in fact, I plan to do this in the frontend where the parser and
such will have direct interner access and can therefore just look up based
on a symbol index. We could reserve references for situations where
exposing an interner would be undesirable.
Anyway, more to come...
This introduces the beginnings of frontends for TAMER, gated behind a
`wip-features` flag.
This will be introduced in stages:
1. Replace the existing copy with a parser-based copy (echo back out the
tokens), when the flag is on.
2. Begin to parse portions of the source, augmenting the output xmlo (xmli
at the moment). The XSLT-based compiler will be modified to skip
compilation steps as necessary.
As portions of the compilation are implemented in TAMER, they'll be placed
behind their own feature flags and stabilized, which will incrementally
remove the compilation steps from the XSLT-based system. The result should
be substantial incremental performance improvements.
Short-term, the priorities for loading identifiers into an IR
are (though the order may change):
1. Echo
2. Imports
3. Extern declarations.
4. Simple identifiers (e.g. param, const, template, etc).
5. Classifications.
6. Documentation expressions.
7. Calculation expressions.
8. Template applications.
9. Template definitions.
10. Inline templates.
After each of those are done, the resulting xmlo (xmli) will have fully
reconstructed the source document from the IR produced during parsing.
We want to be able to build a representation of the dependency graph so
we can easily inspect it.
We do not want to generate GraphML by default; it is better to use a
tool for that. We use "petgraph-graphml".
This variant is unnecessary, as it was used only by the indexer to represent
the absence of a node, for which we can simply use `None` in the containing
`Option`.
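The change amounts to this (types illustrative):

  #[derive(Clone)]
  enum Object {
      Ident(String),
      // ...other variants; `Empty` is gone.
  }

  fn with_capacity(capacity: usize) -> Vec<Option<Object>> {
      vec![None; capacity] // previously seeded with Some(Object::Empty)
  }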
* tamer/Cargo.toml: Add `lazy_static`.
* tamer/Cargo.lock: Update.
* tamer/src/ir/asg/base.rs (with_capacity): Use `None` in place of
`Some(Object::Empty)`.
* tamer/src/ir/asg/object.rs: Adjust state machine graphic.
(Empty): Remove variant.
(Missing): Remove reference to removed variant.
* tamer/src/lib.rs: Import `lazy_static` for test builds.
* tamer/obj/xmle/writer/writer.rs (Section::iter): Remove `Object::Empty`
from documentation.
(test::): Remove references to `Object::Missing`. `lazy_static!` used
here.
* tamer/obj/xmle/writer/xmle.rs (test::write_section_catch_missing): Replace
reference to `Object::Missing`.
We want to add an option to set the output file to the linker so we do
not need to redirect output to awk any longer.
This also adds integration tests for tameld.
Contrary to what I said previously, this replaces the previous
implementation with an arena-backed internment system. The motivation for
this change was investigating how Rustc performed its string interning, and
why they chose to associate integer identifiers with symbols.
The intent was originally to use Rustc's arena allocator directly, but that
crate pulled in far too many dependencies and depended on nightly
Rust. Bumpalo provides a very similar implementation to Rustc's
DroplessArena, so I went with that instead.
Rustc also relies on a global, singleton interner. I do not do that
here. Instead, the returned Symbol carries a lifetime of the underlying
arena, as well as a pointer to the interned string.
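A minimal sketch of that arrangement (not the actual API):

  use std::collections::HashSet;
  use bumpalo::Bump;

  struct Interner<'a> {
      arena: &'a Bump,
      set: HashSet<&'a str>,
  }

  impl<'a> Interner<'a> {
      fn new(arena: &'a Bump) -> Self {
          Self { arena, set: HashSet::new() }
      }

      // The returned reference carries the arena's lifetime.
      fn intern(&mut self, s: &str) -> &'a str {
          if let Some(&interned) = self.set.get(s) {
              return interned;
          }
          let interned: &'a str = self.arena.alloc_str(s);
          self.set.insert(interned);
          interned
      }
  }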
Now that this is put to rest, it's time to move on.
For strings of any notable length, Fx Hash outperforms FNV. Rustc also
moved to this hash function and noticed performance
improvements. Fortunately, as was accounted for in the design, this was a
trivial switch.
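The switch itself looks something like this (names illustrative),
since the design left the interner generic over the hasher:

  use std::collections::HashSet;
  use std::hash::BuildHasherDefault;
  use fxhash::FxHasher;

  // Swapping FNV for Fx is just a change of type alias.
  type FxHashSet<T> = HashSet<T, BuildHasherDefault<FxHasher>>;

  fn main() {
      let mut set: FxHashSet<&str> = FxHashSet::default();
      set.insert("interned");
      assert!(set.contains("interned"));
  }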
Here are some benchmarks to back up that claim:
test hash_set::fnv::with_all_new_1000 ... bench: 133,096 ns/iter (+/- 1,430)
test hash_set::fnv::with_all_new_1000_with_capacity ... bench: 82,591 ns/iter (+/- 592)
test hash_set::fnv::with_all_new_rc_str_1000_baseline ... bench: 162,073 ns/iter (+/- 1,277)
test hash_set::fnv::with_one_new_1000 ... bench: 37,334 ns/iter (+/- 256)
test hash_set::fnv::with_one_new_rc_str_1000_baseline ... bench: 18,263 ns/iter (+/- 261)
test hash_set::fx::with_all_new_1000 ... bench: 85,217 ns/iter (+/- 1,111)
test hash_set::fx::with_all_new_1000_with_capacity ... bench: 59,383 ns/iter (+/- 752)
test hash_set::fx::with_all_new_rc_str_1000_baseline ... bench: 98,802 ns/iter (+/- 1,117)
test hash_set::fx::with_one_new_1000 ... bench: 42,484 ns/iter (+/- 1,239)
test hash_set::fx::with_one_new_rc_str_1000_baseline ... bench: 15,000 ns/iter (+/- 233)
test hash_set::with_all_new_1000 ... bench: 137,645 ns/iter (+/- 1,186)
test hash_set::with_all_new_rc_str_1000_baseline ... bench: 163,129 ns/iter (+/- 1,725)
test hash_set::with_one_new_1000 ... bench: 59,051 ns/iter (+/- 1,202)
test hash_set::with_one_new_rc_str_1000_baseline ... bench: 37,986 ns/iter (+/- 771)
This is missing two key things that I'll add shortly: a HashMap-based one
for use in the ASG for node mapping, and an entry-based system for
manipulations.
This has been a nice start for exploring various aspects of Rust
development, as well as conventions that I'd like to implement. In
particular:
- Robust documentation intended to guide people through learning the
necessary material about the compiler, as well as related work to
rationalize design decisions;
- Benchmarks;
- TDD;
- And just getting used to Rust in general.
I've beat this one to death, so I'll commit this and make smaller changes
going forward to show how easily it can evolve.
(This module was originally named `intern` but this commit and those that
follow rewrote it to `sym`.)
This makes use of Petgraph for representing the dependency graph and uses a
separate data structure for both string interning and indexing by symbol
name.