Merge branch 'jira-7134'
* jira-7134:
[DEV-7134] Remove unnecessary node replacement
[DEV-7134] Propagate errors from the writer
[DEV-7134] Propagate sorting errors
[DEV-7134] Propagate errors setting fragments
[DEV-7134] Pass read event errors up the stack
[DEV-7134] Return error for XmloEvent::SymDecl
[DEV-7134] Add alias for LoadResult
[DEV-7134] Remove unwrap so we can bubble up error messages
[DEV-7134] Escalate the error from finding the absolute path
If we cannot set a fragment, we need to display the error to the user.
We are currently ignoring "___head", "___tail", and objects that are
both virtual and overridden. Those will be corrected with future
changes.
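The general pattern throughout the commits above is to replace unwraps
with `Result` propagation; a minimal, hypothetical sketch (not code from
this change):

    use std::fs::File;
    use std::io::{self, Read};

    // Rather than panicking via unwrap, the error bubbles up with `?`
    // so the caller can format and display it for the user.
    fn read_source(path: &str) -> io::Result<String> {
        let mut buf = String::new();
        File::open(path)?.read_to_string(&mut buf)?;
        Ok(buf)
    }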
We want to add an option to the linker to set the output file so that we
no longer need to redirect output to awk.
This also adds integration tests for tameld.
We will continue to finalize this as we go. It is currently used in
production, both for performance and because it fixes a bug in the
XSLT-based linker.
All systems should be using the provided Makefile, so this shouldn't be
invoked anymore. The new linker is still considered a proof-of-concept, but
bugs have been encountered in the old one that are not worth investing the
time into fixing.
The new linker has been used in production for nearly a couple of months
and is functioning properly.
This begins to introduce the ASG, backed by Petgraph. The API will continue
to evolve, and Petgraph will likely be encapsulated so that our
implementation can vary independently from it (or even remove it in the
future).
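As a rough sketch of the sort of encapsulation intended (these names and
methods are illustrative, not the actual ASG API):

    use petgraph::graph::{DiGraph, NodeIndex};

    // Petgraph stays hidden behind a thin wrapper so the backing graph
    // implementation can be swapped out without touching callers.
    enum Object {
        Ident(String),
    }

    type ObjectRef = NodeIndex;

    struct Asg {
        graph: DiGraph<Object, ()>,
    }

    impl Asg {
        fn new() -> Self {
            Self { graph: DiGraph::new() }
        }

        fn declare(&mut self, name: &str) -> ObjectRef {
            self.graph.add_node(Object::Ident(name.into()))
        }

        fn add_dep(&mut self, from: ObjectRef, to: ObjectRef) {
            self.graph.add_edge(from, to, ());
        }
    }

    fn main() {
        let mut asg = Asg::new();
        let dep = asg.declare("dep");
        let sym = asg.declare("sym");
        asg.add_dep(sym, dep);
    }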
This introduces the reader for xmlo files produced by the XSLT-based
compiler. It is an initial implementation but is not complete; see future
commits.
One of the benefits of storing a reference to the interned string on the
symbol itself is that we can retrieve its underlying value essentially for
free.
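For illustration only (a hypothetical shape, not the actual definition):

    // Recovering the value is a plain field read; no interner lookup
    // is involved.
    struct Symbol<'i> {
        name: &'i str,
    }

    impl<'i> Symbol<'i> {
        fn lookup_value(&self) -> &'i str {
            self.name
        }
    }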
This ordering will simplify streaming processing of xmlo files in
TAMER. Specifically, we know that symbols will have been declared by the
time dependencies are added to the graph (and so we should only be creating
edges to existing nodes); and we can halt reading as soon as the closing
fragments tag is encountered, avoiding parsing the entirety of these massive
XML files.
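Roughly sketched (the event variants besides SymDecl and the loop shape
are assumptions for illustration, not the actual reader):

    // Hypothetical event shape for illustration only.
    enum XmloEvent {
        SymDecl(String),
        SymDep(String, String),
        Fragment(String, String),
        FragmentsEnd, // closing fragments tag
    }

    fn load(events: impl Iterator<Item = XmloEvent>) {
        for event in events {
            match event {
                // Declarations are guaranteed to precede dependencies,
                // so edges always reference existing nodes.
                XmloEvent::SymDecl(_name) => { /* add node */ }
                XmloEvent::SymDep(_from, _to) => { /* add edge */ }
                XmloEvent::Fragment(_name, _text) => { /* attach text */ }
                // Nothing of interest follows; stop here and skip
                // parsing the remainder of the document.
                XmloEvent::FragmentsEnd => break,
            }
        }
    }

    fn main() {
        load(
            vec![
                XmloEvent::SymDecl("foo".into()),
                XmloEvent::FragmentsEnd,
                XmloEvent::Fragment("foo".into(), "text".into()),
            ]
            .into_iter(),
        );
    }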
On one particularly large program, this cuts time down from ~0.333s to
~0.300s in the POC linker.
Contrary to what I said previously, this replaces the previous
implementation with an arena-backed internment system. The motivation for
this change was investigating how Rustc performed its string interning, and
why they chose to associate integer identifiers with symbols.
The intent was originally to use Rustc's arena allocator directly, but that
crate pulled in far too many dependencies and depended on nightly
Rust. Bumpalo provides a very similar implementation to Rustc's
DroplessArena, so I went with that instead.
Rustc also relies on a global, singleton interner. I do not do that
here. Instead, the returned Symbol carries a lifetime of the underlying
arena, as well as a pointer to the interned string.
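A compressed sketch of that shape, assuming the bumpalo crate and reusing
the Symbol shape sketched earlier (identifiers are illustrative, not the
actual API):

    use bumpalo::Bump;
    use std::collections::HashSet;

    // The symbol borrows from the arena, so its value is available
    // without consulting the interner again.
    struct Symbol<'i> {
        name: &'i str,
    }

    struct Interner<'i> {
        arena: &'i Bump,
        seen: HashSet<&'i str>,
    }

    impl<'i> Interner<'i> {
        fn new(arena: &'i Bump) -> Self {
            Self { arena, seen: HashSet::new() }
        }

        fn intern(&mut self, value: &str) -> Symbol<'i> {
            if let Some(&existing) = self.seen.get(value) {
                return Symbol { name: existing };
            }

            let stored: &'i str = self.arena.alloc_str(value);
            self.seen.insert(stored);
            Symbol { name: stored }
        }
    }

    fn main() {
        let arena = Bump::new();
        let mut interner = Interner::new(&arena);

        let a = interner.intern("foo");
        let b = interner.intern("foo");

        // Same arena allocation; the value is a free borrow.
        assert!(std::ptr::eq(a.name, b.name));
        assert_eq!(a.name, "foo");
    }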
Now that this is put to rest, it's time to move on.
For strings of any notable length, Fx Hash outperforms FNV. Rustc also
moved to this hash function and noticed performance
improvements. Fortunately, as was accounted for in the design, this was a
trivial switch.
Here are some benchmarks to back up that claim:
test hash_set::fnv::with_all_new_1000 ... bench: 133,096 ns/iter (+/- 1,430)
test hash_set::fnv::with_all_new_1000_with_capacity ... bench: 82,591 ns/iter (+/- 592)
test hash_set::fnv::with_all_new_rc_str_1000_baseline ... bench: 162,073 ns/iter (+/- 1,277)
test hash_set::fnv::with_one_new_1000 ... bench: 37,334 ns/iter (+/- 256)
test hash_set::fnv::with_one_new_rc_str_1000_baseline ... bench: 18,263 ns/iter (+/- 261)
test hash_set::fx::with_all_new_1000 ... bench: 85,217 ns/iter (+/- 1,111)
test hash_set::fx::with_all_new_1000_with_capacity ... bench: 59,383 ns/iter (+/- 752)
test hash_set::fx::with_all_new_rc_str_1000_baseline ... bench: 98,802 ns/iter (+/- 1,117)
test hash_set::fx::with_one_new_1000 ... bench: 42,484 ns/iter (+/- 1,239)
test hash_set::fx::with_one_new_rc_str_1000_baseline ... bench: 15,000 ns/iter (+/- 233)
test hash_set::with_all_new_1000 ... bench: 137,645 ns/iter (+/- 1,186)
test hash_set::with_all_new_rc_str_1000_baseline ... bench: 163,129 ns/iter (+/- 1,725)
test hash_set::with_one_new_1000 ... bench: 59,051 ns/iter (+/- 1,202)
test hash_set::with_one_new_rc_str_1000_baseline ... bench: 37,986 ns/iter (+/- 771)
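The switch itself amounts to swapping the hasher type parameter, e.g. with
the fxhash crate (a sketch; not necessarily the exact types used here):

    use fxhash::FxBuildHasher;
    use std::collections::HashSet;

    // Only the BuildHasher type parameter changes; the HashSet API and
    // the surrounding code stay the same.
    fn main() {
        let mut set: HashSet<&str, FxBuildHasher> =
            HashSet::with_hasher(FxBuildHasher::default());
        set.insert("foo");
        assert!(set.contains("foo"));
    }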
This will be used for generating the common tests between HashSet and
HashMap implementations.
This is my first macro in Rust. There does not seem to be a way to
concatenate identifiers (!), so I'm placing them within modules
instead. That ended up working out just fine, since I can then use a type
to provide the SUT.
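Roughly, the pattern looks like this (a hypothetical macro and test, not
the actual suite):

    // Since identifiers cannot be concatenated, each invocation wraps
    // its tests in a uniquely named module and exposes the SUT as a
    // type alias.
    macro_rules! common_set_tests {
        ($name:ident, $sut:ty) => {
            #[cfg(test)]
            mod $name {
                type Sut = $sut;

                #[test]
                fn stores_and_finds_a_value() {
                    let mut sut = Sut::default();
                    sut.insert("foo");
                    assert!(sut.contains("foo"));
                }
            }
        };
    }

    common_set_tests!(fnv, ::fnv::FnvHashSet<&'static str>);
    common_set_tests!(fx, ::fxhash::FxHashSet<&'static str>);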
This is missing two key things that I'll add shortly: a HashMap-based one
for use in the ASG for node mapping, and an entry-based system for
manipulations.
This has been a nice start for exploring various aspects of Rust
development, as well as conventions that I'd like to implement. In
particular:
- Robust documentation intended to guide people through learning the
necessary material about the compiler, as well as related work to
rationalize design decisions;
- Benchmarks;
- TDD;
- And just getting used to Rust in general.
I've beat this one to death, so I'll commit this and make smaller changes
going forward to show how easily it can evolve.
(This module was originally named `intern` but this commit and those that
follow rewrote it to `sym`.)
This is enabled by default in nightly, and is not available at all in
stable. Considering the PITA that it will be to go back and rewrite docs to
use the new format, and how important a feature this is, we will just
make use of it now.
Given that developers should be doing TDD and therefore running this target
frequently, this has the effect of providing immediate feedback when
formatting is needed and outputting a diff. Developers will then quickly
understand what changes need to be made to avoid future issues (and can run
`cargo fmt` to fix it), at which point they'll rarely encounter
formatting errors.
The original purpose was to ensure pipelines fail when the formatter has not
been run.