Commit Graph

1065 Commits (d710437ee4b3f5b395997ce6fa200c9e1807f6d2)

Author SHA1 Message Date
Mike Gerwitz ed245bb099 tamer: sym::prefill: Initial typed static symbol concept
We'll see how the syntax evolves over time.  It's not ideal to have to
specify the type, rather than having the compiler infer it, but I don't much
feel like getting into my first procedural macro right now, so we'll stick
with this approach for the time being.

This will set the stage to be able to safely e.g. create QNames statically
at compile-time and would allow us to make any attempts to bypass it
unsafe.
2021-09-23 00:37:39 -04:00
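As a rough illustration of the "typed static symbol" idea above, a declarative macro can require the type to be spelled out per symbol.  This is a hypothetical sketch only; the macro name, symbol newtypes, and strings below are illustrative assumptions, not TAMER's actual API:

  // Hypothetical sketch: each static symbol's type is written out
  // explicitly, since no procedural macro is inferring it for us.
  macro_rules! static_symbols {
      ($($name:ident: $ty:ident = $str:literal),* $(,)?) => {
          $(
              #[doc = concat!("Static symbol for the string `", $str, "`.")]
              pub const $name: $ty = $ty($str);
          )*
      };
  }

  // Illustrative symbol newtypes standing in for the real ones.
  pub struct QNameSym(pub &'static str);
  pub struct CIdentSym(pub &'static str);

  static_symbols! {
      S_PACKAGE: QNameSym = "package",
      S_TRUE: CIdentSym = "true",
  }
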
Mike Gerwitz b972b0b202 tamer: sym::StaticSymbolId: Introduce
Previously, we were allocating only u32 versions of `SymbolId` for the
statically allocated symbols.  This introduces a new symbol type with a very
small datatype (8 bits) that is able to cast into any `SymbolId`.  This is
explained in the docs.

We'll be taking this typing further in future commits so that static symbols
are better-suited for compile-time guarantees for static newtype
construction.

DEV-10710
2021-09-22 21:37:06 -04:00
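A minimal sketch of the "small id that casts into any `SymbolId`" idea above; the layout and bounds here are assumptions for illustration, not the actual definitions:

  #[derive(Clone, Copy, Debug, PartialEq, Eq)]
  pub struct StaticSymbolId(u8);

  #[derive(Clone, Copy, Debug, PartialEq, Eq)]
  pub struct SymbolId<Ix>(Ix);

  impl<Ix: From<u8>> From<StaticSymbolId> for SymbolId<Ix> {
      fn from(st: StaticSymbolId) -> Self {
          // Widening is lossless: any index that fits in 8 bits fits in
          // any larger index type (u16, u32, ...).
          SymbolId(Ix::from(st.0))
      }
  }

  fn main() {
      let sym: SymbolId<u32> = StaticSymbolId(1).into();
      assert_eq!(sym, SymbolId(1u32));
  }
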
Mike Gerwitz c87147c277 configure.ac: Bump Rust 1.{53=>54} for using macros in attribute values
The previous commit uses `concat!` for doc generation.  I forgot that this
was only recently stabilized.
2021-09-22 16:47:17 -04:00
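For reference, the 1.54 feature in question is macro invocations in attribute values; a small illustrative example (names are made up):

  macro_rules! doc_sym {
      ($name:ident, $s:literal) => {
          // `concat!` inside `#[doc = ...]` is what requires Rust 1.54.
          #[doc = concat!("Statically allocated symbol for `", $s, "`.")]
          pub static $name: &str = $s;
      };
  }

  doc_sym!(S_PACKAGE, "package");
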
Mike Gerwitz 366fef714b tamer: sym::prefill: Introduce static symbols
This is the beginning of static symbols, which is becoming increasingly
necessary as it's quite a pain to have to deal with interning static strings
any place they're used.

It's _more_ of a pain to do that in conjunction with newtypes (e.g. `QName`,
`AttValue`, etc) that make use of `SymbolId`; this will allow us to
construct _those_ statically as well, and additional work to support that
will be coming up.

DEV-10701
2021-09-22 16:08:40 -04:00
Mike Gerwitz e0a209d417 tamer: bench: xir: Reduce writer benchmark memory usage
These were using GiB of memory, which is ...unnecessary.

I reduced the iteration count significantly, but it was still wasting a lot
of time and memory and needed `with_capacity` to reduce the number of copies
after reallocation.

It is not typical that a buffer would contain this much information.
2021-09-21 16:21:32 -04:00
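The `with_capacity` point amounts to preallocating the output buffer so repeated writes don't keep reallocating and copying; a trivial sketch (the capacity figure is made up):

  fn main() {
      // Preallocate once rather than growing (and copying) repeatedly.
      let mut buf: Vec<u8> = Vec::with_capacity(64 * 1024);

      for _ in 0..1_000 {
          buf.extend_from_slice(b"<node attr=\"value\"/>");
      }

      assert!(buf.capacity() >= buf.len());
  }
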
Mike Gerwitz aee781a6fb tamer: bench: xir: Fix broken benchmark
This broke when I removed `SelfClose`.  I used to run
`make all fmt check bench` before every push, but those targets take a while
to run, in part because the benchmarks use nightly and have to recompile too.

But it looks like I need to be more diligent again.
2021-09-21 16:09:50 -04:00
Mike Gerwitz b348892276 tamer: ir::xir::tree: Introduce attribute fragment parsing
This is exactly what I said I was _not_ going to do in the previous commit,
but apparently hacking late at night had me forget the whole reason that
XIRT is being introduced now---unit tests.  I'll be emitting a XIR stream
and I need to parse it for convenience in the tests.

So, here's a good start.  Next will be some generalizations that are useful
for the tests as well.  This is pretty bare, but accomplishes the task.

See docs for more info.
2021-09-21 16:07:38 -04:00
Mike Gerwitz a5afc76568 tamer: ir::xir::tree: Extract Attr{,List} into new module
The `tree` module is getting more difficult to navigate.  The tests still
remain where they were, since a bunch of concerns are mixed together.  Any
tests specific only to this module will be added here.
2021-09-21 10:43:23 -04:00
Mike Gerwitz fe7b64fe62 tamer: ir::xir::tree::AttrName: Remove unused, rename {Ele=>}AttrName
Attributes used to be able to be emitted standalone, but that was abandoned
a while back to clean things up a bit.  This cleanup was missed.
2021-09-21 09:29:56 -04:00
Mike Gerwitz c6a7988bc8 tamer: ir::xir: Add Token::AttrValueFragment with writer support
This is implemented only for the writer, since its use case is to be able to
concatenate strings without copying during writing.

It doesn't really make sense to support this in XIR Tree, since a reader
should never produce this.  But if we ever run into this (e.g. due to some
internal processing pipeline), we'll address it then; XIR Tree might have to
do copying, then, but should probably wait until encountering all fragments
before interning.  That'd be a distraction right now.
2021-09-21 00:16:30 -04:00
Mike Gerwitz e95afe2658 tamer: ir::xir::tree::Element::open: Fix doc typo 2021-09-21 00:16:30 -04:00
Mike Gerwitz 3bb6f0cf35 tamer: ir::asg::ident: AsRef impls for SymbolId types
This commit will make more sense once the broader context is committed, but
it's needed for lowering from `Sections` into a XIR stream.

This will also change once we pre-allocate symbols, like rustc, when the
interner is initialized.

This is my first use of the `paste` crate, which is used to generate
identifiers.  So this is partly an experiment, and it seems much better than
having to write a proc macro, at least at this point in time.  If this code
stays around, it'll probably be generalized further and used elsewhere, but
I'd prefer not to go this route long-term.
2021-09-20 16:50:40 -04:00
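A rough sketch of the kind of identifier generation `paste` enables, assuming the crate as a dependency; the macro and the generated types here are simplified assumptions, not the actual code:

  use paste::paste;

  macro_rules! symbol_newtypes {
      ($($name:ident),* $(,)?) => {
          paste! {
              $(
                  #[derive(Debug, Clone, Copy, PartialEq, Eq)]
                  pub struct [<$name SymbolId>](u32);

                  impl AsRef<u32> for [<$name SymbolId>] {
                      fn as_ref(&self) -> &u32 {
                          &self.0
                      }
                  }
              )*
          }
      };
  }

  // Expands to `IdentSymbolId` and `DepSymbolId`, each with an AsRef impl.
  symbol_newtypes!(Ident, Dep);
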
Mike Gerwitz 12daddcc2d tamer: ir::xir::tree::Element: Open element constructor
This simply moves the construction into `Element`.
2021-09-16 10:52:00 -04:00
Mike Gerwitz ea50e1112a tamer: ir::xir::tree: Extract tests into own file
This file's getting large, and will only grow more complex.
2021-09-16 10:18:02 -04:00
Mike Gerwitz 3484336b1d tamer: ir::xir::tree::Stack: Encapsulate ElementStack manipulation
This moves some logic into `ElementStack` (which would be part of `Stack` if
variants were their own types), rather than peering so deeply into its
data.
2021-09-16 10:07:37 -04:00
Mike Gerwitz a49ac23aeb tamer: ir::xir::tree: Child element attribute parsing
This correctly retains and restores the parent stack after processing an
attribute for a child element.

This does increase the size of [`Stack`] a bit, but we can evaluate whether
it's too large at a later time.  It's currently 832 bits with `Ix=u32`,
which is large, but the question is whether it matters; we'll see as we
begin to use it.
2021-09-15 16:46:15 -04:00
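One way to keep an eye on the size concern raised above is a size assertion in a test; a sketch only (the type below is a placeholder, and 832 bits is 104 bytes):

  use std::marker::PhantomData;
  use std::mem::size_of;

  // Placeholder standing in for the real parser `Stack<Ix>`.
  pub enum Stack<Ix> {
      Placeholder(PhantomData<Ix>),
  }

  #[test]
  fn stack_stays_within_expected_size() {
      // With the real variants in place, this would currently read 104
      // bytes (832 bits) for Ix = u32.
      assert!(size_of::<Stack<u32>>() <= 104);
  }
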
Mike Gerwitz 61e493066c tamer: ir::xir::tree: Clean up parser implementation
This moves most of the parsing logic into `Stack`, which rightfully owns the
stack manipulation and state transitions.  `ParserState` becomes exactly
what it says it is---a management of the persistent state of the parser, and
is also responsible for digesting tokens and dispatching their data to the
proper event.

This approach has a number of benefits over the old design: it's
self-documenting, making the intent clear; and it is easier to reason about
the subset of states (for both humans and Rust) than a large match of
transitions.

This contains a number of TODO items that will be addressed shortly.  It
also made obvious that the previous commit was incomplete---it doesn't persist
`pstack` for attributes on child elements!  That'll be fixed too.
2021-09-15 16:33:08 -04:00
Mike Gerwitz 366ecca8ea tamer: ir::xir::tree: Initial child element parsing
This modifies the tree parser to handle child elements.  It's mostly
proof-of-concept code; the next commit will clean it up a bit so that it's
largely self-documenting.
2021-09-15 11:19:08 -04:00
Mike Gerwitz 51507ccdad tamer: ir::xir: Combine Token::{SelfClose, Close} variants
This removes `SelfClose` and merges it with `Close` by making the first
parameter an `Option`.  This isn't really ideal, but it really simplifies
pattern matching, especially for the next commit.  I'll have more details
there.

The primary motivation was lack of stabilization for binding after `@` in
matches, e.g. `Foo(name, ele) | ele @ Element { name, .. }`.  It looks like
it's ready, though; maybe next Rust release?

  https://github.com/rust-lang/rust/issues/65490

I don't know if I'll revert this change after then.  This seems plenty
clear, albeit more verbose.
2021-09-13 13:06:20 -04:00
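A sketch of the shape this merge gives the token type and its matching; the names and payloads here are simplified assumptions (the real variants carry QNames and spans):

  enum Token {
      Open(&'static str),
      // `Close(None)` stands in for the old `SelfClose`;
      // `Close(Some(name))` is an ordinary closing tag.
      Close(Option<&'static str>),
  }

  fn render(tok: &Token) -> String {
      match tok {
          Token::Open(name) => format!("<{}", name),
          // A single variant now covers both closing forms.
          Token::Close(Some(name)) => format!("</{}>", name),
          Token::Close(None) => "/>".to_string(),
      }
  }
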
Mike Gerwitz 1c40b9c504 tamer: ir::xir::tree: Closing element parsing with balance check
This introduces parser errors, but does not yet support error recovery; that
problem will be discussed in a commit in the near future, after the writer
is sorted out a bit more.

DEV-10561
2021-09-13 10:45:38 -04:00
Mike Gerwitz 5979e1fb90 tamer: ir::xir::tree: Correct italic formatting in docs
I was using an Org mode format.
2021-09-13 09:47:39 -04:00
Mike Gerwitz fd8a05164d tamer: ir::xir::tree: Remove Tree::Attr, add AttrList
The idea, previously, was that parsing could begin at attributes selectively
and be parsed independently.  But that's really awkward with `Tree`, since
it effectively allows orphan attributes as children of an
`Element`.  Nonsense.

Instead, if we truly only want an attribute list, we can offer a function to
create a parser with an empty `Stack::BuddingElement` that can accumulate
them.
2021-09-09 14:40:58 -04:00
Mike Gerwitz 4987bc39b0 tamer: ir::xir::tree::parser_from: Yield parsed trees
Previously, `parser_from` was a simple wrapper around `parse`; now, this
provides a more convenient API where `next` will yield the next parsed
object.

See docs for much more information and rationale.
2021-09-09 13:05:11 -04:00
Mike Gerwitz 1452a4186a tamer: convert: Add missing method-level docs 2021-09-08 16:12:53 -04:00
Mike Gerwitz 2586827d64 tamer: convert::{ExpectFrom, ExpectInto}: New traits
These traits are intended to eliminate boilerplate, primarily in tests, in
situations where from/into is not expected to fail.

Given that TAMER must only panic for internal compiler errors, this should
not often be used outside of test cases.  Further, there may be better
options in the future (e.g. QNames could be statically compiled rather than
trying to convert at runtime, in this case).
2021-09-08 16:03:44 -04:00
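A hedged sketch of what such a convenience trait can look like, as a blanket impl over `TryInto` that panics on failure; the name and bounds are assumptions:

  use std::convert::TryInto;
  use std::fmt::Debug;

  pub trait ExpectInto<T> {
      /// Like `try_into`, but panics; intended for tests where the
      /// conversion is not expected to fail.
      fn expect_into(self) -> T;
  }

  impl<S, T> ExpectInto<T> for S
  where
      S: TryInto<T>,
      <S as TryInto<T>>::Error: Debug,
  {
      fn expect_into(self) -> T {
          self.try_into()
              .expect("conversion was expected to succeed")
      }
  }

  // e.g. in a test: let qname: QName = "prefix:name".expect_into();
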
Mike Gerwitz 12bb88e4b5 tamer: ir::xir::tree: Introduce XIR tree
This begins to introduce the XIR tree.  I was originally going to wait on
this until after implementing the xmle writer in terms of XIR, but writing
unit tests is too much of a pain on the stream, so now is as good of a time
as any.

This has very limited support so far; it'll be added to as time goes on.
2021-09-08 13:56:04 -04:00
Mike Gerwitz ab093046e9 tamer: ir::asg::section: Provide iterators for major section groups
These groups happen to correspond with the sections of the xmle file, which
suggests again that this lives in the wrong place.  But I should really have
my focus elsewhere right now, so I don't know if I'll go any further right
now.  I guess we'll see as the writer is reimplemented.
2021-09-01 11:21:44 -04:00
Mike Gerwitz 1fa9614698 tamer: ir::asg::section: Improve iteration
`SectionsIter` was introduced to remove that responsibility from xmle
writer, since that's currently being reimplemented using XIR.

The existing iterator has been renamed SectionIter{ator=>} for a more
idiomatic name for iterator structs, and now has a static type rather than
relying on dynamic dispatch.  The author of that code wasn't sure how to
handle it otherwise.  (Which is understandable, since we were both still
getting acquainted with Rust.)  There's no notable change in performance in
my benchmarking.

This abstraction is a bit awkward, in that it's named for object file
sections, but they aren't actually object file sections.  Further, it's coupled with the ASG via
`SortableAsg` and perhaps should be generalized into a sorting routine that
takes a function for sorting, so that `Sections` can be moved into xmle's
packages.
2021-09-01 09:14:51 -04:00
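To illustrate the static-type point, here is a simplified stand-in (field names and item types are assumptions, not the real `Sections`):

  use std::iter::Chain;
  use std::slice::Iter;

  struct Sections {
      map: Vec<u32>,
      retmap: Vec<u32>,
  }

  // Old shape: dynamic dispatch through a boxed trait object.
  fn iter_dyn<'a>(s: &'a Sections) -> Box<dyn Iterator<Item = &'a u32> + 'a> {
      Box::new(s.map.iter().chain(s.retmap.iter()))
  }

  // New shape: the concrete iterator type is named statically.
  type SectionIter<'a> = Chain<Iter<'a, u32>, Iter<'a, u32>>;

  fn iter_static(s: &Sections) -> SectionIter<'_> {
      s.map.iter().chain(s.retmap.iter())
  }
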
Mike Gerwitz b80064f59e tamer: configure: Check for Rust 1.{52=>53}.
Or-pattern syntax is used; I had forgotten to bump this version.

For example, match on `Foo(Bar | Baz)` vs. `Foo(Bar) | Foo(Baz)`.
2021-08-30 15:19:14 -04:00
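The 1.53 feature being relied upon, spelled out with placeholder types:

  enum Inner {
      Bar,
      Baz,
      Quux,
  }

  enum Foo {
      Wrap(Inner),
  }

  fn is_bar_or_baz(f: &Foo) -> bool {
      // Nested or-patterns (Rust 1.53+); previously this had to be
      // written `Foo::Wrap(Inner::Bar) | Foo::Wrap(Inner::Baz)`.
      matches!(f, Foo::Wrap(Inner::Bar | Inner::Baz))
  }
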
Mike Gerwitz 9331858c6d doc: Give @mdash macro an argument
This macro is used to consume whitespace so that the following sentence can
start on the next line without producing any whitespace in the output.  Its
argument is, therefore, whitespace.

This used to work in earlier versions of Texinfo, but around 6.{6,7} it
began failing because an argument was provided when it wasn't defined with
one.
2021-08-30 10:41:49 -04:00
Mike Gerwitz 0a8fb71c1b tamer: tameld: Use buffered writes
This was an oversight.  The difference is significant.  I had my suspicions
about this when I noticed the huge difference in time between writing to
/dev/null vs. an actual file during profiling.

On one of our systems, here's the number of syscalls _before_ this change:

  $ strace -c target/release/tameld --emit xmle -o foo foo.xmlo
  % time     seconds  usecs/call     calls    errors syscall
  ------ ----------- ----------- --------- --------- ----------------
   85.05    4.966192          16    318473           write
    7.23    0.421977          13     32298           lstat
    6.53    0.381424          15     25113           read
    0.75    0.043691          13      3350           readlink
    0.25    0.014713          61       241           close
    0.12    0.007167          30       241           openat
    0.05    0.003175         151        21           munmap
    0.01    0.000488          14        35           brk
    0.01    0.000292           9        33           mmap
    0.00    0.000266          38         7           mremap
    0.00    0.000004           1         3           sigaltstack
    0.00    0.000000           0         6           fstat
    0.00    0.000000           0         1           poll
    0.00    0.000000           0        11           mprotect
    0.00    0.000000           0         7           rt_sigaction
    0.00    0.000000           0         1           rt_sigprocmask
    0.00    0.000000           0         6         6 access
    0.00    0.000000           0         1           execve
    0.00    0.000000           0         1           arch_prctl
    0.00    0.000000           0         1           sched_getaffinity
    0.00    0.000000           0         1           set_tid_address
    0.00    0.000000           0         1           set_robust_list
    0.00    0.000000           0         2           prlimit64
  ------ ----------- ----------- --------- --------- ----------------
  100.00    5.839389                379854         6 total

And _after_:

  $ strace -c target/release/tameld --emit xmle -o foo foo.xmlo
  % time     seconds  usecs/call     calls    errors syscall
  ------ ----------- ----------- --------- --------- ----------------
   45.21    0.435010          13     32298           lstat
   40.09    0.385752          15     25113           read
    6.14    0.059113          21      2809           write
    4.75    0.045687          14      3350           readlink
    2.51    0.024115         100       241           close
    0.84    0.008045          33       241           openat
    0.26    0.002468         118        21           munmap
    0.06    0.000580          17        35           brk
    0.06    0.000566          17        33           mmap
    0.03    0.000279          40         7           mremap
    0.02    0.000181          16        11           mprotect
    0.01    0.000087          15         6         6 access
    0.01    0.000082          12         7           rt_sigaction
    0.01    0.000075          13         6           fstat
    0.00    0.000027           9         3           sigaltstack
    0.00    0.000024          12         2           prlimit64
    0.00    0.000018          18         1           execve
    0.00    0.000016          16         1           poll
    0.00    0.000013          13         1           sched_getaffinity
    0.00    0.000012          12         1           rt_sigprocmask
    0.00    0.000012          12         1           arch_prctl
    0.00    0.000012          12         1           set_robust_list
    0.00    0.000011          11         1           set_tid_address
  ------ ----------- ----------- --------- --------- ----------------
  100.00    0.962185                 64190         6 total

What a difference!

There's still a lot of other red flags in there; those can be addressed
separately.

This was originally written as I was learning Rust, and I suspect that I
didn't realize that File wasn't buffered at the time.

For the above link: times go from 1.23s pre-change to 0.85s after:

  0.77user 0.44system 0:01.23elapsed 99%CPU (0avgtext+0avgdata 48520maxresident)k
  0inputs+43952outputs (0major+12825minor)pagefaults 0swaps

  0.69user 0.15system 0:00.85elapsed 98%CPU (0avgtext+0avgdata 48396maxresident)k
  0inputs+43952outputs (0major+12823minor)pagefaults 0swaps
2021-08-20 12:14:42 -04:00
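The change itself boils down to wrapping the output file in a `BufWriter`; a minimal sketch (the path and loop are illustrative only):

  use std::fs::File;
  use std::io::{BufWriter, Write};

  fn main() -> std::io::Result<()> {
      // Unbuffered, every write_all below would be its own write(2);
      // buffered, small writes accumulate and flush in large chunks.
      let mut out = BufWriter::new(File::create("foo")?);

      for _ in 0..100_000 {
          out.write_all(b"<small-fragment/>")?;
      }

      out.flush()?;
      Ok(())
  }
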
Mike Gerwitz c9a2ae533f tamer: xir (XmlWriter)[write_new]: Correct #[must_use] declaration
The return value has no meaningful side-effects at all; the write operation
failing isn't worth pointing out, since it has to be used regardless.

The normal `write` does have useful side-effects, of course.
2021-08-20 11:38:58 -04:00
Mike Gerwitz 59d578e669 tamer: xir (XmlWriter)[write_new]: New method
This change was primarily intended to clean up unit tests.  Since it
allocates and returns a new buffer, I do not expect this to have much use
within TAMER itself in the near future.  Maybe in later tooling.

If this is abused, person from the future: add `#[cfg(test)]` to its
definition.
2021-08-20 11:37:01 -04:00
Mike Gerwitz cd1eae95ca tamer: xir: {NodeStream=>Token}
I decided not to do this in a previous commit because I had documented
"NodeStream" elsewhere, so I'd like it to be in the Git history to
understand its evolution.

This never was a "Node" stream beyond the initial concept phase, because it
represents tokens that aren't themselves nodes.  It is intended to generate
XML nodes, but may need to accommodate non-nodes (e.g. XML declarations) in
the future.

The name originated from `Node`, which was a tree-based IR that was
initially conceived, but removed because it's not yet needed.  What we need
is a streaming IR for xmle writing, and then for reading and echoing back
out XML for the new frontend.
2021-08-20 10:30:27 -04:00
Mike Gerwitz a23bae5e4d tamer: XIR: Working concept
This is a working streaming IR for XML.  I want to get this committed before
I go further cleaning it up and integrating it into the xmle writer.

This is lacking detailed documentation, and the names of things may end up
changing.

Initial benchmarks do show that it has a ~2x performance improvement over
quick-xml when dealing with two attributes on a node, and I suspect that
improvement will increase with the number of attributes.  We will see how it
compares in real-world benchmarks once the linker has been modified to use
it.

The goal isn't to _avoid_ quick-xml---it'll be used in the future for things
like escaping that would be a huge waste to implement ourselves.  It just so
happened that quick-xml was not beneficial for these changes; indeed, its
own writer is fairly simple for the portions that were implemented here, so
there's no use in fighting with its API, particularly around attributes and
our need to explicitly control whitespace (with the intent of handling code
formatters in the future).

To put this into perspective: the reason this work is being done isn't to
refactor the linker, or to speed it up, but to generalize XML writing and
provide a suitable IR for use in the compiler.  The first step of the
frontend is to essentially echo the XML token stream back out so we can
incrementally parse it and do something useful, to incrementally rewrite the
compiler in Rust.
2021-08-20 10:16:36 -04:00
Mike Gerwitz c211ada89b tamer: benches (memchr): Add missing bench attr
This benchmark was not being run.
2021-08-19 23:14:33 -04:00
Mike Gerwitz e217478a46 tamer: Makefile.am (CARGO_BENCH_FLAGS): New env var 2021-08-19 16:43:14 -04:00
Mike Gerwitz fc235b7ecc tamer: memchr benches
This adds benchmarking for the memchr crate.  It is used primarily by
quick-xml at the moment, but the question is whether to rely on it for
certain operations for XIR.

The benchmarking on an Intel Xeon system shows that memchr and Rust's
contains() perform very similarly on small inputs, matching against a single
character, and so Rust's built-in should be preferred in that case so that
we're using APIs that are familiar to most people.

When larger inputs are compared against, there's a greater benefit (a little
under ~2x).

When comparing against two characters, they are again very close.  But look
at when we compare two characters against _multiple_ inputs:

  running 24 tests
  test large_str::one::memchr_early_match                 ... bench:       4,938 ns/iter (+/- 124)
  test large_str::one::memchr_late_match                  ... bench:      81,807 ns/iter (+/- 1,153)
  test large_str::one::memchr_non_match                   ... bench:      82,074 ns/iter (+/- 1,062)
  test large_str::one::rust_contains_one_byte_early_match ... bench:       9,425 ns/iter (+/- 167)
  test large_str::one::rust_contains_one_byte_late_match  ... bench:     123,685 ns/iter (+/- 3,728)
  test large_str::one::rust_contains_one_byte_non_match   ... bench:     123,117 ns/iter (+/- 2,200)
  test large_str::one::rust_contains_one_char_early_match ... bench:       9,561 ns/iter (+/- 507)
  test large_str::one::rust_contains_one_char_late_match  ... bench:     123,929 ns/iter (+/- 2,377)
  test large_str::one::rust_contains_one_char_non_match   ... bench:     122,989 ns/iter (+/- 2,788)
  test large_str::two::memchr2_early_match                ... bench:       5,704 ns/iter (+/- 91)
  test large_str::two::memchr2_late_match                 ... bench:      89,194 ns/iter (+/- 8,546)
  test large_str::two::memchr2_non_match                  ... bench:      85,649 ns/iter (+/- 3,879)
  test large_str::two::rust_contains_two_char_early_match ... bench:      66,785 ns/iter (+/- 3,385)
  test large_str::two::rust_contains_two_char_late_match  ... bench:   2,148,064 ns/iter (+/- 21,812)
  test large_str::two::rust_contains_two_char_non_match   ... bench:   2,322,082 ns/iter (+/- 22,947)
  test small_str::one::memchr_mid_match                   ... bench:       4,737 ns/iter (+/- 842)
  test small_str::one::memchr_non_match                   ... bench:       5,160 ns/iter (+/- 62)
  test small_str::one::rust_contains_one_byte_non_match   ... bench:       3,930 ns/iter (+/- 35)
  test small_str::one::rust_contains_one_char_mid_match   ... bench:       3,677 ns/iter (+/- 618)
  test small_str::one::rust_contains_one_char_non_match   ... bench:       5,415 ns/iter (+/- 221)
  test small_str::two::memchr2_mid_match                  ... bench:       5,488 ns/iter (+/- 888)
  test small_str::two::memchr2_non_match                  ... bench:       6,788 ns/iter (+/- 134)
  test small_str::two::rust_contains_two_char_mid_match   ... bench:       6,203 ns/iter (+/- 170)
  test small_str::two::rust_contains_two_char_non_match   ... bench:       7,853 ns/iter (+/- 713)

Yikes.

With that said, we won't be comparing against such large inputs
short-term.  The larger strings (fragments) are copied verbatim, and not
compared against---but they _were_ prior to the previous commit that stopped
unencoding and re-encoding.

So: Rust built-ins for inputs that are expected to be small.
2021-08-18 14:23:03 -04:00
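For context, these are the call shapes being benchmarked above (assuming the `memchr` crate as a dependency):

  // Rust built-in: preferred for small inputs, per the conclusion above.
  fn has_delim(haystack: &str) -> bool {
      haystack.contains('<')
  }

  // memchr: the single-needle search (the `::one::` cases above).
  fn find_delim(haystack: &[u8]) -> Option<usize> {
      memchr::memchr(b'<', haystack)
  }

  // memchr2: the two-needle search (the `::two::` cases above).
  fn find_either(haystack: &[u8]) -> Option<usize> {
      memchr::memchr2(b'<', b'&', haystack)
  }
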
Mike Gerwitz 1cdb3fbbc5 tamer: tameld: Skip fragment unescaping only to re-escape on write
Fragments' text was unescaped on reading, producing an owned String and
spending time parsing the text to unescape.  We were then copying that into
an internment pool (so, copying twice, effectively).

Further, we were then _re-escaping_ on write.

This was all wasteful, since we do not do any manipulation of the fragment
before outputting to the xmle file; we know that Saxon produced properly
escaped XML to begin with, and can trust to propagate it.

This also introduces a new global `clone_uninterned_utf8_unchecked` method.

In profiling this change, I tested (a) before this change, (b) after writing
without escaping, and (c) after both reading escaped and writing without
escaping.

     (a)              (b)              (c)
  sec   mem (B)    sec     B        sec     B
0:00.95 47896 -> 0:00.91 47988 -> 0:00.87 48288
0:00.40 30176 -> 0:00.37 25656 -> 0:00.36 25788
0:00.39 45672 -> 0:00.37 45756 -> 0:00.35 34952
0:00.39 20716 -> 0:00.38 19604 -> 0:00.36 19956
0:00.33 16836 -> 0:00.32 16988 -> 0:00.31 16892
0:00.23 15268 -> 0:00.23 15236 -> 0:00.22 15312
0:00.44 20780 -> 0:00.44 20048 -> 0:00.41 20148
0:00.54 44516 -> 0:00.50 36964 -> 0:00.49 36728
0:00.62 55976 -> 0:00.57 46204 -> 0:00.54 41468
0:00.31 28016 -> 0:00.30 27308 -> 0:00.28 23844
0:00.23 15388 -> 0:00.22 15316 -> 0:00.21 15304
0:00.05 4888  -> 0:00.05 4760  -> 0:00.05 4948
0:00.41 19756 -> 0:00.41 19852 -> 0:00.40 19992
0:00.47 20828 -> 0:00.46 20844 -> 0:00.44 20968
0:00.27 18152 -> 0:00.26 18184 -> 0:00.25 18312

Interestingly, the peak memory usage increases very slightly between the
second and third steps (though decreases from the first), likely because the
raw (encoded) is larger than the unencoded text (e.g. `&gt;` takes more
space than `>`).
2021-08-18 11:39:06 -04:00
Mike Gerwitz f97141f5c5 tamer: tameld: Use uninterned symbols for reader
Fragments were previously represented by `String` to avoid the cost of
interning (hashing and copying).  This change modifies it to use uninterned
symbols, which does still have a copy overhead but it does not hash.

Initial tests show a small performance decrease of about 15% and a small
memory increase of similar proportion.  However, once I realized that I was
not clearing buffers from quick_xml events and implemented that change in a
previous commit, this change ended up being approximately on par with
`String`, despite the copying of some pretty large fragments.

YMMV, though, and perhaps on less powerful systems time may increase
slightly.

The upcoming XIR (XML IR) was originally going to support both owned strings
and symbols, but now we'll just use uninterned symbols; I can't rationalize
complicating the API at this time when it will provide an almost
imperceivable performance benefit.  If ever that changes in the future,
that change will be entertained.

The end result is that the fate of a fragment's underlying memory is
determined by whatever is processing the data, _not_ by the API itself---the
API was previously forcing use of a String, whereas now it's up to the
caller to determine whether we want comparable interns.  For fragments,
that's not likely ever to be the case, especially considering that the
representation will change so drastically in the future.
2021-08-16 14:05:32 -04:00
Mike Gerwitz d96dcad7d8 tamer: tameld: Reduce peak memory usage
This clears the buffers used by quick_xml, which was apparently forgotten
during initial development (I think I expected it to re-use the previously
allocated space automatically).

This has significant effects in some cases.  For example, one of our UI
builds drops from ~9KiB to ~5KiB peak memory usage.  Other builds for larger
suppliers are only slightly affected because of some of their massive
fragments.
2021-08-16 13:38:14 -04:00
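The gist of the fix, against quick-xml's buffered event API of that era (the 0.2x series; later versions renamed the method), is clearing the event buffer on every iteration:

  use quick_xml::events::Event;
  use quick_xml::Reader;

  fn count_elements(xml: &str) -> usize {
      let mut reader = Reader::from_str(xml);
      let mut buf = Vec::new();
      let mut count = 0;

      loop {
          match reader.read_event(&mut buf) {
              Ok(Event::Start(_)) | Ok(Event::Empty(_)) => count += 1,
              Ok(Event::Eof) => break,
              Err(e) => panic!("XML error: {:?}", e),
              _ => (),
          }

          // The fix: without this, every event's bytes keep accumulating
          // in `buf`, inflating peak memory on large inputs.
          buf.clear();
      }

      count
  }
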
Mike Gerwitz ce233ac01d tamer: sym: Uninterned symbols
This adds support for uninterned symbols.  This came about as I was creating
Xir (not yet committed) where I had to decide if I wanted `SymbolId` for all
values, even though some values (e.g. large text blocks like compiled code
fragments for xmle files) will never be compared, and so would be wastefull
hashed.

Previous IRs used `String`, but that was clumsy; see documentation in this
commit for rationale.
2021-08-13 22:54:04 -04:00
Mike Gerwitz a008d11fb3 .gitlab-ci.yml (deploy): Deploy on main branch
The switch to the `main` branch follows our conventions for other
repositories as we switch to trunk-based development.

Given that main will always be in a deployable state, there's no use in
waiting for tags.
2021-08-13 15:16:40 -04:00
Mike Gerwitz 0ff0f88e5f tamer: Introduce span
This is an initial implementation optimized for expected use
cases.  Hopefully that pans out and doesn't come back to bite me.

Regarding the context: it only allows for interned paths atm, which are
strings (and so must be valid UTF-8, which is fine for us, but sucks for
something more general-purpose).  I'll be curious if the context needs
extension later on, or if different contexts will be stored in IRs (e.g. to
store a template application site as well as the location of the expansion
within the template body).
2021-08-13 15:16:39 -04:00
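A sketch of the 64-bit packing constraint mentioned above; the field widths here are assumptions for illustration, not the actual layout:

  #[derive(Debug, Clone, Copy, PartialEq, Eq)]
  pub struct Span {
      offset: u32, // byte offset into the source
      len: u16,    // length of the spanned region
      ctx: u16,    // interned context (e.g. source path) identifier
  }

  // The whole point: a Span stays freely copyable at 64 bits.
  const _: () = assert!(std::mem::size_of::<Span>() == 8);
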
Mike Gerwitz 29ab4b9bfc tamer: sym: Disallow SymbolId construction outside of module
SymbolIds must only be constructed by interners, otherwise we lose
confidence in the type.

This offers an associated function to construct raw SymbolIds from integers
for testing purposes.
2021-08-13 11:54:11 -04:00
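A sketch of the encapsulation being described; the module layout and the test helper's name are assumptions:

  mod sym {
      #[derive(Debug, Clone, Copy, PartialEq, Eq)]
      pub struct SymbolId(u32);

      impl SymbolId {
          // Private: only interner code within this module can mint ids,
          // preserving confidence in the type.
          fn new(id: u32) -> Self {
              SymbolId(id)
          }

          /// Construct a raw SymbolId from an integer, bypassing the
          /// interner.  Intended for test cases only.
          pub fn test_from_int(id: u32) -> Self {
              Self::new(id)
          }
      }
  }
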
Mike Gerwitz d11b4220b2 Revert "tamer: Cargo.toml (dependencies)[lazy_static]: Remove (now used)"
This reverts commit 4fd6313cd2.

...and now I need it for tests.
2021-08-12 16:08:34 -04:00
Mike Gerwitz 4fd6313cd2 tamer: Cargo.toml (dependencies)[lazy_static]: Remove (now used)
The previous commit removed all uses.
2021-08-11 16:26:36 -04:00
Mike Gerwitz 9deb393bfd tamer: Global interners
This is a major change, and I apologize for it all being in one commit.  I
had wanted to break it up, but doing so would have required a significant
amount of temporary work that was not worth doing while I'm the only one
working on this project at the moment.

This accomplishes a number of important things, now that I'm preparing to
write the first compiler frontend for TAMER:

  1. `Symbol` has been removed; `SymbolId` is used in its place.
  2. Consequently, symbols use 16 or 32 bits, rather than a 64-bit pointer.
  3. Using symbols no longer requires dereferencing.
  4. **Lifetimes no longer pollute the entire system! (`'i`)**
  5. Two global interners are offered to produce `SymbolStr` with `'static`
     lifetimes, simplifying lifetime management and borrowing where strings
     are still needed.
  6. A nice API is provided for interning and lookups (e.g. "foo".intern())
     which makes this look like a core feature of Rust.

Unfortunately, making this change required modifications to...virtually
everything.  And that serves to emphasize why this change was needed:
_everything_ used symbols, and so there's no use in not providing globals.

I implemented this in a way that still provides for loose coupling through
Rust's trait system.  Indeed, Rustc offers a global interner, and I decided
not to go that route initially because it wasn't clear to me that such a
thing was desirable.  It didn't become apparent to me, in fact, until the
recent commit where I introduced `SymbolIndexSize` and saw how many things
had to be touched; the linker evolved so rapidly as I was trying to learn
Rust that I lost track of how bad it got.

Further, this shows how the design of the internment system was a bit
naive---I assumed certain requirements that never panned out.  In
particular, everything using symbols stored `&'i Symbol<'i>`---that is, a
reference (usize) to an object containing an index (32-bit) and a string
slice (128-bit).  So it was a reference to a pretty large value, which was
allocated in the arena alongside the interned string itself.

But, that was assuming that something would need both the symbol index _and_
a readily available string.  That's not the case.  In fact, it's pretty
clear that interning happens at the beginning of execution, that `SymbolId`
is all that's needed during processing (unless an error occurs; more on that
below); and it's not until _the very end_ that we need to retrieve interned
strings from the pool to write either to a file or to display to the
user.  It was horribly wasteful!

So `SymbolId` solves the lifetime issue in itself for most systems, but it
still requires that an interner be available for anything that needs to
create or resolve symbols, which, as it turns out, is still a lot of
things.  Therefore, I decided to implement them as thread-local static
variables, which is very similar to what Rustc does itself (Rustc's are
scoped).  TAMER does not use threads, so the resulting `'static` lifetime
should be just fine for now.  Eventually I'd like to implement `!Send` and
`!Sync`, though, to prevent references from escaping the thread (as noted in
the patch); I can't do that yet, since the feature has not yet been
stabilized.

In the end, this leaves us with a system that's much easier to use and
maintain; hopefully easier for newcomers to get into without having to deal
with so many complex lifetimes; and a nice API that makes it a pleasure to
work with symbols.

Admittedly, the `SymbolIndexSize` adds some complexity, and we'll see if I
end up regretting that down the line, but it exists for an important reason:
the `Span` and other structures that'll be introduced need to pack a lot of
data into 64 bits so they can be freely copied around to keep lifetimes
simple without wreaking havoc in other ways, but a 32-bit symbol size needed
by the linker is too large for that.  (Actually, the linker doesn't yet need
32 bits for our systems, but it's going to in the somewhat near future
unless we optimize away a bunch of symbols...but I'd really rather not have
the linker hit a limit that requires a lot of code changes to resolve).

Rustc uses interned spans when they exceed 8 bytes, but I'd prefer to avoid
that for now.  Most systems can just use one of the `PkgSymbolId` or
`ProgSymbolId` type aliases and not have to worry about it.  Systems that
are actually shared between the compiler and the linker do, though, but it's
not like we don't already have a bunch of trait bounds.

Of course, as we implement link-time optimizations (LTO) in the future, it's
possible most things will need the size and I'll grow frustrated with that
and possibly revisit this.  We shall see.

Anyway, this was exhausting...and...onward to the first frontend!
2021-08-11 14:24:55 -04:00
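A much-compressed sketch of the shape described above: a thread-local interner plus an extension trait so that `"foo".intern()` reads like a built-in.  The map-plus-leak approach is purely illustrative; the real interner is arena-backed and its types differ:

  use std::cell::RefCell;
  use std::collections::HashMap;

  #[derive(Debug, Clone, Copy, PartialEq, Eq)]
  pub struct SymbolId(u32);

  thread_local! {
      static INTERNER: RefCell<HashMap<&'static str, SymbolId>> =
          RefCell::new(HashMap::new());
  }

  pub trait Intern {
      fn intern(self) -> SymbolId;
  }

  impl<'a> Intern for &'a str {
      fn intern(self) -> SymbolId {
          INTERNER.with(|pool| {
              let mut map = pool.borrow_mut();

              if let Some(&sym) = map.get(self) {
                  return sym;
              }

              let sym = SymbolId(map.len() as u32 + 1);

              // Leaking stands in for arena allocation in this sketch.
              map.insert(Box::leak(self.to_owned().into_boxed_str()), sym);
              sym
          })
      }
  }

  fn main() {
      assert_eq!("foo".intern(), "foo".intern());
      assert_ne!("foo".intern(), "bar".intern());
  }
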
Mike Gerwitz 71011f5724 tamer: sym: Split into multiple modules
This helps to organize a bit better as I prepare to introduce singleton
interners.
2021-08-02 23:54:37 -04:00
Mike Gerwitz 01722c9c3b tamer: Symbol{Index=>Id}
The former was a misnomer (it represents an index _entry_).  This name is
also shorter, which is nice, considering how often it'll be used.
2021-07-30 13:32:32 -04:00