A long time ago (about a decade), package names were required; they are now
generated by the compiler relative to the root path. The name here was
incorrect, generating an incorrect path for the linked symbols and causing
problems with the Summary Page.
See RELEASES.md for a list of changes.
This was a significant effort that began about six months ago, but was
paused at a number of points. Rather than risking further pauses from
interruptions, the new classification system has been gated behind a
package-level feature flag, since it causes BC breaks in certain buggy
situations.
Since this flag was introduced late in development, there is the potential
for bugs when the new optimizations are mixed with the old system.
This largely reintroduces the legacy classification system, but there are a
number of things that are not affected by the flag. For example:
1. Alias classifications are still optimized when the flag is off;
2. Classifications without predicates emit slightly different code than
before, though their functionality has not changed;
3. There's been a lot of refactoring and minor optimizations that are
unaffected by the flag;
4. lv:match/@pattern will now emit a warning; and
5. Cleaning and casting of input data is not gated.
This allows us to incrementally migrate to the new system where behavior may
differ, but it is admittedly a bit dangerous: the new system was aggressively
tested and reasoned about as a whole, so reintroducing parts of the legacy
system alongside it may combine in unexpected ways.
This is another significant milestone.
The next logical step with classification optimization is to inline all of
those intermediate classifications generated from any/all blocks, since
there are so many of them. This means having the parent classification
absorb all of their dependencies; not outputting dependencies for those
classifications; not compiling their assignments; and inlining them at the
match site. They're used only once, since one is generated for each
individual block.
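As a rough sketch of what this means for the compiled output (the names and
shapes here are hypothetical, not actual compiler output):

    // Before: the intermediate classification generated from an inner
    // all block gets its own assignment and symbol, referenced once.
    function classifyBefore(premium, state, exempt) {
        var subAll0 = (premium > 100) && (state === 'CA');
        return subAll0 || exempt;
    }

    // After: the assignment is stripped and its expression substituted
    // directly at the match site; no symbol or dependencies are output
    // for the intermediate at all.
    function classifyAfter(premium, state, exempt) {
        return ((premium > 100) && (state === 'CA')) || exempt;
    }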
We need to keep the actual classification generation around (and just inline
them) for now, probably until TAMER, because we depend upon their symbol for
determining their dimensionality, which we need for the optimization work we
just did---we must inline them into the proper group (matrix, vector, or
scalar).
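To illustrate the dimensionality groups (a sketch under assumed semantics;
the actual generated helpers differ):

    // An inlined expression must land in the group matching its
    // dimensionality: scalars are plain booleans, vectors apply the
    // match element-wise, and matrices apply it row-wise.
    const scalarMatch = (x)   => x === 1;
    const vectorMatch = (xs)  => xs.map(scalarMatch);
    const matrixMatch = (xss) => xss.map(vectorMatch);

    // matrixMatch([[1, 2], [1]]) => [[true, false], [true]]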
The optimization work done up to this point had inlining in mind---only a
little bit of work was needed to make sure that every classification can
simply be stripped of its assignment and be a valid expression that can be
inlined in place of the original reference.
The result of that was predictably significant for the `ui/package` program
that I've been testing with:
- 4,514 classifications were inlined;
- The file size dropped to 7.5MiB (from 8.2MiB previously---remember that
we started at 16MiB); and
- GC ticks were cut in half, from 67 to 31.
Unfortunately, this optimization added nearly 1m of time to the compilation
of that program. Speaking from the future: the UI build optimizations in
liza-proguic were introduced to offset this difference (and provide a net
gain in performance).
This converts disjunctive classifications into conjunctive ones and places
an <any> within them.
This ends up handling all of the generated qwhen classifications from
proguic, which were probably converted into <any> by a previous optimization
pass.
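In terms of the compiled predicates, the rewrite looks roughly like this
(illustrative only; the real transformation happens at the markup level by
nesting the matches under <any>):

    // Before: a disjunctive classification ORs its matches directly.
    const disjunctive = (a, b, c) => a || b || c;

    // After: the classification itself is conjunctive (AND over its
    // predicates), with the original matches moved under a single <any>;
    // a conjunction over that one <any> yields the same result.
    const conjunctive = (a, b, c) => [a || b || c].every(Boolean);

    console.assert(disjunctive(false, true, false)
                       === conjunctive(false, true, false));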
The UI program I've been using to test these compiler optimizations has
decreased in size to 8.2MiB since the beginning of this branch; we started
at ~16MiB.
See comments. This is meant to help mitigate the damage done by one of our
code generation systems. The benefit is significant, allowing the code
generator to remain simple. By placing this optimization within the
compiler, hand-written and template-generated code also benefits.
Rather than extracting each any/all into its own classification, eliminate
it (replacing it with its body) if it contains only one predicate. This is
most likely to happen after template expansion, and there were an alarming
number of them in our system.
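The equivalence that justifies this is trivial: an any or all block
containing a single predicate is the identity on that predicate (a sketch
with hypothetical helpers):

    // any(p) === p and all(p) === p for a lone predicate p, so the
    // wrapper can be replaced by its body without changing semantics.
    const anyOf = (...ps) => ps.some(Boolean);
    const allOf = (...ps) => ps.every(Boolean);

    [true, false].forEach((p) => {
        console.assert(anyOf(p) === p);
        console.assert(allOf(p) === p);
    });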
Stripping them out of one of our programs saved ~0.2MiB of output, and
removed many intermediate classifications. It removed ~1,075 lines, which
should correspond closely to the actual number of classifications.
Discovering this required stripping the template barriers, which was done in
a previous commit.
Unfortunately, the performance improvement from this wasn't significant,
largely because of the nondeterminism of GC, which can easily mask the
gains. But a new line, `v8::internal::FixedArray::set(int,
v8::internal::Object)`, appeared in the profiler output, making me wonder
whether the JIT is starting to understand more interesting properties of the
system.
`mprotect` and `v8::internal::heap_internals::GenerationalBarrier` also
appeared, which are related to GC.
(Message from the future: this ends up being reintroduced, with the new
classification system placed behind a feature toggle; but it will be
eliminated eventually.)
This is a major milestone for class optimization---the old anyValue-based
system is no longer in use; the classification system has been wholly
rewritten.
The ticks in the sampling profiler are now where they should be, and further
optimization can proceed on a much more solid foundation.
[JavaScript]:
ticks total nonlib name
5 0.6% 3.0% LazyCompile: *vu [...]/ui/package.strip.js:25191:16
5 0.6% 3.0% LazyCompile: *M [...]/ui/package.strip.js:25267:15
3 0.4% 1.8% LazyCompile: *vmu [...]/ui/package.strip.js:25144:17
3 0.4% 1.8% LazyCompile: *ve [...]/ui/package.strip.js:25204:16
2 0.2% 1.2% LazyCompile: *precision [...]/ui/package.strip.js:25137:23
2 0.2% 1.2% LazyCompile: *me [...]/ui/package.strip.js:25178:16
2 0.2% 1.2% LazyCompile: *cmatch [...]/ui/package.strip.js:25495:20
2 0.2% 1.2% LazyCompile: *ceq [...]/ui/package.strip.js:25273:17
1 0.1% 0.6% LazyCompile: *init_defaults [...]/ui/package.strip.js:25624:27
1 0.1% 0.6% LazyCompile: *MM [...]/ui/package.strip.js:25268:16
1 0.1% 0.6% LazyCompile: *E [...]/ui/package.strip.js:25239:15
1 0.1% 0.6% LazyCompile: *<anonymous> [...]/ui/package.strip.js:25184:13
1 0.1% 0.6% LazyCompile: *<anonymous> [...]/ui/package.strip.js:25171:13
Much better than the 102 ticks that anyValue was taking some time ago!
A lot of time used to be spent compiling functions as well; much of that was
removed by previous commits, bringing us to:
[C++]:
ticks total nonlib name
50 5.9% 30.5% node::contextify::ContextifyContext::CompileFunction(v8::FunctionCallbackInfo<v8::Value> const&)
20 2.4% 12.2% write
9 1.1% 5.5% node::native_module::NativeModuleEnv::CompileFunction(v8::FunctionCallbackInfo<v8::Value> const&)
6 0.7% 3.7% __pthread_cond_timedwait
4 0.5% 2.4% mmap
All of this work has simplified the output enough that it has made apparent
a slew of other optimizations that can be done in future work, though a lot
of that may wait for TAMER, since performing them in XSLT would be difficult
and not performant; the compiler is slow enough as it is.
This shaves ~1m off of the total build time for our largest system. Writing
the output remains impressively slow (note the `write` ticks in the profile
above).
Around this point in time, we have the following profile from V8's sampling
profiler:
[JavaScript]:
ticks total nonlib name
36 2.8% 10.7% LazyCompile: *anyValue [...]/ui/package.strip.new.js:31020:22
3 0.2% 0.9% LazyCompile: *m1v1u [...]/ui/package.strip.new.js:30941:19
2 0.2% 0.6% LazyCompile: *precision [...]/ui/package.strip.new.js:30934:23
1 0.1% 0.3% LazyCompile: *vu [...]/ui/package.strip.new.js:30964:16
1 0.1% 0.3% LazyCompile: *init_defaults [...]/ui/package.strip.new.js:31341:27