This converts disjunctive classifications into conjunctive ones and places
an <any> within them.
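As an illustrative sketch (TAME-style markup; the names and attributes here
are from memory, not necessarily the compiler's exact input), a disjunctive
classification such as

  <classify as="example" any="true" desc="Example">
    <match on="foo" />
    <match on="bar" />
  </classify>

would be rewritten into a conjunctive classification whose body is a single
<any> wrapping the original predicates:

  <classify as="example" desc="Example">
    <any>
      <match on="foo" />
      <match on="bar" />
    </any>
  </classify>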
This ends up handling all the generated qwhen classifications from proguic,
which were probably converted into <any> by a previous optimization pass.
The UI program I've been using to test these compiler optimizations has
decreased in size to 8.2MiB over the course of this branch; we started at
~16MiB.
See comments. This is meant to help mitigate the damage done by one of our
code generation systems. The benefit is significant: the code generator
gets to remain simple, and by placing this optimization within the
compiler, hand-written and template-generated code benefit as well.
Rather than extracting every any/all into its own classification, eliminate
it (replacing it with its body) if it contains only one predicate. This is
most likely to happen after template expansion, and there were an alarming
number of them in our system.
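As a sketch (same illustrative TAME-style markup as above), an <any> or
<all> containing a single predicate adds nothing, so

  <any>
    <match on="foo" />
  </any>

is replaced simply by its body:

  <match on="foo" />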
Stripping them out of one of our programs saved ~0.2MiB of output and
removed many intermediate classifications. It removed ~1,075 lines, which
should correspond closely to the number of classifications eliminated.
Discovering this required stripping the template barriers, which was done in
a previous commit.
Unfortunately, the performance improvement from this wasn't significant,
largely because the nondeterminism of GC can easily mask the
gains. But a new line `v8::internal::FixedArray::set(int,
v8::internal::Object)` appeared in the profiler output, making me wonder
whether the JIT is starting to understand more interesting properties of the
system.
`mprotect` and `v8::internal::heap_internals::GenerationalBarrier` also
appeared, which are related to GC.
(Message from the future: this ends up being reintroduced, with the new
classification system placed behind a feature toggle. But it will be
eliminated eventually.)
This is a major milestone for class optimization---the old anyValue-based
system is no longer in use; the classification system has been wholly
rewritten.
The ticks in the sampling profiler are now where they should be, and
further optimization can proceed on a much more solid foundation.
[JavaScript]:
ticks total nonlib name
5 0.6% 3.0% LazyCompile: *vu [...]/ui/package.strip.js:25191:16
5 0.6% 3.0% LazyCompile: *M [...]/ui/package.strip.js:25267:15
3 0.4% 1.8% LazyCompile: *vmu [...]/ui/package.strip.js:25144:17
3 0.4% 1.8% LazyCompile: *ve [...]/ui/package.strip.js:25204:16
2 0.2% 1.2% LazyCompile: *precision [...]/ui/package.strip.js:25137:23
2 0.2% 1.2% LazyCompile: *me [...]/ui/package.strip.js:25178:16
2 0.2% 1.2% LazyCompile: *cmatch [...]/ui/package.strip.js:25495:20
2 0.2% 1.2% LazyCompile: *ceq [...]/ui/package.strip.js:25273:17
1 0.1% 0.6% LazyCompile: *init_defaults [...]/ui/package.strip.js:25624:27
1 0.1% 0.6% LazyCompile: *MM [...]/ui/package.strip.js:25268:16
1 0.1% 0.6% LazyCompile: *E [...]/ui/package.strip.js:25239:15
1 0.1% 0.6% LazyCompile: *<anonymous> [...]/ui/package.strip.js:25184:13
1 0.1% 0.6% LazyCompile: *<anonymous> [...]/ui/package.strip.js:25171:13
Much better than the 102 ticks that anyValue was taking some time ago!
A lot of time used to be spent compiling functions as well; much of that
was removed by previous commits, bringing us to:
[C++]:
ticks total nonlib name
50 5.9% 30.5% node::contextify::ContextifyContext::CompileFunction(v8::FunctionCallbackInfo<v8::Value> const&)
20 2.4% 12.2% write
9 1.1% 5.5% node::native_module::NativeModuleEnv::CompileFunction(v8::FunctionCallbackInfo<v8::Value> const&)
6 0.7% 3.7% __pthread_cond_timedwait
4 0.5% 2.4% mmap
All of this work has simplified the output enough to make obvious a slew
of other optimizations that can be done in future work, though a lot of
that may wait for TAMER, since performing them in XSLT will be difficult
and not performant; the compiler is slow enough as it is.
This shaves ~1m off of the total build time for our largest system. Output
remains impressively slow.
Around this point in time, we have the following profile from V8's sampling
profiler:
[JavaScript]:
ticks total nonlib name
36 2.8% 10.7% LazyCompile: *anyValue [...]/ui/package.strip.new.js:31020:22
3 0.2% 0.9% LazyCompile: *m1v1u [...]/ui/package.strip.new.js:30941:19
2 0.2% 0.6% LazyCompile: *precision [...]/ui/package.strip.new.js:30934:23
1 0.1% 0.3% LazyCompile: *vu [...]/ui/package.strip.new.js:30964:16
1 0.1% 0.3% LazyCompile: *init_defaults [...]/ui/package.strip.new.js:31341:27
This allows us to easily see their shape by looking at the compiled
code. See the previous commit, and future commits, for more of an
explanation and examples.
This allows us to analyze the compiler runlog and determine the frequency of
certain shapes to prioritize optimization efforts.
This is a proof-of-concept. It also contains arrow functions, which do not
exist in ES5.
The notation m#v#s# refers to matrix, vector, and scalar counts of a
classification. This optimization therefore focuses on classifications with
a single vector and a single matrix.
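For example, an m1v1 classification (in the same illustrative markup as
earlier; the identifiers here are hypothetical) matches against exactly one
matrix and one vector:

  <classify as="example-m1v1" desc="Example m1v1 shape">
    <match on="some_matrix" />  <!-- m1: one matrix match -->
    <match on="some_vector" />  <!-- v1: one vector match -->
  </classify>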
I'd like to note that this commit message was written in retrospect, months
later, after I returned to these proof-of-concept commits to finalize
them. I'll try my best to have things make sense in a historical context
based on my notes.
The choice to focus on m1v1 was based on a survey of the shapes of
classifications in our largest rating system. m1v*, and specifically m1v1,
was the largest group by far, followed by v1s1. Here's an example program
used for a UI:
$ grep -h 'internal: [svm][0-9]\+[svm][0-9]\+ ' run*.log > result
$ cut -d' ' -f2 result | sort | uniq -c | sort -rn
10056 m1v1
1788 m1v2
473 v1s1
18 v2s1
13 v1s5
8 v1s3
7 v1s2
4 v2s5
2 v4s4
2 v4s2
2 v2s8
2 v2s6
2 v1s9
2 v1s4
1 v7s7
1 v6s2
1 v5s7
1 v5s5
1 v5s4
1 v5s2
1 v4s9
1 v4s7
1 v4s3
1 v3s9
1 v3s7
1 v3s5
1 v3s2
1 v3s1
1 v33s21
1 v2s60
1 v2s4
1 v2s3
1 v2s2
1 v28s1
1 v23s8
1 v22s9
1 v1s8
1 v1s6
1 v18s24
1 v15s14
1 v14s6
1 v14s5
1 v13s7
1 v13s6
1 v12s6
1 v11s1
1 m76v7
1 m3v1
1 m1v3
1 m1374v1
The excessively large ones (like the last one) are aggregate
classifications that are generated by a template. But note the first
count: m1v1 dwarfs everything else.
Here's another example, one of the raters:
8812 m1v1
311 v1s1
17 v2s1
14 v1s5
4 v2s5
4 v1s6
4 v11s10
3 v3s1
3 v1s8
2 v5s14
2 v4s7
2 v3s9
2 v3s5
2 v2s4
2 v1s9
2 v1s4
2 v1s2
1 v8s7
1 v7s7
1 v7s15
1 v6s4
1 v6s2
1 v6s10
1 v5s8
1 v5s7
1 v5s4
1 v5s2
1 v53s9
1 v4s9
1 v4s4
1 v4s3
1 v4s2
1 v4s11
1 v3s8
1 v3s7
1 v3s20
1 v3s2
1 v3s19
1 v3s15
1 v2s8
1 v2s60
1 v2s6
1 v2s2
1 v2s12
1 v29s20
1 v28s1
1 v23s8
1 v1s3
1 v15s23
1 v13s6
1 v13s20
1 v12s6
1 v12s10
1 v11s1
1 m1v2
1 m1s1
Given these examples, m1v1 is an easy first choice for this commit.
The general pattern for this commit and those that follow is to match on a
specific shape of classification that we're optimizing for, falling back to
the old anyValue-based system for all other cases, with the intent of
eventually removing it.
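In XSLT terms, that pattern looks roughly like this (purely illustrative;
the mode, priorities, and especially the shape test are hypothetical
stand-ins, not the compiler's actual templates):

  <!-- specialized codegen for a recognized shape
       (the @shape attribute is a stand-in for the real test) -->
  <xsl:template mode="compile" priority="7"
                match="classify[ @shape = 'm1v1' ]">
    <!-- emit the optimized form -->
  </xsl:template>

  <!-- everything else falls back to the generic
       anyValue-based codegen -->
  <xsl:template mode="compile" priority="1" match="classify">
    <!-- emit the old anyValue-based form -->
  </xsl:template>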
This has long been a curse, and I don't know why I didn't resolve it sooner.
This makes explicit some of the odd things that this is doing, to maintain
the previous behavior. Changing that behavior would be ideal, but ought to
be done separately and put behind a feature flag.
This reverts commit e2d9467633bb75d79dbc8fe9f8971bfa412ea59f.
BUT: it does cause more data to be returned, perhaps unnecessarily. See if
that may offset the slight increase in GC cost.
Further, we may end up getting rid of some of these generated values; check
after we do some class optimizations.
This was a waste of time; unintuitively enough, it actually reduced
performance slightly and increased GC.
Reverting, but leaving the commit here for reference.
When the Summary Page was _first written_ (the first part of TAME), it was
compiled in the browser---development consisted of refreshing the page,
similar to how we wrote PHP at the time. There was no compile process.
In that situation, we couldn't have the XSLT stylesheet failing to
translate. But of course those days are long since gone, and this must be a
compile-time error.
It shouldn't ever get to this point, granted.