This is an exciting performance optimization that seems to have eluded me
for a surprisingly long time, given that the realization was quite random.
ease.js accomplishes much of its work through a method wrapper---each and
every method definition (well, until now) was wrapped in a closure that
performed a number of steps, depending on the type of wrapper involved:
1. All wrappers perform a context lookup, binding to the instance's private
member object of the class that defined that particular method. (See
"Implementation Details" in the manual for more information.)
2. This context is restored upon returning from the call: if a method
returns `this', it is instead converted back to the context in which
the method was invoked, which prevents the private member object from
leaking out of a public interface.
3. In the event of an override, this.__super is set up (and torn down).
There are other details (e.g. the method wrapper used for method proxies),
but for the sake of this particular commit, those are the only ones that
really matter. There are a couple of important details to notice:
- Private members are only ever accessible from within the context of the
private member object, which is always the context when executing a
method.
- Private methods cannot be overridden, as they cannot be inherited.
Consequently:
1. We do not need to perform a context lookup: we are already in the proper
context.
2. We do not need to restore the context, as we never needed to change it
to begin with.
3. this.__super is never applicable.
Method wrappers are therefore never necessary for private methods, and so
they have been removed.
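Roughly, the idea looks like this (a conceptual sketch only, with
hypothetical names; the actual ease.js internals differ and also handle
this.__super and method proxies):

  // sketch: what a wrapper conceptually does for a non-private method
  function wrapMethod( method, lookupPrivateContext )
  {
      return function()
      {
          // (1) bind to the private member object of the defining class
          var context = lookupPrivateContext( this ),
              result  = method.apply( context, arguments );

          // (2) prevent the private member object from leaking out of a
          //     public interface: if the method returned `this', hand
          //     back the invoking context instead
          return ( result === context ) ? this : result;
      };
  }

  // sketch: a private method needs none of the above---we are already
  // in the private context when it is called---so it is assigned
  // directly, with no wrapper
  function definePrivateMethod( privateMemberObject, name, method )
  {
      privateMemberObject[ name ] = method;
  }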
This has some interesting performance implications. While in most cases the
overhead of method wrapping is not a bottleneck, it can have a strong impact
in the event of frequent method calls or heavily recursive algorithms. There
was one particular problem that ease.js suffered from, which is mentioned in
the manual: recursive calls to methods in ease.js were not recommended
because
(a) each method call made two function calls, effectively halving the
remaining call stack size, and
(b) tail call optimization could not be performed, because recursion
invoked the wrapper, *not* the function that was wrapped.
By removing the method wrapper on private methods, we solve both of these
problems; heavily recursive algorithms now need only recurse through private
methods (which can always be exposed through a protected or public API) to
entirely avoid any performance penalty from using ease.js.
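For example, a heavily recursive algorithm might now be structured like this
(a hypothetical class; the definition syntax is standard ease.js, but the
example itself is illustrative only):

  var Class = require( 'easejs' ).Class;

  var Factorial = Class( 'Factorial',
  {
      // public API simply delegates to the private implementation
      'public compute': function( n )
      {
          return this._fact( n, 1 );
      },

      // recursion happens on the private method, which is now invoked
      // directly (no wrapper), consuming one stack frame per step
      // instead of two and keeping the recursive call in tail position
      'private _fact': function( n, acc )
      {
          return ( n <= 1 ) ? acc : this._fact( n - 1, n * acc );
      }
  } );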
Running the test cases on my system (your results may vary) before and after
the patch, we have:
BEFORE:
0.170s (x1000 = 0.0001700000s each): Declare 1000 anonymous classes with
private members
0.021s (x500000 = 0.0000000420s each): Invoke private methods internally
AFTER:
0.151s (x1000 = 0.0001510000s each): Declare 1000 anonymous classes with
private members
0.004s (x500000 = 0.0000000080s each): Invoke private methods internally
This is all the more motivation to use private members, which enforce
encapsulation; keep in mind that, because the use of private members is the
ideal in well-encapsulated and well-factored code, ease.js has been designed
to perform best under those circumstances.
These will be supported in future versions; this is not something that I
want to rush, nor is it something I want to hold up the first GNU release
for; it is likely to be a much lesser-used feature.
The parser methods are now split into their own functions. This has a number
of benefits: the most immediate is the commit that will follow. The second
benefit is that the functions are no longer closures---all context
information is passed into them, and so they can be optimized by the
JavaScript engine accordingly.
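Roughly speaking, the change is of this form (an illustrative sketch, not
the actual parser code):

  // before: a closure over surrounding parser state
  function createMemberParser( state )
  {
      return function parseMember( name, value )
      {
          // implicitly depends on `state' from the enclosing scope
          state.members[ name ] = value;
      };
  }

  // after: a standalone function; all context is passed in explicitly,
  // making it easier for the JavaScript engine to optimize
  function parseMember( state, name, value )
  {
      state.members[ name ] = value;
  }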
The concept of stacked traits already existed in previous commits, but until
now, mixins could not be stacked without some ugly errors. This also allows
mixins to be stacked atop themselves, duplicating their effect. This would
naturally have limited use, but it's there.
This differs slightly from Scala. For example, consider this ease.js mixin:
C.use( T ).use( T )()
This is perfectly valid---it has the effect of stacking T twice. In reality,
ease.js is doing this:
- C' = C.use( T );
- new C'.use( T );
That is, each call to `use' creates another class with T mixed in.
Scala, on the other hand, complains in this situation:
new C with T with T
will produce an error stating that "trait T is inherited twice". You can
work around this, however, as follows:
class Ca extends T
new Ca with T
In fact, this is precisely what ease.js is doing, as mentioned above; the
"use.use" syntax is merely shorthand for this:
new C.use( T ).extend( {} ).use( T )
Just keep that in mind.
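Putting that together (T and C below are hypothetical placeholders, defined
only to make the snippet self-contained):

  var Class = require( 'easejs' ).Class,
      Trait = require( 'easejs' ).Trait;

  // trivial trait and class used purely for illustration
  var T = Trait( { 'public who': function() { return 'T'; } } );
  var C = Class( {} );

  // shorthand: each call to `use' yields a new anonymous class with T
  // mixed in, stacking T twice
  var a = C.use( T ).use( T )();

  // longhand equivalent of the shorthand above
  var b = C.use( T ).extend( {} ).use( T )();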
More information on this implementation and the rationale behind it will
appear in the manual. See future commits.
(Note the TODOs; return values aren't quite right here, but that will be
handled in the next commit.)
This is a consequence of ease.js' careful trait implementation, which
ensures that any mixed-in trait retains its API in the same manner that
interfaces and supertypes do.
There do not seem to be tests for any of the metadata at present; they are
implicitly tested through various implementations that make use of them.
This will also be the case here ("will"---in coming commits), but that needs
to change.
The upcoming reflection implementation would be an excellent time to do so.
This will allow for additional processing before actually triggering the
warnings. For the sake of this commit, though, we simply keep the existing
functionality.
This adds the `weak' keyword and permits abstract method definitions to
appear in the same definition object as the concrete implementation. This
should never be used with hand-written code---it is intended for code
generators (e.g. traits) that do not know if a concrete implementation will
be provided, and would waste cycles duplicating the property parsing that
ease.js will already be doing. It also allows code generators themselves to
be more concise.
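As a rough sketch of the kind of definition object this is intended to
support (hypothetical class; the keyword order shown is an assumption), a
code generator may emit both a weak abstract declaration and, if one turns
out to exist, the concrete implementation:

  var Class = require( 'easejs' ).Class;

  var C = Class( 'C',
  {
      // weak: expected to yield to a concrete (strong) definition of
      // the same method, if one is present
      'weak abstract public foo': [ 'a' ],

      // concrete implementation provided in the same definition object
      'public foo': function( a )
      {
          return a;
      }
  } );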
Note that, even though it's permitted, the validator still needs to be
modified to permit useful cases. In particular, I need weak abstract and
strong concrete methods for use in traits.
This just saves some time and memory in the event that the trait is never
actually used, which may be the case in libraries or dynamically loaded
modules.
As described in <https://savannah.gnu.org/task/index.php#comment3>.
The benefit of this approach over definition object merging is primarily
simplicity---we're re-using much of the existing system. We may provide
tighter integration eventually for performance reasons (this is a
proof-of-concept), but this is an interesting start.
This also allows us to study and reason about traits by building off of
existing knowledge of composition; the documentation will make mention of
this to explain design considerations and issues of tight coupling
introduced by mixing in traits.
Note the incomplete tests. These are very important, but the current state
at least demonstrates conceptually how this will work (and is even useful
as-is, albeit dangerous and far from ready for production).
This is a rough concept showing how traits will be used at definition time
by classes (note that this does not yet address how they will be ``mixed
in'' at the time of instantiation).
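In rough terms, the definition-time usage being explored looks something
like this (hypothetical trait and class; the API is still in flux at this
point):

  var Class = require( 'easejs' ).Class,
      Trait = require( 'easejs' ).Trait;

  var T = Trait(
  {
      'public greet': function()
      {
          return 'hello from T';
      }
  } );

  // the trait is mixed in when the class is defined, not when it is
  // instantiated
  var C = Class.use( T ).extend(
  {
      'public run': function()
      {
          return this.greet();
      }
  } );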
More generally, this was a problem with not recursing on *all* of the
visibility objects of the supertype's supertype; the public visibility
object was implicitly recursed upon through JavaScript's natural prototype
chain, so this only manifested itself with protected members.