This redefines the index as a function (rather than a vanilla object) so
that it may be invoked to yield an ease.js Class that wraps the given
prototype.
This is a bugfix; the bug was introduced in v0.2.3.
In ECMAScript 5, reserved keywords can be used to reference the field of an
object in dot notation (e.g. method.super), but in ES3 this is prohibited;
in these cases, method['super'] must be used. To maintain ES3 compatibility,
GNU ease.js will use the latter within its code.
Of course, if you are developing software that need not support ES3, then
you can use the dot notation yourself in your own code.
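For example (a minimal illustration of the two notations; the `method`
object here is just a placeholder):
```javascript
var method = { 'super': function() { return 'parent'; } };

// ES5 and later: reserved words may be used as property names in dot
// notation (this line will not even parse in an ES3 engine)
method.super();

// ES3-safe equivalent, which is what ease.js uses internally
method['super']();
```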
This does not sway my decision to use `super`---ES3 will soon (hopefully)
become extinct, and would be already were it not for the terrible influence
of Microsloth's outdated browsers.
Specifically, aae51ecf, which introduces deepEqual changes for comparing
argument objects; in particular, this change:
```javascript
if ((aIsArgs && !bIsArgs) || (!aIsArgs && bIsArgs))
  return false;
```
Since I was comparing an array with an arguments object, deepEqual failed.
While such an implementation may confuse users---since argument objects are
generally treated like arrays---the distinction is important and I do agree
with the change.
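To illustrate (assuming a Node version that includes the change above):
```javascript
var assert = require( 'assert' );

function getArgs()
{
    return arguments;
}

// previously this comparison passed; it now throws an AssertionError,
// since only one of the two operands is an arguments object
assert.deepEqual( getArgs( 1, 2 ), [ 1, 2 ] );
```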
This is a bug fix.
If the provided object's constructor is an ease.js type, then the
conventional rules will apply (as mentioned in the test docblock and in the
manual); however, if it's just a vanilla ECMAScript object, then the interop
compatibility checks will be used instead.
The manual already states that this is the case; unfortunately, it
lies---this was apparently overlooked, and is a bug.
A solution for this problem took a disproportionately large amount of time,
attempting many different approaches and still arriving at a kluge; this is
indicative of a larger issue---we've long since breached the comfort of the
original design, and drastic refactoring is needed.
I have ideas for this, and have already started in another branch, but I
cannot put this implementation off any longer while waiting for it.
Sorry for anyone waiting on the next release: this is what held it up, in
combination with my attention being directed elsewhere during that time (see
the sparse commit timestamps). Including this ordering guarantee is very
important for a stable, well-designed [trait] system.
See test cases for rationale.
I'm not satisfied with this implementation, but the current state of ease.js
makes this kind of thing awkward. Will revisit at some point.
This is an important feature to permit trait reuse without excessive
subtyping---composition over inheritance. For example, consider that you
have an `HttpPlainAuth` trait that adds authentication support to some
transport layer. Without parameterized traits, you have three options:
1. Expose setters for credentials
2. Trait closure
3. Extend the trait (not yet supported)
The first option seems like a simple solution:
```javascript
Transport.use( HttpPlainAuth )()
    .setUser( 'username', 'password' )
    .send( ... );
```
But we are now in the unfortunate situation that our initialization
procedure has changed. This, for one, means that authentication logic must
be added to anything that instantiates classes that mix in `HttpPlainAuth`.
We'll explore that in more detail momentarily.
More concerning with this first method is that, not only have we prohibited
immutability, but we have also made our code reliant on *invocation order*;
`setUser` must be called before `send`. What if we have other traits mixed
in that have similar conventions? Normally, this is the type of problem that
would be solved with a builder, but would we want every configurable trait
to return a new `Transport` instance? All that on top of littering the
API---what a mess!
The second option is to store configuration data outside of the Trait acting
as a closure:
```javascript
var _user, _pass;
function setCredentials( user, pass ) { _user = user; _pass = pass; }
Trait( 'HttpPlainAuth', { /* use _user and _pass */ } )
```
There are a number of problems with this; the most apparent is that, in this
case, the variables `_user` and `_pass` act in place of static fields---all
instances will share that data, and if the data is modified, it will affect
all instances; you are therefore relying on external state, and mutability
is forced upon you. You are also left with an awkward `setCredentials` call
that is disjoint from `HttpPlainAuth`.
The other notable issue arises if you did want to support instance-specific
credentials. You would have to use ease.js' internal identifiers (which are
undocumented and subject to change in future versions), and would likely
accumulate garbage data as mixin instances are deallocated, since ECMAScript
does not have destructor support.
To recover from memory leaks, you could instead create a trait generator:
```javascript
function createHttpPlainAuth( user, pass )
{
    return Trait( { /* ... */ } );
}
```
This uses the same closure concept, but generates new traits at runtime.
This has various implications depending on your engine, and may thwart
future ease.js optimization attempts.
The third (which will be supported in the near future) is prohibitive: we'll
add many unnecessary traits that are a nightmare to develop and maintain.
Parameterized traits are similar in spirit to option three, but without
creating new traits on each call: traits now accept configuration data at
the time of mixin, which will be passed to every new instance:
```javascript
Transport.use( HttpPlainAuth( user, pass ) )()
    .send( ... );
```
Notice now how the authentication configuration is isolated to the actual
mixin, *prior to* instantiation; the caller performing instantiation need
not be aware of this mixin, and so the construction logic can remain wholly
generic for all `Transport` types.
It further allows for a convenient means of providing useful, reusable
exports:
```javascript
var ServerFooAuth = HttpPlainAuth( userfoo, passfoo ),
    ServerBarAuth = HttpPlainAuth( userbar, passbar );

module.exports = {
    ServerFooAuth:      ServerFooAuth,
    ServerBarAuth:      ServerBarAuth,
    ServerFooTransport: Transport.use( ServerFooAuth ),
    // ...
};

// elsewhere, in a consuming module:
var foo = require( 'foo' );

// dynamic auth
Transport.use( foo.ServerFooAuth )().send( ... );

// or predefined classes
foo.ServerFooTransport().send( ... );
```
Note that, in all of the above cases, the initialization logic is
unchanged---the caller does not need to be aware of any authentication
mechanism, nor should the caller care about its existence.
So how do you create parameterized traits? You need only define a `__mixin`
method:
```javascript
Trait( 'HttpPlainAuth', { __mixin: function( user, pass ) { ... } } );
```
The method `__mixin` will be invoked upon instantiation of the class into
which a particular configuration of `HttpPlainAuth` is mixed; it was
named differently from `__construct` to make clear that (a) traits cannot be
instantiated and (b) the constructor cannot be overridden by traits.
A configured parameterized trait is said to be an *argument trait*; each
argument trait's configuration is discrete, as was demonstrated by
`ServerFooAuth` and `ServerBarAuth` above. Once a parameterized trait is
configured, its arguments are stored within the argument trait and those
arguments are passed, by reference, to `__mixin`. Since any mixed in trait
can have its own `__mixin` method, this permits traits to have their own
initialization logic without the need for awkward overrides or explicit
method calls.
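Putting the pieces together, a configured parameterized trait might look
roughly like the following (a sketch based on the examples above; the
`Transport` stand-in and the comment bodies are mine):
```javascript
var easejs = require( 'easejs' ),
    Class  = easejs.Class,
    Trait  = easejs.Trait;

// minimal stand-in for the Transport class used in the examples above
var Transport = Class( 'Transport',
{
    'public send': function( data )
    {
        // ...
    }
} );

var HttpPlainAuth = Trait( 'HttpPlainAuth',
{
    // invoked once per instantiation of the mixing class, receiving the
    // arguments that the trait was configured with
    __mixin: function( user, pass )
    {
        // apply the credentials (e.g. set up an Authorization header)
    }
} );

// ServerFooAuth is an argument trait: a discrete configuration of
// HttpPlainAuth
var ServerFooAuth = HttpPlainAuth( 'userfoo', 'passfoo' );

// __mixin( 'userfoo', 'passfoo' ) runs when this instance is created
Transport.use( ServerFooAuth )().send( 'ping' );
```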
Existing functionality is maintained using toString. This is necessary to
support type checking, and to be more consistent with the native Symbol
implementation.
Technically `new FallbackSymbol` is supposed to throw a TypeError, just as
`new Symbol` would, but doing so complicates the constructor unnecessarily,
so I am not going to bother with that here.
This is the closest we will get to implementing a concept similar to symbols
in pre-ES6. The intent is that, in an ES5 environment, the caller should
ensure that the object receiving this key will mark it as non-enumerable.
Otherwise, we're out of luck.
The symbol string is pseudo-randomly generated with an attempt to reduce the
likelihood of field collisions and malicious Math.{floor,random} overwrites
(so long as they are clean at the time of loading the module).
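The general idea is roughly the following (an approximation of the
approach, not the actual FallbackSymbol source; the key format is
arbitrary):
```javascript
// Math.floor and Math.random are referenced at module load time, so
// overwriting them later has no effect on key generation
var floor  = Math.floor,
    random = Math.random;

// generate a pseudo-random key intended to be unlikely to collide with
// ordinary field names
function fallbackKey( desc )
{
    return '@@__$' + ( desc || '' ) + '_' +
        floor( random() * 1e10 ).toString( 36 );
}

var key = fallbackKey( 'mykey' ),
    obj = {};

// in ES5 environments, the caller should hide the field from enumeration
Object.defineProperty( obj, key, {
    value:      'some value',
    enumerable: false,
    writable:   true
} );
```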
Strict mode fails on `typeof` for undefined variables, which was used
outside of strict mode for exactly the purpose of checking for undefined
variables! This check will work in either case.
During its initial development, no environments (e.g. Node.js, Chromium,
Firefox) supported strict mode; this has since changed, and node has a
--use-strict option, which is used in the test runner to ensure conformance.
See the introduced test cases for great detail; there was a problem with the
implementation where only the public API of the abstract trait object was
being checked, meaning that protected virtual methods were not found when
performing the call. This was not a problem on override, because that proxies
to the protected member object (PMO), which includes protected members.
This fixes a bug that caused virtual definitions with parameters to fail on
classes that a trait is mixed into, and prevented proper param length
validations in the reverse case.
See the test case description for a clearer explanation.
This was a nasty bug that I discovered when working on a project at work
probably over a year ago. I had worked around it, but ease.js was largely
stalled at the time; with it revitalized by GNU, it's finally getting fixed.
See test case comments for more information.
This is something that I've been aware of for quite some time, but never got
around to fixing; ease.js had stalled until it was revitalized by becoming a
GNU project.
This allows separation of concerns and makes the type system extensible. If
the type does not implement the necessary API, it falls back to using
instanceof.
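Conceptually, the check behaves something like this (the names here,
including `__isInstanceOf`, are illustrative rather than the actual
internal API):
```javascript
function isInstanceOf( type, instance )
{
    // if the type provides its own instance check, delegate to it; this
    // keeps the type system open to extension by other libraries
    if ( type && typeof type.__isInstanceOf === 'function' )
    {
        return type.__isInstanceOf( instance );
    }

    // otherwise, fall back to the conventional check
    return ( instance instanceof type );
}
```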
`make perf` will build, by default, perf.log, but you may also build perf.*;
for example:
$ make perf.1
# make some changes
$ make perf.2
This allows comparing changes easily.
Styled for display to the user as the tests are running, but data are written to
perf.out for additional processing.
You can style the perf.out file cleanly using:
$ column -ts\| perf.out
This is an exciting performance optimization that seems to have eluded me
for a surprisingly long time, given that the realization was quite random.
ease.js accomplishes much of its work through a method wrapper---each and
every method definition (well, until now) was wrapped in a closure that
performed a number of steps, depending on the type of wrapper involved:
1. All wrappers perform a context lookup, binding to the instance's private
   member object of the class that defined that particular method. (See
   "Implementation Details" in the manual for more information.)
2. This context is restored upon returning from the call: if a method
   returns `this', it is instead converted back to the context in which
   the method was invoked, which prevents the private member object from
   leaking out of a public interface.
3. In the event of an override, this.__super is set up (and torn down).
There are other details (e.g. the method wrapper used for method proxies),
but for the sake of this particular commit, those are the only ones that
really matter. There are a couple of important details to notice:
- Private members are only ever accessible from within the context of the
  private member object, which is always the context when executing a
  method.
- Private methods cannot be overridden, as they cannot be inherited.
Consequently:
1. We do not need to perform a context lookup: we are already in the proper
   context.
2. We do not need to restore the context, as we never needed to change it
   to begin with.
3. this.__super is never applicable.
Method wrappers are therefore never necessary for private methods, and so
they have been removed.
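To make the distinction concrete, the wrapper logic for public and
protected methods is conceptually along these lines (a simplified sketch,
not the actual implementation):
```javascript
function wrapMethod( method, getPrivateContext, getSuper )
{
    return function()
    {
        // (1) context lookup: bind to the private member object of the
        //     defining class
        var context = getPrivateContext( this );

        // (3) set up __super for the duration of the call (if overriding)
        context.__super = getSuper && getSuper( this );

        var result = method.apply( context, arguments );

        delete context.__super;

        // (2) context restoration: never leak the private member object
        //     through a public interface
        return ( result === context ) ? this : result;
    };
}

// private methods skip all of the above: they are stored and invoked
// directly on the private member object, so a call is just a call
```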
This has some interesting performance implications. While in most cases the
overhead of method wrapping is not a bottleneck, it can have a strong impact
in the event of frequent method calls or heavily recursive algorithms. There
was one particular problem that ease.js suffered from, which is mentioned in
the manual: recursive calls to methods in ease.js were not recommended
because it
(a) made two function calls for each method call, effectively halving the
    remaining call stack size, and
(b) tail call optimization could not be performed, because recursion
    invoked the wrapper, *not* the function that was wrapped.
By removing the method wrapper on private methods, we solve both of these
problems: heavily recursive algorithms now need only recurse through private
methods (which can always be exposed through a protected or public API) to
avoid any ease.js performance penalty entirely.
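For example (a hedged illustration; the class and member names are mine):
```javascript
var Class = require( 'easejs' ).Class;

var Fib = Class( 'Fib',
{
    // the public API simply delegates to the private implementation
    'public calc': function( n )
    {
        return this._calc( n );
    },

    // unwrapped: each recursive step is now a single, direct function
    // call, keeping the call stack shallow and the engine free to optimize
    'private _calc': function( n )
    {
        return ( n < 2 )
            ? n
            : ( this._calc( n - 1 ) + this._calc( n - 2 ) );
    }
} );

Fib().calc( 10 );  // 55
```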
Running the test cases on my system (your results may vary) before and after
the patch, we have:
BEFORE:
  0.170s (x1000 = 0.0001700000s each): Declare 1000 anonymous classes
    with private members
  0.021s (x500000 = 0.0000000420s each): Invoke private methods internally

AFTER:
  0.151s (x1000 = 0.0001510000s each): Declare 1000 anonymous classes
    with private members
  0.004s (x500000 = 0.0000000080s each): Invoke private methods internally
This is all the more motivation to use private members, which enforces
encapsulation; keep in mind that, because use of private members is the
ideal in well-encapsulated and well-factored code, ease.js has been designed
to perform best under those circumstances.