Before this change, since `undefined' is encoded as `null' when serialized,
there was no way for the server to disambiguate between unmodified values
and a truncation point. For example:
[ undefined, undefined, null, null, null ]
The above array represents two unmodified and three removed indexes. But
this is serialized into JSON as:
[ null, null, null, null, null ]
It isn't possible for the server to determine what the truncation point is
from that diff. The solution is therefore to truncate the array _before_
sending it to the server, providing a trailing null to indicate that a
truncation has occurred:
[ null, null, null ]
The above means that the first two indexes are unmodified, and that index 2
and later should all be truncated.
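A minimal sketch of the idea (using the `_truncateDiff' name from the
entries below; the actual implementation and signature may differ):

  // Truncate a diff vector at its first null, keeping that single null to
  // mark the truncation point; this runs *before* JSON serialization,
  // while `undefined' and `null' are still distinguishable.
  function _truncateDiff( vector )
  {
      const result = [];

      for ( let i = 0; i < vector.length; i++ )
      {
          result.push( vector[ i ] );

          if ( vector[ i ] === null )
          {
              break;
          }
      }

      return result;
  }

  // [ undefined, undefined, null, null, null ]
  //   => [ undefined, undefined, null ], which serializes to the
  //      [ null, null, null ] shown above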
* doc/client.texi (Saving to Server): New section.
* src/client/transport/XhttpQuoteTransport.js (_truncateDiff): New method to
perform truncation.
(getBucketDataJson): Use it.
* test/client/transport/XhttpQuoteTransportTest.js: New file with respective
test case.
This is something that managed to slip by (but not unnoticed) for almost
exactly one year to this day (028606242a). It
can only be reproduced by changing classes such that the resulting
visibility changes differ by index on the same field. The issue hides itself on first
load (because all fields are shown by default) and on refresh.
The problem was that, when one index showed a field but another hid it, the
hide indexes overrode the show indexes, so only the hide took place.
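A rough sketch of the per-index idea (the names and data shapes here are
assumptions for illustration, not the actual Cmatch API):

  // Partition a field's indexes into those to show and those to hide so
  // that one set no longer clobbers the other.
  function partitionVisibility( match_vector )
  {
      const show = [];
      const hide = [];

      match_vector.forEach( ( matched, index ) =>
          ( matched ? show : hide ).push( index )
      );

      return { show: show, hide: hide };
  }

  // partitionVisibility( [ 1, 0, 1 ] )  =>  { show: [ 0, 2 ], hide: [ 1 ] }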
* src/client/Cmatch.js (markShowHide): Make virtual. New implementation to
support concurrent show/hide.
(_handleClassMatch): Use it.
* test/client/CmatchTest.js: New test.
* npm-shrinkwrap.json: ease.js v0.2.{8=>9}.
The `set' event already existed---this merely extracts it into its own
handler.
* src/client/Client.js (handleEvent): Extract `set' handler.
* src/client/ClientDependencyFactory.js (createClientEventHandler): Add
`set'.
* src/client/event/ValueSetEventHandler.js: New class.
* test/event/ValueSetEventHandlerTest.js: Associated test case.
This mixes in support for non-terminating nulls. It would have been
nice to handle that in a separate commit for clarity, but the
refactoring came as a consequence of trying to provide a working
implementation.
Various inconsistencies and subtle bugs in unlikely situations have
been fixed by this, including modifying objects passed as arguments to
various methods, and inconsistent handling of diff data.
Changes are more consistently recognized. Perhaps the most noticeable
consequence is that moving between steps no longer prompts to discard
changes---previously calculated values would trigger the dirty flag on
steps even if the user didn't actually change anything. I (and
others) have wanted this fixed for many years.
This is a very dense commit that touches a core part of the
system. Hopefully the Changelog below helps.
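As a quick illustration of the most visible API change (a sketch; the
parameter names come from the entries below, but the call sites are
assumed):

  // Before: callers controlled merge behavior per call
  bucket.setValues( data, merge_index, merge_null );

  // After: merge behavior is handled uniformly by the bucket itself
  bucket.setValues( data );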
* src/bucket/Bucket.js
(setValues): [BC-BREAK] Remove `merge_index' and `merge_null'
parameters.
* src/bucket/DelayedStagingBucket.js
(setValues): [BC-BREAK] Remove `merge_index' and `merge_null'
parameters. Remove distinction between `merge_index' and non-.
* src/bucket/QuoteDataBucket.js
(setValues): [BC-BREAK] Remove `merge_index' and `merge_null'
parameters. Remove respective arguments from `_mergeData' call.
(_mergeData): Remove same parameters. Remove handling of
`merge_index' and `merge_null'.
(overwriteValues): Append `null' to each vector.
* src/bucket/StagingBucket.js
(_initState): Use `Object.create' instead of explicit prototype
instantiation (functionally equivalent).
(merge): Minor comment correction.
(_hasChanged): Rename to `_parseChanges'.
(_parseChanges): Rename from `_hasChanged'. Remove `merge_index'
parameter. Generate a new object rather than mutating the original
data (preventing dangerous and subtle bugs from side-effects). Clone
each vector rather than modifying/referencing directly (this was
previously done during merge). Remove `merge_index'
distinction. Handle non-terminating `null' values.
(setValues): [BC-BREAK] Remove `merge_index' and `merge_null'
parameters. Use new object generated by `_parseChanges'. Remove
cloning of each vector (`_parseChanges' now does that). Remove
`merge_index' distinction.
(overwriteValues): Remove argument to `setValues' call.
(getFilledDiff): [BC-BREAK] Use `_staged' rather than `_curdata'.
(commit): Remove second and third arguments of call to `setValues'
of underlying bucket.
* src/client/Client.js
(_initStepUi): Remove second argument of calls to quote `setData'.
* src/client/quote/ClientQuote.js
(setData): [BC-BREAK] Remove `merge_nulls' parameter. Remove second
and third arguments of call to staging bucket `setValues'. Add
comment indicating a long-standing problem with committing the
staging bucket contents before save has succeeded.
* src/server/request/DataProcessor.js
(processDiff): Remove `permit_null' argument of `sanitizeDiff'
call.
(sanitizeDiff): Remove `permit_null' parameter. Hard-code filter
call's `permit_null' argument to `true'.
(_determineDapiFields): Properly handle `null's (ignore) rather than
inadvertently converting them into the string "null".
* test/bucket/StagingBucketTest.js: Modify test cases
accordingly. Add tests to verify that updates and diffs operate
as expected, especially support for non-terminating `null's.
(createStubBucket): Use `QuoteDataBucket'. Ideally remove this
coupling in the future, but this is a more realistic test case for
the time being.
* test/server/request/DataProcessorTest.js: Update test to account for
hard-coded `given_null' argument.
This represents a portion of the refactoring that I had intended to do
until I realized that there was a simpler solution to the problem that
we were having (having proguic add stored calculated values to the
defaults object).
So ideally we'll continue extracting all quote init code out of
`Server' and into `ProgramInit' in the future.
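For context, the proxying described below amounts to roughly the
following (a sketch; the exact signature is an assumption):

  // Server#_getDefaultBucket now defers to ProgramInit, which returns a
  // promise of the default bucket data
  _getDefaultBucket( program )
  {
      return this._progInit.init( program );
  }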
* doc/server.texi (Liza Server): Mention `ProgramInit'.
* src/program/ProgramInit.js: Add class.
* src/server/DocumentServer.js: Use it.
* src/server/Server.js (_progInit): Add private field.
(__construct): Accept ProgramInit instance and assign to field.
(initQuote): Use promise returned by `_getDefaultBucket'.
(_getDefaultBucket): Proxy to `ProgramInit#init', which returns a
promise.
This was a bit involved because the system had to be made async all
the way up the stack. No attempt was made to clean up the mess up the
stack---no time.
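The key extension point is `descLookup'; a sketch of the default
behavior described below (parameter names are assumptions):

  // Resolve immediately with the provided descriptor; subtypes
  // (e.g. ServerDataApiFactory below) override this to perform an
  // actual lookup.
  descLookup( api_name, desc )
  {
      return Promise.resolve( desc );
  }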
* src/dapi/DataApiFactory.js
(fromType): [BC BREAK] Fix docblock. Add `api_name' param. Call
`#descLookup' and return promise.
(descLookup): Add method. Return promise resolving to provided
descriptor. Intended to be overridden by subtype.
* src/dapi/DataApiManager.js
(_dataApis): Update docblock to indicate that it now stores
promises.
(getApiData): Expect promise for `DataApiFactory#fromType' call.
* src/server/DocumentServer.js (create): [BC BREAK] Accept
configuration. Look up dapi conf and pass to
`ServerDataApiFactory' ctor. Return promise.
* src/server/daemon/Daemon.js (_initRouters): Provide configuration.
* src/server/daemon/controller.js
(init): Accept configuration. Handle return of promise from
`_createDocumentServer'.
(_createDocumentServer): Accept configuration, providing to
`DocumentServer#create'. Because of aforementioned change to
`#create', returns promise.
* src/server/request/ServerDataApiFactory.js: Add StoreMissError
import.
(_conf): Add property.
(constructor): [BC BREAK] Accept configuration.
(descLookup): Add override. Look up configuration for provided
dapi.
This will make life much easier and less verbose, especially
considering the verbosity of promises.
* src/store/DelimitedKey.js: Add trait.
* test/store/DelimitedKeyTest.js: Add test case.
This is a terrible kluge, but time doesn't permit modifying the
system. All of this also touches old code that is untested, which is
difficult to modify with confidence.
* src/server/DocumentServer.js (DocumentServer#create): Use
StagingBucket.
* src/server/Server.js: Remove logic now handled by DataProcessor.
* src/server/request/DataProcessor.js (processDiff): Wrap in
StagingBucket to filter out values that do not result in changes (see
the sketch after these entries).
* test/server/request/DataProcessorTest.js: Update failing cases.
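A sketch of that filtering (only `StagingBucket', `setValues', and
`getFilledDiff' are names taken from this log; everything else is
assumed):

  // Feed the incoming diff into a StagingBucket sitting atop the
  // quote's bucket; values that do not actually change anything fall
  // away.
  const staged = staging_bucket_for( quote );   // hypothetical helper
  staged.setValues( incoming_diff );

  const filtered_diff = staged.getFilledDiff();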
This is a kluge until time can be spent better factoring this
system (using Traits).
* src/bucket/StagingBucket.js (_noStagingBypass): Add field.
(forbidBypass): Add method to set field.
(setCommittedValues): Use field.
* src/dapi/DataApiFactory.js (_createDataApi): Add support for
`quote'.
* src/dapi/http/HttpDataApi.js (__construct): New `enctype' argument.
(_encodeData, _encodeKeys): Remove former, rename latter to former.
(encodeData, _urlEncode): Encode based on enctype and method.
(*): Strict mode, es6 style.
What a cluster.
This was a lot of work to work around existing, bad APIs; there is no
time to refactor at the moment; this already took much longer than
expected.
Copyright notices updated. More casual references to "LoVullo
Associates" replaced with "RT Specialty / Lovullo", which will be "RT
Specialty Buffalo" in the future. Or "RT Specialty", depending on how
this is rolled out. Or "Ryan Specialty Group". Who knows.
"R-T Specialty, LLC." is the legal name, which includes the dash. Not
to be confused with a certain television network.
I have sat on releasing a lot of this code for years because I wanted
the liza repo to be in a pristine state---tests and all---which
required a great deal of refactoring. Well, that never happened, and
time is up.
LoVullo Associates---my employer---has been purchased by another
company. This means that any agreement with LoVullo regarding
releasing free software is going to have to be re-negotiated with this
new company, and I have no idea how those negotiations will go. So,
I have no choice but to simply release everything in its current state,
or risk it being lost forever.
This represents work over the past 6--7 years, 99.9% of it written by
me. This project has been my baby for quite some time, and has been
through a number of battles with deadlines and other unfortunate
circumstances; the scars show. I also didn't really "know" JS when
starting this project. Perhaps you can help improve upon it.
There are some odds-and-ends that could be committed. And references
to insurance and LoVullo need to be removed to generalize this.
I hope that this will not be the last public commit for this project.
I'll fight the good fight and we'll see where that takes us. Maybe
it'll be easy.
Happy hacking.
This code assumed that no classes would be removed that were _not_
added by #addClass. Well, that's false.
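A sketch of boundary-aware removal (illustrative only; the actual
FieldStyler implementation may differ, and this assumes `cls' contains
no regex metacharacters):

  // Remove `cls' whether it is preceded by a space or sits at the start
  // of the class string, without clobbering longer class names.
  function removeClass( element, cls )
  {
      element.className = element.className
          .replace( new RegExp( '(^| )' + cls + '(?= |$)', 'g' ), '' )
          .trim();
  }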
* src/ui/styler/FieldStyler.js (removeClass): Consider both spaces and
boundary preceding class name.
* test/ui/styler/FieldStylerTest.js: Add test case.
Classifications often yield scalar results. Since those results are
used directly in the diff, we have the situation where the expected
diff format (an array) is not provided. Consistent with the rest of
the system, we should consider a scalar to affect every index.
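Illustratively (a sketch; the names are assumptions), expanding a scalar
to every index might look like:

  // A scalar diff (e.g. the classification result `0') is expanded so
  // that the fix check sees it at every index of the field.
  function normalizeDiff( diff, index_count )
  {
      return Array.isArray( diff )
          ? diff
          : new Array( index_count ).fill( diff );
  }

  // normalizeDiff( 0, 3 )  =>  [ 0, 0, 0 ]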
* src/validate/ValidStateMonitor.js (_checkCauseFix): Consider scalar
diffs to affect every index when checking for fixes.
* test/validate/ValidStateMonitorTest.js: Add test.
The intent of this is to allow for clearing errors after fields
load (e.g. a "field loading" message if the user attempts to continue
past the step when a field hasn't yet finished loading).
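Roughly (a sketch; the event name comes from the entry below, but the
arguments and surrounding code are assumptions):

  // once the data API request settles, announce that the field has
  // finished loading so listeners can clear any "field loading" errors
  api.request( request_data, ( err, response ) =>
  {
      // ...existing response handling...
      this.emit( 'fieldLoaded', field_name, field_index );
  } );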
* src/dapi/DataApiManager.js (getApiData): Emit fieldLoaded after
request completes.
* test/dapi/DataApiManagerTest.js: Add test case.
DEV-2299
This was not properly chaining the promises---it was always chaining on
the original _pending (so, the first request) and never clearing
_pending after that point.
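The gist of the fix, as a simplified sketch (the actual method also
clears `_pending' once the chain settles, which is omitted here):

  // chain each new operation on the *most recent* promise, not the
  // original one, and keep the reference up to date for later calls
  _onceReady( callback )
  {
      this._pending = ( this._pending || Promise.resolve() )
          .then( callback );

      return this._pending;
  }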
* src/validate/DataValidator.js (_onceReady): Properly chain.
* test/validate/DataValidatorTest.js: Updated test. Don't ask.
Since changes trigger any event observers---which can be
expensive---it is ideal to ignore sets that do not result in any
changes to the bucket.
This also resolves issues with systems that are confused by empty
diffs.
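A minimal sketch of the guard (method names from the entries below; the
change-detection logic itself is elided):

  setValues( data )
  {
      // bail out before notifying any observers if nothing would change
      if ( !this._hasChanged( data ) )
      {
          return this;
      }

      // ...existing merge and event emission...
  }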
* src/bucket/StagingBucket.js
(_hasChanged): Add method.
(setValues): Use it.
* test/bucket/StagingBucketTest.js: Add test case.
DEV-2299
Since we maintain state (as a kluge for the time being to integrate
with the rest of the system), we need to be careful to protect against
concurrent requests that might mess with it before the original
request is complete.
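Roughly (a sketch; `_doValidate' is a hypothetical stand-in for the
existing method body):

  validate( diff, classes )
  {
      // hold this request until any in-flight validation completes, so
      // that shared state is not disturbed mid-request
      return this._onceReady( () => this._doValidate( diff, classes ) );
  }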
* src/validate/DataValidator.js
(validate, updateFailures): Hold concurrent requests.
(_onceReady): Add method.
* test/validate/DataValidatorTest.js: Add tests.
In practice, not clearing the store and allowing it to use previous
state has the effect of instantly "fixing" failures if there happens
to be more than one validation call.
This was a lot of debugging for a one-line change.
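The one-line idea, sketched (parameter names assumed, and assuming the
store exposes a promise-returning `clear'):

  _populateStore( failures, store )
  {
      // always start from a clean slate; stale state from a previous
      // validation otherwise makes old failures appear "fixed"
      return store.clear()
          .then( () => /* ...populate as before... */ store );
  }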
* src/validate/DataValidator.js (_populateStore): Always clear store.
* test/validate/DataValidatorTest.js: Add test.
DEV-2299
I updated DataValidator but never updated the caller. Damnit. It's
an unfortunate side-effect of dynamic, loosely typed languages and
mitigating it requires what should be boilerplate functional tests (in
this case---functional tests are useful for many other integration
aspects).
* src/event/FieldVisibilityEventHandler.js
(handle): Use updated DataValidator#clearFailures API, which has
a different descriptor format.
* test/event/FieldVisibilityEventHandlerTest.js: Update test.
* src/validate/DataValidator.js
(validate): New `classes' parameter. API BC break from previous
commits.
(populateStore, updateFailures): Generalize methods.
* test/validate/DataValidatorTest.js:
Add associated test.
Refactor existing tests to adhere to new API/expectations.
DEV-2206
See the description in the test case. This is a bug fix.
* src/validate/ValidStateMonitor.js
(_checkCauseFix): Do not consider an empty diff on a past failure to
be a fix.
* test/validate/ValidStateMonitorTest.js: Add test.
DEV-2296