Processing Flow
todo - Explain the multi-stage processing pipeline. Cover:
- Overview of the five stages in order:
  - Condition - Determines if the schema should be processed at all
  - Normalize - Converts input strings to appropriate types
  - Transform - Applies custom transformations, accesses config state
  - Validate - Ensures values meet requirements
  - Serialize - Optional output formatting (mainly for --dump)
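The stage ordering above can be sketched as a single pass over one value. This is a hypothetical illustration only: the function name, the per-stage hooks on the schema, and the early-return behavior are assumptions for clarity, not the library's actual API.

```javascript
// Hypothetical sketch: the five stages applied to one value, in order.
// Hook names (condition, normalize, transform, validate) are illustrative.
async function processValue(value, configuration, schema, path) {
  // Condition: decide whether this schema is processed at all.
  if (schema.condition && !(await schema.condition(value, configuration, schema, path))) {
    return undefined; // suppresses the entire subtree
  }
  let current = value;
  // Normalize: convert input strings to appropriate types.
  if (schema.normalize) current = await schema.normalize(current, configuration, schema, path);
  // Transform: custom transformations, may read configuration state.
  if (schema.transform) current = await schema.transform(current, configuration, schema, path);
  // Validate: throw if the value does not meet requirements.
  if (schema.validate) await schema.validate(current, configuration, schema, path);
  // Serialize is intentionally absent here: it only runs for output (--dump).
  return current;
}
```

Awaiting every hook lets the same loop accept both sync and async handlers.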
For each stage:
- When it runs and why
- Handler signature: (current, configuration, schema, path, options) => result
- Sync vs async support
- Practical use cases and examples
Additional topics:
- Assignment processing: how Map<path, value> becomes the config object
- Multi-pass resolution for union types
- Assignment priority ordering (selectors → discriminators → specific → bulk)
- Partial resolution support (discriminator can arrive last)
- Lazy evaluation and undefined returns for retry
- How conditions suppress entire subtrees
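The path-expansion half of assignment processing can be sketched as below. The function name and Map-based input are assumptions for illustration; priority ordering, multi-pass resolution, and lazy retry are deliberately omitted.

```javascript
// Hypothetical sketch: expand a Map of dotted paths into a nested object.
// Real assignment processing would also apply priority ordering
// (selectors → discriminators → specific → bulk) before this expansion.
function assignmentsToConfig(assignments) {
  const config = {};
  for (const [path, value] of assignments) {
    const keys = path.split('.');
    let node = config;
    // Walk (creating as needed) every segment except the last.
    for (const key of keys.slice(0, -1)) {
      node = node[key] ??= {};
    }
    // Assign the value at the final segment.
    node[keys[keys.length - 1]] = value;
  }
  return config;
}
```

For example, a Map of `server.port → 8080` and `server.host → "localhost"` expands to `{ server: { port: 8080, host: "localhost" } }`.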
Key point: The processing pipeline is deterministic and extensible. Serialize is optional (not part of the core flow), but Condition is essential.
Examples:
- Normalizer that parses duration strings
- Transformer that instantiates classes based on config values
- Validator that checks file existence
- Serializer that formats dates for output
- Condition that enables/disables subsystem schemas

Handlers can accept additional parameters for more complex processing:
```js
/**
 * @template TReturn
 * @callback SchemaValueFunction
 * @param {any} value - the value to be processed
 * @param {Object|Array<any>} configuration - the full configuration object being built
 * @param {CompiledSchema} schema - the current schema corresponding to the value
 * @param {String} path - the dotted path of the value within the configuration object
 * @returns {TReturn|Promise<TReturn>} - the processed value (function can be sync or async)
 */

/** @type {SchemaValueFunction} */
const handler = (value, configuration, schema, path) => { /* ... */ };
```
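For illustration, here are hypothetical handlers matching that signature, one per stage from the examples list above. The duration units, the transport registry, and the feature-flag shape are all assumptions, not part of the library.

```javascript
// Normalizer: parse duration strings like "5s" or "200ms" into milliseconds.
const normalizeDuration = (value) => {
  if (typeof value !== 'string') return value;
  const match = /^(\d+(?:\.\d+)?)(ms|s|m|h)$/.exec(value.trim());
  if (!match) return value; // leave unrecognized input for validation to reject
  const factor = { ms: 1, s: 1000, m: 60000, h: 3600000 }[match[2]];
  return Number(match[1]) * factor;
};

// Transformer: instantiate a class based on a config value.
// The transports registry is hypothetical.
const transports = { console: class ConsoleTransport {}, file: class FileTransport {} };
const transformTransport = (value) =>
  typeof value === 'string' && transports[value] ? new transports[value]() : value;

// Validator: async check that a configured file exists (throws if missing).
const validateFileExists = async (value) => {
  await require('fs/promises').access(value);
  return value;
};

// Serializer: format Date values for --dump output.
const serializeDate = (value) =>
  value instanceof Date ? value.toISOString() : value;

// Condition: only process the "metrics" subtree when a feature flag is set.
const metricsEnabled = (value, configuration) =>
  configuration?.features?.metrics === true;
```

Note how the validator is async while the others are sync; the pipeline accepts either.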