Rationale
Most schema libraries optimize for one of two things: runtime validation (Joi, Ajv) or compile-time type integration (Zod, Valibot). Both are useful, and if you're building a single application in TypeScript with a shared type definition between producer and consumer, Zod is probably what you want.
This library optimizes for something different: decoupled composition across subsystems that don't know about each other.
The motivating problem is the one that shows up in modular applications, plugin systems, and configuration pipelines. You have data producers (config files, CLI arguments, environment variables, HTTP requests, other modules) and data consumers (application subsystems, handlers, services) and you want them to compose without forcing either side to import the other's types, share a framework, or agree on a concrete shape at build time.
The conventional answers all have tradeoffs:
- Shared type definitions (Zod-style) work beautifully within one codebase but turn into a coupling point the moment a subsystem is swapped, lazily wired, or provided by a plugin that shouldn't know its caller.
- Plain JSON Schema decouples nicely but stops at structural validation: it has no story for normalization, transformation, cross-references, or dynamic resolution.
- Framework-style DI containers handle the wiring but require every participant to buy into the framework.
This library's approach is that the schema itself can be the contract, and that contract can be as strict or as loose as each boundary requires. A consumer declares what it needs by name (`new Schema('Logger')`). A producer, resolved lazily at runtime, provides something that satisfies that name. The schema pipeline verifies whatever the consumer asked it to verify (structural rules, format constraints, custom validators, even instanceof checks), and the consumer receives data it can trust to the degree it specified.
Neither side imports the other. Neither side depends on a framework. Schemas can be defined using pure data (or a fluent builder that produces data), allowing them to be exported, aggregated, passed through intermediaries that don't understand them, and instantiated late. The application layer becomes a matchmaker rather than a translator: it composes schemas from independent subsystems and routes validated data between them without inspecting the contents. Schemas or processors referenced by name are resolved late, enabling "dynamic library" style resolution.
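The wiring described above can be sketched in plain TypeScript. This is a hypothetical illustration of the pattern, not this library's actual API: the `Schema` class, `check`/`verify` methods, registry, and `resolve` function here are all stand-ins invented for the example.

```typescript
// Hypothetical sketch of name-based, late-resolved contracts.
// None of these names are this library's real API.

type Validator = (value: unknown) => boolean;

// The "contract": a named schema carrying whatever checks the consumer chose.
class Schema {
  constructor(public name: string, private checks: Validator[] = []) {}
  check(fn: Validator): Schema {
    return new Schema(this.name, [...this.checks, fn]);
  }
  verify(value: unknown): boolean {
    return this.checks.every(fn => fn(value));
  }
}

// The matchmaker: producers register by name; consumers resolve by name.
const registry = new Map<string, () => unknown>();

// Producer side: knows nothing about its consumers.
registry.set('Logger', () => ({ log: (_msg: string) => { /* no-op */ } }));

// Consumer side: declares what it needs by name, at whatever fidelity it wants.
const loggerContract = new Schema('Logger')
  .check(v => typeof v === 'object' && v !== null)
  .check(v => typeof (v as { log?: unknown }).log === 'function');

// Late resolution: the producer is only invoked, and verified, on demand.
function resolve(contract: Schema): unknown {
  const produce = registry.get(contract.name);
  if (!produce) throw new Error(`No producer for '${contract.name}'`);
  const value = produce();
  if (!contract.verify(value)) {
    throw new Error(`'${contract.name}' failed verification`);
  }
  return value;
}

const logger = resolve(loggerContract) as { log: (msg: string) => void };
logger.log('wired without either side importing the other');
```

The point of the sketch is the shape of the dependency graph: the producer and the consumer both reference the string `'Logger'`, and only the matchmaker sees both sides.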
The tradeoff: you don't automatically get compile-time type checking. These schemas are engines that perform arbitrary runtime transformations, so their output shapes cannot be statically inferred. What you do get is runtime verification at whatever fidelity you define, in exchange for an architecture where subsystems compose without coupling. For the domains this library was built for (application configuration, dependency injection, modular systems where the wiring is deliberately deferred) that tradeoff is the whole point.
Benefits
This isn't a full feature checklist, but a set of capabilities whose support varies widely across libraries:
- act as contract (validate)
- act as blueprint (process)
- direct inheritance
- indirect inheritance by name (late resolved)
- direct composition of processing rules
- indirect composition of named processing rules (late resolved)
- cross-references within the schema data
- recursive schema definitions
- dynamic values
- dynamic verification
- dynamic defaults
- transparent async processing without excessive overhead for sync processing
- define using builder (friendly api)
- define using pure data (avoiding dependencies)
- load custom definition libraries (plugin-style)
- complex auto-discriminated unions
- introspectable, with metadata for tooling integration
- zero dependencies
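Two of the capabilities above, "define using pure data" and late-resolved named processing rules, can be made concrete with a small sketch. The data format and processor names below are illustrative assumptions, not this library's actual schema format:

```typescript
// Hypothetical sketch: a schema defined as pure, JSON-serializable data,
// with string references to named processors resolved only at run time.

type Processor = (value: unknown) => unknown;

// Plugin-style processor library, registered independently of any schema.
const processors: Record<string, Processor> = {
  trim: v => (typeof v === 'string' ? v.trim() : v),
  lowercase: v => (typeof v === 'string' ? v.toLowerCase() : v),
};

// Pure-data schema: exportable, aggregatable, and passable through
// intermediaries that don't understand it (it's just data).
const emailField = { name: 'email', pipeline: ['trim', 'lowercase'] };

// Late resolution: names become functions only when the schema is run.
function process(schema: { pipeline: string[] }, value: unknown): unknown {
  return schema.pipeline.reduce((acc, name) => {
    const fn = processors[name];
    if (!fn) throw new Error(`Unknown processor '${name}'`);
    return fn(acc);
  }, value);
}

console.log(process(emailField, '  Alice@Example.COM  ')); // "alice@example.com"
```

Because `emailField` contains only strings, it can be serialized, aggregated with schemas from other subsystems, or shipped by a plugin that never imports the processor implementations.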
Liabilities
- no type inference (intended to be an engine, not a type system)
- pipeline model and multipass resolution add processing overhead
- hybrid sync/async architecture is subtle and complex to extend
- dynamic string-based references to processors limit tree shaking