-
Proves out the "handler-side plug-in context access" pattern: a plug-in's TaskHandler captures the PluginContext through its constructor when the plug-in instantiates it inside `start(context)`, and then uses `context.jdbc`, `context.logger`, etc. from inside `execute` the same way the plug-in's HTTP lambdas do. Zero new api.v1 surface was needed — the plug-in decides whether a handler takes a context or not, and a pure handler simply omits the constructor parameter.

## Why this pattern and not a richer TaskContext

The alternatives were:

(a) add a `PluginContext` field (or a narrowed projection of it) to api.v1 `TaskContext`, threading the host-owned context through the workflow engine
(b) capture the context in the plug-in's handler constructor — this commit

Option (a) would have forced every TaskHandler author — core PBC handlers too, not just plug-in ones — to reason about a per-plug-in context that wouldn't make sense for core PBCs. It would also have coupled api.v1 to the plug-in machinery in a way that leaks into every handler implementation.

Option (b) is a pure plug-in-local pattern. A pure handler:

```
class PureHandler : TaskHandler { ... }
```

and a stateful handler look identical except for one constructor parameter:

```
class StatefulHandler(private val context: PluginContext) : TaskHandler {
    override fun execute(task, ctx) {
        context.jdbc.update(...)
        context.logger.info(...)
    }
}
```

and both register the same way:

```
context.taskHandlers.register(PureHandler())
context.taskHandlers.register(StatefulHandler(context))
```

The framework's `TaskHandlerRegistry`, `DispatchingJavaDelegate`, and `DelegateTaskContext` stay unchanged. Plug-in teardown still strips handlers via `unregisterAllByOwner(pluginId)` because registration still happens through the scoped registrar inside `start(context)`.

## What PlateApprovalTaskHandler now does

Before this commit, the handler was a pure function that wrote `plateApproved=true` + metadata to the process variables and didn't touch the DB.
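The capture-plus-guarded-update pattern of option (b) can be exercised entirely outside the framework. A minimal, self-contained simulation, assuming nothing from api.v1 (`TaskHandler`, `FakeJdbc`, and `PlateApprovalHandler` here are illustrative stand-ins, not the real interfaces):

```kotlin
import java.util.UUID

// Illustrative stand-in for the api.v1 handler surface, NOT the real signature.
interface TaskHandler { fun execute(variables: MutableMap<String, Any?>) }

// A tiny in-memory "table" simulating the plug-in-owned plate rows.
class FakeJdbc(private val status: MutableMap<UUID, String>) {
    // Mimics context.jdbc.update(sql, params): returns the number of rows changed.
    // The DRAFT check is the WHERE id=:id AND status='DRAFT' analogue.
    fun approveIfDraft(id: UUID): Int =
        if (status[id] == "DRAFT") { status[id] = "APPROVED"; 1 } else 0
}

// The stateful handler captures its "context" (here just FakeJdbc) in the constructor.
class PlateApprovalHandler(private val jdbc: FakeJdbc) : TaskHandler {
    override fun execute(variables: MutableMap<String, Any?>) {
        // fail-fast on a non-UUID plateId, as the real handler does
        val plateId = UUID.fromString(variables["plateId"] as String)
        val rows = jdbc.approveIfDraft(plateId)
        variables["plateApproved"] = true
        variables["rowsUpdated"] = rows
    }
}

fun main() {
    val id = UUID.randomUUID()
    val jdbc = FakeJdbc(mutableMapOf(id to "DRAFT"))
    val handler = PlateApprovalHandler(jdbc)

    val first = mutableMapOf<String, Any?>("plateId" to id.toString())
    handler.execute(first)
    println(first["rowsUpdated"]) // 1 on the first run

    val second = mutableMapOf<String, Any?>("plateId" to id.toString())
    handler.execute(second)
    println(second["rowsUpdated"]) // 0: the status guard makes the re-run a no-op
}
```

The second `execute` reports `rowsUpdated = 0` because only a DRAFT row transitions, mirroring the idempotency guard described below.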
Now it:

1. Parses `plateId` out of the process variables as a UUID (fail-fast on non-UUID).
2. Calls `context.jdbc.update` to set the plate row's `status` from 'DRAFT' to 'APPROVED', guarded by an explicit `WHERE id=:id AND status='DRAFT'`. The guard makes a second invocation a no-op (rowsUpdated=0) rather than silently overwriting a later status.
3. Logs via the plug-in's PluginLogger — "plate {id} approved by user:admin (rows updated: 1)". Log lines are tagged `[plugin:printing-shop]` by the framework's Slf4jPluginLogger.
4. Emits process output variables: `plateApproved=true`, `plateId=<uuid>`, `approvedBy=<principal label>`, `approvedAt=<instant>`, and `rowsUpdated=<count>` so callers can see whether the approval actually changed state.

## Smoke test (fresh DB, full end-to-end loop)

```
POST /api/v1/plugins/printing-shop/plates
  {"code":"PLATE-042","name":"Red cover plate","widthMm":320,"heightMm":480}
→ 201 {"id":"0bf577c9-...","status":"DRAFT",...}

POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"plugin-printing-shop-plate-approval",
   "variables":{"plateId":"0bf577c9-..."}}
→ 201 {"ended":true,
       "variables":{"plateId":"0bf577c9-...",
                    "rowsUpdated":1,
                    "approvedBy":"user:admin",
                    "approvedAt":"2026-04-09T05:01:01.779369Z",
                    "plateApproved":true}}

GET /api/v1/plugins/printing-shop/plates/0bf577c9-...
→ 200 {"id":"0bf577c9-...","status":"APPROVED", ...}
  ^^^^ note: was DRAFT a moment ago, NOW APPROVED — persisted to
       plugin_printingshop__plate via the handler's context.jdbc.update

POST /api/v1/workflow/process-instances   (same plateId, second run)
→ 201 {"variables":{"rowsUpdated":0,"plateApproved":true,...}}
  ^^^^ idempotent guard: the WHERE status='DRAFT' clause prevents
       double-updates, rowsUpdated=0 on the re-run
```

This is the first cross-cutting end-to-end business flow in the framework driven entirely through the public surfaces:

1. Plug-in HTTP endpoint writes a domain row
2. Workflow HTTP endpoint starts a BPMN process
3.
Plug-in-contributed BPMN (deployed via PluginProcessDeployer) routes to a plug-in-contributed TaskHandler (registered via context.taskHandlers)
4. Handler mutates the same plug-in-owned table via context.jdbc
5. Plug-in HTTP endpoint reads the new state

Every step uses only api.v1. Zero framework core code knows the plug-in exists.

## Non-goals (parking lot)

- Emitting an event from the handler. The next step in the plug-in workflow story is for a handler to publish a domain event via `context.eventBus.publish(...)` so OTHER subscribers (e.g. pbc-production waiting on a PlateApproved event) can react. This commit stays narrow: the handler only mutates its own plug-in state.
- Transaction scope of the handler's DB write relative to the Flowable engine's process-state persistence. Today both go through the host DataSource and Spring transaction manager that Flowable auto-configures, so a handler throw rolls everything back — verified by walking the code path. An explicit test of transactional rollback lands with REF.1 when the handler takes on real business logic.

-
Completes the plug-in side of the embedded Flowable story. The P2.1 core made plug-ins able to register TaskHandlers; this chunk makes them able to ship the BPMN processes those handlers serve.

## Why Flowable's built-in auto-deployer couldn't do it

Flowable's Spring Boot starter scans the host classpath at engine startup for `classpath[*]:/processes/[*].bpmn20.xml` and auto-deploys every hit (the literal glob is paraphrased because the Kotlin KDoc comment below would otherwise treat the embedded slash-star as the start of a nested comment — feedback memory "Kotlin KDoc nested-comment trap"). PF4J plug-ins load through an isolated child classloader that is NOT visible to that scan, so a `processes/*.bpmn20.xml` resource shipped inside a plug-in JAR is never seen. This chunk adds a dedicated host-side deployer that opens each plug-in JAR file directly (same JarFile walk pattern as `MetadataLoader.loadFromPluginJar`) and hand-registers the BPMNs with the Flowable `RepositoryService`.

## Mechanism

### New PluginProcessDeployer (platform-workflow)

One Spring bean, two methods:

- `deployFromPlugin(pluginId, jarPath): String?` — walks the JAR, collects every entry whose name starts with `processes/` and ends with `.bpmn20.xml` or `.bpmn`, and bundles the whole set into one Flowable `Deployment` named `plugin:<id>` with `category = pluginId`. Returns the deployment id or null (missing JAR / no BPMN resources). One deployment per plug-in keeps undeploy atomic and makes the teardown query unambiguous.
- `undeployByPlugin(pluginId): Int` — runs `createDeploymentQuery().deploymentCategory(pluginId).list()` and calls `deleteDeployment(id, cascade=true)` on each hit. Cascading removes process instances and history rows along with the deployment — "uninstalling a plug-in makes it disappear". Idempotent: a second call returns 0.
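The JAR walk at the heart of `deployFromPlugin` can be sketched without Flowable. A self-contained approximation (the function name and shape are mine, not the real deployer's API); it collects matching entries as name-to-bytes pairs inside the `use` block, and the demo `main` builds a throwaway JAR to walk:

```kotlin
import java.nio.file.Files
import java.nio.file.Path
import java.util.jar.JarEntry
import java.util.jar.JarFile
import java.util.jar.JarOutputStream

// Collects every processes/*.bpmn20.xml or *.bpmn entry as name -> bytes.
// Bytes are read INSIDE the use block, so the jar handle is already closed
// before the caller (DeploymentBuilder.addBytes in the real deployer) runs.
fun collectBpmnResources(jarPath: Path): Map<String, ByteArray> =
    JarFile(jarPath.toFile()).use { jar ->
        val found = mutableMapOf<String, ByteArray>()
        val entries = jar.entries()
        while (entries.hasMoreElements()) {
            val entry = entries.nextElement()
            val name = entry.name
            if (!entry.isDirectory && name.startsWith("processes/") &&
                (name.endsWith(".bpmn20.xml") || name.endsWith(".bpmn"))
            ) {
                found[name] = jar.getInputStream(entry).readBytes()
            }
        }
        found
    }

fun main() {
    // Build a throwaway "plug-in jar" with one BPMN entry and one bystander.
    val jarPath = Files.createTempFile("plugin", ".jar")
    JarOutputStream(Files.newOutputStream(jarPath)).use { out ->
        out.putNextEntry(JarEntry("processes/plate-approval.bpmn20.xml"))
        out.write("<definitions/>".toByteArray())
        out.closeEntry()
        out.putNextEntry(JarEntry("README.md"))
        out.write("skipped".toByteArray())
        out.closeEntry()
    }
    println(collectBpmnResources(jarPath).keys) // [processes/plate-approval.bpmn20.xml]
}
```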
The deployer reads the JAR entries into byte arrays inside the JarFile's `use` block and then passes the bytes to `DeploymentBuilder.addBytes(name, bytes)` outside the block, so the jar handle is already closed by the time Flowable sees the deployment. No input-stream lifetime tangles.

### VibeErpPluginManager wiring

- New constructor dependency on `PluginProcessDeployer`.
- Deploy happens AFTER `start(context)` succeeds. The ordering matters because a plug-in can only register its TaskHandlers during `start(context)`, and a deployed BPMN whose service-task delegate expression resolves to a key with no matching handler would still deploy (Flowable only resolves delegates at process-start time). Registering handlers first is the safer default: the moment the deployment lands, every referenced handler is already in the TaskHandlerRegistry.
- BPMN deployment failure AFTER a successful `start(context)` now fully unwinds the plug-in state: call `instance.stop()`, remove the plug-in from the `started` list, strip its endpoints + its TaskHandlers + call `undeployByPlugin` (belt and suspenders — the deploy attempt may have partially succeeded). That mirrors the existing start-failure unwinding so the framework doesn't end up with a plug-in that's half-installed after any step throws.
- `destroy()` calls `undeployByPlugin(pluginId)` alongside the existing `unregisterAllByOwner(pluginId)`.

### Reference plug-in BPMN

`reference-customer/plugin-printing-shop/src/main/resources/processes/plate-approval.bpmn20.xml` — a minimal single-service-task process (`start` → serviceTask → `end`) whose serviceTask id is `printing_shop.plate.approve`, matching the PlateApprovalTaskHandler key landed in the previous commit. Process definition key is `plugin-printing-shop-plate-approval` (distinct from the serviceTask id because BPMN 2.0 requires element ids to be unique per document — same separation used for the core ping process).
## Smoke test (fresh DB, plug-in staged)

```
$ docker compose down -v && docker compose up -d db
$ ./gradlew :distribution:bootRun &
... registered TaskHandler 'vibeerp.workflow.ping' owner='core'
... TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
... plug-in 'printing-shop' Liquibase migrations applied successfully
[plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' ...
[plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve
PluginProcessDeployer: plug-in 'printing-shop' deployed 1 BPMN resource(s) as
  Flowable deploymentId='4e9f...': [processes/plate-approval.bpmn20.xml]

$ curl /api/v1/workflow/definitions   (as admin)
[ {"key":"plugin-printing-shop-plate-approval",
   "name":"Printing shop — plate approval",
   "version":1,
   "deploymentId":"4e9f85a6-33cf-11f1-acaa-1afab74ef3b4",
   "resourceName":"processes/plate-approval.bpmn20.xml"},
  {"key":"vibeerp-workflow-ping",
   "name":"vibe_erp workflow ping",
   "version":1,
   "deploymentId":"4f48...",
   "resourceName":"vibeerp-ping.bpmn20.xml"} ]

$ curl -X POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"plugin-printing-shop-plate-approval",
   "variables":{"plateId":"PLATE-007"}}
→ {"processInstanceId":"5b1b...", "ended":true,
   "variables":{"plateId":"PLATE-007", "plateApproved":true,
                "approvedBy":"user:admin",
                "approvedAt":"2026-04-09T04:48:30.514523Z"}}

$ kill -TERM <pid>
[ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
[ionShutdownHook] PluginProcessDeployer: plug-in 'printing-shop' deployment '4e9f...' removed (cascade)
```

Full end-to-end loop closed: plug-in ships a BPMN → host reads it out of the JAR → Flowable deployment registered under the plug-in category → HTTP caller starts a process instance via the standard `/api/v1/workflow/process-instances` surface → dispatcher routes by activity id to the plug-in's TaskHandler → handler writes output variables + plug-in sees the authenticated caller as `ctx.principal()` via the reserved `__vibeerp_*` process-variable propagation from commit `ef9e5b42`. SIGTERM cleanly undeploys the plug-in's BPMNs.

## Tests

- 6 new unit tests on `PluginProcessDeployerTest`:
  * `deployFromPlugin returns null when jarPath is not a regular file` — guard against dev-exploded plug-in dirs
  * `deployFromPlugin returns null when the plug-in jar has no BPMN resources`
  * `deployFromPlugin reads every bpmn resource under processes and deploys one bundle` — builds a real temporary JAR with two BPMN entries + a README + a metadata YAML, verifies that both BPMNs go through `addBytes` with the right names and the README / metadata entries are skipped
  * `deployFromPlugin rejects a blank plug-in id`
  * `undeployByPlugin returns zero when there is nothing to remove`
  * `undeployByPlugin cascades a deleteDeployment per matching deployment`
- Total framework unit tests: 275 (was 269), all green.

## Kotlin trap caught during authoring (feedback memory paid out)

First compile failed with `Unclosed comment` on the last line of `PluginProcessDeployer.kt`. The culprit was a KDoc paragraph containing the literal glob `classpath*:/processes/*.bpmn20.xml`: the embedded `/*` inside the backtick span was parsed as the start of a nested block comment even though the surrounding `/* ... */` KDoc was syntactically complete. The saved feedback-memory entry "Kotlin KDoc nested-comment trap" covered exactly this situation — the fix is to spell out glob characters as `[star]` / `[slash]` (or the word "slash-star") inside documentation so the literal `/*` never appears.
The KDoc now documents the behaviour AND the workaround so the next maintainer doesn't hit the same trap.

## Non-goals (still parking lot)

- Handler-side access to the full PluginContext — PlateApprovalTaskHandler is still a pure function because the framework doesn't hand TaskHandlers a context object. For REF.1 (real quote→job-card) handlers will need to read + mutate plug-in-owned tables; the cleanest approach is closure-capture inside the plug-in class (handler instantiated inside `start(context)` with the context captured in the outer scope). Decision deferred to REF.1.
- BPMN resource hot reload. The deployer runs once per plug-in start; a plug-in whose BPMN changes under its feet at runtime isn't supported yet.
- Plug-in-shipped DMN / CMMN resources. The deployer only looks at `.bpmn20.xml` and `.bpmn`. Decision-table and case-management resources are not on the v1.0 critical path.

-
## What's new

Plug-ins can now contribute workflow task handlers to the framework. The P2.1 `TaskHandlerRegistry` only saw `@Component` TaskHandler beans from the host Spring context; handlers defined inside a PF4J plug-in were invisible because the plug-in's child classloader is not in the host's bean list. This commit closes that gap.

## Mechanism

### api.v1

- New interface `org.vibeerp.api.v1.workflow.PluginTaskHandlerRegistrar` with a single `register(handler: TaskHandler)` method. Plug-ins call it from inside their `start(context)` lambda.
- `PluginContext.taskHandlers: PluginTaskHandlerRegistrar` — added as a new optional member with a default implementation that throws `UnsupportedOperationException("upgrade to v0.7 or later")`, so pre-existing plug-in jars remain binary-compatible with the new host and a plug-in built against v0.7 of the api-v1 surface fails fast on an old host instead of silently doing nothing. Same pattern we used for `endpoints` and `jdbc`.

### platform-workflow

- `TaskHandlerRegistry` gains owner tagging. Every registered handler now carries an `ownerId`: core `@Component` beans get `TaskHandlerRegistry.OWNER_CORE = "core"` (auto-assigned through the constructor-injection path), plug-in-contributed handlers get their PF4J plug-in id. New API:
  * `register(handler, ownerId = OWNER_CORE)` (default keeps existing call sites unchanged)
  * `unregisterAllByOwner(ownerId): Int` — strip every handler owned by that id in one call, returns the count for log correlation
  * The duplicate-key error message now includes both owners, so a plug-in trying to stomp on a core handler gets an actionable "already registered by X (owner='core'), attempted by Y (owner='printing-shop')" instead of "already registered".
  * Internal storage switched from `ConcurrentHashMap<String, TaskHandler>` to `ConcurrentHashMap<String, Entry>` where `Entry` carries `(handler, ownerId)`. `find(key)` still returns `TaskHandler?` so the dispatcher is unchanged.
- No behavioral change for the hot path (`DispatchingJavaDelegate`) — only the registration/teardown paths changed.

### platform-plugins

- New dependency on `:platform:platform-workflow` (the only new inter-module dep of this chunk; it is the module that exposes `TaskHandlerRegistry`).
- New internal class `ScopedTaskHandlerRegistrar(hostRegistry, pluginId)` that implements the api.v1 `PluginTaskHandlerRegistrar` by delegating `register(handler)` to `hostRegistry.register(handler, ownerId = pluginId)`. Constructed fresh per plug-in by `VibeErpPluginManager`, so the plug-in never sees (or can tamper with) the owner id.
- `DefaultPluginContext` gains a `scopedTaskHandlers` constructor parameter and exposes it as the `PluginContext.taskHandlers` override.
- `VibeErpPluginManager`:
  * injects `TaskHandlerRegistry`
  * constructs `ScopedTaskHandlerRegistrar(registry, pluginId)` per plug-in when building `DefaultPluginContext`
  * partial-start failure now also calls `taskHandlerRegistry.unregisterAllByOwner(pluginId)`, matching the existing `endpointRegistry.unregisterAll(pluginId)` cleanup so a throwing `start(context)` cannot leave stale registrations
  * `destroy()` calls the same `unregisterAllByOwner` for every started plug-in in reverse order, mirroring the endpoint cleanup

### reference-customer/plugin-printing-shop

- New file `workflow/PlateApprovalTaskHandler.kt` — the first plug-in-contributed TaskHandler in the framework. Key `printing_shop.plate.approve`. Reads a `plateId` process variable, writes `plateApproved`, `plateId`, `approvedBy` (principal label), `approvedAt` (ISO instant) and exits. No DB mutation yet: a proper plate-approval handler would UPDATE `plugin_printingshop__plate` via `context.jdbc`, but that requires handing the TaskHandler a projection of the PluginContext — a deliberate non-goal of this chunk, deferred to the "handler context" follow-up.
- `PrintingShopPlugin.start(context)` now ends with `context.taskHandlers.register(PlateApprovalTaskHandler())` and logs the registration.
- Package layout: `org.vibeerp.reference.printingshop.workflow` is the plug-in's workflow namespace going forward (the next printing-shop handlers for REF.1 — quote-to-job-card, job-card-to-work-order — will live alongside).

## Smoke test (fresh DB, plug-in staged)

```
$ docker compose down -v && docker compose up -d db
$ ./gradlew :distribution:bootRun &
... TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
... plug-in 'printing-shop' Liquibase migrations applied successfully
vibe_erp plug-in loaded: id=printing-shop version=0.1.0-SNAPSHOT state=STARTED
[plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop'
  class='org.vibeerp.reference.printingshop.workflow.PlateApprovalTaskHandler'
[plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve

$ curl /api/v1/workflow/handlers   (as admin)
{ "count": 2, "keys": ["printing_shop.plate.approve", "vibeerp.workflow.ping"] }

$ curl /api/v1/plugins/printing-shop/ping   # plug-in HTTP still works
{"plugin":"printing-shop","ok":true,"version":"0.1.0-SNAPSHOT", ...}

$ curl -X POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"vibeerp-workflow-ping"}
(principal propagation from previous commit still works — pingedBy=user:admin)

$ kill -TERM <pid>
[ionShutdownHook] vibe_erp stopping 1 plug-in(s)
[ionShutdownHook] [plugin:printing-shop] printing-shop plug-in stopped
[ionShutdownHook] unregistered TaskHandler 'printing_shop.plate.approve' (owner stopped)
[ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
```

Every expected lifecycle event fires in the right order with the right owner attribution. Core handlers are untouched by plug-in teardown.
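The owner-tagged registry described under platform-workflow above can be sketched in a few lines. The names mirror the commit, but the internals (`putIfAbsent` storage, exact error wording) are guesses, not the shipped code:

```kotlin
import java.util.concurrent.ConcurrentHashMap

// Reduced stand-in for the api.v1 TaskHandler surface.
interface TaskHandler { fun key(): String }

// Sketch of owner tagging: storage is key -> (handler, ownerId).
class TaskHandlerRegistry {
    private data class Entry(val handler: TaskHandler, val ownerId: String)
    private val entries = ConcurrentHashMap<String, Entry>()

    companion object { const val OWNER_CORE = "core" }

    fun register(handler: TaskHandler, ownerId: String = OWNER_CORE) {
        require(ownerId.isNotBlank()) { "blank owner id" }
        val previous = entries.putIfAbsent(handler.key(), Entry(handler, ownerId))
        if (previous != null) error(
            "TaskHandler '${handler.key()}' already registered " +
                "(owner='${previous.ownerId}'), attempted by owner='$ownerId'"
        )
    }

    // Strips every handler owned by ownerId; returns the count for log correlation.
    fun unregisterAllByOwner(ownerId: String): Int {
        val keys = entries.filterValues { it.ownerId == ownerId }.keys
        keys.forEach { entries.remove(it) }
        return keys.size
    }

    // Dispatcher-facing lookup still returns TaskHandler?, hiding the Entry wrapper.
    fun find(key: String): TaskHandler? = entries[key]?.handler
}
```

A second `unregisterAllByOwner` call for the same owner returns 0, matching the idempotent teardown the commit describes.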
## Tests

- 4 new / updated tests on `TaskHandlerRegistryTest`:
  * `unregisterAllByOwner only removes handlers owned by that id` — 2 core + 2 plug-in, unregister the plug-in owner, only the 2 plug-in keys are removed
  * `unregisterAllByOwner on unknown owner returns zero`
  * `register with blank owner is rejected`
  * Updated `duplicate key fails fast` to assert the new error message format including both owner ids
- Total framework unit tests: 269 (was 265), all green.

## What this unblocks

- **REF.1** (real printing-shop quote→job-card workflow) can now register its production handlers through the same seam.
- **Plug-in-contributed handlers with state access** — the next design question is how a plug-in handler gets at the plug-in's database and translator. Two options: pass a projection of the PluginContext through TaskContext, or keep a reference to the context captured at plug-in start (closure). The PlateApproval handler in this chunk is pure on purpose to keep the seam conversation separate.
- **Plug-in-shipped BPMN auto-deployment** — Flowable's default classpath scan uses `classpath*:/processes/*.bpmn20.xml`, which does NOT see PF4J plug-in classloaders. A dedicated `PluginProcessDeployer` that walks each started plug-in's JAR for BPMN resources and calls `repositoryService.createDeployment` is the natural companion to this commit, still pending.

## Non-goals (still parking lot)

- BPMN processes shipped inside plug-in JARs (see above — needs its own chunk, because it requires reading resources from the PF4J classloader and constructing a Flowable deployment by hand)
- Per-handler permission checks — a handler that wants a permission gate still has to call back through its own context; P4.3's @RequirePermission aspect doesn't reach into Flowable delegate execution.
- Hot reload of a running plug-in's TaskHandlers. The seam supports it, but `unloadPlugin` + `loadPlugin` at runtime isn't exercised by any current caller.

-
Before this commit, every TaskHandler saw a fixed `workflow-engine` System principal via `ctx.principal()` because there was no plumbing from the REST caller down to the dispatcher. A printing-shop quote-to-job-card handler (or any real business workflow) needs to know the actual user who kicked off the process so audit columns and role-based logic behave correctly.

## Mechanism

The chain is: Spring Security populates `SecurityContextHolder` → `PrincipalContextFilter` mirrors it into `AuthorizationContext` (already existed) → `WorkflowService.startProcess` reads the bound `AuthorizedPrincipal` and stashes two reserved process variables (`__vibeerp_initiator_id`, `__vibeerp_initiator_username`) before calling `RuntimeService.startProcessInstanceByKey` → `DispatchingJavaDelegate` reads them back off each `DelegateExecution` when constructing the `DelegateTaskContext` → handler sees a real `Principal.User` from `ctx.principal()`.

When the process is started outside an HTTP request (e.g. a future Quartz-scheduled process, or a signal fired by a PBC event subscriber), `AuthorizationContext.current()` is null, no initiator variables are written, and the dispatcher falls back to the `Principal.System("workflow-engine")` principal. A corrupt initiator id (e.g. a non-UUID string) also falls back to the system principal rather than failing the task, so a stale variable can't brick a running workflow.

## Reserved variable hygiene

The `__vibeerp_` prefix is reserved framework plumbing. Two consequences wired in this commit:

- `DispatchingJavaDelegate` strips keys starting with `__vibeerp_` from the variable snapshot handed to the handler (via `WorkflowTask.variables`), so handler code cannot accidentally depend on the initiator id through the wrong door — it must use `ctx.principal()`.
- `WorkflowService.startProcess` and `getInstanceVariables` strip the same prefix from their HTTP response payloads so REST callers never see the plumbing either.
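Both halves of that hygiene are small enough to sketch. A self-contained approximation, assuming a reduced `Principal` shape (the real prefix constant lives on `DispatchingJavaDelegate` and the variable-name constants on `WorkflowService`):

```kotlin
import java.util.UUID

// Reduced stand-in for the api.v1 Principal hierarchy.
sealed interface Principal {
    data class User(val id: UUID, val username: String) : Principal
    data class System(val name: String) : Principal
}

const val RESERVED_VAR_PREFIX = "__vibeerp_"
const val INITIATOR_ID_VAR = RESERVED_VAR_PREFIX + "initiator_id"
const val INITIATOR_USERNAME_VAR = RESERVED_VAR_PREFIX + "initiator_username"

// Handler-visible snapshot: reserved framework plumbing never leaks through.
fun stripReserved(vars: Map<String, Any?>): Map<String, Any?> =
    vars.filterKeys { !it.startsWith(RESERVED_VAR_PREFIX) }

// Falls back to the engine principal when the vars are absent or corrupt,
// so a stale variable can't brick a running workflow.
fun resolveInitiator(vars: Map<String, Any?>): Principal {
    val id = vars[INITIATOR_ID_VAR] as? String
        ?: return Principal.System("workflow-engine")
    val username = vars[INITIATOR_USERNAME_VAR] as? String
        ?: return Principal.System("workflow-engine")
    return try {
        Principal.User(UUID.fromString(id), username)
    } catch (e: IllegalArgumentException) { // non-UUID id: corrupt, fall back
        Principal.System("workflow-engine")
    }
}
```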
The prefix constant lives on `DispatchingJavaDelegate.RESERVED_VAR_PREFIX` so there is exactly one source of truth. The two initiator variable names are public constants on `WorkflowService` — tests, future plug-in code, and any custom handlers that genuinely need the raw ids (e.g. a security-audit task) can depend on the stable symbols instead of hard-coded strings.

## PingTaskHandler as the executable witness

`PingTaskHandler` now writes a `pingedBy` output variable with a principal label (`user:<username>`, `system:<name>`, or `plugin:<pluginId>`) and logs it. That makes the end-to-end smoke test trivially assertable:

```
POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"vibeerp-workflow-ping"}
  (as admin user, with valid JWT)
→ {"processInstanceId": "...",
   "ended": true,
   "variables": { "pong": true, "pongAt": "...", "correlationId": "...",
                  "pingedBy": "user:admin" }}
```

Note the RESPONSE does NOT contain `__vibeerp_initiator_id` or `__vibeerp_initiator_username` — the reserved-var filter in the service layer hides them. The handler-side log line confirms `principal='user:admin'` in the service-task execution thread.

## Tests

- 3 new tests in `DispatchingJavaDelegateTest`:
  * `resolveInitiator` returns a User principal when both vars set
  * falls back to system principal when id var is missing
  * falls back to system principal when id var is corrupt (non-UUID string)
- Updated `variables given to the handler are a defensive copy` to also assert that reserved `__vibeerp_*` keys are stripped from the task's variable snapshot.
- Updated `PingTaskHandlerTest`:
  * rename to "writes pong plus timestamp plus correlation id plus user principal label"
  * new test for the System-principal branch producing `pingedBy=system:workflow-engine`
- Total framework unit tests: 265 (was 261), all green.
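The label format itself can be pinned down with a tiny sketch; the `Principal` hierarchy here is a reduced stand-in for the api.v1 type, not its real shape:

```kotlin
// Reduced stand-in for the api.v1 Principal hierarchy.
sealed interface Principal {
    data class User(val username: String) : Principal
    data class System(val name: String) : Principal
    data class Plugin(val pluginId: String) : Principal
}

// The pingedBy label: user:<username>, system:<name>, or plugin:<pluginId>.
fun label(p: Principal): String = when (p) {
    is Principal.User -> "user:${p.username}"
    is Principal.System -> "system:${p.name}"
    is Principal.Plugin -> "plugin:${p.pluginId}"
}
```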
## Non-goals (still parking lot)

- Plug-in-contributed TaskHandler registration via the PF4J loader walking child contexts for TaskHandler beans and calling `TaskHandlerRegistry.register`. The seam exists on the registry; the loader integration is the next chunk, and unblocks REF.1.
- Propagation of the full role set (not just id+username) into the TaskContext. Handlers don't currently see the initiator's roles. Can be added as a third reserved variable when a handler actually needs it — YAGNI for now.
- BPMN user tasks / signals / timers — the engine supports them but we have no HTTP surface for them yet.

-
New platform subproject `platform/platform-workflow` that makes `org.vibeerp.api.v1.workflow.TaskHandler` a live extension point. This is the framework's first chunk of Phase 2 (embedded workflow engine) and the dependency other work has been waiting on — pbc-production routings/operations, the full buy-make-sell BPMN scenario in the reference plug-in, and ultimately the BPMN designer web UI all hang off this seam.

## The shape

- `flowable-spring-boot-starter-process:7.0.1` pulled in behind a single new module. Every other module in the framework still sees only the api.v1 TaskHandler + WorkflowTask + TaskContext surface — guardrail #10 stays honest, no Flowable type leaks to plug-ins or PBCs.
- `TaskHandlerRegistry` is the host-side index of every registered handler, keyed by `TaskHandler.key()`. Auto-populated from every Spring bean implementing TaskHandler via constructor injection of `List<TaskHandler>`; duplicate keys fail fast at registration time. `register` / `unregister` exposed for a future plug-in lifecycle integration.
- `DispatchingJavaDelegate` is a single Spring-managed JavaDelegate named `taskDispatcher`. Every BPMN service task in the framework references it via `flowable:delegateExpression="${taskDispatcher}"`. The dispatcher reads `execution.currentActivityId` as the task key (BPMN `id` attribute = TaskHandler key — no extension elements, no field injection, no second source of truth) and routes to the matching registered handler. A defensive copy of the execution variables is passed to the handler so it cannot mutate Flowable's internal map.
- `DelegateTaskContext` adapts Flowable's `DelegateExecution` to the api.v1 `TaskContext` — the variable `set(name, value)` call forwards through Flowable's variable scope (persisted in the same transaction as the surrounding service-task execution) and null values remove the variable.
Principal + locale are documented placeholders for now (a workflow-engine `Principal.System`), waiting on the propagation chunk that plumbs the initiating user through `runtimeService.startProcessInstanceByKey(...)`.

- `WorkflowService` is a thin facade over Flowable's `RuntimeService` + `RepositoryService` exposing exactly the four operations the controller needs: start, list active, inspect variables, list definitions. Everything richer (signals, timers, sub-processes, user-task completion, history queries) lands on this seam in later chunks.
- `WorkflowController` at `/api/v1/workflow/**`:
  * `POST /process-instances` (permission `workflow.process.start`)
  * `GET /process-instances` (`workflow.process.read`)
  * `GET /process-instances/{id}/variables` (`workflow.process.read`)
  * `GET /definitions` (`workflow.definition.read`)
  * `GET /handlers` (`workflow.definition.read`)

  Exception handlers map `NoSuchElementException` + `FlowableObjectNotFoundException` → 404, `IllegalArgumentException` → 400, and any other `FlowableException` → 400. Permissions are declared in a new `META-INF/vibe-erp/metadata/workflow.yml` loaded by the core MetadataLoader so they show up under `GET /api/v1/_meta/metadata` alongside every other permission.

## The executable self-test

- `vibeerp-ping.bpmn20.xml` ships in `processes/` on the module classpath and Flowable's starter auto-deploys it at boot. Structure: `start` → serviceTask id=`vibeerp.workflow.ping` (delegateExpression=`${taskDispatcher}`) → `end`. Process definitionKey is `vibeerp-workflow-ping` (distinct from the serviceTask id because BPMN 2.0 ids must be unique per document).
- `PingTaskHandler` is a real shipped bean, not test code: its `execute` writes `pong=true`, `pongAt=<Instant.now()>`, and `correlationId=<ctx.correlationId()>` to the process variables. Operators and AI agents get a trivial "is the workflow engine alive?" probe out of the box.
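For orientation, a plausible skeleton of `vibeerp-ping.bpmn20.xml` reconstructed from the prose above (namespace details and sequence-flow ids are guesses, not the shipped file):

```xml
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             xmlns:flowable="http://flowable.org/bpmn"
             targetNamespace="http://vibeerp.org/processes">
  <process id="vibeerp-workflow-ping" name="vibe_erp workflow ping" isExecutable="true">
    <startEvent id="start"/>
    <sequenceFlow id="flow1" sourceRef="start" targetRef="vibeerp.workflow.ping"/>
    <!-- the serviceTask id doubles as the TaskHandler key;
         every service task shares the one dispatcher bean -->
    <serviceTask id="vibeerp.workflow.ping"
                 flowable:delegateExpression="${taskDispatcher}"/>
    <sequenceFlow id="flow2" sourceRef="vibeerp.workflow.ping" targetRef="end"/>
    <endEvent id="end"/>
  </process>
</definitions>
```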
Why the demo lives in src/main, not src/test: Flowable's auto-deployer reads from the host classpath at boot, so if either half lived under src/test the smoke test wouldn't be reproducible from the shipped image — exactly what CLAUDE.md's "reference plug-in is the executable acceptance test" discipline is trying to prevent.

## The Flowable + Liquibase trap

**Learned the hard way during the smoke test.** Adding `flowable-spring-boot-starter-process` immediately broke boot with `Schema-validation: missing table [catalog__item]`. Liquibase was silently not running. Root cause: Flowable 7.x registers a Spring Boot `EnvironmentPostProcessor` called `FlowableLiquibaseEnvironmentPostProcessor` that, unless the user has already set an explicit value, forces `spring.liquibase.enabled=false` with a WARN log line that reads "Flowable pulls in Liquibase but does not use the Spring Boot configuration for it". Our master.xml then never executes and JPA validation fails against the empty schema.

Fix is a single line in `distribution/src/main/resources/application.yaml` — `spring.liquibase.enabled: true` — with a comment explaining why it must stay there for anyone who touches config next. Flowable's own ACT_* tables and vibe_erp's `catalog__*`, `pbc.*__*`, etc. tables coexist happily in the same public schema — 39 ACT_* tables alongside 45 vibe_erp tables on the smoke-tested DB. Flowable manages its own schema via its internal MyBatis DDL, Liquibase manages ours, they don't touch each other.

## Smoke-test transcript (fresh DB, dev profile)

```
docker compose down -v && docker compose up -d db
./gradlew :distribution:bootRun &
# ... Flowable creates ACT_* tables, Liquibase creates vibe_erp tables,
# MetadataLoader loads workflow.yml, TaskHandlerRegistry boots with 1 handler,
# BPMN auto-deployed from classpath

POST /api/v1/auth/login → JWT

GET /api/v1/workflow/definitions
→ 1 definition (vibeerp-workflow-ping)

GET /api/v1/workflow/handlers
→ {"count":1,"keys":["vibeerp.workflow.ping"]}

POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"vibeerp-workflow-ping",
   "businessKey":"smoke-1",
   "variables":{"greeting":"ni hao"}}
→ 201 {"processInstanceId":"...","ended":true,
       "variables":{"pong":true,"pongAt":"2026-04-09T...",
                    "correlationId":"...","greeting":"ni hao"}}

POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"does-not-exist"}
→ 404 {"message":"No process definition found for key 'does-not-exist'"}

GET /api/v1/catalog/uoms → still returns the 15 seeded UoMs (sanity)
```

## Tests

- 15 new unit tests in `platform-workflow/src/test`:
  * `TaskHandlerRegistryTest` — init with initial handlers, duplicate key fails fast, blank key rejected, unregister removes, unregister on unknown returns false, find on missing returns null
  * `DispatchingJavaDelegateTest` — dispatches by currentActivityId, throws on missing handler, defensive-copies the variable map
  * `DelegateTaskContextTest` — set non-null forwards, set null removes, blank name rejected, principal/locale/correlationId passthrough, default correlation id is stable across calls
  * `PingTaskHandlerTest` — key matches the BPMN serviceTask id, execute writes pong + pongAt + correlationId
- Total framework unit tests: 261 (was 246), all green.
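The dispatch-by-activity-id contract from "The shape" can be pinned down in a few lines. A framework-free sketch (reduced `TaskHandler` signature, no Flowable types; the real delegate reads `execution.currentActivityId` and writes outputs back through `TaskContext`, not the snapshot):

```kotlin
// Reduced stand-in for the api.v1 handler surface.
interface TaskHandler {
    fun key(): String
    fun execute(variables: MutableMap<String, Any?>)
}

// One dispatcher for every BPMN service task: the activity id IS the handler
// key, so routing is a single registry lookup. Missing handler fails fast.
class DispatchingDelegate(private val handlers: Map<String, TaskHandler>) {
    fun execute(currentActivityId: String, engineVariables: MutableMap<String, Any?>) {
        val handler = handlers[currentActivityId]
            ?: throw IllegalStateException("no TaskHandler registered for key '$currentActivityId'")
        // Defensive copy: the handler cannot mutate the engine's internal map.
        val snapshot = HashMap(engineVariables)
        handler.execute(snapshot)
    }
}
```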
## What this unblocks

- **REF.1** — real quote→job-card workflow handler in the printing-shop plug-in
- **pbc-production routings/operations (v3)** — each operation becomes a BPMN step with duration + machine assignment
- **P2.3** — user-task form rendering (landing on top of the RuntimeService already exposed via WorkflowService)
- **P2.2** — BPMN designer web page (later, depends on R1)

## Deliberate non-goals (parking lot)

- Principal propagation from the REST caller through the process start into the handler — uses a fixed `workflow-engine` `Principal.System` for now. Follow-up chunk will plumb the authenticated user as a Flowable variable.
- Plug-in-contributed TaskHandler registration via PF4J child contexts — the registry exposes `register/unregister` but the plug-in loader doesn't call them yet. Follow-up chunk.
- BPMN user tasks, signals, timers, history queries — seam exists, deliberately not built out.
- Workflow deployment from `metadata__workflow` rows (the Tier 1 path). Today deployment is classpath-only via Flowable's auto-deployer.
- The Flowable async job executor is explicitly deactivated (`flowable.async-executor-activate: false`) — background-job machinery belongs to the future Quartz integration (P1.10), not Flowable.

-
Two small closer items that tidy up the end of the HasExt rollout:

1. inventory.yml gains a `customFields:` section with two core declarations for Location: `inventory_address_city` (string, maxLength 128) and `inventory_floor_area_sqm` (decimal 10,2). Completes the "every HasExt entity has at least one declared field" symmetry. The printing-shop plug-in already adds its own `printing_shop_press_id` etc. on top.
2. CLAUDE.md "Repository state" section updated to reflect this session's milestones:
   - pbc-production v2 (IN_PROGRESS + BOM + scrap) now called out explicitly in the PBC list.
   - MATERIAL_ISSUE added to the buy-sell-MAKE loop description — the work-order completion now consumes raw materials per BOM line AND credits finished goods atomically.
   - New bullet: "Tier 1 customization is universal across every core entity with an ext column" — HasExt on Partner, Location, SalesOrder, PurchaseOrder, WorkOrder, Item; every service uses applyTo/parseExt helpers, zero duplication.
   - New bullet: "Clean Core extensibility is executable" — the reference printing-shop plug-in's metadata YAML ships customFields on Partner/Item/SalesOrder/WorkOrder and the MetadataLoader merges them with core declarations at load time. Executable grade-A extension under the A/B/C/D safety scale.
   - Printing-shop plug-in description updated to note that its metadata YAML now carries custom fields on core entities, not just its own entities.

Smoke verified end-to-end against real Postgres with the plug-in staged:
- GET /_meta/metadata/custom-fields/Location returns 2 core fields.
- POST /inventory/locations with `{inventory_address_city: "Shenzhen", inventory_floor_area_sqm: "1250.50"}` → 201, canonical form persisted, ext round-trips.
- POST with `inventory_floor_area_sqm: "123456789012345.678"` → 400 "ext.inventory_floor_area_sqm: decimal scale 3 exceeds declared scale 2" — the validator's precision/scale rules fire exactly as designed.

No code changes. 246 unit tests, all green. 18 Gradle subprojects.

-
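The precision/scale rule the Location smoke test above exercises (`inventory_floor_area_sqm` declared as decimal 10,2) can be sketched with `BigDecimal`. This is a hedged illustration — the function name and exact checks are assumptions; only the error-message shape comes from the transcript.

```kotlin
// Illustrative stand-in for ExtJsonValidator's decimal rule: reject values
// whose scale (fractional digits) or integer digits exceed the declaration.
fun checkDecimal(key: String, raw: String, precision: Int, scale: Int): String? {
    val value = raw.toBigDecimalOrNull()
        ?: return "ext.$key: not a valid decimal: '$raw'"
    if (value.scale() > scale)
        return "ext.$key: decimal scale ${value.scale()} exceeds declared scale $scale"
    if (value.precision() - value.scale() > precision - scale)
        return "ext.$key: integer digits exceed declared precision $precision"
    return null // valid
}

fun main() {
    // "1250.50" fits decimal(10,2); the 18-digit value fails on scale first.
    check(checkDecimal("inventory_floor_area_sqm", "1250.50", 10, 2) == null)
    println(checkDecimal("inventory_floor_area_sqm", "123456789012345.678", 10, 2))
}
```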
Completes the HasExt rollout across every core entity with an ext column. Item was the last one that carried an ext JSONB column without any validation wired — a plug-in could declare custom fields for Item but nothing would enforce them on save. This fixes that and restores two printing-shop-specific Item fields to the reference plug-in that were temporarily dropped from the previous Tier 1 customization chunk (commit 16c59310) precisely because Item wasn't wired.

Code changes:
- Item implements HasExt; `ext` becomes `override var ext`, a companion constant holds the entity name "Item".
- ItemService injects ExtJsonValidator, calls applyTo() in both create() and update() (create + update symmetry like partners and locations). parseExt passthrough added for response mappers.
- CreateItemCommand, UpdateItemCommand, CreateItemRequest, UpdateItemRequest gain a nullable ext field.
- ItemResponse now carries the parsed ext map, same shape as PartnerResponse / LocationResponse / SalesOrderResponse.
- pbc-catalog build.gradle adds `implementation(project(":platform:platform-metadata"))`.
- ItemServiceTest constructor updated to pass the new validator dependency with no-op stubs.

Plug-in YAML (printing-shop.yml):
- Re-added `printing_shop_color_count` (integer) and `printing_shop_paper_gsm` (integer) custom fields targeting Item. These were originally in the commit 16c59310 draft but removed because Item wasn't wired. Now that Item is wired, they're back and actually enforced.

Smoke verified end-to-end against real Postgres with the plug-in staged:
- GET /_meta/metadata/custom-fields/Item returns 2 plug-in fields.
- POST /catalog/items with `{printing_shop_color_count: 4, printing_shop_paper_gsm: 170}` → 201, canonical form persisted.
- GET roundtrip preserves both integer values.
- POST with `printing_shop_color_count: "not-a-number"` → 400 "ext.printing_shop_color_count: not a valid integer: 'not-a-number'".
- POST with `rogue_key` → 400 "ext contains undeclared key(s) for 'Item': [rogue_key]".

Six of eight PBCs now participate in HasExt: Partner, Location, SalesOrder, PurchaseOrder, WorkOrder, Item. The remaining two are pbc-identity (User has no ext column by design — identity is a security concern, not a customization one) and pbc-finance (JournalEntry is derived state from events, no customization surface).

Five core entities carry Tier 1 custom fields as of this commit:

    Partner    (2 core + 1 plug-in)
    Item       (0 core + 2 plug-in)
    SalesOrder (0 core + 1 plug-in)
    WorkOrder  (2 core + 1 plug-in)
    Location   (0 core + 0 plug-in — wired but no declarations yet)

246 unit tests, all green. 18 Gradle subprojects.
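The two Item rejection paths the smoke list shows — undeclared keys and non-integer values — can be sketched together. A hedged stand-in: the real ExtJsonValidator reads merged declarations from `metadata__custom_field`; here they are a plain set, and only integer-typed fields are modeled.

```kotlin
// Illustrative ext validation for an entity whose declared fields are all
// integers (like Item's two printing-shop fields). Returns error strings
// in the same shape as the 400 responses in the smoke transcript.
fun validateExt(
    entityName: String,
    declaredIntFields: Set<String>,
    ext: Map<String, Any?>,
): List<String> {
    val errors = mutableListOf<String>()
    val rogue = ext.keys - declaredIntFields
    if (rogue.isNotEmpty())
        errors += "ext contains undeclared key(s) for '$entityName': ${rogue.sorted()}"
    for ((key, value) in ext) {
        if (key !in declaredIntFields) continue
        val ok = value is Int || value is Long ||
            (value is String && value.toLongOrNull() != null)
        if (!ok) errors += "ext.$key: not a valid integer: '$value'"
    }
    return errors
}

fun main() {
    val declared = setOf("printing_shop_color_count", "printing_shop_paper_gsm")
    check(validateExt("Item", declared, mapOf("printing_shop_color_count" to 4)).isEmpty())
    println(validateExt("Item", declared, mapOf("rogue_key" to 1)))
}
```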
-
The reference printing-shop plug-in now demonstrates the framework's most important promise — that a customer plug-in can EXTEND core business entities (Partner, SalesOrder, WorkOrder) with customer-specific fields WITHOUT touching any core code.

Added to `printing-shop.yml` customFields section:

    Partner:    printing_shop_customer_segment (enum: agency, in_house, end_client, reseller)
    SalesOrder: printing_shop_quote_number (string, maxLength 32)
    WorkOrder:  printing_shop_press_id (string, maxLength 32)

Mechanism (no code changes, all metadata-driven):
1. Plug-in YAML carries a `customFields:` section alongside its entities / permissions / menus.
2. MetadataLoader.loadFromPluginJar reads the section and inserts rows into `metadata__custom_field` tagged `source='plugin:printing-shop'`.
3. CustomFieldRegistry.refresh re-reads ALL rows (both `source='core'` and `source='plugin:*'`) and merges them by `targetEntity`.
4. ExtJsonValidator.applyTo now validates incoming ext against the MERGED set, so a POST to /api/v1/partners/partners can include both core-declared (partners_industry) and plug-in-declared (printing_shop_customer_segment) fields in the same request, and both are enforced.
5. Uninstalling the plug-in removes the plugin:printing-shop rows and the fields disappear from validation AND from the UI's custom-field catalog — no migration, no restart of anything other than the plug-in lifecycle itself.

Convention established: plug-in-contributed custom field keys are prefixed with the plug-in id (e.g. `printing_shop_*`) so two independent plug-ins can't collide on the same entity. Documented inline in the YAML.

Smoke verified end-to-end against real Postgres with the plug-in staged:
- CustomFieldRegistry logs "refreshed 4 custom fields" after core load, then "refreshed 7 custom fields across 3 entities" after plug-in load — 4 core + 3 plug-in fields, merged.
- GET /_meta/metadata/custom-fields/Partner returns 3 fields (2 core + 1 plug-in).
- GET /_meta/metadata/custom-fields/SalesOrder returns 1 field (plug-in only).
- GET /_meta/metadata/custom-fields/WorkOrder returns 3 fields (2 core + 1 plug-in).
- POST /partners/partners with BOTH a core field AND a plug-in field → 201, canonical form persisted.
- POST with an invalid plug-in enum value → 400 "ext.printing_shop_customer_segment: value 'ghost' is not in allowed set [agency, in_house, end_client, reseller]".
- POST /orders/sales-orders with printing_shop_quote_number → 201, quote number round-trips.
- POST /production/work-orders with mixed production_priority + printing_shop_press_id → 201, both fields persisted.

This is the first executable demonstration of Clean Core extensibility (CLAUDE.md guardrail #7) — the plug-in extends core-owned data through a stable public contract (api.v1 HasExt + metadata__custom_field) without reaching into any core or platform internal class. This is the "A" grade on the A/B/C/D extensibility safety scale.

No code changes. No test changes. 246 unit tests, still green.

-
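The core + plug-in merge step in the entry above (mechanism step 3) reduces to a group-by over every `metadata__custom_field` row, regardless of source. A hedged sketch — row shape and names are illustrative, not the real CustomFieldRegistry:

```kotlin
// Illustrative merge: re-read all rows (core and plugin:*) and group by
// targetEntity, so validators see one merged declaration set per entity.
data class CustomFieldRow(val targetEntity: String, val key: String, val source: String)

fun mergeByEntity(rows: List<CustomFieldRow>): Map<String, List<CustomFieldRow>> =
    rows.groupBy { it.targetEntity }

fun main() {
    val rows = listOf(
        CustomFieldRow("Partner", "partners_industry", "core"),
        CustomFieldRow("Partner", "partners_credit_limit", "core"),
        CustomFieldRow("Partner", "printing_shop_customer_segment", "plugin:printing-shop"),
        CustomFieldRow("SalesOrder", "printing_shop_quote_number", "plugin:printing-shop"),
    )
    // Partner now validates against 2 core + 1 plug-in declarations.
    check(mergeByEntity(rows).getValue("Partner").size == 3)
    // Uninstalling the plug-in is just dropping its rows and refreshing.
    val afterUninstall = mergeByEntity(rows.filter { it.source == "core" })
    check("SalesOrder" !in afterUninstall)
}
```

The plug-in-id key prefix convention then guarantees two plug-ins contributing to the same `targetEntity` land in disjoint key namespaces within a merged group.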
Closes the last known gap from the HasExt refactor (commit 986f02ce): pbc-production's WorkOrder had an `ext` column but no validator was wired, so an operator could write arbitrary JSON without any schema enforcement. This fixes that and adds the first Tier 1 custom fields for WorkOrder.

Code changes:
- WorkOrder implements HasExt; ext becomes `override var ext`, ENTITY_NAME moves onto the entity companion.
- WorkOrderService injects ExtJsonValidator, calls applyTo() in create() before saving (null-safe so the SalesOrderConfirmedSubscriber's auto-spawn path still works — verified by smoke test).
- CreateWorkOrderCommand + CreateWorkOrderRequest gain an `ext` field that flows through to the validator.
- WorkOrderResponse gains an `ext: Map<String, Any?>` field; the response mapper signature changes to `toResponse(service)` to reach the validator via a convenience parseExt delegate on the service (same pattern as the other four PBCs).
- pbc-production Gradle build adds `implementation(project(":platform:platform-metadata"))`.

Metadata (production.yml):
- Permission keys extended to match the v2 state machine: production.work-order.start (was missing) and production.work-order.scrap (was missing). The existing .read / .create / .complete / .cancel keys stay.
- Two custom fields declared:
  * production_priority (enum: low, normal, high, urgent)
  * production_routing_notes (string, maxLength 1024)
  Both are optional and non-PII; an operator can now add priority and routing notes to a work order through the public API without any code change, which is the whole point of Tier 1 customization.

Unit tests: WorkOrderServiceTest constructor updated to pass the new extValidator dependency and stub applyTo/parseExt as no-ops. No behavioral test changes — ext validation is covered by ExtJsonValidatorTest and the platform-wide smoke tests.
Smoke verified end-to-end against real Postgres:
- GET /_meta/metadata/custom-fields/WorkOrder now returns both declarations with correct enum sets and maxLength.
- POST /work-orders with valid ext {production_priority:"high", production_routing_notes:"Rush for customer demo"} → 201, canonical form persisted, round-trips via GET.
- POST with invalid enum value → 400 "value 'emergency' is not in allowed set [low, normal, high, urgent]".
- POST with unknown ext key → 400 "ext contains undeclared key(s) for 'WorkOrder': [unknown_field]".
- Auto-spawn from confirmed SO → DRAFT work order with empty ext `{}`, confirming the applyTo(null) null-safe path.

Five of the eight PBCs now participate in the HasExt pattern: Partner, Location, SalesOrder, PurchaseOrder, WorkOrder. The remaining three (Item, Uom, JournalEntry) either have their own custom-field story in separate entities or are derived state.

246 unit tests, all green. 18 Gradle subprojects.
-
Closes the P4.3 rollout — the last PBC whose controllers were still unannotated. Every endpoint in `UserController` now carries an `@RequirePermission("identity.user.*")` annotation matching the keys already declared in `identity.yml`:

    GET    /api/v1/identity/users        identity.user.read
    GET    /api/v1/identity/users/{id}   identity.user.read
    POST   /api/v1/identity/users        identity.user.create
    PATCH  /api/v1/identity/users/{id}   identity.user.update
    DELETE /api/v1/identity/users/{id}   identity.user.disable

`AuthController` (login, refresh) is deliberately NOT annotated — it is in the platform-security public allowlist because login is the token-issuing endpoint (chicken-and-egg). KDoc on the controller class updated to reflect the new auth story (removing the stale "authentication deferred to v0.2" comment from before P4.1 / P4.3 landed).

Smoke verified end-to-end against real Postgres:
- Admin (wildcard `admin` role) → GET /users returns 200, POST /users returns 201 (new user `jane` created).
- Unauthenticated GET and POST → 401 Unauthorized from the framework's JWT filter before @RequirePermission runs. A non-admin user without explicit grants would get 403 from the AOP evaluator; tested manually with the admin and anonymous cases.

No test changes — the controller unit test is a thin DTO mapper test that doesn't exercise the Spring AOP aspect; identity-wide authz enforcement is covered by the platform-security tests plus the shipping smoke tests.

246 unit tests, all green.

P4.3 is now complete across every core PBC: pbc-catalog, pbc-partners, pbc-inventory, pbc-orders-sales, pbc-orders-purchase, pbc-finance, pbc-production, pbc-identity.

-
CI verified green for `986f02ce` on both gradle build and docker image jobs.
-
Removes the ext-handling copy/paste that had grown across four PBCs (partners, inventory, orders-sales, orders-purchase). Every service that wrote the JSONB `ext` column was manually doing the same four-step sequence: validate, null-check, serialize with a local ObjectMapper, assign to the entity. And every response mapper was doing the inverse: check-if-blank, parse, cast, swallow errors.

Net: ~15 lines saved per PBC, one place to change the ext contract later (e.g. PII redaction, audit tagging, field-level events), and a stable plug-in opt-in mechanism — any plug-in entity that implements `HasExt` automatically participates.

New api.v1 surface:

    interface HasExt {
        val extEntityName: String  // key into metadata__custom_field
        var ext: String            // the serialized JSONB column
    }

Lives in `org.vibeerp.api.v1.entity` so plug-ins can opt their own entities into the same validation path. Zero Spring/Jackson dependencies — api.v1 stays clean.

Extended `ExtJsonValidator` (platform-metadata) with two helpers:

    fun applyTo(entity: HasExt, ext: Map<String, Any?>?)
      — null-safe; validates; writes canonical JSON to entity.ext. Replaces the
        validate + writeValueAsString + assign triplet in every service's
        create() and update().

    fun parseExt(entity: HasExt): Map<String, Any?>
      — returns empty map on blank/corrupt column; response mappers never 500
        on bad data. Replaces the four identical parseExt local functions.

ExtJsonValidator now takes an ObjectMapper via constructor injection (Spring Boot's auto-configured bean).
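The two helper contracts can be demonstrated standalone. This is a deliberately toy sketch: the real ExtJsonValidator uses an injected Jackson ObjectMapper and the merged custom-field declarations; here validation is elided and JSON handling is a flat string-map codec, so only the null-safety and never-throw contracts are being illustrated.

```kotlin
// Toy stand-in for the two ExtJsonValidator helpers. Real code delegates
// serialization to Jackson; the codec below exists only to keep the sketch
// dependency-free.
interface HasExt {
    val extEntityName: String // key into metadata__custom_field
    var ext: String           // the serialized JSONB column
}

class ExtJsonValidator {
    /** Null-safe: a null map means "leave entity.ext untouched". */
    fun applyTo(entity: HasExt, ext: Map<String, String>?) {
        if (ext == null) return
        // real code: validate against declarations for entity.extEntityName,
        // then write Jackson's canonical form
        entity.ext = ext.entries.joinToString(",", "{", "}") { (k, v) -> "\"$k\":\"$v\"" }
    }

    /** Returns an empty map on a blank or corrupt column — mappers never 500. */
    fun parseExt(entity: HasExt): Map<String, String> {
        val raw = entity.ext.trim().removeSurrounding("{", "}")
        if (raw.isBlank()) return emptyMap()
        return runCatching {
            raw.split(",").associate {
                val (k, v) = it.split(":", limit = 2)
                k.trim().removeSurrounding("\"") to v.trim().removeSurrounding("\"")
            }
        }.getOrDefault(emptyMap())
    }
}

fun main() {
    val v = ExtJsonValidator()
    val p = object : HasExt { override val extEntityName = "Partner"; override var ext = "{}" }
    v.applyTo(p, mapOf("partners_industry" to "printing"))
    check(v.parseExt(p) == mapOf("partners_industry" to "printing"))
    v.applyTo(p, null) // PATCH without ext: prior ext preserved
    check(v.parseExt(p) == mapOf("partners_industry" to "printing"))
    val bad = object : HasExt { override val extEntityName = "X"; override var ext = "garbage" }
    check(v.parseExt(bad).isEmpty()) // corrupt column degrades to empty map
}
```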
Entities that now implement HasExt (override val extEntityName; override var ext; companion object const val ENTITY_NAME):
- Partner (`partners.Partner` → "Partner")
- Location (`inventory.Location` → "Location")
- SalesOrder (`orders_sales.SalesOrder` → "SalesOrder")
- PurchaseOrder (`orders_purchase.PurchaseOrder` → "PurchaseOrder")

Deliberately NOT converted this chunk:
- WorkOrder (pbc-production) — its ext column has no declared fields yet; a follow-up that adds declarations AND the HasExt implementation is cleaner than splitting the two.
- JournalEntry (pbc-finance) — derived state, no ext column.

Services lose:
- The `jsonMapper: ObjectMapper = ObjectMapper().registerKotlinModule()` field (four copies eliminated)
- The `parseExt(entity): Map` helper function (four copies)
- The `companion object { const val ENTITY_NAME = ... }` constant (moved onto the entity where it belongs)
- The `val canonicalExt = extValidator.validate(...)` + `.also { it.ext = jsonMapper.writeValueAsString(canonicalExt) }` create pattern (replaced with one applyTo call)
- The `if (command.ext != null) { ... }` update pattern (applyTo is null-safe)

Unit tests: 6 new cases on ExtJsonValidatorTest cover applyTo and parseExt (null-safe path, happy path, failure path, blank column, round-trip, malformed JSON). Existing service tests just swap the mock setup from stubbing `validate` to stubbing `applyTo` and `parseExt` with no-ops.

Smoke verified end-to-end against real Postgres:
- POST /partners with valid ext (partners_credit_limit, partners_industry) → 201, canonical form persisted.
- GET /partners/by-code/X → 200, ext round-trips.
- POST with invalid enum value → 400 "value 'x' is not in allowed set [printing, publishing, packaging, other]".
- POST with undeclared key → 400 "ext contains undeclared key(s) for 'Partner': [rogue_field]".
- PATCH with new ext → 200, ext updated.
- PATCH WITHOUT ext field → 200, prior ext preserved (null-safe applyTo).
- POST /orders/sales-orders with no ext → 201, the create path via the shared helper still works. 246 unit tests (+6 over 240), 18 Gradle subprojects. -
CI verified green for `75a75baa` on both the gradle build and docker image jobs.
-
Grows pbc-production from the minimal v1 (DRAFT → COMPLETED in one step, single output, no BOM) into a real v2 production PBC:

1. IN_PROGRESS state between DRAFT and COMPLETED so "started but not finished" work orders are observable on a dashboard. WorkOrderService.start(id) performs the transition and publishes a new WorkOrderStartedEvent. cancel() now accepts DRAFT OR IN_PROGRESS (v2 writes nothing to the ledger at start() so there is nothing to undo on cancel).

2. Bill of materials via a new WorkOrderInput child entity — @OneToMany with cascade + orphanRemoval, same shape as SalesOrderLine. Each line carries (lineNo, itemCode, quantityPerUnit, sourceLocationCode). complete() now iterates the inputs in lineNo order and writes one MATERIAL_ISSUE ledger row per line (delta = -(quantityPerUnit × outputQuantity)) BEFORE writing the PRODUCTION_RECEIPT for the output. All in one transaction — a failure anywhere rolls back every prior ledger row AND the status flip. Empty inputs list is legal (the v1 auto-spawn-from-SO path still works unchanged, writing only the PRODUCTION_RECEIPT).

3. Scrap flow for COMPLETED work orders via a new scrap(id, scrapLocationCode, quantity, note) service method. Writes a negative ADJUSTMENT ledger row tagged WO:<code>:SCRAP and publishes a new WorkOrderScrappedEvent. Chose ADJUSTMENT over adding a new SCRAP movement reason to keep the enum stable — the reference-string suffix is the disambiguator. The work order itself STAYS COMPLETED; scrap is a correction on top of a terminal state, not a state change.

complete() now requires IN_PROGRESS (not DRAFT); existing callers must start() first.

api.v1 grows two events (WorkOrderStartedEvent, WorkOrderScrappedEvent) alongside the three that already existed. Since this is additive within a major version, the api.v1 semver contract holds — existing subscribers continue to compile.
Liquibase: 002-production-v2.xml widens the status CHECK and creates production__work_order_input with (work_order_id FK, line_no, item_code, quantity_per_unit, source_location_code) plus a unique (work_order_id, line_no) constraint, a CHECK quantity_per_unit > 0, and the audit columns. ON DELETE CASCADE from the parent.

Unit tests: WorkOrderServiceTest grows from 8 to 18 cases — covers start happy path, start rejection, complete-on-DRAFT rejection, empty-BOM complete, BOM-with-two-lines complete (verifies both MATERIAL_ISSUE deltas AND the PRODUCTION_RECEIPT all fire with the right references), scrap happy path, scrap on non-COMPLETED rejection, scrap with non-positive quantity rejection, cancel-from-IN_PROGRESS, and BOM validation rejects (unknown item, duplicate line_no).

Smoke verified end-to-end against real Postgres:
- Created WO-SMOKE with 2-line BOM (2 paper + 0.5 ink per brochure, output 100).
- Started (DRAFT → IN_PROGRESS, no ledger rows).
- Completed: paper balance 500→300 (MATERIAL_ISSUE -200), ink 200→150 (MATERIAL_ISSUE -50), FG-BROCHURE 0→100 (PRODUCTION_RECEIPT +100). All 3 rows tagged WO:WO-SMOKE.
- Scrapped 7 units: FG-BROCHURE 100→93, ADJUSTMENT -7 tagged WO:WO-SMOKE:SCRAP, work order stayed COMPLETED.
- Auto-spawn: SO-42 confirm still creates WO-FROM-SO-42-L1 as a DRAFT with empty BOM; starting + completing it writes only the PRODUCTION_RECEIPT (zero MATERIAL_ISSUE rows), proving the empty-BOM path is backwards-compatible.
- Negative paths: complete-on-DRAFT 400s, scrap-on-DRAFT 400s, double-start 400s, cancel-from-IN_PROGRESS 200.

240 unit tests, 18 Gradle subprojects.

-
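The ledger arithmetic from the pbc-production v2 entry above (item 2: one MATERIAL_ISSUE per BOM line with delta = -(quantityPerUnit × outputQuantity), then one PRODUCTION_RECEIPT) can be sketched as a pure function over the rows complete() would write. Types and the row shape are hedged stand-ins; the real service writes through InventoryApi.recordMovement inside one @Transactional method.

```kotlin
import java.math.BigDecimal

// Illustrative row-planning for complete(): issues in lineNo order, then the
// receipt, all sharing the WO:<code> reference the smoke test shows.
data class BomLine(val lineNo: Int, val itemCode: String, val quantityPerUnit: BigDecimal)
data class LedgerRow(val reason: String, val itemCode: String, val delta: BigDecimal, val reference: String)

fun completeLedgerRows(
    woCode: String,
    outputItem: String,
    outputQuantity: BigDecimal,
    inputs: List<BomLine>,
): List<LedgerRow> {
    val reference = "WO:$woCode"
    val issues = inputs.sortedBy { it.lineNo }.map {
        LedgerRow("MATERIAL_ISSUE", it.itemCode, (it.quantityPerUnit * outputQuantity).negate(), reference)
    }
    // An empty BOM legally produces only this receipt row (the v1 auto-spawn path).
    return issues + LedgerRow("PRODUCTION_RECEIPT", outputItem, outputQuantity, reference)
}

fun main() {
    // WO-SMOKE: 2 paper + 0.5 ink per brochure, output 100 → -200, -50, +100.
    val rows = completeLedgerRows(
        "WO-SMOKE", "FG-BROCHURE", BigDecimal("100"),
        listOf(BomLine(1, "PAPER", BigDecimal("2")), BomLine(2, "INK", BigDecimal("0.5"))),
    )
    check(rows.size == 3)
    check(rows.all { it.reference == "WO:WO-SMOKE" })
}
```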
Completes the @RequirePermission rollout that started in commit b174cf60. Every non-state-transition endpoint in pbc-inventory (Location CRUD), pbc-orders-sales, and pbc-orders-purchase is now guarded by the pre-declared permission keys from their respective metadata YAMLs. State-transition verbs (confirm/cancel/ship/receive) were annotated in the original P4.3 demo chunk; this one fills in the list/get/create/update gap.

Inventory
- LocationController: list/get/getByCode → inventory.location.read; create → inventory.location.create; update → inventory.location.update; deactivate → inventory.location.deactivate.
- (StockBalanceController.adjust + StockMovementController.record were already annotated with inventory.stock.adjust.)

Orders-sales
- SalesOrderController: list/get/getByCode → orders.sales.read; create → orders.sales.create; update → orders.sales.update. (confirm/cancel/ship were already annotated.)

Orders-purchase
- PurchaseOrderController: list/get/getByCode → orders.purchase.read; create → orders.purchase.create; update → orders.purchase.update. (confirm/cancel/receive were already annotated.)

No new permission keys. Every key this chunk consumes was already declared in the relevant metadata YAML since the respective PBC was first built — catalog + partners already shipped in this state, and the inventory/orders YAMLs declared their read/create/update keys from day one but the controllers hadn't started using them.

Admin happy path still works (bootstrap admin has the wildcard `admin` role, same as after commit b174cf60). 230 unit tests still green — annotations are purely additive, and no existing test hits the @RequirePermission path since service-level tests bypass the controller entirely.

Combined with b174cf60, the framework now has full @RequirePermission coverage on every PBC controller except pbc-identity's user admin (which is a separate permission surface — user/role administration has its own security story).
A minimum-privilege role like "sales-clerk" can now be granted exactly `orders.sales.read` + `orders.sales.create` + `partners.partner.read` and NOT accidentally see catalog admin, inventory movements, finance journals, or contact PII.
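That minimum-privilege property can be sketched as an annotation plus a grant check. A hedged illustration: the real framework enforces this with a Spring AOP aspect over annotated controller methods and grants stored in metadata__role_permission; the evaluator below is a plain function so the "sales-clerk sees orders but not finance" claim is checkable in isolation.

```kotlin
// Illustrative @RequirePermission shape and wildcard-admin evaluation.
// Names mirror the changelog; the aspect wiring is not shown.
@Target(AnnotationTarget.FUNCTION)
annotation class RequirePermission(val key: String)

class PermissionEvaluator(private val grants: Map<String, Set<String>>) {
    /** Bootstrap `admin` is a wildcard role; everyone else needs an explicit grant. */
    fun allowed(role: String, permissionKey: String): Boolean =
        role == "admin" || permissionKey in (grants[role] ?: emptySet())
}

fun main() {
    val eval = PermissionEvaluator(
        mapOf("sales-clerk" to setOf("orders.sales.read", "orders.sales.create", "partners.partner.read")),
    )
    check(eval.allowed("sales-clerk", "orders.sales.create"))
    check(!eval.allowed("sales-clerk", "finance.journal.read")) // no accidental finance access
    check(eval.allowed("admin", "inventory.stock.adjust"))      // wildcard role
}
```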
-
Closes the P4.3 permission-rollout gap for the two oldest PBCs that were never updated when the @RequirePermission aspect landed. The catalog and partners metadata YAMLs already declared all the needed permission keys — the controllers just weren't consuming them.

Catalog
- ItemController: list/get/getByCode → catalog.item.read; create → catalog.item.create; update → catalog.item.update; deactivate → catalog.item.deactivate.
- UomController: list/get/getByCode → catalog.uom.read; create → catalog.uom.create; update → catalog.uom.update.

Partners (including the PII boundary)
- PartnerController: list/get/getByCode → partners.partner.read; create → partners.partner.create; update → partners.partner.update. (deactivate was already annotated in the P4.3 demo chunk.)
- AddressController: all five verbs annotated with partners.address.{read,create,update,delete}.
- ContactController: all five verbs annotated with partners.contact.{read,create,update,deactivate}.
- The "TODO once P4.3 lands" note in the class KDoc was removed; P4.3 is live and the annotations are now in place. This is the PII boundary that CLAUDE.md flagged as incomplete after the original P4.3 rollout.

No new permission keys were added — all 14 keys this touches were already declared in pbc-catalog/catalog.yml and pbc-partners/partners.yml when those PBCs were first built. The metadata loader has been serving them to the SPA/OpenAPI/MCP introspection endpoint since day one; this change just starts enforcing them at the controller.

Smoke-tested end-to-end against real Postgres
- Fresh DB + fresh boot.
- Admin happy path (bootstrap admin has wildcard `admin` role):
    GET  /api/v1/catalog/items → 200
    POST /api/v1/catalog/items → 201 (SMOKE-1 created)
    GET  /api/v1/catalog/uoms → 200
    POST /api/v1/partners/partners → 201 (SMOKE-P created)
    POST /api/v1/partners/.../contacts → 201 (contact created)
    GET  /api/v1/partners/.../contacts → 200 (PII read)
- Anonymous negative path (no Bearer token):
    GET /api/v1/catalog/items → 401
    GET /api/v1/partners/.../contacts → 401
- 230 unit tests still green (annotations are purely additive; no existing test hit the @RequirePermission path since the service-level tests bypass the controller entirely).

Why this is a genuine security improvement
- Before: any authenticated user (including the eventual "Alice from reception", the contractor's read-only service account, the AI-agent MCP client) could read PII, create partners, and create catalog items.
- After: those operations require explicit role-permission grants through metadata__role_permission. The bootstrap admin still has unconditional access via the wildcard admin role, so nothing in a fresh deployment is broken; but a real operator granting minimum-privilege roles now has the columns they need in the database to do it.
- The contact PII boundary in particular is GDPR-relevant: before this change, any logged-in user could enumerate every contact's name + email + phone. After, only users with partners.contact.read can see them.

What's still NOT annotated
- pbc-inventory's Location create/update/deactivate endpoints (only stock.adjust and movement.create are annotated).
- pbc-orders-sales and pbc-orders-purchase list/get/create/update endpoints (only the state-transition verbs are annotated).
- pbc-identity's user admin endpoints.

These are the next cleanup chunk. This one stays focused on catalog + partners because those were the two PBCs that predated P4.3 entirely and hadn't been touched since.
-
The framework's eighth PBC and the first one that's NOT order- or master-data-shaped. Work orders are about *making things*, which is the reason the printing-shop reference customer exists in the first place. With this PBC in place the framework can express the full buy-sell-make loop end-to-end.

What landed (new module pbc/pbc-production/)
- WorkOrder entity (production__work_order): code, output_item_code, output_quantity, status (DRAFT|COMPLETED|CANCELLED), due_date (display-only), source_sales_order_code (nullable — work orders can be either auto-spawned from a confirmed SO or created manually), ext.
- WorkOrderJpaRepository with existsBySourceSalesOrderCode / findBySourceSalesOrderCode for the auto-spawn dedup.
- WorkOrderService.create / complete / cancel:
  • create validates the output item via CatalogApi (same seam SalesOrderService and PurchaseOrderService use), rejects non-positive quantities, publishes WorkOrderCreatedEvent.
  • complete(outputLocationCode) credits finished goods to the named location via InventoryApi.recordMovement with reason=PRODUCTION_RECEIPT (added in commit c52d0d59) and reference="WO:<order_code>", then flips status to COMPLETED, then publishes WorkOrderCompletedEvent — all in the same @Transactional method.
  • cancel only allowed from DRAFT (no un-producing finished goods); publishes WorkOrderCancelledEvent.
- SalesOrderConfirmedSubscriber (@PostConstruct → EventBus.subscribe(SalesOrderConfirmedEvent::class.java, ...)): walks the confirmed sales order's lines via SalesOrdersApi (NOT by importing pbc-orders-sales) and calls WorkOrderService.create for each line. Coded as one bean with one subscription — matches pbc-finance's one-bean-per-subject pattern.
  • Idempotent on source sales order code — if any work order already exists for the SO, the whole spawn is a no-op.
  • Tolerant of a missing SO (defensive against a future async bus that could deliver the confirm event after the SO has vanished).
  • The WO code convention: WO-FROM-<so_code>-L<lineno>, e.g. WO-FROM-SO-2026-0001-L1.
- REST controller /api/v1/production/work-orders: list, get, by-code, create, complete, cancel — each annotated with @RequirePermission. Four permission keys declared in the production.yml metadata: read / create / complete / cancel.
- CompleteWorkOrderRequest: single-arg DTO uses the @JsonCreator(mode=PROPERTIES) + @param:JsonProperty trick that already bit ShipSalesOrderRequest and ReceivePurchaseOrderRequest; cross-referenced in the KDoc so the third instance doesn't need re-discovery.
- distribution/.../pbc-production/001-production-init.xml: CREATE TABLE with CHECK on status + CHECK on qty>0 + GIN on ext + the usual indexes. NEITHER output_item_code NOR source_sales_order_code is a foreign key (cross-PBC reference policy — guardrail #9).
- settings.gradle.kts + distribution/build.gradle.kts: registers the new module and adds it to the distribution dependency list.
- master.xml: includes the new changelog in dependency order, after pbc-finance.

New api.v1 surface: org.vibeerp.api.v1.event.production.*
- WorkOrderCreatedEvent, WorkOrderCompletedEvent, WorkOrderCancelledEvent — sealed under WorkOrderEvent, aggregateType="production.WorkOrder". Same pattern as the order events, so any future consumer (finance revenue recognition, warehouse put-away dashboard, a customer plug-in that needs to react to "work finished") subscribes through the public typed-class overload with no dependency on pbc-production.

Unit tests (13 new, 217 → 230 total)
- WorkOrderServiceTest (9 tests): create dedup, positive quantity check, catalog seam, happy-path create with event assertion, complete rejects non-DRAFT, complete happy path with InventoryApi.recordMovement assertion + event assertion, cancel from DRAFT, cancel rejects COMPLETED.
- SalesOrderConfirmedSubscriberTest (5 tests): subscription registration count, spawns N work orders for N SO lines with correct code convention, idempotent when WOs already exist, no-op on missing SO, and a listener-routing test that captures the EventListener instance and verifies it forwards to the right service method.

End-to-end smoke verified against real Postgres
- Fresh DB, fresh boot. Both OrderEventSubscribers (pbc-finance) and SalesOrderConfirmedSubscriber (pbc-production) log their subscription registration before the first HTTP call.
- Seeded two items (BROCHURE-A, BROCHURE-B), a customer, and a finished-goods location (WH-FG).
- Created a 2-line sales order (SO-WO-1), confirmed it.
  → Produced ONE orders_sales.SalesOrder outbox row.
  → Produced ONE AR POSTED finance__journal_entry for 1000 USD (500 × 1 + 250 × 2 — the pbc-finance consumer still works).
  → Produced TWO draft work orders auto-spawned from the SO lines: WO-FROM-SO-WO-1-L1 (BROCHURE-A × 500) and WO-FROM-SO-WO-1-L2 (BROCHURE-B × 250), both with source_sales_order_code=SO-WO-1.
- Completed WO1 to WH-FG:
  → Produced a PRODUCTION_RECEIPT ledger row for BROCHURE-A delta=500 reference="WO:WO-FROM-SO-WO-1-L1".
  → inventory__stock_balance now has BROCHURE-A = 500 at WH-FG.
  → Flipped status to COMPLETED.
- Cancelled WO2 → CANCELLED.
- Created a manual WO-MANUAL-1 with no source SO → succeeds; demonstrates the "operator creates a WO to build inventory ahead of demand" path.
- platform__event_outbox ends with 6 rows, all DISPATCHED:
    orders_sales.SalesOrder  SO-WO-1
    production.WorkOrder     WO-FROM-SO-WO-1-L1 (created)
    production.WorkOrder     WO-FROM-SO-WO-1-L2 (created)
    production.WorkOrder     WO-FROM-SO-WO-1-L1 (completed)
    production.WorkOrder     WO-FROM-SO-WO-1-L2 (cancelled)
    production.WorkOrder     WO-MANUAL-1 (created)

Why this chunk was the right next move
- pbc-finance was a PASSIVE consumer — it only wrote derived reporting state.
pbc-production is the first ACTIVE consumer: it creates new aggregates with their own state machines and their own cross-PBC writes in reaction to another PBC's events. This is a meaningfully harder test of the event-driven integration story, and it passes end-to-end.
- "One ledger, three callers" is now real: sales shipments, purchase receipts, AND production receipts all feed the same inventory__stock_movement ledger through the same InventoryApi.recordMovement facade. The facade has proven stable under three very different callers.
- The framework now expresses the basic ERP trinity: buy (purchase orders), sell (sales orders), make (work orders). That's the shape every real manufacturing customer needs, and it's done without any PBC importing another.

What's deliberately NOT in v1
- No bill of materials. complete() only credits finished goods; it does NOT issue raw materials. A shop floor that needs to consume 4 sheets of paper to produce 1 brochure does it manually via POST /api/v1/inventory/movements with reason=MATERIAL_ISSUE (added in commit c52d0d59). A proper BOM lands as WorkOrderInput lines in a future chunk.
- No IN_PROGRESS state. complete() goes DRAFT → COMPLETED in one step. A real shop floor needs "started but not finished" visibility; that's the next iteration.
- No routings, operations, machine assignments, or due-date enforcement. due_date is display-only.
- No "scrap defective output" flow for a COMPLETED work order. cancel refuses from COMPLETED; the fix requires a new MovementReason and a new event, not a special-case method on the service.

-
Extends pbc-inventory's MovementReason enum with the two reasons a production-style PBC needs to record stock movements through the existing InventoryApi.recordMovement facade. No new endpoint, no new database column — just two new enum values, two new sign-validation rules, and four new tests.

Why this lands BEFORE pbc-production
- It's the smallest self-contained change that unblocks any future production-related code (the framework's planned pbc-production, a customer plug-in's manufacturing module, or even an ad-hoc operator script). Each of those callers can now record "consume raw material" / "produce finished good" through the same primitive that already serves sales shipments and purchase receipts.
- It validates the "one ledger, many callers" property the architecture spec promised. Adding a new movement reason takes zero schema changes (the column is varchar) and zero plug-in changes (the api.v1 facade takes the reason as a string and delegates to MovementReason.valueOf inside the adapter). The enum lives entirely inside pbc-inventory.

What changed
- StockMovement.kt: the enum gains MATERIAL_ISSUE (Δ ≤ 0) and PRODUCTION_RECEIPT (Δ ≥ 0), with KDoc explaining why each was added and how they fit the "one primitive for every direction" story.
- StockMovementService.validateSign: PRODUCTION_RECEIPT joins the must-be-non-negative bucket alongside RECEIPT, PURCHASE_RECEIPT, and TRANSFER_IN; MATERIAL_ISSUE joins the must-be-non-positive bucket alongside ISSUE, SALES_SHIPMENT, and TRANSFER_OUT.
- 4 new unit tests:
  • record rejects a positive delta on MATERIAL_ISSUE
  • record rejects a negative delta on PRODUCTION_RECEIPT
  • record accepts a positive PRODUCTION_RECEIPT (happy path, new balance row at the receiving location)
  • record accepts a negative MATERIAL_ISSUE (decrements an existing balance from 1000 → 800)
- Total tests: 213 → 217.
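The two-bucket sign rule described above can be sketched as follows — a minimal, self-contained sketch, not the real StockMovementService (which also writes the ledger row and balance update); the enum members are the ones named in this entry.

```kotlin
import java.math.BigDecimal

// Sketch of the MovementReason sign buckets described above.
enum class MovementReason {
    RECEIPT, PURCHASE_RECEIPT, TRANSFER_IN, PRODUCTION_RECEIPT, // delta must be >= 0
    ISSUE, SALES_SHIPMENT, TRANSFER_OUT, MATERIAL_ISSUE         // delta must be <= 0
}

fun validateSign(reason: MovementReason, delta: BigDecimal) {
    when (reason) {
        MovementReason.RECEIPT, MovementReason.PURCHASE_RECEIPT,
        MovementReason.TRANSFER_IN, MovementReason.PRODUCTION_RECEIPT ->
            require(delta.signum() >= 0) {
                "movement reason $reason requires a non-negative delta (got $delta)"
            }
        MovementReason.ISSUE, MovementReason.SALES_SHIPMENT,
        MovementReason.TRANSFER_OUT, MovementReason.MATERIAL_ISSUE ->
            require(delta.signum() <= 0) {
                "movement reason $reason requires a non-positive delta (got $delta)"
            }
    }
}
```

Because the `when` is exhaustive over the enum, adding a future reason (e.g. a return-path one) forces a compile-time decision about which bucket it belongs to.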
Smoke test against real Postgres
- Booted on a fresh DB; no schema migration needed because the `reason` column is varchar(32), already wide enough.
- Seeded an item RAW-PAPER, an item FG-WIDGET, and a location WH-PROD via the existing endpoints.
- POST /api/v1/inventory/movements with reason=RECEIPT for 1000 raw paper → balance row at 1000.
- POST /api/v1/inventory/movements with reason=MATERIAL_ISSUE delta=-200 reference="WO:WO-EVT-1" → balance becomes 800, ledger row written.
- POST /api/v1/inventory/movements with reason=PRODUCTION_RECEIPT delta=50 reference="WO:WO-EVT-1" → balance row at 50 for FG-WIDGET, ledger row written.
- Negative test: POST PRODUCTION_RECEIPT with delta=-1 → 400 Bad Request "movement reason PRODUCTION_RECEIPT requires a non-negative delta (got -1)" — the new sign rule fires.
- Final ledger has 3 rows (RECEIPT, MATERIAL_ISSUE, PRODUCTION_RECEIPT); final balance has FG-WIDGET=50 and RAW-PAPER=800 — the math is correct.

What's deliberately NOT in this chunk
- No pbc-production yet. That's the next chunk; this is just the foundation that lets it (or any other production-ish caller) write to the ledger correctly without needing changes to api.v1 or pbc-inventory ever again.
- No new return-path reasons (RETURN_FROM_CUSTOMER, RETURN_TO_SUPPLIER) — those land when the returns flow does.
- No enforcement of the "WO:" reference convention — it's documented in the KDoc on `reference`, not enforced anywhere. The v0.16/v0.17 convention "<source>:<code>" continues unchanged.

-
The minimal pbc-finance that landed in commit bf090c2e only reacted to *ConfirmedEvent. This change wires up the rest of the order lifecycle (ship/receive → SETTLED, cancel → REVERSED) so the journal entry reflects what actually happened to the order, not just the moment it was confirmed.

JournalEntryStatus (new enum + new column)
- POSTED — created from a confirm event (existing behaviour)
- SETTLED — promoted by SalesOrderShippedEvent / PurchaseOrderReceivedEvent
- REVERSED — promoted by SalesOrderCancelledEvent / PurchaseOrderCancelledEvent
- The status field is intentionally a separate axis from JournalEntryType: type tells you "AR or AP", status tells you "where in its lifecycle".

distribution/.../pbc-finance/002-finance-status.xml
- ALTER TABLE adds `status varchar(16) NOT NULL DEFAULT 'POSTED'`, a CHECK constraint mirroring the enum values, and an index on status for the new filter endpoint. The DEFAULT 'POSTED' covers any existing rows on an upgraded environment without a backfill step.

JournalEntryService — four new methods, all idempotent
- settleFromSalesShipped(event) → POSTED → SETTLED for AR
- settleFromPurchaseReceived(event) → POSTED → SETTLED for AP
- reverseFromSalesCancelled(event) → POSTED → REVERSED for AR
- reverseFromPurchaseCancelled(event) → POSTED → REVERSED for AP

Each runs through a private settleByOrderCode/reverseByOrderCode helper that:
1. Looks up the row by order_code (new repo method findFirstByOrderCode). If absent → no-op (e.g. cancel from DRAFT means no *ConfirmedEvent was ever published, so no journal entry exists; this is the most common cancel path).
2. If the row is already in the destination status → no-op (idempotent under at-least-once delivery, e.g. outbox replay or a future Kafka retry).
3. Refuses to overwrite a contradictory terminal status — a SETTLED row cannot be REVERSED, and vice versa.
The producer's state machine forbids cancel-from-shipped/received, so reaching here implies an upstream contract violation; it is logged at WARN and the row is left alone.

OrderEventSubscribers — six subscriptions per @PostConstruct
- All six order events from api.v1.event.orders.* are subscribed via the typed-class EventBus.subscribe(eventType, listener) overload, the same public API a plug-in would use. Boot log line updated: "pbc-finance subscribed to 6 order events".

JournalEntryController — new ?status= filter
- GET /api/v1/finance/journal-entries?status=POSTED|SETTLED|REVERSED surfaces the partition. The existing ?orderCode= and ?type= filters are unchanged. Read permission is still finance.journal.read.

12 new unit tests (213 total, was 201)
- JournalEntryServiceTest: settle/reverse for AR + AP, idempotency on duplicate destination status, refusal to overwrite a contradictory terminal status, no-op on a missing row, default POSTED on new entries.
- OrderEventSubscribersTest: asserts all SIX subscriptions registered, plus one new test that captures all four lifecycle listeners and verifies they forward to the correct service methods.

End-to-end smoke (real Postgres, fresh DB)
- Booted with the new DDL applied (status column + CHECK + index) on an empty DB. The OrderEventSubscribers @PostConstruct line confirms 6 subscriptions registered before the first HTTP call.
- Five lifecycle scenarios driven via REST:
    PO-FULL:          confirm + receive → AP SETTLED  amount=50.00
    SO-FULL:          confirm + ship    → AR SETTLED  amount= 1.00
    SO-REVERSE:       confirm + cancel  → AR REVERSED amount= 1.00
    PO-REVERSE:       confirm + cancel  → AP REVERSED amount=50.00
    SO-DRAFT-CANCEL:  cancel only       → NO ROW (no confirm event)
- finance__journal_entry returns exactly 4 rows (the 5th scenario correctly produces nothing) and the ?status filters all return the expected partition (POSTED=0, SETTLED=2, REVERSED=2).

What's still NOT in pbc-finance
- Still no debit/credit legs, no chart of accounts, no period close, no double-entry invariant.
This is the v0.17 minimal seed; the real P5.9 build promotes it into a real GL.
- No reaction to "settle then reverse" or "reverse then settle" other than the WARN-and-leave-alone defensive path. A real GL would write a separate compensating journal entry; the minimal PBC just keeps the row immutable once it leaves POSTED.
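The three-step settle/reverse helper described in this entry can be sketched as follows — a hypothetical, dependency-free sketch: the entity, repository lookup, and logger are stand-ins for the real JPA and SLF4J pieces, and `promote` plays the role of the private settleByOrderCode/reverseByOrderCode helpers.

```kotlin
// Sketch of the idempotent POSTED → SETTLED/REVERSED promotion logic.
enum class JournalEntryStatus { POSTED, SETTLED, REVERSED }

data class JournalEntry(val orderCode: String, var status: JournalEntryStatus)

class JournalEntryLifecycle(
    private val findFirstByOrderCode: (String) -> JournalEntry?, // stand-in for the repo method
    private val warn: (String) -> Unit,                          // stand-in for the logger
) {
    /** Returns true only when the row actually moved to [target]. */
    fun promote(orderCode: String, target: JournalEntryStatus): Boolean {
        // Step 1: absent row → no-op (e.g. cancel from DRAFT, no confirm event ever fired).
        val entry = findFirstByOrderCode(orderCode) ?: return false
        return when (entry.status) {
            // Step 2: already in the destination status → no-op under at-least-once delivery.
            target -> false
            // Happy path: POSTED rows move to the requested terminal status.
            JournalEntryStatus.POSTED -> { entry.status = target; true }
            // Step 3: contradictory terminal status → WARN, leave the row alone.
            else -> {
                warn("journal entry for $orderCode is ${entry.status}, refusing $target")
                false
            }
        }
    }
}
```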
-
The framework's seventh PBC, and the first one whose ENTIRE purpose is to react to events published by other PBCs. It validates the *consumer* side of the cross-PBC event seam that was wired up in commit 67406e87 (event-driven cross-PBC integration). With pbc-finance in place, the bus now has both producers and consumers in real PBC business logic — not just the wildcard EventAuditLogSubscriber that ships with platform-events.

What landed (new module pbc/pbc-finance/, ~480 lines including tests)
- JournalEntry entity (finance__journal_entry): id, code (= originating event UUID), type (AR|AP), partner_code, order_code, amount, currency_code, posted_at, ext. The unique index on `code` is the durability anchor for idempotent event delivery; the service ALSO existsByCode-checks before insert so that duplicate-event handling is a clean no-op rather than a constraint-violation exception.
- JournalEntryJpaRepository with existsByCode + findByOrderCode + findByType (the read-side filters used by the controller).
- JournalEntryService.recordSalesConfirmed / recordPurchaseConfirmed take a SalesOrderConfirmedEvent / PurchaseOrderConfirmedEvent and write the corresponding AR/AP row. @Transactional with Propagation.REQUIRED, so the listener joins the publisher's TX when the bus delivers synchronously (today) and creates a fresh one if a future async bus delivers from a worker thread. The KDoc explains why REQUIRED is the correct default and why REQUIRES_NEW would be wrong here.
- OrderEventSubscribers @Component with a @PostConstruct that calls EventBus.subscribe(SalesOrderConfirmedEvent::class.java, ...) and EventBus.subscribe(PurchaseOrderConfirmedEvent::class.java, ...) once at boot. Uses the public typed-class subscribe overload — NOT the platform-internal subscribeToAll wildcard helper. This is the API surface plug-ins will also use.
- JournalEntryController: read-only REST under /api/v1/finance/journal-entries with @RequirePermission "finance.journal.read".
Filter params: ?orderCode= and ?type=. Deliberately no POST endpoint — entries are derived state.
- finance.yml metadata declaring 1 entity, 1 permission, 1 menu.
- Liquibase changelog at distribution/.../pbc-finance/001-finance-init.xml + the master.xml include + the distribution/build.gradle.kts dep.
- settings.gradle.kts: registers :pbc:pbc-finance.
- 9 new unit tests (6 for JournalEntryService, 3 for OrderEventSubscribers) — including idempotency, the dedup-by-event-id contract, and listener-forwarding correctness via a slot-captured EventListener invocation. Total tests: 192 → 201; modules: 16 → 17.

Why this is the right shape
- pbc-finance has zero source dependency on pbc-orders-sales, pbc-orders-purchase, pbc-partners, or pbc-catalog. The Gradle build refuses any cross-PBC dependency at configuration time — pbc-finance only declares api/api-v1, platform-persistence, and platform-security. The events it consumes live in api.v1.event.orders; the partner/item references it stores are opaque string codes.
- Subscribers go through EventBus.subscribe(eventType, listener), the public typed-class overload from api.v1.event.EventBus. Plug-ins use exactly this API; this PBC proves the API works end-to-end from a real consumer.
- The consumer is idempotent on the producer's event id, so at-least-once delivery (outbox replay, a future Kafka retry) cannot create duplicate journal entries. This makes the consumer correct under both the current synchronous bus and any future async / out-of-process bus.
- Read-only REST API: derived state should not be writable from the outside. Adjustments and reversals will land later as their own command verbs when the real P5.9 finance build needs them, not as a generic create endpoint.

End-to-end smoke verified against real Postgres
- Booted on a fresh DB; the OrderEventSubscribers @PostConstruct log line confirms the subscriptions registered before any HTTP traffic.
- Seeded an item, supplier, customer, and location (existing PBCs).
- Created PO PO-FIN-1 (5000 × 0.04 = 200 USD) → confirmed → GET /api/v1/finance/journal-entries returns ONE row:
    type=AP partner=SUP-PAPER order=PO-FIN-1 amount=200.0000 USD
- Created SO SO-FIN-1 (50 × 0.10 = 5 USD) → confirmed → GET /api/v1/finance/journal-entries now returns TWO rows:
    type=AR partner=CUST-ACME order=SO-FIN-1 amount=5.0000 USD (plus the AP row from above)
- GET /api/v1/finance/journal-entries?orderCode=PO-FIN-1 → only the AP row.
- GET /api/v1/finance/journal-entries?type=AR → only the AR row.
- platform__event_outbox shows 2 rows (one per confirm), both DISPATCHED; finance__journal_entry shows 2 rows.
- The journal-entry code column equals the originating event UUID, proving the dedup contract is wired.

What this is NOT (yet)
- Not a real general ledger. No debit/credit legs, no chart of accounts, no period close, no double-entry invariant. P5.9 promotes this minimal seed into a real finance PBC.
- No reaction to ship/receive/cancel events yet — only confirm. Real revenue recognition (which happens at ship time under most accounting standards) lands with the P5.9 build.
- No outbound api.v1.ext facade. pbc-finance does not (yet) expose itself to other PBCs; it is a pure consumer. When pbc-production needs to know "did this order's invoice clear", that facade gets added.
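The dedup-by-event-id contract above can be sketched as follows — a hypothetical sketch with an in-memory map standing in for the JPA repository and a simplified event type standing in for the real api.v1 class; the point is only the existsByCode-before-insert shape.

```kotlin
import java.math.BigDecimal
import java.util.UUID

// Stand-in for the api.v1 confirm event: carries the producer's event UUID.
data class SalesOrderConfirmed(val eventId: UUID, val orderCode: String, val amount: BigDecimal)

class JournalEntryRecorder {
    // Stand-in for the finance__journal_entry table keyed by its unique `code` column.
    private val store = mutableMapOf<String, SalesOrderConfirmed>()

    fun existsByCode(code: String): Boolean = store.containsKey(code)

    /** Insert keyed by the originating event UUID: a redelivered event is a clean no-op. */
    fun recordSalesConfirmed(event: SalesOrderConfirmed): Boolean {
        val code = event.eventId.toString()
        if (existsByCode(code)) return false // at-least-once replay → no duplicate row
        store[code] = event
        return true
    }

    fun count(): Int = store.size
}
```

In the real service the unique index on `code` backstops this check, so even a race between two deliveries degrades to a constraint violation rather than a duplicate entry.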
-
The event bus and transactional outbox have existed since P1.7, but no real PBC business logic was publishing through them. This change closes that loop end-to-end:

api.v1.event.orders (new public surface)
- SalesOrderConfirmedEvent / SalesOrderShippedEvent / SalesOrderCancelledEvent — sealed under SalesOrderEvent, aggregateType = "orders_sales.SalesOrder"
- PurchaseOrderConfirmedEvent / PurchaseOrderReceivedEvent / PurchaseOrderCancelledEvent — sealed under PurchaseOrderEvent, aggregateType = "orders_purchase.PurchaseOrder"
- Events live in api.v1 (not inside the PBCs) so other PBCs and customer plug-ins can subscribe without importing the producing PBC — that would violate guardrail #9.

pbc-orders-sales / pbc-orders-purchase
- SalesOrderService and PurchaseOrderService now inject EventBus and publish a typed event from each state-changing method (confirm, ship/receive, cancel). The publish runs INSIDE the same @Transactional method as the JPA mutation and the InventoryApi.recordMovement ledger writes — EventBusImpl uses Propagation.MANDATORY, so a publish outside a transaction fails loudly. A failure on any line rolls back the status change AND every ledger row AND the would-have-been outbox row.
- 6 new unit tests (3 per service) mockk the EventBus and verify each transition publishes exactly one matching event with the expected fields. Total tests: 186 → 192.

End-to-end smoke verified against real Postgres
- Created a supplier, a customer, item PAPER-A4, and location WH-MAIN.
- Drove a PO and an SO through the full state machine plus a cancel of each. 6 events fired:
    orders_purchase.PurchaseOrder × 3 (confirm + receive + cancel)
    orders_sales.SalesOrder × 3 (confirm + ship + cancel)
- The wildcard EventAuditLogSubscriber logged each one at INFO level to /tmp/vibe-erp-boot.log with the [event-audit] tag.
- platform__event_outbox shows 6 rows, all flipped from PENDING to DISPATCHED by the OutboxPoller within seconds.
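The sealed-event shape described under "api.v1.event.orders" can be sketched as follows — a hypothetical sketch: the real api.v1 DomainEvent carries more fields (event id, occurred-at timestamp, payload), and only the sales-order side is shown.

```kotlin
import java.util.UUID

// Minimal stand-in for the api.v1 DomainEvent contract.
interface DomainEvent {
    val aggregateType: String // "<pbc>.<aggregate>" convention
    val aggregateId: UUID
}

// Sealed parent fixes the aggregateType for every sales-order event.
sealed class SalesOrderEvent : DomainEvent {
    override val aggregateType: String = "orders_sales.SalesOrder"
}

data class SalesOrderConfirmedEvent(override val aggregateId: UUID, val orderCode: String) : SalesOrderEvent()
data class SalesOrderShippedEvent(override val aggregateId: UUID, val orderCode: String) : SalesOrderEvent()
data class SalesOrderCancelledEvent(override val aggregateId: UUID, val orderCode: String) : SalesOrderEvent()
```

Sealing the hierarchy lets a subscriber `when` over every lifecycle transition exhaustively, while the fixed aggregateType string supports the topic-string subscribe overload for cross-classloader consumers.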
- The publish-inside-the-ledger-transaction guarantee means a subscriber that reads inventory__stock_movement on event receipt is guaranteed to see the matching SALES_SHIPMENT or PURCHASE_RECEIPT rows. This is what the architecture spec section 9 promised, and it now delivers.

Why this is the right shape
- Other PBCs (production, finance) and customer plug-ins can now react to "an order was confirmed/shipped/received/cancelled" without ever importing pbc-orders-* internals. The event classes live in api.v1, the only stable contract surface.
- The aggregateType strings ("orders_sales.SalesOrder", "orders_purchase.PurchaseOrder") match the <pbc>.<aggregate> convention documented on DomainEvent.aggregateType, so a cross-classloader subscriber can use the topic-string subscribe overload without holding the concrete Class<E>.
- The bus's outbox row is the durability anchor for the future Kafka/NATS bridge: switching from in-process delivery to cross-process delivery will require zero changes to either PBC's publish call.

-
The buying-side mirror of pbc-orders-sales. Adds the 6th real PBC and closes the loop: the framework now does both directions of the inventory flow through the same `InventoryApi.recordMovement` facade. Buy stock with a PO that hits RECEIVED, ship stock with an SO that hits SHIPPED; both feed the same `inventory__stock_movement` ledger.

What landed
-----------

* New Gradle subproject `pbc/pbc-orders-purchase` (16 modules total now). Same dependency set as pbc-orders-sales, same architectural enforcement — no direct dependency on any other PBC; cross-PBC references go through `api.v1.ext.<pbc>` facades at runtime.
* Two JPA entities mirroring SalesOrder / SalesOrderLine:
  - `PurchaseOrder` (header) — code, partner_code (varchar, NOT a UUID FK), status enum DRAFT/CONFIRMED/RECEIVED/CANCELLED, order_date, expected_date (nullable, the supplier's promised delivery date), currency_code, total_amount, ext jsonb.
  - `PurchaseOrderLine` — purchase_order_id FK, line_no, item_code, quantity, unit_price, currency_code. Same shape as the sales order line; the api.v1 facade reuses `SalesOrderLineRef` rather than declaring a duplicate type.
* `PurchaseOrderService.create` performs three cross-PBC validations in one transaction:
  1. PartnersApi.findPartnerByCode → reject if null.
  2. The partner's `type` must be SUPPLIER or BOTH (a CUSTOMER-only partner cannot be the supplier of a purchase order — the mirror of the sales-order rule that rejects SUPPLIER-only partners as customers).
  3. CatalogApi.findItemByCode for EVERY line.
  Then it validates: at least one line, no duplicate line numbers, positive quantity, non-negative price, currency matches the header. The header total is RECOMPUTED from the lines (the caller's value is ignored — never trust a financial aggregate sent over the wire).
* State machine enforced by `confirm()`, `cancel()`, and `receive()`:
  - DRAFT → CONFIRMED (confirm)
  - DRAFT → CANCELLED (cancel)
  - CONFIRMED → CANCELLED (cancel before receipt)
  - CONFIRMED → RECEIVED (receive — increments inventory)
  - RECEIVED → × (terminal; cancellation requires a return-to-supplier flow)
* `receive(id, receivingLocationCode)` walks every line and calls `inventoryApi.recordMovement(... +line.quantity reason="PURCHASE_RECEIPT" reference="PO:<order_code>")`. The whole operation runs in ONE transaction, so a failure on any line rolls back EVERY line's already-written movement AND the order status change. The customer cannot end up with "5 of 7 lines received, status still CONFIRMED, ledger half-written".
* New `POST /api/v1/orders/purchase-orders/{id}/receive` endpoint with body `{"receivingLocationCode": "WH-MAIN"}`, gated by `orders.purchase.receive`. The single-arg DTO has the same Jackson `@JsonCreator(mode = PROPERTIES)` workaround as `ShipSalesOrderRequest` (the trap is documented in the class KDoc with a back-reference to ShipSalesOrderRequest).
* Confirm/cancel/receive endpoints carry `@RequirePermission` annotations (`orders.purchase.confirm`, `orders.purchase.cancel`, `orders.purchase.receive`). All three keys are declared in the new `orders-purchase.yml` metadata.
* New api.v1 facade `org.vibeerp.api.v1.ext.orders.PurchaseOrdersApi` + `PurchaseOrderRef`. Reuses the existing `SalesOrderLineRef` type for the line shape — buying and selling lines carry the same fields, so duplicating the ref type would be busywork.
* `PurchaseOrdersApiAdapter` — the sixth `*ApiAdapter`, after Identity, Catalog, Partners, Inventory, and SalesOrders.
* `orders-purchase.yml` metadata declaring 2 entities, 6 permission keys, and 1 menu entry under "Purchasing".
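The transition table above can be sketched as a data-driven map — a hypothetical sketch (`PoStatus`, `transition`, and the verb strings are illustrative names; the real service enforces this inside @Transactional methods and returns proper 400 responses).

```kotlin
// Sketch of the PO state machine: verb → (current status → next status).
enum class PoStatus { DRAFT, CONFIRMED, RECEIVED, CANCELLED }

val allowedTransitions: Map<String, Map<PoStatus, PoStatus>> = mapOf(
    "confirm" to mapOf(PoStatus.DRAFT to PoStatus.CONFIRMED),
    "cancel" to mapOf(
        PoStatus.DRAFT to PoStatus.CANCELLED,
        PoStatus.CONFIRMED to PoStatus.CANCELLED, // cancel before receipt
    ),
    "receive" to mapOf(PoStatus.CONFIRMED to PoStatus.RECEIVED),
    // RECEIVED is terminal: no verb maps out of it.
)

fun transition(current: PoStatus, verb: String): PoStatus =
    allowedTransitions.getValue(verb)[current]
        ?: throw IllegalStateException("cannot $verb a $current purchase order")
```

A table like this keeps every forbidden edge (re-receive, cancel-after-receipt) an explicit lookup miss rather than a scattered set of if-checks.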
End-to-end smoke test (the full demo loop)
------------------------------------------

Reset Postgres, booted the app, ran:

* Login as admin
* POST /catalog/items → PAPER-A4
* POST /partners → SUP-PAPER (SUPPLIER)
* POST /inventory/locations → WH-MAIN
* GET /inventory/balances?itemCode=PAPER-A4 → [] (no stock)
* POST /orders/purchase-orders → PO-2026-0001 for 5000 sheets @ $0.04 = total $200.00 (recomputed from the line)
* POST /purchase-orders/{id}/confirm → status CONFIRMED
* POST /purchase-orders/{id}/receive body={"receivingLocationCode":"WH-MAIN"} → status RECEIVED
* GET /inventory/balances?itemCode=PAPER-A4 → quantity=5000
* GET /inventory/movements?itemCode=PAPER-A4 → PURCHASE_RECEIPT delta=5000 ref=PO:PO-2026-0001

Then the FULL loop with the sales side from the previous chunk:

* POST /partners → CUST-ACME (CUSTOMER)
* POST /orders/sales-orders → SO-2026-0001 for 50 sheets
* confirm + ship from WH-MAIN
* GET /inventory/balances?itemCode=PAPER-A4 → quantity=4950 (5000-50)
* GET /inventory/movements?itemCode=PAPER-A4 →
    PURCHASE_RECEIPT delta=5000 ref=PO:PO-2026-0001
    SALES_SHIPMENT delta=-50 ref=SO:SO-2026-0001

The framework's `InventoryApi.recordMovement` facade now has TWO callers — pbc-orders-sales (negative deltas, SALES_SHIPMENT) and pbc-orders-purchase (positive deltas, PURCHASE_RECEIPT) — feeding the same ledger from both sides.

Failure paths verified:

* Re-receive a RECEIVED PO → 400 "only CONFIRMED orders can be received"
* Cancel a RECEIVED PO → 400 "issue a return-to-supplier flow instead"
* Create a PO from a CUSTOMER-only partner → 400 "partner 'CUST-ONLY' is type CUSTOMER and cannot be the supplier of a purchase order"

Regression: catalog uoms, identity users, partners, inventory, sales orders, purchase orders, printing-shop plates with i18n, metadata entities (15 now, was 13) — all still HTTP 2xx.

Build
-----

* `./gradlew build`: 16 subprojects, 186 unit tests (was 175), all green.
The 11 new tests cover the same shapes as the sales-order tests but inverted: unknown supplier, CUSTOMER-only rejection, BOTH-type acceptance, unknown item, empty lines, total recomputation, confirm/cancel state machine, receive-rejects-non-CONFIRMED, receive-walks-lines-with-positive-delta, cancel-rejects-RECEIVED, cancel-CONFIRMED-allowed.

What was deferred
-----------------

* **RFQs** (request for quotation) and **supplier price catalogs** — both lie alongside POs, but neither is in v1.
* **Partial receipts**. v1's RECEIVED is all-or-nothing; a supplier delivering 4500 of 5000 sheets is not yet modelled.
* **Supplier returns / refunds**. The cancel-RECEIVED rejection message says "issue a return-to-supplier flow" — that flow doesn't exist yet.
* **Three-way matching** (PO + receipt + invoice). Lands with pbc-finance.
* **Multi-leg transfers**. TRANSFER_IN/TRANSFER_OUT exist in the movement enum, but no service operation yet writes both legs in one transaction.