-
…duction auto-creates WorkOrder

First end-to-end cross-PBC workflow driven entirely from a customer plug-in through api.v1 surfaces. A printing-shop BPMN kicks off a TaskHandler that publishes a generic api.v1 event; pbc-production reacts by creating a DRAFT WorkOrder. The plug-in has zero compile-time coupling to pbc-production, and pbc-production has zero knowledge the plug-in exists.

## Why an event, not a facade

Two options were on the table for "how does a plug-in ask pbc-production to create a WorkOrder":

(a) add a new cross-PBC facade `api.v1.ext.production.ProductionApi` with a `createWorkOrder(command)` method
(b) add a generic `WorkOrderRequestedEvent` in `api.v1.event.production` that anyone can publish — this commit

Facade pattern (a) is what InventoryApi.recordMovement and CatalogApi.findItemByCode use: synchronous, in-transaction, caller blocks on completion. Event pattern (b) is what SalesOrderConfirmedEvent → SalesOrderConfirmedSubscriber uses: asynchronous over the bus, still in-transaction (the bus uses `Propagation.MANDATORY` with synchronous delivery, so a failure rolls everything back), but the caller doesn't need a typed result.

Option (b) wins for plug-in → pbc-production:

- Plug-in compile-time surface stays identical: plug-ins already import `api.v1.event.*` to publish. No new api.v1.ext package. Zero new plug-in dependencies.
- The outbox gets the row for free — a crash between publish and delivery replays cleanly from `platform__event_outbox`.
- A second customer plug-in shipping a different flow that ALSO wants to auto-spawn work orders doesn't need a second facade; it just publishes the same event. pbc-scheduling (future) can subscribe to the same channel without duplicating code.

The synchronous facade pattern stays the right tool for cross-PBC operations the caller needs to observe (read-throughs, inventory debits that must block the current transaction). Creating a DRAFT work order is a fire-and-trust operation — the event shape fits.
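As an illustrative sketch of that event shape (field names follow the conventions in this commit; the real class and its `DomainEvent` supertype live in api.v1 and may differ in detail, so treat this as a stand-in, not the actual source):

```kotlin
import java.math.BigDecimal

// Stand-in for the api.v1 marker interface; the real one lives in the framework.
interface DomainEvent

// Hypothetical sketch of a generic, publish-by-anyone request event with
// fail-fast validation: a malformed event blows up at publish time, not
// at the subscriber.
data class WorkOrderRequestedEvent(
    val code: String,              // desired work-order code, globally unique
    val outputItemCode: String,    // what to produce
    val outputQuantity: BigDecimal,
    val sourceReference: String    // opaque audit pointer, e.g. "plugin:printing-shop:quote:Q-007"
) : DomainEvent {
    init {
        require(code.isNotBlank()) { "code must not be blank" }
        require(outputItemCode.isNotBlank()) { "outputItemCode must not be blank" }
        require(outputQuantity.signum() > 0) { "outputQuantity must be positive" }
        require(sourceReference.isNotBlank()) { "sourceReference must not be blank" }
    }
}
```

Because every field rides inside the event, any publisher (plug-in or PBC) can use it without a typed facade on the consumer side.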
## What landed

### api.v1 — WorkOrderRequestedEvent

New event class `org.vibeerp.api.v1.event.production.WorkOrderRequestedEvent` with four required fields:

- `code`: desired work-order code (must be unique globally; convention is to bake the source reference into it so duplicate detection is trivial, e.g. `WO-FROM-PRINTINGSHOP-Q-007`)
- `outputItemCode` + `outputQuantity`: what to produce
- `sourceReference`: opaque free-form pointer used in logs and the outbox audit trail. Example values: `plugin:printing-shop:quote:Q-007`, `pbc-orders-sales:SO-2026-001:L2`

The class is a `DomainEvent` (not a `WorkOrderEvent` subclass — the existing `WorkOrderEvent` sealed interface is for LIFECYCLE events published BY pbc-production, not for inbound requests). `init` validators reject blank strings and non-positive quantities, so a malformed event fails fast at publish time rather than at the subscriber.

### pbc-production — WorkOrderRequestedSubscriber

New `@Component` in `pbc/pbc-production/.../event/WorkOrderRequestedSubscriber.kt`. Subscribes in `@PostConstruct` via the typed-class `EventBus.subscribe` overload (same pattern as `SalesOrderConfirmedSubscriber` + the six pbc-finance order subscribers). The subscriber:

1. Looks up `workOrders.findByCode(event.code)` as the idempotent short-circuit. If a WorkOrder with that code already exists (outbox replay, future async bus retry, developer re-running the same BPMN process), the subscriber logs at DEBUG and returns. **A second execution of the same BPMN produces the same outbox row, which the subscriber then skips — the database ends up with exactly ONE WorkOrder regardless of how many times the process runs.**
2. Calls `WorkOrderService.create(CreateWorkOrderCommand(...))` with the event's fields. `sourceSalesOrderCode` is null because this is the generic path, not the SO-driven one.

Why this is a SECOND subscriber rather than extending `SalesOrderConfirmedSubscriber`: the two events serve different producers.
`SalesOrderConfirmedEvent` is pbc-orders-sales-specific and requires a round-trip through `SalesOrdersApi.findByCode` to fetch the lines; `WorkOrderRequestedEvent` carries everything the subscriber needs inline. Collapsing them would mean the generic path inherits the SO-flow's SO-specific lookup and short-circuit logic that doesn't apply to it.

### reference printing-shop plug-in — CreateWorkOrderFromQuoteTaskHandler

New plug-in TaskHandler in `reference-customer/plugin-printing-shop/.../workflow/CreateWorkOrderFromQuoteTaskHandler.kt`. Captures the `PluginContext` via its constructor — same pattern as `PlateApprovalTaskHandler`, landed in `7b2ab34d` — and from inside `execute`:

1. Reads `quoteCode`, `itemCode`, `quantity` off the process variables (`quantity` accepts Number or String since Flowable's variable coercion is flexible).
2. Derives `workOrderCode = "WO-FROM-PRINTINGSHOP-$quoteCode"` and `sourceReference = "plugin:printing-shop:quote:$quoteCode"`.
3. Logs via `context.logger.info(...)` — the line is tagged `[plugin:printing-shop]` by the framework's `Slf4jPluginLogger`.
4. Publishes `WorkOrderRequestedEvent` via `context.eventBus.publish(...)`. This is the first time a plug-in TaskHandler publishes a cross-PBC event from inside a workflow — it proves the event-bus leg of the handler-context pattern works end-to-end.
5. Writes `workOrderCode` + `workOrderRequested=true` back to the process variables so a downstream BPMN step or the HTTP caller can see the derived code.

The handler is registered in `PrintingShopPlugin.start(context)` alongside `PlateApprovalTaskHandler`:

    context.taskHandlers.register(PlateApprovalTaskHandler(context))
    context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context))

Teardown via `unregisterAllByOwner("printing-shop")` still works unchanged — the scoped registrar tracks both handlers.
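The subscriber's idempotent branch (step 1 above) can be sketched as follows. The repository and service collaborators are stubbed here with hypothetical function types so the block is self-contained; the real types live in pbc-production:

```kotlin
import java.math.BigDecimal

// Hypothetical stand-ins for the real pbc-production types; only the shape matters.
data class WorkOrderRequested(
    val code: String,
    val outputItemCode: String,
    val outputQuantity: BigDecimal,
    val sourceReference: String
)

data class CreateWorkOrderCommand(
    val code: String,
    val outputItemCode: String,
    val outputQuantity: BigDecimal,
    val sourceSalesOrderCode: String?
)

class WorkOrderRequestedSubscriber(
    private val findByCode: (String) -> Any?,            // stands in for workOrders.findByCode
    private val create: (CreateWorkOrderCommand) -> Unit // stands in for WorkOrderService.create
) {
    fun handle(event: WorkOrderRequested) {
        // Idempotent short-circuit: an outbox replay or a re-run BPMN publishes
        // the same code, so an existing row means the work was already done.
        if (findByCode(event.code) != null) return
        create(
            CreateWorkOrderCommand(
                code = event.code,
                outputItemCode = event.outputItemCode,
                outputQuantity = event.outputQuantity,
                sourceSalesOrderCode = null // generic path, not the SO-driven one
            )
        )
    }
}
```

Delivering the same event twice leaves exactly one created command behind, which is the whole point of keying the check on the event's `code`.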
### reference printing-shop plug-in — quote-to-work-order.bpmn20.xml

New BPMN file `processes/quote-to-work-order.bpmn20.xml` in the plug-in JAR. Single synchronous service task, process definition key `plugin-printing-shop-quote-to-work-order`, service task id `printing_shop.quote.create_work_order` (matches the handler key). Auto-deployed by the host's `PluginProcessDeployer` at plug-in start — the printing-shop plug-in now ships two BPMNs bundled into one Flowable deployment, both under category `printing-shop`.

## Smoke test (fresh DB)

```
$ docker compose down -v && docker compose up -d db
$ ./gradlew :distribution:bootRun &
... registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop'
... registered TaskHandler 'printing_shop.quote.create_work_order' owner='printing-shop'
... [plugin:printing-shop] registered 2 TaskHandlers: printing_shop.plate.approve, printing_shop.quote.create_work_order
PluginProcessDeployer: plug-in 'printing-shop' deployed 2 BPMN resource(s) as Flowable deploymentId='1e5c...': [processes/quote-to-work-order.bpmn20.xml, processes/plate-approval.bpmn20.xml]
pbc-production subscribed to WorkOrderRequestedEvent via EventBus.subscribe (typed-class overload)

# 1) seed a catalog item
$ curl -X POST /api/v1/catalog/items
  {"code":"BOOK-HARDCOVER","name":"Hardcover book","itemType":"GOOD","baseUomCode":"ea"}
→ 201 BOOK-HARDCOVER

# 2) start the plug-in's quote-to-work-order BPMN
$ curl -X POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"plugin-printing-shop-quote-to-work-order",
   "variables":{"quoteCode":"Q-007","itemCode":"BOOK-HARDCOVER","quantity":500}}
→ 201 {"ended":true,
       "variables":{"quoteCode":"Q-007", "itemCode":"BOOK-HARDCOVER", "quantity":500,
                    "workOrderCode":"WO-FROM-PRINTINGSHOP-Q-007", "workOrderRequested":true}}

Log lines observed:
[plugin:printing-shop] quote Q-007: publishing WorkOrderRequestedEvent (code=WO-FROM-PRINTINGSHOP-Q-007, item=BOOK-HARDCOVER, qty=500)
[production] WorkOrderRequestedEvent
creating work order 'WO-FROM-PRINTINGSHOP-Q-007' for item 'BOOK-HARDCOVER' x 500 (source='plugin:printing-shop:quote:Q-007')

# 3) verify the WorkOrder now exists in pbc-production
$ curl /api/v1/production/work-orders
→ [{"id":"029c2482-...", "code":"WO-FROM-PRINTINGSHOP-Q-007", "outputItemCode":"BOOK-HARDCOVER",
    "outputQuantity":500.0, "status":"DRAFT", "sourceSalesOrderCode":null, "inputs":[], "ext":{}}]

# 4) run the SAME BPMN a second time — verify idempotent
$ curl -X POST /api/v1/workflow/process-instances {same body as above}
→ 201 (process ends, workOrderRequested=true, new event published + delivered)
$ curl /api/v1/production/work-orders
→ count=1, still only WO-FROM-PRINTINGSHOP-Q-007
```

Every single step runs through an api.v1 public surface. No framework core code knows the printing-shop plug-in exists; no plug-in code knows pbc-production exists. They meet on the event bus, and the outbox guarantees the delivery.

## Tests

- 3 new tests in `pbc-production/.../WorkOrderRequestedSubscriberTest`:
  * `subscribe registers one listener for WorkOrderRequestedEvent`
  * `handle creates a work order from the event fields` — captures the `CreateWorkOrderCommand` and asserts every field
  * `handle short-circuits when a work order with that code already exists` — proves the idempotent branch
- Total framework unit tests: 278 (was 275), all green.

## What this unblocks

- **Richer multi-step BPMNs** in the plug-in that chain plate approval + quote → work order + production start + completion.
- **Plug-in-owned Quote entity** — the printing-shop plug-in can now introduce a `plugin_printingshop__quote` table via its own Liquibase changelog and have its HTTP endpoint create quotes that kick off the quote-to-work-order workflow automatically (or on operator confirm).
- **pbc-production routings/operations (v3)** — each operation becomes a BPMN step, potentially driven by plug-ins contributing custom steps via the same TaskHandler + event seam.
- **Second reference plug-in** — any new customer plug-in can publish `WorkOrderRequestedEvent` from its own workflows without any framework change.

## Non-goals (parking lot)

- The handler publishes but does not also read pbc-production state back. A future "wait for WO completion" BPMN step could subscribe to `WorkOrderCompletedEvent` inside a user-task + signal flow, but the engine's signal/correlation machinery isn't wired to plug-ins yet.
- Quote entity + HTTP + real business logic. REF.1 proves the cross-PBC event seam; the richer quote lifecycle is a separate chunk that can layer on top of this.
- Transactional rollback integration test. The synchronous bus + `Propagation.MANDATORY` guarantees it, but an explicit test that a subscriber throw rolls back both the ledger-adjacent writes and the Flowable process state would be worth adding with a real test-container run.

-
Completes the plug-in side of the embedded Flowable story. The P2.1 core made plug-ins able to register TaskHandlers; this chunk makes them able to ship the BPMN processes those handlers serve.

## Why Flowable's built-in auto-deployer couldn't do it

Flowable's Spring Boot starter scans the host classpath at engine startup for `classpath[*]:/processes/[*].bpmn20.xml` and auto-deploys every hit (the literal glob is paraphrased because the Kotlin KDoc comment below would otherwise treat the embedded slash-star as the start of a nested comment — feedback memory "Kotlin KDoc nested-comment trap"). PF4J plug-ins load through an isolated child classloader that is NOT visible to that scan, so a `processes/*.bpmn20.xml` resource shipped inside a plug-in JAR is never seen. This chunk adds a dedicated host-side deployer that opens each plug-in JAR file directly (same JarFile walk pattern as `MetadataLoader.loadFromPluginJar`) and hand-registers the BPMNs with the Flowable `RepositoryService`.

## Mechanism

### New PluginProcessDeployer (platform-workflow)

One Spring bean, two methods:

- `deployFromPlugin(pluginId, jarPath): String?` — walks the JAR, collects every entry whose name starts with `processes/` and ends with `.bpmn20.xml` or `.bpmn`, and bundles the whole set into one Flowable `Deployment` named `plugin:<id>` with `category = pluginId`. Returns the deployment id, or null for a missing JAR or no BPMN resources. One deployment per plug-in keeps undeploy atomic and makes the teardown query unambiguous.
- `undeployByPlugin(pluginId): Int` — runs `createDeploymentQuery().deploymentCategory(pluginId).list()` and calls `deleteDeployment(id, cascade=true)` on each hit. Cascading removes process instances and history rows along with the deployment — "uninstalling a plug-in makes it disappear". Idempotent: a second call returns 0.
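The undeploy leg reduces to a query-then-cascade-delete loop. A minimal sketch, with the Flowable `RepositoryService` replaced by a hypothetical in-memory stand-in so the idempotency claim can be exercised directly (the real method talks to Flowable, not a map):

```kotlin
// Hypothetical in-memory stand-in for Flowable's deployment store, reduced to
// the two calls the deployer needs: a category query and a cascading delete.
class FakeDeploymentStore {
    private val categories = mutableMapOf<String, String>() // deploymentId -> category
    fun add(id: String, category: String) { categories[id] = category }
    fun idsByCategory(category: String): List<String> =
        categories.filterValues { it == category }.keys.toList()
    fun deleteCascade(id: String) { categories.remove(id) } // the real call also drops instances + history
}

// Mirrors undeployByPlugin(pluginId): one cascading delete per matching
// deployment, returning how many were removed. A second call finds no
// matches and returns 0, which is the idempotency property described above.
fun undeployByPlugin(store: FakeDeploymentStore, pluginId: String): Int {
    val ids = store.idsByCategory(pluginId)
    ids.forEach { store.deleteCascade(it) }
    return ids.size
}
```

Keying the query on `category = pluginId` is what makes the teardown unambiguous: every deployment the plug-in ever created matches exactly one category value.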
The deployer reads the JAR entries into byte arrays inside the JarFile's `use` block and then passes the bytes to `DeploymentBuilder.addBytes(name, bytes)` outside the block, so the jar handle is already closed by the time Flowable sees the deployment. No input-stream lifetime tangles.

### VibeErpPluginManager wiring

- New constructor dependency on `PluginProcessDeployer`.
- Deploy happens AFTER `start(context)` succeeds. The ordering matters because a plug-in can only register its TaskHandlers during `start(context)`, and a deployed BPMN whose service-task delegate expression resolves to a key with no matching handler would still deploy (Flowable only resolves delegates at process-start time). Registering handlers first is the safer default: the moment the deployment lands, every referenced handler is already in the TaskHandlerRegistry.
- BPMN deployment failure AFTER a successful `start(context)` now fully unwinds the plug-in state: call `instance.stop()`, remove the plug-in from the `started` list, strip its endpoints + its TaskHandlers + call `undeployByPlugin` (belt and suspenders — the deploy attempt may have partially succeeded). That mirrors the existing start-failure unwinding so the framework doesn't end up with a plug-in that's half-installed after any step throws.
- `destroy()` calls `undeployByPlugin(pluginId)` alongside the existing `unregisterAllByOwner(pluginId)`.

### Reference plug-in BPMN

`reference-customer/plugin-printing-shop/src/main/resources/processes/plate-approval.bpmn20.xml` — a minimal two-task process (`start` → serviceTask → `end`) whose serviceTask id is `printing_shop.plate.approve`, matching the PlateApprovalTaskHandler key landed in the previous commit. Process definition key is `plugin-printing-shop-plate-approval` (distinct from the serviceTask id because BPMN 2.0 requires element ids to be unique per document — same separation used for the core ping process).
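The JAR-walk and read-inside-`use` pattern described above can be sketched with plain `java.util.jar`. The function name and exact filters are illustrative (taken from the description of `deployFromPlugin`), not the real `PluginProcessDeployer` internals:

```kotlin
import java.nio.file.Path
import java.util.jar.JarFile

// Collect every BPMN resource under processes/ into plain byte arrays.
// All reads happen while the JarFile is open inside `use`; the returned map
// holds no references to the jar's input streams, so it is safe to hand the
// bytes to DeploymentBuilder.addBytes(name, bytes) after the handle closes.
fun collectBpmnResources(jarPath: Path): Map<String, ByteArray> =
    JarFile(jarPath.toFile()).use { jar ->
        jar.entries().toList()
            .filter { !it.isDirectory }
            .filter { it.name.startsWith("processes/") }
            .filter { it.name.endsWith(".bpmn20.xml") || it.name.endsWith(".bpmn") }
            .associate { entry ->
                entry.name to jar.getInputStream(entry).use { it.readBytes() }
            }
    }
```

An empty map signals "no BPMN resources", matching the null-returning guard path described for `deployFromPlugin`.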
## Smoke test (fresh DB, plug-in staged)

```
$ docker compose down -v && docker compose up -d db
$ ./gradlew :distribution:bootRun &
... registered TaskHandler 'vibeerp.workflow.ping' owner='core'
... TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
... plug-in 'printing-shop' Liquibase migrations applied successfully
[plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop'
... [plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve
PluginProcessDeployer: plug-in 'printing-shop' deployed 1 BPMN resource(s) as Flowable deploymentId='4e9f...': [processes/plate-approval.bpmn20.xml]

$ curl /api/v1/workflow/definitions   (as admin)
[ {"key":"plugin-printing-shop-plate-approval", "name":"Printing shop — plate approval", "version":1,
   "deploymentId":"4e9f85a6-33cf-11f1-acaa-1afab74ef3b4", "resourceName":"processes/plate-approval.bpmn20.xml"},
  {"key":"vibeerp-workflow-ping", "name":"vibe_erp workflow ping", "version":1,
   "deploymentId":"4f48...", "resourceName":"vibeerp-ping.bpmn20.xml"} ]

$ curl -X POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"plugin-printing-shop-plate-approval", "variables":{"plateId":"PLATE-007"}}
→ {"processInstanceId":"5b1b...", "ended":true,
   "variables":{"plateId":"PLATE-007", "plateApproved":true, "approvedBy":"user:admin",
                "approvedAt":"2026-04-09T04:48:30.514523Z"}}

$ kill -TERM <pid>
[ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
[ionShutdownHook] PluginProcessDeployer: plug-in 'printing-shop' deployment '4e9f...'
removed (cascade)
```

Full end-to-end loop closed: plug-in ships a BPMN → host reads it out of the JAR → Flowable deployment registered under the plug-in category → HTTP caller starts a process instance via the standard `/api/v1/workflow/process-instances` surface → dispatcher routes by activity id to the plug-in's TaskHandler → handler writes output variables + plug-in sees the authenticated caller as `ctx.principal()` via the reserved `__vibeerp_*` process-variable propagation from commit `ef9e5b42`. SIGTERM cleanly undeploys the plug-in's BPMNs.

## Tests

- 6 new unit tests in `PluginProcessDeployerTest`:
  * `deployFromPlugin returns null when jarPath is not a regular file` — guard against dev-exploded plug-in dirs
  * `deployFromPlugin returns null when the plug-in jar has no BPMN resources`
  * `deployFromPlugin reads every bpmn resource under processes and deploys one bundle` — builds a real temporary JAR with two BPMN entries + a README + a metadata YAML, verifies that both BPMNs go through `addBytes` with the right names and the README / metadata entries are skipped
  * `deployFromPlugin rejects a blank plug-in id`
  * `undeployByPlugin returns zero when there is nothing to remove`
  * `undeployByPlugin cascades a deleteDeployment per matching deployment`
- Total framework unit tests: 275 (was 269), all green.

## Kotlin trap caught during authoring (feedback memory paid out)

First compile failed with `Unclosed comment` on the last line of `PluginProcessDeployer.kt`. The culprit was a KDoc paragraph containing the literal glob `classpath*:/processes/*.bpmn20.xml`: the embedded `/*` inside the backtick span was parsed as the start of a nested block comment even though the surrounding `/* ... */` KDoc was syntactically complete. The saved feedback-memory entry "Kotlin KDoc nested-comment trap" covered exactly this situation — the fix is to spell out glob characters as `[star]` / `[slash]` (or the word "slash-star") inside documentation so the literal `/*` never appears.
The KDoc now documents the behaviour AND the workaround so the next maintainer doesn't hit the same trap.

## Non-goals (still parking lot)

- Handler-side access to the full PluginContext — PlateApprovalTaskHandler is still a pure function because the framework doesn't hand TaskHandlers a context object. For REF.1 (real quote→job-card) handlers will need to read + mutate plug-in-owned tables; the cleanest approach is closure-capture inside the plug-in class (handler instantiated inside `start(context)` with the context captured in the outer scope). Decision deferred to REF.1.
- BPMN resource hot reload. The deployer runs once per plug-in start; a plug-in whose BPMN changes under its feet at runtime isn't supported yet.
- Plug-in-shipped DMN / CMMN resources. The deployer only looks at `.bpmn20.xml` and `.bpmn`. Decision-table and case-management resources are not on the v1.0 critical path.
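For reference, the safe form of the KDoc looks like the sketch below (a hypothetical placeholder function; only the comment matters). The broken form cannot be shown verbatim here for the same reason it failed to compile: Kotlin supports nested block comments, so a literal slash-star inside a KDoc opens a comment that the KDoc's closing marker no longer closes.

```kotlin
/**
 * Deploys every BPMN matching classpath[star]:/processes/[star].bpmn20.xml.
 *
 * The glob is spelled with [star] on purpose: written literally, the
 * slash-star pair after "processes" would open a nested block comment
 * inside this KDoc and the file would fail to compile with "Unclosed comment".
 */
fun autoDeployGlobDocumented(): String = "documented safely"
```

The same spelling convention keeps any future KDoc that quotes classpath globs compilable.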