-
Extends WorkOrderRequestedEvent with an optional routing so a producer — core PBC or customer plug-in — can attach shop-floor operations to a requested work order without importing any pbc-production internals. The reference printing-shop plug-in's quote-to-work-order BPMN now ships a 4-step default routing (CUT → PRINT → FOLD → BIND) end-to-end through the public api.v1 surface.

**api.v1 surface additions (additive, defaulted).**
- New public data class `RoutingOperationSpec(lineNo, operationCode, workCenter, standardMinutes)` in `api.v1.event.production.WorkOrderEvents` with init-block invariants matching pbc-production v3's internal validation (positive lineNo, non-blank operationCode + workCenter, non-negative standardMinutes).
- `WorkOrderRequestedEvent` gains an `operations: List<RoutingOperationSpec>` field, defaulted to `emptyList()`. Existing callers compile without changes; the event's init block now also validates that every operation has a unique lineNo. The convention matches the other v1 events that already carry defaulted `eventId` and `occurredAt` — additive within a major version.

**pbc-production subscriber wiring.**
- `WorkOrderRequestedSubscriber.handle` now maps `event.operations` → `WorkOrderOperationCommand` 1:1 and passes them to `CreateWorkOrderCommand`. An empty list keeps the v2 behavior exactly (auto-spawned orders from the SO path still get no routing and walk DRAFT → IN_PROGRESS → COMPLETED without any gate); a non-empty list feeds the new v3 WorkOrderOperation children and forces a sequential walk on the shop floor. The log line now includes `ops=<size>` so operators can see at a glance whether a WO came with a routing.

**Reference plug-in.**
- `CreateWorkOrderFromQuoteTaskHandler` now attaches `DEFAULT_PRINTING_SHOP_ROUTING`: a 4-step sequence modeled on the reference business doc's brochure production flow.
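A sketch of the spec class and the default routing it carries, with assumed types (`standardMinutes` as `Int`; the real api.v1 class may differ):

```kotlin
// Sketch of the api.v1 spec class and the plug-in's default routing, as
// described in this commit. Types are assumptions; this is a simplified
// stand-in, not the framework source.
data class RoutingOperationSpec(
    val lineNo: Int,
    val operationCode: String,
    val workCenter: String,
    val standardMinutes: Int,
) {
    init {
        require(lineNo > 0) { "lineNo must be positive" }
        require(operationCode.isNotBlank()) { "operationCode must not be blank" }
        require(workCenter.isNotBlank()) { "workCenter must not be blank" }
        require(standardMinutes >= 0) { "standardMinutes must not be negative" }
    }
}

// Event-level invariant: every attached operation carries a unique lineNo.
fun requireUniqueLineNos(operations: List<RoutingOperationSpec>) {
    require(operations.distinctBy { it.lineNo }.size == operations.size) {
        "operations must have unique lineNo values"
    }
}

val DEFAULT_PRINTING_SHOP_ROUTING = listOf(
    RoutingOperationSpec(1, "CUT", "PRINTING-CUT-01", 15),
    RoutingOperationSpec(2, "PRINT", "PRINTING-PRESS-A", 30),
    RoutingOperationSpec(3, "FOLD", "PRINTING-FOLD-01", 10),
    RoutingOperationSpec(4, "BIND", "PRINTING-BIND-01", 20),
)
```

An event built with a duplicate `lineNo` fails at construction, so a malformed routing never reaches the subscriber.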
Each step gets its own work center (PRINTING-CUT-01, PRINTING-PRESS-A, PRINTING-FOLD-01, PRINTING-BIND-01) so a future shop-floor dashboard can show which station is running which job. Standard times are round-number placeholders (15/30/10/20 minutes) — a real customer tunes them from historical data.

Deliberately hard-coded in v1: a real shop with a dozen different flows would either ship a richer plug-in that picks a routing per item type, or wait for a future Tier 1 "routing template" metadata entity. v1 just proves the event-driven seam carries v3 operations end-to-end.

**Why this is the right shape.**
- Zero new compile-time coupling. The plug-in imports only `api.v1.event.production.RoutingOperationSpec`; the plug-in linter would refuse any reach into `pbc.production.*`.
- Core pbc-production stays ignorant of the plug-in: the subscriber doesn't know where the event came from.
- The same `WorkOrderRequestedEvent` path now works for ANY producer — the next customer plug-in that spawns routed work orders gets zero core changes.

**Tests.**
New `WorkOrderRequestedSubscriberTest.handle passes event operations through as WorkOrderOperationCommand` asserts the 1:1 mapping of RoutingOperationSpec → WorkOrderOperationCommand. The existing test gains one assertion that an empty `operations` list on the event produces an empty `operations` list on the command (backwards-compat lock-in).

**Smoke-tested end-to-end against real Postgres:**
1. POST /api/v1/workflow/process-instances with processDefinitionKey `plugin-printing-shop-quote-to-work-order` and variables `{quoteCode: "Q-ROUTING-001", itemCode: "FG-BROCHURE", quantity: 250}`
2. BPMN runs through CreateWorkOrderFromQuoteTaskHandler, publishes WorkOrderRequestedEvent with 4 operations
3. pbc-production subscriber creates WO `WO-FROM-PRINTINGSHOP-Q-ROUTING-001`
4. GET /api/v1/production/work-orders/by-code/... returns the WO with status=DRAFT and 4 operations (CUT/PRINT/FOLD/BIND) all PENDING, each with its own work_center and standard_minutes

This is the framework's first business flow where a customer plug-in provides a routing to a core PBC end-to-end through api.v1 alone. Closes the loop between the v3 routings feature (commit fa867189) and the executable acceptance test in the reference plug-in.

24 modules, 350 unit tests (+1), all green.

-
Closes two open wiring gaps left by the P1.9 and P1.8 chunks — `PluginContext.files` and `PluginContext.reports` both previously threw `UnsupportedOperationException` because the host's `DefaultPluginContext` never received the concrete beans. This commit plumbs both through and exercises them end-to-end via a new printing-shop plug-in endpoint that generates a quote PDF, stores it in the file store, and returns the file handle.

With this chunk the reference printing-shop plug-in demonstrates **every extension seam the framework provides**: HTTP endpoints, JDBC, metadata YAML, i18n, BPMN + TaskHandlers, JobHandlers, custom fields on core entities, event publishing via EventBus, ReportRenderer, and FileStorage. There is no major public plug-in surface left unexercised.

## Wiring: DefaultPluginContext + VibeErpPluginManager

- `DefaultPluginContext` gains two new constructor parameters (`sharedFileStorage: FileStorage`, `sharedReportRenderer: ReportRenderer`) and two new overrides. Each is wired via Spring — the implementations live in platform-files and platform-reports respectively, but platform-plugins depends only on api.v1 (the interfaces), NOT on those modules directly. The concrete beans are injected by Spring at distribution boot time, when every `@Component` is on the classpath.
- `VibeErpPluginManager` adds `private val fileStorage: FileStorage` and `private val reportRenderer: ReportRenderer` constructor params and passes them through to every `DefaultPluginContext` it builds per plug-in.

The `files` and `reports` getters in api.v1 `PluginContext` still have their default-throw backward-compat shim — a plug-in built against v0.8 of api.v1 loading on a v0.7 host would still fail loudly at first call with a clear "upgrade to v0.8" message. The override here makes the v0.8+ host honour the interface.

## Printing-shop reference — quote PDF endpoint

- New `resources/reports/quote-template.jrxml` inside the plug-in JAR.
Parameters: plateCode, plateName, widthMm, heightMm, status, customerName. Produces a single-page A4 PDF with a header, a table of plate attributes, and a footer.
- New endpoint `POST /api/v1/plugins/printing-shop/plates/{id}/generate-quote-pdf`. Request body `{"customerName": "..."}`; response `{"plateId", "plateCode", "customerName", "fileKey", "fileSize", "fileContentType", "downloadUrl"}`.

The handler does ALL of:

1. Reads the plate row via `context.jdbc.queryForObject(...)`
2. Loads the JRXML from the PLUG-IN's own classloader, not the host classpath — `this::class.java.classLoader.getResourceAsStream("reports/quote-template.jrxml")` — so the host's built-in `vibeerp-ping-report.jrxml` and the plug-in's template live in isolated namespaces
3. Renders via `context.reports.renderPdf(template, data)` — uses the host JasperReportRenderer under the hood
4. Persists via `context.files.put(key, contentType, content)` under a plug-in-scoped key `plugin-printing-shop/quotes/quote-<code>.pdf`
5. Returns the file handle plus a `downloadUrl` pointing at the framework's `/api/v1/files/download` endpoint, which the caller can immediately hit

## Smoke test (fresh DB + staged plug-in)

```
# create a plate
POST /api/v1/plugins/printing-shop/plates
  {code: PLATE-200, name: "Premium cover", widthMm: 420, heightMm: 594}
→ 201 {id, code: PLATE-200, status: DRAFT, ...}

# generate + store the quote PDF
POST /api/v1/plugins/printing-shop/plates/<id>/generate-quote-pdf
  {customerName: "Acme Inc"}
→ 201 {
    plateId, plateCode: "PLATE-200", customerName: "Acme Inc",
    fileKey: "plugin-printing-shop/quotes/quote-PLATE-200.pdf",
    fileSize: 1488, fileContentType: "application/pdf",
    downloadUrl: "/api/v1/files/download?key=plugin-printing-shop/quotes/quote-PLATE-200.pdf"
  }

# download via the framework's file endpoint
GET /api/v1/files/download?key=plugin-printing-shop/quotes/quote-PLATE-200.pdf
→ 200 Content-Type: application/pdf
      Content-Length: 1488
      body: valid PDF 1.5, 1 page

$ file /tmp/plate-quote.pdf
/tmp/plate-quote.pdf: PDF document, version 1.5, 1 pages (zip deflate encoded)

# list by prefix
GET /api/v1/files?prefix=plugin-printing-shop/
→ [{"key":"plugin-printing-shop/quotes/quote-PLATE-200.pdf",
    "size":1488, "contentType":"application/pdf", ...}]

# plug-in log
[plugin:printing-shop] registered 8 endpoints under /api/v1/plugins/printing-shop/
[plugin:printing-shop] generated quote PDF for plate PLATE-200 (1488 bytes)
  → plugin-printing-shop/quotes/quote-PLATE-200.pdf
```

Four public surfaces composed in one flow: plug-in JDBC read → plug-in classloader resource load → host ReportRenderer compile/fill/export → host FileStorage put → host file controller download. Every step stays on api.v1; zero plug-in code reaches into a concrete platform class.
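The default-throw backward-compat shim that makes this wiring safe across host versions can be sketched as follows; `FileStorage` and `PluginContext` here are cut-down stand-ins for the api.v1 types:

```kotlin
// Simplified sketch of the default-throw shim: api.v1 ships a throwing default
// getter, and the v0.8+ host's DefaultPluginContext overrides it with the real
// bean. Interface shapes are assumptions, not the framework source.
interface FileStorage {
    fun put(key: String, contentType: String, content: ByteArray)
}

interface PluginContext {
    val files: FileStorage
        get() = throw UnsupportedOperationException(
            "PluginContext.files is not wired on this host — upgrade to api.v1 v0.8+"
        )
}

class DefaultPluginContext(
    private val sharedFileStorage: FileStorage,
) : PluginContext {
    override val files: FileStorage get() = sharedFileStorage
}
```

A plug-in compiled against the newer api.v1 sees the same `files` getter either way; only the host decides whether the call succeeds.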
## Printing-shop plug-in — full extension surface exercised

After this commit the reference printing-shop plug-in contributes via every public seam the framework offers:

| Seam | How the plug-in uses it |
|------|-------------------------|
| HTTP endpoints (P1.3) | 8 endpoints under /api/v1/plugins/printing-shop/ |
| JDBC (P1.4) | Reads/writes its own plugin_printingshop__* tables |
| Liquibase | Own changelog.xml, 2 tables created at plug-in start |
| Metadata YAML (P1.5) | 2 entities, 5 permissions, 2 menus |
| Custom fields on CORE (P3.4) | 5 plug-in fields on Partner/Item/SalesOrder/WorkOrder |
| i18n (P1.6) | Own messages_<locale>.properties, quote number msgs |
| EventBus (P1.7) | Publishes WorkOrderRequestedEvent from a TaskHandler |
| TaskHandlers (P2.1) | 2 handlers (plate-approval, quote-to-work-order) |
| Plug-in BPMN (P2.1 follow-up) | 2 BPMNs in processes/ auto-deployed at start |
| JobHandlers (P1.10 follow-up) | PlateCleanupJobHandler using context.jdbc + logger |
| ReportRenderer (P1.8) | Quote PDF from JRXML via context.reports |
| FileStorage (P1.9) | Persists quote PDF via context.files |

Everything listed in this table is exercised end-to-end by the current smoke test. The plug-in is the framework's executable acceptance test for the entire public extension surface.

## Tests

No new unit tests — the wiring change is a plain constructor addition, the existing `DefaultPluginContext` has no dedicated test class (it's a thin dataclass-shaped bean), and `JasperReportRenderer` + `LocalDiskFileStorage` each have their own unit tests from the respective parent chunks. The change is validated end-to-end by the smoke test above; formalizing that into an integration test would need Testcontainers + a real plug-in JAR and belongs to a different (test-infra) chunk.

- Total framework unit tests: 337 (unchanged), all green.

## Non-goals (parking lot)

- Pre-compiled `.jasper` caching keyed by template hash. A hot-path benchmark would tell us whether the cache is worth shipping.
- Multipart upload of a template into a plug-in's own `files` namespace, so non-bundled templates can be tried without a plug-in rebuild. Nice-to-have for iteration but not on the v1.0 critical path.
- Scoped file-key prefixes per plug-in enforced by the framework (today the plug-in picks its own prefix by convention; a `plugin.files.keyPrefix` config would let the host enforce that every plug-in-contributed file lives under `plugin-<id>/`). Future hardening chunk.

-
P1.10 follow-up. Plug-ins can now register background job handlers the same way they already register workflow task handlers. The reference printing-shop plug-in ships a real PlateCleanupJobHandler that reads from its own database via `context.jdbc` as the executable acceptance test.

## Why this wasn't in the P1.10 chunk

P1.10 landed the core scheduler + registry + Quartz bridge + HTTP surface, but the plug-in-loader integration was deliberately deferred — the JobHandlerRegistry already supported owner-tagged `register(handler, ownerId)` and `unregisterAllByOwner(ownerId)`, so the seam was defined; it just didn't have a caller from the PF4J plug-in side. Without a real plug-in consumer, shipping the integration would have been speculative.

This commit closes the gap in exactly the shape the TaskHandler side already has: a new api.v1 registrar interface, a new scoped registrar in platform-plugins, one constructor parameter on DefaultPluginContext, one new field on VibeErpPluginManager — and the teardown paths all fall out automatically because JobHandlerRegistry already implements the owner-tagged cleanup.

## api.v1 additions

- `org.vibeerp.api.v1.jobs.PluginJobHandlerRegistrar` — single method `register(handler: JobHandler)`. Mirrors `PluginTaskHandlerRegistrar` exactly: same ergonomics, same duplicate-key-throws discipline.
- `PluginContext.jobs: PluginJobHandlerRegistrar` — new optional member with the default-throw backward-compat pattern used for `endpoints`, `jdbc`, `taskHandlers`, and `files`. An older host loading a newer plug-in jar fails loudly at first call rather than silently dropping scheduled work.

## platform-plugins wiring

- New dependency on `:platform:platform-jobs`.
- New internal class `org.vibeerp.platform.plugins.jobs.ScopedJobHandlerRegistrar` that implements the api.v1 registrar by delegating `register(handler)` to `hostRegistry.register(handler, ownerId = pluginId)`.
- `DefaultPluginContext` gains a `scopedJobHandlers` constructor parameter and exposes it as `PluginContext.jobs`.
- `VibeErpPluginManager`:
  * injects `JobHandlerRegistry`
  * constructs `ScopedJobHandlerRegistrar(registry, pluginId)` per plug-in when building `DefaultPluginContext`
  * partial-start failure now also calls `jobHandlerRegistry.unregisterAllByOwner(pluginId)`, matching the existing endpoint + taskHandler + BPMN-deployment cleanups
  * `destroy()` reverse-iterates `started` and calls the same `unregisterAllByOwner` alongside the other four teardown steps

## Reference plug-in — PlateCleanupJobHandler

New file `reference-customer/plugin-printing-shop/.../jobs/PlateCleanupJobHandler.kt`. Key `printing_shop.plate.cleanup`. Captures the `PluginContext` via constructor — the same "handler-side plug-in context access" pattern the printing-shop plug-in already uses for its TaskHandlers.

The handler is READ-ONLY in its v1 incarnation: it runs a GROUP-BY query over `plugin_printingshop__plate` via `context.jdbc.query(...)` and logs a per-status summary via `context.logger.info(...)`. A real cleanup job would also run an `UPDATE`/`DELETE` to prune DRAFT plates older than N days; the read-only shape is enough to exercise the seam end-to-end without introducing a retention policy the customer hasn't asked for.

`PrintingShopPlugin.start(context)` now registers the handler alongside its two TaskHandlers:

    context.taskHandlers.register(PlateApprovalTaskHandler(context))
    context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context))
    context.jobs.register(PlateCleanupJobHandler(context))

## Smoke test (fresh DB, plug-in staged)

```
# boot
registered JobHandler 'vibeerp.jobs.ping' owner='core'
... JobHandlerRegistry initialised with 1 core JobHandler bean(s): [vibeerp.jobs.ping]
... registered JobHandler 'printing_shop.plate.cleanup' owner='printing-shop'
... [plugin:printing-shop] registered 1 JobHandler: printing_shop.plate.cleanup

# HTTP: list handlers — now shows both
GET /api/v1/jobs/handlers
→ {"count":2,"keys":["printing_shop.plate.cleanup","vibeerp.jobs.ping"]}

# HTTP: trigger the plug-in handler — proves the dispatcher routes to it
POST /api/v1/jobs/handlers/printing_shop.plate.cleanup/trigger
→ 200 {"handlerKey":"printing_shop.plate.cleanup",
       "correlationId":"95969129-d6bf-4d9a-8359-88310c4f63b9",
       "startedAt":"...","finishedAt":"...","ok":true}

# Handler-side logs prove context.jdbc + context.logger access
[plugin:printing-shop] PlateCleanupJobHandler firing corr='95969129-...'
[plugin:printing-shop] PlateCleanupJobHandler summary: total=0 byStatus=[]

# SIGTERM — clean teardown
[ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 2 handler(s)
[ionShutdownHook] unregistered JobHandler 'printing_shop.plate.cleanup' (owner stopped)
[ionShutdownHook] JobHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
```

Every expected lifecycle event fires in the right order. Core handlers are untouched by plug-in teardown.

## Tests

No new unit tests in this commit — the test coverage is inherited from the previously landed components:

- `JobHandlerRegistryTest` already covers owner-tagged `register` / `unregister` / `unregisterAllByOwner` / duplicate-key rejection.
- `ScopedTaskHandlerRegistrar` behavior (which this commit mirrors structurally) is exercised end-to-end by the printing-shop plug-in boot path.
- Total framework unit tests: 334 (unchanged from the quality→warehousing quarantine chunk), all green.

## What this unblocks

- **Plug-in-shipped scheduled work.** The printing-shop plug-in can now add cron schedules for its cleanup handler via `POST /api/v1/jobs/scheduled {scheduleKey, handlerKey, cronExpression}` without the operator touching core code.
- **Plug-in-to-plug-in handler coexistence.** Two plug-ins can now ship job handlers with distinct keys and be torn down independently on reload — the owner-tagged cleanup strips only the stopping plug-in's handlers, leaving other plug-ins' and core handlers alone.
- **The "plug-in contributes everything" story.** The reference printing-shop plug-in now contributes via every public seam the framework has: HTTP endpoints (7), custom fields on core entities (5), BPMNs (2), TaskHandlers (2), and a JobHandler (1) — plus its own database schema, its own metadata YAML, and its own i18n bundles. That's every extension point a real customer plug-in would want.

## Non-goals (parking lot)

- A real retention policy in PlateCleanupJobHandler. The handler logs a summary but doesn't mutate state. Customer-specific pruning rules belong in a customer-owned plug-in or a metadata-driven rule once that seam exists.
- A built-in cron schedule for the plug-in's handler. The plug-in only registers the handler; scheduling is an operator decision exposed through the HTTP surface from P1.10.

-
…duction auto-creates WorkOrder

First end-to-end cross-PBC workflow driven entirely from a customer plug-in through api.v1 surfaces. A printing-shop BPMN kicks off a TaskHandler that publishes a generic api.v1 event; pbc-production reacts by creating a DRAFT WorkOrder. The plug-in has zero compile-time coupling to pbc-production, and pbc-production has zero knowledge the plug-in exists.

## Why an event, not a facade

Two options were on the table for "how does a plug-in ask pbc-production to create a WorkOrder":

(a) add a new cross-PBC facade `api.v1.ext.production.ProductionApi` with a `createWorkOrder(command)` method
(b) add a generic `WorkOrderRequestedEvent` in `api.v1.event.production` that anyone can publish — this commit

Facade pattern (a) is what InventoryApi.recordMovement and CatalogApi.findItemByCode use: synchronous, in-transaction, caller blocks on completion. Event pattern (b) is what SalesOrderConfirmedEvent → SalesOrderConfirmedSubscriber uses: asynchronous over the bus, still in-transaction (the bus uses `Propagation.MANDATORY` with synchronous delivery, so a failure rolls everything back), but the caller doesn't need a typed result.

Option (b) wins for plug-in → pbc-production:

- The plug-in's compile-time surface stays identical: plug-ins already import `api.v1.event.*` to publish. No new api.v1.ext package, zero new plug-in dependencies.
- The outbox gets the row for free — a crash between publish and delivery replays cleanly from `platform__event_outbox`.
- A second customer plug-in shipping a different flow that ALSO wants to auto-spawn work orders doesn't need a second facade; it just publishes the same event. pbc-scheduling (future) can subscribe to the same channel without duplicating code.

The synchronous facade pattern stays the right tool for cross-PBC operations the caller needs to observe (read-throughs, inventory debits that must block the current transaction). Creating a DRAFT work order is a fire-and-trust operation — the event shape fits.
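Concretely, option (b) amounts to a plain value class any producer can publish, validating fail-fast at construction. A sketch with assumed field types (the real class implements the framework's `DomainEvent`):

```kotlin
import java.math.BigDecimal

// Sketch of the event shape; field types are assumptions, and the DomainEvent
// plumbing (eventId, occurredAt) is elided. Not the framework source.
data class WorkOrderRequestedEvent(
    val code: String,
    val outputItemCode: String,
    val outputQuantity: BigDecimal,
    val sourceReference: String,
) {
    init {
        require(code.isNotBlank()) { "code must not be blank" }
        require(outputItemCode.isNotBlank()) { "outputItemCode must not be blank" }
        require(outputQuantity > BigDecimal.ZERO) { "outputQuantity must be positive" }
        require(sourceReference.isNotBlank()) { "sourceReference must not be blank" }
    }
}
```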
## What landed

### api.v1 — WorkOrderRequestedEvent

New event class `org.vibeerp.api.v1.event.production.WorkOrderRequestedEvent` with four required fields:

- `code`: desired work-order code (must be unique globally; the convention is to bake the source reference into it so duplicate detection is trivial, e.g. `WO-FROM-PRINTINGSHOP-Q-007`)
- `outputItemCode` + `outputQuantity`: what to produce
- `sourceReference`: opaque free-form pointer used in logs and the outbox audit trail. Example values: `plugin:printing-shop:quote:Q-007`, `pbc-orders-sales:SO-2026-001:L2`

The class is a `DomainEvent`, not a `WorkOrderEvent` subclass — the existing `WorkOrderEvent` sealed interface is for LIFECYCLE events published BY pbc-production, not for inbound requests. `init` validators reject blank strings and non-positive quantities, so a malformed event fails fast at publish time rather than at the subscriber.

### pbc-production — WorkOrderRequestedSubscriber

New `@Component` in `pbc/pbc-production/.../event/WorkOrderRequestedSubscriber.kt`. Subscribes in `@PostConstruct` via the typed-class `EventBus.subscribe` overload (same pattern as `SalesOrderConfirmedSubscriber` + the six pbc-finance order subscribers). The subscriber:

1. Looks up `workOrders.findByCode(event.code)` as the idempotent short-circuit. If a WorkOrder with that code already exists (outbox replay, future async bus retry, a developer re-running the same BPMN process), the subscriber logs at DEBUG and returns. **A second execution of the same BPMN produces the same outbox row, which the subscriber then skips — the database ends up with exactly ONE WorkOrder regardless of how many times the process runs.**
2. Calls `WorkOrderService.create(CreateWorkOrderCommand(...))` with the event's fields. `sourceSalesOrderCode` is null because this is the generic path, not the SO-driven one.

Why this is a SECOND subscriber rather than an extension of `SalesOrderConfirmedSubscriber`: the two events serve different producers. `SalesOrderConfirmedEvent` is pbc-orders-sales-specific and requires a round-trip through `SalesOrdersApi.findByCode` to fetch the lines; `WorkOrderRequestedEvent` carries everything the subscriber needs inline. Collapsing them would mean the generic path inherits the SO flow's SO-specific lookup and short-circuit logic that doesn't apply to it.

### reference printing-shop plug-in — CreateWorkOrderFromQuoteTaskHandler

New plug-in TaskHandler in `reference-customer/plugin-printing-shop/.../workflow/CreateWorkOrderFromQuoteTaskHandler.kt`. Captures the `PluginContext` via constructor — same pattern as `PlateApprovalTaskHandler`, landed in `7b2ab34d` — and from inside `execute`:

1. Reads `quoteCode`, `itemCode`, `quantity` off the process variables (`quantity` accepts Number or String since Flowable's variable coercion is flexible).
2. Derives `workOrderCode = "WO-FROM-PRINTINGSHOP-$quoteCode"` and `sourceReference = "plugin:printing-shop:quote:$quoteCode"`.
3. Logs via `context.logger.info(...)` — the line is tagged `[plugin:printing-shop]` by the framework's `Slf4jPluginLogger`.
4. Publishes `WorkOrderRequestedEvent` via `context.eventBus.publish(...)`. This is the first time a plug-in TaskHandler publishes a cross-PBC event from inside a workflow — it proves the event-bus leg of the handler-context pattern works end-to-end.
5. Writes `workOrderCode` + `workOrderRequested=true` back to the process variables so a downstream BPMN step or the HTTP caller can see the derived code.

The handler is registered in `PrintingShopPlugin.start(context)` alongside `PlateApprovalTaskHandler`:

    context.taskHandlers.register(PlateApprovalTaskHandler(context))
    context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context))

Teardown via `unregisterAllByOwner("printing-shop")` still works unchanged — the scoped registrar tracks both handlers.
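Condensed, the handler's five steps look roughly like this; the interfaces and the variable map are simplified stand-ins for the api.v1 types (the real handler implements `TaskHandler` and receives a `TaskContext`):

```kotlin
// Simplified stand-ins for the api.v1 surfaces the handler touches.
interface PluginLogger { fun info(message: String) }
interface EventBus { fun publish(event: Any) }
interface PluginContext { val logger: PluginLogger; val eventBus: EventBus }

data class WorkOrderRequestedEvent(
    val code: String, val outputItemCode: String,
    val outputQuantity: Long, val sourceReference: String,
)

// Sketch of the handler: read variables, derive codes, publish, write results
// back for downstream BPMN steps. Not the plug-in source.
class CreateWorkOrderFromQuoteTaskHandler(private val context: PluginContext) {
    fun execute(variables: MutableMap<String, Any>) {
        val quoteCode = variables.getValue("quoteCode") as String
        val itemCode = variables.getValue("itemCode") as String
        // Flowable's variable coercion is flexible: accept Number or String.
        val quantity = when (val raw = variables.getValue("quantity")) {
            is Number -> raw.toLong()
            is String -> raw.toLong()
            else -> error("quantity must be a Number or String")
        }
        val workOrderCode = "WO-FROM-PRINTINGSHOP-$quoteCode"
        context.logger.info(
            "quote $quoteCode: publishing WorkOrderRequestedEvent " +
                "(code=$workOrderCode, item=$itemCode, qty=$quantity)"
        )
        context.eventBus.publish(
            WorkOrderRequestedEvent(
                workOrderCode, itemCode, quantity,
                "plugin:printing-shop:quote:$quoteCode",
            )
        )
        variables["workOrderCode"] = workOrderCode
        variables["workOrderRequested"] = true
    }
}
```

Registration happens in `PrintingShopPlugin.start(context)` through the scoped registrar, as described above.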
### reference printing-shop plug-in — quote-to-work-order.bpmn20.xml

New BPMN file `processes/quote-to-work-order.bpmn20.xml` in the plug-in JAR. Single synchronous service task, process definition key `plugin-printing-shop-quote-to-work-order`, service task id `printing_shop.quote.create_work_order` (matches the handler key). Auto-deployed by the host's `PluginProcessDeployer` at plug-in start — the printing-shop plug-in now ships two BPMNs bundled into one Flowable deployment, both under category `printing-shop`.

## Smoke test (fresh DB)

```
$ docker compose down -v && docker compose up -d db
$ ./gradlew :distribution:bootRun &
... registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop'
... registered TaskHandler 'printing_shop.quote.create_work_order' owner='printing-shop'
... [plugin:printing-shop] registered 2 TaskHandlers: printing_shop.plate.approve, printing_shop.quote.create_work_order
PluginProcessDeployer: plug-in 'printing-shop' deployed 2 BPMN resource(s) as Flowable deploymentId='1e5c...':
  [processes/quote-to-work-order.bpmn20.xml, processes/plate-approval.bpmn20.xml]
pbc-production subscribed to WorkOrderRequestedEvent via EventBus.subscribe (typed-class overload)

# 1) seed a catalog item
$ curl -X POST /api/v1/catalog/items
  {"code":"BOOK-HARDCOVER","name":"Hardcover book","itemType":"GOOD","baseUomCode":"ea"}
→ 201 BOOK-HARDCOVER

# 2) start the plug-in's quote-to-work-order BPMN
$ curl -X POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"plugin-printing-shop-quote-to-work-order",
   "variables":{"quoteCode":"Q-007","itemCode":"BOOK-HARDCOVER","quantity":500}}
→ 201 {"ended":true,
       "variables":{"quoteCode":"Q-007", "itemCode":"BOOK-HARDCOVER", "quantity":500,
                    "workOrderCode":"WO-FROM-PRINTINGSHOP-Q-007",
                    "workOrderRequested":true}}

Log lines observed:
[plugin:printing-shop] quote Q-007: publishing WorkOrderRequestedEvent (code=WO-FROM-PRINTINGSHOP-Q-007, item=BOOK-HARDCOVER, qty=500)
[production] WorkOrderRequestedEvent creating work order 'WO-FROM-PRINTINGSHOP-Q-007' for item 'BOOK-HARDCOVER' x 500 (source='plugin:printing-shop:quote:Q-007')

# 3) verify the WorkOrder now exists in pbc-production
$ curl /api/v1/production/work-orders
→ [{"id":"029c2482-...", "code":"WO-FROM-PRINTINGSHOP-Q-007",
    "outputItemCode":"BOOK-HARDCOVER", "outputQuantity":500.0,
    "status":"DRAFT", "sourceSalesOrderCode":null, "inputs":[], "ext":{}}]

# 4) run the SAME BPMN a second time — verify idempotency
$ curl -X POST /api/v1/workflow/process-instances {same body as above}
→ 201 (process ends, workOrderRequested=true, new event published + delivered)
$ curl /api/v1/production/work-orders
→ count=1, still only WO-FROM-PRINTINGSHOP-Q-007
```

Every single step runs through an api.v1 public surface. No framework core code knows the printing-shop plug-in exists; no plug-in code knows pbc-production exists. They meet on the event bus, and the outbox guarantees the delivery.

## Tests

- 3 new tests in `pbc-production/.../WorkOrderRequestedSubscriberTest`:
  * `subscribe registers one listener for WorkOrderRequestedEvent`
  * `handle creates a work order from the event fields` — captures the `CreateWorkOrderCommand` and asserts every field
  * `handle short-circuits when a work order with that code already exists` — proves the idempotent branch
- Total framework unit tests: 278 (was 275), all green.

## What this unblocks

- **Richer multi-step BPMNs** in the plug-in that chain plate approval + quote → work order + production start + completion.
- **Plug-in-owned Quote entity** — the printing-shop plug-in can now introduce a `plugin_printingshop__quote` table via its own Liquibase changelog and have its HTTP endpoint create quotes that kick off the quote-to-work-order workflow automatically (or on operator confirm).
- **pbc-production routings/operations (v3)** — each operation becomes a BPMN step, potentially driven by plug-ins contributing custom steps via the same TaskHandler + event seam.
- **Second reference plug-in** — any new customer plug-in can publish `WorkOrderRequestedEvent` from its own workflows without any framework change.

## Non-goals (parking lot)

- The handler publishes but does not also read pbc-production state back. A future "wait for WO completion" BPMN step could subscribe to `WorkOrderCompletedEvent` inside a user-task + signal flow, but the engine's signal/correlation machinery isn't wired to plug-ins yet.
- Quote entity + HTTP + real business logic. REF.1 proves the cross-PBC event seam; the richer quote lifecycle is a separate chunk that can layer on top of this.
- Transactional rollback integration test. The synchronous bus + `Propagation.MANDATORY` guarantees it, but an explicit test that a subscriber throw rolls back both the ledger-adjacent writes and the Flowable process state would be worth adding with a real test-container run.

-
Proves out the "handler-side plug-in context access" pattern: a plug-in's TaskHandler captures the PluginContext through its constructor when the plug-in instantiates it inside `start(context)`, and then uses `context.jdbc`, `context.logger`, etc. from inside `execute` the same way the plug-in's HTTP lambdas do. Zero new api.v1 surface was needed — the plug-in decides whether a handler takes a context or not, and a pure handler simply omits the constructor parameter.

## Why this pattern and not a richer TaskContext

The alternatives were:

(a) add a `PluginContext` field (or a narrowed projection of it) to api.v1 `TaskContext`, threading the host-owned context through the workflow engine
(b) capture the context in the plug-in's handler constructor — this commit

Option (a) would have forced every TaskHandler author — core PBC handlers too, not just plug-in ones — to reason about a per-plug-in context that wouldn't make sense for core PBCs. It would also have coupled api.v1 to the plug-in machinery in a way that leaks into every handler implementation.

Option (b) is a pure plug-in-local pattern. A pure handler:

    class PureHandler : TaskHandler { ... }

and a stateful handler look identical except for one constructor parameter:

    class StatefulHandler(private val context: PluginContext) : TaskHandler {
        override fun execute(task, ctx) {
            context.jdbc.update(...)
            context.logger.info(...)
        }
    }

and both register the same way:

    context.taskHandlers.register(PureHandler())
    context.taskHandlers.register(StatefulHandler(context))

The framework's `TaskHandlerRegistry`, `DispatchingJavaDelegate`, and `DelegateTaskContext` stay unchanged. Plug-in teardown still strips handlers via `unregisterAllByOwner(pluginId)` because registration still happens through the scoped registrar inside `start(context)`.

## What PlateApprovalTaskHandler now does

Before this commit, the handler was a pure function that wrote `plateApproved=true` + metadata to the process variables and didn't touch the DB.
Now it:

1. Parses `plateId` out of the process variables as a UUID (fail-fast on non-UUID).
2. Calls `context.jdbc.update` to set the plate row's `status` from 'DRAFT' to 'APPROVED', guarded by an explicit `WHERE id=:id AND status='DRAFT'`. The guard makes a second invocation a no-op (rowsUpdated=0) rather than silently overwriting a later status.
3. Logs via the plug-in's PluginLogger — "plate {id} approved by user:admin (rows updated: 1)". Log lines are tagged `[plugin:printing-shop]` by the framework's Slf4jPluginLogger.
4. Emits process output variables: `plateApproved=true`, `plateId=<uuid>`, `approvedBy=<principal label>`, `approvedAt=<instant>`, and `rowsUpdated=<count>` so callers can see whether the approval actually changed state.

## Smoke test (fresh DB, full end-to-end loop)

```
POST /api/v1/plugins/printing-shop/plates
  {"code":"PLATE-042","name":"Red cover plate","widthMm":320,"heightMm":480}
→ 201 {"id":"0bf577c9-...","status":"DRAFT",...}

POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"plugin-printing-shop-plate-approval",
   "variables":{"plateId":"0bf577c9-..."}}
→ 201 {"ended":true,
       "variables":{"plateId":"0bf577c9-...", "rowsUpdated":1,
                    "approvedBy":"user:admin",
                    "approvedAt":"2026-04-09T05:01:01.779369Z",
                    "plateApproved":true}}

GET /api/v1/plugins/printing-shop/plates/0bf577c9-...
→ 200 {"id":"0bf577c9-...","status":"APPROVED", ...}
^^^^ note: was DRAFT a moment ago, NOW APPROVED — persisted to
     plugin_printingshop__plate via the handler's context.jdbc.update

POST /api/v1/workflow/process-instances   (same plateId, second run)
→ 201 {"variables":{"rowsUpdated":0,"plateApproved":true,...}}
^^^^ idempotent guard: the WHERE status='DRAFT' clause prevents
     double-updates, rowsUpdated=0 on the re-run
```

This is the first cross-cutting end-to-end business flow in the framework driven entirely through the public surfaces:

1. Plug-in HTTP endpoint writes a domain row
2. Workflow HTTP endpoint starts a BPMN process
3. Plug-in-contributed BPMN (deployed via PluginProcessDeployer) routes to a plug-in-contributed TaskHandler (registered via context.taskHandlers)
4. Handler mutates the same plug-in-owned table via context.jdbc
5. Plug-in HTTP endpoint reads the new state

Every step uses only api.v1. Zero framework core code knows the plug-in exists.

## Non-goals (parking lot)

- Emitting an event from the handler. The next step in the plug-in workflow story is for a handler to publish a domain event via `context.eventBus.publish(...)` so OTHER subscribers (e.g. pbc-production waiting on a PlateApproved event) can react. This commit stays narrow: the handler only mutates its own plug-in state.
- Transaction scope of the handler's DB write relative to the Flowable engine's process-state persistence. Today both go through the host DataSource and the Spring transaction manager Flowable auto-configures, so a handler throw rolls everything back — verified by walking the code path. An explicit test of transactional rollback lands with REF.1, when the handler takes on real business logic.

-
Completes the plug-in side of the embedded Flowable story. The P2.1 core made plug-ins able to register TaskHandlers; this chunk makes them able to ship the BPMN processes those handlers serve.

## Why Flowable's built-in auto-deployer couldn't do it

Flowable's Spring Boot starter scans the host classpath at engine startup for `classpath[*]:/processes/[*].bpmn20.xml` and auto-deploys every hit (the literal glob is paraphrased because the Kotlin KDoc comment below would otherwise treat the embedded slash-star as the start of a nested comment — feedback memory "Kotlin KDoc nested-comment trap"). PF4J plug-ins load through an isolated child classloader that is NOT visible to that scan, so a `processes/*.bpmn20.xml` resource shipped inside a plug-in JAR is never seen. This chunk adds a dedicated host-side deployer that opens each plug-in JAR file directly (same JarFile walk pattern as `MetadataLoader.loadFromPluginJar`) and hand-registers the BPMNs with the Flowable `RepositoryService`.

## Mechanism

### New PluginProcessDeployer (platform-workflow)

One Spring bean, two methods:

- `deployFromPlugin(pluginId, jarPath): String?` — walks the JAR, collects every entry whose name starts with `processes/` and ends with `.bpmn20.xml` or `.bpmn`, and bundles the whole set into one Flowable `Deployment` named `plugin:<id>` with `category = pluginId`. Returns the deployment id or null (missing JAR / no BPMN resources). One deployment per plug-in keeps undeploy atomic and makes the teardown query unambiguous.
- `undeployByPlugin(pluginId): Int` — runs `createDeploymentQuery().deploymentCategory(pluginId).list()` and calls `deleteDeployment(id, cascade=true)` on each hit. Cascading removes process instances and history rows along with the deployment — "uninstalling a plug-in makes it disappear". Idempotent: a second call returns 0.
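The entry-collection half of `deployFromPlugin` can be sketched with only the JDK. This is a sketch of the technique, not the framework class: the function name and shape are illustrative, only the name filters come from the description above.

```kotlin
import java.nio.file.Path
import java.util.jar.JarFile

// Collect every BPMN entry under processes/ into memory while the jar
// handle is open, so the byte arrays outlive the JarFile.
fun readBpmnResources(jarPath: Path): Map<String, ByteArray> {
    val resources = mutableMapOf<String, ByteArray>()
    JarFile(jarPath.toFile()).use { jar ->
        for (entry in jar.entries()) {
            if (entry.isDirectory) continue
            if (!entry.name.startsWith("processes/")) continue
            if (!entry.name.endsWith(".bpmn20.xml") && !entry.name.endsWith(".bpmn")) continue
            resources[entry.name] = jar.getInputStream(entry).readBytes()
        }
    }
    // The jar handle is closed by here; the bytes are safe to hand to
    // Flowable's DeploymentBuilder.addBytes(name, bytes) one at a time.
    return resources
}
```

A README or metadata YAML in the same JAR fails the suffix filters and never reaches the deployment.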
The deployer reads the JAR entries into byte arrays inside the JarFile's `use` block and then passes the bytes to `DeploymentBuilder.addBytes(name, bytes)` outside the block, so the jar handle is already closed by the time Flowable sees the deployment. No input-stream lifetime tangles.

### VibeErpPluginManager wiring

- New constructor dependency on `PluginProcessDeployer`.
- Deploy happens AFTER `start(context)` succeeds. The ordering matters because a plug-in can only register its TaskHandlers during `start(context)`, and a deployed BPMN whose service-task delegate expression resolves to a key with no matching handler would still deploy (Flowable only resolves delegates at process-start time). Registering handlers first is the safer default: the moment the deployment lands, every referenced handler is already in the TaskHandlerRegistry.
- BPMN deployment failure AFTER a successful `start(context)` now fully unwinds the plug-in state: call `instance.stop()`, remove the plug-in from the `started` list, strip its endpoints + its TaskHandlers + call `undeployByPlugin` (belt and suspenders — the deploy attempt may have partially succeeded). That mirrors the existing start-failure unwinding so the framework doesn't end up with a plug-in that's half-installed after any step throws.
- `destroy()` calls `undeployByPlugin(pluginId)` alongside the existing `unregisterAllByOwner(pluginId)`.

### Reference plug-in BPMN

`reference-customer/plugin-printing-shop/src/main/resources/processes/plate-approval.bpmn20.xml` — a minimal two-task process (`start` → serviceTask → `end`) whose serviceTask id is `printing_shop.plate.approve`, matching the PlateApprovalTaskHandler key landed in the previous commit. Process definition key is `plugin-printing-shop-plate-approval` (distinct from the serviceTask id because BPMN 2.0 requires element ids to be unique per document — same separation used for the core ping process).
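The teardown contract (one deployment per plug-in, deleted by category, idempotent on repeat) can be modeled without Flowable at all. `FakeRepositoryService` is a hand-rolled stand-in for the two `RepositoryService` calls involved; ids and categories below are illustrative.

```kotlin
// One deployment row per plug-in, tagged with its category.
data class FakeDeployment(val id: String, val category: String)

class FakeRepositoryService {
    val deployments = mutableListOf(
        FakeDeployment("4e9f", "printing-shop"),
        FakeDeployment("7c21", "other-plugin"),
    )

    fun byCategory(category: String): List<FakeDeployment> =
        deployments.filter { it.category == category }

    fun deleteDeployment(id: String, cascade: Boolean) {
        // cascade is a no-op in this model; the real call also removes
        // process instances and history rows.
        deployments.removeAll { it.id == id }
    }
}

// Mirrors the undeployByPlugin(pluginId): Int contract described above.
fun undeployByPlugin(repo: FakeRepositoryService, pluginId: String): Int {
    val hits = repo.byCategory(pluginId)
    hits.forEach { repo.deleteDeployment(it.id, cascade = true) }
    return hits.size
}
```

The first call removes the plug-in's single deployment and returns 1; a second call finds no category match and returns 0, which is exactly the idempotency the unit tests pin down.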
## Smoke test (fresh DB, plug-in staged)

```
$ docker compose down -v && docker compose up -d db
$ ./gradlew :distribution:bootRun &
... registered TaskHandler 'vibeerp.workflow.ping' owner='core'
... TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
... plug-in 'printing-shop' Liquibase migrations applied successfully
[plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop'
... [plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve
PluginProcessDeployer: plug-in 'printing-shop' deployed 1 BPMN resource(s)
  as Flowable deploymentId='4e9f...': [processes/plate-approval.bpmn20.xml]

$ curl /api/v1/workflow/definitions   (as admin)
[ {"key":"plugin-printing-shop-plate-approval",
   "name":"Printing shop — plate approval",
   "version":1,
   "deploymentId":"4e9f85a6-33cf-11f1-acaa-1afab74ef3b4",
   "resourceName":"processes/plate-approval.bpmn20.xml"},
  {"key":"vibeerp-workflow-ping",
   "name":"vibe_erp workflow ping",
   "version":1,
   "deploymentId":"4f48...",
   "resourceName":"vibeerp-ping.bpmn20.xml"} ]

$ curl -X POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"plugin-printing-shop-plate-approval",
   "variables":{"plateId":"PLATE-007"}}
→ {"processInstanceId":"5b1b...", "ended":true,
   "variables":{"plateId":"PLATE-007", "plateApproved":true,
                "approvedBy":"user:admin",
                "approvedAt":"2026-04-09T04:48:30.514523Z"}}

$ kill -TERM <pid>
[ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
[ionShutdownHook] PluginProcessDeployer: plug-in 'printing-shop' deployment '4e9f...' removed (cascade)
```

Full end-to-end loop closed: plug-in ships a BPMN → host reads it out of the JAR → Flowable deployment registered under the plug-in category → HTTP caller starts a process instance via the standard `/api/v1/workflow/process-instances` surface → dispatcher routes by activity id to the plug-in's TaskHandler → handler writes output variables + plug-in sees the authenticated caller as `ctx.principal()` via the reserved `__vibeerp_*` process-variable propagation from commit `ef9e5b42`. SIGTERM cleanly undeploys the plug-in's BPMNs.

## Tests

- 6 new unit tests on `PluginProcessDeployerTest`:
  * `deployFromPlugin returns null when jarPath is not a regular file` — guard against dev-exploded plug-in dirs
  * `deployFromPlugin returns null when the plug-in jar has no BPMN resources`
  * `deployFromPlugin reads every bpmn resource under processes and deploys one bundle` — builds a real temporary JAR with two BPMN entries + a README + a metadata YAML, verifies that both BPMNs go through `addBytes` with the right names and the README / metadata entries are skipped
  * `deployFromPlugin rejects a blank plug-in id`
  * `undeployByPlugin returns zero when there is nothing to remove`
  * `undeployByPlugin cascades a deleteDeployment per matching deployment`
- Total framework unit tests: 275 (was 269), all green.

## Kotlin trap caught during authoring (feedback memory paid out)

First compile failed with `Unclosed comment` on the last line of `PluginProcessDeployer.kt`. The culprit was a KDoc paragraph containing the literal glob `classpath*:/processes/*.bpmn20.xml`: the embedded `/*` inside the backtick span was parsed as the start of a nested block comment even though the surrounding KDoc was syntactically complete. The saved feedback-memory entry "Kotlin KDoc nested-comment trap" covered exactly this situation — the fix is to spell out glob characters as `[star]` / `[slash]` (or the word "slash-star") inside documentation so the literal `/*` never appears.
The KDoc now documents the behaviour AND the workaround so the next maintainer doesn't hit the same trap.

## Non-goals (still parking lot)

- Handler-side access to the full PluginContext — PlateApprovalTaskHandler is still a pure function because the framework doesn't hand TaskHandlers a context object. For REF.1 (real quote→job-card) handlers will need to read + mutate plug-in-owned tables; the cleanest approach is closure-capture inside the plug-in class (handler instantiated inside `start(context)` with the context captured in the outer scope). Decision deferred to REF.1.
- BPMN resource hot reload. The deployer runs once per plug-in start; a plug-in whose BPMN changes under its feet at runtime isn't supported yet.
- Plug-in-shipped DMN / CMMN resources. The deployer only looks at `.bpmn20.xml` and `.bpmn`. Decision-table and case-management resources are not on the v1.0 critical path.

-
## What's new

Plug-ins can now contribute workflow task handlers to the framework. The P2.1 `TaskHandlerRegistry` only saw `@Component` TaskHandler beans from the host Spring context; handlers defined inside a PF4J plug-in were invisible because the plug-in's child classloader is not in the host's bean list. This commit closes that gap.

## Mechanism

### api.v1

- New interface `org.vibeerp.api.v1.workflow.PluginTaskHandlerRegistrar` with a single `register(handler: TaskHandler)` method. Plug-ins call it from inside their `start(context)` lambda.
- `PluginContext.taskHandlers: PluginTaskHandlerRegistrar` — added as a new optional member with a default implementation that throws `UnsupportedOperationException("upgrade to v0.7 or later")`, so pre-existing plug-in jars remain binary-compatible with the new host and a plug-in built against v0.7 of the api-v1 surface fails fast on an old host instead of silently doing nothing. Same pattern we used for `endpoints` and `jdbc`.

### platform-workflow

- `TaskHandlerRegistry` gains owner tagging. Every registered handler now carries an `ownerId`: core `@Component` beans get `TaskHandlerRegistry.OWNER_CORE = "core"` (auto-assigned through the constructor-injection path), plug-in-contributed handlers get their PF4J plug-in id. New API:
  * `register(handler, ownerId = OWNER_CORE)` (default keeps existing call sites unchanged)
  * `unregisterAllByOwner(ownerId): Int` — strip every handler owned by that id in one call, returns the count for log correlation
  * The duplicate-key error message now includes both owners so a plug-in trying to stomp on a core handler gets an actionable "already registered by X (owner='core'), attempted by Y (owner='printing-shop')" instead of "already registered".
  * Internal storage switched from `ConcurrentHashMap<String, TaskHandler>` to `ConcurrentHashMap<String, Entry>` where `Entry` carries `(handler, ownerId)`. `find(key)` still returns `TaskHandler?` so the dispatcher is unchanged.
- No behavioral change for the hot-path (`DispatchingJavaDelegate`) — only the registration/teardown paths changed.

### platform-plugins

- New dependency on `:platform:platform-workflow` (the only new inter-module dep of this chunk; it is the module that exposes `TaskHandlerRegistry`).
- New internal class `ScopedTaskHandlerRegistrar(hostRegistry, pluginId)` that implements the api.v1 `PluginTaskHandlerRegistrar` by delegating `register(handler)` to `hostRegistry.register(handler, ownerId = pluginId)`. Constructed fresh per plug-in by `VibeErpPluginManager`, so the plug-in never sees (or can tamper with) the owner id.
- `DefaultPluginContext` gains a `scopedTaskHandlers` constructor parameter and exposes it as the `PluginContext.taskHandlers` override.
- `VibeErpPluginManager`:
  * injects `TaskHandlerRegistry`
  * constructs `ScopedTaskHandlerRegistrar(registry, pluginId)` per plug-in when building `DefaultPluginContext`
  * partial-start failure now also calls `taskHandlerRegistry.unregisterAllByOwner(pluginId)`, matching the existing `endpointRegistry.unregisterAll(pluginId)` cleanup so a throwing `start(context)` cannot leave stale registrations
  * `destroy()` calls the same `unregisterAllByOwner` for every started plug-in in reverse order, mirroring the endpoint cleanup

### reference-customer/plugin-printing-shop

- New file `workflow/PlateApprovalTaskHandler.kt` — the first plug-in-contributed TaskHandler in the framework. Key `printing_shop.plate.approve`. Reads a `plateId` process variable, writes `plateApproved`, `plateId`, `approvedBy` (principal label), `approvedAt` (ISO instant) and exits. No DB mutation yet: a proper plate-approval handler would UPDATE `plugin_printingshop__plate` via `context.jdbc`, but that requires handing the TaskHandler a projection of the PluginContext — a deliberate non-goal of this chunk, deferred to the "handler context" follow-up.
- `PrintingShopPlugin.start(context)` now ends with `context.taskHandlers.register(PlateApprovalTaskHandler())` and logs the registration.
- Package layout: `org.vibeerp.reference.printingshop.workflow` is the plug-in's workflow namespace going forward (the next printing-shop handlers for REF.1 — quote-to-job-card, job-card-to-work-order — will live alongside).

## Smoke test (fresh DB, plug-in staged)

```
$ docker compose down -v && docker compose up -d db
$ ./gradlew :distribution:bootRun &
... TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
... plug-in 'printing-shop' Liquibase migrations applied successfully
vibe_erp plug-in loaded: id=printing-shop version=0.1.0-SNAPSHOT state=STARTED
[plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop'
  class='org.vibeerp.reference.printingshop.workflow.PlateApprovalTaskHandler'
[plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve

$ curl /api/v1/workflow/handlers   (as admin)
{ "count": 2,
  "keys": ["printing_shop.plate.approve", "vibeerp.workflow.ping"] }

$ curl /api/v1/plugins/printing-shop/ping   # plug-in HTTP still works
{"plugin":"printing-shop","ok":true,"version":"0.1.0-SNAPSHOT", ...}

$ curl -X POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"vibeerp-workflow-ping"}
(principal propagation from previous commit still works — pingedBy=user:admin)

$ kill -TERM <pid>
[ionShutdownHook] vibe_erp stopping 1 plug-in(s)
[ionShutdownHook] [plugin:printing-shop] printing-shop plug-in stopped
[ionShutdownHook] unregistered TaskHandler 'printing_shop.plate.approve' (owner stopped)
[ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
```

Every expected lifecycle event fires in the right order with the right owner attribution. Core handlers are untouched by plug-in teardown.
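The owner-tagging contract this smoke test exercises can be reproduced in a stripped-down model. This is a sketch, not the framework class: `TaskHandler` is reduced to a key-only stand-in, and only the register / unregisterAllByOwner / find surface described above is modeled.

```kotlin
import java.util.concurrent.ConcurrentHashMap

// Key-only stand-in for the api.v1 TaskHandler interface (no execute()).
interface TaskHandler { val key: String }

class TaskHandlerRegistry {
    companion object { const val OWNER_CORE = "core" }

    // Each entry remembers who registered it, so teardown can strip by owner.
    private data class Entry(val handler: TaskHandler, val ownerId: String)
    private val entries = ConcurrentHashMap<String, Entry>()

    fun register(handler: TaskHandler, ownerId: String = OWNER_CORE) {
        require(ownerId.isNotBlank()) { "blank owner" }
        val prev = entries.putIfAbsent(handler.key, Entry(handler, ownerId))
        if (prev != null) error(
            "TaskHandler '${handler.key}' already registered " +
                "(owner='${prev.ownerId}'), attempted by owner='$ownerId'"
        )
    }

    fun unregisterAllByOwner(ownerId: String): Int {
        val keys = entries.filterValues { it.ownerId == ownerId }.keys
        keys.forEach { entries.remove(it) }
        return keys.size
    }

    // Dispatcher-facing lookup: still just TaskHandler?, owner stays hidden.
    fun find(key: String): TaskHandler? = entries[key]?.handler
}
```

Unregistering the plug-in owner removes only the plug-in's keys and returns the count; a repeat call returns 0 and core handlers survive untouched.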
## Tests

- 4 new / updated tests on `TaskHandlerRegistryTest`:
  * `unregisterAllByOwner only removes handlers owned by that id` — 2 core + 2 plug-in, unregister the plug-in owner, only the 2 plug-in keys are removed
  * `unregisterAllByOwner on unknown owner returns zero`
  * `register with blank owner is rejected`
  * Updated `duplicate key fails fast` to assert the new error message format including both owner ids
- Total framework unit tests: 269 (was 265), all green.

## What this unblocks

- **REF.1** (real printing-shop quote→job-card workflow) can now register its production handlers through the same seam
- **Plug-in-contributed handlers with state access** — the next design question is how a plug-in handler gets at the plug-in's database and translator. Two options: pass a projection of the PluginContext through TaskContext, or keep a reference to the context captured at plug-in start (closure). The PlateApproval handler in this chunk is pure on purpose to keep the seam conversation separate.
- **Plug-in-shipped BPMN auto-deployment** — Flowable's default classpath scan uses `classpath*:/processes/*.bpmn20.xml` which does NOT see PF4J plug-in classloaders. A dedicated `PluginProcessDeployer` that walks each started plug-in's JAR for BPMN resources and calls `repositoryService.createDeployment` is the natural companion to this commit, still pending.

## Non-goals (still parking lot)

- BPMN processes shipped inside plug-in JARs (see above — needs its own chunk, because it requires reading resources from the PF4J classloader and constructing a Flowable deployment by hand)
- Per-handler permission checks — a handler that wants a permission gate still has to call back through its own context; P4.3's @RequirePermission aspect doesn't reach into Flowable delegate execution.
- Hot reload of a running plug-in's TaskHandlers. The seam supports it, but `unloadPlugin` + `loadPlugin` at runtime isn't exercised by any current caller.

-
Completes the HasExt rollout across every core entity with an ext column. Item was the last one that carried an ext JSONB column without any validation wired — a plug-in could declare custom fields for Item but nothing would enforce them on save. This fixes that and restores two printing-shop-specific Item fields to the reference plug-in that were temporarily dropped from the previous Tier 1 customization chunk (commit 16c59310) precisely because Item wasn't wired.

Code changes:

- Item implements HasExt; `ext` becomes `override var ext`, a companion constant holds the entity name "Item".
- ItemService injects ExtJsonValidator, calls applyTo() in both create() and update() (create + update symmetry like partners and locations). parseExt passthrough added for response mappers.
- CreateItemCommand, UpdateItemCommand, CreateItemRequest, UpdateItemRequest gain a nullable ext field.
- ItemResponse now carries the parsed ext map, same shape as PartnerResponse / LocationResponse / SalesOrderResponse.
- pbc-catalog build.gradle adds `implementation(project(":platform:platform-metadata"))`.
- ItemServiceTest constructor updated to pass the new validator dependency with no-op stubs.

Plug-in YAML (printing-shop.yml):

- Re-added `printing_shop_color_count` (integer) and `printing_shop_paper_gsm` (integer) custom fields targeting Item. These were originally in the commit 16c59310 draft but removed because Item wasn't wired. Now that Item is wired, they're back and actually enforced.

Smoke verified end-to-end against real Postgres with the plug-in staged:

- GET /_meta/metadata/custom-fields/Item returns 2 plug-in fields.
- POST /catalog/items with `{printing_shop_color_count: 4, printing_shop_paper_gsm: 170}` → 201, canonical form persisted.
- GET roundtrip preserves both integer values.
- POST with `printing_shop_color_count: "not-a-number"` → 400 "ext.printing_shop_color_count: not a valid integer: 'not-a-number'".
- POST with `rogue_key` → 400 "ext contains undeclared key(s) for 'Item': [rogue_key]".

Six of eight PBCs now participate in HasExt: Partner, Location, SalesOrder, PurchaseOrder, WorkOrder, Item. The remaining two are pbc-identity (User has no ext column by design — identity is a security concern, not a customization one) and pbc-finance (JournalEntry is derived state from events, no customization surface).

Five core entities carry Tier 1 custom fields as of this commit:

    Partner    (2 core + 1 plug-in)
    Item       (0 core + 2 plug-in)
    SalesOrder (0 core + 1 plug-in)
    WorkOrder  (2 core + 1 plug-in)
    Location   (0 core + 0 plug-in — wired but no declarations yet)

246 unit tests, all green. 18 Gradle subprojects.
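The two 400s above fall out of one declared-keys-plus-types check. Here's that check in miniature; everything is a hand-rolled stand-in (the real ExtJsonValidator reads its declarations from `metadata__custom_field` rows, and `FieldSpec` is illustrative), with the error strings modeled on the smoke-test messages.

```kotlin
// One declared custom field: key + primitive type tag.
data class FieldSpec(val key: String, val type: String) // "integer" | "string"

fun validateExt(entity: String, declared: List<FieldSpec>, ext: Map<String, Any?>): List<String> {
    val errors = mutableListOf<String>()
    val specs = declared.associateBy { it.key }

    // Reject any key no source (core or plug-in) declared for this entity.
    val rogue = ext.keys - specs.keys
    if (rogue.isNotEmpty())
        errors += "ext contains undeclared key(s) for '$entity': ${rogue.sorted()}"

    // Type-check the declared keys that are present.
    for ((key, value) in ext) {
        val spec = specs[key] ?: continue
        if (spec.type == "integer" && value !is Int && value.toString().toIntOrNull() == null)
            errors += "ext.$key: not a valid integer: '$value'"
    }
    return errors
}
```

An empty error list maps to the 201 path; any entry maps to a 400 carrying the message.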
-
The reference printing-shop plug-in now demonstrates the framework's most important promise — that a customer plug-in can EXTEND core business entities (Partner, SalesOrder, WorkOrder) with customer-specific fields WITHOUT touching any core code.

Added to `printing-shop.yml` customFields section:

    Partner:    printing_shop_customer_segment (enum: agency, in_house, end_client, reseller)
    SalesOrder: printing_shop_quote_number (string, maxLength 32)
    WorkOrder:  printing_shop_press_id (string, maxLength 32)

Mechanism (no code changes, all metadata-driven):

1. Plug-in YAML carries a `customFields:` section alongside its entities / permissions / menus.
2. MetadataLoader.loadFromPluginJar reads the section and inserts rows into `metadata__custom_field` tagged `source='plugin:printing-shop'`.
3. CustomFieldRegistry.refresh re-reads ALL rows (both `source='core'` and `source='plugin:*'`) and merges them by `targetEntity`.
4. ExtJsonValidator.applyTo now validates incoming ext against the MERGED set, so a POST to /api/v1/partners/partners can include both core-declared (partners_industry) and plug-in-declared (printing_shop_customer_segment) fields in the same request, and both are enforced.
5. Uninstalling the plug-in removes the plugin:printing-shop rows and the fields disappear from validation AND from the UI's custom-field catalog — no migration, no restart of anything other than the plug-in lifecycle itself.

Convention established: plug-in-contributed custom field keys are prefixed with the plug-in id (e.g. `printing_shop_*`) so two independent plug-ins can't collide on the same entity. Documented inline in the YAML.

Smoke verified end-to-end against real Postgres with the plug-in staged:

- CustomFieldRegistry logs "refreshed 4 custom fields" after core load, then "refreshed 7 custom fields across 3 entities" after plug-in load — 4 core + 3 plug-in fields, merged.
- GET /_meta/metadata/custom-fields/Partner returns 3 fields (2 core + 1 plug-in).
- GET /_meta/metadata/custom-fields/SalesOrder returns 1 field (plug-in only).
- GET /_meta/metadata/custom-fields/WorkOrder returns 3 fields (2 core + 1 plug-in).
- POST /partners/partners with BOTH a core field AND a plug-in field → 201, canonical form persisted.
- POST with an invalid plug-in enum value → 400 "ext.printing_shop_customer_segment: value 'ghost' is not in allowed set [agency, in_house, end_client, reseller]".
- POST /orders/sales-orders with printing_shop_quote_number → 201, quote number round-trips.
- POST /production/work-orders with mixed production_priority + printing_shop_press_id → 201, both fields persisted.

This is the first executable demonstration of Clean Core extensibility (CLAUDE.md guardrail #7) — the plug-in extends core-owned data through a stable public contract (api.v1 HasExt + metadata__custom_field) without reaching into any core or platform internal class. This is the "A" grade on the A/B/C/D extensibility safety scale.

No code changes. No test changes. 246 unit tests, still green.
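The merge-by-targetEntity step (mechanism step 3) is small enough to model directly. A sketch with stand-in types: `Row` plays the role of a `metadata__custom_field` row, and the second core Partner field name is invented for illustration.

```kotlin
// A source-tagged custom-field declaration row.
data class Row(val source: String, val targetEntity: String, val key: String)

class CustomFieldRegistry {
    private var byEntity: Map<String, List<Row>> = emptyMap()

    // refresh() re-reads ALL rows, core and plug-in alike, and merges
    // them per entity; the source tag is kept but does not partition.
    fun refresh(rows: List<Row>) {
        byEntity = rows.groupBy { it.targetEntity }
    }

    fun fieldsFor(entity: String): List<Row> = byEntity[entity].orEmpty()
}
```

Refreshing after the plug-in rows are deleted is all "uninstall" has to mean: the plug-in's fields simply stop appearing in the merged view.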
-
The eighth cross-cutting platform service is live: plug-ins and PBCs now have a real Translator and LocaleProvider instead of the UnsupportedOperationException stubs that have shipped since v0.5.

What landed
-----------

* New Gradle subproject `platform/platform-i18n` (13 modules total).
* `IcuTranslator` — backed by ICU4J's MessageFormat (named placeholders, plurals, gender, locale-aware number/date/currency formatting), the format every modern translation tool speaks natively. JDK ResourceBundle handles per-locale fallback (zh_CN → zh → root).
* The translator takes a list of `BundleLocation(classLoader, baseName)` pairs and tries them in order. For the **core** translator the chain is just `[(host, "messages")]`; for a **per-plug-in** translator constructed by VibeErpPluginManager it's `[(pluginClassLoader, "META-INF/vibe-erp/i18n/messages"), (host, "messages")]` so plug-in keys override host keys, but plug-ins still inherit shared keys like `errors.not_found`.
* Critical detail: the plug-in baseName uses a path the host does NOT publish, because PF4J's `PluginClassLoader` is parent-first — a `getResource("messages.properties")` against the plug-in classloader would find the HOST bundle through the parent chain, defeating the per-plug-in override entirely. Naming the plug-in resource somewhere the host doesn't claim sidesteps the trap.
* The translator disables `ResourceBundle.Control`'s automatic JVM-default locale fallback. The default control walks `requested → root → JVM default → root` which would silently serve German strings to a Japanese-locale request just because the German bundle exists. The fallback chain stops at root within a bundle, then moves to the next bundle location, then returns the key string itself.
* `RequestLocaleProvider` reads the active HTTP request's Accept-Language via `RequestContextHolder` + the servlet container's `getLocale()`.
  Outside an HTTP request (background jobs, workflow tasks, MCP agents) it falls back to the configured default locale (`vibeerp.i18n.defaultLocale`, default `en`). Importantly, when an HTTP request HAS no Accept-Language header it ALSO falls back to the configured default — never to the JVM's locale.
* `I18nConfiguration` exposes `coreTranslator` and `coreLocaleProvider` beans. Per-plug-in translators are NOT beans — they're constructed imperatively per plug-in start in VibeErpPluginManager because each needs its own classloader at the front of the resolution chain.
* `DefaultPluginContext` now wires `translator` and `localeProvider` for real instead of throwing `UnsupportedOperationException`.

Bundles
-------

* Core: `platform-i18n/src/main/resources/messages.properties` (English), `messages_zh_CN.properties` (Simplified Chinese), `messages_de.properties` (German). Six common keys (errors, ok/cancel/save/delete) and an ICU plural example for `counts.items`. Java 9+ JEP 226 reads .properties files as UTF-8 by default, so Chinese characters are written directly rather than as `\uXXXX` escapes.
* Reference plug-in: moved from the broken `i18n/messages_en-US.properties` / `messages_zh-CN.properties` (wrong path, hyphen-locale filenames ResourceBundle ignores) to the canonical `META-INF/vibe-erp/i18n/messages.properties` / `messages_zh_CN.properties` paths with underscore locale tags. Added a new `printingshop.plate.created` key with an ICU plural for `ink_count` to demonstrate non-trivial argument substitution.

End-to-end smoke test
---------------------

Reset Postgres, booted the app, hit POST /api/v1/plugins/printing-shop/plates with three different Accept-Language headers:

* (no header) → "Plate 'PLATE-001' created with no inks." (en-US, plug-in base bundle)
* `Accept-Language: zh-CN` → "已创建印版 'PLATE-002' (无油墨)。" (zh-CN, plug-in zh_CN bundle)
* `Accept-Language: de` → "Plate 'PLATE-003' created with no inks."
  (de, but the plug-in ships no German bundle so it falls back to the plug-in base bundle — correct, the key is plug-in-specific)

Regression: identity, catalog, partners, and `GET /plates` all still HTTP 200 after the i18n wiring change.

Build
-----

* `./gradlew build`: 13 subprojects, 118 unit tests (was 107 / 12), all green. The 11 new tests cover ICU plural rendering, named-arg substitution, locale fallback (zh_CN → root, ja → root via NO_FALLBACK), cross-classloader override (a real JAR built in /tmp at test time), and RequestLocaleProvider's three resolution paths (no request → default; Accept-Language present → request locale; request without Accept-Language → default, NOT JVM locale).
* The architectural rule still enforced: platform-plugins now imports platform-i18n, which is a platform-* dependency (allowed), not a pbc-* dependency (forbidden).

What was deferred
-----------------

* User-preferred locale from the authenticated user's profile row is NOT in the resolution chain yet — the `LocaleProvider` interface leaves room for it but the implementation only consults Accept-Language and the configured default. Adding it slots in between request and default without changing the api.v1 surface.
* The metadata translation overrides table (`metadata__translation`) is also deferred — the `Translator` JavaDoc mentions it as the first lookup source, but right now keys come from .properties files only. Once Tier 1 customisation lands (P3.x), key users will be able to override any string from the SPA without touching code.

-
Adds the foundation for the entire Tier 1 customization story. Core PBCs and plug-ins now ship YAML files declaring their entities, permissions, and menus; a `MetadataLoader` walks the host classpath and each plug-in JAR at boot, upserts the rows tagged with their source, and exposes them at a public REST endpoint so the future SPA, AI-agent function catalog, OpenAPI generator, and external introspection tooling can all see what the framework offers without scraping code.

What landed:

* New `platform/platform-metadata/` Gradle subproject. Depends on api-v1 + platform-persistence + jackson-yaml + spring-jdbc.
* `MetadataYamlFile` DTOs (entities, permissions, menus). Forward-compatible: unknown top-level keys are ignored, so a future plug-in built against a newer schema (forms, workflows, rules, translations) loads cleanly on an older host that doesn't know those sections yet.
* `MetadataLoader` with two entry points:

  loadCore() — uses Spring's PathMatchingResourcePatternResolver against the host classloader. Finds every `classpath*:META-INF/vibe-erp/metadata/*.yml` across all jars contributing to the application. Tagged source='core'.

  loadFromPluginJar(pluginId, jarPath) — opens ONE specific plug-in JAR via java.util.jar.JarFile and walks its entries directly. This is critical: a plug-in's PluginClassLoader is parent-first, so a classpath*: scan against it would ALSO pick up the host's metadata files via parent classpath. We saw this in the first smoke run — the plug-in source ended up with 6 entities (the plug-in's 2 + the host's 4) before the fix. Walking the JAR file directly guarantees only the plug-in's own files load. Tagged source='plugin:<id>'.

  Both entry points use the same delete-then-insert idempotent core (doLoad). Loading the same source twice produces the same final state. User-edited metadata (source='user') is NEVER touched by either path — it survives boot, plug-in install, and plug-in upgrade.
This is what lets a future SPA "Customize" UI add custom fields without fearing they'll be wiped on the next deploy.

* `VibeErpPluginManager.afterPropertiesSet()` now calls metadataLoader.loadCore() at the very start, then walks plug-ins and calls loadFromPluginJar(...) for each one between Liquibase migration and start(context). Order is guaranteed: core → linter → migrate → metadata → start. The CommandLineRunner I originally put `loadCore()` in turned out to be wrong because Spring runs CommandLineRunners AFTER InitializingBean.afterPropertiesSet(), so the plug-in metadata was loading BEFORE core — the wrong way around. Calling loadCore() inline in the plug-in manager fixes the ordering without any @Order(...) gymnastics.
* `MetadataController` exposes:

      GET /api/v1/_meta/metadata             — all three sections
      GET /api/v1/_meta/metadata/entities    — entities only
      GET /api/v1/_meta/metadata/permissions
      GET /api/v1/_meta/metadata/menus

  Public allowlist (covered by the existing /api/v1/_meta/** rule in SecurityConfiguration). The metadata is intentionally non-sensitive — entity names, permission keys, menu paths. Nothing in here is PII or secret; the SPA needs to read it before the user has logged in.
* YAML files shipped:
  - pbc-identity/META-INF/vibe-erp/metadata/identity.yml (User + Role entities, 6 permissions, Users + Roles menus)
  - pbc-catalog/META-INF/vibe-erp/metadata/catalog.yml (Item + Uom entities, 7 permissions, Items + UoMs menus)
  - reference plug-in/META-INF/vibe-erp/metadata/printing-shop.yml (Plate + InkRecipe entities, 5 permissions, Plates + Inks menus in a "Printing shop" section)

Tests: 4 MetadataLoaderTest cases (loadFromPluginJar happy paths, mixed sections, blank pluginId rejection, missing-file no-op wipe) + 7 MetadataYamlParseTest cases (DTO mapping, optional fields, section defaults, forward-compat unknown keys). Total now **92 unit tests** across 11 modules, all green.
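The delete-then-insert core is worth pinning down in miniature, because it is what makes both the idempotency and the source='user' guarantee fall out for free. A sketch with stand-in types (`MetaRow` plays the role of a metadata table row; the real doLoad works against Postgres through spring-jdbc):

```kotlin
// Each source owns its rows; reloading a source replaces exactly that
// source's rows and never touches rows tagged source='user'.
data class MetaRow(val source: String, val name: String)

class MetaTable {
    val rows = mutableListOf<MetaRow>()

    fun doLoad(source: String, loaded: List<String>) {
        require(source != "user") { "user rows are never machine-loaded" }
        rows.removeAll { it.source == source }          // delete this source's rows
        loaded.forEach { rows += MetaRow(source, it) }  // re-insert fresh
    }
}
```

Loading 'core' twice leaves the same final state, and a user-tagged row sits untouched through any number of core and plug-in loads.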
End-to-end smoke test against fresh Postgres + plug-in loaded:

Boot logs:
  MetadataLoader: source='core' loaded 4 entities, 13 permissions, 4 menus from 2 file(s)
  MetadataLoader: source='plugin:printing-shop' loaded 2 entities, 5 permissions, 2 menus from 1 file(s)

HTTP smoke (everything green):
  GET /api/v1/_meta/metadata (no auth) → 200
    6 entities, 18 permissions, 6 menus
    entity names: User, Role, Item, Uom, Plate, InkRecipe
    menu sections: Catalog, Printing shop, System
  GET /api/v1/_meta/metadata/entities → 200
  GET /api/v1/_meta/metadata/menus → 200

Direct DB verification:
  metadata__entity: core=4, plugin:printing-shop=2
  metadata__permission: core=13, plugin:printing-shop=5
  metadata__menu: core=4, plugin:printing-shop=2
  Idempotency: restart the app, identical row counts.

Existing endpoints regression:
  GET /api/v1/identity/users (Bearer) → 1 user
  GET /api/v1/catalog/uoms (Bearer) → 15 UoMs
  GET /api/v1/plugins/printing-shop/ping (Bearer) → 200

Bugs caught and fixed during the smoke test:
• The first attempt loaded core metadata via a CommandLineRunner annotated @Order(HIGHEST_PRECEDENCE) and per-plug-in metadata inline in VibeErpPluginManager.afterPropertiesSet(). Spring runs all InitializingBeans BEFORE any CommandLineRunner, so the plug-in metadata loaded first and the core load came second — wrong order. Fix: drop CoreMetadataInitializer entirely; have the plug-in manager call metadataLoader.loadCore() directly at the start of afterPropertiesSet().
• The first attempt's plug-in load used metadataLoader.load(pluginClassLoader, ...), which used Spring's PathMatchingResourcePatternResolver against the plug-in's classloader. PluginClassLoader is parent-first, so the resolver enumerated BOTH the plug-in's own JAR AND the host classpath's metadata files, tagging core entities as source='plugin:<id>' and corrupting the seed counts.
  Fix: refactor MetadataLoader to expose loadFromPluginJar(pluginId, jarPath), which opens the plug-in JAR directly via java.util.jar.JarFile and walks its entries — never asking the classloader at all. The api-v1 surface didn't change.
• Two KDoc comments contained the literal string `*.yml` after a `/` character (`/metadata/*.yml`), forming the `/*` pattern that Kotlin's lexer treats as a nested-comment opener. The file failed to compile with "Unclosed comment". This is the third time this trap has bitten; fix: rewrote both KDocs to avoid the literal `/*` sequence.
• The MetadataLoaderTest's hand-rolled JAR builder didn't include explicit directory entries for parent paths. Real Gradle JARs do include them, and Spring's PathMatchingResourcePatternResolver needs them to enumerate via classpath*:. Fixed the test helper to write directory entries for every parent of each file.

Implementation plan refreshed: P1.5 marked DONE. Next priority candidates: P5.2 (pbc-partners — third PBC clone) and P3.4 (custom field application via the ext jsonb column, which would unlock the full Tier 1 customization story).

Framework state: 17→18 commits, 10→11 modules, 81→92 unit tests, metadata seeded for 6 entities + 18 permissions + 6 menus.

-
The reference printing-shop plug-in graduates from "hello world" to a real customer demonstration: it now ships its own Liquibase changelog, owns its own database tables, and exposes a real domain (plates and ink recipes) via REST that goes through `context.jdbc` — a new typed-SQL surface in api.v1 — without ever touching Spring's `JdbcTemplate` or any other host internal type. A bytecode linter that runs before plug-in start refuses to load any plug-in that tries to import `org.vibeerp.platform.*` or `org.vibeerp.pbc.*` classes.

What landed:

* api.v1 (additive, binary-compatible):
  - PluginJdbc — typed SQL access with named parameters. Methods: query, queryForObject, update, inTransaction. No Spring imports leaked. Forces plug-ins to use named params (no positional ?).
  - PluginRow — typed nullable accessors over a single result row: string, int, long, uuid, bool, instant, bigDecimal. Hides java.sql.ResultSet entirely.
  - PluginContext.jdbc getter with a default impl that throws UnsupportedOperationException so older builds remain binary-compatible per the api.v1 stability rules.
* platform-plugins — three new sub-packages:
  - jdbc/DefaultPluginJdbc backed by Spring's NamedParameterJdbcTemplate. ResultSetPluginRow translates each accessor through ResultSet.wasNull() so SQL NULL round-trips as Kotlin null instead of the JDBC defaults (0 for int, false for bool, etc. — bug factories).
  - jdbc/PluginJdbcConfiguration provides one shared PluginJdbc bean for the whole process. Per-plugin isolation lands later.
  - migration/PluginLiquibaseRunner looks for META-INF/vibe-erp/db/changelog.xml inside the plug-in JAR via the PF4J classloader and applies it via Liquibase against the host's shared DataSource. The unique META-INF path matters: plug-ins also see the host's parent classpath, where the host's own db/changelog/master.xml lives, and a collision causes a Liquibase ChangeLogParseException at install time.
  - lint/PluginLinter walks every .class entry in the plug-in JAR via java.util.jar.JarFile + ASM ClassReader, visits every type/method/field/instruction reference, and rejects on any reference to `org/vibeerp/platform/` or `org/vibeerp/pbc/` packages.
* VibeErpPluginManager lifecycle is now load → lint → migrate → start:
  - lint runs immediately after PF4J's loadPlugins(); rejected plug-ins are unloaded with a per-violation error log and never get to run any code
  - migrate runs the plug-in's own Liquibase changelog; failure means the plug-in is loaded but skipped (loud warning, framework boots fine)
  - then PF4J's startPlugins() runs the no-arg start
  - then we walk loaded plug-ins and call vibe_erp's start(context) with a fully-wired DefaultPluginContext (logger + endpoints + eventBus + jdbc). The plug-in's tables are guaranteed to exist by the time its lambdas run.
* DefaultPluginContext.jdbc is no longer a stub. Plug-ins inject the shared PluginJdbc and use it to talk to their own tables.
* Reference plug-in (PrintingShopPlugin):
  - Ships META-INF/vibe-erp/db/changelog.xml with two changesets: plugin_printingshop__plate (id, code, name, width_mm, height_mm, status) and plugin_printingshop__ink_recipe (id, code, name, cmyk_c/m/y/k).
  - Now registers seven endpoints:
      GET /ping — health
      GET /echo/{name} — path variable demo
      GET /plates — list
      GET /plates/{id} — fetch
      POST /plates — create (with a best-effort, deliberately racy existence check before INSERT, since plug-ins can't import Spring's DataAccessException)
      GET /inks
      POST /inks
  - All CRUD lambdas use context.jdbc with named parameters. The plug-in still imports nothing from org.springframework.* in its own code (it does reach the host's Jackson via reflection for JSON parsing — a deliberate v0.6 shortcut documented inline).
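A minimal Kotlin sketch of the shape of this typed-SQL surface — method names follow the notes above, but the exact signatures and the throwing default are assumptions, not the real api.v1 source:

```kotlin
import java.math.BigDecimal
import java.time.Instant
import java.util.UUID

// Sketch of the api.v1 typed-SQL surface. All row accessors are nullable so
// SQL NULL round-trips as Kotlin null instead of JDBC defaults (0, false, …).
interface PluginRow {
    fun string(column: String): String?
    fun int(column: String): Int?
    fun long(column: String): Long?
    fun uuid(column: String): UUID?
    fun bool(column: String): Boolean?
    fun instant(column: String): Instant?
    fun bigDecimal(column: String): BigDecimal?
}

interface PluginJdbc {
    // Named parameters only — deliberately no positional-? overloads.
    fun <T> query(sql: String, params: Map<String, Any?>, mapper: (PluginRow) -> T): List<T>
    fun <T> queryForObject(sql: String, params: Map<String, Any?>, mapper: (PluginRow) -> T): T?
    fun update(sql: String, params: Map<String, Any?>): Int
    fun <T> inTransaction(block: () -> T): T
}

interface PluginContext {
    // Default impl throws, so plug-ins compiled against an older api.v1 build
    // stay binary-compatible: the getter exists, it just isn't wired yet.
    val jdbc: PluginJdbc
        get() = throw UnsupportedOperationException("jdbc not wired in this host build")
}
```

The throwing default is the same pattern the notes describe for PluginContext.endpoints: additive getters fail loudly on old hosts instead of silently no-opping.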
Tests: 5 new PluginLinterTest cases use ASM ClassWriter to synthesize in-memory plug-in JARs (clean class, forbidden platform ref, forbidden pbc ref, allowed api.v1 ref, multiple violations) and a mocked PluginWrapper to avoid touching the real PF4J loader. Total now **81 unit tests** across 10 modules, all green.

End-to-end smoke test against fresh Postgres with the plug-in loaded (every assertion green):

Boot logs:
  PluginLiquibaseRunner: plug-in 'printing-shop' has changelog.xml
  Liquibase: ChangeSet printingshop-init-001 ran successfully
  Liquibase: ChangeSet printingshop-init-002 ran successfully
  Liquibase migrations applied successfully
  plugin.printing-shop: registered 7 endpoints

HTTP smoke:
  \dt plugin_printingshop* → both tables exist
  GET /api/v1/plugins/printing-shop/plates → []
  POST plate A4 → 201 + UUID
  POST plate A3 → 201 + UUID
  POST duplicate A4 → 409 + clear msg
  GET plates → 2 rows
  GET /plates/{id} → A4 details
  psql verifies both rows in plugin_printingshop__plate
  POST ink CYAN → 201
  POST ink MAGENTA → 201
  GET inks → 2 inks with nested CMYK
  GET /ping → 200 (existing endpoint)
  GET /api/v1/catalog/uoms → 15 UoMs (no regression)
  GET /api/v1/identity/users → 1 user (no regression)

Bug encountered and fixed during the smoke test:
• The plug-in initially shipped its changelog at db/changelog/master.xml, which collides with the HOST's db/changelog/master.xml. The plug-in classloader does parent-first lookup (PF4J default), so Liquibase's ClassLoaderResourceAccessor found BOTH files and threw ChangeLogParseException ("Found 2 files with the path"). Fixed by moving the plug-in changelog to META-INF/vibe-erp/db/changelog.xml, a path the host never uses, and updating PluginLiquibaseRunner. The unique META-INF prefix is now part of the documented plug-in convention.
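The load → lint → migrate → start lifecycle and its two distinct failure modes (lint rejection unloads the plug-in; migration failure skips it but lets the host boot) can be sketched as plain Kotlin. All names here are stand-ins, not the real VibeErpPluginManager API:

```kotlin
// Illustrative lifecycle sketch: lint gate, then migrate gate, then start.
class PluginDescriptor(val id: String)

fun startAll(
    plugins: List<PluginDescriptor>,
    lint: (PluginDescriptor) -> List<String>,   // returns violations (empty = clean)
    migrate: (PluginDescriptor) -> Boolean,     // false = Liquibase failed
    start: (PluginDescriptor) -> Unit,
): List<String> {
    val started = mutableListOf<String>()
    for (plugin in plugins) {
        val violations = lint(plugin)
        if (violations.isNotEmpty()) {
            // Rejected plug-ins are unloaded and never run any code.
            println("plugin ${plugin.id} rejected: $violations")
            continue
        }
        if (!migrate(plugin)) {
            // Loud warning, plug-in skipped, framework boots fine.
            println("plugin ${plugin.id} migration failed, skipping")
            continue
        }
        start(plugin)  // its tables are guaranteed to exist by now
        started += plugin.id
    }
    return started
}
```

The point of the ordering is the guarantee in the last comment: by the time start(context) runs, the plug-in's own schema is already applied.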
What is explicitly NOT in this chunk (deferred):
• Per-plugin Spring child contexts — plug-ins still instantiate via PF4J's classloader without their own Spring beans
• Per-plugin datasource isolation — one shared host pool today
• Plug-in changelog table-prefix linter — convention only, runtime enforcement comes later
• Rollback on plug-in uninstall — uninstall is operator-confirmed and rare; running dropAll() during stop() would lose data on an accidental restart
• Subscription auto-scoping on plug-in stop — plug-ins still close their own subscriptions in stop()
• Real customer-grade JSON parsing in plug-in lambdas — the v0.6 reference plug-in uses reflection to find the host's Jackson; a real plug-in author would ship their own JSON library or use a future api.v1 typed-DTO surface

Implementation plan refreshed: P1.2, P1.3, P1.4, P1.7, P4.1, P5.1 all marked DONE in docs/superpowers/specs/2026-04-07-vibe-erp-implementation-plan.md. Next priority candidates: P1.5 (metadata seeder) and P5.2 (pbc-partners).

-
The reference printing-shop plug-in now actually does something: its main class registers two HTTP endpoints during start(context), and a real curl to /api/v1/plugins/printing-shop/ping returns the JSON the plug-in's lambda produced. End-to-end smoke test 10/10 green. This is the chunk that turns vibe_erp from "an ERP app that has a plug-in folder" into "an ERP framework whose plug-ins can serve traffic".

What landed:

* api.v1 — additive (binary-compatible per the api.v1 stability rule):
  - org.vibeerp.api.v1.plugin.HttpMethod (enum)
  - org.vibeerp.api.v1.plugin.PluginRequest (path params, query, body)
  - org.vibeerp.api.v1.plugin.PluginResponse (status + body)
  - org.vibeerp.api.v1.plugin.PluginEndpointHandler (fun interface)
  - org.vibeerp.api.v1.plugin.PluginEndpointRegistrar (per-plugin scoped, register(method, path, handler))
  - PluginContext.endpoints getter with a default impl that throws UnsupportedOperationException so the addition is binary-compatible with plug-ins compiled against earlier api.v1 builds.
* platform-plugins — four new files:
  - PluginEndpointRegistry: process-wide registration storage. Uses Spring's AntPathMatcher so {var} extracts path variables. Synchronized mutation. Exact-match fast path before the pattern loop. Rejects duplicate (method, path) per plug-in. unregisterAll(plugin) on shutdown.
  - ScopedPluginEndpointRegistrar: per-plugin wrapper that tags every register() call with the right plugin id. Plug-ins cannot register under another plug-in's namespace.
  - PluginEndpointDispatcher: single Spring @RestController at /api/v1/plugins/{pluginId}/** that catches GET/POST/PUT/PATCH/DELETE, asks the registry for a match, builds a PluginRequest, calls the handler, and serializes the response. 404 on no match, 500 on handler throw (logged with stack trace).
  - DefaultPluginContext: implements PluginContext with a real SLF4J-backed logger (every line tagged with the plug-in id) and the scoped endpoint registrar.
  The other six services (eventBus, transaction, translator, localeProvider, permissionCheck, entityRegistry) throw UnsupportedOperationException with messages pointing at the implementation plan unit that will land each one. Loud failure beats silent no-op.
* VibeErpPluginManager — after PF4J's startPlugins(), now walks every loaded plug-in, casts the wrapper instance to api.v1.plugin.Plugin, and calls start(context) with a freshly-built DefaultPluginContext. Tracks the started set so destroy() can call stop() and unregisterAll() in reverse order. Catches plug-in start failures loudly without bringing the framework down.
* Reference plug-in (PrintingShopPlugin):
  - Now extends BOTH org.pf4j.Plugin (so PF4J's loader can instantiate it via the Plugin-Class manifest entry) AND org.vibeerp.api.v1.plugin.Plugin (so the host's vibe_erp lifecycle hook can call start(context)). Uses Kotlin import aliases to disambiguate the two `Plugin` simple names.
  - In start(context), registers two endpoints:
      GET /ping — returns {plugin, version, ok, message}
      GET /echo/{name} — extracts the path variable and echoes it back
  - The /echo handler proves path-variable extraction works end-to-end.
* Build infrastructure:
  - reference-customer/plugin-printing-shop now has an `installToDev` Gradle task that builds the JAR and stages it into <repo>/plugins-dev/. The task wipes any previously staged copies first so renaming the JAR on a version bump doesn't leave PF4J trying to load two versions.
  - distribution's `bootRun` task now (a) depends on `installToDev` so the staging happens automatically and (b) sets workingDir to the repo root so application-dev.yaml's relative `vibeerp.plugins.directory: ./plugins-dev` resolves to the right place. Without (b), bootRun's CWD was distribution/ and PF4J found "No plugins" — which is exactly the bug that surfaced in the first smoke run.
  - .gitignore now excludes /plugins-dev/ and /files-dev/.
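The registry's matching rules — exact literal match first, then {var} templates with path-variable extraction — can be shown with a toy matcher. The real PluginEndpointRegistry uses Spring's AntPathMatcher; this self-contained sketch only handles single-segment {var}s and omits methods, plug-in scoping, and synchronization:

```kotlin
// Toy illustration of literal-beats-pattern matching with {var} extraction.
data class Match(val pattern: String, val pathVars: Map<String, String>)

class ToyRegistry {
    private val patterns = mutableListOf<String>()

    fun register(pattern: String) { patterns += pattern }

    fun match(path: String): Match? {
        // Exact-match fast path before the pattern loop.
        if (path in patterns) return Match(path, emptyMap())
        for (pattern in patterns) {
            val p = pattern.split('/')
            val s = path.split('/')
            if (p.size != s.size) continue
            val vars = mutableMapOf<String, String>()
            var ok = true
            for (i in p.indices) {
                when {
                    p[i].startsWith("{") && p[i].endsWith("}") ->
                        vars[p[i].removeSurrounding("{", "}")] = s[i]  // bind the variable
                    p[i] != s[i] -> ok = false                         // segment mismatch
                }
            }
            if (ok) return Match(pattern, vars)
        }
        return null
    }
}
```

Registering "/ping" and "/echo/{name}" reproduces the smoke-test behavior: "/ping" hits the literal fast path, "/echo/hello" binds name=hello, and an unknown path returns null (the dispatcher's 404 case).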
Tests: 12 new unit tests for PluginEndpointRegistry covering literal paths, single/multi path variables, duplicate registration rejection, literal-vs-pattern precedence, cross-plug-in isolation, method matching, and unregisterAll. Total now 61 unit tests across the framework, all green.

End-to-end smoke test against fresh Postgres + the plug-in JAR loaded by PF4J at boot (10/10 passing):
  GET /api/v1/plugins/printing-shop/ping (no auth) → 401
  POST /api/v1/auth/login → access token
  GET /api/v1/plugins/printing-shop/ping (Bearer) → 200 {plugin, version, ok, message}
  GET /api/v1/plugins/printing-shop/echo/hello → 200, echoed=hello
  GET /api/v1/plugins/printing-shop/echo/world → 200, echoed=world
  GET /api/v1/plugins/printing-shop/nonexistent → 404 (no handler)
  GET /api/v1/plugins/missing-plugin/ping → 404 (no plugin)
  POST /api/v1/plugins/printing-shop/ping → 404 (wrong method)
  GET /api/v1/catalog/uoms (Bearer) → 200, 15 UoMs
  GET /api/v1/identity/users (Bearer) → 200, 1 user

PF4J resolved the JAR, started the plug-in, the host called vibe_erp's start(context), the plug-in registered two endpoints, and the dispatcher routed real HTTP traffic to the plug-in's lambdas. The boot log shows the full chain.

What is explicitly NOT in this chunk and remains for later:
• plug-in linter (P1.2) — bytecode scan for forbidden imports
• plug-in Liquibase application (P1.4) — plug-in-owned schemas
• per-plug-in Spring child context — currently we just instantiate the plug-in via PF4J's classloader; there is no Spring context for the plug-in's own beans
• PluginContext.eventBus / transaction / translator / etc.
  — they still throw UnsupportedOperationException with TODO messages
• Path-template precedence between multiple competing patterns (only literal-beats-pattern is implemented, not most-specific-pattern)
• Permission checks at the dispatcher (Spring Security still catches plug-in endpoints with the global "anyRequest authenticated" rule, which is the right v0.5 behavior)
• Hot reload of plug-ins (cold restart only)

Bugs encountered and fixed during the smoke test:
• application-dev.yaml has `vibeerp.plugins.directory: ./plugins-dev`, a relative path. Gradle's `bootRun` task by default uses the subproject's directory as the working directory, so the relative path resolved to <repo>/distribution/plugins-dev/ instead of <repo>/plugins-dev/. PF4J reported "No plugins" because that directory was empty. Fixed by setting bootRun.workingDir = rootProject.layout.projectDirectory.asFile.
• One KDoc comment in PluginEndpointDispatcher contained the literal string `/api/v1/plugins/{pluginId}/**` inside backticks. The Kotlin lexer doesn't treat backticks as comment-suppressing, so `/**` opened a nested KDoc comment that was never closed and the file failed to compile. Same root cause as the AuthController bug earlier in the session. Rewrote the line to avoid the literal `/**` sequence.
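The trap generalizes: Kotlin block comments nest, so any `/` immediately followed by `*` inside a comment opens a new comment level that is never closed, and the file fails with "Unclosed comment". One safe shape (the constant name here is illustrative) is to keep the glob in code and let the KDoc point at it:

```kotlin
/**
 * Classpath pattern for plug-in metadata files. The glob lives in the
 * constant below rather than in this comment: spelling it here would put a
 * slash-star sequence inside the KDoc and open a nested comment.
 */
const val METADATA_PATTERN = "classpath*:META-INF/vibe-erp/metadata/*.yml"

fun main() {
    // String literals are safe — only comments trigger the nesting rule.
    println(METADATA_PATTERN)
}
```

The same dodge works for the dispatcher's `/api/v1/plugins/{pluginId}/**` path: reference it from code or reword the comment so `/` and `*` are never adjacent.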
-
BLOCKER: wire Hibernate multi-tenancy
- application.yaml: set hibernate.tenant_identifier_resolver and hibernate.multiTenancy=DISCRIMINATOR so HibernateTenantResolver is actually installed into the SessionFactory
- AuditedJpaEntity.tenantId: add @org.hibernate.annotations.TenantId so every PBC entity inherits the discriminator
- AuditedJpaEntityListener.onCreate: throw if a caller pre-set tenantId to a different value than the current TenantContext, instead of silently overwriting (defense against cross-tenant write bugs)

IMPORTANT: dependency hygiene
- pbc-identity no longer depends on platform-bootstrap (wrong direction; bootstrap assembles PBCs at the top of the stack)
- root build.gradle.kts: tighten the architectural-rule enforcement to also reject :pbc:* -> platform-bootstrap; switch plug-in detection from a fragile pathname heuristic to an explicit extra["vibeerp.module-kind"] = "plugin" marker; the reference plug-in declares the marker

IMPORTANT: api.v1 surface additions (all non-breaking)
- Repository: documented closed exception set; new PersistenceExceptions.kt declares OptimisticLockConflictException, UniqueConstraintViolationException, EntityValidationException, and EntityNotFoundException so plug-ins never see Hibernate types
- TaskContext: now exposes tenantId(), principal(), locale(), correlationId() so workflow handlers (which run outside an HTTP request) can pass tenant-aware calls back into api.v1
- EventBus: subscribe() now returns a Subscription with close() so long-lived subscribers can deregister explicitly; added a subscribe(topic: String, ...) overload for cross-classloader event routing where Class<E> equality is unreliable
- IdentityApi.findUserById: tightened from Id<*> to PrincipalId so the type system rejects "wrong-id-kind" mistakes at the cross-PBC boundary

NITs:
- HealthController.kt -> MetaController.kt (file name now matches the class name); added TODO(v0.2) for reading implementationVersion from the Spring Boot BuildProperties bean
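The defensive onCreate check under BLOCKER might look roughly like this. Class and field names follow the notes; the TenantContext API and the stand-in entity type are assumptions, and the real listener would be a JPA @PrePersist callback:

```kotlin
// Sketch of the AuditedJpaEntityListener.onCreate tenant guard: refuse to
// persist an entity whose pre-set tenantId disagrees with the current
// TenantContext instead of silently overwriting it.
object TenantContext {
    private val current = ThreadLocal<String?>()
    fun set(tenantId: String?) = current.set(tenantId)
    fun get(): String? = current.get()
}

open class AuditedEntity(var tenantId: String? = null)

fun onCreate(entity: AuditedEntity) {
    val current = TenantContext.get()
        ?: error("no tenant bound to this thread")
    val preset = entity.tenantId
    if (preset != null && preset != current) {
        // Cross-tenant write bug: fail loudly rather than reassign.
        throw IllegalStateException(
            "entity tenantId '$preset' does not match current tenant '$current'"
        )
    }
    entity.tenantId = current
}
```

The happy path (tenantId unset, or already equal to the current tenant) stamps the discriminator; any mismatch aborts the write before Hibernate ever sees it.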