-
P1.10 follow-up. Plug-ins can now register background job handlers the same way they already register workflow task handlers. The reference printing-shop plug-in ships a real PlateCleanupJobHandler that reads from its own database via `context.jdbc`, serving as the executable acceptance test.

## Why this wasn't in the P1.10 chunk

P1.10 landed the core scheduler + registry + Quartz bridge + HTTP surface, but the plug-in-loader integration was deliberately deferred — the JobHandlerRegistry already supported owner-tagged `register(handler, ownerId)` and `unregisterAllByOwner(ownerId)`, so the seam was defined; it just didn't have a caller from the PF4J plug-in side. Without a real plug-in consumer, shipping the integration would have been speculative.

This commit closes the gap in exactly the shape the TaskHandler side already has: a new api.v1 registrar interface, a new scoped registrar in platform-plugins, one constructor parameter on DefaultPluginContext, one new field on VibeErpPluginManager — and the teardown paths all fall out automatically because JobHandlerRegistry already implements the owner-tagged cleanup.

## api.v1 additions

- `org.vibeerp.api.v1.jobs.PluginJobHandlerRegistrar` — single method `register(handler: JobHandler)`. Mirrors `PluginTaskHandlerRegistrar` exactly: same ergonomics, same duplicate-key-throws discipline.
- `PluginContext.jobs: PluginJobHandlerRegistrar` — new optional member with the default-throw backward-compat pattern used for `endpoints`, `jdbc`, `taskHandlers`, and `files`. An older host loading a newer plug-in jar fails loudly at first call rather than silently dropping scheduled work.

## platform-plugins wiring

- New dependency on `:platform:platform-jobs`.
- New internal class `org.vibeerp.platform.plugins.jobs.ScopedJobHandlerRegistrar` that implements the api.v1 registrar by delegating `register(handler)` to `hostRegistry.register(handler, ownerId = pluginId)`.
- `DefaultPluginContext` gains a `scopedJobHandlers` constructor parameter and exposes it as `PluginContext.jobs`.
- `VibeErpPluginManager`:
  * injects `JobHandlerRegistry`
  * constructs `ScopedJobHandlerRegistrar(registry, pluginId)` per plug-in when building `DefaultPluginContext`
  * partial-start failure now also calls `jobHandlerRegistry.unregisterAllByOwner(pluginId)`, matching the existing endpoint + taskHandler + BPMN-deployment cleanups
  * `destroy()` reverse-iterates `started` and calls the same `unregisterAllByOwner` alongside the other four teardown steps

## Reference plug-in — PlateCleanupJobHandler

New file `reference-customer/plugin-printing-shop/.../jobs/PlateCleanupJobHandler.kt`. Key `printing_shop.plate.cleanup`. Captures the `PluginContext` via its constructor — the same "handler-side plug-in context access" pattern the printing-shop plug-in already uses for its TaskHandlers.

The handler is READ-ONLY in its v1 incarnation: it runs a GROUP-BY query over `plugin_printingshop__plate` via `context.jdbc.query(...)` and logs a per-status summary via `context.logger.info(...)`. A real cleanup job would also run an `UPDATE`/`DELETE` to prune DRAFT plates older than N days; the read-only shape is enough to exercise the seam end-to-end without introducing a retention policy the customer hasn't asked for.

`PrintingShopPlugin.start(context)` now registers the handler alongside its two TaskHandlers:

    context.taskHandlers.register(PlateApprovalTaskHandler(context))
    context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context))
    context.jobs.register(PlateCleanupJobHandler(context))

## Smoke test (fresh DB, plug-in staged)

```
# boot
registered JobHandler 'vibeerp.jobs.ping' owner='core'
... JobHandlerRegistry initialised with 1 core JobHandler bean(s): [vibeerp.jobs.ping]
... registered JobHandler 'printing_shop.plate.cleanup' owner='printing-shop'
[plugin:printing-shop] registered 1 JobHandler: printing_shop.plate.cleanup

# HTTP: list handlers — now shows both
GET /api/v1/jobs/handlers
→ {"count":2,"keys":["printing_shop.plate.cleanup","vibeerp.jobs.ping"]}

# HTTP: trigger the plug-in handler — proves the dispatcher routes to it
POST /api/v1/jobs/handlers/printing_shop.plate.cleanup/trigger
→ 200 {"handlerKey":"printing_shop.plate.cleanup",
       "correlationId":"95969129-d6bf-4d9a-8359-88310c4f63b9",
       "startedAt":"...","finishedAt":"...","ok":true}

# Handler-side logs prove context.jdbc + context.logger access
[plugin:printing-shop] PlateCleanupJobHandler firing corr='95969129-...'
[plugin:printing-shop] PlateCleanupJobHandler summary: total=0 byStatus=[]

# SIGTERM — clean teardown
[ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 2 handler(s)
[ionShutdownHook] unregistered JobHandler 'printing_shop.plate.cleanup' (owner stopped)
[ionShutdownHook] JobHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
```

Every expected lifecycle event fires in the right order. Core handlers are untouched by plug-in teardown.

## Tests

No new unit tests in this commit — the coverage is inherited from the previously landed components:

- `JobHandlerRegistryTest` already covers owner-tagged `register` / `unregister` / `unregisterAllByOwner` / duplicate-key rejection.
- `ScopedTaskHandlerRegistrar` behavior (which this commit mirrors structurally) is exercised end-to-end by the printing-shop plug-in boot path.
- Total framework unit tests: 334 (unchanged from the quality→warehousing quarantine chunk), all green.

## What this unblocks

- **Plug-in-shipped scheduled work.** The printing-shop plug-in can now add cron schedules for its cleanup handler via `POST /api/v1/jobs/scheduled {scheduleKey, handlerKey, cronExpression}` without the operator touching core code.
- **Plug-in-to-plug-in handler coexistence.** Two plug-ins can now ship job handlers with distinct keys and be torn down independently on reload — the owner-tagged cleanup strips only the stopping plug-in's handlers, leaving other plug-ins' and core handlers alone.
- **The "plug-in contributes everything" story.** The reference printing-shop plug-in now contributes via every public seam the framework has: HTTP endpoints (7), custom fields on core entities (5), BPMNs (2), TaskHandlers (2), and a JobHandler (1) — plus its own database schema, its own metadata YAML, and its own i18n bundles. That's every extension point a real customer plug-in would want.

## Non-goals (parking lot)

- A real retention policy in PlateCleanupJobHandler. The handler logs a summary but doesn't mutate state. Customer-specific pruning rules belong in a customer-owned plug-in or a metadata-driven rule once that seam exists.
- A built-in cron schedule for the plug-in's handler. The plug-in only registers the handler; scheduling is an operator decision exposed through the HTTP surface from P1.10.

-
…c-warehousing StockTransfer

First cross-PBC reaction originating from pbc-quality. Records a REJECTED inspection with explicit source + quarantine location codes, publishes an api.v1 event inside the same transaction as the row insert, and pbc-warehousing's new subscriber atomically creates + confirms a StockTransfer that moves the rejected quantity to the quarantine bin. The whole chain — inspection insert + event publish + transfer create + confirm + two ledger rows — runs in a single transaction under the synchronous in-process bus with Propagation.MANDATORY.

## Why the auto-quarantine is opt-in per-inspection

Not every inspection wants physical movement. A REJECTED batch that's already separated from good stock on the shop floor doesn't need the framework to move anything; the operator just wants the record. Forcing every rejection to create a ledger pair would collide with real-world QC workflows.

The contract is simple: the `InspectionRecord` now carries two OPTIONAL columns (`source_location_code`, `quarantine_location_code`). When BOTH are set AND the decision is REJECTED AND the rejected quantity is positive, the subscriber reacts. Otherwise it logs at DEBUG and does nothing. The event is published either way, so audit/KPI subscribers see every inspection regardless.

## api.v1 additions

New event class `org.vibeerp.api.v1.event.quality.InspectionRecordedEvent` with nine fields:

    inspectionCode, itemCode, sourceReference, decision, inspectedQuantity,
    rejectedQuantity, sourceLocationCode?, quarantineLocationCode?, inspector

All required fields are validated in `init { }` — blank strings, a non-positive inspected quantity, a negative rejected quantity, or an unknown decision string all throw at publish time, so a malformed event never hits the outbox. `aggregateType = "quality.InspectionRecord"` matches the `<pbc>.<aggregate>` convention.
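The fail-fast shape described above can be sketched as a plain data class. This is an illustrative stand-alone reduction, not the framework's source: the real event extends the api.v1 event base type (assumed away here), and the constructor signature is a guess from the field list.

```kotlin
import java.math.BigDecimal

// Hypothetical stand-alone sketch of the nine-field event with fail-fast
// init{} validation; the real class extends the framework's event base type.
data class InspectionRecordedEvent(
    val inspectionCode: String,
    val itemCode: String,
    val sourceReference: String,
    val decision: String,                 // "APPROVED" / "REJECTED" literals, not an enum
    val inspectedQuantity: BigDecimal,
    val rejectedQuantity: BigDecimal,
    val sourceLocationCode: String?,      // optional: quarantine opt-in, half 1
    val quarantineLocationCode: String?,  // optional: quarantine opt-in, half 2
    val inspector: String,
) {
    init {
        // A malformed event throws at publish time and never hits the outbox.
        require(inspectionCode.isNotBlank()) { "inspectionCode must not be blank" }
        require(itemCode.isNotBlank()) { "itemCode must not be blank" }
        require(sourceReference.isNotBlank()) { "sourceReference must not be blank" }
        require(inspector.isNotBlank()) { "inspector must not be blank" }
        require(decision in setOf("APPROVED", "REJECTED")) { "unknown decision '$decision'" }
        require(inspectedQuantity > BigDecimal.ZERO) { "inspectedQuantity must be positive" }
        require(rejectedQuantity >= BigDecimal.ZERO) { "rejectedQuantity must not be negative" }
    }
}
```

Because validation lives in `init { }`, even a `copy(...)` of a valid event re-runs the checks, so no code path can construct a malformed instance.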
`decision` is carried as a String (not the pbc-quality `InspectionDecision` enum) to keep guardrail #10 honest — api.v1 events MUST NOT leak internal PBC types. Consumers compare against the literal `"APPROVED"` / `"REJECTED"` strings.

## pbc-quality changes

- `InspectionRecord` entity gains two nullable columns: `source_location_code` + `quarantine_location_code`.
- Liquibase migration `002-quality-quarantine-locations.xml` adds the columns to `quality__inspection_record`.
- `InspectionRecordService` now injects `EventBus` and publishes `InspectionRecordedEvent` inside the `@Transactional record()` method. The publish carries all nine fields, including the optional locations.
- `RecordInspectionCommand` + `RecordInspectionRequest` gain the two optional location fields; they default to null, so every existing caller keeps working unchanged.
- `InspectionRecordResponse` exposes both new columns on the HTTP wire.

## pbc-warehousing changes

- New `QualityRejectionQuarantineSubscriber` @Component.
- Subscribes in `@PostConstruct` via the typed-class `EventBus.subscribe(InspectionRecordedEvent::class.java, ...)` overload — the same pattern every other PBC subscriber uses (SalesOrderConfirmedSubscriber, WorkOrderRequestedSubscriber, the pbc-finance order subscribers).
- `handle(event)` is `internal` so the unit test can drive it directly without going through the bus.
- Activation contract (all must be true): decision=REJECTED, rejectedQuantity>0, sourceLocationCode non-blank, quarantineLocationCode non-blank. Any missing condition → no-op.
- Idempotency: the derived transfer code is `TR-QC-<inspectionCode>`. Before creating, the subscriber checks `stockTransfers.findByCode(derivedCode)` — if anything exists (DRAFT, CONFIRMED, or CANCELLED), the subscriber skips. A replay of the same event under at-least-once delivery is safe.
- On success: creates a DRAFT StockTransfer with one line moving `rejectedQuantity` of `itemCode` from source to quarantine, then calls `confirm(id)`, which writes the atomic TRANSFER_OUT + TRANSFER_IN ledger pair.

## Smoke test (fresh DB)

```
# seed
POST /api/v1/catalog/items {code: WIDGET-1, baseUomCode: ea}
POST /api/v1/inventory/locations {code: WH-MAIN, type: WAREHOUSE}
POST /api/v1/inventory/locations {code: WH-QUARANTINE, type: WAREHOUSE}
POST /api/v1/inventory/movements {itemCode: WIDGET-1, locationId: <WH-MAIN>, delta: 100, reason: RECEIPT}

# the cross-PBC reaction
POST /api/v1/quality/inspections {code: QC-R-001, itemCode: WIDGET-1,
  sourceReference: "WO:WO-001", decision: REJECTED, inspectedQuantity: 50,
  rejectedQuantity: 7, reason: "surface scratches",
  sourceLocationCode: "WH-MAIN", quarantineLocationCode: "WH-QUARANTINE"}
→ 201 {..., sourceLocationCode: "WH-MAIN", quarantineLocationCode: "WH-QUARANTINE"}

# automatically created + confirmed
GET /api/v1/warehousing/stock-transfers/by-code/TR-QC-QC-R-001
→ 200 {
  "code": "TR-QC-QC-R-001",
  "fromLocationCode": "WH-MAIN",
  "toLocationCode": "WH-QUARANTINE",
  "status": "CONFIRMED",
  "note": "auto-quarantine from rejected inspection QC-R-001",
  "lines": [{"itemCode": "WIDGET-1", "quantity": 7.0}]
}

# ledger state (raw SQL)
SELECT l.code, b.item_code, b.quantity
FROM inventory__stock_balance b
JOIN inventory__location l ON l.id = b.location_id
WHERE b.item_code = 'WIDGET-1';

WH-MAIN       | WIDGET-1 | 93.0000   ← was 100, now 93
WH-QUARANTINE | WIDGET-1 |  7.0000   ← 7 rejected units here

SELECT item_code, l.code AS location, reason, delta, reference
FROM inventory__stock_movement m
JOIN inventory__location l ON l.id = m.location_id
WHERE m.reference = 'TR:TR-QC-QC-R-001';

WIDGET-1 | WH-MAIN       | TRANSFER_OUT | -7 | TR:TR-QC-QC-R-001
WIDGET-1 | WH-QUARANTINE | TRANSFER_IN  |  7 | TR:TR-QC-QC-R-001

# negatives
POST /api/v1/quality/inspections {decision: APPROVED, ...+locations}
→ 201, but GET /TR-QC-QC-A-001 → 404 (no transfer, correct opt-out)
POST /api/v1/quality/inspections {decision: REJECTED, rejected: 2, no locations}
→ 201, but GET /TR-QC-QC-R-002 → 404 (opt-in honored)

# handler log
[warehousing] auto-quarantining 7 units of 'WIDGET-1' from 'WH-MAIN' to
'WH-QUARANTINE' (inspection=QC-R-001, transfer=TR-QC-QC-R-001)
```

Everything happens in ONE transaction because EventBusImpl uses Propagation.MANDATORY with synchronous delivery: the inspection insert, the event publish, the StockTransfer create, the confirm, and the two ledger rows all commit or roll back together.

## Tests

- Updated `InspectionRecordServiceTest`: the service now takes an `EventBus` constructor argument. Every existing test got a relaxed `EventBus` mock; the one new test, `record publishes InspectionRecordedEvent on success`, captures the published event and asserts every field, including the location codes.
- 6 new unit tests in `QualityRejectionQuarantineSubscriberTest`:
  * subscribe registers one listener for InspectionRecordedEvent
  * handle creates and confirms a quarantine transfer on a fully-populated REJECTED event (asserts derived code, locations, item code, quantity)
  * handle is a no-op when decision is APPROVED
  * handle is a no-op when sourceLocationCode is missing
  * handle is a no-op when quarantineLocationCode is missing
  * handle skips when a transfer with the derived code already exists (idempotent replay)
- Total framework unit tests: 334 (was 327), all green.

## What this unblocks

- **Quality KPI dashboards** — any PBC can now subscribe to `InspectionRecordedEvent` without coupling to pbc-quality.
- **pbc-finance quality-cost tracking** — when GL growth lands, a finance subscriber can debit a "quality variance" account on every REJECTED inspection.
- **REF.2 / customer plug-in workflows** — the printing-shop plug-in can emit an `InspectionRecordedEvent` of its own from a BPMN service task (via `context.eventBus.publish`) and drive the same quarantine chain without touching pbc-quality's HTTP surface.
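The subscriber's activation contract and derived-code idempotency reduce to a small pure core. A minimal sketch with simplified stand-in types — the names mirror the text above, but these are not the framework's actual signatures (the real subscriber works against the event class and the StockTransfer service layer):

```kotlin
import java.math.BigDecimal

// Simplified stand-in for the real event; illustrative only.
data class Rejection(
    val inspectionCode: String,
    val decision: String,
    val rejectedQuantity: BigDecimal,
    val sourceLocationCode: String?,
    val quarantineLocationCode: String?,
)

// All four activation conditions must hold, otherwise the subscriber no-ops.
fun shouldQuarantine(e: Rejection): Boolean =
    e.decision == "REJECTED" &&
    e.rejectedQuantity > BigDecimal.ZERO &&
    !e.sourceLocationCode.isNullOrBlank() &&
    !e.quarantineLocationCode.isNullOrBlank()

// Idempotency: the transfer code is derived from the inspection code, so an
// at-least-once redelivery finds the existing transfer by code and skips.
fun derivedTransferCode(e: Rejection) = "TR-QC-${e.inspectionCode}"

fun main() {
    val full = Rejection("QC-R-001", "REJECTED", BigDecimal(7), "WH-MAIN", "WH-QUARANTINE")
    println(shouldQuarantine(full))                                 // true
    println(shouldQuarantine(full.copy(decision = "APPROVED")))     // false: opt-out
    println(shouldQuarantine(full.copy(sourceLocationCode = null))) // false: opt-in not taken
    println(derivedTransferCode(full))                              // TR-QC-QC-R-001
}
```

Keeping the guard a pure predicate is what makes the six subscriber unit tests above cheap: each no-op case is one `copy(...)` away from the fully-populated event.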
## Non-goals (parking lot)

- Partial-batch quarantine decisions (moving some units to quarantine, some back to general stock, some to scrap). v1 collapses the decision into a single "reject N units" action and assumes the operator splits batches manually before inspecting. A richer ResolutionPlan aggregate is a future chunk if real workflows need it.
- Quality metrics storage. The event is audited by the existing wildcard event subscriber, but no PBC rolls it up into a KPI table. Belongs to a future reporting feature.
- Auto-approval chains. An APPROVED inspection could trigger a "release-from-hold" transfer (opposite direction) in a future-expanded subscriber, but v1 keeps the reaction REJECTED-only to match the "quarantine on fail" use case.

-
Closes the P1.9 row of the implementation plan. New platform-files subproject exposing a cross-PBC facade for the framework's binary blob store, with a local-disk implementation and a thin HTTP surface for multipart upload / download / delete / list.

## api.v1 additions (package `org.vibeerp.api.v1.files`)

- `FileStorage` — injectable facade with five methods:
  * `put(key, contentType, content: InputStream): FileHandle`
  * `get(key): FileReadResult?`
  * `exists(key): Boolean`
  * `delete(key): Boolean`
  * `list(prefix): List<FileHandle>`

  Stream-first (not byte-array-first) so report PDFs etc. don't have to be materialized in memory. Keys are opaque strings with slashes allowed for logical grouping; the local-disk backend maps them to subdirectories.
- `FileHandle` — read-only metadata DTO (key, size, contentType, createdAt, updatedAt).
- `FileReadResult` — the return type of `get()`, bundling a handle and an open InputStream. The caller MUST close the stream (`result.content.use { ... }` is the idiomatic shape); the facade is not responsible for managing the consumer's lifetime.
- `PluginContext.files: FileStorage` — new member on the plug-in context interface; the default implementation throws `UnsupportedOperationException("upgrade vibe_erp to v0.8 or later")`. Same backward-compat pattern we used for `endpoints`, `jdbc`, `taskHandlers`. Plug-ins that need to persist report PDFs, uploaded attachments, or exported archives inject this through the context.

## platform-files runtime

- `LocalDiskFileStorage` @Component reading `vibeerp.files.local-path` (default `./files-local`, overridden in the dev profile to `./files-dev` and in production config to `/opt/vibe-erp/files`).

  **Layout**: files are stored at `<root>/<key>` with a sidecar metadata file at `<root>/<key>.meta` containing a single line `content_type=<value>`. Sidecars beat xattrs (not portable across Linux/macOS) and beat an H2/SQLite index (overkill for single-tenant single-instance).
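A minimal sketch of the key-to-path mapping, including the under-root traversal guard from the key-safety rules. The class and method names here are hypothetical, not LocalDiskFileStorage's actual internals:

```kotlin
import java.nio.file.Path

// Illustrative sketch of the layout rule only — names are hypothetical.
class SidecarLayout(private val root: Path) {
    // "reports/q1.pdf" → <root>/reports/q1.pdf
    fun dataPath(key: String): Path = root.resolve(key).normalize()

    // "reports/q1.pdf" → <root>/reports/q1.pdf.meta, holding "content_type=<value>"
    fun metaPath(key: String): Path = root.resolve("$key.meta").normalize()

    // Traversal guard: the resolved path must stay under the configured root,
    // so "../escape"-style keys are rejected before any I/O happens.
    fun isUnderRoot(key: String): Boolean = dataPath(key).startsWith(root.normalize())
}

fun main() {
    val layout = SidecarLayout(Path.of("/opt/vibe-erp/files"))
    println(layout.dataPath("reports/q1.pdf"))  // /opt/vibe-erp/files/reports/q1.pdf
    println(layout.metaPath("reports/q1.pdf"))  // /opt/vibe-erp/files/reports/q1.pdf.meta
    println(layout.isUnderRoot("../escape"))    // false — rejected up front
}
```

Deriving the sidecar name from the data path is also why keys ending in `.meta` have to be rejected: otherwise a user key could collide with another key's sidecar.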
**Atomicity**: every `put` writes to a `.tmp` sibling file and atomically moves it into place, so a concurrent read against the same key never sees a half-written mix.

**Key safety**: `put`/`get`/`delete` all validate the key: blank keys, a leading `/`, `..` (path traversal), and a trailing `.meta` (sidecar collision) are all rejected. Every resolved path is checked to stay under the configured root via `normalize().startsWith(root)`.

- `FileController` at `/api/v1/files/**`:
  * `POST /api/v1/files?key=...` — multipart upload (form field `file`)
  * `GET /api/v1/files?prefix=...` — list by prefix
  * `GET /api/v1/files/metadata?key=...` — metadata only (doesn't open the stream)
  * `GET /api/v1/files/download?key=...` — stream bytes with the right Content-Type + filename
  * `DELETE /api/v1/files?key=...` — delete by key

  All endpoints are @RequirePermission-gated via the keys declared in the metadata YAML. The `key` is a query parameter, NOT a path variable, so slashes in the key don't collide with Spring's path matching.
- `META-INF/vibe-erp/metadata/files.yml` — 2 permissions + 1 menu.

## Smoke test (fresh DB, as admin)

```
POST /api/v1/files?key=reports/smoke-test.txt (multipart file)
→ 201 {"key":"reports/smoke-test.txt", "size":61, "contentType":"text/plain",
       "createdAt":"...","updatedAt":"..."}

GET /api/v1/files?prefix=reports/
→ [{"key":"reports/smoke-test.txt","size":61, ...}]

GET /api/v1/files/metadata?key=reports/smoke-test.txt → same handle, no bytes

GET /api/v1/files/download?key=reports/smoke-test.txt
→ 200 Content-Type: text/plain
  body: original upload content (diff == 0)

DELETE /api/v1/files?key=reports/smoke-test.txt → 200 {"removed":true}

GET /api/v1/files/download?key=reports/smoke-test.txt → 404

# path traversal
POST /api/v1/files?key=../escape (multipart file)
→ 400 "file key must not contain '..' (got '../escape')"

GET /api/v1/_meta/metadata
→ permissions include ["files.file.read", "files.file.write"]
```

Downloaded bytes match the uploaded bytes exactly — the round trip was verified with `diff -q`.

## Tests

- 12 new unit tests in `LocalDiskFileStorageTest` using JUnit 5's `@TempDir`:
  * `put then get round-trips content and metadata`
  * `put overwrites an existing key with the new content`
  * `get returns null for an unknown key`
  * `exists distinguishes present from absent`
  * `delete removes the file and its metadata sidecar`
  * `delete on unknown key returns false`
  * `list filters by prefix and returns sorted keys`
  * `put rejects a key with dot-dot`
  * `put rejects a key starting with slash`
  * `put rejects a key ending in dot-meta sidecar`
  * `put rejects blank content type`
  * `list sidecar metadata files are hidden from listing results`
- Total framework unit tests: 327 (was 315), all green.

## What this unblocks

- **P1.8 JasperReports integration** — now has a first-class home for generated PDFs. A report renderer can call `fileStorage.put("reports/quote-$code.pdf", "application/pdf", ...)` and return the handle to the caller.
- **Plug-in attachments** — the printing-shop plug-in's future "plate scan image" or "QC report" attachments can be stored via `context.files` without touching the database.
- **Export/import flows** — a scheduled job can write a nightly CSV export via `FileStorage.put`, and a separate endpoint can download it; the scheduler-to-storage path is clean and typed.
- **S3 backend when needed** — the interface is already streaming-based; dropping in an `S3FileStorage` @Component and toggling `vibeerp.files.backend: s3` in config is a future additive chunk with zero api.v1 churn.

## Non-goals (parking lot)

- S3 backend. The config already reads `vibeerp.files.backend`; local is hard-wired for v1.0. Keeps aws-sdk out of the dependency tree until a real consumer exists.
- Range reads / HTTP `Range: bytes=...` support.
  A future enhancement for large-file streaming (e.g. video attachments).
- Presigned URLs (for direct browser-to-S3 upload, skipping the framework). The design decision lives with the S3 backend chunk.
- Per-file ACLs. The two `files.file.*` permissions currently gate all files uniformly; per-path or per-owner ACLs would require a new metadata table and haven't been asked for by any PBC yet.
- Plug-in loader integration. `PluginContext.files` throws the default `UnsupportedOperationException` until the plug-in loader is wired to pass the host `FileStorage` through `DefaultPluginContext`. Lands in the same chunk as the first plug-in that needs to store a file.

-
Closes the P1.10 row of the implementation plan. New platform-jobs subproject shipping a Quartz-backed background job engine adapted to the api.v1 JobHandler contract, so PBCs and plug-ins can register scheduled work without ever importing Quartz types.

## The shape (matches the P2.1 workflow engine)

platform-jobs is to scheduled work what platform-workflow is to BPMN service tasks. Same pattern, same discipline:

- A single `@Component` bridge (`QuartzJobBridge`) is the ONLY org.quartz.Job implementation in the framework. Every persistent trigger points at it.
- A single `JobHandlerRegistry` (owner-tagged, duplicate-key-rejecting, ConcurrentHashMap-backed) holds every registered JobHandler by key. Mirrors `TaskHandlerRegistry`.
- The bridge reads the handler key from the trigger's JobDataMap, looks it up in the registry, and executes the matching JobHandler inside a `PrincipalContext.runAs("system:jobs:<key>")` block, so audit rows written during the job get a structured, greppable `created_by` value ("system:jobs:core.audit.prune") instead of the default `__system__`.
- Handler-thrown exceptions are re-wrapped as `JobExecutionException` so Quartz's MISFIRE machinery handles them properly.
- `@DisallowConcurrentExecution` on the bridge stops a long-running handler from being started again before it finishes.

## api.v1 additions (package `org.vibeerp.api.v1.jobs`)

- `JobHandler` — interface with `key()` + `execute(context)`. Analogous to the workflow TaskHandler. Plug-ins implement this to contribute scheduled work without any Quartz dependency.
- `JobContext` — read-only execution context passed to the handler: principal, locale, correlation id, started-at instant, data map. Unlike TaskContext it has no `set()` writeback — scheduled jobs don't produce continuation state for a downstream step; a job that wants to talk to the rest of the system writes to its own domain table or publishes an event.
- `JobScheduler` — injectable facade exposing:
  * `scheduleCron(scheduleKey, handlerKey, cronExpression, data)`
  * `scheduleOnce(scheduleKey, handlerKey, runAt, data)`
  * `unschedule(scheduleKey): Boolean`
  * `triggerNow(handlerKey, data): JobExecutionSummary` — synchronous in-thread execution, bypasses Quartz; used by the HTTP trigger endpoint and by tests.
  * `listScheduled(): List<ScheduledJobInfo>` — introspection

  Both `scheduleCron` and `scheduleOnce` are idempotent on `scheduleKey` (replace if exists).
- `ScheduledJobInfo` + `JobExecutionSummary` + `ScheduleKind` — read-only DTOs returned by the scheduler.

## platform-jobs runtime

- `QuartzJobBridge` — the shared Job impl. Routes by the `__vibeerp_handler_key` JobDataMap entry. Uses `@Autowired` field injection because Quartz instantiates Job classes through its own JobFactory (Spring Boot's `SpringBeanJobFactory` autowires fields after construction, which is the documented pattern).
- `QuartzJobScheduler` — the concrete api.v1 `JobScheduler` implementation. Builds JobDetail + Trigger pairs under fixed group names (`vibeerp-jobs`), uses `addJob(replace=true)` + explicit `checkExists` + `rescheduleJob` for idempotent scheduling, and strips the reserved `__vibeerp_handler_key` from the data visible to the handler.
- `SimpleJobContext` — internal immutable `JobContext` impl. Defensive-copies the data map at construction.
- `JobHandlerRegistry` — owner-tagged registry (OWNER_CORE by default, any other string for plug-in ownership). Same `register` / `unregister` / `unregisterAllByOwner` / `find` / `keys` / `size` surface as `TaskHandlerRegistry`. The plug-in loader integration seam is defined; the loader hook that calls `register(handler, pluginId)` lands when a plug-in actually ships a job handler (YAGNI).
- `JobController` at `/api/v1/jobs/**`:
  * `GET /handlers` (perm `jobs.handler.read`)
  * `POST /handlers/{key}/trigger` (perm `jobs.job.trigger`)
  * `GET /scheduled` (perm `jobs.schedule.read`)
  * `POST /scheduled` (perm `jobs.schedule.write`)
  * `DELETE /scheduled/{key}` (perm `jobs.schedule.write`)
- `VibeErpPingJobHandler` — built-in diagnostic. Key `vibeerp.jobs.ping`. Logs the invocation and exits. Safe to trigger from any environment; mirrors the core `vibeerp.workflow.ping` workflow handler from P2.1.
- `META-INF/vibe-erp/metadata/jobs.yml` — 4 permissions + 2 menus.

## Spring Boot config (application.yaml)

```
spring.quartz:
  job-store-type: jdbc
  jdbc:
    initialize-schema: always   # creates QRTZ_* tables on first boot
  properties:
    org.quartz.scheduler.instanceName: vibeerp-scheduler
    org.quartz.scheduler.instanceId: AUTO
    org.quartz.threadPool.threadCount: "4"
    org.quartz.jobStore.driverDelegateClass: org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
    org.quartz.jobStore.isClustered: "false"
```

## The config trap caught during smoke-test (documented in-file)

First boot crashed with `SchedulerConfigException: DataSource name not set.` because I'd initially added `org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX` to the raw Quartz properties. That is correct for a standalone Quartz deployment but WRONG for the Spring Boot starter: the starter configures a `LocalDataSourceJobStore` that wraps the Spring-managed DataSource automatically when `job-store-type=jdbc`, and setting `jobStore.class` explicitly overrides that wrapper back to Quartz's standalone JobStoreTX — which then fails at init because standalone Quartz expects a separately named `dataSource` property the Spring Boot starter doesn't supply.

Fix: drop the `jobStore.class` property entirely. `driverDelegateClass` is still fine to set explicitly because it's read by both the standalone and Spring-wrapped JobStore implementations.
The rationale is documented in the config comment so the next maintainer doesn't add it back.

## Smoke test (fresh DB, as admin)

```
GET /api/v1/jobs/handlers
→ {"count": 1, "keys": ["vibeerp.jobs.ping"]}

POST /api/v1/jobs/handlers/vibeerp.jobs.ping/trigger {"data": {"source": "smoke-test"}}
→ 200 {"handlerKey": "vibeerp.jobs.ping", "correlationId": "e142...",
       "startedAt": "...", "finishedAt": "...", "ok": true}
log: VibeErpPingJobHandler invoked at=... principal='system:jobs:manual-trigger' data={source=smoke-test}

GET /api/v1/jobs/scheduled → []

POST /api/v1/jobs/scheduled {"scheduleKey": "ping-every-sec", "handlerKey": "vibeerp.jobs.ping",
                             "cronExpression": "0/1 * * * * ?", "data": {"trigger": "cron"}}
→ 201 {"scheduleKey": "ping-every-sec", "handlerKey": "vibeerp.jobs.ping"}

# after 3 seconds
GET /api/v1/jobs/scheduled
→ [{"scheduleKey": "ping-every-sec", "handlerKey": "vibeerp.jobs.ping", "kind": "CRON",
    "cronExpression": "0/1 * * * * ?", "nextFireTime": "...", "previousFireTime": "...",
    "data": {"trigger": "cron"}}]

DELETE /api/v1/jobs/scheduled/ping-every-sec → 200 {"removed": true}

# handler log count after ~3 seconds of cron ticks
grep -c "VibeErpPingJobHandler invoked" /tmp/boot.log → 5
# 1 manual trigger + 4 cron ticks before unschedule — matches the
# 0/1 * * * * ? expression

# negatives
POST /api/v1/jobs/handlers/nope/trigger
→ 400 "no JobHandler registered for key 'nope'"
POST /api/v1/jobs/scheduled {cronExpression: "not a cron"}
→ 400 "invalid Quartz cron expression: 'not a cron'"
```

## Three schemas coexist in one Postgres database

```
SELECT count(*) FILTER (WHERE table_name LIKE 'qrtz_%') AS quartz_tables,
       count(*) FILTER (WHERE table_name LIKE 'act_%') AS flowable_tables,
       count(*) FILTER (WHERE table_name NOT LIKE 'qrtz_%'
                          AND table_name NOT LIKE 'act_%'
                          AND table_schema = 'public') AS vibeerp_tables
FROM information_schema.tables
WHERE table_schema = 'public';

 quartz_tables | flowable_tables | vibeerp_tables
---------------+-----------------+----------------
            11 |              39 |             48
```

Three independent schema owners (Quartz / Flowable / Liquibase) share one public schema with no collisions. Spring Boot's `QuartzDataSourceScriptDatabaseInitializer` runs the QRTZ_* DDL once and skips on subsequent boots; Flowable's internal MyBatis schema manager does the same for the ACT_* tables; our Liquibase owns the rest.

## Tests

- 6 new tests in `JobHandlerRegistryTest`:
  * initial handlers registered with OWNER_CORE
  * duplicate key fails fast with both owners in the error
  * unregisterAllByOwner only removes handlers owned by that id
  * unregister by key returns false for unknown
  * find on missing key returns null
  * blank key is rejected
- 9 new tests in `QuartzJobSchedulerTest` (Quartz Scheduler mocked):
  * scheduleCron rejects an unknown handler key
  * scheduleCron rejects an invalid cron expression
  * scheduleCron adds job + schedules trigger when nothing exists yet
  * scheduleCron reschedules when the trigger already exists
  * scheduleOnce uses a simple trigger at the requested instant
  * unschedule returns true/false correctly
  * triggerNow calls the handler synchronously and returns ok=true
  * triggerNow propagates the handler's exception
  * triggerNow rejects an unknown handler key
- Total framework unit tests: 315 (was 300), all green.
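A condensed sketch of the api.v1 contract and the owner-tagged registry semantics described above. The names follow the text, but the bodies are illustrative reductions, not the framework's source (the real `JobContext` carries principal, locale, correlation id, and a started-at instant, elided here):

```kotlin
import java.util.concurrent.ConcurrentHashMap

// Illustrative reduction of the api.v1 JobHandler contract.
interface JobHandler {
    fun key(): String
    fun execute(data: Map<String, Any?>)
}

// Owner-tagged, duplicate-key-rejecting, ConcurrentHashMap-backed registry.
class JobHandlerRegistry {
    private val byKey = ConcurrentHashMap<String, Pair<JobHandler, String>>()

    fun register(handler: JobHandler, ownerId: String = "core") {
        require(handler.key().isNotBlank()) { "handler key must not be blank" }
        val prev = byKey.putIfAbsent(handler.key(), handler to ownerId)
        require(prev == null) {
            // Both owners in the error, as the duplicate-key test demands.
            "duplicate JobHandler key '${handler.key()}': owned by '${prev!!.second}', rejected for '$ownerId'"
        }
    }

    // Plug-in teardown seam: strips only the stopping owner's handlers.
    fun unregisterAllByOwner(ownerId: String): Int {
        val owned = byKey.filterValues { it.second == ownerId }.keys
        owned.forEach { byKey.remove(it) }
        return owned.size
    }

    fun keys(): List<String> = byKey.keys.sorted()
}

fun main() {
    val registry = JobHandlerRegistry()
    val ping = object : JobHandler {
        override fun key() = "vibeerp.jobs.ping"
        override fun execute(data: Map<String, Any?>) = println("ping $data")
    }
    val cleanup = object : JobHandler {
        override fun key() = "printing_shop.plate.cleanup"
        override fun execute(data: Map<String, Any?>) = println("cleanup")
    }
    registry.register(ping)                                  // owner defaults to core
    registry.register(cleanup, ownerId = "printing-shop")
    println(registry.keys())                                 // both keys listed
    println(registry.unregisterAllByOwner("printing-shop"))  // 1
    println(registry.keys())                                 // core handler untouched
}
```

The owner tag is what makes plug-in reload safe: teardown never has to enumerate a plug-in's handlers, it just sweeps by owner id.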
## What this unblocks

- **pbc-finance audit prune** — a core recurring job that deletes posted journal entries older than N days, driven by a cron from a Tier 1 metadata row.
- **Plug-in scheduled work** — once the loader integration hook is wired (a trivial follow-up), any plug-in's `start(context)` can register a JobHandler via `context.jobs.register(handler)` and the host strips it on plug-in stop via `unregisterAllByOwner`.
- **Delayed workflow continuations** — a BPMN handler can call `jobScheduler.scheduleOnce(...)` to "re-evaluate this workflow in 24 hours if no one has approved it", bridging the workflow engine and the scheduler without introducing Thread.sleep.
- **Outbox draining strategy** — the existing 5-second OutboxPoller can move from a Spring @Scheduled method to a Quartz cron so it inherits the scheduler's persistence, misfire handling, and the future clustering story.

## Non-goals (parking lot)

- **Clustered scheduling.** `isClustered=false` for now. Flipping it to true requires every instance to share a unique `instanceId` and agree on the JDBC lock policy — doable, but out of v1.0 scope since vibe_erp is single-tenant single-instance by design.
- **Async execution of triggerNow.** The current `triggerNow` runs synchronously on the caller thread so HTTP requests see the real result. A future "fire and forget" endpoint would delegate to `Scheduler.triggerJob(...)` against the JobDetail instead.
- **Per-job permissions.** Today the four `jobs.*` permissions gate the whole controller. A future enhancement could attach per-handler permissions (so "trigger audit prune" requires a different permission than "trigger pricing refresh").
- **Plug-in loader integration.** The seam is defined on `JobHandlerRegistry` (owner tagging + unregisterAllByOwner) but `VibeErpPluginManager` doesn't call it yet. Lands in the same chunk as the first plug-in that ships a JobHandler.

-
…alidates locations at create Follow-up to the pbc-warehousing chunk. Plugs a real gap noticed in the smoke test: an unknown fromLocationCode or toLocationCode on a StockTransfer was silently accepted at create() and only surfaced as a confirm()-time rollback, which is a confusing UX — the operator types TR-001 wrong, hits "create", then hits "confirm" minutes later and sees "location GHOST-SRC is not in the inventory directory". ## api.v1 growth New cross-PBC method on `InventoryApi`: fun findLocationByCode(locationCode: String): LocationRef? Parallel shape to `CatalogApi.findItemByCode` — a lookup-by-code returning a lightweight ref or null, safe for any cross-PBC consumer to inject. The returned `LocationRef` data class carries id, code, name, type (as a String, not the inventory-internal LocationType enum — rationale in the KDoc), and active flag. Fields that are NOT part of the cross-PBC contract (audit columns, ext JSONB, the raw JPA entity) stay inside pbc-inventory. api.v1 additive change within the v1 line — no breaking rename, no signature churn on existing methods. The interface adds a new abstract method, which IS technically a source-breaking change for any in-tree implementation, but the only impl is pbc-inventory/InventoryApiAdapter which is updated in the same commit. No external plug-in implements InventoryApi (by design; plug-ins inject it, they don't provide it). ## Adapter implementation `InventoryApiAdapter.findLocationByCode` resolves the location via the existing `LocationJpaRepository.findByCode`, which is exactly what `recordMovement` already uses. A new private extension `Location.toRef()` builds the api.v1 DTO. Zero new SQL; zero new repository methods. ## pbc-warehousing wiring `StockTransferService.create` now calls the facade twice — once for the source location, once for the destination — BEFORE validating lines. 
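The cross-PBC lookup shape described above can be sketched as follows. The field list comes from the text; the exact types and the in-memory impl are illustrative assumptions — the real adapter resolves via `LocationJpaRepository.findByCode` and maps with a private `Location.toRef()`.

```kotlin
// Hypothetical api.v1 shapes -- field list from the commit text, types assumed.
data class LocationRef(
    val id: String,      // the real framework likely uses UUID; String keeps the sketch dependency-free
    val code: String,
    val name: String,
    val type: String,    // deliberately a String, not the inventory-internal LocationType enum
    val active: Boolean,
)

interface InventoryApi {
    // Lookup-by-code returning a lightweight ref or null -- parallel to CatalogApi.findItemByCode.
    fun findLocationByCode(locationCode: String): LocationRef?
}

// Toy adapter standing in for InventoryApiAdapter: resolve, map, or return null.
class InMemoryInventoryApi(private val byCode: Map<String, LocationRef>) : InventoryApi {
    override fun findLocationByCode(locationCode: String): LocationRef? = byCode[locationCode]
}
```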
The five-step ordering is: code uniqueness → from != to → non-empty lines → both locations exist and are active → per-line validation. Unknown locations produce a 400 with a clear message; deactivated locations produce a 400 distinguishing "doesn't exist" from "exists but can't be used":

"from location code 'GHOST-SRC' is not in the inventory directory"
"from location 'WH-CLOSED' is deactivated and cannot be transfer source"

The confirm() path is unchanged. Locations may still vanish between create and confirm (though the likelihood is low for a normal workflow), and `recordMovement` will still raise its own error in that case — belt and suspenders.

## Smoke test

```
POST /api/v1/inventory/locations {code: WH-GOOD, type: WAREHOUSE}
POST /api/v1/catalog/items {code: ITEM-1, baseUomCode: ea}

POST /api/v1/warehousing/stock-transfers {code: TR-bad, fromLocationCode: GHOST-SRC, toLocationCode: WH-GOOD, lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
→ 400 "from location code 'GHOST-SRC' is not in the inventory directory"
  (before this commit: 201 DRAFT, then 400 at confirm)

POST /api/v1/warehousing/stock-transfers {code: TR-bad2, fromLocationCode: WH-GOOD, toLocationCode: GHOST-DST, lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
→ 400 "to location code 'GHOST-DST' is not in the inventory directory"

POST /api/v1/warehousing/stock-transfers {code: TR-ok, fromLocationCode: WH-GOOD, toLocationCode: WH-OTHER, lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
→ 201 DRAFT ← happy path still works
```

## Tests

- Updated the 3 existing `StockTransferServiceTest` tests that created real transfers to stub `inventory.findLocationByCode` for both WH-A and WH-B via a new `stubLocation()` helper.
- 3 new tests:
  * `create rejects unknown from location via InventoryApi`
  * `create rejects unknown to location via InventoryApi`
  * `create rejects a deactivated from location`
- Total framework unit tests: 300 (was 297), all green.
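The create-time ordering can be sketched as a pure validation function. Names, the exception type, and the lookup lambda are illustrative stand-ins for the real service and the injected InventoryApi; the unknown-location messages mirror the ones above, the deactivated-location wording is approximated.

```kotlin
class ValidationException(message: String) : IllegalArgumentException(message)

data class LocationRef(val code: String, val active: Boolean)
data class TransferLine(val lineNo: Int, val itemCode: String, val quantity: Int)

// Sketch of StockTransferService.create's validation sequence, in order.
fun validateCreate(
    codeExists: Boolean,
    fromCode: String,
    toCode: String,
    lines: List<TransferLine>,
    findLocation: (String) -> LocationRef?,   // stands in for InventoryApi.findLocationByCode
) {
    if (codeExists) throw ValidationException("stock transfer code already exists")
    if (fromCode == toCode) throw ValidationException("from and to locations must differ")
    if (lines.isEmpty()) throw ValidationException("a stock transfer needs at least one line")
    for ((label, code) in listOf("from" to fromCode, "to" to toCode)) {
        val loc = findLocation(code)
            ?: throw ValidationException("$label location code '$code' is not in the inventory directory")
        if (!loc.active) throw ValidationException(
            "$label location '$code' is deactivated and cannot be used"
        )
    }
    // ...per-line validation (item exists, quantity > 0) runs last
}
```

Unknown and deactivated locations surface as distinct messages at create time, before any line is inspected.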
## Why this isn't a breaking api.v1 change InventoryApi is an interface consumed by other PBCs and by plug-ins, implemented ONLY by pbc-inventory. Adding a new method to an interface IS a source-breaking change for any implementer — but the framework's dependency rules mean no external code implements this interface. Plug-ins and other PBCs CONSUME it via dependency injection; the only production impl is InventoryApiAdapter, updated in the same commit. Binary compatibility for consumers is preserved: existing call sites compile and run unchanged because only the interface grew, not its existing methods. If/when a third party implements InventoryApi (e.g. a test double outside the framework, or a custom backend plug-in), this would be a semver-major-worthy addition. For the in-tree framework, it's additive-within-a-major. -
…duction auto-creates WorkOrder First end-to-end cross-PBC workflow driven entirely from a customer plug-in through api.v1 surfaces. A printing-shop BPMN kicks off a TaskHandler that publishes a generic api.v1 event; pbc-production reacts by creating a DRAFT WorkOrder. The plug-in has zero compile-time coupling to pbc-production, and pbc-production has zero knowledge the plug-in exists. ## Why an event, not a facade Two options were on the table for "how does a plug-in ask pbc-production to create a WorkOrder": (a) add a new cross-PBC facade `api.v1.ext.production.ProductionApi` with a `createWorkOrder(command)` method (b) add a generic `WorkOrderRequestedEvent` in `api.v1.event.production` that anyone can publish — this commit Facade pattern (a) is what InventoryApi.recordMovement and CatalogApi.findItemByCode use: synchronous, in-transaction, caller-blocks-on-completion. Event pattern (b) is what SalesOrderConfirmedEvent → SalesOrderConfirmedSubscriber uses: asynchronous over the bus, still in-transaction (the bus uses `Propagation.MANDATORY` with synchronous delivery so a failure rolls everything back), but the caller doesn't need a typed result. Option (b) wins for plug-in → pbc-production: - Plug-in compile-time surface stays identical: plug-ins already import `api.v1.event.*` to publish. No new api.v1.ext package. Zero new plug-in dependency. - The outbox gets the row for free — a crash between publish and delivery replays cleanly from `platform__event_outbox`. - A second customer plug-in shipping a different flow that ALSO wants to auto-spawn work orders doesn't need a second facade, just publishes the same event. pbc-scheduling (future) can subscribe to the same channel without duplicating code. The synchronous facade pattern stays the right tool for cross-PBC operations the caller needs to observe (read-throughs, inventory debits that must block the current transaction). Creating a DRAFT work order is a fire-and-trust operation — the event shape fits. 
## What landed ### api.v1 — WorkOrderRequestedEvent New event class `org.vibeerp.api.v1.event.production.WorkOrderRequestedEvent` with four required fields: - `code`: desired work-order code (must be unique globally; convention is to bake the source reference into it so duplicate detection is trivial, e.g. `WO-FROM-PRINTINGSHOP-Q-007`) - `outputItemCode` + `outputQuantity`: what to produce - `sourceReference`: opaque free-form pointer used in logs and the outbox audit trail. Example values: `plugin:printing-shop:quote:Q-007`, `pbc-orders-sales:SO-2026-001:L2` The class is a `DomainEvent` (not a `WorkOrderEvent` subclass — the existing `WorkOrderEvent` sealed interface is for LIFECYCLE events published BY pbc-production, not for inbound requests). `init` validators reject blank strings and non-positive quantities so a malformed event fails fast at publish time rather than at the subscriber. ### pbc-production — WorkOrderRequestedSubscriber New `@Component` in `pbc/pbc-production/.../event/WorkOrderRequestedSubscriber.kt`. Subscribes in `@PostConstruct` via the typed-class `EventBus.subscribe` overload (same pattern as `SalesOrderConfirmedSubscriber` + the six pbc-finance order subscribers). The subscriber: 1. Looks up `workOrders.findByCode(event.code)` as the idempotent short-circuit. If a WorkOrder with that code already exists (outbox replay, future async bus retry, developer re-running the same BPMN process), the subscriber logs at DEBUG and returns. **Second execution of the same BPMN produces the same outbox row which the subscriber then skips — the database ends up with exactly ONE WorkOrder regardless of how many times the process runs.** 2. Calls `WorkOrderService.create(CreateWorkOrderCommand(...))` with the event's fields. `sourceSalesOrderCode` is null because this is the generic path, not the SO-driven one. Why this is a SECOND subscriber rather than extending `SalesOrderConfirmedSubscriber`: the two events serve different producers. 
`SalesOrderConfirmedEvent` is pbc-orders-sales-specific and requires a round-trip through `SalesOrdersApi.findByCode` to fetch the lines; `WorkOrderRequestedEvent` carries everything the subscriber needs inline. Collapsing them would mean the generic path inherits the SO-flow's SO-specific lookup and short-circuit logic that doesn't apply to it. ### reference printing-shop plug-in — CreateWorkOrderFromQuoteTaskHandler New plug-in TaskHandler in `reference-customer/plugin-printing-shop/.../workflow/CreateWorkOrderFromQuoteTaskHandler.kt`. Captures the `PluginContext` via constructor — same pattern as `PlateApprovalTaskHandler` landed in `7b2ab34d` — and from inside `execute`: 1. Reads `quoteCode`, `itemCode`, `quantity` off the process variables (`quantity` accepts Number or String since Flowable's variable coercion is flexible). 2. Derives `workOrderCode = "WO-FROM-PRINTINGSHOP-$quoteCode"` and `sourceReference = "plugin:printing-shop:quote:$quoteCode"`. 3. Logs via `context.logger.info(...)` — the line is tagged `[plugin:printing-shop]` by the framework's `Slf4jPluginLogger`. 4. Publishes `WorkOrderRequestedEvent` via `context.eventBus.publish(...)`. This is the first time a plug-in TaskHandler publishes a cross-PBC event from inside a workflow — proves the event-bus leg of the handler-context pattern works end-to-end. 5. Writes `workOrderCode` + `workOrderRequested=true` back to the process variables so a downstream BPMN step or the HTTP caller can see the derived code. The handler is registered in `PrintingShopPlugin.start(context)` alongside `PlateApprovalTaskHandler`: context.taskHandlers.register(PlateApprovalTaskHandler(context)) context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context)) Teardown via `unregisterAllByOwner("printing-shop")` still works unchanged — the scoped registrar tracks both handlers. 
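The event's fail-fast validators plus the handler's derivation and coercion rules (steps 1–2 above) can be sketched together. Everything here is an illustrative stand-in: the real event extends api.v1's `DomainEvent`, the real handler goes through `PluginContext`/process variables, and the quantity type may differ.

```kotlin
import java.math.BigDecimal

// Four required fields from the text; init rejects blank strings and
// non-positive quantities so a malformed event fails at publish time.
data class WorkOrderRequestedEvent(
    val code: String,             // e.g. WO-FROM-PRINTINGSHOP-Q-007 -- bake the source ref in
    val outputItemCode: String,
    val outputQuantity: BigDecimal,
    val sourceReference: String,  // e.g. plugin:printing-shop:quote:Q-007
) {
    init {
        require(code.isNotBlank()) { "code must not be blank" }
        require(outputItemCode.isNotBlank()) { "outputItemCode must not be blank" }
        require(outputQuantity.signum() > 0) { "outputQuantity must be positive" }
        require(sourceReference.isNotBlank()) { "sourceReference must not be blank" }
    }
}

// Flowable's variable coercion is flexible, so quantity may arrive as Number or String.
fun coerceQuantity(raw: Any?): BigDecimal = when (raw) {
    is BigDecimal -> raw
    is Number -> BigDecimal(raw.toString())
    is String -> raw.toBigDecimal()
    else -> error("process variable 'quantity' is missing or not coercible")
}

// Derivation rules from the handler's steps 1-2.
fun eventForQuote(quoteCode: String, itemCode: String, rawQuantity: Any?) =
    WorkOrderRequestedEvent(
        code = "WO-FROM-PRINTINGSHOP-$quoteCode",
        outputItemCode = itemCode,
        outputQuantity = coerceQuantity(rawQuantity),
        sourceReference = "plugin:printing-shop:quote:$quoteCode",
    )
```

Because the code bakes the quote reference in, the subscriber's `findByCode` short-circuit makes replays naturally idempotent.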
### reference printing-shop plug-in — quote-to-work-order.bpmn20.xml New BPMN file `processes/quote-to-work-order.bpmn20.xml` in the plug-in JAR. Single synchronous service task, process definition key `plugin-printing-shop-quote-to-work-order`, service task id `printing_shop.quote.create_work_order` (matches the handler key). Auto-deployed by the host's `PluginProcessDeployer` at plug-in start — the printing-shop plug-in now ships two BPMNs bundled into one Flowable deployment, both under category `printing-shop`. ## Smoke test (fresh DB) ``` $ docker compose down -v && docker compose up -d db $ ./gradlew :distribution:bootRun & ... registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' ... registered TaskHandler 'printing_shop.quote.create_work_order' owner='printing-shop' ... [plugin:printing-shop] registered 2 TaskHandlers: printing_shop.plate.approve, printing_shop.quote.create_work_order PluginProcessDeployer: plug-in 'printing-shop' deployed 2 BPMN resource(s) as Flowable deploymentId='1e5c...': [processes/quote-to-work-order.bpmn20.xml, processes/plate-approval.bpmn20.xml] pbc-production subscribed to WorkOrderRequestedEvent via EventBus.subscribe (typed-class overload) # 1) seed a catalog item $ curl -X POST /api/v1/catalog/items {"code":"BOOK-HARDCOVER","name":"Hardcover book","itemType":"GOOD","baseUomCode":"ea"} → 201 BOOK-HARDCOVER # 2) start the plug-in's quote-to-work-order BPMN $ curl -X POST /api/v1/workflow/process-instances {"processDefinitionKey":"plugin-printing-shop-quote-to-work-order", "variables":{"quoteCode":"Q-007","itemCode":"BOOK-HARDCOVER","quantity":500}} → 201 {"ended":true, "variables":{"quoteCode":"Q-007", "itemCode":"BOOK-HARDCOVER", "quantity":500, "workOrderCode":"WO-FROM-PRINTINGSHOP-Q-007", "workOrderRequested":true}} Log lines observed: [plugin:printing-shop] quote Q-007: publishing WorkOrderRequestedEvent (code=WO-FROM-PRINTINGSHOP-Q-007, item=BOOK-HARDCOVER, qty=500) [production] WorkOrderRequestedEvent 
creating work order 'WO-FROM-PRINTINGSHOP-Q-007' for item 'BOOK-HARDCOVER' x 500 (source='plugin:printing-shop:quote:Q-007') # 3) verify the WorkOrder now exists in pbc-production $ curl /api/v1/production/work-orders → [{"id":"029c2482-...", "code":"WO-FROM-PRINTINGSHOP-Q-007", "outputItemCode":"BOOK-HARDCOVER", "outputQuantity":500.0, "status":"DRAFT", "sourceSalesOrderCode":null, "inputs":[], "ext":{}}] # 4) run the SAME BPMN a second time — verify idempotent $ curl -X POST /api/v1/workflow/process-instances {same body as above} → 201 (process ends, workOrderRequested=true, new event published + delivered) $ curl /api/v1/production/work-orders → count=1, still only WO-FROM-PRINTINGSHOP-Q-007 ``` Every single step runs through an api.v1 public surface. No framework core code knows the printing-shop plug-in exists; no plug-in code knows pbc-production exists. They meet on the event bus, and the outbox guarantees the delivery. ## Tests - 3 new tests in `pbc-production/.../WorkOrderRequestedSubscriberTest`: * `subscribe registers one listener for WorkOrderRequestedEvent` * `handle creates a work order from the event fields` — captures the `CreateWorkOrderCommand` and asserts every field * `handle short-circuits when a work order with that code already exists` — proves the idempotent branch - Total framework unit tests: 278 (was 275), all green. ## What this unblocks - **Richer multi-step BPMNs** in the plug-in that chain plate approval + quote → work order + production start + completion. - **Plug-in-owned Quote entity** — the printing-shop plug-in can now introduce a `plugin_printingshop__quote` table via its own Liquibase changelog and have its HTTP endpoint create quotes that kick off the quote-to-work-order workflow automatically (or on operator confirm). - **pbc-production routings/operations (v3)** — each operation becomes a BPMN step, potentially driven by plug-ins contributing custom steps via the same TaskHandler + event seam. 
- **Second reference plug-in** — any new customer plug-in can publish `WorkOrderRequestedEvent` from its own workflows without any framework change. ## Non-goals (parking lot) - The handler publishes but does not also read pbc-production state back. A future "wait for WO completion" BPMN step could subscribe to `WorkOrderCompletedEvent` inside a user-task + signal flow, but the engine's signal/correlation machinery isn't wired to plug-ins yet. - Quote entity + HTTP + real business logic. REF.1 proves the cross-PBC event seam; the richer quote lifecycle is a separate chunk that can layer on top of this. - Transactional rollback integration test. The synchronous bus + `Propagation.MANDATORY` guarantees it, but an explicit test that a subscriber throw rolls back both the ledger-adjacent writes and the Flowable process state would be worth adding with a real test container run. -
## What's new Plug-ins can now contribute workflow task handlers to the framework. The P2.1 `TaskHandlerRegistry` only saw `@Component` TaskHandler beans from the host Spring context; handlers defined inside a PF4J plug-in were invisible because the plug-in's child classloader is not in the host's bean list. This commit closes that gap. ## Mechanism ### api.v1 - New interface `org.vibeerp.api.v1.workflow.PluginTaskHandlerRegistrar` with a single `register(handler: TaskHandler)` method. Plug-ins call it from inside their `start(context)` lambda. - `PluginContext.taskHandlers: PluginTaskHandlerRegistrar` — added as a new optional member with a default implementation that throws `UnsupportedOperationException("upgrade to v0.7 or later")`, so pre-existing plug-in jars remain binary-compatible with the new host and a plug-in built against v0.7 of the api-v1 surface fails fast on an old host instead of silently doing nothing. Same pattern we used for `endpoints` and `jdbc`. ### platform-workflow - `TaskHandlerRegistry` gains owner tagging. Every registered handler now carries an `ownerId`: core `@Component` beans get `TaskHandlerRegistry.OWNER_CORE = "core"` (auto-assigned through the constructor-injection path), plug-in-contributed handlers get their PF4J plug-in id. New API: * `register(handler, ownerId = OWNER_CORE)` (default keeps existing call sites unchanged) * `unregisterAllByOwner(ownerId): Int` — strip every handler owned by that id in one call, returns the count for log correlation * The duplicate-key error message now includes both owners so a plug-in trying to stomp on a core handler gets an actionable "already registered by X (owner='core'), attempted by Y (owner='printing-shop')" instead of "already registered". * Internal storage switched from `ConcurrentHashMap<String, TaskHandler>` to `ConcurrentHashMap<String, Entry>` where `Entry` carries `(handler, ownerId)`. `find(key)` still returns `TaskHandler?` so the dispatcher is unchanged. 
- No behavioral change for the hot-path (`DispatchingJavaDelegate`) — only the registration/teardown paths changed.

### platform-plugins

- New dependency on `:platform:platform-workflow` (the only new inter-module dep of this chunk; it is the module that exposes `TaskHandlerRegistry`).
- New internal class `ScopedTaskHandlerRegistrar(hostRegistry, pluginId)` that implements the api.v1 `PluginTaskHandlerRegistrar` by delegating `register(handler)` to `hostRegistry.register(handler, ownerId = pluginId)`. Constructed fresh per plug-in by `VibeErpPluginManager`, so the plug-in never sees (or can tamper with) the owner id.
- `DefaultPluginContext` gains a `scopedTaskHandlers` constructor parameter and exposes it as the `PluginContext.taskHandlers` override.
- `VibeErpPluginManager`:
  * injects `TaskHandlerRegistry`
  * constructs `ScopedTaskHandlerRegistrar(registry, pluginId)` per plug-in when building `DefaultPluginContext`
  * partial-start failure now also calls `taskHandlerRegistry.unregisterAllByOwner(pluginId)`, matching the existing `endpointRegistry.unregisterAll(pluginId)` cleanup so a throwing `start(context)` cannot leave stale registrations
  * `destroy()` calls the same `unregisterAllByOwner` for every started plug-in in reverse order, mirroring the endpoint cleanup

### reference-customer/plugin-printing-shop

- New file `workflow/PlateApprovalTaskHandler.kt` — the first plug-in-contributed TaskHandler in the framework. Key `printing_shop.plate.approve`. Reads a `plateId` process variable, writes `plateApproved`, `plateId`, `approvedBy` (principal label), `approvedAt` (ISO instant) and exits. No DB mutation yet: a proper plate-approval handler would UPDATE `plugin_printingshop__plate` via `context.jdbc`, but that requires handing the TaskHandler a projection of the PluginContext — a deliberate non-goal of this chunk, deferred to the "handler context" follow-up.
- `PrintingShopPlugin.start(context)` now ends with `context.taskHandlers.register(PlateApprovalTaskHandler())` and logs the registration.
- Package layout: `org.vibeerp.reference.printingshop.workflow` is the plug-in's workflow namespace going forward (the next printing-shop handlers for REF.1 — quote-to-job-card, job-card-to-work-order — will live alongside).

## Smoke test (fresh DB, plug-in staged)

```
$ docker compose down -v && docker compose up -d db
$ ./gradlew :distribution:bootRun &
... TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
... plug-in 'printing-shop' Liquibase migrations applied successfully
vibe_erp plug-in loaded: id=printing-shop version=0.1.0-SNAPSHOT state=STARTED
[plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' class='org.vibeerp.reference.printingshop.workflow.PlateApprovalTaskHandler'
[plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve

$ curl /api/v1/workflow/handlers (as admin)
{ "count": 2, "keys": ["printing_shop.plate.approve", "vibeerp.workflow.ping"] }

$ curl /api/v1/plugins/printing-shop/ping # plug-in HTTP still works
{"plugin":"printing-shop","ok":true,"version":"0.1.0-SNAPSHOT", ...}

$ curl -X POST /api/v1/workflow/process-instances {"processDefinitionKey":"vibeerp-workflow-ping"}
(principal propagation from previous commit still works — pingedBy=user:admin)

$ kill -TERM <pid>
[ionShutdownHook] vibe_erp stopping 1 plug-in(s)
[ionShutdownHook] [plugin:printing-shop] printing-shop plug-in stopped
[ionShutdownHook] unregistered TaskHandler 'printing_shop.plate.approve' (owner stopped)
[ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
```

Every expected lifecycle event fires in the right order with the right owner attribution. Core handlers are untouched by plug-in teardown.
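The seam this smoke test exercises — the default-throw context member plus the per-plug-in scoped registrar — can be sketched in dependency-free Kotlin. Type names mirror the text; the bodies are illustrative stand-ins for the real api.v1 and platform classes.

```kotlin
interface TaskHandler { val key: String }
interface PluginTaskHandlerRegistrar { fun register(handler: TaskHandler) }

interface PluginContext {
    // New optional member with the default-throw backward-compat pattern:
    // a plug-in built against the new surface fails loudly on an old host
    // instead of silently doing nothing.
    val taskHandlers: PluginTaskHandlerRegistrar
        get() = throw UnsupportedOperationException("upgrade to v0.7 or later")
}

// Stand-in for TaskHandlerRegistry: only the owner-tagged register seam matters here.
class HostRegistry {
    val registered = mutableListOf<Pair<String, String>>() // (handler key, ownerId)
    fun register(handler: TaskHandler, ownerId: String) { registered += handler.key to ownerId }
}

// Constructed fresh per plug-in by the manager, so the plug-in never sees
// (or can tamper with) its owner id.
class ScopedTaskHandlerRegistrar(
    private val hostRegistry: HostRegistry,
    private val pluginId: String,
) : PluginTaskHandlerRegistrar {
    override fun register(handler: TaskHandler) =
        hostRegistry.register(handler, ownerId = pluginId)
}
```

The plug-in only ever calls `context.taskHandlers.register(handler)`; the owner attribution in the logs falls out of the scoped delegation.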
## Tests - 4 new / updated tests on `TaskHandlerRegistryTest`: * `unregisterAllByOwner only removes handlers owned by that id` — 2 core + 2 plug-in, unregister the plug-in owner, only the 2 plug-in keys are removed * `unregisterAllByOwner on unknown owner returns zero` * `register with blank owner is rejected` * Updated `duplicate key fails fast` to assert the new error message format including both owner ids - Total framework unit tests: 269 (was 265), all green. ## What this unblocks - **REF.1** (real printing-shop quote→job-card workflow) can now register its production handlers through the same seam - **Plug-in-contributed handlers with state access** — the next design question is how a plug-in handler gets at the plug-in's database and translator. Two options: pass a projection of the PluginContext through TaskContext, or keep a reference to the context captured at plug-in start (closure). The PlateApproval handler in this chunk is pure on purpose to keep the seam conversation separate. - **Plug-in-shipped BPMN auto-deployment** — Flowable's default classpath scan uses `classpath*:/processes/*.bpmn20.xml` which does NOT see PF4J plug-in classloaders. A dedicated `PluginProcessDeployer` that walks each started plug-in's JAR for BPMN resources and calls `repositoryService.createDeployment` is the natural companion to this commit, still pending. ## Non-goals (still parking lot) - BPMN processes shipped inside plug-in JARs (see above — needs its own chunk, because it requires reading resources from the PF4J classloader and constructing a Flowable deployment by hand) - Per-handler permission checks — a handler that wants a permission gate still has to call back through its own context; P4.3's @RequirePermission aspect doesn't reach into Flowable delegate execution. - Hot reload of a running plug-in's TaskHandlers. The seam supports it, but `unloadPlugin` + `loadPlugin` at runtime isn't exercised by any current caller. -
Removes the ext-handling copy/paste that had grown across four PBCs (partners, inventory, orders-sales, orders-purchase). Every service that wrote the JSONB `ext` column was manually doing the same four-step sequence: validate, null-check, serialize with a local ObjectMapper, assign to the entity. And every response mapper was doing the inverse: check-if-blank, parse, cast, swallow errors. Net: ~15 lines saved per PBC, one place to change the ext contract later (e.g. PII redaction, audit tagging, field-level events), and a stable plug-in opt-in mechanism — any plug-in entity that implements `HasExt` automatically participates. New api.v1 surface: interface HasExt { val extEntityName: String // key into metadata__custom_field var ext: String // the serialized JSONB column } Lives in `org.vibeerp.api.v1.entity` so plug-ins can opt their own entities into the same validation path. Zero Spring/Jackson dependencies — api.v1 stays clean. Extended `ExtJsonValidator` (platform-metadata) with two helpers: fun applyTo(entity: HasExt, ext: Map<String, Any?>?) — null-safe; validates; writes canonical JSON to entity.ext. Replaces the validate + writeValueAsString + assign triplet in every service's create() and update(). fun parseExt(entity: HasExt): Map<String, Any?> — returns empty map on blank/corrupt column; response mappers never 500 on bad data. Replaces the four identical parseExt local functions. ExtJsonValidator now takes an ObjectMapper via constructor injection (Spring Boot's auto-configured bean). 
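A runnable sketch of the two helpers' contract: the real `ExtJsonValidator` uses an injected Jackson `ObjectMapper` and validates against the declared metadata fields, so the pluggable (de)serializer pair here is an assumption made only to keep the sketch dependency-free.

```kotlin
// api.v1 opt-in surface, as described above.
interface HasExt {
    val extEntityName: String   // key into metadata__custom_field
    var ext: String             // the serialized JSONB column
}

class ExtJsonValidator(
    private val serialize: (Map<String, Any?>) -> String,
    private val deserialize: (String) -> Map<String, Any?>,
) {
    // Null-safe: null means the request carried no ext, so the stored value is preserved.
    fun applyTo(entity: HasExt, ext: Map<String, Any?>?) {
        if (ext == null) return
        // ...the real impl validates keys/values for entity.extEntityName here...
        entity.ext = serialize(ext)
    }

    // Blank or corrupt columns degrade to an empty map -- response mappers never 500.
    fun parseExt(entity: HasExt): Map<String, Any?> =
        if (entity.ext.isBlank()) emptyMap()
        else runCatching { deserialize(entity.ext) }.getOrDefault(emptyMap())
}
```

`applyTo` replaces the validate + serialize + assign triplet in every create()/update(); `parseExt` replaces the four identical local helpers in the response mappers.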
Entities that now implement HasExt (override val extEntityName; override var ext; companion object const val ENTITY_NAME): - Partner (`partners.Partner` → "Partner") - Location (`inventory.Location` → "Location") - SalesOrder (`orders_sales.SalesOrder` → "SalesOrder") - PurchaseOrder (`orders_purchase.PurchaseOrder` → "PurchaseOrder") Deliberately NOT converted this chunk: - WorkOrder (pbc-production) — its ext column has no declared fields yet; a follow-up that adds declarations AND the HasExt implementation is cleaner than splitting the two. - JournalEntry (pbc-finance) — derived state, no ext column. Services lose: - The `jsonMapper: ObjectMapper = ObjectMapper().registerKotlinModule()` field (four copies eliminated) - The `parseExt(entity): Map` helper function (four copies) - The `companion object { const val ENTITY_NAME = ... }` constant (moved onto the entity where it belongs) - The `val canonicalExt = extValidator.validate(...)` + `.also { it.ext = jsonMapper.writeValueAsString(canonicalExt) }` create pattern (replaced with one applyTo call) - The `if (command.ext != null) { ... }` update pattern (applyTo is null-safe) Unit tests: 6 new cases on ExtJsonValidatorTest cover applyTo and parseExt (null-safe path, happy path, failure path, blank column, round-trip, malformed JSON). Existing service tests just swap the mock setup from stubbing `validate` to stubbing `applyTo` and `parseExt` with no-ops. Smoke verified end-to-end against real Postgres: - POST /partners with valid ext (partners_credit_limit, partners_industry) → 201, canonical form persisted. - GET /partners/by-code/X → 200, ext round-trips. - POST with invalid enum value → 400 "value 'x' is not in allowed set [printing, publishing, packaging, other]". - POST with undeclared key → 400 "ext contains undeclared key(s) for 'Partner': [rogue_field]". - PATCH with new ext → 200, ext updated. - PATCH WITHOUT ext field → 200, prior ext preserved (null-safe applyTo). 
- POST /orders/sales-orders with no ext → 201, the create path via the shared helper still works. 246 unit tests (+6 over 240), 18 Gradle subprojects. -
Grows pbc-production from the minimal v1 (DRAFT → COMPLETED in one step, single output, no BOM) into a real v2 production PBC: 1. IN_PROGRESS state between DRAFT and COMPLETED so "started but not finished" work orders are observable on a dashboard. WorkOrderService.start(id) performs the transition and publishes a new WorkOrderStartedEvent. cancel() now accepts DRAFT OR IN_PROGRESS (v2 writes nothing to the ledger at start() so there is nothing to undo on cancel). 2. Bill of materials via a new WorkOrderInput child entity — @OneToMany with cascade + orphanRemoval, same shape as SalesOrderLine. Each line carries (lineNo, itemCode, quantityPerUnit, sourceLocationCode). complete() now iterates the inputs in lineNo order and writes one MATERIAL_ISSUE ledger row per line (delta = -(quantityPerUnit × outputQuantity)) BEFORE writing the PRODUCTION_RECEIPT for the output. All in one transaction — a failure anywhere rolls back every prior ledger row AND the status flip. Empty inputs list is legal (the v1 auto-spawn-from-SO path still works unchanged, writing only the PRODUCTION_RECEIPT). 3. Scrap flow for COMPLETED work orders via a new scrap(id, scrapLocationCode, quantity, note) service method. Writes a negative ADJUSTMENT ledger row tagged WO:<code>:SCRAP and publishes a new WorkOrderScrappedEvent. Chose ADJUSTMENT over adding a new SCRAP movement reason to keep the enum stable — the reference-string suffix is the disambiguator. The work order itself STAYS COMPLETED; scrap is a correction on top of a terminal state, not a state change. complete() now requires IN_PROGRESS (not DRAFT); existing callers must start() first. api.v1 grows two events (WorkOrderStartedEvent, WorkOrderScrappedEvent) alongside the three that already existed. Since this is additive within a major version, the api.v1 semver contract holds — existing subscribers continue to compile. 
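The complete() ledger sequence can be sketched as a pure function over the BOM. Names are illustrative; in the real service this all runs inside one @Transactional method alongside the status flip, so a failure anywhere rolls back every prior row.

```kotlin
import java.math.BigDecimal

data class InputLine(val lineNo: Int, val itemCode: String, val quantityPerUnit: BigDecimal)
data class LedgerRow(val reason: String, val itemCode: String, val delta: BigDecimal, val reference: String)

// One MATERIAL_ISSUE per BOM line, in lineNo order, BEFORE the single
// PRODUCTION_RECEIPT for the output. delta = -(quantityPerUnit × outputQuantity).
fun ledgerRowsForComplete(
    workOrderCode: String,
    outputItemCode: String,
    outputQuantity: BigDecimal,
    inputs: List<InputLine>,
): List<LedgerRow> {
    val reference = "WO:$workOrderCode"
    val issues = inputs.sortedBy { it.lineNo }.map {
        LedgerRow("MATERIAL_ISSUE", it.itemCode, (it.quantityPerUnit * outputQuantity).negate(), reference)
    }
    // Empty inputs list is legal: the v1 auto-spawn-from-SO path writes only the receipt.
    return issues + LedgerRow("PRODUCTION_RECEIPT", outputItemCode, outputQuantity, reference)
}
```

Running it against the smoke-test BOM (2 paper + 0.5 ink per unit, output 100) reproduces the -200 / -50 / +100 deltas below.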
Liquibase: 002-production-v2.xml widens the status CHECK and creates production__work_order_input with (work_order_id FK, line_no, item_code, quantity_per_unit, source_location_code) plus a unique (work_order_id, line_no) constraint, a CHECK quantity_per_unit > 0, and the audit columns. ON DELETE CASCADE from the parent. Unit tests: WorkOrderServiceTest grows from 8 to 18 cases — covers start happy path, start rejection, complete-on-DRAFT rejection, empty-BOM complete, BOM-with-two-lines complete (verifies both MATERIAL_ISSUE deltas AND the PRODUCTION_RECEIPT all fire with the right references), scrap happy path, scrap on non-COMPLETED rejection, scrap with non-positive quantity rejection, cancel-from-IN_PROGRESS, and BOM validation rejects (unknown item, duplicate line_no). Smoke verified end-to-end against real Postgres: - Created WO-SMOKE with 2-line BOM (2 paper + 0.5 ink per brochure, output 100). - Started (DRAFT → IN_PROGRESS, no ledger rows). - Completed: paper balance 500→300 (MATERIAL_ISSUE -200), ink 200→150 (MATERIAL_ISSUE -50), FG-BROCHURE 0→100 (PRODUCTION_RECEIPT +100). All 3 rows tagged WO:WO-SMOKE. - Scrapped 7 units: FG-BROCHURE 100→93, ADJUSTMENT -7 tagged WO:WO-SMOKE:SCRAP, work order stayed COMPLETED. - Auto-spawn: SO-42 confirm still creates WO-FROM-SO-42-L1 as a DRAFT with empty BOM; starting + completing it writes only the PRODUCTION_RECEIPT (zero MATERIAL_ISSUE rows), proving the empty-BOM path is backwards-compatible. - Negative paths: complete-on-DRAFT 400s, scrap-on-DRAFT 400s, double-start 400s, cancel-from-IN_PROGRESS 200. 240 unit tests, 18 Gradle subprojects.
-
The framework's eighth PBC and the first one that's NOT order- or master-data-shaped. Work orders are about *making things*, which is the reason the printing-shop reference customer exists in the first place. With this PBC in place the framework can express the full buy-sell-make loop end-to-end.

What landed (new module pbc/pbc-production/)

- WorkOrder entity (production__work_order): code, output_item_code, output_quantity, status (DRAFT|COMPLETED|CANCELLED), due_date (display-only), source_sales_order_code (nullable — work orders can be either auto-spawned from a confirmed SO or created manually), ext.
- WorkOrderJpaRepository with existsBySourceSalesOrderCode / findBySourceSalesOrderCode for the auto-spawn dedup.
- WorkOrderService.create / complete / cancel:
  • create validates the output item via CatalogApi (same seam SalesOrderService and PurchaseOrderService use), rejects non-positive quantities, publishes WorkOrderCreatedEvent.
  • complete(outputLocationCode) credits finished goods to the named location via InventoryApi.recordMovement with reason=PRODUCTION_RECEIPT (added in commit c52d0d59) and reference="WO:<order_code>", then flips status to COMPLETED, then publishes WorkOrderCompletedEvent — all in the same @Transactional method.
  • cancel only allowed from DRAFT (no un-producing finished goods); publishes WorkOrderCancelledEvent.
- SalesOrderConfirmedSubscriber (@PostConstruct → EventBus.subscribe(SalesOrderConfirmedEvent::class.java, ...)): walks the confirmed sales order's lines via SalesOrdersApi (NOT by importing pbc-orders-sales) and calls WorkOrderService.create for each line. Coded as one bean with one subscription — matches pbc-finance's one-bean-per-subject pattern.
  • Idempotent on source sales order code — if any work order already exists for the SO, the whole spawn is a no-op.
  • Tolerant of a missing SO (defensive against a future async bus that could deliver the confirm event after the SO has vanished).
• The WO code convention: WO-FROM-<so_code>-L<lineno>, e.g. WO-FROM-SO-2026-0001-L1. - REST controller /api/v1/production/work-orders: list, get, by-code, create, complete, cancel — each annotated with @RequirePermission. Four permission keys declared in the production.yml metadata: read / create / complete / cancel. - CompleteWorkOrderRequest: single-arg DTO uses the @JsonCreator(mode=PROPERTIES) + @param:JsonProperty trick that already bit ShipSalesOrderRequest and ReceivePurchaseOrderRequest; cross-referenced in the KDoc so the third instance doesn't need re-discovery. - distribution/.../pbc-production/001-production-init.xml: CREATE TABLE with CHECK on status + CHECK on qty>0 + GIN on ext + the usual indexes. NEITHER output_item_code NOR source_sales_order_code is a foreign key (cross-PBC reference policy — guardrail #9). - settings.gradle.kts + distribution/build.gradle.kts: registers the new module and adds it to the distribution dependency list. - master.xml: includes the new changelog in dependency order, after pbc-finance. New api.v1 surface: org.vibeerp.api.v1.event.production.* - WorkOrderCreatedEvent, WorkOrderCompletedEvent, WorkOrderCancelledEvent — sealed under WorkOrderEvent, aggregateType="production.WorkOrder". Same pattern as the order events, so any future consumer (finance revenue recognition, warehouse put-away dashboard, a customer plug-in that needs to react to "work finished") subscribes through the public typed-class overload with no dependency on pbc-production. Unit tests (13 new, 217 → 230 total) - WorkOrderServiceTest (9 tests): create dedup, positive quantity check, catalog seam, happy-path create with event assertion, complete rejects non-DRAFT, complete happy path with InventoryApi.recordMovement assertion + event assertion, cancel from DRAFT, cancel rejects COMPLETED. 
- SalesOrderConfirmedSubscriberTest (5 tests): subscription registration
  count, spawns N work orders for N SO lines with correct code convention,
  idempotent when WOs already exist, no-op on missing SO, and a
  listener-routing test that captures the EventListener instance and verifies
  it forwards to the right service method.

End-to-end smoke verified against real Postgres
- Fresh DB, fresh boot. Both OrderEventSubscribers (pbc-finance) and
  SalesOrderConfirmedSubscriber (pbc-production) log their subscription
  registration before the first HTTP call.
- Seeded two items (BROCHURE-A, BROCHURE-B), a customer, and a finished-goods
  location (WH-FG).
- Created a 2-line sales order (SO-WO-1), confirmed it.
  → Produced ONE orders_sales.SalesOrder outbox row.
  → Produced ONE AR POSTED finance__journal_entry for 1000 USD (500 × 1 +
    250 × 2 — the pbc-finance consumer still works).
  → Produced TWO draft work orders auto-spawned from the SO lines:
    WO-FROM-SO-WO-1-L1 (BROCHURE-A × 500) and WO-FROM-SO-WO-1-L2
    (BROCHURE-B × 250), both with source_sales_order_code=SO-WO-1.
- Completed WO1 to WH-FG:
  → Produced a PRODUCTION_RECEIPT ledger row for BROCHURE-A delta=500
    reference="WO:WO-FROM-SO-WO-1-L1".
  → inventory__stock_balance now has BROCHURE-A = 500 at WH-FG.
  → Flipped status to COMPLETED.
- Cancelled WO2 → CANCELLED.
- Created a manual WO-MANUAL-1 with no source SO → succeeds; demonstrates the
  "operator creates a WO to build inventory ahead of demand" path.
- platform__event_outbox ends with 6 rows, all DISPATCHED:
    orders_sales.SalesOrder SO-WO-1
    production.WorkOrder WO-FROM-SO-WO-1-L1 (created)
    production.WorkOrder WO-FROM-SO-WO-1-L2 (created)
    production.WorkOrder WO-FROM-SO-WO-1-L1 (completed)
    production.WorkOrder WO-FROM-SO-WO-1-L2 (cancelled)
    production.WorkOrder WO-MANUAL-1 (created)

Why this chunk was the right next move
- pbc-finance was a PASSIVE consumer — it only wrote derived reporting state.
  pbc-production is the first ACTIVE consumer: it creates new aggregates with
  their own state machines and their own cross-PBC writes in reaction to
  another PBC's events. This is a meaningfully harder test of the event-driven
  integration story and it passes end-to-end.
- "One ledger, three callers" is now real: sales shipments, purchase receipts,
  AND production receipts all feed the same inventory__stock_movement ledger
  through the same InventoryApi.recordMovement facade. The facade has proven
  stable under three very different callers.
- The framework now expresses the basic ERP trinity: buy (purchase orders),
  sell (sales orders), make (work orders). That's the shape every real
  manufacturing customer needs, and it's done without any PBC importing
  another.

What's deliberately NOT in v1
- No bill of materials. complete() only credits finished goods; it does NOT
  issue raw materials. A shop floor that needs to consume 4 sheets of paper to
  produce 1 brochure does it manually via POST /api/v1/inventory/movements
  with reason=MATERIAL_ISSUE (added in commit c52d0d59). A proper BOM lands as
  WorkOrderInput lines in a future chunk.
- No IN_PROGRESS state. complete() goes DRAFT → COMPLETED in one step. A real
  shop floor needs "started but not finished" visibility; that's the next
  iteration.
- No routings, operations, machine assignments, or due-date enforcement.
  due_date is display-only.
- No "scrap defective output" flow for a COMPLETED work order. cancel refuses
  from COMPLETED; the fix requires a new MovementReason and a new event, not a
  special-case method on the service.

-
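The auto-spawn subscriber's code convention and idempotency guard from the
pbc-production chunk above are small enough to sketch in plain Kotlin.
`SalesOrderLineRef` here is a cut-down stand-in for the api.v1 ref type, and
`spawnWorkOrders` is an illustrative name, not the subscriber's real method:

```kotlin
// Cut-down stand-in for the api.v1 ref type — illustrative only.
data class SalesOrderLineRef(val lineNo: Int, val itemCode: String, val quantity: Int)

/** WO code convention: WO-FROM-<so_code>-L<lineno>. */
fun workOrderCode(soCode: String, lineNo: Int) = "WO-FROM-$soCode-L$lineNo"

/**
 * Spawn plan for a confirmed SO: one DRAFT work order per line — unless ANY
 * work order already exists for the SO, in which case the whole spawn is a
 * no-op (idempotency keys on the source sales order code, not per line).
 */
fun spawnWorkOrders(
    soCode: String,
    lines: List<SalesOrderLineRef>,
    anyWorkOrderExistsFor: (String) -> Boolean,
): List<String> =
    if (anyWorkOrderExistsFor(soCode)) emptyList()
    else lines.map { workOrderCode(soCode, it.lineNo) }
```

In the real subscriber the `anyWorkOrderExistsFor` check maps to
`existsBySourceSalesOrderCode`, which is why redelivery of the same confirm
event cannot double-spawn work orders.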
The event bus and transactional outbox have existed since P1.7 but no real PBC
business logic was publishing through them. This change closes that loop
end-to-end:

api.v1.event.orders (new public surface)
- SalesOrderConfirmedEvent / SalesOrderShippedEvent / SalesOrderCancelledEvent
  — sealed under SalesOrderEvent, aggregateType = "orders_sales.SalesOrder"
- PurchaseOrderConfirmedEvent / PurchaseOrderReceivedEvent /
  PurchaseOrderCancelledEvent — sealed under PurchaseOrderEvent,
  aggregateType = "orders_purchase.PurchaseOrder"
- Events live in api.v1 (not inside the PBCs) so other PBCs and customer
  plug-ins can subscribe without importing the producing PBC — that would
  violate guardrail #9.

pbc-orders-sales / pbc-orders-purchase
- SalesOrderService and PurchaseOrderService now inject EventBus and publish a
  typed event from each state-changing method (confirm, ship/receive, cancel).
  The publish runs INSIDE the same @Transactional method as the JPA mutation
  and the InventoryApi.recordMovement ledger writes — EventBusImpl uses
  Propagation.MANDATORY, so a publish outside a transaction fails loudly. A
  failure in any line rolls back the status change AND every ledger row AND
  the would-have-been outbox row.
- 6 new unit tests (3 per service) mockk the EventBus and verify each
  transition publishes exactly one matching event with the expected fields.
  Total tests: 186 → 192.

End-to-end smoke verified against real Postgres
- Created supplier, customer, item PAPER-A4, location WH-MAIN.
- Drove a PO and an SO through the full state machine plus a cancel of each.
  6 events fired:
    orders_purchase.PurchaseOrder × 3 (confirm + receive + cancel)
    orders_sales.SalesOrder × 3 (confirm + ship + cancel)
- The wildcard EventAuditLogSubscriber logged each one at INFO level to
  /tmp/vibe-erp-boot.log with the [event-audit] tag.
- platform__event_outbox shows 6 rows, all flipped from PENDING to DISPATCHED
  by the OutboxPoller within seconds.
- The publish-inside-the-ledger-transaction guarantee means a subscriber that
  reads inventory__stock_movement on event receipt is guaranteed to see the
  matching SALES_SHIPMENT or PURCHASE_RECEIPT rows. This is what the
  architecture spec section 9 promised and now delivers.

Why this is the right shape
- Other PBCs (production, finance) and customer plug-ins can now react to "an
  order was confirmed/shipped/received/cancelled" without ever importing
  pbc-orders-* internals. The event classes live in api.v1, the only stable
  contract surface.
- The aggregateType strings ("orders_sales.SalesOrder",
  "orders_purchase.PurchaseOrder") match the <pbc>.<aggregate> convention
  documented on DomainEvent.aggregateType, so a cross-classloader subscriber
  can use the topic-string subscribe overload without holding the concrete
  Class<E>.
- The bus's outbox row is the durability anchor for the future Kafka/NATS
  bridge: switching from in-process delivery to cross-process delivery will
  require zero changes to either PBC's publish call.

-
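A minimal sketch of how a sealed event family like the api.v1.event.orders one
above can hang together. The class names follow the chunk's naming, but the
exact api.v1 signatures (DomainEvent's members in particular) are assumptions:

```kotlin
// Assumed, simplified base type — the real DomainEvent lives in api.v1.
interface DomainEvent {
    val aggregateType: String   // "<pbc>.<aggregate>" convention
    val aggregateId: String
}

// Sealed family: one aggregateType string shared by all three transitions.
sealed class SalesOrderEvent : DomainEvent {
    override val aggregateType: String get() = "orders_sales.SalesOrder"
}

data class SalesOrderConfirmedEvent(override val aggregateId: String, val orderCode: String) : SalesOrderEvent()
data class SalesOrderShippedEvent(
    override val aggregateId: String,
    val orderCode: String,
    val shippingLocationCode: String,
) : SalesOrderEvent()
data class SalesOrderCancelledEvent(override val aggregateId: String, val orderCode: String) : SalesOrderEvent()

// A cross-classloader subscriber can key on the topic string alone:
fun topicOf(event: DomainEvent): String = event.aggregateType
```

The sealed hierarchy lets an in-process typed subscriber `when` exhaustively
over the three transitions, while a string-keyed subscriber needs only the
aggregateType topic — matching the two subscribe overloads the chunk
describes.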
The buying-side mirror of pbc-orders-sales. Adds the 6th real PBC and closes
the loop: the framework now does both directions of the inventory flow through
the same `InventoryApi.recordMovement` facade. Buy stock with a PO that hits
RECEIVED, ship stock with a SO that hits SHIPPED, both feed the same
`inventory__stock_movement` ledger.

What landed
-----------
* New Gradle subproject `pbc/pbc-orders-purchase` (16 modules total now). Same
  dependency set as pbc-orders-sales, same architectural enforcement — no
  direct dependency on any other PBC; cross-PBC references go through
  `api.v1.ext.<pbc>` facades at runtime.
* Two JPA entities mirroring SalesOrder / SalesOrderLine:
  - `PurchaseOrder` (header) — code, partner_code (varchar, NOT a UUID FK),
    status enum DRAFT/CONFIRMED/RECEIVED/CANCELLED, order_date, expected_date
    (nullable, the supplier's promised delivery date), currency_code,
    total_amount, ext jsonb.
  - `PurchaseOrderLine` — purchase_order_id FK, line_no, item_code, quantity,
    unit_price, currency_code. Same shape as the sales order line; the api.v1
    facade reuses `SalesOrderLineRef` rather than declaring a duplicate type.
* `PurchaseOrderService.create` performs three cross-PBC validations in one
  transaction:
  1. PartnersApi.findPartnerByCode → reject if null.
  2. The partner's `type` must be SUPPLIER or BOTH (a CUSTOMER-only partner
     cannot be the supplier of a purchase order — the mirror of the
     sales-order rule that rejects SUPPLIER-only partners as customers).
  3. CatalogApi.findItemByCode for EVERY line.
  Then it validates: at least one line, no duplicate line numbers, positive
  quantity, non-negative price, currency matches header. The header total is
  RECOMPUTED from the lines (the caller's value is ignored — never trust a
  financial aggregate sent over the wire).
* State machine enforced by `confirm()`, `cancel()`, and `receive()`:
  - DRAFT → CONFIRMED (confirm)
  - DRAFT → CANCELLED (cancel)
  - CONFIRMED → CANCELLED (cancel before receipt)
  - CONFIRMED → RECEIVED (receive — increments inventory)
  - RECEIVED → × (terminal; cancellation requires a return-to-supplier flow)
* `receive(id, receivingLocationCode)` walks every line and calls
  `inventoryApi.recordMovement(... +line.quantity reason="PURCHASE_RECEIPT"
  reference="PO:<order_code>")`. The whole operation runs in ONE transaction
  so a failure on any line rolls back EVERY line's already-written movement
  AND the order status change. The customer cannot end up with "5 of 7 lines
  received, status still CONFIRMED, ledger half-written".
* New `POST /api/v1/orders/purchase-orders/{id}/receive` endpoint with body
  `{"receivingLocationCode": "WH-MAIN"}`, gated by `orders.purchase.receive`.
  The single-arg DTO has the same Jackson `@JsonCreator(mode = PROPERTIES)`
  workaround as `ShipSalesOrderRequest` (the trap is documented in the class
  KDoc with a back-reference to ShipSalesOrderRequest).
* Confirm/cancel/receive endpoints carry `@RequirePermission` annotations
  (`orders.purchase.confirm`, `orders.purchase.cancel`,
  `orders.purchase.receive`). All three keys are declared in the new
  `orders-purchase.yml` metadata.
* New api.v1 facade `org.vibeerp.api.v1.ext.orders.PurchaseOrdersApi` +
  `PurchaseOrderRef`. Reuses the existing `SalesOrderLineRef` type for the
  line shape — buying and selling lines carry the same fields, so duplicating
  the ref type would be busywork.
* `PurchaseOrdersApiAdapter` — the sixth `*ApiAdapter`, after Identity,
  Catalog, Partners, Inventory, and SalesOrders.
* `orders-purchase.yml` metadata declaring 2 entities, 6 permission keys, and
  1 menu entry under "Purchasing".
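The transition table above is small enough to encode directly. A sketch — the
enum and function names are illustrative, not the real service API:

```kotlin
enum class PoStatus { DRAFT, CONFIRMED, RECEIVED, CANCELLED }

// The allowed transitions, exactly as the chunk lists them.
val allowedTransitions: Map<PoStatus, Set<PoStatus>> = mapOf(
    PoStatus.DRAFT to setOf(PoStatus.CONFIRMED, PoStatus.CANCELLED),
    PoStatus.CONFIRMED to setOf(PoStatus.CANCELLED, PoStatus.RECEIVED),
    PoStatus.RECEIVED to emptySet(),    // terminal — return-to-supplier flow, not cancel
    PoStatus.CANCELLED to emptySet()
)

/** Throws with a descriptive message on any edge not in the table. */
fun transition(from: PoStatus, to: PoStatus): PoStatus {
    require(to in allowedTransitions.getValue(from)) {
        "illegal purchase-order transition $from → $to"
    }
    return to
}
```

Keeping the table as data rather than scattered `if`s is what makes the
"RECEIVED is terminal" rule a one-line fact instead of a special case inside
`cancel()`.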
End-to-end smoke test (the full demo loop)
------------------------------------------
Reset Postgres, booted the app, ran:
* Login as admin
* POST /catalog/items → PAPER-A4
* POST /partners → SUP-PAPER (SUPPLIER)
* POST /inventory/locations → WH-MAIN
* GET /inventory/balances?itemCode=PAPER-A4 → [] (no stock)
* POST /orders/purchase-orders → PO-2026-0001 for 5000 sheets @ $0.04 =
  total $200.00 (recomputed from the line)
* POST /purchase-orders/{id}/confirm → status CONFIRMED
* POST /purchase-orders/{id}/receive body={"receivingLocationCode":"WH-MAIN"}
  → status RECEIVED
* GET /inventory/balances?itemCode=PAPER-A4 → quantity=5000
* GET /inventory/movements?itemCode=PAPER-A4 → PURCHASE_RECEIPT delta=5000
  ref=PO:PO-2026-0001

Then the FULL loop with the sales side from the previous chunk:
* POST /partners → CUST-ACME (CUSTOMER)
* POST /orders/sales-orders → SO-2026-0001 for 50 sheets
* confirm + ship from WH-MAIN
* GET /inventory/balances?itemCode=PAPER-A4 → quantity=4950 (5000 - 50)
* GET /inventory/movements?itemCode=PAPER-A4 →
    PURCHASE_RECEIPT delta=5000 ref=PO:PO-2026-0001
    SALES_SHIPMENT delta=-50 ref=SO:SO-2026-0001

The framework's `InventoryApi.recordMovement` facade now has TWO callers —
pbc-orders-sales (negative deltas, SALES_SHIPMENT) and pbc-orders-purchase
(positive deltas, PURCHASE_RECEIPT) — feeding the same ledger from both sides.

Failure paths verified:
* Re-receive a RECEIVED PO → 400 "only CONFIRMED orders can be received"
* Cancel a RECEIVED PO → 400 "issue a return-to-supplier flow instead"
* Create a PO from a CUSTOMER-only partner → 400 "partner 'CUST-ONLY' is type
  CUSTOMER and cannot be the supplier of a purchase order"

Regression: catalog uoms, identity users, partners, inventory, sales orders,
purchase orders, printing-shop plates with i18n, metadata entities (15 now,
was 13) — all still HTTP 2xx.

Build
-----
* `./gradlew build`: 16 subprojects, 186 unit tests (was 175), all green.
  The 11 new tests cover the same shapes as the sales-order tests but
  inverted: unknown supplier, CUSTOMER-only rejection, BOTH-type acceptance,
  unknown item, empty lines, total recomputation, confirm/cancel state
  machine, receive-rejects-non-CONFIRMED,
  receive-walks-lines-with-positive-delta, cancel-rejects-RECEIVED,
  cancel-CONFIRMED-allowed.

What was deferred
-----------------
* **RFQs** (request for quotation) and **supplier price catalogs** — both sit
  alongside POs but neither is in v1.
* **Partial receipts**. v1's RECEIVED is "all-or-nothing"; the supplier
  delivering 4500 of 5000 sheets is not yet modelled.
* **Supplier returns / refunds**. The cancel-RECEIVED rejection message says
  "issue a return-to-supplier flow" — that flow doesn't exist yet.
* **Three-way matching** (PO + receipt + invoice). Lands with pbc-finance.
* **Multi-leg transfers**. TRANSFER_IN/TRANSFER_OUT exist in the movement
  enum but no service operation yet writes both legs in one transaction.

-
The killer demo finally works: place a sales order, ship it, watch inventory
drop. This chunk lands the two pieces that close the loop: the inventory
movement ledger (the audit-grade history of every stock change) and the
sales-order /ship endpoint that calls InventoryApi.recordMovement to
atomically debit stock for every line.

This is the framework's FIRST cross-PBC WRITE flow. Every earlier cross-PBC
call was a read (CatalogApi.findItemByCode, PartnersApi.findPartnerByCode,
InventoryApi.findStockBalance). Shipping inverts that: pbc-orders-sales
synchronously writes to inventory's tables (via the api.v1 facade) as a side
effect of changing its own state, all in ONE Spring transaction.

What landed
-----------
* New `inventory__stock_movement` table — append-only ledger (id, item_code,
  location_id FK, signed delta, reason enum, reference, occurred_at, audit
  cols). CHECK constraint `delta <> 0` rejects no-op rows. Indexes on
  item_code, location_id, the (item, location) composite, reference, and
  occurred_at. The migration is in its own changelog file
  (002-inventory-movement-ledger.xml) per the project convention that each
  new schema cut is a new file.
* New `StockMovement` JPA entity + repository + `MovementReason` enum
  (RECEIPT, ISSUE, ADJUSTMENT, SALES_SHIPMENT, PURCHASE_RECEIPT,
  TRANSFER_OUT, TRANSFER_IN). Each value carries a documented sign
  convention; the service rejects mismatches (a SALES_SHIPMENT with a
  positive delta is a caller bug, not silently coerced).
* New `StockMovementService.record(...)` — the ONE entry point for changing
  inventory. Cross-PBC item validation via CatalogApi, local location
  validation, sign-vs-reason enforcement, and negative-balance rejection all
  happen BEFORE the write. The ledger row insert AND the balance row update
  happen in the SAME database transaction so the two cannot drift.
* `StockBalanceService.adjust` refactored to delegate: it computes
  delta = newQty - oldQty and calls record(... ADJUSTMENT). The REST endpoint
  keeps its absolute-quantity semantics — operators type "the shelf has 47",
  not "decrease by 3" — but every adjustment now writes a ledger row too. A
  no-op adjustment (re-saving the same value) does NOT write a row, so the
  audit log doesn't fill with noise from operator clicks that didn't change
  anything.
* New `StockMovementController` at `/api/v1/inventory/movements`: GET filters
  by itemCode, locationId, or reference (for "all movements caused by
  SO-2026-0001"); POST records a manual movement. Both protected by
  `inventory.stock.adjust`.
* `InventoryApi` facade extended with `recordMovement(itemCode, locationCode,
  delta, reason: String, reference)`. The reason is a String in the api.v1
  surface (not the local enum) so plug-ins don't import inventory's internal
  types — the closed set is documented on the interface. The adapter parses
  the string with a meaningful error on unknown values.
* New `SHIPPED` status on `SalesOrderStatus`. Transitions: DRAFT → CONFIRMED
  → SHIPPED (terminal). Cancelling a SHIPPED order is rejected with "issue a
  return / refund flow instead".
* New `SalesOrderService.ship(id, shippingLocationCode)`: walks every line,
  calls `inventoryApi.recordMovement(... -line.quantity
  reason="SALES_SHIPMENT" reference="SO:{order_code}")`, flips status to
  SHIPPED. The whole operation runs in ONE transaction so a failure on any
  line — bad item, bad location, would push balance negative — rolls back the
  order status change AND every other line's already-written movement. The
  customer never ends up with "5 of 7 lines shipped, status still CONFIRMED,
  ledger half-written".
* New `POST /api/v1/orders/sales-orders/{id}/ship` endpoint with body
  `{"shippingLocationCode": "WH-MAIN"}`, gated by the new `orders.sales.ship`
  permission key.
* `ShipSalesOrderRequest` is a single-arg Kotlin data class — the same
  Jackson deserialization trap as `RefreshRequest`. Fixed with
  `@JsonCreator(mode = PROPERTIES)` + `@param:JsonProperty`; the trap is
  documented in the class KDoc.

End-to-end smoke test (the killer demo)
---------------------------------------
Reset Postgres, booted the app, ran:
* Login as admin
* POST /catalog/items → PAPER-A4
* POST /partners → CUST-ACME
* POST /inventory/locations → WH-MAIN
* POST /inventory/balances/adjust → quantity=1000 (now writes a ledger row
  via the new path)
* GET /inventory/movements?itemCode=PAPER-A4 → ADJUSTMENT delta=1000 ref=null
* POST /orders/sales-orders → SO-2026-0001 (50 units of PAPER-A4)
* POST /sales-orders/{id}/confirm → status CONFIRMED
* POST /sales-orders/{id}/ship body={"shippingLocationCode":"WH-MAIN"} →
  status SHIPPED
* GET /inventory/balances?itemCode=PAPER-A4 → quantity=950 (1000 - 50)
* GET /inventory/movements?itemCode=PAPER-A4 →
    ADJUSTMENT delta=1000 ref=null
    SALES_SHIPMENT delta=-50 ref=SO:SO-2026-0001

Failure paths verified:
* Re-ship a SHIPPED order → 400 "only CONFIRMED orders can be shipped"
* Cancel a SHIPPED order → 400 "issue a return / refund flow instead"
* Place a 10000-unit order, confirm, try to ship from a 950-stock warehouse →
  400 "stock movement would push balance for 'PAPER-A4' at location ... below
  zero (current=950.0000, delta=-10000.0000)"; balance unchanged after the
  rollback (transaction integrity verified)

Regression: catalog uoms, identity users, inventory locations, printing-shop
plates with i18n, metadata entities — all still HTTP 2xx.

Build
-----
* `./gradlew build`: 15 subprojects, 175 unit tests (was 163), all green. The
  12 new tests cover:
  - StockMovementServiceTest (8): zero-delta rejection, positive
    SALES_SHIPMENT rejection, negative RECEIPT rejection, both signs allowed
    on ADJUSTMENT, unknown item via the CatalogApi seam, unknown location,
    would-push-balance-negative rejection, new-row + existing-row balance
    update.
  - StockBalanceServiceTest, rewritten (5): negative-quantity early reject,
    delegation with computed positive delta, delegation with computed
    negative delta, no-op adjustment short-circuit (NO ledger row written),
    no-op on missing row creates an empty row at zero.
  - SalesOrderServiceTest, additions (3): ship rejects non-CONFIRMED, ship
    walks lines and calls recordMovement with negated quantity + correct
    reference, cancel rejects SHIPPED.

What was deferred
-----------------
* **Event publication.** A `StockMovementRecorded` event would let
  pbc-finance and pbc-production react to ledger writes without polling. The
  event bus has been wired since P1.7 but no real cross-PBC flow uses it yet
  — that's the natural next chunk after this commit.
* **Multi-leg transfers.** TRANSFER_OUT and TRANSFER_IN are in the enum but
  no service operation atomically writes both legs yet (both legs in one
  transaction is required to keep the total-on-hand invariant).
* **Reservation / pick lists.** "Reserve 50 of PAPER-A4 for an unconfirmed
  order" is its own concept that lands later.
* **Shipped-order returns / refunds.** The cancel-SHIPPED rule points the
  user at "use a return flow" — that flow doesn't exist yet. v1 says
  shipments are terminal.

-
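The sign-vs-reason discipline from the movement-ledger chunk above is easy to
sketch. The enum values are the ones the chunk lists; the `sign` assignments
for SALES_SHIPMENT, RECEIPT and ADJUSTMENT follow the documented tests, while
the ISSUE/TRANSFER signs are assumptions inferred from the names, and
`validateDelta` is an illustrative stand-in for the real service check:

```kotlin
import java.math.BigDecimal

enum class MovementReason(val sign: Int) {
    // sign: +1 = delta must be positive, -1 = must be negative, 0 = either (never zero).
    // ISSUE/TRANSFER signs are assumed from the names, not stated in the chunk.
    RECEIPT(+1), ISSUE(-1), ADJUSTMENT(0),
    SALES_SHIPMENT(-1), PURCHASE_RECEIPT(+1),
    TRANSFER_OUT(-1), TRANSFER_IN(+1)
}

fun validateDelta(reason: MovementReason, delta: BigDecimal) {
    // Mirrors the table's CHECK delta <> 0: no-op rows are rejected outright.
    require(delta.signum() != 0) { "delta must be non-zero" }
    // A mismatched sign is a caller bug, not silently coerced.
    require(reason.sign == 0 || reason.sign == delta.signum()) {
        "a ${reason.name} with delta $delta has the wrong sign"
    }
}
```

Encoding the convention on the enum value itself keeps "which reasons debit
and which credit" in one place instead of re-deriving it at every call site.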
The fifth real PBC and the first business-workflow PBC. pbc-inventory proved a
PBC could consume ONE cross-PBC facade (CatalogApi). pbc-orders-sales consumes
TWO simultaneously (PartnersApi for the customer, CatalogApi for every line's
item) in a single transaction — the most rigorous test of the modular monolith
story so far. Neither source PBC is on the compile classpath; the Gradle build
refuses any direct dependency. Spring DI wires the api.v1 interfaces to their
concrete adapters at runtime.

What landed
-----------
* New Gradle subproject `pbc/pbc-orders-sales` (15 modules total).
* Two JPA entities, both extending `AuditedJpaEntity`:
  - `SalesOrder` (header) — code, partner_code (varchar, NOT a UUID FK to
    partners), status enum DRAFT/CONFIRMED/CANCELLED, order_date,
    currency_code (varchar(3)), total_amount numeric(18,4), ext jsonb.
    Eager-loaded `lines` collection because every read of the header is
    followed by a read of the lines in practice.
  - `SalesOrderLine` — sales_order_id FK, line_no, item_code (varchar, NOT a
    UUID FK to catalog), quantity, unit_price, currency_code. Per-line
    currency in the schema even though v1 enforces all-lines-match-header
    (so a later multi-currency relaxation is schema-free). No `ext` jsonb on
    lines: lines are facts, not master records; custom fields belong on the
    header.
* `SalesOrderService.create` performs **three independent cross-PBC
  validations** in one transaction:
  1. PartnersApi.findPartnerByCode → reject if null (covers unknown AND
     inactive partners; the facade hides them).
  2. The PartnersApi result's type must be CUSTOMER or BOTH (a SUPPLIER-only
     partner cannot be the customer of a sales order).
  3. CatalogApi.findItemByCode for EVERY line → reject if null.
  Then it ALSO validates: at least one line, no duplicate line numbers,
  positive quantity, non-negative price, currency matches header. The header
  total is RECOMPUTED from the lines — the caller's value is intentionally
  ignored. Never trust a financial aggregate sent over the wire.
* State machine enforced by `confirm()` and `cancel()`:
  - DRAFT → CONFIRMED (confirm)
  - DRAFT → CANCELLED (cancel from draft)
  - CONFIRMED → CANCELLED (cancel a confirmed order)
  Anything else throws with a descriptive message. CONFIRMED orders are
  immutable except for cancellation — the `update` method refuses to mutate a
  non-DRAFT order.
* `update` with line items REPLACES the existing lines wholesale (PUT
  semantics for lines, PATCH for header columns). Partial line edits are not
  modelled because the typical "edit one line" UI gesture renders to a full
  re-send anyway.
* REST: `/api/v1/orders/sales-orders` (CRUD + `/confirm` + `/cancel`). State
  transitions live on dedicated POST endpoints rather than PATCH-based status
  writes — they have side effects (lines become immutable, downstream PBCs
  will receive events in future versions), and sentinel-status writes hide
  that.
* New api.v1 facade `org.vibeerp.api.v1.ext.orders.SalesOrdersApi` with
  `findByCode`, `findById`, `SalesOrderRef`, `SalesOrderLineRef`. Fifth ext.*
  package after identity, catalog, partners, inventory. Sets up the next
  consumers: pbc-production for work orders, pbc-finance for invoicing, the
  printing-shop reference plug-in for the quote-to-job-card workflow.
* `SalesOrdersApiAdapter` runtime implementation. Cancelled orders ARE
  returned by the facade (unlike inactive items / partners, which are hidden)
  because downstream consumers may legitimately need to react to a
  cancellation — release a production slot, void an invoice, etc.
* `orders-sales.yml` metadata declaring 2 entities, 5 permission keys, 1 menu
  entry.

Build enforcement (still load-bearing)
--------------------------------------
The root `build.gradle.kts` STILL refuses any direct dependency from
`pbc-orders-sales` to either `pbc-partners` or `pbc-catalog`. Try adding
either as `implementation(project(...))` and the build fails at configuration
time with the architectural violation. The cross-PBC interfaces live in
api-v1; the concrete adapters live in their owning PBCs; Spring DI assembles
them at runtime via the bootstrap @ComponentScan. pbc-orders-sales sees only
the api.v1 interfaces.

End-to-end smoke test
---------------------
Reset Postgres, booted the app, hit:
* POST /api/v1/catalog/items × 2 → PAPER-A4, INK-CYAN
* POST /api/v1/partners/partners → CUST-ACME (CUSTOMER), SUP-ONLY (SUPPLIER)
* POST /api/v1/orders/sales-orders → 201, two lines, total 386.50
  (5000 × 0.05 + 3 × 45.50 = 250.00 + 136.50, correctly recomputed)
* POST .../sales-orders with FAKE-PARTNER → 400 with the meaningful message
  "partner code 'FAKE-PARTNER' is not in the partners directory (or is
  inactive)"
* POST .../sales-orders with SUP-ONLY → 400 "partner 'SUP-ONLY' is type
  SUPPLIER and cannot be the customer of a sales order"
* POST .../sales-orders with a FAKE-ITEM line → 400 "line 1: item code
  'FAKE-ITEM' is not in the catalog (or is inactive)"
* POST /{id}/confirm → status DRAFT → CONFIRMED
* PATCH the CONFIRMED order → 400 "only DRAFT orders are mutable"
* Re-confirm a CONFIRMED order → 400 "only DRAFT can be confirmed"
* POST /{id}/cancel a CONFIRMED order → status CANCELLED (allowed)
* SELECT * FROM orders_sales__sales_order — single row, total 386.5000,
  status CANCELLED
* SELECT * FROM orders_sales__sales_order_line — two rows in line_no order
  with the right items and quantities
* GET /api/v1/_meta/metadata/entities → 13 entities now (was 11)
* Regression: catalog uoms, identity users, partners, inventory locations,
  printing-shop plates with i18n (Accept-Language: zh-CN) all still HTTP 2xx.

Build
-----
* `./gradlew build`: 15 subprojects, 153 unit tests (was 139), all green. The
  14 new tests cover: unknown/SUPPLIER-only/BOTH-type partner paths, the
  unknown-item path, empty/duplicate-lineno line arrays, negative-quantity
  early reject (verifies CatalogApi is NOT consulted), currency-mismatch
  reject, total recomputation, and all three state-machine transitions plus
  the rejected ones.

What was deferred
-----------------
* **Sales-order shipping**. Confirmed orders cannot yet ship, because
  shipping requires atomically debiting inventory — which needs the movement
  ledger that was deferred from P5.3. The pair of chunks (movement ledger +
  sales-order shipping flow) is the natural next combination.
* **Multi-currency lines**. The schema column is per-line but the service
  enforces all-lines-match-header in v1. Relaxing this is a service-only
  change.
* **Quotes** (DRAFT-but-customer-visible) and **deliveries** (the thing that
  triggers shipping). v1 only models the order itself.
* **Pricing engine / discounts**. v1 takes the unit price the caller sends. A
  real ERP has a price-book lookup, customer-specific pricing, volume
  discounts, promotional pricing — all of which slot in BEFORE the line price
  is set, leaving the schema unchanged.
* **Tax**. v1 totals are pre-tax. Tax calculation is its own PBC (and a
  regulatory minefield) that lands later.

-
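A condensed sketch of the create-time validation and total recomputation
described in the sales-order chunk above. The facade stubs, DTO, and function
name are illustrative stand-ins; only the rules themselves come from the text:

```kotlin
import java.math.BigDecimal

// Illustrative stand-ins for the api.v1 facade result and line DTO.
data class PartnerRef(val code: String, val type: String)   // CUSTOMER / SUPPLIER / BOTH
data class LineDto(val lineNo: Int, val itemCode: String, val quantity: BigDecimal, val unitPrice: BigDecimal)

fun validateAndTotal(
    partnerCode: String,
    lines: List<LineDto>,
    findPartner: (String) -> PartnerRef?,   // PartnersApi seam (inactive ⇒ null)
    itemExists: (String) -> Boolean,        // CatalogApi seam
): BigDecimal {
    val partner = requireNotNull(findPartner(partnerCode)) {
        "partner code '$partnerCode' is not in the partners directory (or is inactive)"
    }
    require(partner.type == "CUSTOMER" || partner.type == "BOTH") {
        "partner '$partnerCode' is type ${partner.type} and cannot be the customer of a sales order"
    }
    require(lines.isNotEmpty()) { "at least one line required" }
    require(lines.map { it.lineNo }.toSet().size == lines.size) { "duplicate line numbers" }
    lines.forEach { line ->
        require(line.quantity.signum() > 0) { "line ${line.lineNo}: quantity must be positive" }
        require(line.unitPrice.signum() >= 0) { "line ${line.lineNo}: price must be non-negative" }
        require(itemExists(line.itemCode)) {
            "line ${line.lineNo}: item code '${line.itemCode}' is not in the catalog (or is inactive)"
        }
    }
    // The header total is always recomputed — the caller's value is ignored.
    return lines.fold(BigDecimal.ZERO) { acc, l -> acc + l.quantity * l.unitPrice }
}
```

Run against the smoke-test order (5000 × 0.05 + 3 × 45.50), this recomputes
the 386.50 total regardless of what total the caller claimed.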
The fourth real PBC, and the first one that CONSUMES another PBC's api.v1.ext
facade. Until now every PBC was a *provider* of an ext.<pbc> interface
(identity, catalog, partners). pbc-inventory is the first *consumer*: it
injects org.vibeerp.api.v1.ext.catalog.CatalogApi to validate item codes
before adjusting stock. This proves the cross-PBC contract works in both
directions, exactly as guardrail #9 requires.

What landed
-----------
* New Gradle subproject `pbc/pbc-inventory` (14 modules total now).
* Two JPA entities, both extending `AuditedJpaEntity`:
  - `Location` — code, name, type (WAREHOUSE/BIN/VIRTUAL), active, ext jsonb.
    Single table for all location levels with a type discriminator (no
    recursive self-reference in v1; YAGNI for the "one warehouse, handful of
    bins" shape every printing shop has).
  - `StockBalance` — item_code (varchar, NOT a UUID FK), location_id FK,
    quantity numeric(18,4). The item_code is deliberately a string FK that
    references nothing because pbc-inventory has no compile-time link to
    pbc-catalog — the cross-PBC link goes through CatalogApi at runtime. The
    UNIQUE INDEX on (item_code, location_id) is the primary integrity
    guarantee; the UUID id is the addressable PK. CHECK (quantity >= 0).
* `LocationService` and `StockBalanceService` with full CRUD + adjust
  semantics. ext jsonb on Location goes through ExtJsonValidator (P3.4 —
  Tier 1 customisation).
* `StockBalanceService.adjust(itemCode, locationId, quantity)`:
  1. Reject negative quantity.
  2. **Inject CatalogApi**, call `findItemByCode(itemCode)`, reject if null
     with a meaningful 400. THIS is the cross-PBC seam test.
  3. Verify the location exists.
  4. SELECT-then-save upsert on (item_code, location_id) — a single row per
     cell, mutated in place when the row exists, created when it doesn't.
     Single-instance deployment makes the read-modify-write race window
     academic.
* REST: `/api/v1/inventory/locations` (CRUD), `/api/v1/inventory/balances`
  (GET with itemCode or locationId filters, POST /adjust).
* New api.v1 facade `org.vibeerp.api.v1.ext.inventory` with
  `InventoryApi.findStockBalance(itemCode, locationCode)` +
  `totalOnHand(itemCode)` + `StockBalanceRef`. Fourth ext.* package after
  identity, catalog, partners. Sets up the next consumers (sales orders,
  purchase orders, the printing-shop plug-in's "do we have enough paper for
  this job?").
* `InventoryApiAdapter` runtime implementation in pbc-inventory.
* `inventory.yml` metadata declaring 2 entities, 6 permission keys, 2 menu
  entries.

Build enforcement (the load-bearing bit)
----------------------------------------
The root build.gradle.kts STILL refuses any direct dependency from
pbc-inventory to pbc-catalog. Try adding
`implementation(project(":pbc:pbc-catalog"))` to pbc-inventory's
build.gradle.kts and the build fails at configuration time with
"Architectural violation in :pbc:pbc-inventory: depends on :pbc:pbc-catalog".
The CatalogApi interface is in api-v1; the CatalogApiAdapter implementation
is in pbc-catalog; Spring DI wires them at runtime via the bootstrap
@ComponentScan. pbc-inventory only ever sees the interface.
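A toy sketch of the adjust path's validation order and SELECT-then-save
upsert, with CatalogApi reduced to a lambda. The map stands in for the
(item_code, location_id)-unique table, and all names here are illustrative:

```kotlin
import java.math.BigDecimal

// The (itemCode, locationId) cell → quantity; stands in for stock_balance rows.
class StockBalances(private val itemExists: (String) -> Boolean) {
    private val rows = mutableMapOf<Pair<String, Int>, BigDecimal>()

    /** Absolute SET semantics; the validation order mirrors the service. */
    fun adjust(itemCode: String, locationId: Int, quantity: BigDecimal): BigDecimal {
        // 1. Cheap local check first — before any cross-PBC lookup.
        require(quantity.signum() >= 0) { "stock quantity must be non-negative" }
        // 2. The cross-PBC seam: unknown/inactive items are rejected here.
        require(itemExists(itemCode)) {
            "item code '$itemCode' is not in the catalog (or is inactive)"
        }
        // 3./4. Upsert on the unique cell: mutate the existing row or create it.
        val key = itemCode to locationId
        rows[key] = quantity
        return rows.getValue(key)
    }

    fun rowCount() = rows.size
}
```

The important property the smoke test checks — re-adjusting the same cell
mutates one row rather than creating a second — falls straight out of keying
on the (item, location) pair.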
End-to-end smoke test --------------------- Reset Postgres, booted the app, hit: * POST /api/v1/inventory/locations → 201, "WH-MAIN" warehouse * POST /api/v1/catalog/items → 201, "PAPER-A4" sheet item * POST /api/v1/inventory/balances/adjust with itemCode=PAPER-A4 → 200, the cross-PBC catalog lookup succeeded * POST .../adjust with itemCode=FAKE-ITEM → 400 with the meaningful message "item code 'FAKE-ITEM' is not in the catalog (or is inactive)" — the cross-PBC seam REJECTS unknown items as designed * POST .../adjust with quantity=-5 → 400 "stock quantity must be non-negative", caught BEFORE the CatalogApi mock would be invoked * POST .../adjust again with quantity=7500 → 200; SELECT shows ONE row with id unchanged and quantity = 7500 (upsert mutates, not duplicates) * GET /api/v1/inventory/balances?itemCode=PAPER-A4 → the row, with scale-4 numeric serialised verbatim * GET /api/v1/_meta/metadata/entities → 11 entities now (was 9 before Location + StockBalance landed) * Regression: catalog uoms, identity users, partners, printing-shop plates with i18n (Accept-Language: zh-CN), Location custom-fields endpoint all still HTTP 2xx. Build ----- * `./gradlew build`: 14 subprojects, 139 unit tests (was 129), all green. The 10 new tests cover Location CRUD + the StockBalance adjust path with mocked CatalogApi: unknown item rejection, unknown location rejection, negative-quantity early reject (verifies CatalogApi is NOT consulted), happy-path create, and upsert (existing row mutated, save() not called because @Transactional flushes the JPA-managed entity on commit). What was deferred ----------------- * `inventory__stock_movement` append-only ledger. The current operation is "set the quantity"; receipts/issues/transfers as discrete events with audit trail land in a focused follow-up. The balance row will then be regenerated from the ledger via a Liquibase backfill. * Negative-balance / over-issue prevention. 
  The CHECK constraint blocks SET to a negative value, but there's no concept of "you cannot ISSUE more than is on hand" yet because there is no separate ISSUE operation — only absolute SET.
* Lots, batches, serial numbers, expiry dates. Plenty of printing shops need none of these; the ones that do can either wait for the lot/serial chunk later or add the columns via Tier 1 custom fields on Location for now.
* Cross-warehouse transfer atomicity (debit one, credit another in one transaction). Same — needs the ledger.

-
The third real PBC. Validates the modular-monolith template against a parent-with-children aggregate (Partner → Addresses → Contacts), where the previous two PBCs only had single-table or two-independent-table shapes.

What landed
-----------
* New Gradle subproject `pbc/pbc-partners` (12 modules total now).
* Three JPA entities, all extending `AuditedJpaEntity`:
  - `Partner` — code, name, type (CUSTOMER/SUPPLIER/BOTH), tax_id, website, email, phone, active, ext jsonb. Single-table for both customers and suppliers because the role flag is a property of the relationship, not the organisation.
  - `Address` — partner_id FK, address_type (BILLING/SHIPPING/OTHER), line1/line2/city/region/postal_code/country_code (ISO 3166-1), is_primary. Two free address lines + structured city/region/code is the smallest set that round-trips through every postal system.
  - `Contact` — partner_id FK, full_name, role, email, phone, active. PII-tagged in metadata YAML for the future audit/export tooling.
* Spring Data JPA repos, application services with full CRUD and the invariants below, REST controllers under `/api/v1/partners/partners` (+ nested addresses, contacts).
* `partners-init.xml` Liquibase changelog with the three tables, FKs, GIN index on `partner.ext`, indexes on type/active/country.
* New api.v1 facade `org.vibeerp.api.v1.ext.partners` with `PartnersApi` + `PartnerRef`. Third `ext.<pbc>` package after identity and catalog. Inactive partners hidden at the facade boundary.
* `PartnersApiAdapter` runtime implementation in pbc-partners, never leaking JPA entity types.
* `partners.yml` metadata declaring all 3 entities, 12 permission keys, 1 menu entry. Picked up automatically by `MetadataLoader`.
* 15 new unit tests across `PartnerServiceTest`, `AddressServiceTest` and `ContactServiceTest` (mockk-based, mirroring the catalog tests).
Invariants enforced in code (not blindly delegated to the DB)
-------------------------------------------------------------
* Partner code uniqueness — an explicit check produces a 400 with a real message instead of a 500 from the unique-index violation.
* Partner code is NOT updatable — every external reference uses code, so renaming is a data-migration concern, not an API call.
* Partner deactivate cascades to contacts (also flipped to inactive). Addresses are NOT touched (no `active` column — they exist or they don't). Verified end-to-end against Postgres.
* "Primary" flag is at most one per (partner, address_type). When a new/updated address is marked primary, all OTHER primaries of the same type for the same partner are demoted in the same transaction.
* Addresses and contacts reject operations on unknown partners up-front to give better errors than the FK violation would.

End-to-end smoke test
---------------------
Reset Postgres, booted the app, hit:

* POST /api/v1/auth/login (admin) → JWT
* POST /api/v1/partners/partners (CUSTOMER, SUPPLIER) → 201
* GET /api/v1/partners/partners → lists both
* GET /api/v1/partners/partners/by-code/CUST-ACME → resolves
* POST /api/v1/partners/partners (dup code) → 400 with real message
* POST .../{id}/addresses (BILLING, primary) → 201
* POST .../{id}/contacts → 201
* DELETE /api/v1/partners/partners/{id} → 204; partner active=false
* GET .../contacts → contact ALSO active=false (cascade verified)
* GET /api/v1/_meta/metadata/entities → 3 partners entities present
* GET /api/v1/_meta/metadata/permissions → 12 partners permissions
* Regression: catalog UoMs/items, identity users, printing-shop plug-in plates all still HTTP 200.

Build
-----
* `./gradlew build`: 12 subprojects, 107 unit tests, all green (was 11 / 92 before this commit).
* The architectural rule is still enforced: pbc-partners depends on api-v1 + platform-persistence + platform-security only — no cross-PBC dep, no platform-bootstrap dep.
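The "at most one primary per (partner, address_type)" invariant above can be sketched as a pure function. This is an illustrative in-memory model — the names (`Address`, `markPrimary`) are assumptions, and the real service performs the demotion against the database inside one transaction:

```kotlin
// Illustrative model of an address row; not the real JPA entity.
data class Address(
    val id: Int,
    val partnerId: Int,
    val type: String,      // BILLING / SHIPPING / OTHER
    val isPrimary: Boolean,
)

/**
 * Marks [addressId] primary and demotes every OTHER primary of the same
 * address_type for the same partner — the demotion described above.
 */
fun markPrimary(addresses: List<Address>, addressId: Int): List<Address> {
    val target = addresses.first { it.id == addressId }
    return addresses.map { a ->
        when {
            a.id == addressId -> a.copy(isPrimary = true)
            a.partnerId == target.partnerId && a.type == target.type ->
                a.copy(isPrimary = false)
            else -> a   // other partners / other address types untouched
        }
    }
}
```

After any call, each (partner, type) pair holds at most one primary; doing the read-demote-write in one transaction is what keeps concurrent updates from producing two.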
What was deferred
-----------------
* Permission enforcement on contact endpoints (P4.3). Currently plain authenticated; the metadata declares the planned `partners.contact.*` keys for when @RequirePermission lands.
* Per-country address structure layered on top via metadata forms (P3.x). The current schema is the smallest universal subset.
* `deletePartnerCompletely` — out of scope for v1; should be a separate "data scrub" admin tool, not a routine API call.

-
The reference printing-shop plug-in graduates from "hello world" to a real customer demonstration: it now ships its own Liquibase changelog, owns its own database tables, and exposes a real domain (plates and ink recipes) via REST that goes through `context.jdbc` — a new typed-SQL surface in api.v1 — without ever touching Spring's `JdbcTemplate` or any other host-internal type. A bytecode linter that runs before plug-in start refuses to load any plug-in that tries to import `org.vibeerp.platform.*` or `org.vibeerp.pbc.*` classes.

What landed:

* api.v1 (additive, binary-compatible):
  - PluginJdbc — typed SQL access with named parameters. Methods: query, queryForObject, update, inTransaction. No Spring imports leaked. Forces plug-ins to use named params (no positional ?).
  - PluginRow — typed nullable accessors over a single result row: string, int, long, uuid, bool, instant, bigDecimal. Hides java.sql.ResultSet entirely.
  - PluginContext.jdbc getter with a default impl that throws UnsupportedOperationException so older builds remain binary-compatible per the api.v1 stability rules.
* platform-plugins — three new sub-packages:
  - jdbc/DefaultPluginJdbc backed by Spring's NamedParameterJdbcTemplate. ResultSetPluginRow translates each accessor through ResultSet.wasNull() so SQL NULL round-trips as Kotlin null instead of the JDBC defaults (0 for int, false for bool, etc. — bug factories).
  - jdbc/PluginJdbcConfiguration provides one shared PluginJdbc bean for the whole process. Per-plugin isolation lands later.
  - migration/PluginLiquibaseRunner looks for META-INF/vibe-erp/db/changelog.xml inside the plug-in JAR via the PF4J classloader and applies it via Liquibase against the host's shared DataSource. The unique META-INF path matters: plug-ins also see the host's parent classpath, where the host's own db/changelog/master.xml lives, and a collision causes a Liquibase ChangeLogParseException at install time.
  - lint/PluginLinter walks every .class entry in the plug-in JAR via java.util.jar.JarFile + ASM ClassReader, visits every type/method/field/instruction reference, and rejects on any reference to `org/vibeerp/platform/` or `org/vibeerp/pbc/` packages.
* VibeErpPluginManager lifecycle is now load → lint → migrate → start:
  - lint runs immediately after PF4J's loadPlugins(); rejected plug-ins are unloaded with a per-violation error log and never get to run any code
  - migrate runs the plug-in's own Liquibase changelog; failure means the plug-in is loaded but skipped (loud warning, framework boots fine)
  - then PF4J's startPlugins() runs the no-arg start
  - then we walk loaded plug-ins and call vibe_erp's start(context) with a fully-wired DefaultPluginContext (logger + endpoints + eventBus + jdbc). The plug-in's tables are guaranteed to exist by the time its lambdas run.
* DefaultPluginContext.jdbc is no longer a stub. Plug-ins inject the shared PluginJdbc and use it to talk to their own tables.
* Reference plug-in (PrintingShopPlugin):
  - Ships META-INF/vibe-erp/db/changelog.xml with two changesets: plugin_printingshop__plate (id, code, name, width_mm, height_mm, status) and plugin_printingshop__ink_recipe (id, code, name, cmyk_c/m/y/k).
  - Now registers seven endpoints:
      GET  /ping         — health
      GET  /echo/{name}  — path-variable demo
      GET  /plates       — list
      GET  /plates/{id}  — fetch
      POST /plates       — create (with a racy existence check before INSERT, since plug-ins can't import Spring's DataAccessException)
      GET  /inks
      POST /inks
  - All CRUD lambdas use context.jdbc with named parameters. The plug-in still imports nothing from org.springframework.* in its own code (it does reach the host's Jackson via reflection for JSON parsing — a deliberate v0.6 shortcut documented inline).
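The rejection predicate at the heart of the linter can be sketched in isolation. This is illustrative only — in the real PluginLinter the internal class names come from ASM's visitor callbacks while walking each .class entry; here they are just passed in as strings:

```kotlin
// Prefixes a plug-in may never reference, in JVM internal-name form
// (slashes, not dots) — as ASM reports them.
val forbiddenPrefixes = listOf("org/vibeerp/platform/", "org/vibeerp/pbc/")

/**
 * Returns the subset of referenced internal class names that violate the rule.
 * An empty result means the plug-in passes the lint; anything else means it
 * is unloaded before any of its code runs.
 */
fun violations(referencedInternalNames: Collection<String>): List<String> =
    referencedInternalNames.filter { name ->
        forbiddenPrefixes.any { prefix -> name.startsWith(prefix) }
    }
```

Note that `org/vibeerp/api/` deliberately falls through: api.v1 is the one surface plug-ins are allowed to touch.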
Tests: 5 new PluginLinterTest cases use ASM ClassWriter to synthesize in-memory plug-in JARs (clean class, forbidden platform ref, forbidden pbc ref, allowed api.v1 ref, multiple violations) and a mocked PluginWrapper to avoid touching the real PF4J loader. Total now 81 unit tests across 10 modules, all green.

End-to-end smoke test against fresh Postgres with the plug-in loaded (every assertion green):

Boot logs:
  PluginLiquibaseRunner: plug-in 'printing-shop' has changelog.xml
  Liquibase: ChangeSet printingshop-init-001 ran successfully
  Liquibase: ChangeSet printingshop-init-002 ran successfully
  Liquibase migrations applied successfully
  plugin.printing-shop: registered 7 endpoints

HTTP smoke:
  \dt plugin_printingshop* → both tables exist
  GET /api/v1/plugins/printing-shop/plates → []
  POST plate A4 → 201 + UUID
  POST plate A3 → 201 + UUID
  POST duplicate A4 → 409 + clear msg
  GET plates → 2 rows
  GET /plates/{id} → A4 details
  psql verifies both rows in plugin_printingshop__plate
  POST ink CYAN → 201
  POST ink MAGENTA → 201
  GET inks → 2 inks with nested CMYK
  GET /ping → 200 (existing endpoint)
  GET /api/v1/catalog/uoms → 15 UoMs (no regression)
  GET /api/v1/identity/users → 1 user (no regression)

Bug encountered and fixed during the smoke test:
• The plug-in initially shipped its changelog at db/changelog/master.xml, which collides with the HOST's db/changelog/master.xml. The plug-in classloader does parent-first lookup (PF4J default), so Liquibase's ClassLoaderResourceAccessor found BOTH files and threw ChangeLogParseException ("Found 2 files with the path"). Fixed by moving the plug-in changelog to META-INF/vibe-erp/db/changelog.xml, a path the host never uses, and updating PluginLiquibaseRunner. The unique META-INF prefix is now part of the documented plug-in convention.
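The NULL round-trip discipline described for ResultSetPluginRow can be shown with a minimal fake that mimics JDBC's getInt()/wasNull() contract. The `FakeRow` type and `nullableInt` extension are illustrative stand-ins, not the real platform-plugins code:

```kotlin
// Fake standing in for one JDBC result row: like java.sql.ResultSet,
// getInt() returns 0 for SQL NULL and wasNull() reports the truth afterwards.
class FakeRow(private val cols: Map<String, Int?>) {
    private var lastWasNull = false
    fun getInt(column: String): Int {
        val v = cols.getValue(column)
        lastWasNull = (v == null)
        return v ?: 0          // the JDBC default that causes the bugs
    }
    fun wasNull(): Boolean = lastWasNull
}

// The accessor shape the text describes: consult wasNull() after every read
// so SQL NULL surfaces as Kotlin null, never as a silent 0.
fun FakeRow.nullableInt(column: String): Int? =
    getInt(column).takeUnless { wasNull() }
```

Without the `wasNull()` check a nullable `quantity` column would read back as 0 — exactly the "bug factory" the real accessors close off.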
What is explicitly NOT in this chunk (deferred):
• Per-plugin Spring child contexts — plug-ins still instantiate via PF4J's classloader without their own Spring beans
• Per-plugin datasource isolation — one shared host pool today
• Plug-in changelog table-prefix linter — convention only, runtime enforcement comes later
• Rollback on plug-in uninstall — uninstall is operator-confirmed and rare; running dropAll() during stop() would lose data on an accidental restart
• Subscription auto-scoping on plug-in stop — plug-ins still close their own subscriptions in stop()
• Real customer-grade JSON parsing in plug-in lambdas — the v0.6 reference plug-in uses reflection to find the host's Jackson; a real plug-in author would ship their own JSON library or use a future api.v1 typed-DTO surface

Implementation plan refreshed: P1.2, P1.3, P1.4, P1.7, P4.1, P5.1 all marked DONE in docs/superpowers/specs/2026-04-07-vibe-erp-implementation-plan.md. Next priority candidates: P1.5 (metadata seeder) and P5.2 (pbc-partners).

-
The reference printing-shop plug-in now actually does something: its main class registers two HTTP endpoints during start(context), and a real curl to /api/v1/plugins/printing-shop/ping returns the JSON the plug-in's lambda produced. End-to-end smoke test 10/10 green. This is the chunk that turns vibe_erp from "an ERP app that has a plug-in folder" into "an ERP framework whose plug-ins can serve traffic".

What landed:

* api.v1 — additive (binary-compatible per the api.v1 stability rule):
  - org.vibeerp.api.v1.plugin.HttpMethod (enum)
  - org.vibeerp.api.v1.plugin.PluginRequest (path params, query, body)
  - org.vibeerp.api.v1.plugin.PluginResponse (status + body)
  - org.vibeerp.api.v1.plugin.PluginEndpointHandler (fun interface)
  - org.vibeerp.api.v1.plugin.PluginEndpointRegistrar (per-plugin scoped, register(method, path, handler))
  - PluginContext.endpoints getter with a default impl that throws UnsupportedOperationException so the addition is binary-compatible with plug-ins compiled against earlier api.v1 builds.
* platform-plugins — four new files:
  - PluginEndpointRegistry: process-wide registration storage. Uses Spring's AntPathMatcher so {var} extracts path variables. Synchronized mutation. Exact-match fast path before the pattern loop. Rejects duplicate (method, path) per plug-in. unregisterAll(plugin) on shutdown.
  - ScopedPluginEndpointRegistrar: per-plugin wrapper that tags every register() call with the right plugin id. Plug-ins cannot register under another plug-in's namespace.
  - PluginEndpointDispatcher: single Spring @RestController at /api/v1/plugins/{pluginId}/** that catches GET/POST/PUT/PATCH/DELETE, asks the registry for a match, builds a PluginRequest, calls the handler, and serializes the response. 404 on no match, 500 on handler throw (logged with stack trace).
  - DefaultPluginContext: implements PluginContext with a real SLF4J-backed logger (every line tagged with the plug-in id) and the scoped endpoint registrar.
    The other six services (eventBus, transaction, translator, localeProvider, permissionCheck, entityRegistry) throw UnsupportedOperationException with messages pointing at the implementation-plan unit that will land each one. Loud failure beats silent no-op.
* VibeErpPluginManager — after PF4J's startPlugins(), now walks every loaded plug-in, casts the wrapper instance to api.v1.plugin.Plugin, and calls start(context) with a freshly-built DefaultPluginContext. Tracks the started set so destroy() can call stop() and unregisterAll() in reverse order. Catches plug-in start failures loudly without bringing the framework down.
* Reference plug-in (PrintingShopPlugin):
  - Now extends BOTH org.pf4j.Plugin (so PF4J's loader can instantiate it via the Plugin-Class manifest entry) AND org.vibeerp.api.v1.plugin.Plugin (so the host's vibe_erp lifecycle hook can call start(context)). Uses Kotlin import aliases to disambiguate the two `Plugin` simple names.
  - In start(context), registers two endpoints:
      GET /ping        — returns {plugin, version, ok, message}
      GET /echo/{name} — extracts the path variable and echoes it back
  - The /echo handler proves path-variable extraction works end-to-end.
* Build infrastructure:
  - reference-customer/plugin-printing-shop now has an `installToDev` Gradle task that builds the JAR and stages it into <repo>/plugins-dev/. The task wipes any previously staged copies first so renaming the JAR on a version bump doesn't leave PF4J trying to load two versions.
  - distribution's `bootRun` task now (a) depends on `installToDev` so the staging happens automatically and (b) sets workingDir to the repo root so application-dev.yaml's relative `vibeerp.plugins.directory: ./plugins-dev` resolves to the right place. Without (b), bootRun's CWD was distribution/ and PF4J found "No plugins" — which is exactly the bug that surfaced in the first smoke run.
  - .gitignore now excludes /plugins-dev/ and /files-dev/.
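Path-variable extraction of the kind the registry performs can be sketched with a toy segment matcher. This is purely illustrative — the real PluginEndpointRegistry delegates to Spring's AntPathMatcher rather than hand-rolling this:

```kotlin
/**
 * Matches a template like "/echo/{name}" against a concrete path.
 * Returns the extracted variables on a match, or null on no match.
 */
fun matchTemplate(template: String, path: String): Map<String, String>? {
    val tSegs = template.trim('/').split('/')
    val pSegs = path.trim('/').split('/')
    if (tSegs.size != pSegs.size) return null
    val vars = mutableMapOf<String, String>()
    for ((t, p) in tSegs.zip(pSegs)) {
        when {
            t.startsWith("{") && t.endsWith("}") ->
                vars[t.substring(1, t.length - 1)] = p  // {name} captures the segment
            t != p -> return null                        // literal segment must match exactly
        }
    }
    return vars
}
```

A literal registration like "/ping" matches only itself (and can be served from an exact-match fast path); a pattern like "/echo/{name}" yields the variable map that ends up in PluginRequest.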
Tests: 12 new unit tests for PluginEndpointRegistry covering literal paths, single/multi path variables, duplicate-registration rejection, literal-vs-pattern precedence, cross-plug-in isolation, method matching, and unregisterAll. Total now 61 unit tests across the framework, all green.

End-to-end smoke test against fresh Postgres + the plug-in JAR loaded by PF4J at boot (10/10 passing):

  GET /api/v1/plugins/printing-shop/ping (no auth) → 401
  POST /api/v1/auth/login → access token
  GET /api/v1/plugins/printing-shop/ping (Bearer) → 200 {plugin, version, ok, message}
  GET /api/v1/plugins/printing-shop/echo/hello → 200, echoed=hello
  GET /api/v1/plugins/printing-shop/echo/world → 200, echoed=world
  GET /api/v1/plugins/printing-shop/nonexistent → 404 (no handler)
  GET /api/v1/plugins/missing-plugin/ping → 404 (no plugin)
  POST /api/v1/plugins/printing-shop/ping → 404 (wrong method)
  GET /api/v1/catalog/uoms (Bearer) → 200, 15 UoMs
  GET /api/v1/identity/users (Bearer) → 200, 1 user

PF4J resolved the JAR, started the plug-in, the host called vibe_erp's start(context), the plug-in registered two endpoints, and the dispatcher routed real HTTP traffic to the plug-in's lambdas. The boot log shows the full chain.

What is explicitly NOT in this chunk and remains for later:
• plug-in linter (P1.2) — bytecode scan for forbidden imports
• plug-in Liquibase application (P1.4) — plug-in-owned schemas
• per-plug-in Spring child context — currently we just instantiate the plug-in via PF4J's classloader; there is no Spring context for the plug-in's own beans
• PluginContext.eventBus / transaction / translator / etc.
  — they still throw UnsupportedOperationException with TODO messages
• Path-template precedence between multiple competing patterns (only literal-beats-pattern is implemented, not most-specific-pattern)
• Permission checks at the dispatcher (Spring Security still catches plug-in endpoints with the global "anyRequest authenticated" rule, which is the right v0.5 behavior)
• Hot reload of plug-ins (cold restart only)

Bugs encountered and fixed during the smoke test:
• application-dev.yaml has `vibeerp.plugins.directory: ./plugins-dev`, a relative path. Gradle's `bootRun` task by default uses the subproject's directory as the working directory, so the relative path resolved to <repo>/distribution/plugins-dev/ instead of <repo>/plugins-dev/. PF4J reported "No plugins" because that directory was empty. Fixed by setting bootRun.workingDir = rootProject.layout.projectDirectory.asFile.
• One KDoc comment in PluginEndpointDispatcher contained the literal string `/api/v1/plugins/{pluginId}/**` inside backticks. The Kotlin lexer doesn't treat backticks as comment-suppressing, so `/**` opened a nested KDoc comment that was never closed and the file failed to compile. Same root cause as the AuthController bug earlier in the session. Rewrote the line to avoid the literal `/**` sequence.

-
Adds the second core PBC, validating that the pbc-identity template is actually clonable and that the Gradle dependency rule fires correctly for a real second PBC.

What landed:

* New `pbc/pbc-catalog/` Gradle subproject. Same shape as pbc-identity: api-v1 + platform-persistence + platform-security only (no platform-bootstrap, no other pbc). The architecture rule in the root build.gradle.kts now has two real PBCs to enforce against.
* `Uom` entity (catalog__uom) — code, name, dimension, ext jsonb. Code is the natural key (stable, human-readable). UomService rejects duplicate codes and refuses to update the code itself (that would invalidate every Item FK referencing it). UomController at /api/v1/catalog/uoms exposes list, get-by-id, get-by-code, create, update.
* `Item` entity (catalog__item) — code, name, description, item_type (GOOD/SERVICE/DIGITAL enum), base_uom_code FK, active flag, ext jsonb. ItemService validates that the referenced UoM exists at the application layer (better error message than the DB FK alone), refuses to update code or baseUomCode (data-migration operations, not edits), and supports soft delete via deactivate. ItemController at /api/v1/catalog/items with full CRUD.
* `org.vibeerp.api.v1.ext.catalog.CatalogApi` — second cross-PBC facade in api.v1 (after IdentityApi). Exposes findItemByCode(code) and findUomByCode(code) returning safe ItemRef/UomRef DTOs. Inactive items are filtered to null at the boundary so callers cannot accidentally reference deactivated catalog rows.
* `CatalogApiAdapter` in pbc-catalog — concrete @Component implementing CatalogApi. Maps internal entities to api.v1 DTOs without leaking storage types.
* Liquibase changesets (catalog-init-001..003) create both tables with unique indexes on code, GIN indexes on ext, and seed 15 canonical units of measure: kg/g/t (mass), m/cm/mm/km (length), m2 (area), l/ml (volume), ea/sheet/pack (count), h/min (time). Tagged created_by='__seed__' so a future metadata uninstall sweep can identify them.
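The facade/adapter split and the filter-inactive-to-null rule can be sketched together. The CatalogApi signatures come from the text above; the entity shape and the in-memory backing are assumed placeholders for the real JPA repository:

```kotlin
// api.v1 side: safe DTO + facade interface. Callers see only these.
data class ItemRef(val code: String, val name: String)

interface CatalogApi {
    fun findItemByCode(code: String): ItemRef?
}

// Illustrative stand-in for the JPA entity inside pbc-catalog.
data class ItemEntity(val code: String, val name: String, val active: Boolean)

// pbc-catalog side: the adapter (a Spring @Component in the real code).
class CatalogApiAdapter(private val items: List<ItemEntity>) : CatalogApi {
    // Inactive items collapse to null at the boundary, so no caller can
    // ever hold a reference to a deactivated catalog row.
    override fun findItemByCode(code: String): ItemRef? =
        items.firstOrNull { it.code == code && it.active }
            ?.let { ItemRef(it.code, it.name) }
}
```

A consumer like pbc-inventory compiles against `CatalogApi` alone; Spring injects the adapter at runtime, which is what lets the Gradle rule forbid the direct module dependency.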
Tests: 11 new unit tests (UomServiceTest x5, ItemServiceTest x6), total now 49 unit tests across the framework, all green.

End-to-end smoke test against fresh Postgres via docker-compose (14/14 passing):

  GET /api/v1/catalog/items (no auth) → 401
  POST /api/v1/auth/login → access token
  GET /api/v1/catalog/uoms (Bearer) → 15 seeded UoMs
  GET /api/v1/catalog/uoms/by-code/kg → 200
  POST custom UoM 'roll' → 201
  POST duplicate UoM 'kg' → 400 + clear message
  GET items → []
  POST item with unknown UoM → 400 + clear message
  POST item with valid UoM → 201
  catalog__item.created_by → admin user UUID (NOT __system__)
  GET /by-code/INK-CMYK-CYAN → 200
  PATCH item name + description → 200
  DELETE item → 204
  GET item → active=false

The principal-context bridge from P4.1 keeps working without any additional wiring in pbc-catalog: every PBC inherits the audit behavior for free by extending AuditedJpaEntity. That is exactly the "PBCs follow a recipe, the framework provides the cross-cutting machinery" promise from the architecture spec.

Architectural-rule enforcement is still active: confirmed by reading the build.gradle.kts and observing that pbc-catalog declares no :platform:platform-bootstrap and no :pbc:pbc-identity dependency. The build refuses to load on either violation.
-
Design change: vibe_erp deliberately does NOT support multiple companies in one process. Each running instance serves exactly one company against an isolated Postgres database. Hosting many customers means provisioning many independent instances, not multiplexing them.

Why: most ERP customers will not accept a SaaS where their data shares a database with other companies. The single-tenant-per-instance model is what the user actually wants the product to look like, and it dramatically simplifies the framework.

What changed:
- CLAUDE.md guardrail #5 rewritten from "multi-tenant from day one" to "single-tenant per instance, isolated database"
- api.v1: removed the TenantId value class entirely; removed tenantId from Entity, AuditedEntity, Principal, DomainEvent, RequestContext, TaskContext, IdentityApi.UserRef, Repository
- platform-persistence: deleted TenantContext, HibernateTenantResolver, TenantAwareJpaTransactionManager, TenancyJpaConfiguration; removed @TenantId and the tenant_id column from AuditedJpaEntity
- platform-bootstrap: deleted TenantResolutionFilter; dropped vibeerp.instance.mode and default-tenant from properties; added vibeerp.instance.company-name; added VibeErpApplication @EnableJpaRepositories and @EntityScan so PBC repositories outside the main package are wired; added a GlobalExceptionHandler that maps IllegalArgumentException → 400 and NoSuchElementException → 404 (RFC 7807 ProblemDetail)
- pbc-identity: removed tenant_id from User, repository, controller, DTOs, IdentityApiAdapter; updated the UserService duplicate-username message and the matching test
- distribution: dropped multiTenancy=DISCRIMINATOR and tenant_identifier_resolver from application.yaml; configured the Spring Boot mainClass on the springBoot extension (not just bootJar) so bootRun works
- Liquibase: rewrote the platform-init changelog to drop platform__tenant and the tenant_id columns on every metadata__* table; rewrote the pbc-identity init to drop tenant_id columns, the (tenant_id, *) composite
  indexes, and the per-table RLS policies
- IdentifiersTest replaced with Id<T> tests since the TenantId tests no longer apply

Verified end-to-end against a real Postgres via docker-compose:

  POST /api/v1/identity/users → 201 Created
  GET /api/v1/identity/users → list works
  GET /api/v1/identity/users/X → fetch by id works
  POST duplicate username → 400 Bad Request (was 500)
  PATCH bogus id → 404 Not Found (was 500)
  PATCH alice → 200 OK
  DELETE alice → 204, alice now disabled

All 18 unit tests pass.
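The status mapping that turned those former 500s into 400/404 can be shown in isolation. The real GlobalExceptionHandler is a Spring handler returning RFC 7807 ProblemDetail bodies; this stripped-down function (a sketch, name assumed) shows only the exception-to-status decision:

```kotlin
/** Maps service-layer exceptions to HTTP status codes, as the handler does. */
fun statusFor(ex: Throwable): Int = when (ex) {
    is IllegalArgumentException -> 400  // e.g. duplicate username
    is NoSuchElementException -> 404    // e.g. PATCH against a bogus id
    else -> 500                          // anything unexpected stays a 500
}
```

Keeping services throwing plain stdlib exceptions and centralising the HTTP translation in one advice is what lets every PBC get the 400/404 behaviour without controller-level try/catch.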
-
BLOCKER: wire Hibernate multi-tenancy
- application.yaml: set hibernate.tenant_identifier_resolver and hibernate.multiTenancy=DISCRIMINATOR so HibernateTenantResolver is actually installed into the SessionFactory
- AuditedJpaEntity.tenantId: add @org.hibernate.annotations.TenantId so every PBC entity inherits the discriminator
- AuditedJpaEntityListener.onCreate: throw if a caller pre-set tenantId to a different value than the current TenantContext, instead of silently overwriting (defense against cross-tenant write bugs)

IMPORTANT: dependency hygiene
- pbc-identity no longer depends on platform-bootstrap (wrong direction; bootstrap assembles PBCs at the top of the stack)
- root build.gradle.kts: tighten the architectural-rule enforcement to also reject :pbc:* -> platform-bootstrap; switch plug-in detection from a fragile pathname heuristic to an explicit extra["vibeerp.module-kind"] = "plugin" marker; the reference plug-in declares the marker

IMPORTANT: api.v1 surface additions (all non-breaking)
- Repository: documented closed exception set; new PersistenceExceptions.kt declares OptimisticLockConflictException, UniqueConstraintViolationException, EntityValidationException, and EntityNotFoundException so plug-ins never see Hibernate types
- TaskContext: now exposes tenantId(), principal(), locale(), correlationId() so workflow handlers (which run outside an HTTP request) can pass tenant-aware calls back into api.v1
- EventBus: subscribe() now returns a Subscription with close() so long-lived subscribers can deregister explicitly; added a subscribe(topic: String, ...) overload for cross-classloader event routing where Class<E> equality is unreliable
- IdentityApi.findUserById: tightened from Id<*> to PrincipalId so the type system rejects "wrong-id-kind" mistakes at the cross-PBC boundary

NITs:
- HealthController.kt -> MetaController.kt (file name now matches the class name); added TODO(v0.2) for reading implementationVersion from the Spring Boot BuildProperties bean
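The subscribe()-returns-Subscription shape can be sketched with a tiny in-memory bus. This is illustrative only — the real api.v1 EventBus and Subscription types may differ in detail, and the string-topic overload exists precisely because Class<E> equality breaks across plug-in classloaders:

```kotlin
// A handle the subscriber keeps; close() deregisters explicitly.
interface Subscription : AutoCloseable

class InMemoryEventBus {
    private val handlers = mutableMapOf<String, MutableList<(Any) -> Unit>>()

    fun publish(topic: String, event: Any) {
        // Snapshot so a handler closing its own subscription doesn't
        // mutate the list mid-iteration.
        handlers[topic].orEmpty().toList().forEach { it(event) }
    }

    /** String-topic overload: no Class<E> identity needed across classloaders. */
    fun subscribe(topic: String, handler: (Any) -> Unit): Subscription {
        handlers.getOrPut(topic) { mutableListOf() }.add(handler)
        return object : Subscription {
            override fun close() {
                handlers[topic]?.remove(handler)
            }
        }
    }
}
```

A long-lived subscriber (e.g. a plug-in) holds the Subscription and closes it in stop(), so events stop flowing into a classloader that is about to be discarded.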
-
… i18n, http, plugin lifecycle