-
Adds S3FileStorage alongside the existing LocalDiskFileStorage, selected at boot by vibeerp.files.backend (local or s3). The local backend is the default (matchIfMissing=true) so existing deployments are unaffected. Setting backend=s3 activates the S3 backend with its own config block. Works with AWS S3, MinIO, DigitalOcean Spaces, or any S3-compatible object store via the endpoint-url override. The S3 client is lazy-initialized on first use so the bean loads even when S3 is unreachable at boot time (useful for tests and for the local-disk default path where the S3 bean is never instantiated).

Configuration (vibeerp.files.s3.*):
- bucket (required when backend=s3)
- region (default: us-east-1)
- endpoint-url (optional; for MinIO and non-AWS services)
- access-key + secret-key (optional; falls back to the AWS DefaultCredentialsProvider chain)
- key-prefix (optional; namespaces objects so multiple instances can share one bucket)

Implementation notes:
- put() reads the stream into a byte array for S3 (S3 requires Content-Length up front; chunked upload is a future optimization for large files)
- get() returns the S3 response InputStream directly; the caller must close it (same contract as the local backend)
- list() paginates via ContinuationToken for buckets with >1000 objects per prefix
- Content-type is stored as native S3 object metadata (no sidecar .meta file, unlike the local backend)

Dependency: software.amazon.awssdk:s3:2.28.6 (AWS SDK v2) added to libs.versions.toml and platform-files build.gradle.kts. LocalDiskFileStorage gained @ConditionalOnProperty(havingValue = "local", matchIfMissing = true) so it stays the default but doesn't conflict when backend=s3. application.yaml updated with a commented-out S3 config block documenting all available properties.
-
Closes the R2 gap: an admin can now manage users and roles entirely from the SPA without touching curl or Swagger UI.

Backend (pbc-identity):
- New RoleService with createRole, assignRole, revokeRole, findUserRoleCodes, listRoles. Each method validates existence + idempotency (duplicate assignment rejected, missing role rejected).
- New RoleController at /api/v1/identity/roles (CRUD) + /api/v1/identity/users/{userId}/roles/{roleCode} (POST assign, DELETE revoke). All permission-gated: identity.role.read, identity.role.create, identity.role.assign.
- identity.yml updated: added the identity.role.create permission.

SPA (web/):
- UsersPage — list with username link to detail, "+ New User"
- CreateUserPage — username, display name, email form
- UserDetailPage — shows user info + role toggle list. Each role has an Assign/Revoke button that takes effect on the user's next login (the JWT carries roles from login time).
- RolesPage — list with inline create form (code + name)
- Sidebar gains a "System" section with Users + Roles links
- API client + types: identity.listUsers, getUser, createUser, listRoles, createRole, getUserRoles, assignRole, revokeRole

Infrastructure:
- SpaController: added /users/** and /roles/** forwarding
- SecurityConfiguration: added /users/** and /roles/** to the SPA permitAll block
-
Extends the R1 SPA with create forms for the four entities operators interact with most. Each page follows the same pattern proven by CreateSalesOrderPage: a card-scoped form with dropdowns populated from the API, inline validation, and a redirect to the detail or list page on success.

New pages:
- CreateItemPage — code, name, type (GOOD/SERVICE/DIGITAL), UoM dropdown populated from /api/v1/catalog/uoms
- CreatePartnerPage — code, name, type (CUSTOMER/SUPPLIER/BOTH), optional email + phone
- CreatePurchaseOrderPage — symmetric to CreateSalesOrderPage; supplier dropdown filtered to SUPPLIER/BOTH partners, optional expected date, dynamic line items
- CreateWorkOrderPage — output item + quantity + optional due date, dynamic BOM inputs (item + qty/unit + source location dropdown), dynamic routing operations (op code + work center + std minutes). The most complex form in the SPA — matches the EBC-PP-001 work order creation flow.

API client additions: catalog.createItem, partners.create, purchaseOrders.create, production.createWorkOrder — each a typed wrapper around a POST to the corresponding endpoint.

List pages updated: Items, Partners, Purchase Orders, Work Orders all now show a "+ New" button in the PageHeader that links to the create form.

Routes wired: /items/new, /partners/new, /purchase-orders/new, /work-orders/new — all covered by the existing SpaController wildcard patterns and SecurityConfiguration permitAll rules.
-
Reworks the demo seed and SPA to match the reference customer's work-order management process (EBC-PP-001 from the raw/ docs).

Demo seed (DemoSeedRunner):
- 7 printing-specific items: paper stock, 4-color ink, CTP plates, lamination film, business cards, brochures, posters
- 4 partners: 2 customers (Wucai Advertising, Globe Marketing), 2 suppliers (Huazhong Paper, InkPro Industries)
- 2 warehouses with opening stock for all items
- Pre-seeded WO-PRINT-0001 with a full BOM (3 inputs: paper + ink + CTP plates from WH-RAW) and a 3-step routing (CTP plate-making @ CTP-ROOM-01 -> offset printing @ PRESS-A -> post-press finishing @ BIND-01) matching EBC-PP-001 steps C-010/C-040
- 2 DRAFT sales orders: SO-2026-0001 (100x business cards + 500x brochures, $1950), SO-2026-0002 (200x posters, $760)
- 1 DRAFT purchase order: PO-2026-0001 (10000x paper + 50kg ink, $2550) from Huazhong Paper

SPA additions:
- New CreateSalesOrderPage with customer dropdown, item selector, dynamic line add/remove, quantity + price inputs. Navigates to the detail page on creation.
- "+ New Order" button on the SalesOrdersPage header
- Dashboard "Try the demo" section rewritten to walk the EBC-PP-001 flow: create SO -> confirm (auto-spawns WOs) -> walk WO routing -> complete (material issue + production receipt) -> ship SO (stock debit + AR settle)
- salesOrders.create() added to the typed API client

The key demo beat: confirming SO-2026-0001 auto-spawns WO-FROM-SO-2026-0001-L1 and -L2 via SalesOrderConfirmedSubscriber (EBC-PP-001 step B-010). The pre-seeded WO-PRINT-0001 shows the full BOM + routing story separately. Together they demonstrate that the framework expresses the customer's production workflow through configuration, not code.

Smoke verified on fresh Postgres: all 7 items seeded, WO with 3 BOM + 3 ops created, SO confirm spawns 2 WOs with source traceability, SPA /sales-orders/new renders and creates orders.
-
Updates the "at a glance" row to v0.29.0-SNAPSHOT + fc62d6d7, bumps the Phase 6 R1 row to DONE with the commit ref and an overview of what landed (Gradle wrapper, SpaController, security reordering, bundled fat-jar, 16 pages), and rewrites the "How to run" section to walk the click-through demo instead of just curl. README's status table updated to reflect 10/10 PBCs + 356 tests + SPA status; building section now mentions that `./gradlew build` compiles the SPA too. No code changes.
-
The R1 SPA chunk added a :web Gradle subproject whose npmBuild Exec task runs Vite during :distribution:bootJar. The Dockerfile's build stage uses eclipse-temurin:21-jdk-alpine, which has no node/npm, so the docker-image CI job fails with:

    process "/bin/sh -c chmod +x ./gradlew && ./gradlew :distribution:bootJar --no-daemon"
    did not complete successfully: exit code: 1

Fix: apk add --no-cache nodejs npm before the Gradle build. Alpine 3.21 ships node v22 + npm 10, which Vite 5 + React 18 handle fine. The runtime stage stays a pure JRE image — node is only needed at build time and never makes it into the shipping container.
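A condensed sketch of the fixed two-stage Dockerfile; the stage names, working directory, jar path, and the `21-jre-alpine` runtime tag are illustrative assumptions, while the `apk add` line and the Gradle command come from this chunk:

```dockerfile
# Build stage: JDK + node, because :web's npmBuild task runs Vite
# during :distribution:bootJar. Alpine 3.21 ships node 22 + npm 10.
FROM eclipse-temurin:21-jdk-alpine AS build
RUN apk add --no-cache nodejs npm
WORKDIR /src
COPY . .
RUN chmod +x ./gradlew && ./gradlew :distribution:bootJar --no-daemon

# Runtime stage: pure JRE; node never ships in the container.
FROM eclipse-temurin:21-jre-alpine
COPY --from=build /src/distribution/build/libs/*.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```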
-
First runnable end-to-end demo: open the browser, log in, click through every PBC, and walk a sales order DRAFT → CONFIRMED → SHIPPED. Stock balances drop, the SALES_SHIPMENT row appears in the ledger, and the AR journal entry settles — all visible in the SPA without touching curl. Bumps version to 0.29.0-SNAPSHOT.

What landed
-----------
* New `:web` Gradle subproject — Vite + React 18 + TypeScript + Tailwind 3.4. The Gradle side is two `Exec` tasks (`npmInstall`, `npmBuild`) with proper inputs/outputs declared for incremental builds. Deliberately no node-gradle plugin — one less moving piece.
* SPA architecture: hand-written typed REST client over `fetch` (auth header injection + 401 handler), AuthContext that decodes the JWT for display, ProtectedRoute, AppLayout with sidebar grouped by PBC, 16 page components covering the full v1 surface (Items, UoMs, Partners, Locations, Stock Balances + Movements, Sales Orders + detail w/ confirm/ship/cancel, Purchase Orders + detail w/ confirm/receive/cancel, Work Orders + detail w/ start/complete, Shop-Floor dashboard with 5s polling, Journal Entries). 211 KB JS / 21 KB CSS gzipped.
* Sales-order detail page: the confirm/ship/cancel verbs each refresh the order, the (SO-filtered) movements list, and the (SO-filtered) journal entries — so an operator watches the ledger row appear and the AR row settle in real time after a single click. Same pattern on the purchase-order detail page for the AP/RECEIPT side.
* Shop-floor dashboard polls /api/v1/production/work-orders/shop-floor every 5s and renders one card per IN_PROGRESS WO with current operation, planned vs actual minutes (progress bar), and operations-completed.
* `:distribution` consumes the SPA dist via a normal Gradle outgoing/incoming configuration: `:web` exposes `webStaticBundle`, `:distribution`'s `bundleWebStatic` Sync task copies it into `${buildDir}/web-static/static/`, and that parent directory is added to the main resources source set so Spring Boot serves the SPA from `classpath:/static/` out of the same fat-jar. Single artifact, no nginx, no CORS.
* New `SpaController` in platform-bootstrap forwards every known SPA route prefix to `/index.html` so React Router's HTML5 history mode works on hard refresh / deep-link entry. Explicit list (12 prefixes) rather than a catch-all, so typoed API URLs still get an honest 404 instead of the SPA shell.
* SecurityConfiguration restructured: keeps the public allowlist for /api/v1/auth + /api/v1/_meta + /v3/api-docs + /swagger-ui, then `/api/**` is `.authenticated()`, then SPA static assets + every SPA route prefix are `.permitAll()`. The order is load-bearing — putting `.authenticated()` for /api/** BEFORE the SPA permitAll preserves the framework's "API is always authenticated" invariant even with the SPA bundled in the same fat-jar. The SPA bundle itself is just HTML+CSS+JS, so permitting it is correct; secrets are gated by /api/**.
* New `DemoSeedRunner` in `:distribution` (gated behind `vibeerp.demo.seed=true`, set in application-dev.yaml only). Idempotent — the runner short-circuits if its sentinel item (DEMO-PAPER-A4) already exists. Seeds 5 items, 2 warehouses, 4 partners, opening stock for every item, one open DEMO-SO-0001 (50× business cards + 20× brochures, $720), one open DEMO-PO-0001 (10000× paper, $400). Every row carries the DEMO- prefix so it's trivially distinguishable from hand-created data; a future "delete demo data" command has an obvious filter. Production deploys never set the property, so the @ConditionalOnProperty bean stays absent from the context.
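The `:web` npm tasks and the outgoing `webStaticBundle` configuration described above can be sketched as follows; a minimal sketch assuming Vite's default `dist/` output directory and simplified input file sets, not the real build file:

```kotlin
// web/build.gradle.kts (sketch)
val npmInstall by tasks.registering(Exec::class) {
    workingDir = projectDir
    commandLine("npm", "ci")
    // Declared inputs/outputs make the task incremental:
    // npm only re-runs when the lockfile changes.
    inputs.files("package.json", "package-lock.json")
    outputs.dir("node_modules")
}

val npmBuild by tasks.registering(Exec::class) {
    dependsOn(npmInstall)
    workingDir = projectDir
    commandLine("npm", "run", "build")
    inputs.dir("src")
    inputs.files("package.json", "vite.config.ts", "index.html")
    outputs.dir("dist") // Vite's default output directory
}

// Consumable-only configuration that :distribution resolves,
// instead of a cross-project task reference.
val webStaticBundle by configurations.creating {
    isCanBeConsumed = true
    isCanBeResolved = false
}

artifacts {
    add("webStaticBundle", layout.projectDirectory.dir("dist")) {
        builtBy(npmBuild)
    }
}
```

`:distribution` then declares a resolvable configuration depending on `project(":web")` and points its `bundleWebStatic` Sync task at the resolved files.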
How to run
----------
    docker compose up -d db
    ./gradlew :distribution:bootRun
    open http://localhost:8080
    # Read the bootstrap admin password from the boot log,
    # log in as admin, and walk DEMO-SO-0001 through the
    # confirm + ship flow to see the buy-sell loop in the UI.

What was caught by the smoke test
---------------------------------
* TypeScript strict mode + `error: unknown` in React state → `{error && <X/>}` evaluates to `unknown` and JSX rejects it. Fixed by typing the state as `Error | null` and converting in catches with `e instanceof Error ? e : new Error(String(e))`. Affected 16 page files; the conversion is now uniform.
* DataTable's `T extends Record<string, unknown>` constraint was too restrictive for typed row interfaces; relaxed to unconstrained `T` with `(row as unknown as Record<…>)[key]` for the unkeyed cell-read fallback.
* `vite.config.ts` needs `@types/node` for `node:path` + `__dirname`; added to devDependencies, and tsconfig.node.json declares `"types": ["node"]`.
* KDoc nested-comment trap (4th time): SpaController's KDoc had `/api/v1/...` in backticks; the `/*` inside backticks starts a nested block comment and breaks Kotlin compilation with "Unclosed comment". Rephrased to "the api-v1 prefix".
* SecurityConfiguration order: a draft version that put the SPA permit-all rules BEFORE `/api/**` `.authenticated()` let unauthenticated requests reach API endpoints. Caught by an explicit smoke test (curl /api/v1/some-bogus-endpoint should return 401, not 404 from a missing static file). Reordered so /api/** authentication runs first.
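The load-bearing matcher order can be sketched with the Spring Security 6 DSL; a condensed illustration, with far fewer patterns than the real SecurityConfiguration:

```kotlin
import org.springframework.context.annotation.Bean
import org.springframework.security.config.annotation.web.builders.HttpSecurity
import org.springframework.security.web.SecurityFilterChain

@Bean
fun filterChain(http: HttpSecurity): SecurityFilterChain {
    http.authorizeHttpRequests { auth ->
        auth
            // 1. Public API allowlist
            .requestMatchers(
                "/api/v1/auth/**", "/api/v1/_meta/**",
                "/v3/api-docs/**", "/swagger-ui/**",
            ).permitAll()
            // 2. Everything else under /api/** needs a JWT. Matchers are
            //    evaluated in order, so this rule must come BEFORE the SPA
            //    rules; a permitAll placed first would let unauthenticated
            //    requests reach API endpoints.
            .requestMatchers("/api/**").authenticated()
            // 3. SPA shell, static assets, and route prefixes are public
            .requestMatchers(
                "/", "/index.html", "/assets/**",
                "/sales-orders/**", "/items/**",
            ).permitAll()
    }
    return http.build()
}
```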
End-to-end smoke (real Postgres, fresh DB)
------------------------------------------
- bootRun starts in 7.6s
- DemoSeedRunner reports "populating starter dataset… done"
- GET / returns the SPA HTML
- GET /sales-orders, /sales-orders/<uuid>, /journal-entries all return 200 (SPA shell — React Router takes over)
- GET /assets/index-*.js and /assets/index-*.css both 200
- GET /api/v1/some-bogus-endpoint → 401 (Spring Security rejects before any controller mapping)
- admin login via /api/v1/auth/login → 200 + JWT
- GET /catalog/items → 5 DEMO-* rows
- GET /partners/partners → 4 DEMO-* rows
- GET /inventory/locations → 2 DEMO-* warehouses
- GET /inventory/balances → 5 starting balances
- POST /orders/sales-orders/<id>/confirm → CONFIRMED; GET /finance/journal-entries shows AR POSTED 720 USD
- POST /orders/sales-orders/<id>/ship {"shippingLocationCode": "DEMO-WH-FG"} → SHIPPED; balances drop to 150 + 80; journal entry flips to SETTLED
- POST /orders/purchase-orders/<id>/confirm → CONFIRMED; AP POSTED 400 USD appears
- POST /orders/purchase-orders/<id>/receive → RECEIVED; PAPER balance grows from 5000 to 15000; AP row SETTLED
- 8 stock_movement rows in the ledger total
-
Closes the deferred TODO from the OpenAPI commit (11bef932): every endpoint a plug-in registers via `PluginContext.endpoints.register` now shows up in the OpenAPI spec alongside the host's @RestController operations. Downstream OpenAPI clients (R1 web SPA codegen, A1 MCP server tool catalog, operator-side Swagger UI browsing) can finally see the customer-specific HTTP surface.

**Problem.** springdoc's default scan walks `@RestController` beans on the host classpath. Plug-in endpoints are NOT registered that way — they live as lambdas on a single `PluginEndpointDispatcher` catch-all controller, so the default scan saw ONE dispatcher path and zero per-plug-in detail. The printing-shop plug-in's 8 endpoints were entirely invisible to the spec.

**Solution: an OpenApiCustomizer bean that queries the registry at spec-build time.**

1. `PluginEndpointRegistry.snapshot()` — new public read-only view. Returns a list of `(pluginId, method, path)` tuples without exposing the handler lambdas. Taken under the registry's intrinsic lock and copied out so callers can iterate without racing plug-in (un)registration. Ordered by registration order for determinism.
2. `PluginEndpointSummary` — new public data class in platform-plugins. `pluginId` + `method` + `path` plus a `fullPath()` helper that prepends `/api/v1/plugins/<pluginId>`.
3. `PluginEndpointsOpenApiCustomizer @Component` — new class in `platform-plugins/openapi/`. Implements `org.springdoc.core.customizers.OpenApiCustomizer`. On every `/v3/api-docs` request, iterates `registry.snapshot()`, groups by full path, and attaches a `PathItem` with one `Operation` per registered HTTP verb.
Each operation gets:
- A tag `"Plug-in: <pluginId>"` so Swagger UI groups every plug-in's surface under a header
- A `summary` + `description` naming the plug-in
- Path parameters auto-extracted from `{name}` segments
- A generic JSON request body for POST/PUT/PATCH
- A generic 200 response + 401/403/404 error responses
- The global bearerAuth security scheme (inherited from OpenApiConfiguration, no per-op annotation)

4. `compileOnly(libs.springdoc.openapi.starter.webmvc.ui)` in platform-plugins so `OpenApiCustomizer` is visible at compile time without dragging the full webmvc-ui bundle into platform-plugins' runtime classpath (distribution already pulls it in via platform-bootstrap's `implementation`).
5. `implementation(project(":platform:platform-plugins"))` added to platform-bootstrap so `OpenApiConfiguration` can inject the customizer by type and explicitly wire it to the `pluginEndpointsGroup()` `GroupedOpenApi` builder via `.addOpenApiCustomizer(...)`. **This is load-bearing** — springdoc's grouped specs run their own customizer pipeline and do NOT inherit top-level @Component OpenApiCustomizer beans. Caught at smoke-test time: initially the customizer populated the default /v3/api-docs but the /v3/api-docs/plugins group still showed only the dispatcher. The fix was making the customizer a constructor-injected dep of OpenApiConfiguration and calling `addOpenApiCustomizer` on the group builder.

**What a future chunk might add** (not in this one):
- Richer per-endpoint JSON Schema — v1 ships unconstrained `ObjectSchema` request/response bodies because the framework has no per-endpoint shape info at the registrar layer. A future `PluginEndpointRegistrar` overload accepting an explicit schema would let plug-ins document their payloads.
- Per-endpoint `@RequirePermission` surface — the dispatcher enforces permissions at runtime but doesn't record them on the registration, so the OpenAPI spec doesn't list them.
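The summary type and its `fullPath()` helper can be sketched as follows; the field types are simplified (the real `method` is the api.v1 `HttpMethod` enum, not a `String`) and the class layout is illustrative:

```kotlin
// Read-only view of one registered plug-in endpoint; no handler lambda.
data class PluginEndpointSummary(
    val pluginId: String,
    val method: String,
    val path: String,
) {
    /** Prepends the dispatcher mount point so the OpenAPI PathItem key
     *  is the externally visible URL. */
    fun fullPath(): String = "/api/v1/plugins/$pluginId$path"
}
```

For example, `PluginEndpointSummary("printing-shop", "GET", "/plates/{id}").fullPath()` yields `/api/v1/plugins/printing-shop/plates/{id}`, matching the paths in the smoke test below.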
**KDoc `/**` trap caught.** A literal plug-in URL pattern in the customizer's KDoc (`/api/v1/plugins/{pluginId}/**`) tripped the Kotlin nested-comment parser again. Rephrased as "under the `/api/v1/plugins/{pluginId}` prefix" to sidestep it. Third time this trap has bitten me — the workaround is in feedback memory.

**HttpMethod enum caught.** An initial `when` branch in the customizer covered HEAD/OPTIONS, which don't exist in the api.v1 `HttpMethod` enum (only GET/POST/PUT/PATCH/DELETE). Dropped those branches.

**Smoke-tested end-to-end against real Postgres:**
- GET /v3/api-docs/plugins returns 7 paths:
  - /api/v1/plugins/printing-shop/echo/{name} GET
  - /api/v1/plugins/printing-shop/inks GET, POST
  - /api/v1/plugins/printing-shop/ping GET
  - /api/v1/plugins/printing-shop/plates GET, POST
  - /api/v1/plugins/printing-shop/plates/{id} GET
  - /api/v1/plugins/printing-shop/plates/{id}/generate-quote-pdf POST
  - /api/v1/plugins/{pluginId}/** (dispatcher fallback)

  Before this chunk: only the dispatcher fallback (1 path).
- The top-level /v3/api-docs now also includes the 6 printing-shop paths it previously didn't.
- All 7 printing-shop endpoints remain functional at the real dispatcher (no behavior change — this is a documentation-only enhancement).

24 modules, 355 unit tests, all green.
-
Adds 15 GroupedOpenApi beans that split the single giant OpenAPI spec into per-PBC + per-platform-module focused specs, selectable from Swagger UI's top-right "Select a definition" dropdown. No @RestController changes — all groups are defined by URL prefix in platform-bootstrap, so adding a new PBC means touching exactly this file (plus the controller itself). Each group stays additive alongside the default /v3/api-docs.

**Groups shipped:**

Platform
- platform-core — /api/v1/auth/**, /api/v1/_meta/**
- platform-workflow — /api/v1/workflow/**
- platform-jobs — /api/v1/jobs/**
- platform-files — /api/v1/files/**
- platform-reports — /api/v1/reports/**

Core PBCs
- pbc-identity — /api/v1/identity/**
- pbc-catalog — /api/v1/catalog/**
- pbc-partners — /api/v1/partners/**
- pbc-inventory — /api/v1/inventory/**
- pbc-warehousing — /api/v1/warehousing/**
- pbc-orders — /api/v1/orders/** (sales + purchase together)
- pbc-production — /api/v1/production/**
- pbc-quality — /api/v1/quality/**
- pbc-finance — /api/v1/finance/**

Plug-in dispatcher
- plugins — /api/v1/plugins/**

**Why path-prefix grouping, not package-scan grouping.** Package-scan grouping would force OpenApiConfiguration to know every PBC's Kotlin package name and drift every time a PBC ships or a controller moves. Path-prefix grouping only shifts when `@RequestMapping` changes — which is already a breaking API change that would need review anyway. This keeps the control plane for grouping in one file while the routing stays in each controller.

**Why pbc-orders is one group, not split sales/purchase.** Both controllers share the `/api/v1/orders/` prefix, and sales / purchase are the same shape in practice — splitting them into two groups would just duplicate the dropdown entries. A future chunk can split if a real consumer asks for it.

**Primary group unchanged.** The default /v3/api-docs continues to return the full merged spec (every operation in one document).
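Each group bean follows the same springdoc builder pattern; a sketch of one of the fifteen, with an illustrative display name:

```kotlin
import org.springdoc.core.models.GroupedOpenApi
import org.springframework.context.annotation.Bean

// One focused spec, served at /v3/api-docs/pbc-catalog and listed in
// Swagger UI's "Select a definition" dropdown.
@Bean
fun pbcCatalogGroup(): GroupedOpenApi = GroupedOpenApi.builder()
    .group("pbc-catalog")
    .displayName("PBC: Catalog")
    .pathsToMatch("/api/v1/catalog/**")
    .build()
```

Groups with two prefixes (platform-core) pass both patterns to `pathsToMatch`.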
The grouped specs are additive at /v3/api-docs/<group-name>, and clients can pick whichever they need. Swagger UI defaults to showing the first group in the dropdown.

**Smoke-tested end-to-end against real Postgres:**
- GET /v3/api-docs/swagger-config returns 15 groups with human-readable display names
- Per-group path counts (confirming each group is focused):
  - pbc-production: 10 paths
  - pbc-catalog: 6 paths
  - pbc-orders: 12 paths
  - platform-core: 9 paths
  - platform-files: 3 paths
  - plugins: 1 path (dispatcher)
- Default /v3/api-docs continues to return the full spec.

24 modules, 355 unit tests, all green.
-
Closes a 15-commit-old TODO on MetaController and unifies the version story across /api/v1/_meta/info, /v3/api-docs, and the api-v1.jar manifest.

**Build metadata wiring.** `distribution/build.gradle.kts` now calls `buildInfo()` inside the `springBoot { }` block. This makes Spring Boot's Gradle plug-in write `META-INF/build-info.properties` into the bootJar at build time, with group / artifact / version / build time pulled from `project.version` + timestamps. Spring Boot's `BuildInfoAutoConfiguration` then exposes a `BuildProperties` bean that injection points can consume.

**MetaController enriched.** Now injects:
- `ObjectProvider<BuildProperties>` — returns the real version (`0.28.0-SNAPSHOT`) and the build timestamp when packaged through the distribution bootJar; falls back to `0.0.0-test` inside a bare platform-bootstrap unit-test classloader with no build-info file on the classpath.
- `Environment` — returns `spring.profiles.active` so a dashboard can distinguish "dev" from "staging" from a prod container that activates no profile.

The GET /api/v1/_meta/info response now carries:
- `name`, `apiVersion` — unchanged
- `implementationVersion` — from BuildProperties (was stuck at "0.1.0-SNAPSHOT" via an unreachable `javaClass.package` lookup)
- `buildTime` — ISO-8601 string from BuildProperties, null if the classpath has no build-info file
- `activeProfiles` — list of effective Spring profiles

**OpenApiConfiguration now reads its version from BuildProperties too.** Previously OPENAPI_INFO_VERSION was a hardcoded "v0.28.0" constant. Now it's injected via ObjectProvider<BuildProperties> with the same fallback pattern as MetaController.

A single version bump in gradle.properties now flows to:

    gradle.properties
      → Spring Boot's buildInfo()
      → build-info.properties (on the classpath)
      → BuildProperties bean
      → MetaController (/_meta/info)
      → OpenApiConfiguration (/v3/api-docs + Swagger UI)
      → api-v1.jar manifest (already wired)

No more hand-maintained version strings in code.
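The ObjectProvider fallback pattern shared by MetaController and OpenApiConfiguration can be sketched as follows; the class body is trimmed to the injection pattern, and the field names are illustrative:

```kotlin
import org.springframework.beans.factory.ObjectProvider
import org.springframework.boot.info.BuildProperties
import org.springframework.core.env.Environment
import org.springframework.web.bind.annotation.RestController

@RestController
class MetaController(
    buildProps: ObjectProvider<BuildProperties>,
    private val environment: Environment,
) {
    // BuildProperties only exists when META-INF/build-info.properties is
    // on the classpath (the distribution bootJar); a bare unit-test
    // classloader falls back to the sentinel version.
    private val implementationVersion: String =
        buildProps.ifAvailable?.version ?: "0.0.0-test"

    private val buildTime: String? =
        buildProps.ifAvailable?.time?.toString()

    private val activeProfiles: List<String> =
        environment.activeProfiles.toList()
}
```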
Bump `vibeerp.version` in gradle.properties and every display follows.

**Version bump.** `gradle.properties` `vibeerp.version`: `0.1.0-SNAPSHOT` → `0.28.0-SNAPSHOT`. This matches the numeric label used on PROGRESS.md's "Latest version" row and carries a documentation comment explaining the propagation chain, so the next person bumping it knows what to update alongside (just the one line + PROGRESS.md).

**KDoc trap caught.** A literal `/api/v1/_meta/**` path pattern in MetaController's KDoc tripped the Kotlin nested-comment parser (`/**` starts a KDoc). Rephrased as "the whole `/api/v1/_meta` prefix" to sidestep the trap — the same workaround I saved in feedback memory after the first time it bit me.

**Smoke-tested end-to-end against real Postgres:**
- GET /api/v1/_meta/info returns `{"implementationVersion": "0.28.0-SNAPSHOT", "buildTime": "2026-04-09T09:48:25.646Z", "activeProfiles": ["dev"]}`
- GET /v3/api-docs `info.version` = "0.28.0-SNAPSHOT" (was the hardcoded "v0.28.0" constant before this chunk)
- A single edit to gradle.properties propagates cleanly.

24 modules, 355 unit tests, all green.
-
Adds self-introspection of the framework's REST surface via springdoc-openapi. Every @RestController method in the host application is now documented in a machine-readable OpenAPI 3 spec at /v3/api-docs and rendered for humans at /swagger-ui/index.html. This is the first step toward:
- R1 (web SPA): OpenAPI codegen feeds a typed TypeScript client
- A1 (MCP server): discoverable tool catalog
- Operator debugging: a browsable "what can this instance do" page

**Dependency.** New `springdoc-openapi-starter-webmvc-ui` 2.6.0 added to platform-bootstrap (not distribution) because it ships @Configuration classes that need to run inside a full Spring Boot application context AND brings a Swagger UI WebJar. platform-bootstrap is the only module with a @SpringBootApplication anyway; pbc modules never depend on it, so plug-in classloaders stay clean and the OpenAPI scanner only sees host controllers.

**Configuration.** New `OpenApiConfiguration` @Configuration in platform-bootstrap provides a single @Bean OpenAPI:
- Title "vibe_erp", version v0.28.0 (hardcoded; moves to a build property when a real version header ships)
- Description with a framework-level intro explaining the bearer-JWT auth model, the permission whitelist, and the fact that plug-in endpoints under /api/v1/plugins/{id}/** are NOT scanned (they are dynamically registered via PluginContext.endpoints on a single dispatcher controller; a future chunk may extend the spec at runtime)
- One relative server entry ("/") so the spec works behind a reverse proxy without baking localhost into it
- bearerAuth security scheme (HTTP/bearer/JWT) applied globally via addSecurityItem, so every operation in the rendered UI shows a lock icon and the "Authorize" button accepts a raw JWT (Swagger adds the "Bearer " prefix itself)
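The bean described above can be sketched with the swagger-core model types; the description text is trimmed for illustration:

```kotlin
import io.swagger.v3.oas.models.Components
import io.swagger.v3.oas.models.OpenAPI
import io.swagger.v3.oas.models.info.Info
import io.swagger.v3.oas.models.security.SecurityRequirement
import io.swagger.v3.oas.models.security.SecurityScheme
import io.swagger.v3.oas.models.servers.Server
import org.springframework.context.annotation.Bean

@Bean
fun openApi(): OpenAPI = OpenAPI()
    .info(Info().title("vibe_erp").version("v0.28.0"))
    // Relative server entry: reverse-proxy safe, no baked-in localhost.
    .servers(listOf(Server().url("/")))
    .components(
        Components().addSecuritySchemes(
            "bearerAuth",
            SecurityScheme()
                .type(SecurityScheme.Type.HTTP)
                .scheme("bearer")
                .bearerFormat("JWT"),
        ),
    )
    // Applied globally so every operation shows the lock icon.
    .addSecurityItem(SecurityRequirement().addList("bearerAuth"))
```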
**Security whitelist.** SecurityConfiguration now permits three additional path patterns without authentication:
- /v3/api-docs/** — the generated JSON spec
- /swagger-ui/** — the Swagger UI static assets + index
- /swagger-ui.html — the legacy path (redirects to the above)

The data still requires a valid JWT: an unauthenticated "Try it out" call from the Swagger UI against a pbc endpoint returns 401 exactly like a curl would.

**Why not wire this into every PBC controller with @Operation / @Parameter annotations in this chunk:** springdoc already auto-generates the full path + request body + response schema from reflection. Adding hand-written annotations is scope creep — a future chunk can tag per-operation @Operation(security = ...) to surface the @RequirePermission keys once a consumer actually needs them.

**Smoke-tested end-to-end against real Postgres:**
- GET /v3/api-docs returns 200 with 64680 bytes of OpenAPI JSON
- 76 total paths listed across every PBC controller
- All v3 production paths present: /work-orders/shop-floor, /work-orders/{id}/operations/{operationId}/start + /complete, /work-orders/{id}/{start,complete,cancel,scrap}
- components.securitySchemes includes bearerAuth (type=http, format=JWT)
- GET /swagger-ui/index.html returns 200 with the Swagger HTML bundle (5 swagger markers found in the HTML)
- GET /swagger-ui.html (legacy path) returns 200 after redirect

25 modules (unchanged count — new config lives inside platform-bootstrap), 355 unit tests, all green.
-
Adds GET /api/v1/production/work-orders/shop-floor — a pure read that returns every IN_PROGRESS work order with its current operation and planned/actual time totals. Designed to feed a future shop-floor dashboard (web SPA, mobile, or an external reporting tool) without any follow-up round trips.

**Service method.** `WorkOrderService.shopFloorSnapshot()` is a @Transactional(readOnly = true) query that:
1. Pulls every IN_PROGRESS work order via the existing `WorkOrderJpaRepository.findByStatus`.
2. Sorts by WO code ascending so a dashboard poll gets a stable row order.
3. For each WO picks the "current operation" = the first op in IN_PROGRESS status, or, if none, the first PENDING op. This captures both live states: "operator is running step N right now" and "operator just finished step N and hasn't picked up step N+1 yet".
4. Computes `totalStandardMinutes` (sum across every op) + `totalActualMinutes` (sum of completed ops' `actualMinutes` only, treating null as zero).
5. Counts completed vs total operations for a "step 2 of 5" badge.
6. Returns a list of `ShopFloorEntry` DTOs — flat structure, one row per WO, nullable `current*` fields when a WO has no routing at all (v2-compat path).

**HTTP surface.**
- `GET /api/v1/production/work-orders/shop-floor`
- New permission `production.shop-floor.read`
- Response is `List<ShopFloorEntryResponse>` — flat so a SPA can render a table without joining across nested JSON. Fields are 1:1 with the service-side `ShopFloorEntry`.

**Design choices.**
- Mounted under `/work-orders/shop-floor` rather than a top-level `/production/shop-floor` so every production read stays under the same permission/audit/OpenAPI root.
- Read-only: zero events published, zero ledger writes. Pure projection over existing state.
- Returns an empty list when no WO is in progress — the dashboard renders "no jobs running" without a special case.
- Sorted by code so polling is deterministic.
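The current-operation pick and the minute sums (steps 3 and 4 above) reduce to a few lines of pure logic; a sketch in which `Op` is an illustrative stand-in for the real WorkOrderOperation entity:

```kotlin
data class Op(
    val lineNo: Int,
    val status: String,
    val standardMinutes: Int,
    val actualMinutes: Int?,
)

// First IN_PROGRESS op ("running right now"), else first PENDING op
// ("between operations"), else null (v2-compat WO with no routing).
fun currentOperation(ops: List<Op>): Op? =
    ops.firstOrNull { it.status == "IN_PROGRESS" }
        ?: ops.firstOrNull { it.status == "PENDING" }

// Planned minutes span every op; actual minutes only count completed
// ops, with null treated as zero.
fun totalStandardMinutes(ops: List<Op>): Int = ops.sumOf { it.standardMinutes }

fun totalActualMinutes(ops: List<Op>): Int =
    ops.filter { it.status == "COMPLETED" }.sumOf { it.actualMinutes ?: 0 }
```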
A future chunk might add sort-by-work-center if a dashboard needs a by-station view.

**Why not a top-level "shop-floor" PBC.** A shop-floor dashboard doesn't own any state — every field it displays is projected from pbc-production. A new PBC would duplicate the data model and create a reaction loop on work order events. Keeping the read in pbc-production matches the CLAUDE.md guardrail "grow the PBC when real consumers appear, not on speculation".

**Nullable `current*` fields.** A WO with an empty operations list (the v2-compat path — auto-spawned from SalesOrderConfirmedSubscriber before v3 routings) has all four `current*` fields set to null. The dashboard UI renders "no routing" or similar without any downstream round trip.

**Tests (5 new).** Empty snapshot when no IN_PROGRESS WOs; one entry per IN_PROGRESS WO with stable sort; current-op picks IN_PROGRESS over PENDING; current-op picks the first PENDING when no op is IN_PROGRESS (between-operations state); v2-compat WO with no operations shows null current-op fields and zero time sums.

**Smoke-tested end-to-end against real Postgres:**
1. Empty shop-floor initially (no IN_PROGRESS WOs)
2. Started the plugin-printing-shop-quote-to-work-order BPMN with quoteCode=Q-DASH-1, quantity=500
3. Started the resulting WO — shop-floor showed currentOperationLineNo=1 (CUT @ PRINTING-CUT-01) status=PENDING, 0/4 completed, totalStandardMinutes=75, totalActualMinutes=0
4. Started op 1 — currentOperationStatus flipped to IN_PROGRESS
5. Completed op 1 with actualMinutes=17 — current op rolled forward to line 2 (PRINT @ PRINTING-PRESS-A) status=PENDING, operationsCompleted=1/4, totalActualMinutes=17

24 modules, 355 unit tests (+5), all green.
-
Extends WorkOrderRequestedEvent with an optional routing so a producer — core PBC or customer plug-in — can attach shop-floor operations to a requested work order without importing any pbc-production internals. The reference printing-shop plug-in's quote-to-work-order BPMN now ships a 4-step default routing (CUT → PRINT → FOLD → BIND) end-to-end through the public api.v1 surface.

**api.v1 surface additions (additive, defaulted).**
- New public data class `RoutingOperationSpec(lineNo, operationCode, workCenter, standardMinutes)` in `api.v1.event.production.WorkOrderEvents` with init-block invariants matching pbc-production v3's internal validation (positive lineNo, non-blank operationCode + workCenter, non-negative standardMinutes).
- `WorkOrderRequestedEvent` gains an `operations: List<RoutingOperationSpec>` field, defaulted to `emptyList()`. Existing callers compile without changes; the event's init block now also validates that every operation has a unique lineNo. The convention matches the other v1 events that already carry defaulted `eventId` and `occurredAt` — additive within a major version.

**pbc-production subscriber wiring.**
- `WorkOrderRequestedSubscriber.handle` now maps `event.operations` → `WorkOrderOperationCommand` 1:1 and passes them to `CreateWorkOrderCommand`. An empty list keeps the v2 behavior exactly (auto-spawned orders from the SO path still get no routing and walk DRAFT → IN_PROGRESS → COMPLETED without any gate); a non-empty list feeds the new v3 WorkOrderOperation children and forces a sequential walk on the shop floor. The log line now includes `ops=<size>` so operators can see at a glance whether a WO came with a routing.

**Reference plug-in.**
- `CreateWorkOrderFromQuoteTaskHandler` now attaches `DEFAULT_PRINTING_SHOP_ROUTING`: a 4-step sequence modeled on the reference business doc's brochure production flow.
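The new spec type's init-block invariants can be sketched as follows; the invariants come straight from this chunk, while the package and exact declaration layout are simplified:

```kotlin
// Public api.v1 shape a plug-in attaches to WorkOrderRequestedEvent;
// validation mirrors pbc-production v3's internal checks.
data class RoutingOperationSpec(
    val lineNo: Int,
    val operationCode: String,
    val workCenter: String,
    val standardMinutes: Int,
) {
    init {
        require(lineNo > 0) { "lineNo must be positive" }
        require(operationCode.isNotBlank()) { "operationCode must not be blank" }
        require(workCenter.isNotBlank()) { "workCenter must not be blank" }
        require(standardMinutes >= 0) { "standardMinutes must be non-negative" }
    }
}
```

Because validation lives in the event type itself, a malformed routing fails at publish time in the producer, before it ever crosses the seam into pbc-production.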
Each step gets its own work center (PRINTING-CUT-01, PRINTING-PRESS-A, PRINTING-FOLD-01, PRINTING-BIND-01) so a future shop-floor dashboard can show which station is running which job. Standard times are round-number placeholders (15/30/10/20 minutes) — a real customer tunes them from historical data. Deliberately hard-coded in v1: a real shop with a dozen different flows would either ship a richer plug-in that picks routing per item type, or wait for a future Tier 1 "routing template" metadata entity. v1 just proves the event-driven seam carries v3 operations end-to-end. **Why this is the right shape.** - Zero new compile-time coupling. The plug-in imports only `api.v1.event.production.RoutingOperationSpec`; the plug-in linter would refuse any reach into `pbc.production.*`. - Core pbc-production stays ignorant of the plug-in: the subscriber doesn't know where the event came from. - The same `WorkOrderRequestedEvent` path now works for ANY producer — the next customer plug-in that spawns routed work orders gets zero core changes. **Tests.** New `WorkOrderRequestedSubscriberTest.handle passes event operations through as WorkOrderOperationCommand` asserts the 1:1 mapping of RoutingOperationSpec → WorkOrderOperationCommand. The existing test gains one assertion that an empty `operations` list on the event produces an empty `operations` list on the command (backwards-compat lock-in). **Smoke-tested end-to-end against real Postgres:** 1. POST /api/v1/workflow/process-instances with processDefinitionKey `plugin-printing-shop-quote-to-work-order` and variables `{quoteCode: "Q-ROUTING-001", itemCode: "FG-BROCHURE", quantity: 250}` 2. BPMN runs through CreateWorkOrderFromQuoteTaskHandler, publishes WorkOrderRequestedEvent with 4 operations 3. pbc-production subscriber creates WO `WO-FROM-PRINTINGSHOP-Q-ROUTING-001` 4. GET /api/v1/production/work-orders/by-code/... 
returns the WO with status=DRAFT and 4 operations (CUT/PRINT/FOLD/BIND) all PENDING, each with its own work_center and standard_minutes. This is the framework's first business flow where a customer plug-in provides a routing to a core PBC end-to-end through api.v1 alone. Closes the loop between the v3 routings feature (commit fa867189) and the executable acceptance test in the reference plug-in. 24 modules, 350 unit tests (+1), all green. -
Adds WorkOrderOperation child entity and two new verbs that gate WorkOrder.complete() behind a strict sequential walk of shop-floor steps. An empty operations list keeps the v2 behavior exactly; a non-empty list forces every op to reach COMPLETED before the work order can finish. **New domain.** - `production__work_order_operation` table with `UNIQUE (work_order_id, line_no)` and a status CHECK constraint admitting PENDING / IN_PROGRESS / COMPLETED. - `WorkOrderOperation` @Entity mirroring the `WorkOrderInput` shape: `lineNo`, `operationCode`, `workCenter`, `standardMinutes`, `status`, `actualMinutes` (nullable), `startedAt` + `completedAt` timestamps. No `ext` JSONB — operations are facts, not master records. - `WorkOrderOperationStatus` enum (PENDING / IN_PROGRESS / COMPLETED). - `WorkOrder.operations` collection with the same @OneToMany + cascade=ALL + orphanRemoval + @OrderBy("lineNo ASC") pattern as `inputs`. **State machine (sequential).** - `startOperation(workOrderId, operationId)` — parent WO must be IN_PROGRESS; target op must be PENDING; every earlier op must be COMPLETED. Flips to IN_PROGRESS and stamps `startedAt`. Idempotent no-op if already IN_PROGRESS. - `completeOperation(workOrderId, operationId, actualMinutes)` — parent WO must be IN_PROGRESS; target op must be IN_PROGRESS; `actualMinutes` must be non-negative. Flips to COMPLETED and stamps `completedAt`. Idempotent with the same `actualMinutes`; refuses to clobber with a different value. - `WorkOrder.complete()` gains a routings gate: refuses if any operation is not COMPLETED. Empty operations list is legal and preserves v2 behavior (auto-spawned orders from `SalesOrderConfirmedSubscriber` continue to complete without any gate). **Why sequential, not parallel.** v3 deliberately forbids parallel operations on one routing. The shop-floor dashboard story is trivial when the invariant is "you are on step N of M"; the unit test matrix is finite. 
Parallel routings (two presses in parallel) wait for a real consumer asking for them. Same pattern as every other pbc-production invariant — grow the PBC when consumers appear, not on speculation. **Why standardMinutes + actualMinutes instead of just timestamps.** The variance between planned and actual runtime is the single most interesting data point on a routing. Deriving it from `completedAt - startedAt` at report time has to fight shift-boundary and pause-resume ambiguity; the operator typing in "this run took 47 minutes" is the single source of truth. `startedAt` and `completedAt` are kept as an audit trail, not used for variance math. **Why work_center is a varchar not a FK.** Same cross-PBC discipline as every other identifier in pbc-production: work centers will be the seam for a future pbc-equipment PBC, and pinning a FK now would couple two PBC schemas before the consumer even exists (CLAUDE.md guardrail #9). **HTTP surface.** - `POST /api/v1/production/work-orders/{id}/operations/{operationId}/start` → `production.work-order.operation.start` - `POST /api/v1/production/work-orders/{id}/operations/{operationId}/complete` → `production.work-order.operation.complete` Body: `{"actualMinutes": "..."}`. Annotated with the single-arg Jackson trap escape hatch (`@JsonCreator(mode=PROPERTIES)` + `@param:JsonProperty`) — same trap that bit `CompleteWorkOrderRequest`, `ShipSalesOrderRequest`, `ReceivePurchaseOrderRequest`. Caught at smoke-test time. - `CreateWorkOrderRequest` accepts an optional `operations` array alongside `inputs`. - `WorkOrderResponse` gains `operations: List<WorkOrderOperationResponse>` showing status, standardMinutes, actualMinutes, startedAt, completedAt. **Metadata.** Two new permissions in `production.yml`: `production.work-order.operation.start` and `production.work-order.operation.complete`. 
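The sequential walk reduces to a small in-memory sketch (parent-WO status checks, timestamps, and persistence omitted; only the ordering + idempotency invariants described above are modeled):

```kotlin
enum class OpStatus { PENDING, IN_PROGRESS, COMPLETED }

// Simplified model of the sequential gate. The real logic lives on the
// WorkOrder/WorkOrderOperation entities; this only captures the invariants.
class RoutingWalk(opCount: Int) {
    val status = MutableList(opCount) { OpStatus.PENDING }
    val actualMinutes = MutableList<Double?>(opCount) { null }

    fun startOperation(index: Int) {
        if (status[index] == OpStatus.IN_PROGRESS) return        // idempotent no-op
        check(status[index] == OpStatus.PENDING) { "op already COMPLETED" }
        check((0 until index).all { status[it] == OpStatus.COMPLETED }) {
            "earlier operation(s) are not yet COMPLETED"
        }
        status[index] = OpStatus.IN_PROGRESS
    }

    fun completeOperation(index: Int, minutes: Double) {
        require(minutes >= 0) { "actualMinutes must be non-negative" }
        if (status[index] == OpStatus.COMPLETED) {
            check(actualMinutes[index] == minutes) { "refusing to clobber actualMinutes" }
            return                                               // idempotent replay
        }
        check(status[index] == OpStatus.IN_PROGRESS) { "op is not IN_PROGRESS" }
        status[index] = OpStatus.COMPLETED
        actualMinutes[index] = minutes
    }

    // The complete() gate: legal only when every op is COMPLETED (or there are none).
    fun canCompleteWorkOrder() = status.all { it == OpStatus.COMPLETED }
}
```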
**Tests (12 new).** create-with-ops happy path; duplicate line_no refused; blank operationCode refused; complete() gated when any op is not COMPLETED; complete() passes when every op is COMPLETED; startOperation refused on DRAFT parent; startOperation flips PENDING to IN_PROGRESS and stamps startedAt; startOperation refuses skip-ahead over a PENDING predecessor; startOperation is idempotent when already IN_PROGRESS; completeOperation records actualMinutes and flips to COMPLETED; completeOperation rejects negative actualMinutes; completeOperation refuses clobbering an already-COMPLETED op with a different value. **Smoke-tested end-to-end against real Postgres:** - Created a WO with 3 operations (CUT → PRINT → BIND) - `complete()` refused while DRAFT, then refused while IN_PROGRESS with pending ops ("3 routing operation(s) are not yet COMPLETED") - Skip-ahead `startOperation(op2)` refused ("earlier operation(s) are not yet COMPLETED") - Walked ops 1 → 2 → 3 through start + complete with varying actualMinutes (17, 32.5, 18 vs standard 15, 30, 20) - Final `complete()` succeeded, wrote exactly ONE PRODUCTION_RECEIPT ledger row for 100 units of FG-BROCHURE — no premature writes - Separately verified a no-operations WO still walks DRAFT → IN_PROGRESS → COMPLETED exactly like v2 24 modules, 349 unit tests (+12), all green. -
Closes two open wiring gaps left by the P1.9 and P1.8 chunks — `PluginContext.files` and `PluginContext.reports` both previously threw `UnsupportedOperationException` because the host's `DefaultPluginContext` never received the concrete beans. This commit plumbs both through and exercises them end-to-end via a new printing-shop plug-in endpoint that generates a quote PDF, stores it in the file store, and returns the file handle. With this chunk the reference printing-shop plug-in demonstrates **every extension seam the framework provides**: HTTP endpoints, JDBC, metadata YAML, i18n, BPMN + TaskHandlers, JobHandlers, custom fields on core entities, event publishing via EventBus, ReportRenderer, and FileStorage. There is no major public plug-in surface left unexercised. ## Wiring: DefaultPluginContext + VibeErpPluginManager - `DefaultPluginContext` gains two new constructor parameters (`sharedFileStorage: FileStorage`, `sharedReportRenderer: ReportRenderer`) and two new overrides. Each is wired via Spring — they live in platform-files and platform-reports respectively, but platform-plugins only depends on api.v1 (the interfaces) and NOT on those modules directly. The concrete beans are injected by Spring at distribution boot time when every `@Component` is on the classpath. - `VibeErpPluginManager` adds `private val fileStorage: FileStorage` and `private val reportRenderer: ReportRenderer` constructor params and passes them through to every `DefaultPluginContext` it builds per plug-in. The `files` and `reports` getters in api.v1 `PluginContext` still have their default-throw backward-compat shim — a plug-in built against v0.8 of api.v1 loading on a v0.7 host would still fail loudly at first call with a clear "upgrade to v0.8" message. The override here makes the v0.8+ host honour the interface. ## Printing-shop reference — quote PDF endpoint - New `resources/reports/quote-template.jrxml` inside the plug-in JAR. 
Parameters: plateCode, plateName, widthMm, heightMm, status, customerName. Produces a single-page A4 PDF with a header, a table of plate attributes, and a footer.
- New endpoint `POST /api/v1/plugins/printing-shop/plates/{id}/generate-quote-pdf`. Request body `{"customerName": "..."}`, response: `{"plateId", "plateCode", "customerName", "fileKey", "fileSize", "fileContentType", "downloadUrl"}`

  The handler does ALL of:
  1. Reads the plate row via `context.jdbc.queryForObject(...)`
  2. Loads the JRXML from the PLUG-IN's own classloader (not the host classpath — `this::class.java.classLoader.getResourceAsStream("reports/quote-template.jrxml")` — so the host's built-in `vibeerp-ping-report.jrxml` and the plug-in's template live in isolated namespaces)
  3. Renders via `context.reports.renderPdf(template, data)` — uses the host JasperReportRenderer under the hood
  4. Persists via `context.files.put(key, contentType, content)` under a plug-in-scoped key `plugin-printing-shop/quotes/quote-<code>.pdf`
  5. Returns the file handle plus a `downloadUrl` pointing at the framework's `/api/v1/files/download` endpoint the caller can immediately hit.

## Smoke test (fresh DB + staged plug-in)

```
# create a plate
POST /api/v1/plugins/printing-shop/plates
  {code: PLATE-200, name: "Premium cover", widthMm: 420, heightMm: 594}
→ 201 {id, code: PLATE-200, status: DRAFT, ...}

# generate + store the quote PDF
POST /api/v1/plugins/printing-shop/plates/<id>/generate-quote-pdf
  {customerName: "Acme Inc"}
→ 201 {
  plateId, plateCode: "PLATE-200", customerName: "Acme Inc",
  fileKey: "plugin-printing-shop/quotes/quote-PLATE-200.pdf",
  fileSize: 1488, fileContentType: "application/pdf",
  downloadUrl: "/api/v1/files/download?key=plugin-printing-shop/quotes/quote-PLATE-200.pdf"
}

# download via the framework's file endpoint
GET /api/v1/files/download?key=plugin-printing-shop/quotes/quote-PLATE-200.pdf
→ 200 Content-Type: application/pdf
      Content-Length: 1488
      body: valid PDF 1.5, 1 page

$ file /tmp/plate-quote.pdf
/tmp/plate-quote.pdf: PDF document, version 1.5, 1 pages (zip deflate encoded)

# list by prefix
GET /api/v1/files?prefix=plugin-printing-shop/
→ [{"key":"plugin-printing-shop/quotes/quote-PLATE-200.pdf",
    "size":1488, "contentType":"application/pdf", ...}]

# plug-in log
[plugin:printing-shop] registered 8 endpoints under /api/v1/plugins/printing-shop/
[plugin:printing-shop] generated quote PDF for plate PLATE-200 (1488 bytes)
  → plugin-printing-shop/quotes/quote-PLATE-200.pdf
```

Four public surfaces composed in one flow: plug-in JDBC read → plug-in classloader resource load → host ReportRenderer compile/fill/export → host FileStorage put → host file controller download. Every step stays on api.v1; zero plug-in code reaches into a concrete platform class.
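The composition can be sketched with stand-in interfaces — the `renderPdf` and `put` shapes follow this commit, but everything else here (names, parameters) is illustrative scaffolding, not the api.v1 source:

```kotlin
import java.io.ByteArrayInputStream
import java.io.InputStream

// Stand-ins for the api.v1 interfaces the handler composes; simplified for the sketch.
fun interface ReportRenderer {
    fun renderPdf(template: InputStream, data: Map<String, Any?>): ByteArray
}
fun interface FileStorage {
    fun put(key: String, contentType: String, content: InputStream)
}

// The quote-PDF flow: render from a template stream, persist under a
// plug-in-scoped key, return the key as the file handle.
fun generateQuotePdf(
    plateCode: String,
    customerName: String,
    template: InputStream,
    reports: ReportRenderer,
    files: FileStorage,
): String {
    val pdf = reports.renderPdf(template, mapOf("plateCode" to plateCode, "customerName" to customerName))
    val key = "plugin-printing-shop/quotes/quote-$plateCode.pdf"
    files.put(key, "application/pdf", ByteArrayInputStream(pdf))
    return key
}
```

Swapping in fakes for the two interfaces is enough to exercise the whole flow without Jasper or a disk.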
## Printing-shop plug-in — full extension surface exercised After this commit the reference printing-shop plug-in contributes via every public seam the framework offers: | Seam | How the plug-in uses it | |-------------------------------|--------------------------------------------------------| | HTTP endpoints (P1.3) | 8 endpoints under /api/v1/plugins/printing-shop/ | | JDBC (P1.4) | Reads/writes its own plugin_printingshop__* tables | | Liquibase | Own changelog.xml, 2 tables created at plug-in start | | Metadata YAML (P1.5) | 2 entities, 5 permissions, 2 menus | | Custom fields on CORE (P3.4) | 5 plug-in fields on Partner/Item/SalesOrder/WorkOrder | | i18n (P1.6) | Own messages_<locale>.properties, quote number msgs | | EventBus (P1.7) | Publishes WorkOrderRequestedEvent from a TaskHandler | | TaskHandlers (P2.1) | 2 handlers (plate-approval, quote-to-work-order) | | Plug-in BPMN (P2.1 followup) | 2 BPMNs in processes/ auto-deployed at start | | JobHandlers (P1.10 followup) | PlateCleanupJobHandler using context.jdbc + logger | | ReportRenderer (P1.8) | Quote PDF from JRXML via context.reports | | FileStorage (P1.9) | Persists quote PDF via context.files | Everything listed in this table is exercised end-to-end by the current smoke test. The plug-in is the framework's executable acceptance test for the entire public extension surface. ## Tests No new unit tests — the wiring change is a plain constructor addition, the existing `DefaultPluginContext` has no dedicated test class (it's a thin dataclass-shaped bean), and `JasperReportRenderer` + `LocalDiskFileStorage` each have their own unit tests from the respective parent chunks. The change is validated end-to-end by the above smoke test; formalizing that into an integration test would need Testcontainers + a real plug-in JAR and belongs to a different (test-infra) chunk. - Total framework unit tests: 337 (unchanged), all green. ## Non-goals (parking lot) - Pre-compiled `.jasper` caching keyed by template hash. 
A hot-path benchmark would tell us whether the cache is worth shipping. - Multipart upload of a template into a plug-in's own `files` namespace so non-bundled templates can be tried without a plug-in rebuild. Nice-to-have for iteration but not on the v1.0 critical path. - Scoped file-key prefixes per plug-in enforced by the framework (today the plug-in picks its own prefix by convention; a `plugin.files.keyPrefix` config would let the host enforce that every plug-in-contributed file lives under `plugin-<id>/`). Future hardening chunk. -
Closes the P1.8 row of the implementation plan — **every Phase 1 platform unit is now ✅**. New platform-reports subproject wrapping JasperReports 6.21.3 with a minimal api.v1 ReportRenderer facade, a built-in self-test JRXML template, and a thin HTTP surface. ## api.v1 additions (package `org.vibeerp.api.v1.reports`) - `ReportRenderer` — injectable facade with ONE method for v1: `renderPdf(template: InputStream, data: Map<String, Any?>): ByteArray` Caller loads the JRXML (or pre-compiled .jasper) from wherever (plug-in JAR classpath, FileStorage, DB metadata row, HTTP upload) and hands an open stream to the renderer. The framework reads the bytes, compiles/fills/exports, and returns the PDF. - `ReportRenderException` — wraps any engine exception so plug-ins don't have to import concrete Jasper exception types. - `PluginContext.reports: ReportRenderer` — new optional member with the default-throw backward-compat pattern used for every other addition. Plug-ins that ship quote PDFs, job cards, delivery notes, etc. inject this through the context. ## platform-reports runtime - `JasperReportRenderer` @Component — wraps JasperReports' compile → fill → export cycle into one method. * `JasperCompileManager.compileReport(template)` turns the JRXML stream into an in-memory `JasperReport`. * `JasperFillManager.fillReport(compiled, params, JREmptyDataSource(1))` evaluates expressions against the parameter map. The empty data source satisfies Jasper's requirement for a non-null data source when the template has no `<field>` definitions. * `JasperExportManager.exportReportToPdfStream(jasperPrint, buffer)` produces the PDF bytes. The `JasperPrint` type annotation on the local is deliberate — Jasper has an ambiguous `exportReportToPdfStream(InputStream, OutputStream)` overload and Kotlin needs the explicit type to pick the right one. 
* Every stage catches `Throwable` and re-throws as `ReportRenderException` with a useful message, keeping the api.v1 surface clean of Jasper's exception hierarchy.
- `ReportController` at `/api/v1/reports/**`:
  * `POST /ping` — renders the built-in self-test JRXML with the supplied `{name: "..."}` (optional, defaults to "world") and returns the PDF bytes with `application/pdf` Content-Type
  * `POST /render` — multipart upload of a JRXML template; returns the PDF. Operator / test use, not the main production path.
  Both endpoints @RequirePermission-gated via `reports.report.render`.
- `reports/vibeerp-ping-report.jrxml` — a single-page JRXML with a title, centred "Hello, $P{name}!" text, and a footer. Zero fields, one string parameter with a default value. Ships on the platform-reports classpath and is loaded by the `/ping` endpoint via `ClassPathResource`.
- `META-INF/vibe-erp/metadata/reports.yml` — 1 permission + 1 menu.

## Design decisions captured in-file

- **No template compilation cache.** Every call compiles the JRXML fresh. Fine for infrequent reports (quotes, job cards); a hot path that renders thousands of the same report per minute would want a `ConcurrentHashMap<String, JasperReport>` keyed by template hash. Deliberately NOT shipped until a benchmark shows it's needed — the cache key semantics need a real consumer.
- **No multiple output formats.** v1 is PDF-only. Additive overloads for HTML/XLSX land when a real consumer needs them.
- **No data-source argument.** v1 is parameter-driven, not query-driven. A future `renderPdf(template, data, rows)` overload will take tabular data for `<field>`-based templates.
- **No Groovy / Janino / ECJ.** The default `JRJavacCompiler` uses `javax.tools.ToolProvider.getSystemJavaCompiler()`, which is available on any JDK runtime. vibe_erp already requires a JDK (not a JRE) for Liquibase + Flowable + Quartz, so we inherit this for free. Zero extra compiler dependencies.
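For reference, the cache that was deliberately NOT shipped would be roughly this shape — generic over the compiled type so the sketch stays Jasper-free; the key/eviction semantics are exactly the open design question:

```kotlin
import java.security.MessageDigest
import java.util.concurrent.ConcurrentHashMap

// Sketch of the template-compilation cache discussed above, keyed by a SHA-256
// of the template bytes. <T> stands in for JasperReport so the sketch compiles
// without the Jasper dependency.
class TemplateCache<T>(private val compile: (ByteArray) -> T) {
    private val cache = ConcurrentHashMap<String, T>()
    var compileCount = 0
        private set

    fun get(templateBytes: ByteArray): T {
        val key = MessageDigest.getInstance("SHA-256").digest(templateBytes)
            .joinToString("") { "%02x".format(it) }
        // computeIfAbsent: compile once per distinct template, reuse thereafter.
        return cache.computeIfAbsent(key) { compileCount++; compile(templateBytes) }
    }
}
```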
## Config trap caught during first build (documented in build.gradle.kts)

My first attempt added aggressive JasperReports exclusions to shrink the transitive dep tree (POI, Batik, Velocity, Castor, Groovy, commons-digester, ...). The build compiled fine but `JasperCompileManager.compileReport(...)` threw `ClassNotFoundException: org.apache.commons.digester.Digester` at runtime — Jasper uses Digester internally to parse the JRXML structure, and excluding the transitive dep silently breaks template loading. Fix: remove ALL exclusions. JasperReports' dep tree IS heavy, but each transitive is load-bearing for a use case that's only obvious once you exercise the engine end-to-end. A benchmark-driven optimization chunk can revisit this later if the JAR size becomes a concern; for v1.0 the "just pull it all in" approach is correct. Documented in the build.gradle.kts so the next person who thinks about trimming the dep tree reads the warning first.

## Smoke test (fresh DB, as admin)

```
POST /api/v1/reports/ping {"name": "Alice"}
→ 200 Content-Type: application/pdf
      Content-Length: 1436
      body: %PDF-1.5 ... (valid 1-page PDF)

$ file /tmp/ping-report.pdf
/tmp/ping-report.pdf: PDF document, version 1.5, 1 pages (zip deflate encoded)

POST /api/v1/reports/ping (no body)
→ 200, 1435 bytes, renders with default name="world" from JRXML defaultValueExpression

# negative
POST /api/v1/reports/render (multipart with garbage bytes)
→ 400 {"message": "failed to compile JRXML template: org.xml.sax.SAXParseException;
       lineNumber: 1; columnNumber: 1; Content is not allowed in prolog."}

GET /api/v1/_meta/metadata
→ permissions includes "reports.report.render"
```

The `%PDF-` magic header is present and the `file` command on macOS identifies the bytes as a valid PDF 1.5 single-page document. JasperReports compile + fill + export are all running against the live JDK 21 javac inside the Spring Boot app on first boot.
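The warning lands next to the dependency declaration; a hedged sketch of the shape (the version matches this commit, the Maven coordinate is the standard one for Jasper 6.x, and the comment text is illustrative):

```kotlin
dependencies {
    // Do NOT add exclusions here. JasperReports' transitive tree (POI, Batik,
    // Velocity, commons-digester, ...) looks trimmable, but e.g. excluding
    // commons-digester compiles fine and then throws
    //   ClassNotFoundException: org.apache.commons.digester.Digester
    // at the first JasperCompileManager.compileReport(...) call — Digester
    // parses the JRXML structure at runtime. Revisit only with a benchmark.
    implementation("net.sf.jasperreports:jasperreports:6.21.3")
}
```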
## Tests - 3 new unit tests in `JasperReportRendererTest`: * `renders the built-in ping template to a valid PDF byte stream` — checks for the `%PDF-` magic header and a reasonable size * `renders with the default parameter when the data map is empty` — proves the JRXML's defaultValueExpression fires * `wraps compile failures in ReportRenderException` — feeds garbage bytes and asserts the exception type - Total framework unit tests: 337 (was 334), all green. ## What this unblocks - **Printing-shop quote PDFs.** The reference plug-in can now ship a `reports/quote.jrxml` in its JAR, load it in an HTTP handler via classloader, render via `context.reports.renderPdf(...)`, and either return the PDF bytes directly or persist it via `context.files.put("reports/quote-$code.pdf", "application/pdf", ...)` for later download. The P1.8 → P1.9 chain is ready. - **Job cards, delivery notes, pick lists, QC certificates.** Every business document in a printing shop is a report template + a data payload. The facade handles them all through the same `renderPdf` call. - **A future reports PBC.** When a PBC actually needs report metadata persisted (template versioning, report scheduling), a new pbc-reports can layer on top without changing api.v1 — the renderer stays the lowest-level primitive, the PBC becomes the management surface. ## Phase 1 completion With P1.8 landed: | Unit | Status | |------|--------| | P1.2 Plug-in linter | ✅ | | P1.3 Plug-in HTTP + lifecycle | ✅ | | P1.4 Plug-in Liquibase + PluginJdbc | ✅ | | P1.5 Metadata store + loader | ✅ | | P1.6 ICU4J translator | ✅ | | P1.7 Event bus + outbox | ✅ | | P1.8 JasperReports integration | ✅ | | P1.9 File store | ✅ | | P1.10 Quartz scheduler | ✅ | **All nine Phase 1 platform units are now done.** (P1.1 Postgres RLS was removed by the early single-tenant refactor, per CLAUDE.md guardrail #5.) 
Remaining v1.0 work is cross-cutting: pbc-finance GL growth, the web SPA (R1–R4), OIDC (P4.2), the MCP server (A1), and richer per-PBC v2/v3 scopes. ## Non-goals (parking lot) - Template caching keyed by hash. - HTML/XLSX exporters. - Pre-compiled `.jasper` support via a Gradle build task. - Sub-reports (master-detail). - Dependency-tree optimisation via selective exclusions — needs a benchmark-driven chunk to prove each exclusion is safe. - Plug-in loader integration for custom font embedding. Jasper's default fonts work; custom fonts land when a real customer plug-in needs them. -
P1.10 follow-up. Plug-ins can now register background job handlers the same way they already register workflow task handlers. The reference printing-shop plug-in ships a real PlateCleanupJobHandler that reads from its own database via `context.jdbc` as the executable acceptance test. ## Why this wasn't in the P1.10 chunk P1.10 landed the core scheduler + registry + Quartz bridge + HTTP surface, but the plug-in-loader integration was deliberately deferred — the JobHandlerRegistry already supported owner-tagged `register(handler, ownerId)` and `unregisterAllByOwner(ownerId)`, so the seam was defined; it just didn't have a caller from the PF4J plug-in side. Without a real plug-in consumer, shipping the integration would have been speculative. This commit closes the gap in exactly the shape the TaskHandler side already has: new api.v1 registrar interface, new scoped registrar in platform-plugins, one constructor parameter on DefaultPluginContext, one new field on VibeErpPluginManager, and the teardown paths all fall out automatically because JobHandlerRegistry already implements the owner-tagged cleanup. ## api.v1 additions - `org.vibeerp.api.v1.jobs.PluginJobHandlerRegistrar` — single method `register(handler: JobHandler)`. Mirrors `PluginTaskHandlerRegistrar` exactly, same ergonomics, same duplicate-key-throws discipline. - `PluginContext.jobs: PluginJobHandlerRegistrar` — new optional member with the default-throw backward-compat pattern used for `endpoints`, `jdbc`, `taskHandlers`, and `files`. An older host loading a newer plug-in jar fails loudly at first call rather than silently dropping scheduled work. ## platform-plugins wiring - New dependency on `:platform:platform-jobs`. - New internal class `org.vibeerp.platform.plugins.jobs.ScopedJobHandlerRegistrar` that implements the api.v1 registrar by delegating `register(handler)` to `hostRegistry.register(handler, ownerId = pluginId)`. 
- `DefaultPluginContext` gains a `scopedJobHandlers` constructor parameter and exposes it as `PluginContext.jobs`. - `VibeErpPluginManager`: * injects `JobHandlerRegistry` * constructs `ScopedJobHandlerRegistrar(registry, pluginId)` per plug-in when building `DefaultPluginContext` * partial-start failure now also calls `jobHandlerRegistry.unregisterAllByOwner(pluginId)`, matching the existing endpoint + taskHandler + BPMN-deployment cleanups * `destroy()` reverse-iterates `started` and calls the same `unregisterAllByOwner` alongside the other four teardown steps ## Reference plug-in — PlateCleanupJobHandler New file `reference-customer/plugin-printing-shop/.../jobs/PlateCleanupJobHandler.kt`. Key `printing_shop.plate.cleanup`. Captures the `PluginContext` via constructor — same "handler-side plug-in context access" pattern the printing-shop plug-in already uses for its TaskHandlers. The handler is READ-ONLY in its v1 incarnation: it runs a GROUP-BY query over `plugin_printingshop__plate` via `context.jdbc.query(...)` and logs a per-status summary via `context.logger.info(...)`. A real cleanup job would also run an `UPDATE`/`DELETE` to prune DRAFT plates older than N days; the read-only shape is enough to exercise the seam end-to-end without introducing a retention policy the customer hasn't asked for. `PrintingShopPlugin.start(context)` now registers the handler alongside its two TaskHandlers: context.taskHandlers.register(PlateApprovalTaskHandler(context)) context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context)) context.jobs.register(PlateCleanupJobHandler(context)) ## Smoke test (fresh DB, plug-in staged) ``` # boot registered JobHandler 'vibeerp.jobs.ping' owner='core' ... JobHandlerRegistry initialised with 1 core JobHandler bean(s): [vibeerp.jobs.ping] ... registered JobHandler 'printing_shop.plate.cleanup' owner='printing-shop' ... 
[plugin:printing-shop] registered 1 JobHandler: printing_shop.plate.cleanup # HTTP: list handlers — now shows both GET /api/v1/jobs/handlers → {"count":2,"keys":["printing_shop.plate.cleanup","vibeerp.jobs.ping"]} # HTTP: trigger the plug-in handler — proves dispatcher routes to it POST /api/v1/jobs/handlers/printing_shop.plate.cleanup/trigger → 200 {"handlerKey":"printing_shop.plate.cleanup", "correlationId":"95969129-d6bf-4d9a-8359-88310c4f63b9", "startedAt":"...","finishedAt":"...","ok":true} # Handler-side logs prove context.jdbc + context.logger access [plugin:printing-shop] PlateCleanupJobHandler firing corr='95969129-...' [plugin:printing-shop] PlateCleanupJobHandler summary: total=0 byStatus=[] # SIGTERM — clean teardown [ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 2 handler(s) [ionShutdownHook] unregistered JobHandler 'printing_shop.plate.cleanup' (owner stopped) [ionShutdownHook] JobHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s) ``` Every expected lifecycle event fires in the right order. Core handlers are untouched by plug-in teardown. ## Tests No new unit tests in this commit — the test coverage is inherited from the previously landed components: - `JobHandlerRegistryTest` already covers owner-tagged `register` / `unregister` / `unregisterAllByOwner` / duplicate key rejection. - `ScopedTaskHandlerRegistrar` behavior (which this commit mirrors structurally) is exercised end-to-end by the printing-shop plug-in boot path. - Total framework unit tests: 334 (unchanged from the quality→warehousing quarantine chunk), all green. ## What this unblocks - **Plug-in-shipped scheduled work.** The printing-shop plug-in can now add cron schedules for its cleanup handler via `POST /api/v1/jobs/scheduled {scheduleKey, handlerKey, cronExpression}` without the operator touching core code. 
- **Plug-in-to-plug-in handler coexistence.** Two plug-ins can now ship job handlers with distinct keys and be torn down independently on reload — the owner-tagged cleanup strips only the stopping plug-in's handlers, leaving other plug-ins' and core handlers alone.
- **The "plug-in contributes everything" story.** The reference printing-shop plug-in now contributes via every public seam the framework has: HTTP endpoints (7), custom fields on core entities (5), BPMNs (2), TaskHandlers (2), and a JobHandler (1) — plus its own database schema, its own metadata YAML, its own i18n bundles. That's every extension point a real customer plug-in would want.

## Non-goals (parking lot)

- A real retention policy in PlateCleanupJobHandler. The handler logs a summary but doesn't mutate state. Customer-specific pruning rules belong in a customer-owned plug-in or a metadata-driven rule once that seam exists.
- A built-in cron schedule for the plug-in's handler. The plug-in only registers the handler; scheduling is an operator decision exposed through the HTTP surface from P1.10.
-
…c-warehousing StockTransfer First cross-PBC reaction originating from pbc-quality. Records a REJECTED inspection with explicit source + quarantine location codes, publishes an api.v1 event inside the same transaction as the row insert, and pbc-warehousing's new subscriber atomically creates + confirms a StockTransfer that moves the rejected quantity to the quarantine bin. The whole chain — inspection insert + event publish + transfer create + confirm + two ledger rows — runs in a single transaction under the synchronous in-process bus with Propagation.MANDATORY. ## Why the auto-quarantine is opt-in per-inspection Not every inspection wants physical movement. A REJECTED batch that's already separated from good stock on the shop floor doesn't need the framework to move anything; the operator just wants the record. Forcing every rejection to create a ledger pair would collide with real-world QC workflows. The contract is simple: the `InspectionRecord` now carries two OPTIONAL columns (`source_location_code`, `quarantine_location_code`). When BOTH are set AND the decision is REJECTED AND the rejected quantity is positive, the subscriber reacts. Otherwise it logs at DEBUG and does nothing. The event is published either way, so audit/KPI subscribers see every inspection regardless. ## api.v1 additions New event class `org.vibeerp.api.v1.event.quality.InspectionRecordedEvent` with nine fields: inspectionCode, itemCode, sourceReference, decision, inspectedQuantity, rejectedQuantity, sourceLocationCode?, quarantineLocationCode?, inspector All required fields validated in `init { }` — blank strings, non-positive inspected quantity, negative rejected quantity, or an unknown decision string all throw at publish time so a malformed event never hits the outbox. `aggregateType = "quality.InspectionRecord"` matches the `<pbc>.<aggregate>` convention. 
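A sketch of that publish-time validation (field names from the list above; the decision whitelist and the exception messages are illustrative, not the production source):

```kotlin
import java.math.BigDecimal

// Sketch of InspectionRecordedEvent's init-block validation. A malformed event
// throws at publish time, so nothing bad ever reaches the outbox.
data class InspectionRecordedEvent(
    val inspectionCode: String,
    val itemCode: String,
    val sourceReference: String,
    val decision: String,
    val inspectedQuantity: BigDecimal,
    val rejectedQuantity: BigDecimal,
    val sourceLocationCode: String? = null,
    val quarantineLocationCode: String? = null,
    val inspector: String,
) {
    init {
        require(inspectionCode.isNotBlank() && itemCode.isNotBlank() &&
                sourceReference.isNotBlank() && inspector.isNotBlank()) {
            "required fields must not be blank"
        }
        require(inspectedQuantity > BigDecimal.ZERO) { "inspectedQuantity must be positive" }
        require(rejectedQuantity >= BigDecimal.ZERO) { "rejectedQuantity must not be negative" }
        require(decision in setOf("APPROVED", "REJECTED")) { "unknown decision '$decision'" }
    }
}
```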
`decision` is carried as a String (not the pbc-quality `InspectionDecision` enum) to keep guardrail #10 honest — api.v1 events MUST NOT leak internal PBC types. Consumers compare against the literal `"APPROVED"` / `"REJECTED"` strings. ## pbc-quality changes - `InspectionRecord` entity gains two nullable columns: `source_location_code` + `quarantine_location_code`. - Liquibase migration `002-quality-quarantine-locations.xml` adds the columns to `quality__inspection_record`. - `InspectionRecordService` now injects `EventBus` and publishes `InspectionRecordedEvent` inside the `@Transactional record()` method. The publish carries all nine fields including the optional locations. - `RecordInspectionCommand` + `RecordInspectionRequest` gain the two optional location fields; unchanged default-null means every existing caller keeps working unchanged. - `InspectionRecordResponse` exposes both new columns on the HTTP wire. ## pbc-warehousing changes - New `QualityRejectionQuarantineSubscriber` @Component. - Subscribes in `@PostConstruct` via the typed-class `EventBus.subscribe(InspectionRecordedEvent::class.java, ...)` overload — same pattern every other PBC subscriber uses (SalesOrderConfirmedSubscriber, WorkOrderRequestedSubscriber, the pbc-finance order subscribers). - `handle(event)` is `internal` so the unit test can drive it directly without going through the bus. - Activation contract (all must be true): decision=REJECTED, rejectedQuantity>0, sourceLocationCode non-blank, quarantineLocationCode non-blank. Any missing condition → no-op. - Idempotency: derived transfer code is `TR-QC-<inspectionCode>`. Before creating, the subscriber checks `stockTransfers.findByCode(derivedCode)` — if anything exists (DRAFT, CONFIRMED, or CANCELLED), the subscriber skips. A replay of the same event under at-least-once delivery is safe. 
- On success: creates a DRAFT StockTransfer with one line moving `rejectedQuantity` of `itemCode` from source to quarantine, then calls `confirm(id)` which writes the atomic TRANSFER_OUT + TRANSFER_IN ledger pair.

## Smoke test (fresh DB)

```
# seed
POST /api/v1/catalog/items {code: WIDGET-1, baseUomCode: ea}
POST /api/v1/inventory/locations {code: WH-MAIN, type: WAREHOUSE}
POST /api/v1/inventory/locations {code: WH-QUARANTINE, type: WAREHOUSE}
POST /api/v1/inventory/movements {itemCode: WIDGET-1, locationId: <WH-MAIN>, delta: 100, reason: RECEIPT}

# the cross-PBC reaction
POST /api/v1/quality/inspections {code: QC-R-001, itemCode: WIDGET-1, sourceReference: "WO:WO-001", decision: REJECTED, inspectedQuantity: 50, rejectedQuantity: 7, reason: "surface scratches", sourceLocationCode: "WH-MAIN", quarantineLocationCode: "WH-QUARANTINE"}
→ 201 {..., sourceLocationCode: "WH-MAIN", quarantineLocationCode: "WH-QUARANTINE"}

# automatically created + confirmed
GET /api/v1/warehousing/stock-transfers/by-code/TR-QC-QC-R-001
→ 200 {
  "code": "TR-QC-QC-R-001",
  "fromLocationCode": "WH-MAIN",
  "toLocationCode": "WH-QUARANTINE",
  "status": "CONFIRMED",
  "note": "auto-quarantine from rejected inspection QC-R-001",
  "lines": [{"itemCode": "WIDGET-1", "quantity": 7.0}]
}

# ledger state (raw SQL)
SELECT l.code, b.item_code, b.quantity
FROM inventory__stock_balance b
JOIN inventory__location l ON l.id = b.location_id
WHERE b.item_code = 'WIDGET-1';

WH-MAIN       | WIDGET-1 | 93.0000   ← was 100, now 93
WH-QUARANTINE | WIDGET-1 |  7.0000   ← 7 rejected units here

SELECT m.item_code, l.code AS location, m.reason, m.delta, m.reference
FROM inventory__stock_movement m
JOIN inventory__location l ON l.id = m.location_id
WHERE m.reference = 'TR:TR-QC-QC-R-001';

WIDGET-1 | WH-MAIN       | TRANSFER_OUT | -7 | TR:TR-QC-QC-R-001
WIDGET-1 | WH-QUARANTINE | TRANSFER_IN  |  7 | TR:TR-QC-QC-R-001

# negatives
POST /api/v1/quality/inspections {decision: APPROVED, ...+locations}
→ 201, but GET /TR-QC-QC-A-001 → 404 (no transfer, correct opt-out)
POST /api/v1/quality/inspections {decision: REJECTED, rejected: 2, no locations}
→ 201, but GET /TR-QC-QC-R-002 → 404 (opt-in honored)

# handler log
[warehousing] auto-quarantining 7 units of 'WIDGET-1' from 'WH-MAIN' to 'WH-QUARANTINE' (inspection=QC-R-001, transfer=TR-QC-QC-R-001)
```

Everything happens in ONE transaction because EventBusImpl uses Propagation.MANDATORY with synchronous delivery: the inspection insert, the event publish, the StockTransfer create, the confirm, and the two ledger rows all commit or roll back together.

## Tests

- Updated `InspectionRecordServiceTest`: the service now takes an `EventBus` constructor argument. Every existing test got a relaxed `EventBus` mock; the one new test `record publishes InspectionRecordedEvent on success` captures the published event and asserts every field including the location codes.
- 6 new unit tests in `QualityRejectionQuarantineSubscriberTest`:
  * subscribe registers one listener for InspectionRecordedEvent
  * handle creates and confirms a quarantine transfer on a fully-populated REJECTED event (asserts derived code, locations, item code, quantity)
  * handle is a no-op when decision is APPROVED
  * handle is a no-op when sourceLocationCode is missing
  * handle is a no-op when quarantineLocationCode is missing
  * handle skips when a transfer with the derived code already exists (idempotent replay)
- Total framework unit tests: 334 (was 327), all green.

## What this unblocks

- **Quality KPI dashboards** — any PBC can now subscribe to `InspectionRecordedEvent` without coupling to pbc-quality.
- **pbc-finance quality-cost tracking** — when GL growth lands, a finance subscriber can debit a "quality variance" account on every REJECTED inspection.
- **REF.2 / customer plug-in workflows** — the printing-shop plug-in can emit an `InspectionRecordedEvent` of its own from a BPMN service task (via `context.eventBus.publish`) and drive the same quarantine chain without touching pbc-quality's HTTP surface.
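The single-transaction guarantee can be illustrated with a toy synchronous bus (no Spring here — a `ThreadLocal`-free boolean stands in for Propagation.MANDATORY, which in the real EventBusImpl refuses delivery outside an active transaction):

```java
// Toy in-process bus: subscribers run on the publisher's thread, inside the
// publisher's "transaction" scope, so their work commits or fails with it.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

final class ToyEventBus {
    private final List<Consumer<Object>> listeners = new ArrayList<>();
    private boolean inTransaction = false;

    void subscribe(Consumer<Object> listener) { listeners.add(listener); }

    void publish(Object event) {
        // Stand-in for MANDATORY propagation: no transaction, no delivery.
        if (!inTransaction) throw new IllegalStateException("no active transaction");
        for (Consumer<Object> l : listeners) l.accept(event); // synchronous, same thread
    }

    /** Simulates a @Transactional boundary around the publishing service method. */
    void inTransaction(Runnable work) {
        inTransaction = true;
        try { work.run(); } finally { inTransaction = false; }
    }
}
```

Because delivery is in-thread and in-transaction, a subscriber failure propagates up and rolls back the inspection insert too — there is no "inspection recorded but quarantine lost" state.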
## Non-goals (parking lot) - Partial-batch quarantine decisions (moving some units to quarantine, some back to general stock, some to scrap). v1 collapses the decision into a single "reject N units" action and assumes the operator splits batches manually before inspecting. A richer ResolutionPlan aggregate is a future chunk if real workflows need it. - Quality metrics storage. The event is audited by the existing wildcard event subscriber but no PBC rolls it up into a KPI table. Belongs to a future reporting feature. - Auto-approval chains. An APPROVED inspection could trigger a "release-from-hold" transfer (opposite direction) in a future-expanded subscriber, but v1 keeps the reaction REJECTED-only to match the "quarantine on fail" use case. -
Closes the P1.9 row of the implementation plan. New platform-files subproject exposing a cross-PBC facade for the framework's binary blob store, with a local-disk implementation and a thin HTTP surface for multipart upload / download / delete / list. ## api.v1 additions (package `org.vibeerp.api.v1.files`) - `FileStorage` — injectable facade with five methods: * `put(key, contentType, content: InputStream): FileHandle` * `get(key): FileReadResult?` * `exists(key): Boolean` * `delete(key): Boolean` * `list(prefix): List<FileHandle>` Stream-first (not byte-array-first) so reports PDFs etc. don't have to be materialized in memory. Keys are opaque strings with slashes allowed for logical grouping; the local-disk backend maps them to subdirectories. - `FileHandle` — read-only metadata DTO (key, size, contentType, createdAt, updatedAt). - `FileReadResult` — the return type of `get()` bundling a handle and an open InputStream. The caller MUST close the stream (`result.content.use { ... }` is the idiomatic shape); the facade is not responsible for managing the consumer's lifetime. - `PluginContext.files: FileStorage` — new member on the plug-in context interface, default implementation throws `UnsupportedOperationException("upgrade vibe_erp to v0.8 or later")`. Same backward-compat pattern we used for `endpoints`, `jdbc`, `taskHandlers`. Plug-ins that need to persist report PDFs, uploaded attachments, or exported archives inject this through the context. ## platform-files runtime - `LocalDiskFileStorage` @Component reading `vibeerp.files.local-path` (default `./files-local`, overridden in dev profile to `./files-dev`, overridden in production config to `/opt/vibe-erp/files`). **Layout**: files are stored at `<root>/<key>` with a sidecar metadata file at `<root>/<key>.meta` containing a single line `content_type=<value>`. Sidecars beat xattrs (not portable across Linux/macOS) and beat an H2/SQLite index (overkill for single-tenant single-instance). 
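Since keys are opaque strings with slashes that the local-disk backend maps to subdirectories, the key→path resolution has to be defensive. A minimal sketch of that mapping (the rule set matches what the backend validates; method and message wording are illustrative, not the real code):

```java
// Sketch of safe key -> path resolution under a storage root: reject blank keys,
// absolute keys, traversal segments, and sidecar collisions, then double-check
// the normalized result still lives under the root.
import java.nio.file.Path;

final class FileKeys {
    static Path resolveUnder(Path root, String key) {
        if (key == null || key.isBlank())
            throw new IllegalArgumentException("file key must not be blank");
        if (key.startsWith("/"))
            throw new IllegalArgumentException("file key must not start with '/'");
        if (key.contains(".."))
            throw new IllegalArgumentException("file key must not contain '..' (got '" + key + "')");
        if (key.endsWith(".meta"))
            throw new IllegalArgumentException("file key must not end with '.meta'");
        // Belt and suspenders: the normalized path must stay under the root.
        Path resolved = root.resolve(key).normalize();
        if (!resolved.startsWith(root.normalize()))
            throw new IllegalArgumentException("resolved path escapes the storage root");
        return resolved;
    }
}
```

The final `normalize().startsWith(root)` check is deliberately redundant with the string checks — string rules can miss an encoding trick, the path check cannot.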
**Atomicity**: every `put` writes to a `.tmp` sibling file and atomic-moves it into place so a concurrent read against the same key never sees a half-written mix.

**Key safety**: `put`/`get`/`delete` all validate the key: rejects blank, leading `/`, `..` (path traversal), and trailing `.meta` (sidecar collision). Every resolved path is checked to stay under the configured root via `normalize().startsWith(root)`.

- `FileController` at `/api/v1/files/**`:
  * `POST /api/v1/files?key=...` multipart upload (form field `file`)
  * `GET /api/v1/files?prefix=...` list by prefix
  * `GET /api/v1/files/metadata?key=...` metadata only (doesn't open the stream)
  * `GET /api/v1/files/download?key=...` stream bytes with the right Content-Type + filename
  * `DELETE /api/v1/files?key=...` delete by key

  All endpoints @RequirePermission-gated via the keys declared in the metadata YAML. The `key` is a query parameter, NOT a path variable, so slashes in the key don't collide with Spring's path matching.
- `META-INF/vibe-erp/metadata/files.yml` — 2 permissions + 1 menu.

## Smoke test (fresh DB, as admin)

```
POST /api/v1/files?key=reports/smoke-test.txt (multipart file)
→ 201 {"key":"reports/smoke-test.txt", "size":61, "contentType":"text/plain", "createdAt":"...", "updatedAt":"..."}

GET /api/v1/files?prefix=reports/
→ [{"key":"reports/smoke-test.txt","size":61, ...}]

GET /api/v1/files/metadata?key=reports/smoke-test.txt
→ same handle, no bytes

GET /api/v1/files/download?key=reports/smoke-test.txt
→ 200  Content-Type: text/plain  body: original upload content (diff == 0)

DELETE /api/v1/files?key=reports/smoke-test.txt
→ 200 {"removed":true}

GET /api/v1/files/download?key=reports/smoke-test.txt → 404

# path traversal
POST /api/v1/files?key=../escape (multipart file)
→ 400 "file key must not contain '..' (got '../escape')"

GET /api/v1/_meta/metadata
→ permissions include ["files.file.read", "files.file.write"]
```

Downloaded bytes match the uploaded bytes exactly — round-trip verified with `diff -q`.

## Tests

- 12 new unit tests in `LocalDiskFileStorageTest` using JUnit 5's `@TempDir`:
  * `put then get round-trips content and metadata`
  * `put overwrites an existing key with the new content`
  * `get returns null for an unknown key`
  * `exists distinguishes present from absent`
  * `delete removes the file and its metadata sidecar`
  * `delete on unknown key returns false`
  * `list filters by prefix and returns sorted keys`
  * `put rejects a key with dot-dot`
  * `put rejects a key starting with slash`
  * `put rejects a key ending in dot-meta sidecar`
  * `put rejects blank content type`
  * `list sidecar metadata files are hidden from listing results`
- Total framework unit tests: 327 (was 315), all green.

## What this unblocks

- **P1.8 JasperReports integration** — now has a first-class home for generated PDFs. A report renderer can call `fileStorage.put("reports/quote-$code.pdf", "application/pdf", ...)` and return the handle to the caller.
- **Plug-in attachments** — the printing-shop plug-in's future "plate scan image" or "QC report" attachments can be stored via `context.files` without touching the database.
- **Export/import flows** — a scheduled job can write a nightly CSV export via `FileStorage.put` and a separate endpoint can download it; the scheduler-to-storage path is clean and typed.
- **S3 backend when needed** — the interface is already streaming-based; dropping in an `S3FileStorage` @Component and toggling `vibeerp.files.backend: s3` in config is a future additive chunk, zero api.v1 churn.

## Non-goals (parking lot)

- S3 backend. The config already reads `vibeerp.files.backend`; local is hard-wired for v1.0. Keeps the dependency tree off aws-sdk until a real consumer exists.
- Range reads / HTTP `Range: bytes=...` support.
Future enhancement for large-file streaming (e.g. video attachments).
- Presigned URLs (for direct browser-to-S3 upload, skipping the framework). Design decision lives with the S3 backend chunk.
- Per-file ACLs. The two `files.file.*` permissions currently gate all files uniformly; per-path or per-owner ACLs would require a new metadata table and haven't been asked for by any PBC yet.
- Plug-in loader integration. `PluginContext.files` throws the default `UnsupportedOperationException` until the plug-in loader is wired to pass the host `FileStorage` through `DefaultPluginContext`. Lands in the same chunk as the first plug-in that needs to store a file.
-
Closes the P1.10 row of the implementation plan. New platform-jobs subproject shipping a Quartz-backed background job engine adapted to the api.v1 JobHandler contract, so PBCs and plug-ins can register scheduled work without ever importing Quartz types. ## The shape (matches the P2.1 workflow engine) platform-jobs is to scheduled work what platform-workflow is to BPMN service tasks. Same pattern, same discipline: - A single `@Component` bridge (`QuartzJobBridge`) is the ONLY org.quartz.Job implementation in the framework. Every persistent trigger points at it. - A single `JobHandlerRegistry` (owner-tagged, duplicate-key-rejecting, ConcurrentHashMap-backed) holds every registered JobHandler by key. Mirrors `TaskHandlerRegistry`. - The bridge reads the handler key from the trigger's JobDataMap, looks it up in the registry, and executes the matching JobHandler inside a `PrincipalContext.runAs("system:jobs:<key>")` block so audit rows written during the job get a structured, greppable `created_by` value ("system:jobs:core.audit.prune") instead of the default `__system__`. - Handler-thrown exceptions are re-wrapped as `JobExecutionException` so Quartz's MISFIRE machinery handles them properly. - `@DisallowConcurrentExecution` on the bridge stops a long-running handler from being started again before it finishes. ## api.v1 additions (package `org.vibeerp.api.v1.jobs`) - `JobHandler` — interface with `key()` + `execute(context)`. Analogous to the workflow TaskHandler. Plug-ins implement this to contribute scheduled work without any Quartz dependency. - `JobContext` — read-only execution context passed to the handler: principal, locale, correlation id, started-at instant, data map. Unlike TaskContext it has no `set()` writeback — scheduled jobs don't produce continuation state for a downstream step; a job that wants to talk to the rest of the system writes to its own domain table or publishes an event. 
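The bridge's routing discipline can be sketched without any Quartz or Spring types — a hedged toy in plain Java (the `ThreadLocal` stands in for `PrincipalContext.runAs`; the reserved data-map key matches the entry, everything else is illustrative):

```java
// Toy rendering of the bridge flow: read the handler key from the trigger's data
// map, look the handler up in the registry, run it under a job principal, restore.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class ToyJobBridge {
    interface JobHandler { void execute(Map<String, Object> data); }

    private final Map<String, JobHandler> registry = new ConcurrentHashMap<>();
    static final ThreadLocal<String> PRINCIPAL = ThreadLocal.withInitial(() -> "__system__");

    void register(String key, JobHandler handler) { registry.put(key, handler); }

    void execute(Map<String, Object> jobDataMap) {
        String key = (String) jobDataMap.get("__vibeerp_handler_key");
        JobHandler handler = registry.get(key);
        if (handler == null)
            throw new IllegalStateException("no JobHandler registered for key '" + key + "'");
        String previous = PRINCIPAL.get();
        PRINCIPAL.set("system:jobs:" + key);   // greppable created_by for audit rows
        try { handler.execute(jobDataMap); }
        finally { PRINCIPAL.set(previous); }   // restore, mirroring runAs semantics
    }
}
```

The payoff of the single-bridge shape is visible even in the toy: Quartz (or here, the caller) only ever knows one entry point, and everything handler-specific lives behind the registry lookup.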
- `JobScheduler` — injectable facade exposing: * `scheduleCron(scheduleKey, handlerKey, cronExpression, data)` * `scheduleOnce(scheduleKey, handlerKey, runAt, data)` * `unschedule(scheduleKey): Boolean` * `triggerNow(handlerKey, data): JobExecutionSummary` — synchronous in-thread execution, bypasses Quartz; used by the HTTP trigger endpoint and by tests. * `listScheduled(): List<ScheduledJobInfo>` — introspection Both `scheduleCron` and `scheduleOnce` are idempotent on `scheduleKey` (replace if exists). - `ScheduledJobInfo` + `JobExecutionSummary` + `ScheduleKind` — read-only DTOs returned by the scheduler. ## platform-jobs runtime - `QuartzJobBridge` — the shared Job impl. Routes by the `__vibeerp_handler_key` JobDataMap entry. Uses `@Autowired` field injection because Quartz instantiates Job classes through its own JobFactory (Spring Boot's `SpringBeanJobFactory` autowires fields after construction, which is the documented pattern). - `QuartzJobScheduler` — the concrete api.v1 `JobScheduler` implementation. Builds JobDetail + Trigger pairs under fixed group names (`vibeerp-jobs`), uses `addJob(replace=true)` + explicit `checkExists` + `rescheduleJob` for idempotent scheduling, strips the reserved `__vibeerp_handler_key` from the data visible to the handler. - `SimpleJobContext` — internal immutable `JobContext` impl. Defensive-copies the data map at construction. - `JobHandlerRegistry` — owner-tagged registry (OWNER_CORE by default, any other string for plug-in ownership). Same `register` / `unregister` / `unregisterAllByOwner` / `find` / `keys` / `size` surface as `TaskHandlerRegistry`. The plug-in loader integration seam is defined; the loader hook that calls `register(handler, pluginId)` lands when a plug-in actually ships a job handler (YAGNI). 
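The owner-tagged, duplicate-rejecting registry shape shared by `JobHandlerRegistry` and `TaskHandlerRegistry` is worth a sketch of its own — a generic Java approximation (API surface per the entry; error-message wording is an assumption):

```java
// Sketch of an owner-tagged registry: duplicate keys fail fast naming both
// owners, and a plug-in's handlers can be bulk-removed on plug-in stop.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class OwnerTaggedRegistry<T> {
    static final String OWNER_CORE = "core";

    private record Entry<V>(V value, String owner) {}
    private final Map<String, Entry<T>> entries = new ConcurrentHashMap<>();

    void register(String key, T value, String owner) {
        if (key == null || key.isBlank()) throw new IllegalArgumentException("key must not be blank");
        Entry<T> clash = entries.putIfAbsent(key, new Entry<>(value, owner));
        if (clash != null)
            throw new IllegalStateException("duplicate key '" + key
                + "' (existing owner: " + clash.owner() + ", new owner: " + owner + ")");
    }

    T find(String key) { Entry<T> e = entries.get(key); return e == null ? null : e.value(); }

    int unregisterAllByOwner(String owner) {
        int removed = 0;
        for (var it = entries.entrySet().iterator(); it.hasNext(); )
            if (it.next().getValue().owner().equals(owner)) { it.remove(); removed++; }
        return removed;
    }

    int size() { return entries.size(); }
}
```

`putIfAbsent` makes the duplicate check atomic, so two concurrent registrations of the same key can't both succeed — the same property a plug-in loader needs when hosts and plug-ins race at startup.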
- `JobController` at `/api/v1/jobs/**`:
  * `GET /handlers` (perm `jobs.handler.read`)
  * `POST /handlers/{key}/trigger` (perm `jobs.job.trigger`)
  * `GET /scheduled` (perm `jobs.schedule.read`)
  * `POST /scheduled` (perm `jobs.schedule.write`)
  * `DELETE /scheduled/{key}` (perm `jobs.schedule.write`)
- `VibeErpPingJobHandler` — built-in diagnostic. Key `vibeerp.jobs.ping`. Logs the invocation and exits. Safe to trigger from any environment; mirrors the core `vibeerp.workflow.ping` workflow handler from P2.1.
- `META-INF/vibe-erp/metadata/jobs.yml` — 4 permissions + 2 menus.

## Spring Boot config (application.yaml)

```
spring.quartz:
  job-store-type: jdbc
  jdbc:
    initialize-schema: always   # creates QRTZ_* tables on first boot
  properties:
    org.quartz.scheduler.instanceName: vibeerp-scheduler
    org.quartz.scheduler.instanceId: AUTO
    org.quartz.threadPool.threadCount: "4"
    org.quartz.jobStore.driverDelegateClass: org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
    org.quartz.jobStore.isClustered: "false"
```

## The config trap caught during smoke-test (documented in-file)

First boot crashed with `SchedulerConfigException: DataSource name not set.` because I'd initially added `org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX` to the raw Quartz properties. That is correct for a standalone Quartz deployment but WRONG for the Spring Boot starter: the starter configures a `LocalDataSourceJobStore` that wraps the Spring-managed DataSource automatically when `job-store-type=jdbc`, and setting `jobStore.class` explicitly overrides that wrapper back to Quartz's standalone JobStoreTX — which then fails at init because standalone Quartz expects a separately-named `dataSource` property the Spring Boot starter doesn't supply.

Fix: drop the `jobStore.class` property entirely. The `driverDelegateClass` is still fine to set explicitly because it's read by both the standalone and Spring-wrapped JobStore implementations.
Rationale is documented in the config comment so the next maintainer doesn't add it back.

## Smoke test (fresh DB, as admin)

```
GET /api/v1/jobs/handlers
→ {"count": 1, "keys": ["vibeerp.jobs.ping"]}

POST /api/v1/jobs/handlers/vibeerp.jobs.ping/trigger {"data": {"source": "smoke-test"}}
→ 200 {"handlerKey": "vibeerp.jobs.ping", "correlationId": "e142...", "startedAt": "...", "finishedAt": "...", "ok": true}
log: VibeErpPingJobHandler invoked at=... principal='system:jobs:manual-trigger' data={source=smoke-test}

GET /api/v1/jobs/scheduled → []

POST /api/v1/jobs/scheduled {"scheduleKey": "ping-every-sec", "handlerKey": "vibeerp.jobs.ping", "cronExpression": "0/1 * * * * ?", "data": {"trigger": "cron"}}
→ 201 {"scheduleKey": "ping-every-sec", "handlerKey": "vibeerp.jobs.ping"}

# after 3 seconds
GET /api/v1/jobs/scheduled
→ [{"scheduleKey": "ping-every-sec", "handlerKey": "vibeerp.jobs.ping", "kind": "CRON", "cronExpression": "0/1 * * * * ?", "nextFireTime": "...", "previousFireTime": "...", "data": {"trigger": "cron"}}]

DELETE /api/v1/jobs/scheduled/ping-every-sec
→ 200 {"removed": true}

# handler log count after ~3 seconds of cron ticks
grep -c "VibeErpPingJobHandler invoked" /tmp/boot.log
→ 5   # 1 manual trigger + 4 cron ticks before unschedule — matches the 0/1 * * * * ? expression

# negatives
POST /api/v1/jobs/handlers/nope/trigger
→ 400 "no JobHandler registered for key 'nope'"
POST /api/v1/jobs/scheduled {cronExpression: "not a cron"}
→ 400 "invalid Quartz cron expression: 'not a cron'"
```

## Three schemas coexist in one Postgres database

```
SELECT count(*) FILTER (WHERE table_name LIKE 'qrtz_%') AS quartz_tables,
       count(*) FILTER (WHERE table_name LIKE 'act_%') AS flowable_tables,
       count(*) FILTER (WHERE table_name NOT LIKE 'qrtz_%'
                          AND table_name NOT LIKE 'act_%'
                          AND table_schema = 'public') AS vibeerp_tables
FROM information_schema.tables WHERE table_schema = 'public';

 quartz_tables | flowable_tables | vibeerp_tables
---------------+-----------------+----------------
            11 |              39 |             48
```

Three independent schema owners (Quartz / Flowable / Liquibase) in one public schema, no collisions. Spring Boot's `QuartzDataSourceScriptDatabaseInitializer` runs the QRTZ_* DDL once and skips on subsequent boots; Flowable's internal MyBatis schema manager does the same for ACT_* tables; our Liquibase owns the rest.

## Tests

- 6 new tests in `JobHandlerRegistryTest`:
  * initial handlers registered with OWNER_CORE
  * duplicate key fails fast with both owners in the error
  * unregisterAllByOwner only removes handlers owned by that id
  * unregister by key returns false for unknown
  * find on missing key returns null
  * blank key is rejected
- 9 new tests in `QuartzJobSchedulerTest` (Quartz Scheduler mocked):
  * scheduleCron rejects an unknown handler key
  * scheduleCron rejects an invalid cron expression
  * scheduleCron adds job + schedules trigger when nothing exists yet
  * scheduleCron reschedules when the trigger already exists
  * scheduleOnce uses a simple trigger at the requested instant
  * unschedule returns true/false correctly
  * triggerNow calls the handler synchronously and returns ok=true
  * triggerNow propagates the handler's exception
  * triggerNow rejects an unknown handler key
- Total framework unit tests: 315 (was 300), all green.
## What this unblocks - **pbc-finance audit prune** — a core recurring job that deletes posted journal entries older than N days, driven by a cron from a Tier 1 metadata row. - **Plug-in scheduled work** — once the loader integration hook is wired (trivial follow-up), any plug-in's `start(context)` can register a JobHandler via `context.jobs.register(handler)` and the host strips it on plug-in stop via `unregisterAllByOwner`. - **Delayed workflow continuations** — a BPMN handler can call `jobScheduler.scheduleOnce(...)` to "re-evaluate this workflow in 24 hours if no one has approved it", bridging the workflow engine and the scheduler without introducing Thread.sleep. - **Outbox draining strategy** — the existing 5-second OutboxPoller can move from a Spring @Scheduled to a Quartz cron so it inherits the scheduler's persistence, misfire handling, and the future clustering story. ## Non-goals (parking lot) - **Clustered scheduling.** `isClustered=false` for now. Making this true requires every instance to share a unique `instanceId` and agree on the JDBC lock policy — doable but out of v1.0 scope since vibe_erp is single-tenant single-instance by design. - **Async execution of triggerNow.** The current `triggerNow` runs synchronously on the caller thread so HTTP requests see the real result. A future "fire and forget" endpoint would delegate to `Scheduler.triggerJob(...)` against the JobDetail instead. - **Per-job permissions.** Today the four `jobs.*` permissions gate the whole controller. A future enhancement could attach per-handler permissions (so "trigger audit prune" requires a different permission than "trigger pricing refresh"). - **Plug-in loader integration.** The seam is defined on `JobHandlerRegistry` (owner tagging + unregisterAllByOwner) but `VibeErpPluginManager` doesn't call it yet. Lands in the same chunk as the first plug-in that ships a JobHandler. -
…alidates locations at create

Follow-up to the pbc-warehousing chunk. Plugs a real gap noticed in the smoke test: an unknown fromLocationCode or toLocationCode on a StockTransfer was silently accepted at create() and only surfaced as a confirm()-time rollback, which is a confusing UX — the operator mistypes a location code on transfer TR-001, hits "create", then hits "confirm" minutes later and sees "location GHOST-SRC is not in the inventory directory".

## api.v1 growth

New cross-PBC method on `InventoryApi`:

fun findLocationByCode(locationCode: String): LocationRef?

Parallel shape to `CatalogApi.findItemByCode` — a lookup-by-code returning a lightweight ref or null, safe for any cross-PBC consumer to inject. The returned `LocationRef` data class carries id, code, name, type (as a String, not the inventory-internal LocationType enum — rationale in the KDoc), and active flag. Fields that are NOT part of the cross-PBC contract (audit columns, ext JSONB, the raw JPA entity) stay inside pbc-inventory.

This is an additive api.v1 change within the v1 line — no breaking rename, no signature churn on existing methods. The interface adds a new abstract method, which IS technically a source-breaking change for any in-tree implementation, but the only impl is pbc-inventory/InventoryApiAdapter, which is updated in the same commit. No external plug-in implements InventoryApi (by design; plug-ins inject it, they don't provide it).

## Adapter implementation

`InventoryApiAdapter.findLocationByCode` resolves the location via the existing `LocationJpaRepository.findByCode`, which is exactly what `recordMovement` already uses. A new private extension `Location.toRef()` builds the api.v1 DTO. Zero new SQL; zero new repository methods.

## pbc-warehousing wiring

`StockTransferService.create` now calls the facade twice — once for the source location, once for the destination — BEFORE validating lines.
The ordering is five checks: code uniqueness → from != to → non-empty lines → both locations exist and are active → per-line validation. Unknown locations produce a 400 with a clear message; deactivated locations produce a 400 distinguishing "doesn't exist" from "exists but can't be used":

"from location code 'GHOST-SRC' is not in the inventory directory"
"from location 'WH-CLOSED' is deactivated and cannot be a transfer source"

The confirm() path is unchanged. Locations may still vanish between create and confirm (though the likelihood is low for a normal workflow), and `recordMovement` will still raise its own error in that case — belt and suspenders.

## Smoke test

```
POST /api/v1/inventory/locations {code: WH-GOOD, type: WAREHOUSE}
POST /api/v1/inventory/locations {code: WH-OTHER, type: WAREHOUSE}
POST /api/v1/catalog/items {code: ITEM-1, baseUomCode: ea}

POST /api/v1/warehousing/stock-transfers {code: TR-bad, fromLocationCode: GHOST-SRC, toLocationCode: WH-GOOD, lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
→ 400 "from location code 'GHOST-SRC' is not in the inventory directory"
(before this commit: 201 DRAFT, then 400 at confirm)

POST /api/v1/warehousing/stock-transfers {code: TR-bad2, fromLocationCode: WH-GOOD, toLocationCode: GHOST-DST, lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
→ 400 "to location code 'GHOST-DST' is not in the inventory directory"

POST /api/v1/warehousing/stock-transfers {code: TR-ok, fromLocationCode: WH-GOOD, toLocationCode: WH-OTHER, lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
→ 201 DRAFT   ← happy path still works
```

## Tests

- Updated the 3 existing `StockTransferServiceTest` tests that created real transfers to stub `inventory.findLocationByCode` for both WH-A and WH-B via a new `stubLocation()` helper.
- 3 new tests:
  * `create rejects unknown from location via InventoryApi`
  * `create rejects unknown to location via InventoryApi`
  * `create rejects a deactivated from location`
- Total framework unit tests: 300 (was 297), all green.
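The create-time ordering can be sketched as a pure validator, with the location lookup modeled as a nullable-returning function standing in for `InventoryApi.findLocationByCode` (error messages follow the entry; everything else is assumed):

```java
// Sketch of the create() validation ordering: uniqueness, from != to, non-empty
// lines, then the two cross-PBC location lookups, then per-line checks.
import java.util.List;
import java.util.function.Function;

final class TransferCreateValidator {
    record LocationRef(String code, boolean active) {}

    static void validate(String code, String from, String to, List<Double> lineQuantities,
                         Function<String, Boolean> codeExists,
                         Function<String, LocationRef> findLocationByCode) {
        if (codeExists.apply(code))
            throw new IllegalArgumentException("transfer code '" + code + "' already exists");
        if (from.equals(to))
            throw new IllegalArgumentException("from and to locations must differ");
        if (lineQuantities.isEmpty())
            throw new IllegalArgumentException("a transfer needs at least one line");
        requireActive(findLocationByCode, from, "from");
        requireActive(findLocationByCode, to, "to");
        for (double q : lineQuantities)
            if (q <= 0) throw new IllegalArgumentException("line quantity must be positive");
    }

    private static void requireActive(Function<String, LocationRef> lookup, String code, String side) {
        LocationRef ref = lookup.apply(code);
        if (ref == null)
            throw new IllegalArgumentException(side + " location code '" + code + "' is not in the inventory directory");
        if (!ref.active())
            throw new IllegalArgumentException(side + " location '" + code + "' is deactivated");
    }
}
```

Putting the cheap local checks before the facade calls keeps the common failure modes (duplicate code, typo'd same-location transfer) from ever crossing the PBC boundary.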
## Why this isn't a breaking api.v1 change InventoryApi is an interface consumed by other PBCs and by plug-ins, implemented ONLY by pbc-inventory. Adding a new method to an interface IS a source-breaking change for any implementer — but the framework's dependency rules mean no external code implements this interface. Plug-ins and other PBCs CONSUME it via dependency injection; the only production impl is InventoryApiAdapter, updated in the same commit. Binary compatibility for consumers is preserved: existing call sites compile and run unchanged because only the interface grew, not its existing methods. If/when a third party implements InventoryApi (e.g. a test double outside the framework, or a custom backend plug-in), this would be a semver-major-worthy addition. For the in-tree framework, it's additive-within-a-major. -
Closes the core PBC row of the v1.0 target. Ships pbc-quality as a lean v1 recording-only aggregate: any caller that performs a quality inspection (inbound goods, in-process work order output, outbound shipment) appends an immutable InspectionRecord with a decision (APPROVED/REJECTED), inspected/rejected quantities, a free-form source reference, and the inspector's principal id. ## Deliberately narrow v1 scope pbc-quality does NOT ship: - cross-PBC writes (no "rejected stock gets auto-quarantined" rule) - event publishing (no InspectionRecordedEvent in api.v1 yet) - inspection plans or templates (no "item X requires checks Y, Z") - multi-check records (one decision per row; multi-step inspections become multiple records) The rationale is the "follow the consumer" discipline: every seam the framework adds has to be driven by a real consumer. With no PBC yet subscribing to inspection events or calling into pbc-quality, speculatively building those capabilities would be guessing the shape. Future chunks that actually need them (e.g. pbc-warehousing auto-quarantine on rejection, pbc-production WorkOrder scrap from rejected QC) will grow the seam into the shape they need. Even at this narrow scope pbc-quality delivers real value: a queryable, append-only, permission-gated record of every QC decision in the system, filterable by source reference or item code, and linked to the catalog via CatalogApi. ## Module contents - `build.gradle.kts` — new Gradle subproject following the existing recipe. api-v1 + platform/persistence + platform/security only; no cross-pbc deps (guardrail #9 stays honest). - `InspectionRecord` entity — code, item_code, source_reference, decision (enum), inspected_quantity, rejected_quantity, inspector (principal id as String, same convention as created_by), reason, inspected_at. Owns table `quality__inspection_record`. 
No `ext` column in v1 — the aggregate is simple enough that adding Tier 1 customization now would be speculation; it can be added in one edit when a customer asks for it. - `InspectionDecision` enum — APPROVED, REJECTED. Deliberately two-valued; see the entity KDoc for why "conditional accept" is rejected as a shape. - `InspectionRecordJpaRepository` — existsByCode, findByCode, findBySourceReference, findByItemCode. - `InspectionRecordService` — ONE write verb `record`. Inspections are immutable; revising means recording a new one with a new code. Validates: * code is unique * source reference non-blank * inspected quantity > 0 * rejected quantity >= 0 * rejected <= inspected * APPROVED ↔ rejected = 0, REJECTED ↔ rejected > 0 * itemCode resolves via CatalogApi Inspector is read from `PrincipalContext.currentOrSystem()` at call time so a real HTTP user records their own inspections and a background job recording a batch uses a named system principal. - `InspectionRecordController` — `/api/v1/quality/inspections` with GET list (supports `?sourceReference=` and `?itemCode=` query params), GET by id, GET by-code, POST record. Every endpoint @RequirePermission-gated. - `META-INF/vibe-erp/metadata/quality.yml` — 1 entity, 2 permissions (`quality.inspection.read`, `quality.inspection.record`), 1 menu. - `distribution/.../db/changelog/pbc-quality/001-quality-init.xml` — single table with the full audit column set plus: * CHECK decision IN ('APPROVED', 'REJECTED') * CHECK inspected_quantity > 0 * CHECK rejected_quantity >= 0 * CHECK rejected_quantity <= inspected_quantity The application enforces the biconditional (APPROVED ↔ rejected=0) because CHECK constraints in Postgres can't express the same thing ergonomically; the DB enforces the weaker "rejected is within bounds" so a direct INSERT can't fabricate nonsense. - `settings.gradle.kts`, `distribution/build.gradle.kts`, `master.xml` all wired. 
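The `record` invariants, including the biconditional the application enforces on top of the weaker DB CHECKs, fit in a small pure function — a Java sketch (the two messages shown in the smoke test are reproduced; the rest of the wording is an assumption):

```java
// Sketch of record()'s quantity/decision invariants, including the
// APPROVED <-> rejected=0 biconditional that the Postgres CHECKs only approximate.
import java.math.BigDecimal;

final class InspectionRules {
    static void check(String decision, BigDecimal inspected, BigDecimal rejected) {
        if (inspected.signum() <= 0)
            throw new IllegalArgumentException("inspected quantity must be positive");
        if (rejected.signum() < 0)
            throw new IllegalArgumentException("rejected quantity must not be negative");
        if (rejected.compareTo(inspected) > 0)
            throw new IllegalArgumentException(
                "rejected quantity (" + rejected + ") cannot exceed inspected (" + inspected + ")");
        if ("APPROVED".equals(decision) && rejected.signum() != 0)
            throw new IllegalArgumentException(
                "APPROVED inspection must have rejected quantity = 0 (got " + rejected + "); record a REJECTED inspection instead");
        if ("REJECTED".equals(decision) && rejected.signum() == 0)
            throw new IllegalArgumentException("REJECTED inspection must have rejected quantity > 0");
    }
}
```

The split of responsibility is deliberate: the service rejects semantic nonsense with a helpful message, while the DB CHECKs stop a raw INSERT from fabricating out-of-bounds quantities.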
## Smoke test (fresh DB + running app, as admin)

```
POST /api/v1/catalog/items {code: WIDGET-1, baseUomCode: ea} → 201

POST /api/v1/quality/inspections {code: QC-2026-001, itemCode: WIDGET-1, sourceReference: "WO:WO-001", decision: APPROVED, inspectedQuantity: 100, rejectedQuantity: 0}
→ 201 {inspector: <admin principal uuid>, inspectedAt: "..."}

POST /api/v1/quality/inspections {code: QC-2026-002, itemCode: WIDGET-1, sourceReference: "WO:WO-002", decision: REJECTED, inspectedQuantity: 50, rejectedQuantity: 7, reason: "surface scratches detected on 7 units"}
→ 201

GET /api/v1/quality/inspections?sourceReference=WO:WO-001
→ [{code: QC-2026-001, ...}]

GET /api/v1/quality/inspections?itemCode=WIDGET-1
→ [APPROVED, REJECTED]   ← filter works, 2 records

# Negative: APPROVED with positive rejected
POST /api/v1/quality/inspections {decision: APPROVED, rejectedQuantity: 3, ...}
→ 400 "APPROVED inspection must have rejected quantity = 0 (got 3); record a REJECTED inspection instead"

# Negative: rejected > inspected
POST /api/v1/quality/inspections {decision: REJECTED, inspectedQuantity: 5, rejectedQuantity: 10, ...}
→ 400 "rejected quantity (10) cannot exceed inspected (5)"

GET /api/v1/_meta/metadata
→ permissions include ["quality.inspection.read", "quality.inspection.record"]
```

The `inspector` field on the created records contains the admin user's principal UUID exactly as written by the `PrincipalContextFilter` — proving the audit trail end-to-end.
## Tests - 9 new unit tests in `InspectionRecordServiceTest`: * `record persists an APPROVED inspection with rejected=0` * `record persists a REJECTED inspection with positive rejected` * `inspector defaults to system when no principal is bound` — validates the `PrincipalContext.currentOrSystem()` fallback * `record rejects duplicate code` * `record rejects non-positive inspected quantity` * `record rejects rejected greater than inspected` * `APPROVED with positive rejected is rejected` * `REJECTED with zero rejected is rejected` * `record rejects unknown items via CatalogApi` - Total framework unit tests: 297 (was 288), all green. ## Framework state after this commit - **20 → 21 Gradle subprojects** - **10 of 10 core PBCs live** (pbc-identity, pbc-catalog, pbc-partners, pbc-inventory, pbc-warehousing, pbc-orders-sales, pbc-orders-purchase, pbc-finance, pbc-production, pbc-quality). The P5.x row of the implementation plan is complete at minimal v1 scope. - The v1.0 acceptance bar's "core PBC coverage" line is met. Remaining v1.0 work is cross-cutting (reports, forms, scheduler, web SPA) plus the richer per-PBC v2/v3 scopes. ## What this unblocks - **Cross-PBC quality integration** — any PBC that needs to react to a quality decision can subscribe when pbc-quality grows its event. pbc-warehousing quarantine on rejection is the obvious first consumer. - **The full buy-make-sell BPMN scenario** — now every step has a home: sales → procurement → warehousing → production → quality → finance are all live. The big reference-plug-in end-to-end flow is unblocked at the PBC level. - **Completes the P5.x row** of the implementation plan. Remaining v1.0 work is cross-cutting platform units (P1.8 reports, P1.9 files, P1.10 jobs, P2.2/P2.3 designer/forms) plus the web SPA. -
Ninth core PBC. Ships the first-class orchestration aggregate for moving stock between locations: a header + lines that represent operator intent, and a confirm() verb that atomically posts the matching TRANSFER_OUT / TRANSFER_IN ledger pair per line via the existing InventoryApi.recordMovement facade. Takes the framework's core-PBC count to 9 of 10 (only pbc-quality remains in the P5.x row).

## The shape

pbc-warehousing sits above pbc-inventory in the dependency graph: it doesn't replace the flat movement ledger, it orchestrates multi-row ledger writes with a business-level document on top. A DRAFT `warehousing__stock_transfer` row is queued intent (pickers haven't started yet); a CONFIRMED row reflects movements that have already posted to the `inventory__stock_movement` ledger. Each confirmed line becomes two ledger rows:

    TRANSFER_OUT(itemCode, fromLocationCode, -quantity, ref="TR:<code>")
    TRANSFER_IN (itemCode, toLocationCode,    quantity, ref="TR:<code>")

All rows of one confirm call run inside ONE @Transactional method, so a failure anywhere — unknown item, unknown location, balance would go below zero — rolls back both halves of EVERY line. There is no half-confirmed transfer.

## Module contents

- `build.gradle.kts` — new Gradle subproject, api-v1 + platform/* dependencies only. No cross-PBC dependency (guardrail #9 stays honest; CatalogApi + InventoryApi both come in via api.v1.ext).
- `StockTransfer` entity — header with code, from/to location codes, status (DRAFT/CONFIRMED/CANCELLED), transfer_date, note, OneToMany<StockTransferLine>. Table name `warehousing__stock_transfer`.
- `StockTransferLine` entity — lineNo, itemCode, quantity. `transfer_id → warehousing__stock_transfer(id) ON DELETE CASCADE`, unique `(transfer_id, line_no)`.
- `StockTransferJpaRepository` — existsByCode + findByCode.
- `StockTransferService` — create / confirm / cancel + three read methods.
  @Transactional at the service level; all state transitions run through @Transactional methods so the event-bus MANDATORY propagation (if/when a pbc-warehousing event is added later) has a transaction to join. Business invariants:
  * code is unique (existsByCode short-circuit)
  * from != to (enforced in code AND in the Liquibase CHECK)
  * at least one line
  * each line: positive line_no, unique per transfer, positive quantity, itemCode must resolve via CatalogApi.findItemByCode
  * confirm requires DRAFT; writes OUT-first-per-line so a balance-goes-negative error aborts before touching the destination location
  * cancel requires DRAFT; CONFIRMED transfers are terminal (reverse by creating a NEW transfer in the opposite direction, matching the document-discipline rule every other PBC uses)
- `StockTransferController` — `/api/v1/warehousing/stock-transfers` with GET list, GET by id, GET by-code, POST create, POST {id}/confirm, POST {id}/cancel. Every endpoint @RequirePermission-gated using the keys declared in the metadata YAML. Matches the shape of pbc-orders-sales, pbc-orders-purchase, pbc-production.
- DTOs use the established pattern — jakarta.validation on the request, response mapping via extension functions.
- `META-INF/vibe-erp/metadata/warehousing.yml` — 1 entity, 4 permissions, 1 menu. Loaded by MetadataLoader at boot, visible via `GET /api/v1/_meta/metadata`.
- `distribution/src/main/resources/db/changelog/pbc-warehousing/001-warehousing-init.xml` — creates both tables with the full audit column set, state CHECK constraint, locations-distinct CHECK, unique (transfer_id, line_no) index, quantity > 0 CHECK, item_code index for cross-PBC grep.
- `settings.gradle.kts`, `distribution/build.gradle.kts`, `master.xml` all wired.
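The confirm path described above can be sketched roughly as follows. Method and command names are assumptions based on this description, not the exact service code; the points being illustrated are the DRAFT guard, the OUT-before-IN ordering per line, and the single surrounding transaction:

```kotlin
import java.util.UUID

// Sketch only — InventoryApi.recordMovement's real signature may differ.
@Transactional
fun confirm(id: UUID): StockTransfer {
    val transfer = repository.findById(id).orElseThrow()
    check(transfer.status == Status.DRAFT) {
        "cannot confirm stock transfer ${transfer.code} in status ${transfer.status}; only DRAFT can be confirmed"
    }
    val ref = "TR:${transfer.code}"
    transfer.lines.sortedBy { it.lineNo }.forEach { line ->
        // OUT first: a balance-below-zero failure aborts before the destination is touched,
        // and the surrounding @Transactional rolls back every previously posted pair.
        inventoryApi.recordMovement(line.itemCode, transfer.fromLocationCode, line.quantity.negate(), "TRANSFER_OUT", ref)
        inventoryApi.recordMovement(line.itemCode, transfer.toLocationCode, line.quantity, "TRANSFER_IN", ref)
    }
    transfer.status = Status.CONFIRMED
    return transfer
}
```

Because every `recordMovement` call joins the same transaction, an unchecked exception from any line leaves neither the ledger rows nor the status flip behind.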
## Smoke test (fresh DB + running app)

```
# seed
POST /api/v1/catalog/items {code: PAPER-A4, baseUomCode: sheet}
POST /api/v1/catalog/items {code: PAPER-A3, baseUomCode: sheet}
POST /api/v1/inventory/locations {code: WH-MAIN, type: WAREHOUSE}
POST /api/v1/inventory/locations {code: WH-SHOP, type: WAREHOUSE}
POST /api/v1/inventory/movements {itemCode: PAPER-A4, locationId: <WH-MAIN>, delta: 100, reason: RECEIPT}
POST /api/v1/inventory/movements {itemCode: PAPER-A3, locationId: <WH-MAIN>, delta: 50, reason: RECEIPT}

# exercise the new PBC
POST /api/v1/warehousing/stock-transfers
  {code: TR-001, fromLocationCode: WH-MAIN, toLocationCode: WH-SHOP,
   lines: [{lineNo: 1, itemCode: PAPER-A4, quantity: 30}, {lineNo: 2, itemCode: PAPER-A3, quantity: 10}]}
  → 201 DRAFT
POST /api/v1/warehousing/stock-transfers/<id>/confirm → 200 CONFIRMED

# verify balances via the raw DB (the HTTP stock-balance endpoint
# has a separate unrelated bug returning 500; the ledger state is
# what this commit is proving)
SELECT item_code, location_id, quantity FROM inventory__stock_balance;
  PAPER-A4 / WH-MAIN → 70   ← debited 30
  PAPER-A4 / WH-SHOP → 30   ← credited 30
  PAPER-A3 / WH-MAIN → 40   ← debited 10
  PAPER-A3 / WH-SHOP → 10   ← credited 10

SELECT item_code, location_id, reason, delta, reference FROM inventory__stock_movement ORDER BY occurred_at;
  PAPER-A4 / WH-MAIN / TRANSFER_OUT / -30 / TR:TR-001
  PAPER-A4 / WH-SHOP / TRANSFER_IN  /  30 / TR:TR-001
  PAPER-A3 / WH-MAIN / TRANSFER_OUT / -10 / TR:TR-001
  PAPER-A3 / WH-SHOP / TRANSFER_IN  /  10 / TR:TR-001
```

Four rows all tagged `TR:TR-001`. A grep of the ledger attributes both halves of each line to the single source transfer document.
## Transactional rollback test (in the same smoke run)

```
# ask for more than exists
POST /api/v1/warehousing/stock-transfers
  {code: TR-002, from: WH-MAIN, to: WH-SHOP, lines: [{lineNo: 1, itemCode: PAPER-A4, quantity: 1000}]}
  → 201 DRAFT
POST /api/v1/warehousing/stock-transfers/<id>/confirm
  → 400 "stock movement would push balance for 'PAPER-A4' at location <WH-MAIN> below zero (current=70.0000, delta=-1000.0000)"

# assert TR-002 is still DRAFT
GET /api/v1/warehousing/stock-transfers/<id> → status: DRAFT   ← NOT flipped to CONFIRMED

# assert the ledger still has exactly 6 rows (no partial writes)
SELECT count(*) FROM inventory__stock_movement; → 6
```

The failed confirm left no residue: status stayed DRAFT, and the ledger count is unchanged at 6 (the 2 RECEIPT seeds + the 4 TRANSFER_OUT/IN from TR-001). Propagation.REQUIRED + Spring's default rollback-on-unchecked-exception semantics do exactly what the KDoc promises.

## State-machine guards

```
POST /api/v1/warehousing/stock-transfers/<confirmed-id>/confirm
  → 400 "cannot confirm stock transfer TR-001 in status CONFIRMED; only DRAFT can be confirmed"
POST /api/v1/warehousing/stock-transfers/<confirmed-id>/cancel
  → 400 "cannot cancel stock transfer TR-001 in status CONFIRMED; only DRAFT can be cancelled — reverse a confirmed transfer by creating a new one in the other direction"
```

## Tests

- 10 new unit tests in `StockTransferServiceTest`:
  * `create persists a DRAFT transfer when everything validates`
  * `create rejects duplicate code`
  * `create rejects same from and to location`
  * `create rejects an empty line list`
  * `create rejects duplicate line numbers`
  * `create rejects non-positive quantities`
  * `create rejects unknown items via CatalogApi`
  * `confirm writes an atomic TRANSFER_OUT + TRANSFER_IN pair per line` — uses `verifyOrder` to assert OUT-first-per-line dispatch order
  * `confirm refuses a non-DRAFT transfer`
  * `cancel refuses a CONFIRMED transfer`
  * `cancel flips a DRAFT transfer to CANCELLED`
- Total framework unit tests: 288 (was 278), all green.

## What this unblocks

- **Real warehouse workflows** — confirm a transfer from a picker UI (R1 is pending), driven by a BPMN that hands the confirm to a TaskHandler once the physical move is complete.
- **pbc-quality (P5.8, last remaining core PBC)** — inspection plans + results + holds. Holds would typically quarantine stock by moving it to a QUARANTINE location via a stock transfer, which is the natural consumer for this aggregate.
- **Stocktakes (physical inventory reconciliation)** — future pbc-warehousing verb that compares counted vs recorded and posts the differences as ADJUSTMENT rows; shares the same `recordMovement` primitive.

-
…duction auto-creates WorkOrder

First end-to-end cross-PBC workflow driven entirely from a customer plug-in through api.v1 surfaces. A printing-shop BPMN kicks off a TaskHandler that publishes a generic api.v1 event; pbc-production reacts by creating a DRAFT WorkOrder. The plug-in has zero compile-time coupling to pbc-production, and pbc-production has zero knowledge the plug-in exists.

## Why an event, not a facade

Two options were on the table for "how does a plug-in ask pbc-production to create a WorkOrder":

(a) add a new cross-PBC facade `api.v1.ext.production.ProductionApi` with a `createWorkOrder(command)` method
(b) add a generic `WorkOrderRequestedEvent` in `api.v1.event.production` that anyone can publish — this commit

Facade pattern (a) is what InventoryApi.recordMovement and CatalogApi.findItemByCode use: synchronous, in-transaction, caller-blocks-on-completion. Event pattern (b) is what SalesOrderConfirmedEvent → SalesOrderConfirmedSubscriber uses: asynchronous over the bus, still in-transaction (the bus uses `Propagation.MANDATORY` with synchronous delivery so a failure rolls everything back), but the caller doesn't need a typed result.

Option (b) wins for plug-in → pbc-production:

- Plug-in compile-time surface stays identical: plug-ins already import `api.v1.event.*` to publish. No new api.v1.ext package. Zero new plug-in dependency.
- The outbox gets the row for free — a crash between publish and delivery replays cleanly from `platform__event_outbox`.
- A second customer plug-in shipping a different flow that ALSO wants to auto-spawn work orders doesn't need a second facade, just publishes the same event. pbc-scheduling (future) can subscribe to the same channel without duplicating code.

The synchronous facade pattern stays the right tool for cross-PBC operations the caller needs to observe (read-throughs, inventory debits that must block the current transaction). Creating a DRAFT work order is a fire-and-trust operation — the event shape fits.
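Under option (b), the event class could look roughly like this. It is a sketch based on the field list and fail-fast validation described in the next section; the `DomainEvent` supertype name comes from the text, but the exact declaration is an assumption:

```kotlin
import java.math.BigDecimal

// Sketch of the event shape — constructor and base-type details are assumptions.
class WorkOrderRequestedEvent(
    val code: String,              // desired, globally unique work-order code
    val outputItemCode: String,    // what to produce
    val outputQuantity: BigDecimal,
    val sourceReference: String,   // opaque pointer for logs and the outbox audit trail
) : DomainEvent {
    init {
        // Fail fast at publish time rather than at the subscriber.
        require(code.isNotBlank()) { "code must not be blank" }
        require(outputItemCode.isNotBlank()) { "outputItemCode must not be blank" }
        require(outputQuantity > BigDecimal.ZERO) { "outputQuantity must be positive" }
        require(sourceReference.isNotBlank()) { "sourceReference must not be blank" }
    }
}
```

Because delivery is synchronous and in-transaction, a `require` failure here surfaces to the publisher immediately, before any outbox row is written.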
## What landed

### api.v1 — WorkOrderRequestedEvent

New event class `org.vibeerp.api.v1.event.production.WorkOrderRequestedEvent` with four required fields:

- `code`: desired work-order code (must be unique globally; convention is to bake the source reference into it so duplicate detection is trivial, e.g. `WO-FROM-PRINTINGSHOP-Q-007`)
- `outputItemCode` + `outputQuantity`: what to produce
- `sourceReference`: opaque free-form pointer used in logs and the outbox audit trail. Example values: `plugin:printing-shop:quote:Q-007`, `pbc-orders-sales:SO-2026-001:L2`

The class is a `DomainEvent` (not a `WorkOrderEvent` subclass — the existing `WorkOrderEvent` sealed interface is for LIFECYCLE events published BY pbc-production, not for inbound requests). `init` validators reject blank strings and non-positive quantities so a malformed event fails fast at publish time rather than at the subscriber.

### pbc-production — WorkOrderRequestedSubscriber

New `@Component` in `pbc/pbc-production/.../event/WorkOrderRequestedSubscriber.kt`. Subscribes in `@PostConstruct` via the typed-class `EventBus.subscribe` overload (same pattern as `SalesOrderConfirmedSubscriber` + the six pbc-finance order subscribers). The subscriber:

1. Looks up `workOrders.findByCode(event.code)` as the idempotent short-circuit. If a WorkOrder with that code already exists (outbox replay, future async bus retry, developer re-running the same BPMN process), the subscriber logs at DEBUG and returns. **A second execution of the same BPMN produces the same outbox row, which the subscriber then skips — the database ends up with exactly ONE WorkOrder regardless of how many times the process runs.**
2. Calls `WorkOrderService.create(CreateWorkOrderCommand(...))` with the event's fields. `sourceSalesOrderCode` is null because this is the generic path, not the SO-driven one.

Why this is a SECOND subscriber rather than extending `SalesOrderConfirmedSubscriber`: the two events serve different producers.
`SalesOrderConfirmedEvent` is pbc-orders-sales-specific and requires a round-trip through `SalesOrdersApi.findByCode` to fetch the lines; `WorkOrderRequestedEvent` carries everything the subscriber needs inline. Collapsing them would mean the generic path inherits the SO-flow's SO-specific lookup and short-circuit logic that doesn't apply to it.

### reference printing-shop plug-in — CreateWorkOrderFromQuoteTaskHandler

New plug-in TaskHandler in `reference-customer/plugin-printing-shop/.../workflow/CreateWorkOrderFromQuoteTaskHandler.kt`. Captures the `PluginContext` via constructor — same pattern as `PlateApprovalTaskHandler` landed in `7b2ab34d` — and from inside `execute`:

1. Reads `quoteCode`, `itemCode`, `quantity` off the process variables (`quantity` accepts Number or String since Flowable's variable coercion is flexible).
2. Derives `workOrderCode = "WO-FROM-PRINTINGSHOP-$quoteCode"` and `sourceReference = "plugin:printing-shop:quote:$quoteCode"`.
3. Logs via `context.logger.info(...)` — the line is tagged `[plugin:printing-shop]` by the framework's `Slf4jPluginLogger`.
4. Publishes `WorkOrderRequestedEvent` via `context.eventBus.publish(...)`. This is the first time a plug-in TaskHandler publishes a cross-PBC event from inside a workflow — proves the event-bus leg of the handler-context pattern works end-to-end.
5. Writes `workOrderCode` + `workOrderRequested=true` back to the process variables so a downstream BPMN step or the HTTP caller can see the derived code.

The handler is registered in `PrintingShopPlugin.start(context)` alongside `PlateApprovalTaskHandler`:

    context.taskHandlers.register(PlateApprovalTaskHandler(context))
    context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context))

Teardown via `unregisterAllByOwner("printing-shop")` still works unchanged — the scoped registrar tracks both handlers.
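The five steps above can be sketched as a handler body. The `TaskHandler` / `TaskContext` member names (`key`, `getVariable`, `setVariable`) are assumptions for illustration, not the framework's exact surface:

```kotlin
import java.math.BigDecimal

// Sketch only — TaskHandler / PluginContext member names are assumed.
class CreateWorkOrderFromQuoteTaskHandler(private val context: PluginContext) : TaskHandler {
    override val key = "printing_shop.quote.create_work_order"

    override fun execute(task: TaskContext) {
        val quoteCode = task.getVariable("quoteCode") as String
        val itemCode = task.getVariable("itemCode") as String
        // Flowable may hand back a Number or a String depending on how the variable was set.
        val quantity = when (val raw = task.getVariable("quantity")) {
            is Number -> BigDecimal(raw.toString())
            is String -> BigDecimal(raw)
            else -> error("quantity must be a Number or a String")
        }
        val workOrderCode = "WO-FROM-PRINTINGSHOP-$quoteCode"
        context.logger.info("quote $quoteCode: publishing WorkOrderRequestedEvent (code=$workOrderCode)")
        context.eventBus.publish(
            WorkOrderRequestedEvent(workOrderCode, itemCode, quantity, "plugin:printing-shop:quote:$quoteCode")
        )
        // Let a downstream BPMN step or the HTTP caller see the derived code.
        task.setVariable("workOrderCode", workOrderCode)
        task.setVariable("workOrderRequested", true)
    }
}
```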
### reference printing-shop plug-in — quote-to-work-order.bpmn20.xml

New BPMN file `processes/quote-to-work-order.bpmn20.xml` in the plug-in JAR. Single synchronous service task, process definition key `plugin-printing-shop-quote-to-work-order`, service task id `printing_shop.quote.create_work_order` (matches the handler key). Auto-deployed by the host's `PluginProcessDeployer` at plug-in start — the printing-shop plug-in now ships two BPMNs bundled into one Flowable deployment, both under category `printing-shop`.

## Smoke test (fresh DB)

```
$ docker compose down -v && docker compose up -d db
$ ./gradlew :distribution:bootRun &
... registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop'
... registered TaskHandler 'printing_shop.quote.create_work_order' owner='printing-shop'
... [plugin:printing-shop] registered 2 TaskHandlers: printing_shop.plate.approve, printing_shop.quote.create_work_order
PluginProcessDeployer: plug-in 'printing-shop' deployed 2 BPMN resource(s) as Flowable deploymentId='1e5c...':
  [processes/quote-to-work-order.bpmn20.xml, processes/plate-approval.bpmn20.xml]
pbc-production subscribed to WorkOrderRequestedEvent via EventBus.subscribe (typed-class overload)

# 1) seed a catalog item
$ curl -X POST /api/v1/catalog/items
  {"code":"BOOK-HARDCOVER","name":"Hardcover book","itemType":"GOOD","baseUomCode":"ea"}
  → 201 BOOK-HARDCOVER

# 2) start the plug-in's quote-to-work-order BPMN
$ curl -X POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"plugin-printing-shop-quote-to-work-order",
   "variables":{"quoteCode":"Q-007","itemCode":"BOOK-HARDCOVER","quantity":500}}
  → 201 {"ended":true,
         "variables":{"quoteCode":"Q-007", "itemCode":"BOOK-HARDCOVER", "quantity":500,
                      "workOrderCode":"WO-FROM-PRINTINGSHOP-Q-007", "workOrderRequested":true}}

Log lines observed:
  [plugin:printing-shop] quote Q-007: publishing WorkOrderRequestedEvent (code=WO-FROM-PRINTINGSHOP-Q-007, item=BOOK-HARDCOVER, qty=500)
  [production] WorkOrderRequestedEvent creating work order 'WO-FROM-PRINTINGSHOP-Q-007' for item 'BOOK-HARDCOVER' x 500 (source='plugin:printing-shop:quote:Q-007')

# 3) verify the WorkOrder now exists in pbc-production
$ curl /api/v1/production/work-orders
  → [{"id":"029c2482-...", "code":"WO-FROM-PRINTINGSHOP-Q-007", "outputItemCode":"BOOK-HARDCOVER",
      "outputQuantity":500.0, "status":"DRAFT", "sourceSalesOrderCode":null, "inputs":[], "ext":{}}]

# 4) run the SAME BPMN a second time — verify idempotent
$ curl -X POST /api/v1/workflow/process-instances {same body as above}
  → 201 (process ends, workOrderRequested=true, new event published + delivered)
$ curl /api/v1/production/work-orders
  → count=1, still only WO-FROM-PRINTINGSHOP-Q-007
```

Every single step runs through an api.v1 public surface. No framework core code knows the printing-shop plug-in exists; no plug-in code knows pbc-production exists. They meet on the event bus, and the outbox guarantees the delivery.

## Tests

- 3 new tests in `pbc-production/.../WorkOrderRequestedSubscriberTest`:
  * `subscribe registers one listener for WorkOrderRequestedEvent`
  * `handle creates a work order from the event fields` — captures the `CreateWorkOrderCommand` and asserts every field
  * `handle short-circuits when a work order with that code already exists` — proves the idempotent branch
- Total framework unit tests: 278 (was 275), all green.

## What this unblocks

- **Richer multi-step BPMNs** in the plug-in that chain plate approval + quote → work order + production start + completion.
- **Plug-in-owned Quote entity** — the printing-shop plug-in can now introduce a `plugin_printingshop__quote` table via its own Liquibase changelog and have its HTTP endpoint create quotes that kick off the quote-to-work-order workflow automatically (or on operator confirm).
- **pbc-production routings/operations (v3)** — each operation becomes a BPMN step, potentially driven by plug-ins contributing custom steps via the same TaskHandler + event seam.
- **Second reference plug-in** — any new customer plug-in can publish `WorkOrderRequestedEvent` from its own workflows without any framework change.

## Non-goals (parking lot)

- The handler publishes but does not also read pbc-production state back. A future "wait for WO completion" BPMN step could subscribe to `WorkOrderCompletedEvent` inside a user-task + signal flow, but the engine's signal/correlation machinery isn't wired to plug-ins yet.
- Quote entity + HTTP + real business logic. REF.1 proves the cross-PBC event seam; the richer quote lifecycle is a separate chunk that can layer on top of this.
- Transactional rollback integration test. The synchronous bus + `Propagation.MANDATORY` guarantees it, but an explicit test that a subscriber throw rolls back both the ledger-adjacent writes and the Flowable process state would be worth adding with a real test container run.

-
Proves out the "handler-side plug-in context access" pattern: a plug-in's TaskHandler captures the PluginContext through its constructor when the plug-in instantiates it inside `start(context)`, and then uses `context.jdbc`, `context.logger`, etc. from inside `execute` the same way the plug-in's HTTP lambdas do. Zero new api.v1 surface was needed — the plug-in decides whether a handler takes a context or not, and a pure handler simply omits the constructor parameter.

## Why this pattern and not a richer TaskContext

The alternatives were:

(a) add a `PluginContext` field (or a narrowed projection of it) to api.v1 `TaskContext`, threading the host-owned context through the workflow engine
(b) capture the context in the plug-in's handler constructor — this commit

Option (a) would have forced every TaskHandler author — core PBC handlers too, not just plug-in ones — to reason about a per-plug-in context that wouldn't make sense for core PBCs. It would also have coupled api.v1 to the plug-in machinery in a way that leaks into every handler implementation.

Option (b) is a pure plug-in-local pattern. A pure handler:

    class PureHandler : TaskHandler { ... }

and a stateful handler look identical except for one constructor parameter:

    class StatefulHandler(private val context: PluginContext) : TaskHandler {
        override fun execute(task, ctx) {
            context.jdbc.update(...)
            context.logger.info(...)
        }
    }

and both register the same way:

    context.taskHandlers.register(PureHandler())
    context.taskHandlers.register(StatefulHandler(context))

The framework's `TaskHandlerRegistry`, `DispatchingJavaDelegate`, and `DelegateTaskContext` stay unchanged. Plug-in teardown still strips handlers via `unregisterAllByOwner(pluginId)` because registration still happens through the scoped registrar inside `start(context)`.

## What PlateApprovalTaskHandler now does

Before this commit, the handler was a pure function that wrote `plateApproved=true` + metadata to the process variables and didn't touch the DB.
Now it:

1. Parses `plateId` out of the process variables as a UUID (fail-fast on non-UUID).
2. Calls `context.jdbc.update` to set the plate row's `status` from 'DRAFT' to 'APPROVED', guarded by an explicit `WHERE id=:id AND status='DRAFT'`. The guard makes a second invocation a no-op (rowsUpdated=0) rather than silently overwriting a later status.
3. Logs via the plug-in's PluginLogger — "plate {id} approved by user:admin (rows updated: 1)". Log lines are tagged `[plugin:printing-shop]` by the framework's Slf4jPluginLogger.
4. Emits process output variables: `plateApproved=true`, `plateId=<uuid>`, `approvedBy=<principal label>`, `approvedAt=<instant>`, and `rowsUpdated=<count>` so callers can see whether the approval actually changed state.

## Smoke test (fresh DB, full end-to-end loop)

```
POST /api/v1/plugins/printing-shop/plates
  {"code":"PLATE-042","name":"Red cover plate","widthMm":320,"heightMm":480}
  → 201 {"id":"0bf577c9-...","status":"DRAFT",...}

POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"plugin-printing-shop-plate-approval",
   "variables":{"plateId":"0bf577c9-..."}}
  → 201 {"ended":true,
         "variables":{"plateId":"0bf577c9-...", "rowsUpdated":1, "approvedBy":"user:admin",
                      "approvedAt":"2026-04-09T05:01:01.779369Z", "plateApproved":true}}

GET /api/v1/plugins/printing-shop/plates/0bf577c9-...
  → 200 {"id":"0bf577c9-...","status":"APPROVED", ...}
  ^^^^ note: was DRAFT a moment ago, NOW APPROVED — persisted to
       plugin_printingshop__plate via the handler's context.jdbc.update

POST /api/v1/workflow/process-instances (same plateId, second run)
  → 201 {"variables":{"rowsUpdated":0,"plateApproved":true,...}}
  ^^^^ idempotent guard: the WHERE status='DRAFT' clause prevents
       double-updates, rowsUpdated=0 on the re-run
```

This is the first cross-cutting end-to-end business flow in the framework driven entirely through the public surfaces:

1. Plug-in HTTP endpoint writes a domain row
2. Workflow HTTP endpoint starts a BPMN process
3. Plug-in-contributed BPMN (deployed via PluginProcessDeployer) routes to a plug-in-contributed TaskHandler (registered via context.taskHandlers)
4. Handler mutates the same plug-in-owned table via context.jdbc
5. Plug-in HTTP endpoint reads the new state

Every step uses only api.v1. Zero framework core code knows the plug-in exists.

## Non-goals (parking lot)

- Emitting an event from the handler. The next step in the plug-in workflow story is for a handler to publish a domain event via `context.eventBus.publish(...)` so OTHER subscribers (e.g. pbc-production waiting on a PlateApproved event) can react. This commit stays narrow: the handler only mutates its own plug-in state.
- Transaction scope of the handler's DB write relative to the Flowable engine's process-state persistence. Today both go through the host DataSource and Spring transaction manager that Flowable auto-configures, so a handler throw rolls everything back — verified by walking the code path. An explicit test of transactional rollback lands with REF.1 when the handler takes on real business logic.
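The idempotent UPDATE guard from the PlateApprovalTaskHandler description can be sketched as follows. The named-parameter shape of `context.jdbc.update` is an assumption; the point is the status predicate in the WHERE clause:

```kotlin
// Sketch — jdbc surface assumed; only the SQL guard is the point.
val rowsUpdated = context.jdbc.update(
    """
    UPDATE plugin_printingshop__plate
       SET status = 'APPROVED'
     WHERE id = :id
       AND status = 'DRAFT'   -- a re-run matches 0 rows instead of clobbering a later status
    """,
    mapOf("id" to plateId),
)
// rowsUpdated == 1 on the first run, 0 on any re-run, so the handler can
// report via process variables whether this execution actually changed state.
```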