-
CLAUDE.md "Repository state" was stale (18→25 subprojects, 246→356 tests, 8→10 PBCs, 9→12 platform services). PROGRESS.md "What's not yet live" still listed Flowable, JasperReports, file store, job scheduler, OIDC, and web SPA as missing — all are live. "Current stage" paragraph updated to reflect 10 PBCs and 12 services. "How to run" section updated to remove DemoSeedRunner reference (moved to demo branch).
-
Tier 1 customization comes alive in the SPA: custom fields declared in YAML metadata now render automatically in create forms without any compile-time knowledge of the field.

New component: DynamicExtFields
- Fetches custom field declarations from the existing /api/v1/_meta/metadata/custom-fields/{entityName} endpoint
- Renders one input per declared field, type-matched: string → text, integer → number (step=1), decimal/money/quantity → number (step=0.01), boolean → checkbox, date → date picker, dateTime → datetime-local, enum → select dropdown, uuid → text
- Labels resolve from labelTranslations using the active locale (i18n integration)
- Required fields show a red asterisk
- Values are collected in the ext map and sent with the create request

Wired into: CreateItemPage (entityName="Item"), CreatePartnerPage (entityName="Partner"). Both now show a "Custom fields" section below the static fields when the entity has custom field declarations in metadata.

No new backend code — the existing /api/v1/_meta/metadata/custom-fields endpoint already returns exactly the shape the component needs. This is P3.1: the runtime form renderer for Tier 1 customization.
-
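The type-matching rule described above can be sketched as a pure function. This is an illustrative sketch only; the names (`CustomFieldType`, `InputSpec`, `inputFor`) are hypothetical and not the component's real internals.

```typescript
// Hypothetical sketch of DynamicExtFields' type-matching rule.
type CustomFieldType =
  | "string" | "integer" | "decimal" | "money" | "quantity"
  | "boolean" | "date" | "dateTime" | "enum" | "uuid";

interface InputSpec {
  element: "input" | "select"; // enum renders a select dropdown
  inputType?: string;          // HTML input type attribute
  step?: string;               // numeric granularity
}

// One input per declared field, type-matched as the commit describes.
function inputFor(type: CustomFieldType): InputSpec {
  switch (type) {
    case "string":
    case "uuid":
      return { element: "input", inputType: "text" };
    case "integer":
      return { element: "input", inputType: "number", step: "1" };
    case "decimal":
    case "money":
    case "quantity":
      return { element: "input", inputType: "number", step: "0.01" };
    case "boolean":
      return { element: "input", inputType: "checkbox" };
    case "date":
      return { element: "input", inputType: "date" };
    case "dateTime":
      return { element: "input", inputType: "datetime-local" };
    case "enum":
      return { element: "select" };
  }
}
```

The real component additionally resolves labels and collects values into the ext map; only the dispatch-on-declared-type idea is shown here.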
Adds edit pages for items and partners — the two entities operators update most often. Each form loads the existing record, pre-fills all editable fields, and PATCHes on save. Code and baseUomCode are read-only after creation (by design).

New pages:
- EditItemPage: name, type, description, active toggle
- EditPartnerPage: name, type, email, phone

API client: catalog.updateItem, partners.update (PATCH). List pages: item/partner codes are now clickable links to the edit page instead of plain text. Routes wired at /items/:id/edit and /partners/:id/edit.
-
Adds client-side i18n infrastructure to the SPA (CLAUDE.md guardrail #6: global/i18n from day one).

New files:
- i18n/messages.ts: flat key-value message bundles for en-US and zh-CN. Keys use dot-notation (nav.*, action.*, status.*, label.*). ~80 keys per locale covering navigation, actions, status badges, and common labels.
- i18n/LocaleContext.tsx: LocaleProvider + useT() hook + useLocale() hook. Active locale stored in localStorage, defaults to the browser's navigator.language. Auto-detects zh-* → zh-CN.

Wired into the SPA:
- main.tsx wraps the app in <LocaleProvider>
- AppLayout sidebar uses t(key) for every heading and item
- Top bar has a locale dropdown (English / 中文) that switches the entire sidebar + status labels instantly
- StatusBadge uses t('status.DRAFT') etc. so statuses render as '草稿' / '已确认' / '已发货' in Chinese

The i18n system is intentionally simple: plain strings, no ICU MessageFormat patterns (those live on the backend via ICU4J). A future chunk can adopt @formatjs/intl-messageformat if the SPA needs plural/gender/number formatting client-side.

Not yet translated: page-level titles and form labels (they still use hard-coded English). The infrastructure is in place; translating individual pages is incremental.
-
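The locale-resolution and flat-bundle lookup rules above can be sketched as two small functions. This is a hedged sketch: `resolveLocale` and the inline bundle are illustrative, not the real LocaleContext.tsx code, which reads localStorage and navigator.language directly.

```typescript
// Sketch of the resolution rule: a stored choice wins, otherwise the
// browser tag is used, and any zh-* tag maps to zh-CN.
type Locale = "en-US" | "zh-CN";

function resolveLocale(stored: string | null, browserTag: string): Locale {
  const candidate = stored ?? browserTag;
  if (candidate.toLowerCase().startsWith("zh")) return "zh-CN";
  return "en-US";
}

// A t() lookup over flat dot-notation bundles, as messages.ts uses.
// Only one key per locale is shown here for brevity.
const messages: Record<Locale, Record<string, string>> = {
  "en-US": { "status.DRAFT": "Draft" },
  "zh-CN": { "status.DRAFT": "草稿" },
};

function t(locale: Locale, key: string): string {
  return messages[locale][key] ?? key; // fall back to the key itself
}
```

Falling back to the key itself keeps missing translations visible but non-fatal, which suits the "translate pages incrementally" plan.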
Removes demo-specific code from the main (framework) branch so main stays a clean, generic foundation. The demo branch retains these files from before this commit.

Removed from main:
- DemoSeedRunner.kt (printing-company seed data)
- vibeerp.demo.seed config in application-dev.yaml
- EBC-PP-001 demo walkthrough in DashboardPage

The dashboard now shows a generic "Getting started" guide that walks operators through setting up master data, creating orders, and walking the buy-make-sell loop — without referencing any specific customer or seed data.

To run the printing-company demo, use the demo worktree:

    cd ~/Desktop/vibe_erp_demo
    ./gradlew :distribution:bootRun
-
JournalEntriesPage now renders expandable debit/credit lines for each entry. Click a row to toggle the line detail view showing account code, DR/CR amounts, and description. Types updated: JournalEntry now includes lines array with JournalEntryLine (lineNo, accountCode, debit, credit, description). This makes the GL growth visible in the demo — confirming an SO shows the AR entry with DR 1100 / CR 4100 balanced lines inline.
-
Adds JournalEntryLine child entity with debit/credit legs per account, completing the pbc-finance GL foundation.

Domain:
- JournalEntryLine entity: lineNo, accountCode, debit (>=0), credit (>=0), description. FK to parent JournalEntry with CASCADE delete. Unique (journal_entry_id, line_no).
- JournalEntry gains @OneToMany lines collection (EAGER fetch, ordered by lineNo).

Event subscribers now write balanced double-entry lines:
- SalesOrderConfirmed: DR 1100 (AR), CR 4100 (Revenue)
- PurchaseOrderConfirmed: DR 1200 (Inventory), CR 2100 (AP)

The parent JournalEntry.amount is retained as a denormalized summary; the lines are the source of truth for accounting. The seeded account codes (1100, 1200, 2100, 4100, 5100) match the chart from the previous commit. JournalEntryController.toResponse() now includes the lines array so the SPA can display debit/credit legs inline.

Schema: 004-finance-entry-lines.xml adds finance__journal_entry_line with FK, unique (entry, lineNo), non-negative check on dr/cr.

Smoke verified on fresh Postgres:
- Confirm SO-2026-0001 -> AR entry with 2 lines: DR 1100 $1950, CR 4100 $1950
- Confirm PO-2026-0001 -> AP entry with 2 lines: DR 1200 $2550, CR 2100 $2550
- Both entries balanced (sum DR = sum CR)

Caught by smoke: LazyInitializationException on the new lines collection — fixed by switching FetchType from LAZY to EAGER (entries have 2-4 lines max, eager is appropriate).
-
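The balanced-entry invariant (sum DR = sum CR) can be illustrated with a small check over the lines shape the controller now returns. A minimal sketch: amounts are plain numbers here for brevity, whereas the real backend uses decimal types, and `isBalanced` is an illustrative helper, not framework code.

```typescript
// Shape mirroring the lines array in the journal-entry response.
interface JournalEntryLine {
  lineNo: number;
  accountCode: string;
  debit: number;  // >= 0
  credit: number; // >= 0
}

// Double-entry invariant: total debits must equal total credits.
function isBalanced(lines: JournalEntryLine[]): boolean {
  const totalDebit = lines.reduce((sum, l) => sum + l.debit, 0);
  const totalCredit = lines.reduce((sum, l) => sum + l.credit, 0);
  return totalDebit === totalCredit;
}

// The AR entry written on SalesOrderConfirmed, per the subscribers above:
const arEntry: JournalEntryLine[] = [
  { lineNo: 1, accountCode: "1100", debit: 1950, credit: 0 }, // DR AR
  { lineNo: 2, accountCode: "4100", debit: 0, credit: 1950 }, // CR Revenue
];
```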
First step of pbc-finance GL growth: the chart of accounts.

Backend:
- Account entity (code, name, accountType: ASSET/LIABILITY/EQUITY/REVENUE/EXPENSE, description, active)
- AccountJpaRepository + AccountService (list, findById, findByCode, create with duplicate-code guard)
- AccountController at /api/v1/finance/accounts (GET list, GET by id, POST create). Permission-gated: finance.account.read, .create.
- Liquibase 003-finance-accounts.xml: table + unique code index + 6 seeded accounts (1000 Cash, 1100 AR, 1200 Inventory, 2100 AP, 4100 Sales Revenue, 5100 COGS)
- finance.yml updated: Account entity + 2 permissions + menu entry

SPA:
- AccountsPage with sortable list + inline create form
- finance.listAccounts + finance.createAccount in typed API client
- Sidebar: "Chart of Accounts" above "Journal Entries" in Finance
- Route /accounts wired in App.tsx + SpaController + SecurityConfig

This is the foundation for the next step (JournalEntryLine child entity with per-account debit/credit legs + balanced-entry validation). The seeded chart covers the 6 accounts the existing event subscribers will reference once the double-entry lines land.
-
Pins today's 5 feature commits:
- 25353240 SPA CRUD forms (item, partner, PO, WO)
- 82c5267d R2 identity screens (users, roles, assignment)
- c2fab13b S3 file backend (P1.9 complete)
- 6ad72c7c OIDC federation (P4.2 complete)
- 17771894 SPA fill (create location, adjust stock)

P1.9 promoted from Partial to DONE. P4.2 promoted from Pending to DONE. R2 promoted from Pending to DONE. R4 updated to reflect create forms for every manageable entity. Version bumped 0.29.0 -> 0.30.0-SNAPSHOT.
-
Two more operator-facing forms:
- CreateLocationPage: code, name, type (WAREHOUSE/BIN/VIRTUAL)
- AdjustStockPage: item dropdown, location dropdown, absolute quantity. Creates the balance row if absent; sets it to the given value if present. Shows the resulting balance inline.

API client: inventory.createLocation, inventory.adjustBalance. Locations list gets "+ New Location"; Balances list gets "Adjust Stock". Routes wired at /locations/new and /balances/adjust.

With this commit, every PBC entity that operators need to create or manage has a SPA form: items, partners, locations, stock balances, sales orders, purchase orders, work orders (with BOM + routing), users, and roles. The only create-less entities are journal entries (read-only, event-driven) and stock movements (append-only ledger).
-
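The absolute-quantity semantics of the stock adjustment (create if absent, overwrite if present) amount to an upsert. A minimal sketch under stated assumptions: the Map-keyed store and the `adjustBalance` name stand in for the real balance table and API; they are not the actual implementation.

```typescript
// Illustrative upsert: absolute set, not a delta applied to the old value.
type BalanceKey = string; // "<itemCode>@<locationCode>"

function adjustBalance(
  balances: Map<BalanceKey, number>,
  itemCode: string,
  locationCode: string,
  quantity: number,
): number {
  const key = `${itemCode}@${locationCode}`;
  balances.set(key, quantity); // create-or-overwrite
  return balances.get(key)!;   // the resulting balance shown inline
}
```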
The framework now supports federated authentication: operators can configure an external OIDC provider (Keycloak, Auth0, or any OIDC-compliant issuer) and the API accepts JWTs from both the built-in auth (/api/v1/auth/login, HS256) and the OIDC provider (RS256, JWKS auto-discovered).

Opt-in via vibeerp.security.oidc.issuer-uri. When blank (default), only built-in auth works — exactly the pre-P4.2 behavior. When set, the JwtDecoder becomes a composite: tries the built-in HS256 decoder first (cheap, local HMAC), falls back to the OIDC decoder (RS256, cached JWKS fetch from the provider's .well-known endpoint).

Claim mapping: PrincipalContextFilter now handles both formats:
- Built-in: sub=UUID, username=<claim>, roles=<flat array>
- OIDC/Keycloak: sub=OIDC subject, preferred_username=<claim>, realm_access.roles=<nested array>

Claim names are configurable via vibeerp.security.oidc.username-claim and roles-claim for non-Keycloak providers.

New files:
- OidcProperties.kt: config properties class for the OIDC block

Modified files:
- JwtConfiguration.kt: composite decoder, now takes OidcProperties
- PrincipalContextFilter.kt: dual claim resolution (built-in first, OIDC fallback), now takes OidcProperties
- JwtRoundTripTest.kt: updated to pass OidcProperties (defaults)
- application.yaml: OIDC config block with env-var interpolation

No new dependencies — uses Spring Security's existing JwtDecoders.fromIssuerLocation() which is already on the classpath via spring-boot-starter-oauth2-resource-server.
-
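The composite-decoder ordering described above (cheap local HS256 first, OIDC fallback only when configured) is a general pattern. The real code is Spring Security's JwtDecoder in Kotlin; this TypeScript sketch only illustrates the ordering, with hypothetical `Decoder` and `compositeDecoder` names.

```typescript
// A decoder returns the decoded claims, or null when it cannot
// validate the token.
type Decoder = (token: string) => { sub: string } | null;

function compositeDecoder(builtIn: Decoder, oidc: Decoder | null): Decoder {
  return (token) => {
    const local = builtIn(token);
    if (local !== null) return local; // cheap local HMAC check wins
    if (oidc === null) return null;   // OIDC not configured: pre-P4.2 behavior
    return oidc(token);               // RS256 path, JWKS-backed in the real code
  };
}
```

Note the real Spring decoders signal failure by throwing rather than returning null; null is used here to keep the sketch self-contained.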
Adds S3FileStorage alongside the existing LocalDiskFileStorage, selected at boot by vibeerp.files.backend (local or s3). The local backend is the default (matchIfMissing=true) so existing deployments are unaffected. Setting backend=s3 activates the S3 backend with its own config block. Works with AWS S3, MinIO, DigitalOcean Spaces, or any S3-compatible object store via the endpoint-url override.

The S3 client is lazy-initialized on first use so the bean loads even when S3 is unreachable at boot time (useful for tests and for the local-disk default path where the S3 bean is never instantiated).

Configuration (vibeerp.files.s3.*):
- bucket (required when backend=s3)
- region (default: us-east-1)
- endpoint-url (optional; for MinIO and non-AWS services)
- access-key + secret-key (optional; falls back to the AWS DefaultCredentialsProvider chain)
- key-prefix (optional; namespaces objects so multiple instances can share one bucket)

Implementation notes:
- put() reads the stream into a byte array for S3 (S3 requires Content-Length up front; chunked upload is a future optimization for large files)
- get() returns the S3 response InputStream directly; the caller must close it (same contract as the local backend)
- list() paginates via ContinuationToken for buckets with >1000 objects per prefix
- Content-type is stored as native S3 object metadata (no sidecar .meta file, unlike the local backend)

Dependency: software.amazon.awssdk:s3:2.28.6 (AWS SDK v2) added to libs.versions.toml and platform-files build.gradle.kts. LocalDiskFileStorage gained @ConditionalOnProperty(havingValue = "local", matchIfMissing = true) so it's the default but doesn't conflict when backend=s3. application.yaml updated with a commented-out S3 config block documenting all available properties.
-
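The ContinuationToken loop in list() follows S3's standard pagination shape: each page may carry a token for the next one, and you loop until no token comes back. A sketch against a hypothetical page-fetching function rather than the real AWS SDK call (the actual code issues ListObjectsV2 requests in Kotlin):

```typescript
// One page of results; nextToken is present while more pages remain,
// mirroring S3's NextContinuationToken.
interface Page {
  keys: string[];
  nextToken?: string;
}

// Accumulate every key across pages. fetchPage is a stand-in for the
// ListObjectsV2 call with ContinuationToken set from the prior page.
function listAll(fetchPage: (token?: string) => Page): string[] {
  const keys: string[] = [];
  let token: string | undefined = undefined;
  do {
    const page = fetchPage(token);
    keys.push(...page.keys);
    token = page.nextToken;
  } while (token !== undefined);
  return keys;
}
```

The do/while shape matters: the first request is made with no token, and the loop terminates only when a page omits the token, which is how buckets with more than 1000 objects per prefix are fully enumerated.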
Closes the R2 gap: an admin can now manage users and roles entirely from the SPA without touching curl or Swagger UI.

Backend (pbc-identity):
- New RoleService with createRole, assignRole, revokeRole, findUserRoleCodes, listRoles. Each method validates existence + idempotency (duplicate assignment rejected, missing role rejected).
- New RoleController at /api/v1/identity/roles (CRUD) + /api/v1/identity/users/{userId}/roles/{roleCode} (POST assign, DELETE revoke). All permission-gated: identity.role.read, identity.role.create, identity.role.assign.
- identity.yml updated: added the identity.role.create permission.

SPA (web/):
- UsersPage — list with username link to detail, "+ New User"
- CreateUserPage — username, display name, email form
- UserDetailPage — shows user info + role toggle list. Each role has an Assign/Revoke button that takes effect on the user's next login (the JWT carries roles from login time).
- RolesPage — list with inline create form (code + name)
- Sidebar gains a "System" section with Users + Roles links
- API client + types: identity.listUsers, getUser, createUser, listRoles, createRole, getUserRoles, assignRole, revokeRole

Infrastructure:
- SpaController: added /users/** and /roles/** forwarding
- SecurityConfiguration: added /users/** and /roles/** to the SPA permitAll block
-
Extends the R1 SPA with create forms for the four entities operators interact with most. Each page follows the same pattern proven by CreateSalesOrderPage: a card-scoped form with dropdowns populated from the API, inline validation, and a redirect to the detail or list page on success.

New pages:
- CreateItemPage — code, name, type (GOOD/SERVICE/DIGITAL), UoM dropdown populated from /api/v1/catalog/uoms
- CreatePartnerPage — code, name, type (CUSTOMER/SUPPLIER/BOTH), optional email + phone
- CreatePurchaseOrderPage — symmetric to CreateSalesOrderPage; supplier dropdown filtered to SUPPLIER/BOTH partners, optional expected date, dynamic line items
- CreateWorkOrderPage — output item + quantity + optional due date, dynamic BOM inputs (item + qty/unit + source location dropdown), dynamic routing operations (op code + work center + std minutes). The most complex form in the SPA — matches the EBC-PP-001 work order creation flow.

API client additions: catalog.createItem, partners.create, purchaseOrders.create, production.createWorkOrder — each a typed wrapper around POST to the corresponding endpoint.

List pages updated: Items, Partners, Purchase Orders, Work Orders all now show a "+ New" button in the PageHeader that links to the create form.

Routes wired: /items/new, /partners/new, /purchase-orders/new, /work-orders/new — all covered by the existing SpaController wildcard patterns and SecurityConfiguration permitAll rules.
-
Reworks the demo seed and SPA to match the reference customer's work-order management process (EBC-PP-001 from raw/ docs).

Demo seed (DemoSeedRunner):
- 7 printing-specific items: paper stock, 4-color ink, CTP plates, lamination film, business cards, brochures, posters
- 4 partners: 2 customers (Wucai Advertising, Globe Marketing), 2 suppliers (Huazhong Paper, InkPro Industries)
- 2 warehouses with opening stock for all items
- Pre-seeded WO-PRINT-0001 with full BOM (3 inputs: paper + ink + CTP plates from WH-RAW) and 3-step routing (CTP plate-making @ CTP-ROOM-01 -> offset printing @ PRESS-A -> post-press finishing @ BIND-01) matching EBC-PP-001 steps C-010/C-040
- 2 DRAFT sales orders: SO-2026-0001 (100x business cards + 500x brochures, $1950), SO-2026-0002 (200x posters, $760)
- 1 DRAFT purchase order: PO-2026-0001 (10000x paper + 50kg ink, $2550) from Huazhong Paper

SPA additions:
- New CreateSalesOrderPage with customer dropdown, item selector, dynamic line add/remove, quantity + price inputs. Navigates to the detail page on creation.
- "+ New Order" button on the SalesOrdersPage header
- Dashboard "Try the demo" section rewritten to walk the EBC-PP-001 flow: create SO -> confirm (auto-spawns WOs) -> walk WO routing -> complete (material issue + production receipt) -> ship SO (stock debit + AR settle)
- salesOrders.create() added to the typed API client

The key demo beat: confirming SO-2026-0001 auto-spawns WO-FROM-SO-2026-0001-L1 and -L2 via SalesOrderConfirmedSubscriber (EBC-PP-001 step B-010). The pre-seeded WO-PRINT-0001 shows the full BOM + routing story separately. Together they demonstrate that the framework expresses the customer's production workflow through configuration, not code.

Smoke verified on fresh Postgres: all 7 items seeded, WO with 3 BOM + 3 ops created, SO confirm spawns 2 WOs with source traceability, SPA /sales-orders/new renders and creates orders.
-
Updates the "at a glance" row to v0.29.0-SNAPSHOT + fc62d6d7, bumps the Phase 6 R1 row to DONE with the commit ref and an overview of what landed (Gradle wrapper, SpaController, security reordering, bundled fat-jar, 16 pages), and rewrites the "How to run" section to walk the click-through demo instead of just curl. README's status table updated to reflect 10/10 PBCs + 356 tests + SPA status; building section now mentions that `./gradlew build` compiles the SPA too. No code changes.
-
The R1 SPA chunk added a :web Gradle subproject whose npmBuild Exec task runs Vite during :distribution:bootJar. The Dockerfile's build stage uses eclipse-temurin:21-jdk-alpine, which has no node/npm, so the docker image CI job fails with:

    process "/bin/sh -c chmod +x ./gradlew && ./gradlew :distribution:bootJar --no-daemon" did not complete successfully: exit code: 1

Fix: apk add --no-cache nodejs npm before the Gradle build. Alpine 3.21 ships node v22 + npm 10, which Vite 5 + React 18 handle fine. The runtime stage stays a pure JRE image — node is only needed at build time and never makes it into the shipping container.
-
First runnable end-to-end demo: open the browser, log in, click through every PBC, and walk a sales order DRAFT → CONFIRMED → SHIPPED. Stock balances drop, the SALES_SHIPMENT row appears in the ledger, and the AR journal entry settles — all visible in the SPA without touching curl. Bumps version to 0.29.0-SNAPSHOT.

What landed
-----------

* New `:web` Gradle subproject — Vite + React 18 + TypeScript + Tailwind 3.4. The Gradle wrapper is two `Exec` tasks (`npmInstall`, `npmBuild`) with proper inputs/outputs declared for incremental builds. Deliberately no node-gradle plugin — one less moving piece.
* SPA architecture: hand-written typed REST client over `fetch` (auth header injection + 401 handler), AuthContext that decodes the JWT for display, ProtectedRoute, AppLayout with sidebar grouped by PBC, 16 page components covering the full v1 surface (Items, UoMs, Partners, Locations, Stock Balances + Movements, Sales Orders + detail w/ confirm/ship/cancel, Purchase Orders + detail w/ confirm/receive/cancel, Work Orders + detail w/ start/complete, Shop-Floor dashboard with 5s polling, Journal Entries). 211 KB JS / 21 KB CSS gzipped.
* Sales-order detail page: confirm/ship/cancel verbs each refresh the order, the (SO-filtered) movements list, and the (SO-filtered) journal entries — so an operator watches the ledger row appear and the AR row settle in real time after a single click. Same pattern on the purchase-order detail page for the AP/RECEIPT side.
* Shop-floor dashboard polls /api/v1/production/work-orders/shop-floor every 5s and renders one card per IN_PROGRESS WO with current operation, planned vs actual minutes (progress bar), and operations-completed.
* `:distribution` consumes the SPA dist via a normal Gradle outgoing/incoming configuration: `:web` exposes `webStaticBundle`, `:distribution`'s `bundleWebStatic` Sync task copies it into `${buildDir}/web-static/static/`, and that parent directory is added to the main resources source set so Spring Boot serves the SPA from `classpath:/static/` out of the same fat-jar. Single artifact, no nginx, no CORS.
* New `SpaController` in platform-bootstrap forwards every known SPA route prefix to `/index.html` so React Router's HTML5 history mode works on hard refresh / deep-link entry. Explicit list (12 prefixes) rather than catch-all so typoed API URLs still get an honest 404 instead of the SPA shell.
* SecurityConfiguration restructured: keeps the public allowlist for /api/v1/auth + /api/v1/_meta + /v3/api-docs + /swagger-ui, then `/api/**` is `.authenticated()`, then SPA static assets + every SPA route prefix are `.permitAll()`. The order is load-bearing — putting `.authenticated()` for /api/** BEFORE the SPA permitAll preserves the framework's "API is always authenticated" invariant even with the SPA bundled in the same fat-jar. The SPA bundle itself is just HTML+CSS+JS, so permitting it is correct; secrets are gated by /api/**.
* New `DemoSeedRunner` in `:distribution` (gated behind `vibeerp.demo.seed=true`, set in application-dev.yaml only). Idempotent — the runner short-circuits if its sentinel item (DEMO-PAPER-A4) already exists. Seeds 5 items, 2 warehouses, 4 partners, opening stock for every item, one open DEMO-SO-0001 (50× business cards + 20× brochures, $720), one open DEMO-PO-0001 (10000× paper, $400). Every row carries the DEMO- prefix so it's trivially distinguishable from hand-created data; a future "delete demo data" command has an obvious filter. Production deploys never set the property, so the @ConditionalOnProperty bean stays absent from the context.
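The typed REST client's two cross-cutting concerns, auth header injection and centralized 401 handling, can be sketched as small helpers. These names (`buildHeaders`, `handleResponse`) are illustrative; the real client wraps `fetch` directly, but the logic is the same.

```typescript
// Inject the bearer token only when one is present.
function buildHeaders(token: string | null): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
  };
  if (token !== null) headers["Authorization"] = `Bearer ${token}`;
  return headers;
}

// Centralized status handling: 401 triggers one shared callback
// (e.g. clear the stored JWT and redirect to login) instead of
// per-page error handling.
function handleResponse<T>(
  status: number,
  body: T,
  onUnauthorized: () => void,
): T {
  if (status === 401) {
    onUnauthorized();
    throw new Error("unauthorized");
  }
  return body;
}
```

Keeping both concerns in one place is what lets 16 page components stay free of auth plumbing.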
How to run
----------

    docker compose up -d db
    ./gradlew :distribution:bootRun
    open http://localhost:8080

    # Read the bootstrap admin password from the boot log,
    # log in as admin, and walk DEMO-SO-0001 through the
    # confirm + ship flow to see the buy-sell loop in the UI.

What was caught by the smoke test
---------------------------------

* TypeScript strict mode + `error: unknown` in React state → `{error && <X/>}` evaluates to `unknown` and JSX rejects it. Fixed by typing the state as `Error | null` and converting in catches with `e instanceof Error ? e : new Error(String(e))`. Affected 16 page files; the conversion is now uniform.
* DataTable's `T extends Record<string, unknown>` constraint was too restrictive for typed row interfaces; relaxed to unconstrained `T` with `(row as unknown as Record<…>)[key]` for the unkeyed cell read fallback.
* `vite.config.ts` needs `@types/node` for `node:path` + `__dirname`; added to devDependencies, and tsconfig.node.json declares `"types": ["node"]`.
* KDoc nested-comment trap (4th time): SpaController's KDoc had `/api/v1/...` in backticks; the `/*` inside backticks starts a nested block comment and breaks Kotlin compilation with "Unclosed comment". Rephrased to "the api-v1 prefix".
* SecurityConfiguration order: a draft version that put the SPA permit-all rules BEFORE `/api/**` authenticated() let unauthenticated requests reach API endpoints. Caught by an explicit smoke test (curl /api/v1/some-bogus-endpoint should return 401, not 404 from a missing static file). Reordered so /api/** authentication runs first.
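The `error: unknown` fix mentioned in the smoke-test notes, converting whatever a catch receives into a real `Error` before it lands in React state, can be written once as a helper. The `toError` name is illustrative; the conversion expression itself is the one the pages use.

```typescript
// Normalize an unknown caught value into Error so React state can be
// typed Error | null and JSX can render error.message safely.
function toError(e: unknown): Error {
  return e instanceof Error ? e : new Error(String(e));
}
```

With this, every page's catch block reduces to `setError(toError(e))`, and `{error && <X/>}` type-checks because `error` is `Error | null`, never `unknown`.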
End-to-end smoke (real Postgres, fresh DB)
------------------------------------------

- bootRun starts in 7.6s
- DemoSeedRunner reports "populating starter dataset… done"
- GET / returns the SPA HTML
- GET /sales-orders, /sales-orders/<uuid>, /journal-entries all return 200 (SPA shell — React Router takes over)
- GET /assets/index-*.js and /assets/index-*.css both 200
- GET /api/v1/some-bogus-endpoint → 401 (Spring Security rejects before any controller mapping)
- admin login via /api/v1/auth/login → 200 + JWT
- GET /catalog/items → 5 DEMO-* rows
- GET /partners/partners → 4 DEMO-* rows
- GET /inventory/locations → 2 DEMO-* warehouses
- GET /inventory/balances → 5 starting balances
- POST /orders/sales-orders/<id>/confirm → CONFIRMED; GET /finance/journal-entries shows AR POSTED 720 USD
- POST /orders/sales-orders/<id>/ship {"shippingLocationCode": "DEMO-WH-FG"} → SHIPPED; balances drop to 150 + 80; journal entry flips to SETTLED
- POST /orders/purchase-orders/<id>/confirm → CONFIRMED; AP POSTED 400 USD appears
- POST /orders/purchase-orders/<id>/receive → RECEIVED; PAPER balance grows from 5000 to 15000; AP row SETTLED
- 8 stock_movement rows in the ledger total
-
Closes the deferred TODO from the OpenAPI commit (11bef932): every endpoint a plug-in registers via `PluginContext.endpoints.register` now shows up in the OpenAPI spec alongside the host's @RestController operations. Downstream OpenAPI clients (R1 web SPA codegen, A1 MCP server tool catalog, operator-side Swagger UI browsing) can finally see the customer-specific HTTP surface.

**Problem.** springdoc's default scan walks `@RestController` beans on the host classpath. Plug-in endpoints are NOT registered that way — they live as lambdas on a single `PluginEndpointDispatcher` catch-all controller, so the default scan saw ONE dispatcher path and zero per-plug-in detail. The printing-shop plug-in's 8 endpoints were entirely invisible to the spec.

**Solution: an OpenApiCustomizer bean that queries the registry at spec-build time.**

1. `PluginEndpointRegistry.snapshot()` — new public read-only view. Returns a list of `(pluginId, method, path)` tuples without exposing the handler lambdas. Taken under the registry's intrinsic lock and copied out so callers can iterate without racing plug-in (un)registration. Ordered by registration order for determinism.
2. `PluginEndpointSummary` — new public data class in platform-plugins: `pluginId` + `method` + `path` plus a `fullPath()` helper that prepends `/api/v1/plugins/<pluginId>`.
3. `PluginEndpointsOpenApiCustomizer @Component` — new class in `platform-plugins/openapi/`. Implements `org.springdoc.core.customizers.OpenApiCustomizer`. On every `/v3/api-docs` request, iterates `registry.snapshot()`, groups by full path, and attaches a `PathItem` with one `Operation` per registered HTTP verb.
Each operation gets:
- A tag `"Plug-in: <pluginId>"` so Swagger UI groups every plug-in's surface under a header
- A `summary` + `description` naming the plug-in
- Path parameters auto-extracted from `{name}` segments
- A generic JSON request body for POST/PUT/PATCH
- A generic 200 response + 401/403/404 error responses
- The global bearerAuth security scheme (inherited from OpenApiConfiguration, no per-op annotation)

4. `compileOnly(libs.springdoc.openapi.starter.webmvc.ui)` in platform-plugins so `OpenApiCustomizer` is visible at compile time without dragging the full webmvc-ui bundle into platform-plugins' runtime classpath (distribution already pulls it in via platform-bootstrap's `implementation`).
5. `implementation(project(":platform:platform-plugins"))` added to platform-bootstrap so `OpenApiConfiguration` can inject the customizer by type and explicitly wire it to the `pluginEndpointsGroup()` `GroupedOpenApi` builder via `.addOpenApiCustomizer(...)`. **This is load-bearing** — springdoc's grouped specs run their own customizer pipeline and do NOT inherit top-level @Component OpenApiCustomizer beans. Caught at smoke-test time: initially the customizer populated the default /v3/api-docs but the /v3/api-docs/plugins group still showed only the dispatcher. Fix was making the customizer a constructor-injected dep of OpenApiConfiguration and calling `addOpenApiCustomizer` on the group builder.

**What a future chunk might add** (not in this one):
- Richer per-endpoint JSON Schema — v1 ships unconstrained `ObjectSchema` request/response bodies because the framework has no per-endpoint shape info at the registrar layer. A future `PluginEndpointRegistrar` overload accepting an explicit schema would let plug-ins document their payloads.
- Per-endpoint `@RequirePermission` surface — the dispatcher enforces permissions at runtime but doesn't record them on the registration, so the OpenAPI spec doesn't list them.
**KDoc `/**` trap caught.** A literal plug-in URL pattern in the customizer's KDoc (`/api/v1/plugins/{pluginId}/**`) tripped the Kotlin nested-comment parser again. Rephrased as "under the `/api/v1/plugins/{pluginId}` prefix" to sidestep. Third time this trap has bitten me — the workaround is in feedback memory.

**HttpMethod enum caught.** The customizer's initial `when` branch covered HEAD/OPTIONS, which don't exist in the api.v1 `HttpMethod` enum (only GET/POST/PUT/PATCH/DELETE). Dropped those branches.

**Smoke-tested end-to-end against real Postgres:**
- GET /v3/api-docs/plugins returns 7 paths:
  - /api/v1/plugins/printing-shop/echo/{name} GET
  - /api/v1/plugins/printing-shop/inks GET, POST
  - /api/v1/plugins/printing-shop/ping GET
  - /api/v1/plugins/printing-shop/plates GET, POST
  - /api/v1/plugins/printing-shop/plates/{id} GET
  - /api/v1/plugins/printing-shop/plates/{id}/generate-quote-pdf POST
  - /api/v1/plugins/{pluginId}/** (dispatcher fallback)

  Before this chunk: only the dispatcher fallback (1 path).
- Top-level /v3/api-docs now also includes the 6 printing-shop paths it previously didn't.
- All 7 printing-shop endpoints remain functional at the real dispatcher (no behavior change — this is a documentation-only enhancement).

24 modules, 355 unit tests, all green.
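The `{name}`-segment extraction the customizer performs when building path parameters is a one-regex job. A sketch in TypeScript (the real implementation is Kotlin; the function name and regex are illustrative):

```typescript
// Pull every {param} placeholder out of a registered path so each one
// can become an OpenAPI path parameter on the generated Operation.
function pathParams(path: string): string[] {
  const params: string[] = [];
  const re = /\{([^}]+)\}/g;
  let match: RegExpExecArray | null;
  while ((match = re.exec(path)) !== null) {
    params.push(match[1]); // capture group: the name between the braces
  }
  return params;
}
```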
-
Adds 15 GroupedOpenApi beans that split the single giant OpenAPI spec into per-PBC + per-platform-module focused specs selectable from Swagger UI's top-right "Select a definition" dropdown. No @RestController changes — all groups are defined by URL prefix in platform-bootstrap, so adding a new PBC means touching exactly this file (plus the controller itself). Each group stays additive alongside the default /v3/api-docs.

**Groups shipped:**

Platform
- platform-core — /api/v1/auth/**, /api/v1/_meta/**
- platform-workflow — /api/v1/workflow/**
- platform-jobs — /api/v1/jobs/**
- platform-files — /api/v1/files/**
- platform-reports — /api/v1/reports/**

Core PBCs
- pbc-identity — /api/v1/identity/**
- pbc-catalog — /api/v1/catalog/**
- pbc-partners — /api/v1/partners/**
- pbc-inventory — /api/v1/inventory/**
- pbc-warehousing — /api/v1/warehousing/**
- pbc-orders — /api/v1/orders/** (sales + purchase together)
- pbc-production — /api/v1/production/**
- pbc-quality — /api/v1/quality/**
- pbc-finance — /api/v1/finance/**

Plug-in dispatcher
- plugins — /api/v1/plugins/**

**Why path-prefix grouping, not package-scan grouping.** Package-scan grouping would force OpenApiConfiguration to know every PBC's Kotlin package name and drift every time a PBC ships or a controller moves. Path-prefix grouping only shifts when `@RequestMapping` changes — which is already a breaking API change that would need review anyway. This keeps the control plane for grouping in one file while the routing stays in each controller.

**Why pbc-orders is one group, not split sales/purchase.** Both controllers share the `/api/v1/orders/` prefix, and sales / purchase are the same shape in practice — splitting them into two groups would just duplicate the dropdown entries. A future chunk can split if a real consumer asks for it.

**Primary group unchanged.** The default /v3/api-docs continues to return the full merged spec (every operation in one document).
The grouped specs are additive at /v3/api-docs/<group-name> and clients can pick whichever they need. Swagger UI defaults to showing the first group in the dropdown.

**Smoke-tested end-to-end against real Postgres:**
- GET /v3/api-docs/swagger-config returns 15 groups with human-readable display names
- Per-group path counts (confirming each group is focused):
  - pbc-production: 10 paths
  - pbc-catalog: 6 paths
  - pbc-orders: 12 paths
  - platform-core: 9 paths
  - platform-files: 3 paths
  - plugins: 1 path (dispatcher)
- Default /v3/api-docs continues to return the full spec.

24 modules, 355 unit tests, all green.
-
Closes a 15-commit-old TODO on MetaController and unifies the version story across /api/v1/_meta/info, /v3/api-docs, and the api-v1.jar manifest.

**Build metadata wiring.** `distribution/build.gradle.kts` now calls `buildInfo()` inside the `springBoot { }` block. This makes Spring Boot's Gradle plug-in write `META-INF/build-info.properties` into the bootJar at build time with group / artifact / version / build time pulled from `project.version` + timestamps. Spring Boot's `BuildInfoAutoConfiguration` then exposes a `BuildProperties` bean that injection points can consume.

**MetaController enriched.** Now injects:
- `ObjectProvider<BuildProperties>` — returns the real version (`0.28.0-SNAPSHOT`) and the build timestamp when packaged through the distribution bootJar; falls back to `0.0.0-test` inside a bare platform-bootstrap unit test classloader with no build-info file on the classpath.
- `Environment` — returns `spring.profiles.active` so a dashboard can distinguish "dev" from "staging" from a prod container that activates no profile.

The GET /api/v1/_meta/info response now carries:
- `name`, `apiVersion` — unchanged
- `implementationVersion` — from BuildProperties (was stuck at "0.1.0-SNAPSHOT" via an unreachable `javaClass.package` lookup)
- `buildTime` — ISO-8601 string from BuildProperties, null if the classpath has no build-info file
- `activeProfiles` — list of effective spring profiles

**OpenApiConfiguration now reads version from BuildProperties too.** Previously OPENAPI_INFO_VERSION was a hardcoded "v0.28.0" constant. Now it's injected via ObjectProvider<BuildProperties> with the same fallback pattern as MetaController.

A single version bump in gradle.properties now flows to:

    gradle.properties
    → Spring Boot's buildInfo()
    → build-info.properties (on the classpath)
    → BuildProperties bean
    → MetaController (/_meta/info)
    → OpenApiConfiguration (/v3/api-docs + Swagger UI)
    → api-v1.jar manifest (already wired)

No more hand-maintained version strings in code.
Bump `vibeerp.version` in gradle.properties and every display follows. **Version bump.** `gradle.properties` `vibeerp.version`: `0.1.0-SNAPSHOT` → `0.28.0-SNAPSHOT`. This matches the numeric label used on PROGRESS.md's "Latest version" row and carries a documentation comment explaining the propagation chain so the next person bumping it knows what to update alongside (just the one line + PROGRESS.md). **KDoc trap caught.** A literal `/api/v1/_meta/**` path pattern in MetaController's KDoc tripped the Kotlin nested-comment parser (`/**` starts a KDoc). Rephrased as "the whole `/api/v1/_meta` prefix" to sidestep the trap — same workaround I saved in feedback memory after the first time it bit me. **Smoke-tested end-to-end against real Postgres:** - GET /api/v1/_meta/info returns `{"implementationVersion": "0.28.0-SNAPSHOT", "buildTime": "2026-04-09T09:48:25.646Z", "activeProfiles": ["dev"]}` - GET /v3/api-docs `info.version` = "0.28.0-SNAPSHOT" (was the hardcoded "v0.28.0" constant before this chunk) - Single edit to gradle.properties propagates cleanly. 24 modules, 355 unit tests, all green. -
Adds self-introspection of the framework's REST surface via springdoc-openapi. Every @RestController method in the host application is now documented in a machine-readable OpenAPI 3 spec at /v3/api-docs and rendered for humans at /swagger-ui/index.html. This is the first step toward: - R1 (web SPA): OpenAPI codegen feeds a typed TypeScript client - A1 (MCP server): discoverable tool catalog - Operator debugging: browsable "what can this instance do" page **Dependency.** New `springdoc-openapi-starter-webmvc-ui` 2.6.0 added to platform-bootstrap (not distribution) because it ships @Configuration classes that need to run inside a full Spring Boot application context AND brings a Swagger UI WebJar. platform-bootstrap is the only module with a @SpringBootApplication anyway; pbc modules never depend on it, so plug-in classloaders stay clean and the OpenAPI scanner only sees host controllers. **Configuration.** New `OpenApiConfiguration` @Configuration in platform-bootstrap provides a single @Bean OpenAPI: - Title "vibe_erp", version v0.28.0 (hardcoded; moves to a build property when a real version header ships) - Description with a framework-level intro explaining the bearer-JWT auth model, the permission whitelist, and the fact that plug-in endpoints under /api/v1/plugins/{id}/** are NOT scanned (they are dynamically registered via PluginContext.endpoints on a single dispatcher controller; a future chunk may extend the spec at runtime). - One relative server entry ("/") so the spec works behind a reverse proxy without baking localhost into it. - bearerAuth security scheme (HTTP/bearer/JWT) applied globally via addSecurityItem, so every operation in the rendered UI shows a lock icon and the "Authorize" button accepts a raw JWT (Swagger adds the "Bearer " prefix itself). 
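A minimal sketch of what such a bean looks like with swagger-core's model classes; the real OpenApiConfiguration also carries the full description text:

```kotlin
import io.swagger.v3.oas.models.Components
import io.swagger.v3.oas.models.OpenAPI
import io.swagger.v3.oas.models.info.Info
import io.swagger.v3.oas.models.security.SecurityRequirement
import io.swagger.v3.oas.models.security.SecurityScheme
import io.swagger.v3.oas.models.servers.Server
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

// Sketch only — the framework description and the title/version strings
// beyond those quoted above are elided here.
@Configuration
class OpenApiConfigurationSketch {
    @Bean
    fun openApi(): OpenAPI = OpenAPI()
        .info(Info().title("vibe_erp").version("v0.28.0"))
        .addServersItem(Server().url("/")) // relative: works behind a reverse proxy
        .components(
            Components().addSecuritySchemes(
                "bearerAuth",
                SecurityScheme()
                    .type(SecurityScheme.Type.HTTP)
                    .scheme("bearer")
                    .bearerFormat("JWT")
            )
        )
        // Applied globally: every operation shows the lock icon in Swagger UI.
        .addSecurityItem(SecurityRequirement().addList("bearerAuth"))
}
```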
**Security whitelist.** SecurityConfiguration now permits three additional path patterns without authentication: - /v3/api-docs/** — the generated JSON spec - /swagger-ui/** — the Swagger UI static assets + index - /swagger-ui.html — the legacy path (redirects to the above) The data still requires a valid JWT: an unauthenticated "Try it out" call from the Swagger UI against a pbc endpoint returns 401 exactly like a curl would. **Why not wire this into every PBC controller with @Operation / @Parameter annotations in this chunk:** springdoc already auto-generates the full path + request body + response schema from reflection. Adding hand-written annotations is scope creep — a future chunk can tag per-operation @Operation(security = ...) to surface the @RequirePermission keys once a consumer actually needs them. **Smoke-tested end-to-end against real Postgres:** - GET /v3/api-docs returns 200 with 64680 bytes of OpenAPI JSON - 76 total paths listed across every PBC controller - All v3 production paths present: /work-orders/shop-floor, /work-orders/{id}/operations/{operationId}/start + /complete, /work-orders/{id}/{start,complete,cancel,scrap} - components.securitySchemes includes bearerAuth (type=http, format=JWT) - GET /swagger-ui/index.html returns 200 with the Swagger HTML bundle (5 swagger markers found in the HTML) - GET /swagger-ui.html (legacy path) returns 200 after redirect 25 modules (unchanged count — new config lives inside platform-bootstrap), 355 unit tests, all green. -
Adds GET /api/v1/production/work-orders/shop-floor — a pure read that returns every IN_PROGRESS work order with its current operation and planned/actual time totals. Designed to feed a future shop-floor dashboard (web SPA, mobile, or an external reporting tool) without any follow-up round trips. **Service method.** `WorkOrderService.shopFloorSnapshot()` is a @Transactional(readOnly = true) query that: 1. Pulls every IN_PROGRESS work order via the existing `WorkOrderJpaRepository.findByStatus`. 2. Sorts by WO code ascending so a dashboard poll gets a stable row order. 3. For each WO picks the "current operation" = first op in IN_PROGRESS status, or, if none, first PENDING op. This captures both live states: "operator is running step N right now" and "operator just finished step N and hasn't picked up step N+1 yet". 4. Computes `totalStandardMinutes` (sum across every op) + `totalActualMinutes` (sum of completed ops' `actualMinutes` only, treating null as zero). 5. Counts completed vs total operations for a "step 2 of 5" badge. 6. Returns a list of `ShopFloorEntry` DTOs — flat structure, one row per WO, nullable `current*` fields when a WO has no routing at all (v2-compat path). **HTTP surface.** - `GET /api/v1/production/work-orders/shop-floor` - New permission `production.shop-floor.read` - Response is `List<ShopFloorEntryResponse>` — flat so a SPA can render a table without joining across nested JSON. Fields are 1:1 with the service-side `ShopFloorEntry`. **Design choices.** - Mounted under `/work-orders/shop-floor` rather than a top-level `/production/shop-floor` so every production read stays under the same permission/audit/OpenAPI root. - Read-only, zero events published, zero ledger writes. Pure projection over existing state. - Returns empty list when no WO is in-progress — the dashboard renders "no jobs running" without a special case. - Sorted by code so polling is deterministic. 
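The current-operation pick and the time totals (steps 3–5 above) can be sketched as pure logic; `OpStatus`, `Op`, and the function names here are illustrative, not the real domain types:

```kotlin
// Sketch of the shop-floor projection logic described above.
enum class OpStatus { PENDING, IN_PROGRESS, COMPLETED }

data class Op(val lineNo: Int, val status: OpStatus, val standardMinutes: Double, val actualMinutes: Double?)

// First IN_PROGRESS op wins; otherwise the first PENDING op ("between steps");
// null when the WO has no routing at all (v2-compat path).
fun pickCurrent(ops: List<Op>): Op? =
    ops.firstOrNull { it.status == OpStatus.IN_PROGRESS }
        ?: ops.firstOrNull { it.status == OpStatus.PENDING }

// Sum across every op.
fun totalStandard(ops: List<Op>): Double = ops.sumOf { it.standardMinutes }

// Completed ops only, null actuals counted as zero.
fun totalActual(ops: List<Op>): Double =
    ops.filter { it.status == OpStatus.COMPLETED }.sumOf { it.actualMinutes ?: 0.0 }

fun main() {
    val ops = listOf(
        Op(1, OpStatus.COMPLETED, 15.0, 17.0),
        Op(2, OpStatus.PENDING, 30.0, null),
        Op(3, OpStatus.PENDING, 10.0, null),
    )
    println(pickCurrent(ops)?.lineNo)  // between-operations state: first PENDING
    println(totalStandard(ops))
    println(totalActual(ops))
}
```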
A future chunk might add sort-by-work-center if a dashboard needs a by-station view. **Why not a top-level "shop-floor" PBC.** A shop-floor dashboard doesn't own any state — every field it displays is projected from pbc-production. A new PBC would duplicate the data model and create a reaction loop on work order events. Keeping the read in pbc-production matches the CLAUDE.md guardrail "grow the PBC when real consumers appear, not on speculation". **Nullable `current*` fields.** A WO with an empty operations list (the v2-compat path — auto-spawned from SalesOrderConfirmedSubscriber before v3 routings) has all four `current*` fields set to null. The dashboard UI renders "no routing" or similar without any downstream round trip. **Tests (5 new).** empty snapshot when no IN_PROGRESS WOs; one entry per IN_PROGRESS WO with stable sort; current-op picks IN_PROGRESS over PENDING; current-op picks first PENDING when no op is IN_PROGRESS (between-operations state); v2-compat WO with no operations shows null current-op fields and zero time sums. **Smoke-tested end-to-end against real Postgres:** 1. Empty shop-floor initially (no IN_PROGRESS WOs) 2. Started plugin-printing-shop-quote-to-work-order BPMN with quoteCode=Q-DASH-1, quantity=500 3. Started the resulting WO — shop-floor showed currentOperationLineNo=1 (CUT @ PRINTING-CUT-01) status=PENDING, 0/4 completed, totalStandardMinutes=75, totalActualMinutes=0 4. Started op 1 — currentOperationStatus flipped to IN_PROGRESS 5. Completed op 1 with actualMinutes=17 — current op rolled forward to line 2 (PRINT @ PRINTING-PRESS-A) status=PENDING, operationsCompleted=1/4, totalActualMinutes=17 24 modules, 355 unit tests (+5), all green. -
Extends WorkOrderRequestedEvent with an optional routing so a producer — core PBC or customer plug-in — can attach shop-floor operations to a requested work order without importing any pbc-production internals. The reference printing-shop plug-in's quote-to-work-order BPMN now ships a 4-step default routing (CUT → PRINT → FOLD → BIND) end-to-end through the public api.v1 surface. **api.v1 surface additions (additive, defaulted).** - New public data class `RoutingOperationSpec(lineNo, operationCode, workCenter, standardMinutes)` in `api.v1.event.production.WorkOrderEvents` with init-block invariants matching pbc-production v3's internal validation (positive lineNo, non-blank operationCode + workCenter, non-negative standardMinutes). - `WorkOrderRequestedEvent` gains an `operations: List<RoutingOperationSpec>` field, defaulted to `emptyList()`. Existing callers compile without changes; the event's init block now also validates that every operation has a unique lineNo. Convention matches the other v1 events that already carry defaulted `eventId` and `occurredAt` — additive within a major version. **pbc-production subscriber wiring.** - `WorkOrderRequestedSubscriber.handle` now maps `event.operations` → `WorkOrderOperationCommand` 1:1 and passes them to `CreateWorkOrderCommand`. Empty list keeps the v2 behavior exactly (auto-spawned orders from the SO path still get no routing and walk DRAFT → IN_PROGRESS → COMPLETED without any gate); a non-empty list feeds the new v3 WorkOrderOperation children and forces a sequential walk on the shop floor. The log line now includes `ops=<size>` so operators can see at a glance whether a WO came with a routing. **Reference plug-in.** - `CreateWorkOrderFromQuoteTaskHandler` now attaches `DEFAULT_PRINTING_SHOP_ROUTING`: a 4-step sequence modeled on the reference business doc's brochure production flow. 
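Under the invariants described above, the spec and the plug-in's default routing might look like the following sketch (the field names follow the commit text; details beyond it, such as `Double` for minutes, are assumptions):

```kotlin
// Sketch of the api.v1 RoutingOperationSpec invariants and the reference
// plug-in's hard-coded v1 routing: CUT → PRINT → FOLD → BIND.
data class RoutingOperationSpec(
    val lineNo: Int,
    val operationCode: String,
    val workCenter: String,
    val standardMinutes: Double,
) {
    init {
        require(lineNo > 0) { "lineNo must be positive" }
        require(operationCode.isNotBlank()) { "operationCode must not be blank" }
        require(workCenter.isNotBlank()) { "workCenter must not be blank" }
        require(standardMinutes >= 0) { "standardMinutes must be non-negative" }
    }
}

val DEFAULT_PRINTING_SHOP_ROUTING = listOf(
    RoutingOperationSpec(1, "CUT", "PRINTING-CUT-01", 15.0),
    RoutingOperationSpec(2, "PRINT", "PRINTING-PRESS-A", 30.0),
    RoutingOperationSpec(3, "FOLD", "PRINTING-FOLD-01", 10.0),
    RoutingOperationSpec(4, "BIND", "PRINTING-BIND-01", 20.0),
)

fun main() {
    // The event's init block enforces unique lineNos; shown as a standalone check.
    require(DEFAULT_PRINTING_SHOP_ROUTING.distinctBy { it.lineNo }.size == DEFAULT_PRINTING_SHOP_ROUTING.size)
    println(DEFAULT_PRINTING_SHOP_ROUTING.sumOf { it.standardMinutes })  // total standard minutes
}
```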
Each step gets its own work center (PRINTING-CUT-01, PRINTING-PRESS-A, PRINTING-FOLD-01, PRINTING-BIND-01) so a future shop-floor dashboard can show which station is running which job. Standard times are round-number placeholders (15/30/10/20 minutes) — a real customer tunes them from historical data. Deliberately hard-coded in v1: a real shop with a dozen different flows would either ship a richer plug-in that picks routing per item type, or wait for a future Tier 1 "routing template" metadata entity. v1 just proves the event-driven seam carries v3 operations end-to-end. **Why this is the right shape.** - Zero new compile-time coupling. The plug-in imports only `api.v1.event.production.RoutingOperationSpec`; the plug-in linter would refuse any reach into `pbc.production.*`. - Core pbc-production stays ignorant of the plug-in: the subscriber doesn't know where the event came from. - The same `WorkOrderRequestedEvent` path now works for ANY producer — the next customer plug-in that spawns routed work orders gets zero core changes. **Tests.** New `WorkOrderRequestedSubscriberTest.handle passes event operations through as WorkOrderOperationCommand` asserts the 1:1 mapping of RoutingOperationSpec → WorkOrderOperationCommand. The existing test gains one assertion that an empty `operations` list on the event produces an empty `operations` list on the command (backwards-compat lock-in). **Smoke-tested end-to-end against real Postgres:** 1. POST /api/v1/workflow/process-instances with processDefinitionKey `plugin-printing-shop-quote-to-work-order` and variables `{quoteCode: "Q-ROUTING-001", itemCode: "FG-BROCHURE", quantity: 250}` 2. BPMN runs through CreateWorkOrderFromQuoteTaskHandler, publishes WorkOrderRequestedEvent with 4 operations 3. pbc-production subscriber creates WO `WO-FROM-PRINTINGSHOP-Q-ROUTING-001` 4. GET /api/v1/production/work-orders/by-code/... 
returns the WO with status=DRAFT and 4 operations (CUT/PRINT/FOLD/BIND) all PENDING, each with its own work_center and standard_minutes. This is the framework's first business flow where a customer plug-in provides a routing to a core PBC end-to-end through api.v1 alone. Closes the loop between the v3 routings feature (commit fa867189) and the executable acceptance test in the reference plug-in. 24 modules, 350 unit tests (+1), all green. -
Adds WorkOrderOperation child entity and two new verbs that gate WorkOrder.complete() behind a strict sequential walk of shop-floor steps. An empty operations list keeps the v2 behavior exactly; a non-empty list forces every op to reach COMPLETED before the work order can finish. **New domain.** - `production__work_order_operation` table with `UNIQUE (work_order_id, line_no)` and a status CHECK constraint admitting PENDING / IN_PROGRESS / COMPLETED. - `WorkOrderOperation` @Entity mirroring the `WorkOrderInput` shape: `lineNo`, `operationCode`, `workCenter`, `standardMinutes`, `status`, `actualMinutes` (nullable), `startedAt` + `completedAt` timestamps. No `ext` JSONB — operations are facts, not master records. - `WorkOrderOperationStatus` enum (PENDING / IN_PROGRESS / COMPLETED). - `WorkOrder.operations` collection with the same @OneToMany + cascade=ALL + orphanRemoval + @OrderBy("lineNo ASC") pattern as `inputs`. **State machine (sequential).** - `startOperation(workOrderId, operationId)` — parent WO must be IN_PROGRESS; target op must be PENDING; every earlier op must be COMPLETED. Flips to IN_PROGRESS and stamps `startedAt`. Idempotent no-op if already IN_PROGRESS. - `completeOperation(workOrderId, operationId, actualMinutes)` — parent WO must be IN_PROGRESS; target op must be IN_PROGRESS; `actualMinutes` must be non-negative. Flips to COMPLETED and stamps `completedAt`. Idempotent with the same `actualMinutes`; refuses to clobber with a different value. - `WorkOrder.complete()` gains a routings gate: refuses if any operation is not COMPLETED. Empty operations list is legal and preserves v2 behavior (auto-spawned orders from `SalesOrderConfirmedSubscriber` continue to complete without any gate). **Why sequential, not parallel.** v3 deliberately forbids parallel operations on one routing. The shop-floor dashboard story is trivial when the invariant is "you are on step N of M"; the unit test matrix is finite. 
Parallel routings (two presses in parallel) wait for a real consumer asking for them. Same pattern as every other pbc-production invariant — grow the PBC when consumers appear, not on speculation. **Why standardMinutes + actualMinutes instead of just timestamps.** The variance between planned and actual runtime is the single most interesting data point on a routing. Deriving it from `completedAt - startedAt` at report time has to fight shift-boundary and pause-resume ambiguity; the operator typing in "this run took 47 minutes" is the single source of truth. `startedAt` and `completedAt` are kept as an audit trail, not used for variance math. **Why work_center is a varchar not a FK.** Same cross-PBC discipline as every other identifier in pbc-production: work centers will be the seam for a future pbc-equipment PBC, and pinning a FK now would couple two PBC schemas before the consumer even exists (CLAUDE.md guardrail #9). **HTTP surface.** - `POST /api/v1/production/work-orders/{id}/operations/{operationId}/start` → `production.work-order.operation.start` - `POST /api/v1/production/work-orders/{id}/operations/{operationId}/complete` → `production.work-order.operation.complete` Body: `{"actualMinutes": "..."}`. Annotated with the single-arg Jackson trap escape hatch (`@JsonCreator(mode=PROPERTIES)` + `@param:JsonProperty`) — same trap that bit `CompleteWorkOrderRequest`, `ShipSalesOrderRequest`, `ReceivePurchaseOrderRequest`. Caught at smoke-test time. - `CreateWorkOrderRequest` accepts an optional `operations` array alongside `inputs`. - `WorkOrderResponse` gains `operations: List<WorkOrderOperationResponse>` showing status, standardMinutes, actualMinutes, startedAt, completedAt. **Metadata.** Two new permissions in `production.yml`: `production.work-order.operation.start` and `production.work-order.operation.complete`. 
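The guards behind the sequential walk can be sketched as follows; the real verbs live on the WorkOrder aggregate and additionally check parent status, idempotency, and `actualMinutes`:

```kotlin
// Guard-logic sketch of startOperation and the complete() routing gate.
// Operation/startOperation/canCompleteWorkOrder are illustrative names.
enum class WorkOrderOperationStatus { PENDING, IN_PROGRESS, COMPLETED }

class Operation(val lineNo: Int, var status: WorkOrderOperationStatus = WorkOrderOperationStatus.PENDING)

// startOperation: target must be PENDING and every earlier op COMPLETED.
fun startOperation(ops: List<Operation>, lineNo: Int) {
    val target = ops.first { it.lineNo == lineNo }
    check(target.status == WorkOrderOperationStatus.PENDING) { "op $lineNo is not PENDING" }
    val blocked = ops.filter { it.lineNo < lineNo && it.status != WorkOrderOperationStatus.COMPLETED }
    check(blocked.isEmpty()) { "earlier operation(s) are not yet COMPLETED" }
    target.status = WorkOrderOperationStatus.IN_PROGRESS
}

// complete() gate: refuses while any op is not COMPLETED; an empty list is
// legal and preserves the v2 behavior.
fun canCompleteWorkOrder(ops: List<Operation>): Boolean =
    ops.all { it.status == WorkOrderOperationStatus.COMPLETED }

fun main() {
    val ops = listOf(Operation(1), Operation(2))
    println(runCatching { startOperation(ops, 2) }.isFailure) // skip-ahead refused
    startOperation(ops, 1)
    ops[0].status = WorkOrderOperationStatus.COMPLETED
    startOperation(ops, 2)
    ops[1].status = WorkOrderOperationStatus.COMPLETED
    println(canCompleteWorkOrder(ops)) // gate passes once every op is COMPLETED
}
```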
**Tests (12 new).** create-with-ops happy path; duplicate line_no refused; blank operationCode refused; complete() gated when any op is not COMPLETED; complete() passes when every op is COMPLETED; startOperation refused on DRAFT parent; startOperation flips PENDING to IN_PROGRESS and stamps startedAt; startOperation refuses skip-ahead over a PENDING predecessor; startOperation is idempotent when already IN_PROGRESS; completeOperation records actualMinutes and flips to COMPLETED; completeOperation rejects negative actualMinutes; completeOperation refuses clobbering an already-COMPLETED op with a different value. **Smoke-tested end-to-end against real Postgres:** - Created a WO with 3 operations (CUT → PRINT → BIND) - `complete()` refused while DRAFT, then refused while IN_PROGRESS with pending ops ("3 routing operation(s) are not yet COMPLETED") - Skip-ahead `startOperation(op2)` refused ("earlier operation(s) are not yet COMPLETED") - Walked ops 1 → 2 → 3 through start + complete with varying actualMinutes (17, 32.5, 18 vs standard 15, 30, 20) - Final `complete()` succeeded, wrote exactly ONE PRODUCTION_RECEIPT ledger row for 100 units of FG-BROCHURE — no premature writes - Separately verified a no-operations WO still walks DRAFT → IN_PROGRESS → COMPLETED exactly like v2 24 modules, 349 unit tests (+12), all green. -
Closes two open wiring gaps left by the P1.9 and P1.8 chunks — `PluginContext.files` and `PluginContext.reports` both previously threw `UnsupportedOperationException` because the host's `DefaultPluginContext` never received the concrete beans. This commit plumbs both through and exercises them end-to-end via a new printing-shop plug-in endpoint that generates a quote PDF, stores it in the file store, and returns the file handle. With this chunk the reference printing-shop plug-in demonstrates **every extension seam the framework provides**: HTTP endpoints, JDBC, metadata YAML, i18n, BPMN + TaskHandlers, JobHandlers, custom fields on core entities, event publishing via EventBus, ReportRenderer, and FileStorage. There is no major public plug-in surface left unexercised. ## Wiring: DefaultPluginContext + VibeErpPluginManager - `DefaultPluginContext` gains two new constructor parameters (`sharedFileStorage: FileStorage`, `sharedReportRenderer: ReportRenderer`) and two new overrides. Each is wired via Spring — they live in platform-files and platform-reports respectively, but platform-plugins only depends on api.v1 (the interfaces) and NOT on those modules directly. The concrete beans are injected by Spring at distribution boot time when every `@Component` is on the classpath. - `VibeErpPluginManager` adds `private val fileStorage: FileStorage` and `private val reportRenderer: ReportRenderer` constructor params and passes them through to every `DefaultPluginContext` it builds per plug-in. The `files` and `reports` getters in api.v1 `PluginContext` still have their default-throw backward-compat shim — a plug-in built against v0.8 of api.v1 loading on a v0.7 host would still fail loudly at first call with a clear "upgrade to v0.8" message. The override here makes the v0.8+ host honour the interface. ## Printing-shop reference — quote PDF endpoint - New `resources/reports/quote-template.jrxml` inside the plug-in JAR. 
Parameters: plateCode, plateName, widthMm, heightMm, status, customerName. Produces a single-page A4 PDF with a header, a table of plate attributes, and a footer. - New endpoint `POST /api/v1/plugins/printing-shop/plates/{id}/generate-quote-pdf`. Request body `{"customerName": "..."}`, response: `{"plateId", "plateCode", "customerName", "fileKey", "fileSize", "fileContentType", "downloadUrl"}` The handler does ALL of: 1. Reads the plate row via `context.jdbc.queryForObject(...)` 2. Loads the JRXML from the PLUG-IN's own classloader (not the host classpath — `this::class.java.classLoader.getResourceAsStream("reports/quote-template.jrxml")` — so the host's built-in `vibeerp-ping-report.jrxml` and the plug-in's template live in isolated namespaces) 3. Renders via `context.reports.renderPdf(template, data)` — uses the host JasperReportRenderer under the hood 4. Persists via `context.files.put(key, contentType, content)` under a plug-in-scoped key `plugin-printing-shop/quotes/quote-<code>.pdf` 5.
Returns the file handle plus a `downloadUrl` pointing at the framework's `/api/v1/files/download` endpoint the caller can immediately hit ## Smoke test (fresh DB + staged plug-in) ``` # create a plate POST /api/v1/plugins/printing-shop/plates {code: PLATE-200, name: "Premium cover", widthMm: 420, heightMm: 594} → 201 {id, code: PLATE-200, status: DRAFT, ...} # generate + store the quote PDF POST /api/v1/plugins/printing-shop/plates/<id>/generate-quote-pdf {customerName: "Acme Inc"} → 201 { plateId, plateCode: "PLATE-200", customerName: "Acme Inc", fileKey: "plugin-printing-shop/quotes/quote-PLATE-200.pdf", fileSize: 1488, fileContentType: "application/pdf", downloadUrl: "/api/v1/files/download?key=plugin-printing-shop/quotes/quote-PLATE-200.pdf" } # download via the framework's file endpoint GET /api/v1/files/download?key=plugin-printing-shop/quotes/quote-PLATE-200.pdf → 200 Content-Type: application/pdf Content-Length: 1488 body: valid PDF 1.5, 1 page $ file /tmp/plate-quote.pdf /tmp/plate-quote.pdf: PDF document, version 1.5, 1 pages (zip deflate encoded) # list by prefix GET /api/v1/files?prefix=plugin-printing-shop/ → [{"key":"plugin-printing-shop/quotes/quote-PLATE-200.pdf", "size":1488, "contentType":"application/pdf", ...}] # plug-in log [plugin:printing-shop] registered 8 endpoints under /api/v1/plugins/printing-shop/ [plugin:printing-shop] generated quote PDF for plate PLATE-200 (1488 bytes) → plugin-printing-shop/quotes/quote-PLATE-200.pdf ``` Four public surfaces composed in one flow: plug-in JDBC read → plug-in classloader resource load → host ReportRenderer compile/fill/export → host FileStorage put → host file controller download. Every step stays on api.v1; zero plug-in code reaches into a concrete platform class.
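The render-then-store composition in the handler can be sketched with in-memory fakes. `ReportRenderer` mirrors the api.v1 facade signature quoted later in this log; the `FileStorage` return type, the fakes, and `generateQuotePdf` are illustrative assumptions:

```kotlin
import java.io.InputStream

// Sketch of the handler's render → store → handle flow, using fun-interface
// stand-ins for the api.v1 facades. Not the real implementations.
fun interface ReportRenderer {
    fun renderPdf(template: InputStream, data: Map<String, Any?>): ByteArray
}

fun interface FileStorage {
    fun put(key: String, contentType: String, content: ByteArray): Long // assumed: returns size
}

fun generateQuotePdf(
    plateCode: String,
    customerName: String,
    template: InputStream,
    reports: ReportRenderer,
    files: FileStorage,
): Map<String, Any> {
    val pdf = reports.renderPdf(template, mapOf("plateCode" to plateCode, "customerName" to customerName))
    // Plug-in-scoped key by convention: plugin-<id>/...
    val key = "plugin-printing-shop/quotes/quote-$plateCode.pdf"
    val size = files.put(key, "application/pdf", pdf)
    return mapOf("fileKey" to key, "fileSize" to size, "downloadUrl" to "/api/v1/files/download?key=$key")
}

fun main() {
    val store = mutableMapOf<String, ByteArray>()
    val fakeRenderer = ReportRenderer { _, _ -> "%PDF-1.5 fake".toByteArray() }
    val fakeStorage = FileStorage { key, _, content -> store[key] = content; content.size.toLong() }
    val handle = generateQuotePdf("PLATE-200", "Acme Inc", ByteArray(0).inputStream(), fakeRenderer, fakeStorage)
    println(handle["fileKey"])
    println(store.keys.single())
}
```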
## Printing-shop plug-in — full extension surface exercised After this commit the reference printing-shop plug-in contributes via every public seam the framework offers: | Seam | How the plug-in uses it | |-------------------------------|--------------------------------------------------------| | HTTP endpoints (P1.3) | 8 endpoints under /api/v1/plugins/printing-shop/ | | JDBC (P1.4) | Reads/writes its own plugin_printingshop__* tables | | Liquibase | Own changelog.xml, 2 tables created at plug-in start | | Metadata YAML (P1.5) | 2 entities, 5 permissions, 2 menus | | Custom fields on CORE (P3.4) | 5 plug-in fields on Partner/Item/SalesOrder/WorkOrder | | i18n (P1.6) | Own messages_<locale>.properties, quote number msgs | | EventBus (P1.7) | Publishes WorkOrderRequestedEvent from a TaskHandler | | TaskHandlers (P2.1) | 2 handlers (plate-approval, quote-to-work-order) | | Plug-in BPMN (P2.1 followup) | 2 BPMNs in processes/ auto-deployed at start | | JobHandlers (P1.10 followup) | PlateCleanupJobHandler using context.jdbc + logger | | ReportRenderer (P1.8) | Quote PDF from JRXML via context.reports | | FileStorage (P1.9) | Persists quote PDF via context.files | Everything listed in this table is exercised end-to-end by the current smoke test. The plug-in is the framework's executable acceptance test for the entire public extension surface. ## Tests No new unit tests — the wiring change is a plain constructor addition, the existing `DefaultPluginContext` has no dedicated test class (it's a thin dataclass-shaped bean), and `JasperReportRenderer` + `LocalDiskFileStorage` each have their own unit tests from the respective parent chunks. The change is validated end-to-end by the above smoke test; formalizing that into an integration test would need Testcontainers + a real plug-in JAR and belongs to a different (test-infra) chunk. - Total framework unit tests: 337 (unchanged), all green. ## Non-goals (parking lot) - Pre-compiled `.jasper` caching keyed by template hash. 
A hot-path benchmark would tell us whether the cache is worth shipping. - Multipart upload of a template into a plug-in's own `files` namespace so non-bundled templates can be tried without a plug-in rebuild. Nice-to-have for iteration but not on the v1.0 critical path. - Scoped file-key prefixes per plug-in enforced by the framework (today the plug-in picks its own prefix by convention; a `plugin.files.keyPrefix` config would let the host enforce that every plug-in-contributed file lives under `plugin-<id>/`). Future hardening chunk. -
Closes the P1.8 row of the implementation plan — **every Phase 1 platform unit is now ✅**. New platform-reports subproject wrapping JasperReports 6.21.3 with a minimal api.v1 ReportRenderer facade, a built-in self-test JRXML template, and a thin HTTP surface. ## api.v1 additions (package `org.vibeerp.api.v1.reports`) - `ReportRenderer` — injectable facade with ONE method for v1: `renderPdf(template: InputStream, data: Map<String, Any?>): ByteArray` Caller loads the JRXML (or pre-compiled .jasper) from wherever (plug-in JAR classpath, FileStorage, DB metadata row, HTTP upload) and hands an open stream to the renderer. The framework reads the bytes, compiles/fills/exports, and returns the PDF. - `ReportRenderException` — wraps any engine exception so plug-ins don't have to import concrete Jasper exception types. - `PluginContext.reports: ReportRenderer` — new optional member with the default-throw backward-compat pattern used for every other addition. Plug-ins that ship quote PDFs, job cards, delivery notes, etc. inject this through the context. ## platform-reports runtime - `JasperReportRenderer` @Component — wraps JasperReports' compile → fill → export cycle into one method. * `JasperCompileManager.compileReport(template)` turns the JRXML stream into an in-memory `JasperReport`. * `JasperFillManager.fillReport(compiled, params, JREmptyDataSource(1))` evaluates expressions against the parameter map. The empty data source satisfies Jasper's requirement for a non-null data source when the template has no `<field>` definitions. * `JasperExportManager.exportReportToPdfStream(jasperPrint, buffer)` produces the PDF bytes. The `JasperPrint` type annotation on the local is deliberate — Jasper has an ambiguous `exportReportToPdfStream(InputStream, OutputStream)` overload and Kotlin needs the explicit type to pick the right one. 
* Every stage catches `Throwable` and re-throws as `ReportRenderException` with a useful message, keeping the api.v1 surface clean of Jasper's exception hierarchy. - `ReportController` at `/api/v1/reports/**`: * `POST /ping` render the built-in self-test JRXML with the supplied `{name: "..."}` (optional, defaults to "world") and return the PDF bytes with `application/pdf` Content-Type * `POST /render` multipart upload a JRXML template + return the PDF. Operator / test use, not the main production path. Both endpoints @RequirePermission-gated via `reports.report.render`. - `reports/vibeerp-ping-report.jrxml` — a single-page JRXML with a title, centred "Hello, $P{name}!" text, and a footer. Zero fields, one string parameter with a default value. Ships on the platform-reports classpath and is loaded by the `/ping` endpoint via `ClassPathResource`. - `META-INF/vibe-erp/metadata/reports.yml` — 1 permission + 1 menu. ## Design decisions captured in-file - **No template compilation cache.** Every call compiles the JRXML fresh. Fine for infrequent reports (quotes, job cards); a hot path that renders thousands of the same report per minute would want a `ConcurrentHashMap<String, JasperReport>` keyed by template hash. Deliberately NOT shipped until a benchmark shows it's needed — the cache key semantics need a real consumer. - **No multiple output formats.** v1 is PDF-only. Additive overloads for HTML/XLSX land when a real consumer needs them. - **No data-source argument.** v1 is parameter-driven, not query-driven. A future `renderPdf(template, data, rows)` overload will take tabular data for `<field>`-based templates. - **No Groovy / Janino / ECJ.** The default `JRJavacCompiler` uses `javax.tools.ToolProvider.getSystemJavaCompiler()` which is available on any JDK runtime. vibe_erp already requires a JDK (not JRE) for Liquibase + Flowable + Quartz, so we inherit this for free. Zero extra compiler dependencies. 
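The exception-wrapping discipline above can be sketched in isolation — each stage re-throws as the api.v1 `ReportRenderException` so callers never see Jasper's hierarchy. The stage bodies here are stubs, not real Jasper calls; the failure message shape mirrors the smoke-test output:

```kotlin
// Sketch of the per-stage Throwable → ReportRenderException wrapping.
class ReportRenderException(message: String, cause: Throwable? = null) : RuntimeException(message, cause)

inline fun <T> stage(name: String, block: () -> T): T =
    try {
        block()
    } catch (t: Throwable) {
        throw ReportRenderException("failed to $name JRXML template: ${t.message}", t)
    }

// Stubbed compile → fill → export pipeline; real stages call the Jasper managers.
fun renderPdf(template: ByteArray): ByteArray {
    val compiled = stage("compile") {
        require(template.isNotEmpty()) { "Content is not allowed in prolog." }
        template
    }
    val filled = stage("fill") { compiled }
    return stage("export") { filled }
}

fun main() {
    val err = runCatching { renderPdf(ByteArray(0)) }.exceptionOrNull()
    println(err is ReportRenderException)
    println(err?.message)
}
```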
## Config trap caught during first build (documented in build.gradle.kts) My first attempt added aggressive JasperReports exclusions to shrink the transitive dep tree (POI, Batik, Velocity, Castor, Groovy, commons-digester, ...). The build compiled fine but `JasperCompileManager.compileReport(...)` threw `ClassNotFoundException: org.apache.commons.digester.Digester` at runtime — Jasper uses Digester internally to parse the JRXML structure, and excluding the transitive dep silently breaks template loading. Fix: remove ALL exclusions. JasperReports' dep tree IS heavy, but each transitive is load-bearing for a use case that's only obvious once you exercise the engine end-to-end. A benchmark-driven optimization chunk can revisit this later if the JAR size becomes a concern; for v1.0 the "just pull it all in" approach is correct. Documented in the build.gradle.kts so the next person who thinks about trimming the dep tree reads the warning first. ## Smoke test (fresh DB, as admin) ``` POST /api/v1/reports/ping {"name": "Alice"} → 200 Content-Type: application/pdf Content-Length: 1436 body: %PDF-1.5 ... (valid 1-page PDF) $ file /tmp/ping-report.pdf /tmp/ping-report.pdf: PDF document, version 1.5, 1 pages (zip deflate encoded) POST /api/v1/reports/ping (no body) → 200, 1435 bytes, renders with default name="world" from JRXML defaultValueExpression # negative POST /api/v1/reports/render (multipart with garbage bytes) → 400 {"message": "failed to compile JRXML template: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog."} GET /api/v1/_meta/metadata → permissions includes "reports.report.render" ``` The `%PDF-` magic header is present and the `file` command on macOS identifies the bytes as a valid PDF 1.5 single-page document. JasperReports compile + fill + export are all running against the live JDK 21 javac inside the Spring Boot app on first boot.
## Tests - 3 new unit tests in `JasperReportRendererTest`: * `renders the built-in ping template to a valid PDF byte stream` — checks for the `%PDF-` magic header and a reasonable size * `renders with the default parameter when the data map is empty` — proves the JRXML's defaultValueExpression fires * `wraps compile failures in ReportRenderException` — feeds garbage bytes and asserts the exception type - Total framework unit tests: 337 (was 334), all green. ## What this unblocks - **Printing-shop quote PDFs.** The reference plug-in can now ship a `reports/quote.jrxml` in its JAR, load it in an HTTP handler via classloader, render via `context.reports.renderPdf(...)`, and either return the PDF bytes directly or persist it via `context.files.put("reports/quote-$code.pdf", "application/pdf", ...)` for later download. The P1.8 → P1.9 chain is ready. - **Job cards, delivery notes, pick lists, QC certificates.** Every business document in a printing shop is a report template + a data payload. The facade handles them all through the same `renderPdf` call. - **A future reports PBC.** When a PBC actually needs report metadata persisted (template versioning, report scheduling), a new pbc-reports can layer on top without changing api.v1 — the renderer stays the lowest-level primitive, the PBC becomes the management surface. ## Phase 1 completion With P1.8 landed: | Unit | Status | |------|--------| | P1.2 Plug-in linter | ✅ | | P1.3 Plug-in HTTP + lifecycle | ✅ | | P1.4 Plug-in Liquibase + PluginJdbc | ✅ | | P1.5 Metadata store + loader | ✅ | | P1.6 ICU4J translator | ✅ | | P1.7 Event bus + outbox | ✅ | | P1.8 JasperReports integration | ✅ | | P1.9 File store | ✅ | | P1.10 Quartz scheduler | ✅ | **All nine Phase 1 platform units are now done.** (P1.1 Postgres RLS was removed by the early single-tenant refactor, per CLAUDE.md guardrail #5.) 
Remaining v1.0 work is cross-cutting: pbc-finance GL growth, the web SPA (R1–R4), OIDC (P4.2), the MCP server (A1), and richer per-PBC v2/v3 scopes.

## Non-goals (parking lot)

- Template caching keyed by hash.
- HTML/XLSX exporters.
- Pre-compiled `.jasper` support via a Gradle build task.
- Sub-reports (master-detail).
- Dependency-tree optimisation via selective exclusions — needs a benchmark-driven chunk to prove each exclusion is safe.
- Plug-in loader integration for custom font embedding. Jasper's default fonts work; custom fonts land when a real customer plug-in needs them.

-
P1.10 follow-up. Plug-ins can now register background job handlers the same way they already register workflow task handlers. The reference printing-shop plug-in ships a real PlateCleanupJobHandler that reads from its own database via `context.jdbc` as the executable acceptance test.

## Why this wasn't in the P1.10 chunk

P1.10 landed the core scheduler + registry + Quartz bridge + HTTP surface, but the plug-in-loader integration was deliberately deferred — the JobHandlerRegistry already supported owner-tagged `register(handler, ownerId)` and `unregisterAllByOwner(ownerId)`, so the seam was defined; it just didn't have a caller from the PF4J plug-in side. Without a real plug-in consumer, shipping the integration would have been speculative.

This commit closes the gap in exactly the shape the TaskHandler side already has: new api.v1 registrar interface, new scoped registrar in platform-plugins, one constructor parameter on DefaultPluginContext, one new field on VibeErpPluginManager, and the teardown paths all fall out automatically because JobHandlerRegistry already implements the owner-tagged cleanup.

## api.v1 additions

- `org.vibeerp.api.v1.jobs.PluginJobHandlerRegistrar` — single method `register(handler: JobHandler)`. Mirrors `PluginTaskHandlerRegistrar` exactly: same ergonomics, same duplicate-key-throws discipline.
- `PluginContext.jobs: PluginJobHandlerRegistrar` — new optional member with the default-throw backward-compat pattern used for `endpoints`, `jdbc`, `taskHandlers`, and `files`. An older host loading a newer plug-in jar fails loudly at first call rather than silently dropping scheduled work.

## platform-plugins wiring

- New dependency on `:platform:platform-jobs`.
- New internal class `org.vibeerp.platform.plugins.jobs.ScopedJobHandlerRegistrar` that implements the api.v1 registrar by delegating `register(handler)` to `hostRegistry.register(handler, ownerId = pluginId)`.
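The two seams described above — the default-throw backward-compat member and the owner-tagged delegation — can be sketched together. This is an illustrative Java reduction of the Kotlin originals; the class and method names mirror the description, but the bodies are minimal stand-ins, not the real implementations.

```java
// Sketch: api.v1 registrar + default-throw PluginContext member +
// owner-tagged host registry + per-plug-in scoped registrar.
import java.util.LinkedHashMap;
import java.util.Map;

interface JobHandler { String key(); }

interface PluginJobHandlerRegistrar { void register(JobHandler handler); }

interface PluginContext {
    // Default-throw backward compat: an older host that never wires `jobs`
    // fails loudly at first call instead of silently dropping scheduled work.
    default PluginJobHandlerRegistrar jobs() {
        throw new UnsupportedOperationException("this host does not support job handlers");
    }
}

class JobHandlerRegistry {
    private final Map<String, String> ownerByKey = new LinkedHashMap<>();

    void register(JobHandler h, String ownerId) {
        // Duplicate-key-throws discipline, as on the TaskHandler side.
        if (ownerByKey.putIfAbsent(h.key(), ownerId) != null) {
            throw new IllegalStateException("duplicate JobHandler key: " + h.key());
        }
    }

    /** Removes only the stopping owner's handlers; returns how many were removed. */
    int unregisterAllByOwner(String ownerId) {
        int before = ownerByKey.size();
        ownerByKey.values().removeIf(ownerId::equals);
        return before - ownerByKey.size();
    }

    int size() { return ownerByKey.size(); }
}

// Tags every registration with the plug-in id so teardown can strip
// exactly that plug-in's handlers and nothing else.
class ScopedJobHandlerRegistrar implements PluginJobHandlerRegistrar {
    private final JobHandlerRegistry host;
    private final String pluginId;

    ScopedJobHandlerRegistrar(JobHandlerRegistry host, String pluginId) {
        this.host = host;
        this.pluginId = pluginId;
    }

    public void register(JobHandler handler) {
        host.register(handler, pluginId);
    }
}
```

The scoped registrar is why the teardown paths "fall out automatically": the plug-in never sees owner ids, yet every registration it makes is attributable and reversible by owner.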
- `DefaultPluginContext` gains a `scopedJobHandlers` constructor parameter and exposes it as `PluginContext.jobs`.
- `VibeErpPluginManager`:
  * injects `JobHandlerRegistry`
  * constructs `ScopedJobHandlerRegistrar(registry, pluginId)` per plug-in when building `DefaultPluginContext`
  * partial-start failure now also calls `jobHandlerRegistry.unregisterAllByOwner(pluginId)`, matching the existing endpoint + taskHandler + BPMN-deployment cleanups
  * `destroy()` reverse-iterates `started` and calls the same `unregisterAllByOwner` alongside the other four teardown steps

## Reference plug-in — PlateCleanupJobHandler

New file `reference-customer/plugin-printing-shop/.../jobs/PlateCleanupJobHandler.kt`. Key `printing_shop.plate.cleanup`. Captures the `PluginContext` via constructor — the same "handler-side plug-in context access" pattern the printing-shop plug-in already uses for its TaskHandlers.

The handler is READ-ONLY in its v1 incarnation: it runs a GROUP-BY query over `plugin_printingshop__plate` via `context.jdbc.query(...)` and logs a per-status summary via `context.logger.info(...)`. A real cleanup job would also run an `UPDATE`/`DELETE` to prune DRAFT plates older than N days; the read-only shape is enough to exercise the seam end-to-end without introducing a retention policy the customer hasn't asked for.

`PrintingShopPlugin.start(context)` now registers the handler alongside its two TaskHandlers:

    context.taskHandlers.register(PlateApprovalTaskHandler(context))
    context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context))
    context.jobs.register(PlateCleanupJobHandler(context))

## Smoke test (fresh DB, plug-in staged)

```
# boot
registered JobHandler 'vibeerp.jobs.ping' owner='core' ...
JobHandlerRegistry initialised with 1 core JobHandler bean(s): [vibeerp.jobs.ping] ...
registered JobHandler 'printing_shop.plate.cleanup' owner='printing-shop' ...
[plugin:printing-shop] registered 1 JobHandler: printing_shop.plate.cleanup

# HTTP: list handlers — now shows both
GET /api/v1/jobs/handlers
→ {"count":2,"keys":["printing_shop.plate.cleanup","vibeerp.jobs.ping"]}

# HTTP: trigger the plug-in handler — proves dispatcher routes to it
POST /api/v1/jobs/handlers/printing_shop.plate.cleanup/trigger
→ 200 {"handlerKey":"printing_shop.plate.cleanup",
       "correlationId":"95969129-d6bf-4d9a-8359-88310c4f63b9",
       "startedAt":"...","finishedAt":"...","ok":true}

# Handler-side logs prove context.jdbc + context.logger access
[plugin:printing-shop] PlateCleanupJobHandler firing corr='95969129-...'
[plugin:printing-shop] PlateCleanupJobHandler summary: total=0 byStatus=[]

# SIGTERM — clean teardown
[ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 2 handler(s)
[ionShutdownHook] unregistered JobHandler 'printing_shop.plate.cleanup' (owner stopped)
[ionShutdownHook] JobHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
```

Every expected lifecycle event fires in the right order. Core handlers are untouched by plug-in teardown.

## Tests

No new unit tests in this commit — the test coverage is inherited from the previously landed components:

- `JobHandlerRegistryTest` already covers owner-tagged `register` / `unregister` / `unregisterAllByOwner` / duplicate key rejection.
- `ScopedTaskHandlerRegistrar` behavior (which this commit mirrors structurally) is exercised end-to-end by the printing-shop plug-in boot path.
- Total framework unit tests: 334 (unchanged from the quality→warehousing quarantine chunk), all green.

## What this unblocks

- **Plug-in-shipped scheduled work.** The printing-shop plug-in can now add cron schedules for its cleanup handler via `POST /api/v1/jobs/scheduled {scheduleKey, handlerKey, cronExpression}` without the operator touching core code.
- **Plug-in-to-plug-in handler coexistence.** Two plug-ins can now ship job handlers with distinct keys and be torn down independently on reload — the owner-tagged cleanup strips only the stopping plug-in's handlers, leaving other plug-ins' and core handlers alone.
- **The "plug-in contributes everything" story.** The reference printing-shop plug-in now contributes via every public seam the framework has: HTTP endpoints (7), custom fields on core entities (5), BPMNs (2), TaskHandlers (2), and a JobHandler (1) — plus its own database schema, its own metadata YAML, its own i18n bundles. That's every extension point a real customer plug-in would want.

## Non-goals (parking lot)

- A real retention policy in PlateCleanupJobHandler. The handler logs a summary but doesn't mutate state. Customer-specific pruning rules belong in a customer-owned plug-in or a metadata-driven rule once that seam exists.
- A built-in cron schedule for the plug-in's handler. The plug-in only registers the handler; scheduling is an operator decision exposed through the HTTP surface from P1.10.

-
…c-warehousing StockTransfer

First cross-PBC reaction originating from pbc-quality. Records a REJECTED inspection with explicit source + quarantine location codes, publishes an api.v1 event inside the same transaction as the row insert, and pbc-warehousing's new subscriber atomically creates + confirms a StockTransfer that moves the rejected quantity to the quarantine bin. The whole chain — inspection insert + event publish + transfer create + confirm + two ledger rows — runs in a single transaction under the synchronous in-process bus with Propagation.MANDATORY.

## Why the auto-quarantine is opt-in per-inspection

Not every inspection wants physical movement. A REJECTED batch that's already separated from good stock on the shop floor doesn't need the framework to move anything; the operator just wants the record. Forcing every rejection to create a ledger pair would collide with real-world QC workflows.

The contract is simple: the `InspectionRecord` now carries two OPTIONAL columns (`source_location_code`, `quarantine_location_code`). When BOTH are set AND the decision is REJECTED AND the rejected quantity is positive, the subscriber reacts. Otherwise it logs at DEBUG and does nothing. The event is published either way, so audit/KPI subscribers see every inspection regardless.

## api.v1 additions

New event class `org.vibeerp.api.v1.event.quality.InspectionRecordedEvent` with nine fields:

    inspectionCode, itemCode, sourceReference, decision, inspectedQuantity,
    rejectedQuantity, sourceLocationCode?, quarantineLocationCode?, inspector

All required fields are validated in `init { }` — blank strings, non-positive inspected quantity, negative rejected quantity, or an unknown decision string all throw at publish time so a malformed event never hits the outbox. `aggregateType = "quality.InspectionRecord"` matches the `<pbc>.<aggregate>` convention.
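The validate-at-construction discipline can be sketched as follows. The real event is a Kotlin class with an `init { }` block; this Java record with a compact constructor is an illustrative stand-in, and the `require` helper is not part of the codebase.

```java
// Sketch: a malformed event throws at publish (construction) time, so it
// can never reach the outbox. Field list follows the nine fields above;
// the two location codes are optional and may be null.
import java.util.Set;

record InspectionRecordedEvent(
        String inspectionCode, String itemCode, String sourceReference,
        String decision, double inspectedQuantity, double rejectedQuantity,
        String sourceLocationCode,      // optional
        String quarantineLocationCode,  // optional
        String inspector) {

    private static final Set<String> DECISIONS = Set.of("APPROVED", "REJECTED");

    InspectionRecordedEvent {  // compact constructor plays the role of Kotlin's init { }
        require(!inspectionCode.isBlank(), "inspectionCode is blank");
        require(!itemCode.isBlank(), "itemCode is blank");
        require(!sourceReference.isBlank(), "sourceReference is blank");
        require(!inspector.isBlank(), "inspector is blank");
        require(inspectedQuantity > 0, "inspectedQuantity must be positive");
        require(rejectedQuantity >= 0, "rejectedQuantity must not be negative");
        require(DECISIONS.contains(decision), "unknown decision: " + decision);
    }

    private static void require(boolean ok, String message) {
        if (!ok) throw new IllegalArgumentException(message);
    }
}
```

Because the check runs in the constructor, every producer — HTTP handler, BPMN service task, or test — gets the same guarantee for free, which is what keeps the outbox free of garbage under at-least-once delivery.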
`decision` is carried as a String (not the pbc-quality `InspectionDecision` enum) to keep guardrail #10 honest — api.v1 events MUST NOT leak internal PBC types. Consumers compare against the literal `"APPROVED"` / `"REJECTED"` strings.

## pbc-quality changes

- `InspectionRecord` entity gains two nullable columns: `source_location_code` + `quarantine_location_code`.
- Liquibase migration `002-quality-quarantine-locations.xml` adds the columns to `quality__inspection_record`.
- `InspectionRecordService` now injects `EventBus` and publishes `InspectionRecordedEvent` inside the `@Transactional record()` method. The publish carries all nine fields including the optional locations.
- `RecordInspectionCommand` + `RecordInspectionRequest` gain the two optional location fields, defaulting to null, so every existing caller keeps working unchanged.
- `InspectionRecordResponse` exposes both new columns on the HTTP wire.

## pbc-warehousing changes

- New `QualityRejectionQuarantineSubscriber` @Component.
- Subscribes in `@PostConstruct` via the typed-class `EventBus.subscribe(InspectionRecordedEvent::class.java, ...)` overload — the same pattern every other PBC subscriber uses (SalesOrderConfirmedSubscriber, WorkOrderRequestedSubscriber, the pbc-finance order subscribers).
- `handle(event)` is `internal` so the unit test can drive it directly without going through the bus.
- Activation contract (all must be true): decision=REJECTED, rejectedQuantity>0, sourceLocationCode non-blank, quarantineLocationCode non-blank. Any missing condition → no-op.
- Idempotency: the derived transfer code is `TR-QC-<inspectionCode>`. Before creating, the subscriber checks `stockTransfers.findByCode(derivedCode)` — if anything exists (DRAFT, CONFIRMED, or CANCELLED), the subscriber skips. A replay of the same event under at-least-once delivery is safe.
- On success: creates a DRAFT StockTransfer with one line moving `rejectedQuantity` of `itemCode` from source to quarantine, then calls `confirm(id)`, which writes the atomic TRANSFER_OUT + TRANSFER_IN ledger pair.

## Smoke test (fresh DB)

```
# seed
POST /api/v1/catalog/items {code: WIDGET-1, baseUomCode: ea}
POST /api/v1/inventory/locations {code: WH-MAIN, type: WAREHOUSE}
POST /api/v1/inventory/locations {code: WH-QUARANTINE, type: WAREHOUSE}
POST /api/v1/inventory/movements {itemCode: WIDGET-1, locationId: <WH-MAIN>, delta: 100, reason: RECEIPT}

# the cross-PBC reaction
POST /api/v1/quality/inspections {code: QC-R-001, itemCode: WIDGET-1,
  sourceReference: "WO:WO-001", decision: REJECTED, inspectedQuantity: 50,
  rejectedQuantity: 7, reason: "surface scratches",
  sourceLocationCode: "WH-MAIN", quarantineLocationCode: "WH-QUARANTINE"}
→ 201 {..., sourceLocationCode: "WH-MAIN", quarantineLocationCode: "WH-QUARANTINE"}

# automatically created + confirmed
GET /api/v1/warehousing/stock-transfers/by-code/TR-QC-QC-R-001
→ 200 {
  "code": "TR-QC-QC-R-001",
  "fromLocationCode": "WH-MAIN",
  "toLocationCode": "WH-QUARANTINE",
  "status": "CONFIRMED",
  "note": "auto-quarantine from rejected inspection QC-R-001",
  "lines": [{"itemCode": "WIDGET-1", "quantity": 7.0}]
}

# ledger state (raw SQL)
SELECT l.code, b.item_code, b.quantity
FROM inventory__stock_balance b
JOIN inventory__location l ON l.id = b.location_id
WHERE b.item_code = 'WIDGET-1';

WH-MAIN       | WIDGET-1 | 93.0000   ← was 100, now 93
WH-QUARANTINE | WIDGET-1 |  7.0000   ← 7 rejected units here

SELECT m.item_code, l.code AS location, m.reason, m.delta, m.reference
FROM inventory__stock_movement m
JOIN inventory__location l ON l.id = m.location_id
WHERE m.reference = 'TR:TR-QC-QC-R-001';

WIDGET-1 | WH-MAIN       | TRANSFER_OUT | -7 | TR:TR-QC-QC-R-001
WIDGET-1 | WH-QUARANTINE | TRANSFER_IN  |  7 | TR:TR-QC-QC-R-001

# negatives
POST /api/v1/quality/inspections {decision: APPROVED, ...+locations}
→ 201, but GET /TR-QC-QC-A-001 → 404 (no transfer, correct opt-out)
POST /api/v1/quality/inspections {decision: REJECTED, rejected: 2, no locations}
→ 201, but GET /TR-QC-QC-R-002 → 404 (opt-in honored)

# handler log
[warehousing] auto-quarantining 7 units of 'WIDGET-1' from 'WH-MAIN' to 'WH-QUARANTINE'
  (inspection=QC-R-001, transfer=TR-QC-QC-R-001)
```

Everything happens in ONE transaction because EventBusImpl uses Propagation.MANDATORY with synchronous delivery: the inspection insert, the event publish, the StockTransfer create, the confirm, and the two ledger rows all commit or roll back together.

## Tests

- Updated `InspectionRecordServiceTest`: the service now takes an `EventBus` constructor argument. Every existing test got a relaxed `EventBus` mock; the one new test `record publishes InspectionRecordedEvent on success` captures the published event and asserts every field including the location codes.
- 6 new unit tests in `QualityRejectionQuarantineSubscriberTest`:
  * subscribe registers one listener for InspectionRecordedEvent
  * handle creates and confirms a quarantine transfer on a fully-populated REJECTED event (asserts derived code, locations, item code, quantity)
  * handle is a no-op when decision is APPROVED
  * handle is a no-op when sourceLocationCode is missing
  * handle is a no-op when quarantineLocationCode is missing
  * handle skips when a transfer with the derived code already exists (idempotent replay)
- Total framework unit tests: 334 (was 327), all green.

## What this unblocks

- **Quality KPI dashboards** — any PBC can now subscribe to `InspectionRecordedEvent` without coupling to pbc-quality.
- **pbc-finance quality-cost tracking** — when GL growth lands, a finance subscriber can debit a "quality variance" account on every REJECTED inspection.
- **REF.2 / customer plug-in workflows** — the printing-shop plug-in can emit an `InspectionRecordedEvent` of its own from a BPMN service task (via `context.eventBus.publish`) and drive the same quarantine chain without touching pbc-quality's HTTP surface.
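The subscriber's activation contract and derived-code idempotency reduce to two pure functions. This is a minimal Java sketch over plain fields, assuming the structure described in the pbc-warehousing changes; `QuarantineGate` is an illustrative name, not a class in the codebase.

```java
// Sketch: the four-condition activation gate (any failure -> no-op) and
// the deterministic transfer code whose existence check makes event
// replays under at-least-once delivery safe.
class QuarantineGate {
    static boolean shouldReact(String decision, double rejectedQuantity,
                               String sourceLocationCode, String quarantineLocationCode) {
        return "REJECTED".equals(decision)
                && rejectedQuantity > 0
                && notBlank(sourceLocationCode)
                && notBlank(quarantineLocationCode);
    }

    /** TR-QC-<inspectionCode>; the subscriber skips if a transfer with this code already exists. */
    static String derivedTransferCode(String inspectionCode) {
        return "TR-QC-" + inspectionCode;
    }

    private static boolean notBlank(String s) {
        return s != null && !s.isBlank();
    }
}
```

Keeping the gate pure is what makes the six subscriber unit tests cheap: each no-op case is a one-line input tweak, and the derived code is checkable without a database.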
## Non-goals (parking lot)

- Partial-batch quarantine decisions (moving some units to quarantine, some back to general stock, some to scrap). v1 collapses the decision into a single "reject N units" action and assumes the operator splits batches manually before inspecting. A richer ResolutionPlan aggregate is a future chunk if real workflows need it.
- Quality metrics storage. The event is audited by the existing wildcard event subscriber but no PBC rolls it up into a KPI table. Belongs to a future reporting feature.
- Auto-approval chains. An APPROVED inspection could trigger a "release-from-hold" transfer (opposite direction) in a future-expanded subscriber, but v1 keeps the reaction REJECTED-only to match the "quarantine on fail" use case.