• Adds WorkOrderOperation child entity and two new verbs that gate
    WorkOrder.complete() behind a strict sequential walk of shop-floor
    steps. An empty operations list keeps the v2 behavior exactly; a
    non-empty list forces every op to reach COMPLETED before the work
    order can finish.
    
    **New domain.**
    - `production__work_order_operation` table with
      `UNIQUE (work_order_id, line_no)` and a status CHECK constraint
      admitting PENDING / IN_PROGRESS / COMPLETED.
    - `WorkOrderOperation` @Entity mirroring the `WorkOrderInput` shape:
      `lineNo`, `operationCode`, `workCenter`, `standardMinutes`,
      `status`, `actualMinutes` (nullable), `startedAt` + `completedAt`
      timestamps. No `ext` JSONB — operations are facts, not master
      records.
    - `WorkOrderOperationStatus` enum (PENDING / IN_PROGRESS / COMPLETED).
    - `WorkOrder.operations` collection with the same @OneToMany +
      cascade=ALL + orphanRemoval + @OrderBy("lineNo ASC") pattern as
      `inputs`.
    
    **State machine (sequential).**
    - `startOperation(workOrderId, operationId)` — parent WO must be
      IN_PROGRESS; target op must be PENDING; every earlier op must be
      COMPLETED. Flips to IN_PROGRESS and stamps `startedAt`.
      Idempotent no-op if already IN_PROGRESS.
    - `completeOperation(workOrderId, operationId, actualMinutes)` —
      parent WO must be IN_PROGRESS; target op must be IN_PROGRESS;
      `actualMinutes` must be non-negative. Flips to COMPLETED and
      stamps `completedAt`. Idempotent with the same `actualMinutes`;
      refuses to clobber with a different value.
    - `WorkOrder.complete()` gains a routings gate: refuses if any
      operation is not COMPLETED. Empty operations list is legal and
      preserves v2 behavior (auto-spawned orders from
      `SalesOrderConfirmedSubscriber` continue to complete without
      any gate).
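
    The sequential gate can be sketched in plain Kotlin. This is an
    illustrative model, not the real entity code: the actual
    `WorkOrderOperation` is a JPA entity, and timestamps and
    persistence are elided here.

    ```kotlin
    enum class OpStatus { PENDING, IN_PROGRESS, COMPLETED }

    // Stand-in for WorkOrderOperation: only the fields the gate needs.
    class Op(val lineNo: Int, var status: OpStatus = OpStatus.PENDING, var actualMinutes: Double? = null)

    class WorkOrderSketch(private val ops: List<Op>) {
        fun startOperation(lineNo: Int) {
            val target = ops.first { it.lineNo == lineNo }
            if (target.status == OpStatus.IN_PROGRESS) return // idempotent no-op
            require(target.status == OpStatus.PENDING) { "op $lineNo is ${target.status}" }
            require(ops.filter { it.lineNo < lineNo }.all { it.status == OpStatus.COMPLETED }) {
                "earlier operation(s) are not yet COMPLETED" // no skip-ahead
            }
            target.status = OpStatus.IN_PROGRESS // real code also stamps startedAt
        }

        fun completeOperation(lineNo: Int, actualMinutes: Double) {
            require(actualMinutes >= 0) { "actualMinutes must be non-negative" }
            val target = ops.first { it.lineNo == lineNo }
            if (target.status == OpStatus.COMPLETED) {
                // idempotent only with the same value; refuse to clobber
                require(target.actualMinutes == actualMinutes) { "refusing to overwrite actualMinutes" }
                return
            }
            require(target.status == OpStatus.IN_PROGRESS) { "op $lineNo is ${target.status}" }
            target.actualMinutes = actualMinutes
            target.status = OpStatus.COMPLETED // real code also stamps completedAt
        }

        // the routings gate on complete(): an empty list preserves v2 behavior
        fun canComplete(): Boolean = ops.all { it.status == OpStatus.COMPLETED }
    }
    ```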
    
    **Why sequential, not parallel.** v3 deliberately forbids parallel
    operations on one routing. The shop-floor dashboard story is
    trivial when the invariant is "you are on step N of M"; the unit
    test matrix is finite. Parallel routings (two presses in parallel)
    wait for a real consumer asking for them. Same pattern as every
    other pbc-production invariant — grow the PBC when consumers
    appear, not on speculation.
    
    **Why standardMinutes + actualMinutes instead of just timestamps.**
    The variance between planned and actual runtime is the single
    most interesting data point on a routing. Deriving it from
    `completedAt - startedAt` at report time has to fight
    shift-boundary and pause-resume ambiguity; the operator typing in
    "this run took 47 minutes" is the single source of truth. `startedAt`
    and `completedAt` are kept as an audit trail, not used for
    variance math.
    
    **Why work_center is a varchar, not a FK.** Same cross-PBC discipline
    as every other identifier in pbc-production: work centers will be
    the seam for a future pbc-equipment PBC, and pinning a FK now
    would couple the two schemas before the consumer even exists
    (CLAUDE.md guardrail #9).
    
    **HTTP surface.**
    - `POST /api/v1/production/work-orders/{id}/operations/{operationId}/start`
      → `production.work-order.operation.start`
    - `POST /api/v1/production/work-orders/{id}/operations/{operationId}/complete`
      → `production.work-order.operation.complete`
      Body: `{"actualMinutes": "..."}`. Annotated with the
      single-arg Jackson trap escape hatch (`@JsonCreator(mode=PROPERTIES)`
      + `@param:JsonProperty`) — same trap that bit
      `CompleteWorkOrderRequest`, `ShipSalesOrderRequest`,
      `ReceivePurchaseOrderRequest`. Caught at smoke-test time.
    - `CreateWorkOrderRequest` accepts an optional `operations` array
      alongside `inputs`.
    - `WorkOrderResponse` gains `operations: List<WorkOrderOperationResponse>`
      showing status, standardMinutes, actualMinutes, startedAt,
      completedAt.
    
    **Metadata.** Two new permissions in `production.yml`:
    `production.work-order.operation.start` and
    `production.work-order.operation.complete`.
    
    **Tests (12 new).** create-with-ops happy path; duplicate line_no
    refused; blank operationCode refused; complete() gated when any
    op is not COMPLETED; complete() passes when every op is COMPLETED;
    startOperation refused on DRAFT parent; startOperation flips
    PENDING to IN_PROGRESS and stamps startedAt; startOperation
    refuses skip-ahead over a PENDING predecessor; startOperation is
    idempotent when already IN_PROGRESS; completeOperation records
    actualMinutes and flips to COMPLETED; completeOperation rejects
    negative actualMinutes; completeOperation refuses clobbering an
    already-COMPLETED op with a different value.
    
    **Smoke-tested end-to-end against real Postgres:**
    - Created a WO with 3 operations (CUT → PRINT → BIND)
    - `complete()` refused while DRAFT, then refused while IN_PROGRESS
      with pending ops ("3 routing operation(s) are not yet COMPLETED")
    - Skip-ahead `startOperation(op2)` refused ("earlier operation(s)
      are not yet COMPLETED")
    - Walked ops 1 → 2 → 3 through start + complete with varying
      actualMinutes (17, 32.5, 18 vs standard 15, 30, 20)
    - Final `complete()` succeeded, wrote exactly ONE
      PRODUCTION_RECEIPT ledger row for 100 units of FG-BROCHURE —
      no premature writes
    - Separately verified a no-operations WO still walks DRAFT →
      IN_PROGRESS → COMPLETED exactly like v2
    
    24 modules, 349 unit tests (+12), all green.
    zichun authored
  • Closes two open wiring gaps left by the P1.9 and P1.8 chunks —
    `PluginContext.files` and `PluginContext.reports` both previously
    threw `UnsupportedOperationException` because the host's
    `DefaultPluginContext` never received the concrete beans. This
    commit plumbs both through and exercises them end-to-end via a
    new printing-shop plug-in endpoint that generates a quote PDF,
    stores it in the file store, and returns the file handle.
    
    With this chunk the reference printing-shop plug-in demonstrates
    **every extension seam the framework provides**: HTTP endpoints,
    JDBC, metadata YAML, i18n, BPMN + TaskHandlers, JobHandlers,
    custom fields on core entities, event publishing via EventBus,
    ReportRenderer, and FileStorage. There is no major public plug-in
    surface left unexercised.
    
    ## Wiring: DefaultPluginContext + VibeErpPluginManager
    
    - `DefaultPluginContext` gains two new constructor parameters
      (`sharedFileStorage: FileStorage`, `sharedReportRenderer: ReportRenderer`)
      and two new overrides. Each is wired via Spring — they live in
      platform-files and platform-reports respectively, but
      platform-plugins only depends on api.v1 (the interfaces) and
      NOT on those modules directly. The concrete beans are injected
      by Spring at distribution boot time when every `@Component` is
      on the classpath.
    - `VibeErpPluginManager` adds `private val fileStorage: FileStorage`
      and `private val reportRenderer: ReportRenderer` constructor
      params and passes them through to every `DefaultPluginContext`
      it builds per plug-in.
    
    The `files` and `reports` getters in api.v1 `PluginContext` still
    have their default-throw backward-compat shim — a plug-in built
    against v0.8 of api.v1 loading on a v0.7 host would still fail
    loudly at first call with a clear "upgrade to v0.8" message. The
    override here makes the v0.8+ host honour the interface.
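
    The default-throw shim plus host override can be sketched in plain
    Kotlin (the stand-in `FileStorage` signature and the `*Sketch`
    names are illustrative; the real getters live on api.v1's
    `PluginContext`):

    ```kotlin
    interface FileStorage {
        fun put(key: String, contentType: String, content: ByteArray)
    }

    interface PluginContextSketch {
        // backward-compat shim: a host that never overrides this fails loudly at first call
        val files: FileStorage
            get() = throw UnsupportedOperationException(
                "PluginContext.files requires a host built against a newer api.v1"
            )
    }

    // a newer host honours the interface by overriding with the concrete bean
    class DefaultPluginContextSketch(private val sharedFileStorage: FileStorage) : PluginContextSketch {
        override val files: FileStorage get() = sharedFileStorage
    }
    ```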
    
    ## Printing-shop reference — quote PDF endpoint
    
    - New `resources/reports/quote-template.jrxml` inside the plug-in
      JAR. Parameters: plateCode, plateName, widthMm, heightMm,
      status, customerName. Produces a single-page A4 PDF with a
      header, a table of plate attributes, and a footer.
    
    - New endpoint `POST /api/v1/plugins/printing-shop/plates/{id}/generate-quote-pdf`.
      Request body `{"customerName": "..."}`, response:
        `{"plateId", "plateCode", "customerName",
          "fileKey", "fileSize", "fileContentType", "downloadUrl"}`
    
      The handler does ALL of:
        1. Reads the plate row via `context.jdbc.queryForObject(...)`
        2. Loads the JRXML from the PLUG-IN's own classloader (not
           the host classpath — `this::class.java.classLoader
           .getResourceAsStream("reports/quote-template.jrxml")` —
           so the host's built-in `vibeerp-ping-report.jrxml` and the
           plug-in's template live in isolated namespaces)
        3. Renders via `context.reports.renderPdf(template, data)`
           — uses the host JasperReportRenderer under the hood
        4. Persists via `context.files.put(key, contentType, content)`
           under a plug-in-scoped key `plugin-printing-shop/quotes/quote-<code>.pdf`
        5. Returns the file handle plus a `downloadUrl` pointing at
           the framework's `/api/v1/files/download` endpoint the
           caller can immediately hit
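
    The five steps compose into one function. A sketch with
    hypothetical stand-ins for the context members the handler touches
    (`QuoteContext` and its function-typed fields are illustrative,
    not the real api.v1 types; step 2's classloader load is
    represented by the `template` argument):

    ```kotlin
    // stand-ins for context.jdbc, context.reports, and context.files
    class QuoteContext(
        val readPlateCode: (String) -> String,                       // 1. plate row lookup
        val renderPdf: (ByteArray, Map<String, Any?>) -> ByteArray,  // 3. compile/fill/export
        val putFile: (String, String, ByteArray) -> Unit             // 4. persist
    )

    fun generateQuotePdf(ctx: QuoteContext, plateId: String, template: ByteArray, customerName: String): String {
        val code = ctx.readPlateCode(plateId)
        val pdf = ctx.renderPdf(template, mapOf("plateCode" to code, "customerName" to customerName))
        val key = "plugin-printing-shop/quotes/quote-$code.pdf"      // plug-in-scoped key
        ctx.putFile(key, "application/pdf", pdf)
        return key                                                   // 5. caller builds the downloadUrl from this
    }
    ```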
    
    ## Smoke test (fresh DB + staged plug-in)
    
    ```
    # create a plate
    POST /api/v1/plugins/printing-shop/plates
         {code: PLATE-200, name: "Premium cover", widthMm: 420, heightMm: 594}
      → 201 {id, code: PLATE-200, status: DRAFT, ...}
    
    # generate + store the quote PDF
    POST /api/v1/plugins/printing-shop/plates/<id>/generate-quote-pdf
         {customerName: "Acme Inc"}
      → 201 {
          plateId, plateCode: "PLATE-200", customerName: "Acme Inc",
          fileKey: "plugin-printing-shop/quotes/quote-PLATE-200.pdf",
          fileSize: 1488,
          fileContentType: "application/pdf",
          downloadUrl: "/api/v1/files/download?key=plugin-printing-shop/quotes/quote-PLATE-200.pdf"
        }
    
    # download via the framework's file endpoint
    GET /api/v1/files/download?key=plugin-printing-shop/quotes/quote-PLATE-200.pdf
      → 200
        Content-Type: application/pdf
        Content-Length: 1488
        body: valid PDF 1.5, 1 page
    
    $ file /tmp/plate-quote.pdf
      /tmp/plate-quote.pdf: PDF document, version 1.5, 1 pages (zip deflate encoded)
    
    # list by prefix
    GET /api/v1/files?prefix=plugin-printing-shop/
      → [{"key":"plugin-printing-shop/quotes/quote-PLATE-200.pdf",
          "size":1488, "contentType":"application/pdf", ...}]
    
    # plug-in log
    [plugin:printing-shop] registered 8 endpoints under /api/v1/plugins/printing-shop/
    [plugin:printing-shop] generated quote PDF for plate PLATE-200 (1488 bytes)
                           → plugin-printing-shop/quotes/quote-PLATE-200.pdf
    ```
    
    Four public surfaces composed in one flow: plug-in JDBC read →
    plug-in classloader resource load → host ReportRenderer compile/
    fill/export → host FileStorage put → host file controller
    download. Every step stays on api.v1; zero plug-in code reaches
    into a concrete platform class.
    
    ## Printing-shop plug-in — full extension surface exercised
    
    After this commit the reference printing-shop plug-in contributes
    via every public seam the framework offers:
    
    | Seam                          | How the plug-in uses it                                |
    |-------------------------------|--------------------------------------------------------|
    | HTTP endpoints (P1.3)         | 8 endpoints under /api/v1/plugins/printing-shop/       |
    | JDBC (P1.4)                   | Reads/writes its own plugin_printingshop__* tables    |
    | Liquibase                     | Own changelog.xml, 2 tables created at plug-in start  |
    | Metadata YAML (P1.5)          | 2 entities, 5 permissions, 2 menus                    |
    | Custom fields on CORE (P3.4)  | 5 plug-in fields on Partner/Item/SalesOrder/WorkOrder |
    | i18n (P1.6)                   | Own messages_<locale>.properties, quote number msgs   |
    | EventBus (P1.7)               | Publishes WorkOrderRequestedEvent from a TaskHandler  |
    | TaskHandlers (P2.1)           | 2 handlers (plate-approval, quote-to-work-order)      |
    | Plug-in BPMN (P2.1 followup)  | 2 BPMNs in processes/ auto-deployed at start          |
    | JobHandlers (P1.10 followup)  | PlateCleanupJobHandler using context.jdbc + logger    |
    | ReportRenderer (P1.8)         | Quote PDF from JRXML via context.reports              |
    | FileStorage (P1.9)            | Persists quote PDF via context.files                  |
    
    Everything listed in this table is exercised end-to-end by the
    current smoke test. The plug-in is the framework's executable
    acceptance test for the entire public extension surface.
    
    ## Tests
    
    No new unit tests — the wiring change is a plain constructor
    addition, the existing `DefaultPluginContext` has no dedicated
    test class (it's a thin dataclass-shaped bean), and
    `JasperReportRenderer` + `LocalDiskFileStorage` each have their
    own unit tests from the respective parent chunks. The change is
    validated end-to-end by the above smoke test; formalizing that
    into an integration test would need Testcontainers + a real
    plug-in JAR and belongs to a different (test-infra) chunk.
    
    - Total framework unit tests: 337 (unchanged), all green.
    
    ## Non-goals (parking lot)
    
    - Pre-compiled `.jasper` caching keyed by template hash. A
      hot-path benchmark would tell us whether the cache is worth
      shipping.
    - Multipart upload of a template into a plug-in's own `files`
      namespace so non-bundled templates can be tried without a
      plug-in rebuild. Nice-to-have for iteration but not on the
      v1.0 critical path.
    - Scoped file-key prefixes per plug-in enforced by the framework
      (today the plug-in picks its own prefix by convention; a
      `plugin.files.keyPrefix` config would let the host enforce
      that every plug-in-contributed file lives under
      `plugin-<id>/`). Future hardening chunk.
  • Closes the P1.8 row of the implementation plan — **every Phase 1
    platform unit is now ✅**. New platform-reports subproject wrapping
    JasperReports 6.21.3 with a minimal api.v1 ReportRenderer facade,
    a built-in self-test JRXML template, and a thin HTTP surface.
    
    ## api.v1 additions (package `org.vibeerp.api.v1.reports`)
    
    - `ReportRenderer` — injectable facade with ONE method for v1:
        `renderPdf(template: InputStream, data: Map<String, Any?>): ByteArray`
      Caller loads the JRXML (or pre-compiled .jasper) from wherever
      (plug-in JAR classpath, FileStorage, DB metadata row, HTTP
      upload) and hands an open stream to the renderer. The framework
      reads the bytes, compiles/fills/exports, and returns the PDF.
    - `ReportRenderException` — wraps any engine exception so plug-ins
      don't have to import concrete Jasper exception types.
    - `PluginContext.reports: ReportRenderer` — new optional member
      with the default-throw backward-compat pattern used for every
      other addition. Plug-ins that ship quote PDFs, job cards,
      delivery notes, etc. inject this through the context.
    
    ## platform-reports runtime
    
    - `JasperReportRenderer` @Component — wraps JasperReports' compile
      → fill → export cycle into one method.
        * `JasperCompileManager.compileReport(template)` turns the
          JRXML stream into an in-memory `JasperReport`.
        * `JasperFillManager.fillReport(compiled, params, JREmptyDataSource(1))`
          evaluates expressions against the parameter map. The empty
          data source satisfies Jasper's requirement for a non-null
          data source when the template has no `<field>` definitions.
        * `JasperExportManager.exportReportToPdfStream(jasperPrint, buffer)`
          produces the PDF bytes. The `JasperPrint` type annotation on
          the local is deliberate — Jasper has an ambiguous
          `exportReportToPdfStream(InputStream, OutputStream)` overload
          and Kotlin needs the explicit type to pick the right one.
        * Every stage catches `Throwable` and re-throws as
          `ReportRenderException` with a useful message, keeping the
          api.v1 surface clean of Jasper's exception hierarchy.
    
    - `ReportController` at `/api/v1/reports/**`:
        * `POST /ping`    render the built-in self-test JRXML with
                          the supplied `{name: "..."}` (optional, defaults
                          to "world") and return the PDF bytes with
                          `application/pdf` Content-Type
        * `POST /render`  multipart upload a JRXML template + return
                          the PDF. Operator / test use, not the main
                          production path.
      Both endpoints @RequirePermission-gated via `reports.report.render`.
    
    - `reports/vibeerp-ping-report.jrxml` — a single-page JRXML with
      a title, centred "Hello, $P{name}!" text, and a footer. Zero
      fields, one string parameter with a default value. Ships on the
      platform-reports classpath and is loaded by the `/ping` endpoint
      via `ClassPathResource`.
    
    - `META-INF/vibe-erp/metadata/reports.yml` — 1 permission + 1 menu.
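
    The catch-and-rethrow discipline described above is independent of
    Jasper and can be sketched in a few lines (`renderStage` is a
    hypothetical helper; the real renderer writes the try/catch per
    stage):

    ```kotlin
    class ReportRenderException(message: String, cause: Throwable? = null) : RuntimeException(message, cause)

    // wrap any engine Throwable so callers never import Jasper's exception hierarchy
    inline fun <T> renderStage(stage: String, block: () -> T): T =
        try {
            block()
        } catch (t: Throwable) {
            throw ReportRenderException("failed to $stage: ${t.message}", t)
        }
    ```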
    
    ## Design decisions captured in-file
    
    - **No template compilation cache.** Every call compiles the JRXML
      fresh. Fine for infrequent reports (quotes, job cards); a hot
      path that renders thousands of the same report per minute would
      want a `ConcurrentHashMap<String, JasperReport>` keyed by
      template hash. Deliberately NOT shipped until a benchmark shows
      it's needed — the cache key semantics need a real consumer.
    - **No multiple output formats.** v1 is PDF-only. Additive
      overloads for HTML/XLSX land when a real consumer needs them.
    - **No data-source argument.** v1 is parameter-driven, not
      query-driven. A future `renderPdf(template, data, rows)`
      overload will take tabular data for `<field>`-based templates.
    - **No Groovy / Janino / ECJ.** The default `JRJavacCompiler` uses
      `javax.tools.ToolProvider.getSystemJavaCompiler()` which is
      available on any JDK runtime. vibe_erp already requires a JDK
      (not JRE) for Liquibase + Flowable + Quartz, so we inherit this
      for free. Zero extra compiler dependencies.
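
    The JDK-not-JRE assumption is cheap to assert at boot; a sketch
    using the same `javax.tools` entry point (`runningOnJdk` is a
    hypothetical helper name):

    ```kotlin
    import javax.tools.ToolProvider

    // JRJavacCompiler needs the system compiler; null here means a JRE, not a JDK
    fun runningOnJdk(): Boolean = ToolProvider.getSystemJavaCompiler() != null
    ```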
    
    ## Config trap caught during first build (documented in build.gradle.kts)
    
    My first attempt added aggressive JasperReports exclusions to
    shrink the transitive dep tree (POI, Batik, Velocity, Castor,
    Groovy, commons-digester, ...). The build compiled fine but
    `JasperCompileManager.compileReport(...)` threw
    `ClassNotFoundException: org.apache.commons.digester.Digester`
    at runtime — Jasper uses Digester internally to parse the JRXML
    structure, and excluding the transitive dep silently breaks
    template loading.
    
    Fix: remove ALL exclusions. JasperReports' dep tree IS heavy,
    but each transitive is load-bearing for a use case that's only
    obvious once you exercise the engine end-to-end. A benchmark-
    driven optimization chunk can revisit this later if the JAR size
    becomes a concern; for v1.0 the "just pull it all in" approach is
    correct. Documented in the build.gradle.kts so the next person
    who thinks about trimming the dep tree reads the warning first.
    
    ## Smoke test (fresh DB, as admin)
    
    ```
    POST /api/v1/reports/ping {"name": "Alice"}
      → 200
        Content-Type: application/pdf
        Content-Length: 1436
        body: %PDF-1.5 ... (valid 1-page PDF)
    
    $ file /tmp/ping-report.pdf
      /tmp/ping-report.pdf: PDF document, version 1.5, 1 pages (zip deflate encoded)
    
    POST /api/v1/reports/ping   (no body)
      → 200, 1435 bytes, renders with default name="world" from JRXML
        defaultValueExpression
    
    # negative
    POST /api/v1/reports/render  (multipart with garbage bytes)
      → 400 {"message": "failed to compile JRXML template:
             org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1;
             Content is not allowed in prolog."}
    
    GET /api/v1/_meta/metadata
      → permissions includes "reports.report.render"
    ```
    
    The `%PDF-` magic header is present and the `file` command on
    macOS identifies the bytes as a valid PDF 1.5 single-page document.
    JasperReports compile + fill + export are all running against
    the live JDK 21 javac inside the Spring Boot app on first boot.
    
    ## Tests
    
    - 3 new unit tests in `JasperReportRendererTest`:
      * `renders the built-in ping template to a valid PDF byte stream`
        — checks for the `%PDF-` magic header and a reasonable size
      * `renders with the default parameter when the data map is empty`
        — proves the JRXML's defaultValueExpression fires
      * `wraps compile failures in ReportRenderException` — feeds
        garbage bytes and asserts the exception type
    - Total framework unit tests: 337 (was 334), all green.
    
    ## What this unblocks
    
    - **Printing-shop quote PDFs.** The reference plug-in can now ship
      a `reports/quote.jrxml` in its JAR, load it in an HTTP handler
      via classloader, render via `context.reports.renderPdf(...)`,
      and either return the PDF bytes directly or persist it via
      `context.files.put("reports/quote-$code.pdf", "application/pdf", ...)`
      for later download. The P1.8 → P1.9 chain is ready.
    - **Job cards, delivery notes, pick lists, QC certificates.**
      Every business document in a printing shop is a report
      template + a data payload. The facade handles them all through
      the same `renderPdf` call.
    - **A future reports PBC.** When a PBC actually needs report
      metadata persisted (template versioning, report scheduling), a
      new pbc-reports can layer on top without changing api.v1 —
      the renderer stays the lowest-level primitive, the PBC becomes
      the management surface.
    
    ## Phase 1 completion
    
    With P1.8 landed:
    
    | Unit | Status |
    |------|--------|
    | P1.2 Plug-in linter        | ✅ |
    | P1.3 Plug-in HTTP + lifecycle | ✅ |
    | P1.4 Plug-in Liquibase + PluginJdbc | ✅ |
    | P1.5 Metadata store + loader | ✅ |
    | P1.6 ICU4J translator | ✅ |
    | P1.7 Event bus + outbox | ✅ |
    | P1.8 JasperReports integration | ✅ |
    | P1.9 File store | ✅ |
    | P1.10 Quartz scheduler | ✅ |
    
    **All nine Phase 1 platform units are now done.** (P1.1 Postgres RLS
    was removed by the early single-tenant refactor, per CLAUDE.md
    guardrail #5.) Remaining v1.0 work is cross-cutting: pbc-finance
    GL growth, the web SPA (R1–R4), OIDC (P4.2), the MCP server (A1),
    and richer per-PBC v2/v3 scopes.
    
    ## Non-goals (parking lot)
    
    - Template caching keyed by hash.
    - HTML/XLSX exporters.
    - Pre-compiled `.jasper` support via a Gradle build task.
    - Sub-reports (master-detail).
    - Dependency-tree optimisation via selective exclusions — needs a
      benchmark-driven chunk to prove each exclusion is safe.
    - Plug-in loader integration for custom font embedding. Jasper's
      default fonts work; custom fonts land when a real customer
      plug-in needs them.
  • P1.10 follow-up. Plug-ins can now register background job handlers
    the same way they already register workflow task handlers. The
    reference printing-shop plug-in ships a real PlateCleanupJobHandler
    that reads from its own database via `context.jdbc` as the
    executable acceptance test.
    
    ## Why this wasn't in the P1.10 chunk
    
    P1.10 landed the core scheduler + registry + Quartz bridge + HTTP
    surface, but the plug-in-loader integration was deliberately
    deferred — the JobHandlerRegistry already supported owner-tagged
    `register(handler, ownerId)` and `unregisterAllByOwner(ownerId)`,
    so the seam was defined; it just didn't have a caller from the
    PF4J plug-in side. Without a real plug-in consumer, shipping the
    integration would have been speculative.
    
    This commit closes the gap in exactly the shape the TaskHandler
    side already has: new api.v1 registrar interface, new scoped
    registrar in platform-plugins, one constructor parameter on
    DefaultPluginContext, one new field on VibeErpPluginManager, and
    the teardown paths all fall out automatically because
    JobHandlerRegistry already implements the owner-tagged cleanup.
    
    ## api.v1 additions
    
    - `org.vibeerp.api.v1.jobs.PluginJobHandlerRegistrar` — single
      method `register(handler: JobHandler)`. Mirrors
      `PluginTaskHandlerRegistrar` exactly, same ergonomics, same
      duplicate-key-throws discipline.
    - `PluginContext.jobs: PluginJobHandlerRegistrar` — new optional
      member with the default-throw backward-compat pattern used for
      `endpoints`, `jdbc`, `taskHandlers`, and `files`. An older host
      loading a newer plug-in jar fails loudly at first call rather
      than silently dropping scheduled work.
    
    ## platform-plugins wiring
    
    - New dependency on `:platform:platform-jobs`.
    - New internal class
      `org.vibeerp.platform.plugins.jobs.ScopedJobHandlerRegistrar`
      that implements the api.v1 registrar by delegating
      `register(handler)` to `hostRegistry.register(handler, ownerId = pluginId)`.
    - `DefaultPluginContext` gains a `scopedJobHandlers` constructor
      parameter and exposes it as `PluginContext.jobs`.
    - `VibeErpPluginManager`:
      * injects `JobHandlerRegistry`
      * constructs `ScopedJobHandlerRegistrar(registry, pluginId)` per
        plug-in when building `DefaultPluginContext`
      * partial-start failure now also calls
        `jobHandlerRegistry.unregisterAllByOwner(pluginId)`, matching
        the existing endpoint + taskHandler + BPMN-deployment cleanups
      * `destroy()` reverse-iterates `started` and calls the same
        `unregisterAllByOwner` alongside the other four teardown steps
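
    The owner-tagged cleanup that makes the teardown fall out
    automatically can be sketched in plain Kotlin (the map-backed body
    and the `*Sketch` names are illustrative; the real
    `JobHandlerRegistry` lives in platform-jobs):

    ```kotlin
    interface JobHandler { val key: String }

    class JobHandlerRegistrySketch {
        private data class Entry(val handler: JobHandler, val ownerId: String)
        private val entries = mutableMapOf<String, Entry>()

        fun register(handler: JobHandler, ownerId: String) {
            require(handler.key !in entries) { "duplicate JobHandler key '${handler.key}'" } // duplicate-key-throws
            entries[handler.key] = Entry(handler, ownerId)
        }

        // strips only the stopping owner's handlers; core and other plug-ins untouched
        fun unregisterAllByOwner(ownerId: String): Int {
            val doomed = entries.filterValues { it.ownerId == ownerId }.keys.toList()
            doomed.forEach { entries.remove(it) }
            return doomed.size
        }

        fun keys(): Set<String> = entries.keys.toSortedSet()
    }

    // the scoped registrar pins ownerId to the plug-in the context was built for
    class ScopedJobHandlerRegistrarSketch(
        private val host: JobHandlerRegistrySketch,
        private val pluginId: String
    ) {
        fun register(handler: JobHandler) = host.register(handler, ownerId = pluginId)
    }
    ```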
    
    ## Reference plug-in — PlateCleanupJobHandler
    
    New file
    `reference-customer/plugin-printing-shop/.../jobs/PlateCleanupJobHandler.kt`.
    Key `printing_shop.plate.cleanup`. Captures the `PluginContext`
    via constructor — same "handler-side plug-in context access"
    pattern the printing-shop plug-in already uses for its
    TaskHandlers.
    
    The handler is READ-ONLY in its v1 incarnation: it runs a
    GROUP-BY query over `plugin_printingshop__plate` via
    `context.jdbc.query(...)` and logs a per-status summary via
    `context.logger.info(...)`. A real cleanup job would also run an
    `UPDATE`/`DELETE` to prune DRAFT plates older than N days; the
    read-only shape is enough to exercise the seam end-to-end without
    introducing a retention policy the customer hasn't asked for.
    
    `PrintingShopPlugin.start(context)` now registers the handler
    alongside its two TaskHandlers:
    
        context.taskHandlers.register(PlateApprovalTaskHandler(context))
        context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context))
        context.jobs.register(PlateCleanupJobHandler(context))
    
    ## Smoke test (fresh DB, plug-in staged)
    
    ```
    # boot
    registered JobHandler 'vibeerp.jobs.ping' owner='core' ...
    JobHandlerRegistry initialised with 1 core JobHandler bean(s): [vibeerp.jobs.ping]
    ...
    registered JobHandler 'printing_shop.plate.cleanup' owner='printing-shop' ...
    [plugin:printing-shop] registered 1 JobHandler: printing_shop.plate.cleanup
    
    # HTTP: list handlers — now shows both
    GET /api/v1/jobs/handlers
      → {"count":2,"keys":["printing_shop.plate.cleanup","vibeerp.jobs.ping"]}
    
    # HTTP: trigger the plug-in handler — proves dispatcher routes to it
    POST /api/v1/jobs/handlers/printing_shop.plate.cleanup/trigger
      → 200 {"handlerKey":"printing_shop.plate.cleanup",
             "correlationId":"95969129-d6bf-4d9a-8359-88310c4f63b9",
             "startedAt":"...","finishedAt":"...","ok":true}
    
    # Handler-side logs prove context.jdbc + context.logger access
    [plugin:printing-shop] PlateCleanupJobHandler firing corr='95969129-...'
    [plugin:printing-shop] PlateCleanupJobHandler summary: total=0 byStatus=[]
    
    # SIGTERM — clean teardown
    [ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 2 handler(s)
    [ionShutdownHook] unregistered JobHandler 'printing_shop.plate.cleanup' (owner stopped)
    [ionShutdownHook] JobHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
    ```
    
    Every expected lifecycle event fires in the right order. Core
    handlers are untouched by plug-in teardown.
    
    ## Tests
    
    No new unit tests in this commit — the test coverage is inherited
    from the previously landed components:
      - `JobHandlerRegistryTest` already covers owner-tagged
        `register` / `unregister` / `unregisterAllByOwner` / duplicate
        key rejection.
      - `ScopedTaskHandlerRegistrar` behavior (which this commit
        mirrors structurally) is exercised end-to-end by the
        printing-shop plug-in boot path.
    - Total framework unit tests: 334 (unchanged from the
      quality→warehousing quarantine chunk), all green.
    
    ## What this unblocks
    
    - **Plug-in-shipped scheduled work.** The printing-shop plug-in
      can now add cron schedules for its cleanup handler via
      `POST /api/v1/jobs/scheduled {scheduleKey, handlerKey,
      cronExpression}` without the operator touching core code.
    - **Plug-in-to-plug-in handler coexistence.** Two plug-ins can
      now ship job handlers with distinct keys and be torn down
      independently on reload — the owner-tagged cleanup strips only
      the stopping plug-in's handlers, leaving other plug-ins' and
      core handlers alone.
    - **The "plug-in contributes everything" story.** The reference
      printing-shop plug-in now contributes via every public seam the
      framework has: HTTP endpoints (7), custom fields on core
      entities (5), BPMNs (2), TaskHandlers (2), and a JobHandler (1)
      — plus its own database schema, its own metadata YAML, its own
      i18n bundles. That's every extension point a real customer
      plug-in would want.
    
    ## Non-goals (parking lot)
    
    - A real retention policy in PlateCleanupJobHandler. The handler
      logs a summary but doesn't mutate state. Customer-specific
      pruning rules belong in a customer-owned plug-in or a metadata-
      driven rule once that seam exists.
    - A built-in cron schedule for the plug-in's handler. The
      plug-in only registers the handler; scheduling is an operator
      decision exposed through the HTTP surface from P1.10.
  • …c-warehousing StockTransfer
    
    First cross-PBC reaction originating from pbc-quality. Records a
    REJECTED inspection with explicit source + quarantine location
    codes, publishes an api.v1 event inside the same transaction as
    the row insert, and pbc-warehousing's new subscriber atomically
    creates + confirms a StockTransfer that moves the rejected
    quantity to the quarantine bin. The whole chain — inspection
    insert + event publish + transfer create + confirm + two ledger
    rows — runs in a single transaction under the synchronous
    in-process bus with Propagation.MANDATORY.
    
    ## Why the auto-quarantine is opt-in per-inspection
    
    Not every inspection wants physical movement. A REJECTED batch
    that's already separated from good stock on the shop floor doesn't
    need the framework to move anything; the operator just wants the
    record. Forcing every rejection to create a ledger pair would
    collide with real-world QC workflows.
    
    The contract is simple: the `InspectionRecord` now carries two
    OPTIONAL columns (`source_location_code`, `quarantine_location_code`).
    When BOTH are set AND the decision is REJECTED AND the rejected
    quantity is positive, the subscriber reacts. Otherwise it logs at
    DEBUG and does nothing. The event is published either way, so
    audit/KPI subscribers see every inspection regardless.
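The activation contract above reduces to one predicate. A minimal sketch in Java (the framework code is Kotlin; class and parameter names here are hypothetical stand-ins):

```java
// Hypothetical sketch of the opt-in check: react only when the
// inspection is REJECTED, units were actually rejected, and BOTH
// location codes are present (non-null, non-blank).
final class QuarantineGate {
    static boolean shouldQuarantine(String decision,
                                    double rejectedQuantity,
                                    String sourceLocationCode,
                                    String quarantineLocationCode) {
        return "REJECTED".equals(decision)
                && rejectedQuantity > 0
                && sourceLocationCode != null && !sourceLocationCode.isBlank()
                && quarantineLocationCode != null && !quarantineLocationCode.isBlank();
    }
}
```

Any single missing condition short-circuits to "log at DEBUG and do nothing" while the event still reaches every other subscriber.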
    
    ## api.v1 additions
    
    New event class `org.vibeerp.api.v1.event.quality.InspectionRecordedEvent`
    with nine fields:
    
      inspectionCode, itemCode, sourceReference, decision,
      inspectedQuantity, rejectedQuantity,
      sourceLocationCode?, quarantineLocationCode?, inspector
    
    All required fields are validated in `init { }` — blank strings,
    non-positive inspected quantity, negative rejected quantity, or
    an unknown decision string all throw at publish time so a
    malformed event never hits the outbox.
    
    `aggregateType = "quality.InspectionRecord"` matches the
    `<pbc>.<aggregate>` convention.
    
    `decision` is carried as a String (not the pbc-quality
    `InspectionDecision` enum) to keep guardrail #10 honest — api.v1
    events MUST NOT leak internal PBC types. Consumers compare
    against the literal `"APPROVED"` / `"REJECTED"` strings.
    
    ## pbc-quality changes
    
    - `InspectionRecord` entity gains two nullable columns:
      `source_location_code` + `quarantine_location_code`.
    - Liquibase migration `002-quality-quarantine-locations.xml` adds
      the columns to `quality__inspection_record`.
    - `InspectionRecordService` now injects `EventBus` and publishes
      `InspectionRecordedEvent` inside the `@Transactional record()`
      method. The publish carries all nine fields including the
      optional locations.
    - `RecordInspectionCommand` + `RecordInspectionRequest` gain the
      two optional location fields; unchanged default-null means
      every existing caller keeps working unchanged.
    - `InspectionRecordResponse` exposes both new columns on the HTTP
      wire.
    
    ## pbc-warehousing changes
    
    - New `QualityRejectionQuarantineSubscriber` @Component.
    - Subscribes in `@PostConstruct` via the typed-class
      `EventBus.subscribe(InspectionRecordedEvent::class.java, ...)`
      overload — same pattern every other PBC subscriber uses
      (SalesOrderConfirmedSubscriber, WorkOrderRequestedSubscriber,
      the pbc-finance order subscribers).
    - `handle(event)` is `internal` so the unit test can drive it
      directly without going through the bus.
    - Activation contract (all must be true): decision=REJECTED,
      rejectedQuantity>0, sourceLocationCode non-blank,
      quarantineLocationCode non-blank. Any missing condition → no-op.
    - Idempotency: derived transfer code is `TR-QC-<inspectionCode>`.
      Before creating, the subscriber checks
      `stockTransfers.findByCode(derivedCode)` — if anything exists
      (DRAFT, CONFIRMED, or CANCELLED), the subscriber skips. A
      replay of the same event under at-least-once delivery is safe.
    - On success: creates a DRAFT StockTransfer with one line moving
      `rejectedQuantity` of `itemCode` from source to quarantine,
      then calls `confirm(id)` which writes the atomic TRANSFER_OUT
      + TRANSFER_IN ledger pair.
    
    ## Smoke test (fresh DB)
    
    ```
    # seed
    POST /api/v1/catalog/items       {code: WIDGET-1, baseUomCode: ea}
    POST /api/v1/inventory/locations {code: WH-MAIN, type: WAREHOUSE}
    POST /api/v1/inventory/locations {code: WH-QUARANTINE, type: WAREHOUSE}
    POST /api/v1/inventory/movements {itemCode: WIDGET-1, locationId: <WH-MAIN>, delta: 100, reason: RECEIPT}
    
    # the cross-PBC reaction
    POST /api/v1/quality/inspections
         {code: QC-R-001,
          itemCode: WIDGET-1,
          sourceReference: "WO:WO-001",
          decision: REJECTED,
          inspectedQuantity: 50,
          rejectedQuantity: 7,
          reason: "surface scratches",
          sourceLocationCode: "WH-MAIN",
          quarantineLocationCode: "WH-QUARANTINE"}
      → 201 {..., sourceLocationCode: "WH-MAIN", quarantineLocationCode: "WH-QUARANTINE"}
    
    # automatically created + confirmed
    GET /api/v1/warehousing/stock-transfers/by-code/TR-QC-QC-R-001
      → 200 {
          "code": "TR-QC-QC-R-001",
          "fromLocationCode": "WH-MAIN",
          "toLocationCode": "WH-QUARANTINE",
          "status": "CONFIRMED",
          "note": "auto-quarantine from rejected inspection QC-R-001",
          "lines": [{"itemCode": "WIDGET-1", "quantity": 7.0}]
        }
    
    # ledger state (raw SQL)
    SELECT l.code, b.item_code, b.quantity
      FROM inventory__stock_balance b
      JOIN inventory__location l ON l.id = b.location_id
      WHERE b.item_code = 'WIDGET-1';
      WH-MAIN       | WIDGET-1 | 93.0000   ← was 100, now 93
      WH-QUARANTINE | WIDGET-1 |  7.0000   ← 7 rejected units here
    
    SELECT m.item_code, l.code AS location, m.reason, m.delta, m.reference
      FROM inventory__stock_movement m JOIN inventory__location l ON l.id = m.location_id
      WHERE m.reference = 'TR:TR-QC-QC-R-001';
      WIDGET-1 | WH-MAIN       | TRANSFER_OUT | -7 | TR:TR-QC-QC-R-001
      WIDGET-1 | WH-QUARANTINE | TRANSFER_IN  |  7 | TR:TR-QC-QC-R-001
    
    # negatives
    POST /api/v1/quality/inspections {decision: APPROVED, ...+locations}
      → 201, but GET /TR-QC-QC-A-001 → 404 (no transfer, correct opt-out)
    
    POST /api/v1/quality/inspections {decision: REJECTED, rejected: 2, no locations}
      → 201, but GET /TR-QC-QC-R-002 → 404 (opt-in honored)
    
    # handler log
    [warehousing] auto-quarantining 7 units of 'WIDGET-1'
    from 'WH-MAIN' to 'WH-QUARANTINE'
    (inspection=QC-R-001, transfer=TR-QC-QC-R-001)
    ```
    
    Everything happens in ONE transaction because EventBusImpl uses
    Propagation.MANDATORY with synchronous delivery: the inspection
    insert, the event publish, the StockTransfer create, the
    confirm, and the two ledger rows all commit or roll back
    together.
    
    ## Tests
    
    - Updated `InspectionRecordServiceTest`: the service now takes an
      `EventBus` constructor argument. Every existing test got a
      relaxed `EventBus` mock; the one new test
      `record publishes InspectionRecordedEvent on success` captures
      the published event and asserts every field including the
      location codes.
    - 6 new unit tests in `QualityRejectionQuarantineSubscriberTest`:
      * subscribe registers one listener for InspectionRecordedEvent
      * handle creates and confirms a quarantine transfer on a
        fully-populated REJECTED event (asserts derived code,
        locations, item code, quantity)
      * handle is a no-op when decision is APPROVED
      * handle is a no-op when sourceLocationCode is missing
      * handle is a no-op when quarantineLocationCode is missing
      * handle skips when a transfer with the derived code already
        exists (idempotent replay)
    - Total framework unit tests: 334 (was 327), all green.
    
    ## What this unblocks
    
    - **Quality KPI dashboards** — any PBC can now subscribe to
      `InspectionRecordedEvent` without coupling to pbc-quality.
    - **pbc-finance quality-cost tracking** — when GL growth lands, a
      finance subscriber can debit a "quality variance" account on
      every REJECTED inspection.
    - **REF.2 / customer plug-in workflows** — the printing-shop
      plug-in can emit an `InspectionRecordedEvent` of its own from
      a BPMN service task (via `context.eventBus.publish`) and drive
      the same quarantine chain without touching pbc-quality's HTTP
      surface.
    
    ## Non-goals (parking lot)
    
    - Partial-batch quarantine decisions (moving some units to
      quarantine, some back to general stock, some to scrap). v1
      collapses the decision into a single "reject N units" action
      and assumes the operator splits batches manually before
      inspecting. A richer ResolutionPlan aggregate is a future
      chunk if real workflows need it.
    - Quality metrics storage. The event is audited by the existing
      wildcard event subscriber but no PBC rolls it up into a KPI
      table. Belongs to a future reporting feature.
    - Auto-approval chains. An APPROVED inspection could trigger a
      "release-from-hold" transfer (opposite direction) in a
      future-expanded subscriber, but v1 keeps the reaction
      REJECTED-only to match the "quarantine on fail" use case.
  • Closes the P1.9 row of the implementation plan. New platform-files
    subproject exposing a cross-PBC facade for the framework's binary
    blob store, with a local-disk implementation and a thin HTTP
    surface for multipart upload / download / delete / list.
    
    ## api.v1 additions (package `org.vibeerp.api.v1.files`)
    
    - `FileStorage` — injectable facade with five methods:
        * `put(key, contentType, content: InputStream): FileHandle`
        * `get(key): FileReadResult?`
        * `exists(key): Boolean`
        * `delete(key): Boolean`
        * `list(prefix): List<FileHandle>`
      Stream-first (not byte-array-first) so report PDFs and the like
      don't have to be materialized in memory. Keys are opaque strings with
      slashes allowed for logical grouping; the local-disk backend
      maps them to subdirectories.
    
    - `FileHandle` — read-only metadata DTO (key, size, contentType,
      createdAt, updatedAt).
    
    - `FileReadResult` — the return type of `get()` bundling a handle
      and an open InputStream. The caller MUST close the stream
      (`result.content.use { ... }` is the idiomatic shape); the
      facade does not manage the stream's lifetime on the caller's behalf.
    
    - `PluginContext.files: FileStorage` — new member on the plug-in
      context interface, default implementation throws
      `UnsupportedOperationException("upgrade vibe_erp to v0.8 or later")`.
      Same backward-compat pattern we used for `endpoints`, `jdbc`,
      `taskHandlers`. Plug-ins that need to persist report PDFs,
      uploaded attachments, or exported archives inject this through
      the context.
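The caller-closes contract on `FileReadResult` maps to try-with-resources on the JVM. A hedged sketch with minimal stand-in types (the real api.v1 classes are Kotlin and may differ in shape):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Stand-ins for the facade's read result: the facade hands back an
// OPEN stream, and it is the caller's job to close it.
record Handle(String key, long size, String contentType) {}
record ReadResult(Handle handle, InputStream content) {}

class FileReader {
    static String readAll(ReadResult result) {
        // try-with-resources is the Java analogue of Kotlin's
        // `result.content.use { ... }` — the stream closes on every path.
        try (InputStream in = result.content()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```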
    
    ## platform-files runtime
    
    - `LocalDiskFileStorage` @Component reading `vibeerp.files.local-path`
      (default `./files-local`, overridden in dev profile to
      `./files-dev`, overridden in production config to
      `/opt/vibe-erp/files`).
    
      **Layout**: files are stored at `<root>/<key>` with a sidecar
      metadata file at `<root>/<key>.meta` containing a single line
      `content_type=<value>`. Sidecars beat xattrs (not portable
      across Linux/macOS) and beat an H2/SQLite index (overkill for
      single-tenant single-instance).
    
      **Atomicity**: every `put` writes to a `.tmp` sibling file and
      atomic-moves it into place, so a concurrent read against the same
      key never observes a half-written file.
    
      **Key safety**: `put`/`get`/`delete` all validate the key:
      rejects blank, leading `/`, `..` (path traversal), and trailing
      `.meta` (sidecar collision). Every resolved path is checked to
      stay under the configured root via `normalize().startsWith(root)`.
    
    - `FileController` at `/api/v1/files/**`:
        * `POST   /api/v1/files?key=...`            multipart upload (form field `file`)
        * `GET    /api/v1/files?prefix=...`         list by prefix
        * `GET    /api/v1/files/metadata?key=...`   metadata only (doesn't open the stream)
        * `GET    /api/v1/files/download?key=...`   stream bytes with the right Content-Type + filename
        * `DELETE /api/v1/files?key=...`            delete by key
      All endpoints @RequirePermission-gated via the keys declared in
      the metadata YAML. The `key` is a query parameter, NOT a path
      variable, so slashes in the key don't collide with Spring's path
      matching.
    
    - `META-INF/vibe-erp/metadata/files.yml` — 2 permissions + 1 menu.
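The key-safety rules reduce to one validator plus a root-containment check. A sketch under the constraints listed above (method and message shapes are approximations, not the framework's exact code):

```java
import java.nio.file.Path;

final class FileKeys {
    // Rejects the key shapes the backend refuses: blank, absolute,
    // traversal, and sidecar collision.
    static void validate(String key) {
        if (key == null || key.isBlank())
            throw new IllegalArgumentException("file key must not be blank");
        if (key.startsWith("/"))
            throw new IllegalArgumentException("file key must not start with '/' (got '" + key + "')");
        if (key.contains(".."))
            throw new IllegalArgumentException("file key must not contain '..' (got '" + key + "')");
        if (key.endsWith(".meta"))
            throw new IllegalArgumentException("file key must not end with '.meta' (got '" + key + "')");
    }

    // Belt and suspenders: even a valid-looking key must resolve to a
    // path that stays under the configured root after normalization.
    static Path resolveUnder(Path root, String key) {
        validate(key);
        Path resolved = root.resolve(key).normalize();
        if (!resolved.startsWith(root.normalize()))
            throw new IllegalArgumentException("file key escapes the storage root: '" + key + "'");
        return resolved;
    }
}
```

The double check matters: string-level rejection of `..` catches the obvious traversal, while `normalize().startsWith(root)` catches anything the string rules miss.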
    
    ## Smoke test (fresh DB, as admin)
    
    ```
    POST /api/v1/files?key=reports/smoke-test.txt  (multipart file)
      → 201 {"key":"reports/smoke-test.txt",
             "size":61,
             "contentType":"text/plain",
             "createdAt":"...","updatedAt":"..."}
    
    GET  /api/v1/files?prefix=reports/
      → [{"key":"reports/smoke-test.txt","size":61, ...}]
    
    GET  /api/v1/files/metadata?key=reports/smoke-test.txt
      → same handle, no bytes
    
    GET  /api/v1/files/download?key=reports/smoke-test.txt
      → 200 Content-Type: text/plain
         body: original upload content  (diff == 0)
    
    DELETE /api/v1/files?key=reports/smoke-test.txt
      → 200 {"removed":true}
    
    GET  /api/v1/files/download?key=reports/smoke-test.txt
      → 404
    
    # path traversal
    POST /api/v1/files?key=../escape  (multipart file)
      → 400 "file key must not contain '..' (got '../escape')"
    
    GET  /api/v1/_meta/metadata
      → permissions include ["files.file.read", "files.file.write"]
    ```
    
    Downloaded bytes match the uploaded bytes exactly — round-trip
    verified with `diff -q`.
    
    ## Tests
    
    - 12 new unit tests in `LocalDiskFileStorageTest` using JUnit 5's
      `@TempDir`:
      * `put then get round-trips content and metadata`
      * `put overwrites an existing key with the new content`
      * `get returns null for an unknown key`
      * `exists distinguishes present from absent`
      * `delete removes the file and its metadata sidecar`
      * `delete on unknown key returns false`
      * `list filters by prefix and returns sorted keys`
      * `put rejects a key with dot-dot`
      * `put rejects a key starting with slash`
      * `put rejects a key ending in dot-meta sidecar`
      * `put rejects blank content type`
      * `list sidecar metadata files are hidden from listing results`
    - Total framework unit tests: 327 (was 315), all green.
    
    ## What this unblocks
    
    - **P1.8 JasperReports integration** — now has a first-class home
      for generated PDFs. A report renderer can call
      `fileStorage.put("reports/quote-$code.pdf", "application/pdf", ...)`
      and return the handle to the caller.
    - **Plug-in attachments** — the printing-shop plug-in's future
      "plate scan image" or "QC report" attachments can be stored via
      `context.files` without touching the database.
    - **Export/import flows** — a scheduled job can write a nightly
      CSV export via `FileStorage.put` and a separate endpoint can
      download it; the scheduler-to-storage path is clean and typed.
    - **S3 backend when needed** — the interface is already streaming-
      based; dropping in an `S3FileStorage` @Component and toggling
      `vibeerp.files.backend: s3` in config is a future additive chunk,
      zero api.v1 churn.
    
    ## Non-goals (parking lot)
    
    - S3 backend. The config already reads `vibeerp.files.backend`,
      local is hard-wired for v1.0. Keeps the dependency tree off
      aws-sdk until a real consumer exists.
    - Range reads / HTTP `Range: bytes=...` support. Future
      enhancement for large-file streaming (e.g. video attachments).
    - Presigned URLs (for direct browser-to-S3 upload, skipping the
      framework). Design decision lives with the S3 backend chunk.
    - Per-file ACLs. The `files.file.*` permissions currently
      gate all files uniformly; per-path or per-owner ACLs would
      require a new metadata table and haven't been asked for by any
      PBC yet.
    - Plug-in loader integration. `PluginContext.files` throws the
      default `UnsupportedOperationException` until the plug-in
      loader is wired to pass the host `FileStorage` through
      `DefaultPluginContext`. Lands in the same chunk as the first
      plug-in that needs to store a file.
  • Closes the P1.10 row of the implementation plan. New platform-jobs
    subproject shipping a Quartz-backed background job engine adapted
    to the api.v1 JobHandler contract, so PBCs and plug-ins can register
    scheduled work without ever importing Quartz types.
    
    ## The shape (matches the P2.1 workflow engine)
    
    platform-jobs is to scheduled work what platform-workflow is to
    BPMN service tasks. Same pattern, same discipline:
    
     - A single `@Component` bridge (`QuartzJobBridge`) is the ONLY
       org.quartz.Job implementation in the framework. Every persistent
       trigger points at it.
     - A single `JobHandlerRegistry` (owner-tagged, duplicate-key-rejecting,
       ConcurrentHashMap-backed) holds every registered JobHandler by key.
       Mirrors `TaskHandlerRegistry`.
     - The bridge reads the handler key from the trigger's JobDataMap,
       looks it up in the registry, and executes the matching JobHandler
       inside a `PrincipalContext.runAs("system:jobs:<key>")` block so
       audit rows written during the job get a structured, greppable
       `created_by` value ("system:jobs:core.audit.prune") instead of
       the default `__system__`.
     - Handler-thrown exceptions are re-wrapped as `JobExecutionException`
       so Quartz's MISFIRE machinery handles them properly.
     - `@DisallowConcurrentExecution` on the bridge stops a long-running
       handler from being started again before it finishes.
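The owner-tagged registry pattern shared by `TaskHandlerRegistry` and `JobHandlerRegistry` boils down to a concurrent map of key → (owner, handler). A hedged Java sketch with simplified stand-in types (the real registries are Kotlin and carry more surface):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Owner tag travels with each registration so a plug-in stop can strip
// exactly its own handlers and nothing else.
record Entry<H>(String owner, H handler) {}

final class OwnerTaggedRegistry<H> {
    private final Map<String, Entry<H>> byKey = new ConcurrentHashMap<>();

    void register(String key, H handler, String owner) {
        Entry<H> prior = byKey.putIfAbsent(key, new Entry<>(owner, handler));
        if (prior != null)
            throw new IllegalStateException(
                    "duplicate handler key '" + key + "': held by '" + prior.owner()
                    + "', rejected for '" + owner + "'");  // both owners in the error
    }

    H find(String key) {
        Entry<H> e = byKey.get(key);
        return e == null ? null : e.handler();
    }

    // Plug-in stop path: remove only this owner's handlers, return count.
    int unregisterAllByOwner(String owner) {
        int before = byKey.size();
        byKey.values().removeIf(e -> e.owner().equals(owner));
        return before - byKey.size();
    }
}
```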
    
    ## api.v1 additions (package `org.vibeerp.api.v1.jobs`)
    
     - `JobHandler` — interface with `key()` + `execute(context)`.
       Analogous to the workflow TaskHandler. Plug-ins implement this
       to contribute scheduled work without any Quartz dependency.
     - `JobContext` — read-only execution context passed to the handler:
       principal, locale, correlation id, started-at instant, data map.
       Unlike TaskContext it has no `set()` writeback — scheduled jobs
       don't produce continuation state for a downstream step; a job
       that wants to talk to the rest of the system writes to its own
       domain table or publishes an event.
     - `JobScheduler` — injectable facade exposing:
         * `scheduleCron(scheduleKey, handlerKey, cronExpression, data)`
         * `scheduleOnce(scheduleKey, handlerKey, runAt, data)`
         * `unschedule(scheduleKey): Boolean`
         * `triggerNow(handlerKey, data): JobExecutionSummary`
           — synchronous in-thread execution, bypasses Quartz; used by
           the HTTP trigger endpoint and by tests.
         * `listScheduled(): List<ScheduledJobInfo>` — introspection
       Both `scheduleCron` and `scheduleOnce` are idempotent on
       `scheduleKey` (replace if exists).
     - `ScheduledJobInfo` + `JobExecutionSummary` + `ScheduleKind` —
       read-only DTOs returned by the scheduler.
    
    ## platform-jobs runtime
    
     - `QuartzJobBridge` — the shared Job impl. Routes by the
       `__vibeerp_handler_key` JobDataMap entry. Uses `@Autowired` field
       injection because Quartz instantiates Job classes through its
       own JobFactory (Spring Boot's `SpringBeanJobFactory` autowires
       fields after construction, which is the documented pattern).
     - `QuartzJobScheduler` — the concrete api.v1 `JobScheduler`
       implementation. Builds JobDetail + Trigger pairs under fixed
       group names (`vibeerp-jobs`), uses `addJob(replace=true)` +
       explicit `checkExists` + `rescheduleJob` for idempotent
       scheduling, strips the reserved `__vibeerp_handler_key` from the
       data visible to the handler.
     - `SimpleJobContext` — internal immutable `JobContext` impl.
       Defensive-copies the data map at construction.
     - `JobHandlerRegistry` — owner-tagged registry (OWNER_CORE by
       default, any other string for plug-in ownership). Same
       `register` / `unregister` / `unregisterAllByOwner` / `find` /
       `keys` / `size` surface as `TaskHandlerRegistry`. The plug-in
       loader integration seam is defined; the loader hook that calls
       `register(handler, pluginId)` lands when a plug-in actually ships
       a job handler (YAGNI).
     - `JobController` at `/api/v1/jobs/**`:
         * `GET  /handlers`                (perm `jobs.handler.read`)
         * `POST /handlers/{key}/trigger`  (perm `jobs.job.trigger`)
         * `GET  /scheduled`               (perm `jobs.schedule.read`)
         * `POST /scheduled`               (perm `jobs.schedule.write`)
         * `DELETE /scheduled/{key}`       (perm `jobs.schedule.write`)
     - `VibeErpPingJobHandler` — built-in diagnostic. Key
       `vibeerp.jobs.ping`. Logs the invocation and exits. Safe to
       trigger from any environment; mirrors the core
       `vibeerp.workflow.ping` workflow handler from P2.1.
     - `META-INF/vibe-erp/metadata/jobs.yml` — 4 permissions + 2 menus.
    
    ## Spring Boot config (application.yaml)
    
    ```
    spring.quartz:
      job-store-type: jdbc
      jdbc:
        initialize-schema: always     # creates QRTZ_* tables on first boot
      properties:
        org.quartz.scheduler.instanceName: vibeerp-scheduler
        org.quartz.scheduler.instanceId: AUTO
        org.quartz.threadPool.threadCount: "4"
        org.quartz.jobStore.driverDelegateClass: org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
        org.quartz.jobStore.isClustered: "false"
    ```
    
    ## The config trap caught during smoke-test (documented in-file)
    
    First boot crashed with `SchedulerConfigException: DataSource name
    not set.` because I'd initially added
    `org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX`
    to the raw Quartz properties. That is correct for a standalone
    Quartz deployment but WRONG for the Spring Boot starter: the
    starter configures a `LocalDataSourceJobStore` that wraps the
    Spring-managed DataSource automatically when `job-store-type=jdbc`,
    and setting `jobStore.class` explicitly overrides that wrapper back
    to Quartz's standalone JobStoreTX — which then fails at init
    because Quartz-standalone expects a separately-named `dataSource`
    property the Spring Boot starter doesn't supply. Fix: drop the
    `jobStore.class` property entirely. The `driverDelegateClass` is
    still fine to set explicitly because it's read by both the standalone
    and Spring-wrapped JobStore implementations. Rationale is documented
    in the config comment so the next maintainer doesn't add it back.
    
    ## Smoke test (fresh DB, as admin)
    
    ```
    GET  /api/v1/jobs/handlers
      → {"count": 1, "keys": ["vibeerp.jobs.ping"]}
    
    POST /api/v1/jobs/handlers/vibeerp.jobs.ping/trigger
         {"data": {"source": "smoke-test"}}
      → 200 {"handlerKey": "vibeerp.jobs.ping",
             "correlationId": "e142...",
             "startedAt": "...",
             "finishedAt": "...",
             "ok": true}
      log: VibeErpPingJobHandler invoked at=... principal='system:jobs:manual-trigger'
           data={source=smoke-test}
    
    GET  /api/v1/jobs/scheduled → []
    
    POST /api/v1/jobs/scheduled
         {"scheduleKey": "ping-every-sec",
          "handlerKey": "vibeerp.jobs.ping",
          "cronExpression": "0/1 * * * * ?",
          "data": {"trigger": "cron"}}
      → 201 {"scheduleKey": "ping-every-sec", "handlerKey": "vibeerp.jobs.ping"}
    
    # after 3 seconds
    GET  /api/v1/jobs/scheduled
      → [{"scheduleKey": "ping-every-sec",
          "handlerKey": "vibeerp.jobs.ping",
          "kind": "CRON",
          "cronExpression": "0/1 * * * * ?",
          "nextFireTime": "...",
          "previousFireTime": "...",
          "data": {"trigger": "cron"}}]
    
    DELETE /api/v1/jobs/scheduled/ping-every-sec → 200 {"removed": true}
    
    # handler log count after ~3 seconds of cron ticks
    grep -c "VibeErpPingJobHandler invoked" /tmp/boot.log → 5
    # 1 manual trigger + 4 cron ticks before unschedule — matches the
    # 0/1 * * * * ? expression
    
    # negatives
    POST /api/v1/jobs/handlers/nope/trigger
      → 400 "no JobHandler registered for key 'nope'"
    POST /api/v1/jobs/scheduled  {cronExpression: "not a cron"}
      → 400 "invalid Quartz cron expression: 'not a cron'"
    ```
    
    ## Three schemas coexist in one Postgres database
    
    ```
    SELECT count(*) FILTER (WHERE table_name LIKE 'qrtz_%')    AS quartz_tables,
           count(*) FILTER (WHERE table_name LIKE 'act_%')     AS flowable_tables,
           count(*) FILTER (WHERE table_name NOT LIKE 'qrtz_%'
                              AND table_name NOT LIKE 'act_%'
                              AND table_schema = 'public')     AS vibeerp_tables
    FROM information_schema.tables WHERE table_schema = 'public';
    
     quartz_tables | flowable_tables | vibeerp_tables
    ---------------+-----------------+----------------
                11 |              39 |             48
    ```
    
    Three independent schema owners (Quartz / Flowable / Liquibase) in
    one public schema, no collisions. Spring Boot's
    `QuartzDataSourceScriptDatabaseInitializer` runs the QRTZ_* DDL
    once and skips on subsequent boots; Flowable's internal MyBatis
    schema manager does the same for ACT_* tables; our Liquibase owns
    the rest.
    
    ## Tests
    
    - 6 new tests in `JobHandlerRegistryTest`:
      * initial handlers registered with OWNER_CORE
      * duplicate key fails fast with both owners in the error
      * unregisterAllByOwner only removes handlers owned by that id
      * unregister by key returns false for unknown
      * find on missing key returns null
      * blank key is rejected
    - 9 new tests in `QuartzJobSchedulerTest` (Quartz Scheduler mocked):
      * scheduleCron rejects an unknown handler key
      * scheduleCron rejects an invalid cron expression
      * scheduleCron adds job + schedules trigger when nothing exists yet
      * scheduleCron reschedules when the trigger already exists
      * scheduleOnce uses a simple trigger at the requested instant
      * unschedule returns true/false correctly
      * triggerNow calls the handler synchronously and returns ok=true
      * triggerNow propagates the handler's exception
      * triggerNow rejects an unknown handler key
    - Total framework unit tests: 315 (was 300), all green.
    
    ## What this unblocks
    
    - **pbc-finance audit prune** — a core recurring job that deletes
      posted journal entries older than N days, driven by a cron from
      a Tier 1 metadata row.
    - **Plug-in scheduled work** — once the loader integration hook is
      wired (trivial follow-up), any plug-in's `start(context)` can
      register a JobHandler via `context.jobs.register(handler)` and
      the host strips it on plug-in stop via `unregisterAllByOwner`.
    - **Delayed workflow continuations** — a BPMN handler can call
      `jobScheduler.scheduleOnce(...)` to "re-evaluate this workflow
      in 24 hours if no one has approved it", bridging the workflow
      engine and the scheduler without introducing Thread.sleep.
    - **Outbox draining strategy** — the existing 5-second OutboxPoller
      can move from a Spring @Scheduled to a Quartz cron so it
      inherits the scheduler's persistence, misfire handling, and the
      future clustering story.
    
    ## Non-goals (parking lot)
    
    - **Clustered scheduling.** `isClustered=false` for now. Making
      this true requires every instance to share a unique `instanceId`
      and agree on the JDBC lock policy — doable but out of v1.0 scope
      since vibe_erp is single-tenant single-instance by design.
    - **Async execution of triggerNow.** The current `triggerNow` runs
      synchronously on the caller thread so HTTP requests see the real
      result. A future "fire and forget" endpoint would delegate to
      `Scheduler.triggerJob(...)` against the JobDetail instead.
    - **Per-job permissions.** Today the four `jobs.*` permissions gate
      the whole controller. A future enhancement could attach
      per-handler permissions (so "trigger audit prune" requires a
      different permission than "trigger pricing refresh").
    - **Plug-in loader integration.** The seam is defined on
      `JobHandlerRegistry` (owner tagging + unregisterAllByOwner) but
      `VibeErpPluginManager` doesn't call it yet. Lands in the same
      chunk as the first plug-in that ships a JobHandler.
  • …alidates locations at create
    
    Follow-up to the pbc-warehousing chunk. Plugs a real gap noticed in
    the smoke test: an unknown fromLocationCode or toLocationCode on a
    StockTransfer was silently accepted at create() and only surfaced
    as a confirm()-time rollback, which is a confusing UX — the operator
    types TR-001 wrong, hits "create", then hits "confirm" minutes later
    and sees "location GHOST-SRC is not in the inventory directory".
    
    ## api.v1 growth
    
    New cross-PBC method on `InventoryApi`:
    
        fun findLocationByCode(locationCode: String): LocationRef?
    
    Parallel shape to `CatalogApi.findItemByCode` — a lookup-by-code
    returning a lightweight ref or null, safe for any cross-PBC consumer
    to inject. The returned `LocationRef` data class carries id, code,
    name, type (as a String, not the inventory-internal LocationType
    enum — rationale in the KDoc), and active flag. Fields that are
    NOT part of the cross-PBC contract (audit columns, ext JSONB, the
    raw JPA entity) stay inside pbc-inventory.
    
    This is an additive api.v1 change within the v1 line — no breaking
    rename, no signature churn on existing methods. The interface adds a new
    abstract method, which IS technically a source-breaking change for
    any in-tree implementation, but the only impl is
    pbc-inventory/InventoryApiAdapter which is updated in the same
    commit. No external plug-in implements InventoryApi (by design;
    plug-ins inject it, they don't provide it).
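A hypothetical Java rendering of the lightweight ref (the real class is a Kotlin data class; field names follow the description above, the `id` type is an assumption):

```java
import java.util.Map;

// Lightweight cross-PBC ref: `type` is a String, not the
// inventory-internal LocationType enum, so api.v1 never leaks a PBC type.
record LocationRef(long id, String code, String name, String type, boolean active) {}

final class InventoryLookup {
    // Stand-in for InventoryApi.findLocationByCode: lookup-by-code
    // returning the ref or null, mirroring CatalogApi.findItemByCode.
    static LocationRef findLocationByCode(Map<String, LocationRef> store, String code) {
        return store.get(code);
    }
}
```

The null return (rather than an exception) keeps the facade usable for "does this exist?" checks without a try/catch at every call site.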
    
    ## Adapter implementation
    
    `InventoryApiAdapter.findLocationByCode` resolves the location via
    the existing `LocationJpaRepository.findByCode`, which is exactly
    what `recordMovement` already uses. A new private extension
    `Location.toRef()` builds the api.v1 DTO. Zero new SQL; zero new
    repository methods.
    
    ## pbc-warehousing wiring
    
    `StockTransferService.create` now calls the facade twice — once for
    the source location, once for the destination — BEFORE validating
    lines. The five-step ordering is: code uniqueness → from != to →
    non-empty lines → both locations exist and are active → per-line
    validation. Unknown locations produce a 400 with a clear message;
    deactivated locations produce a 400 distinguishing "doesn't exist"
    from "exists but can't be used":
    
        "from location code 'GHOST-SRC' is not in the inventory directory"
        "from location 'WH-CLOSED' is deactivated and cannot be transfer source"
    
    The confirm() path is unchanged. Locations may still vanish between
    create and confirm (though the likelihood is low for a normal
    workflow), and `recordMovement` will still raise its own error in
    that case — belt and suspenders.
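The checks run in a fixed order so the operator sees the most fundamental failure first. A condensed Java sketch (the real service is Kotlin; names and message text approximate it, and the facade is modeled as a function):

```java
import java.util.List;
import java.util.Set;
import java.util.function.Function;

final class TransferValidator {
    record Loc(String code, boolean active) {}

    // `locations` stands in for InventoryApi.findLocationByCode.
    static void validateCreate(String code, String from, String to,
                               List<String> lineItemCodes,
                               Set<String> existingCodes,
                               Function<String, Loc> locations) {
        if (existingCodes.contains(code))                       // 1. code uniqueness
            throw new IllegalArgumentException("transfer code '" + code + "' already exists");
        if (from.equals(to))                                    // 2. from != to
            throw new IllegalArgumentException("from and to locations must differ");
        if (lineItemCodes.isEmpty())                            // 3. non-empty lines
            throw new IllegalArgumentException("a transfer needs at least one line");
        for (String side : List.of(from, to)) {                 // 4. locations exist + active
            Loc loc = locations.apply(side);
            if (loc == null)
                throw new IllegalArgumentException(
                        "location code '" + side + "' is not in the inventory directory");
            if (!loc.active())
                throw new IllegalArgumentException(
                        "location '" + side + "' is deactivated");
        }
        // 5. per-line validation would follow here (item exists, quantity > 0, ...)
    }
}
```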
    
    ## Smoke test
    
    ```
    POST /api/v1/inventory/locations {code: WH-GOOD, type: WAREHOUSE}
    POST /api/v1/catalog/items       {code: ITEM-1, baseUomCode: ea}
    
    POST /api/v1/warehousing/stock-transfers
         {code: TR-bad, fromLocationCode: GHOST-SRC, toLocationCode: WH-GOOD,
          lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
      → 400 "from location code 'GHOST-SRC' is not in the inventory directory"
         (before this commit: 201 DRAFT, then 400 at confirm)
    
    POST /api/v1/warehousing/stock-transfers
         {code: TR-bad2, fromLocationCode: WH-GOOD, toLocationCode: GHOST-DST,
          lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
      → 400 "to location code 'GHOST-DST' is not in the inventory directory"
    
    POST /api/v1/warehousing/stock-transfers
         {code: TR-ok, fromLocationCode: WH-GOOD, toLocationCode: WH-OTHER,
          lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
      → 201 DRAFT   ← happy path still works
    ```
    
    ## Tests
    
    - Updated the 3 existing `StockTransferServiceTest` tests that
      created real transfers to stub `inventory.findLocationByCode` for
      both WH-A and WH-B via a new `stubLocation()` helper.
    - 3 new tests:
      * `create rejects unknown from location via InventoryApi`
      * `create rejects unknown to location via InventoryApi`
      * `create rejects a deactivated from location`
    - Total framework unit tests: 300 (was 297), all green.
    
    ## Why this isn't a breaking api.v1 change
    
    InventoryApi is an interface consumed by other PBCs and by plug-ins,
    implemented ONLY by pbc-inventory. Adding a new method to an
    interface IS a source-breaking change for any implementer — but
    the framework's dependency rules mean no external code implements
    this interface. Plug-ins and other PBCs CONSUME it via dependency
    injection; the only production impl is InventoryApiAdapter, updated
    in the same commit. Binary compatibility for consumers is
    preserved: existing call sites compile and run unchanged because
    only the interface grew, not its existing methods.
    
    If/when a third party implements InventoryApi (e.g. a test double
    outside the framework, or a custom backend plug-in), this would be
    a semver-major-worthy addition. For the in-tree framework, it's
    additive-within-a-major.
    zichun authored
     
    Browse Code »
  • Closes the core PBC row of the v1.0 target. Ships pbc-quality as a
    lean v1 recording-only aggregate: any caller that performs a quality
    inspection (inbound goods, in-process work order output, outbound
    shipment) appends an immutable InspectionRecord with a decision
    (APPROVED/REJECTED), inspected/rejected quantities, a free-form
    source reference, and the inspector's principal id.
    
    ## Deliberately narrow v1 scope
    
    pbc-quality does NOT ship:
      - cross-PBC writes (no "rejected stock gets auto-quarantined" rule)
      - event publishing (no InspectionRecordedEvent in api.v1 yet)
      - inspection plans or templates (no "item X requires checks Y, Z")
      - multi-check records (one decision per row; multi-step
        inspections become multiple records)
    
    The rationale is the "follow the consumer" discipline: every seam
    the framework adds has to be driven by a real consumer. With no PBC
    yet subscribing to inspection events or calling into pbc-quality,
    speculatively building those capabilities would be guessing the
    shape. Future chunks that actually need them (e.g. pbc-warehousing
    auto-quarantine on rejection, pbc-production WorkOrder scrap from
    rejected QC) will grow the seam into the shape they need.
    
    Even at this narrow scope pbc-quality delivers real value: a
    queryable, append-only, permission-gated record of every QC
    decision in the system, filterable by source reference or item
    code, and linked to the catalog via CatalogApi.
    
    ## Module contents
    
    - `build.gradle.kts` — new Gradle subproject following the existing
      recipe. api-v1 + platform/persistence + platform/security only;
      no cross-pbc deps (guardrail #9 stays honest).
    - `InspectionRecord` entity — code, item_code, source_reference,
      decision (enum), inspected_quantity, rejected_quantity, inspector
      (principal id as String, same convention as created_by), reason,
      inspected_at. Owns table `quality__inspection_record`. No `ext`
      column in v1 — the aggregate is simple enough that adding Tier 1
      customization now would be speculation; it can be added in one
      edit when a customer asks for it.
    - `InspectionDecision` enum — APPROVED, REJECTED. Deliberately
      two-valued; see the entity KDoc for why "conditional accept" is
      rejected as a shape.
    - `InspectionRecordJpaRepository` — existsByCode, findByCode,
      findBySourceReference, findByItemCode.
    - `InspectionRecordService` — ONE write verb `record`. Inspections
      are immutable; revising means recording a new one with a new code.
      Validates:
        * code is unique
        * source reference non-blank
        * inspected quantity > 0
        * rejected quantity >= 0
        * rejected <= inspected
        * APPROVED ↔ rejected = 0, REJECTED ↔ rejected > 0
        * itemCode resolves via CatalogApi
      Inspector is read from `PrincipalContext.currentOrSystem()` at
      call time so a real HTTP user records their own inspections and
      a background job recording a batch uses a named system principal.
    - `InspectionRecordController` — `/api/v1/quality/inspections`
      with GET list (supports `?sourceReference=` and `?itemCode=`
      query params), GET by id, GET by-code, POST record. Every
      endpoint @RequirePermission-gated.
    - `META-INF/vibe-erp/metadata/quality.yml` — 1 entity, 2
      permissions (`quality.inspection.read`, `quality.inspection.record`),
      1 menu.
    - `distribution/.../db/changelog/pbc-quality/001-quality-init.xml`
      — single table with the full audit column set plus:
        * CHECK decision IN ('APPROVED', 'REJECTED')
        * CHECK inspected_quantity > 0
        * CHECK rejected_quantity >= 0
        * CHECK rejected_quantity <= inspected_quantity
      The application enforces the biconditional (APPROVED ↔ rejected=0);
      a cross-column CHECK could express it too, but keeping that rule in
      one place in application code is the simpler shape, so the DB
      enforces only the weaker "rejected is within bounds" guards —
      enough that a direct INSERT can't fabricate nonsense.
    - `settings.gradle.kts`, `distribution/build.gradle.kts`,
      `master.xml` all wired.
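
    The invariant stack, sketched (names illustrative; the real verb also checks code uniqueness and resolves `itemCode` through CatalogApi, omitted here):

    ```kotlin
    // Sketch of InspectionRecordService.record's quantity/decision invariants.
    // The first three guards mirror the DB CHECKs; the biconditional at the
    // end is application-only.
    enum class InspectionDecision { APPROVED, REJECTED }

    class BadRequest(message: String) : RuntimeException(message)

    fun validateInspection(decision: InspectionDecision, inspected: Int, rejected: Int) {
        if (inspected <= 0) throw BadRequest("inspected quantity must be > 0 (got $inspected)")
        if (rejected < 0) throw BadRequest("rejected quantity must be >= 0 (got $rejected)")
        if (rejected > inspected)
            throw BadRequest("rejected quantity ($rejected) cannot exceed inspected ($inspected)")
        when (decision) {
            InspectionDecision.APPROVED ->
                if (rejected != 0) throw BadRequest(
                    "APPROVED inspection must have rejected quantity = 0 (got $rejected); " +
                        "record a REJECTED inspection instead")
            InspectionDecision.REJECTED ->
                if (rejected == 0) throw BadRequest(
                    "REJECTED inspection must have rejected quantity > 0")
        }
    }
    ```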
    
    ## Smoke test (fresh DB + running app, as admin)
    
    ```
    POST /api/v1/catalog/items {code: WIDGET-1, baseUomCode: ea}
      → 201
    
    POST /api/v1/quality/inspections
         {code: QC-2026-001, itemCode: WIDGET-1, sourceReference: "WO:WO-001",
          decision: APPROVED, inspectedQuantity: 100, rejectedQuantity: 0}
      → 201 {inspector: <admin principal uuid>, inspectedAt: "..."}
    
    POST /api/v1/quality/inspections
         {code: QC-2026-002, itemCode: WIDGET-1, sourceReference: "WO:WO-002",
          decision: REJECTED, inspectedQuantity: 50, rejectedQuantity: 7,
          reason: "surface scratches detected on 7 units"}
      → 201
    
    GET  /api/v1/quality/inspections?sourceReference=WO:WO-001
      → [{code: QC-2026-001, ...}]
    GET  /api/v1/quality/inspections?itemCode=WIDGET-1
      → [APPROVED, REJECTED]   ← filter works, 2 records
    
    # Negative: APPROVED with positive rejected
    POST /api/v1/quality/inspections
         {decision: APPROVED, rejectedQuantity: 3, ...}
      → 400 "APPROVED inspection must have rejected quantity = 0 (got 3);
             record a REJECTED inspection instead"
    
    # Negative: rejected > inspected
    POST /api/v1/quality/inspections
         {decision: REJECTED, inspectedQuantity: 5, rejectedQuantity: 10, ...}
      → 400 "rejected quantity (10) cannot exceed inspected (5)"
    
    GET  /api/v1/_meta/metadata
      → permissions include ["quality.inspection.read",
                              "quality.inspection.record"]
    ```
    
    The `inspector` field on the created records contains the admin
    user's principal UUID exactly as written by the
    `PrincipalContextFilter` — proving the audit trail end-to-end.
    
    ## Tests
    
    - 9 new unit tests in `InspectionRecordServiceTest`:
      * `record persists an APPROVED inspection with rejected=0`
      * `record persists a REJECTED inspection with positive rejected`
      * `inspector defaults to system when no principal is bound` —
        validates the `PrincipalContext.currentOrSystem()` fallback
      * `record rejects duplicate code`
      * `record rejects non-positive inspected quantity`
      * `record rejects rejected greater than inspected`
      * `APPROVED with positive rejected is rejected`
      * `REJECTED with zero rejected is rejected`
      * `record rejects unknown items via CatalogApi`
    - Total framework unit tests: 297 (was 288), all green.
    
    ## Framework state after this commit
    
    - **20 → 21 Gradle subprojects**
    - **10 of 10 core PBCs live** (pbc-identity, pbc-catalog, pbc-partners,
      pbc-inventory, pbc-warehousing, pbc-orders-sales, pbc-orders-purchase,
      pbc-finance, pbc-production, pbc-quality). The P5.x row of the
      implementation plan is complete at minimal v1 scope.
    - The v1.0 acceptance bar's "core PBC coverage" line is met. Remaining
      v1.0 work is cross-cutting (reports, forms, scheduler, web SPA)
      plus the richer per-PBC v2/v3 scopes.
    
    ## What this unblocks
    
    - **Cross-PBC quality integration** — any PBC that needs to react
      to a quality decision can subscribe when pbc-quality grows its
      event. pbc-warehousing quarantine on rejection is the obvious
      first consumer.
    - **The full buy-make-sell BPMN scenario** — now every step has a
      home: sales → procurement → warehousing → production → quality →
      finance are all live. The big reference-plug-in end-to-end
      flow is unblocked at the PBC level.
    - **Completes the P5.x row** of the implementation plan. Remaining
      v1.0 work is cross-cutting platform units (P1.8 reports, P1.9
      files, P1.10 jobs, P2.2/P2.3 designer/forms) plus the web SPA.
  • Ninth core PBC. Ships the first-class orchestration aggregate for
    moving stock between locations: a header + lines that represents
    operator intent, and a confirm() verb that atomically posts the
    matching TRANSFER_OUT / TRANSFER_IN ledger pair per line via the
    existing InventoryApi.recordMovement facade.
    
    Takes the framework's core-PBC count to 9 of 10 (only pbc-quality
    remains in the P5.x row).
    
    ## The shape
    
    pbc-warehousing sits above pbc-inventory in the dependency graph:
    it doesn't replace the flat movement ledger, it orchestrates
    multi-row ledger writes with a business-level document on top. A
    DRAFT `warehousing__stock_transfer` row is queued intent (pickers
    haven't started yet); a CONFIRMED row reflects movements that have
    already posted to the `inventory__stock_movement` ledger. Each
    confirmed line becomes two ledger rows:
    
      TRANSFER_OUT(itemCode, fromLocationCode, -quantity, ref="TR:<code>")
      TRANSFER_IN (itemCode, toLocationCode,    quantity, ref="TR:<code>")
    
    All rows of one confirm call run inside ONE @Transactional method,
    so a failure anywhere — an unknown item, an unknown location, a
    balance that would go below zero — rolls back both halves of EVERY
    line. There is no half-confirmed transfer.
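
    A toy model of that atomicity contract (not the real service — here a staged buffer stands in for the database transaction that @Transactional provides, so any failure discards every pending row):

    ```kotlin
    // Illustrative in-memory ledger; names and the Int quantities are assumptions.
    data class Movement(
        val itemCode: String, val location: String,
        val delta: Int, val reason: String, val reference: String,
    )

    class Ledger {
        val rows = mutableListOf<Movement>()

        private fun balance(item: String, loc: String, staged: List<Movement>) =
            (rows + staged).filter { it.itemCode == item && it.location == loc }.sumOf { it.delta }

        fun confirmTransfer(code: String, from: String, to: String, lines: List<Pair<String, Int>>) {
            val staged = mutableListOf<Movement>()
            for ((item, qty) in lines) {
                // OUT first per line, so a balance-goes-negative error aborts
                // before the destination location is ever touched.
                if (balance(item, from, staged) - qty < 0)
                    throw IllegalStateException("movement would push balance for '$item' at $from below zero")
                staged += Movement(item, from, -qty, "TRANSFER_OUT", "TR:$code")
                staged += Movement(item, to, qty, "TRANSFER_IN", "TR:$code")
            }
            rows += staged  // "commit": all rows land, or none do
        }
    }
    ```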
    
    ## Module contents
    
    - `build.gradle.kts` — new Gradle subproject, api-v1 + platform/*
      dependencies only. No cross-PBC dependency (guardrail #9 stays
      honest; CatalogApi + InventoryApi both come in via api.v1.ext).
    - `StockTransfer` entity — header with code, from/to location
      codes, status (DRAFT/CONFIRMED/CANCELLED), transfer_date, note,
      OneToMany<StockTransferLine>. Table name
      `warehousing__stock_transfer`.
    - `StockTransferLine` entity — lineNo, itemCode, quantity.
      `transfer_id → warehousing__stock_transfer(id) ON DELETE CASCADE`,
      unique `(transfer_id, line_no)`.
    - `StockTransferJpaRepository` — existsByCode + findByCode.
    - `StockTransferService` — create / confirm / cancel + three read
      methods. @Transactional service-level; all state transitions run
      through @Transactional methods so the event-bus MANDATORY
      propagation (if/when a pbc-warehousing event is added later) has
      a transaction to join. Business invariants:
        * code is unique (existsByCode short-circuit)
        * from != to (enforced in code AND in the Liquibase CHECK)
        * at least one line
        * each line: positive line_no, unique per transfer, positive
          quantity, itemCode must resolve via CatalogApi.findItemByCode
        * confirm requires DRAFT; writes OUT-first-per-line so a
          balance-goes-negative error aborts before touching the
          destination location
        * cancel requires DRAFT; CONFIRMED transfers are terminal
          (reverse by creating a NEW transfer in the opposite direction,
          matching the document-discipline rule every other PBC uses)
    - `StockTransferController` — `/api/v1/warehousing/stock-transfers`
      with GET list, GET by id, GET by-code, POST create, POST
      {id}/confirm, POST {id}/cancel. Every endpoint
      @RequirePermission-gated using the keys declared in the metadata
      YAML. Matches the shape of pbc-orders-sales, pbc-orders-purchase,
      pbc-production.
    - DTOs use the established pattern — jakarta.validation on the
      request, response mapping via extension functions.
    - `META-INF/vibe-erp/metadata/warehousing.yml` — 1 entity, 4
      permissions, 1 menu. Loaded by MetadataLoader at boot, visible
      via `GET /api/v1/_meta/metadata`.
    - `distribution/src/main/resources/db/changelog/pbc-warehousing/001-warehousing-init.xml`
      — creates both tables with the full audit column set, state
      CHECK constraint, locations-distinct CHECK, unique
      (transfer_id, line_no) index, quantity > 0 CHECK, item_code
      index for cross-PBC grep.
    - `settings.gradle.kts`, `distribution/build.gradle.kts`,
      `master.xml` all wired.
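
    The header's state machine, condensed (the real service throws a 400-mapped exception rather than `IllegalStateException`, and its confirm() additionally posts the ledger pair per line):

    ```kotlin
    // Sketch of the DRAFT → CONFIRMED / DRAFT → CANCELLED transitions;
    // CONFIRMED is terminal — reverse by creating a new opposite transfer.
    enum class TransferStatus { DRAFT, CONFIRMED, CANCELLED }

    class StockTransfer(val code: String) {
        var status: TransferStatus = TransferStatus.DRAFT
            private set

        fun confirm() {
            check(status == TransferStatus.DRAFT) {
                "cannot confirm stock transfer $code in status $status; only DRAFT can be confirmed"
            }
            status = TransferStatus.CONFIRMED
        }

        fun cancel() {
            check(status == TransferStatus.DRAFT) {
                "cannot cancel stock transfer $code in status $status; only DRAFT can be cancelled"
            }
            status = TransferStatus.CANCELLED
        }
    }
    ```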
    
    ## Smoke test (fresh DB + running app)
    
    ```
    # seed
    POST /api/v1/catalog/items   {code: PAPER-A4, baseUomCode: sheet}
    POST /api/v1/catalog/items   {code: PAPER-A3, baseUomCode: sheet}
    POST /api/v1/inventory/locations {code: WH-MAIN, type: WAREHOUSE}
    POST /api/v1/inventory/locations {code: WH-SHOP, type: WAREHOUSE}
    POST /api/v1/inventory/movements {itemCode: PAPER-A4, locationId: <WH-MAIN>, delta: 100, reason: RECEIPT}
    POST /api/v1/inventory/movements {itemCode: PAPER-A3, locationId: <WH-MAIN>, delta: 50,  reason: RECEIPT}
    
    # exercise the new PBC
    POST /api/v1/warehousing/stock-transfers
         {code: TR-001, fromLocationCode: WH-MAIN, toLocationCode: WH-SHOP,
          lines: [{lineNo: 1, itemCode: PAPER-A4, quantity: 30},
                  {lineNo: 2, itemCode: PAPER-A3, quantity: 10}]}
      → 201 DRAFT
    POST /api/v1/warehousing/stock-transfers/<id>/confirm
      → 200 CONFIRMED
    
    # verify balances via the raw DB (the HTTP stock-balance endpoint
    # has a separate unrelated bug returning 500; the ledger state is
    # what this commit is proving)
    SELECT item_code, location_id, quantity FROM inventory__stock_balance;
      PAPER-A4 / WH-MAIN →  70   ← debited 30
      PAPER-A4 / WH-SHOP →  30   ← credited 30
      PAPER-A3 / WH-MAIN →  40   ← debited 10
      PAPER-A3 / WH-SHOP →  10   ← credited 10
    
    SELECT item_code, location_id, reason, delta, reference
      FROM inventory__stock_movement ORDER BY occurred_at;
      PAPER-A4 / WH-MAIN / TRANSFER_OUT / -30 / TR:TR-001
      PAPER-A4 / WH-SHOP / TRANSFER_IN  /  30 / TR:TR-001
      PAPER-A3 / WH-MAIN / TRANSFER_OUT / -10 / TR:TR-001
      PAPER-A3 / WH-SHOP / TRANSFER_IN  /  10 / TR:TR-001
    ```
    
    Four rows all tagged `TR:TR-001`. A grep of the ledger attributes
    both halves of each line to the single source transfer document.
    
    ## Transactional rollback test (in the same smoke run)
    
    ```
    # ask for more than exists
    POST /api/v1/warehousing/stock-transfers
         {code: TR-002, fromLocationCode: WH-MAIN, toLocationCode: WH-SHOP,
          lines: [{lineNo: 1, itemCode: PAPER-A4, quantity: 1000}]}
      → 201 DRAFT
    POST /api/v1/warehousing/stock-transfers/<id>/confirm
      → 400 "stock movement would push balance for 'PAPER-A4' at
             location <WH-MAIN> below zero (current=70.0000, delta=-1000.0000)"
    
    # assert TR-002 is still DRAFT
    GET /api/v1/warehousing/stock-transfers/<id> → status: DRAFT  ← NOT flipped to CONFIRMED
    
    # assert the ledger still has exactly 6 rows (no partial writes)
    SELECT count(*) FROM inventory__stock_movement; → 6
    ```
    
    The failed confirm left no residue: status stayed DRAFT, and the
    ledger count is unchanged at 6 (the 2 RECEIPT seeds + the 4
    TRANSFER_OUT/IN from TR-001). Propagation.REQUIRED + Spring's
    default rollback-on-unchecked-exception semantics do exactly what
    the KDoc promises.
    
    ## State-machine guards
    
    ```
    POST /api/v1/warehousing/stock-transfers/<confirmed-id>/confirm
      → 400 "cannot confirm stock transfer TR-001 in status CONFIRMED;
             only DRAFT can be confirmed"
    
    POST /api/v1/warehousing/stock-transfers/<confirmed-id>/cancel
      → 400 "cannot cancel stock transfer TR-001 in status CONFIRMED;
             only DRAFT can be cancelled — reverse a confirmed transfer
             by creating a new one in the other direction"
    ```
    
    ## Tests
    
    - 10 new unit tests in `StockTransferServiceTest`:
      * `create persists a DRAFT transfer when everything validates`
      * `create rejects duplicate code`
      * `create rejects same from and to location`
      * `create rejects an empty line list`
      * `create rejects duplicate line numbers`
      * `create rejects non-positive quantities`
      * `create rejects unknown items via CatalogApi`
      * `confirm writes an atomic TRANSFER_OUT + TRANSFER_IN pair per line`
        — uses `verifyOrder` to assert OUT-first-per-line dispatch order
      * `confirm refuses a non-DRAFT transfer`
      * `cancel refuses a CONFIRMED transfer`
      * `cancel flips a DRAFT transfer to CANCELLED`
    - Total framework unit tests: 288 (was 278), all green.
    
    ## What this unblocks
    
    - **Real warehouse workflows** — confirm a transfer from a picker
      UI (R1 is pending), driven by a BPMN that hands the confirm to a
      TaskHandler once the physical move is complete.
    - **pbc-quality (P5.8, last remaining core PBC)** — inspection
      plans + results + holds. Holds would typically quarantine stock
      by moving it to a QUARANTINE location via a stock transfer,
      which is the natural consumer for this aggregate.
    - **Stocktakes (physical inventory reconciliation)** — future
      pbc-warehousing verb that compares counted vs recorded and posts
      the differences as ADJUSTMENT rows; shares the same
      `recordMovement` primitive.
  • …duction auto-creates WorkOrder
    
    First end-to-end cross-PBC workflow driven entirely from a customer
    plug-in through api.v1 surfaces. A printing-shop BPMN kicks off a
    TaskHandler that publishes a generic api.v1 event; pbc-production
    reacts by creating a DRAFT WorkOrder. The plug-in has zero
    compile-time coupling to pbc-production, and pbc-production has zero
    knowledge the plug-in exists.
    
    ## Why an event, not a facade
    
    Two options were on the table for "how does a plug-in ask
    pbc-production to create a WorkOrder":
    
      (a) add a new cross-PBC facade `api.v1.ext.production.ProductionApi`
          with a `createWorkOrder(command)` method
      (b) add a generic `WorkOrderRequestedEvent` in `api.v1.event.production`
          that anyone can publish — this commit
    
    Facade pattern (a) is what InventoryApi.recordMovement and
    CatalogApi.findItemByCode use: synchronous, in-transaction,
    caller-blocks-on-completion. Event pattern (b) is what
    SalesOrderConfirmedEvent → SalesOrderConfirmedSubscriber uses:
    asynchronous over the bus, still in-transaction (the bus uses
    `Propagation.MANDATORY` with synchronous delivery so a failure
    rolls everything back), but the caller doesn't need a typed result.
    
    Option (b) wins for plug-in → pbc-production:
    
    - Plug-in compile-time surface stays identical: plug-ins already
      import `api.v1.event.*` to publish. No new api.v1.ext package.
      Zero new plug-in dependency.
    - The outbox gets the row for free — a crash between publish and
      delivery replays cleanly from `platform__event_outbox`.
    - A second customer plug-in shipping a different flow that ALSO
      wants to auto-spawn work orders doesn't need a second facade, just
      publishes the same event. pbc-scheduling (future) can subscribe
      to the same channel without duplicating code.
    
    The synchronous facade pattern stays the right tool for cross-PBC
    operations the caller needs to observe (read-throughs, inventory
    debits that must block the current transaction). Creating a DRAFT
    work order is a fire-and-trust operation — the event shape fits.
    
    ## What landed
    
    ### api.v1 — WorkOrderRequestedEvent
    
    New event class `org.vibeerp.api.v1.event.production.WorkOrderRequestedEvent`
    with four required fields:
      - `code`: desired work-order code (must be unique globally;
        convention is to bake the source reference into it so duplicate
        detection is trivial, e.g. `WO-FROM-PRINTINGSHOP-Q-007`)
      - `outputItemCode` + `outputQuantity`: what to produce
      - `sourceReference`: opaque free-form pointer used in logs and
        the outbox audit trail. Example values:
        `plugin:printing-shop:quote:Q-007`,
        `pbc-orders-sales:SO-2026-001:L2`
    
    The class is a `DomainEvent` (not a `WorkOrderEvent` subclass — the
    existing `WorkOrderEvent` sealed interface is for LIFECYCLE events
    published BY pbc-production, not for inbound requests). `init`
    validators reject blank strings and non-positive quantities so a
    malformed event fails fast at publish time rather than at the
    subscriber.
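
    The shape, sketched as a plain class (the real one extends the framework's `DomainEvent`; field names follow the description above):

    ```kotlin
    import java.math.BigDecimal

    // Sketch of the event with its fail-fast init validators — a malformed
    // event dies at publish time, not at the subscriber.
    class WorkOrderRequestedEvent(
        val code: String,
        val outputItemCode: String,
        val outputQuantity: BigDecimal,
        val sourceReference: String,
    ) {
        init {
            require(code.isNotBlank()) { "code must not be blank" }
            require(outputItemCode.isNotBlank()) { "outputItemCode must not be blank" }
            require(outputQuantity > BigDecimal.ZERO) {
                "outputQuantity must be positive (got $outputQuantity)"
            }
            require(sourceReference.isNotBlank()) { "sourceReference must not be blank" }
        }
    }
    ```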
    
    ### pbc-production — WorkOrderRequestedSubscriber
    
    New `@Component` in `pbc/pbc-production/.../event/WorkOrderRequestedSubscriber.kt`.
    Subscribes in `@PostConstruct` via the typed-class `EventBus.subscribe`
    overload (same pattern as `SalesOrderConfirmedSubscriber` + the six
    pbc-finance order subscribers). The subscriber:
    
      1. Looks up `workOrders.findByCode(event.code)` as the idempotent
         short-circuit. If a WorkOrder with that code already exists
         (outbox replay, future async bus retry, developer re-running the
         same BPMN process), the subscriber logs at DEBUG and returns.
         **Second execution of the same BPMN produces the same outbox row
         which the subscriber then skips — the database ends up with
         exactly ONE WorkOrder regardless of how many times the process
         runs.**
      2. Calls `WorkOrderService.create(CreateWorkOrderCommand(...))` with
         the event's fields. `sourceSalesOrderCode` is null because this
         is the generic path, not the SO-driven one.
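
    The skeleton of that branch (lambda seams stand in for `workOrders.findByCode` and `WorkOrderService.create`; the real subscriber also logs the skip at DEBUG):

    ```kotlin
    // Sketch of the idempotent short-circuit: replays of the same event
    // leave exactly one WorkOrder behind.
    class WorkOrderRequestedSubscriber(
        private val findByCode: (String) -> Any?,
        private val createWorkOrder: (code: String, itemCode: String, quantity: Int) -> Unit,
    ) {
        fun handle(code: String, itemCode: String, quantity: Int) {
            if (findByCode(code) != null) return  // outbox replay / re-run: skip
            createWorkOrder(code, itemCode, quantity)
        }
    }
    ```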
    
    Why this is a SECOND subscriber rather than extending
    `SalesOrderConfirmedSubscriber`: the two events serve different
    producers. `SalesOrderConfirmedEvent` is pbc-orders-sales-specific
    and requires a round-trip through `SalesOrdersApi.findByCode` to
    fetch the lines; `WorkOrderRequestedEvent` carries everything the
    subscriber needs inline. Collapsing them would mean the generic
    path inherits the SO-flow's SO-specific lookup and short-circuit
    logic that doesn't apply to it.
    
    ### reference printing-shop plug-in — CreateWorkOrderFromQuoteTaskHandler
    
    New plug-in TaskHandler in
    `reference-customer/plugin-printing-shop/.../workflow/CreateWorkOrderFromQuoteTaskHandler.kt`.
    Captures the `PluginContext` via constructor — same pattern as
    `PlateApprovalTaskHandler` landed in `7b2ab34d` — and from inside
    `execute`:
    
      1. Reads `quoteCode`, `itemCode`, `quantity` off the process variables
         (`quantity` accepts Number or String since Flowable's variable
         coercion is flexible).
      2. Derives `workOrderCode = "WO-FROM-PRINTINGSHOP-$quoteCode"` and
         `sourceReference = "plugin:printing-shop:quote:$quoteCode"`.
      3. Logs via `context.logger.info(...)` — the line is tagged
         `[plugin:printing-shop]` by the framework's `Slf4jPluginLogger`.
      4. Publishes `WorkOrderRequestedEvent` via `context.eventBus.publish(...)`.
         This is the first time a plug-in TaskHandler publishes a cross-PBC
         event from inside a workflow — proves the event-bus leg of the
         handler-context pattern works end-to-end.
      5. Writes `workOrderCode` + `workOrderRequested=true` back to the
         process variables so a downstream BPMN step or the HTTP caller
         can see the derived code.
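
    Two of those details, sketched with assumed helper names (the derivations in steps 2 and the lenient quantity read in step 1 — Flowable may deliver a process variable as a Number or a String):

    ```kotlin
    import java.math.BigDecimal

    // Hypothetical helpers; the real handler inlines this logic in execute().
    fun deriveWorkOrderCode(quoteCode: String) = "WO-FROM-PRINTINGSHOP-$quoteCode"
    fun deriveSourceReference(quoteCode: String) = "plugin:printing-shop:quote:$quoteCode"

    fun coerceQuantity(raw: Any?): BigDecimal = when (raw) {
        is BigDecimal -> raw
        is Number -> BigDecimal(raw.toString())   // Int, Long, Double… from Flowable
        is String -> raw.toBigDecimal()           // variable arrived stringly typed
        else -> throw IllegalArgumentException(
            "quantity must be a Number or numeric String, got: $raw")
    }
    ```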
    
    The handler is registered in `PrintingShopPlugin.start(context)`
    alongside `PlateApprovalTaskHandler`:
    
        context.taskHandlers.register(PlateApprovalTaskHandler(context))
        context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context))
    
    Teardown via `unregisterAllByOwner("printing-shop")` still works
    unchanged — the scoped registrar tracks both handlers.
    
    ### reference printing-shop plug-in — quote-to-work-order.bpmn20.xml
    
    New BPMN file `processes/quote-to-work-order.bpmn20.xml` in the
    plug-in JAR. Single synchronous service task, process definition
    key `plugin-printing-shop-quote-to-work-order`, service task id
    `printing_shop.quote.create_work_order` (matches the handler key).
    Auto-deployed by the host's `PluginProcessDeployer` at plug-in
    start — the printing-shop plug-in now ships two BPMNs bundled into
    one Flowable deployment, both under category `printing-shop`.
    
    ## Smoke test (fresh DB)
    
    ```
    $ docker compose down -v && docker compose up -d db
    $ ./gradlew :distribution:bootRun &
    ...
    registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' ...
    registered TaskHandler 'printing_shop.quote.create_work_order' owner='printing-shop' ...
    [plugin:printing-shop] registered 2 TaskHandlers: printing_shop.plate.approve, printing_shop.quote.create_work_order
    PluginProcessDeployer: plug-in 'printing-shop' deployed 2 BPMN resource(s) as Flowable deploymentId='1e5c...':
      [processes/quote-to-work-order.bpmn20.xml, processes/plate-approval.bpmn20.xml]
    pbc-production subscribed to WorkOrderRequestedEvent via EventBus.subscribe (typed-class overload)
    
    # 1) seed a catalog item
    $ curl -X POST /api/v1/catalog/items
           {"code":"BOOK-HARDCOVER","name":"Hardcover book","itemType":"GOOD","baseUomCode":"ea"}
      → 201 BOOK-HARDCOVER
    
    # 2) start the plug-in's quote-to-work-order BPMN
    $ curl -X POST /api/v1/workflow/process-instances
           {"processDefinitionKey":"plugin-printing-shop-quote-to-work-order",
            "variables":{"quoteCode":"Q-007","itemCode":"BOOK-HARDCOVER","quantity":500}}
      → 201 {"ended":true,
             "variables":{"quoteCode":"Q-007",
                          "itemCode":"BOOK-HARDCOVER",
                          "quantity":500,
                          "workOrderCode":"WO-FROM-PRINTINGSHOP-Q-007",
                          "workOrderRequested":true}}
    
    Log lines observed:
      [plugin:printing-shop] quote Q-007: publishing WorkOrderRequestedEvent
         (code=WO-FROM-PRINTINGSHOP-Q-007, item=BOOK-HARDCOVER, qty=500)
      [production] WorkOrderRequestedEvent creating work order 'WO-FROM-PRINTINGSHOP-Q-007'
         for item 'BOOK-HARDCOVER' x 500 (source='plugin:printing-shop:quote:Q-007')
    
    # 3) verify the WorkOrder now exists in pbc-production
    $ curl /api/v1/production/work-orders
      → [{"id":"029c2482-...",
          "code":"WO-FROM-PRINTINGSHOP-Q-007",
          "outputItemCode":"BOOK-HARDCOVER",
          "outputQuantity":500.0,
          "status":"DRAFT",
          "sourceSalesOrderCode":null,
          "inputs":[], "ext":{}}]
    
    # 4) run the SAME BPMN a second time — verify idempotent
    $ curl -X POST /api/v1/workflow/process-instances
           {same body as above}
      → 201  (process ends, workOrderRequested=true, new event published + delivered)
    $ curl /api/v1/production/work-orders
      → count=1, still only WO-FROM-PRINTINGSHOP-Q-007
    ```
    
    Every single step runs through an api.v1 public surface. No framework
    core code knows the printing-shop plug-in exists; no plug-in code knows
    pbc-production exists. They meet on the event bus, and the outbox
    guarantees the delivery.
    
    ## Tests
    
    - 3 new tests in `pbc-production/.../WorkOrderRequestedSubscriberTest`:
      * `subscribe registers one listener for WorkOrderRequestedEvent`
      * `handle creates a work order from the event fields` — captures the
        `CreateWorkOrderCommand` and asserts every field
      * `handle short-circuits when a work order with that code already exists`
        — proves the idempotent branch
    - Total framework unit tests: 278 (was 275), all green.
    
    ## What this unblocks
    
    - **Richer multi-step BPMNs** in the plug-in that chain plate
      approval + quote → work order + production start + completion.
    - **Plug-in-owned Quote entity** — the printing-shop plug-in can now
      introduce a `plugin_printingshop__quote` table via its own Liquibase
      changelog and have its HTTP endpoint create quotes that kick off the
      quote-to-work-order workflow automatically (or on operator confirm).
    - **pbc-production routings/operations (v3)** — each operation becomes
      a BPMN step, potentially driven by plug-ins contributing custom
      steps via the same TaskHandler + event seam.
    - **Second reference plug-in** — any new customer plug-in can publish
      `WorkOrderRequestedEvent` from its own workflows without any
      framework change.
    
    ## Non-goals (parking lot)
    
    - The handler publishes but does not also read pbc-production state
      back. A future "wait for WO completion" BPMN step could subscribe
      to `WorkOrderCompletedEvent` inside a user-task + signal flow, but
      the engine's signal/correlation machinery isn't wired to
      plug-ins yet.
    - Quote entity + HTTP + real business logic. REF.1 proves the
      cross-PBC event seam; the richer quote lifecycle is a separate
      chunk that can layer on top of this.
    - Transactional rollback integration test. The synchronous bus +
      `Propagation.MANDATORY` guarantees it, but an explicit test that
      a subscriber throw rolls back both the ledger-adjacent writes and
      the Flowable process state would be worth adding with a real
      test container run.
  • Proves out the "handler-side plug-in context access" pattern: a
    plug-in's TaskHandler captures the PluginContext through its
    constructor when the plug-in instantiates it inside `start(context)`,
    and then uses `context.jdbc`, `context.logger`, etc. from inside
    `execute` the same way the plug-in's HTTP lambdas do. Zero new
    api.v1 surface was needed — the plug-in decides whether a handler
    takes a context or not, and a pure handler simply omits the
    constructor parameter.
    
    ## Why this pattern and not a richer TaskContext
    
    The alternatives were:
      (a) add a `PluginContext` field (or a narrowed projection of it)
          to api.v1 `TaskContext`, threading the host-owned context
          through the workflow engine
      (b) capture the context in the plug-in's handler constructor —
          this commit
    
    Option (a) would have forced every TaskHandler author — core PBC
    handlers too, not just plug-in ones — to reason about a per-plug-in
    context that wouldn't make sense for core PBCs. It would also have
    coupled api.v1 to the plug-in machinery in a way that leaks into
    every handler implementation.
    
    Option (b) is a pure plug-in-local pattern. A pure handler:
    
        class PureHandler : TaskHandler { ... }
    
    and a stateful handler look identical except for one constructor
    parameter:
    
        class StatefulHandler(private val context: PluginContext) : TaskHandler {
            override fun execute(task: WorkflowTask, ctx: TaskContext) {
                context.jdbc.update(...)
                context.logger.info(...)
            }
        }
    
    and both register the same way:
    
        context.taskHandlers.register(PureHandler())
        context.taskHandlers.register(StatefulHandler(context))
    
    The framework's `TaskHandlerRegistry`, `DispatchingJavaDelegate`,
    and `DelegateTaskContext` stay unchanged. Plug-in teardown still
    strips handlers via `unregisterAllByOwner(pluginId)` because
    registration still happens through the scoped registrar inside
    `start(context)`.
    
    ## What PlateApprovalTaskHandler now does
    
    Before this commit, the handler was a pure function that wrote
    `plateApproved=true` + metadata to the process variables and
    didn't touch the DB. Now it:
    
      1. Parses `plateId` out of the process variables as a UUID
         (fail-fast on non-UUID).
      2. Calls `context.jdbc.update` to set the plate row's `status`
         from 'DRAFT' to 'APPROVED', guarded by an explicit
         `WHERE id=:id AND status='DRAFT'`. The guard makes a second
         invocation a no-op (rowsUpdated=0) rather than silently
         overwriting a later status.
      3. Logs via the plug-in's PluginLogger — "plate {id} approved by
         user:admin (rows updated: 1)". Log lines are tagged
         `[plugin:printing-shop]` by the framework's Slf4jPluginLogger.
      4. Emits process output variables: `plateApproved=true`,
         `plateId=<uuid>`, `approvedBy=<principal label>`, `approvedAt=<instant>`,
         and `rowsUpdated=<count>` so callers can see whether the
         approval actually changed state.
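    
    The four steps reduce to a small guarded-update core. A minimal sketch, with a
    plain lambda standing in for `context.jdbc.update` — `approvePlate` and its
    parameter shape are illustrative names, not the framework's API:
    
    ```kotlin
    import java.util.UUID
    
    // Hypothetical sketch of the handler's core; the `update` lambda stands in
    // for context.jdbc.update(sql, params).
    fun approvePlate(
        plateIdRaw: String,
        update: (sql: String, params: Map<String, Any>) -> Int,
    ): Map<String, Any> {
        val plateId = UUID.fromString(plateIdRaw) // step 1: fail fast on non-UUID
        // step 2: guarded UPDATE — a second invocation matches zero rows
        val rows = update(
            "UPDATE plugin_printingshop__plate SET status = 'APPROVED' " +
                "WHERE id = :id AND status = 'DRAFT'",
            mapOf("id" to plateId),
        )
        // step 4: output variables let callers see whether state actually changed
        return mapOf(
            "plateApproved" to true,
            "plateId" to plateId.toString(),
            "rowsUpdated" to rows,
        )
    }
    ```
    
    The `AND status = 'DRAFT'` guard is what makes a re-run a no-op: the second
    invocation reports `rowsUpdated = 0` instead of silently re-approving.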
    
    ## Smoke test (fresh DB, full end-to-end loop)
    
    ```
    POST /api/v1/plugins/printing-shop/plates
         {"code":"PLATE-042","name":"Red cover plate","widthMm":320,"heightMm":480}
      → 201 {"id":"0bf577c9-...","status":"DRAFT",...}
    
    POST /api/v1/workflow/process-instances
         {"processDefinitionKey":"plugin-printing-shop-plate-approval",
          "variables":{"plateId":"0bf577c9-..."}}
      → 201 {"ended":true,
             "variables":{"plateId":"0bf577c9-...",
                          "rowsUpdated":1,
                          "approvedBy":"user:admin",
                          "approvedAt":"2026-04-09T05:01:01.779369Z",
                          "plateApproved":true}}
    
    GET  /api/v1/plugins/printing-shop/plates/0bf577c9-...
      → 200 {"id":"0bf577c9-...","status":"APPROVED", ...}
           ^^^^ note: was DRAFT a moment ago, NOW APPROVED — persisted to
           plugin_printingshop__plate via the handler's context.jdbc.update
    
    POST /api/v1/workflow/process-instances   (same plateId, second run)
      → 201 {"variables":{"rowsUpdated":0,"plateApproved":true,...}}
           ^^^^ idempotent guard: the WHERE status='DRAFT' clause prevents
           double-updates, rowsUpdated=0 on the re-run
    ```
    
    This is the first cross-cutting end-to-end business flow in the
    framework driven entirely through the public surfaces:
      1. Plug-in HTTP endpoint writes a domain row
      2. Workflow HTTP endpoint starts a BPMN process
      3. Plug-in-contributed BPMN (deployed via PluginProcessDeployer)
         routes to a plug-in-contributed TaskHandler (registered via
         context.taskHandlers)
      4. Handler mutates the same plug-in-owned table via context.jdbc
      5. Plug-in HTTP endpoint reads the new state
    Every step uses only api.v1. Zero framework core code knows the
    plug-in exists.
    
    ## Non-goals (parking lot)
    
    - Emitting an event from the handler. The next step in the
      plug-in workflow story is for a handler to publish a domain
      event via `context.eventBus.publish(...)` so OTHER subscribers
      (e.g. pbc-production waiting on a PlateApproved event) can react.
      This commit stays narrow: the handler only mutates its own plug-in
      state.
    - Transaction scope of the handler's DB write relative to the
      Flowable engine's process-state persistence. Today both go
      through the host DataSource and Spring transaction manager that
      Flowable auto-configures, so a handler throw rolls everything
      back — verified by walking the code path. An explicit test of
      transactional rollback lands with REF.1 when the handler takes
      on real business logic.
  • Completes the plug-in side of the embedded Flowable story. The P2.1
    core made plug-ins able to register TaskHandlers; this chunk makes
    them able to ship the BPMN processes those handlers serve.
    
    ## Why Flowable's built-in auto-deployer couldn't do it
    
    Flowable's Spring Boot starter scans the host classpath at engine
    startup for `classpath*:/processes/*.bpmn20.xml` and auto-deploys
    every hit. (Here in markdown the glob can appear literally; inside
    the Kotlin KDoc it has to be paraphrased, because an embedded
    slash-star opens a nested block comment — see the "Kotlin trap"
    section below.) PF4J plug-ins load through an isolated child
    classloader that is NOT visible to that scan, so a `processes/*.bpmn20.xml`
    resource shipped inside a plug-in JAR is never seen. This chunk adds
    a dedicated host-side deployer that opens each plug-in JAR file
    directly (same JarFile walk pattern as
    `MetadataLoader.loadFromPluginJar`) and hand-registers the BPMNs
    with the Flowable `RepositoryService`.
    
    ## Mechanism
    
    ### New PluginProcessDeployer (platform-workflow)
    
    One Spring bean, two methods:
    
    - `deployFromPlugin(pluginId, jarPath): String?` — walks the JAR,
      collects every entry whose name starts with `processes/` and ends
      with `.bpmn20.xml` or `.bpmn`, and bundles the whole set into one
      Flowable `Deployment` named `plugin:<id>` with `category = pluginId`.
      Returns the deployment id or null (missing JAR / no BPMN resources).
      One deployment per plug-in keeps undeploy atomic and makes the
      teardown query unambiguous.
    - `undeployByPlugin(pluginId): Int` — runs
      `createDeploymentQuery().deploymentCategory(pluginId).list()` and
      calls `deleteDeployment(id, cascade=true)` on each hit. Cascading
      removes process instances and history rows along with the
      deployment — "uninstalling a plug-in makes it disappear". Idempotent:
      a second call returns 0.
    
    The deployer reads the JAR entries into byte arrays inside the
    JarFile's `use` block and then passes the bytes to
    `DeploymentBuilder.addBytes(name, bytes)` outside the block, so the
    jar handle is already closed by the time Flowable sees the
    deployment. No input-stream lifetime tangles.
    
    ### VibeErpPluginManager wiring
    
    - New constructor dependency on `PluginProcessDeployer`.
    - Deploy happens AFTER `start(context)` succeeds. The ordering matters
      because a plug-in can only register its TaskHandlers during
      `start(context)`, and a deployed BPMN whose service-task delegate
      expression resolves to a key with no matching handler would still
      deploy (Flowable only resolves delegates at process-start time).
      Registering handlers first is the safer default: the moment the
      deployment lands, every referenced handler is already in the
      TaskHandlerRegistry.
    - BPMN deployment failure AFTER a successful `start(context)` now
      fully unwinds the plug-in state: call `instance.stop()`, remove
      the plug-in from the `started` list, strip its endpoints + its
      TaskHandlers + call `undeployByPlugin` (belt and suspenders — the
      deploy attempt may have partially succeeded). That mirrors the
      existing start-failure unwinding so the framework doesn't end up
      with a plug-in that's half-installed after any step throws.
    - `destroy()` calls `undeployByPlugin(pluginId)` alongside the
      existing `unregisterAllByOwner(pluginId)`.
    
    ### Reference plug-in BPMN
    
    `reference-customer/plugin-printing-shop/src/main/resources/processes/plate-approval.bpmn20.xml`
    — a minimal one-service-task process (`start` → serviceTask → `end`)
    whose serviceTask id is `printing_shop.plate.approve`, matching the
    PlateApprovalTaskHandler key landed in the previous commit. Process
    definition key is `plugin-printing-shop-plate-approval` (distinct
    from the serviceTask id because BPMN 2.0 requires element ids to be
    unique per document — same separation used for the core ping
    process).
    
    ## Smoke test (fresh DB, plug-in staged)
    
    ```
    $ docker compose down -v && docker compose up -d db
    $ ./gradlew :distribution:bootRun &
    ...
    registered TaskHandler 'vibeerp.workflow.ping' owner='core' ...
    TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
    ...
    plug-in 'printing-shop' Liquibase migrations applied successfully
    [plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
    registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' ...
    [plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve
    PluginProcessDeployer: plug-in 'printing-shop' deployed 1 BPMN resource(s) as Flowable deploymentId='4e9f...': [processes/plate-approval.bpmn20.xml]
    
    $ curl /api/v1/workflow/definitions (as admin)
    [
      {"key":"plugin-printing-shop-plate-approval",
       "name":"Printing shop — plate approval",
       "version":1,
       "deploymentId":"4e9f85a6-33cf-11f1-acaa-1afab74ef3b4",
       "resourceName":"processes/plate-approval.bpmn20.xml"},
      {"key":"vibeerp-workflow-ping",
       "name":"vibe_erp workflow ping",
       "version":1,
       "deploymentId":"4f48...",
       "resourceName":"vibeerp-ping.bpmn20.xml"}
    ]
    
    $ curl -X POST /api/v1/workflow/process-instances
             {"processDefinitionKey":"plugin-printing-shop-plate-approval",
              "variables":{"plateId":"PLATE-007"}}
      → {"processInstanceId":"5b1b...",
         "ended":true,
         "variables":{"plateId":"PLATE-007",
                      "plateApproved":true,
                      "approvedBy":"user:admin",
                      "approvedAt":"2026-04-09T04:48:30.514523Z"}}
    
    $ kill -TERM <pid>
    [ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
    [ionShutdownHook] PluginProcessDeployer: plug-in 'printing-shop' deployment '4e9f...' removed (cascade)
    ```
    
    Full end-to-end loop closed: plug-in ships a BPMN → host reads it
    out of the JAR → Flowable deployment registered under the plug-in
    category → HTTP caller starts a process instance via the standard
    `/api/v1/workflow/process-instances` surface → dispatcher routes by
    activity id to the plug-in's TaskHandler → handler writes output
    variables + plug-in sees the authenticated caller as `ctx.principal()`
    via the reserved `__vibeerp_*` process-variable propagation from
    commit `ef9e5b42`. SIGTERM cleanly undeploys the plug-in's BPMNs.
    
    ## Tests
    
    - 6 new unit tests on `PluginProcessDeployerTest`:
      * `deployFromPlugin returns null when jarPath is not a regular file`
        — guard against dev-exploded plug-in dirs
      * `deployFromPlugin returns null when the plug-in jar has no BPMN resources`
      * `deployFromPlugin reads every bpmn resource under processes and
        deploys one bundle` — builds a real temporary JAR with two BPMN
        entries + a README + a metadata YAML, verifies that both BPMNs
        go through `addBytes` with the right names and the README /
        metadata entries are skipped
      * `deployFromPlugin rejects a blank plug-in id`
      * `undeployByPlugin returns zero when there is nothing to remove`
      * `undeployByPlugin cascades a deleteDeployment per matching deployment`
    - Total framework unit tests: 275 (was 269), all green.
    
    ## Kotlin trap caught during authoring (feedback memory paid out)
    
    First compile failed with `Unclosed comment` on the last line of
    `PluginProcessDeployer.kt`. The culprit was a KDoc paragraph
    containing the literal glob
    `classpath*:/processes/*.bpmn20.xml`: the embedded `/*` inside the
    backtick span was parsed as the start of a nested block comment
    even though the surrounding `/* ... */` KDoc was syntactically
    complete. The saved feedback-memory entry "Kotlin KDoc nested-comment
    trap" covered exactly this situation — the fix is to spell out glob
    characters as `[star]` / `[slash]` (or the word "slash-star") inside
    documentation so the literal `/*` never appears. The KDoc now
    documents the behaviour AND the workaround so the next maintainer
    doesn't hit the same trap.
    
    ## Non-goals (still parking lot)
    
    - Handler-side access to the full PluginContext — PlateApprovalTaskHandler
      is still a pure function because the framework doesn't hand
      TaskHandlers a context object. For REF.1 (real quote→job-card)
      handlers will need to read + mutate plug-in-owned tables; the
      cleanest approach is closure-capture inside the plug-in class
      (handler instantiated inside `start(context)` with the context
      captured in the outer scope). Decision deferred to REF.1.
    - BPMN resource hot reload. The deployer runs once per plug-in
      start; a plug-in whose BPMN changes under its feet at runtime
      isn't supported yet.
    - Plug-in-shipped DMN / CMMN resources. The deployer only looks at
      `.bpmn20.xml` and `.bpmn`. Decision-table and case-management
      resources are not on the v1.0 critical path.
  • ## What's new
    
    Plug-ins can now contribute workflow task handlers to the framework.
    The P2.1 `TaskHandlerRegistry` only saw `@Component` TaskHandler beans
    from the host Spring context; handlers defined inside a PF4J plug-in
    were invisible because the plug-in's child classloader is not in the
    host's bean list. This commit closes that gap.
    
    ## Mechanism
    
    ### api.v1
    
    - New interface `org.vibeerp.api.v1.workflow.PluginTaskHandlerRegistrar`
      with a single `register(handler: TaskHandler)` method. Plug-ins call
      it from inside their `start(context)` lambda.
    - `PluginContext.taskHandlers: PluginTaskHandlerRegistrar` — added as
      a new optional member with a default implementation that throws
      `UnsupportedOperationException("upgrade to v0.7 or later")`, so
      pre-existing plug-in jars remain binary-compatible with the new
      host and a plug-in built against v0.7 of the api-v1 surface fails
      fast on an old host instead of silently doing nothing. Same
      pattern we used for `endpoints` and `jdbc`.
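    
    The binary-compatibility trick is a throwing default getter on the
    interface. A minimal sketch with simplified stand-in types, not the real
    api.v1 declarations:
    
    ```kotlin
    interface TaskHandler { fun key(): String }
    
    interface PluginTaskHandlerRegistrar {
        fun register(handler: TaskHandler)
    }
    
    interface PluginContext {
        // a host built before this member exists never overrides the getter,
        // so a plug-in that calls it fails fast instead of silently no-oping
        val taskHandlers: PluginTaskHandlerRegistrar
            get() = throw UnsupportedOperationException("upgrade to v0.7 or later")
    }
    ```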
    
    ### platform-workflow
    
    - `TaskHandlerRegistry` gains owner tagging. Every registered handler
      now carries an `ownerId`: core `@Component` beans get
      `TaskHandlerRegistry.OWNER_CORE = "core"` (auto-assigned through
      the constructor-injection path), plug-in-contributed handlers get
      their PF4J plug-in id. New API:
      * `register(handler, ownerId = OWNER_CORE)` (default keeps existing
        call sites unchanged)
      * `unregisterAllByOwner(ownerId): Int` — strip every handler owned
        by that id in one call, returns the count for log correlation
      * The duplicate-key error message now includes both owners so a
        plug-in trying to stomp on a core handler gets an actionable
        "already registered by X (owner='core'), attempted by Y
        (owner='printing-shop')" instead of "already registered".
      * Internal storage switched from `ConcurrentHashMap<String, TaskHandler>`
        to `ConcurrentHashMap<String, Entry>` where `Entry` carries
        `(handler, ownerId)`. `find(key)` still returns `TaskHandler?`
        so the dispatcher is unchanged.
    - No behavioral change for the hot-path (`DispatchingJavaDelegate`) —
      only the registration/teardown paths changed.
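    
    The owner-tagged storage described above can be sketched like this —
    `Entry` and the method names mirror the commit's description, but the class
    is illustrative, not the framework's source:
    
    ```kotlin
    import java.util.concurrent.ConcurrentHashMap
    
    // Sketch: handlers keyed by task key, each carrying its owner id.
    class OwnerTaggedRegistry {
        data class Entry(val handler: Any, val ownerId: String)
    
        private val entries = ConcurrentHashMap<String, Entry>()
    
        fun register(key: String, handler: Any, ownerId: String = "core") {
            require(ownerId.isNotBlank()) { "ownerId must not be blank" }
            val previous = entries.putIfAbsent(key, Entry(handler, ownerId))
            check(previous == null) {
                "TaskHandler '$key' already registered (owner='${previous!!.ownerId}'), " +
                    "attempted by owner='$ownerId'"
            }
        }
    
        // the dispatcher's lookup is unchanged: it still sees only the handler
        fun find(key: String): Any? = entries[key]?.handler
    
        fun unregisterAllByOwner(ownerId: String): Int {
            val owned = entries.filterValues { it.ownerId == ownerId }.keys
            owned.forEach { entries.remove(it) }
            return owned.size
        }
    }
    ```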
    
    ### platform-plugins
    
    - New dependency on `:platform:platform-workflow` (the only new inter-
      module dep of this chunk; it is the module that exposes
      `TaskHandlerRegistry`).
    - New internal class `ScopedTaskHandlerRegistrar(hostRegistry, pluginId)`
      that implements the api.v1 `PluginTaskHandlerRegistrar` by delegating
      `register(handler)` to `hostRegistry.register(handler, ownerId =
      pluginId)`. Constructed fresh per plug-in by `VibeErpPluginManager`,
      so the plug-in never sees (or can tamper with) the owner id.
    - `DefaultPluginContext` gains a `scopedTaskHandlers` constructor
      parameter and exposes it as the `PluginContext.taskHandlers`
      override.
    - `VibeErpPluginManager`:
      * injects `TaskHandlerRegistry`
      * constructs `ScopedTaskHandlerRegistrar(registry, pluginId)` per
        plug-in when building `DefaultPluginContext`
      * partial-start failure now also calls
        `taskHandlerRegistry.unregisterAllByOwner(pluginId)`, matching
        the existing `endpointRegistry.unregisterAll(pluginId)` cleanup
        so a throwing `start(context)` cannot leave stale registrations
      * `destroy()` calls the same `unregisterAllByOwner` for every
        started plug-in in reverse order, mirroring the endpoint cleanup
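    
    The scoping trick — the registrar pins the owner id at construction so the
    plug-in can never supply or tamper with it — looks roughly like this, with
    simplified stand-in types:
    
    ```kotlin
    // Sketch only: HostRegister stands in for the host-side registry call.
    fun interface HostRegister {
        fun register(handlerKey: String, ownerId: String)
    }
    
    class ScopedRegistrar(
        private val host: HostRegister,
        private val pluginId: String, // fixed per plug-in by the plugin manager
    ) {
        // the plug-in hands over only the handler; the owner id is implicit
        fun register(handlerKey: String) = host.register(handlerKey, pluginId)
    }
    ```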
    
    ### reference-customer/plugin-printing-shop
    
    - New file `workflow/PlateApprovalTaskHandler.kt` — the first plug-in-
      contributed TaskHandler in the framework. Key
      `printing_shop.plate.approve`. Reads a `plateId` process variable,
      writes `plateApproved`, `plateId`, `approvedBy` (principal label),
      `approvedAt` (ISO instant) and exits. No DB mutation yet: a proper
      plate-approval handler would UPDATE `plugin_printingshop__plate` via
      `context.jdbc`, but that requires handing the TaskHandler a
      projection of the PluginContext — a deliberate non-goal of this
      chunk, deferred to the "handler context" follow-up.
    - `PrintingShopPlugin.start(context)` now ends with
      `context.taskHandlers.register(PlateApprovalTaskHandler())` and logs
      the registration.
    - Package layout: `org.vibeerp.reference.printingshop.workflow` is
      the plug-in's workflow namespace going forward (the next printing-
      shop handlers for REF.1 — quote-to-job-card, job-card-to-work-order
      — will live alongside).
    
    ## Smoke test (fresh DB, plug-in staged)
    
    ```
    $ docker compose down -v && docker compose up -d db
    $ ./gradlew :distribution:bootRun &
    ...
    TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
    ...
    plug-in 'printing-shop' Liquibase migrations applied successfully
    vibe_erp plug-in loaded: id=printing-shop version=0.1.0-SNAPSHOT state=STARTED
    [plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
    registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' class='org.vibeerp.reference.printingshop.workflow.PlateApprovalTaskHandler'
    [plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve
    
    $ curl /api/v1/workflow/handlers (as admin)
    {
      "count": 2,
      "keys": ["printing_shop.plate.approve", "vibeerp.workflow.ping"]
    }
    
    $ curl /api/v1/plugins/printing-shop/ping  # plug-in HTTP still works
    {"plugin":"printing-shop","ok":true,"version":"0.1.0-SNAPSHOT", ...}
    
    $ curl -X POST /api/v1/workflow/process-instances
             {"processDefinitionKey":"vibeerp-workflow-ping"}
      (principal propagation from previous commit still works — pingedBy=user:admin)
    
    $ kill -TERM <pid>
    [ionShutdownHook] vibe_erp stopping 1 plug-in(s)
    [ionShutdownHook] [plugin:printing-shop] printing-shop plug-in stopped
    [ionShutdownHook] unregistered TaskHandler 'printing_shop.plate.approve' (owner stopped)
    [ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
    ```
    
    Every expected lifecycle event fires in the right order with the
    right owner attribution. Core handlers are untouched by plug-in
    teardown.
    
    ## Tests
    
    - 4 new / updated tests on `TaskHandlerRegistryTest`:
      * `unregisterAllByOwner only removes handlers owned by that id`
        — 2 core + 2 plug-in, unregister the plug-in owner, only the
        2 plug-in keys are removed
      * `unregisterAllByOwner on unknown owner returns zero`
      * `register with blank owner is rejected`
      * Updated `duplicate key fails fast` to assert the new error
        message format including both owner ids
    - Total framework unit tests: 269 (was 265), all green.
    
    ## What this unblocks
    
    - **REF.1** (real printing-shop quote→job-card workflow) can now
      register its production handlers through the same seam
    - **Plug-in-contributed handlers with state access** — the next
      design question is how a plug-in handler gets at the plug-in's
      database and translator. Two options: pass a projection of the
      PluginContext through TaskContext, or keep a reference to the
      context captured at plug-in start (closure). The PlateApproval
      handler in this chunk is pure on purpose to keep the seam
      conversation separate.
    - **Plug-in-shipped BPMN auto-deployment** — Flowable's default
      classpath scan uses `classpath*:/processes/*.bpmn20.xml` which
      does NOT see PF4J plug-in classloaders. A dedicated
      `PluginProcessDeployer` that walks each started plug-in's JAR for
      BPMN resources and calls `repositoryService.createDeployment` is
      the natural companion to this commit, still pending.
    
    ## Non-goals (still parking lot)
    
    - BPMN processes shipped inside plug-in JARs (see above — needs
      its own chunk, because it requires reading resources from the
      PF4J classloader and constructing a Flowable deployment by hand)
    - Per-handler permission checks — a handler that wants a permission
      gate still has to call back through its own context; P4.3's
      @RequirePermission aspect doesn't reach into Flowable delegate
      execution.
    - Hot reload of a running plug-in's TaskHandlers. The seam supports
      it, but `unloadPlugin` + `loadPlugin` at runtime isn't exercised
      by any current caller.
  • Before this commit, every TaskHandler saw a fixed `workflow-engine`
    System principal via `ctx.principal()` because there was no plumbing
    from the REST caller down to the dispatcher. A printing-shop
    quote-to-job-card handler (or any real business workflow) needs to
    know the actual user who kicked off the process so audit columns and
    role-based logic behave correctly.
    
    ## Mechanism
    
    The chain is: Spring Security populates `SecurityContextHolder` →
    `PrincipalContextFilter` mirrors it into `AuthorizationContext`
    (already existed) → `WorkflowService.startProcess` reads the bound
    `AuthorizedPrincipal` and stashes two reserved process variables
    (`__vibeerp_initiator_id`, `__vibeerp_initiator_username`) before
    calling `RuntimeService.startProcessInstanceByKey` →
    `DispatchingJavaDelegate` reads them back off each `DelegateExecution`
    when constructing the `DelegateTaskContext` → handler sees a real
    `Principal.User` from `ctx.principal()`.
    
    When the process is started outside an HTTP request (e.g. a future
    Quartz-scheduled process, or a signal fired by a PBC event
    subscriber), `AuthorizationContext.current()` is null, no initiator
    variables are written, and the dispatcher falls back to the
    `Principal.System("workflow-engine")` principal. A corrupt initiator
    id (e.g. a non-UUID string) also falls back to the system principal
    rather than failing the task, so a stale variable can't brick a
    running workflow.
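    
    The fallback chain can be sketched as follows — `Principal` here is a
    simplified stand-in for the framework's api.v1 type; the variable names
    match the commit:
    
    ```kotlin
    import java.util.UUID
    
    sealed interface Principal {
        data class User(val id: UUID, val username: String) : Principal
        data class System(val name: String) : Principal
    }
    
    fun resolveInitiator(processVars: Map<String, Any?>): Principal {
        val id = processVars["__vibeerp_initiator_id"] as? String
        val username = processVars["__vibeerp_initiator_username"] as? String
        if (id == null || username == null) {
            // started outside an HTTP request: no initiator variables written
            return Principal.System("workflow-engine")
        }
        return try {
            Principal.User(UUID.fromString(id), username)
        } catch (e: IllegalArgumentException) {
            // a corrupt (non-UUID) id must not brick a running workflow
            Principal.System("workflow-engine")
        }
    }
    ```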
    
    ## Reserved variable hygiene
    
    The `__vibeerp_` prefix is reserved framework plumbing. Two
    consequences wired in this commit:
    
    - `DispatchingJavaDelegate` strips keys starting with `__vibeerp_`
      from the variable snapshot handed to the handler (via
      `WorkflowTask.variables`), so handler code cannot accidentally
      depend on the initiator id through the wrong door — it must use
      `ctx.principal()`.
    - `WorkflowService.startProcess` and `getInstanceVariables` strip
      the same prefix from their HTTP response payloads so REST callers
      never see the plumbing either.
    
    The prefix constant lives on `DispatchingJavaDelegate.RESERVED_VAR_PREFIX`
    so there is exactly one source of truth. The two initiator variable
    names are public constants on `WorkflowService` — tests, future
    plug-in code, and any custom handlers that genuinely need the raw
    ids (e.g. a security-audit task) can depend on the stable symbols
    instead of hard-coded strings.
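    
    The strip applied in both places (the handler's variable snapshot and the
    HTTP payloads) is a one-liner; the prefix value matches the commit, the
    function name is illustrative:
    
    ```kotlin
    const val RESERVED_VAR_PREFIX = "__vibeerp_"
    
    // Drop framework plumbing before variables reach handlers or REST callers.
    fun stripReservedVars(vars: Map<String, Any?>): Map<String, Any?> =
        vars.filterKeys { !it.startsWith(RESERVED_VAR_PREFIX) }
    ```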
    
    ## PingTaskHandler as the executable witness
    
    `PingTaskHandler` now writes a `pingedBy` output variable with a
    principal label (`user:<username>`, `system:<name>`, or
    `plugin:<pluginId>`) and logs it. That makes the end-to-end smoke
    test trivially assertable:
    
    ```
    POST /api/v1/workflow/process-instances
         {"processDefinitionKey":"vibeerp-workflow-ping"}
      (as admin user, with valid JWT)
      → {"processInstanceId": "...", "ended": true,
         "variables": {
           "pong": true,
           "pongAt": "...",
           "correlationId": "...",
           "pingedBy": "user:admin"
         }}
    ```
    
    Note the RESPONSE does NOT contain `__vibeerp_initiator_id` or
    `__vibeerp_initiator_username` — the reserved-var filter in the
    service layer hides them. The handler-side log line confirms
    `principal='user:admin'` in the service-task execution thread.
    
    ## Tests
    
    - 3 new tests in `DispatchingJavaDelegateTest`:
      * `resolveInitiator` returns a User principal when both vars set
      * falls back to system principal when id var is missing
      * falls back to system principal when id var is corrupt
        (non-UUID string)
    - Updated `variables given to the handler are a defensive copy` to
      also assert that reserved `__vibeerp_*` keys are stripped from
      the task's variable snapshot.
    - Updated `PingTaskHandlerTest`:
      * rename to "writes pong plus timestamp plus correlation id plus
        user principal label"
      * new test for the System-principal branch producing
        `pingedBy=system:workflow-engine`
    - Total framework unit tests: 265 (was 261), all green.
    
    ## Non-goals (still parking lot)
    
    - Plug-in-contributed TaskHandler registration via the PF4J loader
      walking child contexts for TaskHandler beans and calling
      `TaskHandlerRegistry.register`. The seam exists on the registry;
      the loader integration is the next chunk, and unblocks REF.1.
    - Propagation of the full role set (not just id+username) into the
      TaskContext. Handlers don't currently see the initiator's roles.
      Can be added as a third reserved variable when a handler actually
      needs it — YAGNI for now.
    - BPMN user tasks / signals / timers — engine supports them but we
      have no HTTP surface for them yet.
  • New platform subproject `platform/platform-workflow` that makes
    `org.vibeerp.api.v1.workflow.TaskHandler` a live extension point. This
    is the framework's first chunk of Phase 2 (embedded workflow engine)
    and the dependency other work has been waiting on — pbc-production
    routings/operations, the full buy-make-sell BPMN scenario in the
    reference plug-in, and ultimately the BPMN designer web UI all hang
    off this seam.
    
    ## The shape
    
    - `flowable-spring-boot-starter-process:7.0.1` pulled in behind a
      single new module. Every other module in the framework still sees
      only the api.v1 TaskHandler + WorkflowTask + TaskContext surface —
      guardrail #10 stays honest, no Flowable type leaks to plug-ins or
      PBCs.
    - `TaskHandlerRegistry` is the host-side index of every registered
      handler, keyed by `TaskHandler.key()`. Auto-populated from every
      Spring bean implementing TaskHandler via constructor injection of
      `List<TaskHandler>`; duplicate keys fail fast at registration time.
      `register` / `unregister` exposed for a future plug-in lifecycle
      integration.
    - `DispatchingJavaDelegate` is a single Spring-managed JavaDelegate
      named `taskDispatcher`. Every BPMN service task in the framework
      references it via `flowable:delegateExpression="${taskDispatcher}"`.
      The dispatcher reads `execution.currentActivityId` as the task key
      (BPMN `id` attribute = TaskHandler key — no extension elements, no
      field injection, no second source of truth) and routes to the
      matching registered handler. A defensive copy of the execution
      variables is passed to the handler so it cannot mutate Flowable's
      internal map.
    - `DelegateTaskContext` adapts Flowable's `DelegateExecution` to the
      api.v1 `TaskContext` — the variable `set(name, value)` call
      forwards through Flowable's variable scope (persisted in the same
      transaction as the surrounding service task execution) and null
      values remove the variable. Principal + locale are documented
      placeholders for now (a workflow-engine `Principal.System`),
      waiting on the propagation chunk that plumbs the initiating user
      through `runtimeService.startProcessInstanceByKey(...)`.
    - `WorkflowService` is a thin facade over Flowable's `RuntimeService`
      + `RepositoryService` exposing exactly the four operations the
      controller needs: start, list active, inspect variables, list
      definitions. Everything richer (signals, timers, sub-processes,
      user-task completion, history queries) lands on this seam in later
      chunks.
    - `WorkflowController` at `/api/v1/workflow/**`:
      * `POST /process-instances`                       (permission `workflow.process.start`)
      * `GET  /process-instances`                       (`workflow.process.read`)
      * `GET  /process-instances/{id}/variables`        (`workflow.process.read`)
      * `GET  /definitions`                             (`workflow.definition.read`)
      * `GET  /handlers`                                (`workflow.definition.read`)
      Exception handlers map `NoSuchElementException` +
      `FlowableObjectNotFoundException` → 404, `IllegalArgumentException`
      → 400, and any other `FlowableException` → 400. Permissions are
      declared in a new `META-INF/vibe-erp/metadata/workflow.yml` loaded
      by the core MetadataLoader so they show up under
      `GET /api/v1/_meta/metadata` alongside every other permission.
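    
    The dispatch-by-activity-id rule can be sketched with plain stand-ins for
    Flowable's `DelegateExecution` and the api.v1 handler — illustrative types
    only, not the shipped `DispatchingJavaDelegate`:
    
    ```kotlin
    // Sketch: BPMN id attribute == TaskHandler key, no second source of truth.
    class DispatchingDelegate(
        private val registry: Map<String, (vars: Map<String, Any?>) -> Unit>,
    ) {
        fun execute(currentActivityId: String, engineVars: MutableMap<String, Any?>) {
            val handler = registry[currentActivityId]
                ?: throw IllegalStateException(
                    "no TaskHandler registered for activity '$currentActivityId'"
                )
            // defensive copy: the handler cannot mutate the engine's own map
            handler(HashMap(engineVars))
        }
    }
    ```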
    
    ## The executable self-test
    
    - `vibeerp-ping.bpmn20.xml` ships in `processes/` on the module
      classpath and Flowable's starter auto-deploys it at boot.
      Structure: `start` → serviceTask id=`vibeerp.workflow.ping`
      (delegateExpression=`${taskDispatcher}`) → `end`. Process
      definitionKey is `vibeerp-workflow-ping` (distinct from the
      serviceTask id because BPMN 2.0 ids must be unique per document).
    - `PingTaskHandler` is a real shipped bean, not test code: its
      `execute` writes `pong=true`, `pongAt=<Instant.now()>`, and
      `correlationId=<ctx.correlationId()>` to the process variables.
      Operators and AI agents get a trivial "is the workflow engine
      alive?" probe out of the box.
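
Abbreviated sketch of the shipped BPMN (namespaces and sequence-flow ids are illustrative; the process id, serviceTask id, and delegateExpression are the ones named above):

```xml
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             xmlns:flowable="http://flowable.org/bpmn"
             targetNamespace="vibeerp">
  <process id="vibeerp-workflow-ping" isExecutable="true">
    <startEvent id="start"/>
    <sequenceFlow id="flow1" sourceRef="start" targetRef="vibeerp.workflow.ping"/>
    <!-- the dispatcher resolves the handler by this activity id -->
    <serviceTask id="vibeerp.workflow.ping"
                 flowable:delegateExpression="${taskDispatcher}"/>
    <sequenceFlow id="flow2" sourceRef="vibeerp.workflow.ping" targetRef="end"/>
    <endEvent id="end"/>
  </process>
</definitions>
```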
    
    Why the demo lives in src/main, not src/test: Flowable's auto-deployer
    reads from the host classpath at boot, so if either half lived under
    src/test the smoke test wouldn't be reproducible from the shipped
    image — exactly what CLAUDE.md's "reference plug-in is the executable
    acceptance test" discipline is trying to prevent.
    
    ## The Flowable + Liquibase trap
    
    **Learned the hard way during the smoke test.** Adding
    `flowable-spring-boot-starter-process` immediately broke boot with
    `Schema-validation: missing table [catalog__item]`. Liquibase was
    silently not running. Root cause: Flowable 7.x registers a Spring
    Boot `EnvironmentPostProcessor` called
    `FlowableLiquibaseEnvironmentPostProcessor` that, unless the user has
    already set an explicit value, forces
    `spring.liquibase.enabled=false` with a WARN log line that reads
    "Flowable pulls in Liquibase but does not use the Spring Boot
    configuration for it". Our master.xml then never executes and JPA
    validation fails against the empty schema. Fix is a single line in
    `distribution/src/main/resources/application.yaml` —
    `spring.liquibase.enabled: true` — with a comment explaining why it
    must stay there for anyone who touches config next.
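
The fix as it reads in the shipped file (comment wording illustrative):

```yaml
# distribution/src/main/resources/application.yaml
spring:
  liquibase:
    # Flowable's FlowableLiquibaseEnvironmentPostProcessor forces this to
    # false unless an explicit value is set. We DO run Liquibase for the
    # vibe_erp schema, so this must stay true.
    enabled: true
```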
    
    Flowable's own ACT_* tables and vibe_erp's `catalog__*`, `pbc.*__*`,
    etc. tables coexist happily in the same public schema — 39 ACT_*
    tables alongside 45 vibe_erp tables on the smoke-tested DB. Flowable
    manages its own schema via its internal MyBatis DDL, Liquibase manages
    ours, they don't touch each other.
    
    ## Smoke-test transcript (fresh DB, dev profile)
    
    ```
    docker compose down -v && docker compose up -d db
    ./gradlew :distribution:bootRun &
    # ... Flowable creates ACT_* tables, Liquibase creates vibe_erp tables,
    #     MetadataLoader loads workflow.yml, TaskHandlerRegistry boots with 1 handler,
    #     BPMN auto-deployed from classpath
    POST /api/v1/auth/login → JWT
    GET  /api/v1/workflow/definitions → 1 definition (vibeerp-workflow-ping)
    GET  /api/v1/workflow/handlers → {"count":1,"keys":["vibeerp.workflow.ping"]}
    POST /api/v1/workflow/process-instances
         {"processDefinitionKey":"vibeerp-workflow-ping",
          "businessKey":"smoke-1",
          "variables":{"greeting":"ni hao"}}
      → 201 {"processInstanceId":"...","ended":true,
             "variables":{"pong":true,"pongAt":"2026-04-09T...",
                          "correlationId":"...","greeting":"ni hao"}}
    POST /api/v1/workflow/process-instances {"processDefinitionKey":"does-not-exist"}
      → 404 {"message":"No process definition found for key 'does-not-exist'"}
    GET  /api/v1/catalog/uoms → still returns the 15 seeded UoMs (sanity)
    ```
    
    ## Tests
    
    - 15 new unit tests in `platform-workflow/src/test`:
      * `TaskHandlerRegistryTest` — init with initial handlers, duplicate
        key fails fast, blank key rejected, unregister removes,
        unregister on unknown returns false, find on missing returns null
      * `DispatchingJavaDelegateTest` — dispatches by currentActivityId,
        throws on missing handler, defensive-copies the variable map
      * `DelegateTaskContextTest` — set non-null forwards, set null
        removes, blank name rejected, principal/locale/correlationId
        passthrough, default correlation id is stable across calls
      * `PingTaskHandlerTest` — key matches the BPMN serviceTask id,
        execute writes pong + pongAt + correlationId
    - Total framework unit tests: 261 (was 246), all green.
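
The registry contract those tests pin down can be sketched as follows (class shape is an assumption reconstructed from the test names):

```kotlin
import java.util.concurrent.ConcurrentHashMap

// Sketch: duplicate keys fail fast, blank keys are rejected, find()
// returns null for unknown keys, unregister() reports whether a
// handler was actually removed.
class TaskHandlerRegistry<H : Any>(initial: Map<String, H> = emptyMap()) {
    private val handlers = ConcurrentHashMap<String, H>()

    init {
        initial.forEach { (key, handler) -> register(key, handler) }
    }

    fun register(key: String, handler: H) {
        require(key.isNotBlank()) { "handler key must not be blank" }
        check(handlers.putIfAbsent(key, handler) == null) { "duplicate handler key '$key'" }
    }

    fun unregister(key: String): Boolean = handlers.remove(key) != null

    fun find(key: String): H? = handlers[key]
}
```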
    
    ## What this unblocks
    
    - **REF.1** — real quote→job-card workflow handler in the
      printing-shop plug-in
    - **pbc-production routings/operations (v3)** — each operation
      becomes a BPMN step with duration + machine assignment
    - **P2.3** — user-task form rendering (landing on top of the
      RuntimeService already exposed via WorkflowService)
    - **P2.2** — BPMN designer web page (later, depends on R1)
    
    ## Deliberate non-goals (parking lot)
    
    - Principal propagation from the REST caller through the process
      start into the handler — uses a fixed `workflow-engine`
      `Principal.System` for now. Follow-up chunk will plumb the
      authenticated user as a Flowable variable.
    - Plug-in-contributed TaskHandler registration via PF4J child
      contexts — the registry exposes `register/unregister` but the
      plug-in loader doesn't call them yet. Follow-up chunk.
    - BPMN user tasks, signals, timers, history queries — seam exists,
      deliberately not built out.
    - Workflow deployment from `metadata__workflow` rows (the Tier 1
      path). Today deployment is classpath-only via Flowable's auto-
      deployer.
    - The Flowable async job executor is explicitly deactivated
      (`flowable.async-executor-activate: false`) — background-job
      machinery belongs to the future Quartz integration (P1.10), not
      Flowable.
    zichun authored
  • Two small closing items that tidy up the end of the HasExt rollout:
    
    1. inventory.yml gains a `customFields:` section with two core
       declarations for Location: `inventory_address_city` (string,
       maxLength 128) and `inventory_floor_area_sqm` (decimal 10,2).
       Completes the "every HasExt entity has at least one declared
       field" symmetry. Printing-shop plug-in already adds its own
       `printing_shop_press_id` etc. on top.
    
    2. CLAUDE.md "Repository state" section updated to reflect this
       session's milestones:
       - pbc-production v2 (IN_PROGRESS + BOM + scrap) now called out
         explicitly in the PBC list.
       - MATERIAL_ISSUE added to the buy-sell-MAKE loop description —
         the work-order completion now consumes raw materials per BOM
         line AND credits finished goods atomically.
       - New bullet: "Tier 1 customization is universal across every
         core entity with an ext column" — HasExt on Partner, Location,
         SalesOrder, PurchaseOrder, WorkOrder, Item; every service uses
         applyTo/parseExt helpers, zero duplication.
       - New bullet: "Clean Core extensibility is executable" — the
         reference printing-shop plug-in's metadata YAML ships
         customFields on Partner/Item/SalesOrder/WorkOrder and the
         MetadataLoader merges them with core declarations at load
         time. Executable grade-A extension under the A/B/C/D safety
         scale.
       - Printing-shop plug-in description updated to note that its
         metadata YAML now carries custom fields on core entities, not
         just its own entities.
    
    Smoke verified end-to-end against real Postgres with the plug-in
    staged:
      - GET /_meta/metadata/custom-fields/Location returns 2 core
        fields.
      - POST /inventory/locations with `{inventory_address_city:
        "Shenzhen", inventory_floor_area_sqm: "1250.50"}` → 201,
        canonical form persisted, ext round-trips.
      - POST with `inventory_floor_area_sqm: "123456789012345.678"` →
        400 "ext.inventory_floor_area_sqm: decimal scale 3 exceeds
        declared scale 2" — the validator's precision/scale rules fire
        exactly as designed.
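
The precision/scale rule that fires here can be sketched with plain `BigDecimal` (the function and error wording are illustrative, not the validator's real code):

```kotlin
import java.math.BigDecimal

// Illustrative check for a declared decimal(precision, scale) field,
// e.g. inventory_floor_area_sqm as decimal(10,2).
fun checkDecimal(key: String, raw: String, precision: Int, scale: Int): String? {
    val value: BigDecimal = raw.toBigDecimalOrNull()
        ?: return "ext.$key: not a valid decimal: '$raw'"
    if (value.scale() > scale)
        return "ext.$key: decimal scale ${value.scale()} exceeds declared scale $scale"
    if (value.precision() - value.scale() > precision - scale)
        return "ext.$key: integer digits exceed declared precision $precision"
    return null // canonical value fits the declaration
}
```

With this rule, "1250.50" passes a decimal(10,2) declaration while "123456789012345.678" fails on scale, matching the 400 above.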
    
    No code changes. 246 unit tests, all green. 18 Gradle subprojects.
  • Completes the HasExt rollout across every core entity with an ext
    column. Item was the last one that carried an ext JSONB column
    without any validation wired — a plug-in could declare custom fields
    for Item but nothing would enforce them on save. This fixes that and
    restores two printing-shop-specific Item fields to the reference
    plug-in that were temporarily dropped from the previous Tier 1
    customization chunk (commit 16c59310) precisely because Item wasn't
    wired.
    
    Code changes:
      - Item implements HasExt; `ext` becomes `override var ext`, a
        companion constant holds the entity name "Item".
      - ItemService injects ExtJsonValidator, calls applyTo() in both
        create() and update() (create + update symmetry like partners
        and locations). parseExt passthrough added for response mappers.
      - CreateItemCommand, UpdateItemCommand, CreateItemRequest,
        UpdateItemRequest gain a nullable ext field.
      - ItemResponse now carries the parsed ext map, same shape as
        PartnerResponse / LocationResponse / SalesOrderResponse.
      - pbc-catalog build.gradle adds
        `implementation(project(":platform:platform-metadata"))`.
      - ItemServiceTest constructor updated to pass the new validator
        dependency with no-op stubs.
    
    Plug-in YAML (printing-shop.yml):
      - Re-added `printing_shop_color_count` (integer) and
        `printing_shop_paper_gsm` (integer) custom fields targeting Item.
        These were originally in the commit 16c59310 draft but removed
        because Item wasn't wired. Now that Item is wired, they're back
        and actually enforced.
    
    Smoke verified end-to-end against real Postgres with the plug-in
    staged:
      - GET /_meta/metadata/custom-fields/Item returns 2 plug-in fields.
      - POST /catalog/items with `{printing_shop_color_count: 4,
        printing_shop_paper_gsm: 170}` → 201, canonical form persisted.
      - GET roundtrip preserves both integer values.
      - POST with `printing_shop_color_count: "not-a-number"` → 400
        "ext.printing_shop_color_count: not a valid integer: 'not-a-number'".
      - POST with `rogue_key` → 400 "ext contains undeclared key(s)
        for 'Item': [rogue_key]".
    
    Six of eight PBCs now participate in HasExt:
      Partner, Location, SalesOrder, PurchaseOrder, WorkOrder, Item.
    The remaining two are pbc-identity (User has no ext column by
    design — identity is a security concern, not a customization one)
    and pbc-finance (JournalEntry is derived state from events, no
    customization surface).
    
    Five core entities carry Tier 1 custom fields as of this commit:
      Partner     (2 core + 1 plug-in)
      Item        (0 core + 2 plug-in)
      SalesOrder  (0 core + 1 plug-in)
      WorkOrder   (2 core + 1 plug-in)
      Location    (0 core + 0 plug-in — wired but no declarations yet)
    
    246 unit tests, all green. 18 Gradle subprojects.
  • The reference printing-shop plug-in now demonstrates the framework's
    most important promise — that a customer plug-in can EXTEND core
    business entities (Partner, SalesOrder, WorkOrder) with customer-
    specific fields WITHOUT touching any core code.
    
    Added to `printing-shop.yml` customFields section:
    
      Partner:
        printing_shop_customer_segment (enum: agency, in_house, end_client, reseller)
    
      SalesOrder:
        printing_shop_quote_number (string, maxLength 32)
    
      WorkOrder:
        printing_shop_press_id (string, maxLength 32)
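
A sketch of how the declarations might read in `printing-shop.yml` (the schema keys are assumptions; the field names, types, and limits are the ones listed above):

```yaml
customFields:
  Partner:
    - key: printing_shop_customer_segment
      type: enum
      allowedValues: [agency, in_house, end_client, reseller]
  SalesOrder:
    - key: printing_shop_quote_number
      type: string
      maxLength: 32
  WorkOrder:
    - key: printing_shop_press_id
      type: string
      maxLength: 32
```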
    
    Mechanism (no code changes, all metadata-driven):
      1. Plug-in YAML carries a `customFields:` section alongside its
         entities / permissions / menus.
      2. MetadataLoader.loadFromPluginJar reads the section and inserts
         rows into `metadata__custom_field` tagged
         `source='plugin:printing-shop'`.
      3. CustomFieldRegistry.refresh re-reads ALL rows (both `source='core'`
         and `source='plugin:*'`) and merges them by `targetEntity`.
      4. ExtJsonValidator.applyTo now validates incoming ext against the
         MERGED set, so a POST to /api/v1/partners/partners can include
         both core-declared (partners_industry) and plug-in-declared
         (printing_shop_customer_segment) fields in the same request,
         and both are enforced.
      5. Uninstalling the plug-in removes the plugin:printing-shop rows
         and the fields disappear from validation AND from the UI's
         custom-field catalog — no migration, no restart of anything
         other than the plug-in lifecycle itself.
    
    Convention established: plug-in-contributed custom field keys are
    prefixed with the plug-in id (e.g. `printing_shop_*`) so two
    independent plug-ins can't collide on the same entity. Documented
    inline in the YAML.
    
    Smoke verified end-to-end against real Postgres with the plug-in
    staged:
      - CustomFieldRegistry logs "refreshed 4 custom fields" after core
        load, then "refreshed 7 custom fields across 3 entities" after
        plug-in load — 4 core + 3 plug-in fields, merged.
      - GET /_meta/metadata/custom-fields/Partner returns 3 fields
        (2 core + 1 plug-in).
      - GET /_meta/metadata/custom-fields/SalesOrder returns 1 field
        (plug-in only).
      - GET /_meta/metadata/custom-fields/WorkOrder returns 3 fields
        (2 core + 1 plug-in).
      - POST /partners/partners with BOTH a core field AND a plug-in
        field → 201, canonical form persisted.
      - POST with an invalid plug-in enum value → 400 "ext.printing_shop_customer_segment:
        value 'ghost' is not in allowed set [agency, in_house, end_client, reseller]".
      - POST /orders/sales-orders with printing_shop_quote_number → 201,
        quote number round-trips.
      - POST /production/work-orders with mixed production_priority +
        printing_shop_press_id → 201, both fields persisted.
    
    This is the first executable demonstration of Clean Core
    extensibility (CLAUDE.md guardrail #7) — the plug-in extends
    core-owned data through a stable public contract (api.v1 HasExt +
    metadata__custom_field) without reaching into any core or platform
    internal class. This is the "A" grade on the A/B/C/D extensibility
    safety scale.
    
    No code changes. No test changes. 246 unit tests, still green.
  • Closes the last known gap from the HasExt refactor (commit 986f02ce):
    pbc-production's WorkOrder had an `ext` column but no validator was
    wired, so an operator could write arbitrary JSON without any
    schema enforcement. This fixes that and adds the first Tier 1
    custom fields for WorkOrder.
    
    Code changes:
      - WorkOrder implements HasExt; ext becomes `override var ext`,
        ENTITY_NAME moves onto the entity companion.
      - WorkOrderService injects ExtJsonValidator, calls applyTo() in
        create() before saving (null-safe so the
        SalesOrderConfirmedSubscriber's auto-spawn path still works —
        verified by smoke test).
      - CreateWorkOrderCommand + CreateWorkOrderRequest gain an `ext`
        field that flows through to the validator.
      - WorkOrderResponse gains an `ext: Map<String, Any?>` field; the
        response mapper signature changes to `toResponse(service)` to
        reach the validator via a convenience parseExt delegate on the
        service (same pattern as the other four PBCs).
      - pbc-production Gradle build adds `implementation(project(":platform:platform-metadata"))`.
    
    Metadata (production.yml):
      - Permission keys extended to match the v2 state machine:
        production.work-order.start and production.work-order.scrap
        (both previously missing). The existing
        .read / .create / .complete / .cancel keys stay.
      - Two custom fields declared:
          * production_priority (enum: low, normal, high, urgent)
          * production_routing_notes (string, maxLength 1024)
        Both are optional and non-PII; an operator can now add
        priority and routing notes to a work order through the public
        API without any code change, which is the whole point of
        Tier 1 customization.
    
    Unit tests: WorkOrderServiceTest constructor updated to pass the
    new extValidator dependency and stub applyTo/parseExt as no-ops.
    No behavioral test changes — ext validation is covered by
    ExtJsonValidatorTest and the platform-wide smoke tests.
    
    Smoke verified end-to-end against real Postgres:
      - GET /_meta/metadata/custom-fields/WorkOrder now returns both
        declarations with correct enum sets and maxLength.
      - POST /work-orders with valid ext {production_priority:"high",
        production_routing_notes:"Rush for customer demo"} → 201,
        canonical form persisted, round-trips via GET.
      - POST with invalid enum value → 400 "value 'emergency' is not
        in allowed set [low, normal, high, urgent]".
      - POST with unknown ext key → 400 "ext contains undeclared
        key(s) for 'WorkOrder': [unknown_field]".
      - Auto-spawn from confirmed SO → DRAFT work order with empty
        ext `{}`, confirming the applyTo(null) null-safe path.
    
    Five of the eight PBCs now participate in the HasExt pattern:
    Partner, Location, SalesOrder, PurchaseOrder, WorkOrder. The
    remaining three (Item, Uom, JournalEntry) either have their own
    custom-field story in separate entities or are derived state.
    
    246 unit tests, all green. 18 Gradle subprojects.
  • Closes the P4.3 rollout — the last PBC whose controllers were still
    unannotated. Every endpoint in `UserController` now carries an
    `@RequirePermission("identity.user.*")` annotation matching the keys
    already declared in `identity.yml`:
    
      GET  /api/v1/identity/users           identity.user.read
      GET  /api/v1/identity/users/{id}      identity.user.read
      POST /api/v1/identity/users           identity.user.create
      PATCH /api/v1/identity/users/{id}     identity.user.update
      DELETE /api/v1/identity/users/{id}    identity.user.disable
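
Sketched wiring for one of the endpoints (controller internals and DTO names are assumptions; the permission key is from the table above):

```kotlin
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RestController

// Hypothetical sketch — UserService / UserResponse / @RequirePermission
// stand in for the real types; the AOP evaluator behind the annotation
// rejects callers without the grant (403).
@RestController
@RequestMapping("/api/v1/identity/users")
class UserController(private val users: UserService) {

    @GetMapping
    @RequirePermission("identity.user.read")
    fun list(): List<UserResponse> = users.findAll()
}
```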
    
    `AuthController` (login, refresh) is deliberately NOT annotated —
    it is in the platform-security public allowlist because login is the
    token-issuing endpoint (chicken-and-egg).
    
    KDoc on the controller class updated to reflect the new auth story
    (removing the stale "authentication deferred to v0.2" comment from
    before P4.1 / P4.3 landed).
    
    Smoke verified end-to-end against real Postgres:
      - Admin (wildcard `admin` role) → GET /users returns 200, POST
        /users returns 201 (new user `jane` created).
      - Unauthenticated GET and POST → 401 Unauthorized from the
        framework's JWT filter before @RequirePermission runs. A
        non-admin user without explicit grants would get 403 from the
        AOP evaluator; tested manually with the admin and anonymous
        cases.
    
    No test changes — the controller unit test is a thin DTO mapper
    test that doesn't exercise the Spring AOP aspect; identity-wide
    authz enforcement is covered by the platform-security tests plus
    the shipping smoke tests. 246 unit tests, all green.
    
    P4.3 is now complete across every core PBC:
      pbc-catalog, pbc-partners, pbc-inventory, pbc-orders-sales,
      pbc-orders-purchase, pbc-finance, pbc-production, pbc-identity.
  • CI verified green for `986f02ce` on both the gradle build and docker
    image jobs.
  • Removes the ext-handling copy/paste that had grown across four PBCs
    (partners, inventory, orders-sales, orders-purchase). Every service
    that wrote the JSONB `ext` column was manually doing the same
    four-step sequence: validate, null-check, serialize with a local
    ObjectMapper, assign to the entity. And every response mapper was
    doing the inverse: check-if-blank, parse, cast, swallow errors.
    
    Net: ~15 lines saved per PBC, one place to change the ext contract
    later (e.g. PII redaction, audit tagging, field-level events), and
    a stable plug-in opt-in mechanism — any plug-in entity that
    implements `HasExt` automatically participates.
    
    New api.v1 surface:
    
      interface HasExt {
          val extEntityName: String     // key into metadata__custom_field
          var ext: String               // the serialized JSONB column
      }
    
    Lives in `org.vibeerp.api.v1.entity` so plug-ins can opt their own
    entities into the same validation path. Zero Spring/Jackson
    dependencies — api.v1 stays clean.
    
    Extended `ExtJsonValidator` (platform-metadata) with two helpers:
    
      fun applyTo(entity: HasExt, ext: Map<String, Any?>?)
          — null-safe; validates; writes canonical JSON to entity.ext.
            Replaces the validate + writeValueAsString + assign triplet
            in every service's create() and update().
    
      fun parseExt(entity: HasExt): Map<String, Any?>
          — returns empty map on blank/corrupt column; response
            mappers never 500 on bad data. Replaces the four identical
            parseExt local functions.
    
    ExtJsonValidator now takes an ObjectMapper via constructor
    injection (Spring Boot's auto-configured bean).
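
The two helpers can be sketched as follows (validation internals elided; `validate` here is a placeholder for the real declared-field checks against `metadata__custom_field`):

```kotlin
import com.fasterxml.jackson.databind.ObjectMapper

// Sketch of the helper pair on ExtJsonValidator.
class ExtJsonValidator(private val mapper: ObjectMapper) {

    fun applyTo(entity: HasExt, ext: Map<String, Any?>?) {
        if (ext == null) return // null-safe: a PATCH without ext keeps the prior value
        val canonical = validate(entity.extEntityName, ext)
        entity.ext = mapper.writeValueAsString(canonical)
    }

    fun parseExt(entity: HasExt): Map<String, Any?> =
        if (entity.ext.isBlank()) emptyMap()
        else runCatching {
            @Suppress("UNCHECKED_CAST")
            mapper.readValue(entity.ext, Map::class.java) as Map<String, Any?>
        }.getOrDefault(emptyMap()) // corrupt column → empty map, mappers never 500

    // Placeholder only; the real rules live in the custom-field declarations.
    private fun validate(entityName: String, ext: Map<String, Any?>): Map<String, Any?> = ext
}
```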
    
    Entities that now implement HasExt (override val extEntityName;
    override var ext; companion object const val ENTITY_NAME):
      - Partner (`partners.Partner` → "Partner")
      - Location (`inventory.Location` → "Location")
      - SalesOrder (`orders_sales.SalesOrder` → "SalesOrder")
      - PurchaseOrder (`orders_purchase.PurchaseOrder` → "PurchaseOrder")
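
Per-entity, the opt-in pattern looks like this (JPA annotations elided; sketch only):

```kotlin
// Sketch of one entity implementing HasExt.
class Partner : HasExt {
    companion object {
        const val ENTITY_NAME = "Partner"
    }

    override val extEntityName: String get() = ENTITY_NAME
    override var ext: String = "{}"
}
```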
    
    Deliberately NOT converted this chunk:
      - WorkOrder (pbc-production) — its ext column has no declared
        fields yet; a follow-up that adds declarations AND the
        HasExt implementation is cleaner than splitting the two.
      - JournalEntry (pbc-finance) — derived state, no ext column.
    
    Services lose:
      - The `jsonMapper: ObjectMapper = ObjectMapper().registerKotlinModule()`
        field (four copies eliminated)
      - The `parseExt(entity): Map` helper function (four copies)
      - The `companion object { const val ENTITY_NAME = ... }` constant
        (moved onto the entity where it belongs)
      - The `val canonicalExt = extValidator.validate(...)` +
        `.also { it.ext = jsonMapper.writeValueAsString(canonicalExt) }`
        create pattern (replaced with one applyTo call)
      - The `if (command.ext != null) { ... }` update pattern
        (applyTo is null-safe)
    
    Unit tests: 6 new cases on ExtJsonValidatorTest cover applyTo and
    parseExt (null-safe path, happy path, failure path, blank column,
    round-trip, malformed JSON). Existing service tests just swap the
    mock setup from stubbing `validate` to stubbing `applyTo` and
    `parseExt` with no-ops.
    
    Smoke verified end-to-end against real Postgres:
      - POST /partners with valid ext (partners_credit_limit,
        partners_industry) → 201, canonical form persisted.
      - GET /partners/by-code/X → 200, ext round-trips.
      - POST with invalid enum value → 400 "value 'x' is not in
        allowed set [printing, publishing, packaging, other]".
      - POST with undeclared key → 400 "ext contains undeclared
        key(s) for 'Partner': [rogue_field]".
      - PATCH with new ext → 200, ext updated.
      - PATCH WITHOUT ext field → 200, prior ext preserved (null-safe
        applyTo).
      - POST /orders/sales-orders with no ext → 201, the create path
        via the shared helper still works.
    
    246 unit tests (+6 over 240), 18 Gradle subprojects.
  • CI verified green for `75a75baa` on both the gradle build and docker
    image jobs.