• Closes the core PBC row of the v1.0 target. Ships pbc-quality as a
    lean v1 recording-only aggregate: any caller that performs a quality
    inspection (inbound goods, in-process work order output, outbound
    shipment) appends an immutable InspectionRecord with a decision
    (APPROVED/REJECTED), inspected/rejected quantities, a free-form
    source reference, and the inspector's principal id.
    
    ## Deliberately narrow v1 scope
    
    pbc-quality does NOT ship:
      - cross-PBC writes (no "rejected stock gets auto-quarantined" rule)
      - event publishing (no InspectionRecordedEvent in api.v1 yet)
      - inspection plans or templates (no "item X requires checks Y, Z")
      - multi-check records (one decision per row; multi-step
        inspections become multiple records)
    
    The rationale is the "follow the consumer" discipline: every seam
    the framework adds has to be driven by a real consumer. With no PBC
    yet subscribing to inspection events or calling into pbc-quality,
    speculatively building those capabilities would be guessing the
    shape. Future chunks that actually need them (e.g. pbc-warehousing
    auto-quarantine on rejection, pbc-production WorkOrder scrap from
    rejected QC) will grow the seam into the shape they need.
    
    Even at this narrow scope pbc-quality delivers real value: a
    queryable, append-only, permission-gated record of every QC
    decision in the system, filterable by source reference or item
    code, and linked to the catalog via CatalogApi.
    
    ## Module contents
    
    - `build.gradle.kts` — new Gradle subproject following the existing
      recipe. api-v1 + platform/persistence + platform/security only;
      no cross-pbc deps (guardrail #9 stays honest).
    - `InspectionRecord` entity — code, item_code, source_reference,
      decision (enum), inspected_quantity, rejected_quantity, inspector
      (principal id as String, same convention as created_by), reason,
      inspected_at. Owns table `quality__inspection_record`. No `ext`
      column in v1 — the aggregate is simple enough that adding Tier 1
      customization now would be speculation; it can be added in one
      edit when a customer asks for it.
    - `InspectionDecision` enum — APPROVED, REJECTED. Deliberately
      two-valued; see the entity KDoc for why "conditional accept" is
      rejected as a shape.
    - `InspectionRecordJpaRepository` — existsByCode, findByCode,
      findBySourceReference, findByItemCode.
    - `InspectionRecordService` — ONE write verb `record`. Inspections
      are immutable; revising means recording a new one with a new code.
      Validates:
        * code is unique
        * source reference non-blank
        * inspected quantity > 0
        * rejected quantity >= 0
        * rejected <= inspected
        * APPROVED ↔ rejected = 0, REJECTED ↔ rejected > 0
        * itemCode resolves via CatalogApi
      Inspector is read from `PrincipalContext.currentOrSystem()` at
      call time so a real HTTP user records their own inspections and
      a background job recording a batch uses a named system principal.
    - `InspectionRecordController` — `/api/v1/quality/inspections`
      with GET list (supports `?sourceReference=` and `?itemCode=`
      query params), GET by id, GET by-code, POST record. Every
      endpoint @RequirePermission-gated.
    - `META-INF/vibe-erp/metadata/quality.yml` — 1 entity, 2
      permissions (`quality.inspection.read`, `quality.inspection.record`),
      1 menu.
    - `distribution/.../db/changelog/pbc-quality/001-quality-init.xml`
      — single table with the full audit column set plus:
        * CHECK decision IN ('APPROVED', 'REJECTED')
        * CHECK inspected_quantity > 0
        * CHECK rejected_quantity >= 0
        * CHECK rejected_quantity <= inspected_quantity
      The application enforces the biconditional (APPROVED ↔ rejected=0)
      because expressing it as a Postgres CHECK would take an unwieldy
      composite condition; the DB enforces the weaker "rejected is
      within bounds" so a direct INSERT can't fabricate nonsense.
    - `settings.gradle.kts`, `distribution/build.gradle.kts`,
      `master.xml` all wired.
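The service-level rules in the `InspectionRecordService` bullet can be sketched in pure Kotlin. This is a hedged illustration, not the shipped code: `RecordInspectionCommand`, the `validate` helper, and `Int` quantities are all assumptions made to keep the sketch self-contained.

```kotlin
// Illustrative sketch of the record() validation rules described above.
// Names and signatures are assumptions; code uniqueness and the CatalogApi
// item lookup need collaborators and are deliberately omitted here.
enum class InspectionDecision { APPROVED, REJECTED }

data class RecordInspectionCommand(
    val code: String,
    val sourceReference: String,
    val inspectedQuantity: Int,
    val rejectedQuantity: Int,
    val decision: InspectionDecision,
)

fun validate(cmd: RecordInspectionCommand): List<String> {
    val errors = mutableListOf<String>()
    if (cmd.sourceReference.isBlank()) errors += "source reference must not be blank"
    if (cmd.inspectedQuantity <= 0) errors += "inspected quantity must be > 0"
    if (cmd.rejectedQuantity < 0) errors += "rejected quantity must be >= 0"
    if (cmd.rejectedQuantity > cmd.inspectedQuantity)
        errors += "rejected quantity (${cmd.rejectedQuantity}) cannot exceed inspected (${cmd.inspectedQuantity})"
    // the biconditional the DB CHECKs leave to the application:
    when (cmd.decision) {
        InspectionDecision.APPROVED ->
            if (cmd.rejectedQuantity != 0)
                errors += "APPROVED inspection must have rejected quantity = 0 (got ${cmd.rejectedQuantity})"
        InspectionDecision.REJECTED ->
            if (cmd.rejectedQuantity <= 0)
                errors += "REJECTED inspection must have rejected quantity > 0"
    }
    return errors
}
```

Collecting every violation into a list (rather than throwing on the first) is a sketch-level choice; the real service may fail fast instead.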
    
    ## Smoke test (fresh DB + running app, as admin)
    
    ```
    POST /api/v1/catalog/items {code: WIDGET-1, baseUomCode: ea}
      → 201
    
    POST /api/v1/quality/inspections
         {code: QC-2026-001, itemCode: WIDGET-1, sourceReference: "WO:WO-001",
          decision: APPROVED, inspectedQuantity: 100, rejectedQuantity: 0}
      → 201 {inspector: <admin principal uuid>, inspectedAt: "..."}
    
    POST /api/v1/quality/inspections
         {code: QC-2026-002, itemCode: WIDGET-1, sourceReference: "WO:WO-002",
          decision: REJECTED, inspectedQuantity: 50, rejectedQuantity: 7,
          reason: "surface scratches detected on 7 units"}
      → 201
    
    GET  /api/v1/quality/inspections?sourceReference=WO:WO-001
      → [{code: QC-2026-001, ...}]
    GET  /api/v1/quality/inspections?itemCode=WIDGET-1
      → [APPROVED, REJECTED]   ← filter works, 2 records
    
    # Negative: APPROVED with positive rejected
    POST /api/v1/quality/inspections
         {decision: APPROVED, rejectedQuantity: 3, ...}
      → 400 "APPROVED inspection must have rejected quantity = 0 (got 3);
             record a REJECTED inspection instead"
    
    # Negative: rejected > inspected
    POST /api/v1/quality/inspections
         {decision: REJECTED, inspectedQuantity: 5, rejectedQuantity: 10, ...}
      → 400 "rejected quantity (10) cannot exceed inspected (5)"
    
    GET  /api/v1/_meta/metadata
      → permissions include ["quality.inspection.read",
                              "quality.inspection.record"]
    ```
    
    The `inspector` field on the created records contains the admin
    user's principal UUID exactly as written by the
    `PrincipalContextFilter` — proving the audit trail end-to-end.
    
    ## Tests
    
    - 9 new unit tests in `InspectionRecordServiceTest`:
      * `record persists an APPROVED inspection with rejected=0`
      * `record persists a REJECTED inspection with positive rejected`
      * `inspector defaults to system when no principal is bound` —
        validates the `PrincipalContext.currentOrSystem()` fallback
      * `record rejects duplicate code`
      * `record rejects non-positive inspected quantity`
      * `record rejects rejected greater than inspected`
      * `APPROVED with positive rejected is rejected`
      * `REJECTED with zero rejected is rejected`
      * `record rejects unknown items via CatalogApi`
    - Total framework unit tests: 297 (was 288), all green.
    
    ## Framework state after this commit
    
    - **20 → 21 Gradle subprojects**
    - **10 of 10 core PBCs live** (pbc-identity, pbc-catalog, pbc-partners,
      pbc-inventory, pbc-warehousing, pbc-orders-sales, pbc-orders-purchase,
      pbc-finance, pbc-production, pbc-quality). The P5.x row of the
      implementation plan is complete at minimal v1 scope.
    - The v1.0 acceptance bar's "core PBC coverage" line is met. Remaining
      v1.0 work is cross-cutting (reports, forms, scheduler, web SPA)
      plus the richer per-PBC v2/v3 scopes.
    
    ## What this unblocks
    
    - **Cross-PBC quality integration** — any PBC that needs to react
      to a quality decision can subscribe when pbc-quality grows its
      event. pbc-warehousing quarantine on rejection is the obvious
      first consumer.
    - **The full buy-make-sell BPMN scenario** — now every step has a
      home: sales → procurement → warehousing → production → quality →
      finance are all live. The big reference-plug-in end-to-end
      flow is unblocked at the PBC level.
    - **Completes the P5.x row** of the implementation plan. Remaining
      v1.0 work is cross-cutting platform units (P1.8 reports, P1.9
      files, P1.10 jobs, P2.2/P2.3 designer/forms) plus the web SPA.
    zichun authored
  • Ninth core PBC. Ships the first-class orchestration aggregate for
    moving stock between locations: a header + lines that represents
    operator intent, and a confirm() verb that atomically posts the
    matching TRANSFER_OUT / TRANSFER_IN ledger pair per line via the
    existing InventoryApi.recordMovement facade.
    
    Takes the framework's core-PBC count to 9 of 10 (only pbc-quality
    remains in the P5.x row).
    
    ## The shape
    
    pbc-warehousing sits above pbc-inventory in the dependency graph:
    it doesn't replace the flat movement ledger, it orchestrates
    multi-row ledger writes with a business-level document on top. A
    DRAFT `warehousing__stock_transfer` row is queued intent (pickers
    haven't started yet); a CONFIRMED row reflects movements that have
    already posted to the `inventory__stock_movement` ledger. Each
    confirmed line becomes two ledger rows:
    
      TRANSFER_OUT(itemCode, fromLocationCode, -quantity, ref="TR:<code>")
      TRANSFER_IN (itemCode, toLocationCode,    quantity, ref="TR:<code>")
    
    All rows of one confirm call run inside ONE @Transactional method,
    so a failure anywhere — unknown item, unknown location, a balance
    that would go below zero — rolls back both halves of every line.
    There is no half-confirmed transfer.
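A minimal in-memory sketch of that all-or-nothing pairing: `Ledger` stands in for the real `InventoryApi.recordMovement` facade, and the rollback is emulated by truncating staged rows where the shipped code relies on Spring's transaction rollback. All names here are illustrative assumptions.

```kotlin
// Illustrative only: one TRANSFER_OUT and one TRANSFER_IN row per line,
// applied all-or-nothing.
data class Line(val itemCode: String, val quantity: Int)
data class Movement(val itemCode: String, val location: String, val delta: Int, val reference: String)

class Ledger {
    val rows = mutableListOf<Movement>()
    fun balance(item: String, loc: String) =
        rows.filter { it.itemCode == item && it.location == loc }.sumOf { it.delta }
    fun record(m: Movement) {
        require(balance(m.itemCode, m.location) + m.delta >= 0) {
            "balance for '${m.itemCode}' at ${m.location} would go below zero"
        }
        rows += m
    }
}

fun confirm(ledger: Ledger, code: String, from: String, to: String, lines: List<Line>) {
    val mark = ledger.rows.size
    try {
        for (line in lines) {
            // OUT first per line, so a below-zero error aborts before the
            // destination location is touched
            ledger.record(Movement(line.itemCode, from, -line.quantity, "TR:$code"))
            ledger.record(Movement(line.itemCode, to, line.quantity, "TR:$code"))
        }
    } catch (e: IllegalArgumentException) {
        // stand-in for Spring's rollback: drop every row this confirm staged
        while (ledger.rows.size > mark) ledger.rows.removeAt(ledger.rows.size - 1)
        throw e
    }
}
```

A failed confirm leaves `ledger.rows` exactly as it was, mirroring the "no half-confirmed transfer" guarantee.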
    
    ## Module contents
    
    - `build.gradle.kts` — new Gradle subproject, api-v1 + platform/*
      dependencies only. No cross-PBC dependency (guardrail #9 stays
      honest; CatalogApi + InventoryApi both come in via api.v1.ext).
    - `StockTransfer` entity — header with code, from/to location
      codes, status (DRAFT/CONFIRMED/CANCELLED), transfer_date, note,
      OneToMany<StockTransferLine>. Table name
      `warehousing__stock_transfer`.
    - `StockTransferLine` entity — lineNo, itemCode, quantity.
      `transfer_id → warehousing__stock_transfer(id) ON DELETE CASCADE`,
      unique `(transfer_id, line_no)`.
    - `StockTransferJpaRepository` — existsByCode + findByCode.
    - `StockTransferService` — create / confirm / cancel + three read
      methods. @Transactional service-level; all state transitions run
      through @Transactional methods so the event-bus MANDATORY
      propagation (if/when a pbc-warehousing event is added later) has
      a transaction to join. Business invariants:
        * code is unique (existsByCode short-circuit)
        * from != to (enforced in code AND in the Liquibase CHECK)
        * at least one line
        * each line: positive line_no, unique per transfer, positive
          quantity, itemCode must resolve via CatalogApi.findItemByCode
        * confirm requires DRAFT; writes OUT-first-per-line so a
          balance-goes-negative error aborts before touching the
          destination location
        * cancel requires DRAFT; CONFIRMED transfers are terminal
          (reverse by creating a NEW transfer in the opposite direction,
          matching the document-discipline rule every other PBC uses)
    - `StockTransferController` — `/api/v1/warehousing/stock-transfers`
      with GET list, GET by id, GET by-code, POST create, POST
      {id}/confirm, POST {id}/cancel. Every endpoint
      @RequirePermission-gated using the keys declared in the metadata
      YAML. Matches the shape of pbc-orders-sales, pbc-orders-purchase,
      pbc-production.
    - DTOs use the established pattern — jakarta.validation on the
      request, response mapping via extension functions.
    - `META-INF/vibe-erp/metadata/warehousing.yml` — 1 entity, 4
      permissions, 1 menu. Loaded by MetadataLoader at boot, visible
      via `GET /api/v1/_meta/metadata`.
    - `distribution/src/main/resources/db/changelog/pbc-warehousing/001-warehousing-init.xml`
      — creates both tables with the full audit column set, state
      CHECK constraint, locations-distinct CHECK, unique
      (transfer_id, line_no) index, quantity > 0 CHECK, item_code
      index for cross-PBC grep.
    - `settings.gradle.kts`, `distribution/build.gradle.kts`,
      `master.xml` all wired.
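The create-side invariants from the `StockTransferService` bullet (from != to, at least one line, unique positive line numbers, positive quantities) can be sketched as a plain function. This is an assumed shape for illustration; the CatalogApi item-resolution check is omitted because it needs a collaborator.

```kotlin
// Hedged sketch of the create() invariants listed above; names are
// illustrative, not copied from the shipped service.
data class TransferLine(val lineNo: Int, val itemCode: String, val quantity: Int)

fun validateCreate(from: String, to: String, lines: List<TransferLine>): List<String> {
    val errors = mutableListOf<String>()
    if (from == to) errors += "from and to locations must differ"
    if (lines.isEmpty()) errors += "at least one line is required"
    if (lines.any { it.lineNo <= 0 }) errors += "line numbers must be positive"
    if (lines.map { it.lineNo }.toSet().size != lines.size)
        errors += "line numbers must be unique per transfer"
    if (lines.any { it.quantity <= 0 }) errors += "quantities must be positive"
    return errors
}
```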
    
    ## Smoke test (fresh DB + running app)
    
    ```
    # seed
    POST /api/v1/catalog/items   {code: PAPER-A4, baseUomCode: sheet}
    POST /api/v1/catalog/items   {code: PAPER-A3, baseUomCode: sheet}
    POST /api/v1/inventory/locations {code: WH-MAIN, type: WAREHOUSE}
    POST /api/v1/inventory/locations {code: WH-SHOP, type: WAREHOUSE}
    POST /api/v1/inventory/movements {itemCode: PAPER-A4, locationId: <WH-MAIN>, delta: 100, reason: RECEIPT}
    POST /api/v1/inventory/movements {itemCode: PAPER-A3, locationId: <WH-MAIN>, delta: 50,  reason: RECEIPT}
    
    # exercise the new PBC
    POST /api/v1/warehousing/stock-transfers
         {code: TR-001, fromLocationCode: WH-MAIN, toLocationCode: WH-SHOP,
          lines: [{lineNo: 1, itemCode: PAPER-A4, quantity: 30},
                  {lineNo: 2, itemCode: PAPER-A3, quantity: 10}]}
      → 201 DRAFT
    POST /api/v1/warehousing/stock-transfers/<id>/confirm
      → 200 CONFIRMED
    
    # verify balances via the raw DB (the HTTP stock-balance endpoint
    # has a separate unrelated bug returning 500; the ledger state is
    # what this commit is proving)
    SELECT item_code, location_id, quantity FROM inventory__stock_balance;
      PAPER-A4 / WH-MAIN →  70   ← debited 30
      PAPER-A4 / WH-SHOP →  30   ← credited 30
      PAPER-A3 / WH-MAIN →  40   ← debited 10
      PAPER-A3 / WH-SHOP →  10   ← credited 10
    
    SELECT item_code, location_id, reason, delta, reference
      FROM inventory__stock_movement ORDER BY occurred_at;
      PAPER-A4 / WH-MAIN / TRANSFER_OUT / -30 / TR:TR-001
      PAPER-A4 / WH-SHOP / TRANSFER_IN  /  30 / TR:TR-001
      PAPER-A3 / WH-MAIN / TRANSFER_OUT / -10 / TR:TR-001
      PAPER-A3 / WH-SHOP / TRANSFER_IN  /  10 / TR:TR-001
    ```
    
    Four rows all tagged `TR:TR-001`. A grep of the ledger attributes
    both halves of each line to the single source transfer document.
    
    ## Transactional rollback test (in the same smoke run)
    
    ```
    # ask for more than exists
    POST /api/v1/warehousing/stock-transfers
         {code: TR-002, fromLocationCode: WH-MAIN, toLocationCode: WH-SHOP,
          lines: [{lineNo: 1, itemCode: PAPER-A4, quantity: 1000}]}
      → 201 DRAFT
    POST /api/v1/warehousing/stock-transfers/<id>/confirm
      → 400 "stock movement would push balance for 'PAPER-A4' at
             location <WH-MAIN> below zero (current=70.0000, delta=-1000.0000)"
    
    # assert TR-002 is still DRAFT
    GET /api/v1/warehousing/stock-transfers/<id> → status: DRAFT  ← NOT flipped to CONFIRMED
    
    # assert the ledger still has exactly 6 rows (no partial writes)
    SELECT count(*) FROM inventory__stock_movement; → 6
    ```
    
    The failed confirm left no residue: status stayed DRAFT, and the
    ledger count is unchanged at 6 (the 2 RECEIPT seeds + the 4
    TRANSFER_OUT/IN from TR-001). Propagation.REQUIRED + Spring's
    default rollback-on-unchecked-exception semantics do exactly what
    the KDoc promises.
    
    ## State-machine guards
    
    ```
    POST /api/v1/warehousing/stock-transfers/<confirmed-id>/confirm
      → 400 "cannot confirm stock transfer TR-001 in status CONFIRMED;
             only DRAFT can be confirmed"
    
    POST /api/v1/warehousing/stock-transfers/<confirmed-id>/cancel
      → 400 "cannot cancel stock transfer TR-001 in status CONFIRMED;
             only DRAFT can be cancelled — reverse a confirmed transfer
             by creating a new one in the other direction"
    ```
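The guard behind both 400s is a plain DRAFT-only status check. A hedged sketch — the enum values come from the text above, while the helper shape is an assumption:

```kotlin
// Illustrative transition guard: DRAFT is the only movable status;
// CONFIRMED and CANCELLED are terminal.
enum class TransferStatus { DRAFT, CONFIRMED, CANCELLED }

fun transition(code: String, current: TransferStatus, target: TransferStatus): TransferStatus {
    require(current == TransferStatus.DRAFT) {
        "cannot move stock transfer $code in status $current; only DRAFT can be confirmed or cancelled"
    }
    require(target != TransferStatus.DRAFT) { "DRAFT is the initial status, not a target" }
    return target
}
```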
    
    ## Tests
    
    - 10 new unit tests in `StockTransferServiceTest`:
      * `create persists a DRAFT transfer when everything validates`
      * `create rejects duplicate code`
      * `create rejects same from and to location`
      * `create rejects an empty line list`
      * `create rejects duplicate line numbers`
      * `create rejects non-positive quantities`
      * `create rejects unknown items via CatalogApi`
      * `confirm writes an atomic TRANSFER_OUT + TRANSFER_IN pair per line`
        — uses `verifyOrder` to assert OUT-first-per-line dispatch order
      * `confirm refuses a non-DRAFT transfer`
      * `cancel refuses a CONFIRMED transfer`
      * `cancel flips a DRAFT transfer to CANCELLED`
    - Total framework unit tests: 288 (was 278), all green.
    
    ## What this unblocks
    
    - **Real warehouse workflows** — confirm a transfer from a picker
      UI (R1 is pending), driven by a BPMN that hands the confirm to a
      TaskHandler once the physical move is complete.
    - **pbc-quality (P5.8, last remaining core PBC)** — inspection
      plans + results + holds. Holds would typically quarantine stock
      by moving it to a QUARANTINE location via a stock transfer,
      which is the natural consumer for this aggregate.
    - **Stocktakes (physical inventory reconciliation)** — future
      pbc-warehousing verb that compares counted vs recorded and posts
      the differences as ADJUSTMENT rows; shares the same
      `recordMovement` primitive.
  • …duction auto-creates WorkOrder
    
    First end-to-end cross-PBC workflow driven entirely from a customer
    plug-in through api.v1 surfaces. A printing-shop BPMN kicks off a
    TaskHandler that publishes a generic api.v1 event; pbc-production
    reacts by creating a DRAFT WorkOrder. The plug-in has zero
    compile-time coupling to pbc-production, and pbc-production has zero
    knowledge the plug-in exists.
    
    ## Why an event, not a facade
    
    Two options were on the table for "how does a plug-in ask
    pbc-production to create a WorkOrder":
    
      (a) add a new cross-PBC facade `api.v1.ext.production.ProductionApi`
          with a `createWorkOrder(command)` method
      (b) add a generic `WorkOrderRequestedEvent` in `api.v1.event.production`
          that anyone can publish — this commit
    
    Facade pattern (a) is what InventoryApi.recordMovement and
    CatalogApi.findItemByCode use: synchronous, in-transaction,
    caller-blocks-on-completion. Event pattern (b) is what
    SalesOrderConfirmedEvent → SalesOrderConfirmedSubscriber uses:
    asynchronous over the bus, still in-transaction (the bus uses
    `Propagation.MANDATORY` with synchronous delivery so a failure
    rolls everything back), but the caller doesn't need a typed result.
    
    Option (b) wins for plug-in → pbc-production:
    
    - Plug-in compile-time surface stays identical: plug-ins already
      import `api.v1.event.*` to publish. No new api.v1.ext package.
      Zero new plug-in dependency.
    - The outbox gets the row for free — a crash between publish and
      delivery replays cleanly from `platform__event_outbox`.
    - A second customer plug-in shipping a different flow that ALSO
      wants to auto-spawn work orders doesn't need a second facade, just
      publishes the same event. pbc-scheduling (future) can subscribe
      to the same channel without duplicating code.
    
    The synchronous facade pattern stays the right tool for cross-PBC
    operations the caller needs to observe (read-throughs, inventory
    debits that must block the current transaction). Creating a DRAFT
    work order is a fire-and-trust operation — the event shape fits.
    
    ## What landed
    
    ### api.v1 — WorkOrderRequestedEvent
    
    New event class `org.vibeerp.api.v1.event.production.WorkOrderRequestedEvent`
    with four required fields:
      - `code`: desired work-order code (must be unique globally;
        convention is to bake the source reference into it so duplicate
        detection is trivial, e.g. `WO-FROM-PRINTINGSHOP-Q-007`)
      - `outputItemCode` + `outputQuantity`: what to produce
      - `sourceReference`: opaque free-form pointer used in logs and
        the outbox audit trail. Example values:
        `plugin:printing-shop:quote:Q-007`,
        `pbc-orders-sales:SO-2026-001:L2`
    
    The class is a `DomainEvent` (not a `WorkOrderEvent` subclass — the
    existing `WorkOrderEvent` sealed interface is for LIFECYCLE events
    published BY pbc-production, not for inbound requests). `init`
    validators reject blank strings and non-positive quantities so a
    malformed event fails fast at publish time rather than at the
    subscriber.
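From the description above, the event's shape is roughly the following. This sketch omits the real `DomainEvent` marker interface so it stays self-contained, and `Double` for the quantity is an assumption:

```kotlin
// Hedged reconstruction of the four-field event with fail-fast init
// validators, per the description above; not the shipped class.
data class WorkOrderRequestedEvent(
    val code: String,
    val outputItemCode: String,
    val outputQuantity: Double,
    val sourceReference: String,
) {
    init {
        require(code.isNotBlank()) { "code must not be blank" }
        require(outputItemCode.isNotBlank()) { "outputItemCode must not be blank" }
        require(outputQuantity > 0) { "outputQuantity must be positive" }
        require(sourceReference.isNotBlank()) { "sourceReference must not be blank" }
    }
}
```

Because the checks live in `init`, a malformed event can never be constructed, let alone published — the failure surfaces at the publisher, not at the subscriber.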
    
    ### pbc-production — WorkOrderRequestedSubscriber
    
    New `@Component` in `pbc/pbc-production/.../event/WorkOrderRequestedSubscriber.kt`.
    Subscribes in `@PostConstruct` via the typed-class `EventBus.subscribe`
    overload (same pattern as `SalesOrderConfirmedSubscriber` + the six
    pbc-finance order subscribers). The subscriber:
    
      1. Looks up `workOrders.findByCode(event.code)` as the idempotent
         short-circuit. If a WorkOrder with that code already exists
         (outbox replay, future async bus retry, developer re-running the
         same BPMN process), the subscriber logs at DEBUG and returns.
         **Second execution of the same BPMN produces the same outbox row
         which the subscriber then skips — the database ends up with
         exactly ONE WorkOrder regardless of how many times the process
         runs.**
      2. Calls `WorkOrderService.create(CreateWorkOrderCommand(...))` with
         the event's fields. `sourceSalesOrderCode` is null because this
         is the generic path, not the SO-driven one.
    
    Why this is a SECOND subscriber rather than extending
    `SalesOrderConfirmedSubscriber`: the two events serve different
    producers. `SalesOrderConfirmedEvent` is pbc-orders-sales-specific
    and requires a round-trip through `SalesOrdersApi.findByCode` to
    fetch the lines; `WorkOrderRequestedEvent` carries everything the
    subscriber needs inline. Collapsing them would mean the generic
    path inherits the SO-flow's SO-specific lookup and short-circuit
    logic that doesn't apply to it.
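The idempotent short-circuit in step 1 can be sketched with an in-memory stand-in. `WorkOrderStore` replaces the repository and `WorkOrderService`; everything here is illustrative, not the shipped subscriber:

```kotlin
// Illustrative sketch of the findByCode-first idempotency guard.
class WorkOrderStore {
    private val byCode = mutableMapOf<String, String>() // code -> outputItemCode
    fun findByCode(code: String): String? = byCode[code]
    fun create(code: String, outputItemCode: String) { byCode[code] = outputItemCode }
    val size get() = byCode.size
}

fun onWorkOrderRequested(store: WorkOrderStore, code: String, outputItemCode: String) {
    if (store.findByCode(code) != null) {
        // outbox replay / re-run of the same BPMN: log at DEBUG and return
        return
    }
    store.create(code, outputItemCode)
}
```

Delivering the same event N times leaves exactly one work order, which is the property the bolded sentence above claims.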
    
    ### reference printing-shop plug-in — CreateWorkOrderFromQuoteTaskHandler
    
    New plug-in TaskHandler in
    `reference-customer/plugin-printing-shop/.../workflow/CreateWorkOrderFromQuoteTaskHandler.kt`.
    Captures the `PluginContext` via constructor — same pattern as
    `PlateApprovalTaskHandler` landed in `7b2ab34d` — and from inside
    `execute`:
    
      1. Reads `quoteCode`, `itemCode`, `quantity` off the process variables
         (`quantity` accepts Number or String since Flowable's variable
         coercion is flexible).
      2. Derives `workOrderCode = "WO-FROM-PRINTINGSHOP-$quoteCode"` and
         `sourceReference = "plugin:printing-shop:quote:$quoteCode"`.
      3. Logs via `context.logger.info(...)` — the line is tagged
         `[plugin:printing-shop]` by the framework's `Slf4jPluginLogger`.
      4. Publishes `WorkOrderRequestedEvent` via `context.eventBus.publish(...)`.
         This is the first time a plug-in TaskHandler publishes a cross-PBC
         event from inside a workflow — proving the event-bus leg of the
         handler-context pattern works end-to-end.
      5. Writes `workOrderCode` + `workOrderRequested=true` back to the
         process variables so a downstream BPMN step or the HTTP caller
         can see the derived code.
    
    The handler is registered in `PrintingShopPlugin.start(context)`
    alongside `PlateApprovalTaskHandler`:
    
        context.taskHandlers.register(PlateApprovalTaskHandler(context))
        context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context))
    
    Teardown via `unregisterAllByOwner("printing-shop")` still works
    unchanged — the scoped registrar tracks both handlers.
    
    ### reference printing-shop plug-in — quote-to-work-order.bpmn20.xml
    
    New BPMN file `processes/quote-to-work-order.bpmn20.xml` in the
    plug-in JAR. Single synchronous service task, process definition
    key `plugin-printing-shop-quote-to-work-order`, service task id
    `printing_shop.quote.create_work_order` (matches the handler key).
    Auto-deployed by the host's `PluginProcessDeployer` at plug-in
    start — the printing-shop plug-in now ships two BPMNs bundled into
    one Flowable deployment, both under category `printing-shop`.
    
    ## Smoke test (fresh DB)
    
    ```
    $ docker compose down -v && docker compose up -d db
    $ ./gradlew :distribution:bootRun &
    ...
    registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' ...
    registered TaskHandler 'printing_shop.quote.create_work_order' owner='printing-shop' ...
    [plugin:printing-shop] registered 2 TaskHandlers: printing_shop.plate.approve, printing_shop.quote.create_work_order
    PluginProcessDeployer: plug-in 'printing-shop' deployed 2 BPMN resource(s) as Flowable deploymentId='1e5c...':
      [processes/quote-to-work-order.bpmn20.xml, processes/plate-approval.bpmn20.xml]
    pbc-production subscribed to WorkOrderRequestedEvent via EventBus.subscribe (typed-class overload)
    
    # 1) seed a catalog item
    $ curl -X POST /api/v1/catalog/items
           {"code":"BOOK-HARDCOVER","name":"Hardcover book","itemType":"GOOD","baseUomCode":"ea"}
      → 201 BOOK-HARDCOVER
    
    # 2) start the plug-in's quote-to-work-order BPMN
    $ curl -X POST /api/v1/workflow/process-instances
           {"processDefinitionKey":"plugin-printing-shop-quote-to-work-order",
            "variables":{"quoteCode":"Q-007","itemCode":"BOOK-HARDCOVER","quantity":500}}
      → 201 {"ended":true,
             "variables":{"quoteCode":"Q-007",
                          "itemCode":"BOOK-HARDCOVER",
                          "quantity":500,
                          "workOrderCode":"WO-FROM-PRINTINGSHOP-Q-007",
                          "workOrderRequested":true}}
    
    Log lines observed:
      [plugin:printing-shop] quote Q-007: publishing WorkOrderRequestedEvent
         (code=WO-FROM-PRINTINGSHOP-Q-007, item=BOOK-HARDCOVER, qty=500)
      [production] WorkOrderRequestedEvent creating work order 'WO-FROM-PRINTINGSHOP-Q-007'
         for item 'BOOK-HARDCOVER' x 500 (source='plugin:printing-shop:quote:Q-007')
    
    # 3) verify the WorkOrder now exists in pbc-production
    $ curl /api/v1/production/work-orders
      → [{"id":"029c2482-...",
          "code":"WO-FROM-PRINTINGSHOP-Q-007",
          "outputItemCode":"BOOK-HARDCOVER",
          "outputQuantity":500.0,
          "status":"DRAFT",
          "sourceSalesOrderCode":null,
          "inputs":[], "ext":{}}]
    
    # 4) run the SAME BPMN a second time — verify idempotent
    $ curl -X POST /api/v1/workflow/process-instances
           {same body as above}
      → 201  (process ends, workOrderRequested=true, new event published + delivered)
    $ curl /api/v1/production/work-orders
      → count=1, still only WO-FROM-PRINTINGSHOP-Q-007
    ```
    
    Every single step runs through an api.v1 public surface. No framework
    core code knows the printing-shop plug-in exists; no plug-in code knows
    pbc-production exists. They meet on the event bus, and the outbox
    guarantees the delivery.
    
    ## Tests
    
    - 3 new tests in `pbc-production/.../WorkOrderRequestedSubscriberTest`:
      * `subscribe registers one listener for WorkOrderRequestedEvent`
      * `handle creates a work order from the event fields` — captures the
        `CreateWorkOrderCommand` and asserts every field
      * `handle short-circuits when a work order with that code already exists`
        — proves the idempotent branch
    - Total framework unit tests: 278 (was 275), all green.
    
    ## What this unblocks
    
    - **Richer multi-step BPMNs** in the plug-in that chain plate
      approval + quote → work order + production start + completion.
    - **Plug-in-owned Quote entity** — the printing-shop plug-in can now
      introduce a `plugin_printingshop__quote` table via its own Liquibase
      changelog and have its HTTP endpoint create quotes that kick off the
      quote-to-work-order workflow automatically (or on operator confirm).
    - **pbc-production routings/operations (v3)** — each operation becomes
      a BPMN step, potentially driven by plug-ins contributing custom
      steps via the same TaskHandler + event seam.
    - **Second reference plug-in** — any new customer plug-in can publish
      `WorkOrderRequestedEvent` from its own workflows without any
      framework change.
    
    ## Non-goals (parking lot)
    
    - The handler publishes but does not also read pbc-production state
      back. A future "wait for WO completion" BPMN step could subscribe
      to `WorkOrderCompletedEvent` inside a user-task + signal flow, but
      the engine's signal/correlation machinery isn't wired to
      plug-ins yet.
    - Quote entity + HTTP + real business logic. REF.1 proves the
      cross-PBC event seam; the richer quote lifecycle is a separate
      chunk that can layer on top of this.
    - Transactional rollback integration test. The synchronous bus +
      `Propagation.MANDATORY` guarantees it, but an explicit test that
      a subscriber throw rolls back both the ledger-adjacent writes and
      the Flowable process state would be worth adding with a real
      test container run.
  • Proves out the "handler-side plug-in context access" pattern: a
    plug-in's TaskHandler captures the PluginContext through its
    constructor when the plug-in instantiates it inside `start(context)`,
    and then uses `context.jdbc`, `context.logger`, etc. from inside
    `execute` the same way the plug-in's HTTP lambdas do. Zero new
    api.v1 surface was needed — the plug-in decides whether a handler
    takes a context or not, and a pure handler simply omits the
    constructor parameter.
    
    ## Why this pattern and not a richer TaskContext
    
    The alternatives were:
      (a) add a `PluginContext` field (or a narrowed projection of it)
          to api.v1 `TaskContext`, threading the host-owned context
          through the workflow engine
      (b) capture the context in the plug-in's handler constructor —
          this commit
    
    Option (a) would have forced every TaskHandler author — core PBC
    handlers too, not just plug-in ones — to reason about a per-plug-in
    context that wouldn't make sense for core PBCs. It would also have
    coupled api.v1 to the plug-in machinery in a way that leaks into
    every handler implementation.
    
    Option (b) is a pure plug-in-local pattern. A pure handler:
    
        class PureHandler : TaskHandler { ... }
    
    and a stateful handler look identical except for one constructor
    parameter:
    
        class StatefulHandler(private val context: PluginContext) : TaskHandler {
            override fun execute(task, ctx) {
                context.jdbc.update(...)
                context.logger.info(...)
            }
        }
    
    and both register the same way:
    
        context.taskHandlers.register(PureHandler())
        context.taskHandlers.register(StatefulHandler(context))
    
    The framework's `TaskHandlerRegistry`, `DispatchingJavaDelegate`,
    and `DelegateTaskContext` stay unchanged. Plug-in teardown still
    strips handlers via `unregisterAllByOwner(pluginId)` because
    registration still happens through the scoped registrar inside
    `start(context)`.
    
    ## What PlateApprovalTaskHandler now does
    
    Before this commit, the handler was a pure function that wrote
    `plateApproved=true` + metadata to the process variables and
    didn't touch the DB. Now it:
    
      1. Parses `plateId` out of the process variables as a UUID
         (fail-fast on non-UUID).
      2. Calls `context.jdbc.update` to set the plate row's `status`
         from 'DRAFT' to 'APPROVED', guarded by an explicit
         `WHERE id=:id AND status='DRAFT'`. The guard makes a second
         invocation a no-op (rowsUpdated=0) rather than silently
         overwriting a later status.
      3. Logs via the plug-in's PluginLogger — "plate {id} approved by
         user:admin (rows updated: 1)". Log lines are tagged
         `[plugin:printing-shop]` by the framework's Slf4jPluginLogger.
      4. Emits process output variables: `plateApproved=true`,
         `plateId=<uuid>`, `approvedBy=<principal label>`, `approvedAt=<instant>`,
         and `rowsUpdated=<count>` so callers can see whether the
         approval actually changed state.
    
    ## Smoke test (fresh DB, full end-to-end loop)
    
    ```
    POST /api/v1/plugins/printing-shop/plates
         {"code":"PLATE-042","name":"Red cover plate","widthMm":320,"heightMm":480}
      → 201 {"id":"0bf577c9-...","status":"DRAFT",...}
    
    POST /api/v1/workflow/process-instances
         {"processDefinitionKey":"plugin-printing-shop-plate-approval",
          "variables":{"plateId":"0bf577c9-..."}}
      → 201 {"ended":true,
             "variables":{"plateId":"0bf577c9-...",
                          "rowsUpdated":1,
                          "approvedBy":"user:admin",
                          "approvedAt":"2026-04-09T05:01:01.779369Z",
                          "plateApproved":true}}
    
    GET  /api/v1/plugins/printing-shop/plates/0bf577c9-...
      → 200 {"id":"0bf577c9-...","status":"APPROVED", ...}
           ^^^^ note: was DRAFT a moment ago, NOW APPROVED — persisted to
           plugin_printingshop__plate via the handler's context.jdbc.update
    
    POST /api/v1/workflow/process-instances   (same plateId, second run)
      → 201 {"variables":{"rowsUpdated":0,"plateApproved":true,...}}
           ^^^^ idempotent guard: the WHERE status='DRAFT' clause prevents
           double-updates, rowsUpdated=0 on the re-run
    ```
    
    This is the first cross-cutting end-to-end business flow in the
    framework driven entirely through the public surfaces:
      1. Plug-in HTTP endpoint writes a domain row
      2. Workflow HTTP endpoint starts a BPMN process
      3. Plug-in-contributed BPMN (deployed via PluginProcessDeployer)
         routes to a plug-in-contributed TaskHandler (registered via
         context.taskHandlers)
      4. Handler mutates the same plug-in-owned table via context.jdbc
      5. Plug-in HTTP endpoint reads the new state
    Every step uses only api.v1. Zero framework core code knows the
    plug-in exists.
    
    ## Non-goals (parking lot)
    
    - Emitting an event from the handler. The next step in the
      plug-in workflow story is for a handler to publish a domain
      event via `context.eventBus.publish(...)` so OTHER subscribers
      (e.g. pbc-production waiting on a PlateApproved event) can react.
      This commit stays narrow: the handler only mutates its own plug-in
      state.
    - Transaction scope of the handler's DB write relative to the
      Flowable engine's process-state persistence. Today both go
      through the host DataSource and Spring transaction manager that
      Flowable auto-configures, so a handler throw rolls everything
      back — verified by walking the code path. An explicit test of
      transactional rollback lands with REF.1 when the handler takes
      on real business logic.
    zichun authored
  • Completes the plug-in side of the embedded Flowable story. The P2.1
    core made plug-ins able to register TaskHandlers; this chunk makes
    them able to ship the BPMN processes those handlers serve.
    
    ## Why Flowable's built-in auto-deployer couldn't do it
    
    Flowable's Spring Boot starter scans the host classpath at engine
    startup for `classpath[*]:/processes/[*].bpmn20.xml` and auto-deploys
    every hit (the literal glob is paraphrased because the Kotlin KDoc
    comment below would otherwise treat the embedded slash-star as the
    start of a nested comment — feedback memory "Kotlin KDoc nested-
    comment trap"). PF4J plug-ins load through an isolated child
    classloader that is NOT visible to that scan, so a `processes/*.bpmn20.xml`
    resource shipped inside a plug-in JAR is never seen. This chunk adds
    a dedicated host-side deployer that opens each plug-in JAR file
    directly (same JarFile walk pattern as
    `MetadataLoader.loadFromPluginJar`) and hand-registers the BPMNs
    with the Flowable `RepositoryService`.
    
    ## Mechanism
    
    ### New PluginProcessDeployer (platform-workflow)
    
    One Spring bean, two methods:
    
    - `deployFromPlugin(pluginId, jarPath): String?` — walks the JAR,
      collects every entry whose name starts with `processes/` and ends
      with `.bpmn20.xml` or `.bpmn`, and bundles the whole set into one
      Flowable `Deployment` named `plugin:<id>` with `category = pluginId`.
      Returns the deployment id or null (missing JAR / no BPMN resources).
      One deployment per plug-in keeps undeploy atomic and makes the
      teardown query unambiguous.
    - `undeployByPlugin(pluginId): Int` — runs
      `createDeploymentQuery().deploymentCategory(pluginId).list()` and
      calls `deleteDeployment(id, cascade=true)` on each hit. Cascading
      removes process instances and history rows along with the
      deployment — "uninstalling a plug-in makes it disappear". Idempotent:
      a second call returns 0.
    
    The deployer reads the JAR entries into byte arrays inside the
    JarFile's `use` block and then passes the bytes to
    `DeploymentBuilder.addBytes(name, bytes)` outside the block, so the
    jar handle is already closed by the time Flowable sees the
    deployment. No input-stream lifetime tangles.
    
    ### VibeErpPluginManager wiring
    
    - New constructor dependency on `PluginProcessDeployer`.
    - Deploy happens AFTER `start(context)` succeeds. The ordering matters
      because a plug-in can only register its TaskHandlers during
      `start(context)`, and a deployed BPMN whose service-task delegate
      expression resolves to a key with no matching handler would still
      deploy (Flowable only resolves delegates at process-start time).
      Registering handlers first is the safer default: the moment the
      deployment lands, every referenced handler is already in the
      TaskHandlerRegistry.
    - BPMN deployment failure AFTER a successful `start(context)` now
      fully unwinds the plug-in state: the manager calls `instance.stop()`,
      removes the plug-in from the `started` list, strips its endpoints
      and TaskHandlers, and calls `undeployByPlugin` (belt and suspenders —
      the deploy attempt may have partially succeeded). That mirrors the
      existing start-failure unwinding, so the framework never ends up
      with a half-installed plug-in after any step throws.
    - `destroy()` calls `undeployByPlugin(pluginId)` alongside the
      existing `unregisterAllByOwner(pluginId)`.
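
    A minimal sketch of the ordering-and-unwinding contract, with every collaborator reduced to a callback (the real manager touches the registries and the deployer directly):

    ```kotlin
    fun interface Step { fun run() }

    class StartThenDeploySketch(
        private val startPlugin: Step, // registers TaskHandlers inside start(context)
        private val deployBpmn: Step,  // PluginProcessDeployer.deployFromPlugin
        private val unwind: Step       // stop() + strip endpoints/handlers + undeployByPlugin
    ) {
        fun start() {
            startPlugin.run()          // handlers must exist BEFORE the deployment lands
            try {
                deployBpmn.run()
            } catch (e: Exception) {
                unwind.run()           // no half-installed plug-in survives a throw
                throw e
            }
        }
    }
    ```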
    
    ### Reference plug-in BPMN
    
    `reference-customer/plugin-printing-shop/src/main/resources/processes/plate-approval.bpmn20.xml`
    — a minimal two-task process (`start` → serviceTask → `end`) whose
    serviceTask id is `printing_shop.plate.approve`, matching the
    PlateApprovalTaskHandler key landed in the previous commit. Process
    definition key is `plugin-printing-shop-plate-approval` (distinct
    from the serviceTask id because BPMN 2.0 requires element ids to be
    unique per document — same separation used for the core ping
    process).
    
    ## Smoke test (fresh DB, plug-in staged)
    
    ```
    $ docker compose down -v && docker compose up -d db
    $ ./gradlew :distribution:bootRun &
    ...
    registered TaskHandler 'vibeerp.workflow.ping' owner='core' ...
    TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
    ...
    plug-in 'printing-shop' Liquibase migrations applied successfully
    [plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
    registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' ...
    [plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve
    PluginProcessDeployer: plug-in 'printing-shop' deployed 1 BPMN resource(s) as Flowable deploymentId='4e9f...': [processes/plate-approval.bpmn20.xml]
    
    $ curl /api/v1/workflow/definitions (as admin)
    [
      {"key":"plugin-printing-shop-plate-approval",
       "name":"Printing shop — plate approval",
       "version":1,
       "deploymentId":"4e9f85a6-33cf-11f1-acaa-1afab74ef3b4",
       "resourceName":"processes/plate-approval.bpmn20.xml"},
      {"key":"vibeerp-workflow-ping",
       "name":"vibe_erp workflow ping",
       "version":1,
       "deploymentId":"4f48...",
       "resourceName":"vibeerp-ping.bpmn20.xml"}
    ]
    
    $ curl -X POST /api/v1/workflow/process-instances
             {"processDefinitionKey":"plugin-printing-shop-plate-approval",
              "variables":{"plateId":"PLATE-007"}}
      → {"processInstanceId":"5b1b...",
         "ended":true,
         "variables":{"plateId":"PLATE-007",
                      "plateApproved":true,
                      "approvedBy":"user:admin",
                      "approvedAt":"2026-04-09T04:48:30.514523Z"}}
    
    $ kill -TERM <pid>
    [ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
    [ionShutdownHook] PluginProcessDeployer: plug-in 'printing-shop' deployment '4e9f...' removed (cascade)
    ```
    
    Full end-to-end loop closed: plug-in ships a BPMN → host reads it
    out of the JAR → Flowable deployment registered under the plug-in
    category → HTTP caller starts a process instance via the standard
    `/api/v1/workflow/process-instances` surface → dispatcher routes by
    activity id to the plug-in's TaskHandler → handler writes output
    variables + plug-in sees the authenticated caller as `ctx.principal()`
    via the reserved `__vibeerp_*` process-variable propagation from
    commit `ef9e5b42`. SIGTERM cleanly undeploys the plug-in's BPMNs.
    
    ## Tests
    
    - 6 new unit tests on `PluginProcessDeployerTest`:
      * `deployFromPlugin returns null when jarPath is not a regular file`
        — guard against dev-exploded plug-in dirs
      * `deployFromPlugin returns null when the plug-in jar has no BPMN resources`
      * `deployFromPlugin reads every bpmn resource under processes and
        deploys one bundle` — builds a real temporary JAR with two BPMN
        entries + a README + a metadata YAML, verifies that both BPMNs
        go through `addBytes` with the right names and the README /
        metadata entries are skipped
      * `deployFromPlugin rejects a blank plug-in id`
      * `undeployByPlugin returns zero when there is nothing to remove`
      * `undeployByPlugin cascades a deleteDeployment per matching deployment`
    - Total framework unit tests: 275 (was 269), all green.
    
    ## Kotlin trap caught during authoring (feedback memory paid out)
    
    First compile failed with `Unclosed comment` on the last line of
    `PluginProcessDeployer.kt`. The culprit was a KDoc paragraph
    containing the literal glob
    `classpath*:/processes/*.bpmn20.xml`: the embedded `/*` inside the
    backtick span was parsed as the start of a nested block comment
    even though the surrounding `/* ... */` KDoc was syntactically
    complete. The saved feedback-memory entry "Kotlin KDoc nested-comment
    trap" covered exactly this situation — the fix is to spell out glob
    characters as `[star]` / `[slash]` (or the word "slash-star") inside
    documentation so the literal `/*` never appears. The KDoc now
    documents the behaviour AND the workaround so the next maintainer
    doesn't hit the same trap.
    
    ## Non-goals (still parking lot)
    
    - Handler-side access to the full PluginContext — PlateApprovalTaskHandler
      is still a pure function because the framework doesn't hand
      TaskHandlers a context object. For REF.1 (real quote→job-card)
      handlers will need to read + mutate plug-in-owned tables; the
      cleanest approach is closure-capture inside the plug-in class
      (handler instantiated inside `start(context)` with the context
      captured in the outer scope). Decision deferred to REF.1.
    - BPMN resource hot reload. The deployer runs once per plug-in
      start; a plug-in whose BPMN changes under its feet at runtime
      isn't supported yet.
    - Plug-in-shipped DMN / CMMN resources. The deployer only looks at
      `.bpmn20.xml` and `.bpmn`. Decision-table and case-management
      resources are not on the v1.0 critical path.
  • ## What's new
    
    Plug-ins can now contribute workflow task handlers to the framework.
    The P2.1 `TaskHandlerRegistry` only saw `@Component` TaskHandler beans
    from the host Spring context; handlers defined inside a PF4J plug-in
    were invisible because the plug-in's child classloader is not in the
    host's bean list. This commit closes that gap.
    
    ## Mechanism
    
    ### api.v1
    
    - New interface `org.vibeerp.api.v1.workflow.PluginTaskHandlerRegistrar`
      with a single `register(handler: TaskHandler)` method. Plug-ins call
      it from inside their `start(context)` lambda.
    - `PluginContext.taskHandlers: PluginTaskHandlerRegistrar` — added as
      a new optional member whose default implementation throws
      `UnsupportedOperationException("upgrade to v0.7 or later")`.
      Pre-existing plug-in jars stay binary-compatible with the new host,
      and a plug-in built against v0.7 of the api-v1 surface fails fast
      on an old host instead of silently doing nothing. Same pattern we
      used for `endpoints` and `jdbc`.
    
    ### platform-workflow
    
    - `TaskHandlerRegistry` gains owner tagging. Every registered handler
      now carries an `ownerId`: core `@Component` beans get
      `TaskHandlerRegistry.OWNER_CORE = "core"` (auto-assigned through
      the constructor-injection path), plug-in-contributed handlers get
      their PF4J plug-in id. New API:
      * `register(handler, ownerId = OWNER_CORE)` (default keeps existing
        call sites unchanged)
      * `unregisterAllByOwner(ownerId): Int` — strip every handler owned
        by that id in one call, returns the count for log correlation
      * The duplicate-key error message now includes both owners so a
        plug-in trying to stomp on a core handler gets an actionable
        "already registered by X (owner='core'), attempted by Y
        (owner='printing-shop')" instead of "already registered".
      * Internal storage switched from `ConcurrentHashMap<String, TaskHandler>`
        to `ConcurrentHashMap<String, Entry>` where `Entry` carries
        `(handler, ownerId)`. `find(key)` still returns `TaskHandler?`
        so the dispatcher is unchanged.
    - No behavioral change for the hot-path (`DispatchingJavaDelegate`) —
      only the registration/teardown paths changed.
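
    The owner-tagged storage can be sketched as follows. Names are illustrative; the real registry stores `TaskHandler` instances and logs removals:

    ```kotlin
    import java.util.concurrent.ConcurrentHashMap

    class OwnerTaggedRegistrySketch {
        companion object { const val OWNER_CORE = "core" }

        private data class Entry(val handler: Any, val ownerId: String)
        private val entries = ConcurrentHashMap<String, Entry>()

        fun register(key: String, handler: Any, ownerId: String = OWNER_CORE) {
            require(key.isNotBlank() && ownerId.isNotBlank()) { "blank key or owner" }
            val prev = entries.putIfAbsent(key, Entry(handler, ownerId))
            if (prev != null) error(
                "TaskHandler '$key' already registered by ${prev.handler} " +
                    "(owner='${prev.ownerId}'), attempted by $handler (owner='$ownerId')")
        }

        // The dispatcher's hot path is unchanged: it still sees only the handler.
        fun find(key: String): Any? = entries[key]?.handler

        fun unregisterAllByOwner(ownerId: String): Int {
            val keys = entries.filterValues { it.ownerId == ownerId }.keys
            keys.forEach { entries.remove(it) }
            return keys.size
        }
    }
    ```

    The scoped registrar handed to plug-ins is then a one-liner that pins `ownerId` to the plug-in id, so plug-in code never chooses its own owner.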
    
    ### platform-plugins
    
    - New dependency on `:platform:platform-workflow` (the only new inter-
      module dep of this chunk; it is the module that exposes
      `TaskHandlerRegistry`).
    - New internal class `ScopedTaskHandlerRegistrar(hostRegistry, pluginId)`
      that implements the api.v1 `PluginTaskHandlerRegistrar` by delegating
      `register(handler)` to `hostRegistry.register(handler, ownerId =
      pluginId)`. Constructed fresh per plug-in by `VibeErpPluginManager`,
      so the plug-in can neither see nor tamper with the owner id.
    - `DefaultPluginContext` gains a `scopedTaskHandlers` constructor
      parameter and exposes it as the `PluginContext.taskHandlers`
      override.
    - `VibeErpPluginManager`:
      * injects `TaskHandlerRegistry`
      * constructs `ScopedTaskHandlerRegistrar(registry, pluginId)` per
        plug-in when building `DefaultPluginContext`
      * partial-start failure now also calls
        `taskHandlerRegistry.unregisterAllByOwner(pluginId)`, matching
        the existing `endpointRegistry.unregisterAll(pluginId)` cleanup
        so a throwing `start(context)` cannot leave stale registrations
      * `destroy()` calls the same `unregisterAllByOwner` for every
        started plug-in in reverse order, mirroring the endpoint cleanup
    
    ### reference-customer/plugin-printing-shop
    
    - New file `workflow/PlateApprovalTaskHandler.kt` — the first plug-in-
      contributed TaskHandler in the framework. Key
      `printing_shop.plate.approve`. Reads a `plateId` process variable,
      writes `plateApproved`, `plateId`, `approvedBy` (principal label),
      `approvedAt` (ISO instant) and exits. No DB mutation yet: a proper
      plate-approval handler would UPDATE `plugin_printingshop__plate` via
      `context.jdbc`, but that requires handing the TaskHandler a
      projection of the PluginContext — a deliberate non-goal of this
      chunk, deferred to the "handler context" follow-up.
    - `PrintingShopPlugin.start(context)` now ends with
      `context.taskHandlers.register(PlateApprovalTaskHandler())` and logs
      the registration.
    - Package layout: `org.vibeerp.reference.printingshop.workflow` is
      the plug-in's workflow namespace going forward (the next printing-
      shop handlers for REF.1 — quote-to-job-card, job-card-to-work-order
      — will live alongside).
    
    ## Smoke test (fresh DB, plug-in staged)
    
    ```
    $ docker compose down -v && docker compose up -d db
    $ ./gradlew :distribution:bootRun &
    ...
    TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
    ...
    plug-in 'printing-shop' Liquibase migrations applied successfully
    vibe_erp plug-in loaded: id=printing-shop version=0.1.0-SNAPSHOT state=STARTED
    [plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
    registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' class='org.vibeerp.reference.printingshop.workflow.PlateApprovalTaskHandler'
    [plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve
    
    $ curl /api/v1/workflow/handlers (as admin)
    {
      "count": 2,
      "keys": ["printing_shop.plate.approve", "vibeerp.workflow.ping"]
    }
    
    $ curl /api/v1/plugins/printing-shop/ping  # plug-in HTTP still works
    {"plugin":"printing-shop","ok":true,"version":"0.1.0-SNAPSHOT", ...}
    
    $ curl -X POST /api/v1/workflow/process-instances
             {"processDefinitionKey":"vibeerp-workflow-ping"}
      (principal propagation from previous commit still works — pingedBy=user:admin)
    
    $ kill -TERM <pid>
    [ionShutdownHook] vibe_erp stopping 1 plug-in(s)
    [ionShutdownHook] [plugin:printing-shop] printing-shop plug-in stopped
    [ionShutdownHook] unregistered TaskHandler 'printing_shop.plate.approve' (owner stopped)
    [ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
    ```
    
    Every expected lifecycle event fires in the right order with the
    right owner attribution. Core handlers are untouched by plug-in
    teardown.
    
    ## Tests
    
    - 4 new / updated tests on `TaskHandlerRegistryTest`:
      * `unregisterAllByOwner only removes handlers owned by that id`
        — 2 core + 2 plug-in, unregister the plug-in owner, only the
        2 plug-in keys are removed
      * `unregisterAllByOwner on unknown owner returns zero`
      * `register with blank owner is rejected`
      * Updated `duplicate key fails fast` to assert the new error
        message format including both owner ids
    - Total framework unit tests: 269 (was 265), all green.
    
    ## What this unblocks
    
    - **REF.1** (real printing-shop quote→job-card workflow) can now
      register its production handlers through the same seam
    - **Plug-in-contributed handlers with state access** — the next
      design question is how a plug-in handler gets at the plug-in's
      database and translator. Two options: pass a projection of the
      PluginContext through TaskContext, or keep a reference to the
      context captured at plug-in start (closure). The PlateApproval
      handler in this chunk is pure on purpose to keep the seam
      conversation separate.
    - **Plug-in-shipped BPMN auto-deployment** — Flowable's default
      classpath scan uses `classpath*:/processes/*.bpmn20.xml` which
      does NOT see PF4J plug-in classloaders. A dedicated
      `PluginProcessDeployer` that walks each started plug-in's JAR for
      BPMN resources and calls `repositoryService.createDeployment` is
      the natural companion to this commit, still pending.
    
    ## Non-goals (still parking lot)
    
    - BPMN processes shipped inside plug-in JARs (see above — needs
      its own chunk, because it requires reading resources from the
      PF4J classloader and constructing a Flowable deployment by hand)
    - Per-handler permission checks — a handler that wants a permission
      gate still has to call back through its own context; P4.3's
      @RequirePermission aspect doesn't reach into Flowable delegate
      execution.
    - Hot reload of a running plug-in's TaskHandlers. The seam supports
      it, but `unloadPlugin` + `loadPlugin` at runtime isn't exercised
      by any current caller.
  • Before this commit, every TaskHandler saw a fixed `workflow-engine`
    System principal via `ctx.principal()` because there was no plumbing
    from the REST caller down to the dispatcher. A printing-shop
    quote-to-job-card handler (or any real business workflow) needs to
    know the actual user who kicked off the process so audit columns and
    role-based logic behave correctly.
    
    ## Mechanism
    
    The chain is: Spring Security populates `SecurityContextHolder` →
    `PrincipalContextFilter` mirrors it into `AuthorizationContext`
    (already existed) → `WorkflowService.startProcess` reads the bound
    `AuthorizedPrincipal` and stashes two reserved process variables
    (`__vibeerp_initiator_id`, `__vibeerp_initiator_username`) before
    calling `RuntimeService.startProcessInstanceByKey` →
    `DispatchingJavaDelegate` reads them back off each `DelegateExecution`
    when constructing the `DelegateTaskContext` → handler sees a real
    `Principal.User` from `ctx.principal()`.
    
    When the process is started outside an HTTP request (e.g. a future
    Quartz-scheduled process, or a signal fired by a PBC event
    subscriber), `AuthorizationContext.current()` is null, no initiator
    variables are written, and the dispatcher falls back to the
    `Principal.System("workflow-engine")` principal. A corrupt initiator
    id (e.g. a non-UUID string) also falls back to the system principal
    rather than failing the task, so a stale variable can't brick a
    running workflow.
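
    The fallback chain reduces to a pure function. This sketch uses simplified `Principal` stand-ins; the real variable-name constants live on `WorkflowService`:

    ```kotlin
    import java.util.UUID

    sealed interface PrincipalSketch {
        data class User(val id: UUID, val username: String) : PrincipalSketch
        data class System(val name: String) : PrincipalSketch
    }

    fun resolveInitiator(vars: Map<String, Any?>): PrincipalSketch {
        val id = vars["__vibeerp_initiator_id"] as? String
        val username = vars["__vibeerp_initiator_username"] as? String
        // started outside HTTP: no initiator vars -> engine principal
        if (id == null || username == null) return PrincipalSketch.System("workflow-engine")
        return try {
            PrincipalSketch.User(UUID.fromString(id), username)
        } catch (_: IllegalArgumentException) {
            // corrupt (non-UUID) id: degrade instead of failing the running task
            PrincipalSketch.System("workflow-engine")
        }
    }
    ```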
    
    ## Reserved variable hygiene
    
    The `__vibeerp_` prefix is reserved framework plumbing. Two
    consequences wired in this commit:
    
    - `DispatchingJavaDelegate` strips keys starting with `__vibeerp_`
      from the variable snapshot handed to the handler (via
      `WorkflowTask.variables`), so handler code cannot accidentally
      depend on the initiator id through the wrong door — it must use
      `ctx.principal()`.
    - `WorkflowService.startProcess` and `getInstanceVariables` strip
      the same prefix from their HTTP response payloads so REST callers
      never see the plumbing either.
    
    The prefix constant lives on `DispatchingJavaDelegate.RESERVED_VAR_PREFIX`
    so there is exactly one source of truth. The two initiator variable
    names are public constants on `WorkflowService` — tests, future
    plug-in code, and any custom handlers that genuinely need the raw
    ids (e.g. a security-audit task) can depend on the stable symbols
    instead of hard-coded strings.
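
    Both strip points reduce to the same filter over the variable map (a sketch; in the framework the constant lives on `DispatchingJavaDelegate`):

    ```kotlin
    const val RESERVED_VAR_PREFIX = "__vibeerp_"

    // Applied to the handler's variable snapshot AND to HTTP response payloads,
    // so reserved plumbing never leaks through either door.
    fun stripReserved(vars: Map<String, Any?>): Map<String, Any?> =
        vars.filterKeys { !it.startsWith(RESERVED_VAR_PREFIX) }
    ```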
    
    ## PingTaskHandler as the executable witness
    
    `PingTaskHandler` now writes a `pingedBy` output variable with a
    principal label (`user:<username>`, `system:<name>`, or
    `plugin:<pluginId>`) and logs it. That makes the end-to-end smoke
    test trivially assertable:
    
    ```
    POST /api/v1/workflow/process-instances
         {"processDefinitionKey":"vibeerp-workflow-ping"}
      (as admin user, with valid JWT)
      → {"processInstanceId": "...", "ended": true,
         "variables": {
           "pong": true,
           "pongAt": "...",
           "correlationId": "...",
           "pingedBy": "user:admin"
         }}
    ```
    
    Note the RESPONSE does NOT contain `__vibeerp_initiator_id` or
    `__vibeerp_initiator_username` — the reserved-var filter in the
    service layer hides them. The handler-side log line confirms
    `principal='user:admin'` in the service-task execution thread.
    
    ## Tests
    
    - 3 new tests in `DispatchingJavaDelegateTest`:
      * `resolveInitiator` returns a User principal when both vars set
      * falls back to system principal when id var is missing
      * falls back to system principal when id var is corrupt
        (non-UUID string)
    - Updated `variables given to the handler are a defensive copy` to
      also assert that reserved `__vibeerp_*` keys are stripped from
      the task's variable snapshot.
    - Updated `PingTaskHandlerTest`:
      * rename to "writes pong plus timestamp plus correlation id plus
        user principal label"
      * new test for the System-principal branch producing
        `pingedBy=system:workflow-engine`
    - Total framework unit tests: 265 (was 261), all green.
    
    ## Non-goals (still parking lot)
    
    - Plug-in-contributed TaskHandler registration via the PF4J loader
      walking child contexts for TaskHandler beans and calling
      `TaskHandlerRegistry.register`. The seam exists on the registry;
      the loader integration is the next chunk, and unblocks REF.1.
    - Propagation of the full role set (not just id+username) into the
      TaskContext. Handlers don't currently see the initiator's roles.
      Can be added as a third reserved variable when a handler actually
      needs it — YAGNI for now.
    - BPMN user tasks / signals / timers — engine supports them but we
      have no HTTP surface for them yet.
  • New platform subproject `platform/platform-workflow` that makes
    `org.vibeerp.api.v1.workflow.TaskHandler` a live extension point. This
    is the framework's first chunk of Phase 2 (embedded workflow engine)
    and the dependency other work has been waiting on — pbc-production
    routings/operations, the full buy-make-sell BPMN scenario in the
    reference plug-in, and ultimately the BPMN designer web UI all hang
    off this seam.
    
    ## The shape
    
    - `flowable-spring-boot-starter-process:7.0.1` pulled in behind a
      single new module. Every other module in the framework still sees
      only the api.v1 TaskHandler + WorkflowTask + TaskContext surface —
      guardrail #10 stays honest, no Flowable type leaks to plug-ins or
      PBCs.
    - `TaskHandlerRegistry` is the host-side index of every registered
      handler, keyed by `TaskHandler.key()`. Auto-populated from every
      Spring bean implementing TaskHandler via constructor injection of
      `List<TaskHandler>`; duplicate keys fail fast at registration time.
      `register` / `unregister` exposed for a future plug-in lifecycle
      integration.
    - `DispatchingJavaDelegate` is a single Spring-managed JavaDelegate
      named `taskDispatcher`. Every BPMN service task in the framework
      references it via `flowable:delegateExpression="${taskDispatcher}"`.
      The dispatcher reads `execution.currentActivityId` as the task key
      (BPMN `id` attribute = TaskHandler key — no extension elements, no
      field injection, no second source of truth) and routes to the
      matching registered handler. A defensive copy of the execution
      variables is passed to the handler so it cannot mutate Flowable's
      internal map.
    - `DelegateTaskContext` adapts Flowable's `DelegateExecution` to the
      api.v1 `TaskContext` — the variable `set(name, value)` call
      forwards through Flowable's variable scope (persisted in the same
      transaction as the surrounding service task execution) and null
      values remove the variable. Principal + locale are documented
      placeholders for now (a workflow-engine `Principal.System`),
      waiting on the propagation chunk that plumbs the initiating user
      through `runtimeService.startProcessInstanceByKey(...)`.
    - `WorkflowService` is a thin facade over Flowable's `RuntimeService`
      + `RepositoryService` exposing exactly the four operations the
      controller needs: start, list active, inspect variables, list
      definitions. Everything richer (signals, timers, sub-processes,
      user-task completion, history queries) lands on this seam in later
      chunks.
    - `WorkflowController` at `/api/v1/workflow/**`:
      * `POST /process-instances`                       (permission `workflow.process.start`)
      * `GET  /process-instances`                       (`workflow.process.read`)
      * `GET  /process-instances/{id}/variables`        (`workflow.process.read`)
      * `GET  /definitions`                             (`workflow.definition.read`)
      * `GET  /handlers`                                (`workflow.definition.read`)
      Exception handlers map `NoSuchElementException` +
      `FlowableObjectNotFoundException` → 404, `IllegalArgumentException`
      → 400, and any other `FlowableException` → 400. Permissions are
      declared in a new `META-INF/vibe-erp/metadata/workflow.yml` loaded
      by the core MetadataLoader so they show up under
      `GET /api/v1/_meta/metadata` alongside every other permission.
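
    The dispatcher's contract in miniature, with Flowable's `DelegateExecution` replaced by plain parameters (illustrative names, not the real signatures):

    ```kotlin
    fun interface HandlerSketch { fun execute(variables: MutableMap<String, Any?>) }

    class DispatcherSketch(private val handlers: Map<String, HandlerSketch>) {
        fun execute(currentActivityId: String, engineVariables: Map<String, Any?>) {
            // BPMN id attribute == handler key: one source of truth, no field injection
            val handler = handlers[currentActivityId]
                ?: error("no TaskHandler registered for key '$currentActivityId'")
            // defensive copy: the handler cannot mutate the engine's internal map
            handler.execute(HashMap(engineVariables))
        }
    }
    ```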
    
    ## The executable self-test
    
    - `vibeerp-ping.bpmn20.xml` ships in `processes/` on the module
      classpath and Flowable's starter auto-deploys it at boot.
      Structure: `start` → serviceTask id=`vibeerp.workflow.ping`
      (delegateExpression=`${taskDispatcher}`) → `end`. Process
      definitionKey is `vibeerp-workflow-ping` (distinct from the
      serviceTask id because BPMN 2.0 ids must be unique per document).
    - `PingTaskHandler` is a real shipped bean, not test code: its
      `execute` writes `pong=true`, `pongAt=<Instant.now()>`, and
      `correlationId=<ctx.correlationId()>` to the process variables.
      Operators and AI agents get a trivial "is the workflow engine
      alive?" probe out of the box.
    
    Why the demo lives in src/main, not src/test: Flowable's auto-deployer
    reads from the host classpath at boot, so if either half lived under
    src/test the smoke test wouldn't be reproducible from the shipped
    image — exactly what CLAUDE.md's "reference plug-in is the executable
    acceptance test" discipline is trying to prevent.
    
    ## The Flowable + Liquibase trap
    
    **Learned the hard way during the smoke test.** Adding
    `flowable-spring-boot-starter-process` immediately broke boot with
    `Schema-validation: missing table [catalog__item]`. Liquibase was
    silently not running. Root cause: Flowable 7.x registers a Spring
    Boot `EnvironmentPostProcessor` called
    `FlowableLiquibaseEnvironmentPostProcessor` that, unless the user has
    already set an explicit value, forces
    `spring.liquibase.enabled=false` with a WARN log line that reads
    "Flowable pulls in Liquibase but does not use the Spring Boot
    configuration for it". Our master.xml then never executes and JPA
    validation fails against the empty schema. Fix is a single line in
    `distribution/src/main/resources/application.yaml` —
    `spring.liquibase.enabled: true` — with a comment explaining why it
    must stay there for anyone who touches config next.
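
    The resulting fragment of `distribution/src/main/resources/application.yaml` would look roughly like this (comment wording illustrative):

    ```yaml
    spring:
      liquibase:
        # Flowable 7.x's FlowableLiquibaseEnvironmentPostProcessor force-disables
        # Spring Boot's Liquibase unless a value is set explicitly; without this
        # line our master.xml never runs and JPA schema validation fails at boot.
        enabled: true
    ```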
    
    Flowable's own ACT_* tables and vibe_erp's `catalog__*`, `pbc.*__*`,
    etc. tables coexist happily in the same public schema — 39 ACT_*
    tables alongside 45 vibe_erp tables on the smoke-tested DB. Flowable
    manages its own schema via its internal MyBatis DDL, Liquibase manages
    ours, they don't touch each other.
    
    ## Smoke-test transcript (fresh DB, dev profile)
    
    ```
    docker compose down -v && docker compose up -d db
    ./gradlew :distribution:bootRun &
    # ... Flowable creates ACT_* tables, Liquibase creates vibe_erp tables,
    #     MetadataLoader loads workflow.yml, TaskHandlerRegistry boots with 1 handler,
    #     BPMN auto-deployed from classpath
    POST /api/v1/auth/login → JWT
    GET  /api/v1/workflow/definitions → 1 definition (vibeerp-workflow-ping)
    GET  /api/v1/workflow/handlers → {"count":1,"keys":["vibeerp.workflow.ping"]}
    POST /api/v1/workflow/process-instances
         {"processDefinitionKey":"vibeerp-workflow-ping",
          "businessKey":"smoke-1",
          "variables":{"greeting":"ni hao"}}
      → 201 {"processInstanceId":"...","ended":true,
             "variables":{"pong":true,"pongAt":"2026-04-09T...",
                          "correlationId":"...","greeting":"ni hao"}}
    POST /api/v1/workflow/process-instances {"processDefinitionKey":"does-not-exist"}
      → 404 {"message":"No process definition found for key 'does-not-exist'"}
    GET  /api/v1/catalog/uoms → still returns the 15 seeded UoMs (sanity)
    ```
    
    ## Tests
    
    - 15 new unit tests in `platform-workflow/src/test`:
      * `TaskHandlerRegistryTest` — init with initial handlers, duplicate
        key fails fast, blank key rejected, unregister removes,
        unregister on unknown returns false, find on missing returns null
      * `DispatchingJavaDelegateTest` — dispatches by currentActivityId,
        throws on missing handler, defensive-copies the variable map
      * `DelegateTaskContextTest` — set non-null forwards, set null
        removes, blank name rejected, principal/locale/correlationId
        passthrough, default correlation id is stable across calls
      * `PingTaskHandlerTest` — key matches the BPMN serviceTask id,
        execute writes pong + pongAt + correlationId
    - Total framework unit tests: 261 (was 246), all green.
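The registry semantics those tests pin down (duplicate key fails fast, blank key rejected, unregister reports whether anything was removed) can be sketched in plain Java — class and method names follow the test descriptions above, the rest is illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of TaskHandlerRegistry behavior: handlers are keyed by the BPMN
// serviceTask id; registration is strict, lookup and removal are lenient.
class TaskHandlerRegistry {
    private final Map<String, Object> handlers = new HashMap<>();

    TaskHandlerRegistry(Map<String, Object> initial) {
        initial.forEach(this::register);
    }

    void register(String key, Object handler) {
        if (key == null || key.isBlank()) {
            throw new IllegalArgumentException("handler key must not be blank");
        }
        if (handlers.putIfAbsent(key, handler) != null) {
            throw new IllegalStateException("duplicate handler key '" + key + "'");
        }
    }

    boolean unregister(String key) { return handlers.remove(key) != null; }

    Object find(String key) { return handlers.get(key); }

    Set<String> keys() { return Set.copyOf(handlers.keySet()); }
}
```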
    
    ## What this unblocks
    
    - **REF.1** — real quote→job-card workflow handler in the
      printing-shop plug-in
    - **pbc-production routings/operations (v3)** — each operation
      becomes a BPMN step with duration + machine assignment
    - **P2.3** — user-task form rendering (landing on top of the
      RuntimeService already exposed via WorkflowService)
    - **P2.2** — BPMN designer web page (later, depends on R1)
    
    ## Deliberate non-goals (parking lot)
    
    - Principal propagation from the REST caller through the process
      start into the handler — uses a fixed `workflow-engine`
      `Principal.System` for now. Follow-up chunk will plumb the
      authenticated user as a Flowable variable.
    - Plug-in-contributed TaskHandler registration via PF4J child
      contexts — the registry exposes `register/unregister` but the
      plug-in loader doesn't call them yet. Follow-up chunk.
    - BPMN user tasks, signals, timers, history queries — seam exists,
      deliberately not built out.
    - Workflow deployment from `metadata__workflow` rows (the Tier 1
      path). Today deployment is classpath-only via Flowable's auto-
      deployer.
    - The Flowable async job executor is explicitly deactivated
      (`flowable.async-executor-activate: false`) — background-job
      machinery belongs to the future Quartz integration (P1.10), not
      Flowable.
    zichun authored
  • Two small closer items that tidy up the end of the HasExt rollout:
    
    1. inventory.yml gains a `customFields:` section with two core
       declarations for Location: `inventory_address_city` (string,
       maxLength 128) and `inventory_floor_area_sqm` (decimal 10,2).
       Completes the "every HasExt entity has at least one declared
       field" symmetry. Printing-shop plug-in already adds its own
       `printing_shop_press_id` etc. on top.
    
    2. CLAUDE.md "Repository state" section updated to reflect this
       session's milestones:
       - pbc-production v2 (IN_PROGRESS + BOM + scrap) now called out
         explicitly in the PBC list.
       - MATERIAL_ISSUE added to the buy-sell-MAKE loop description —
         the work-order completion now consumes raw materials per BOM
         line AND credits finished goods atomically.
       - New bullet: "Tier 1 customization is universal across every
         core entity with an ext column" — HasExt on Partner, Location,
         SalesOrder, PurchaseOrder, WorkOrder, Item; every service uses
         applyTo/parseExt helpers, zero duplication.
       - New bullet: "Clean Core extensibility is executable" — the
         reference printing-shop plug-in's metadata YAML ships
         customFields on Partner/Item/SalesOrder/WorkOrder and the
         MetadataLoader merges them with core declarations at load
         time. Executable grade-A extension under the A/B/C/D safety
         scale.
       - Printing-shop plug-in description updated to note that its
         metadata YAML now carries custom fields on core entities, not
         just its own entities.
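The inventory.yml addition described in item 1 might look like this (the exact YAML schema keys — `key`, `type`, `maxLength`, `precision`, `scale` — are assumptions based on the attributes listed; only the field names and constraints come from the text above):

```yaml
# pbc-inventory inventory.yml — illustrative shape only
customFields:
  Location:
    - key: inventory_address_city
      type: string
      maxLength: 128
    - key: inventory_floor_area_sqm
      type: decimal
      precision: 10
      scale: 2
```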
    
    Smoke verified end-to-end against real Postgres with the plug-in
    staged:
      - GET /_meta/metadata/custom-fields/Location returns 2 core
        fields.
      - POST /inventory/locations with `{inventory_address_city:
        "Shenzhen", inventory_floor_area_sqm: "1250.50"}` → 201,
        canonical form persisted, ext round-trips.
      - POST with `inventory_floor_area_sqm: "123456789012345.678"` →
        400 "ext.inventory_floor_area_sqm: decimal scale 3 exceeds
        declared scale 2" — the validator's precision/scale rules fire
        exactly as designed.
    
    No code changes. 246 unit tests, all green. 18 Gradle subprojects.
  • Completes the HasExt rollout across every core entity with an ext
    column. Item was the last one that carried an ext JSONB column
    without any validation wired — a plug-in could declare custom fields
    for Item but nothing would enforce them on save. This fixes that and
    restores two printing-shop-specific Item fields to the reference
    plug-in that were temporarily dropped from the previous Tier 1
    customization chunk (commit 16c59310) precisely because Item wasn't
    wired.
    
    Code changes:
      - Item implements HasExt; `ext` becomes `override var ext`, a
        companion constant holds the entity name "Item".
      - ItemService injects ExtJsonValidator, calls applyTo() in both
        create() and update() (create + update symmetry like partners
        and locations). parseExt passthrough added for response mappers.
      - CreateItemCommand, UpdateItemCommand, CreateItemRequest,
        UpdateItemRequest gain a nullable ext field.
      - ItemResponse now carries the parsed ext map, same shape as
        PartnerResponse / LocationResponse / SalesOrderResponse.
      - pbc-catalog build.gradle adds
        `implementation(project(":platform:platform-metadata"))`.
      - ItemServiceTest constructor updated to pass the new validator
        dependency with no-op stubs.
    
    Plug-in YAML (printing-shop.yml):
      - Re-added `printing_shop_color_count` (integer) and
        `printing_shop_paper_gsm` (integer) custom fields targeting Item.
        These were originally in the commit 16c59310 draft but removed
        because Item wasn't wired. Now that Item is wired, they're back
        and actually enforced.
    
    Smoke verified end-to-end against real Postgres with the plug-in
    staged:
      - GET /_meta/metadata/custom-fields/Item returns 2 plug-in fields.
      - POST /catalog/items with `{printing_shop_color_count: 4,
        printing_shop_paper_gsm: 170}` → 201, canonical form persisted.
      - GET roundtrip preserves both integer values.
      - POST with `printing_shop_color_count: "not-a-number"` → 400
        "ext.printing_shop_color_count: not a valid integer: 'not-a-number'".
      - POST with `rogue_key` → 400 "ext contains undeclared key(s)
        for 'Item': [rogue_key]".
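Those two 400s come from the declared-fields check and per-type parsing. A plain-Java sketch of that validation logic (field model and method names are illustrative, not the shipped ExtJsonValidator; the error wording follows the smoke output above):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Assumed field model: a declared key, a type tag, and (for enums) an allowed set.
record CustomField(String key, String type, Set<String> allowedValues) {}

class ExtValidatorSketch {
    static List<String> validate(String entity, Map<String, Object> ext, List<CustomField> declared) {
        List<String> errors = new ArrayList<>();
        Map<String, CustomField> byKey = new HashMap<>();
        declared.forEach(f -> byKey.put(f.key(), f));

        // Reject any key that no core or plug-in declaration covers.
        Set<String> undeclared = new HashSet<>(ext.keySet());
        undeclared.removeAll(byKey.keySet());
        if (!undeclared.isEmpty()) {
            errors.add("ext contains undeclared key(s) for '" + entity + "': " + undeclared);
        }

        // Type-check each declared key that was supplied.
        for (Map.Entry<String, Object> e : ext.entrySet()) {
            CustomField field = byKey.get(e.getKey());
            if (field == null) continue;
            Object v = e.getValue();
            switch (field.type()) {
                case "integer" -> {
                    if (!(v instanceof Integer)) {
                        try { Long.parseLong(String.valueOf(v)); }
                        catch (NumberFormatException ex) {
                            errors.add("ext." + e.getKey() + ": not a valid integer: '" + v + "'");
                        }
                    }
                }
                case "enum" -> {
                    if (!field.allowedValues().contains(String.valueOf(v))) {
                        errors.add("ext." + e.getKey() + ": value '" + v
                                + "' is not in allowed set " + field.allowedValues());
                    }
                }
            }
        }
        return errors;
    }
}
```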
    
    Six of eight PBCs now participate in HasExt:
      Partner, Location, SalesOrder, PurchaseOrder, WorkOrder, Item.
    The remaining two are pbc-identity (User has no ext column by
    design — identity is a security concern, not a customization one)
    and pbc-finance (JournalEntry is derived state from events, no
    customization surface).
    
    Five core entities carry Tier 1 custom fields as of this commit:
      Partner     (2 core + 1 plug-in)
      Item        (0 core + 2 plug-in)
      SalesOrder  (0 core + 1 plug-in)
      WorkOrder   (2 core + 1 plug-in)
      Location    (0 core + 0 plug-in — wired but no declarations yet)
    
    246 unit tests, all green. 18 Gradle subprojects.
  • The reference printing-shop plug-in now demonstrates the framework's
    most important promise — that a customer plug-in can EXTEND core
    business entities (Partner, SalesOrder, WorkOrder) with customer-
    specific fields WITHOUT touching any core code.
    
    Added to `printing-shop.yml` customFields section:
    
      Partner:
        printing_shop_customer_segment (enum: agency, in_house, end_client, reseller)
    
      SalesOrder:
        printing_shop_quote_number (string, maxLength 32)
    
      WorkOrder:
        printing_shop_press_id (string, maxLength 32)
    
    Mechanism (no code changes, all metadata-driven):
      1. Plug-in YAML carries a `customFields:` section alongside its
         entities / permissions / menus.
      2. MetadataLoader.loadFromPluginJar reads the section and inserts
         rows into `metadata__custom_field` tagged
         `source='plugin:printing-shop'`.
      3. CustomFieldRegistry.refresh re-reads ALL rows (both `source='core'`
         and `source='plugin:*'`) and merges them by `targetEntity`.
      4. ExtJsonValidator.applyTo now validates incoming ext against the
         MERGED set, so a POST to /api/v1/partners/partners can include
         both core-declared (partners_industry) and plug-in-declared
         (printing_shop_customer_segment) fields in the same request,
         and both are enforced.
      5. Uninstalling the plug-in removes the plugin:printing-shop rows
         and the fields disappear from validation AND from the UI's
         custom-field catalog — no migration, no restart of anything
         other than the plug-in lifecycle itself.
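Steps 2–4 boil down to a merge keyed by `targetEntity`, re-run on every refresh. A minimal plain-Java sketch (the row shape and `source` values follow the description above; class names are illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// One row per declared custom field, tagged with its origin
// (source='core' or source='plugin:<id>').
record CustomFieldRow(String targetEntity, String key, String source) {}

class CustomFieldRegistrySketch {
    private Map<String, List<CustomFieldRow>> merged = Map.of();

    // Re-reads ALL rows each time; a plug-in uninstall just deletes its
    // rows from the backing table and triggers another refresh.
    void refresh(List<CustomFieldRow> rows) {
        merged = rows.stream().collect(Collectors.groupingBy(CustomFieldRow::targetEntity));
    }

    List<CustomFieldRow> fieldsFor(String entity) {
        return merged.getOrDefault(entity, List.of());
    }
}
```

Validation then always consults `fieldsFor(entity)`, so core-declared and plug-in-declared fields are enforced through the same path.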
    
    Convention established: plug-in-contributed custom field keys are
    prefixed with the plug-in id (e.g. `printing_shop_*`) so two
    independent plug-ins can't collide on the same entity. Documented
    inline in the YAML.
    
    Smoke verified end-to-end against real Postgres with the plug-in
    staged:
      - CustomFieldRegistry logs "refreshed 4 custom fields" after core
        load, then "refreshed 7 custom fields across 3 entities" after
        plug-in load — 4 core + 3 plug-in fields, merged.
      - GET /_meta/metadata/custom-fields/Partner returns 3 fields
        (2 core + 1 plug-in).
      - GET /_meta/metadata/custom-fields/SalesOrder returns 1 field
        (plug-in only).
      - GET /_meta/metadata/custom-fields/WorkOrder returns 3 fields
        (2 core + 1 plug-in).
      - POST /partners/partners with BOTH a core field AND a plug-in
        field → 201, canonical form persisted.
      - POST with an invalid plug-in enum value → 400 "ext.printing_shop_customer_segment:
        value 'ghost' is not in allowed set [agency, in_house, end_client, reseller]".
      - POST /orders/sales-orders with printing_shop_quote_number → 201,
        quote number round-trips.
      - POST /production/work-orders with mixed production_priority +
        printing_shop_press_id → 201, both fields persisted.
    
    This is the first executable demonstration of Clean Core
    extensibility (CLAUDE.md guardrail #7) — the plug-in extends
    core-owned data through a stable public contract (api.v1 HasExt +
    metadata__custom_field) without reaching into any core or platform
    internal class. This is the "A" grade on the A/B/C/D extensibility
    safety scale.
    
    No code changes. No test changes. 246 unit tests, still green.
  • Closes the last known gap from the HasExt refactor (commit 986f02ce):
    pbc-production's WorkOrder had an `ext` column but no validator was
    wired, so an operator could write arbitrary JSON without any
    schema enforcement. This fixes that and adds the first Tier 1
    custom fields for WorkOrder.
    
    Code changes:
      - WorkOrder implements HasExt; ext becomes `override var ext`,
        ENTITY_NAME moves onto the entity companion.
      - WorkOrderService injects ExtJsonValidator, calls applyTo() in
        create() before saving (null-safe so the
        SalesOrderConfirmedSubscriber's auto-spawn path still works —
        verified by smoke test).
      - CreateWorkOrderCommand + CreateWorkOrderRequest gain an `ext`
        field that flows through to the validator.
      - WorkOrderResponse gains an `ext: Map<String, Any?>` field; the
        response mapper signature changes to `toResponse(service)` to
        reach the validator via a convenience parseExt delegate on the
        service (same pattern as the other four PBCs).
      - pbc-production Gradle build adds `implementation(project(":platform:platform-metadata"))`.
    
    Metadata (production.yml):
      - Permission keys extended to match the v2 state machine:
        production.work-order.start (was missing) and
        production.work-order.scrap (was missing). The existing
        .read / .create / .complete / .cancel keys stay.
      - Two custom fields declared:
          * production_priority (enum: low, normal, high, urgent)
          * production_routing_notes (string, maxLength 1024)
        Both are optional and non-PII; an operator can now add
        priority and routing notes to a work order through the public
        API without any code change, which is the whole point of
        Tier 1 customization.
    
    Unit tests: WorkOrderServiceTest constructor updated to pass the
    new extValidator dependency and stub applyTo/parseExt as no-ops.
    No behavioral test changes — ext validation is covered by
    ExtJsonValidatorTest and the platform-wide smoke tests.
    
    Smoke verified end-to-end against real Postgres:
      - GET /_meta/metadata/custom-fields/WorkOrder now returns both
        declarations with correct enum sets and maxLength.
      - POST /work-orders with valid ext {production_priority:"high",
        production_routing_notes:"Rush for customer demo"} → 201,
        canonical form persisted, round-trips via GET.
      - POST with invalid enum value → 400 "value 'emergency' is not
        in allowed set [low, normal, high, urgent]".
      - POST with unknown ext key → 400 "ext contains undeclared
        key(s) for 'WorkOrder': [unknown_field]".
      - Auto-spawn from confirmed SO → DRAFT work order with empty
        ext `{}`, confirming the applyTo(null) null-safe path.
    
    Five of the eight PBCs now participate in the HasExt pattern:
    Partner, Location, SalesOrder, PurchaseOrder, WorkOrder. The
    remaining candidate entities (Item, Uom, JournalEntry) either have
    their own custom-field story in separate entities or are derived
    state.
    
    246 unit tests, all green. 18 Gradle subprojects.
  • Closes the P4.3 rollout — the last PBC whose controllers were still
    unannotated. Every endpoint in `UserController` now carries an
    `@RequirePermission("identity.user.*")` annotation matching the keys
    already declared in `identity.yml`:
    
      GET  /api/v1/identity/users           identity.user.read
      GET  /api/v1/identity/users/{id}      identity.user.read
      POST /api/v1/identity/users           identity.user.create
      PATCH /api/v1/identity/users/{id}     identity.user.update
      DELETE /api/v1/identity/users/{id}    identity.user.disable
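A stripped-down plain-Java sketch of the pattern (the real @RequirePermission lives in platform-security and is enforced by a Spring AOP aspect before the handler runs; this version only shows the annotation-plus-lookup shape, and the controller body is illustrative):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Runtime-retained so an aspect (or, here, plain reflection) can read it.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface RequirePermission {
    String value();
}

class UserControllerSketch {
    @RequirePermission("identity.user.read")
    public String list() { return "users"; }

    @RequirePermission("identity.user.create")
    public String create() { return "created"; }
}

class PermissionLookup {
    // What the aspect effectively does before invoking the handler:
    // read the declared key off the method, then check the caller's grants.
    static String requiredPermission(Object controller, String methodName) {
        try {
            Method m = controller.getClass().getMethod(methodName);
            RequirePermission ann = m.getAnnotation(RequirePermission.class);
            return ann == null ? null : ann.value();
        } catch (NoSuchMethodException e) {
            return null;
        }
    }
}
```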
    
    `AuthController` (login, refresh) is deliberately NOT annotated —
    it is in the platform-security public allowlist because login is the
    token-issuing endpoint (chicken-and-egg).
    
    KDoc on the controller class updated to reflect the new auth story
    (removing the stale "authentication deferred to v0.2" comment from
    before P4.1 / P4.3 landed).
    
    Smoke verified end-to-end against real Postgres:
      - Admin (wildcard `admin` role) → GET /users returns 200, POST
        /users returns 201 (new user `jane` created).
      - Unauthenticated GET and POST → 401 Unauthorized from the
        framework's JWT filter before @RequirePermission runs. A
        non-admin user without explicit grants would get 403 from the
        AOP evaluator; tested manually with the admin and anonymous
        cases.
    
    No test changes — the controller unit test is a thin DTO mapper
    test that doesn't exercise the Spring AOP aspect; identity-wide
    authz enforcement is covered by the platform-security tests plus
    the shipping smoke tests. 246 unit tests, all green.
    
    P4.3 is now complete across every core PBC:
      pbc-catalog, pbc-partners, pbc-inventory, pbc-orders-sales,
      pbc-orders-purchase, pbc-finance, pbc-production, pbc-identity.
  • CI verified green for `986f02ce` on both gradle build and docker
    image jobs.
  • Removes the ext-handling copy/paste that had grown across four PBCs
    (partners, inventory, orders-sales, orders-purchase). Every service
    that wrote the JSONB `ext` column was manually doing the same
    four-step sequence: validate, null-check, serialize with a local
    ObjectMapper, assign to the entity. And every response mapper was
    doing the inverse: check-if-blank, parse, cast, swallow errors.
    
    Net: ~15 lines saved per PBC, one place to change the ext contract
    later (e.g. PII redaction, audit tagging, field-level events), and
    a stable plug-in opt-in mechanism — any plug-in entity that
    implements `HasExt` automatically participates.
    
    New api.v1 surface:
    
      interface HasExt {
          val extEntityName: String     // key into metadata__custom_field
          var ext: String               // the serialized JSONB column
      }
    
    Lives in `org.vibeerp.api.v1.entity` so plug-ins can opt their own
    entities into the same validation path. Zero Spring/Jackson
    dependencies — api.v1 stays clean.
    
    Extended `ExtJsonValidator` (platform-metadata) with two helpers:
    
      fun applyTo(entity: HasExt, ext: Map<String, Any?>?)
          — null-safe; validates; writes canonical JSON to entity.ext.
            Replaces the validate + writeValueAsString + assign triplet
            in every service's create() and update().
    
      fun parseExt(entity: HasExt): Map<String, Any?>
          — returns empty map on blank/corrupt column; response
            mappers never 500 on bad data. Replaces the four identical
            parseExt local functions.
    
    ExtJsonValidator now takes an ObjectMapper via constructor
    injection (Spring Boot's auto-configured bean).
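The two helpers' null-safety and never-throws contracts can be sketched in plain Java. The real ExtJsonValidator uses Jackson's ObjectMapper and runs the full declared-field validation; here serialization is injected as a function so the sketch stays dependency-free, and validation is elided:

```java
import java.util.Map;
import java.util.function.Function;

// Java rendering of the api.v1 HasExt contract (the original is a Kotlin
// interface with val extEntityName / var ext).
interface HasExt {
    String extEntityName();
    String getExt();
    void setExt(String ext);
}

class ExtHelperSketch {
    private final Function<Map<String, Object>, String> serialize;
    private final Function<String, Map<String, Object>> deserialize;

    ExtHelperSketch(Function<Map<String, Object>, String> serialize,
                    Function<String, Map<String, Object>> deserialize) {
        this.serialize = serialize;
        this.deserialize = deserialize;
    }

    // Null-safe: a null ext on update leaves the stored column untouched,
    // which is what makes the PATCH-without-ext path work.
    void applyTo(HasExt entity, Map<String, Object> ext) {
        if (ext == null) return;
        entity.setExt(serialize.apply(ext)); // the real helper validates first
    }

    // Never throws: blank or corrupt columns become an empty map, so
    // response mappers can't 500 on bad data.
    Map<String, Object> parseExt(HasExt entity) {
        String raw = entity.getExt();
        if (raw == null || raw.isBlank()) return Map.of();
        try { return deserialize.apply(raw); }
        catch (RuntimeException e) { return Map.of(); }
    }
}
```

The test below wires a trivial `k=v` codec in place of JSON purely to exercise the null-safe and corrupt-column branches.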
    
    Entities that now implement HasExt (override val extEntityName;
    override var ext; companion object const val ENTITY_NAME):
      - Partner (`partners.Partner` → "Partner")
      - Location (`inventory.Location` → "Location")
      - SalesOrder (`orders_sales.SalesOrder` → "SalesOrder")
      - PurchaseOrder (`orders_purchase.PurchaseOrder` → "PurchaseOrder")
    
    Deliberately NOT converted this chunk:
      - WorkOrder (pbc-production) — its ext column has no declared
        fields yet; a follow-up that adds declarations AND the
        HasExt implementation is cleaner than splitting the two.
      - JournalEntry (pbc-finance) — derived state, no ext column.
    
    Services lose:
      - The `jsonMapper: ObjectMapper = ObjectMapper().registerKotlinModule()`
        field (four copies eliminated)
      - The `parseExt(entity): Map` helper function (four copies)
      - The `companion object { const val ENTITY_NAME = ... }` constant
        (moved onto the entity where it belongs)
      - The `val canonicalExt = extValidator.validate(...)` +
        `.also { it.ext = jsonMapper.writeValueAsString(canonicalExt) }`
        create pattern (replaced with one applyTo call)
      - The `if (command.ext != null) { ... }` update pattern
        (applyTo is null-safe)
    
    Unit tests: 6 new cases on ExtJsonValidatorTest cover applyTo and
    parseExt (null-safe path, happy path, failure path, blank column,
    round-trip, malformed JSON). Existing service tests just swap the
    mock setup from stubbing `validate` to stubbing `applyTo` and
    `parseExt` with no-ops.
    
    Smoke verified end-to-end against real Postgres:
      - POST /partners with valid ext (partners_credit_limit,
        partners_industry) → 201, canonical form persisted.
      - GET /partners/by-code/X → 200, ext round-trips.
      - POST with invalid enum value → 400 "value 'x' is not in
        allowed set [printing, publishing, packaging, other]".
      - POST with undeclared key → 400 "ext contains undeclared
        key(s) for 'Partner': [rogue_field]".
      - PATCH with new ext → 200, ext updated.
      - PATCH WITHOUT ext field → 200, prior ext preserved (null-safe
        applyTo).
      - POST /orders/sales-orders with no ext → 201, the create path
        via the shared helper still works.
    
    246 unit tests (+6 over 240), 18 Gradle subprojects.
  • CI verified green for `75a75baa` on both the gradle build and docker
    image jobs.
  • Grows pbc-production from the minimal v1 (DRAFT → COMPLETED in one
    step, single output, no BOM) into a real v2 production PBC:
    
      1. IN_PROGRESS state between DRAFT and COMPLETED so "started but
         not finished" work orders are observable on a dashboard.
         WorkOrderService.start(id) performs the transition and publishes
         a new WorkOrderStartedEvent. cancel() now accepts DRAFT OR
         IN_PROGRESS (v2 writes nothing to the ledger at start() so there
         is nothing to undo on cancel).
    
      2. Bill of materials via a new WorkOrderInput child entity —
         @OneToMany with cascade + orphanRemoval, same shape as
         SalesOrderLine. Each line carries (lineNo, itemCode,
         quantityPerUnit, sourceLocationCode). complete() now iterates
         the inputs in lineNo order and writes one MATERIAL_ISSUE
         ledger row per line (delta = -(quantityPerUnit × outputQuantity))
         BEFORE writing the PRODUCTION_RECEIPT for the output. All in
         one transaction — a failure anywhere rolls back every prior
         ledger row AND the status flip. Empty inputs list is legal
         (the v1 auto-spawn-from-SO path still works unchanged,
         writing only the PRODUCTION_RECEIPT).
    
      3. Scrap flow for COMPLETED work orders via a new scrap(id,
         scrapLocationCode, quantity, note) service method. Writes a
         negative ADJUSTMENT ledger row tagged WO:<code>:SCRAP and
         publishes a new WorkOrderScrappedEvent. Chose ADJUSTMENT over
         adding a new SCRAP movement reason to keep the enum stable —
         the reference-string suffix is the disambiguator. The work
         order itself STAYS COMPLETED; scrap is a correction on top of
         a terminal state, not a state change.
    
      complete() now requires IN_PROGRESS (not DRAFT); existing callers
      must start() first.
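The v2 state machine and the per-line MATERIAL_ISSUE delta reduce to a few lines; this plain-Java sketch mirrors the transitions described above (names are illustrative, not the shipped WorkOrderService):

```java
import java.math.BigDecimal;

enum WorkOrderStatus { DRAFT, IN_PROGRESS, COMPLETED, CANCELLED }

class WorkOrderTransitions {
    static WorkOrderStatus start(WorkOrderStatus s) {
        if (s != WorkOrderStatus.DRAFT)
            throw new IllegalStateException("can only start a DRAFT work order");
        return WorkOrderStatus.IN_PROGRESS;
    }

    static WorkOrderStatus complete(WorkOrderStatus s) {
        if (s != WorkOrderStatus.IN_PROGRESS)
            throw new IllegalStateException("complete() requires IN_PROGRESS");
        return WorkOrderStatus.COMPLETED;
    }

    // cancel() accepts DRAFT or IN_PROGRESS; COMPLETED is terminal
    // (scrap is a correction on top of it, not a state change).
    static WorkOrderStatus cancel(WorkOrderStatus s) {
        if (s != WorkOrderStatus.DRAFT && s != WorkOrderStatus.IN_PROGRESS)
            throw new IllegalStateException("cannot cancel from " + s);
        return WorkOrderStatus.CANCELLED;
    }

    // One MATERIAL_ISSUE ledger row per BOM line:
    // delta = -(quantityPerUnit x outputQuantity)
    static BigDecimal materialIssueDelta(BigDecimal quantityPerUnit, BigDecimal outputQuantity) {
        return quantityPerUnit.multiply(outputQuantity).negate();
    }
}
```

With the smoke-test numbers, 2 paper per unit at output 100 yields a -200 issue and 0.5 ink per unit yields -50, matching the ledger rows above.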
    
      api.v1 grows two events (WorkOrderStartedEvent,
      WorkOrderScrappedEvent) alongside the three that already existed.
      Since this is additive within a major version, the api.v1 semver
      contract holds — existing subscribers continue to compile.
    
      Liquibase: 002-production-v2.xml widens the status CHECK and
      creates production__work_order_input with (work_order_id FK,
      line_no, item_code, quantity_per_unit, source_location_code) plus
      a unique (work_order_id, line_no) constraint, a CHECK
      quantity_per_unit > 0, and the audit columns. ON DELETE CASCADE
      from the parent.
    
      Unit tests: WorkOrderServiceTest grows from 8 to 18 cases —
      covers start happy path, start rejection, complete-on-DRAFT
      rejection, empty-BOM complete, BOM-with-two-lines complete
      (verifies both MATERIAL_ISSUE deltas AND the PRODUCTION_RECEIPT
      all fire with the right references), scrap happy path, scrap on
      non-COMPLETED rejection, scrap with non-positive quantity
      rejection, cancel-from-IN_PROGRESS, and BOM validation rejects
      (unknown item, duplicate line_no).
    
    Smoke verified end-to-end against real Postgres:
      - Created WO-SMOKE with 2-line BOM (2 paper + 0.5 ink per
        brochure, output 100).
      - Started (DRAFT → IN_PROGRESS, no ledger rows).
      - Completed: paper balance 500→300 (MATERIAL_ISSUE -200),
        ink 200→150 (MATERIAL_ISSUE -50), FG-BROCHURE 0→100
        (PRODUCTION_RECEIPT +100). All 3 rows tagged WO:WO-SMOKE.
      - Scrapped 7 units: FG-BROCHURE 100→93, ADJUSTMENT -7 tagged
        WO:WO-SMOKE:SCRAP, work order stayed COMPLETED.
      - Auto-spawn: SO-42 confirm still creates WO-FROM-SO-42-L1 as a
        DRAFT with empty BOM; starting + completing it writes only the
        PRODUCTION_RECEIPT (zero MATERIAL_ISSUE rows), proving the
        empty-BOM path is backwards-compatible.
      - Negative paths: complete-on-DRAFT 400s, scrap-on-DRAFT 400s,
        double-start 400s, cancel-from-IN_PROGRESS 200.
    
    240 unit tests, 18 Gradle subprojects.
  • Completes the @RequirePermission rollout that started in commit
    b174cf60. Every non-state-transition endpoint in pbc-inventory
    (Location CRUD), pbc-orders-sales, and pbc-orders-purchase is now
    guarded by the pre-declared permission keys from their respective
    metadata YAMLs. State-transition verbs (confirm/cancel/ship/receive)
    were annotated in the original P4.3 demo chunk; this one fills in
    the list/get/create/update gap.
    
    Inventory
      - LocationController: list/get/getByCode → inventory.location.read;
        create → inventory.location.create;
        update → inventory.location.update;
        deactivate → inventory.location.deactivate.
      - (StockBalanceController.adjust + StockMovementController.record
        were already annotated with inventory.stock.adjust.)
    
    Orders-sales
      - SalesOrderController: list/get/getByCode → orders.sales.read;
        create → orders.sales.create; update → orders.sales.update.
        (confirm/cancel/ship were already annotated.)
    
    Orders-purchase
      - PurchaseOrderController: list/get/getByCode → orders.purchase.read;
        create → orders.purchase.create; update → orders.purchase.update.
        (confirm/cancel/receive were already annotated.)
    
    No new permission keys. Every key this chunk consumes was already
    declared in the relevant metadata YAML since the respective PBC was
    first built — catalog + partners already shipped in this state, and
    the inventory/orders YAMLs declared their read/create/update keys
    from day one but the controllers hadn't started using them.
    
    Admin happy path still works (bootstrap admin has the wildcard
    `admin` role, same as after commit b174cf60). 230 unit tests still
    green — annotations are purely additive, no existing test hits the
    @RequirePermission path since service-level tests bypass the
    controller entirely.
    
    Combined with b174cf60, the framework now has full @RequirePermission
    coverage on every PBC controller except pbc-identity's user admin
    (which is a separate permission surface — user/role administration
    has its own security story). A minimum-privilege role like
    "sales-clerk" can now be granted exactly `orders.sales.read` +
    `orders.sales.create` + `partners.partner.read` and NOT accidentally
    see catalog admin, inventory movements, finance journals, or
    contact PII.
  • Closes the P4.3 permission-rollout gap for the two oldest PBCs that
    were never updated when the @RequirePermission aspect landed. The
    catalog and partners metadata YAMLs already declared all the needed
    permission keys — the controllers just weren't consuming them.
    
    Catalog
      - ItemController: list/get/getByCode → catalog.item.read;
        create → catalog.item.create; update → catalog.item.update;
        deactivate → catalog.item.deactivate.
      - UomController: list/get/getByCode → catalog.uom.read;
        create → catalog.uom.create; update → catalog.uom.update.
    
    Partners (including the PII boundary)
      - PartnerController: list/get/getByCode → partners.partner.read;
        create → partners.partner.create; update → partners.partner.update.
        (deactivate was already annotated in the P4.3 demo chunk.)
      - AddressController: all five verbs annotated with
        partners.address.{read,create,update,delete}.
      - ContactController: all five verbs annotated with
        partners.contact.{read,create,update,deactivate}. The
        "TODO once P4.3 lands" note in the class KDoc was removed; P4.3
        is live and the annotations are now in place. This is the PII
        boundary that CLAUDE.md flagged as incomplete after the original
        P4.3 rollout.
    
    No new permission keys were added — all 14 keys this change touches were
    already declared in pbc-catalog/catalog.yml and
    pbc-partners/partners.yml when those PBCs were first built. The
    metadata loader has been serving them to the SPA/OpenAPI/MCP
    introspection endpoint since day one; this change just starts
    enforcing them at the controller.
    
    Smoke-tested end-to-end against real Postgres
      - Fresh DB + fresh boot.
      - Admin happy path (bootstrap admin has wildcard `admin` role):
          GET  /api/v1/catalog/items           → 200
          POST /api/v1/catalog/items           → 201 (SMOKE-1 created)
          GET  /api/v1/catalog/uoms            → 200
          POST /api/v1/partners/partners       → 201 (SMOKE-P created)
          POST /api/v1/partners/.../contacts   → 201 (contact created)
          GET  /api/v1/partners/.../contacts   → 200 (PII read)
      - Anonymous negative path (no Bearer token):
          GET  /api/v1/catalog/items           → 401
          GET  /api/v1/partners/.../contacts   → 401
      - 230 unit tests still green (annotations are purely additive,
        no existing test hit the @RequirePermission path since the
        service-level tests bypass the controller entirely).
    
    Why this is a genuine security improvement
      - Before: any authenticated user (including the eventual "Alice
        from reception", the contractor's read-only service account,
        the AI-agent MCP client) could read PII, create partners, and
        create catalog items.
      - After: those operations require explicit role-permission grants
        through metadata__role_permission. The bootstrap admin still
        has unconditional access via the wildcard admin role, so
        nothing in a fresh deployment is broken; but a real operator
        granting minimum-privilege roles now has the columns they need
        in the database to do it.
      - The contact PII boundary in particular is GDPR-relevant: before
        this change, any logged-in user could enumerate every contact's
        name + email + phone. After, only users with partners.contact.read
        can see them.
    
    What's still NOT annotated
      - pbc-inventory's Location create/update/deactivate endpoints
        (only stock.adjust and movement.create are annotated).
      - pbc-orders-sales and pbc-orders-purchase list/get/create/update
        endpoints (only the state-transition verbs are annotated).
      - pbc-identity's user admin endpoints.
      These are the next cleanup chunk. This one stays focused on
      catalog + partners because those were the two PBCs that predated
      P4.3 entirely and hadn't been touched since.
    zichun authored
     
  • The framework's eighth PBC and the first one that's NOT order- or
    master-data-shaped. Work orders are about *making things*, which is
    the reason the printing-shop reference customer exists in the first
    place. With this PBC in place the framework can express the full
    buy-sell-make loop end-to-end.
    
    What landed (new module pbc/pbc-production/)
      - WorkOrder entity (production__work_order):
          code, output_item_code, output_quantity, status (DRAFT|COMPLETED|
          CANCELLED), due_date (display-only), source_sales_order_code
          (nullable — work orders can be either auto-spawned from a
          confirmed SO or created manually), ext.
      - WorkOrderJpaRepository with existsBySourceSalesOrderCode /
        findBySourceSalesOrderCode for the auto-spawn dedup.
      - WorkOrderService.create / complete / cancel:
          • create validates the output item via CatalogApi (same seam
            SalesOrderService and PurchaseOrderService use), rejects
            non-positive quantities, publishes WorkOrderCreatedEvent.
          • complete(outputLocationCode) credits finished goods to the
            named location via InventoryApi.recordMovement with
            reason=PRODUCTION_RECEIPT (added in commit c52d0d59) and
            reference="WO:<order_code>", then flips status to COMPLETED,
            then publishes WorkOrderCompletedEvent — all in the same
            @Transactional method.
          • cancel only allowed from DRAFT (no un-producing finished
            goods); publishes WorkOrderCancelledEvent.
      - SalesOrderConfirmedSubscriber (@PostConstruct →
        EventBus.subscribe(SalesOrderConfirmedEvent::class.java, ...)):
        walks the confirmed sales order's lines via SalesOrdersApi
        (NOT by importing pbc-orders-sales) and calls
        WorkOrderService.create for each line. Coded as one bean with
        one subscription — matches pbc-finance's one-bean-per-subject
        pattern.
          • Idempotent on source sales order code — if any work order
            already exists for the SO, the whole spawn is a no-op.
          • Tolerant of a missing SO (defensive against a future async
            bus that could deliver the confirm event after the SO has
            vanished).
          • The WO code convention: WO-FROM-<so_code>-L<lineno>, e.g.
            WO-FROM-SO-2026-0001-L1.
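    The code convention above can be captured as a tiny helper. This is an
    illustrative Java sketch (the codebase is Kotlin); WorkOrderCodes is a
    hypothetical name, and the real subscriber may well build the code inline.

```java
// Hypothetical helper for the WO-FROM-<so_code>-L<lineno> convention.
public class WorkOrderCodes {
    public static String fromSalesOrderLine(String soCode, int lineNo) {
        // SO lines are numbered from 1 in the examples above.
        if (lineNo < 1) {
            throw new IllegalArgumentException("line numbers are 1-based, got " + lineNo);
        }
        return "WO-FROM-" + soCode + "-L" + lineNo;
    }

    public static void main(String[] args) {
        // Matches the example in the commit message.
        System.out.println(fromSalesOrderLine("SO-2026-0001", 1)); // WO-FROM-SO-2026-0001-L1
    }
}
```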
    
      - REST controller /api/v1/production/work-orders: list, get,
        by-code, create, complete, cancel — each annotated with
        @RequirePermission. Four permission keys declared in the
        production.yml metadata: read / create / complete / cancel.
      - CompleteWorkOrderRequest: single-arg DTO uses the
        @JsonCreator(mode=PROPERTIES) + @param:JsonProperty workaround
        for the single-argument-constructor deserialization pitfall that
        already bit ShipSalesOrderRequest and ReceivePurchaseOrderRequest;
        cross-referenced in the KDoc so the third instance doesn't need
        re-discovery.
      - distribution/.../pbc-production/001-production-init.xml:
        CREATE TABLE with CHECK on status + CHECK on qty>0 + GIN on ext
        + the usual indexes. NEITHER output_item_code NOR
        source_sales_order_code is a foreign key (cross-PBC reference
        policy — guardrail #9).
      - settings.gradle.kts + distribution/build.gradle.kts: registers
        the new module and adds it to the distribution dependency list.
      - master.xml: includes the new changelog in dependency order,
        after pbc-finance.
    
    New api.v1 surface: org.vibeerp.api.v1.event.production.*
      - WorkOrderCreatedEvent, WorkOrderCompletedEvent,
        WorkOrderCancelledEvent — sealed under WorkOrderEvent,
        aggregateType="production.WorkOrder". Same pattern as the
        order events, so any future consumer (finance revenue
        recognition, warehouse put-away dashboard, a customer plug-in
        that needs to react to "work finished") subscribes through the
        public typed-class overload with no dependency on pbc-production.
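    The event shape can be sketched like this, using Java sealed interfaces
    in place of the Kotlin sealed class. The aggregateType value is from the
    commit; the member names are simplified stand-ins for the real
    WorkOrderCreatedEvent/WorkOrderCompletedEvent/WorkOrderCancelledEvent.

```java
// Illustrative sketch of the sealed event hierarchy (names simplified).
public class ProductionEvents {
    sealed interface WorkOrderEvent permits Created, Completed, Cancelled {
        String workOrderCode();
        // Every variant shares the aggregate type used by the outbox.
        default String aggregateType() { return "production.WorkOrder"; }
    }
    record Created(String workOrderCode) implements WorkOrderEvent {}
    record Completed(String workOrderCode) implements WorkOrderEvent {}
    record Cancelled(String workOrderCode) implements WorkOrderEvent {}

    public static void main(String[] args) {
        WorkOrderEvent e = new Completed("WO-FROM-SO-WO-1-L1");
        // A consumer can subscribe by concrete class with no pbc-production dependency.
        System.out.println(e.aggregateType() + " " + e.workOrderCode());
    }
}
```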
    
    Unit tests (13 new, 217 → 230 total)
      - WorkOrderServiceTest (9 tests): create dedup, positive quantity
        check, catalog seam, happy-path create with event assertion,
        complete rejects non-DRAFT, complete happy path with
        InventoryApi.recordMovement assertion + event assertion, cancel
        from DRAFT, cancel rejects COMPLETED.
      - SalesOrderConfirmedSubscriberTest (5 tests): subscription
        registration count, spawns N work orders for N SO lines with
        correct code convention, idempotent when WOs already exist,
        no-op on missing SO, and a listener-routing test that captures
        the EventListener instance and verifies it forwards to the
        right service method.
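    The listener-routing technique those tests use can be sketched as below:
    a fake bus records each registered listener so the test can invoke it
    directly and assert it forwards to the service. All names here are
    illustrative Java stand-ins, not the framework's actual test doubles.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal sketch of capturing a subscription and driving it from a test.
public class ListenerCaptureSketch {
    static class FakeEventBus {
        // Every registered listener is captured for later direct invocation.
        final List<Consumer<Object>> captured = new ArrayList<>();
        <T> void subscribe(Class<T> type, Consumer<T> listener) {
            captured.add(e -> listener.accept(type.cast(e)));
        }
    }
    record SalesOrderConfirmed(String soCode) {}

    public static void main(String[] args) {
        FakeEventBus bus = new FakeEventBus();
        List<String> serviceCalls = new ArrayList<>();
        // "Subscriber" under test registers a listener that forwards to a service.
        bus.subscribe(SalesOrderConfirmed.class, e -> serviceCalls.add("spawn:" + e.soCode()));
        // The test drives the captured listener and checks the forwarding.
        bus.captured.get(0).accept(new SalesOrderConfirmed("SO-WO-1"));
        System.out.println(serviceCalls); // [spawn:SO-WO-1]
    }
}
```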
    
    End-to-end smoke verified against real Postgres
      - Fresh DB, fresh boot. Both OrderEventSubscribers (pbc-finance)
        and SalesOrderConfirmedSubscriber (pbc-production) log their
        subscription registration before the first HTTP call.
      - Seeded two items (BROCHURE-A, BROCHURE-B), a customer, and a
        finished-goods location (WH-FG).
      - Created a 2-line sales order (SO-WO-1), confirmed it.
          → Produced ONE orders_sales.SalesOrder outbox row.
          → Produced ONE AR POSTED finance__journal_entry for 1000 USD
            (500 × 1 + 250 × 2 — the pbc-finance consumer still works).
          → Produced TWO draft work orders auto-spawned from the SO
            lines: WO-FROM-SO-WO-1-L1 (BROCHURE-A × 500) and
            WO-FROM-SO-WO-1-L2 (BROCHURE-B × 250), both with
            source_sales_order_code=SO-WO-1.
      - Completed WO1 to WH-FG:
          → Produced a PRODUCTION_RECEIPT ledger row for BROCHURE-A
            delta=500 reference="WO:WO-FROM-SO-WO-1-L1".
          → inventory__stock_balance now has BROCHURE-A = 500 at WH-FG.
          → Flipped status to COMPLETED.
      - Cancelled WO2 → CANCELLED.
      - Created a manual WO-MANUAL-1 with no source SO → succeeds;
        demonstrates the "operator creates a WO to build inventory
        ahead of demand" path.
      - platform__event_outbox ends with 6 rows all DISPATCHED:
          orders_sales.SalesOrder SO-WO-1
          production.WorkOrder WO-FROM-SO-WO-1-L1  (created)
          production.WorkOrder WO-FROM-SO-WO-1-L2  (created)
          production.WorkOrder WO-FROM-SO-WO-1-L1  (completed)
          production.WorkOrder WO-FROM-SO-WO-1-L2  (cancelled)
          production.WorkOrder WO-MANUAL-1         (created)
    
    Why this chunk was the right next move
      - pbc-finance was a PASSIVE consumer — it only wrote derived
        reporting state. pbc-production is the first ACTIVE consumer:
        it creates new aggregates with their own state machines and
        their own cross-PBC writes in reaction to another PBC's events.
        This is a meaningfully harder test of the event-driven
        integration story and it passes end-to-end.
      - "One ledger, three callers" is now real: sales shipments,
        purchase receipts, AND production receipts all feed the same
        inventory__stock_movement ledger through the same
        InventoryApi.recordMovement facade. The facade has proven
        stable under three very different callers.
      - The framework now expresses the basic ERP trinity: buy
        (purchase orders), sell (sales orders), make (work orders).
        That's the shape every real manufacturing customer needs, and
        it's done without any PBC importing another.
    
    What's deliberately NOT in v1
      - No bill of materials. complete() only credits finished goods;
        it does NOT issue raw materials. A shop floor that needs to
        consume 4 sheets of paper to produce 1 brochure does it
        manually via POST /api/v1/inventory/movements with reason=
        MATERIAL_ISSUE (added in commit c52d0d59). A proper BOM lands
        as WorkOrderInput lines in a future chunk.
      - No IN_PROGRESS state. complete() goes DRAFT → COMPLETED in
        one step. A real shop floor needs "started but not finished"
        visibility; that's the next iteration.
      - No routings, operations, machine assignments, or due-date
        enforcement. due_date is display-only.
      - No "scrap defective output" flow for a COMPLETED work order.
        cancel refuses from COMPLETED; the fix requires a new
        MovementReason and a new event, not a special-case method
        on the service.
  • Extends pbc-inventory's MovementReason enum with the two reasons a
    production-style PBC needs to record stock movements through the
    existing InventoryApi.recordMovement facade. No new endpoint, no
    new database column — just two new enum values, two new
    sign-validation rules, and four new tests.
    
    Why this lands BEFORE pbc-production
      - It's the smallest self-contained change that unblocks any future
        production-related code (the framework's planned pbc-production,
        a customer plug-in's manufacturing module, or even an ad-hoc
        operator script). Each of those callers can now record
        "consume raw material" / "produce finished good" through the
        same primitive that already serves sales shipments and purchase
        receipts.
      - It validates the "one ledger, many callers" property the
        architecture spec promised. Adding a new movement reason takes
        zero schema changes (the column is varchar) and zero plug-in
        changes (the api.v1 facade takes the reason as a string and
        delegates to MovementReason.valueOf inside the adapter). The
        enum lives entirely inside pbc-inventory.
    
    What changed
      - StockMovement.kt: enum gains MATERIAL_ISSUE (Δ ≤ 0) and
        PRODUCTION_RECEIPT (Δ ≥ 0), with KDoc explaining why each one
        was added and how they fit the "one primitive for every direction"
        story.
      - StockMovementService.validateSign: PRODUCTION_RECEIPT joins the
        must-be-non-negative bucket alongside RECEIPT, PURCHASE_RECEIPT,
        and TRANSFER_IN; MATERIAL_ISSUE joins the must-be-non-positive
        bucket alongside ISSUE, SALES_SHIPMENT, and TRANSFER_OUT.
      - 4 new unit tests:
          • record rejects positive delta on MATERIAL_ISSUE
          • record rejects negative delta on PRODUCTION_RECEIPT
          • record accepts a positive PRODUCTION_RECEIPT (happy path,
            new balance row at the receiving location)
          • record accepts a negative MATERIAL_ISSUE (decrements an
            existing balance from 1000 → 800)
      - Total tests: 213 → 217.
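    The two pieces above — the string-reason facade boundary and the sign
    buckets — can be sketched together. The enum values and the error-message
    shape follow the commit; the method name and structure are illustrative
    Java, not the Kotlin service's actual code.

```java
// Sketch of the sign rule plus the string→enum adapter boundary.
public class MovementSigns {
    enum MovementReason {
        // delta must be >= 0
        RECEIPT, PURCHASE_RECEIPT, TRANSFER_IN, PRODUCTION_RECEIPT,
        // delta must be <= 0
        ISSUE, SALES_SHIPMENT, TRANSFER_OUT, MATERIAL_ISSUE
    }

    // The api.v1 facade takes the reason as a string; valueOf resolves it
    // inside the adapter, so the enum never leaves pbc-inventory.
    static void validateSign(String reasonName, long delta) {
        MovementReason reason = MovementReason.valueOf(reasonName);
        switch (reason) {
            case RECEIPT, PURCHASE_RECEIPT, TRANSFER_IN, PRODUCTION_RECEIPT -> {
                if (delta < 0) throw new IllegalArgumentException(
                    "movement reason " + reason + " requires a non-negative delta (got " + delta + ")");
            }
            case ISSUE, SALES_SHIPMENT, TRANSFER_OUT, MATERIAL_ISSUE -> {
                if (delta > 0) throw new IllegalArgumentException(
                    "movement reason " + reason + " requires a non-positive delta (got " + delta + ")");
            }
        }
    }

    public static void main(String[] args) {
        validateSign("MATERIAL_ISSUE", -200);   // ok: consume raw material
        validateSign("PRODUCTION_RECEIPT", 50); // ok: produce finished good
        try {
            validateSign("PRODUCTION_RECEIPT", -1);
        } catch (IllegalArgumentException e) {
            // prints the sign-rule message from the negative smoke test
            System.out.println(e.getMessage());
        }
    }
}
```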
    
    Smoke test against real Postgres
      - Booted on a fresh DB; no schema migration needed because the
        `reason` column is varchar(32), already wide enough.
      - Seeded an item RAW-PAPER, an item FG-WIDGET, and a location
        WH-PROD via the existing endpoints.
      - POST /api/v1/inventory/movements with reason=RECEIPT for 1000
        raw paper → balance row at 1000.
      - POST /api/v1/inventory/movements with reason=MATERIAL_ISSUE
        delta=-200 reference="WO:WO-EVT-1" → balance becomes 800,
        ledger row written.
      - POST /api/v1/inventory/movements with reason=PRODUCTION_RECEIPT
        delta=50 reference="WO:WO-EVT-1" → balance row at 50 for
        FG-WIDGET, ledger row written.
      - Negative test: POST PRODUCTION_RECEIPT with delta=-1 →
        400 Bad Request "movement reason PRODUCTION_RECEIPT requires
        a non-negative delta (got -1)" — the new sign rule fires.
      - Final ledger has 3 rows (RECEIPT, MATERIAL_ISSUE,
        PRODUCTION_RECEIPT); final balance has FG-WIDGET=50 and
        RAW-PAPER=800 — the math is correct.
    
    What's deliberately NOT in this chunk
      - No pbc-production yet. That's the next chunk; this is just
        the foundation that lets it (or any other production-ish
        caller) write to the ledger correctly without needing changes
        to api.v1 or pbc-inventory ever again.
      - No new return-path reasons (RETURN_FROM_CUSTOMER,
        RETURN_TO_SUPPLIER) — those land when the returns flow does.
      - No reference convention for "WO:" — that's documented in the
        KDoc on `reference`, not enforced anywhere. The v0.16/v0.17
        convention "<source>:<code>" continues unchanged.
  • The minimal pbc-finance that landed in commit bf090c2e reacted only to
    *ConfirmedEvent. This change wires the rest of the order lifecycle
    (ship/receive → SETTLED, cancel → REVERSED) so the journal entry
    reflects what actually happened to the order, not just the moment
    it was confirmed.
    
    JournalEntryStatus (new enum + new column)
      - POSTED   — created from a confirm event (existing behaviour)
      - SETTLED  — promoted by SalesOrderShippedEvent /
                   PurchaseOrderReceivedEvent
      - REVERSED — promoted by SalesOrderCancelledEvent /
                   PurchaseOrderCancelledEvent
      - The status field is intentionally a separate axis from
        JournalEntryType: type tells you "AR or AP", status tells you
        "where in its lifecycle".
    
    distribution/.../pbc-finance/002-finance-status.xml
      - ALTER TABLE adds `status varchar(16) NOT NULL DEFAULT 'POSTED'`,
        a CHECK constraint mirroring the enum values, and an index on
        status for the new filter endpoint. The DEFAULT 'POSTED' covers
        any existing rows on an upgraded environment without a backfill
        step.
    
    JournalEntryService — four new methods, all idempotent
      - settleFromSalesShipped(event)        → POSTED → SETTLED for AR
      - settleFromPurchaseReceived(event)    → POSTED → SETTLED for AP
      - reverseFromSalesCancelled(event)     → POSTED → REVERSED for AR
      - reverseFromPurchaseCancelled(event)  → POSTED → REVERSED for AP
      Each runs through a private settleByOrderCode/reverseByOrderCode
      helper that:
        1. Looks up the row by order_code (new repo method
           findFirstByOrderCode). If absent → no-op (e.g. cancel from
           DRAFT means no *ConfirmedEvent was ever published, so no
           journal entry exists; this is the most common cancel path).
        2. If the row is already in the destination status → no-op
           (idempotent under at-least-once delivery, e.g. outbox replay
           or future Kafka retry).
        3. Refuses to overwrite a contradictory terminal status — a
           SETTLED row cannot be REVERSED, and vice versa. The producer's
           state machine forbids cancel-from-shipped/received, so
           reaching here implies an upstream contract violation; logged
           at WARN and the row is left alone.
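    The three rules above can be sketched as one pure function. This is an
    illustrative Java reduction of the helper (hypothetical names, in-memory
    Optional standing in for the repository lookup), not the service itself.

```java
import java.util.Optional;

// Sketch of the settle/reverse helper's three rules.
public class LifecycleSketch {
    enum Status { POSTED, SETTLED, REVERSED }

    static Optional<Status> transition(Optional<Status> current, Status target) {
        if (current.isEmpty()) return current;        // 1. no row → no-op (cancel-from-DRAFT path)
        if (current.get() == target) return current;  // 2. already there → idempotent no-op
        if (current.get() != Status.POSTED) {
            // 3. contradictory terminal status: upstream contract violation;
            //    log at WARN and leave the row alone.
            System.out.println("WARN: refusing " + current.get() + " -> " + target);
            return current;
        }
        return Optional.of(target);                   // POSTED → SETTLED or REVERSED
    }

    public static void main(String[] args) {
        System.out.println(transition(Optional.of(Status.POSTED), Status.SETTLED));   // Optional[SETTLED]
        System.out.println(transition(Optional.of(Status.SETTLED), Status.SETTLED));  // Optional[SETTLED]
        transition(Optional.of(Status.SETTLED), Status.REVERSED);                     // WARN, row untouched
        System.out.println(transition(Optional.empty(), Status.REVERSED));            // Optional.empty
    }
}
```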
    
    OrderEventSubscribers — six subscriptions registered in one @PostConstruct
      - All six order events from api.v1.event.orders.* are subscribed
        via the typed-class EventBus.subscribe(eventType, listener)
        overload, the same public API a plug-in would use. Boot log
        line updated: "pbc-finance subscribed to 6 order events".
    
    JournalEntryController — new ?status= filter
      - GET /api/v1/finance/journal-entries?status=POSTED|SETTLED|REVERSED
        surfaces the partition. Existing ?orderCode= and ?type= filters
        unchanged. Read permission still finance.journal.read.
    
    12 new unit tests (213 total, was 201)
      - JournalEntryServiceTest: settle/reverse for AR + AP, idempotency
        on duplicate destination status, refusal to overwrite a
        contradictory terminal status, no-op on missing row, default
        POSTED on new entries.
      - OrderEventSubscribersTest: asserts all SIX subscriptions are
        registered, plus one new test that captures all four lifecycle
        listeners and verifies they forward to the correct service methods.
    
    End-to-end smoke (real Postgres, fresh DB)
      - Booted with the new DDL applied (status column + CHECK + index)
        on an empty DB. The OrderEventSubscribers @PostConstruct line
        confirms 6 subscriptions registered before the first HTTP call.
      - Five lifecycle scenarios driven via REST:
          PO-FULL:        confirm + receive  → AP SETTLED  amount=50.00
          SO-FULL:        confirm + ship     → AR SETTLED  amount= 1.00
          SO-REVERSE:     confirm + cancel   → AR REVERSED amount= 1.00
          PO-REVERSE:     confirm + cancel   → AP REVERSED amount=50.00
          SO-DRAFT-CANCEL: cancel only       → NO ROW (no confirm event)
      - finance__journal_entry returns exactly 4 rows (the 5th scenario
        correctly produces nothing) and ?status filters all return the
        expected partition (POSTED=0, SETTLED=2, REVERSED=2).
    
    What's still NOT in pbc-finance
      - Still no debit/credit legs, no chart of accounts, no period
        close, no double-entry invariant. This is the v0.17 minimal
        seed; the real P5.9 build promotes it into a real GL.
      - No reaction to "settle then reverse" or "reverse then settle"
        other than the WARN-and-leave-alone defensive path. A real GL
        would write a separate compensating journal entry; the minimal
        PBC just keeps the row immutable once it leaves POSTED.
  • The framework's seventh PBC, and the first one whose ENTIRE purpose
    is to react to events published by other PBCs. It validates the
    *consumer* side of the cross-PBC event seam that was wired up in
    commit 67406e87 (event-driven cross-PBC integration). With pbc-finance
    in place, the bus now has both producers and consumers in real PBC
    business logic — not just the wildcard EventAuditLogSubscriber that
    ships with platform-events.
    
    What landed (new module pbc/pbc-finance/, ~480 lines including tests)
      - JournalEntry entity (finance__journal_entry):
          id, code (= originating event UUID), type (AR|AP),
          partner_code, order_code, amount, currency_code, posted_at, ext.
          Unique index on `code` is the durability anchor for idempotent
          event delivery; the service ALSO checks existsByCode before
          insert so duplicate-event handling is a clean no-op rather
          than a constraint-violation exception.
      - JournalEntryJpaRepository with existsByCode + findByOrderCode +
        findByType (the read-side filters used by the controller).
      - JournalEntryService.recordSalesConfirmed / recordPurchaseConfirmed
        take a SalesOrderConfirmedEvent / PurchaseOrderConfirmedEvent and
        write the corresponding AR/AP row. @Transactional with
        Propagation.REQUIRED so the listener joins the publisher's TX
        when the bus delivers synchronously (today) and creates a fresh
        one if a future async bus delivers from a worker thread. The
        KDoc explains why REQUIRED is the correct default and why
        REQUIRES_NEW would be wrong here.
      - OrderEventSubscribers @Component with @PostConstruct that calls
        EventBus.subscribe(SalesOrderConfirmedEvent::class.java, ...)
        and EventBus.subscribe(PurchaseOrderConfirmedEvent::class.java, ...)
        once at boot. Uses the public typed-class subscribe overload —
        NOT the platform-internal subscribeToAll wildcard helper. This
        is the API surface plug-ins will also use.
      - JournalEntryController: read-only REST under
        /api/v1/finance/journal-entries with @RequirePermission
        "finance.journal.read". Filter params: ?orderCode= and ?type=.
        Deliberately no POST endpoint — entries are derived state.
      - finance.yml metadata declaring 1 entity, 1 permission, 1 menu.
      - Liquibase changelog at distribution/.../pbc-finance/001-finance-init.xml
        + master.xml include + distribution/build.gradle.kts dep.
      - settings.gradle.kts: registers :pbc:pbc-finance.
      - 9 new unit tests (6 for JournalEntryService, 3 for
        OrderEventSubscribers) — including idempotency, dedup-by-event-id
        contract, listener-forwarding correctness via slot-captured
        EventListener invocation. Total tests: 192 → 201, 16 → 17 modules.
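    The typed-class subscribe overload described above can be sketched with a
    minimal in-memory bus. This is illustrative Java (the real EventBus lives
    in api.v1 and registration happens in @PostConstruct); the bus internals
    here are a guess at shape, not the platform's implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of a typed-class subscribe overload and synchronous dispatch.
public class TypedBusSketch {
    static class EventBus {
        private final Map<Class<?>, List<Consumer<Object>>> listeners = new HashMap<>();
        <T> void subscribe(Class<T> type, Consumer<T> listener) {
            listeners.computeIfAbsent(type, k -> new ArrayList<>())
                     .add(e -> listener.accept(type.cast(e)));
        }
        void publish(Object event) {
            listeners.getOrDefault(event.getClass(), List.of())
                     .forEach(l -> l.accept(event));
        }
    }
    record SalesOrderConfirmedEvent(String orderCode) {}
    record PurchaseOrderConfirmedEvent(String orderCode) {}

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // What the subscriber bean does once at boot: one typed subscription per event.
        bus.subscribe(SalesOrderConfirmedEvent.class, e -> System.out.println("AR row for " + e.orderCode()));
        bus.subscribe(PurchaseOrderConfirmedEvent.class, e -> System.out.println("AP row for " + e.orderCode()));
        bus.publish(new SalesOrderConfirmedEvent("SO-FIN-1"));   // AR row for SO-FIN-1
        bus.publish(new PurchaseOrderConfirmedEvent("PO-FIN-1")); // AP row for PO-FIN-1
    }
}
```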
    
    Why this is the right shape
      - pbc-finance has zero source dependency on pbc-orders-sales,
        pbc-orders-purchase, pbc-partners, or pbc-catalog. The Gradle
        build refuses any cross-PBC dependency at configuration time —
        pbc-finance only declares api/api-v1, platform-persistence, and
        platform-security. The events it consumes live in
        api.v1.event.orders, and the partner/item references it stores
        are opaque string codes.
      - Subscribers go through EventBus.subscribe(eventType, listener),
        the public typed-class overload from api.v1.event.EventBus.
        Plug-ins use exactly this API; this PBC proves the API works
        end-to-end from a real consumer.
      - The consumer is idempotent on the producer's event id, so
        at-least-once delivery (outbox replay, future Kafka retry)
        cannot create duplicate journal entries. This makes the
        consumer correct under both the current synchronous bus and
        any future async / out-of-process bus.
      - Read-only REST API: derived state should not be writable from
        the outside. Adjustments and reversals will land later as their
        own command verbs when the real P5.9 finance build needs them,
        not as a generic create endpoint.
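    The idempotency contract above reduces to one check: the journal-entry
    code IS the originating event UUID, so an existence check before insert
    makes a replay a no-op. A minimal Java sketch (hypothetical names; an
    in-memory set stands in for the repository and its unique index):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

// Sketch of dedup-by-event-id for an at-least-once consumer.
public class DedupSketch {
    // Stand-in for the repository behind the unique index on `code`.
    private final Set<String> existingCodes = new HashSet<>();

    // Returns true if a new row was written, false if the event was a replay.
    boolean recordConfirmed(UUID eventId) {
        if (existingCodes.contains(eventId.toString())) return false; // duplicate delivery
        existingCodes.add(eventId.toString());                        // insert the AR/AP row
        return true;
    }

    public static void main(String[] args) {
        DedupSketch service = new DedupSketch();
        UUID eventId = UUID.randomUUID();
        System.out.println(service.recordConfirmed(eventId)); // true  — first delivery writes
        System.out.println(service.recordConfirmed(eventId)); // false — outbox replay is a no-op
    }
}
```

    Because the check keys on the producer's event id rather than on order
    state, it stays correct under both the synchronous bus and a future
    async/out-of-process bus.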
    
    End-to-end smoke verified against real Postgres
      - Booted on a fresh DB; the OrderEventSubscribers @PostConstruct
        log line confirms the subscription registered before any HTTP
        traffic.
      - Seeded an item, supplier, customer, location (existing PBCs).
      - Created PO PO-FIN-1 (5000 × 0.04 = 200 USD) → confirmed →
        GET /api/v1/finance/journal-entries returns ONE row:
          type=AP partner=SUP-PAPER order=PO-FIN-1 amount=200.0000 USD
      - Created SO SO-FIN-1 (50 × 0.10 = 5 USD) → confirmed →
        GET /api/v1/finance/journal-entries now returns TWO rows:
          type=AR partner=CUST-ACME order=SO-FIN-1 amount=5.0000 USD
          (plus the AP row from above)
      - GET /api/v1/finance/journal-entries?orderCode=PO-FIN-1 →
        only the AP row.
      - GET /api/v1/finance/journal-entries?type=AR → only the AR row.
      - platform__event_outbox shows 2 rows (one per confirm) both
        DISPATCHED, finance__journal_entry shows 2 rows.
      - The journal-entry code column equals the originating event
        UUID, proving the dedup contract is wired.
    
    What this is NOT (yet)
      - Not a real general ledger. No debit/credit legs, no chart of
        accounts, no period close, no double-entry invariant. P5.9
        promotes this minimal seed into a real finance PBC.
      - No reaction to ship/receive/cancel events yet — only confirm.
        Real revenue recognition (which happens at ship time for most
        accounting standards) lands with the P5.9 build.
      - No outbound api.v1.ext facade. pbc-finance does not (yet)
        expose itself to other PBCs; it is a pure consumer. When
        pbc-production needs to know "did this order's invoice clear",
        that facade gets added.