• Closes the R2 gap: an admin can now manage users and roles
    entirely from the SPA without touching curl or Swagger UI.
    
    Backend (pbc-identity):
      - New RoleService with createRole, assignRole, revokeRole,
        findUserRoleCodes, listRoles. Each method validates
        existence + idempotency (duplicate assignment rejected,
        missing role rejected).
      - New RoleController at /api/v1/identity/roles (CRUD) +
        /api/v1/identity/users/{userId}/roles/{roleCode}
        (POST assign, DELETE revoke). All permission-gated:
        identity.role.read, identity.role.create,
        identity.role.assign.
      - identity.yml updated: added identity.role.create permission.
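The existence + idempotency rules can be sketched as follows. This is a minimal in-memory illustration of the validation contract, not the real pbc-identity code (which works against JPA repositories); all names and storage here are hypothetical.

```kotlin
// In-memory sketch of the RoleService validation rules:
// missing role rejected, duplicate assignment rejected.
class RoleServiceSketch {
    private val roles = mutableSetOf<String>()                            // known role codes
    private val assignments = mutableMapOf<String, MutableSet<String>>()  // userId -> role codes

    fun createRole(code: String) {
        require(code.isNotBlank()) { "role code must not be blank" }
        require(roles.add(code)) { "role '$code' already exists" }
    }

    fun assignRole(userId: String, roleCode: String) {
        require(roleCode in roles) { "role '$roleCode' does not exist" }         // missing role rejected
        val userRoles = assignments.getOrPut(userId) { mutableSetOf() }
        require(userRoles.add(roleCode)) { "role '$roleCode' already assigned" } // duplicate rejected
    }

    fun revokeRole(userId: String, roleCode: String) {
        val removed = assignments[userId]?.remove(roleCode) ?: false
        require(removed) { "role '$roleCode' is not assigned to user '$userId'" }
    }

    fun findUserRoleCodes(userId: String): Set<String> = assignments[userId] ?: emptySet()
}
```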
    
    SPA (web/):
      - UsersPage — list with username link to detail, "+ New User"
      - CreateUserPage — username, display name, email form
      - UserDetailPage — shows user info + role toggle list. Each
        role has an Assign/Revoke button that takes effect on the
        user's next login (JWT carries roles from login time).
      - RolesPage — list with inline create form (code + name)
      - Sidebar gains "System" section with Users + Roles links
      - API client + types: identity.listUsers, getUser, createUser,
        listRoles, createRole, getUserRoles, assignRole, revokeRole
    
    Infrastructure:
      - SpaController: added /users/** and /roles/** forwarding
      - SecurityConfiguration: added /users/** and /roles/** to the
        SPA permitAll block
    zichun authored
     

  • Adds GET /api/v1/production/work-orders/shop-floor — a pure read
    that returns every IN_PROGRESS work order with its current
    operation and planned/actual time totals. Designed to feed a
    future shop-floor dashboard (web SPA, mobile, or an external
    reporting tool) without any follow-up round trips.
    
    **Service method.** `WorkOrderService.shopFloorSnapshot()` is a
    @Transactional(readOnly = true) query that:
      1. Pulls every IN_PROGRESS work order via the existing
         `WorkOrderJpaRepository.findByStatus`.
      2. Sorts by WO code ascending so a dashboard poll gets a stable
         row order.
      3. For each WO picks the "current operation" = first op in
         IN_PROGRESS status, or, if none, first PENDING op. This
         captures both live states: "operator is running step N right
         now" and "operator just finished step N and hasn't picked up
         step N+1 yet".
      4. Computes `totalStandardMinutes` (sum across every op) +
         `totalActualMinutes` (sum of completed ops' `actualMinutes`
         only, treating null as zero).
      5. Counts completed vs total operations for a "step 2 of 5"
         badge.
      6. Returns a list of `ShopFloorEntry` DTOs — flat structure, one
         row per WO, nullable `current*` fields when a WO has no
         routing at all (v2-compat path).
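The current-operation pick (steps 3–4) reduces to a few lines of pure logic. A minimal sketch with simplified stand-in types (the real code operates on the JPA entities):

```kotlin
// Simplified stand-ins for the real WorkOrderOperation entity.
enum class OpStatus { PENDING, IN_PROGRESS, COMPLETED }
data class Op(val lineNo: Int, val status: OpStatus, val standardMinutes: Double, val actualMinutes: Double?)

// "Current operation" = first IN_PROGRESS op, else first PENDING op, else null
// (null covers the no-routing v2-compat path).
fun currentOp(ops: List<Op>): Op? =
    ops.firstOrNull { it.status == OpStatus.IN_PROGRESS }
        ?: ops.firstOrNull { it.status == OpStatus.PENDING }

// Planned total sums every op; actual total sums completed ops only, null -> 0.
fun totalStandardMinutes(ops: List<Op>): Double = ops.sumOf { it.standardMinutes }
fun totalActualMinutes(ops: List<Op>): Double =
    ops.filter { it.status == OpStatus.COMPLETED }.sumOf { it.actualMinutes ?: 0.0 }
```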
    
    **HTTP surface.**
    - `GET /api/v1/production/work-orders/shop-floor`
    - New permission `production.shop-floor.read`
    - Response is `List<ShopFloorEntryResponse>` — flat so a SPA can
      render a table without joining across nested JSON. Fields are
      1:1 with the service-side `ShopFloorEntry`.
    
    **Design choices.**
    - Mounted under `/work-orders/shop-floor` rather than a top-level
      `/production/shop-floor` so every production read stays under
      the same permission/audit/OpenAPI root.
    - Read-only, zero events published, zero ledger writes. Pure
      projection over existing state.
    - Returns empty list when no WO is in-progress — the dashboard
      renders "no jobs running" without a special case.
    - Sorted by code so polling is deterministic. A future chunk
      might add sort-by-work-center if a dashboard needs a
      by-station view.
    
    **Why not a top-level "shop-floor" PBC.** A shop-floor dashboard
    doesn't own any state — every field it displays is projected from
    pbc-production. A new PBC would duplicate the data model and
    create a reaction loop on work order events. Keeping the read in
    pbc-production matches the CLAUDE.md guardrail "grow the PBC when
    real consumers appear, not on speculation".
    
    **Nullable `current*` fields.** A WO with an empty operations list
    (the v2-compat path — auto-spawned from SalesOrderConfirmedSubscriber
    before v3 routings) has all four `current*` fields set to null.
    The dashboard UI renders "no routing" or similar without any
    downstream round trip.
    
    **Tests (5 new).** empty snapshot when no IN_PROGRESS WOs; one
    entry per IN_PROGRESS WO with stable sort; current-op picks
    IN_PROGRESS over PENDING; current-op picks first PENDING when no
    op is IN_PROGRESS (between-operations state); v2-compat WO with
    no operations shows null current-op fields and zero time sums.
    
    **Smoke-tested end-to-end against real Postgres:**
    1. Empty shop-floor initially (no IN_PROGRESS WOs)
    2. Started plugin-printing-shop-quote-to-work-order BPMN with
       quoteCode=Q-DASH-1, quantity=500
    3. Started the resulting WO — shop-floor showed
       currentOperationLineNo=1 (CUT @ PRINTING-CUT-01) status=PENDING,
       0/4 completed, totalStandardMinutes=75, totalActualMinutes=0
    4. Started op 1 — currentOperationStatus flipped to IN_PROGRESS
    5. Completed op 1 with actualMinutes=17 — current op rolled
       forward to line 2 (PRINT @ PRINTING-PRESS-A) status=PENDING,
       operationsCompleted=1/4, totalActualMinutes=17
    
    24 modules, 355 unit tests (+5), all green.
  • Extends WorkOrderRequestedEvent with an optional routing so a
    producer — core PBC or customer plug-in — can attach shop-floor
    operations to a requested work order without importing any
    pbc-production internals. The reference printing-shop plug-in's
    quote-to-work-order BPMN now ships a 4-step default routing
    (CUT → PRINT → FOLD → BIND) end-to-end through the public api.v1
    surface.
    
    **api.v1 surface additions (additive, defaulted).**
    - New public data class `RoutingOperationSpec(lineNo, operationCode,
      workCenter, standardMinutes)` in
      `api.v1.event.production.WorkOrderEvents` with init-block
      invariants matching pbc-production v3's internal validation
      (positive lineNo, non-blank operationCode + workCenter,
      non-negative standardMinutes).
    - `WorkOrderRequestedEvent` gains an `operations: List<RoutingOperationSpec>`
      field, defaulted to `emptyList()`. Existing callers compile
      without changes; the event's init block now also validates
      that every operation has a unique lineNo. Convention matches
      the other v1 events that already carry defaulted `eventId` and
      `occurredAt` — additive within a major version.
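The invariants described above can be sketched like this (shape simplified; the real class lives in `api.v1.event.production.WorkOrderEvents` and the uniqueness check sits in the event's own init block):

```kotlin
// Sketch of the init-block invariants on the public routing spec.
data class RoutingOperationSpec(
    val lineNo: Int,
    val operationCode: String,
    val workCenter: String,
    val standardMinutes: Double,
) {
    init {
        require(lineNo > 0) { "lineNo must be positive" }
        require(operationCode.isNotBlank()) { "operationCode must not be blank" }
        require(workCenter.isNotBlank()) { "workCenter must not be blank" }
        require(standardMinutes >= 0) { "standardMinutes must be non-negative" }
    }
}

// Event-level invariant: every operation carries a unique lineNo.
fun requireUniqueLineNos(operations: List<RoutingOperationSpec>) {
    val lineNos = operations.map { it.lineNo }
    require(lineNos.size == lineNos.toSet().size) { "duplicate lineNo in operations" }
}
```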
    
    **pbc-production subscriber wiring.**
    - `WorkOrderRequestedSubscriber.handle` now maps
      `event.operations` → `WorkOrderOperationCommand` 1:1 and passes
      them to `CreateWorkOrderCommand`. Empty list keeps the v2
      behavior exactly (auto-spawned orders from the SO path still
      get no routing and walk DRAFT → IN_PROGRESS → COMPLETED without
      any gate); a non-empty list feeds the new v3 WorkOrderOperation
      children and forces a sequential walk on the shop floor. The
      log line now includes `ops=<size>` so operators can see at a
      glance whether a WO came with a routing.
    
    **Reference plug-in.**
    - `CreateWorkOrderFromQuoteTaskHandler` now attaches
      `DEFAULT_PRINTING_SHOP_ROUTING`: a 4-step sequence modeled on
      the reference business doc's brochure production flow. Each
      step gets its own work center (PRINTING-CUT-01,
      PRINTING-PRESS-A, PRINTING-FOLD-01, PRINTING-BIND-01) so a
      future shop-floor dashboard can show which station is running
      which job. Standard times are round-number placeholders
      (15/30/10/20 minutes) — a real customer tunes them from
      historical data. Deliberately hard-coded in v1: a real shop
      with a dozen different flows would either ship a richer plug-in
      that picks routing per item type, or wait for a future Tier 1
      "routing template" metadata entity. v1 just proves the
      event-driven seam carries v3 operations end-to-end.
    
    **Why this is the right shape.**
    - Zero new compile-time coupling. The plug-in imports only
      `api.v1.event.production.RoutingOperationSpec`; the plug-in
      linter would refuse any reach into `pbc.production.*`.
    - Core pbc-production stays ignorant of the plug-in: the
      subscriber doesn't know where the event came from.
    - The same `WorkOrderRequestedEvent` path now works for ANY
      producer — the next customer plug-in that spawns routed work
      orders gets zero core changes.
    
    **Tests.** New `WorkOrderRequestedSubscriberTest.handle passes
    event operations through as WorkOrderOperationCommand` asserts
    the 1:1 mapping of RoutingOperationSpec → WorkOrderOperationCommand.
    The existing test gains one assertion that an empty `operations`
    list on the event produces an empty `operations` list on the
    command (backwards-compat lock-in).
    
    **Smoke-tested end-to-end against real Postgres:**
    1. POST /api/v1/workflow/process-instances with processDefinitionKey
       `plugin-printing-shop-quote-to-work-order` and variables
       `{quoteCode: "Q-ROUTING-001", itemCode: "FG-BROCHURE", quantity: 250}`
    2. BPMN runs through CreateWorkOrderFromQuoteTaskHandler,
       publishes WorkOrderRequestedEvent with 4 operations
    3. pbc-production subscriber creates WO `WO-FROM-PRINTINGSHOP-Q-ROUTING-001`
    4. GET /api/v1/production/work-orders/by-code/... returns the WO
       with status=DRAFT and 4 operations (CUT/PRINT/FOLD/BIND) all
       PENDING, each with its own work_center and standard_minutes.
    
    This is the framework's first business flow where a customer
    plug-in provides a routing to a core PBC end-to-end through
    api.v1 alone. Closes the loop between the v3 routings feature
    (commit fa867189) and the executable acceptance test in the
    reference plug-in.
    
    24 modules, 350 unit tests (+1), all green.
  • Adds WorkOrderOperation child entity and two new verbs that gate
    WorkOrder.complete() behind a strict sequential walk of shop-floor
    steps. An empty operations list keeps the v2 behavior exactly; a
    non-empty list forces every op to reach COMPLETED before the work
    order can finish.
    
    **New domain.**
    - `production__work_order_operation` table with
      `UNIQUE (work_order_id, line_no)` and a status CHECK constraint
      admitting PENDING / IN_PROGRESS / COMPLETED.
    - `WorkOrderOperation` @Entity mirroring the `WorkOrderInput` shape:
      `lineNo`, `operationCode`, `workCenter`, `standardMinutes`,
      `status`, `actualMinutes` (nullable), `startedAt` + `completedAt`
      timestamps. No `ext` JSONB — operations are facts, not master
      records.
    - `WorkOrderOperationStatus` enum (PENDING / IN_PROGRESS / COMPLETED).
    - `WorkOrder.operations` collection with the same @OneToMany +
      cascade=ALL + orphanRemoval + @OrderBy("lineNo ASC") pattern as
      `inputs`.
    
    **State machine (sequential).**
    - `startOperation(workOrderId, operationId)` — parent WO must be
      IN_PROGRESS; target op must be PENDING; every earlier op must be
      COMPLETED. Flips to IN_PROGRESS and stamps `startedAt`.
      Idempotent no-op if already IN_PROGRESS.
    - `completeOperation(workOrderId, operationId, actualMinutes)` —
      parent WO must be IN_PROGRESS; target op must be IN_PROGRESS;
      `actualMinutes` must be non-negative. Flips to COMPLETED and
      stamps `completedAt`. Idempotent with the same `actualMinutes`;
      refuses to clobber with a different value.
    - `WorkOrder.complete()` gains a routings gate: refuses if any
      operation is not COMPLETED. Empty operations list is legal and
      preserves v2 behavior (auto-spawned orders from
      `SalesOrderConfirmedSubscriber` continue to complete without
      any gate).
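The sequential gate described above boils down to a small amount of pure logic. A sketch with hypothetical simplified types (the real verbs live on the service and entity and also stamp `startedAt`/`completedAt`):

```kotlin
// Sketch of the sequential operation walk: starting op N requires every
// earlier op COMPLETED; completing the WO requires all ops COMPLETED.
enum class Status { PENDING, IN_PROGRESS, COMPLETED }
class OpSketch(val lineNo: Int, var status: Status = Status.PENDING, var actualMinutes: Double? = null)

fun startOperation(ops: List<OpSketch>, lineNo: Int) {
    val target = ops.first { it.lineNo == lineNo }
    if (target.status == Status.IN_PROGRESS) return            // idempotent no-op
    require(target.status == Status.PENDING) { "op already completed" }
    require(ops.filter { it.lineNo < lineNo }.all { it.status == Status.COMPLETED }) {
        "earlier operation(s) are not yet COMPLETED"            // skip-ahead refused
    }
    target.status = Status.IN_PROGRESS
}

fun completeOperation(ops: List<OpSketch>, lineNo: Int, actualMinutes: Double) {
    require(actualMinutes >= 0) { "actualMinutes must be non-negative" }
    val target = ops.first { it.lineNo == lineNo }
    if (target.status == Status.COMPLETED) {                   // idempotent replay,
        require(target.actualMinutes == actualMinutes) { "refusing to clobber actualMinutes" }
        return                                                 // but never clobber
    }
    require(target.status == Status.IN_PROGRESS) { "op is not IN_PROGRESS" }
    target.status = Status.COMPLETED
    target.actualMinutes = actualMinutes
}

// Empty list -> true, which preserves the v2 no-routing behavior.
fun canCompleteWorkOrder(ops: List<OpSketch>): Boolean = ops.all { it.status == Status.COMPLETED }
```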
    
    **Why sequential, not parallel.** v3 deliberately forbids parallel
    operations on one routing. The shop-floor dashboard story is
    trivial when the invariant is "you are on step N of M"; the unit
    test matrix is finite. Parallel routings (two presses in parallel)
    wait for a real consumer asking for them. Same pattern as every
    other pbc-production invariant — grow the PBC when consumers
    appear, not on speculation.
    
    **Why standardMinutes + actualMinutes instead of just timestamps.**
    The variance between planned and actual runtime is the single
    most interesting data point on a routing. Deriving it from
    `completedAt - startedAt` at report time has to fight
    shift-boundary and pause-resume ambiguity; the operator typing in
    "this run took 47 minutes" is the single source of truth. `startedAt`
    and `completedAt` are kept as an audit trail, not used for
    variance math.
    
    **Why work_center is a varchar not a FK.** Same cross-PBC discipline
    as every other identifier in pbc-production: work centers will be
    the seam for a future pbc-equipment PBC, and pinning a FK now
    would couple two PBC schemas before the consumer even exists
    (CLAUDE.md guardrail #9).
    
    **HTTP surface.**
    - `POST /api/v1/production/work-orders/{id}/operations/{operationId}/start`
      → `production.work-order.operation.start`
    - `POST /api/v1/production/work-orders/{id}/operations/{operationId}/complete`
      → `production.work-order.operation.complete`
      Body: `{"actualMinutes": "..."}`. Annotated with the
      single-arg Jackson trap escape hatch (`@JsonCreator(mode=PROPERTIES)`
      + `@param:JsonProperty`) — same trap that bit
      `CompleteWorkOrderRequest`, `ShipSalesOrderRequest`,
      `ReceivePurchaseOrderRequest`. Caught at smoke-test time.
    - `CreateWorkOrderRequest` accepts an optional `operations` array
      alongside `inputs`.
    - `WorkOrderResponse` gains `operations: List<WorkOrderOperationResponse>`
      showing status, standardMinutes, actualMinutes, startedAt,
      completedAt.
    
    **Metadata.** Two new permissions in `production.yml`:
    `production.work-order.operation.start` and
    `production.work-order.operation.complete`.
    
    **Tests (12 new).** create-with-ops happy path; duplicate line_no
    refused; blank operationCode refused; complete() gated when any
    op is not COMPLETED; complete() passes when every op is COMPLETED;
    startOperation refused on DRAFT parent; startOperation flips
    PENDING to IN_PROGRESS and stamps startedAt; startOperation
    refuses skip-ahead over a PENDING predecessor; startOperation is
    idempotent when already IN_PROGRESS; completeOperation records
    actualMinutes and flips to COMPLETED; completeOperation rejects
    negative actualMinutes; completeOperation refuses clobbering an
    already-COMPLETED op with a different value.
    
    **Smoke-tested end-to-end against real Postgres:**
    - Created a WO with 3 operations (CUT → PRINT → BIND)
    - `complete()` refused while DRAFT, then refused while IN_PROGRESS
      with pending ops ("3 routing operation(s) are not yet COMPLETED")
    - Skip-ahead `startOperation(op2)` refused ("earlier operation(s)
      are not yet COMPLETED")
    - Walked ops 1 → 2 → 3 through start + complete with varying
      actualMinutes (17, 32.5, 18 vs standard 15, 30, 20)
    - Final `complete()` succeeded, wrote exactly ONE
      PRODUCTION_RECEIPT ledger row for 100 units of FG-BROCHURE —
      no premature writes
    - Separately verified a no-operations WO still walks DRAFT →
      IN_PROGRESS → COMPLETED exactly like v2
    
    24 modules, 349 unit tests (+12), all green.
  • …c-warehousing StockTransfer
    
    First cross-PBC reaction originating from pbc-quality. Records a
    REJECTED inspection with explicit source + quarantine location
    codes, publishes an api.v1 event inside the same transaction as
    the row insert, and pbc-warehousing's new subscriber atomically
    creates + confirms a StockTransfer that moves the rejected
    quantity to the quarantine bin. The whole chain — inspection
    insert + event publish + transfer create + confirm + two ledger
    rows — runs in a single transaction under the synchronous
    in-process bus with Propagation.MANDATORY.
    
    ## Why the auto-quarantine is opt-in per-inspection
    
    Not every inspection wants physical movement. A REJECTED batch
    that's already separated from good stock on the shop floor doesn't
    need the framework to move anything; the operator just wants the
    record. Forcing every rejection to create a ledger pair would
    collide with real-world QC workflows.
    
    The contract is simple: the `InspectionRecord` now carries two
    OPTIONAL columns (`source_location_code`, `quarantine_location_code`).
    When BOTH are set AND the decision is REJECTED AND the rejected
    quantity is positive, the subscriber reacts. Otherwise it logs at
    DEBUG and does nothing. The event is published either way, so
    audit/KPI subscribers see every inspection regardless.
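The activation contract is a four-way conjunction, sketched below as a standalone predicate (hypothetical function name; the real check lives inline in the subscriber's `handle`):

```kotlin
import java.math.BigDecimal

// Sketch of the opt-in contract: the subscriber reacts only when ALL four
// conditions hold; otherwise it logs at DEBUG and does nothing.
fun shouldQuarantine(
    decision: String,
    rejectedQuantity: BigDecimal,
    sourceLocationCode: String?,
    quarantineLocationCode: String?,
): Boolean =
    decision == "REJECTED" &&
        rejectedQuantity.signum() > 0 &&
        !sourceLocationCode.isNullOrBlank() &&
        !quarantineLocationCode.isNullOrBlank()
```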
    
    ## api.v1 additions
    
    New event class `org.vibeerp.api.v1.event.quality.InspectionRecordedEvent`
    with nine fields:
    
      inspectionCode, itemCode, sourceReference, decision,
      inspectedQuantity, rejectedQuantity,
      sourceLocationCode?, quarantineLocationCode?, inspector
    
    All required fields validated in `init { }` — blank strings,
    non-positive inspected quantity, negative rejected quantity, or
    an unknown decision string all throw at publish time so a
    malformed event never hits the outbox.
    
    `aggregateType = "quality.InspectionRecord"` matches the
    `<pbc>.<aggregate>` convention.
    
    `decision` is carried as a String (not the pbc-quality
    `InspectionDecision` enum) to keep guardrail #10 honest — api.v1
    events MUST NOT leak internal PBC types. Consumers compare
    against the literal `"APPROVED"` / `"REJECTED"` strings.
    
    ## pbc-quality changes
    
    - `InspectionRecord` entity gains two nullable columns:
      `source_location_code` + `quarantine_location_code`.
    - Liquibase migration `002-quality-quarantine-locations.xml` adds
      the columns to `quality__inspection_record`.
    - `InspectionRecordService` now injects `EventBus` and publishes
      `InspectionRecordedEvent` inside the `@Transactional record()`
      method. The publish carries all nine fields including the
      optional locations.
- `RecordInspectionCommand` + `RecordInspectionRequest` gain the
  two optional location fields; defaulting to null means every
  existing caller keeps working unchanged.
    - `InspectionRecordResponse` exposes both new columns on the HTTP
      wire.
    
    ## pbc-warehousing changes
    
    - New `QualityRejectionQuarantineSubscriber` @Component.
    - Subscribes in `@PostConstruct` via the typed-class
      `EventBus.subscribe(InspectionRecordedEvent::class.java, ...)`
      overload — same pattern every other PBC subscriber uses
      (SalesOrderConfirmedSubscriber, WorkOrderRequestedSubscriber,
      the pbc-finance order subscribers).
    - `handle(event)` is `internal` so the unit test can drive it
      directly without going through the bus.
    - Activation contract (all must be true): decision=REJECTED,
      rejectedQuantity>0, sourceLocationCode non-blank,
      quarantineLocationCode non-blank. Any missing condition → no-op.
    - Idempotency: derived transfer code is `TR-QC-<inspectionCode>`.
      Before creating, the subscriber checks
      `stockTransfers.findByCode(derivedCode)` — if anything exists
      (DRAFT, CONFIRMED, or CANCELLED), the subscriber skips. A
      replay of the same event under at-least-once delivery is safe.
    - On success: creates a DRAFT StockTransfer with one line moving
      `rejectedQuantity` of `itemCode` from source to quarantine,
      then calls `confirm(id)` which writes the atomic TRANSFER_OUT
      + TRANSFER_IN ledger pair.
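The idempotency guard can be sketched as (names illustrative; the real subscriber checks `stockTransfers.findByCode` and then creates + confirms the transfer):

```kotlin
// Sketch of the replay-safe guard: derive the transfer code from the
// inspection code, skip if anything with that code already exists.
fun derivedTransferCode(inspectionCode: String): String = "TR-QC-$inspectionCode"

fun handleSketch(inspectionCode: String, existingCodes: Set<String>, createAndConfirm: (String) -> Unit) {
    val code = derivedTransferCode(inspectionCode)
    if (code in existingCodes) return      // replay under at-least-once delivery: safe no-op
    createAndConfirm(code)
}
```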
    
    ## Smoke test (fresh DB)
    
    ```
    # seed
    POST /api/v1/catalog/items       {code: WIDGET-1, baseUomCode: ea}
    POST /api/v1/inventory/locations {code: WH-MAIN, type: WAREHOUSE}
    POST /api/v1/inventory/locations {code: WH-QUARANTINE, type: WAREHOUSE}
    POST /api/v1/inventory/movements {itemCode: WIDGET-1, locationId: <WH-MAIN>, delta: 100, reason: RECEIPT}
    
    # the cross-PBC reaction
    POST /api/v1/quality/inspections
         {code: QC-R-001,
          itemCode: WIDGET-1,
          sourceReference: "WO:WO-001",
          decision: REJECTED,
          inspectedQuantity: 50,
          rejectedQuantity: 7,
          reason: "surface scratches",
          sourceLocationCode: "WH-MAIN",
          quarantineLocationCode: "WH-QUARANTINE"}
      → 201 {..., sourceLocationCode: "WH-MAIN", quarantineLocationCode: "WH-QUARANTINE"}
    
    # automatically created + confirmed
    GET /api/v1/warehousing/stock-transfers/by-code/TR-QC-QC-R-001
      → 200 {
          "code": "TR-QC-QC-R-001",
          "fromLocationCode": "WH-MAIN",
          "toLocationCode": "WH-QUARANTINE",
          "status": "CONFIRMED",
          "note": "auto-quarantine from rejected inspection QC-R-001",
          "lines": [{"itemCode": "WIDGET-1", "quantity": 7.0}]
        }
    
    # ledger state (raw SQL)
    SELECT l.code, b.item_code, b.quantity
      FROM inventory__stock_balance b
      JOIN inventory__location l ON l.id = b.location_id
      WHERE b.item_code = 'WIDGET-1';
      WH-MAIN       | WIDGET-1 | 93.0000   ← was 100, now 93
      WH-QUARANTINE | WIDGET-1 |  7.0000   ← 7 rejected units here
    
SELECT m.item_code, l.code AS location, m.reason, m.delta, m.reference
  FROM inventory__stock_movement m JOIN inventory__location l ON l.id = m.location_id
      WHERE m.reference = 'TR:TR-QC-QC-R-001';
      WIDGET-1 | WH-MAIN       | TRANSFER_OUT | -7 | TR:TR-QC-QC-R-001
      WIDGET-1 | WH-QUARANTINE | TRANSFER_IN  |  7 | TR:TR-QC-QC-R-001
    
    # negatives
    POST /api/v1/quality/inspections {decision: APPROVED, ...+locations}
      → 201, but GET /TR-QC-QC-A-001 → 404 (no transfer, correct opt-out)
    
    POST /api/v1/quality/inspections {decision: REJECTED, rejected: 2, no locations}
      → 201, but GET /TR-QC-QC-R-002 → 404 (opt-in honored)
    
    # handler log
    [warehousing] auto-quarantining 7 units of 'WIDGET-1'
    from 'WH-MAIN' to 'WH-QUARANTINE'
    (inspection=QC-R-001, transfer=TR-QC-QC-R-001)
    ```
    
    Everything happens in ONE transaction because EventBusImpl uses
    Propagation.MANDATORY with synchronous delivery: the inspection
    insert, the event publish, the StockTransfer create, the
    confirm, and the two ledger rows all commit or roll back
    together.
    
    ## Tests
    
    - Updated `InspectionRecordServiceTest`: the service now takes an
      `EventBus` constructor argument. Every existing test got a
      relaxed `EventBus` mock; the one new test
      `record publishes InspectionRecordedEvent on success` captures
      the published event and asserts every field including the
      location codes.
    - 6 new unit tests in `QualityRejectionQuarantineSubscriberTest`:
      * subscribe registers one listener for InspectionRecordedEvent
      * handle creates and confirms a quarantine transfer on a
        fully-populated REJECTED event (asserts derived code,
        locations, item code, quantity)
      * handle is a no-op when decision is APPROVED
      * handle is a no-op when sourceLocationCode is missing
      * handle is a no-op when quarantineLocationCode is missing
      * handle skips when a transfer with the derived code already
        exists (idempotent replay)
    - Total framework unit tests: 334 (was 327), all green.
    
    ## What this unblocks
    
    - **Quality KPI dashboards** — any PBC can now subscribe to
      `InspectionRecordedEvent` without coupling to pbc-quality.
    - **pbc-finance quality-cost tracking** — when GL growth lands, a
      finance subscriber can debit a "quality variance" account on
      every REJECTED inspection.
    - **REF.2 / customer plug-in workflows** — the printing-shop
      plug-in can emit an `InspectionRecordedEvent` of its own from
      a BPMN service task (via `context.eventBus.publish`) and drive
      the same quarantine chain without touching pbc-quality's HTTP
      surface.
    
    ## Non-goals (parking lot)
    
    - Partial-batch quarantine decisions (moving some units to
      quarantine, some back to general stock, some to scrap). v1
      collapses the decision into a single "reject N units" action
      and assumes the operator splits batches manually before
      inspecting. A richer ResolutionPlan aggregate is a future
      chunk if real workflows need it.
    - Quality metrics storage. The event is audited by the existing
      wildcard event subscriber but no PBC rolls it up into a KPI
      table. Belongs to a future reporting feature.
    - Auto-approval chains. An APPROVED inspection could trigger a
      "release-from-hold" transfer (opposite direction) in a
      future-expanded subscriber, but v1 keeps the reaction
      REJECTED-only to match the "quarantine on fail" use case.
  • …alidates locations at create
    
Follow-up to the pbc-warehousing chunk. Plugs a real gap noticed in
the smoke test: an unknown fromLocationCode or toLocationCode on a
StockTransfer was silently accepted at create() and only surfaced as
a confirm()-time rollback, which made for confusing UX — the operator
mistypes a location code, hits "create", then hits "confirm" minutes
later and sees "location GHOST-SRC is not in the inventory directory".
    
    ## api.v1 growth
    
    New cross-PBC method on `InventoryApi`:
    
        fun findLocationByCode(locationCode: String): LocationRef?
    
    Parallel shape to `CatalogApi.findItemByCode` — a lookup-by-code
    returning a lightweight ref or null, safe for any cross-PBC consumer
    to inject. The returned `LocationRef` data class carries id, code,
    name, type (as a String, not the inventory-internal LocationType
    enum — rationale in the KDoc), and active flag. Fields that are
    NOT part of the cross-PBC contract (audit columns, ext JSONB, the
    raw JPA entity) stay inside pbc-inventory.
    
    api.v1 additive change within the v1 line — no breaking rename, no
    signature churn on existing methods. The interface adds a new
    abstract method, which IS technically a source-breaking change for
    any in-tree implementation, but the only impl is
    pbc-inventory/InventoryApiAdapter which is updated in the same
    commit. No external plug-in implements InventoryApi (by design;
    plug-ins inject it, they don't provide it).
    
    ## Adapter implementation
    
    `InventoryApiAdapter.findLocationByCode` resolves the location via
    the existing `LocationJpaRepository.findByCode`, which is exactly
    what `recordMovement` already uses. A new private extension
    `Location.toRef()` builds the api.v1 DTO. Zero new SQL; zero new
    repository methods.
    
    ## pbc-warehousing wiring
    
    `StockTransferService.create` now calls the facade twice — once for
    the source location, once for the destination — BEFORE validating
    lines. The four-step ordering is: code uniqueness → from != to →
    non-empty lines → both locations exist and are active → per-line
    validation. Unknown locations produce a 400 with a clear message;
    deactivated locations produce a 400 distinguishing "doesn't exist"
    from "exists but can't be used":
    
        "from location code 'GHOST-SRC' is not in the inventory directory"
        "from location 'WH-CLOSED' is deactivated and cannot be transfer source"
    
    The confirm() path is unchanged. Locations may still vanish between
    create and confirm (though the likelihood is low for a normal
    workflow), and `recordMovement` will still raise its own error in
    that case — belt and suspenders.
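The create()-time ordering can be sketched as below. The lambdas stand in for the repository and the `InventoryApi` facade; `LocationRef` here is a simplified stand-in for the api.v1 type, and the error strings are paraphrased, not the exact production messages.

```kotlin
// Sketch of the create()-time validation ordering:
// uniqueness -> from != to -> non-empty lines -> locations exist + active.
data class LocationRef(val code: String, val active: Boolean)

fun validateCreate(
    code: String,
    from: String,
    to: String,
    lineCount: Int,
    existsByCode: (String) -> Boolean,          // stand-in for the repository
    findLocation: (String) -> LocationRef?,     // stand-in for InventoryApi.findLocationByCode
) {
    require(!existsByCode(code)) { "transfer code '$code' already exists" }   // 1. code uniqueness
    require(from != to) { "from and to locations must differ" }               // 2. from != to
    require(lineCount > 0) { "at least one line required" }                   // 3. non-empty lines
    for ((label, loc) in listOf("from" to from, "to" to to)) {                // 4. exist and active
        val ref = findLocation(loc)
            ?: throw IllegalArgumentException("$label location code '$loc' is not in the inventory directory")
        require(ref.active) { "$label location '$loc' is deactivated" }
    }
    // 5. per-line validation would follow here
}
```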
    
    ## Smoke test
    
    ```
    POST /api/v1/inventory/locations {code: WH-GOOD, type: WAREHOUSE}
    POST /api/v1/catalog/items       {code: ITEM-1, baseUomCode: ea}
    
    POST /api/v1/warehousing/stock-transfers
         {code: TR-bad, fromLocationCode: GHOST-SRC, toLocationCode: WH-GOOD,
          lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
      → 400 "from location code 'GHOST-SRC' is not in the inventory directory"
         (before this commit: 201 DRAFT, then 400 at confirm)
    
    POST /api/v1/warehousing/stock-transfers
         {code: TR-bad2, fromLocationCode: WH-GOOD, toLocationCode: GHOST-DST,
          lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
      → 400 "to location code 'GHOST-DST' is not in the inventory directory"
    
    POST /api/v1/warehousing/stock-transfers
         {code: TR-ok, fromLocationCode: WH-GOOD, toLocationCode: WH-OTHER,
          lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
      → 201 DRAFT   ← happy path still works
    ```
    
    ## Tests
    
    - Updated the 3 existing `StockTransferServiceTest` tests that
      created real transfers to stub `inventory.findLocationByCode` for
      both WH-A and WH-B via a new `stubLocation()` helper.
    - 3 new tests:
      * `create rejects unknown from location via InventoryApi`
      * `create rejects unknown to location via InventoryApi`
      * `create rejects a deactivated from location`
    - Total framework unit tests: 300 (was 297), all green.
    
    ## Why this isn't a breaking api.v1 change
    
    InventoryApi is an interface consumed by other PBCs and by plug-ins,
    implemented ONLY by pbc-inventory. Adding a new method to an
    interface IS a source-breaking change for any implementer — but
    the framework's dependency rules mean no external code implements
    this interface. Plug-ins and other PBCs CONSUME it via dependency
    injection; the only production impl is InventoryApiAdapter, updated
    in the same commit. Binary compatibility for consumers is
    preserved: existing call sites compile and run unchanged because
    only the interface grew, not its existing methods.
    
    If/when a third party implements InventoryApi (e.g. a test double
    outside the framework, or a custom backend plug-in), this would be
    a semver-major-worthy addition. For the in-tree framework, it's
    additive-within-a-major.
  • Closes the core PBC row of the v1.0 target. Ships pbc-quality as a
    lean v1 recording-only aggregate: any caller that performs a quality
    inspection (inbound goods, in-process work order output, outbound
    shipment) appends an immutable InspectionRecord with a decision
    (APPROVED/REJECTED), inspected/rejected quantities, a free-form
    source reference, and the inspector's principal id.
    
    ## Deliberately narrow v1 scope
    
    pbc-quality does NOT ship:
      - cross-PBC writes (no "rejected stock gets auto-quarantined" rule)
      - event publishing (no InspectionRecordedEvent in api.v1 yet)
      - inspection plans or templates (no "item X requires checks Y, Z")
      - multi-check records (one decision per row; multi-step
        inspections become multiple records)
    
    The rationale is the "follow the consumer" discipline: every seam
    the framework adds has to be driven by a real consumer. With no PBC
    yet subscribing to inspection events or calling into pbc-quality,
    speculatively building those capabilities would be guessing the
    shape. Future chunks that actually need them (e.g. pbc-warehousing
    auto-quarantine on rejection, pbc-production WorkOrder scrap from
    rejected QC) will grow the seam into the shape they need.
    
    Even at this narrow scope pbc-quality delivers real value: a
    queryable, append-only, permission-gated record of every QC
    decision in the system, filterable by source reference or item
    code, and linked to the catalog via CatalogApi.
    
    ## Module contents
    
    - `build.gradle.kts` — new Gradle subproject following the existing
      recipe. api-v1 + platform/persistence + platform/security only;
      no cross-pbc deps (guardrail #9 stays honest).
    - `InspectionRecord` entity — code, item_code, source_reference,
      decision (enum), inspected_quantity, rejected_quantity, inspector
      (principal id as String, same convention as created_by), reason,
      inspected_at. Owns table `quality__inspection_record`. No `ext`
      column in v1 — the aggregate is simple enough that adding Tier 1
      customization now would be speculation; it can be added in one
      edit when a customer asks for it.
    - `InspectionDecision` enum — APPROVED, REJECTED. Deliberately
      two-valued; see the entity KDoc for why "conditional accept" is
      rejected as a shape.
    - `InspectionRecordJpaRepository` — existsByCode, findByCode,
      findBySourceReference, findByItemCode.
    - `InspectionRecordService` — ONE write verb `record`. Inspections
      are immutable; revising means recording a new one with a new code.
      Validates:
        * code is unique
        * source reference non-blank
        * inspected quantity > 0
        * rejected quantity >= 0
        * rejected <= inspected
        * APPROVED ↔ rejected = 0, REJECTED ↔ rejected > 0
        * itemCode resolves via CatalogApi
      Inspector is read from `PrincipalContext.currentOrSystem()` at
      call time so a real HTTP user records their own inspections and
      a background job recording a batch uses a named system principal.
    - `InspectionRecordController` — `/api/v1/quality/inspections`
      with GET list (supports `?sourceReference=` and `?itemCode=`
      query params), GET by id, GET by-code, POST record. Every
      endpoint @RequirePermission-gated.
    - `META-INF/vibe-erp/metadata/quality.yml` — 1 entity, 2
      permissions (`quality.inspection.read`, `quality.inspection.record`),
      1 menu.
    - `distribution/.../db/changelog/pbc-quality/001-quality-init.xml`
      — single table with the full audit column set plus:
        * CHECK decision IN ('APPROVED', 'REJECTED')
        * CHECK inspected_quantity > 0
        * CHECK rejected_quantity >= 0
        * CHECK rejected_quantity <= inspected_quantity
      The application enforces the biconditional (APPROVED ↔ rejected=0)
      because CHECK constraints in Postgres can't express the same
      thing ergonomically; the DB enforces the weaker "rejected is
      within bounds" so a direct INSERT can't fabricate nonsense.
    - `settings.gradle.kts`, `distribution/build.gradle.kts`,
      `master.xml` all wired.
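
    The validation list above reduces to a pure function. A hedged sketch (hypothetical names; the real service is Kotlin and also checks code uniqueness and CatalogApi resolution, which are elided here):

    ```java
    // Sketch of the record() invariants listed above. Not the real
    // InspectionRecordService; uniqueness and item-resolution checks omitted.
    import java.math.BigDecimal;

    enum InspectionDecision { APPROVED, REJECTED }

    final class InspectionValidator {
        static void validate(InspectionDecision decision,
                             BigDecimal inspected, BigDecimal rejected,
                             String sourceReference) {
            if (sourceReference == null || sourceReference.isBlank())
                throw new IllegalArgumentException("source reference must not be blank");
            if (inspected.signum() <= 0)
                throw new IllegalArgumentException("inspected quantity must be > 0");
            if (rejected.signum() < 0)
                throw new IllegalArgumentException("rejected quantity must be >= 0");
            if (rejected.compareTo(inspected) > 0)
                throw new IllegalArgumentException("rejected quantity (" + rejected
                    + ") cannot exceed inspected (" + inspected + ")");
            // the biconditional enforced in application code; the DB CHECK
            // only bounds rejected within [0, inspected]
            if (decision == InspectionDecision.APPROVED && rejected.signum() != 0)
                throw new IllegalArgumentException(
                    "APPROVED inspection must have rejected quantity = 0 (got " + rejected + ")");
            if (decision == InspectionDecision.REJECTED && rejected.signum() == 0)
                throw new IllegalArgumentException(
                    "REJECTED inspection must have rejected quantity > 0");
        }
    }
    ```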
    
    ## Smoke test (fresh DB + running app, as admin)
    
    ```
    POST /api/v1/catalog/items {code: WIDGET-1, baseUomCode: ea}
      → 201
    
    POST /api/v1/quality/inspections
         {code: QC-2026-001, itemCode: WIDGET-1, sourceReference: "WO:WO-001",
          decision: APPROVED, inspectedQuantity: 100, rejectedQuantity: 0}
      → 201 {inspector: <admin principal uuid>, inspectedAt: "..."}
    
    POST /api/v1/quality/inspections
         {code: QC-2026-002, itemCode: WIDGET-1, sourceReference: "WO:WO-002",
          decision: REJECTED, inspectedQuantity: 50, rejectedQuantity: 7,
          reason: "surface scratches detected on 7 units"}
      → 201
    
    GET  /api/v1/quality/inspections?sourceReference=WO:WO-001
      → [{code: QC-2026-001, ...}]
    GET  /api/v1/quality/inspections?itemCode=WIDGET-1
      → [APPROVED, REJECTED]   ← filter works, 2 records
    
    # Negative: APPROVED with positive rejected
    POST /api/v1/quality/inspections
         {decision: APPROVED, rejectedQuantity: 3, ...}
      → 400 "APPROVED inspection must have rejected quantity = 0 (got 3);
             record a REJECTED inspection instead"
    
    # Negative: rejected > inspected
    POST /api/v1/quality/inspections
         {decision: REJECTED, inspectedQuantity: 5, rejectedQuantity: 10, ...}
      → 400 "rejected quantity (10) cannot exceed inspected (5)"
    
    GET  /api/v1/_meta/metadata
      → permissions include ["quality.inspection.read",
                              "quality.inspection.record"]
    ```
    
    The `inspector` field on the created records contains the admin
    user's principal UUID exactly as written by the
    `PrincipalContextFilter` — proving the audit trail end-to-end.
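
    A minimal sketch of the `currentOrSystem()` fallback, assuming a thread-bound principal (illustrative only; the real PrincipalContext lives in platform/security and is populated by the filter):

    ```java
    // Hypothetical shape of the fallback: an HTTP request records the bound
    // user's id, a background job with nothing bound falls back to "system".
    final class MiniPrincipalContext {
        private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
        static final String SYSTEM = "system";

        static void bind(String principalId) { CURRENT.set(principalId); }
        static void clear() { CURRENT.remove(); }

        static String currentOrSystem() {
            String bound = CURRENT.get();
            return bound != null ? bound : SYSTEM;
        }
    }
    ```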
    
    ## Tests
    
    - 9 new unit tests in `InspectionRecordServiceTest`:
      * `record persists an APPROVED inspection with rejected=0`
      * `record persists a REJECTED inspection with positive rejected`
      * `inspector defaults to system when no principal is bound` —
        validates the `PrincipalContext.currentOrSystem()` fallback
      * `record rejects duplicate code`
      * `record rejects non-positive inspected quantity`
      * `record rejects rejected greater than inspected`
      * `APPROVED with positive rejected is rejected`
      * `REJECTED with zero rejected is rejected`
      * `record rejects unknown items via CatalogApi`
    - Total framework unit tests: 297 (was 288), all green.
    
    ## Framework state after this commit
    
    - **20 → 21 Gradle subprojects**
    - **10 of 10 core PBCs live** (pbc-identity, pbc-catalog, pbc-partners,
      pbc-inventory, pbc-warehousing, pbc-orders-sales, pbc-orders-purchase,
      pbc-finance, pbc-production, pbc-quality). The P5.x row of the
      implementation plan is complete at minimal v1 scope.
    - The v1.0 acceptance bar's "core PBC coverage" line is met. Remaining
      v1.0 work is cross-cutting (reports, forms, scheduler, web SPA)
      plus the richer per-PBC v2/v3 scopes.
    
    ## What this unblocks
    
    - **Cross-PBC quality integration** — any PBC that needs to react
      to a quality decision can subscribe when pbc-quality grows its
      event. pbc-warehousing quarantine on rejection is the obvious
      first consumer.
    - **The full buy-make-sell BPMN scenario** — now every step has a
      home: sales → procurement → warehousing → production → quality →
      finance are all live. The big reference-plug-in end-to-end
      flow is unblocked at the PBC level.
    - **Completes the P5.x row** of the implementation plan. Remaining
      v1.0 work is cross-cutting platform units (P1.8 reports, P1.9
      files, P1.10 jobs, P2.2/P2.3 designer/forms) plus the web SPA.
    zichun authored
  • Ninth core PBC. Ships the first-class orchestration aggregate for
    moving stock between locations: a header + lines that represent
    operator intent, and a confirm() verb that atomically posts the
    matching TRANSFER_OUT / TRANSFER_IN ledger pair per line via the
    existing InventoryApi.recordMovement facade.
    
    Takes the framework's core-PBC count to 9 of 10 (only pbc-quality
    remains in the P5.x row).
    
    ## The shape
    
    pbc-warehousing sits above pbc-inventory in the dependency graph:
    it doesn't replace the flat movement ledger, it orchestrates
    multi-row ledger writes with a business-level document on top. A
    DRAFT `warehousing__stock_transfer` row is queued intent (pickers
    haven't started yet); a CONFIRMED row reflects movements that have
    already posted to the `inventory__stock_movement` ledger. Each
    confirmed line becomes two ledger rows:
    
      TRANSFER_OUT(itemCode, fromLocationCode, -quantity, ref="TR:<code>")
      TRANSFER_IN (itemCode, toLocationCode,    quantity, ref="TR:<code>")
    
    All rows of one confirm call run inside ONE @Transactional method,
    so a failure anywhere — unknown item, unknown location, balance
    would go below zero — rolls back both halves of EVERY line. There
    is no half-confirmed transfer.
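
    The per-line expansion can be sketched as a pure function (invented names; the real path posts each movement through InventoryApi.recordMovement inside the one @Transactional confirm):

    ```java
    // Illustrative expansion of one confirm() call: each line yields a
    // TRANSFER_OUT row then a TRANSFER_IN row, all tagged "TR:<code>".
    // This is not the real InventoryApi surface.
    import java.util.ArrayList;
    import java.util.List;

    record Movement(String itemCode, String locationCode, String reason,
                    double delta, String reference) {}

    final class TransferExpansion {
        static List<Movement> expand(String transferCode, String from, String to,
                                     List<String> items, List<Double> quantities) {
            String ref = "TR:" + transferCode;
            List<Movement> rows = new ArrayList<>();
            for (int i = 0; i < items.size(); i++) {
                double qty = quantities.get(i);
                // OUT first, so a balance-below-zero failure aborts the
                // transaction before the destination is ever touched
                rows.add(new Movement(items.get(i), from, "TRANSFER_OUT", -qty, ref));
                rows.add(new Movement(items.get(i), to, "TRANSFER_IN", qty, ref));
            }
            return rows;
        }
    }
    ```

    Expanding a two-line transfer yields the four same-reference ledger rows the smoke test verifies.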
    
    ## Module contents
    
    - `build.gradle.kts` — new Gradle subproject, api-v1 + platform/*
      dependencies only. No cross-PBC dependency (guardrail #9 stays
      honest; CatalogApi + InventoryApi both come in via api.v1.ext).
    - `StockTransfer` entity — header with code, from/to location
      codes, status (DRAFT/CONFIRMED/CANCELLED), transfer_date, note,
      OneToMany<StockTransferLine>. Table name
      `warehousing__stock_transfer`.
    - `StockTransferLine` entity — lineNo, itemCode, quantity.
      `transfer_id → warehousing__stock_transfer(id) ON DELETE CASCADE`,
      unique `(transfer_id, line_no)`.
    - `StockTransferJpaRepository` — existsByCode + findByCode.
    - `StockTransferService` — create / confirm / cancel + three read
      methods. @Transactional service-level; all state transitions run
      through @Transactional methods so the event-bus MANDATORY
      propagation (if/when a pbc-warehousing event is added later) has
      a transaction to join. Business invariants:
        * code is unique (existsByCode short-circuit)
        * from != to (enforced in code AND in the Liquibase CHECK)
        * at least one line
        * each line: positive line_no, unique per transfer, positive
          quantity, itemCode must resolve via CatalogApi.findItemByCode
        * confirm requires DRAFT; writes OUT-first-per-line so a
          balance-goes-negative error aborts before touching the
          destination location
        * cancel requires DRAFT; CONFIRMED transfers are terminal
          (reverse by creating a NEW transfer in the opposite direction,
          matching the document-discipline rule every other PBC uses)
    - `StockTransferController` — `/api/v1/warehousing/stock-transfers`
      with GET list, GET by id, GET by-code, POST create, POST
      {id}/confirm, POST {id}/cancel. Every endpoint
      @RequirePermission-gated using the keys declared in the metadata
      YAML. Matches the shape of pbc-orders-sales, pbc-orders-purchase,
      pbc-production.
    - DTOs use the established pattern — jakarta.validation on the
      request, response mapping via extension functions.
    - `META-INF/vibe-erp/metadata/warehousing.yml` — 1 entity, 4
      permissions, 1 menu. Loaded by MetadataLoader at boot, visible
      via `GET /api/v1/_meta/metadata`.
    - `distribution/src/main/resources/db/changelog/pbc-warehousing/001-warehousing-init.xml`
      — creates both tables with the full audit column set, state
      CHECK constraint, locations-distinct CHECK, unique
      (transfer_id, line_no) index, quantity > 0 CHECK, item_code
      index for cross-PBC grep.
    - `settings.gradle.kts`, `distribution/build.gradle.kts`,
      `master.xml` all wired.
    
    ## Smoke test (fresh DB + running app)
    
    ```
    # seed
    POST /api/v1/catalog/items   {code: PAPER-A4, baseUomCode: sheet}
    POST /api/v1/catalog/items   {code: PAPER-A3, baseUomCode: sheet}
    POST /api/v1/inventory/locations {code: WH-MAIN, type: WAREHOUSE}
    POST /api/v1/inventory/locations {code: WH-SHOP, type: WAREHOUSE}
    POST /api/v1/inventory/movements {itemCode: PAPER-A4, locationId: <WH-MAIN>, delta: 100, reason: RECEIPT}
    POST /api/v1/inventory/movements {itemCode: PAPER-A3, locationId: <WH-MAIN>, delta: 50,  reason: RECEIPT}
    
    # exercise the new PBC
    POST /api/v1/warehousing/stock-transfers
         {code: TR-001, fromLocationCode: WH-MAIN, toLocationCode: WH-SHOP,
          lines: [{lineNo: 1, itemCode: PAPER-A4, quantity: 30},
                  {lineNo: 2, itemCode: PAPER-A3, quantity: 10}]}
      → 201 DRAFT
    POST /api/v1/warehousing/stock-transfers/<id>/confirm
      → 200 CONFIRMED
    
    # verify balances via the raw DB (the HTTP stock-balance endpoint
    # has a separate unrelated bug returning 500; the ledger state is
    # what this commit is proving)
    SELECT item_code, location_id, quantity FROM inventory__stock_balance;
      PAPER-A4 / WH-MAIN →  70   ← debited 30
      PAPER-A4 / WH-SHOP →  30   ← credited 30
      PAPER-A3 / WH-MAIN →  40   ← debited 10
      PAPER-A3 / WH-SHOP →  10   ← credited 10
    
    SELECT item_code, location_id, reason, delta, reference
      FROM inventory__stock_movement ORDER BY occurred_at;
      PAPER-A4 / WH-MAIN / TRANSFER_OUT / -30 / TR:TR-001
      PAPER-A4 / WH-SHOP / TRANSFER_IN  /  30 / TR:TR-001
      PAPER-A3 / WH-MAIN / TRANSFER_OUT / -10 / TR:TR-001
      PAPER-A3 / WH-SHOP / TRANSFER_IN  /  10 / TR:TR-001
    ```
    
    Four rows, all tagged `TR:TR-001`. A grep of the ledger for that
    reference attributes both halves of each line to the single source
    transfer document.
    
    ## Transactional rollback test (in the same smoke run)
    
    ```
    # ask for more than exists
    POST /api/v1/warehousing/stock-transfers
         {code: TR-002, from: WH-MAIN, to: WH-SHOP,
          lines: [{lineNo: 1, itemCode: PAPER-A4, quantity: 1000}]}
      → 201 DRAFT
    POST /api/v1/warehousing/stock-transfers/<id>/confirm
      → 400 "stock movement would push balance for 'PAPER-A4' at
             location <WH-MAIN> below zero (current=70.0000, delta=-1000.0000)"
    
    # assert TR-002 is still DRAFT
    GET /api/v1/warehousing/stock-transfers/<id> → status: DRAFT  ← NOT flipped to CONFIRMED
    
    # assert the ledger still has exactly 6 rows (no partial writes)
    SELECT count(*) FROM inventory__stock_movement; → 6
    ```
    
    The failed confirm left no residue: status stayed DRAFT, and the
    ledger count is unchanged at 6 (the 2 RECEIPT seeds + the 4
    TRANSFER_OUT/IN from TR-001). Propagation.REQUIRED + Spring's
    default rollback-on-unchecked-exception semantics do exactly what
    the KDoc promises.
    
    ## State-machine guards
    
    ```
    POST /api/v1/warehousing/stock-transfers/<confirmed-id>/confirm
      → 400 "cannot confirm stock transfer TR-001 in status CONFIRMED;
             only DRAFT can be confirmed"
    
    POST /api/v1/warehousing/stock-transfers/<confirmed-id>/cancel
      → 400 "cannot cancel stock transfer TR-001 in status CONFIRMED;
             only DRAFT can be cancelled — reverse a confirmed transfer
             by creating a new one in the other direction"
    ```
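
    The guards behave like a plain DRAFT-only status check. A hypothetical Java sketch (the real checks live in StockTransferService, written in Kotlin):

    ```java
    // Illustrative state machine: only DRAFT may be confirmed or cancelled;
    // CONFIRMED is terminal and must be reversed by a new transfer.
    enum TransferStatus { DRAFT, CONFIRMED, CANCELLED }

    final class TransferStateMachine {
        static TransferStatus confirm(String code, TransferStatus current) {
            if (current != TransferStatus.DRAFT)
                throw new IllegalStateException("cannot confirm stock transfer " + code
                    + " in status " + current + "; only DRAFT can be confirmed");
            return TransferStatus.CONFIRMED;
        }

        static TransferStatus cancel(String code, TransferStatus current) {
            if (current != TransferStatus.DRAFT)
                throw new IllegalStateException("cannot cancel stock transfer " + code
                    + " in status " + current + "; only DRAFT can be cancelled");
            return TransferStatus.CANCELLED;
        }
    }
    ```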
    
    ## Tests
    
    - 10 new unit tests in `StockTransferServiceTest`:
      * `create persists a DRAFT transfer when everything validates`
      * `create rejects duplicate code`
      * `create rejects same from and to location`
      * `create rejects an empty line list`
      * `create rejects duplicate line numbers`
      * `create rejects non-positive quantities`
      * `create rejects unknown items via CatalogApi`
      * `confirm writes an atomic TRANSFER_OUT + TRANSFER_IN pair per line`
        — uses `verifyOrder` to assert OUT-first-per-line dispatch order
      * `confirm refuses a non-DRAFT transfer`
      * `cancel refuses a CONFIRMED transfer`
      * `cancel flips a DRAFT transfer to CANCELLED`
    - Total framework unit tests: 288 (was 278), all green.
    
    ## What this unblocks
    
    - **Real warehouse workflows** — confirm a transfer from a picker
      UI (R1 is pending), driven by a BPMN that hands the confirm to a
      TaskHandler once the physical move is complete.
    - **pbc-quality (P5.8, last remaining core PBC)** — inspection
      plans + results + holds. Holds would typically quarantine stock
      by moving it to a QUARANTINE location via a stock transfer,
      which is the natural consumer for this aggregate.
    - **Stocktakes (physical inventory reconciliation)** — future
      pbc-warehousing verb that compares counted vs recorded and posts
      the differences as ADJUSTMENT rows; shares the same
      `recordMovement` primitive.
    zichun authored
  • …duction auto-creates WorkOrder
    
    First end-to-end cross-PBC workflow driven entirely from a customer
    plug-in through api.v1 surfaces. A printing-shop BPMN kicks off a
    TaskHandler that publishes a generic api.v1 event; pbc-production
    reacts by creating a DRAFT WorkOrder. The plug-in has zero
    compile-time coupling to pbc-production, and pbc-production has zero
    knowledge the plug-in exists.
    
    ## Why an event, not a facade
    
    Two options were on the table for "how does a plug-in ask
    pbc-production to create a WorkOrder":
    
      (a) add a new cross-PBC facade `api.v1.ext.production.ProductionApi`
          with a `createWorkOrder(command)` method
      (b) add a generic `WorkOrderRequestedEvent` in `api.v1.event.production`
          that anyone can publish — this commit
    
    Facade pattern (a) is what InventoryApi.recordMovement and
    CatalogApi.findItemByCode use: synchronous, in-transaction,
    caller-blocks-on-completion. Event pattern (b) is what
    SalesOrderConfirmedEvent → SalesOrderConfirmedSubscriber uses:
    asynchronous over the bus, still in-transaction (the bus uses
    `Propagation.MANDATORY` with synchronous delivery so a failure
    rolls everything back), but the caller doesn't need a typed result.
    
    Option (b) wins for plug-in → pbc-production:
    
    - Plug-in compile-time surface stays identical: plug-ins already
      import `api.v1.event.*` to publish. No new api.v1.ext package.
      Zero new plug-in dependency.
    - The outbox gets the row for free — a crash between publish and
      delivery replays cleanly from `platform__event_outbox`.
    - A second customer plug-in shipping a different flow that ALSO
      wants to auto-spawn work orders doesn't need a second facade, just
      publishes the same event. pbc-scheduling (future) can subscribe
      to the same channel without duplicating code.
    
    The synchronous facade pattern stays the right tool for cross-PBC
    operations the caller needs to observe (read-throughs, inventory
    debits that must block the current transaction). Creating a DRAFT
    work order is a fire-and-trust operation — the event shape fits.
    
    ## What landed
    
    ### api.v1 — WorkOrderRequestedEvent
    
    New event class `org.vibeerp.api.v1.event.production.WorkOrderRequestedEvent`
    with four required fields:
      - `code`: desired work-order code (must be unique globally;
        convention is to bake the source reference into it so duplicate
        detection is trivial, e.g. `WO-FROM-PRINTINGSHOP-Q-007`)
      - `outputItemCode` + `outputQuantity`: what to produce
      - `sourceReference`: opaque free-form pointer used in logs and
        the outbox audit trail. Example values:
        `plugin:printing-shop:quote:Q-007`,
        `pbc-orders-sales:SO-2026-001:L2`
    
    The class is a `DomainEvent` (not a `WorkOrderEvent` subclass — the
    existing `WorkOrderEvent` sealed interface is for LIFECYCLE events
    published BY pbc-production, not for inbound requests). `init`
    validators reject blank strings and non-positive quantities so a
    malformed event fails fast at publish time rather than at the
    subscriber.
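
    A hedged sketch of that fail-fast shape, using a Java record with a compact constructor in place of the real Kotlin class with init validators:

    ```java
    // Illustrative event with constructor validation: a malformed event
    // fails at publish time, not at the subscriber. Field names follow the
    // description above; the real class lives in api.v1.event.production.
    record MiniWorkOrderRequestedEvent(String code, String outputItemCode,
                                       double outputQuantity, String sourceReference) {
        MiniWorkOrderRequestedEvent {
            if (code == null || code.isBlank())
                throw new IllegalArgumentException("code must not be blank");
            if (outputItemCode == null || outputItemCode.isBlank())
                throw new IllegalArgumentException("outputItemCode must not be blank");
            if (outputQuantity <= 0)
                throw new IllegalArgumentException("outputQuantity must be positive");
            if (sourceReference == null || sourceReference.isBlank())
                throw new IllegalArgumentException("sourceReference must not be blank");
        }
    }
    ```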
    
    ### pbc-production — WorkOrderRequestedSubscriber
    
    New `@Component` in `pbc/pbc-production/.../event/WorkOrderRequestedSubscriber.kt`.
    Subscribes in `@PostConstruct` via the typed-class `EventBus.subscribe`
    overload (same pattern as `SalesOrderConfirmedSubscriber` + the six
    pbc-finance order subscribers). The subscriber:
    
      1. Looks up `workOrders.findByCode(event.code)` as the idempotent
         short-circuit. If a WorkOrder with that code already exists
         (outbox replay, future async bus retry, developer re-running the
         same BPMN process), the subscriber logs at DEBUG and returns.
         **Second execution of the same BPMN produces the same outbox row
         which the subscriber then skips — the database ends up with
         exactly ONE WorkOrder regardless of how many times the process
         runs.**
      2. Calls `WorkOrderService.create(CreateWorkOrderCommand(...))` with
         the event's fields. `sourceSalesOrderCode` is null because this
         is the generic path, not the SO-driven one.
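
    Step 1's idempotent short-circuit reduces to check-then-create. An illustrative in-memory sketch (the real subscriber consults the JPA repository and delegates to WorkOrderService.create):

    ```java
    // Illustrative idempotent handler: a second delivery of the same event
    // (outbox replay, retry, re-run BPMN) finds the code and skips, so the
    // store ends up with exactly one entry per code.
    import java.util.HashMap;
    import java.util.Map;

    final class WorkOrderStore {
        final Map<String, Double> byCode = new HashMap<>();

        // returns true if a work order was created, false if skipped
        boolean handle(String code, double quantity) {
            if (byCode.containsKey(code)) return false; // idempotent short-circuit
            byCode.put(code, quantity);                 // stand-in for create(...)
            return true;
        }
    }
    ```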
    
    Why this is a SECOND subscriber rather than extending
    `SalesOrderConfirmedSubscriber`: the two events serve different
    producers. `SalesOrderConfirmedEvent` is pbc-orders-sales-specific
    and requires a round-trip through `SalesOrdersApi.findByCode` to
    fetch the lines; `WorkOrderRequestedEvent` carries everything the
    subscriber needs inline. Collapsing them would mean the generic
    path inherits the SO-flow's SO-specific lookup and short-circuit
    logic that doesn't apply to it.
    
    ### reference printing-shop plug-in — CreateWorkOrderFromQuoteTaskHandler
    
    New plug-in TaskHandler in
    `reference-customer/plugin-printing-shop/.../workflow/CreateWorkOrderFromQuoteTaskHandler.kt`.
    Captures the `PluginContext` via constructor — same pattern as
    `PlateApprovalTaskHandler` landed in `7b2ab34d` — and from inside
    `execute`:
    
      1. Reads `quoteCode`, `itemCode`, `quantity` off the process variables
         (`quantity` accepts Number or String since Flowable's variable
         coercion is flexible).
      2. Derives `workOrderCode = "WO-FROM-PRINTINGSHOP-$quoteCode"` and
         `sourceReference = "plugin:printing-shop:quote:$quoteCode"`.
      3. Logs via `context.logger.info(...)` — the line is tagged
         `[plugin:printing-shop]` by the framework's `Slf4jPluginLogger`.
      4. Publishes `WorkOrderRequestedEvent` via `context.eventBus.publish(...)`.
         This is the first time a plug-in TaskHandler publishes a cross-PBC
         event from inside a workflow — proves the event-bus leg of the
         handler-context pattern works end-to-end.
      5. Writes `workOrderCode` + `workOrderRequested=true` back to the
         process variables so a downstream BPMN step or the HTTP caller
         can see the derived code.
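
    The Number-or-String coercion in step 1 might look like this (an illustrative Java sketch; the real handler is Kotlin and the raw value comes from Flowable's process-variable map):

    ```java
    // Hypothetical coercion helper: the workflow engine may hand back the
    // quantity variable as a Number or a String, so accept both.
    import java.math.BigDecimal;

    final class VariableCoercion {
        static BigDecimal quantityOf(Object raw) {
            if (raw instanceof BigDecimal b) return b;
            if (raw instanceof Number n) return new BigDecimal(n.toString());
            if (raw instanceof String s) return new BigDecimal(s.trim());
            throw new IllegalArgumentException("quantity must be a Number or String, got "
                + (raw == null ? "null" : raw.getClass().getSimpleName()));
        }
    }
    ```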
    
    The handler is registered in `PrintingShopPlugin.start(context)`
    alongside `PlateApprovalTaskHandler`:
    
        context.taskHandlers.register(PlateApprovalTaskHandler(context))
        context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context))
    
    Teardown via `unregisterAllByOwner("printing-shop")` still works
    unchanged — the scoped registrar tracks both handlers.
    
    ### reference printing-shop plug-in — quote-to-work-order.bpmn20.xml
    
    New BPMN file `processes/quote-to-work-order.bpmn20.xml` in the
    plug-in JAR. Single synchronous service task, process definition
    key `plugin-printing-shop-quote-to-work-order`, service task id
    `printing_shop.quote.create_work_order` (matches the handler key).
    Auto-deployed by the host's `PluginProcessDeployer` at plug-in
    start — the printing-shop plug-in now ships two BPMNs bundled into
    one Flowable deployment, both under category `printing-shop`.
    
    ## Smoke test (fresh DB)
    
    ```
    $ docker compose down -v && docker compose up -d db
    $ ./gradlew :distribution:bootRun &
    ...
    registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' ...
    registered TaskHandler 'printing_shop.quote.create_work_order' owner='printing-shop' ...
    [plugin:printing-shop] registered 2 TaskHandlers: printing_shop.plate.approve, printing_shop.quote.create_work_order
    PluginProcessDeployer: plug-in 'printing-shop' deployed 2 BPMN resource(s) as Flowable deploymentId='1e5c...':
      [processes/quote-to-work-order.bpmn20.xml, processes/plate-approval.bpmn20.xml]
    pbc-production subscribed to WorkOrderRequestedEvent via EventBus.subscribe (typed-class overload)
    
    # 1) seed a catalog item
    $ curl -X POST /api/v1/catalog/items
           {"code":"BOOK-HARDCOVER","name":"Hardcover book","itemType":"GOOD","baseUomCode":"ea"}
      → 201 BOOK-HARDCOVER
    
    # 2) start the plug-in's quote-to-work-order BPMN
    $ curl -X POST /api/v1/workflow/process-instances
           {"processDefinitionKey":"plugin-printing-shop-quote-to-work-order",
            "variables":{"quoteCode":"Q-007","itemCode":"BOOK-HARDCOVER","quantity":500}}
      → 201 {"ended":true,
             "variables":{"quoteCode":"Q-007",
                          "itemCode":"BOOK-HARDCOVER",
                          "quantity":500,
                          "workOrderCode":"WO-FROM-PRINTINGSHOP-Q-007",
                          "workOrderRequested":true}}
    
    Log lines observed:
      [plugin:printing-shop] quote Q-007: publishing WorkOrderRequestedEvent
         (code=WO-FROM-PRINTINGSHOP-Q-007, item=BOOK-HARDCOVER, qty=500)
      [production] WorkOrderRequestedEvent creating work order 'WO-FROM-PRINTINGSHOP-Q-007'
         for item 'BOOK-HARDCOVER' x 500 (source='plugin:printing-shop:quote:Q-007')
    
    # 3) verify the WorkOrder now exists in pbc-production
    $ curl /api/v1/production/work-orders
      → [{"id":"029c2482-...",
          "code":"WO-FROM-PRINTINGSHOP-Q-007",
          "outputItemCode":"BOOK-HARDCOVER",
          "outputQuantity":500.0,
          "status":"DRAFT",
          "sourceSalesOrderCode":null,
          "inputs":[], "ext":{}}]
    
    # 4) run the SAME BPMN a second time — verify idempotent
    $ curl -X POST /api/v1/workflow/process-instances
           {same body as above}
      → 201  (process ends, workOrderRequested=true, new event published + delivered)
    $ curl /api/v1/production/work-orders
      → count=1, still only WO-FROM-PRINTINGSHOP-Q-007
    ```
    
    Every single step runs through an api.v1 public surface. No framework
    core code knows the printing-shop plug-in exists; no plug-in code knows
    pbc-production exists. They meet on the event bus, and the outbox
    guarantees the delivery.
    
    ## Tests
    
    - 3 new tests in `pbc-production/.../WorkOrderRequestedSubscriberTest`:
      * `subscribe registers one listener for WorkOrderRequestedEvent`
      * `handle creates a work order from the event fields` — captures the
        `CreateWorkOrderCommand` and asserts every field
      * `handle short-circuits when a work order with that code already exists`
        — proves the idempotent branch
    - Total framework unit tests: 278 (was 275), all green.
    
    ## What this unblocks
    
    - **Richer multi-step BPMNs** in the plug-in that chain plate
      approval + quote → work order + production start + completion.
    - **Plug-in-owned Quote entity** — the printing-shop plug-in can now
      introduce a `plugin_printingshop__quote` table via its own Liquibase
      changelog and have its HTTP endpoint create quotes that kick off the
      quote-to-work-order workflow automatically (or on operator confirm).
    - **pbc-production routings/operations (v3)** — each operation becomes
      a BPMN step, potentially driven by plug-ins contributing custom
      steps via the same TaskHandler + event seam.
    - **Second reference plug-in** — any new customer plug-in can publish
      `WorkOrderRequestedEvent` from its own workflows without any
      framework change.
    
    ## Non-goals (parking lot)
    
    - The handler publishes but does not also read pbc-production state
      back. A future "wait for WO completion" BPMN step could subscribe
      to `WorkOrderCompletedEvent` inside a user-task + signal flow, but
      the engine's signal/correlation machinery isn't wired to
      plug-ins yet.
    - Quote entity + HTTP + real business logic. REF.1 proves the
      cross-PBC event seam; the richer quote lifecycle is a separate
      chunk that can layer on top of this.
    - Transactional rollback integration test. The synchronous bus +
      `Propagation.MANDATORY` guarantees it, but an explicit test that
      a subscriber throw rolls back both the ledger-adjacent writes and
      the Flowable process state would be worth adding with a real
      test container run.
    zichun authored
  • Two small closer items that tidy up the end of the HasExt rollout:
    
    1. inventory.yml gains a `customFields:` section with two core
       declarations for Location: `inventory_address_city` (string,
       maxLength 128) and `inventory_floor_area_sqm` (decimal 10,2).
       Completes the "every HasExt entity has at least one declared
       field" symmetry. Printing-shop plug-in already adds its own
       `printing_shop_press_id` etc. on top.
    
    2. CLAUDE.md "Repository state" section updated to reflect this
       session's milestones:
       - pbc-production v2 (IN_PROGRESS + BOM + scrap) now called out
         explicitly in the PBC list.
       - MATERIAL_ISSUE added to the buy-sell-MAKE loop description —
         the work-order completion now consumes raw materials per BOM
         line AND credits finished goods atomically.
       - New bullet: "Tier 1 customization is universal across every
         core entity with an ext column" — HasExt on Partner, Location,
         SalesOrder, PurchaseOrder, WorkOrder, Item; every service uses
         applyTo/parseExt helpers, zero duplication.
       - New bullet: "Clean Core extensibility is executable" — the
         reference printing-shop plug-in's metadata YAML ships
         customFields on Partner/Item/SalesOrder/WorkOrder and the
         MetadataLoader merges them with core declarations at load
         time. Executable grade-A extension under the A/B/C/D safety
         scale.
       - Printing-shop plug-in description updated to note that its
         metadata YAML now carries custom fields on core entities, not
         just its own entities.
    
    Smoke verified end-to-end against real Postgres with the plug-in
    staged:
      - GET /_meta/metadata/custom-fields/Location returns 2 core
        fields.
      - POST /inventory/locations with `{inventory_address_city:
        "Shenzhen", inventory_floor_area_sqm: "1250.50"}` → 201,
        canonical form persisted, ext round-trips.
      - POST with `inventory_floor_area_sqm: "123456789012345.678"` →
        400 "ext.inventory_floor_area_sqm: decimal scale 3 exceeds
        declared scale 2" — the validator's precision/scale rules fire
        exactly as designed.
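
    The scale rule that produced that 400 can be sketched with BigDecimal (a hypothetical helper; the real precision/scale logic sits in ExtJsonValidator):

    ```java
    // Illustrative decimal(precision, scale) check: reject values whose
    // scale or integer-digit count exceeds the declared field shape.
    import java.math.BigDecimal;

    final class DecimalFieldCheck {
        static void check(String fieldName, String value, int precision, int scale) {
            BigDecimal d = new BigDecimal(value);
            if (d.scale() > scale)
                throw new IllegalArgumentException("ext." + fieldName + ": decimal scale "
                    + d.scale() + " exceeds declared scale " + scale);
            if (d.precision() - d.scale() > precision - scale)
                throw new IllegalArgumentException("ext." + fieldName
                    + ": integer digits exceed " + (precision - scale));
        }
    }
    ```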
    
    No code changes. 246 unit tests, all green. 18 Gradle subprojects.
    zichun authored
  • Completes the HasExt rollout across every core entity with an ext
    column. Item was the last one that carried an ext JSONB column
    without any validation wired — a plug-in could declare custom fields
    for Item but nothing would enforce them on save. This fixes that and
    restores two printing-shop-specific Item fields to the reference
    plug-in that were temporarily dropped from the previous Tier 1
    customization chunk (commit 16c59310) precisely because Item wasn't
    wired.
    
    Code changes:
      - Item implements HasExt; `ext` becomes `override var ext`, a
        companion constant holds the entity name "Item".
      - ItemService injects ExtJsonValidator, calls applyTo() in both
        create() and update() (create + update symmetry like partners
        and locations). parseExt passthrough added for response mappers.
      - CreateItemCommand, UpdateItemCommand, CreateItemRequest,
        UpdateItemRequest gain a nullable ext field.
      - ItemResponse now carries the parsed ext map, same shape as
        PartnerResponse / LocationResponse / SalesOrderResponse.
      - pbc-catalog build.gradle adds
        `implementation(project(":platform:platform-metadata"))`.
      - ItemServiceTest constructor updated to pass the new validator
        dependency with no-op stubs.
    
    Plug-in YAML (printing-shop.yml):
      - Re-added `printing_shop_color_count` (integer) and
        `printing_shop_paper_gsm` (integer) custom fields targeting Item.
        These were originally in the commit 16c59310 draft but removed
        because Item wasn't wired. Now that Item is wired, they're back
        and actually enforced.
    
    Smoke verified end-to-end against real Postgres with the plug-in
    staged:
      - GET /_meta/metadata/custom-fields/Item returns 2 plug-in fields.
      - POST /catalog/items with `{printing_shop_color_count: 4,
        printing_shop_paper_gsm: 170}` → 201, canonical form persisted.
      - GET roundtrip preserves both integer values.
      - POST with `printing_shop_color_count: "not-a-number"` → 400
        "ext.printing_shop_color_count: not a valid integer: 'not-a-number'".
      - POST with `rogue_key` → 400 "ext contains undeclared key(s)
        for 'Item': [rogue_key]".
    
    Six of the eight PBCs now participate in HasExt, via
      Partner, Location, SalesOrder, PurchaseOrder, WorkOrder, and Item.
    The remaining two are pbc-identity (User has no ext column by
    design — identity is a security concern, not a customization one)
    and pbc-finance (JournalEntry is derived state from events, no
    customization surface).
    
    Five core entities are wired for Tier 1 custom fields as of this
    commit:
      Partner     (2 core + 1 plug-in)
      Item        (0 core + 2 plug-in)
      SalesOrder  (0 core + 1 plug-in)
      WorkOrder   (2 core + 1 plug-in)
      Location    (0 core + 0 plug-in — wired but no declarations yet)
    
    246 unit tests, all green. 18 Gradle subprojects.
  • Closes the last known gap from the HasExt refactor (commit 986f02ce):
    pbc-production's WorkOrder had an `ext` column but no validator was
    wired, so an operator could write arbitrary JSON without any
    schema enforcement. This fixes that and adds the first Tier 1
    custom fields for WorkOrder.
    
    Code changes:
      - WorkOrder implements HasExt; ext becomes `override var ext`,
        ENTITY_NAME moves onto the entity companion.
      - WorkOrderService injects ExtJsonValidator, calls applyTo() in
        create() before saving (null-safe so the
        SalesOrderConfirmedSubscriber's auto-spawn path still works —
        verified by smoke test).
      - CreateWorkOrderCommand + CreateWorkOrderRequest gain an `ext`
        field that flows through to the validator.
      - WorkOrderResponse gains an `ext: Map<String, Any?>` field; the
        response mapper signature changes to `toResponse(service)` to
        reach the validator via a convenience parseExt delegate on the
        service (same pattern as the other four PBCs).
      - pbc-production Gradle build adds `implementation(project(":platform:platform-metadata"))`.
    
    Metadata (production.yml):
      - Permission keys extended to match the v2 state machine:
        production.work-order.start (was missing) and
        production.work-order.scrap (was missing). The existing
        .read / .create / .complete / .cancel keys stay.
      - Two custom fields declared:
          * production_priority (enum: low, normal, high, urgent)
          * production_routing_notes (string, maxLength 1024)
        Both are optional and non-PII; an operator can now add
        priority and routing notes to a work order through the public
        API without any code change, which is the whole point of
        Tier 1 customization.
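    A hedged sketch of the two declared rules as standalone check
    functions (the real enforcement lives inside ExtJsonValidator; these
    names are illustrative):

```kotlin
// Enum rule: the value must be a string in the declared allowed set,
// e.g. production_priority in [low, normal, high, urgent].
fun checkEnum(key: String, value: Any?, allowed: Set<String>) {
    require(value is String && value in allowed) {
        "value '$value' is not in allowed set ${allowed.toList()}"
    }
}

// maxLength rule: the value must be a string no longer than the declared
// maximum, e.g. production_routing_notes with maxLength 1024.
fun checkMaxLength(key: String, value: Any?, max: Int) {
    require(value is String && value.length <= max) {
        "$key: expected a string of at most $max characters"
    }
}
```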
    
    Unit tests: WorkOrderServiceTest constructor updated to pass the
    new extValidator dependency and stub applyTo/parseExt as no-ops.
    No behavioral test changes — ext validation is covered by
    ExtJsonValidatorTest and the platform-wide smoke tests.
    
    Smoke verified end-to-end against real Postgres:
      - GET /_meta/metadata/custom-fields/WorkOrder now returns both
        declarations with correct enum sets and maxLength.
      - POST /work-orders with valid ext {production_priority:"high",
        production_routing_notes:"Rush for customer demo"} → 201,
        canonical form persisted, round-trips via GET.
      - POST with invalid enum value → 400 "value 'emergency' is not
        in allowed set [low, normal, high, urgent]".
      - POST with unknown ext key → 400 "ext contains undeclared
        key(s) for 'WorkOrder': [unknown_field]".
      - Auto-spawn from confirmed SO → DRAFT work order with empty
        ext `{}`, confirming the applyTo(null) null-safe path.
    
    Five of the eight PBCs now participate in the HasExt pattern, via
    Partner, Location, SalesOrder, PurchaseOrder, and WorkOrder. The
    remaining entities (Item, Uom, JournalEntry) either get their own
    custom-field story in a separate chunk or are derived state.
    
    246 unit tests, all green. 18 Gradle subprojects.
  • Closes the P4.3 rollout — the last PBC whose controllers were still
    unannotated. Every endpoint in `UserController` now carries an
    `@RequirePermission("identity.user.*")` annotation matching the keys
    already declared in `identity.yml`:
    
      GET     /api/v1/identity/users        identity.user.read
      GET     /api/v1/identity/users/{id}   identity.user.read
      POST    /api/v1/identity/users        identity.user.create
      PATCH   /api/v1/identity/users/{id}   identity.user.update
      DELETE  /api/v1/identity/users/{id}   identity.user.disable
    
    `AuthController` (login, refresh) is deliberately NOT annotated —
    it is in the platform-security public allowlist because login is the
    token-issuing endpoint (chicken-and-egg).
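    The check the annotation gates can be sketched as below. This is a
    hedged stand-in: the real evaluator is a Spring AOP aspect resolving
    grants through metadata__role_permission, and every name except
    RequirePermission and the `admin` wildcard role is illustrative.

```kotlin
// Marker annotation, as carried by each controller endpoint.
annotation class RequirePermission(val key: String)

// The bootstrap admin's wildcard `admin` role short-circuits the per-key
// lookup; any other caller needs an explicit permission grant.
fun isAllowed(roles: Set<String>, permissions: Set<String>, requiredKey: String): Boolean =
    "admin" in roles || requiredKey in permissions
```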
    
    KDoc on the controller class updated to reflect the new auth story
    (removing the stale "authentication deferred to v0.2" comment from
    before P4.1 / P4.3 landed).
    
    Smoke verified end-to-end against real Postgres:
      - Admin (wildcard `admin` role) → GET /users returns 200, POST
        /users returns 201 (new user `jane` created).
      - Unauthenticated GET and POST → 401 Unauthorized from the
        framework's JWT filter before @RequirePermission runs. A
        non-admin user without explicit grants would get 403 from the
        AOP evaluator; only the admin and anonymous cases were
        exercised manually.
    
    No test changes — the controller unit test is a thin DTO mapper
    test that doesn't exercise the Spring AOP aspect; identity-wide
    authz enforcement is covered by the platform-security tests plus
    the shipping smoke tests. 246 unit tests, all green.
    
    P4.3 is now complete across every core PBC:
      pbc-catalog, pbc-partners, pbc-inventory, pbc-orders-sales,
      pbc-orders-purchase, pbc-finance, pbc-production, pbc-identity.
  • Removes the ext-handling copy/paste that had grown across four PBCs
    (partners, inventory, orders-sales, orders-purchase). Every service
    that wrote the JSONB `ext` column was manually doing the same
    four-step sequence: validate, null-check, serialize with a local
    ObjectMapper, assign to the entity. And every response mapper was
    doing the inverse: check-if-blank, parse, cast, swallow errors.
    
    Net: ~15 lines saved per PBC, one place to change the ext contract
    later (e.g. PII redaction, audit tagging, field-level events), and
    a stable plug-in opt-in mechanism — any plug-in entity that
    implements `HasExt` automatically participates.
    
    New api.v1 surface:
    
      interface HasExt {
          val extEntityName: String     // key into metadata__custom_field
          var ext: String               // the serialized JSONB column
      }
    
    Lives in `org.vibeerp.api.v1.entity` so plug-ins can opt their own
    entities into the same validation path. Zero Spring/Jackson
    dependencies — api.v1 stays clean.
    
    Extended `ExtJsonValidator` (platform-metadata) with two helpers:
    
      fun applyTo(entity: HasExt, ext: Map<String, Any?>?)
          — null-safe; validates; writes canonical JSON to entity.ext.
            Replaces the validate + writeValueAsString + assign triplet
            in every service's create() and update().
    
      fun parseExt(entity: HasExt): Map<String, Any?>
          — returns empty map on blank/corrupt column; response
            mappers never 500 on bad data. Replaces the four identical
            parseExt local functions.
    
    ExtJsonValidator now takes an ObjectMapper via constructor
    injection (Spring Boot's auto-configured bean).
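    The parseExt contract — empty map on a blank or corrupt column, so a
    response mapper can never 500 — can be sketched as below. The `parse`
    parameter stands in for the injected ObjectMapper and the function
    name is illustrative:

```kotlin
// Never-throwing read side of the ext contract: blank column → empty map,
// parse failure → empty map, otherwise the parsed map.
fun parseExtSketch(raw: String, parse: (String) -> Map<String, Any?>): Map<String, Any?> =
    if (raw.isBlank()) emptyMap()
    else runCatching { parse(raw) }.getOrElse { emptyMap() }
```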
    
    Entities that now implement HasExt (override val extEntityName;
    override var ext; companion object const val ENTITY_NAME):
      - Partner (`partners.Partner` → "Partner")
      - Location (`inventory.Location` → "Location")
      - SalesOrder (`orders_sales.SalesOrder` → "SalesOrder")
      - PurchaseOrder (`orders_purchase.PurchaseOrder` → "PurchaseOrder")
    
    Deliberately NOT converted this chunk:
      - WorkOrder (pbc-production) — its ext column has no declared
        fields yet; a follow-up that adds declarations AND the
        HasExt implementation is cleaner than splitting the two.
      - JournalEntry (pbc-finance) — derived state, no ext column.
    
    Services lose:
      - The `jsonMapper: ObjectMapper = ObjectMapper().registerKotlinModule()`
        field (four copies eliminated)
      - The `parseExt(entity): Map` helper function (four copies)
      - The `companion object { const val ENTITY_NAME = ... }` constant
        (moved onto the entity where it belongs)
      - The `val canonicalExt = extValidator.validate(...)` +
        `.also { it.ext = jsonMapper.writeValueAsString(canonicalExt) }`
        create pattern (replaced with one applyTo call)
      - The `if (command.ext != null) { ... }` update pattern
        (applyTo is null-safe)
    
    Unit tests: 6 new cases on ExtJsonValidatorTest cover applyTo and
    parseExt (null-safe path, happy path, failure path, blank column,
    round-trip, malformed JSON). Existing service tests just swap the
    mock setup from stubbing `validate` to stubbing `applyTo` and
    `parseExt` with no-ops.
    
    Smoke verified end-to-end against real Postgres:
      - POST /partners with valid ext (partners_credit_limit,
        partners_industry) → 201, canonical form persisted.
      - GET /partners/by-code/X → 200, ext round-trips.
      - POST with invalid enum value → 400 "value 'x' is not in
        allowed set [printing, publishing, packaging, other]".
      - POST with undeclared key → 400 "ext contains undeclared
        key(s) for 'Partner': [rogue_field]".
      - PATCH with new ext → 200, ext updated.
      - PATCH WITHOUT ext field → 200, prior ext preserved (null-safe
        applyTo).
      - POST /orders/sales-orders with no ext → 201, the create path
        via the shared helper still works.
    
    246 unit tests (+6 over 240), 18 Gradle subprojects.
  • Grows pbc-production from the minimal v1 (DRAFT → COMPLETED in one
    step, single output, no BOM) into a real v2 production PBC:
    
      1. IN_PROGRESS state between DRAFT and COMPLETED so "started but
         not finished" work orders are observable on a dashboard.
         WorkOrderService.start(id) performs the transition and publishes
         a new WorkOrderStartedEvent. cancel() now accepts DRAFT OR
         IN_PROGRESS (v2 writes nothing to the ledger at start() so there
         is nothing to undo on cancel).
    
      2. Bill of materials via a new WorkOrderInput child entity —
         @OneToMany with cascade + orphanRemoval, same shape as
         SalesOrderLine. Each line carries (lineNo, itemCode,
         quantityPerUnit, sourceLocationCode). complete() now iterates
         the inputs in lineNo order and writes one MATERIAL_ISSUE
         ledger row per line (delta = -(quantityPerUnit × outputQuantity))
         BEFORE writing the PRODUCTION_RECEIPT for the output. All in
         one transaction — a failure anywhere rolls back every prior
         ledger row AND the status flip. Empty inputs list is legal
         (the v1 auto-spawn-from-SO path still works unchanged,
         writing only the PRODUCTION_RECEIPT).
    
      3. Scrap flow for COMPLETED work orders via a new scrap(id,
         scrapLocationCode, quantity, note) service method. Writes a
         negative ADJUSTMENT ledger row tagged WO:<code>:SCRAP and
         publishes a new WorkOrderScrappedEvent. Chose ADJUSTMENT over
         adding a new SCRAP movement reason to keep the enum stable —
         the reference-string suffix is the disambiguator. The work
         order itself STAYS COMPLETED; scrap is a correction on top of
         a terminal state, not a state change.
    
      complete() now requires IN_PROGRESS (not DRAFT); existing callers
      must start() first.
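    The ledger math complete() performs can be sketched as below — data
    shapes simplified and names illustrative; in the real service each
    row goes through InventoryApi.recordMovement inside the one
    @Transactional method, so a failure anywhere rolls everything back:

```kotlin
data class InputLine(val lineNo: Int, val itemCode: String, val quantityPerUnit: Double)
data class LedgerRow(val reason: String, val itemCode: String, val delta: Double)

// One MATERIAL_ISSUE per BOM line in lineNo order
// (delta = -(quantityPerUnit × outputQuantity)), then the
// PRODUCTION_RECEIPT for the output.
fun completionRows(inputs: List<InputLine>, outputItemCode: String, outputQuantity: Double): List<LedgerRow> =
    inputs.sortedBy { it.lineNo }
        .map { LedgerRow("MATERIAL_ISSUE", it.itemCode, -(it.quantityPerUnit * outputQuantity)) } +
        LedgerRow("PRODUCTION_RECEIPT", outputItemCode, outputQuantity)
```

    An empty inputs list yields only the PRODUCTION_RECEIPT row, which
    is why the v1 auto-spawn-from-SO path keeps working unchanged.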
    
      api.v1 grows two events (WorkOrderStartedEvent,
      WorkOrderScrappedEvent) alongside the three that already existed.
      Since this is additive within a major version, the api.v1 semver
      contract holds — existing subscribers continue to compile.
    
      Liquibase: 002-production-v2.xml widens the status CHECK and
      creates production__work_order_input with (work_order_id FK,
      line_no, item_code, quantity_per_unit, source_location_code) plus
      a unique (work_order_id, line_no) constraint, a CHECK
      quantity_per_unit > 0, and the audit columns. ON DELETE CASCADE
      from the parent.
    
      Unit tests: WorkOrderServiceTest grows from 8 to 18 cases —
      covers start happy path, start rejection, complete-on-DRAFT
      rejection, empty-BOM complete, BOM-with-two-lines complete
      (verifies both MATERIAL_ISSUE deltas AND the PRODUCTION_RECEIPT
      all fire with the right references), scrap happy path, scrap on
      non-COMPLETED rejection, scrap with non-positive quantity
      rejection, cancel-from-IN_PROGRESS, and BOM validation rejects
      (unknown item, duplicate line_no).
    
    Smoke verified end-to-end against real Postgres:
      - Created WO-SMOKE with 2-line BOM (2 paper + 0.5 ink per
        brochure, output 100).
      - Started (DRAFT → IN_PROGRESS, no ledger rows).
      - Completed: paper balance 500→300 (MATERIAL_ISSUE -200),
        ink 200→150 (MATERIAL_ISSUE -50), FG-BROCHURE 0→100
        (PRODUCTION_RECEIPT +100). All 3 rows tagged WO:WO-SMOKE.
      - Scrapped 7 units: FG-BROCHURE 100→93, ADJUSTMENT -7 tagged
        WO:WO-SMOKE:SCRAP, work order stayed COMPLETED.
      - Auto-spawn: SO-42 confirm still creates WO-FROM-SO-42-L1 as a
        DRAFT with empty BOM; starting + completing it writes only the
        PRODUCTION_RECEIPT (zero MATERIAL_ISSUE rows), proving the
        empty-BOM path is backwards-compatible.
      - Negative paths: complete-on-DRAFT 400s, scrap-on-DRAFT 400s,
        double-start 400s, cancel-from-IN_PROGRESS 200.
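    The transition guards those negative paths exercise can be sketched
    as below (status values from the commit; the guard function is
    illustrative):

```kotlin
enum class WoStatus { DRAFT, IN_PROGRESS, COMPLETED, CANCELLED }

// v2 guards: start() is DRAFT-only, complete() requires IN_PROGRESS,
// cancel() accepts DRAFT or IN_PROGRESS; COMPLETED is terminal (scrap
// is a ledger correction, not a transition).
fun canTransition(from: WoStatus, to: WoStatus): Boolean = when (to) {
    WoStatus.IN_PROGRESS -> from == WoStatus.DRAFT
    WoStatus.COMPLETED   -> from == WoStatus.IN_PROGRESS
    WoStatus.CANCELLED   -> from == WoStatus.DRAFT || from == WoStatus.IN_PROGRESS
    WoStatus.DRAFT       -> false
}
```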
    
    240 unit tests, 18 Gradle subprojects.
  • Completes the @RequirePermission rollout that started in commit
    b174cf60. Every non-state-transition endpoint in pbc-inventory
    (Location CRUD), pbc-orders-sales, and pbc-orders-purchase is now
    guarded by the pre-declared permission keys from their respective
    metadata YAMLs. State-transition verbs (confirm/cancel/ship/receive)
    were annotated in the original P4.3 demo chunk; this one fills in
    the list/get/create/update gap.
    
    Inventory
      - LocationController: list/get/getByCode → inventory.location.read;
        create → inventory.location.create;
        update → inventory.location.update;
        deactivate → inventory.location.deactivate.
      - (StockBalanceController.adjust + StockMovementController.record
        were already annotated with inventory.stock.adjust.)
    
    Orders-sales
      - SalesOrderController: list/get/getByCode → orders.sales.read;
        create → orders.sales.create; update → orders.sales.update.
        (confirm/cancel/ship were already annotated.)
    
    Orders-purchase
      - PurchaseOrderController: list/get/getByCode → orders.purchase.read;
        create → orders.purchase.create; update → orders.purchase.update.
        (confirm/cancel/receive were already annotated.)
    
    No new permission keys. Every key this chunk consumes was already
    declared in the relevant metadata YAML since the respective PBC was
    first built — catalog + partners already shipped in this state, and
    the inventory/orders YAMLs declared their read/create/update keys
    from day one but the controllers hadn't started using them.
    
    Admin happy path still works (bootstrap admin has the wildcard
    `admin` role, same as after commit b174cf60). 230 unit tests still
    green — annotations are purely additive, no existing test hits the
    @RequirePermission path since service-level tests bypass the
    controller entirely.
    
    Combined with b174cf60, the framework now has full @RequirePermission
    coverage on every PBC controller except pbc-identity's user admin
    (which is a separate permission surface — user/role administration
    has its own security story). A minimum-privilege role like
    "sales-clerk" can now be granted exactly `orders.sales.read` +
    `orders.sales.create` + `partners.partner.read` and NOT accidentally
    see catalog admin, inventory movements, finance journals, or
    contact PII.
  • Closes the P4.3 permission-rollout gap for the two oldest PBCs that
    were never updated when the @RequirePermission aspect landed. The
    catalog and partners metadata YAMLs already declared all the needed
    permission keys — the controllers just weren't consuming them.
    
    Catalog
      - ItemController: list/get/getByCode → catalog.item.read;
        create → catalog.item.create; update → catalog.item.update;
        deactivate → catalog.item.deactivate.
      - UomController: list/get/getByCode → catalog.uom.read;
        create → catalog.uom.create; update → catalog.uom.update.
    
    Partners (including the PII boundary)
      - PartnerController: list/get/getByCode → partners.partner.read;
        create → partners.partner.create; update → partners.partner.update.
        (deactivate was already annotated in the P4.3 demo chunk.)
      - AddressController: all five verbs annotated with
        partners.address.{read,create,update,delete}.
      - ContactController: all five verbs annotated with
        partners.contact.{read,create,update,deactivate}. The
        "TODO once P4.3 lands" note in the class KDoc was removed; P4.3
        is live and the annotations are now in place. This is the PII
        boundary that CLAUDE.md flagged as incomplete after the original
        P4.3 rollout.
    
    No new permission keys were added — all 14 keys this touches were
    already declared in pbc-catalog/catalog.yml and
    pbc-partners/partners.yml when those PBCs were first built. The
    metadata loader has been serving them to the SPA/OpenAPI/MCP
    introspection endpoint since day one; this change just starts
    enforcing them at the controller.
    
    Smoke-tested end-to-end against real Postgres
      - Fresh DB + fresh boot.
      - Admin happy path (bootstrap admin has wildcard `admin` role):
          GET  /api/v1/catalog/items           → 200
          POST /api/v1/catalog/items           → 201 (SMOKE-1 created)
          GET  /api/v1/catalog/uoms            → 200
          POST /api/v1/partners/partners       → 201 (SMOKE-P created)
          POST /api/v1/partners/.../contacts   → 201 (contact created)
          GET  /api/v1/partners/.../contacts   → 200 (PII read)
      - Anonymous negative path (no Bearer token):
          GET  /api/v1/catalog/items           → 401
          GET  /api/v1/partners/.../contacts   → 401
      - 230 unit tests still green (annotations are purely additive,
        no existing test hit the @RequirePermission path since the
        service-level tests bypass the controller entirely).
    
    Why this is a genuine security improvement
      - Before: any authenticated user (including the eventual "Alice
        from reception", the contractor's read-only service account,
        the AI-agent MCP client) could read PII, create partners, and
        create catalog items.
      - After: those operations require explicit role-permission grants
        through metadata__role_permission. The bootstrap admin still
        has unconditional access via the wildcard admin role, so
        nothing in a fresh deployment is broken; but a real operator
        granting minimum-privilege roles now has the columns they need
        in the database to do it.
      - The contact PII boundary in particular is GDPR-relevant: before
        this change, any logged-in user could enumerate every contact's
        name + email + phone. After, only users with partners.contact.read
        can see them.
    
    What's still NOT annotated
      - pbc-inventory's Location create/update/deactivate endpoints
        (only stock.adjust and movement.create are annotated).
      - pbc-orders-sales and pbc-orders-purchase list/get/create/update
        endpoints (only the state-transition verbs are annotated).
      - pbc-identity's user admin endpoints.
      These are the next cleanup chunk. This one stays focused on
      catalog + partners because those were the two PBCs that predated
      P4.3 entirely and hadn't been touched since.

  • The framework's eighth PBC and the first one that's NOT order- or
    master-data-shaped. Work orders are about *making things*, which is
    the reason the printing-shop reference customer exists in the first
    place. With this PBC in place the framework can express the full
    buy-sell-make loop end-to-end.
    
    What landed (new module pbc/pbc-production/)
      - WorkOrder entity (production__work_order):
          code, output_item_code, output_quantity, status (DRAFT|COMPLETED|
          CANCELLED), due_date (display-only), source_sales_order_code
          (nullable — work orders can be either auto-spawned from a
          confirmed SO or created manually), ext.
      - WorkOrderJpaRepository with existsBySourceSalesOrderCode /
        findBySourceSalesOrderCode for the auto-spawn dedup.
      - WorkOrderService.create / complete / cancel:
          • create validates the output item via CatalogApi (same seam
            SalesOrderService and PurchaseOrderService use), rejects
            non-positive quantities, publishes WorkOrderCreatedEvent.
          • complete(outputLocationCode) credits finished goods to the
            named location via InventoryApi.recordMovement with
            reason=PRODUCTION_RECEIPT (added in commit c52d0d59) and
            reference="WO:<order_code>", then flips status to COMPLETED,
            then publishes WorkOrderCompletedEvent — all in the same
            @Transactional method.
          • cancel only allowed from DRAFT (no un-producing finished
            goods); publishes WorkOrderCancelledEvent.
      - SalesOrderConfirmedSubscriber (@PostConstruct →
        EventBus.subscribe(SalesOrderConfirmedEvent::class.java, ...)):
        walks the confirmed sales order's lines via SalesOrdersApi
        (NOT by importing pbc-orders-sales) and calls
        WorkOrderService.create for each line. Coded as one bean with
        one subscription — matches pbc-finance's one-bean-per-subject
        pattern.
          • Idempotent on source sales order code — if any work order
            already exists for the SO, the whole spawn is a no-op.
          • Tolerant of a missing SO (defensive against a future async
            bus that could deliver the confirm event after the SO has
            vanished).
          • The WO code convention: WO-FROM-<so_code>-L<lineno>, e.g.
            WO-FROM-SO-2026-0001-L1.
    
      - REST controller /api/v1/production/work-orders: list, get,
        by-code, create, complete, cancel — each annotated with
        @RequirePermission. Four permission keys declared in the
        production.yml metadata: read / create / complete / cancel.
      - CompleteWorkOrderRequest: a single-arg DTO, so it needs the
        @JsonCreator(mode=PROPERTIES) + @param:JsonProperty workaround
        for the Jackson single-argument-constructor pitfall that already
        bit ShipSalesOrderRequest and ReceivePurchaseOrderRequest;
        cross-referenced in the KDoc so the third instance doesn't need
        re-discovery.
      - distribution/.../pbc-production/001-production-init.xml:
        CREATE TABLE with CHECK on status + CHECK on qty>0 + GIN on ext
        + the usual indexes. NEITHER output_item_code NOR
        source_sales_order_code is a foreign key (cross-PBC reference
        policy — guardrail #9).
      - settings.gradle.kts + distribution/build.gradle.kts: registers
        the new module and adds it to the distribution dependency list.
      - master.xml: includes the new changelog in dependency order,
        after pbc-finance.
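    The subscriber's code convention and dedup can be sketched as below
    (a hedged stand-in: `existsForSo` plays the role of
    WorkOrderJpaRepository.existsBySourceSalesOrderCode, and spawnCodes
    is an illustrative name for the subscriber's per-line loop):

```kotlin
// WO code convention: WO-FROM-<so_code>-L<lineno>.
fun workOrderCode(soCode: String, lineNo: Int): String = "WO-FROM-$soCode-L$lineNo"

// Idempotent on source sales order code: if any work order already
// exists for the SO, the whole spawn is a no-op.
fun spawnCodes(soCode: String, lineNos: List<Int>, existsForSo: (String) -> Boolean): List<String> =
    if (existsForSo(soCode)) emptyList()
    else lineNos.map { workOrderCode(soCode, it) }
```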
    
    New api.v1 surface: org.vibeerp.api.v1.event.production.*
      - WorkOrderCreatedEvent, WorkOrderCompletedEvent,
        WorkOrderCancelledEvent — sealed under WorkOrderEvent,
        aggregateType="production.WorkOrder". Same pattern as the
        order events, so any future consumer (finance revenue
        recognition, warehouse put-away dashboard, a customer plug-in
        that needs to react to "work finished") subscribes through the
        public typed-class overload with no dependency on pbc-production.
    
    Unit tests (13 new, 217 → 230 total)
      - WorkOrderServiceTest (9 tests): create dedup, positive quantity
        check, catalog seam, happy-path create with event assertion,
        complete rejects non-DRAFT, complete happy path with
        InventoryApi.recordMovement assertion + event assertion, cancel
        from DRAFT, cancel rejects COMPLETED.
      - SalesOrderConfirmedSubscriberTest (5 tests): subscription
        registration count, spawns N work orders for N SO lines with
        correct code convention, idempotent when WOs already exist,
        no-op on missing SO, and a listener-routing test that captures
        the EventListener instance and verifies it forwards to the
        right service method.
    
    End-to-end smoke verified against real Postgres
      - Fresh DB, fresh boot. Both OrderEventSubscribers (pbc-finance)
        and SalesOrderConfirmedSubscriber (pbc-production) log their
        subscription registration before the first HTTP call.
      - Seeded two items (BROCHURE-A, BROCHURE-B), a customer, and a
        finished-goods location (WH-FG).
      - Created a 2-line sales order (SO-WO-1), confirmed it.
          → Produced ONE orders_sales.SalesOrder outbox row.
          → Produced ONE AR POSTED finance__journal_entry for 1000 USD
            (500 × 1 + 250 × 2 — the pbc-finance consumer still works).
          → Produced TWO draft work orders auto-spawned from the SO
            lines: WO-FROM-SO-WO-1-L1 (BROCHURE-A × 500) and
            WO-FROM-SO-WO-1-L2 (BROCHURE-B × 250), both with
            source_sales_order_code=SO-WO-1.
      - Completed WO1 to WH-FG:
          → Produced a PRODUCTION_RECEIPT ledger row for BROCHURE-A
            delta=500 reference="WO:WO-FROM-SO-WO-1-L1".
          → inventory__stock_balance now has BROCHURE-A = 500 at WH-FG.
          → Flipped status to COMPLETED.
      - Cancelled WO2 → CANCELLED.
      - Created a manual WO-MANUAL-1 with no source SO → succeeds;
        demonstrates the "operator creates a WO to build inventory
        ahead of demand" path.
      - platform__event_outbox ends with 6 rows all DISPATCHED:
          orders_sales.SalesOrder SO-WO-1
          production.WorkOrder WO-FROM-SO-WO-1-L1  (created)
          production.WorkOrder WO-FROM-SO-WO-1-L2  (created)
          production.WorkOrder WO-FROM-SO-WO-1-L1  (completed)
          production.WorkOrder WO-FROM-SO-WO-1-L2  (cancelled)
          production.WorkOrder WO-MANUAL-1         (created)
    
    Why this chunk was the right next move
      - pbc-finance was a PASSIVE consumer — it only wrote derived
        reporting state. pbc-production is the first ACTIVE consumer:
        it creates new aggregates with their own state machines and
        their own cross-PBC writes in reaction to another PBC's events.
        This is a meaningfully harder test of the event-driven
        integration story and it passes end-to-end.
      - "One ledger, three callers" is now real: sales shipments,
        purchase receipts, AND production receipts all feed the same
        inventory__stock_movement ledger through the same
        InventoryApi.recordMovement facade. The facade has proven
        stable under three very different callers.
      - The framework now expresses the basic ERP trinity: buy
        (purchase orders), sell (sales orders), make (work orders).
        That's the shape every real manufacturing customer needs, and
        it's done without any PBC importing another.
    
    What's deliberately NOT in v1
      - No bill of materials. complete() only credits finished goods;
        it does NOT issue raw materials. A shop floor that needs to
        consume 4 sheets of paper to produce 1 brochure does it
        manually via POST /api/v1/inventory/movements with reason=
        MATERIAL_ISSUE (added in commit c52d0d59). A proper BOM lands
        as WorkOrderInput lines in a future chunk.
      - No IN_PROGRESS state. complete() goes DRAFT → COMPLETED in
        one step. A real shop floor needs "started but not finished"
        visibility; that's the next iteration.
      - No routings, operations, machine assignments, or due-date
        enforcement. due_date is display-only.
      - No "scrap defective output" flow for a COMPLETED work order.
        cancel refuses from COMPLETED; the fix requires a new
        MovementReason and a new event, not a special-case method
        on the service.
  • Extends pbc-inventory's MovementReason enum with the two reasons a
    production-style PBC needs to record stock movements through the
    existing InventoryApi.recordMovement facade. No new endpoint, no
    new database column — just two new enum values, two new sign-
    validation rules, and four new tests.
    
    Why this lands BEFORE pbc-production
      - It's the smallest self-contained change that unblocks any future
        production-related code (the framework's planned pbc-production,
        a customer plug-in's manufacturing module, or even an ad-hoc
        operator script). Each of those callers can now record
        "consume raw material" / "produce finished good" through the
        same primitive that already serves sales shipments and purchase
        receipts.
      - It validates the "one ledger, many callers" property the
        architecture spec promised. Adding a new movement reason takes
        zero schema changes (the column is varchar) and zero plug-in
        changes (the api.v1 facade takes the reason as a string and
        delegates to MovementReason.valueOf inside the adapter). The
        enum lives entirely inside pbc-inventory.
    
    What changed
      - StockMovement.kt: enum gains MATERIAL_ISSUE (Δ ≤ 0) and
        PRODUCTION_RECEIPT (Δ ≥ 0), with KDoc explaining why each one
        was added and how they fit the "one primitive for every direction"
        story.
      - StockMovementService.validateSign: PRODUCTION_RECEIPT joins the
        must-be-non-negative bucket alongside RECEIPT, PURCHASE_RECEIPT,
        and TRANSFER_IN; MATERIAL_ISSUE joins the must-be-non-positive
        bucket alongside ISSUE, SALES_SHIPMENT, and TRANSFER_OUT.
      - 4 new unit tests:
          • record rejects positive delta on MATERIAL_ISSUE
          • record rejects negative delta on PRODUCTION_RECEIPT
          • record accepts a positive PRODUCTION_RECEIPT (happy path,
            new balance row at the receiving location)
          • record accepts a negative MATERIAL_ISSUE (decrements an
            existing balance from 1000 → 800)
      - Total tests: 213 → 217.
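    The extended sign rule can be sketched as below (enum values from
    the commit; the bucket set names are illustrative):

```kotlin
enum class MovementReason {
    RECEIPT, ISSUE, ADJUSTMENT,
    PURCHASE_RECEIPT, SALES_SHIPMENT, TRANSFER_IN, TRANSFER_OUT,
    MATERIAL_ISSUE, PRODUCTION_RECEIPT
}

// PRODUCTION_RECEIPT joins the must-be-non-negative bucket;
// MATERIAL_ISSUE joins the must-be-non-positive bucket.
val nonNegativeReasons = setOf(
    MovementReason.RECEIPT, MovementReason.PURCHASE_RECEIPT,
    MovementReason.TRANSFER_IN, MovementReason.PRODUCTION_RECEIPT
)
val nonPositiveReasons = setOf(
    MovementReason.ISSUE, MovementReason.SALES_SHIPMENT,
    MovementReason.TRANSFER_OUT, MovementReason.MATERIAL_ISSUE
)

fun validateSign(reason: MovementReason, delta: Long) {
    when (reason) {
        in nonNegativeReasons -> require(delta >= 0) {
            "movement reason $reason requires a non-negative delta (got $delta)"
        }
        in nonPositiveReasons -> require(delta <= 0) {
            "movement reason $reason requires a non-positive delta (got $delta)"
        }
        else -> Unit  // ADJUSTMENT may carry either sign
    }
}
```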
    
    Smoke test against real Postgres
      - Booted on a fresh DB; no schema migration needed because the
        `reason` column is varchar(32), already wide enough.
      - Seeded an item RAW-PAPER, an item FG-WIDGET, and a location
        WH-PROD via the existing endpoints.
      - POST /api/v1/inventory/movements with reason=RECEIPT for 1000
        raw paper → balance row at 1000.
      - POST /api/v1/inventory/movements with reason=MATERIAL_ISSUE
        delta=-200 reference="WO:WO-EVT-1" → balance becomes 800,
        ledger row written.
      - POST /api/v1/inventory/movements with reason=PRODUCTION_RECEIPT
        delta=50 reference="WO:WO-EVT-1" → balance row at 50 for
        FG-WIDGET, ledger row written.
      - Negative test: POST PRODUCTION_RECEIPT with delta=-1 →
        400 Bad Request "movement reason PRODUCTION_RECEIPT requires
        a non-negative delta (got -1)" — the new sign rule fires.
      - Final ledger has 3 rows (RECEIPT, MATERIAL_ISSUE,
        PRODUCTION_RECEIPT); final balance has FG-WIDGET=50 and
        RAW-PAPER=800 — the math is correct.
    
    What's deliberately NOT in this chunk
      - No pbc-production yet. That's the next chunk; this is just
        the foundation that lets it (or any other production-ish
        caller) write to the ledger correctly without needing changes
        to api.v1 or pbc-inventory ever again.
      - No new return-path reasons (RETURN_FROM_CUSTOMER,
        RETURN_TO_SUPPLIER) — those land when the returns flow does.
      - No reference convention for "WO:" — that's documented in the
        KDoc on `reference`, not enforced anywhere. The v0.16/v0.17
        convention "<source>:<code>" continues unchanged.
    zichun authored
     
  • The minimal pbc-finance that landed in commit bf090c2e reacted only
    to *ConfirmedEvent. This change wires the rest of the order lifecycle
    (ship/receive → SETTLED, cancel → REVERSED) so the journal entry
    reflects what actually happened to the order, not just the moment
    it was confirmed.
    
    JournalEntryStatus (new enum + new column)
      - POSTED   — created from a confirm event (existing behaviour)
      - SETTLED  — promoted by SalesOrderShippedEvent /
                   PurchaseOrderReceivedEvent
      - REVERSED — promoted by SalesOrderCancelledEvent /
                   PurchaseOrderCancelledEvent
      - The status field is intentionally a separate axis from
        JournalEntryType: type tells you "AR or AP", status tells you
        "where in its lifecycle".
    
    distribution/.../pbc-finance/002-finance-status.xml
      - ALTER TABLE adds `status varchar(16) NOT NULL DEFAULT 'POSTED'`,
        a CHECK constraint mirroring the enum values, and an index on
        status for the new filter endpoint. The DEFAULT 'POSTED' covers
        any existing rows on an upgraded environment without a backfill
        step.
    
    JournalEntryService — four new methods, all idempotent
      - settleFromSalesShipped(event)        → POSTED → SETTLED for AR
      - settleFromPurchaseReceived(event)    → POSTED → SETTLED for AP
      - reverseFromSalesCancelled(event)     → POSTED → REVERSED for AR
      - reverseFromPurchaseCancelled(event)  → POSTED → REVERSED for AP
      Each runs through a private settleByOrderCode/reverseByOrderCode
      helper that:
        1. Looks up the row by order_code (new repo method
           findFirstByOrderCode). If absent → no-op (e.g. cancel from
           DRAFT means no *ConfirmedEvent was ever published, so no
           journal entry exists; this is the most common cancel path).
        2. If the row is already in the destination status → no-op
           (idempotent under at-least-once delivery, e.g. outbox replay
           or future Kafka retry).
        3. Refuses to overwrite a contradictory terminal status — a
           SETTLED row cannot be REVERSED, and vice versa. The producer's
           state machine forbids cancel-from-shipped/received, so
           reaching here implies an upstream contract violation; logged
           at WARN and the row is left alone.
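    The helper's three rules form a small decision table, sketched here
    without JPA (the `promote` function and `Promotion` names are
    illustrative, not the service's real API):

```kotlin
enum class JournalEntryStatus { POSTED, SETTLED, REVERSED }

// Outcome of a settle/reverse attempt, one value per rule above.
enum class Promotion { MISSING_NOOP, ALREADY_THERE_NOOP, PROMOTED, CONTRADICTION_WARN }

fun promote(current: JournalEntryStatus?, target: JournalEntryStatus): Promotion = when {
    current == null -> Promotion.MISSING_NOOP            // 1. no row: cancel-from-DRAFT path
    current == target -> Promotion.ALREADY_THERE_NOOP    // 2. idempotent under replay
    current == JournalEntryStatus.POSTED -> Promotion.PROMOTED
    else -> Promotion.CONTRADICTION_WARN                 // 3. SETTLED vs REVERSED: leave alone
}
```

    Only POSTED can move; both terminal states refuse to overwrite each
    other, which is what makes at-least-once delivery safe.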
    
    OrderEventSubscribers — six subscriptions per @PostConstruct
      - All six order events from api.v1.event.orders.* are subscribed
        via the typed-class EventBus.subscribe(eventType, listener)
        overload, the same public API a plug-in would use. Boot log
        line updated: "pbc-finance subscribed to 6 order events".
    
    JournalEntryController — new ?status= filter
      - GET /api/v1/finance/journal-entries?status=POSTED|SETTLED|REVERSED
        surfaces the partition. Existing ?orderCode= and ?type= filters
        unchanged. Read permission still finance.journal.read.
    
    12 new unit tests (213 total, was 201)
      - JournalEntryServiceTest: settle/reverse for AR + AP, idempotency
        on duplicate destination status, refusal to overwrite a
        contradictory terminal status, no-op on missing row, default
        POSTED on new entries.
      - OrderEventSubscribersTest: assert all SIX subscriptions registered,
        one new test that captures all four lifecycle listeners and
        verifies they forward to the correct service methods.
    
    End-to-end smoke (real Postgres, fresh DB)
      - Booted with the new DDL applied (status column + CHECK + index)
        on an empty DB. The OrderEventSubscribers @PostConstruct line
        confirms 6 subscriptions registered before the first HTTP call.
      - Five lifecycle scenarios driven via REST:
          PO-FULL:        confirm + receive  → AP SETTLED  amount=50.00
          SO-FULL:        confirm + ship     → AR SETTLED  amount= 1.00
          SO-REVERSE:     confirm + cancel   → AR REVERSED amount= 1.00
          PO-REVERSE:     confirm + cancel   → AP REVERSED amount=50.00
          SO-DRAFT-CANCEL: cancel only       → NO ROW (no confirm event)
      - finance__journal_entry returns exactly 4 rows (the 5th scenario
        correctly produces nothing) and ?status filters all return the
        expected partition (POSTED=0, SETTLED=2, REVERSED=2).
    
    What's still NOT in pbc-finance
      - Still no debit/credit legs, no chart of accounts, no period
        close, no double-entry invariant. This is the v0.17 minimal
        seed; the real P5.9 build promotes it into a real GL.
      - No reaction to "settle then reverse" or "reverse then settle"
        other than the WARN-and-leave-alone defensive path. A real GL
        would write a separate compensating journal entry; the minimal
        PBC just keeps the row immutable once it leaves POSTED.
  • The framework's seventh PBC, and the first one whose ENTIRE purpose
    is to react to events published by other PBCs. It validates the
    *consumer* side of the cross-PBC event seam that was wired up in
    commit 67406e87 (event-driven cross-PBC integration). With pbc-finance
    in place, the bus now has both producers and consumers in real PBC
    business logic — not just the wildcard EventAuditLogSubscriber that
    ships with platform-events.
    
    What landed (new module pbc/pbc-finance/, ~480 lines including tests)
      - JournalEntry entity (finance__journal_entry):
          id, code (= originating event UUID), type (AR|AP),
          partner_code, order_code, amount, currency_code, posted_at, ext.
          Unique index on `code` is the durability anchor for idempotent
          event delivery; the service ALSO existsByCode-checks before
          insert to make duplicate-event handling a clean no-op rather
          than a constraint-violation exception.
      - JournalEntryJpaRepository with existsByCode + findByOrderCode +
        findByType (the read-side filters used by the controller).
      - JournalEntryService.recordSalesConfirmed / recordPurchaseConfirmed
        take a SalesOrderConfirmedEvent / PurchaseOrderConfirmedEvent and
        write the corresponding AR/AP row. @Transactional with
        Propagation.REQUIRED so the listener joins the publisher's TX
        when the bus delivers synchronously (today) and creates a fresh
        one if a future async bus delivers from a worker thread. The
        KDoc explains why REQUIRED is the correct default and why
        REQUIRES_NEW would be wrong here.
      - OrderEventSubscribers @Component with @PostConstruct that calls
        EventBus.subscribe(SalesOrderConfirmedEvent::class.java, ...)
        and EventBus.subscribe(PurchaseOrderConfirmedEvent::class.java, ...)
        once at boot. Uses the public typed-class subscribe overload —
        NOT the platform-internal subscribeToAll wildcard helper. This
        is the API surface plug-ins will also use.
      - JournalEntryController: read-only REST under
        /api/v1/finance/journal-entries with @RequirePermission
        "finance.journal.read". Filter params: ?orderCode= and ?type=.
        Deliberately no POST endpoint — entries are derived state.
      - finance.yml metadata declaring 1 entity, 1 permission, 1 menu.
      - Liquibase changelog at distribution/.../pbc-finance/001-finance-init.xml
        + master.xml include + distribution/build.gradle.kts dep.
      - settings.gradle.kts: registers :pbc:pbc-finance.
      - 9 new unit tests (6 for JournalEntryService, 3 for
        OrderEventSubscribers) — including idempotency, dedup-by-event-id
        contract, listener-forwarding correctness via slot-captured
        EventListener invocation. Total tests: 192 → 201, 16 → 17 modules.
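    The dedup-by-event-id contract reduces to: the entry's code IS the
    originating event UUID, and a redelivery is a clean no-op. An
    in-memory sketch (the data class and store are illustrative stand-ins
    for the real entity + existsByCode check):

```kotlin
import java.math.BigDecimal
import java.util.UUID

data class ConfirmedOrder(val eventId: UUID, val orderCode: String, val amount: BigDecimal)

class JournalSketch {
    private val byCode = LinkedHashMap<String, ConfirmedOrder>()

    /** Returns true when a new row is written, false for a duplicate event id. */
    fun record(event: ConfirmedOrder): Boolean {
        val code = event.eventId.toString()         // durability anchor = event UUID
        if (byCode.containsKey(code)) return false  // existsByCode-style no-op
        byCode[code] = event
        return true
    }

    fun size() = byCode.size
}
```

    The real module belt-and-braces this with a unique index on `code`,
    so a race that slips past the check still cannot write twice.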
    
    Why this is the right shape
      - pbc-finance has zero source dependency on pbc-orders-sales,
        pbc-orders-purchase, pbc-partners, or pbc-catalog. The Gradle
        build refuses any cross-PBC dependency at configuration time —
        pbc-finance only declares api/api-v1, platform-persistence, and
        platform-security. The events it consumes live in
        api.v1.event.orders, and the partner/item references it stores
        are opaque string codes.
      - Subscribers go through EventBus.subscribe(eventType, listener),
        the public typed-class overload from api.v1.event.EventBus.
        Plug-ins use exactly this API; this PBC proves the API works
        end-to-end from a real consumer.
      - The consumer is idempotent on the producer's event id, so
        at-least-once delivery (outbox replay, future Kafka retry)
        cannot create duplicate journal entries. This makes the
        consumer correct under both the current synchronous bus and
        any future async / out-of-process bus.
      - Read-only REST API: derived state should not be writable from
        the outside. Adjustments and reversals will land later as their
        own command verbs when the real P5.9 finance build needs them,
        not as a generic create endpoint.
    
    End-to-end smoke verified against real Postgres
      - Booted on a fresh DB; the OrderEventSubscribers @PostConstruct
        log line confirms the subscription registered before any HTTP
        traffic.
      - Seeded an item, supplier, customer, location (existing PBCs).
      - Created PO PO-FIN-1 (5000 × 0.04 = 200 USD) → confirmed →
        GET /api/v1/finance/journal-entries returns ONE row:
          type=AP partner=SUP-PAPER order=PO-FIN-1 amount=200.0000 USD
      - Created SO SO-FIN-1 (50 × 0.10 = 5 USD) → confirmed →
        GET /api/v1/finance/journal-entries now returns TWO rows:
          type=AR partner=CUST-ACME order=SO-FIN-1 amount=5.0000 USD
          (plus the AP row from above)
      - GET /api/v1/finance/journal-entries?orderCode=PO-FIN-1 →
        only the AP row.
      - GET /api/v1/finance/journal-entries?type=AR → only the AR row.
      - platform__event_outbox shows 2 rows (one per confirm) both
        DISPATCHED, finance__journal_entry shows 2 rows.
      - The journal-entry code column equals the originating event
        UUID, proving the dedup contract is wired.
    
    What this is NOT (yet)
      - Not a real general ledger. No debit/credit legs, no chart of
        accounts, no period close, no double-entry invariant. P5.9
        promotes this minimal seed into a real finance PBC.
      - No reaction to ship/receive/cancel events yet — only confirm.
        Real revenue recognition (which happens at ship time for most
        accounting standards) lands with the P5.9 build.
      - No outbound api.v1.ext facade. pbc-finance does not (yet)
        expose itself to other PBCs; it is a pure consumer. When
        pbc-production needs to know "did this order's invoice clear",
        that facade gets added.
  • The event bus and transactional outbox have existed since P1.7 but no
    real PBC business logic was publishing through them. This change closes
    that loop end-to-end:
    
    api.v1.event.orders (new public surface)
      - SalesOrderConfirmedEvent / SalesOrderShippedEvent /
        SalesOrderCancelledEvent — sealed under SalesOrderEvent,
        aggregateType = "orders_sales.SalesOrder"
      - PurchaseOrderConfirmedEvent / PurchaseOrderReceivedEvent /
        PurchaseOrderCancelledEvent — sealed under PurchaseOrderEvent,
        aggregateType = "orders_purchase.PurchaseOrder"
      - Events live in api.v1 (not inside the PBCs) so other PBCs and
        customer plug-ins can subscribe without importing the producing
        PBC — that would violate guardrail #9.
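    Shape sketch of the sealed hierarchy (field lists here are assumptions;
    the real events in api.v1.event.orders carry more data):

```kotlin
sealed class SalesOrderEvent {
    abstract val orderCode: String
    val aggregateType: String get() = "orders_sales.SalesOrder"
}

data class SalesOrderConfirmedEvent(override val orderCode: String) : SalesOrderEvent()
data class SalesOrderShippedEvent(override val orderCode: String) : SalesOrderEvent()
data class SalesOrderCancelledEvent(override val orderCode: String) : SalesOrderEvent()

// Sealed means a subscriber's `when` is exhaustive with no else branch —
// adding a fourth lifecycle event becomes a compile error at every consumer.
fun describe(e: SalesOrderEvent): String = when (e) {
    is SalesOrderConfirmedEvent -> "confirmed ${e.orderCode}"
    is SalesOrderShippedEvent -> "shipped ${e.orderCode}"
    is SalesOrderCancelledEvent -> "cancelled ${e.orderCode}"
}
```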
    
    pbc-orders-sales / pbc-orders-purchase
      - SalesOrderService and PurchaseOrderService now inject EventBus
        and publish a typed event from each state-changing method
        (confirm, ship/receive, cancel). The publish runs INSIDE the
        same @Transactional method as the JPA mutation and the
        InventoryApi.recordMovement ledger writes — EventBusImpl uses
        Propagation.MANDATORY, so a publish outside a transaction
        fails loudly. A failure in any line rolls back the status
        change AND every ledger row AND the would-have-been outbox row.
      - 6 new unit tests (3 per service) mockk the EventBus and verify
        each transition publishes exactly one matching event with the
        expected fields. Total tests: 186 → 192.
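    A toy model of the Propagation.MANDATORY behaviour — publishing
    outside an active transaction fails loudly instead of silently losing
    the event. `TxScope` and `OutboxBus` are stand-ins for Spring's
    transaction manager and the real EventBusImpl, not their APIs:

```kotlin
class TxScope {
    var active = false
        private set

    fun <T> inTransaction(block: () -> T): T {
        active = true
        try {
            return block()
        } finally {
            active = false
        }
    }
}

class OutboxBus(private val tx: TxScope) {
    val outbox = mutableListOf<String>()

    fun publish(topic: String) {
        // MANDATORY-style guard: no ambient transaction is a hard error.
        check(tx.active) { "publish requires an active transaction (MANDATORY)" }
        outbox += topic
    }
}
```

    In the real system the outbox row then rides the same commit/rollback
    as the JPA mutation and the ledger writes, which this toy does not model.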
    
    End-to-end smoke verified against real Postgres
      - Created supplier, customer, item PAPER-A4, location WH-MAIN.
      - Drove a PO and an SO through the full state machine plus a
        cancel of each. 6 events fired:
          orders_purchase.PurchaseOrder × 3 (confirm + receive + cancel)
          orders_sales.SalesOrder       × 3 (confirm + ship + cancel)
      - The wildcard EventAuditLogSubscriber logged each one at INFO
        level to /tmp/vibe-erp-boot.log with the [event-audit] tag.
      - platform__event_outbox shows 6 rows, all flipped from PENDING
        to DISPATCHED by the OutboxPoller within seconds.
      - The publish-inside-the-ledger-transaction guarantee means a
        subscriber that reads inventory__stock_movement on event
        receipt is guaranteed to see the matching SALES_SHIPMENT or
        PURCHASE_RECEIPT rows. This is what the architecture spec
        section 9 promised and now delivers.
    
    Why this is the right shape
      - Other PBCs (production, finance) and customer plug-ins can now
        react to "an order was confirmed/shipped/received/cancelled"
        without ever importing pbc-orders-* internals. The event class
        objects live in api.v1, the only stable contract surface.
      - The aggregateType strings ("orders_sales.SalesOrder",
        "orders_purchase.PurchaseOrder") match the <pbc>.<aggregate>
        convention documented on DomainEvent.aggregateType, so a
        cross-classloader subscriber can use the topic-string subscribe
        overload without holding the concrete Class<E>.
      - The bus's outbox row is the durability anchor for the future
        Kafka/NATS bridge: switching from in-process delivery to
        cross-process delivery will require zero changes to either
        PBC's publish call.
  • The buying-side mirror of pbc-orders-sales. Adds the 6th real PBC
    and closes the loop: the framework now does both directions of the
    inventory flow through the same `InventoryApi.recordMovement` facade.
    Buy stock with a PO that hits RECEIVED, ship stock with a SO that
    hits SHIPPED, both feed the same `inventory__stock_movement` ledger.
    
    What landed
    -----------
    * New Gradle subproject `pbc/pbc-orders-purchase` (16 modules total
      now). Same dependency set as pbc-orders-sales, same architectural
      enforcement — no direct dependency on any other PBC; cross-PBC
      references go through `api.v1.ext.<pbc>` facades at runtime.
    * Two JPA entities mirroring SalesOrder / SalesOrderLine:
      - `PurchaseOrder` (header) — code, partner_code (varchar, NOT a
        UUID FK), status enum DRAFT/CONFIRMED/RECEIVED/CANCELLED,
        order_date, expected_date (nullable, the supplier's promised
        delivery date), currency_code, total_amount, ext jsonb.
      - `PurchaseOrderLine` — purchase_order_id FK, line_no, item_code,
        quantity, unit_price, currency_code. Same shape as the sales
        order line; the api.v1 facade reuses `SalesOrderLineRef` rather
        than declaring a duplicate type.
    * `PurchaseOrderService.create` performs three cross-PBC validations
      in one transaction:
      1. PartnersApi.findPartnerByCode → reject if null.
      2. The partner's `type` must be SUPPLIER or BOTH (a CUSTOMER-only
         partner cannot be the supplier of a purchase order — the
         mirror of the sales-order rule that rejects SUPPLIER-only
         partners as customers).
      3. CatalogApi.findItemByCode for EVERY line.
      Then validates: at least one line, no duplicate line numbers,
      positive quantity, non-negative price, currency matches header.
      The header total is RECOMPUTED from the lines (caller's value
      ignored — never trust a financial aggregate sent over the wire).
    * State machine enforced by `confirm()`, `cancel()`, and `receive()`:
      - DRAFT → CONFIRMED   (confirm)
      - DRAFT → CANCELLED   (cancel)
      - CONFIRMED → CANCELLED (cancel before receipt)
      - CONFIRMED → RECEIVED  (receive — increments inventory)
      - RECEIVED → ×          (terminal; cancellation requires a
                                return-to-supplier flow)
    * `receive(id, receivingLocationCode)` walks every line and calls
      `inventoryApi.recordMovement(... +line.quantity reason="PURCHASE_RECEIPT"
      reference="PO:<order_code>")`. The whole operation runs in ONE
      transaction so a failure on any line rolls back EVERY line's
      already-written movement AND the order status change. The
      customer cannot end up with "5 of 7 lines received, status
      still CONFIRMED, ledger half-written".
    * New `POST /api/v1/orders/purchase-orders/{id}/receive` endpoint
      with body `{"receivingLocationCode": "WH-MAIN"}`, gated by
      `orders.purchase.receive`. The single-arg DTO has the same
      Jackson `@JsonCreator(mode = PROPERTIES)` workaround as
      `ShipSalesOrderRequest` (the trap is documented in the class
      KDoc with a back-reference to ShipSalesOrderRequest).
    * Confirm/cancel/receive endpoints carry `@RequirePermission`
      annotations (`orders.purchase.confirm`, `orders.purchase.cancel`,
      `orders.purchase.receive`). All three keys declared in the new
      `orders-purchase.yml` metadata.
    * New api.v1 facade `org.vibeerp.api.v1.ext.orders.PurchaseOrdersApi`
      + `PurchaseOrderRef`. Reuses the existing `SalesOrderLineRef`
      type for the line shape — buying and selling lines carry the
      same fields, so duplicating the ref type would be busywork.
    * `PurchaseOrdersApiAdapter` — sixth `*ApiAdapter` after Identity,
      Catalog, Partners, Inventory, SalesOrders.
    * `orders-purchase.yml` metadata declaring 2 entities, 6 permission
      keys, 1 menu entry under "Purchasing".
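    The line validation + total recomputation above can be sketched in a
    few lines (`Line` and `recomputeTotal` are illustrative names, not
    the real entity or service API):

```kotlin
import java.math.BigDecimal

data class Line(val lineNo: Int, val quantity: BigDecimal, val unitPrice: BigDecimal)

fun recomputeTotal(lines: List<Line>): BigDecimal {
    require(lines.isNotEmpty()) { "order must have at least one line" }
    require(lines.map { it.lineNo }.distinct().size == lines.size) { "duplicate line numbers" }
    return lines.fold(BigDecimal.ZERO) { total, l ->
        require(l.quantity.signum() > 0) { "quantity must be positive" }
        require(l.unitPrice.signum() >= 0) { "unit price cannot be negative" }
        total.add(l.quantity.multiply(l.unitPrice))  // caller's header total is ignored
    }
}
```

    This is the "never trust a financial aggregate sent over the wire"
    rule: 5000 sheets @ 0.04 totals 200 regardless of what the request
    body claimed.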
    
    End-to-end smoke test (the full demo loop)
    ------------------------------------------
    Reset Postgres, booted the app, ran:
    * Login as admin
    * POST /catalog/items → PAPER-A4
    * POST /partners → SUP-PAPER (SUPPLIER)
    * POST /inventory/locations → WH-MAIN
    * GET /inventory/balances?itemCode=PAPER-A4 → [] (no stock)
    * POST /orders/purchase-orders → PO-2026-0001 for 5000 sheets
      @ $0.04 = total $200.00 (recomputed from the line)
    * POST /purchase-orders/{id}/confirm → status CONFIRMED
    * POST /purchase-orders/{id}/receive body={"receivingLocationCode":"WH-MAIN"}
      → status RECEIVED
    * GET /inventory/balances?itemCode=PAPER-A4 → quantity=5000
    * GET /inventory/movements?itemCode=PAPER-A4 →
      PURCHASE_RECEIPT delta=5000 ref=PO:PO-2026-0001
    
    Then the FULL loop with the sales side from the previous chunk:
    * POST /partners → CUST-ACME (CUSTOMER)
    * POST /orders/sales-orders → SO-2026-0001 for 50 sheets
    * confirm + ship from WH-MAIN
    * GET /inventory/balances?itemCode=PAPER-A4 → quantity=4950 (5000-50)
    * GET /inventory/movements?itemCode=PAPER-A4 →
      PURCHASE_RECEIPT delta=5000  ref=PO:PO-2026-0001
      SALES_SHIPMENT   delta=-50   ref=SO:SO-2026-0001
    
    The framework's `InventoryApi.recordMovement` facade now has TWO
    callers — pbc-orders-sales (negative deltas, SALES_SHIPMENT) and
    pbc-orders-purchase (positive deltas, PURCHASE_RECEIPT) — feeding
    the same ledger from both sides.
    
    Failure paths verified:
    * Re-receive a RECEIVED PO → 400 "only CONFIRMED orders can be received"
    * Cancel a RECEIVED PO → 400 "issue a return-to-supplier flow instead"
    * Create a PO from a CUSTOMER-only partner → 400 "partner 'CUST-ONLY'
      is type CUSTOMER and cannot be the supplier of a purchase order"
    
    Regression: catalog uoms, identity users, partners, inventory,
    sales orders, purchase orders, printing-shop plates with i18n,
    metadata entities (15 now, was 13) — all still HTTP 2xx.
    
    Build
    -----
    * `./gradlew build`: 16 subprojects, 186 unit tests (was 175),
      all green. The 11 new tests cover the same shapes as the
      sales-order tests but inverted: unknown supplier, CUSTOMER-only
      rejection, BOTH-type acceptance, unknown item, empty lines,
      total recomputation, confirm/cancel state machine,
      receive-rejects-non-CONFIRMED, receive-walks-lines-with-positive-
      delta, cancel-rejects-RECEIVED, cancel-CONFIRMED-allowed.
    
    What was deferred
    -----------------
    * **RFQs** (request for quotation) and **supplier price catalogs**
      — both sit alongside POs, but neither is in v1.
    * **Partial receipts**. v1's RECEIVED is "all-or-nothing"; the
      supplier delivering 4500 of 5000 sheets is not yet modelled.
    * **Supplier returns / refunds**. The cancel-RECEIVED rejection
      message says "issue a return-to-supplier flow" — that flow
      doesn't exist yet.
    * **Three-way matching** (PO + receipt + invoice). Lands with
      pbc-finance.
    * **Multi-leg transfers**. TRANSFER_IN/TRANSFER_OUT exist in the
      movement enum but no service operation yet writes both legs
      in one transaction.
  • The killer demo finally works: place a sales order, ship it, watch
    inventory drop. This chunk lands the two pieces that close the loop:
    the inventory movement ledger (the audit-grade history of every
    stock change) and the sales-order /ship endpoint that calls
    InventoryApi.recordMovement to atomically debit stock for every line.
    
    This is the framework's FIRST cross-PBC WRITE flow. Every earlier
    cross-PBC call was a read (CatalogApi.findItemByCode,
    PartnersApi.findPartnerByCode, InventoryApi.findStockBalance).
    Shipping inverts that: pbc-orders-sales synchronously writes to
    inventory's tables (via the api.v1 facade) as a side effect of
    changing its own state, all in ONE Spring transaction.
    
    What landed
    -----------
    * New `inventory__stock_movement` table — append-only ledger
      (id, item_code, location_id FK, signed delta, reason enum,
      reference, occurred_at, audit cols). CHECK constraint
      `delta <> 0` rejects no-op rows. Indexes on item_code,
      location_id, the (item, location) composite, reference, and
      occurred_at. Migration is in its own changelog file
      (002-inventory-movement-ledger.xml) per the project convention
      that each new schema cut is a new file.
    * New `StockMovement` JPA entity + repository + `MovementReason`
      enum (RECEIPT, ISSUE, ADJUSTMENT, SALES_SHIPMENT, PURCHASE_RECEIPT,
      TRANSFER_OUT, TRANSFER_IN). Each value carries a documented sign
      convention; the service rejects mismatches (a SALES_SHIPMENT
      with positive delta is a caller bug, not silently coerced).
    * New `StockMovementService.record(...)` — the ONE entry point for
      changing inventory. Cross-PBC item validation via CatalogApi,
      local location validation, sign-vs-reason enforcement, and
      negative-balance rejection all happen BEFORE the write. The
      ledger row insert AND the balance row update happen in the
      SAME database transaction so the two cannot drift.
    * `StockBalanceService.adjust` refactored to delegate: it computes
      delta = newQty - oldQty and calls record(... ADJUSTMENT). The
      REST endpoint keeps its absolute-quantity semantics — operators
      type "the shelf has 47" not "decrease by 3" — but every
      adjustment now writes a ledger row too. A no-op adjustment
      (re-saving the same value) does NOT write a row, so the audit
      log doesn't fill with noise from operator clicks that didn't
      change anything.
    * New `StockMovementController` at `/api/v1/inventory/movements`:
      GET filters by itemCode, locationId, or reference (for "all
      movements caused by SO-2026-0001"); POST records a manual
      movement. Both protected by `inventory.stock.adjust`.
    * `InventoryApi` facade extended with `recordMovement(itemCode,
      locationCode, delta, reason: String, reference)`. The reason is
      a String in the api.v1 surface (not the local enum) so plug-ins
      don't import inventory's internal types — the closed set is
      documented on the interface. The adapter parses the string with
      a meaningful error on unknown values.
    * New `SHIPPED` status on `SalesOrderStatus`. Transitions:
      DRAFT → CONFIRMED → SHIPPED (terminal). Cancelling a SHIPPED
      order is rejected with "issue a return / refund flow instead".
    * New `SalesOrderService.ship(id, shippingLocationCode)`: walks
      every line, calls `inventoryApi.recordMovement(... -line.quantity
      reason="SALES_SHIPMENT" reference="SO:{order_code}")`, flips
      status to SHIPPED. The whole operation runs in ONE transaction
      so a failure on any line — bad item, bad location, would push
      balance negative — rolls back the order status change AND every
      other line's already-written movement. The customer never ends
      up with "5 of 7 lines shipped, status still CONFIRMED, ledger
      half-written".
    * New `POST /api/v1/orders/sales-orders/{id}/ship` endpoint with
      body `{"shippingLocationCode": "WH-MAIN"}`, gated by the new
      `orders.sales.ship` permission key.
    * `ShipSalesOrderRequest` is a single-arg Kotlin data class — same
      Jackson deserialization trap as `RefreshRequest`. Fixed with
      `@JsonCreator(mode = PROPERTIES) + @param:JsonProperty`. The
      trap is documented in the class KDoc.
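    The adjust-delegates-to-record refactor boils down to absolute
    quantity in, signed delta out, with a no-op short-circuit. A minimal
    sketch (names are illustrative; the real service also validates the
    item and location and persists both rows in one transaction):

```kotlin
import java.math.BigDecimal

class BalanceWithLedger(initial: BigDecimal) {
    var quantity: BigDecimal = initial
        private set
    val ledger = mutableListOf<BigDecimal>()   // ADJUSTMENT deltas, either sign

    /** Operator semantics: "the shelf has N", not "change by N". */
    fun adjust(newQuantity: BigDecimal) {
        require(newQuantity.signum() >= 0) { "quantity cannot be negative" }
        val delta = newQuantity.subtract(quantity)
        if (delta.signum() == 0) return        // no-op: no ledger noise
        ledger += delta                        // record(... ADJUSTMENT) stand-in
        quantity = newQuantity
    }
}
```

    Re-saving the same value writes nothing, so operator clicks that
    changed nothing never pollute the audit log.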
    
    End-to-end smoke test (the killer demo)
    ---------------------------------------
    Reset Postgres, booted the app, ran:
    * Login as admin
    * POST /catalog/items → PAPER-A4
    * POST /partners → CUST-ACME
    * POST /inventory/locations → WH-MAIN
    * POST /inventory/balances/adjust → quantity=1000
      (now writes a ledger row via the new path)
    * GET /inventory/movements?itemCode=PAPER-A4 →
      ADJUSTMENT delta=1000 ref=null
    * POST /orders/sales-orders → SO-2026-0001 (50 units of PAPER-A4)
    * POST /sales-orders/{id}/confirm → status CONFIRMED
    * POST /sales-orders/{id}/ship body={"shippingLocationCode":"WH-MAIN"}
      → status SHIPPED
    * GET /inventory/balances?itemCode=PAPER-A4 → quantity=950
      (1000 - 50)
    * GET /inventory/movements?itemCode=PAPER-A4 →
      ADJUSTMENT     delta=1000   ref=null
      SALES_SHIPMENT delta=-50    ref=SO:SO-2026-0001
    
    Failure paths verified:
    * Re-ship a SHIPPED order → 400 "only CONFIRMED orders can be shipped"
    * Cancel a SHIPPED order → 400 "issue a return / refund flow instead"
    * Place a 10000-unit order, confirm, try to ship from a 950-stock
      warehouse → 400 "stock movement would push balance for 'PAPER-A4'
      at location ... below zero (current=950.0000, delta=-10000.0000)";
      balance unchanged after the rollback (transaction integrity
      verified)
    
    Regression: catalog uoms, identity users, inventory locations,
    printing-shop plates with i18n, metadata entities — all still
    HTTP 2xx.
    
    Build
    -----
    * `./gradlew build`: 15 subprojects, 175 unit tests (was 163),
      all green. The 12 new tests cover:
      - StockMovementServiceTest (8): zero-delta rejection, positive
        SALES_SHIPMENT rejection, negative RECEIPT rejection, both
        signs allowed on ADJUSTMENT, unknown item via CatalogApi seam,
        unknown location, would-push-balance-negative rejection,
        new-row + existing-row balance update.
      - StockBalanceServiceTest, rewritten (5): negative-quantity
        early reject, delegation with computed positive delta,
        delegation with computed negative delta, no-op adjustment
        short-circuit (NO ledger row written), no-op on missing row
        creates an empty row at zero.
      - SalesOrderServiceTest, additions (3): ship rejects non-CONFIRMED,
        ship walks lines and calls recordMovement with negated quantity
        + correct reference, cancel rejects SHIPPED.
    
    What was deferred
    -----------------
    * **Event publication.** A `StockMovementRecorded` event would
      let pbc-finance and pbc-production react to ledger writes
      without polling. The event bus has been wired since P1.7 but
      no real cross-PBC flow uses it yet — that's the natural next
      chunk after this commit.
    * **Multi-leg transfers.** TRANSFER_OUT and TRANSFER_IN are in
      the enum but no service operation atomically writes both legs
      yet (writing both legs in one transaction is required to keep
      the total on-hand quantity invariant).
    * **Reservation / pick lists.** "Reserve 50 of PAPER-A4 for an
      unconfirmed order" is its own concept that lands later.
    * **Shipped-order returns / refunds.** The cancel-SHIPPED rule
      points the user at "use a return flow" — that flow doesn't
      exist yet. v1 says shipments are terminal.
  • The framework's authorization layer is now live. Until now, every
    authenticated user could do everything; the framework had only an
    authentication gate. This chunk adds method-level @RequirePermission
    annotations enforced by a Spring AOP aspect that consults the JWT's
    roles claim and a metadata-driven role-permission map.
    
    What landed
    -----------
    * New `Role` and `UserRole` JPA entities mapping the existing
      identity__role + identity__user_role tables (the schema was
      created in the original identity init but never wired to JPA).
      RoleJpaRepository + UserRoleJpaRepository with a JPQL query that
      returns a user's role codes in one round-trip.
    * `JwtIssuer.issueAccessToken(userId, username, roles)` now accepts a
      Set<String> of role codes and encodes them as a `roles` JWT claim
      (sorted for deterministic tests). Refresh tokens NEVER carry roles
      by design — see the rationale on `JwtIssuer.issueRefreshToken`. A
      role revocation propagates within one access-token lifetime
      (15 min default).
    * `JwtVerifier` reads the `roles` claim into `DecodedToken.roles`.
      Missing claim → empty set, NOT an error (refresh tokens, system
      tokens, and pre-P4.3 tokens all legitimately omit it).
    * `AuthService.login` now calls `userRoles.findRoleCodesByUserId(...)`
      before minting the access token. `AuthService.refresh` re-reads
      the user's roles too — so a refresh always picks up the latest
      set, since refresh tokens deliberately don't carry roles.
    * New `AuthorizationContext` ThreadLocal in `platform-security.authz`
      carrying an `AuthorizedPrincipal(id, username, roles)`. Separate
      from `PrincipalContext` (which lives in platform-persistence and
      carries only the principal id, for the audit listener). The two
      contexts coexist because the audit listener has no business
      knowing what roles a user has.
    * `PrincipalContextFilter` now populates BOTH contexts on every
      authenticated request, reading the JWT's `username` and `roles`
      claims via `Jwt.getClaimAsStringList("roles")`. The filter is the
      one and only place that knows about Spring Security types AND
      about both vibe_erp contexts; everything downstream uses just the
      Spring-free abstractions.
    * `PermissionEvaluator` Spring bean: takes a role set + permission
      key, returns boolean. Resolution chain:
      1. The literal `admin` role short-circuits to `true` for every
         key (the wildcard exists so the bootstrap admin can do
         everything from the very first boot without seeding a complete
         role-permission mapping).
      2. Otherwise consults an in-memory `Map<role, Set<permission>>`
         loaded from `metadata__role_permission` rows. The cache is
         rebuilt by `refresh()`, called from `VibeErpPluginManager`
         after the initial core load AND after every plug-in load.
      3. Empty role set is always denied. No implicit grants.
    * `@RequirePermission("...")` annotation in `platform-security.authz`.
      `RequirePermissionAspect` is a Spring AOP @Aspect with @Around
      advice that intercepts every annotated method, reads the current
      request's `AuthorizationContext`, calls
      `PermissionEvaluator.has(...)`, and either proceeds or throws
      `PermissionDeniedException`.
    * New `PermissionDeniedException` carrying the offending key.
      `GlobalExceptionHandler` maps it to HTTP 403 Forbidden with
      `"permission denied: 'partners.partner.deactivate'"` as the
      detail. The key IS surfaced to the caller (unlike the 401's
      generic "invalid credentials") because the SPA needs it to
      render a useful "your role doesn't include X" message and
      callers are already authenticated, so it's not an enumeration
      vector.
    * `BootstrapAdminInitializer` now creates the wildcard `admin`
      role on first boot and grants it to the bootstrap admin user.
    * `@RequirePermission` applied to four sensitive endpoints as the
      demo: `PartnerController.deactivate`,
      `StockBalanceController.adjust`, `SalesOrderController.confirm`,
      `SalesOrderController.cancel`. More endpoints will gain
      annotations as additional roles are introduced; v1 keeps the
      blast radius narrow.
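
    The resolution chain reads naturally as code. A minimal sketch with
    illustrative names — not the framework's actual PermissionEvaluator —
    showing the admin wildcard, the metadata-driven map lookup, and the
    empty-set denial:

    ```java
    import java.util.Map;
    import java.util.Set;

    // Sketch of the three-step resolution chain described above.
    class PermissionEvaluatorSketch {
        // grants: role -> permission keys, as loaded from metadata__role_permission
        static boolean has(Map<String, Set<String>> grants, Set<String> roles, String key) {
            // 3. Empty role set is always denied — no implicit grants.
            if (roles == null || roles.isEmpty()) return false;
            // 1. The literal admin role short-circuits every key to true.
            if (roles.contains("admin")) return true;
            // 2. Otherwise union the grants of every role the user holds.
            for (String role : roles) {
                if (grants.getOrDefault(role, Set.of()).contains(key)) return true;
            }
            return false;
        }
    }
    ```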
    
    End-to-end smoke test
    ---------------------
    Reset Postgres, booted the app, verified:
    * Admin login → JWT length 265 (was 241), decoded claims include
      `"roles":["admin"]`
    * Admin POST /sales-orders/{id}/confirm → 200, status DRAFT → CONFIRMED
      (admin wildcard short-circuits the permission check)
    * Inserted a 'powerless' user via raw SQL with no role assignments
      but copied the admin's password hash so login works
    * Powerless login → JWT length 247, decoded claims have NO roles
      field at all
    * Powerless POST /sales-orders/{id}/cancel → **403 Forbidden** with
      `"permission denied: 'orders.sales.cancel'"` in the body
    * Powerless DELETE /partners/{id} → **403 Forbidden** with
      `"permission denied: 'partners.partner.deactivate'"`
    * Powerless GET /sales-orders, /partners, /catalog/items → all 200
      (read endpoints have no @RequirePermission)
    * Admin regression: catalog uoms, identity users, inventory
      locations, printing-shop plates with i18n, metadata custom-fields
      endpoint — all still HTTP 2xx
    
    Build
    -----
    * `./gradlew build`: 15 subprojects, 163 unit tests (was 153),
      all green. The 10 new tests cover:
      - PermissionEvaluator: empty roles deny, admin wildcard, explicit
        role-permission grant, multi-role union, unknown role denial,
        malformed payload tolerance, currentHas with no AuthorizationContext,
        currentHas with bound context (8 tests).
      - JwtRoundTrip: roles claim round-trips through the access token,
        refresh token never carries roles even when asked (2 tests).
    
    What was deferred
    -----------------
    * **OIDC integration (P4.2)**. Built-in JWT only. The Keycloak-
      compatible OIDC client will reuse the same authorization layer
      unchanged — the roles will come from OIDC ID tokens instead of
      the local user store.
    * **Permission key validation at boot.** The framework does NOT
      yet check that every `@RequirePermission` value matches a
      declared metadata permission key. The plug-in linter is the
      natural place for that check to land later.
    * **Role hierarchy**. Roles are flat in v1; a role with permission
      X cannot inherit from another role. Adding a `parent_role` field
      on the role row is a non-breaking change later.
    * **Resource-aware permissions** ("the user owns THIS partner").
      v1 only checks the operation, not the operand. Resource-aware
      checks are post-v1.
    * **Composite (AND/OR) permission requirements**. A single key
      per call site keeps the contract simple. Composite requirements
      live in service code that calls `PermissionEvaluator.currentHas`
      directly.
    * **Role management UI / REST**. The framework can EVALUATE
      permissions but has no first-class endpoints for "create a
      role", "grant a permission to a role", "assign a role to a
      user". v1 expects these to be done via direct DB writes or via
      the future SPA's role editor (P3.x); the wiring above is
      intentionally policy-only, not management.
    zichun authored
  • The fifth real PBC and the first business workflow PBC. pbc-inventory
    proved a PBC could consume ONE cross-PBC facade (CatalogApi).
    pbc-orders-sales consumes TWO simultaneously (PartnersApi for the
    customer, CatalogApi for every line's item) in a single transaction —
    the most rigorous test of the modular monolith story so far. Neither
    source PBC is on the compile classpath; the Gradle build refuses any
    direct dependency. Spring DI wires the api.v1 interfaces to their
    concrete adapters at runtime.
    
    What landed
    -----------
    * New Gradle subproject `pbc/pbc-orders-sales` (15 modules total).
    * Two JPA entities, both extending `AuditedJpaEntity`:
      - `SalesOrder` (header) — code, partner_code (varchar, NOT a UUID
        FK to partners), status enum DRAFT/CONFIRMED/CANCELLED, order_date,
        currency_code (varchar(3)), total_amount numeric(18,4),
        ext jsonb. Eager-loaded `lines` collection because every read of
        the header is followed by a read of the lines in practice.
      - `SalesOrderLine` — sales_order_id FK, line_no, item_code (varchar,
        NOT a UUID FK to catalog), quantity, unit_price, currency_code.
        Per-line currency in the schema even though v1 enforces all-lines-
        match-header (so relaxing to multi-currency later needs no schema
        change).
        No `ext` jsonb on lines: lines are facts, not master records;
        custom fields belong on the header.
    * `SalesOrderService.create` performs **three independent
      cross-PBC validations** in one transaction:
      1. PartnersApi.findPartnerByCode → reject if null (covers unknown
         AND inactive partners; the facade hides them).
      2. PartnersApi result.type must be CUSTOMER or BOTH (a SUPPLIER-only
         partner cannot be the customer of a sales order).
      3. CatalogApi.findItemByCode for EVERY line → reject if null.
      Then it ALSO validates: at least one line, no duplicate line numbers,
      positive quantity, non-negative price, currency matches header.
      The header total is RECOMPUTED from the lines — the caller's value
      is intentionally ignored. Never trust a financial aggregate sent
      over the wire.
    * State machine enforced by `confirm()` and `cancel()`:
      - DRAFT → CONFIRMED   (confirm)
      - DRAFT → CANCELLED   (cancel from draft)
      - CONFIRMED → CANCELLED (cancel a confirmed order)
      Anything else throws with a descriptive message. CONFIRMED orders
      are immutable except for cancellation — the `update` method refuses
      to mutate a non-DRAFT order.
    * `update` with line items REPLACES the existing lines wholesale
      (PUT semantics for lines, PATCH for header columns). Partial line
      edits are not modelled because the typical "edit one line" UI
      gesture resolves to a full re-send anyway.
    * REST: `/api/v1/orders/sales-orders` (CRUD + `/confirm` + `/cancel`).
      State transitions live on dedicated POST endpoints rather than
      PATCH-based status writes — they have side effects (lines become
      immutable, downstream PBCs will receive events in future versions),
      and sentinel-status writes hide that.
    * New api.v1 facade `org.vibeerp.api.v1.ext.orders.SalesOrdersApi`
      with `findByCode`, `findById`, `SalesOrderRef`, `SalesOrderLineRef`.
      Fifth ext.* package after identity, catalog, partners, inventory.
      Sets up the next consumers: pbc-production for work orders, pbc-finance
      for invoicing, the printing-shop reference plug-in for the
      quote-to-job-card workflow.
    * `SalesOrdersApiAdapter` runtime implementation. Cancelled orders ARE
      returned by the facade (unlike inactive items / partners which are
      hidden) because downstream consumers may legitimately need to react
      to a cancellation — release a production slot, void an invoice, etc.
    * `orders-sales.yml` metadata declaring 2 entities, 5 permission keys,
      1 menu entry.
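
    The transition table can be sketched as follows (illustrative names;
    the real checks live inside `SalesOrderService.confirm()` and
    `cancel()`):

    ```java
    // Sketch of the DRAFT/CONFIRMED/CANCELLED state machine.
    class SalesOrderStateSketch {
        enum Status { DRAFT, CONFIRMED, CANCELLED }

        static Status confirm(Status current) {
            if (current != Status.DRAFT) {
                throw new IllegalStateException("only DRAFT can be confirmed, was " + current);
            }
            return Status.CONFIRMED;
        }

        static Status cancel(Status current) {
            // DRAFT -> CANCELLED and CONFIRMED -> CANCELLED are both legal
            if (current == Status.CANCELLED) {
                throw new IllegalStateException("order is already CANCELLED");
            }
            return Status.CANCELLED;
        }
    }
    ```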
    
    Build enforcement (still load-bearing)
    --------------------------------------
    The root `build.gradle.kts` STILL refuses any direct dependency from
    `pbc-orders-sales` to either `pbc-partners` or `pbc-catalog`. Try
    adding either as `implementation(project(...))` and the build fails
    at configuration time with the architectural violation. The
    cross-PBC interfaces live in api-v1; the concrete adapters live in
    their owning PBCs; Spring DI assembles them at runtime via the
    bootstrap @ComponentScan. pbc-orders-sales sees only the api.v1
    interfaces.
    
    End-to-end smoke test
    ---------------------
    Reset Postgres, booted the app, hit:
    * POST /api/v1/catalog/items × 2  → PAPER-A4, INK-CYAN
    * POST /api/v1/partners/partners → CUST-ACME (CUSTOMER), SUP-ONLY (SUPPLIER)
    * POST /api/v1/orders/sales-orders → 201, two lines, total 386.50
      (5000 × 0.05 + 3 × 45.50 = 250.00 + 136.50, correctly recomputed)
    * POST .../sales-orders with FAKE-PARTNER → 400 with the meaningful
      message "partner code 'FAKE-PARTNER' is not in the partners
      directory (or is inactive)"
    * POST .../sales-orders with SUP-ONLY → 400 "partner 'SUP-ONLY' is
      type SUPPLIER and cannot be the customer of a sales order"
    * POST .../sales-orders with FAKE-ITEM line → 400 "line 1: item code
      'FAKE-ITEM' is not in the catalog (or is inactive)"
    * POST /{id}/confirm → status DRAFT → CONFIRMED
    * PATCH the CONFIRMED order → 400 "only DRAFT orders are mutable"
    * Re-confirm a CONFIRMED order → 400 "only DRAFT can be confirmed"
    * POST /{id}/cancel a CONFIRMED order → status CANCELLED (allowed)
    * SELECT * FROM orders_sales__sales_order — single row, total
      386.5000, status CANCELLED
    * SELECT * FROM orders_sales__sales_order_line — two rows in line_no
      order with the right items and quantities
    * GET /api/v1/_meta/metadata/entities → 13 entities now (was 11)
    * Regression: catalog uoms, identity users, partners, inventory
      locations, printing-shop plates with i18n (Accept-Language: zh-CN)
      all still HTTP 2xx.
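
    The recomputed total from the smoke test is pure arithmetic over the
    lines. A sketch with assumed line shapes (not the service's actual
    types), using BigDecimal as the numeric(18,4) columns demand:

    ```java
    import java.math.BigDecimal;
    import java.util.List;

    // The header total is derived from the lines, never taken from the caller.
    class TotalSketch {
        record Line(BigDecimal quantity, BigDecimal unitPrice) {}

        static BigDecimal total(List<Line> lines) {
            return lines.stream()
                    .map(l -> l.quantity().multiply(l.unitPrice()))
                    .reduce(BigDecimal.ZERO, BigDecimal::add)
                    .setScale(4); // numeric(18,4) column scale; exact, no rounding needed
        }
    }
    ```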
    
    Build
    -----
    * `./gradlew build`: 15 subprojects, 153 unit tests (was 139),
      all green. The 14 new tests cover: unknown/SUPPLIER-only/BOTH-type
      partner paths, unknown item path, empty/duplicate-lineno line
      arrays, negative-quantity early reject (verifies CatalogApi NOT
      consulted), currency mismatch reject, total recomputation, all
      three state-machine transitions and the rejected ones.
    
    What was deferred
    -----------------
    * **Sales-order shipping**. Confirmed orders cannot yet ship, because
      shipping requires atomically debiting inventory — which needs the
      movement ledger that was deferred from P5.3. The pair of chunks
      (movement ledger + sales-order shipping flow) is the natural next
      combination.
    * **Multi-currency lines**. The schema column is per-line but the
      service enforces all-lines-match-header in v1. Relaxing this is a
      service-only change.
    * **Quotes** (DRAFT-but-customer-visible) and **deliveries** (the
      thing that triggers shipping). v1 only models the order itself.
    * **Pricing engine / discounts**. v1 takes the unit price the caller
      sends. A real ERP has a price book lookup, customer-specific
      pricing, volume discounts, promotional pricing — all of which slot
      in BEFORE the line price is set, leaving the schema unchanged.
    * **Tax**. v1 totals are pre-tax. Tax calculation is its own PBC
      (and a regulatory minefield) that lands later.
    zichun authored
  • The fourth real PBC, and the first one that CONSUMES another PBC's
    api.v1.ext facade. Until now every PBC was a *provider* of an
    ext.<pbc> interface (identity, catalog, partners). pbc-inventory is
    the first *consumer*: it injects org.vibeerp.api.v1.ext.catalog.CatalogApi
    to validate item codes before adjusting stock. This proves the
    cross-PBC contract works in both directions, exactly as guardrail #9
    requires.
    
    What landed
    -----------
    * New Gradle subproject `pbc/pbc-inventory` (14 modules total now).
    * Two JPA entities, both extending `AuditedJpaEntity`:
      - `Location` — code, name, type (WAREHOUSE/BIN/VIRTUAL), active,
        ext jsonb. Single table for all location levels with a type
        discriminator (no recursive self-reference in v1; YAGNI for the
        "one warehouse, handful of bins" shape every printing shop has).
      - `StockBalance` — item_code (varchar, NOT a UUID FK), location_id
        FK, quantity numeric(18,4). The item_code is deliberately a
        string FK that references nothing because pbc-inventory has no
        compile-time link to pbc-catalog — the cross-PBC link goes
        through CatalogApi at runtime. UNIQUE INDEX on
        (item_code, location_id) is the primary integrity guarantee;
        UUID id is the addressable PK. CHECK (quantity >= 0).
    * `LocationService` and `StockBalanceService` with full CRUD +
      adjust semantics. ext jsonb on Location goes through ExtJsonValidator
      (P3.4 — Tier 1 customisation).
    * `StockBalanceService.adjust(itemCode, locationId, quantity)`:
      1. Reject negative quantity.
      2. **Inject CatalogApi**, call `findItemByCode(itemCode)`, reject
         if null with a meaningful 400. THIS is the cross-PBC seam test.
      3. Verify the location exists.
      4. SELECT-then-save upsert on (item_code, location_id) — single
         row per cell, mutated in place when the row exists, created
         when it doesn't. Single-instance deployment makes the
         read-modify-write race window academic.
    * REST: `/api/v1/inventory/locations` (CRUD), `/api/v1/inventory/balances`
      (GET with itemCode or locationId filters, POST /adjust).
    * New api.v1 facade `org.vibeerp.api.v1.ext.inventory` with
      `InventoryApi.findStockBalance(itemCode, locationCode)` +
      `totalOnHand(itemCode)` + `StockBalanceRef`. Fourth ext.* package
      after identity, catalog, partners. Sets up the next consumers
      (sales orders, purchase orders, the printing-shop plug-in's
      "do we have enough paper for this job?").
    * `InventoryApiAdapter` runtime implementation in pbc-inventory.
    * `inventory.yml` metadata declaring 2 entities, 6 permission keys,
      2 menu entries.
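
    The four adjust steps compress into a short sketch. Names are
    illustrative (the real CatalogApi lives in
    org.vibeerp.api.v1.ext.catalog), and the balance map stands in for
    the (item_code, location_id) row — keyed by item only for brevity:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Sketch of adjust: validate, consult the cross-PBC facade, then upsert.
    class AdjustSketch {
        interface CatalogApi { Object findItemByCode(String code); }

        static void adjust(Map<String, Double> balances, CatalogApi catalog,
                           String itemCode, double quantity) {
            if (quantity < 0) {
                // 1. Reject negatives BEFORE the cross-PBC call
                throw new IllegalArgumentException("stock quantity must be non-negative");
            }
            if (catalog.findItemByCode(itemCode) == null) {
                // 2. The cross-PBC seam: unknown (or inactive) items are rejected
                throw new IllegalArgumentException(
                        "item code '" + itemCode + "' is not in the catalog (or is inactive)");
            }
            // 3 is the location check (elided here); 4. upsert: one cell,
            // mutated in place when it exists, created when it doesn't
            balances.put(itemCode, quantity);
        }
    }
    ```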
    
    Build enforcement (the load-bearing bit)
    ----------------------------------------
    The root build.gradle.kts STILL refuses any direct dependency from
    pbc-inventory to pbc-catalog. Try adding `implementation(project(
    ":pbc:pbc-catalog"))` to pbc-inventory's build.gradle.kts and the
    build fails at configuration time with "Architectural violation in
    :pbc:pbc-inventory: depends on :pbc:pbc-catalog". The CatalogApi
    interface is in api-v1; the CatalogApiAdapter implementation is in
    pbc-catalog; Spring DI wires them at runtime via the bootstrap
    @ComponentScan. pbc-inventory only ever sees the interface.
    
    End-to-end smoke test
    ---------------------
    Reset Postgres, booted the app, hit:
    * POST /api/v1/inventory/locations → 201, "WH-MAIN" warehouse
    * POST /api/v1/catalog/items → 201, "PAPER-A4" sheet item
    * POST /api/v1/inventory/balances/adjust with itemCode=PAPER-A4 → 200,
      the cross-PBC catalog lookup succeeded
    * POST .../adjust with itemCode=FAKE-ITEM → 400 with the meaningful
      message "item code 'FAKE-ITEM' is not in the catalog (or is inactive)"
      — the cross-PBC seam REJECTS unknown items as designed
    * POST .../adjust with quantity=-5 → 400 "stock quantity must be
      non-negative", caught BEFORE the CatalogApi mock would be invoked
    * POST .../adjust again with quantity=7500 → 200; SELECT shows ONE
      row with id unchanged and quantity = 7500 (upsert mutates, not
      duplicates)
    * GET /api/v1/inventory/balances?itemCode=PAPER-A4 → the row, with
      scale-4 numeric serialised verbatim
    * GET /api/v1/_meta/metadata/entities → 11 entities now (was 9 before
      Location + StockBalance landed)
    * Regression: catalog uoms, identity users, partners, printing-shop
      plates with i18n (Accept-Language: zh-CN), Location custom-fields
      endpoint all still HTTP 2xx.
    
    Build
    -----
    * `./gradlew build`: 14 subprojects, 139 unit tests (was 129),
      all green. The 10 new tests cover Location CRUD + the StockBalance
      adjust path with mocked CatalogApi: unknown item rejection, unknown
      location rejection, negative-quantity early reject (verifies
      CatalogApi is NOT consulted), happy-path create, and upsert
      (existing row mutated, save() not called because @Transactional
      flushes the JPA-managed entity on commit).
    
    What was deferred
    -----------------
    * `inventory__stock_movement` append-only ledger. The current operation
      is "set the quantity"; receipts/issues/transfers as discrete events
      with audit trail land in a focused follow-up. The balance row will
      then be regenerated from the ledger via a Liquibase backfill.
    * Negative-balance / over-issue prevention. The CHECK constraint
      blocks SET to a negative value, but there's no concept of "you
      cannot ISSUE more than is on hand" yet because there is no
      separate ISSUE operation — only absolute SET.
    * Lots, batches, serial numbers, expiry dates. Plenty of printing
      shops need none of these; the ones that do can either wait for
      the lot/serial chunk later or add the columns via Tier 1 custom
      fields on Location for now.
    * Cross-warehouse transfer atomicity (debit one, credit another in
      one transaction). Same — needs the ledger.
    zichun authored
  • The keystone of the framework's "key user adds fields without code"
    promise. A YAML declaration is now enough to add a typed custom field
    to an existing entity, validate it on every save, and surface it to
    the SPA / OpenAPI / AI agent. This is the bit that makes vibe_erp a
    *framework* instead of a fork-per-customer app.
    
    What landed
    -----------
    * Extended `MetadataYamlFile` with a top-level `customFields:` section
      and `CustomFieldYaml` + `CustomFieldTypeYaml` wire-format DTOs. The
      YAML uses a flat `kind: decimal / scale: 2` discriminator instead of
      Jackson polymorphic deserialization so plug-in authors don't have to
      nest type configs and api.v1's `FieldType` stays free of Jackson
      imports.
    * `MetadataLoader.doLoad` now upserts `metadata__custom_field` rows
      alongside entities/permissions/menus, with the same delete-by-source
      idempotency. `LoadResult` carries the count for the boot log.
    * `CustomFieldRegistry` reads every `metadata__custom_field` row from
      the database and builds an in-memory `Map<entityName, List<CustomField>>`
      for the validator's hot path. `refresh()` is called by
      `VibeErpPluginManager` after the initial core load AND after every
      plug-in load, so a freshly-installed plug-in's custom fields are
      immediately enforceable. ConcurrentHashMap under the hood; reads
      are lock-free.
    * `ExtJsonValidator` is the on-save half: takes (entityName, ext map),
      walks the declared fields, coerces each value to its native type,
      and returns the canonicalised map (or throws IllegalArgumentException
      with ALL violations joined for a single 400). Per-FieldType rules:
      - String: maxLength enforced.
      - Integer: accepts Number and numeric String, rejects garbage.
      - Decimal: precision/scale enforced; preserved as plain string in
        canonical form so JSON encoding doesn't lose trailing zeros.
      - Boolean: accepts true/false (case-insensitive).
      - Date / DateTime: ISO-8601 parse via java.time.
      - Uuid: java.util.UUID parse.
      - Enum: must be in declared `allowedValues`.
      - Money / Quantity / Json / Reference: pass-through (Reference target
        existence check pending the cross-PBC EntityRegistry seam).
      Unknown ext keys are rejected, with the entity's name and the
      offending keys listed. ALL violations are returned in one response
      rather than failing on the first, so a form submitter fixes
      everything in one round-trip.
    * `Partner` is the first PBC entity to wire ext through the validator:
      `CreatePartnerRequest` and `UpdatePartnerRequest` accept an
      `ext: Map<String, Any?>?`; `PartnerService.create/update` calls
      `extValidator.validate("Partner", ext)` and persists the canonical
      JSON to the existing `partners__partner.ext` JSONB column;
      `PartnerResponse` parses it back so callers see what they wrote.
    * `partners.yml` now declares two custom fields on Partner —
      `partners_credit_limit` (Decimal precision=14, scale=2) and
      `partners_industry` (Enum of printing/publishing/packaging/other) —
      with English and Chinese labels. Tagged `source=core` so the
      framework has something to demo from a fresh boot.
    * New public `GET /api/v1/_meta/metadata/custom-fields/{entityName}`
      endpoint serves the api.v1 runtime view of declarations from the
      in-memory registry (so it reflects every refresh) for the SPA's form
      builder, the OpenAPI generator, and the AI agent function catalog.
      The existing `GET /api/v1/_meta/metadata` endpoint also gained a
      `customFields` list.
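
    The all-violations-in-one-pass rule, sketched against the two demo
    fields above (illustrative only — not the real ExtJsonValidator,
    which reads its declarations from CustomFieldRegistry):

    ```java
    import java.math.BigDecimal;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Collect EVERY violation, never stop at the first.
    class ExtValidatorSketch {
        static List<String> validate(Map<String, Object> ext) {
            List<String> violations = new ArrayList<>();
            Set<String> declared = Set.of("partners_credit_limit", "partners_industry");
            for (String key : ext.keySet()) {
                if (!declared.contains(key)) {
                    violations.add("ext contains undeclared key(s) for 'Partner': [" + key + "]");
                }
            }
            Object limit = ext.get("partners_credit_limit");
            if (limit != null) {
                int scale = new BigDecimal(limit.toString()).scale();
                if (scale > 2) {
                    violations.add("ext.partners_credit_limit: decimal scale " + scale
                            + " exceeds declared scale 2");
                }
            }
            Object industry = ext.get("partners_industry");
            Set<String> allowed = Set.of("printing", "publishing", "packaging", "other");
            if (industry != null && !allowed.contains(industry.toString())) {
                violations.add("ext.partners_industry: value '" + industry
                        + "' is not in the allowed set");
            }
            return violations; // the caller joins these into a single 400
        }
    }
    ```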
    
    End-to-end smoke test
    ---------------------
    Reset Postgres, booted the app, verified:
    * Boot log: `CustomFieldRegistry: refreshed 2 custom fields across 1
      entities (0 malformed rows skipped)` — twice (after core load and
      after plug-in load).
    * `GET /api/v1/_meta/metadata/custom-fields/Partner` → both declarations
      with their labels.
    * `POST /api/v1/partners/partners` with `ext = {credit_limit: "50000.00",
      industry: "printing"}` → 201; the response echoes the canonical map.
    * `POST` with `ext = {credit_limit: "1.234", industry: "unicycles",
      rogue: "x"}` → 400 with all THREE violations in one body:
      "ext contains undeclared key(s) for 'Partner': [rogue]; ext.partners_credit_limit:
      decimal scale 3 exceeds declared scale 2; ext.partners_industry:
      value 'unicycles' is not in allowed set [printing, publishing,
      packaging, other]".
    * `SELECT ext FROM partners__partner WHERE code = 'CUST-EXT-OK'` →
      `{"partners_industry": "printing", "partners_credit_limit": "50000.00"}`
      — the canonical JSON is in the JSONB column verbatim.
    * Regression: catalog uoms, identity users, partners list, and the
      printing-shop plug-in's POST /plates (with i18n via Accept-Language:
      zh-CN) all still HTTP 2xx.
    
    Build
    -----
    * `./gradlew build`: 13 subprojects, 129 unit tests (was 118), all
      green. The 11 new tests cover each FieldType variant, multi-violation
      reporting, required missing rejection, unknown key rejection, and the
      null-value-dropped case.
    
    What was deferred
    -----------------
    * JPA listener auto-validation. Right now PartnerService explicitly
      calls extValidator.validate(...) before save; Item, Uom, User and
      the plug-in tables don't yet. Promoting the validator to a JPA
      PrePersist/PreUpdate listener attached to a marker interface
      HasExt is the right next step but is its own focused chunk.
    * Reference target existence check. FieldType.Reference passes
      through unchanged because the cross-PBC EntityRegistry that would
      let the validator look up "does an instance with this id exist?"
      is a separate seam landing later.
    * The customization UI (Tier 1 self-service add a field via the SPA)
      is P3.3 — the runtime enforcement is done, the editor is not.
    * Custom-field-aware permissions. The `pii: true` flag is in the
      metadata but not yet read by anything; the DSAR/erasure pipeline
      that will consume it is post-v1.0.
    zichun authored
  • The third real PBC. Validates the modular-monolith template against a
    parent-with-children aggregate (Partner → Addresses → Contacts), where
    the previous two PBCs only had single-table or two-independent-table
    shapes.
    
    What landed
    -----------
    * New Gradle subproject `pbc/pbc-partners` (12 modules total now).
    * Three JPA entities, all extending `AuditedJpaEntity`:
      - `Partner` — code, name, type (CUSTOMER/SUPPLIER/BOTH), tax_id,
        website, email, phone, active, ext jsonb. Single-table for both
        customers and suppliers because the role flag is a property of
        the relationship, not the organisation.
      - `Address` — partner_id FK, address_type (BILLING/SHIPPING/OTHER),
        line1/line2/city/region/postal_code/country_code (ISO 3166-1),
        is_primary. Two free address lines + structured city/region/code
        is the smallest set that round-trips through every postal system.
      - `Contact` — partner_id FK, full_name, role, email, phone, active.
        PII-tagged in metadata YAML for the future audit/export tooling.
    * Spring Data JPA repos, application services with full CRUD and the
      invariants below, REST controllers under
      `/api/v1/partners/partners` (+ nested addresses, contacts).
    * `partners-init.xml` Liquibase changelog with the three tables, FKs,
      GIN index on `partner.ext`, indexes on type/active/country.
    * New api.v1 facade `org.vibeerp.api.v1.ext.partners` with
      `PartnersApi` + `PartnerRef`. Third `ext.<pbc>` after identity and
      catalog. Inactive partners hidden at the facade boundary.
    * `PartnersApiAdapter` runtime implementation in pbc-partners, never
      leaking JPA entity types.
    * `partners.yml` metadata declaring all 3 entities, 12 permission
      keys, 1 menu entry. Picked up automatically by `MetadataLoader`.
    * 15 new unit tests across `PartnerServiceTest`, `AddressServiceTest`
      and `ContactServiceTest` (mockk-based, mirroring catalog tests).
    
    Invariants enforced in code (not blindly delegated to the DB)
    -------------------------------------------------------------
    * Partner code uniqueness — explicit check produces a 400 with a real
      message instead of a 500 from the unique-index violation.
    * Partner code is NOT updatable — every external reference uses code,
      so renaming is a data-migration concern, not an API call.
    * Partner deactivate cascades to contacts (also flipped to inactive).
      Addresses are NOT touched (no `active` column — they exist or they
      don't). Verified end-to-end against Postgres.
    * "Primary" flag is at most one per (partner, address_type). When a
      new/updated address is marked primary, all OTHER primaries of the
      same type for the same partner are demoted in the same transaction.
    * Addresses and contacts reject operations on unknown partners
      up-front to give better errors than the FK-violation.
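
    The primary-flag demotion invariant as a sketch (illustrative mutable
    flags; the real demotion is a repository update committed in the same
    transaction as the new primary):

    ```java
    import java.util.List;

    // At most one primary per (partner, address_type): promoting one
    // address demotes every other primary of the SAME type.
    class PrimaryAddressSketch {
        static class Address {
            final String type; boolean primary;
            Address(String type, boolean primary) { this.type = type; this.primary = primary; }
        }

        static void markPrimary(List<Address> partnerAddresses, Address chosen) {
            for (Address a : partnerAddresses) {
                if (a != chosen && a.type.equals(chosen.type)) {
                    a.primary = false; // demote the old primary of the same type
                }
            }
            chosen.primary = true;
        }
    }
    ```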
    
    End-to-end smoke test
    ---------------------
    Reset Postgres, booted the app, hit:
    * POST /api/v1/auth/login (admin) → JWT
    * POST /api/v1/partners/partners (CUSTOMER, SUPPLIER) → 201
    * GET  /api/v1/partners/partners → lists both
    * GET  /api/v1/partners/partners/by-code/CUST-ACME → resolves
    * POST /api/v1/partners/partners (dup code) → 400 with real message
    * POST .../{id}/addresses (BILLING, primary) → 201
    * POST .../{id}/contacts → 201
    * DELETE /api/v1/partners/partners/{id} → 204; partner active=false
    * GET  .../contacts → contact ALSO active=false (cascade verified)
    * GET  /api/v1/_meta/metadata/entities → 3 partners entities present
    * GET  /api/v1/_meta/metadata/permissions → 12 partners permissions
    * Regression: catalog UoMs/items, identity users, printing-shop
      plug-in plates all still HTTP 200.
    
    Build
    -----
    * `./gradlew build`: 12 subprojects, 107 unit tests, all green
      (was 11 subprojects / 92 tests before this commit).
    * The architectural rule still enforced: pbc-partners depends on
      api-v1 + platform-persistence + platform-security only — no
      cross-PBC dep, no platform-bootstrap dep.
    
    What was deferred
    -----------------
    * Permission enforcement on contact endpoints (P4.3). Currently plain
      authenticated; the metadata declares the planned `partners.contact.*`
      keys for when @RequirePermission lands.
    * Per-country address structure layered on top via metadata forms
      (P3.x). The current schema is the smallest universal subset.
    * `deletePartnerCompletely` — out of scope for v1; should be a
      separate "data scrub" admin tool, not a routine API call.
    zichun authored
  • Adds the foundation for the entire Tier 1 customization story. Core
    PBCs and plug-ins now ship YAML files declaring their entities,
    permissions, and menus; a `MetadataLoader` walks the host classpath
    and each plug-in JAR at boot, upserts the rows tagged with their
    source, and exposes them at a public REST endpoint so the future
    SPA, AI-agent function catalog, OpenAPI generator, and external
    introspection tooling can all see what the framework offers without
    scraping code.
    
    What landed:
    
    * New `platform/platform-metadata/` Gradle subproject. Depends on
      api-v1 + platform-persistence + jackson-yaml + spring-jdbc.
    
    * `MetadataYamlFile` DTOs (entities, permissions, menus). Forward-
      compatible: unknown top-level keys are ignored, so a future plug-in
      built against a newer schema (forms, workflows, rules, translations)
      loads cleanly on an older host that doesn't know those sections yet.
    
    * `MetadataLoader` with two entry points:
    
        loadCore() — uses Spring's PathMatchingResourcePatternResolver
          against the host classloader. Finds every
          classpath*:META-INF/vibe-erp/metadata/*.yml across all jars
          contributing to the application. Tagged source='core'.
    
        loadFromPluginJar(pluginId, jarPath) — opens ONE specific
          plug-in JAR via java.util.jar.JarFile and walks its entries
          directly. This is critical: a plug-in's PluginClassLoader is
          parent-first, so a classpath*: scan against it would ALSO
          pick up the host's metadata files via parent classpath. We
          saw this in the first smoke run — the plug-in source ended
          up with 6 entities (the plug-in's 2 + the host's 4) before
          the fix. Walking the JAR file directly guarantees only the
          plug-in's own files load. Tagged source='plugin:<id>'.
    
      Both entry points use the same delete-then-insert idempotent core
      (doLoad). Loading the same source twice produces the same final
      state. User-edited metadata (source='user') is NEVER touched by
      either path — it survives boot, plug-in install, and plug-in
      upgrade. This is what lets a future SPA "Customize" UI add custom
      fields without fearing they'll be wiped on the next deploy.
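
    The direct-JAR walk can be sketched in plain Java. This is illustrative
    only — the temp-jar setup stands in for a real plug-in JAR, and the
    class and method names are not the real MetadataLoader:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class JarWalkSketch {
    static final String PREFIX = "META-INF/vibe-erp/metadata/";

    // Walk ONE jar's entries directly -- never the classloader -- so a
    // parent-first PluginClassLoader cannot leak host metadata into the scan.
    static List<String> metadataEntries(File jar) throws Exception {
        List<String> found = new ArrayList<>();
        try (JarFile jf = new JarFile(jar)) {
            Enumeration<JarEntry> entries = jf.entries();
            while (entries.hasMoreElements()) {
                JarEntry e = entries.nextElement();
                if (!e.isDirectory()
                        && e.getName().startsWith(PREFIX)
                        && e.getName().endsWith(".yml")) {
                    found.add(e.getName());
                }
            }
        }
        return found;
    }

    public static void main(String[] args) throws Exception {
        // Build a stand-in plug-in jar: one metadata file, one class entry.
        File jar = File.createTempFile("plugin", ".jar");
        jar.deleteOnExit();
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new JarEntry(PREFIX + "printing-shop.yml"));
            out.write("entities: []".getBytes());
            out.closeEntry();
            out.putNextEntry(new JarEntry("org/example/PluginMain.class")); // ignored by the walk
            out.closeEntry();
        }
        System.out.println(metadataEntries(jar));
    }
}
```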
    
    * `VibeErpPluginManager.afterPropertiesSet()` now calls
      metadataLoader.loadCore() at the very start, then walks plug-ins
      and calls loadFromPluginJar(...) for each one between Liquibase
      migration and start(context). Order is guaranteed: core → linter
      → migrate → metadata → start. The CommandLineRunner I originally
      put `loadCore()` in turned out to be wrong because Spring runs
      CommandLineRunners AFTER InitializingBean.afterPropertiesSet(),
      so the plug-in metadata was loading BEFORE core — the wrong way
      around. Calling loadCore() inline in the plug-in manager fixes
      the ordering without any @Order(...) gymnastics.
    
    * `MetadataController` exposes:
        GET /api/v1/_meta/metadata           — all three sections
        GET /api/v1/_meta/metadata/entities  — entities only
        GET /api/v1/_meta/metadata/permissions
        GET /api/v1/_meta/metadata/menus
      Public allowlist (covered by the existing /api/v1/_meta/** rule
      in SecurityConfiguration). The metadata is intentionally non-
      sensitive — entity names, permission keys, menu paths. Nothing
      in here is PII or secret; the SPA needs to read it before the
      user has logged in.
    
    * YAML files shipped:
      - pbc-identity/META-INF/vibe-erp/metadata/identity.yml
        (User + Role entities, 6 permissions, Users + Roles menus)
      - pbc-catalog/META-INF/vibe-erp/metadata/catalog.yml
        (Item + Uom entities, 7 permissions, Items + UoMs menus)
      - reference plug-in/META-INF/vibe-erp/metadata/printing-shop.yml
        (Plate + InkRecipe entities, 5 permissions, Plates + Inks menus
        in a "Printing shop" section)
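
    As a sketch, the shipped files follow roughly this shape — the field
    names below are assumptions inferred from the sections described above,
    not the exact schema:

```yaml
# hypothetical shape of identity.yml -- field names are illustrative
entities:
  - name: User
  - name: Role
permissions:
  - key: identity.role.read
  - key: identity.role.create
  - key: identity.role.assign
menus:
  - section: System
    label: Users
    path: /users
  - section: System
    label: Roles
    path: /roles
```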
    
    Tests: 4 MetadataLoaderTest cases (loadFromPluginJar happy paths,
    mixed sections, blank pluginId rejection, missing-file no-op wipe)
    + 7 MetadataYamlParseTest cases (DTO mapping, optional fields,
    section defaults, forward-compat unknown keys). Total now
    **92 unit tests** across 11 modules, all green.
    
    End-to-end smoke test against fresh Postgres + plug-in loaded:
    
      Boot logs:
        MetadataLoader: source='core' loaded 4 entities, 13 permissions,
          4 menus from 2 file(s)
        MetadataLoader: source='plugin:printing-shop' loaded 2 entities,
          5 permissions, 2 menus from 1 file(s)
    
      HTTP smoke (everything green):
        GET /api/v1/_meta/metadata (no auth)              → 200
          6 entities, 18 permissions, 6 menus
          entity names: User, Role, Item, Uom, Plate, InkRecipe
          menu sections: Catalog, Printing shop, System
        GET /api/v1/_meta/metadata/entities                → 200
        GET /api/v1/_meta/metadata/menus                   → 200
    
      Direct DB verification:
        metadata__entity:    core=4, plugin:printing-shop=2
        metadata__permission: core=13, plugin:printing-shop=5
        metadata__menu:      core=4, plugin:printing-shop=2
    
      Idempotency: restart the app, identical row counts.
    
      Existing endpoints regression:
        GET /api/v1/identity/users (Bearer)               → 1 user
        GET /api/v1/catalog/uoms (Bearer)                  → 15 UoMs
        GET /api/v1/plugins/printing-shop/ping (Bearer)    → 200
    
    Bugs caught and fixed during the smoke test:
    
      • The first attempt loaded core metadata via a CommandLineRunner
        annotated @Order(HIGHEST_PRECEDENCE) and per-plug-in metadata
        inline in VibeErpPluginManager.afterPropertiesSet(). Spring
        runs all InitializingBeans BEFORE any CommandLineRunner, so
        the plug-in metadata loaded first and the core load came
        second — wrong order. Fix: drop CoreMetadataInitializer
        entirely; have the plug-in manager call metadataLoader.loadCore()
        directly at the start of afterPropertiesSet().
    
      • The first attempt's plug-in load used
        metadataLoader.load(pluginClassLoader, ...) which used Spring's
        PathMatchingResourcePatternResolver against the plug-in's
        classloader. PluginClassLoader is parent-first, so the resolver
        enumerated BOTH the plug-in's own JAR AND the host classpath's
        metadata files, tagging core entities as source='plugin:<id>'
        and corrupting the seed counts. Fix: refactor MetadataLoader
        to expose loadFromPluginJar(pluginId, jarPath) which opens
        the plug-in JAR directly via java.util.jar.JarFile and walks
        its entries — never asking the classloader at all. The
        api-v1 surface didn't change.
    
      • Two KDoc comments contained the literal string `*.yml` after
        a `/` character (`/metadata/*.yml`), forming the `/*` pattern
        that Kotlin's lexer treats as a nested-comment opener. The
        file failed to compile with "Unclosed comment". This is the
        third time I've hit this trap; the fix was rewriting both KDocs
        to avoid the literal `/*` sequence.
    
      • The MetadataLoaderTest's hand-rolled JAR builder didn't include
        explicit directory entries for parent paths. Real Gradle JARs
        do include them, and Spring's PathMatchingResourcePatternResolver
        needs them to enumerate via classpath*:. Fixed the test helper
        to write directory entries for every parent of each file.
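
    The essential trick in the fixed helper can be sketched as follows
    (hypothetical names; the real helper lives in MetadataLoaderTest):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class JarDirEntries {
    // Write explicit directory entries ("a/", "a/b/") before each file
    // entry, mirroring what Gradle-built jars contain.
    static void addFile(JarOutputStream out, Set<String> dirs,
                        String path, byte[] body) throws Exception {
        String[] parts = path.split("/");
        StringBuilder dir = new StringBuilder();
        for (int i = 0; i < parts.length - 1; i++) {
            dir.append(parts[i]).append('/');
            if (dirs.add(dir.toString())) {          // emit each parent once
                out.putNextEntry(new JarEntry(dir.toString()));
                out.closeEntry();
            }
        }
        out.putNextEntry(new JarEntry(path));
        out.write(body);
        out.closeEntry();
    }

    public static void main(String[] args) throws Exception {
        File jar = File.createTempFile("meta", ".jar");
        jar.deleteOnExit();
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
            addFile(out, new LinkedHashSet<>(),
                    "META-INF/vibe-erp/metadata/catalog.yml", "{}".getBytes());
        }
        // The jar now lists every parent directory before the file itself.
        try (JarFile jf = new JarFile(jar)) {
            jf.stream().forEach(e -> System.out.println(e.getName()));
        }
    }
}
```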
    
    Implementation plan refreshed: P1.5 marked DONE. Next priority
    candidates: P5.2 (pbc-partners — third PBC clone) and P3.4 (custom
    field application via the ext jsonb column, which would unlock the
    full Tier 1 customization story).
    
    Framework state: 17→18 commits, 10→11 modules, 81→92 unit tests,
    metadata seeded for 6 entities + 18 permissions + 6 menus.
    vibe_erp authored
     
    Browse Code »
  • Adds the framework's event bus, the second cross-cutting service (after
    auth) that PBCs and plug-ins both consume. Implements the transactional
    outbox pattern from the architecture spec section 9 — events are
    written to the database in the same transaction as the publisher's
    domain change, so a publish followed by a rollback never escapes.
    This is the seam where a future Kafka/NATS bridge plugs in WITHOUT
    touching any PBC code.
    
    What landed:
    
    * New `platform/platform-events/` module:
      - `EventOutboxEntry` JPA entity backed by `platform__event_outbox`
        (id, event_id, topic, aggregate_type, aggregate_id, payload jsonb,
        status, attempts, last_error, occurred_at, dispatched_at, version).
        Status enum: PENDING / DISPATCHED / FAILED.
      - `EventOutboxRepository` Spring Data JPA repo with a pessimistic
        SELECT FOR UPDATE query for poller dispatch.
      - `ListenerRegistry` — in-memory subscription holder, indexed both
        by event class (Class.isInstance) and by topic string. Supports
        a `**` wildcard for the platform's audit subscriber. Backed by
        CopyOnWriteArrayList so dispatch is lock-free.
      - `EventBusImpl` — implements the api.v1 EventBus. publish() writes
        the outbox row AND synchronously delivers to in-process listeners
        in the SAME transaction. Marked Propagation.MANDATORY so the bus
        refuses to publish outside an existing transaction (preventing
        publish-and-rollback leaks). Listener exceptions are caught and
        logged; the outbox row still commits.
      - `OutboxPoller` — Spring @Scheduled component that runs every 5s,
        drains PENDING / FAILED rows under a pessimistic lock, marks them
        DISPATCHED. v0.5 has no real external dispatcher — the poller is
        the seam where Kafka/NATS plugs in later.
      - `EventBusConfiguration` — @EnableScheduling so the poller actually
        runs. Lives in this module so the seam activates automatically
        when platform-events is on the classpath.
      - `EventAuditLogSubscriber` — wildcard subscriber that logs every
        event at INFO. Demo proof that the bus works end-to-end. Future
        versions replace it with a real audit log writer.
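
    The registry's topic dispatch can be sketched outside Spring — plain
    Java standing in for the Kotlin original, with illustrative names:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class TopicRegistrySketch {
    record Sub(String topic, Consumer<String> handler) {}

    // CopyOnWriteArrayList: subscriptions mutate rarely, dispatch iterates
    // often, so dispatch never takes a lock.
    private final List<Sub> subs = new CopyOnWriteArrayList<>();

    void subscribe(String topic, Consumer<String> handler) {
        subs.add(new Sub(topic, handler));
    }

    void deliver(String topic, String payload) {
        for (Sub s : subs) {
            if (s.topic().equals("**") || s.topic().equals(topic)) {
                try {
                    s.handler().accept(payload);
                } catch (RuntimeException e) {
                    // listener failures are logged, never propagated to the publisher
                    System.out.println("listener failed: " + e.getMessage());
                }
            }
        }
    }

    public static void main(String[] args) {
        TopicRegistrySketch reg = new TopicRegistrySketch();
        reg.subscribe("**", p -> System.out.println("audit: " + p));       // wildcard audit
        reg.subscribe("identity.user.created", p -> System.out.println("identity: " + p));
        reg.subscribe("catalog.item.created", p -> System.out.println("catalog: " + p));
        reg.deliver("identity.user.created", "alice");
        // only the wildcard and the matching topic fire
    }
}
```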
    
    * `platform__event_outbox` Liquibase changeset (platform-events-001):
      table + unique index on event_id + index on (status, created_at) +
      index on topic.
    
    * DefaultPluginContext.eventBus is no longer a stub that throws —
      it's now the real EventBus injected by VibeErpPluginManager.
      Plug-ins can publish and subscribe via the api.v1 surface. Note:
      subscriptions are NOT auto-scoped to the plug-in lifecycle in v0.5;
      a plug-in that wants its subscriptions removed on stop() must call
      subscription.close() explicitly. Auto-scoping lands when per-plug-in
      Spring child contexts ship.
    
    * pbc-identity now publishes `UserCreatedEvent` after a successful
      UserService.create(). The event class is internal to pbc-identity
      (not in api.v1) — other PBCs subscribe by topic string
      (`identity.user.created`), not by class. This is the right tradeoff:
      string topics are stable across plug-in classloaders, class equality
      is not, and adding every event class to api.v1 would be perpetual
      surface-area bloat.
    
    Tests: 13 new unit tests (9 EventBusImplTest + 4 OutboxPollerTest)
    plus 2 new UserServiceTest cases that verify the publish happens on
    the happy path and does NOT happen when create() rejects a duplicate.
    Total now 76 unit tests across the framework, all green.
    
    End-to-end smoke test against fresh Postgres with the plug-in loaded
    (everything green):
    
      EventAuditLogSubscriber subscribed to ** at boot
      Outbox empty before any user create                      ✓
      POST /api/v1/auth/login                                  → 200
      POST /api/v1/identity/users (create alice)               → 201
      Outbox row appears with topic=identity.user.created,
        status=PENDING immediately after create                ✓
      EventAuditLogSubscriber log line fires synchronously
        inside the create transaction                          ✓
      POST /api/v1/identity/users (create bob)                 → 201
      Wait 8s (one OutboxPoller cycle)
      Both outbox rows now DISPATCHED, dispatched_at set       ✓
      Existing PBCs still work:
        GET /api/v1/identity/users → 3 users                   ✓
        GET /api/v1/catalog/uoms → 15 UoMs                     ✓
      Plug-in still works:
        GET /api/v1/plugins/printing-shop/ping → 200           ✓
    
    The most important assertion is the synchronous audit log line
    appearing on the same thread as the user creation request. That
    proves the entire chain — UserService.create() → eventBus.publish()
    → EventBusImpl writes outbox row → ListenerRegistry.deliver()
    finds wildcard subscriber → EventAuditLogSubscriber.handle()
    logs — runs end-to-end inside the publisher's transaction.
    The poller flipping PENDING → DISPATCHED 5s later proves the
    outbox + poller seam works without any external dispatcher.
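
    A minimal in-memory sketch of that publish → synchronous deliver →
    poller flip sequence (no database, no scheduling; names illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class OutboxSketch {
    enum Status { PENDING, DISPATCHED }
    static class Row {
        final String topic;
        Status status = Status.PENDING;
        Row(String topic) { this.topic = topic; }
    }

    static final List<Row> OUTBOX = new ArrayList<>();

    // publish(): write the outbox row AND deliver in-process, both inside
    // the caller's transaction (straight-line code stands in for that here).
    static void publish(String topic) {
        OUTBOX.add(new Row(topic));
        System.out.println("audit: " + topic); // synchronous wildcard listener
    }

    // pollOnce(): the scheduled seam -- v0.5 just flips PENDING rows to
    // DISPATCHED; a Kafka/NATS bridge would publish externally first.
    static void pollOnce() {
        for (Row r : OUTBOX) {
            if (r.status == Status.PENDING) r.status = Status.DISPATCHED;
        }
    }

    public static void main(String[] args) {
        publish("identity.user.created");
        System.out.println(OUTBOX.get(0).status); // PENDING until the poller runs
        pollOnce();
        System.out.println(OUTBOX.get(0).status); // DISPATCHED afterwards
    }
}
```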
    
    Bug encountered and fixed during the smoke test:
    
      • EventBusImplTest used `ObjectMapper().registerKotlinModule()`
        which doesn't pick up jackson-datatype-jsr310. Production code
        uses Spring Boot's auto-configured ObjectMapper which already
        has jsr310 because spring-boot-starter-web is on the distribution
        module's classpath. The test setup was the only place using a bare
        mapper. Fixed by switching to `findAndRegisterModules()` AND
        by adding jackson-datatype-jsr310 as an explicit implementation
        dependency of platform-events (so future modules that depend on
        the bus without bringing web in still get Instant serialization).
    
    What is explicitly NOT in this chunk:
    
      • External dispatcher (Kafka/NATS bridge) — the poller is a no-op
        that just marks rows DISPATCHED. The seam exists; the dispatcher
        is a future P1.7.b unit.
      • Exponential backoff on FAILED rows — every cycle re-attempts.
        Real backoff lands when there's a real dispatcher to fail.
      • Dead-letter queue — same.
      • Per-plug-in subscription auto-scoping — plug-ins must close()
        explicitly today.
      • Async / fire-and-forget publish — synchronous in-process only.
    vibe_erp authored
     
    Browse Code »
  • Adds the second core PBC, validating that the pbc-identity template is
    actually clonable and that the Gradle dependency rule fires correctly
    for a real second PBC.
    
    What landed:
    
    * New `pbc/pbc-catalog/` Gradle subproject. Same shape as pbc-identity:
      api-v1 + platform-persistence + platform-security only (no
      platform-bootstrap, no other pbc). The architecture rule in the root
      build.gradle.kts now has two real PBCs to enforce against.
    
    * `Uom` entity (catalog__uom) — code, name, dimension, ext jsonb.
      Code is the natural key (stable, human-readable). UomService rejects
      duplicate codes and refuses to update the code itself (would invalidate
      every Item FK referencing it). UomController at /api/v1/catalog/uoms
      exposes list, get-by-id, get-by-code, create, update.
    
    * `Item` entity (catalog__item) — code, name, description, item_type
      (GOOD/SERVICE/DIGITAL enum), base_uom_code FK, active flag, ext jsonb.
      ItemService validates the referenced UoM exists at the application
      layer (better error message than the DB FK alone), refuses to update
      code or baseUomCode (data-migration operations, not edits), supports
      soft delete via deactivate. ItemController at /api/v1/catalog/items
      with full CRUD.
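
    That validation posture can be sketched with in-memory stand-ins —
    hypothetical names and error messages, not the real ItemService:

```java
import java.util.HashMap;
import java.util.Map;

public class ItemServiceSketch {
    static final Map<String, String> UOMS = new HashMap<>(Map.of("kg", "Kilogram"));
    record Item(String code, String name, String baseUomCode) {}

    // create(): validate the referenced UoM at the application layer so the
    // caller gets a readable message instead of a raw FK violation.
    static Item create(String code, String name, String baseUomCode) {
        if (!UOMS.containsKey(baseUomCode)) {
            throw new IllegalArgumentException("Unknown unit of measure: " + baseUomCode);
        }
        return new Item(code, name, baseUomCode);
    }

    // update(): code (and baseUomCode) are data-migration operations, not
    // edits, so any attempt to change them is rejected.
    static Item update(Item existing, String newCode, String newName) {
        if (newCode != null && !newCode.equals(existing.code())) {
            throw new IllegalArgumentException("Item code is immutable");
        }
        return new Item(existing.code(), newName, existing.baseUomCode());
    }

    public static void main(String[] args) {
        try {
            create("X", "X", "nope");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
        Item i = create("INK-CMYK-CYAN", "Cyan ink", "kg");
        System.out.println(update(i, null, "Cyan process ink").name());
    }
}
```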
    
    * `org.vibeerp.api.v1.ext.catalog.CatalogApi` — second cross-PBC facade
      in api.v1 (after IdentityApi). Exposes findItemByCode(code) and
      findUomByCode(code) returning safe ItemRef/UomRef DTOs. Inactive items
      are filtered to null at the boundary so callers cannot accidentally
      reference deactivated catalog rows.
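
    The inactive filter at the facade boundary is essentially the following
    sketch (Optional stands in for the nullable ItemRef return; the store
    and item codes are hypothetical):

```java
import java.util.Map;
import java.util.Optional;

public class CatalogFacadeSketch {
    record Item(String code, String name, boolean active) {}

    // hypothetical internal store standing in for the JPA repository
    static final Map<String, Item> STORE = Map.of(
            "kg-bag", new Item("kg-bag", "Bag of cement", true),
            "old-sku", new Item("old-sku", "Retired part", false));

    // Inactive rows are filtered out at the boundary so callers can never
    // obtain a reference to a deactivated catalog row.
    static Optional<Item> findItemByCode(String code) {
        return Optional.ofNullable(STORE.get(code)).filter(Item::active);
    }

    public static void main(String[] args) {
        System.out.println(findItemByCode("kg-bag").isPresent());  // active row found
        System.out.println(findItemByCode("old-sku").isPresent()); // inactive row hidden
    }
}
```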
    
    * `CatalogApiAdapter` in pbc-catalog — concrete @Component
      implementing CatalogApi. Maps internal entities to api.v1 DTOs without
      leaking storage types.
    
    * Liquibase changeset (catalog-init-001..003) creates both tables with
      unique indexes on code, GIN indexes on ext, and seeds 15 canonical
      units of measure: kg/g/t (mass), m/cm/mm/km (length), m2 (area),
      l/ml (volume), ea/sheet/pack (count), h/min (time). Tagged
      created_by='__seed__' so a future metadata uninstall sweep can
      identify them.
    
    Tests: 11 new unit tests (UomServiceTest x5, ItemServiceTest x6),
    total now 49 unit tests across the framework, all green.
    
    End-to-end smoke test against fresh Postgres via docker-compose
    (14/14 passing):
      GET /api/v1/catalog/items (no auth)            → 401
      POST /api/v1/auth/login                        → access token
      GET /api/v1/catalog/uoms (Bearer)              → 15 seeded UoMs
      GET /api/v1/catalog/uoms/by-code/kg            → 200
      POST custom UoM 'roll'                         → 201
      POST duplicate UoM 'kg'                        → 400 + clear message
      GET items                                      → []
      POST item with unknown UoM                     → 400 + clear message
      POST item with valid UoM                       → 201
      catalog__item.created_by                       → admin user UUID
                                                       (NOT __system__)
      GET /by-code/INK-CMYK-CYAN                     → 200
      PATCH item name + description                  → 200
      DELETE item                                    → 204
      GET item                                       → active=false
    
    The principal-context bridge from P4.1 keeps working without any
    additional wiring in pbc-catalog: every PBC inherits the audit
    behavior for free by extending AuditedJpaEntity. That is exactly the
    "PBCs follow a recipe, the framework provides the cross-cutting
    machinery" promise from the architecture spec.
    
    Architectural rule enforcement still active: confirmed by reading the
    build.gradle.kts and observing that pbc-catalog declares no
    :platform:platform-bootstrap and no :pbc:pbc-identity dependency. The
    build fails on either violation.
    vibe_erp authored
     
    Browse Code »

  • Implements the auth unit from the implementation plan. Until now, the
    framework let any caller hit any endpoint; with the single-tenant
    refactor there is no second wall, so auth was the most pressing gap.
    
    What landed:
    
    * New `platform-security` module owns the framework's security
      primitives (JWT issuer/verifier, password encoder, Spring Security
      filter chain config, AuthenticationFailedException). Lives between
      platform-persistence and platform-bootstrap.
    
    * `JwtIssuer` mints HS256-signed access (15min) and refresh (7d) tokens
      via NimbusJwtEncoder. `JwtVerifier` decodes them back to a typed
      `DecodedToken` so PBCs never need to import OAuth2 types. JWT secret
      is read from VIBEERP_JWT_SECRET; the framework refuses to start if
      the secret is shorter than 32 bytes.
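
    The secret-length guard is essentially the following sketch (the guard's
    placement and message are assumptions; only the VIBEERP_JWT_SECRET name
    and the 32-byte minimum come from the commit). HS256 keys shorter than
    256 bits are rejected because HMAC-SHA-256 requires at least that much
    key material:

```java
import java.nio.charset.StandardCharsets;

public class JwtSecretGuard {
    // Refuse to start with a weak HS256 secret: anything under 32 bytes.
    static void requireStrongSecret(String secret) {
        int len = secret.getBytes(StandardCharsets.UTF_8).length;
        if (len < 32) {
            throw new IllegalStateException(
                    "VIBEERP_JWT_SECRET must be at least 32 bytes, got " + len);
        }
    }

    public static void main(String[] args) {
        requireStrongSecret("0123456789abcdef0123456789abcdef"); // 32 bytes: accepted
        try {
            requireStrongSecret("too-short");                    // 9 bytes: rejected
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```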
    
    * `SecurityConfiguration` wires Spring Security with JWT resource
      server, stateless sessions, CSRF disabled, and a public allowlist
      for /actuator/health, /actuator/info, /api/v1/_meta/**,
      /api/v1/auth/login, /api/v1/auth/refresh.
    
    * `PrincipalContext` (in platform-persistence/security) is the bridge
      between Spring Security's SecurityContextHolder and the audit
      listener. Bound by `PrincipalContextFilter` which runs AFTER
      BearerTokenAuthenticationFilter so SecurityContextHolder is fully
      populated. The audit listener (AuditedJpaEntityListener) now reads
      from PrincipalContext, so created_by/updated_by are real user ids
      instead of __system__.
    
    * `pbc-identity` gains `UserCredential` (separate table from User —
      password hashes never share a query plan with user records),
      `AuthService` (login + refresh, generic AuthenticationFailedException
      on every failure to thwart account enumeration), and `AuthController`
      exposing /api/v1/auth/login and /api/v1/auth/refresh.
    
    * `BootstrapAdminInitializer` runs on first boot of an empty
      identity__user table, creates an `admin` user with a random
      16-char password printed to the application logs. Subsequent
      boots see the user exists and skip silently.
    
    * GlobalExceptionHandler maps AuthenticationFailedException → 401
      with a generic "invalid credentials" body (RFC 7807 ProblemDetail).
    
    * New module also brings BouncyCastle as a runtime-only dep
      (Argon2PasswordEncoder needs it).
    
    Tests: 38 unit tests pass, including JwtRoundTripTest (issue/decode
    round trip + tamper detection + secret-length validation),
    PrincipalContextTest (ThreadLocal lifecycle), AuthServiceTest (9 cases
    covering login + refresh happy paths and every failure mode).
    
    End-to-end smoke test against a fresh Postgres via docker-compose:
      GET /api/v1/identity/users (no auth)        → 401
      POST /api/v1/auth/login (admin + bootstrap) → 200 + access/refresh
      POST /api/v1/auth/login (wrong password)    → 401
      GET  /api/v1/identity/users (Bearer)        → 200, lists admin
      POST /api/v1/identity/users (Bearer)        → 201, creates alice
      alice.created_by                            → admin's user UUID
      POST /api/v1/auth/refresh (refresh token)   → 200 + new pair
      POST /api/v1/auth/refresh (access token)    → 401 (type mismatch)
      GET  /api/v1/identity/users (garbage token) → 401
      GET  /api/v1/_meta/info (no auth, public)   → 200
    
    Plan: docs/superpowers/specs/2026-04-07-vibe-erp-implementation-plan.md
    refreshed to drop the now-dead P1.1 (RLS hook) and H1 (per-region
    tenant routing), reorder priorities so P4.1 is first, and reflect the
    single-tenant change throughout.
    
    Bug fixes encountered along the way (caught by the smoke test, not by
    unit tests — the value of running real workflows):
    
      • JwtIssuer was producing IssuedToken.expiresAt with nanosecond
        precision but JWT exp is integer seconds; the round-trip test
        failed equality. Fixed by truncating to ChronoUnit.SECONDS at
        issue time.
      • PrincipalContextFilter was registered with addFilterAfter
        UsernamePasswordAuthenticationFilter, which runs BEFORE the
        OAuth2 BearerTokenAuthenticationFilter, so SecurityContextHolder
        was empty when the bridge filter read it. Result: every
        authenticated request still wrote __system__ in audit columns.
        Fixed by addFilterAfter BearerTokenAuthenticationFilter::class.
      • RefreshRequest is a single-String data class. jackson-module-kotlin
        interprets single-arg data classes as delegate-based creators, so
        Jackson tried to deserialize the entire JSON object as a String
        and threw HttpMessageNotReadableException. Fixed by adding
        @JsonCreator(mode = PROPERTIES) + @param:JsonProperty.
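
    The exp fix reduces to truncating at issue time; the round trip can be
    sketched in plain Java (illustrative values, not the real JwtIssuer):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class JwtExpTruncation {
    public static void main(String[] args) {
        Instant issuedAt = Instant.now();
        // JWT "exp"/"iat" are NumericDate values: whole seconds since epoch.
        // Truncate at issue time so the stored Instant survives the trip
        // through the token without losing sub-second precision.
        Instant expiresAt = issuedAt.plus(15, ChronoUnit.MINUTES)
                                    .truncatedTo(ChronoUnit.SECONDS);
        long claim = expiresAt.getEpochSecond();        // what goes in the token
        Instant decoded = Instant.ofEpochSecond(claim); // what the verifier reads back
        System.out.println(decoded.equals(expiresAt));  // equal only because of truncatedTo
    }
}
```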
    vibe_erp authored
     
    Browse Code »
  • Design change: vibe_erp deliberately does NOT support multiple companies in
    one process. Each running instance serves exactly one company against an
    isolated Postgres database. Hosting many customers means provisioning many
    independent instances, not multiplexing them.
    
    Why: most ERP/EBC customers will not accept a SaaS where their data shares
    a database with other companies. The single-tenant-per-instance model is
    what the user actually wants the product to look like, and it dramatically
    simplifies the framework.
    
    What changed:
    - CLAUDE.md guardrail #5 rewritten from "multi-tenant from day one" to
      "single-tenant per instance, isolated database"
    - api.v1: removed TenantId value class entirely; removed tenantId from
      Entity, AuditedEntity, Principal, DomainEvent, RequestContext,
      TaskContext, IdentityApi.UserRef, Repository
    - platform-persistence: deleted TenantContext, HibernateTenantResolver,
      TenantAwareJpaTransactionManager, TenancyJpaConfiguration; removed
      @TenantId and tenant_id column from AuditedJpaEntity
    - platform-bootstrap: deleted TenantResolutionFilter; dropped
      vibeerp.instance.mode and default-tenant from properties; added
      vibeerp.instance.company-name; added @EnableJpaRepositories and
      @EntityScan to VibeErpApplication so PBC repositories outside the
      main package are wired;
      added GlobalExceptionHandler that maps IllegalArgumentException → 400
      and NoSuchElementException → 404 (RFC 7807 ProblemDetail)
    - pbc-identity: removed tenant_id from User, repository, controller, DTOs,
      IdentityApiAdapter; updated UserService duplicate-username message and
      the matching test
    - distribution: dropped multiTenancy=DISCRIMINATOR and
      tenant_identifier_resolver from application.yaml; configured Spring Boot
      mainClass on the springBoot extension (not just bootJar) so bootRun works
    - Liquibase: rewrote platform-init changelog to drop platform__tenant and
      the tenant_id columns on every metadata__* table; rewrote
      pbc-identity init to drop tenant_id columns, the (tenant_id, *)
      composite indexes, and the per-table RLS policies
    - IdentifiersTest replaced with Id<T> tests since the TenantId tests
      no longer apply
    
    Verified end-to-end against a real Postgres via docker-compose:
      POST /api/v1/identity/users   → 201 Created
      GET  /api/v1/identity/users   → list works
      GET  /api/v1/identity/users/X → fetch by id works
      POST duplicate username       → 400 Bad Request (was 500)
      PATCH bogus id                → 404 Not Found (was 500)
      PATCH alice                   → 200 OK
      DELETE alice                  → 204, alice now disabled
    
    All 18 unit tests pass.
    vibe_erp authored
     
    Browse Code »
  • BLOCKER: wire Hibernate multi-tenancy
    - application.yaml: set hibernate.tenant_identifier_resolver and
      hibernate.multiTenancy=DISCRIMINATOR so HibernateTenantResolver is
      actually installed into the SessionFactory
    - AuditedJpaEntity.tenantId: add @org.hibernate.annotations.TenantId so
      every PBC entity inherits the discriminator
    - AuditedJpaEntityListener.onCreate: throw if a caller pre-set tenantId
      to a different value than the current TenantContext, instead of
      silently overwriting (defense against cross-tenant write bugs)
    
    IMPORTANT: dependency hygiene
    - pbc-identity no longer depends on platform-bootstrap (wrong direction;
      bootstrap assembles PBCs at the top of the stack)
    - root build.gradle.kts: tighten the architectural-rule enforcement to
      also reject :pbc:* -> platform-bootstrap; switch plug-in detection
      from a fragile pathname heuristic to an explicit
      extra["vibeerp.module-kind"] = "plugin" marker; reference plug-in
      declares the marker
    
    IMPORTANT: api.v1 surface additions (all non-breaking)
    - Repository: documented closed exception set; new
      PersistenceExceptions.kt declares OptimisticLockConflictException,
      UniqueConstraintViolationException, EntityValidationException, and
      EntityNotFoundException so plug-ins never see Hibernate types
    - TaskContext: now exposes tenantId(), principal(), locale(),
      correlationId() so workflow handlers (which run outside an HTTP
      request) can pass tenant-aware calls back into api.v1
    - EventBus: subscribe() now returns a Subscription with close() so
      long-lived subscribers can deregister explicitly; added a
      subscribe(topic: String, ...) overload for cross-classloader event
      routing where Class<E> equality is unreliable
    - IdentityApi.findUserById: tightened from Id<*> to PrincipalId so the
      type system rejects "wrong-id-kind" mistakes at the cross-PBC boundary
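
    The Subscription shape can be sketched as a plain-Java stand-in for the
    api.v1 interface (the listener list and names here are illustrative):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class SubscriptionSketch {
    // close() narrowed to drop AutoCloseable's checked exception.
    interface Subscription extends AutoCloseable { @Override void close(); }

    static final List<Consumer<String>> LISTENERS = new CopyOnWriteArrayList<>();

    // subscribe() hands back a Subscription whose close() deregisters the
    // listener, so long-lived subscribers can detach explicitly.
    static Subscription subscribe(Consumer<String> listener) {
        LISTENERS.add(listener);
        return () -> LISTENERS.remove(listener);
    }

    public static void main(String[] args) {
        Subscription sub = subscribe(msg -> System.out.println("got " + msg));
        LISTENERS.forEach(l -> l.accept("one")); // delivered
        sub.close();                             // explicit deregistration
        LISTENERS.forEach(l -> l.accept("two")); // nothing printed
        System.out.println("listeners: " + LISTENERS.size());
    }
}
```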
    
    NITs:
    - HealthController.kt -> MetaController.kt (file name now matches the
      class name); added TODO(v0.2) for reading implementationVersion from
      the Spring Boot BuildProperties bean
    vibe_erp authored
     
    Browse Code »
  • …er, IdentityApi adapter, unit tests)
    vibe_erp authored
     
    Browse Code »