• zichun authored
  • New platform subproject `platform/platform-workflow` that makes
    `org.vibeerp.api.v1.workflow.TaskHandler` a live extension point. This
    is the framework's first chunk of Phase 2 (embedded workflow engine)
    and the dependency other work has been waiting on — pbc-production
    routings/operations, the full buy-make-sell BPMN scenario in the
    reference plug-in, and ultimately the BPMN designer web UI all hang
    off this seam.
    
    ## The shape
    
    - `flowable-spring-boot-starter-process:7.0.1` pulled in behind a
      single new module. Every other module in the framework still sees
      only the api.v1 TaskHandler + WorkflowTask + TaskContext surface —
      guardrail #10 stays honest: no Flowable type leaks to plug-ins or
      PBCs.
    - `TaskHandlerRegistry` is the host-side index of every registered
      handler, keyed by `TaskHandler.key()`. Auto-populated from every
      Spring bean implementing TaskHandler via constructor injection of
      `List<TaskHandler>`; duplicate keys fail fast at registration time.
      `register` / `unregister` exposed for a future plug-in lifecycle
      integration.
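    The registry's contract is small enough to sketch. This is a minimal
    in-memory sketch, not the shipped class; the `TaskHandler` stand-in
    here collapses the real api.v1 surface down to `key()`:

    ```kotlin
    // Stand-in for the api.v1 TaskHandler surface described above;
    // the real interface also carries execute(WorkflowTask, TaskContext).
    interface TaskHandler {
        fun key(): String
    }

    // Minimal host-side index: duplicate keys fail fast at registration
    // time; register/unregister are exposed for a future plug-in lifecycle.
    class TaskHandlerRegistry(initial: List<TaskHandler> = emptyList()) {
        private val handlers = mutableMapOf<String, TaskHandler>()

        init {
            initial.forEach { register(it) }
        }

        fun register(handler: TaskHandler) {
            val key = handler.key()
            require(key.isNotBlank()) { "handler key must not be blank" }
            require(key !in handlers) { "duplicate TaskHandler key: $key" }
            handlers[key] = handler
        }

        fun unregister(key: String): Boolean = handlers.remove(key) != null

        fun find(key: String): TaskHandler? = handlers[key]
    }
    ```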
    - `DispatchingJavaDelegate` is a single Spring-managed JavaDelegate
      named `taskDispatcher`. Every BPMN service task in the framework
      references it via `flowable:delegateExpression="${taskDispatcher}"`.
      The dispatcher reads `execution.currentActivityId` as the task key
      (BPMN `id` attribute = TaskHandler key — no extension elements, no
      field injection, no second source of truth) and routes to the
      matching registered handler. A defensive copy of the execution
      variables is passed to the handler so it cannot mutate Flowable's
      internal map.
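    The routing rule itself is tiny. In this sketch Flowable's
    `DelegateExecution` is replaced by a hypothetical `FakeExecution`
    stand-in and handlers are reduced to plain functions, so only the
    activity-id-equals-key rule and the defensive copy remain:

    ```kotlin
    // Hypothetical stand-in for Flowable's DelegateExecution.
    class FakeExecution(
        val currentActivityId: String,
        val variables: MutableMap<String, Any?>,
    )

    class DispatchingDelegate(
        private val handlers: Map<String, (Map<String, Any?>) -> Unit>,
    ) {
        fun execute(execution: FakeExecution) {
            // BPMN activity id == TaskHandler key: no second source of truth.
            val handler = handlers[execution.currentActivityId]
                ?: throw NoSuchElementException(
                    "no TaskHandler for key '${execution.currentActivityId}'")
            // Defensive copy: the handler cannot mutate the engine's map.
            handler(HashMap(execution.variables))
        }
    }
    ```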
    - `DelegateTaskContext` adapts Flowable's `DelegateExecution` to the
      api.v1 `TaskContext` — the variable `set(name, value)` call
      forwards through Flowable's variable scope (persisted in the same
      transaction as the surrounding service task execution) and null
      values remove the variable. Principal + locale are documented
      placeholders for now (a workflow-engine `Principal.System`),
      waiting on the propagation chunk that plumbs the initiating user
      through `runtimeService.startProcessInstanceByKey(...)`.
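    The null-removes-variable contract, sketched against a hypothetical
    stand-in for Flowable's variable scope (the real adapter forwards to
    `DelegateExecution` inside the surrounding transaction):

    ```kotlin
    // Hypothetical stand-in for Flowable's variable scope.
    class FakeVariableScope {
        val variables = mutableMapOf<String, Any?>()
    }

    class DelegateTaskContext(private val scope: FakeVariableScope) {
        /** Forwards to the scope; a null value removes the variable. */
        fun set(name: String, value: Any?) {
            require(name.isNotBlank()) { "variable name must not be blank" }
            if (value == null) scope.variables.remove(name)
            else scope.variables[name] = value
        }
    }
    ```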
    - `WorkflowService` is a thin facade over Flowable's `RuntimeService`
      + `RepositoryService` exposing exactly the four operations the
      controller needs: start, list active, inspect variables, list
      definitions. Everything richer (signals, timers, sub-processes,
      user-task completion, history queries) lands on this seam in later
      chunks.
    - `WorkflowController` at `/api/v1/workflow/**`:
      * `POST /process-instances`                       (permission `workflow.process.start`)
      * `GET  /process-instances`                       (`workflow.process.read`)
      * `GET  /process-instances/{id}/variables`        (`workflow.process.read`)
      * `GET  /definitions`                             (`workflow.definition.read`)
      * `GET  /handlers`                                (`workflow.definition.read`)
      Exception handlers map `NoSuchElementException` +
      `FlowableObjectNotFoundException` → 404, `IllegalArgumentException`
      → 400, and any other `FlowableException` → 400. Permissions are
      declared in a new `META-INF/vibe-erp/metadata/workflow.yml` loaded
      by the core MetadataLoader so they show up under
      `GET /api/v1/_meta/metadata` alongside every other permission.
    
    ## The executable self-test
    
    - `vibeerp-ping.bpmn20.xml` ships in `processes/` on the module
      classpath and Flowable's starter auto-deploys it at boot.
      Structure: `start` → serviceTask id=`vibeerp.workflow.ping`
      (delegateExpression=`${taskDispatcher}`) → `end`. Process
      definitionKey is `vibeerp-workflow-ping` (distinct from the
      serviceTask id because BPMN 2.0 ids must be unique per document).
    - `PingTaskHandler` is a real shipped bean, not test code: its
      `execute` writes `pong=true`, `pongAt=<Instant.now()>`, and
      `correlationId=<ctx.correlationId()>` to the process variables.
      Operators and AI agents get a trivial "is the workflow engine
      alive?" probe out of the box.
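    The handler's behaviour can be sketched with a hypothetical
    `PingContext` standing in for api.v1's TaskContext:

    ```kotlin
    import java.time.Instant

    // Hypothetical stand-in for api.v1's TaskContext.
    class PingContext(val correlationId: String) {
        val variables = mutableMapOf<String, Any?>()
        fun set(name: String, value: Any?) { variables[name] = value }
    }

    object PingTaskHandler {
        // Must equal the BPMN serviceTask id, per the dispatch rule.
        const val KEY = "vibeerp.workflow.ping"

        fun execute(ctx: PingContext) {
            ctx.set("pong", true)
            ctx.set("pongAt", Instant.now())
            ctx.set("correlationId", ctx.correlationId)
        }
    }
    ```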
    
    Why the demo lives in src/main, not src/test: Flowable's auto-deployer
    reads from the host classpath at boot, so if either half lived under
    src/test the smoke test wouldn't be reproducible from the shipped
    image — exactly what CLAUDE.md's "reference plug-in is the executable
    acceptance test" discipline is trying to prevent.
    
    ## The Flowable + Liquibase trap
    
    **Learned the hard way during the smoke test.** Adding
    `flowable-spring-boot-starter-process` immediately broke boot with
    `Schema-validation: missing table [catalog__item]`. Liquibase was
    silently not running. Root cause: Flowable 7.x registers a Spring
    Boot `EnvironmentPostProcessor` called
    `FlowableLiquibaseEnvironmentPostProcessor` that, unless the user has
    already set an explicit value, forces
    `spring.liquibase.enabled=false` with a WARN log line that reads
    "Flowable pulls in Liquibase but does not use the Spring Boot
    configuration for it". Our master.xml then never executes and JPA
    validation fails against the empty schema. The fix is a single line
    in `distribution/src/main/resources/application.yaml` —
    `spring.liquibase.enabled: true` — with a comment explaining why it
    must stay there for whoever touches the config next.
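    An illustrative sketch of the config fragment (the comment wording
    here is mine, not the repo's):

    ```yaml
    # Flowable 7.x's FlowableLiquibaseEnvironmentPostProcessor forces
    # spring.liquibase.enabled=false unless an explicit value is set.
    # Keep this true or master.xml never runs and JPA schema validation
    # fails at boot.
    spring:
      liquibase:
        enabled: true
    ```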
    
    Flowable's own ACT_* tables and vibe_erp's `catalog__*`, `pbc.*__*`,
    etc. tables coexist happily in the same public schema — 39 ACT_*
    tables alongside 45 vibe_erp tables on the smoke-tested DB. Flowable
    manages its own schema via its internal MyBatis DDL, Liquibase manages
    ours, they don't touch each other.
    
    ## Smoke-test transcript (fresh DB, dev profile)
    
    ```
    docker compose down -v && docker compose up -d db
    ./gradlew :distribution:bootRun &
    # ... Flowable creates ACT_* tables, Liquibase creates vibe_erp tables,
    #     MetadataLoader loads workflow.yml, TaskHandlerRegistry boots with 1 handler,
    #     BPMN auto-deployed from classpath
    POST /api/v1/auth/login → JWT
    GET  /api/v1/workflow/definitions → 1 definition (vibeerp-workflow-ping)
    GET  /api/v1/workflow/handlers → {"count":1,"keys":["vibeerp.workflow.ping"]}
    POST /api/v1/workflow/process-instances
         {"processDefinitionKey":"vibeerp-workflow-ping",
          "businessKey":"smoke-1",
          "variables":{"greeting":"ni hao"}}
      → 201 {"processInstanceId":"...","ended":true,
             "variables":{"pong":true,"pongAt":"2026-04-09T...",
                          "correlationId":"...","greeting":"ni hao"}}
    POST /api/v1/workflow/process-instances {"processDefinitionKey":"does-not-exist"}
      → 404 {"message":"No process definition found for key 'does-not-exist'"}
    GET  /api/v1/catalog/uoms → still returns the 15 seeded UoMs (sanity)
    ```
    
    ## Tests
    
    - 15 new unit tests in `platform-workflow/src/test`:
      * `TaskHandlerRegistryTest` — init with initial handlers, duplicate
        key fails fast, blank key rejected, unregister removes,
        unregister on unknown returns false, find on missing returns null
      * `DispatchingJavaDelegateTest` — dispatches by currentActivityId,
        throws on missing handler, defensive-copies the variable map
      * `DelegateTaskContextTest` — set non-null forwards, set null
        removes, blank name rejected, principal/locale/correlationId
        passthrough, default correlation id is stable across calls
      * `PingTaskHandlerTest` — key matches the BPMN serviceTask id,
        execute writes pong + pongAt + correlationId
    - Total framework unit tests: 261 (was 246), all green.
    
    ## What this unblocks
    
    - **REF.1** — real quote→job-card workflow handler in the
      printing-shop plug-in
    - **pbc-production routings/operations (v3)** — each operation
      becomes a BPMN step with duration + machine assignment
    - **P2.3** — user-task form rendering (landing on top of the
      RuntimeService already exposed via WorkflowService)
    - **P2.2** — BPMN designer web page (later, depends on R1)
    
    ## Deliberate non-goals (parking lot)
    
    - Principal propagation from the REST caller through the process
      start into the handler — uses a fixed `workflow-engine`
      `Principal.System` for now. Follow-up chunk will plumb the
      authenticated user as a Flowable variable.
    - Plug-in-contributed TaskHandler registration via PF4J child
      contexts — the registry exposes `register/unregister` but the
      plug-in loader doesn't call them yet. Follow-up chunk.
    - BPMN user tasks, signals, timers, history queries — seam exists,
      deliberately not built out.
    - Workflow deployment from `metadata__workflow` rows (the Tier 1
      path). Today deployment is classpath-only via Flowable's
      auto-deployer.
    - The Flowable async job executor is explicitly deactivated
      (`flowable.async-executor-activate: false`) — background-job
      machinery belongs to the future Quartz integration (P1.10), not
      Flowable.
  • Closes the P4.3 rollout — the last PBC whose controllers were still
    unannotated. Every endpoint in `UserController` now carries an
    `@RequirePermission("identity.user.*")` annotation matching the keys
    already declared in `identity.yml`:
    
      GET  /api/v1/identity/users           identity.user.read
      GET  /api/v1/identity/users/{id}      identity.user.read
      POST /api/v1/identity/users           identity.user.create
      PATCH /api/v1/identity/users/{id}     identity.user.update
      DELETE /api/v1/identity/users/{id}    identity.user.disable
    
    `AuthController` (login, refresh) is deliberately NOT annotated —
    it is in the platform-security public allowlist because login is the
    token-issuing endpoint (chicken-and-egg).
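    A sketch of the annotation pattern (the `@RequirePermission`
    annotation and the controller body here are illustrative stand-ins;
    the real annotation and its AOP evaluator live in platform-security):

    ```kotlin
    // Illustrative stand-in: the real annotation lives in platform-security
    // and is evaluated by a Spring AOP aspect after the JWT filter runs.
    @Target(AnnotationTarget.FUNCTION)
    annotation class RequirePermission(val value: String)

    class UserController {
        @RequirePermission("identity.user.read")
        fun list(): List<String> = emptyList()

        // DELETE maps to the 'disable' permission: a soft-disable, not a
        // hard delete, per the identity.yml keys listed above.
        @RequirePermission("identity.user.disable")
        fun delete(id: Long) {}
    }
    ```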
    
    KDoc on the controller class updated to reflect the new auth story
    (removing the stale "authentication deferred to v0.2" comment from
    before P4.1 / P4.3 landed).
    
    Smoke verified end-to-end against real Postgres:
      - Admin (wildcard `admin` role) → GET /users returns 200, POST
        /users returns 201 (new user `jane` created).
      - Unauthenticated GET and POST → 401 Unauthorized from the
        framework's JWT filter before @RequirePermission runs. A
        non-admin user without explicit grants would get 403 from the
        AOP evaluator; tested manually with the admin and anonymous
        cases.
    
    No test changes — the controller unit test is a thin DTO mapper
    test that doesn't exercise the Spring AOP aspect; identity-wide
    authz enforcement is covered by the platform-security tests plus
    the shipping smoke tests. 246 unit tests, all green.
    
    P4.3 is now complete across every core PBC:
      pbc-catalog, pbc-partners, pbc-inventory, pbc-orders-sales,
      pbc-orders-purchase, pbc-finance, pbc-production, pbc-identity.
  • CI verified green for `986f02ce` on both gradle build and docker
    image jobs.
  • Removes the ext-handling copy/paste that had grown across four PBCs
    (partners, inventory, orders-sales, orders-purchase). Every service
    that wrote the JSONB `ext` column was manually doing the same
    four-step sequence: validate, null-check, serialize with a local
    ObjectMapper, assign to the entity. And every response mapper was
    doing the inverse: check-if-blank, parse, cast, swallow errors.
    
    Net: ~15 lines saved per PBC, one place to change the ext contract
    later (e.g. PII redaction, audit tagging, field-level events), and
    a stable plug-in opt-in mechanism — any plug-in entity that
    implements `HasExt` automatically participates.
    
    New api.v1 surface:
    
      interface HasExt {
          val extEntityName: String     // key into metadata__custom_field
          var ext: String               // the serialized JSONB column
      }
    
    Lives in `org.vibeerp.api.v1.entity` so plug-ins can opt their own
    entities into the same validation path. Zero Spring/Jackson
    dependencies — api.v1 stays clean.
    
    Extended `ExtJsonValidator` (platform-metadata) with two helpers:
    
      fun applyTo(entity: HasExt, ext: Map<String, Any?>?)
          — null-safe; validates; writes canonical JSON to entity.ext.
            Replaces the validate + writeValueAsString + assign triplet
            in every service's create() and update().
    
      fun parseExt(entity: HasExt): Map<String, Any?>
          — returns empty map on blank/corrupt column; response
            mappers never 500 on bad data. Replaces the four identical
            parseExt local functions.
    
    ExtJsonValidator now takes an ObjectMapper via constructor
    injection (Spring Boot's auto-configured bean).
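    The two helper contracts, sketched with plain functions standing in
    for Jackson (the validation against declared custom fields is elided;
    only the null-safe write and never-throws read survive the sketch):

    ```kotlin
    interface HasExt {
        val extEntityName: String   // key into metadata__custom_field
        var ext: String             // the serialized JSONB column
    }

    // Jackson replaced by injected stand-in functions for the sketch.
    class ExtJsonValidator(
        private val serialize: (Map<String, Any?>) -> String,
        private val deserialize: (String) -> Map<String, Any?>,
    ) {
        /** Null-safe: a null ext map leaves the column untouched (PATCH). */
        fun applyTo(entity: HasExt, ext: Map<String, Any?>?) {
            if (ext == null) return
            // Real implementation validates against declared fields first.
            entity.ext = serialize(ext)
        }

        /** Blank or corrupt column yields an empty map; mappers never 500. */
        fun parseExt(entity: HasExt): Map<String, Any?> {
            if (entity.ext.isBlank()) return emptyMap()
            return runCatching { deserialize(entity.ext) }
                .getOrDefault(emptyMap())
        }
    }
    ```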
    
    Entities that now implement HasExt (override val extEntityName;
    override var ext; companion object const val ENTITY_NAME):
      - Partner (`partners.Partner` → "Partner")
      - Location (`inventory.Location` → "Location")
      - SalesOrder (`orders_sales.SalesOrder` → "SalesOrder")
      - PurchaseOrder (`orders_purchase.PurchaseOrder` → "PurchaseOrder")
    
    Deliberately NOT converted this chunk:
      - WorkOrder (pbc-production) — its ext column has no declared
        fields yet; a follow-up that adds declarations AND the
        HasExt implementation is cleaner than splitting the two.
      - JournalEntry (pbc-finance) — derived state, no ext column.
    
    Services lose:
      - The `jsonMapper: ObjectMapper = ObjectMapper().registerKotlinModule()`
        field (four copies eliminated)
      - The `parseExt(entity): Map` helper function (four copies)
      - The `companion object { const val ENTITY_NAME = ... }` constant
        (moved onto the entity where it belongs)
      - The `val canonicalExt = extValidator.validate(...)` +
        `.also { it.ext = jsonMapper.writeValueAsString(canonicalExt) }`
        create pattern (replaced with one applyTo call)
      - The `if (command.ext != null) { ... }` update pattern
        (applyTo is null-safe)
    
    Unit tests: 6 new cases on ExtJsonValidatorTest cover applyTo and
    parseExt (null-safe path, happy path, failure path, blank column,
    round-trip, malformed JSON). Existing service tests just swap the
    mock setup from stubbing `validate` to stubbing `applyTo` and
    `parseExt` with no-ops.
    
    Smoke verified end-to-end against real Postgres:
      - POST /partners with valid ext (partners_credit_limit,
        partners_industry) → 201, canonical form persisted.
      - GET /partners/by-code/X → 200, ext round-trips.
      - POST with invalid enum value → 400 "value 'x' is not in
        allowed set [printing, publishing, packaging, other]".
      - POST with undeclared key → 400 "ext contains undeclared
        key(s) for 'Partner': [rogue_field]".
      - PATCH with new ext → 200, ext updated.
      - PATCH WITHOUT ext field → 200, prior ext preserved (null-safe
        applyTo).
      - POST /orders/sales-orders with no ext → 201, the create path
        via the shared helper still works.
    
    246 unit tests (+6 over 240), 18 Gradle subprojects.
  • CI verified green for `75a75baa` on both the gradle build and docker
    image jobs.
  • Grows pbc-production from the minimal v1 (DRAFT → COMPLETED in one
    step, single output, no BOM) into a real v2 production PBC:
    
      1. IN_PROGRESS state between DRAFT and COMPLETED so "started but
         not finished" work orders are observable on a dashboard.
         WorkOrderService.start(id) performs the transition and publishes
         a new WorkOrderStartedEvent. cancel() now accepts DRAFT OR
         IN_PROGRESS (v2 writes nothing to the ledger at start() so there
         is nothing to undo on cancel).
    
      2. Bill of materials via a new WorkOrderInput child entity —
         @OneToMany with cascade + orphanRemoval, same shape as
         SalesOrderLine. Each line carries (lineNo, itemCode,
         quantityPerUnit, sourceLocationCode). complete() now iterates
         the inputs in lineNo order and writes one MATERIAL_ISSUE
         ledger row per line (delta = -(quantityPerUnit × outputQuantity))
         BEFORE writing the PRODUCTION_RECEIPT for the output. All in
         one transaction — a failure anywhere rolls back every prior
         ledger row AND the status flip. Empty inputs list is legal
         (the v1 auto-spawn-from-SO path still works unchanged,
         writing only the PRODUCTION_RECEIPT).
    
      3. Scrap flow for COMPLETED work orders via a new scrap(id,
         scrapLocationCode, quantity, note) service method. Writes a
         negative ADJUSTMENT ledger row tagged WO:<code>:SCRAP and
         publishes a new WorkOrderScrappedEvent. Chose ADJUSTMENT over
         adding a new SCRAP movement reason to keep the enum stable —
         the reference-string suffix is the disambiguator. The work
         order itself STAYS COMPLETED; scrap is a correction on top of
         a terminal state, not a state change.
    
      complete() now requires IN_PROGRESS (not DRAFT); existing callers
      must start() first.
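      The complete() sequencing described above, as a pure-function
      sketch (entities and the ledger are reduced to hypothetical data
      classes; persistence, status flips, and the transaction boundary
      are out of frame):

      ```kotlin
      import java.math.BigDecimal

      data class InputLine(
          val lineNo: Int,
          val itemCode: String,
          val quantityPerUnit: BigDecimal,
      )
      data class Movement(
          val reason: String,
          val itemCode: String,
          val delta: BigDecimal,
          val reference: String,
      )

      fun complete(
          code: String,
          outputItemCode: String,
          outputQuantity: BigDecimal,
          inputs: List<InputLine>,
      ): List<Movement> {
          val ref = "WO:$code"
          // One MATERIAL_ISSUE per BOM line, in lineNo order,
          // delta = -(quantityPerUnit × outputQuantity).
          val issues = inputs.sortedBy { it.lineNo }.map {
              Movement("MATERIAL_ISSUE", it.itemCode,
                  it.quantityPerUnit.multiply(outputQuantity).negate(), ref)
          }
          // Empty inputs is legal: the v1 auto-spawn path writes only
          // the PRODUCTION_RECEIPT.
          return issues + Movement("PRODUCTION_RECEIPT", outputItemCode,
              outputQuantity, ref)
      }
      ```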
    
      api.v1 grows two events (WorkOrderStartedEvent,
      WorkOrderScrappedEvent) alongside the three that already existed.
      Since this is additive within a major version, the api.v1 semver
      contract holds — existing subscribers continue to compile.
    
      Liquibase: 002-production-v2.xml widens the status CHECK and
      creates production__work_order_input with (work_order_id FK,
      line_no, item_code, quantity_per_unit, source_location_code) plus
      a unique (work_order_id, line_no) constraint, a CHECK
      quantity_per_unit > 0, and the audit columns. ON DELETE CASCADE
      from the parent.
    
      Unit tests: WorkOrderServiceTest grows from 8 to 18 cases —
      covers start happy path, start rejection, complete-on-DRAFT
      rejection, empty-BOM complete, BOM-with-two-lines complete
      (verifies both MATERIAL_ISSUE deltas AND the PRODUCTION_RECEIPT
      all fire with the right references), scrap happy path, scrap on
      non-COMPLETED rejection, scrap with non-positive quantity
      rejection, cancel-from-IN_PROGRESS, and BOM validation rejects
      (unknown item, duplicate line_no).
    
    Smoke verified end-to-end against real Postgres:
      - Created WO-SMOKE with 2-line BOM (2 paper + 0.5 ink per
        brochure, output 100).
      - Started (DRAFT → IN_PROGRESS, no ledger rows).
      - Completed: paper balance 500→300 (MATERIAL_ISSUE -200),
        ink 200→150 (MATERIAL_ISSUE -50), FG-BROCHURE 0→100
        (PRODUCTION_RECEIPT +100). All 3 rows tagged WO:WO-SMOKE.
      - Scrapped 7 units: FG-BROCHURE 100→93, ADJUSTMENT -7 tagged
        WO:WO-SMOKE:SCRAP, work order stayed COMPLETED.
      - Auto-spawn: SO-42 confirm still creates WO-FROM-SO-42-L1 as a
        DRAFT with empty BOM; starting + completing it writes only the
        PRODUCTION_RECEIPT (zero MATERIAL_ISSUE rows), proving the
        empty-BOM path is backwards-compatible.
      - Negative paths: complete-on-DRAFT 400s, scrap-on-DRAFT 400s,
        double-start 400s, cancel-from-IN_PROGRESS 200.
    
    240 unit tests, 18 Gradle subprojects.
  • The framework's eighth PBC and the first one that's NOT order- or
    master-data-shaped. Work orders are about *making things*, which is
    the reason the printing-shop reference customer exists in the first
    place. With this PBC in place the framework can express the full
    buy-sell-make loop end-to-end.
    
    What landed (new module pbc/pbc-production/)
      - WorkOrder entity (production__work_order):
          code, output_item_code, output_quantity, status (DRAFT|COMPLETED|
          CANCELLED), due_date (display-only), source_sales_order_code
          (nullable — work orders can be either auto-spawned from a
          confirmed SO or created manually), ext.
      - WorkOrderJpaRepository with existsBySourceSalesOrderCode /
        findBySourceSalesOrderCode for the auto-spawn dedup.
      - WorkOrderService.create / complete / cancel:
          • create validates the output item via CatalogApi (same seam
            SalesOrderService and PurchaseOrderService use), rejects
            non-positive quantities, publishes WorkOrderCreatedEvent.
          • complete(outputLocationCode) credits finished goods to the
            named location via InventoryApi.recordMovement with
            reason=PRODUCTION_RECEIPT (added in commit c52d0d59) and
            reference="WO:<order_code>", then flips status to COMPLETED,
            then publishes WorkOrderCompletedEvent — all in the same
            @Transactional method.
          • cancel only allowed from DRAFT (no un-producing finished
            goods); publishes WorkOrderCancelledEvent.
      - SalesOrderConfirmedSubscriber (@PostConstruct →
        EventBus.subscribe(SalesOrderConfirmedEvent::class.java, ...)):
        walks the confirmed sales order's lines via SalesOrdersApi
        (NOT by importing pbc-orders-sales) and calls
        WorkOrderService.create for each line. Coded as one bean with
        one subscription — matches pbc-finance's one-bean-per-subject
        pattern.
          • Idempotent on source sales order code — if any work order
            already exists for the SO, the whole spawn is a no-op.
          • Tolerant of a missing SO (defensive against a future async
            bus that could deliver the confirm event after the SO has
            vanished).
          • The WO code convention: WO-FROM-<so_code>-L<lineno>, e.g.
            WO-FROM-SO-2026-0001-L1.
    
      - REST controller /api/v1/production/work-orders: list, get,
        by-code, create, complete, cancel — each annotated with
        @RequirePermission. Four permission keys declared in the
        production.yml metadata: read / create / complete / cancel.
      - CompleteWorkOrderRequest: single-arg DTO uses the
        @JsonCreator(mode=PROPERTIES) + @param:JsonProperty trick that
        already bit ShipSalesOrderRequest and ReceivePurchaseOrderRequest;
        cross-referenced in the KDoc so the third instance doesn't need
        re-discovery.
      - distribution/.../pbc-production/001-production-init.xml:
        CREATE TABLE with CHECK on status + CHECK on qty>0 + GIN on ext
        + the usual indexes. NEITHER output_item_code NOR
        source_sales_order_code is a foreign key (cross-PBC reference
        policy — guardrail #9).
      - settings.gradle.kts + distribution/build.gradle.kts: registers
        the new module and adds it to the distribution dependency list.
      - master.xml: includes the new changelog in dependency order,
        after pbc-finance.
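    The subscriber's spawn rule, sketched as a pure function (the
    `existsForSo` parameter is a hypothetical stand-in for the
    repository's existsBySourceSalesOrderCode check; event plumbing and
    WorkOrderService.create are out of frame):

    ```kotlin
    data class SoLine(val lineNo: Int, val itemCode: String, val quantity: Int)

    fun spawnWorkOrders(
        soCode: String,
        lines: List<SoLine>,
        existsForSo: (String) -> Boolean,
    ): List<String> {
        // Idempotent on source sales order code: if any work order
        // already exists for the SO, the whole spawn is a no-op.
        if (existsForSo(soCode)) return emptyList()
        // WO code convention: WO-FROM-<so_code>-L<lineno>.
        return lines.map { "WO-FROM-$soCode-L${it.lineNo}" }
    }
    ```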
    
    New api.v1 surface: org.vibeerp.api.v1.event.production.*
      - WorkOrderCreatedEvent, WorkOrderCompletedEvent,
        WorkOrderCancelledEvent — sealed under WorkOrderEvent,
        aggregateType="production.WorkOrder". Same pattern as the
        order events, so any future consumer (finance revenue
        recognition, warehouse put-away dashboard, a customer plug-in
        that needs to react to "work finished") subscribes through the
        public typed-class overload with no dependency on pbc-production.
    
    Unit tests (13 new, 217 → 230 total)
      - WorkOrderServiceTest (9 tests): create dedup, positive quantity
        check, catalog seam, happy-path create with event assertion,
        complete rejects non-DRAFT, complete happy path with
        InventoryApi.recordMovement assertion + event assertion, cancel
        from DRAFT, cancel rejects COMPLETED.
      - SalesOrderConfirmedSubscriberTest (5 tests): subscription
        registration count, spawns N work orders for N SO lines with
        correct code convention, idempotent when WOs already exist,
        no-op on missing SO, and a listener-routing test that captures
        the EventListener instance and verifies it forwards to the
        right service method.
    
    End-to-end smoke verified against real Postgres
      - Fresh DB, fresh boot. Both OrderEventSubscribers (pbc-finance)
        and SalesOrderConfirmedSubscriber (pbc-production) log their
        subscription registration before the first HTTP call.
      - Seeded two items (BROCHURE-A, BROCHURE-B), a customer, and a
        finished-goods location (WH-FG).
      - Created a 2-line sales order (SO-WO-1), confirmed it.
          → Produced ONE orders_sales.SalesOrder outbox row.
          → Produced ONE AR POSTED finance__journal_entry for 1000 USD
            (500 × 1 + 250 × 2 — the pbc-finance consumer still works).
          → Produced TWO draft work orders auto-spawned from the SO
            lines: WO-FROM-SO-WO-1-L1 (BROCHURE-A × 500) and
            WO-FROM-SO-WO-1-L2 (BROCHURE-B × 250), both with
            source_sales_order_code=SO-WO-1.
      - Completed WO1 to WH-FG:
          → Produced a PRODUCTION_RECEIPT ledger row for BROCHURE-A
            delta=500 reference="WO:WO-FROM-SO-WO-1-L1".
          → inventory__stock_balance now has BROCHURE-A = 500 at WH-FG.
          → Flipped status to COMPLETED.
      - Cancelled WO2 → CANCELLED.
      - Created a manual WO-MANUAL-1 with no source SO → succeeds;
        demonstrates the "operator creates a WO to build inventory
        ahead of demand" path.
      - platform__event_outbox ends with 6 rows all DISPATCHED:
          orders_sales.SalesOrder SO-WO-1
          production.WorkOrder WO-FROM-SO-WO-1-L1  (created)
          production.WorkOrder WO-FROM-SO-WO-1-L2  (created)
          production.WorkOrder WO-FROM-SO-WO-1-L1  (completed)
          production.WorkOrder WO-FROM-SO-WO-1-L2  (cancelled)
          production.WorkOrder WO-MANUAL-1         (created)
    
    Why this chunk was the right next move
      - pbc-finance was a PASSIVE consumer — it only wrote derived
        reporting state. pbc-production is the first ACTIVE consumer:
        it creates new aggregates with their own state machines and
        their own cross-PBC writes in reaction to another PBC's events.
        This is a meaningfully harder test of the event-driven
        integration story and it passes end-to-end.
      - "One ledger, three callers" is now real: sales shipments,
        purchase receipts, AND production receipts all feed the same
        inventory__stock_movement ledger through the same
        InventoryApi.recordMovement facade. The facade has proven
        stable under three very different callers.
      - The framework now expresses the basic ERP trinity: buy
        (purchase orders), sell (sales orders), make (work orders).
        That's the shape every real manufacturing customer needs, and
        it's done without any PBC importing another.
    
    What's deliberately NOT in v1
      - No bill of materials. complete() only credits finished goods;
        it does NOT issue raw materials. A shop floor that needs to
        consume 4 sheets of paper to produce 1 brochure does it
        manually via POST /api/v1/inventory/movements with reason=
        MATERIAL_ISSUE (added in commit c52d0d59). A proper BOM lands
        as WorkOrderInput lines in a future chunk.
      - No IN_PROGRESS state. complete() goes DRAFT → COMPLETED in
        one step. A real shop floor needs "started but not finished"
        visibility; that's the next iteration.
      - No routings, operations, machine assignments, or due-date
        enforcement. due_date is display-only.
      - No "scrap defective output" flow for a COMPLETED work order.
        cancel refuses from COMPLETED; the fix requires a new
        MovementReason and a new event, not a special-case method
        on the service.
  • Extends pbc-inventory's MovementReason enum with the two reasons a
    production-style PBC needs to record stock movements through the
    existing InventoryApi.recordMovement facade. No new endpoint, no
    new database column — just two new enum values, two new
    sign-validation rules, and four new tests.
    
    Why this lands BEFORE pbc-production
      - It's the smallest self-contained change that unblocks any future
        production-related code (the framework's planned pbc-production,
        a customer plug-in's manufacturing module, or even an ad-hoc
        operator script). Each of those callers can now record
        "consume raw material" / "produce finished good" through the
        same primitive that already serves sales shipments and purchase
        receipts.
      - It validates the "one ledger, many callers" property the
        architecture spec promised. Adding a new movement reason takes
        zero schema changes (the column is varchar) and zero plug-in
        changes (the api.v1 facade takes the reason as a string and
        delegates to MovementReason.valueOf inside the adapter). The
        enum lives entirely inside pbc-inventory.
    
    What changed
      - StockMovement.kt: enum gains MATERIAL_ISSUE (Δ ≤ 0) and
        PRODUCTION_RECEIPT (Δ ≥ 0), with KDoc explaining why each one
        was added and how they fit the "one primitive for every direction"
        story.
      - StockMovementService.validateSign: PRODUCTION_RECEIPT joins the
        must-be-non-negative bucket alongside RECEIPT, PURCHASE_RECEIPT,
        and TRANSFER_IN; MATERIAL_ISSUE joins the must-be-non-positive
        bucket alongside ISSUE, SALES_SHIPMENT, and TRANSFER_OUT.
      - 4 new unit tests:
          • record rejects positive delta on MATERIAL_ISSUE
          • record rejects negative delta on PRODUCTION_RECEIPT
          • record accepts a positive PRODUCTION_RECEIPT (happy path,
            new balance row at the receiving location)
          • record accepts a negative MATERIAL_ISSUE (decrements an
            existing balance from 1000 → 800)
      - Total tests: 213 → 217.
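    The sign buckets as a sketch (reasons are plain strings here; the
    real rule lives on the MovementReason enum inside
    StockMovementService.validateSign, and the error wording below
    mirrors the smoke-test output):

    ```kotlin
    import java.math.BigDecimal

    val NON_NEGATIVE = setOf(
        "RECEIPT", "PURCHASE_RECEIPT", "TRANSFER_IN", "PRODUCTION_RECEIPT")
    val NON_POSITIVE = setOf(
        "ISSUE", "SALES_SHIPMENT", "TRANSFER_OUT", "MATERIAL_ISSUE")

    fun validateSign(reason: String, delta: BigDecimal) {
        when {
            reason in NON_NEGATIVE && delta.signum() < 0 ->
                throw IllegalArgumentException(
                    "movement reason $reason requires a non-negative delta (got $delta)")
            reason in NON_POSITIVE && delta.signum() > 0 ->
                throw IllegalArgumentException(
                    "movement reason $reason requires a non-positive delta (got $delta)")
        }
    }
    ```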
    
    Smoke test against real Postgres
      - Booted on a fresh DB; no schema migration needed because the
        `reason` column is varchar(32), already wide enough.
      - Seeded an item RAW-PAPER, an item FG-WIDGET, and a location
        WH-PROD via the existing endpoints.
      - POST /api/v1/inventory/movements with reason=RECEIPT for 1000
        raw paper → balance row at 1000.
      - POST /api/v1/inventory/movements with reason=MATERIAL_ISSUE
        delta=-200 reference="WO:WO-EVT-1" → balance becomes 800,
        ledger row written.
      - POST /api/v1/inventory/movements with reason=PRODUCTION_RECEIPT
        delta=50 reference="WO:WO-EVT-1" → balance row at 50 for
        FG-WIDGET, ledger row written.
      - Negative test: POST PRODUCTION_RECEIPT with delta=-1 →
        400 Bad Request "movement reason PRODUCTION_RECEIPT requires
        a non-negative delta (got -1)" — the new sign rule fires.
      - Final ledger has 3 rows (RECEIPT, MATERIAL_ISSUE,
        PRODUCTION_RECEIPT); final balance has FG-WIDGET=50 and
        RAW-PAPER=800 — the math is correct.
    
    What's deliberately NOT in this chunk
      - No pbc-production yet. That's the next chunk; this is just
        the foundation that lets it (or any other production-ish
        caller) write to the ledger correctly without needing changes
        to api.v1 or pbc-inventory ever again.
      - No new return-path reasons (RETURN_FROM_CUSTOMER,
        RETURN_TO_SUPPLIER) — those land when the returns flow does.
      - No enforcement of the "WO:" reference convention — it's documented
        in the KDoc on `reference`, not enforced anywhere. The v0.16/v0.17
        convention "<source>:<code>" continues unchanged.
    zichun authored
     
    Browse Code »
  • The minimal pbc-finance that landed in commit bf090c2e reacted only to
    *ConfirmedEvent. This change wires up the rest of the order lifecycle
    (ship/receive → SETTLED, cancel → REVERSED) so the journal entry
    reflects what actually happened to the order, not just the moment
    it was confirmed.
    
    JournalEntryStatus (new enum + new column)
      - POSTED   — created from a confirm event (existing behaviour)
      - SETTLED  — promoted by SalesOrderShippedEvent /
                   PurchaseOrderReceivedEvent
      - REVERSED — promoted by SalesOrderCancelledEvent /
                   PurchaseOrderCancelledEvent
      - The status field is intentionally a separate axis from
        JournalEntryType: type tells you "AR or AP", status tells you
        "where in its lifecycle".
    
    distribution/.../pbc-finance/002-finance-status.xml
      - ALTER TABLE adds `status varchar(16) NOT NULL DEFAULT 'POSTED'`,
        a CHECK constraint mirroring the enum values, and an index on
        status for the new filter endpoint. The DEFAULT 'POSTED' covers
        any existing rows on an upgraded environment without a backfill
        step.
    
    JournalEntryService — four new methods, all idempotent
      - settleFromSalesShipped(event)        → POSTED → SETTLED for AR
      - settleFromPurchaseReceived(event)    → POSTED → SETTLED for AP
      - reverseFromSalesCancelled(event)     → POSTED → REVERSED for AR
      - reverseFromPurchaseCancelled(event)  → POSTED → REVERSED for AP
      Each runs through a private settleByOrderCode/reverseByOrderCode
      helper that:
        1. Looks up the row by order_code (new repo method
           findFirstByOrderCode). If absent → no-op (e.g. cancel from
           DRAFT means no *ConfirmedEvent was ever published, so no
           journal entry exists; this is the most common cancel path).
        2. If the row is already in the destination status → no-op
           (idempotent under at-least-once delivery, e.g. outbox replay
           or future Kafka retry).
        3. Refuses to overwrite a contradictory terminal status — a
           SETTLED row cannot be REVERSED, and vice versa. The producer's
           state machine forbids cancel-from-shipped/received, so
           reaching here implies an upstream contract violation; logged
           at WARN and the row is left alone.
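The three-step helper contract above can be reduced to a small pure sketch. The entity shape and helper name here are stand-ins for the real pbc-finance types, not the actual source:

```kotlin
enum class JournalEntryStatus { POSTED, SETTLED, REVERSED }

data class JournalEntry(val orderCode: String, var status: JournalEntryStatus)

// One shared shape for settleByOrderCode / reverseByOrderCode.
fun transition(entry: JournalEntry?, target: JournalEntryStatus, warn: (String) -> Unit) {
    when {
        entry == null -> Unit                        // 1. no row (cancel-from-DRAFT): no-op
        entry.status == target -> Unit               // 2. already there (replay): no-op
        entry.status != JournalEntryStatus.POSTED -> // 3. contradictory terminal status
            warn("refusing ${entry.status} -> $target for ${entry.orderCode}")
        else -> entry.status = target                //    POSTED -> SETTLED / REVERSED
    }
}
```

The ordering matters: the idempotency check (step 2) runs before the terminal-status guard (step 3), so redelivered events are silent no-ops while genuine contract violations still surface at WARN.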
    
    OrderEventSubscribers — six subscriptions registered in @PostConstruct
      - All six order events from api.v1.event.orders.* are subscribed
        via the typed-class EventBus.subscribe(eventType, listener)
        overload, the same public API a plug-in would use. Boot log
        line updated: "pbc-finance subscribed to 6 order events".
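What those six subscribe calls boil down to, sketched against a toy in-process bus. The EventBus shape here is assumed from the description above, not the real api.v1 interface:

```kotlin
fun interface EventListener<E> { fun onEvent(event: E) }

// Toy stand-in for the typed-class subscribe overload a PBC or plug-in uses.
class ToyEventBus {
    private val listeners = mutableMapOf<Class<*>, MutableList<EventListener<*>>>()

    fun <E : Any> subscribe(eventType: Class<E>, listener: EventListener<E>) {
        listeners.getOrPut(eventType) { mutableListOf() }.add(listener)
    }

    @Suppress("UNCHECKED_CAST")
    fun <E : Any> publish(event: E) {
        listeners[event.javaClass].orEmpty().forEach { (it as EventListener<E>).onEvent(event) }
    }
}

data class SalesOrderConfirmedEvent(val orderCode: String)

// What the @PostConstruct body boils down to — one subscribe call per event
// type; the real subscriber makes six of these.
fun wire(bus: ToyEventBus, sink: MutableList<String>) {
    bus.subscribe(SalesOrderConfirmedEvent::class.java) { e -> sink += "confirmed:" + e.orderCode }
}
```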
    
    JournalEntryController — new ?status= filter
      - GET /api/v1/finance/journal-entries?status=POSTED|SETTLED|REVERSED
        surfaces the partition. Existing ?orderCode= and ?type= filters
        unchanged. Read permission still finance.journal.read.
    
    12 new unit tests (213 total, was 201)
      - JournalEntryServiceTest: settle/reverse for AR + AP, idempotency
        on duplicate destination status, refusal to overwrite a
        contradictory terminal status, no-op on missing row, default
        POSTED on new entries.
      - OrderEventSubscribersTest: asserts all SIX subscriptions are
        registered; one new test captures all four lifecycle listeners and
        verifies they forward to the correct service methods.
    
    End-to-end smoke (real Postgres, fresh DB)
      - Booted with the new DDL applied (status column + CHECK + index)
        on an empty DB. The OrderEventSubscribers @PostConstruct line
        confirms 6 subscriptions registered before the first HTTP call.
      - Five lifecycle scenarios driven via REST:
          PO-FULL:        confirm + receive  → AP SETTLED  amount=50.00
          SO-FULL:        confirm + ship     → AR SETTLED  amount= 1.00
          SO-REVERSE:     confirm + cancel   → AR REVERSED amount= 1.00
          PO-REVERSE:     confirm + cancel   → AP REVERSED amount=50.00
          SO-DRAFT-CANCEL: cancel only       → NO ROW (no confirm event)
      - finance__journal_entry contains exactly 4 rows (the 5th scenario
        correctly produces nothing) and each ?status filter returns the
        expected partition (POSTED=0, SETTLED=2, REVERSED=2).
    
    What's still NOT in pbc-finance
      - Still no debit/credit legs, no chart of accounts, no period
        close, no double-entry invariant. This is the v0.17 minimal
        seed; the real P5.9 build promotes it into a real GL.
      - No reaction to "settle then reverse" or "reverse then settle"
        other than the WARN-and-leave-alone defensive path. A real GL
        would write a separate compensating journal entry; the minimal
        PBC just keeps the row immutable once it leaves POSTED.
  • The framework's seventh PBC, and the first one whose ENTIRE purpose
    is to react to events published by other PBCs. It validates the
    *consumer* side of the cross-PBC event seam that was wired up in
    commit 67406e87 (event-driven cross-PBC integration). With pbc-finance
    in place, the bus now has both producers and consumers in real PBC
    business logic — not just the wildcard EventAuditLogSubscriber that
    ships with platform-events.
    
    What landed (new module pbc/pbc-finance/, ~480 lines including tests)
      - JournalEntry entity (finance__journal_entry):
          id, code (= originating event UUID), type (AR|AP),
          partner_code, order_code, amount, currency_code, posted_at, ext.
          Unique index on `code` is the durability anchor for idempotent
          event delivery; the service ALSO existsByCode-checks before
          insert to make duplicate-event handling a clean no-op rather
          than a constraint-violation exception.
      - JournalEntryJpaRepository with existsByCode + findByOrderCode +
        findByType (the read-side filters used by the controller).
      - JournalEntryService.recordSalesConfirmed / recordPurchaseConfirmed
        take a SalesOrderConfirmedEvent / PurchaseOrderConfirmedEvent and
        write the corresponding AR/AP row. @Transactional with
        Propagation.REQUIRED so the listener joins the publisher's TX
        when the bus delivers synchronously (today) and creates a fresh
        one if a future async bus delivers from a worker thread. The
        KDoc explains why REQUIRED is the correct default and why
        REQUIRES_NEW would be wrong here.
      - OrderEventSubscribers @Component with @PostConstruct that calls
        EventBus.subscribe(SalesOrderConfirmedEvent::class.java, ...)
        and EventBus.subscribe(PurchaseOrderConfirmedEvent::class.java, ...)
        once at boot. Uses the public typed-class subscribe overload —
        NOT the platform-internal subscribeToAll wildcard helper. This
        is the API surface plug-ins will also use.
      - JournalEntryController: read-only REST under
        /api/v1/finance/journal-entries with @RequirePermission
        "finance.journal.read". Filter params: ?orderCode= and ?type=.
        Deliberately no POST endpoint — entries are derived state.
      - finance.yml metadata declaring 1 entity, 1 permission, 1 menu.
      - Liquibase changelog at distribution/.../pbc-finance/001-finance-init.xml
        + master.xml include + distribution/build.gradle.kts dep.
      - settings.gradle.kts: registers :pbc:pbc-finance.
      - 9 new unit tests (6 for JournalEntryService, 3 for
        OrderEventSubscribers) — including idempotency, dedup-by-event-id
        contract, listener-forwarding correctness via slot-captured
        EventListener invocation. Total tests: 192 → 201, 16 → 17 modules.
    
    Why this is the right shape
      - pbc-finance has zero source dependency on pbc-orders-sales,
        pbc-orders-purchase, pbc-partners, or pbc-catalog. The Gradle
        build refuses any cross-PBC dependency at configuration time —
        pbc-finance only declares api/api-v1, platform-persistence, and
        platform-security. The events it consumes live in
        api.v1.event.orders; the partner and item references it stores are
        opaque string codes.
      - Subscribers go through EventBus.subscribe(eventType, listener),
        the public typed-class overload from api.v1.event.EventBus.
        Plug-ins use exactly this API; this PBC proves the API works
        end-to-end from a real consumer.
      - The consumer is idempotent on the producer's event id, so
        at-least-once delivery (outbox replay, future Kafka retry)
        cannot create duplicate journal entries. This makes the
        consumer correct under both the current synchronous bus and
        any future async / out-of-process bus.
      - Read-only REST API: derived state should not be writable from
        the outside. Adjustments and reversals will land later as their
        own command verbs when the real P5.9 finance build needs them,
        not as a generic create endpoint.
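The dedup-by-event-id contract in miniature. Names are assumed; the real consumer pairs an existsByCode pre-check with the unique index on `code` as the durability backstop:

```kotlin
import java.util.UUID

// Toy consumer: journal rows keyed by the originating event UUID.
class JournalLedger {
    private val rowsByCode = mutableMapOf<UUID, String>()

    /** Returns true if a row was written, false if the event was a duplicate. */
    fun recordConfirmed(eventId: UUID, orderCode: String): Boolean {
        if (rowsByCode.containsKey(eventId)) return false // existsByCode: clean no-op
        rowsByCode[eventId] = orderCode                   // unique index is the backstop
        return true
    }

    fun size() = rowsByCode.size
}
```

Redelivering the same event (outbox replay, a future Kafka retry) is a no-op rather than a constraint-violation exception, which is what makes the consumer correct under at-least-once delivery.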
    
    End-to-end smoke verified against real Postgres
      - Booted on a fresh DB; the OrderEventSubscribers @PostConstruct
        log line confirms the subscription registered before any HTTP
        traffic.
      - Seeded an item, supplier, customer, location (existing PBCs).
      - Created PO PO-FIN-1 (5000 × 0.04 = 200 USD) → confirmed →
        GET /api/v1/finance/journal-entries returns ONE row:
          type=AP partner=SUP-PAPER order=PO-FIN-1 amount=200.0000 USD
      - Created SO SO-FIN-1 (50 × 0.10 = 5 USD) → confirmed →
        GET /api/v1/finance/journal-entries now returns TWO rows:
          type=AR partner=CUST-ACME order=SO-FIN-1 amount=5.0000 USD
          (plus the AP row from above)
      - GET /api/v1/finance/journal-entries?orderCode=PO-FIN-1 →
        only the AP row.
      - GET /api/v1/finance/journal-entries?type=AR → only the AR row.
      - platform__event_outbox shows 2 rows (one per confirm) both
        DISPATCHED, finance__journal_entry shows 2 rows.
      - The journal-entry code column equals the originating event
        UUID, proving the dedup contract is wired.
    
    What this is NOT (yet)
      - Not a real general ledger. No debit/credit legs, no chart of
        accounts, no period close, no double-entry invariant. P5.9
        promotes this minimal seed into a real finance PBC.
      - No reaction to ship/receive/cancel events yet — only confirm.
        Real revenue recognition (which happens at ship time for most
        accounting standards) lands with the P5.9 build.
      - No outbound api.v1.ext facade. pbc-finance does not (yet)
        expose itself to other PBCs; it is a pure consumer. When
        pbc-production needs to know "did this order's invoice clear",
        that facade gets added.
  • The event bus and transactional outbox have existed since P1.7 but no
    real PBC business logic was publishing through them. This change closes
    that loop end-to-end:
    
    api.v1.event.orders (new public surface)
      - SalesOrderConfirmedEvent / SalesOrderShippedEvent /
        SalesOrderCancelledEvent — sealed under SalesOrderEvent,
        aggregateType = "orders_sales.SalesOrder"
      - PurchaseOrderConfirmedEvent / PurchaseOrderReceivedEvent /
        PurchaseOrderCancelledEvent — sealed under PurchaseOrderEvent,
        aggregateType = "orders_purchase.PurchaseOrder"
      - Events live in api.v1 (not inside the PBCs) so other PBCs and
        customer plug-ins can subscribe without importing the producing
        PBC — that would violate guardrail #9.
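A hedged sketch of the sealed hierarchy's shape — the class names and aggregateType string come from the text above, but the payload fields are illustrative only:

```kotlin
// Assumed shape; the real api.v1 events carry richer payloads.
sealed interface SalesOrderEvent {
    val orderCode: String
    val aggregateType: String get() = "orders_sales.SalesOrder"
}

data class SalesOrderConfirmedEvent(override val orderCode: String) : SalesOrderEvent
data class SalesOrderShippedEvent(override val orderCode: String) : SalesOrderEvent
data class SalesOrderCancelledEvent(override val orderCode: String) : SalesOrderEvent
```

Sealing the hierarchy lets a subscriber exhaustively `when` over the three lifecycle events, while the shared aggregateType supports the topic-string subscribe overload for cross-classloader consumers.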
    
    pbc-orders-sales / pbc-orders-purchase
      - SalesOrderService and PurchaseOrderService now inject EventBus
        and publish a typed event from each state-changing method
        (confirm, ship/receive, cancel). The publish runs INSIDE the
        same @Transactional method as the JPA mutation and the
        InventoryApi.recordMovement ledger writes — EventBusImpl uses
        Propagation.MANDATORY, so a publish outside a transaction
        fails loudly. A failure in any line rolls back the status
        change AND every ledger row AND the would-have-been outbox row.
      - 6 new unit tests (3 per service) mockk the EventBus and verify
        each transition publishes exactly one matching event with the
        expected fields. Total tests: 186 → 192.
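The Propagation.MANDATORY guarantee, mimicked without Spring. This is a toy stand-in — the real EventBusImpl gets this behaviour from the transaction manager, not a flag:

```kotlin
// Toy bus that refuses to publish outside a "transaction".
class MandatoryBus {
    var inTransaction = false
    val published = mutableListOf<Any>()

    fun publish(event: Any) {
        check(inTransaction) { "EventBus.publish requires an active transaction" }
        published += event
    }
}

// Stand-in for the @Transactional boundary around confirm/ship/cancel.
fun <T> transactional(bus: MandatoryBus, body: () -> T): T {
    bus.inTransaction = true
    try {
        return body()
    } finally {
        bus.inTransaction = false
    }
}
```

A publish outside the boundary fails loudly, which is the property that keeps the status change, the ledger rows, and the outbox row atomic.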
    
    End-to-end smoke verified against real Postgres
      - Created supplier, customer, item PAPER-A4, location WH-MAIN.
      - Drove a PO and an SO through the full state machine plus a
        cancel of each. 6 events fired:
          orders_purchase.PurchaseOrder × 3 (confirm + receive + cancel)
          orders_sales.SalesOrder       × 3 (confirm + ship + cancel)
      - The wildcard EventAuditLogSubscriber logged each one at INFO
        level to /tmp/vibe-erp-boot.log with the [event-audit] tag.
      - platform__event_outbox shows 6 rows, all flipped from PENDING
        to DISPATCHED by the OutboxPoller within seconds.
      - The publish-inside-the-ledger-transaction guarantee means a
        subscriber that reads inventory__stock_movement on event
        receipt is guaranteed to see the matching SALES_SHIPMENT or
        PURCHASE_RECEIPT rows. This is what the architecture spec
        section 9 promised and now delivers.
    
    Why this is the right shape
      - Other PBCs (production, finance) and customer plug-ins can now
        react to "an order was confirmed/shipped/received/cancelled"
        without ever importing pbc-orders-* internals. The event class
        objects live in api.v1, the only stable contract surface.
      - The aggregateType strings ("orders_sales.SalesOrder",
        "orders_purchase.PurchaseOrder") match the <pbc>.<aggregate>
        convention documented on DomainEvent.aggregateType, so a
        cross-classloader subscriber can use the topic-string subscribe
        overload without holding the concrete Class<E>.
      - The bus's outbox row is the durability anchor for the future
        Kafka/NATS bridge: switching from in-process delivery to
        cross-process delivery will require zero changes to either
        PBC's publish call.
  • The buying-side mirror of pbc-orders-sales. Adds the 6th real PBC
    and closes the loop: the framework now does both directions of the
    inventory flow through the same `InventoryApi.recordMovement` facade.
    Buy stock with a PO that hits RECEIVED, ship stock with a SO that
    hits SHIPPED, both feed the same `inventory__stock_movement` ledger.
    
    What landed
    -----------
    * New Gradle subproject `pbc/pbc-orders-purchase` (16 modules total
      now). Same dependency set as pbc-orders-sales, same architectural
      enforcement — no direct dependency on any other PBC; cross-PBC
      references go through `api.v1.ext.<pbc>` facades at runtime.
    * Two JPA entities mirroring SalesOrder / SalesOrderLine:
      - `PurchaseOrder` (header) — code, partner_code (varchar, NOT a
        UUID FK), status enum DRAFT/CONFIRMED/RECEIVED/CANCELLED,
        order_date, expected_date (nullable, the supplier's promised
        delivery date), currency_code, total_amount, ext jsonb.
      - `PurchaseOrderLine` — purchase_order_id FK, line_no, item_code,
        quantity, unit_price, currency_code. Same shape as the sales
        order line; the api.v1 facade reuses `SalesOrderLineRef` rather
        than declaring a duplicate type.
    * `PurchaseOrderService.create` performs three cross-PBC validations
      in one transaction:
      1. PartnersApi.findPartnerByCode → reject if null.
      2. The partner's `type` must be SUPPLIER or BOTH (a CUSTOMER-only
         partner cannot be the supplier of a purchase order — the
         mirror of the sales-order rule that rejects SUPPLIER-only
         partners as customers).
      3. CatalogApi.findItemByCode for EVERY line.
      Then validates: at least one line, no duplicate line numbers,
      positive quantity, non-negative price, currency matches header.
      The header total is RECOMPUTED from the lines (caller's value
      ignored — never trust a financial aggregate sent over the wire).
    * State machine enforced by `confirm()`, `cancel()`, and `receive()`:
      - DRAFT → CONFIRMED   (confirm)
      - DRAFT → CANCELLED   (cancel)
      - CONFIRMED → CANCELLED (cancel before receipt)
      - CONFIRMED → RECEIVED  (receive — increments inventory)
      - RECEIVED → ×          (terminal; cancellation requires a
                                return-to-supplier flow)
    * `receive(id, receivingLocationCode)` walks every line and calls
      `inventoryApi.recordMovement(... +line.quantity reason="PURCHASE_RECEIPT"
      reference="PO:<order_code>")`. The whole operation runs in ONE
      transaction so a failure on any line rolls back EVERY line's
      already-written movement AND the order status change. The
      customer cannot end up with "5 of 7 lines received, status
      still CONFIRMED, ledger half-written".
    * New `POST /api/v1/orders/purchase-orders/{id}/receive` endpoint
      with body `{"receivingLocationCode": "WH-MAIN"}`, gated by
      `orders.purchase.receive`. The single-arg DTO has the same
      Jackson `@JsonCreator(mode = PROPERTIES)` workaround as
      `ShipSalesOrderRequest` (the trap is documented in the class
      KDoc with a back-reference to ShipSalesOrderRequest).
    * Confirm/cancel/receive endpoints carry `@RequirePermission`
      annotations (`orders.purchase.confirm`, `orders.purchase.cancel`,
      `orders.purchase.receive`). All three keys declared in the new
      `orders-purchase.yml` metadata.
    * New api.v1 facade `org.vibeerp.api.v1.ext.orders.PurchaseOrdersApi`
      + `PurchaseOrderRef`. Reuses the existing `SalesOrderLineRef`
      type for the line shape — buying and selling lines carry the
      same fields, so duplicating the ref type would be busywork.
    * `PurchaseOrdersApiAdapter` — sixth `*ApiAdapter` after Identity,
      Catalog, Partners, Inventory, SalesOrders.
    * `orders-purchase.yml` metadata declaring 2 entities, 6 permission
      keys, 1 menu entry under "Purchasing".
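The transition table above as a hypothetical pure function; the real enforcement lives inside `confirm()`, `cancel()`, and `receive()`:

```kotlin
enum class PurchaseOrderStatus { DRAFT, CONFIRMED, RECEIVED, CANCELLED }

// Sketch of the legal-transition check (assumed name, not the service code).
fun canTransition(from: PurchaseOrderStatus, target: PurchaseOrderStatus): Boolean =
    when (from to target) {
        PurchaseOrderStatus.DRAFT to PurchaseOrderStatus.CONFIRMED,
        PurchaseOrderStatus.DRAFT to PurchaseOrderStatus.CANCELLED,
        PurchaseOrderStatus.CONFIRMED to PurchaseOrderStatus.CANCELLED,
        PurchaseOrderStatus.CONFIRMED to PurchaseOrderStatus.RECEIVED -> true
        else -> false // RECEIVED and CANCELLED are terminal; returns need their own flow
    }
```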
    
    End-to-end smoke test (the full demo loop)
    ------------------------------------------
    Reset Postgres, booted the app, ran:
    * Login as admin
    * POST /catalog/items → PAPER-A4
    * POST /partners → SUP-PAPER (SUPPLIER)
    * POST /inventory/locations → WH-MAIN
    * GET /inventory/balances?itemCode=PAPER-A4 → [] (no stock)
    * POST /orders/purchase-orders → PO-2026-0001 for 5000 sheets
      @ $0.04 = total $200.00 (recomputed from the line)
    * POST /purchase-orders/{id}/confirm → status CONFIRMED
    * POST /purchase-orders/{id}/receive body={"receivingLocationCode":"WH-MAIN"}
      → status RECEIVED
    * GET /inventory/balances?itemCode=PAPER-A4 → quantity=5000
    * GET /inventory/movements?itemCode=PAPER-A4 →
      PURCHASE_RECEIPT delta=5000 ref=PO:PO-2026-0001
    
    Then the FULL loop with the sales side from the previous chunk:
    * POST /partners → CUST-ACME (CUSTOMER)
    * POST /orders/sales-orders → SO-2026-0001 for 50 sheets
    * confirm + ship from WH-MAIN
    * GET /inventory/balances?itemCode=PAPER-A4 → quantity=4950 (5000-50)
    * GET /inventory/movements?itemCode=PAPER-A4 →
      PURCHASE_RECEIPT delta=5000  ref=PO:PO-2026-0001
      SALES_SHIPMENT   delta=-50   ref=SO:SO-2026-0001
    
    The framework's `InventoryApi.recordMovement` facade now has TWO
    callers — pbc-orders-sales (negative deltas, SALES_SHIPMENT) and
    pbc-orders-purchase (positive deltas, PURCHASE_RECEIPT) — feeding
    the same ledger from both sides.
    
    Failure paths verified:
    * Re-receive a RECEIVED PO → 400 "only CONFIRMED orders can be received"
    * Cancel a RECEIVED PO → 400 "issue a return-to-supplier flow instead"
    * Create a PO from a CUSTOMER-only partner → 400 "partner 'CUST-ONLY'
      is type CUSTOMER and cannot be the supplier of a purchase order"
    
    Regression: catalog uoms, identity users, partners, inventory,
    sales orders, purchase orders, printing-shop plates with i18n,
    metadata entities (15 now, was 13) — all still HTTP 2xx.
    
    Build
    -----
    * `./gradlew build`: 16 subprojects, 186 unit tests (was 175),
      all green. The 11 new tests cover the same shapes as the
      sales-order tests but inverted: unknown supplier, CUSTOMER-only
      rejection, BOTH-type acceptance, unknown item, empty lines,
      total recomputation, confirm/cancel state machine,
      receive-rejects-non-CONFIRMED, receive-walks-lines-with-positive-
      delta, cancel-rejects-RECEIVED, cancel-CONFIRMED-allowed.
    
    What was deferred
    -----------------
    * **RFQs** (request for quotation) and **supplier price catalogs**
      — both sit alongside POs but neither is in v1.
    * **Partial receipts**. v1's RECEIVED is "all-or-nothing"; the
      supplier delivering 4500 of 5000 sheets is not yet modelled.
    * **Supplier returns / refunds**. The cancel-RECEIVED rejection
      message says "issue a return-to-supplier flow" — that flow
      doesn't exist yet.
    * **Three-way matching** (PO + receipt + invoice). Lands with
      pbc-finance.
    * **Multi-leg transfers**. TRANSFER_IN/TRANSFER_OUT exist in the
      movement enum but no service operation yet writes both legs
      in one transaction.
  • The killer demo finally works: place a sales order, ship it, watch
    inventory drop. This chunk lands the two pieces that close the loop:
    the inventory movement ledger (the audit-grade history of every
    stock change) and the sales-order /ship endpoint that calls
    InventoryApi.recordMovement to atomically debit stock for every line.
    
    This is the framework's FIRST cross-PBC WRITE flow. Every earlier
    cross-PBC call was a read (CatalogApi.findItemByCode,
    PartnersApi.findPartnerByCode, InventoryApi.findStockBalance).
    Shipping inverts that: pbc-orders-sales synchronously writes to
    inventory's tables (via the api.v1 facade) as a side effect of
    changing its own state, all in ONE Spring transaction.
    
    What landed
    -----------
    * New `inventory__stock_movement` table — append-only ledger
      (id, item_code, location_id FK, signed delta, reason enum,
      reference, occurred_at, audit cols). CHECK constraint
      `delta <> 0` rejects no-op rows. Indexes on item_code,
      location_id, the (item, location) composite, reference, and
      occurred_at. Migration is in its own changelog file
      (002-inventory-movement-ledger.xml) per the project convention
      that each new schema cut is a new file.
    * New `StockMovement` JPA entity + repository + `MovementReason`
      enum (RECEIPT, ISSUE, ADJUSTMENT, SALES_SHIPMENT, PURCHASE_RECEIPT,
      TRANSFER_OUT, TRANSFER_IN). Each value carries a documented sign
      convention; the service rejects mismatches (a SALES_SHIPMENT
      with positive delta is a caller bug, not silently coerced).
    * New `StockMovementService.record(...)` — the ONE entry point for
      changing inventory. Cross-PBC item validation via CatalogApi,
      local location validation, sign-vs-reason enforcement, and
      negative-balance rejection all happen BEFORE the write. The
      ledger row insert AND the balance row update happen in the
      SAME database transaction so the two cannot drift.
    * `StockBalanceService.adjust` refactored to delegate: it computes
      delta = newQty - oldQty and calls record(... ADJUSTMENT). The
      REST endpoint keeps its absolute-quantity semantics — operators
      type "the shelf has 47" not "decrease by 3" — but every
      adjustment now writes a ledger row too. A no-op adjustment
      (re-saving the same value) does NOT write a row, so the audit
      log doesn't fill with noise from operator clicks that didn't
      change anything.
    * New `StockMovementController` at `/api/v1/inventory/movements`:
      GET filters by itemCode, locationId, or reference (for "all
      movements caused by SO-2026-0001"); POST records a manual
      movement. Both protected by `inventory.stock.adjust`.
    * `InventoryApi` facade extended with `recordMovement(itemCode,
      locationCode, delta, reason: String, reference)`. The reason is
      a String in the api.v1 surface (not the local enum) so plug-ins
      don't import inventory's internal types — the closed set is
      documented on the interface. The adapter parses the string with
      a meaningful error on unknown values.
    * New `SHIPPED` status on `SalesOrderStatus`. Transitions:
      DRAFT → CONFIRMED → SHIPPED (terminal). Cancelling a SHIPPED
      order is rejected with "issue a return / refund flow instead".
    * New `SalesOrderService.ship(id, shippingLocationCode)`: walks
      every line, calls `inventoryApi.recordMovement(... -line.quantity
      reason="SALES_SHIPMENT" reference="SO:<order_code>")`, flips
      status to SHIPPED. The whole operation runs in ONE transaction
      so a failure on any line — bad item, bad location, would push
      balance negative — rolls back the order status change AND every
      other line's already-written movement. The customer never ends
      up with "5 of 7 lines shipped, status still CONFIRMED, ledger
      half-written".
    * New `POST /api/v1/orders/sales-orders/{id}/ship` endpoint with
      body `{"shippingLocationCode": "WH-MAIN"}`, gated by the new
      `orders.sales.ship` permission key.
    * `ShipSalesOrderRequest` is a single-arg Kotlin data class — same
      Jackson deserialization trap as `RefreshRequest`. Fixed with
      `@JsonCreator(mode = PROPERTIES) + @param:JsonProperty`. The
      trap is documented in the class KDoc.
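The adjust-delegates-to-record refactor, compressed into a toy in-memory sketch. All names are assumed, sign-vs-reason checks are elided, and the real services validate items and locations via CatalogApi and run both writes in one DB transaction:

```kotlin
import java.math.BigDecimal

class InventorySketch {
    val balances = mutableMapOf<String, BigDecimal>()      // item -> on-hand qty
    val ledger = mutableListOf<Pair<String, BigDecimal>>() // (reason, delta) rows

    // The ONE entry point for changing inventory.
    fun record(reason: String, item: String, delta: BigDecimal) {
        require(delta.signum() != 0) { "delta must be non-zero" }
        val next = (balances[item] ?: BigDecimal.ZERO) + delta
        require(next.signum() >= 0) { "would push balance for '$item' below zero" }
        balances[item] = next
        ledger += reason to delta // same "transaction" as the balance update
    }

    // Operators send an absolute quantity; the service converts it to a delta.
    fun adjust(item: String, newQty: BigDecimal) {
        val delta = newQty - (balances[item] ?: BigDecimal.ZERO)
        if (delta.signum() == 0) { balances.putIfAbsent(item, newQty); return } // no ledger noise
        record("ADJUSTMENT", item, delta)
    }
}
```

Note the no-op short-circuit: re-saving the same value writes no ledger row, matching the "operator clicks that didn't change anything" behaviour above.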
    
    End-to-end smoke test (the killer demo)
    ---------------------------------------
    Reset Postgres, booted the app, ran:
    * Login as admin
    * POST /catalog/items → PAPER-A4
    * POST /partners → CUST-ACME
    * POST /inventory/locations → WH-MAIN
    * POST /inventory/balances/adjust → quantity=1000
      (now writes a ledger row via the new path)
    * GET /inventory/movements?itemCode=PAPER-A4 →
      ADJUSTMENT delta=1000 ref=null
    * POST /orders/sales-orders → SO-2026-0001 (50 units of PAPER-A4)
    * POST /sales-orders/{id}/confirm → status CONFIRMED
    * POST /sales-orders/{id}/ship body={"shippingLocationCode":"WH-MAIN"}
      → status SHIPPED
    * GET /inventory/balances?itemCode=PAPER-A4 → quantity=950
      (1000 - 50)
    * GET /inventory/movements?itemCode=PAPER-A4 →
      ADJUSTMENT     delta=1000   ref=null
      SALES_SHIPMENT delta=-50    ref=SO:SO-2026-0001
    
    Failure paths verified:
    * Re-ship a SHIPPED order → 400 "only CONFIRMED orders can be shipped"
    * Cancel a SHIPPED order → 400 "issue a return / refund flow instead"
    * Place a 10000-unit order, confirm, try to ship from a 950-stock
      warehouse → 400 "stock movement would push balance for 'PAPER-A4'
      at location ... below zero (current=950.0000, delta=-10000.0000)";
      balance unchanged after the rollback (transaction integrity
      verified)
    
    Regression: catalog uoms, identity users, inventory locations,
    printing-shop plates with i18n, metadata entities — all still
    HTTP 2xx.
    
    Build
    -----
    * `./gradlew build`: 15 subprojects, 175 unit tests (was 163),
      all green. The 12 new tests cover:
      - StockMovementServiceTest (8): zero-delta rejection, positive
        SALES_SHIPMENT rejection, negative RECEIPT rejection, both
        signs allowed on ADJUSTMENT, unknown item via CatalogApi seam,
        unknown location, would-push-balance-negative rejection,
        new-row + existing-row balance update.
      - StockBalanceServiceTest, rewritten (5): negative-quantity
        early reject, delegation with computed positive delta,
        delegation with computed negative delta, no-op adjustment
        short-circuit (NO ledger row written), no-op on missing row
        creates an empty row at zero.
      - SalesOrderServiceTest, additions (3): ship rejects non-CONFIRMED,
        ship walks lines and calls recordMovement with negated quantity
        + correct reference, cancel rejects SHIPPED.
    
    What was deferred
    -----------------
    * **Event publication.** A `StockMovementRecorded` event would
      let pbc-finance and pbc-production react to ledger writes
      without polling. The event bus has been wired since P1.7 but
      no real cross-PBC flow uses it yet — that work spans the next
      chunk and the chunk after this commit.
    * **Multi-leg transfers.** TRANSFER_OUT and TRANSFER_IN are in
      the enum but no service operation atomically writes both legs
      yet (both legs in one transaction is required to keep total
      on-hand invariant).
    * **Reservation / pick lists.** "Reserve 50 of PAPER-A4 for an
      unconfirmed order" is its own concept that lands later.
    * **Shipped-order returns / refunds.** The cancel-SHIPPED rule
      points the user at "use a return flow" — that flow doesn't
      exist yet. v1 says shipments are terminal.
  • The framework's authorization layer is now live. Until now, every
    authenticated user could do everything; the framework had only an
    authentication gate. This chunk adds method-level @RequirePermission
    annotations enforced by a Spring AOP aspect that consults the JWT's
    roles claim and a metadata-driven role-permission map.
    
    What landed
    -----------
    * New `Role` and `UserRole` JPA entities mapping the existing
      identity__role + identity__user_role tables (the schema was
      created in the original identity init but never wired to JPA).
      RoleJpaRepository + UserRoleJpaRepository with a JPQL query that
      returns a user's role codes in one round-trip.
    * `JwtIssuer.issueAccessToken(userId, username, roles)` now accepts a
      Set<String> of role codes and encodes them as a `roles` JWT claim
      (sorted for deterministic tests). Refresh tokens NEVER carry roles
      by design — see the rationale on `JwtIssuer.issueRefreshToken`. A
      role revocation propagates within one access-token lifetime
      (15 min default).
    * `JwtVerifier` reads the `roles` claim into `DecodedToken.roles`.
      Missing claim → empty set, NOT an error (refresh tokens, system
      tokens, and pre-P4.3 tokens all legitimately omit it).
    * `AuthService.login` now calls `userRoles.findRoleCodesByUserId(...)`
      before minting the access token. `AuthService.refresh` re-reads
      the user's roles too — so a refresh always picks up the latest
      set, since refresh tokens deliberately don't carry roles.
    * New `AuthorizationContext` ThreadLocal in `platform-security.authz`
      carrying an `AuthorizedPrincipal(id, username, roles)`. Separate
      from `PrincipalContext` (which lives in platform-persistence and
      carries only the principal id, for the audit listener). The two
      contexts coexist because the audit listener has no business
      knowing what roles a user has.
    * `PrincipalContextFilter` now populates BOTH contexts on every
      authenticated request, reading the JWT's `username` and `roles`
      claims via `Jwt.getClaimAsStringList("roles")`. The filter is the
      one and only place that knows about Spring Security types AND
      about both vibe_erp contexts; everything downstream uses just the
      Spring-free abstractions.
    * `PermissionEvaluator` Spring bean: takes a role set + permission
      key, returns boolean. Resolution chain:
      1. The literal `admin` role short-circuits to `true` for every
         key (the wildcard exists so the bootstrap admin can do
         everything from the very first boot without seeding a complete
         role-permission mapping).
      2. Otherwise consults an in-memory `Map<role, Set<permission>>`
         loaded from `metadata__role_permission` rows. The cache is
         rebuilt by `refresh()`, called from `VibeErpPluginManager`
         after the initial core load AND after every plug-in load.
      3. Empty role set is always denied. No implicit grants.
    * `@RequirePermission("...")` annotation in `platform-security.authz`.
      `RequirePermissionAspect` is a Spring AOP @Aspect with @Around
      advice that intercepts every annotated method, reads the current
      request's `AuthorizationContext`, calls
      `PermissionEvaluator.has(...)`, and either proceeds or throws
      `PermissionDeniedException`.
    * New `PermissionDeniedException` carrying the offending key.
      `GlobalExceptionHandler` maps it to HTTP 403 Forbidden with
      `"permission denied: 'partners.partner.deactivate'"` as the
      detail. The key IS surfaced to the caller (unlike the 401's
      generic "invalid credentials") because the SPA needs it to
      render a useful "your role doesn't include X" message and
      callers are already authenticated, so it's not an enumeration
      vector.
    * `BootstrapAdminInitializer` now creates the wildcard `admin`
      role on first boot and grants it to the bootstrap admin user.
    * `@RequirePermission` applied to four sensitive endpoints as the
      demo: `PartnerController.deactivate`,
      `StockBalanceController.adjust`, `SalesOrderController.confirm`,
      `SalesOrderController.cancel`. More endpoints will gain
      annotations as additional roles are introduced; v1 keeps the
      blast radius narrow.
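
    The resolution chain above is small enough to sketch in full. This is
    a minimal, Spring-free illustration of the three rules — the real bean
    loads its map from `metadata__role_permission` rows and exposes
    `refresh()`; the map contents in the usage are hypothetical:

```java
import java.util.Map;
import java.util.Set;

// Sketch of the three-step resolution chain. The real bean is a Spring
// component whose map is rebuilt from metadata__role_permission rows.
class PermissionEvaluator {
    private final Map<String, Set<String>> rolePermissions;

    PermissionEvaluator(Map<String, Set<String>> rolePermissions) {
        this.rolePermissions = rolePermissions;
    }

    boolean has(Set<String> roles, String permissionKey) {
        if (roles.isEmpty()) return false;          // rule 3: empty set is always denied
        if (roles.contains("admin")) return true;   // rule 1: admin wildcard short-circuits
        return roles.stream()                       // rule 2: union of the roles' grants
                .map(r -> rolePermissions.getOrDefault(r, Set.of()))
                .anyMatch(grants -> grants.contains(permissionKey));
    }
}
```

    With a hypothetical `clerk → {orders.sales.cancel}` mapping, an admin
    passes every key, a clerk passes only its explicit grant, and an empty
    role set is denied regardless of the key.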
    
    End-to-end smoke test
    ---------------------
    Reset Postgres, booted the app, verified:
    * Admin login → JWT length 265 (was 241), decoded claims include
      `"roles":["admin"]`
    * Admin POST /sales-orders/{id}/confirm → 200, status DRAFT → CONFIRMED
      (admin wildcard short-circuits the permission check)
    * Inserted a 'powerless' user via raw SQL with no role assignments
      but copied the admin's password hash so login works
    * Powerless login → JWT length 247, decoded claims have NO roles
      field at all
    * Powerless POST /sales-orders/{id}/cancel → **403 Forbidden** with
      `"permission denied: 'orders.sales.cancel'"` in the body
    * Powerless DELETE /partners/{id} → **403 Forbidden** with
      `"permission denied: 'partners.partner.deactivate'"`
    * Powerless GET /sales-orders, /partners, /catalog/items → all 200
      (read endpoints have no @RequirePermission)
    * Admin regression: catalog uoms, identity users, inventory
      locations, printing-shop plates with i18n, metadata custom-fields
      endpoint — all still HTTP 2xx
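
    For the record, "decoded claims" above requires no crypto: a JWT's
    payload is plain base64url-encoded JSON between the first and second
    dot. A minimal peek, signature verification deliberately out of scope
    (the token in the test is fabricated, not a real VibeERP JWT):

```java
import java.util.Base64;

// Decode the middle (payload) segment of a JWT to inspect its claims,
// e.g. to check whether a "roles" field is present at all.
class JwtPeek {
    static String payloadJson(String jwt) {
        String payload = jwt.split("\\.")[1];   // header.PAYLOAD.signature
        return new String(Base64.getUrlDecoder().decode(payload));
    }
}
```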
    
    Build
    -----
    * `./gradlew build`: 15 subprojects, 163 unit tests (was 153),
      all green. The 10 new tests cover:
      - PermissionEvaluator: empty roles deny, admin wildcard, explicit
        role-permission grant, multi-role union, unknown role denial,
        malformed payload tolerance, currentHas with no AuthorizationContext,
        currentHas with bound context (8 tests).
      - JwtRoundTrip: roles claim round-trips through the access token,
        refresh token never carries roles even when asked (2 tests).
    
    What was deferred
    -----------------
    * **OIDC integration (P4.2)**. Built-in JWT only. The Keycloak-
      compatible OIDC client will reuse the same authorization layer
      unchanged — the roles will come from OIDC ID tokens instead of
      the local user store.
    * **Permission key validation at boot.** The framework does NOT
      yet check that every `@RequirePermission` value matches a
      declared metadata permission key. The plug-in linter is the
      natural place for that check to land later.
    * **Role hierarchy**. Roles are flat in v1; a role with permission
      X cannot inherit from another role. Adding a `parent_role` field
      on the role row is a non-breaking change later.
    * **Resource-aware permissions** ("the user owns THIS partner").
      v1 only checks the operation, not the operand. Resource-aware
      checks are post-v1.
    * **Composite (AND/OR) permission requirements**. A single key
      per call site keeps the contract simple. Composite requirements
      live in service code that calls `PermissionEvaluator.currentHas`
      directly.
    * **Role management UI / REST**. The framework can EVALUATE
      permissions but has no first-class endpoints for "create a
      role", "grant a permission to a role", "assign a role to a
      user". v1 expects these to be done via direct DB writes or via
      the future SPA's role editor (P3.x); the wiring above is
      intentionally policy-only, not management.
  • The fifth real PBC and the first business workflow PBC. pbc-inventory
    proved a PBC could consume ONE cross-PBC facade (CatalogApi).
    pbc-orders-sales consumes TWO simultaneously (PartnersApi for the
    customer, CatalogApi for every line's item) in a single transaction —
    the most rigorous test of the modular monolith story so far. Neither
    source PBC is on the compile classpath; the Gradle build refuses any
    direct dependency. Spring DI wires the api.v1 interfaces to their
    concrete adapters at runtime.
    
    What landed
    -----------
    * New Gradle subproject `pbc/pbc-orders-sales` (15 modules total).
    * Two JPA entities, both extending `AuditedJpaEntity`:
      - `SalesOrder` (header) — code, partner_code (varchar, NOT a UUID
        FK to partners), status enum DRAFT/CONFIRMED/CANCELLED, order_date,
        currency_code (varchar(3)), total_amount numeric(18,4),
        ext jsonb. Eager-loaded `lines` collection because every read of
        the header is followed by a read of the lines in practice.
      - `SalesOrderLine` — sales_order_id FK, line_no, item_code (varchar,
        NOT a UUID FK to catalog), quantity, unit_price, currency_code.
        Per-line currency in the schema even though v1 enforces all-lines-
        match-header (so multi-currency relaxation is later schema-free).
        No `ext` jsonb on lines: lines are facts, not master records;
        custom fields belong on the header.
    * `SalesOrderService.create` performs **three independent
      cross-PBC validations** in one transaction:
      1. PartnersApi.findPartnerByCode → reject if null (covers unknown
         AND inactive partners; the facade hides them).
      2. PartnersApi result.type must be CUSTOMER or BOTH (a SUPPLIER-only
         partner cannot be the customer of a sales order).
      3. CatalogApi.findItemByCode for EVERY line → reject if null.
      Then it ALSO validates: at least one line, no duplicate line numbers,
      positive quantity, non-negative price, currency matches header.
      The header total is RECOMPUTED from the lines — the caller's value
      is intentionally ignored. Never trust a financial aggregate sent
      over the wire.
    * State machine enforced by `confirm()` and `cancel()`:
      - DRAFT → CONFIRMED   (confirm)
      - DRAFT → CANCELLED   (cancel from draft)
      - CONFIRMED → CANCELLED (cancel a confirmed order)
      Anything else throws with a descriptive message. CONFIRMED orders
      are immutable except for cancellation — the `update` method refuses
      to mutate a non-DRAFT order.
    * `update` with line items REPLACES the existing lines wholesale
      (PUT semantics for lines, PATCH for header columns). Partial line
      edits are not modelled because the typical "edit one line" UI
      gesture resolves to a full re-send anyway.
    * REST: `/api/v1/orders/sales-orders` (CRUD + `/confirm` + `/cancel`).
      State transitions live on dedicated POST endpoints rather than
      PATCH-based status writes — they have side effects (lines become
      immutable, downstream PBCs will receive events in future versions),
      and sentinel-status writes hide that.
    * New api.v1 facade `org.vibeerp.api.v1.ext.orders.SalesOrdersApi`
      with `findByCode`, `findById`, `SalesOrderRef`, `SalesOrderLineRef`.
      Fifth ext.* package after identity, catalog, partners, inventory.
      Sets up the next consumers: pbc-production for work orders, pbc-finance
      for invoicing, the printing-shop reference plug-in for the
      quote-to-job-card workflow.
    * `SalesOrdersApiAdapter` runtime implementation. Cancelled orders ARE
      returned by the facade (unlike inactive items / partners which are
      hidden) because downstream consumers may legitimately need to react
      to a cancellation — release a production slot, void an invoice, etc.
    * `orders-sales.yml` metadata declaring 2 entities, 5 permission keys,
      1 menu entry.
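
    The confirm/cancel state machine is small enough to sketch whole. A
    hedged, entity-free illustration of the transition table above — the
    real guards live on the service methods and throw with descriptive
    messages:

```java
// DRAFT → CONFIRMED (confirm), DRAFT → CANCELLED, CONFIRMED → CANCELLED;
// every other transition throws. Status names match the schema enum.
enum Status { DRAFT, CONFIRMED, CANCELLED }

class SalesOrderStateMachine {
    static Status confirm(Status current) {
        if (current != Status.DRAFT)
            throw new IllegalStateException("only DRAFT can be confirmed, was " + current);
        return Status.CONFIRMED;
    }

    static Status cancel(Status current) {
        if (current == Status.CANCELLED)
            throw new IllegalStateException("order is already CANCELLED");
        return Status.CANCELLED;  // legal from both DRAFT and CONFIRMED
    }
}
```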
    
    Build enforcement (still load-bearing)
    --------------------------------------
    The root `build.gradle.kts` STILL refuses any direct dependency from
    `pbc-orders-sales` to either `pbc-partners` or `pbc-catalog`. Try
    adding either as `implementation(project(...))` and the build fails
    at configuration time with the architectural violation. The
    cross-PBC interfaces live in api-v1; the concrete adapters live in
    their owning PBCs; Spring DI assembles them at runtime via the
    bootstrap @ComponentScan. pbc-orders-sales sees only the api.v1
    interfaces.
    
    End-to-end smoke test
    ---------------------
    Reset Postgres, booted the app, hit:
    * POST /api/v1/catalog/items × 2  → PAPER-A4, INK-CYAN
    * POST /api/v1/partners/partners → CUST-ACME (CUSTOMER), SUP-ONLY (SUPPLIER)
    * POST /api/v1/orders/sales-orders → 201, two lines, total 386.50
      (5000 × 0.05 + 3 × 45.50 = 250.00 + 136.50, correctly recomputed)
    * POST .../sales-orders with FAKE-PARTNER → 400 with the meaningful
      message "partner code 'FAKE-PARTNER' is not in the partners
      directory (or is inactive)"
    * POST .../sales-orders with SUP-ONLY → 400 "partner 'SUP-ONLY' is
      type SUPPLIER and cannot be the customer of a sales order"
    * POST .../sales-orders with FAKE-ITEM line → 400 "line 1: item code
      'FAKE-ITEM' is not in the catalog (or is inactive)"
    * POST /{id}/confirm → status DRAFT → CONFIRMED
    * PATCH the CONFIRMED order → 400 "only DRAFT orders are mutable"
    * Re-confirm a CONFIRMED order → 400 "only DRAFT can be confirmed"
    * POST /{id}/cancel a CONFIRMED order → status CANCELLED (allowed)
    * SELECT * FROM orders_sales__sales_order — single row, total
      386.5000, status CANCELLED
    * SELECT * FROM orders_sales__sales_order_line — two rows in line_no
      order with the right items and quantities
    * GET /api/v1/_meta/metadata/entities → 13 entities now (was 11)
    * Regression: catalog uoms, identity users, partners, inventory
      locations, printing-shop plates with i18n (Accept-Language: zh-CN)
      all still HTTP 2xx.
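
    The 386.5000 figure is worth one worked line: the recomputed header
    total is the sum of quantity × unit price per line, carried at the
    numeric(18,4) scale. A minimal BigDecimal sketch — the `Line` record
    is illustrative, not the entity:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.List;

// Recompute the header total from the lines, ignoring any caller-sent
// value, and pin the result to scale 4 to match numeric(18,4).
record Line(BigDecimal quantity, BigDecimal unitPrice) {}

class TotalRecompute {
    static BigDecimal total(List<Line> lines) {
        return lines.stream()
                .map(l -> l.quantity().multiply(l.unitPrice()))
                .reduce(BigDecimal.ZERO, BigDecimal::add)
                .setScale(4, RoundingMode.HALF_UP);
    }
}
```

    The smoke-test order — 5000 × 0.05 plus 3 × 45.50 — comes out as
    386.5000 exactly, which is the value the SELECT showed.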
    
    Build
    -----
    * `./gradlew build`: 15 subprojects, 153 unit tests (was 139),
      all green. The 14 new tests cover: unknown/SUPPLIER-only/BOTH-type
      partner paths, unknown item path, empty/duplicate-lineno line
      arrays, negative-quantity early reject (verifies CatalogApi NOT
      consulted), currency mismatch reject, total recomputation, all
      three state-machine transitions and the rejected ones.
    
    What was deferred
    -----------------
    * **Sales-order shipping**. Confirmed orders cannot yet ship, because
      shipping requires atomically debiting inventory — which needs the
      movement ledger that was deferred from P5.3. The pair of chunks
      (movement ledger + sales-order shipping flow) is the natural next
      combination.
    * **Multi-currency lines**. The schema column is per-line but the
      service enforces all-lines-match-header in v1. Relaxing this is a
      service-only change.
    * **Quotes** (DRAFT-but-customer-visible) and **deliveries** (the
      thing that triggers shipping). v1 only models the order itself.
    * **Pricing engine / discounts**. v1 takes the unit price the caller
      sends. A real ERP has a price book lookup, customer-specific
      pricing, volume discounts, promotional pricing — all of which slot
      in BEFORE the line price is set, leaving the schema unchanged.
    * **Tax**. v1 totals are pre-tax. Tax calculation is its own PBC
      (and a regulatory minefield) that lands later.
  • The fourth real PBC, and the first one that CONSUMES another PBC's
    api.v1.ext facade. Until now every PBC was a *provider* of an
    ext.<pbc> interface (identity, catalog, partners). pbc-inventory is
    the first *consumer*: it injects org.vibeerp.api.v1.ext.catalog.CatalogApi
    to validate item codes before adjusting stock. This proves the
    cross-PBC contract works in both directions, exactly as guardrail #9
    requires.
    
    What landed
    -----------
    * New Gradle subproject `pbc/pbc-inventory` (14 modules total now).
    * Two JPA entities, both extending `AuditedJpaEntity`:
      - `Location` — code, name, type (WAREHOUSE/BIN/VIRTUAL), active,
        ext jsonb. Single table for all location levels with a type
        discriminator (no recursive self-reference in v1; YAGNI for the
        "one warehouse, handful of bins" shape every printing shop has).
      - `StockBalance` — item_code (varchar, NOT a UUID FK), location_id
        FK, quantity numeric(18,4). The item_code is deliberately a
        string FK that references nothing because pbc-inventory has no
        compile-time link to pbc-catalog — the cross-PBC link goes
        through CatalogApi at runtime. UNIQUE INDEX on
        (item_code, location_id) is the primary integrity guarantee;
        UUID id is the addressable PK. CHECK (quantity >= 0).
    * `LocationService` and `StockBalanceService` with full CRUD +
      adjust semantics. ext jsonb on Location goes through ExtJsonValidator
      (P3.4 — Tier 1 customisation).
    * `StockBalanceService.adjust(itemCode, locationId, quantity)`:
      1. Reject negative quantity.
      2. **Inject CatalogApi**, call `findItemByCode(itemCode)`, reject
         if null with a meaningful 400. THIS is the cross-PBC seam test.
      3. Verify the location exists.
      4. SELECT-then-save upsert on (item_code, location_id) — single
         row per cell, mutated in place when the row exists, created
         when it doesn't. Single-instance deployment makes the
         read-modify-write race window academic.
    * REST: `/api/v1/inventory/locations` (CRUD), `/api/v1/inventory/balances`
      (GET with itemCode or locationId filters, POST /adjust).
    * New api.v1 facade `org.vibeerp.api.v1.ext.inventory` with
      `InventoryApi.findStockBalance(itemCode, locationCode)` +
      `totalOnHand(itemCode)` + `StockBalanceRef`. Fourth ext.* package
      after identity, catalog, partners. Sets up the next consumers
      (sales orders, purchase orders, the printing-shop plug-in's
      "do we have enough paper for this job?").
    * `InventoryApiAdapter` runtime implementation in pbc-inventory.
    * `inventory.yml` metadata declaring 2 entities, 6 permission keys,
      2 menu entries.
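
    The adjust ordering above — negative check first, then the cross-PBC
    lookup, then the keyed upsert — can be sketched without Spring or JPA.
    An in-memory map stands in for the repository, the one-method
    `CatalogApi` is a simplification of the real facade, and the location
    check is omitted for brevity:

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for org.vibeerp.api.v1.ext.catalog.CatalogApi.
interface CatalogApi { Object findItemByCode(String itemCode); }

class StockBalanceAdjustSketch {
    private final CatalogApi catalog;
    // Stand-in for the UNIQUE (item_code, location_id) balance rows.
    final Map<String, BigDecimal> balances = new HashMap<>();

    StockBalanceAdjustSketch(CatalogApi catalog) { this.catalog = catalog; }

    void adjust(String itemCode, String locationId, BigDecimal quantity) {
        if (quantity.signum() < 0)                      // 1. reject before any lookup
            throw new IllegalArgumentException("stock quantity must be non-negative");
        if (catalog.findItemByCode(itemCode) == null)   // 2. the cross-PBC seam
            throw new IllegalArgumentException(
                "item code '" + itemCode + "' is not in the catalog (or is inactive)");
        // 3. (location existence check elided)
        // 4. upsert: one cell per (item, location), mutated in place
        balances.put(itemCode + "@" + locationId, quantity);
    }
}
```

    Re-adjusting the same (item, location) cell overwrites the single
    entry rather than adding a second one, mirroring the SELECT-then-save
    upsert's one-row-per-cell guarantee.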
    
    Build enforcement (the load-bearing bit)
    ----------------------------------------
    The root build.gradle.kts STILL refuses any direct dependency from
    pbc-inventory to pbc-catalog. Try adding `implementation(project(
    ":pbc:pbc-catalog"))` to pbc-inventory's build.gradle.kts and the
    build fails at configuration time with "Architectural violation in
    :pbc:pbc-inventory: depends on :pbc:pbc-catalog". The CatalogApi
    interface is in api-v1; the CatalogApiAdapter implementation is in
    pbc-catalog; Spring DI wires them at runtime via the bootstrap
    @ComponentScan. pbc-inventory only ever sees the interface.
    
    End-to-end smoke test
    ---------------------
    Reset Postgres, booted the app, hit:
    * POST /api/v1/inventory/locations → 201, "WH-MAIN" warehouse
    * POST /api/v1/catalog/items → 201, "PAPER-A4" sheet item
    * POST /api/v1/inventory/balances/adjust with itemCode=PAPER-A4 → 200,
      the cross-PBC catalog lookup succeeded
    * POST .../adjust with itemCode=FAKE-ITEM → 400 with the meaningful
      message "item code 'FAKE-ITEM' is not in the catalog (or is inactive)"
      — the cross-PBC seam REJECTS unknown items as designed
    * POST .../adjust with quantity=-5 → 400 "stock quantity must be
      non-negative", caught BEFORE the CatalogApi lookup is even attempted
    * POST .../adjust again with quantity=7500 → 200; SELECT shows ONE
      row with id unchanged and quantity = 7500 (upsert mutates, not
      duplicates)
    * GET /api/v1/inventory/balances?itemCode=PAPER-A4 → the row, with
      scale-4 numeric serialised verbatim
    * GET /api/v1/_meta/metadata/entities → 11 entities now (was 9 before
      Location + StockBalance landed)
    * Regression: catalog uoms, identity users, partners, printing-shop
      plates with i18n (Accept-Language: zh-CN), Location custom-fields
      endpoint all still HTTP 2xx.
    
    Build
    -----
    * `./gradlew build`: 14 subprojects, 139 unit tests (was 129),
      all green. The 10 new tests cover Location CRUD + the StockBalance
      adjust path with mocked CatalogApi: unknown item rejection, unknown
      location rejection, negative-quantity early reject (verifies
      CatalogApi is NOT consulted), happy-path create, and upsert
      (existing row mutated, save() not called because @Transactional
      flushes the JPA-managed entity on commit).
    
    What was deferred
    -----------------
    * `inventory__stock_movement` append-only ledger. The current operation
      is "set the quantity"; receipts/issues/transfers as discrete events
      with audit trail land in a focused follow-up. The balance row will
      then be regenerated from the ledger via a Liquibase backfill.
    * Negative-balance / over-issue prevention. The CHECK constraint
      blocks SET to a negative value, but there's no concept of "you
      cannot ISSUE more than is on hand" yet because there is no
      separate ISSUE operation — only absolute SET.
    * Lots, batches, serial numbers, expiry dates. Plenty of printing
      shops need none of these; the ones that do can either wait for
      the lot/serial chunk later or add the columns via Tier 1 custom
      fields on Location for now.
    * Cross-warehouse transfer atomicity (debit one, credit another in
      one transaction). Same — needs the ledger.