-
Pins today's 5 feature commits:

- 25353240 SPA CRUD forms (item, partner, PO, WO)
- 82c5267d R2 identity screens (users, roles, assignment)
- c2fab13b S3 file backend (P1.9 complete)
- 6ad72c7c OIDC federation (P4.2 complete)
- 17771894 SPA fill (create location, adjust stock)

P1.9 promoted from Partial to DONE. P4.2 promoted from Pending to DONE. R2 promoted from Pending to DONE. R4 updated to reflect create forms for every manageable entity. Version bumped 0.29.0 -> 0.30.0-SNAPSHOT.
-
Updates the "at a glance" row to v0.29.0-SNAPSHOT + fc62d6d7, bumps the Phase 6 R1 row to DONE with the commit ref and an overview of what landed (Gradle wrapper, SpaController, security reordering, bundled fat-jar, 16 pages), and rewrites the "How to run" section to walk the click-through demo instead of just curl. README's status table updated to reflect 10/10 PBCs + 356 tests + SPA status; building section now mentions that `./gradlew build` compiles the SPA too. No code changes.
-
New platform subproject `platform/platform-workflow` that makes `org.vibeerp.api.v1.workflow.TaskHandler` a live extension point. This is the framework's first chunk of Phase 2 (embedded workflow engine) and the dependency other work has been waiting on — pbc-production routings/operations, the full buy-make-sell BPMN scenario in the reference plug-in, and ultimately the BPMN designer web UI all hang off this seam.

## The shape

- `flowable-spring-boot-starter-process:7.0.1` pulled in behind a single new module. Every other module in the framework still sees only the api.v1 TaskHandler + WorkflowTask + TaskContext surface — guardrail #10 stays honest, no Flowable type leaks to plug-ins or PBCs.
- `TaskHandlerRegistry` is the host-side index of every registered handler, keyed by `TaskHandler.key()`. Auto-populated from every Spring bean implementing TaskHandler via constructor injection of `List<TaskHandler>`; duplicate keys fail fast at registration time. `register` / `unregister` exposed for a future plug-in lifecycle integration.
- `DispatchingJavaDelegate` is a single Spring-managed JavaDelegate named `taskDispatcher`. Every BPMN service task in the framework references it via `flowable:delegateExpression="${taskDispatcher}"`. The dispatcher reads `execution.currentActivityId` as the task key (BPMN `id` attribute = TaskHandler key — no extension elements, no field injection, no second source of truth) and routes to the matching registered handler. A defensive copy of the execution variables is passed to the handler so it cannot mutate Flowable's internal map.
- `DelegateTaskContext` adapts Flowable's `DelegateExecution` to the api.v1 `TaskContext` — the variable `set(name, value)` call forwards through Flowable's variable scope (persisted in the same transaction as the surrounding service task execution) and null values remove the variable.
  Principal + locale are documented placeholders for now (a workflow-engine `Principal.System`), waiting on the propagation chunk that plumbs the initiating user through `runtimeService.startProcessInstanceByKey(...)`.
- `WorkflowService` is a thin facade over Flowable's `RuntimeService` + `RepositoryService` exposing exactly the four operations the controller needs: start, list active, inspect variables, list definitions. Everything richer (signals, timers, sub-processes, user-task completion, history queries) lands on this seam in later chunks.
- `WorkflowController` at `/api/v1/workflow/**`:
  * `POST /process-instances` (permission `workflow.process.start`)
  * `GET /process-instances` (`workflow.process.read`)
  * `GET /process-instances/{id}/variables` (`workflow.process.read`)
  * `GET /definitions` (`workflow.definition.read`)
  * `GET /handlers` (`workflow.definition.read`)

  Exception handlers map `NoSuchElementException` + `FlowableObjectNotFoundException` → 404, `IllegalArgumentException` → 400, and any other `FlowableException` → 400. Permissions are declared in a new `META-INF/vibe-erp/metadata/workflow.yml` loaded by the core MetadataLoader so they show up under `GET /api/v1/_meta/metadata` alongside every other permission.

## The executable self-test

- `vibeerp-ping.bpmn20.xml` ships in `processes/` on the module classpath and Flowable's starter auto-deploys it at boot. Structure: `start` → serviceTask id=`vibeerp.workflow.ping` (delegateExpression=`${taskDispatcher}`) → `end`. Process definitionKey is `vibeerp-workflow-ping` (distinct from the serviceTask id because BPMN 2.0 ids must be unique per document).
- `PingTaskHandler` is a real shipped bean, not test code: its `execute` writes `pong=true`, `pongAt=<Instant.now()>`, and `correlationId=<ctx.correlationId()>` to the process variables. Operators and AI agents get a trivial "is the workflow engine alive?" probe out of the box.
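The handler seam can be sketched in miniature — a minimal sketch, assuming hypothetical shapes for the api.v1 `TaskHandler` / `TaskContext` interfaces (the shipped signatures may differ):

```kotlin
import java.time.Instant

// Hypothetical mirrors of the api.v1 seam — assumptions for illustration,
// not the shipped interfaces.
interface TaskContext {
    fun set(name: String, value: Any?)
    fun correlationId(): String
}

interface TaskHandler {
    fun key(): String            // must equal the BPMN serviceTask id
    fun execute(ctx: TaskContext)
}

// A PingTaskHandler-style probe: writes its liveness variables through the
// context, exactly as the shipped bean is described to do.
class PingTaskHandler : TaskHandler {
    override fun key() = "vibeerp.workflow.ping"
    override fun execute(ctx: TaskContext) {
        ctx.set("pong", true)
        ctx.set("pongAt", Instant.now())
        ctx.set("correlationId", ctx.correlationId())
    }
}

fun main() {
    // In production the context adapts Flowable's DelegateExecution; here a
    // map-backed stand-in shows the set/remove semantics.
    val vars = mutableMapOf<String, Any?>()
    val ctx = object : TaskContext {
        override fun set(name: String, value: Any?) {
            if (value == null) vars.remove(name) else vars[name] = value
        }
        override fun correlationId() = "demo-correlation"
    }
    PingTaskHandler().execute(ctx)
    println(vars.keys.sorted()) // [correlationId, pong, pongAt]
}
```

The key point the sketch makes: the handler only ever sees the `TaskContext` surface, so no Flowable type crosses the api.v1 boundary.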
Why the demo lives in src/main, not src/test: Flowable's auto-deployer reads from the host classpath at boot, so if either half lived under src/test the smoke test wouldn't be reproducible from the shipped image — exactly what CLAUDE.md's "reference plug-in is the executable acceptance test" discipline is trying to prevent.

## The Flowable + Liquibase trap

**Learned the hard way during the smoke test.** Adding `flowable-spring-boot-starter-process` immediately broke boot with `Schema-validation: missing table [catalog__item]`. Liquibase was silently not running. Root cause: Flowable 7.x registers a Spring Boot `EnvironmentPostProcessor` called `FlowableLiquibaseEnvironmentPostProcessor` that, unless the user has already set an explicit value, forces `spring.liquibase.enabled=false` with a WARN log line that reads "Flowable pulls in Liquibase but does not use the Spring Boot configuration for it". Our master.xml then never executes and JPA validation fails against the empty schema. The fix is a single line in `distribution/src/main/resources/application.yaml` — `spring.liquibase.enabled: true` — with a comment explaining why it must stay there for anyone who touches config next.

Flowable's own ACT_* tables and vibe_erp's `catalog__*`, `pbc.*__*`, etc. tables coexist happily in the same public schema — 39 ACT_* tables alongside 45 vibe_erp tables on the smoke-tested DB. Flowable manages its own schema via its internal MyBatis DDL, Liquibase manages ours; they don't touch each other.

## Smoke-test transcript (fresh DB, dev profile)

```
docker compose down -v && docker compose up -d db
./gradlew :distribution:bootRun &
# ... Flowable creates ACT_* tables, Liquibase creates vibe_erp tables,
# MetadataLoader loads workflow.yml, TaskHandlerRegistry boots with 1 handler,
# BPMN auto-deployed from classpath

POST /api/v1/auth/login → JWT

GET /api/v1/workflow/definitions
  → 1 definition (vibeerp-workflow-ping)

GET /api/v1/workflow/handlers
  → {"count":1,"keys":["vibeerp.workflow.ping"]}

POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"vibeerp-workflow-ping",
   "businessKey":"smoke-1",
   "variables":{"greeting":"ni hao"}}
  → 201 {"processInstanceId":"...","ended":true,
         "variables":{"pong":true,"pongAt":"2026-04-09T...",
                      "correlationId":"...","greeting":"ni hao"}}

POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"does-not-exist"}
  → 404 {"message":"No process definition found for key 'does-not-exist'"}

GET /api/v1/catalog/uoms → still returns the 15 seeded UoMs (sanity)
```

## Tests

- 15 new unit tests in `platform-workflow/src/test`:
  * `TaskHandlerRegistryTest` — init with initial handlers, duplicate key fails fast, blank key rejected, unregister removes, unregister on unknown returns false, find on missing returns null
  * `DispatchingJavaDelegateTest` — dispatches by currentActivityId, throws on missing handler, defensive-copies the variable map
  * `DelegateTaskContextTest` — set non-null forwards, set null removes, blank name rejected, principal/locale/correlationId passthrough, default correlation id is stable across calls
  * `PingTaskHandlerTest` — key matches the BPMN serviceTask id, execute writes pong + pongAt + correlationId
- Total framework unit tests: 261 (was 246), all green.
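The registry and dispatch behaviour those tests pin down can be sketched in pure logic — a simplified stand-in (handlers reduced to functions; names illustrative, not the shipped API):

```kotlin
// Fail-fast registry keyed by handler key, as described above.
class TaskHandlerRegistry {
    private val handlers = mutableMapOf<String, (MutableMap<String, Any?>) -> Unit>()

    fun register(key: String, handler: (MutableMap<String, Any?>) -> Unit) {
        require(key.isNotBlank()) { "handler key must not be blank" }
        require(key !in handlers) { "duplicate TaskHandler key '$key'" } // fail fast
        handlers[key] = handler
    }

    fun unregister(key: String): Boolean = handlers.remove(key) != null
    fun find(key: String) = handlers[key]
}

// Dispatch: BPMN activity id == handler key; variables are defensively
// copied so the handler cannot mutate the engine's own map.
fun dispatch(registry: TaskHandlerRegistry, activityId: String, engineVars: Map<String, Any?>) {
    val handler = registry.find(activityId)
        ?: throw IllegalStateException("no TaskHandler registered for '$activityId'")
    handler(engineVars.toMutableMap()) // defensive copy
}

fun main() {
    val registry = TaskHandlerRegistry()
    registry.register("vibeerp.workflow.ping") { vars -> vars["pong"] = true }
    val engineVars = mapOf<String, Any?>("greeting" to "ni hao")
    dispatch(registry, "vibeerp.workflow.ping", engineVars)
    println("pong" !in engineVars) // true — the engine's map stays untouched
}
```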
## What this unblocks

- **REF.1** — real quote→job-card workflow handler in the printing-shop plug-in
- **pbc-production routings/operations (v3)** — each operation becomes a BPMN step with duration + machine assignment
- **P2.3** — user-task form rendering (landing on top of the RuntimeService already exposed via WorkflowService)
- **P2.2** — BPMN designer web page (later, depends on R1)

## Deliberate non-goals (parking lot)

- Principal propagation from the REST caller through the process start into the handler — uses a fixed `workflow-engine` `Principal.System` for now. Follow-up chunk will plumb the authenticated user as a Flowable variable.
- Plug-in-contributed TaskHandler registration via PF4J child contexts — the registry exposes `register`/`unregister` but the plug-in loader doesn't call them yet. Follow-up chunk.
- BPMN user tasks, signals, timers, history queries — seam exists, deliberately not built out.
- Workflow deployment from `metadata__workflow` rows (the Tier 1 path). Today deployment is classpath-only via Flowable's auto-deployer.
- The Flowable async job executor is explicitly deactivated (`flowable.async-executor-activate: false`) — background-job machinery belongs to the future Quartz integration (P1.10), not Flowable.

-
Closes the P4.3 rollout — the last PBC whose controllers were still unannotated. Every endpoint in `UserController` now carries an `@RequirePermission("identity.user.*")` annotation matching the keys already declared in `identity.yml`:

    GET    /api/v1/identity/users           identity.user.read
    GET    /api/v1/identity/users/{id}      identity.user.read
    POST   /api/v1/identity/users           identity.user.create
    PATCH  /api/v1/identity/users/{id}      identity.user.update
    DELETE /api/v1/identity/users/{id}      identity.user.disable

`AuthController` (login, refresh) is deliberately NOT annotated — it is in the platform-security public allowlist because login is the token-issuing endpoint (chicken-and-egg). KDoc on the controller class updated to reflect the new auth story (removing the stale "authentication deferred to v0.2" comment from before P4.1 / P4.3 landed).

Smoke verified end-to-end against real Postgres:

- Admin (wildcard `admin` role) → GET /users returns 200, POST /users returns 201 (new user `jane` created).
- Unauthenticated GET and POST → 401 Unauthorized from the framework's JWT filter before @RequirePermission runs.

A non-admin user without explicit grants would get 403 from the AOP evaluator; tested manually with the admin and anonymous cases.

No test changes — the controller unit test is a thin DTO mapper test that doesn't exercise the Spring AOP aspect; identity-wide authz enforcement is covered by the platform-security tests plus the shipping smoke tests. 246 unit tests, all green.

P4.3 is now complete across every core PBC: pbc-catalog, pbc-partners, pbc-inventory, pbc-orders-sales, pbc-orders-purchase, pbc-finance, pbc-production, pbc-identity.

-
CI verified green for `986f02ce` on both gradle build and docker image jobs.
-
Removes the ext-handling copy/paste that had grown across four PBCs (partners, inventory, orders-sales, orders-purchase). Every service that wrote the JSONB `ext` column was manually doing the same four-step sequence: validate, null-check, serialize with a local ObjectMapper, assign to the entity. And every response mapper was doing the inverse: check-if-blank, parse, cast, swallow errors.

Net: ~15 lines saved per PBC, one place to change the ext contract later (e.g. PII redaction, audit tagging, field-level events), and a stable plug-in opt-in mechanism — any plug-in entity that implements `HasExt` automatically participates.

New api.v1 surface:

    interface HasExt {
        val extEntityName: String // key into metadata__custom_field
        var ext: String           // the serialized JSONB column
    }

Lives in `org.vibeerp.api.v1.entity` so plug-ins can opt their own entities into the same validation path. Zero Spring/Jackson dependencies — api.v1 stays clean.

Extended `ExtJsonValidator` (platform-metadata) with two helpers:

- `fun applyTo(entity: HasExt, ext: Map<String, Any?>?)` — null-safe; validates; writes canonical JSON to entity.ext. Replaces the validate + writeValueAsString + assign triplet in every service's create() and update().
- `fun parseExt(entity: HasExt): Map<String, Any?>` — returns empty map on blank/corrupt column; response mappers never 500 on bad data. Replaces the four identical parseExt local functions.

ExtJsonValidator now takes an ObjectMapper via constructor injection (Spring Boot's auto-configured bean).
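The null-safe contract of the two helpers can be sketched behaviourally — a toy stand-in where the JSON (de)serialization is a hand-rolled placeholder for the real Jackson-backed ExtJsonValidator, so only the semantics (null means "leave the column alone", corrupt means "empty map") are the point:

```kotlin
interface HasExt {
    val extEntityName: String // key into metadata__custom_field
    var ext: String           // the serialized JSONB column
}

object ExtJsonValidatorSketch {
    // null-safe: a null ext map means "leave the stored column alone"
    fun applyTo(entity: HasExt, ext: Map<String, Any?>?) {
        if (ext == null) return
        // stand-in for validate-against-declared-fields + canonical JSON
        entity.ext = ext.toSortedMap().entries
            .joinToString(",", "{", "}") { "\"${it.key}\":\"${it.value}\"" }
    }

    // never throws: blank or corrupt columns come back as an empty map
    fun parseExt(entity: HasExt): Map<String, Any?> {
        if (entity.ext.isBlank()) return emptyMap()
        return runCatching { parse(entity.ext) }.getOrDefault(emptyMap())
    }

    // naive parser, placeholder for ObjectMapper.readValue
    private fun parse(json: String): Map<String, Any?> =
        json.removeSurrounding("{", "}")
            .split(",").filter { it.isNotBlank() }
            .associate {
                val (k, v) = it.split(":", limit = 2)
                k.trim('"') to v.trim('"')
            }
}

// Opt-in is just implementing the interface, as Partner does in this chunk.
data class Partner(
    override val extEntityName: String = "Partner",
    override var ext: String = "",
) : HasExt

fun main() {
    val p = Partner()
    ExtJsonValidatorSketch.applyTo(p, mapOf("industry" to "printing"))
    println(p.ext)                              // {"industry":"printing"}
    ExtJsonValidatorSketch.applyTo(p, null)     // null-safe: prior ext preserved
    println(ExtJsonValidatorSketch.parseExt(p)) // {industry=printing}
}
```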
Entities that now implement HasExt (override val extEntityName; override var ext; companion object const val ENTITY_NAME):

- Partner (`partners.Partner` → "Partner")
- Location (`inventory.Location` → "Location")
- SalesOrder (`orders_sales.SalesOrder` → "SalesOrder")
- PurchaseOrder (`orders_purchase.PurchaseOrder` → "PurchaseOrder")

Deliberately NOT converted this chunk:

- WorkOrder (pbc-production) — its ext column has no declared fields yet; a follow-up that adds declarations AND the HasExt implementation is cleaner than splitting the two.
- JournalEntry (pbc-finance) — derived state, no ext column.

Services lose:

- The `jsonMapper: ObjectMapper = ObjectMapper().registerKotlinModule()` field (four copies eliminated)
- The `parseExt(entity): Map` helper function (four copies)
- The `companion object { const val ENTITY_NAME = ... }` constant (moved onto the entity where it belongs)
- The `val canonicalExt = extValidator.validate(...)` + `.also { it.ext = jsonMapper.writeValueAsString(canonicalExt) }` create pattern (replaced with one applyTo call)
- The `if (command.ext != null) { ... }` update pattern (applyTo is null-safe)

Unit tests: 6 new cases on ExtJsonValidatorTest cover applyTo and parseExt (null-safe path, happy path, failure path, blank column, round-trip, malformed JSON). Existing service tests just swap the mock setup from stubbing `validate` to stubbing `applyTo` and `parseExt` with no-ops.

Smoke verified end-to-end against real Postgres:

- POST /partners with valid ext (partners_credit_limit, partners_industry) → 201, canonical form persisted.
- GET /partners/by-code/X → 200, ext round-trips.
- POST with invalid enum value → 400 "value 'x' is not in allowed set [printing, publishing, packaging, other]".
- POST with undeclared key → 400 "ext contains undeclared key(s) for 'Partner': [rogue_field]".
- PATCH with new ext → 200, ext updated.
- PATCH WITHOUT ext field → 200, prior ext preserved (null-safe applyTo).
- POST /orders/sales-orders with no ext → 201, the create path via the shared helper still works.

246 unit tests (+6 over 240), 18 Gradle subprojects.

-
CI verified green for `75a75baa` on both the gradle build and docker image jobs.
-
Grows pbc-production from the minimal v1 (DRAFT → COMPLETED in one step, single output, no BOM) into a real v2 production PBC:

1. IN_PROGRESS state between DRAFT and COMPLETED so "started but not finished" work orders are observable on a dashboard. WorkOrderService.start(id) performs the transition and publishes a new WorkOrderStartedEvent. cancel() now accepts DRAFT OR IN_PROGRESS (v2 writes nothing to the ledger at start() so there is nothing to undo on cancel).
2. Bill of materials via a new WorkOrderInput child entity — @OneToMany with cascade + orphanRemoval, same shape as SalesOrderLine. Each line carries (lineNo, itemCode, quantityPerUnit, sourceLocationCode). complete() now iterates the inputs in lineNo order and writes one MATERIAL_ISSUE ledger row per line (delta = -(quantityPerUnit × outputQuantity)) BEFORE writing the PRODUCTION_RECEIPT for the output. All in one transaction — a failure anywhere rolls back every prior ledger row AND the status flip. Empty inputs list is legal (the v1 auto-spawn-from-SO path still works unchanged, writing only the PRODUCTION_RECEIPT).
3. Scrap flow for COMPLETED work orders via a new scrap(id, scrapLocationCode, quantity, note) service method. Writes a negative ADJUSTMENT ledger row tagged WO:<code>:SCRAP and publishes a new WorkOrderScrappedEvent. Chose ADJUSTMENT over adding a new SCRAP movement reason to keep the enum stable — the reference-string suffix is the disambiguator. The work order itself STAYS COMPLETED; scrap is a correction on top of a terminal state, not a state change.

complete() now requires IN_PROGRESS (not DRAFT); existing callers must start() first.

api.v1 grows two events (WorkOrderStartedEvent, WorkOrderScrappedEvent) alongside the three that already existed. Since this is additive within a major version, the api.v1 semver contract holds — existing subscribers continue to compile.
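The ledger math in (2) can be sketched as a pure function — illustrative types, not the shipped WorkOrderService:

```kotlin
// One MATERIAL_ISSUE row per BOM line (in lineNo order), then the
// PRODUCTION_RECEIPT for the output, exactly as complete() is described.
data class InputLine(val lineNo: Int, val itemCode: String, val quantityPerUnit: Double)
data class LedgerRow(val reason: String, val itemCode: String, val delta: Double)

fun completionRows(inputs: List<InputLine>, outputItem: String, outputQty: Double): List<LedgerRow> {
    val issues = inputs.sortedBy { it.lineNo }.map {
        LedgerRow("MATERIAL_ISSUE", it.itemCode, -(it.quantityPerUnit * outputQty))
    }
    // an empty BOM legally produces only this receipt (the v1 path)
    return issues + LedgerRow("PRODUCTION_RECEIPT", outputItem, outputQty)
}

fun main() {
    // the smoke scenario from this chunk: 2 paper + 0.5 ink per brochure, output 100
    val rows = completionRows(
        listOf(InputLine(1, "PAPER", 2.0), InputLine(2, "INK", 0.5)),
        outputItem = "FG-BROCHURE", outputQty = 100.0,
    )
    rows.forEach(::println)
    // LedgerRow(reason=MATERIAL_ISSUE, itemCode=PAPER, delta=-200.0)
    // LedgerRow(reason=MATERIAL_ISSUE, itemCode=INK, delta=-50.0)
    // LedgerRow(reason=PRODUCTION_RECEIPT, itemCode=FG-BROCHURE, delta=100.0)
}
```

In the real service all of these rows go through InventoryApi.recordMovement inside one transaction, so a failure on any line rolls back the rest.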
Liquibase: 002-production-v2.xml widens the status CHECK and creates production__work_order_input with (work_order_id FK, line_no, item_code, quantity_per_unit, source_location_code) plus a unique (work_order_id, line_no) constraint, a CHECK quantity_per_unit > 0, and the audit columns. ON DELETE CASCADE from the parent.

Unit tests: WorkOrderServiceTest grows from 8 to 18 cases — covers start happy path, start rejection, complete-on-DRAFT rejection, empty-BOM complete, BOM-with-two-lines complete (verifies both MATERIAL_ISSUE deltas AND the PRODUCTION_RECEIPT all fire with the right references), scrap happy path, scrap on non-COMPLETED rejection, scrap with non-positive quantity rejection, cancel-from-IN_PROGRESS, and BOM validation rejects (unknown item, duplicate line_no).

Smoke verified end-to-end against real Postgres:

- Created WO-SMOKE with 2-line BOM (2 paper + 0.5 ink per brochure, output 100).
- Started (DRAFT → IN_PROGRESS, no ledger rows).
- Completed: paper balance 500→300 (MATERIAL_ISSUE -200), ink 200→150 (MATERIAL_ISSUE -50), FG-BROCHURE 0→100 (PRODUCTION_RECEIPT +100). All 3 rows tagged WO:WO-SMOKE.
- Scrapped 7 units: FG-BROCHURE 100→93, ADJUSTMENT -7 tagged WO:WO-SMOKE:SCRAP, work order stayed COMPLETED.
- Auto-spawn: SO-42 confirm still creates WO-FROM-SO-42-L1 as a DRAFT with empty BOM; starting + completing it writes only the PRODUCTION_RECEIPT (zero MATERIAL_ISSUE rows), proving the empty-BOM path is backwards-compatible.
- Negative paths: complete-on-DRAFT 400s, scrap-on-DRAFT 400s, double-start 400s, cancel-from-IN_PROGRESS 200.

240 unit tests, 18 Gradle subprojects.
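The v2 state machine's guards can be sketched in isolation — a guard-only stand-in (names illustrative, ledger writes elided) that encodes the rules above: start requires DRAFT, complete requires IN_PROGRESS, cancel accepts DRAFT or IN_PROGRESS, and scrap is a correction on COMPLETED that does NOT change status:

```kotlin
enum class WoStatus { DRAFT, IN_PROGRESS, COMPLETED, CANCELLED }

class WorkOrderSketch(var status: WoStatus = WoStatus.DRAFT) {
    fun start() {
        require(status == WoStatus.DRAFT) { "start requires DRAFT (was $status)" }
        status = WoStatus.IN_PROGRESS
    }
    fun complete() {
        require(status == WoStatus.IN_PROGRESS) { "complete requires IN_PROGRESS (was $status)" }
        // real impl writes MATERIAL_ISSUE rows + the PRODUCTION_RECEIPT here
        status = WoStatus.COMPLETED
    }
    fun cancel() {
        require(status == WoStatus.DRAFT || status == WoStatus.IN_PROGRESS) {
            "cancel requires DRAFT or IN_PROGRESS (was $status)"
        }
        status = WoStatus.CANCELLED
    }
    fun scrap(quantity: Double) {
        require(status == WoStatus.COMPLETED) { "scrap requires COMPLETED (was $status)" }
        require(quantity > 0) { "scrap quantity must be positive (got $quantity)" }
        // real impl writes a negative ADJUSTMENT row tagged WO:<code>:SCRAP;
        // the order itself stays COMPLETED
    }
}

fun main() {
    val wo = WorkOrderSketch()
    wo.start(); wo.complete(); wo.scrap(7.0)
    println(wo.status) // COMPLETED — scrap is not a state change
}
```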
-
The framework's eighth PBC and the first one that's NOT order- or master-data-shaped. Work orders are about *making things*, which is the reason the printing-shop reference customer exists in the first place. With this PBC in place the framework can express the full buy-sell-make loop end-to-end.

What landed (new module pbc/pbc-production/)

- WorkOrder entity (production__work_order): code, output_item_code, output_quantity, status (DRAFT|COMPLETED|CANCELLED), due_date (display-only), source_sales_order_code (nullable — work orders can be either auto-spawned from a confirmed SO or created manually), ext.
- WorkOrderJpaRepository with existsBySourceSalesOrderCode / findBySourceSalesOrderCode for the auto-spawn dedup.
- WorkOrderService.create / complete / cancel:
  • create validates the output item via CatalogApi (same seam SalesOrderService and PurchaseOrderService use), rejects non-positive quantities, publishes WorkOrderCreatedEvent.
  • complete(outputLocationCode) credits finished goods to the named location via InventoryApi.recordMovement with reason=PRODUCTION_RECEIPT (added in commit c52d0d59) and reference="WO:<order_code>", then flips status to COMPLETED, then publishes WorkOrderCompletedEvent — all in the same @Transactional method.
  • cancel only allowed from DRAFT (no un-producing finished goods); publishes WorkOrderCancelledEvent.
- SalesOrderConfirmedSubscriber (@PostConstruct → EventBus.subscribe(SalesOrderConfirmedEvent::class.java, ...)): walks the confirmed sales order's lines via SalesOrdersApi (NOT by importing pbc-orders-sales) and calls WorkOrderService.create for each line. Coded as one bean with one subscription — matches pbc-finance's one-bean-per-subject pattern.
  • Idempotent on source sales order code — if any work order already exists for the SO, the whole spawn is a no-op.
  • Tolerant of a missing SO (defensive against a future async bus that could deliver the confirm event after the SO has vanished).
  • The WO code convention: WO-FROM-<so_code>-L<lineno>, e.g. WO-FROM-SO-2026-0001-L1.
- REST controller /api/v1/production/work-orders: list, get, by-code, create, complete, cancel — each annotated with @RequirePermission. Four permission keys declared in the production.yml metadata: read / create / complete / cancel.
- CompleteWorkOrderRequest: single-arg DTO uses the @JsonCreator(mode=PROPERTIES) + @param:JsonProperty trick that already bit ShipSalesOrderRequest and ReceivePurchaseOrderRequest; cross-referenced in the KDoc so the third instance doesn't need re-discovery.
- distribution/.../pbc-production/001-production-init.xml: CREATE TABLE with CHECK on status + CHECK on qty>0 + GIN on ext + the usual indexes. NEITHER output_item_code NOR source_sales_order_code is a foreign key (cross-PBC reference policy — guardrail #9).
- settings.gradle.kts + distribution/build.gradle.kts: registers the new module and adds it to the distribution dependency list.
- master.xml: includes the new changelog in dependency order, after pbc-finance.

New api.v1 surface: org.vibeerp.api.v1.event.production.*

- WorkOrderCreatedEvent, WorkOrderCompletedEvent, WorkOrderCancelledEvent — sealed under WorkOrderEvent, aggregateType="production.WorkOrder". Same pattern as the order events, so any future consumer (finance revenue recognition, warehouse put-away dashboard, a customer plug-in that needs to react to "work finished") subscribes through the public typed-class overload with no dependency on pbc-production.

Unit tests (13 new, 217 → 230 total)

- WorkOrderServiceTest (9 tests): create dedup, positive quantity check, catalog seam, happy-path create with event assertion, complete rejects non-DRAFT, complete happy path with InventoryApi.recordMovement assertion + event assertion, cancel from DRAFT, cancel rejects COMPLETED.
- SalesOrderConfirmedSubscriberTest (5 tests): subscription registration count, spawns N work orders for N SO lines with correct code convention, idempotent when WOs already exist, no-op on missing SO, and a listener-routing test that captures the EventListener instance and verifies it forwards to the right service method.

End-to-end smoke verified against real Postgres

- Fresh DB, fresh boot. Both OrderEventSubscribers (pbc-finance) and SalesOrderConfirmedSubscriber (pbc-production) log their subscription registration before the first HTTP call.
- Seeded two items (BROCHURE-A, BROCHURE-B), a customer, and a finished-goods location (WH-FG).
- Created a 2-line sales order (SO-WO-1), confirmed it.
  → Produced ONE orders_sales.SalesOrder outbox row.
  → Produced ONE AR POSTED finance__journal_entry for 1000 USD (500 × 1 + 250 × 2 — the pbc-finance consumer still works).
  → Produced TWO draft work orders auto-spawned from the SO lines: WO-FROM-SO-WO-1-L1 (BROCHURE-A × 500) and WO-FROM-SO-WO-1-L2 (BROCHURE-B × 250), both with source_sales_order_code=SO-WO-1.
- Completed WO1 to WH-FG:
  → Produced a PRODUCTION_RECEIPT ledger row for BROCHURE-A delta=500 reference="WO:WO-FROM-SO-WO-1-L1".
  → inventory__stock_balance now has BROCHURE-A = 500 at WH-FG.
  → Flipped status to COMPLETED.
- Cancelled WO2 → CANCELLED.
- Created a manual WO-MANUAL-1 with no source SO → succeeds; demonstrates the "operator creates a WO to build inventory ahead of demand" path.
- platform__event_outbox ends with 6 rows, all DISPATCHED:
    orders_sales.SalesOrder  SO-WO-1
    production.WorkOrder     WO-FROM-SO-WO-1-L1 (created)
    production.WorkOrder     WO-FROM-SO-WO-1-L2 (created)
    production.WorkOrder     WO-FROM-SO-WO-1-L1 (completed)
    production.WorkOrder     WO-FROM-SO-WO-1-L2 (cancelled)
    production.WorkOrder     WO-MANUAL-1 (created)

Why this chunk was the right next move

- pbc-finance was a PASSIVE consumer — it only wrote derived reporting state.
  pbc-production is the first ACTIVE consumer: it creates new aggregates with their own state machines and their own cross-PBC writes in reaction to another PBC's events. This is a meaningfully harder test of the event-driven integration story and it passes end-to-end.
- "One ledger, three callers" is now real: sales shipments, purchase receipts, AND production receipts all feed the same inventory__stock_movement ledger through the same InventoryApi.recordMovement facade. The facade has proven stable under three very different callers.
- The framework now expresses the basic ERP trinity: buy (purchase orders), sell (sales orders), make (work orders). That's the shape every real manufacturing customer needs, and it's done without any PBC importing another.

What's deliberately NOT in v1

- No bill of materials. complete() only credits finished goods; it does NOT issue raw materials. A shop floor that needs to consume 4 sheets of paper to produce 1 brochure does it manually via POST /api/v1/inventory/movements with reason=MATERIAL_ISSUE (added in commit c52d0d59). A proper BOM lands as WorkOrderInput lines in a future chunk.
- No IN_PROGRESS state. complete() goes DRAFT → COMPLETED in one step. A real shop floor needs "started but not finished" visibility; that's the next iteration.
- No routings, operations, machine assignments, or due-date enforcement. due_date is display-only.
- No "scrap defective output" flow for a COMPLETED work order. cancel refuses from COMPLETED; the fix requires a new MovementReason and a new event, not a special-case method on the service.

-
-
Extends pbc-inventory's MovementReason enum with the two reasons a production-style PBC needs to record stock movements through the existing InventoryApi.recordMovement facade. No new endpoint, no new database column — just two new enum values, two new sign-validation rules, and four new tests.

Why this lands BEFORE pbc-production

- It's the smallest self-contained change that unblocks any future production-related code (the framework's planned pbc-production, a customer plug-in's manufacturing module, or even an ad-hoc operator script). Each of those callers can now record "consume raw material" / "produce finished good" through the same primitive that already serves sales shipments and purchase receipts.
- It validates the "one ledger, many callers" property the architecture spec promised. Adding a new movement reason takes zero schema changes (the column is varchar) and zero plug-in changes (the api.v1 facade takes the reason as a string and delegates to MovementReason.valueOf inside the adapter). The enum lives entirely inside pbc-inventory.

What changed

- StockMovement.kt: enum gains MATERIAL_ISSUE (Δ ≤ 0) and PRODUCTION_RECEIPT (Δ ≥ 0), with KDoc explaining why each one was added and how they fit the "one primitive for every direction" story.
- StockMovementService.validateSign: PRODUCTION_RECEIPT joins the must-be-non-negative bucket alongside RECEIPT, PURCHASE_RECEIPT, and TRANSFER_IN; MATERIAL_ISSUE joins the must-be-non-positive bucket alongside ISSUE, SALES_SHIPMENT, and TRANSFER_OUT.
- 4 new unit tests:
  • record rejects positive delta on MATERIAL_ISSUE
  • record rejects negative delta on PRODUCTION_RECEIPT
  • record accepts a positive PRODUCTION_RECEIPT (happy path, new balance row at the receiving location)
  • record accepts a negative MATERIAL_ISSUE (decrements an existing balance from 1000 → 800)
- Total tests: 213 → 217.
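The sign-bucket rule described above can be sketched directly — names mirror the text, but this is a stand-in, not the shipped StockMovementService:

```kotlin
enum class MovementReason {
    RECEIPT, ISSUE, PURCHASE_RECEIPT, SALES_SHIPMENT,
    TRANSFER_IN, TRANSFER_OUT, ADJUSTMENT,
    MATERIAL_ISSUE, PRODUCTION_RECEIPT, // the two new values
}

private val NON_NEGATIVE = setOf(
    MovementReason.RECEIPT, MovementReason.PURCHASE_RECEIPT,
    MovementReason.TRANSFER_IN, MovementReason.PRODUCTION_RECEIPT,
)
private val NON_POSITIVE = setOf(
    MovementReason.ISSUE, MovementReason.SALES_SHIPMENT,
    MovementReason.TRANSFER_OUT, MovementReason.MATERIAL_ISSUE,
)

fun validateSign(reason: MovementReason, delta: Long) {
    require(!(reason in NON_NEGATIVE && delta < 0)) {
        "movement reason $reason requires a non-negative delta (got $delta)"
    }
    require(!(reason in NON_POSITIVE && delta > 0)) {
        "movement reason $reason requires a non-positive delta (got $delta)"
    }
    // ADJUSTMENT stays free-signed: corrections can go either way
}

fun main() {
    validateSign(MovementReason.MATERIAL_ISSUE, -200)   // ok
    validateSign(MovementReason.PRODUCTION_RECEIPT, 50) // ok
    val err = runCatching { validateSign(MovementReason.PRODUCTION_RECEIPT, -1) }
    println(err.exceptionOrNull()?.message)
    // movement reason PRODUCTION_RECEIPT requires a non-negative delta (got -1)
}
```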
Smoke test against real Postgres

- Booted on a fresh DB; no schema migration needed because the `reason` column is varchar(32), already wide enough.
- Seeded an item RAW-PAPER, an item FG-WIDGET, and a location WH-PROD via the existing endpoints.
- POST /api/v1/inventory/movements with reason=RECEIPT for 1000 raw paper → balance row at 1000.
- POST /api/v1/inventory/movements with reason=MATERIAL_ISSUE delta=-200 reference="WO:WO-EVT-1" → balance becomes 800, ledger row written.
- POST /api/v1/inventory/movements with reason=PRODUCTION_RECEIPT delta=50 reference="WO:WO-EVT-1" → balance row at 50 for FG-WIDGET, ledger row written.
- Negative test: POST PRODUCTION_RECEIPT with delta=-1 → 400 Bad Request "movement reason PRODUCTION_RECEIPT requires a non-negative delta (got -1)" — the new sign rule fires.
- Final ledger has 3 rows (RECEIPT, MATERIAL_ISSUE, PRODUCTION_RECEIPT); final balance has FG-WIDGET=50 and RAW-PAPER=800 — the math is correct.

What's deliberately NOT in this chunk

- No pbc-production yet. That's the next chunk; this is just the foundation that lets it (or any other production-ish caller) write to the ledger correctly without needing changes to api.v1 or pbc-inventory ever again.
- No new return-path reasons (RETURN_FROM_CUSTOMER, RETURN_TO_SUPPLIER) — those land when the returns flow does.
- No enforcement of the "WO:" reference convention — it's documented in the KDoc on `reference`, not enforced anywhere. The v0.16/v0.17 convention "<source>:<code>" continues unchanged.

-
The minimal pbc-finance that landed in commit bf090c2e only reacted to *ConfirmedEvent. This change wires in the rest of the order lifecycle (ship/receive → SETTLED, cancel → REVERSED) so the journal entry reflects what actually happened to the order, not just the moment it was confirmed.

JournalEntryStatus (new enum + new column)

- POSTED — created from a confirm event (existing behaviour)
- SETTLED — promoted by SalesOrderShippedEvent / PurchaseOrderReceivedEvent
- REVERSED — promoted by SalesOrderCancelledEvent / PurchaseOrderCancelledEvent
- The status field is intentionally a separate axis from JournalEntryType: type tells you "AR or AP", status tells you "where in its lifecycle".

distribution/.../pbc-finance/002-finance-status.xml

- ALTER TABLE adds `status varchar(16) NOT NULL DEFAULT 'POSTED'`, a CHECK constraint mirroring the enum values, and an index on status for the new filter endpoint. The DEFAULT 'POSTED' covers any existing rows on an upgraded environment without a backfill step.

JournalEntryService — four new methods, all idempotent

- settleFromSalesShipped(event) → POSTED → SETTLED for AR
- settleFromPurchaseReceived(event) → POSTED → SETTLED for AP
- reverseFromSalesCancelled(event) → POSTED → REVERSED for AR
- reverseFromPurchaseCancelled(event) → POSTED → REVERSED for AP

Each runs through a private settleByOrderCode/reverseByOrderCode helper that:

1. Looks up the row by order_code (new repo method findFirstByOrderCode). If absent → no-op (e.g. cancel from DRAFT means no *ConfirmedEvent was ever published, so no journal entry exists; this is the most common cancel path).
2. If the row is already in the destination status → no-op (idempotent under at-least-once delivery, e.g. outbox replay or future Kafka retry).
3. Refuses to overwrite a contradictory terminal status — a SETTLED row cannot be REVERSED, and vice versa.
   The producer's state machine forbids cancel-from-shipped/received, so reaching here implies an upstream contract violation; logged at WARN and the row is left alone.

OrderEventSubscribers — six subscriptions per @PostConstruct

- All six order events from api.v1.event.orders.* are subscribed via the typed-class EventBus.subscribe(eventType, listener) overload, the same public API a plug-in would use. Boot log line updated: "pbc-finance subscribed to 6 order events".

JournalEntryController — new ?status= filter

- GET /api/v1/finance/journal-entries?status=POSTED|SETTLED|REVERSED surfaces the partition. Existing ?orderCode= and ?type= filters unchanged. Read permission still finance.journal.read.

12 new unit tests (213 total, was 201)

- JournalEntryServiceTest: settle/reverse for AR + AP, idempotency on duplicate destination status, refusal to overwrite a contradictory terminal status, no-op on missing row, default POSTED on new entries.
- OrderEventSubscribersTest: assert all SIX subscriptions registered, one new test that captures all four lifecycle listeners and verifies they forward to the correct service methods.

End-to-end smoke (real Postgres, fresh DB)

- Booted with the new DDL applied (status column + CHECK + index) on an empty DB. The OrderEventSubscribers @PostConstruct line confirms 6 subscriptions registered before the first HTTP call.
- Five lifecycle scenarios driven via REST:
    PO-FULL:         confirm + receive → AP SETTLED  amount=50.00
    SO-FULL:         confirm + ship    → AR SETTLED  amount= 1.00
    SO-REVERSE:      confirm + cancel  → AR REVERSED amount= 1.00
    PO-REVERSE:      confirm + cancel  → AP REVERSED amount=50.00
    SO-DRAFT-CANCEL: cancel only       → NO ROW (no confirm event)
- finance__journal_entry returns exactly 4 rows (the 5th scenario correctly produces nothing) and the ?status filters all return the expected partition (POSTED=0, SETTLED=2, REVERSED=2).

What's still NOT in pbc-finance

- Still no debit/credit legs, no chart of accounts, no period close, no double-entry invariant.
  This is the v0.17 minimal seed; the real P5.9 build promotes it into a real GL.
- No reaction to "settle then reverse" or "reverse then settle" other than the WARN-and-leave-alone defensive path. A real GL would write a separate compensating journal entry; the minimal PBC just keeps the row immutable once it leaves POSTED.
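The three-step transition rule the settle/reverse helpers share can be sketched as one pure function — a stand-in for illustration, not the shipped JournalEntryService:

```kotlin
enum class JournalEntryStatus { POSTED, SETTLED, REVERSED }

/**
 * Returns the new status, or null when the row must be left alone:
 * already-at-destination is an idempotent no-op (at-least-once delivery),
 * and SETTLED ↔ REVERSED is a contradictory terminal overwrite (the real
 * impl logs it at WARN as an upstream contract violation).
 */
fun transition(current: JournalEntryStatus, target: JournalEntryStatus): JournalEntryStatus? =
    when {
        current == target -> null                      // idempotent no-op
        current == JournalEntryStatus.POSTED -> target // the only legal promotion
        else -> null                                   // refuse SETTLED↔REVERSED
    }

fun main() {
    println(transition(JournalEntryStatus.POSTED, JournalEntryStatus.SETTLED))   // SETTLED
    println(transition(JournalEntryStatus.SETTLED, JournalEntryStatus.SETTLED))  // null (replay)
    println(transition(JournalEntryStatus.SETTLED, JournalEntryStatus.REVERSED)) // null (refused)
}
```

The missing-row no-op (step 1) happens before this function is ever reached, at the findFirstByOrderCode lookup.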