-
Closes the P4.3 rollout — the last PBC whose controllers were still unannotated. Every endpoint in `UserController` now carries an `@RequirePermission("identity.user.*")` annotation matching the keys already declared in `identity.yml`:
- GET /api/v1/identity/users → identity.user.read
- GET /api/v1/identity/users/{id} → identity.user.read
- POST /api/v1/identity/users → identity.user.create
- PATCH /api/v1/identity/users/{id} → identity.user.update
- DELETE /api/v1/identity/users/{id} → identity.user.disable

`AuthController` (login, refresh) is deliberately NOT annotated — it is in the platform-security public allowlist because login is the token-issuing endpoint (chicken-and-egg). KDoc on the controller class updated to reflect the new auth story (removing the stale "authentication deferred to v0.2" comment from before P4.1 / P4.3 landed).

Smoke verified end-to-end against real Postgres:
- Admin (wildcard `admin` role) → GET /users returns 200, POST /users returns 201 (new user `jane` created).
- Unauthenticated GET and POST → 401 Unauthorized from the framework's JWT filter before @RequirePermission runs. A non-admin user without explicit grants would get 403 from the AOP evaluator; tested manually with the admin and anonymous cases.

No test changes — the controller unit test is a thin DTO mapper test that doesn't exercise the Spring AOP aspect; identity-wide authz enforcement is covered by the platform-security tests plus the shipping smoke tests. 246 unit tests, all green.

P4.3 is now complete across every core PBC: pbc-catalog, pbc-partners, pbc-inventory, pbc-orders-sales, pbc-orders-purchase, pbc-finance, pbc-production, pbc-identity.
-
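For illustration, a minimal self-contained sketch of the annotation pattern — the annotation and endpoint bodies are stubbed here (the real @RequirePermission lives in platform-security and is evaluated by the AOP aspect before the handler runs), so names and return values are illustrative, not the framework's exact signatures:

```kotlin
// Stub of the permission annotation so the sketch stands alone.
@Target(AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
annotation class RequirePermission(val value: String)

// Hypothetical controller shape: every endpoint carries its declared key.
class UserControllerSketch {
    @RequirePermission("identity.user.read")
    fun list(): String = "200 OK"

    @RequirePermission("identity.user.create")
    fun create(username: String): String = "201 Created ($username)"

    @RequirePermission("identity.user.disable")
    fun disable(id: Long): String = "204 No Content"
}

// Reflection shows the key an AOP evaluator would check before each call.
fun permissionKeyOf(methodName: String): String? =
    UserControllerSketch::class.java.methods
        .firstOrNull { it.name == methodName }
        ?.getAnnotation(RequirePermission::class.java)
        ?.value
```

The aspect reads exactly this runtime-retained value, which is why the keys can be cross-checked against the metadata YAML at boot.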
CI verified green for `986f02ce` on both the gradle build and docker image jobs.
-
Removes the ext-handling copy/paste that had grown across four PBCs (partners, inventory, orders-sales, orders-purchase). Every service that wrote the JSONB `ext` column was manually doing the same four-step sequence: validate, null-check, serialize with a local ObjectMapper, assign to the entity. And every response mapper was doing the inverse: check-if-blank, parse, cast, swallow errors. Net: ~15 lines saved per PBC, one place to change the ext contract later (e.g. PII redaction, audit tagging, field-level events), and a stable plug-in opt-in mechanism — any plug-in entity that implements `HasExt` automatically participates.

New api.v1 surface:

    interface HasExt {
        val extEntityName: String // key into metadata__custom_field
        var ext: String           // the serialized JSONB column
    }

Lives in `org.vibeerp.api.v1.entity` so plug-ins can opt their own entities into the same validation path. Zero Spring/Jackson dependencies — api.v1 stays clean.

Extended `ExtJsonValidator` (platform-metadata) with two helpers:
- fun applyTo(entity: HasExt, ext: Map<String, Any?>?) — null-safe; validates; writes canonical JSON to entity.ext. Replaces the validate + writeValueAsString + assign triplet in every service's create() and update().
- fun parseExt(entity: HasExt): Map<String, Any?> — returns empty map on blank/corrupt column; response mappers never 500 on bad data. Replaces the four identical parseExt local functions.

ExtJsonValidator now takes an ObjectMapper via constructor injection (Spring Boot's auto-configured bean).
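A runnable sketch of the seam under stated assumptions: the real ExtJsonValidator uses Jackson's ObjectMapper and validates against the declared-field registry; here (de)serialization is injected as plain lambdas (a toy key=value codec in the demo) so only the null-safe control flow is shown:

```kotlin
// Mirror of the api.v1 opt-in interface described above.
interface HasExt {
    val extEntityName: String // key into metadata__custom_field
    var ext: String           // the serialized JSONB column
}

class ExtJsonValidatorSketch(
    // stand-ins for ObjectMapper.writeValueAsString / readValue
    private val serialize: (Map<String, Any?>) -> String,
    private val deserialize: (String) -> Map<String, Any?>,
) {
    // Null-safe: a null ext map (e.g. PATCH without the ext field) is a no-op,
    // so the prior column value is preserved. The real helper validates against
    // the declared fields for entity.extEntityName before writing.
    fun applyTo(entity: HasExt, ext: Map<String, Any?>?) {
        if (ext == null) return
        entity.ext = serialize(ext)
    }

    // Never throws: blank or corrupt columns map to an empty map, so response
    // mappers cannot 500 on bad data.
    fun parseExt(entity: HasExt): Map<String, Any?> {
        if (entity.ext.isBlank()) return emptyMap()
        return try { deserialize(entity.ext) } catch (e: Exception) { emptyMap() }
    }
}

// Hypothetical entity opting in, plus the toy codec for the demo.
class PartnerLike(override var ext: String = "") : HasExt {
    override val extEntityName: String = "Partner"
}

val demoValidator = ExtJsonValidatorSketch(
    serialize = { m -> m.entries.joinToString(";") { "${it.key}=${it.value}" } },
    deserialize = { s ->
        s.split(";").associate {
            require("=" in it) { "corrupt ext column" }
            it.substringBefore("=") to it.substringAfter("=")
        }
    },
)
```

The point of the shape: services call applyTo once in create()/update(), mappers call parseExt once, and neither ever branches on null/blank/corrupt themselves.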
Entities that now implement HasExt (override val extEntityName; override var ext; companion object const val ENTITY_NAME):
- Partner (`partners.Partner` → "Partner")
- Location (`inventory.Location` → "Location")
- SalesOrder (`orders_sales.SalesOrder` → "SalesOrder")
- PurchaseOrder (`orders_purchase.PurchaseOrder` → "PurchaseOrder")

Deliberately NOT converted this chunk:
- WorkOrder (pbc-production) — its ext column has no declared fields yet; a follow-up that adds declarations AND the HasExt implementation is cleaner than splitting the two.
- JournalEntry (pbc-finance) — derived state, no ext column.

Services lose:
- The `jsonMapper: ObjectMapper = ObjectMapper().registerKotlinModule()` field (four copies eliminated)
- The `parseExt(entity): Map` helper function (four copies)
- The `companion object { const val ENTITY_NAME = ... }` constant (moved onto the entity where it belongs)
- The `val canonicalExt = extValidator.validate(...)` + `.also { it.ext = jsonMapper.writeValueAsString(canonicalExt) }` create pattern (replaced with one applyTo call)
- The `if (command.ext != null) { ... }` update pattern (applyTo is null-safe)

Unit tests: 6 new cases on ExtJsonValidatorTest cover applyTo and parseExt (null-safe path, happy path, failure path, blank column, round-trip, malformed JSON). Existing service tests just swap the mock setup from stubbing `validate` to stubbing `applyTo` and `parseExt` with no-ops.

Smoke verified end-to-end against real Postgres:
- POST /partners with valid ext (partners_credit_limit, partners_industry) → 201, canonical form persisted.
- GET /partners/by-code/X → 200, ext round-trips.
- POST with invalid enum value → 400 "value 'x' is not in allowed set [printing, publishing, packaging, other]".
- POST with undeclared key → 400 "ext contains undeclared key(s) for 'Partner': [rogue_field]".
- PATCH with new ext → 200, ext updated.
- PATCH WITHOUT ext field → 200, prior ext preserved (null-safe applyTo).
- POST /orders/sales-orders with no ext → 201, the create path via the shared helper still works.

246 unit tests (+6 over 240), 18 Gradle subprojects.
-
CI verified green for `75a75baa` on both the gradle build and docker image jobs.
-
Grows pbc-production from the minimal v1 (DRAFT → COMPLETED in one step, single output, no BOM) into a real v2 production PBC:

1. IN_PROGRESS state between DRAFT and COMPLETED so "started but not finished" work orders are observable on a dashboard. WorkOrderService.start(id) performs the transition and publishes a new WorkOrderStartedEvent. cancel() now accepts DRAFT OR IN_PROGRESS (v2 writes nothing to the ledger at start() so there is nothing to undo on cancel).
2. Bill of materials via a new WorkOrderInput child entity — @OneToMany with cascade + orphanRemoval, same shape as SalesOrderLine. Each line carries (lineNo, itemCode, quantityPerUnit, sourceLocationCode). complete() now iterates the inputs in lineNo order and writes one MATERIAL_ISSUE ledger row per line (delta = -(quantityPerUnit × outputQuantity)) BEFORE writing the PRODUCTION_RECEIPT for the output. All in one transaction — a failure anywhere rolls back every prior ledger row AND the status flip. Empty inputs list is legal (the v1 auto-spawn-from-SO path still works unchanged, writing only the PRODUCTION_RECEIPT).
3. Scrap flow for COMPLETED work orders via a new scrap(id, scrapLocationCode, quantity, note) service method. Writes a negative ADJUSTMENT ledger row tagged WO:<code>:SCRAP and publishes a new WorkOrderScrappedEvent. Chose ADJUSTMENT over adding a new SCRAP movement reason to keep the enum stable — the reference-string suffix is the disambiguator. The work order itself STAYS COMPLETED; scrap is a correction on top of a terminal state, not a state change.

complete() now requires IN_PROGRESS (not DRAFT); existing callers must start() first.

api.v1 grows two events (WorkOrderStartedEvent, WorkOrderScrappedEvent) alongside the three that already existed. Since this is additive within a major version, the api.v1 semver contract holds — existing subscribers continue to compile.
Liquibase: 002-production-v2.xml widens the status CHECK and creates production__work_order_input with (work_order_id FK, line_no, item_code, quantity_per_unit, source_location_code) plus a unique (work_order_id, line_no) constraint, a CHECK quantity_per_unit > 0, and the audit columns. ON DELETE CASCADE from the parent.

Unit tests: WorkOrderServiceTest grows from 8 to 18 cases — covers start happy path, start rejection, complete-on-DRAFT rejection, empty-BOM complete, BOM-with-two-lines complete (verifies both MATERIAL_ISSUE deltas AND the PRODUCTION_RECEIPT all fire with the right references), scrap happy path, scrap on non-COMPLETED rejection, scrap with non-positive quantity rejection, cancel-from-IN_PROGRESS, and BOM validation rejects (unknown item, duplicate line_no).

Smoke verified end-to-end against real Postgres:
- Created WO-SMOKE with 2-line BOM (2 paper + 0.5 ink per brochure, output 100).
- Started (DRAFT → IN_PROGRESS, no ledger rows).
- Completed: paper balance 500→300 (MATERIAL_ISSUE -200), ink 200→150 (MATERIAL_ISSUE -50), FG-BROCHURE 0→100 (PRODUCTION_RECEIPT +100). All 3 rows tagged WO:WO-SMOKE.
- Scrapped 7 units: FG-BROCHURE 100→93, ADJUSTMENT -7 tagged WO:WO-SMOKE:SCRAP, work order stayed COMPLETED.
- Auto-spawn: SO-42 confirm still creates WO-FROM-SO-42-L1 as a DRAFT with empty BOM; starting + completing it writes only the PRODUCTION_RECEIPT (zero MATERIAL_ISSUE rows), proving the empty-BOM path is backwards-compatible.
- Negative paths: complete-on-DRAFT 400s, scrap-on-DRAFT 400s, double-start 400s, cancel-from-IN_PROGRESS 200.

240 unit tests, 18 Gradle subprojects.
-
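The v2 complete() ledger sequence above can be sketched as a pure function. Assumptions are stated in the comments: the line/row types are simplified stand-ins, and the real service writes each row via InventoryApi.recordMovement inside one @Transactional method rather than returning a list:

```kotlin
import java.math.BigDecimal

// Simplified stand-ins for WorkOrderInput and the ledger write.
data class InputLine(val lineNo: Int, val itemCode: String, val quantityPerUnit: BigDecimal, val sourceLocationCode: String)
data class LedgerRow(val reason: String, val itemCode: String, val delta: BigDecimal, val locationCode: String, val reference: String)

fun completeLedgerRows(
    orderCode: String,
    outputItemCode: String,
    outputQuantity: BigDecimal,
    outputLocationCode: String,
    inputs: List<InputLine>,
): List<LedgerRow> {
    val reference = "WO:$orderCode"
    // One MATERIAL_ISSUE per BOM line, in lineNo order, delta = -(quantityPerUnit × outputQuantity)...
    val issues = inputs.sortedBy { it.lineNo }.map {
        LedgerRow("MATERIAL_ISSUE", it.itemCode, it.quantityPerUnit.multiply(outputQuantity).negate(), it.sourceLocationCode, reference)
    }
    // ...then the PRODUCTION_RECEIPT for the finished output. Empty inputs is legal:
    // the result is just the receipt, matching the v1 auto-spawn path.
    return issues + LedgerRow("PRODUCTION_RECEIPT", outputItemCode, outputQuantity, outputLocationCode, reference)
}
```

Running the WO-SMOKE numbers (2 paper + 0.5 ink per brochure, output 100) through this yields the -200 / -50 / +100 deltas from the smoke test.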
Completes the @RequirePermission rollout that started in commit b174cf60. Every non-state-transition endpoint in pbc-inventory (Location CRUD), pbc-orders-sales, and pbc-orders-purchase is now guarded by the pre-declared permission keys from their respective metadata YAMLs. State-transition verbs (confirm/cancel/ship/receive) were annotated in the original P4.3 demo chunk; this one fills in the list/get/create/update gap.

Inventory
- LocationController: list/get/getByCode → inventory.location.read; create → inventory.location.create; update → inventory.location.update; deactivate → inventory.location.deactivate.
- (StockBalanceController.adjust + StockMovementController.record were already annotated with inventory.stock.adjust.)

Orders-sales
- SalesOrderController: list/get/getByCode → orders.sales.read; create → orders.sales.create; update → orders.sales.update. (confirm/cancel/ship were already annotated.)

Orders-purchase
- PurchaseOrderController: list/get/getByCode → orders.purchase.read; create → orders.purchase.create; update → orders.purchase.update. (confirm/cancel/receive were already annotated.)

No new permission keys. Every key this chunk consumes was already declared in the relevant metadata YAML since the respective PBC was first built — catalog + partners already shipped in this state, and the inventory/orders YAMLs declared their read/create/update keys from day one but the controllers hadn't started using them.

Admin happy path still works (bootstrap admin has the wildcard `admin` role, same as after commit b174cf60). 230 unit tests still green — annotations are purely additive; no existing test hits the @RequirePermission path since service-level tests bypass the controller entirely.

Combined with b174cf60, the framework now has full @RequirePermission coverage on every PBC controller except pbc-identity's user admin (which is a separate permission surface — user/role administration has its own security story).
A minimum-privilege role like "sales-clerk" can now be granted exactly `orders.sales.read` + `orders.sales.create` + `partners.partner.read` and NOT accidentally see catalog admin, inventory movements, finance journals, or contact PII.
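The grant check behind that property can be sketched as follows. This is a simplified stand-in for the AOP evaluator: role and grant storage shapes here are illustrative (the real grants live in metadata__role_permission), but the wildcard `admin` short-circuit matches the behaviour described above:

```kotlin
// Hypothetical principal shape: just the role names resolved from the JWT.
data class Principal(val roles: Set<String>)

// True iff the caller holds the wildcard admin role, or any of their roles
// has been explicitly granted the required permission key.
fun hasPermission(principal: Principal, grants: Map<String, Set<String>>, key: String): Boolean =
    "admin" in principal.roles ||
        principal.roles.any { role -> key in grants.getOrDefault(role, emptySet()) }
```

A "sales-clerk" granted only the three keys named above passes orders.sales.read but fails partners.contact.read, while the bootstrap admin passes everything unconditionally.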
-
Closes the P4.3 permission-rollout gap for the two oldest PBCs that were never updated when the @RequirePermission aspect landed. The catalog and partners metadata YAMLs already declared all the needed permission keys — the controllers just weren't consuming them.

Catalog
- ItemController: list/get/getByCode → catalog.item.read; create → catalog.item.create; update → catalog.item.update; deactivate → catalog.item.deactivate.
- UomController: list/get/getByCode → catalog.uom.read; create → catalog.uom.create; update → catalog.uom.update.

Partners (including the PII boundary)
- PartnerController: list/get/getByCode → partners.partner.read; create → partners.partner.create; update → partners.partner.update. (deactivate was already annotated in the P4.3 demo chunk.)
- AddressController: all five verbs annotated with partners.address.{read,create,update,delete}.
- ContactController: all five verbs annotated with partners.contact.{read,create,update,deactivate}.
- The "TODO once P4.3 lands" note in the class KDoc was removed; P4.3 is live and the annotations are now in place. This is the PII boundary that CLAUDE.md flagged as incomplete after the original P4.3 rollout.

No new permission keys were added — all 14 keys this touches were already declared in pbc-catalog/catalog.yml and pbc-partners/partners.yml when those PBCs were first built. The metadata loader has been serving them to the SPA/OpenAPI/MCP introspection endpoint since day one; this change just starts enforcing them at the controller.

Smoke-tested end-to-end against real Postgres
- Fresh DB + fresh boot.
- Admin happy path (bootstrap admin has wildcard `admin` role):
  GET /api/v1/catalog/items → 200
  POST /api/v1/catalog/items → 201 (SMOKE-1 created)
  GET /api/v1/catalog/uoms → 200
  POST /api/v1/partners/partners → 201 (SMOKE-P created)
  POST /api/v1/partners/.../contacts → 201 (contact created)
  GET /api/v1/partners/.../contacts → 200 (PII read)
- Anonymous negative path (no Bearer token):
  GET /api/v1/catalog/items → 401
  GET /api/v1/partners/.../contacts → 401
- 230 unit tests still green (annotations are purely additive; no existing test hit the @RequirePermission path since the service-level tests bypass the controller entirely).

Why this is a genuine security improvement
- Before: any authenticated user (including the eventual "Alice from reception", the contractor's read-only service account, the AI-agent MCP client) could read PII, create partners, and create catalog items.
- After: those operations require explicit role-permission grants through metadata__role_permission. The bootstrap admin still has unconditional access via the wildcard admin role, so nothing in a fresh deployment is broken; but a real operator granting minimum-privilege roles now has the columns they need in the database to do it.
- The contact PII boundary in particular is GDPR-relevant: before this change, any logged-in user could enumerate every contact's name + email + phone. After, only users with partners.contact.read can see them.

What's still NOT annotated
- pbc-inventory's Location create/update/deactivate endpoints (only stock.adjust and movement.create are annotated).
- pbc-orders-sales and pbc-orders-purchase list/get/create/update endpoints (only the state-transition verbs are annotated).
- pbc-identity's user admin endpoints.

These are the next cleanup chunk. This one stays focused on catalog + partners because those were the two PBCs that predated P4.3 entirely and hadn't been touched since.
-
The framework's eighth PBC and the first one that's NOT order- or master-data-shaped. Work orders are about *making things*, which is the reason the printing-shop reference customer exists in the first place. With this PBC in place the framework can express the full buy-sell-make loop end-to-end.

What landed (new module pbc/pbc-production/)
- WorkOrder entity (production__work_order): code, output_item_code, output_quantity, status (DRAFT|COMPLETED|CANCELLED), due_date (display-only), source_sales_order_code (nullable — work orders can be either auto-spawned from a confirmed SO or created manually), ext.
- WorkOrderJpaRepository with existsBySourceSalesOrderCode / findBySourceSalesOrderCode for the auto-spawn dedup.
- WorkOrderService.create / complete / cancel:
  • create validates the output item via CatalogApi (same seam SalesOrderService and PurchaseOrderService use), rejects non-positive quantities, publishes WorkOrderCreatedEvent.
  • complete(outputLocationCode) credits finished goods to the named location via InventoryApi.recordMovement with reason=PRODUCTION_RECEIPT (added in commit c52d0d59) and reference="WO:<order_code>", then flips status to COMPLETED, then publishes WorkOrderCompletedEvent — all in the same @Transactional method.
  • cancel only allowed from DRAFT (no un-producing finished goods); publishes WorkOrderCancelledEvent.
- SalesOrderConfirmedSubscriber (@PostConstruct → EventBus.subscribe(SalesOrderConfirmedEvent::class.java, ...)): walks the confirmed sales order's lines via SalesOrdersApi (NOT by importing pbc-orders-sales) and calls WorkOrderService.create for each line. Coded as one bean with one subscription — matches pbc-finance's one-bean-per-subject pattern.
  • Idempotent on source sales order code — if any work order already exists for the SO, the whole spawn is a no-op.
  • Tolerant of a missing SO (defensive against a future async bus that could deliver the confirm event after the SO has vanished).
  • The WO code convention: WO-FROM-<so_code>-L<lineno>, e.g. WO-FROM-SO-2026-0001-L1.
- REST controller /api/v1/production/work-orders: list, get, by-code, create, complete, cancel — each annotated with @RequirePermission. Four permission keys declared in the production.yml metadata: read / create / complete / cancel.
- CompleteWorkOrderRequest: single-arg DTO uses the @JsonCreator(mode=PROPERTIES) + @param:JsonProperty trick that already bit ShipSalesOrderRequest and ReceivePurchaseOrderRequest; cross-referenced in the KDoc so the third instance doesn't need re-discovery.
- distribution/.../pbc-production/001-production-init.xml: CREATE TABLE with CHECK on status + CHECK on qty>0 + GIN on ext + the usual indexes. NEITHER output_item_code NOR source_sales_order_code is a foreign key (cross-PBC reference policy — guardrail #9).
- settings.gradle.kts + distribution/build.gradle.kts: registers the new module and adds it to the distribution dependency list.
- master.xml: includes the new changelog in dependency order, after pbc-finance.

New api.v1 surface: org.vibeerp.api.v1.event.production.*
- WorkOrderCreatedEvent, WorkOrderCompletedEvent, WorkOrderCancelledEvent — sealed under WorkOrderEvent, aggregateType="production.WorkOrder". Same pattern as the order events, so any future consumer (finance revenue recognition, warehouse put-away dashboard, a customer plug-in that needs to react to "work finished") subscribes through the public typed-class overload with no dependency on pbc-production.

Unit tests (13 new, 217 → 230 total)
- WorkOrderServiceTest (9 tests): create dedup, positive quantity check, catalog seam, happy-path create with event assertion, complete rejects non-DRAFT, complete happy path with InventoryApi.recordMovement assertion + event assertion, cancel from DRAFT, cancel rejects COMPLETED.
- SalesOrderConfirmedSubscriberTest (5 tests): subscription registration count, spawns N work orders for N SO lines with correct code convention, idempotent when WOs already exist, no-op on missing SO, and a listener-routing test that captures the EventListener instance and verifies it forwards to the right service method.

End-to-end smoke verified against real Postgres
- Fresh DB, fresh boot. Both OrderEventSubscribers (pbc-finance) and SalesOrderConfirmedSubscriber (pbc-production) log their subscription registration before the first HTTP call.
- Seeded two items (BROCHURE-A, BROCHURE-B), a customer, and a finished-goods location (WH-FG).
- Created a 2-line sales order (SO-WO-1), confirmed it.
  → Produced ONE orders_sales.SalesOrder outbox row.
  → Produced ONE AR POSTED finance__journal_entry for 1000 USD (500 × 1 + 250 × 2 — the pbc-finance consumer still works).
  → Produced TWO draft work orders auto-spawned from the SO lines: WO-FROM-SO-WO-1-L1 (BROCHURE-A × 500) and WO-FROM-SO-WO-1-L2 (BROCHURE-B × 250), both with source_sales_order_code=SO-WO-1.
- Completed WO1 to WH-FG:
  → Produced a PRODUCTION_RECEIPT ledger row for BROCHURE-A delta=500 reference="WO:WO-FROM-SO-WO-1-L1".
  → inventory__stock_balance now has BROCHURE-A = 500 at WH-FG.
  → Flipped status to COMPLETED.
- Cancelled WO2 → CANCELLED.
- Created a manual WO-MANUAL-1 with no source SO → succeeds; demonstrates the "operator creates a WO to build inventory ahead of demand" path.
- platform__event_outbox ends with 6 rows, all DISPATCHED:
  orders_sales.SalesOrder SO-WO-1
  production.WorkOrder WO-FROM-SO-WO-1-L1 (created)
  production.WorkOrder WO-FROM-SO-WO-1-L2 (created)
  production.WorkOrder WO-FROM-SO-WO-1-L1 (completed)
  production.WorkOrder WO-FROM-SO-WO-1-L2 (cancelled)
  production.WorkOrder WO-MANUAL-1 (created)

Why this chunk was the right next move
- pbc-finance was a PASSIVE consumer — it only wrote derived reporting state.
  pbc-production is the first ACTIVE consumer: it creates new aggregates with their own state machines and their own cross-PBC writes in reaction to another PBC's events. This is a meaningfully harder test of the event-driven integration story and it passes end-to-end.
- "One ledger, three callers" is now real: sales shipments, purchase receipts, AND production receipts all feed the same inventory__stock_movement ledger through the same InventoryApi.recordMovement facade. The facade has proven stable under three very different callers.
- The framework now expresses the basic ERP trinity: buy (purchase orders), sell (sales orders), make (work orders). That's the shape every real manufacturing customer needs, and it's done without any PBC importing another.

What's deliberately NOT in v1
- No bill of materials. complete() only credits finished goods; it does NOT issue raw materials. A shop floor that needs to consume 4 sheets of paper to produce 1 brochure does it manually via POST /api/v1/inventory/movements with reason=MATERIAL_ISSUE (added in commit c52d0d59). A proper BOM lands as WorkOrderInput lines in a future chunk.
- No IN_PROGRESS state. complete() goes DRAFT → COMPLETED in one step. A real shop floor needs "started but not finished" visibility; that's the next iteration.
- No routings, operations, machine assignments, or due-date enforcement. due_date is display-only.
- No "scrap defective output" flow for a COMPLETED work order. cancel refuses from COMPLETED; the fix requires a new MovementReason and a new event, not a special-case method on the service.
-
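The SO→WO auto-spawn from the pbc-production entry above can be sketched as a pair of pure functions. Assumptions: the SO line shape and the existence predicate are simplified stand-ins for SalesOrdersApi and WorkOrderJpaRepository.existsBySourceSalesOrderCode; only the code convention and the all-or-nothing idempotency are shown:

```kotlin
// Hypothetical stand-in for a confirmed SO line as seen through SalesOrdersApi.
data class SoLine(val lineNo: Int, val itemCode: String, val quantity: Int)

// The WO code convention: WO-FROM-<so_code>-L<lineno>.
fun workOrderCode(soCode: String, lineNo: Int) = "WO-FROM-$soCode-L$lineNo"

// Idempotent spawn: if ANY work order already exists for the SO, the whole spawn
// is a no-op — matching at-least-once delivery on the bus (outbox replay, retries).
fun spawnCodes(
    soCode: String,
    lines: List<SoLine>,
    anyWorkOrderExistsFor: (String) -> Boolean,
): List<String> =
    if (anyWorkOrderExistsFor(soCode)) emptyList()
    else lines.sortedBy { it.lineNo }.map { workOrderCode(soCode, it.lineNo) }
```

Re-delivering the same confirm event produces no second batch, which is exactly what the "idempotent when WOs already exist" unit test asserts.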
-
Extends pbc-inventory's MovementReason enum with the two reasons a production-style PBC needs to record stock movements through the existing InventoryApi.recordMovement facade. No new endpoint, no new database column — just two new enum values, two new sign-validation rules, and four new tests.

Why this lands BEFORE pbc-production
- It's the smallest self-contained change that unblocks any future production-related code (the framework's planned pbc-production, a customer plug-in's manufacturing module, or even an ad-hoc operator script). Each of those callers can now record "consume raw material" / "produce finished good" through the same primitive that already serves sales shipments and purchase receipts.
- It validates the "one ledger, many callers" property the architecture spec promised. Adding a new movement reason takes zero schema changes (the column is varchar) and zero plug-in changes (the api.v1 facade takes the reason as a string and delegates to MovementReason.valueOf inside the adapter). The enum lives entirely inside pbc-inventory.

What changed
- StockMovement.kt: enum gains MATERIAL_ISSUE (Δ ≤ 0) and PRODUCTION_RECEIPT (Δ ≥ 0), with KDoc explaining why each one was added and how they fit the "one primitive for every direction" story.
- StockMovementService.validateSign: PRODUCTION_RECEIPT joins the must-be-non-negative bucket alongside RECEIPT, PURCHASE_RECEIPT, and TRANSFER_IN; MATERIAL_ISSUE joins the must-be-non-positive bucket alongside ISSUE, SALES_SHIPMENT, and TRANSFER_OUT.
- 4 new unit tests:
  • record rejects positive delta on MATERIAL_ISSUE
  • record rejects negative delta on PRODUCTION_RECEIPT
  • record accepts a positive PRODUCTION_RECEIPT (happy path, new balance row at the receiving location)
  • record accepts a negative MATERIAL_ISSUE (decrements an existing balance from 1000 → 800)
- Total tests: 213 → 217.
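The sign rule can be sketched as below. Assumptions: the delta is simplified to a Long (the real ledger presumably uses a decimal type), ADJUSTMENT is taken as the bidirectional case per the scrap flow described elsewhere in this log, and the error message is illustrative in wording though it mirrors the smoke-test output:

```kotlin
// Enum values as named in the commit; grouping matches the two buckets described above.
enum class MovementReason {
    RECEIPT, ISSUE, ADJUSTMENT, TRANSFER_IN, TRANSFER_OUT,
    SALES_SHIPMENT, PURCHASE_RECEIPT, MATERIAL_ISSUE, PRODUCTION_RECEIPT,
}

fun validateSign(reason: MovementReason, delta: Long) {
    when (reason) {
        // must-be-non-negative bucket: stock flows IN
        MovementReason.RECEIPT, MovementReason.PURCHASE_RECEIPT,
        MovementReason.TRANSFER_IN, MovementReason.PRODUCTION_RECEIPT ->
            require(delta >= 0) { "movement reason $reason requires a non-negative delta (got $delta)" }
        // must-be-non-positive bucket: stock flows OUT
        MovementReason.ISSUE, MovementReason.SALES_SHIPMENT,
        MovementReason.TRANSFER_OUT, MovementReason.MATERIAL_ISSUE ->
            require(delta <= 0) { "movement reason $reason requires a non-positive delta (got $delta)" }
        // corrections may go either direction
        MovementReason.ADJUSTMENT -> Unit
    }
}
```

Because the `when` is exhaustive over the enum, adding a future reason (e.g. the return-path ones) forces a compile-time decision about which bucket it belongs to.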
Smoke test against real Postgres
- Booted on a fresh DB; no schema migration needed because the `reason` column is varchar(32), already wide enough.
- Seeded an item RAW-PAPER, an item FG-WIDGET, and a location WH-PROD via the existing endpoints.
- POST /api/v1/inventory/movements with reason=RECEIPT for 1000 raw paper → balance row at 1000.
- POST /api/v1/inventory/movements with reason=MATERIAL_ISSUE delta=-200 reference="WO:WO-EVT-1" → balance becomes 800, ledger row written.
- POST /api/v1/inventory/movements with reason=PRODUCTION_RECEIPT delta=50 reference="WO:WO-EVT-1" → balance row at 50 for FG-WIDGET, ledger row written.
- Negative test: POST PRODUCTION_RECEIPT with delta=-1 → 400 Bad Request "movement reason PRODUCTION_RECEIPT requires a non-negative delta (got -1)" — the new sign rule fires.
- Final ledger has 3 rows (RECEIPT, MATERIAL_ISSUE, PRODUCTION_RECEIPT); final balance has FG-WIDGET=50 and RAW-PAPER=800 — the math is correct.

What's deliberately NOT in this chunk
- No pbc-production yet. That's the next chunk; this is just the foundation that lets it (or any other production-ish caller) write to the ledger correctly without needing changes to api.v1 or pbc-inventory ever again.
- No new return-path reasons (RETURN_FROM_CUSTOMER, RETURN_TO_SUPPLIER) — those land when the returns flow does.
- No reference convention for "WO:" — that's documented in the KDoc on `reference`, not enforced anywhere. The v0.16/v0.17 convention "<source>:<code>" continues unchanged.
-
The minimal pbc-finance that landed in commit bf090c2e only reacted to *ConfirmedEvent. This change wires up the rest of the order lifecycle (ship/receive → SETTLED, cancel → REVERSED) so the journal entry reflects what actually happened to the order, not just the moment it was confirmed.

JournalEntryStatus (new enum + new column)
- POSTED — created from a confirm event (existing behaviour)
- SETTLED — promoted by SalesOrderShippedEvent / PurchaseOrderReceivedEvent
- REVERSED — promoted by SalesOrderCancelledEvent / PurchaseOrderCancelledEvent
- The status field is intentionally a separate axis from JournalEntryType: type tells you "AR or AP", status tells you "where in its lifecycle".

distribution/.../pbc-finance/002-finance-status.xml
- ALTER TABLE adds `status varchar(16) NOT NULL DEFAULT 'POSTED'`, a CHECK constraint mirroring the enum values, and an index on status for the new filter endpoint. The DEFAULT 'POSTED' covers any existing rows on an upgraded environment without a backfill step.

JournalEntryService — four new methods, all idempotent
- settleFromSalesShipped(event) → POSTED → SETTLED for AR
- settleFromPurchaseReceived(event) → POSTED → SETTLED for AP
- reverseFromSalesCancelled(event) → POSTED → REVERSED for AR
- reverseFromPurchaseCancelled(event) → POSTED → REVERSED for AP

Each runs through a private settleByOrderCode/reverseByOrderCode helper that:
1. Looks up the row by order_code (new repo method findFirstByOrderCode). If absent → no-op (e.g. cancel from DRAFT means no *ConfirmedEvent was ever published, so no journal entry exists; this is the most common cancel path).
2. If the row is already in the destination status → no-op (idempotent under at-least-once delivery, e.g. outbox replay or future Kafka retry).
3. Refuses to overwrite a contradictory terminal status — a SETTLED row cannot be REVERSED, and vice versa.
  The producer's state machine forbids cancel-from-shipped/received, so reaching here implies an upstream contract violation; logged at WARN and the row is left alone.

OrderEventSubscribers — six subscriptions via @PostConstruct
- All six order events from api.v1.event.orders.* are subscribed via the typed-class EventBus.subscribe(eventType, listener) overload, the same public API a plug-in would use. Boot log line updated: "pbc-finance subscribed to 6 order events".

JournalEntryController — new ?status= filter
- GET /api/v1/finance/journal-entries?status=POSTED|SETTLED|REVERSED surfaces the partition. Existing ?orderCode= and ?type= filters unchanged. Read permission still finance.journal.read.

12 new unit tests (213 total, was 201)
- JournalEntryServiceTest: settle/reverse for AR + AP, idempotency on duplicate destination status, refusal to overwrite a contradictory terminal status, no-op on missing row, default POSTED on new entries.
- OrderEventSubscribersTest: assert all SIX subscriptions registered, one new test that captures all four lifecycle listeners and verifies they forward to the correct service methods.

End-to-end smoke (real Postgres, fresh DB)
- Booted with the new DDL applied (status column + CHECK + index) on an empty DB. The OrderEventSubscribers @PostConstruct line confirms 6 subscriptions registered before the first HTTP call.
- Five lifecycle scenarios driven via REST:
  PO-FULL: confirm + receive → AP SETTLED amount=50.00
  SO-FULL: confirm + ship → AR SETTLED amount=1.00
  SO-REVERSE: confirm + cancel → AR REVERSED amount=1.00
  PO-REVERSE: confirm + cancel → AP REVERSED amount=50.00
  SO-DRAFT-CANCEL: cancel only → NO ROW (no confirm event)
- finance__journal_entry returns exactly 4 rows (the 5th scenario correctly produces nothing) and ?status filters all return the expected partition (POSTED=0, SETTLED=2, REVERSED=2).

What's still NOT in pbc-finance
- Still no debit/credit legs, no chart of accounts, no period close, no double-entry invariant.
  This is the v0.17 minimal seed; the real P5.9 build promotes it into a real GL.
- No reaction to "settle then reverse" or "reverse then settle" other than the WARN-and-leave-alone defensive path. A real GL would write a separate compensating journal entry; the minimal PBC just keeps the row immutable once it leaves POSTED.
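The three-step decision inside the settle/reverse helper can be sketched as a pure function. Assumptions: the result type and names here are illustrative; the real helper mutates the JPA row, and the ContractViolation branch logs at WARN instead of returning a value:

```kotlin
enum class JournalEntryStatus { POSTED, SETTLED, REVERSED }

// Illustrative outcome type for the decision the helper makes.
sealed interface TransitionResult
object NoRow : TransitionResult              // no *ConfirmedEvent ever fired → no-op
object AlreadyThere : TransitionResult       // duplicate delivery → idempotent no-op
object ContractViolation : TransitionResult  // contradictory terminal status → WARN, leave row alone
data class Promoted(val to: JournalEntryStatus) : TransitionResult

fun promote(current: JournalEntryStatus?, destination: JournalEntryStatus): TransitionResult = when {
    current == null -> NoRow                                   // step 1: lookup miss
    current == destination -> AlreadyThere                     // step 2: at-least-once delivery
    current != JournalEntryStatus.POSTED -> ContractViolation  // step 3: SETTLED cannot become REVERSED, etc.
    else -> Promoted(destination)
}
```

Only POSTED rows ever move, which is what keeps the row immutable once it reaches either terminal status.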
-
The framework's seventh PBC, and the first one whose ENTIRE purpose is to react to events published by other PBCs. It validates the *consumer* side of the cross-PBC event seam that was wired up in commit 67406e87 (event-driven cross-PBC integration). With pbc-finance in place, the bus now has both producers and consumers in real PBC business logic — not just the wildcard EventAuditLogSubscriber that ships with platform-events.

What landed (new module pbc/pbc-finance/, ~480 lines including tests)
- JournalEntry entity (finance__journal_entry): id, code (= originating event UUID), type (AR|AP), partner_code, order_code, amount, currency_code, posted_at, ext. Unique index on `code` is the durability anchor for idempotent event delivery; the service ALSO existsByCode-checks before insert to make duplicate-event handling a clean no-op rather than a constraint-violation exception.
- JournalEntryJpaRepository with existsByCode + findByOrderCode + findByType (the read-side filters used by the controller).
- JournalEntryService.recordSalesConfirmed / recordPurchaseConfirmed take a SalesOrderConfirmedEvent / PurchaseOrderConfirmedEvent and write the corresponding AR/AP row. @Transactional with Propagation.REQUIRED so the listener joins the publisher's TX when the bus delivers synchronously (today) and creates a fresh one if a future async bus delivers from a worker thread. The KDoc explains why REQUIRED is the correct default and why REQUIRES_NEW would be wrong here.
- OrderEventSubscribers @Component with @PostConstruct that calls EventBus.subscribe(SalesOrderConfirmedEvent::class.java, ...) and EventBus.subscribe(PurchaseOrderConfirmedEvent::class.java, ...) once at boot. Uses the public typed-class subscribe overload — NOT the platform-internal subscribeToAll wildcard helper. This is the API surface plug-ins will also use.
- JournalEntryController: read-only REST under /api/v1/finance/journal-entries with @RequirePermission "finance.journal.read".
  Filter params: ?orderCode= and ?type=. Deliberately no POST endpoint — entries are derived state.
- finance.yml metadata declaring 1 entity, 1 permission, 1 menu.
- Liquibase changelog at distribution/.../pbc-finance/001-finance-init.xml + master.xml include + distribution/build.gradle.kts dep.
- settings.gradle.kts: registers :pbc:pbc-finance.
- 9 new unit tests (6 for JournalEntryService, 3 for OrderEventSubscribers) — including idempotency, dedup-by-event-id contract, listener-forwarding correctness via slot-captured EventListener invocation. Total tests: 192 → 201, 16 → 17 modules.

Why this is the right shape
- pbc-finance has zero source dependency on pbc-orders-sales, pbc-orders-purchase, pbc-partners, or pbc-catalog. The Gradle build refuses any cross-PBC dependency at configuration time — pbc-finance only declares api/api-v1, platform-persistence, and platform-security. The events it consumes live in api.v1.event.orders, and the partner/item references it stores are opaque string codes.
- Subscribers go through EventBus.subscribe(eventType, listener), the public typed-class overload from api.v1.event.EventBus. Plug-ins use exactly this API; this PBC proves the API works end-to-end from a real consumer.
- The consumer is idempotent on the producer's event id, so at-least-once delivery (outbox replay, future Kafka retry) cannot create duplicate journal entries. This makes the consumer correct under both the current synchronous bus and any future async / out-of-process bus.
- Read-only REST API: derived state should not be writable from the outside. Adjustments and reversals will land later as their own command verbs when the real P5.9 finance build needs them, not as a generic create endpoint.

End-to-end smoke verified against real Postgres
- Booted on a fresh DB; the OrderEventSubscribers @PostConstruct log line confirms the subscription registered before any HTTP traffic.
- Seeded an item, supplier, customer, location (existing PBCs).
- Created PO PO-FIN-1 (5000 × 0.04 = 200 USD) → confirmed → GET /api/v1/finance/journal-entries returns ONE row: type=AP partner=SUP-PAPER order=PO-FIN-1 amount=200.0000 USD - Created SO SO-FIN-1 (50 × 0.10 = 5 USD) → confirmed → GET /api/v1/finance/journal-entries now returns TWO rows: type=AR partner=CUST-ACME order=SO-FIN-1 amount=5.0000 USD (plus the AP row from above) - GET /api/v1/finance/journal-entries?orderCode=PO-FIN-1 → only the AP row. - GET /api/v1/finance/journal-entries?type=AR → only the AR row. - platform__event_outbox shows 2 rows (one per confirm) both DISPATCHED, finance__journal_entry shows 2 rows. - The journal-entry code column equals the originating event UUID, proving the dedup contract is wired. What this is NOT (yet) - Not a real general ledger. No debit/credit legs, no chart of accounts, no period close, no double-entry invariant. P5.9 promotes this minimal seed into a real finance PBC. - No reaction to ship/receive/cancel events yet — only confirm. Real revenue recognition (which happens at ship time for most accounting standards) lands with the P5.9 build. - No outbound api.v1.ext facade. pbc-finance does not (yet) expose itself to other PBCs; it is a pure consumer. When pbc-production needs to know "did this order's invoice clear", that facade gets added.
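The dedup contract above can be sketched in plain Kotlin. This is a hedged, minimal sketch: the event shape and service name are simplified stand-ins, the in-memory set stands in for the existsByCode query, and the JPA/@Transactional wiring is omitted — the real durability anchor is the unique index on `code`.

```kotlin
import java.util.UUID

// Illustrative stand-in for api.v1.event.orders.SalesOrderConfirmedEvent
data class SalesOrderConfirmedEvent(val eventId: UUID, val orderCode: String, val amount: String)

class JournalEntryService(private val existingCodes: MutableSet<String> = mutableSetOf()) {
    /** Returns true when a row was written, false on a duplicate delivery (clean no-op). */
    fun recordSalesConfirmed(event: SalesOrderConfirmedEvent): Boolean {
        val code = event.eventId.toString()     // journal code = originating event UUID
        if (code in existingCodes) return false // existsByCode check: dedup before insert
        existingCodes += code                   // real insert is guarded by the unique index on `code`
        return true
    }
}
```

The check-before-insert makes at-least-once delivery a no-op on replay instead of a constraint-violation exception; the unique index catches the race the check cannot.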
-
-
The event bus and transactional outbox have existed since P1.7 but no real PBC business logic was publishing through them. This change closes that loop end-to-end: api.v1.event.orders (new public surface) - SalesOrderConfirmedEvent / SalesOrderShippedEvent / SalesOrderCancelledEvent — sealed under SalesOrderEvent, aggregateType = "orders_sales.SalesOrder" - PurchaseOrderConfirmedEvent / PurchaseOrderReceivedEvent / PurchaseOrderCancelledEvent — sealed under PurchaseOrderEvent, aggregateType = "orders_purchase.PurchaseOrder" - Events live in api.v1 (not inside the PBCs) so other PBCs and customer plug-ins can subscribe without importing the producing PBC — that would violate guardrail #9. pbc-orders-sales / pbc-orders-purchase - SalesOrderService and PurchaseOrderService now inject EventBus and publish a typed event from each state-changing method (confirm, ship/receive, cancel). The publish runs INSIDE the same @Transactional method as the JPA mutation and the InventoryApi.recordMovement ledger writes — EventBusImpl uses Propagation.MANDATORY, so a publish outside a transaction fails loudly. A failure in any line rolls back the status change AND every ledger row AND the would-have-been outbox row. - 6 new unit tests (3 per service) mockk the EventBus and verify each transition publishes exactly one matching event with the expected fields. Total tests: 186 → 192. End-to-end smoke verified against real Postgres - Created supplier, customer, item PAPER-A4, location WH-MAIN. - Drove a PO and an SO through the full state machine plus a cancel of each. 6 events fired: orders_purchase.PurchaseOrder × 3 (confirm + receive + cancel) orders_sales.SalesOrder × 3 (confirm + ship + cancel) - The wildcard EventAuditLogSubscriber logged each one at INFO level to /tmp/vibe-erp-boot.log with the [event-audit] tag. - platform__event_outbox shows 6 rows, all flipped from PENDING to DISPATCHED by the OutboxPoller within seconds. 
- The publish-inside-the-ledger-transaction guarantee means a subscriber that reads inventory__stock_movement on event receipt is guaranteed to see the matching SALES_SHIPMENT or PURCHASE_RECEIPT rows. This is what the architecture spec section 9 promised and now delivers. Why this is the right shape - Other PBCs (production, finance) and customer plug-ins can now react to "an order was confirmed/shipped/received/cancelled" without ever importing pbc-orders-* internals. The event class objects live in api.v1, the only stable contract surface. - The aggregateType strings ("orders_sales.SalesOrder", "orders_purchase.PurchaseOrder") match the <pbc>.<aggregate> convention documented on DomainEvent.aggregateType, so a cross-classloader subscriber can use the topic-string subscribe overload without holding the concrete Class<E>. - The bus's outbox row is the durability anchor for the future Kafka/NATS bridge: switching from in-process delivery to cross-process delivery will require zero changes to either PBC's publish call. -
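The publish-from-the-state-change pattern in the event-wiring chunk above can be sketched as follows. The `EventBus` interface and event shape here are illustrative stand-ins for the api.v1 types, and the transactional boundary is noted in comments rather than enforced — in the real services the whole method is @Transactional and EventBusImpl's Propagation.MANDATORY makes an out-of-transaction publish fail loudly.

```kotlin
// Stand-in for api.v1.event.EventBus
interface EventBus { fun publish(event: Any) }

// Stand-in event; real aggregateType is "orders_purchase.PurchaseOrder"
data class PurchaseOrderConfirmedEvent(val orderCode: String)

class PurchaseOrderService(private val eventBus: EventBus) {
    var status: String = "DRAFT"
        private set

    // In the real service this method is @Transactional: the JPA mutation,
    // the ledger writes, and the outbox row commit or roll back together.
    fun confirm(orderCode: String) {
        require(status == "DRAFT") { "only DRAFT can be confirmed" }
        status = "CONFIRMED"                                      // the JPA mutation
        eventBus.publish(PurchaseOrderConfirmedEvent(orderCode))  // same TX as the mutation
    }
}
```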
The buying-side mirror of pbc-orders-sales. Adds the 6th real PBC and closes the loop: the framework now does both directions of the inventory flow through the same `InventoryApi.recordMovement` facade. Buy stock with a PO that hits RECEIVED, ship stock with a SO that hits SHIPPED, both feed the same `inventory__stock_movement` ledger. What landed ----------- * New Gradle subproject `pbc/pbc-orders-purchase` (16 modules total now). Same dependency set as pbc-orders-sales, same architectural enforcement — no direct dependency on any other PBC; cross-PBC references go through `api.v1.ext.<pbc>` facades at runtime. * Two JPA entities mirroring SalesOrder / SalesOrderLine: - `PurchaseOrder` (header) — code, partner_code (varchar, NOT a UUID FK), status enum DRAFT/CONFIRMED/RECEIVED/CANCELLED, order_date, expected_date (nullable, the supplier's promised delivery date), currency_code, total_amount, ext jsonb. - `PurchaseOrderLine` — purchase_order_id FK, line_no, item_code, quantity, unit_price, currency_code. Same shape as the sales order line; the api.v1 facade reuses `SalesOrderLineRef` rather than declaring a duplicate type. * `PurchaseOrderService.create` performs three cross-PBC validations in one transaction: 1. PartnersApi.findPartnerByCode → reject if null. 2. The partner's `type` must be SUPPLIER or BOTH (a CUSTOMER-only partner cannot be the supplier of a purchase order — the mirror of the sales-order rule that rejects SUPPLIER-only partners as customers). 3. CatalogApi.findItemByCode for EVERY line. Then validates: at least one line, no duplicate line numbers, positive quantity, non-negative price, currency matches header. The header total is RECOMPUTED from the lines (caller's value ignored — never trust a financial aggregate sent over the wire). 
* State machine enforced by `confirm()`, `cancel()`, and `receive()`: - DRAFT → CONFIRMED (confirm) - DRAFT → CANCELLED (cancel) - CONFIRMED → CANCELLED (cancel before receipt) - CONFIRMED → RECEIVED (receive — increments inventory) - RECEIVED → × (terminal; cancellation requires a return-to-supplier flow) * `receive(id, receivingLocationCode)` walks every line and calls `inventoryApi.recordMovement(... +line.quantity reason="PURCHASE_RECEIPT" reference="PO:<order_code>")`. The whole operation runs in ONE transaction so a failure on any line rolls back EVERY line's already-written movement AND the order status change. The customer cannot end up with "5 of 7 lines received, status still CONFIRMED, ledger half-written". * New `POST /api/v1/orders/purchase-orders/{id}/receive` endpoint with body `{"receivingLocationCode": "WH-MAIN"}`, gated by `orders.purchase.receive`. The single-arg DTO has the same Jackson `@JsonCreator(mode = PROPERTIES)` workaround as `ShipSalesOrderRequest` (the trap is documented in the class KDoc with a back-reference to ShipSalesOrderRequest). * Confirm/cancel/receive endpoints carry `@RequirePermission` annotations (`orders.purchase.confirm`, `orders.purchase.cancel`, `orders.purchase.receive`). All three keys declared in the new `orders-purchase.yml` metadata. * New api.v1 facade `org.vibeerp.api.v1.ext.orders.PurchaseOrdersApi` + `PurchaseOrderRef`. Reuses the existing `SalesOrderLineRef` type for the line shape — buying and selling lines carry the same fields, so duplicating the ref type would be busywork. * `PurchaseOrdersApiAdapter` — sixth `*ApiAdapter` after Identity, Catalog, Partners, Inventory, SalesOrders. * `orders-purchase.yml` metadata declaring 2 entities, 6 permission keys, 1 menu entry under "Purchasing". 
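The transition table above can be sketched as a plain-Kotlin guard. The enum and the error message are paraphrased, not the real pbc-orders-purchase types; the real checks live inside `confirm()`, `cancel()`, and `receive()`.

```kotlin
enum class PurchaseOrderStatus { DRAFT, CONFIRMED, RECEIVED, CANCELLED }

// The allowed-transition table from the section above
val allowed: Map<PurchaseOrderStatus, Set<PurchaseOrderStatus>> = mapOf(
    PurchaseOrderStatus.DRAFT to setOf(PurchaseOrderStatus.CONFIRMED, PurchaseOrderStatus.CANCELLED),
    PurchaseOrderStatus.CONFIRMED to setOf(PurchaseOrderStatus.CANCELLED, PurchaseOrderStatus.RECEIVED),
    PurchaseOrderStatus.RECEIVED to emptySet(),   // terminal: cancellation needs a return-to-supplier flow
    PurchaseOrderStatus.CANCELLED to emptySet(),  // terminal
)

fun transition(from: PurchaseOrderStatus, to: PurchaseOrderStatus): PurchaseOrderStatus {
    require(to in allowed.getValue(from)) { "illegal transition $from -> $to" }
    return to
}
```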
End-to-end smoke test (the full demo loop) ------------------------------------------ Reset Postgres, booted the app, ran: * Login as admin * POST /catalog/items → PAPER-A4 * POST /partners → SUP-PAPER (SUPPLIER) * POST /inventory/locations → WH-MAIN * GET /inventory/balances?itemCode=PAPER-A4 → [] (no stock) * POST /orders/purchase-orders → PO-2026-0001 for 5000 sheets @ $0.04 = total $200.00 (recomputed from the line) * POST /purchase-orders/{id}/confirm → status CONFIRMED * POST /purchase-orders/{id}/receive body={"receivingLocationCode":"WH-MAIN"} → status RECEIVED * GET /inventory/balances?itemCode=PAPER-A4 → quantity=5000 * GET /inventory/movements?itemCode=PAPER-A4 → PURCHASE_RECEIPT delta=5000 ref=PO:PO-2026-0001 Then the FULL loop with the sales side from the previous chunk: * POST /partners → CUST-ACME (CUSTOMER) * POST /orders/sales-orders → SO-2026-0001 for 50 sheets * confirm + ship from WH-MAIN * GET /inventory/balances?itemCode=PAPER-A4 → quantity=4950 (5000-50) * GET /inventory/movements?itemCode=PAPER-A4 → PURCHASE_RECEIPT delta=5000 ref=PO:PO-2026-0001 SALES_SHIPMENT delta=-50 ref=SO:SO-2026-0001 The framework's `InventoryApi.recordMovement` facade now has TWO callers — pbc-orders-sales (negative deltas, SALES_SHIPMENT) and pbc-orders-purchase (positive deltas, PURCHASE_RECEIPT) — feeding the same ledger from both sides. Failure paths verified: * Re-receive a RECEIVED PO → 400 "only CONFIRMED orders can be received" * Cancel a RECEIVED PO → 400 "issue a return-to-supplier flow instead" * Create a PO from a CUSTOMER-only partner → 400 "partner 'CUST-ONLY' is type CUSTOMER and cannot be the supplier of a purchase order" Regression: catalog uoms, identity users, partners, inventory, sales orders, purchase orders, printing-shop plates with i18n, metadata entities (15 now, was 13) — all still HTTP 2xx. Build ----- * `./gradlew build`: 16 subprojects, 186 unit tests (was 175), all green. 
The 11 new tests cover the same shapes as the sales-order tests but inverted: unknown supplier, CUSTOMER-only rejection, BOTH-type acceptance, unknown item, empty lines, total recomputation, confirm/cancel state machine, receive-rejects-non-CONFIRMED, receive-walks-lines-with-positive-delta, cancel-rejects-RECEIVED, cancel-CONFIRMED-allowed. What was deferred ----------------- * **RFQs** (request for quotation) and **supplier price catalogs** — both lie alongside POs but neither is in v1. * **Partial receipts**. v1's RECEIVED is "all-or-nothing"; the supplier delivering 4500 of 5000 sheets is not yet modelled. * **Supplier returns / refunds**. The cancel-RECEIVED rejection message says "issue a return-to-supplier flow" — that flow doesn't exist yet. * **Three-way matching** (PO + receipt + invoice). Lands with pbc-finance. * **Multi-leg transfers**. TRANSFER_IN/TRANSFER_OUT exist in the movement enum but no service operation yet writes both legs in one transaction. -
The killer demo finally works: place a sales order, ship it, watch inventory drop. This chunk lands the two pieces that close the loop: the inventory movement ledger (the audit-grade history of every stock change) and the sales-order /ship endpoint that calls InventoryApi.recordMovement to atomically debit stock for every line. This is the framework's FIRST cross-PBC WRITE flow. Every earlier cross-PBC call was a read (CatalogApi.findItemByCode, PartnersApi.findPartnerByCode, InventoryApi.findStockBalance). Shipping inverts that: pbc-orders-sales synchronously writes to inventory's tables (via the api.v1 facade) as a side effect of changing its own state, all in ONE Spring transaction. What landed ----------- * New `inventory__stock_movement` table — append-only ledger (id, item_code, location_id FK, signed delta, reason enum, reference, occurred_at, audit cols). CHECK constraint `delta <> 0` rejects no-op rows. Indexes on item_code, location_id, the (item, location) composite, reference, and occurred_at. Migration is in its own changelog file (002-inventory-movement-ledger.xml) per the project convention that each new schema cut is a new file. * New `StockMovement` JPA entity + repository + `MovementReason` enum (RECEIPT, ISSUE, ADJUSTMENT, SALES_SHIPMENT, PURCHASE_RECEIPT, TRANSFER_OUT, TRANSFER_IN). Each value carries a documented sign convention; the service rejects mismatches (a SALES_SHIPMENT with positive delta is a caller bug, not silently coerced). * New `StockMovementService.record(...)` — the ONE entry point for changing inventory. Cross-PBC item validation via CatalogApi, local location validation, sign-vs-reason enforcement, and negative-balance rejection all happen BEFORE the write. The ledger row insert AND the balance row update happen in the SAME database transaction so the two cannot drift. * `StockBalanceService.adjust` refactored to delegate: it computes delta = newQty - oldQty and calls record(... ADJUSTMENT). 
The REST endpoint keeps its absolute-quantity semantics — operators type "the shelf has 47" not "decrease by 3" — but every adjustment now writes a ledger row too. A no-op adjustment (re-saving the same value) does NOT write a row, so the audit log doesn't fill with noise from operator clicks that didn't change anything. * New `StockMovementController` at `/api/v1/inventory/movements`: GET filters by itemCode, locationId, or reference (for "all movements caused by SO-2026-0001"); POST records a manual movement. Both protected by `inventory.stock.adjust`. * `InventoryApi` facade extended with `recordMovement(itemCode, locationCode, delta, reason: String, reference)`. The reason is a String in the api.v1 surface (not the local enum) so plug-ins don't import inventory's internal types — the closed set is documented on the interface. The adapter parses the string with a meaningful error on unknown values. * New `SHIPPED` status on `SalesOrderStatus`. Transitions: DRAFT → CONFIRMED → SHIPPED (terminal). Cancelling a SHIPPED order is rejected with "issue a return / refund flow instead". * New `SalesOrderService.ship(id, shippingLocationCode)`: walks every line, calls `inventoryApi.recordMovement(... -line.quantity reason="SALES_SHIPMENT" reference="SO:{order_code}")`, flips status to SHIPPED. The whole operation runs in ONE transaction so a failure on any line — bad item, bad location, would push balance negative — rolls back the order status change AND every other line's already-written movement. The customer never ends up with "5 of 7 lines shipped, status still CONFIRMED, ledger half-written". * New `POST /api/v1/orders/sales-orders/{id}/ship` endpoint with body `{"shippingLocationCode": "WH-MAIN"}`, gated by the new `orders.sales.ship` permission key. * `ShipSalesOrderRequest` is a single-arg Kotlin data class — same Jackson deserialization trap as `RefreshRequest`. Fixed with `@JsonCreator(mode = PROPERTIES) + @param:JsonProperty`. 
The trap is documented in the class KDoc. End-to-end smoke test (the killer demo) --------------------------------------- Reset Postgres, booted the app, ran: * Login as admin * POST /catalog/items → PAPER-A4 * POST /partners → CUST-ACME * POST /inventory/locations → WH-MAIN * POST /inventory/balances/adjust → quantity=1000 (now writes a ledger row via the new path) * GET /inventory/movements?itemCode=PAPER-A4 → ADJUSTMENT delta=1000 ref=null * POST /orders/sales-orders → SO-2026-0001 (50 units of PAPER-A4) * POST /sales-orders/{id}/confirm → status CONFIRMED * POST /sales-orders/{id}/ship body={"shippingLocationCode":"WH-MAIN"} → status SHIPPED * GET /inventory/balances?itemCode=PAPER-A4 → quantity=950 (1000 - 50) * GET /inventory/movements?itemCode=PAPER-A4 → ADJUSTMENT delta=1000 ref=null SALES_SHIPMENT delta=-50 ref=SO:SO-2026-0001 Failure paths verified: * Re-ship a SHIPPED order → 400 "only CONFIRMED orders can be shipped" * Cancel a SHIPPED order → 400 "issue a return / refund flow instead" * Place a 10000-unit order, confirm, try to ship from a 950-stock warehouse → 400 "stock movement would push balance for 'PAPER-A4' at location ... below zero (current=950.0000, delta=-10000.0000)"; balance unchanged after the rollback (transaction integrity verified) Regression: catalog uoms, identity users, inventory locations, printing-shop plates with i18n, metadata entities — all still HTTP 2xx. Build ----- * `./gradlew build`: 15 subprojects, 175 unit tests (was 163), all green. The 12 new tests cover: - StockMovementServiceTest (8): zero-delta rejection, positive SALES_SHIPMENT rejection, negative RECEIPT rejection, both signs allowed on ADJUSTMENT, unknown item via CatalogApi seam, unknown location, would-push-balance-negative rejection, new-row + existing-row balance update. 
- StockBalanceServiceTest, rewritten (5): negative-quantity early reject, delegation with computed positive delta, delegation with computed negative delta, no-op adjustment short-circuit (NO ledger row written), no-op on missing row creates an empty row at zero. - SalesOrderServiceTest, additions (3): ship rejects non-CONFIRMED, ship walks lines and calls recordMovement with negated quantity + correct reference, cancel rejects SHIPPED. What was deferred ----------------- * **Event publication.** A `StockMovementRecorded` event would let pbc-finance and pbc-production react to ledger writes without polling. The event bus has been wired since P1.7 but no real cross-PBC flow uses it yet — those are the natural next two chunks after this commit. * **Multi-leg transfers.** TRANSFER_OUT and TRANSFER_IN are in the enum but no service operation atomically writes both legs yet (both legs in one transaction is required to keep the total on-hand invariant). * **Reservation / pick lists.** "Reserve 50 of PAPER-A4 for an unconfirmed order" is its own concept that lands later. * **Shipped-order returns / refunds.** The cancel-SHIPPED rule points the user at "use a return flow" — that flow doesn't exist yet. v1 says shipments are terminal. -
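The sign-vs-reason rule from the movement-ledger chunk above can be sketched as follows. The `Sign` helper is an illustrative assumption about how the convention is encoded, not the real enum's shape; the zero-delta check mirrors the `delta <> 0` CHECK constraint.

```kotlin
import java.math.BigDecimal

enum class Sign { POSITIVE, NEGATIVE, EITHER }

// Each reason carries its documented sign convention
enum class MovementReason(val sign: Sign) {
    RECEIPT(Sign.POSITIVE), ISSUE(Sign.NEGATIVE), ADJUSTMENT(Sign.EITHER),
    SALES_SHIPMENT(Sign.NEGATIVE), PURCHASE_RECEIPT(Sign.POSITIVE),
    TRANSFER_OUT(Sign.NEGATIVE), TRANSFER_IN(Sign.POSITIVE),
}

fun validateDelta(reason: MovementReason, delta: BigDecimal) {
    require(delta.signum() != 0) { "delta must be non-zero" }  // mirrors CHECK delta <> 0
    val ok = when (reason.sign) {
        Sign.POSITIVE -> delta.signum() > 0
        Sign.NEGATIVE -> delta.signum() < 0
        Sign.EITHER -> true
    }
    // A mismatch is a caller bug: reject loudly, never silently coerce the sign
    require(ok) { "${reason.name} with delta $delta violates its sign convention" }
}
```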
The original test flipped the LAST character of the JWT signature segment from 'a' to 'b' (or vice versa). This was flaky in CI: with a random UUID subject, the issued token's signature segment can end on a base64url character whose flipped value, when decoded, lenient-parses to bits that the JWT decoder accepts as valid — leaving the signature still verifiable. The flake was reproducible locally with `./gradlew test --rerun-tasks` and was the cause of CI failures on the P5.5 and P4.3 docs-pin commits (and would have caught the underlying chunks too if the docs commit hadn't pushed first). Fix: tamper with the PAYLOAD (middle JWT segment) instead. Any change to the payload changes the bytes the signature is computed over, which ALWAYS fails HMAC verification — no edge cases. The test now flips the first character of the payload to a definitely-different valid base64url character, leaving the format intact so the failure mode is "signature does not verify" rather than "malformed token". Verified 10 consecutive `--rerun-tasks` runs all green locally.
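The fixed tampering strategy is pure string manipulation — no JWT library involved. A minimal sketch (the function name is illustrative): flip the first character of the middle segment to a definitely-different base64url character, leaving all three segments in place.

```kotlin
// Tamper with the payload (middle) segment of a JWS compact token.
// Any payload change alters the bytes the signature was computed over,
// so HMAC verification always fails — no lenient-decode edge cases.
fun tamperPayload(jwt: String): String {
    val parts = jwt.split('.')
    require(parts.size == 3) { "not a JWS compact token" }
    val payload = parts[1]
    // Pick a character guaranteed to differ yet still valid base64url,
    // so the failure mode stays "signature does not verify", not "malformed token"
    val flipped = if (payload[0] != 'A') 'A' else 'B'
    return "${parts[0]}.$flipped${payload.substring(1)}.${parts[2]}"
}
```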
-
The framework's authorization layer is now live. Until now, every authenticated user could do everything; the framework had only an authentication gate. This chunk adds method-level @RequirePermission annotations enforced by a Spring AOP aspect that consults the JWT's roles claim and a metadata-driven role-permission map. What landed ----------- * New `Role` and `UserRole` JPA entities mapping the existing identity__role + identity__user_role tables (the schema was created in the original identity init but never wired to JPA). RoleJpaRepository + UserRoleJpaRepository with a JPQL query that returns a user's role codes in one round-trip. * `JwtIssuer.issueAccessToken(userId, username, roles)` now accepts a Set<String> of role codes and encodes them as a `roles` JWT claim (sorted for deterministic tests). Refresh tokens NEVER carry roles by design — see the rationale on `JwtIssuer.issueRefreshToken`. A role revocation propagates within one access-token lifetime (15 min default). * `JwtVerifier` reads the `roles` claim into `DecodedToken.roles`. Missing claim → empty set, NOT an error (refresh tokens, system tokens, and pre-P4.3 tokens all legitimately omit it). * `AuthService.login` now calls `userRoles.findRoleCodesByUserId(...)` before minting the access token. `AuthService.refresh` re-reads the user's roles too — so a refresh always picks up the latest set, since refresh tokens deliberately don't carry roles. * New `AuthorizationContext` ThreadLocal in `platform-security.authz` carrying an `AuthorizedPrincipal(id, username, roles)`. Separate from `PrincipalContext` (which lives in platform-persistence and carries only the principal id, for the audit listener). The two contexts coexist because the audit listener has no business knowing what roles a user has. * `PrincipalContextFilter` now populates BOTH contexts on every authenticated request, reading the JWT's `username` and `roles` claims via `Jwt.getClaimAsStringList("roles")`. 
The filter is the one and only place that knows about Spring Security types AND about both vibe_erp contexts; everything downstream uses just the Spring-free abstractions. * `PermissionEvaluator` Spring bean: takes a role set + permission key, returns boolean. Resolution chain: 1. The literal `admin` role short-circuits to `true` for every key (the wildcard exists so the bootstrap admin can do everything from the very first boot without seeding a complete role-permission mapping). 2. Otherwise consults an in-memory `Map<role, Set<permission>>` loaded from `metadata__role_permission` rows. The cache is rebuilt by `refresh()`, called from `VibeErpPluginManager` after the initial core load AND after every plug-in load. 3. Empty role set is always denied. No implicit grants. * `@RequirePermission("...")` annotation in `platform-security.authz`. `RequirePermissionAspect` is a Spring AOP @Aspect with @Around advice that intercepts every annotated method, reads the current request's `AuthorizationContext`, calls `PermissionEvaluator.has(...)`, and either proceeds or throws `PermissionDeniedException`. * New `PermissionDeniedException` carrying the offending key. `GlobalExceptionHandler` maps it to HTTP 403 Forbidden with `"permission denied: 'partners.partner.deactivate'"` as the detail. The key IS surfaced to the caller (unlike the 401's generic "invalid credentials") because the SPA needs it to render a useful "your role doesn't include X" message and callers are already authenticated, so it's not an enumeration vector. * `BootstrapAdminInitializer` now creates the wildcard `admin` role on first boot and grants it to the bootstrap admin user. * `@RequirePermission` applied to four sensitive endpoints as the demo: `PartnerController.deactivate`, `StockBalanceController.adjust`, `SalesOrderController.confirm`, `SalesOrderController.cancel`. More endpoints will gain annotations as additional roles are introduced; v1 keeps the blast radius narrow. 
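The resolution chain above can be sketched in plain Kotlin. This is a sketch under stated assumptions: the constructor map stands in for the `metadata__role_permission` load, and the Spring bean wiring plus the `refresh()`-from-plugin-manager hook are reduced to a plain method.

```kotlin
class PermissionEvaluator(private var grants: Map<String, Set<String>> = emptyMap()) {
    /** Rebuilds the in-memory cache; the real bean is called by the plug-in manager. */
    fun refresh(newGrants: Map<String, Set<String>>) { grants = newGrants }

    fun has(roles: Set<String>, permissionKey: String): Boolean {
        if (roles.isEmpty()) return false  // 3. empty role set is always denied, no implicit grants
        if ("admin" in roles) return true  // 1. wildcard admin short-circuits every key
        // 2. metadata-driven map: permission granted if ANY role carries the key
        return roles.any { permissionKey in grants[it].orEmpty() }
    }
}
```

Unknown roles fall through to an empty grant set, so they deny rather than error — matching the "unknown role denial" test shape listed below.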
End-to-end smoke test --------------------- Reset Postgres, booted the app, verified: * Admin login → JWT length 265 (was 241), decoded claims include `"roles":["admin"]` * Admin POST /sales-orders/{id}/confirm → 200, status DRAFT → CONFIRMED (admin wildcard short-circuits the permission check) * Inserted a 'powerless' user via raw SQL with no role assignments but copied the admin's password hash so login works * Powerless login → JWT length 247, decoded claims have NO roles field at all * Powerless POST /sales-orders/{id}/cancel → **403 Forbidden** with `"permission denied: 'orders.sales.cancel'"` in the body * Powerless DELETE /partners/{id} → **403 Forbidden** with `"permission denied: 'partners.partner.deactivate'"` * Powerless GET /sales-orders, /partners, /catalog/items → all 200 (read endpoints have no @RequirePermission) * Admin regression: catalog uoms, identity users, inventory locations, printing-shop plates with i18n, metadata custom-fields endpoint — all still HTTP 2xx Build ----- * `./gradlew build`: 15 subprojects, 163 unit tests (was 153), all green. The 10 new tests cover: - PermissionEvaluator: empty roles deny, admin wildcard, explicit role-permission grant, multi-role union, unknown role denial, malformed payload tolerance, currentHas with no AuthorizationContext, currentHas with bound context (8 tests). - JwtRoundTrip: roles claim round-trips through the access token, refresh token never carries roles even when asked (2 tests). What was deferred ----------------- * **OIDC integration (P4.2)**. Built-in JWT only. The Keycloak-compatible OIDC client will reuse the same authorization layer unchanged — the roles will come from OIDC ID tokens instead of the local user store. * **Permission key validation at boot.** The framework does NOT yet check that every `@RequirePermission` value matches a declared metadata permission key. The plug-in linter is the natural place for that check to land later. * **Role hierarchy**. 
Roles are flat in v1; a role with permission X cannot inherit from another role. Adding a `parent_role` field on the role row is a non-breaking change later. * **Resource-aware permissions** ("the user owns THIS partner"). v1 only checks the operation, not the operand. Resource-aware checks are post-v1. * **Composite (AND/OR) permission requirements**. A single key per call site keeps the contract simple. Composite requirements live in service code that calls `PermissionEvaluator.currentHas` directly. * **Role management UI / REST**. The framework can EVALUATE permissions but has no first-class endpoints for "create a role", "grant a permission to a role", "assign a role to a user". v1 expects these to be done via direct DB writes or via the future SPA's role editor (P3.x); the wiring above is intentionally policy-only, not management. -
The fifth real PBC and the first business workflow PBC. pbc-inventory proved a PBC could consume ONE cross-PBC facade (CatalogApi). pbc-orders-sales consumes TWO simultaneously (PartnersApi for the customer, CatalogApi for every line's item) in a single transaction — the most rigorous test of the modular monolith story so far. Neither source PBC is on the compile classpath; the Gradle build refuses any direct dependency. Spring DI wires the api.v1 interfaces to their concrete adapters at runtime. What landed ----------- * New Gradle subproject `pbc/pbc-orders-sales` (15 modules total). * Two JPA entities, both extending `AuditedJpaEntity`: - `SalesOrder` (header) — code, partner_code (varchar, NOT a UUID FK to partners), status enum DRAFT/CONFIRMED/CANCELLED, order_date, currency_code (varchar(3)), total_amount numeric(18,4), ext jsonb. Eager-loaded `lines` collection because every read of the header is followed by a read of the lines in practice. - `SalesOrderLine` — sales_order_id FK, line_no, item_code (varchar, NOT a UUID FK to catalog), quantity, unit_price, currency_code. Per-line currency in the schema even though v1 enforces all-lines-match-header (so relaxing to multi-currency later requires no schema change). No `ext` jsonb on lines: lines are facts, not master records; custom fields belong on the header. * `SalesOrderService.create` performs **three independent cross-PBC validations** in one transaction: 1. PartnersApi.findPartnerByCode → reject if null (covers unknown AND inactive partners; the facade hides them). 2. PartnersApi result.type must be CUSTOMER or BOTH (a SUPPLIER-only partner cannot be the customer of a sales order). 3. CatalogApi.findItemByCode for EVERY line → reject if null. Then it ALSO validates: at least one line, no duplicate line numbers, positive quantity, non-negative price, currency matches header. The header total is RECOMPUTED from the lines — the caller's value is intentionally ignored. 
Never trust a financial aggregate sent over the wire. * State machine enforced by `confirm()` and `cancel()`: - DRAFT → CONFIRMED (confirm) - DRAFT → CANCELLED (cancel from draft) - CONFIRMED → CANCELLED (cancel a confirmed order) Anything else throws with a descriptive message. CONFIRMED orders are immutable except for cancellation — the `update` method refuses to mutate a non-DRAFT order. * `update` with line items REPLACES the existing lines wholesale (PUT semantics for lines, PATCH for header columns). Partial line edits are not modelled because the typical "edit one line" UI gesture renders to a full re-send anyway. * REST: `/api/v1/orders/sales-orders` (CRUD + `/confirm` + `/cancel`). State transitions live on dedicated POST endpoints rather than PATCH-based status writes — they have side effects (lines become immutable, downstream PBCs will receive events in future versions), and sentinel-status writes hide that. * New api.v1 facade `org.vibeerp.api.v1.ext.orders.SalesOrdersApi` with `findByCode`, `findById`, `SalesOrderRef`, `SalesOrderLineRef`. Fifth ext.* package after identity, catalog, partners, inventory. Sets up the next consumers: pbc-production for work orders, pbc-finance for invoicing, the printing-shop reference plug-in for the quote-to-job-card workflow. * `SalesOrdersApiAdapter` runtime implementation. Cancelled orders ARE returned by the facade (unlike inactive items / partners which are hidden) because downstream consumers may legitimately need to react to a cancellation — release a production slot, void an invoice, etc. * `orders-sales.yml` metadata declaring 2 entities, 5 permission keys, 1 menu entry. Build enforcement (still load-bearing) -------------------------------------- The root `build.gradle.kts` STILL refuses any direct dependency from `pbc-orders-sales` to either `pbc-partners` or `pbc-catalog`. Try adding either as `implementation(project(...))` and the build fails at configuration time with the architectural violation. 
The cross-PBC interfaces live in api-v1; the concrete adapters live in their owning PBCs; Spring DI assembles them at runtime via the bootstrap @ComponentScan. pbc-orders-sales sees only the api.v1 interfaces. End-to-end smoke test --------------------- Reset Postgres, booted the app, hit: * POST /api/v1/catalog/items × 2 → PAPER-A4, INK-CYAN * POST /api/v1/partners/partners → CUST-ACME (CUSTOMER), SUP-ONLY (SUPPLIER) * POST /api/v1/orders/sales-orders → 201, two lines, total 386.50 (5000 × 0.05 + 3 × 45.50 = 250.00 + 136.50, correctly recomputed) * POST .../sales-orders with FAKE-PARTNER → 400 with the meaningful message "partner code 'FAKE-PARTNER' is not in the partners directory (or is inactive)" * POST .../sales-orders with SUP-ONLY → 400 "partner 'SUP-ONLY' is type SUPPLIER and cannot be the customer of a sales order" * POST .../sales-orders with FAKE-ITEM line → 400 "line 1: item code 'FAKE-ITEM' is not in the catalog (or is inactive)" * POST /{id}/confirm → status DRAFT → CONFIRMED * PATCH the CONFIRMED order → 400 "only DRAFT orders are mutable" * Re-confirm a CONFIRMED order → 400 "only DRAFT can be confirmed" * POST /{id}/cancel a CONFIRMED order → status CANCELLED (allowed) * SELECT * FROM orders_sales__sales_order — single row, total 386.5000, status CANCELLED * SELECT * FROM orders_sales__sales_order_line — two rows in line_no order with the right items and quantities * GET /api/v1/_meta/metadata/entities → 13 entities now (was 11) * Regression: catalog uoms, identity users, partners, inventory locations, printing-shop plates with i18n (Accept-Language: zh-CN) all still HTTP 2xx. Build ----- * `./gradlew build`: 15 subprojects, 153 unit tests (was 139), all green. 
The 14 new tests cover: unknown/SUPPLIER-only/BOTH-type partner paths, unknown item path, empty/duplicate-lineno line arrays, negative-quantity early reject (verifies CatalogApi NOT consulted), currency mismatch reject, total recomputation, all three state-machine transitions and the rejected ones. What was deferred ----------------- * **Sales-order shipping**. Confirmed orders cannot yet ship, because shipping requires atomically debiting inventory — which needs the movement ledger that was deferred from P5.3. The pair of chunks (movement ledger + sales-order shipping flow) is the natural next combination. * **Multi-currency lines**. The schema column is per-line but the service enforces all-lines-match-header in v1. Relaxing this is a service-only change. * **Quotes** (DRAFT-but-customer-visible) and **deliveries** (the thing that triggers shipping). v1 only models the order itself. * **Pricing engine / discounts**. v1 takes the unit price the caller sends. A real ERP has a price book lookup, customer-specific pricing, volume discounts, promotional pricing — all of which slot in BEFORE the line price is set, leaving the schema unchanged. * **Tax**. v1 totals are pre-tax. Tax calculation is its own PBC (and a regulatory minefield) that lands later. -
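The recompute-the-total rule from the sales-order chunk above can be sketched as follows. Names are illustrative (`LineInput` is a stand-in for the request DTO); the real validation lives in `SalesOrderService.create`.

```kotlin
import java.math.BigDecimal

data class LineInput(val lineNo: Int, val quantity: BigDecimal, val unitPrice: BigDecimal)

// callerTotal is accepted but never read: never trust a financial aggregate
// sent over the wire — the header total is always recomputed from the lines.
fun recomputeTotal(lines: List<LineInput>, callerTotal: BigDecimal?): BigDecimal {
    require(lines.isNotEmpty()) { "at least one line" }
    require(lines.map { it.lineNo }.toSet().size == lines.size) { "duplicate line numbers" }
    lines.forEach {
        require(it.quantity.signum() > 0) { "line ${it.lineNo}: quantity must be positive" }
        require(it.unitPrice.signum() >= 0) { "line ${it.lineNo}: price must be non-negative" }
    }
    return lines.fold(BigDecimal.ZERO) { acc, l -> acc + l.quantity * l.unitPrice }
}
```

With the smoke-test lines (5000 × 0.05 and 3 × 45.50) this yields 386.50 regardless of whatever total the caller sent.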
The fourth real PBC, and the first one that CONSUMES another PBC's api.v1.ext facade. Until now every PBC was a *provider* of an ext.<pbc> interface (identity, catalog, partners). pbc-inventory is the first *consumer*: it injects org.vibeerp.api.v1.ext.catalog.CatalogApi to validate item codes before adjusting stock. This proves the cross-PBC contract works in both directions, exactly as guardrail #9 requires.

What landed
-----------
* New Gradle subproject `pbc/pbc-inventory` (14 modules total now).
* Two JPA entities, both extending `AuditedJpaEntity`:
  - `Location` — code, name, type (WAREHOUSE/BIN/VIRTUAL), active, ext jsonb. Single table for all location levels with a type discriminator (no recursive self-reference in v1; YAGNI for the "one warehouse, handful of bins" shape every printing shop has).
  - `StockBalance` — item_code (varchar, NOT a UUID FK), location_id FK, quantity numeric(18,4). The item_code is deliberately a string FK that references nothing because pbc-inventory has no compile-time link to pbc-catalog — the cross-PBC link goes through CatalogApi at runtime. UNIQUE INDEX on (item_code, location_id) is the primary integrity guarantee; UUID id is the addressable PK. CHECK (quantity >= 0).
* `LocationService` and `StockBalanceService` with full CRUD + adjust semantics. ext jsonb on Location goes through ExtJsonValidator (P3.4 — Tier 1 customisation).
* `StockBalanceService.adjust(itemCode, locationId, quantity)`:
  1. Reject negative quantity.
  2. **Inject CatalogApi**, call `findItemByCode(itemCode)`, reject if null with a meaningful 400. THIS is the cross-PBC seam test.
  3. Verify the location exists.
  4. SELECT-then-save upsert on (item_code, location_id) — single row per cell, mutated in place when the row exists, created when it doesn't. Single-instance deployment makes the read-modify-write race window academic.
* REST: `/api/v1/inventory/locations` (CRUD), `/api/v1/inventory/balances` (GET with itemCode or locationId filters, POST /adjust).
* New api.v1 facade `org.vibeerp.api.v1.ext.inventory` with `InventoryApi.findStockBalance(itemCode, locationCode)` + `totalOnHand(itemCode)` + `StockBalanceRef`. Fourth ext.* package after identity, catalog, partners. Sets up the next consumers (sales orders, purchase orders, the printing-shop plug-in's "do we have enough paper for this job?").
* `InventoryApiAdapter` runtime implementation in pbc-inventory.
* `inventory.yml` metadata declaring 2 entities, 6 permission keys, 2 menu entries.

Build enforcement (the load-bearing bit)
----------------------------------------
The root build.gradle.kts STILL refuses any direct dependency from pbc-inventory to pbc-catalog. Try adding `implementation(project(":pbc:pbc-catalog"))` to pbc-inventory's build.gradle.kts and the build fails at configuration time with "Architectural violation in :pbc:pbc-inventory: depends on :pbc:pbc-catalog". The CatalogApi interface is in api-v1; the CatalogApiAdapter implementation is in pbc-catalog; Spring DI wires them at runtime via the bootstrap @ComponentScan. pbc-inventory only ever sees the interface.
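For orientation, a configuration-time check of this shape could live in the root build script. This is a sketch only, assuming the Gradle Kotlin DSL — the project's actual rule may differ in detail:

```kotlin
// Sketch: fail the build at configuration time when one PBC depends on another.
// Not the project's actual root build.gradle.kts, just the idea.
subprojects {
    afterEvaluate {
        if (path.startsWith(":pbc:")) {
            configurations
                .flatMap { it.dependencies }
                .filterIsInstance<org.gradle.api.artifacts.ProjectDependency>()
                .map { it.dependencyProject.path }
                .filter { it.startsWith(":pbc:") }
                .forEach { offender ->
                    throw GradleException(
                        "Architectural violation in $path: depends on $offender"
                    )
                }
        }
    }
}
```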
End-to-end smoke test
---------------------
Reset Postgres, booted the app, hit:

* POST /api/v1/inventory/locations → 201, "WH-MAIN" warehouse
* POST /api/v1/catalog/items → 201, "PAPER-A4" sheet item
* POST /api/v1/inventory/balances/adjust with itemCode=PAPER-A4 → 200, the cross-PBC catalog lookup succeeded
* POST .../adjust with itemCode=FAKE-ITEM → 400 with the meaningful message "item code 'FAKE-ITEM' is not in the catalog (or is inactive)" — the cross-PBC seam REJECTS unknown items as designed
* POST .../adjust with quantity=-5 → 400 "stock quantity must be non-negative", caught BEFORE the CatalogApi lookup is invoked
* POST .../adjust again with quantity=7500 → 200; SELECT shows ONE row with id unchanged and quantity = 7500 (upsert mutates, not duplicates)
* GET /api/v1/inventory/balances?itemCode=PAPER-A4 → the row, with scale-4 numeric serialised verbatim
* GET /api/v1/_meta/metadata/entities → 11 entities now (was 9 before Location + StockBalance landed)
* Regression: catalog uoms, identity users, partners, printing-shop plates with i18n (Accept-Language: zh-CN), Location custom-fields endpoint all still HTTP 2xx.

Build
-----
* `./gradlew build`: 14 subprojects, 139 unit tests (was 129), all green. The 10 new tests cover Location CRUD + the StockBalance adjust path with mocked CatalogApi: unknown item rejection, unknown location rejection, negative-quantity early reject (verifies CatalogApi is NOT consulted), happy-path create, and upsert (existing row mutated, save() not called because @Transactional flushes the JPA-managed entity on commit).

What was deferred
-----------------
* `inventory__stock_movement` append-only ledger. The current operation is "set the quantity"; receipts/issues/transfers as discrete events with audit trail land in a focused follow-up. The balance row will then be regenerated from the ledger via a Liquibase backfill.
* Negative-balance / over-issue prevention. The CHECK constraint blocks SET to a negative value, but there's no concept of "you cannot ISSUE more than is on hand" yet because there is no separate ISSUE operation — only absolute SET.
* Lots, batches, serial numbers, expiry dates. Plenty of printing shops need none of these; the ones that do can either wait for the lot/serial chunk later or add the columns via Tier 1 custom fields on Location for now.
* Cross-warehouse transfer atomicity (debit one, credit another in one transaction). Same — needs the ledger.

-
The previous commit shipped with `<this commit>` as a placeholder since the SHA wasn't known yet. Pin it now.
-
The keystone of the framework's "key user adds fields without code" promise. A YAML declaration is now enough to add a typed custom field to an existing entity, validate it on every save, and surface it to the SPA / OpenAPI / AI agent. This is the bit that makes vibe_erp a *framework* instead of a fork-per-customer app.

What landed
-----------
* Extended `MetadataYamlFile` with a top-level `customFields:` section and `CustomFieldYaml` + `CustomFieldTypeYaml` wire-format DTOs. The YAML uses a flat `kind: decimal / scale: 2` discriminator instead of Jackson polymorphic deserialization so plug-in authors don't have to nest type configs and api.v1's `FieldType` stays free of Jackson imports.
* `MetadataLoader.doLoad` now upserts `metadata__custom_field` rows alongside entities/permissions/menus, with the same delete-by-source idempotency. `LoadResult` carries the count for the boot log.
* `CustomFieldRegistry` reads every `metadata__custom_field` row from the database and builds an in-memory `Map<entityName, List<CustomField>>` for the validator's hot path. `refresh()` is called by `VibeErpPluginManager` after the initial core load AND after every plug-in load, so a freshly-installed plug-in's custom fields are immediately enforceable. ConcurrentHashMap under the hood; reads are lock-free.
* `ExtJsonValidator` is the on-save half: takes (entityName, ext map), walks the declared fields, coerces each value to its native type, and returns the canonicalised map (or throws IllegalArgumentException with ALL violations joined for a single 400). Per-FieldType rules:
  - String: maxLength enforced.
  - Integer: accepts Number and numeric String, rejects garbage.
  - Decimal: precision/scale enforced; preserved as a plain string in canonical form so JSON encoding doesn't lose trailing zeros.
  - Boolean: accepts true/false (case-insensitive).
  - Date / DateTime: ISO-8601 parse via java.time.
  - Uuid: java.util.UUID parse.
  - Enum: must be in declared `allowedValues`.
  - Money / Quantity / Json / Reference: pass-through (Reference target existence check pending the cross-PBC EntityRegistry seam).
  Unknown ext keys are rejected with the entity's name and the offending keys listed. ALL violations are returned in one response, not failing on the first, so a form submitter fixes everything in one round-trip.
* `Partner` is the first PBC entity to wire ext through the validator: `CreatePartnerRequest` and `UpdatePartnerRequest` accept an `ext: Map<String, Any?>?`; `PartnerService.create/update` calls `extValidator.validate("Partner", ext)` and persists the canonical JSON to the existing `partners__partner.ext` JSONB column; `PartnerResponse` parses it back so callers see what they wrote.
* `partners.yml` now declares two custom fields on Partner — `partners_credit_limit` (Decimal precision=14, scale=2) and `partners_industry` (Enum of printing/publishing/packaging/other) — with English and Chinese labels. Tagged `source=core` so the framework has something to demo from a fresh boot.
* New public `GET /api/v1/_meta/metadata/custom-fields/{entityName}` endpoint serves the api.v1 runtime view of declarations from the in-memory registry (so it reflects every refresh) for the SPA's form builder, the OpenAPI generator, and the AI agent function catalog. The existing `GET /api/v1/_meta/metadata` endpoint also gained a `customFields` list.

End-to-end smoke test
---------------------
Reset Postgres, booted the app, verified:

* Boot log: `CustomFieldRegistry: refreshed 2 custom fields across 1 entities (0 malformed rows skipped)` — twice (after core load and after plug-in load).
* `GET /api/v1/_meta/metadata/custom-fields/Partner` → both declarations with their labels.
* `POST /api/v1/partners/partners` with `ext = {partners_credit_limit: "50000.00", partners_industry: "printing"}` → 201; the response echoes the canonical map.
* `POST` with `ext = {partners_credit_limit: "1.234", partners_industry: "unicycles", rogue: "x"}` → 400 with all THREE violations in one body: "ext contains undeclared key(s) for 'Partner': [rogue]; ext.partners_credit_limit: decimal scale 3 exceeds declared scale 2; ext.partners_industry: value 'unicycles' is not in allowed set [printing, publishing, packaging, other]".
* `SELECT ext FROM partners__partner WHERE code = 'CUST-EXT-OK'` → `{"partners_industry": "printing", "partners_credit_limit": "50000.00"}` — the canonical JSON is in the JSONB column verbatim.
* Regression: catalog uoms, identity users, partners list, and the printing-shop plug-in's POST /plates (with i18n via Accept-Language: zh-CN) all still HTTP 2xx.

Build
-----
* `./gradlew build`: 13 subprojects, 129 unit tests (was 118), all green. The 11 new tests cover each FieldType variant, multi-violation reporting, required-missing rejection, unknown-key rejection, and the null-value-dropped case.

What was deferred
-----------------
* JPA listener auto-validation. Right now PartnerService explicitly calls extValidator.validate(...) before save; Item, Uom, User and the plug-in tables don't yet. Promoting the validator to a JPA PrePersist/PreUpdate listener attached to a marker interface HasExt is the right next step but is its own focused chunk.
* Reference target existence check. FieldType.Reference passes through unchanged because the cross-PBC EntityRegistry that would let the validator look up "does an instance with this id exist?" is a separate seam landing later.
* The customization UI (Tier 1 self-service "add a field" via the SPA) is P3.3 — the runtime enforcement is done, the editor is not.
* Custom-field-aware permissions. The `pii: true` flag is in the metadata but not yet read by anything; the DSAR/erasure pipeline that will consume it is post-v1.0.

-
The previous commit shipped with `<this commit>` as a placeholder since the SHA wasn't known yet. Pin it now.
-
The eighth cross-cutting platform service is live: plug-ins and PBCs now have a real Translator and LocaleProvider instead of the UnsupportedOperationException stubs that have shipped since v0.5.

What landed
-----------
* New Gradle subproject `platform/platform-i18n` (13 modules total).
* `IcuTranslator` — backed by ICU4J's MessageFormat (named placeholders, plurals, gender, locale-aware number/date/currency formatting), the format every modern translation tool speaks natively. JDK ResourceBundle handles per-locale fallback (zh_CN → zh → root).
* The translator takes a list of `BundleLocation(classLoader, baseName)` pairs and tries them in order. For the **core** translator the chain is just `[(host, "messages")]`; for a **per-plug-in** translator constructed by VibeErpPluginManager it's `[(pluginClassLoader, "META-INF/vibe-erp/i18n/messages"), (host, "messages")]` so plug-in keys override host keys, but plug-ins still inherit shared keys like `errors.not_found`.
* Critical detail: the plug-in baseName uses a path the host does NOT publish, because PF4J's `PluginClassLoader` is parent-first — a `getResource("messages.properties")` against the plug-in classloader would find the HOST bundle through the parent chain, defeating the per-plug-in override entirely. Naming the plug-in resource somewhere the host doesn't claim sidesteps the trap.
* The translator disables `ResourceBundle.Control`'s automatic JVM-default-locale fallback. The default control walks `requested → root → JVM default → root`, which would silently serve German strings to a Japanese-locale request just because the German bundle exists. The fallback chain stops at root within a bundle, then moves to the next bundle location, then returns the key string itself.
* `RequestLocaleProvider` reads the active HTTP request's Accept-Language via `RequestContextHolder` + the servlet container's `getLocale()`. Outside an HTTP request (background jobs, workflow tasks, MCP agents) it falls back to the configured default locale (`vibeerp.i18n.defaultLocale`, default `en`). Importantly, when an HTTP request HAS no Accept-Language header it ALSO falls back to the configured default — never to the JVM's locale.
* `I18nConfiguration` exposes `coreTranslator` and `coreLocaleProvider` beans. Per-plug-in translators are NOT beans — they're constructed imperatively per plug-in start in VibeErpPluginManager because each needs its own classloader at the front of the resolution chain.
* `DefaultPluginContext` now wires `translator` and `localeProvider` for real instead of throwing `UnsupportedOperationException`.

Bundles
-------
* Core: `platform-i18n/src/main/resources/messages.properties` (English), `messages_zh_CN.properties` (Simplified Chinese), `messages_de.properties` (German). Six common keys (errors, ok/cancel/save/delete) and an ICU plural example for `counts.items`. Java 9+ (JEP 226) reads .properties files as UTF-8 by default, so Chinese characters are written directly rather than as `\uXXXX` escapes.
* Reference plug-in: moved from the broken `i18n/messages_en-US.properties` / `messages_zh-CN.properties` (wrong path, hyphen-locale filenames ResourceBundle ignores) to the canonical `META-INF/vibe-erp/i18n/messages.properties` / `messages_zh_CN.properties` paths with underscore locale tags. Added a new `printingshop.plate.created` key with an ICU plural for `ink_count` to demonstrate non-trivial argument substitution.

End-to-end smoke test
---------------------
Reset Postgres, booted the app, hit POST /api/v1/plugins/printing-shop/plates with three different Accept-Language headers:

* (no header) → "Plate 'PLATE-001' created with no inks." (en-US, plug-in base bundle)
* `Accept-Language: zh-CN` → "已创建印版 'PLATE-002' (无油墨)。" (zh-CN, plug-in zh_CN bundle)
* `Accept-Language: de` → "Plate 'PLATE-003' created with no inks." (de, but the plug-in ships no German bundle, so it falls back to the plug-in base bundle — correct, the key is plug-in-specific)

Regression: identity, catalog, partners, and `GET /plates` all still HTTP 200 after the i18n wiring change.

Build
-----
* `./gradlew build`: 13 subprojects, 118 unit tests (was 12 subprojects / 107 tests), all green. The 11 new tests cover ICU plural rendering, named-arg substitution, locale fallback (zh_CN → root, ja → root via NO_FALLBACK), cross-classloader override (a real JAR built in /tmp at test time), and RequestLocaleProvider's three resolution paths (no request → default; Accept-Language present → request locale; request without Accept-Language → default, NOT JVM locale).
* The architectural rule still enforced: platform-plugins now imports platform-i18n, which is a platform-* dependency (allowed), not a pbc-* dependency (forbidden).

What was deferred
-----------------
* User-preferred locale from the authenticated user's profile row is NOT in the resolution chain yet — the `LocaleProvider` interface leaves room for it but the implementation only consults Accept-Language and the configured default. Adding it slots in between request and default without changing the api.v1 surface.
* The metadata translation overrides table (`metadata__translation`) is also deferred — the `Translator` JavaDoc mentions it as the first lookup source, but right now keys come from .properties files only. Once Tier 1 customisation lands (P3.x), key users will be able to override any string from the SPA without touching code.

-
The previous commit shipped with `<this commit>` as a placeholder since the SHA wasn't known yet. Pin it now.
-
The third real PBC. Validates the modular-monolith template against a parent-with-children aggregate (Partner → Addresses → Contacts), where the previous two PBCs only had single-table or two-independent-table shapes.

What landed
-----------
* New Gradle subproject `pbc/pbc-partners` (12 modules total now).
* Three JPA entities, all extending `AuditedJpaEntity`:
  - `Partner` — code, name, type (CUSTOMER/SUPPLIER/BOTH), tax_id, website, email, phone, active, ext jsonb. Single-table for both customers and suppliers because the role flag is a property of the relationship, not the organisation.
  - `Address` — partner_id FK, address_type (BILLING/SHIPPING/OTHER), line1/line2/city/region/postal_code/country_code (ISO 3166-1), is_primary. Two free address lines + structured city/region/code is the smallest set that round-trips through every postal system.
  - `Contact` — partner_id FK, full_name, role, email, phone, active. PII-tagged in metadata YAML for the future audit/export tooling.
* Spring Data JPA repos, application services with full CRUD and the invariants below, REST controllers under `/api/v1/partners/partners` (+ nested addresses, contacts).
* `partners-init.xml` Liquibase changelog with the three tables, FKs, GIN index on `partner.ext`, indexes on type/active/country.
* New api.v1 facade `org.vibeerp.api.v1.ext.partners` with `PartnersApi` + `PartnerRef`. Third `ext.<pbc>` facade after identity and catalog. Inactive partners are hidden at the facade boundary.
* `PartnersApiAdapter` runtime implementation in pbc-partners, never leaking JPA entity types.
* `partners.yml` metadata declaring all 3 entities, 12 permission keys, 1 menu entry. Picked up automatically by `MetadataLoader`.
* 15 new unit tests across `PartnerServiceTest`, `AddressServiceTest` and `ContactServiceTest` (mockk-based, mirroring catalog tests).
Invariants enforced in code (not blindly delegated to the DB)
-------------------------------------------------------------
* Partner code uniqueness — an explicit check produces a 400 with a real message instead of a 500 from the unique-index violation.
* Partner code is NOT updatable — every external reference uses code, so renaming is a data-migration concern, not an API call.
* Partner deactivate cascades to contacts (also flipped to inactive). Addresses are NOT touched (no `active` column — they exist or they don't). Verified end-to-end against Postgres.
* "Primary" flag is at most one per (partner, address_type). When a new/updated address is marked primary, all OTHER primaries of the same type for the same partner are demoted in the same transaction.
* Addresses and contacts reject operations on unknown partners up-front to give better errors than the FK violation.

End-to-end smoke test
---------------------
Reset Postgres, booted the app, hit:

* POST /api/v1/auth/login (admin) → JWT
* POST /api/v1/partners/partners (CUSTOMER, SUPPLIER) → 201
* GET /api/v1/partners/partners → lists both
* GET /api/v1/partners/partners/by-code/CUST-ACME → resolves
* POST /api/v1/partners/partners (dup code) → 400 with real message
* POST .../{id}/addresses (BILLING, primary) → 201
* POST .../{id}/contacts → 201
* DELETE /api/v1/partners/partners/{id} → 204; partner active=false
* GET .../contacts → contact ALSO active=false (cascade verified)
* GET /api/v1/_meta/metadata/entities → 3 partners entities present
* GET /api/v1/_meta/metadata/permissions → 12 partners permissions
* Regression: catalog UoMs/items, identity users, printing-shop plug-in plates all still HTTP 200.

Build
-----
* `./gradlew build`: 12 subprojects, 107 unit tests, all green (was 11 subprojects / 92 tests before this commit).
* The architectural rule still enforced: pbc-partners depends on api-v1 + platform-persistence + platform-security only — no cross-PBC dep, no platform-bootstrap dep.
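The primary-flag demotion above can be sketched as pure Kotlin — simplified stand-ins for the JPA entities, with illustrative names (`markPrimary` is not the actual service method); the real code runs inside one @Transactional service call:

```kotlin
// Sketch of "at most one primary per (partner, address_type)".
// Address is a simplified stand-in for the JPA entity.
enum class AddressType { BILLING, SHIPPING, OTHER }

data class Address(
    val id: Int,
    val type: AddressType,
    var isPrimary: Boolean,
)

// Mark one of a partner's addresses primary and demote every OTHER
// primary of the same address_type. In the real service both writes
// happen in the same transaction.
fun markPrimary(partnerAddresses: List<Address>, addressId: Int) {
    val target = partnerAddresses.first { it.id == addressId }
    partnerAddresses
        .filter { it.type == target.type && it.id != target.id }
        .forEach { it.isPrimary = false }
    target.isPrimary = true
}
```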
What was deferred
-----------------
* Permission enforcement on contact endpoints (P4.3). Currently plain authenticated; the metadata declares the planned `partners.contact.*` keys for when @RequirePermission lands.
* Per-country address structure layered on top via metadata forms (P3.x). The current schema is the smallest universal subset.
* `deletePartnerCompletely` — out of scope for v1; should be a separate "data scrub" admin tool, not a routine API call.

-
The previous docs commit fixed README + CLAUDE + the three "scope note" guides but explicitly left the deeper how-to docs as-is, claiming the architecture hadn't changed. That was wrong on closer reading: those docs still described the pre-single-tenant world (tenant_id columns, two-wall isolation, per-tenant routing) and the pre-implementation plug-in API (class-based @PluginEndpoint controllers, ResponseBuilder, context.log instead of context.logger, plugin.yml entryClass field). A reader following any of those guides would write code that doesn't compile. This commit aligns them with what the framework actually does today.

What changed:

* docs/architecture/overview.md
  - Guardrail #5 rewritten from "Multi-tenant in spirit from day one" to "Single-tenant per instance, isolated database", matching CLAUDE.md.
  - The api.v1 package layout updated to match the actual current surface: dropped `Tenant`, added `PrincipalId`, `PersistenceExceptions`, the full plug-in package (PluginContext, PluginEndpointRegistrar, PluginRequest/Response, PluginJdbc, PluginRow), and the two live cross-PBC facades (IdentityApi, CatalogApi).
  - The whole "Multi-tenancy" section replaced with "Single-tenant per instance" — drops the two-wall framing, drops per-region routing, drops tenant_id columns and RLS. The single-tenant rationale is explained in line with the rest of the framework.
  - Custom fields and metadata store sections drop "per tenant" and tenant_id from their column lists.
  - Audit row in the cross-cutting table drops tenant_id from the column list and adds the PrincipalContext bridge note that's actually live as of P4.1.
  - "Tier 1 — Key user, no-code" intro drops "scoped to their tenant".
* docs/plugin-api/overview.md
  - api.v1 package layout updated to match the real current surface (same set of additions and deletions as the architecture overview).
  - Per-package orientation table rewritten to describe what is actually in each package today, including which parts are live (event/, plugin/, http/) and which are stubbed (i18n/, workflow/, form/), with the relevant implementation-plan unit referenced.
* docs/plugin-author/getting-started.md — full rewrite. The previous version walked an author through APIs that don't exist:
  • `Plugin.start(context: PluginContext)` and `Plugin.stop(context: PluginContext)` — the actual API is `start(context)` and `stop()` (no arg), with the Pf4jPlugin + VibeErpPlugin dual-supertype trick using import aliases.
  • `context.log` — actual is `context.logger` (PluginLogger interface).
  • `plugin.yml` with an `entryClass:` field and a structured `metadata.i18n:` subkey — the actual schema has no entryClass (PF4J reads `Plugin-Class` from MANIFEST.MF) and `metadata:` is a flat list of paths.
  • Class-based `@PluginEndpoint` controllers with `ResponseBuilder` — actual is lambda-based registration via `context.endpoints.register(method, path, handler)` returning `PluginResponse`.
  • `request.translator.format(...)` — the translator currently throws UnsupportedOperationException; the example would crash.
  The new version walks through:
  1. The compileOnly api.v1 + pf4j dependency setup.
  2. The dual-supertype Pf4jPlugin + VibeErpPlugin trick with import aliases (matches the actual reference plug-in).
  3. The real JAR layout (META-INF/MANIFEST.MF for PF4J, plugin.yml for human metadata, META-INF/vibe-erp/db/changelog.xml for Liquibase, META-INF/vibe-erp/metadata/<id>.yml for the metadata loader).
  4. The Liquibase changelog at the unique META-INF path (not the host's `db/changelog/master.xml` path, which would collide on parent-first classloader lookup — the bug we hit and fixed in commit 1ead32d7).
  5. Lambda-based endpoint registration with real working examples (path templates, queryForObject, update, JSON body parsing).
  6. Metadata YAML shape matching what MetadataLoader actually parses.
  7. The boot-log sequence the operator will see, taken from a real smoke run.
  8. An honest "what this guide does not yet cover" list with implementation-plan unit references.
* docs/i18n/guide.md
  - Added a "Status" callout at the top: the api.v1 contract is locked, the ICU4J implementation is P1.6 and not yet wired, `context.translator` currently throws.
  - Bundle filename convention corrected from `<locale>.properties` to `messages_<locale>.properties` (matches what the reference plug-in actually ships).
  - plugin.yml example updated to show the flat `metadata:` array that PluginManifest actually parses (the previous structured `metadata.i18n:` subkey doesn't exist in the schema).
  - Merge precedence renamed "tenant overrides" → "operator overrides", since vibe_erp is single-tenant per instance.
  - LocaleProvider resolution chain updated to drop "tenant default" and reference the `vibeerp.i18n.default-locale` configuration.
  - "Time zones are per-tenant" → "per-user with an instance default".
  - Locale-add reference to the "i18n-overrides volume" annotated as "once P1.6 lands".
* docs/customer-onboarding/guide.md
  - Section 3 "Create a tenant" deleted; replaced with "Configure the instance" pointing at vibeerp.instance.* config.
  - First-boot bullet list rewritten to drop the bogus "create default tenant row in identity__tenant" step (no such table) and add the real metadata-loader and per-plug-in lifecycle steps that happen.
  - Mounted-volume table updated: `i18n-overrides/` is "operator translation overrides", not "tenant-level".
  - "Bind users to roles, bind roles to tenants" simplified; OIDC group claims annotated as P4.2 (not yet wired).
  - Go-live checklist drops "non-production tenant" wording and "tenant locale defaults" (now instance-level config).
  - "Hosted multi-tenant operations" deferred-list item rewritten to "hosted operations: provisioning many independent vibe_erp instances", matching the new single-tenant deployment model.
* docs/form-authoring/guide.md
  - Tier 1 section drops "scoped to the tenant", "tweaking for a specific tenant", "tenant-specific custom field".
* docs/workflow-authoring/guide.md
  - "Approval chains that vary per tenant" → "Approval chains the customer wants to author themselves".
* docs/index.md
  - Architecture overview row description updated from "multi-tenancy" to "single-tenant deployment model".
  - i18n guide row description updated from "tenant overrides" to "operator overrides".

Build: ./gradlew build still green (no source touched).

The remaining `tenant` references in docs are now ALL intentional — they appear in sentences like "vibe_erp is single-tenant per instance", "there is no create-a-tenant step", "no per-tenant bundles". The historical specs under docs/superpowers/specs/ were intentionally NOT touched: those are dated design records, and the implementation plan already had the single-tenant refactor noted at the top.

-
The README and CLAUDE.md "Repository state" sections were both stale — they still claimed "v0.1 skeleton, one PBC implemented" and "no source code yet" when in reality the framework is at v0.6+P1.5 with 92 unit tests, 11 modules, 7 cross-cutting platform services live, 2 PBCs, and a real customer-style plug-in serving HTTP from its own DB tables.

What landed:

* New PROGRESS.md at the repo root. A single-page progress tracker that enumerates every implementation-plan unit (P1.x, P2.x, P3.x, P4.x, P5.x, R, REF.x, H/A/M.x) with status badges, commit refs for the done units, and the "next priority" call. Includes a snapshot table (modules / tests / PBCs / plug-ins / cross-cutting services), a "what's live right now" table per service, a "what the reference plug-in proves end-to-end" walkthrough, the "what's not yet live" deferred list, and a "how to run what exists today" runbook.
* README.md "Building" + "Status" rewritten. Drops the obsolete "v0.1 skeleton" claim. The new status table shows current counts. Adds the dev workflow (`installToDev` → `bootRun`) and points at PROGRESS.md for the per-feature view.
* CLAUDE.md "Repository state" section rewritten from "no source code, build system, package manifest, test suite, or CI yet" to the actual current state: 11 subprojects, 92 tests, 7 services live, 2 PBCs, build commands, package root, and a pointer at PROGRESS.md.
* docs/customer-onboarding/guide.md, docs/workflow-authoring/guide.md, docs/form-authoring/guide.md: replaced "v0.1: API only" annotations with "current: API only". The version label was conflating "the v0.1 skeleton" with "the current build" — accurate in spirit (the UI layer still hasn't shipped) but misleading when readers see the framework is on commit 18, not commit 1. Pointed each guide at PROGRESS.md for live status.

Build: ./gradlew build still green (no source touched, but verified that nothing in the docs change broke anything).
The deeper how-to docs (architecture/overview.md, plugin-api/overview.md, plugin-author/getting-started.md, i18n/guide.md, docs/index.md) were left as-is. They describe HOW the framework is supposed to work architecturally, and the architecture has not changed. Only the status / version / scope statements needed updating.
-
Adds the foundation for the entire Tier 1 customization story. Core PBCs and plug-ins now ship YAML files declaring their entities, permissions, and menus; a `MetadataLoader` walks the host classpath and each plug-in JAR at boot, upserts the rows tagged with their source, and exposes them at a public REST endpoint so the future SPA, AI-agent function catalog, OpenAPI generator, and external introspection tooling can all see what the framework offers without scraping code. What landed: * New `platform/platform-metadata/` Gradle subproject. Depends on api-v1 + platform-persistence + jackson-yaml + spring-jdbc. * `MetadataYamlFile` DTOs (entities, permissions, menus). Forward- compatible: unknown top-level keys are ignored, so a future plug-in built against a newer schema (forms, workflows, rules, translations) loads cleanly on an older host that doesn't know those sections yet. * `MetadataLoader` with two entry points: loadCore() — uses Spring's PathMatchingResourcePatternResolver against the host classloader. Finds every classpath*:META-INF/ vibe-erp/metadata/*.yml across all jars contributing to the application. Tagged source='core'. loadFromPluginJar(pluginId, jarPath) — opens ONE specific plug-in JAR via java.util.jar.JarFile and walks its entries directly. This is critical: a plug-in's PluginClassLoader is parent-first, so a classpath*: scan against it would ALSO pick up the host's metadata files via parent classpath. We saw this in the first smoke run — the plug-in source ended up with 6 entities (the plug-in's 2 + the host's 4) before the fix. Walking the JAR file directly guarantees only the plug-in's own files load. Tagged source='plugin:<id>'. Both entry points use the same delete-then-insert idempotent core (doLoad). Loading the same source twice produces the same final state. User-edited metadata (source='user') is NEVER touched by either path — it survives boot, plug-in install, and plug-in upgrade. 
This is what lets a future SPA "Customize" UI add custom fields without fearing they'll be wiped on the next deploy.

* `VibeErpPluginManager.afterPropertiesSet()` now calls metadataLoader.loadCore() at the very start, then walks plug-ins and calls loadFromPluginJar(...) for each one between Liquibase migration and start(context). Order is guaranteed: core → linter → migrate → metadata → start. The CommandLineRunner I originally put `loadCore()` in turned out to be wrong because Spring runs CommandLineRunners AFTER InitializingBean.afterPropertiesSet(), so the plug-in metadata was loading BEFORE core — the wrong way around. Calling loadCore() inline in the plug-in manager fixes the ordering without any @Order(...) gymnastics.
* `MetadataController` exposes:
  - GET /api/v1/_meta/metadata — all three sections
  - GET /api/v1/_meta/metadata/entities — entities only
  - GET /api/v1/_meta/metadata/permissions
  - GET /api/v1/_meta/metadata/menus

  Public allowlist (covered by the existing /api/v1/_meta/** rule in SecurityConfiguration). The metadata is intentionally non-sensitive — entity names, permission keys, menu paths. Nothing in here is PII or secret; the SPA needs to read it before the user has logged in.
* YAML files shipped:
  - pbc-identity/META-INF/vibe-erp/metadata/identity.yml (User + Role entities, 6 permissions, Users + Roles menus)
  - pbc-catalog/META-INF/vibe-erp/metadata/catalog.yml (Item + Uom entities, 7 permissions, Items + UoMs menus)
  - reference plug-in/META-INF/vibe-erp/metadata/printing-shop.yml (Plate + InkRecipe entities, 5 permissions, Plates + Inks menus in a "Printing shop" section)

Tests: 4 MetadataLoaderTest cases (loadFromPluginJar happy paths, mixed sections, blank pluginId rejection, missing-file no-op wipe) + 7 MetadataYamlParseTest cases (DTO mapping, optional fields, section defaults, forward-compat unknown keys). Total now **92 unit tests** across 11 modules, all green.
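The JAR-walking path can be sketched with nothing but the JDK. This is a hypothetical stand-in, not the real MetadataLoader: it builds a throwaway plug-in JAR the way real Gradle JARs are built (with explicit directory entries for every parent path) and then enumerates its entries directly, so the parent-first classloader is never consulted.

```java
import java.io.File;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

// Hypothetical demo: build a plug-in JAR, then walk it the way
// loadFromPluginJar does -- via java.util.jar.JarFile, never a classloader.
public class JarWalkDemo {
    public static void main(String[] args) throws Exception {
        File jar = File.createTempFile("printing-shop", ".jar");
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(jar.toPath()))) {
            // Explicit directory entries, like real Gradle JARs ship them.
            for (String dir : new String[]{"META-INF/", "META-INF/vibe-erp/", "META-INF/vibe-erp/metadata/"}) {
                out.putNextEntry(new JarEntry(dir));
                out.closeEntry();
            }
            out.putNextEntry(new JarEntry("META-INF/vibe-erp/metadata/printing-shop.yml"));
            out.write("entities: []\n".getBytes(StandardCharsets.UTF_8));
            out.closeEntry();
        }
        // Walk entries directly: only THIS jar's files are ever seen, so the
        // host's metadata can't leak in through a parent-first classloader.
        try (JarFile jf = new JarFile(jar)) {
            Enumeration<JarEntry> entries = jf.entries();
            while (entries.hasMoreElements()) {
                JarEntry e = entries.nextElement();
                if (!e.isDirectory()
                        && e.getName().startsWith("META-INF/vibe-erp/metadata/")
                        && e.getName().endsWith(".yml")) {
                    System.out.println(e.getName());
                }
            }
        }
        jar.delete();
    }
}
```

The same directory-entry detail matters in the other direction too: Spring's `classpath*:` enumeration needs those entries to exist, which is exactly what the hand-rolled test JAR builder was missing.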
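For a sense of shape, an identity metadata file might look like the sketch below. The section names (entities, permissions, menus) come from the DTOs above, but every field name inside them is a guess at the MetadataYamlFile schema, not the real contract:

```yaml
# Illustrative sketch of META-INF/vibe-erp/metadata/identity.yml.
# Field names inside each section are hypothetical.
entities:
  - name: User
  - name: Role
permissions:
  - key: identity.user.read
  - key: identity.user.create
  # ... remaining identity.* keys
menus:
  - label: Users
    path: /identity/users
    section: System
  - label: Roles
    path: /identity/roles
    section: System
```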
End-to-end smoke test against fresh Postgres + plug-in loaded:

Boot logs:

    MetadataLoader: source='core' loaded 4 entities, 13 permissions, 4 menus from 2 file(s)
    MetadataLoader: source='plugin:printing-shop' loaded 2 entities, 5 permissions, 2 menus from 1 file(s)

HTTP smoke (everything green):
- GET /api/v1/_meta/metadata (no auth) → 200
  - 6 entities, 18 permissions, 6 menus
  - entity names: User, Role, Item, Uom, Plate, InkRecipe
  - menu sections: Catalog, Printing shop, System
- GET /api/v1/_meta/metadata/entities → 200
- GET /api/v1/_meta/metadata/menus → 200

Direct DB verification:
- metadata__entity: core=4, plugin:printing-shop=2
- metadata__permission: core=13, plugin:printing-shop=5
- metadata__menu: core=4, plugin:printing-shop=2
- Idempotency: restart the app, identical row counts.

Existing endpoints regression:
- GET /api/v1/identity/users (Bearer) → 1 user
- GET /api/v1/catalog/uoms (Bearer) → 15 UoMs
- GET /api/v1/plugins/printing-shop/ping (Bearer) → 200

Bugs caught and fixed during the smoke test:
• The first attempt loaded core metadata via a CommandLineRunner annotated @Order(HIGHEST_PRECEDENCE) and per-plug-in metadata inline in VibeErpPluginManager.afterPropertiesSet(). Spring runs all InitializingBeans BEFORE any CommandLineRunner, so the plug-in metadata loaded first and the core load came second — wrong order. Fix: drop CoreMetadataInitializer entirely; have the plug-in manager call metadataLoader.loadCore() directly at the start of afterPropertiesSet().
• The first attempt's plug-in load used metadataLoader.load(pluginClassLoader, ...), which used Spring's PathMatchingResourcePatternResolver against the plug-in's classloader. PluginClassLoader is parent-first, so the resolver enumerated BOTH the plug-in's own JAR AND the host classpath's metadata files, tagging core entities as source='plugin:<id>' and corrupting the seed counts.
  Fix: refactor MetadataLoader to expose loadFromPluginJar(pluginId, jarPath), which opens the plug-in JAR directly via java.util.jar.JarFile and walks its entries — never asking the classloader at all. The api-v1 surface didn't change.
• Two KDoc comments contained the literal string `*.yml` after a `/` character (`/metadata/*.yml`), forming the `/*` pattern that Kotlin's lexer treats as a nested-comment opener. The file failed to compile with "Unclosed comment". This is the third time I've hit this trap; I rewrote both KDocs to avoid the literal `/*` sequence.
• The MetadataLoaderTest's hand-rolled JAR builder didn't include explicit directory entries for parent paths. Real Gradle JARs do include them, and Spring's PathMatchingResourcePatternResolver needs them to enumerate via classpath*:. Fixed the test helper to write directory entries for every parent of each file.

Implementation plan refreshed: P1.5 marked DONE. Next priority candidates: P5.2 (pbc-partners — third PBC clone) and P3.4 (custom field application via the ext jsonb column, which would unlock the full Tier 1 customization story).

Framework state: 17→18 commits, 10→11 modules, 81→92 unit tests, metadata seeded for 6 entities + 18 permissions + 6 menus.
-
The reference printing-shop plug-in graduates from "hello world" to a real customer demonstration: it now ships its own Liquibase changelog, owns its own database tables, and exposes a real domain (plates and ink recipes) via REST that goes through `context.jdbc` — a new typed-SQL surface in api.v1 — without ever touching Spring's `JdbcTemplate` or any other host internal type. A bytecode linter that runs before plug-in start refuses to load any plug-in that tries to import `org.vibeerp.platform.*` or `org.vibeerp.pbc.*` classes.

What landed:

* api.v1 (additive, binary-compatible):
  - PluginJdbc — typed SQL access with named parameters. Methods: query, queryForObject, update, inTransaction. No Spring imports leaked. Forces plug-ins to use named params (no positional ?).
  - PluginRow — typed nullable accessors over a single result row: string, int, long, uuid, bool, instant, bigDecimal. Hides java.sql.ResultSet entirely.
  - PluginContext.jdbc getter with a default impl that throws UnsupportedOperationException so older builds remain binary compatible per the api.v1 stability rules.
* platform-plugins — three new sub-packages:
  - jdbc/DefaultPluginJdbc backed by Spring's NamedParameterJdbcTemplate. ResultSetPluginRow translates each accessor through ResultSet.wasNull() so SQL NULL round-trips as Kotlin null instead of the JDBC defaults (0 for int, false for bool, etc. — bug factories).
  - jdbc/PluginJdbcConfiguration provides one shared PluginJdbc bean for the whole process. Per-plugin isolation lands later.
  - migration/PluginLiquibaseRunner looks for META-INF/vibe-erp/db/changelog.xml inside the plug-in JAR via the PF4J classloader and applies it via Liquibase against the host's shared DataSource. The unique META-INF path matters: plug-ins also see the host's parent classpath, where the host's own db/changelog/master.xml lives, and a collision causes a Liquibase ChangeLogParseException at install time.
  - lint/PluginLinter walks every .class entry in the plug-in JAR via java.util.jar.JarFile + ASM ClassReader, visits every type/method/field/instruction reference, and rejects on any reference to `org/vibeerp/platform/` or `org/vibeerp/pbc/` packages.
* VibeErpPluginManager lifecycle is now load → lint → migrate → start:
  - lint runs immediately after PF4J's loadPlugins(); rejected plug-ins are unloaded with a per-violation error log and never get to run any code
  - migrate runs the plug-in's own Liquibase changelog; failure means the plug-in is loaded but skipped (loud warning, framework boots fine)
  - then PF4J's startPlugins() runs the no-arg start
  - then we walk loaded plug-ins and call vibe_erp's start(context) with a fully-wired DefaultPluginContext (logger + endpoints + eventBus + jdbc). The plug-in's tables are guaranteed to exist by the time its lambdas run.
* DefaultPluginContext.jdbc is no longer a stub. Plug-ins inject the shared PluginJdbc and use it to talk to their own tables.
* Reference plug-in (PrintingShopPlugin):
  - Ships META-INF/vibe-erp/db/changelog.xml with two changesets: plugin_printingshop__plate (id, code, name, width_mm, height_mm, status) and plugin_printingshop__ink_recipe (id, code, name, cmyk_c/m/y/k).
  - Now registers seven endpoints:
    GET /ping — health
    GET /echo/{name} — path variable demo
    GET /plates — list
    GET /plates/{id} — fetch
    POST /plates — create (with a race-conditiony existence check before INSERT, since plug-ins can't import Spring's DataAccessException)
    GET /inks
    POST /inks
  - All CRUD lambdas use context.jdbc with named parameters. The plug-in still imports nothing from org.springframework.* in its own code (it does reach the host's Jackson via reflection for JSON parsing — a deliberate v0.6 shortcut documented inline).
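The binary-compatibility trick behind the PluginContext.jdbc getter is just a default member that throws. A minimal Java sketch (interface names mirror the text, bodies hypothetical):

```java
// Hypothetical sketch of the api.v1 stability rule: new members get a
// throwing default, so hosts built before the member existed still link.
interface PluginJdbc { /* query/queryForObject/update/inTransaction elided */ }

interface PluginContext {
    // An older host never overrides this; a plug-in calling it against such
    // a host fails loudly here instead of with an AbstractMethodError at an
    // arbitrary call site.
    default PluginJdbc jdbc() {
        throw new UnsupportedOperationException("jdbc not supported by this host");
    }
}

public class BinaryCompatDemo {
    public static void main(String[] args) {
        PluginContext oldHost = new PluginContext() {}; // pre-jdbc host: no override
        try {
            oldHost.jdbc();
        } catch (UnsupportedOperationException e) {
            System.out.println(e.getMessage());
        }
    }
}
```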
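The wasNull() translation that keeps SQL NULL from degrading into 0/false can be demonstrated without a database by faking a ResultSet with a dynamic proxy. Everything here is hypothetical demo code, not the real ResultSetPluginRow:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical demo: a fake single-row ResultSet whose column is SQL NULL.
public class WasNullDemo {
    // Simplified stand-in for a PluginRow accessor: Integer-or-null,
    // instead of JDBC's silent default of 0.
    static Integer intOrNull(ResultSet rs, String column) throws SQLException {
        int v = rs.getInt(column);
        return rs.wasNull() ? null : v;
    }

    static ResultSet fakeRowWithNullColumn() {
        InvocationHandler h = new InvocationHandler() {
            boolean lastWasNull = false;
            @Override
            public Object invoke(Object proxy, Method m, Object[] args) {
                if (m.getName().equals("getInt")) { lastWasNull = true; return 0; } // SQL NULL
                if (m.getName().equals("wasNull")) return lastWasNull;
                throw new UnsupportedOperationException(m.getName());
            }
        };
        return (ResultSet) Proxy.newProxyInstance(
                ResultSet.class.getClassLoader(), new Class<?>[]{ResultSet.class}, h);
    }

    public static void main(String[] args) throws SQLException {
        ResultSet rs = fakeRowWithNullColumn();
        System.out.println(rs.getInt("qty"));     // the raw JDBC default -- the bug factory
        System.out.println(intOrNull(rs, "qty")); // what the nullable accessor surfaces
    }
}
```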
Tests: 5 new PluginLinterTest cases use ASM ClassWriter to synthesize in-memory plug-in JARs (clean class, forbidden platform ref, forbidden pbc ref, allowed api.v1 ref, multiple violations) and a mocked PluginWrapper to avoid touching the real PF4J loader. Total now **81 unit tests** across 10 modules, all green.

End-to-end smoke test against fresh Postgres with the plug-in loaded (every assertion green):

Boot logs:

    PluginLiquibaseRunner: plug-in 'printing-shop' has changelog.xml
    Liquibase: ChangeSet printingshop-init-001 ran successfully
    Liquibase: ChangeSet printingshop-init-002 ran successfully
    Liquibase migrations applied successfully
    plugin.printing-shop: registered 7 endpoints

HTTP smoke:
- \dt plugin_printingshop* → both tables exist
- GET /api/v1/plugins/printing-shop/plates → []
- POST plate A4 → 201 + UUID
- POST plate A3 → 201 + UUID
- POST duplicate A4 → 409 + clear msg
- GET plates → 2 rows
- GET /plates/{id} → A4 details
- psql verifies both rows in plugin_printingshop__plate
- POST ink CYAN → 201
- POST ink MAGENTA → 201
- GET inks → 2 inks with nested CMYK
- GET /ping → 200 (existing endpoint)
- GET /api/v1/catalog/uoms → 15 UoMs (no regression)
- GET /api/v1/identity/users → 1 user (no regression)

Bug encountered and fixed during the smoke test:
• The plug-in initially shipped its changelog at db/changelog/master.xml, which collides with the HOST's db/changelog/master.xml. The plug-in classloader does parent-first lookup (PF4J default), so Liquibase's ClassLoaderResourceAccessor found BOTH files and threw a ChangeLogParseException ("Found 2 files with the path"). Fixed by moving the plug-in changelog to META-INF/vibe-erp/db/changelog.xml, a path the host never uses, and updating PluginLiquibaseRunner. The unique META-INF prefix is now part of the documented plug-in convention.
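Under that convention, the plug-in changelog might look like the following sketch. Only the changeset ids, table names, and column names come from the notes above; the Liquibase schema version, author, column types, and constraints are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of META-INF/vibe-erp/db/changelog.xml; types/constraints illustrative. -->
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.0.xsd">

  <changeSet id="printingshop-init-001" author="printing-shop">
    <createTable tableName="plugin_printingshop__plate">
      <column name="id" type="uuid"><constraints primaryKey="true"/></column>
      <column name="code" type="varchar(64)"><constraints nullable="false" unique="true"/></column>
      <column name="name" type="varchar(255)"/>
      <column name="width_mm" type="numeric"/>
      <column name="height_mm" type="numeric"/>
      <column name="status" type="varchar(32)"/>
    </createTable>
  </changeSet>

  <changeSet id="printingshop-init-002" author="printing-shop">
    <createTable tableName="plugin_printingshop__ink_recipe">
      <column name="id" type="uuid"><constraints primaryKey="true"/></column>
      <column name="code" type="varchar(64)"><constraints nullable="false" unique="true"/></column>
      <column name="name" type="varchar(255)"/>
      <column name="cmyk_c" type="numeric"/>
      <column name="cmyk_m" type="numeric"/>
      <column name="cmyk_y" type="numeric"/>
      <column name="cmyk_k" type="numeric"/>
    </createTable>
  </changeSet>
</databaseChangeLog>
```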
What is explicitly NOT in this chunk (deferred):
• Per-plugin Spring child contexts — plug-ins still instantiate via PF4J's classloader without their own Spring beans
• Per-plugin datasource isolation — one shared host pool today
• Plug-in changelog table-prefix linter — convention only, runtime enforcement comes later
• Rollback on plug-in uninstall — uninstall is operator-confirmed and rare; running dropAll() during stop() would lose data on an accidental restart
• Subscription auto-scoping on plug-in stop — plug-ins still close their own subscriptions in stop()
• Real customer-grade JSON parsing in plug-in lambdas — the v0.6 reference plug-in uses reflection to find the host's Jackson; a real plug-in author would ship their own JSON library or use a future api.v1 typed-DTO surface

Implementation plan refreshed: P1.2, P1.3, P1.4, P1.7, P4.1, P5.1 all marked DONE in docs/superpowers/specs/2026-04-07-vibe-erp-implementation-plan.md. Next priority candidates: P1.5 (metadata seeder) and P5.2 (pbc-partners).