-
…alidates locations at create

Follow-up to the pbc-warehousing chunk. Plugs a real gap noticed in the smoke test: an unknown fromLocationCode or toLocationCode on a StockTransfer was silently accepted at create() and only surfaced as a confirm()-time rollback, which is confusing UX — the operator types TR-001 wrong, hits "create", then hits "confirm" minutes later and sees "location GHOST-SRC is not in the inventory directory".

## api.v1 growth

New cross-PBC method on `InventoryApi`:

    fun findLocationByCode(locationCode: String): LocationRef?

Parallel shape to `CatalogApi.findItemByCode` — a lookup-by-code returning a lightweight ref or null, safe for any cross-PBC consumer to inject. The returned `LocationRef` data class carries id, code, name, type (as a String, not the inventory-internal LocationType enum — rationale in the KDoc), and an active flag. Fields that are NOT part of the cross-PBC contract (audit columns, ext JSONB, the raw JPA entity) stay inside pbc-inventory.

This is an additive api.v1 change within the v1 line — no breaking rename, no signature churn on existing methods. The interface adds a new abstract method, which IS technically a source-breaking change for any in-tree implementation, but the only impl is pbc-inventory/InventoryApiAdapter, which is updated in the same commit. No external plug-in implements InventoryApi (by design; plug-ins inject it, they don't provide it).

## Adapter implementation

`InventoryApiAdapter.findLocationByCode` resolves the location via the existing `LocationJpaRepository.findByCode`, which is exactly what `recordMovement` already uses. A new private extension `Location.toRef()` builds the api.v1 DTO. Zero new SQL; zero new repository methods.

## pbc-warehousing wiring

`StockTransferService.create` now calls the facade twice — once for the source location, once for the destination — BEFORE validating lines.
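The pieces above can be sketched as follows; a minimal sketch, assuming only the fields the text lists, with the `Location` class here standing in for the real JPA entity:

```kotlin
// api.v1 side: lightweight cross-PBC ref. `type` is a String, not the
// inventory-internal LocationType enum, so consumers never import it.
data class LocationRef(
    val id: String,
    val code: String,
    val name: String,
    val type: String,
    val active: Boolean,
)

// The new abstract method on the facade (other members elided).
interface InventoryApi {
    fun findLocationByCode(locationCode: String): LocationRef?
}

// pbc-inventory side: a stand-in for the JPA entity, plus the private
// extension the adapter uses to build the api.v1 DTO.
class Location(
    val id: String,
    val code: String,
    val name: String,
    val type: String,   // the real entity holds a LocationType enum
    val active: Boolean,
)

private fun Location.toRef() = LocationRef(id, code, name, type, active)
```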
The create-time ordering is: code uniqueness → from != to → non-empty lines → both locations exist and are active → per-line validation. Unknown locations produce a 400 with a clear message; deactivated locations produce a 400 distinguishing "doesn't exist" from "exists but can't be used":

    "from location code 'GHOST-SRC' is not in the inventory directory"
    "from location 'WH-CLOSED' is deactivated and cannot be transfer source"

The confirm() path is unchanged. Locations may still vanish between create and confirm (though the likelihood is low for a normal workflow), and `recordMovement` will still raise its own error in that case — belt and suspenders.

## Smoke test

```
POST /api/v1/inventory/locations {code: WH-GOOD, type: WAREHOUSE}
POST /api/v1/catalog/items {code: ITEM-1, baseUomCode: ea}
POST /api/v1/warehousing/stock-transfers {code: TR-bad, fromLocationCode: GHOST-SRC, toLocationCode: WH-GOOD, lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
  → 400 "from location code 'GHOST-SRC' is not in the inventory directory"
    (before this commit: 201 DRAFT, then 400 at confirm)
POST /api/v1/warehousing/stock-transfers {code: TR-bad2, fromLocationCode: WH-GOOD, toLocationCode: GHOST-DST, lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
  → 400 "to location code 'GHOST-DST' is not in the inventory directory"
POST /api/v1/warehousing/stock-transfers {code: TR-ok, fromLocationCode: WH-GOOD, toLocationCode: WH-OTHER, lines: [{lineNo: 1, itemCode: ITEM-1, quantity: 1}]}
  → 201 DRAFT ← happy path still works
```

## Tests

- Updated the 3 existing `StockTransferServiceTest` tests that created real transfers to stub `inventory.findLocationByCode` for both WH-A and WH-B via a new `stubLocation()` helper.
- 3 new tests:
  * `create rejects unknown from location via InventoryApi`
  * `create rejects unknown to location via InventoryApi`
  * `create rejects a deactivated from location`
- Total framework unit tests: 300 (was 297), all green.
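The create-time ordering described above reduces to a short sequence. A sketch, under the assumption that the service's collaborators can be boiled down to two lambdas (the real class takes repositories and the InventoryApi facade; messages approximate the quoted ones):

```kotlin
// Assumed minimal shapes; real service wiring (repositories, commands) elided.
class BadRequest(message: String) : RuntimeException(message)
data class LocationRef(val code: String, val active: Boolean)
data class Line(val lineNo: Int, val itemCode: String, val quantity: Int)

class StockTransferCreateValidator(
    private val codeExists: (String) -> Boolean,
    private val findLocationByCode: (String) -> LocationRef?,
) {
    fun validate(code: String, from: String, to: String, lines: List<Line>) {
        if (codeExists(code)) throw BadRequest("stock transfer code '$code' already exists")
        if (from == to) throw BadRequest("from and to location codes must differ")
        if (lines.isEmpty()) throw BadRequest("stock transfer needs at least one line")
        requireUsable(from, "from")
        requireUsable(to, "to")
        // per-line validation (item lookup, quantity > 0, ...) runs after this point
    }

    private fun requireUsable(locationCode: String, side: String) {
        val ref = findLocationByCode(locationCode)
            ?: throw BadRequest("$side location code '$locationCode' is not in the inventory directory")
        if (!ref.active)
            throw BadRequest("$side location '$locationCode' is deactivated and cannot be used")
    }
}
```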
## Why this isn't a breaking api.v1 change

InventoryApi is an interface consumed by other PBCs and by plug-ins, implemented ONLY by pbc-inventory. Adding a new method to an interface IS a source-breaking change for any implementer — but the framework's dependency rules mean no external code implements this interface. Plug-ins and other PBCs CONSUME it via dependency injection; the only production impl is InventoryApiAdapter, updated in the same commit. Binary compatibility for consumers is preserved: existing call sites compile and run unchanged because only the interface grew, not its existing methods.

If/when a third party implements InventoryApi (e.g. a test double outside the framework, or a custom backend plug-in), this would be a semver-major-worthy addition. For the in-tree framework, it's additive-within-a-major.

-
Two small closer items that tidy up the end of the HasExt rollout:

1. inventory.yml gains a `customFields:` section with two core declarations for Location: `inventory_address_city` (string, maxLength 128) and `inventory_floor_area_sqm` (decimal 10,2). Completes the "every HasExt entity has at least one declared field" symmetry. Printing-shop plug-in already adds its own `printing_shop_press_id` etc. on top.

2. CLAUDE.md "Repository state" section updated to reflect this session's milestones:
   - pbc-production v2 (IN_PROGRESS + BOM + scrap) now called out explicitly in the PBC list.
   - MATERIAL_ISSUE added to the buy-sell-MAKE loop description — the work-order completion now consumes raw materials per BOM line AND credits finished goods atomically.
   - New bullet: "Tier 1 customization is universal across every core entity with an ext column" — HasExt on Partner, Location, SalesOrder, PurchaseOrder, WorkOrder, Item; every service uses applyTo/parseExt helpers, zero duplication.
   - New bullet: "Clean Core extensibility is executable" — the reference printing-shop plug-in's metadata YAML ships customFields on Partner/Item/SalesOrder/WorkOrder and the MetadataLoader merges them with core declarations at load time. Executable grade-A extension under the A/B/C/D safety scale.
   - Printing-shop plug-in description updated to note that its metadata YAML now carries custom fields on core entities, not just its own entities.

Smoke verified end-to-end against real Postgres with the plug-in staged:

- GET /_meta/metadata/custom-fields/Location returns 2 core fields.
- POST /inventory/locations with `{inventory_address_city: "Shenzhen", inventory_floor_area_sqm: "1250.50"}` → 201, canonical form persisted, ext round-trips.
- POST with `inventory_floor_area_sqm: "123456789012345.678"` → 400 "ext.inventory_floor_area_sqm: decimal scale 3 exceeds declared scale 2" — the validator's precision/scale rules fire exactly as designed.

No code changes. 246 unit tests, all green.
18 Gradle subprojects.

-
Removes the ext-handling copy/paste that had grown across four PBCs (partners, inventory, orders-sales, orders-purchase). Every service that wrote the JSONB `ext` column was manually doing the same four-step sequence: validate, null-check, serialize with a local ObjectMapper, assign to the entity. And every response mapper was doing the inverse: check-if-blank, parse, cast, swallow errors.

Net: ~15 lines saved per PBC, one place to change the ext contract later (e.g. PII redaction, audit tagging, field-level events), and a stable plug-in opt-in mechanism — any plug-in entity that implements `HasExt` automatically participates.

New api.v1 surface:

    interface HasExt {
        val extEntityName: String  // key into metadata__custom_field
        var ext: String            // the serialized JSONB column
    }

Lives in `org.vibeerp.api.v1.entity` so plug-ins can opt their own entities into the same validation path. Zero Spring/Jackson dependencies — api.v1 stays clean.

Extended `ExtJsonValidator` (platform-metadata) with two helpers:

    fun applyTo(entity: HasExt, ext: Map<String, Any?>?)

Null-safe; validates; writes canonical JSON to entity.ext. Replaces the validate + writeValueAsString + assign triplet in every service's create() and update().

    fun parseExt(entity: HasExt): Map<String, Any?>

Returns empty map on blank/corrupt column; response mappers never 500 on bad data. Replaces the four identical parseExt local functions.

ExtJsonValidator now takes an ObjectMapper via constructor injection (Spring Boot's auto-configured bean).
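A minimal sketch of the two helpers. The real ExtJsonValidator injects Spring's ObjectMapper and the full custom-field validation; here both are abstracted behind lambdas so the control flow stands alone:

```kotlin
// api.v1 contract (Spring/Jackson-free, as described above).
interface HasExt {
    val extEntityName: String  // key into metadata__custom_field
    var ext: String            // the serialized JSONB column
}

// platform-metadata helper, reduced to its decision logic. validate() returns
// the canonical map; toJson/fromJson stand in for the injected ObjectMapper.
class ExtJsonValidator(
    private val validate: (entityName: String, ext: Map<String, Any?>) -> Map<String, Any?>,
    private val toJson: (Map<String, Any?>) -> String,
    private val fromJson: (String) -> Map<String, Any?>,
) {
    /** Null-safe: no-op when ext is null, else validate and write canonical JSON. */
    fun applyTo(entity: HasExt, ext: Map<String, Any?>?) {
        if (ext == null) return
        entity.ext = toJson(validate(entity.extEntityName, ext))
    }

    /** Empty map on a blank or corrupt column, so response mappers never 500. */
    fun parseExt(entity: HasExt): Map<String, Any?> =
        if (entity.ext.isBlank()) emptyMap()
        else runCatching { fromJson(entity.ext) }.getOrDefault(emptyMap())
}
```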
Entities that now implement HasExt (override val extEntityName; override var ext; companion object const val ENTITY_NAME):

- Partner (`partners.Partner` → "Partner")
- Location (`inventory.Location` → "Location")
- SalesOrder (`orders_sales.SalesOrder` → "SalesOrder")
- PurchaseOrder (`orders_purchase.PurchaseOrder` → "PurchaseOrder")

Deliberately NOT converted this chunk:

- WorkOrder (pbc-production) — its ext column has no declared fields yet; a follow-up that adds declarations AND the HasExt implementation is cleaner than splitting the two.
- JournalEntry (pbc-finance) — derived state, no ext column.

Services lose:

- The `jsonMapper: ObjectMapper = ObjectMapper().registerKotlinModule()` field (four copies eliminated)
- The `parseExt(entity): Map` helper function (four copies)
- The `companion object { const val ENTITY_NAME = ... }` constant (moved onto the entity where it belongs)
- The `val canonicalExt = extValidator.validate(...)` + `.also { it.ext = jsonMapper.writeValueAsString(canonicalExt) }` create pattern (replaced with one applyTo call)
- The `if (command.ext != null) { ... }` update pattern (applyTo is null-safe)

Unit tests: 6 new cases on ExtJsonValidatorTest cover applyTo and parseExt (null-safe path, happy path, failure path, blank column, round-trip, malformed JSON). Existing service tests just swap the mock setup from stubbing `validate` to stubbing `applyTo` and `parseExt` with no-ops.

Smoke verified end-to-end against real Postgres:

- POST /partners with valid ext (partners_credit_limit, partners_industry) → 201, canonical form persisted.
- GET /partners/by-code/X → 200, ext round-trips.
- POST with invalid enum value → 400 "value 'x' is not in allowed set [printing, publishing, packaging, other]".
- POST with undeclared key → 400 "ext contains undeclared key(s) for 'Partner': [rogue_field]".
- PATCH with new ext → 200, ext updated.
- PATCH WITHOUT ext field → 200, prior ext preserved (null-safe applyTo).
- POST /orders/sales-orders with no ext → 201, the create path via the shared helper still works.

246 unit tests (+6 over 240), 18 Gradle subprojects.

-
Completes the @RequirePermission rollout that started in commit b174cf60. Every non-state-transition endpoint in pbc-inventory (Location CRUD), pbc-orders-sales, and pbc-orders-purchase is now guarded by the pre-declared permission keys from their respective metadata YAMLs. State-transition verbs (confirm/cancel/ship/receive) were annotated in the original P4.3 demo chunk; this one fills in the list/get/create/update gap.

Inventory
- LocationController: list/get/getByCode → inventory.location.read; create → inventory.location.create; update → inventory.location.update; deactivate → inventory.location.deactivate.
- (StockBalanceController.adjust + StockMovementController.record were already annotated with inventory.stock.adjust.)

Orders-sales
- SalesOrderController: list/get/getByCode → orders.sales.read; create → orders.sales.create; update → orders.sales.update. (confirm/cancel/ship were already annotated.)

Orders-purchase
- PurchaseOrderController: list/get/getByCode → orders.purchase.read; create → orders.purchase.create; update → orders.purchase.update. (confirm/cancel/receive were already annotated.)

No new permission keys. Every key this chunk consumes was already declared in the relevant metadata YAML since the respective PBC was first built — catalog + partners already shipped in this state, and the inventory/orders YAMLs declared their read/create/update keys from day one but the controllers hadn't started using them.

Admin happy path still works (bootstrap admin has the wildcard `admin` role, same as after commit b174cf60). 230 unit tests still green — annotations are purely additive, no existing test hits the @RequirePermission path since service-level tests bypass the controller entirely.

Combined with b174cf60, the framework now has full @RequirePermission coverage on every PBC controller except pbc-identity's user admin (which is a separate permission surface — user/role administration has its own security story).
A minimum-privilege role like "sales-clerk" can now be granted exactly `orders.sales.read` + `orders.sales.create` + `partners.partner.read` and NOT accidentally see catalog admin, inventory movements, finance journals, or contact PII.
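The annotation usage reduces to the shape below. Only the annotation name and the permission keys come from the text; the controller bodies and Spring MVC annotations are omitted, and the annotation is redeclared here to keep the sketch self-contained:

```kotlin
// The real annotation lives in platform-security.authz and is enforced by a
// Spring AOP aspect; redeclared here so the sketch compiles on its own.
@Target(AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
annotation class RequirePermission(val value: String)

// Illustrative controller shape — @RestController/@GetMapping etc. omitted.
class LocationController {
    @RequirePermission("inventory.location.read")
    fun list(): List<String> = emptyList()

    @RequirePermission("inventory.location.create")
    fun create(code: String): String = code

    @RequirePermission("inventory.location.deactivate")
    fun deactivate(id: String) { /* flips active = false */ }
}
```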
-
Extends pbc-inventory's MovementReason enum with the two reasons a production-style PBC needs to record stock movements through the existing InventoryApi.recordMovement facade. No new endpoint, no new database column — just two new enum values, two new sign-validation rules, and four new tests.

Why this lands BEFORE pbc-production

- It's the smallest self-contained change that unblocks any future production-related code (the framework's planned pbc-production, a customer plug-in's manufacturing module, or even an ad-hoc operator script). Each of those callers can now record "consume raw material" / "produce finished good" through the same primitive that already serves sales shipments and purchase receipts.
- It validates the "one ledger, many callers" property the architecture spec promised. Adding a new movement reason takes zero schema changes (the column is varchar) and zero plug-in changes (the api.v1 facade takes the reason as a string and delegates to MovementReason.valueOf inside the adapter). The enum lives entirely inside pbc-inventory.

What changed

- StockMovement.kt: enum gains MATERIAL_ISSUE (Δ ≤ 0) and PRODUCTION_RECEIPT (Δ ≥ 0), with KDoc explaining why each one was added and how they fit the "one primitive for every direction" story.
- StockMovementService.validateSign: PRODUCTION_RECEIPT joins the must-be-non-negative bucket alongside RECEIPT, PURCHASE_RECEIPT, and TRANSFER_IN; MATERIAL_ISSUE joins the must-be-non-positive bucket alongside ISSUE, SALES_SHIPMENT, and TRANSFER_OUT.
- 4 new unit tests:
  • record rejects positive delta on MATERIAL_ISSUE
  • record rejects negative delta on PRODUCTION_RECEIPT
  • record accepts a positive PRODUCTION_RECEIPT (happy path, new balance row at the receiving location)
  • record accepts a negative MATERIAL_ISSUE (decrements an existing balance from 1000 → 800)
- Total tests: 213 → 217.
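The two sign buckets can be sketched as a pure function. The enum values and bucket membership come from the text; `error(...)` stands in for the service's real exception type, and the exception-to-400 mapping is elided:

```kotlin
import java.math.BigDecimal

// Sign conventions per reason; the two new values slot into the existing buckets.
enum class MovementReason {
    RECEIPT, ISSUE, ADJUSTMENT, SALES_SHIPMENT, PURCHASE_RECEIPT,
    TRANSFER_OUT, TRANSFER_IN, MATERIAL_ISSUE, PRODUCTION_RECEIPT,
}

private val NON_NEGATIVE = setOf(
    MovementReason.RECEIPT, MovementReason.PURCHASE_RECEIPT,
    MovementReason.TRANSFER_IN, MovementReason.PRODUCTION_RECEIPT)

private val NON_POSITIVE = setOf(
    MovementReason.ISSUE, MovementReason.SALES_SHIPMENT,
    MovementReason.TRANSFER_OUT, MovementReason.MATERIAL_ISSUE)

fun validateSign(reason: MovementReason, delta: BigDecimal) {
    require(delta.signum() != 0) { "movement delta must be non-zero" }
    if (reason in NON_NEGATIVE && delta.signum() < 0)
        error("movement reason $reason requires a non-negative delta (got $delta)")
    if (reason in NON_POSITIVE && delta.signum() > 0)
        error("movement reason $reason requires a non-positive delta (got $delta)")
}
```

ADJUSTMENT sits in neither bucket, so both signs pass, matching the existing behavior.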
Smoke test against real Postgres

- Booted on a fresh DB; no schema migration needed because the `reason` column is varchar(32), already wide enough.
- Seeded an item RAW-PAPER, an item FG-WIDGET, and a location WH-PROD via the existing endpoints.
- POST /api/v1/inventory/movements with reason=RECEIPT for 1000 raw paper → balance row at 1000.
- POST /api/v1/inventory/movements with reason=MATERIAL_ISSUE delta=-200 reference="WO:WO-EVT-1" → balance becomes 800, ledger row written.
- POST /api/v1/inventory/movements with reason=PRODUCTION_RECEIPT delta=50 reference="WO:WO-EVT-1" → balance row at 50 for FG-WIDGET, ledger row written.
- Negative test: POST PRODUCTION_RECEIPT with delta=-1 → 400 Bad Request "movement reason PRODUCTION_RECEIPT requires a non-negative delta (got -1)" — the new sign rule fires.
- Final ledger has 3 rows (RECEIPT, MATERIAL_ISSUE, PRODUCTION_RECEIPT); final balance has FG-WIDGET=50 and RAW-PAPER=800 — the math is correct.

What's deliberately NOT in this chunk

- No pbc-production yet. That's the next chunk; this is just the foundation that lets it (or any other production-ish caller) write to the ledger correctly without needing changes to api.v1 or pbc-inventory ever again.
- No new return-path reasons (RETURN_FROM_CUSTOMER, RETURN_TO_SUPPLIER) — those land when the returns flow does.
- No reference convention for "WO:" — that's documented in the KDoc on `reference`, not enforced anywhere. The v0.16/v0.17 convention "<source>:<code>" continues unchanged.

-
The killer demo finally works: place a sales order, ship it, watch inventory drop. This chunk lands the two pieces that close the loop: the inventory movement ledger (the audit-grade history of every stock change) and the sales-order /ship endpoint that calls InventoryApi.recordMovement to atomically debit stock for every line.

This is the framework's FIRST cross-PBC WRITE flow. Every earlier cross-PBC call was a read (CatalogApi.findItemByCode, PartnersApi.findPartnerByCode, InventoryApi.findStockBalance). Shipping inverts that: pbc-orders-sales synchronously writes to inventory's tables (via the api.v1 facade) as a side effect of changing its own state, all in ONE Spring transaction.

What landed
-----------

* New `inventory__stock_movement` table — append-only ledger (id, item_code, location_id FK, signed delta, reason enum, reference, occurred_at, audit cols). CHECK constraint `delta <> 0` rejects no-op rows. Indexes on item_code, location_id, the (item, location) composite, reference, and occurred_at. Migration is in its own changelog file (002-inventory-movement-ledger.xml) per the project convention that each new schema cut is a new file.
* New `StockMovement` JPA entity + repository + `MovementReason` enum (RECEIPT, ISSUE, ADJUSTMENT, SALES_SHIPMENT, PURCHASE_RECEIPT, TRANSFER_OUT, TRANSFER_IN). Each value carries a documented sign convention; the service rejects mismatches (a SALES_SHIPMENT with positive delta is a caller bug, not silently coerced).
* New `StockMovementService.record(...)` — the ONE entry point for changing inventory. Cross-PBC item validation via CatalogApi, local location validation, sign-vs-reason enforcement, and negative-balance rejection all happen BEFORE the write. The ledger row insert AND the balance row update happen in the SAME database transaction so the two cannot drift.
* `StockBalanceService.adjust` refactored to delegate: it computes delta = newQty - oldQty and calls record(... ADJUSTMENT).
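That delegation, plus the no-op short-circuit, reduces to something like the following sketch; an in-memory map stands in for the JPA repository and the transactional wiring is elided:

```kotlin
import java.math.BigDecimal

// Illustrative delegation: absolute "set quantity" semantics on the outside,
// signed ledger delta on the inside.
class StockBalanceService(
    private val record: (itemCode: String, locationId: String,
                         delta: BigDecimal, reason: String) -> Unit,
) {
    private val balances = mutableMapOf<Pair<String, String>, BigDecimal>()

    fun adjust(itemCode: String, locationId: String, newQty: BigDecimal) {
        val oldQty = balances.getOrDefault(itemCode to locationId, BigDecimal.ZERO)
        val delta = newQty - oldQty
        if (delta.signum() == 0) return  // no-op: no ledger noise
        record(itemCode, locationId, delta, "ADJUSTMENT")
        balances[itemCode to locationId] = newQty
    }
}
```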
The REST endpoint keeps its absolute-quantity semantics — operators type "the shelf has 47", not "decrease by 3" — but every adjustment now writes a ledger row too. A no-op adjustment (re-saving the same value) does NOT write a row, so the audit log doesn't fill with noise from operator clicks that didn't change anything.

* New `StockMovementController` at `/api/v1/inventory/movements`: GET filters by itemCode, locationId, or reference (for "all movements caused by SO-2026-0001"); POST records a manual movement. Both protected by `inventory.stock.adjust`.
* `InventoryApi` facade extended with `recordMovement(itemCode, locationCode, delta, reason: String, reference)`. The reason is a String in the api.v1 surface (not the local enum) so plug-ins don't import inventory's internal types — the closed set is documented on the interface. The adapter parses the string with a meaningful error on unknown values.
* New `SHIPPED` status on `SalesOrderStatus`. Transitions: DRAFT → CONFIRMED → SHIPPED (terminal). Cancelling a SHIPPED order is rejected with "issue a return / refund flow instead".
* New `SalesOrderService.ship(id, shippingLocationCode)`: walks every line, calls `inventoryApi.recordMovement(... -line.quantity, reason="SALES_SHIPMENT", reference="SO:{order_code}")`, flips status to SHIPPED. The whole operation runs in ONE transaction so a failure on any line — bad item, bad location, would push balance negative — rolls back the order status change AND every other line's already-written movement. The customer never ends up with "5 of 7 lines shipped, status still CONFIRMED, ledger half-written".
* New `POST /api/v1/orders/sales-orders/{id}/ship` endpoint with body `{"shippingLocationCode": "WH-MAIN"}`, gated by the new `orders.sales.ship` permission key.
* `ShipSalesOrderRequest` is a single-arg Kotlin data class — same Jackson deserialization trap as `RefreshRequest`. Fixed with `@JsonCreator(mode = PROPERTIES) + @param:JsonProperty`.
The trap is documented in the class KDoc.

End-to-end smoke test (the killer demo)
---------------------------------------

Reset Postgres, booted the app, ran:

* Login as admin
* POST /catalog/items → PAPER-A4
* POST /partners → CUST-ACME
* POST /inventory/locations → WH-MAIN
* POST /inventory/balances/adjust → quantity=1000 (now writes a ledger row via the new path)
* GET /inventory/movements?itemCode=PAPER-A4 → ADJUSTMENT delta=1000 ref=null
* POST /orders/sales-orders → SO-2026-0001 (50 units of PAPER-A4)
* POST /sales-orders/{id}/confirm → status CONFIRMED
* POST /sales-orders/{id}/ship body={"shippingLocationCode":"WH-MAIN"} → status SHIPPED
* GET /inventory/balances?itemCode=PAPER-A4 → quantity=950 (1000 - 50)
* GET /inventory/movements?itemCode=PAPER-A4 →
  ADJUSTMENT delta=1000 ref=null
  SALES_SHIPMENT delta=-50 ref=SO:SO-2026-0001

Failure paths verified:

* Re-ship a SHIPPED order → 400 "only CONFIRMED orders can be shipped"
* Cancel a SHIPPED order → 400 "issue a return / refund flow instead"
* Place a 10000-unit order, confirm, try to ship from a 950-stock warehouse → 400 "stock movement would push balance for 'PAPER-A4' at location ... below zero (current=950.0000, delta=-10000.0000)"; balance unchanged after the rollback (transaction integrity verified)

Regression: catalog uoms, identity users, inventory locations, printing-shop plates with i18n, metadata entities — all still HTTP 2xx.

Build
-----

* `./gradlew build`: 15 subprojects, 175 unit tests (was 163), all green. The 12 new tests cover:
  - StockMovementServiceTest (8): zero-delta rejection, positive SALES_SHIPMENT rejection, negative RECEIPT rejection, both signs allowed on ADJUSTMENT, unknown item via CatalogApi seam, unknown location, would-push-balance-negative rejection, new-row + existing-row balance update.
  - StockBalanceServiceTest, rewritten (5): negative-quantity early reject, delegation with computed positive delta, delegation with computed negative delta, no-op adjustment short-circuit (NO ledger row written), no-op on a missing row creates an empty row at zero.
  - SalesOrderServiceTest, additions (3): ship rejects non-CONFIRMED, ship walks lines and calls recordMovement with negated quantity + correct reference, cancel rejects SHIPPED.

What was deferred
-----------------

* **Event publication.** A `StockMovementRecorded` event would let pbc-finance and pbc-production react to ledger writes without polling. The event bus has been wired since P1.7 but no real cross-PBC flow uses it yet — that's the natural next chunk after this commit.
* **Multi-leg transfers.** TRANSFER_OUT and TRANSFER_IN are in the enum but no service operation atomically writes both legs yet (both legs in one transaction is required to keep the total on-hand invariant).
* **Reservation / pick lists.** "Reserve 50 of PAPER-A4 for an unconfirmed order" is its own concept that lands later.
* **Shipped-order returns / refunds.** The cancel-SHIPPED rule points the user at "use a return flow" — that flow doesn't exist yet. v1 says shipments are terminal.

-
The framework's authorization layer is now live. Until now, every authenticated user could do everything; the framework had only an authentication gate. This chunk adds method-level @RequirePermission annotations enforced by a Spring AOP aspect that consults the JWT's roles claim and a metadata-driven role-permission map.

What landed
-----------

* New `Role` and `UserRole` JPA entities mapping the existing identity__role + identity__user_role tables (the schema was created in the original identity init but never wired to JPA). RoleJpaRepository + UserRoleJpaRepository with a JPQL query that returns a user's role codes in one round-trip.
* `JwtIssuer.issueAccessToken(userId, username, roles)` now accepts a Set<String> of role codes and encodes them as a `roles` JWT claim (sorted for deterministic tests). Refresh tokens NEVER carry roles by design — see the rationale on `JwtIssuer.issueRefreshToken`. A role revocation propagates within one access-token lifetime (15 min default).
* `JwtVerifier` reads the `roles` claim into `DecodedToken.roles`. Missing claim → empty set, NOT an error (refresh tokens, system tokens, and pre-P4.3 tokens all legitimately omit it).
* `AuthService.login` now calls `userRoles.findRoleCodesByUserId(...)` before minting the access token. `AuthService.refresh` re-reads the user's roles too — so a refresh always picks up the latest set, since refresh tokens deliberately don't carry roles.
* New `AuthorizationContext` ThreadLocal in `platform-security.authz` carrying an `AuthorizedPrincipal(id, username, roles)`. Separate from `PrincipalContext` (which lives in platform-persistence and carries only the principal id, for the audit listener). The two contexts coexist because the audit listener has no business knowing what roles a user has.
* `PrincipalContextFilter` now populates BOTH contexts on every authenticated request, reading the JWT's `username` and `roles` claims via `Jwt.getClaimAsStringList("roles")`.
The filter is the one and only place that knows about Spring Security types AND about both vibe_erp contexts; everything downstream uses just the Spring-free abstractions.

* `PermissionEvaluator` Spring bean: takes a role set + permission key, returns boolean. Resolution chain:
  1. The literal `admin` role short-circuits to `true` for every key (the wildcard exists so the bootstrap admin can do everything from the very first boot without seeding a complete role-permission mapping).
  2. Otherwise consults an in-memory `Map<role, Set<permission>>` loaded from `metadata__role_permission` rows. The cache is rebuilt by `refresh()`, called from `VibeErpPluginManager` after the initial core load AND after every plug-in load.
  3. Empty role set is always denied. No implicit grants.
* `@RequirePermission("...")` annotation in `platform-security.authz`. `RequirePermissionAspect` is a Spring AOP @Aspect with @Around advice that intercepts every annotated method, reads the current request's `AuthorizationContext`, calls `PermissionEvaluator.has(...)`, and either proceeds or throws `PermissionDeniedException`.
* New `PermissionDeniedException` carrying the offending key. `GlobalExceptionHandler` maps it to HTTP 403 Forbidden with `"permission denied: 'partners.partner.deactivate'"` as the detail. The key IS surfaced to the caller (unlike the 401's generic "invalid credentials") because the SPA needs it to render a useful "your role doesn't include X" message, and callers are already authenticated, so it's not an enumeration vector.
* `BootstrapAdminInitializer` now creates the wildcard `admin` role on first boot and grants it to the bootstrap admin user.
* `@RequirePermission` applied to four sensitive endpoints as the demo: `PartnerController.deactivate`, `StockBalanceController.adjust`, `SalesOrderController.confirm`, `SalesOrderController.cancel`. More endpoints will gain annotations as additional roles are introduced; v1 keeps the blast radius narrow.
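The resolution chain reduces to a few lines. This sketch drops the Spring bean wiring and the metadata load, keeping only the decision logic described above:

```kotlin
// Resolution chain: admin wildcard → role-permission map → deny.
class PermissionEvaluator(private var grants: Map<String, Set<String>> = emptyMap()) {

    /** Rebuilt from metadata__role_permission rows after core + each plug-in load. */
    fun refresh(newGrants: Map<String, Set<String>>) {
        grants = newGrants
    }

    fun has(roles: Set<String>, permissionKey: String): Boolean {
        if (roles.isEmpty()) return false   // empty role set: always denied
        if ("admin" in roles) return true   // bootstrap wildcard short-circuit
        return roles.any { permissionKey in grants[it].orEmpty() }
    }
}
```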
End-to-end smoke test
---------------------

Reset Postgres, booted the app, verified:

* Admin login → JWT length 265 (was 241), decoded claims include `"roles":["admin"]`
* Admin POST /sales-orders/{id}/confirm → 200, status DRAFT → CONFIRMED (admin wildcard short-circuits the permission check)
* Inserted a 'powerless' user via raw SQL with no role assignments but copied the admin's password hash so login works
* Powerless login → JWT length 247, decoded claims have NO roles field at all
* Powerless POST /sales-orders/{id}/cancel → **403 Forbidden** with `"permission denied: 'orders.sales.cancel'"` in the body
* Powerless DELETE /partners/{id} → **403 Forbidden** with `"permission denied: 'partners.partner.deactivate'"`
* Powerless GET /sales-orders, /partners, /catalog/items → all 200 (read endpoints have no @RequirePermission)
* Admin regression: catalog uoms, identity users, inventory locations, printing-shop plates with i18n, metadata custom-fields endpoint — all still HTTP 2xx

Build
-----

* `./gradlew build`: 15 subprojects, 163 unit tests (was 153), all green. The 10 new tests cover:
  - PermissionEvaluator (8): empty roles deny, admin wildcard, explicit role-permission grant, multi-role union, unknown role denial, malformed payload tolerance, currentHas with no AuthorizationContext, currentHas with bound context.
  - JwtRoundTrip (2): roles claim round-trips through the access token; refresh token never carries roles even when asked.

What was deferred
-----------------

* **OIDC integration (P4.2).** Built-in JWT only. The Keycloak-compatible OIDC client will reuse the same authorization layer unchanged — the roles will come from OIDC ID tokens instead of the local user store.
* **Permission key validation at boot.** The framework does NOT yet check that every `@RequirePermission` value matches a declared metadata permission key. The plug-in linter is the natural place for that check to land later.
* **Role hierarchy.**
  Roles are flat in v1; a role with permission X cannot inherit from another role. Adding a `parent_role` field on the role row is a non-breaking change later.
* **Resource-aware permissions** ("the user owns THIS partner"). v1 only checks the operation, not the operand. Resource-aware checks are post-v1.
* **Composite (AND/OR) permission requirements.** A single key per call site keeps the contract simple. Composite requirements live in service code that calls `PermissionEvaluator.currentHas` directly.
* **Role management UI / REST.** The framework can EVALUATE permissions but has no first-class endpoints for "create a role", "grant a permission to a role", "assign a role to a user". v1 expects these to be done via direct DB writes or via the future SPA's role editor (P3.x); the wiring above is intentionally policy-only, not management.

-
The fourth real PBC, and the first one that CONSUMES another PBC's api.v1.ext facade. Until now every PBC was a *provider* of an ext.<pbc> interface (identity, catalog, partners). pbc-inventory is the first *consumer*: it injects org.vibeerp.api.v1.ext.catalog.CatalogApi to validate item codes before adjusting stock. This proves the cross-PBC contract works in both directions, exactly as guardrail #9 requires.

What landed
-----------

* New Gradle subproject `pbc/pbc-inventory` (14 modules total now).
* Two JPA entities, both extending `AuditedJpaEntity`:
  - `Location` — code, name, type (WAREHOUSE/BIN/VIRTUAL), active, ext jsonb. Single table for all location levels with a type discriminator (no recursive self-reference in v1; YAGNI for the "one warehouse, handful of bins" shape every printing shop has).
  - `StockBalance` — item_code (varchar, NOT a UUID FK), location_id FK, quantity numeric(18,4). The item_code is deliberately a string FK that references nothing because pbc-inventory has no compile-time link to pbc-catalog — the cross-PBC link goes through CatalogApi at runtime. UNIQUE INDEX on (item_code, location_id) is the primary integrity guarantee; UUID id is the addressable PK. CHECK (quantity >= 0).
* `LocationService` and `StockBalanceService` with full CRUD + adjust semantics. ext jsonb on Location goes through ExtJsonValidator (P3.4 — Tier 1 customisation).
* `StockBalanceService.adjust(itemCode, locationId, quantity)`:
  1. Reject negative quantity.
  2. **Inject CatalogApi**, call `findItemByCode(itemCode)`, reject if null with a meaningful 400. THIS is the cross-PBC seam test.
  3. Verify the location exists.
  4. SELECT-then-save upsert on (item_code, location_id) — single row per cell, mutated in place when the row exists, created when it doesn't. Single-instance deployment makes the read-modify-write race window academic.
* REST: `/api/v1/inventory/locations` (CRUD), `/api/v1/inventory/balances` (GET with itemCode or locationId filters, POST /adjust).
* New api.v1 facade `org.vibeerp.api.v1.ext.inventory` with `InventoryApi.findStockBalance(itemCode, locationCode)` + `totalOnHand(itemCode)` + `StockBalanceRef`. Fourth ext.* package after identity, catalog, partners. Sets up the next consumers (sales orders, purchase orders, the printing-shop plug-in's "do we have enough paper for this job?").
* `InventoryApiAdapter` runtime implementation in pbc-inventory.
* `inventory.yml` metadata declaring 2 entities, 6 permission keys, 2 menu entries.

Build enforcement (the load-bearing bit)
----------------------------------------

The root build.gradle.kts STILL refuses any direct dependency from pbc-inventory to pbc-catalog. Try adding `implementation(project(":pbc:pbc-catalog"))` to pbc-inventory's build.gradle.kts and the build fails at configuration time with "Architectural violation in :pbc:pbc-inventory: depends on :pbc:pbc-catalog". The CatalogApi interface is in api-v1; the CatalogApiAdapter implementation is in pbc-catalog; Spring DI wires them at runtime via the bootstrap @ComponentScan. pbc-inventory only ever sees the interface.
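One way such a configuration-time guard can be expressed in the root build.gradle.kts. The commit doesn't show the real rule, so the fragment below is an assumed shape (a sketch in Gradle's Kotlin DSL), not the project's actual build logic:

```kotlin
// Hypothetical root build.gradle.kts fragment: fail the build at configuration
// time if any :pbc:* subproject declares a direct dependency on a sibling PBC.
subprojects {
    val projectPath = path
    if (projectPath.startsWith(":pbc:")) {
        configurations.configureEach {
            dependencies.withType<ProjectDependency>().configureEach {
                val target = dependencyProject.path
                // any pbc → sibling-pbc edge is an architectural violation
                if (target.startsWith(":pbc:") && target != projectPath) {
                    throw GradleException(
                        "Architectural violation in $projectPath: depends on $target")
                }
            }
        }
    }
}
```

The point of the configuration-time check is that the violation surfaces before a single class compiles, so the interface-in-api-v1 / implementation-in-PBC split can't erode silently.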
End-to-end smoke test
---------------------

Reset Postgres, booted the app, hit:

* POST /api/v1/inventory/locations → 201, "WH-MAIN" warehouse
* POST /api/v1/catalog/items → 201, "PAPER-A4" sheet item
* POST /api/v1/inventory/balances/adjust with itemCode=PAPER-A4 → 200, the cross-PBC catalog lookup succeeded
* POST .../adjust with itemCode=FAKE-ITEM → 400 with the meaningful message "item code 'FAKE-ITEM' is not in the catalog (or is inactive)" — the cross-PBC seam REJECTS unknown items as designed
* POST .../adjust with quantity=-5 → 400 "stock quantity must be non-negative", caught BEFORE the CatalogApi lookup is even attempted
* POST .../adjust again with quantity=7500 → 200; SELECT shows ONE row with id unchanged and quantity = 7500 (upsert mutates, not duplicates)
* GET /api/v1/inventory/balances?itemCode=PAPER-A4 → the row, with scale-4 numeric serialised verbatim
* GET /api/v1/_meta/metadata/entities → 11 entities now (was 9 before Location + StockBalance landed)
* Regression: catalog uoms, identity users, partners, printing-shop plates with i18n (Accept-Language: zh-CN), Location custom-fields endpoint all still HTTP 2xx.

Build
-----

* `./gradlew build`: 14 subprojects, 139 unit tests (was 129), all green. The 10 new tests cover Location CRUD + the StockBalance adjust path with mocked CatalogApi: unknown item rejection, unknown location rejection, negative-quantity early reject (verifies CatalogApi is NOT consulted), happy-path create, and upsert (existing row mutated; save() not called because @Transactional flushes the JPA-managed entity on commit).

What was deferred
-----------------

* `inventory__stock_movement` append-only ledger. The current operation is "set the quantity"; receipts/issues/transfers as discrete events with an audit trail land in a focused follow-up. The balance row will then be regenerated from the ledger via a Liquibase backfill.
* Negative-balance / over-issue prevention.
  The CHECK constraint blocks a SET to a negative value, but there's no concept of "you cannot ISSUE more than is on hand" yet, because there is no separate ISSUE operation — only absolute SET.
* Lots, batches, serial numbers, expiry dates. Plenty of printing shops need none of these; the ones that do can either wait for the lot/serial chunk later or add the columns via Tier 1 custom fields on Location for now.
* Cross-warehouse transfer atomicity (debit one, credit another in one transaction). Same — needs the ledger.