-
Completes the plug-in side of the embedded Flowable story. The P2.1 core made plug-ins able to register TaskHandlers; this chunk makes them able to ship the BPMN processes those handlers serve.

## Why Flowable's built-in auto-deployer couldn't do it

Flowable's Spring Boot starter scans the host classpath at engine startup for `classpath[*]:/processes/[*].bpmn20.xml` and auto-deploys every hit (the literal glob is paraphrased because the Kotlin KDoc comment below would otherwise treat the embedded slash-star as the start of a nested comment — feedback memory "Kotlin KDoc nested-comment trap"). PF4J plug-ins load through an isolated child classloader that is NOT visible to that scan, so a `processes/*.bpmn20.xml` resource shipped inside a plug-in JAR is never seen.

This chunk adds a dedicated host-side deployer that opens each plug-in JAR file directly (same JarFile walk pattern as `MetadataLoader.loadFromPluginJar`) and hand-registers the BPMNs with the Flowable `RepositoryService`.

## Mechanism

### New PluginProcessDeployer (platform-workflow)

One Spring bean, two methods:

- `deployFromPlugin(pluginId, jarPath): String?` — walks the JAR, collects every entry whose name starts with `processes/` and ends with `.bpmn20.xml` or `.bpmn`, and bundles the whole set into one Flowable `Deployment` named `plugin:<id>` with `category = pluginId`. Returns the deployment id, or null (missing JAR / no BPMN resources). One deployment per plug-in keeps undeploy atomic and makes the teardown query unambiguous.
- `undeployByPlugin(pluginId): Int` — runs `createDeploymentQuery().deploymentCategory(pluginId).list()` and calls `deleteDeployment(id, cascade=true)` on each hit. Cascading removes process instances and history rows along with the deployment — "uninstalling a plug-in makes it disappear". Idempotent: a second call returns 0.
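A minimal sketch of the two-method surface, with a plain in-memory map standing in for Flowable's repository (the `FakeDeployment` / `DeployerSketch` names and the map-based store are illustrative stand-ins, not the real Flowable API):

```kotlin
// Illustrative stand-in for the Flowable repository: deploymentId -> (name, category).
data class FakeDeployment(val name: String, val category: String)

class DeployerSketch(private val repo: MutableMap<String, FakeDeployment>) {
    private var nextId = 0

    // One bundle per plug-in: deployment name "plugin:<id>", category = pluginId.
    fun deployFromPlugin(pluginId: String, bpmnResources: List<String>): String? {
        require(pluginId.isNotBlank()) { "blank plug-in id" }
        if (bpmnResources.isEmpty()) return null          // no BPMN resources -> nothing deployed
        val id = "dep-${nextId++}"
        repo[id] = FakeDeployment("plugin:$pluginId", pluginId)
        return id
    }

    // Teardown queries by category, which makes it unambiguous and idempotent.
    fun undeployByPlugin(pluginId: String): Int {
        val hits = repo.filterValues { it.category == pluginId }.keys
        hits.forEach { repo.remove(it) }                  // cascade=true in the real call
        return hits.size
    }
}
```

The second call returning 0 falls out naturally: after the first teardown, the category query matches nothing.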
The deployer reads the JAR entries into byte arrays inside the JarFile's `use` block and then passes the bytes to `DeploymentBuilder.addBytes(name, bytes)` outside the block, so the jar handle is already closed by the time Flowable sees the deployment. No input-stream lifetime tangles.

### VibeErpPluginManager wiring

- New constructor dependency on `PluginProcessDeployer`.
- Deploy happens AFTER `start(context)` succeeds. The ordering matters because a plug-in can only register its TaskHandlers during `start(context)`, and a deployed BPMN whose service-task delegate expression resolves to a key with no matching handler would still deploy (Flowable only resolves delegates at process-start time). Registering handlers first is the safer default: the moment the deployment lands, every referenced handler is already in the TaskHandlerRegistry.
- BPMN deployment failure AFTER a successful `start(context)` now fully unwinds the plug-in state: call `instance.stop()`, remove the plug-in from the `started` list, strip its endpoints and TaskHandlers, and call `undeployByPlugin` (belt and suspenders — the deploy attempt may have partially succeeded). That mirrors the existing start-failure unwinding so the framework doesn't end up with a plug-in that's half-installed after any step throws.
- `destroy()` calls `undeployByPlugin(pluginId)` alongside the existing `unregisterAllByOwner(pluginId)`.

### Reference plug-in BPMN

`reference-customer/plugin-printing-shop/src/main/resources/processes/plate-approval.bpmn20.xml` — a minimal two-task process (`start` → serviceTask → `end`) whose serviceTask id is `printing_shop.plate.approve`, matching the PlateApprovalTaskHandler key landed in the previous commit. The process definition key is `plugin-printing-shop-plate-approval` (distinct from the serviceTask id because BPMN 2.0 requires element ids to be unique per document — the same separation used for the core ping process).
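The read-then-close pattern can be sketched with nothing but the JDK's `JarFile` (the function name `collectBpmnBytes` is made up for this example; the real deployer additionally hands the bytes to `DeploymentBuilder.addBytes`):

```kotlin
import java.nio.file.Files
import java.nio.file.Path
import java.util.jar.JarFile

// Reads every processes/*.bpmn20.xml (or .bpmn) entry into memory while the
// JarFile is open, and returns plain byte arrays after the handle is closed,
// so the caller never holds a stream tied to the jar's lifetime.
fun collectBpmnBytes(jarPath: Path): Map<String, ByteArray> {
    if (!Files.isRegularFile(jarPath)) return emptyMap()   // dev-exploded dirs: nothing to do
    return JarFile(jarPath.toFile()).use { jar ->
        jar.entries().asSequence()
            .filter { !it.isDirectory }
            .filter { it.name.startsWith("processes/") }
            .filter { it.name.endsWith(".bpmn20.xml") || it.name.endsWith(".bpmn") }
            .associate { entry ->
                entry.name to jar.getInputStream(entry).use { it.readBytes() }
            }
    }   // jar handle closed here; only the byte arrays escape the block
}
```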
## Smoke test (fresh DB, plug-in staged)

```
$ docker compose down -v && docker compose up -d db
$ ./gradlew :distribution:bootRun &
... registered TaskHandler 'vibeerp.workflow.ping' owner='core'
... TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
... plug-in 'printing-shop' Liquibase migrations applied successfully
[plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' ...
[plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve
PluginProcessDeployer: plug-in 'printing-shop' deployed 1 BPMN resource(s) as Flowable deploymentId='4e9f...': [processes/plate-approval.bpmn20.xml]

$ curl /api/v1/workflow/definitions   (as admin)
[ {"key":"plugin-printing-shop-plate-approval", "name":"Printing shop — plate approval",
   "version":1, "deploymentId":"4e9f85a6-33cf-11f1-acaa-1afab74ef3b4",
   "resourceName":"processes/plate-approval.bpmn20.xml"},
  {"key":"vibeerp-workflow-ping", "name":"vibe_erp workflow ping", "version":1,
   "deploymentId":"4f48...", "resourceName":"vibeerp-ping.bpmn20.xml"} ]

$ curl -X POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"plugin-printing-shop-plate-approval", "variables":{"plateId":"PLATE-007"}}
→ {"processInstanceId":"5b1b...", "ended":true,
   "variables":{"plateId":"PLATE-007", "plateApproved":true, "approvedBy":"user:admin",
                "approvedAt":"2026-04-09T04:48:30.514523Z"}}

$ kill -TERM <pid>
[ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
[ionShutdownHook] PluginProcessDeployer: plug-in 'printing-shop' deployment '4e9f...'
removed (cascade)
```

Full end-to-end loop closed: plug-in ships a BPMN → host reads it out of the JAR → Flowable deployment registered under the plug-in category → HTTP caller starts a process instance via the standard `/api/v1/workflow/process-instances` surface → dispatcher routes by activity id to the plug-in's TaskHandler → handler writes output variables + plug-in sees the authenticated caller as `ctx.principal()` via the reserved `__vibeerp_*` process-variable propagation from commit `ef9e5b42`. SIGTERM cleanly undeploys the plug-in's BPMNs.

## Tests

- 6 new unit tests on `PluginProcessDeployerTest`:
  * `deployFromPlugin returns null when jarPath is not a regular file` — guard against dev-exploded plug-in dirs
  * `deployFromPlugin returns null when the plug-in jar has no BPMN resources`
  * `deployFromPlugin reads every bpmn resource under processes and deploys one bundle` — builds a real temporary JAR with two BPMN entries plus a README and a metadata YAML, verifies that both BPMNs go through `addBytes` with the right names and the README / metadata entries are skipped
  * `deployFromPlugin rejects a blank plug-in id`
  * `undeployByPlugin returns zero when there is nothing to remove`
  * `undeployByPlugin cascades a deleteDeployment per matching deployment`
- Total framework unit tests: 275 (was 269), all green.

## Kotlin trap caught during authoring (feedback memory paid out)

First compile failed with `Unclosed comment` on the last line of `PluginProcessDeployer.kt`. The culprit was a KDoc paragraph containing the literal glob `classpath*:/processes/*.bpmn20.xml`: the embedded `/*` inside the backtick span was parsed as the start of a nested block comment even though the surrounding `/* ... */` KDoc was syntactically complete. The saved feedback-memory entry "Kotlin KDoc nested-comment trap" covered exactly this situation — the fix is to spell out glob characters as `[star]` / `[slash]` (or the word "slash-star") inside documentation so the literal `/*` never appears.
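A minimal illustration of the workaround (the `flowableScanGlob` function is invented for this example; only the comment style matters):

```kotlin
/**
 * Deploys plug-in BPMN resources that Flowable's startup scan misses.
 *
 * The scan glob is classpath[star]:/processes/[star].bpmn20.xml — spelled
 * with [star] because Kotlin block comments nest: a literal slash-star
 * sequence inside this KDoc would open a second comment and leave the
 * file with an "Unclosed comment" error at EOF.
 */
fun flowableScanGlob(): String = "classpath*:/processes/*.bpmn20.xml"
```

The glob is safe inside a string literal; only comments are affected.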
The KDoc now documents the behaviour AND the workaround so the next maintainer doesn't hit the same trap.

## Non-goals (still parking lot)

- Handler-side access to the full PluginContext — PlateApprovalTaskHandler is still a pure function because the framework doesn't hand TaskHandlers a context object. For REF.1 (real quote→job-card) handlers will need to read and mutate plug-in-owned tables; the cleanest approach is closure capture inside the plug-in class (handler instantiated inside `start(context)` with the context captured in the outer scope). Decision deferred to REF.1.
- BPMN resource hot reload. The deployer runs once per plug-in start; a plug-in whose BPMN changes under its feet at runtime isn't supported yet.
- Plug-in-shipped DMN / CMMN resources. The deployer only looks at `.bpmn20.xml` and `.bpmn`. Decision-table and case-management resources are not on the v1.0 critical path.

-
## What's new

Plug-ins can now contribute workflow task handlers to the framework. The P2.1 `TaskHandlerRegistry` only saw `@Component` TaskHandler beans from the host Spring context; handlers defined inside a PF4J plug-in were invisible because the plug-in's child classloader is not in the host's bean list. This commit closes that gap.

## Mechanism

### api.v1

- New interface `org.vibeerp.api.v1.workflow.PluginTaskHandlerRegistrar` with a single `register(handler: TaskHandler)` method. Plug-ins call it from inside their `start(context)` lambda.
- `PluginContext.taskHandlers: PluginTaskHandlerRegistrar` — added as a new optional member with a default implementation that throws `UnsupportedOperationException("upgrade to v0.7 or later")`, so pre-existing plug-in jars remain binary-compatible with the new host, and a plug-in built against v0.7 of the api-v1 surface fails fast on an old host instead of silently doing nothing. Same pattern we used for `endpoints` and `jdbc`.

### platform-workflow

- `TaskHandlerRegistry` gains owner tagging. Every registered handler now carries an `ownerId`: core `@Component` beans get `TaskHandlerRegistry.OWNER_CORE = "core"` (auto-assigned through the constructor-injection path), plug-in-contributed handlers get their PF4J plug-in id. New API:
  * `register(handler, ownerId = OWNER_CORE)` (the default keeps existing call sites unchanged)
  * `unregisterAllByOwner(ownerId): Int` — strips every handler owned by that id in one call, returns the count for log correlation
  * The duplicate-key error message now includes both owners, so a plug-in trying to stomp on a core handler gets an actionable "already registered by X (owner='core'), attempted by Y (owner='printing-shop')" instead of "already registered".
  * Internal storage switched from `ConcurrentHashMap<String, TaskHandler>` to `ConcurrentHashMap<String, Entry>` where `Entry` carries `(handler, ownerId)`. `find(key)` still returns `TaskHandler?` so the dispatcher is unchanged.
- No behavioral change to the hot path (`DispatchingJavaDelegate`) — only the registration/teardown paths changed.

### platform-plugins

- New dependency on `:platform:platform-workflow` (the only new inter-module dep of this chunk; it is the module that exposes `TaskHandlerRegistry`).
- New internal class `ScopedTaskHandlerRegistrar(hostRegistry, pluginId)` that implements the api.v1 `PluginTaskHandlerRegistrar` by delegating `register(handler)` to `hostRegistry.register(handler, ownerId = pluginId)`. Constructed fresh per plug-in by `VibeErpPluginManager`, so the plug-in never sees (or can tamper with) the owner id.
- `DefaultPluginContext` gains a `scopedTaskHandlers` constructor parameter and exposes it as the `PluginContext.taskHandlers` override.
- `VibeErpPluginManager`:
  * injects `TaskHandlerRegistry`
  * constructs `ScopedTaskHandlerRegistrar(registry, pluginId)` per plug-in when building `DefaultPluginContext`
  * partial-start failure now also calls `taskHandlerRegistry.unregisterAllByOwner(pluginId)`, matching the existing `endpointRegistry.unregisterAll(pluginId)` cleanup so a throwing `start(context)` cannot leave stale registrations
  * `destroy()` calls the same `unregisterAllByOwner` for every started plug-in in reverse order, mirroring the endpoint cleanup

### reference-customer/plugin-printing-shop

- New file `workflow/PlateApprovalTaskHandler.kt` — the first plug-in-contributed TaskHandler in the framework. Key `printing_shop.plate.approve`. Reads a `plateId` process variable, writes `plateApproved`, `plateId`, `approvedBy` (principal label), `approvedAt` (ISO instant) and exits. No DB mutation yet: a proper plate-approval handler would UPDATE `plugin_printingshop__plate` via `context.jdbc`, but that requires handing the TaskHandler a projection of the PluginContext — a deliberate non-goal of this chunk, deferred to the "handler context" follow-up.
- `PrintingShopPlugin.start(context)` now ends with `context.taskHandlers.register(PlateApprovalTaskHandler())` and logs the registration.
- Package layout: `org.vibeerp.reference.printingshop.workflow` is the plug-in's workflow namespace going forward (the next printing-shop handlers for REF.1 — quote-to-job-card, job-card-to-work-order — will live alongside it).

## Smoke test (fresh DB, plug-in staged)

```
$ docker compose down -v && docker compose up -d db
$ ./gradlew :distribution:bootRun &
... TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
... plug-in 'printing-shop' Liquibase migrations applied successfully
vibe_erp plug-in loaded: id=printing-shop version=0.1.0-SNAPSHOT state=STARTED
[plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' class='org.vibeerp.reference.printingshop.workflow.PlateApprovalTaskHandler'
[plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve

$ curl /api/v1/workflow/handlers   (as admin)
{ "count": 2, "keys": ["printing_shop.plate.approve", "vibeerp.workflow.ping"] }

$ curl /api/v1/plugins/printing-shop/ping   # plug-in HTTP still works
{"plugin":"printing-shop","ok":true,"version":"0.1.0-SNAPSHOT", ...}

$ curl -X POST /api/v1/workflow/process-instances
  {"processDefinitionKey":"vibeerp-workflow-ping"}
(principal propagation from the previous commit still works — pingedBy=user:admin)

$ kill -TERM <pid>
[ionShutdownHook] vibe_erp stopping 1 plug-in(s)
[ionShutdownHook] [plugin:printing-shop] printing-shop plug-in stopped
[ionShutdownHook] unregistered TaskHandler 'printing_shop.plate.approve' (owner stopped)
[ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
```

Every expected lifecycle event fires in the right order with the right owner attribution. Core handlers are untouched by plug-in teardown.
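The owner-tagged registry described above can be sketched as follows (simplified, with a bare `TaskHandler` stand-in for the api.v1 interface; the class name `OwnerTaggedRegistry` is illustrative):

```kotlin
import java.util.concurrent.ConcurrentHashMap

interface TaskHandler { val key: String }   // stand-in for the api.v1 interface

class OwnerTaggedRegistry {
    companion object { const val OWNER_CORE = "core" }

    // Entry pairs the handler with the id of whoever registered it.
    private data class Entry(val handler: TaskHandler, val ownerId: String)
    private val entries = ConcurrentHashMap<String, Entry>()

    fun register(handler: TaskHandler, ownerId: String = OWNER_CORE) {
        require(ownerId.isNotBlank()) { "blank owner id" }
        val prev = entries.putIfAbsent(handler.key, Entry(handler, ownerId))
        if (prev != null) error(
            "TaskHandler '${handler.key}' already registered " +
                "(owner='${prev.ownerId}'), attempted by owner='$ownerId'"
        )
    }

    // Dispatcher hot path unchanged: still returns TaskHandler?
    fun find(key: String): TaskHandler? = entries[key]?.handler

    // Strips every handler owned by that id; returns the count for logging.
    fun unregisterAllByOwner(ownerId: String): Int {
        val keys = entries.filterValues { it.ownerId == ownerId }.keys
        keys.forEach { entries.remove(it) }
        return keys.size
    }
}
```

A per-plug-in registrar then only needs to pin the owner id: `register(handler, ownerId = pluginId)`, which is all the real `ScopedTaskHandlerRegistrar` does.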
## Tests

- 4 new / updated tests on `TaskHandlerRegistryTest`:
  * `unregisterAllByOwner only removes handlers owned by that id` — 2 core + 2 plug-in, unregister the plug-in owner, only the 2 plug-in keys are removed
  * `unregisterAllByOwner on unknown owner returns zero`
  * `register with blank owner is rejected`
  * Updated `duplicate key fails fast` to assert the new error-message format including both owner ids
- Total framework unit tests: 269 (was 265), all green.

## What this unblocks

- **REF.1** (real printing-shop quote→job-card workflow) can now register its production handlers through the same seam.
- **Plug-in-contributed handlers with state access** — the next design question is how a plug-in handler gets at the plug-in's database and translator. Two options: pass a projection of the PluginContext through TaskContext, or keep a reference to the context captured at plug-in start (closure). The PlateApproval handler in this chunk is pure on purpose to keep the seam conversation separate.
- **Plug-in-shipped BPMN auto-deployment** — Flowable's default classpath scan uses `classpath*:/processes/*.bpmn20.xml`, which does NOT see PF4J plug-in classloaders. A dedicated `PluginProcessDeployer` that walks each started plug-in's JAR for BPMN resources and calls `repositoryService.createDeployment` is the natural companion to this commit, still pending.

## Non-goals (still parking lot)

- BPMN processes shipped inside plug-in JARs (see above — needs its own chunk, because it requires reading resources from the PF4J classloader and constructing a Flowable deployment by hand).
- Per-handler permission checks — a handler that wants a permission gate still has to call back through its own context; P4.3's @RequirePermission aspect doesn't reach into Flowable delegate execution.
- Hot reload of a running plug-in's TaskHandlers. The seam supports it, but `unloadPlugin` + `loadPlugin` at runtime isn't exercised by any current caller.
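For reference, the closure-capture option mentioned under "What this unblocks" could look like the sketch below. Every type here is a simplified stand-in (the real api.v1 surface differs, and the design is explicitly undecided):

```kotlin
// Simplified stand-ins for the real api.v1 types.
interface Jdbc { fun update(sql: String): Int }
class PluginContext(val jdbc: Jdbc)
fun interface TaskHandler { fun execute(vars: MutableMap<String, Any?>) }

// Closure-capture option: the handler is instantiated inside start(context)
// and captures the context, so the framework's TaskHandler contract stays
// context-free while the handler still reaches plug-in-owned tables.
class ClosureCapturePluginSketch {
    var handler: TaskHandler? = null
        private set

    fun start(context: PluginContext) {
        handler = TaskHandler { vars ->
            // Hypothetical SQL; the captured context provides DB access.
            vars["rowsUpdated"] =
                context.jdbc.update("UPDATE plugin_printingshop__plate SET approved = true")
        }
    }
}
```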
-
The framework's authorization layer is now live. Until now, every authenticated user could do everything; the framework had only an authentication gate. This chunk adds method-level @RequirePermission annotations enforced by a Spring AOP aspect that consults the JWT's roles claim and a metadata-driven role-permission map.

What landed
-----------

* New `Role` and `UserRole` JPA entities mapping the existing identity__role + identity__user_role tables (the schema was created in the original identity init but never wired to JPA). RoleJpaRepository + UserRoleJpaRepository with a JPQL query that returns a user's role codes in one round-trip.
* `JwtIssuer.issueAccessToken(userId, username, roles)` now accepts a Set<String> of role codes and encodes them as a `roles` JWT claim (sorted for deterministic tests). Refresh tokens NEVER carry roles by design — see the rationale on `JwtIssuer.issueRefreshToken`. A role revocation propagates within one access-token lifetime (15 min default).
* `JwtVerifier` reads the `roles` claim into `DecodedToken.roles`. Missing claim → empty set, NOT an error (refresh tokens, system tokens, and pre-P4.3 tokens all legitimately omit it).
* `AuthService.login` now calls `userRoles.findRoleCodesByUserId(...)` before minting the access token. `AuthService.refresh` re-reads the user's roles too — so a refresh always picks up the latest set, since refresh tokens deliberately don't carry roles.
* New `AuthorizationContext` ThreadLocal in `platform-security.authz` carrying an `AuthorizedPrincipal(id, username, roles)`. Separate from `PrincipalContext` (which lives in platform-persistence and carries only the principal id, for the audit listener). The two contexts coexist because the audit listener has no business knowing what roles a user has.
* `PrincipalContextFilter` now populates BOTH contexts on every authenticated request, reading the JWT's `username` and `roles` claims via `Jwt.getClaimAsStringList("roles")`.
The filter is the one and only place that knows about Spring Security types AND about both vibe_erp contexts; everything downstream uses just the Spring-free abstractions.
* `PermissionEvaluator` Spring bean: takes a role set + permission key, returns boolean. Resolution chain:
  1. The literal `admin` role short-circuits to `true` for every key (the wildcard exists so the bootstrap admin can do everything from the very first boot without seeding a complete role-permission mapping).
  2. Otherwise consults an in-memory `Map<role, Set<permission>>` loaded from `metadata__role_permission` rows. The cache is rebuilt by `refresh()`, called from `VibeErpPluginManager` after the initial core load AND after every plug-in load.
  3. An empty role set is always denied. No implicit grants.
* `@RequirePermission("...")` annotation in `platform-security.authz`. `RequirePermissionAspect` is a Spring AOP @Aspect with @Around advice that intercepts every annotated method, reads the current request's `AuthorizationContext`, calls `PermissionEvaluator.has(...)`, and either proceeds or throws `PermissionDeniedException`.
* New `PermissionDeniedException` carrying the offending key. `GlobalExceptionHandler` maps it to HTTP 403 Forbidden with `"permission denied: 'partners.partner.deactivate'"` as the detail. The key IS surfaced to the caller (unlike the 401's generic "invalid credentials") because the SPA needs it to render a useful "your role doesn't include X" message, and callers are already authenticated, so it's not an enumeration vector.
* `BootstrapAdminInitializer` now creates the wildcard `admin` role on first boot and grants it to the bootstrap admin user.
* `@RequirePermission` applied to four sensitive endpoints as the demo: `PartnerController.deactivate`, `StockBalanceController.adjust`, `SalesOrderController.confirm`, `SalesOrderController.cancel`. More endpoints will gain annotations as additional roles are introduced; v1 keeps the blast radius narrow.
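The three-step resolution chain can be sketched as follows (the constructor map stands in for the cache built from `metadata__role_permission` rows; the class name is illustrative):

```kotlin
class PermissionEvaluatorSketch(private val grants: Map<String, Set<String>>) {
    fun has(roles: Set<String>, permissionKey: String): Boolean {
        if (roles.isEmpty()) return false            // 3. empty role set: always denied
        if ("admin" in roles) return true            // 1. admin wildcard short-circuit
        return roles.any { role ->                   // 2. role-permission map lookup (union)
            permissionKey in grants[role].orEmpty()
        }
    }
}
```

Unknown roles fall through to an empty grant set, so they deny without any special casing.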
End-to-end smoke test
---------------------

Reset Postgres, booted the app, verified:

* Admin login → JWT length 265 (was 241), decoded claims include `"roles":["admin"]`
* Admin POST /sales-orders/{id}/confirm → 200, status DRAFT → CONFIRMED (admin wildcard short-circuits the permission check)
* Inserted a 'powerless' user via raw SQL with no role assignments but copied the admin's password hash so login works
* Powerless login → JWT length 247, decoded claims have NO roles field at all
* Powerless POST /sales-orders/{id}/cancel → **403 Forbidden** with `"permission denied: 'orders.sales.cancel'"` in the body
* Powerless DELETE /partners/{id} → **403 Forbidden** with `"permission denied: 'partners.partner.deactivate'"`
* Powerless GET /sales-orders, /partners, /catalog/items → all 200 (read endpoints have no @RequirePermission)
* Admin regression: catalog uoms, identity users, inventory locations, printing-shop plates with i18n, metadata custom-fields endpoint — all still HTTP 2xx

Build
-----

* `./gradlew build`: 15 subprojects, 163 unit tests (was 153), all green. The 10 new tests cover:
  - PermissionEvaluator: empty roles deny, admin wildcard, explicit role-permission grant, multi-role union, unknown role denial, malformed payload tolerance, currentHas with no AuthorizationContext, currentHas with bound context (8 tests).
  - JwtRoundTrip: roles claim round-trips through the access token, refresh token never carries roles even when asked (2 tests).

What was deferred
-----------------

* **OIDC integration (P4.2)**. Built-in JWT only. The Keycloak-compatible OIDC client will reuse the same authorization layer unchanged — the roles will come from OIDC ID tokens instead of the local user store.
* **Permission key validation at boot.** The framework does NOT yet check that every `@RequirePermission` value matches a declared metadata permission key. The plug-in linter is the natural place for that check to land later.
* **Role hierarchy**.
Roles are flat in v1; a role with permission X cannot inherit from another role. Adding a `parent_role` field on the role row is a non-breaking change later.
* **Resource-aware permissions** ("the user owns THIS partner"). v1 only checks the operation, not the operand. Resource-aware checks are post-v1.
* **Composite (AND/OR) permission requirements**. A single key per call site keeps the contract simple. Composite requirements live in service code that calls `PermissionEvaluator.currentHas` directly.
* **Role management UI / REST**. The framework can EVALUATE permissions but has no first-class endpoints for "create a role", "grant a permission to a role", "assign a role to a user". v1 expects these to be done via direct DB writes or via the future SPA's role editor (P3.x); the wiring above is intentionally policy-only, not management.

-
The keystone of the framework's "key user adds fields without code" promise. A YAML declaration is now enough to add a typed custom field to an existing entity, validate it on every save, and surface it to the SPA / OpenAPI / AI agent. This is the bit that makes vibe_erp a *framework* instead of a fork-per-customer app.

What landed
-----------

* Extended `MetadataYamlFile` with a top-level `customFields:` section and `CustomFieldYaml` + `CustomFieldTypeYaml` wire-format DTOs. The YAML uses a flat `kind: decimal / scale: 2` discriminator instead of Jackson polymorphic deserialization, so plug-in authors don't have to nest type configs and api.v1's `FieldType` stays free of Jackson imports.
* `MetadataLoader.doLoad` now upserts `metadata__custom_field` rows alongside entities/permissions/menus, with the same delete-by-source idempotency. `LoadResult` carries the count for the boot log.
* `CustomFieldRegistry` reads every `metadata__custom_field` row from the database and builds an in-memory `Map<entityName, List<CustomField>>` for the validator's hot path. `refresh()` is called by `VibeErpPluginManager` after the initial core load AND after every plug-in load, so a freshly installed plug-in's custom fields are immediately enforceable. ConcurrentHashMap under the hood; reads are lock-free.
* `ExtJsonValidator` is the on-save half: takes (entityName, ext map), walks the declared fields, coerces each value to its native type, and returns the canonicalised map (or throws IllegalArgumentException with ALL violations joined for a single 400). Per-FieldType rules:
  - String: maxLength enforced.
  - Integer: accepts Number and numeric String, rejects garbage.
  - Decimal: precision/scale enforced; preserved as a plain string in canonical form so JSON encoding doesn't lose trailing zeros.
  - Boolean: accepts true/false (case-insensitive).
  - Date / DateTime: ISO-8601 parse via java.time.
  - Uuid: java.util.UUID parse.
  - Enum: must be in the declared `allowedValues`.
  - Money / Quantity / Json / Reference: pass-through (the Reference target existence check is pending the cross-PBC EntityRegistry seam).

  Unknown ext keys are rejected with the entity's name and the keys themselves listed. ALL violations are returned in one response, not failing on the first, so a form submitter fixes everything in one round-trip.
* `Partner` is the first PBC entity to wire ext through the validator: `CreatePartnerRequest` and `UpdatePartnerRequest` accept an `ext: Map<String, Any?>?`; `PartnerService.create/update` calls `extValidator.validate("Partner", ext)` and persists the canonical JSON to the existing `partners__partner.ext` JSONB column; `PartnerResponse` parses it back so callers see what they wrote.
* `partners.yml` now declares two custom fields on Partner — `partners_credit_limit` (Decimal precision=14, scale=2) and `partners_industry` (Enum of printing/publishing/packaging/other) — with English and Chinese labels. Tagged `source=core` so the framework has something to demo from a fresh boot.
* New public `GET /api/v1/_meta/metadata/custom-fields/{entityName}` endpoint serves the api.v1 runtime view of declarations from the in-memory registry (so it reflects every refresh) for the SPA's form builder, the OpenAPI generator, and the AI agent function catalog. The existing `GET /api/v1/_meta/metadata` endpoint also gained a `customFields` list.

End-to-end smoke test
---------------------

Reset Postgres, booted the app, verified:

* Boot log: `CustomFieldRegistry: refreshed 2 custom fields across 1 entities (0 malformed rows skipped)` — twice (after core load and after plug-in load).
* `POST /api/v1/partners/partners` with `ext = {credit_limit: "50000.00", industry: "printing"}` → 201; the response echoes the canonical map.
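The validator's collect-all-violations shape can be condensed to a sketch covering just the decimal-scale, enum, and unknown-key rules (the `FieldRule` types and `validateExt` name are invented for the example; the real validator covers every FieldType and also canonicalises values):

```kotlin
import java.math.BigDecimal

// Minimal field declarations for the sketch.
sealed interface FieldRule
data class EnumRule(val allowed: Set<String>) : FieldRule
data class DecimalRule(val scale: Int) : FieldRule

fun validateExt(entity: String, declared: Map<String, FieldRule>, ext: Map<String, Any?>): List<String> {
    val violations = mutableListOf<String>()
    val unknown = ext.keys - declared.keys
    if (unknown.isNotEmpty())
        violations += "ext contains undeclared key(s) for '$entity': $unknown"
    for ((key, rule) in declared) {
        val value = ext[key]?.toString() ?: continue
        when (rule) {
            is DecimalRule -> {
                val scale = BigDecimal(value).scale()
                if (scale > rule.scale)
                    violations += "ext.$key: decimal scale $scale exceeds declared scale ${rule.scale}"
            }
            is EnumRule ->
                if (value !in rule.allowed)
                    violations += "ext.$key: value '$value' is not in allowed set ${rule.allowed}"
        }
    }
    return violations   // the real validator joins these into one IllegalArgumentException
}
```

Every declared field is checked before returning, which is what lets a single 400 carry all three violations at once.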
* `POST` with `ext = {credit_limit: "1.234", industry: "unicycles", rogue: "x"}` → 400 with all THREE violations in one body: "ext contains undeclared key(s) for 'Partner': [rogue]; ext.partners_credit_limit: decimal scale 3 exceeds declared scale 2; ext.partners_industry: value 'unicycles' is not in allowed set [printing, publishing, packaging, other]".
* `SELECT ext FROM partners__partner WHERE code = 'CUST-EXT-OK'` → `{"partners_industry": "printing", "partners_credit_limit": "50000.00"}` — the canonical JSON is in the JSONB column verbatim.
* Regression: catalog uoms, identity users, partners list, and the printing-shop plug-in's POST /plates (with i18n via Accept-Language: zh-CN) all still HTTP 2xx.

Build
-----

* `./gradlew build`: 13 subprojects, 129 unit tests (was 118), all green. The 11 new tests cover each FieldType variant, multi-violation reporting, required-missing rejection, unknown-key rejection, and the null-value-dropped case.

What was deferred
-----------------

* JPA listener auto-validation. Right now PartnerService explicitly calls extValidator.validate(...) before save; Item, Uom, User and the plug-in tables don't yet. Promoting the validator to a JPA PrePersist/PreUpdate listener attached to a marker interface HasExt is the right next step, but it is its own focused chunk.
* Reference target existence check. FieldType.Reference passes through unchanged because the cross-PBC EntityRegistry that would let the validator look up "does an instance with this id exist?" is a separate seam landing later.
* The customization UI (Tier 1 self-service "add a field via the SPA") is P3.3 — the runtime enforcement is done, the editor is not.
* Custom-field-aware permissions. The `pii: true` flag is in the metadata but not yet read by anything; the DSAR/erasure pipeline that will consume it is post-v1.0.

-
The eighth cross-cutting platform service is live: plug-ins and PBCs now have a real Translator and LocaleProvider instead of the UnsupportedOperationException stubs that have shipped since v0.5.

What landed
-----------

* New Gradle subproject `platform/platform-i18n` (13 modules total).
* `IcuTranslator` — backed by ICU4J's MessageFormat (named placeholders, plurals, gender, locale-aware number/date/currency formatting), the format every modern translation tool speaks natively. JDK ResourceBundle handles per-locale fallback (zh_CN → zh → root).
* The translator takes a list of `BundleLocation(classLoader, baseName)` pairs and tries them in order. For the **core** translator the chain is just `[(host, "messages")]`; for a **per-plug-in** translator constructed by VibeErpPluginManager it's `[(pluginClassLoader, "META-INF/vibe-erp/i18n/messages"), (host, "messages")]`, so plug-in keys override host keys, but plug-ins still inherit shared keys like `errors.not_found`.
* Critical detail: the plug-in baseName uses a path the host does NOT publish, because PF4J's `PluginClassLoader` is parent-first — a `getResource("messages.properties")` against the plug-in classloader would find the HOST bundle through the parent chain, defeating the per-plug-in override entirely. Naming the plug-in resource somewhere the host doesn't claim sidesteps the trap.
* The translator disables `ResourceBundle.Control`'s automatic JVM-default-locale fallback. The default control walks `requested → root → JVM default → root`, which would silently serve German strings to a Japanese-locale request just because the German bundle exists. The fallback chain stops at root within a bundle, then moves to the next bundle location, then returns the key string itself.
* `RequestLocaleProvider` reads the active HTTP request's Accept-Language via `RequestContextHolder` + the servlet container's `getLocale()`.
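The ordered bundle-location lookup described above can be sketched with plain maps standing in for the per-classloader ResourceBundles (the real implementation resolves each `BundleLocation` through `ResourceBundle` with locale fallback first; this shows only the between-bundle ordering):

```kotlin
// Each map stands in for one BundleLocation's resolved bundle
// (plug-in bundle first, host bundle second).
fun translate(locations: List<Map<String, String>>, key: String): String {
    for (bundle in locations) {
        bundle[key]?.let { return it }   // first location that knows the key wins
    }
    return key                           // final fallback: the key string itself
}
```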
Outside an HTTP request (background jobs, workflow tasks, MCP agents) it falls back to the configured default locale (`vibeerp.i18n.defaultLocale`, default `en`). Importantly, when an HTTP request HAS no Accept-Language header it ALSO falls back to the configured default — never to the JVM's locale.
* `I18nConfiguration` exposes `coreTranslator` and `coreLocaleProvider` beans. Per-plug-in translators are NOT beans — they're constructed imperatively per plug-in start in VibeErpPluginManager because each needs its own classloader at the front of the resolution chain.
* `DefaultPluginContext` now wires `translator` and `localeProvider` for real instead of throwing `UnsupportedOperationException`.

Bundles
-------

* Core: `platform-i18n/src/main/resources/messages.properties` (English), `messages_zh_CN.properties` (Simplified Chinese), `messages_de.properties` (German). Six common keys (errors, ok/cancel/save/delete) and an ICU plural example for `counts.items`. Java 9+ (JEP 226) reads .properties files as UTF-8 by default, so Chinese characters are written directly rather than as `\uXXXX` escapes.
* Reference plug-in: moved from the broken `i18n/messages_en-US.properties` / `messages_zh-CN.properties` (wrong path, hyphen-locale filenames that ResourceBundle ignores) to the canonical `META-INF/vibe-erp/i18n/messages.properties` / `messages_zh_CN.properties` paths with underscore locale tags. Added a new `printingshop.plate.created` key with an ICU plural for `ink_count` to demonstrate non-trivial argument substitution.

End-to-end smoke test
---------------------

Reset Postgres, booted the app, hit POST /api/v1/plugins/printing-shop/plates with three different Accept-Language headers:

* (no header) → "Plate 'PLATE-001' created with no inks." (en-US, plug-in base bundle)
* `Accept-Language: zh-CN` → "已创建印版 'PLATE-002' (无油墨)。" (zh-CN, plug-in zh_CN bundle)
* `Accept-Language: de` → "Plate 'PLATE-003' created with no inks."
(de, but the plug-in ships no German bundle so it falls back to the plug-in base bundle — correct, the key is plug-in-specific)

Regression: identity, catalog, partners, and `GET /plates` all still HTTP 200 after the i18n wiring change.

Build
-----
* `./gradlew build`: 13 subprojects, 118 unit tests (up from 107 tests across 12 subprojects), all green. The 11 new tests cover ICU plural rendering, named-arg substitution, locale fallback (zh_CN → root, ja → root via NO_FALLBACK), cross-classloader override (a real JAR built in /tmp at test time), and RequestLocaleProvider's three resolution paths (no request → default; Accept-Language present → request locale; request without Accept-Language → default, NOT JVM locale).
* The architectural rule is still enforced: platform-plugins now imports platform-i18n, which is a platform-* dependency (allowed), not a pbc-* dependency (forbidden).

What was deferred
-----------------
* User-preferred locale from the authenticated user's profile row is NOT in the resolution chain yet — the `LocaleProvider` interface leaves room for it but the implementation only consults Accept-Language and the configured default. Adding it slots in between request and default without changing the api.v1 surface.
* The metadata translation overrides table (`metadata__translation`) is also deferred — the `Translator` JavaDoc mentions it as the first lookup source, but right now keys come from .properties files only. Once Tier 1 customization lands (P3.x), key users will be able to override any string from the SPA without touching code.

-
Adds the foundation for the entire Tier 1 customization story. Core PBCs and plug-ins now ship YAML files declaring their entities, permissions, and menus; a `MetadataLoader` walks the host classpath and each plug-in JAR at boot, upserts the rows tagged with their source, and exposes them at a public REST endpoint so the future SPA, AI-agent function catalog, OpenAPI generator, and external introspection tooling can all see what the framework offers without scraping code.

What landed:
* New `platform/platform-metadata/` Gradle subproject. Depends on api-v1 + platform-persistence + jackson-yaml + spring-jdbc.
* `MetadataYamlFile` DTOs (entities, permissions, menus). Forward-compatible: unknown top-level keys are ignored, so a future plug-in built against a newer schema (forms, workflows, rules, translations) loads cleanly on an older host that doesn't know those sections yet.
* `MetadataLoader` with two entry points:
  - loadCore() — uses Spring's PathMatchingResourcePatternResolver against the host classloader. Finds every classpath*:META-INF/vibe-erp/metadata/*.yml across all jars contributing to the application. Tagged source='core'.
  - loadFromPluginJar(pluginId, jarPath) — opens ONE specific plug-in JAR via java.util.jar.JarFile and walks its entries directly. This is critical: a plug-in's PluginClassLoader is parent-first, so a classpath*: scan against it would ALSO pick up the host's metadata files via the parent classpath. We saw this in the first smoke run — the plug-in source ended up with 6 entities (the plug-in's 2 + the host's 4) before the fix. Walking the JAR file directly guarantees only the plug-in's own files load. Tagged source='plugin:<id>'.

  Both entry points use the same delete-then-insert idempotent core (doLoad). Loading the same source twice produces the same final state. User-edited metadata (source='user') is NEVER touched by either path — it survives boot, plug-in install, and plug-in upgrade.
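The delete-then-insert core and the source='user' survival guarantee can be sketched in a few lines — an in-memory model where a list of (source, name) pairs stands in for the metadata tables; all names are illustrative:

```kotlin
// Sketch of the idempotent doLoad semantics. Assumption: a mutable list stands
// in for the metadata tables; the real code issues DELETE + INSERT statements.
data class MetaRow(val source: String, val name: String)

fun doLoad(table: MutableList<MetaRow>, source: String, names: List<String>) {
    table.removeAll { it.source == source }          // wipe ONLY this source's rows
    names.forEach { table.add(MetaRow(source, it)) } // re-insert the fresh set
}

fun main() {
    val table = mutableListOf(MetaRow("user", "custom_field"))        // user-edited row
    doLoad(table, "plugin:printing-shop", listOf("Plate", "InkRecipe"))
    doLoad(table, "plugin:printing-shop", listOf("Plate", "InkRecipe")) // second load: same state
    println(table) // the user row survives; plug-in rows appear exactly once
}
```

Because the wipe is scoped to one source tag, re-running a load (boot, reinstall, upgrade) converges to the same rows while rows tagged 'user' are never in the delete set.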
This is what lets a future SPA "Customize" UI add custom fields without fearing they'll be wiped on the next deploy.
* `VibeErpPluginManager.afterPropertiesSet()` now calls metadataLoader.loadCore() at the very start, then walks plug-ins and calls loadFromPluginJar(...) for each one between Liquibase migration and start(context). Order is guaranteed: core → linter → migrate → metadata → start. The CommandLineRunner I originally put `loadCore()` in turned out to be wrong because Spring runs CommandLineRunners AFTER InitializingBean.afterPropertiesSet(), so the plug-in metadata was loading BEFORE core — the wrong way around. Calling loadCore() inline in the plug-in manager fixes the ordering without any @Order(...) gymnastics.
* `MetadataController` exposes:
  - GET /api/v1/_meta/metadata — all three sections
  - GET /api/v1/_meta/metadata/entities — entities only
  - GET /api/v1/_meta/metadata/permissions
  - GET /api/v1/_meta/metadata/menus
  Public allowlist (covered by the existing /api/v1/_meta/** rule in SecurityConfiguration). The metadata is intentionally non-sensitive — entity names, permission keys, menu paths. Nothing in here is PII or secret; the SPA needs to read it before the user has logged in.
* YAML files shipped:
  - pbc-identity/META-INF/vibe-erp/metadata/identity.yml (User + Role entities, 6 permissions, Users + Roles menus)
  - pbc-catalog/META-INF/vibe-erp/metadata/catalog.yml (Item + Uom entities, 7 permissions, Items + UoMs menus)
  - reference plug-in/META-INF/vibe-erp/metadata/printing-shop.yml (Plate + InkRecipe entities, 5 permissions, Plates + Inks menus in a "Printing shop" section)

Tests: 4 MetadataLoaderTest cases (loadFromPluginJar happy paths, mixed sections, blank pluginId rejection, missing-file no-op wipe) + 7 MetadataYamlParseTest cases (DTO mapping, optional fields, section defaults, forward-compat unknown keys). Total now **92 unit tests** across 11 modules, all green.
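The direct-JAR walk that loadFromPluginJar performs can be sketched with nothing but the JDK — a simplified version that only collects matching entry names (the real loader also reads and parses each entry's YAML bytes):

```kotlin
import java.util.jar.JarFile

// Collect a plug-in JAR's own metadata files WITHOUT asking its parent-first
// classloader, so host files can never leak into the result. Sketch only.
fun metadataEntries(jarPath: String): List<String> {
    val hits = mutableListOf<String>()
    JarFile(jarPath).use { jar ->
        for (entry in jar.entries()) { // Enumeration; Kotlin supplies iterator()
            val name = entry.name
            if (!entry.isDirectory &&
                name.startsWith("META-INF/vibe-erp/metadata/") &&
                name.endsWith(".yml")
            ) hits += name
        }
    }
    return hits
}
```

Opening the specific file with JarFile sidesteps classloader delegation entirely, which is exactly why the source='plugin:<id>' counts stay honest.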
End-to-end smoke test against fresh Postgres + plug-in loaded: Boot logs: MetadataLoader: source='core' loaded 4 entities, 13 permissions, 4 menus from 2 file(s) MetadataLoader: source='plugin:printing-shop' loaded 2 entities, 5 permissions, 2 menus from 1 file(s) HTTP smoke (everything green): GET /api/v1/_meta/metadata (no auth) → 200 6 entities, 18 permissions, 6 menus entity names: User, Role, Item, Uom, Plate, InkRecipe menu sections: Catalog, Printing shop, System GET /api/v1/_meta/metadata/entities → 200 GET /api/v1/_meta/metadata/menus → 200 Direct DB verification: metadata__entity: core=4, plugin:printing-shop=2 metadata__permission: core=13, plugin:printing-shop=5 metadata__menu: core=4, plugin:printing-shop=2 Idempotency: restart the app, identical row counts. Existing endpoints regression: GET /api/v1/identity/users (Bearer) → 1 user GET /api/v1/catalog/uoms (Bearer) → 15 UoMs GET /api/v1/plugins/printing-shop/ping (Bearer) → 200 Bugs caught and fixed during the smoke test: • The first attempt loaded core metadata via a CommandLineRunner annotated @Order(HIGHEST_PRECEDENCE) and per-plug-in metadata inline in VibeErpPluginManager.afterPropertiesSet(). Spring runs all InitializingBeans BEFORE any CommandLineRunner, so the plug-in metadata loaded first and the core load came second — wrong order. Fix: drop CoreMetadataInitializer entirely; have the plug-in manager call metadataLoader.loadCore() directly at the start of afterPropertiesSet(). • The first attempt's plug-in load used metadataLoader.load(pluginClassLoader, ...) which used Spring's PathMatchingResourcePatternResolver against the plug-in's classloader. PluginClassLoader is parent-first, so the resolver enumerated BOTH the plug-in's own JAR AND the host classpath's metadata files, tagging core entities as source='plugin:<id>' and corrupting the seed counts. 
Fix: refactor MetadataLoader to expose loadFromPluginJar(pluginId, jarPath), which opens the plug-in JAR directly via java.util.jar.JarFile and walks its entries — never asking the classloader at all. The api-v1 surface didn't change.
• Two KDoc comments contained the literal string `*.yml` after a `/` character (`/metadata/*.yml`), forming the `/*` pattern that Kotlin's lexer treats as a nested-comment opener. The file failed to compile with "Unclosed comment". This is the third time I've hit this trap; I rewrote both KDocs to avoid the literal `/*` sequence.
• The MetadataLoaderTest's hand-rolled JAR builder didn't include explicit directory entries for parent paths. Real Gradle JARs do include them, and Spring's PathMatchingResourcePatternResolver needs them to enumerate via classpath*:. Fixed the test helper to write directory entries for every parent of each file.

Implementation plan refreshed: P1.5 marked DONE. Next priority candidates: P5.2 (pbc-partners — third PBC clone) and P3.4 (custom field application via the ext jsonb column, which would unlock the full Tier 1 customization story).

Framework state: 17→18 commits, 10→11 modules, 81→92 unit tests, metadata seeded for 6 entities + 18 permissions + 6 menus.

-
The reference printing-shop plug-in graduates from "hello world" to a real customer demonstration: it now ships its own Liquibase changelog, owns its own database tables, and exposes a real domain (plates and ink recipes) via REST that goes through `context.jdbc` — a new typed-SQL surface in api.v1 — without ever touching Spring's `JdbcTemplate` or any other host internal type. A bytecode linter that runs before plug-in start refuses to load any plug-in that tries to import `org.vibeerp.platform.*` or `org.vibeerp.pbc.*` classes. What landed: * api.v1 (additive, binary-compatible): - PluginJdbc — typed SQL access with named parameters. Methods: query, queryForObject, update, inTransaction. No Spring imports leaked. Forces plug-ins to use named params (no positional ?). - PluginRow — typed nullable accessors over a single result row: string, int, long, uuid, bool, instant, bigDecimal. Hides java.sql.ResultSet entirely. - PluginContext.jdbc getter with default impl that throws UnsupportedOperationException so older builds remain binary compatible per the api.v1 stability rules. * platform-plugins — three new sub-packages: - jdbc/DefaultPluginJdbc backed by Spring's NamedParameterJdbcTemplate. ResultSetPluginRow translates each accessor through ResultSet.wasNull() so SQL NULL round-trips as Kotlin null instead of the JDBC defaults (0 for int, false for bool, etc. — bug factories). - jdbc/PluginJdbcConfiguration provides one shared PluginJdbc bean for the whole process. Per-plugin isolation lands later. - migration/PluginLiquibaseRunner looks for META-INF/vibe-erp/db/changelog.xml inside the plug-in JAR via the PF4J classloader and applies it via Liquibase against the host's shared DataSource. The unique META-INF path matters: plug-ins also see the host's parent classpath, where the host's own db/changelog/master.xml lives, and a collision causes Liquibase ChangeLogParseException at install time. 
- lint/PluginLinter walks every .class entry in the plug-in JAR via java.util.jar.JarFile + ASM ClassReader, visits every type/ method/field/instruction reference, rejects on any reference to `org/vibeerp/platform/` or `org/vibeerp/pbc/` packages. * VibeErpPluginManager lifecycle is now load → lint → migrate → start: - lint runs immediately after PF4J's loadPlugins(); rejected plug-ins are unloaded with a per-violation error log and never get to run any code - migrate runs the plug-in's own Liquibase changelog; failure means the plug-in is loaded but skipped (loud warning, framework boots fine) - then PF4J's startPlugins() runs the no-arg start - then we walk loaded plug-ins and call vibe_erp's start(context) with a fully-wired DefaultPluginContext (logger + endpoints + eventBus + jdbc). The plug-in's tables are guaranteed to exist by the time its lambdas run. * DefaultPluginContext.jdbc is no longer a stub. Plug-ins inject the shared PluginJdbc and use it to talk to their own tables. * Reference plug-in (PrintingShopPlugin): - Ships META-INF/vibe-erp/db/changelog.xml with two changesets: plugin_printingshop__plate (id, code, name, width_mm, height_mm, status) and plugin_printingshop__ink_recipe (id, code, name, cmyk_c/m/y/k). - Now registers seven endpoints: GET /ping — health GET /echo/{name} — path variable demo GET /plates — list GET /plates/{id} — fetch POST /plates — create (with race-conditiony existence check before INSERT, since plug-ins can't import Spring's DataAccessException) GET /inks POST /inks - All CRUD lambdas use context.jdbc with named parameters. The plug-in still imports nothing from org.springframework.* in its own code (it does reach the host's Jackson via reflection for JSON parsing — a deliberate v0.6 shortcut documented inline). 
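The named-parameter rule on context.jdbc can be illustrated with a stand-alone sketch of what a named-to-positional rewrite involves. This is NOT the real implementation — DefaultPluginJdbc simply delegates to Spring's NamedParameterJdbcTemplate — but it shows the mechanics and why a positional `?` never appears in plug-in code:

```kotlin
// Illustrative named-parameter expansion. Naive on purpose: it ignores quoted
// strings and Postgres ::casts, which the real Spring implementation handles.
val PARAM = Regex(""":([A-Za-z_][A-Za-z0-9_]*)""")

fun expand(sql: String, params: Map<String, Any?>): Pair<String, List<Any?>> {
    val values = mutableListOf<Any?>()
    val rewritten = PARAM.replace(sql) { m ->
        val name = m.groupValues[1]
        require(name in params) { "missing named parameter :$name" }
        values += params[name] // bound in textual order of the placeholders
        "?"
    }
    return rewritten to values
}

fun main() {
    val (sql, args) = expand(
        "SELECT * FROM plugin_printingshop__plate WHERE code = :code AND status = :status",
        mapOf("code" to "A4", "status" to "ACTIVE"),
    )
    println(sql)  // positional form that would be handed to JDBC
    println(args) // values in placeholder order
}
```

Named parameters also make a missing binding a loud `require` failure at call time instead of a silently shifted positional argument.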
Tests: 5 new PluginLinterTest cases use ASM ClassWriter to synthesize in-memory plug-in JARs (clean class, forbidden platform ref, forbidden pbc ref, allowed api.v1 ref, multiple violations) and a mocked PluginWrapper to avoid touching the real PF4J loader. Total now **81 unit tests** across 10 modules, all green. End-to-end smoke test against fresh Postgres with the plug-in loaded (every assertion green): Boot logs: PluginLiquibaseRunner: plug-in 'printing-shop' has changelog.xml Liquibase: ChangeSet printingshop-init-001 ran successfully Liquibase: ChangeSet printingshop-init-002 ran successfully Liquibase migrations applied successfully plugin.printing-shop: registered 7 endpoints HTTP smoke: \dt plugin_printingshop* → both tables exist GET /api/v1/plugins/printing-shop/plates → [] POST plate A4 → 201 + UUID POST plate A3 → 201 + UUID POST duplicate A4 → 409 + clear msg GET plates → 2 rows GET /plates/{id} → A4 details psql verifies both rows in plugin_printingshop__plate POST ink CYAN → 201 POST ink MAGENTA → 201 GET inks → 2 inks with nested CMYK GET /ping → 200 (existing endpoint) GET /api/v1/catalog/uoms → 15 UoMs (no regression) GET /api/v1/identity/users → 1 user (no regression) Bug encountered and fixed during the smoke test: • The plug-in initially shipped its changelog at db/changelog/master.xml, which collides with the HOST's db/changelog/master.xml. The plug-in classloader does parent-first lookup (PF4J default), so Liquibase's ClassLoaderResourceAccessor found BOTH files and threw ChangeLogParseException ("Found 2 files with the path"). Fixed by moving the plug-in changelog to META-INF/vibe-erp/db/changelog.xml, a path the host never uses, and updating PluginLiquibaseRunner. The unique META-INF prefix is now part of the documented plug-in convention. 
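The decision the PluginLinterTest cases exercise boils down to a prefix check on JVM internal class names; the ASM visitor wiring is elided here, but the predicate the visitors would apply to every type/method/field/instruction reference is roughly:

```kotlin
// The linter's core predicate (sketch; the real code feeds this from ASM
// ClassReader visitors walking every reference in each .class entry).
val FORBIDDEN_PREFIXES = listOf("org/vibeerp/platform/", "org/vibeerp/pbc/")

fun isForbiddenReference(internalName: String): Boolean =
    FORBIDDEN_PREFIXES.any { internalName.startsWith(it) }

fun main() {
    println(isForbiddenReference("org/vibeerp/platform/plugins/VibeErpPluginManager")) // true
    println(isForbiddenReference("org/vibeerp/pbc/identity/UserService"))              // true
    println(isForbiddenReference("org/vibeerp/api/v1/plugin/PluginContext"))           // false: api.v1 is the allowed surface
}
```

Note the slash-separated internal-name form: bytecode references use `org/vibeerp/...`, not the dotted source form, so the check never needs to load the class.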
What is explicitly NOT in this chunk (deferred): • Per-plugin Spring child contexts — plug-ins still instantiate via PF4J's classloader without their own Spring beans • Per-plugin datasource isolation — one shared host pool today • Plug-in changelog table-prefix linter — convention only, runtime enforcement comes later • Rollback on plug-in uninstall — uninstall is operator-confirmed and rare; running dropAll() during stop() would lose data on accidental restart • Subscription auto-scoping on plug-in stop — plug-ins still close their own subscriptions in stop() • Real customer-grade JSON parsing in plug-in lambdas — the v0.6 reference plug-in uses reflection to find the host's Jackson; a real plug-in author would ship their own JSON library or use a future api.v1 typed-DTO surface Implementation plan refreshed: P1.2, P1.3, P1.4, P1.7, P4.1, P5.1 all marked DONE in docs/superpowers/specs/2026-04-07-vibe-erp-implementation-plan.md. Next priority candidates: P1.5 (metadata seeder) and P5.2 (pbc-partners). -
Adds the framework's event bus, the second cross-cutting service (after auth) that PBCs and plug-ins both consume. Implements the transactional outbox pattern from the architecture spec section 9 — events are written to the database in the same transaction as the publisher's domain change, so a publish followed by a rollback never escapes. This is the seam where a future Kafka/NATS bridge plugs in WITHOUT touching any PBC code. What landed: * New `platform/platform-events/` module: - `EventOutboxEntry` JPA entity backed by `platform__event_outbox` (id, event_id, topic, aggregate_type, aggregate_id, payload jsonb, status, attempts, last_error, occurred_at, dispatched_at, version). Status enum: PENDING / DISPATCHED / FAILED. - `EventOutboxRepository` Spring Data JPA repo with a pessimistic SELECT FOR UPDATE query for poller dispatch. - `ListenerRegistry` — in-memory subscription holder, indexed both by event class (Class.isInstance) and by topic string. Supports a `**` wildcard for the platform's audit subscriber. Backed by CopyOnWriteArrayList so dispatch is lock-free. - `EventBusImpl` — implements the api.v1 EventBus. publish() writes the outbox row AND synchronously delivers to in-process listeners in the SAME transaction. Marked Propagation.MANDATORY so the bus refuses to publish outside an existing transaction (preventing publish-and-rollback leaks). Listener exceptions are caught and logged; the outbox row still commits. - `OutboxPoller` — Spring @Scheduled component that runs every 5s, drains PENDING / FAILED rows under a pessimistic lock, marks them DISPATCHED. v0.5 has no real external dispatcher — the poller is the seam where Kafka/NATS plugs in later. - `EventBusConfiguration` — @EnableScheduling so the poller actually runs. Lives in this module so the seam activates automatically when platform-events is on the classpath. - `EventAuditLogSubscriber` — wildcard subscriber that logs every event at INFO. Demo proof that the bus works end-to-end. 
Future versions replace it with a real audit log writer. * `platform__event_outbox` Liquibase changeset (platform-events-001): table + unique index on event_id + index on (status, created_at) + index on topic. * DefaultPluginContext.eventBus is no longer a stub that throws — it's now the real EventBus injected by VibeErpPluginManager. Plug-ins can publish and subscribe via the api.v1 surface. Note: subscriptions are NOT auto-scoped to the plug-in lifecycle in v0.5; a plug-in that wants its subscriptions removed on stop() must call subscription.close() explicitly. Auto-scoping lands when per-plug-in Spring child contexts ship. * pbc-identity now publishes `UserCreatedEvent` after a successful UserService.create(). The event class is internal to pbc-identity (not in api.v1) — other PBCs subscribe by topic string (`identity.user.created`), not by class. This is the right tradeoff: string topics are stable across plug-in classloaders, class equality is not, and adding every event class to api.v1 would be perpetual surface-area bloat. Tests: 13 new unit tests (9 EventBusImplTest + 4 OutboxPollerTest) plus 2 new UserServiceTest cases that verify the publish happens on the happy path and does NOT happen when create() rejects a duplicate. Total now 76 unit tests across the framework, all green. 
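The publish semantics described above — outbox write plus synchronous in-process delivery, listener failures contained, `**` wildcard — can be modeled in a few lines. This is an in-memory sketch with a list standing in for the outbox table and no real transaction manager; all names are illustrative:

```kotlin
// Sketch of the EventBusImpl publish path. The row is appended in the same
// "transaction" as the caller's change; listener exceptions are logged, not
// propagated, so the outbox row still commits.
data class OutboxRow(val topic: String, val payload: String, var status: String = "PENDING")

class MiniBus {
    val outbox = mutableListOf<OutboxRow>()
    private val listeners = mutableListOf<Pair<String, (OutboxRow) -> Unit>>()

    fun subscribe(topicPattern: String, handler: (OutboxRow) -> Unit) {
        listeners += topicPattern to handler
    }

    fun publish(topic: String, payload: String) {
        val row = OutboxRow(topic, payload)
        outbox += row // written alongside the publisher's domain change
        for ((pattern, handler) in listeners) {
            if (pattern == "**" || pattern == topic) {
                runCatching { handler(row) }
                    .onFailure { println("listener failed: ${it.message}") }
            }
        }
    }
}

fun main() {
    val bus = MiniBus()
    bus.subscribe("**") { println("audit: ${it.topic}") }    // wildcard audit subscriber
    bus.subscribe("identity.user.created") { error("boom") } // misbehaving listener
    bus.publish("identity.user.created", """{"username":"alice"}""")
    println(bus.outbox.single().status) // stays PENDING until the poller dispatches
}
```

A rollback in the real bus discards the outbox row with the rest of the transaction, which is the whole point of Propagation.MANDATORY.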
End-to-end smoke test against fresh Postgres with the plug-in loaded (everything green):
  EventAuditLogSubscriber subscribed to ** at boot
  Outbox empty before any user create ✓
  POST /api/v1/auth/login → 200
  POST /api/v1/identity/users (create alice) → 201
  Outbox row appears with topic=identity.user.created, status=PENDING immediately after create ✓
  EventAuditLogSubscriber log line fires synchronously inside the create transaction ✓
  POST /api/v1/identity/users (create bob) → 201
  Wait 8s (one OutboxPoller cycle)
  Both outbox rows now DISPATCHED, dispatched_at set ✓
  Existing PBCs still work: GET /api/v1/identity/users → 3 users ✓
  GET /api/v1/catalog/uoms → 15 UoMs ✓
  Plug-in still works: GET /api/v1/plugins/printing-shop/ping → 200 ✓

The most important assertion is the synchronous audit log line appearing on the same thread as the user creation request. That proves the entire chain — UserService.create() → eventBus.publish() → EventBusImpl writes outbox row → ListenerRegistry.deliver() finds wildcard subscriber → EventAuditLogSubscriber.handle() logs — runs end-to-end inside the publisher's transaction. The poller flipping PENDING → DISPATCHED 5s later proves the outbox + poller seam works without any external dispatcher.

Bug encountered and fixed during the smoke test:
• EventBusImplTest used `ObjectMapper().registerKotlinModule()` which doesn't pick up jackson-datatype-jsr310. Production code uses Spring Boot's auto-configured ObjectMapper, which already has jsr310 because spring-boot-starter-web is on distribution's classpath. The test setup was the only place using a bare mapper. Fixed by switching to `findAndRegisterModules()` AND by adding jackson-datatype-jsr310 as an explicit implementation dependency of platform-events (so future modules that depend on the bus without bringing web in still get Instant serialization).

What is explicitly NOT in this chunk:
• External dispatcher (Kafka/NATS bridge) — the poller is a no-op that just marks rows DISPATCHED.
The seam exists; the dispatcher is a future P1.7.b unit. • Exponential backoff on FAILED rows — every cycle re-attempts. Real backoff lands when there's a real dispatcher to fail. • Dead-letter queue — same. • Per-plug-in subscription auto-scoping — plug-ins must close() explicitly today. • Async / fire-and-forget publish — synchronous in-process only. -
The reference printing-shop plug-in now actually does something: its main class registers two HTTP endpoints during start(context), and a real curl to /api/v1/plugins/printing-shop/ping returns the JSON the plug-in's lambda produced. End-to-end smoke test 10/10 green. This is the chunk that turns vibe_erp from "an ERP app that has a plug-in folder" into "an ERP framework whose plug-ins can serve traffic". What landed: * api.v1 — additive (binary-compatible per the api.v1 stability rule): - org.vibeerp.api.v1.plugin.HttpMethod (enum) - org.vibeerp.api.v1.plugin.PluginRequest (path params, query, body) - org.vibeerp.api.v1.plugin.PluginResponse (status + body) - org.vibeerp.api.v1.plugin.PluginEndpointHandler (fun interface) - org.vibeerp.api.v1.plugin.PluginEndpointRegistrar (per-plugin scoped, register(method, path, handler)) - PluginContext.endpoints getter with default impl that throws UnsupportedOperationException so the addition is binary-compatible with plug-ins compiled against earlier api.v1 builds. * platform-plugins — three new files: - PluginEndpointRegistry: process-wide registration storage. Uses Spring's AntPathMatcher so {var} extracts path variables. Synchronized mutation. Exact-match fast path before pattern loop. Rejects duplicate (method, path) per plug-in. unregisterAll(plugin) on shutdown. - ScopedPluginEndpointRegistrar: per-plugin wrapper that tags every register() call with the right plugin id. Plug-ins cannot register under another plug-in's namespace. - PluginEndpointDispatcher: single Spring @RestController at /api/v1/plugins/{pluginId}/** that catches GET/POST/PUT/PATCH/DELETE, asks the registry for a match, builds a PluginRequest, calls the handler, serializes the response. 404 on no match, 500 on handler throw (logged with stack trace). - DefaultPluginContext: implements PluginContext with a real SLF4J-backed logger (every line tagged with the plug-in id) and the scoped endpoint registrar. 
The other six services (eventBus, transaction, translator, localeProvider, permissionCheck, entityRegistry) throw UnsupportedOperationException with messages pointing at the implementation plan unit that will land each one. Loud failure beats silent no-op. * VibeErpPluginManager — after PF4J's startPlugins() now walks every loaded plug-in, casts the wrapper instance to api.v1.plugin.Plugin, and calls start(context) with a freshly-built DefaultPluginContext. Tracks the started set so destroy() can call stop() and unregisterAll() in reverse order. Catches plug-in start failures loudly without bringing the framework down. * Reference plug-in (PrintingShopPlugin): - Now extends BOTH org.pf4j.Plugin (so PF4J's loader can instantiate it via the Plugin-Class manifest entry) AND org.vibeerp.api.v1.plugin.Plugin (so the host's vibe_erp lifecycle hook can call start(context)). Uses Kotlin import aliases to disambiguate the two `Plugin` simple names. - In start(context), registers two endpoints: GET /ping — returns {plugin, version, ok, message} GET /echo/{name} — extracts path variable, echoes it back - The /echo handler proves path-variable extraction works end-to-end. * Build infrastructure: - reference-customer/plugin-printing-shop now has an `installToDev` Gradle task that builds the JAR and stages it into <repo>/plugins-dev/. The task wipes any previous staged copies first so renaming the JAR on a version bump doesn't leave PF4J trying to load two versions. - distribution's `bootRun` task now (a) depends on `installToDev` so the staging happens automatically and (b) sets workingDir to the repo root so application-dev.yaml's relative `vibeerp.plugins.directory: ./plugins-dev` resolves to the right place. Without (b) bootRun's CWD was distribution/ and PF4J found "No plugins" — which is exactly the bug that surfaced in the first smoke run. - .gitignore now excludes /plugins-dev/ and /files-dev/. 
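The `{var}` extraction the registry delegates to Spring's AntPathMatcher can be illustrated with a minimal segment matcher (sketch only; the real matcher supports the full Ant syntax plus the literal-beats-pattern precedence described above):

```kotlin
// Minimal stand-in for the AntPathMatcher behavior the registry relies on:
// segment-by-segment comparison with {var} capture. Returns the extracted
// variables on a match, or null on a miss.
fun matchTemplate(template: String, path: String): Map<String, String>? {
    val t = template.trim('/').split('/')
    val p = path.trim('/').split('/')
    if (t.size != p.size) return null
    val vars = mutableMapOf<String, String>()
    for ((seg, actual) in t.zip(p)) {
        when {
            seg.startsWith("{") && seg.endsWith("}") ->
                vars[seg.substring(1, seg.length - 1)] = actual
            seg != actual -> return null
        }
    }
    return vars
}

fun main() {
    println(matchTemplate("/echo/{name}", "/echo/hello")) // {name=hello}
    println(matchTemplate("/plates/{id}", "/plates"))     // null: wrong segment count
    println(matchTemplate("/ping", "/ping"))              // {}: literal match, no vars
}
```

The non-null-but-empty map for a literal hit is what lets the dispatcher treat literal and templated routes uniformly when building the PluginRequest.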
Tests: 12 new unit tests for PluginEndpointRegistry covering literal paths, single/multi path variables, duplicate registration rejection, literal-vs-pattern precedence, cross-plug-in isolation, method matching, and unregisterAll. Total now 61 unit tests across the framework, all green. End-to-end smoke test against fresh Postgres + the plug-in JAR loaded by PF4J at boot (10/10 passing): GET /api/v1/plugins/printing-shop/ping (no auth) → 401 POST /api/v1/auth/login → access token GET /api/v1/plugins/printing-shop/ping (Bearer) → 200 {plugin, version, ok, message} GET /api/v1/plugins/printing-shop/echo/hello → 200, echoed=hello GET /api/v1/plugins/printing-shop/echo/world → 200, echoed=world GET /api/v1/plugins/printing-shop/nonexistent → 404 (no handler) GET /api/v1/plugins/missing-plugin/ping → 404 (no plugin) POST /api/v1/plugins/printing-shop/ping → 404 (wrong method) GET /api/v1/catalog/uoms (Bearer) → 200, 15 UoMs GET /api/v1/identity/users (Bearer) → 200, 1 user PF4J resolved the JAR, started the plug-in, the host called vibe_erp's start(context), the plug-in registered two endpoints, and the dispatcher routed real HTTP traffic to the plug-in's lambdas. The boot log shows the full chain. What is explicitly NOT in this chunk and remains for later: • plug-in linter (P1.2) — bytecode scan for forbidden imports • plug-in Liquibase application (P1.4) — plug-in-owned schemas • per-plug-in Spring child context — currently we just instantiate the plug-in via PF4J's classloader; there is no Spring context for the plug-in's own beans • PluginContext.eventBus / transaction / translator / etc. 
— they still throw UnsupportedOperationException with TODO messages • Path-template precedence between multiple competing patterns (only literal-beats-pattern is implemented, not most-specific-pattern) • Permission checks at the dispatcher (Spring Security still catches plug-in endpoints with the global "anyRequest authenticated" rule, which is the right v0.5 behavior) • Hot reload of plug-ins (cold restart only) Bug encountered and fixed during the smoke test: • application-dev.yaml has `vibeerp.plugins.directory: ./plugins-dev`, a relative path. Gradle's `bootRun` task by default uses the subproject's directory as the working directory, so the relative path resolved to <repo>/distribution/plugins-dev/ instead of <repo>/plugins-dev/. PF4J reported "No plugins" because that directory was empty. Fixed by setting bootRun.workingDir = rootProject.layout.projectDirectory.asFile. • One KDoc comment in PluginEndpointDispatcher contained the literal string `/api/v1/plugins/{pluginId}/**` inside backticks. The Kotlin lexer doesn't treat backticks as comment-suppressing, so `/**` opened a nested KDoc comment that was never closed and the file failed to compile. Same root cause as the AuthController bug earlier in the session. Rewrote the line to avoid the literal `/**` sequence.
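The bootRun fix above might look roughly like this in Gradle Kotlin DSL (a sketch of the change described in the bug note; the exact task path for installToDev is an assumption):

```kotlin
// distribution/build.gradle.kts (sketch): stage the plug-in first, then run
// from the repo root so the relative ./plugins-dev in application-dev.yaml
// resolves to <repo>/plugins-dev instead of <repo>/distribution/plugins-dev.
tasks.named<org.springframework.boot.gradle.tasks.run.BootRun>("bootRun") {
    dependsOn(":reference-customer:plugin-printing-shop:installToDev") // assumed task path
    workingDir = rootProject.layout.projectDirectory.asFile
}
```

Setting workingDir explicitly beats making the YAML path absolute: the config stays portable and the dev convention (`./plugins-dev` at the repo root) stays visible in one place.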