• P1.10 follow-up. Plug-ins can now register background job handlers
    the same way they already register workflow task handlers. The
    reference printing-shop plug-in ships a real PlateCleanupJobHandler
    that reads from its own database via `context.jdbc` as the
    executable acceptance test.
    
    ## Why this wasn't in the P1.10 chunk
    
    P1.10 landed the core scheduler + registry + Quartz bridge + HTTP
    surface, but the plug-in-loader integration was deliberately
    deferred — the JobHandlerRegistry already supported owner-tagged
    `register(handler, ownerId)` and `unregisterAllByOwner(ownerId)`,
    so the seam was defined; it just didn't have a caller from the
    PF4J plug-in side. Without a real plug-in consumer, shipping the
    integration would have been speculative.
    
    This commit closes the gap in exactly the shape the TaskHandler
    side already has: new api.v1 registrar interface, new scoped
    registrar in platform-plugins, one constructor parameter on
    DefaultPluginContext, one new field on VibeErpPluginManager, and
    the teardown paths all fall out automatically because
    JobHandlerRegistry already implements the owner-tagged cleanup.
    
    ## api.v1 additions
    
    - `org.vibeerp.api.v1.jobs.PluginJobHandlerRegistrar` — single
      method `register(handler: JobHandler)`. Mirrors
      `PluginTaskHandlerRegistrar` exactly, same ergonomics, same
      duplicate-key-throws discipline.
    - `PluginContext.jobs: PluginJobHandlerRegistrar` — new optional
      member with the default-throw backward-compat pattern used for
      `endpoints`, `jdbc`, `taskHandlers`, and `files`. An older host
      loading a newer plug-in jar fails loudly at first call rather
      than silently dropping scheduled work.
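    In condensed form, the api.v1 shape is roughly this (a hedged sketch:
    `JobHandler` is reduced to a key-only stand-in and the exception
    message is illustrative, not the real api.v1 source):

    ```kotlin
    // Key-only stand-in for the framework's JobHandler contract.
    interface JobHandler {
        val key: String
    }

    interface PluginJobHandlerRegistrar {
        // Duplicate keys throw, mirroring PluginTaskHandlerRegistrar.
        fun register(handler: JobHandler)
    }

    interface PluginContext {
        // Default-throw backward-compat pattern: a host that never overrides
        // this member fails loudly at first call instead of silently dropping
        // scheduled work.
        val jobs: PluginJobHandlerRegistrar
            get() = throw UnsupportedOperationException(
                "PluginContext.jobs is not supported by this host"
            )
    }
    ```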
    
    ## platform-plugins wiring
    
    - New dependency on `:platform:platform-jobs`.
    - New internal class
      `org.vibeerp.platform.plugins.jobs.ScopedJobHandlerRegistrar`
      that implements the api.v1 registrar by delegating
      `register(handler)` to `hostRegistry.register(handler, ownerId = pluginId)`.
    - `DefaultPluginContext` gains a `scopedJobHandlers` constructor
      parameter and exposes it as `PluginContext.jobs`.
    - `VibeErpPluginManager`:
      * injects `JobHandlerRegistry`
      * constructs `ScopedJobHandlerRegistrar(registry, pluginId)` per
        plug-in when building `DefaultPluginContext`
      * partial-start failure now also calls
        `jobHandlerRegistry.unregisterAllByOwner(pluginId)`, matching
        the existing endpoint + taskHandler + BPMN-deployment cleanups
      * `destroy()` reverse-iterates `started` and calls the same
        `unregisterAllByOwner` alongside the other four teardown steps
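    The delegation can be sketched as follows, with minimal stand-in
    types for the real platform-jobs / platform-plugins classes:

    ```kotlin
    import java.util.concurrent.ConcurrentHashMap

    interface JobHandler { val key: String }

    class JobHandlerRegistry {
        private data class Entry(val handler: JobHandler, val ownerId: String)
        private val entries = ConcurrentHashMap<String, Entry>()

        fun register(handler: JobHandler, ownerId: String) {
            val prev = entries.putIfAbsent(handler.key, Entry(handler, ownerId))
            check(prev == null) {
                "JobHandler '${handler.key}' already registered (owner='${prev?.ownerId}')"
            }
        }

        // Strips every handler owned by the given id; returns the count for logs.
        fun unregisterAllByOwner(ownerId: String): Int {
            val owned = entries.filterValues { it.ownerId == ownerId }.keys
            owned.forEach { entries.remove(it) }
            return owned.size
        }

        fun keys(): Set<String> = entries.keys
    }

    // The plug-in only ever sees register(handler); the owner id is pinned
    // by the host at construction time, so a plug-in cannot spoof ownership.
    class ScopedJobHandlerRegistrar(
        private val hostRegistry: JobHandlerRegistry,
        private val pluginId: String,
    ) {
        fun register(handler: JobHandler) = hostRegistry.register(handler, pluginId)
    }
    ```

    Constructing the registrar fresh per plug-in is what makes teardown
    free: `unregisterAllByOwner(pluginId)` finds everything the scoped
    registrar ever registered, and nothing else.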
    
    ## Reference plug-in — PlateCleanupJobHandler
    
    New file
    `reference-customer/plugin-printing-shop/.../jobs/PlateCleanupJobHandler.kt`.
    Key `printing_shop.plate.cleanup`. Captures the `PluginContext`
    via constructor — same "handler-side plug-in context access"
    pattern the printing-shop plug-in already uses for its
    TaskHandlers.
    
    The handler is READ-ONLY in its v1 incarnation: it runs a
    GROUP-BY query over `plugin_printingshop__plate` via
    `context.jdbc.query(...)` and logs a per-status summary via
    `context.logger.info(...)`. A real cleanup job would also run an
    `UPDATE`/`DELETE` to prune DRAFT plates older than N days; the
    read-only shape is enough to exercise the seam end-to-end without
    introducing a retention policy the customer hasn't asked for.
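    A rough stand-in for the handler shape (the context types below are
    hypothetical; the real handler goes through `context.jdbc` and
    `context.logger`, whose exact signatures this sketch does not
    reproduce):

    ```kotlin
    data class StatusCount(val status: String, val count: Long)

    // Stand-ins for the plug-in context's jdbc and logger surfaces.
    interface PlateQueries { fun countPlatesByStatus(): List<StatusCount> }
    interface PluginLogger { fun info(msg: String) }

    class PlateCleanupJobHandler(
        private val queries: PlateQueries,
        private val logger: PluginLogger,
    ) {
        val key = "printing_shop.plate.cleanup"

        fun execute(correlationId: String) {
            logger.info("PlateCleanupJobHandler firing corr='$correlationId'")
            // v1 is read-only: a GROUP BY summary over the plug-in's own
            // table, no UPDATE/DELETE pruning yet.
            val rows = queries.countPlatesByStatus()
            val total = rows.sumOf { it.count }
            logger.info("PlateCleanupJobHandler summary: total=$total byStatus=$rows")
        }
    }
    ```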
    
    `PrintingShopPlugin.start(context)` now registers the handler
    alongside its two TaskHandlers:
    
        context.taskHandlers.register(PlateApprovalTaskHandler(context))
        context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context))
        context.jobs.register(PlateCleanupJobHandler(context))
    
    ## Smoke test (fresh DB, plug-in staged)
    
    ```
    # boot
    registered JobHandler 'vibeerp.jobs.ping' owner='core' ...
    JobHandlerRegistry initialised with 1 core JobHandler bean(s): [vibeerp.jobs.ping]
    ...
    registered JobHandler 'printing_shop.plate.cleanup' owner='printing-shop' ...
    [plugin:printing-shop] registered 1 JobHandler: printing_shop.plate.cleanup
    
    # HTTP: list handlers — now shows both
    GET /api/v1/jobs/handlers
      → {"count":2,"keys":["printing_shop.plate.cleanup","vibeerp.jobs.ping"]}
    
    # HTTP: trigger the plug-in handler — proves dispatcher routes to it
    POST /api/v1/jobs/handlers/printing_shop.plate.cleanup/trigger
      → 200 {"handlerKey":"printing_shop.plate.cleanup",
             "correlationId":"95969129-d6bf-4d9a-8359-88310c4f63b9",
             "startedAt":"...","finishedAt":"...","ok":true}
    
    # Handler-side logs prove context.jdbc + context.logger access
    [plugin:printing-shop] PlateCleanupJobHandler firing corr='95969129-...'
    [plugin:printing-shop] PlateCleanupJobHandler summary: total=0 byStatus=[]
    
    # SIGTERM — clean teardown
    [ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 2 handler(s)
    [ionShutdownHook] unregistered JobHandler 'printing_shop.plate.cleanup' (owner stopped)
    [ionShutdownHook] JobHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
    ```
    
    Every expected lifecycle event fires in the right order. Core
    handlers are untouched by plug-in teardown.
    
    ## Tests
    
    No new unit tests in this commit — the test coverage is inherited
    from the previously landed components:
      - `JobHandlerRegistryTest` already covers owner-tagged
        `register` / `unregister` / `unregisterAllByOwner` / duplicate
        key rejection.
      - `ScopedTaskHandlerRegistrar` behavior (which this commit
        mirrors structurally) is exercised end-to-end by the
        printing-shop plug-in boot path.
    - Total framework unit tests: 334 (unchanged from the
      quality→warehousing quarantine chunk), all green.
    
    ## What this unblocks
    
    - **Plug-in-shipped scheduled work.** The printing-shop plug-in
      can now add cron schedules for its cleanup handler via
      `POST /api/v1/jobs/scheduled {scheduleKey, handlerKey,
      cronExpression}` without the operator touching core code.
    - **Plug-in-to-plug-in handler coexistence.** Two plug-ins can
      now ship job handlers with distinct keys and be torn down
      independently on reload — the owner-tagged cleanup strips only
      the stopping plug-in's handlers, leaving other plug-ins' and
      core handlers alone.
    - **The "plug-in contributes everything" story.** The reference
      printing-shop plug-in now contributes via every public seam the
      framework has: HTTP endpoints (7), custom fields on core
      entities (5), BPMNs (2), TaskHandlers (2), and a JobHandler (1)
      — plus its own database schema, its own metadata YAML, its own
      i18n bundles. That's every extension point a real customer
      plug-in would want.
    
    ## Non-goals (parking lot)
    
    - A real retention policy in PlateCleanupJobHandler. The handler
      logs a summary but doesn't mutate state. Customer-specific
      pruning rules belong in a customer-owned plug-in or a metadata-
      driven rule once that seam exists.
    - A built-in cron schedule for the plug-in's handler. The
      plug-in only registers the handler; scheduling is an operator
      decision exposed through the HTTP surface from P1.10.
    zichun authored
  • ## What's new
    
    Plug-ins can now contribute workflow task handlers to the framework.
    The P2.1 `TaskHandlerRegistry` only saw `@Component` TaskHandler beans
    from the host Spring context; handlers defined inside a PF4J plug-in
    were invisible because the plug-in's child classloader is not in the
    host's bean list. This commit closes that gap.
    
    ## Mechanism
    
    ### api.v1
    
    - New interface `org.vibeerp.api.v1.workflow.PluginTaskHandlerRegistrar`
      with a single `register(handler: TaskHandler)` method. Plug-ins call
      it from inside their `start(context)` lambda.
    - `PluginContext.taskHandlers: PluginTaskHandlerRegistrar` — added as
      a new optional member with a default implementation that throws
      `UnsupportedOperationException("upgrade to v0.7 or later")`.
      Pre-existing plug-in jars remain binary-compatible with the new
      host, while a plug-in built against v0.7 of the api.v1 surface
      fails fast on an older host instead of silently doing nothing.
      Same pattern we used for `endpoints` and `jdbc`.
    
    ### platform-workflow
    
    - `TaskHandlerRegistry` gains owner tagging. Every registered handler
      now carries an `ownerId`: core `@Component` beans get
      `TaskHandlerRegistry.OWNER_CORE = "core"` (auto-assigned through
      the constructor-injection path), plug-in-contributed handlers get
      their PF4J plug-in id. New API:
      * `register(handler, ownerId = OWNER_CORE)` (default keeps existing
        call sites unchanged)
      * `unregisterAllByOwner(ownerId): Int` — strip every handler owned
        by that id in one call, returns the count for log correlation
      * The duplicate-key error message now includes both owners so a
        plug-in trying to stomp on a core handler gets an actionable
        "already registered by X (owner='core'), attempted by Y
        (owner='printing-shop')" instead of "already registered".
      * Internal storage switched from `ConcurrentHashMap<String, TaskHandler>`
        to `ConcurrentHashMap<String, Entry>` where `Entry` carries
        `(handler, ownerId)`. `find(key)` still returns `TaskHandler?`
        so the dispatcher is unchanged.
    - No behavioral change for the hot-path (`DispatchingJavaDelegate`) —
      only the registration/teardown paths changed.
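    Condensed into a sketch (key-only stand-in `TaskHandler`,
    illustrative error text following the format quoted above):

    ```kotlin
    import java.util.concurrent.ConcurrentHashMap

    interface TaskHandler { val key: String }

    class TaskHandlerRegistry {
        companion object { const val OWNER_CORE = "core" }

        private data class Entry(val handler: TaskHandler, val ownerId: String)
        private val entries = ConcurrentHashMap<String, Entry>()

        // Default owner keeps pre-existing core call sites unchanged.
        fun register(handler: TaskHandler, ownerId: String = OWNER_CORE) {
            require(ownerId.isNotBlank()) { "ownerId must not be blank" }
            val prev = entries.putIfAbsent(handler.key, Entry(handler, ownerId))
            check(prev == null) {
                "TaskHandler '${handler.key}' already registered " +
                    "(owner='${prev?.ownerId}'), attempted by owner='$ownerId'"
            }
        }

        // The dispatcher hot-path is unchanged: callers still get TaskHandler?.
        fun find(key: String): TaskHandler? = entries[key]?.handler
    }
    ```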
    
    ### platform-plugins
    
    - New dependency on `:platform:platform-workflow` (the only new inter-
      module dep of this chunk; it is the module that exposes
      `TaskHandlerRegistry`).
    - New internal class `ScopedTaskHandlerRegistrar(hostRegistry, pluginId)`
      that implements the api.v1 `PluginTaskHandlerRegistrar` by delegating
      `register(handler)` to `hostRegistry.register(handler, ownerId =
      pluginId)`. Constructed fresh per plug-in by `VibeErpPluginManager`,
      so the plug-in never sees (or can tamper with) the owner id.
    - `DefaultPluginContext` gains a `scopedTaskHandlers` constructor
      parameter and exposes it as the `PluginContext.taskHandlers`
      override.
    - `VibeErpPluginManager`:
      * injects `TaskHandlerRegistry`
      * constructs `ScopedTaskHandlerRegistrar(registry, pluginId)` per
        plug-in when building `DefaultPluginContext`
      * partial-start failure now also calls
        `taskHandlerRegistry.unregisterAllByOwner(pluginId)`, matching
        the existing `endpointRegistry.unregisterAll(pluginId)` cleanup
        so a throwing `start(context)` cannot leave stale registrations
      * `destroy()` calls the same `unregisterAllByOwner` for every
        started plug-in in reverse order, mirroring the endpoint cleanup
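    The failure path in miniature (hypothetical helper; the cleanup
    lambdas stand in for the real `endpointRegistry.unregisterAll` /
    `taskHandlerRegistry.unregisterAllByOwner` calls):

    ```kotlin
    class PluginStartException(pluginId: String, cause: Throwable) :
        RuntimeException("plug-in '$pluginId' failed to start", cause)

    fun startPlugin(
        pluginId: String,
        start: () -> Unit,
        cleanups: List<(String) -> Int>,
    ) {
        try {
            start()
        } catch (e: Exception) {
            // A throwing start(context) must not leave stale registrations:
            // sweep every owner-tagged registry, then surface the error.
            cleanups.forEach { it(pluginId) }
            throw PluginStartException(pluginId, e)
        }
    }
    ```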
    
    ### reference-customer/plugin-printing-shop
    
    - New file `workflow/PlateApprovalTaskHandler.kt` — the first plug-in-
      contributed TaskHandler in the framework. Key
      `printing_shop.plate.approve`. Reads a `plateId` process variable,
      writes `plateApproved`, `plateId`, `approvedBy` (principal label),
      `approvedAt` (ISO instant) and exits. No DB mutation yet: a proper
      plate-approval handler would UPDATE `plugin_printingshop__plate` via
      `context.jdbc`, but that requires handing the TaskHandler a
      projection of the PluginContext — a deliberate non-goal of this
      chunk, deferred to the "handler context" follow-up.
    - `PrintingShopPlugin.start(context)` now ends with
      `context.taskHandlers.register(PlateApprovalTaskHandler())` and logs
      the registration.
    - Package layout: `org.vibeerp.reference.printingshop.workflow` is
      the plug-in's workflow namespace going forward (the next printing-
      shop handlers for REF.1 — quote-to-job-card, job-card-to-work-order
      — will live alongside).
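    Approximate shape of the pure handler (the variable map and the
    principal label are stand-ins for the real TaskContext, which this
    sketch does not reproduce):

    ```kotlin
    import java.time.Instant

    class PlateApprovalTaskHandler {
        val key = "printing_shop.plate.approve"

        fun execute(variables: MutableMap<String, Any>, principalLabel: String) {
            val plateId = requireNotNull(variables["plateId"]) {
                "process variable 'plateId' is required"
            }
            // Pure variable writes, no DB mutation in this chunk.
            variables["plateApproved"] = true
            variables["plateId"] = plateId
            variables["approvedBy"] = principalLabel
            variables["approvedAt"] = Instant.now().toString() // ISO-8601 instant
        }
    }
    ```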
    
    ## Smoke test (fresh DB, plug-in staged)
    
    ```
    $ docker compose down -v && docker compose up -d db
    $ ./gradlew :distribution:bootRun &
    ...
    TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
    ...
    plug-in 'printing-shop' Liquibase migrations applied successfully
    vibe_erp plug-in loaded: id=printing-shop version=0.1.0-SNAPSHOT state=STARTED
    [plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
    registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' class='org.vibeerp.reference.printingshop.workflow.PlateApprovalTaskHandler'
    [plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve
    
    $ curl /api/v1/workflow/handlers (as admin)
    {
      "count": 2,
      "keys": ["printing_shop.plate.approve", "vibeerp.workflow.ping"]
    }
    
    $ curl /api/v1/plugins/printing-shop/ping  # plug-in HTTP still works
    {"plugin":"printing-shop","ok":true,"version":"0.1.0-SNAPSHOT", ...}
    
    $ curl -X POST /api/v1/workflow/process-instances
             {"processDefinitionKey":"vibeerp-workflow-ping"}
      (principal propagation from previous commit still works — pingedBy=user:admin)
    
    $ kill -TERM <pid>
    [ionShutdownHook] vibe_erp stopping 1 plug-in(s)
    [ionShutdownHook] [plugin:printing-shop] printing-shop plug-in stopped
    [ionShutdownHook] unregistered TaskHandler 'printing_shop.plate.approve' (owner stopped)
    [ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
    ```
    
    Every expected lifecycle event fires in the right order with the
    right owner attribution. Core handlers are untouched by plug-in
    teardown.
    
    ## Tests
    
    - 4 new / updated tests on `TaskHandlerRegistryTest`:
      * `unregisterAllByOwner only removes handlers owned by that id`
        — 2 core + 2 plug-in, unregister the plug-in owner, only the
        2 plug-in keys are removed
      * `unregisterAllByOwner on unknown owner returns zero`
      * `register with blank owner is rejected`
      * Updated `duplicate key fails fast` to assert the new error
        message format including both owner ids
    - Total framework unit tests: 269 (was 265), all green.
    
    ## What this unblocks
    
    - **REF.1** (real printing-shop quote→job-card workflow) can now
      register its production handlers through the same seam
    - **Plug-in-contributed handlers with state access** — the next
      design question is how a plug-in handler gets at the plug-in's
      database and translator. Two options: pass a projection of the
      PluginContext through TaskContext, or keep a reference to the
      context captured at plug-in start (closure). The PlateApproval
      handler in this chunk is pure on purpose to keep the seam
      conversation separate.
    - **Plug-in-shipped BPMN auto-deployment** — Flowable's default
      classpath scan uses `classpath*:/processes/*.bpmn20.xml` which
      does NOT see PF4J plug-in classloaders. A dedicated
      `PluginProcessDeployer` that walks each started plug-in's JAR for
      BPMN resources and calls `repositoryService.createDeployment` is
      the natural companion to this commit, still pending.
    
    ## Non-goals (still parking lot)
    
    - BPMN processes shipped inside plug-in JARs (see above — needs
      its own chunk, because it requires reading resources from the
      PF4J classloader and constructing a Flowable deployment by hand)
    - Per-handler permission checks — a handler that wants a permission
      gate still has to call back through its own context; P4.3's
      @RequirePermission aspect doesn't reach into Flowable delegate
      execution.
    - Hot reload of a running plug-in's TaskHandlers. The seam supports
      it, but `unloadPlugin` + `loadPlugin` at runtime isn't exercised
      by any current caller.

  • The framework's authorization layer is now live. Until now, every
    authenticated user could do everything; the framework had only an
    authentication gate. This chunk adds method-level @RequirePermission
    annotations enforced by a Spring AOP aspect that consults the JWT's
    roles claim and a metadata-driven role-permission map.
    
    What landed
    -----------
    * New `Role` and `UserRole` JPA entities mapping the existing
      identity__role + identity__user_role tables (the schema was
      created in the original identity init but never wired to JPA).
      RoleJpaRepository + UserRoleJpaRepository with a JPQL query that
      returns a user's role codes in one round-trip.
    * `JwtIssuer.issueAccessToken(userId, username, roles)` now accepts a
      Set<String> of role codes and encodes them as a `roles` JWT claim
      (sorted for deterministic tests). Refresh tokens NEVER carry roles
      by design — see the rationale on `JwtIssuer.issueRefreshToken`. A
      role revocation propagates within one access-token lifetime
      (15 min default).
    * `JwtVerifier` reads the `roles` claim into `DecodedToken.roles`.
      Missing claim → empty set, NOT an error (refresh tokens, system
      tokens, and pre-P4.3 tokens all legitimately omit it).
    * `AuthService.login` now calls `userRoles.findRoleCodesByUserId(...)`
      before minting the access token. `AuthService.refresh` re-reads
      the user's roles too — so a refresh always picks up the latest
      set, since refresh tokens deliberately don't carry roles.
    * New `AuthorizationContext` ThreadLocal in `platform-security.authz`
      carrying an `AuthorizedPrincipal(id, username, roles)`. Separate
      from `PrincipalContext` (which lives in platform-persistence and
      carries only the principal id, for the audit listener). The two
      contexts coexist because the audit listener has no business
      knowing what roles a user has.
    * `PrincipalContextFilter` now populates BOTH contexts on every
      authenticated request, reading the JWT's `username` and `roles`
      claims via `Jwt.getClaimAsStringList("roles")`. The filter is the
      one and only place that knows about Spring Security types AND
      about both vibe_erp contexts; everything downstream uses just the
      Spring-free abstractions.
    * `PermissionEvaluator` Spring bean: takes a role set + permission
      key, returns boolean. Resolution chain:
      1. The literal `admin` role short-circuits to `true` for every
         key (the wildcard exists so the bootstrap admin can do
         everything from the very first boot without seeding a complete
         role-permission mapping).
      2. Otherwise consults an in-memory `Map<role, Set<permission>>`
         loaded from `metadata__role_permission` rows. The cache is
         rebuilt by `refresh()`, called from `VibeErpPluginManager`
         after the initial core load AND after every plug-in load.
      3. Empty role set is always denied. No implicit grants.
    * `@RequirePermission("...")` annotation in `platform-security.authz`.
      `RequirePermissionAspect` is a Spring AOP @Aspect with @Around
      advice that intercepts every annotated method, reads the current
      request's `AuthorizationContext`, calls
      `PermissionEvaluator.has(...)`, and either proceeds or throws
      `PermissionDeniedException`.
    * New `PermissionDeniedException` carrying the offending key.
      `GlobalExceptionHandler` maps it to HTTP 403 Forbidden with
      `"permission denied: 'partners.partner.deactivate'"` as the
      detail. The key IS surfaced to the caller (unlike the 401's
      generic "invalid credentials") because the SPA needs it to
      render a useful "your role doesn't include X" message and
      callers are already authenticated, so it's not an enumeration
      vector.
    * `BootstrapAdminInitializer` now creates the wildcard `admin`
      role on first boot and grants it to the bootstrap admin user.
    * `@RequirePermission` applied to four sensitive endpoints as the
      demo: `PartnerController.deactivate`,
      `StockBalanceController.adjust`, `SalesOrderController.confirm`,
      `SalesOrderController.cancel`. More endpoints will gain
      annotations as additional roles are introduced; v1 keeps the
      blast radius narrow.
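    The resolution chain condenses to a few lines. This sketch injects
    the role→permission map directly instead of loading
    `metadata__role_permission` rows, so it is illustrative rather than
    the real bean:

    ```kotlin
    class PermissionEvaluator(private var grants: Map<String, Set<String>>) {

        // The real bean re-reads metadata__role_permission rows here.
        fun refresh(newGrants: Map<String, Set<String>>) { grants = newGrants }

        fun has(roles: Set<String>, permissionKey: String): Boolean {
            if (roles.isEmpty()) return false // 3. empty role set: always denied
            if ("admin" in roles) return true // 1. wildcard short-circuit
            return roles.any { role ->        // 2. metadata-driven map lookup
                permissionKey in grants[role].orEmpty()
            }
        }
    }
    ```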
    
    End-to-end smoke test
    ---------------------
    Reset Postgres, booted the app, verified:
    * Admin login → JWT length 265 (was 241), decoded claims include
      `"roles":["admin"]`
    * Admin POST /sales-orders/{id}/confirm → 200, status DRAFT → CONFIRMED
      (admin wildcard short-circuits the permission check)
    * Inserted a 'powerless' user via raw SQL with no role assignments
      but copied the admin's password hash so login works
    * Powerless login → JWT length 247, decoded claims have NO roles
      field at all
    * Powerless POST /sales-orders/{id}/cancel → **403 Forbidden** with
      `"permission denied: 'orders.sales.cancel'"` in the body
    * Powerless DELETE /partners/{id} → **403 Forbidden** with
      `"permission denied: 'partners.partner.deactivate'"`
    * Powerless GET /sales-orders, /partners, /catalog/items → all 200
      (read endpoints have no @RequirePermission)
    * Admin regression: catalog uoms, identity users, inventory
      locations, printing-shop plates with i18n, metadata custom-fields
      endpoint — all still HTTP 2xx
    
    Build
    -----
    * `./gradlew build`: 15 subprojects, 163 unit tests (was 153),
      all green. The 10 new tests cover:
      - PermissionEvaluator: empty roles deny, admin wildcard, explicit
        role-permission grant, multi-role union, unknown role denial,
        malformed payload tolerance, currentHas with no AuthorizationContext,
        currentHas with bound context (8 tests).
      - JwtRoundTrip: roles claim round-trips through the access token,
        refresh token never carries roles even when asked (2 tests).
    
    What was deferred
    -----------------
    * **OIDC integration (P4.2)**. Built-in JWT only. The Keycloak-
      compatible OIDC client will reuse the same authorization layer
      unchanged — the roles will come from OIDC ID tokens instead of
      the local user store.
    * **Permission key validation at boot.** The framework does NOT
      yet check that every `@RequirePermission` value matches a
      declared metadata permission key. The plug-in linter is the
      natural place for that check to land later.
    * **Role hierarchy**. Roles are flat in v1; a role with permission
      X cannot inherit from another role. Adding a `parent_role` field
      on the role row is a non-breaking change later.
    * **Resource-aware permissions** ("the user owns THIS partner").
      v1 only checks the operation, not the operand. Resource-aware
      checks are post-v1.
    * **Composite (AND/OR) permission requirements**. A single key
      per call site keeps the contract simple. Composite requirements
      live in service code that calls `PermissionEvaluator.currentHas`
      directly.
    * **Role management UI / REST**. The framework can EVALUATE
      permissions but has no first-class endpoints for "create a
      role", "grant a permission to a role", "assign a role to a
      user". v1 expects these to be done via direct DB writes or via
      the future SPA's role editor (P3.x); the wiring above is
      intentionally policy-only, not management.
  • The eighth cross-cutting platform service is live: plug-ins and PBCs
    now have a real Translator and LocaleProvider instead of the
    UnsupportedOperationException stubs that have shipped since v0.5.
    
    What landed
    -----------
    * New Gradle subproject `platform/platform-i18n` (13 modules total).
    * `IcuTranslator` — backed by ICU4J's MessageFormat (named placeholders,
      plurals, gender, locale-aware number/date/currency formatting), the
      format every modern translation tool speaks natively. JDK ResourceBundle
      handles per-locale fallback (zh_CN → zh → root).
    * The translator takes a list of `BundleLocation(classLoader, baseName)`
      pairs and tries them in order. For the **core** translator the chain
      is just `[(host, "messages")]`; for a **per-plug-in** translator
      constructed by VibeErpPluginManager it's
      `[(pluginClassLoader, "META-INF/vibe-erp/i18n/messages"), (host, "messages")]`
      so plug-in keys override host keys, but plug-ins still inherit
      shared keys like `errors.not_found`.
    * Critical detail: the plug-in baseName uses a path the host does NOT
      publish, because PF4J's `PluginClassLoader` is parent-first — a
      `getResource("messages.properties")` against the plug-in classloader
      would find the HOST bundle through the parent chain, defeating the
      per-plug-in override entirely. Naming the plug-in resource somewhere
      the host doesn't claim sidesteps the trap.
    * The translator disables `ResourceBundle.Control`'s automatic JVM-default
      locale fallback. The default control walks `requested → root → JVM
      default → root` which would silently serve German strings to a
      Japanese-locale request just because the German bundle exists. The
      fallback chain stops at root within a bundle, then moves to the next
      bundle location, then returns the key string itself.
    * `RequestLocaleProvider` reads the active HTTP request's
      Accept-Language via `RequestContextHolder` + the servlet container's
      `getLocale()`. Outside an HTTP request (background jobs, workflow
      tasks, MCP agents) it falls back to the configured default locale
      (`vibeerp.i18n.defaultLocale`, default `en`). Importantly, when an
      HTTP request HAS no Accept-Language header it ALSO falls back to the
      configured default — never to the JVM's locale.
    * `I18nConfiguration` exposes `coreTranslator` and `coreLocaleProvider`
      beans. Per-plug-in translators are NOT beans — they're constructed
      imperatively per plug-in start in VibeErpPluginManager because each
      needs its own classloader at the front of the resolution chain.
    * `DefaultPluginContext` now wires `translator` and `localeProvider`
      for real instead of throwing `UnsupportedOperationException`.
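    The bundle-chain core can be sketched with plain `ResourceBundle`
    (the ICU4J MessageFormat step is omitted, and `ChainedLookup` is an
    illustrative name, not the IcuTranslator source):

    ```kotlin
    import java.util.Locale
    import java.util.MissingResourceException
    import java.util.ResourceBundle

    data class BundleLocation(val classLoader: ClassLoader, val baseName: String)

    class ChainedLookup(private val chain: List<BundleLocation>) {
        // Stops per-bundle fallback at root, never at the JVM default locale.
        private val noJvmFallback: ResourceBundle.Control =
            ResourceBundle.Control.getNoFallbackControl(ResourceBundle.Control.FORMAT_PROPERTIES)

        fun resolve(key: String, locale: Locale): String {
            for (location in chain) {
                try {
                    val bundle = ResourceBundle.getBundle(
                        location.baseName, locale, location.classLoader, noJvmFallback
                    )
                    if (bundle.containsKey(key)) return bundle.getString(key)
                } catch (ignored: MissingResourceException) {
                    // No bundle at this location for any candidate locale;
                    // move to the next location in the chain.
                }
            }
            return key // last resort: surface the key itself
        }
    }
    ```

    A per-plug-in translator would be constructed with the plug-in
    location first and the host location second, which is exactly the
    override-then-inherit behavior described above.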
    
    Bundles
    -------
    * Core: `platform-i18n/src/main/resources/messages.properties` (English),
      `messages_zh_CN.properties` (Simplified Chinese), `messages_de.properties`
      (German). Six common keys (errors, ok/cancel/save/delete) and an ICU
      plural example for `counts.items`. Java 9+ JEP 226 reads .properties
      files as UTF-8 by default, so Chinese characters are written directly
      rather than as `\uXXXX` escapes.
    * Reference plug-in: moved from the broken `i18n/messages_en-US.properties`
      / `messages_zh-CN.properties` (wrong path, hyphen-locale filenames
      ResourceBundle ignores) to the canonical
      `META-INF/vibe-erp/i18n/messages.properties` /
      `messages_zh_CN.properties` paths with underscore locale tags.
      Added a new `printingshop.plate.created` key with an ICU plural for
      `ink_count` to demonstrate non-trivial argument substitution.
    
    End-to-end smoke test
    ---------------------
    Reset Postgres, booted the app, hit POST /api/v1/plugins/printing-shop/plates
    with three different Accept-Language headers:
    * (no header)         → "Plate 'PLATE-001' created with no inks." (en-US, plug-in base bundle)
    * `Accept-Language: zh-CN` → "已创建印版 'PLATE-002' (无油墨)。" (zh-CN, plug-in zh_CN bundle)
    * `Accept-Language: de`    → "Plate 'PLATE-003' created with no inks." (de, but the plug-in
                                  ships no German bundle so it falls back to the plug-in base
                                  bundle — correct, the key is plug-in-specific)
    Regression: identity, catalog, partners, and `GET /plates` all still
    HTTP 200 after the i18n wiring change.
    
    Build
    -----
    * `./gradlew build`: 13 subprojects, 118 unit tests (was 107 tests
      across 12 subprojects),
      all green. The 11 new tests cover ICU plural rendering, named-arg
      substitution, locale fallback (zh_CN → root, ja → root via NO_FALLBACK),
      cross-classloader override (a real JAR built in /tmp at test time),
      and RequestLocaleProvider's three resolution paths
      (no request → default; Accept-Language present → request locale;
      request without Accept-Language → default, NOT JVM locale).
    * The architectural rule still enforced: platform-plugins now imports
      platform-i18n, which is a platform-* dependency (allowed), not a
      pbc-* dependency (forbidden).
    
    What was deferred
    -----------------
    * User-preferred locale from the authenticated user's profile row is
      NOT in the resolution chain yet — the `LocaleProvider` interface
      leaves room for it but the implementation only consults
      Accept-Language and the configured default. Adding it slots in
      between request and default without changing the api.v1 surface.
    * The metadata translation overrides table (`metadata__translation`)
      is also deferred — the `Translator` JavaDoc mentions it as the
      first lookup source, but right now keys come from .properties files
      only. Once Tier 1 customisation lands (P3.x), key users will be able
      to override any string from the SPA without touching code.
  • Adds the foundation for the entire Tier 1 customization story. Core
    PBCs and plug-ins now ship YAML files declaring their entities,
    permissions, and menus; a `MetadataLoader` walks the host classpath
    and each plug-in JAR at boot, upserts the rows tagged with their
    source, and exposes them at a public REST endpoint so the future
    SPA, AI-agent function catalog, OpenAPI generator, and external
    introspection tooling can all see what the framework offers without
    scraping code.
    
    What landed:
    
    * New `platform/platform-metadata/` Gradle subproject. Depends on
      api-v1 + platform-persistence + jackson-yaml + spring-jdbc.
    
    * `MetadataYamlFile` DTOs (entities, permissions, menus). Forward-
      compatible: unknown top-level keys are ignored, so a future plug-in
      built against a newer schema (forms, workflows, rules, translations)
      loads cleanly on an older host that doesn't know those sections yet.
    
    * `MetadataLoader` with two entry points:
    
        loadCore() — uses Spring's PathMatchingResourcePatternResolver
          against the host classloader. Finds every classpath*:META-INF/
          vibe-erp/metadata/*.yml across all jars contributing to the
          application. Tagged source='core'.
    
        loadFromPluginJar(pluginId, jarPath) — opens ONE specific
          plug-in JAR via java.util.jar.JarFile and walks its entries
          directly. This is critical: a plug-in's PluginClassLoader is
          parent-first, so a classpath*: scan against it would ALSO
          pick up the host's metadata files via parent classpath. We
          saw this in the first smoke run — the plug-in source ended
          up with 6 entities (the plug-in's 2 + the host's 4) before
          the fix. Walking the JAR file directly guarantees only the
          plug-in's own files load. Tagged source='plugin:<id>'.
    
      Both entry points use the same delete-then-insert idempotent core
      (doLoad). Loading the same source twice produces the same final
      state. User-edited metadata (source='user') is NEVER touched by
      either path — it survives boot, plug-in install, and plug-in
      upgrade. This is what lets a future SPA "Customize" UI add custom
      fields without fearing they'll be wiped on the next deploy.
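    The JAR walk at the heart of loadFromPluginJar can be sketched in a
    few lines (a simplified stand-in, not the real loader; only the
    META-INF path prefix comes from the convention above):

```kotlin
import java.util.jar.JarFile

// Simplified sketch: list only the plug-in's own metadata files by walking
// the JAR directly, never consulting the (parent-first) plug-in classloader.
fun listPluginMetadataFiles(jarPath: String): List<String> =
    JarFile(jarPath).use { jar ->
        java.util.Collections.list(jar.entries())
            .filter { !it.isDirectory }
            .map { it.name }
            .filter { it.startsWith("META-INF/vibe-erp/metadata/") && it.endsWith(".yml") }
            .sorted()
    }
```

    Because the enumeration comes straight from the JAR file, host
    classpath entries can never leak into a plug-in's source tag.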
    
    * `VibeErpPluginManager.afterPropertiesSet()` now calls
      metadataLoader.loadCore() at the very start, then walks plug-ins
      and calls loadFromPluginJar(...) for each one between Liquibase
      migration and start(context). Order is guaranteed: core → linter
      → migrate → metadata → start. The CommandLineRunner I originally
      put `loadCore()` in turned out to be wrong because Spring runs
      CommandLineRunners AFTER InitializingBean.afterPropertiesSet(),
      so the plug-in metadata was loading BEFORE core — the wrong way
      around. Calling loadCore() inline in the plug-in manager fixes
      the ordering without any @Order(...) gymnastics.
    
    * `MetadataController` exposes:
        GET /api/v1/_meta/metadata           — all three sections
        GET /api/v1/_meta/metadata/entities  — entities only
        GET /api/v1/_meta/metadata/permissions
        GET /api/v1/_meta/metadata/menus
      Public allowlist (covered by the existing /api/v1/_meta/** rule
      in SecurityConfiguration). The metadata is intentionally non-
      sensitive — entity names, permission keys, menu paths. Nothing
      in here is PII or secret; the SPA needs to read it before the
      user has logged in.
    
    * YAML files shipped:
      - pbc-identity/META-INF/vibe-erp/metadata/identity.yml
        (User + Role entities, 6 permissions, Users + Roles menus)
      - pbc-catalog/META-INF/vibe-erp/metadata/catalog.yml
        (Item + Uom entities, 7 permissions, Items + UoMs menus)
      - reference plug-in/META-INF/vibe-erp/metadata/printing-shop.yml
        (Plate + InkRecipe entities, 5 permissions, Plates + Inks menus
        in a "Printing shop" section)
    
    Tests: 4 MetadataLoaderTest cases (loadFromPluginJar happy paths,
    mixed sections, blank pluginId rejection, missing-file no-op wipe)
    + 7 MetadataYamlParseTest cases (DTO mapping, optional fields,
    section defaults, forward-compat unknown keys). Total now
    **92 unit tests** across 11 modules, all green.
    
    End-to-end smoke test against fresh Postgres + plug-in loaded:
    
      Boot logs:
        MetadataLoader: source='core' loaded 4 entities, 13 permissions,
          4 menus from 2 file(s)
        MetadataLoader: source='plugin:printing-shop' loaded 2 entities,
          5 permissions, 2 menus from 1 file(s)
    
      HTTP smoke (everything green):
        GET /api/v1/_meta/metadata (no auth)              → 200
          6 entities, 18 permissions, 6 menus
          entity names: User, Role, Item, Uom, Plate, InkRecipe
          menu sections: Catalog, Printing shop, System
        GET /api/v1/_meta/metadata/entities                → 200
        GET /api/v1/_meta/metadata/menus                   → 200
    
      Direct DB verification:
        metadata__entity:    core=4, plugin:printing-shop=2
        metadata__permission: core=13, plugin:printing-shop=5
        metadata__menu:      core=4, plugin:printing-shop=2
    
      Idempotency: restart the app, identical row counts.
    
      Existing endpoints regression:
        GET /api/v1/identity/users (Bearer)               → 1 user
        GET /api/v1/catalog/uoms (Bearer)                  → 15 UoMs
        GET /api/v1/plugins/printing-shop/ping (Bearer)    → 200
    
    Bugs caught and fixed during the smoke test:
    
      • The first attempt loaded core metadata via a CommandLineRunner
        annotated @Order(HIGHEST_PRECEDENCE) and per-plug-in metadata
        inline in VibeErpPluginManager.afterPropertiesSet(). Spring
        runs all InitializingBeans BEFORE any CommandLineRunner, so
        the plug-in metadata loaded first and the core load came
        second — wrong order. Fix: drop CoreMetadataInitializer
        entirely; have the plug-in manager call metadataLoader.loadCore()
        directly at the start of afterPropertiesSet().
    
      • The first attempt's plug-in load used
        metadataLoader.load(pluginClassLoader, ...) which used Spring's
        PathMatchingResourcePatternResolver against the plug-in's
        classloader. PluginClassLoader is parent-first, so the resolver
        enumerated BOTH the plug-in's own JAR AND the host classpath's
        metadata files, tagging core entities as source='plugin:<id>'
        and corrupting the seed counts. Fix: refactor MetadataLoader
        to expose loadFromPluginJar(pluginId, jarPath) which opens
        the plug-in JAR directly via java.util.jar.JarFile and walks
        its entries — never asking the classloader at all. The
        api-v1 surface didn't change.
    
      • Two KDoc comments contained the literal string `*.yml` after
        a `/` character (`/metadata/*.yml`), forming a `/*` sequence
        that Kotlin's lexer treats as a nested-comment opener. The
        file failed to compile with "Unclosed comment". This is the
        third time this trap has bitten; fixed by rewriting both
        KDocs to avoid the literal `/*` sequence.
    
      • The MetadataLoaderTest's hand-rolled JAR builder didn't include
        explicit directory entries for parent paths. Real Gradle JARs
        do include them, and Spring's PathMatchingResourcePatternResolver
        needs them to enumerate via classpath*:. Fixed the test helper
        to write directory entries for every parent of each file.
    
    Implementation plan refreshed: P1.5 marked DONE. Next priority
    candidates: P5.2 (pbc-partners — third PBC clone) and P3.4 (custom
    field application via the ext jsonb column, which would unlock the
    full Tier 1 customization story).
    
    Framework state: 17→18 commits, 10→11 modules, 81→92 unit tests,
    metadata seeded for 6 entities + 18 permissions + 6 menus.
    vibe_erp authored
  • The reference printing-shop plug-in graduates from "hello world" to a
    real customer demonstration: it now ships its own Liquibase changelog,
    owns its own database tables, and exposes a real domain (plates and
    ink recipes) via REST that goes through `context.jdbc` — a new
    typed-SQL surface in api.v1 — without ever touching Spring's
    `JdbcTemplate` or any other host internal type. A bytecode linter
    that runs before plug-in start refuses to load any plug-in that tries
    to import `org.vibeerp.platform.*` or `org.vibeerp.pbc.*` classes.
    
    What landed:
    
    * api.v1 (additive, binary-compatible):
      - PluginJdbc — typed SQL access with named parameters. Methods:
        query, queryForObject, update, inTransaction. No Spring imports
        leaked. Forces plug-ins to use named params (no positional ?).
      - PluginRow — typed nullable accessors over a single result row:
        string, int, long, uuid, bool, instant, bigDecimal. Hides
        java.sql.ResultSet entirely.
      - PluginContext.jdbc getter with default impl that throws
        UnsupportedOperationException so older builds remain binary
        compatible per the api.v1 stability rules.
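    The api.v1 shapes read roughly like this (the method lists come from
    the text above; every signature here is an assumption, not the real
    surface):

```kotlin
import java.math.BigDecimal
import java.time.Instant
import java.util.UUID

// Hypothetical sketch of the typed row accessors — nullable by design so
// SQL NULL becomes Kotlin null instead of a JDBC default value.
interface PluginRow {
    fun string(column: String): String?
    fun int(column: String): Int?
    fun long(column: String): Long?
    fun uuid(column: String): UUID?
    fun bool(column: String): Boolean?
    fun instant(column: String): Instant?
    fun bigDecimal(column: String): BigDecimal?
}

// Hypothetical sketch of the SQL surface: named parameters only, no
// positional '?', and no Spring types anywhere in the signatures.
interface PluginJdbc {
    fun query(sql: String, params: Map<String, Any?>): List<PluginRow>
    fun queryForObject(sql: String, params: Map<String, Any?>): PluginRow?
    fun update(sql: String, params: Map<String, Any?>): Int
    fun <T> inTransaction(block: (PluginJdbc) -> T): T
}
```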
    
    * platform-plugins — three new sub-packages:
      - jdbc/DefaultPluginJdbc backed by Spring's NamedParameterJdbcTemplate.
        ResultSetPluginRow translates each accessor through ResultSet.wasNull()
        so SQL NULL round-trips as Kotlin null instead of the JDBC defaults
        (0 for int, false for bool, etc. — bug factories).
      - jdbc/PluginJdbcConfiguration provides one shared PluginJdbc bean
        for the whole process. Per-plugin isolation lands later.
      - migration/PluginLiquibaseRunner looks for
        META-INF/vibe-erp/db/changelog.xml inside the plug-in JAR via
        the PF4J classloader and applies it via Liquibase against the
        host's shared DataSource. The unique META-INF path matters:
        plug-ins also see the host's parent classpath, where the host's
        own db/changelog/master.xml lives, and a collision causes
        Liquibase ChangeLogParseException at install time.
      - lint/PluginLinter walks every .class entry in the plug-in JAR
        via java.util.jar.JarFile + ASM ClassReader, visits every type/
        method/field/instruction reference, rejects on any reference to
        `org/vibeerp/platform/` or `org/vibeerp/pbc/` packages.
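    The wasNull() translation is worth seeing concretely — a minimal
    sketch (accessor names from the text; the real ResultSetPluginRow is
    assumed to look roughly like this):

```kotlin
import java.sql.ResultSet

// Sketch of NULL-safe accessors over a JDBC row: getInt() yields 0 for SQL
// NULL, so every primitive read must be followed by a wasNull() check.
class NullSafeRow(private val rs: ResultSet) {
    fun string(column: String): String? = rs.getString(column)  // reference types are already null-safe
    fun int(column: String): Int? = rs.getInt(column).let { if (rs.wasNull()) null else it }
    fun bool(column: String): Boolean? = rs.getBoolean(column).let { if (rs.wasNull()) null else it }
}
```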
    
    * VibeErpPluginManager lifecycle is now load → lint → migrate → start:
      - lint runs immediately after PF4J's loadPlugins(); rejected
        plug-ins are unloaded with a per-violation error log and never
        get to run any code
      - migrate runs the plug-in's own Liquibase changelog; failure
        means the plug-in is loaded but skipped (loud warning, framework
        boots fine)
      - then PF4J's startPlugins() runs the no-arg start
      - then we walk loaded plug-ins and call vibe_erp's start(context)
        with a fully-wired DefaultPluginContext (logger + endpoints +
        eventBus + jdbc). The plug-in's tables are guaranteed to exist
        by the time its lambdas run.
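    Stripped of PF4J and Spring, the ordering rule reduces to this
    (stand-in types, purely illustrative — the real code operates on
    PF4J PluginWrappers):

```kotlin
// Illustrative stand-in: lint failures unload a plug-in before any of its
// code runs; migration failures leave it loaded but never started.
data class FakePlugin(val id: String, val lintClean: Boolean, val migrates: Boolean)

fun lifecycle(loaded: List<FakePlugin>): List<String> {
    val log = mutableListOf<String>()
    val linted = loaded.filter { p ->
        if (!p.lintClean) { log.add("unloaded:${p.id}"); false } else true
    }
    val migrated = linted.filter { p ->
        if (!p.migrates) { log.add("skipped:${p.id}"); false } else true
    }
    migrated.forEach { log.add("started:${it.id}") }  // start(context) only after tables exist
    return log
}
```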
    
    * DefaultPluginContext.jdbc is no longer a stub. Plug-ins inject the
      shared PluginJdbc and use it to talk to their own tables.
    
    * Reference plug-in (PrintingShopPlugin):
      - Ships META-INF/vibe-erp/db/changelog.xml with two changesets:
        plugin_printingshop__plate (id, code, name, width_mm, height_mm,
        status) and plugin_printingshop__ink_recipe (id, code, name,
        cmyk_c/m/y/k).
      - Now registers seven endpoints:
          GET  /ping          — health
          GET  /echo/{name}   — path variable demo
          GET  /plates        — list
          GET  /plates/{id}   — fetch
    POST /plates        — create (with a race-prone existence
                          check before INSERT, since plug-ins
                          can't import Spring's DataAccessException)
          GET  /inks
          POST /inks
      - All CRUD lambdas use context.jdbc with named parameters. The
        plug-in still imports nothing from org.springframework.* in its
        own code (it does reach the host's Jackson via reflection for
        JSON parsing — a deliberate v0.6 shortcut documented inline).
    
    Tests: 5 new PluginLinterTest cases use ASM ClassWriter to synthesize
    in-memory plug-in JARs (clean class, forbidden platform ref, forbidden
    pbc ref, allowed api.v1 ref, multiple violations) and a mocked
    PluginWrapper to avoid touching the real PF4J loader. Total now
    **81 unit tests** across 10 modules, all green.
    
    End-to-end smoke test against fresh Postgres with the plug-in loaded
    (every assertion green):
    
      Boot logs:
        PluginLiquibaseRunner: plug-in 'printing-shop' has changelog.xml
        Liquibase: ChangeSet printingshop-init-001 ran successfully
        Liquibase: ChangeSet printingshop-init-002 ran successfully
        Liquibase migrations applied successfully
        plugin.printing-shop: registered 7 endpoints
    
      HTTP smoke:
        \dt plugin_printingshop*                  → both tables exist
        GET /api/v1/plugins/printing-shop/plates  → []
        POST plate A4                              → 201 + UUID
        POST plate A3                              → 201 + UUID
        POST duplicate A4                          → 409 + clear msg
        GET plates                                 → 2 rows
        GET /plates/{id}                           → A4 details
        psql verifies both rows in plugin_printingshop__plate
        POST ink CYAN                              → 201
        POST ink MAGENTA                           → 201
        GET inks                                   → 2 inks with nested CMYK
        GET /ping                                  → 200 (existing endpoint)
        GET /api/v1/catalog/uoms                   → 15 UoMs (no regression)
        GET /api/v1/identity/users                 → 1 user (no regression)
    
    Bug encountered and fixed during the smoke test:
    
      • The plug-in initially shipped its changelog at db/changelog/master.xml,
        which collides with the HOST's db/changelog/master.xml. The plug-in
        classloader does parent-first lookup (PF4J default), so Liquibase's
        ClassLoaderResourceAccessor found BOTH files and threw
        ChangeLogParseException ("Found 2 files with the path"). Fixed by
        moving the plug-in changelog to META-INF/vibe-erp/db/changelog.xml,
        a path the host never uses, and updating PluginLiquibaseRunner.
        The unique META-INF prefix is now part of the documented plug-in
        convention.
    
    What is explicitly NOT in this chunk (deferred):
    
      • Per-plugin Spring child contexts — plug-ins still instantiate via
        PF4J's classloader without their own Spring beans
      • Per-plugin datasource isolation — one shared host pool today
      • Plug-in changelog table-prefix linter — convention only, runtime
        enforcement comes later
      • Rollback on plug-in uninstall — uninstall is operator-confirmed
        and rare; running dropAll() during stop() would lose data on
        accidental restart
      • Subscription auto-scoping on plug-in stop — plug-ins still close
        their own subscriptions in stop()
      • Real customer-grade JSON parsing in plug-in lambdas — the v0.6
        reference plug-in uses reflection to find the host's Jackson; a
        real plug-in author would ship their own JSON library or use a
        future api.v1 typed-DTO surface
    
    Implementation plan refreshed: P1.2, P1.3, P1.4, P1.7, P4.1, P5.1
    all marked DONE in
    docs/superpowers/specs/2026-04-07-vibe-erp-implementation-plan.md.
    Next priority candidates: P1.5 (metadata seeder) and P5.2 (pbc-partners).
  • Adds the framework's event bus, the second cross-cutting service (after
    auth) that PBCs and plug-ins both consume. Implements the transactional
    outbox pattern from the architecture spec section 9 — events are
    written to the database in the same transaction as the publisher's
    domain change, so a publish followed by a rollback never escapes.
    This is the seam where a future Kafka/NATS bridge plugs in WITHOUT
    touching any PBC code.
    
    What landed:
    
    * New `platform/platform-events/` module:
      - `EventOutboxEntry` JPA entity backed by `platform__event_outbox`
        (id, event_id, topic, aggregate_type, aggregate_id, payload jsonb,
        status, attempts, last_error, occurred_at, dispatched_at, version).
        Status enum: PENDING / DISPATCHED / FAILED.
      - `EventOutboxRepository` Spring Data JPA repo with a pessimistic
        SELECT FOR UPDATE query for poller dispatch.
      - `ListenerRegistry` — in-memory subscription holder, indexed both
        by event class (Class.isInstance) and by topic string. Supports
        a `**` wildcard for the platform's audit subscriber. Backed by
        CopyOnWriteArrayList so dispatch is lock-free.
      - `EventBusImpl` — implements the api.v1 EventBus. publish() writes
        the outbox row AND synchronously delivers to in-process listeners
        in the SAME transaction. Marked Propagation.MANDATORY so the bus
        refuses to publish outside an existing transaction (preventing
        publish-and-rollback leaks). Listener exceptions are caught and
        logged; the outbox row still commits.
      - `OutboxPoller` — Spring @Scheduled component that runs every 5s,
        drains PENDING / FAILED rows under a pessimistic lock, marks them
        DISPATCHED. v0.5 has no real external dispatcher — the poller is
        the seam where Kafka/NATS plugs in later.
      - `EventBusConfiguration` — @EnableScheduling so the poller actually
        runs. Lives in this module so the seam activates automatically
        when platform-events is on the classpath.
      - `EventAuditLogSubscriber` — wildcard subscriber that logs every
        event at INFO. Demo proof that the bus works end-to-end. Future
        versions replace it with a real audit log writer.
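    The registry's dispatch shape can be sketched without Spring (an
    illustration, not the real ListenerRegistry; subscription and handler
    signatures are assumptions):

```kotlin
import java.util.concurrent.CopyOnWriteArrayList

// Sketch: topic-keyed subscriptions with the "**" wildcard; the
// CopyOnWriteArrayList keeps dispatch lock-free while subscribe/close
// pay the copy cost on mutation.
class TopicRegistry {
    private val listeners = CopyOnWriteArrayList<Pair<String, (String, Any) -> Unit>>()

    fun subscribe(topic: String, handler: (String, Any) -> Unit): AutoCloseable {
        val entry = topic to handler
        listeners.add(entry)
        return AutoCloseable { listeners.remove(entry) }   // close() unsubscribes
    }

    fun deliver(topic: String, event: Any) {
        for ((pattern, handler) in listeners) {
            if (pattern == "**" || pattern == topic) {
                runCatching { handler(topic, event) }      // a failing listener must not abort dispatch
            }
        }
    }
}
```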
    
    * `platform__event_outbox` Liquibase changeset (platform-events-001):
      table + unique index on event_id + index on (status, created_at) +
      index on topic.
    
    * DefaultPluginContext.eventBus is no longer a stub that throws —
      it's now the real EventBus injected by VibeErpPluginManager.
      Plug-ins can publish and subscribe via the api.v1 surface. Note:
      subscriptions are NOT auto-scoped to the plug-in lifecycle in v0.5;
      a plug-in that wants its subscriptions removed on stop() must call
      subscription.close() explicitly. Auto-scoping lands when per-plug-in
      Spring child contexts ship.
    
    * pbc-identity now publishes `UserCreatedEvent` after a successful
      UserService.create(). The event class is internal to pbc-identity
      (not in api.v1) — other PBCs subscribe by topic string
      (`identity.user.created`), not by class. This is the right tradeoff:
      string topics are stable across plug-in classloaders, class equality
      is not, and adding every event class to api.v1 would be perpetual
      surface-area bloat.
    
    Tests: 13 new unit tests (9 EventBusImplTest + 4 OutboxPollerTest)
    plus 2 new UserServiceTest cases that verify the publish happens on
    the happy path and does NOT happen when create() rejects a duplicate.
    Total now 76 unit tests across the framework, all green.
    
    End-to-end smoke test against fresh Postgres with the plug-in loaded
    (everything green):
    
      EventAuditLogSubscriber subscribed to ** at boot
      Outbox empty before any user create                      ✓
      POST /api/v1/auth/login                                  → 200
      POST /api/v1/identity/users (create alice)               → 201
      Outbox row appears with topic=identity.user.created,
        status=PENDING immediately after create                ✓
      EventAuditLogSubscriber log line fires synchronously
        inside the create transaction                          ✓
      POST /api/v1/identity/users (create bob)                 → 201
      Wait 8s (one OutboxPoller cycle)
      Both outbox rows now DISPATCHED, dispatched_at set       ✓
      Existing PBCs still work:
        GET /api/v1/identity/users → 3 users                   ✓
        GET /api/v1/catalog/uoms → 15 UoMs                     ✓
      Plug-in still works:
        GET /api/v1/plugins/printing-shop/ping → 200           ✓
    
    The most important assertion is the synchronous audit log line
    appearing on the same thread as the user creation request. That
    proves the entire chain — UserService.create() → eventBus.publish()
    → EventBusImpl writes outbox row → ListenerRegistry.deliver()
    finds wildcard subscriber → EventAuditLogSubscriber.handle()
    logs — runs end-to-end inside the publisher's transaction.
    The poller flipping PENDING → DISPATCHED 5s later proves the
    outbox + poller seam works without any external dispatcher.
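    The PENDING → DISPATCHED flip the poller performs reduces to a small
    state machine (status names from the text; the in-memory list and the
    dispatch hook are stand-ins for the real SELECT FOR UPDATE batch and
    the future Kafka/NATS bridge):

```kotlin
// Stand-in sketch of one poller cycle over outbox rows.
enum class OutboxStatus { PENDING, DISPATCHED, FAILED }
class OutboxRow(val topic: String, var status: OutboxStatus = OutboxStatus.PENDING, var attempts: Int = 0)

fun pollOnce(rows: List<OutboxRow>, dispatch: (OutboxRow) -> Unit) {
    for (row in rows.filter { it.status != OutboxStatus.DISPATCHED }) {
        row.attempts++
        runCatching { dispatch(row) }
            .onSuccess { row.status = OutboxStatus.DISPATCHED }
            .onFailure { row.status = OutboxStatus.FAILED }  // re-attempted next cycle (no backoff yet)
    }
}
```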
    
    Bug encountered and fixed during the smoke test:
    
      • EventBusImplTest used `ObjectMapper().registerKotlinModule()`
        which doesn't pick up jackson-datatype-jsr310. Production code
        uses Spring Boot's auto-configured ObjectMapper which already
        has jsr310 because spring-boot-starter-web is on the classpath
        of distribution. The test setup was the only place using a bare
        mapper. Fixed by switching to `findAndRegisterModules()` AND
        by adding jackson-datatype-jsr310 as an explicit implementation
        dependency of platform-events (so future modules that depend on
        the bus without bringing web in still get Instant serialization).
    
    What is explicitly NOT in this chunk:
    
      • External dispatcher (Kafka/NATS bridge) — the poller is a no-op
        that just marks rows DISPATCHED. The seam exists; the dispatcher
        is a future P1.7.b unit.
      • Exponential backoff on FAILED rows — every cycle re-attempts.
        Real backoff lands when there's a real dispatcher to fail.
      • Dead-letter queue — same.
      • Per-plug-in subscription auto-scoping — plug-ins must close()
        explicitly today.
      • Async / fire-and-forget publish — synchronous in-process only.
  • The reference printing-shop plug-in now actually does something:
    its main class registers two HTTP endpoints during start(context),
    and a real curl to /api/v1/plugins/printing-shop/ping returns the
    JSON the plug-in's lambda produced. End-to-end smoke test 10/10
    green. This is the chunk that turns vibe_erp from "an ERP app
    that has a plug-in folder" into "an ERP framework whose plug-ins
    can serve traffic".
    
    What landed:
    
    * api.v1 — additive (binary-compatible per the api.v1 stability rule):
      - org.vibeerp.api.v1.plugin.HttpMethod (enum)
      - org.vibeerp.api.v1.plugin.PluginRequest (path params, query, body)
      - org.vibeerp.api.v1.plugin.PluginResponse (status + body)
      - org.vibeerp.api.v1.plugin.PluginEndpointHandler (fun interface)
      - org.vibeerp.api.v1.plugin.PluginEndpointRegistrar (per-plugin
        scoped, register(method, path, handler))
      - PluginContext.endpoints getter with default impl that throws
        UnsupportedOperationException so the addition is binary-compatible
        with plug-ins compiled against earlier api.v1 builds.
    
    * platform-plugins — four new files:
      - PluginEndpointRegistry: process-wide registration storage. Uses
        Spring's AntPathMatcher so {var} extracts path variables.
        Synchronized mutation. Exact-match fast path before pattern loop.
        Rejects duplicate (method, path) per plug-in. unregisterAll(plugin)
        on shutdown.
      - ScopedPluginEndpointRegistrar: per-plugin wrapper that tags every
        register() call with the right plugin id. Plug-ins cannot register
        under another plug-in's namespace.
      - PluginEndpointDispatcher: single Spring @RestController at
        /api/v1/plugins/{pluginId}/** that catches GET/POST/PUT/PATCH/DELETE,
        asks the registry for a match, builds a PluginRequest, calls the
        handler, serializes the response. 404 on no match, 500 on handler
        throw (logged with stack trace).
      - DefaultPluginContext: implements PluginContext with a real
        SLF4J-backed logger (every line tagged with the plug-in id) and
        the scoped endpoint registrar. The other six services
        (eventBus, transaction, translator, localeProvider,
        permissionCheck, entityRegistry) throw UnsupportedOperationException
        with messages pointing at the implementation plan unit that will
        land each one. Loud failure beats silent no-op.
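    The {var} extraction the registry delegates to AntPathMatcher can be
    approximated dependency-free (a stand-in sketch; the real matching is
    richer than this):

```kotlin
// Match a path against a template like "/echo/{name}"; returns the extracted
// variables on a match, or null when the path doesn't fit the template.
fun matchTemplate(template: String, path: String): Map<String, String>? {
    val t = template.trim('/').split('/')
    val p = path.trim('/').split('/')
    if (t.size != p.size) return null
    val vars = mutableMapOf<String, String>()
    for ((pattern, actual) in t.zip(p)) {
        when {
            pattern.startsWith("{") && pattern.endsWith("}") ->
                vars[pattern.substring(1, pattern.length - 1)] = actual
            pattern != actual -> return null
        }
    }
    return vars
}
```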
    
    * VibeErpPluginManager — after PF4J's startPlugins() now walks every
      loaded plug-in, casts the wrapper instance to api.v1.plugin.Plugin,
      and calls start(context) with a freshly-built DefaultPluginContext.
      Tracks the started set so destroy() can call stop() and
      unregisterAll() in reverse order. Catches plug-in start failures
      loudly without bringing the framework down.
    
    * Reference plug-in (PrintingShopPlugin):
      - Now extends BOTH org.pf4j.Plugin (so PF4J's loader can instantiate
        it via the Plugin-Class manifest entry) AND
        org.vibeerp.api.v1.plugin.Plugin (so the host's vibe_erp lifecycle
        hook can call start(context)). Uses Kotlin import aliases to
        disambiguate the two `Plugin` simple names.
      - In start(context), registers two endpoints:
          GET /ping — returns {plugin, version, ok, message}
          GET /echo/{name} — extracts path variable, echoes it back
      - The /echo handler proves path-variable extraction works end-to-end.
    
    * Build infrastructure:
      - reference-customer/plugin-printing-shop now has an `installToDev`
        Gradle task that builds the JAR and stages it into <repo>/plugins-dev/.
        The task wipes any previous staged copies first so renaming the JAR
        on a version bump doesn't leave PF4J trying to load two versions.
      - distribution's `bootRun` task now (a) depends on `installToDev`
        so the staging happens automatically and (b) sets workingDir to
        the repo root so application-dev.yaml's relative
        `vibeerp.plugins.directory: ./plugins-dev` resolves to the right
        place. Without (b) bootRun's CWD was distribution/ and PF4J found
        "No plugins" — which is exactly the bug that surfaced in the first
        smoke run.
      - .gitignore now excludes /plugins-dev/ and /files-dev/.
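    In Gradle Kotlin DSL the two bootRun fixes might look roughly like
    this (a config sketch — the task path and typing are assumptions
    based on the description above):

```kotlin
tasks.named<org.springframework.boot.gradle.tasks.run.BootRun>("bootRun") {
    dependsOn(":reference-customer:plugin-printing-shop:installToDev")  // (a) stage the plug-in JAR first
    workingDir = rootProject.layout.projectDirectory.asFile             // (b) resolve ./plugins-dev from repo root
}
```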
    
    Tests: 12 new unit tests for PluginEndpointRegistry covering literal
    paths, single/multi path variables, duplicate registration rejection,
    literal-vs-pattern precedence, cross-plug-in isolation, method
    matching, and unregisterAll. Total now 61 unit tests across the
    framework, all green.
    
    End-to-end smoke test against fresh Postgres + the plug-in JAR
    loaded by PF4J at boot (10/10 passing):
      GET /api/v1/plugins/printing-shop/ping (no auth)         → 401
      POST /api/v1/auth/login                                  → access token
      GET /api/v1/plugins/printing-shop/ping (Bearer)          → 200
                                                                 {plugin, version, ok, message}
      GET /api/v1/plugins/printing-shop/echo/hello             → 200, echoed=hello
      GET /api/v1/plugins/printing-shop/echo/world             → 200, echoed=world
      GET /api/v1/plugins/printing-shop/nonexistent            → 404 (no handler)
      GET /api/v1/plugins/missing-plugin/ping                  → 404 (no plugin)
      POST /api/v1/plugins/printing-shop/ping                  → 404 (wrong method)
      GET /api/v1/catalog/uoms (Bearer)                        → 200, 15 UoMs
      GET /api/v1/identity/users (Bearer)                      → 200, 1 user
    
    PF4J resolved the JAR, started the plug-in, the host called
    vibe_erp's start(context), the plug-in registered two endpoints, and
    the dispatcher routed real HTTP traffic to the plug-in's lambdas.
    The boot log shows the full chain.
    
    What is explicitly NOT in this chunk and remains for later:
      • plug-in linter (P1.2) — bytecode scan for forbidden imports
      • plug-in Liquibase application (P1.4) — plug-in-owned schemas
      • per-plug-in Spring child context — currently we just instantiate
        the plug-in via PF4J's classloader; there is no Spring context
        for the plug-in's own beans
      • PluginContext.eventBus / transaction / translator / etc. — they
        still throw UnsupportedOperationException with TODO messages
      • Path-template precedence between multiple competing patterns
        (only literal-beats-pattern is implemented, not most-specific-pattern)
      • Permission checks at the dispatcher (Spring Security still
        catches plug-in endpoints with the global "anyRequest authenticated"
        rule, which is the right v0.5 behavior)
      • Hot reload of plug-ins (cold restart only)
    
    Bug encountered and fixed during the smoke test:
    
      • application-dev.yaml has `vibeerp.plugins.directory: ./plugins-dev`,
        a relative path. Gradle's `bootRun` task by default uses the
        subproject's directory as the working directory, so the relative
        path resolved to <repo>/distribution/plugins-dev/ instead of
        <repo>/plugins-dev/. PF4J reported "No plugins" because that
        directory was empty. Fixed by setting bootRun.workingDir =
        rootProject.layout.projectDirectory.asFile.
    
      • One KDoc comment in PluginEndpointDispatcher contained the literal
        string `/api/v1/plugins/{pluginId}/**` inside backticks. The
        Kotlin lexer doesn't treat backticks as comment-suppressing, so
        `/**` opened a nested KDoc comment that was never closed and the
        file failed to compile. Same root cause as the AuthController bug
        earlier in the session. Rewrote the line to avoid the literal
        `/**` sequence.