• Closes two open wiring gaps left by the P1.9 and P1.8 chunks —
    `PluginContext.files` and `PluginContext.reports` both previously
    threw `UnsupportedOperationException` because the host's
    `DefaultPluginContext` never received the concrete beans. This
    commit plumbs both through and exercises them end-to-end via a
    new printing-shop plug-in endpoint that generates a quote PDF,
    stores it in the file store, and returns the file handle.
    
    With this chunk the reference printing-shop plug-in demonstrates
    **every extension seam the framework provides**: HTTP endpoints,
    JDBC, metadata YAML, i18n, BPMN + TaskHandlers, JobHandlers,
    custom fields on core entities, event publishing via EventBus,
    ReportRenderer, and FileStorage. There is no major public plug-in
    surface left unexercised.
    
    ## Wiring: DefaultPluginContext + VibeErpPluginManager
    
    - `DefaultPluginContext` gains two new constructor parameters
      (`sharedFileStorage: FileStorage`, `sharedReportRenderer: ReportRenderer`)
      and two new overrides. Both are wired via Spring: the concrete
      implementations live in platform-files and platform-reports
      respectively, but platform-plugins depends only on api.v1 (the
      interfaces), NOT on those modules directly. The concrete beans
      are injected by Spring at distribution boot time, when every
      `@Component` is on the classpath.
    - `VibeErpPluginManager` adds `private val fileStorage: FileStorage`
      and `private val reportRenderer: ReportRenderer` constructor
      params and passes them through to every `DefaultPluginContext`
      it builds per plug-in.
    
    The `files` and `reports` getters in api.v1 `PluginContext` still
    have their default-throw backward-compat shim — a plug-in built
    against v0.8 of api.v1 loading on a v0.7 host would still fail
    loudly at first call with a clear "upgrade to v0.8" message. The
    override here makes the v0.8+ host honour the interface.
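
    A minimal sketch of the shim-plus-override pattern (names
    condensed and illustrative; the real api.v1 interfaces carry more
    members):

    ```kotlin
    // Sketch only: condensed stand-ins for the api.v1 interfaces.
    interface FileStorage {
        fun put(key: String, contentType: String, content: ByteArray)
    }
    interface ReportRenderer {
        fun renderPdf(template: ByteArray, data: Map<String, Any?>): ByteArray
    }

    interface PluginContext {
        // Default-throw shim: a plug-in compiled against v0.8 but loaded
        // on a v0.7 host fails loudly at first call, not at link time.
        val files: FileStorage
            get() = throw UnsupportedOperationException(
                "PluginContext.files needs a v0.8+ host")
        val reports: ReportRenderer
            get() = throw UnsupportedOperationException(
                "PluginContext.reports needs a v0.8+ host")
    }

    // A v0.8+ host overrides the shims with the shared Spring beans.
    class DefaultPluginContext(
        private val sharedFileStorage: FileStorage,
        private val sharedReportRenderer: ReportRenderer,
    ) : PluginContext {
        override val files: FileStorage get() = sharedFileStorage
        override val reports: ReportRenderer get() = sharedReportRenderer
    }
    ```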
    
    ## Printing-shop reference — quote PDF endpoint
    
    - New `resources/reports/quote-template.jrxml` inside the plug-in
      JAR. Parameters: plateCode, plateName, widthMm, heightMm,
      status, customerName. Produces a single-page A4 PDF with a
      header, a table of plate attributes, and a footer.
    
    - New endpoint `POST /api/v1/plugins/printing-shop/plates/{id}/generate-quote-pdf`.
      Request body `{"customerName": "..."}`, response:
        `{"plateId", "plateCode", "customerName",
          "fileKey", "fileSize", "fileContentType", "downloadUrl"}`
    
      The handler does ALL of:
        1. Reads the plate row via `context.jdbc.queryForObject(...)`
        2. Loads the JRXML from the PLUG-IN's own classloader (not
           the host classpath — `this::class.java.classLoader
           .getResourceAsStream("reports/quote-template.jrxml")` —
           so the host's built-in `vibeerp-ping-report.jrxml` and the
           plug-in's template live in isolated namespaces)
        3. Renders via `context.reports.renderPdf(template, data)`
           — uses the host JasperReportRenderer under the hood
        4. Persists via `context.files.put(key, contentType, content)`
           under a plug-in-scoped key `plugin-printing-shop/quotes/quote-<code>.pdf`
        5. Returns the file handle plus a `downloadUrl` pointing at
           the framework's `/api/v1/files/download` endpoint the
           caller can immediately hit
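
    Stripped of HTTP plumbing, the five steps compose roughly as
    below (the row type, result shape, and lambda seams are
    illustrative stand-ins for the real `context.jdbc` /
    `context.reports` / `context.files` calls):

    ```kotlin
    // Hedged sketch of the generate-quote-pdf flow.
    data class PlateRow(
        val code: String, val name: String,
        val widthMm: Int, val heightMm: Int, val status: String,
    )
    data class QuotePdfResult(
        val fileKey: String, val fileSize: Int,
        val fileContentType: String, val downloadUrl: String,
    )

    class QuotePdfFlow(
        private val loadPlate: (String) -> PlateRow,                        // 1. JDBC read
        private val loadTemplate: () -> ByteArray,                          // 2. plug-in classloader resource
        private val renderPdf: (ByteArray, Map<String, Any?>) -> ByteArray, // 3. context.reports
        private val putFile: (String, String, ByteArray) -> Unit,           // 4. context.files
    ) {
        fun generate(plateId: String, customerName: String): QuotePdfResult {
            val plate = loadPlate(plateId)
            val pdf = renderPdf(loadTemplate(), mapOf(
                "plateCode" to plate.code, "plateName" to plate.name,
                "widthMm" to plate.widthMm, "heightMm" to plate.heightMm,
                "status" to plate.status, "customerName" to customerName))
            val key = "plugin-printing-shop/quotes/quote-${plate.code}.pdf"
            putFile(key, "application/pdf", pdf)
            // 5. file handle plus an immediately usable download URL
            return QuotePdfResult(key, pdf.size, "application/pdf",
                "/api/v1/files/download?key=$key")
        }
    }
    ```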
    
    ## Smoke test (fresh DB + staged plug-in)
    
    ```
    # create a plate
    POST /api/v1/plugins/printing-shop/plates
         {code: PLATE-200, name: "Premium cover", widthMm: 420, heightMm: 594}
      → 201 {id, code: PLATE-200, status: DRAFT, ...}
    
    # generate + store the quote PDF
    POST /api/v1/plugins/printing-shop/plates/<id>/generate-quote-pdf
         {customerName: "Acme Inc"}
      → 201 {
          plateId, plateCode: "PLATE-200", customerName: "Acme Inc",
          fileKey: "plugin-printing-shop/quotes/quote-PLATE-200.pdf",
          fileSize: 1488,
          fileContentType: "application/pdf",
          downloadUrl: "/api/v1/files/download?key=plugin-printing-shop/quotes/quote-PLATE-200.pdf"
        }
    
    # download via the framework's file endpoint
    GET /api/v1/files/download?key=plugin-printing-shop/quotes/quote-PLATE-200.pdf
      → 200
        Content-Type: application/pdf
        Content-Length: 1488
        body: valid PDF 1.5, 1 page
    
    $ file /tmp/plate-quote.pdf
      /tmp/plate-quote.pdf: PDF document, version 1.5, 1 pages (zip deflate encoded)
    
    # list by prefix
    GET /api/v1/files?prefix=plugin-printing-shop/
      → [{"key":"plugin-printing-shop/quotes/quote-PLATE-200.pdf",
          "size":1488, "contentType":"application/pdf", ...}]
    
    # plug-in log
    [plugin:printing-shop] registered 8 endpoints under /api/v1/plugins/printing-shop/
    [plugin:printing-shop] generated quote PDF for plate PLATE-200 (1488 bytes)
                           → plugin-printing-shop/quotes/quote-PLATE-200.pdf
    ```
    
    Four public surfaces composed in one flow: plug-in JDBC read →
    plug-in classloader resource load → host ReportRenderer compile/
    fill/export → host FileStorage put → host file controller
    download. Every step stays on api.v1; zero plug-in code reaches
    into a concrete platform class.
    
    ## Printing-shop plug-in — full extension surface exercised
    
    After this commit the reference printing-shop plug-in contributes
    via every public seam the framework offers:
    
    | Seam                          | How the plug-in uses it                                |
    |-------------------------------|--------------------------------------------------------|
    | HTTP endpoints (P1.3)         | 8 endpoints under /api/v1/plugins/printing-shop/       |
    | JDBC (P1.4)                   | Reads/writes its own plugin_printingshop__* tables    |
    | Liquibase                     | Own changelog.xml, 2 tables created at plug-in start  |
    | Metadata YAML (P1.5)          | 2 entities, 5 permissions, 2 menus                    |
    | Custom fields on CORE (P3.4)  | 5 plug-in fields on Partner/Item/SalesOrder/WorkOrder |
    | i18n (P1.6)                   | Own messages_<locale>.properties, quote number msgs   |
    | EventBus (P1.7)               | Publishes WorkOrderRequestedEvent from a TaskHandler  |
    | TaskHandlers (P2.1)           | 2 handlers (plate-approval, quote-to-work-order)      |
    | Plug-in BPMN (P2.1 followup)  | 2 BPMNs in processes/ auto-deployed at start          |
    | JobHandlers (P1.10 followup)  | PlateCleanupJobHandler using context.jdbc + logger    |
    | ReportRenderer (P1.8)         | Quote PDF from JRXML via context.reports              |
    | FileStorage (P1.9)            | Persists quote PDF via context.files                  |
    
    Everything listed in this table is exercised end-to-end by the
    current smoke test. The plug-in is the framework's executable
    acceptance test for the entire public extension surface.
    
    ## Tests
    
    No new unit tests — the wiring change is a plain constructor
    addition, the existing `DefaultPluginContext` has no dedicated
    test class (it's a thin dataclass-shaped bean), and
    `JasperReportRenderer` + `LocalDiskFileStorage` each have their
    own unit tests from the respective parent chunks. The change is
    validated end-to-end by the above smoke test; formalizing that
    into an integration test would need Testcontainers + a real
    plug-in JAR and belongs to a different (test-infra) chunk.
    
    - Total framework unit tests: 337 (unchanged), all green.
    
    ## Non-goals (parking lot)
    
    - Pre-compiled `.jasper` caching keyed by template hash. A
      hot-path benchmark would tell us whether the cache is worth
      shipping.
    - Multipart upload of a template into a plug-in's own `files`
      namespace so non-bundled templates can be tried without a
      plug-in rebuild. Nice-to-have for iteration but not on the
      v1.0 critical path.
    - Scoped file-key prefixes per plug-in enforced by the framework
      (today the plug-in picks its own prefix by convention; a
      `plugin.files.keyPrefix` config would let the host enforce
      that every plug-in-contributed file lives under
      `plugin-<id>/`). Future hardening chunk.
    zichun authored
  • …duction auto-creates WorkOrder
    
    First end-to-end cross-PBC workflow driven entirely from a customer
    plug-in through api.v1 surfaces. A printing-shop BPMN kicks off a
    TaskHandler that publishes a generic api.v1 event; pbc-production
    reacts by creating a DRAFT WorkOrder. The plug-in has zero
    compile-time coupling to pbc-production, and pbc-production has zero
    knowledge the plug-in exists.
    
    ## Why an event, not a facade
    
    Two options were on the table for "how does a plug-in ask
    pbc-production to create a WorkOrder":
    
      (a) add a new cross-PBC facade `api.v1.ext.production.ProductionApi`
          with a `createWorkOrder(command)` method
      (b) add a generic `WorkOrderRequestedEvent` in `api.v1.event.production`
          that anyone can publish — this commit
    
    Facade pattern (a) is what InventoryApi.recordMovement and
    CatalogApi.findItemByCode use: synchronous, in-transaction,
    caller-blocks-on-completion. Event pattern (b) is what
    SalesOrderConfirmedEvent → SalesOrderConfirmedSubscriber uses:
    asynchronous over the bus, still in-transaction (the bus uses
    `Propagation.MANDATORY` with synchronous delivery so a failure
    rolls everything back), but the caller doesn't need a typed result.
    
    Option (b) wins for plug-in → pbc-production:
    
    - Plug-in compile-time surface stays identical: plug-ins already
      import `api.v1.event.*` to publish. No new api.v1.ext package.
      Zero new plug-in dependency.
    - The outbox gets the row for free — a crash between publish and
      delivery replays cleanly from `platform__event_outbox`.
    - A second customer plug-in shipping a different flow that ALSO
      wants to auto-spawn work orders doesn't need a second facade, just
      publishes the same event. pbc-scheduling (future) can subscribe
      to the same channel without duplicating code.
    
    The synchronous facade pattern stays the right tool for cross-PBC
    operations the caller needs to observe (read-throughs, inventory
    debits that must block the current transaction). Creating a DRAFT
    work order is a fire-and-trust operation — the event shape fits.
    
    ## What landed
    
    ### api.v1 — WorkOrderRequestedEvent
    
    New event class `org.vibeerp.api.v1.event.production.WorkOrderRequestedEvent`
    with four required fields:
      - `code`: desired work-order code (must be unique globally;
        convention is to bake the source reference into it so duplicate
        detection is trivial, e.g. `WO-FROM-PRINTINGSHOP-Q-007`)
      - `outputItemCode` + `outputQuantity`: what to produce
      - `sourceReference`: opaque free-form pointer used in logs and
        the outbox audit trail. Example values:
        `plugin:printing-shop:quote:Q-007`,
        `pbc-orders-sales:SO-2026-001:L2`
    
    The class is a `DomainEvent` (not a `WorkOrderEvent` subclass — the
    existing `WorkOrderEvent` sealed interface is for LIFECYCLE events
    published BY pbc-production, not for inbound requests). `init`
    validators reject blank strings and non-positive quantities so a
    malformed event fails fast at publish time rather than at the
    subscriber.
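
    The fail-fast validation can be sketched like this (condensed;
    the real class implements the api.v1 `DomainEvent` marker and the
    quantity type may differ):

    ```kotlin
    import java.math.BigDecimal

    // Sketch: init validators make a malformed event fail at publish
    // time rather than inside the subscriber.
    data class WorkOrderRequestedEvent(
        val code: String,
        val outputItemCode: String,
        val outputQuantity: BigDecimal,
        val sourceReference: String,
    ) {
        init {
            require(code.isNotBlank()) { "code must not be blank" }
            require(outputItemCode.isNotBlank()) { "outputItemCode must not be blank" }
            require(outputQuantity > BigDecimal.ZERO) { "outputQuantity must be positive" }
            require(sourceReference.isNotBlank()) { "sourceReference must not be blank" }
        }
    }
    ```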
    
    ### pbc-production — WorkOrderRequestedSubscriber
    
    New `@Component` in `pbc/pbc-production/.../event/WorkOrderRequestedSubscriber.kt`.
    Subscribes in `@PostConstruct` via the typed-class `EventBus.subscribe`
    overload (same pattern as `SalesOrderConfirmedSubscriber` + the six
    pbc-finance order subscribers). The subscriber:
    
      1. Looks up `workOrders.findByCode(event.code)` as the idempotent
         short-circuit. If a WorkOrder with that code already exists
         (outbox replay, future async bus retry, developer re-running the
         same BPMN process), the subscriber logs at DEBUG and returns.
         **Second execution of the same BPMN produces the same outbox row
         which the subscriber then skips — the database ends up with
         exactly ONE WorkOrder regardless of how many times the process
         runs.**
      2. Calls `WorkOrderService.create(CreateWorkOrderCommand(...))` with
         the event's fields. `sourceSalesOrderCode` is null because this
         is the generic path, not the SO-driven one.
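
    The idempotent branch reduces to a code lookup before create (a
    sketch with lambda stand-ins for the repository and
    `WorkOrderService`):

    ```kotlin
    // Sketch of the subscriber's idempotent short-circuit: an existing
    // work order with the same code means the event was already handled.
    class WorkOrderRequestedHandler(
        private val findByCode: (String) -> Any?,                 // repository lookup
        private val create: (code: String, itemCode: String, qty: Int) -> Unit,
    ) {
        /** Returns true when a work order was created, false when skipped. */
        fun handle(code: String, itemCode: String, qty: Int): Boolean {
            if (findByCode(code) != null) return false            // replay: log DEBUG, skip
            create(code, itemCode, qty)                           // sourceSalesOrderCode stays null
            return true
        }
    }
    ```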
    
    Why this is a SECOND subscriber rather than extending
    `SalesOrderConfirmedSubscriber`: the two events serve different
    producers. `SalesOrderConfirmedEvent` is pbc-orders-sales-specific
    and requires a round-trip through `SalesOrdersApi.findByCode` to
    fetch the lines; `WorkOrderRequestedEvent` carries everything the
    subscriber needs inline. Collapsing them would mean the generic
    path inherits the SO-flow's SO-specific lookup and short-circuit
    logic that doesn't apply to it.
    
    ### reference printing-shop plug-in — CreateWorkOrderFromQuoteTaskHandler
    
    New plug-in TaskHandler in
    `reference-customer/plugin-printing-shop/.../workflow/CreateWorkOrderFromQuoteTaskHandler.kt`.
    Captures the `PluginContext` via constructor — same pattern as
    `PlateApprovalTaskHandler` landed in `7b2ab34d` — and from inside
    `execute`:
    
      1. Reads `quoteCode`, `itemCode`, `quantity` off the process variables
         (`quantity` accepts Number or String since Flowable's variable
         coercion is flexible).
      2. Derives `workOrderCode = "WO-FROM-PRINTINGSHOP-$quoteCode"` and
         `sourceReference = "plugin:printing-shop:quote:$quoteCode"`.
      3. Logs via `context.logger.info(...)` — the line is tagged
         `[plugin:printing-shop]` by the framework's `Slf4jPluginLogger`.
      4. Publishes `WorkOrderRequestedEvent` via `context.eventBus.publish(...)`.
         This is the first time a plug-in TaskHandler publishes a cross-PBC
         event from inside a workflow — proves the event-bus leg of the
         handler-context pattern works end-to-end.
      5. Writes `workOrderCode` + `workOrderRequested=true` back to the
         process variables so a downstream BPMN step or the HTTP caller
         can see the derived code.
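
    The handler's variable coercion and derivations can be sketched
    with the Flowable execution modeled as a map (the publish lambda
    stands in for `context.eventBus.publish`; logging omitted):

    ```kotlin
    // Sketch of the handler's variable handling; real code reads the
    // variables off Flowable's DelegateExecution instead of a map.
    fun createWorkOrderFromQuote(
        variables: MutableMap<String, Any>,
        publish: (code: String, itemCode: String, quantity: Int, sourceReference: String) -> Unit,
    ) {
        val quoteCode = variables["quoteCode"] as String
        val itemCode = variables["itemCode"] as String
        val quantity = when (val q = variables["quantity"]) {
            is Number -> q.toInt()      // Flowable may hand back a Number...
            is String -> q.toInt()      // ...or a String, so accept both
            else -> error("process variable 'quantity' is missing")
        }
        val workOrderCode = "WO-FROM-PRINTINGSHOP-$quoteCode"
        publish(workOrderCode, itemCode, quantity,
            "plugin:printing-shop:quote:$quoteCode")
        variables["workOrderCode"] = workOrderCode   // visible downstream
        variables["workOrderRequested"] = true
    }
    ```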
    
    The handler is registered in `PrintingShopPlugin.start(context)`
    alongside `PlateApprovalTaskHandler`:
    
        context.taskHandlers.register(PlateApprovalTaskHandler(context))
        context.taskHandlers.register(CreateWorkOrderFromQuoteTaskHandler(context))
    
    Teardown via `unregisterAllByOwner("printing-shop")` still works
    unchanged — the scoped registrar tracks both handlers.
    
    ### reference printing-shop plug-in — quote-to-work-order.bpmn20.xml
    
    New BPMN file `processes/quote-to-work-order.bpmn20.xml` in the
    plug-in JAR. Single synchronous service task, process definition
    key `plugin-printing-shop-quote-to-work-order`, service task id
    `printing_shop.quote.create_work_order` (matches the handler key).
    Auto-deployed by the host's `PluginProcessDeployer` at plug-in
    start — the printing-shop plug-in now ships two BPMNs bundled into
    one Flowable deployment, both under category `printing-shop`.
    
    ## Smoke test (fresh DB)
    
    ```
    $ docker compose down -v && docker compose up -d db
    $ ./gradlew :distribution:bootRun &
    ...
    registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' ...
    registered TaskHandler 'printing_shop.quote.create_work_order' owner='printing-shop' ...
    [plugin:printing-shop] registered 2 TaskHandlers: printing_shop.plate.approve, printing_shop.quote.create_work_order
    PluginProcessDeployer: plug-in 'printing-shop' deployed 2 BPMN resource(s) as Flowable deploymentId='1e5c...':
      [processes/quote-to-work-order.bpmn20.xml, processes/plate-approval.bpmn20.xml]
    pbc-production subscribed to WorkOrderRequestedEvent via EventBus.subscribe (typed-class overload)
    
    # 1) seed a catalog item
    $ curl -X POST /api/v1/catalog/items
           {"code":"BOOK-HARDCOVER","name":"Hardcover book","itemType":"GOOD","baseUomCode":"ea"}
      → 201 BOOK-HARDCOVER
    
    # 2) start the plug-in's quote-to-work-order BPMN
    $ curl -X POST /api/v1/workflow/process-instances
           {"processDefinitionKey":"plugin-printing-shop-quote-to-work-order",
            "variables":{"quoteCode":"Q-007","itemCode":"BOOK-HARDCOVER","quantity":500}}
      → 201 {"ended":true,
             "variables":{"quoteCode":"Q-007",
                          "itemCode":"BOOK-HARDCOVER",
                          "quantity":500,
                          "workOrderCode":"WO-FROM-PRINTINGSHOP-Q-007",
                          "workOrderRequested":true}}
    
    Log lines observed:
      [plugin:printing-shop] quote Q-007: publishing WorkOrderRequestedEvent
         (code=WO-FROM-PRINTINGSHOP-Q-007, item=BOOK-HARDCOVER, qty=500)
      [production] WorkOrderRequestedEvent creating work order 'WO-FROM-PRINTINGSHOP-Q-007'
         for item 'BOOK-HARDCOVER' x 500 (source='plugin:printing-shop:quote:Q-007')
    
    # 3) verify the WorkOrder now exists in pbc-production
    $ curl /api/v1/production/work-orders
      → [{"id":"029c2482-...",
          "code":"WO-FROM-PRINTINGSHOP-Q-007",
          "outputItemCode":"BOOK-HARDCOVER",
          "outputQuantity":500.0,
          "status":"DRAFT",
          "sourceSalesOrderCode":null,
          "inputs":[], "ext":{}}]
    
    # 4) run the SAME BPMN a second time — verify idempotent
    $ curl -X POST /api/v1/workflow/process-instances
           {same body as above}
      → 201  (process ends, workOrderRequested=true, new event published + delivered)
    $ curl /api/v1/production/work-orders
      → count=1, still only WO-FROM-PRINTINGSHOP-Q-007
    ```
    
    Every single step runs through an api.v1 public surface. No framework
    core code knows the printing-shop plug-in exists; no plug-in code knows
    pbc-production exists. They meet on the event bus, and the outbox
    guarantees the delivery.
    
    ## Tests
    
    - 3 new tests in `pbc-production/.../WorkOrderRequestedSubscriberTest`:
      * `subscribe registers one listener for WorkOrderRequestedEvent`
      * `handle creates a work order from the event fields` — captures the
        `CreateWorkOrderCommand` and asserts every field
      * `handle short-circuits when a work order with that code already exists`
        — proves the idempotent branch
    - Total framework unit tests: 278 (was 275), all green.
    
    ## What this unblocks
    
    - **Richer multi-step BPMNs** in the plug-in that chain plate
      approval + quote → work order + production start + completion.
    - **Plug-in-owned Quote entity** — the printing-shop plug-in can now
      introduce a `plugin_printingshop__quote` table via its own Liquibase
      changelog and have its HTTP endpoint create quotes that kick off the
      quote-to-work-order workflow automatically (or on operator confirm).
    - **pbc-production routings/operations (v3)** — each operation becomes
      a BPMN step, potentially driven by plug-ins contributing custom
      steps via the same TaskHandler + event seam.
    - **Second reference plug-in** — any new customer plug-in can publish
      `WorkOrderRequestedEvent` from its own workflows without any
      framework change.
    
    ## Non-goals (parking lot)
    
    - The handler publishes but does not also read pbc-production state
      back. A future "wait for WO completion" BPMN step could subscribe
      to `WorkOrderCompletedEvent` inside a user-task + signal flow, but
      the engine's signal/correlation machinery isn't wired to
      plug-ins yet.
    - Quote entity + HTTP + real business logic. REF.1 proves the
      cross-PBC event seam; the richer quote lifecycle is a separate
      chunk that can layer on top of this.
    - Transactional rollback integration test. The synchronous bus +
      `Propagation.MANDATORY` guarantees it, but an explicit test that
      a subscriber throw rolls back both the ledger-adjacent writes and
      the Flowable process state would be worth adding with a real
      test container run.
  • Completes the plug-in side of the embedded Flowable story. The P2.1
    core made plug-ins able to register TaskHandlers; this chunk makes
    them able to ship the BPMN processes those handlers serve.
    
    ## Why Flowable's built-in auto-deployer couldn't do it
    
    Flowable's Spring Boot starter scans the host classpath at engine
    startup for `classpath[*]:/processes/[*].bpmn20.xml` and auto-deploys
    every hit (the literal glob is paraphrased because the Kotlin KDoc
    comment below would otherwise treat the embedded slash-star as the
    start of a nested comment — feedback memory "Kotlin KDoc nested-
    comment trap"). PF4J plug-ins load through an isolated child
    classloader that is NOT visible to that scan, so a `processes/*.bpmn20.xml`
    resource shipped inside a plug-in JAR is never seen. This chunk adds
    a dedicated host-side deployer that opens each plug-in JAR file
    directly (same JarFile walk pattern as
    `MetadataLoader.loadFromPluginJar`) and hand-registers the BPMNs
    with the Flowable `RepositoryService`.
    
    ## Mechanism
    
    ### New PluginProcessDeployer (platform-workflow)
    
    One Spring bean, two methods:
    
    - `deployFromPlugin(pluginId, jarPath): String?` — walks the JAR,
      collects every entry whose name starts with `processes/` and ends
      with `.bpmn20.xml` or `.bpmn`, and bundles the whole set into one
      Flowable `Deployment` named `plugin:<id>` with `category = pluginId`.
      Returns the deployment id or null (missing JAR / no BPMN resources).
      One deployment per plug-in keeps undeploy atomic and makes the
      teardown query unambiguous.
    - `undeployByPlugin(pluginId): Int` — runs
      `createDeploymentQuery().deploymentCategory(pluginId).list()` and
      calls `deleteDeployment(id, cascade=true)` on each hit. Cascading
      removes process instances and history rows along with the
      deployment — "uninstalling a plug-in makes it disappear". Idempotent:
      a second call returns 0.
    
    The deployer reads the JAR entries into byte arrays inside the
    JarFile's `use` block and then passes the bytes to
    `DeploymentBuilder.addBytes(name, bytes)` outside the block, so the
    jar handle is already closed by the time Flowable sees the
    deployment. No input-stream lifetime tangles.
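
    The bytes-outlive-the-handle pattern looks roughly like this (the
    `addBytes` lambda stands in for Flowable's
    `DeploymentBuilder.addBytes`; names illustrative):

    ```kotlin
    import java.io.File
    import java.util.jar.JarFile

    // Sketch: read every BPMN entry into memory inside the JarFile's
    // use block, then hand the bytes to the deployer after the handle
    // is closed.
    fun collectBpmnResources(jarPath: File): Map<String, ByteArray> =
        JarFile(jarPath).use { jar ->
            java.util.Collections.list(jar.entries())
                .filter { entry ->
                    !entry.isDirectory &&
                        entry.name.startsWith("processes/") &&
                        (entry.name.endsWith(".bpmn20.xml") || entry.name.endsWith(".bpmn"))
                }
                .associate { entry ->
                    entry.name to jar.getInputStream(entry).use { it.readBytes() }
                }
        } // jar handle closed here; the byte arrays outlive it

    fun deployFromPlugin(jarPath: File, addBytes: (name: String, bytes: ByteArray) -> Unit): Int {
        val resources = collectBpmnResources(jarPath)
        resources.forEach { (name, bytes) -> addBytes(name, bytes) }
        return resources.size
    }
    ```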
    
    ### VibeErpPluginManager wiring
    
    - New constructor dependency on `PluginProcessDeployer`.
    - Deploy happens AFTER `start(context)` succeeds. The ordering matters
      because a plug-in can only register its TaskHandlers during
      `start(context)`, and a deployed BPMN whose service-task delegate
      expression resolves to a key with no matching handler would still
      deploy (Flowable only resolves delegates at process-start time).
      Registering handlers first is the safer default: the moment the
      deployment lands, every referenced handler is already in the
      TaskHandlerRegistry.
    - BPMN deployment failure AFTER a successful `start(context)` now
      fully unwinds the plug-in state: the manager calls
      `instance.stop()`, removes the plug-in from the `started` list,
      strips its endpoints and TaskHandlers, and calls
      `undeployByPlugin` (belt and suspenders — the deploy attempt may
      have partially succeeded). That mirrors the existing
      start-failure unwinding, so the framework never ends up with a
      half-installed plug-in after any step throws.
    - `destroy()` calls `undeployByPlugin(pluginId)` alongside the
      existing `unregisterAllByOwner(pluginId)`.
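
    The ordering-plus-unwind contract is small enough to sketch
    (lambda seams stand in for the real manager calls):

    ```kotlin
    // Sketch: handlers register during start(); deploy runs after; any
    // deploy failure unwinds so no plug-in is left half-installed.
    class PluginStartSequence(
        private val start: () -> Unit,        // plugin.start(context): registers handlers
        private val deployBpmn: () -> Unit,   // PluginProcessDeployer.deployFromPlugin
        private val unwind: () -> Unit,       // stop() + strip endpoints/handlers + undeploy
    ) {
        fun run(): Boolean {
            start()
            return try {
                deployBpmn()
                true
            } catch (e: Exception) {
                unwind()
                false
            }
        }
    }
    ```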
    
    ### Reference plug-in BPMN
    
    `reference-customer/plugin-printing-shop/src/main/resources/processes/plate-approval.bpmn20.xml`
    — a minimal two-task process (`start` → serviceTask → `end`) whose
    serviceTask id is `printing_shop.plate.approve`, matching the
    PlateApprovalTaskHandler key landed in the previous commit. Process
    definition key is `plugin-printing-shop-plate-approval` (distinct
    from the serviceTask id because BPMN 2.0 requires element ids to be
    unique per document — same separation used for the core ping
    process).
    
    ## Smoke test (fresh DB, plug-in staged)
    
    ```
    $ docker compose down -v && docker compose up -d db
    $ ./gradlew :distribution:bootRun &
    ...
    registered TaskHandler 'vibeerp.workflow.ping' owner='core' ...
    TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
    ...
    plug-in 'printing-shop' Liquibase migrations applied successfully
    [plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
    registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' ...
    [plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve
    PluginProcessDeployer: plug-in 'printing-shop' deployed 1 BPMN resource(s) as Flowable deploymentId='4e9f...': [processes/plate-approval.bpmn20.xml]
    
    $ curl /api/v1/workflow/definitions (as admin)
    [
      {"key":"plugin-printing-shop-plate-approval",
       "name":"Printing shop — plate approval",
       "version":1,
       "deploymentId":"4e9f85a6-33cf-11f1-acaa-1afab74ef3b4",
       "resourceName":"processes/plate-approval.bpmn20.xml"},
      {"key":"vibeerp-workflow-ping",
       "name":"vibe_erp workflow ping",
       "version":1,
       "deploymentId":"4f48...",
       "resourceName":"vibeerp-ping.bpmn20.xml"}
    ]
    
    $ curl -X POST /api/v1/workflow/process-instances
             {"processDefinitionKey":"plugin-printing-shop-plate-approval",
              "variables":{"plateId":"PLATE-007"}}
      → {"processInstanceId":"5b1b...",
         "ended":true,
         "variables":{"plateId":"PLATE-007",
                      "plateApproved":true,
                      "approvedBy":"user:admin",
                      "approvedAt":"2026-04-09T04:48:30.514523Z"}}
    
    $ kill -TERM <pid>
    [ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
    [ionShutdownHook] PluginProcessDeployer: plug-in 'printing-shop' deployment '4e9f...' removed (cascade)
    ```
    
    Full end-to-end loop closed: plug-in ships a BPMN → host reads it
    out of the JAR → Flowable deployment registered under the plug-in
    category → HTTP caller starts a process instance via the standard
    `/api/v1/workflow/process-instances` surface → dispatcher routes by
    activity id to the plug-in's TaskHandler → handler writes output
    variables + plug-in sees the authenticated caller as `ctx.principal()`
    via the reserved `__vibeerp_*` process-variable propagation from
    commit `ef9e5b42`. SIGTERM cleanly undeploys the plug-in's BPMNs.
    
    ## Tests
    
    - 6 new unit tests on `PluginProcessDeployerTest`:
      * `deployFromPlugin returns null when jarPath is not a regular file`
        — guard against dev-exploded plug-in dirs
      * `deployFromPlugin returns null when the plug-in jar has no BPMN resources`
      * `deployFromPlugin reads every bpmn resource under processes and
        deploys one bundle` — builds a real temporary JAR with two BPMN
        entries + a README + a metadata YAML, verifies that both BPMNs
        go through `addBytes` with the right names and the README /
        metadata entries are skipped
      * `deployFromPlugin rejects a blank plug-in id`
      * `undeployByPlugin returns zero when there is nothing to remove`
      * `undeployByPlugin cascades a deleteDeployment per matching deployment`
    - Total framework unit tests: 275 (was 269), all green.
    
    ## Kotlin trap caught during authoring (feedback memory paid out)
    
    First compile failed with `Unclosed comment` on the last line of
    `PluginProcessDeployer.kt`. The culprit was a KDoc paragraph
    containing the literal glob
    `classpath*:/processes/*.bpmn20.xml`: Kotlin block comments nest,
    so the embedded `/*` inside the backtick span opened a nested
    comment, and the KDoc's closing `*/` terminated only that inner
    comment, leaving the outer one unclosed. The saved
    feedback-memory entry "Kotlin KDoc nested-comment
    trap" covered exactly this situation — the fix is to spell out glob
    characters as `[star]` / `[slash]` (or the word "slash-star") inside
    documentation so the literal `/*` never appears. The KDoc now
    documents the behaviour AND the workaround so the next maintainer
    doesn't hit the same trap.
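
    A compiling illustration of the workaround (hypothetical
    function; only the KDoc spelling matters):

    ```kotlin
    /**
     * Matches every plug-in BPMN under
     * classpath[star]:/processes/[star].bpmn20.xml.
     *
     * The glob characters are spelled [star] because Kotlin block
     * comments nest: a literal slash-star inside this KDoc would open a
     * nested comment and the file would fail with "Unclosed comment".
     */
    fun processResourceGlob(): String = "classpath*:/processes/*.bpmn20.xml"
    ```

    The literal glob is safe inside a string (or a fenced code block);
    it is only fatal inside a Kotlin comment.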
    
    ## Non-goals (still parking lot)
    
    - Handler-side access to the full PluginContext — PlateApprovalTaskHandler
      is still a pure function because the framework doesn't hand
      TaskHandlers a context object. For REF.1 (real quote→job-card)
      handlers will need to read + mutate plug-in-owned tables; the
      cleanest approach is closure-capture inside the plug-in class
      (handler instantiated inside `start(context)` with the context
      captured in the outer scope). Decision deferred to REF.1.
    - BPMN resource hot reload. The deployer runs once per plug-in
      start; a plug-in whose BPMN changes under its feet at runtime
      isn't supported yet.
    - Plug-in-shipped DMN / CMMN resources. The deployer only looks at
      `.bpmn20.xml` and `.bpmn`. Decision-table and case-management
      resources are not on the v1.0 critical path.
  • Completes the HasExt rollout across every core entity with an ext
    column. Item was the last one that carried an ext JSONB column
    without any validation wired — a plug-in could declare custom fields
    for Item but nothing would enforce them on save. This fixes that and
    restores two printing-shop-specific Item fields to the reference
    plug-in that were temporarily dropped from the previous Tier 1
    customization chunk (commit 16c59310) precisely because Item wasn't
    wired.
    
    Code changes:
      - Item implements HasExt; `ext` becomes `override var ext`, a
        companion constant holds the entity name "Item".
      - ItemService injects ExtJsonValidator, calls applyTo() in both
        create() and update() (create + update symmetry like partners
        and locations). parseExt passthrough added for response mappers.
      - CreateItemCommand, UpdateItemCommand, CreateItemRequest,
        UpdateItemRequest gain a nullable ext field.
      - ItemResponse now carries the parsed ext map, same shape as
        PartnerResponse / LocationResponse / SalesOrderResponse.
      - pbc-catalog build.gradle adds
        `implementation(project(":platform:platform-metadata"))`.
      - ItemServiceTest constructor updated to pass the new validator
        dependency with no-op stubs.
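
    The validation contract those changes wire in can be sketched as
    follows (illustrative signatures; the real validator also checks
    each value's type against the metadata registry):

    ```kotlin
    // Sketch of the HasExt pattern on Item plus the undeclared-key check.
    interface HasExt {
        var ext: Map<String, Any?>?
    }

    class Item(override var ext: Map<String, Any?>? = null) : HasExt {
        companion object { const val ENTITY_NAME = "Item" }
    }

    class ExtJsonValidator(
        private val declared: Map<String, Set<String>>,  // targetEntity -> field names
    ) {
        // Called from both create() and update() so the two paths stay
        // symmetric, as for partners and locations.
        fun applyTo(entityName: String, target: HasExt, incoming: Map<String, Any?>?) {
            if (incoming == null) return
            val rogue = incoming.keys - declared[entityName].orEmpty()
            require(rogue.isEmpty()) {
                "ext contains undeclared key(s) for '$entityName': ${rogue.toList()}"
            }
            target.ext = incoming
        }
    }
    ```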
    
    Plug-in YAML (printing-shop.yml):
      - Re-added `printing_shop_color_count` (integer) and
        `printing_shop_paper_gsm` (integer) custom fields targeting Item.
        These were originally in the commit 16c59310 draft but removed
        because Item wasn't wired. Now that Item is wired, they're back
        and actually enforced.
    
    Smoke verified end-to-end against real Postgres with the plug-in
    staged:
      - GET /_meta/metadata/custom-fields/Item returns 2 plug-in fields.
      - POST /catalog/items with `{printing_shop_color_count: 4,
        printing_shop_paper_gsm: 170}` → 201, canonical form persisted.
      - GET roundtrip preserves both integer values.
      - POST with `printing_shop_color_count: "not-a-number"` → 400
        "ext.printing_shop_color_count: not a valid integer: 'not-a-number'".
      - POST with `rogue_key` → 400 "ext contains undeclared key(s)
        for 'Item': [rogue_key]".
    
    Six of eight PBCs now participate in HasExt:
      Partner, Location, SalesOrder, PurchaseOrder, WorkOrder, Item.
    The remaining two are pbc-identity (User has no ext column by
    design — identity is a security concern, not a customization one)
    and pbc-finance (JournalEntry is derived state from events, no
    customization surface).
    
    Five core entities are wired for Tier 1 custom fields as of this commit:
      Partner     (2 core + 1 plug-in)
      Item        (0 core + 2 plug-in)
      SalesOrder  (0 core + 1 plug-in)
      WorkOrder   (2 core + 1 plug-in)
      Location    (0 core + 0 plug-in — wired but no declarations yet)
    
    246 unit tests, all green. 18 Gradle subprojects.
    zichun authored
  • The reference printing-shop plug-in now demonstrates the framework's
    most important promise — that a customer plug-in can EXTEND core
    business entities (Partner, SalesOrder, WorkOrder) with customer-
    specific fields WITHOUT touching any core code.
    
    Added to `printing-shop.yml` customFields section:
    
      Partner:
        printing_shop_customer_segment (enum: agency, in_house, end_client, reseller)
    
      SalesOrder:
        printing_shop_quote_number (string, maxLength 32)
    
      WorkOrder:
        printing_shop_press_id (string, maxLength 32)
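    Expressed as a YAML fragment, the three declarations above might read
    as follows — keys, types, and constraints are from this commit, while
    the attribute names of the customFields schema are assumptions:

    ```yaml
    # Illustrative shape only; attribute names are not confirmed.
    customFields:
      - targetEntity: Partner
        key: printing_shop_customer_segment
        type: enum
        allowedValues: [agency, in_house, end_client, reseller]
      - targetEntity: SalesOrder
        key: printing_shop_quote_number
        type: string
        maxLength: 32
      - targetEntity: WorkOrder
        key: printing_shop_press_id
        type: string
        maxLength: 32
    ```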
    
    Mechanism (no code changes, all metadata-driven):
      1. Plug-in YAML carries a `customFields:` section alongside its
         entities / permissions / menus.
      2. MetadataLoader.loadFromPluginJar reads the section and inserts
         rows into `metadata__custom_field` tagged
         `source='plugin:printing-shop'`.
      3. CustomFieldRegistry.refresh re-reads ALL rows (both `source='core'`
         and `source='plugin:*'`) and merges them by `targetEntity`.
      4. ExtJsonValidator.applyTo now validates incoming ext against the
         MERGED set, so a POST to /api/v1/partners/partners can include
         both core-declared (partners_industry) and plug-in-declared
         (printing_shop_customer_segment) fields in the same request,
         and both are enforced.
      5. Uninstalling the plug-in removes the plugin:printing-shop rows
         and the fields disappear from validation AND from the UI's
         custom-field catalog — no migration, no restart of anything
         other than the plug-in lifecycle itself.
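    Steps 3–4 can be sketched minimally — the registry merges ALL rows by
    targetEntity regardless of source, so validation sees core and plug-in
    fields together. Row and field shapes here are assumptions:

    ```kotlin
    // Hypothetical sketch of the merge; the real CustomFieldRegistry differs.
    data class CustomField(val targetEntity: String, val key: String, val source: String)

    class CustomFieldRegistry {
        @Volatile private var byEntity: Map<String, List<CustomField>> = emptyMap()

        fun refresh(allRows: List<CustomField>) {
            // source='core' and source='plugin:*' rows collapse into one view
            byEntity = allRows.groupBy { it.targetEntity }
        }

        fun fieldsFor(entity: String): List<CustomField> = byEntity[entity].orEmpty()
    }
    ```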
    
    Convention established: plug-in-contributed custom field keys are
    prefixed with the plug-in id (e.g. `printing_shop_*`) so two
    independent plug-ins can't collide on the same entity. Documented
    inline in the YAML.
    
    Smoke verified end-to-end against real Postgres with the plug-in
    staged:
      - CustomFieldRegistry logs "refreshed 4 custom fields" after core
        load, then "refreshed 7 custom fields across 3 entities" after
        plug-in load — 4 core + 3 plug-in fields, merged.
      - GET /_meta/metadata/custom-fields/Partner returns 3 fields
        (2 core + 1 plug-in).
      - GET /_meta/metadata/custom-fields/SalesOrder returns 1 field
        (plug-in only).
      - GET /_meta/metadata/custom-fields/WorkOrder returns 3 fields
        (2 core + 1 plug-in).
      - POST /partners/partners with BOTH a core field AND a plug-in
        field → 201, canonical form persisted.
      - POST with an invalid plug-in enum value → 400 "ext.printing_shop_customer_segment:
        value 'ghost' is not in allowed set [agency, in_house, end_client, reseller]".
      - POST /orders/sales-orders with printing_shop_quote_number → 201,
        quote number round-trips.
      - POST /production/work-orders with mixed production_priority +
        printing_shop_press_id → 201, both fields persisted.
    
    This is the first executable demonstration of Clean Core
    extensibility (CLAUDE.md guardrail #7) — the plug-in extends
    core-owned data through a stable public contract (api.v1 HasExt +
    metadata__custom_field) without reaching into any core or platform
    internal class. This is the "A" grade on the A/B/C/D extensibility
    safety scale.
    
    No code changes. No test changes. 246 unit tests, still green.
    zichun authored

  • The eighth cross-cutting platform service is live: plug-ins and PBCs
    now have a real Translator and LocaleProvider instead of the
    UnsupportedOperationException stubs that have shipped since v0.5.
    
    What landed
    -----------
    * New Gradle subproject `platform/platform-i18n` (13 modules total).
    * `IcuTranslator` — backed by ICU4J's MessageFormat (named placeholders,
      plurals, gender, locale-aware number/date/currency formatting), the
      format every modern translation tool speaks natively. JDK ResourceBundle
      handles per-locale fallback (zh_CN → zh → root).
    * The translator takes a list of `BundleLocation(classLoader, baseName)`
      pairs and tries them in order. For the **core** translator the chain
      is just `[(host, "messages")]`; for a **per-plug-in** translator
      constructed by VibeErpPluginManager it's
      `[(pluginClassLoader, "META-INF/vibe-erp/i18n/messages"), (host, "messages")]`
      so plug-in keys override host keys, but plug-ins still inherit
      shared keys like `errors.not_found`.
    * Critical detail: the plug-in baseName uses a path the host does NOT
      publish, because PF4J's `PluginClassLoader` is parent-first — a
      `getResource("messages.properties")` against the plug-in classloader
      would find the HOST bundle through the parent chain, defeating the
      per-plug-in override entirely. Naming the plug-in resource somewhere
      the host doesn't claim sidesteps the trap.
    * The translator disables `ResourceBundle.Control`'s automatic JVM-default
      locale fallback. The default control walks `requested → root → JVM
      default → root` which would silently serve German strings to a
      Japanese-locale request just because the German bundle exists. The
      fallback chain stops at root within a bundle, then moves to the next
      bundle location, then returns the key string itself.
    * `RequestLocaleProvider` reads the active HTTP request's
      Accept-Language via `RequestContextHolder` + the servlet container's
      `getLocale()`. Outside an HTTP request (background jobs, workflow
      tasks, MCP agents) it falls back to the configured default locale
      (`vibeerp.i18n.defaultLocale`, default `en`). Importantly, when an
      HTTP request HAS no Accept-Language header it ALSO falls back to the
      configured default — never to the JVM's locale.
    * `I18nConfiguration` exposes `coreTranslator` and `coreLocaleProvider`
      beans. Per-plug-in translators are NOT beans — they're constructed
      imperatively per plug-in start in VibeErpPluginManager because each
      needs its own classloader at the front of the resolution chain.
    * `DefaultPluginContext` now wires `translator` and `localeProvider`
      for real instead of throwing `UnsupportedOperationException`.
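    The chain-plus-no-fallback behaviour described above can be sketched
    with plain JDK APIs — BundleLocation and the lookup loop are
    assumptions inferred from this commit, not the real IcuTranslator:

    ```kotlin
    import java.util.Locale
    import java.util.MissingResourceException
    import java.util.ResourceBundle

    // Hypothetical sketch; the real translator also runs ICU MessageFormat
    // over the resolved string before returning it.
    data class BundleLocation(val classLoader: ClassLoader, val baseName: String)

    // No JVM-default-locale fallback: requested -> parent locales -> root, then stop.
    private val noJvmFallback =
        ResourceBundle.Control.getNoFallbackControl(ResourceBundle.Control.FORMAT_PROPERTIES)

    fun resolve(chain: List<BundleLocation>, locale: Locale, key: String): String {
        for ((loader, baseName) in chain) {
            try {
                val bundle = ResourceBundle.getBundle(baseName, locale, loader, noJvmFallback)
                if (bundle.containsKey(key)) return bundle.getString(key)
            } catch (e: MissingResourceException) {
                // This location ships no bundle at all; try the next one.
            }
        }
        return key // last resort: the key string itself, as described above
    }
    ```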
    
    Bundles
    -------
    * Core: `platform-i18n/src/main/resources/messages.properties` (English),
      `messages_zh_CN.properties` (Simplified Chinese), `messages_de.properties`
      (German). Six common keys (errors, ok/cancel/save/delete) and an ICU
      plural example for `counts.items`. Java 9+ JEP 226 reads .properties
      files as UTF-8 by default, so Chinese characters are written directly
      rather than as `\uXXXX` escapes.
    * Reference plug-in: moved from the broken `i18n/messages_en-US.properties`
      / `messages_zh-CN.properties` (wrong path, hyphen-locale filenames
      ResourceBundle ignores) to the canonical
      `META-INF/vibe-erp/i18n/messages.properties` /
      `messages_zh_CN.properties` paths with underscore locale tags.
      Added a new `printingshop.plate.created` key with an ICU plural for
      `ink_count` to demonstrate non-trivial argument substitution.
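    For reference, an ICU plural key in a bundle looks like this — the key
    names appear in this commit, but the exact message texts shipped are
    illustrative:

    ```properties
    # messages.properties — illustrative message bodies
    errors.not_found=Not found
    counts.items={count, plural, one {# item} other {# items}}
    ```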
    
    End-to-end smoke test
    ---------------------
    Reset Postgres, booted the app, hit POST /api/v1/plugins/printing-shop/plates
    with three different Accept-Language headers:
    * (no header)         → "Plate 'PLATE-001' created with no inks." (en-US, plug-in base bundle)
    * `Accept-Language: zh-CN` → "已创建印版 'PLATE-002' (无油墨)。" (zh-CN, plug-in zh_CN bundle)
    * `Accept-Language: de`    → "Plate 'PLATE-003' created with no inks." (de, but the plug-in
                                  ships no German bundle so it falls back to the plug-in base
                                  bundle — correct, the key is plug-in-specific)
    Regression: identity, catalog, partners, and `GET /plates` all still
    HTTP 200 after the i18n wiring change.
    
    Build
    -----
    * `./gradlew build`: 13 subprojects, 118 unit tests (was 107 / 12),
      all green. The 11 new tests cover ICU plural rendering, named-arg
      substitution, locale fallback (zh_CN → root, ja → root via NO_FALLBACK),
      cross-classloader override (a real JAR built in /tmp at test time),
      and RequestLocaleProvider's three resolution paths
      (no request → default; Accept-Language present → request locale;
      request without Accept-Language → default, NOT JVM locale).
    * The architectural rule still enforced: platform-plugins now imports
      platform-i18n, which is a platform-* dependency (allowed), not a
      pbc-* dependency (forbidden).
    
    What was deferred
    -----------------
    * User-preferred locale from the authenticated user's profile row is
      NOT in the resolution chain yet — the `LocaleProvider` interface
      leaves room for it but the implementation only consults
      Accept-Language and the configured default. Adding it slots in
      between request and default without changing the api.v1 surface.
    * The metadata translation overrides table (`metadata__translation`)
      is also deferred — the `Translator` JavaDoc mentions it as the
      first lookup source, but right now keys come from .properties files
      only. Once Tier 1 customization lands (P3.x), users will be able
      to override any key's string from the SPA without touching code.
    zichun authored
  • Adds the foundation for the entire Tier 1 customization story. Core
    PBCs and plug-ins now ship YAML files declaring their entities,
    permissions, and menus; a `MetadataLoader` walks the host classpath
    and each plug-in JAR at boot, upserts the rows tagged with their
    source, and exposes them at a public REST endpoint so the future
    SPA, AI-agent function catalog, OpenAPI generator, and external
    introspection tooling can all see what the framework offers without
    scraping code.
    
    What landed:
    
    * New `platform/platform-metadata/` Gradle subproject. Depends on
      api-v1 + platform-persistence + jackson-yaml + spring-jdbc.
    
    * `MetadataYamlFile` DTOs (entities, permissions, menus). Forward-
      compatible: unknown top-level keys are ignored, so a future plug-in
      built against a newer schema (forms, workflows, rules, translations)
      loads cleanly on an older host that doesn't know those sections yet.
    
    * `MetadataLoader` with two entry points:
    
        loadCore() — uses Spring's PathMatchingResourcePatternResolver
          against the host classloader. Finds every
          classpath*:META-INF/vibe-erp/metadata/*.yml across all jars
          contributing to the application. Tagged source='core'.
    
        loadFromPluginJar(pluginId, jarPath) — opens ONE specific
          plug-in JAR via java.util.jar.JarFile and walks its entries
          directly. This is critical: a plug-in's PluginClassLoader is
          parent-first, so a classpath*: scan against it would ALSO
          pick up the host's metadata files via parent classpath. We
          saw this in the first smoke run — the plug-in source ended
          up with 6 entities (the plug-in's 2 + the host's 4) before
          the fix. Walking the JAR file directly guarantees only the
          plug-in's own files load. Tagged source='plugin:<id>'.
    
      Both entry points use the same delete-then-insert idempotent core
      (doLoad). Loading the same source twice produces the same final
      state. User-edited metadata (source='user') is NEVER touched by
      either path — it survives boot, plug-in install, and plug-in
      upgrade. This is what lets a future SPA "Customize" UI add custom
      fields without fearing they'll be wiped on the next deploy.
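    The direct JAR walk can be sketched like this — the classloader is
    never consulted, so parent-first lookup can't leak host files in. The
    function name and return shape are assumptions:

    ```kotlin
    import java.util.Collections
    import java.util.jar.JarFile

    // Hypothetical sketch of the loadFromPluginJar entry walk.
    fun readPluginMetadataYaml(jarPath: String): List<String> =
        JarFile(jarPath).use { jar ->
            Collections.list(jar.entries())
                .filter { !it.isDirectory }
                .filter {
                    it.name.startsWith("META-INF/vibe-erp/metadata/") &&
                        it.name.endsWith(".yml")
                }
                .map { entry ->
                    jar.getInputStream(entry).use { it.readBytes().decodeToString() }
                }
        }
    ```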
    
    * `VibeErpPluginManager.afterPropertiesSet()` now calls
      metadataLoader.loadCore() at the very start, then walks plug-ins
      and calls loadFromPluginJar(...) for each one between Liquibase
      migration and start(context). Order is guaranteed: core → linter
      → migrate → metadata → start. The CommandLineRunner I originally
      put `loadCore()` in turned out to be wrong because Spring runs
      CommandLineRunners AFTER InitializingBean.afterPropertiesSet(),
      so the plug-in metadata was loading BEFORE core — the wrong way
      around. Calling loadCore() inline in the plug-in manager fixes
      the ordering without any @Order(...) gymnastics.
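    The guaranteed order (core → lint → migrate → metadata → start) can be
    sketched as follows — the helper method names are hypothetical, only
    loadCore() and loadFromPluginJar(...) appear in this commit:

    ```kotlin
    // Hypothetical sketch of VibeErpPluginManager.afterPropertiesSet().
    override fun afterPropertiesSet() {
        metadataLoader.loadCore()            // core metadata first, always
        loadPlugins()                        // PF4J load
        resolvedPlugins().forEach { plugin ->
            lint(plugin)                     // reject before any code runs
            migrateLiquibase(plugin)
            metadataLoader.loadFromPluginJar(plugin.id, plugin.jarPath)
            startWithContext(plugin)         // start(context) last
        }
    }
    ```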
    
    * `MetadataController` exposes:
        GET /api/v1/_meta/metadata           — all three sections
        GET /api/v1/_meta/metadata/entities  — entities only
        GET /api/v1/_meta/metadata/permissions
        GET /api/v1/_meta/metadata/menus
      Public allowlist (covered by the existing /api/v1/_meta/** rule
      in SecurityConfiguration). The metadata is intentionally non-
      sensitive — entity names, permission keys, menu paths. Nothing
      in here is PII or secret; the SPA needs to read it before the
      user has logged in.
    
    * YAML files shipped:
      - pbc-identity/META-INF/vibe-erp/metadata/identity.yml
        (User + Role entities, 6 permissions, Users + Roles menus)
      - pbc-catalog/META-INF/vibe-erp/metadata/catalog.yml
        (Item + Uom entities, 7 permissions, Items + UoMs menus)
      - reference plug-in/META-INF/vibe-erp/metadata/printing-shop.yml
        (Plate + InkRecipe entities, 5 permissions, Plates + Inks menus
        in a "Printing shop" section)
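    A metadata YAML file carrying all three sections might look like this
    — section names come from this commit, the per-entry field names are
    assumptions:

    ```yaml
    # Illustrative shape of printing-shop.yml; attribute names not confirmed.
    entities:
      - name: Plate
      - name: InkRecipe
    permissions:
      - key: printing_shop.plate.read
      - key: printing_shop.plate.write
    menus:
      - label: Plates
        section: Printing shop
        path: /printing-shop/plates
    ```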
    
    Tests: 4 MetadataLoaderTest cases (loadFromPluginJar happy paths,
    mixed sections, blank pluginId rejection, missing-file no-op wipe)
    + 7 MetadataYamlParseTest cases (DTO mapping, optional fields,
    section defaults, forward-compat unknown keys). Total now
    **92 unit tests** across 11 modules, all green.
    
    End-to-end smoke test against fresh Postgres + plug-in loaded:
    
      Boot logs:
        MetadataLoader: source='core' loaded 4 entities, 13 permissions,
          4 menus from 2 file(s)
        MetadataLoader: source='plugin:printing-shop' loaded 2 entities,
          5 permissions, 2 menus from 1 file(s)
    
      HTTP smoke (everything green):
        GET /api/v1/_meta/metadata (no auth)              → 200
          6 entities, 18 permissions, 6 menus
          entity names: User, Role, Item, Uom, Plate, InkRecipe
          menu sections: Catalog, Printing shop, System
        GET /api/v1/_meta/metadata/entities                → 200
        GET /api/v1/_meta/metadata/menus                   → 200
    
      Direct DB verification:
        metadata__entity:    core=4, plugin:printing-shop=2
        metadata__permission: core=13, plugin:printing-shop=5
        metadata__menu:      core=4, plugin:printing-shop=2
    
      Idempotency: restart the app, identical row counts.
    
      Existing endpoints regression:
        GET /api/v1/identity/users (Bearer)               → 1 user
        GET /api/v1/catalog/uoms (Bearer)                  → 15 UoMs
        GET /api/v1/plugins/printing-shop/ping (Bearer)    → 200
    
    Bugs caught and fixed during the smoke test:
    
      • The first attempt loaded core metadata via a CommandLineRunner
        annotated @Order(HIGHEST_PRECEDENCE) and per-plug-in metadata
        inline in VibeErpPluginManager.afterPropertiesSet(). Spring
        runs all InitializingBeans BEFORE any CommandLineRunner, so
        the plug-in metadata loaded first and the core load came
        second — wrong order. Fix: drop CoreMetadataInitializer
        entirely; have the plug-in manager call metadataLoader.loadCore()
        directly at the start of afterPropertiesSet().
    
      • The first attempt's plug-in load used
        metadataLoader.load(pluginClassLoader, ...) which used Spring's
        PathMatchingResourcePatternResolver against the plug-in's
        classloader. PluginClassLoader is parent-first, so the resolver
        enumerated BOTH the plug-in's own JAR AND the host classpath's
        metadata files, tagging core entities as source='plugin:<id>'
        and corrupting the seed counts. Fix: refactor MetadataLoader
        to expose loadFromPluginJar(pluginId, jarPath) which opens
        the plug-in JAR directly via java.util.jar.JarFile and walks
        its entries — never asking the classloader at all. The
        api-v1 surface didn't change.
    
      • Two KDoc comments contained the literal string `*.yml` after
        a `/` character (`/metadata/*.yml`), forming the `/*` pattern
        that Kotlin's lexer treats as a nested-comment opener. The
        file failed to compile with "Unclosed comment". This is the
        third time I've hit this trap; rewriting both KDocs to avoid
        the literal `/*` sequence.
    
      • The MetadataLoaderTest's hand-rolled JAR builder didn't include
        explicit directory entries for parent paths. Real Gradle JARs
        do include them, and Spring's PathMatchingResourcePatternResolver
        needs them to enumerate via classpath*:. Fixed the test helper
        to write directory entries for every parent of each file.
    
    Implementation plan refreshed: P1.5 marked DONE. Next priority
    candidates: P5.2 (pbc-partners — third PBC clone) and P3.4 (custom
    field application via the ext jsonb column, which would unlock the
    full Tier 1 customization story).
    
    Framework state: 17→18 commits, 10→11 modules, 81→92 unit tests,
    metadata seeded for 6 entities + 18 permissions + 6 menus.
    vibe_erp authored
  • The reference printing-shop plug-in graduates from "hello world" to a
    real customer demonstration: it now ships its own Liquibase changelog,
    owns its own database tables, and exposes a real domain (plates and
    ink recipes) via REST that goes through `context.jdbc` — a new
    typed-SQL surface in api.v1 — without ever touching Spring's
    `JdbcTemplate` or any other host internal type. A bytecode linter
    that runs before plug-in start refuses to load any plug-in that tries
    to import `org.vibeerp.platform.*` or `org.vibeerp.pbc.*` classes.
    
    What landed:
    
    * api.v1 (additive, binary-compatible):
      - PluginJdbc — typed SQL access with named parameters. Methods:
        query, queryForObject, update, inTransaction. No Spring imports
        leaked. Forces plug-ins to use named params (no positional ?).
      - PluginRow — typed nullable accessors over a single result row:
        string, int, long, uuid, bool, instant, bigDecimal. Hides
        java.sql.ResultSet entirely.
      - PluginContext.jdbc getter with default impl that throws
        UnsupportedOperationException so older builds remain binary
        compatible per the api.v1 stability rules.
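    Inferred from the method and accessor names above, the surface might
    look roughly like this — the generics, parameter order, and mapper
    shape are assumptions, not the real api.v1 declarations:

    ```kotlin
    import java.math.BigDecimal
    import java.time.Instant
    import java.util.UUID

    // Hypothetical sketch of the api.v1 typed-SQL surface.
    interface PluginRow {
        fun string(column: String): String?
        fun int(column: String): Int?
        fun long(column: String): Long?
        fun uuid(column: String): UUID?
        fun bool(column: String): Boolean?
        fun instant(column: String): Instant?
        fun bigDecimal(column: String): BigDecimal?
    }

    interface PluginJdbc {
        fun <T> query(sql: String, params: Map<String, Any?>, mapper: (PluginRow) -> T): List<T>
        fun <T> queryForObject(sql: String, params: Map<String, Any?>, mapper: (PluginRow) -> T): T?
        fun update(sql: String, params: Map<String, Any?>): Int
        fun <T> inTransaction(block: (PluginJdbc) -> T): T
    }
    ```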
    
    * platform-plugins — three new sub-packages:
      - jdbc/DefaultPluginJdbc backed by Spring's NamedParameterJdbcTemplate.
        ResultSetPluginRow translates each accessor through ResultSet.wasNull()
        so SQL NULL round-trips as Kotlin null instead of the JDBC defaults
        (0 for int, false for bool, etc. — bug factories).
      - jdbc/PluginJdbcConfiguration provides one shared PluginJdbc bean
        for the whole process. Per-plugin isolation lands later.
      - migration/PluginLiquibaseRunner looks for
        META-INF/vibe-erp/db/changelog.xml inside the plug-in JAR via
        the PF4J classloader and applies it via Liquibase against the
        host's shared DataSource. The unique META-INF path matters:
        plug-ins also see the host's parent classpath, where the host's
        own db/changelog/master.xml lives, and a collision causes
        Liquibase ChangeLogParseException at install time.
      - lint/PluginLinter walks every .class entry in the plug-in JAR
        via java.util.jar.JarFile + ASM ClassReader, visits every type/
        method/field/instruction reference, rejects on any reference to
        `org/vibeerp/platform/` or `org/vibeerp/pbc/` packages.
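    The wasNull() translation mentioned above follows the standard JDBC
    pattern — this is a sketch, not the real ResultSetPluginRow:

    ```kotlin
    import java.sql.ResultSet

    // JDBC's getInt() returns 0 (and getBoolean() false) for SQL NULL, so
    // every accessor must consult wasNull() before trusting the value.
    class ResultSetPluginRow(private val rs: ResultSet) {
        fun int(column: String): Int? {
            val v = rs.getInt(column)
            return if (rs.wasNull()) null else v
        }

        fun bool(column: String): Boolean? {
            val v = rs.getBoolean(column)
            return if (rs.wasNull()) null else v
        }
    }
    ```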
    
    * VibeErpPluginManager lifecycle is now load → lint → migrate → start:
      - lint runs immediately after PF4J's loadPlugins(); rejected
        plug-ins are unloaded with a per-violation error log and never
        get to run any code
      - migrate runs the plug-in's own Liquibase changelog; failure
        means the plug-in is loaded but skipped (loud warning, framework
        boots fine)
      - then PF4J's startPlugins() runs the no-arg start
      - then we walk loaded plug-ins and call each plug-in's start(context)
        with a fully-wired DefaultPluginContext (logger + endpoints +
        eventBus + jdbc). The plug-in's tables are guaranteed to exist
        by the time its lambdas run.
    
    * DefaultPluginContext.jdbc is no longer a stub. Plug-ins inject the
      shared PluginJdbc and use it to talk to their own tables.
    
    * Reference plug-in (PrintingShopPlugin):
      - Ships META-INF/vibe-erp/db/changelog.xml with two changesets:
        plugin_printingshop__plate (id, code, name, width_mm, height_mm,
        status) and plugin_printingshop__ink_recipe (id, code, name,
        cmyk_c/m/y/k).
      - Now registers seven endpoints:
          GET  /ping          — health
          GET  /echo/{name}   — path variable demo
          GET  /plates        — list
          GET  /plates/{id}   — fetch
          POST /plates        — create (with a racy check-then-insert
                                existence test, since plug-ins can't
                                import Spring's DataAccessException)
          GET  /inks
          POST /inks
      - All CRUD lambdas use context.jdbc with named parameters. The
        plug-in still imports nothing from org.springframework.* in its
        own code (it does reach the host's Jackson via reflection for
        JSON parsing — a deliberate v0.6 shortcut documented inline).
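    A list-endpoint body over context.jdbc might read like this — the
    table name and accessor names come from this commit, while the exact
    query/mapper signatures are assumptions:

    ```kotlin
    // Illustrative lambda body; registration API and DTO shape not confirmed.
    val plates = context.jdbc.query(
        "SELECT id, code, name FROM plugin_printingshop__plate ORDER BY code",
        emptyMap(),
    ) { row ->
        mapOf(
            "id" to row.uuid("id"),
            "code" to row.string("code"),
            "name" to row.string("name"),
        )
    }
    ```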
    
    Tests: 5 new PluginLinterTest cases use ASM ClassWriter to synthesize
    in-memory plug-in JARs (clean class, forbidden platform ref, forbidden
    pbc ref, allowed api.v1 ref, multiple violations) and a mocked
    PluginWrapper to avoid touching the real PF4J loader. Total now
    **81 unit tests** across 10 modules, all green.
    
    End-to-end smoke test against fresh Postgres with the plug-in loaded
    (every assertion green):
    
      Boot logs:
        PluginLiquibaseRunner: plug-in 'printing-shop' has changelog.xml
        Liquibase: ChangeSet printingshop-init-001 ran successfully
        Liquibase: ChangeSet printingshop-init-002 ran successfully
        Liquibase migrations applied successfully
        plugin.printing-shop: registered 7 endpoints
    
      HTTP smoke:
        \dt plugin_printingshop*                  → both tables exist
        GET /api/v1/plugins/printing-shop/plates  → []
        POST plate A4                              → 201 + UUID
        POST plate A3                              → 201 + UUID
        POST duplicate A4                          → 409 + clear msg
        GET plates                                 → 2 rows
        GET /plates/{id}                           → A4 details
        psql verifies both rows in plugin_printingshop__plate
        POST ink CYAN                              → 201
        POST ink MAGENTA                           → 201
        GET inks                                   → 2 inks with nested CMYK
        GET /ping                                  → 200 (existing endpoint)
        GET /api/v1/catalog/uoms                   → 15 UoMs (no regression)
        GET /api/v1/identity/users                 → 1 user (no regression)
    
    Bug encountered and fixed during the smoke test:
    
      • The plug-in initially shipped its changelog at db/changelog/master.xml,
        which collides with the HOST's db/changelog/master.xml. The plug-in
        classloader does parent-first lookup (PF4J default), so Liquibase's
        ClassLoaderResourceAccessor found BOTH files and threw
        ChangeLogParseException ("Found 2 files with the path"). Fixed by
        moving the plug-in changelog to META-INF/vibe-erp/db/changelog.xml,
        a path the host never uses, and updating PluginLiquibaseRunner.
        The unique META-INF prefix is now part of the documented plug-in
        convention.
    
    What is explicitly NOT in this chunk (deferred):
    
      • Per-plugin Spring child contexts — plug-ins still instantiate via
        PF4J's classloader without their own Spring beans
      • Per-plugin datasource isolation — one shared host pool today
      • Plug-in changelog table-prefix linter — convention only, runtime
        enforcement comes later
      • Rollback on plug-in uninstall — uninstall is operator-confirmed
        and rare; running dropAll() during stop() would lose data on
        accidental restart
      • Subscription auto-scoping on plug-in stop — plug-ins still close
        their own subscriptions in stop()
      • Real customer-grade JSON parsing in plug-in lambdas — the v0.6
        reference plug-in uses reflection to find the host's Jackson; a
        real plug-in author would ship their own JSON library or use a
        future api.v1 typed-DTO surface
    
    Implementation plan refreshed: P1.2, P1.3, P1.4, P1.7, P4.1, P5.1
    all marked DONE in
    docs/superpowers/specs/2026-04-07-vibe-erp-implementation-plan.md.
    Next priority candidates: P1.5 (metadata seeder) and P5.2 (pbc-partners).
    vibe_erp authored