• Completes the plug-in side of the embedded Flowable story. The P2.1
    core made plug-ins able to register TaskHandlers; this chunk makes
    them able to ship the BPMN processes those handlers serve.
    
    ## Why Flowable's built-in auto-deployer couldn't do it
    
    Flowable's Spring Boot starter scans the host classpath at engine
    startup for `classpath*:/processes/*.bpmn20.xml` and auto-deploys
    every hit. (In the Kotlin KDoc of the new deployer that glob has to
    be paraphrased, because the embedded slash-star would otherwise open
    a nested comment; see "Kotlin trap caught during authoring" below
    and the saved feedback memory "Kotlin KDoc nested-comment trap".)
    PF4J plug-ins load through an isolated child classloader that is NOT
    visible to that scan, so a `processes/*.bpmn20.xml` resource shipped
    inside a plug-in JAR is never seen. This chunk adds a dedicated
    host-side deployer that opens each plug-in JAR file directly (the
    same JarFile walk pattern as `MetadataLoader.loadFromPluginJar`) and
    hand-registers the BPMNs with the Flowable `RepositoryService`.
    
    ## Mechanism
    
    ### New PluginProcessDeployer (platform-workflow)
    
    One Spring bean, two methods:
    
    - `deployFromPlugin(pluginId, jarPath): String?` — walks the JAR,
      collects every entry whose name starts with `processes/` and ends
      with `.bpmn20.xml` or `.bpmn`, and bundles the whole set into one
      Flowable `Deployment` named `plugin:<id>` with `category = pluginId`.
      Returns the deployment id or null (missing JAR / no BPMN resources).
      One deployment per plug-in keeps undeploy atomic and makes the
      teardown query unambiguous.
    - `undeployByPlugin(pluginId): Int` — runs
      `createDeploymentQuery().deploymentCategory(pluginId).list()` and
      calls `deleteDeployment(id, cascade=true)` on each hit. Cascading
      removes process instances and history rows along with the
      deployment — "uninstalling a plug-in makes it disappear". Idempotent:
      a second call returns 0.
    
    The deployer reads the JAR entries into byte arrays inside the
    JarFile's `use` block and then passes the bytes to
    `DeploymentBuilder.addBytes(name, bytes)` outside the block, so the
    jar handle is already closed by the time Flowable sees the
    deployment. No input-stream lifetime tangles.
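    The two-phase walk can be sketched as follows. This is a minimal
    sketch assuming the method shapes described above; the shipped class
    adds logging and finer-grained error handling:
    
    ```kotlin
    import org.flowable.engine.RepositoryService
    import java.nio.file.Files
    import java.nio.file.Path
    import java.util.jar.JarFile
    
    class PluginProcessDeployer(private val repositoryService: RepositoryService) {
    
        /** Returns the Flowable deployment id, or null if there is nothing to deploy. */
        fun deployFromPlugin(pluginId: String, jarPath: Path): String? {
            require(pluginId.isNotBlank()) { "pluginId must not be blank" }
            if (!Files.isRegularFile(jarPath)) return null   // dev-exploded dirs etc.
    
            // Phase 1: read every BPMN entry into memory while the jar is open.
            val resources: Map<String, ByteArray> = JarFile(jarPath.toFile()).use { jar ->
                jar.entries().toList()
                    .filter { entry ->
                        !entry.isDirectory &&
                            entry.name.startsWith("processes/") &&
                            (entry.name.endsWith(".bpmn20.xml") || entry.name.endsWith(".bpmn"))
                    }
                    .associate { entry ->
                        entry.name to jar.getInputStream(entry).use { it.readBytes() }
                    }
            }
            if (resources.isEmpty()) return null
    
            // Phase 2: the jar handle is already closed; hand plain bytes to Flowable.
            val builder = repositoryService.createDeployment()
                .name("plugin:$pluginId")
                .category(pluginId)               // the teardown query key
            resources.forEach { (name, bytes) -> builder.addBytes(name, bytes) }
            return builder.deploy().id
        }
    
        /** Deletes every deployment in the plug-in's category; returns the count. */
        fun undeployByPlugin(pluginId: String): Int =
            repositoryService.createDeploymentQuery()
                .deploymentCategory(pluginId)
                .list()
                .onEach { repositoryService.deleteDeployment(it.id, true) }  // cascade
                .size
    }
    ```
    
    Reading the bytes eagerly inside `use` is what keeps the JarFile
    lifetime and the Flowable deployment lifetime fully decoupled.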
    
    ### VibeErpPluginManager wiring
    
    - New constructor dependency on `PluginProcessDeployer`.
    - Deploy happens AFTER `start(context)` succeeds. The ordering
      matters because a plug-in registers its TaskHandlers during
      `start(context)`, while Flowable resolves a service task's
      delegate expression only at process-start time: a BPMN deployed
      before `start(context)` would deploy without complaint and then
      sit live with no handler behind its delegate key. Registering
      handlers first is the safer default: the moment the deployment
      lands, every referenced handler is already in the
      TaskHandlerRegistry.
    - BPMN deployment failure AFTER a successful `start(context)` now
      fully unwinds the plug-in state: call `instance.stop()`, remove
      the plug-in from the `started` list, strip its endpoints and its
      TaskHandlers, and call `undeployByPlugin` (belt and suspenders:
      the deploy attempt may have partially succeeded). That mirrors
      the existing start-failure unwinding so the framework doesn't
      end up with a plug-in that's half-installed after any step
      throws.
    - `destroy()` calls `undeployByPlugin(pluginId)` alongside the
      existing `unregisterAllByOwner(pluginId)`.
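    In pseudocode terms the ordering and unwind look roughly like this.
    All names here (`started`, `endpointRegistry`, `processDeployer`,
    `startPlugin`) are illustrative stand-ins for the real manager
    fields, not the shipped code:
    
    ```kotlin
    fun startPlugin(pluginId: String, instance: VibeErpPlugin, jarPath: Path) {
        instance.start(context)          // TaskHandlers + endpoints register here
        started += pluginId
        try {
            processDeployer.deployFromPlugin(pluginId, jarPath)
        } catch (e: Exception) {
            // BPMN deployment failed after a successful start: fully unwind.
            instance.stop()
            started -= pluginId
            endpointRegistry.unregisterAll(pluginId)
            taskHandlerRegistry.unregisterAllByOwner(pluginId)
            processDeployer.undeployByPlugin(pluginId)  // deploy may have partially landed
            throw e
        }
    }
    ```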
    
    ### Reference plug-in BPMN
    
    `reference-customer/plugin-printing-shop/src/main/resources/processes/plate-approval.bpmn20.xml`
    — a minimal one-service-task process (`start` → serviceTask → `end`) whose
    serviceTask id is `printing_shop.plate.approve`, matching the
    PlateApprovalTaskHandler key landed in the previous commit. Process
    definition key is `plugin-printing-shop-plate-approval` (distinct
    from the serviceTask id because BPMN 2.0 requires element ids to be
    unique per document — same separation used for the core ping
    process).
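    The shape is roughly this. An illustrative reconstruction: only the
    process id, the serviceTask id, and the process/element-id
    separation are confirmed above; the namespace, flow ids, and the
    `${taskDispatcher}` delegate expression (borrowed from the core
    ping process) are placeholders:
    
    ```xml
    <definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
                 xmlns:flowable="http://flowable.org/bpmn"
                 targetNamespace="http://vibeerp.example/bpmn">
      <process id="plugin-printing-shop-plate-approval"
               name="Printing shop — plate approval" isExecutable="true">
        <startEvent id="start"/>
        <sequenceFlow id="flow1" sourceRef="start" targetRef="printing_shop.plate.approve"/>
        <!-- serviceTask id doubles as the TaskHandler key -->
        <serviceTask id="printing_shop.plate.approve"
                     flowable:delegateExpression="${taskDispatcher}"/>
        <sequenceFlow id="flow2" sourceRef="printing_shop.plate.approve" targetRef="end"/>
        <endEvent id="end"/>
      </process>
    </definitions>
    ```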
    
    ## Smoke test (fresh DB, plug-in staged)
    
    ```
    $ docker compose down -v && docker compose up -d db
    $ ./gradlew :distribution:bootRun &
    ...
    registered TaskHandler 'vibeerp.workflow.ping' owner='core' ...
    TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
    ...
    plug-in 'printing-shop' Liquibase migrations applied successfully
    [plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
    registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' ...
    [plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve
    PluginProcessDeployer: plug-in 'printing-shop' deployed 1 BPMN resource(s) as Flowable deploymentId='4e9f...': [processes/plate-approval.bpmn20.xml]
    
    $ curl /api/v1/workflow/definitions (as admin)
    [
      {"key":"plugin-printing-shop-plate-approval",
       "name":"Printing shop — plate approval",
       "version":1,
       "deploymentId":"4e9f85a6-33cf-11f1-acaa-1afab74ef3b4",
       "resourceName":"processes/plate-approval.bpmn20.xml"},
      {"key":"vibeerp-workflow-ping",
       "name":"vibe_erp workflow ping",
       "version":1,
       "deploymentId":"4f48...",
       "resourceName":"vibeerp-ping.bpmn20.xml"}
    ]
    
    $ curl -X POST /api/v1/workflow/process-instances
             {"processDefinitionKey":"plugin-printing-shop-plate-approval",
              "variables":{"plateId":"PLATE-007"}}
      → {"processInstanceId":"5b1b...",
         "ended":true,
         "variables":{"plateId":"PLATE-007",
                      "plateApproved":true,
                      "approvedBy":"user:admin",
                      "approvedAt":"2026-04-09T04:48:30.514523Z"}}
    
    $ kill -TERM <pid>
    [ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
    [ionShutdownHook] PluginProcessDeployer: plug-in 'printing-shop' deployment '4e9f...' removed (cascade)
    ```
    
    Full end-to-end loop closed: plug-in ships a BPMN → host reads it
    out of the JAR → Flowable deployment registered under the plug-in
    category → HTTP caller starts a process instance via the standard
    `/api/v1/workflow/process-instances` surface → dispatcher routes by
    activity id to the plug-in's TaskHandler → handler writes output
    variables + plug-in sees the authenticated caller as `ctx.principal()`
    via the reserved `__vibeerp_*` process-variable propagation from
    commit `ef9e5b42`. SIGTERM cleanly undeploys the plug-in's BPMNs.
    
    ## Tests
    
    - 6 new unit tests in `PluginProcessDeployerTest`:
      * `deployFromPlugin returns null when jarPath is not a regular file`
        — guard against dev-exploded plug-in dirs
      * `deployFromPlugin returns null when the plug-in jar has no BPMN resources`
      * `deployFromPlugin reads every bpmn resource under processes and
        deploys one bundle` — builds a real temporary JAR with two BPMN
        entries + a README + a metadata YAML, verifies that both BPMNs
        go through `addBytes` with the right names and the README /
        metadata entries are skipped
      * `deployFromPlugin rejects a blank plug-in id`
      * `undeployByPlugin returns zero when there is nothing to remove`
      * `undeployByPlugin cascades a deleteDeployment per matching deployment`
    - Total framework unit tests: 275 (was 269), all green.
    
    ## Kotlin trap caught during authoring (feedback memory paid out)
    
    First compile failed with `Unclosed comment` on the last line of
    `PluginProcessDeployer.kt`. The culprit was a KDoc paragraph
    containing the literal glob
    `classpath*:/processes/*.bpmn20.xml`: Kotlin block comments nest,
    so the embedded `/*` (backticks do not shield it from the lexer)
    opened a nested comment, the KDoc's closing `*/` terminated that
    inner comment, and the outer comment ran unclosed to the end of
    the file. The saved feedback-memory entry "Kotlin KDoc
    nested-comment trap" covered exactly this situation: the fix is to
    spell out glob characters as `[star]` / `[slash]` (or the word
    "slash-star") inside documentation so the literal `/*` never
    appears. The KDoc now documents the behaviour AND the workaround
    so the next maintainer doesn't hit the same trap.
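    A minimal reproduction of the trap and the workaround, illustrative
    rather than the actual file:
    
    ```kotlin
    // The failing shape, reproduced in line comments so this snippet itself
    // compiles:
    //
    //   /**
    //    * Scans `classpath*:/processes/*.bpmn20.xml` at startup.
    //    */
    //
    // Kotlin block comments NEST, so the slash-star inside the backtick span
    // opens a second comment level, the trailing star-slash closes only that
    // inner level, and the outer KDoc never closes. The workaround:
    
    /**
     * Scans `classpath[star]:/processes/[star].bpmn20.xml` at startup
     * ([star] stands for the glob character that cannot appear literally
     * after a slash inside a Kotlin comment).
     */
    class GlobDocumentedExample
    ```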
    
    ## Non-goals (still parking lot)
    
    - Handler-side access to the full PluginContext — PlateApprovalTaskHandler
      is still a pure function because the framework doesn't hand
      TaskHandlers a context object. For REF.1 (the real
      quote→job-card workflow), handlers will need to read and mutate
      plug-in-owned tables; the cleanest approach is closure capture
      inside the plug-in class (handler instantiated inside
      `start(context)` with the context captured in the outer scope).
      Decision deferred to REF.1.
    - BPMN resource hot reload. The deployer runs once per plug-in
      start; a plug-in whose BPMN changes under its feet at runtime
      isn't supported yet.
    - Plug-in-shipped DMN / CMMN resources. The deployer only looks at
      `.bpmn20.xml` and `.bpmn`. Decision-table and case-management
      resources are not on the v1.0 critical path.
    zichun authored
  • ## What's new
    
    Plug-ins can now contribute workflow task handlers to the framework.
    The P2.1 `TaskHandlerRegistry` only saw `@Component` TaskHandler beans
    from the host Spring context; handlers defined inside a PF4J plug-in
    were invisible because the plug-in's child classloader is not in the
    host's bean list. This commit closes that gap.
    
    ## Mechanism
    
    ### api.v1
    
    - New interface `org.vibeerp.api.v1.workflow.PluginTaskHandlerRegistrar`
      with a single `register(handler: TaskHandler)` method. Plug-ins call
      it from inside their `start(context)` lambda.
    - `PluginContext.taskHandlers: PluginTaskHandlerRegistrar` — added as
      a new optional member with a default implementation that throws
      `UnsupportedOperationException("upgrade to v0.7 or later")`, so
      pre-existing plug-in jars remain binary-compatible with the new
      host and a plug-in built against v0.7 of the api-v1 surface fails
      fast on an old host instead of silently doing nothing. Same
      pattern we used for `endpoints` and `jdbc`.
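    A sketch of the two api.v1 additions; the real interfaces carry
    KDoc and more members, and the second block mirrors (rather than
    replaces) the existing `PluginContext`:
    
    ```kotlin
    package org.vibeerp.api.v1.workflow
    
    /** The single seam a plug-in uses from inside its start(context). */
    interface PluginTaskHandlerRegistrar {
        fun register(handler: TaskHandler)
    }
    
    // The defaulted getter keeps plug-in jars built against the older interface
    // binary-compatible, while a plug-in built against v0.7 fails fast on a
    // pre-v0.7 host instead of silently doing nothing.
    interface PluginContext {
        // ...existing members (endpoints, jdbc, ...)
        val taskHandlers: PluginTaskHandlerRegistrar
            get() = throw UnsupportedOperationException("upgrade to v0.7 or later")
    }
    ```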
    
    ### platform-workflow
    
    - `TaskHandlerRegistry` gains owner tagging. Every registered handler
      now carries an `ownerId`: core `@Component` beans get
      `TaskHandlerRegistry.OWNER_CORE = "core"` (auto-assigned through
      the constructor-injection path), plug-in-contributed handlers get
      their PF4J plug-in id. New API:
      * `register(handler, ownerId = OWNER_CORE)` (default keeps existing
        call sites unchanged)
      * `unregisterAllByOwner(ownerId): Int` — strip every handler owned
        by that id in one call, returns the count for log correlation
      * The duplicate-key error message now includes both owners so a
        plug-in trying to stomp on a core handler gets an actionable
        "already registered by X (owner='core'), attempted by Y
        (owner='printing-shop')" instead of "already registered".
      * Internal storage switched from `ConcurrentHashMap<String, TaskHandler>`
        to `ConcurrentHashMap<String, Entry>` where `Entry` carries
        `(handler, ownerId)`. `find(key)` still returns `TaskHandler?`
        so the dispatcher is unchanged.
    - No behavioral change for the hot-path (`DispatchingJavaDelegate`) —
      only the registration/teardown paths changed.
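    The owner-tagged storage can be sketched like this, with validation
    and logging trimmed; `Entry` is the internal carrier named above:
    
    ```kotlin
    import java.util.concurrent.ConcurrentHashMap
    
    class TaskHandlerRegistry(coreHandlers: List<TaskHandler> = emptyList()) {
        companion object { const val OWNER_CORE = "core" }
    
        private data class Entry(val handler: TaskHandler, val ownerId: String)
        private val entries = ConcurrentHashMap<String, Entry>()
    
        init { coreHandlers.forEach { register(it) } }   // constructor-injection path
    
        fun register(handler: TaskHandler, ownerId: String = OWNER_CORE) {
            require(ownerId.isNotBlank()) { "ownerId must not be blank" }
            val prev = entries.putIfAbsent(handler.key(), Entry(handler, ownerId))
            if (prev != null) error(
                "TaskHandler '${handler.key()}' already registered by " +
                    "${prev.handler::class.simpleName} (owner='${prev.ownerId}'), " +
                    "attempted by ${handler::class.simpleName} (owner='$ownerId')"
            )
        }
    
        /** Strips every handler owned by that id; returns the removal count. */
        fun unregisterAllByOwner(ownerId: String): Int {
            val keys = entries.filterValues { it.ownerId == ownerId }.keys
            keys.forEach { entries.remove(it) }
            return keys.size
        }
    
        // Hot path unchanged: the dispatcher still sees TaskHandler?.
        fun find(key: String): TaskHandler? = entries[key]?.handler
    }
    ```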
    
    ### platform-plugins
    
    - New dependency on `:platform:platform-workflow` (the only new inter-
      module dep of this chunk; it is the module that exposes
      `TaskHandlerRegistry`).
    - New internal class `ScopedTaskHandlerRegistrar(hostRegistry, pluginId)`
      that implements the api.v1 `PluginTaskHandlerRegistrar` by delegating
      `register(handler)` to `hostRegistry.register(handler, ownerId =
      pluginId)`. Constructed fresh per plug-in by `VibeErpPluginManager`,
      so the plug-in never sees (or can tamper with) the owner id.
    - `DefaultPluginContext` gains a `scopedTaskHandlers` constructor
      parameter and exposes it as the `PluginContext.taskHandlers`
      override.
    - `VibeErpPluginManager`:
      * injects `TaskHandlerRegistry`
      * constructs `ScopedTaskHandlerRegistrar(registry, pluginId)` per
        plug-in when building `DefaultPluginContext`
      * partial-start failure now also calls
        `taskHandlerRegistry.unregisterAllByOwner(pluginId)`, matching
        the existing `endpointRegistry.unregisterAll(pluginId)` cleanup
        so a throwing `start(context)` cannot leave stale registrations
      * `destroy()` calls the same `unregisterAllByOwner` for every
        started plug-in in reverse order, mirroring the endpoint cleanup
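    The adapter is small enough to sketch in full (modulo naming
    details):
    
    ```kotlin
    // One instance per plug-in; the plug-in never sees or chooses the owner id.
    internal class ScopedTaskHandlerRegistrar(
        private val hostRegistry: TaskHandlerRegistry,
        private val pluginId: String,
    ) : PluginTaskHandlerRegistrar {
        override fun register(handler: TaskHandler) {
            hostRegistry.register(handler, ownerId = pluginId)
        }
    }
    ```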
    
    ### reference-customer/plugin-printing-shop
    
    - New file `workflow/PlateApprovalTaskHandler.kt` — the first plug-in-
      contributed TaskHandler in the framework. Key
      `printing_shop.plate.approve`. Reads a `plateId` process variable,
      writes `plateApproved`, `plateId`, `approvedBy` (principal label),
      `approvedAt` (ISO instant) and exits. No DB mutation yet: a proper
      plate-approval handler would UPDATE `plugin_printingshop__plate` via
      `context.jdbc`, but that requires handing the TaskHandler a
      projection of the PluginContext — a deliberate non-goal of this
      chunk, deferred to the "handler context" follow-up.
    - `PrintingShopPlugin.start(context)` now ends with
      `context.taskHandlers.register(PlateApprovalTaskHandler())` and logs
      the registration.
    - Package layout: `org.vibeerp.reference.printingshop.workflow` is
      the plug-in's workflow namespace going forward (the next printing-
      shop handlers for REF.1 — quote-to-job-card, job-card-to-work-order
      — will live alongside).
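    From the description, the handler plausibly looks like this. A
    sketch assuming an `execute(task, ctx)` shape on the api.v1
    `TaskHandler`; the `Principal` field names are guesses:
    
    ```kotlin
    package org.vibeerp.reference.printingshop.workflow
    
    import java.time.Instant
    
    class PlateApprovalTaskHandler : TaskHandler {
    
        override fun key(): String = "printing_shop.plate.approve"
    
        override fun execute(task: WorkflowTask, ctx: TaskContext) {
            val plateId = task.variables["plateId"]
            val approver = when (val p = ctx.principal()) {
                is Principal.User -> "user:${p.username}"
                is Principal.System -> "system:${p.name}"
                else -> p.toString()              // e.g. plugin:<pluginId>
            }
            ctx.set("plateId", plateId)
            ctx.set("plateApproved", true)
            ctx.set("approvedBy", approver)       // principal label
            ctx.set("approvedAt", Instant.now().toString())
        }
    }
    ```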
    
    ## Smoke test (fresh DB, plug-in staged)
    
    ```
    $ docker compose down -v && docker compose up -d db
    $ ./gradlew :distribution:bootRun &
    ...
    TaskHandlerRegistry initialised with 1 core TaskHandler bean(s): [vibeerp.workflow.ping]
    ...
    plug-in 'printing-shop' Liquibase migrations applied successfully
    vibe_erp plug-in loaded: id=printing-shop version=0.1.0-SNAPSHOT state=STARTED
    [plugin:printing-shop] printing-shop plug-in started — reference acceptance test active
    registered TaskHandler 'printing_shop.plate.approve' owner='printing-shop' class='org.vibeerp.reference.printingshop.workflow.PlateApprovalTaskHandler'
    [plugin:printing-shop] registered 1 TaskHandler: printing_shop.plate.approve
    
    $ curl /api/v1/workflow/handlers (as admin)
    {
      "count": 2,
      "keys": ["printing_shop.plate.approve", "vibeerp.workflow.ping"]
    }
    
    $ curl /api/v1/plugins/printing-shop/ping  # plug-in HTTP still works
    {"plugin":"printing-shop","ok":true,"version":"0.1.0-SNAPSHOT", ...}
    
    $ curl -X POST /api/v1/workflow/process-instances
             {"processDefinitionKey":"vibeerp-workflow-ping"}
      (principal propagation from previous commit still works — pingedBy=user:admin)
    
    $ kill -TERM <pid>
    [ionShutdownHook] vibe_erp stopping 1 plug-in(s)
    [ionShutdownHook] [plugin:printing-shop] printing-shop plug-in stopped
    [ionShutdownHook] unregistered TaskHandler 'printing_shop.plate.approve' (owner stopped)
    [ionShutdownHook] TaskHandlerRegistry.unregisterAllByOwner('printing-shop') removed 1 handler(s)
    ```
    
    Every expected lifecycle event fires in the right order with the
    right owner attribution. Core handlers are untouched by plug-in
    teardown.
    
    ## Tests
    
    - 4 new / updated tests in `TaskHandlerRegistryTest`:
      * `unregisterAllByOwner only removes handlers owned by that id`
        — 2 core + 2 plug-in, unregister the plug-in owner, only the
        2 plug-in keys are removed
      * `unregisterAllByOwner on unknown owner returns zero`
      * `register with blank owner is rejected`
      * Updated `duplicate key fails fast` to assert the new error
        message format including both owner ids
    - Total framework unit tests: 269 (was 265), all green.
    
    ## What this unblocks
    
    - **REF.1** (real printing-shop quote→job-card workflow) can now
      register its production handlers through the same seam
    - **Plug-in-contributed handlers with state access** — the next
      design question is how a plug-in handler gets at the plug-in's
      database and translator. Two options: pass a projection of the
      PluginContext through TaskContext, or keep a reference to the
      context captured at plug-in start (closure). The PlateApproval
      handler in this chunk is pure on purpose to keep the seam
      conversation separate.
    - **Plug-in-shipped BPMN auto-deployment** — Flowable's default
      classpath scan uses `classpath*:/processes/*.bpmn20.xml` which
      does NOT see PF4J plug-in classloaders. A dedicated
      `PluginProcessDeployer` that walks each started plug-in's JAR for
      BPMN resources and calls `repositoryService.createDeployment` is
      the natural companion to this commit, still pending.
    
    ## Non-goals (still parking lot)
    
    - BPMN processes shipped inside plug-in JARs (see above — needs
      its own chunk, because it requires reading resources from the
      PF4J classloader and constructing a Flowable deployment by hand)
    - Per-handler permission checks — a handler that wants a permission
      gate still has to call back through its own context; P4.3's
      @RequirePermission aspect doesn't reach into Flowable delegate
      execution.
    - Hot reload of a running plug-in's TaskHandlers. The seam supports
      it, but `unloadPlugin` + `loadPlugin` at runtime isn't exercised
      by any current caller.
  • Before this commit, every TaskHandler saw a fixed `workflow-engine`
    System principal via `ctx.principal()` because there was no plumbing
    from the REST caller down to the dispatcher. A printing-shop
    quote-to-job-card handler (or any real business workflow) needs to
    know the actual user who kicked off the process so audit columns and
    role-based logic behave correctly.
    
    ## Mechanism
    
    The chain is: Spring Security populates `SecurityContextHolder` →
    `PrincipalContextFilter` mirrors it into `AuthorizationContext`
    (already existed) → `WorkflowService.startProcess` reads the bound
    `AuthorizedPrincipal` and stashes two reserved process variables
    (`__vibeerp_initiator_id`, `__vibeerp_initiator_username`) before
    calling `RuntimeService.startProcessInstanceByKey` →
    `DispatchingJavaDelegate` reads them back off each `DelegateExecution`
    when constructing the `DelegateTaskContext` → handler sees a real
    `Principal.User` from `ctx.principal()`.
    
    When the process is started outside an HTTP request (e.g. a future
    Quartz-scheduled process, or a signal fired by a PBC event
    subscriber), `AuthorizationContext.current()` is null, no initiator
    variables are written, and the dispatcher falls back to the
    `Principal.System("workflow-engine")` principal. A corrupt initiator
    id (e.g. a non-UUID string) also falls back to the system principal
    rather than failing the task, so a stale variable can't brick a
    running workflow.
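    Both ends of the chain, sketched with assumed names:
    `VAR_INITIATOR_ID` / `VAR_INITIATOR_USERNAME` stand in for the
    public `WorkflowService` constants holding `__vibeerp_initiator_id`
    and `__vibeerp_initiator_username`, and the `Principal`
    constructors are guesses:
    
    ```kotlin
    import org.flowable.engine.delegate.DelegateExecution
    import java.util.UUID
    
    // WorkflowService.startProcess side: stash the initiator before starting.
    fun initiatorVariables(): Map<String, Any> {
        val principal = AuthorizationContext.current()?.principal as? Principal.User
            ?: return emptyMap()                  // non-HTTP start: write nothing
        return mapOf(
            VAR_INITIATOR_ID to principal.id.toString(),
            VAR_INITIATOR_USERNAME to principal.username,
        )
    }
    
    // DispatchingJavaDelegate side: read them back, falling back to the system
    // principal when the variables are absent or corrupt.
    fun resolveInitiator(execution: DelegateExecution): Principal {
        val id = execution.getVariable(VAR_INITIATOR_ID) as? String
        val username = execution.getVariable(VAR_INITIATOR_USERNAME) as? String
        if (id == null || username == null) return Principal.System("workflow-engine")
        return runCatching { Principal.User(UUID.fromString(id), username) }
            .getOrElse { Principal.System("workflow-engine") }  // never bricks the task
    }
    ```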
    
    ## Reserved variable hygiene
    
    The `__vibeerp_` prefix is reserved framework plumbing. Two
    consequences wired in this commit:
    
    - `DispatchingJavaDelegate` strips keys starting with `__vibeerp_`
      from the variable snapshot handed to the handler (via
      `WorkflowTask.variables`), so handler code cannot accidentally
      depend on the initiator id through the wrong door — it must use
      `ctx.principal()`.
    - `WorkflowService.startProcess` and `getInstanceVariables` strip
      the same prefix from their HTTP response payloads so REST callers
      never see the plumbing either.
    
    The prefix constant lives on `DispatchingJavaDelegate.RESERVED_VAR_PREFIX`
    so there is exactly one source of truth. The two initiator variable
    names are public constants on `WorkflowService` — tests, future
    plug-in code, and any custom handlers that genuinely need the raw
    ids (e.g. a security-audit task) can depend on the stable symbols
    instead of hard-coded strings.
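    The strip itself is a one-line filter at each boundary (sketch):
    
    ```kotlin
    // RESERVED_VAR_PREFIX == "__vibeerp_"; applied to the handler snapshot and
    // to the HTTP response payloads alike.
    val visible: Map<String, Any?> = execution.variables
        .filterKeys { !it.startsWith(DispatchingJavaDelegate.RESERVED_VAR_PREFIX) }
    ```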
    
    ## PingTaskHandler as the executable witness
    
    `PingTaskHandler` now writes a `pingedBy` output variable with a
    principal label (`user:<username>`, `system:<name>`, or
    `plugin:<pluginId>`) and logs it. That makes the end-to-end smoke
    test trivially assertable:
    
    ```
    POST /api/v1/workflow/process-instances
         {"processDefinitionKey":"vibeerp-workflow-ping"}
      (as admin user, with valid JWT)
      → {"processInstanceId": "...", "ended": true,
         "variables": {
           "pong": true,
           "pongAt": "...",
           "correlationId": "...",
           "pingedBy": "user:admin"
         }}
    ```
    
    Note the RESPONSE does NOT contain `__vibeerp_initiator_id` or
    `__vibeerp_initiator_username` — the reserved-var filter in the
    service layer hides them. The handler-side log line confirms
    `principal='user:admin'` in the service-task execution thread.
    
    ## Tests
    
    - 3 new tests in `DispatchingJavaDelegateTest`:
      * `resolveInitiator` returns a User principal when both vars set
      * falls back to system principal when id var is missing
      * falls back to system principal when id var is corrupt
        (non-UUID string)
    - Updated `variables given to the handler are a defensive copy` to
      also assert that reserved `__vibeerp_*` keys are stripped from
      the task's variable snapshot.
    - Updated `PingTaskHandlerTest`:
      * rename to "writes pong plus timestamp plus correlation id plus
        user principal label"
      * new test for the System-principal branch producing
        `pingedBy=system:workflow-engine`
    - Total framework unit tests: 265 (was 261), all green.
    
    ## Non-goals (still parking lot)
    
    - Plug-in-contributed TaskHandler registration via the PF4J loader
      walking child contexts for TaskHandler beans and calling
      `TaskHandlerRegistry.register`. The seam exists on the registry;
      the loader integration is the next chunk, and unblocks REF.1.
    - Propagation of the full role set (not just id+username) into the
      TaskContext. Handlers don't currently see the initiator's roles.
      Can be added as a third reserved variable when a handler actually
      needs it — YAGNI for now.
    - BPMN user tasks / signals / timers — engine supports them but we
      have no HTTP surface for them yet.
  • New platform subproject `platform/platform-workflow` that makes
    `org.vibeerp.api.v1.workflow.TaskHandler` a live extension point. This
    is the framework's first chunk of Phase 2 (embedded workflow engine)
    and the dependency other work has been waiting on — pbc-production
    routings/operations, the full buy-make-sell BPMN scenario in the
    reference plug-in, and ultimately the BPMN designer web UI all hang
    off this seam.
    
    ## The shape
    
    - `flowable-spring-boot-starter-process:7.0.1` pulled in behind a
      single new module. Every other module in the framework still sees
      only the api.v1 TaskHandler + WorkflowTask + TaskContext surface —
      guardrail #10 stays honest, no Flowable type leaks to plug-ins or
      PBCs.
    - `TaskHandlerRegistry` is the host-side index of every registered
      handler, keyed by `TaskHandler.key()`. Auto-populated from every
      Spring bean implementing TaskHandler via constructor injection of
      `List<TaskHandler>`; duplicate keys fail fast at registration time.
      `register` / `unregister` exposed for a future plug-in lifecycle
      integration.
    - `DispatchingJavaDelegate` is a single Spring-managed JavaDelegate
      named `taskDispatcher`. Every BPMN service task in the framework
      references it via `flowable:delegateExpression="${taskDispatcher}"`.
      The dispatcher reads `execution.currentActivityId` as the task key
      (BPMN `id` attribute = TaskHandler key — no extension elements, no
      field injection, no second source of truth) and routes to the
      matching registered handler. A defensive copy of the execution
      variables is passed to the handler so it cannot mutate Flowable's
      internal map.
    - `DelegateTaskContext` adapts Flowable's `DelegateExecution` to the
      api.v1 `TaskContext` — the variable `set(name, value)` call
      forwards through Flowable's variable scope (persisted in the same
      transaction as the surrounding service task execution) and null
      values remove the variable. Principal + locale are documented
      placeholders for now (a workflow-engine `Principal.System`),
      waiting on the propagation chunk that plumbs the initiating user
      through `runtimeService.startProcessInstanceByKey(...)`.
    - `WorkflowService` is a thin facade over Flowable's `RuntimeService`
      + `RepositoryService` exposing exactly the four operations the
      controller needs: start, list active, inspect variables, list
      definitions. Everything richer (signals, timers, sub-processes,
      user-task completion, history queries) lands on this seam in later
      chunks.
    - `WorkflowController` at `/api/v1/workflow/**`:
      * `POST /process-instances`                       (permission `workflow.process.start`)
      * `GET  /process-instances`                       (`workflow.process.read`)
      * `GET  /process-instances/{id}/variables`        (`workflow.process.read`)
      * `GET  /definitions`                             (`workflow.definition.read`)
      * `GET  /handlers`                                (`workflow.definition.read`)
      Exception handlers map `NoSuchElementException` +
      `FlowableObjectNotFoundException` → 404, `IllegalArgumentException`
      → 400, and any other `FlowableException` → 400. Permissions are
      declared in a new `META-INF/vibe-erp/metadata/workflow.yml` loaded
      by the core MetadataLoader so they show up under
      `GET /api/v1/_meta/metadata` alongside every other permission.
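    The dispatch path can be sketched as follows; the `WorkflowTask`
    constructor shape is assumed, and the shipped class adds logging:
    
    ```kotlin
    import org.flowable.engine.delegate.DelegateExecution
    import org.flowable.engine.delegate.JavaDelegate
    import org.springframework.stereotype.Component
    
    @Component("taskDispatcher")  // referenced as ${taskDispatcher} from every BPMN
    class DispatchingJavaDelegate(private val registry: TaskHandlerRegistry) : JavaDelegate {
        override fun execute(execution: DelegateExecution) {
            // BPMN element id doubles as the handler key: one source of truth.
            val key = execution.currentActivityId
            val handler = registry.find(key)
                ?: throw IllegalStateException("no TaskHandler for activity id '$key'")
            // Defensive copy: the handler cannot mutate Flowable's internal map.
            val snapshot = HashMap(execution.variables)
            handler.execute(WorkflowTask(key, snapshot), DelegateTaskContext(execution))
        }
    }
    ```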
    
    ## The executable self-test
    
    - `vibeerp-ping.bpmn20.xml` ships in `processes/` on the module
      classpath and Flowable's starter auto-deploys it at boot.
      Structure: `start` → serviceTask id=`vibeerp.workflow.ping`
      (delegateExpression=`${taskDispatcher}`) → `end`. Process
      definitionKey is `vibeerp-workflow-ping` (distinct from the
      serviceTask id because BPMN 2.0 ids must be unique per document).
    - `PingTaskHandler` is a real shipped bean, not test code: its
      `execute` writes `pong=true`, `pongAt=<Instant.now()>`, and
      `correlationId=<ctx.correlationId()>` to the process variables.
      Operators and AI agents get a trivial "is the workflow engine
      alive?" probe out of the box.
    
    Why the demo lives in src/main, not src/test: Flowable's auto-deployer
    reads from the host classpath at boot, so if either half lived under
    src/test the smoke test wouldn't be reproducible from the shipped
    image — exactly what CLAUDE.md's "reference plug-in is the executable
    acceptance test" discipline is trying to prevent.
    
    ## The Flowable + Liquibase trap
    
    **Learned the hard way during the smoke test.** Adding
    `flowable-spring-boot-starter-process` immediately broke boot with
    `Schema-validation: missing table [catalog__item]`. Liquibase was
    silently not running. Root cause: Flowable 7.x registers a Spring
    Boot `EnvironmentPostProcessor` called
    `FlowableLiquibaseEnvironmentPostProcessor` that, unless the user has
    already set an explicit value, forces
    `spring.liquibase.enabled=false` with a WARN log line that reads
    "Flowable pulls in Liquibase but does not use the Spring Boot
    configuration for it". Our master.xml then never executes and JPA
    validation fails against the empty schema. Fix is a single line in
    `distribution/src/main/resources/application.yaml` —
    `spring.liquibase.enabled: true` — with a comment explaining why it
    must stay there for anyone who touches config next.
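    The resulting fragment (comment wording illustrative):
    
    ```yaml
    # distribution/src/main/resources/application.yaml (fragment)
    spring:
      liquibase:
        # Flowable's FlowableLiquibaseEnvironmentPostProcessor force-disables
        # Liquibase unless an explicit value is set; vibe_erp's master.xml must run.
        enabled: true
    ```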
    
    Flowable's own ACT_* tables and vibe_erp's `catalog__*`, `pbc.*__*`,
    etc. tables coexist happily in the same public schema — 39 ACT_*
    tables alongside 45 vibe_erp tables on the smoke-tested DB. Flowable
    manages its own schema via its internal MyBatis DDL, Liquibase manages
    ours, they don't touch each other.
    
    ## Smoke-test transcript (fresh DB, dev profile)
    
    ```
    docker compose down -v && docker compose up -d db
    ./gradlew :distribution:bootRun &
    # ... Flowable creates ACT_* tables, Liquibase creates vibe_erp tables,
    #     MetadataLoader loads workflow.yml, TaskHandlerRegistry boots with 1 handler,
    #     BPMN auto-deployed from classpath
    POST /api/v1/auth/login → JWT
    GET  /api/v1/workflow/definitions → 1 definition (vibeerp-workflow-ping)
    GET  /api/v1/workflow/handlers → {"count":1,"keys":["vibeerp.workflow.ping"]}
    POST /api/v1/workflow/process-instances
         {"processDefinitionKey":"vibeerp-workflow-ping",
          "businessKey":"smoke-1",
          "variables":{"greeting":"ni hao"}}
      → 201 {"processInstanceId":"...","ended":true,
             "variables":{"pong":true,"pongAt":"2026-04-09T...",
                          "correlationId":"...","greeting":"ni hao"}}
    POST /api/v1/workflow/process-instances {"processDefinitionKey":"does-not-exist"}
      → 404 {"message":"No process definition found for key 'does-not-exist'"}
    GET  /api/v1/catalog/uoms → still returns the 15 seeded UoMs (sanity)
    ```
    
    ## Tests
    
    - 15 new unit tests in `platform-workflow/src/test`:
      * `TaskHandlerRegistryTest` — init with initial handlers, duplicate
        key fails fast, blank key rejected, unregister removes,
        unregister on unknown returns false, find on missing returns null
      * `DispatchingJavaDelegateTest` — dispatches by currentActivityId,
        throws on missing handler, defensive-copies the variable map
      * `DelegateTaskContextTest` — set non-null forwards, set null
        removes, blank name rejected, principal/locale/correlationId
        passthrough, default correlation id is stable across calls
      * `PingTaskHandlerTest` — key matches the BPMN serviceTask id,
        execute writes pong + pongAt + correlationId
    - Total framework unit tests: 261 (was 246), all green.
    
    ## What this unblocks
    
    - **REF.1** — real quote→job-card workflow handler in the
      printing-shop plug-in
    - **pbc-production routings/operations (v3)** — each operation
      becomes a BPMN step with duration + machine assignment
    - **P2.3** — user-task form rendering (landing on top of the
      RuntimeService already exposed via WorkflowService)
    - **P2.2** — BPMN designer web page (later, depends on R1)
    
    ## Deliberate non-goals (parking lot)
    
    - Principal propagation from the REST caller through the process
      start into the handler — uses a fixed `workflow-engine`
      `Principal.System` for now. Follow-up chunk will plumb the
      authenticated user as a Flowable variable.
    - Plug-in-contributed TaskHandler registration via PF4J child
      contexts — the registry exposes `register/unregister` but the
      plug-in loader doesn't call them yet. Follow-up chunk.
    - BPMN user tasks, signals, timers, history queries — seam exists,
      deliberately not built out.
    - Workflow deployment from `metadata__workflow` rows (the Tier 1
      path). Today deployment is classpath-only via Flowable's auto-
      deployer.
    - The Flowable async job executor is explicitly deactivated
      (`flowable.async-executor-activate: false`) — background-job
      machinery belongs to the future Quartz integration (P1.10), not
      Flowable.