Closes the P1.10 row of the implementation plan. New platform-jobs
subproject shipping a Quartz-backed background job engine adapted
to the api.v1 JobHandler contract, so PBCs and plug-ins can register
scheduled work without ever importing Quartz types.
## The shape (matches the P2.1 workflow engine)
platform-jobs is to scheduled work what platform-workflow is to
BPMN service tasks. Same pattern, same discipline:
- A single `@Component` bridge (`QuartzJobBridge`) is the ONLY
org.quartz.Job implementation in the framework. Every persistent
trigger points at it.
- A single `JobHandlerRegistry` (owner-tagged, duplicate-key-rejecting,
ConcurrentHashMap-backed) holds every registered JobHandler by key.
Mirrors `TaskHandlerRegistry`.
- The bridge reads the handler key from the trigger's JobDataMap,
looks it up in the registry, and executes the matching JobHandler
inside a `PrincipalContext.runAs("system:jobs:<key>")` block so
audit rows written during the job get a structured, greppable
`created_by` value ("system:jobs:core.audit.prune") instead of
the default `__system__`.
- Handler-thrown exceptions are re-wrapped as `JobExecutionException`
so Quartz's failure-handling machinery (refire-immediately,
unschedule-on-error) can act on them properly.
- `@DisallowConcurrentExecution` on the bridge stops a long-running
handler from being started again before it finishes.
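The dispatch-and-wrap flow above can be sketched in plain Java. `JobHandler`, the registry map, and `JobExecutionException` are minimal stand-ins here (the real types live in api.v1 and Quartz), and the `PrincipalContext.runAs` wrapper is reduced to a comment:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the bridge's routing logic only -- not the real QuartzJobBridge.
public class BridgeSketch {
    interface JobHandler { void execute(Map<String, Object> data) throws Exception; }

    // Stand-in for org.quartz.JobExecutionException.
    static class JobExecutionException extends Exception {
        JobExecutionException(Throwable cause) { super(cause); }
    }

    static final Map<String, JobHandler> registry = new ConcurrentHashMap<>();

    // Mirrors the bridge: read the handler key from the data map, look the
    // handler up, re-wrap any failure as JobExecutionException.
    static void execute(Map<String, Object> jobDataMap) throws JobExecutionException {
        String key = (String) jobDataMap.get("__vibeerp_handler_key");
        JobHandler handler = registry.get(key);
        if (handler == null) {
            throw new JobExecutionException(
                new IllegalStateException("no JobHandler registered for key '" + key + "'"));
        }
        try {
            // Real bridge wraps this in PrincipalContext.runAs("system:jobs:" + key).
            handler.execute(jobDataMap);
        } catch (Exception e) {
            throw new JobExecutionException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        registry.put("vibeerp.jobs.ping",
            data -> System.out.println("ping " + data.get("source")));
        execute(Map.of("__vibeerp_handler_key", "vibeerp.jobs.ping", "source", "demo"));
    }
}
```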
## api.v1 additions (package `org.vibeerp.api.v1.jobs`)
- `JobHandler` — interface with `key()` + `execute(context)`.
Analogous to the workflow TaskHandler. Plug-ins implement this
to contribute scheduled work without any Quartz dependency.
- `JobContext` — read-only execution context passed to the handler:
principal, locale, correlation id, started-at instant, data map.
Unlike TaskContext it has no `set()` writeback — scheduled jobs
don't produce continuation state for a downstream step; a job
that wants to talk to the rest of the system writes to its own
domain table or publishes an event.
- `JobScheduler` — injectable facade exposing:
* `scheduleCron(scheduleKey, handlerKey, cronExpression, data)`
* `scheduleOnce(scheduleKey, handlerKey, runAt, data)`
* `unschedule(scheduleKey): Boolean`
* `triggerNow(handlerKey, data): JobExecutionSummary`
— synchronous in-thread execution, bypasses Quartz; used by
the HTTP trigger endpoint and by tests.
* `listScheduled(): List<ScheduledJobInfo>` — introspection
Both `scheduleCron` and `scheduleOnce` are idempotent on
`scheduleKey` (replace if exists).
- `ScheduledJobInfo` + `JobExecutionSummary` + `ScheduleKind` —
read-only DTOs returned by the scheduler.
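A plug-in's side of the contract might look like the following sketch. The `JobContext` accessor names (`principal()`, `data()`, ...) and the `NightlyReportHandler` are illustrative assumptions; only the `key()` + `execute(context)` shape comes from the list above:

```java
import java.time.Instant;
import java.util.Map;

public class HandlerSketch {
    // Assumed accessor names for the read-only context described above.
    interface JobContext {
        String principal();
        String correlationId();
        Instant startedAt();
        Map<String, Object> data();   // read-only: no set() writeback
    }

    interface JobHandler {
        String key();
        void execute(JobContext context) throws Exception;
    }

    // A plug-in contributes scheduled work by implementing JobHandler --
    // no Quartz import anywhere. Hypothetical handler for illustration.
    static class NightlyReportHandler implements JobHandler {
        public String key() { return "demo.reports.nightly"; }
        public void execute(JobContext ctx) {
            System.out.println("running " + key() + " as " + ctx.principal()
                + " with data " + ctx.data());
        }
    }

    public static void main(String[] args) throws Exception {
        JobContext ctx = new JobContext() {
            public String principal() { return "system:jobs:demo.reports.nightly"; }
            public String correlationId() { return "demo-correlation"; }
            public Instant startedAt() { return Instant.now(); }
            public Map<String, Object> data() { return Map.of("day", "monday"); }
        };
        new NightlyReportHandler().execute(ctx);
    }
}
```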
## platform-jobs runtime
- `QuartzJobBridge` — the shared Job impl. Routes by the
`__vibeerp_handler_key` JobDataMap entry. Uses `@Autowired` field
injection because Quartz instantiates Job classes through its
own JobFactory (Spring Boot's `SpringBeanJobFactory` autowires
fields after construction, which is the documented pattern).
- `QuartzJobScheduler` — the concrete api.v1 `JobScheduler`
implementation. Builds JobDetail + Trigger pairs under fixed
group names (`vibeerp-jobs`), uses `addJob(replace=true)` +
explicit `checkExists` + `rescheduleJob` for idempotent
scheduling, strips the reserved `__vibeerp_handler_key` from the
data visible to the handler.
- `SimpleJobContext` — internal immutable `JobContext` impl.
Defensive-copies the data map at construction.
- `JobHandlerRegistry` — owner-tagged registry (OWNER_CORE by
default, any other string for plug-in ownership). Same
`register` / `unregister` / `unregisterAllByOwner` / `find` /
`keys` / `size` surface as `TaskHandlerRegistry`. The plug-in
loader integration seam is defined; the loader hook that calls
`register(handler, pluginId)` lands when a plug-in actually ships
a job handler (YAGNI).
- `JobController` at `/api/v1/jobs/**`:
* `GET /handlers` (perm `jobs.handler.read`)
* `POST /handlers/{key}/trigger` (perm `jobs.job.trigger`)
* `GET /scheduled` (perm `jobs.schedule.read`)
* `POST /scheduled` (perm `jobs.schedule.write`)
* `DELETE /scheduled/{key}` (perm `jobs.schedule.write`)
- `VibeErpPingJobHandler` — built-in diagnostic. Key
`vibeerp.jobs.ping`. Logs the invocation and exits. Safe to
trigger from any environment; mirrors the core
`vibeerp.workflow.ping` workflow handler from P2.1.
- `META-INF/vibe-erp/metadata/jobs.yml` — 4 permissions + 2 menus.
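The registry semantics (duplicate-key rejection, owner-scoped removal) can be sketched with a plain `ConcurrentHashMap`. The entry shape and error wording are assumptions, and the handler type is simplified to `Runnable`:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the owner-tagged registry; the real class also mirrors
// TaskHandlerRegistry's find/keys/size surface.
public class RegistrySketch {
    record Entry(String owner, Runnable handler) {}   // handler type simplified

    static final String OWNER_CORE = "core";
    private final Map<String, Entry> handlers = new ConcurrentHashMap<>();

    void register(String key, Runnable handler, String owner) {
        Entry prev = handlers.putIfAbsent(key, new Entry(owner, handler));
        if (prev != null) {
            // Fail fast with both owners in the error, as the tests describe.
            throw new IllegalStateException("duplicate JobHandler key '" + key
                + "': owned by '" + prev.owner() + "', rejected for '" + owner + "'");
        }
    }

    // The plug-in loader seam: strip every handler a stopping plug-in owns.
    int unregisterAllByOwner(String owner) {
        List<String> doomed = handlers.entrySet().stream()
            .filter(e -> e.getValue().owner().equals(owner))
            .map(Map.Entry::getKey)
            .toList();
        doomed.forEach(handlers::remove);
        return doomed.size();
    }

    public static void main(String[] args) {
        RegistrySketch registry = new RegistrySketch();
        registry.register("vibeerp.jobs.ping", () -> {}, OWNER_CORE);
        registry.register("acme.sync", () -> {}, "plugin:acme");  // hypothetical plug-in
        System.out.println("removed=" + registry.unregisterAllByOwner("plugin:acme"));
        System.out.println("remaining=" + registry.handlers.size());
    }
}
```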
## Spring Boot config (application.yaml)
```
spring:
  quartz:
    job-store-type: jdbc
    jdbc:
      initialize-schema: always  # creates QRTZ_* tables on first boot
    properties:
      org.quartz.scheduler.instanceName: vibeerp-scheduler
      org.quartz.scheduler.instanceId: AUTO
      org.quartz.threadPool.threadCount: "4"
      org.quartz.jobStore.driverDelegateClass: org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
      org.quartz.jobStore.isClustered: "false"
```
## The config trap caught during smoke-test (documented in-file)
First boot crashed with `SchedulerConfigException: DataSource name
not set.` because I'd initially added
`org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX`
to the raw Quartz properties. That is correct for a standalone
Quartz deployment but WRONG for the Spring Boot starter: the
starter configures a `LocalDataSourceJobStore` that wraps the
Spring-managed DataSource automatically when `job-store-type=jdbc`,
and setting `jobStore.class` explicitly overrides that wrapper back
to Quartz's standalone JobStoreTX — which then fails at init
because Quartz-standalone expects a separately-named `dataSource`
property the Spring Boot starter doesn't supply. Fix: drop the
`jobStore.class` property entirely. The `driverDelegateClass` is
still fine to set explicitly because it's read by both the standalone
and Spring-wrapped JobStore implementations. Rationale is documented
in the config comment so the next maintainer doesn't add it back.
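The whole trap reduces to one property. A minimal sketch of the relevant YAML, with the line that must stay out left as a comment:

```
spring:
  quartz:
    job-store-type: jdbc  # starter wires a LocalDataSourceJobStore around the Spring DataSource
    properties:
      # Do NOT set this with the Spring Boot starter -- it overrides the
      # starter's DataSource-wrapping JobStore and fails at init with
      # "SchedulerConfigException: DataSource name not set.":
      # org.quartz.jobStore.class: org.quartz.impl.jdbcjobstore.JobStoreTX
      org.quartz.jobStore.driverDelegateClass: org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
```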
## Smoke test (fresh DB, as admin)
```
GET /api/v1/jobs/handlers
→ {"count": 1, "keys": ["vibeerp.jobs.ping"]}
POST /api/v1/jobs/handlers/vibeerp.jobs.ping/trigger
{"data": {"source": "smoke-test"}}
→ 200 {"handlerKey": "vibeerp.jobs.ping",
"correlationId": "e142...",
"startedAt": "...",
"finishedAt": "...",
"ok": true}
log: VibeErpPingJobHandler invoked at=... principal='system:jobs:manual-trigger'
data={source=smoke-test}
GET /api/v1/jobs/scheduled → []
POST /api/v1/jobs/scheduled
{"scheduleKey": "ping-every-sec",
"handlerKey": "vibeerp.jobs.ping",
"cronExpression": "0/1 * * * * ?",
"data": {"trigger": "cron"}}
→ 201 {"scheduleKey": "ping-every-sec", "handlerKey": "vibeerp.jobs.ping"}
# after 3 seconds
GET /api/v1/jobs/scheduled
→ [{"scheduleKey": "ping-every-sec",
"handlerKey": "vibeerp.jobs.ping",
"kind": "CRON",
"cronExpression": "0/1 * * * * ?",
"nextFireTime": "...",
"previousFireTime": "...",
"data": {"trigger": "cron"}}]
DELETE /api/v1/jobs/scheduled/ping-every-sec → 200 {"removed": true}
# handler log count after ~3 seconds of cron ticks
grep -c "VibeErpPingJobHandler invoked" /tmp/boot.log → 5
# 1 manual trigger + 4 cron ticks before unschedule — matches the
# 0/1 * * * * ? expression
# negatives
POST /api/v1/jobs/handlers/nope/trigger
→ 400 "no JobHandler registered for key 'nope'"
POST /api/v1/jobs/scheduled {cronExpression: "not a cron"}
→ 400 "invalid Quartz cron expression: 'not a cron'"
```
## Three schemas coexist in one Postgres database
```
SELECT count(*) FILTER (WHERE table_name LIKE 'qrtz_%') AS quartz_tables,
       count(*) FILTER (WHERE table_name LIKE 'act_%') AS flowable_tables,
       count(*) FILTER (WHERE table_name NOT LIKE 'qrtz_%'
                          AND table_name NOT LIKE 'act_%') AS vibeerp_tables
  FROM information_schema.tables
 WHERE table_schema = 'public';
quartz_tables | flowable_tables | vibeerp_tables
---------------+-----------------+----------------
11 | 39 | 48
```
Three independent schema owners (Quartz / Flowable / Liquibase) in
one public schema, no collisions. Spring Boot's
`QuartzDataSourceScriptDatabaseInitializer` runs the QRTZ_* DDL
once and skips on subsequent boots; Flowable's internal MyBatis
schema manager does the same for ACT_* tables; our Liquibase owns
the rest.
## Tests
- 6 new tests in `JobHandlerRegistryTest`:
* initial handlers registered with OWNER_CORE
* duplicate key fails fast with both owners in the error
* unregisterAllByOwner only removes handlers owned by that id
* unregister by key returns false for unknown
* find on missing key returns null
* blank key is rejected
- 9 new tests in `QuartzJobSchedulerTest` (Quartz Scheduler mocked):
* scheduleCron rejects an unknown handler key
* scheduleCron rejects an invalid cron expression
* scheduleCron adds job + schedules trigger when nothing exists yet
* scheduleCron reschedules when the trigger already exists
* scheduleOnce uses a simple trigger at the requested instant
* unschedule returns true/false correctly
* triggerNow calls the handler synchronously and returns ok=true
* triggerNow propagates the handler's exception
* triggerNow rejects an unknown handler key
- Total framework unit tests: 315 (was 300), all green.
## What this unblocks
- **pbc-finance audit prune** — a core recurring job that deletes
posted journal entries older than N days, driven by a cron from
a Tier 1 metadata row.
- **Plug-in scheduled work** — once the loader integration hook is
wired (trivial follow-up), any plug-in's `start(context)` can
register a JobHandler via `context.jobs.register(handler)` and
the host strips it on plug-in stop via `unregisterAllByOwner`.
- **Delayed workflow continuations** — a BPMN handler can call
`jobScheduler.scheduleOnce(...)` to "re-evaluate this workflow
in 24 hours if no one has approved it", bridging the workflow
engine and the scheduler without introducing `Thread.sleep`.
- **Outbox draining strategy** — the existing 5-second OutboxPoller
can move from a Spring @Scheduled to a Quartz cron so it
inherits the scheduler's persistence, misfire handling, and the
future clustering story.
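The delayed-continuation item above is just a `scheduleOnce` call from inside a BPMN handler. A sketch against a stubbed `JobScheduler` (the facade signature comes from the api.v1 list; the handler key and workflow id are hypothetical):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;

// Sketch of the "re-evaluate in 24 hours" pattern; the stub just records the call.
public class ContinuationSketch {
    interface JobScheduler {
        void scheduleOnce(String scheduleKey, String handlerKey,
                          Instant runAt, Map<String, Object> data);
    }

    public static void main(String[] args) {
        JobScheduler scheduler = (scheduleKey, handlerKey, runAt, data) ->
            System.out.println("scheduled " + scheduleKey + " -> " + handlerKey);

        String processInstanceId = "wf-123";   // hypothetical workflow instance id
        // From inside a BPMN service task: no Thread.sleep, just a one-shot trigger
        // keyed to the workflow instance so rescheduling is idempotent.
        scheduler.scheduleOnce(
            "approval-timeout:" + processInstanceId,
            "demo.workflow.reevaluate",        // hypothetical handler key
            Instant.now().plus(Duration.ofHours(24)),
            Map.of("processInstanceId", processInstanceId));
    }
}
```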
## Non-goals (parking lot)
- **Clustered scheduling.** `isClustered=false` for now. Turning it on
requires each instance to get a unique `instanceId` (the `AUTO` setting
already provides that) while sharing the same JDBC store, and agreement
on the JDBC lock policy — doable but out of v1.0 scope since vibe_erp
is single-tenant single-instance by design.
- **Async execution of triggerNow.** The current `triggerNow` runs
synchronously on the caller thread so HTTP requests see the real
result. A future "fire and forget" endpoint would delegate to
`Scheduler.triggerJob(...)` against the JobDetail instead.
- **Per-job permissions.** Today the four `jobs.*` permissions gate
the whole controller. A future enhancement could attach
per-handler permissions (so "trigger audit prune" requires a
different permission than "trigger pricing refresh").
- **Plug-in loader integration.** The seam is defined on
`JobHandlerRegistry` (owner tagging + unregisterAllByOwner) but
`VibeErpPluginManager` doesn't call it yet. Lands in the same
chunk as the first plug-in that ships a JobHandler.