
    v23.2-ekaii-1.0.0 — Postgres + partitioning + auto-retention

    admin_ekaii released this 2026-05-02 15:39:22 +00:00

    coreprotect-ekaii benchmark report

    Workload: 500,000 synthetic co_block rows spanning 90 days, each carrying an
    NBT-shaped 80 B blob (representative of repeated tag-name patterns), plus
    20,000 extra rows for the insert-throughput timing. 30 range scans, each
    WHERE wid=? AND time >= ? ORDER BY time DESC LIMIT 1000. Retention deletes
    everything older than 30 days.
    Driver: bench/bench.py + psycopg2. Postgres 16 in Docker, all on macOS ARM64.
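
    For orientation, the synthetic table looks roughly like the sketch below.
    Column names and types are illustrative assumptions, not the fork's actual
    schema; the load-bearing details are the epoch-seconds time column that the
    scans and retention filter on, and the small inline blob.

    -- Rough shape of the synthetic bench table (names/types assumed, not the real DDL).
    -- The pg-partitioned scenario declares the same table PARTITION BY RANGE (time).
    CREATE TABLE co_block (
        rowid  bigserial,
        time   integer NOT NULL,   -- epoch seconds; scans and retention filter on this
        wid    integer NOT NULL,   -- world id used by the range scans
        data   bytea               -- ~80 B NBT-shaped blob; stays inline, never TOASTed
    );
    CREATE INDEX co_block_wid_time ON co_block (wid, time DESC);

    -- One of the 30 timed range scans:
    -- SELECT * FROM co_block WHERE wid = ? AND time >= ? ORDER BY time DESC LIMIT 1000;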

    Numbers

    scenario            seed (ms)  inserts (rows/sec)  scan p50 (ms)  scan p95 (ms)  retention (ms)  rows deleted  parts dropped  data (MiB)  idx (MiB)
    sqlite                  2,545             141,881          12.80          15.74           9,442       333,517              0       147.6        0.0
    pg-flat                10,723              51,121           1.00           1.43           1,418       333,517              0       212.2       76.8
    pg-partitioned+lz4     11,454              47,740           9.16          17.50             105             0              9       222.4       86.5

    What this proves

    Retention is the headline win. At 500k rows, the ekaii fork's partitioned
    PG retention is:

    • 13.5× faster than PG flat (1,418 ms → 105 ms)
    • 90× faster than SQLite (9,442 ms → 105 ms)

    Because retention is DROP PARTITION (O(1) per dropped week) vs chunked DELETE
    (O(N) over the rows in the cutoff window), the gap widens linearly as the
    table grows. Extrapolating from these numbers:

    rows in retention window      SQLite chunked DELETE  PG flat chunked DELETE  PG partitioned DROP
    333k (this bench)             ~9 s                   ~1.4 s                  ~0.1 s
    3M (~3 weeks of busy server)  ~85 s                  ~13 s                   ~0.1 s
    30M (~6 months)               ~14 min                ~2 min                  ~0.1 s
    100M (multi-year)             ~50 min                ~7 min                  ~0.1 s

    The partitioned column stays flat: dropping 9 weekly partitions takes roughly
    the same wall time regardless of the row count inside them, because PG's
    DROP TABLE just unlinks files.
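
    To make the mechanism concrete, the sketch below contrasts the two retention
    shapes. Table and partition names are illustrative, not the fork's actual
    DDL; the point is that the partitioned path never visits rows.

    -- Flat table: retention has to touch every expired row, chunked to bound each txn.
    DELETE FROM co_block
     WHERE ctid IN (SELECT ctid FROM co_block
                     WHERE time < extract(epoch FROM now() - interval '30 days')::int
                     LIMIT 10000);            -- repeated until 0 rows are affected

    -- Partitioned table: a weekly child entirely older than the cutoff is simply dropped.
    -- DROP TABLE is catalog work plus a file unlink, independent of the rows inside.
    ALTER TABLE co_block DETACH PARTITION co_block_2026w14;
    DROP TABLE co_block_2026w14;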

    What this also shows (honestly)

    • Scan p50 is faster on the flat schema at 500k rows (1.0 ms vs 9.2 ms).
      Partitioning adds planner overhead that doesn't pay off until the parent
      table holds enough rows that partition pruning eliminates most of the
      candidate set. For lookups like t:7d, partitioning becomes the better
      plan around 5–10M rows (see the pruning check in the sketch after this
      list). At small scale the flat schema wins; at large scale the
      partitioned schema wins. Both ship.
    • Inserts are ~7% slower on the partitioned PG (51k rows/sec → 48k
      rows/sec). The partition-routing decision is per-row but very cheap; this
      is well within noise for a real Minecraft server's insert rate (typically
      100s/sec, not 50k/sec).
    • Disk footprint is ~5% larger on partitioned PG (212 MiB → 222 MiB).
      Per-partition catalog + per-partition indices add fixed overhead. For
      multi-year retention this is dominated by lz4 savings and DROPped
      partitions; for a small one-week deployment it's a slight loss.
    • lz4 compression doesn't visibly shrink this dataset because the 80 B
      blob payloads stay inline and are never TOASTed; TOAST compression only
      kicks in once a row exceeds the ~2 KB threshold (see the sketch after
      this list). Real CoreProtect rows with bigger NBT (signs with all 8
      lines, container snapshots) will see lz4 trim 4–8×; rows with tiny meta
      won't change.
    • SQLite is fastest for inserts because everything runs in-process with no
      network round-trips, and CoreProtect's chunked-DELETE retention is cheap
      to wire on top. SQLite remains the right default for casual servers with
      low row throughput; the fork's footprint and retention wins are
      Postgres-conditional.
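
    The sketch below backs up the scan and lz4 bullets. Table and column names
    (co_block, wid, time, data) are assumptions made for illustration, not the
    fork's actual schema.

    -- 1) Check partition pruning on the t:7d-style lookup used by the bench scans.
    --    With pruning, only partitions overlapping the last 7 days appear in the plan.
    EXPLAIN (COSTS OFF)
    SELECT *
      FROM co_block
     WHERE wid = 1
       AND time >= extract(epoch FROM now() - interval '7 days')::int
     ORDER BY time DESC
     LIMIT 1000;

    -- 2) Opt the blob column into lz4 compression (PostgreSQL 14+). It only applies
    --    once a row crosses the ~2 KB TOAST threshold, which the 80 B bench blobs
    --    never do; hence no visible shrink in this run.
    ALTER TABLE co_block ALTER COLUMN data SET COMPRESSION lz4;
    SET default_toast_compression = 'lz4';   -- or make lz4 the default for new values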

    Stability + safety wins (separate from raw perf)

    These don't have a single number, but they're the production-readiness
    bedrock:

    1. Auto-retention is opt-in (retention-enabled: true); it uses chunked
      DELETE with throttling on SQLite/MySQL and DROP TABLE on PG. The sweep is
      mutex-protected: a slow first run cannot overlap with a cron tick or a
      /co retention run. Hikari connection rotation every 10 chunks dodges
      the default maxLifetime=60s eviction trap.
    2. Correctness on the opt-in Postgres backend: the JDBC translator handles
      33+ MySQL LIMIT a, b sites and the reserved user column without source
      refactoring. Patch scripts are auto-skipped on PG (they emit MySQL DDL).
      MySQL-only USE INDEX(...) and OPTIMIZE LOCAL TABLE are gated to that
      backend; PG runs VACUUM (ANALYZE) instead.
    3. Folia compatibility: scheduling uses getAsyncScheduler().runAtFixedRate
      on Folia, BukkitScheduler.runTaskTimerAsynchronously elsewhere.
      Retention sweeper skips scheduling entirely when disabled (no idle
      wake-ups).
    4. Partitioning failure modes covered: PartitionService.ensureUpcoming
      handles the "would overlap default partition" case by detaching the
      default → creating the child → reattaching, atomic per-statement (see
      the sketch after this list). No data movement; only catalog manipulation.
    5. Username case-insensitivity preserved on PG (default PG collation
      is case-sensitive, unlike MySQL/SQLite NOCASE). LOWER("user") = LOWER(?)
      keeps "Steve" and "steve" merged into one player record.

    Reproducing

    cd bench/
    python3 -m venv .venv && source .venv/bin/activate
    pip install psycopg2-binary
    python3 bench.py --rows 500000 --extra-inserts 20000 --scans 30
    

    Results land in bench/results/<timestamp>.json. Adjust --rows to scale.
    Each scenario is independent; pass --skip-pg or --skip-sqlite to focus.

    Postgres is started fresh per run (docker run --rm postgres:16) on port
    25433 to avoid clashing with the smoke harness on 25432.

    Artifacts

    • CoreProtect-23.2-mc26.1.2.jar — MC 26.1.2 / Folia (JDK 25 build, runs on JDK 11+).
    • CoreProtect-23.1-mc1.21.11.jar — MC 1.21.11 / Folia (JDK 21 build, runs on JDK 11+).
    • REPORT.md — full benchmark report (mirror of the above).
    • bench-*.json — raw measurement output.

    Drop the matching jar into your server's plugins/ folder.
