prefect server in zig

add database migration system

- src/db/migrate.zig: migration runner with statement parser
- src/db/migrations_data.zig: migration registry using @embedFile
- src/db/migrations/001_initial/: full schema for sqlite and postgres
- replaces old CREATE TABLE IF NOT EXISTS approach
- tracks applied migrations in _migrations table
- tested on sqlite (local) and postgres (k8s)
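- illustrative call-site sketch (hypothetical `main`; the real entrypoint is not part of this diff):

```zig
// hypothetical call site - shown for context only, not part of this change
const migrate = @import("db/migrate.zig");

pub fn main() !void {
    // apply any pending migrations before the server accepts requests
    try migrate.applyMigrations();
    // ... start server ...
}
```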

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

+911 -143
+5 -2
docs/database.md
```diff
···
 ## migrations

-**current state**: no migration system. schema is applied via `CREATE TABLE IF NOT EXISTS` on startup.
+schema changes are managed via embedded migrations. on startup, the server:
+1. creates `_migrations` table if needed
+2. applies any pending migrations in order
+3. records applied migrations

-see [migrations.md](./migrations.md) for planning on proper migration support.
+see [migrations.md](./migrations.md) for details on adding new migrations.

 see [python-reference/](./python-reference/) for how prefect python handles migrations with alembic.
```
+80 -130
docs/migrations.md
````diff
 # database migrations

-## current state: no migration system
+embedded migration system supporting sqlite and postgresql.

-we currently have no migration system. schema changes are:
-- embedded in `src/db/schema/sqlite.zig` and `src/db/schema/postgres.zig`
-- applied via `CREATE TABLE IF NOT EXISTS` on startup
-- manually applied to existing databases (`ALTER TABLE ...`)
+## overview

-this works for development but **will not work for production**.
+migrations are applied automatically on server startup. the system:
+- tracks applied migrations in `_migrations` table
+- embeds SQL files in the binary via `@embedFile`
+- supports dialect-specific SQL (sqlite vs postgres)
+- is idempotent (safe to run repeatedly)

-## requirements
+## structure

-1. **version tracking** - know which migrations have been applied
-2. **dual dialect support** - sqlite and postgres, possibly with different SQL
-3. **forward migrations** - apply schema changes
-4. **rollback capability** - undo changes (nice to have)
-5. **embedded runtime** - apply migrations on server startup
-6. **CLI tooling** - create/manage migrations during development
-
-## options evaluated
-
-### option 1: zmig (sqlite-only)
-
-[zmig](https://github.com/Jeansidharta/zmig) - native zig migration tool
-
-**pros:**
-- pure zig, embeds migrations in binary
-- timestamp-based ordering
-- up/down migrations
-- runtime API: `applyMigrations(db, alloc, options)`
-- requires zig 0.15.2 (we use this)
-
-**cons:**
-- sqlite only - no postgres support
-- would need to fork/extend for postgres
-
-**migration format:**
 ```
-migrations/
-├── 1758503588032-add_empirical_policy.up.sql
-└── 1758503588032-add_empirical_policy.down.sql
+src/db/
+├── migrations/
+│   ├── 001_initial/
+│   │   ├── sqlite.sql      # sqlite DDL
+│   │   └── postgres.sql    # postgres DDL
+│   └── 002_add_feature/    # future migrations
+│       ├── sqlite.sql
+│       └── postgres.sql
+├── migrations_data.zig     # migration registry
+└── migrate.zig             # migration runner
 ```

-### option 2: atlas (external tool)
-
-[atlas](https://atlasgo.io/) - schema-as-code migration tool (go-based)
-
-**pros:**
-- supports sqlite AND postgres
-- declarative schema definitions
-- automatic migration generation
-- k8s operator available
-- language-agnostic CLI
-
-**cons:**
-- external dependency (go binary)
-- not embedded in our binary
-- adds operational complexity
-- schema defined separately from zig code
+## adding a new migration

-### option 3: minimal DIY
+1. create directory: `src/db/migrations/NNN_description/`
+2. add `sqlite.sql` and `postgres.sql` with your DDL
+3. register in `migrations_data.zig`:

-build minimal migration system ourselves:
+```zig
+pub const all = [_]Migration{
+    .{
+        .id = "001_initial",
+        .sqlite_sql = @embedFile("migrations/001_initial/sqlite.sql"),
+        .postgres_sql = @embedFile("migrations/001_initial/postgres.sql"),
+    },
+    .{
+        .id = "002_add_feature", // <-- add new entry
+        .sqlite_sql = @embedFile("migrations/002_add_feature/sqlite.sql"),
+        .postgres_sql = @embedFile("migrations/002_add_feature/postgres.sql"),
+    },
+};
+```

-**pros:**
-- tailored to our exact needs
-- no external dependencies
-- full control over dialect differences
-- embeds in binary
+4. rebuild: `zig build`

-**cons:**
-- development effort
-- must handle edge cases ourselves
+## migration sql guidelines

-### option 4: hybrid approach (recommended)
+- use `CREATE TABLE IF NOT EXISTS` for new tables
+- use `CREATE INDEX IF NOT EXISTS` for indexes
+- for schema changes, write dialect-specific SQL:
+  - sqlite: may need table recreation for some ALTER operations
+  - postgres: supports most ALTER operations directly

-**development**: use atlas CLI for migration creation/testing
-**runtime**: embed migration SQL files and apply with simple zig code
+example migration adding a column:

-```
-migrations/
-├── versions.txt                # ordered list of migration IDs
-├── 001_initial/
-│   ├── sqlite.up.sql
-│   ├── sqlite.down.sql
-│   ├── postgres.up.sql
-│   └── postgres.down.sql
-└── 002_add_empirical_policy/
-    ├── sqlite.up.sql
-    └── postgres.up.sql
+**sqlite.sql**:
+```sql
+-- sqlite doesn't support ADD COLUMN IF NOT EXISTS; a plain ALTER TABLE is
+-- safe here because the tracker runs each migration exactly once
+ALTER TABLE flow_run ADD COLUMN new_field TEXT DEFAULT '';
 ```

-runtime logic (minimal zig code):
-```zig
-pub fn applyMigrations(db: *Backend) !void {
-    // 1. create migrations table if not exists
-    // 2. read applied migrations from table
-    // 3. read embedded migrations from @embedFile
-    // 4. apply any pending migrations in order
-    // 5. record applied migrations
-}
+**postgres.sql**:
+```sql
+ALTER TABLE flow_run ADD COLUMN IF NOT EXISTS new_field TEXT DEFAULT '';
 ```

-## migration table schema
+## tracking table

 ```sql
 CREATE TABLE IF NOT EXISTS _migrations (
-    id TEXT PRIMARY KEY,      -- "001_initial"
-    applied_at TEXT NOT NULL, -- ISO timestamp
-    checksum TEXT             -- SHA256 of migration SQL (optional)
+    id TEXT PRIMARY KEY,      -- "001_initial"
+    applied_at TEXT NOT NULL  -- ISO 8601 timestamp
 );
 ```

-## implementation phases
+## runtime behavior

-### phase 1: tracking (minimal)
+on startup:
+1. creates `_migrations` table if not exists
+2. queries applied migrations
+3. applies pending migrations in order
+4. records each applied migration

-1. add `_migrations` table to schema
-2. on startup, record current "version" (e.g., "001_initial")
-3. log warning if database version != code version
+```
+05:39:28.243 | INFO | migrations - applying: 001_initial
+05:39:28.262 | INFO | migrations - applied 001_initial (44 statements)
+```

-### phase 2: embedded migrations
-
-1. create `migrations/` directory with SQL files
-2. embed migrations in binary via `@embedFile`
-3. apply pending migrations on startup
-4. support `--migrate-only` CLI flag
-
-### phase 3: development tooling
-
-1. add `zig build migrate:new` or similar
-2. generate timestamped migration directories
-3. optionally integrate atlas for schema diffing
-
-### phase 4: rollbacks (if needed)
+on subsequent runs:
+```
+05:39:42.218 | DEBUG | migrations - skipping already applied: 001_initial
+```

-1. add down.sql files
-2. add `--rollback` CLI flag
-3. track rollback state
+## design decisions

-## alembic patterns to preserve
-
-from python prefect (see `docs/python-reference/migrations.md`):
-
-1. **separate dialect chains** - different SQL per database
-2. **migration notes** - document changes with both dialect IDs
-3. **batch mode for sqlite** - table recreation for ALTER
-4. **transaction per migration** - isolation
-5. **migration lock** - prevent concurrent execution
+- **manual registration**: migrations must be explicitly added to `migrations_data.zig`
+- **no rollbacks**: forward-only migrations (simplicity over flexibility)
+- **no transactions for DDL**: sqlite auto-commits DDL; statements run sequentially
+- **statement splitting**: SQL files can contain multiple statements separated by `;`

-## questions to resolve
+## future work

-1. **when to migrate?** startup vs explicit command vs both?
-2. **failure handling?** abort startup? partial state?
-3. **backwards compatibility?** can old code read new schema?
-4. **testing?** how to test migrations in CI?
+- `--migrate-only` CLI flag for CI/CD
+- migration generation tooling
+- rollback support (if needed)

-## references
+## reference

-- [zmig](https://github.com/Jeansidharta/zmig) - zig sqlite migrations
-- [zigmigrate](https://github.com/eugenepentland/zigmigrate) - zig sqlite migrations (goose-inspired)
-- [atlas](https://atlasgo.io/) - schema-as-code tool
-- [flyway](https://www.bytebase.com/blog/flyway-vs-liquibase/) - java migration tool
-- python prefect: `~/github.com/prefecthq/prefect/src/prefect/server/database/_migrations/`
+- [python-reference/](./python-reference/) - how prefect python handles migrations
+- old schema files kept in `src/db/schema/` for reference
````
+279
src/db/migrate.zig
```zig
// migrate.zig - database migration runner
//
// applies embedded SQL migrations on startup, tracking applied versions
// in the _migrations table

const std = @import("std");
const Allocator = std.mem.Allocator;
const backend = @import("backend.zig");
const migrations_data = @import("migrations_data.zig");
const log = @import("../logging.zig");

/// migration tracking table schema (same for sqlite and postgres)
const migrations_table_sql =
    \\CREATE TABLE IF NOT EXISTS _migrations (
    \\    id TEXT PRIMARY KEY,
    \\    applied_at TEXT NOT NULL
    \\)
;

/// apply all pending migrations
pub fn applyMigrations() !void {
    const allocator = std.heap.page_allocator;

    // 1. ensure _migrations table exists
    try ensureMigrationsTable();

    // 2. get list of already-applied migration IDs
    var applied = std.StringHashMap(void).init(allocator);
    defer applied.deinit();
    try getAppliedMigrations(&applied);

    // 3. apply each pending migration in order
    for (migrations_data.all) |m| {
        if (!applied.contains(m.id)) {
            try applyMigration(allocator, m);
        } else {
            log.debug("migrations", "skipping already applied: {s}", .{m.id});
        }
    }
}

fn ensureMigrationsTable() !void {
    backend.db.exec(migrations_table_sql, .{}) catch |err| {
        log.err("migrations", "failed to create _migrations table: {}", .{err});
        return err;
    };
}

fn getAppliedMigrations(applied: *std.StringHashMap(void)) !void {
    var rows = backend.db.query("SELECT id FROM _migrations", .{}) catch |err| {
        // table might not exist yet on very first run, that's ok
        log.debug("migrations", "could not query _migrations: {}", .{err});
        return;
    };
    defer rows.deinit();

    while (rows.next()) |row| {
        const id = row.text(0);
        // copy the string since row data is temporary
        const id_copy = applied.allocator.dupe(u8, id) catch continue;
        applied.put(id_copy, {}) catch continue;
    }
}

fn applyMigration(allocator: Allocator, m: migrations_data.Migration) !void {
    const sql = switch (backend.db.dialect) {
        .sqlite => m.sqlite_sql,
        .postgres => m.postgres_sql,
    };

    log.info("migrations", "applying: {s}", .{m.id});

    // split SQL into individual statements and execute each
    // we can't use transactions for DDL in sqlite (CREATE TABLE auto-commits)
    // so we execute each statement separately
    var iter = StatementIterator.init(sql);
    var stmt_count: usize = 0;
    while (iter.next()) |stmt| {
        if (stmt.len == 0) continue;
        backend.db.exec(stmt, .{}) catch |err| {
            log.err("migrations", "failed to execute statement {d}: {}", .{ stmt_count, err });
            log.err("migrations", "statement: {s}", .{stmt[0..@min(stmt.len, 200)]});
            return err;
        };
        stmt_count += 1;
    }

    // record migration as applied
    const timestamp = try getIsoTimestamp(allocator);
    defer allocator.free(timestamp);

    backend.db.exec(
        "INSERT INTO _migrations (id, applied_at) VALUES (?, ?)",
        .{ m.id, timestamp },
    ) catch |err| {
        log.err("migrations", "failed to record migration: {}", .{err});
        return err;
    };

    log.info("migrations", "applied {s} ({d} statements)", .{ m.id, stmt_count });
}

/// iterator that splits SQL into individual statements by semicolons
/// handles comments and string literals
const StatementIterator = struct {
    sql: []const u8,
    pos: usize,

    pub fn init(sql: []const u8) StatementIterator {
        return .{ .sql = sql, .pos = 0 };
    }

    pub fn next(self: *StatementIterator) ?[]const u8 {
        if (self.pos >= self.sql.len) return null;

        const start = self.pos;
        var in_string = false;
        var in_comment = false;
        var in_line_comment = false;

        while (self.pos < self.sql.len) {
            const c = self.sql[self.pos];

            // handle line comments (-- ...)
            if (!in_string and !in_comment and self.pos + 1 < self.sql.len) {
                if (c == '-' and self.sql[self.pos + 1] == '-') {
                    in_line_comment = true;
                    self.pos += 2;
                    continue;
                }
            }

            if (in_line_comment) {
                if (c == '\n') {
                    in_line_comment = false;
                }
                self.pos += 1;
                continue;
            }

            // handle block comments (/* ... */)
            if (!in_string and !in_comment and self.pos + 1 < self.sql.len) {
                if (c == '/' and self.sql[self.pos + 1] == '*') {
                    in_comment = true;
                    self.pos += 2;
                    continue;
                }
            }

            if (in_comment) {
                if (c == '*' and self.pos + 1 < self.sql.len and self.sql[self.pos + 1] == '/') {
                    in_comment = false;
                    self.pos += 2;
                    continue;
                }
                self.pos += 1;
                continue;
            }

            // handle string literals
            if (c == '\'') {
                if (in_string) {
                    // check for escaped quote ''
                    if (self.pos + 1 < self.sql.len and self.sql[self.pos + 1] == '\'') {
                        self.pos += 2;
                        continue;
                    }
                    in_string = false;
                } else {
                    in_string = true;
                }
                self.pos += 1;
                continue;
            }

            // found statement terminator
            if (c == ';' and !in_string and !in_comment) {
                const stmt = std.mem.trim(u8, self.sql[start..self.pos], " \t\n\r");
                self.pos += 1;
                // skip empty statements
                if (stmt.len == 0 or isOnlyComments(stmt)) {
                    return self.next();
                }
                return stmt;
            }

            self.pos += 1;
        }

        // handle trailing statement without semicolon
        const stmt = std.mem.trim(u8, self.sql[start..self.pos], " \t\n\r");
        if (stmt.len == 0 or isOnlyComments(stmt)) {
            return null;
        }
        return stmt;
    }

    fn isOnlyComments(stmt: []const u8) bool {
        var i: usize = 0;
        while (i < stmt.len) {
            const c = stmt[i];
            if (c == ' ' or c == '\t' or c == '\n' or c == '\r') {
                i += 1;
                continue;
            }
            if (c == '-' and i + 1 < stmt.len and stmt[i + 1] == '-') {
                // skip to end of line
                while (i < stmt.len and stmt[i] != '\n') i += 1;
                continue;
            }
            // found non-whitespace, non-comment
            return false;
        }
        return true;
    }
};

fn getIsoTimestamp(allocator: Allocator) ![]const u8 {
    const ts = std.time.timestamp();
    const epoch_seconds = std.time.epoch.EpochSeconds{ .secs = @intCast(ts) };
    const day_seconds = epoch_seconds.getDaySeconds();
    const year_day = epoch_seconds.getEpochDay().calculateYearDay();
    const month_day = year_day.calculateMonthDay();

    return std.fmt.allocPrint(allocator, "{d:0>4}-{d:0>2}-{d:0>2}T{d:0>2}:{d:0>2}:{d:0>2}Z", .{
        year_day.year,
        month_day.month.numeric(), // Month is already 1-based
        month_day.day_index + 1,
        day_seconds.getHoursIntoDay(),
        day_seconds.getMinutesIntoHour(),
        day_seconds.getSecondsIntoMinute(),
    });
}

// tests

test "statement iterator basic" {
    const sql = "CREATE TABLE a (id INT); CREATE TABLE b (id INT);";
    var iter = StatementIterator.init(sql);

    const stmt1 = iter.next().?;
    try std.testing.expectEqualStrings("CREATE TABLE a (id INT)", stmt1);

    const stmt2 = iter.next().?;
    try std.testing.expectEqualStrings("CREATE TABLE b (id INT)", stmt2);

    try std.testing.expect(iter.next() == null);
}

test "statement iterator with comments" {
    const sql =
        \\-- this is a comment
        \\CREATE TABLE a (id INT);
        \\/* block comment */
        \\CREATE TABLE b (id INT);
    ;
    var iter = StatementIterator.init(sql);

    const stmt1 = iter.next().?;
    try std.testing.expect(std.mem.indexOf(u8, stmt1, "CREATE TABLE a") != null);

    const stmt2 = iter.next().?;
    try std.testing.expect(std.mem.indexOf(u8, stmt2, "CREATE TABLE b") != null);

    try std.testing.expect(iter.next() == null);
}

test "statement iterator with string literals" {
    const sql = "INSERT INTO t VALUES ('hello; world'); SELECT 1;";
    var iter = StatementIterator.init(sql);

    const stmt1 = iter.next().?;
    try std.testing.expect(std.mem.indexOf(u8, stmt1, "'hello; world'") != null);

    const stmt2 = iter.next().?;
    try std.testing.expectEqualStrings("SELECT 1", stmt2);

    try std.testing.expect(iter.next() == null);
}
```
+259
src/db/migrations/001_initial/postgres.sql
```sql
-- 001_initial: full schema for postgres
-- applied: initial release

-- flow table
CREATE TABLE IF NOT EXISTS flow (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    updated TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    name TEXT NOT NULL UNIQUE,
    tags JSONB DEFAULT '[]'
);

-- flow_run table
CREATE TABLE IF NOT EXISTS flow_run (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    updated TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    flow_id TEXT REFERENCES flow(id),
    name TEXT NOT NULL,
    parameters JSONB DEFAULT '{}',
    tags JSONB DEFAULT '[]',
    state_id TEXT,
    state_type TEXT,
    state_name TEXT,
    state_timestamp TEXT,
    run_count BIGINT DEFAULT 0,
    expected_start_time TEXT,
    next_scheduled_start_time TEXT,
    start_time TEXT,
    end_time TEXT,
    total_run_time DOUBLE PRECISION DEFAULT 0.0,
    deployment_id TEXT,
    deployment_version TEXT,
    work_queue_name TEXT,
    work_queue_id TEXT,
    auto_scheduled INTEGER DEFAULT 0,
    idempotency_key TEXT,
    empirical_policy JSONB DEFAULT '{}'
);

-- flow_run_state table
CREATE TABLE IF NOT EXISTS flow_run_state (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    flow_run_id TEXT REFERENCES flow_run(id) ON DELETE CASCADE,
    type TEXT NOT NULL,
    name TEXT NOT NULL,
    timestamp TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"')
);

-- task_run table
CREATE TABLE IF NOT EXISTS task_run (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    updated TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    flow_run_id TEXT REFERENCES flow_run(id),
    name TEXT NOT NULL,
    task_key TEXT NOT NULL,
    dynamic_key TEXT NOT NULL,
    cache_key TEXT,
    tags JSONB DEFAULT '[]',
    state_id TEXT,
    state_type TEXT,
    state_name TEXT,
    state_timestamp TEXT,
    run_count BIGINT DEFAULT 0,
    expected_start_time TEXT,
    start_time TEXT,
    end_time TEXT,
    total_run_time DOUBLE PRECISION DEFAULT 0.0
);

-- events table
CREATE TABLE IF NOT EXISTS events (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    updated TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    occurred TEXT NOT NULL,
    event TEXT NOT NULL,
    resource_id TEXT NOT NULL,
    resource JSONB NOT NULL DEFAULT '{}',
    related_resource_ids JSONB DEFAULT '[]',
    related JSONB DEFAULT '[]',
    payload JSONB DEFAULT '{}',
    received TEXT NOT NULL,
    recorded TEXT NOT NULL,
    follows TEXT
);

-- block_type table
CREATE TABLE IF NOT EXISTS block_type (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    updated TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    name TEXT NOT NULL,
    slug TEXT NOT NULL UNIQUE,
    logo_url TEXT,
    documentation_url TEXT,
    description TEXT,
    code_example TEXT,
    is_protected INTEGER DEFAULT 0
);

-- block_schema table
CREATE TABLE IF NOT EXISTS block_schema (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    updated TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    checksum TEXT NOT NULL,
    fields JSONB NOT NULL DEFAULT '{}',
    capabilities JSONB NOT NULL DEFAULT '[]',
    version TEXT NOT NULL DEFAULT '1',
    block_type_id TEXT NOT NULL REFERENCES block_type(id) ON DELETE CASCADE,
    UNIQUE(checksum, version)
);

-- block_document table
CREATE TABLE IF NOT EXISTS block_document (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    updated TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    name TEXT,
    data JSONB NOT NULL DEFAULT '{}',
    is_anonymous INTEGER DEFAULT 0,
    block_type_id TEXT NOT NULL REFERENCES block_type(id),
    block_type_name TEXT,
    block_schema_id TEXT NOT NULL REFERENCES block_schema(id),
    UNIQUE(block_type_id, name)
);

-- variable table
CREATE TABLE IF NOT EXISTS variable (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    updated TEXT DEFAULT TO_CHAR(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"'),
    name TEXT NOT NULL UNIQUE,
    value JSONB DEFAULT 'null',
    tags JSONB DEFAULT '[]'
);

-- work_pool table
CREATE TABLE IF NOT EXISTS work_pool (
    id TEXT PRIMARY KEY,
    created TEXT NOT NULL,
    updated TEXT NOT NULL,
    name TEXT NOT NULL UNIQUE,
    description TEXT,
    type TEXT NOT NULL DEFAULT 'process',
    base_job_template JSONB DEFAULT '{}',
    is_paused INTEGER DEFAULT 0,
    concurrency_limit BIGINT,
    default_queue_id TEXT,
    status TEXT DEFAULT 'NOT_READY'
);

-- work_queue table
CREATE TABLE IF NOT EXISTS work_queue (
    id TEXT PRIMARY KEY,
    created TEXT NOT NULL,
    updated TEXT NOT NULL,
    name TEXT NOT NULL,
    description TEXT DEFAULT '',
    is_paused INTEGER DEFAULT 0,
    concurrency_limit BIGINT,
    priority BIGINT DEFAULT 1,
    work_pool_id TEXT NOT NULL REFERENCES work_pool(id) ON DELETE CASCADE,
    last_polled TEXT,
    status TEXT DEFAULT 'NOT_READY',
    UNIQUE(work_pool_id, name)
);

-- worker table
CREATE TABLE IF NOT EXISTS worker (
    id TEXT PRIMARY KEY,
    created TEXT NOT NULL,
    updated TEXT NOT NULL,
    name TEXT NOT NULL,
    work_pool_id TEXT NOT NULL REFERENCES work_pool(id) ON DELETE CASCADE,
    last_heartbeat_time TEXT,
    heartbeat_interval_seconds BIGINT,
    status TEXT DEFAULT 'OFFLINE',
    UNIQUE(work_pool_id, name)
);

-- deployment table
CREATE TABLE IF NOT EXISTS deployment (
    id TEXT PRIMARY KEY,
    created TEXT NOT NULL,
    updated TEXT NOT NULL,
    name TEXT NOT NULL,
    flow_id TEXT NOT NULL REFERENCES flow(id) ON DELETE CASCADE,
    version TEXT,
    description TEXT,
    paused INTEGER DEFAULT 0,
    status TEXT DEFAULT 'NOT_READY',
    last_polled TEXT,
    parameters JSONB DEFAULT '{}',
    parameter_openapi_schema TEXT,
    enforce_parameter_schema INTEGER DEFAULT 1,
    tags JSONB DEFAULT '[]',
    labels JSONB DEFAULT '{}',
    path TEXT,
    entrypoint TEXT,
    job_variables JSONB DEFAULT '{}',
    pull_steps TEXT,
    work_pool_name TEXT,
    work_queue_name TEXT,
    work_queue_id TEXT REFERENCES work_queue(id) ON DELETE SET NULL,
    storage_document_id TEXT,
    infrastructure_document_id TEXT,
    concurrency_limit BIGINT,
    UNIQUE(flow_id, name)
);

-- deployment_schedule table
CREATE TABLE IF NOT EXISTS deployment_schedule (
    id TEXT PRIMARY KEY,
    created TEXT NOT NULL,
    updated TEXT NOT NULL,
    deployment_id TEXT NOT NULL REFERENCES deployment(id) ON DELETE CASCADE,
    schedule TEXT NOT NULL,
    active INTEGER DEFAULT 1,
    max_scheduled_runs BIGINT,
    parameters JSONB DEFAULT '{}',
    slug TEXT,
    UNIQUE(deployment_id, slug)
);

-- indexes
CREATE INDEX IF NOT EXISTS ix_flow_run__state_type ON flow_run(state_type);
CREATE INDEX IF NOT EXISTS ix_flow_run__flow_id ON flow_run(flow_id);
CREATE INDEX IF NOT EXISTS ix_flow_run__next_scheduled_start_time ON flow_run(next_scheduled_start_time);
CREATE UNIQUE INDEX IF NOT EXISTS uq_flow_run__flow_id_idempotency_key ON flow_run(flow_id, idempotency_key) WHERE idempotency_key IS NOT NULL;
CREATE INDEX IF NOT EXISTS ix_flow_run_state__flow_run_id ON flow_run_state(flow_run_id);
CREATE INDEX IF NOT EXISTS ix_task_run__flow_run_id ON task_run(flow_run_id);
CREATE INDEX IF NOT EXISTS ix_task_run__task_key_dynamic_key ON task_run(task_key, dynamic_key);
CREATE INDEX IF NOT EXISTS ix_events__occurred ON events(occurred);
CREATE INDEX IF NOT EXISTS ix_events__event__id ON events(event, id);
CREATE INDEX IF NOT EXISTS ix_events__event_resource_id_occurred ON events(event, resource_id, occurred);
CREATE INDEX IF NOT EXISTS ix_events__occurred_id ON events(occurred, id);
CREATE INDEX IF NOT EXISTS ix_block_schema__block_type_id ON block_schema(block_type_id);
CREATE INDEX IF NOT EXISTS ix_block_schema__checksum ON block_schema(checksum);
CREATE INDEX IF NOT EXISTS ix_block_document__block_type_id ON block_document(block_type_id);
CREATE INDEX IF NOT EXISTS ix_block_document__block_schema_id ON block_document(block_schema_id);
CREATE INDEX IF NOT EXISTS ix_block_document__name ON block_document(name);
CREATE INDEX IF NOT EXISTS ix_variable__name ON variable(name);
CREATE INDEX IF NOT EXISTS ix_work_pool__name ON work_pool(name);
CREATE INDEX IF NOT EXISTS ix_work_pool__type ON work_pool(type);
CREATE INDEX IF NOT EXISTS ix_work_queue__work_pool_id ON work_queue(work_pool_id);
CREATE INDEX IF NOT EXISTS ix_work_queue__priority ON work_queue(priority);
CREATE INDEX IF NOT EXISTS ix_worker__work_pool_id ON worker(work_pool_id);
CREATE INDEX IF NOT EXISTS ix_flow_run__deployment_id ON flow_run(deployment_id);
CREATE INDEX IF NOT EXISTS ix_deployment__flow_id ON deployment(flow_id);
CREATE INDEX IF NOT EXISTS ix_deployment__work_queue_id ON deployment(work_queue_id);
CREATE INDEX IF NOT EXISTS ix_deployment__work_pool_name ON deployment(work_pool_name);
CREATE INDEX IF NOT EXISTS ix_deployment__created ON deployment(created);
CREATE INDEX IF NOT EXISTS ix_deployment__updated ON deployment(updated);
CREATE INDEX IF NOT EXISTS ix_deployment_schedule__deployment_id ON deployment_schedule(deployment_id);
CREATE INDEX IF NOT EXISTS ix_deployment_schedule__active ON deployment_schedule(active);
```
+259
src/db/migrations/001_initial/sqlite.sql
```sql
-- 001_initial: full schema for sqlite
-- applied: initial release

-- flow table
CREATE TABLE IF NOT EXISTS flow (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    updated TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    name TEXT NOT NULL UNIQUE,
    tags TEXT DEFAULT '[]'
);

-- flow_run table
CREATE TABLE IF NOT EXISTS flow_run (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    updated TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    flow_id TEXT REFERENCES flow(id),
    name TEXT NOT NULL,
    parameters TEXT DEFAULT '{}',
    tags TEXT DEFAULT '[]',
    state_id TEXT,
    state_type TEXT,
    state_name TEXT,
    state_timestamp TEXT,
    run_count INTEGER DEFAULT 0,
    expected_start_time TEXT,
    next_scheduled_start_time TEXT,
    start_time TEXT,
    end_time TEXT,
    total_run_time REAL DEFAULT 0.0,
    deployment_id TEXT,
    deployment_version TEXT,
    work_queue_name TEXT,
    work_queue_id TEXT,
    auto_scheduled INTEGER DEFAULT 0,
    idempotency_key TEXT,
    empirical_policy TEXT DEFAULT '{}'
);

-- flow_run_state table
CREATE TABLE IF NOT EXISTS flow_run_state (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    flow_run_id TEXT REFERENCES flow_run(id) ON DELETE CASCADE,
    type TEXT NOT NULL,
    name TEXT NOT NULL,
    timestamp TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now'))
);

-- task_run table
CREATE TABLE IF NOT EXISTS task_run (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    updated TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    flow_run_id TEXT REFERENCES flow_run(id),
    name TEXT NOT NULL,
    task_key TEXT NOT NULL,
    dynamic_key TEXT NOT NULL,
    cache_key TEXT,
    tags TEXT DEFAULT '[]',
    state_id TEXT,
    state_type TEXT,
    state_name TEXT,
    state_timestamp TEXT,
    run_count INTEGER DEFAULT 0,
    expected_start_time TEXT,
    start_time TEXT,
    end_time TEXT,
    total_run_time REAL DEFAULT 0.0
);

-- events table
CREATE TABLE IF NOT EXISTS events (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    updated TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    occurred TEXT NOT NULL,
    event TEXT NOT NULL,
    resource_id TEXT NOT NULL,
    resource TEXT NOT NULL DEFAULT '{}',
    related_resource_ids TEXT DEFAULT '[]',
    related TEXT DEFAULT '[]',
    payload TEXT DEFAULT '{}',
    received TEXT NOT NULL,
    recorded TEXT NOT NULL,
    follows TEXT
);

-- block_type table
CREATE TABLE IF NOT EXISTS block_type (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    updated TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    name TEXT NOT NULL,
    slug TEXT NOT NULL UNIQUE,
    logo_url TEXT,
    documentation_url TEXT,
    description TEXT,
    code_example TEXT,
    is_protected INTEGER DEFAULT 0
);

-- block_schema table
CREATE TABLE IF NOT EXISTS block_schema (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    updated TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    checksum TEXT NOT NULL,
    fields TEXT NOT NULL DEFAULT '{}',
    capabilities TEXT NOT NULL DEFAULT '[]',
    version TEXT NOT NULL DEFAULT '1',
    block_type_id TEXT NOT NULL REFERENCES block_type(id) ON DELETE CASCADE,
    UNIQUE(checksum, version)
);

-- block_document table
CREATE TABLE IF NOT EXISTS block_document (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    updated TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    name TEXT,
    data TEXT NOT NULL DEFAULT '{}',
    is_anonymous INTEGER DEFAULT 0,
    block_type_id TEXT NOT NULL REFERENCES block_type(id),
    block_type_name TEXT,
    block_schema_id TEXT NOT NULL REFERENCES block_schema(id),
    UNIQUE(block_type_id, name)
);

-- variable table
CREATE TABLE IF NOT EXISTS variable (
    id TEXT PRIMARY KEY,
    created TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    updated TEXT DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ', 'now')),
    name TEXT NOT NULL UNIQUE,
    value TEXT DEFAULT 'null',
    tags TEXT DEFAULT '[]'
);

-- work_pool table
CREATE TABLE IF NOT EXISTS work_pool (
    id TEXT PRIMARY KEY,
    created TEXT NOT NULL,
    updated TEXT NOT NULL,
    name TEXT NOT NULL UNIQUE,
    description TEXT,
    type TEXT NOT NULL DEFAULT 'process',
    base_job_template TEXT DEFAULT '{}',
    is_paused INTEGER DEFAULT 0,
    concurrency_limit INTEGER,
    default_queue_id TEXT,
    status TEXT DEFAULT 'NOT_READY'
);

-- work_queue table
CREATE TABLE IF NOT EXISTS work_queue (
    id TEXT PRIMARY KEY,
    created TEXT NOT NULL,
    updated TEXT NOT NULL,
    name TEXT NOT NULL,
    description TEXT DEFAULT '',
    is_paused INTEGER DEFAULT 0,
    concurrency_limit INTEGER,
    priority INTEGER DEFAULT 1,
    work_pool_id TEXT NOT NULL REFERENCES work_pool(id) ON DELETE CASCADE,
    last_polled TEXT,
    status TEXT DEFAULT 'NOT_READY',
    UNIQUE(work_pool_id, name)
);

-- worker table
CREATE TABLE IF NOT EXISTS worker (
    id TEXT PRIMARY KEY,
    created TEXT NOT NULL,
    updated TEXT NOT NULL,
    name TEXT NOT NULL,
    work_pool_id TEXT NOT NULL REFERENCES work_pool(id) ON DELETE CASCADE,
    last_heartbeat_time TEXT,
    heartbeat_interval_seconds INTEGER,
    status TEXT DEFAULT 'OFFLINE',
    UNIQUE(work_pool_id, name)
);
```
184 + 185 + -- deployment table 186 + CREATE TABLE IF NOT EXISTS deployment ( 187 + id TEXT PRIMARY KEY, 188 + created TEXT NOT NULL, 189 + updated TEXT NOT NULL, 190 + name TEXT NOT NULL, 191 + flow_id TEXT NOT NULL REFERENCES flow(id) ON DELETE CASCADE, 192 + version TEXT, 193 + description TEXT, 194 + paused INTEGER DEFAULT 0, 195 + status TEXT DEFAULT 'NOT_READY', 196 + last_polled TEXT, 197 + parameters TEXT DEFAULT '{}', 198 + parameter_openapi_schema TEXT, 199 + enforce_parameter_schema INTEGER DEFAULT 1, 200 + tags TEXT DEFAULT '[]', 201 + labels TEXT DEFAULT '{}', 202 + path TEXT, 203 + entrypoint TEXT, 204 + job_variables TEXT DEFAULT '{}', 205 + pull_steps TEXT, 206 + work_pool_name TEXT, 207 + work_queue_name TEXT, 208 + work_queue_id TEXT REFERENCES work_queue(id) ON DELETE SET NULL, 209 + storage_document_id TEXT, 210 + infrastructure_document_id TEXT, 211 + concurrency_limit INTEGER, 212 + UNIQUE(flow_id, name) 213 + ); 214 + 215 + -- deployment_schedule table 216 + CREATE TABLE IF NOT EXISTS deployment_schedule ( 217 + id TEXT PRIMARY KEY, 218 + created TEXT NOT NULL, 219 + updated TEXT NOT NULL, 220 + deployment_id TEXT NOT NULL REFERENCES deployment(id) ON DELETE CASCADE, 221 + schedule TEXT NOT NULL, 222 + active INTEGER DEFAULT 1, 223 + max_scheduled_runs INTEGER, 224 + parameters TEXT DEFAULT '{}', 225 + slug TEXT, 226 + UNIQUE(deployment_id, slug) 227 + ); 228 + 229 + -- indexes 230 + CREATE INDEX IF NOT EXISTS ix_flow_run__state_type ON flow_run(state_type); 231 + CREATE INDEX IF NOT EXISTS ix_flow_run__flow_id ON flow_run(flow_id); 232 + CREATE INDEX IF NOT EXISTS ix_flow_run__next_scheduled_start_time ON flow_run(next_scheduled_start_time); 233 + CREATE UNIQUE INDEX IF NOT EXISTS uq_flow_run__flow_id_idempotency_key ON flow_run(flow_id, idempotency_key) WHERE idempotency_key IS NOT NULL; 234 + CREATE INDEX IF NOT EXISTS ix_flow_run_state__flow_run_id ON flow_run_state(flow_run_id); 235 + CREATE INDEX IF NOT EXISTS ix_task_run__flow_run_id 
ON task_run(flow_run_id); 236 + CREATE INDEX IF NOT EXISTS ix_task_run__task_key_dynamic_key ON task_run(task_key, dynamic_key); 237 + CREATE INDEX IF NOT EXISTS ix_events__occurred ON events(occurred); 238 + CREATE INDEX IF NOT EXISTS ix_events__event__id ON events(event, id); 239 + CREATE INDEX IF NOT EXISTS ix_events__event_resource_id_occurred ON events(event, resource_id, occurred); 240 + CREATE INDEX IF NOT EXISTS ix_events__occurred_id ON events(occurred, id); 241 + CREATE INDEX IF NOT EXISTS ix_block_schema__block_type_id ON block_schema(block_type_id); 242 + CREATE INDEX IF NOT EXISTS ix_block_schema__checksum ON block_schema(checksum); 243 + CREATE INDEX IF NOT EXISTS ix_block_document__block_type_id ON block_document(block_type_id); 244 + CREATE INDEX IF NOT EXISTS ix_block_document__block_schema_id ON block_document(block_schema_id); 245 + CREATE INDEX IF NOT EXISTS ix_block_document__name ON block_document(name); 246 + CREATE INDEX IF NOT EXISTS ix_variable__name ON variable(name); 247 + CREATE INDEX IF NOT EXISTS ix_work_pool__name ON work_pool(name); 248 + CREATE INDEX IF NOT EXISTS ix_work_pool__type ON work_pool(type); 249 + CREATE INDEX IF NOT EXISTS ix_work_queue__work_pool_id ON work_queue(work_pool_id); 250 + CREATE INDEX IF NOT EXISTS ix_work_queue__priority ON work_queue(priority); 251 + CREATE INDEX IF NOT EXISTS ix_worker__work_pool_id ON worker(work_pool_id); 252 + CREATE INDEX IF NOT EXISTS ix_flow_run__deployment_id ON flow_run(deployment_id); 253 + CREATE INDEX IF NOT EXISTS ix_deployment__flow_id ON deployment(flow_id); 254 + CREATE INDEX IF NOT EXISTS ix_deployment__work_queue_id ON deployment(work_queue_id); 255 + CREATE INDEX IF NOT EXISTS ix_deployment__work_pool_name ON deployment(work_pool_name); 256 + CREATE INDEX IF NOT EXISTS ix_deployment__created ON deployment(created); 257 + CREATE INDEX IF NOT EXISTS ix_deployment__updated ON deployment(updated); 258 + CREATE INDEX IF NOT EXISTS ix_deployment_schedule__deployment_id ON 
deployment_schedule(deployment_id); 259 + CREATE INDEX IF NOT EXISTS ix_deployment_schedule__active ON deployment_schedule(active);
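
The matching `migrations/001_initial/postgres.sql` is listed in the commit message but not shown in this hunk. As a hedged illustration only, the postgres variant presumably swaps the sqlite idioms (TEXT timestamps via `strftime`, INTEGER booleans, TEXT-encoded JSON) for native postgres types; the actual file may differ. A sketch of the `flow` table above in that dialect:

```sql
-- hypothetical postgres counterpart of the sqlite `flow` table above;
-- the real migrations/001_initial/postgres.sql may make different choices
CREATE TABLE IF NOT EXISTS flow (
    id TEXT PRIMARY KEY,
    created TIMESTAMPTZ DEFAULT now(),  -- vs TEXT + strftime in sqlite
    updated TIMESTAMPTZ DEFAULT now(),
    name TEXT NOT NULL UNIQUE,
    tags JSONB DEFAULT '[]'            -- vs TEXT-encoded JSON in sqlite
);
```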
+26
src/db/migrations_data.zig
··· 1 + // migrations_data.zig - manually maintained migration list 2 + // 3 + // when adding a new migration: 4 + // 1. create migrations/NNN_description/ directory 5 + // 2. add sqlite.sql and postgres.sql files 6 + // 3. add entry to `all` array below 7 + 8 + pub const Migration = struct { 9 + id: []const u8, 10 + sqlite_sql: []const u8, 11 + postgres_sql: []const u8, 12 + }; 13 + 14 + pub const all = [_]Migration{ 15 + .{ 16 + .id = "001_initial", 17 + .sqlite_sql = @embedFile("migrations/001_initial/sqlite.sql"), 18 + .postgres_sql = @embedFile("migrations/001_initial/postgres.sql"), 19 + }, 20 + // add new migrations here: 21 + // .{ 22 + // .id = "002_add_feature", 23 + // .sqlite_sql = @embedFile("migrations/002_add_feature/sqlite.sql"), 24 + // .postgres_sql = @embedFile("migrations/002_add_feature/postgres.sql"), 25 + // }, 26 + };
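
The registry above is consumed by the runner in `src/db/migrate.zig`, which this diff does not show. A minimal sketch of how `applyMigrations` might walk the `all` array, assuming the `backend.db.dialect` switch seen elsewhere in this diff and hypothetical `exec`/`alreadyApplied`/`recordApplied` helpers:

```zig
// sketch only — the real migrate.zig may differ; exec, alreadyApplied,
// and recordApplied are assumed helper names, not confirmed API
const migrations = @import("migrations_data.zig");
const backend = @import("backend.zig");

pub fn applyMigrations() !void {
    // ensure the tracking table exists before checking anything
    try backend.exec(
        \\CREATE TABLE IF NOT EXISTS _migrations (
        \\  id TEXT PRIMARY KEY,
        \\  applied_at TEXT NOT NULL
        \\)
    );
    for (migrations.all) |m| {
        if (try alreadyApplied(m.id)) continue; // idempotent: skip done work
        const sql = switch (backend.db.dialect) {
            .sqlite => m.sqlite_sql,
            .postgres => m.postgres_sql,
        };
        try execStatements(sql); // split script into statements, run each
        try recordApplied(m.id); // insert row into _migrations
    }
}
```

Because `all` is ordered and applied rows are skipped, re-running the server against an already-migrated database is a no-op, which matches the "safe to run repeatedly" claim in docs/migrations.md.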
+3 -11
src/db/sqlite.zig
··· 3 3 const Thread = std.Thread; 4 4 const log = @import("../logging.zig"); 5 5 const backend = @import("backend.zig"); 6 - const sqlite_schema = @import("schema/sqlite.zig"); 7 - const postgres_schema = @import("schema/postgres.zig"); 6 + const migrate = @import("migrate.zig"); 8 7 9 8 // sub-modules 10 9 pub const flows = @import("flows.zig"); ··· 70 69 }, 71 70 } 72 71 73 - try initSchema(); 72 + // apply migrations (creates/updates schema) 73 + try migrate.applyMigrations(); 74 74 } 75 75 76 76 pub fn close() void { 77 77 backend.close(); 78 78 } 79 - 80 - fn initSchema() !void { 81 - // Dispatch to dialect-specific schema initialization 82 - switch (backend.db.dialect) { 83 - .sqlite => try sqlite_schema.init(), 84 - .postgres => try postgres_schema.init(), 85 - } 86 - }
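
The commit message describes `src/db/migrate.zig` as a "migration runner with statement parser", but the parser itself is not in this diff. One plausible sketch, assuming line-oriented splitting on trailing semicolons and a `backend.exec` entry point (both assumptions — the real parser may also handle semicolons inside string literals or triggers):

```zig
const std = @import("std");
const backend = @import("backend.zig");

// Hypothetical statement splitter: strips `--` comment lines, accumulates
// text until a statement-terminating ';', and executes each statement.
fn applyScript(allocator: std.mem.Allocator, sql: []const u8) !void {
    var stmt = std.ArrayList(u8).init(allocator);
    defer stmt.deinit();
    var lines = std.mem.splitScalar(u8, sql, '\n');
    while (lines.next()) |line| {
        const trimmed = std.mem.trim(u8, line, " \t\r");
        if (trimmed.len == 0 or std.mem.startsWith(u8, trimmed, "--")) continue;
        try stmt.appendSlice(trimmed);
        try stmt.append(' ');
        if (std.mem.endsWith(u8, trimmed, ";")) {
            try backend.exec(stmt.items);
            stmt.clearRetainingCapacity();
        }
    }
}
```

This line-based approach is enough for the DDL in 001_initial (every statement there ends its final line with `;`), but a production parser would need care around multi-statement constructs such as `CREATE TRIGGER ... BEGIN ... END;`.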