zicklag.dev/atproto-plc (forked from smokesignal.events/atproto-plc)
Rust and WASM did-method-plc tools and structures
feature: full chain validation with support for forks
Nick Gerakines · 4 months ago
32d3e814 2b0247ad · +2280 −211 · 9 changed files
Changed files:
- Cargo.toml
- src/bin/README.md
- src/bin/plc-audit.rs
- src/bin/plc-fork-viz.rs
- src/lib.rs
- src/validation.rs
- wasm/package-lock.json
- wasm/package.json
- wasm/plc-audit.js
Cargo.toml  +6 −1

···
 [package]
 name = "atproto-plc"
-version = "0.1.0"
+version = "0.2.0"
 authors = ["Nick Gerakines <nick.gerakines@gmail.com>"]
 edition = "2024"
 rust-version = "1.90.0"
···
 [[bin]]
 name = "plc-audit"
 path = "src/bin/plc-audit.rs"
+required-features = ["cli"]
+
+[[bin]]
+name = "plc-fork-viz"
+path = "src/bin/plc-fork-viz.rs"
 required-features = ["cli"]
 
 [profile.release]
src/bin/README.md  +225 −125

···
-# plc-audit: DID Audit Log Validator
+# AT Protocol PLC Command-Line Tools
+
+Command-line tools for working with AT Protocol PLC (Public Ledger of Credentials) DIDs.
+
+## Tools
 
-A command-line tool for fetching and validating DID audit logs from plc.directory.
+### plc-audit: DID Audit Log Validator
 
-## Features
+Validates the cryptographic integrity of a DID's audit log from plc.directory.
+
+#### Features
 
 - 🔍 **Fetch Audit Logs**: Retrieves complete operation history from plc.directory
 - 🔐 **Cryptographic Validation**: Verifies all signatures using rotation keys
···
 - 📊 **Detailed Output**: Shows operation history and final DID state
 - ⚡ **Fast & Reliable**: Built with Rust for performance and safety
 
-## Installation
+#### Usage
 
 ```bash
-cargo build --release --features cli --bin plc-audit
+# Basic validation
+cargo run --bin plc-audit --features cli -- did:plc:z72i7hdynmk6r22z27h6tvur
+
+# Verbose mode (show all operations)
+cargo run --bin plc-audit --features cli -- did:plc:z72i7hdynmk6r22z27h6tvur --verbose
+
+# Quiet mode (VALID/INVALID only)
+cargo run --bin plc-audit --features cli -- did:plc:z72i7hdynmk6r22z27h6tvur --quiet
+
+# Custom PLC directory
+cargo run --bin plc-audit --features cli -- did:plc:example --plc-url https://custom.plc.directory
 ```
 
-The binary will be available at `./target/release/plc-audit`.
+---
 
-## Usage
+### plc-fork-viz: Fork Visualizer
 
-### Basic Validation
+Visualizes forks in a DID's operation chain, showing which operations won/lost based on rotation key priority and the 72-hour recovery window.
 
-Validate a DID and show detailed output:
+#### Features
+
+- 🔀 **Fork Detection**: Identifies competing operations in the chain
+- 🔐 **Priority Analysis**: Determines which rotation key signed each operation
+- ⏱️ **Recovery Window**: Applies 72-hour recovery window rules
+- 📊 **Multiple Formats**: Tree, JSON, and Markdown visualization
+- 🎨 **Color Coding**: Green for canonical operations, red for rejected
+
+#### Usage
 
 ```bash
-plc-audit did:plc:z72i7hdynmk6r22z27h6tvur
+# Basic fork visualization
+cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz
+
+# Verbose mode (detailed operation info)
+cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz --verbose
+
+# Show timestamps and recovery window calculations
+cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz --timing
+
+# Show full DIDs/CIDs
+cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz --full-ids
+
+# Output as JSON
+cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz --format json
+
+# Output as Markdown
+cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz --format markdown
 ```
 
-### Verbose Mode
+#### Output Formats
 
-Show all operations in the audit log:
+**Tree Format** (default):
+```
+Fork at operation referencing bafyre...abc123
+├─ 🔴 ✗ CID: bafyre...def456
+│    Signed by: rotation_key[1]
+│    Reason: Invalidated by higher-priority key[0] within recovery window
+└─ 🟢 ✓ CID: bafyre...ghi789
+     Signed by: rotation_key[0]
+     Status: CANONICAL (winner)
+```
 
-```bash
-plc-audit did:plc:z72i7hdynmk6r22z27h6tvur --verbose
+**JSON Format**:
+```json
+[
+  {
+    "prev_cid": "bafyre...",
+    "winner_cid": "bafyre...",
+    "operations": [...]
+  }
+]
 ```
 
-Output includes:
-- Operation index and CID
-- Creation timestamp
-- Operation type (Genesis/Update)
-- Previous operation reference
+**Markdown Format**:
+| Status | CID | Key Index | Timestamp | Reason |
+|--------|-----|-----------|-----------|--------|
+| ✅ Winner | bafyre...ghi789 | 0 | 2025-01-15 14:30:00 | Canonical operation |
+| ❌ Rejected | bafyre...def456 | 1 | 2025-01-15 10:00:00 | Invalidated by higher-priority key[0] |
 
-### Quiet Mode
+#### Fork Resolution Rules
 
-Only show validation result (useful for scripts):
+1. **Rotation Key Priority**: Keys are ordered by array index (0 = highest priority)
+2. **Recovery Window**: 72 hours from the first operation's timestamp
+3. **First-Received Default**: The operation received first wins unless invalidated
+4. **Higher Priority Override**: A higher-priority key can invalidate if:
+   - It arrives within 72 hours
+   - Its key index is lower (e.g., key[0] beats key[1])
+
+#### Example: No Forks
 
 ```bash
-plc-audit did:plc:z72i7hdynmk6r22z27h6tvur --quiet
-```
+$ cargo run --bin plc-fork-viz --features cli -- did:plc:ewvi7nxzyoun6zhxrhs64oiz
 
-Output: `✅ VALID` or error message
+🔍 Analyzing forks in: did:plc:ewvi7nxzyoun6zhxrhs64oiz
+   Source: https://plc.directory
 
-### Custom PLC Directory
+📊 Audit log contains 5 operations
+
+✅ No forks detected - this is a linear operation chain
+   All operations form a single canonical path from genesis to tip.
+```
 
-Use a custom PLC directory server:
+#### Example: Fork Detected
 
 ```bash
-plc-audit did:plc:example --plc-url https://custom.plc.directory
-```
+$ cargo run --bin plc-fork-viz --features cli -- did:plc:z7x2k3j4m5n6 --timing
 
-## What is Validated?
+🔍 Analyzing forks in: did:plc:z7x2k3j4m5n6
+   Source: https://plc.directory
 
-The tool performs comprehensive validation:
+📊 Audit log contains 8 operations
+⚠️  Detected 1 fork point(s)
 
-1. **DID Format Validation**
-   - Checks prefix is `did:plc:`
-   - Verifies identifier is exactly 24 characters
-   - Ensures only valid base32 characters (a-z, 2-7)
+📊 Fork Visualization (Tree Format)
+═══════════════════════════════════════════════════════════════
 
-2. **Chain Linkage Verification**
-   - First operation must be genesis (prev = null)
-   - Each subsequent operation's `prev` field must match previous operation's CID
-   - No breaks in the chain
+Fork at operation referencing bafyre...abc123
+├─ 🔴 ✗ CID: bafyre...def456
+│    Signed by: rotation_key[1]
+│    Timestamp: 2025-01-15 10:00:00 UTC
+│    Reason: Invalidated by higher-priority key[0] within recovery window
+│
+└─ 🟢 ✓ CID: bafyre...ghi789
+│    Signed by: rotation_key[0]
+│    Timestamp: 2025-01-15 14:00:00 UTC
+│    Status: CANONICAL (winner)
 
-3. **Cryptographic Signature Verification**
-   - Each operation's signature is verified using rotation keys
-   - Genesis operation establishes initial rotation keys
-   - Later operations can rotate keys
+═══════════════════════════════════════════════════════════════
+📈 Summary:
+   Total operations: 8
+   Fork points: 1
+   Rejected operations: 1
 
-4. **State Consistency**
-   - Final state is extracted and displayed
-   - Shows rotation keys, verification methods, services, and aliases
+🔐 Fork Resolution Details:
+   Fork 1: Winner is bafyre...ghi789 (signed by key[0])
+```
 
-## Exit Codes
+---
 
-- `0`: Validation successful
-- `1`: Validation failed or error occurred
+## Building & Installation
 
-## Example Output
+### Build Both Tools
 
-### Standard Mode
+```bash
+cargo build --bins --features cli --release
+```
 
+### Install to ~/.cargo/bin
+
+```bash
+cargo install --path . --features cli
 ```
-🔍 Fetching audit log for: did:plc:z72i7hdynmk6r22z27h6tvur
-   Source: https://plc.directory
 
-📊 Audit Log Summary:
-   Total operations: 4
-   Genesis operation: bafyreigp6shzy6dlcxuowwoxz7u5nemdrkad2my5zwzpwilcnhih7bw6zm
-   Latest operation: bafyreifn4pkect7nymne3sxkdg7tn7534msyxcjkshmzqtijmn3enyxm3q
+Then use directly:
+```bash
+plc-audit did:plc:ewvi7nxzyoun6zhxrhs64oiz
+plc-fork-viz did:plc:ewvi7nxzyoun6zhxrhs64oiz
+```
 
-🔐 Validating operation chain...
+---
+
+## Common Workflows
+
+### Workflow 1: Validate and Check for Forks
+
+```bash
+# First validate the audit log
+$ plc-audit did:plc:ewvi7nxzyoun6zhxrhs64oiz
 ✅ Validation successful!
 
-📄 Final DID State:
-   Rotation keys: 2
-   [0] did:key:zQ3shhCGUqDKjStzuDxPkTxN6ujddP4RkEKJJouJGRRkaLGbg
-   [1] did:key:zQ3shpKnbdPx3g3CmPf5cRVTPe1HtSwVn5ish3wSnDPQCbLJK
+# Then check for forks
+$ plc-fork-viz did:plc:ewvi7nxzyoun6zhxrhs64oiz
+```
 
-   Verification methods: 1
-   atproto: did:key:zQ3shQo6TF2moaqMTrUZEM1jeuYRQXeHEx4evX9751y2qPqRA
+### Workflow 2: Export Fork Data
 
-   Also known as: 1
-   - at://bsky.app
+```bash
+# Export as JSON for analysis
+$ plc-fork-viz did:plc:ewvi7nxzyoun6zhxrhs64oiz --format json > forks.json
 
-   Services: 1
-   atproto_pds: https://puffball.us-east.host.bsky.network (AtprotoPersonalDataServer)
+# Generate Markdown report
+$ plc-fork-viz did:plc:ewvi7nxzyoun6zhxrhs64oiz --format markdown > FORK_REPORT.md
 ```
 
-### Verbose Mode
+### Workflow 3: Monitor DIDs
+
+```bash
+#!/bin/bash
+# Monitor a DID for changes and forks
+DID="did:plc:ewvi7nxzyoun6zhxrhs64oiz"
 
-Shows detailed operation information:
+# Validate
+if plc-audit $DID --quiet; then
+    echo "✅ DID is valid"
 
+    # Check for forks
+    plc-fork-viz $DID --format json > /tmp/forks.json
+
+    if [ -s /tmp/forks.json ]; then
+        echo "⚠️ Forks detected!"
+        plc-fork-viz $DID
+    fi
+else
+    echo "❌ DID validation failed!"
+    exit 1
+fi
 ```
-📋 Operations:
-  [0] ✅ bafyreigp6shzy6dlcxuowwoxz7u5nemdrkad2my5zwzpwilcnhih7bw6zm - 2023-04-12T04:53:57.057Z
-      Type: Genesis (creates the DID)
-  [1] ✅ bafyreihmuvr3frdvd6vmdhucih277prdcfcezf67lasg5oekxoimnunjoq - 2023-04-12T17:26:46.468Z
-      Type: Update
-      Previous: bafyreigp6shzy6dlcxuowwoxz7u5nemdrkad2my5zwzpwilcnhih7bw6zm
-  ...
-```
+
+---
+
+## Understanding Fork Visualization
+
+### Symbols
+
+- 🌱 Genesis operation (creates the DID)
+- 🟢 ✓ Canonical operation (winner)
+- 🔴 ✗ Rejected operation (lost the fork)
+- ├─ Fork branch (more operations follow)
+- └─ Final fork branch
+
+### Rejection Reasons
+
+1. **"Invalidated by higher-priority key[N] within recovery window"**
+   - A higher-priority rotation key signed a competing operation within 72 hours
 
-## Error Handling
+2. **"Higher-priority key[N] but outside 72-hour recovery window (X hours late)"**
+   - A higher-priority key tried to invalidate but arrived too late
 
-The tool provides clear error messages:
+3. **"Lower-priority key[N] (current winner has key[M])"**
+   - This operation was signed by a lower-priority key and can't override
 
-### Invalid DID Format
-```
-❌ Error: Invalid DID format: DID must be exactly 32 characters, got 18
-   Expected format: did:plc:<24 lowercase base32 characters>
-```
+---
 
-### Network Errors
-```
-❌ Error: Failed to fetch audit log: HTTP error: 404 - Not Found
-```
+## Troubleshooting
 
-### Invalid Signature
-```
-❌ Validation failed: Invalid signature at operation 2
-Error: Signature verification failed
-CID: bafyrei...
-```
+### Error: "Invalid DID format"
+DID must follow format: `did:plc:<24 lowercase base32 characters>`
 
-### Broken Chain
-```
-❌ Validation failed: Chain linkage broken at operation 2
-Expected prev: bafyreiabc...
-Actual prev: bafyreixyz...
-```
+### Error: "Failed to fetch audit log"
+- Check internet connection
+- Verify DID exists on plc.directory
+- Try `--plc-url` for custom PLC directories
 
-## Use Cases
+### No forks detected but expected
+- The DID may have a linear operation chain
+- All operations were submitted sequentially without conflicts
 
-- **Audit DID History**: Review all changes made to a DID over time
-- **Verify DID Integrity**: Ensure a DID hasn't been tampered with
-- **Debug Issues**: Identify problems in operation chains
-- **Monitor DIDs**: Automate validation in scripts or monitoring systems
-- **Security Analysis**: Investigate suspicious DID activity
+---
 
 ## Technical Details
 
-The validator:
-- Uses `reqwest` for HTTP requests to plc.directory
-- Implements cryptographic verification with P-256 and secp256k1
-- Validates ECDSA signatures using the `atproto-plc` library
-- Supports both standard and legacy operation formats
+**plc-audit** validates:
+1. DID format (prefix, length, base32 encoding)
+2. Chain linkage (prev references)
+3. Cryptographic signatures (ECDSA with P-256/secp256k1)
+4. State consistency
 
-## JavaScript/WASM Version
+**plc-fork-viz** implements:
+1. Fork detection (multiple operations with same prev CID)
+2. Rotation key index resolution
+3. Priority-based fork resolution
+4. 72-hour recovery window enforcement
 
-A JavaScript implementation using WebAssembly is also available. See [`wasm/README.md`](../../wasm/README.md) for details.
+Both tools use:
+- `reqwest` for HTTP requests
+- `atproto-plc` library for cryptographic operations
+- `clap` for command-line parsing
 
-### Quick Start
+---
 
-```bash
-# Build WASM module
-cd wasm && ./build.sh
+## See Also
 
-# Run the tool
-node plc-audit.js did:plc:z72i7hdynmk6r22z27h6tvur --verbose
-```
+- [AT Protocol PLC Specification](https://atproto.com/specs/did-plc)
+- [Fork Resolution Implementation Report](../../FORK_RESOLUTION_REPORT.md)
+- [Implementation Summary](../../IMPLEMENTATION_SUMMARY.md)
+- [Library Documentation](../../README.md)
 
-The JavaScript version provides the same functionality and output format as the Rust binary, making it suitable for:
-- Cross-platform deployment (runs anywhere with Node.js)
-- Web applications and browser extensions
-- Integration with JavaScript/TypeScript projects
-- Smaller binary size (~200KB WASM vs 1.5MB native)
+---
 
 ## License
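The "Fork Resolution Rules" in the README above reduce to a single comparison: a competing operation displaces the current winner only when its rotation key sits at a lower (higher-priority) index *and* it lands inside the 72-hour recovery window. A minimal stand-alone sketch of that rule, using a hypothetical `challenger_wins` helper that is not part of this diff:

```rust
/// Sketch of the fork-resolution rule described in the README (illustrative,
/// not the library's actual API): a challenger displaces the current winner
/// only if its rotation key has a lower index (higher priority) AND it
/// arrives within the 72-hour recovery window. `hours_late` counts hours
/// since the window opened.
fn challenger_wins(winner_key_index: usize, challenger_key_index: usize, hours_late: u64) -> bool {
    challenger_key_index < winner_key_index && hours_late <= 72
}

fn main() {
    // key[0] beats key[1] inside the window
    assert!(challenger_wins(1, 0, 10));
    // the same key arriving 80 hours late is rejected
    assert!(!challenger_wins(1, 0, 80));
    // a lower-priority key never displaces the winner
    assert!(!challenger_wins(0, 1, 1));
}
```

The real tools additionally break ties among equal-priority operations by arrival order (rule 3 above); this sketch only captures the priority and window checks.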
src/bin/plc-audit.rs  +189 −46

···
 //! This binary fetches DID audit logs from plc.directory and validates
 //! each operation cryptographically to ensure the chain is valid.
 
-use atproto_plc::{Did, Operation};
+use atproto_plc::{Did, Operation, OperationChainValidator, PlcState};
 use clap::Parser;
 use reqwest::blocking::Client;
 use serde::Deserialize;
···
         println!();
     }
 
-    // Validate the operation chain
+    // Detect forks and build canonical chain
     if !args.quiet {
-        println!("🔐 Validating operation chain...");
+        println!("🔐 Analyzing operation chain...");
         println!();
     }
 
-    // Step 1: Validate chain linkage (prev references)
+    // Detect fork points and nullified operations
+    let has_forks = detect_forks(&audit_log);
+    let has_nullified = audit_log.iter().any(|e| e.nullified);
+
+    if has_forks || has_nullified {
+        if !args.quiet {
+            if has_forks {
+                println!("⚠️  Fork detected - multiple operations reference the same prev CID");
+            }
+            if has_nullified {
+                println!("⚠️  Nullified operations detected - will validate canonical chain only");
+            }
+            println!();
+        }
+
+        // Use fork resolution to build canonical chain
+        if args.verbose {
+            println!("Step 1: Fork Resolution & Canonical Chain Building");
+            println!("===================================================");
+        }
+
+        // Build operations and timestamps for fork resolution
+        let operations: Vec<_> = audit_log.iter().map(|e| e.operation.clone()).collect();
+        let timestamps: Vec<_> = audit_log
+            .iter()
+            .map(|e| {
+                e.created_at
+                    .parse::<chrono::DateTime<chrono::Utc>>()
+                    .unwrap_or_else(|_| chrono::Utc::now())
+            })
+            .collect();
+
+        // Resolve forks and get canonical chain
+        match OperationChainValidator::validate_chain_with_forks(&operations, &timestamps) {
+            Ok(final_state) => {
+                if args.verbose {
+                    println!("  ✅ Fork resolution complete");
+                    println!("  ✅ Canonical chain validated successfully");
+                    println!();
+
+                    // Show which operations are in the canonical chain
+                    println!("Canonical Chain Operations:");
+                    println!("===========================");
+
+                    // Build the canonical chain by following non-nullified operations
+                    let canonical_indices = build_canonical_chain_indices(&audit_log);
+
+                    for idx in &canonical_indices {
+                        let entry = &audit_log[*idx];
+                        println!("  [{}] ✅ {} - {}", idx, entry.cid, entry.created_at);
+                    }
+                    println!();
+
+                    if has_nullified {
+                        println!("Nullified/Rejected Operations:");
+                        println!("==============================");
+                        for (i, entry) in audit_log.iter().enumerate() {
+                            if entry.nullified && !canonical_indices.contains(&i) {
+                                println!("  [{}] ❌ {} - {} (nullified)", i, entry.cid, entry.created_at);
+                                if let Some(prev) = entry.operation.prev() {
+                                    println!("      Referenced: {}", prev);
+                                }
+                            }
+                        }
+                        println!();
+                    }
+                }
+
+                // Display final state
+                display_final_state(&final_state, args.quiet);
+                return;
+            }
+            Err(e) => {
+                eprintln!();
+                eprintln!("❌ Validation failed: {}", e);
+                process::exit(1);
+            }
+        }
+    }
+
+    // Simple linear chain validation (no forks or nullified operations)
     if args.verbose {
-        println!("Step 1: Chain Linkage Validation");
+        println!("Step 1: Linear Chain Validation");
         println!("================================");
     }
 
     for i in 1..audit_log.len() {
-        if audit_log[i].nullified {
-            if args.verbose {
-                println!("  [{}] ⊘ Skipped (nullified)", i);
-            }
-            continue;
-        }
-
         let prev_cid = audit_log[i - 1].cid.clone();
         let expected_prev = audit_log[i].operation.prev();
···
     }
 
     // Build final state
-    let final_entry = audit_log.iter().filter(|e| !e.nullified).last().unwrap();
+    let final_entry = audit_log.last().unwrap();
     if let Some(_rotation_keys) = final_entry.operation.rotation_keys() {
         let final_state = match &final_entry.operation {
             Operation::PlcOperation {
···
                 services,
                 ..
             } => {
-                use atproto_plc::PlcState;
                 PlcState {
                     rotation_keys: rotation_keys.clone(),
                     verification_methods: verification_methods.clone(),
···
                 }
             }
             _ => {
-                use atproto_plc::PlcState;
                 PlcState::new()
             }
         };
 
-        {
-            if args.quiet {
-                println!("✅ VALID");
+        display_final_state(&final_state, args.quiet);
+    } else {
+        eprintln!("❌ Error: Could not extract final state");
+        process::exit(1);
+    }
+}
+
+/// Detect if there are fork points in the audit log
+fn detect_forks(audit_log: &[AuditLogEntry]) -> bool {
+    use std::collections::HashMap;
+
+    let mut prev_counts: HashMap<String, usize> = HashMap::new();
+
+    for entry in audit_log {
+        if let Some(prev) = entry.operation.prev() {
+            *prev_counts.entry(prev.to_string()).or_insert(0) += 1;
+        }
+    }
+
+    // If any prev CID is referenced by more than one operation, there's a fork
+    prev_counts.values().any(|&count| count > 1)
+}
+
+/// Build a list of indices that form the canonical chain
+fn build_canonical_chain_indices(audit_log: &[AuditLogEntry]) -> Vec<usize> {
+    use std::collections::HashMap;
+
+    // Build a map of prev CID to operations
+    let mut prev_to_indices: HashMap<String, Vec<usize>> = HashMap::new();
+
+    for (i, entry) in audit_log.iter().enumerate() {
+        if let Some(prev) = entry.operation.prev() {
+            prev_to_indices
+                .entry(prev.to_string())
+                .or_default()
+                .push(i);
+        }
+    }
+
+    // Start from genesis and follow the canonical chain
+    let mut canonical = Vec::new();
+
+    // Find genesis (first operation)
+    let genesis = match audit_log.first() {
+        Some(g) => g,
+        None => return canonical,
+    };
+
+    canonical.push(0);
+    let mut current_cid = genesis.cid.clone();
+
+    // Follow the chain, preferring non-nullified operations
+    loop {
+        if let Some(indices) = prev_to_indices.get(&current_cid) {
+            // Find the first non-nullified operation
+            if let Some(&next_idx) = indices.iter().find(|&&idx| !audit_log[idx].nullified) {
+                canonical.push(next_idx);
+                current_cid = audit_log[next_idx].cid.clone();
             } else {
-                println!("✅ Validation successful!");
-                println!();
-                println!("📄 Final DID State:");
-                println!("   Rotation keys: {}", final_state.rotation_keys.len());
-                for (i, key) in final_state.rotation_keys.iter().enumerate() {
-                    println!("   [{}] {}", i, key);
-                }
-                println!();
-                println!("   Verification methods: {}", final_state.verification_methods.len());
-                for (name, key) in &final_state.verification_methods {
-                    println!("   {}: {}", name, key);
-                }
-                println!();
-                if !final_state.also_known_as.is_empty() {
-                    println!("   Also known as: {}", final_state.also_known_as.len());
-                    for uri in &final_state.also_known_as {
-                        println!("   - {}", uri);
-                    }
-                    println!();
-                }
-                if !final_state.services.is_empty() {
-                    println!("   Services: {}", final_state.services.len());
-                    for (name, service) in &final_state.services {
-                        println!("   {}: {} ({})", name, service.endpoint, service.service_type);
-                    }
+                // All operations at this point are nullified - try to find any operation
+                if let Some(&next_idx) = indices.first() {
+                    canonical.push(next_idx);
+                    current_cid = audit_log[next_idx].cid.clone();
+                } else {
+                    break;
                 }
             }
+        } else {
+            // No more operations
+            break;
         }
+    }
+
+    canonical
+}
+
+/// Display the final state after validation
+fn display_final_state(final_state: &PlcState, quiet: bool) {
+    if quiet {
+        println!("✅ VALID");
     } else {
-        eprintln!("❌ Error: Could not extract final state");
-        process::exit(1);
+        println!("✅ Validation successful!");
+        println!();
+        println!("📄 Final DID State:");
+        println!("   Rotation keys: {}", final_state.rotation_keys.len());
+        for (i, key) in final_state.rotation_keys.iter().enumerate() {
+            println!("   [{}] {}", i, key);
+        }
+        println!();
+        println!("   Verification methods: {}", final_state.verification_methods.len());
+        for (name, key) in &final_state.verification_methods {
+            println!("   {}: {}", name, key);
+        }
+        println!();
+        if !final_state.also_known_as.is_empty() {
+            println!("   Also known as: {}", final_state.also_known_as.len());
+            for uri in &final_state.also_known_as {
+                println!("   - {}", uri);
+            }
+            println!();
+        }
+        if !final_state.services.is_empty() {
+            println!("   Services: {}", final_state.services.len());
+            for (name, service) in &final_state.services {
+                println!("   {}: {} ({})", name, service.endpoint, service.service_type);
+            }
+        }
     }
 }
···
     let url = format!("{}/{}/log/audit", plc_url, did);
 
     let client = Client::builder()
-        .user_agent("atproto-plc-audit/0.1.0")
+        .user_agent("atproto-plc-audit/0.2.0")
         .timeout(std::time::Duration::from_secs(30))
         .build()?;
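The `detect_forks` helper added in this diff reduces fork detection to counting how many operations share a `prev` CID. The same idea over plain `(cid, prev)` string pairs, as an illustrative stand-alone sketch (the real function walks `AuditLogEntry` values from the audit log):

```rust
use std::collections::HashMap;

/// Stand-alone sketch of the detect_forks idea from the diff above: a fork
/// exists when two or more operations name the same prev CID. Entries are
/// (cid, prev) pairs; the genesis operation has prev = None.
fn has_fork(entries: &[(&str, Option<&str>)]) -> bool {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for &(_cid, prev) in entries {
        if let Some(p) = prev {
            *counts.entry(p).or_insert(0) += 1;
        }
    }
    // Any prev CID referenced more than once means the chain branches there
    counts.values().any(|&c| c > 1)
}

fn main() {
    // Linear chain: a <- b <- c
    let linear = [("a", None), ("b", Some("a")), ("c", Some("b"))];
    assert!(!has_fork(&linear));

    // Fork: both "b" and "b2" claim "a" as prev
    let forked = [("a", None), ("b", Some("a")), ("b2", Some("a"))];
    assert!(has_fork(&forked));
}
```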
src/bin/plc-fork-viz.rs  +613

···
+//! PLC Fork Visualizer
+//!
+//! This binary fetches DID audit logs and visualizes any forks in the operation chain,
+//! showing which operations won/lost and why based on rotation key priority and the
+//! 72-hour recovery window.
+
+use atproto_plc::{Did, Operation, VerifyingKey};
+use chrono::{DateTime, Utc};
+use clap::Parser;
+use reqwest::blocking::Client;
+use serde::Deserialize;
+use std::collections::HashMap;
+use std::process;
+
+/// Command-line arguments
+#[derive(Parser, Debug)]
+#[command(
+    name = "plc-fork-viz",
+    about = "Visualize forks in PLC audit logs",
+    long_about = "Fetches and visualizes fork points in DID operation chains from plc.directory,\nshowing which operations won/lost based on rotation key priority and recovery window rules"
+)]
+struct Args {
+    /// The DID to analyze (e.g., did:plc:ewvi7nxzyoun6zhxrhs64oiz)
+    #[arg(value_name = "DID")]
+    did: String,
+
+    /// Custom PLC directory URL (default: https://plc.directory)
+    #[arg(long, default_value = "https://plc.directory")]
+    plc_url: String,
+
+    /// Show detailed information for each operation
+    #[arg(short, long)]
+    verbose: bool,
+
+    /// Show timing information (timestamps and recovery window calculations)
+    #[arg(short, long)]
+    timing: bool,
+
+    /// Show full DIDs and CIDs instead of truncated versions
+    #[arg(long)]
+    full_ids: bool,
+
+    /// Output format: tree (default), json, or markdown
+    #[arg(short, long, default_value = "tree")]
+    format: OutputFormat,
+}
+
+#[derive(Debug, Clone, clap::ValueEnum)]
+enum OutputFormat {
+    Tree,
+    Json,
+    Markdown,
+}
+
+/// Audit log response from plc.directory
+#[derive(Debug, Deserialize, Clone)]
+struct AuditLogEntry {
+    /// The DID this operation is for
+    #[allow(dead_code)]
+    did: String,
+
+    /// The operation itself
+    operation: Operation,
+
+    /// CID of this operation
+    cid: String,
+
+    /// Timestamp when this operation was created
+    #[serde(rename = "createdAt")]
+    created_at: String,
+
+    /// Nullified flag (if this operation was invalidated by fork resolution)
+    #[allow(dead_code)]
+    #[serde(default)]
+    nullified: bool,
+}
+
+/// Represents a fork point in the operation chain
+#[derive(Debug, Clone)]
+struct ForkPoint {
+    /// The prev CID that all operations in this fork reference
+    prev_cid: String,
+
+    /// Competing operations at this fork
+    operations: Vec<ForkOperation>,
+
+    /// The winning operation (after fork resolution)
+    winner_cid: String,
+}
+
+/// An operation that's part of a fork
+#[derive(Debug, Clone)]
+struct ForkOperation {
+    cid: String,
+    operation: Operation,
+    timestamp: DateTime<Utc>,
+    signing_key_index: Option<usize>,
+    signing_key: Option<String>,
+    is_winner: bool,
+    rejection_reason: Option<String>,
+}
+
+fn main() {
+    let args = Args::parse();
+
+    // Parse and validate the DID
+    let did = match Did::parse(&args.did) {
+        Ok(did) => did,
+        Err(e) => {
+            eprintln!("❌ Error: Invalid DID format: {}", e);
+            eprintln!("   Expected format: did:plc:<24 lowercase base32 characters>");
+            process::exit(1);
+        }
+    };
+
+    println!("🔍 Analyzing forks in: {}", did);
+    println!("   Source: {}", args.plc_url);
+    println!();
+
+    // Fetch the audit log
+    let audit_log = match fetch_audit_log(&args.plc_url, &did) {
+        Ok(log) => log,
+        Err(e) => {
+            eprintln!("❌ Error: Failed to fetch audit log: {}", e);
+            process::exit(1);
+        }
+    };
+
+    if audit_log.is_empty() {
+        eprintln!("❌ Error: No operations found in audit log");
+        process::exit(1);
+    }
+
+    println!("📊 Audit log contains {} operations", audit_log.len());
+
+    // Detect forks
+    let forks = detect_forks(&audit_log, &args);
+
+    if forks.is_empty() {
+        println!("\n✅ No forks detected - this is a linear operation chain");
+        println!("   All operations form a single canonical path from genesis to tip.");
+
+        if args.verbose {
+            println!("\n📋 Linear chain visualization:");
+            visualize_linear_chain(&audit_log, &args);
+        }
+
+        return;
+    }
+
+    println!("⚠️  Detected {} fork point(s)", forks.len());
+    println!();
+
+    // Visualize based on format
+    match args.format {
+        OutputFormat::Tree => visualize_tree(&audit_log, &forks, &args),
+        OutputFormat::Json => visualize_json(&forks),
+        OutputFormat::Markdown => visualize_markdown(&audit_log, &forks, &args),
+    }
+}
+
+/// Detect fork points in the audit log
+fn detect_forks(audit_log: &[AuditLogEntry], args: &Args) -> Vec<ForkPoint> {
+    let mut prev_to_operations: HashMap<String, Vec<AuditLogEntry>> = HashMap::new();
+
+    // Group operations by their prev CID
+    for entry in audit_log {
+        if let Some(prev) = entry.operation.prev() {
+            prev_to_operations
+                .entry(prev.to_string())
+                .or_default()
+                .push(entry.clone());
+        }
+    }
+
+    // Build operation map for state reconstruction
+    let operation_map: HashMap<String, AuditLogEntry> = audit_log
+        .iter()
+        .map(|e| (e.cid.clone(), e.clone()))
+        .collect();
+
+    let mut forks = Vec::new();
+
+    // Find fork points (where multiple operations reference the same prev)
185
185
+
for (prev_cid, operations) in prev_to_operations {
186
186
+
if operations.len() > 1 {
187
187
+
if args.verbose {
188
188
+
println!("🔀 Fork detected at {}", truncate(&prev_cid, args));
189
189
+
println!(" {} competing operations", operations.len());
190
190
+
}
191
191
+
192
192
+
// Get the state at the prev operation to determine rotation keys
193
193
+
let state = if let Some(prev_entry) = operation_map.get(&prev_cid) {
194
194
+
get_state_from_operation(&prev_entry.operation)
195
195
+
} else {
196
196
+
// This shouldn't happen in a valid chain
197
197
+
continue;
198
198
+
};
199
199
+
200
200
+
// Analyze each operation in the fork
201
201
+
let mut fork_ops = Vec::new();
202
202
+
for entry in &operations {
203
203
+
let timestamp = parse_timestamp(&entry.created_at);
204
204
+
205
205
+
// Determine which rotation key signed this operation
206
206
+
let (signing_key_index, signing_key) = if !state.rotation_keys.is_empty() {
207
207
+
find_signing_key(&entry.operation, &state.rotation_keys)
208
208
+
} else {
209
209
+
(None, None)
210
210
+
};
211
211
+
212
212
+
fork_ops.push(ForkOperation {
213
213
+
cid: entry.cid.clone(),
214
214
+
operation: entry.operation.clone(),
215
215
+
timestamp,
216
216
+
signing_key_index,
217
217
+
signing_key,
218
218
+
is_winner: false,
219
219
+
rejection_reason: None,
220
220
+
});
221
221
+
}
222
222
+
223
223
+
// Resolve the fork to determine winner
224
224
+
let winner_cid = resolve_fork(&mut fork_ops);
225
225
+
226
226
+
forks.push(ForkPoint {
227
227
+
prev_cid,
228
228
+
operations: fork_ops,
229
229
+
winner_cid,
230
230
+
});
231
231
+
}
232
232
+
}
233
233
+
234
234
+
// Sort forks chronologically
235
235
+
forks.sort_by_key(|f| {
236
236
+
f.operations
237
237
+
.iter()
238
238
+
.map(|op| op.timestamp)
239
239
+
.min()
240
240
+
.unwrap_or_else(Utc::now)
241
241
+
});
242
242
+
243
243
+
forks
244
244
+
}
245
245
+
246
246
+
/// Resolve a fork point and mark the winner
247
247
+
fn resolve_fork(fork_ops: &mut [ForkOperation]) -> String {
248
248
+
// Sort by timestamp (chronological order)
249
249
+
fork_ops.sort_by_key(|op| op.timestamp);
250
250
+
251
251
+
// First-received is the default winner
252
252
+
let mut winner_idx = 0;
253
253
+
fork_ops[0].is_winner = true;
254
254
+
255
255
+
// Check if any later operation can invalidate based on priority
256
256
+
for i in 1..fork_ops.len() {
257
257
+
let competing_key_idx = fork_ops[i].signing_key_index;
258
258
+
let winner_key_idx = fork_ops[winner_idx].signing_key_index;
259
259
+
260
260
+
match (competing_key_idx, winner_key_idx) {
261
261
+
(Some(competing_idx), Some(winner_idx_val)) => {
262
262
+
if competing_idx < winner_idx_val {
263
263
+
// Higher priority (lower index)
264
264
+
let time_diff = fork_ops[i].timestamp - fork_ops[winner_idx].timestamp;
265
265
+
266
266
+
if time_diff <= chrono::Duration::hours(72) {
267
267
+
// Within recovery window - this operation wins
268
268
+
fork_ops[winner_idx].is_winner = false;
269
269
+
fork_ops[winner_idx].rejection_reason = Some(format!(
270
270
+
"Invalidated by higher-priority key[{}] within recovery window",
271
271
+
competing_idx
272
272
+
));
273
273
+
274
274
+
fork_ops[i].is_winner = true;
275
275
+
winner_idx = i;
276
276
+
} else {
277
277
+
// Outside recovery window
278
278
+
fork_ops[i].rejection_reason = Some(format!(
279
279
+
"Higher-priority key[{}] but outside 72-hour recovery window ({:.1}h late)",
280
280
+
competing_idx,
281
281
+
time_diff.num_hours() as f64
282
282
+
));
283
283
+
}
284
284
+
} else {
285
285
+
// Lower priority
286
286
+
fork_ops[i].rejection_reason = Some(format!(
287
287
+
"Lower-priority key[{}] (current winner has key[{}])",
288
288
+
competing_idx,
289
289
+
winner_idx_val
290
290
+
));
291
291
+
}
292
292
+
}
293
293
+
_ => {
294
294
+
fork_ops[i].rejection_reason = Some("Could not determine signing key".to_string());
295
295
+
}
296
296
+
}
297
297
+
}
298
298
+
299
299
+
fork_ops[winner_idx].cid.clone()
300
300
+
}
301
301
+
302
302
+
/// Find which rotation key signed an operation
303
303
+
fn find_signing_key(operation: &Operation, rotation_keys: &[String]) -> (Option<usize>, Option<String>) {
304
304
+
for (index, key_did) in rotation_keys.iter().enumerate() {
305
305
+
if let Ok(verifying_key) = VerifyingKey::from_did_key(key_did) {
306
306
+
if operation.verify(&[verifying_key]).is_ok() {
307
307
+
return (Some(index), Some(key_did.clone()));
308
308
+
}
309
309
+
}
310
310
+
}
311
311
+
(None, None)
312
312
+
}
313
313
+
314
314
+
/// Get state from an operation
315
315
+
fn get_state_from_operation(operation: &Operation) -> atproto_plc::PlcState {
316
316
+
match operation {
317
317
+
Operation::PlcOperation {
318
318
+
rotation_keys,
319
319
+
verification_methods,
320
320
+
also_known_as,
321
321
+
services,
322
322
+
..
323
323
+
} => atproto_plc::PlcState {
324
324
+
rotation_keys: rotation_keys.clone(),
325
325
+
verification_methods: verification_methods.clone(),
326
326
+
also_known_as: also_known_as.clone(),
327
327
+
services: services.clone(),
328
328
+
},
329
329
+
_ => atproto_plc::PlcState::new(),
330
330
+
}
331
331
+
}
332
332
+
333
333
+
/// Parse ISO 8601 timestamp
334
334
+
fn parse_timestamp(timestamp: &str) -> DateTime<Utc> {
335
335
+
timestamp
336
336
+
.parse::<DateTime<Utc>>()
337
337
+
.unwrap_or_else(|_| Utc::now())
338
338
+
}
339
339
+
340
340
+
/// Visualize forks as a tree
341
341
+
fn visualize_tree(audit_log: &[AuditLogEntry], forks: &[ForkPoint], args: &Args) {
342
342
+
println!("📊 Fork Visualization (Tree Format)");
343
343
+
println!("═══════════════════════════════════════════════════════════════");
344
344
+
println!();
345
345
+
346
346
+
// Build a map of which operations are part of forks
347
347
+
let mut fork_map: HashMap<String, &ForkPoint> = HashMap::new();
348
348
+
for fork in forks {
349
349
+
for op in &fork.operations {
350
350
+
fork_map.insert(op.cid.clone(), fork);
351
351
+
}
352
352
+
}
353
353
+
354
354
+
// Track which prev CIDs have been processed
355
355
+
let mut processed_forks: std::collections::HashSet<String> = std::collections::HashSet::new();
356
356
+
357
357
+
for entry in audit_log.iter() {
358
358
+
let is_genesis = entry.operation.is_genesis();
359
359
+
let prev = entry.operation.prev();
360
360
+
361
361
+
// Check if this operation is part of a fork
362
362
+
if let Some(_prev_cid) = prev {
363
363
+
if let Some(fork) = fork_map.get(&entry.cid) {
364
364
+
// This is a fork point
365
365
+
if !processed_forks.contains(&fork.prev_cid) {
366
366
+
processed_forks.insert(fork.prev_cid.clone());
367
367
+
368
368
+
println!("Fork at operation referencing {}", truncate(&fork.prev_cid, args));
369
369
+
370
370
+
for (j, fork_op) in fork.operations.iter().enumerate() {
371
371
+
let symbol = if fork_op.is_winner { "✓" } else { "✗" };
372
372
+
let color = if fork_op.is_winner { "🟢" } else { "🔴" };
373
373
+
let prefix = if j == fork.operations.len() - 1 { "└─" } else { "├─" };
374
374
+
375
375
+
println!(
376
376
+
" {} {} {} CID: {}",
377
377
+
prefix,
378
378
+
color,
379
379
+
symbol,
380
380
+
truncate(&fork_op.cid, args)
381
381
+
);
382
382
+
383
383
+
if let Some(key_idx) = fork_op.signing_key_index {
384
384
+
println!(" │ Signed by: rotation_key[{}]", key_idx);
385
385
+
if args.verbose {
386
386
+
if let Some(key) = &fork_op.signing_key {
387
387
+
println!(" │ Key: {}", truncate(key, args));
388
388
+
}
389
389
+
}
390
390
+
}
391
391
+
392
392
+
if args.timing {
393
393
+
println!(
394
394
+
" │ Timestamp: {}",
395
395
+
fork_op.timestamp.format("%Y-%m-%d %H:%M:%S UTC")
396
396
+
);
397
397
+
}
398
398
+
399
399
+
if !fork_op.is_winner {
400
400
+
if let Some(reason) = &fork_op.rejection_reason {
401
401
+
println!(" │ Reason: {}", reason);
402
402
+
}
403
403
+
} else {
404
404
+
println!(" │ Status: CANONICAL (winner)");
405
405
+
}
406
406
+
407
407
+
if args.verbose {
408
408
+
if let Some(Operation::PlcOperation { services, .. }) = Some(&fork_op.operation) {
409
409
+
if !services.is_empty() {
410
410
+
println!(" │ Services: {} configured", services.len());
411
411
+
}
412
412
+
}
413
413
+
}
414
414
+
415
415
+
if j < fork.operations.len() - 1 {
416
416
+
println!(" │");
417
417
+
}
418
418
+
}
419
419
+
println!();
420
420
+
}
421
421
+
continue;
422
422
+
}
423
423
+
}
424
424
+
425
425
+
// Regular operation (not part of a fork)
426
426
+
if is_genesis {
427
427
+
println!("🌱 Genesis");
428
428
+
println!(" CID: {}", truncate(&entry.cid, args));
429
429
+
if args.timing {
430
430
+
println!(" Timestamp: {}", entry.created_at);
431
431
+
}
432
432
+
if args.verbose {
433
433
+
if let Operation::PlcOperation { rotation_keys, .. } = &entry.operation {
434
434
+
println!(" Rotation keys: {}", rotation_keys.len());
435
435
+
}
436
436
+
}
437
437
+
println!();
438
438
+
}
439
439
+
}
440
440
+
441
441
+
// Summary
442
442
+
println!("═══════════════════════════════════════════════════════════════");
443
443
+
println!("📈 Summary:");
444
444
+
println!(" Total operations: {}", audit_log.len());
445
445
+
println!(" Fork points: {}", forks.len());
446
446
+
447
447
+
let total_competing_ops: usize = forks.iter().map(|f| f.operations.len()).sum();
448
448
+
let rejected_ops = total_competing_ops - forks.len();
449
449
+
println!(" Rejected operations: {}", rejected_ops);
450
450
+
451
451
+
if !forks.is_empty() {
452
452
+
println!("\n🔐 Fork Resolution Details:");
453
453
+
for (i, fork) in forks.iter().enumerate() {
454
454
+
let winner = fork.operations.iter().find(|op| op.is_winner).unwrap();
455
455
+
println!(
456
456
+
" Fork {}: Winner is {} (signed by key[{}])",
457
457
+
i + 1,
458
458
+
truncate(&winner.cid, args),
459
459
+
winner.signing_key_index.unwrap_or(999)
460
460
+
);
461
461
+
}
462
462
+
}
463
463
+
}
464
464
+
465
465
+
/// Visualize linear chain (no forks)
466
466
+
fn visualize_linear_chain(audit_log: &[AuditLogEntry], args: &Args) {
467
467
+
for (i, entry) in audit_log.iter().enumerate() {
468
468
+
let symbol = if i == 0 { "🌱" } else { " ↓" };
469
469
+
println!("{} Operation {}: {}", symbol, i, truncate(&entry.cid, args));
470
470
+
471
471
+
if args.timing {
472
472
+
println!(" Timestamp: {}", entry.created_at);
473
473
+
}
474
474
+
475
475
+
if args.verbose {
476
476
+
if let Some(prev) = entry.operation.prev() {
477
477
+
println!(" Previous: {}", truncate(prev, args));
478
478
+
}
479
479
+
}
480
480
+
}
481
481
+
}
482
482
+
483
483
+
/// Visualize forks as JSON
484
484
+
fn visualize_json(forks: &[ForkPoint]) {
485
485
+
#[derive(serde::Serialize)]
486
486
+
struct JsonFork {
487
487
+
prev_cid: String,
488
488
+
winner_cid: String,
489
489
+
operations: Vec<JsonForkOp>,
490
490
+
}
491
491
+
492
492
+
#[derive(serde::Serialize)]
493
493
+
struct JsonForkOp {
494
494
+
cid: String,
495
495
+
timestamp: String,
496
496
+
signing_key_index: Option<usize>,
497
497
+
is_winner: bool,
498
498
+
rejection_reason: Option<String>,
499
499
+
}
500
500
+
501
501
+
let json_forks: Vec<JsonFork> = forks
502
502
+
.iter()
503
503
+
.map(|f| JsonFork {
504
504
+
prev_cid: f.prev_cid.clone(),
505
505
+
winner_cid: f.winner_cid.clone(),
506
506
+
operations: f
507
507
+
.operations
508
508
+
.iter()
509
509
+
.map(|op| JsonForkOp {
510
510
+
cid: op.cid.clone(),
511
511
+
timestamp: op.timestamp.to_rfc3339(),
512
512
+
signing_key_index: op.signing_key_index,
513
513
+
is_winner: op.is_winner,
514
514
+
rejection_reason: op.rejection_reason.clone(),
515
515
+
})
516
516
+
.collect(),
517
517
+
})
518
518
+
.collect();
519
519
+
520
520
+
println!(
521
521
+
"{}",
522
522
+
serde_json::to_string_pretty(&json_forks).unwrap()
523
523
+
);
524
524
+
}
525
525
+
526
526
+
/// Visualize forks as Markdown
527
527
+
fn visualize_markdown(audit_log: &[AuditLogEntry], forks: &[ForkPoint], args: &Args) {
528
528
+
println!("# PLC Fork Analysis Report\n");
529
529
+
println!("**DID**: {}\n", args.did);
530
530
+
println!("**Total Operations**: {}", audit_log.len());
531
531
+
println!("**Fork Points**: {}\n", forks.len());
532
532
+
533
533
+
if forks.is_empty() {
534
534
+
println!("✅ No forks detected - linear operation chain\n");
535
535
+
return;
536
536
+
}
537
537
+
538
538
+
println!("## Fork Details\n");
539
539
+
540
540
+
for (i, fork) in forks.iter().enumerate() {
541
541
+
println!("### Fork {} (at {})\n", i + 1, truncate(&fork.prev_cid, args));
542
542
+
println!("| Status | CID | Key Index | Timestamp | Reason |");
543
543
+
println!("|--------|-----|-----------|-----------|--------|");
544
544
+
545
545
+
for op in &fork.operations {
546
546
+
let status = if op.is_winner { "✅ Winner" } else { "❌ Rejected" };
547
547
+
let key_idx = op
548
548
+
.signing_key_index
549
549
+
.map(|i| i.to_string())
550
550
+
.unwrap_or_else(|| "?".to_string());
551
551
+
let timestamp = op.timestamp.format("%Y-%m-%d %H:%M:%S");
552
552
+
let reason = op
553
553
+
.rejection_reason
554
554
+
.as_deref()
555
555
+
.unwrap_or("Canonical operation");
556
556
+
557
557
+
println!(
558
558
+
"| {} | {} | {} | {} | {} |",
559
559
+
status,
560
560
+
truncate(&op.cid, args),
561
561
+
key_idx,
562
562
+
timestamp,
563
563
+
reason
564
564
+
);
565
565
+
}
566
566
+
567
567
+
println!();
568
568
+
}
569
569
+
570
570
+
println!("## Summary\n");
571
571
+
println!("- Total competing operations: {}", forks.iter().map(|f| f.operations.len()).sum::<usize>());
572
572
+
println!("- Rejected operations: {}", forks.iter().map(|f| f.operations.len() - 1).sum::<usize>());
573
573
+
}
574
574
+
575
575
+
/// Truncate a string for display
576
576
+
fn truncate(s: &str, args: &Args) -> String {
577
577
+
if args.full_ids {
578
578
+
s.to_string()
579
579
+
} else {
580
580
+
if s.len() > 20 {
581
581
+
format!("{}...{}", &s[..8], &s[s.len() - 8..])
582
582
+
} else {
583
583
+
s.to_string()
584
584
+
}
585
585
+
}
586
586
+
}
587
587
+
588
588
+
/// Fetch the audit log for a DID from plc.directory
589
589
+
fn fetch_audit_log(
590
590
+
plc_url: &str,
591
591
+
did: &Did,
592
592
+
) -> Result<Vec<AuditLogEntry>, Box<dyn std::error::Error>> {
593
593
+
let url = format!("{}/{}/log/audit", plc_url, did);
594
594
+
595
595
+
let client = Client::builder()
596
596
+
.user_agent("atproto-plc-fork-viz/0.2.0")
597
597
+
.timeout(std::time::Duration::from_secs(30))
598
598
+
.build()?;
599
599
+
600
600
+
let response = client.get(&url).send()?;
601
601
+
602
602
+
if !response.status().is_success() {
603
603
+
return Err(format!(
604
604
+
"HTTP error: {} - {}",
605
605
+
response.status(),
606
606
+
response.text().unwrap_or_default()
607
607
+
)
608
608
+
.into());
609
609
+
}
610
610
+
611
611
+
let audit_log: Vec<AuditLogEntry> = response.json()?;
612
612
+
Ok(audit_log)
613
613
+
}
+1
-1
src/lib.rs
reviewed
···
 fn test_library_info() {
     let info = library_info();
     assert!(info.contains("atproto-plc"));
-    assert!(info.contains("0.1.0"));
+    assert!(info.contains("0.2.0"));
 }

 #[test]
+943
-4
src/validation.rs
reviewed
···
 use crate::error::{PlcError, Result};
 use crate::operations::Operation;
 use chrono::{DateTime, Duration, Utc};
+use std::collections::HashMap;

 /// Recovery window duration (72 hours)
 const RECOVERY_WINDOW_HOURS: i64 = 72;

+/// Represents an operation with its server-assigned timestamp
+#[derive(Debug, Clone)]
+pub struct OperationWithTimestamp {
+    /// The operation itself
+    pub operation: Operation,
+    /// Server-assigned timestamp when the operation was received
+    pub timestamp: DateTime<Utc>,
+}
+
+/// Represents a fork point where multiple operations reference the same prev CID
+#[derive(Debug)]
+struct ForkPoint {
+    /// The prev CID that multiple operations reference
+    #[allow(dead_code)]
+    prev_cid: String,
+    /// Competing operations at this fork point, with their timestamps and signing key indices
+    operations: Vec<(Operation, DateTime<Utc>, usize)>,
+}
+
+impl ForkPoint {
+    /// Create a new fork point
+    fn new(prev_cid: String) -> Self {
+        Self {
+            prev_cid,
+            operations: Vec::new(),
+        }
+    }
+
+    /// Add an operation to this fork point
+    fn add_operation(&mut self, operation: Operation, timestamp: DateTime<Utc>, key_index: usize) {
+        self.operations.push((operation, timestamp, key_index));
+    }
+
+    /// Resolve this fork point and return the canonical operation
+    ///
+    /// Resolution algorithm:
+    /// 1. Sort operations by timestamp (first-received wins by default)
+    /// 2. Check if any later-received operation with higher priority can invalidate within 72 hours
+    /// 3. Return the winning operation
+    fn resolve(&self) -> Result<Operation> {
+        if self.operations.is_empty() {
+            return Err(PlcError::ForkResolutionError(
+                "Fork point has no operations".to_string(),
+            ));
+        }
+
+        // If only one operation, it wins by default
+        if self.operations.len() == 1 {
+            return Ok(self.operations[0].0.clone());
+        }
+
+        // Sort by timestamp - first-received wins by default
+        let mut sorted = self.operations.clone();
+        sorted.sort_by_key(|(_, timestamp, _)| *timestamp);
+
+        // The first operation in chronological order is the default winner
+        let (mut canonical_op, mut canonical_ts, mut canonical_key_idx) = sorted[0].clone();
+
+        // Check if any later-received operation can invalidate the current canonical
+        for (competing_op, competing_ts, competing_key_idx) in &sorted[1..] {
+            // Can only invalidate if the competing operation has higher priority (lower index)
+            if *competing_key_idx < canonical_key_idx {
+                // Check if within recovery window from the canonical operation
+                let time_diff = *competing_ts - canonical_ts;
+                if time_diff <= Duration::hours(RECOVERY_WINDOW_HOURS)
+                    && time_diff >= Duration::zero()
+                {
+                    // This higher-priority operation invalidates the canonical one
+                    // Update the canonical to this operation
+                    canonical_op = competing_op.clone();
+                    canonical_ts = *competing_ts;
+                    canonical_key_idx = *competing_key_idx;
+                }
+            }
+        }
+
+        Ok(canonical_op)
+    }
+}
+
+/// Builder for constructing the canonical chain from a set of operations with timestamps
+struct CanonicalChainBuilder {
+    /// All operations with timestamps, in the order they were received
+    operations: Vec<OperationWithTimestamp>,
+}
+
+impl CanonicalChainBuilder {
+    /// Create a new builder
+    fn new(operations: Vec<OperationWithTimestamp>) -> Self {
+        Self { operations }
+    }
+
+    /// Build the canonical chain by detecting and resolving forks
+    ///
+    /// Algorithm:
+    /// 1. Build a graph of operations by CID
+    /// 2. Detect fork points (multiple operations with same prev CID)
+    /// 3. Resolve each fork using rotation key priority and recovery window
+    /// 4. Build the canonical chain from genesis to tip
+    fn build(&self) -> Result<Vec<Operation>> {
+        if self.operations.is_empty() {
+            return Err(PlcError::EmptyChain);
+        }
+
+        // Build a map of CID -> (operation, timestamp) for quick lookup
+        let mut operation_map: HashMap<String, (Operation, DateTime<Utc>)> = HashMap::new();
+        for op_with_ts in &self.operations {
+            let cid = op_with_ts.operation.cid()?;
+            operation_map.insert(cid, (op_with_ts.operation.clone(), op_with_ts.timestamp));
+        }
+
+        // Find the genesis operation (prev = None)
+        let genesis = self
+            .operations
+            .iter()
+            .find(|op| op.operation.is_genesis())
+            .ok_or(PlcError::FirstOperationNotGenesis)?;
+
+        // Detect fork points - group operations by their prev CID
+        let mut prev_to_operations: HashMap<String, Vec<(Operation, DateTime<Utc>)>> =
+            HashMap::new();
+
+        for op_with_ts in &self.operations {
+            if let Some(prev_cid) = op_with_ts.operation.prev() {
+                prev_to_operations
+                    .entry(prev_cid.to_string())
+                    .or_default()
+                    .push((op_with_ts.operation.clone(), op_with_ts.timestamp));
+            }
+        }
+
+        // Build fork points for any prev CID with multiple operations
+        let mut fork_points: HashMap<String, ForkPoint> = HashMap::new();
+        for (prev_cid, operations) in &prev_to_operations {
+            if operations.len() > 1 {
+                // This is a fork point - multiple operations reference the same prev
+                let mut fork_point = ForkPoint::new(prev_cid.clone());
+
+                // For each operation, determine which rotation key signed it
+                // We need to look at the state at the prev operation
+                for (operation, timestamp) in operations {
+                    // Get the state at the prev operation to find rotation keys
+                    let prev_state = if let Some((prev_op, _)) = operation_map.get(prev_cid) {
+                        // Build state up to prev operation to get its rotation keys
+                        self.get_state_at_operation(prev_op)?
+                    } else {
+                        PlcState::new()
+                    };
+
+                    // Find which rotation key signed this operation
+                    let key_index = self.find_signing_key_index(operation, &prev_state)?;
+                    fork_point.add_operation(operation.clone(), *timestamp, key_index);
+                }
+
+                fork_points.insert(prev_cid.clone(), fork_point);
+            }
+        }
+
+        // Build the canonical chain by starting from genesis and following the canonical path
+        let mut canonical_chain = vec![genesis.operation.clone()];
+        let mut current_cid = genesis.operation.cid()?;
+
+        // Follow the chain until we reach the tip
+        loop {
+            // Check if there's a fork at this point
+            if let Some(fork_point) = fork_points.get(&current_cid) {
+                // Resolve the fork and get the canonical operation
+                let canonical_op = fork_point.resolve()?;
+                let next_cid = canonical_op.cid()?;
+                canonical_chain.push(canonical_op);
+                current_cid = next_cid;
+            } else if let Some(operations) = prev_to_operations.get(&current_cid) {
+                // No fork, just a single operation
+                if operations.len() == 1 {
+                    let (operation, _) = &operations[0];
+                    let next_cid = operation.cid()?;
+                    canonical_chain.push(operation.clone());
+                    current_cid = next_cid;
+                } else {
+                    // This shouldn't happen - we should have detected this as a fork
+                    return Err(PlcError::ForkResolutionError(
+                        "Unexpected multiple operations without fork point".to_string(),
+                    ));
+                }
+            } else {
+                // No more operations - we've reached the tip
+                break;
+            }
+        }
+
+        Ok(canonical_chain)
+    }
+
+    /// Find the index of the rotation key that signed this operation
+    fn find_signing_key_index(
+        &self,
+        operation: &Operation,
+        state: &PlcState,
+    ) -> Result<usize> {
+        if state.rotation_keys.is_empty() {
+            return Err(PlcError::InvalidRotationKeys(
+                "No rotation keys available for verification".to_string(),
+            ));
+        }
+
+        // Try each rotation key and return the index of the first one that verifies
+        for (index, key_did) in state.rotation_keys.iter().enumerate() {
+            let verifying_key = VerifyingKey::from_did_key(key_did)?;
+            if operation.verify(&[verifying_key]).is_ok() {
+                return Ok(index);
+            }
+        }
+
+        // No key verified the signature
+        Err(PlcError::SignatureVerificationFailed)
+    }
+
+    /// Get the state at a specific operation
+    ///
+    /// This reconstructs the state by applying all operations from genesis up to
+    /// and including the specified operation.
+    fn get_state_at_operation(&self, target_operation: &Operation) -> Result<PlcState> {
+        let mut state = PlcState::new();
+        let target_cid = target_operation.cid()?;
+
+        // Build the chain up to the target operation
+        // We need to traverse from genesis to the target
+        for op_with_ts in &self.operations {
+            let op = &op_with_ts.operation;
+
+            // Apply this operation to the state
+            match op {
+                Operation::PlcOperation {
+                    rotation_keys,
+                    verification_methods,
+                    also_known_as,
+                    services,
+                    ..
+                } => {
+                    state.rotation_keys = rotation_keys.clone();
+                    state.verification_methods = verification_methods.clone();
+                    state.also_known_as = also_known_as.clone();
+                    state.services = services.clone();
+                }
+                Operation::PlcTombstone { .. } => {
+                    state = PlcState::new();
+                }
+                Operation::LegacyCreate { .. } => {
+                    return Err(PlcError::InvalidOperationType(
+                        "Legacy create operations not fully supported".to_string(),
+                    ));
+                }
+            }
+
+            // Check if this is the target operation
+            if op.cid()? == target_cid {
+                break;
+            }
+        }
+
+        Ok(state)
+    }
+}
+
 /// Operation chain validator
 pub struct OperationChainValidator;

···
     ///
     /// This handles the recovery mechanism where operations signed by higher-priority
     /// rotation keys can invalidate later operations if submitted within 72 hours.
+    ///
+    /// # Arguments
+    ///
+    /// * `operations` - All operations in the audit log (may contain forks)
+    /// * `timestamps` - Server-assigned timestamps for each operation
+    ///
+    /// # Returns
+    ///
+    /// The final state after applying the canonical chain (after fork resolution)
+    ///
+    /// # Errors
+    ///
+    /// Returns errors if:
+    /// - Operations and timestamps arrays have different lengths
+    /// - Chain is empty
+    /// - First operation is not genesis
+    /// - Fork resolution fails
+    /// - Any signature is invalid
+    /// - Any operation violates constraints
     pub fn validate_chain_with_forks(
         operations: &[Operation],
         timestamps: &[DateTime<Utc>],
···
             ));
         }

-        // For now, we do basic validation without fork resolution
-        // Full fork resolution would require tracking all possible forks
-        // and selecting the canonical chain based on rotation key priority
-        Self::validate_chain(operations)
+        if operations.is_empty() {
+            return Err(PlcError::EmptyChain);
+        }
+
+        // Build operations with timestamps
+        let operations_with_timestamps: Vec<OperationWithTimestamp> = operations
+            .iter()
+            .zip(timestamps.iter())
+            .map(|(op, ts)| OperationWithTimestamp {
+                operation: op.clone(),
+                timestamp: *ts,
+            })
+            .collect();
+
+        // Use the canonical chain builder to resolve forks
+        let builder = CanonicalChainBuilder::new(operations_with_timestamps);
+        let canonical_chain = builder.build()?;
+
+        // Validate the canonical chain
+        Self::validate_chain(&canonical_chain)
     }

     /// Check if an operation is within the recovery window relative to another operation
···
     #[test]
     fn test_validate_chain_empty() {
         assert!(OperationChainValidator::validate_chain(&[]).is_err());
+    }
+
+    // ========================================================================
+    // Fork Resolution Tests
+    // ========================================================================
+
+    /// Test simple fork resolution where higher priority key wins within recovery window
+    #[test]
+    fn test_fork_resolution_priority_within_window() {
+        // Create two rotation keys with different priorities
+        let primary_key = SigningKey::generate_p256(); // Index 0 - highest priority
+        let backup_key = SigningKey::generate_k256(); // Index 1 - lower priority
+
+        let rotation_keys = vec![primary_key.to_did_key(), backup_key.to_did_key()];
+
+        // Create genesis operation
+        let genesis = Operation::new_genesis(
+            rotation_keys.clone(),
+            HashMap::new(),
+            vec![],
+            HashMap::new(),
+        )
+        .sign(&primary_key)
+        .unwrap();
+
+        let genesis_cid = genesis.cid().unwrap();
+        let genesis_time = Utc::now();
+
+        // Create two competing operations that both reference genesis
+        // Operation A: signed by backup key (lower priority)
+        let mut services_a = HashMap::new();
+        services_a.insert(
+            "pds".to_string(),
+            ServiceEndpoint {
+                service_type: "AtprotoPersonalDataServer".to_string(),
+                endpoint: "https://pds-a.example.com".to_string(),
+            },
+        );
+
+        let op_a = Operation::new_update(
+            rotation_keys.clone(),
+            HashMap::new(),
+            vec![],
+            services_a,
+            genesis_cid.clone(),
+        )
+        .sign(&backup_key)
+        .unwrap();
+
+        let op_a_time = genesis_time + Duration::hours(1);
+
+        // Operation B: signed by primary key (higher priority), arrives 24 hours after A
+        let mut services_b = HashMap::new();
+        services_b.insert(
+            "pds".to_string(),
+            ServiceEndpoint {
+                service_type: "AtprotoPersonalDataServer".to_string(),
+                endpoint: "https://pds-b.example.com".to_string(),
+            },
+        );
+
+        let op_b = Operation::new_update(
+            rotation_keys.clone(),
+            HashMap::new(),
+            vec![],
+            services_b,
+            genesis_cid,
+        )
+        .sign(&primary_key)
+        .unwrap();
+
+        let op_b_time = op_a_time + Duration::hours(24); // Within 72-hour window
+
+        // Build operations and timestamps arrays
+        let operations = vec![genesis.clone(), op_a.clone(), op_b.clone()];
+        let timestamps = vec![genesis_time, op_a_time, op_b_time];
+
+        // Validate with fork resolution
+        let result = OperationChainValidator::validate_chain_with_forks(&operations, &timestamps);
+        assert!(result.is_ok());
+
+        let state = result.unwrap();
+
+        // Operation B should win because it has higher priority (lower index)
+        // and was received within the 72-hour recovery window
+        assert_eq!(
+            state.services.get("pds").unwrap().endpoint,
+            "https://pds-b.example.com"
+        );
+    }
+
+    /// Test fork resolution where lower priority operation wins after recovery window expires
+    #[test]
+    fn test_fork_resolution_priority_outside_window() {
+        let primary_key = SigningKey::generate_p256(); // Index 0
+        let backup_key = SigningKey::generate_k256(); // Index 1
+
+        let rotation_keys = vec![primary_key.to_did_key(), backup_key.to_did_key()];
771
771
+
772
772
+
let genesis = Operation::new_genesis(
773
773
+
rotation_keys.clone(),
774
774
+
HashMap::new(),
775
775
+
vec![],
776
776
+
HashMap::new(),
777
777
+
)
778
778
+
.sign(&primary_key)
779
779
+
.unwrap();
780
780
+
781
781
+
let genesis_cid = genesis.cid().unwrap();
782
782
+
let genesis_time = Utc::now();
783
783
+
784
784
+
// Operation A: signed by backup key (lower priority)
785
785
+
let mut services_a = HashMap::new();
786
786
+
services_a.insert(
787
787
+
"pds".to_string(),
788
788
+
ServiceEndpoint {
789
789
+
service_type: "AtprotoPersonalDataServer".to_string(),
790
790
+
endpoint: "https://pds-a.example.com".to_string(),
791
791
+
},
792
792
+
);
793
793
+
794
794
+
let op_a = Operation::new_update(
795
795
+
rotation_keys.clone(),
796
796
+
HashMap::new(),
797
797
+
vec![],
798
798
+
services_a,
799
799
+
genesis_cid.clone(),
800
800
+
)
801
801
+
.sign(&backup_key)
802
802
+
.unwrap();
803
803
+
804
804
+
let op_a_time = genesis_time + Duration::hours(1);
805
805
+
806
806
+
// Operation B: signed by primary key, arrives 100 hours after A (outside 72-hour window)
807
807
+
let mut services_b = HashMap::new();
808
808
+
services_b.insert(
809
809
+
"pds".to_string(),
810
810
+
ServiceEndpoint {
811
811
+
service_type: "AtprotoPersonalDataServer".to_string(),
812
812
+
endpoint: "https://pds-b.example.com".to_string(),
813
813
+
},
814
814
+
);
815
815
+
816
816
+
let op_b = Operation::new_update(
817
817
+
rotation_keys.clone(),
818
818
+
HashMap::new(),
819
819
+
vec![],
820
820
+
services_b,
821
821
+
genesis_cid,
822
822
+
)
823
823
+
.sign(&primary_key)
824
824
+
.unwrap();
825
825
+
826
826
+
let op_b_time = op_a_time + Duration::hours(100); // Outside 72-hour window
827
827
+
828
828
+
let operations = vec![genesis.clone(), op_a.clone(), op_b.clone()];
829
829
+
let timestamps = vec![genesis_time, op_a_time, op_b_time];
830
830
+
831
831
+
let result = OperationChainValidator::validate_chain_with_forks(&operations, ×tamps);
832
832
+
assert!(result.is_ok());
833
833
+
834
834
+
let state = result.unwrap();
835
835
+
836
836
+
// Operation A should win because even though B has higher priority,
837
837
+
// it arrived outside the 72-hour recovery window
838
838
+
assert_eq!(
839
839
+
state.services.get("pds").unwrap().endpoint,
840
840
+
"https://pds-a.example.com"
841
841
+
);
842
842
+
}
843
843
+
844
844
+
/// Test multiple forks at different points in the chain
845
845
+
#[test]
846
846
+
fn test_fork_resolution_multiple_forks() {
847
847
+
let key1 = SigningKey::generate_p256();
848
848
+
let key2 = SigningKey::generate_k256();
849
849
+
let key3 = SigningKey::generate_p256();
850
850
+
851
851
+
let rotation_keys = vec![key1.to_did_key(), key2.to_did_key(), key3.to_did_key()];
852
852
+
853
853
+
// Genesis
854
854
+
let genesis = Operation::new_genesis(
855
855
+
rotation_keys.clone(),
856
856
+
HashMap::new(),
857
857
+
vec![],
858
858
+
HashMap::new(),
859
859
+
)
860
860
+
.sign(&key1)
861
861
+
.unwrap();
862
862
+
863
863
+
let genesis_cid = genesis.cid().unwrap();
864
864
+
let genesis_time = Utc::now();
865
865
+
866
866
+
// First fork at genesis
867
867
+
let mut services_1a = HashMap::new();
868
868
+
services_1a.insert(
869
869
+
"pds".to_string(),
870
870
+
ServiceEndpoint {
871
871
+
service_type: "AtprotoPersonalDataServer".to_string(),
872
872
+
endpoint: "https://fork1a.example.com".to_string(),
873
873
+
},
874
874
+
);
875
875
+
876
876
+
let op_1a = Operation::new_update(
877
877
+
rotation_keys.clone(),
878
878
+
HashMap::new(),
879
879
+
vec![],
880
880
+
services_1a,
881
881
+
genesis_cid.clone(),
882
882
+
)
883
883
+
.sign(&key2)
884
884
+
.unwrap();
885
885
+
886
886
+
let op_1a_time = genesis_time + Duration::hours(1);
887
887
+
888
888
+
// Second operation at same fork (higher priority, within window)
889
889
+
let mut services_1b = HashMap::new();
890
890
+
services_1b.insert(
891
891
+
"pds".to_string(),
892
892
+
ServiceEndpoint {
893
893
+
service_type: "AtprotoPersonalDataServer".to_string(),
894
894
+
endpoint: "https://fork1b.example.com".to_string(),
895
895
+
},
896
896
+
);
897
897
+
898
898
+
let op_1b = Operation::new_update(
899
899
+
rotation_keys.clone(),
900
900
+
HashMap::new(),
901
901
+
vec![],
902
902
+
services_1b,
903
903
+
genesis_cid,
904
904
+
)
905
905
+
.sign(&key1)
906
906
+
.unwrap();
907
907
+
908
908
+
let op_1b_time = op_1a_time + Duration::hours(2);
909
909
+
910
910
+
// Next operation continues from op_1b (the winner)
911
911
+
let op_1b_cid = op_1b.cid().unwrap();
912
912
+
913
913
+
let mut services_2 = HashMap::new();
914
914
+
services_2.insert(
915
915
+
"pds".to_string(),
916
916
+
ServiceEndpoint {
917
917
+
service_type: "AtprotoPersonalDataServer".to_string(),
918
918
+
endpoint: "https://continuation.example.com".to_string(),
919
919
+
},
920
920
+
);
921
921
+
922
922
+
let op_2 = Operation::new_update(
923
923
+
rotation_keys.clone(),
924
924
+
HashMap::new(),
925
925
+
vec![],
926
926
+
services_2,
927
927
+
op_1b_cid,
928
928
+
)
929
929
+
.sign(&key1)
930
930
+
.unwrap();
931
931
+
932
932
+
let op_2_time = op_1b_time + Duration::hours(1);
933
933
+
934
934
+
let operations = vec![
935
935
+
genesis.clone(),
936
936
+
op_1a.clone(),
937
937
+
op_1b.clone(),
938
938
+
op_2.clone(),
939
939
+
];
940
940
+
let timestamps = vec![genesis_time, op_1a_time, op_1b_time, op_2_time];
941
941
+
942
942
+
let result = OperationChainValidator::validate_chain_with_forks(&operations, ×tamps);
943
943
+
assert!(result.is_ok());
944
944
+
945
945
+
let state = result.unwrap();
946
946
+
947
947
+
// The final state should be from op_2, which continues from op_1b
948
948
+
assert_eq!(
949
949
+
state.services.get("pds").unwrap().endpoint,
950
950
+
"https://continuation.example.com"
951
951
+
);
952
952
+
}
953
953
+
954
954
+
/// Test fork with three competing operations
955
955
+
#[test]
956
956
+
fn test_fork_resolution_three_way_fork() {
957
957
+
let key1 = SigningKey::generate_p256(); // Highest priority
958
958
+
let key2 = SigningKey::generate_k256(); // Medium priority
959
959
+
let key3 = SigningKey::generate_p256(); // Lowest priority
960
960
+
961
961
+
let rotation_keys = vec![key1.to_did_key(), key2.to_did_key(), key3.to_did_key()];
962
962
+
963
963
+
let genesis = Operation::new_genesis(
964
964
+
rotation_keys.clone(),
965
965
+
HashMap::new(),
966
966
+
vec![],
967
967
+
HashMap::new(),
968
968
+
)
969
969
+
.sign(&key1)
970
970
+
.unwrap();
971
971
+
972
972
+
let genesis_cid = genesis.cid().unwrap();
973
973
+
let genesis_time = Utc::now();
974
974
+
975
975
+
// Three competing operations
976
976
+
let mut services_a = HashMap::new();
977
977
+
services_a.insert(
978
978
+
"pds".to_string(),
979
979
+
ServiceEndpoint {
980
980
+
service_type: "AtprotoPersonalDataServer".to_string(),
981
981
+
endpoint: "https://op-a.example.com".to_string(),
982
982
+
},
983
983
+
);
984
984
+
985
985
+
let op_a = Operation::new_update(
986
986
+
rotation_keys.clone(),
987
987
+
HashMap::new(),
988
988
+
vec![],
989
989
+
services_a,
990
990
+
genesis_cid.clone(),
991
991
+
)
992
992
+
.sign(&key3)
993
993
+
.unwrap(); // Lowest priority, first to arrive
994
994
+
995
995
+
let op_a_time = genesis_time + Duration::hours(1);
996
996
+
997
997
+
let mut services_b = HashMap::new();
998
998
+
services_b.insert(
999
999
+
"pds".to_string(),
1000
1000
+
ServiceEndpoint {
1001
1001
+
service_type: "AtprotoPersonalDataServer".to_string(),
1002
1002
+
endpoint: "https://op-b.example.com".to_string(),
1003
1003
+
},
1004
1004
+
);
1005
1005
+
1006
1006
+
let op_b = Operation::new_update(
1007
1007
+
rotation_keys.clone(),
1008
1008
+
HashMap::new(),
1009
1009
+
vec![],
1010
1010
+
services_b,
1011
1011
+
genesis_cid.clone(),
1012
1012
+
)
1013
1013
+
.sign(&key2)
1014
1014
+
.unwrap(); // Medium priority
1015
1015
+
1016
1016
+
let op_b_time = op_a_time + Duration::hours(2);
1017
1017
+
1018
1018
+
let mut services_c = HashMap::new();
1019
1019
+
services_c.insert(
1020
1020
+
"pds".to_string(),
1021
1021
+
ServiceEndpoint {
1022
1022
+
service_type: "AtprotoPersonalDataServer".to_string(),
1023
1023
+
endpoint: "https://op-c.example.com".to_string(),
1024
1024
+
},
1025
1025
+
);
1026
1026
+
1027
1027
+
let op_c = Operation::new_update(
1028
1028
+
rotation_keys.clone(),
1029
1029
+
HashMap::new(),
1030
1030
+
vec![],
1031
1031
+
services_c,
1032
1032
+
genesis_cid,
1033
1033
+
)
1034
1034
+
.sign(&key1)
1035
1035
+
.unwrap(); // Highest priority, within window
1036
1036
+
1037
1037
+
let op_c_time = op_a_time + Duration::hours(10);
1038
1038
+
1039
1039
+
let operations = vec![
1040
1040
+
genesis.clone(),
1041
1041
+
op_a.clone(),
1042
1042
+
op_b.clone(),
1043
1043
+
op_c.clone(),
1044
1044
+
];
1045
1045
+
let timestamps = vec![genesis_time, op_a_time, op_b_time, op_c_time];
1046
1046
+
1047
1047
+
let result = OperationChainValidator::validate_chain_with_forks(&operations, ×tamps);
1048
1048
+
assert!(result.is_ok());
1049
1049
+
1050
1050
+
let state = result.unwrap();
1051
1051
+
1052
1052
+
// Operation C should win (highest priority, within window)
1053
1053
+
assert_eq!(
1054
1054
+
state.services.get("pds").unwrap().endpoint,
1055
1055
+
"https://op-c.example.com"
1056
1056
+
);
1057
1057
+
}
1058
1058
+
1059
1059
+
/// Test no fork - linear chain should work as before
1060
1060
+
#[test]
1061
1061
+
fn test_fork_resolution_no_fork() {
1062
1062
+
let key = SigningKey::generate_p256();
1063
1063
+
let rotation_keys = vec![key.to_did_key()];
1064
1064
+
1065
1065
+
let genesis = Operation::new_genesis(
1066
1066
+
rotation_keys.clone(),
1067
1067
+
HashMap::new(),
1068
1068
+
vec![],
1069
1069
+
HashMap::new(),
1070
1070
+
)
1071
1071
+
.sign(&key)
1072
1072
+
.unwrap();
1073
1073
+
1074
1074
+
let genesis_cid = genesis.cid().unwrap();
1075
1075
+
let genesis_time = Utc::now();
1076
1076
+
1077
1077
+
let mut services = HashMap::new();
1078
1078
+
services.insert(
1079
1079
+
"pds".to_string(),
1080
1080
+
ServiceEndpoint {
1081
1081
+
service_type: "AtprotoPersonalDataServer".to_string(),
1082
1082
+
endpoint: "https://pds.example.com".to_string(),
1083
1083
+
},
1084
1084
+
);
1085
1085
+
1086
1086
+
let op1 = Operation::new_update(
1087
1087
+
rotation_keys.clone(),
1088
1088
+
HashMap::new(),
1089
1089
+
vec![],
1090
1090
+
services.clone(),
1091
1091
+
genesis_cid,
1092
1092
+
)
1093
1093
+
.sign(&key)
1094
1094
+
.unwrap();
1095
1095
+
1096
1096
+
let op1_time = genesis_time + Duration::hours(1);
1097
1097
+
1098
1098
+
let operations = vec![genesis.clone(), op1.clone()];
1099
1099
+
let timestamps = vec![genesis_time, op1_time];
1100
1100
+
1101
1101
+
let result = OperationChainValidator::validate_chain_with_forks(&operations, ×tamps);
1102
1102
+
assert!(result.is_ok());
1103
1103
+
1104
1104
+
let state = result.unwrap();
1105
1105
+
assert_eq!(
1106
1106
+
state.services.get("pds").unwrap().endpoint,
1107
1107
+
"https://pds.example.com"
1108
1108
+
);
1109
1109
+
}
1110
1110
+
1111
1111
+
/// Test fork resolution with rotation key changes
1112
1112
+
#[test]
1113
1113
+
fn test_fork_resolution_with_key_rotation() {
1114
1114
+
let key1 = SigningKey::generate_p256();
1115
1115
+
let key2 = SigningKey::generate_k256();
1116
1116
+
let key3 = SigningKey::generate_p256();
1117
1117
+
1118
1118
+
// Initial rotation keys
1119
1119
+
let rotation_keys_v1 = vec![key1.to_did_key(), key2.to_did_key()];
1120
1120
+
1121
1121
+
let genesis = Operation::new_genesis(
1122
1122
+
rotation_keys_v1.clone(),
1123
1123
+
HashMap::new(),
1124
1124
+
vec![],
1125
1125
+
HashMap::new(),
1126
1126
+
)
1127
1127
+
.sign(&key1)
1128
1128
+
.unwrap();
1129
1129
+
1130
1130
+
let genesis_cid = genesis.cid().unwrap();
1131
1131
+
let genesis_time = Utc::now();
1132
1132
+
1133
1133
+
// Update rotation keys in first operation
1134
1134
+
let rotation_keys_v2 = vec![key1.to_did_key(), key3.to_did_key()];
1135
1135
+
1136
1136
+
let op1 = Operation::new_update(
1137
1137
+
rotation_keys_v2.clone(),
1138
1138
+
HashMap::new(),
1139
1139
+
vec![],
1140
1140
+
HashMap::new(),
1141
1141
+
genesis_cid,
1142
1142
+
)
1143
1143
+
.sign(&key1)
1144
1144
+
.unwrap();
1145
1145
+
1146
1146
+
let op1_cid = op1.cid().unwrap();
1147
1147
+
let op1_time = genesis_time + Duration::hours(1);
1148
1148
+
1149
1149
+
// Create a fork at op1 - both operations use the new rotation keys
1150
1150
+
let mut services_a = HashMap::new();
1151
1151
+
services_a.insert(
1152
1152
+
"pds".to_string(),
1153
1153
+
ServiceEndpoint {
1154
1154
+
service_type: "AtprotoPersonalDataServer".to_string(),
1155
1155
+
endpoint: "https://op-a.example.com".to_string(),
1156
1156
+
},
1157
1157
+
);
1158
1158
+
1159
1159
+
let op_a = Operation::new_update(
1160
1160
+
rotation_keys_v2.clone(),
1161
1161
+
HashMap::new(),
1162
1162
+
vec![],
1163
1163
+
services_a,
1164
1164
+
op1_cid.clone(),
1165
1165
+
)
1166
1166
+
.sign(&key3)
1167
1167
+
.unwrap(); // Index 1 in new keys
1168
1168
+
1169
1169
+
let op_a_time = op1_time + Duration::hours(1);
1170
1170
+
1171
1171
+
let mut services_b = HashMap::new();
1172
1172
+
services_b.insert(
1173
1173
+
"pds".to_string(),
1174
1174
+
ServiceEndpoint {
1175
1175
+
service_type: "AtprotoPersonalDataServer".to_string(),
1176
1176
+
endpoint: "https://op-b.example.com".to_string(),
1177
1177
+
},
1178
1178
+
);
1179
1179
+
1180
1180
+
let op_b = Operation::new_update(
1181
1181
+
rotation_keys_v2.clone(),
1182
1182
+
HashMap::new(),
1183
1183
+
vec![],
1184
1184
+
services_b,
1185
1185
+
op1_cid,
1186
1186
+
)
1187
1187
+
.sign(&key1)
1188
1188
+
.unwrap(); // Index 0 in new keys (higher priority)
1189
1189
+
1190
1190
+
let op_b_time = op_a_time + Duration::hours(2);
1191
1191
+
1192
1192
+
let operations = vec![
1193
1193
+
genesis.clone(),
1194
1194
+
op1.clone(),
1195
1195
+
op_a.clone(),
1196
1196
+
op_b.clone(),
1197
1197
+
];
1198
1198
+
let timestamps = vec![genesis_time, op1_time, op_a_time, op_b_time];
1199
1199
+
1200
1200
+
let result = OperationChainValidator::validate_chain_with_forks(&operations, ×tamps);
1201
1201
+
assert!(result.is_ok());
1202
1202
+
1203
1203
+
let state = result.unwrap();
1204
1204
+
1205
1205
+
// Operation B should win (signed by key1 which is index 0 in the new rotation keys)
1206
1206
+
assert_eq!(
1207
1207
+
state.services.get("pds").unwrap().endpoint,
1208
1208
+
"https://op-b.example.com"
1209
1209
+
);
1210
1210
+
}
1211
1211
+
1212
1212
+
/// Test that operations with mismatched timestamps and operations fail
1213
1213
+
#[test]
1214
1214
+
fn test_fork_resolution_mismatched_lengths() {
1215
1215
+
let key = SigningKey::generate_p256();
1216
1216
+
let rotation_keys = vec![key.to_did_key()];
1217
1217
+
1218
1218
+
let genesis = Operation::new_genesis(
1219
1219
+
rotation_keys,
1220
1220
+
HashMap::new(),
1221
1221
+
vec![],
1222
1222
+
HashMap::new(),
1223
1223
+
)
1224
1224
+
.sign(&key)
1225
1225
+
.unwrap();
1226
1226
+
1227
1227
+
let operations = vec![genesis];
1228
1228
+
let timestamps = vec![Utc::now(), Utc::now()]; // Different length
1229
1229
+
1230
1230
+
let result = OperationChainValidator::validate_chain_with_forks(&operations, ×tamps);
1231
1231
+
assert!(result.is_err());
1232
1232
+
}
1233
1233
+
1234
1234
+
/// Test recovery window boundary (exactly 72 hours)
1235
1235
+
#[test]
1236
1236
+
fn test_fork_resolution_recovery_window_boundary() {
1237
1237
+
let primary_key = SigningKey::generate_p256();
1238
1238
+
let backup_key = SigningKey::generate_k256();
1239
1239
+
1240
1240
+
let rotation_keys = vec![primary_key.to_did_key(), backup_key.to_did_key()];
1241
1241
+
1242
1242
+
let genesis = Operation::new_genesis(
1243
1243
+
rotation_keys.clone(),
1244
1244
+
HashMap::new(),
1245
1245
+
vec![],
1246
1246
+
HashMap::new(),
1247
1247
+
)
1248
1248
+
.sign(&primary_key)
1249
1249
+
.unwrap();
1250
1250
+
1251
1251
+
let genesis_cid = genesis.cid().unwrap();
1252
1252
+
let genesis_time = Utc::now();
1253
1253
+
1254
1254
+
// Operation A: signed by backup key
1255
1255
+
let mut services_a = HashMap::new();
1256
1256
+
services_a.insert(
1257
1257
+
"pds".to_string(),
1258
1258
+
ServiceEndpoint {
1259
1259
+
service_type: "AtprotoPersonalDataServer".to_string(),
1260
1260
+
endpoint: "https://op-a.example.com".to_string(),
1261
1261
+
},
1262
1262
+
);
1263
1263
+
1264
1264
+
let op_a = Operation::new_update(
1265
1265
+
rotation_keys.clone(),
1266
1266
+
HashMap::new(),
1267
1267
+
vec![],
1268
1268
+
services_a,
1269
1269
+
genesis_cid.clone(),
1270
1270
+
)
1271
1271
+
.sign(&backup_key)
1272
1272
+
.unwrap();
1273
1273
+
1274
1274
+
let op_a_time = genesis_time + Duration::hours(1);
1275
1275
+
1276
1276
+
// Operation B: exactly at 72-hour boundary (should still be within window)
1277
1277
+
let mut services_b = HashMap::new();
1278
1278
+
services_b.insert(
1279
1279
+
"pds".to_string(),
1280
1280
+
ServiceEndpoint {
1281
1281
+
service_type: "AtprotoPersonalDataServer".to_string(),
1282
1282
+
endpoint: "https://op-b.example.com".to_string(),
1283
1283
+
},
1284
1284
+
);
1285
1285
+
1286
1286
+
let op_b = Operation::new_update(
1287
1287
+
rotation_keys.clone(),
1288
1288
+
HashMap::new(),
1289
1289
+
vec![],
1290
1290
+
services_b,
1291
1291
+
genesis_cid,
1292
1292
+
)
1293
1293
+
.sign(&primary_key)
1294
1294
+
.unwrap();
1295
1295
+
1296
1296
+
// Exactly 72 hours after op_a
1297
1297
+
let op_b_time = op_a_time + Duration::hours(72);
1298
1298
+
1299
1299
+
let operations = vec![genesis.clone(), op_a.clone(), op_b.clone()];
1300
1300
+
let timestamps = vec![genesis_time, op_a_time, op_b_time];
1301
1301
+
1302
1302
+
let result = OperationChainValidator::validate_chain_with_forks(&operations, ×tamps);
1303
1303
+
assert!(result.is_ok());
1304
1304
+
1305
1305
+
let state = result.unwrap();
1306
1306
+
1307
1307
+
// At exactly 72 hours, the higher priority operation should still win
1308
1308
+
assert_eq!(
1309
1309
+
state.services.get("pds").unwrap().endpoint,
1310
1310
+
"https://op-b.example.com"
1311
1311
+
);
373
1312
}
374
1313
}
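The rule these tests exercise can be reduced to a single predicate: a later operation forking from the same prev CID displaces the earlier one only if it is signed by a strictly higher-priority rotation key (lower index) and arrives no more than 72 hours after it. A minimal standalone sketch of that decision (the `challenger_wins` helper is hypothetical, not part of this crate's API):

```rust
// The recovery window used throughout the fork-resolution tests.
const RECOVERY_WINDOW_HOURS: i64 = 72;

/// Does the later operation (`challenger`) displace the earlier one (`incumbent`)?
/// - `incumbent_key_index` / `challenger_key_index`: positions of the signing keys
///   in the rotation key list (lower index = higher priority).
/// - `hours_after`: how long after the incumbent the challenger was received.
fn challenger_wins(
    incumbent_key_index: usize,
    challenger_key_index: usize,
    hours_after: i64,
) -> bool {
    challenger_key_index < incumbent_key_index && hours_after <= RECOVERY_WINDOW_HOURS
}

fn main() {
    // Higher-priority key within the window: recovery succeeds (op_b wins).
    assert!(challenger_wins(1, 0, 24));
    // Higher-priority key, but outside the window: the incumbent op_a stands.
    assert!(!challenger_wins(1, 0, 100));
    // Exactly at the 72-hour boundary: still within the window.
    assert!(challenger_wins(1, 0, 72));
    println!("ok");
}
```

This mirrors `test_fork_resolution_priority_within_window`, `..._outside_window`, and `..._recovery_window_boundary` above; the real validator additionally verifies signatures and walks the chain rather than comparing two scalars.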
wasm/package-lock.json  (+2 -2)
···
 {
   "name": "atproto-plc",
-  "version": "0.1.0",
+  "version": "0.2.0",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "atproto-plc",
-      "version": "0.1.0",
+      "version": "0.2.0",
       "license": "MIT OR Apache-2.0",
       "devDependencies": {
         "@types/node": "^20.0.0"
wasm/package.json  (+1 -1)
···
 {
   "name": "atproto-plc",
-  "version": "0.1.0",
+  "version": "0.2.0",
   "description": "did-method-plc implementation for ATProto with WASM support",
   "type": "module",
   "main": "index.js",
wasm/plc-audit.js  (+300 -31)
···
   try {
     const response = await fetch(url, {
       headers: {
-        'User-Agent': 'atproto-plc-audit-wasm/0.1.0',
+        'User-Agent': 'atproto-plc-audit-wasm/0.2.0',
       },
     });
···
 }

 /**
+ * Detect if there are fork points in the audit log
+ */
+function detectForks(operations) {
+  const prevCounts = new Map();
+
+  for (const entry of operations) {
+    const prev = entry.operation.prev();
+    if (prev) {
+      prevCounts.set(prev, (prevCounts.get(prev) || 0) + 1);
+    }
+  }
+
+  // If any prev CID is referenced by more than one operation, there's a fork
+  return Array.from(prevCounts.values()).some(count => count > 1);
+}
+
+/**
+ * Build a list of indices that form the canonical chain
+ */
+function buildCanonicalChainIndices(operations) {
+  // Build a map of prev CID to operations
+  const prevToIndices = new Map();
+
+  for (let i = 0; i < operations.length; i++) {
+    const prev = operations[i].operation.prev();
+    if (prev) {
+      if (!prevToIndices.has(prev)) {
+        prevToIndices.set(prev, []);
+      }
+      prevToIndices.get(prev).push(i);
+    }
+  }
+
+  // Start from genesis and follow the canonical chain
+  const canonical = [];
+
+  // Find genesis (first operation)
+  if (operations.length === 0) {
+    return canonical;
+  }
+
+  canonical.push(0);
+  let currentCid = operations[0].cid;
+
+  // Follow the chain, preferring non-nullified operations
+  while (true) {
+    const indices = prevToIndices.get(currentCid);
+    if (!indices || indices.length === 0) {
+      break;
+    }
+
+    // Find the first non-nullified operation
+    const nextIdx = indices.find(idx => !operations[idx].nullified);
+    if (nextIdx !== undefined) {
+      canonical.push(nextIdx);
+      currentCid = operations[nextIdx].cid;
+    } else {
+      // All operations at this point are nullified - fall back to the first one
+      canonical.push(indices[0]);
+      currentCid = operations[indices[0]].cid;
+    }
+  }
+
+  return canonical;
+}
+
+/**
+ * Display the final state after validation
+ */
+function displayFinalState(finalEntry, rawEntry) {
+  const rotationKeys = finalEntry.operation.rotationKeys();
+
+  if (!rotationKeys) {
+    console.error('❌ Error: Could not extract final state');
+    process.exit(1);
+  }
+
+  if (args.quiet) {
+    console.log('✅ VALID');
+  } else {
+    console.log('✅ Validation successful!');
+    console.log();
+    console.log('📄 Final DID State:');
+    console.log('  Rotation keys:', rotationKeys.length);
+    for (let i = 0; i < rotationKeys.length; i++) {
+      console.log(`    [${i}] ${rotationKeys[i]}`);
+    }
+    console.log();
+
+    // Extract additional state from the raw operation
+    const op = rawEntry.operation;
+    if (op.verificationMethods) {
+      const vmKeys = Object.keys(op.verificationMethods);
+      console.log('  Verification methods:', vmKeys.length);
+      for (const name of vmKeys) {
+        console.log(`    ${name}: ${op.verificationMethods[name]}`);
+      }
+      console.log();
+    }
+
+    if (op.alsoKnownAs && op.alsoKnownAs.length > 0) {
+      console.log('  Also known as:', op.alsoKnownAs.length);
+      for (const uri of op.alsoKnownAs) {
+        console.log(`    - ${uri}`);
+      }
+      console.log();
+    }
+
+    if (op.services) {
+      const serviceNames = Object.keys(op.services);
+      if (serviceNames.length > 0) {
+        console.log('  Services:', serviceNames.length);
+        for (const name of serviceNames) {
+          const service = op.services[name];
+          console.log(`    ${name}: ${service.endpoint} (${service.type})`);
+        }
+      }
+    }
+  }
+}
+
 /**
  * Main validation logic
  */
 async function main() {
···
     console.log();
   }

-  // Validate the operation chain
+  // Detect forks and build canonical chain
   if (!args.quiet) {
-    console.log('🔐 Validating operation chain...');
+    console.log('🔐 Analyzing operation chain...');
     console.log();
   }

-  // Step 1: Validate chain linkage (prev references)
+  // Detect fork points and nullified operations
+  const hasForks = detectForks(operations);
+  const hasNullified = operations.some(e => e.nullified);
+
+  if (hasForks || hasNullified) {
+    if (!args.quiet) {
+      if (hasForks) {
+        console.log('⚠️ Fork detected - multiple operations reference the same prev CID');
+      }
+      if (hasNullified) {
+        console.log('⚠️ Nullified operations detected - will validate canonical chain only');
+      }
+      console.log();
+    }
+
+    // Build canonical chain
+    if (args.verbose) {
+      console.log('Step 1: Fork Resolution & Canonical Chain Building');
+      console.log('===================================================');
+    }
+
+    const canonicalIndices = buildCanonicalChainIndices(operations);
+
+    if (args.verbose) {
+      console.log('  ✅ Fork resolution complete');
+      console.log('  ✅ Canonical chain identified');
+      console.log();
+
+      console.log('Canonical Chain Operations:');
+      console.log('===========================');
+
+      for (const idx of canonicalIndices) {
+        const entry = operations[idx];
+        console.log(`  [${idx}] ✅ ${entry.cid} - ${entry.createdAt}`);
+      }
+      console.log();
+
+      if (hasNullified) {
+        console.log('Nullified/Rejected Operations:');
+        console.log('==============================');
+        for (let i = 0; i < operations.length; i++) {
+          const entry = operations[i];
+          if (entry.nullified && !canonicalIndices.includes(i)) {
+            console.log(`  [${i}] ❌ ${entry.cid} - ${entry.createdAt} (nullified)`);
+            const prev = entry.operation.prev();
+            if (prev) {
+              console.log('    Referenced:', prev);
+            }
+          }
+        }
+        console.log();
+      }
+    }
+
+    // Validate signatures along canonical chain
+    if (args.verbose) {
+      console.log('Step 2: Cryptographic Signature Validation');
+      console.log('==========================================');
+    }
+
+    let currentRotationKeys = [];
+
+    for (const idx of canonicalIndices) {
+      const entry = operations[idx];
+
+      // For genesis operation, extract rotation keys
+      if (idx === 0) {
+        if (args.verbose) {
+          console.log(`  [${idx}] Genesis operation - extracting rotation keys`);
+        }
+
+        const rotationKeys = entry.operation.rotationKeys();
+        if (rotationKeys) {
+          currentRotationKeys = rotationKeys;
+
+          if (args.verbose) {
+            console.log('    Rotation keys:', rotationKeys.length);
+            for (let j = 0; j < rotationKeys.length; j++) {
+              console.log(`      [${j}] ${rotationKeys[j]}`);
+            }
+            console.log('    ⚠️ Genesis signature cannot be verified (bootstrapping trust)');
+          }
+        }
+        continue;
+      }
+
+      if (args.verbose) {
+        console.log(`  [${idx}] Validating signature...`);
+        console.log('    CID:', entry.cid);
+        console.log('    Signature:', entry.operation.signature());
+      }
+
+      // Validate signature using current rotation keys
+      if (currentRotationKeys.length > 0) {
+        if (args.verbose) {
+          console.log('    Available rotation keys:', currentRotationKeys.length);
+          for (let j = 0; j < currentRotationKeys.length; j++) {
+            console.log(`      [${j}] ${currentRotationKeys[j]}`);
+          }
+        }
+
+        // Parse verifying keys
+        const verifyingKeys = [];
+        for (const keyStr of currentRotationKeys) {
+          try {
+            verifyingKeys.push(WasmVerifyingKey.fromDidKey(keyStr));
+          } catch (error) {
+            console.error(`Warning: Failed to parse rotation key: ${keyStr}`);
+          }
+        }
+
+        if (args.verbose) {
+          console.log(`    Parsed verifying keys: ${verifyingKeys.length}/${currentRotationKeys.length}`);
+        }
+
+        // Try to verify with each key and track which one worked
+        try {
+          const keyIndex = entry.operation.verifyWithKeyIndex(verifyingKeys);
+
+          if (args.verbose) {
+            console.log(`    ✅ Signature verified with rotation key [${keyIndex}]`);
+            console.log(`       ${currentRotationKeys[keyIndex]}`);
+          }
+        } catch (error) {
+          console.error();
+          console.error(`❌ Validation failed: Invalid signature at operation ${idx}`);
+          console.error('   Error:', error.message);
+          console.error('   CID:', entry.cid);
+          console.error(`   Tried ${verifyingKeys.length} rotation keys, none verified the signature`);
+          process.exit(1);
+        }
+      }
+
+      // Update rotation keys if this operation changes them
+      const newRotationKeys = entry.operation.rotationKeys();
+      if (newRotationKeys) {
+        const keysChanged = JSON.stringify(newRotationKeys) !== JSON.stringify(currentRotationKeys);
+
+        if (keysChanged) {
+          if (args.verbose) {
+            console.log('    🔄 Rotation keys updated by this operation');
+            console.log('      Old keys:', currentRotationKeys.length);
+            console.log('      New keys:', newRotationKeys.length);
+            for (let j = 0; j < newRotationKeys.length; j++) {
+              console.log(`        [${j}] ${newRotationKeys[j]}`);
+            }
+          }
+          currentRotationKeys = newRotationKeys;
+        }
+      }
+    }
+
+    if (args.verbose) {
+      console.log();
+      console.log('✅ Cryptographic signature validation complete');
+      console.log();
+    }
+
+    // Build final state
+    const finalIdx = canonicalIndices[canonicalIndices.length - 1];
+    const finalEntry = operations[finalIdx];
+    const finalRawEntry = auditLog[finalIdx];
+    displayFinalState(finalEntry, finalRawEntry);
+    return;
+  }
+
+  // Simple linear chain validation (no forks or nullified operations)
   if (args.verbose) {
-    console.log('Step 1: Chain Linkage Validation');
+    console.log('Step 1: Linear Chain Validation');
     console.log('================================');
   }

   for (let i = 1; i < operations.length; i++) {
-    if (operations[i].nullified) {
-      if (args.verbose) {
-        console.log(`  [${i}] ⊘ Skipped (nullified)`);
-      }
-      continue;
-    }
-
     const prevCid = operations[i - 1].cid;
     const expectedPrev = operations[i].operation.prev();
···
   }

   // Build final state
-  const finalEntry = operations.filter(e => !e.nullified).pop();
-  const finalRotationKeys = finalEntry.operation.rotationKeys();
-
-  if (finalRotationKeys) {
-    if (args.quiet) {
-      console.log('✅ VALID');
-    } else {
-      console.log('✅ Validation successful!');
-      console.log();
-      console.log('📄 Final DID State:');
-      console.log('  Rotation keys:', finalRotationKeys.length);
-      for (let i = 0; i < finalRotationKeys.length; i++) {
-        console.log(`    [${i}] ${finalRotationKeys[i]}`);
-      }
-    }
-  } else {
-    console.error('❌ Error: Could not extract final state');
-    process.exit(1);
-  }
+  const finalEntry = operations[operations.length - 1];
+  const finalRawEntry = auditLog[auditLog.length - 1];
+  displayFinalState(finalEntry, finalRawEntry);

 } catch (error) {
   console.error('❌ Fatal error:', error.message);