-# Development Workflow
-
-## Setting Up Firehose Feed with Known DIDs
-
-For development and testing, you can populate your local feed with known Arabica users:
-
-### 1. Create a Known DIDs File
-
-Create `known-dids.txt` in the project root:
-
-```bash
-cat > known-dids.txt << 'EOF'
-# Known Arabica users for development
-# Add one DID per line
-
-# Example (replace with real DIDs):
-# did:plc:abc123xyz
-# did:plc:def456uvw
-
-EOF
-```
-
-### 2. Find DIDs to Add
-
-You can find DIDs of Arabica users in several ways:
-
-**From Bluesky profiles:**
-- Visit a user's profile on Bluesky
-- Check the URL or profile metadata for their DID
-
-**From authenticated sessions:**
-- After logging into Arabica, check your browser cookies
-- The `did` cookie contains your DID
-
-**From AT Protocol explorer tools:**
-- Use tools like `atproto.blue` to search for users
-
-### 3. Run Server with Backfill
-
-```bash
-# Start server with firehose and backfill
-go run cmd/server/main.go --firehose --known-dids known-dids.txt
-
-# Or with nix (requires adding flags to flake.nix)
-nix run -- --firehose --known-dids known-dids.txt
-```
-
-### 4. Monitor Backfill Progress
-
-Watch the logs for backfill activity:
-
-```
-{"level":"info","count":3,"file":"known-dids.txt","message":"Loaded known DIDs from file"}
-{"level":"info","did":"did:plc:abc123xyz","message":"backfilling user records"}
-{"level":"info","total":5,"success":5,"message":"Backfill complete"}
-```
-
-### 5. Verify Feed Data
-
-Once backfilled, check:
-- Home page feed should show brews from backfilled users
-- `/feed` endpoint should return feed items
-- Database should contain indexed records
-
-## File Format Notes
-
-The `known-dids.txt` file supports:
-
-- **Comments**: Lines starting with `#`
-- **Empty lines**: Ignored
-- **Whitespace**: Automatically trimmed
-- **Validation**: Non-DID lines logged as warnings
-
-Example valid file:
-
-```
-# Coffee enthusiasts to follow
-did:plc:user1abc
-
-# Another user
-did:plc:user2def
-
-did:web:coffee.example.com # Web DID example
-```
-
-## Security Note
-
-⚠️ **Important**: The `known-dids.txt` file is gitignored by default. Do not commit DIDs unless you have permission from the users.
-
-For production deployments, rely on organic discovery via firehose rather than manual DID lists.
-82
.skills/htmx-alpine-integration.md
-# HTMX + Alpine.js Integration Pattern
-
-## Problem: "Alpine Expression Error: [variable] is not defined"
-
-When HTMX swaps in content containing Alpine.js directives (like `x-show`, `x-if`, `@click`), Alpine may not automatically process the new DOM elements, resulting in console errors like:
-
-```
-Alpine Expression Error: activeTab is not defined
-Expression: "activeTab === 'brews'"
-```
-
-## Root Cause
-
-HTMX loads and swaps content into the DOM after Alpine has already initialized. The new elements contain Alpine directives that reference variables in a parent Alpine component's scope, but Alpine doesn't automatically bind these new elements to the existing component.
-
-## Solution
-
-Use HTMX's `hx-on::after-swap` event to manually tell Alpine to initialize the new DOM tree:
-
-```html
-<div id="content"
-     hx-get="/api/data"
-     hx-trigger="load"
-     hx-swap="innerHTML"
-     hx-on::after-swap="Alpine.initTree($el)">
-</div>
-```
-
-### Key Points
-
-- `hx-on::after-swap` - HTMX event that fires after content swap completes
-- `Alpine.initTree($el)` - Tells Alpine to process all directives in the swapped element
-- `$el` - HTMX provides this as the target element that received the swap
-
-## Common Scenario
-
-**Parent template** (defines Alpine scope):
-```html
-<div x-data="{ activeTab: 'brews' }">
-  <!-- Static content with tab buttons -->
-  <button @click="activeTab = 'brews'">Brews</button>
-
-  <!-- HTMX loads dynamic content here -->
-  <div id="content"
-       hx-get="/api/tabs"
-       hx-trigger="load"
-       hx-swap="innerHTML"
-       hx-on::after-swap="Alpine.initTree($el)">
-  </div>
-</div>
-```
-
-**Loaded partial** (uses parent scope):
-```html
-<div x-show="activeTab === 'brews'">
-  <!-- Brew content -->
-</div>
-<div x-show="activeTab === 'beans'">
-  <!-- Bean content -->
-</div>
-```
-
-Without `Alpine.initTree($el)`, the `x-show` directives won't be bound to the parent's `activeTab` variable.
-
-## Alternative: Alpine Morph Plugin
-
-For more complex scenarios with nested Alpine components, use the Alpine Morph plugin:
-
-```html
-<script src="https://cdn.jsdelivr.net/npm/@alpinejs/morph@3.x.x/dist/cdn.min.js"></script>
-<div hx-swap="morph"></div>
-```
-
-This preserves Alpine state during swaps but requires the plugin.
-
-## When to Use
-
-Apply this pattern whenever:
-1. HTMX loads content containing Alpine directives
-2. The loaded content references variables from a parent Alpine component
-3. You see "Expression Error: [variable] is not defined" in console
-4. Alpine directives in HTMX-loaded content don't work (no reactivity, clicks ignored, etc.)
+12-7
BACKLOG.md
···
 - If adding mobile apps, third-party API consumers, or microservices architecture, revisit this
 - For now, monolithic approach is appropriate for HTMX-based web app with decentralized storage
 
-- Backfill seems to be called when user hits homepage, probably only needs to be done on startup
+- Maybe swap from boltdb to sqlite
+  - Use the non-cgo library
 
 ## Fixes
 
-- After adding a bean via add brew, that bean does not show up in the drop down until after a refresh
-  - Happens with grinders and likely brewers also
+- Homepage still shows cached feed items on homepage when not authed. Should show a cached version of firehose (last 5 entries, cache last 20) from the server.
+  This fetch should not try to backfill anything
+
+- Feed database in prod seems to be showing outdated data -- not sure why, local dev seems to show most recent.
+
+- View button for somebody else's brew leads to an invalid page. Need to show the same view brew page but w/o the edit and delete buttons.
+- Back button in view should take user back to their previous page (not sure how to handle this exactly though)
+
+- Header should probably always be attached to the top of the screen?
 
-- Adding a grinder via the new brew page does not populate fields correctly other than the name
-  - Also seems to happen to brewers
-  - To solve this issue and the above, we likely should consolidate creation to use the same popup as the manage page uses,
-    since that one works, and should already be a template partial.
+- Feed item "view details" button should go away; the "new brew" in "added a new brew" should take to view page instead (underline this text)
+15-7
CLAUDE.md
···
 - Invalidated on writes
 - Background cleanup removes expired entries
 
+### Backfill Strategy
+
+User records are backfilled from their PDS once per DID:
+
+- **On startup**: Backfills registered users + known-dids file
+- **On first login**: Backfills the user's historical records
+- **Deduplication**: Tracks backfilled DIDs in `BucketBackfilled` to prevent redundant fetches
+- **Idempotent**: Safe to call multiple times (checks backfill status first)
+
+This prevents excessive PDS requests while ensuring new users' historical data is indexed.
+
 ## Common Tasks
 
 ### Run Development Server
 
 ```bash
-# Basic mode (polling-based feed)
+# Run server (uses firehose mode by default)
 go run cmd/server/main.go
 
-# With firehose (real-time AT Protocol feed)
-go run cmd/server/main.go --firehose
-
-# With firehose + backfill known DIDs
-go run cmd/server/main.go --firehose --known-dids known-dids.txt
+# Backfill known DIDs on startup
+go run cmd/server/main.go --known-dids known-dids.txt
 
 # Using nix
 nix run
···
 
 | Flag           | Type   | Default | Description                                        |
 | -------------- | ------ | ------- | -------------------------------------------------- |
-| `--firehose`   | bool   | false   | Enable real-time firehose feed via Jetstream       |
+| `--firehose`   | bool   | true    | [DEPRECATED] Firehose is now the default (ignored) |
 | `--known-dids` | string | ""      | Path to file with DIDs to backfill (one per line)  |
 
 **Known DIDs File Format:**
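The deduplication and idempotence described in the Backfill Strategy bullets can be sketched with an in-memory set standing in for `BucketBackfilled` (illustrative only; the real tracker persists to BoltDB, and `MarkIfNew` is a hypothetical name):

```go
package main

import (
	"fmt"
	"sync"
)

// backfillTracker sketches the dedup described above: an in-memory stand-in
// for the BucketBackfilled bucket. MarkIfNew returns true only the first
// time a DID is seen, so repeated backfill calls become no-ops.
type backfillTracker struct {
	mu   sync.Mutex
	seen map[string]bool
}

func newBackfillTracker() *backfillTracker {
	return &backfillTracker{seen: make(map[string]bool)}
}

// MarkIfNew records the DID and reports whether a backfill should run.
func (t *backfillTracker) MarkIfNew(did string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.seen[did] {
		return false
	}
	t.seen[did] = true
	return true
}

func main() {
	t := newBackfillTracker()
	fmt.Println(t.MarkIfNew("did:plc:abc")) // true: first sighting, backfill runs
	fmt.Println(t.MarkIfNew("did:plc:abc")) // false: already backfilled, skip
}
```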
README.deploy.md → docs/deploy.md
+20-36
README.md
···
 
 Coffee brew tracking application built on ATProto
 
+Development is on GitHub, and is mirrored to Tangled:
+
+- [Tangled](https://tangled.org/arabica.social/arabica)
+- [GitHub](https://github.com/arabica-social/arabica)
+
+GitHub is currently the primary repo, but that may change in the future.
+
+## Features
+
+- Track coffee brews with detailed parameters
+- Store data in your AT Protocol Personal Data Server
+- Community feed of recent brews from registered users (polling or real-time firehose)
+- Manage beans, roasters, grinders, and brewers
+- Export brew data as JSON
+- Mobile-friendly PWA design
+
 ## Tech Stack
 
-- **Backend:** Go with stdlib HTTP router
-- **Storage:** AT Protocol Personal Data Servers
-- **Local DB:** BoltDB for OAuth sessions and feed registry
-- **Templates:** html/template
-- **Frontend:** HTMX + Alpine.js + Tailwind CSS
+- Backend: Go with stdlib HTTP router
+- Storage: AT Protocol Personal Data Servers + BoltDB for local cache
+- Templates: html/template
+- Frontend: HTMX + Alpine.js + Tailwind CSS
 
 ## Quick Start
 
···
 
 ### Command-Line Flags
 
-- `--firehose` - Enable real-time feed via AT Protocol Jetstream (default: false)
 - `--known-dids <file>` - Path to file with DIDs to backfill on startup (one per line)
 
 ### Environment Variables
···
 - `SECURE_COOKIES` - Set to true for HTTPS (default: false)
 - `LOG_LEVEL` - Logging level: debug, info, warn, error (default: info)
 - `LOG_FORMAT` - Log format: console, json (default: console)
-
-## Features
-
-- Track coffee brews with detailed parameters
-- Store data in your AT Protocol Personal Data Server
-- Community feed of recent brews from registered users (polling or real-time firehose)
-- Manage beans, roasters, grinders, and brewers
-- Export brew data as JSON
-- Mobile-friendly PWA design
-
-### Firehose Mode
-
-Enable real-time feed updates via AT Protocol's Jetstream:
-
-```bash
-# Basic firehose mode
-go run cmd/server/main.go --firehose
-
-# With known DIDs for backfill
-go run cmd/server/main.go --firehose --known-dids known-dids.txt
-```
-
-**Known DIDs file format:**
-```
-# Comments start with #
-did:plc:abc123xyz
-did:plc:def456uvw
-```
-
-The firehose automatically indexes **all** Arabica records across the AT Protocol network. The `--known-dids` flag allows you to backfill historical records from specific users on startup (useful for development/testing).
 
 ## Architecture
 
+114
cmd/server/logging_test.go
+package main
+
+import (
+    "bytes"
+    "encoding/json"
+    "os"
+    "path/filepath"
+    "strings"
+    "testing"
+
+    "github.com/rs/zerolog"
+    "github.com/rs/zerolog/log"
+)
+
+// TestKnownDIDsLogging verifies that DIDs are logged correctly
+func TestKnownDIDsLogging(t *testing.T) {
+    // Create a buffer to capture log output
+    var buf bytes.Buffer
+
+    // Configure zerolog to write JSON to our buffer
+    originalLogger := log.Logger
+    defer func() {
+        log.Logger = originalLogger
+    }()
+
+    log.Logger = zerolog.New(&buf).With().Timestamp().Logger()
+    zerolog.SetGlobalLevel(zerolog.InfoLevel)
+
+    // Create a temporary test file
+    tmpDir := t.TempDir()
+    testFile := filepath.Join(tmpDir, "test-dids.txt")
+
+    content := `# Test DIDs
+did:plc:abc123
+did:web:example.com
+did:plc:xyz789
+`
+
+    if err := os.WriteFile(testFile, []byte(content), 0644); err != nil {
+        t.Fatalf("Failed to create test file: %v", err)
+    }
+
+    // Load DIDs from file
+    dids, err := loadKnownDIDs(testFile)
+    if err != nil {
+        t.Fatalf("loadKnownDIDs failed: %v", err)
+    }
+
+    // Simulate logging (like we do in main.go)
+    log.Info().
+        Int("count", len(dids)).
+        Str("file", testFile).
+        Strs("dids", dids).
+        Msg("Loaded known DIDs from file")
+
+    // Parse the log output
+    logOutput := buf.String()
+
+    // Verify it contains JSON log
+    if !strings.Contains(logOutput, "Loaded known DIDs from file") {
+        t.Errorf("Log output missing expected message. Got: %s", logOutput)
+    }
+
+    // Parse as JSON to verify structure
+    var logEntry map[string]interface{}
+    if err := json.Unmarshal([]byte(strings.TrimSpace(logOutput)), &logEntry); err != nil {
+        t.Fatalf("Failed to parse log as JSON: %v\nOutput: %s", err, logOutput)
+    }
+
+    // Verify log fields
+    if logEntry["count"] != float64(3) {
+        t.Errorf("Expected count=3, got %v", logEntry["count"])
+    }
+
+    if logEntry["file"] != testFile {
+        t.Errorf("Expected file=%s, got %v", testFile, logEntry["file"])
+    }
+
+    // Verify DIDs array is present
+    didsFromLog, ok := logEntry["dids"].([]interface{})
+    if !ok {
+        t.Fatalf("Expected 'dids' to be an array, got %T", logEntry["dids"])
+    }
+
+    if len(didsFromLog) != 3 {
+        t.Errorf("Expected 3 DIDs in log, got %d", len(didsFromLog))
+    }
+
+    // Verify DID values
+    expectedDIDs := map[string]bool{
+        "did:plc:abc123":      false,
+        "did:web:example.com": false,
+        "did:plc:xyz789":      false,
+    }
+
+    for _, did := range didsFromLog {
+        didStr, ok := did.(string)
+        if !ok {
+            t.Errorf("DID is not a string: %v", did)
+            continue
+        }
+        if _, exists := expectedDIDs[didStr]; exists {
+            expectedDIDs[didStr] = true
+        } else {
+            t.Errorf("Unexpected DID in log: %s", didStr)
+        }
+    }
+
+    for did, found := range expectedDIDs {
+        if !found {
+            t.Errorf("Expected DID not found in log: %s", did)
+        }
+    }
+}
+89-80
cmd/server/main.go
···
 
 func main() {
     // Parse command-line flags
-    useFirehose := flag.Bool("firehose", false, "Enable firehose-based feed (Jetstream consumer)")
     knownDIDsFile := flag.String("known-dids", "", "Path to file containing DIDs to backfill on startup (one per line)")
     flag.Parse()
 
···
     })
     }
 
-    log.Info().Bool("firehose", *useFirehose).Msg("Starting Arabica Coffee Tracker")
+    log.Info().Msg("Starting Arabica Coffee Tracker")
 
     // Get port from env or use default
     port := os.Getenv("PORT")
···
     sigCh := make(chan os.Signal, 1)
     signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
 
-    // Initialize firehose consumer if enabled
-    var firehoseConsumer *firehose.Consumer
-    if *useFirehose {
-        // Determine feed index path
-        feedIndexPath := os.Getenv("ARABICA_FEED_INDEX_PATH")
-        if feedIndexPath == "" {
-            dataDir := os.Getenv("XDG_DATA_HOME")
-            if dataDir == "" {
-                home, err := os.UserHomeDir()
-                if err != nil {
-                    log.Fatal().Err(err).Msg("Failed to get home directory for feed index")
-                }
-                dataDir = filepath.Join(home, ".local", "share")
-            }
-            feedIndexPath = filepath.Join(dataDir, "arabica", "feed-index.db")
-        }
+    // Initialize firehose consumer
+    // Determine feed index path
+    feedIndexPath := os.Getenv("ARABICA_FEED_INDEX_PATH")
+    if feedIndexPath == "" {
+        dataDir := os.Getenv("XDG_DATA_HOME")
+        if dataDir == "" {
+            home, err := os.UserHomeDir()
+            if err != nil {
+                log.Fatal().Err(err).Msg("Failed to get home directory for feed index")
+            }
+            dataDir = filepath.Join(home, ".local", "share")
+        }
+        feedIndexPath = filepath.Join(dataDir, "arabica", "feed-index.db")
+    }
 
-        // Create firehose config
-        firehoseConfig := firehose.DefaultConfig()
-        firehoseConfig.IndexPath = feedIndexPath
+    // Create firehose config
+    firehoseConfig := firehose.DefaultConfig()
+    firehoseConfig.IndexPath = feedIndexPath
 
-        // Parse profile cache TTL from env if set
-        if ttlStr := os.Getenv("ARABICA_PROFILE_CACHE_TTL"); ttlStr != "" {
-            if ttl, err := time.ParseDuration(ttlStr); err == nil {
-                firehoseConfig.ProfileCacheTTL = int64(ttl.Seconds())
-            }
-        }
+    // Parse profile cache TTL from env if set
+    if ttlStr := os.Getenv("ARABICA_PROFILE_CACHE_TTL"); ttlStr != "" {
+        if ttl, err := time.ParseDuration(ttlStr); err == nil {
+            firehoseConfig.ProfileCacheTTL = int64(ttl.Seconds())
+        }
+    }
 
-        // Create feed index
-        feedIndex, err := firehose.NewFeedIndex(feedIndexPath, time.Duration(firehoseConfig.ProfileCacheTTL)*time.Second)
-        if err != nil {
-            log.Fatal().Err(err).Str("path", feedIndexPath).Msg("Failed to create feed index")
-        }
+    // Create feed index
+    feedIndex, err := firehose.NewFeedIndex(feedIndexPath, time.Duration(firehoseConfig.ProfileCacheTTL)*time.Second)
+    if err != nil {
+        log.Fatal().Err(err).Str("path", feedIndexPath).Msg("Failed to create feed index")
+    }
 
-        log.Info().Str("path", feedIndexPath).Msg("Feed index opened")
+    log.Info().Str("path", feedIndexPath).Msg("Feed index opened")
 
-        // Create and start consumer
-        firehoseConsumer = firehose.NewConsumer(firehoseConfig, feedIndex)
-        firehoseConsumer.Start(ctx)
+    // Create and start consumer
+    firehoseConsumer := firehose.NewConsumer(firehoseConfig, feedIndex)
+    firehoseConsumer.Start(ctx)
 
-        // Wire up the feed service to use the firehose index
-        adapter := firehose.NewFeedIndexAdapter(feedIndex)
-        feedService.SetFirehoseIndex(adapter)
+    // Wire up the feed service to use the firehose index
+    adapter := firehose.NewFeedIndexAdapter(feedIndex)
+    feedService.SetFirehoseIndex(adapter)
 
-        log.Info().Msg("Firehose consumer started")
+    log.Info().Msg("Firehose consumer started")
+
+    // Log known DIDs from database (DIDs discovered via firehose)
+    if knownDIDsFromDB, err := feedIndex.GetKnownDIDs(); err == nil {
+        if len(knownDIDsFromDB) > 0 {
+            log.Info().
+                Int("count", len(knownDIDsFromDB)).
+                Strs("dids", knownDIDsFromDB).
+                Msg("Known DIDs from firehose index")
+        } else {
+            log.Info().Msg("No known DIDs in firehose index yet (will populate as events arrive)")
+        }
+    } else {
+        log.Warn().Err(err).Msg("Failed to retrieve known DIDs from firehose index")
+    }
 
-        // Backfill registered users and known DIDs in background
-        go func() {
-            time.Sleep(5 * time.Second) // Wait for initial connection
+    // Backfill registered users and known DIDs in background
+    go func() {
+        time.Sleep(5 * time.Second) // Wait for initial connection
 
-            // Collect all DIDs to backfill
-            didsToBackfill := make(map[string]struct{})
+        // Collect all DIDs to backfill
+        didsToBackfill := make(map[string]struct{})
 
-            // Add registered users
-            for _, did := range feedRegistry.List() {
-                didsToBackfill[did] = struct{}{}
-            }
+        // Add registered users
+        for _, did := range feedRegistry.List() {
+            didsToBackfill[did] = struct{}{}
+        }
 
-            // Add DIDs from known-dids file if provided
-            if *knownDIDsFile != "" {
-                knownDIDs, err := loadKnownDIDs(*knownDIDsFile)
-                if err != nil {
-                    log.Warn().Err(err).Str("file", *knownDIDsFile).Msg("Failed to load known DIDs file")
-                } else {
-                    for _, did := range knownDIDs {
-                        didsToBackfill[did] = struct{}{}
-                    }
-                    log.Info().Int("count", len(knownDIDs)).Str("file", *knownDIDsFile).Msg("Loaded known DIDs from file")
-                }
-            }
+        // Add DIDs from known-dids file if provided
+        if *knownDIDsFile != "" {
+            knownDIDs, err := loadKnownDIDs(*knownDIDsFile)
+            if err != nil {
+                log.Warn().Err(err).Str("file", *knownDIDsFile).Msg("Failed to load known DIDs file")
+            } else {
+                for _, did := range knownDIDs {
+                    didsToBackfill[did] = struct{}{}
+                }
+                log.Info().
+                    Int("count", len(knownDIDs)).
+                    Str("file", *knownDIDsFile).
+                    Strs("dids", knownDIDs).
+                    Msg("Loaded known DIDs from file")
+            }
+        }
 
-            // Backfill all collected DIDs
-            successCount := 0
-            for did := range didsToBackfill {
-                if err := firehoseConsumer.BackfillDID(ctx, did); err != nil {
-                    log.Warn().Err(err).Str("did", did).Msg("Failed to backfill user")
-                } else {
-                    successCount++
-                }
-            }
-            log.Info().Int("total", len(didsToBackfill)).Int("success", successCount).Msg("Backfill complete")
-        }()
-    }
+        // Backfill all collected DIDs
+        successCount := 0
+        for did := range didsToBackfill {
+            if err := firehoseConsumer.BackfillDID(ctx, did); err != nil {
+                log.Warn().Err(err).Str("did", did).Msg("Failed to backfill user")
+            } else {
+                successCount++
+            }
+        }
+        log.Info().Int("total", len(didsToBackfill)).Int("success", successCount).Msg("Backfill complete")
+    }()
 
     // Register users in the feed when they authenticate
     // This ensures users are added to the feed even if they had an existing session
     oauthManager.SetOnAuthSuccess(func(did string) {
         feedRegistry.Register(did)
-        // If firehose is enabled, backfill the user's records
-        if firehoseConsumer != nil {
-            go func() {
-                if err := firehoseConsumer.BackfillDID(context.Background(), did); err != nil {
-                    log.Warn().Err(err).Str("did", did).Msg("Failed to backfill new user")
-                }
-            }()
-        }
+        // Backfill the user's records
+        go func() {
+            if err := firehoseConsumer.BackfillDID(context.Background(), did); err != nil {
+                log.Warn().Err(err).Str("did", did).Msg("Failed to backfill new user")
+            }
+        }()
     })
 
     if clientID == "" {
···
         Str("address", "0.0.0.0:"+port).
         Str("url", "http://localhost:"+port).
         Bool("secure_cookies", secureCookies).
-        Bool("firehose", *useFirehose).
         Str("database", dbPath).
         Msg("Starting HTTP server")
 
···
     log.Info().Msg("Shutdown signal received")
 
     // Stop firehose consumer first
-    if firehoseConsumer != nil {
-        log.Info().Msg("Stopping firehose consumer...")
-        firehoseConsumer.Stop()
-    }
+    log.Info().Msg("Stopping firehose consumer...")
+    firehoseConsumer.Stop()
 
     // Graceful shutdown of HTTP server
     shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 10*time.Second)
+# Back Button Implementation
+
+## Overview
+
+Implemented a smart back button feature that allows users to navigate back to their previous page across the Arabica application. The solution uses a hybrid approach combining JavaScript's `history.back()` with intelligent fallbacks.
+
+## Approach Chosen: Hybrid JavaScript History with Smart Fallbacks
+
+### Why This Approach?
+
+1. **Best User Experience**: Uses browser history when available, preserving scroll position and form state
+2. **Handles Edge Cases**: Falls back gracefully for direct links, external referrers, and bookmarks
+3. **Simple Implementation**: No server-side session tracking needed
+4. **HTMX Compatible**: Works seamlessly with HTMX navigation and partial page updates
+
+### How It Works
+
+The implementation consists of:
+
+1. **JavaScript Module** (`web/static/js/back-button.js`):
+   - Detects if the user came from within the app (same-origin referrer)
+   - Uses `history.back()` for internal navigation (preserves history stack)
+   - Falls back to a specified URL for external/direct navigation
+   - Automatically re-initializes after HTMX content swaps
+
+2. **HTML Attributes**:
+   - `data-back-button`: Marks an element as a back button
+   - `data-fallback`: Specifies the fallback URL (default: `/brews`)
+
+3. **Visual Design**:
+   - SVG arrow icon for clear affordance
+   - Consistent styling matching the app's brown theme
+   - Hover states for better interactivity
+
+## Implementation Details
+
+### JavaScript Logic
+
+```javascript
+function handleBackNavigation(button) {
+  const fallbackUrl = button.getAttribute('data-fallback') || '/brews';
+  const referrer = document.referrer;
+
+  // Check if referrer is from same origin
+  const hasSameOriginReferrer = referrer &&
+    referrer.startsWith(window.location.origin) &&
+    referrer !== window.location.href;
+
+  if (hasSameOriginReferrer) {
+    window.history.back(); // Use browser history
+  } else {
+    window.location.href = fallbackUrl; // Use fallback
+  }
+}
+```
+
+### Edge Cases Handled
+
+1. **Direct Links** (e.g., bookmarked URL):
+   - Referrer: empty or external
+   - Behavior: Navigate to fallback URL
+
+2. **External Referrers** (e.g., from social media):
+   - Referrer: different origin
+   - Behavior: Navigate to fallback URL
+
+3. **Internal Navigation**:
+   - Referrer: same origin
+   - Behavior: Use `history.back()` (preserves state)
+
+4. **HTMX Partial Updates**:
+   - Automatically reinitializes buttons after HTMX swaps
+   - Ensures back buttons in dynamically loaded content work
+
+5. **Page Refresh**:
+   - Referrer: same as current URL
+   - Behavior: Navigate to fallback URL (prevents staying on same page)
+
+## Files Modified
+
+### New Files
+
+1. **`web/static/js/back-button.js`**
+   - Core back button logic
+   - Initialization and event handling
+   - HTMX integration
+
+### Modified Templates
+
+1. **`templates/layout.tmpl`**
+   - Added back-button.js script reference
+
+2. **`templates/brew_view.tmpl`**
+   - Replaced static "Back to Brews" link with smart back button
+   - Fallback: `/brews`
+
+3. **`templates/brew_form.tmpl`**
+   - Added back button in header (for both new and edit modes)
+   - Fallback: `/brews`
+
+4. **`templates/about.tmpl`**
+   - Added back button in header
+   - Fallback: `/` (home page)
+
+5. **`templates/terms.tmpl`**
+   - Added back button in header
+   - Fallback: `/` (home page)
+
+6. **`templates/manage.tmpl`**
+   - Added back button in header
+   - Fallback: `/brews`
+
+## Usage Examples
+
+### Basic Back Button
+```html
+<button
+  data-back-button
+  data-fallback="/brews"
+  class="...">
+  Back
+</button>
+```
+
+### With Custom Fallback
+```html
+<button
+  data-back-button
+  data-fallback="/profile"
+  class="...">
+  Back to Profile
+</button>
+```
+
+### With Icon (as implemented)
+```html
+<button
+  data-back-button
+  data-fallback="/brews"
+  class="inline-flex items-center text-brown-700 hover:text-brown-900 font-medium transition-colors cursor-pointer">
+  <svg class="w-5 h-5" ...>
+    <path d="M10 19l-7-7m0 0l7-7m-7 7h18"/>
+  </svg>
+</button>
+```
+
+## Navigation Flow Examples
+
+### Example 1: Normal Flow
+1. User visits `/` (home)
+2. Clicks "View All Brews" → `/brews`
+3. Clicks on a brew → `/brews/abc123`
+4. Clicks back button → Returns to `/brews` (via history.back())
+
+### Example 2: Direct Link
+1. User opens bookmark directly to `/brews/abc123`
+2. Clicks back button → Navigates to `/brews` (fallback)
+
+### Example 3: External Referrer
+1. User clicks link from Twitter to `/brews/abc123`
+2. Clicks back button → Navigates to `/brews` (fallback, not back to Twitter)
+
+### Example 4: Profile to Brew
+1. User visits `/profile/@alice.bsky.social`
+2. Clicks on a brew → `/brews/abc123`
+3. Clicks back button → Returns to `/profile/@alice.bsky.social`
+
+## Limitations
+
+1. **No History Stack Detection**:
+   - Cannot reliably detect if history stack is empty
+   - Uses referrer as a proxy, which is a reasonable heuristic
+
+2. **Referrer Privacy**:
+   - Some browsers/users may disable referrer headers
+   - Falls back to default URL in these cases (safe behavior)
+
+3. **Cross-Origin Navigation**:
+   - Intentionally doesn't go back to external sites
+   - This is a feature, not a bug (keeps users in the app)
+
+4. **No History Length Check**:
+   - `window.history.length` is unreliable across browsers
+   - Our referrer-based approach is more predictable
+
+## Future Enhancements (Optional)
+
+1. **Session Storage Tracking**:
+   - Could track navigation history in sessionStorage
+   - Would allow more sophisticated back button logic
+   - Trade-off: added complexity vs. marginal benefit
+
+2. **Contextual Fallbacks**:
+   - Could pass context-specific fallbacks from server
+   - Example: brew detail could remember which list it came from
+   - Trade-off: requires server-side state or URL params
+
+3. **Breadcrumb Integration**:
+   - Could display breadcrumbs alongside back button
+   - Better for complex navigation hierarchies
+   - Trade-off: more UI complexity
+
+## Testing Recommendations
+
+Manual testing scenarios:
+1. ✅ Navigate from home → brews → brew detail → back (should use history)
+2. ✅ Open brew detail via bookmark → back (should go to fallback)
+3. ✅ Navigate from feed → brew detail → back (should return to feed)
+4. ✅ Navigate from profile → brew detail → back (should return to profile)
+5. ✅ Open about page → back (should go to home)
+6. ✅ Edit brew form → back (should return to previous page)
+
+## Conclusion
+
+The implemented solution provides an excellent balance of:
+- **User Experience**: Preserves browser history when possible
+- **Reliability**: Always provides a sensible fallback
+- **Simplicity**: No server-side complexity or session tracking
+- **Maintainability**: Single JavaScript module, easy to understand
+- **Compatibility**: Works with HTMX, Alpine.js, and standard navigation
+
+The approach handles all realistic edge cases while keeping the implementation straightforward and performant.
+10
internal/atproto/nsid.go
···4646func BuildATURI(did, collection, rkey string) string {
4747 return fmt.Sprintf("at://%s/%s/%s", did, collection, rkey)
4848}
4949+5050+// ExtractRKeyFromURI extracts the record key from an AT-URI
5151+// Returns the rkey if successful, or an empty string if parsing fails
5252+func ExtractRKeyFromURI(uri string) string {
5353+ components, err := ResolveATURI(uri)
5454+ if err != nil {
5555+ return ""
5656+ }
5757+ return components.RKey
5858+}
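The helper above leans on `ResolveATURI` for parsing. As a minimal illustration of the contract its doc comment describes — last path segment of a well-formed `at://{did}/{collection}/{rkey}` URI, empty string otherwise — a stand-in sketch (not the `ResolveATURI`-backed implementation) might look like:

```go
package main

import (
	"fmt"
	"strings"
)

// extractRKey illustrates the expected behavior of ExtractRKeyFromURI:
// return the record key (the final path segment) of an at:// URI, or ""
// when the URI does not have the did/collection/rkey shape.
func extractRKey(uri string) string {
	rest, ok := strings.CutPrefix(uri, "at://")
	if !ok {
		return ""
	}
	parts := strings.Split(rest, "/")
	if len(parts) != 3 || parts[0] == "" || parts[1] == "" || parts[2] == "" {
		return ""
	}
	return parts[2]
}

func main() {
	fmt.Println(extractRKey("at://did:plc:abc123/app.arabica.brew/3kxyz")) // 3kxyz
	fmt.Println(extractRKey("not-an-at-uri"))                              // (empty)
}
```

Returning `""` rather than an error keeps call sites like the reference-resolution code below simple, at the cost of silently passing an empty rkey to `GetRecord` on malformed input.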
+2-1
internal/atproto/oauth.go
···116116}
117117118118// SetOnAuthSuccess sets a callback that is called when a user authenticates successfully
119119-// This is called both on initial login and when validating an existing session
119119+// This is called both on initial login and when validating an existing session (on every authenticated request)
120120+// Implementations should be idempotent or track state to avoid redundant operations
120121func (m *OAuthManager) SetOnAuthSuccess(fn func(did string)) {
121122 m.onAuthSuccess = fn
122123}
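Since the callback now fires on every authenticated request, a caller that does expensive work (such as triggering a backfill) needs its own dedupe. A minimal sketch of an idempotent callback, with illustrative names rather than the real handler wiring:

```go
package main

import (
	"fmt"
	"sync"
)

// onceRegistry runs per-DID work at most once, no matter how many times
// the auth-success hook fires for that DID.
type onceRegistry struct {
	mu   sync.Mutex
	seen map[string]bool
}

// onAuthSuccess is the shape of callback SetOnAuthSuccess expects, with
// the expensive work factored out so it can be skipped on repeats.
func (r *onceRegistry) onAuthSuccess(did string, work func(string)) {
	r.mu.Lock()
	already := r.seen[did]
	r.seen[did] = true
	r.mu.Unlock()
	if !already {
		work(did)
	}
}

func main() {
	reg := &onceRegistry{seen: make(map[string]bool)}
	runs := 0
	for i := 0; i < 3; i++ { // three authenticated requests for one user
		reg.onAuthSuccess("did:plc:example", func(string) { runs++ })
	}
	fmt.Println(runs) // 1
}
```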
···33import (
44 "context"
55 "fmt"
66- "sort"
76 "sync"
87 "time"
98···1312 "github.com/rs/zerolog/log"
1413)
15141616-// PublicFeedCacheTTL is the duration for which the public feed cache is valid.
1717-// This value can be adjusted based on desired freshness vs. performance tradeoff.
1818-// Consider values between 5-10 minutes for a good balance.
1919-const PublicFeedCacheTTL = 5 * time.Minute
1515+const (
1616+ // PublicFeedCacheTTL is the duration for which the public feed cache is valid.
1717+ // This value can be adjusted based on desired freshness vs. performance tradeoff.
1818+ // Consider values between 5 and 10 minutes for a good balance.
1919+ PublicFeedCacheTTL = 5 * time.Minute
20202121-// PublicFeedLimit is the number of items to show for unauthenticated users
2222-const PublicFeedLimit = 5
2121+ // PublicFeedCacheSize is the number of items to cache in the server
2222+ PublicFeedCacheSize = 20
2323+ // PublicFeedLimit is the number of items to show for unauthenticated users
2424+ PublicFeedLimit = 5
2525+ // FeedLimit is the number of feed items to show for authenticated users.
2626+ FeedLimit = 20
2727+)
23282429// FeedItem represents an activity in the social feed with author info
2530type FeedItem struct {
···40454146// publicFeedCache holds cached feed items for unauthenticated users
4247type publicFeedCache struct {
4343- items []*FeedItem
4444- expiresAt time.Time
4545- fromFirehose bool // tracks if cache was populated from firehose
4646- mu sync.RWMutex
4848+ items []*FeedItem
4949+ expiresAt time.Time
5050+ mu sync.RWMutex
4751}
48524953// FirehoseIndex is the interface for the firehose feed index
···70747175// Service fetches and aggregates brews from registered users
7276type Service struct {
7373- registry *Registry
7474- publicClient *atproto.PublicClient
7575- cache *publicFeedCache
7676- firehoseIndex FirehoseIndex
7777- useFirehose bool
7777+ registry *Registry
7878+ cache *publicFeedCache
7979+ firehoseIndex FirehoseIndex
7880}
79818082// NewService creates a new feed service
8183func NewService(registry *Registry) *Service {
8284 return &Service{
8383- registry: registry,
8484- publicClient: atproto.NewPublicClient(),
8585- cache: &publicFeedCache{},
8585+ registry: registry,
8686+ cache: &publicFeedCache{},
8687 }
8788}
88898989-// SetFirehoseIndex configures the service to use firehose-based feed when available
9090+// SetFirehoseIndex configures the service to use firehose-based feed
9091func (s *Service) SetFirehoseIndex(index FirehoseIndex) {
9192 s.firehoseIndex = index
9292- s.useFirehose = true
9393 log.Info().Msg("feed: firehose index configured")
9494}
95959696// GetCachedPublicFeed returns cached feed items for unauthenticated users.
9797// It returns up to PublicFeedLimit items from the cache, refreshing if expired.
9898+// The cache stores PublicFeedCacheSize items internally but only returns PublicFeedLimit.
9899func (s *Service) GetCachedPublicFeed(ctx context.Context) ([]*FeedItem, error) {
99100 s.cache.mu.RLock()
100101 cacheValid := time.Now().Before(s.cache.expiresAt) && len(s.cache.items) > 0
101101- cacheFromFirehose := s.cache.fromFirehose
102102 items := s.cache.items
103103 s.cache.mu.RUnlock()
104104105105- // Check if we need to refresh: cache expired, empty, or firehose is now ready but cache was from polling
106106- firehoseReady := s.useFirehose && s.firehoseIndex != nil && s.firehoseIndex.IsReady()
107107- needsRefresh := !cacheValid || (firehoseReady && !cacheFromFirehose)
108108-109109- if !needsRefresh {
110110- log.Debug().Int("item_count", len(items)).Bool("from_firehose", cacheFromFirehose).Msg("feed: returning cached public feed")
105105+ if cacheValid {
106106+ // Return only the first PublicFeedLimit items from the cache
107107+ if len(items) > PublicFeedLimit {
108108+ items = items[:PublicFeedLimit]
109109+ }
110110+ log.Debug().Int("item_count", len(items)).Msg("feed: returning cached public feed")
111111 return items, nil
112112 }
113113114114- // Cache is expired, empty, or we need to switch to firehose data
114114+ // Cache is expired or empty, refresh it
115115 return s.refreshPublicFeedCache(ctx)
116116}
117117···120120 s.cache.mu.Lock()
121121 defer s.cache.mu.Unlock()
122122123123- // Check if firehose is ready (for tracking cache source)
124124- firehoseReady := s.useFirehose && s.firehoseIndex != nil && s.firehoseIndex.IsReady()
125125-126123 // Double-check if another goroutine already refreshed the cache
127127- // But still refresh if firehose is ready and cache was from polling
128124 if time.Now().Before(s.cache.expiresAt) && len(s.cache.items) > 0 {
129129- if !firehoseReady || s.cache.fromFirehose {
130130- return s.cache.items, nil
125125+ // Return only the first PublicFeedLimit items
126126+ items := s.cache.items
127127+ if len(items) > PublicFeedLimit {
128128+ items = items[:PublicFeedLimit]
131129 }
132132- // Firehose is ready but cache was from polling, continue to refresh
130130+ return items, nil
133131 }
134132135135- log.Debug().Bool("firehose_ready", firehoseReady).Msg("feed: refreshing public feed cache")
133133+ log.Debug().Msg("feed: refreshing public feed cache")
136134137137- // Fetch fresh feed items (limited to PublicFeedLimit)
138138- items, err := s.GetRecentRecords(ctx, PublicFeedLimit)
135135+ // Fetch PublicFeedCacheSize items to cache (20 items)
136136+ items, err := s.GetRecentRecords(ctx, PublicFeedCacheSize)
139137 if err != nil {
140138 // If we have stale data, return it rather than failing
141139 if len(s.cache.items) > 0 {
142140 log.Warn().Err(err).Msg("feed: failed to refresh cache, returning stale data")
143143- return s.cache.items, nil
141141+ cachedItems := s.cache.items
142142+ if len(cachedItems) > PublicFeedLimit {
143143+ cachedItems = cachedItems[:PublicFeedLimit]
144144+ }
145145+ return cachedItems, nil
144146 }
145147 return nil, err
146148 }
147149148148- // Update cache
150150+ // Update cache with all fetched items
149151 s.cache.items = items
150152 s.cache.expiresAt = time.Now().Add(PublicFeedCacheTTL)
151151- s.cache.fromFirehose = firehoseReady
152153153154 log.Debug().
154154- Int("item_count", len(items)).
155155+ Int("cached_count", len(items)).
155156 Time("expires_at", s.cache.expiresAt).
156156- Bool("from_firehose", firehoseReady).
157157 Msg("feed: updated public feed cache")
158158159159- return items, nil
159159+ // Return only the first PublicFeedLimit items to the user
160160+ displayItems := items
161161+ if len(displayItems) > PublicFeedLimit {
162162+ displayItems = displayItems[:PublicFeedLimit]
163163+ }
164164+165165+ return displayItems, nil
160166}
161167162162-// GetRecentRecords fetches recent activity (brews and other records) from all registered users
168168+// GetRecentRecords fetches recent activity (brews and other records) from firehose index
163169// Returns up to `limit` items sorted by most recent first
164170func (s *Service) GetRecentRecords(ctx context.Context, limit int) ([]*FeedItem, error) {
165165- // Try firehose index first if available and ready
166166- if s.useFirehose && s.firehoseIndex != nil && s.firehoseIndex.IsReady() {
167167- log.Debug().Msg("feed: using firehose index")
168168- return s.getRecentRecordsFromFirehose(ctx, limit)
171171+ if s.firehoseIndex == nil || !s.firehoseIndex.IsReady() {
172172+ log.Warn().Msg("feed: firehose index not ready")
173173+ return nil, fmt.Errorf("firehose index not ready")
169174 }
170175171171- // Fallback to polling
172172- return s.getRecentRecordsViaPolling(ctx, limit)
176176+ log.Debug().Msg("feed: using firehose index")
177177+ return s.getRecentRecordsFromFirehose(ctx, limit)
173178}
174179175180// getRecentRecordsFromFirehose fetches feed items from the firehose index
176181func (s *Service) getRecentRecordsFromFirehose(ctx context.Context, limit int) ([]*FeedItem, error) {
177182 firehoseItems, err := s.firehoseIndex.GetRecentFeed(ctx, limit)
178183 if err != nil {
179179- log.Warn().Err(err).Msg("feed: firehose index error, falling back to polling")
180180- return s.getRecentRecordsViaPolling(ctx, limit)
184184+ log.Warn().Err(err).Msg("feed: firehose index error")
185185+ return nil, err
181186 }
182187183188 // Convert FirehoseFeedItem to FeedItem
···198203 }
199204200205 log.Debug().Int("count", len(items)).Msg("feed: returning items from firehose index")
201201- return items, nil
202202-}
203203-204204-// getRecentRecordsViaPolling fetches feed items by polling each user's PDS
205205-func (s *Service) getRecentRecordsViaPolling(ctx context.Context, limit int) ([]*FeedItem, error) {
206206- dids := s.registry.List()
207207- if len(dids) == 0 {
208208- log.Debug().Msg("feed: no registered users")
209209- return nil, nil
210210- }
211211-212212- log.Debug().Int("user_count", len(dids)).Msg("feed: fetching activity from registered users (polling)")
213213-214214- // Fetch all records from all users in parallel
215215- type userActivity struct {
216216- did string
217217- profile *atproto.Profile
218218- brews []*models.Brew
219219- beans []*models.Bean
220220- roasters []*models.Roaster
221221- grinders []*models.Grinder
222222- brewers []*models.Brewer
223223- err error
224224- }
225225-226226- results := make(chan userActivity, len(dids))
227227- var wg sync.WaitGroup
228228-229229- for _, did := range dids {
230230- wg.Add(1)
231231- go func(did string) {
232232- defer wg.Done()
233233-234234- result := userActivity{did: did}
235235-236236- // Fetch profile
237237- profile, err := s.publicClient.GetProfile(ctx, did)
238238- if err != nil {
239239- log.Warn().Err(err).Str("did", did).Msg("failed to fetch profile for feed")
240240- result.err = err
241241- results <- result
242242- return
243243- }
244244- result.profile = profile
245245-246246- // Fetch recent brews (limit per user to avoid fetching too many)
247247- brewsOutput, err := s.publicClient.ListRecords(ctx, did, atproto.NSIDBrew, 10)
248248- if err != nil {
249249- log.Warn().Err(err).Str("did", did).Msg("failed to fetch brews for feed")
250250- result.err = err
251251- results <- result
252252- return
253253- }
254254-255255- // Fetch recent beans
256256- beansOutput, err := s.publicClient.ListRecords(ctx, did, atproto.NSIDBean, 10)
257257- if err != nil {
258258- log.Warn().Err(err).Str("did", did).Msg("failed to fetch beans for feed")
259259- }
260260-261261- // Fetch recent roasters
262262- roastersOutput, err := s.publicClient.ListRecords(ctx, did, atproto.NSIDRoaster, 10)
263263- if err != nil {
264264- log.Warn().Err(err).Str("did", did).Msg("failed to fetch roasters for feed")
265265- }
266266-267267- // Fetch recent grinders
268268- grindersOutput, err := s.publicClient.ListRecords(ctx, did, atproto.NSIDGrinder, 10)
269269- if err != nil {
270270- log.Warn().Err(err).Str("did", did).Msg("failed to fetch grinders for feed")
271271- }
272272-273273- // Fetch recent brewers
274274- brewersOutput, err := s.publicClient.ListRecords(ctx, did, atproto.NSIDBrewer, 10)
275275- if err != nil {
276276- log.Warn().Err(err).Str("did", did).Msg("failed to fetch brewers for feed")
277277- }
278278-279279- // Fetch all beans, roasters, brewers, and grinders for this user to resolve references
280280- allBeansOutput, _ := s.publicClient.ListRecords(ctx, did, atproto.NSIDBean, 100)
281281- allRoastersOutput, _ := s.publicClient.ListRecords(ctx, did, atproto.NSIDRoaster, 100)
282282- allBrewersOutput, _ := s.publicClient.ListRecords(ctx, did, atproto.NSIDBrewer, 100)
283283- allGrindersOutput, _ := s.publicClient.ListRecords(ctx, did, atproto.NSIDGrinder, 100)
284284-285285- // Build lookup maps (keyed by AT-URI)
286286- beanMap := make(map[string]*models.Bean)
287287- beanRoasterRefMap := make(map[string]string) // bean URI -> roaster URI
288288- roasterMap := make(map[string]*models.Roaster)
289289- brewerMap := make(map[string]*models.Brewer)
290290- grinderMap := make(map[string]*models.Grinder)
291291-292292- // Populate bean map
293293- if allBeansOutput != nil {
294294- for _, beanRecord := range allBeansOutput.Records {
295295- bean, err := atproto.RecordToBean(beanRecord.Value, beanRecord.URI)
296296- if err == nil {
297297- beanMap[beanRecord.URI] = bean
298298- // Store roaster reference if present
299299- if roasterRef, ok := beanRecord.Value["roasterRef"].(string); ok && roasterRef != "" {
300300- beanRoasterRefMap[beanRecord.URI] = roasterRef
301301- }
302302- }
303303- }
304304- }
305305-306306- // Populate roaster map
307307- if allRoastersOutput != nil {
308308- for _, roasterRecord := range allRoastersOutput.Records {
309309- roaster, err := atproto.RecordToRoaster(roasterRecord.Value, roasterRecord.URI)
310310- if err == nil {
311311- roasterMap[roasterRecord.URI] = roaster
312312- }
313313- }
314314- }
315315-316316- // Populate brewer map
317317- if allBrewersOutput != nil {
318318- for _, brewerRecord := range allBrewersOutput.Records {
319319- brewer, err := atproto.RecordToBrewer(brewerRecord.Value, brewerRecord.URI)
320320- if err == nil {
321321- brewerMap[brewerRecord.URI] = brewer
322322- }
323323- }
324324- }
325325-326326- // Populate grinder map
327327- if allGrindersOutput != nil {
328328- for _, grinderRecord := range allGrindersOutput.Records {
329329- grinder, err := atproto.RecordToGrinder(grinderRecord.Value, grinderRecord.URI)
330330- if err == nil {
331331- grinderMap[grinderRecord.URI] = grinder
332332- }
333333- }
334334- }
335335-336336- // Convert records to Brew models and resolve references
337337- brews := make([]*models.Brew, 0, len(brewsOutput.Records))
338338- for _, record := range brewsOutput.Records {
339339- brew, err := atproto.RecordToBrew(record.Value, record.URI)
340340- if err != nil {
341341- log.Warn().Err(err).Str("uri", record.URI).Msg("failed to parse brew record")
342342- continue
343343- }
344344-345345- // Resolve bean reference
346346- if beanRef, ok := record.Value["beanRef"].(string); ok && beanRef != "" {
347347- if bean, found := beanMap[beanRef]; found {
348348- brew.Bean = bean
349349-350350- // Resolve roaster reference for this bean
351351- if roasterRef, found := beanRoasterRefMap[beanRef]; found {
352352- if roaster, found := roasterMap[roasterRef]; found {
353353- brew.Bean.Roaster = roaster
354354- }
355355- }
356356- }
357357- }
358358-359359- // Resolve brewer reference
360360- if brewerRef, ok := record.Value["brewerRef"].(string); ok && brewerRef != "" {
361361- if brewer, found := brewerMap[brewerRef]; found {
362362- brew.BrewerObj = brewer
363363- }
364364- }
365365-366366- // Resolve grinder reference
367367- if grinderRef, ok := record.Value["grinderRef"].(string); ok && grinderRef != "" {
368368- if grinder, found := grinderMap[grinderRef]; found {
369369- brew.GrinderObj = grinder
370370- }
371371- }
372372-373373- brews = append(brews, brew)
374374- }
375375- result.brews = brews
376376-377377- // Convert beans to models and resolve roaster references
378378- beans := make([]*models.Bean, 0)
379379- if beansOutput != nil {
380380- for _, record := range beansOutput.Records {
381381- bean, err := atproto.RecordToBean(record.Value, record.URI)
382382- if err != nil {
383383- log.Warn().Err(err).Str("uri", record.URI).Msg("failed to parse bean record")
384384- continue
385385- }
386386-387387- // Resolve roaster reference
388388- if roasterRef, found := beanRoasterRefMap[record.URI]; found {
389389- if roaster, found := roasterMap[roasterRef]; found {
390390- bean.Roaster = roaster
391391- }
392392- }
393393-394394- beans = append(beans, bean)
395395- }
396396- }
397397- result.beans = beans
398398-399399- // Convert roasters to models
400400- roasters := make([]*models.Roaster, 0)
401401- if roastersOutput != nil {
402402- for _, record := range roastersOutput.Records {
403403- roaster, err := atproto.RecordToRoaster(record.Value, record.URI)
404404- if err != nil {
405405- log.Warn().Err(err).Str("uri", record.URI).Msg("failed to parse roaster record")
406406- continue
407407- }
408408- roasters = append(roasters, roaster)
409409- }
410410- }
411411- result.roasters = roasters
412412-413413- // Convert grinders to models
414414- grinders := make([]*models.Grinder, 0)
415415- if grindersOutput != nil {
416416- for _, record := range grindersOutput.Records {
417417- grinder, err := atproto.RecordToGrinder(record.Value, record.URI)
418418- if err != nil {
419419- log.Warn().Err(err).Str("uri", record.URI).Msg("failed to parse grinder record")
420420- continue
421421- }
422422- grinders = append(grinders, grinder)
423423- }
424424- }
425425- result.grinders = grinders
426426-427427- // Convert brewers to models
428428- brewers := make([]*models.Brewer, 0)
429429- if brewersOutput != nil {
430430- for _, record := range brewersOutput.Records {
431431- brewer, err := atproto.RecordToBrewer(record.Value, record.URI)
432432- if err != nil {
433433- log.Warn().Err(err).Str("uri", record.URI).Msg("failed to parse brewer record")
434434- continue
435435- }
436436- brewers = append(brewers, brewer)
437437- }
438438- }
439439- result.brewers = brewers
440440-441441- results <- result
442442- }(did)
443443- }
444444-445445- // Wait for all goroutines to complete
446446- go func() {
447447- wg.Wait()
448448- close(results)
449449- }()
450450-451451- // Collect all feed items
452452- var items []*FeedItem
453453- for result := range results {
454454- if result.err != nil {
455455- continue
456456- }
457457-458458- totalRecords := len(result.brews) + len(result.beans) + len(result.roasters) + len(result.grinders) + len(result.brewers)
459459-460460- log.Debug().
461461- Str("did", result.did).
462462- Str("handle", result.profile.Handle).
463463- Int("brew_count", len(result.brews)).
464464- Int("bean_count", len(result.beans)).
465465- Int("roaster_count", len(result.roasters)).
466466- Int("grinder_count", len(result.grinders)).
467467- Int("brewer_count", len(result.brewers)).
468468- Int("total_records", totalRecords).
469469- Msg("feed: collected records from user")
470470-471471- // Add brews to feed
472472- for _, brew := range result.brews {
473473- items = append(items, &FeedItem{
474474- RecordType: "brew",
475475- Action: "☕ added a new brew",
476476- Brew: brew,
477477- Author: result.profile,
478478- Timestamp: brew.CreatedAt,
479479- TimeAgo: FormatTimeAgo(brew.CreatedAt),
480480- })
481481- }
482482-483483- // Add beans to feed
484484- for _, bean := range result.beans {
485485- items = append(items, &FeedItem{
486486- RecordType: "bean",
487487- Action: "🫘 added a new bean",
488488- Bean: bean,
489489- Author: result.profile,
490490- Timestamp: bean.CreatedAt,
491491- TimeAgo: FormatTimeAgo(bean.CreatedAt),
492492- })
493493- }
494494-495495- // Add roasters to feed
496496- for _, roaster := range result.roasters {
497497- items = append(items, &FeedItem{
498498- RecordType: "roaster",
499499- Action: "🏪 added a new roaster",
500500- Roaster: roaster,
501501- Author: result.profile,
502502- Timestamp: roaster.CreatedAt,
503503- TimeAgo: FormatTimeAgo(roaster.CreatedAt),
504504- })
505505- }
506506-507507- // Add grinders to feed
508508- for _, grinder := range result.grinders {
509509- items = append(items, &FeedItem{
510510- RecordType: "grinder",
511511- Action: "⚙️ added a new grinder",
512512- Grinder: grinder,
513513- Author: result.profile,
514514- Timestamp: grinder.CreatedAt,
515515- TimeAgo: FormatTimeAgo(grinder.CreatedAt),
516516- })
517517- }
518518-519519- // Add brewers to feed
520520- for _, brewer := range result.brewers {
521521- items = append(items, &FeedItem{
522522- RecordType: "brewer",
523523- Action: "☕ added a new brewer",
524524- Brewer: brewer,
525525- Author: result.profile,
526526- Timestamp: brewer.CreatedAt,
527527- TimeAgo: FormatTimeAgo(brewer.CreatedAt),
528528- })
529529- }
530530- }
531531-532532- // Sort by timestamp descending (most recent first)
533533- sort.Slice(items, func(i, j int) bool {
534534- return items[i].Timestamp.After(items[j].Timestamp)
535535- })
536536-537537- // Limit results
538538- if len(items) > limit {
539539- items = items[:limit]
540540- }
541541-542542- log.Debug().Int("total_items", len(items)).Msg("feed: returning items")
543543-544206 return items, nil
545207}
546208
+43-1
internal/firehose/index.go
···41414242 // BucketKnownDIDs stores all DIDs we've seen with Arabica records
4343 BucketKnownDIDs = []byte("known_dids")
4444+4545+ // BucketBackfilled stores DIDs that have been backfilled: {did} -> {timestamp}
4646+ BucketBackfilled = []byte("backfilled")
4447)
45484649// IndexedRecord represents a record stored in the index
···107110 BucketProfiles,
108111 BucketMeta,
109112 BucketKnownDIDs,
113113+ BucketBackfilled,
110114 }
111115 for _, bucket := range buckets {
112116 if _, err := tx.CreateBucketIfNotExists(bucket); err != nil {
···329333func (idx *FeedIndex) GetRecentFeed(ctx context.Context, limit int) ([]*FeedItem, error) {
330334 var records []*IndexedRecord
331335336336+ // FIX: this seems to show the first 20 records for main deployment
337337+ // - unclear why, but it is likely an issue with the db being stale
332338 err := idx.db.View(func(tx *bolt.Tx) error {
333339 byTime := tx.Bucket(BucketByTime)
334340 recordsBucket := tx.Bucket(BucketRecords)
···337343
338338- // Iterate in reverse (newest first)
344344+ // Iterate forward (oldest first) as part of the FIX above
339345 count := 0
340340- for k, _ := c.Last(); k != nil && count < limit*2; k, _ = c.Prev() {
346346+ for k, _ := c.First(); k != nil && count < limit*2; k, _ = c.Next() {
341347 // Extract URI from key (format: timestamp:uri)
342348 uri := extractURIFromTimeKey(k)
343349 if uri == "" {
···692698 }
693699}
694700701701+// IsBackfilled checks if a DID has already been backfilled
702702+func (idx *FeedIndex) IsBackfilled(did string) bool {
703703+ var exists bool
704704+ _ = idx.db.View(func(tx *bolt.Tx) error {
705705+ b := tx.Bucket(BucketBackfilled)
706706+ exists = b.Get([]byte(did)) != nil
707707+ return nil
708708+ })
709709+ return exists
710710+}
711711+712712+// MarkBackfilled marks a DID as backfilled with current timestamp
713713+func (idx *FeedIndex) MarkBackfilled(did string) error {
714714+ return idx.db.Update(func(tx *bolt.Tx) error {
715715+ b := tx.Bucket(BucketBackfilled)
716716+ timestamp := []byte(time.Now().Format(time.RFC3339))
717717+ return b.Put([]byte(did), timestamp)
718718+ })
719719+}
720720+695721// BackfillUser fetches all existing records for a DID and adds them to the index
722722+// Returns early if the DID has already been backfilled
696723func (idx *FeedIndex) BackfillUser(ctx context.Context, did string) error {
724724+ // Check if already backfilled
725725+ if idx.IsBackfilled(did) {
726726+ log.Debug().Str("did", did).Msg("DID already backfilled, skipping")
727727+ return nil
728728+ }
729729+697730 log.Info().Str("did", did).Msg("backfilling user records")
698731732732+ recordCount := 0
699733 for _, collection := range ArabicaCollections {
700734 records, err := idx.publicClient.ListRecords(ctx, did, collection, 100)
701735 if err != nil {
···718752719753 if err := idx.UpsertRecord(did, collection, rkey, record.CID, recordJSON, 0); err != nil {
720754 log.Warn().Err(err).Str("uri", record.URI).Msg("failed to upsert record during backfill")
755755+ } else {
756756+ recordCount++
721757 }
722758 }
723759 }
724760761761+ // Mark as backfilled
762762+ if err := idx.MarkBackfilled(did); err != nil {
763763+ log.Warn().Err(err).Str("did", did).Msg("failed to mark DID as backfilled")
764764+ }
765765+766766+ log.Info().Str("did", did).Int("record_count", recordCount).Msg("backfill complete")
725767 return nil
726768}
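The check-then-mark guard added to `BackfillUser` is the core of this change: persist a `did -> timestamp` entry so repeat calls return early instead of re-fetching every collection. A sketch of the pattern with a map standing in for the bolt `backfilled` bucket (names illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// tracker mimics the IsBackfilled / MarkBackfilled / BackfillUser flow.
type tracker struct {
	backfilled map[string]string // did -> RFC3339 timestamp
}

func (t *tracker) isBackfilled(did string) bool {
	_, ok := t.backfilled[did]
	return ok
}

func (t *tracker) markBackfilled(did string) {
	t.backfilled[did] = time.Now().Format(time.RFC3339)
}

// backfillUser runs fetch (a stand-in for the per-collection ListRecords
// loop) only for DIDs not yet marked, then records completion.
func (t *tracker) backfillUser(did string, fetch func(string) int) int {
	if t.isBackfilled(did) {
		return 0 // already done, skip the PDS round-trips
	}
	n := fetch(did)
	t.markBackfilled(did)
	return n
}

func main() {
	t := &tracker{backfilled: make(map[string]string)}
	fetches := 0
	fetch := func(string) int { fetches++; return 42 }
	fmt.Println(t.backfillUser("did:plc:abc", fetch)) // 42
	fmt.Println(t.backfillUser("did:plc:abc", fetch)) // 0
	fmt.Println(fetches)                              // 1
}
```

Note that, as in the diff, the mark happens even when some per-record upserts failed; a stricter variant would only mark on a fully clean run, trading repeat work for completeness.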
+102
internal/firehose/index_test.go
···11+package firehose
22+33+import (
44+ "testing"
55+ "time"
66+)
77+88+func TestBackfillTracking(t *testing.T) {
99+ // Create temporary index
1010+ tmpDir := t.TempDir()
1111+ idx, err := NewFeedIndex(tmpDir+"/test.db", 1*time.Hour)
1212+ if err != nil {
1313+ t.Fatalf("Failed to create index: %v", err)
1414+ }
1515+ defer idx.Close()
1616+1717+ testDID := "did:plc:test123abc"
1818+1919+ // Initially should not be backfilled
2020+ if idx.IsBackfilled(testDID) {
2121+ t.Error("DID should not be backfilled initially")
2222+ }
2323+2424+ // Mark as backfilled
2525+ if err := idx.MarkBackfilled(testDID); err != nil {
2626+ t.Fatalf("Failed to mark DID as backfilled: %v", err)
2727+ }
2828+2929+ // Now should be backfilled
3030+ if !idx.IsBackfilled(testDID) {
3131+ t.Error("DID should be marked as backfilled")
3232+ }
3333+3434+ // Different DID should not be backfilled
3535+ otherDID := "did:plc:other456def"
3636+ if idx.IsBackfilled(otherDID) {
3737+ t.Error("Other DID should not be backfilled")
3838+ }
3939+}
4040+4141+func TestBackfillTracking_Persistence(t *testing.T) {
4242+ tmpDir := t.TempDir()
4343+ dbPath := tmpDir + "/test.db"
4444+ testDID := "did:plc:persist123"
4545+4646+ // Create index and mark DID as backfilled
4747+ {
4848+ idx, err := NewFeedIndex(dbPath, 1*time.Hour)
4949+ if err != nil {
5050+ t.Fatalf("Failed to create index: %v", err)
5151+ }
5252+5353+ if err := idx.MarkBackfilled(testDID); err != nil {
5454+ t.Fatalf("Failed to mark DID as backfilled: %v", err)
5555+ }
5656+5757+ idx.Close()
5858+ }
5959+6060+ // Reopen index and verify DID is still marked as backfilled
6161+ {
6262+ idx, err := NewFeedIndex(dbPath, 1*time.Hour)
6363+ if err != nil {
6464+ t.Fatalf("Failed to reopen index: %v", err)
6565+ }
6666+ defer idx.Close()
6767+6868+ if !idx.IsBackfilled(testDID) {
6969+ t.Error("DID should still be marked as backfilled after reopening")
7070+ }
7171+ }
7272+}
7373+7474+func TestBackfillTracking_MultipleDIDs(t *testing.T) {
7575+ tmpDir := t.TempDir()
7676+ idx, err := NewFeedIndex(tmpDir+"/test.db", 1*time.Hour)
7777+ if err != nil {
7878+ t.Fatalf("Failed to create index: %v", err)
7979+ }
8080+ defer idx.Close()
8181+8282+ dids := []string{
8383+ "did:plc:user1",
8484+ "did:plc:user2",
8585+ "did:web:example.com",
8686+ "did:plc:user3",
8787+ }
8888+8989+ // Mark all as backfilled
9090+ for _, did := range dids {
9191+ if err := idx.MarkBackfilled(did); err != nil {
9292+ t.Fatalf("Failed to mark DID %s as backfilled: %v", did, err)
9393+ }
9494+ }
9595+9696+ // Verify all are marked
9797+ for _, did := range dids {
9898+ if !idx.IsBackfilled(did) {
9999+ t.Errorf("DID %s should be marked as backfilled", did)
100100+ }
101101+ }
102102+}
+134-17
internal/handlers/handlers.go
···164164165165 if h.feedService != nil {
166166 if isAuthenticated {
167167- // Authenticated users get the full feed (20 items), fetched fresh
168168- feedItems, _ = h.feedService.GetRecentRecords(r.Context(), 20)
167167+ feedItems, _ = h.feedService.GetRecentRecords(r.Context(), feed.FeedLimit)
169168 } else {
170169 // Unauthenticated users get a limited feed from the cache
171170 feedItems, _ = h.feedService.GetCachedPublicFeed(r.Context())
···302301 return
303302 }
304303305305- // Check authentication (optional for view)
306306- store, authenticated := h.getAtprotoStore(r)
307307- if !authenticated {
308308- http.Redirect(w, r, "/login", http.StatusFound)
309309- return
304304+ // Check if owner (DID or handle) is specified in query params
305305+ owner := r.URL.Query().Get("owner")
306306+307307+ // Check authentication
308308+ didStr, err := atproto.GetAuthenticatedDID(r.Context())
309309+ isAuthenticated := err == nil && didStr != ""
310310+311311+ var userProfile *bff.UserProfile
312312+ if isAuthenticated {
313313+ userProfile = h.getUserProfile(r.Context(), didStr)
310314 }
311315312312- didStr, _ := atproto.GetAuthenticatedDID(r.Context())
313313- userProfile := h.getUserProfile(r.Context(), didStr)
316316+ var brew *models.Brew
317317+ var brewOwnerDID string
318318+ var isOwner bool
314319315315- brew, err := store.GetBrewByRKey(r.Context(), rkey)
316316- if err != nil {
317317- http.Error(w, "Brew not found", http.StatusNotFound)
318318- log.Error().Err(err).Str("rkey", rkey).Msg("Failed to get brew for view")
319319- return
320320+ if owner != "" {
321321+ // Viewing someone else's brew - use public client
322322+ publicClient := atproto.NewPublicClient()
323323+324324+ // Resolve owner to DID if it's a handle
325325+ if strings.HasPrefix(owner, "did:") {
326326+ brewOwnerDID = owner
327327+ } else {
328328+ resolved, err := publicClient.ResolveHandle(r.Context(), owner)
329329+ if err != nil {
330330+ log.Warn().Err(err).Str("handle", owner).Msg("Failed to resolve handle for brew view")
331331+ http.Error(w, "User not found", http.StatusNotFound)
332332+ return
333333+ }
334334+ brewOwnerDID = resolved
335335+ }
336336+337337+ // Fetch the brew record from the owner's PDS
338338+ record, err := publicClient.GetRecord(r.Context(), brewOwnerDID, atproto.NSIDBrew, rkey)
339339+ if err != nil {
340340+ log.Error().Err(err).Str("did", brewOwnerDID).Str("rkey", rkey).Msg("Failed to get brew record")
341341+ http.Error(w, "Brew not found", http.StatusNotFound)
342342+ return
343343+ }
344344+345345+ // Convert record to brew
346346+ brew, err = atproto.RecordToBrew(record.Value, record.URI)
347347+ if err != nil {
348348+ log.Error().Err(err).Msg("Failed to convert brew record")
349349+ http.Error(w, "Failed to load brew", http.StatusInternalServerError)
350350+ return
351351+ }
352352+353353+ // Resolve references (bean, grinder, brewer)
354354+ if err := h.resolveBrewReferences(r.Context(), brew, brewOwnerDID, record.Value); err != nil {
355355+ log.Warn().Err(err).Msg("Failed to resolve some brew references")
356356+ // Don't fail the request, just log the warning
357357+ }
358358+359359+ // Check if viewing user is the owner
360360+ isOwner = isAuthenticated && didStr == brewOwnerDID
361361+ } else {
362362+ // Viewing own brew - require authentication
363363+ store, authenticated := h.getAtprotoStore(r)
364364+ if !authenticated {
365365+ http.Redirect(w, r, "/login", http.StatusFound)
366366+ return
367367+ }
368368+369369+ brew, err = store.GetBrewByRKey(r.Context(), rkey)
370370+ if err != nil {
371371+ http.Error(w, "Brew not found", http.StatusNotFound)
372372+ log.Error().Err(err).Str("rkey", rkey).Msg("Failed to get brew for view")
373373+ return
374374+ }
375375+376376+ brewOwnerDID = didStr
377377+ isOwner = true
320378 }
321379322322- if err := bff.RenderBrewView(w, brew, authenticated, didStr, userProfile); err != nil {
380380+ if err := bff.RenderBrewView(w, brew, isAuthenticated, didStr, userProfile, isOwner); err != nil {
323381 http.Error(w, "Failed to render page", http.StatusInternalServerError)
324382 log.Error().Err(err).Msg("Failed to render brew view")
325383 }
384384+}
385385+386386+// resolveBrewReferences resolves bean, grinder, and brewer references for a brew; lookups that fail are skipped so the brew renders with partial references
387387+func (h *Handler) resolveBrewReferences(ctx context.Context, brew *models.Brew, ownerDID string, record map[string]interface{}) error {
388388+ publicClient := atproto.NewPublicClient()
389389+390390+ // Resolve bean reference
391391+ if beanRef, ok := record["beanRef"].(string); ok && beanRef != "" {
392392+ beanRecord, err := publicClient.GetRecord(ctx, ownerDID, atproto.NSIDBean, atproto.ExtractRKeyFromURI(beanRef))
393393+ if err == nil {
394394+ if bean, err := atproto.RecordToBean(beanRecord.Value, beanRecord.URI); err == nil {
395395+ brew.Bean = bean
396396+397397+ // Resolve roaster reference for the bean
398398+ if roasterRef, ok := beanRecord.Value["roasterRef"].(string); ok && roasterRef != "" {
399399+ roasterRecord, err := publicClient.GetRecord(ctx, ownerDID, atproto.NSIDRoaster, atproto.ExtractRKeyFromURI(roasterRef))
400400+ if err == nil {
401401+ if roaster, err := atproto.RecordToRoaster(roasterRecord.Value, roasterRecord.URI); err == nil {
402402+ brew.Bean.Roaster = roaster
403403+ }
404404+ }
405405+ }
406406+ }
407407+ }
408408+ }
409409+410410+ // Resolve grinder reference
411411+ if grinderRef, ok := record["grinderRef"].(string); ok && grinderRef != "" {
412412+ grinderRecord, err := publicClient.GetRecord(ctx, ownerDID, atproto.NSIDGrinder, atproto.ExtractRKeyFromURI(grinderRef))
413413+ if err == nil {
414414+ if grinder, err := atproto.RecordToGrinder(grinderRecord.Value, grinderRecord.URI); err == nil {
415415+ brew.GrinderObj = grinder
416416+ }
417417+ }
418418+ }
419419+420420+ // Resolve brewer reference
421421+ if brewerRef, ok := record["brewerRef"].(string); ok && brewerRef != "" {
422422+ brewerRecord, err := publicClient.GetRecord(ctx, ownerDID, atproto.NSIDBrewer, atproto.ExtractRKeyFromURI(brewerRef))
423423+ if err == nil {
424424+ if brewer, err := atproto.RecordToBrewer(brewerRecord.Value, brewerRecord.URI); err == nil {
425425+ brew.BrewerObj = brewer
426426+ }
427427+ }
428428+ }
429429+430430+ return nil
326431}
327432328433// Show edit brew form
```diff
@@ ... @@
 	isAuthenticated := err == nil && didStr != ""
 	isOwnProfile := isAuthenticated && didStr == did
 
-	// Render profile content partial
-	if err := bff.RenderProfilePartial(w, brews, beans, roasters, grinders, brewers, isOwnProfile); err != nil {
+	// Render profile content partial (use actor as handle, which is already the handle if provided as such)
+	profileHandle := actor
+	if strings.HasPrefix(actor, "did:") {
+		// If actor was a DID, we need to resolve it to a handle
+		// We can get it from the first brew's author if available, or fetch profile
+		profile, err := publicClient.GetProfile(ctx, did)
+		if err == nil {
+			profileHandle = profile.Handle
+		} else {
+			profileHandle = did // Fallback to DID if we can't get handle
+		}
+	}
+
+	if err := bff.RenderProfilePartial(w, brews, beans, roasters, grinders, brewers, isOwnProfile, profileHandle); err != nil {
 		http.Error(w, "Failed to render content", http.StatusInternalServerError)
 		log.Error().Err(err).Msg("Failed to render profile partial")
 	}
```
**justfile** (+2 −2)

```diff
 run:
-	@LOG_LEVEL=debug LOG_FORMAT=console go run cmd/server/main.go -firehose -known-dids known-dids.txt
+	@LOG_LEVEL=debug LOG_FORMAT=console go run cmd/server/main.go -known-dids known-dids.txt
 
 run-production:
-	@LOG_FORMAT=json SECURE_COOKIES=true go run cmd/server/main.go -firehose
+	@LOG_FORMAT=json SECURE_COOKIES=true go run cmd/server/main.go
 
 test:
 	@go test ./... -cover -coverprofile=cover.out
```
**known-dids.txt.example** (deleted, −18)

```diff
-# Known DIDs for Development Backfill
-#
-# This file contains DIDs that should be backfilled on startup when using
-# the --known-dids flag. This is useful for development and testing to
-# populate the feed with known coffee enthusiasts.
-#
-# Format: One DID per line
-# Lines starting with # are comments
-# Empty lines are ignored
-#
-# Example DIDs (replace with real DIDs):
-# did:plc:example1234567890abcdef
-# did:plc:another1234567890abcdef
-#
-# To use this file:
-# 1. Copy this file to known-dids.txt
-# 2. Add real DIDs (one per line)
-# 3. Run: ./arabica --firehose --known-dids known-dids.txt
```
**module.nix** (+7 −14)

```diff
@@ ... @@
     logFormat = lib.mkOption {
       type = lib.types.enum [ "pretty" "json" ];
       default = "json";
-      description = "Log format. Use 'json' for production, 'pretty' for development.";
+      description =
+        "Log format. Use 'json' for production, 'pretty' for development.";
     };
 
     secureCookies = lib.mkOption {
       type = lib.types.bool;
       default = true;
-      description = "Whether to set the Secure flag on cookies. Should be true when using HTTPS.";
-    };
-
-    firehose = lib.mkOption {
-      type = lib.types.bool;
-      default = false;
-      description = ''
-        Enable firehose-based feed using Jetstream.
-        This provides real-time feed updates with zero API calls per request,
-        instead of polling each user's PDS.
-      '';
+      description =
+        "Whether to set the Secure flag on cookies. Should be true when using HTTPS.";
     };
   };
 
@@ ... @@
     dataDir = lib.mkOption {
       type = lib.types.path;
       default = "/var/lib/arabica";
-      description = "Directory where arabica stores its data (OAuth sessions, etc.).";
+      description =
+        "Directory where arabica stores its data (OAuth sessions, etc.).";
     };
 
     user = lib.mkOption {
@@ ... @@
       Type = "simple";
       User = cfg.user;
       Group = cfg.group;
-      ExecStart = "${cfg.package}/bin/arabica${lib.optionalString cfg.settings.firehose " -firehose"}";
+      ExecStart = "${cfg.package}/bin/arabica";
       Restart = "on-failure";
       RestartSec = "10s";
 
```
**scripts/diagnose-feed-db.sh** (new file, +160)

```bash
#!/bin/bash
# Diagnostic script to check feed database status

set -e

DB_PATH="${ARABICA_FEED_INDEX_PATH:-$HOME/.local/share/arabica/feed-index.db}"

echo "=== Feed Database Diagnostics ==="
echo "Database path: $DB_PATH"
echo ""

if [ ! -f "$DB_PATH" ]; then
  echo "ERROR: Database file does not exist at $DB_PATH"
  exit 1
fi

echo "Database file size: $(du -h "$DB_PATH" | cut -f1)"
echo "Last modified: $(stat -c %y "$DB_PATH" 2>/dev/null || stat -f "%Sm" "$DB_PATH")"
echo ""

# Create a simple Go program to inspect the database
cat > /tmp/inspect-feed-db.go << 'EOF'
package main

import (
	"encoding/binary"
	"encoding/json"
	"fmt"
	"os"
	"time"

	bolt "go.etcd.io/bbolt"
)

type IndexedRecord struct {
	URI        string          `json:"uri"`
	DID        string          `json:"did"`
	Collection string          `json:"collection"`
	RKey       string          `json:"rkey"`
	Record     json.RawMessage `json:"record"`
	CID        string          `json:"cid"`
	IndexedAt  time.Time       `json:"indexed_at"`
	CreatedAt  time.Time       `json:"created_at"`
}

func main() {
	dbPath := os.Args[1]

	db, err := bolt.Open(dbPath, 0600, &bolt.Options{ReadOnly: true, Timeout: 5 * time.Second})
	if err != nil {
		fmt.Printf("ERROR: Failed to open database: %v\n", err)
		os.Exit(1)
	}
	defer db.Close()

	err = db.View(func(tx *bolt.Tx) error {
		// Check buckets
		records := tx.Bucket([]byte("records"))
		byTime := tx.Bucket([]byte("by_time"))
		meta := tx.Bucket([]byte("meta"))
		knownDIDs := tx.Bucket([]byte("known_dids"))
		backfilled := tx.Bucket([]byte("backfilled"))

		if records == nil {
			fmt.Println("ERROR: 'records' bucket does not exist")
			return nil
		}

		recordCount := records.Stats().KeyN
		fmt.Printf("Total records: %d\n", recordCount)

		if byTime != nil {
			timeIndexCount := byTime.Stats().KeyN
			fmt.Printf("Time index entries: %d\n", timeIndexCount)
		}

		if knownDIDs != nil {
			didCount := knownDIDs.Stats().KeyN
			fmt.Printf("Known DIDs: %d\n", didCount)
			knownDIDs.ForEach(func(k, v []byte) error {
				fmt.Printf("  - %s\n", string(k))
				return nil
			})
		}

		if backfilled != nil {
			backfilledCount := backfilled.Stats().KeyN
			fmt.Printf("Backfilled DIDs: %d\n", backfilledCount)
		}

		// Check cursor
		if meta != nil {
			cursorBytes := meta.Get([]byte("cursor"))
			if cursorBytes != nil && len(cursorBytes) == 8 {
				cursor := int64(binary.BigEndian.Uint64(cursorBytes))
				cursorTime := time.UnixMicro(cursor)
				fmt.Printf("\nCursor position: %d (%s)\n", cursor, cursorTime.Format(time.RFC3339))
			} else {
				fmt.Println("\nNo cursor found in database")
			}
		}

		// Get first 5 and last 5 records by time
		if byTime != nil && records != nil {
			fmt.Println("\n=== First 5 records (oldest) ===")
			c := byTime.Cursor()
			count := 0
			for k, _ := c.First(); k != nil && count < 5; k, _ = c.Next() {
				uri := extractURI(k)
				if record := getRecord(records, uri); record != nil {
					fmt.Printf("%s - %s - %s\n", record.CreatedAt.Format("2006-01-02 15:04:05"), record.Collection, uri)
				}
				count++
			}

			fmt.Println("\n=== Last 5 records (newest with inverted timestamps) ===")
			c = byTime.Cursor()
			count = 0
			for k, _ := c.Last(); k != nil && count < 5; k, _ = c.Prev() {
				uri := extractURI(k)
				if record := getRecord(records, uri); record != nil {
					fmt.Printf("%s - %s - %s\n", record.CreatedAt.Format("2006-01-02 15:04:05"), record.Collection, uri)
				}
				count++
			}
		}

		return nil
	})

	if err != nil {
		fmt.Printf("ERROR: %v\n", err)
		os.Exit(1)
	}
}

func extractURI(key []byte) string {
	if len(key) < 10 {
		return ""
	}
	return string(key[9:])
}

func getRecord(bucket *bolt.Bucket, uri string) *IndexedRecord {
	data := bucket.Get([]byte(uri))
	if data == nil {
		return nil
	}
	var record IndexedRecord
	if err := json.Unmarshal(data, &record); err != nil {
		return nil
	}
	return &record
}
EOF

cd "$(dirname "$0")/.."
go run /tmp/inspect-feed-db.go "$DB_PATH"

rm -f /tmp/inspect-feed-db.go
```
**(new file)**

```javascript
/**
 * Smart back button implementation for Arabica
 * Handles browser history navigation with intelligent fallbacks
 */

/**
 * Initialize a back button with smart navigation
 * @param {HTMLElement} button - The back button element
 */
function initBackButton(button) {
  if (!button) return;

  button.addEventListener('click', function(e) {
    e.preventDefault();
    handleBackNavigation(button);
  });
}

/**
 * Handle back navigation with fallback logic
 * @param {HTMLElement} button - The back button element
 */
function handleBackNavigation(button) {
  const fallbackUrl = button.getAttribute('data-fallback') || '/brews';
  const referrer = document.referrer;
  const currentUrl = window.location.href;

  // Check if there's actual browser history to go back to
  // We can't directly check history.length in a reliable way across browsers,
  // but we can check if the referrer is from the same origin
  const hasSameOriginReferrer = referrer &&
    referrer.startsWith(window.location.origin) &&
    referrer !== currentUrl;

  if (hasSameOriginReferrer) {
    // Safe to use history.back() - we came from within the app
    window.history.back();
  } else {
    // No referrer or external referrer - use fallback
    // This handles direct links, external referrers, and bookmarks
    window.location.href = fallbackUrl;
  }
}

/**
 * Initialize all back buttons on the page
 */
function initAllBackButtons() {
  const buttons = document.querySelectorAll('[data-back-button]');
  buttons.forEach(initBackButton);
}

// Initialize on DOM load
if (document.readyState === 'loading') {
  document.addEventListener('DOMContentLoaded', initAllBackButtons);
} else {
  initAllBackButtons();
}

// Re-initialize after HTMX swaps (for dynamic content)
document.body.addEventListener('htmx:afterSwap', function() {
  initAllBackButtons();
});
```
**web/static/js/brew-form.js** (+104 −66)

```diff
@@ ... @@
  */
 function brewForm() {
   return {
-    showNewBean: false,
-    showNewGrinder: false,
-    showNewBrewer: false,
-    rating: 5,
-    pours: [],
-    newBean: {
+    // Modal state (matching manage page)
+    showBeanForm: false,
+    showGrinderForm: false,
+    showBrewerForm: false,
+    editingBean: null,
+    editingGrinder: null,
+    editingBrewer: null,
+
+    // Form data (matching manage page with snake_case)
+    beanForm: {
       name: "",
       origin: "",
-      roasterRKey: "",
-      roastLevel: "",
+      roast_level: "",
       process: "",
       description: "",
+      roaster_rkey: "",
     },
-    newGrinder: { name: "", grinderType: "", burrType: "", notes: "" },
-    newBrewer: { name: "", brewer_type: "", description: "" },
+    grinderForm: { name: "", grinder_type: "", burr_type: "", notes: "" },
+    brewerForm: { name: "", brewer_type: "", description: "" },
+
+    // Brew form specific
+    rating: 5,
+    pours: [],
 
     // Dropdown data
     beans: [],
@@ ... @@
 
     async init() {
       // Load existing pours if editing
-      const poursData = this.$el.getAttribute("data-pours");
+      // $el is now the parent div, so find the form element
+      const formEl = this.$el.querySelector("form");
+      const poursData = formEl?.getAttribute("data-pours");
       if (poursData) {
         try {
           this.pours = JSON.parse(poursData);
@@ ... @@
       await this.loadDropdownData();
     },
 
-    async loadDropdownData() {
+    async loadDropdownData(forceRefresh = false) {
       if (!window.ArabicaCache) {
         console.warn("ArabicaCache not available");
         return;
       }
 
+      // If forcing refresh, always get fresh data
+      if (forceRefresh) {
+        try {
+          const freshData = await window.ArabicaCache.refreshCache(true);
+          if (freshData) {
+            this.applyData(freshData);
+          }
+        } catch (e) {
+          console.error("Failed to refresh dropdown data:", e);
+        }
+        return;
+      }
+
       // First, try to immediately populate from cached data (sync)
       // This prevents flickering by showing data instantly
       const cachedData = window.ArabicaCache.getCachedData();
@@ ... @@
 
     populateDropdowns() {
       // Get the current selected values (from server-rendered form when editing)
-      const beanSelect = this.$el.querySelector('select[name="bean_rkey"]');
-      const grinderSelect = this.$el.querySelector(
-        'select[name="grinder_rkey"]',
-      );
-      const brewerSelect = this.$el.querySelector('select[name="brewer_rkey"]');
+      // Use document.querySelector to ensure we find the form selects, not modal selects
+      const beanSelect = document.querySelector('form select[name="bean_rkey"]');
+      const grinderSelect = document.querySelector('form select[name="grinder_rkey"]');
+      const brewerSelect = document.querySelector('form select[name="brewer_rkey"]');
 
       const selectedBean = beanSelect?.value || "";
       const selectedGrinder = grinderSelect?.value || "";
@@ ... @@
       }
 
       // Populate roasters in new bean modal - using DOM methods to prevent XSS
-      const roasterSelect = this.$el.querySelector(
-        'select[name="roaster_rkey_modal"]',
-      );
+      const roasterSelect = document.querySelector('select[name="roaster_rkey_modal"]');
       if (roasterSelect && this.roasters.length > 0) {
         // Clear existing options
         roasterSelect.innerHTML = "";
@@ ... @@
       this.pours.splice(index, 1);
     },
 
-    async addBean() {
-      if (!this.newBean.name || !this.newBean.origin) {
+    async saveBean() {
+      if (!this.beanForm.name || !this.beanForm.origin) {
         alert("Bean name and origin are required");
         return;
       }
-      const payload = {
-        name: this.newBean.name,
-        origin: this.newBean.origin,
-        roast_level: this.newBean.roastLevel,
-        process: this.newBean.process,
-        description: this.newBean.description,
-        roaster_rkey: this.newBean.roasterRKey || "",
-      };
+
       const response = await fetch("/api/beans", {
         method: "POST",
         headers: {
           "Content-Type": "application/json",
         },
-        body: JSON.stringify(payload),
+        body: JSON.stringify(this.beanForm),
       });
+
       if (response.ok) {
         const newBean = await response.json();
-        // Invalidate cache and refresh data
+
+        // Invalidate cache and refresh data in one call
+        let freshData = null;
         if (window.ArabicaCache) {
-          await window.ArabicaCache.invalidateAndRefresh();
+          freshData = await window.ArabicaCache.invalidateAndRefresh();
         }
-        // Reload dropdowns and select the new bean
-        await this.loadDropdownData();
-        const beanSelect = this.$el.querySelector('select[name="bean_rkey"]');
+
+        // Apply the fresh data to update dropdowns
+        if (freshData) {
+          this.applyData(freshData);
+        }
+
+        // Select the new bean
+        const beanSelect = document.querySelector('form select[name="bean_rkey"]');
         if (beanSelect && newBean.rkey) {
           beanSelect.value = newBean.rkey;
         }
+
         // Close modal and reset form
-        this.showNewBean = false;
-        this.newBean = {
+        this.showBeanForm = false;
+        this.beanForm = {
           name: "",
           origin: "",
-          roasterRKey: "",
-          roastLevel: "",
+          roast_level: "",
           process: "",
           description: "",
+          roaster_rkey: "",
         };
       } else {
         const errorText = await response.text();
@@ ... @@
       }
     },
 
-    async addGrinder() {
-      if (!this.newGrinder.name) {
+    async saveGrinder() {
+      if (!this.grinderForm.name) {
         alert("Grinder name is required");
         return;
       }
+
       const response = await fetch("/api/grinders", {
         method: "POST",
         headers: {
           "Content-Type": "application/json",
         },
-        body: JSON.stringify(this.newGrinder),
+        body: JSON.stringify(this.grinderForm),
       });
+
       if (response.ok) {
         const newGrinder = await response.json();
-        // Invalidate cache and refresh data
+
+        // Invalidate cache and refresh data in one call
+        let freshData = null;
         if (window.ArabicaCache) {
-          await window.ArabicaCache.invalidateAndRefresh();
+          freshData = await window.ArabicaCache.invalidateAndRefresh();
+        }
+
+        // Apply the fresh data to update dropdowns
+        if (freshData) {
+          this.applyData(freshData);
         }
-        // Reload dropdowns and select the new grinder
-        await this.loadDropdownData();
-        const grinderSelect = this.$el.querySelector(
-          'select[name="grinder_rkey"]',
-        );
+
+        // Select the new grinder
+        const grinderSelect = document.querySelector('form select[name="grinder_rkey"]');
         if (grinderSelect && newGrinder.rkey) {
           grinderSelect.value = newGrinder.rkey;
         }
+
         // Close modal and reset form
-        this.showNewGrinder = false;
-        this.newGrinder = {
+        this.showGrinderForm = false;
+        this.grinderForm = {
           name: "",
-          grinderType: "",
-          burrType: "",
+          grinder_type: "",
+          burr_type: "",
           notes: "",
         };
       } else {
@@ ... @@
       }
     },
 
-    async addBrewer() {
-      if (!this.newBrewer.name) {
+    async saveBrewer() {
+      if (!this.brewerForm.name) {
         alert("Brewer name is required");
         return;
       }
+
       const response = await fetch("/api/brewers", {
         method: "POST",
         headers: {
           "Content-Type": "application/json",
         },
-        body: JSON.stringify(this.newBrewer),
+        body: JSON.stringify(this.brewerForm),
       });
+
       if (response.ok) {
         const newBrewer = await response.json();
-        // Invalidate cache and refresh data
+
+        // Invalidate cache and refresh data in one call
+        let freshData = null;
         if (window.ArabicaCache) {
-          await window.ArabicaCache.invalidateAndRefresh();
+          freshData = await window.ArabicaCache.invalidateAndRefresh();
+        }
+
+        // Apply the fresh data to update dropdowns
+        if (freshData) {
+          this.applyData(freshData);
         }
-        // Reload dropdowns and select the new brewer
-        await this.loadDropdownData();
-        const brewerSelect = this.$el.querySelector(
-          'select[name="brewer_rkey"]',
-        );
+
+        // Select the new brewer
+        const brewerSelect = document.querySelector('form select[name="brewer_rkey"]');
         if (brewerSelect && newBrewer.rkey) {
           brewerSelect.value = newBrewer.rkey;
         }
+
         // Close modal and reset form
-        this.showNewBrewer = false;
-        this.newBrewer = { name: "", brewer_type: "", description: "" };
+        this.showBrewerForm = false;
+        this.brewerForm = { name: "", brewer_type: "", description: "" };
       } else {
         const errorText = await response.text();
         alert("Failed to add brewer: " + errorText);
```