The best approach is to track new insertions in the batch and do a single update at the end.
Here's why this is optimal:
Current Flow (per batch of N messages)
- 1 read: Get trace
- N reads: Check if each message exists
- N writes: Insert/update each message
- With caching: +1 write to update messageCount
Total: Only 1 extra write per batch, regardless of batch size.
Implementation Strategy
In upsertMessagesBatch:
- Modify upsertMessageForTrace to return { id, isNew }
- Accumulate count of new messages during the loop
- After all messages are processed, do a single ctx.db.patch(traceId, { messageCount: trace.messageCount + newCount })
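The steps above can be sketched in a minimal, self-contained form. Note this is illustrative, not the real Convex API: FakeDb stands in for ctx.db, and the Message/Trace shapes and the upsertMessageForTrace signature are assumptions for the sketch.

```typescript
// Simplified stand-ins for the real document shapes.
type Message = { id: string; text: string };
type Trace = { messageCount: number };

// In-memory stand-in for ctx.db, just enough to run the sketch.
class FakeDb {
  traces = new Map<string, Trace>();
  messages = new Map<string, Message>();

  get(traceId: string): Trace | undefined {
    return this.traces.get(traceId);
  }
  patch(traceId: string, fields: Partial<Trace>): void {
    Object.assign(this.traces.get(traceId)!, fields);
  }
}

// Hypothetical upsert: reports whether the message was newly inserted.
function upsertMessageForTrace(
  db: FakeDb,
  msg: Message,
): { id: string; isNew: boolean } {
  const isNew = !db.messages.has(msg.id);
  db.messages.set(msg.id, msg);
  return { id: msg.id, isNew };
}

// Batched upsert: N message writes, but only 1 write to the trace row.
function upsertMessagesBatch(db: FakeDb, traceId: string, msgs: Message[]): number {
  const trace = db.get(traceId)!; // 1 read
  let newCount = 0;
  for (const msg of msgs) {
    const { isNew } = upsertMessageForTrace(db, msg);
    if (isNew) newCount++; // accumulate, don't write the trace yet
  }
  // Single count update covering the whole batch.
  db.patch(traceId, { messageCount: trace.messageCount + newCount });
  return newCount;
}
```

Running the same message id through two batches shows why the { id, isNew } return matters: a re-upserted message does not increment the count.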
For deletions in deleteTraceBatch:
- Already batches deletions (25 at a time)
- Decrement messageCount by removedMessages in the same mutation
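The deletion side can be sketched the same way. This is a hedged, self-contained model, not the real deleteTraceBatch: plain Maps stand in for the tables, and the shapes and batch size default are assumptions.

```typescript
// Simplified stand-in for the trace row.
type TraceDoc = { messageCount: number };

// Delete up to batchSize messages for a trace and decrement messageCount
// in the same mutation, so the count write is 1 per batch, not per message.
function deleteTraceBatch(
  traces: Map<string, TraceDoc>,
  messages: Map<string, { traceId: string }>,
  traceId: string,
  batchSize = 25, // matches the "25 at a time" batching above
): number {
  const batch = [...messages.entries()]
    .filter(([, m]) => m.traceId === traceId)
    .slice(0, batchSize);
  for (const [id] of batch) messages.delete(id); // the batched deletions
  const removedMessages = batch.length;
  const trace = traces.get(traceId)!;
  // Same mutation: one decrement covering the whole batch.
  traces.set(traceId, { messageCount: trace.messageCount - removedMessages });
  return removedMessages;
}
```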
Benefits:
- Minimal write amplification (1 write per batch, not per message)
- No contention issues from concurrent batches (mutations are serialized per trace)
- Count stays accurate because updates are distinguished from new inserts, so only genuinely new messages increment it
Alternative considered: updating the count inside upsertMessageForTrace. That would cause N writes to the trace table per batch, which is exactly what we want to avoid.
Does this approach work for your use case?