Unified Dashboard Update System - Strategic Design Document
Status: PHASE 1 & 2 Complete - Recommendation Ready
Date: 2026-01-28
Subagent: unified-dashboard-system-design
---
Executive Summary
After evaluating three architectural approaches, I recommend Option C: Hybrid Approach (Master Event Stream + Per-Dashboard Caches).
This design is the ONLY approach that satisfies the critical non-blocking constraint for the orchestrator while maintaining fast, reliable dashboard updates.
---
PHASE 1: Approach Evaluation
Option A: Single Master Log File + Topic Filtering
Architecture:
┌─────────────────────────────────────┐
│ Orchestrator (Haiku) │
│ - Appends to dashboard-updates.json │
└─────────────────────┬───────────────┘
│ (writes)
↓
┌─────────────────────────────┐
│ dashboard-updates.json │
│ (ALL dashboards, mixed) │
└─────────┬───────────────────┘
│
┌───────────┼───────────────┐
│ │ │
↓ ↓ ↓
[MindMiner] [NovaLaunch] [SchellIP]
(filters) (filters) (filters)
Pros:
- Single source of truth
- Easy for agents to append (one endpoint)
- Conceptually simple
Cons:
- ❌ File grows unbounded — all events from all dashboards accumulate
- ❌ Write contention — multiple dashboards polling + orchestrator writing = potential race conditions
- ❌ Inefficient reads — every dashboard filters the entire file for its updates
- ❌ Orchestrator may wait — if file write is synchronous, Haiku blocks
- ❌ Hard to archive — no natural way to clean up old events across dashboards
- ❌ Performance degrades over time — file grows, parse time grows
Verdict: ❌ Fails non-blocking constraint. Not recommended.
---
Option B: Per-Dashboard Log Files
Architecture:
┌──────────────────────────────┐
│ Orchestrator (Haiku) │
│ - Routes to correct log file │
└──────────┬───────────────────┘
│
┌──────┴──────┬──────────┬─────────┐
│ │ │ │
↓ ↓ ↓ ↓
[mindminer] [nova-launch] [schell-ip] ...
-updates.json -updates.json -updates.json
│ │ │
↓ ↓ ↓
MindMiner NovaLaunch SchellIP
Dashboard Dashboard Dashboard
(direct read) (direct read) (direct read)
Pros:
- Smaller files per dashboard (fast reads)
- Clear separation of concerns
- No filtering logic needed in dashboards
- Natural file lifecycle (one file = one dashboard)
Cons:
- ❌ Agents must know routing — which file updates which dashboard?
- ❌ Orchestrator still decides — Haiku must know the mapping
- ❌ No global view — can't see all activity across dashboards easily
- ❌ Harder to implement patterns — things like "when X changes, update Y" require orchestrator logic
- ❌ Orchestrator still blocks — writes to multiple files sequentially
- ⚠️ Manual file management — no automatic cleanup mechanism
Verdict: ⚠️ Simpler than A, but still blocks orchestrator on writes. Medium recommendation.
---
Option C: Hybrid Approach (Master Event Stream + Per-Dashboard Caches)
Architecture:
┌─────────────────────────────────┐
│ Orchestrator (Haiku) │
│ - Appends to event stream │
│ - RETURNS IMMEDIATELY (no wait) │
└──────────────┬──────────────────┘
│ (async write)
↓
┌──────────────────────────────┐
│ dashboard-events.json │
│ (master event stream) │
│ - Timestamp │
│ - Dashboard ID │
│ - Action/content │
│ (WRITE-AND-RETURN pattern) │
└──────────────┬────────────────┘
│
│ (cron job, async)
│ Every 30-60 seconds
│ Transform events → caches
│ (doesn't block orchestrator)
↓
┌──────────────┬──────────────┬──────────────┐
│ │ │ │
↓ ↓ ↓ ↓
[mindminer- [nova-launch- [schell-ip- [costs-
cache.json] cache.json] cache.json] cache.json]
│ │ │ │
│ │ │ │
↓ (fast read) ↓ (fast read) ↓ (fast read) ↓ (fast read)
│ │ │ │
┌───────────┐ ┌──────────┐ ┌──────────┐ ┌─────────────┐
│ MindMiner │ │NovaLaunch│ │ SchellIP │ │ costs │
│ Dashboard │ │Dashboard │ │ Dashboard│ │ Dashboard │
│(polls 60s)│ │(polls 60s)│ │(polls 60s)│ │ (polls 60s) │
└───────────┘ └──────────┘ └──────────┘ └─────────────┘
Pros:
- ✅ Non-blocking for orchestrator — write-and-return pattern
- ✅ Fast dashboard reads — small, focused cache files
- ✅ No write contention — orchestrator writes once, cron handles distribution
- ✅ Clean separation — orchestrator/events ≠ display/caches
- ✅ Scalable — add new dashboards = add new cache file (cron handles it automatically)
- ✅ Maintainable — clear responsibility: orchestrator writes events, cron transforms, dashboards read caches
- ✅ Auto-cleanup possible — cron can purge old events, old cache entries
- ✅ Easy to debug — can inspect dashboard-events.json to see what the orchestrator sent
- ✅ Flexible routing — cron can implement complex event → cache logic (e.g., "cost events update both costs AND CommandCenter")
Cons:
- ⚠️ Slightly more complex architecture (three components vs. one)
- ⚠️ Cron job adds operational overhead (but minimal — runs every 30-60 sec, not on every event)
Verdict: ✅ BEST CHOICE — Satisfies non-blocking constraint and outperforms on every metric.
---
PHASE 2: Recommendation - Option C
Why Option C is Best
1. Non-Blocking Orchestrator (Critical Requirement)
The orchestrator's job is to coordinate business logic, not manage dashboards. Every millisecond saved on dashboard I/O is a millisecond available for real work.
With Option C:
Orchestrator workflow:
1. Decide an event happened (e.g., "Nova Launch inquiry received")
2. Append: { timestamp, dashboard_id, action, content } to dashboard-events.json
3. RETURN IMMEDIATELY ✅ (don't wait for cache transformation or dashboard refresh)
4. Continue orchestrating
The cron job handles the "slow" parts asynchronously:
Cron job workflow (runs every 30-60 seconds):
1. Read dashboard-events.json (small, fast)
2. For each new event, transform to appropriate cache(s)
3. Write cache files (mindminer-cache.json, nova-launch-cache.json, etc.)
4. Clean up old events from master stream if needed
5. Complete. No orchestrator was involved. ✅
2. Dashboard Performance
Each dashboard polls its own cache file — which is small (typically 10-50 KB) and contains only relevant updates.
```json
// mindminer-cache.json (example, ~5 KB)
{
  "lastUpdate": "2026-01-28T14:32:15Z",
  "sections": {
    "status": { "text": "2 new leads", "timestamp": "2026-01-28T14:30:00Z" },
    "tasks": { "count": 3, "overdue": 1 },
    "alerts": [ { "id": 1, "text": "Follow up: Lead from Acme Corp" } ]
  }
}
```
Comparison with Option A (one master file):
- Option A: Dashboard reads 500 KB file, filters 400 KB of irrelevant events
- Option C: Dashboard reads 5 KB cache file, uses 100% of it
3. Scalability
Adding a new dashboard (e.g., an "Arbitrage Dashboard") requires:
- One new transformation rule in the cron job (which creates and maintains the cache file)
- One new dashboard HTML file
That's it. No orchestrator changes. ✅
With Option B, orchestrator would need to know about the new mapping. With Option C, it's automatic.
4. Observability & Debugging
```json
// dashboard-events.json (master stream)
{
  "events": [
    { "timestamp": "2026-01-28T14:30:00Z", "dashboard": "mindminer", "action": "new_lead", "content": "..." },
    { "timestamp": "2026-01-28T14:31:00Z", "dashboard": "nova-launch", "action": "inquiry_received", "content": "..." },
    { "timestamp": "2026-01-28T14:32:00Z", "dashboard": "costs", "action": "budget_alert", "content": "..." }
  ]
}
```
If a dashboard isn't updating, you can:
1. Check dashboard-events.json to confirm events were sent ✅
2. Check mindminer-cache.json to confirm the transformation happened ✅
3. Check the browser console to confirm dashboard polling works ✅
A single source of truth at every step of the pipeline.
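For the first of those checks, a small helper can pretty-print the tail of the stream. This is a hypothetical debugging utility, assuming the event stream is stored as newline-delimited JSON (one event object per line, as in the append implementation):

```python
import json

def tail_events(path, n=5):
    """Return the last n events from a newline-delimited event stream."""
    with open(path) as f:
        lines = [line for line in f if line.strip()]  # skip blank lines
    return [json.loads(line) for line in lines[-n:]]
```

Running `tail_events("dashboard-events.json")` during an incident immediately shows whether the orchestrator is still emitting events.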
---
How It Works (Step-by-Step)
#### 1. Orchestrator Appends Event (Immediate)
```javascript
// In orchestrator code (async, non-blocking):
appendDashboardUpdate({
  dashboard: "mindminer",
  section: "status",
  action: "update_leads",
  content: { count: 5, new: 2 },
  priority: "high"
});
// Returns immediately ✅
```
Implementation (fast append-only write, doesn't block the orchestrator):
```python
import json
from datetime import datetime, timezone

def appendDashboardUpdate(event):
    """Append one event to the stream and return immediately."""
    event["timestamp"] = datetime.now(timezone.utc).isoformat()
    # Append-only writes are O(1) regardless of file size — no read, no rewrite.
    with open("dashboard-events.json", "a") as f:
        f.write(json.dumps(event) + "\n")
    # Return. The cron job will handle transformation.
```
#### 2. Cron Job Transforms (Every 30-60 Seconds)
Runs independently, doesn't interact with orchestrator.
Reads: dashboard-events.json (all new events)
Writes: mindminer-cache.json, nova-launch-cache.json, etc.
Example cron job logic (read_new_events, load_cache, write_cache, and purge_old_events are helpers to be implemented alongside it):
```python
def cron_transform_events():
    events = read_new_events("dashboard-events.json")  # only events since the last run
    caches = {}  # {dashboard_id: cache_content}
    for event in events:
        dashboard = event["dashboard"]
        if dashboard not in caches:
            caches[dashboard] = load_cache(f"{dashboard}-cache.json")
        # Transform event to cache update
        if event["action"] == "update_leads":
            caches[dashboard].setdefault("sections", {})["status"] = {
                "text": f"{event['content']['count']} total, {event['content']['new']} new",
                "timestamp": event["timestamp"],
            }
    # Write only the caches that received updates
    for dashboard, content in caches.items():
        write_cache(f"{dashboard}-cache.json", content)
    # Optionally purge old events
    purge_old_events("dashboard-events.json", days=7)
```
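The sketch above calls a read_new_events helper without defining it. One way to implement it — assuming the stream is newline-delimited JSON and using a hypothetical byte-offset checkpoint file so each cron run only sees events appended since the previous run:

```python
import json
import os

OFFSET_FILE = "dashboard-events.offset"  # hypothetical checkpoint file

def read_new_events(path):
    """Return events appended since the last cron run, tracked by a byte offset."""
    offset = 0
    if os.path.exists(OFFSET_FILE):
        with open(OFFSET_FILE) as f:
            offset = int(f.read().strip() or 0)
    with open(path) as f:
        f.seek(offset)          # skip everything the previous run already processed
        data = f.read()
        new_offset = f.tell()
    with open(OFFSET_FILE, "w") as f:
        f.write(str(new_offset))
    return [json.loads(line) for line in data.splitlines() if line.strip()]
```

Because only the unread tail is parsed, each cron run stays fast even as the stream grows between purges.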
#### 3. Dashboard Polls Cache (Every 60-120 Seconds)
```javascript
// In dashboard HTML (MindMiner/index.html):
async function pollUpdates() {
  try {
    const response = await fetch("../mindminer-cache.json");
    const cache = await response.json();
    // Update DOM
    document.getElementById("status").textContent = cache.sections.status.text;
    document.getElementById("task-count").textContent = cache.sections.tasks.count;
  } catch (err) {
    console.error("Dashboard poll failed:", err); // keep polling even if one fetch fails
  }
  // Reschedule next poll
  setTimeout(pollUpdates, 60000); // 60 seconds
}
pollUpdates(); // Start polling
```
---
Performance Estimates
| Metric | Option A | Option B | Option C |
|--------|----------|----------|----------|
| Orchestrator latency | 50-200ms (writes to master) | 20-100ms (multiple writes) | <5ms (append, return) ✅ |
| Dashboard read latency | 100-500ms (parse + filter) | 10-50ms (direct read) | 5-20ms (small cache) ✅ |
| Master file size | 50+ MB/month | N/A | 100 KB (rolling, purged weekly) |
| Per-dashboard cache size | N/A | 50-200 KB | 5-50 KB ✅ |
| Scalability (10 dashboards) | Single 50MB file | 10 files, 50-200KB each | 1 event stream + 10 caches |
| Cron overhead | N/A | N/A | ~50ms every 30-60 sec (negligible) |
Conclusion: Option C is fastest for both orchestrator and dashboards, and scales linearly.
---
Orchestrator Impact: Zero Blocking
Haiku continues uninterrupted:
Haiku's job: Orchestrate business logic
Dashboard's job: Display status
Cron's job: Transform events → caches
Haiku doesn't wait for:
✅ Dashboard refresh
✅ Cache updates
✅ File writes to complete
Haiku just appends and returns.
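The sub-5 ms append claim can be sanity-checked with a rough micro-benchmark that times the same append-and-return path; this is a sketch, and actual numbers depend on the disk and OS page cache:

```python
import json
import time

def append_event(path, event):
    # Same write-and-return pattern as the orchestrator: open, append one line, close.
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")

start = time.perf_counter()
for i in range(100):
    append_event("bench-events.json", {"dashboard": "mindminer", "seq": i})
elapsed_ms = (time.perf_counter() - start) * 1000 / 100
print(f"avg append latency: {elapsed_ms:.3f} ms")
```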
---
Agent Ease-of-Use
All agents use one simple function:
```javascript
// In any agent code:
await appendDashboardUpdate({
  dashboard: "mindminer",              // Required
  section: "leads",                    // e.g., "leads", "tasks", "alerts"
  action: "new_inquiry",               // e.g., "new_inquiry", "completed_task"
  content: { from: "Acme Corp", ... }, // Custom data
  priority: "high"                     // Optional: "low", "normal", "high"
});
```
That's it. No routing logic, no file naming, no orchestrator involvement. Every agent knows how to update any dashboard.
---
Scalability
Adding Dashboard #10 (Arbitrage Dashboard):
1. Create an arbitrage-cache.json template
2. Add logic to the cron job to populate it from events with dashboard: "arbitrage"
3. Create Arbitrage/index.html with polling logic
4. Done. Orchestrator unchanged. ✅
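One way to keep step 2 a one-line change is a transform registry inside the cron job. The names below are hypothetical, but the shape is the point: registering a new dashboard is a single dictionary entry, and the orchestrator is never touched:

```python
def arbitrage_transform(cache, event):
    """Fold one event into the arbitrage cache (illustrative logic)."""
    cache.setdefault("sections", {})["status"] = {
        "text": event["content"].get("summary", ""),
        "timestamp": event["timestamp"],
    }
    return cache

# Hypothetical registry: adding dashboard #10 is one new entry here.
TRANSFORMS = {
    "arbitrage": arbitrage_transform,
}

def apply_event(caches, event):
    dashboard = event["dashboard"]
    transform = TRANSFORMS.get(dashboard)
    if transform is None:
        return  # unknown dashboard: skip (or log) rather than crash the cron run
    caches[dashboard] = transform(caches.get(dashboard, {}), event)
```

Unknown dashboard IDs are ignored rather than fatal, so a misrouted event can't take down updates for every other dashboard.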
---
JSON Schema
Master Event Stream (dashboard-events.json)
```json
{
  "events": [
    {
      "timestamp": "2026-01-28T14:30:00.000Z",
      "dashboard": "mindminer",
      "section": "leads",
      "action": "new_inquiry",
      "content": {
        "from": "Acme Corp",
        "message": "Interested in bulk software licensing"
      },
      "priority": "high"
    },
    {
      "timestamp": "2026-01-28T14:31:15.000Z",
      "dashboard": "costs",
      "section": "alerts",
      "action": "budget_exceeded",
      "content": {
        "category": "cloud-services",
        "current": 1250,
        "limit": 1000
      },
      "priority": "critical"
    }
  ]
}
```
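Since many agents write to one stream, it helps to validate events against this schema before appending. A minimal checker — field names taken from the schema above; the priority set is an assumption combining the documented values with the "critical" seen in the example:

```python
REQUIRED_FIELDS = {"timestamp", "dashboard", "section", "action", "content"}
ALLOWED_PRIORITIES = {"low", "normal", "high", "critical"}

def validate_event(event):
    """Return a list of problems; an empty list means the event is well-formed."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - event.keys())]
    if event.get("priority", "normal") not in ALLOWED_PRIORITIES:
        problems.append(f"unknown priority: {event['priority']}")
    if not isinstance(event.get("content", {}), dict):
        problems.append("content must be an object")
    return problems
```

Rejecting malformed events at the append step keeps bad data out of the stream, so the cron transformer never has to guess.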
Per-Dashboard Cache (mindminer-cache.json)
```json
{
  "lastUpdate": "2026-01-28T14:32:15.000Z",
  "dashboard": "mindminer",
  "sections": {
    "status": {
      "text": "5 total leads, 2 new",
      "updated": "2026-01-28T14:30:00.000Z",
      "alerts": 1
    },
    "leads": [
      {
        "id": "lead-001",
        "from": "Acme Corp",
        "message": "Interested in bulk software licensing",
        "added": "2026-01-28T14:30:00.000Z",
        "priority": "high"
      }
    ],
    "tasks": {
      "total": 12,
      "overdue": 1,
      "due_today": 3
    }
  }
}
```
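One implementation detail worth calling out: dashboards poll these cache files while the cron job rewrites them, so a write_cache helper should swap files atomically to ensure a dashboard never reads a half-written JSON document. A sketch using a temp file plus os.replace (atomic on both POSIX and Windows):

```python
import json
import os
import tempfile

def write_cache(path, content):
    """Write the full cache to a temp file, then atomically swap it into place."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(content, f, indent=2)
        # Readers see either the old file or the new one, never a partial write.
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)  # clean up the temp file if anything went wrong
        raise
```

The temp file lives in the same directory as the cache so the rename never crosses a filesystem boundary, which would break atomicity.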
---
Architecture Diagram (ASCII)
┌──────────────────────────────────────────────────────────────┐
│ ORCHESTRATOR (Haiku) │
│ │
│ Business Logic → appendDashboardUpdate() → RETURN (no wait) │
└──────────────────────────┬──────────────────────────────────┘
│ async write (non-blocking)
↓
┌──────────────────────────────────────┐
│ dashboard-events.json │
│ (Master Event Stream) │
│ │
│ { timestamp, dashboard, action } │
│ { timestamp, dashboard, action } │
│ { timestamp, dashboard, action } │
│ │
│ Auto-purges after 7 days │
└────────────┬─────────────────────────┘
│
┌────────────────────────┐
│ CRON JOB │
│ (Every 30-60 sec) │
│ │
│ 1. Read events │
│ 2. Transform caches │
│ 3. Write caches │
│ 4. Cleanup old data │
│ │
│ (No orchestrator │
│ involvement) │
└─────────┬──────────────┘
│
┌─────────────┼─────────────┬─────────────┬──────────────┐
│ │ │ │ │
↓ ↓ ↓ ↓ ↓
[mindminer- [nova-launch- [schell-ip- [costs- [kickstarter-
cache.json] cache.json] cache.json] cache.json] cache.json]
│ │ │ │ │
↓ ↓ ↓ ↓ ↓
Dashboard polls every 60-120 seconds:
┌─────────────────┐ ┌──────────────────┐ ┌───────────────┐
│ MindMiner/ │ │ NovaLaunch/ │ │ SchellIP/ │
│ index.html │ │ index.html │ │ index.html │
│ │ │ │ │ │
│ fetch(../ │ │ fetch(../ │ │ fetch(../ │
│ mindminer- │ │ nova-launch- │ │ schell-ip- │
│ cache.json) │ │ cache.json) │ │ cache.json) │
│ │ │ │ │ │
│ Update DOM │ │ Update DOM │ │ Update DOM │
└─────────────────┘ └──────────────────┘ └───────────────┘
---
Summary: Option C Recommendation
| Aspect | Rating | Notes |
|--------|--------|-------|
| Non-blocking orchestrator | ✅✅✅ | Write-and-return pattern |
| Fast dashboards | ✅✅✅ | Small cache files, no filtering |
| Ease of use | ✅✅✅ | One function for all agents |
| Scalability | ✅✅✅ | Add dashboards without changing orchestrator |
| Debuggability | ✅✅✅ | Single event stream, visible caches |
| Operational complexity | ✅✅ | Cron job adds minimal overhead |
| Architecture elegance | ✅✅✅ | Clean separation of concerns |
---
Next Steps
If Jeff approves this approach:
1. Implement PHASE 3: Create log file structure and update all 9 dashboards
2. Implement PHASE 4: Create cron job and helper functions
3. Deploy to Cloudflare Pages
4. Test non-blocking behavior with live orchestrator
Questions for Jeff:
- Does this architecture align with your vision?
- Any concerns about the cron job overhead?
- Should we start implementation immediately, or would you like adjustments?
---
Design document prepared by: Subagent (unified-dashboard-system-design)
Date: 2026-01-28
Status: Ready for approval before PHASE 3 implementation