
Data Flow

This page traces how data moves through the system for the most important flows in Magia: real-time agent events, session discovery, permission approval, telemetry, and settings.

Session Identifiers

Every session has two identifiers that live in different namespaces.

Identifier            Owner      Format                     Purpose
Workspace UUID        Magia      uuid v4                    Stable internal identity for a session record
Provider session ID   Agent CLI  Provider-specific string   The ID the agent CLI uses in its own data files and hook events

These two IDs are bridged in the session record through a claudeSessionId (or equivalent) field. When a hook event arrives carrying a provider session ID, the Rust backend looks up the corresponding workspace UUID and forwards both identifiers to the frontend.

The frontend maintains two lookup directions:

  • useMagiaStore: indexed by workspace UUID (the primary session store)
  • useAgentEventsStore: indexed by provider session ID (the live event stream)

When displaying data for a session, the app resolves the workspace UUID → provider session ID → live events. The useAgentEventsStore is keyed on provider session ID because hook events arrive with that identifier, and there can be a brief window at session creation where the workspace UUID is known but the provider session ID has not yet been assigned.

Workspace UUID (Magia)
↕ bridged via session.claudeSessionId
Provider Session ID (agent CLI)
↕ keyed in useAgentEventsStore
Agent events (hook stream)
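The two-step lookup can be sketched in plain TypeScript. The store shapes below are illustrative, not Magia's actual types:

```typescript
interface Session { id: string; claudeSessionId?: string }
interface AgentEvent { eventType: string; content: string }

// useMagiaStore: indexed by workspace UUID.
const sessionsByUuid = new Map<string, Session>();
// useAgentEventsStore: indexed by provider session ID.
const eventsByProviderId = new Map<string, AgentEvent[]>();

function eventsForWorkspace(uuid: string): AgentEvent[] {
  const session = sessionsByUuid.get(uuid);
  // Right after session creation, the provider ID may not exist yet.
  if (!session?.claudeSessionId) return [];
  return eventsByProviderId.get(session.claudeSessionId) ?? [];
}
```

Returning an empty list while the provider ID is unassigned is what lets the UI render a freshly created session before its first hook event arrives.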

Real-Time Agent Events

This is the primary real-time data path — how actions taken by a running agent appear in the UI.

1. Agent CLI executes a tool or produces output
2. Agent CLI fires a hook (pre/post tool, message, session.end, etc.)
3. magia-hook-handler binary is invoked by the agent
4. magia-hook-handler connects to the hook Unix socket
and writes a JSON event payload
5. hooks::start_hook_listener reads the payload
6. event_normalizer::normalize() converts it to a unified AgentEvent
7. app_handle.emit("agent:event", agent_event)
8. useAgentEvents hook (src/hooks/useAgentEvents.ts) receives the event
9. useAgentEventsStore.addEvent() stores it under the provider session ID
10. processEvent() updates the incremental SessionProjection
(message list, tool calls, task list, cost, status)
11. React components re-render via Zustand subscriptions
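Steps 8–10 can be sketched as a small subscription function. `listen` stands in for Tauri's event API, and the store methods are simplified signatures, not Magia's real ones:

```typescript
type Unlisten = () => void;
type Listen = <T>(event: string, handler: (e: { payload: T }) => void) => Unlisten;

interface AgentEvent { sessionId: string; eventType: string }

function subscribeAgentEvents(
  listen: Listen,
  addEvent: (e: AgentEvent) => void,      // stores under provider session ID
  processEvent: (e: AgentEvent) => void,  // updates the SessionProjection
): Unlisten {
  return listen<AgentEvent>("agent:event", ({ payload }) => {
    addEvent(payload);
    processEvent(payload);
  });
}
```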

Deduplication. Events can arrive over two channels simultaneously if the IDE integration is active (hook socket + WebSocket). useAgentEventsStore maintains a Set<string> of dedup keys per session. Events are keyed by uuid when present, or by a hash of event_type + content otherwise.

Event cap. Each session in useAgentEventsStore holds at most 1,000 events to bound memory usage. Older events are dropped from the head of the list.

Live sessions map. In parallel with the event stream, hooks::start_hook_listener maintains SharedLiveSessions — an in-memory map of currently running sessions. When a session.start hook fires, the session is added; on session.end, it is removed. The map is emitted to the frontend as a live-sessions:update event on every change. The PID health check (every 5 seconds) also prunes dead sessions from this map.
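The real map lives in the Rust backend; this TypeScript sketch just models the bookkeeping (add on session.start, remove on session.end, prune on PID check):

```typescript
interface LiveSession { sessionId: string; pid: number }

type SessionHook =
  | { type: "session.start"; session: LiveSession }
  | { type: "session.end"; session: LiveSession };

function applyHook(live: Map<string, LiveSession>, hook: SessionHook): void {
  if (hook.type === "session.start") live.set(hook.session.sessionId, hook.session);
  else live.delete(hook.session.sessionId);
}

// Periodic health check: drop entries whose process is no longer alive.
function prune(live: Map<string, LiveSession>, isAlive: (pid: number) => boolean): void {
  for (const [id, s] of live) if (!isAlive(s.pid)) live.delete(id);
}
```

In the real flow, every mutation of this map is followed by a live-sessions:update emit to the frontend.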

Session Discovery

Discovery is how Magia learns about past sessions on first load and after background syncs.

1. Frontend calls useDiscoveryStore.deferredInitialize()
(deferred with setTimeout(0) to not block first render)
2. invoke("get_active_sessions") — fetches the session cache from Rust
3. Session cache (session_cache) returns all known sessions
from the SQLite database + in-memory state
4. useDiscoveryStore stores the raw list; useDiscoveryCacheStore
persists a lightweight version to localStorage
5. useMagiaStore.upsertSession() is called for each session,
normalising ActiveSession (snake_case IPC) → Session (frontend type)
6. Components subscribe to useMagiaStore and render the session list
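Step 5's normalisation can be sketched as a plain mapping function. The field names are illustrative; only the snake_case → camelCase shape change comes from the text:

```typescript
// Shape as received over IPC from Rust (snake_case).
interface ActiveSession {
  session_id: string;
  claude_session_id?: string;
  project_path: string;
}

// Frontend type used by useMagiaStore (camelCase).
interface Session {
  id: string;
  claudeSessionId?: string;
  projectPath: string;
}

function toSession(raw: ActiveSession): Session {
  return {
    id: raw.session_id,
    claudeSessionId: raw.claude_session_id,
    projectPath: raw.project_path,
  };
}
```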

Background sync. The Rust session cache runs an incremental sync on startup and periodically (default every 10 minutes). The sync walks provider session directories, diffs against the current cache, and updates the SQLite database. If anything changed, it emits claude-data:projects-invalidated, which triggers a frontend refetch.

File watchers. ActiveSessionWatcher and ProviderSessionWatcher complement the periodic sync by watching key directories for filesystem changes. When a new JSONL file appears (a new agent session), the watcher triggers an incremental sync immediately so the UI updates without waiting for the next periodic interval.

Discovery cache. useDiscoveryCacheStore caches the last known session list in localStorage. On subsequent launches, the UI renders from cache immediately while the full IPC fetch runs in the background, preventing a blank session list flash.
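The cache-first load can be sketched as follows. The storage key is an assumption, and `fetchSessions` stands in for the full IPC fetch:

```typescript
interface KV { getItem(key: string): string | null; setItem(key: string, value: string): void }
interface CachedSession { id: string; title: string }

const CACHE_KEY = "magia:discovery-cache"; // assumed localStorage key

function loadCachedSessions(storage: KV): CachedSession[] {
  const raw = storage.getItem(CACHE_KEY);
  if (!raw) return [];
  try { return JSON.parse(raw) as CachedSession[]; } catch { return []; }
}

async function initializeDiscovery(
  storage: KV,
  fetchSessions: () => Promise<CachedSession[]>,
  render: (sessions: CachedSession[]) => void,
): Promise<void> {
  render(loadCachedSessions(storage)); // instant paint, possibly stale
  const fresh = await fetchSessions(); // full IPC fetch in the background
  storage.setItem(CACHE_KEY, JSON.stringify(fresh));
  render(fresh);                       // replace with fresh data
}
```

Swallowing the JSON parse error means a corrupted cache degrades to a cold start rather than breaking the launch.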

Permission Approval

Agent CLIs can request user approval before executing a tool (e.g. file writes, shell commands with broad scope).

1. Agent CLI sends a permission request to the permission Unix socket
(the socket path is injected via environment variables / config)
2. agent::start_permission_listener receives the request
and stores it in SharedPermissionManager (keyed by request_id)
3. app_handle.emit("agent:permission-request", request)
4. Frontend useAgentEvents hook receives the event and
stores it in the session's event list as a "permission.requested" event
5. The PermissionDialog component renders in the chat view
6. User approves or denies
7. invoke("respond_to_permission", { request_id, approved })
8. agent::respond_to_permission looks up the pending request
and writes the response to the waiting socket connection
9. Agent CLI unblocks and continues (or aborts)
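Steps 6–7 from the dialog's side can be sketched as follows, with `invoke` standing in for Tauri's IPC call and the command and argument names taken from the flow above:

```typescript
type Invoke = (cmd: string, args: Record<string, unknown>) => Promise<void>;

async function respondToPermission(
  invoke: Invoke,
  requestId: string,
  approved: boolean,
): Promise<void> {
  // The backend looks up the pending request by request_id and writes the
  // response to the socket connection the agent is blocked on.
  await invoke("respond_to_permission", { request_id: requestId, approved });
}
```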

The agent process blocks on the permission socket connection until a response is written. If the Magia process crashes while a permission request is pending, the agent times out and treats the request as denied.

Telemetry

Agent CLIs that support OpenTelemetry export (Claude Code ≥ 1.x) send spans and metrics to Magia’s embedded gRPC collector.

1. Agent CLI initialises its OTel exporter with the socket path
injected via OTEL_EXPORTER_OTLP_ENDPOINT environment variable
2. Agent CLI exports spans/metrics over gRPC (protobuf)
to the Unix socket (or TCP fallback)
3. otel::OtelGrpcServer receives the request and decodes it
4. Metrics (token counts, latency histograms, etc.) are stored
in SharedMetricsStore (Arc<Mutex<HashMap<session_id, Metrics>>>)
5. app_handle.emit("otel:metrics-update", metrics_payload)
6. useMetricsListener hook (src/hooks/useMetricsListener.ts)
receives the update and calls useMetricsStore.update()
7. Session cards and the session header display live token counts
and estimated costs
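Steps 5–6 amount to merging each update into a per-session metrics map. The field names here are illustrative; partial updates are an assumption about the payload shape:

```typescript
interface Metrics { inputTokens: number; outputTokens: number; costUsd: number }

const EMPTY: Metrics = { inputTokens: 0, outputTokens: 0, costUsd: 0 };

// Merge a (possibly partial) update so session cards always read a
// complete snapshot for their session.
function applyMetricsUpdate(
  store: Map<string, Metrics>,
  sessionId: string,
  update: Partial<Metrics>,
): Metrics {
  const next = { ...(store.get(sessionId) ?? EMPTY), ...update };
  store.set(sessionId, next);
  return next;
}
```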

Fallback. If the Unix socket cannot be created (e.g. due to a path length limit on some Linux systems), the OTel server starts on a TCP port instead. The frontend queries the port via invoke("get_otel_port") and the useOtelConfigStore ensures the correct endpoint is used.
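The endpoint selection might be sketched like this. The command name comes from the text above; treating a null socket path as "socket unavailable" is an assumed convention:

```typescript
type Invoke = (cmd: string) => Promise<number | null>;

async function resolveOtelEndpoint(
  invoke: Invoke,
  socketPath: string | null, // null when the Unix socket could not be created
): Promise<string> {
  if (socketPath !== null) return `unix://${socketPath}`; // preferred transport
  const port = await invoke("get_otel_port");             // TCP fallback
  if (port === null) throw new Error("OTel collector is not running");
  return `http://127.0.0.1:${port}`;
}
```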

Settings

Settings follow a simpler bidirectional flow.

Load: invoke("load_settings") → Rust reads settings.json
→ JSON payload → useSettingsStore hydrates on startup
Save: useSettingsStore.saveSettings() → invoke("save_settings", settings)
→ Rust writes settings.json → emits "settings:changed"
→ WatcherManager detects file change → frontend re-fetches if needed
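From the store's side, the round trip can be sketched with `invoke` standing in for Tauri IPC and the command names taken from the flow above; the Settings shape is deliberately left open:

```typescript
interface Settings { [key: string]: unknown }
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

async function loadSettings(invoke: Invoke): Promise<Settings> {
  // Rust reads settings.json and returns the JSON payload.
  return (await invoke("load_settings")) as Settings;
}

async function saveSettings(invoke: Invoke, settings: Settings): Promise<void> {
  // Rust writes settings.json and emits "settings:changed" afterwards.
  await invoke("save_settings", settings);
}
```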

The SharedAppSettings (Arc-wrapped, async-Mutex) in the Rust backend is the authoritative in-memory copy. Background services that need settings (e.g. the OTel port) read from SharedAppSettings rather than re-parsing the file on every access.