Orion Tasking — Wallet Drift Investigation

To: Orion
CC: Atlas, Katja
From: Vesper
Date: 2026-04-19
Priority: Production blocker. Phase 7.3 and all live sessions are gated on this.


Background

The real wallet is at 28 XRP / 154.46 RLUSD vs the post-injection baseline of 66.82 XRP / 97.61 RLUSD. Net: −38.82 XRP / +56.85 RLUSD. This happened across live sessions S36, S37, and S39. Atlas has locked the investigation order below. No live sessions, no Phase 7.3 until root cause is confirmed.
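The headline numbers above reconcile; a minimal arithmetic check:

```python
# Sanity check of the drift arithmetic quoted above
# (real wallet vs the post-injection baseline).
real = {'xrp': 28.0, 'rlusd': 154.46}
baseline = {'xrp': 66.82, 'rlusd': 97.61}
net = {k: round(real[k] - baseline[k], 2) for k in real}
print(net)  # {'xrp': -38.82, 'rlusd': 56.85}
```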

DB to query: neo_engine.db (live session database — NOT neo_paper_state.db)

S39 DB session_id = 38 (confirmed from prior session). Identify S36 and S37 by timestamp if needed.


Step 0 — Schema probe (run first)

Before running the queries below, confirm what columns exist in the key tables. Some columns were added via additive migration and may or may not be present depending on DB version.

import sqlite3
db = sqlite3.connect('neo_engine.db')

for table in ['sessions', 'fills', 'system_metrics', 'market_snapshots', 'inventory_ledger', 'valuation_snapshots']:
    cols = db.execute(f'PRAGMA table_info({table})').fetchall()
    print(f'\n=== {table} ===')
    for c in cols:
        print(f'  {c[1]} ({c[2]})')

Report the full output. The anchor error column may be named last_anchor_divergence_bps, anchor_error_bps, or similar; confirm the actual name before using it in Investigation C.
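If useful, the probe can be turned into an auto-detection step so query C can be parameterized without manual edits. A sketch, assuming the candidate names above; the in-memory fixture is purely illustrative, not the real schema:

```python
import sqlite3

# Candidate names for the anchor error column; extend as Step 0 reveals more.
CANDIDATES = ['last_anchor_divergence_bps', 'anchor_error_bps',
              'anchor_divergence_bps']

def find_anchor_col(db, table='system_metrics'):
    """Return the first candidate column present in `table`, else None."""
    cols = {c[1] for c in db.execute(f'PRAGMA table_info({table})').fetchall()}
    for name in CANDIDATES:
        if name in cols:
            return name
    return None

# Illustrative in-memory fixture standing in for neo_engine.db
db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE system_metrics (session_id INTEGER, anchor_error_bps REAL)')
print(find_anchor_col(db))
```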


Investigation A — Session-by-session wallet drift reconstruction

Goal: identify exactly which session(s) created the skew and by how much.

import sqlite3
db = sqlite3.connect('neo_engine.db')

rows = db.execute('''
    SELECT
        session_id,
        started_at,
        ended_at,
        halt_reason,
        starting_xrp,
        starting_rlusd,
        ending_xrp,
        ending_rlusd,
        ROUND(ending_xrp  - starting_xrp,  4) AS net_xrp,
        ROUND(ending_rlusd - starting_rlusd, 4) AS net_rlusd
    FROM sessions
    ORDER BY session_id
''').fetchall()

print('session_id | started_at | halt_reason | starting_xrp | starting_rlusd | ending_xrp | ending_rlusd | net_xrp | net_rlusd')
for r in rows:
    print(' | '.join(str(x) for x in r))

Report: Full table output. Flag any session where net_xrp is meaningfully negative.
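"Meaningfully negative" can be mechanized. A sketch on invented sample rows; the 0.5 XRP tolerance is an assumption, tune it to the session sizes involved:

```python
# Flag sessions whose net XRP change is meaningfully negative.
TOLERANCE_XRP = 0.5  # assumed threshold, not an agreed figure

def flag_drift(rows):
    # rows: (session_id, net_xrp, net_rlusd) tuples
    return [sid for sid, net_xrp, _ in rows if net_xrp < -TOLERANCE_XRP]

sample = [(36, -12.1, 17.9), (37, -0.2, 0.3), (38, -26.5, 38.6)]
print(flag_drift(sample))  # [36, 38]
```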


Investigation B — Fill count vs fill size asymmetry

Goal: determine whether "balanced count" was hiding unbalanced notional flow.

import sqlite3
db = sqlite3.connect('neo_engine.db')

rows = db.execute('''
    SELECT
        session_id,
        side,
        COUNT(*)                          AS fill_count,
        ROUND(SUM(quantity), 4)           AS total_rlusd_notional,
        ROUND(SUM(quantity / price), 4)   AS total_xrp_moved,
        ROUND(AVG(quantity), 4)           AS avg_fill_size_rlusd,
        ROUND(AVG(quantity / price), 4)   AS avg_xrp_per_fill
    FROM fills
    WHERE session_id IS NOT NULL
    GROUP BY session_id, side
    ORDER BY session_id, side
''').fetchall()

print('session_id | side | fill_count | total_rlusd | total_xrp | avg_rlusd | avg_xrp')
for r in rows:
    print(' | '.join(str(x) for x in r))

Then compute net per session:

rows = db.execute('''
    SELECT
        session_id,
        ROUND(SUM(CASE WHEN side='buy'  THEN  quantity/price ELSE 0 END), 4) AS xrp_bought,
        ROUND(SUM(CASE WHEN side='sell' THEN  quantity/price ELSE 0 END), 4) AS xrp_sold,
        ROUND(SUM(CASE WHEN side='buy'  THEN  quantity/price ELSE 0 END)
            - SUM(CASE WHEN side='sell' THEN  quantity/price ELSE 0 END), 4) AS net_xrp_from_fills
    FROM fills
    WHERE session_id IS NOT NULL
    GROUP BY session_id
    ORDER BY session_id
''').fetchall()

print('session_id | xrp_bought | xrp_sold | net_xrp_from_fills')
for r in rows:
    print(' | '.join(str(x) for x in r))

Report: Both tables. Flag sessions where net_xrp_from_fills is negative despite more buy fills by count.
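The specific failure mode B is probing, balanced counts hiding negative XRP flow, can be checked mechanically once both tables are in hand. A sketch on invented numbers:

```python
# Flag sessions that lost XRP despite at least as many buy fills as sells.
def balanced_count_negative_flow(counts, nets):
    # counts: {session_id: (buy_count, sell_count)}
    # nets:   {session_id: net_xrp_from_fills}
    return sorted(sid for sid, (buys, sells) in counts.items()
                  if buys >= sells and nets.get(sid, 0.0) < 0)

counts = {36: (41, 39), 37: (20, 20), 38: (55, 52)}
nets = {36: -12.1, 37: 0.1, 38: -26.5}
print(balanced_count_negative_flow(counts, nets))  # [36, 38]
```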


Investigation C — Anchor saturation distribution

Goal: prove whether wallet drift correlates with persistent saturated anchor regimes.

First confirm column names from Step 0. The anchor divergence may be stored as a column in system_metrics or market_snapshots. If you find it, use the query below substituting the correct column name.

import sqlite3
db = sqlite3.connect('neo_engine.db')

# Substitute ANCHOR_COL with the actual column name found in Step 0
ANCHOR_COL = 'last_anchor_divergence_bps'   # update if different

rows = db.execute(f'''
    SELECT
        session_id,
        COUNT(*)                                          AS tick_count,
        ROUND(AVG({ANCHOR_COL}), 4)                       AS mean_anchor_error,
        ROUND(MIN({ANCHOR_COL}), 4)                       AS min_anchor_error,
        ROUND(MAX({ANCHOR_COL}), 4)                       AS max_anchor_error,
        ROUND(100.0 * SUM(CASE WHEN ABS({ANCHOR_COL}) >= 10.0
                          THEN 1 ELSE 0 END) / COUNT(*), 1) AS pct_at_cap,
        ROUND(100.0 * SUM(CASE WHEN ABS({ANCHOR_COL}) >= 5.0
                          THEN 1 ELSE 0 END) / COUNT(*), 1) AS pct_beyond_5bps
    FROM system_metrics
    WHERE {ANCHOR_COL} IS NOT NULL
    GROUP BY session_id
    ORDER BY session_id
''').fetchall()

print('session_id | ticks | mean_anchor | min | max | pct_at_cap | pct_beyond_5bps')
for r in rows:
    print(' | '.join(str(x) for x in r))

If the anchor column is not in system_metrics, check market_snapshots using clob_mid_price vs amm_price (if both columns exist):

rows = db.execute('''
    SELECT
        session_id,
        COUNT(*) AS tick_count,
        ROUND(AVG(clob_mid_price - amm_price), 6) AS mean_clob_minus_amm,
        -- divergence in bps; clob_mid_price is used as the denominator since
        -- it is the only mid column this query has confirmed to exist
        ROUND(100.0 * SUM(CASE WHEN ABS(clob_mid_price - amm_price) / clob_mid_price * 10000 >= 10.0
                          THEN 1 ELSE 0 END) / COUNT(*), 1) AS pct_divergence_at_10bps
    FROM market_snapshots
    WHERE clob_mid_price IS NOT NULL AND amm_price IS NOT NULL
    GROUP BY session_id
    ORDER BY session_id
''').fetchall()

print('session_id | ticks | mean_clob_minus_amm | pct_at_10bps')
for r in rows:
    print(' | '.join(str(x) for x in r))

Report: Whichever query returns data. Flag any session where anchor is pinned at or near cap for a high percentage of ticks.
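For reference, the pct_at_cap metric from query C in plain Python; the 10 bps cap matches the query above, the tick values are invented:

```python
# Share of ticks where |anchor error| is pinned at or beyond the 10 bps cap.
CAP_BPS = 10.0

def pct_at_cap(ticks):
    at_cap = sum(1 for t in ticks if abs(t) >= CAP_BPS)
    return round(100.0 * at_cap / len(ticks), 1)

ticks = [9.8, 10.0, -10.0, 4.2, 10.0, -3.1, 10.0, 10.0]
print(pct_at_cap(ticks))  # 62.5
```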


Investigation D — S39 inventory anomaly reconciliation (blocker)

Goal: determine whether the engine was making decisions on phantom / inflated XRP state.

Part 1 — What the sessions table recorded at S39 shutdown:

import sqlite3
db = sqlite3.connect('neo_engine.db')

row = db.execute('''
    SELECT session_id, started_at, ended_at,
           starting_xrp, starting_rlusd, ending_xrp, ending_rlusd
    FROM sessions WHERE session_id = 38
''').fetchone()
print('sessions table:', row)

Part 2 — What the inventory_ledger accumulated through S39:

rows = db.execute('''
    SELECT
        il.asset,
        ROUND(SUM(il.change), 6) AS cumulative_change,
        -- MAX gives the peak recorded balance, not necessarily the final one;
        -- for the true final balance, order by ledger id and take the last row
        MAX(il.new_balance)      AS peak_ledger_balance
    FROM inventory_ledger il
    JOIN fills f ON f.id = il.fill_id
    WHERE f.session_id <= 38
    GROUP BY il.asset
''').fetchall()

print('inventory_ledger cumulative through S39:')
for r in rows:
    print(r)

Part 3 — What valuation_snapshots recorded during S39 (get_snapshot output):

rows = db.execute('''
    SELECT created_at, xrp_balance, rlusd_balance,
           total_value_rlusd, xrp_avg_cost_rlusd
    FROM valuation_snapshots
    WHERE session_id = 38
    ORDER BY created_at
    LIMIT 5
''').fetchall()
print('valuation_snapshots (first 5 of S39):')
for r in rows:
    print(r)

rows = db.execute('''
    SELECT created_at, xrp_balance, rlusd_balance, total_value_rlusd
    FROM valuation_snapshots
    WHERE session_id = 38
    ORDER BY created_at DESC
    LIMIT 5
''').fetchall()
print('valuation_snapshots (last 5 of S39):')
for r in rows:
    print(r)

Part 4 — inventory_snapshots during S39 (what get_snapshot() returned tick-by-tick):

# inventory_snapshots may not have session_id — filter by timestamp range
session = db.execute(
    'SELECT started_at, ended_at FROM sessions WHERE session_id = 38'
).fetchone()

rows = db.execute('''
    SELECT created_at, xrp_balance, rlusd_balance, total_value_in_rlusd, xrp_pct
    FROM inventory_snapshots
    WHERE created_at BETWEEN ? AND ?
    ORDER BY created_at
    LIMIT 10
''', (session[0], session[1])).fetchall()
print('inventory_snapshots during S39 (first 10):')
for r in rows:
    print(r)

Part 5 — S39 fills: actual XRP/RLUSD flow:

rows = db.execute('''
    SELECT f.side, COUNT(*) as count,
           ROUND(SUM(f.quantity), 4) as total_rlusd,
           ROUND(SUM(f.quantity / f.price), 4) as total_xrp,
           ROUND(AVG(f.price), 6) as avg_price
    FROM fills f
    WHERE f.session_id = 38
    GROUP BY f.side
''').fetchall()
print('S39 fill summary:')
for r in rows:
    print(r)

Report: All five parts. The key question is whether xrp_balance in valuation_snapshots or inventory_snapshots during S39 materially exceeds what the inventory_ledger and sessions table show. A discrepancy here means the engine was trading on phantom inventory.
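The decision rule for part D can be stated precisely. A sketch with placeholder values (not real S39 output); the 0.01 XRP tolerance is an assumption:

```python
# Phantom inventory = snapshot balance in excess of the ledger-derived balance.
TOLERANCE_XRP = 0.01  # assumed reconciliation tolerance

def phantom_xrp(snapshot_xrp, ledger_xrp):
    excess = round(snapshot_xrp - ledger_xrp, 6)
    return excess if excess > TOLERANCE_XRP else 0.0

print(phantom_xrp(50.0, 30.0))  # placeholder values -> 20.0 XRP phantom
print(phantom_xrp(30.0, 30.0))  # balances agree -> 0.0
```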


Reporting format

Structure the delivery as:

INVESTIGATION RESULTS — Wallet Drift Root Cause
Sessions: S36=[DB ID]  S37=[DB ID]  S39=38

A. Drift by session:
   [table]

B. Fill asymmetry:
   [count table]
   [net XRP table]
   Conclusion: [which session(s) had negative net XRP despite count balance, if any]

C. Anchor saturation:
   [table]
   Conclusion: [correlation with drift sessions yes/no]

D. S39 anomaly:
   sessions table end state: [values]
   inventory_ledger through S39: [values]
   valuation_snapshots peak: [values]
   Conclusion: [match or mismatch — is phantom inventory real]

Root cause assessment:
   Primary cause: [anchor saturation | fill asymmetry | phantom inventory | combination]
   Session where drift originated: [session ID]
   State mismatch: [yes/no]

Then separately list any DB anomalies or unexpected findings outside the above scope.

— Vesper