tablesalt

Drop a CSV.
Ask a question.
See generative UI.

An in-browser data-exploration agent: text-to-SQL via the Vercel AI SDK over DuckDB-WASM. No upload, no backend, no signup. The eval scoreboard is on the front page, not buried in a docs site.
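The shape of the pipeline: load the CSV into a DuckDB-WASM table, hand the model the table's schema plus the user's question, execute the SQL it returns, and render the result. A minimal sketch of the prompt-building step only; the helper name, render-kind vocabulary, and prompt wording are illustrative, not tablesalt's actual code, and the real model call would go through the Vercel AI SDK with the query executed by DuckDB-WASM in the browser.

```typescript
interface Column {
  name: string;
  type: string;
}

// Build the prompt the model sees: the table schema, a constraint to emit
// one SELECT statement, and the question. Everything here is a sketch.
function buildPrompt(table: string, columns: Column[], question: string): string {
  const schema = columns.map((c) => `  ${c.name} ${c.type}`).join("\n");
  return [
    `You are a SQL generator for DuckDB.`,
    `Table "${table}":`,
    schema,
    `Answer with one SELECT statement and a render kind`,
    `(stat | bar | line | table | list).`,
    `Question: ${question}`,
  ].join("\n");
}
```

The returned SQL runs against the local DuckDB-WASM instance, so the data never leaves the browser.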


The eval scoreboard.

gpt-4o-mini · 2026-05-14 · 12 labeled cases · in-browser corpus

Most AI-for-data demos hide their accuracy numbers. Here are mine: text-to-SQL accuracy across a labeled set of NYC 311 questions, scored against the expected SQL and the expected render kind.
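A sketch of what "scored against expected SQL + render kind" means, under assumptions: the type names and the normalization rules below are mine, not tablesalt's, and the real harness additionally executes the SQL against DuckDB (the "sql executes" metric), which this pure-function sketch omits.

```typescript
type RenderKind = "stat" | "bar" | "line" | "table" | "list";

interface EvalCase {
  question: string;
  expectedSql: string;
  expectedKind: RenderKind;
}

// Collapse whitespace, casing, and trailing semicolons so cosmetic
// differences in the generated SQL do not count as semantic misses.
function normalizeSql(sql: string): string {
  return sql.trim().replace(/\s+/g, " ").replace(/;$/, "").toLowerCase();
}

// One labeled case scored on two of the scoreboard's axes: did the model
// pick the right render kind, and does its SQL match the expected SQL
// after normalization?
function scoreCase(c: EvalCase, actualSql: string, actualKind: RenderKind) {
  return {
    kindCorrect: actualKind === c.expectedKind,
    sqlMatch: normalizeSql(actualSql) === normalizeSql(c.expectedSql),
  };
}
```

String normalization is a deliberately blunt instrument; a result-set comparison after execution would be a stricter notion of "semantic match".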

  • render-kind correct: 92% (11 / 12 cases)

  • sql executes: 100% (12 / 12 cases)

  • sql semantic match: 83% (10 / 12 cases)

Mean latency: 1840 ms. Across the set, the render-kind picker is the strongest signal and SQL semantic match is the rate-limiter. Known failure mode: 'closed_share' returns null when the model omits the CAST.
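One way to paper over that failure mode is a post-processing guard that forces float division in share-style queries before execution. This is a hypothetical fix, not tablesalt's actual code: the function name, the regex, and the `CAST(... AS DOUBLE)` rewrite are all mine, and the regex only handles a `SUM(...)/COUNT(*)` numerator with no nested parentheses.

```typescript
// Hypothetical guard: wrap the SUM numerator of a share-style expression in
// CAST(... AS DOUBLE) so the division is float division regardless of what
// the model emitted. Only matches SUM(...) with no nested parentheses.
function ensureFloatShare(sql: string): string {
  return sql.replace(
    /SUM\(([^)]*)\)\s*\/\s*COUNT\(\*\)/gi,
    "CAST(SUM($1) AS DOUBLE) / COUNT(*)",
  );
}
```

A prompt-side instruction ("always CAST numerators of ratios") is the other obvious lever, but a deterministic rewrite is easier to verify in an eval.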

All 12 cases:
  • How many complaints are there? · stat
  • Complaints by borough · bar
  • Top 5 complaint types · bar
  • How many complaints are still open? · stat
  • Complaints by agency · bar
  • List all Manhattan complaints · table
  • What share of complaints are closed? · stat
  • Complaint volume by day · line
  • How many rodent complaints? · stat
  • Heat complaints by borough · bar
  • How many distinct complaint types? · stat
  • What descriptors are reported for noise complaints? · list