tablesalt
Drop a CSV.
Ask a question.
See generative UI.
An in-browser data-exploration agent: text-to-SQL via the Vercel AI SDK, executed locally with DuckDB-WASM. No upload, no backend, no signup. The eval scoreboard is on the front page, not buried in a docs site.
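Before model-generated SQL reaches DuckDB-WASM, an agent like this typically gates it to read-only statements. The repo's actual guard isn't shown on this page; below is a minimal sketch under that assumption, with a hypothetical helper name `isSafeSelect`.

```typescript
// Hypothetical read-only gate for model-generated SQL (not tablesalt's
// actual code): reject anything that isn't a single SELECT/WITH statement.
const FORBIDDEN =
  /\b(insert|update|delete|drop|create|alter|attach|copy|pragma)\b/i;

function isSafeSelect(sql: string): boolean {
  // Drop trailing semicolons, then refuse multi-statement input.
  const trimmed = sql.trim().replace(/;+\s*$/, "");
  if (trimmed.includes(";")) return false;
  // Must start with SELECT, or WITH for CTEs.
  if (!/^(select|with)\b/i.test(trimmed)) return false;
  // No write/DDL keywords anywhere in the statement.
  return !FORBIDDEN.test(trimmed);
}
```

A keyword blocklist is a coarse filter; since DuckDB-WASM runs entirely in the visitor's browser over their own CSV, the blast radius of a bad query is small to begin with.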
The eval scoreboard.
gpt-4o-mini · 2026-05-14 · 12 labeled cases · in-browser corpus
Most AI-for-data demos hide their accuracy numbers. Here are mine: text-to-SQL accuracy on a labeled set of NYC 311 questions, scored against expected SQL and render kind.
- render-kind correct: 92% (11 / 12 cases)
- sql executes: 100% (12 / 12 cases)
- sql semantic match: 83% (10 / 12 cases)
Mean latency: 1840 ms. The render-kind picker is the strongest signal; SQL semantic match is the rate-limiter. One known failure: the 'closed_share' query returns null when the model omits the CAST.
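The semantic-match scorer itself isn't shown on this page. One cheap proxy, sketched below with hypothetical helper names, is to normalize both queries and compare strings; a stricter eval would execute both against DuckDB-WASM and diff the result sets.

```typescript
// Hypothetical scorer sketch (not the eval's actual implementation):
// normalize expected and generated SQL, then compare for exact equality.
function normalizeSql(sql: string): string {
  return sql
    .toLowerCase()
    .replace(/--[^\n]*/g, " ") // strip line comments
    .replace(/\s+/g, " ")      // collapse all whitespace
    .replace(/;\s*$/, "")      // drop a trailing semicolon
    .trim();
}

function sqlMatches(expected: string, actual: string): boolean {
  return normalizeSql(expected) === normalizeSql(actual);
}
```

String normalization undercounts matches (it misses equivalent-but-differently-written queries), which is one reason semantic match trails the execution rate.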
All 12 labeled cases:
- How many complaints are there? → stat
- Complaints by borough → bar
- Top 5 complaint types → bar
- How many complaints are still open? → stat
- Complaints by agency → bar
- List all Manhattan complaints → table
- What share of complaints are closed? → stat
- Complaint volume by day → line
- How many rodent complaints? → stat
- Heat complaints by borough → bar
- How many distinct complaint types? → stat
- What descriptors are reported for noise complaints? → list