
Models

27 models evaluated across 3 document AI benchmarks.

| Rank | Model | Organization | Overall |
|------|----------------------|-------------|---------|
| 1 | Nanonets OCR-3 | Nanonets | 85.9 |
| 2 | Nanonets OCR2+ | Nanonets | 81.8 |
| 3 | GPT-5.4 | OpenAI | 81.0 |
| 4 | Qwen3-VL-Plus | Alibaba | 80.1 |
| 5 | Qwen3-VL-235B | Alibaba | 79.6 |
| 6 | Gemini-3-Pro | Google | 79.4 |
| 7 | Claude Sonnet 4.6 | Anthropic | 79.1 |
| 8 | Claude Opus 4.6 | Anthropic | 78.8 |
| 9 | Gemini-3-Flash | Google | 78.6 |
| 10 | Gemini 3.1 Pro | Google | 78.5 |
| 11 | GPT-5.2 | OpenAI | 78.0 |
| 12 | Qwen3.5-9B | Alibaba | 76.7 |
| 13 | Qwen3.5-4B | Alibaba | 72.5 |
| 14 | GPT-5-Mini | OpenAI | 71.7 |
| 15 | Mistral Small 4 | Mistral AI | 71.5 |
| 16 | Claude Haiku 4.5 | Anthropic | 70.2 |
| 17 | Ministral-8B | Mistral AI | 69.5 |
| 18 | GPT-4.1 | OpenAI | 69.5 |
| 19 | GLM-OCR | Zhipu AI | 64.2 |
| 20 | Qwen3.5-2B | Alibaba | 62.6 |
| 21 | Qwen3.5-0.8B | Alibaba | 57.8 |
| 22 | GPT-5-Nano | OpenAI | 52.0 |
| 23 | Llama-3.2-Vision-11B | Meta | 50.8 |
| 24 | Pixtral-12B | Mistral AI | 46.5 |
| – | Gemma-3-12B-IT | Google | 0.0 |
| – | Datalab Marker | Datalab | 0.0 |
| – | Qwen-VL-OCR | Alibaba | 0.0 |

Open benchmark for document AI models (v1.5).