Abstract

Recent studies have demonstrated the efficacy of using Reinforcement Learning (RL) to build reasoning models that articulate chains of thought before producing final answers. However, despite ongoing advances aimed at enabling reasoning for vision-language tasks, existing open-source visual reasoning models typically generate reasoning content in pure natural language, lacking explicit integration of visual information. This limits their ability to produce clearly articulated and visually grounded reasoning chains. To this end, we propose Grounded Reasoning with Images and Texts (GRIT), a novel method for training MLLMs to think with images. GRIT introduces a grounded reasoning paradigm, in which models generate reasoning chains that interleave natural language with explicit bounding box coordinates. These coordinates point to regions of the input image that the model consults during its reasoning process. Additionally, GRIT is equipped with a reinforcement learning approach, GRPO-GR, built upon the GRPO algorithm. GRPO-GR employs robust rewards focused on the final answer accuracy and the format of the grounded reasoning output, eliminating the need for data with reasoning-chain annotations or explicit bounding box labels. As a result, GRIT achieves exceptional data efficiency, requiring as few as 20 image-question-answer triplets from existing datasets. Comprehensive evaluations demonstrate that GRIT effectively trains MLLMs to produce coherent and visually grounded reasoning chains, successfully unifying reasoning and grounding abilities.

Overview

GRIT Overview

GRIT: Teaching MLLMs to think with images by generating reasoning chains that interleave natural language with bounding boxes.

Key innovations:
(1) Grounded Reasoning Paradigm - models generate reasoning chains interleaving natural language with explicit bounding box coordinates.
(2) GRPO-GR - a reinforcement learning algorithm that employs novel rewards to efficiently elicit the grounded reasoning ability of MLLMs; no manual reasoning-chain annotations or bounding box labels needed.

Grounded Reasoning Paradigm

We propose the grounded reasoning paradigm as a lightweight and efficient way for pre-trained MLLMs to think with images.
The model is trained to generate reasoning chains that interleave natural language with bounding boxes.

<think> Initial natural-language reasoning with generated bounding boxes ... [x1,y1,x2,y2] ... </think> <rethink> Reasoning continues, with further analysis of the grounded image regions ... </rethink> <answer> Final response

The model generates reasoning in one continuous pass with:
  • Special tokens to facilitate an extensible reasoning process.
  • Bounding box tokens:
    • Generated directly during autoregressive generation.
    • Influencing subsequent reasoning directly as tokens, without any external decoding or image-retrieval process.
    • Flexibly generated:
      • Can be placed anywhere within the natural language reasoning.
      • In a dynamic number, from zero to multiple boxes as needed.
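
To make the output format concrete, here is a minimal parsing sketch in Python. It assumes the <think>/<rethink>/<answer> tags and bracketed [x1,y1,x2,y2] boxes shown in the template above (the examples later on this page also use parenthesized coordinates, so both forms are handled); the function and variable names are ours for illustration, not part of the GRIT implementation.

import re

# Special tokens from the grounded reasoning template above.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)
RETHINK_RE = re.compile(r"<rethink>(.*?)</rethink>", re.DOTALL)
ANSWER_RE = re.compile(r"<answer>(.*?)(?:</answer>|$)", re.DOTALL)
# Boxes appear inline as [x1,y1,x2,y2] or (x1, y1, x2, y2).
BOX_RE = re.compile(r"[\[(]\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*[\])]")

def parse_grounded_reasoning(text):
    """Split one generated sequence into reasoning segments and the boxes it cites."""
    think = THINK_RE.search(text)
    rethink = RETHINK_RE.search(text)
    answer = ANSWER_RE.search(text)
    boxes = [tuple(map(int, m.groups())) for m in BOX_RE.finditer(text)]
    return {
        "think": think.group(1).strip() if think else None,
        "rethink": rethink.group(1).strip() if rethink else None,
        "answer": answer.group(1).strip() if answer else None,
        "boxes": boxes,  # zero or more boxes, in generation order
    }

For instance, the zebra-counting output in the Examples section parses into 7 boxes and the answer "7".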

GRPO-GR: Reinforcement Learning for Grounded Reasoning

GRPO-GR Method

GRIT employs GRPO-GR, a reinforcement learning approach that optimizes a policy πθ to generate grounded reasoning sequences with three key reward components:

1. Grounded-reasoning-format reward (r_format): Encourages proper use of the reasoning-structure special tokens (<think>, <rethink>, <answer>) and valid bounding box syntax

2. Grounded-target-counting reward (r_count): For counting questions, verifies that the number of generated bounding boxes matches the ground-truth count

3. GPT-aided answer-accuracy reward (r_ans): Combines a GPT-4o judgment of answer correctness with BLEU similarity to the ground-truth answer

This approach eliminates the need for explicit reasoning annotations or bounding box labels, achieving remarkable data efficiency with as few as 20 training examples while maintaining strong performance.
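
To make the three reward terms concrete, here is a minimal sketch of how they could be combined for a single rollout, reusing the parsed-output dictionary from the sketch above. The equal weighting inside r_ans and the plain sum of the three terms are our assumptions rather than the paper's exact formulation, and judge_score and bleu are assumed to be produced externally (by prompting GPT-4o and by a standard BLEU implementation, respectively).

from typing import Optional

def format_reward(out: dict) -> float:
    """r_format: well-formed <think>/<rethink>/<answer> structure and syntactically valid boxes."""
    has_structure = out["think"] is not None and out["answer"] is not None
    boxes_valid = all(x1 < x2 and y1 < y2 for (x1, y1, x2, y2) in out["boxes"])
    return float(has_structure and boxes_valid)

def counting_reward(out: dict, gt_count: Optional[int]) -> float:
    """r_count: for counting questions, check that the number of generated boxes matches the ground truth."""
    if gt_count is None:  # not a counting question
        return 0.0
    return float(len(out["boxes"]) == gt_count)

def answer_reward(judge_score: float, bleu: float) -> float:
    """r_ans: GPT-judged correctness (0-1) blended with BLEU similarity to the ground-truth answer.
    The equal weighting here is an assumption, not the paper's exact formula."""
    return 0.5 * judge_score + 0.5 * bleu

def grpo_gr_reward(out: dict, gt_count: Optional[int],
                   judge_score: float, bleu: float) -> float:
    """Total reward for one rollout, fed to GRPO for group-relative advantage estimation."""
    return (format_reward(out)
            + counting_reward(out, gt_count)
            + answer_reward(judge_score, bleu))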

Main Results


Using the GRIT method with GRPO-GR reinforcement learning, we train two pre-trained MLLMs, Qwen2.5-VL-3B and InternVL-3-2B, with only 20 image-question-answer triplets drawn from existing datasets, VSR and TallyQA. Comprehensive evaluations demonstrate that GRIT-trained models outperform baselines overall across six evaluation sets.

Metrics:

  • GPT-as-judge answer accuracy (ACC): Measures answer correctness using GPT-4o as judge (0-1 score)
  • Grounding IoU (GIoU): Measures alignment between model-generated bounding boxes and ground-truth regions
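
For reference, one simple way to compute a grounding score of this kind is an average best-match IoU between predicted and ground-truth boxes, as sketched below; this aggregation is an illustrative assumption and may differ from the exact GIoU definition used in the paper.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_iou(pred_boxes, gt_boxes):
    """Average best-match IoU of model-generated boxes against ground-truth regions
    (an illustrative aggregation, not necessarily the paper's exact metric)."""
    if not pred_boxes or not gt_boxes:
        return 0.0
    return sum(max(iou(p, g) for g in gt_boxes) for p in pred_boxes) / len(pred_boxes)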

Results confirm that GRIT efficiently and effectively unifies the previously separate grounding and reasoning capabilities of MLLMs, achieving strong performance across diverse visual reasoning tasks.

Examples

Model provides an answer with regions and then reflects on it in the subsequent reasoning.

Example 1

Question: How many zebras are pictured here?

Model output:

<think>
There are six zebras in the picture. The coordinates for the zebras are as follows:
1. (200, 168, 248, 202)
2. (169, 159, 214, 186)
3. (76, 167, 108, 192)
4. (24, 173, 50, 197)
5. (51, 163, 70, 191)
6. (413, 159, 441, 189)
7. (463, 171, 483, 186)
</think>

<rethink>
The coordinates provided for the zebras are accurate and cover all the zebras visible in the image. There are no overlapping or missing coordinates.
</rethink>

<answer>
7

Ground truth answer: 7

Model first grounds the critical image region in its reasoning and then analyzes it in the subsequent reasoning.

Example 2

Question: Is the truck beneath the cat?

Model output:

<think>
The bounding box coordinates for the truck are approximately (0, 209, 488, 364). The cat is positioned on top of this truck.
</think>

<rethink>
The cat is on top of the truck, so the truck is beneath the cat.
</rethink>

<answer>
Yes

Ground truth answer: Yes

Model correctly handles queries about non-existent entities without any grounding action.

Example 3

Question: Is there a knife in the image?

Model output:

<think>
There is no knife present in the image. The focus is on a pot of soup with carrots and other ingredients. The bounding box coordinates for the knife would be outside the image area.
</think>

<rethink>
The image shows a pot of soup with carrots and other ingredients, but there is no knife present. The bounding box coordinates for the knife would be outside the image area.
</rethink>

<answer>
No

Ground truth answer: No

Inference examples of Qwen2.5-VL-GRIT.

BibTeX


@misc{fan2025gritteachingmllmsthink,
  title={GRIT: Teaching MLLMs to Think with Images},
  author={Yue Fan and Xuehai He and Diji Yang and Kaizhi Zheng and Ching-Chen Kuo and Yuting Zheng and Sravana Jyothi Narayanaraju and Xinze Guan and Xin Eric Wang},
  year={2025},
  eprint={2505.15879},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.15879},
}