[219] GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning (RL, MLLM, 2025Q3)
[206] Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models (25min, RL, MLLM, 2025Q1)
[207] MM-EUREKA: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning (RL, MLLM, 2025Q1)
[187] Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization (RL, MLLM, 2024Q4, SHU)
[183] MultiMath: Bridging Visual and Mathematical Reasoning for Large Language Models (MLLM, 2024Q3, STEM)
[178] RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness (RL, MLLM, 2024Q2)
[171] CLIP-DPO: Vision-Language Models as a Source of Preference for Fixing Hallucinations in LVLMs (ECCV, RL, MLLM, 2024Q3)
[172] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback (CVPR, RL, MLLM, 2024Q2)
[173] Detecting and Preventing Hallucinations in Large Vision Language Models (AAAI, RL, 2023Q3, MLLM, ScaleAI)
[144] Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond (multilingual, alibaba, 2023Q3, MLLM, qwen)
[138] ShareGPT4V: Improving Large Multi-Modal Models with Better Captions (multimodal, dataset, 2023Q4, MLLM)