TL;DR
- I read this because : it is a video language model and a fully open-source model.
- task : video language model
- problem : build a fully open-source model, not a synthetic-data model distilled from a closed model.
- IDEA: a model built on top of several open-source models (almost a meta-model) (similar motivation to Molmo)
- input/output : (video, image, (optional) mask) + question -> answer
- architecture : VE {PE L/14, PE G/14 (Perception Encoder)} + LLM {Llama 3.2 1B/3B, Llama 3.1 8B}
- objective : CE loss at each stage (alignment, mid-training, SFT); see the wiring sketch after this list
- baseline : GPT-4o, Gemini 1.5 Pro, Gemini 2.0 Flash, Qwen2-VL, InternVL2.5, Qwen2.5-VL, LLaVA-OneVision
- data : pretrain 1M (SA-1B images + captions), mid-training 64.7M synthetic captions (Llama-3V-90B), SFT 2.87M human-annotated
- evaluation : image bench, video bench
- result : competitive performance
- contribution : fully open source model. data is also public!
- etc. :
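A minimal sketch of how I picture the wiring (not the official code): projected vision tokens are prepended to the text embeddings and CE loss is taken on the answer tokens only. Module names, dims, and the 2-layer MLP projector are my own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisionToLLMProjector(nn.Module):
    """Maps vision-encoder patch tokens into the LLM embedding space (assumed 2-layer MLP)."""
    def __init__(self, vis_dim: int, llm_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vis_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )

    def forward(self, vis_tokens):  # (B, Nv, vis_dim) -> (B, Nv, llm_dim)
        return self.mlp(vis_tokens)

def vlm_ce_loss(llm, projector, vis_tokens, text_embeds, labels):
    """Prepend projected visual tokens to text embeddings, then next-token CE.
    labels: (B, Nv+Nt) int64 with -100 everywhere except answer-token positions,
    so the loss is computed only on the answer (standard SFT masking)."""
    vis_embeds = projector(vis_tokens)                    # (B, Nv, D)
    inputs = torch.cat([vis_embeds, text_embeds], dim=1)  # (B, Nv+Nt, D)
    logits = llm(inputs)                                  # (B, Nv+Nt, vocab)
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),      # predict token t+1 from t
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )

# toy usage with random tensors; nn.Linear stands in for the Llama decoder + LM head
B, Nv, Nt, Dv, D, V = 2, 8, 16, 1024, 2048, 32000
llm, proj = nn.Linear(D, V), VisionToLLMProjector(Dv, D)
vis, txt = torch.randn(B, Nv, Dv), torch.randn(B, Nt, D)
labels = torch.full((B, Nv + Nt), -100)
labels[:, -4:] = torch.randint(0, V, (B, 4))              # only the last 4 tokens are "answer"
print(vlm_ce_loss(llm, proj, vis, txt, labels).item())
```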
Details
- thumbnail
- overview
data
- overall
- details
- all training data[^1]
synthetic data pipeline (66.1M)
- image data engine
- image : natural images, documents
- feed {caption, OCR, meta} into Llama -> caption, QA
- video data
- Extract 30-second video clips using https://www.scenedetect.com/ ; feed {frame captions from Llama-3V, a video caption from an initial PLM, video meta (actions, time tags)} into Llama 3 -> caption, QA (pipeline sketch after this list)
- scaling law
- Limitation of synthetic data
- No clear scaling law for hard problems -> human annotation is needed.
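A hedged sketch of the video half of the data engine as I understand it: PySceneDetect splits the video into scenes, clips are capped at ~30 s, and a Llama-based captioner turns frame captions plus metadata into clip captions and QA. `caption_clip` is a hypothetical placeholder, not an API from the paper.

```python
from scenedetect import detect, ContentDetector  # https://www.scenedetect.com/

MAX_CLIP_SECONDS = 30.0  # the engine works on ~30-second clips

def split_into_clips(video_path: str):
    """Return (start_s, end_s) pairs: scene-detect, then chunk long scenes to <=30 s."""
    scenes = detect(video_path, ContentDetector())  # list of (start, end) FrameTimecodes
    clips = []
    for start, end in scenes:
        s, e = start.get_seconds(), end.get_seconds()
        while s < e:
            clips.append((s, min(s + MAX_CLIP_SECONDS, e)))
            s += MAX_CLIP_SECONDS
    return clips

def caption_clip(video_path, start_s, end_s, frame_captions, video_meta):
    """Hypothetical placeholder: prompt a Llama-3 model with per-frame captions,
    an initial-PLM video caption, and meta (actions, time tags) to get a clip
    caption plus QA pairs. Not an API from the paper."""
    raise NotImplementedError

if __name__ == "__main__":
    for start_s, end_s in split_into_clips("example.mp4"):
        print(f"clip {start_s:.1f}s to {end_s:.1f}s")
```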
human-annotated high quality data
- PLM-FGQA
- fine-grained human activity
- PLM-STC
- spatio-temporal captioning
- Using SAM2, create mask tubelets (masklets); ask annotators to pick interesting, moving objects, then ask other annotators to describe the subject's movement and actions over time in the video.
- video-region captions (522.7K total / 476.2K train / the rest go to PLM-VideoBench) (record-format sketch after this list)
- RCap (194.2K): Given the video region and timestamps, the model generates a caption;
- RTLoc (194.2K): Given the video region and caption, the model localizes the action; and
- RDCap (122.3K): Given the video region, the model generates dense, localized captions.
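To keep the three region tasks straight, here is a sketch of what a PLM-STC sample plausibly looks like and how it maps to RCap / RTLoc / RDCap; the field names are mine, not the released schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RegionSample:
    video_id: str
    masklet_id: int                            # SAM2 mask tubelet tracking one subject
    segments: List[Tuple[float, float, str]]   # (start_s, end_s, caption), covering the video

def to_rcap(sample: RegionSample, i: int):
    """RCap: region + timestamps in, caption out."""
    start, end, caption = sample.segments[i]
    return {"input": (sample.masklet_id, start, end), "target": caption}

def to_rtloc(sample: RegionSample, i: int):
    """RTLoc: region + caption in, (start, end) out."""
    start, end, caption = sample.segments[i]
    return {"input": (sample.masklet_id, caption), "target": (start, end)}

def to_rdcap(sample: RegionSample):
    """RDCap: region in, the full dense list of (start, end, caption) out."""
    return {"input": sample.masklet_id, "target": sample.segments}
```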
PLM-VideoBench (evaluation tasks)
- Fine-Grained Question Answering (FGQA) : fine-grained activity understanding (e.g., painting “vertically” vs. “horizontally” in Fig. 6, first)
- MBAcc (multi-binary accuracy; sketch after this list)
- 4,371 questions
- Smart Glasses Question Answering (SGQA) :
- answer open-ended questions about activities and objects visible in an egocentric video stream recorded by a smartglasses device
- LLM as a judge (Llama3.3 70B)
- 665 human-annotated samples
- Video Region Captioning (RCap).
- LLM as a judge (Llama3.3 70B)
- 10,060 human-annotated samples
- Region Dense Video Captioning (RDCap).
- model must generate a detailed description of all events involving a specific subject of interest (e.g., person, animal, or object)
- must produce a sequence of (start, end, caption) tuples that cover the entire duration of the video, including periods when the subject is not visible
- 2,620 samples
- SODA score (SODA: Story Oriented Dense Video Captioning Evaluation Framework)
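A hedged sketch of MBAcc (multi-binary accuracy) as I understand it: each multiple-choice question is expanded into several binary questions, and the original question only counts as correct when every binary version is answered correctly.

```python
from collections import defaultdict

def mbacc(records):
    """records: one dict per binary probe, e.g. {"qid": "q1", "correct": True}."""
    per_question = defaultdict(list)
    for r in records:
        per_question[r["qid"]].append(r["correct"])
    # a question scores only if every binary version of it was answered correctly
    return sum(all(v) for v in per_question.values()) / max(len(per_question), 1)

# toy usage: q1 passes both probes, q2 fails one -> MBAcc = 0.5
print(mbacc([
    {"qid": "q1", "correct": True},  {"qid": "q1", "correct": True},
    {"qid": "q2", "correct": True},  {"qid": "q2", "correct": False},
]))
```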
Results
benchmarks
- PLM-VideoBench
- If GPT-4o is
- video bench
- scores differ a lot on VideoMME
- image bench
- Other open-source models are similar, but MMMU is very different lol
- RealWorldQA
- tests basic real-world spatial understanding capabilities of multimodal models
- 765 images, each with a question and an easily verifiable answer; the images are anonymized photos taken from vehicles (loading sketch below)
- https://huggingface.co/datasets/nirajandhakal/realworldqa
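Quick way to peek at the linked mirror with Hugging Face `datasets` (repo id taken from the URL above; split and column names are not guaranteed, so inspect them first).

```python
from datasets import load_dataset

ds = load_dataset("nirajandhakal/realworldqa")  # repo id from the URL above
print(ds)                                       # check available splits/columns first
first_split = next(iter(ds.values()))
print(first_split[0])                           # expected: image + question + short verifiable answer
```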
- Ablation studies
- Long video bench
[^1]