Beyond8Bits: A Large-Scale Subjective Quality Dataset for HDR User-Generated Videos

Introduction

High Dynamic Range (HDR) video delivers substantially wider brightness, contrast, and color volume than standard 8-bit SDR content. As HDR capture on mobile devices and consumer cameras becomes ubiquitous and HDR user-generated content (UGC) floods streaming platforms, understanding perceived HDR-UGC quality at scale is essential for compression, delivery, and recommendation.

We present Beyond8Bits — to our knowledge, the largest crowdsourced HDR-UGC video quality dataset to date. Beyond8Bits combines a crowd-captured mobile HDR partition with a curated Vimeo HDR partition, transcoded across a realistic resolution × bitrate ladder, and annotated through a large-scale Amazon Mechanical Turk subjective study. In total the release contains 5,917 HDR source videos that expand into 41,419 transcoded clips with ~1.46 million continuous (0–100) quality ratings (≈35 ratings / video) aggregated via SUREAL bias-corrected MOS estimation. Beyond8Bits subsumes and extends the earlier CHUG and BrightRate HDR-UGC subjective studies from our group.


Sample frames spanning the Beyond8Bits dataset — varied scenes, lighting, motion, orientation, and capture devices across the crowd and Vimeo partitions. Best viewed zoomed in on an HDR-capable display.

Download

We are releasing the Beyond8Bits HDR-UGC Video Quality Dataset to the research community. If you use the dataset in your research, we kindly ask that you cite the paper and this website, and that you also cite the predecessor HDR-UGC subjective studies (CHUG and BrightRate) on which Beyond8Bits is built.

  • S. Saini, B. Chen, N. Birkbeck, Y. Wang, B. Adsumilli, A. C. Bovik, "Seeing Beyond8Bits: Subjective and Objective Quality Assessment of HDR-UGC Videos," arXiv preprint arXiv:2603.00938 [cs.CV], 2026.
  • S. Saini, B. Chen, N. Birkbeck, Y. Wang, B. Adsumilli, A. C. Bovik, "Seeing Beyond8Bits: Subjective and Objective Quality Assessment of HDR-UGC Videos," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2026 (to appear).
  • S. Saini, B. Chen, N. Birkbeck, Y. Wang, B. Adsumilli and A. C. Bovik, "Beyond8Bits: HDR-UGC Video Quality Dataset", Online: https://live.ece.utexas.edu/research/beyond8bits/index.html, 2026.
  • S. Saini, A. C. Bovik, N. Birkbeck, Y. Wang, B. Adsumilli, "CHUG: Crowdsourced User-Generated HDR Video Quality Dataset," 2025 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 2025, pp. 2504-2509, doi: 10.1109/ICIP55913.2025.11084488.
  • S. Saini, B. Chen, Y. Wang, N. Birkbeck, B. Adsumilli, A. C. Bovik, "BrightRate: Quality Assessment for User-Generated HDR Videos," Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2026, pp. 1522-1532.

Access Instructions

Published metadata — the full per-clip CSV (Beyond8Bits_publish.csv), the matching hashed ID list (Beyond8Bits_publish.txt), and a CHUG-compatible crowd-only subset — lives in the GitHub repository. Per-rating raw CSVs and the full archive are mirrored on UT-Box. Videos are hosted on an S3 bucket and can be fetched with the AWS CLI or streamed directly in a browser.

Download Instructions

# Clone the repository
git clone https://github.com/shreshthsaini/Beyond8Bits.git
cd Beyond8Bits

# Download a single video by ID
aws s3 cp s3://ugchdrmturk/videos/VIDEO_ID.mp4 ./Beyond8Bits_Videos/

# Download all 41,419 published video IDs from Beyond8Bits_publish.txt
# (one ID per line)
while read -r video; do
    aws s3 cp "s3://ugchdrmturk/videos/${video}.mp4" ./Beyond8Bits_Videos/
done < data/Beyond8Bits_publish.txt

You can also stream a video directly by replacing VIDEO_ID in this URL:

https://ugchdrmturk.s3.us-east-2.amazonaws.com/videos/VIDEO_ID.mp4
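For programmatic use, the same URL template can be filled in from the published ID list (data/Beyond8Bits_publish.txt). A minimal sketch — the `clip_url` helper is illustrative, not part of the release, and `VIDEO_ID` stands in for a real hashed ID:

```python
# Build the public streaming URL for a published Beyond8Bits clip.
# Real IDs come from data/Beyond8Bits_publish.txt (one per line).
S3_BASE = "https://ugchdrmturk.s3.us-east-2.amazonaws.com/videos"

def clip_url(video_id: str) -> str:
    """Return the direct streaming/download URL for a hashed video ID."""
    return f"{S3_BASE}/{video_id}.mp4"

print(clip_url("VIDEO_ID"))
# https://ugchdrmturk.s3.us-east-2.amazonaws.com/videos/VIDEO_ID.mp4
```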


Loading the scores in Python

import pandas as pd

df = pd.read_csv("data/Beyond8Bits_publish.csv")
print(df.columns.tolist())

# ['video_id', 'mos', 'sos', 'type', 'ref', 'resolution',
#  'bitrate', 'orientation', 'framerate', 'split',
#  'height', 'width']

print(df["mos"].describe())              # MOS distribution
print(df["type"].value_counts())         # Crowd vs Vimeo
print(df["split"].value_counts())        # train / validation / test
print(df["resolution"].value_counts())   # ref / 360p / 720p / 1080p
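A typical next step is to slice MOS by the ladder factors, e.g. mean quality per partition and resolution. A minimal sketch using `groupby` — run here on a tiny synthetic frame that mimics the published schema (the values are made up; the real ones come from Beyond8Bits_publish.csv):

```python
import pandas as pd

# Synthetic rows following the Beyond8Bits_publish.csv schema,
# for illustration only -- real values come from the released CSV.
df = pd.DataFrame({
    "video_id": ["a1", "a2", "b1", "b2"],
    "mos": [72.5, 41.0, 65.3, 38.2],
    "type": ["Crowd", "Crowd", "Vimeo", "Vimeo"],
    "resolution": ["1080p", "360p", "1080p", "360p"],
    "split": ["train", "train", "test", "test"],
})

# Mean MOS per (partition, resolution) cell -- a first look at how
# the transcoding ladder trades off perceived quality.
summary = df.groupby(["type", "resolution"])["mos"].mean()
print(summary)
```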

Globus

You can also access the dataset via Globus. Please create a free account using Gmail or GitHub and access the data through the following link:


Database Description

The Beyond8Bits publish release contains 41,419 video sequences derived from 5,917 real-world HDR-UGC source videos. Sources are drawn from two complementary partitions: a Crowd partition (2,153 mobile-captured HDR sources) and a Vimeo partition (3,764 curated professional / prosumer HDR sources). Each source is transcoded across a resolution × bitrate ladder — 360p / 720p / 1080p at 0.2 / 0.5 / 1 / 2 / 3 Mbps, plus the native-resolution reference — to produce 12,918 crowd transcodes + 22,584 Vimeo transcodes + 5,917 reference videos.

Perceptual quality was collected via a large-scale Amazon Mechanical Turk subjective study following ITU-R BT.500-14, using a continuous 0–100 Likert instrument with ~35 ratings per video. In total the release aggregates ~1.46 million quality ratings. MOS values are produced with SUREAL maximum-likelihood bias-corrected aggregation; the median inter-subject SRCC is 0.90. A source-identity-respecting 70 / 10 / 20 train / validation / test split (28,987 / 4,151 / 8,281 clips) is provided for standardized benchmarking. Beyond8Bits extends and subsumes our earlier HDR-UGC subjective studies, CHUG (ICIP 2025) and BrightRate (WACV 2026).
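SUREAL models each raw rating as the video's true quality plus a per-subject bias and noise, estimated jointly by maximum likelihood. A stripped-down illustration of the bias-correction idea on toy data — alternating closed-form updates, not the released SUREAL implementation, and recovering quality only up to a global offset:

```python
import numpy as np

# Toy ratings: r[i, j] = subject i's score for video j.
# Simplified SUREAL-style model: r[i, j] ~ psi[j] + delta[i] + noise.
rng = np.random.default_rng(0)
true_mos = np.array([30.0, 55.0, 80.0])          # psi: per-video quality
bias     = np.array([-10.0, 0.0, 10.0, 5.0])     # delta: per-subject bias
r = true_mos[None, :] + bias[:, None] + rng.normal(0, 2, (4, 3))

# Alternate least-squares updates (the flavor of the MLE fit):
psi = r.mean(axis=0)
for _ in range(20):
    delta = (r - psi[None, :]).mean(axis=1)   # subject bias given quality
    psi = (r - delta[:, None]).mean(axis=0)   # quality given subject bias

# psi now orders the videos correctly and removes per-subject offsets,
# up to one global shift shared by all videos.
print(np.round(psi, 1))
```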

Release at a glance

HDR source videos: 5,917 (2,153 Crowd + 3,764 Vimeo)
Transcoded videos (total): 41,419 (12,918 Crowd + 22,584 Vimeo + 5,917 references)
Crowd ratings: ~1.46 M (Amazon Mechanical Turk)
Avg. ratings per video: ~35
Resolutions: 360p / 720p / 1080p (+ native reference)
Bitrate ladder: 0.2 / 0.5 / 1 / 2 / 3 Mbps
Rating instrument: Continuous 0–100 Likert, ITU-R BT.500-14
MOS aggregation: SUREAL MLE (median inter-subject SRCC 0.90)
Split (train / val / test): 28,987 / 4,151 / 8,281, by source identity
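The headline counts above are internally consistent, which a quick arithmetic check confirms (all numbers taken from this page; the variable names are illustrative):

```python
# Sanity check of the published release counts.
crowd_src, vimeo_src = 2153, 3764
crowd_tc, vimeo_tc = 12918, 22584
refs = crowd_src + vimeo_src          # one native reference per source

total = crowd_tc + vimeo_tc + refs
print(total)                          # matches the published 41,419

# Each partition averages exactly six transcodes per source.
print(crowd_tc / crowd_src, vimeo_tc / vimeo_src)

# The split sizes also sum to the full release.
train, val, test = 28987, 4151, 8281
print(train + val + test)
```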
Per-video metadata schema

data/Beyond8Bits_publish.csv — 41,419 rows × 12 columns.

video_id: Hashed video ID (primary key, used for S3 download)
mos: MOS after SUREAL bias-corrected aggregation
sos: Standard deviation of scores (SUREAL dispersion)
type: Source partition, Crowd or Vimeo
ref: 1 if source / reference video; 0 if transcoded
resolution: Target resolution, 360p / 720p / 1080p / ref
bitrate: Target bitrate, 0.2M / 0.5M / 1M / 2M / 3M (or ref)
orientation: Portrait or Landscape
framerate: Native playback framerate (fps)
split: train / validation / test (70 / 10 / 20 by source identity)
height: Native frame height (px)
width: Native frame width (px)

Investigators

Copyright Notice & Citation

-----------COPYRIGHT NOTICE STARTS WITH THIS LINE------------
Copyright (c) 2026 The University of Texas at Austin
All rights reserved.

Permission is hereby granted, without written agreement and without license or royalty fees, to use, copy, modify, and distribute this database (the videos, the results and the source files) and its documentation for any purpose, provided that the copyright notice in its entirety appear in all copies of this database, and the original source of this database, Laboratory for Image and Video Engineering (LIVE) at the University of Texas at Austin (UT Austin), is acknowledged in any publication that reports research using this database. The dataset metadata, CSV / TXT manifests, and website code are released under CC BY 4.0; video payloads retain their original licenses (CC-licensed Vimeo content; crowd-contributed videos released under a non-exclusive research redistribution agreement). Permitted use is non-commercial research only; please reach out for commercial use.

The following paper, website, and predecessor sub-studies are to be cited in the bibliography whenever the database is used:

  • S. Saini, B. Chen, N. Birkbeck, Y. Wang, B. Adsumilli, A. C. Bovik, "Seeing Beyond8Bits: Subjective and Objective Quality Assessment of HDR-UGC Videos," arXiv preprint arXiv:2603.00938 [cs.CV], 2026.
  • S. Saini, B. Chen, N. Birkbeck, Y. Wang, B. Adsumilli, A. C. Bovik, "Seeing Beyond8Bits: Subjective and Objective Quality Assessment of HDR-UGC Videos," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2026 (to appear).
  • S. Saini, B. Chen, N. Birkbeck, Y. Wang, B. Adsumilli and A. C. Bovik, "Beyond8Bits: HDR-UGC Video Quality Dataset", Online: https://live.ece.utexas.edu/research/beyond8bits/index.html, 2026.
  • S. Saini, A. C. Bovik, N. Birkbeck, Y. Wang and B. Adsumilli, "CHUG: Crowdsourced User-Generated HDR Video Quality Dataset," 2025 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 2025, pp. 2504-2509, doi: 10.1109/ICIP55913.2025.11084488.
  • S. Saini, B. Chen, Y. Wang, N. Birkbeck, B. Adsumilli, A. C. Bovik, "BrightRate: Quality Assessment for User-Generated HDR Videos," Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2026, pp. 1522-1532.
BibTeX

@article{Saini_2026_arXiv_Beyond8Bits,
    author  = {Saini, Shreshth and Chen, Bowen and Birkbeck, Neil and Wang, Yilin and Adsumilli, Balu and Bovik, Alan C.},
    title   = {Seeing Beyond8Bits: Subjective and Objective Quality Assessment of HDR-UGC Videos},
    journal = {arXiv preprint arXiv:2603.00938},
    year    = {2026}
}

@inproceedings{Saini_2026_CVPR_Beyond8Bits,
    author    = {Saini, Shreshth and Chen, Bowen and Birkbeck, Neil and Wang, Yilin and Adsumilli, Balu and Bovik, Alan C.},
    title     = {Seeing Beyond8Bits: Subjective and Objective Quality Assessment of HDR-UGC Videos},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    note      = {to appear},
    year      = {2026}
}

@inproceedings{Saini_2025_ICIP_CHUG,
    author    = {Saini, Shreshth and Bovik, Alan C. and Birkbeck, Neil and Wang, Yilin and Adsumilli, Balu},
    title     = {CHUG: Crowdsourced User-Generated HDR Video Quality Dataset},
    booktitle = {2025 IEEE International Conference on Image Processing (ICIP)},
    year      = {2025},
    pages     = {2504-2509},
    doi       = {10.1109/ICIP55913.2025.11084488}
}

@inproceedings{Saini_2026_WACV_BrightRate,
    author    = {Saini, Shreshth and Chen, Bowen and Wang, Yilin and Birkbeck, Neil and Adsumilli, Balu and Bovik, Alan C.},
    title     = {BrightRate: Quality Assessment for User-Generated HDR Videos},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {March},
    year      = {2026},
    pages     = {1522-1532}
}


IN NO EVENT SHALL THE UNIVERSITY OF TEXAS AT AUSTIN BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS DATABASE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF TEXAS AT AUSTIN HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

THE UNIVERSITY OF TEXAS AT AUSTIN SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE DATABASE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF TEXAS AT AUSTIN HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

-----------COPYRIGHT NOTICE ENDS WITH THIS LINE------------

Back to Quality Assessment Research page