iBeta Level 1 Dataset

Comprehensive dataset for PAD with 30,000+ iBeta level 1 attacks from 85+ IDs

Check samples on Kaggle

iBeta Level 1 dataset summary

  • Volume: 22,000+ paper-attack videos from 80+ participants; 8,000+ replay-attack videos from 2,500+ participants
  • Coverage: Paper, cutout, and replay attacks across iBeta Level 1 vectors
  • Demographics: Adults aged 18–65, balanced gender, multi-ethnic
  • Devices: iOS and Android phones (multiple device models)
  • Conditions: Indoor and outdoor, varied lighting and backgrounds

Introduction

The iBeta Level 1 Paper & Replay Attacks Dataset offers a comprehensive collection of presentation attacks tailored for iBeta Level 1 presentation attack detection (PAD) testing. Beyond paper-based masks and printouts, it includes a diverse set of replay attacks: photo and video replays on smartphone and laptop displays under varying brightness levels, distances, and angles, reflecting real-world spoofing scenarios. Designed for researchers and developers working on liveness detection, the dataset provides broad coverage for training and validating anti-spoofing models, delivering end-to-end preparation for iBeta Level 1 certification, which tests biometric systems against ISO/IEC 30107-3, the international standard for biometric presentation attack detection.

Dataset Features

  • Active liveness sequences: Zoom-in, zoom-out, head turns, and natural blinking
  • Replay variation axes: Multiple brightness levels, distances, and viewing angles
  • Multi-display replay capture: Mobile, laptop, and PC monitors used as replay surfaces

Source and collection methodology

Data were collected from real-life selfies and short videos provided by participants, then used to produce two families of presentation attacks:

  • Paper attacks: print, cutout, cylinder, and 3D mask variations, recorded on both iOS and Android with controlled changes in angle, distance, and lighting.
  • Replay attacks (mobile + PC): photos/videos of the same participants replayed on smartphone screens (iOS/Android) and desktop monitors. Replay clips (~5–12+ s) include slow camera motion, zoom-in/zoom-out phases, varied brightness, viewing angles, and distances; phone borders are hidden when applicable.

All sequences contain explicit zoom-in and zoom-out segments to support active liveness detection and to simulate realistic spoofing attempts.

This dataset provides 5 variations of spoof attacks, listed below alongside the genuine source data.

Some of the spoof attacks in our dataset were tested on Doubango, a leading 3D liveness detection framework.

Doubango performs advanced 3D liveness checks using a single 2D image and claims to outperform market leaders such as FaceTEC, BioID, Onfido, and Huawei in both speed and accuracy.

During testing, our attack images bypassed Doubango’s security checks: the system generated green bounding boxes around the faces, indicating acceptance as “live” users. This confirms that the attacks were not flagged as spoofs and demonstrates their ability to trick even high-performance systems.

These results highlight the quality of our dataset for training robust anti-spoofing models capable of defending against evolving threats in real-world scenarios.

1. Real-life selfies & videos from participants

Genuine facial data collected in various lighting conditions and angles to ensure robust system evaluation.

2. Print and cutout paper attacks

Attackers use printed photos or cutout masks with eye and mouth holes to trick recognition systems.

3. Cylinder attack to create a volume effect

A printed face is wrapped around a cylindrical object to simulate a 3D structure. This method is effective in deceiving simple 2D detection algorithms.

4. Paper attacks on an actor with head/eye variations

A paper face is placed over a real person’s head to mimic real facial movement. Variations include blinking, head tilts, and expressions to test system resilience.

5. 3D paper masks with volume-based elements such as a nose

High-quality 3D masks incorporate raised features such as a nose to enhance realism, making them more challenging for liveness detection algorithms.

6. PC/mobile replay attacks

A pre-recorded video of a real face is played on a phone or laptop screen and captured by the camera as if it were a live user. Variations include different angles, distances, screen brightness levels, and glare to account for screen quality and reflections.

Why Axon Labs is better than competitors

One of our partners tested our dataset and a competitor’s dataset using their own liveness detection model while preparing for iBeta Level 1 certification. The results show a clear difference in difficulty between the two datasets. Both datasets were tested on a sample of approximately 200 attack attempts each, ensuring a fair comparison.

  • Our dataset presents a greater challenge for liveness detection models: the model frequently misclassified our attack images as real (score 0), meaning our spoofing techniques are more advanced and harder to detect.

  • The competitor’s dataset, on the other hand, was mostly detected as attacks (score 1), except for a single attack type where the model showed some uncertainty.

This demonstrates that our dataset provides more value for training robust liveness detection models, as it exposes them to more deceptive and realistic attacks.

Understanding the score:

Horizontal axis: score value (0 means the model judges the frame as “live”, 1 as “spoof”). Dot color shows ground truth: green = genuine face, red = spoof attack.

By training on a more challenging dataset, models can significantly improve their spoof detection capabilities, making them more resilient against real-world threats.
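ISO/IEC 30107-3, the standard behind iBeta testing, summarizes exactly this kind of score plot with two error rates: APCER (attacks accepted as live) and BPCER (genuine faces rejected). The sketch below computes both from a score list; the 0.5 threshold and all scores are hypothetical values for illustration, not part of the dataset.

```python
# Minimal sketch of the ISO/IEC 30107-3 error rates, assuming
# score semantics as above: 0 = "live", 1 = "spoof".
THRESHOLD = 0.5  # assumed decision threshold, chosen for illustration

def apcer(scores, labels, threshold=THRESHOLD):
    """Attack Presentation Classification Error Rate: fraction of spoof
    attempts (label 1) scored below the threshold, i.e. accepted as live."""
    attacks = [s for s, y in zip(scores, labels) if y == 1]
    return sum(s < threshold for s in attacks) / len(attacks)

def bpcer(scores, labels, threshold=THRESHOLD):
    """Bona fide Presentation Classification Error Rate: fraction of genuine
    attempts (label 0) scored at or above the threshold, i.e. rejected."""
    bona_fide = [s for s, y in zip(scores, labels) if y == 0]
    return sum(s >= threshold for s in bona_fide) / len(bona_fide)

# Hypothetical scores: 4 genuine faces (label 0) and 4 attacks (label 1)
scores = [0.1, 0.2, 0.6, 0.3, 0.9, 0.8, 0.4, 0.7]
labels = [0,   0,   0,   0,   1,   1,   1,   1]

print(apcer(scores, labels))  # 0.25: one attack slipped through as "live"
print(bpcer(scores, labels))  # 0.25: one genuine face was rejected
```

A harder attack dataset pushes attack scores toward 0, raising APCER and exposing weaknesses that an easy dataset would hide.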

Use cases and applications

iBeta Level 1 Certification Compliance:

  • Helps train models for iBeta Level 1 certification tests
  • Allows pre-certification testing to assess system performance before submission

In-house Liveness Detection Models:

  • Used for training and validation of anti-spoofing models
  • Enables testing of existing algorithms and identification of their vulnerabilities against spoofing attacks
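Vulnerability identification usually means breaking results down by attack type: if one spoof category bypasses the detector far more often than the rest, that is the gap to train against. A minimal sketch of such a breakdown follows; the attack-type names mirror the five spoof categories described above, while the per-clip predictions are hypothetical (1 = flagged as spoof, 0 = passed as live).

```python
from collections import defaultdict

# Hypothetical per-clip results from an existing liveness model:
# (attack_type, prediction) where 1 = flagged as spoof, 0 = passed as live.
results = [
    ("print",    1), ("print",    1), ("print",    0),
    ("cutout",   1), ("cutout",   1),
    ("cylinder", 0), ("cylinder", 1),
    ("3d_mask",  0), ("3d_mask",  0),
    ("replay",   1), ("replay",   0),
]

# attack type -> [clips that bypassed the detector, total clips]
bypass = defaultdict(lambda: [0, 0])
for attack_type, flagged in results:
    bypass[attack_type][0] += (flagged == 0)  # bypassed if not flagged
    bypass[attack_type][1] += 1

for attack_type, (passed, total) in sorted(bypass.items()):
    print(f"{attack_type}: {passed}/{total} bypassed the detector")
```

In this toy run every 3D-mask clip bypasses the model while flat print attacks mostly get caught, pointing to volume-based attacks as the category to prioritize in retraining.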

How companies achieved iBeta with us

In 2025, 21% of all companies that passed iBeta certification were Axon Labs clients who trained on datasets like this one.

Legal & Compliance

We prioritize data privacy, ethical AI development, and regulatory compliance. This dataset is collected and processed in full accordance with global data protection standards, including GDPR, ensuring legality, security, and responsible AI practices.

Download information

Sample data are available on Kaggle as three separate datasets: Paper Attacks (sample), Replay Attacks — Mobile (sample), and Replay Attacks — PC/Laptop (sample). Request full access or additional samples via the form below.

Have a question?

We collect data with our internal team, and all information is further verified by our specialists.

Once your enquiry has been sent, we will contact you to discuss the details and complete the necessary paperwork. Delivery time depends on the specific request and any additional requirements.

Our unique selling point is providing legally clean datasets to our customers. We obtain consent from all participants to use their data for AI model development, and we can provide comprehensive reporting on the licensing, data collection, and privacy compliance of our datasets. Although regulatory approaches to AI development and deployment vary across jurisdictions, we are able to serve global customers seeking to launch global AI products.

The dataset follows iBeta testing protocols and includes diverse attack scenarios that mirror real-world spoofing attempts. It covers both passive and active liveness testing requirements, with proper demographic representation and the standardized capture conditions essential for certification preparation.

The price depends on your specific requirements. Please submit a request to receive a free consultation.

Contact us

Tell us about yourself, and get access to free samples of the dataset 

Didn't find what you were looking for?

Our collection includes many datasets for various requests

© 2022 – 2026 Copyright protected