
DeepFake Detection System

Year

2024

Tech & Technique

PyTorch, XceptionNet, OpenCV, FFmpeg, Flask, React, Docker

Description

Designed and contributed to a scalable **Deepfake Detection framework** for identifying manipulated media using **CNN- and Transformer-based architectures**. The system provides an extensible pipeline for **training, evaluation, benchmarking, and inference** across widely used deepfake datasets, supporting both research and real-world security applications.

The framework enables standardized benchmarking of multiple state-of-the-art models on **FaceForensics++ (RAW, C23, C40)** and **Celeb-DF**, allowing fair comparison across compression levels and manipulation types.
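
A minimal sketch of how such a cross-dataset comparison could be organized is shown below. The labels and scores are synthetic placeholders standing in for real per-frame predictions, and the dataset names simply mirror the variants listed above.

```python
# Sketch: compare a detector's frame-level scores across dataset variants.
# Labels/scores are random placeholders, not real benchmark results.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
datasets = ["FF-DF (RAW)", "FF-DF (C23)", "FF-DF (C40)", "Celeb-DF"]

rows = []
for name in datasets:
    labels = rng.integers(0, 2, size=1000)                       # 0 = real, 1 = fake
    scores = np.clip(labels + rng.normal(0, 0.4, 1000), 0, 1)    # placeholder fake probability
    rows.append((name,
                 accuracy_score(labels, scores > 0.5),
                 roc_auc_score(labels, scores)))

print(f"{'Dataset':<14} {'Acc':>6} {'AUC':>6}")
for name, acc, auc in rows:
    print(f"{name:<14} {acc:>6.3f} {auc:>6.3f}")
```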

Key Features:
  • 🕵️ End-to-End Detection Pipeline: Video → frame extraction → preprocessing → model inference → classification (a minimal sketch follows this list).
  • 🧠 Multi-Model Architecture Support: ResNet, Xception, EfficientNet, MesoNet, GramNet, F3Net, ViT, and M2TR.
  • 🎯 High Detection Performance: Achieved up to 95.9% accuracy on FF-DF and 94.7% on Celeb-DF across baseline models.
  • 🧬 Frequency & Attention-Based Learning: Captures spatial, temporal, and frequency-domain forgery cues.
  • ⚙️ Research-Ready Design: Modular, configurable framework for training, evaluation, visualization, and inference.
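
The end-to-end pipeline from the first feature above can be illustrated with a minimal sketch: frames are sampled from a video with OpenCV, a face is cropped (a Haar cascade stands in here for the project's face-alignment step), preprocessed, and scored by a CNN classifier, with per-frame probabilities averaged into a video-level verdict. The ResNet-18 backbone and sampling rate are illustrative assumptions, not the production models.

```python
# Sketch: video -> frame extraction -> face crop -> preprocessing -> inference.
# A torchvision ResNet-18 with a binary head stands in for the real detectors
# (Xception, EfficientNet, ...); the Haar cascade stands in for face alignment.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)        # real vs. fake logits
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def score_video(path: str, every_n: int = 10) -> float:
    """Average fake probability over sampled face crops of one video."""
    cap = cv2.VideoCapture(path)
    probs, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_det.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces[:1]:           # largest-face selection omitted
                crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
                with torch.no_grad():
                    logits = model(preprocess(crop).unsqueeze(0))
                probs.append(torch.softmax(logits, dim=1)[0, 1].item())
        idx += 1
    cap.release()
    return sum(probs) / len(probs) if probs else 0.0

# Usage: print("fake probability:", score_video("sample.mp4"))
```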

Architecture Overview:
  • Video Processing Pipeline: Real/Fake video ingestion with frame extraction and preprocessing.
  • CNN-Based Models: Learn local spatial artifacts and texture inconsistencies in manipulated frames.
  • Transformer-Based Models: Capture global context and long-range manipulation patterns (a backbone sketch covering both model families follows this list).
  • Unified Evaluation Framework: Standardized benchmarking across FF-DF (RAW, C23, C40) and Celeb-DF datasets.
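
A minimal sketch of how CNN and Transformer backbones can share one detector interface is shown below, using torchvision's ResNet-50 and ViT-B/16 as stand-ins rather than the project's exact Xception/M2TR implementations.

```python
# Sketch: one binary-classification head over interchangeable backbones, so
# CNN (spatial artifacts) and Transformer (global context) models can be
# trained and benchmarked through the same interface.
import torch
import torch.nn as nn
from torchvision import models

class DeepfakeDetector(nn.Module):
    """Backbone-agnostic real/fake classifier (illustrative wrapper)."""

    def __init__(self, backbone: str = "resnet50"):
        super().__init__()
        if backbone == "resnet50":                    # CNN branch
            net = models.resnet50(weights=None)
            feat_dim = net.fc.in_features
            net.fc = nn.Identity()
        elif backbone == "vit_b_16":                  # Transformer branch
            net = models.vit_b_16(weights=None)
            feat_dim = net.heads.head.in_features
            net.heads = nn.Identity()
        else:
            raise ValueError(f"unknown backbone: {backbone}")
        self.backbone = net
        self.head = nn.Linear(feat_dim, 2)            # logits: [real, fake]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

# Both variants consume the same 224x224 face crops:
for name in ("resnet50", "vit_b_16"):
    logits = DeepfakeDetector(name)(torch.randn(1, 3, 224, 224))
    print(name, logits.shape)    # -> torch.Size([1, 2])
```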

Technical Highlights:
  • Implemented in PyTorch with YAML-based experiment configuration (see the configuration sketch after this list)
  • Advanced preprocessing including face alignment, normalization, and frame sampling
  • Supports training, evaluation, visualization, and single-image/video inference
  • Integrated performance metrics: Accuracy, AUC, Precision, Recall, and F1-score
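
As a rough illustration of the YAML-driven setup and the integrated metric suite, an experiment config could be parsed and scored as below; the config keys and values are hypothetical, not the framework's actual schema.

```python
# Sketch: YAML experiment configuration plus the standard metric suite.
# The config keys (model, dataset, train, ...) are illustrative only.
import yaml
from sklearn.metrics import (accuracy_score, roc_auc_score,
                             precision_recall_fscore_support)

CONFIG = """
model:
  backbone: xception
  num_classes: 2
dataset:
  name: FaceForensics++
  compression: c23
  frames_per_video: 32
train:
  batch_size: 32
  lr: 0.0002
  epochs: 20
"""

cfg = yaml.safe_load(CONFIG)
print("Running", cfg["model"]["backbone"], "on",
      cfg["dataset"]["name"], cfg["dataset"]["compression"])

def report(labels, scores, threshold=0.5):
    """Accuracy, AUC, precision, recall, and F1 from frame-level fake scores."""
    preds = [int(s >= threshold) for s in scores]
    prec, rec, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary", zero_division=0)
    return {"acc": accuracy_score(labels, preds),
            "auc": roc_auc_score(labels, scores),
            "precision": prec, "recall": rec, "f1": f1}

# Tiny placeholder example:
print(report(labels=[0, 0, 1, 1], scores=[0.1, 0.4, 0.8, 0.7]))
```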

My Role

Team Junior Lead (in collaboration with the Wisen Platform):
  • 🧠 Led development and evaluation of a 3D Attention UNet model for multimodal MRI tumor segmentation.
  • 📈 Improved overall segmentation accuracy by ~20% through multi-model ensemble (Council) strategies.
  • 🤝 Coordinated model experiments, result validation, and research alignment within the team.
