Portrait photo of Orazio Pontorno, AI Researcher

Orazio Pontorno

AI Researcher | Ph.D. in Artificial Intelligence

Italy
Open to Collaborations

AI Researcher with expertise in Deep Learning, Computer Vision, and Mathematical Modelling. Specializing in generative models, synthetic media analysis, and graph-based approaches for complex AI challenges.

4+ Publications · 10+ Citations · 5+ Projects · 3+ Collaborations

Research Interests

  • Deep Learning
  • Mathematical Modelling
  • Deepfake Detection
  • Graph & Hypergraph Theory
  • Generative AI

Curriculum Vitae

Download CV

Biography

AI Researcher with a strong background in applied mathematics and solid experience as a Data Scientist and ML Engineer in both academic and industrial contexts. My work focuses on machine learning and deep learning, with particular emphasis on generative models, synthetic media analysis, deepfake detection, statistical modeling, and graph- and hypergraph-based models.
I hold a PhD in Artificial Intelligence from Campus Bio-Medico University of Rome, along with a Master’s degree in Data Science and a Bachelor’s degree in Mathematics from the University of Catania.
My experience spans both academia and industry, including international research activity at the State University of New York Polytechnic Institute (Utica, NY) and applied work as a Data Scientist and AI Researcher in industrial settings. I combine rigorous applied mathematics with experimental AI research and real-world deployment of advanced models.

About Me

Birthday: 10-13-1999

Email: orazio.pontorno@phd.unict.it

City: Catania, Italy

Languages: Italian, English

Interests: Chess, Fitness, Travel

Skills

Machine Learning
Applied Mathematics
Big Data/Data Analysis
Data Engineering

Digital Skills

PyTorch
Databricks
PySpark
MySQL

Education

11/2023 - today

Ph.D. in Artificial Intelligence
Università Campus Bio-Medico, Rome, Italy
University of Catania, Catania, Italy

10/2021 - 09/2023

Master's degree in Data Science
University of Catania, Catania, Italy

Grade: 110/110 summa cum laude
Thesis: An AI Forensics approach for the recognition of synthetic image-generating architecture based on diffusion models

09/2022 - 11/2022

Postgraduate School Course in AI: Deep Learning, Vision and Language for Industry
Università degli Studi di Modena e Reggio Emilia, Modena, Italy

Final Project: An AI-based system for traffic light recognition

09/2018 - 12/2021

Bachelor's degree in Mathematics
University of Catania, Catania, Italy

Grade: 105/110
Thesis: Teoria dei Codici: Codici Lineari (Coding Theory: Linear Codes)

Experience

01/2026 - 06/2026

Visiting Researcher
State University of New York Polytechnic Institute, Utica, NY · Full Time

Research activity focused on developing advanced methodologies for Deepfake Detection in Medical Imaging.

03/2025 - 12/2025

AI Researcher
Life360 · Full Time

Research and develop cutting-edge ML models for personalized ad experiences and contextual targeting. Explore deep learning, reinforcement learning, and NLP for innovative ad solutions.

04/2024 - 02/2025

Data Scientist
Fantix Inc. · Contract

Research and development of enrichment models based on Machine Learning systems.

04/2023 - 06/2023

Student Research Assistant
iCTLab s.r.l. · Internship

Research activities aimed at developing deep learning algorithms in the field of Deepfake Analysis and Detection.

02/2023 - 08/2023

Research Assistant
Vicosystems s.r.l. · Scholarship

Analysis and implementation of GAN-based deep learning techniques for anomaly detection in time-series data.

Research Activities

Overview of my research contributions including publications, workshop organization, and reviewing activities in the field of Artificial Intelligence and Deep Learning.

Publications

2025

DeepFeatureX-SN: Generalization of deepfake detection via contrastive learning · Journal

Orazio Pontorno, Luca Guarnera, Sebastiano Battiato
Multimedia Tools and Applications (2025): 1-20.
The rapid advancement of generative artificial intelligence, particularly in the domains of Generative Adversarial Networks (GANs) and Diffusion Models (DMs), has led to the creation of increasingly sophisticated deepfakes. These synthetic images pose significant challenges for detection systems and present growing concerns in the realm of Cybersecurity. The potential misuse of deepfakes for disinformation, fraud, and identity theft underscores the critical need for robust detection methods. This paper introduces DeepFeatureX-SN (‘Deep Features eXtractors based Siamese Network’), an innovative deep learning model designed to address the complex task of not only distinguishing between real and synthetic images but also identifying the specific employed generative technique (GAN or DM). Our approach makes use of a tripartite structure of specialized base models, each trained using Siamese networks and contrastive learning techniques, to extract discriminative features unique to real, GAN-generated, and DM-generated images. These features are then combined through a CNN-based classifier for final categorization. Extensive experiments demonstrate the model’s superior performance, with a detection accuracy of 97.29%, strong generalization to unseen generative architectures (achieving an average accuracy of 67.40%, which surpasses most existing approaches by over 10%) and robustness against various image manipulations, all of which are crucial for real-world Cybersecurity applications. DeepFeatureX-SN achieves state-of-the-art results across multiple datasets, showing particular strength in detecting images from novel GAN and DM implementations. Furthermore, a comprehensive ablation study validates the effectiveness of each component in our proposed architecture. This research contributes significantly to the field, offering a more nuanced and accurate approach to identifying and categorizing synthetic images. The results obtained in the different configurations in the generalization tests demonstrate the good capabilities of the model, outperforming methods found in the literature.
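A minimal PyTorch sketch of the training idea described above: a Siamese feature extractor optimized with a contrastive loss so that each base model learns features specific to one image class. The backbone (ResNet-18), embedding size, and margin are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch (not the paper's code): a Siamese feature extractor trained
# with a contrastive loss, as used to specialize each base model on one class
# (real, GAN-generated, or DM-generated). Backbone, margin, and embedding size
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class SiameseExtractor(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        backbone = models.resnet18(weights=None)        # any CNN backbone works
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.backbone(x), dim=1)     # unit-norm embeddings

def contrastive_loss(z1, z2, same_class, margin: float = 1.0):
    """Pull same-class pairs together, push different-class pairs apart."""
    d = F.pairwise_distance(z1, z2)
    pos = same_class * d.pow(2)
    neg = (1 - same_class) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()

# Toy usage with random tensors standing in for image pairs.
model = SiameseExtractor()
x1, x2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
same = torch.randint(0, 2, (8,)).float()                # 1 = same class, 0 = different
loss = contrastive_loss(model(x1), model(x2), same)
loss.backward()
```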
2025

WILD: a new in-the-Wild Image Linkage Dataset for synthetic image attribution · Conference

Bongini P., Mandelli S., Montibeller A., et al.
2025 International Joint Conference on Neural Networks (IJCNN)
Synthetic image source attribution is an open challenge, with an increasing number of image generators being released yearly. The complexity and the sheer number of available generative techniques, as well as the scarcity of high-quality open source datasets of diverse nature for this task, make training and benchmarking synthetic image source attribution models very challenging. WILD is a new in-the-Wild Image Linkage Dataset designed to provide a powerful training and benchmarking tool for synthetic image attribution models. The dataset is built out of a closed set of 10 popular commercial generators, which constitutes the training base of attribution models, and an open set of 10 additional generators, simulating a real-world in-the-wild scenario. Each generator is represented by 1,000 images, for a total of 10,000 images in the closed set and 10,000 images in the open set. Half of the images are post-processed with a wide range of operators. WILD allows benchmarking attribution models in a wide range of tasks, including closed and open set identification and verification, and robust attribution with respect to post-processing and adversarial attacks. Models trained on WILD are expected to benefit from the challenging scenario represented by the dataset itself. Moreover, an assessment of seven baseline methodologies on closed and open set attribution is presented, including robustness tests with respect to post-processing.
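As a rough illustration of the closed/open-set evaluation that a dataset like WILD supports, the sketch below computes closed-set identification accuracy over the 10 known generators and an open-set rejection rate based on a max-softmax threshold. The threshold and scoring rule are assumptions for illustration, not the benchmark's official protocol.

```python
# Minimal sketch of a closed/open-set attribution evaluation of the kind WILD
# enables (illustrative only, not the official benchmark protocol). A model
# trained on the 10 closed-set generators predicts a source for each image;
# open-set images should be rejected when the top softmax score is low.
import numpy as np

def evaluate(logits_closed, labels_closed, logits_open, threshold=0.5):
    """logits_*: (N, 10) scores over the 10 closed-set generators."""
    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    p_closed, p_open = softmax(logits_closed), softmax(logits_open)
    closed_acc = (p_closed.argmax(1) == labels_closed).mean()   # closed-set identification
    open_reject = (p_open.max(1) < threshold).mean()            # open-set rejection rate
    return closed_acc, open_reject

# Toy usage with random scores standing in for model outputs.
rng = np.random.default_rng(0)
acc, rej = evaluate(rng.normal(size=(100, 10)),
                    rng.integers(0, 10, size=100),
                    rng.normal(size=(100, 10)))
```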
2024

DeepFeatureX Net: Deep Features eXtractors based Network for discriminating synthetic from real images · Conference

Orazio Pontorno, Luca Guarnera, Sebastiano Battiato
International Conference on Pattern Recognition, 21, 177-193
Deepfakes, synthetic images generated by deep learning algorithms, represent one of the biggest challenges in the field of Digital Forensics. The scientific community is working to develop approaches that can discriminate the origin of digital images (real or AI-generated). However, these methodologies face the challenge of generalization, that is, the ability to discern the nature of an image even if it is generated by an architecture not seen during training. This usually leads to a drop in performance. In this context, we propose a novel approach based on three blocks called Base Models, each of which is responsible for extracting the discriminative features of a specific image class (Diffusion Model-generated, GAN-generated, or REAL) as it is trained by exploiting deliberately unbalanced datasets. The features extracted from each block are then concatenated and processed to discriminate the origin of the input image. Experimental results showed that this approach not only demonstrates good robust capabilities to JPEG compression and other various attacks but also outperforms state-of-the-art methods in several generalization tests.
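A compact sketch of the overall architecture idea: three class-specialized base models whose features are concatenated and fed to a final classifier. The backbone and the MLP head are placeholders chosen for brevity and are not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code) of the DeepFeatureX Net idea:
# three base models, each specialized on one class (real / GAN / DM), whose
# features are concatenated and processed by a final classifier.
import torch
import torch.nn as nn
from torchvision import models

def make_base_model(feat_dim: int = 256) -> nn.Module:
    m = models.resnet18(weights=None)                  # illustrative backbone
    m.fc = nn.Linear(m.fc.in_features, feat_dim)
    return m

class DeepFeatureXLike(nn.Module):
    def __init__(self, feat_dim: int = 256, n_classes: int = 3):
        super().__init__()
        # One extractor per class; in the paper each block is trained on a
        # deliberately unbalanced dataset favouring its own class.
        self.real_net = make_base_model(feat_dim)
        self.gan_net = make_base_model(feat_dim)
        self.dm_net = make_base_model(feat_dim)
        self.classifier = nn.Sequential(               # simple MLP head for brevity
            nn.Linear(3 * feat_dim, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.real_net(x), self.gan_net(x), self.dm_net(x)], dim=1)
        return self.classifier(feats)                  # logits for REAL / GAN / DM

logits = DeepFeatureXLike()(torch.randn(2, 3, 224, 224))   # shape (2, 3)
```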
2024

On the Exploitation of DCT-Traces in the Generative-AI Domain · Conference

Orazio Pontorno, Luca Guarnera, Sebastiano Battiato
2024 IEEE International Conference on Image Processing (ICIP), 3806-3812
Deepfakes represent one of the toughest challenges in the world of Cybersecurity and Digital Forensics, especially considering the high-quality results obtained with recent generative AI-based solutions. Almost all generative models leave unique traces in synthetic data that, if analyzed and identified in detail, can be exploited to improve the generalization limitations of existing deepfake detectors. In this paper we analyzed, in the frequency domain, deepfake images generated by both GAN and Diffusion Model engines, examining in detail the underlying statistical distribution of Discrete Cosine Transform (DCT) coefficients. Recognizing that not all coefficients contribute equally to image detection, we hypothesize the existence of a unique “discriminative fingerprint”, embedded in specific combinations of coefficients. To identify them, Machine Learning classifiers were trained on various combinations of coefficients. In addition, the Explainable AI (XAI) LIME algorithm was used to search for intrinsic discriminative combinations of coefficients. Finally, we performed a robustness test to analyze the persistence of traces by applying JPEG compression. The experimental results reveal the existence of traces left by the generative models that are more discriminative and persistent under JPEG compression attacks.
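The kind of DCT-trace analysis described above can be sketched as follows: compute the 8x8 block DCT of an image, summarize each of the 64 frequencies with a simple statistic, and train a classifier on those per-coefficient features. The choice of statistic (standard deviation) and of classifier are assumptions made for illustration only, not the paper's pipeline.

```python
# Minimal sketch (assumptions: 8x8 block DCT, per-coefficient standard deviation
# as the statistic, and a generic classifier) of DCT-trace analysis; it is
# illustrative, not the paper's actual method.
import numpy as np
from scipy.fft import dct
from sklearn.ensemble import GradientBoostingClassifier

def dct_feature(gray: np.ndarray) -> np.ndarray:
    """Per-frequency statistics of 8x8 block DCT coefficients (64-d vector)."""
    h, w = (gray.shape[0] // 8) * 8, (gray.shape[1] // 8) * 8
    blocks = gray[:h, :w].reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3)
    coeffs = dct(dct(blocks, axis=-1, norm='ortho'), axis=-2, norm='ortho')
    return coeffs.reshape(-1, 64).std(axis=0)           # one statistic per frequency

# Toy usage: random arrays standing in for real vs. synthetic grayscale images.
rng = np.random.default_rng(0)
X = np.stack([dct_feature(rng.random((64, 64))) for _ in range(40)])
y = np.repeat([0, 1], 20)                                # 0 = real, 1 = synthetic
clf = GradientBoostingClassifier().fit(X, y)             # classifier over coefficient features
```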

Workshop & Challenge Organization

(DFF '25) 1st Deepfake Forensics Workshop: Detection, Attribution, Recognition, and Adversarial Challenges in the Era of AI-Generated Media

Co-organizer · ACM Multimedia 2025

This workshop aims to bring together researchers and practitioners from diverse fields, including computer vision, multimedia forensics and adversarial machine learning, to explore emerging challenges and solutions in deepfake detection, attribution, recognition and counter-forensic strategies.

Adversarial Attacks on Deepfake Detectors: A Challenge in the Era of AI-Generated Media (AADD-2025)

Co-organizer · ACM Multimedia 2025

The goal of this challenge is to investigate adversarial vulnerabilities of deepfake detection models by generating adversarially perturbed deepfake images that evade state-of-the-art classifiers while maintaining high visual similarity to the original deepfake content.

Reviewing Activity

Conference Reviewer

ACM Multimedia 2025
ICPR 2024
MetroXRAINE 2024