By Lauren McCarty ’26
Artificial intelligence has many beneficial uses, but it also poses security risks from bad actors who use the technology to target individuals and institutions for financial gain.
AI can create “deepfakes,” realistic but fabricated photos, videos and audio that bad actors use to deceive vulnerable people in digital scams.
University of Dayton computer science major Sai Woon Tip worked alongside lecturer Tasnia Ashrafi Heya to create a deepfake detection tool through the College of Arts and Sciences Dean's Summer Fellowship program. The program allows undergraduate students to conduct summer research in any academic discipline under the guidance of a faculty mentor with funding from the College Dean’s Fund for Excellence.
“I chose the project to research cloning the human voice, and given the situation of AI technology advancements, how easily deepfake attacks happen,” said Woon Tip, a sophomore from Keng Tong, Myanmar.
His project focused on digital voice assistants such as Amazon's Alexa, Google Assistant and Apple's Siri, which have transitioned from technological novelties to deeply integrated components of daily life. The convergence of voice assistants and accessible deepfake technology, which generates artificial speech nearly indistinguishable from that of a real person, has created a critical security vulnerability.
Woon Tip trained two types of machine learning models, each on 96 human and 96 AI-generated voice recordings, to determine whether an audio sample is real or fake.
One used a classical model trained on extracted audio features, such as the frequency and loudness of the voice. The other used an image-based machine learning model that converts voices into images called spectrograms; the model then compares the images to distinguish real speech from a deepfake.
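The article does not include the project's code, but the two inputs it describes can be sketched in a few lines. The snippet below is an illustrative example, not Woon Tip's implementation: it computes a magnitude spectrogram (the kind of "image" an image-based model would classify) and two hand-crafted audio features (a loudness proxy and a dominant-frequency proxy) of the sort a classical model might use. The function names, frame sizes and the synthetic 440 Hz test tone are all assumptions for the demo.

```python
import numpy as np

def stft_spectrogram(signal, frame_size=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform.
    This 2-D array is the 'image' an image-based classifier would see."""
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    # shape: (frequency bins, time frames)
    return np.array(frames).T

def classical_features(signal, sample_rate=8000):
    """Hand-crafted features of the kind a classical model might use:
    RMS energy (loudness) and spectral centroid (dominant frequency)."""
    rms = np.sqrt(np.mean(signal ** 2))
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    return np.array([rms, centroid])

# Toy stand-in for a one-second voice recording: a 440 Hz tone.
sr = 8000
t = np.linspace(0, 1, sr, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440 * t)

spec = stft_spectrogram(audio)
feats = classical_features(audio, sr)
print(spec.shape)   # frequency bins x time frames
print(feats)        # [loudness proxy, dominant frequency in Hz]
```

In a real detector, arrays like `spec` would feed an image classifier (e.g. a convolutional network), while vectors like `feats` would feed a classical model such as a decision tree or logistic regression.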
He noted that more research should be conducted to understand which machine learning tool better supports the identification of deepfakes.
“Sai achieved exceptional results, demonstrating how data-driven approaches can enhance the safety, trust and ethical deployment of AI technologies that reflect the University of Dayton’s commitment to preparing students to lead responsibly in the rapidly advancing tech era," said Ashrafi Heya, a lecturer in the UD Department of Computer Science.
Woon Tip, a student employee in the University's IT department, also encourages efforts to educate the public on recognizing signs of being scammed.
Professor of Political Science Grant Neeley and a team of dedicated students are taking steps to educate students about these risks. Cyber Flyers is a student group that aims to teach the campus community to prevent deepfake and phishing scams through information tables, guest speaker events with cybersecurity professionals and its Instagram account @cyberflyersud.
The group started in 2024 and includes students from management information systems, computer science, and criminal justice and security studies majors.
Neeley is director of the UD Center for Cybersecurity and Data Intelligence, which was designated by the National Security Agency as a Center of Academic Excellence in Cyber Defense. The center helps address the growing need for a trained cybersecurity workforce and new, better methods for mitigating cybersecurity threats.
As part of the center's mission to promote online safety, both on campus and beyond, the Cyber Flyers initiative emphasizes the importance of cyber awareness and community engagement.
“Sometimes it's as simple as reminding students that if you get something out of the ordinary, ask yourself the question: ‘Am I really expecting this?’” Neeley said. “We are all cyber citizens, and cyber safety is something we should all be concerned about.”