Massive advances in image processing and AI computation have made it much easier to create and modify stunning images. With modern image editing tools, producing fake images, such as replacing one's own face with someone else's, has become much easier. Generative Adversarial Networks (GANs) can also be used to synthesize realistic human faces. Fake images can cause numerous problems: they can be used to misrepresent data, harm people, and fabricate convincing fake evidence. In this research, we propose Fake Face Detect, a forensic image analysis framework that uses a neural-network-based classifier to distinguish fake face images from real ones. We focus on recognizing fake images that are created not only manually by humans but also automatically by Generative Adversarial Networks. Furthermore, we assume a powerful adversary who can modify or delete the metadata of the original image at will. We show that Fake Face Detect achieves high accuracy in recognizing fake face images created both by humans and by Generative Adversarial Networks. Detecting fake facial images is therefore fundamental to protecting people from such abuses.
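The abstract describes a neural-network-based binary classifier that separates fake face images from real ones. The sketch below is purely illustrative and is not the authors' implementation: the feature dimensionality, network shape (one hidden layer), training loop, and the synthetic "real vs. fake" feature clusters are all assumptions made for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyFakeFaceClassifier:
    """Minimal one-hidden-layer binary classifier (illustrative only).

    Inputs are assumed to be fixed-length feature vectors extracted
    from face images; output is the probability the image is fake.
    """

    def __init__(self, n_features, n_hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        # Hidden layer with tanh activation, sigmoid output for P(fake)
        self.h = np.tanh(X @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2).ravel()

    def train(self, X, y, lr=0.1, epochs=200):
        # Plain gradient descent on binary cross-entropy loss
        for _ in range(epochs):
            p = self.forward(X)
            grad_out = (p - y)[:, None] / len(y)          # dL/dlogit for BCE
            grad_h = (grad_out @ self.W2.T) * (1 - self.h ** 2)
            self.W2 -= lr * self.h.T @ grad_out
            self.b2 -= lr * grad_out.sum(axis=0)
            self.W1 -= lr * X.T @ grad_h
            self.b1 -= lr * grad_h.sum(axis=0)

    def predict(self, X):
        # Threshold the fake-probability at 0.5
        return (self.forward(X) >= 0.5).astype(int)

# Synthetic stand-in data: two well-separated feature clusters
# (label 0 = "real", label 1 = "fake"); real datasets would use
# features extracted from actual and GAN-generated face images.
rng = np.random.default_rng(1)
X_real = rng.normal(-1.0, 0.5, (100, 8))
X_fake = rng.normal(1.0, 0.5, (100, 8))
X = np.vstack([X_real, X_fake])
y = np.concatenate([np.zeros(100), np.ones(100)])

clf = TinyFakeFaceClassifier(n_features=8)
clf.train(X, y)
accuracy = (clf.predict(X) == y).mean()
```

On these cleanly separated synthetic clusters the classifier converges quickly; the point is only to show the shape of the pipeline (features in, fake/real label out), not the accuracy the paper reports.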
Article Details
Unique Paper ID: 154912
Publication Volume & Issue: Volume 8, Issue 12
Page(s): 633 - 638