expected distribution matches the actual segmentation distribution. If the predicted distribution diverges substantially from the true segmentation distribution, the loss is large, indicating a high degree of ambiguity or disorder in the prediction. If the predicted segmentation distribution closely matches the true segmentation distribution, the loss is small.
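As a minimal sketch of this idea, the snippet below implements a pixel-wise binary cross-entropy between a predicted segmentation map and a ground-truth mask. It assumes a PyTorch setup with binary (real/fake) pixel labels; the function and variable names are illustrative and not taken from the authors' code.

import torch
import torch.nn.functional as F

def segmentation_loss(pred_logits, true_mask):
    """Binary cross-entropy between predicted and true segmentation maps.

    pred_logits: (N, 1, H, W) raw network outputs for the fake region.
    true_mask:   (N, 1, H, W) ground-truth mask, 1 = manipulated pixel.
    The loss grows when the predicted distribution diverges from the true
    mask and stays small when the two distributions closely match.
    """
    return F.binary_cross_entropy_with_logits(pred_logits, true_mask)

# Example usage with random tensors standing in for real data.
logits = torch.randn(4, 1, 256, 256)
mask = (torch.rand(4, 1, 256, 256) > 0.5).float()
loss = segmentation_loss(logits, mask)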
4. Experiment and Comparison
4.1. Experimental Design and Data Collection
Images from the FaceForensics++ dataset, manipulated with the DeepFakes, Face2Face, FaceSwap, and NeuralTextures methods, were used in the experiments to verify the effectiveness of the proposed approach. In each experiment, authentic photographs were mixed with images fabricated by one of these four techniques, and the resulting set was submitted to the proposed detector. Each trial uses a 50/50 split between fake and real images. This mixed set of photos is fed into the proposed detector to identify fabricated regions and determine whether each image is genuine. Because other relevant studies also use the FaceForensics++ dataset, we can compare our results with theirs to evaluate the detection capability of their algorithms and their ability to identify forged images.
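The following sketch illustrates one way such a balanced 50/50 evaluation could be set up. It assumes a hypothetical detector object exposing a score() method that returns the probability an image is fake, and uses scikit-learn's roc_auc_score for the image-level AUC; the file layout and detector interface are assumptions, not the authors' implementation.

import random
from sklearn.metrics import roc_auc_score

def build_balanced_split(real_paths, fake_paths, seed=0):
    """Sample an equal number of real and fake images (50/50 split)."""
    rng = random.Random(seed)
    n = min(len(real_paths), len(fake_paths))
    reals = rng.sample(real_paths, n)
    fakes = rng.sample(fake_paths, n)
    paths = reals + fakes
    labels = [0] * n + [1] * n   # 0 = genuine, 1 = manipulated
    return paths, labels

def evaluate(detector, paths, labels):
    """Image-level AUC from the detector's fake-probability scores."""
    scores = [detector.score(p) for p in paths]
    return roc_auc_score(labels, scores)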
We used images from the FaceForensics++ dataset in C23 format, compressed with H.264 at a constant rate quantization setting of 23. C23 images simulate real-world conditions in which compression or other factors degrade the quality of manipulated photographs. A stronger compression setting, such as C40, renders the image very blurry; such an image is of little use in everyday situations, even though it is also harder to tell whether it is genuine. The creators of the FaceForensics++ dataset collected its 1,000 pristine videos from YouTube.
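For reference, a comparable compression level can be produced with ffmpeg using libx264 and its constant rate factor option; the sketch below is an assumed reproduction of the C23-style setting, not the dataset authors' exact preprocessing pipeline, and the paths are illustrative.

import subprocess

def compress_video(src, dst, crf=23):
    """Re-encode a video with libx264 at the given constant rate factor.

    crf=23 approximates the C23 setting used in the experiments; crf=40
    (the C40 setting) yields much stronger compression and blurrier frames.
    """
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-crf", str(crf), dst],
        check=True,
    )

# Example: compress_video("raw/video0.mp4", "c23/video0.mp4", crf=23)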
Figure 1. Randomly selected DeepFakes photos and their predicted segmentation results with one-shot fine-tuning. The top sub-row shows the results obtained with the proposed method and the bottom sub-row shows the results obtained without it. In each sub-row, from right to left, are the DeepFakes-altered image, the ground truth of the altered region (mask), the binary predicted output, and the grey-scale predicted output.
Figure 2. Four rows of randomly selected DeepFakes photos and the predicted results for the fake region using one-shot fine-tuning. In each row, the top sub-row is the result of the proposed approach, while the bottom sub-row is the result without the proposed method.
Figure 3. Comparison of AUC between random initial weights (without the proposed method) and meta-learning for detecting images altered by the Face2Face manipulation method. The x-axis is the size of the fine-tuning training set and the y-axis is the AUC value.