Accurate deepfake detection has been an urgent issue for several years. Some researchers argue that the human ability to spot deepfakes should not be discarded.
They name four main cues that give an observer a chance to expose a deepfake:

  • Segmentation. Low quality in a photo or video is itself grounds for doubt: visual artifacts, poor illumination, pixelation, and noise can all arouse suspicion (this is especially true of cheapfakes). A minimal programmatic sketch of this cue is given after the list.
  • Face blending. Blending several faces into one is a common practice among fraudsters: by acquiring "extra" facial features, a perpetrator can pass as someone else. However, the technique may leave blunders visible to the naked eye.
  • Fake faces. A fully synthetic face can look remarkably authentic and lifelike. However, that level of trickery demands resources that are unavailable to most fraudsters, which in turn results in low-quality forgeries.
  • Low synchronization. Mismatches between uttered words and lip movements often serve as a cue that the video is not real.

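For illustration of the first cue, below is a minimal Python sketch (an assumption for this article, not a method from the cited overview) that flags blurry or pixelated frames using OpenCV's Laplacian variance as a crude sharpness proxy; the threshold, file name, and helper names are hypothetical and would need tuning in practice.

    # Sketch: flag low-quality frames that might warrant closer human inspection.
    # The Laplacian-variance threshold below is illustrative only.
    import cv2
    import numpy as np

    BLUR_THRESHOLD = 100.0  # hypothetical cutoff; tune per dataset

    def frame_quality_score(frame_bgr: np.ndarray) -> float:
        """Variance of the Laplacian: low values suggest blur or heavy pixelation."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return float(cv2.Laplacian(gray, cv2.CV_64F).var())

    def flag_low_quality_frames(video_path: str) -> list[int]:
        """Return indices of frames whose sharpness score falls below the threshold."""
        suspicious = []
        cap = cv2.VideoCapture(video_path)
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_quality_score(frame) < BLUR_THRESHOLD:
                suspicious.append(idx)
            idx += 1
        cap.release()
        return suspicious

    if __name__ == "__main__":
        # Hypothetical file name used purely as a usage example.
        print(flag_low_quality_frames("suspect_clip.mp4"))

A low score does not prove forgery; it only marks frames where the quality-related cues above deserve a closer look.
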
Source: https://antispoofing.org/Manual_Deep...neral_Overview