With the advances in generative adversarial networks (GANs), facial manipulations known as DeepFakes have caused major security risks and raised severe societal concerns. However, the popular passive DeepFake detection is an ex-post forensic countermeasure and cannot block the spread of disinformation in advance. Alternatively, precautions such as adding perturbations to real data, so that the resulting DeepFake output is unnaturally distorted and easily spotted by the human eye, have been introduced as proactive defenses. Recent studies suggest that these existing proactive defenses can be easily bypassed by applying simple image transformation and reconstruction techniques to the perturbed real data and the distorted output, respectively. The aim of this paper is to propose a novel proactive DeepFake detection technique using GAN-based visible watermarking. To this end, we propose a reconstructive regularization term added to the GAN's loss function that embeds a unique watermark at an assigned location of the generated fake image. Thorough experiments on multiple datasets confirm the viability of the proposed approach as a proactive defense mechanism against DeepFakes from the perspective of detection by human eyes. Thus, our proposed watermark-based GANs prevent the abuse of pretrained GANs and smartphone apps, available via online repositories, for malicious DeepFake creation. Further, the watermarked DeepFakes can also be detected by state-of-the-art (SOTA) DeepFake detectors. This is critical for applications where automatic DeepFake detectors are used for mass audits, given the huge cost of having human observers manually examine large amounts of data.
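The reconstructive regularization described above can be sketched as a masked penalty that pulls the generator output toward a fixed watermark pattern inside its assigned region. This is a minimal illustrative sketch, not the paper's exact formulation: the symbols `generated` (the GAN output G(x)), `watermark` (W), `mask` (M, marking the assigned location), and the weight `lam` (λ) are assumptions introduced here for clarity.

```python
import numpy as np

def watermark_reconstruction_loss(generated, watermark, mask, lam=1.0):
    """Masked L2 penalty (illustrative sketch of a reconstructive
    regularizer): inside the assigned region (mask == 1) the generated
    image is pushed toward the watermark pattern; outside the region
    (mask == 0) the term contributes nothing.
    """
    diff = mask * (generated - watermark)
    return lam * np.mean(diff ** 2)

# In training, this term would simply be added to the usual GAN
# objective (sketch):  L_total = L_GAN + watermark_reconstruction_loss(G(x), W, M)
```

With such a term, minimizing the total loss encourages every generated fake to carry the visible watermark at the chosen location, while the adversarial loss continues to govern realism elsewhere in the image.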