Our paper formalises the notion of proofs of video authenticity and its security model, and describes Eva, a cryptographic video authentication protocol that supports lossy codecs and arbitrary edits and is proven secure under well-established cryptographic assumptions. Compared with previous cryptographic methods for image authentication, Eva not only handles the significantly larger amounts of data produced by complex lossy video encoding, but also achieves prover time linear in the video size, with constant RAM usage and constant proof size. Generating zero-knowledge proofs is notoriously slow, but our implementation of Eva integrates the Nova folding scheme and other optimisations to make it practical: for a 2-minute HD (1280 × 720) video encoded in H.264 at 30 frames per second, Eva produces a 448-byte proof in about 2.4 hours on consumer-grade hardware (2.6 μs per pixel), surpassing state-of-the-art cryptographic image authentication schemes by more than an order of magnitude in both prover time and proof size.
Further info:
In Arash, the recording device divides the video into blocks. For each block, it computes a discrete cosine transform (DCT) of the pixel values in the block, truncates or quantises the DCT coefficients (as in JPEG encoding), and signs hashes derived from them. A publisher can edit the video and the signatures; the signed material for edited blocks is eliminated from the data the user receives, ensuring confidentiality of the redacted material. Full details of how this works are in the (forthcoming) paper. When the user verifies the video, the user interface lets them view a representation of the signed data, as well as a colour-coded video showing which blocks have been edited (coloured red) and which still match the original signature (coloured green).
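The per-block pipeline above (DCT, quantisation, then hashing the result for signing) can be sketched as follows. This is a minimal illustration, not the scheme from the paper: the 8 × 8 block size, quantisation step, and SHA-256 hash are assumptions chosen for familiarity from JPEG, and the actual signing of the hash is omitted.

```python
import hashlib
import math

BLOCK = 8  # assumed 8x8 pixel blocks, as in JPEG; the paper's size may differ

def dct2(block):
    """Naive 2-D DCT-II of a BLOCK x BLOCK grid of pixel values."""
    n = BLOCK
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

def quantise(coeffs, step=16):
    """Quantise DCT coefficients with a uniform step (illustrative value)."""
    return [[round(c / step) for c in row] for row in coeffs]

def block_hash(qcoeffs):
    """Hash the quantised coefficients; a hash like this is what gets signed."""
    data = b"".join(c.to_bytes(4, "big", signed=True)
                    for row in qcoeffs for c in row)
    return hashlib.sha256(data).hexdigest()

# A flat grey block: all DCT energy lands in the DC coefficient.
flat = [[128] * BLOCK for _ in range(BLOCK)]
q = quantise(dct2(flat))
print(q[0][0])          # DC term: (1/8) * 64 * 128 / 16 = 64
print(block_hash(q))    # digest a verifier would check against the signature
```

Because the hash covers only the quantised coefficients, re-encoding that preserves them leaves the hash (and hence the signature) valid, while any edit that changes a block's coefficients breaks it, which is what the red/green visualisation reflects.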
The video above shows the signed data from an original video of actor Colin Farrell (on the left), which is manipulated using DeepFake maker to replace the face with that of actor Ryan Reynolds. The data and the manipulation method come from the CelebDF paper. The visualisation on the right shows that the face replacement has been detected around the eyes, nose, and mouth.