19 June 2021 - deepfakes 2


Self-Defeating Strategy

Once a month or so, another research group comes out with a claim that their new software detects many or all deepfakes, whew, problem solved. It is an example of telling the truth selectively: they can only count images that they knew ahead of time were fake. If they collected data from the internet, then any fakes too good to be detected went undetected and uncounted. There is a less fundamental shortcoming, which is that academic research has value only if it is published. Any faker who wants to bypass a published detection method can build it into their deepfake program, which will then automatically learn to generate fakes that the method can’t detect.

clue:

Deepfake programs work like this: They have one learning agent that fakes up pictures, and another that tries to detect fake pictures. They compete, each one-upping the other. You can make the detector smarter by feeding it outside information from other algorithms. Then the faker either becomes smarter to match, or else runs into its limits. In the second case, give the faker more resources to relax its limits.
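This is the generative-adversarial setup, and the loop can be sketched with a toy one-dimensional example: the real "pictures" are just numbers drawn from a fixed distribution, the faker is a two-parameter generator, and the detector is a logistic classifier. Everything here (the distributions, parameter names, learning rate) is illustrative, not from any actual deepfake codebase:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Real "pictures": numbers drawn from a fixed distribution.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# The faker: produces x = mu + sigma * z from random noise z.
mu, sigma = 0.0, 1.0
# The detector: D(x) = sigmoid(w*x + c), probability x is real.
w, c = 0.0, 0.0

lr = 0.05
for step in range(2000):
    z = rng.normal(0.0, 1.0, 64)
    fake = mu + sigma * z
    real = real_batch(64)

    # Detector step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Faker step: push D(fake) toward 1, i.e. fool the detector.
    d_fake = sigmoid(w * fake + c)
    grad_mu = np.mean(-(1 - d_fake) * w)
    grad_sigma = np.mean(-(1 - d_fake) * w * z)
    mu -= lr * grad_mu
    sigma -= lr * grad_sigma

# The faker's mean drifts toward the real mean of 4: each agent's
# improvement forces the other to improve, until fakes are hard to tell apart.
```

The "feed the detector outside information" move corresponds to adding a published detection algorithm's score as an extra input to D; the same loop then trains the faker to defeat that algorithm too, which is exactly the self-defeating dynamic described above.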

give me a clue so sweet and true

the Daily Whale || copyright 2021, 2024 Jay J.P. Scott <jay@satirist.org>