Once a month or so, another research group announces that its new software detects many or all deepfakes, whew, problem solved. It's an example of telling the truth selectively: The researchers can only count images they knew ahead of time were fake. If they collected data from the internet, then any fakes too good to be detected went undetected, so they were never counted as misses. There is a second, less fundamental shortcoming: Academic research has value only if it is published, and any faker who wants to bypass a published detector can build the detector into their deepfake program as an adversary, which will then automatically learn to generate fakes that the detector can't catch.
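The evasion trick is simple enough to sketch in toy form. Everything below is illustrative, not any real detection system: the "published detector" is a made-up linear score, and the faker evades it by querying it and hill-climbing, which stands in for the adversarial training a real deepfake program would do.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "published detector": a fixed, publicly known linear score
# over an 8-number feature vector. Score > 0 means "flagged as fake".
w = rng.normal(size=8)

def detector_score(x):
    return float(w @ x)

# A fake, constructed so the detector initially flags it (score > 0).
fake = np.sign(w) * np.abs(rng.normal(size=8))
assert detector_score(fake) > 0.0

# Because the detector is public, the faker can query it freely:
# propose small random tweaks and keep any that lower the score.
for _ in range(500):
    candidate = fake + rng.normal(scale=0.05, size=8)
    if detector_score(candidate) < detector_score(fake):
        fake = candidate

# The tweaked fake now slips under the detection threshold.
evaded = detector_score(fake) < 0.0
```

The detector never sees the final fake during its own evaluation, which is the selection bias from the paragraph above in miniature: the fakes it would miss are exactly the ones tuned against it.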
the Daily Whale
copyright 2021, 2024 Jay J.P. Scott
<jay@satirist.org>