Again this nonsense of "imperceptible changes to creators’ images".
I'd bet it either makes images look significantly synthetic or the effect can be completely removed by simple gaussian/median filter (that is transparent for humans/learning given sufficient input resolution).
You really ought to read the paper. As usual, the article has a link (which goes to the arxiv.org page, which in turn has a PDF link). I'll just say I was very impressed not only by how thorough the authors were in anticipating the applications of the technique and the possible countermeasures against it, but also by the range of scenarios and consequences they work through.
It's one of those papers you need to spend a bit of time with. They don't give away the goods on the technique until about section 5, so don't think you can just read the abstract and conclusion and glance at some results.
As impressed as I was with the research, I was even more disturbed by the findings: the attack really does seem imperceptible, and it is very destructive to models, especially when multiple such attacks are mounted independently. Not only that, it seems exceptionally difficult to defend against. Perhaps someone will find an effective countermeasure, but I came away thinking the closest analogy is attacking spy satellites by filling low-earth orbit with a massive amount of debris: it's indiscriminate and, like a good poison, you don't need very much of it.
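Incidentally, the blur/median cleanup suggested in the parent comment is easy to try for yourself. Below is a minimal sketch using Pillow; the filter sizes are arbitrary picks of mine, not anything from the paper, and whether this actually strips the perturbation without visibly degrading the image is the sort of thing you'd want to check against the paper's countermeasure discussion.

    # Minimal sketch of the "just filter it out" idea: low-pass a possibly
    # poisoned image with a mild Gaussian blur, then a 3x3 median filter.
    # Uses Pillow; the radius/size values are arbitrary, not from the paper.
    from PIL import Image, ImageFilter

    def naive_cleanup(in_path: str, out_path: str) -> None:
        img = Image.open(in_path).convert("RGB")
        img = img.filter(ImageFilter.GaussianBlur(radius=1.0))  # mild low-pass
        img = img.filter(ImageFilter.MedianFilter(size=3))      # suppress pixel-level noise
        img.save(out_path)

    # e.g. naive_cleanup("maybe_poisoned.png", "cleaned.png")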
What it's not is a protection mechanism like Glaze: "The tool shifts pixels around to prevent artwork from being used as training data" (www.tomshardware.com).
They do outline a scenario in which a company might use it to protect its copyrighted characters, but you couldn't use it to protect individual artworks.