Human perception can be subtly influenced by watermarked AI images, DeepMind study finds


Watermarked AI-generated images can make people think of cats without them knowing why, according to a new study by DeepMind.

Watermarks in AI-generated images can be an important safety measure, for example, to quickly verify whether an image is real or AI-generated without the need for forensic investigation.

Watermarking often involves adding features to the image that are invisible to the human eye, such as slightly altered pixel structures, also known as adversarial perturbations. These perturbations cause a machine learning model to misinterpret what it sees: an image might show a vase, but the model labels it a cat.
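The mechanism behind such perturbations can be illustrated with a minimal sketch in the style of the classic fast gradient sign method. Everything here is illustrative: the toy linear classifier, the class names, and the parameter `epsilon` are assumptions for demonstration, not DeepMind's actual watermarking scheme.

```python
import numpy as np

# Illustrative sketch of an adversarial perturbation (FGSM-style).
# The model, class labels, and epsilon are hypothetical, not the
# method used in the DeepMind study.

rng = np.random.default_rng(0)

# Toy "image": a 4x4 grayscale patch, and a linear model with two
# classes ("vase" = 0, "cat" = 1), one weight row per class.
image = rng.random((4, 4))
weights = rng.standard_normal((2, 16))

def logits(x):
    return weights @ x.ravel()

def margin(x):
    l = logits(x)
    return l[1] - l[0]  # "cat" score minus "vase" score

# For a linear model, the gradient of the cat-vs-vase margin with
# respect to the pixels is simply the difference of the weight rows.
grad = (weights[1] - weights[0]).reshape(4, 4)

# Step each pixel a tiny amount in the direction that raises the
# "cat" score, keeping pixel values in [0, 1].
epsilon = 0.05  # small enough to be nearly invisible
adversarial = np.clip(image + epsilon * np.sign(grad), 0.0, 1.0)

# The per-pixel change is bounded by epsilon, yet the model's
# preference shifts toward "cat".
print(np.abs(adversarial - image).max())
print(margin(image), margin(adversarial))
```

The point of the sketch is that the perturbation is bounded per pixel, so the two images look the same to a person, while the model's class scores shift measurably.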

Until now, researchers believed that these image distortions, which are intended for computer vision systems, would not affect humans.



Watermarks can affect human perception

Researchers at DeepMind have now tested this assumption in an experiment and shown that subtle changes to digital images also influence human perception.

Gamaleldin Elsayed and his DeepMind team showed human test subjects pairs of images that had been subtly altered with pixel modifications.

In a sample image showing a vase, an AI model incorrectly identified the vase as a cat or a truck after manipulation. The human subjects still saw only the vase.

However, when asked which of the two images looked more like a cat, they tended to choose the one that had been manipulated to read as a cat to the AI model, even though the two images looked identical to them.

Video: DeepMind

