“We refer to these attacks as typographic attacks. They’re far from simply an academic concern.”
OpenAI researchers, on the ‘blind spots’ of the lab’s latest computer vision model, CLIP. While CLIP shows a remarkable capacity for abstraction—its multimodal neurons respond to literal, symbolic, and conceptual representations—it is also easily fooled: “when we put a label saying ‘iPod’ on this Granny Smith apple, the model erroneously classifies it as an iPod.”
