“Enough risks were found that maybe it shouldn’t generate people or anything photorealistic.”
– AI researcher Maarten Sap, on OpenAI’s latest image and natural language model, DALL-E 2, showing bias “toward generating images of white men by default, overly sexualizing images of women, and reinforcing racial stereotypes.” People with early access to DALL-E 2 were told not to share photorealistic images in public, in large part due to these issues, reports Wired’s Khari Johnson.