The goal is to respond to a user's request with the highest possible precision. To achieve this, the model must process massive amounts of data in every form. Datasets scraped from the Internet play a central role in training such artificial intelligence: the model feeds on everything that can be found on the web, stereotypes, prejudices, and discrimination included.
In the presentation of its new product, Google once again warns of this reality, which prevents the company from releasing its model:
Several ethical challenges face text-to-image research writ large, Google says. The downstream applications of text-to-image models are varied and can have a complex impact on society, and the risks of misuse raise concerns about responsibly releasing code and demos.
For now, as with Dall-E, the American company has decided not to publish the code or offer a public demo. "Preliminary assessment also suggests that Imagen encodes several social biases and stereotypes, including a general bias towards generating images of people with lighter complexions and a tendency to align images depicting different professions with revealed gender stereotypes." The company hopes to make further progress on these challenges before it can safely open up its model and avoid potential dangers and abuses.