The issue of privacy is not taken as seriously as it should be. News stories, reports, ads, and other media routinely use images of people who sell or license their likeness for these purposes. This exposes their image and their face; for better privacy protection, it would be preferable to use images that do not belong to anyone at all. But is that possible?
An initiative based on artificial intelligence (AI) aims to address this problem.
It is called Generated Photos, a platform that gathers 100,000 images of AI-generated faces intended for use as stock photographs, accessible to anyone who wants them. This detail matters, because the project also seeks to avoid copyright issues.
It should be noted that these faces are generated entirely by technology: they are not the faces of real people. The goal of an initiative like this, according to those responsible, is to "democratize" creative photography. Carrying out the project took a couple of years and required more than 29,000 pictures of 69 different models.
The Generated Photos system is built on StyleGAN, an NVIDIA technology based on neural networks that can generate highly realistic images, including randomly generated human faces learned from real portraits.
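At its core, a GAN of this kind maps a random latent vector through a trained generator network to produce an image. The sketch below illustrates only that data flow with a tiny, untrained stand-in generator (a single random linear layer); the real StyleGAN is a deep convolutional network trained on real portraits, and all names and sizes here other than the 512-dimensional latent are illustrative assumptions.

```python
# Minimal sketch of GAN-style image sampling, assuming a toy
# stand-in generator rather than a trained StyleGAN model.
import numpy as np

LATENT_DIM = 512   # StyleGAN samples 512-dimensional latent vectors
IMG_SIZE = 16      # toy resolution; real models output up to 1024x1024

# Hypothetical "generator weights": one random linear layer plus tanh,
# standing in for the deep convolutional network of a real GAN.
rng = np.random.default_rng(0)
W = rng.normal(0, 0.02, size=(LATENT_DIM, IMG_SIZE * IMG_SIZE * 3))

def generate_face(seed: int) -> np.ndarray:
    """Sample a latent vector and map it to an RGB image array."""
    z = np.random.default_rng(seed).normal(size=LATENT_DIM)  # latent code
    pixels = np.tanh(z @ W)                  # generator output in [-1, 1]
    img = ((pixels + 1) / 2 * 255).astype(np.uint8)  # rescale to 0..255
    return img.reshape(IMG_SIZE, IMG_SIZE, 3)

img = generate_face(seed=42)
print(img.shape)  # (16, 16, 3)
```

Because each distinct seed yields a different latent vector, a trained generator of this shape can produce an effectively unlimited supply of unique faces, which is what makes a 100,000-image library feasible.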
The images are hosted in a public folder divided into eleven folders, with no classification by gender, skin color, hair color, or other characteristics. Furthermore, the Argentine founder, Ivan Braun, has placed special emphasis on the fact that the images are for non-commercial use and must be linked back to the original website.
Although projects like this aim to avoid using images of real people in publications, they have also drawn significant criticism from specialists in the field, who argue that custom stock photos of this kind portray an artificial diversity where it does not really exist.
Another point raised by experts is that tools like these are launched without even establishing terms of public use or safeguards against potential malicious use.
As Caroline Sanders, a specialist in bias in artificial intelligence systems and a member of the Mozilla Foundation, told Motherboard: "It is careless, and downright negligent, not to have policies that define 'damage' in terms of containment and actions. In 2019, this is a serious problem for a company that does not have these things."