25.4.20

in the beginning was porn

'deepfake' image from 'el correo' illustrating an article on the subject
at the end of 2017, pornographic videos starring some of the most famous actresses and singers of the moment began circulating on the internet. the videos went viral and were seen by millions of people. it soon emerged that the women were not the real protagonists of the videos, but the victims of a new technological tool that, using artificial intelligence and other advanced techniques, makes it possible to insert anyone's face into a video.

it was only the beginning. before long, Merkel, Trump, and Macri were also victims of what is known as 'deepfake'. Obama was used, without his consent, to exemplify the possible nefarious uses of this technology. we can watch him give a speech saying what the forger wanted him to say but that he never actually said. and yet the result is a very 'real' video.

image manipulation has a long tradition, but this is far more dangerous. the body and facial expressions are extraordinarily realistic, and the imitation of the person's voice and gestures so exact that it is impossible to detect the forgery without sophisticated digital verification software. and the danger of 'deepfake' is that this technology is within anyone's reach.

a deranged ex-lover can anonymously produce and spread across social networks a video that mimics the voice, gestures, and face of the woman who left him; a video in which she does or says the most compromising things. footage of police giving a beating to an old woman taking part in an anti-government protest could provoke violent clashes between the protest movements and the police. students could record a compromising video of a teacher they cannot stand.

the possible uses of 'deepfake' in politics, the economy, or international relations are as varied as they are sinister. the release of a video showing a candidate for a country's presidency saying or doing reprehensible things shortly before an election will become the most frequent electoral trick.

the potential of fake videos to sour relations between countries and exacerbate international conflicts is also enormous. it has already happened.

the Emir of Qatar appeared in a video praising and supporting Hamas, Hezbollah, the Muslim Brotherhood, and Iran. this provoked a furious reaction from Saudi Arabia, the United Arab Emirates, Bahrain, and Egypt, which already had serious differences with Qatar. they denounced the emir's speech as support for terrorism, broke off diplomatic relations, closed borders, and imposed a blockade by air, sea, and land. the truth is that the emir never said any such thing. the video was fake. what was very real was the boycott that nation had to endure.

the threat that 'deepfake' poses to social harmony, democracy, and international security is obvious. the antidotes to this threat are far scarcer, although there are some proposals. all organizations that produce or distribute photographs or videos should be required to use technological locks that make their visual material unalterable. people must also have access to technologies that protect them from becoming victims of 'deepfake'. laws must adapt. anonymity on the web must be made harder to abuse. all this is necessary but not enough. much more will have to be done.

we have entered an era in which the difference between truth and lies, between facts and falsehoods, has been eroding. and with it, trust in institutions and in democracy. 'deepfake' is just one more weapon in the arsenal available to the merchants of lies.

we have to stand up to them.

translation and distortion by @xindiriz
xindiriz@gmail.com
original text by @moisesnaim

it started with porn

at the end of last year a series of pornographic videos began showing up on the internet. this is nothing new, but these were different because they starred some of the world's top actresses and singers. naturally, they went viral: millions of people around the world saw them. very quickly it became clear that Scarlett Johansson, Taylor Swift, Katy Perry, and other artists were not the real protagonists of the sex videos, but rather the victims of a new technology that - using artificial intelligence and other advanced digital tools - allows their creators to insert anyone's face into a very credible video.

and this was just the beginning. it wasn't long before Angela Merkel, Donald Trump, and Mauricio Macri were also victims of what is known as 'deepfake'. Barack Obama was used, without his consent, to exemplify the possible nefarious uses of the new technology. we can watch Obama saying what the forger wants him to say but that he has never said. it is, nevertheless, a very realistic video.

image manipulation is nothing new. authoritarian governments have a long history of 'disappearing' disgraced leaders from official photographs. and since 1990 Photoshop has allowed users to alter digital photographs, a practice that has become so common that Merriam-Webster recognizes 'photoshop' as a verb.

but deepfake is different. and much more dangerous. in just the year since the fake celebrity porn videos appeared, the technology has improved dramatically. everything about these videos is hyper realistic, and the person's voice and gestures are so exactly rendered that it becomes impossible to know it is a forgery without using sophisticated verification programs. and perhaps the biggest danger of deepfake is that the technology is available to anyone.

a distraught ex could create (and anonymously distribute) a video that perfectly imitates the voice, gestures, and face of the woman who left him and in which she appears to be doing and saying the most shameful and degrading things. a video of the police brutally beating an elderly woman who is participating in a street march could provoke violent clashes between protesters and the police. the respected leader of a racial or religious group could incite his followers to attack members of another race or religion. some students could produce a compromising video of a teacher they despise. or digital extortionists could threaten a company with disclosing a damaging video if the company does not pay a hefty ransom.

the possible uses of deepfake in politics, economics, or international relations are as varied as they are sinister. the release of a video showing a presidential candidate saying or doing reprehensible things shortly before an election will certainly become a more commonly used election trick. even if the candidate's opponent doesn't approve the hoax, his most radical followers can produce and distribute the video without asking for anyone's permission.

the counterfeit videos' potential to cloud relations between countries and exacerbate international conflicts is also enormous.

and this is not hypothetical. it has already happened. last year, the Emir of Qatar, Tamim bin Hamad al-Thani, appeared in a video praising and supporting Hamas, Hezbollah, the Muslim Brotherhood, and Iran. this provoked a furious reaction from Saudi Arabia, the United Arab Emirates, Bahrain, and Egypt, countries that already had strained ties with Qatar. they denounced the emir's speech as supporting terrorism, broke diplomatic relations, closed the borders, and imposed a blockade by air, sea, and land. the reality, however, is that the Emir of Qatar never gave that speech. while the video was not produced with deepfake technology, it was enough to provoke a dangerous escalation of a conflict that was already simmering. the video was a fake, but the boycott that resulted is very real, and remains in force.

the threat that deepfake represents to social harmony, democracy, and international security is obvious. the antidotes to this threat are much less clear, although there are some proposals. all organizations that produce or distribute photographs or videos should be forced to use technology blocks that make their visual and audio material unalterable. people must also have access to technologies that protect them from being victims of deepfake. laws must be adapted so that those who defame or cause harm to others through the use of these technologies can be brought to justice. the ease with which it is now possible to operate anonymously on the web should not be tolerated. all this is necessary, but insufficient. we will need to do much more.

we have entered an era in which the ability to differentiate the truth from lies, facts from fiction, is being eroded. and with it, trust in institutions and in democracy. deepfake is another new and powerful weapon in the arsenal that the merchants of lies have at their disposal.

we have to fight them.
