Artificial Intelligence-Altered Videos, Image-Based Sexual Abuse, and Data Privacy Concerns
Part of
Journal of international women's studies, 2023-03, Vol.25 (2), p.COV2
Place / Publisher
Bridgewater State College
Year of Publication
2023
Source
EZB Electronic Journals Library
Description/Notes
Artificial Intelligence (AI) has become embedded in human life, and this symbiotic relationship between technology and humanity is here to stay. One such use of AI is deepfakes. The use of AI for deepfakes is arguably one of the most controversial topics because it raises ethical issues. Deepfakes are images or recordings that have been convincingly altered and manipulated to misrepresent someone as doing or saying something that they did not actually do or say. These manipulations thrive in the political arena and, more recently, in the pornography industry, where women's faces are superimposed onto other bodies to create video illusions that inflict non-consensual sexual-image abuse and other harms. It is no surprise that the malicious use of deepfake technology has prompted regulatory legislation such as the United States National Defense Authorization Act (NDAA) and the recent ratification of amendments to the Digital Services Act (DSA) criminalizing malicious deepfakes. Scholars, advocates, and victims continue to call for more specific and stricter laws to regulate deepfakes and assign penalties for non-adherence. This paper presents a timely analysis of deepfake pornography as a form of image-based sexual abuse and of the position of the law on the malicious use of deepfake technology. Data protection concerns under the General Data Protection Regulation, as well as policy recommendations and measures for redress, control, and eradication, are also addressed.