Scrutinizing AI: AI Filtering and the Limits of Expression

Yonhap News Agency

| yna@yna.co.kr 2024-09-27 12:13:01

*Editor’s note: K-VIBE invites experts from various K-culture sectors to share their extraordinary discoveries about Korean culture.

 


Contributed by Lee Eun-jun (professor at Kyungil University)

 

 

The country is currently in an uproar over deepfake-related sexual exploitation materials. Women's organizations and parents alike are continuing protests in front of the National Assembly and related institutions, calling for stricter laws and harsher penalties. In this context, the bill that passed the National Assembly’s Legislation and Judiciary Committee on the 25th is a welcome development, particularly from my perspective as both an educator and a woman.

 

The proposed bill stipulates that anyone who knowingly possesses or views sexual exploitation materials can be sentenced to prison. The amendment to the Sexual Violence Punishment Act introduces a new offense of possessing, purchasing, storing, or viewing fabricated videos, including deepfake sexual exploitation materials, punishable by up to three years in prison or a fine of up to 30 million won.

 

A notable point of the bill is the inclusion of the word "knowingly" in Article 14-2, Clause 4, following discussions among lawmakers. This clarifies that individuals who unknowingly saved or viewed deepfake content will not face investigation or punishment; liability applies only to those who were aware of the material's nature.

 

As technology advances, the ethical considerations that come with it become increasingly important. The issue isn't just that deepfakes can be created by anyone, even outside of professional video industries, but also the need to examine the foundations of the technology that enables these crimes.

 

Most AI-generated images or videos are created through text-based prompts. Recently, design firms have even begun hiring "prompt designers" who specialize in crafting and executing commands efficiently. This highlights just how much the world has changed.

 

◇ Artistic Expression and the Artist's Philosophy

 

As a media artist, I often search for and combine various AI-generated images. Recently, while exploring image-generation AI, I frequently encountered images with excessive exposure or violent content. While AI technology has deeply embedded itself into modern life, offering innovations that were once impossible, ethical issues persist.

 

To address these concerns, prompt keyword filtering has been implemented, which blocks certain words during the prompt input process. The issue of filtering prompts in AI-generated images has become a more pressing matter as AI technology evolves. Many companies now focus on preventing inappropriate content from being generated through user inputs.

 

Firms like OpenAI and Adobe have strengthened their filtering systems to restrict the generation of content related to specific keywords or themes, preventing harmful outcomes from both intentional and unintentional inputs by users.

 

◇ Text Filtering in AI

 

The issue of prompt filtering first arose in text-based AI models, primarily due to concerns about the potential for generating fake news or hate speech. In response, many companies developed prompt filtering and monitoring systems for their AI technologies.

 

Similar problems have emerged in AI image generators, particularly since around 2020, when text-to-image models began receiving significant attention. Models like DALL-E, Midjourney, and Stable Diffusion, which build on diffusion techniques rather than the earlier Generative Adversarial Networks (GANs), excel at creating images from text prompts. However, they also carry risks of generating inappropriate or controversial images.

 

As a result, filtering systems have become an essential component of AI development, striking a balance between innovation and ethical responsibility.

 

▲ A captured image showing the result of an attempt to generate weapons using Microsoft Copilot. 

 

◇ AI Filtering and the Future Challenges of Expression

 

AI image-generation models like OpenAI’s DALL-E and Microsoft’s Copilot are designed to block prompts containing specific keywords related to sexual content, violent imagery, or political propaganda. For example, keywords like "weapon" or "nudity" are automatically filtered, causing the request to either be denied or replaced with a harmless image.
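The keyword-based blocking described above can be sketched in a few lines. This is a deliberately minimal illustration, not any vendor's actual implementation: real systems rely on much larger blocklists, trained classifiers, and context-aware models, and the keyword list and function name here are assumptions for the example.

```python
import re

# Illustrative blocklist only -- production filters use trained
# classifiers and context, not bare keyword matching.
BLOCKED_KEYWORDS = {"weapon", "nudity"}

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the filter, False if it is blocked."""
    words = re.findall(r"[a-z]+", prompt.lower())
    return not any(word in BLOCKED_KEYWORDS for word in words)
```

A prompt such as "a cat in a garden" would pass, while "a soldier holding a weapon" would be blocked, after which the service either denies the request or substitutes a harmless result.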

 

Similarly, Midjourney employs a strong "NSFW" (Not Safe For Work) filter to prevent the generation of inappropriate content. This filter scans both text input and image output to ensure that they meet ethical standards within the community. However, like any automated system, it is not without flaws.
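The two-stage check described above, screening the text input first and the generated image afterward, can be sketched as a simple pipeline. The filter and generator functions here are stand-ins passed in as parameters; real NSFW detectors are trained image classifiers, not the placeholders shown.

```python
from typing import Callable, Optional

def generate_safely(prompt: str,
                    text_filter: Callable[[str], bool],
                    generate: Callable[[str], bytes],
                    image_filter: Callable[[bytes], bool]) -> Optional[bytes]:
    """Illustrative two-stage safety pipeline.

    Returns the generated image bytes only if both the prompt and the
    output pass their respective checks; otherwise returns None.
    """
    if not text_filter(prompt):        # stage 1: screen the text input
        return None
    image = generate(prompt)
    if not image_filter(image):        # stage 2: screen the image output
        return None
    return image
```

Checking the output as well as the input matters because, as the article notes next, ambiguous or metaphorical prompts can slip past a text filter and still produce inappropriate images.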

 

◇ Limitations of Filtering Systems and Future Challenges

 

Filtering systems, while effective in blocking certain explicit keywords or clearly inappropriate expressions, are not perfect. They may struggle with ambiguous prompts or metaphorical language, which the system may not properly detect or understand.

 

Overly strict filtering could also limit artists from using AI to create experimental or critical works, raising concerns about how to balance artistic freedom with social responsibility. This challenge—striking a balance between creative expression and ethical considerations—is both a technical and moral issue.

 

While we live in an era where much is left to the audience's interpretation, the deep thought and effort that go into creating art remain unchanged, whether in the age of AI or during the Renaissance. The tool—whether it’s AI or a paintbrush—matters less than the thoughtful intent behind the creation.

[ⓒ K-VIBE. Unauthorized reproduction and redistribution prohibited]