Yonhap News Agency
| yna@yna.co.kr 2024-10-30 07:18:13
*Editor’s note: K-VIBE invites experts from various K-culture sectors to share their extraordinary discoveries about Korean culture.
Scrutinizing AI: Ethics and Freedom of Expression
Contributed by Lee Eun-jun (professor at Kyungil University)
On October 18, a coalition of 26 university student groups, known as "Out with Deepfake Sexual Crime - Student Joint Action," held a press conference at Gwanghwamun Plaza in Seoul. The group raised concerns about the prevalence of deepfake sexual exploitation within universities and criticized both the Ministry of Education and universities for failing to investigate and prevent these crimes.
It stated, “While universities are under the Ministry’s guidance and oversight, they remain blind both to the reality of deepfake sexual crimes and to the steps needed to prevent them.”
The students pointed out that over 70 universities, including Seoul National University and Inha University, were discovered to have had deepfake sexual exploitation material circulating among students. Despite this, they argued, the focus remains largely on youth-centered issues, leaving university-based victims marginalized and unaddressed. They called for a nationwide investigation into deepfake sexual crimes at universities, victim support measures, and expanded funding and staffing for university human rights centers.
With young people increasingly fluent in AI prompting and programming, the issue has spurred fresh debate over where education in AI ethics should begin, including the essentials of prompt filtering. This, however, raises a critical tension: while filtering systems are necessary, their implementation can inadvertently limit artistic expression by over-regulating creative freedom.
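To make the term concrete, prompt filtering in its simplest form screens a user's request before the model ever sees it. The sketch below is illustrative only, not any platform's actual system: real services rely on trained classifiers rather than a keyword list, and the blocklist terms here are invented for the example.

```python
# Illustrative only: a naive keyword-based prompt filter of the kind
# sometimes used as a first line of defense. Production systems use
# ML classifiers with contextual understanding, not static lists.
BLOCKLIST = {"deepfake", "non-consensual"}  # hypothetical terms

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt contains a blocklisted term."""
    words = prompt.lower().split()
    return any(term in words for term in BLOCKLIST)

print(flag_prompt("make a deepfake of my classmate"))  # True
print(flag_prompt("paint a sunset over Seoul"))        # False
```

The weakness of such static filtering is exactly the tension described above: it cannot distinguish a harmful request from a journalist or artist discussing the same topic.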
◇ The Complexity of Setting Filtering Standards
As both an educator and a media artist, the author emphasizes the challenge of balancing filtering policies with artistic freedom.
While AI advancements allow artists to experiment with visual ideas and diverse prompts, overly strict filtering constraints can interrupt this creative process. In recent months, filtering systems have evolved, leading to unexpected restrictions during creative work, adding uncertainty and disrupting workflows for artists.
Given that AI model developers and platforms independently set these filtering standards, the lack of transparency and consistency often complicates matters. When artists cannot anticipate how these filters will behave, the resulting uncertainty can interfere with the creative process. The author argues that filtering systems should be designed to respect artistic freedom while ensuring social responsibility. The automatic blocking of specific content, without regard for context, can hinder artistic exploration, posing a broader issue that affects freedom of expression at a fundamental level.
◇ Seeking Social Consensus and Transparent Policies
The author suggests the need for a transparent policy framework and social consensus around the scope and application of filtering criteria. Similar to ongoing discussions in fields like music and intellectual property, there should be collaborative platforms where artists, policymakers, and platforms can discuss and negotiate filtering criteria.
Presently, there is a notable concern that both policymakers and academia appear to align closely with platform providers without adequate critique or collaborative frameworks, leaving creators with limited avenues to advocate for their expressive needs.
To foster creative freedom and responsible AI use, it is crucial to establish support structures where artists can engage with platform providers and influence the frameworks governing their art. This approach would not only uphold ethical standards but also nurture a balanced creative ecosystem that respects both freedom of expression and societal accountability.
◇ The Need for Ethical Education alongside AI Development
As AI model developers innovate, their systems must learn to grasp the contextual meaning of prompts and respect artistic expression. At the same time, addressing social problems like deepfake misuse requires robust ethical education, overseen by the relevant authorities.
To refine AI systems responsibly, dynamic filtering mechanisms should be introduced that adapt based on user feedback. This approach recalls Avid's development of NewsCutter, a video editing program that incorporated feedback from broadcast journalists to create an intuitive system for news editing over two decades ago. Similarly, today's filtering systems can integrate contextual assessments and adaptive learning, allowing AI to fine-tune filtering criteria based on user interactions and assessments. This dynamic approach encourages a balanced collaboration between creators and platforms, upholding both creative freedom and societal responsibility.
Transparency remains essential. Companies must disclose their filtering standards and policies, providing clear guidelines for creators. In cases where artistic context is relevant, adaptable policies that permit exceptions are equally important. Additionally, giving users the option to personalize filtering settings would enhance creative control while ensuring that generated content aligns with societal standards.
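User-personalized settings of the kind suggested above could be as simple as letting a creator's own preferences and a declared artistic context overlay the platform defaults. The sketch below is a hypothetical illustration: the default values, the "fine-art" context label, and the relaxation margin are all assumptions, not any platform's policy.

```python
# Hypothetical per-user filter profile: a creator's declared context
# (e.g. an approved "fine-art" project) permits a bounded exception
# to the platform defaults, but never removes the filter entirely.
PLATFORM_DEFAULTS = {"violence": 0.7, "adult": 0.5}  # invented values

def effective_threshold(category, user_prefs, context=None):
    """Combine platform defaults, user preferences, and artistic context."""
    base = user_prefs.get(category, PLATFORM_DEFAULTS[category])
    if context == "fine-art":
        # An approved artistic context relaxes the bar slightly, capped.
        base = min(base + 0.1, 0.95)
    return base

# A creator who opts into *stricter* adult-content filtering than default:
prefs = {"adult": 0.3}
print(effective_threshold("adult", prefs))              # 0.3
print(effective_threshold("adult", prefs, "fine-art"))  # 0.4
```

Note the design choice: personalization lets users tighten filters freely, while loosening them is gated behind an explicit, reviewable context, matching the article's call to balance creative control with societal standards.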
By embracing these principles—ethical grounding, transparent policies, dynamic filtering, and user autonomy—AI developers can foster a responsible yet innovative ecosystem. Ultimately, AI should serve humanity, enriching creative and social landscapes alike.
[ⓒ K-VIBE. Unauthorized reproduction and redistribution prohibited]