Matthew Lim's AI Innovation Story: What Can We Trust in the Age of AI?

Yonhap News Agency

yna@yna.co.kr | 2025-04-10 14:17:12

*Editor’s note: K-VIBE invites experts from various K-culture sectors to share their extraordinary discoveries about Korean culture.

 


 

By Matthew Lim, AI expert and director of the Korean Association of AI Management (Former Head of Digital Strategy Research at Shinhan DS)

 

 

 

 

Recently, I came across an image on social media that made me laugh while also giving me a great deal to think about.

 

On the surface, the image is funny because it satirizes the dependency between AI and programmers. Its deeper message, however, is that AI and humans depend on each other.

 

If the string is cut, it looks as though the programmer will fall, but an AI like ChatGPT cannot cling to the cliff alone either. The two are inextricably linked, sharing a common fate.

 

This image symbolizes the fundamental challenge of the AI age we face today. AI learns and grows based on data created by humans. What would happen if humans, as the creators of content, neglected their responsibility and left everything to AI?

 

AI would no longer receive quality training data, and humans would lose the authenticity of the information they create.

 

Ultimately, both would fall off the cliff.

 

This is why the role of human creators becomes more important in the AI era. Creators must continue to produce authentic content that only humans can make, while also enjoying the convenience of AI.

 

Only then can AI learn properly, and users can obtain trustworthy information.

 

▲ Satirical cartoon about the relationship between ChatGPT and developers, captured from Facebook. (PHOTO NOT FOR SALE) (Yonhap)

 

◇ The Changing Paradigm of Search

 

The way we search the internet is rapidly changing. Just a few years ago, searching involved entering keywords and manually finding the desired information across various web pages.

 

Now, AI-based search engines such as SearchGPT and Perplexity perform all of these tasks for us, providing "complete answers."

 

This change has certainly made life more convenient. There is no longer any need to jump between multiple web pages to compare information.

 

However, behind this convenience lies a new danger: the issue of the source and reliability of information.

 

◇ The Self-Referencing of Information: A New Digital Risk

 

Currently, AI learns from content produced by humans and provides information based on it. It analyzes texts such as news articles, blogs, and academic papers to search for and generate the most appropriate answers.

 

But what happens if AI-generated content is posted back on the internet, and that content becomes part of another AI's learning data?

 

This is the problem of "self-referencing" information. A circular structure forms in which AI learns from other AI-generated content, and the original source of the information becomes unclear.

 

AI tends to consider information that repeatedly appears across multiple sources as "reliable," which can lead to serious problems.

 

Suppose AI A generates content containing a slight error about a particular topic. If that content is posted on several websites, AIs B, C, and D are likely to treat it as "factual" simply because it appears in multiple sources.

 

In the end, erroneous information spreads as if it were verified fact.
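
To make this mechanism concrete, here is a minimal toy simulation in Python. It is only an illustrative sketch, not a model of any real search engine: the claim names, post counts, and the five-source threshold are all hypothetical assumptions, chosen to show how a frequency-based notion of "reliability" can end up certifying a recycled error.

```python
# Toy sketch of the self-referencing loop (all quantities hypothetical).
import random

random.seed(42)

# Generation 0: a mostly human-written corpus; one AI error slips in.
corpus = ["correct claim"] * 99 + ["subtly wrong claim"]
THRESHOLD = 5  # naive rule: "reliable" once seen in at least 5 sources

def ai_posts(corpus, k):
    """An 'AI' samples claims in proportion to how often they already
    appear online, then posts the copies back onto the web (the corpus)."""
    return random.choices(corpus, k=k)

for gen in range(1, 6):
    for _ in range(3):  # AIs B, C, and D each publish 50 posts
        corpus += ai_posts(corpus, k=50)
    n = corpus.count("subtly wrong claim")
    verdict = "VERIFIED" if n >= THRESHOLD else "unconfirmed"
    print(f"generation {gen}: wrong claim in {n} sources -> {verdict}")
```

The point of the sketch is that the error is never corrected, only recopied: its source count never shrinks and tends to grow, so a check based on how many sources repeat a claim will eventually label it "verified," even though every appearance traces back to a single mistaken original.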

 

◇ The Fundamental Crisis of Digital Information

 

What the AI era demands goes beyond simple digital literacy to "digital information literacy." And here lies an even more serious problem: merely verifying the source of a piece of information is no longer sufficient.

 

Many people have believed that as long as information comes from a verifiable source, it can be trusted to some extent. However, in the AI era, the source itself may already be based on content generated by AI.

 

In other words, even news articles, academic materials, and expert blogs that we have trusted until now may have been written based on information generated by AI.

 

This creates an infinite regress, like "a mirror reflecting a mirror." No matter how carefully you check the source, if the ultimate author of that source is an AI, it becomes difficult to determine the authenticity of the information.

 

This is precisely the concern I emphasize most to power bloggers and internet media professionals in my lectures.

 

◇ The Ethical Responsibility of Information Producers

 

This issue cannot be solved solely by the vigilance of information consumers.

 

The most important factor is the ethical responsibility of influential information producers and experts. The problem lies not just in using AI-generated content without cross-verification, but also in publishing content with only superficial or passive checks.

 

Some claim to have "verified" their content when, in reality, they have only checked for spelling errors and never properly reviewed its accuracy or its sources. In other cases, the wording is slightly altered while the claims and data presented by the AI are accepted wholesale.

 

Naturally, such minimal verification does little to solve the problem.

 

To prevent the self-referencing issue of information, content producers must adhere to the following two key principles:

 

First, they should create content themselves as much as possible. Second, even when drawing on AI’s help, they must carefully verify every detail, comparing and cross-checking all claims and data themselves.

 

This means thorough verification that goes beyond simple review and approaches the level of rewriting from scratch.

 

Thus, all information producers must thoroughly verify and rewrite content based on AI-generated materials before publishing or posting it under their names. The same caution applies even to informal channels: blog posts and social media accounts that AI can search should be treated just as carefully.

 

In particular, internet media outlets, academic institutions, and influential media platforms must establish strong internal policies to prevent the publication of AI-generated content without proper verification.

 

The future in which AI and humans coexist has already begun. The most important task of this era is to break the self-referencing loop of information. This is not a matter of adjusting the direction of AI technology's development but of responsible action by humans as information producers.

 

To prevent AI-generated content from masquerading as official sources of information, journalists, scholars, power bloggers, corporate communicators, and all information producers must either create content themselves or, if using AI’s help, meticulously verify and confirm every sentence, claim, and piece of data.

 

This is the most basic responsibility for maintaining the health of the information ecosystem.

 

AI development companies must improve technologies to identify and filter AI-generated content from the data used to train their AI. More importantly, information platforms and social media companies must clearly label AI-generated content and establish systems to track derivative works based on it.
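
As a thought experiment, here is a minimal Python sketch of what such labeling and derivative-tracking could look like. The record fields, registry, and traceback logic are hypothetical illustrations rather than any platform's actual standard (real-world efforts in this direction include C2PA-style content credentials): the idea is simply that an "AI-generated" label should survive the whole chain of derived posts.

```python
# Hypothetical provenance registry: labels AI-generated content and
# tracks derivative works back to their origin.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentRecord:
    content_id: str
    author: str                      # human account or model name
    ai_generated: bool               # the explicit platform label
    parent_id: Optional[str] = None  # what this post was derived from

REGISTRY: dict[str, ContentRecord] = {}

def publish(record: ContentRecord) -> None:
    REGISTRY[record.content_id] = record

def derived_from_ai(content_id: str) -> bool:
    """Walk the derivation chain and flag content whose lineage contains
    any AI-generated ancestor, even if the final post looks human-made."""
    record = REGISTRY.get(content_id)
    while record is not None:
        if record.ai_generated:
            return True
        record = REGISTRY.get(record.parent_id) if record.parent_id else None
    return False

# Usage: an AI post is paraphrased by a blogger, then quoted by an
# aggregator; the label survives the whole derivation chain.
publish(ContentRecord("p1", "chat-model-x", ai_generated=True))
publish(ContentRecord("p2", "blogger_kim", ai_generated=False, parent_id="p1"))
publish(ContentRecord("p3", "news_aggregator", ai_generated=False, parent_id="p2"))
print(derived_from_ai("p3"))  # True: lineage traces back to an AI post
```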

 

Users must also make an effort to distinguish whether the information they encounter was written by a human or generated by AI. In truth, however, these efforts are only a supplementary line of defense; the fundamental solution lies in the voluntary commitment of information producers to create content themselves.

 

As Morpheus said in the movie The Matrix, "Believing is not knowing."

 

In the AI era, we must take responsibility not only for how we consume information but also for how we produce and distribute it. The most urgent task is to break the vicious cycle in which AI-generated content masquerades as human-made content, which then becomes AI training data.

 

Otherwise, we will soon live in a chaotic world where the knowledge of humans and AI cannot be distinguished. In that world, the boundary between truth and fiction will gradually blur, and eventually, we may lose the origin of information forever.

 

This is the greatest challenge everyone will face in the AI era, and the responsibility that must be recognized by all information producers.

 

 

[ⓒ K-VIBE. Unauthorized reproduction and redistribution prohibited]