Matthew Lim's AI Innovation Story: AI and Privacy

Yonhap News Agency

| yna@yna.co.kr 2024-09-05 17:38:02

*Editor’s note: K-VIBE invites experts from various K-culture sectors to share their insights into Korean culture.

 


 

By Matthew Lim, AI expert and director of the Korean Association of AI Management (Former Head of Digital Strategy Research at Shinhan DS)

 

 

George Orwell's novel 1984 depicted a dystopian world where "Big Brother" surveils all citizens. At the time of its publication, the book was considered mere science fiction. Reading it today, however, is unsettling, as ideas once dismissed as fanciful increasingly resemble reality.

 

In 2013, Edward Snowden shocked the world by exposing the U.S. National Security Agency's (NSA) mass surveillance programs. Snowden revealed that the NSA, through a secret program called "PRISM," indiscriminately collected communication data not only from American citizens but also from global figures. This program accessed the servers of major IT companies like Google, Facebook, and Apple, collecting personal digital information such as emails, chats, photos, and videos. Snowden's revelations ignited a global debate on privacy, drawing widespread criticism of the U.S. government’s surveillance apparatus. Many at the time referred to this as the birth of "Big Brother." A decade later, the world faces even more sophisticated and powerful surveillance technologies—this time through artificial intelligence (AI).

 

AI advancements have brought many conveniences into our lives. Voice recognition assistants on smartphones answer simple questions and control devices, AI-based navigation systems analyze real-time traffic to suggest optimal routes, and online shopping recommendation algorithms suggest products tailored to individual tastes based on purchase history. However, behind these conveniences lies a growing concern about privacy invasion, particularly with the use of facial recognition technology and big data for personal information collection.

 

◇ The Age of Big Brother Has Already Arrived

 

China's example suggests that a Big Brother society is already upon us. The Chinese government operates a system called "Skynet," which integrates AI facial recognition with a nationwide network of hundreds of millions of CCTV cameras. While the system is used to track criminals, it also serves as a tool for monitoring the everyday movements of ordinary citizens. In 2017, after a BBC journalist registered his photo with a Chinese police control room, it took only seven minutes for authorities to locate him using CCTV. Similarly, in 2018, a student from Hunan University took part in an experiment, and the police found him within five minutes. Is this system solely used for identifying criminals?

 

▲ China's Skynet - (captured from www.cps.com.cn)

 

Western countries are not exempt from this trend. In several U.S. cities, police use AI facial recognition technology for crime prevention, though its accuracy and potential biases have raised concerns. In response, on May 14, 2019, San Francisco's Board of Supervisors passed a landmark ordinance in an 8-1 vote prohibiting the use of facial recognition technology by public agencies, the first such ban in a major U.S. city. Concerns over the technology's inaccuracies and its potential to violate civil rights drove the decision; in particular, its lower accuracy in recognizing people of color and women, which could lead to discriminatory practices, played a significant role. Supervisor Aaron Peskin, who championed the bill, summed up its aim: "We have to strike a balance between safety and freedom." The ordinance set an important precedent for AI technology and privacy regulation, and similar bans later spread to cities like Oakland and Berkeley.

 

◇ Big Tech is Not Immune to Privacy Violations

 

The collection of personal data through big data has become a significant issue. Every digital footprint, including social media posts, online shopping habits, and location data, is collected and analyzed. AI can use this data to predict a person's personality, preferences, and even political inclinations. A prime example of how such data can be misused is the 2018 Facebook-Cambridge Analytica scandal.

 

This scandal involved the British data analytics firm Cambridge Analytica, which unlawfully harvested the personal data of up to 87 million Facebook users for political purposes. The company collected information from around 270,000 Facebook users through a "personality test" app, but it also illegally accessed data from users' friends. This massive amount of personal data was then used to target voters with personalized political ads during the 2016 U.S. presidential election and the UK’s Brexit referendum.

 

The scandal brought immense criticism to Facebook (now Meta), and CEO Mark Zuckerberg was summoned to testify before the U.S. Congress. In July 2019, the U.S. Federal Trade Commission (FTC) imposed a record-breaking $5 billion fine on Facebook. This penalty, which was not just a financial punishment but also a symbolic act, underscored the importance of privacy protection and the responsibility of big tech companies. Facebook was also required to establish a privacy committee at the board level and significantly strengthen its privacy policies.

 

In South Korea, a similar case occurred in May, when Kakao was fined over ₩15.1 billion for leaking personal information from open chatrooms. The situation escalated further in August with the KakaoPay controversy, raising more concerns about data breaches. The Financial Supervisory Service (FSS) conducted an on-site inspection of KakaoPay's overseas payment services from May to July, and found that since April 2018, KakaoPay had provided the information of some 44 million customers to Alipay without proper consent, including the personal information of customers who had never even used the overseas payment service. The FSS reported a staggering 54.2 billion instances of shared data, including IDs, phone numbers, and KakaoPay transaction records.

 

▲ The 4th 2024 Future of Personal Information Forum is held at the Central Post Office in Jung-gu, Seoul on September 21, 2024. (Yonhap)

 

KakaoPay denied the FSS’s findings, claiming they were inaccurate. However, if this case is determined to be a violation of credit information laws, the scale of the data breach suggests that the resulting fines could be unprecedented.

 

◇ The Need for Balance Between Technological Advancement and Privacy Protection

 

Of course, AI technology brings undeniable benefits. From crime prevention to improving medical diagnoses and enabling efficient city management, AI has the potential to offer tremendous advantages to society. However, do these benefits justify sacrificing personal privacy? While the progress of AI cannot be halted, its unchecked use cannot be allowed either. What is urgently needed now is a balance between technological advancement and the protection of individual privacy.

 

First, raising public awareness is essential. Many people use services without understanding how their data is collected and used. Individuals must recognize the importance of privacy protection and become informed users who can decline to provide unnecessary information.

 

Next, companies need to take responsibility. In developing and applying AI technology, they must adhere to ethical guidelines. Transparency in data collection, secure management of the collected information, and strict prohibition of misuse are crucial responsibilities that companies should uphold.

 

Finally, the role of the government is critical. Clear legal regulations governing the use of AI technology must be established. By referencing frameworks like the EU's General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), South Korea can build a regulatory system suited to its own circumstances. At the same time, these regulations must strike a balance that does not hinder technological advancement.

 

The issue of privacy in the AI era is no longer a distant concern. Every moment, personal data is being collected and analyzed. The choices society makes now will shape the future. Will we surrender our privacy for the sake of convenience and security, or will we protect individual freedom despite the inconvenience? To prevent George Orwell’s dystopia from becoming a reality, everyone in society must pay attention to this issue and voice their concerns. The time to make our choice is now, for a healthy society that evolves alongside AI while respecting personal privacy.

[ⓒ K-VIBE. Unauthorized reproduction and redistribution prohibited]