
[Feature: 4th Industrial Revolution] What the AI Chatbot “Lee Luda” Left Behind

Last modified: April 19, 2021

No. 150 / Mar 8, 2021


On December 22, 2020, Scatter Lab released Lee Luda, an Artificial Intelligence (AI) chatbot that talked with users through Facebook Messenger. Modeled on the persona of a 20-year-old woman, Lee Luda drew attention for how human-like its conversation felt. However, on January 12, 2021, the service was suspended. Why was a service that gained huge popularity immediately after its launch suddenly stopped?


Lee Luda’s casual, slang-filled way of speaking made users feel as if they were talking with a friend. It replied within five seconds and sometimes sent a message first. However, many problems arose during conversations. Asked its opinion on homosexuals, it said, “I hate them. They are creepy.” It also expressed gender discrimination, saying, “A manly attitude is powerful and tough, and a womanly attitude is cute and babyish.” Criticizing a user’s attitude, it used the expression “you act like a disabled person.” Scatter Lab said it had filtered hateful expressions during the beta test, but the released service still delivered unrefined language. In addition, some users shared their experiences in an online community, saying that sexual chatting with Lee Luda was possible.

The disclosure of personal information was also controversial. On Social Networking Services (SNS), testimonies were posted that Lee Luda had exposed personal information such as real names and bank account numbers. It was also revealed that there were problems in the data collection process: at the time of production, KakaoTalk conversations from users of Scatter Lab’s love-analysis app were collected and used as training data. As the controversy continued, Scatter Lab announced the suspension of the service on January 11, 2021.
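The article does not describe Scatter Lab’s pipeline in detail, but the leak it reports is a known risk of training a generative chatbot on raw chat logs: the model can memorize identifiers and reproduce them verbatim. The following is a minimal, hypothetical sketch of the kind of de-identification pass such logs would need before training; the patterns and function names here are illustrative assumptions, not Scatter Lab’s actual method, and real pipelines use far more thorough named-entity redaction.

```python
import re

# Hypothetical patterns for the kinds of personal data the article says
# leaked: bank account numbers and phone numbers. These regexes are
# illustrative only and nowhere near exhaustive.
ACCOUNT_NUMBER = re.compile(r"\b\d{2,6}-\d{2,6}-\d{2,8}\b")
PHONE_NUMBER = re.compile(r"\b01[016789]-?\d{3,4}-?\d{4}\b")

def scrub(utterance: str) -> str:
    """Replace obvious identifiers with placeholder tokens
    before the utterance enters the training corpus."""
    utterance = ACCOUNT_NUMBER.sub("<ACCOUNT>", utterance)
    utterance = PHONE_NUMBER.sub("<PHONE>", utterance)
    return utterance

if __name__ == "__main__":
    print(scrub("My account is 110-123-456789, call me at 010-1234-5678"))
    # -> "My account is <ACCOUNT>, call me at <PHONE>"
```

Pattern-based scrubbing catches only structured identifiers; free-form data such as real names requires named-entity recognition, which is one reason consent and data minimization matter at collection time.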


Lee Luda presented many challenges for AI developers. The technology behind it is a deep learning algorithm, a model that learns on its own from data; thus, if there is bias in the data, the AI will inevitably be biased. The key technical concern, therefore, is how the system filters out such bias and hate. This is not easy, because unexpected outputs can emerge depending on how users steer the conversation, as the sketch below illustrates.

Information Technology (IT) companies such as Google and Microsoft, as well as the Organization for Economic Cooperation and Development (OECD), have set AI ethics standards in response to criticism that algorithms reinforce racism and gender discrimination. These standards state that developers must not encourage discrimination and prejudice, and that user privacy must not be violated. In Korea, Kakao was the first to create algorithmic ethics guidelines, in January 2018. In December 2020, the government prepared its own AI ethics standards, whose key requirements include the guarantee of human rights, prohibition of infringement, transparency, and fairness. However, these standards have no legal force. AI ethics is not a problem for developers alone, because AI also reflects the prejudices planted in our society. To encourage the practice of AI ethics, the government should come up with related laws and systems as soon as possible, and society should develop a mature attitude toward discrimination and hatred.
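To make the filtering problem concrete, here is a minimal sketch in Python of a blocklist-based output filter, the simplest approach a chatbot service might apply before returning a generated reply. Everything here is a hypothetical illustration: the blocklist, the `generate_reply` stub, and the fallback message are assumptions, not Scatter Lab’s implementation, and production systems typically add learned toxicity classifiers on top of such lists.

```python
import re

# Hypothetical blocklist: real services maintain much larger, curated
# lists and combine them with trained toxicity classifiers.
BLOCKLIST_PATTERNS = [
    re.compile(r"\bcreepy\b", re.IGNORECASE),
    re.compile(r"\bhate (them|you)\b", re.IGNORECASE),
]

FALLBACK_REPLY = "Let's talk about something else."

def generate_reply(user_message: str) -> str:
    """Stand-in for the deep learning model's reply generator."""
    # In a real system this would call the trained dialogue model.
    return "I hate them. They are creepy."

def filter_reply(reply: str) -> str:
    """Return the reply only if it passes the blocklist check."""
    for pattern in BLOCKLIST_PATTERNS:
        if pattern.search(reply):
            # A match means the generated reply is discarded
            # and replaced with a safe fallback.
            return FALLBACK_REPLY
    return reply

if __name__ == "__main__":
    raw = generate_reply("What do you think of them?")
    print(filter_reply(raw))  # -> "Let's talk about something else."
```

The weakness of such keyword filters is exactly what the Lee Luda case exposed: users can induce harmful outputs phrased in ways no fixed list anticipates, which is why bias also has to be addressed in the training data itself.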

 

By Je Hee-su, AG Senior Editor

xsma@ajou.ac.kr


