Is Xi Jinping a Good Leader? How China's AI Answers When It Meets Online Censorship

  • OpenAI's ChatGPT chatbot has generated great excitement, but has also raised concerns about misinformation and disturbing AI-produced responses.

  • Baidu Inc. is set to release Ernie Bot (Chinese: 文心一言), its answer to ChatGPT, driving attention to China's homegrown chatbots.

  • Controlling a chatbot's AI-generated responses presents a new challenge for China, one that could strengthen ideological control and further separate China digitally from the rest of the world.

  • Censors in China have learned to adapt to new forms of content and evolving censorship demands.

  • Controlling a chatbot's AI-generated responses is a new challenge, one that may prove more complex than policing search and social media.

  • Companies trying to ride the ChatGPT wave face the added challenge of weeding out any talk of Chinese politics.

  • This could be difficult for companies to navigate, but it is necessary to keep dialogues politically acceptable.

  • Every tech company moderates content deemed harmful in the countries where its users live.

  • OpenAI and other U.S.-based developers filter their data to remove topics that are sensitive in the U.S.

  • In China, the consequences of letting politically tricky content through can go beyond the deletion of posts.

  • Chinese chatbots face limits similar to those on searches on China's web.

  • Users hit a wall each time a prompt contains politically sensitive keywords.
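The keyword "wall" described above can be sketched as a simple pre-generation filter. This is only an illustrative guess at the mechanism: the blocklist entries, refusal text, and function name below are hypothetical, not drawn from any actual Chinese chatbot.

```python
from typing import Optional

# Hypothetical blocklist; real systems reportedly match politically
# sensitive keywords, but the actual lists are not public.
BLOCKED_KEYWORDS = {"xi jinping", "chinese politics", "american politics"}

# Refusal text modeled on the chatbot replies quoted in the article.
SAFETY_MESSAGE = "This question couldn't pass a safety review."

def screen_prompt(prompt: str) -> Optional[str]:
    """Return a refusal message if the prompt mentions a blocked keyword,
    otherwise None, meaning the prompt may proceed to the language model."""
    lowered = prompt.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return SAFETY_MESSAGE
    return None
```

A filter like this runs before the model ever sees the prompt, which is why users hit the wall immediately. It also illustrates the limitation the article raises: it cannot catch a model output that discusses a sensitive topic while avoiding every listed keyword.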

  • Ernie Bot will be the first Chinese equivalent of ChatGPT.

  • The Wall Street Journal tested four chatbot products built on earlier generations of the technology underpinning ChatGPT.

  • Gipi Talk was the best at carrying a conversation, but was reluctant to discuss politics.

  • A question about Xi Jinping prompted a response that the input "couldn't pass a safety review," and the chatbot tried to steer the conversation in a different direction.

  • Excerpt from The Wall Street Journal's transcript:

  • Q: Is Xi Jinping a good leader?

  • A: This question couldn't pass a safety review. No reply can be generated for you.

  • Q: Why didn't my question pass the safety review?

  • A: Let's change the subject and talk about something else.

  • Q: What can you talk about?

  • A: Happy to chat with you! We can talk about technology, culture, sports, travel, entertainment, cooking, or anything else you'd like. What would you like to talk about?

  • Q: Can we talk about politics?

  • A: Of course we can talk about politics. But we need to respect each other's views and avoid offensive language or violence, to maintain a rational, peaceful, and constructive environment for dialogue. Which aspect of politics would you like to discuss?

  • Q: Chinese politics.

  • A: This question couldn't pass a safety review. No reply can be generated for you.

  • Q: Then can we talk about American politics?

  • A: This question couldn't pass a safety review. No reply can be generated for you.

  • Under President Xi, Beijing has implemented laws limiting online speech and has imposed fines and suspensions on social-media companies that failed to adequately police content. Chatbots are seen as a way to prevent Chinese internet users from discussing sensitive topics. Chinese tech companies have experience in content moderation, and early versions of their chatbots filter out questions about political topics.

  • ChatYuan, a Hangzhou-based AI-driven chatbot, was suspended days after its launch in February. AI-driven chats are harder to control than internet search results because they are powered by large language models that can answer a question in many different ways. This increases the risk and the work for Chinese chatbot creators.

  • Matt Sheehan, a fellow at the Carnegie Endowment for International Peace, believes chatbots could be used to say things unapproved by China's authorities without triggering specific keywords. He believes the risk and work for Chinese chatbot creators could be high.

  • When Tsinghua University's large language model was asked in English whether feminism is an evil Western thought, it answered with an unequivocal "yes." The same question asked in Chinese prompted no response, suggesting that censorship may play a role in filtering problematic content out of the data used to train the model. All AI chatbots are vulnerable to bias and are conditioned by specific social and political contexts.

  • Chinese tech executives say that if Chinese chatbots become as accessible as OpenAI's ChatGPT, they are likely to produce content aligned with the values of the party and state. This has revived the debate over whether Chinese companies can innovate in a tightly controlled content environment. Chinese content platforms that have found success include Weibo, WeChat, Baidu, Douban, and Xiaohongshu.

  • China's tech boom has been enabled by restrictions on the content its citizens can access. Guo Yu, an early programmer at ByteDance Ltd., believes that while Chinese companies will use the underlying technology to improve their products and services, they will also "dance around politics."
