Posted by: icemessenger on 2024-04-23 23:42 · Read 4,345 times · 1 like



OpenAI, Meta and Google Sign On to New Child Exploitation Safety Measures






A Meta booth showcasing virtual-reality headsets.


Major artificial-intelligence companies including OpenAI, Meta Platforms and Google agreed on Tuesday to incorporate new safety measures to protect children from exploitation and plug several holes in their current defenses.


A host of new generative-AI powered tools have supercharged predators’ ability to create sexualized images of children and other exploitative material. The goal of the new alliance is to throttle the creation of such content before these tools can proliferate and hurt more children, according to Thorn, the child-safety group that helped organize the initiative along with the nonprofit organization All Tech Is Human.


Thorn and several AI companies agreed to implement principles to minimize the risks of their tools. One such principle calls for AI labs to avoid data sets that might contain child sexual content and scrub such material from their own training materials.
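One common way to implement this kind of scrubbing is to fingerprint each candidate training file and drop anything whose fingerprint appears on a blocklist of known abusive material. The sketch below is a minimal illustration of that idea, not any company's actual pipeline; the function names and toy byte strings are hypothetical, and real systems typically use perceptual rather than exact hashes.

```python
import hashlib


def sha256_fingerprint(data: bytes) -> str:
    """Return a hex digest that acts as an exact-match fingerprint for a file."""
    return hashlib.sha256(data).hexdigest()


def scrub_training_set(examples: list[bytes], blocklist: set[str]) -> list[bytes]:
    """Keep only examples whose fingerprint is NOT on the blocklist."""
    return [ex for ex in examples if sha256_fingerprint(ex) not in blocklist]


# Toy byte strings standing in for image files (hypothetical data).
known_bad = b"harmful-image-bytes"
benign = b"benign-image-bytes"

blocklist = {sha256_fingerprint(known_bad)}
cleaned = scrub_training_set([known_bad, benign], blocklist)
```

In a real pipeline the blocklist would come from a clearinghouse of vetted hashes, and filtering would run before any example reaches the training loop.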


It wants companies to invest more in regular red-teaming, or testing to find and fix gaps that allow such material to be generated. Thorn is pushing AI platforms and search engines such as Google to remove links to services that “nudify” otherwise benign images of children—a problem that has popped up at high schools over the past year.


“This project was intended to make abundantly clear that you don’t need to throw up your hands,” said Rebecca Portnoff, vice president of data science at Thorn. “We want to be able to change the course of this technology to where the existing harms of this technology get cut off at the knees.”


Executives at the AI companies involved said they had no desire to allow their tools to create child-exploitation material. Some executives and child-safety advocates said that if these data sets are overly sanitized, AI products can become less useful to consumers.


Last year, the National Center for Missing and Exploited Children, or NCMEC, received 36 million reports of child exploitation. The center is overwhelmed by the volume of reports as it contends with limited funding, outdated technology and legal constraints in handling the sensitive materials.



The National Center for Missing and Exploited Children’s offices in Arlington, Va.



Hard drives containing evidence, stored at the National Center for Missing and Exploited Children’s offices.


When Thorn approached AI companies, it found that while some already had large teams focused on removing child-sexual-abuse material, others were unaware of the problem and of potential solutions. There is also a tension between the imperative to safeguard these tools and business leaders’ push to move quickly to advance new AI technology.


“We did not want to avoid helping progress the technology out of fear,” said Justin Maier, founder of open-source AI platform Civitai, which is part of the alliance. “I think rather than running away from that, it’s better to think about how we can make that space safer.”


Civitai, backed by venture-capital firm Andreessen Horowitz, drew criticism for not doing enough to protect children. Tech news platform 404 Media reported in December that some images on the platform could be considered child pornography. Civitai says it takes the issue seriously and is bolstering its defenses to stamp out exploitative images.


Thorn and other advocates are worried that new AI tools will “grow the haystack” of potentially violating material, forcing law-enforcement officers to spend more time determining if a child in an image is real. To help avoid this issue, Thorn and the companies in this alliance have also agreed to add signals that help others determine whether content is generated or enhanced by AI.
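The article does not specify what form these signals take; one plausible shape, sketched below under that assumption, is a machine-readable provenance record attached to generated content (in the spirit of content-credential standards such as C2PA, which the article does not name). All field names here are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone


def make_provenance_record(image_bytes: bytes, generator: str) -> str:
    """Build a machine-readable sidecar declaring the content AI-generated."""
    record = {
        # Binds the record to this exact file.
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "ai_generated": True,
        "generator": generator,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)


# Hypothetical generated image and model name.
record = make_provenance_record(b"fake-image", "example-model-v1")
parsed = json.loads(record)
```

A downstream platform or investigator could check the record before spending time on an image; the catch, discussed later in the article, is that metadata like this is easy to strip.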


Social-media platforms rely on automated image-detection systems that primarily match so-called hashes, or fingerprints of known child-sexual-abuse images and videos. AI-generated content is novel, however, and unlikely to include those fingerprints, rendering the current tools less effective.
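The limitation is easy to demonstrate with exact hashing, a simplified stand-in for these detection systems: a byte-identical copy of a known file matches, but even a one-byte variant — let alone a wholly novel AI-generated image — produces a different digest and sails through. (Production systems such as PhotoDNA use perceptual hashes that tolerate small edits, but they too rely on the image resembling something already in the database.)

```python
import hashlib

# Hypothetical database of fingerprints of known abusive files.
known_hashes = {hashlib.sha256(b"known-abuse-image").hexdigest()}


def flagged(image_bytes: bytes) -> bool:
    """Exact hash matching: flags only byte-identical copies of known files."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes


exact_copy = b"known-abuse-image"
novel_variant = b"known-abuse-imagE"  # a single changed byte evades the match
```

This is why the article treats novel AI-generated content as a blind spot for current tools.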


Many companies already try to weed out AI content that exploits children through filters and prompt engineering. But those defenses can fail.
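A toy sketch shows both how a prompt filter works and why it can fail: a simple blocklist check rejects prompts containing flagged terms, but a trivial misspelling slips through. The placeholder terms below are deliberately meaningless stand-ins, not real filter entries.

```python
# Hypothetical placeholder terms standing in for a real safety blocklist.
BLOCKED_TERMS = {"blocked_term_a", "blocked_term_b"}


def prompt_allowed(prompt: str) -> bool:
    """Reject prompts containing any blocked term (case-insensitive substring)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)
```

`prompt_allowed("a panda on a surfboard")` passes, `prompt_allowed("draw blocked_term_a")` is rejected, but an obfuscated spelling like `"bl0cked_term_a"` passes — the failure mode the article alludes to, which is why filters are paired with red-teaming rather than trusted alone.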


In December, a Stanford researcher published a paper showing that there were hundreds of exploitative images of children in LAION, a popular public data set used to train AI text-to-image generation models. The data set was taken down.


But AI models can still generate such content even without the inclusion of child-exploitation material. That’s because the AI models that power text-to-image services can combine parts of different images to create a new image, such as a realistic image of a panda on a surfboard. This capability also makes it possible for some AI systems to create child pornography by combining nonsexual images of children and adult sexual content.


Portnoff said some companies in the alliance have agreed to separate images, video and audio of children from data sets that also contain adult sexual content in their open-source models, because combining the two increases the risk of creating child-exploitation material.


If implemented, some of the steps outlined by Thorn will require some technical breakthroughs and possibly new agreements with child-safety regulators to allow for more rigorous testing, executives said in interviews.


Policymakers have urged AI companies to find ways to add watermarks to their images so they can be traced back to their creator. Today, watermarks are removable, and AI companies are still looking for ways to mark AI-generated images permanently, said Ella Irwin, senior vice president of integrity at Stability AI, the company behind the open-source image-generation model Stable Diffusion.


Irwin added that it was difficult for companies to stress-test their systems for child-exploitative content today because the testing itself can break the law if such material is generated.

