Ever since the Chinese government passed a law on generative AI back in July, I've been wondering how exactly China's censorship machine would adapt to the AI era. The content produced by generative AI models is more unpredictable than traditional social media. And the law left a lot unclear; for instance, it required companies "that are capable of social mobilization" to submit "security assessments" to government regulators, though it wasn't clear how the assessments would work.
Last week we got some clarity about what all this may look like in practice.
On October 11, a Chinese government organization called the National Information Security Standardization Technical Committee, often abbreviated as TC260, released a draft document that proposed detailed rules for determining whether a generative AI model is problematic. The committee consults corporate representatives, academics, and regulators to set up tech industry rules on issues ranging from cybersecurity to privacy to IT infrastructure.
Unlike many manifestos you may have seen about how to regulate AI, this standards document is very detailed: it sets clear criteria for when a data source should be banned from training generative AI, and it gives metrics on the exact number of keywords and sample questions that should be prepared to test out a model.
From MIT Technology Review