Discussion about this post

Ashwin:

Super interesting, thanks for writing!

The well of Chinese thinking here seems narrow and shallow, given that these are the senior folks you've selected for having relevant thoughts. Do you have a sense of how healthy the broader Chinese AI safety ecosystem is? (Should someone make an Alignment Forum But WeChat?)

Actually, lemme take a look myself. Looking at the CSET Map of Science, there are only 8 AI safety research clusters with >10% Chinese paper share. [https://sciencemap.eto.tech/?ai_safety_pred=30%2C64&china_affiliation_share=10%2C100&mode=list]

The relevant clusters' topics are:

* Adversarial robustness and backdoors: Clusters 60, 11944, and 33998.

* Explainability: Cluster 751.

* Safe reinforcement learning: Cluster 18683. (Fun note: this includes the classic "Concrete problems in AI safety" paper.)

An edge case is Cluster 75898, on autonomous driving, which includes a bunch of multi-agent interaction theory.
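The filter in the Map of Science link above can be sketched in a few lines. This is a hypothetical illustration, not real CSET data: the cluster IDs come from the list above, but the share values and topic labels are made up, and the record format is invented rather than taken from the ETO export.

```python
# Hypothetical sketch of the query above: keep research clusters whose
# Chinese-affiliated paper share is at least 10%. The share values below
# are invented for illustration and do NOT reflect the real CSET figures.
clusters = [
    {"id": 60,    "topic": "adversarial robustness", "china_share": 34.0},
    {"id": 751,   "topic": "explainability",         "china_share": 12.5},
    {"id": 18683, "topic": "safe RL",                "china_share": 11.0},
    {"id": 99999, "topic": "reward modeling",        "china_share": 4.0},
]

# Mirrors the china_affiliation_share=10,100 parameter in the URL.
matching = [c for c in clusters if c["china_share"] >= 10.0]
for c in matching:
    print(c["id"], c["topic"])
```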

Zilan Qian:

Interesting (but reasonable) that most AI safety conversations happen in academia or academia-adjacent labs/startups (e.g. Zhipu). That's probably a good thing for public and policy awareness, given the aura surrounding academia. Hopefully one day BAAI or SHLAB will become China's real AISI.

