REALISTIC PATH OF THE RULE OF LAW ON ARTIFICIAL INTELLIGENCE

Jiang Wei

TABLE OF CONTENTS
I. INTERVENTION TIMING OF LEGAL GOVERNANCE
II. BALANCE BETWEEN DEVELOPMENT AND SAFETY
III. LEGISLATIVE MODEL OF AI REGULATION

The year 2024 marks the 30th anniversary of China’s full access to the Internet. After long-term unremitting efforts, China has formulated and
promulgated more than 150 laws on cyberspace, and the cyberspace rule-of-law system has been gradually built up from scratch. A path for governing cyberspace in accordance with the law has been explored, one that both conforms to common international practice and features Chinese characteristics. In the era of intelligence, the cyberspace environment facing the world is different from that of 30 years ago, or even 10 years ago. The risks of digital technology, represented by artificial intelligence (AI), have spilled over from cyberspace and become a common challenge facing all countries in the world. At present, large AI models are growing explosively and their risks directly threaten real-world society; the European Union, China, and the United States have all put AI legislation on the agenda. Generative AI has disrupted the existing rules of order and will inevitably give rise to an AI law that differs from both traditional law and cyber law. In AI governance, countries around the world engage in both cooperation and competition, and show both consensus and disagreement. They are
faced with confusion in both theoretical research and system construction.

I. INTERVENTION TIMING OF LEGAL GOVERNANCE

In the development of human society, every major technological revolution brings new challenges to state governance. The biggest problem facing AI legislation is that the panorama of AI technology development has not yet been fully revealed, and the unknown far exceeds the known. How are the characteristics of AI to be reflected in specific rules? Lawmakers need a process of coming to understand the development and governance laws of AI technology. According to the Collingridge Dilemma proposed by a British scholar, if the law intervenes in AI too early or regulates it too late, the result is not conducive to human welfare. It is therefore understandable to be cautious about enacting AI legislation. However, laws should be formulated in a timely manner to regulate risks that are actually dangerous.
The research, development, and application of AI must not harm human rights and dignity, which is the greatest consensus among all countries in the world, and AI ethics and security have been widely valued by the international community. China has successively released documents such as the Global Initiative on Data Security and the Global AI Governance Initiative, which have provided valuable references for relevant international discussions and rulemaking. In 2023, 28 countries, including China, the United States, and the United Kingdom, as well as the European Union, signed the world’s first AI safety document, the Bletchley Declaration, calling on all countries to fully understand the necessity and urgency of AI risk regulation. Therefore, it is necessary to introduce regulatory legislation in a timely manner to address
the risk of AI endangering humans.

II. BALANCE BETWEEN DEVELOPMENT AND SAFETY

Encouraging innovation and regulating risk are fundamental principles of AI governance. Lawmakers who have introduced AI acts claim to embody the principle of placing equal emphasis on development and safety, but the AI plans introduced by each country are highly controversial and are perceived differently by different stakeholders; in particular, regulators and technology companies often find it difficult to reach a consensus on whether an act is conducive to innovation and development. Dozens of large European companies, including Airbus and Siemens, took collective action to publicly oppose the adoption of the EU AI Act on the grounds that it may damage European competitiveness. The author believes that in the development of digital technology there can be no absolute safety, only relative safety. The best legislative policy is development-oriented protection for digital technology, not prohibition-oriented protection.
The widespread consensus in the international community is that there can be no sustainable development without safety, but that the absence of development is the greatest source of insecurity. China has always adhered to the principle of ‘attaching equal importance to development and safety, and promoting
innovation and law-based governance’ in AI governance. On September 22, 2024, the United Nations Summit of the Future officially adopted the Global Digital Compact, emphasizing development, inclusiveness, innovation, protection of cultural diversity in the digital space, and the creation of an open, fair, inclusive, and non-discriminatory environment for digital development. Therefore, to balance the relationship between development and safety, we should emphasize that development is the main line and safety is the bottom line, and clarify development as the priority, so as to ensure safety through development, promote development with safety, prevent risks in development, and control chaos in innovation. Lawmakers should position AI as an emerging technology and a new quality productive force, give priority to the enactment of the AI Promotion Law, encourage innovation, advocate development, provide guarantees, fully respect the autonomy of science and technology, and pay attention to the ethics of science and technology. For the risk regulation of AI research, development, and application, we should uphold the concept of ‘prudence and moderation’ and promote inclusive and circumspect supervision, such as adopting fault-tolerant supervision, supportive supervision, and punitive supervision, so as to improve the level of normalized supervision and, in particular, to enhance the predictability of supervision. For situations that have already led to harmful social
consequences, punitive legal norms can be supplemented to impose penalties.

III. LEGISLATIVE MODEL OF AI REGULATION

Legislation on AI is new, and all countries are in the exploratory stage. Whether AI law intervenes in the field of AI in an all-round way or selectively intervenes in different fields and stages, countries have chosen different legislative paths based on their specific national conditions, development realities, and value orientations. The European Union has adopted a harmonized specific law: the AI Act is of a codified nature and is expected to resolve all issues of development and risk. The United States has promoted local legislation to prevent AI risks through state autonomy; at present, more than 20 states have proposed AI acts, but their content varies greatly from state to state. In 2024, the Utah AI Policy Act and the Colorado Act Concerning Consumer Protections in Interactions with AI Systems were enacted. China has adopted interim measures to carry out experimental regulation, such as the Interim Measures for the Management of Generative AI Services. These legislative models have their own advantages and disadvantages. So far, all the explanations and theories about AI are transitional and still need to be constantly tested by practice. AI legislation also requires a process of gradually deepening understanding, as well as the development of a long-term governance mindset. It is not advisable to seek quick successes and instant benefits, let alone adopt a radical approach to solving the issue. The
author believes that the appropriate way of legal regulation is to adopt composite legislation in specific fields to address the current outstanding issues, select different areas and stages of AI for experimental regulation, and release a unified and standardized ‘comprehensive AI law’ in a timely manner after the rules have proven effective in practice. Interim law can conveniently fill gaps in the system through short-term policies, and if an interim law does not meet the needs of practice, it can also be quickly corrected and improved. Of course, interim legislation also needs to coordinate all legislative activities to ensure the steady advancement of the rule of law on AI.