Artificial intelligence marks a technological turning point unlike any previous invention. Whereas past tools mainly enhanced our physical abilities, AI challenges humanity’s defining feature: the capacity to create and apply knowledge.
This gives it an exceptional power to reshape personal identity, economic systems and social organisation.
Because of this, AI’s vast potential benefits are matched by equally significant risks, making a comprehensive, globally coordinated governance strategy essential. The usual debate that reduces the issue to a choice between efficiency and safety is insufficient. We must instead adopt a holistic understanding of AI - its different forms, its uses and its likely future evolution.
Public discussion often fixates on artificial general intelligence (AGI), a hazy concept suggesting machines might one day match humans in every cognitive domain. The term is vague - human cognitive abilities cannot be fully enumerated - and it ignores key aspects of human intelligence. Outperforming humans in individual tasks is not enough; what matters is autonomy: the ability to comprehend the world and act adaptively toward goals by drawing on many skills when needed.
Today’s conversational systems remain very far from autonomous agents capable of replacing humans within complex organisations.
Systems such as autonomous vehicles, smart grids, smart factories, smart cities or automated telecommunications networks involve highly intricate, often critical infrastructures composed of many agents with their own objectives (individual intelligence) that must coordinate to achieve overall system goals (collective intelligence).
The technical barriers to achieving this level of autonomy are enormous and far beyond what current machine-learning techniques can deliver. Setbacks in the autonomous vehicle sector - where some companies once promised full autonomy by 2020 - illustrate these limits. Present-day AI agents are confined to low-risk digital tasks.
For future AI systems to be trustworthy in critical roles, they must exhibit strong reasoning, pursue goals in line with technical, legal and ethical requirements, and reach reliability levels that currently seem unrealistic.
At the root of the problem is the difficulty of obtaining solid guarantees of reliability. While extremely good at extracting knowledge from data, modern AI systems are opaque, making it nearly impossible to achieve the high assurance required for safety-critical use. As a result, AI safety cannot rely on the traditional certification frameworks that apply to technologies like elevators or aircraft.
Beyond technical safety, AI systems are built to model aspects of human cognition and must therefore meet human-centred cognitive criteria. Although many initiatives address “responsible,” “aligned” or “ethical” AI, most remain superficial, as these qualities depend on complex cognitive processes that are not well understood even in humans.
An AI capable of passing a medical exam does not attain the understanding or responsibility of a human doctor. Designing AI that genuinely respects social norms and exhibits responsible collective intelligence is still a major challenge.
AI risks fall into three interconnected categories.
Technological risks
These risks arise from the black-box nature of AI, which introduces new and poorly understood safety and security issues. Current risk-management principles demand extremely high reliability in high-criticality environments - standards that today's AI cannot meet. Global technical standards, central to modern civilisation, are essential to building trust, but progress is slowed both by technical constraints and by resistance from major technology companies and U.S. authorities, who claim that standards hinder innovation and favour weak self-certification instead.
Anthropogenic risks
These risks stem from human behaviour, including misuse, abuse and governance failures. In autonomous driving, skill degradation, overconfidence and mode confusion are common examples. Compliance risks originate from corporate governance models that prioritise rapid commercial expansion over safety and transparency. Tesla's "Full Self-Driving" system - which still requires active human supervision despite its name - demonstrates how dangerous the gap between marketing rhetoric and technical reality can be.
Systemic risks
These risks involve long-term or large-scale disruptions to social, economic, cultural, environmental or governance systems. While some risks - such as technological monopolies, job loss and environmental impact - are widely recognised, others remain poorly understood. One critical but often overlooked risk is cognitive outsourcing: the delegation of intellectual work to machines. This can erode critical thinking, weaken individual responsibility and lead to homogenised thought. Raising awareness of these subtle cognitive effects is essential for mitigation.
Addressing this complex array of risks requires a comprehensive, human-centred vision for AI - one that moves beyond the narrow pursuit of AGI promoted by major tech firms. This vision must honestly acknowledge current technical limitations and mobilise international research to explore new pathways for AI across science, industry and services. It must also reject the "move fast and break things" mindset, which breeds technical debt and long-term fragility, and challenge the ideology of technological determinism that denies society's role in shaping technology.
China is well positioned to help advance such a vision - one focused not on building the most powerful AI, but on using AI to serve society. With a strong industrial base that increasingly requires intelligent products and services, China can play an important role in developing global standards and regulations. Working with other countries, it can help rebalance global power and steer AI development toward reliability and safety. Initiatives such as the China AI Safety and Development Association and the World AI Cooperation Organisation represent early steps in this direction.