Published on April 28, 2026

Last updated on April 28, 2026

China’s New AI Ethics Measures Increase Scrutiny of High-Impact AI, Including Health-Related Technologies

On March 20, 2026, the Ministry of Industry and Information Technology, together with 10 other government departments, issued the “Administrative Measures for the Ethics Review and Services of Artificial Intelligence Science and Technology (Trial)” under “MIIT Joint Science [2026] No. 75.” The Measures create China’s first dedicated framework for the ethics review and service management of AI science and technology activities.

For companies in regulated health sectors, the immediate significance is clear: where AI systems may affect public order, life and health, the ecological environment, sustainable development, or other sensitive outcomes, ethics governance is now more clearly formalized as part of the compliance landscape.

Regulatory Significance of China’s New AI Ethics Measures

The Measures do not introduce a new product approval pathway for AI-enabled technologies. Instead, they establish a governance layer focused on how AI systems are developed, evaluated, and overseen from an ethics perspective.

In practice, this signals that existing regulatory pathways may not be sufficient on their own where AI has more consequential effects on people or society. For in-scope activities, the relevant ethics committee, service center, and in some cases expert recheck bodies may look beyond technical performance alone and examine how ethical risks are identified, monitored, and controlled.

The framework therefore broadens the practical compliance conversation. Review bodies may focus on issues such as:

  • How risks are identified and controlled
  • Whether the model or system can be guided, intervened in, or otherwise kept controllable
  • How transparent the system is about its purpose, operating logic, interaction methods, and potential risks
  • Whether the activity raises issues that trigger additional expert recheck

For companies, the compliance question is broadening. It is no longer only whether a product meets regulatory requirements, but whether the AI function within that product is governed in a disciplined, transparent, and accountable way.

Core Elements of the New Governance Framework

The Measures establish both review procedures and supporting service mechanisms intended to make AI ethics oversight operational. A distinctive feature of the framework is its dual-track governance structure combining ethics review with institutional support services.

Beyond high-level principles, the Measures also set out supporting infrastructure around standards, services, training, risk monitoring, and external review support. Together, these provisions create a more structured operating environment for organizations conducting covered AI activities in China.

Institutional Support Mechanisms

Articles 4–8 describe a national support system that includes:

  • Standard setting
  • Ethics risk monitoring and early warning
  • Testing and evaluation services
  • Certification support
  • Technical research support
  • Education and training
  • Local “Artificial Intelligence Technology Ethics Review and Service Centers”

These mechanisms collectively form what regulators describe as a “review + services” governance model. In addition to formal ethics assessment, the framework encourages the establishment of cooperation platforms that support ethics risk evaluation, certification, and compliance guidance.

Local authorities are permitted to establish Artificial Intelligence Technology Ethics Review and Service Centers, which may provide review services and compliance support — particularly for small and medium-sized enterprises or foreign companies entering the Chinese market.

For overseas companies operating in China, this development increases the likelihood that ethics expectations will become more structured and predictable rather than remaining discretionary or ad hoc.

Review Pathways Under the Measures

The Measures provide four procedural tracks, reflecting the idea that scrutiny should be calibrated to the nature and risk profile of the AI activity:

  • General procedure
  • Simplified procedure
  • Emergency procedure
  • Expert recheck procedure

The framework also includes defined timelines intended to support both oversight and operational practicality. These include:

  • General review: decision within 30 days after application acceptance
  • Expert review: feedback within 30 days after receipt of materials
  • Emergency review: completed within 72 hours
  • Pre-recheck review in emergency cases: generally completed within 36 hours

This tiered structure matters in practice. Activities with relatively limited ethics risk may qualify for simplified handling, while AI activities involving stronger effects on life, health, behavior, safety, or broader social impact are more likely to face deeper review.

Expert Review Triggers for Health-Related AI Applications

The annexed “List of Artificial Intelligence Scientific and Technological Activities Requiring Expert Ethics Recheck” is especially important because it identifies the activities that require expert recheck after initial review.

The list covers three categories:

  1. Research and development of human-machine integration systems with relatively strong effects on human subjective behavior, psychological emotion, or life and health
  2. Research and development of algorithm models, applications, and systems with public-opinion mobilization capability or social-awareness guidance capability
  3. Research and development of highly autonomous automated decision-making systems for scenarios involving safety or personal health risks

The list is expected to be dynamically updated by regulators as technologies evolve and new risk categories emerge.

For life sciences companies, the first and third categories are likely to be the most relevant. They point to closer scrutiny where AI may materially affect health-related outcomes, safety-related decisions, or sensitive human-facing judgments. That does not mean every health-related AI tool automatically falls within the expert recheck list, but it does suggest heightened attention where AI has strong effects on people or where automated decision-making is used in higher-risk scenarios.

Legal Liability Across Multiple Regulatory Frameworks

The Measures state that violations may be investigated and handled under multiple Chinese laws and regulations, including:

  • Cybersecurity Law
  • Data Security Law
  • Personal Information Protection Law
  • Law on Scientific and Technological Progress

For international companies, this reinforces an important point: AI ethics review should not be viewed in isolation. It may intersect with broader obligations around data handling, privacy, governance, and sector-specific compliance in China.

Relevance for Medical, Digital Health, Pharma, and Consumer Health Companies

Although the Measures apply across industries, their practical impact may be especially visible in sectors where AI contributes to health-related or human-facing decisions.

Depending on the functionality, deployment context, and risk profile, potentially relevant examples may include:

  • Medical software and software as a medical device (SaMD)
  • AI-assisted diagnostic or imaging tools
  • Clinical decision support systems
  • Remote monitoring platforms
  • Digital therapeutics
  • Research and drug discovery platforms
  • AI-supported clinical trial analytics
  • Patient engagement and triage systems
  • Consumer-facing applications that generate individualized health or skin-related assessments based on image or other personal data inputs

The central issue is therefore not the product label alone. Scrutiny is more likely to increase where AI helps assess, categorize, prioritize, guide, or influence people in ways that could affect health-related outcomes or other protected interests.

Sector-Specific Implications

The implications differ across regulated industries.

Medical devices and digital health

AI already plays a central role in many software-driven medical technologies. Where algorithms influence diagnosis, triage, patient monitoring, or treatment pathways, companies may face increasing expectations to demonstrate:

  • Transparent operating logic and explainability, to the extent feasible
  • Robust risk management and validation processes
  • Meaningful human oversight or intervention capability
  • Clear documentation of intended use, limitations, and risk controls

In this context, ethics review may sit alongside existing regulatory expectations rather than replace them.

Pharmaceuticals and biotechnology

For pharmaceutical and biotech companies, the impact may arise less from finished commercial products and more from AI used in research and development. Drug discovery tools, clinical trial design models, data interpretation systems, and other R&D applications may become relevant where they materially shape research choices or safety-related decisions.

That suggests a need for stronger governance around:

  • How AI outputs are reviewed by human experts
  • How model limitations and uncertainty are documented
  • How decisions influenced by AI are recorded and justified
  • How risks are monitored over the course of development

Cosmetics and consumer health

Cosmetics and consumer-health companies should not assume they fall outside the framework. AI-enabled consumer tools such as skin analysis apps, personalized product recommendation engines, or image-based assessment tools may become relevant where they produce individualized outputs that influence consumer perceptions, behavior, or health-adjacent decision-making.

Implications for Product Development, Documentation, and Governance

Because the Measures focus on issues such as fairness, controllability, transparency, traceability, privacy protection, and ethics risk prevention, compliance cannot be treated only as an end-stage filing exercise.

Instead, the practical impact is likely to show up earlier in the development lifecycle. Companies may need to think about ethics governance during:

  • Model design and architecture decisions
  • Dataset sourcing and quality controls
  • Definition of intended use and system limitations
  • Validation and testing strategies
  • Human oversight design
  • Documentation development
  • Internal governance and review procedures
  • Ongoing risk monitoring and change management

In other words, ethics considerations may need to be integrated into broader regulatory, quality, R&D, and governance processes rather than handled as a standalone formality.

Next Steps for Foreign Companies

The Measures also create concrete operational expectations for organizations conducting covered AI activities in China.

Establish or Appoint an Ethics Review Body

Companies conducting AI activities in China are expected to establish an Artificial Intelligence Technology Ethics Committee.

Where an organization has not established a committee, or its committee cannot competently carry out the required review, the person responsible for the AI activity must apply to an entrusted service center for ethics review.

For international companies, the practical issue is therefore not simply whether the Measures apply in theory, but how review responsibility is mapped onto their China operating structure, local R&D footprint, affiliated entity, or entrusted service-center arrangements.

Prepare Application Materials

Ethics review applications typically require submission of:

  • An AI activity proposal describing algorithm mechanisms, data acquisition methods, and intended products
  • An ethics risk assessment and emergency response plan
  • A statement of integrity from the applying organization

Review assessments typically examine factors such as:

  • Fairness and impartiality (bias prevention)
  • Controllability and reliability, including human intervention capability
  • Transparency and explainability of algorithmic output
  • Privacy and personal information protection
  • Accountability and traceability of system decisions

Implement Ongoing Compliance Monitoring

The Measures introduce continuing supervision requirements after approval:

  • General AI projects must undergo follow-up review every 12 months
  • Projects listed under expert review must undergo follow-up review every six months

If significant changes occur in an AI system’s ethical risk profile, companies must initiate a new ethics review and expert reassessment.

Strategic Takeaways for International Life Sciences Companies

Taken together, the Measures show that AI governance in China is becoming more concrete in sectors where technology can affect health, safety, and public trust. AI oversight is moving beyond technical compliance toward expectations that companies can demonstrate responsible governance of algorithmic technologies.

For life sciences companies, this development matters most where AI affects diagnosis, patient management, research decisions, or personalized health assessments. In these areas, regulators may increasingly expect companies to show how ethical risks are evaluated, documented, and controlled.

Organizations that integrate ethics governance into their development processes early — alongside regulatory, clinical, and quality functions — will be better positioned to support market access and regulatory engagement in China.

Cisema supports companies in regulated sectors in assessing how new Chinese requirements may apply to actual products, development programs, and market-entry strategies. This includes governance gap analysis, documentation planning, and alignment with broader regulatory obligations in China.

For companies seeking practical guidance on AI ethics compliance and regulatory strategy in China, contact Cisema today.
