Eye on Global Responsibility - Smart Use of AI in Drug Discovery


Artificial intelligence is rapidly reshaping the pharmaceutical landscape. AI-native biotech companies, platform providers, and large pharmaceutical partners are striking headline-grabbing deals aimed at accelerating discovery and improving the probability of success. From multimillion-dollar partnerships between major pharma and AI platform companies, to venture-backed start-ups using foundation models to design novel proteins or small molecules, the sector has clearly entered a phase of meaningful adoption.

The key shift is that AI is no longer an experimental add-on. Rather, machine learning is rapidly becoming a core part of discovery infrastructure: target discovery, lead identification, generative chemistry, predictive ADME/tox modelling, and even early clinical design. The potential is enormous, and encouraging early results are already emerging. But alongside the excitement sits a growing recognition that AI-enabled drug discovery raises profound ethical, social, and regulatory questions that must be considered alongside the rush to implementation.


1. The Promise: Momentum Builds Behind AI in Drug Discovery

Recent years have seen a wave of major collaborations and investments. Large pharmaceutical companies are signing multi-year discovery deals with AI-first biotechs, gaining access to generative design engines, multimodal biological models, and cloud-scale infrastructure. AI start-ups are reporting accelerated timelines, with candidates entering lead optimisation in months rather than years. And some molecules first conceptualised by generative models have already reached clinical trials.

Encouraging data is emerging across multiple layers of R&D:

  • Target discovery: AI is uncovering novel biological relationships that would be invisible to traditional heuristics.

  • Molecule generation: Generative chemistry models can explore vast chemical spaces efficiently, producing diverse scaffolds with predicted properties.

  • Predictive modelling: AI-based ADME and toxicity prediction is helping teams prioritise better candidates earlier.

  • Synthetic biology: Protein-design systems and sequence-to-function models are unlocking modalities once considered un-druggable.

Together, these advances hold real promise for reducing R&D costs, shortening timelines, and expanding the range of molecules available to us, all of which ultimately drives better patient outcomes.
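As an illustration of the predictive-modelling layer described above, the following is a minimal, hypothetical sketch of how a team might rank candidate molecules by a predicted ADME-style property. The descriptors, the synthetic "solubility" target, and the data are all invented for illustration; real pipelines use curated assay data and far richer molecular representations.

```python
# Hypothetical sketch: ranking candidates by a predicted ADME-style
# property (a synthetic "solubility" score). Descriptors and data are
# illustrative, not real chemistry.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy descriptors: molecular weight, logP, hydrogen-bond donors.
X_train = rng.uniform([150, -2, 0], [600, 6, 6], size=(200, 3))
# Synthetic target: smaller, less lipophilic molecules score higher.
y_train = 10 - 0.01 * X_train[:, 0] - 0.5 * X_train[:, 1] \
          + rng.normal(0, 0.2, 200)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Prioritise a small batch of new candidates by predicted score.
candidates = rng.uniform([150, -2, 0], [600, 6, 6], size=(5, 3))
scores = model.predict(candidates)
ranking = np.argsort(scores)[::-1]  # best candidate first
print(ranking)
```

The point of such triage is not that the model is right about any single molecule, but that it shifts expensive wet-lab attention toward candidates with better predicted profiles earlier in the funnel.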

Yet, with such power comes responsibility.

 

2. When Innovation Risks Reinforcing Existing Bias

Bias in Algorithms and Science Itself

AI systems can only learn from the data they are given, and in drug discovery that data is uneven. Certain pathways, targets, and chemical spaces are extensively studied, while emerging or neglected diseases have received comparatively little attention and hence have smaller datasets.

The risk is clear as the disparity between well-studied therapy areas and rare diseases continues to grow:

AI could unintentionally amplify existing scientific inequities by focusing discovery efforts on what is already well understood.

If historical data strongly represent oncology, neurodegeneration, or metabolic disorders, then AI models may naturally “gravitate” toward solutions in these domains, even as the world urgently needs treatments for tuberculosis, dengue, or antimicrobial resistance.
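This "gravitation" can be made concrete with a toy, entirely synthetic experiment: train a classifier on a corpus dominated by one therapy area and watch it route most new inputs to that area, even when the new inputs are balanced. The labels and data below are invented for illustration.

```python
# Toy illustration (synthetic data) of how skewed training data skews
# model output: a classifier trained on a corpus dominated by one
# therapy area recommends that area for most new inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# 950 "well-studied area" examples vs 50 "neglected disease" examples,
# drawn from heavily overlapping feature distributions.
X_rich = rng.normal(0.0, 1.0, size=(950, 4))
X_negl = rng.normal(0.5, 1.0, size=(50, 4))
X = np.vstack([X_rich, X_negl])
y = np.array([0] * 950 + [1] * 50)  # 0 = data-rich area, 1 = neglected

clf = LogisticRegression().fit(X, y)

# Score a balanced batch of new "targets": half from each distribution.
X_new = np.vstack([rng.normal(0.0, 1.0, size=(50, 4)),
                   rng.normal(0.5, 1.0, size=(50, 4))])
share_rich = (clf.predict(X_new) == 0).mean()
print(f"fraction routed to the data-rich area: {share_rich:.2f}")
```

Because the features overlap, the model leans on the class prior it learned from the imbalanced corpus, so well over half of a perfectly balanced batch is routed to the data-rich area. Real discovery models are vastly more complex, but the underlying dynamic is the same.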

Market Incentives Can Compound the Problem

Many AI-driven partnerships today focus on high-value markets: chronic diseases in wealthy nations, rare but profitable genetic conditions, or precision oncology. Without deliberate intervention, the danger is that AI will simply accelerate innovation where profit incentives already point, widening the divide between well-resourced regions and countries burdened by neglected diseases.

This raises an ethical question:

Are we using the world’s most advanced discovery tools to address the world’s biggest health burdens? Or just the most profitable ones?

 

3. Data Privacy and Ownership: Who Controls the Inputs and Outputs?

AI-enabled discovery relies heavily on data: genomic databases, proprietary screening results, patient registries, clinical datasets, and literature-derived knowledge graphs. This raises several questions with ethical implications:

  • Data representativeness: If certain populations are absent from datasets, AI may generate solutions that are less safe or less effective for those groups.

  • Intellectual property: Patent law is still catching up. When an algorithm designs a molecule, who is the “inventor”? The algorithm? The human team? The company?

  • Patient privacy: Sensitive clinical and genomic data must be handled with rigorous safeguards. Consent frameworks for AI use are still evolving.

The question of credit matters not just for legal clarity but for responsible innovation. Without clear frameworks, disputes could stifle collaboration or discourage transparent science. Innovation can only prosper and serve humanity within an enabling environment.

 

4. Dual-Use Risks: When AI Can Create Harmful Molecules

AI’s power to design molecules quickly is an undeniable scientific breakthrough, but it introduces new security concerns. A sophisticated generative model could theoretically be misused to design toxic agents or to optimise known harmful compounds, in the same way that an aeroplane can be used for comfortable travel or conceivably be weaponised.

And just as the aeroplane, as a utility, does not distinguish between its uses, neither do deep learning algorithms. This is not a hypothetical issue. Global biosecurity bodies warn that:

Bad actors could leverage open-source models or poorly governed tools to design harmful or weaponisable molecules.

Ensuring responsible release of models, monitoring access controls, and embedding misuse-detection systems will be essential to preventing dual-use vulnerabilities.

 

5. Safety and Informed Consent in the AI-Accelerated Era

If AI-designed molecules enter clinical trials faster than before, regulators and ethics boards must adapt. Key questions include:

  • How should regulators evaluate candidates emerging from opaque or black-box generative systems?

  • Should trial participants be informed that a molecule was designed by AI?

  • Does AI-driven structural novelty create hitherto unknown risks requiring enhanced safety monitoring?

The core principle remains unchanged: participants must be able to give fully informed consent — which may require new educational and regulatory frameworks.

 

6. Harnessing AI for Global Equity, Not Just Private Gain

AI can help us discover new drugs faster, but speed alone is not a moral compass. As experts at the WHO and leading academic institutions have argued, global governance frameworks are required to ensure that AI-driven pharmaceutical innovation benefits all countries, not just those with capital and computational resources.

This means:

  • Incentivising AI research into neglected diseases.

  • Ensuring global population diversity in training datasets.

  • Creating equitable partnerships with low- and middle-income countries.

  • Embedding ethics, transparency, and safety across AI-enabled R&D pipelines.

  • Developing international standards for safety, consent, and dual-use monitoring.

The real promise of AI in drug discovery is not just faster discovery, but discovery that serves everyone.

Dr. Ivan Fisher, Peter Leister


