AI Transparency
Last updated: 2026-05-04 · EU AI Act / EU AI Pact note
This page exists to give clinical, regulatory, and procurement teams a one-page answer to the question “what kind of AI is HEORAgent and how do I use it safely?” It is structured around the disclosure expectations emerging from the EU AI Act (Reg. 2024/1689) and the voluntary commitments of the EU AI Pact.
Intended use
HEORAgent is a research and analysis assistant for Health Economics and Outcomes Research (HEOR), Health Technology Assessment (HTA), and Pharmacovigilance (PV) professionals. It automates literature retrieval across 44 data sources, runs cost-effectiveness and budget-impact models, drafts HTA dossiers (NICE/EMA/FDA/IQWiG/HAS/EU JCA), classifies pharmacovigilance studies, and assesses indirect comparison feasibility.
It is not a clinical decision-support system. It does not advise on individual patient diagnosis, treatment selection, or monitoring. Outputs are inputs to professional decision-making, not substitutes for it.
Risk classification under the EU AI Act
Based on Annex III of Regulation (EU) 2024/1689, HEORAgent is not a high-risk AI system because it does not:
- Determine access to medical care or insurance
- Drive individual diagnosis, treatment, or monitoring
- Perform safety-critical functions in a regulated medical device
- Evaluate, profile, or score individual natural persons
It falls under the limited risk / general-purpose AI tier, with transparency obligations only: users must be told they are interacting with an AI system, and AI-generated content must be labelled. Both are implemented (see below).
Customers deploying HEORAgent inside a workflow that does feed individual care decisions are responsible for performing their own conformity assessment under the AI Act. The classification above applies to the tool as delivered, not to every downstream use.
Human oversight model
Every HEORAgent output is intended to be reviewed by a qualified HEOR/HTA/PV professional before any action is taken on it. Concretely:
- A human selects the question, the comparators, the perspective, and the time horizon.
- A human chooses which data sources to query, which to exclude, and which results to include in a synthesis.
- A human applies methodological judgement on transitivity assumptions (Cope 2014), GRADE certainty rationale (Guyatt 2011), utility instrument selection (NICE DSU TSDs), and ITC feasibility (TSD 18).
- A human signs off on every dossier, ICER, RoB judgement, or PV plan before it leaves their organisation.
The tool surfaces the relevant framework and runs the math; it does not own the conclusion.
Transparency & audit trail
- PRISMA-style audit record on every tool call. Every response includes an `audit` object listing which sources were queried, which succeeded, which failed and why, and which assumptions were applied. This record is part of the structured output, not a summary the model can hallucinate.
- AI commentary is explicitly labelled. Any narrative text the model produces outside the audited tool outputs is tagged `AI Commentary (not from audited tools)`. Domain claims (ICERs, trial results, regulatory decisions) come exclusively from tool outputs, never from training-data recall.
- URL validation. Every cited URL is validated via the `validate_links` tool before presentation. No fabricated links, no “search linked” placeholders.
- Reproducibility. Literature searches default to `runs=3` with deduplication; the same query produces the same presentation. The audit trail makes a result reviewable months later.
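To make the audit-record idea concrete, here is a minimal sketch of what such a structured record could look like. All field and function names are hypothetical illustrations, not the actual HEORAgent schema:

```typescript
// Hypothetical sketch of a per-call audit record (illustrative names only).
interface AuditRecord {
  sourcesQueried: string[];
  sourcesSucceeded: string[];
  sourcesFailed: { source: string; reason: string }[];
  assumptions: string[];
  runs: number; // e.g. literature searches defaulting to 3 deduplicated runs
}

// Builds the record deterministically from what actually happened in the
// tool call, so it cannot be "summarised" (i.e. hallucinated) by the model.
function buildAudit(
  queried: string[],
  failed: Map<string, string>, // source -> failure reason
  assumptions: string[],
  runs: number
): AuditRecord {
  return {
    sourcesQueried: queried,
    sourcesSucceeded: queried.filter((s) => !failed.has(s)),
    sourcesFailed: Array.from(failed.entries()).map(([source, reason]) => ({
      source,
      reason,
    })),
    assumptions,
    runs,
  };
}
```

Because the record is assembled from the tool call's own control flow rather than generated text, a reviewer can reconstruct months later which sources answered, which timed out, and which modelling assumptions were in force.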
Models and providers
- Web UI: Anthropic Claude Sonnet 4.6 (BYOK — user provides their own API key).
- ChatGPT Custom GPT: OpenAI GPT-5.3 via the official ChatGPT Actions framework.
- MCP server: tool logic is deterministic TypeScript (no LLM in the path); the model sits only in the calling client.
The MCP server itself is open source under MIT. Tool logic can be audited line by line at github.com/neptun2000/heor-agent-mcp.
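“Deterministic tool logic” means the numerical work is ordinary code, not model output. A minimal sketch in the same spirit (hypothetical, not the actual heor-agent-mcp code) is an ICER calculation, where ICER = ΔCost / ΔQALY:

```typescript
// Illustrative deterministic tool logic: incremental cost-effectiveness
// ratio (ICER). Hypothetical sketch, not the heor-agent-mcp implementation.
interface Arm {
  cost: number; // total cost per patient
  qalys: number; // quality-adjusted life years gained
}

function icer(intervention: Arm, comparator: Arm): number {
  const deltaCost = intervention.cost - comparator.cost;
  const deltaQaly = intervention.qalys - comparator.qalys;
  if (deltaQaly === 0) {
    // Equal effectiveness: the ratio is undefined; a real tool would
    // report dominance/extended-dominance logic instead.
    throw new Error("ICER undefined: arms have equal effectiveness");
  }
  return deltaCost / deltaQaly;
}

// Example: £30,000 extra cost for 1 extra QALY → ICER of 30,000 £/QALY
// icer({ cost: 50_000, qalys: 6 }, { cost: 20_000, qalys: 5 }) → 30000
```

Running the same inputs always yields the same number, which is what makes line-by-line audit of the open-source tool code meaningful.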
Methodological references
HEORAgent is built on published methodology, not in-house heuristics. Where a tool implements a framework, the framework is cited inline:
- ISPOR good-practice guides (CEA, BIA, indirect treatment comparison)
- NICE DSU Technical Support Documents (TSD 14, TSD 18)
- NICE PMG36 Health Technology Evaluations Manual (2026 update)
- Cochrane Handbook for Systematic Reviews (Ch 10, 11)
- GRADE Working Group (Guyatt 2011, 2013)
- EMA Good Pharmacovigilance Practices (GVP rev 4)
- EU Regulation 2021/2282 (HTA Regulation, Joint Clinical Assessment)
- Cope 2014, Phillippo 2016, Signorovitch 2023 (ITC methodology)
- Biz, Hernández Alava, Wailoo 2026 (UK EQ-5D-5L value set)
EU AI Pact
The maintainer is in the process of signing the European Commission's voluntary EU AI Pact, committing to:
- Adopt an AI governance strategy aligned with the AI Act
- Identify and map AI systems likely to be high-risk under the AI Act
- Promote AI literacy and responsible-development awareness among contributors
Things HEORAgent does NOT do
- Diagnose, treat, or monitor individual patients
- Generate FDA / EMA / NICE submissions in final form
- Replace HEOR consultancy review
- Make pricing or reimbursement decisions
- Profile, score, or evaluate individual people
- Process patient-identifying health data
Reporting concerns
If you encounter a tool output that you believe is unsafe, materially wrong, or misleading, please open a public issue at github.com/neptun2000/heor-agent-mcp/issues with the prompt, response excerpt, and a description of the concern. Public issues are the preferred channel because they create a reviewable record.
See also the Privacy Policy.