
I track new publications about fetal monitoring daily. Over the past few years, the most common publications I see relate to the development and testing of artificial intelligence models to achieve computer interpretation of CTG recordings. Most are designed for use in labour, and some for the antenatal period. If you are after a summary of the evidence about whether or not they work (so far) – you can find that here. Some systems – like the INFANT system from K2 – are already in clinical use.
It seems to me that the next decade will see a huge explosion in the use of artificial intelligence systems in fetal monitoring. There will be a move towards offering a “decision support” approach, rather than simply categorising the heart rate pattern as normal or some flavour of not normal. Decision-support systems bring in other data about the woman and her fetus and generate recommendations for care based on computation of risk from more than just the CTG pattern.
Much of what I am seeing published focusses on the technical aspects of how to build and train an artificial intelligence system. Some report on how accurate these systems are in comparison to experienced health professionals (and I have shared some of this research on the blog before – here, and here for example). While this research is important, there is another conversation that needs to happen in parallel with this technology development. Fewer publications are happening in this space, and even fewer specifically focus on fetal heart rate monitoring technology, but one caught my eye recently.
Patient rights in artificial intelligence driven healthcare
Ploug and colleagues (2025) write about the importance of having robust regulatory systems to approve, or not, the use of specific instances of artificial intelligence technology in healthcare. Their paper has a European focus, examining the recent legislation known as the AI Act. Globalisation means that approaches in Europe are likely to reflect / lead to approaches in other parts of the world, so their discussion is relevant in other settings too.
The AI Act sets out the responsibilities for providers (the system must be well tested, accurate, and transparent) and for those who deploy the technology – like hospitals. This must include a human rights impact assessment, examining the way the artificial intelligence system might impinge on rights to privacy and data protection, or how it might reproduce or magnify discrimination. Ploug and colleagues set out three problems with the current approach, and five specific rights that should also be considered.
The problems
First, users of healthcare systems are not a single amorphous mass. Each person will have their own view on whether they face acceptable or unacceptable risks when an artificial intelligence technology is applied in their healthcare. But these technologies are deployed system-wide, so the decision to deploy them is based on an average of opinions about the level of risk.
Second, healthcare systems are complex social beasties. Introduce a change, and the one thing you can expect is something unexpected. The unintended consequences of AI in healthcare are under-explored. My research was the first (and so far remains the only) to explore the unintended consequences of central fetal monitoring systems (Small et al., 2021). The potential for harmful unintended impacts from fetal monitoring technology remains vastly under-investigated.
Finally, the AI Act shifts power from individual users of healthcare to a handful of consumer stakeholders who are consulted when AI systems are developed. This stakeholder group may or may not be broadly representative of the people who will use the technology. Once introduced, individual healthcare users may no longer have the power to opt out.
The rights
The five rights that have been proposed are:
- The right to an explanation of an artificial intelligence generated diagnosis or treatment plan. Of course, if the system is a black-box system – one where no one can tell how the computer arrived at the recommendation it is making – then this is impossible.
- The right to withdraw from having artificial intelligence used for diagnosis and treatment planning, and to have a clinician provide this instead.
- The right to contest an artificial intelligence diagnosis or plan. To be able to achieve this, healthcare users would need to be provided with information about how the artificial intelligence system works, how well it performs, and its potential biases, so they can realistically form an impression about whether the diagnosis or plan is relevant to them.
- The right to a second opinion about an artificial intelligence diagnosis or treatment plan. This might be through the use of a second, independent artificial intelligence system or involve a clinician.
- The right to not be screened or diagnosed on the basis of publicly available data without consent. This refers to the potential for artificial intelligence systems to make use of data from social media as part of the algorithm.
How does this apply to fetal monitoring use in maternity care settings?
Artificial intelligence systems for fetal monitoring already deployed in maternity care systems do not make allowance for these rights. They are sometimes designed and introduced with minimal input from consumers. Once adopted, they are impossible to opt out of, other than to refuse CTG use (which might result in the clinician withdrawing access to a treatment, like induction, that has been requested by the healthcare user). Women accessing maternity care are rarely (in my experience – no one has even started to investigate this) informed that their CTG data is being interpreted by an artificial intelligence system. They are also unlikely to be told the extent to which a care recommendation is based on the output of an artificial intelligence system, and the clinician is not well positioned to critically consider the way in which their recommendation is shaped by that technology.
A human rights discussion about artificial intelligence technology for fetal monitoring must be prioritised at this point. There may need to be a revision and / or withdrawal of some of these technologies while the human rights aspects are investigated and appropriate protections built in. All developers of new technologies need to design human rights considerations into the technology from day one – and not just stick a warning label on the box after it is commercially available.

References
Ploug, T., Jørgensen, R. F., Motzfeldt, H. M., Ploug, N., & Holm, S. (2025). The need for patient rights in AI-driven healthcare – risk-based regulation is not enough. Journal of the Royal Society of Medicine, 0(0). https://doi.org/10.1177/01410768251344707
Small, K. A., Sidebotham, M., Gamble, J., & Fenwick, J. (2021, Jun 24). “My whole room went into chaos because of that thing in the corner”: Unintended consequences of a central fetal monitoring system. Midwifery, 102, 103074. https://doi.org/10.1016/j.midw.2021.103074
Categories: CTG, EFM, Reflections
Tags: Artificial intelligence, Human rights