Medical care is increasingly diffused across a variety of institutions, personnel, and technologies. The doctor-patient relationship has always adapted over time to advances in medicine, biomedical research, and care practices. At the same time, the capacity of AI to replace or augment human clinical expertise, utilising highly complex analytics and unprecedented volumes and varieties of data, suggests that the technology's impact on the doctor-patient relationship may itself be without precedent.

The adoption of AI need not be a fundamental barrier to good doctor-patient relationships. AI has the potential to alter care relationships and displace responsibilities traditionally fulfilled by medical professionals, but this is not a foregone conclusion. The degree to which AI systems inhibit ‘good’ medical practice hinges upon the model of service. If AI is used solely to complement the expertise of health professionals bound by the fiduciary obligations of the doctor-patient relationship, its impact on the trustworthiness and human quality of clinical encounters may prove minimal.

At the same time, if AI is used to heavily augment or replace human clinical expertise, its impact on the caring relationship is more difficult to predict. It is entirely possible that new, broadly accepted norms of ‘good’ care will emerge through greater reliance on AI systems, with clinicians spending more time face-to-face with patients while relying heavily on automated recommendations.

The impact of AI on the doctor-patient relationship remains highly uncertain. A radical reconfiguration of care, in the sense of human expertise being replaced by artificial intelligence, is unlikely within the next five years. That said, developments such as the COVID-19 pandemic and the increased pressures it has placed on health services may transform the mode of delivery of care, if not the expertise behind it. Remote delivery of care, for example, may become increasingly commonplace even if diagnosis and treatment remain firmly in the hands of human health professionals.

A radical reconfiguration of the doctor-patient relationship of the type imagined by some commentators, in which artificial systems diagnose and treat patients directly with minimal interference from human clinicians, continues to seem far in the distance. Movement in this direction hinges on proof of clinical efficacy which, as noted above, remains a barrier to commercialisation and widespread adoption. Likewise, new modes of clinical care would need to be developed that utilise the best aspects of human clinicians and artificial systems, implement appropriate safety and resilience checks, and minimise the weaknesses and implicit biases of both agents. Without due consideration of the implications of AI for medical practice, the “moral integrity of the doctor-patient relationship” may come to be dominated by institutional and external interests, with patient experiences of care suffering as a result.

As AI is adopted across different healthcare systems and jurisdictions, it is important to remember that the moral obligations of the doctor-patient relationship are always affected, and perhaps displaced, by the introduction of new care providers. While technology continues to develop at a rapid pace, the patient’s experience of illness (e.g., vulnerability, dependency) and expectations of the healing relationship do not radically or quickly change. The doctor-patient relationship is a keystone of ‘good’ medical practice, and yet it is seemingly being transformed into a doctor-patient-AI relationship. The challenge facing AI providers, regulators, and policymakers is to set robust standards and requirements for this new type of healing relationship, to ensure that patients’ interests and the moral integrity of medicine as a profession are not fundamentally damaged by the introduction of disruptive emerging technologies.