Future-Ready Medicine: Bridging Ethics, Technology, and Care

A Summary of the 2024 Center for Digital Health Symposium

November 11, 2024 - by Rebecca Handler

On October 29, the Stanford Center for Digital Health (CDH) held its annual symposium, drawing thought leaders from medicine, technology, global health, philanthropy, and ethics to explore the evolving field of digital health. The event centered on the challenges and opportunities of integrating technology into healthcare, with a focus on critical issues like ethical boundaries, global equity, and fostering clinician-patient connections through human-centered design.

Eleni Linos, MD, MPH, DrPH, Director of the Center for Digital Health, opened the symposium by underscoring the shared responsibility of leaders across sectors to guide digital health innovations in ways that not only enhance clinical outcomes but also prioritize ethical and transparent practices. “The rapid pace of technological change in healthcare is such that no single individual, no single organization, or sector can address these complexities alone,” she remarked. “Our work thrives when we come together, drawing on our diverse strengths and expertise. The CDH symposium is an ideal platform for fostering this interdisciplinary spirit.”

This event underscores CDH's mission to convene diverse communities of experts to collaboratively tackle the most pressing questions in digital health. In the following highlights, we explore some of the impactful presentations from this year’s discussions, as CDH continues its commitment to a collaborative approach in addressing the future of healthcare.

Eleni Linos, MD, MPH, DrPH, gives opening remarks 

Nicholas Christakis, MD, PhD, MPH

Integrating Social AI: Rethinking Human Dynamics

Nicholas Christakis, MD, PhD, MPH, of Yale University opened the symposium with a provocative keynote on “Social Artificial Intelligence,” a concept that explores how AI can be embedded within social networks to influence and potentially enhance human interactions. Christakis explained that social AI goes beyond traditional human-AI interaction, examining how AI can operate as a “social agent” that actively shapes the dynamics within human groups. By introducing AI-driven agents or bots into social networks, researchers can study how these non-human participants affect group behavior, cooperation, and conflict resolution.

Social AI’s transformative potential lies in its ability to subtly alter group dynamics, as seen in Christakis’s experiments. For example, when AI bots were placed in group interactions, they fostered greater cooperation among participants and reduced interpersonal tensions. However, Christakis cautioned that embedding AI in social contexts introduces complex ethical challenges. “Social AI has a tremendous capacity to foster cooperation, but it also holds the power to disrupt, especially among children and impressionable groups,” he noted.

Christakis’s work suggests that social AI could be used as a tool to promote positive behaviors, such as encouraging teamwork in schools or supporting mental health in vulnerable populations. Yet, he also emphasized the potential risks, particularly the unintended consequences on social norms and behaviors. As social AI systems become more integrated into our daily lives – whether through digital assistants in collaborative workspaces, chatbots in social media, or AI-driven games – these technologies could reshape how people interact, think, and even make decisions.

Erika Cheung, co-founder of Ethics in Entrepreneurship

Ethical Boundaries in Digital Health: Lessons from Theranos

A particularly resonant session featured Erika Cheung, co-founder of Ethics in Entrepreneurship, who shared her personal journey as a Theranos whistleblower and how the scandal highlighted the dangers of unregulated health tech innovation. Cheung’s experience at Theranos became a case study in how the drive for groundbreaking technologies can sometimes lead companies to prioritize profit over ethics, with severe consequences for patients and public trust.

In her talk, Cheung explained that Theranos’ approach was driven by secrecy and an excessive focus on rapid growth, with little attention to the safety or efficacy of its technology. “Theranos taught me that without transparency, even the most promising innovations can cause real harm,” she said, reflecting on the toll that unethical practices can take on patients and employees alike. 

Cheung’s session was a call to action, urging digital health startups to adopt strict ethical oversight and prioritize patient welfare over market dominance.

Jessica Mega, MD, MPH, Dana Cho, and Steven Rosenbush

Collaboration and Trust as Catalysts for Impactful Innovation

In a session on bridging innovation gaps, Jessica Mega, MD, MPH, co-founder of Verily Life Sciences, Dana Cho, VP of Design from Pinterest, and Steven Rosenbush of The Wall Street Journal shared insights into the unique opportunities and challenges of collaboration in the digital health sphere. 

The panelists discussed how collaborations between technology and medicine have pushed forward mental health tools, patient engagement platforms, and other digital health advancements, often with surprising benefits for patient care. 

As an example, the panel examined how the COVID-19 pandemic drove rapid adoption of telemedicine, virtual care platforms, and mental health apps, revealing that digital tools could provide not just practical but also emotional support to patients.

The pandemic highlighted that digital health tools are not merely substitutes for in-person care but unique resources that can foster different forms of connection. Cho noted, “Telemedicine allows patients, even in remote settings, to feel a deeper sense of connection and engagement that often isn’t possible in traditional clinics.” 

Mega, meanwhile, emphasized the value of collaboration among academia, tech companies, and healthcare providers to refine and scale these innovations effectively. “This is the start of a new era for patient care, where innovation can truly bring us closer,” she said.

Kirsten Bibbins-Domingo, PhD, MD, MAS

Rethinking Medical Publishing with AI

Kirsten Bibbins-Domingo, PhD, MD, MAS, editor-in-chief of JAMA, introduced JAMA + AI, a new platform that aims to bring AI research directly into medical publishing. As AI in healthcare grows, so does the need for rigorous editorial standards to ensure that published research is both relevant and accessible to clinicians. Bibbins-Domingo explained that JAMA’s role is to bridge the gap between AI developers and healthcare providers, offering a space where high-quality AI research can meet clinical needs and be communicated effectively.

“A core challenge is to provide a rigorous yet accessible platform that keeps pace with the rapid advancements in AI,” Bibbins-Domingo said. Through JAMA + AI, the journal has introduced a peer-review framework that considers AI-specific concerns such as algorithm transparency, data validity, and reproducibility. This platform aims to set new standards in publishing, where innovative studies on AI’s application in medicine can be vetted and shared with healthcare professionals globally.

Bibbins-Domingo hopes that JAMA + AI will attract diverse voices, including both technology experts and clinical practitioners, fostering a dialogue that ensures AI applications are both technically sound and practically applicable in medical settings. She emphasized that maintaining these high standards is essential to building trust in AI as it becomes a core part of healthcare.

Global Health and the Digital Divide: Reimagining Equity

In a session on global health, Ruth O’Hara, PhD, Maya Adam, MD, PhD, CK Cheruvettolil, Till Bärnighausen, MD, MSc, and Michelle Williams, SM, ScD, discussed the unique challenges that digital health faces in low-resource settings, where infrastructure and healthcare access are often limited. O’Hara emphasized the role of digital health in bridging disparities, but cautioned that without careful design, AI could exacerbate the digital divide rather than alleviate it.

The panelists highlighted that digital tools often fail to address the realities in underserved areas where consistent access to clinics, pharmacies, or even stable internet is lacking. Cheruvettolil, former senior strategy officer of Digital Health and AI at the Bill & Melinda Gates Foundation, explained, “Telehealth, as we know it, requires a lot of assumptions that just don’t hold up in low-resource settings. To bridge the digital divide, we have to make sure that the technology meets people where they are.” This means adopting a “mobile-first” approach, focusing on cell phone accessibility, which is often the most reliable digital resource available in these communities.

The discussion emphasized that to be effective, digital health solutions must be tailored to each community’s needs, incorporating local cultural norms, available resources, and healthcare infrastructure. By integrating these considerations, digital health tools have the potential to make a meaningful impact on global health, addressing disparities and improving health outcomes in areas that lack traditional healthcare services.

Maya Adam, MD, PhD, Michelle Williams, SM, ScD, CK Cheruvettolil, Till Bärnighausen, MD, MSc, and Ruth O’Hara, PhD

 

Neesh Pannu, MD, Michael Pfeffer, MD, FACP, and Julia Fridman Simard, ScD

Data as a Cornerstone of AI in Healthcare

In a compelling session on the importance of data in AI, moderated by Julia Fridman Simard, ScD, Neesh Pannu, MD, from the University of Alberta and Michael Pfeffer, MD, FACP, from Stanford Health Care and School of Medicine discussed how high-quality, well-managed data serves as the foundation for impactful AI applications in healthcare. Pannu highlighted the advantages of Canada’s centralized health data system, which aggregates patient information across provinces and allows for population-wide insights and effective application of AI at scale. “Canada’s centralized system offers a unique model for integrating AI in a way that serves the entire population,” she explained.

Pfeffer elaborated on the need for accurate “small data” collection – such as individual patient interactions and daily health metrics – that feeds into broader datasets and improves the reliability of AI models. He explained that while “big data” often takes the spotlight, the fine-grained details captured in small data are essential for developing accurate, adaptable AI applications.

Together, the speakers advocated for “federated learning” as a secure way for institutions to collaborate on AI research without sharing actual patient data. In this approach, each institution trains an AI model locally with its own data, then shares only the model updates – not the data itself – with a central server. These updates are combined to create a stronger, shared model that all institutions can use. Federated learning enables collaboration across healthcare organizations, advancing AI while protecting patient privacy.
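The federated learning approach the speakers described can be illustrated with a minimal sketch. This is not the panelists' actual system; the "hospitals," the linear model, and all data below are synthetic assumptions used only to show the pattern of local training followed by server-side weighted averaging of model parameters:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a simple linear model on one institution's private data.
    Only the updated weights leave the institution, never the raw records."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(local_weights, sizes):
    """Server-side step: combine local models, weighted by dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

# Simulate three hospitals, each holding private data that is never pooled.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
hospitals = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w
    hospitals.append((X, y))

# Communication rounds: broadcast the global model, train locally, average.
global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = federated_average(updates, [len(y) for _, y in hospitals])

print(global_w)  # converges toward [2.0, -1.0] without sharing patient data
```

The key property is visible in the loop: only `updates` (model parameters) travel to the aggregation step, while each `(X, y)` pair stays with its institution.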

“Our goal is to allow data sharing in a way that preserves privacy while fostering discovery,” Pfeffer said, underscoring the importance of privacy in data sharing practices.

Ethics of AI: Balancing Innovation with Humanity

The symposium’s closing panel, “Innovation Meets Humanity,” brought together Harvey Fineberg, MD, PhD, of the Gordon and Betty Moore Foundation, David Webster of Google Labs, and David Magnus, PhD, of Stanford, to discuss the delicate balance between technological advancement and human touch in healthcare.

Webster shared optimism, explaining that AI can provide real-time, context-aware insights, meaning it can analyze and deliver relevant information about a patient’s condition, medical history, and current needs when a clinician most needs it. For example, during a consultation, AI might quickly highlight key health indicators, flag potential issues, or suggest personalized treatment options based on the patient’s unique data and broader clinical research.

With these insights readily available, clinicians can spend less time on data retrieval and analysis and more time directly engaging with the patient. This could lead to more meaningful interactions, as healthcare providers can focus on listening, discussing concerns, and building trust with patients rather than spending time on administrative or repetitive tasks. Webster believes this would make healthcare encounters feel more personal and supportive, making AI a partner in enhancing the quality and depth of clinician-patient relationships. “AI has the power to make interactions richer if implemented thoughtfully,” Webster notes.

David Magnus, PhD, David Webster, and Harvey Fineberg, MD, PhD

Magnus, however, voiced a note of caution, pointing out the potential risks of relying too heavily on AI in clinical settings, especially under economic pressures that might prioritize cost savings over quality interactions. “We have to be vigilant about how AI is deployed; cost-saving strategies shouldn’t come at the expense of human empathy,” Magnus emphasized. He warned that an overreliance on AI could risk depersonalizing healthcare, making patients feel more like data points than individuals.

Fineberg highlighted the ethical responsibility of the healthcare industry to prioritize patient-centered AI applications. He argued that while AI could streamline certain aspects of care, its integration should never undermine the core human values that define medicine. The panelists collectively reinforced the idea that while AI has the potential to transform healthcare, its deployment must be managed thoughtfully to ensure that technology serves as an enhancement of – not a replacement for – human connection in patient care.

From social AI’s influence on human behavior to the clinician-patient relationship, the symposium underscored that the future success of technology in healthcare ultimately depends on a careful balance between innovation and core human-centered values.

Jonathan D. Levin, President of Stanford University, Fei-Fei Li, PhD, Euan Ashley, MB ChB, DPhil

