Presence Announces 2019 Seed Grant Awardees for AI in Medicine: Inclusion & Equity

March 2019

The AI in Medicine: Inclusion & Equity Initiative at Stanford’s Presence Center has just announced its seed grant awardees for 2019. The program’s goal is to “explore solutions, frameworks, and concepts to address the challenges raised at the symposium relating to the use of artificial intelligence, machine learning and associated technologies, processes, and policies to address inclusion and equity in medicine and healthcare.” Six grants were awarded in the program’s inaugural round, with two more added for 2019; the details of each appear below.


Promoting Breast Health Awareness in Underserved Communities: Can Haptic-AI Technology Help?

Team: Carla M. Pugh, Professor of Surgery, Stanford University; Hossein Mohamadipanah, Senior Research Engineer, Stanford University; Su Yang, Research Engineer, Stanford University.

Project Description: 

   Broad interest in using social media and technology to build health-related ecosystems is often hampered by cultural norms and the digital divide, which further exacerbate healthcare’s ongoing disparities in equity and inclusion. Our team proposes a community-based, culturally grounded approach to engage a diverse group of underserved women on the topic of breast health awareness. Our protocol involves a personally interactive, verbal, and hands-on experience using haptic technology and AI to empirically investigate their interests, fears, and baseline understanding of breast health.

Scholars/Schools/Centers: Surgery, Medicine


Evidence-Based (Global) Medicine: Evaluating International Data Infrastructure for Healthcare Delivery in Developing Countries

Team Leads: Tina Hernandez-Boussard, Associate Professor of Medicine, Biomedical Data Science, and Surgery, Stanford University;  Eli Cahan, MS in Health Policy Candidate/MD Candidate, Center for Health Research and Policy at Stanford University/NYU School of Medicine.

Project Description: 

Data poverty can lead to algorithmic biases that potentially exacerbate existing inequities. In global health, data are often lacking, leaving unreliable information to guide decision-making. The goal of this project is to develop a predictive model that evaluates the likelihood of data compromise, validate this model retrospectively against the historical global health literature, and apply it prospectively to pilot targeted interventions for data quality improvement through the Wadhwani AI network. This work will demonstrate that high-quality data infrastructure is a prerequisite to equitable and effective “evidence-based” global health.

Scholars/Schools/Centers: Stanford BMIR; Stanford PHS; Wadhwani AI; NYU Department of Bioethics.



Intimate AI: The Impact of Machine Carers on Home Care

Team: Margaret Levi, Director, CASBS; Federica Carugati, Program Director, CASBS; Andrew Elder, Professor, University of Edinburgh; Peter Loewen, Professor, University of Toronto; Dominique Lestel, Professor of Philosophy, Ecole Normale Supérieure de Paris; John Markoff, New York Times; Sonoo Thadaney, Executive Director, Stanford Presence Center.

Project Description: 

    While AI-enhanced patient care holds tremendous potential for enabling patients to stay in their homes longer and for reducing the burgeoning costs of healthcare, its positive impact depends on proactively and preemptively building solutions that address the needs, experiences, and realities of all the humans involved.

    In traditional models of health and social care, the human in receipt of care (the “patient”) interacts with one or more humans providing that care, e.g., health professionals (doctors, nurses, or other allied professionals) or informal caregivers. For simplicity, we term all these groups “human carers.”

    Such human-human interactions are extremely complex: they involve trust, delegation, and accountability. For this reason, they are regulated not only by informal norms, but also by formal rules designed to define and evaluate risks, liabilities and mutual responsibilities.

    This already complex set of human-human interactions is further complicated by the introduction of technology that will soon possess varying degrees of autonomy of decision-making, communication and action. Care robots and AI companions, or “machine carers,” will have more direct involvement in physical or emotional patient care.

Scholars/Schools/Centers: Political Science, Medicine, Philosophy, Business, Classics, Journalist


The ‘Empulse’ App To Fund Homelessness Organizations

Team: Dr. Sanjay Basu, Assistant Professor, Health Research and Policy; Dr. Drea Burbank, CEO, H4A todreamalife; Dr. Nima Aghaeepour, Assistant Professor, Bioinformatics; Ayo Roberts, Biodesign Innovation Fellowship Alumnus, Stanford Byers Center for Biodesign

Project Description: 

    Homelessness is an unsolved problem. People experiencing homelessness are under-represented in population datasets (inclusion) and may be harmed when computer-aided decision-making is used to allocate funding of basic needs (equity). The desire to give a panhandler $1 might be considered an impulse purchase of homeless services. We hypothesize that facilitating this interaction with the ‘Empulse’ app could capture impulse donations more effectively and provide thin-sliced human assessments of need for policymakers. The objective of this proposal is proof of concept through the creation and distribution of the app, pre/post feedback with local stakeholders, formal data analysis, and development of a predictive ML algorithm. Homelessness is likely a problem no computer can ever solve on its own, but a computer can help humans fund humans to solve it.

Scholars/Schools/Centers: Medicine, Mechanical Engineering, Brain and Cognitive Sciences, Medical Anthropology, Public Health, Bioinformatics


Coding Caring

Team: Rob Reich, Professor, Faculty Director of the Center for Ethics in Society, Human-Centered AI Initiative core faculty, Stanford University; Morgan Currie, Lecturer in Data and Society, Science, Technology and Innovation Studies, University of Edinburgh; Jessica Feldman, Professor, American University of Paris; Johannes Himmelreich, Ethics and Society Postdoctoral Fellow; Fay Niker, Ethics and Society Postdoctoral Fellow

Project Description: 

    Virtual assistants help raise your children, virtual therapists treat veterans living with PTSD, and virtual companions ease loneliness. “Intimate AI” increasingly supplements and replaces human care on the promise that serious social problems can be solved by developing technologies of care that are cheap and accessible. At the same time, these technologies give rise to deep challenges regarding equity, inclusion, and the social value of care. We propose a workshop that brings together theory and design in a meeting of practitioners and academics to address conceptual, ethical, and political issues of coding caring.

Scholars/Schools/Centers: Political Science, Ethics


Machine Learning for Making Fair and Equitable Predictive Models

Team: Nigam H. Shah, Professor, BMIR; Adrien Coulet, Professor, Université de Lorraine; Stephen R. Pfohl, Stanford Center for Biomedical Informatics Research

Project Description: 

    Personalized risk stratification models built from electronic health records (EHR) have the potential to improve quality of care. However, naive use of such models may introduce and reinforce care disparities if a model under- or over-predicts risk for certain populations. We propose using external data sources to supplement local data and increase population diversity when learning predictive models. However, such external sources differ in care delivery, coding practices, demographics, case mix, and outcome definitions. This project will examine whether transfer learning and domain adaptation can create a shared latent space across local and external databases to improve the performance of predictive models for populations that are underrepresented in the Stanford health system.
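To give a flavor of the kind of domain adaptation the project describes (this is a generic illustration, not the team's actual method), a simple correlation-alignment (CORAL-style) transform re-colors external-source features so their second-order statistics match the local population before a predictive model is trained; all data below are synthetic:

```python
import numpy as np

def _sym_pow(m, p):
    """Symmetric matrix power via eigendecomposition (m must be positive definite)."""
    vals, vecs = np.linalg.eigh(m)
    return vecs @ np.diag(vals ** p) @ vecs.T

def coral_align(source, target, eps=1e-6):
    """Re-color source features so their covariance matches the target domain
    (CORAL-style correlation alignment). Rows are records, columns are features."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)  # regularized source covariance
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)  # regularized target covariance
    # Whiten the source second-order statistics, then apply the target's.
    return source @ _sym_pow(cs, -0.5) @ _sym_pow(ct, 0.5)

# Synthetic example: an "external" cohort aligned to a "local" cohort.
rng = np.random.default_rng(0)
local = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
external = rng.normal(size=(800, 4)) @ rng.normal(size=(4, 4))
aligned = coral_align(external, local)
```

After alignment, the external cohort's feature covariance matches the local cohort's, so a model trained on the pooled data sees more consistent feature statistics. Shared latent spaces learned by neural domain adaptation, as the project proposes to examine, pursue the same goal with richer transformations.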

Scholars/Schools/Centers: Medicine, Bioinformatics, Biomedical Data Sciences


Demographic Biases in Machine Learning Algorithms Using Electronic Health Record Data: A Systematic Review

Team: Tina Hernandez-Boussard, Professor, Biomedical Informatics, Biomedical Data Science, Surgery, Stanford University; John Ioannidis, Professor, Health Research & Policy; Martin Seneviratne, Graduate student, Biomedical Informatics; Stelios Serghiou, Graduate student, Epidemiology & Clinical Research

Project Description: 

    The rise of electronic health records (EHRs) has enabled the development of machine learning algorithms to assist with clinical decision-making for diagnosis, treatment, and prognosis. As these algorithms enter clinical practice, it is important to understand hidden biases in how they were developed and whether machine learning algorithms in healthcare will perpetuate discrimination if they are trained on historical data. To date, little evidence exists on the equitable benefit of machine learning algorithms in healthcare, which begins with transparency about the demographic distribution of the population(s) studied. Such information is critical to understanding whether such advances in medical artificial intelligence will benefit everyone equally. We propose a systematic review of machine learning algorithms using EHR data, evaluating 1) whether studies disclose the breakdown of their training/test datasets in terms of ethnicity, gender, age group, and socioeconomic/insurance status; 2) how representative the training data are of the broader population; 3) whether these demographic parameters are included in the model; and 4) whether any comparative assessment of model performance on vulnerable populations has been performed. This work will fill a critical gap in the literature regarding the potential benefit of emerging medical informatics technologies across all populations.

Scholars/Schools/Centers: Medicine, Bioinformatics, Public Health, Statistics, Epidemiology


A Bayesian Decision Network Approach to Address Inequity in the Neonatal Intensive Care Units in California

Team: Dr. Anoop Rao, Dr. Tavpritesh Sethi, Prof. Vinod Bhutani

Project Description: 

    Healthcare inequity in neonatal care has been demonstrated using traditional discriminative models such as logistic regression. However, there is a concern that biases may inadvertently arise from unaccounted-for factors while training statistical or AI models. To address this concern, we will employ an interpretable and explainable AI pipeline using a validated graphical approach based on Bayesian networks. Using this approach and the California Perinatal Quality Care Collaborative (CPQCC) data registry, our team will quantify the role of heterogeneous covariates (including race/ethnicity) in influencing four major adverse morbidity outcomes: necrotizing enterocolitis (NEC), bronchopulmonary dysplasia (BPD), retinopathy of prematurity (ROP), and intraventricular hemorrhage (IVH).
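The graphical approach the team describes can be sketched with a toy discrete Bayesian network queried by exhaustive enumeration. The variables and every probability below are hypothetical placeholders, not CPQCC figures; real analyses would use validated structure-learning tools and far richer networks:

```python
from itertools import product

# Toy network over three binary variables:  preterm -> outcome <- covariate
# All conditional probabilities are invented for illustration only.
P_preterm = {True: 0.3, False: 0.7}
P_covariate = {True: 0.4, False: 0.6}
P_outcome = {  # P(outcome=True | preterm, covariate)
    (True, True): 0.25,
    (True, False): 0.15,
    (False, True): 0.08,
    (False, False): 0.04,
}

def joint(preterm, covariate, outcome):
    """Joint probability of one full assignment, factored along the graph."""
    p = P_preterm[preterm] * P_covariate[covariate]
    po = P_outcome[(preterm, covariate)]
    return p * (po if outcome else 1 - po)

def query(outcome_val, **evidence):
    """P(outcome = outcome_val | evidence), by summing over all assignments."""
    num = den = 0.0
    for preterm, covariate, outcome in product([True, False], repeat=3):
        assign = {"preterm": preterm, "covariate": covariate, "outcome": outcome}
        if any(assign[k] != v for k, v in evidence.items()):
            continue
        p = joint(preterm, covariate, outcome)
        den += p
        if outcome == outcome_val:
            num += p
    return num / den

# Influence of the covariate on the adverse outcome, marginalizing over preterm:
risk_with = query(True, covariate=True)
risk_without = query(True, covariate=False)
```

Comparing `risk_with` and `risk_without` is the kind of covariate-influence quantification the project proposes, with the advantage that every number traces back to an inspectable conditional probability table rather than an opaque model.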

Scholars/Schools/Centers: Pediatrics - Neonatal and Developmental Medicine, Philosophy, Biotechnology