Exploring the Expanding Realms of Artificial Intelligence in Psychological Well-being

So Much Digital Data

In 2017, uneasy about the enormous amount of personal health information being generated through mobile apps, wearable devices, and social networks, a team of academics from the University of California set out to investigate the ethical considerations of technology-driven research and AI, aiming to offer recommendations for the responsible conduct of such research. While the number of steps you take each day might seem benign, the volume of health data companies are gathering is growing rapidly. As technology continues to permeate healthcare services, these scholars caution that the pace of innovation is outrunning the ability of institutional review boards to evaluate risk effectively. One of their main conclusions was the importance of strengthening collaboration among stakeholders to develop and disseminate resources that improve understanding of these technologies and reduce potential privacy and confidentiality risks for participants. In simpler terms, they called for better education of users about available technology and its associated risks.

AI in Mental Health Support

In the realm of artificial intelligence (AI) and mental health, technology opens up numerous opportunities (and challenges). At the moment, ClareandMe.com, a Berlin-based company, offers mental health assistance via WhatsApp, text, and phone calls during business hours at no extra cost. To demonstrate how AI might bolster clinical practice specific to mental health, another group of researchers from California examined 28 studies that employed AI to identify, classify, or subgroup mental health conditions such as depression, schizophrenia, and suicidal thoughts. They found a sharp increase in the number of studies in this area starting in 2015, and many of the studies they reviewed diagnosed the conditions they targeted with high accuracy. For instance, one study employed an artificial neural network that predicted depression with 97% accuracy from sociodemographic and clinical information in a large electronic health records database. Another study showed that depression could be predicted by analyzing users' social media posts.
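
To make that concrete, here is a minimal sketch, in Python with scikit-learn, of the general shape such a model takes: tabular sociodemographic and clinical features in, a binary depression label out. Every feature name, the fabricated labels, and the network architecture here are assumptions for illustration; this is not the cited study's dataset or model, and the accuracy it prints carries no clinical meaning.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-ins for the kinds of sociodemographic and clinical
# features an EHR-based study might draw on (all hypothetical).
age = rng.integers(18, 90, n)
prior_visits = rng.poisson(3, n)
chronic_conditions = rng.integers(0, 5, n)
sleep_hours = rng.normal(7, 1.5, n)

# Fabricated labels with a loose dependence on the features, for the
# demo only; real labels would come from clinical records.
risk = 0.05 * prior_visits + 0.3 * chronic_conditions - 0.2 * (sleep_hours - 7)
y = (risk + rng.normal(0, 0.4, n) > 0.8).astype(int)

X = np.column_stack([age, prior_visits, chronic_conditions, sleep_hours])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward neural network on standardized tabular features.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The value of the sketch is the pipeline shape, not the numbers; whether a real model of this kind deserves trust is exactly what the considerations below help assess.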

Evaluating Innovation and Knowledge Gaps

Regardless of whether technology is used for preventive care, diagnosis, or active treatment of mental health issues, current applications in this field are being explored but remain understudied. This topic is so intricate and rapidly evolving that acquiring knowledge can feel overwhelming or even impossible, yet leaders may benefit from adopting a framework to assess the risks and benefits of technology. Recently, Munmun De Choudhury, Associate Professor in the School of Interactive Computing at Georgia Institute of Technology, and Emre Kiciman, Senior Principal Researcher at Microsoft Research, published a paper proposing four considerations to support the integration of AI with human intelligence in healthcare settings. For those interested in the implications of AI in our daily lives and personal choices, Dr. De Choudhury and Dr. Kiciman's four factors may serve as a useful reference the next time your smartwatch encourages you to take more steps.

  1. Construct Validity - when AI analyzes data to draw conclusions, construct validity asks, "Are we actually measuring what we believe we are measuring?" For example, when predicting depression from social media posts, are the words and images truly representative of the mood the individuals are experiencing?
  2. Unobserved Contextual Factors - every observed data point is shaped by situational or other unobserved factors that may not be immediately apparent. Depression rates, for instance, may be higher in certain seasons, or a culture may view mental illness as taboo, leading to fewer disclosures of negative emotions on social media. Such factors can skew our interpretation of the data, leading to inaccurate diagnoses or incorrect conclusions.
  3. Data Biases - AI makes decisions based on rules or trends derived from analyzing large datasets, and those datasets are not free of bias. Consider a dataset built around individuals who have received one-on-one therapy: people with certain sociodemographic characteristics are more likely to participate in therapy, so the dataset is not representative of the entire population. This can lead to the mistaken assumption that trends observed in it apply to other subpopulations or to the population at large (the short simulation after this list makes the effect concrete).
  4. The Cost of AI Errors - the problems we ask AI to address in healthcare can have severe consequences when the technology fails or predicts incorrectly. Even a simple typo in a prescription can be life-threatening, so it is crucial for leaders to devote resources to understanding the mistakes AI might make in specific scenarios.
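
To make point 3 tangible, here is a hedged toy simulation of sampling bias. All variables, thresholds, and sampling rates below are invented purely for the demonstration; the only claim is the mechanism: a model trained on a skewed "therapy" sample can look accurate on that sample yet generalize poorly to the full population.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 20000

# Hypothetical sociodemographic variable and symptom measure; every
# number here is made up to demonstrate the bias, not drawn from data.
income = rng.normal(0, 1, n)
symptoms = rng.normal(0, 1, n)

# Assume the symptom-outcome relationship differs across groups:
# lower-income individuals cross the threshold at a lower symptom score.
y = (symptoms > np.where(income > 0, 0.5, -0.5)).astype(int)

X = symptoms.reshape(-1, 1)

# The "therapy dataset": higher-income individuals are far more likely
# to appear in it, so the training sample is not representative.
in_therapy = rng.random(n) < np.where(income > 0, 0.6, 0.05)

model = LogisticRegression().fit(X[in_therapy], y[in_therapy])

print("accuracy on therapy sample:  %.2f"
      % accuracy_score(y[in_therapy], model.predict(X[in_therapy])))
print("accuracy on full population: %.2f"
      % accuracy_score(y, model.predict(X)))
```

On a typical run, accuracy on the full population falls well below accuracy on the therapy sample, precisely because the model never saw enough of the underrepresented group to learn how its outcomes differ.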

As a final thought experiment, given that regulations for deploying AI in healthcare are still in development, leaders and their employees should weigh the advantages and disadvantages of how wearable data, email content, and any digital data tied to a person's identity can be used: to diagnose health conditions and personalize treatment, or, on the darker side, to influence insurance coverage and employment decisions.

When evaluating innovation and knowledge gaps in AI and mental health, leaders may benefit from checking the construct validity of AI-driven analyses, confirming that a system measures what it claims to, and from accounting for unobserved contextual factors, data biases, and the cost of AI errors. Attending to all four factors supports more accurate and responsible AI integration in healthcare settings.

From a strategy and leadership standpoint, fostering collaboration among stakeholders to develop and share resources that improve technology literacy and mitigate privacy and confidentiality risks remains an effective approach. Leaders should also weigh AI's potential for diagnosing health conditions and personalizing treatment against its possible impact on insurance coverage and employment decisions.
