The role of technology in mental healthcare

Policy Briefing

Published 28/04/2022


Our briefing note explores the ethical and social issues arising from the use of technology in mental healthcare.


Ethical and social issues

Here we describe some of the ethical and social issues raised by the potential use of emerging technologies in mental healthcare. Some of these overlap with the issues raised by the use of telepsychiatry and other forms of online medicine. These have been considered in a previous Nuffield Council report.

Human connections

Technology can play a key role in connecting people, for example by linking service users who may be geographically very distant but who share similar experiences and needs. This may be especially important for people with rare conditions who would normally not have a chance to meet.

If mental healthcare becomes increasingly contactless and automated, it is important to reflect on the impact that a lack of human contact may have on those seeking care. For many service users, in-person therapy, community participation, peer and group support, and other activities involving face-to-face interaction can be decisive in improving clinical outcomes. Initial studies examining experiences of care during COVID-19 have shown that while some people adapted quickly and appreciated the flexibility offered by remote care, others experienced exacerbated feelings of loneliness and isolation, a sense of disconnection from their communities, and an overall deterioration in mental health.

Therapeutic relationship

Some service users might find it difficult to build a relationship of trust with health professionals without face-to-face contact.[v] This may have implications for clinical outcomes, given the importance of the therapeutic relationship to the recovery process. There are calls for more research on the effects of cultivating therapeutic relationships with non-human agents. Despite recent advances in affective computing*, automated systems are still a long way from understanding a person's subjective experience of mental illness, and it is not known whether machines will ever be able to fully replicate the richness of human emotions and interactions.

*Affective computing is the study and development of systems that can recognise, process, and simulate human affects.

Effectiveness and safety

Interventions delivered via digital technology can be very effective for some people. For example, individuals affected by phobias might find it easier to confront their fears in virtual environments than in real ones, making VR-based treatments both effective and appealing.

Concerns have been raised about the lack of evidence supporting the use of some technologies. Despite having millions of users, most commercially developed apps have not undergone rigorous scientific testing and, when they have, the studies often rely on small samples with no follow-up. In addition, studies are sometimes conducted by the apps' own developers rather than by independent research teams. As a result, it is not always clear whether these tools are effective, or whether they could cause harm. There have been calls for further research on the effectiveness and safety of mental health apps, and for the introduction of more robust regulatory frameworks.

Similar concerns have been raised about wearable neurotechnologies. Claims of effectiveness made by developers are often based on the effectiveness of the underlying treatment on which a given product is based, such as transcranial direct current stimulation (tDCS), rather than on the product itself.

Accuracy

There are questions about how accurate diagnostic and prediction tools need to be before they can be used in clinical decision-making, particularly if they are to be used without the involvement of clinicians. How accuracy is defined is also important: in some research studies, accuracy is measured by how closely the tool's output matches clinicians' determinations, which makes clinician judgement the benchmark rather than an independent ground truth.
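To make this concrete, the toy sketch below (with entirely hypothetical labels and predictions) computes accuracy in this agreement-based sense: the share of cases where a tool's output matches clinicians' determinations.

```python
# Illustrative sketch: "accuracy" as agreement with clinician judgements.
# All labels and predictions below are hypothetical.

clinician_labels = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = condition judged present, 0 = absent
tool_predictions = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical output of a prediction tool

matches = sum(p == c for p, c in zip(tool_predictions, clinician_labels))
accuracy = matches / len(clinician_labels)

print(f"Agreement with clinicians: {accuracy:.0%}")  # 75% on this toy data
# Caveat: this metric treats clinician judgement as the ground truth, so a
# tool can score highly while reproducing clinicians' own errors or biases.
```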

Concerns about accuracy also arise in non-clinical settings. For example, some have warned against the use of automated social media content analysis in decision-making in areas such as law enforcement, highlighting that the accuracy levels commonly achieved by natural language processing (NLP) tools are not high enough to support decisions with such serious consequences.

Some ethnic and age groups are under-represented in mental health data sets. Technologies that draw on these biased data sets will not have the same accuracy or predictive validity for under-represented groups, potentially exacerbating inequalities in the experience of mental healthcare. Questions of reliability and data bias, and other issues raised by the use of AI in healthcare, are explored in a previous Nuffield Council briefing note.
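The same agreement-based measure can be broken down by group to check the concern above. In the hypothetical sketch below (group names and data are invented for illustration), a reassuring overall accuracy conceals much lower accuracy for an under-represented group.

```python
# Illustrative sketch: an overall accuracy figure can mask poor performance
# for under-represented groups. All data are hypothetical.
from collections import defaultdict

records = [
    # (group, clinician_label, tool_prediction)
    ("group_A", 1, 1), ("group_A", 0, 0), ("group_A", 1, 1),
    ("group_A", 0, 0), ("group_A", 1, 1), ("group_A", 0, 0),
    ("group_B", 1, 0), ("group_B", 0, 1), ("group_B", 1, 1), ("group_B", 0, 0),
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, label, prediction in records:
    totals[group] += 1
    correct[group] += prediction == label

overall = sum(correct.values()) / len(records)
print(f"Overall accuracy: {overall:.0%}")  # 80% overall...
for group in sorted(totals):
    print(f"{group} accuracy: {correct[group] / totals[group]:.0%}")  # ...but 100% vs 50%
```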

Access to care

There are inequalities in access to mental health services among some population groups, including children and young people, people from ethnic minority backgrounds, homeless people, older adults, refugees, and people living in poverty. Technology might increase access to care and could help reach some under-served groups. For example, virtual support may encourage people who are deterred by the perceived stigma associated with mental illness to seek help. Others might feel less judged or embarrassed disclosing symptoms to virtual agents.

As highlighted during the COVID-19 pandemic, increased reliance on technology can exacerbate inequalities by excluding individuals and communities who have difficulty using or accessing technology, or for whom home is not a private or safe place, such as victims of domestic violence. Factors that influence access to and use of technology include health and digital literacy, socio-economic status, age, ethnicity, and level of education. Significantly, many people affected by mental health problems do not have access to technology or are reluctant to use it, and therefore start from a position of digital exclusion.

If mental healthcare systems come to rely increasingly on technology, important questions arise about which technological interventions are prioritised by developers and why. Mental health technologies tend to focus on the most common mental health conditions, such as mild anxiety and depression. This might leave more technological interventions available for common conditions than for rarer ones.

Individual responsibility for health

The availability and use of mental health technologies might increase individual responsibility for mental wellbeing. Technology could empower people to take responsibility for their own health, for example by increasing access to information about mental health and encouraging self-reflection and self-care. However, to be empowered in this way, people need access to technology and a certain level of health and digital literacy (see above).

It is also important to reflect on the possible impact that increased medicalisation of everyday life could have on individuals. People might become excessively preoccupied with changes in their mood and behaviour and experience anxiety as a result, or worry unnecessarily that such changes could be interpreted as symptoms of ill health.

Data privacy and security

While some believe all types of health data to be equally sensitive, others argue that mental health information is particularly sensitive. When personal information is collected and used in the context of mental healthcare, it is important to ensure that transparent data and privacy policies are in place. If mental health technologies are increasingly used in both healthcare and non-healthcare settings, there are concerns that information about a person's mental health could be used in ways that result in discrimination, justify unnecessary coercive interventions, or serve commercial purposes the person did not intend.

Some have warned of an increased risk of cyberattacks targeting mental health service providers. Data breaches involving sensitive mental health information could have devastating consequences for users and providers, as shown by recent high-profile cyberattacks on service providers. There are calls for higher security standards to protect users and support victims of attacks, and for more research on the implications of mental health data breaches.

The importance of choice

Emerging technologies are by their nature full of promise and are often accompanied by optimism bias. However, it is important to recognise that technology will not always be a good, or better, solution for everybody, and different people will have different needs.

There are concerns that an excessive focus on technological solutions could divert resources from other important interventions, such as increasing social interaction or tackling the social determinants of poor mental health. This could diminish the quality of support, as mental wellbeing depends on a number of intertwined factors, including social connectedness, housing, employment, and education. If technological forms of mental health support become widespread, service users may fear being left without a choice, especially those whose experience of care has been characterised by a lack of autonomy and choice. In clinical settings, these technologies should be used as an addition to what is already available rather than a replacement, and alternatives to technological interventions, including hybrid forms of support, should always be available.

Trust and acceptability

Some mental health technologies involve a high level of surveillance. This may be perceived as excessively intrusive and could undermine trust in mental healthcare and in the organisations that deploy digital technologies, with implications for their acceptance and uptake. If used inappropriately, remote monitoring could increase symptoms of mental distress and anxiety in people with mental health problems, damage the relationship between service users and clinicians, and breach the basic human right to privacy.

There are questions as to whether people would always be able to give informed consent to mental health monitoring and support tools, particularly with direct-to-consumer technologies. Novel technologies may be unfamiliar to many. Users may, for example, consent to specific forms of surveillance and data collection without fully understanding all the implications of their use.

To improve public trust and acceptability, service users, their families, and care professionals need to be involved in the research and development of mental health technologies, and in the development of future regulation and research priorities.


