ChatGPT and other language models could detect signs of depression

A study published in JAMA Network Open suggests that large language models like ChatGPT have the potential to detect mental health risks in patients undergoing psychiatric treatment.
Research conducted by a Korean team indicates that although these models "demonstrate potential" for detecting such risks, "it is essential to further improve performance and safety before clinical application."
The team examined the potential of large language models: artificial intelligence systems trained on vast amounts of data that can understand and generate natural language.
The team also analyzed embeddings, a natural language processing technique that converts human language into mathematical vectors.
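The article itself gives no technical detail, but the idea behind embeddings is easy to demonstrate. The sketch below is illustrative only: it uses the open-source sentence-transformers library rather than any model from the study, and the example sentences are invented. The point it shows is that sentences with similar meaning map to nearby vectors, which is what lets downstream models work with free-text narratives.

```python
# Illustrative sketch only: an embedding model turns sentences into vectors,
# and vector similarity reflects similarity in meaning. The model choice and
# example sentences are our own assumptions, not those used in the study.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

sentences = [
    "Lately I feel hopeless about the future.",   # invented example text
    "I can't see anything good ahead of me.",     # close in meaning to the first
    "I had a great time hiking last weekend.",    # unrelated in meaning
]
vectors = model.encode(sentences)  # one fixed-length vector per sentence

def cosine(a, b):
    """Cosine similarity: near 1.0 means very similar, near 0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[0], vectors[1]))  # relatively high: similar sentiment
print(cosine(vectors[0], vectors[2]))  # lower: different topic
```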
The study was based on data from 1,064 psychiatric patients aged 18 to 39 who completed various self-assessment and sentence-completion tests.
The latter present the patient with a series of unfinished sentences to be completed with the first thing that comes to mind, yielding subjective information, for example, about the patient's self-concept or interpersonal relationships.
The data were processed by large language models such as OpenAI's GPT-4 and Google DeepMind's Gemini 1.0 Pro, and by text embedding models such as OpenAI's text-embedding-3-large.
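The paper's actual pipeline is not reproduced in the article. Purely as an illustration, a screening setup along these lines would embed each patient's written responses with text-embedding-3-large (the embedding model named above) and train a classifier on the resulting vectors. In the sketch below, the example responses, the labels, and the logistic-regression classifier are all assumptions for demonstration, not the authors' method.

```python
# Hedged sketch of an embedding-plus-classifier screening pipeline.
# text-embedding-3-large is the model named in the article; everything else
# (example texts, labels, the logistic-regression classifier) is assumed.
import numpy as np
from openai import OpenAI
from sklearn.linear_model import LogisticRegression

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def embed(texts):
    """Turn free-text responses into fixed-length vectors."""
    resp = client.embeddings.create(model="text-embedding-3-large", input=texts)
    return np.array([item.embedding for item in resp.data])

# Invented sentence-completion responses with invented risk labels
# (1 = flagged for follow-up, 0 = not flagged).
responses = [
    "When I think about myself... I feel like a burden to everyone.",
    "When I think about myself... I am proud of what I have achieved.",
    "My future... looks empty no matter what I do.",
    "My future... is something I am working toward every day.",
]
labels = [1, 0, 1, 0]

X = embed(responses)                       # shape: (4, 3072)
clf = LogisticRegression().fit(X, labels)  # toy training set, for shape only
print(clf.predict_proba(X)[:, 1])          # per-response risk probability
```

In practice a model like this would be trained on many labeled cases and validated against clinical assessments, which is exactly the performance-and-safety work the study says is still needed.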
The research notes that these models "have demonstrated their potential in assessing mental health risks," including depression and suicide, "using narrative data from psychiatric patients."
Commenting on the study, in which he did not participate, Alberto Ortiz of La Paz University Hospital in Madrid noted that it was conducted on patients already undergoing psychiatric treatment, so generalizing its results to risk detection in the general population "is not possible, for the moment."
Ortiz told the Science Media Centre, a scientific resource platform, that the application of AI in mental health will, in any case, have to focus on people's subjective narratives, as was done in this research.
However, he considered that "it's one thing to detect risks and conduct screening, and quite another to treat people with psychological distress, a task that goes beyond applying a technological solution and in which the subjectivity of the professional is essential to developing the therapeutic bond."