Brief Title: Machine Learning to Analyze Facial Imaging, Voice and Spoken Language for the Capture and Classification of Cancer/Tumor Pain
Official Title: A Feasibility Study Investigating the Use of Machine Learning to Analyze Facial Imaging, Voice and Spoken Language for the Capture and Classification of Cancer/Tumor Pain
Study ID: NCT04442425
Brief Summary:
Background: Cancer pain can have a very negative effect on people's daily lives. Researchers want to use machine learning to detect facial expressions and voice signals. They want to help people with cancer by creating a model to measure pain. They want the model to reflect diverse faces and facial expressions.
Objective: To find out whether facial recognition technology can be used to classify pain in a diverse set of people with cancer, and whether voice recognition technology can be used to assess pain.
Eligibility: People ages 12 and older who are undergoing treatment for cancer.
Design: Participants will be screened with: cancer history; information about their gender and skin type; information about their access to a smartphone and wireless internet; and questions about their cancer pain. Participants will have check-ins at the clinic and at home over about 3 months: 2-4 check-ins at the clinic, and about 3 check-ins per week at home. During check-ins, participants will answer questions and talk about their cancer pain. They will use a mobile phone or a computer with a camera and microphone to complete a questionnaire and to record a video of themselves reading a 15-second passage of text and responding to a question. During clinic check-ins, professional lighting, video equipment, and cameras will be used for the recordings. During remote check-ins, participants will complete the questionnaire and recordings alone, in a quiet, bright room with a white wall or background.
Detailed Description:
Background:
* Pain related to cancer/tumors can be widespread, wield debilitating effects on daily life, and interfere with otherwise positive outcomes from targeted treatment.
* The underpinnings of this study are chiefly motivated by the need to develop and validate objective methods for measuring pain using a model that is relevant in breadth and depth to a diversity of patient populations.
* Inadequate assessment and management of cancer/tumor pain can lead to functional and psychological deterioration and negatively impact quality of life.
* Research on objective measurement scales of pain based on automated detection of facial expression using machine learning is expanding but has been limited to certain demographic cohorts.
* Machine learning models perform poorly when training sets lack adequate diversity, including visibly different faces and facial expressions. This yields an opportunity in the proposed study to lay a guiding foundation by constructing a more general and generalizable model based on faces of varying sex and skin phototypes.
Objectives:
* The primary objective of this study is to determine the feasibility of using facial recognition technology to classify cancer/tumor-related pain in a demographically diverse set of participants with cancer/tumors who are receiving standard of care or investigational treatment for their cancer/tumor.
Eligibility:
* Adults and children (12 years of age or older) with a diagnosis of a cancer or tumor who are receiving standard of care or investigational treatment for their underlying cancer/tumor.
* Participants must have access to an internet-connected smartphone or computer with a camera and microphone and must be willing to pay any charges from the service provider/carrier associated with use of the device.
Design:
* The design is a single-institution, observational, non-interventional clinical study at the National Institutes of Health Clinical Center.
* All participants will perform the same activities in two different settings (remotely and in-clinic) over a three-month period.
* At home, participants will use a mobile application for self-reporting of pain and will audio-visually record themselves reading a passage of text and describing how they feel. In the clinic, participants will perform the same activities with optimal lighting and videography, along with infrared video capture.
* Visual (RGB) and infrared facial images, audio signal, self-reported pain, and natural language verbalizations of participant feelings will be captured. Audio and video data will be annotated with self-reported pain and clinical data to create a supervised machine learning model that learns to automatically detect pain (illustrative sketches of this pipeline follow below).
* Care will be taken to include a diversity of genders and skin types (a proxy for racial diversity) in the study sample, to establish broad applicability of the model in the clinical setting. Additionally, video recordings of participants' natural language describing their pain and how they feel will be transcribed and auto-processed against the Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE) library to explore the presence and progression of self-reported adverse events (see the final sketch below).
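The protocol above does not specify an implementation, but a pipeline of this kind implies a video preprocessing step. The sketch below is a minimal, hypothetical illustration: it samples frames from one recorded check-in video and crops detected face regions using OpenCV's bundled Haar cascade. The function name, sampling rate, and detector choice are all assumptions, not the study's actual method.

import cv2

def extract_face_crops(video_path: str, every_n_frames: int = 30):
    """Yield grayscale face crops sampled from a recorded check-in video."""
    # Haar cascade face detector shipped with opencv-python; used here
    # purely for illustration.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    capture = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_idx % every_n_frames == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # detectMultiScale(image, scaleFactor, minNeighbors)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
                yield gray[y:y + h, x:x + w]
        frame_idx += 1
    capture.release()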
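To make the supervised learning step concrete, here is a toy sketch of training a classifier on features annotated with self-reported pain, as the Design describes. The synthetic data, feature set (e.g., facial action-unit intensities and audio prosody statistics), the 0-10 pain-scale binning, and the random-forest model are all illustrative assumptions rather than the study's pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per recorded check-in. In practice,
# columns would hold features extracted upstream from face crops and audio.
n_checkins, n_features = 500, 24
X = rng.normal(size=(n_checkins, n_features))

# Self-reported pain (0-10) binned into ordinal classes:
# 0 = none (0), 1 = mild (1-3), 2 = moderate (4-6), 3 = severe (7-10).
pain_scores = rng.integers(0, 11, size=n_checkins)
y = np.digitize(pain_scores, bins=[1, 4, 7])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

On real data, evaluation would also need to account for repeated check-ins per participant (e.g., grouping splits by participant) so the model is tested on unseen faces rather than unseen frames.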
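Finally, a hedged sketch of the transcript screening step: participant speech is transcribed and scanned for PRO-CTCAE symptom terms. The term list below is a tiny invented subset for illustration only; the real PRO-CTCAE item library is far larger, and matching in practice would need stemming, negation handling, and synonym expansion.

import re
from collections import Counter

# Hypothetical stem -> PRO-CTCAE item mapping (illustrative subset).
PROCTCAE_TERMS = {
    "nausea": "Nausea",
    "vomit": "Vomiting",
    "fatigue": "Fatigue",
    "numb": "Numbness & tingling",
    "pain": "General pain",
}

def screen_transcript(text: str) -> Counter:
    """Count PRO-CTCAE-mapped symptom mentions in one transcript."""
    hits = Counter()
    for word in re.findall(r"[a-z]+", text.lower()):
        for stem, item in PROCTCAE_TERMS.items():
            if word.startswith(stem):
                hits[item] += 1
    return hits

transcript = "I've had a lot of fatigue this week and some numbness in my hands."
print(screen_transcript(transcript))
# Counter({'Fatigue': 1, 'Numbness & tingling': 1})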
Minimum Age: 12 Years
Eligible Ages: CHILD, ADULT, OLDER_ADULT
Sex: ALL
Healthy Volunteers: No
National Institutes of Health Clinical Center, Bethesda, Maryland, United States
Name: James L Gulley, M.D.
Affiliation: National Cancer Institute (NCI)
Role: PRINCIPAL_INVESTIGATOR