
Research: It Works, But Does It Function?

By Taylorlyn Mehnert, MusicWorx Intern

When I started working in a children’s hospital for the first time, I had several conversations with my supervisor about hurdles to providing quality care in this environment. The hurdle that stuck out to me the most was that there is no standardized tool that allows children to self-report emotions. We’re all familiar with the Wong-Baker FACES scale for pain, and I just assumed something similar existed for reporting emotions in the hospital setting. With a little less than three months left in my internship, I decided this would be an excellent topic for my internship “special project”: cue me biting off much more than I could chew.

I started out by coming up with a list of questions that I thought would guide my understanding of self-reporting emotions in a children’s hospital:

  • Do children and adults have correlated perceptions of emotions? 
  • How was the Wong-Baker scale developed and validated?
  • Do people more accurately identify emotions using photographs or graphics?
  • Is there a related “gold standard” scale that could be adapted to fit our needs?

Anyone who has conducted any sort of research before will, I’m sure, recognize the black hole of studies I found myself in while trying to answer these questions. I quickly realized that I had to answer about 50 sub-questions just to reach my first bullet point. What are emotions? Where are they processed in the brain? Are emotions a singular thing or a combination of other components? Do emotions cause physiological, neurological, and behavioral changes, or do those changes cause emotions? IS “EMOTIONS” EVEN THE WORD I SHOULD BE USING??

The answer to that last question is that I’m still not exactly sure whether I’m technically talking about emotions, feelings, or mood, but all of the related research I could find used “emotions” so that’s what I’m using for consistency’s sake.

I got so far away from my focus that I had to sit back and acknowledge that there is a reason we don’t have a standardized tool yet – we’re not exactly sure what we’re talking about. The research base to definitively determine the best way to self-report emotions just isn’t there yet. But it was too late for me to change my special project, so I had to forge ahead. Now I was in the least comfortable position for a detail-oriented perfectionist trying to do research: surrounded by many, many variables. I knew this was the nature of field research; I just didn’t realize how difficult it was going to be.

I attempted to narrow my study down to a single question: is a client’s self-report more closely correlated with a clinician’s perception of their vocalizations, body language, and affect when the client self-reports using photographs or using graphics? Below is the scale I used to test my hypothesis: patients in a children’s hospital would be more likely to choose the colorful graphic scale, and those reports would be more closely correlated with the practitioner’s perception of their emotions.

Three children’s hospitals in California were kind enough to run this study with me. We invited music therapists, child life specialists, and speech therapists to participate. I wasn’t able to do any sort of training on a protocol, so I sent out the written protocol followed by the digital scale. 

Protocol: 

  1. Practitioner shows client the paper and asks, “Which one would you like to use to tell me how you feel: this one (point to top scale) or this one (point to bottom scale)?” Practitioner repeats the prompt as many times as necessary until the client selects a scale.
  2. Once client has selected a scale, practitioner folds the paper in half so that only the selected scale is visible. 
  3. Practitioner asks client, “Which one looks the most like the way you feel right now?” If appropriate, the client marks which picture best represents their current emotional state; if not, the practitioner marks the selected picture post-session.
  4. Practitioner circles the age range that the client falls in.
  5. Practitioner checks the inhibitor box if a client’s diagnosis, medication, or another factor may significantly alter the client’s ability to self-report: for example, a client may have a significant developmental delay or may be mildly sedated.
  6. Practitioner checks congruent or incongruent for affect, body language, and/or vocalizations to note whether these observations match the client’s selected emotional state. For example, if a client is crying but selects “happy,” the practitioner would check the incongruent box under vocalizations.

Some of the variables that limited my study were easier to accept: multiple practitioners, a small sample size, trauma in the hospital, grey areas in the research, and many others.

Other limitations caused me to struggle for a while. The photographs are from a validated set, the National Institute of Mental Health Child Emotional Faces Picture Set (NIMH-ChEFS). Even the acronym is a mouthful, I know. The set consists of photographs of 59 different child actors; I chose one actor because she had a photograph in each category, her photographs were highly correlated with the intended emotion in validation studies, and her expressions matched up well with the graphics.

Here, I was forced to consider the variables that come with her appearance: she is a girl of color. I would venture to say that things like sexism and racism play a role in all of our research studies whether we realize it or not (check out my blog post about implicit bias!), but they could have a more direct role in my study. I wrestled with this for quite some time, and if I’m being honest, I still don’t have a good solution.

I don’t have access to the software I need to really analyze my results yet; however, the purpose of this study was less about creating pristine data and more about getting through the process of a research study from start to finish. I learned a lot of little lessons along the way, but the biggest takeaway was this: research in the field is messy, but also necessary. I believe more useful research could be done in music therapy by simply asking professionals what they need and attempting to provide it.

Yes, we need a firm knowledge base, and yes, we need someone with more funding and advisors than I have to perform these studies, but what happens in the meantime? For some patients, non-verbal self-reporting is crucial. If the research base isn’t ready for years, what happens to the clients who need it now? Practitioners fill in the gaps as best they can, but I think we can do better. It was a hard lesson for me to learn, but I believe that practical research with many limitations is better than only pristine theoretical research. Our clients need us right now, perfect or not.
