Milestone 2 Final Project
Course: Intro to Statistical Analysis
Institution: Southern New Hampshire University

Children’s recognition of emotion in music and speech

4-2 Final Project Milestone Two: Introduction, Group Discussion, Analysis, and Peer Review

Introduction

This article reviews a research study on the associations between music and speech sounds, whose acoustic tones can be recognized by the brain and translated into emotions. The researchers presented a group of children and adults with music and speech clips designed to convey happiness, sadness, anger, fear, and pride. Each participant heard 10 musical excerpts, 10 inflected speech clips, and 10 affect burst clips, and was asked to decipher the emotional content of each clip. The intent of the research was to ascertain whether recognition of emotion in music and speech is related through shared rhythm and sound patterns.

The participants consisted of 60 children aged 6 to 11 years and 51 university students. The children were divided into three age groups: a younger group of females aged 6 to 7 years, a middle group of females aged 8 to 9 years, and an older group of females aged 10 to 11 years. The adult participants were females between the ages of 17 and 47. The children were volunteers recruited from local museums and from families who had volunteered for a developmental research program at the university. The adult participants were undergraduates in a psychology course, participating for course credit. Half of the participants had some form of musical training. A second, pilot group of adult undergraduate psychology students, aged 18 to 52, evaluated the emotional content of the musical clips but did not take part in the main experiment; these students were recruited by word of mouth and also participated for course credit (Vidas, Dingle, & Nelson, 2018).

The authors (Vidas, Dingle, & Nelson, 2018) hypothesized that children's recognition of emotion in music and speech develops in parallel. This hypothesis builds on related research suggesting that the acoustic cues used to convey emotion in speech and music are related (Juslin & Laukka, 2003) and that emotional inferences from vocalizations and music recruit overlapping networks in the brain (Escoffier et al., 2013). The authors also asked whether musical training affected children's recognition of emotion in music and speech.

These hypotheses are significant for the humanities, as the results can benefit people who suffer from post-traumatic stress. The findings could inform therapeutic programs that use music and speech to help patients and physicians recognize the patient's feelings, leading to a better understanding of the patient's emotional state and better tools for coping with stress.

The method of analysis used in this article is ANOVA (Analysis of Variance). An ANOVA test determines whether the differences among group means in an experiment are statistically significant; in other words, it tells you whether to reject the null hypothesis of equal means in favor of the alternative hypothesis. In general, a result is considered statistically significant if its p-value is less than the stated alpha of 0.05. The authors' research hypothesis was that recognition of emotion in music and speech develops in parallel, with adult levels of recognition developing later. The authors performed several tests analyzing emotional responses to music stimuli, speech stimuli, and affect burst stimuli across age groups, as summarized in Tables 1, 2, and 3. In the analysis of age groups, adults were more accurate than all children, with a p-value less than 0.001. In the analysis of stimulus type, there was no significant difference between scores on music and speech. Both of these results support the authors' hypothesis.
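The one-way ANOVA logic described above can be sketched in Python. The groups, means, and scores below are synthetic, illustrative numbers, not the study's data; only the decision rule (compare the p-value to alpha = 0.05) follows the article.

```python
import numpy as np
from scipy import stats

# Hypothetical accuracy scores (proportion of clips judged correctly)
# for three groups; these values are illustrative, not the study's data.
rng = np.random.default_rng(0)
adults = rng.normal(0.80, 0.05, 30)
older_children = rng.normal(0.75, 0.05, 30)
younger_children = rng.normal(0.60, 0.05, 30)

# One-way ANOVA: do the group means differ?
f_stat, p_value = stats.f_oneway(adults, older_children, younger_children)

alpha = 0.05
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")
if p_value < alpha:
    print("Reject the null hypothesis: group means differ significantly.")
else:
    print("Fail to reject the null hypothesis.")
```

With group means this far apart relative to their spread, the test rejects the null hypothesis, mirroring the article's finding that adults were more accurate than children.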

In the age-by-type interaction, summarized in Table 4, it was determined that for music stimuli the 6-7-year-olds and 8-9-year-olds scored similarly, while the 10-11-year-olds and adults scored higher than the younger groups. For speech stimuli, all groups scored similarly, though adults scored higher than the two youngest groups. For affect bursts, all age groups scored similarly. The interaction also revealed that, for younger children, scores on music and speech were similar. Overall, the results show that the child participants' recognition of emotion in music and speech developed comparably: children aged 6-9 scored similarly, and by 10 years old children were as likely as adults to select the target emotion for music and speech, while all age groups still scored similarly for affect bursts (Vidas, Dingle, & Nelson, 2018). Children's recognition did, however, depend on the type of stimulus, with music and speech stimuli being less well recognized overall than affect bursts (Vidas, Dingle, & Nelson, 2018).

References

Vidas, D., Dingle, G. A., & Nelson, N. L. (2018). Children's recognition of emotion in music and speech. Music & Science. https://doi.org/10.1177/2059204318762650
