
https://doi.org/10.20965/jaciii.2020.p0872

Paper:

Digital Empirical Research of Influencing Factors of Musical Emotion Classification Based on Pleasure-Arousal Musical Emotion Fuzzy Model

Jing-Xian He∗1,∗2, Li Zhou∗1,†, Zhen-Tao Liu∗3,∗4, and Xin-Yue Hu∗1
∗1 School of Arts and Communication, China University of Geosciences
388 Lumo Road, Hongshan District, Wuhan, Hubei 430074, China
E-mail: zhouli@cug.edu.cn
∗2 School of Danyang Normal, Zhenjiang College
No.518 Changxiang West Avenue, College Park, Zhenjiang City, Jiangsu 212000, China
∗3 School of Automation, China University of Geosciences
388 Lumo Road, Hongshan District, Wuhan, Hubei 430074, China
∗4 Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Wuhan, Hubei 430074, China
† Corresponding author

[Received October 20, 2020; accepted October 27, 2020]

In recent years, with further breakthroughs in artificial intelligence theory and technology, as well as the continued expansion of the Internet, the recognition of human emotions and the necessity of satisfying human psychological needs, in addition to physical task accomplishment, have been highlighted as development tendencies of future artificial intelligence technology. Musical emotion classification is an important research topic in artificial intelligence. The key premise of realizing music emotion classification is to construct a musical emotion model that conforms to the characteristics of music emotion recognition. Currently, three types of music emotion classification models are available: discrete category, continuous dimensional, and music emotion-specific models. The pleasure-arousal music emotion fuzzy model, which includes a wide range of emotions compared with other models, is selected as the emotional classification system in this study to investigate the influencing factors of musical emotion classification. Two representative emotional attributes, i.e., speed and strength, are used as variables. Based on test experiments involving music and non-music majors combined with questionnaire results, the relationship between music properties and emotional changes under the pleasure-arousal model is revealed quantitatively.

Keywords: music emotion, classification, affective computing, fuzzy model, pleasure-arousal emotion space

1. Introduction

Marvin Minsky, one of the founders of artificial intelligence, mentioned the following in The Society of Mind as early as 1985: "The question is not whether intelligent machines can have any emotions, but whether machines can be intelligent without any emotions." In 1997, Picard of the Massachusetts Institute of Technology proposed affective computing [1], by which emotions can not only be measured and analyzed but also deliberately influenced. The introduction of affective computing opened a new field of computer science and resulted in a breakthrough in research regarding accessible human-robot interaction technology.

Musical emotion classification is an important component of emotional classification. Music is an essential form of the artistic expression of human emotions. Through the recognition and classification of musical emotions, the construction of emotional models, and reasoning optimization, the digital expression patterns of human musical emotions can be obtained, thereby improving the smoothness of human-robot interaction.

1.1. Musical Emotion Model

An important prerequisite for realizing music emotion classification is to construct a musical emotion model that conforms to the characteristics of emotional cognition. Constructing a musical emotion model requires the extraction of different musical cue features from musical and acoustic symbols, such as pitch, timbre, strength, modulation, rhythm, and melody, followed by emotional classification integration. Music emotion models can be categorized into continuous dimensional, discrete category, and music emotion-specific models. Currently, the continuous dimensional and discrete category models are used most frequently.

1.1.1. Continuous Dimensional Model

A continuous dimensional model represents a human emotional state as a point in a two-dimensional (2D) or three-dimensional (3D) continuous space. The most widely used continuous dimensional models are the ring emotion model (VA model) and the Pleasure-Arousal-Dominance (PAD) model. Russell's circular emotion model (VA model) (Fig. 1) comprises two important emotional attributes: arousal and valence [2, 3].

Fig. 1. VA model.

Thayer's 2D emotion model maps energy to the ordinate and pressure to the abscissa, describing the emotional state along two dimensions: energy and tension [4]. The PAD model proposed by Mehrabian et al. comprises three dimensions: P (pleasure), A (activation), and D (dominance), which can describe human emotional tendencies effectively [5, 6]. Compared with the VA model, the PAD model can distinguish some emotional states that the VA model cannot. Furthermore, a continuous dimensional model can describe the nuances of an emotional state and is not limited to describing the subjective experience of emotions; it can also describe external emotional expression and physiological arousal. However, it deviates considerably from people's cognitive emotional semantics.

1.1.2. Discrete Category Model

The earliest, most influential, and most widely recognized musical emotion discrete category model is the Hevner model, proposed by the American scholar Hevner in 1936 [7]. The Hevner model uses 67 emotional adjectives to describe the different attributes of musical expressions, and these adjectives can be categorized into eight synonym clusters, which are used to represent similar emotion types. These eight categories form a ring based on their relationships, known as Hevner's emotional ring. The Hevner model is presented from the perspective of musicology based on the psychological feelings of composers, performers, and listeners. Hevner's study involved primarily Western audience groups, and the research content was mainly Western music. However, for the Hevner model to be applicable to Chinese groups, the emotional modifiers in the model must be adapted accordingly. Therefore, Chinese researchers have implemented certain deletions and additions to the emotional modifiers in the Hevner model [8].

The Geneva Emotional Music Scale (GEMS), based on music-specific emotion models and compiled by M. Zentner et al., is currently the most systematic musical emotion measurement tool. The GEMS contains 45 emotional words, which are categorized into nine major categories: wonder, transcendence, tenderness, nostalgia, peacefulness, power, joyful activation, tension, and sadness [9].

1.1.3. Pleasure-Arousal (PA) Music Emotion Fuzzy Model

In 2018, a PA music emotion fuzzy model (Fig. 2) was proposed [10], which belongs to the music emotion-specific models. This 2D PA model introduces a fuzzy classification method for musical emotion modeling, considering the ambiguity and subjectivity of musical emotions, and it describes the musical emotional space along two dimensions: pleasure and arousal. The entire emotional space is categorized into nine separate subspaces: angry, vigorous, happy, nervous, neutral, delicate, sad, bored, and serene. Transfer between adjacent subspaces among these nine subspaces is favorable.

Fig. 2. Fuzzy PA model [10].

Based on the proposed method for the PA model, the model classifies the basic emotional attributes of 86 music expression terms. The specific statistical classification is shown in Fig. 3 [10]. Based on this classification and statistics, as well as the representation method of the Plutchik emotional wheel, the music expression terminology can be categorized by strength, similarity, and polarity; hence, a 3D cone-shaped "emotional wheel" is formed. As shown in Fig. 4 [10], the nine sectors represent nine basic emotions. The top nine basic emotions are derived from psycho-emotional adjectives, whereas the adjectives belonging to the nine sectors are derived from musical expression terms.
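For illustration, the nine-subspace partition can be approximated by a simple lookup. The following minimal Python sketch is not the published model: it assumes the 1-5 rating scales used in the experiments of Section 2, replaces the model's fuzzy boundaries with crisp thresholds, and assumes the nine labels tile the PA plane as a 3x3 grid in the order listed above.

# Minimal sketch: map a (pleasure, arousal) rating pair to one of the
# nine PA subspaces. Assumes 1-5 Likert scales and crisp thresholds;
# the published model uses fuzzy boundaries instead.
# Grid rows are ordered high -> low arousal, columns low -> high pleasure.
PA_GRID = [
    ["angry",   "vigorous", "happy"],     # high arousal
    ["nervous", "neutral",  "delicate"],  # medium arousal
    ["sad",     "bored",    "serene"],    # low arousal
]

def level(score: float) -> int:
    """Map a 1-5 rating to a low/medium/high index (0, 1, 2); cutoffs assumed."""
    if score < 2.5:
        return 0
    if score <= 3.5:
        return 1
    return 2

def classify(pleasure: float, arousal: float) -> str:
    row = 2 - level(arousal)   # high arousal -> top row
    col = level(pleasure)      # high pleasure -> right column
    return PA_GRID[row][col]

print(classify(4.6, 4.2))  # -> "happy"
print(classify(1.8, 1.5))  # -> "sad"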


Fig. 3. Circumplex model of expression marks (1) [10].

Fig. 4. Circumplex model of expression marks (2) [10].

1.2. Factors Affecting Music Emotions

Factors affecting music emotions have always been a popular topic in musicology and psychology. Although the factors are complex and multi-faceted, all musical emotions arise from complex interactions among listeners, music, and scenes [11]; a series of influencing factors is embedded in the audience, the music, and the context [12].

1.2.1. Music Factors

Pitch, timbre, rhythm, melody, tone, and harmony are cues in music, and different elements of music express different emotions [13, 14]. Huang [15] used different classical music to test the effects of music components on college students' emotions. The results show that music speed significantly affected the students' emotions: slow music easily induced negative emotions, whereas fast music easily induced positive emotions. Zhang et al. [14] conducted experiments using a multi-channel physiological recorder to analyze the effects of tone and speed on emotion and demonstrated that music of different speeds and tones induced different emotional responses, and that different styles of Chinese and Western music induced different emotional reactions. Hevner's results indicated that the major mode induced happiness in individuals, whereas the minor mode induced sadness; simple harmonies induced calmness, whereas complex harmonies induced excitement or sadness; steady rhythms caused people to feel noble, whereas flowing rhythms induced happiness [7]. Fan [16] experimentally demonstrated that Guqin music can induce calm and peaceful emotions, in addition to depressed, confused, and sad emotions; beyond common cues such as volume, register, tone, and speed, the Guqin's unique musical elements, such as its rhythm and deep cultural connotations, are believed to be the main cues affecting emotion. Additionally, musical style is a music-related emotional factor; for example, classical music can induce individual musical emotions [11]. Altenmüller et al. [17] performed electroencephalogram (EEG) experiments on subjects using jazz, pop, classical, and ambient sounds. The results showed that pop music induced the strongest positive emotions, whereas jazz induced the most negative emotions.

1.2.2. Environmental Factors

Music is ubiquitous in life. Whether by the listener's active or passive choice, music can appear in any scene. Many scholars have investigated the effect of music on emotions in different environments. The reverberation effect differs across environments: reverberation not only affects the clarity and spatial sense of music, but also affects the emotional characteristics of musical instruments to some extent [18]. Västfjäll et al. [19] discovered that long reverberation times were the most unpleasant. Zhang's study of store background music showed that background music significantly affected consumers' shopping emotions and contributed significantly to consumer behavior [20].

1.2.3. Individual Factors

In addition to musical and environmental factors, differences between individual subjects are important and inevitable in the study of musical emotions. Music-induced emotions can be regarded as a personal response that depends on the individual characteristics of the listener, such as learning ability, musical ability, and imagination [21, 22].


Relevant research has revealed preferences for music based on gender, age, and music education background. Additionally, individual differences among subjects, such as music culture experience, affected their musical emotions [23-25].

1.3. Comparative Analysis of Musical Emotion Classification Models and Musical Influencing Factors

Based on studies regarding the three categories of music emotion classification models, the main problem in existing music emotion classification models is that the music emotional adjectives (emotional categories) lack unified norms and generally adopt discrete classification methods, which is inconsistent with the cognition of musical emotions: as a composite feature, a piece of music may contain multiple emotion categories. Through analysis and comparison, it was discovered that the PA 2D model can include all emotions in music.

Through the study of the three factors that affect music emotion, i.e., music, environmental, and personal factors, it was discovered that the music factors were the most influential, followed by the individual and environmental factors.

In this study, the PA music emotion fuzzy model, which covers the widest range of emotions, was selected as the emotion classification system. Furthermore, the environmental and individual audience factors that affect music emotions were fixed to a unified standard. Two music attributes with the greatest effect on emotional change, i.e., speed and strength, were used as variables. Through test experiments involving music and non-music subjects, the questionnaire results were statistically analyzed, and the quantitative influence of different musical attributes on emotional change under the PA model was revealed.

The remainder of this paper is organized as follows. In Section 2, the experiments are introduced in detail, including the experimental methods, the selection of experimental samples, and the experimental process. In Section 3, the detailed analysis and statistics of the experimental data are presented. In Section 4, the analysis results are discussed.

2. Experiment

To verify the effects of different music attributes on emotional changes under the classification system of the PA music emotion fuzzy model, music auditory emotion classification test experiments were conducted in this study. We selected the speed and strength of music, which exert the greatest effect on music emotion and are the most easily quantified. Through a questionnaire survey, the emotion classification results for different music samples heard by subjects in the same environment were statistically analyzed and compared with the PA model.

2.1. Experimental Objectives

In the experiments, we selected two musical factors (i.e., strength and speed) that can be digitally expressed and that affect music emotion as the experimental variables. The objectives of the experiments were to prove the feasibility of the PA model and to investigate the effects of different speeds and strengths on emotional changes based on the PA model.

2.2. Method

2.2.1. Participants

A total of 157 college students from the China University of Geosciences (Wuhan) were randomly selected as subjects for the experiment. The selected music majors were undergraduate and postgraduate students from the Department of Music of the China University of Geosciences (Wuhan). The non-music majors were students from the China University of Geosciences (Wuhan) who had not received systematic normative music training, such as undergraduate and graduate students of automatic control, accounting, and English. The ages of the subjects ranged from 18 to 23 years, and they did not exhibit any brain disease or hearing impairment. In the music-major group, 54 questionnaires were distributed and 47 were valid; in the non-music-major group, 105 questionnaires were distributed and 97 were valid. The specific subject information is listed in Table 1.

Table 1. Information regarding subjects.

Attribute  Category                   Number of participants  Ratio [%]
Gender     Male                       73                      45.91
           Female                     86                      54.09
Major      Art and media              54                      33.96
           Automation                 22                      13.84
           Economy and management     18                      11.32
           Geology                    17                      10.69
           Machinery and electronics   8                       5.03
           Material chemistry          9                       5.66
           Resource                    9                       5.66
           IT                         14                       8.80
           Information engineering     8                       5.03

2.2.2. Experimental Equipment and Instruments

An Acer laptop was used in the experiment to play the test audio, and the subjects wore Beats Solo3 Wireless headphones to listen to the test music pieces.

2.2.3. Experimental Materials

In this experiment, music fragments were used as materials.


Eight single-melody songs were selected from related music textbooks. From each song, a main melody fragment satisfying the emotional integrity and timing of the music was intercepted; each clip lasted 25-30 s. Based on a previous study [23], the familiarity and fluency of the eight fragments were graded from low to high (1-5); the results indicated that the familiarity was between 2.5 and 3.5, and the fluency exceeded 2.5. The emotional attributes of the fragments were located at the happy and serene (quiet) poles; the corresponding music fragments were "Wild Bee Flying" and "Silver."

In the audio software Cubase 7, each selected music piece was input as a MIDI signal, excluding the interference of music attributes such as tone, harmony, and accompaniment texture. The "bright acoustic piano" sound was selected, and the speed and strength were changed as follows:

• Speed
In this experiment, based on the division of common speeds, the speed terms were classified into slow (up to 60 beats/min), medium (60-108 beats/min), and fast (110 beats/min and above), as shown in Table 2.

Table 2. Tempo class.

Tempo mark  Tempo  Class
Grave        40    slow
Largo        46    slow
Lento        52    slow
Adagio       56    slow
Larghetto    60    slow
Andante      66    mid
Andantino    69    mid
Moderato     88    mid
Allegretto  108    mid
Allegro     132    fast
Vivace      160    fast
Presto      189    fast

• Strength
In the field of music, qualitative rather than quantitative representations are typically used to express the strength of the music, i.e., the strength of the sound, as shown in Table 3. These strength marks are relative strength-change indications provided by the composer to the performer and singer, and they are relatively ambiguous.

Table 3. Common strength marks.

Strength mark  Meaning           Strength
fff            Forte fortissimo  115-127
ff             Fortissimo         90
f              Forte              80
mf             Mezzo-forte        70
mp             Mezzo-piano        64
p              Piano              50
pp             Pianissimo         40
ppp            Piano pianissimo   20

The ability of the human ear to perceive sound strength is much lower than its ability to perceive pitch. In MIDI music, the strength value ranges from 0 to 127, divided into the eight levels shown in Table 3. To facilitate the experiment, the strengths were classified into three categories, as shown in Table 4.

Table 4. Strength class.

Class   Strength marks  Strength
strong  fff, ff, f      80-127
mid     mf, mp          64-70
soft    p, pp, ppp      20-50

Finally, the transformed experimental samples were uniformly intercepted to approximately 30 s so that overly long samples would not fatigue the subjects.
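The stimuli in this study were prepared manually in Cubase 7; as an illustration only, an equivalent batch transformation could be scripted with the mido library. The input filename and the representative tempo and velocity values below are assumptions chosen from Tables 2 and 4.

# Hypothetical re-creation of the stimulus preparation: rewrite one MIDI
# melody at each speed class (Table 2) and strength class (Table 4).
# Representative values and the source filename are assumptions.
import mido

SPEEDS = {"slow": 52, "mid": 88, "fast": 132}       # beats/min
STRENGTHS = {"soft": 40, "mid": 70, "strong": 90}   # MIDI velocity

for speed_name, bpm in SPEEDS.items():
    for strength_name, velocity in STRENGTHS.items():
        mid = mido.MidiFile("melody.mid")  # assumed source clip
        for track in mid.tracks:
            for msg in track:
                if msg.is_meta and msg.type == "set_tempo":
                    msg.tempo = mido.bpm2tempo(bpm)  # set the speed class
                elif msg.type == "note_on" and msg.velocity > 0:
                    msg.velocity = velocity          # set the strength class
        mid.save(f"melody_{speed_name}_{strength_name}.mid")

Applied to the two melody types, this yields the 3 x 3 x 2 = 18 stimuli evaluated in the experiment.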


2.2.4. Experimental Paradigms and Orders

A questionnaire format was used in the experiment, and all experiments included five steps.

Step 1: The subject enters the laboratory, provides his/her basic information for the questionnaire, and completes the self-assessment form regarding emotional state.
Step 2: The main tester introduces the experimental task and reads the instructions.
Step 3: Three minutes of soothing, quiet music is played, and the subjects are guided to stabilize their emotions and reach a relatively calm state.
Step 4: This is the formal experimental stage. Music pieces are played in random order; after each piece, the subject rates his/her subjective emotional state according to the degrees of pleasure and arousal felt while listening. The pieces are scored in groups of three using the controlled-variable method, and separate experiments are performed for the two music emotion types. One minute of pink noise is played between groups of samples, followed by a one-minute break during which the subjects' emotions are soothed and adjusted before the next assessment begins. The average length of each piece of music is 27 s. A total of 18 music evaluations are completed.
Step 5: The end of the experiment. The questionnaire is completed, and a gift is presented to the participant.

2.2.5. Experimental Design

A 3 (speed type: fast, medium, slow) × 3 (strength: strong, medium, soft) × 2 (music type: happy, serene) experimental design was adopted; the 18 resulting conditions are enumerated in the sketch below.
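As a quick check that the design matches the 18 music evaluations completed in Step 4, the conditions can be enumerated directly (level labels taken from the text above):

# Enumerate the 3 (speed) x 3 (strength) x 2 (music type) = 18 conditions.
from itertools import product

speeds = ["fast", "medium", "slow"]
strengths = ["strong", "medium", "soft"]
music_types = ["happy", "serene"]

conditions = list(product(speeds, strengths, music_types))
assert len(conditions) == 18
for condition in conditions:
    print(condition)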

2.2.6. Experimental Measurement Tools Table 6. Arousal multiplex analysis results.


Subjective index collection: The emotional state self-
assessment table comprised 90 symptom lists (Symptom 95% confidence
interval for
Checklist 90, SCL-90), the degree of which was from low Mean Std. Sig.a differencea
to high (1–5 points), and five levels were included. A to- difference error
(I–J) Lower Upper
tal of 23 depression and anxiety factors were selected for bound bound
self-assessment. Pleasure and arousal were rated using the (I)A (J)A
Likert scale, the degree of pleasure was scored from low 2 −0.990∗ 0.042 0.000 −1.073 −0.906
1
to high (1–5 points), and arousal was scored from calm 3 −1.756∗ 0.039 0.000 −1.833 −1.679
to active (1–5 points) for 18 music clips. For the sample, 1 0.990∗ 0.042 0.000 0.906 1.073
2
18 scoring questions were designed for the questionnaire. 3 −0.766∗ 0.045 0.000 −0.856 −0.676
Both questionnaires were scored using self-assessment. 1 1.756∗ 0.039 0.000 1.679 1.833
3
2 0.766∗ 0.045 0.000 0.676 0.856
(I)B (J)B
2.2.7. Experimental Data Set Processing 2 −0.653∗ 0.039 0.000 −0.730 −0.576
1
The questionnaire data were edited and compiled in 3 −0.722∗ 0.042 0.000 −0.805 −0.639
Excel. The data of the questionnaire survey were ana- 1 0.653∗ 0.039 0.000 0.576 0.730
2
lyzed using analysis of variance (ANOVA) and visual data 3 −0.069 0.040 0.966 −0.148 −0.011
statistics. Post-statistical data processing was analyzed 1 0.722∗ 0.042 0.000 0.639 0.805
3
using SPSS 20.0. 2 −0.069 0.040 0.966 −0.011 0.148
(I)C (J)C
1 2 −1.369∗ 0.036 0.000 −1.439 −1.298
2 1 1.369∗ 0.036 0.000 1.298 1.439
3. Statistical Analysis of Experimental Results
Based on estimated marginal means.
∗ : The mean difference is significant at the 0.05 level.
3.1. Coincidence Rate a : Adjustment for multiple comparisons: least significant difference
(equivalent to no adjustments).
According to the states of two dimensions of each sam-
ple, the proportion of the emotional state to which it be-
longed was counted, the statistical results above were
compared with the statistical results of labeling of 47 mu- 3.2.1. Arousal Dimension Score Data Statistics
sic professionals, and the coincidence rate results for each As shown in Table 5, the main effects of speed,
sample were obtained. As shown in Fig. 5, the average strength, and music type were extremely significant in the
coincidence ratio was 80.0%. arousal dimension (P < 0.001).
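One plausible reading of this statistic is majority-vote agreement per sample, sketched below with hypothetical placeholder data; the per-subject labels would come from a PA classifier such as the classify() sketch in Section 1.1.3.

# Sketch of the coincidence-rate statistic: per sample, compare the
# majority emotion label derived from subjects' PA ratings with the
# expert label. All data values here are hypothetical placeholders.
from collections import Counter

def coincidence_rate(subject_labels: dict, expert_labels: dict) -> float:
    """subject_labels: sample_id -> list of per-subject subspace labels;
    expert_labels: sample_id -> label agreed by the music professionals."""
    hits = 0
    for sample_id, labels in subject_labels.items():
        majority = Counter(labels).most_common(1)[0][0]
        hits += (majority == expert_labels[sample_id])
    return hits / len(subject_labels)

subject_labels = {"s01": ["happy", "happy", "vigorous"],
                  "s02": ["serene", "serene", "delicate"]}
expert_labels = {"s01": "happy", "s02": "serene"}
print(coincidence_rate(subject_labels, expert_labels))  # -> 1.0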
3.2. Effects of Different Music Attributes on Music Emotion

Additionally, the PA model was used in this study to investigate the effects of different music attributes on music emotions. The controlled-variable method was used to analyze the emotions induced by the different speeds, strengths, and music types heard by the subjects. Scores in the pleasure and arousal dimensions were analyzed using the SPSS 20.0 statistical software, and a 3×3×2 repeated measures ANOVA was used.
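The analysis was run in SPSS 20.0; the following sketch shows an equivalent open-source formulation with statsmodels, assuming a hypothetical long-format table whose filename and column names are illustrative only.

# Sketch of the 3x3x2 repeated-measures ANOVA in statsmodels, standing in
# for the SPSS 20.0 analysis. Expects a long-format table with one row per
# subject x condition; filename and column names are assumptions.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("ratings_long.csv")  # subject, speed, strength, mtype, arousal, pleasure

for depvar in ("arousal", "pleasure"):
    res = AnovaRM(df, depvar=depvar, subject="subject",
                  within=["speed", "strength", "mtype"],
                  aggregate_func="mean").fit()
    print(depvar, res.anova_table, sep="\n")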


3.2.1. Arousal Dimension Score Statistics

Table 5. Arousal ANOVA results.

Source  SS      df  MS      F         Sig.
A       902.16  2   451.08  1034.95a  0.000
B       184.65  2    92.33   190.40a  0.000
C       817.88  1   817.88  1482.86a  0.000
AB       88.20  4    22.05    47.32a  0.000
AC       82.97  2    41.49    99.78a  0.000
BC        3.11  2     1.56     3.56a  0.032
ABC     150.96  4    37.74   131.36a  0.000
a: Exact statistic. In the table, A represents speed; B, strength; and C, music type.

As shown in Table 5, the main effects of speed, strength, and music type were extremely significant in the arousal dimension (P < 0.001). The interaction between strength and music type was significant (P < 0.05), the interactions between speed and strength and between speed and music type were extremely significant (P < 0.001), and the third-order interaction among speed, strength, and music type was significant (P < 0.001).

Table 6. Arousal multiple comparison results.

Factor  (I)  (J)  Mean difference (I-J)  Std. error  Sig.a  95% CI lower  95% CI upper
A       1    2    −0.990∗                0.042       0.000  −1.073        −0.906
A       1    3    −1.756∗                0.039       0.000  −1.833        −1.679
A       2    1     0.990∗                0.042       0.000   0.906         1.073
A       2    3    −0.766∗                0.045       0.000  −0.856        −0.676
A       3    1     1.756∗                0.039       0.000   1.679         1.833
A       3    2     0.766∗                0.045       0.000   0.676         0.856
B       1    2    −0.653∗                0.039       0.000  −0.730        −0.576
B       1    3    −0.722∗                0.042       0.000  −0.805        −0.639
B       2    1     0.653∗                0.039       0.000   0.576         0.730
B       2    3    −0.069                 0.040       0.966  −0.148         0.011
B       3    1     0.722∗                0.042       0.000   0.639         0.805
B       3    2     0.069                 0.040       0.966  −0.011         0.148
C       1    2    −1.369∗                0.036       0.000  −1.439        −1.298
C       2    1     1.369∗                0.036       0.000   1.298         1.439
Based on estimated marginal means.
∗: The mean difference is significant at the 0.05 level.
a: Adjustment for multiple comparisons: least significant difference (equivalent to no adjustments).

As shown in Table 6, the results of the multiple comparisons based on the arousal scores indicate the following:

i. A significant difference existed between levels 1, 2, and 3 of the speed factor (P < 0.001); level 3 was higher than level 2, and level 2 was higher than level 1 (P < 0.001), i.e., the arousal induced by fast music was significantly higher than that induced by medium-speed music, and medium-speed music was significantly higher than slow music.

ii. A significant difference existed between levels 1 and 2 and between levels 1 and 3 of the strength factor (P < 0.001); no significant difference was observed between levels 2 and 3 (P = 0.09 > 0.05), i.e., the arousal of medium-strength music was significantly higher than that of soft music, and the arousal of strong music was significantly higher than that of soft music, whereas the difference in arousal between medium-strength and strong music was insignificant.

iii. A significant difference existed between levels 1 and 2 of the music type (P < 0.001), i.e., the arousal of music type "happy" was significantly higher than that of music type "serene."

Additionally, because of the significant interaction among the three factors, the simple effect analysis revealed that, at any strength and regardless of the music type, the arousal of fast and medium-speed music was significantly higher than that of slow music. Furthermore, at any speed and regardless of the music type, the arousal of strong and medium-strength music was significantly higher than that of soft music, although the difference in arousal between strong and medium-strength music was insignificant. Regardless of the changes in speed and strength, the arousal of music type "happy" was significantly higher than that of music type "serene." Fig. 6 shows the simple effect test interaction diagram of BC (A: speed, B: strength, C: music type) for pleasure when A = level 3.

Fig. 6. ABC pleasure degree simple effect test interaction diagram.

3.2.2. Pleasure Dimension Score Statistics

Table 7. Pleasure ANOVA results.

Source  SS      df  MS      F        Sig.
A       441.50  2   220.75  431.11a  0.000
B        18.62  2     9.31   22.64a  0.000
C         2.35  1     2.35    3.83a  0.053
AB      122.76  4    30.69   78.53a  0.000
AC      234.10  2   117.05  179.07a  0.000
BC       11.11  2     5.55   12.98a  0.000
ABC      54.16  4    13.54   40.36a  0.000
a: Exact statistic.

As shown in Table 7, in the pleasure dimension, the main effects of speed and strength were extremely significant (P < 0.001), and the main effect of music type was insignificant (P = 0.053 > 0.05). The interaction effects between speed and strength, between strength and music type, and between speed and music type were extremely significant (P < 0.001), and the third-order interaction among speed, strength, and music type was significant (P < 0.001).

Table 8. Pleasure multiple comparison results.

Factor  (I)  (J)  Mean difference (I-J)  Std. error  Sig.a  95% CI lower  95% CI upper
A       1    2    −0.412∗                0.035       0.000  −0.482        −0.342
A       1    3    −1.211∗                0.041       0.000  −1.293        −1.130
A       2    1     0.412∗                0.035       0.000   0.342         0.482
A       2    3    −0.799∗                0.040       0.000  −0.879        −0.719
A       3    1     1.211∗                0.041       0.000   1.130         1.293
A       3    2     0.799∗                0.040       0.000   0.719         0.879
B       1    2    −0.220∗                0.037       0.000  −0.293        −0.147
B       1    3    −0.218∗                0.040       0.000  −0.298        −0.139
B       2    1     0.220∗                0.037       0.000   0.147         0.293
B       2    3    −0.002                 0.041       0.966  −0.082         0.079
B       3    1     0.218∗                0.040       0.000   0.139         0.298
B       3    2     0.002                 0.041       0.966  −0.079         0.082
C       1    2    −0.073                 0.037       0.053  −0.148         0.001
C       2    1     0.073                 0.037       0.053  −0.001         0.148
Based on estimated marginal means.
∗: The mean difference is significant at the 0.05 level.
a: Adjustment for multiple comparisons: least significant difference (equivalent to no adjustments).

As shown in Table 8, the results of the multiple comparisons based on the pleasure scores indicate the following:

i. A significant difference existed between levels 1, 2, and 3 of the speed factor (P < 0.001); level 3 was higher than level 2, and level 2 was higher than level 1 (P < 0.001), i.e., the pleasure induced by fast music was significantly higher than that induced by medium-speed music, and medium-speed music was significantly higher than slow music.

ii. A significant difference existed between levels 1 and 2 and between levels 1 and 3 of the strength factor (P < 0.001); no significant difference was observed between levels 2 and 3 (P = 0.966 > 0.05), i.e., the pleasure of medium-strength music was significantly higher than that of soft music, and the pleasure of strong music was significantly higher than that of soft music, whereas medium-strength and strong music did not differ significantly in terms of pleasure.
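The footnotes to Tables 6 and 8 indicate least-significant-difference (LSD) comparisons, i.e., unadjusted pairwise contrasts on estimated marginal means. A sketch of equivalent contrasts with scipy follows, reusing the hypothetical long-format columns from the ANOVA sketch above.

# Sketch of the LSD-style pairwise contrasts behind Tables 6 and 8:
# within-subject means per factor level, then unadjusted paired t-tests.
# Filename and column names follow the hypothetical table used above.
from itertools import combinations
import pandas as pd
from scipy import stats

df = pd.read_csv("ratings_long.csv")

def pairwise_lsd(df: pd.DataFrame, factor: str, depvar: str) -> None:
    # Average over the other factors within each subject first.
    cell = df.groupby(["subject", factor])[depvar].mean().unstack(factor)
    for a, b in combinations(cell.columns, 2):
        t, p = stats.ttest_rel(cell[a], cell[b])
        print(f"{depvar}: {factor} {a} vs {b}: "
              f"diff={cell[a].mean() - cell[b].mean():+.3f}, p={p:.3f}")

for factor in ("speed", "strength", "mtype"):
    pairwise_lsd(df, factor, "arousal")
    pairwise_lsd(df, factor, "pleasure")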


In addition, owing to the significant interaction among the three factors, the simple effect analysis revealed that, at any strength and regardless of the music type, the pleasure of fast music was significantly higher than that of slow music. At any speed and regardless of the music type, the pleasure of strong and medium-strength music was significantly higher than that of soft music, although the difference in pleasure between strong and medium-strength music was insignificant. At low and medium speeds, regardless of the strength, the pleasure of music type "serene" was significantly higher than that of music type "happy"; when the speed was high, regardless of the strength, the pleasure of music type "happy" was significantly higher than that of music type "serene." Fig. 7 shows the simple effect test interaction diagram of AB (A: speed, B: strength, C: music type) for arousal when C = level 1.

Fig. 7. ABC awakening degree simple effect test interaction diagram.

4. Discussion

Based on the results above, it was discovered that the speed of music significantly affected music-induced emotion, that the strength of music affected music-induced emotion, and that the music type exerted a more significant effect on the arousal dimension. These results are consistent with those of many published studies. For example, Hou and Liu [26] used EEG to investigate the characteristics of speed- and mode-induced emotional activity. The results indicated that slow music was more likely to induce negative emotions, and that the strength and coherence of the δ, θ, β, and γ waves induced by slow music in various brain regions were higher than those induced by fast music. Furthermore, research has shown that fast-paced music and speech are more energetic than slow-paced music, and that fast music is more motivating [27]. Lu's [28] study revealed that, for popular music, the choice of music significantly affects musical emotions, and that the effect of positive music is stronger than that of negative music.

5. Conclusion

Based on the music emotion classification experiments, it was discovered that the statistical results of the experimental data were consistent with the constructed PA music emotion classification fuzzy model, proving the feasibility of the PA model.

Furthermore, the results regarding the effects of different music attributes on music emotions under this model indicated that the speed of music significantly affected music emotion. The strength of the subjective emotional experience induced by fast music was higher than that induced by slow music, and fast music was more likely to induce positive emotions: the faster the speed, the stronger the pleasure and arousal of the music. The strength of music also affected music emotions. The strength of the subjective emotions induced by strong music was higher than that induced by soft music, but the distinction between the emotional experiences induced by strong and medium-strength music was insignificant. In the pleasure dimension, strong music was more likely to induce negative emotions; in the arousal dimension, it was more likely to induce positive emotions: the stronger the music, the more active the response, whereas the softer the music, the calmer the response. The type of music exerted a more pronounced effect on arousal; music type "happy" had a higher effect than music type "serene."

The results of this experiment can provide theoretical guidance for composers and performers when creating music, as well as facilitate the construction of relevant music emotion recognition models.

Finally, although some results were obtained through the experiments, certain deficiencies were discovered. For example, many musical factors affect music emotions; in this study, we selected only two of the most influential factors, i.e., strength and speed, for the experiments. In future studies, we plan to add other music attributes, including rhythm, beat, mode, loudness, and reverberation, to investigate their effects on musical emotions. The results of these studies are expected to benefit the field of intelligent composition.


Acknowledgements
This study was supported in part by the Hubei Province Natural Science Foundation General Project: Based on Emotion Calculation of the Music Robot Intelligent Composition and Performance of Key Technology Research (No.2019CFB581), and by the China Ministry of Education Humanities and Social Sciences Fund General Project: Dulcimer Music Robot Intelligent Knowledge Spectrum and Playing Key Technology Research (No.16YJAZH080).

References:
[1] R. W. Picard, "Affective Computing," MIT Press, 1997.
[2] J. A. Russell, "A circumplex model of affect," J. of Personality and Social Psychology, Vol.39, No.6, pp. 1161-1178, 1980.
[3] J. Posner, J. A. Russell, and B. S. Peterson, "The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology," Development and Psychopathology, Vol.17, No.3, pp. 715-734, 2005.
[4] P. Gilbert, "The Biopsychology of Mood and Arousal. Robert E. Thayer," The Quarterly Review of Biology, Vol.67, No.3, p. 406, 1992.
[5] A. Mehrabian, "Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament," Current Psychology, Vol.14, No.4, pp. 261-292, 1996.
[6] A. Bittermann, K. Kuhnlenz, and M. Buss, "On the Evaluation of Emotion Expressing Robots," Proc. of 2007 IEEE Int. Conf. on Robotics and Automation, pp. 2138-2143, 2007.
[7] K. Hevner, "Experimental studies of the elements of expression in music," American J. of Psychology, Vol.48, No.2, pp. 246-268, 1936.
[8] S. Sun et al., "Study on Linguistic Computing for Music Emotion," J. of Beijing University of Posts and Telecommunications, No.z2, pp. 35-40, 2006.
[9] M. Zentner, D. Grandjean, and K. R. Scherer, "Emotions evoked by the sound of music: Characterization, classification, and measurement," Emotion, Vol.8, No.4, pp. 494-521, 2008.
[10] S. Huang, L. Zhou, Z. Liu, S. Ni, and J. He, "Empirical Research on a Fuzzy Model of Music Emotion Classification Based on Pleasure-Arousal Model," The 37th Chinese Control Conf., pp. 3239-3244, 2018.
[11] A. Gabrielsson, "Emotions in strong experiences with music," P. N. Juslin and J. A. Sloboda (Eds.), "Music and Emotion: Theory and Research," pp. 431-449, Oxford University Press, 2001.
[12] K. R. Scherer and M. R. Zentner, "Emotional effects of music: Production rules," P. N. Juslin and J. A. Sloboda (Eds.), "Music and Emotion: Theory and Research," pp. 361-392, Oxford University Press, 2001.
[13] X. Zhou, "An Empirical Study on the Influence of Melody Forms on Emotion Induction," Southwest University, 2019.
[14] L. Zhang and F. Pan, "The role of speed and mode in inducing emotional response: Evidence from traditional Chinese and Western music," Exploratory Psychology, Vol.37, No.6, pp. 549-554, 2017.
[15] W. Huang, "An Empirical Study of the Influence of Classical Music on College Students' Emotions," Hunan Normal University, 2007.
[16] X. Fan, "An Empirical Study of the Influence of Guqin Music on the Emotion of College Students in Music Colleges," Wuhan Conservatory of Music, 2013.
[17] E. Altenmüller et al., "Hits to the left, flops to the right: Different emotions during listening to music are reflected in cortical lateralisation patterns," Neuropsychologia, Vol.40, No.13, 2002.
[18] R. Mo, R. H. Y. So, and A. B. Horner, "How Does Parametric Reverberation Change the Space of Instrument Emotional Characteristics?," 43rd Int. Computer Music Conf., 2017.
[19] D. Västfjäll, P. Larsson, and M. Kleiner, "Emotion and the auditory virtual environment: Affect-based judgment of virtual reverberation times," Cyberpsychology and Behavior, Vol.5, No.1, pp. 19-32, 2002.
[20] D. Zhang, "The influence of store background music on consumer purchase approach behavior," Master Thesis, Dongbei University of Finance and Economics, 2013 (in Chinese).
[21] P. N. Juslin and D. Västfjäll, "Emotional responses to music: The need to consider underlying mechanisms: Erratum," Behavioral and Brain Sciences, Vol.31, No.6, p. 751, 2008.
[22] R. He, "An Empirical Study on the Influence of Musical Melody Expectation on Emotional Experience," Master Thesis, Southwest University, 2019 (in Chinese).
[23] M. Li, "An Empirical Study of the Influence of Rhythm Structure on Emotion," Master Thesis, Southwest University, 2015 (in Chinese).
[24] H. Ma et al., "The influence of music culture experience on music emotional processing," Chinese Science Bulletin, Vol.20, pp. 2287-2300, 2017.
[25] S. Garrido and E. Schubert, "Individual differences in the enjoyment of negative emotion in music: A literature review and experiment," Music Perception, Vol.28, No.3, pp. 279-296, 2011.
[26] J. Hou and C. Liu, "The Research of Brain Waves about Emotional Activity Induced by Different Musical Conformations Composing of Mode and Tempo: A Combined Representational Character of Musical Training Experience and Gender," Psychological Exploration, Vol.30, No.6, pp. 86-93, 2010.
[27] W. F. Thompson and G. Ilie, "A Comparison of Acoustic Cues in Music and Speech for Three Dimensions of Affect," Music Perception, Vol.23, No.4, pp. 319-330, 2006.
[28] Y. Lu, "The influence of music factors, personality and situation on music emotions," Master Thesis, Southwest University, 2014 (in Chinese).

Name:
Jing-Xian He

Affiliation:
School of Arts and Communication, China University of Geosciences
School of Danyang Normal, Zhenjiang College

Address:
388 Lumo Road, Hongshan District, Wuhan, Hubei 430074, China
No.518 Changxiang West Avenue, College Park, Zhenjiang City, Jiangsu 212000, China

Brief Biographical History:
2017 Received bachelor's degree from China University of Geosciences
2020 Received MFA from China University of Geosciences
2020- Teacher, Zhenjiang College

Main Works:
• "Empirical Research on a Fuzzy Model of Music Emotion Classification Based on Pleasure-Arousal Model," The 37th Chinese Control Conf., pp. 3239-3244, 2018.

Membership in Academic Societies:
• Chinese Association for Artificial Intelligence (CAAI)
• Zhenjiang Musicians Association


Name:
Li Zhou

Affiliation:
School of Arts and Communication, China University of Geosciences

Address:
388 Lumo Road, Hongshan District, Wuhan, Hubei 430074, China

Brief Biographical History:
2004 Received M.A. from Wuhan University
2005 Certificate of Computer Music Composition, Central Conservatory of Music
2004-2006 Assistant Professor, China University of Geosciences
2006- Associate Professor, China University of Geosciences
2013-2015 Visiting Scholar, Center of Music Technology of Georgia Institute of Technology

Main Works:
• "Study on Adaptive Chord Allocation Algorithm Based on Dynamic Programming," J. of Fudan University (Natural Science), Vol.58, No.3, pp. 393-400, 2019.
• "Design of Mechanical Arm of Auto-Play Dulcimer Robot," Machinery Design & Manufacture, No.1, pp. 251-253, 2018.
• "Gait planning for biped dancing robot," Engineering J. of Wuhan University, Vol.49, No.6, pp. 949-954+960, 2016.

Membership in Academic Societies:
• Electronic Music Association, Chinese Musicians' Association (CMA), Senior Member
• Chinese Association for Artificial Intelligence (CAAI)

Name:
Xin-Yue Hu

Affiliation:
School of Arts and Communication, China University of Geosciences

Address:
388 Lumo Road, Hongshan District, Wuhan, Hubei 430074, China

Brief Biographical History:
2016 Received M.E. from China University of Geosciences

Main Works:
• X. Hu and Z. Li, "The Creative Features and Enlightenment of Joe Hisaishi's Animation Film Music," Art Evaluation, CNKI:SUN:YSPN.0.2019-07-069, 2019 (in Chinese).

Name:
Zhen-Tao Liu

Affiliation:
School of Automation, China University of Geosciences

Address:
388 Lumo Road, Hongshan District, Wuhan, Hubei 430074, China
Brief Biographical History:
2008 Received M.E. from Central South University
2013 Received Dr.Eng. from Tokyo Institute of Technology
2013-2014 Lecturer, Central South University
2014-2018 Lecturer, China University of Geosciences
2018- Associate Professor, China University of Geosciences
Main Works:
• “Speech Personality Recognition Based on Annotation Classification
Using Log-likelihood Distance and Extraction of Essential Audio
Features,” IEEE Trans. on Multimedia, doi: 10.1109/TMM.2020.3025108,
2020.
• “Electroencephalogram Emotion Recognition Based on Empirical Mode
Decomposition and Optimal Feature Selection,” IEEE Trans. on Cognitive
and Developmental Systems, Vol.11, No.4, pp. 517-526, 2019.
• “Speech Emotion Recognition Based on Feature Selection and Extreme
Learning Machine Decision Tree,” Neurocomputing, Vol.273, pp. 271-280,
2018.
Membership in Academic Societies:
• The Institute of Electrical and Electronics Engineers (IEEE)
• Chinese Association for Artificial Intelligence (CAAI)
