hig.se Publications
1 - 7 of 7
  • 1.
    Laukka, Petri
    University of Gävle, Department of Education and Psychology, Ämnesavdelningen för psykologi.
    Research on vocal expression of emotion: State of the art and future directions. 2008. In: Emotions in the human voice: Volume 1. Foundations. San Diego: Plural Publishing Inc., p. 153-169. Chapter in book (Other academic).
  • 2.
    Laukka, Petri
    et al.
    University of Gävle, Department of Education and Psychology, Ämnesavdelningen för psykologi.
    Audibert, Nicolas
    Aubergé, Véronique
    Exploring the graded structure of vocal emotion expressions. 2009. In: The role of prosody in affective speech / [ed] Sylvie Hancil. Bern: Peter Lang, p. 241-258. Chapter in book (Other academic).
    Abstract [en]

    Not all members of a category are equally good members; for example, a robin is generally considered to be a more typical member of the category ‘birds’ than an ostrich. Similarly, one vocal expression of, for example, happiness can be more typical than another expression of the same emotion, though both expressions clearly are perceived as ‘happy’. This chapter presents ongoing studies investigating the determinants of the typicality (graded structure) of vocal emotion expressions. In two experiments, separate groups of judges rated expressive speech stimuli (both acted and spontaneous expressions) with regard to typicality, ideal (suitability to express the respective emotion), and frequency of instantiation. A measure of similarity to central tendency was also obtained from listener judgments. Partial correlations and multiple regression analyses revealed that similarity to ideal, and not frequency of instantiation or similarity to central tendency, explained most variance in judged typicality. In other words, the typicality of vocal expressions was mainly determined by their similarity to ideal category members. Because ideals depend on the goals that people have, they can be independent of the particular category members that a person usually encounters. Thus it is argued that these results may indicate that prototypical vocal expressions are best characterized as goal-derived categories, rather than common taxonomic categories. This could explain how prototypical expressions can be acoustically distinct and highly recognizable, while at the same time occur relatively rarely in everyday speech.

  • 3.
    Laukka, Petri
    et al.
    Department of Psychology, Uppsala University, Uppsala, Sweden.
    Juslin, Patrik N.
    Department of Psychology, Uppsala University, Uppsala, Sweden.
    Similar patterns of age-related differences in emotion recognition from speech and music. 2007. In: Motivation and Emotion, ISSN 0146-7239, E-ISSN 1573-6644, Vol. 31, no. 3, p. 182-191. Article in journal (Refereed).
    Abstract [en]

    The ability of young and old adults to recognize emotions from vocal expressions and music performances was compared. The stimuli consisted of a) acted speech (anger, disgust, fear, happiness, and sadness; each posed with both weak and strong emotion intensity), b) synthesized speech (anger, fear, happiness, and sadness), and c) short melodies played on the electric guitar (anger, fear, happiness, and sadness; each played with both weak and strong emotion intensity). Both groups of listeners rated the stimuli using forced-choice and also rated the emotion intensity of each stimulus. Results showed emotion-specific age-related differences in recognition accuracy. Old adults consistently received significantly lower recognition accuracy for negative, but not for positive, emotions across all types of stimuli. Age-related differences in recognition of emotion intensity were also found. The results show the importance of considering individual emotions in studies on age-related differences in emotion recognition.

  • 4.
    Laukka, Petri
    et al.
    Department of Psychology, Uppsala University, Uppsala, Sweden.
    Linnman, Clas
    Department of Psychology, Uppsala University, Uppsala, Sweden.
    Åhs, Fredrik
    Pissiota, Anna
    Department of Psychology, Uppsala University, Uppsala, Sweden.
    Frans, Örjan
    Department of Psychology, Uppsala University, Uppsala, Sweden.
    Faria, Vanda
    Department of Psychology, Uppsala University, Uppsala, Sweden.
    Michelgård, Åsa
    Department of Neuroscience, Psychiatry, Uppsala University, Uppsala, Sweden.
    Appel, Lieuwe
    Uppsala Imanet AB, Uppsala, Sweden.
    Fredrikson, Mats
    Department of Psychology, Uppsala University, Uppsala, Sweden.
    Furmark, Tomas
    Department of Psychology, Uppsala University, Uppsala, Sweden.
    In a nervous voice: Acoustic analysis and perception of anxiety in social phobics' speech. 2008. In: Journal of Nonverbal Behavior, ISSN 0191-5886, E-ISSN 1573-3653, Vol. 32, no. 4, p. 195-214. Article in journal (Refereed).
    Abstract [en]

    This study investigated the effects of anxiety on nonverbal aspects of speech using data collected in the framework of a large study of social phobia treatment. The speech of social phobics (N = 71) was recorded during an anxiogenic public speaking task both before and after treatment. The speech samples were analyzed with respect to various acoustic parameters related to pitch, loudness, voice quality, and temporal aspects of speech. The samples were further content-masked by low-pass filtering (which obscures the linguistic content of the speech but preserves nonverbal affective cues) and subjected to listening tests. Results showed that a decrease in experienced state anxiety after treatment was accompanied by corresponding decreases in a) several acoustic parameters (i.e., mean and maximum voice pitch, high-frequency components in the energy spectrum, and proportion of silent pauses), and b) listeners' perceived level of nervousness. Both speakers' self-ratings of state anxiety and listeners' ratings of perceived nervousness were further correlated with similar acoustic parameters. The results complement earlier studies on vocal affect expression, which have been conducted on posed, rather than authentic, emotional speech.

  • 5.
    Laukka, Petri
    et al.
    University of Gävle, Faculty of Health and Occupational Studies, Department of Social Work and Psychology, Psychology. Uppsala Univ, Dept Psychol, Uppsala, Sweden.
    Neiberg, Daniel
    KTH, Ctr Speech Technol, Dept Speech Mus & Hearing, Stockholm, Sweden.
    Forsell, Mimmi
    KTH, Ctr Speech Technol, Dept Speech Mus & Hearing, Stockholm, Sweden.
    Karlsson, Inger
    KTH, Ctr Speech Technol, Dept Speech Mus & Hearing, Stockholm, Sweden.
    Elenius, Kjell
    KTH, Ctr Speech Technol, Dept Speech Mus & Hearing, Stockholm, Sweden.
    Expression of affect in spontaneous speech: Acoustic correlates and automatic detection of irritation and resignation. 2011. In: Computer Speech & Language, ISSN 0885-2308, E-ISSN 1095-8363, Vol. 25, no. 1, p. 84-104. Article in journal (Refereed).
    Abstract [en]

    The majority of previous studies on vocal expression have been conducted on posed expressions. In contrast, we utilized a large corpus of authentic affective speech recorded from real-life voice-controlled telephone services. Listeners rated a selection of 200 utterances from this corpus with regard to level of perceived irritation, resignation, neutrality, and emotion intensity. The selected utterances came from 64 different speakers who each provided both neutral and affective stimuli. All utterances were further automatically analyzed regarding a comprehensive set of acoustic measures related to F0, intensity, formants, voice source, and temporal characteristics of speech. Results first showed that several significant acoustic differences were found between utterances classified as neutral and utterances classified as irritated or resigned using a within-persons design. Second, listeners’ ratings on each scale were associated with several acoustic measures. In general the acoustic correlates of irritation, resignation, and emotion intensity were similar to previous findings obtained with posed expressions, though the effect sizes were smaller for the authentic expressions. Third, automatic classification (using LDA classifiers both with and without speaker adaptation) of irritation, resignation, and neutral performed at a level comparable to human performance, though human listeners and machines did not necessarily classify individual utterances similarly. Fourth, clearly perceived exemplars of irritation and resignation were rare in our corpus. These findings were discussed in relation to future research.

  • 6.
    Laukka, Petri
    et al.
    University of Gävle, Faculty of Health and Occupational Studies, Department of Social Work and Psychology, Psychology.
    Quick, Lina
    University of Gävle, Faculty of Health and Occupational Studies, Department of Social Work and Psychology.
    Emotional and motivational uses of music in sports and exercise: A questionnaire study among athletes. 2013. In: Psychology of Music, ISSN 0305-7356, E-ISSN 1741-3087, Vol. 41, no. 2, p. 198-215. Article in journal (Refereed).
    Abstract [en]

    Music is present in many sport and exercise situations, but empirical investigations on the motives for listening to music in sports remain scarce. In this study, Swedish elite athletes (N = 252) answered a questionnaire that focused on the emotional and motivational uses of music in sports and exercise. The questionnaire contained both quantitative items that assessed the prevalence of various uses of music, and open-ended items that targeted specific emotional episodes in relation to music in sports. Results showed that the athletes most often reported listening to music during pre-event preparations, warm-up, and training sessions; and the most common motives for listening to music were to increase pre-event activation, positive affect, motivation, performance levels and to experience flow. The athletes further reported that they mainly experienced positive affective states (e.g., happiness, alertness, confidence, relaxation) in relation to music in sports, and also reported on their beliefs about the causes of the musical emotion episodes in sports. In general, the results suggest that the athletes used music in purposeful ways in order to facilitate their training and performance.

  • 7.
    Leitman, David I
    et al.
    Program in Cognitive Neuroscience and Schizophrenia, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Program in Cognitive Neuroscience, The City College of the City University of New York, New York, NY, USA; Brain Behavior Laboratory, Department of Neuropsychiatry, University of Pennsylvania, Philadelphia, PA, USA.
    Laukka, Petri
    Department of Psychology, Uppsala University, Uppsala, Sweden.
    Juslin, Patrik N.
    Department of Psychology, Uppsala University, Uppsala, Sweden.
    Saccente, Erica
    Program in Cognitive Neuroscience and Schizophrenia, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA.
    Butler, Pamela
    Program in Cognitive Neuroscience and Schizophrenia, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Department of Psychiatry, New York University School of Medicine, New York, NY, USA.
    Javitt, Daniel C.
    Nathan S Kline Inst Psychiat Res, Program Cognit Neurosci & Schizophrenia, Orangeburg, NY, USA; Program in Cognitive Neuroscience and Schizophrenia, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Program in Cognitive Neuroscience, The City College of the City University of New York, New York, NY, USA; Brain Behavior Laboratory, Department of Neuropsychiatry, University of Pennsylvania, Philadelphia, PA, USA; Department of Psychiatry, New York University School of Medicine, New York, NY, USA.
    Getting the Cue: Sensory Contributions to Auditory Emotion Recognition Impairments in Schizophrenia. 2008. In: Schizophrenia Bulletin, ISSN 0586-7614, E-ISSN 1745-1701, Vol. 36, no. 3, p. 545-556. Article in journal (Refereed).
    Abstract [en]

    Individuals with schizophrenia show reliable deficits in the ability to recognize emotions from vocal expressions. Here, we examined emotion recognition ability in 23 schizophrenia patients relative to 17 healthy controls using a stimulus battery with well-characterized acoustic features. We further evaluated performance deficits relative to ancillary assessments of underlying pitch perception abilities. As predicted, patients showed reduced emotion recognition ability across a range of emotions, which correlated with impaired basic tone matching abilities. Emotion identification deficits were strongly related to pitch-based acoustic cues such as mean and variability of fundamental frequency. Whereas healthy subjects' performance varied as a function of the relative presence or absence of these cues, with higher cue levels leading to enhanced performance, schizophrenia patients showed significantly less variation in performance as a function of cue level. In contrast to pitch-based cues, both groups showed equivalent variation in performance as a function of intensity-based cues. Finally, patients were less able than controls to differentiate between expressions with high and low emotion intensity, and this deficit was also correlated with impaired tone matching ability. Both emotion identification and intensity rating deficits were unrelated to valence of intended emotions. Deficits in both auditory emotion identification and more basic perceptual abilities correlated with impaired functional outcome. Overall, these findings support the concept that auditory emotion identification deficits in schizophrenia reflect, at least in part, a relative inability to process critical acoustic characteristics of prosodic stimuli and that such deficits contribute to poor global outcome.
