Abstract
Laughter, particularly self-induced laughter, has well-documented therapeutic benefits, yet its acoustic-emotional dynamics remain underexplored compared to spontaneous or social laughter. This study presents a novel approach to classifying post-laughter emotional states (positive vs. neutral) based on both global and time-segmented MFCC features, including delta and delta-delta coefficients, extracted from recordings of 126 participants under controlled conditions. Emotion labels were obtained via immediate self-reports to minimize subjective bias. Analysis revealed that acoustic features from the later segments of laughter sessions are the most predictive of emotional outcome. Among the models evaluated, a BiLSTM achieved the highest performance (accuracy = 86.67%, F1 = 0.87, AUC = 0.96), indicating its strength in modeling temporal patterns in laughter. These findings not only advance emotion recognition from nonverbal cues but also offer insights for designing AI systems capable of generating or interpreting context-sensitive, emotionally relevant laughter, such as in therapeutic or assistive human-computer interaction.
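To make the feature and model pipeline named in the abstract concrete, the sketch below shows one plausible way to extract MFCC, delta, and delta-delta sequences from a laughter recording and classify them with a BiLSTM. It is a minimal illustration only, not the authors' code: the use of librosa and Keras, the 13-coefficient setting, the layer sizes, and the padding/training calls in the usage comments are all assumptions for demonstration.

```python
# Minimal sketch (assumed tooling: librosa + TensorFlow/Keras), not the study's released code.
import numpy as np
import librosa
import tensorflow as tf

def mfcc_sequence(path, sr=16000, n_mfcc=13):
    """Return a (frames, 3*n_mfcc) sequence of MFCCs plus delta and delta-delta
    coefficients for one recording (hypothetical sample rate and coefficient count)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    delta = librosa.feature.delta(mfcc)                      # first-order derivatives
    delta2 = librosa.feature.delta(mfcc, order=2)            # second-order derivatives
    return np.vstack([mfcc, delta, delta2]).T                # (frames, 3*n_mfcc)

def build_bilstm(timesteps, n_features, n_classes=2):
    """A small BiLSTM sequence classifier (illustrative architecture only)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(timesteps, n_features)),
        tf.keras.layers.Masking(mask_value=0.0),             # skip zero-padded frames
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Usage sketch: pad variable-length sequences, then fit on self-reported labels (positive vs. neutral).
# X = tf.keras.preprocessing.sequence.pad_sequences(feature_list, dtype="float32", padding="post")
# model = build_bilstm(X.shape[1], X.shape[2])
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(X, y, validation_split=0.2, epochs=30)
```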