Ji Chul Kim, PhD
Postdoctoral Fellow
Department of Psychological Sciences
University of Connecticut
406 Babbidge Road, Unit 1020
Storrs, CT 06269
I am currently a postdoctoral fellow in the Music Dynamics Lab at the University of Connecticut. I studied physics and music theory at Seoul National University in South Korea and earned a PhD in music theory and cognition at Northwestern University. I am also a research scientist at Oscilloscape, a music technology company based in East Hartford, CT.
My research areas include music cognition, music theory, auditory modeling, computational neuroscience, and dynamical systems. In music theory and cognition, my primary interest is the perceptual and cognitive basis for music-theoretical concepts and analytic procedures, especially those related to tonal and melodic structures. I attempt to explain subjective experiences and intuitions attributed to tonal-metrical music in terms of the dynamical but mostly unconscious process of perceptual organization. I am also working on computational modeling of music perception based on a nonlinear dynamical systems approach to auditory processing. In this line of research, my colleagues and I develop gradient frequency neural network models of auditory processing and perception.
Tones in a tonal melody are heard under the influence of the prevailing key and harmony. For example, once a tonal context is established, an unstable tone is heard as "attracted" to the nearest stable tone. At the same time, melodic tones, through their intervallic patterns and motion, establish key and harmony. I propose that this two-way relationship between the melodic surface and the underlying tonal/harmonic structure can be explained in terms of the bottom-up (stimulus-driven) and top-down (knowledge-driven) aspects of perceptual organization, the low-level perceptual processing that constructs maximally stable mental representations from incoming sensory data. This approach allows us to identify the perceptual principles underlying traditional theoretical concepts and compositional procedures concerning the construction of tonal melody, such as melodic prolongation and diminution, and provides new analytic insights into the dynamical structure of tonal melody arising from the interaction between bottom-up and top-down processes.
Neurodynamics of Harmony and Tonality
Neural oscillation is a dynamic activity observed throughout the central nervous system, including various stages of the auditory system. We propose that nonlinear oscillatory dynamics in the auditory system give rise to perceptual phenomena related to harmony and tonality in music. To study neurodynamic properties of the auditory system, we use simple mathematical models of neural oscillation (called canonical models) and simulate auditory perception using multilayered networks of neural oscillators. We show that the intrinsic dynamics and network properties of neural oscillators can explain many aspects of harmony and tonality perception, such as perceived hierarchies of tonal stability, melodic attraction and expectation, Hebbian learning of tonal sequences, relative stability of pitch intervals in memory, and categorical perception of pitch intervals.
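The flavor of these canonical models can be conveyed with a minimal sketch. Below, a single oscillator in Hopf normal form (a simplified stand-in for the canonical model used in this research, which also includes higher-order coupling terms; the function name and parameter values here are illustrative assumptions, not taken from the published models) entrains to a periodic stimulus near its natural frequency but shows only a weak forced response when detuned:

```python
import numpy as np

def hopf_response(f_osc, f_stim, alpha=-0.1, beta=-1.0, force=0.2,
                  dt=0.001, T=10.0):
    """Integrate a forced oscillator in Hopf normal form,
       dz/dt = z*(alpha + i*2*pi*f_osc + beta*|z|^2) + F*exp(i*2*pi*f_stim*t),
       and return the complex trajectory z(t)."""
    n = int(T / dt)
    z = 0.01 + 0j                      # small initial perturbation
    traj = np.empty(n, dtype=complex)
    for k in range(n):
        t = k * dt
        # exponential-Euler step: exact for the linear rotation/decay, stable
        z = z * np.exp(dt * (alpha + 2j*np.pi*f_osc + beta*abs(z)**2)) + \
            dt * force * np.exp(2j*np.pi*f_stim*t)
        traj[k] = z
    return traj

# An oscillator tuned to the stimulus frequency entrains and sustains a large
# amplitude; a detuned oscillator shows only a weak forced response.
near = hopf_response(f_osc=2.0, f_stim=2.0)
far = hopf_response(f_osc=3.5, f_stim=2.0)
```

With the damped regime chosen here (alpha < 0), sustained large-amplitude activity signals resonance with the stimulus, which is the basic mechanism behind the stability and attraction effects described above.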
Analysis of Gradient Frequency Neural Networks (GrFNNs)
To model the nonlinear transformation of acoustic signals into neural patterns in the auditory system, we use a canonical model for gradient frequency neural networks, a mathematical model that captures essential properties shared by such networks regardless of their scale and biophysical mechanisms. Although the model is simple, its behavior is complex and difficult to analyze because it combines multiple components with distinct dynamics (e.g., autonomous dynamics, external forcing, coupling interactions, plasticity). Our approach is to analyze the individual components separately and then understand the overall dynamics of the model by combining the component dynamics. We developed the GrFNN Toolbox for simulating and analyzing gradient frequency neural networks. A MATLAB version of the toolbox is available on GitHub [link].
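The gradient-frequency idea itself is easy to sketch: a bank of oscillators whose natural frequencies are spaced along a log-frequency axis, all driven by the same signal. The sketch below uses uncoupled Hopf-type oscillators as a stripped-down stand-in (the canonical GrFNN model adds higher-order resonance and coupling terms, and all names and parameter values here are illustrative assumptions):

```python
import numpy as np

def grfnn_response(stim_freqs, stim_amps, n_osc=64, f_lo=0.5, f_hi=8.0,
                   alpha=-0.05, beta=-1.0, dt=0.001, T=8.0):
    """Drive a bank of uncoupled Hopf-type oscillators whose natural
    frequencies lie on a log-frequency gradient; return the natural
    frequencies and the final response amplitudes |z|."""
    freqs = np.logspace(np.log10(f_lo), np.log10(f_hi), n_osc)
    z = np.full(n_osc, 0.01 + 0j)
    for k in range(int(T / dt)):
        t = k * dt
        stim = sum(a * np.exp(2j*np.pi*f*t)
                   for f, a in zip(stim_freqs, stim_amps))
        # exponential-Euler step, vectorized over the whole gradient
        z = z * np.exp(dt * (alpha + 2j*np.pi*freqs + beta*np.abs(z)**2)) \
            + dt * stim
    return freqs, np.abs(z)

# A harmonic complex (partials at 2, 4, 6 Hz) excites the oscillators tuned
# near its partials and leaves the rest of the gradient nearly quiescent.
freqs, amp = grfnn_response([2.0, 4.0, 6.0], [0.15, 0.15, 0.15])
```

The resulting amplitude profile over the gradient is a crude analogue of the frequency-selective neural responses the canonical model is designed to capture; analyzing how that profile changes when coupling, plasticity, or stronger forcing is switched on is exactly the kind of component-by-component question described above.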
Dynamical Model of Auditory Scene Analysis
Auditory scene analysis refers to the segregation of individual sound sources from a mixture of acoustic signals. We explain auditory scene analysis as dynamic pattern formation in nonlinear oscillatory systems. Our current focus is the segregation of concurrent harmonic sounds: the emergent pattern of mode-locked synchronization between neural oscillators provides a biologically realistic account of how the harmonics of multiple concurrent F0s are grouped and segregated.
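A toy version of segregation by synchronization can be shown with two oscillators and a two-component mixture. This sketch covers only 1:1 phase-locking (the published work involves higher-order mode-locking ratios), and the function name and parameters are illustrative assumptions: each oscillator locks to the mixture component nearest its natural frequency, so the components are "assigned" to different oscillators.

```python
import numpy as np

def locked_freqs(dt=0.001, T=10.0, alpha=-0.05, beta=-1.0):
    """Two Hopf-type oscillators (natural frequencies 2 Hz and 3 Hz) driven
    by the same two-component mixture; return each oscillator's observed
    rotation frequency, estimated from its unwrapped phase."""
    f_nat = np.array([2.0, 3.0])
    z = np.full(2, 0.01 + 0j)
    n = int(T / dt)
    traj = np.empty((n, 2), dtype=complex)
    for k in range(n):
        t = k * dt
        stim = 0.15 * (np.exp(2j*np.pi*2.0*t) + np.exp(2j*np.pi*3.0*t))
        # exponential-Euler step: exact for the linear part, then add forcing
        z = z * np.exp(dt * (alpha + 2j*np.pi*f_nat + beta*np.abs(z)**2)) \
            + dt * stim
        traj[k] = z
    # estimate locked frequencies from the unwrapped phase over the last 2 s
    ph = np.unwrap(np.angle(traj[-2000:]), axis=0)
    return (ph[-1] - ph[0]) / (2*np.pi * (len(ph) - 1) * dt)

# each oscillator's estimated frequency matches its own mixture component
f_locked = locked_freqs()
```

In the full model, the same logic runs over a whole gradient-frequency network, and harmonics sharing an F0 are grouped by their common pattern of mode-locked synchronization.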
Tonality in Music Arises from Perceptual Organization
The perception of tonality has been commonly attributed to the properties of pitch structure, with little attention paid to the role of temporal structure. My dissertation proposes a new psychological theory based on the idea that the perceived sense of tonality, including stability and tendency, arises from the low-level mental processes of perceptual organization through which individual tones in a melodic surface are structured into coherent and articulate tonal-temporal units. The role of low-level (primitive) grouping/segmentation in the perceptual organization of tonal structure is emphasized in an effort to shed light on the "bottom-up" aspects of tonality perception, which have been largely neglected in both music theory and music cognition. Also discussed in light of the proposed theory are the relationship between tonal hierarchies and event hierarchies, the bottom-up (stimulus-driven) and top-down (knowledge-driven) sources of tonal stability, perceptual mechanisms involved in pitch centricity and melodic anchoring, processing advantages in the law of return, and the distinction between sensory consonance and musical consonance.
Last updated: 4/13/2017