Ji Chul Kim, PhD
Postdoctoral Fellow
Department of Psychological Sciences
University of Connecticut
406 Babbidge Road, Unit 1020
Storrs, CT 06269

E-mail: jichulkim21(at)gmail(dot)com


BRIEF BIO

I am currently a postdoctoral fellow in the Music Dynamics Lab at the University of Connecticut. I studied physics and music theory at Seoul National University in South Korea and earned a PhD in music theory and cognition at Northwestern University. I am also a research scientist at Oscilloscape, a music technology company based in East Hartford, CT.


RESEARCH INTERESTS

My research areas include music cognition, music theory, auditory modeling, computational neuroscience, and dynamical systems. In music theory and cognition, my primary interest is the perceptual and cognitive basis for music-theoretical concepts and analytic procedures, especially those related to tonal and melodic structures. I attempt to explain subjective experiences and intuitions attributed to tonal-metrical music in terms of the dynamical but mostly unconscious process of perceptual organization. I am also working on computational modeling of music perception based on a nonlinear dynamical systems approach to auditory processing. In this line of research, my colleagues and I develop gradient frequency neural network models of auditory processing and perception.


CURRENT PROJECTS

Perceptual Organization of Tonal Melody

Tones in a tonal melody are heard to be under the influence of the prevailing key and harmony. For example, once a tonal context is established, an unstable tone is heard to be "attracted" to the nearest stable tone. At the same time, melodic tones, through their intervallic patterns and motion, establish key and harmony. I propose that this two-way relationship between the melodic surface and the underlying tonal/harmonic structure can be explained in terms of the bottom-up (stimulus-driven) and top-down (knowledge-driven) aspects of perceptual organization, the low-level perceptual processing that constructs maximally stable mental representations out of incoming sensory data. This approach allows us to identify the perceptual principles underlying traditional theoretical concepts and compositional procedures concerning the construction of tonal melody, such as melodic prolongation and diminution. It also provides new analytic insights into the dynamical structure of tonal melody arising from the interactions between bottom-up and top-down processes.
  • Kim, J. C., & Large, E. W. (under revision). Establishing tonal stability: Dynamics of melodic steps and leaps. [abstract]
  • Kim, J. C. (2017). A dynamical model of pitch memory provides an improved basis for implied harmony estimation. Frontiers in Psychology, 8:666. [link]
  • Kim, J. C. (2014). Pitch dynamics in tonal melody: The role of melodic step and leap in establishing tonal stability. The 18th Annual Meeting of the American Musicological Society and the 37th Annual Meeting of the Society for Music Theory, Milwaukee, WI. [poster]

Neurodynamics of Harmony and Tonality

Neural oscillation is a dynamic activity observed throughout the central nervous system, including various stages of the auditory system. We propose that nonlinear oscillatory dynamics in the auditory system give rise to perceptual phenomena related to harmony and tonality in music. To study the neurodynamic properties of the auditory system, we use simple mathematical models of neural oscillation (called canonical models) and simulate auditory perception using multilayered networks of neural oscillators. We show that the intrinsic dynamics and network properties of neural oscillators can explain many aspects of harmony and tonality perception, such as perceived hierarchies of tonal stability, melodic attraction and expectation, Hebbian learning of tonal sequences, relative stability of pitch intervals in memory, and categorical perception of pitch intervals. (A toy simulation of a single forced oscillator follows the references below.)
  • Large, E. W., Kim, J. C., Flaig, N. K., Bharucha, J. J., & Krumhansl, C. L. (2016). A neurodynamic account of musical tonality. Music Perception, 33, 319-331. [pdf]
  • Kim, J. C., & Large, E. W. (2015). Nonlinear resonance and plasticity as a basis for musical consonance. Neuroscience 2015, Chicago, IL. [poster]
  • Lerud, K., Almonte, F. V., Kim, J. C., & Large, E. W. (2014). Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals. Hearing Research, 308, 41-49. [pdf]
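
To make the canonical-model idea concrete, here is a minimal sketch in Python of a single forced oscillator. It uses a truncated form of the canonical equation, dz/dt = z(alpha + i*omega + beta1*|z|^2) + F*exp(i*omega_f*t); the full canonical model we work with includes higher-order resonance, coupling, and plasticity terms omitted here, and the parameter values, the helper name forced_oscillator, and the integrating-factor Euler integrator below are illustrative choices, not code from our models or the GrFNN Toolbox.

    import numpy as np

    def forced_oscillator(omega0, omega_f, F, alpha=-1.0, beta1=-10.0,
                          fs=44100.0, dur=1.0):
        """Integrate dz/dt = z*(alpha + 1j*omega0 + beta1*|z|^2)
        + F*exp(1j*omega_f*t) with an integrating-factor Euler step
        (the linear part is handled exactly)."""
        dt = 1.0 / fs
        lin = np.exp((alpha + 1j * omega0) * dt)   # exact linear update per step
        z = 0.01 + 0j                              # small initial amplitude
        for k in range(int(dur * fs)):
            x = F * np.exp(1j * omega_f * k * dt)  # complex sinusoidal forcing
            z = lin * z + dt * (beta1 * abs(z) ** 2 * z + x)
        return abs(z)

    w440 = 2 * np.pi * 440.0
    # Resonant driving sustains a large steady-state amplitude ...
    print("driven at 440 Hz:", forced_oscillator(w440, w440, F=1.0))
    # ... while off-resonant driving leaves the oscillator nearly quiescent.
    print("driven at 520 Hz:", forced_oscillator(w440, 2 * np.pi * 520.0, F=1.0))

Driving the oscillator at its natural frequency sustains a large-amplitude response, while driving it off resonance does not; this nonlinearly compressed frequency selectivity is the elementary behavior out of which the network models described above are built.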

Analysis of Gradient Frequency Neural Networks (GrFNNs)

To model the nonlinear transformation of acoustic signals into neural patterns in the auditory system, we use a canonical model for gradient frequency neural networks, a mathematical model that captures essential properties shared by such networks regardless of their scale and biophysical mechanisms. Although the model is simple, its behavior is complex and difficult to analyze because it consists of multiple components with distinct dynamics (e.g., autonomous dynamics, external forcing, coupling interactions, plasticity). Our approach is to analyze individual network components separately and then to understand the overall dynamics of the model by combining the component dynamics. We developed the GrFNN Toolbox for simulating and analyzing gradient frequency neural networks; a MATLAB version of the toolbox is available on GitHub [link]. (A toy network simulation follows the references below.)
  • Kim, J. C., & Large, E. W. (book in preparation). Signal processing, plasticity and pattern formation in networks of neural oscillators.
  • Kim, J. C., & Large, E. W. (2015). Signal processing in periodically forced gradient frequency neural networks. Frontiers in Computational Neuroscience, 9:152. [link]
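
As a companion to the single-oscillator sketch above, the following toy simulation arranges oscillators of the same truncated type along a log-spaced frequency gradient and drives them all with one complex tone. It is written independently of the GrFNN Toolbox: coupling and plasticity, the components that make the full model hard to analyze, are deliberately omitted, and all parameters are for demonstration only.

    import numpy as np

    # Stimulus: a complex tone with partials at 200, 400, and 600 Hz.
    fs, dur = 8000.0, 0.5
    dt = 1.0 / fs
    t = np.arange(int(dur * fs)) * dt
    stim = 0.5 * sum(np.exp(1j * 2 * np.pi * f * t) for f in (200.0, 400.0, 600.0))

    # Oscillator bank with log-spaced natural frequencies (the "gradient");
    # no oscillator-to-oscillator coupling and no plasticity in this sketch.
    freqs = np.logspace(np.log10(100.0), np.log10(800.0), 121)
    alpha, beta1 = -30.0, -10.0
    lin = np.exp((alpha + 1j * 2 * np.pi * freqs) * dt)  # exact linear part per step

    z = np.full(freqs.size, 0.01 + 0j)
    for k in range(t.size):
        z = lin * z + dt * (beta1 * np.abs(z) ** 2 * z + stim[k])

    # Amplitude peaks sit at the oscillators nearest the stimulus partials.
    amp = np.abs(z)
    pk = (amp[1:-1] > amp[:-2]) & (amp[1:-1] > amp[2:]) & (amp[1:-1] > 0.5 * amp.max())
    print("resonant peaks near (Hz):", freqs[1:-1][pk].round(1))

Oscillators whose natural frequencies lie near the stimulus partials settle at locally peaked amplitudes, illustrating in miniature how such a network transforms an acoustic signal into a frequency-organized neural pattern.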

Dynamical Model of Auditory Scene Analysis

Auditory scene analysis refers to the segregation of individual sound sources from a mixture of acoustic signals. We explain auditory scene analysis as dynamic pattern formation in nonlinear oscillatory systems. Our current focus is the segregation of concurrent harmonic sounds. The emergent pattern of mode-locked synchronization between neural oscillators provides a biologically realistic account of the grouping and segregation of the harmonics of multiple concurrent fundamental frequencies (F0s). (A toy two-F0 simulation follows the references below.)
  • Kim, J. C., & Large, E. W. (2016). Multiple F0 estimation by gradient frequency neural networks. The 6th Annual Seminar on Cognitively Based Music Informatics Research, New York, NY. [slides]
  • Kim, J. C., & Large, E. W. (2016). A nonlinear dynamical systems approach to auditory scene analysis. The 14th International Conference on Music Perception and Cognition, San Francisco, CA. [poster]
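
The following toy simulation gives a feel for the two-F0 problem. It drives the same truncated, uncoupled oscillator bank used in the sketches above with a mixture of two harmonic complexes, then groups the resulting resonance peaks under candidate F0s with a simple harmonic sieve. Two caveats: in our actual model the grouping emerges from mode-locked synchronization rather than from a sieve, and the candidate F0s here are supplied rather than estimated by the network; all parameters are illustrative.

    import numpy as np

    # Mixture of two harmonic complexes: F0 = 200 Hz and F0 = 260 Hz,
    # three harmonics each.
    fs, dur = 8000.0, 0.5
    dt = 1.0 / fs
    t = np.arange(int(dur * fs)) * dt
    f0s = (200.0, 260.0)
    partials = [k * f0 for f0 in f0s for k in (1, 2, 3)]
    stim = 0.4 * sum(np.exp(1j * 2 * np.pi * f * t) for f in partials)

    # Same truncated, uncoupled oscillator bank as in the earlier sketches.
    freqs = np.logspace(np.log10(150.0), np.log10(900.0), 241)
    alpha, beta1 = -30.0, -10.0
    lin = np.exp((alpha + 1j * 2 * np.pi * freqs) * dt)
    z = np.full(freqs.size, 0.01 + 0j)
    for k in range(t.size):
        z = lin * z + dt * (beta1 * np.abs(z) ** 2 * z + stim[k])

    # Locate the network's resonance peaks ...
    amp = np.abs(z)
    pk = (amp[1:-1] > amp[:-2]) & (amp[1:-1] > amp[2:]) & (amp[1:-1] > 0.5 * amp.max())
    peaks = freqs[1:-1][pk]

    # ... and group each peak under the candidate F0 whose harmonic series
    # it fits within 2% (a stand-in for the emergent mode-locked grouping).
    for f0 in f0s:
        grouped = [float(round(p, 1)) for p in peaks
                   if abs(p - round(p / f0) * f0) < 0.02 * p]
        print("F0 = %g Hz accounts for partials near:" % f0, grouped)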

DISSERTATION

Tonality in Music Arises from Perceptual Organization
[ProQuest link]

The perception of tonality has commonly been attributed to the properties of pitch structure, with little attention paid to the role of temporal structure. My dissertation proposes a new psychological theory based on the idea that the perceived sense of tonality, including stability and tendency, arises from the low-level mental processes of perceptual organization through which individual tones in a melodic surface are structured into coherent and articulate tonal-temporal units. The role of low-level (primitive) grouping/segmentation in the perceptual organization of tonal structure is emphasized in an effort to shed light on the "bottom-up" aspects of tonality perception, which have been largely neglected in both music theory and music cognition. Also discussed in light of the proposed theory are the relationship between tonal hierarchies and event hierarchies, the bottom-up (stimulus-driven) and top-down (knowledge-driven) sources of tonal stability, perceptual mechanisms involved in pitch centricity and melodic anchoring, processing advantages underlying the law of return, and the distinction between sensory consonance and musical consonance.


Last updated: 4/13/2017