My research develops mathematical frameworks for meaning, reasoning, and abstraction, using category theory and formal semantics to study symbolic and subsymbolic approaches to language and cognition.
My dissertation established homomorphisms between intensional semantics and vector spaces, developed a vector logic for extensional formal semantics, and proposed a taxonomy of grounding problems for neurosymbolic AI. My current work applies categorical and relational methods to abstraction and analogy in ARC-style (Abstraction and Reasoning Corpus) reasoning tasks.
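To give a flavor of the symbolic-to-vector correspondence (an illustrative toy only, not the construction from the dissertation): over a finite domain, mapping each predicate's extension to its 0/1 indicator vector is a homomorphism that carries intersection to elementwise minimum, union to maximum, and complement to 1 − v. The domain, predicate names, and numpy encoding below are assumptions made purely for illustration.

```python
# Toy sketch: extensions as sets vs. 0/1 indicator vectors over a finite domain.
# The indicator map commutes with the logical connectives, i.e. it is a
# Boolean-algebra homomorphism from sets to vectors.
import numpy as np

domain = ["alice", "bob", "carol", "dave"]          # hypothetical finite domain D

def indicator(extension):
    """Map a set of individuals (an extension) to its 0/1 vector over D."""
    return np.array([1.0 if d in extension else 0.0 for d in domain])

# Extensions of two one-place predicates, chosen arbitrarily for illustration.
runs   = {"alice", "carol"}
smiles = {"carol", "dave"}

v_runs, v_smiles = indicator(runs), indicator(smiles)

# Vector-side connectives.
v_and = np.minimum(v_runs, v_smiles)     # conjunction ~ set intersection
v_or  = np.maximum(v_runs, v_smiles)     # disjunction ~ set union
v_not = 1.0 - v_runs                     # negation    ~ set complement

# Homomorphism check: interpret then combine == combine extensions then interpret.
assert np.array_equal(v_and, indicator(runs & smiles))
assert np.array_equal(v_or,  indicator(runs | smiles))
assert np.array_equal(v_not, indicator(set(domain) - runs))

# Existential quantification ("someone runs and smiles") as a max over the vector.
print(bool(v_and.max()))   # True iff the intersection is non-empty (here: carol)
```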
Intelligence manifests across substrates, whether neural, artificial, or collective, yet research on each typically proceeds in isolation. I am interested in whether formal frameworks can identify shared principles, particularly around representation and generalization, without building in anthropocentric assumptions.
Core questions include:
- What structural correspondences hold between discrete symbolic systems and continuous learned representations?
- Can category-theoretic methods characterize abstraction in a substrate-neutral way?
- How do we measure whether formal models are tracking something cognitively real?
