
UMBC Prof. Lara Martin video on Neurosymbolic AI and LLMs

How I Learned to Stop Worrying & Love Large Language Models

UMBC professor Lara Martin recently gave a presentation at the JHU Center for Language & Speech Processing (CLSP). Her talk, Neurosymbolic AI or: How I Learned to Stop Worrying and Love the Large Language Model, covered some of her recent research on natural language processing, human-centered AI, and story generation. Here is the abstract of her talk.

Large language models like ChatGPT have shown extraordinary abilities for writing. While impressive at first glance, large language models aren't perfect and often make mistakes humans would not make. The main architecture behind ChatGPT doesn't differ much from that of early neural networks and, as a consequence, carries some of the same limitations. My work revolves around mixing neural networks like ChatGPT with symbolic methods from early AI, exploring how these two families of methods can combine to create more robust AI. I talk about some of the neurosymbolic methods I used for applications in story generation and understanding, with the goal of eventually creating AI that can play Dungeons & Dragons. I also discuss pain points I found in improving accessible communication and show how large language models can supplement such communication.

Dr. Martin will teach a special topics class at UMBC in Fall 2024 on Interactive Fiction and Text Generation (CMSC 491/691), based on a similar course she developed and co-taught at the University of Pennsylvania in 2022.

Posted: April 12, 2024, 11:52 AM