A Turing AI UK Fringe Event | 30th March 2023 | 2pm BST


About

After years of rapid progress in Artificial Intelligence and Natural Language Processing, ChatGPT has captured the public imagination in a way that is unprecedented for an AI system, reportedly acquiring over 100 million active users within two months of its launch. Researchers and other stakeholders must come to grips with the scientific and societal implications of Large Language Models (LLMs), and with how they will impact the language sciences, language technology and society more generally. Key questions include:

  • What does it mean for a system (either artificial or biological) to “understand” language, and how might we investigate the linguistic competence of LLMs?
  • How do the organising principles of LLMs relate to neurocognitive processes in the human brain, and what can LLMs tell us about the neurobiology of language? Can LLMs be utilised to decode or “read out” the linguistic content of human thoughts?
  • What are the major governance and ethical implications of LLMs, and how can we ensure that they are trained, evaluated and used in a fair, transparent and responsible way?

In this symposium, experts in AI and the language sciences will describe their recent research on the scientific and governance implications of LLMs. The event will also include a Q&A session, facilitating a more general discussion of the capabilities and impact of LLMs.

Organisers

Barry Devereux & Hui Wang, Queen's University Belfast
qubturinguk2023@gmail.com

Registration

Date and Time: Thursday, 30th March 2023 at 2pm BST

This is an online event, taking place via Zoom. Registration is free and open to all.

You can register via Eventbrite. Zoom Meeting ID details will be emailed to attendees in advance of the event.

Talks

    The Semantic Competence of Large Language Models

    Raphaël Millière

    Over the past decade, artificial intelligence has made remarkable advancements, largely due to sophisticated neural networks capable of learning from vast amounts of data. One area where these advancements have been particularly evident is natural language processing, as demonstrated by the impressive capabilities of Large Language Models (LLMs) like GPT-4 and ChatGPT. LLMs possess the astonishing ability to produce well-structured and coherent paragraphs on a wide array of subjects. Moreover, they can perform a diverse set of tasks, such as summarizing extensive articles, translating languages, answering complex questions, solving elementary problems, generating code for rudimentary programs, and even elucidating jokes. As AI becomes increasingly proficient in language-related tasks, this raises the question of whether these models truly comprehend the language they process. This inquiry has reignited long-standing philosophical debates concerning the nature of language understanding. However, defining 'understanding' in this context and distinguishing it from consciousness remain challenging. In this presentation, I will propose focusing on the more specific notion of 'semantic competence,' which refers to the ability to comprehend the meanings of words and phrases and to employ them effectively. I will explore two facets of semantic competence: inferential competence, which pertains to connecting linguistic expressions with other expressions, and referential competence, which relates expressions to the real world. Drawing from philosophy, linguistics, and recent research on LLMs, I will argue that these models possess considerable inferential competence and even a limited degree of referential competence. In conclusion, I will contemplate the elements still absent from AI models and consider what steps are necessary for them to achieve a level of semantic competence comparable to that of humans.

    Encoding & Decoding Natural Language in fMRI

    Alex Huth

    The meaning, or semantic content, of natural speech is represented in highly specific patterns of brain activity across a large portion of the human cortex. Using recently developed machine learning methods and very large fMRI datasets collected from single subjects, we can construct models that predict brain responses with high accuracy. Interrogating these models enables us to map language selectivity and potentially uncover organizing principles. The same techniques also enable us to construct surprisingly effective decoding models, which predict language stimuli from brain activations recorded using fMRI. Using these models, we are able to decode language both while subjects imagine telling a story and while they watch silent films with no explicit language content.

    Data governance and transparency for Large Language Models: lessons from the BigScience Workshop

    Anna Rogers

    The continued growth of LLMs and their wide-scale adoption in commercial applications such as ChatGPT make it increasingly important (a) to develop more transparent ways of sourcing their training data, and (b) to investigate that data, both for research purposes and for ethical issues. This talk will discuss the current state of affairs and some data governance lessons learned from BigScience, an open-source effort to train a multilingual LLM, as well as an ongoing effort to investigate the 1.6 TB multilingual ROOTS corpus.

Speakers

  • Raphaël Millière

    Raphaël Millière @raphaelmilliere

    Columbia University

    Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society, and a Lecturer in the Philosophy Department at Columbia University. He previously completed his DPhil (PhD) in philosophy at the University of Oxford, where he worked on self-consciousness. His interests lie mainly within the philosophy of artificial intelligence, cognitive science, and mind.

  • Alex Huth

    Alex Huth @alex_ander

    University of Texas at Austin

    Alex Huth is an Assistant Professor at The University of Texas at Austin in the departments of neuroscience and computer science. His lab uses natural language stimuli and fMRI to study language processing in human cortex, in work funded by the Burroughs Wellcome Fund, Sloan Foundation, Whitehall Foundation, NIH, and others. Before joining UT, Alex did his PhD and postdoc in Jack Gallant’s laboratory at UC Berkeley, where he developed novel methods for mapping semantic representations of visual and linguistic stimuli in human cortex.

  • Anna Rogers

    Anna Rogers @annargrs

    IT University of Copenhagen

    Anna Rogers is an assistant professor at the IT University of Copenhagen. After receiving her PhD in computational linguistics from the University of Tokyo, she was a postdoctoral associate in machine learning for NLP at the University of Massachusetts (Lowell) and in social data science at the University of Copenhagen. She is one of the program chairs for ACL'23, and an organizer of the Workshop on Insights from Negative Results in NLP.

Partners