WEBINAR
How can we improve AI alignment?
Meet the speakers
Dr David Jurgens
Assistant Professor, University of Michigan
Dr Hua Shen
Research Fellow, University of Washington, RAISE Center

On-demand webinar

1 hour
LLMs give confident, definitive responses - but whose responses are they giving?
Humans in the loop shape these responses, but the way that human feedback is collected and fed into the LLM is not standardized.
Dr David Jurgens and Dr Hua Shen discuss how we can get AI to align better with human practices, so that models can handle more complex situations with nuance.
We'll discuss:
✅ What AI alignment is, and why it matters.
✅ Major gaps in alignment, and their implications for safety and future research.
✅ What good data collection for AI alignment looks like.