Interviewing Riley Goodside on the science of prompting
Episode 56 · 01:08:39

More information: https://www.interconnects.ai/p/riley-goodside-on-science-of-prompting

Riley Goodside is a staff prompt engineer at Scale AI. Previously a data scientist, he is often seen as the default example of the new role of "prompt engineer." He regularly posts incisive prompts that elicit notable behavior from the most popular AI models.

I really resonated with a saying from Anthropic's recent podcast on prompt engineering: "now we write essays and treat them as code." To be good at prompting, you need to understand that natural language now plays the role that code used to.

This episode is a masterclass on why you should care about prompting and how it impacts results. Of course, there's also plenty of great discussion of recent models that reflect the need for different and/or better prompting. Enjoy it!

00:00:09 Introduction
00:02:40 Riley's path to LLMs
00:07:54 Impact of ChatGPT on prompt engineering
00:12:03 OpenAI's o1
00:18:21 Autoregressive inference and prompting sensitivities
00:24:48 Reflection 70B model and its implications
00:28:00 Impact of prompting on evaluation
00:32:43 Prompting vs. Google search
00:46:55 Prompting and RLHF/post-training
00:56:57 Prompting of AI agents
01:01:20 Importance of hands-on experience with language models
01:05:00 Importance and challenges of AI model evaluation
