Episodes

Latest Episode
AI Safety's Crux: Culture vs. Capitalism

Episode 58 · 10:30

SB 1047's veto, OpenAI's turnover, and a constant treadmill pushing AI startups to be all too similar to big technology name brands. This is AI generated audio with Python and 11Labs. S...

Interviewing Riley Goodside on the science of prompting

Episode 56 · 01:08:39

More information: https://www.interconnects.ai/p/riley-goodside-on-science-of-prompting Riley Goodside is a staff prompting engineer at Scale AI. Previously working in data science, h...

Llama 3.2 Vision and Molmo: Foundations for the multimodal open-source ecosystem

Episode 57 · 14:04

Sorry this one was late! Thanks for bearing with me, and keep sending feedback my way. Still a year or two away from when I have time to record these, but I would love to. Open-source...

Reverse engineering OpenAI's o1

Episode 55 · 18:52

What productionizing test-time compute shows us about the future of AI. Exploration has landed in language model training. This is AI generated audio with Python and 11Labs. Source cod...

Futures of the data foundry business model

Episode 54 · 11:32

Scale AI's future versus further scaling of language model performance. How Nvidia may take all the margins from the data market, too. This is AI generated audio with Python and 11Lab...

A post-training approach to AI regulation with Model Specs

Episode 53 · 05:39

And why the concept of mandating "model specs" could be a good start. (Oops, forgot to upload this yesterday!) This is AI generated audio with Python and 11Labs. Source code: https://g...

OpenAI's Strawberry, LM self-talk, inference scaling laws, and spending more on inference

Episode 52 · 10:40

Whether or not scaling works, we should spend more on inference. This is AI generated audio with Python and 11Labs. Source code: https://github.com/natolambert/interconnects-tools Origi...

OLMoE and the hidden simplicity in training better foundation models

Episode 51 · 10:31

Ai2 released OLMoE, which is probably our "best" model yet relative to its peers, but not much has changed in the process. This is AI generated audio with Python and 11Labs. Source cod...

On the current definitions of open-source AI and the state of the data commons

Episode 50 · 08:01

The Open Source Initiative is working towards a definition. This is AI generated audio with Python and 11Labs. Source code: https://github.com/natolambert/interconnects-tools Original p...

Nous Hermes 3 and exploiting underspecified evaluations

Episode 49 · 08:32

The latest model from one of the most popular fine-tuning labs makes us question how a model should be identified as a "frontier model." This is AI generated audio with Python and 11L...

Interviewing Ross Taylor on LLM reasoning, Llama fine-tuning, Galactica, agents

Episode 47 · 01:02:22

I had the pleasure of talking with Ross Taylor (https://x.com/rosstaylor90), who has a great spectrum of unique experiences in the language modeling space — evaluation experience, Ga...

A recipe for frontier model post-training

Episode 48 · 10:24

Apple, Meta, and Nvidia all agree -- synthetic data, iterative training, human preference labels, and lots of filtering. This is AI generated audio with Python and 11Labs. Source code:...

Interviewing Sebastian Raschka on the state of open LLMs, Llama 3.1, and AI education

Episode 46 · 01:03:42

This week, I had the pleasure of chatting with Sebastian Raschka. Sebastian is doing a ton of work on the open language model ecosystem and AI research broadly. He’s been writing the...

GPT-4o-mini changed ChatBotArena

Episode 45 · 07:55

And how to understand Llama 3.1's results. This is AI generated audio with Python and 11Labs. Source code: https://github.com/natolambert/interconnects-tools Original post: ...

Llama 3.1 405b, Meta's AI strategy, and the new open frontier model ecosystem

Episode 44 · 15:22

Defining the future of the AI economy and regulation. Is Meta's AI play equivalent to the Unix stack for open-source software? This is AI generated audio with Python and 11Labs. Source...

SB 1047, AI regulation, and unlikely allies for open models

Episode 43 · 14:20

The rallying of the open-source community against CA SB 1047 can represent a turning point for AI regulation. This is AI gen...

Switched to Claude 3.5

Episode 42 · 06:40

Speculations on the role of RLHF and why I love the model for people who pay attention. This is AI generated audio with Python and 11Labs. Source code: https://...

Interviewing Dean Ball on AI policy

Episode 41 · 56:31

I’m really excited to resume the Interconnects Interviews with Dean W. Ball from the Hyperdimensional Substack. We cover the whole stack of recent happenings in AI policy, focusing o...

RLHF Roundup: Trying to get good at PPO, charting RLHF's impact, RewardBench retrospective, and a reward model competition

Episode 40 · 11:52

Things to be aware of if you work on language model fine-tuning. This is AI generated audio with Python and 11Labs. Source code: https://github.com/natolambert/interconnects-tools Origi...

Frontiers in synthetic data

Episode 39 · 11:27

Synthetic data is known to be a super powerful tool for every level of the language modeling stack. It's documented as being used for expanding vanilla pretraining data and creating ...

Text-to-video AI is already abundant

Episode 38 · 08:18

Signs point to a general-use Sora-like model coming very soon, maybe even with open weights. This is AI generated audio with Python and 11Labs. Source code: https://github.com/natolamb...

AI for the rest of us

Episode 37 · 12:35

Apple Intelligence makes a lot of sense when you get out of the AI bubble. This is AI generated audio with Python and 11Labs. Source code: https://github.com/natolambert/interconnects-...

A realistic path to robotic foundation models

Episode 36 · 07:49

Not "agents" and not "AGI." Some thoughts and excitement after revisiting the industry thanks to Physical Intelligence founders Sergey Le...

We aren't running out of training data, we are running out of open training data

Episode 35 · 08:29

Data licensing deals, scaling, human inputs, and repeating trends in open vs. closed. This is AI generated audio with Python and 11Labs. Source code: https://github.com/natolambert/int...

Name, image, and AI's likeness

Episode 34 · 09:03

Celebrity's power will only grow in the era of infinite content. This is AI generated audio with Python and 11Labs. Source code: https://github.com/natolambert/interconnects-tools Origi...

OpenAI chases Her

Episode 33 · 12:28

ChatGPT leaves the textbox, and Google is building the same, and more, as practical tools. This is AI generated audio with Python and 11Labs. Source code: https://github.com/natolamber...

OpenAI's Model (behavior) Spec, RLHF transparency, and personalization questions

Episode 32 · 14:05

Now we will have some grounding for when weird ChatGPT behaviors are intended or are side effects -- shrinking the Overton window of RLHF bugs. This is AI generated audio with Python and ...

RLHF: A thin line between useful and lobotomized

Episode 31 · 13:08

Many, many signs of life for preference fine-tuning beyond spoofing chat evaluation tools. This is AI generated audio with Python and 11Labs. Source code: https://github.com/natolamber...

Phi 3 and Arctic: Outlier LMs are hints

Episode 30 · 09:46

Models that seem totally out of scope from recent open LLMs give us a sneak peek of where the industry will be in 6 to 18 months. This is AI generated audio with Python and 11Labs. Sou...

AGI is what you want it to be

Episode 29 · 10:38

Certain definitions of AGI are backing people into a pseudo-religious corner. This is AI generated audio with Python and 11Labs. Source code: https://github.com/natolambert/interconnec...