Interviewing Ross Taylor on LLM reasoning, Llama fine-tuning, Galactica, agents

Update: 2024-08-08

Description

I had the pleasure of talking with Ross Taylor, who has a great spectrum of unique experiences in the language modeling space — evaluation experience, Galactica lead author, Llama post-training, etc. This is a really great conversation on the frontier of language model (LM) reasoning, LM deployments and demos, LMs for science, RLHF, and other topics. I’ve been trying to get Ross to come on for a bit. He’s one of those people in the LM space who doesn’t speak too much, but when he does, you listen.

Ross Taylor was previously an LLM lead at Meta AI, heading up the reasoning team. Before that, he led the early work on LLM agents and was the research lead on the Galactica project. Earlier, he was a co-founder of Papers with Code, which was acquired by Meta in 2019. Before that, he worked as a quant in sports betting and finance, and before that as a policy advisor for the UK Government. He is currently working on a new startup.

Listen on Apple Podcasts, Spotify, and wherever you get your podcasts. For other Interconnects interviews, go here.

YouTube

Chapters

* [00:00:00] Introduction of Ross Taylor and his background

* [00:02:12] Papers with Code

* [00:09:58] Galactica, goals, controversy, legacy

* [00:18:12] Technical details of the Galactica model

* [00:23:18] Potential for language models to make scientific discoveries

* [00:25:21] Defining and improving reasoning in language models

* [00:32:38] Process-based reward models and their potential applications

* [00:35:00] Generating synthetic data for SFT

* [00:40:23] Evaluating the effectiveness of language models as judges for human preference data

* [00:42:43] Considerations for creating base models that are easy to fine-tune

* [00:46:45] Balancing SFT and RLHF

* [00:54:13] Characteristics of successful post-training teams

* [00:58:26] Future directions for language model development

We mention

* Galactica

* Papers with Code

* Rob Stojnic (co-founder of Papers with Code)

* DPO, PPO

* Armen Aghajanyan (Chameleon)

* Tom Scialom on Latent Space

* Soumith Chintala (PyTorch)

* Alex Graves

* Llama 3 paper

* Process Reward Models / Let’s Verify Step by Step

Transcript

Built with smol-podcaster and with love of Latent Space.

Nathan Lambert [00:01:07]: Today, we're here with Ross. This is a really exciting one. I've been trying to get Ross on the show for a while. Ross has done a lot of interesting work. And also the path to how you ended up working on state-of-the-art LLaMA work at Meta is very interesting to me. So we're going to start with some of that, but then there are a few people that want to know more about reasoning and some of the RLHF stuff. We won't cover the secretive new start-up - I don't know what it is, but that's how it goes these days. I'm sure it'll be great. So welcome to the show!

Ross Taylor [00:01:41]: Thanks for having me.

Nathan Lambert [00:01:44]: So I wanted to start with Papers with Code. For people that don't know, Papers with Code is one of these platforms - I never was a heavy user of it - but it collates papers, people can upvote them, popular papers, attaching code and datasets and evaluations to papers, which is great - it was sort of ahead of its time. It fits into a lot of these open ecosystem things. So I'm kind of curious how you ended up there and why you all started this startup that ended up building this thing that got acquired by Meta?

Ross Taylor [00:02:12]: Yeah, that was a weird one. This was back in 2018. So I was at an incubator, I had just quit my previous job and I was like, okay, I want to do a startup. And I met Rob, my co-founder, who came along with me for the journey. We both came from different backgrounds. I was from a sports betting / quant finance kind of background, which is a whole other episode I guess. And Rob was in various startups, applying ML to things like hate speech detection, that kind of stuff. And the cool thing was, we both resonated on similar kinds of problems within the ML space, even though we came from different domains. So we spent a lot of time doing various experiments, trying to make new kinds of ML tooling, thinking of these stupid questions like “what is the Git equivalent for ML?” - that kind of stuff. One of those experiments was hacking around on this little website to solve a really basic problem: I'm trying to reproduce this paper, but I can't find the code. That was the thing that really blew up beyond our expectations. It was weird because we thought it was fairly trivial at first.

Nathan Lambert [00:03:16]: What year was this? 2018?

Ross Taylor [00:03:18]: Yeah.

Nathan Lambert [00:03:19]: This makes sense. I was starting Deep RL then, and Deep RL was so hot - that was probably the worst that evaluation has ever been in ML. People complain about it today, but with Deep RL evaluation, every single person was just lying to make themselves look better.

Ross Taylor [00:03:38]: The interesting thing now is that the open ecosystem has shifted to focus more on weights as a central artifact rather than code. I think there's an interesting debate there. Would it be more useful to have the LLaMA-3 8B model weights or all the code for training LLaMA-3? I think there's still interesting debates to be had about what's actually useful.

Nathan Lambert [00:03:56]: I think the code would be more useful. OpenAI released their rules-based reward models, but that's kind of code washing - a bunch of people just release eval code now. And that's a whole other tier: actual training code versus eval code. But yeah, I guess I'll just skip ahead.

Ross Taylor [00:04:12]: So essentially Papers with Code was the thing that didn't die for us. We always thought we were going to make something else and Papers with Code was more of a marketing thing. But eventually we were like: okay, our users are telling us this is what we should be working on. And we expanded from that very simple use case of finding code towards indexing various artifacts in ML.

Another big problem was trying to find the state of the art in something like ImageNet and all these different benchmarks. There just wasn't a central place to find this information…So we had this quite good Christmas - me and Robert - where we hacked for the whole month, indexing every leaderboard we could and all the related papers. I didn't want to do any annotation again after that! But that took things to the next tier, and that's when things really started to blow up.

Nathan Lambert [00:05:03]: Because this was like the first round of leaderboards - now they're really popular with Hugging Face again. Is that just because it became a Meta thing and it's just kind of a thing that existed? You were like the first leaderboard company in a way, which I don't think many people think about. Yeah, which is weird.

Ross Taylor [00:05:19]: Yeah. And the interesting thing about us was that we never had to do any marketing because everything was from organic traffic. So you would type in “state of the art ImageNet” and we would come to the top as the most useful site. That was really the source of our growth, and we grew to a million MAU fairly quickly. And as for Meta, we were in touch with the PyTorch folks at the time who we really liked. You know - Soumith, Joe - those folks, and they had a shared interest in promoting the open source ecosystem back in 2018/19. And while it was like a tough decision, we were just like “we really like working with these people, we want to work more closely with them”, and that got us into Meta.

And then within Meta, we originally continued to develop the platform. But the big shift for us was that, even then, we saw we were moving to a world where compute was the currency. And we saw that, if we wanted to be well positioned in five years' time, we needed to be building these large-scale systems. Even for our own platform, we had lots of ML in the backend and we saw we were using fewer and fewer models to do more and more tasks. So that kind of shifted us into research, into Galactica, and then eventually LLaMA and that kind of stuff.

It was a weird shift because we were…
