Neural Nets and Nobel Prizes: AI's 40-Year Journey from the Lab to Ubiquity

Update: 2024-10-25
Description

In this episode of AI + a16z, General Partner Anjney Midha shares his perspective on the recent collection of Nobel Prizes awarded to AI researchers in both Physics and Chemistry. He talks through how early work on neural networks in the 1980s spurred continuous advancement in the field — even through the "AI winter" — which resulted in today's extremely useful AI technologies.

Here's a sample of the discussion, in response to a question about whether we will see more high-quality research emerge from sources beyond large universities and commercial labs:

"It can be easy to conclude that the most impactful AI research still requires resources beyond the reach of most individuals or small teams. And that open source contributions, while valuable, are  unlikely to match the breakthroughs from well-funded labs. I've even heard heard some dismissive folks call it cute, and undermine the value of those.

"But on the other hand, I think that you could argue that open source and individual contributions are becoming increasingly more important in AI development. I think that the democratization of AI will lead probably to more diverse and innovative applications. And I think, in particular, the reason we should expect an explosion in home scientists — folks who aren't necessarily affiliated with a top-tier academic, or for that matter,  industry lab — is that as open source models get more and more accessible, the rate limiter really is on the creativity of somebody who's willing to apply the power of that model's computational ability to a novel domain. And there are just a ton of domains and combinatorial intersections of different disciplines.

"Our blind spot for traditional academia [is that] it's not particularly rewarding to veer off the publish-or-perish conference circuit. And if you're at a large industry lab and you're not contributing directly to the next model release, it's not that clear how you get rewarded. And so being an independent actually frees you up from the incentive misstructure, I think, of some of the larger labs. And if you get to leverage the millions of dollars that the Llama team spent on pre-training, applying it to data sets that nobody else has perused before, it results in pretty big breakthroughs."

Learn more:

They trained artificial neural networks using physics

They cracked the code for proteins’ amazing structures

Notable AI models by year

Follow on X:

Anjney Midha

Derrick Harris


Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
