DiscoverGPT Reviews

Meta's Llama 3.1 vs. GPT-4o 🀯 // OpenAI's own AI chips 🧐 // SlowFast-LLaVA for Video LLMs 🎬

Update: 2024-07-23

Description

Meta's upcoming Llama 3.1 models could outperform the current state-of-the-art closed-source LLM, OpenAI's GPT-4o.


OpenAI is planning to develop its own AI chip to optimize performance and potentially supercharge its progress towards AGI.


Apple's SlowFast-LLaVA is a new training-free video large language model that captures both detailed spatial semantics and long-range temporal context in video without exceeding the token budget of commonly used LLMs.


Google's Conditioned Language Policy (CLP) is a general framework that builds on techniques from multi-task training and parameter-efficient finetuning to develop steerable models that can trade off multiple conflicting objectives at inference time.


Contact: sergi@earkind.com


Timestamps:


00:34 Introduction

01:28 LLAMA 405B Performance Leaked

03:01 OpenAI Wants Its Own AI Chips

04:25 Towards more cooperative AI safety strategies

06:01 Fake sponsor

07:35 SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models

09:17 AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?

10:56 Conditioned Language Policy: A General Framework for Steerable Multi-Objective Finetuning

12:46 Outro


Earkind