Meta's Llama 3.1 vs. GPT-4o // OpenAI's own AI chips // SlowFast-LLaVA for Video LLMs
Description
Meta's upcoming Llama 3.1 models could outperform the current state-of-the-art closed-source LLM, OpenAI's GPT-4o.
OpenAI is planning to develop its own AI chip to optimize performance and potentially supercharge its progress towards AGI.
Apple's SlowFast-LLaVA is a new training-free video large language model that captures both detailed spatial semantics and long-range temporal context in video without exceeding the token budget of commonly used LLMs.
Google's Conditioned Language Policy (CLP) is a general framework that builds on techniques from multi-task training and parameter-efficient finetuning to develop steerable models that can trade off multiple conflicting objectives at inference time.
Contact: sergi@earkind.com
Timestamps:
00:34 Introduction
01:28 Llama 405B Performance Leaked
03:01 OpenAI Wants Its Own AI Chips
04:25 Towards more cooperative AI safety strategies
06:01 Fake sponsor
07:35 SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models
09:17 AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?
10:56 Conditioned Language Policy: A General Framework for Steerable Multi-Objective Finetuning
12:46 Outro