Google vs Open-Source AI: The Battle Heats Up
Update: 2023-05-05
Description
In a leaked internal document, a Google researcher claims that open-source AI is outcompeting Google and OpenAI in terms of speed, customization, privacy, and capability. While Google's models may hold a slight edge in quality, the gap is rapidly closing. To stay ahead, the author suggests that Google collaborate with open-source AI and enable third-party integrations.
Google can invest in innovative open-source technologies such as LoRA (Low-Rank Adaptation), which factors weight updates into small low-rank matrices, making model fine-tuning cheaper and faster. Moreover, stackable fine-tuning lets improvements be layered on top of one another iteratively, avoiding full model retraining. Google should also invest in more aggressive forms of distillation, be thoughtful about whether each new application or idea really needs a new model, and train on small, highly curated datasets.
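To make the cost argument concrete, here is a minimal sketch of the LoRA idea in numpy. The shapes and names are illustrative assumptions, not anything from the memo: instead of fine-tuning a full weight matrix W, only a pair of small low-rank factors is trained.

```python
import numpy as np

# Hypothetical layer sizes; LoRA's savings grow with matrix size.
d_in, d_out, rank = 1024, 1024, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen pretrained weights
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))                   # trainable, initialized to zero

def forward(x):
    # Base output plus the low-rank update B @ A; because B starts at
    # zero, the adapted model initially matches the pretrained one.
    return W @ x + B @ (A @ x)

full_params = W.size            # 1,048,576 values to fine-tune directly
lora_params = A.size + B.size   # 16,384 trainable values
print(full_params // lora_params)  # → 64
```

The 64x reduction in trainable parameters is what makes fine-tuning "cheap and fast" here, and because each update is just a (B, A) pair, separately trained adaptations can be stored, swapped, or stacked without retraining W.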
While open-source AI has significant advantages, corporations like Google and OpenAI retain a competitive edge because individuals have limited access to cutting-edge research on large language models (LLMs). However, by owning the platform where much LLM innovation occurs, Meta effectively garners free labor from around the world.
Google should collaborate with the open-source AI community, prioritize enabling third-party integrations, and invest in open-source technologies such as LoRA and stackable fine-tuning. Despite relinquishing some control over its models, Google could establish itself as an open-source leader by publishing model weights for smaller LLM variants.
Meanwhile, the open-source AI community continues to make rapid progress, with new models and techniques released frequently, including instruction tuning and multimodality achieved in roughly an hour of training, and models claiming "parity" with Bard. Open Assistant has released an RLHF-aligned model and dataset that comes close to ChatGPT in human-preference evaluations and is accessible to a wider audience.
So, should Google compete or collaborate with open-source AI? Tune in next time to find out!