Tokens, Prompts & Meta's Megabyte + AI News
Description
Med-PaLM 2 outperforms human doctors in 86.5% of cases
Meta AI’s Megabyte: a 1.2M-token model
Meta’s LIMA: Less Is More for Alignment
Photoshop & Adobe Firefly get even better
Crazy things
Job interview real-time live transcription and responses
Drag GAN - Modifying images by dragging certain points
Prompt tips
- Know how the model was trained and choose your prompt-engineering style to match (e.g. if it was trained on Q&A pairs, format your prompt as a Q&A).
- Give the model several examples of what you expect (e.g. if you want the answer in a particular style).
- Specificity vs. generality: the more specific you are, the better the answer you’ll get.
- Whenever you write a prompt, remember that the (transformer) model is predicting the most probable next sequence of tokens.
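The tips above can be sketched as a small helper that assembles a few-shot, Q&A-formatted prompt. This is an illustrative sketch, not from the episode; the function name and example questions are hypothetical.

```python
# Sketch of the prompt tips: match the training format (Q&A here),
# supply several examples (few-shot), and end with a specific question.
# The trailing "A:" invites the model to continue with the most
# probable next tokens. All content below is made up for illustration.

def build_qa_prompt(examples, question):
    """Assemble a few-shot Q&A prompt from (question, answer) pairs."""
    lines = []
    for q, a in examples:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {question}")
    lines.append("A:")  # left open for the model to complete
    return "\n".join(lines)

examples = [
    ("What is the capital of France?", "Paris"),
    ("What is 2 + 2?", "4"),
]
prompt = build_qa_prompt(examples, "What is the chemical symbol for gold?")
print(prompt)
```

The same idea applies to any format the model saw during training: if it was fine-tuned on instruction/response pairs, mirror that layout instead of Q&A.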
Links to other companies / models / startups mentioned
Clearword (meeting-notes summariser)
Tactiq
Mentions: Model conditioning, Tokens, Prompt Engineering, Model Training, Sequence-to-Sequence, RLHF, model weights, hyperparameters, supervised learning, unsupervised learning, annotated data, case law, fine-tuning, model selection, LLMs, Google, Rainbow Corn, Generative AI, Emergent capabilities
Book: Michio Kaku, Physics of the Future (misquoted in the episode as Science of the Future)