MPT-30B: Raising the bar for open-source foundation models : r/LocalLLaMA

Description

Long text: Effective Long-Context Scaling of Foundation Models, PDF, Data

🔥 ThursdAI Sunday special - Deep dives into Crew AI with Joao then a tasty Bagel discussion with Jon Durbin – ThursdAI - The top AI news from the past week – Podcast – Podtail

reeducator/vicuna-13b-cocktail · Cocktail Testing and Discussion

awesome-stars/README.md at master · utensil/awesome-stars · GitHub

A lean-ish web client for Lemmy : r/Lemmy

๐Ÿบ๐Ÿฆโ€โฌ› LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4 : r/LocalLLaMA

LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models

How to auto-label images with OWL-ViT - SOTA Google's foundation object detector - Supervisely : r/learnmachinelearning

MPT-30B-Instruct : r/LocalLLaMA

[R] New Open Source LLM: GOAT-7B (SOTA among the 7B models) : r/MachineLearning

Nick Carlino on LinkedIn: Dell Project Helix & Generative AI 101 Part 4: Inferencing (Running your…
