Summary

We created a guide for fine-tuning and evaluating LLMs, using LangSmith for dataset management and evaluation. We did this both with an open source LLM (trained on Colab with Hugging Face) and with OpenAI's new fine-tuning service. As a test case, we fine-tuned Llama-2-7b-chat and gpt-3.5-turbo on an extraction task (knowledge graph triple extraction), using training data exported from LangSmith, and then evaluated the results with LangSmith as well. The Colab guide is here.

Context

I
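The workflow above exports dataset examples from LangSmith and reshapes them into the chat-message JSONL format that OpenAI's fine-tuning endpoint expects. A minimal sketch of that conversion step, assuming each exported example carries an `inputs["text"]` field and an `outputs["triples"]` list (the demo records and the `SYSTEM_PROMPT` wording are illustrative, not taken from the guide):

```python
import json

# Illustrative system prompt for the triple-extraction task.
SYSTEM_PROMPT = (
    "Extract knowledge graph triples (subject, relation, object) "
    "from the given text."
)

def example_to_chat_record(example: dict) -> dict:
    """Map one LangSmith-style example (inputs/outputs dicts) to the
    chat format used for OpenAI fine-tuning: role-tagged messages."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": example["inputs"]["text"]},
            # Serialize the gold triples as the assistant's target output.
            {"role": "assistant",
             "content": json.dumps(example["outputs"]["triples"])},
        ]
    }

def to_jsonl(examples: list[dict]) -> str:
    """One JSON record per line, ready to upload as a training file."""
    return "\n".join(json.dumps(example_to_chat_record(e)) for e in examples)

if __name__ == "__main__":
    # Hypothetical example; a real export would come from the LangSmith
    # client (e.g. listing examples from a named dataset).
    demo = [
        {
            "inputs": {"text": "Paris is the capital of France."},
            "outputs": {"triples": [["Paris", "capital_of", "France"]]},
        }
    ]
    print(to_jsonl(demo))
```

The resulting JSONL file is what gets uploaded when creating the fine-tuning job; the same examples can be kept in LangSmith for the evaluation pass afterward.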