The pace of AI development over the last few months has been simply exhausting. Exhilarating, but exhausting.
Just look at this chart. This is just the open source LLMs.
I don’t check this leaderboard very often, but the Mistral models that were winning two weeks ago aren’t even in the top 20.
Here’s some cool stuff I found since I ate dinner an hour ago. (Sorry, I literally don’t know what else to do…there’s too much cool stuff.)
- WikiChat on GitHub: WikiChat enhances the factuality of large language models by retrieving data from Wikipedia.
- LLMCompiler on GitHub: LLMCompiler is a framework for efficient parallel function calling with both open-source and closed-source large language models.
- Flowise on GitHub: Flowise offers a drag & drop user interface to build customized flows for large language models.
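The "parallel function calling" idea behind LLMCompiler is worth a quick illustration: when tool calls in a plan don't depend on each other, they can run concurrently instead of sequentially. Here's a toy sketch of that concept using `asyncio` — the function names (`search_weather`, `search_population`) are made up for illustration, and this is not the LLMCompiler API.

```python
# Toy sketch of parallel function calling: independent tool calls are
# dispatched concurrently rather than one at a time. Illustrative only,
# not the LLMCompiler API.
import asyncio

async def search_weather(city: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for a slow tool/API call
    return f"weather({city})"

async def search_population(city: str) -> str:
    await asyncio.sleep(0.1)  # another independent tool call
    return f"population({city})"

async def run_plan(city: str) -> list[str]:
    # The two calls have no data dependency, so gather them in parallel;
    # a dependent step would instead await its inputs first.
    return list(await asyncio.gather(
        search_weather(city), search_population(city)))
```

With real tool latencies, the wall-clock win is roughly the longest single call instead of the sum of all of them.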
1/n Was December 8th, 2023, the day we came to realize that AGI technology has been democratized? That it cannot be confined to the few and the GPU-rich? Let me explain what happened yesterday. pic.twitter.com/syLBuCVqG6
— Carlos E. Perez (@IntuitMachine) December 9, 2023
Mixtral 8x7B in LangSmith Playground
Thanks to our friends at @thefireworksai, you can try out the newest @MistralAI mixtral-8x7B model from LangSmith Playground and Hub for free!
s/o to @fireworksai for the experimental chat fine-tune as well!
Sign up for LangSmith here:… pic.twitter.com/tYHvP2jCaT
— LangChain (@LangChainAI) December 10, 2023
RAG over Complex PDFs 📑
The issue with basic RAG strategies (chunking, top-k) is that they're fine with plain .txt essays, but they do terribly over complex documents with embedded objects like tables, diagrams 📊, and hierarchical sections 🪆
You can solve this with… pic.twitter.com/jSMnzdDXIS
— LlamaIndex 🦙 (@llama_index) December 10, 2023
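For readers new to the "chunking, top-k" baseline that tweet critiques, here's a minimal sketch of that retrieval step: split a document into fixed-size chunks, embed each one, and return the k chunks most similar to the query. A toy bag-of-words embedding stands in for a real embedding model here; everything else about the shape is the standard pattern.

```python
# Minimal sketch of basic RAG retrieval: chunk, embed, rank by cosine
# similarity, keep top-k. The bag-of-words "embedding" is a stand-in
# for a real embedding model.
import math
from collections import Counter

def chunk(text: str, size: int = 50) -> list[str]:
    """Split text into fixed-size word chunks (the naive strategy)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy embedding: word-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, chunks: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]
```

You can see why this fails on complex PDFs: a fixed-size word window happily slices a table or a nested section heading in half, and similarity over the mangled chunk retrieves garbage.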
An increasing use case in retrieval is not only fetching the top-k most similar chunks to queries, but exploring entity relationships.
The way you can do that is with knowledge graphs, and now it's easier than ever to explore how to use them in @llama_index.
Simply download our… https://t.co/ibDZvSpfAf
— Jerry Liu (@jerryjliu0) December 8, 2023
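To make the entity-relationship idea concrete, here's a hedged sketch of retrieval over a tiny in-memory knowledge graph: facts are stored as (subject, relation, object) triples, and a query about an entity pulls every triple touching it rather than text chunks that merely look similar. The class and method names here are illustrative, not the LlamaIndex API.

```python
# Sketch of entity-relationship retrieval over a triple store.
# Illustrative names only; not the LlamaIndex API.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self) -> None:
        self.triples: list[tuple[str, str, str]] = []
        self.by_entity: dict[str, list[int]] = defaultdict(list)

    def add(self, subj: str, rel: str, obj: str) -> None:
        """Store a (subject, relation, object) fact, indexed by both entities."""
        idx = len(self.triples)
        self.triples.append((subj, rel, obj))
        self.by_entity[subj.lower()].append(idx)
        self.by_entity[obj.lower()].append(idx)

    def neighbors(self, entity: str) -> list[tuple[str, str, str]]:
        """1-hop expansion: every triple that mentions the entity."""
        return [self.triples[i] for i in self.by_entity[entity.lower()]]
```

The contrast with top-k similarity search is the point: "who released Mixtral, and where is that company based?" walks two explicit edges instead of hoping both facts land in one retrieved chunk.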