Show HN: Neum AI – Open-source large-scale RAG framework
5 by picohen | 0 comments on Hacker News.
Over the last couple of months we have been supporting developers in building large-scale RAG pipelines that process millions of pieces of data. We documented our approach in an HN post ( https://ift.tt/PmqNBga ) a couple of weeks ago. Today, we are open-sourcing the framework we have developed. The framework focuses on RAG data pipelines and provides scale, reliability, and data synchronization capabilities out of the box.

For those newer to RAG: it is a technique for providing context to Large Language Models. It consists of grabbing pieces of information (e.g. news articles, papers, descriptions) and incorporating them into prompts to help contextualize the responses. The technique goes one level deeper in finding the right pieces of information to incorporate. The search for relevant information is done with vector embeddings and vector databases. Those news articles, papers, etc. are transformed into vector embeddings that represent the semantic meaning of the information. These vector representations are organized into indexes where we can quickly search for the pieces of information that most closely resemble (from a semantic perspective) a given question or query. For example, if I take news articles from this year, vectorize them, and add them to an index, I can quickly search for pieces of information about the US elections.

To help achieve this, the Neum AI framework features:

- Built-in data connectors for common data sources, embedding services, and vector stores, giving you the modularity to build data pipelines to your specification.
- Pre-processing capabilities in the connectors for defining loading, chunking, and selecting strategies that optimize the content to be embedded. This also includes extracting metadata that will be associated with a given vector.
- Support for large-scale jobs through a high-throughput distributed architecture. The connectors let you parallelize tasks like downloading documents, processing them, generating embeddings, and ingesting data into the vector DB.
- Data scheduling and synchronization for sources that change continuously, including delta syncs where only new data is pulled.
- Querying of the data once it is in a vector database, including hybrid search using the metadata added during pre-processing.
- Capturing feedback on retrieved data as part of the querying process, as well as running evaluations against different pipeline configurations.

(Illustrative sketches of each of these ideas follow below.)

Try it out, and if you are interested in chatting more about this, shoot us an email at founders@tryneum.com
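To make the embed-index-search loop described above concrete, here is a minimal sketch in plain Python and NumPy. The embed() function is a toy, hash-seeded stand-in for a real embedding service (e.g. an OpenAI or Hugging Face model); only the overall shape of the loop is the point.

    # Minimal sketch of the RAG retrieval loop: embed documents, index them,
    # then search by cosine similarity. embed() is a toy stand-in for a real
    # embedding model and carries no actual semantic meaning.
    import numpy as np

    def embed(text: str, dim: int = 64) -> np.ndarray:
        # Deterministic (within one run) random vector seeded by the text's hash.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(dim)
        return v / np.linalg.norm(v)

    # Build the index: one vector per document.
    docs = [
        "Candidates file paperwork ahead of the US elections.",
        "A new telescope captures images of a distant galaxy.",
        "Polling stations prepare for election day turnout.",
    ]
    index = np.stack([embed(d) for d in docs])

    # Query: embed the question and rank documents by cosine similarity.
    query = embed("What is happening with the US elections?")
    scores = index @ query  # vectors are normalized, so dot product = cosine
    for i in np.argsort(-scores):
        print(f"{scores[i]:.3f}  {docs[i]}")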
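The loading/chunking/metadata step could look something like the sketch below. The chunk size, overlap, and metadata fields are illustrative assumptions, not Neum's defaults or API.

    # Sketch of a pre-processing step: split a document into overlapping
    # chunks and attach metadata that will travel with each vector.
    from dataclasses import dataclass, field

    @dataclass
    class Chunk:
        text: str
        metadata: dict = field(default_factory=dict)

    def chunk_document(text: str, source: str, size: int = 500, overlap: int = 50) -> list[Chunk]:
        chunks = []
        step = size - overlap
        for start in range(0, max(len(text) - overlap, 1), step):
            chunks.append(Chunk(
                text=text[start : start + size],
                # Metadata is what later enables hybrid search and filtering.
                metadata={"source": source, "offset": start},
            ))
        return chunks

    chunks = chunk_document("..." * 400, source="news/2023/article-42.html")
    print(len(chunks), chunks[0].metadata)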
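For the parallel fan-out (download, process, embed, ingest), a real large-scale deployment would use a distributed queue of workers; a thread pool shows the shape of it. This sketch reuses embed() and chunk_document() from the sketches above, and download()/upsert() are hypothetical stubs standing in for a fetcher and a vector DB client.

    # Sketch of parallelized ingestion: each task downloads a document,
    # chunks it, embeds the chunks, and writes them to the vector store.
    from concurrent.futures import ThreadPoolExecutor

    def download(url: str) -> str:
        return f"document body for {url} " * 100   # stub for a real fetch

    def upsert(vectors, chunks) -> None:
        pass                                        # stub for a vector DB write

    def process_one(url: str) -> int:
        raw = download(url)
        chunks = chunk_document(raw, source=url)
        vectors = [embed(c.text) for c in chunks]
        upsert(vectors, chunks)
        return len(chunks)

    urls = [f"https://example.com/doc/{i}" for i in range(1000)]
    with ThreadPoolExecutor(max_workers=32) as pool:
        total = sum(pool.map(process_one, urls))
    print(f"ingested {total} chunks")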
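One common way to implement a delta sync is a high-water-mark timestamp: remember when the last successful sync ran and pull only records created or modified since then. The state file and record fields below are illustrative assumptions.

    # Sketch of a delta sync using a persisted "last sync" watermark.
    import json, time
    from pathlib import Path

    STATE = Path("sync_state.json")

    def load_last_sync() -> float:
        return json.loads(STATE.read_text())["last_sync"] if STATE.exists() else 0.0

    def delta_sync(fetch_since) -> int:
        last = load_last_sync()
        new_records = fetch_since(last)  # only records newer than the watermark
        for rec in new_records:
            ...  # chunk, embed, and upsert as in the sketches above
        STATE.write_text(json.dumps({"last_sync": time.time()}))
        return len(new_records)

    # Usage with a fake source that filters by an "updated" timestamp:
    records = [{"id": 1, "updated": 100.0}, {"id": 2, "updated": time.time()}]
    print(delta_sync(lambda ts: [r for r in records if r["updated"] > ts]))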
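Hybrid search combines vector similarity with the metadata captured during pre-processing. The sketch below filters candidates by a metadata field before ranking them by cosine similarity; it reuses the toy embed() from the first sketch, and the filter semantics are an assumption, not Neum's query API.

    # Sketch of hybrid search: metadata filter first, vector ranking second.
    entries = [
        {"text": "Election results trickle in overnight.", "meta": {"source": "news"}},
        {"text": "Galaxy survey releases new data.",        "meta": {"source": "science"}},
        {"text": "Voters queue at polling stations.",       "meta": {"source": "news"}},
    ]
    for e in entries:
        e["vec"] = embed(e["text"])

    def hybrid_search(query: str, source: str, top_k: int = 2):
        qv = embed(query)
        candidates = [e for e in entries if e["meta"]["source"] == source]  # metadata filter
        candidates.sort(key=lambda e: float(e["vec"] @ qv), reverse=True)   # vector ranking
        return candidates[:top_k]

    for e in hybrid_search("US elections", source="news"):
        print(e["text"])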
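Finally, a minimal sketch of what capturing retrieval feedback and comparing pipeline configurations could look like: log whether users found retrieved chunks helpful, and score a configuration by how often a known-relevant chunk lands in the top-k results. The retrieval functions and labeled data here are entirely hypothetical.

    # Sketch of feedback capture plus a simple top-k hit-rate evaluation.
    feedback_log = []

    def record_feedback(query: str, chunk_id: str, helpful: bool) -> None:
        feedback_log.append({"query": query, "chunk": chunk_id, "helpful": helpful})

    def hit_rate(retrieve, labeled: list[tuple[str, str]], k: int = 3) -> float:
        # labeled: (query, id of a chunk known to be relevant to it)
        hits = sum(1 for q, rel in labeled if rel in retrieve(q)[:k])
        return hits / len(labeled)

    # Compare two hypothetical pipeline configs on the same labeled queries:
    labeled = [("US elections", "c1"), ("galaxy survey", "c7")]
    print(hit_rate(lambda q: ["c1", "c3", "c9"], labeled))  # config A -> 0.5
    print(hit_rate(lambda q: ["c2", "c7", "c1"], labeled))  # config B -> 1.0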