
Releases: InternLM/MindSearch

v0.1.0

05 Nov 02:51
19f1948

An open-source AI Search Engine Framework with Perplexity.ai Pro performance

Technical Report: https://arxiv.org/abs/2407.20183

Search engines provide extensive and up-to-date web information but often fail to tailor search results to complex human intentions. Inspired by the remarkable progress of Large Language Models (LLMs), recent works enhance search engines with LLMs. However, these methods still achieve unsatisfactory performance due to three critical problems:

  • LLMs fail to decompose complex requests into atomic queries, making it difficult to retrieve relevant information accurately and completely.
  • Search results are massive compared to those of other tasks, requiring dedicated pre-selection.
  • Iteratively accumulated web search content may quickly exceed the maximum input length of the LLM.

Figure 1: The overall framework of MindSearch. It consists of two main ingredients: WebPlanner and WebSearcher. WebPlanner acts as a high-level planner, orchestrating the reasoning steps and multiple WebSearchers. WebSearcher conducts fine-grained web searches and summarizes valuable information back to the planner, formalizing a simple yet effective multi-agent framework.

To address these issues, we introduce MindSearch, a simple yet effective LLM-based multi-agent framework for web search, consisting of a WebPlanner and WebSearchers. WebPlanner models complex problem-solving as a dynamic graph construction process: it decomposes the question into sub-queries as graph nodes and progressively extends the graph based on the search results from WebSearcher. For each sub-query it is tasked with, WebSearcher performs hierarchical information retrieval with search engines and collects valuable information for WebPlanner.


Figure 2: Subjective evaluation results judged by human experts on open-set QA questions. MindSearch outperforms ChatGPT-Web and Perplexity.ai Pro by a large margin in terms of depth, breadth, and factuality.

The multi-agent design of MindSearch distributes the massive information load across different agents, enabling the whole framework to process a much longer context (i.e., more than 300 web pages). To validate the effectiveness of our approach, we extensively evaluate MindSearch on both closed-set and open-set QA problems with GPT-4o and InternLM2.5-7B models. Experimental results demonstrate that our approach significantly improves the response quality in terms of depth and breadth. Moreover, responses from MindSearch based on InternLM2.5-7B are preferred by human judges over those from the ChatGPT-Web (powered by GPT-4o) and Perplexity.ai applications, implying that MindSearch delivers a competitive solution for AI search engines.
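The context-scaling claim rests on a simple idea: no single agent ever sees all 300+ pages. A hedged sketch of that dispatch pattern, with the names `summarize_pages` and `MAX_PAGES_PER_AGENT` invented for illustration (they are not from the MindSearch codebase):

```python
MAX_PAGES_PER_AGENT = 30   # assumed per-agent context budget

def summarize_pages(pages: list[str]) -> str:
    # Placeholder for one searcher agent's LLM summarization call
    # over its share of the retrieved pages.
    return f"summary of {len(pages)} pages"

def dispatch(pages: list[str]) -> list[str]:
    """Chunk pages so each agent's input stays within its budget;
    the planner later reasons over the short summaries instead of
    the raw pages."""
    return [
        summarize_pages(pages[i:i + MAX_PAGES_PER_AGENT])
        for i in range(0, len(pages), MAX_PAGES_PER_AGENT)
    ]

summaries = dispatch([f"page {i}" for i in range(300)])
print(len(summaries))   # 10 agent-level summaries for 300 pages
```

The effective context is thus the sum of the agents' windows, while each individual LLM call stays bounded.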

The code is available at https://github.com/InternLM/MindSearch.