
Beyond Extraction: Contextualising Tabular Data for Efficient Summarisation by Language Models
  • Uday Allu, NLP Research Team
  • Biddwan Ahmed, NLP Research Team
  • Vishesh Tripathi, NLP Research Team

Corresponding Author: [email protected]


Abstract

The conventional Retrieval-Augmented Generation (RAG) architecture has proven effective for retrieving information from diverse documents. However, challenges arise in handling complex table queries, especially within PDF documents containing intricate tabular structures. This research introduces an innovative approach to improving the accuracy of complex table queries in RAG-based systems. Our methodology stores PDFs in the retrieval database and extracts tabular content separately. The extracted tables undergo context enrichment, in which headers are concatenated with their corresponding values. To ensure a comprehensive understanding of the enriched data, we employ a fine-tuned version of the Llama-2-chat language model for summarisation within the RAG architecture. Furthermore, we augment the tabular data with additional contextual meaning using the ChatGPT 3.5 API through a one-shot prompt. This enriched data is then fed into the retrieval database alongside the other PDFs. Our approach aims to significantly improve the precision of complex table queries, offering a promising solution to a longstanding challenge in information retrieval.
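As an illustration of the context-enrichment step described in the abstract, the following minimal Python sketch shows one way extracted table headers could be concatenated with their corresponding cell values to produce row-level text for downstream summarisation and retrieval. The function name and the example data are hypothetical and are not taken from the paper.

    # Minimal sketch (not the authors' code): concatenate each header with its
    # corresponding cell value so every row becomes a self-contained text line.

    def enrich_table(headers, rows):
        """Turn a table (header strings plus row tuples) into
        header-value strings, one line of text per row."""
        enriched = []
        for row in rows:
            pairs = [f"{h}: {v}" for h, v in zip(headers, row)]
            enriched.append("; ".join(pairs))
        return enriched

    if __name__ == "__main__":
        # Hypothetical example table
        headers = ["Region", "Quarter", "Revenue"]
        rows = [("EMEA", "Q1 2023", "1.2M"), ("APAC", "Q1 2023", "0.9M")]
        for line in enrich_table(headers, rows):
            print(line)
        # Region: EMEA; Quarter: Q1 2023; Revenue: 1.2M
        # Region: APAC; Quarter: Q1 2023; Revenue: 0.9M

Text of this form can then be passed to a summarisation model and stored in the retrieval database alongside the source PDFs, as the abstract describes.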
Submitted to TechRxiv: 10 Feb 2024
Published in TechRxiv: 14 Feb 2024