
On the Generation of Medical Dialogues for COVID-19
  • Wenmian Yang,
  • Guangtao Zeng,
  • Bowen Tan,
  • Zeqian Ju,
  • Subrato Chakravorty,
  • Xuehai He,
  • Shu Chen,
  • Xingyi Yang,
  • Qingyang Wu,
  • Zhou Yu,
  • Eric Xing,
  • Pengtao Xie
Corresponding author: Xingyi Yang, University of California ([email protected])

Abstract

Amid the COVID-19 pandemic, people experiencing COVID-19-related symptoms or exposed to risk factors have a pressing need to consult doctors. Because of hospital closures, many consulting services have moved online, and the shortage of medical professionals means that many people cannot receive online consultations in a timely manner. To address this problem, we aim to develop a medical dialogue system that can provide COVID-19-related consultations. We collected two dialogue datasets, CovidDialog (in English and Chinese, respectively), containing conversations between doctors and patients about COVID-19. On these two datasets, we train several dialogue generation models based on Transformer, GPT, and BERT-GPT. Since the two CovidDialog datasets are small, which carries a high risk of overfitting, we leverage transfer learning to mitigate the data deficiency. Specifically, we take Transformer, GPT, and BERT-GPT models pretrained on dialogue datasets and other large-scale corpora, and fine-tune them on our CovidDialog datasets. Experiments demonstrate that these approaches are promising in generating meaningful medical dialogues about COVID-19, but more advanced approaches are needed to build a fully useful dialogue system that can offer accurate COVID-19-related consultations. The data and code are available at https://github.com/UCSD-AI4H/COVID-Dialogue
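
The abstract describes fine-tuning pretrained GPT-style dialogue models on the CovidDialog data. The following Python sketch illustrates that kind of fine-tuning; it is not the authors' code. It assumes PyTorch and Hugging Face Transformers are installed, uses "microsoft/DialoGPT-medium" as a stand-in for the pretrained checkpoint, and uses a hypothetical (history, response) pair in place of the parsed CovidDialog files; the actual models, preprocessing, and hyperparameters are in the linked repository.

# Minimal sketch, not the authors' code: fine-tuning a pretrained GPT-style
# dialogue model on CovidDialog-style (history, response) pairs.
# Assumptions: torch and transformers are installed; the checkpoint name and
# the example pair below are illustrative stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Hypothetical parsed dialogue turns; the real data comes from the
# CovidDialog files in the linked repository.
pairs = [
    ("Patient: I have a dry cough and a fever. Could this be COVID-19?",
     "Doctor: Those symptoms are consistent with COVID-19; please get tested and self-isolate."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for history, response in pairs:
        # Concatenate history and response with EOS separators so the model
        # learns to continue the conversation with the doctor's reply.
        text = history + tokenizer.eos_token + response + tokenizer.eos_token
        batch = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
        loss = model(input_ids=batch["input_ids"], labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

At generation time, the same tokenizer and model can be used to decode a doctor reply conditioned on the patient's message, e.g. with model.generate on the encoded history.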