
A Weakly Supervised Deep Generative Model for Complex Image Restoration and Style Transformation
  • Weixing Dai
  • Ivy H. M. Wong
  • Terence T. W. Wong

The Hong Kong University of Science and Technology

Corresponding Author: [email protected]


Abstract

The datasets for transforming autofluorescence images into histochemically stained images were acquired from human breast biopsy tissue and human liver cancer tissue. Breast cancer and liver cancer tissues were obtained surgically or through biopsy, then formalin-fixed and paraffin-embedded (FFPE). Thin tissue slices, with a thickness of 4 µm, were sectioned and placed on quartz slides, and the sections were deparaffinized prior to imaging.

The autofluorescence images were acquired with a wide-field inverted microscope equipped with a 10X/0.3 numerical aperture (NA) objective lens (Plan Fluorite, Olympus Corp.), an infinity-corrected tube lens (TTL-180-A, Thorlabs Inc.), and a monochrome scientific complementary metal-oxide-semiconductor camera (pco.panda 4.2, PCO Inc.). A 265 nm deep-ultraviolet light-emitting diode (M265L4, Thorlabs Inc.) was used as the excitation light source because of its strong absorption by cell nuclei [28], which provides high nuclear contrast without labels [29]. After the autofluorescence image was acquired, the same slide was stained with H&E, and its bright-field images were captured using a whole-slide scanner equipped with a 20X/0.75 NA objective lens (NanoZoomer-SQ, Hamamatsu Photonics K.K.). All human experiments were carried out in accordance with a clinical research ethics review approved by the Institutional Review Board of the Chinese University of Hong Kong/New Territories East Cluster (reference number: 2021.597).
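For illustration, the following Python sketch shows one plausible way the acquired image pairs could be assembled into training data: the 20X H&E scan is downscaled to the pixel grid of the corresponding 10X autofluorescence frame, and co-located patches are extracted. The file paths, patch size, and bicubic resampling are assumptions made for this example; they are not taken from the original acquisition or training pipeline.

# Minimal sketch (not the authors' released code): pairing one autofluorescence
# field with its H&E counterpart for weakly supervised training.
# File names, patch size, and the 2x downscale (20X H&E vs. 10X autofluorescence)
# are illustrative assumptions.
from pathlib import Path

import numpy as np
from PIL import Image

AF_PATH = Path("autofluorescence/field_001.tif")  # hypothetical 10X autofluorescence frame
HE_PATH = Path("he_scan/field_001.tif")           # hypothetical 20X H&E crop of the same region
PATCH = 256                                       # assumed training patch size


def load_pair(af_path: Path, he_path: Path) -> tuple[np.ndarray, np.ndarray]:
    """Load an autofluorescence frame and its H&E image on a matching pixel grid."""
    af = Image.open(af_path).convert("L")    # monochrome sCMOS frame
    he = Image.open(he_path).convert("RGB")  # bright-field H&E scan
    # The H&E scan was taken at 20X and the autofluorescence at 10X, so the
    # H&E image is downscaled to the autofluorescence pixel grid.
    he = he.resize(af.size, Image.Resampling.BICUBIC)
    return np.asarray(af), np.asarray(he)


def extract_patches(af: np.ndarray, he: np.ndarray, size: int = PATCH):
    """Yield co-located patches from the coarsely aligned image pair."""
    h, w = af.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield af[y:y + size, x:x + size], he[y:y + size, x:x + size]


if __name__ == "__main__":
    af_img, he_img = load_pair(AF_PATH, HE_PATH)
    pairs = list(extract_patches(af_img, he_img))
    print(f"Extracted {len(pairs)} patch pairs of size {PATCH}x{PATCH}")

In practice, such coarse rescaling would likely be followed by finer registration between the autofluorescence and H&E images before patch pairs are used for training; the sketch only shows the pixel-grid matching step implied by the different objective magnifications.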