DeepCAN: A Modular Deep Learning System for Automated Cell Counting and Viability Analysis

Furkan Eren, Mete Aslan, Dilek Kanarya, Yigit Uysalli, Musa Aydin, Berna Kiraz, Omer Aydin, Alper Kiraz*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

22 Citations (Scopus)

Abstract

Precise and rapid monitoring of key cytometric features such as cell count, size, morphology, and DNA content is crucial in life science applications. Traditionally, image cytometry relies on visual inspection of hemocytometers, an approach that is error-prone due to operator subjectivity. Recently, deep learning approaches have emerged as powerful tools enabling quick and accurate image cytometry applicable to different cell types. By leading to simpler, more compact, and more affordable solutions, these approaches have revealed image cytometry as a viable alternative to flow cytometry or Coulter counting. In this study, we demonstrate a modular deep learning system, DeepCAN, providing a complete solution for automated cell counting and viability analysis. DeepCAN employs three neural network blocks, called Parallel Segmenter, Cluster CNN, and Viability CNN, trained for initial segmentation, cluster separation, and viability analysis, respectively. Parallel Segmenter and Cluster CNN together achieve accurate segmentation of individual cells, while the Viability CNN block performs viability classification. A modified U-Net, a well-known deep neural network model for bioimage analysis, is used in Parallel Segmenter, while the LeNet-5 architecture and its modified version, Opto-Net, are used for Cluster CNN and Viability CNN, respectively. We train Parallel Segmenter using 15 images of A2780 cells and 5 images of yeast cells, containing 14742 individual cell images in total. Similarly, 6101 and 5900 A2780 cell images are employed for training the Cluster CNN and Viability CNN models, respectively. 2514 individual A2780 cell images are used to test the overall segmentation performance of Parallel Segmenter combined with Cluster CNN, revealing high Precision/Recall/F1-Score values of 96.52%/96.45%/98.06%.
The cell counting/viability performance of DeepCAN is tested with A2780 (2514 cells), A549 (601 cells), Colo (356 cells), and MDA-MB-231 (887 cells) cell images, revealing high analysis accuracies of 96.76%/99.02%, 93.82%/95.93%, 92.18%/97.90%, and 85.32%/97.40%, respectively.
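The three-block composition described in the abstract can be illustrated as a minimal pipeline skeleton. This is a hypothetical sketch, not the authors' code: the names (`run_deepcan`, `CellRegion`, `split_cluster`) and the toy stand-in networks are illustrative assumptions; in the actual system, Parallel Segmenter is a modified U-Net and the cluster/viability classifiers are LeNet-5 and Opto-Net models.

```python
# Hypothetical sketch of DeepCAN's modular three-stage flow:
# initial segmentation -> cluster separation -> viability classification.
# All names and stand-in "networks" below are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class CellRegion:
    patch: list                 # cropped image data (stand-in for an array)
    viable: bool = False


def split_cluster(region: CellRegion) -> List[CellRegion]:
    # Placeholder: a real system would re-segment the clustered region
    # into its constituent single cells.
    half = len(region.patch) // 2
    return [CellRegion(region.patch[:half]), CellRegion(region.patch[half:])]


def run_deepcan(image,
                parallel_segmenter: Callable[[object], List[CellRegion]],
                cluster_cnn: Callable[[CellRegion], bool],
                viability_cnn: Callable[[CellRegion], bool]) -> List[CellRegion]:
    """Compose the three blocks: segment the image, separate any clustered
    regions into single cells, then classify each cell as live/dead."""
    cells: List[CellRegion] = []
    for region in parallel_segmenter(image):
        if cluster_cnn(region):              # region contains multiple cells?
            cells.extend(split_cluster(region))
        else:
            cells.append(region)
    for cell in cells:
        cell.viable = viability_cnn(cell)    # viability classification
    return cells


# Toy stand-ins for the trained networks:
segmenter = lambda img: [CellRegion([1, 2, 3, 4]), CellRegion([5, 6])]
cluster_net = lambda r: len(r.patch) > 3     # treat large regions as clusters
viability_net = lambda r: sum(r.patch) % 2 == 0

cells = run_deepcan(None, segmenter, cluster_net, viability_net)
print(len(cells))   # 3: the first region was split into two single cells
```

The point of the modular design is that each block can be retrained or swapped independently, which is how the system accommodates different cell types.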

Original language: English
Pages (from-to): 5575-5583
Number of pages: 9
Journal: IEEE Journal of Biomedical and Health Informatics
Volume: 26
Issue number: 11
DOIs
Publication status: Published - 1 Nov 2022
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2013 IEEE.

Keywords

  • Bioimage segmentation
  • bright field imaging
  • cell counting
  • convolutional neural network
  • viability analysis
