Authors :
V. Kowsalya; D. Hariprasath
Volume/Issue :
Volume 10 - 2025, Issue 4 - April
Google Scholar :
https://tinyurl.com/4an5ekka
Scribd :
https://tinyurl.com/674epu7b
DOI :
https://doi.org/10.38124/ijisrt/25apr391
Abstract :
Medical image fusion synthesizes multiple medical images acquired from a single imaging device or from different imaging devices. This paper aims to improve imaging quality while accurately preserving image information for accurate diagnosis and treatment. This work plays an important role in surgical navigation, routine staging, and radiotherapy planning of malignant disease. Nowadays, medical image fusion focuses on the computed tomography (CT), magnetic resonance imaging (MRI), single-photon emission computed tomography (SPECT), and positron emission tomography (PET) modalities. Bones and implants are clearly visible in CT images, whereas high-resolution anatomical detail of soft tissues is captured in MRI images. However, MRI is less sensitive than CT for diagnosing fractures. SPECT is a nuclear imaging technique used to study blood flow in tissues and organs. Our proposed work is multi-modal medical image fusion that learns image features directly from the original images. Medical image fusion is a powerful tool that enhances the clinical value of individual imaging modalities, leading to better patient outcomes. As imaging technology advances and computational techniques evolve, the role of image fusion in modern medicine continues to grow.
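The abstract does not detail the fusion rule itself, so the sketch below is only a point of reference: two elementary pixel-level fusion rules (weighted averaging and maximum-absolute selection) applied to co-registered CT and MRI slices. The function names, the alpha weight, and the use of NumPy are illustrative assumptions, not the proposed learning-based method.

```python
# Minimal sketch of pixel-level multi-modal fusion (hypothetical example;
# the paper's actual feature-learning method is not described in the abstract).
import numpy as np


def fuse_weighted(ct: np.ndarray, mri: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Fuse two co-registered, same-size grayscale images by weighted averaging.

    ct, mri : 2-D arrays scaled to [0, 1]; alpha weights the CT contribution.
    """
    if ct.shape != mri.shape:
        raise ValueError("Input images must be co-registered and the same size")
    return alpha * ct + (1.0 - alpha) * mri


def fuse_max_abs(ct: np.ndarray, mri: np.ndarray) -> np.ndarray:
    """Fuse by keeping, at each pixel, the value with the larger magnitude
    (a crude stand-in for 'keep the most salient detail')."""
    return np.where(np.abs(ct) >= np.abs(mri), ct, mri)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.random((256, 256))   # placeholder for a normalized CT slice
    mri = rng.random((256, 256))  # placeholder for a normalized MRI slice
    fused = fuse_weighted(ct, mri, alpha=0.6)
    print(fused.shape, float(fused.min()), float(fused.max()))
```

In practice, learning-based fusion replaces such fixed rules with features extracted from the source images, but the fixed rules remain a common baseline for comparison.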
Keywords :
Medical Image Fusion, Computed Tomography, Magnetic Resonance Imaging, Multi-Modal Image Fusion.