
Optimization of Gabor Filters by Employing NVIDIA GPUs in Python

Conner McInnes and Shadi Alawneh
Oakland University, Rochester Hills, United States

Abstract—In this paper, the efficacy of utilizing CUDA's GPU-accelerated libraries for a Gabor filter was examined through rigorous testing and benchmarking. Following a series of benchmarks, the difference in computational time between a program that applied a set of Gabor kernels to images using CuPy and one that used SciPy was recorded. The benchmark results provided statistical evidence in favor of the future utilization of CuPy's GPU-accelerated libraries in such a program. With these data in hand, further work can be carried out to incorporate a GPU-accelerated Gabor transform into a compression algorithm. This will offer a fast compression technique that allows fine-tuning of the compression ratio of a target image.
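The paper's code is not reproduced on this page, but the CPU-versus-GPU comparison it benchmarks can be sketched as follows. This is a minimal illustration, not the authors' implementation: the kernel parameters (`size`, `sigma`, `theta`, `lambd`) are arbitrary example values, and the GPU path assumes CuPy is installed alongside an NVIDIA GPU, relying on `cupyx.scipy.ndimage` mirroring SciPy's `ndimage` API.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel (parameter values here are illustrative only)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by theta, then apply Gaussian envelope * cosine carrier.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier

image = np.random.rand(256, 256).astype(np.float32)
kernel = gabor_kernel(size=21, sigma=4.0, theta=np.pi / 4, lambd=10.0)

# CPU path: plain SciPy convolution.
cpu_result = convolve(image, kernel, mode='reflect')

# GPU path: CuPy offers a near drop-in replacement for the SciPy call,
# which is what makes the benchmarked swap straightforward.
try:
    import cupy as cp
    from cupyx.scipy.ndimage import convolve as gpu_convolve
    gpu_result = gpu_convolve(cp.asarray(image), cp.asarray(kernel), mode='reflect')
    result = cp.asnumpy(gpu_result)
except ImportError:
    result = cpu_result  # fall back to the CPU result when CuPy is unavailable
```

Because CuPy mirrors NumPy/SciPy signatures, timing the two paths (e.g. with `time.perf_counter`) around the `convolve` calls is all that is needed to reproduce a comparison of this shape.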

Index Terms—GPU, CUDA, Gabor filter, image filtering, Python

Cite: Conner McInnes and Shadi Alawneh, "Optimization of Gabor Filters by Employing NVIDIA GPUs in Python," Journal of Image and Graphics, Vol. 9, No. 4, pp. 146-151, December 2021. doi: 10.18178/joig.9.4.146-151

Copyright © 2021 by the authors. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0), which permits use, distribution, and reproduction in any medium, provided that the article is properly cited, the use is non-commercial, and no modifications or adaptations are made.