Generalized fixed point theory in b-metric spaces with applications to optimization and machine learning algorithms
Abstract
This paper develops a comprehensive fixed point framework in b-metric spaces and demonstrates its relevance to modern optimization and machine learning. By introducing an auxiliary control function, we establish a generalized contraction condition ensuring existence, uniqueness, and geometric convergence of iterative schemes. The theoretical results extend the classical Banach contraction principle to settings where distances are non-Euclidean or structurally modified, providing greater flexibility for high-dimensional learning problems. Gradient descent, proximal, and inertial optimization algorithms are reformulated as fixed point iterations within this framework, yielding improved convergence guarantees. Applications to deep learning, particularly recurrent neural networks, highlight stability conditions based on spectral radius and Lipschitz properties. Furthermore, we show that fixed point theory naturally models equilibria in biological systems, including population dynamics and epidemiological models. The results unify classical mathematical models and contemporary data-driven methods, supporting the development of robust algorithms in generalized metric environments.
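
To illustrate the central idea summarized above (this sketch is not from the paper itself), the following minimal Python example views gradient descent on a quadratic objective as a fixed point iteration x_{k+1} = T(x_k) with T(x) = x - eta * grad f(x). The matrix A, vector b, and step size eta are hypothetical choices, picked so that T is a contraction in the ordinary Euclidean metric (the special case of a b-metric with coefficient s = 1), which guarantees geometric convergence to the unique fixed point, i.e. the minimizer.

```python
import numpy as np

# Illustrative quadratic objective f(x) = 0.5 x^T A x - b^T x (assumed, not from the paper).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # symmetric positive definite Hessian
b = np.array([1.0, -1.0])
eta = 0.3                            # step size chosen so that rho(I - eta*A) < 1

# Fixed point map of gradient descent: T(x) = x - eta * (A x - b).
T = lambda x: x - eta * (A @ x - b)

# Contraction factor: largest absolute eigenvalue of the Jacobian I - eta*A.
q = max(abs(np.linalg.eigvals(np.eye(2) - eta * A)))
print(f"contraction factor q = {q:.3f}")   # q < 1 implies geometric convergence

x = np.zeros(2)
for k in range(100):
    x_next = T(x)
    if np.linalg.norm(x_next - x) < 1e-10:  # successive iterates have stabilized
        break
    x = x_next

print("fixed point (minimizer):", x_next)
print("residual ||A x - b||   :", np.linalg.norm(A @ x_next - b))
```

With these choices the contraction factor is roughly 0.585, and the iteration converges to the unique solution of A x = b, illustrating how the convergence rate of the optimization scheme is read off directly from the contraction constant of its fixed point map.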
License
Copyright (c) 2026 Shilpa Patra, Sudipta Sarkar, Kulbhushan Agnihotri, Krishna Pada Das

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
