A Novel Bellman-Inspired Gated Activation Mechanism for Dynamic Neural Networks
Abstract
In deep learning, the activation function, as the core mechanism of nonlinear transformation, determines the upper bound of a model's expressive power. However, traditional activation functions are mostly static, pointwise transformations that cannot respond or adapt to the structure of the input. This paper proposes a "gated activation mechanism" based on the Bellman equation from reinforcement learning, introducing state valuation and future-reward modeling into the activation mechanism of neural networks. Experiments were conducted by replacing the activation function in a standard multilayer perceptron (MLP). The results show that this mechanism yields stable performance improvements without increasing inference overhead. Moreover, the activation method is modular and can be embedded in a wide variety of deep neural networks, offering a new paradigm for activation function design.
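The abstract does not give the mechanism's equations, but the core idea it describes (gating each activation by a state valuation that also models discounted future reward, loosely mirroring the Bellman relation V(s) = r + γ·V(s')) can be sketched as follows. All names, the elementwise valuation weight `v`, and the stand-in for the future-value term are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def bellman_gate(x, v, gamma=0.9):
    """Hypothetical Bellman-inspired gated activation (a sketch, not
    the paper's method). Each unit is gated by a value estimate that
    combines an immediate valuation with a discounted future term,
    echoing V(s) = r + gamma * V(s')."""
    immediate = v * x                     # immediate "reward": per-unit valuation
    future = np.maximum(immediate, 0.0)   # crude stand-in for the next-state value V(s')
    value = immediate + gamma * future    # Bellman-style value estimate
    gate = 1.0 / (1.0 + np.exp(-value))   # sigmoid keeps the gate in (0, 1)
    return x * gate                       # input-dependent, differentiable activation
```

Because the gate depends on the input through the learned valuation `v`, the transformation is no longer static and pointwise-fixed: the same unit can pass, attenuate, or nearly suppress its input depending on the estimated value of the current "state".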
Copyright (c) 2025 ITEGAM-JETIA

This work is licensed under a Creative Commons Attribution 4.0 International License.