Authors :
Mohan Patra; Ramchandra Kisku; Janmejay Patra; Bikash Chandra Dwari
Volume/Issue :
Volume 11 - 2026, Issue 2 - February
Google Scholar :
https://tinyurl.com/wf9btav4
Scribd :
https://tinyurl.com/5dhf6ssf
DOI :
https://doi.org/10.38124/ijisrt/26feb914
Abstract :
Adaptive optimization of superheterodyne radio-frequency receivers remains challenging under fading wireless channels: conventional gain and bandwidth tuning relies on fixed heuristics that often fail under dynamic noise conditions. This paper presents DRIFT-Rx, a deep-reinforcement-learning-based intelligent superheterodyne receiver for adaptive intermediate-frequency (IF) parameter control. The proposed framework integrates Rayleigh fading channel modeling, additive white Gaussian noise, and realistic RF demodulation stages within a reinforcement learning environment. A Deep Q-Network (DQN) agent learns gain and bandwidth policies from reward feedback that combines signal-to-noise ratio (SNR) improvement with bit-error reduction. Training over more than one thousand episodes converges stably, yielding an average SNR improvement of roughly 2–3 dB over classical fixed receivers and a lower bit error rate in most fading scenarios. Compared with classical fixed tuning and heuristic adaptive control baselines, the learned policy is more robust to channel variation and decodes consistently after convergence, although some performance dips remain during severe fading. These findings suggest that intelligent RF front-end tuning through reinforcement learning is feasible, and that the DRIFT-Rx framework holds promise for cognitive communication receiver design under realistic channel uncertainty.
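This page does not include the authors' code. Purely as an illustration of the loop the abstract describes, the Python sketch below wires together a Rayleigh-fading AWGN channel, IF gain/bandwidth actions, and a reward combining SNR improvement with a bit-error penalty. Every name and constant here (the GAINS and BANDWIDTHS grids, the reward weight, the simplified channel model) is an assumption, and tabular epsilon-greedy Q-learning stands in for the paper's Deep Q-Network to keep the sketch self-contained.

```python
# Hypothetical sketch, not the DRIFT-Rx implementation: a toy RL loop for
# adaptive IF gain/bandwidth control over a Rayleigh-fading AWGN channel.
import numpy as np

rng = np.random.default_rng(0)

GAINS = np.array([1.0, 2.0, 4.0, 8.0])    # candidate IF gain settings (assumed)
BANDWIDTHS = np.array([0.5, 1.0, 2.0])    # candidate IF bandwidths, normalized (assumed)
N_ACTIONS = len(GAINS) * len(BANDWIDTHS)  # one action = a (gain, bandwidth) pair
SNR_BINS = np.linspace(-5, 25, 8)         # coarse state: quantized measured SNR in dB

def step(action, tx_power=1.0, noise_psd=0.1, floor=0.05):
    """One channel use: Rayleigh fade plus AWGN through the tuned IF stage."""
    gain = GAINS[action // len(BANDWIDTHS)]
    bw = BANDWIDTHS[action % len(BANDWIDTHS)]
    h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)  # unit-power Rayleigh tap
    sig = tx_power * abs(h) ** 2 * min(bw, 1.0) * gain   # narrow IF filter clips signal
    noise = noise_psd * bw * gain + floor                # amplified in-band noise + back-end floor
    snr = sig / noise
    snr_db = 10 * np.log10(snr)
    # Chernoff-style BPSK BER bound: Q(sqrt(2*SNR)) <= 0.5 * exp(-SNR)
    ber = 0.5 * np.exp(-snr)
    return snr_db, ber

def reward(snr_db, prev_snr_db, ber):
    """SNR improvement plus a bit-error penalty; the 10.0 weight is an assumption."""
    return (snr_db - prev_snr_db) - 10.0 * ber

Q = np.zeros((len(SNR_BINS) + 1, N_ACTIONS))  # tabular stand-in for the neural Q-function
eps, alpha, gamma = 0.1, 0.1, 0.9
prev_snr_db, state = 0.0, 0

for episode in range(2000):  # the abstract reports training over 1000+ episodes
    action = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[state]))
    snr_db, ber = step(action)
    r = reward(snr_db, prev_snr_db, ber)
    next_state = int(np.digitize(snr_db, SNR_BINS))
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state, prev_snr_db = next_state, snr_db

print("Greedy (gain, bandwidth) action per SNR bin:", Q.argmax(axis=1))
```

In the full setting the abstract describes, the lookup table would be replaced by a neural Q-function trained with experience replay and a target network, as in standard DQN.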