Abstract
Reconfigurable intelligent surface (RIS) has emerged as a cutting-edge technology for beyond-5G and 6G networks owing to its low-cost hardware, nearly passive nature, easy deployment, ability to assist communication without generating new radio waves, and energy savings. Unmanned aerial vehicle (UAV)-assisted wireless networks further provide a significant enhancement of network coverage.
Resource allocation and real-time decision-making optimisation play a pivotal role in approaching optimal performance in UAV- and RIS-aided wireless communications. However, existing contributions typically assume a static environment and often ignore the stringent flight-time constraints of real-life applications. Reducing decision-making time is therefore crucial to meeting the stringent requirements of UAV-assisted wireless networks. Deep reinforcement learning (DRL), which combines reinforcement learning with neural networks, is used to maximise network performance, reduce power consumption, and shorten processing time for real-time applications. DRL algorithms can enable UAVs and RISs to operate fully autonomously, reduce energy consumption, and perform optimally in unpredictable environments.
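As a rough illustration of the kind of DRL-based decision-making described above (this sketch is not taken from the book; the channel model, codebook, network sizes, and reward are illustrative assumptions), the following toy example trains a small Q-network to pick a discrete RIS phase-shift configuration that maximises a proxy for received signal power:

```python
# Minimal sketch, assuming a toy cascaded channel and a random phase-shift codebook.
# All names, dimensions, and the reward model are illustrative, not the book's method.
import numpy as np
import torch
import torch.nn as nn

N_ELEMENTS = 8    # assumed number of RIS elements
N_ACTIONS = 16    # assumed size of the phase-shift codebook
rng = np.random.default_rng(0)
codebook = rng.uniform(0, 2 * np.pi, size=(N_ACTIONS, N_ELEMENTS))

def snr_reward(channel, phases):
    """Toy reward: |h^T e^{j*phases}|^2, standing in for received signal power."""
    return float(np.abs(channel @ np.exp(1j * phases)) ** 2)

# Small Q-network mapping a channel observation to one Q-value per codebook entry.
q_net = nn.Sequential(nn.Linear(2 * N_ELEMENTS, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

for step in range(500):
    # "State": real/imag parts of the current channel estimate (assumed observable).
    channel = (rng.standard_normal(N_ELEMENTS) + 1j * rng.standard_normal(N_ELEMENTS)) / np.sqrt(2)
    state = torch.tensor(np.concatenate([channel.real, channel.imag]), dtype=torch.float32)

    # Epsilon-greedy selection over the phase codebook.
    eps = max(0.05, 1.0 - step / 400)
    action = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(q_net(state).argmax())

    reward = snr_reward(channel, codebook[action])

    # One-step (bandit-style) Q-learning update: regress Q(s, a) toward the observed reward.
    loss = (q_net(state)[action] - torch.tensor(reward)) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A full UAV/RIS controller would of course use a multi-step state (e.g. UAV position and battery), a replay buffer, and a target network; the sketch only shows the core "observe, act, receive reward, update the Q-network" loop that the abstract refers to.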
This co-authored book explores the many challenges arising from real-time and autonomous decision-making for 6G. The goal is to provide readers with comprehensive insights into the models and techniques of deep reinforcement learning and its applications in 6G networks and the Internet of Things with the support of UAVs and RISs.
*Deep Reinforcement Learning for Reconfigurable Intelligent Surfaces and UAV Empowered Smart 6G Communications* is aimed at a wide audience of researchers, practitioners, scientists, professors, and advanced students in engineering, computer science, information technology, and communication engineering, as well as networking and ubiquitous computing professionals.
| Original language | English |
|---|---|
| Publisher | Institution of Engineering and Technology (IET) |
| Number of pages | 293 |
| ISBN (Electronic) | 9781839536427 |
| ISBN (Print) | 9781839536410 |
| Publication status | Published - 12 Dec 2024 |