2023, International Journal of Electronics and Communication Engineering
https://doi.org/10.14445/23488549/IJECE-V10I8P109…
The number of devices connected to the Internet is rising continuously with the growth of the Internet of Things (IoT). The IoT and the expanding volume of data it communicates strain cloud-based data processing and storage. Both fog and cloud computing let users host applications and data, but fog has a broader geographic reach and sits closer to the end user. Managing rapidly changing resource provisioning and allocation in fog computing creates new challenges for developing IoT applications and satisfying user requests. To control resource consumption and meet Service Level Agreements (SLAs), flexible and often autonomous systems must choose the appropriate virtual resources. This work presents a Deep Reinforcement Learning (DRL) based framework for resource provisioning that improves resource management efficiency in IoT ecosystems. A Deep Neural Network (DNN) is used to approximate value functions, enabling better adaptation to diverse conditions, learning from prior decisions, and acting as a self-learning adaptive system. Using DRL with Proximal Policy Optimization (PPO), IoT services can be established while reducing average energy consumption and latency, cutting expenses, and utilising and allocating resources wisely. Simulations with iFogSim show that the PPO policy increases resource utilization, reduces delay, and maintains acceptable service quality while lowering energy consumption under varying load rates.
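As a hedged illustration of the policy-update step behind PPO, the sketch below computes PPO's clipped surrogate objective for a single sample. The function, epsilon value, and sample numbers are illustrative only, not the paper's implementation.

```python
# Hypothetical sketch: PPO's clipped surrogate objective, the core of the
# policy-update step referenced above. Names and values are illustrative.

def ppo_clipped_objective(ratio, advantage, epsilon=0.2):
    """PPO surrogate for one (state, action) sample.

    ratio     -- pi_new(a|s) / pi_old(a|s), the probability ratio
    advantage -- estimated advantage A(s, a), here from a critic DNN
    epsilon   -- clipping range that keeps policy updates conservative
    """
    clipped = max(min(ratio, 1 + epsilon), 1 - epsilon)
    # PPO maximizes the minimum of the unclipped and clipped terms.
    return min(ratio * advantage, clipped * advantage)

# A positive advantage with a ratio above 1 + epsilon is clipped:
print(ppo_clipped_objective(1.5, advantage=2.0))   # 2.4 (clipped at 1.2 * 2.0)
# A negative advantage with a large ratio is NOT clipped (min picks the worse term):
print(ppo_clipped_objective(1.5, advantage=-2.0))  # -3.0
```

The clipping is what makes PPO stable under the shifting load patterns the abstract describes: a single update cannot move the policy far from the one that collected the data.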
IEEE Access, 2023
Fog computing has emerged as a computing paradigm for resource-restricted Internet of Things (IoT) devices to support time-sensitive and computationally intensive applications. Offloading can be utilized to transfer resource-intensive tasks from resource-limited end devices to a resource-rich fog or cloud layer to reduce end-to-end latency and enhance the performance of the system. However, this advantage is still challenging to achieve in systems with a high request rate because it leads to long queues of tasks in fog nodes and reveals inefficiencies in terms of delays. In this regard, reinforcement learning (RL) is a well-known method for addressing such decision-making issues. However, in large-scale wireless networks, both action and state spaces are complex and extremely extensive. Consequently, reinforcement learning techniques may not be able to identify an efficient strategy within an acceptable time frame. Hence, deep reinforcement learning (DRL) was developed to integrate RL and deep learning (DL) to address this problem. This paper presents a systematic analysis of using RL or DRL algorithms to address offloading-related issues in fog computing. First, the taxonomy of fog computing offloading mechanisms based on RL and DRL algorithms was divided into three major categories: value-based, policy-based, and hybrid-based algorithms. These categories were then compared based on important features, including offloading problem formulation, utilized techniques, performance metrics, evaluation tools, case studies, their strengths and drawbacks, offloading directions, offloading mode, SDN-based architecture, and offloading decisions. Finally, the future research directions and open issues are discussed thoroughly.
INDEX TERMS Fog computing, Internet of Things (IoT), offloading, reinforcement learning, deep reinforcement learning.
ArXiv, 2021
Fog computing is introduced by shifting cloud resources toward users' proximity to mitigate the limitations of cloud computing. The fog environment makes its limited resources available to a large number of users to deploy their serverless applications, each composed of several serverless functions. One of the primary intentions behind introducing the fog environment is to fulfil the demands of latency- and location-sensitive serverless applications through its limited resources. Recent research mainly focuses on assigning maximum resources to such applications from the fog node without taking full advantage of the cloud environment. This negatively impacts the provisioning of resources to the maximum number of connected users. To address this issue, in this paper we investigate the optimum percentage of a user's request that should be fulfilled by fog and cloud. As a result, we propose DeF-DReL, a Systematic Deployment of Serverless Functions in Fog and Cloud environm…
Journal of Network and Computer Applications, 2019
In order to fulfill the tremendous resource demand of diverse IoT applications, the large-scale resource-constrained IoT ecosystem requires a robust resource management technique. Optimum resource provisioning in an IoT ecosystem requires an efficient request-resource mapping, which is difficult to achieve due to the heterogeneity and dynamicity of IoT resources and IoT requests. In this paper, we investigate the scheduling and resource allocation problem for dynamic user requests with varying resource requirements. Specifically, we formulate the complete problem as an optimization problem and generate an optimal policy with the objectives of minimizing overall energy consumption and achieving long-term user satisfaction through minimum response time. We introduce a deep reinforcement learning (DRL) mechanism to improve resource management efficiency in the IoT ecosystem. To maximize the overall performance of resource management, our method learns to select the optimal resource allocation policy among a number of possible solutions. Moreover, the proposed approach can efficiently handle a sudden hike or fall in users' demand, which we call demand drift, through adaptive learning while maintaining optimum resource utilization. Finally, our simulation analysis illustrates the effectiveness of the proposed mechanism: it reduces energy consumption and response time by at least 36.7% and 59.7% respectively and increases average resource utilization by at least 10.4%. Our approach also attains good convergence and a trade-off between the monitoring metrics.
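A joint objective over energy and response time, as described above, is typically folded into a single scalar reward for the DRL agent. The sketch below shows one plausible weighted form; the weights and reference scales are hypothetical, not taken from the paper.

```python
# Illustrative sketch of a scalar reward such a DRL agent might maximize:
# jointly penalize energy consumption and response time, each normalized
# to a reference scale. All constants here are hypothetical.

def reward(energy_j, response_ms, w_energy=0.5, w_time=0.5,
           energy_ref=100.0, time_ref=50.0):
    """Higher is better; each term is normalized to its reference scale."""
    return -(w_energy * energy_j / energy_ref + w_time * response_ms / time_ref)

print(reward(50.0, 25.0))   # -0.5
print(reward(100.0, 50.0))  # -1.0 (worse: more energy and slower)
```

Tuning the weights shifts the trade-off between the two monitored metrics, which is exactly the balance the abstract reports.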
IEEE Transactions on Network and Service Management, 2021
Currently, researchers have articulated a vision of 6G for empowering the new generation of Internet of Everything (IoE) services that are not supported by 5G. In the context of 6G, more computing resources are required, a problem addressed by Mobile Edge Computing (MEC). However, due to the dynamic change of service demands from various locations, the limited computing resources of MEC, and the increase in the number and complexity of IoE services, intelligent resource provisioning for multiple applications is vital. To address this challenging issue, we propose in this paper IScaler, a novel intelligent and proactive IoE resource scaling and service placement solution. IScaler is tailored for MEC and benefits from recent advancements in Deep Reinforcement Learning (DRL). Multiple requirements are considered in the design of IScaler's Markov Decision Process, including the prediction of the resource usage of scaled applications, the prediction of available resources on hosting servers, combined horizontal and vertical scaling, and service placement decisions. The use of DRL to solve this problem raises several challenges that prevent the realization of IScaler's full potential, including exploration errors and long learning time. These challenges are tackled by proposing an architecture that embeds an Intelligent Scaling and Placement module (ISP). ISP utilizes IScaler together with a heuristics-based optimizer as a bootstrapper and backup. Finally, we use the Google Cluster Usage Trace dataset to perform real-life simulations and illustrate the effectiveness of IScaler's multi-application autonomous resource provisioning.
ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019
Fog radio access network (F-RAN) has been recently proposed to satisfy the low-latency communication requirements of Internet of Things (IoT) applications. We consider the problem of sequentially allocating the limited resources of a fog node to a heterogeneous population of IoT applications with varying latency requirements. Specifically, for each service request, the fog node needs to decide whether to serve that user locally to provide it with low-latency communication service or to refer it to the cloud control center to keep the limited fog resources available for future users. We formulate the problem as a Markov Decision Process (MDP), for which we present the optimal decision policy through Reinforcement Learning (RL). The proposed resource allocation method learns from the IoT environment how to strike the right balance between two conflicting objectives, maximizing the total served utility and minimizing the idle time of the fog node. Extensive simulation results for various IoT environments corroborate the theoretical underpinnings of the proposed RL-based resource allocation method.
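The serve-locally-or-refer-to-cloud decision described above can be sketched as tabular Q-learning over a toy MDP. The states, dynamics, and rewards below are invented for illustration; the paper's MDP is over fog-node resource occupancy and request utilities.

```python
import random

# Toy sketch of the accept-or-refer decision as tabular Q-learning.
# All dynamics and rewards here are hypothetical.

random.seed(0)

STATES = range(4)          # busy resource slots at the fog node (0..3)
ACTIONS = (0, 1)           # 0 = refer to cloud, 1 = serve locally
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(s, a):
    """Hypothetical dynamics: serving fills a slot and earns utility 1;
    referring earns nothing; serving with all slots busy is penalized."""
    if a == 1:
        if s < 3:
            return s + 1, 1.0
        return s, -1.0         # no capacity left
    return max(s - 1, 0), 0.0  # referral; one local task finishes meanwhile

s = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
    s2, r = step(s, a)
    # one-step Q-learning update
    Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
    s = s2

# With capacity available, serving locally should look better than referring:
print(Q[(0, 1)] > Q[(0, 0)])
# With all slots busy, referring should dominate:
print(Q[(3, 0)] > Q[(3, 1)])
```

The learned policy exhibits exactly the balance the abstract names: serve while fog resources are free, refer when accepting would hurt future higher-utility users.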
Majlesi Journal of Electrical Engineering, 2021
In recent years, the exponential growth of communication devices has made the Internet of Things (IoT) an emerging technology that enables heterogeneous devices to connect with each other across heterogeneous networks. This communication requires different levels of Quality-of-Service (QoS) and different policies depending on device type and location. To provide a specific level of QoS, we can utilize emerging technological concepts in the IoT infrastructure: Software-Defined Networking (SDN) and machine learning algorithms. We use deep reinforcement learning for resource management and allocation in the control plane. We present an algorithm that aims to optimize resource allocation. Simulation results show that the proposed algorithm improves network performance in terms of QoS parameters, including delay and throughput, compared to Random and Round Robin methods. Its performance is also comparable to that of fuzzy and predictive methods.
Journal of Network and Computer Applications
Fog computing is an emerging paradigm that aims to meet the increasing computation demands arising from the billions of devices connected to the Internet. Offloading services of an application from the Cloud to the edge of the network can improve the overall Quality-of-Service (QoS) of the application, since data can be processed closer to user devices. Diverse Fog nodes, ranging from Wi-Fi routers to mini-clouds with varying resource capabilities, make it challenging to determine which services of an application should be offloaded. In this paper, a context-aware mechanism for distributing applications across the Cloud and the Fog is proposed. The mechanism dynamically generates (re)deployment plans for the application to maximise its performance efficiency, taking QoS and running costs into account. The mechanism relies on deep Q-networks to generate a distribution plan without prior knowledge of the available resources on the Fog node, the network condition, or the application. The feasibility of the proposed context-aware distribution mechanism is demonstrated on two use-cases, namely a face detection application and a location-based mobile game. In both use-cases, dynamic distribution increases utility compared to the static distribution approach used in existing research.
International Journal of Electrical and Computer Engineering (IJECE), 2024
A fog-cloud internet of things (IoT) system integrates fog computing with cloud infrastructure to process data closer to its source, reducing latency and bandwidth usage. Efficient task scheduling in a fog-cloud system is crucial for optimizing resource utilization and minimizing energy consumption. Although many authors have proposed energy-efficient algorithms, they have failed to provide an efficient method for deciding task placement between fog and cloud nodes. The proposed hybrid approach distinguishes task placement between fog and cloud nodes. It comprises the parametric task categorization algorithm (PTCA) for task categorization and the multi-metric forecasting model (MMFM), based on deep deterministic policy gradient (DDPG) recurrent neural networks, for scheduling decisions. PTCA classifies tasks based on priority, quality-of-service (QoS) demands, and computational needs, facilitating informed decisions on task execution locations. MMFM enhances scheduling by optimizing energy efficiency and task completion time. In experimental evaluation, the proposed approach outperforms existing models, including random forest (RF), support vector machine (SVM), and k-nearest neighbors (KNN), achieving an accuracy of 89% and consuming 50% less energy. The proposed research advances energy-efficient task scheduling, enabling intelligent resource management in fog-cloud IoT environments.
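The kind of categorization-driven placement that PTCA performs can be sketched as a simple rule over task attributes. The fields and thresholds below are hypothetical illustrations, not the paper's algorithm.

```python
# Hypothetical sketch of categorization-driven fog/cloud task placement:
# route a task from its priority, latency sensitivity, and compute need.
# Thresholds and field names are illustrative, not the paper's PTCA.

def place_task(priority, deadline_ms, cpu_cycles):
    """Return 'fog' for urgent or light tasks, 'cloud' for heavy, delay-tolerant ones."""
    if deadline_ms <= 100 and cpu_cycles <= 5e8:
        return "fog"    # latency-critical and light enough for a fog node
    if priority == "high" and deadline_ms <= 100:
        return "fog"    # urgent but heavy: still favor proximity
    return "cloud"      # delay-tolerant or compute-heavy

print(place_task("low", 50, 1e8))    # fog
print(place_task("low", 500, 1e10))  # cloud
```

In the full system, a learned forecasting model (MMFM in the paper) would refine such static rules with predicted energy and completion-time outcomes.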
Cloud Radio Access Networks (RANs) have become a key enabling technique for the next generation (5G) wireless communications, which can meet requirements of massively growing wireless data traffic. However, resource allocation in cloud RANs still needs to be further improved in order to reach the objective of minimizing power consumption and meeting demands of wireless users over a long operational period. Inspired by the success of Deep Reinforcement Learning (DRL) on solving complicated control problems, we present a novel DRL-based framework for power-efficient resource allocation in cloud RANs. Specifically, we define the state space, action space and reward function for the DRL agent, apply a Deep Neural Network (DNN) to approximate the action-value function, and formally formulate the resource allocation problem (in each decision epoch) as a convex optimization problem. We evaluate the performance of the proposed framework by comparing it with two widely-used baselines via simulation. The simulation results show it can achieve significant power savings while meeting user demands, and it can well handle highly dynamic cases.
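The agent design outlined above, a state space, an action space, and a reward balancing served demand against power, can be sketched as follows. All structures and constants are hypothetical, not the paper's formulation.

```python
# Sketch of the DRL agent interface the abstract outlines: a state, an
# action-conditioned outcome, and a reward trading off user demand
# satisfaction against power consumption. Everything here is illustrative.

from dataclasses import dataclass

@dataclass
class RanState:
    demands: tuple      # per-user traffic demand (Mbps)
    active_rrhs: int    # number of powered-on remote radio heads

def reward(state: RanState, served: tuple, power_w: float, lam: float = 0.01):
    """Served demand minus a power penalty; a DNN would approximate Q(s, a)
    for actions such as switching RRHs on/off and reallocating resources."""
    satisfied = sum(min(d, s) for d, s in zip(state.demands, served))
    return satisfied - lam * power_w

s = RanState(demands=(10.0, 20.0), active_rrhs=2)
print(reward(s, served=(10.0, 15.0), power_w=500.0))  # 25.0 - 5.0 = 20.0
```

The penalty weight `lam` (hypothetical) controls how aggressively the agent powers down equipment, mirroring the power-saving versus demand-satisfaction trade-off the simulation results report.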
IEEE Access, 2019
In light of the quick proliferation of Internet of things (IoT) devices and applications, fog radio access network (Fog-RAN) has been recently proposed for fifth generation (5G) wireless communications to assure the requirements of ultra-reliable low-latency communication (URLLC) for the IoT applications which cannot accommodate large delays. To this end, fog nodes (FNs) are equipped with computing, signal processing and storage capabilities to extend the inherent operations and services of the cloud to the edge. We consider the problem of sequentially allocating the FN's limited resources to IoT applications of heterogeneous latency requirements. For each access request from an IoT user, the FN needs to decide whether to serve it locally at the edge utilizing its own resources or to refer it to the cloud to conserve its valuable resources for future users of potentially higher utility to the system (i.e., lower latency requirement). We formulate the Fog-RAN resource allocation problem in the form of a Markov decision process (MDP), and employ several reinforcement learning (RL) methods, namely Q-learning, SARSA, Expected SARSA, and Monte Carlo, for solving the MDP problem by learning the optimum decision-making policies. We verify the performance and adaptivity of the RL methods and compare it with the performance of the network slicing approach with various slicing thresholds. Extensive simulation results considering 19 IoT environments of heterogeneous latency requirements corroborate that RL methods always achieve the best possible performance regardless of the IoT environment.