Survey on Energy Aware Framework by Manipulating Cloud Framework to Reduce Energy Consumption

Keywords: Server Consolidation, VM Migration, Quality of Service, Virtualized Data Center, Service Level Agreements, Highest Thermostat Setting, Energy Efficiency, Virtual Machine Placement, Migration, Dynamic Resource Allocation, Cloud Computing, Data Center

Cloud computing is an architecture for providing computing services via the Internet: on-demand, paid access to a pool of shared resources, i.e., networks, storage, servers, services, and applications, without physically acquiring them [1]. This type of computing offers many benefits for businesses: shorter start-up times for new services, lower operation and maintenance costs, increased utilization through virtualization, and easier disaster recovery, all of which make cloud computing an attractive option [2]. This technological trend has enabled a new computing model in which resources (e.g., CPU and storage space) are provided as general utilities that users can rent and release through the Internet in an on-demand manner [3][4]. Furthermore, user data files can be accessed and manipulated from any other computer using Internet services [5]. Cloud computing is associated with service delivery, where service providers offer computer-based services to consumers over the network [6]. It allows users to access services on demand, providing a shared resource pool of information, software, databases, and other resources according to customer demand [7], and it offers various services based on customer requests related to software, platform, infrastructure, data, identity, and policy management [8].

Service provision in the cloud environment falls into three main types: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) [9]. In IaaS, basic infrastructure-layer services such as storage, database management, and computing capacity are offered on demand [10]. In PaaS, the platform is used to design, develop, build, and test applications. SaaS comprises highly scalable, Internet-based applications offered as services to the end user [11], who can use the software without bearing the purchase and maintenance overheads [12]. The four fundamental deployment models in cloud computing are the public cloud, private cloud, community cloud, and hybrid cloud [13].

To deliver cloud computing services, numerous IT service providers, including Yahoo, Microsoft, IBM, and Google, are rapidly deploying data centers in various locations [14]. Cloud computing entered the IT business with the specific goal of increasing efficiency and saving energy as IT expands, yet its worldwide spread has led to dramatic increases in the energy consumption of data centers. In data centers, thousands of interconnected servers are composed and managed to provide different cloud services [15]. With the rapid growth of cloud computing technology and the construction of a large number of data centers, the problem of high power consumption is becoming more and more serious. Data center performance and efficiency can be expressed in terms of the amount of electrical energy supplied [16]. In the cloud environment, the services required by the customer are satisfied by employing virtual machines hosted on servers.
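Most of the energy-aware work surveyed below rests on a simple linear relationship between CPU utilization and server power draw, in which an idle server already consumes a large share of its peak power. The following minimal sketch illustrates this standard model; the wattage constants are illustrative assumptions, not figures from [16]:

```python
# Linear utilization-based server power model, as commonly used in
# energy-aware cloud studies. P_IDLE and P_MAX are assumed values.

P_IDLE = 100.0  # watts drawn by an idle server (assumption)
P_MAX = 250.0   # watts drawn at 100% CPU utilization (assumption)

def server_power(utilization: float) -> float:
    """Estimate power draw (watts) for a CPU utilization in [0, 1]."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return P_IDLE + (P_MAX - P_IDLE) * utilization

def datacenter_energy(utilizations: list[float], hours: float) -> float:
    """Total energy (kWh) for a set of servers over a time window."""
    watts = sum(server_power(u) for u in utilizations)
    return watts * hours / 1000.0

# Example: ten servers at 30% load for one hour.
print(datacenter_energy([0.3] * 10, hours=1.0))  # -> 1.45 kWh
```

Because the idle term dominates, packing load onto fewer servers and switching the rest off is the main lever that the consolidation techniques discussed next exploit.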
Each virtual machine has different capabilities, so scheduling work and balancing the workload between nodes becomes more complex [17]. Load balancing is one of the central issues in cloud computing: it is a mechanism that distributes the dynamic local workload evenly across all servers in the cloud, avoiding situations where some servers are heavily loaded while others are idle or doing little work [18].

The trend towards server-side computing and the growing popularity of Internet services have made data centers an integral part of the Internet fabric. Data centers are becoming increasingly common in large enterprises, banks, telecommunications companies, portals, and so on [19]. As data centers inevitably become larger and more complex, they bring many challenges in deployment, resource management, service reliability, and the like [20]. A data center built using server virtualization technology, with virtual machines (VMs) as its core computing elements, is called a virtualized (or virtual) data center (VDC) [21][22]. Virtualization is seen as an effective way to meet these challenges: server virtualization opens up the possibility of achieving higher server consolidation and more agile dynamic resource provisioning than is possible on traditional platforms [23][24][25]. Consolidating multiple servers and their workloads aims to minimize the number of resources, such as computer servers, needed to support the workloads. Besides reducing costs, consolidation can also reduce peak and average power requirements; reducing peak power consumption may be important in data centers where peak power capacity cannot easily be increased [26][27]. Server consolidation is especially important when user workloads are unpredictable and must be revisited periodically: whenever user demand changes, VMs can be resized and, if necessary, migrated to other physical servers [28].

Antonio Corradi et al. [29] examined the problem of VM consolidation in cloud scenarios, clarifying the main optimization objectives, design guidelines, and challenges. To support the hypotheses of their paper, they introduced and used OpenStack, an open-source cloud computing platform now widely adopted in both academia and industry. Their experimental results showed that VM consolidation is a feasible way to reduce power consumption but, at the same time, must be carefully guided to prevent excessive performance degradation; using three case studies representative of different usage patterns, the paper showed that performance degradation is not easy to predict, owing to many intertwined and interconnected factors. The authors identify several further research directions. First, they want to better understand how server consolidation affects the performance of individual services and the role of SLAs in decision making; their main objective in this direction is the automatic identification of meaningful service profiles (for example, CPU-bound or network-bound) that characterize the introduced workload, so as to better predict interference during VM consolidation. Second, they want to implement a larger OpenStack cloud testbed, so as to enable and test more complex VM placement algorithms. Third, they want to extend the management infrastructure to perform automatic live migration of VMs, in order to dynamically reduce cloud energy consumption; their main guideline here is to use historical data and service profiles to better characterize the side effects of VM consolidation.
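At its core, consolidation is a bin-packing problem. As an illustration of the kind of greedy heuristic that the more complex VM placement algorithms mentioned above refine, the sketch below packs VMs onto hosts first-fit-decreasing; the `consolidate` function and the normalized CPU capacities are assumptions for illustration, not Corradi et al.'s [29] actual algorithm:

```python
# A minimal first-fit-decreasing (FFD) consolidation sketch:
# pack VMs (name -> CPU demand) onto as few hosts as possible.

def consolidate(vms: dict[str, float], host_capacity: float) -> list[dict[str, float]]:
    """Sorting by decreasing demand before first-fit packing tends to
    reduce the number of active hosts, and hence idle power draw."""
    hosts: list[dict[str, float]] = []  # each host maps VM name -> demand
    for name, demand in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
        for host in hosts:
            if sum(host.values()) + demand <= host_capacity:
                host[name] = demand  # first host with room wins
                break
        else:
            hosts.append({name: demand})  # no host fits: power on a new one
    return hosts

# Example: six VMs packed onto hosts with capacity 1.0 (100% of one CPU).
placement = consolidate(
    {"vm1": 0.6, "vm2": 0.5, "vm3": 0.4, "vm4": 0.3, "vm5": 0.2, "vm6": 0.2},
    host_capacity=1.0,
)
print(len(placement), placement)  # 3 active hosts instead of 6
```

Live migration then allows such a packing to be re-established as demand changes, at the cost of the interference effects discussed above.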
Ayan Banerjee et al. [30] proposed a coordinated job-placement and cooling-management algorithm called Highest Thermostat Setting (HTS). HTS is aware of the dynamic behavior of the computer room air conditioner (CRAC) units and places jobs so as to reduce the cooling demand on the CRACs; it also dynamically updates the CRAC thermostat setpoint to reduce cooling energy consumption. Furthermore, the energy-inefficiency ratio of spatial job-scheduling (i.e., job-placement) algorithms, termed SP-EIR, was analyzed by comparing each algorithm's total (computing plus cooling) energy consumption with the lowest achievable total energy consumption, assuming that job start times have already been fixed to meet service level agreements (SLAs). This analysis was performed for two cooling models, constant and dynamic, to show how the constant-cooling assumption of previous research misses energy-saving opportunities. Simulation results based on power measurements and job traces from ASU's HPC data center show that: (i) HTS has a 15% lower SP-EIR than LRH, a temperature-sensitive spatial scheduling algorithm; and (ii) combined with FCFS-Backfill, HTS increases throughput per unit of energy by 6.89% and 5.56% over LRH and MTDP (an energy-efficient spatial scheduling algorithm with server consolidation), respectively.

Gaurav Chadha et al. [31] presented LIMO, a runtime system that dynamically manages an application's number of running threads to maximize performance and energy efficiency. LIMO monitors the threads' progress along with the utilization of shared hardware resources to determine the best number of threads to run and the voltage and frequency level. With this dynamic adaptation, LIMO provides an average 21% performance improvement and a 2x improvement in energy efficiency on a 32-core system, compared with the default configuration of 32 threads, for a set of concurrent applications from the PARSEC suite, the Apache web server, and the Sphinx speech recognition system.

Jordi Guitart et al. [32] proposed an overload control strategy for secure web applications that combines dynamic provisioning of platform resources with admission control based on Secure Sockets Layer (SSL) connection differentiation. Dynamic provisioning allows additional resources to be allocated to an application on demand to handle workload increases, while the admission control mechanism avoids server performance degradation by dynamically limiting the number of new SSL connections accepted and preferentially serving resumed SSL connections (to maximize performance in session-based environments) while the additional resources are being provisioned. Their evaluation demonstrates the benefit of this approach for efficiently managing resources and preventing server overload on a 4-way multiprocessor Linux hosting platform, especially when the platform is completely overloaded.
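To make the admission-control idea concrete, the sketch below throttles new (full-handshake) SSL connections as CPU headroom shrinks, while always admitting resumed sessions; the class name, control policy, and thresholds are illustrative assumptions, not Guitart et al.'s [32] implementation:

```python
# Sketch of session-based SSL admission control: resumed sessions skip
# the expensive full handshake and are always admitted, while the cap
# on new connections shrinks as the server approaches saturation.

class SSLAdmissionController:
    def __init__(self, base_cap: int):
        self.base_cap = base_cap   # new connections allowed at zero load (assumed)
        self.cap = base_cap
        self.admitted_new = 0

    def start_interval(self, cpu_utilization: float) -> None:
        """Called every control interval: shrink the cap as load grows."""
        headroom = max(0.0, 1.0 - cpu_utilization)
        self.cap = int(self.base_cap * headroom)  # assumed linear policy
        self.admitted_new = 0

    def admit(self, resumed: bool) -> bool:
        """Admit resumed sessions unconditionally; throttle new handshakes."""
        if resumed:
            return True  # cheap: preserves performance of ongoing sessions
        if self.admitted_new < self.cap:
            self.admitted_new += 1
            return True
        return False  # reject: a new handshake would degrade the server

ctrl = SSLAdmissionController(base_cap=200)
ctrl.start_interval(cpu_utilization=0.75)  # heavy load: cap drops to 50
print(ctrl.admit(resumed=True))   # True: resumed sessions always pass
print(ctrl.admit(resumed=False))  # True until 50 new connections admitted
```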
Anton Beloglazov et al. [33] proposed an architectural framework and principles for energy-efficient cloud computing. Based on this architecture, they present open research challenges together with resource provisioning and allocation algorithms for the energy-efficient management of cloud computing environments. The proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves the data center's energy efficiency while ensuring the negotiated quality of service (QoS). In particular, the paper surveys research in energy-efficient computing and proposes: (a) architectural principles for the energy-efficient management of clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that take into account QoS expectations and the power-usage characteristics of devices; and (c) a number of open research challenges whose resolution could bring substantial benefits to both resource providers and consumers. The approach was validated through a performance evaluation study using the CloudSim toolkit. The results demonstrate that the cloud computing model has immense potential, offering significant cost savings and a high potential for improving energy efficiency under dynamic workload scenarios.

Nadjia Kara et al. [34] addressed these issues for a specific application, interactive voice response (IVR). They define task scheduling and computational resource-sharing strategies based on genetic algorithms, in which several different objectives are optimized; genetic algorithms were chosen because their robustness and efficiency in designing schedulers have been widely demonstrated in the literature. More specifically, the method identifies task assignments that ensure maximum resource utilization while minimizing task execution time. The paper also proposes a resource allocation strategy that minimizes substrate resource utilization and resource allocation time. The authors simulated the algorithms underlying the proposed strategies and measured and analyzed their performance.

To address the problem of high energy consumption, Zhou et al. [35] presented an energy-efficient virtual machine consolidation algorithm called the prediction-based VM deployment algorithm for energy efficiency (PVDE). A linear weighted method is used to classify the hosts in the data center and to predict host load, and the authors conducted extensive performance analysis. Their experimental results show that the algorithm reduces power consumption while maintaining a low rate of service level agreement (SLA) violations compared with other power-saving algorithms.

Li et al. [36] presented an elaborate thermal model that addresses the complexity of energy and thermal modeling of realistic cloud data center operation by analyzing the temperature distribution of server airflow and CPUs. To minimize the total energy consumption of the data center, the authors presented GRANITE, a holistic virtual machine scheduling algorithm. The algorithm was evaluated against other existing workload-scheduling algorithms, IQR, TASA, MaxUtil, and Random, using real cloud workload characteristics extracted from a data center trace log.
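As a closing illustration of the prediction step on which such consolidation algorithms rely, the sketch below fits a least-squares trend to a host's recent load history, extrapolates one step ahead, and classifies the host for migration decisions. The plain linear regression, the function names, and the thresholds are assumptions for illustration, not the exact "linear weighted method" of PVDE [35]:

```python
# Prediction-based host classification sketch: forecast the next load
# sample from recent history, then label the host so the consolidation
# layer can decide whether to migrate VMs or power the host down.

def predict_next(history: list[float]) -> float:
    """One-step-ahead forecast via ordinary least squares on (t, load)."""
    n = len(history)
    if n < 2:
        return history[-1]
    mean_x = (n - 1) / 2
    mean_y = sum(history) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(history))
    var = sum((x - mean_x) ** 2 for x in range(n))
    slope = cov / var
    return mean_y + slope * (n - mean_x)  # extrapolate the line to t = n

def classify_host(history: list[float], low=0.2, high=0.8) -> str:
    """Label a host for consolidation decisions (thresholds assumed)."""
    load = predict_next(history)
    if load > high:
        return "overloaded"    # migrate VMs away before SLAs are violated
    if load < low:
        return "underloaded"   # candidate to drain and power down
    return "normal"

print(classify_host([0.55, 0.62, 0.70, 0.78]))  # rising trend -> "overloaded"
```

Acting on the predicted rather than the current load lets the consolidation layer migrate VMs before an SLA violation occurs, which is the trade-off between energy savings and service quality that runs through all of the work surveyed above.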