Cloud Resource Management

One of the big promises of Cloud Computing is the ability to provide high-quality services at a low cost, a result of sharing resources among many users. However, the actual ability to provide these services at a low cost critically depends on how efficiently resources are utilized in the Cloud. We address this important aspect of cloud computing by studying the tradeoff between computation and communication resources and by developing methods that improve resource utilization in the Cloud. Our recent papers and ongoing work are described below.

Current and Past Students: Gilad Kutiel (currently at Google), Amir Nahir, Asaf Israel, Alexander Zlotnik, Assaf Rappaport, Alexander Nus

Main Collaborators: Ariel Orda (EE Technion), Seffi Naor (CS Technion), David Breitgand (IBM), Rami Cohen (IBM), Liane Lewin-Eytan (Yahoo), Amir Epstein (IBM)

This research is supported in part by The Israel Science Foundation (ISF), the Seventh Framework Programme of the European Commission Research Directorates (project SAIL), an Amazon AWS in Education grant, and The Technion-Microsoft Electronic-Commerce Research Center.

Rami Cohen, Liane Lewin-Eytan, Joseph (Seffi) Naor, and Danny Raz, On the Effect of Forwarding Table Size on SDN Network Utilization, IEEE Infocom ’14, Toronto Canada, April 2014.

Abstract:

Software Defined Networks (SDNs) are becoming the leading technology behind many traffic engineering solutions, both for backbone and data-center networks, since they allow a central controller to globally plan the paths of flows according to the operator's objectives. Nevertheless, the forwarding table of a networking device is a limited and expensive resource (e.g., in TCAM-based switches) which should thus be considered when configuring the network. In this paper, we concentrate on satisfying global network objectives, such as maximum flow, in environments where the size of the forwarding table in network devices is limited. We formulate this problem as an (NP-hard) optimization problem and present approximation algorithms for it. We show through extensive simulations that practical use of our algorithms (both in DC and backbone scenarios) results in a significant reduction (a factor of 3) in forwarding table size, while having only a small effect on the global objective (maximum flow).

 

Alexander Nus and Danny Raz, Migration Plans With Minimum Overall Migration Time, The IEEE/IFIP Network Operations and Management Symposium (NOMS 2014), Krakow, Poland, May 2014.

Abstract:

In this paper we concentrate on finding the best migration plan, that is, a partial ordering of live migrations that realizes a move from the current placement to the desired one, takes the minimal possible time, and maintains the placement constraints throughout the process. This is not an easy task, since additional resources and intermediate migrations may be needed in order to maintain feasibility; in fact, we show that even for a simple model, capturing only the critical aspects of the problem, computing the optimal migration plan is NP-hard. We develop algorithms that find feasible migration plans and prove that their overall migration time is within an adaptive constant factor of the optimal possible time. Then, using data from real cloud placements, we evaluate the expected performance of these algorithms in realistic scenarios. Our results indicate that the algorithms perform very well under these realistic conditions.
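The feasibility issue at the heart of this problem, that a VM cannot migrate until its destination host has enough free capacity, can be illustrated with a minimal greedy planner. The sketch below is a hypothetical simplification (single resource, direct moves only), not the paper's algorithm; all host and VM names are illustrative:

```python
def plan_migrations(current, desired, capacity, size):
    """current/desired map VM -> host; capacity maps host -> total capacity;
    size maps VM -> resource demand. Returns an ordered list of
    (vm, source_host, destination_host) migrations."""
    placement = dict(current)
    used = {h: 0 for h in capacity}
    for vm, host in placement.items():
        used[host] += size[vm]
    plan = []
    pending = sorted(vm for vm in desired if current[vm] != desired[vm])
    while pending:
        # A VM is movable only if its destination has room for it right now.
        movable = [vm for vm in pending
                   if used[desired[vm]] + size[vm] <= capacity[desired[vm]]]
        if not movable:
            # Deadlock: a real planner would add intermediate migrations here.
            raise RuntimeError("no direct plan; intermediate migrations needed")
        vm = movable[0]
        src, dst = placement[vm], desired[vm]
        used[src] -= size[vm]
        used[dst] += size[vm]
        placement[vm] = dst
        plan.append((vm, src, dst))
        pending.remove(vm)
    return plan

caps = {"h1": 2, "h2": 2}
plan = plan_migrations({"a": "h1", "b": "h2"}, {"a": "h2", "b": "h2"},
                       caps, {"a": 1, "b": 1})
print(plan)  # → [('a', 'h1', 'h2')]
```

When two full hosts need to swap their VMs, no direct move is feasible and the sketch raises; this is exactly the situation where intermediate migrations (and extra time) become necessary.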

 

Assaf Rappaport and Danny Raz, Update Aware Replica Placement, The 9th International Conference on Network and Service Management (CNSM 13), Zurich, Switzerland, October 2013.

Abstract:

In recent years, companies such as eBay, Facebook, Google, Microsoft, and Yahoo! have made large investments in massive data centers supporting cloud services. These data centers are becoming the hosting platform for a wide spectrum of composite applications, with an increasing trend towards more communication-intensive applications. As a result, the bandwidth requirements within and between data centers are rapidly growing, and the efficient management of these networking resources is becoming a key ingredient in the ability to offer cost-effective cloud services. Data placement is a specific aspect of cloud management where the goal is to optimally place the applications and their related data over the available cloud infrastructure. The problem is inherently complex since data is continuously updated, and the cost associated with this update increases with the number of data replicas and the network distance between them. We model this problem as a soft-capacitated connected facility location problem, which is NP-hard in the general case. We present the first deterministic constant approximation algorithm for this problem and show, using extensive simulations and realistic data center and network topologies, that our algorithm provides practically good placement decisions.

David Breitgand, Amir Epstein, Alex Glikson, Assaf Israel, and Danny Raz, Network Aware Virtual Machine and Image Placement in a Cloud, The 9th International Conference on Network and Service Management (CNSM 13), Zurich, Switzerland, October 2013.

Abstract:

Optimal resource allocation is a key ingredient in the ability of cloud providers to offer agile data centers and cloud computing services at a competitive cost. Due to the large scale, this must be done in an autonomic manner. Autonomic resource allocation occurs at different levels. In this paper we study the problem of placing images and virtual machine instances on physical containers in a way that maximizes the affinity between the images and the virtual machine instances created from them. This reduces the communication overhead and latency imposed by the ongoing communication between the virtual machine instances and their respective images. We model this problem as a novel placement problem that extends the class-constrained multiple knapsack problem (CCMK) previously studied in the literature, and present a polynomial-time local search algorithm for the case where all the relevant images have the same size. We prove that this algorithm has an approximation ratio of (3+ε), and then evaluate its performance in a general setting, where images and virtual machine instances are of arbitrary sizes, using production data from a private cloud. The results indicate that our algorithm can obtain significant improvements (up to 20%) compared to the greedy approach in cases where local image storage or main memory resources are scarce.

Assaf Israel and Danny Raz, Cost Aware Fault Recovery in Clouds, IFIP/IEEE International Symposium on Integrated Network Management (IM 2013), Ghent, Belgium, May 2013.

Abstract:

Maintaining high availability of IaaS services at a reasonable cost is a challenging task that has received recent attention due to the growing popularity of Cloud computing as a preferred means of affordable IT outsourcing. In large data centers faults are bound to happen, and thus the only reasonable cost-effective method of providing high availability of services is an SLA-aware recovery plan; that is, a mapping of the service VMs onto backup machines where they can be executed in case of a failure. The recovery process may benefit from powering on some of these machines in advance, since redeployment on powered machines is much faster. However, this comes with an additional maintenance cost, so the real problem is how to balance the expected recovery time improvement against the cost of machine activation. We model this problem as an offline optimization problem and present a bicriteria approximation algorithm for it. While this is the first performance-guaranteed algorithm for this problem, it is somewhat complex to implement in practice. Thus, we further present a much simpler and practical heuristic based on a greedy approach. We evaluate the performance of this heuristic over real data-center data, and show that it performs well in terms of scale, hierarchical faults, and varying costs. Our results indicate that it reduces the overall recovery costs by 10%-15% when compared to currently used approaches. We also show that fault-recovery-cost-aware VM placement may further help reduce the expected recovery costs, as it can reduce the backup machine activation costs.
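The flavor of the activation-cost-versus-recovery-time balance can be sketched as a toy greedy selection. This is a hypothetical simplification under an assumed activation budget, not the paper's bicriteria algorithm or its actual heuristic; all machine names and numbers are illustrative:

```python
def choose_powered_backups(machines, budget):
    """machines: list of (name, activation_cost, recovery_time_saved).
    Greedily power on backup machines with the best savings-to-cost ratio
    until the activation budget is exhausted."""
    chosen, spent = [], 0.0
    for name, cost, saved in sorted(machines,
                                    key=lambda m: m[2] / m[1], reverse=True):
        if saved > 0 and spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

machines = [("m1", 2, 10), ("m2", 1, 1), ("m3", 3, 3)]
print(choose_powered_backups(machines, budget=3))  # → ['m1', 'm2']
```

The ratio-based ordering captures the basic intuition: a pre-powered machine is worth its maintenance cost only when the recovery time it saves is large enough relative to that cost.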

Yossi Kanizo, Danny Raz, and Alexander Zlotnik, Efficient Use of Geographically Spread Cloud Resources, Proceedings of the 13th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID ’13), Delft, Netherlands, May 2013 (also available as Computer Science Technical Report CS-2012-11).

Abstract:

The demand for cloud services in each geographical location changes over time, depending on the time of the day. Thus, when one data center (say, on the east coast of the US) experiences peak load, other data centers (say, in Europe) experience a lower load. This paper addresses the efficiency of load sharing between geographically spread cloud resources. We observe that despite the network latency, for several common services it is very beneficial to share the load across two or more data centers, each located in a different time zone.

We rigorously analyze a simple setting in which customers can be redirected between two servers, each experiencing a different local load. We show that a threshold-based load sharing scheme, in which loads are redirected when exceeding some threshold, is significantly more efficient than a static load sharing scheme, where loads are redirected independently of the current state. Our load sharing techniques can reduce the average service time by 40% during peak demand in typical service scenarios. Looking at the same result from a different perspective, we show that (in the same setting) deploying our geographically based load sharing scheme can provide a similar user experience with 15%-20% fewer resources.
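The threshold idea can be sketched with a toy slotted simulation of two out-of-phase servers. The arrival rates, the slot model, the service probability, and the threshold value below are all illustrative assumptions, not the paper's analytical model:

```python
import random

def simulate(threshold, slots=8000, seed=7):
    """Toy slotted model of two time-zone-shifted servers. Each slot, a
    server receives at most one job and completes a queued job with
    probability 0.7. An arriving job is redirected to the other site only
    when the local queue exceeds `threshold` and the remote queue is
    shorter. Returns the average total backlog, a rough proxy for
    service time."""
    rng = random.Random(seed)
    q = [0, 0]
    total = 0
    for t in range(slots):
        peak_at_0 = (t // 1000) % 2 == 0      # which site is at daytime peak
        rates = [0.9, 0.3] if peak_at_0 else [0.3, 0.9]
        for i in (0, 1):
            if rng.random() < rates[i]:
                j = 1 - i
                if q[i] > threshold and q[j] < q[i]:
                    q[j] += 1                  # redirect across time zones
                else:
                    q[i] += 1
        for i in (0, 1):
            if q[i] > 0 and rng.random() < 0.7:
                q[i] -= 1                      # stochastic service completion
        total += sum(q)
    return total / slots

no_sharing = simulate(threshold=10**9)         # effectively never redirect
sharing = simulate(threshold=2)
print(f"avg backlog: {no_sharing:.1f} without sharing, {sharing:.1f} with")
```

Even in this crude model, the site at its daytime peak is overloaded on its own, so shifting its excess arrivals to the off-peak site noticeably lowers the average backlog.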

To further validate our approach, we deployed Wikipedia instances on Amazon EC2, both in Europe and the US, and tested our techniques using real Wikimedia access logs. Our results show that threshold-based load sharing between the US and Europe achieves an improvement of up to 32% in average service time over these logs.

 

Rami Cohen, Liane Lewin-Eytan, Joseph (Seffi) Naor, and Danny Raz, Almost Optimal Virtual Machine Placement for Traffic Intense Data Center, IEEE Infocom ’13, Mini-conference, Turin, Italy, April 2013.

Abstract: 

The recent growing popularity of cloud-based solutions and the variety of new applications present new challenges for cloud management and resource utilization. In this paper we concentrate on the networking aspect and consider the placement problem of virtual machines (VMs) of applications with intense bandwidth requirements. Optimizing the available network bandwidth is far more complex than optimizing resources like memory or CPU, since every network link may be used by many physical hosts and thus by the VMs residing in these hosts. We focus on maximizing the benefit from the overall communication sent by the VMs to a single designated point in the data center (called the root). This is the typical case when considering a storage area network of applications with intense storage requirements. We formulate a bandwidth-constrained VM placement optimization problem that models this setting. This problem is NP-hard, and we present a polynomial-time constant approximation algorithm for its most general version, in which hosts are connected to the root by a general network graph. For more practical cases, in which the network topology is a tree and the benefit is a simple function of the allocated bandwidth, we present improved approximation algorithms that are more efficient in terms of running time. We evaluate the expected performance of our proposed algorithms through a simulation study over traces from a real production data center, providing strong indications of the superiority of our proposed solutions.

Amir Nahir, Ariel Orda, and Danny Raz, Schedule First, Manage Later: Network-aware Load Balancing, IEEE Infocom ’13, Mini-conference, Turin, Italy, April 2013.

Abstract:

Load balancing in large distributed server systems is a complex optimization problem of critical importance in cloud systems and data centers. Existing schedulers often incur a high communication overhead when collecting the data required to make the scheduling decision, hence delaying the job request on its way to the executing server. We propose a novel scheme that incurs no communication overhead between the users and the servers upon job arrival, thus removing any scheduling overhead from the job's critical path. Our approach is based on creating several replicas of each job and sending each replica to a different server. When a replica arrives at the head of the queue at its server, the latter signals the servers holding replicas of that job, so as to remove them from their queues. We show, through analysis and simulations, that this scheme improves the expected queuing overhead over traditional schemes by a factor of 9 (or more) under all load conditions. In addition, we show that our scheme remains efficient even when the inter-server signal propagation delay is significant (relative to the job's execution time). We provide heuristic solutions to the performance degradation that occurs in such cases and show, by simulations, that they efficiently mitigate the detrimental effect of propagation delays. Finally, we demonstrate the efficiency of our proposed scheme in a real-world environment by implementing a load balancing system based on it, deploying the system on the Amazon Elastic Compute Cloud (EC2), and measuring its performance.
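A toy sketch of this replicate-and-cancel mechanism is given below. The class and method names are illustrative, and a shared `done` set stands in for the inter-server cancellation signals that real servers would exchange asynchronously:

```python
import random
from collections import deque

class Cluster:
    """Toy model of the replicate-and-cancel scheme: every job is copied to
    d randomly chosen server queues; the first replica to reach the head of
    its queue is executed and the remaining replicas are discarded."""

    def __init__(self, n_servers, d, seed=0):
        self.queues = [deque() for _ in range(n_servers)]
        self.d = d
        self.done = set()                    # stands in for cancel signals
        self.rng = random.Random(seed)

    def submit(self, job_id):
        for s in self.rng.sample(range(len(self.queues)), self.d):
            self.queues[s].append(job_id)

    def step(self):
        """One time slot: each server runs the first live replica it holds."""
        served = []
        for q in self.queues:
            while q and q[0] in self.done:   # replica already cancelled
                q.popleft()
            if q:
                job = q.popleft()
                self.done.add(job)           # "signal" the other replicas
                served.append(job)
        return served
```

Because cancellation is checked before execution, each job runs exactly once even if several of its replicas reach a queue head in the same slot; meanwhile each replica effectively samples a different queue, so the job is served by whichever server frees up first.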

Amir Nahir, Ariel Orda, and Danny Raz, Distributed Oblivious Load Balancing Using Prioritized Job Replication, The 8th International Conference on Network and Service Management (CNSM 12), Las Vegas, USA, October 2012.

Abstract:

Load balancing in large distributed server systems is a complex optimization problem of critical importance in cloud systems and data centers. However, any full (i.e., optimal) solution incurs significant, often prohibitive, overhead due to the need to collect state-dependent information. We propose a novel scheme that incurs no communication overhead between the users and the servers upon job arrivals, thus removing any scheduling overhead from the job execution's critical path. Furthermore, our scheme is oblivious, that is, it does not use any state information. Our approach is based on creating, in addition to the regular job requests that are assigned to randomly chosen servers, replicas that are sent to different servers; these replicas are served at low priority, so that they do not add any real burden on the servers. Through analysis and simulations we show that the expected system performance improves by up to a factor of 2 (even under high load conditions) if job lengths are exponentially distributed, and by over a factor of 5 when job lengths adhere to heavy-tailed distributions. We implemented a load balancing system based on our approach and deployed it on the Amazon Elastic Compute Cloud (EC2). Realistic load tests on that system indicate that the actual performance is as predicted.
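The key difference from plain replication is the two-level priority: a replica may only use a server's idle time. A minimal per-server sketch of this idea follows (class, method names, and the shared `done` set are illustrative assumptions, not the paper's implementation):

```python
from collections import deque

class Server:
    """Toy model of prioritized job replication: primaries go to a
    high-priority queue, replicas to a low-priority one, and a replica is
    served only when no primary is waiting."""

    def __init__(self):
        self.primary = deque()
        self.replica = deque()

    def push(self, job, is_replica):
        (self.replica if is_replica else self.primary).append(job)

    def serve(self, done):
        """Serve one job per step; replicas never delay a primary.
        `done` stands in for signals about jobs finished elsewhere."""
        for queue in (self.primary, self.replica):
            while queue and queue[0] in done:
                queue.popleft()              # job already finished elsewhere
            if queue:
                job = queue.popleft()
                done.add(job)
                return job
        return None
```

Since replicas are only served from otherwise-idle capacity, they impose no real burden on primaries, yet whichever copy finishes first completes the job.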

Ofer Biran, Antonio Corradi, Mario Fanelli, Luca Foschini, Alexander Nus, Danny Raz, and Ezra Silvera, A Stable Network-Aware VM Placement for Cloud Systems, Proceedings of the 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID ’12), Ottawa, Canada, May 2012.

Abstract:

Virtual Machine (VM) placement has to carefully consider the aggregated resource consumption of co-located VMs in order to obey service level agreements at the lowest possible cost. In this paper, we focus on satisfying the traffic demands of the VMs in addition to CPU and memory requirements. This is a much more complex problem, both due to its quadratic nature (communication demands are defined between pairs of VMs) and since it involves many factors beyond the physical host, such as the network topology and the routing scheme. Moreover, traffic patterns may vary over time, and predicting the resulting effect on the actual available bandwidth between hosts within the data center is extremely difficult. We address this problem by trying to allocate a placement that not only satisfies the predicted communication demand but is also resilient to demand time-variations. This gives rise to a new optimization problem that we call the Min Cut Ratio-aware VM Placement (MCRVMP) problem. The general MCRVMP problem is NP-hard; hence, we introduce several heuristics to solve it in reasonable time. We present extensive experimental results, associated with both placement computation and run-time performance under time-varying traffic demands, to show that our heuristics provide good results (compared to the optimal solution) for medium-size data centers.

David Breitgand, Gilad Kutiel, and Danny Raz, Cost-Aware Live Migration of Services in the Cloud, Proceedings of the 11th USENIX Conference on Hot Topics in Management of Internet, Cloud, and Enterprise Networks and Services (Hot-ICE ’11), Boston, USA, March 2011.

Abstract:

Live migration of virtual machines is an important component of the emerging cloud computing paradigm. While live migration provides extreme versatility of management, it comes at the price of degraded service performance during migration. The bulk of studies devoted to live migration of virtual machines focus on the duration of the copy phase as the primary metric of migration performance. While shorter downtimes are clearly desirable, the pre-copy phase imposes an overhead on the infrastructure that may result in severe performance degradation of the migrated and collocated services, offsetting the benefits accrued through live migration.

We observe that there is a non-trivial trade-off between minimizing the copy phase duration and maintaining an acceptable quality of service during the pre-copy phase, and introduce a new model to quantify this trade-off. We then show that using our model, an optimal migration schedule can be efficiently calculated. Finally, we simulate, using real traces, live migrations of a virtual machine running a web server and compare the migration cost using our algorithm and commonly used live-migration methods.