US20080034366A1 - Virtual computer system with dynamic resource reallocation - Google Patents

Virtual computer system with dynamic resource reallocation

Info

Publication number
US20080034366A1
Authority
US
United States
Prior art keywords
lpar
load
allocation
reallocation
cpu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/905,517
Inventor
Tsuyoshi Tanaka
Naoki Hamanaka
Toshiaki Tarui
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US11/905,517
Publication of US20080034366A1
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083: Techniques for rebalancing the load in a distributed system
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/30: Monitoring
    • G06F11/34: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F11/3433: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment for load management
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061: Partitioning or combining of resources
    • G06F9/5077: Logical partitioning of resources; Management or configuration of virtualized resources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00: Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/81: Threshold
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00: Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/815: Virtual
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00: Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/88: Monitoring involving counting

Definitions

  • The present invention relates to a virtual computer system and, in particular, to a technique for dynamically reallocating resources to a virtual computer according to the load of the virtual computer.
  • In a virtual computer system, physical resources such as a CPU, a main memory, and an IO are logically partitioned and allocated to each virtual computer LPAR (Logical Partition) realized on the virtual computer system.
  • Mechanisms which allocate physical resources to a virtual computer dynamically in a virtual computer system are disclosed in Japanese Laid-Open Patent Publications No. 6-103092 and No. 6-110715.
  • When the allocation of physical resources to an LPAR is to be changed, a reallocation demand is issued either by operation of an operator or in a time-driven manner (triggered when a timer reaches a set time).
  • the hypervisor dynamically changes the allocation of LPAR according to a resource allocation method configured before the operator issues the reallocation demand.
  • the hypervisor also includes a monitoring function for collecting a system operation condition such as a CPU time of each LPAR.
  • In these devices, operators need to decide the allocation of physical resources; it is not possible to reallocate resources automatically based on the monitored system operation condition.
  • According to Japanese Laid-Open Patent Publication No. 9-26889, a device is suggested in which one LPAR queries a hypervisor about the CPU time of another LPAR within the same virtual computer system, and, when there is a difference between the actual CPU time and the set allocated CPU time, the allocated CPU time is matched with the actual CPU time.
  • However, the CPU time does not always represent the load condition of the computer system correctly.
  • Moreover, it is difficult to improve the responsiveness of the system by simply increasing the CPU time.
  • A virtual computer system includes methods of: operating a plurality of LPARs on a physical computer; dynamically reallocating physical resources such as CPUs or main memory among the LPARs by a hypervisor; and measuring the load of the system, such as the swap frequency of the main memory, the length of the queue for execution of process, the CPU occupation rate of each LPAR, and the response time of application program processes. Based on the load of each LPAR measured by the measuring method, resource reallocation for the LPARs is conducted by changing the amount of physical resources allocated to each LPAR.
  • FIG. 1 is a diagram illustrating a first embodiment of the present invention
  • FIG. 2 is a diagram illustrating a configuration example of a physical computer system composed of one virtual computer system according to all embodiments of the present invention
  • FIG. 3 is an overview illustrating a virtual computer system according to embodiments of the present invention
  • FIG. 4 is a diagram illustrating area allocation of a main memory device according to embodiments of the present invention.
  • FIG. 5 is a diagram illustrating LPAR information table according to embodiments of the present invention.
  • FIG. 6 is a diagram illustrating a configuration of a hypervisor according to embodiments of the present invention.
  • FIG. 7 is a diagram illustrating a table for regulating a CPU allocation rate for every LPAR according to the above-described embodiment
  • FIG. 8 is a diagram illustrating a table showing a load condition of CPU for every LPAR according to the above-described embodiment
  • FIG. 9 is a diagram illustrating reallocation policy table according to the above-described embodiment.
  • FIG. 10 is a diagram illustrating an action table according to the above-described embodiment.
  • FIG. 11 is a diagram illustrating a CPU allocation time comparison table according to the above-described embodiment.
  • FIG. 12 is a diagram illustrating an average CPU load of a LPAR according to yet another embodiment of the present invention.
  • FIG. 13 is a diagram illustrating sampling data of the CPU load of an LPAR according to another embodiment of the present invention.
  • FIG. 14 is a diagram illustrating spectrum distribution of the sampling data of the CPU load according to the above-described embodiment
  • FIG. 15 is a diagram illustrating a mounting example of a policy server according to the embodiments.
  • FIG. 16 is a diagram for illustrating another mounting example of a policy server
  • FIG. 17 is a diagram illustrating yet another embodiment of the present invention.
  • FIG. 18 is a diagram illustrating an application average response time table for every LPAR according to the above-described embodiment
  • FIG. 19 is a diagram illustrating another mounting example of a reallocation policy generator and a load condition monitoring circuit according to another embodiment of the present invention.
  • FIG. 20 is a diagram illustrating another mounting example of a reallocation policy generator and a load monitor according to another embodiment of the present invention.
  • FIG. 21 is a diagram illustrating another acquisition method of response time of an application program according to another embodiment of the present invention.
  • FIG. 22 is a diagram illustrating a chart showing a dealing content with respect to a load condition according to another embodiment of the present invention.
  • FIG. 23 is a flow-chart illustrating a process for conducting a sequential dealing in accordance with FIG. 22 .
  • FIG. 24 is a diagram illustrating correspondence between agreement fee and agreement class of a user at a data center
  • FIG. 25 is a diagram illustrating correspondence among an agreement class, priority, upper load threshold and lower load threshold,
  • FIG. 26 is a diagram illustrating correspondence among a customer, a customer's agreement class, and an occupied LPAR, and
  • FIG. 27 is a flowchart of a management program at a data center.
  • An embodiment includes methods of: operating a plurality of LPARs on a physical computer; dynamically reallocating physical resources such as CPUs or main memory among the LPARs by a hypervisor; and measuring the response time of application program processes and the load of the system, such as the swap frequency of the main memory, the length of the queue for execution of process, and the CPU occupation rate of each LPAR. Based on the load of each LPAR measured by the measuring method, resource reallocation for the LPARs is conducted by changing the amount of physical resources allocated to each LPAR.
  • As the OS operated on the LPAR, an OS having a function of dynamically changing the number of CPUs during operation and changing the main memory amount is used, so that reallocation of the number of CPUs or the main memory amount of the LPAR is conducted according to the load of the LPAR.
  • The load of each LPAR after reallocation is measured to determine whether the load of an LPAR that was high before the reallocation has become low.
  • If the reallocation is not effective, the allocation is put back to the pre-reallocation state, so that reallocation of the LPAR is conducted appropriately.
  • FIG. 2 shows a physical computer system configuration composing a virtual computer system which is common to all embodiments of the present invention. There may be a plurality of the physical computer systems.
  • FIG. 2 shows a tightly coupled multiprocessor which is the physical computer composing the virtual computer system.
  • Reference numerals 10, 11, . . . , and 1n respectively denote CPU0, CPU1, . . . , and CPUn.
  • Reference numeral 20 denotes a main memory device.
  • Reference numerals 30, 31, . . . , and 3m respectively denote I/O devices I/O0, I/O1, . . . , and I/Om.
  • Reference numeral 40 denotes a hypervisor which controls the whole virtual computer system by residing in the main memory.
  • FIG. 3 shows an overview of a virtual computer system. What is shown in FIG. 3 is one virtual computer system corresponding to the physical computer system shown in FIG. 2 .
  • Reference numeral 40 denotes the hypervisor.
  • Reference numerals 50 , . . . , and 5 k denote virtual computers LPAR 0 , . . . LPARk.
  • Reference numerals 50-0, . . . , and 50-n denote logical processors LP0, . . . , and LPn contained in the LPAR0, and reference numerals 5k-0, . . . , and 5k-n denote logical processors LP0, . . . , and LPn contained in the LPARk.
  • Each LPAR includes a plurality of logical processors LP because a physical configuration is a multiprocessor system.
  • FIG. 4 shows an overview of the main memory device 20 .
  • In the main memory device 20, there are areas allocated to the hypervisor and to each LPAR.
  • FIG. 5 shows an LPAR information table 100 .
  • the LPAR information table 100 shows allocation of physical resources of each LPAR.
  • Reference numeral 101 denotes a field showing the name of the LPAR, reference numeral 102 denotes a field showing the start address of the area on the main memory device which is allocated to each LPAR, and reference numeral 103 denotes a field defining the physical main memory capacity of each LPAR.
  • Reference numeral 104 denotes a field defining percentage of allocated CPU time which is allocated to each LPAR. Based on the percentage of allocated CPU time, the hypervisor 40 allocates CPU 10 , . . . , and CPU 1 n to LPAR 0 , . . . , and LPARk in time divided manner.
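  • For illustration only, the following minimal Python sketch (not part of the original disclosure; all names and values are hypothetical) models the four fields of the LPAR information table 100 described above and checks that the time-divided CPU shares sum to 100%.

```python
from dataclasses import dataclass

@dataclass
class LparInfo:
    """One row of the LPAR information table 100 (FIG. 5), as described above."""
    name: str                 # field 101: LPAR name
    mem_start_addr: int       # field 102: start address of the LPAR area in main memory
    mem_capacity_mb: int      # field 103: physical main memory capacity
    cpu_time_percent: float   # field 104: percentage of allocated CPU time

# Hypothetical example table; the concrete values are illustrative only.
lpar_table = [
    LparInfo("LPAR0", 0x00000000, 1024, 37.5),
    LparInfo("LPAR1", 0x40000000, 1024, 37.5),
    LparInfo("LPAR2", 0x80000000, 2048, 25.0),
]

# The hypervisor time-divides the CPUs according to field 104, so the
# shares of all LPARs are expected to sum to 100%.
assert abs(sum(l.cpu_time_percent for l in lpar_table) - 100.0) < 1e-6
```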
  • FIG. 6 shows a configuration of the hypervisor.
  • the hypervisor 40 is composed of: a scheduler 200 for scheduling each LPAR; a resource manager 201 for managing physical resources allocated to each LPAR; an LPAR controller 202 for controlling operation commands and the like to each LPAR; a logical processor controller 203 for controlling a logical processor 204 in which an operating system of each LPAR is conducted; a frame controller 205 for controlling a frame as a screen of a system console for inputting information for an operator to operate the LPAR, and a frame as a screen having information notifying the operator about a condition of the LPAR; a reallocation plan controller 206 for planning allocation of physical resources of the LPAR; and a load monitor 207 for monitoring a condition of a load applied to each LPAR.
  • CPU0 is used 100% by the LPAR0, CPU1 is used 50% by the LPAR0 and 50% by the LPAR1, and CPU2 and CPU3 are used 100% by the LPAR1 and the LPAR2, respectively.
  • The reallocation policy controller 206 and the load monitor 207 according to the present invention are not limited to being mounted as a part of the hypervisor; alternatively, they may be mounted as a user program operated on an operating system.
  • a computer in which a program having functions as the reallocation policy controller 206 and the load monitor 207 is operated, is called a policy server.
  • The policy server, as shown in FIG. 15, may be a specific LPAR 5x on the virtual computer system operating LPAR50, . . . , and LPAR5k, which are the examination targets regarding the load condition.
  • a policy server for measuring a load of LPAR operated on the physical computer 60 - 1 may be mounted on the physical computer 60 - x .
  • On the physical computer 60-x, either a single OS or a plurality of LPARs may be operated.
  • the LPAR 5 k or the physical computer 60 - x is not exclusive for the policy server, but may conduct other application processes.
  • load condition means a length of queue for execution of process or a CPU occupation rate.
  • An operator of a virtual computer system sets, in a frame, a demand for examining a load condition of LPAR and time interval of the examination of the load.
  • the frame controller 205 notifies the load monitor 207 of a monitoring demand and a monitoring interval of the LPAR load condition through the scheduler 200 ( 300 , 301 ).
  • The load monitor 207 notifies the LPAR controller 202 of a load condition examination demand (302, 303) through the scheduler 200.
  • the LPAR controller 202 examines a load condition of each logical processor 204 with respect to each logical processor controller 203 ( 305 ), and issues a demand ( 304 ) for transferring the examination results ( 306 , 307 ) to the load monitor 207 .
  • The load monitor 207 saves the load condition of each LPAR inside itself. The amount of load condition information to be saved is specified by the operator at the frame; the frame controller 205 notifies the load monitor 207 of the amount through the scheduler 200.
  • FIG. 8 shows an example of a load condition of each LPAR. This shows each CPU occupation rate and a length of queue for execution of process of task or thread for every LPAR.
  • A demand 310 to collect the information is notified to the logical processor controller 203 of each LPAR from the LPAR controller 202.
  • The logical processor controller 203 sends an interruption to the OS1 operating on each LPAR through the logical processor 204.
  • the logical processor controller 203 transfers the examination result ( 306 , 307 ) to the load monitor 207 .
  • An average load condition is calculated for a certain period of time (for one hour, for example), and when it exceeds a threshold set at the frame by the operator, a reallocation demand for each LPAR is issued to the reallocation policy generator 206 (320).
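  • As a minimal sketch of this step (hypothetical names and thresholds; not the patented implementation itself), the load monitor can average the sampled load over the monitoring period and issue a reallocation demand when the average exceeds the operator-set threshold.

```python
def average_load(samples):
    """Average of load samples (e.g. CPU occupation rate or run-queue length)
    collected over the monitoring period, such as one hour."""
    return sum(samples) / len(samples)

def check_and_demand_reallocation(load_history, threshold, issue_demand):
    """load_history: {lpar_name: [samples]}.  issue_demand is a placeholder for
    the reallocation demand (320) sent to the reallocation policy generator 206."""
    overloaded = [name for name, samples in load_history.items()
                  if average_load(samples) > threshold]
    if overloaded:
        issue_demand(overloaded)
    return overloaded

# Hypothetical sampled CPU occupation rates (%) per LPAR over one hour.
history = {"LPAR0": [92, 95, 97], "LPAR1": [40, 35, 42], "LPAR2": [20, 25, 22]}
check_and_demand_reallocation(history, threshold=80.0,
                              issue_demand=lambda lpars: print("reallocation demand for", lpars))
```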
  • The reallocation policy generator 206 reads the load condition of each LPAR from the load monitor 207 (330), and the current CPU allocation of each LPAR is read from the resource manager 201 (331). Next, the reallocation policy generator 206 internally generates a reallocation policy table 900 (FIG. 9) from the current CPU allocation and the load condition. FIG. 9 shows a condition in which, based on the current CPU allocation stored in the resource manager 201 shown in FIG. 7, it is identified that the CPU load of the LPAR0 is high, so the CPU allocation to the LPAR0 is increased.
  • The policy of reallocation differs depending on which item of the system load information is observed. If the CPU is the problem, it is possible to form a policy to increase the CPU allocation time or to increase the number of CPUs. Moreover, the policy differs depending on whether or not the OS operating on each LPAR has a function of increasing and decreasing the number of CPUs at start-up.
  • There are two kinds of OS: one can activate a new CPU without terminating the OS; the other has to be reset once and rebooted in order to change the number of CPUs to be activated. If an OS is unable to increase or decrease the number of CPUs during operation, the only option is to change the CPU allocation time.
  • If the OS can change the number of CPUs at the time of OS activation, it is possible to change the number of CPUs as well as the CPU allocation time.
  • In the following description, an OS with a function of changing the number of CPUs at activation is used.
  • For the main memory, the frequency of paging (rewriting of a page in real memory) or of swapping (swapping out of application programs) is examined, and if the frequency is high, it is possible to form a policy to increase the main memory amount.
  • an example is shown, in which reallocation of each LPAR is conducted based on the load condition of the CPU.
  • Inside the reallocation policy generator 206, there is a table (FIG. 10) that provides a corresponding action for each type of observed load, together with the threshold above which the load is identified as heavy and the priority with which the load is dealt with in a high-load case.
  • This table is set by the operator at the frame, and the frame controller 205 notifies the reallocation policy generator 206 via the scheduler 200 that the table has been written (340, 341).
  • The reallocation policy generator 206 receives the notification, reads the data from the frame controller 205 (342, 343), and writes it into the correspondence table inside the reallocation policy generator 206.
  • In this example, the CPU allocation time is increased because of the high CPU occupation rate of the LPAR0, and the number of simultaneously executable processes is increased by increasing the number of CPUs allocated to the LPAR0 because of its long queue for execution of process, thereby reducing the load.
  • To decide the amounts, a method is applied in which, for each range of average CPU occupation ratio as shown in FIG. 11, a less loaded LPAR offers a certain percentage of its CPU time to another, heavily loaded LPAR so that it can be allocated thereto.
  • The reallocation policy in the reallocation policy table 900 shown in FIG. 9 is formed based on this allocation.
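  • The following sketch (hypothetical rule values and function names, in the spirit of FIG. 10 and FIG. 11, not the actual tables of the patent) illustrates how a new CPU-time allocation could be derived by letting less loaded LPARs offer a fixed percentage of their CPU time to the heavily loaded LPAR.

```python
# Hypothetical donation rule in the spirit of FIG. 11: the lower an LPAR's
# average CPU occupation, the larger the share of its CPU time it offers.
DONATION_RULE = [
    (30.0, 0.20),    # occupation below 30%  -> offer 20% of own CPU time
    (60.0, 0.10),    # occupation below 60%  -> offer 10%
    (100.0, 0.00),   # otherwise             -> offer nothing
]

def donated_fraction(occupation):
    for upper_bound, fraction in DONATION_RULE:
        if occupation < upper_bound:
            return fraction
    return 0.0

def build_reallocation_policy(current_alloc, occupation, overloaded_lpar):
    """Return a new {lpar: cpu_time_percent} table (a reallocation policy table
    900 in spirit); less loaded LPARs donate CPU time to the overloaded LPAR."""
    new_alloc = dict(current_alloc)
    gained = 0.0
    for lpar, percent in current_alloc.items():
        if lpar == overloaded_lpar:
            continue
        give = percent * donated_fraction(occupation[lpar])
        new_alloc[lpar] -= give
        gained += give
    new_alloc[overloaded_lpar] += gained
    return new_alloc

# Hypothetical current allocation (FIG. 7 style) and measured occupation (FIG. 8 style).
current = {"LPAR0": 37.5, "LPAR1": 37.5, "LPAR2": 25.0}
load = {"LPAR0": 95.0, "LPAR1": 50.0, "LPAR2": 20.0}
print(build_reallocation_policy(current, load, overloaded_lpar="LPAR0"))
```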
  • the reallocation policy generator 206 issues a reallocation demand to the scheduler 200 ( 350 ). At the same time, it issues, to the load monitor 207 , a demand to stop measuring performance ( 351 ).
  • A reallocation procedure for an LPAR is conventionally conducted by operator action or by time-driven scheduling; in the present invention, however, it is conducted by the hypervisor, which issues the reallocation demand when the load on the system exceeds the threshold.
  • the scheduler 200 reads the reallocation policy table 900 in FIG. 9 inside the reallocation policy generator 206 ( 380 ). Then, it rewrites a LPAR information table inside the resource manager 201 ( 381 ) so as to direct allocation change to each LPAR controller 202 .
  • the LPAR controller 202 stops the OS 1 of the logical processor 204 which belongs to the LPAR to be reallocated ( 360 , 361 , 362 ). Next, the LPAR controller 202 issues a demand to read the LPAR information table of the resource manager 201 ( 364 ). The allocation that has just been read ( 365 ) is stored inside the LPAR controller 202 .
  • the LPAR controller 202 instructs, to each logical processor controller 203 , re-operation of the OS to the logical processor 204 ( 370 , 371 , 372 ).
  • After rebooting the OS1, the LPAR controller notifies the reallocation policy generator 206 and the load monitor 207 of the completion of LPAR reallocation (375).
  • the load monitor 207 issues a demand to examine a load condition of a LPAR to the LPAR controller 202 as described above ( 302 , 303 ). With the processes above, a change of resource allocation is completed.
  • When only the time-divided CPU allocation time is changed, the scheduler 200 simply executes the newly defined CPU allocation time, so the allocation change is completed by the process inside the hypervisor. If a new CPU (logical processor) is to be added, the LPAR controller 202 notifies the OS on the LPAR, directly or via the logical processor controller 203, of the newly allocated CPU (logical processor) by an interruption or the like. Thereby, the OS on the LPAR itself sends to the corresponding LPAR controller 202 a command to boot the newly added CPU (logical processor).
  • In this way, resource allocation such as the CPU allocation time or the main memory capacity is changed based on the measured load conditions. This makes it possible to comprehend the degree of the LPAR load more accurately than by measuring the CPU time alone, and to allocate more resources to a heavily loaded LPAR without step-by-step commands from the operator.
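  • The two completion paths described above can be summarized by the following sketch (hypothetical names; the notification callback merely stands in for the interruption issued by the LPAR controller 202): if only the time-divided allocation changes, the hypervisor's scheduler handles it alone; if a logical processor is added, the guest OS must be told so that it can boot the new CPU itself.

```python
def apply_reallocation(policy, old_info, notify_os_of_new_cpu):
    """policy/old_info map an LPAR name to (cpu_time_percent, num_logical_cpus).
    notify_os_of_new_cpu stands in for the interruption by which the LPAR
    controller 202 tells the guest OS about a newly allocated logical processor."""
    for lpar, (new_percent, new_cpus) in policy.items():
        old_percent, old_cpus = old_info[lpar]
        if new_cpus == old_cpus:
            # Only the time-divided CPU allocation changes: the scheduler simply
            # starts using the new time slices inside the hypervisor.
            print(f"{lpar}: scheduler switches CPU time {old_percent}% -> {new_percent}%")
        elif new_cpus > old_cpus:
            # A logical processor is added: the guest OS is notified and then
            # boots the new CPU by itself.
            notify_os_of_new_cpu(lpar, new_cpus - old_cpus)

# Hypothetical example.
apply_reallocation(
    policy={"LPAR0": (50.0, 3), "LPAR1": (25.0, 2)},
    old_info={"LPAR0": (37.5, 2), "LPAR1": (37.5, 2)},
    notify_os_of_new_cpu=lambda lpar, n: print(f"{lpar}: OS boots {n} new logical CPU(s)"),
)
```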
  • the load condition of the application used herein is a response time of the application program process.
  • it means the response time of a transaction process such as retrieving a table from a data base so as to update contents of the table.
  • An operator of the virtual computer system requests to examine the load condition of the LPAR in a frame, and set a time interval to be examined.
  • the frame controller 205 notifies the load monitor 207 of a monitoring demand and a monitoring interval about the LPAR load condition through the scheduler 200 ( 300 , 301 ).
  • the load monitor 207 notifies the LPAR controller 202 of a load condition examination demand ( 302 , 303 ) through the scheduler 200 .
  • the LPAR controller 202 examines a load condition of each logical processor 204 with respect to each logical processor controller 203 in the set monitoring interval ( 305 ), and issues a demand ( 304 ) to transfer the examination results ( 306 , 307 ) to the load monitor 207 .
  • The load monitor 207 saves the load condition of each LPAR inside itself. The amount of load condition information to be saved is specified by the operator at the frame; the frame controller 205 notifies the load monitor 207 of the amount through the scheduler 200.
  • a demand 310 to collect the load condition of the application is notified to the logical processor controller 203 of each LPAR from the LPAR controller 202 .
  • the logical processor controller 203 sends an interruption signal to the logical processor 204 and the OS 1 .
  • a signal to demand for the load condition information of an application 400 is sent to the application 400 ( 313 , 314 ).
  • the load condition of the application 400 is transferred to the load monitor 207 through the logical processor controller 203 ( 315 , 311 , 312 ).
  • the CPU load condition as shown in Embodiment 1 is transferred to the load monitor 207 at the same time.
  • an average load condition is calculated for a certain period of time (for one hour, for example) ( FIG. 18 ).
  • When the measured response time exceeds a threshold set at the frame by the operator, a reallocation demand for each LPAR is issued to the reallocation policy generator 206 (320). For example, if the threshold of the response time is 5 seconds, then according to the response time distribution shown in FIG. 18, reallocation of the physical resources of the LPAR0 is conducted so as to improve its performance.
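  • A minimal sketch of this response-time check (hypothetical values; the 5-second figure is the example threshold mentioned above) selects the LPARs whose average application response time exceeds the threshold.

```python
def lpars_needing_more_resources(avg_response_time, threshold_seconds=5.0):
    """avg_response_time: {lpar: average application response time in seconds},
    in the spirit of the per-LPAR response time table (FIG. 18).  Returns the
    LPARs whose physical resource allocation should be increased."""
    return [lpar for lpar, t in avg_response_time.items() if t > threshold_seconds]

# Hypothetical averages; with a 5-second threshold, only LPAR0 is selected.
print(lpars_needing_more_resources({"LPAR0": 7.2, "LPAR1": 1.4, "LPAR2": 0.9}))
```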
  • The reallocation policy generator 206 reads the load condition of each LPAR from the load monitor 207 (330), and the current CPU allocation of each LPAR is read from the resource manager 201 (331). Next, the reallocation policy generator 206 internally generates a reallocation policy table 900 (FIG. 9) from the current CPU allocation and the load condition. Inside the reallocation policy generator 206, there is a table (FIG. 10) that provides a corresponding action for each type of observed load, together with the threshold above which the load is identified as heavy and the priority with which the load is dealt with in a high-load case.
  • This table is set by the operator at the frame, and the frame controller 205 notifies the reallocation policy generator 206 via the scheduler 200 that the table has been written (340, 341).
  • The reallocation policy generator 206 receives the notification, reads the data from the frame controller 205 (342, 343), and writes it into the correspondence table inside the reallocation policy generator 206.
  • the reallocation policy becomes different depending on observed information in the load condition of the system.
  • The CPU allocation time to be added to the LPAR0 is calculated; that is, a process similar to that of Embodiment 1 is conducted.
  • The CPU allocation time is increased because of the high CPU occupation rate of the LPAR0, and the number of simultaneously executable processes is increased by increasing the number of CPUs allocated to the LPAR0 because of its long queue for execution of process, thereby taking an action to reduce the load.
  • To decide the amounts, a method is applied in which, for each range of average CPU occupation ratio as shown in FIG. 11, a less loaded LPAR offers a certain percentage of its CPU time to another, heavily loaded LPAR so that it can be allocated thereto.
  • FIG. 9 is a table having the reallocation policy formed therein based on the allocation described above.
  • the reallocation policy generator 206 issues the reallocation demand to the scheduler 200 ( 350 ). Simultaneously, it issues a demand to stop measuring performance to the load monitor 207 ( 305 ).
  • A reallocation procedure for an LPAR is conventionally conducted by operator action or by time-driven scheduling; in the present invention, however, it is conducted by the hypervisor, which issues the reallocation demand when the load on the system exceeds the threshold.
  • the scheduler 200 reads the reallocation policy table inside the reallocation policy generator 206 . Then, it rewrites a LPAR information table inside the resource manager 201 so as to direct allocation change to each LPAR controller 202 .
  • the LPAR controller 202 stops the OS 1 of the logical processor 204 which belongs to the LPAR to be reallocated ( 360 , 361 , 362 ). Next, the LPAR controller 202 issues a demand to read the LPAR information table of the resource manager 201 ( 364 ). The allocation that has just been read ( 365 ) is stored inside the LPAR controller 202 .
  • The LPAR controller 202 instructs each logical processor controller 203 to resume operation of the OS of the logical processor 204 (370, 371, 372). After rebooting the OS1, the LPAR controller notifies the reallocation policy generator 206 and the load monitor 207 of the completion of reallocation of the LPAR (375). The load monitor 207 issues an examination demand for the load condition of the LPAR to the LPAR controller 202 as described above (302, 303). With the above-described process, the change of resource allocation is completed.
  • When only the time-divided CPU allocation time is changed, the scheduler 200 simply executes the newly defined CPU allocation time, so the allocation change is completed by the process inside the hypervisor. If a new CPU (logical processor) is to be added, the LPAR controller 202 notifies the OS on the LPAR, directly or via the logical processor controller 203, of the newly allocated CPU (logical processor) by an interruption or the like. Thereby, the OS on the LPAR itself sends to the corresponding LPAR controller 202 a command to boot the newly added CPU (logical processor). As such, the reallocation is completed. Then, the load monitor 207 restarts monitoring the load conditions of each LPAR.
  • the present embodiment is an example of a system, in which the reallocation policy generator 206 and the load monitor 207 are mounted to a program operating on a certain LPAR provided in the same virtual computer system according to Embodiment 2.
  • FIG. 19 shows a configuration of the present embodiment.
  • A monitoring program 190 executed on the LPAR5x issues physical resource reallocation demands and monitors the load conditions of LPAR50, . . . , LPAR5k.
  • the monitoring program 190 on the LPAR 5 x transfers the load condition examination demand to each LAPR 50 , . . . LPAR 5 k .
  • As a communication method between LPARs, the following methods are known, as shown in Japanese Laid-Open Patent Publication No. 10-301795: a method in which the hypervisor virtually emulates the communication; a method using an IO channel; and a method in which CPUs within the LPARs use a channel to communicate with a computer outside the LPARs.
  • any method can be applied, but in the present embodiment, the method, in which the hypervisor emulates the communication path between LPARs, is used.
  • the monitoring program 190 demands load conditions of other LPAR 50 , . . . , LPAR 5 k ( 500 ).
  • Each LPAR receiving the demand transfers load information (the CPU occupation rate, a length of queue for execution of process as in Embodiment 1, and a process response time of an application as in Embodiment 2) to the LPAR 5 x ( 501 ).
  • An issuing timing 510 of the load condition examination demand 500 is set in the monitoring program 190 by an operator.
  • Similar to the load monitor 207 of Embodiment 1, the operator sets a load threshold 511 in advance, which is held inside the monitoring program 190. When a load exceeding the threshold 511 is observed, the monitoring program 190 issues a demand to the hypervisor 40 for notification of the current resource allocation (502), and receives the resource allocation information from the hypervisor 40 (503).
  • A load action table 512 describes combinations of load conditions and the corresponding allocation changes, such as changes to the CPU allocation time or the number of CPUs; it is set by the operator in the monitoring program 190.
  • From the load action table 512 and the load condition, an allocation policy table 513 showing the new resource allocation policy is generated.
  • Since the reallocation policy table 513 is generated by the method shown in Embodiment 1, its description is omitted in the present embodiment.
  • the monitoring program 190 issues the reallocation policy table 513 and the reallocation demand 505 to the hypervisor 40 , the reallocation demand including a command for demanding reallocation of allocated resources to the LPAR 50 , . . . , LPAR 5 k .
  • the hypervisor 40 transfers the reallocation completion acknowledgment 506 to the monitoring program 190 after completion of reallocation.
  • reallocation of the LPAR reflecting the load condition is completed.
  • The monitoring program 190 then restarts monitoring of the load condition of each LPAR.
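  • The monitoring cycle described in this embodiment can be sketched as follows (all callables are hypothetical stand-ins for the demands 500 to 506; the patent defines no programming interface for them).

```python
import time

def monitoring_loop(get_load, get_allocation, build_policy, demand_reallocation,
                    threshold, interval_seconds):
    """Sketch of the monitoring program 190 running as a user program on the
    policy-server LPAR: poll the load of every LPAR at the set interval (510);
    when the threshold (511) is exceeded, read the current allocation, build a
    new policy table (513) and demand reallocation from the hypervisor."""
    while True:
        loads = get_load()                    # demand 500, reply 501
        if any(value > threshold for value in loads.values()):
            current = get_allocation()        # demand 502, reply 503
            policy = build_policy(current, loads)
            demand_reallocation(policy)       # demand 504/505, completion 506
        time.sleep(interval_seconds)          # issuing timing 510
```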
  • the above description shows the system having the reallocation policy generator 206 and the load monitor 207 mounted on the monitoring program operating on the certain LPAR provided in the same virtual computer system.
  • In this configuration, the reallocation algorithm does not reside in the hypervisor, and thus there is no need to make the hypervisor, the core of the system, operable by the operator. Therefore, there is no concern about a security problem or about an operator action causing trouble in the hypervisor.
  • Even if a malfunction occurs in the reallocation policy, by setting the hypervisor to monitor unreasonable requests, a malfunction of the whole system can be prevented.
  • the present embodiment is an example of a system, in which the monitoring program 190 operating on the certain LPAR provided in the same virtual computer system in Embodiment 3 is mounted on another physical computer.
  • FIG. 20 shows a configuration of the present embodiment.
  • the monitoring program 190 executing on LPAR 60 x - x on a physical computer 60 - x issues a reallocation demand of physical resources and monitoring of load conditions of LPAR 600 - 0 , . . . , LPAR 600 - k of a physical computer 60 - 0 .
  • the monitoring program 190 on the LPAR 60 x - x issues the load condition examination demand 500 -A to the physical computer 60 - 0 .
  • a hypervisor 40 - 0 of the physical computer 60 - 0 transfers the load condition examination demand to each of the LAPR 600 - 0 , . . . , LPAR 600 - k .
  • The hypervisor 40-0 (40-x) on the physical computer 60-0 (60-x) communicates with the other physical computer 60-x (60-0) by using the IO channel.
  • the monitoring program 190 is mounted as a program on the LPAR.
  • Alternatively, a single computer may be operated instead of the virtual computer system; that is, the LPAR60x-x may be a single physical computer.
  • the monitoring program 190 demands load conditions of LPAR 600 - 0 , . . . , LPAR 600 - k mounted on another physical computer 60 - 0 through I/O 520 - x ( 500 -A, 501 -A).
  • Each LPAR receiving the demand transfers load information (the CPU occupation rate, a length of queue for execution of process as in Embodiment 1, and a process response time of an application as in Embodiment 2) to the LPAR 60 x - x ( 500 -B, 501 -B).
  • An issuing timing 510 of the load condition examination demands 500 -A and 500 -B is set in the monitoring program 190 by the operator.
  • Similar to the load monitor 207 of Embodiment 1, the operator sets a load threshold 511 in advance, which is held inside the monitoring program 190.
  • the monitoring program 190 issues a demand to notify the hypervisor 40 - 0 of the current resource allocation ( 502 -A), and receives the resource allocation information from the hypervisor 40 - 0 ( 502 -B, 503 -B).
  • A load action table 512 describes combinations of load conditions and the corresponding allocation changes, such as changes to the CPU allocation time or the number of CPUs; it is set by the operator in the monitoring program 190.
  • From the load action table 512 and the load condition, an allocation policy table 513 showing the new resource allocation policy is generated. Since the reallocation policy table 513 is generated by the method shown in Embodiment 1, its description is omitted in the present embodiment.
  • The monitoring program 190 issues to the hypervisor 40-0 the allocation demands 504-A and 504-B, which include a command demanding reallocation of the resources allocated to the LPAR600-0, . . . , LPAR600-k on the physical computer 60-0, together with the reallocation policy table 513.
  • the hypervisor 40 - 0 transfers the reallocation completion acknowledgment 505 -A and 505 -B to the monitoring program 190 after completion of reallocation.
  • reallocation of LPAR which reflects the load condition is completed.
  • The monitoring program 190 then restarts monitoring the load conditions of each LPAR.
  • In Embodiment 2, in order to examine the response time of application programs, an interruption to the OS is generated from the hypervisor, and the OS sends a signal to the application program so as to demand the response time of the process measured by the application program.
  • Referring to FIG. 21, a method will now be described for examining the load condition of an application program operated on an LPAR without any particular interface to the application program.
  • In this case, the application program does not measure the response time, or, even if it does, there is no interface through which to read it out.
  • a procedure of changing physical resource allocation of the application follows that of Embodiment 4, and thus, description of the LPAR reallocation is omitted in the present embodiment.
  • In FIG. 21, there are provided physical computers 60-0 and 60-x and a network 61 linking them, and on the physical computers, LPAR600-0, LPAR600-k, LPAR60x-0, LPAR60x-k, and LPAR60x-x are operated.
  • On an LPAR, an application program 195 such as a WWW (World Wide Web) server is operated.
  • the monitoring program 190 operating in LPAR 60 x - x on the physical computer 60 - x issues an access demand 700 of data to the application program 195 .
  • If the application program 195 is the WWW server, a demand to read homepages is issued.
  • the application program 195 issues a response 701 for the demand 700 .
  • The response time from the issue of the demand 700 until reception of the response 701 is recorded in a response time history 703.
  • The demand 700 is issued at an interval that does not degrade the performance of the application program 195; alternatively, the interval may be set in advance by the operator within the monitoring program (not shown).
  • the monitoring program 190 observes transition of the response time history 703 .
  • When the response time becomes long, the monitoring program 190 demands that the allocation of physical resources of the LPAR on which the application program with the long response time is operating be increased.
  • The procedure to change the resource allocation follows that of Embodiment 4.
  • Moreover, the monitoring program 190 may gather the response time history over a relatively long period (several days, for example) to find regularity in the load fluctuations, so that the physical resource allocation of the LPAR may be changed in a planned manner according to the cycle of the fluctuation.
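  • For illustration, the probing described above can be sketched with the Python standard library (the URL and the probing parameters are placeholders; the real monitoring program would address the monitored application program 195).

```python
import time
import urllib.request

def probe_response_time(url, timeout=10.0):
    """Issue one access demand (700) to the application program (for example a
    WWW server) and return the elapsed time until the response (701) arrives."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return time.monotonic() - start

def record_history(url, history, probes=3, pause_seconds=1.0):
    """Append response times to the response time history (703).  The probing
    interval is chosen so as not to degrade the application's performance."""
    for _ in range(probes):
        history.append(probe_response_time(url))
        time.sleep(pause_seconds)
    return history

# Hypothetical usage; the URL is only a placeholder for the monitored server.
# history = record_history("http://example.com/", [])
```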
  • In this way, when the application program does not measure the response time, or when there is no interface for reading out a measurement result, the monitoring program itself issues an access demand for data and measures the duration until the corresponding response is received.
  • Thus, the response time of the application program is comprehended even if the application program does not measure it or provides no interface for reading it.
  • In Embodiments 2 and 3, action plans corresponding to load conditions are determined in advance.
  • By preparing a table (FIG. 22) that lists possible physical resource allocation policies in order of priority, actions can be taken sequentially with the procedure shown in FIG. 23.
  • Below, the procedure shown in FIG. 23 will be described.
  • Load conditions of LPARs are gathered in the manner shown in the previous embodiments.
  • preparation for reallocation of LPARs starts ( 800 ).
  • First, the action plan of priority 1 is carried out (801).
  • If the action has no effect, the allocation is reverted to the previous allocation (806).
  • Whether all action plans have been taken is confirmed (804); if not, the next action plan is implemented (805). After all plans have been implemented, it is checked whether any action has been effective (806).
  • In this way, a plurality of action plans are prepared along with their priorities so that one or more of them contribute to lowering the load; actions effective for lowering the load are thus selected by trial.
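  • A compact sketch of this trial procedure follows (hypothetical callables and a toy load model; the flow mirrors FIG. 23 only in outline): apply the highest-priority plan, keep it if the load drops below the threshold, otherwise revert it and try the next plan.

```python
def try_action_plans(plans, apply_plan, revert_plan, measure_load, threshold):
    """plans: action plans ordered by priority (FIG. 22 in spirit).  Each plan
    is applied (801), the load is measured, the first effective plan is kept,
    and an ineffective plan is reverted (806) before the next one is tried."""
    for plan in plans:
        apply_plan(plan)
        if measure_load() <= threshold:
            return plan          # an effective action was found
        revert_plan(plan)        # no effect: go back to the previous allocation
    return None                  # all plans tried without sufficient effect

# Hypothetical usage with a toy model of the system state.
state = {"cpu_share": 25.0}
deltas = {"add_cpu_time": 5.0, "add_cpu": 20.0, "add_memory": 10.0}
chosen = try_action_plans(
    ["add_cpu_time", "add_cpu", "add_memory"],
    apply_plan=lambda p: state.update(cpu_share=state["cpu_share"] + deltas[p]),
    revert_plan=lambda p: state.update(cpu_share=state["cpu_share"] - deltas[p]),
    measure_load=lambda: 100.0 - state["cpu_share"],   # toy: load falls as the share grows
    threshold=60.0,
)
print(chosen, state)   # "add_cpu" is kept; "add_cpu_time" was tried and reverted
```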
  • There is also an application method in which the allocation changes according to a plan; that is, during hours when the load is high, resources are collected from other LPARs, whereas during nighttime, some of the resources are released to the other LPARs.
  • Below, a method in which a means for finding regularity in the load changes is combined with Embodiment 1 will be described.
  • The load is recorded for several days, and the load monitor examines the rise and fall of the load in the same time zone. For example, as shown in FIG. 12, changes in the average values of the load fluctuation taken at the same hour over several days are examined. Then, a load threshold is set so that the allocation of physical resources is increased during the time zone of high load, while the resources are offered to other LPARs during the low-load time zone. Moreover, the load monitor 207 schedules the resource allocation to return to the initially set amount during time zones other than the above.
  • The resource allocation method is as follows: taking as a basis the system condition with the maximum load during the time zone above the high-load threshold (for example, when the length of the queue for execution of process becomes 3 or more), the reallocation policy generator 206 generates a reallocation plan by the method shown in Embodiment 1 so as to conduct dynamic reallocation of resources. Moreover, in addition to changing the resource allocation periodically, dynamic resource allocation may be conducted with respect to the load conditions so as to conduct fine adjustment for reducing the load.
  • a method for finding changing regularity of the load analytically may be employed.
  • As an analytical method for finding the regularity, fast Fourier transformation (FFT) can be used.
  • The algorithm of the FFT is described in signal processing textbooks such as "Dejitaru Shingou Shori no Kiso" (supervised by Tujii Shigeo, Denshi Jouho Tsushin Gakkai, first edition published Mar. 15, 1998).
  • As shown in FIG. 13, assume that a time series of load conditions with 32 measuring points in time T is provided.
  • Here, the length of the queue for execution of process is used as an example.
  • When the spectrum distribution of the data in FIG. 13 is calculated by FFT, the result becomes as shown in FIG. 14.
  • The LPAR is reallocated according to the allocation configuration generated previously.
  • There is also a case where the allocation changes every half cycle; in that case, allocation of physical resources to the LPARs may be conducted by dividing the time more finely. The procedure of LPAR reallocation is as described in Embodiment 1.
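  • The spectrum analysis described above can be sketched as follows (assuming NumPy is available; the sample data are synthetic and only illustrate finding the dominant load period from 32 equally spaced measurements).

```python
import numpy as np

def dominant_load_period(samples, total_time):
    """samples: equally spaced load measurements (e.g. run-queue length), such
    as the 32 points over time T in FIG. 13.  Returns the period of the
    strongest non-DC spectral component (the peak of FIG. 14, in spirit)."""
    spectrum = np.abs(np.fft.rfft(samples))
    spectrum[0] = 0.0                      # ignore the DC (average) component
    k = int(np.argmax(spectrum))           # dominant frequency bin
    return total_time / k                  # corresponding period

# Synthetic example: a load that repeats 4 times over the window, so the
# dominant period is T/4.
t = np.arange(32)
samples = 3 + 2 * np.sin(2 * np.pi * 4 * t / 32)
print(dominant_load_period(samples, total_time=1.0))   # prints 0.25
```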
  • In the above, loads related to the CPU have been described as examples.
  • The resource allocation of an LPAR may also be changed according to the load conditions of the main memory, and resource allocation of the CPU and the main memory may be conducted simultaneously.
  • As the main memory load, the number of swap or paging operations may be used. Similar to the case of CPU loads according to Embodiment 1, these load conditions are monitored, and an LPAR with a high load is reallocated dynamically so as to increase the main memory allocated to it. An LPAR whose main memory amount is changed stops the OS operating on it, similar to the case where the number of CPUs is increased as shown in Embodiment 1. After the physical resources are allocated (that is, after the amount of main memory is changed), the LPAR controller 202 notifies the OS on the LPAR, directly or through the logical processor controller 203, of the newly allocated main memory by an interruption or the like. Then, the OS on the LPAR itself sends a command to expand into the newly added main memory to the corresponding LPAR controller 202. In the present embodiment, the allocation changing method described in Embodiment 2 or Embodiment 3 may also be applied.
  • a data center administrator makes an agreement with each customer with a content of an agreement table as shown in FIG. 24 .
  • The agreement class 1000 has priority in the order A, B, C, and agreement class A has the highest priority.
  • The agreement fees are determined as PA, PB and PC, respectively.
  • The agreement is made with customers in such a manner that the higher the priority of the agreement is, the more priority is given to the performance guarantee, such as the response time, of the application.
  • the data center administrator sets an allocation 1007 and an agreement class 1006 of LPAR for each customer 1005 as shown in FIG. 26 .
  • the application program of each customer is operated on the LPAR.
  • the LPAR having higher priority 1002 is allocated with physical resources by priority.
  • the data center administrator follows a table shown in FIG. 25 to decide the priority 1002 , an upper threshold 1003 , and a lower threshold 1004 of load conditions of LPAR for every agreement class 1000 .
  • The upper threshold 1003 is the maximum allowable load of the OS or the application operating on the LPAR; that is, it is a numerical value used to judge when to demand an increase of the resource allocation for the LPAR.
  • the lower threshold 1004 is used to judge an opportunity for returning the resource amount allocated to the LPAR to the initial value when the load is smaller than the threshold.
  • a table shown in FIG. 25 is stored inside a means for monitoring load conditions of the virtual computer.
  • The load conditions of the LPARs are observed (950). It is checked whether there is a load condition that exceeds the upper threshold 1003 shown in FIG. 25 (951). If there is no LPAR exceeding the upper threshold 1003, administration continues without changing the LPAR allocation. However, if there is an LPAR exceeding the upper threshold and not all LPARs have a high load (952), then an action against the load can be taken.
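  • One pass of this management flow can be sketched as follows (the table contents are hypothetical stand-ins for FIG. 25 and FIG. 26; the steps 950 to 952 are followed only in outline).

```python
# Hypothetical versions of the tables in FIG. 25 and FIG. 26.
CLASS_PARAMS = {                 # agreement class -> (priority, upper threshold, lower threshold)
    "A": (1, 70.0, 30.0),
    "B": (2, 80.0, 40.0),
    "C": (3, 90.0, 50.0),
}
CUSTOMER_LPARS = {"LPAR0": "A", "LPAR1": "B", "LPAR2": "C"}   # LPAR -> agreement class

def management_step(loads):
    """Observe the loads (950), find LPARs above their upper threshold (951),
    and, unless every LPAR is overloaded (952), return them ordered so that the
    highest-priority agreement class is dealt with first."""
    over = [(CLASS_PARAMS[CUSTOMER_LPARS[lpar]][0], lpar)
            for lpar, load in loads.items()
            if load > CLASS_PARAMS[CUSTOMER_LPARS[lpar]][1]]
    if not over or len(over) == len(loads):
        return []                        # nothing to do, or no spare resources to take
    return [lpar for _, lpar in sorted(over)]

print(management_step({"LPAR0": 85.0, "LPAR1": 60.0, "LPAR2": 95.0}))   # ['LPAR0', 'LPAR2']
```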

Abstract

A virtual computer system includes a reallocation means in which a plurality of LPARs are operated by logically dividing the physical resources composing a physical computer, either exclusively or in a time-divided manner, so that the allocation of physical resources among the LPARs can be changed dynamically. Based on load conditions measured by an application or the OS of each LPAR, the physical resource allocation to each LPAR is determined, thereby conducting LPAR reallocation.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • The present application is a continuation of application Ser. No. 09/942,611, filed Aug. 31, 2001, the contents of which are incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • The present invention relates to a virtual computer system, in particular, to a technique for dynamically reallocating a resource for a virtual computer corresponding to a load of the virtual computer.
  • In a virtual computer system, physical resources such as a CPU, a main memory, and an IO are logically partitioned and allocated to each virtual computer LPAR (Logical Partition) realized on the virtual computer system. Mechanisms which allocate physical resources to a virtual computer dynamically in a virtual computer system are disclosed in Japanese Laid-Open Patent Publications No. 6-103092 and No. 6-110715. In the virtual computer systems disclosed in the above-mentioned publications, when the allocation of physical resources to an LPAR is to be changed, a reallocation demand is issued to a program (hypervisor) for controlling the whole virtual computer system, either by operation of an operator or in a time-driven manner (triggered when a timer reaches a set time). The hypervisor dynamically changes the LPAR allocation according to a resource allocation method configured before the operator issues the reallocation demand.
  • Moreover, the hypervisor also includes a monitoring function for collecting a system operation condition such as the CPU time of each LPAR. In these devices, operators need to decide the allocation of physical resources; it is not possible to reallocate resources automatically based on the monitored system operation condition. However, according to Japanese Laid-Open Patent Publication No. 9-26889, a device is suggested in which one LPAR queries a hypervisor about the CPU time of another LPAR within the same virtual computer system, and, when there is a difference between the actual CPU time and the set allocated CPU time, the allocated CPU time is matched with the actual CPU time. However, the CPU time does not always represent the load condition of the computer system correctly. Moreover, it is difficult to improve the responsiveness of the system by simply increasing the CPU time.
  • SUMMARY OF THE INVENTION
  • Beyond such a simple case, no method has been suggested for automatically adjusting the physical resources of a computer according to a load of the computer other than the CPU time, such as the response time of applications, for example, of a web server or an application server. There are clear benefits to allocating resources automatically. When computers are used for a data center (in which servers for Internet business are set up and managed on behalf of customers), the number of computers to manage becomes extremely large. If physical resources can be increased or decreased automatically according to the load of each LPAR, so that the physical resources of the virtual computer system are used effectively, this is effective in terms of reducing the management cost as well as guaranteeing the performance of the system.
  • In view of above, it is an object of the present invention to provide a virtual computer system which performs re-allocation of LPAR corresponding to a load condition of the LPAR observed by operating systems or applications of the virtual computer system.
  • In order to achieve the above-described object, in one aspect of the present invention, a virtual computer system includes methods of: operating a plurality of LPARs on a physical computer; dynamically reallocating physical resources such as CPUs or main memory among the LPARs by a hypervisor; and measuring the load of the system, such as the swap frequency of the main memory, the length of the queue for execution of process, the CPU occupation rate of each LPAR, and the response time of application program processes. Based on the load of each LPAR measured by the measuring method, resource reallocation for the LPARs is conducted by changing the amount of physical resources allocated to each LPAR.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagram illustrating a first embodiment of the present invention,
  • FIG. 2 is a diagram illustrating a configuration example of a physical computer system composed of one virtual computer system according to all embodiments of the present invention,
  • FIG. 3 is an overview illustrating a virtual computer system according to embodiments of the present invention,
  • FIG. 4 is a diagram illustrating area allocation of a main memory device according to embodiments of the present invention,
  • FIG. 5 is a diagram illustrating LPAR information table according to embodiments of the present invention,
  • FIG. 6 is a diagram illustrating a configuration of a hypervisor according to embodiments of the present invention,
  • FIG. 7 is a diagram illustrating a table for regulating a CPU allocation rate for every LPAR according to the above-described embodiment,
  • FIG. 8 is a diagram illustrating a table showing a load condition of CPU for every LPAR according to the above-described embodiment,
  • FIG. 9 is a diagram illustrating reallocation policy table according to the above-described embodiment,
  • FIG. 10 is a diagram illustrating an action table according to the above-described embodiment,
  • FIG. 11 is a diagram illustrating a CPU allocation time comparison table according to the above-described embodiment,
  • FIG. 12 is a diagram illustrating an average CPU load of a LPAR according to yet another embodiment of the present invention,
  • FIG. 13 is a diagram illustrating sampling data of the CPU load of an LPAR according to another embodiment of the present invention,
  • FIG. 14 is a diagram illustrating spectrum distribution of the sampling data of the CPU load according to the above-described embodiment,
  • FIG. 15 is a diagram illustrating a mounting example of a policy server according to the embodiments,
  • FIG. 16 is a diagram for illustrating another mounting example of a policy server,
  • FIG. 17 is a diagram illustrating yet another embodiment of the present invention,
  • FIG. 18 is a diagram illustrating an application average response time table for every LPAR according to the above-described embodiment,
  • FIG. 19 is a diagram illustrating another mounting example of a reallocation policy generator and a load condition monitoring circuit according to another embodiment of the present invention,
  • FIG. 20 is a diagram illustrating another mounting example of a reallocation policy generator and a load monitor according to another embodiment of the present invention,
  • FIG. 21 is a diagram illustrating another acquisition method of response time of an application program according to another embodiment of the present invention,
  • FIG. 22 is a diagram illustrating a chart showing a dealing content with respect to a load condition according to another embodiment of the present invention,
  • FIG. 23 is a flow-chart illustrating a process for conducting a sequential dealing in accordance with FIG. 22,
  • FIG. 24 is a diagram illustrating correspondence between agreement fee and agreement class of a user at a data center,
  • FIG. 25 is a diagram illustrating correspondence among an agreement class, priority, upper load threshold and lower load threshold,
  • FIG. 26 is a diagram illustrating correspondence among a customer, a customer's agreement class, and an occupied LPAR, and
  • FIG. 27 is a flowchart of a management program at a data center.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENT
  • In embodiments of the present invention, the following process is conducted.
  • An embodiment includes methods of: operating a plurality of LPARs on a physical computer; dynamically reallocating physical resources such as CPUs or main memory among the LPARs by a hypervisor; and measuring the response time of application program processes and the load of the system, such as the swap frequency of the main memory, the length of the queue for execution of process, and the CPU occupation rate of each LPAR. Based on the load of each LPAR measured by the measuring method, resource reallocation for the LPARs is conducted by changing the amount of physical resources allocated to each LPAR.
  • Moreover, as the OS operated on the LPAR, an OS having a function of dynamically changing the number of CPUs during operation and changing the main memory amount is used, so that reallocation of the number of CPUs or the main memory amount of the LPAR is conducted according to the load of the LPAR.
  • Moreover, in order to realize more effective allocation of the physical resources, the load of each LPAR after reallocation is measured to determine whether the load of an LPAR that was high before the reallocation has become low. In a case where the reallocation is not effective, the allocation is put back to the pre-reallocation state, so that reallocation of the LPAR is conducted appropriately.
  • Likewise, for effective reallocation of the physical resources, changes in the load of each virtual computer are monitored. When periodic load changes are observed, physical resources such as the CPU allocation time or the number of CPUs are increased during the high-load state, while the physical resources are offered to another heavily loaded LPAR during the low-load state, so that the allocation is changed periodically according to the observed pattern.
  • Below, with reference to the drawings, examples of a virtual computer system according to the present invention will be described.
  • FIG. 2 shows a physical computer system configuration composing a virtual computer system which is common to all embodiments of the present invention. There may be a plurality of such physical computer systems. FIG. 2 shows a tightly coupled multiprocessor which is the physical computer composing the virtual computer system. Reference numerals 10, 11, . . . , and 1 n respectively denote CPU0, CPU1, . . . , and CPUn. Reference numeral 20 denotes a main memory device. Reference numerals 30, 31, . . . , 3 m respectively denote I/O devices I/O0, I/O1, . . . , I/Om. Reference numeral 40 denotes a hypervisor which controls the whole virtual computer system and resides in the main memory.
  • FIG. 3 shows an overview of a virtual computer system. What is shown in FIG. 3 is one virtual computer system corresponding to the physical computer system shown in FIG. 2. Reference numeral 40 denotes the hypervisor. Reference numerals 50, . . . , and 5 k denote virtual computers LPAR0, . . . LPARk. Reference numerals 50-0, . . . , and 50-n denote logical processors LP0, . . . , and LPn contained in the LPAR0, and reference numerals 5 k-0, . . . , and 5 k-n denote logical processors LP0, . . . , and LPn contained in the LPARk. Each LPAR includes a plurality of logical processors LP because a physical configuration is a multiprocessor system.
  • FIG. 4 shows an overview of the main memory device 20. In the main memory device 20, there are areas allocated to the hypervisor and each LPAR.
  • FIG. 5 shows an LPAR information table 100. The LPAR information table 100 shows the allocation of physical resources to each LPAR. Reference numeral 101 denotes a field showing the name of the LPAR, reference numeral 102 denotes a field showing the start address of the area on the main memory device which is allocated to each LPAR, and reference numeral 103 denotes a field defining the physical main memory capacity of each LPAR. Reference numeral 104 denotes a field defining the percentage of CPU time allocated to each LPAR. Based on the percentage of allocated CPU time, the hypervisor 40 allocates CPU10, . . . , and CPU1 n to LPAR0, . . . , and LPARk in a time divided manner.
  • FIG. 6 shows a configuration of the hypervisor. The hypervisor 40 is composed of: a scheduler 200 for scheduling each LPAR; a resource manager 201 for managing the physical resources allocated to each LPAR; an LPAR controller 202 for controlling operation commands and the like to each LPAR; a logical processor controller 203 for controlling a logical processor 204 on which the operating system of each LPAR is executed; a frame controller 205 for controlling a frame serving as a screen of a system console through which an operator inputs information to operate the LPARs, and a frame serving as a screen carrying information that notifies the operator of the condition of the LPARs; a reallocation policy generator 206 for planning the allocation of physical resources to the LPARs; and a load monitor 207 for monitoring the condition of the load applied to each LPAR.
  • Below, operation of the virtual computer system according to the present invention will be described by taking as an example a case where three LPARs are operated on a physical computer having four CPUs. Herein, it is assumed that a CPU is time shared or used exclusively according to the allocation shown in FIG. 7. Specifically, CPU0 is used 100% by the LPAR0, CPU1 is used 50% by the LPAR0 and 50% by the LPAR1, and CPU2 and CPU3 are used 100% by the LPAR1 and the LPAR2, respectively.
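  • For illustration only, the allocation just described can be pictured as a small per-CPU share table; the Python fragment below is a hypothetical sketch of such bookkeeping, not the format actually used by the hypervisor 40.

```python
# Hypothetical representation of the FIG. 7 allocation: for each physical CPU,
# the percentage of its time given to each LPAR (shares on one CPU sum to 100).
cpu_allocation = {
    "CPU0": {"LPAR0": 100},
    "CPU1": {"LPAR0": 50, "LPAR1": 50},
    "CPU2": {"LPAR1": 100},
    "CPU3": {"LPAR2": 100},
}

# Total CPU time available to each LPAR, expressed in units of one CPU.
lpar_cpu_time = {}
for cpu, shares in cpu_allocation.items():
    for lpar, pct in shares.items():
        lpar_cpu_time[lpar] = lpar_cpu_time.get(lpar, 0) + pct / 100
print(lpar_cpu_time)  # {'LPAR0': 1.5, 'LPAR1': 1.5, 'LPAR2': 1.0}
```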
  • The reallocation policy generator 206 and the load monitor 207 according to the present invention are not limited to being implemented as part of the hypervisor; alternatively, they may be implemented as a user program operated on an operating system. Hereinafter, a computer in which a program having the functions of the reallocation policy generator 206 and the load monitor 207 operates is called a policy server. The policy server, as shown in FIG. 15, may be a specific LPAR5 x on the virtual computer system operating LPAR50, . . . , and LPAR5 k, which are the examination targets regarding the load condition. Moreover, as shown in FIG. 16, with physical computers 60-1 and 60-x connected by a network, a policy server for measuring the load of LPARs operated on the physical computer 60-1 may be implemented on the physical computer 60-x. On the physical computer 60-x, either a single OS or a plurality of LPARs may be operated. The LPAR5 x or the physical computer 60-x is not dedicated to the policy server, but may conduct other application processes.
  • The description of the system configuration assumed in the embodiments of the present invention is now complete, and each embodiment will be described in detail below.
  • Embodiment 1
  • Hereinbelow, referring to FIG. 1, a process flow will be described in which the load condition of each LPAR, measured by the OS on the LPAR, is examined and the LPARs are dynamically reallocated. The term “load condition” as used herein means the length of the queue for execution of processes or the CPU occupation rate.
  • An operator of the virtual computer system sets, in a frame, a demand for examining the load condition of the LPARs and a time interval for the examination of the load. The frame controller 205 notifies the load monitor 207 of a monitoring demand and a monitoring interval for the LPAR load condition through the scheduler 200 (300, 301). Then, the load monitor 207 notifies the LPAR controller 202 of a load condition examination demand (302, 303) through the scheduler 200. The LPAR controller 202 examines the load condition of each logical processor 204 through each logical processor controller 203 (305), and issues a demand (304) for transferring the examination results (306, 307) to the load monitor 207. The load monitor 207 saves the load condition of each LPAR inside itself. The amount of load condition information to be saved is directed by the operator to the frame controller 205 through the frame, and the load monitor 207 is notified of the amount through the scheduler 200.
  • In the present embodiment, as numerical values expressing the load condition, the CPU occupation rate and the length of the queue for execution of processes are used, the length of the queue for execution of processes being the number of processes waiting to be executed. The term “CPU occupation rate” as used herein means the percentage of time that an LPAR actually occupies with respect to the CPU time allocated to the LPAR. FIG. 8 shows an example of the load condition of each LPAR, that is, the CPU occupation rate and the length of the queue of tasks or threads waiting for execution for every LPAR. A demand 310 to collect the information is notified to the logical processor controller 203 of each LPAR from the LPAR controller 202. The logical processor controller 203 issues an interrupt to the OS1 operating on each LPAR through the logical processor 204. Then, acquisition of information about the CPU occupation rate and the length of the queue for execution of processes from a counter regarding the operation state of the OS1 is demanded (313), so as to acquire the load condition information (311, 312). The logical processor controller 203 transfers the examination results (306, 307) to the load monitor 207.
  • Generally, it may be considered that the higher the CPU usage and the longer the queue for execution of processes, the heavier the load on the system. Accordingly, the load monitor 207 calculates an average load condition for a certain period of time (one hour, for example), and when it exceeds a threshold set at the frame by the operator, a reallocation demand for the LPARs is issued to the reallocation policy generator 206 (320).
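  • As a rough illustration of the monitoring just described, the following hypothetical Python fragment averages periodic samples over the monitoring period and decides whether a reallocation demand should be raised; the function names, sample values and thresholds are assumptions of this sketch.

```python
from statistics import mean

def check_load(samples, cpu_threshold, queue_threshold):
    """samples: list of (cpu_occupation_rate, run_queue_length) tuples
    collected over the averaging period (e.g. one hour)."""
    avg_cpu = mean(s[0] for s in samples)
    avg_queue = mean(s[1] for s in samples)
    # The higher the CPU usage and the longer the run queue, the heavier the load.
    return avg_cpu > cpu_threshold or avg_queue > queue_threshold

# Hypothetical hourly samples for one LPAR and operator-set thresholds.
lpar0_samples = [(0.92, 4), (0.88, 5), (0.95, 3), (0.90, 6)]
if check_load(lpar0_samples, cpu_threshold=0.85, queue_threshold=3):
    print("issue reallocation demand to the reallocation policy generator")
```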
  • The reallocation policy generator 206 reads the load condition of each LPAR from the load monitor 207 (330), and reads the current CPU allocation for each LPAR from the resource manager 201 (331). Next, the reallocation policy generator 206 internally generates a reallocation policy table 900 (FIG. 9) from the current CPU allocation and the load condition. FIG. 9 shows a condition in which, starting from the current CPU allocation stored in the resource manager 201 shown in FIG. 7, the CPU load of LPAR0 is identified as high, so that the CPU allocation to the LPAR0 is increased.
  • The reallocation policy differs depending on what is observed among the load information of the system. If the CPU is the problem, it is possible to form a policy to increase the CPU allocation time or to increase the number of CPUs. Moreover, the policy differs depending on whether or not the OS operating on each LPAR has a function of increasing and decreasing the number of CPUs during operation. There are two kinds of OS: one can activate a new CPU without terminating the OS; the other has to be reset once and reallocated thereafter in order to change the number of CPUs at activation. If an OS is unable to increase or decrease the number of CPUs during operation, the only option is to change the CPU allocation time. However, if the OS can change the number of CPUs during operation, it is possible to change the number of CPUs as well as the CPU allocation time. In the present embodiment, an OS with a function of changing the number of CPUs during operation is used. Moreover, when an OS which can change the main memory capacity during operation is used, the frequency of paging of the main memory (re-writing of a page on actual memory) or of swapping (swapping of application programs) is examined as a load condition, and if the frequency is high, it is possible to form a policy to increase the main memory amount. In the present embodiment, an example is shown in which reallocation of each LPAR is conducted based on the load condition of the CPU.
  • Inside the reallocation policy generator 206, there is a table (FIG. 10) which provides corresponding actions with respect to: the type of load observed, a threshold above which the load is identified as heavy, the priority of the load to be dealt with, and an action to be taken in a case of high load. This table is set by the operator at the frame, and the frame controller 205 notifies the reallocation policy generator 206 via the scheduler 200 that the writing has been done (340, 341). The reallocation policy generator 206 receives the notification, reads the data in the frame controller 205 (342, 343), and writes it into the correspondence table inside the reallocation policy generator 206.
  • In the present embodiment, based on the action table shown in FIG. 10, the CPU allocation time is increased because the CPU occupation rate of the LPAR0 is high, and the number of simultaneously executable processes is increased by increasing the number of CPUs allocated to the LPAR0 because the queue for execution of processes is long, thereby reducing the load. In order for such a transfer of physical resources not to cause deterioration of the performance of a less loaded LPAR, a method is applied in which, for every average CPU occupation ratio as shown in FIG. 11, for example, the less loaded LPAR offers a certain percentage of its CPU time to be allocated to another heavily loaded LPAR. In FIG. 11, the lower the current CPU occupation ratio of an LPAR is, the larger the ratio allocated to the other LPAR. Herein, even if the current CPU occupation ratio of an LPAR is low, not all of its CPU allocation is given to the other LPAR, so that an excessive increase of the load on the offering LPAR is prevented. The reallocation policy in the reallocation policy table 900 shown in FIG. 9 is formed based on this allocation.
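  • The offering rule of FIG. 11 can be sketched as follows; the donation tiers and share values below are made-up placeholders standing in for the operator-set table, not the actual figures of the embodiment.

```python
def donation_fraction(avg_occupation):
    """Fraction of a lightly loaded LPAR's CPU allocation offered to a heavily
    loaded LPAR. Tiers are hypothetical; the real values come from a table
    such as FIG. 11 set by the operator."""
    if avg_occupation < 0.25:
        return 0.50
    if avg_occupation < 0.50:
        return 0.30
    if avg_occupation < 0.75:
        return 0.10
    return 0.0  # already busy: donate nothing

def rebalance(allocations, occupations, hot_lpar):
    """allocations: LPAR -> CPU-time share (%); occupations: LPAR -> avg ratio."""
    new_alloc = dict(allocations)
    for lpar, share in allocations.items():
        if lpar == hot_lpar:
            continue
        donated = share * donation_fraction(occupations[lpar])
        new_alloc[lpar] -= donated        # never drops to zero for a lightly used LPAR
        new_alloc[hot_lpar] += donated
    return new_alloc

print(rebalance({"LPAR0": 150, "LPAR1": 150, "LPAR2": 100},
                {"LPAR0": 0.95, "LPAR1": 0.40, "LPAR2": 0.20}, "LPAR0"))
```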
  • The reallocation policy generator 206 issues a reallocation demand to the scheduler 200 (350). At the same time, it issues, to the load monitor 207, a demand to stop measuring performance (351).
  • A reallocation procedure for an LPAR is conventionally conducted by operation of the operator or by time divided scheduling. In the present invention, however, it is conducted by the hypervisor, which issues the reallocation demand upon the event that the load on the system exceeds the threshold.
  • First, the scheduler 200 reads the reallocation policy table 900 in FIG. 9 inside the reallocation policy generator 206 (380). Then, it rewrites a LPAR information table inside the resource manager 201 (381) so as to direct allocation change to each LPAR controller 202.
  • The LPAR controller 202 stops the OS1 of the logical processor 204 which belongs to the LPAR to be reallocated (360, 361, 362). Next, the LPAR controller 202 issues a demand to read the LPAR information table of the resource manager 201 (364). The allocation that has just been read (365) is stored inside the LPAR controller 202.
  • The LPAR controller 202 instructs each logical processor controller 203 to restart the OS on the logical processor 204 (370, 371, 372). After rebooting of the OS1, the LPAR controller notifies the reallocation policy generator 206 and the load monitor 207 of the completion of the LPAR reallocation (375). The load monitor 207 then issues a demand to examine the load condition of each LPAR to the LPAR controller 202 as described above (302, 303). With the processes above, the change of resource allocation is completed.
  • When the time divided CPU allocation time is changed, the scheduler 200 only needs to execute a newly defined CPU allocation time, so that the allocation change is completed by the process in the hypervisor. If a new CPU (logical processor) is to be added, the LPAR controller 202 notifies the OS on a LPAR, directly or via the logical processor controller 203, of a newly allocated CPU (logical processor) by interruption or the like. Thereby, the OS on the LPAR sends to the corresponding LPAR controller 202 a command to boot the newly added CPU (logical processor) spontaneously.
  • As described above, by acquiring the CPU occupation rate and/or the length of the queue for execution of processes from the OS of each LPAR, the resource allocation, such as the CPU allocation time or the main memory capacity, is changed. Thereby, the degree of the LPAR load can be comprehended more accurately than by merely measuring the CPU time. Moreover, it is possible to allocate more resources to an LPAR having a higher load without successive commands from the operator.
  • Embodiment 2
  • Hereinbelow, using FIG. 17, a process flow is described from examination of the load condition of an application operating on an LPAR to dynamic reallocation. The load condition of the application used herein is the response time of an application program process. For example, it means the response time of a transaction process such as retrieving a table from a database and updating the contents of the table.
  • An operator of the virtual computer system requests, in a frame, examination of the load condition of the LPARs and sets a time interval for the examination. The frame controller 205 notifies the load monitor 207 of a monitoring demand and a monitoring interval for the LPAR load condition through the scheduler 200 (300, 301). Then, the load monitor 207 notifies the LPAR controller 202 of a load condition examination demand (302, 303) through the scheduler 200. The LPAR controller 202 examines the load condition of each logical processor 204 through each logical processor controller 203 at the set monitoring interval (305), and issues a demand (304) to transfer the examination results (306, 307) to the load monitor 207. The load monitor 207 saves the load condition of each LPAR inside itself. The amount of load condition information to be saved is directed by the operator to the frame controller 205 through the frame, and the load monitor 207 is notified of the amount through the scheduler 200.
  • A demand 310 to collect the load condition of the application is notified to the logical processor controller 203 of each LPAR from the LPAR controller 202. The logical processor controller 203 sends an interruption signal to the logical processor 204 and the OS1. From the OS1, a signal to demand for the load condition information of an application 400 is sent to the application 400 (313, 314). The load condition of the application 400 is transferred to the load monitor 207 through the logical processor controller 203 (315, 311, 312). Moreover, not only the response time of the application, but also the CPU load condition as shown in Embodiment 1 is transferred to the load monitor 207 at the same time.
  • In the load monitor 207, an average load condition is calculated for a certain period of time (one hour, for example) (FIG. 18). The response time is measured by providing a measuring means for measuring the time from reception of a transaction by the application program until the completion thereof. When the average exceeds a threshold set at the frame by the operator, a reallocation demand for the LPARs is issued to the reallocation policy generator 206 (320). For example, if the threshold of the response time is 5 seconds, then according to the response time distribution shown in FIG. 18, reallocation of the physical resources of the LPAR0 is conducted so as to improve its performance.
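  • As an illustration of this response-time trigger, the hypothetical fragment below averages per-LPAR transaction response times over the monitoring period and flags the LPARs whose average exceeds the operator-set threshold (5 seconds in the example above); the history values are placeholders.

```python
from statistics import mean

def lpars_needing_reallocation(response_history, threshold_sec=5.0):
    """response_history: LPAR name -> list of transaction response times (seconds)
    gathered during the averaging period (e.g. one hour)."""
    return [lpar for lpar, times in response_history.items()
            if times and mean(times) > threshold_sec]

# Hypothetical measurements in the spirit of FIG. 18.
history = {"LPAR0": [6.2, 7.1, 5.8], "LPAR1": [1.2, 0.9, 1.4], "LPAR2": [2.0, 2.3]}
print(lpars_needing_reallocation(history))  # ['LPAR0']
```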
  • The reallocation policy generator 206 reads the load condition of each LPAR from the load monitor 207 (330), and reads the current CPU allocation for each LPAR from the resource manager 201 (331). Next, the reallocation policy generator 206 internally generates a reallocation policy table 900 (FIG. 9) from the current CPU allocation and the load condition. Inside the reallocation policy generator 206, there is a table (FIG. 10) which provides corresponding actions with respect to: the type of load observed, a threshold above which the load is identified as heavy, the priority of the load to be dealt with, and an action to be taken in a case of high load. This table is set by the operator at the frame, and the frame controller 205 notifies the reallocation policy generator 206 via the scheduler 200 that the writing has been done (340, 341). The reallocation policy generator 206 receives the notification, reads the data in the frame controller 205 (342, 343), and writes it into the correspondence table inside the reallocation policy generator 206.
  • The reallocation policy differs depending on what information is observed in the load condition of the system. In the present embodiment, the CPU allocation time to be added to the LPAR0 is calculated from the distribution of the CPU time, i.e., a process similar to that of Embodiment 1 is conducted.
  • First, based on the action table shown in FIG. 10, the CPU allocation time is increased because the CPU occupation rate of the LPAR0 is high, and the number of simultaneously executable processes is increased by increasing the number of CPUs allocated to the LPAR0 because the queue for execution of processes is long, thereby taking an action to reduce the load. In order for such a transfer of physical resources not to cause deterioration of the performance of a less loaded LPAR, a method is applied in which, for every average CPU occupation ratio as shown in FIG. 11, for example, the less loaded LPAR offers a certain percentage of its CPU time to be allocated to another heavily loaded LPAR. FIG. 9 is a table having the reallocation policy formed therein based on the allocation described above.
  • The reallocation policy generator 206 issues the reallocation demand to the scheduler 200 (350). Simultaneously, it issues a demand to the load monitor 207 to stop measuring performance (351).
  • A reallocation procedure for an LPAR is conventionally conducted by operation of the operator or by time divided scheduling. In the present invention, however, it is conducted by the hypervisor, which issues the reallocation demand upon the event that the load on the system exceeds the threshold.
  • First, the scheduler 200 reads the reallocation policy table inside the reallocation policy generator 206. Then, it rewrites a LPAR information table inside the resource manager 201 so as to direct allocation change to each LPAR controller 202.
  • The LPAR controller 202 stops the OS1 of the logical processor 204 which belongs to the LPAR to be reallocated (360, 361, 362). Next, the LPAR controller 202 issues a demand to read the LPAR information table of the resource manager 201 (364). The allocation that has just been read (365) is stored inside the LPAR controller 202.
  • The LPAR controller 202 instructs each logical processor controller 203 to restart the OS on the logical processor 204 (370, 371, 372). After re-booting of the OS1, the LPAR controller notifies the reallocation policy generator 206 and the load monitor 207 of the completion of the reallocation of the LPAR (375). The load monitor 207 issues an examination demand for the load condition of each LPAR to the LPAR controller 202 as described above (302, 303). With the above-described process, the change of resource allocation is completed.
  • When the time divided CPU allocation time is changed, the scheduler 200 only needs to execute the newly defined CPU allocation time, so that the allocation change is completed by the process in the hypervisor. If a new CPU (logical processor) is to be added, the LPAR controller 202 notifies the OS on the LPAR, directly or via the logical processor controller 203, of the newly allocated CPU (logical processor) by an interruption or the like. Thereby, the OS on the LPAR spontaneously sends to the corresponding LPAR controller 202 a command to boot the newly added CPU (logical processor). As such, the reallocation is completed. Then, the load monitor 207 restarts monitoring the load conditions of each LPAR.
  • As described above, the degree of the CPU load is identified from the duration of the response time of the application program, and resource allocation is conducted accordingly. Thereby, it is possible to comprehend whether the load under actual operating conditions is large or not.
  • Embodiment 3
  • The present embodiment is an example of a system in which the reallocation policy generator 206 and the load monitor 207 are implemented as a program operating on a certain LPAR provided in the same virtual computer system as in Embodiment 2.
  • FIG. 19 shows a configuration of the present embodiment. A monitoring program 190 executed on LPAR5 x issues physical resource reallocation demands and monitors the load conditions of LPAR50, . . . , LPAR5 k.
  • The monitoring program 190 on the LPAR5 x transfers the load condition examination demand to each of LPAR50, . . . , LPAR5 k. As communication methods for that purpose, the following are known, as shown in Japanese Laid-Open Patent Publication No. 10-301795: a method in which a hypervisor virtually emulates the communication; a method using an IO channel; and a method in which CPUs within the LPARs communicate with a computer outside the LPARs by using a channel. Any inter-LPAR communication method can be applied, but in the present embodiment, the method in which the hypervisor emulates the communication path between LPARs is used.
  • (Acquisition of the Load Condition)
  • The monitoring program 190 demands load conditions of other LPAR50, . . . , LPAR5 k (500). Each LPAR receiving the demand transfers load information (the CPU occupation rate, a length of queue for execution of process as in Embodiment 1, and a process response time of an application as in Embodiment 2) to the LPAR5 x (501). An issuing timing 510 of the load condition examination demand 500 is set in the monitoring program 190 by an operator.
  • (Issue of the Reallocation Demand)
  • Similar to the load monitor 207 of Embodiment 1, the operator sets a load threshold 511 in advance, which is held inside the monitoring program 190. When a load exceeding the threshold 511 is monitored, the monitoring program 190 issues a demand to the hypervisor 40 for notification of the current resource allocation (502), and receives the resource allocation information from the hypervisor 40 (503). A load action table 512 describes combinations of a load condition and an allocation modification policy, such as the CPU allocation time or the number of CPUs, and is set by the operator in the monitoring program 190. From the load action table 512 and the load condition, an allocation policy table 513 is generated, the allocation policy table 513 showing the new resource allocation policy. Since the reallocation policy table 513 is generated by the method shown in Embodiment 1, description thereof is omitted in the present embodiment.
  • Next, the monitoring program 190 issues to the hypervisor 40 the reallocation policy table 513 and a reallocation demand 505, the reallocation demand including a command demanding reallocation of the resources allocated to LPAR50, . . . , LPAR5 k. The hypervisor 40 transfers a reallocation completion acknowledgment 506 to the monitoring program 190 after completion of the reallocation. As described above, reallocation of the LPARs reflecting the load condition is completed. Thereafter, the monitoring program 190 restarts monitoring the load condition of each LPAR.
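  • Taken together, a policy server of the kind described in this embodiment might run a loop along the lines of the hypothetical sketch below; the hypervisor interface object and its method names are assumptions standing in for the emulated inter-LPAR communication path described above.

```python
import time

def policy_server_loop(hypervisor, lpars, interval_sec, threshold, build_policy):
    """Hypothetical monitoring loop for the program 190.

    hypervisor   -- object exposing the assumed calls query_load(),
                    query_allocation() and reallocate()
    lpars        -- names of the LPARs to watch
    threshold    -- operator-set load threshold 511
    build_policy -- function turning (loads, allocation) into a new
                    reallocation policy table 513
    """
    while True:
        loads = {lpar: hypervisor.query_load(lpar) for lpar in lpars}    # 500, 501
        if any(load > threshold for load in loads.values()):
            allocation = hypervisor.query_allocation()                   # 502, 503
            policy = build_policy(loads, allocation)                     # table 513
            hypervisor.reallocate(policy)                                # demand 505
            # monitoring resumes after the completion acknowledgment 506
        time.sleep(interval_sec)                                         # timing 510
```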
  • The above description shows a system in which the reallocation policy generator 206 and the load monitor 207 are implemented in a monitoring program operating on a certain LPAR provided in the same virtual computer system. With this system, even if the reallocation policy algorithm is to be changed freely, the algorithm does not reside in the hypervisor, and thus it is not necessary to make the hypervisor, the core of the system, operable by the operator. Therefore, there is no need to worry about a security problem or an operation by the operator which may cause trouble to the hypervisor. Moreover, even if the reallocation policy malfunctions, a malfunction of the whole system can be prevented by setting the hypervisor to monitor unreasonable processing.
  • Embodiment 4
  • The present embodiment is an example of a system in which the monitoring program 190, which operates on a certain LPAR provided in the same virtual computer system in Embodiment 3, is implemented on another physical computer.
  • FIG. 20 shows a configuration of the present embodiment. The monitoring program 190 executing on LPAR60 x-x on a physical computer 60-x issues physical resource reallocation demands and monitors the load conditions of LPAR600-0, . . . , LPAR600-k of a physical computer 60-0.
  • The monitoring program 190 on the LPAR60 x-x issues the load condition examination demand 500-A to the physical computer 60-0. A hypervisor 40-0 of the physical computer 60-0 transfers the load condition examination demand to each of LPAR600-0, . . . , LPAR600-k. The hypervisor 40-0 (40-x) on the physical computer 60-0 (60-x) communicates with the other physical computer by using the IO channel. In the present embodiment, the monitoring program 190 is implemented as a program on the LPAR. Alternatively, a single OS may be operated on the physical computer 60-x instead of the virtual computer system; that is, LPAR60 x-x may be a single physical computer.
  • (Acquisition of the Load Condition)
  • The monitoring program 190 demands the load conditions of LPAR600-0, . . . , LPAR600-k on the other physical computer 60-0 through I/O 520-x (500-A, 501-A). Each LPAR receiving the demand transfers its load information (the CPU occupation rate and the length of the queue for execution of processes as in Embodiment 1, and the process response time of an application as in Embodiment 2) to the LPAR60 x-x (500-B, 501-B). The issuing timing 510 of the load condition examination demands 500-A and 500-B is set in the monitoring program 190 by the operator.
  • (Issue of the Reallocation Demand)
  • Similar to the load monitor 207 of Embodiment 1, the operator sets a load threshold 511 in advance, which is held inside the monitoring program 190. When a load exceeding the threshold 511 is monitored, the monitoring program 190 issues a demand to notify the hypervisor 40-0 of the current resource allocation (502-A), and receives the resource allocation information from the hypervisor 40-0 (502-B, 503-B). A load action table 512 describes combinations of a load condition and an allocation modification policy, such as the CPU allocation time or the number of CPUs, and is set by the operator in the monitoring program 190. From the load action table 512 and the load condition, an allocation policy table 513 is generated, the allocation policy table 513 showing the new resource allocation policy. Since the reallocation policy table 513 is generated by the method shown in Embodiment 1, description thereof is omitted in the present embodiment.
  • Next, the monitoring program 190 issues to the hypervisor 40-0 the allocation demands 504-A and 504-B, which include a command demanding reallocation of the resources allocated to LPAR600-0, . . . , LPAR600-k on the physical computer 60-0, together with the reallocation policy table 513. The hypervisor 40-0 transfers the reallocation completion acknowledgments 505-A and 505-B to the monitoring program 190 after completion of the reallocation. As described above, reallocation of the LPARs reflecting the load condition is completed. Then, the monitoring program 190 restarts monitoring the load conditions of each LPAR.
  • As described above, an example in which the monitoring program is implemented on another physical computer has been shown. Accordingly, it is possible to conduct integrated management of other physical computers having LPARs implemented therein.
  • Embodiment 5
  • In Embodiment 2, in order to examine the response time of application programs, an interruption to the OS is generated from the hypervisor, and the OS sends a signal to the application program so as to demand the response time of the process measured by the application program. Alternatively, in the present invention, referring now to FIG. 21, a method for examining the load condition of an application program operated on an LPAR without a particular interface to the application program will be described. Herein, the application program does not measure response time, or, even if it measures the response time, there is no interface to read it out. Since the procedure for changing the physical resource allocation after examining the load condition of the application follows that of Embodiment 4, description of the LPAR reallocation is omitted in the present embodiment.
  • In FIG. 21, there are provided physical computers 60-0 and 60-x and a network 61 linking them, and on the physical computers, LPAR600-0, LPAR600-k, LPAR60 x-0, LPAR60 x-k, and LPAR60 x-x are operated. In the LPAR600-0 on the physical computer 60-0, an application program 195 such as a WWW (World Wide Web) server is operated. The monitoring program 190 operating in LPAR60 x-x on the physical computer 60-x issues an access demand 700 for data to the application program 195. If the application program 195 is the WWW server, a demand to read home pages is issued. The application program 195 issues a response 701 to the demand 700. In the monitoring program 190, the response time from issue of the demand 700 until reception of the response 701 is recorded in a response time history 703. The demand 700 is issued at an interval which does not deteriorate the performance of the application program 195, or the interval may be set by the operator in advance within the monitoring program (not shown).
  • The monitoring program 190 observes transitions of the response time history 703. When a long response time continues, the monitoring program 190 demands that the physical resource allocation of the LPAR on which the application program with the long response time operates be increased. The procedure to change the resource allocation follows that of Embodiment 4. Moreover, the monitoring program 190 may gather the response time history for a relatively long period of time (several days) to find regularity in the load fluctuation, so that the physical resource allocation of the LPAR may be changed in a planned manner according to the cycle of the fluctuation.
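  • A minimal sketch of this probing idea, assuming the monitored application program 195 is an ordinary WWW server reachable over HTTP, is given below; the URL, window size and limit are placeholders.

```python
import time
import urllib.request

response_history = []  # corresponds to the response time history 703

def probe(url, timeout=30):
    """Issue one access demand (700) and record how long the response (701) takes."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as reply:
        reply.read()
    elapsed = time.monotonic() - start
    response_history.append(elapsed)
    return elapsed

def sustained_slowdown(history, limit_sec, window=10):
    """True when the last `window` probes all exceeded the limit, i.e. a long
    response time continues and a reallocation demand should be issued."""
    recent = history[-window:]
    return len(recent) == window and all(t > limit_sec for t in recent)

# Hypothetical use: call probe("http://lpar600-0.example/") at an interval chosen
# so as not to degrade the application, then check sustained_slowdown(response_history, 5.0).
```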
  • As described above, when the application program does not measure response time, or when there is no interface to read out a measurement result, the monitoring program issues an access demand for data and measures the duration until the response thereto is received. In the present embodiment, by storing the response time history, the response time of the application program is comprehended. Thereby, it is possible to comprehend the response time even if the application program does not measure the response time or there is no interface to read it out.
  • Embodiment 6
  • In Embodiments 2 and 3, action plans corresponding to load conditions are determined in advance. Alternatively, by forming a table (FIG. 22) which lists possible physical resource allocation policies in priority order, actions can be taken sequentially according to the procedure shown in FIG. 23. Hereinbelow, the procedure shown in FIG. 23 will be described.
  • Load conditions of the LPARs are gathered in the manner shown in the previous embodiments. When the load conditions exceed the threshold set by the operator, preparation for reallocation of the LPARs starts (800). Assume that there are a total of Nmax action plans. First, the action plan of priority 1 (801) as shown in FIG. 22 is conducted (802). When the load condition does not improve after operating the LPAR to which the action plan has been applied, the allocation is reverted to the previous allocation (806). Whether all action plans have been tried is confirmed (804). If not all the plans have been conducted, the next action plan is implemented (805). After all plans are implemented, it is checked whether any action has been effective (806). In a case where no effective plan existed, the fact that no load action has been possible is notified to the operator by way of a screen display, a log file, or a buzzer (809). By following this flow, a plurality of actions are applied together as long as they are effective.
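  • The FIG. 23 procedure can be sketched roughly as follows; the apply, revert and measure callables stand in for the hypervisor operations and load measurements of the earlier embodiments and are assumptions of this example.

```python
def try_action_plans(plans, apply_plan, revert_plan, measure_load, threshold):
    """plans: action plans ordered by priority 1..Nmax (as in FIG. 22).
    Each plan is applied in turn; plans that do not bring the load below the
    threshold are reverted. Returns the list of plans kept in effect."""
    effective = []
    for plan in plans:                        # priority 1 first, then the rest
        apply_plan(plan)                      # conduct the action plan (802)
        if measure_load() <= threshold:       # did the load condition improve?
            effective.append(plan)
        else:
            revert_plan(plan)                 # put the allocation back
    if not effective:                         # no effective plan existed
        print("load action impossible: notify operator")   # e.g. screen, log, buzzer (809)
    return effective
```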
  • As described above, a plurality of action plans are prepared along with their priorities, so that one or more action plans contribute to lowering the load. Accordingly, actions effective for lowering the load are selected by trial.
  • Embodiment 7
  • For an operation mode whose load changes greatly between daytime and nighttime, there is an operation method in which the allocation changes according to a plan: during hours when the load is high, resources are collected from other LPARs, whereas during nighttime, some of the resources are released to the other LPARs. There is also a method combining a means for finding regularity in the load change with Embodiment 1. Hereinbelow, the method combining the means for finding regularity in the load change with Embodiment 1 will be described.
  • As one way to find a regularly changing load, the load is recorded for several days, and the load monitor examines the rise and fall of the load in the same time zone. For example, as shown in FIG. 12, changes in the average values of the load fluctuation taken at the same hour over several days are examined. Then, thresholds of the load are set so that the allocation of physical resources is increased during the time zone of high load while the resources are offered to other LPARs during the low load time zone. Moreover, the load monitor 207 schedules the resource allocation to return to the initially set amount during time zones other than the above. The resource allocation method is as follows: taking as a basis a system condition with the maximum load during a time zone above the high load threshold (i.e., when the length of the queue for execution of processes becomes 3 or more), the reallocation policy generator 206 generates a reallocation plan by the method shown in Embodiment 1 so as to conduct dynamic reallocation of resources. Moreover, in addition to changing the resource allocation periodically, dynamic resource allocation may be conducted with respect to load conditions so as to conduct fine adjustment for reducing the load.
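  • The time-zone method can be pictured with a short hypothetical fragment such as the one below, which averages samples taken at the same hour over several days (as in FIG. 12) and classifies each hour against operator-set thresholds; the sample values and thresholds are placeholders.

```python
from collections import defaultdict
from statistics import mean

def hourly_profile(samples):
    """samples: list of (hour_of_day, load) pairs collected over several days.
    Returns hour -> average load at that hour."""
    by_hour = defaultdict(list)
    for hour, load in samples:
        by_hour[hour].append(load)
    return {hour: mean(loads) for hour, loads in sorted(by_hour.items())}

def classify(profile, high_threshold, low_threshold):
    """Hours above the high threshold get extra resources; hours below the low
    threshold offer resources to other LPARs; other hours keep the initial allocation."""
    return {hour: ("increase" if avg >= high_threshold
                   else "offer" if avg <= low_threshold
                   else "initial")
            for hour, avg in profile.items()}

# Hypothetical run-queue samples: (hour, queue length) over two days.
samples = [(9, 4), (9, 5), (13, 3), (13, 4), (2, 0), (2, 1)]
print(classify(hourly_profile(samples), high_threshold=3, low_threshold=1))
```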
  • As a means to obtain regularity in the load, a method for finding the regularity of the change in load analytically may be employed. Herein, the FFT (fast Fourier transform) is used as an example to find the regularity of the change in load analytically. Regarding the algorithm of the FFT, textbooks on signal processing such as “Dejitaru Shingou Shori no Kiso, Tujii Shigeo kanshu, Denshi Jouho Tsushin Gakkai” (first edition published Mar. 15, 1998) may be referred to. As shown in FIG. 13, assume that a time series of load conditions having 32 measuring points in a time T is provided. Herein, the length of the queue for execution of processes is used as an example. When the spectrum distribution of FIG. 13 is calculated by the FFT, the result is as shown in FIG. 14 (in signal processing, the spectrum distribution here equals the power spectrum distribution, but it is simply called the spectrum distribution herein). According to the sampling theorem, the order of the highest harmonic that can be analyzed is 16. Since the purpose is to examine the regularity of the change in load, the 0 frequency, which equals the direct-current component, is ignored, and the strongest spectrum component is then the frequency of order 3 (i.e., 3/2πT). Now, assuming that the load fluctuates with a frequency of 3/2πT, the physical resource allocation to the LPARs is changed. At that time, a physical resource allocation configuration is formed based on the numerical values of the maximum load in FIG. 13. An intermediate value between the minimum load and the maximum load is set as a threshold, and if the load increases to reach the threshold, the LPARs are reallocated according to the allocation configuration generated previously. In the present embodiment, there is a case where the allocation changes every half cycle. Alternatively, the allocation of physical resources to the LPARs may be conducted by dividing the cycle more finely. The procedure of reallocation of the LPARs is as described in Embodiment 1.
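  • As a rough, hypothetical illustration of this analytical method (using NumPy's FFT for convenience), the fragment below takes 32 equally spaced load samples over a period T and picks the strongest harmonic while ignoring the direct-current component; the sample series is made up.

```python
import numpy as np

def dominant_harmonic(load_samples):
    """load_samples: 32 equally spaced measurements (e.g. run-queue lengths)
    taken over one observation period T. Returns (order k, strength), meaning the
    load is taken to repeat roughly k times per period T."""
    spectrum = np.abs(np.fft.rfft(load_samples))   # harmonic orders 0..N/2 (here 0..16)
    spectrum[0] = 0.0                              # ignore the direct-current component
    k = int(np.argmax(spectrum))
    return k, float(spectrum[k])

# Hypothetical samples whose strongest component is the third harmonic.
t = np.arange(32)
samples = 2 + 1.5 * np.sin(2 * np.pi * 3 * t / 32) + 0.2 * np.random.rand(32)
print(dominant_harmonic(samples))   # expected order: 3
```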
  • As described above, as a means to obtain the regularity of the load, a method for finding the regularity of the change in load analytically, i.e., the FFT, is used; therefore, the load fluctuation can be obtained accurately without relying on the subjectivity of a person such as the operator.
  • Embodiment 8
  • In Embodiments 1 to 7, loads related to the CPU (physical processor) have been described as examples. Alternatively, the resource allocation of the LPARs may be changed according to load conditions of the main memory. Resource allocation of the CPU and the main memory may also be conducted simultaneously.
  • As an index expressing the load condition of the main memory, the number of times of swapping or paging may be used. Similar to the case of CPU loads according to Embodiment 1, these load conditions are monitored, and an LPAR having a high load condition is reallocated dynamically so as to increase the main memory allocated thereto. An LPAR whose main memory amount is changed stops the OS operating on the LPAR, similar to the case where the number of CPUs is increased as shown in Embodiment 1. After the physical resources are allocated (i.e., the amount of main memory is changed herein), the LPAR controller 202 notifies the OS on the LPAR, directly or through the logical processor controller 203, of the newly allocated main memory by an interruption or the like. Then, the OS on the LPAR spontaneously sends a command to expand into the newly added main memory to the corresponding LPAR controller 202. In the present embodiment, the allocation changing method described in Embodiment 2 or Embodiment 3 may also be applied.
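  • In the same spirit as the CPU-load check of Embodiment 1, a memory-pressure check might look like the hypothetical fragment below, using paging and swap counts per monitoring interval as the load index; the limits and per-LPAR statistics are placeholders.

```python
def memory_pressure(paging_count, swap_count, paging_limit=1000, swap_limit=50):
    """Counts are events observed during one monitoring interval.
    Returns True when the main memory allocated to the LPAR appears insufficient,
    i.e. when a demand to increase its main memory allocation should be issued."""
    return paging_count > paging_limit or swap_count > swap_limit

# Hypothetical per-LPAR statistics for one interval: (paging count, swap count).
stats = {"LPAR0": (4200, 120), "LPAR1": (80, 0)}
for lpar, (paging, swap) in stats.items():
    if memory_pressure(paging, swap):
        print(f"{lpar}: request additional main memory allocation")
```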
  • As described above, it is possible to judge whether the capacity of the allocated main memory is insufficient.
  • Embodiment 9
  • In the present embodiment, an example of using the virtual computer system in a data center is shown. A data center administrator makes an agreement with each customer based on the contents of an agreement table as shown in FIG. 24. The agreement classes 1000 have priority in the order of A, B and C, with the agreement class A having the highest priority. For every agreement class 1000, the agreement fee is determined as PA, PB or PC. The agreement is made with customers in such a manner that the higher the priority of the agreement is, the more priority is given to the performance guarantee of the response time or the like of the application.
  • The data center administrator sets an allocation 1007 and an agreement class 1006 of the LPAR for each customer 1005 as shown in FIG. 26. The application program of each customer is operated on its LPAR. An LPAR having a higher priority 1002 is preferentially allocated physical resources. The data center administrator follows the table shown in FIG. 25 to decide the priority 1002, an upper load threshold 1003, and a lower load threshold 1004 for the load conditions of the LPARs of every agreement class 1000. The upper threshold 1003 is the maximum allowable value of the load of the OS or the application operating on the LPAR, i.e., it is a numerical value used to judge when to demand an increase of the resource allocation for the LPAR. The lower threshold 1004 is used to judge when to return the amount of resources allocated to the LPAR to the initial value, namely when the load is smaller than this threshold. The table shown in FIG. 25 is stored inside a means for monitoring the load conditions of the virtual computers.
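  • For concreteness, the three tables of FIGS. 24 to 26 can be pictured as the following hypothetical structures; the thresholds, customer names and LPAR assignments are placeholders.

```python
# FIG. 24: agreement class -> agreement fee (fees kept symbolic, as in the table).
agreement_fee = {"A": "PA", "B": "PB", "C": "PC"}

# FIG. 25: agreement class -> (priority, upper load threshold, lower load threshold).
class_policy = {
    "A": {"priority": 1, "upper": 0.80, "lower": 0.20},
    "B": {"priority": 2, "upper": 0.90, "lower": 0.10},
    "C": {"priority": 3, "upper": 0.95, "lower": 0.05},
}

# FIG. 26: customer -> (agreement class, LPAR occupied by that customer's application).
customers = {
    "customer-1": {"class": "A", "lpar": "LPAR0"},
    "customer-2": {"class": "B", "lpar": "LPAR1"},
    "customer-3": {"class": "C", "lpar": "LPAR2"},
}
```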
  • Referring now to FIG. 27, a flow of administering the data center will be described. First, with the means for monitoring load conditions of LPARs as shown in Embodiments 1 to 8, the load conditions of the LPARs are observed (950). It is checked whether there is a load condition which exceeds the upper threshold 1003 shown in FIG. 25 (951). If there is no LPAR exceeding the upper threshold 1003, administration continues without changing the allocation of the LPARs. However, if there is an LPAR exceeding the upper threshold, and not all LPARs have high loads (952), then an action against the load can be taken.
  • Before starting resource allocation to the LPAR having the high load, physical resources allocated to LPARs having loads not exceeding the lower threshold are released (953). At that time, the amount of resources allocated to those LPARs is returned to the initially set values. Next, to the LPAR exceeding the upper threshold and having the highest priority, the physical resources released previously, or a part of the resources of LPARs having low priority and not exceeding the threshold, are transferred (954). The algorithm for transferring these resources may be the method in Embodiment 1. As such, the resource allocation of the LPARs is changed.
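  • The flow of steps 950 to 954 above might be coded along the lines of the hypothetical sketch below; the load values, thresholds and transfer rule are illustrative assumptions rather than the embodiment's actual algorithm.

```python
def reallocate_data_center(lpars):
    """lpars: LPAR name -> dict with keys 'load', 'upper', 'lower', 'priority',
    'alloc' (current resource amount) and 'initial' (initially agreed amount).
    Returns a dict of new allocations, or None when no action is needed or possible."""
    over = [n for n, p in lpars.items() if p["load"] > p["upper"]]          # step 951
    if not over or len(over) == len(lpars):                                 # step 952
        return None
    new_alloc = {n: p["alloc"] for n, p in lpars.items()}
    # Step 953: LPARs below their lower threshold go back to the initial allocation.
    freed = 0
    for n, p in lpars.items():
        if p["load"] < p["lower"] and p["alloc"] > p["initial"]:
            freed += p["alloc"] - p["initial"]
            new_alloc[n] = p["initial"]
    # Step 954: give the freed resources to the over-threshold LPAR of highest priority.
    target = min(over, key=lambda n: lpars[n]["priority"])
    new_alloc[target] += freed
    return new_alloc

lpars = {
    "LPAR0": {"load": 0.90, "upper": 0.80, "lower": 0.20, "priority": 1,
              "alloc": 100, "initial": 100},
    "LPAR1": {"load": 0.05, "upper": 0.90, "lower": 0.10, "priority": 2,
              "alloc": 120, "initial": 80},
}
print(reallocate_data_center(lpars))  # {'LPAR0': 140, 'LPAR1': 80}
```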
  • As described above, an example of allocating resources by priority according to the agreement class has been shown. Accordingly, it is possible to provide a service corresponding to the agreement fee and the customer's characteristics.

Claims (9)

1. A virtual computer system, comprising:
a reallocation policy table which describes priority of Logical Partitions (LPARS) and system resources allocation to LPARS;
an action table which describes corresponding actions to be taken based on information regarding loads; and
alternation means which alters system resources including at least one of Central Processor Unit (CPU) number, CPU allocation time and memory allocation that are objects of countermeasures for a load based on a combination of information of said reallocation policy table and said action table.
2. A virtual computer system according to claim 1, wherein said action table describes priority of countermeasures and countermeasures of alternation for system resources allocation.
3. A virtual computer system according to claim 1, wherein said alternation means is set from external of computers which are objects of countermeasures for load.
4. A virtual computer system, comprising:
a plurality of virtual computers operating on a physical computer having one or more CPUs and a main memory device;
a hypervisor;
a storing section for storing contents of a plurality of actions for changing physical resources allocated to virtual computers judged as having high loads by a load monitor which monitors load conditions of said virtual computers; and
means for implementing said plurality of actions sequentially and for conducting physical resource allocation according to contents of said actions that are deemed most effective in lowering loads of said virtual computers,
wherein said hypervisor comprises:
a reallocation policy table which describes priority of Logical Partitions (LPARS) and system resources allocation to LPARS,
an action table which describes corresponding actions to be taken based on information regarding loads, and
alternation means which alters system resources including at least one of Central Processor Unit (CPU) number, CPU allocation time and memory allocation that are objects of countermeasures for a load based on a combination of information of said reallocation policy table and said action table.
5. A virtual computer system according to claim 4, wherein said action table describes priority of countermeasures and countermeasures of alternation for system resources allocation.
6. A virtual computer system according to claim 4, wherein said alternation means is set from external of computers which are objects of countermeasures for load.
7. A virtual computer system, comprising:
a plurality of virtual computers operating on a physical computer having one or more CPUs and a main memory device;
a hypervisor;
a storing section for storing contents of a plurality of actions for changing physical resources allocated to virtual computers judged as having high loads by a load monitor which monitors load conditions of said virtual computers; and
means for implementing said plurality of actions sequentially and for conducting physical resource allocation according to contents of said actions that are deemed most effective in lowering loads of said virtual computers,
wherein said hypervisor comprises:
said load monitor for monitoring load conditions of said virtual computers based on load conditions of said main memory device,
a reallocation section for providing an output for dynamically changing allocation of physical resources to said virtual computers based on said load conditions monitored by said load monitor,
a controller for controlling physical resource allocation to said virtual computers based on load conditions monitored by said load monitor, and for demanding reallocation in response to said output from said reallocation section,
a reallocation policy table which describes priority of Logical Partitions (LPARS) and system resources allocation to LPARS,
an action table which describes corresponding actions to be taken based on information regarding loads, and
alternation means which alters system resources including at least one of Central Processor Unit (CPU) number, CPU allocation time and memory allocation that are objects of countermeasures for a load based on a combination of information of said reallocation policy table and said action table.
8. A virtual computer system according to claim 7, wherein said action table describes priority of countermeasures and countermeasures of alternation for system resources allocation.
9. A virtual computer system according to claim 7, wherein said alternation means is set from external of computers which are objects of countermeasures for load.
US11/905,517 2000-12-28 2007-10-02 Virtual computer system with dynamic resource reallocation Abandoned US20080034366A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/905,517 US20080034366A1 (en) 2000-12-28 2007-10-02 Virtual computer system with dynamic resource reallocation

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2000401048A JP2002202959A (en) 2000-12-28 2000-12-28 Virtual computer system for performing dynamic resource distribution
JP2000-401048 2000-12-28
US09/942,611 US7290259B2 (en) 2000-12-28 2001-08-31 Virtual computer system with dynamic resource reallocation
US11/905,517 US20080034366A1 (en) 2000-12-28 2007-10-02 Virtual computer system with dynamic resource reallocation

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/942,611 Continuation US7290259B2 (en) 2000-12-28 2001-08-31 Virtual computer system with dynamic resource reallocation

Publications (1)

Publication Number Publication Date
US20080034366A1 true US20080034366A1 (en) 2008-02-07

Family

ID=18865538

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/942,611 Expired - Fee Related US7290259B2 (en) 2000-12-28 2001-08-31 Virtual computer system with dynamic resource reallocation
US11/905,517 Abandoned US20080034366A1 (en) 2000-12-28 2007-10-02 Virtual computer system with dynamic resource reallocation

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/942,611 Expired - Fee Related US7290259B2 (en) 2000-12-28 2001-08-31 Virtual computer system with dynamic resource reallocation

Country Status (2)

Country Link
US (2) US7290259B2 (en)
JP (1) JP2002202959A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060136695A1 (en) * 2004-12-22 2006-06-22 International Business Machines Corporation Method and system for controlling the capacity usage of a logically partitioned data processing system
US20090013153A1 (en) * 2007-07-04 2009-01-08 Hilton Ronald N Processor exclusivity in a partitioned system
US20090106409A1 (en) * 2007-10-18 2009-04-23 Fujitsu Limited Method, apparatus and recording medium for migrating a virtual machine
US20100138829A1 (en) * 2008-12-01 2010-06-03 Vincent Hanquez Systems and Methods for Optimizing Configuration of a Virtual Machine Running At Least One Process
US20100138828A1 (en) * 2008-12-01 2010-06-03 Vincent Hanquez Systems and Methods for Facilitating Virtualization of a Heterogeneous Processor Pool
US20100153679A1 (en) * 2008-12-16 2010-06-17 International Business Machines Corporation Selection of a redundant controller based on resource view
US20100223616A1 (en) * 2009-02-27 2010-09-02 International Business Machines Corporation Removing operating system jitter-induced slowdown in virtualized environments
US20100306382A1 (en) * 2009-06-01 2010-12-02 International Business Machines Corporation Server consolidation using virtual machine resource tradeoffs
US20100325634A1 (en) * 2009-03-17 2010-12-23 Hitachi, Ltd. Method of Deciding Migration Method of Virtual Server and Management Server Thereof
US20110107035A1 (en) * 2009-11-02 2011-05-05 International Business Machines Corporation Cross-logical entity accelerators
US20110173617A1 (en) * 2010-01-11 2011-07-14 Qualcomm Incorporated System and method of dynamically controlling a processor
US20110225300A1 (en) * 2008-10-13 2011-09-15 Mitsubishi Electric Corporation Resource allocation apparatus, resource allocation program and recording media, and resource allocation method
US20120221730A1 (en) * 2011-02-28 2012-08-30 Fujitsu Limited Resource control system and resource control method
US20120218268A1 (en) * 2011-02-24 2012-08-30 International Business Machines Corporation Analysis of operator graph and dynamic reallocation of a resource to improve performance
US20130263117A1 (en) * 2012-03-28 2013-10-03 International Business Machines Corporation Allocating resources to virtual machines via a weighted cost ratio
US8719834B2 (en) 2010-05-24 2014-05-06 Panasonic Corporation Information processing system, method, program and integrated circuit for maintaining balance of processing loads with respect to real-time tasks
US9128771B1 (en) * 2009-12-08 2015-09-08 Broadcom Corporation System, method, and computer program product to distribute workload
US20150355943A1 (en) * 2014-06-05 2015-12-10 International Business Machines Corporation Weighted stealing of resources
US20180060134A1 (en) * 2016-09-01 2018-03-01 Microsoft Technology Licensing, Llc Resource oversubscription based on utilization patterns in computing systems

Families Citing this family (218)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8538843B2 (en) 2000-07-17 2013-09-17 Galactic Computing Corporation Bvi/Bc Method and system for operating an E-commerce service provider
JP2002229806A (en) * 2001-02-02 2002-08-16 Hitachi Ltd Computer system
JP3943865B2 (en) * 2001-06-05 2007-07-11 株式会社日立製作所 Computer apparatus and diagnostic method
JP2003067351A (en) * 2001-08-28 2003-03-07 Nec System Technologies Ltd Configuration control system of distributed computer
KR100422132B1 (en) * 2001-09-06 2004-03-11 엘지전자 주식회사 cpu task occupation ratio testing equipment of the realtime system
JP4018900B2 (en) 2001-11-22 2007-12-05 株式会社日立製作所 Virtual computer system and program
US7158972B2 (en) * 2001-12-11 2007-01-02 Sun Microsystems, Inc. Methods and apparatus for managing multiple user systems
US7245626B1 (en) * 2002-01-17 2007-07-17 Juniper Networks, Inc. Systems and methods for permitting queues to oversubscribe
US7266823B2 (en) * 2002-02-21 2007-09-04 International Business Machines Corporation Apparatus and method of dynamically repartitioning a computer system in response to partition workloads
US9344235B1 (en) * 2002-06-07 2016-05-17 Datacore Software Corporation Network managed volumes
US20030236852A1 (en) * 2002-06-20 2003-12-25 International Business Machines Corporation Sharing network adapter among multiple logical partitions in a data processing system
US7765299B2 (en) 2002-09-16 2010-07-27 Hewlett-Packard Development Company, L.P. Dynamic adaptive server provisioning for blade architectures
GB2402785B (en) 2002-11-18 2005-12-07 Advanced Risc Mach Ltd Processor switching between secure and non-secure modes
US20060294238A1 (en) * 2002-12-16 2006-12-28 Naik Vijay K Policy-based hierarchical management of shared resources in a grid environment
US20040158834A1 (en) * 2003-02-06 2004-08-12 International Business Machines Corporation Apparatus and method for dynamically allocating resources of a dead logical partition
US7290260B2 (en) 2003-02-20 2007-10-30 International Business Machines Corporation Dynamic processor redistribution between partitions in a computing system
US7451183B2 (en) 2003-03-21 2008-11-11 Hewlett-Packard Development Company, L.P. Assembly and method for balancing processors in a partitioned server
US20040202185A1 (en) * 2003-04-14 2004-10-14 International Business Machines Corporation Multiple virtual local area network support for shared network adapters
US7478393B2 (en) 2003-04-30 2009-01-13 International Business Machines Corporation Method for marketing to instant messaging service users
US7472246B2 (en) 2003-04-30 2008-12-30 International Business Machines Corporation Method and system for automated memory reallocating and optimization between logical partitions
WO2004104825A1 (en) 2003-05-15 2004-12-02 Applianz Technologies, Inc. Systems and methods of creating and accessing software simulated computers
US7171568B2 (en) * 2003-06-13 2007-01-30 International Business Machines Corporation Remote power control in a multi-node, partitioned data processing system
US7543296B2 (en) * 2003-08-26 2009-06-02 International Business Machines Corporation Time based multi-tiered management of resource systems
JP2007508623A (en) * 2003-10-08 2007-04-05 ユニシス コーポレーション Virtual data center that allocates and manages system resources across multiple nodes
US20050132362A1 (en) * 2003-12-10 2005-06-16 Knauerhase Robert C. Virtual machine management using activity information
CN1658185A (en) * 2004-02-18 2005-08-24 国际商业机器公司 Computer system with mutual independence symbiont multiple eperation system and its switching method
US20050192937A1 (en) * 2004-02-26 2005-09-01 International Business Machines Corporation Dynamic query optimization
US7584476B2 (en) * 2004-03-04 2009-09-01 International Business Machines Corporation Mechanism for reducing remote memory accesses to shared data in a multi-nodal computer system
US7574708B2 (en) * 2004-03-04 2009-08-11 International Business Machines Corporation Mechanism for enabling the distribution of operating system resources in a multi-node computer system
US8782654B2 (en) 2004-03-13 2014-07-15 Adaptive Computing Enterprises, Inc. Co-allocating a reservation spanning different compute resources types
JP2005309644A (en) * 2004-04-20 2005-11-04 Hitachi Ltd Resource control method and its system
US7257811B2 (en) 2004-05-11 2007-08-14 International Business Machines Corporation System, method and program to migrate a virtual machine
EP1769353A2 (en) * 2004-05-21 2007-04-04 Computer Associates Think, Inc. Method and apparatus for dynamic memory resource management
US7979863B2 (en) * 2004-05-21 2011-07-12 Computer Associates Think, Inc. Method and apparatus for dynamic CPU resource management
US8024726B2 (en) * 2004-05-28 2011-09-20 International Business Machines Corporation System for correct distribution of hypervisor work
US20070266388A1 (en) 2004-06-18 2007-11-15 Cluster Resources, Inc. System and method for providing advanced reservations in a compute environment
US7421575B2 (en) * 2004-07-16 2008-09-02 Hewlett-Packard Development Company, L.P. Configuring a physical platform in a reconfigurable data center
US20060015866A1 (en) * 2004-07-16 2006-01-19 Ang Boon S System installer for a reconfigurable data center
US20060015589A1 (en) * 2004-07-16 2006-01-19 Ang Boon S Generating a service configuration
JP2008510259A (en) * 2004-08-17 2008-04-03 ショー パーシング リミティド ライアビリティ カンパニー Modular event-driven processing
US8176490B1 (en) 2004-08-20 2012-05-08 Adaptive Computing Enterprises, Inc. System and method of interfacing a workload manager and scheduler with an identity manager
US7752623B1 (en) * 2004-09-16 2010-07-06 Hewlett-Packard Development Company, L.P. System and method for allocating resources by examining a system characteristic
US7296133B2 (en) * 2004-10-14 2007-11-13 International Business Machines Corporation Method, apparatus, and computer program product for dynamically tuning amount of physical processor capacity allocation in shared processor systems
CA2586763C (en) 2004-11-08 2013-12-17 Cluster Resources, Inc. System and method of providing system jobs within a compute environment
US20060101464A1 (en) * 2004-11-09 2006-05-11 Dohrmann Stephen H Determining a number of processors to execute a task
US20060123111A1 (en) * 2004-12-02 2006-06-08 Frank Dea Method, system and computer program product for transitioning network traffic between logical partitions in one or more data processing systems
US20060123204A1 (en) * 2004-12-02 2006-06-08 International Business Machines Corporation Method and system for shared input/output adapter in logically partitioned data processing system
US20060123217A1 (en) * 2004-12-07 2006-06-08 International Business Machines Corporation Utilization zones for automated resource management
US7694298B2 (en) * 2004-12-10 2010-04-06 Intel Corporation Method and apparatus for providing virtual server blades
US7721292B2 (en) * 2004-12-16 2010-05-18 International Business Machines Corporation System for adjusting resource allocation to a logical partition based on rate of page swaps and utilization by changing a boot configuration file
US7707578B1 (en) 2004-12-16 2010-04-27 Vmware, Inc. Mechanism for scheduling execution of threads for fair resource allocation in a multi-threaded and/or multi-core processing system
US7979862B2 (en) 2004-12-21 2011-07-12 Hewlett-Packard Development Company, L.P. System and method for replacing an inoperable master workload management process
US8621458B2 (en) * 2004-12-21 2013-12-31 Microsoft Corporation Systems and methods for exposing processor topology for virtual machines
JP4058038B2 (en) 2004-12-22 2008-03-05 株式会社日立製作所 Load monitoring device and load monitoring method
US20060168587A1 (en) * 2005-01-24 2006-07-27 Shahzad Aslam-Mir Interoperable communications apparatus and method
US20060167891A1 (en) * 2005-01-27 2006-07-27 Blaisdell Russell C Method and apparatus for redirecting transactions based on transaction response time policy in a distributed environment
US7631073B2 (en) 2005-01-27 2009-12-08 International Business Machines Corporation Method and apparatus for exposing monitoring violations to the monitored application
US8826287B1 (en) * 2005-01-28 2014-09-02 Hewlett-Packard Development Company, L.P. System for adjusting computer resources allocated for executing an application using a control plug-in
US7770173B2 (en) * 2005-02-03 2010-08-03 International Business Machines Corporation System for dynamic processor enablement
US7730486B2 (en) 2005-02-28 2010-06-01 Hewlett-Packard Development Company, L.P. System and method for migrating virtual machines on cluster systems
US20060195845A1 (en) * 2005-02-28 2006-08-31 Rhine Scott A System and method for scheduling executables
US20060206891A1 (en) * 2005-03-10 2006-09-14 International Business Machines Corporation System and method of maintaining strict hardware affinity in a virtualized logical partitioned (LPAR) multiprocessor system while allowing one processor to donate excess processor cycles to other partitions when warranted
US8863143B2 (en) 2006-03-16 2014-10-14 Adaptive Computing Enterprises, Inc. System and method for managing a hybrid compute environment
JP2006259793A (en) * 2005-03-15 2006-09-28 Hitachi Ltd Shared resource management method, and its implementation information processing system
JP2006259812A (en) * 2005-03-15 2006-09-28 Hitachi Ltd Dynamic queue load distribution method, system, and program
US9231886B2 (en) 2005-03-16 2016-01-05 Adaptive Computing Enterprises, Inc. Simple integration of an on-demand compute environment
JP4684696B2 (en) * 2005-03-22 2011-05-18 株式会社日立製作所 Storage control method and system
US20060224925A1 (en) * 2005-04-05 2006-10-05 International Business Machines Corporation Method and system for analyzing an application
EP3203374B1 (en) * 2005-04-07 2021-11-24 III Holdings 12, LLC On-demand access to compute resources
JP4367856B2 (en) * 2005-07-07 2009-11-18 レノボ シンガポール プライヴェート リミテッド Process control system and control method thereof
US7774794B2 (en) * 2005-08-19 2010-08-10 Intel Corporation Method and system for managing bandwidth in a virtualized system
US7370331B2 (en) * 2005-09-08 2008-05-06 International Business Machines Corporation Time slicing in a shared partition
US20070061227A1 (en) * 2005-09-13 2007-03-15 International Business Machines Corporation Determining a computer system inventory
US8104033B2 (en) * 2005-09-30 2012-01-24 Computer Associates Think, Inc. Managing virtual machines based on business priorty
US7493515B2 (en) * 2005-09-30 2009-02-17 International Business Machines Corporation Assigning a processor to a logical partition
US8225313B2 (en) * 2005-10-19 2012-07-17 Ca, Inc. Object-based virtual infrastructure management
US7831972B2 (en) 2005-11-03 2010-11-09 International Business Machines Corporation Method and apparatus for scheduling jobs on a network
JP4377369B2 (en) * 2005-11-09 2009-12-02 株式会社日立製作所 Resource allocation arbitration device and resource allocation arbitration method
US7861244B2 (en) * 2005-12-15 2010-12-28 International Business Machines Corporation Remote performance monitor in a virtual data center complex
JPWO2007072544A1 (en) * 2005-12-20 2009-05-28 富士通株式会社 Information processing apparatus, computer, resource allocation method, and resource allocation program
CN100561404C (en) * 2005-12-29 2009-11-18 联想(北京)有限公司 Save the method for power consumption of processing unit
JP2007188212A (en) * 2006-01-12 2007-07-26 Seiko Epson Corp Multiprocessor, and program for making computer execute control method of multiprocessor
JP2007241873A (en) * 2006-03-10 2007-09-20 Fujitsu Ltd Program for monitoring change in computer resource on network
JP4702127B2 (en) 2006-03-22 2011-06-15 日本電気株式会社 Virtual computer system, physical resource reconfiguration method and program thereof
JP4519098B2 (en) * 2006-03-30 2010-08-04 株式会社日立製作所 Computer management method, computer system, and management program
US9397944B1 (en) 2006-03-31 2016-07-19 Teradici Corporation Apparatus and method for dynamic communication scheduling of virtualized device traffic based on changing available bandwidth
US7814495B1 (en) * 2006-03-31 2010-10-12 VMware, Inc. On-line replacement and changing of virtualization software
US7653832B2 (en) * 2006-05-08 2010-01-26 Emc Corporation Storage array virtualization using a storage block mapping protocol client and server
US8209668B2 (en) * 2006-08-30 2012-06-26 International Business Machines Corporation Method and system for measuring the performance of a computer system on a per logical partition basis
US8365182B2 (en) * 2006-10-02 2013-01-29 International Business Machines Corporation Method and system for provisioning of resources
US8296760B2 (en) * 2006-10-27 2012-10-23 Hewlett-Packard Development Company, L.P. Migrating a virtual machine from a first physical machine in response to receiving a command to lower a power mode of the first physical machine
US8185893B2 (en) * 2006-10-27 2012-05-22 Hewlett-Packard Development Company, L.P. Starting up at least one virtual machine in a physical machine by a load balancer
US8082547B1 (en) * 2006-10-31 2011-12-20 Hewlett-Packard Development Company, L.P. Reallocating hardware resources among workloads in accordance with license rights
JP4571609B2 (en) * 2006-11-08 2010-10-27 株式会社日立製作所 Resource allocation method, resource allocation program, and management computer
US8171484B2 (en) * 2006-11-17 2012-05-01 Fujitsu Limited Resource management apparatus and radio network controller
WO2008062864A1 (en) * 2006-11-24 2008-05-29 Nec Corporation Virtual machine locating system, virtual machine locating method, program, virtual machine management device and server
US8584130B2 (en) * 2006-11-30 2013-11-12 International Business Machines Corporation Allocation of resources on computer systems
US8479213B2 (en) * 2007-01-25 2013-07-02 General Electric Company Load balancing medical imaging applications across healthcare imaging devices in reference to projected load based on user type
WO2008102739A1 (en) * 2007-02-23 2008-08-28 Nec Corporation Virtual server system and physical server selecting method
JP4871174B2 (en) * 2007-03-09 2012-02-08 株式会社日立製作所 Virtual computer system
JP4982216B2 (en) * 2007-03-14 2012-07-25 株式会社日立製作所 Policy creation support method, policy creation support system, and program
US8219995B2 (en) * 2007-03-28 2012-07-10 International Business Machines Corporation Capturing hardware statistics for partitions to enable dispatching and scheduling efficiency
US8510741B2 (en) * 2007-03-28 2013-08-13 Massachusetts Institute Of Technology Computing the processor desires of jobs in an adaptively parallel scheduling environment
US20080271030A1 (en) * 2007-04-30 2008-10-30 Dan Herington Kernel-Based Workload Management
US9274847B2 (en) * 2007-05-04 2016-03-01 Microsoft Technology Licensing, Llc Resource management platform
US20080295097A1 (en) * 2007-05-24 2008-11-27 Advanced Micro Devices, Inc. Techniques for sharing resources among multiple devices in a processor system
US8495627B2 (en) 2007-06-27 2013-07-23 International Business Machines Corporation Resource allocation based on anticipated resource underutilization in a logically partitioned multi-processor environment
CN101354663A (en) * 2007-07-25 2009-01-28 联想(北京)有限公司 Method and apparatus for scheduling true CPU resource applied to virtual machine system
US20090055831A1 (en) * 2007-08-24 2009-02-26 Bauman Ellen M Allocating Network Adapter Resources Among Logical Partitions
WO2009037915A1 (en) * 2007-09-18 2009-03-26 Nec Corporation Server recombination support system and server reassignment support method
US8041773B2 (en) 2007-09-24 2011-10-18 The Research Foundation Of State University Of New York Automatic clustering for self-organizing grids
JP5000456B2 (en) * 2007-10-31 2012-08-15 ヒューレット−パッカード デベロップメント カンパニー エル.ピー. Resource management system, resource management apparatus and method
JP4957806B2 (en) 2007-11-13 2012-06-20 富士通株式会社 Transmission apparatus, switching processing method, and switching processing program
JP4906686B2 (en) * 2007-11-19 2012-03-28 三菱電機株式会社 Virtual machine server sizing apparatus, virtual machine server sizing method, and virtual machine server sizing program
US8352950B2 (en) * 2008-01-11 2013-01-08 International Business Machines Corporation Algorithm to share physical processors to maximize processor cache usage and topologies
JP5256744B2 (en) * 2008-01-16 2013-08-07 日本電気株式会社 Resource allocation system, resource allocation method and program
US20090210873A1 (en) * 2008-02-15 2009-08-20 International Business Machines Corporation Re-tasking a managed virtual machine image in a virtualization data processing system
US8245236B2 (en) * 2008-02-27 2012-08-14 International Business Machines Corporation Lock based moving of threads in a shared processor partitioning environment
US8903983B2 (en) * 2008-02-29 2014-12-02 Dell Software Inc. Method, system and apparatus for managing, modeling, predicting, allocating and utilizing resources and bottlenecks in a computer network
US8935701B2 (en) * 2008-03-07 2015-01-13 Dell Software Inc. Unified management platform in a computer network
US7539987B1 (en) * 2008-03-16 2009-05-26 International Business Machines Corporation Exporting unique operating system features to other partitions in a partitioned environment
US8489995B2 (en) * 2008-03-18 2013-07-16 Rightscale, Inc. Systems and methods for efficiently managing and configuring virtual servers
US8443363B1 (en) * 2008-05-30 2013-05-14 Symantec Corporation Coordinated virtualization activities
US8127086B2 (en) 2008-06-06 2012-02-28 International Business Machines Corporation Transparent hypervisor pinning of critical memory areas in a shared memory partition data processing system
US8225068B2 (en) * 2008-06-09 2012-07-17 International Business Machines Corporation Virtual real memory exportation for logical partitions
US9081624B2 (en) * 2008-06-26 2015-07-14 Microsoft Technology Licensing, Llc Automatic load balancing, such as for hosted applications
US20090328036A1 (en) * 2008-06-27 2009-12-31 Oqo, Inc. Selection of virtual computing resources using hardware model presentations
US8352868B2 (en) * 2008-06-27 2013-01-08 Google Inc. Computing with local and remote resources including user mode control
US7809875B2 (en) * 2008-06-30 2010-10-05 Wind River Systems, Inc. Method and system for secure communication between processor partitions
WO2010016104A1 (en) * 2008-08-04 2010-02-11 富士通株式会社 Multiprocessor system, management device for multiprocessor system, and computer-readable recording medium in which management program for multiprocessor system is recorded
US20100131959A1 (en) * 2008-11-26 2010-05-27 Spiers Adam Z Proactive application workload management
WO2010064277A1 (en) 2008-12-03 2010-06-10 Hitachi, Ltd. Techniques for managing processor resource for a multi-processor server executing multiple operating systems
US9086913B2 (en) 2008-12-31 2015-07-21 Intel Corporation Processor extensions for execution of secure embedded containers
JP5343586B2 (en) 2009-01-29 2013-11-13 富士通株式会社 Information processing apparatus, information processing method, and computer program
EP2395430B1 (en) * 2009-02-09 2017-07-12 Fujitsu Limited Virtual computer allocation method, allocation program, and information processing device having a virtual computer environment
JP5365237B2 (en) * 2009-02-16 2013-12-11 株式会社リコー Emulation device and emulation system
JP2010218445A (en) * 2009-03-18 2010-09-30 Toshiba Corp Multicore processor system, scheduling method and scheduler program
US9535767B2 (en) * 2009-03-26 2017-01-03 Microsoft Technology Licensing, Llc Instantiating a virtual machine with a virtual non-uniform memory architecture
US9529636B2 (en) * 2009-03-26 2016-12-27 Microsoft Technology Licensing, Llc System and method for adjusting guest memory allocation based on memory pressure in virtual NUMA nodes of a virtual machine
JP5347648B2 (en) * 2009-03-30 2013-11-20 富士通株式会社 Program, information processing apparatus, and status output method
US8595740B2 (en) * 2009-03-31 2013-11-26 Microsoft Corporation Priority-based management of system load level
US20100274947A1 (en) * 2009-04-27 2010-10-28 Hitachi, Ltd. Memory management method, memory management program, and memory management device
US8261266B2 (en) * 2009-04-30 2012-09-04 Microsoft Corporation Deploying a virtual machine having a virtual hardware configuration matching an improved hardware profile with respect to execution of an application
US8195879B2 (en) 2009-05-08 2012-06-05 International Business Machines Corporation Demand based partitioning of microprocessor caches
US8650562B2 (en) * 2009-06-12 2014-02-11 International Business Machines Corporation Method and apparatus for scalable monitoring of virtual machine environments combining base virtual machine and single monitoring agent for measuring common characteristics and individual virtual machines measuring individualized characteristics
US9152200B2 (en) * 2009-06-23 2015-10-06 Hewlett-Packard Development Company, L.P. Resource and power management using nested heterogeneous hypervisors
US8286178B2 (en) * 2009-06-24 2012-10-09 International Business Machines Corporation Allocation and regulation of CPU entitlement for virtual processors in logical partitioned platform
JP5507136B2 (en) * 2009-07-09 2014-05-28 株式会社日立製作所 Management apparatus and method, and computer system
US8656396B2 (en) * 2009-08-11 2014-02-18 International Business Machines Corporation Performance optimization based on threshold performance measure by resuming suspended threads if present or by creating threads within elastic and data parallel operators
US8631415B1 (en) 2009-08-25 2014-01-14 Netapp, Inc. Adjustment of threads for execution based on over-utilization of a domain in a multi-processor system by sub-dividing parallizable group of threads to sub-domains
US8521472B2 (en) * 2009-09-18 2013-08-27 International Business Machines Corporation Method to compute wait time
US10877695B2 (en) 2009-10-30 2020-12-29 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9122537B2 (en) * 2009-10-30 2015-09-01 Cisco Technology, Inc. Balancing server load according to availability of physical resources based on the detection of out-of-sequence packets
US11720290B2 (en) 2009-10-30 2023-08-08 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US8832683B2 (en) * 2009-11-30 2014-09-09 Red Hat Israel, Ltd. Using memory-related metrics of host machine for triggering load balancing that migrate virtual machine
US8869160B2 (en) * 2009-12-24 2014-10-21 International Business Machines Corporation Goal oriented performance management of workload utilizing accelerators
US9229779B2 (en) * 2009-12-28 2016-01-05 Empire Technology Development Llc Parallelizing heterogeneous network communications in smart devices based on selection of task allocation strategy
JP5484117B2 (en) * 2010-02-17 2014-05-07 株式会社日立製作所 Hypervisor and server device
JP2011170483A (en) * 2010-02-17 2011-09-01 Nec Corp Virtual computer device and control method for the same
WO2011102833A1 (en) * 2010-02-18 2011-08-25 Hewlett-Packard Development Company, L.P. A system and method for dynamically allocating high-quality and low-quality facility assets at the datacenter level
US9122538B2 (en) * 2010-02-22 2015-09-01 Virtustream, Inc. Methods and apparatus related to management of unit-based virtual resources within a data center environment
CN102754079A (en) 2010-02-23 2012-10-24 富士通株式会社 Multi-core processor system, control program, and control method
JP5544967B2 (en) 2010-03-24 2014-07-09 富士通株式会社 Virtual machine management program and virtual machine management apparatus
US8589941B2 (en) 2010-04-23 2013-11-19 International Business Machines Corporation Resource affinity via dynamic reconfiguration for multi-queue network adapters
US8826271B2 (en) * 2010-04-28 2014-09-02 Cavium, Inc. Method and apparatus for a virtual system on chip
US8745633B2 (en) * 2010-05-11 2014-06-03 Lsi Corporation System and method for managing resources in a partitioned computing system based on resource usage volatility
WO2011142227A1 (en) * 2010-05-14 2011-11-17 インターナショナル・ビジネス・マシーンズ・コーポレーション Computer system, method and program
KR101690652B1 (en) * 2010-08-25 2016-12-28 삼성전자주식회사 Scheduling apparatus and method for a multicore system
EP2635972A4 (en) * 2010-10-13 2016-10-26 Zte Usa Inc System and method for multimedia multi-party peering (m2p2)
CN103154896A (en) * 2010-10-19 2013-06-12 株式会社日立制作所 Method and device for deploying virtual computers
WO2012066604A1 (en) * 2010-11-19 2012-05-24 Hitachi, Ltd. Server system and method for managing the same
US9639273B2 (en) * 2011-01-31 2017-05-02 Nokia Technologies Oy Method and apparatus for representing content data
US8738972B1 (en) 2011-02-04 2014-05-27 Dell Software Inc. Systems and methods for real-time monitoring of virtualized environments
US9141410B2 (en) 2011-03-08 2015-09-22 Rackspace Us, Inc. Pluggable allocation in a cloud computing system
US20130205028A1 (en) * 2012-02-07 2013-08-08 Rackspace Us, Inc. Elastic, Massively Parallel Processing Data Warehouse
JP5673233B2 (en) 2011-03-09 2015-02-18 富士通株式会社 Information processing apparatus, virtual machine management method, and virtual machine management program
US8776055B2 (en) 2011-05-18 2014-07-08 Vmware, Inc. Combining profiles based on priorities
US9450873B2 (en) 2011-06-28 2016-09-20 Microsoft Technology Licensing, Llc Performance isolation for clouds
US9495222B1 (en) 2011-08-26 2016-11-15 Dell Software Inc. Systems and methods for performance indexing
JP2013109556A (en) * 2011-11-21 2013-06-06 Bank Of Tokyo-Mitsubishi Ufj Ltd Monitoring controller
JP5842646B2 (en) * 2012-02-02 2016-01-13 富士通株式会社 Information processing system, virtual machine management program, virtual machine management method
JP2013214146A (en) * 2012-03-30 2013-10-17 Toshiba Corp Virtual computer system, hypervisor, and virtual computer system management method
CN102693160B (en) * 2012-05-15 2016-05-18 浪潮电子信息产业股份有限公司 A kind of method that computer system dynamic resource is reshuffled
JP5740352B2 (en) * 2012-06-04 2015-06-24 株式会社日立製作所 Virtual computer system and virtual computer system load control method
KR101393237B1 (en) 2012-07-23 2014-05-08 인하대학교 산학협력단 Dynamic available resource reallocation based job allocation system and method in grid computing thereof
US10187452B2 (en) 2012-08-23 2019-01-22 TidalScale, Inc. Hierarchical dynamic scheduling
US9166895B1 (en) * 2012-12-13 2015-10-20 Vmware, Inc. Detecting process execution state change using measurement of resource consumption
US9183016B2 (en) * 2013-02-27 2015-11-10 Vmware, Inc. Adaptive task scheduling of Hadoop in a virtualized environment
US9152450B2 (en) * 2013-03-12 2015-10-06 International Business Machines Corporation Offloading service requests to a second guest hypervisor in a logical partition shared by a plurality of guest hypervisors
JP6094288B2 (en) * 2013-03-15 2017-03-15 日本電気株式会社 Resource management apparatus, resource management system, resource management method, and resource management program
US20140280577A1 (en) * 2013-03-15 2014-09-18 Salesforce.Com, Inc. Systems and methods for interacting with an application in a publisher
US9106391B2 (en) 2013-05-28 2015-08-11 International Business Machines Corporation Elastic auto-parallelization for stream processing applications based on a measured throughput and congestion
GB2515537A (en) * 2013-06-27 2014-12-31 Ibm Backup management for a plurality of logical partitions
US20150081400A1 (en) * 2013-09-19 2015-03-19 Infosys Limited Watching ARM
JP6158751B2 (en) * 2014-05-30 2017-07-05 日本電信電話株式会社 Computer resource allocation apparatus and computer resource allocation program
EP2955631B1 (en) * 2014-06-09 2019-05-01 Nokia Solutions and Networks Oy Controlling of virtualized network functions for usage in communication network
US9690608B2 (en) * 2014-06-12 2017-06-27 Vmware, Inc. Method and system for managing hosts that run virtual machines within a cluster
JP2017199044A (en) * 2014-07-31 2017-11-02 日本電気株式会社 Virtual computer system, scheduling method, and program
CN104503838B (en) * 2014-11-23 2017-06-27 华中科技大学 A kind of virtual cpu dispatching method
JP6495645B2 (en) * 2014-12-19 2019-04-03 株式会社東芝 Resource control apparatus, method, and program
JP6447217B2 (en) * 2015-02-17 2019-01-09 富士通株式会社 Execution information notification program, information processing apparatus, and information processing system
US20180067780A1 (en) * 2015-06-30 2018-03-08 Hitachi, Ltd. Server storage system management system and management method
US10154091B1 (en) * 2015-12-28 2018-12-11 Amazon Technologies, Inc. Deploying infrastructure units according to resource hosting constraints
US10235211B2 (en) 2016-04-22 2019-03-19 Cavium, Llc Method and apparatus for dynamic virtual system on chip
WO2017203647A1 (en) * 2016-05-26 2017-11-30 株式会社日立製作所 Computer and i/o adaptor allocation management method
CN106911592B (en) 2016-06-01 2020-06-12 创新先进技术有限公司 Self-adaptive resource allocation method and device
US10353736B2 (en) 2016-08-29 2019-07-16 TidalScale, Inc. Associating working sets and threads
US10176550B1 (en) * 2017-03-20 2019-01-08 Nutanix, Inc. GPU resource usage display and dynamic GPU resource allocation in a networked virtualization system
JPWO2018173481A1 (en) * 2017-03-24 2020-01-30 日本電気株式会社 Service configuration design apparatus and service configuration design method
US10503233B2 (en) * 2017-04-14 2019-12-10 Intel Corporation Usage scenario based monitoring and adjustment
US10489184B2 (en) 2017-05-12 2019-11-26 At&T Intellectual Property I, L.P. Systems and methods for management of virtual machine resources in a network environment through localized assignment of virtual machines having complimentary resource requirements
US11023135B2 (en) 2017-06-27 2021-06-01 TidalScale, Inc. Handling frequently accessed pages
US10817347B2 (en) 2017-08-31 2020-10-27 TidalScale, Inc. Entanglement of pages and guest threads
JP7035858B2 (en) * 2018-07-03 2022-03-15 富士通株式会社 Migration management program, migration method and migration system
US10678480B1 (en) * 2019-01-31 2020-06-09 EMC IP Holding Company LLC Dynamic adjustment of a process scheduler in a data storage system based on loading of the data storage system during a preceding sampling time period
CN109889608B (en) * 2019-03-29 2021-12-10 北京金山安全软件有限公司 Dynamic resource loading method and device, electronic equipment and storage medium
US11537436B2 (en) * 2019-10-02 2022-12-27 Qualcomm Incorporated Method of configuring a memory block allocation of a machine learning network
JP7012921B2 (en) 2020-02-06 2022-01-28 三菱電機株式会社 Setting change device, setting change method and setting change program

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4564903A (en) * 1983-10-05 1986-01-14 International Business Machines Corporation Partitioned multiprocessor programming system
US4843541A (en) * 1987-07-29 1989-06-27 International Business Machines Corporation Logical resource partitioning of a data processing system
US5564040A (en) * 1994-11-08 1996-10-08 International Business Machines Corporation Method and apparatus for providing a server function in a logically partitioned hardware machine
US5592671A (en) * 1993-03-02 1997-01-07 Kabushiki Kaisha Toshiba Resource management system and method
US6279046B1 (en) * 1999-05-19 2001-08-21 International Business Machines Corporation Event-driven communications interface for logically-partitioned computer
US20020069369A1 (en) * 2000-07-05 2002-06-06 Tremain Geoffrey Donald Method and apparatus for providing computer services
US6438671B1 (en) * 1999-07-01 2002-08-20 International Business Machines Corporation Generating partition corresponding real address in partitioned mode supporting system
US20030065835A1 (en) * 1999-09-28 2003-04-03 Juergen Maergner Processing channel subsystem pending i/o work queues based on priorities
US6587938B1 (en) * 1999-09-28 2003-07-01 International Business Machines Corporation Method, system and program products for managing central processing unit resources of a computing environment
US6625638B1 (en) * 1998-04-30 2003-09-23 International Business Machines Corporation Management of a logical partition that supports different types of processors
US6633916B2 (en) * 1998-06-10 2003-10-14 Hewlett-Packard Development Company, L.P. Method and apparatus for virtual resource handling in a multi-processor computer system
US6985937B1 (en) * 2000-05-11 2006-01-10 Ensim Corporation Dynamically modifying the resources of a virtual server

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3186244B2 (en) 1992-09-18 2001-07-11 株式会社日立製作所 Virtual computer system
JPH06110715A (en) 1992-09-25 1994-04-22 Hitachi Ltd Dynamic allocating method for computer resources in virtual computer system
JPH0926889A (en) 1995-07-13 1997-01-28 Hitachi Ltd Virtual machine system
JPH10301795A (en) 1997-04-28 1998-11-13 Hitachi Ltd Virtual computer system

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4564903A (en) * 1983-10-05 1986-01-14 International Business Machines Corporation Partitioned multiprocessor programming system
US4843541A (en) * 1987-07-29 1989-06-27 International Business Machines Corporation Logical resource partitioning of a data processing system
US5592671A (en) * 1993-03-02 1997-01-07 Kabushiki Kaisha Toshiba Resource management system and method
US5564040A (en) * 1994-11-08 1996-10-08 International Business Machines Corporation Method and apparatus for providing a server function in a logically partitioned hardware machine
US6625638B1 (en) * 1998-04-30 2003-09-23 International Business Machines Corporation Management of a logical partition that supports different types of processors
US6633916B2 (en) * 1998-06-10 2003-10-14 Hewlett-Packard Development Company, L.P. Method and apparatus for virtual resource handling in a multi-processor computer system
US6279046B1 (en) * 1999-05-19 2001-08-21 International Business Machines Corporation Event-driven communications interface for logically-partitioned computer
US6438671B1 (en) * 1999-07-01 2002-08-20 International Business Machines Corporation Generating partition corresponding real address in partitioned mode supporting system
US20030065835A1 (en) * 1999-09-28 2003-04-03 Juergen Maergner Processing channel subsystem pending i/o work queues based on priorities
US6587938B1 (en) * 1999-09-28 2003-07-01 International Business Machines Corporation Method, system and program products for managing central processing unit resources of a computing environment
US6651125B2 (en) * 1999-09-28 2003-11-18 International Business Machines Corporation Processing channel subsystem pending I/O work queues based on priorities
US6985937B1 (en) * 2000-05-11 2006-01-10 Ensim Corporation Dynamically modifying the resources of a virtual server
US20020069369A1 (en) * 2000-07-05 2002-06-06 Tremain Geoffrey Donald Method and apparatus for providing computer services

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060136695A1 (en) * 2004-12-22 2006-06-22 International Business Machines Corporation Method and system for controlling the capacity usage of a logically partitioned data processing system
US7752415B2 (en) * 2004-12-22 2010-07-06 International Business Machines Corporation Method for controlling the capacity usage of a logically partitioned data processing system
US20090013153A1 (en) * 2007-07-04 2009-01-08 Hilton Ronald N Processor exclusivity in a partitioned system
US8161476B2 (en) * 2007-07-04 2012-04-17 International Business Machines Corporation Processor exclusivity in a partitioned system
US8910159B2 (en) 2007-07-04 2014-12-09 International Business Machines Corporation Processor exclusivity in a partitioned system
US20090106409A1 (en) * 2007-10-18 2009-04-23 Fujitsu Limited Method, apparatus and recording medium for migrating a virtual machine
US8468230B2 (en) 2007-10-18 2013-06-18 Fujitsu Limited Method, apparatus and recording medium for migrating a virtual machine
US20110225300A1 (en) * 2008-10-13 2011-09-15 Mitsubishi Electric Corporation Resource allocation apparatus, resource allocation program and recording media, and resource allocation method
US8452875B2 (en) 2008-10-13 2013-05-28 Mitsubishi Electric Corporation Resource allocation apparatus, resource allocation program and recording media, and resource allocation method
US20100138829A1 (en) * 2008-12-01 2010-06-03 Vincent Hanquez Systems and Methods for Optimizing Configuration of a Virtual Machine Running At Least One Process
US8943512B2 (en) 2008-12-01 2015-01-27 Citrix Systems, Inc. Systems and methods for facilitating virtualization of a heterogeneous processor pool
US8352952B2 (en) * 2008-12-01 2013-01-08 Citrix Systems, Inc. Systems and methods for facilitating virtualization of a heterogeneous processor pool
US20100138828A1 (en) * 2008-12-01 2010-06-03 Vincent Hanquez Systems and Methods for Facilitating Virtualization of a Heterogeneous Processor Pool
US20100153679A1 (en) * 2008-12-16 2010-06-17 International Business Machines Corporation Selection of a redundant controller based on resource view
US8245233B2 (en) * 2008-12-16 2012-08-14 International Business Machines Corporation Selection of a redundant controller based on resource view
US8271990B2 (en) 2009-02-27 2012-09-18 International Business Machines Corporation Removing operating system jitter-induced slowdown in virtualized environments
US20100223616A1 (en) * 2009-02-27 2010-09-02 International Business Machines Corporation Removing operating system jitter-induced slowdown in virtualized environments
US20100325634A1 (en) * 2009-03-17 2010-12-23 Hitachi, Ltd. Method of Deciding Migration Method of Virtual Server and Management Server Thereof
US8595737B2 (en) * 2009-03-17 2013-11-26 Hitachi, Ltd. Method for migrating a virtual server to a physical server according to a variation ratio, a reference execution time, a predetermined occupied resource amount and an occupancy amount
US10282234B2 (en) 2009-06-01 2019-05-07 International Business Machines Corporation Server consolidation using virtual machine resource tradeoffs
US9424094B2 (en) 2009-06-01 2016-08-23 International Business Machines Corporation Server consolidation using virtual machine resource tradeoffs
US10789106B2 (en) 2009-06-01 2020-09-29 International Business Machines Corporation Server consolidation using virtual machine resource tradeoffs
US20100306382A1 (en) * 2009-06-01 2010-12-02 International Business Machines Corporation Server consolidation using virtual machine resource tradeoffs
US20110107035A1 (en) * 2009-11-02 2011-05-05 International Business Machines Corporation Cross-logical entity accelerators
US8656375B2 (en) 2009-11-02 2014-02-18 International Business Machines Corporation Cross-logical entity accelerators
US9128771B1 (en) * 2009-12-08 2015-09-08 Broadcom Corporation System, method, and computer program product to distribute workload
US8671413B2 (en) * 2010-01-11 2014-03-11 Qualcomm Incorporated System and method of dynamic clock and voltage scaling for workload based power management of a wireless mobile device
US8996595B2 (en) 2010-01-11 2015-03-31 Qualcomm Incorporated User activity response dynamic frequency scaling processor power management system and method
US20110173617A1 (en) * 2010-01-11 2011-07-14 Qualcomm Incorporated System and method of dynamically controlling a processor
US8719834B2 (en) 2010-05-24 2014-05-06 Panasonic Corporation Information processing system, method, program and integrated circuit for maintaining balance of processing loads with respect to real-time tasks
US8782656B2 (en) * 2011-02-24 2014-07-15 International Business Machines Corporation Analysis of operator graph and dynamic reallocation of a resource to improve performance
US20130081046A1 (en) * 2011-02-24 2013-03-28 International Business Machines Corporation Analysis of operator graph and dynamic reallocation of a resource to improve performance
US8997108B2 (en) * 2011-02-24 2015-03-31 International Business Machines Corporation Analysis of operator graph and dynamic reallocation of a resource to improve performance
US20120218268A1 (en) * 2011-02-24 2012-08-30 International Business Machines Corporation Analysis of operator graph and dynamic reallocation of a resource to improve performance
US20120221730A1 (en) * 2011-02-28 2012-08-30 Fujitsu Limited Resource control system and resource control method
US20130263117A1 (en) * 2012-03-28 2013-10-03 International Business Machines Corporation Allocating resources to virtual machines via a weighted cost ratio
US10162683B2 (en) * 2014-06-05 2018-12-25 International Business Machines Corporation Weighted stealing of resources
US10599484B2 (en) 2014-06-05 2020-03-24 International Business Machines Corporation Weighted stealing of resources
US20150355943A1 (en) * 2014-06-05 2015-12-10 International Business Machines Corporation Weighted stealing of resources
US20180060134A1 (en) * 2016-09-01 2018-03-01 Microsoft Technology Licensing, Llc Resource oversubscription based on utilization patterns in computing systems
US10678603B2 (en) * 2016-09-01 2020-06-09 Microsoft Technology Licensing, Llc Resource oversubscription based on utilization patterns in computing systems
US11714686B2 (en) * 2016-09-01 2023-08-01 Microsoft Technology Licensing, Llc Resource oversubscription based on utilization patterns in computing systems

Also Published As

Publication number Publication date
US20020087611A1 (en) 2002-07-04
JP2002202959A (en) 2002-07-19
US7290259B2 (en) 2007-10-30

Similar Documents

Publication Publication Date Title
US7290259B2 (en) Virtual computer system with dynamic resource reallocation
US7765552B2 (en) System and method for allocating computing resources for a grid virtual system
US7748005B2 (en) System and method for allocating a plurality of resources between a plurality of computing domains
US9058218B2 (en) Resource allocation based on anticipated resource underutilization in a logically partitioned multi-processor environment
US8370472B2 (en) System and method for efficient machine selection for job provisioning
US7617375B2 (en) Workload management in virtualized data processing environment
US8365183B2 (en) System and method for dynamic resource provisioning for job placement
US7178145B2 (en) Queues for soft affinity code threads and hard affinity code threads for allocation of processors to execute the threads in a multi-processor system
US7996842B2 (en) Computer resource management for workloads or applications based on service level objectives
US20150317179A1 (en) Efficient input/output-aware multi-processor virtual machine scheduling
TWI235952B (en) Thread dispatch mechanism and method for multiprocessor computer systems
US7698531B2 (en) Workload management in virtualized data processing environment
Wang et al. SmartHarvest: Harvesting idle CPUs safely and efficiently in the cloud
US20080103728A1 (en) Providing Policy-Based Operating System Services in an Operating System on a Computing System
US8020164B2 (en) System for determining and reporting benefits of borrowed computing resources in a partitioned environment
US6587865B1 (en) Locally made, globally coordinated resource allocation decisions based on information provided by the second-price auction model
US20020004966A1 (en) Painting apparatus
JP2001331333A (en) Computer system and method for controlling computer system
US8332850B2 (en) Thread starvation profiler by utilizing a set of counters
Lama et al. Performance isolation of data-intensive scale-out applications in a multi-tenant cloud
Iorgulescu et al. Don't cry over spilled records: Memory elasticity of data-parallel applications and its application to cluster scheduling
US7698530B2 (en) Workload management in virtualized data processing environment
CN110543355A (en) method for automatically balancing cloud platform resources
Zhang et al. Workload consolidation in alibaba clusters: the good, the bad, and the ugly
JP5243822B2 (en) Workload management in a virtualized data processing environment

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION