WO1993000636A1 - Translation buffer for virtual machines with address space match - Google Patents


Info

Publication number
WO1993000636A1
Authority
WO
WIPO (PCT)
Prior art keywords
match
virtual
address space
memory
address
Prior art date
Application number
PCT/US1992/005351
Other languages
French (fr)
Inventor
Andrew H. Mason
Judith S. Hall
Richard T. Witek
Paul T. Robinson
Original Assignee
Digital Equipment Corporation
Priority date
Filing date
Publication date
Application filed by Digital Equipment Corporation
Priority to AU22954/92A (AU654204B2)
Priority to DE69223386T (DE69223386T2)
Priority to EP92914596A (EP0548315B1)
Publication of WO1993000636A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 — Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 — Addressing or allocation; Relocation
    • G06F 12/08 — Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/10 — Address translation
    • G06F 12/1027 — Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • G06F 12/1036 — Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
    • G06F 12/1045 — Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
    • G06F 12/1054 — Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache, the data cache being concurrently physically addressed
    • G06F 2212/00 — Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/65 — Details of virtual memory and virtual address translation
    • G06F 2212/656 — Address space sharing

Definitions

  • This invention relates to digital computers, and more particularly to a virtual memory management system for a CPU executing multiple processes.
  • a reduced instruction set processor chip which implements a virtual memory management system.
  • a virtual address is translated to a physical address in such a system before a memory reference is executed, where the physical address is that used to access the main memory.
  • the physical addresses are maintained in a page table, indexed by the virtual address, so whenever a virtual address is presented the memory management system finds the physical address by referencing the page table.
  • a process executing on such a machine will probably be using only a few pages, and so these most-likely used page table entries are kept in a translation buffer within the CPU chip itself, eliminating the need to make a memory reference to fetch the page table entry.
  • the page table entries contain other information used by the memory management system, such as privilege information, access rights, etc., to provide secure and robust operation.
  • the current state of the processor and the characteristics of the memory reference are checked against information in the page table entry to make sure the memory reference is proper.
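The translation path described in the items above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the page size (8Kbytes, matching the embodiment described later), the table contents, and the function names are all assumptions.

```python
# Minimal sketch of page-table translation, assuming 8KB pages
# (13-bit byte-within-page offset). Table contents are made up.
PAGE_BITS = 13
PAGE_SIZE = 1 << PAGE_BITS  # 8192 bytes

# Hypothetical page table: virtual page number -> page frame number.
page_table = {0x000A2: 0x01F3, 0x000A3: 0x0044}

def translate(vaddr):
    """Split the virtual address, look up the PFN, rebuild the physical address."""
    vpn = vaddr >> PAGE_BITS          # virtual page number indexes the table
    offset = vaddr & (PAGE_SIZE - 1)  # byte within the page passes through unchanged
    pfn = page_table[vpn]             # a miss here would be a page fault
    return (pfn << PAGE_BITS) | offset
```

In the real system the page table lives in memory and the translation buffer caches recently used entries so that most translations avoid the memory reference.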
  • a number of processes may be executing in a time-shared manner on a CPU at a given time, and these processes will each have their own areas of virtual memory.
  • the operating system itself will contain a number of pages which must be referenced by each one of these processes.
  • the pages thus shared by processes will best be kept in main memory rather than swapped out, and the page table entries for such pages will preferably remain in the translation buffer, since continuing reference will be made to them.
  • the translation buffer is usually flushed by invalidating all entries when a context switch is made.
  • a mechanism using so-called "address space numbers" is implemented in a processor to reduce the need for invalidation of cached address translations in the translation buffer for process-specific addresses when a context switch occurs.
  • the address space number (process tag) for the current process is loaded to a register in the processor to become part of the current state; this loading is done by a privileged instruction from a process-specific block in memory.
  • each process has associated with it an address space number, which is an arbitrarily-assigned number generated by the operating system. This address space number is maintained as part of the machine state, and also stored in the translation buffer for each page entry belonging to that process.
  • the current address space number is compared with the entry in the translation buffer, to see if there is a match.
  • an address space match function can be added to the comparison; a "match" bit in the entry can turn off the requirement for matching address space numbers. If this bit is set, the entry will "match" whenever the address tags match, regardless of the address space numbers. The operating system can thus load certain page table entries with this match bit set so that these pages are shared by all processes.
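The hit test just described can be sketched as follows. The field and function names are hypothetical; only the logic (tag match, then ASN match unless the address space match bit is set) comes from the text.

```python
from dataclasses import dataclass

@dataclass
class TBEntry:
    valid: bool     # entry may have been invalidated, e.g. by a flush
    tag: int        # high-order virtual address bits
    asn: int        # address space number of the owning process
    asm_bit: bool   # address space match: skip the ASN comparison when set
    pfn: int        # page frame number used to form the physical address

def tb_hit(entry, va_tag, current_asn):
    """An entry hits when it is valid, the tag matches, and either the ASM
    bit is set (page shared by all processes) or the ASNs agree."""
    if not entry.valid or entry.tag != va_tag:
        return False
    return entry.asm_bit or entry.asn == current_asn
```

Because the ASN is part of the comparison, entries belonging to other processes can simply stay resident across a context switch instead of being flushed.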
  • In operating a CPU using virtual machines, each virtual machine functions as if it were an independent processor, and each virtual machine has a virtual machine operating system and a number of processes running, just as when only one CPU is functioning.
  • Virtual machines are described by Siewiorek et al. in "Computer Structures: Principles and Examples", published by McGraw-Hill, 1982, pp. 227-228.
  • a virtual machine monitor executes at another execution level.
  • the several virtual machines and the virtual machine monitor running on a CPU must have their memory spaces kept separate and isolated from one another, but yet maximize system performance. To this end, the virtual machines and the virtual machine monitor must be able to use the same virtual addresses for different purposes. However, when context switching from one virtual machine to another, or to or from the virtual machine monitor, needlessly flushing entries in the translation buffer which will be used in the new context imposes a performance penalty. Therefore it is important to offer both address space numbers and the match feature when implementing virtual machines.
  • the present invention in its broad form resides, in a CPU executing a virtual memory management system, in a method of operating the CPU, comprising the steps of: providing a translation buffer for translating virtual addresses to physical addresses; storing in said translation buffer a plurality of page table entries, each page table entry containing a page frame number indexed by a virtual address tag; also storing in said translation buffer for each said page table entry an address space number and an address space match entry, where said address space number is a value related to a process executed on said processor and said match entry is a value indicating whether the address space number is required to be matched; storing a third match value indicating whether or not said match entry is to be disabled; and comparing said virtual address tag with a field of a virtual address generated by said processor, and also comparing said address space number with a current number maintained as part of the state of said processor, the page frame number being used to complete the translation if the virtual address tag comparison produces a match and either the address space number comparison produces a match or said match entry indicates that matching is not required.
  • the invention also consists in a processor system having a CPU and a memory, comprising: a) means in the CPU for fetching instructions from said memory, decoding said instructions, and executing said instructions, said executing including accessing said memory for read and write data; b) means in said CPU for generating virtual addresses used for said fetching of instructions and said accessing said memory for data; c) a page table stored in said memory and containing a plurality of page table entries, each page table entry including a page frame number referencing a page of said memory; d) means for translating said virtual addresses to physical addresses for said memory, including a translation buffer storing selected ones of said page table entries; e) and means for addressing said memory using the page frame number from said translation buffer and using a part of said virtual address; f) said translation buffer storing for each said page table entry an address space number, and storing an address space match entry, where said address space number is a value related to a process executed on said CPU, and said match entry is a value indicating whether the address space number is required to be matched.
  • a CPU executing a virtual memory management system employs an address space number feature to allow entries to remain in the translation buffer for processes not currently executing, and the separate processes or the operating system can reuse entries in the translation buffer for such pages of memory that are commonly referenced.
  • an "address space match" entry in the page table entry signals that the translation buffer content can be used when the address tag matches, even though the address space numbers do not necessarily match.
  • the address space match feature is employed within a virtual machine, but an additional entry is provided to disable the address space match feature for all address space numbers for the virtual machine monitor.
  • an additional entry is provided in the translation buffer to restrict the address space match feature to those address spaces associated with a single virtual machine or virtual machine monitor.
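One way to sketch the two virtual-machine extensions just described, building on the basic address space match test. Everything here is illustrative: the patent describes the mechanism (a disable entry for the virtual machine monitor, and restriction of the match feature to one virtual machine's address spaces), not this code, and all field names are hypothetical.

```python
def tb_hit_vm(tag, asn, asm_bit, vm_id, va_tag, cur_asn, cur_vm, asm_disable):
    """Hypothetical hit test combining both extensions: the ASM bit is
    honored only when the entry's virtual machine matches the current one
    (restriction to a single virtual machine), and the virtual machine
    monitor can suppress ASM entirely via asm_disable."""
    if tag != va_tag:
        return False
    if asm_bit and not asm_disable and vm_id == cur_vm:
        return True           # shared page within this virtual machine
    return asn == cur_asn     # otherwise require an exact ASN match
```

With this check, a page shared by all processes of one virtual machine never matches references made by another virtual machine or by the monitor, preserving isolation without flushing the buffer on every context switch.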
  • Figure 1 is an electrical diagram in block form of a computer system having a CPU which may employ features of the invention;
  • Figure 2 is an electrical diagram in block form of the instruction unit or I-box of the CPU of Figure 1;
  • Figure 3 is an electrical diagram in block form of the integer execution unit or E-box in the CPU of Figure 1;
  • Figure 4 is an electrical diagram in block form of the addressing unit or A-box in the CPU of Figure 1;
  • Figure 5 is an electrical diagram in block form of the floating point execution unit or F-box in the CPU of Figure 1;
  • Figure 6 is a timing diagram of the pipelining in the CPU of Figures 1-5;
  • Figure 7 is a diagram of the instruction formats used in the instruction set of the CPU of Figures 1-5;
  • Figure 8 is a diagram of the format of a virtual address used in the CPU of Figures 1-5;
  • Figure 9 is a diagram of the format of a page table entry used in the CPU of Figures 1-5;
  • Figure 10 is a diagram of the address translation mechanism used in the CPU of Figures 1-5;
  • Figure 11 is a diagram of the virtual-to-physical address mapping used in the system of Figures 1-5;
  • Figure 12 is a diagram of a process control block used in the system of Figures 1-5;
  • Figure 13 is a table of address space numbers for an example of operating the system of Figures 1-5 with virtual machines
  • Figure 14 is a diagram of the evaluation mechanism for translation buffer entries using the features of the invention.
  • Figure 15 is a diagram of a translation buffer entry for another embodiment of the invention.
  • a computer system which may use features of the invention, according to one embodiment, includes a CPU 10 connected by a system bus 11 to a main memory 12, with a disk memory 13 also accessed via the system bus 11.
  • the system may use a single CPU, but may also be of a multiprocessor configuration, in which case other CPUs 14 also access the main memory 12 via the system bus 11.
  • the CPU 10 is of a single-chip integrated circuit device, in an example embodiment, although features of the invention could be employed as well in a processor constructed of integrated circuit devices mounted on boards.
  • an integer execution unit 16 (referred to as the "E-box") is included, along with a floating point execution unit 17 (referred to as the "F-box").
  • Instruction fetch and decoding is performed in an instruction unit 18 or "I-box".
  • An address unit or "A-box" 19 performs the functions of address generation, memory management, write buffering and bus interface; the virtual address system with translation buffer according to the invention is implemented in the address unit 19.
  • the memory is hierarchical, with on-chip instruction and data caches being included in the instruction unit 18 and address unit 19 in one embodiment, while a larger, second-level cache 20 is provided off-chip, being controlled by a cache controller in the address unit 19.
  • the CPU 10 employs an instruction set in which all instructions are of a fixed size, in this case 32-bit or one longword.
  • Memory references are generally aligned quadwords, although integer data types of byte, word, longword and quadword are handled internally.
  • a byte is 8-bits
  • a word is 16-bits or two bytes
  • a longword is 32-bits or four bytes
  • a quadword is 64-bits or eight bytes.
  • the data paths and registers within the CPU 10 are generally 64-bit or quadword size, and the memory 12 and caches use the quadword as the basic unit of transfer. Performance is enhanced by allowing only quadword or longword loads and stores.
  • the instruction unit 18 or I-box is shown in more detail.
  • the primary function of the instruction unit 18 is to issue instructions to the E-box 16, A-box 19 and F-box 17.
  • the instruction unit 18 includes an instruction cache 21 which stores perhaps 8Kbytes of instruction stream data, and this instruction stream data is loaded to an instruction register 22 in each cycle for decoding. In one embodiment, two instructions are decoded in parallel. An instruction is decoded in a decoder 23, producing register addresses on lines 26 and control bits on microcontrol bus 28 to the appropriate elements in the CPU 10.
  • the instruction unit 18 contains address generation circuitry 29, including a branch prediction circuit 30 responsive to the instructions in the instruction stream to be loaded into register 22.
  • the prediction circuit 30 is used to predict branch addresses and to cause address generating circuitry 29 to prefetch the instruction stream before needed.
  • the virtual PC (program counter) 33 is included in the address generation circuitry 29 to produce addresses for instruction stream data in the selected order.
  • the instruction unit 18 contains a fully associative translation buffer (TB) 36 to cache recently used instruction-stream address translations and protection information for 8Kbyte pages. Although 64-bit addresses are nominally possible, as a practical matter 43-bit addresses are adequate. Every cycle the 43-bit virtual program counter 33 is presented to the instruction stream TB 36. If the page table entry (PTE) associated with the virtual address from the virtual PC is cached in the TB 36 then the page frame number (PFN) and protection bits for the page which contains the virtual PC are used by the instruction unit 18 to complete the address translation and access checks. A physical address is thus applied to the address input 37 of the instruction cache 21, or if there is a cache miss then this instruction stream physical address is applied by the bus 38 through the address unit 19 to the cache 20 or memory 12.
  • PTE page table entry
  • PFN page frame number
  • the execution unit or E-box 16 is shown in more detail in Figure 3.
  • the execution unit 16 contains the 64-bit integer execution datapath including an arithmetic/logic unit (ALU) 40, a barrel shifter 41, and an integer multiplier 42.
  • the execution unit 16 also contains the 32-register 64-bit wide register file 43, containing registers R0 to R31, although R31 is hardwired as all zeros.
  • the register file 43 has four read ports and two write ports which allow the sourcing (sinking) of operands (results) to both the integer execution datapath and the address unit 19.
  • a bus structure 44 connects two of the read ports of the register file 43 to the selected inputs of the ALU 40, the shifter 41 or the multiplier 42 as specified by the control bits of the decoded instruction on lines 28 from the instruction unit 18, and connects the output of the appropriate function to one of the write ports of the register file to store the result. That is, the address fields from the instruction are applied by the lines 26 to select the registers to be used in executing the instruction, and the control bits 28 define the operation in the ALU, etc., and define which internal busses of the bus structure 44 are to be used when, etc.
  • the A-box or address unit 19 is shown in more detail in Figure 4.
  • the A-box 19 includes five functions: address translation using a datapath translation buffer 48, a load silo 49 for incoming data, a write buffer 50 for outgoing write data, an interface 51 to a data cache, and the external interface 52 to the bus 11.
  • the address translation datapath has the displacement adder 53 which generates the effective address (by accessing the register file 43 via the second set of read and write ports, and the PC), and the data TB 48 which generates the physical address on address bus 54.
  • the datapath translation buffer 48 caches a number (e.g., thirty-two) of the recently-used data-stream page table entries (as described below) for pages of 8Kbyte size. Each entry supports any of four granularity hint block sizes, and a detector 55 is responsive to the granularity hint as described in application Serial No. 547,630 to change the number of low-order bits of the virtual address passed through from virtual address bus 56 to the physical address bus 54.
  • the effective 43-bit virtual address is presented to TB 48 via bus 56. If the PTE of the supplied virtual address is cached in the TB 48, the PFN and protection bits for the page which contains the address are used by the address unit 19 to complete the address translation and access checks.
  • the on-chip pipelined floating point unit 17 or F-box as shown in more detail in Figure 5 is capable of executing both DEC and IEEE floating point instructions.
  • the floating point unit 17 contains a 32-entry, 64-bit, floating point register file 61 which includes floating-point registers F0 to F31, and contains a floating point arithmetic and logic unit 62. Divides and multiplies are performed in a multiply/divide circuit 63.
  • a bus structure 64 interconnects two read ports of the register file 61 to the appropriate functional circuit as directed by the control bits of the decoded instruction on lines 28 from the instruction unit 18.
  • the registers selected for an operation are defined by the output bus 26 from the instruction decode.
  • the floating point unit 17 can accept an instruction every cycle, with the exception of floating point divide instructions, which can be accepted only every several cycles. A latency of more than one cycle is exhibited for all floating point instructions, during which the integer unit can continue to execute other instructions.
  • the CPU 10 has an 8Kbyte data cache 59, and 8Kbyte instruction cache 21, with the size of the caches depending on the available chip area.
  • the on-chip data cache 59 is a write-through, direct-mapped, read-allocate physical cache and has 32-byte (1-hexaword) blocks.
  • the system may keep the data cache 59 coherent with memory 12 by using an invalidate bus, not shown.
  • the instruction cache 21 may be 8Kbytes or 16Kbytes, for example, or may be larger or smaller, depending upon die area. Although described above as using physical addressing with a TB 36, it may also be a virtual cache, in which case it will contain no provision for maintaining its coherence with memory 12.
  • If the cache 21 is a physically addressed cache, the chip will contain circuitry for maintaining its coherence with memory: (1) when the write buffer 50 entries are sent to the bus interface 52, the address will be compared against a duplicate instruction cache 21 tag, and the corresponding block of instruction cache 21 will be conditionally invalidated; (2) the invalidate bus will be connected to the instruction cache 21.
  • the main data paths and registers in the CPU 10 are all 64-bits wide. That is, each of the integer registers 43, as well as each of the floating point registers 61, is a 64-bit register, and the ALU 40 has two 64-bit inputs 40a and 40b and a 64-bit output 40c.
  • the bus structure 44 in the execution unit 16, which actually consists of more than one bus, has 64-bit wide data paths for transferring operands between the integer registers 43 and the inputs and output of the ALU 40.
  • the instruction decoder 23 produces register address outputs 26 which are applied to the addressing circuits of the integer registers 43 and/or floating point registers 61 to select which register operands are used as inputs to the ALU 40 or 62, and which of the registers 43 or registers 61 is the destination for the ALU (or other functional unit) output.
  • a feature of the CPU 10 of Figures 1-6 of this example embodiment is its RISC characteristic.
  • the instructions executed by this CPU 10 are always of the same size, in this case 32-bits, instead of allowing variable-length instructions.
  • the instructions execute on average in one machine cycle (pipelined as described below, and assuming no stalls), rather than a variable number of cycles.
  • the instruction set includes only register-to-register arithmetic/logic type of operations, or register-to-memory (or memory-to-register) load/store type of operations, and there are no complex memory addressing modes such as indirect, etc.
  • An instruction performing an operation in the ALU 40 always gets its operands from the register file 43 (or from a field of the instruction itself) and always writes the result to the register file 43; these operands are never obtained from memory and the result is never written to memory in the same instruction that performs the ALU operation.
  • Loads from memory are always to a register in register files 43 or 61, and stores to memory are always from a register in the register files.
  • the CPU 10 has a seven stage pipeline for integer operate and memory reference instructions.
  • Figure 6 is a pipeline diagram for the pipeline of execution unit 16, instruction unit 18 and address unit 19.
  • the floating point unit 17 defines a pipeline in parallel with that of the execution unit 16, but ordinarily employs more stages to execute.
  • the seven stages are referred to as S0-S6, where a stage is to be executed in one machine cycle (clock cycle).
  • the first four stages S0, S1, S2 and S3 are executed in the instruction unit 18, and the last three stages S4, S5 and S6 are executed in one or the other of the execution unit 16 or address unit 19, depending upon whether the instruction is an operate or a load/store.
  • the first stage S0 of the pipeline is the instruction fetch or IF stage, during which the instruction unit 18 fetches new instructions from the instruction cache 21, using the PC 33 address as a base.
  • the second stage S1 is the evaluate stage, during which two fetched instructions are evaluated to see if dual issue is possible.
  • the third stage S2 is the decode stage, during which the instructions are decoded in the decoder 23 to produce the control signals 28 and register addresses on lines 26.
  • the fourth stage S3 is the register file 43 access stage for operate instructions, and the instruction issue stage.
  • the fifth stage S4 is cycle-one of the computation (in ALU 40, for example) if it is an operate instruction, and also the instruction unit 18 computes the new PC 33 in address generator 29; if it is a memory reference instruction the address unit 19 calculates the effective data stream address using the adder 53.
  • the sixth stage S5 is cycle-two of the computation (e.g., in ALU 40) if it is an operate instruction, and also the data TB 48 lookup stage for memory references.
  • the last stage S6 is the write stage for operate instructions having a register write, during which, for example, the output 40c of the ALU 40 is written to the register file 43 via the write port, and is the data cache 59 or instruction cache 21 hit/miss decision point for instruction stream or data stream references.
  • In Figure 7, the formats of the various types of instructions of the instruction set executed by the CPU 10 of Figures 1-6 are illustrated.
  • the CPU of Figure 1 executes an instruction set which includes nine types of instructions. These include (1) integer load and store instructions, (2) integer control instructions, (3) integer arithmetic, (4) logical and shift instructions, (5) byte manipulation, (6) floating point load and store, (7) floating point control, (8) floating point arithmetic, and (9) miscellaneous.
  • the instruction set is described in more detail in application Ser. No. 547,630.
  • One type is a memory (i.e., load and store) instruction 70, which contains a 6-bit opcode in bits <31:26>, two 5-bit register address fields Ra and Rb in bits <25:21> and <20:16>, and a 16-bit signed displacement in bits <15:0>.
  • This instruction is used to transfer data between registers 43 and memory (memory 12 or caches 59 or 20), to load an effective address to a register of the register file 43, and for subroutine jumps.
  • the displacement field ⁇ 15:0> is a byte offset; it is sign-extended and added to the contents of register Rb to form a virtual address.
  • the virtual address is used as a memory load/store address or a result value depending upon the specific instruction.
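The effective-address arithmetic for the memory format can be sketched as follows. This is an illustrative sketch of the rule stated above (sign-extend the 16-bit byte displacement and add the contents of Rb); the function names are assumptions.

```python
def sign_extend(value, bits):
    """Interpret the low `bits` of value as a two's-complement number."""
    mask = 1 << (bits - 1)
    return (value & ((1 << bits) - 1)) - ((value & mask) << 1)

def effective_address(rb, disp16):
    """Memory-format EA: sign-extend the 16-bit byte displacement and add
    it to Rb, modulo 2^64 since the registers are 64 bits wide."""
    return (rb + sign_extend(disp16, 16)) & ((1 << 64) - 1)
```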
  • the branch instruction format 71 is also shown in Figure 7, and includes a 6-bit opcode in bits <31:26>, a 5-bit address field in bits <25:21>, and a 21-bit signed branch displacement in bits <20:0>.
  • the displacement is treated as a longword offset, meaning that it is shifted left two bits (to address a longword boundary), sign-extended to 64-bits and added to the updated contents of PC 33 to form the target virtual address (overflow is ignored).
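The branch-target computation just described (shift the 21-bit displacement left two bits, sign-extend, add to the updated PC, ignore overflow) can be sketched like this; the helper names are assumptions:

```python
def sign_extend(value, bits):
    """Interpret the low `bits` of value as a two's-complement number."""
    mask = 1 << (bits - 1)
    return (value & ((1 << bits) - 1)) - ((value & mask) << 1)

def branch_target(updated_pc, disp21):
    """Branch-format target: treat the 21-bit field as a signed longword
    offset (shift left 2), sign-extend, and add to the updated PC;
    overflow is ignored by masking to 64 bits."""
    disp = sign_extend(disp21, 21) << 2
    return (updated_pc + disp) & ((1 << 64) - 1)
```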
  • the operate instructions 72 and 73 are of the formats shown in Figure 7, one format 72 for three register operands and one format 73 for two register operands and a literal.
  • the operate format is used for instructions that perform integer register operations, allowing two source operands and one destination operand in register file 43.
  • One of the source operands can be a literal constant.
  • Bit-12 defines whether the operate instruction is for a two source register operation or one source register and a literal.
  • the operate format has a 7-bit function field at bits <11:5> to allow a wider range of choices for arithmetic and logical operation.
  • the source register Ra is specified in either case at bits <25:21>, and the destination register Rc at <4:0>. If bit-12 is a zero, the source register Rb is defined at bits <20:16>, while if bit-12 is a one then an 8-bit zero-extended literal constant is formed by bits <20:13> of the instruction. This literal is interpreted as a positive integer in the range 0-255, and is zero-extended to 64-bits.
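The operate-format field layout described above can be made concrete with a small extraction sketch. This is an illustration of the bit positions given in the text, not a full decoder, and the function name is an assumption.

```python
def decode_operate(instr):
    """Extract the operate-format fields of a 32-bit instruction:
    opcode <31:26>, Ra <25:21>, function <11:5>, Rc <4:0>, and either
    Rb <20:16> or an 8-bit literal <20:13> selected by bit-12."""
    opcode = (instr >> 26) & 0x3F
    ra     = (instr >> 21) & 0x1F
    func   = (instr >> 5)  & 0x7F
    rc     = instr & 0x1F
    if (instr >> 12) & 1:                 # bit-12 set: literal form
        operand_b = (instr >> 13) & 0xFF  # literal 0-255, zero-extended
        is_literal = True
    else:
        operand_b = (instr >> 16) & 0x1F  # Rb register number
        is_literal = False
    return opcode, ra, operand_b, is_literal, func, rc
```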
  • Figure 7 also illustrates the floating point operate instruction format 74, used for instructions that perform floating point register 61 to floating point register 61 operations.
  • the floating point operate instructions contain a 6-bit opcode at bits <31:26> as before, along with an 11-bit function field at bits <15:5>.
  • Floating point conversions use a subset of the floating point operate format 74 of Figure 7 and perform register-to-register conversion operations; the Fb operand specifies the source and the Fa operand should be reg-31 (all zeros).
  • the other instruction format 75 of Figure 7 is that for privileged architecture library (PAL or PALcode) instructions, which are used to specify extended processor functions.
  • PAL or PALcode privileged architecture library
  • PAL or PALcode privileged architecture library
  • a 6-bit opcode is present at bits <31:26> as before, and a 26-bit PALcode function field <25:0> specifies the operation.
  • the source and destination operands for PALcode instructions are supplied in fixed registers that are specified in the individual instruction definitions.
  • a PALcode instruction usually uses a number of instructions of formats 70-74 stored in memory to make up a more complex instruction which is executed in a privileged mode, as part of the operating system, for example.
  • the six-bit opcode field <31:26> in the instruction formats of Figure 7 allows only 2^6, or sixty-four, different instructions to be coded. Thus the instruction set would appear to be limited to sixty-four.
  • the "function" fields in the instruction formats 72, 73 and 74 allow variations of instructions having the same opcode in bits <31:26>.
  • the "hint" bits in the jump instruction allow variations such as JSR or RET.
  • the format 76 of the virtual address asserted on the internal address bus 56 is shown.
  • This address is nominally 64-bits in width, but of course practical implementations at present use much smaller addresses.
  • an address of 43-bits provides an addressing range of 8-Terabytes.
  • the format includes a byte offset 77 of, for example, 13-bits to 16-bits in size, depending upon the page size employed. If pages are 8-Kbytes, the byte-within-page field 77 is 13-bits, while for 16-Kbyte pages the field 77 is 14-bits.
  • the format 76 as shown includes three segment fields 78, 79 and 80, labelled Seg1, Seg2 and Seg3, also of variable size depending upon the implementation.
  • the segments Seg1, Seg2, and Seg3 can be 10-to-13 bits, for example. If each segment size is 10-bits, then a segment defined by Seg3 is 1K pages in length, a segment for Seg2 is 1M pages, and a segment for Seg1 is 1G pages.
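The field split described above can be sketched for the 10-bit-segment, 8-Kbyte-page case (a 43-bit virtual address). This is an illustrative decomposition; the function name is an assumption.

```python
# Assuming 8KB pages (13-bit byte offset) and 10-bit segment fields,
# which together give the 43-bit virtual address of the example.
def split_va(va):
    """Return (Seg1, Seg2, Seg3, byte offset) for a 43-bit virtual address."""
    offset = va & 0x1FFF           # bits <12:0>, byte within the page
    seg3   = (va >> 13) & 0x3FF    # 10-bit third-level segment
    seg2   = (va >> 23) & 0x3FF    # 10-bit second-level segment
    seg1   = (va >> 33) & 0x3FF    # 10-bit first-level segment
    return seg1, seg2, seg3, offset
```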
  • the page frame number (PFN) field in the PTE is always 32-bits wide; thus, as the page size grows the virtual and physical address size also grows.
  • the physical addresses are at most 48-bits, but a processor may implement a smaller physical address space by not implementing some number of high-order bits.
  • the two most significant implemented physical address bits select a caching policy or implementation-dependent type of address space. Different implementations may put different uses and restrictions on these bits as appropriate for the system. For example, in a workstation with a 30-bit <29:0> physical address space, bit <29> may select between memory and I/O, and bit <28> may enable or disable caching in I/O space and must be zero in memory space.
  • Typically, several processes may reside in physical memory 12 (or caches) at the same time, so memory protection and multiple address spaces (using address space numbers) are used by the CPU 10 to ensure that one process will not interfere with either other processes or the operating system.
  • four hierarchical access modes provide memory access control. They are, from most to least privileged: kernel, executive, supervisor, and user, referring to operating system modes and application programs. Protection is specified at the individual page level, where a page may be inaccessible, read-only, or read/write for each of the four access modes. Accessible pages can be restricted to have only data or instruction access.
  • the PTE 81 is a quadword (64 bits) in width, and includes a 32-bit page frame number or PFN 82 at bits <63:32>, as well as certain software and hardware control information in a field 83 having bits <15:0> as set forth in Table A to implement the protection features and the like.
  • the translation buffers 36 and 48 store a number of the page table entries 81, each associated with a tag consisting of certain high-order bits of the virtual address to which this PFN is assigned by the operating system.
  • the tag may consist of the fields 78, 79 and 80 for the Seg1, Seg2 and Seg3 values of the virtual address 76 of Figure 8.
  • each entry contains a valid bit, indicating whether or not the entry has been invalidated, as when the TB is flushed. It is conventional to flush the TB when a context switch is made, invalidating all the entries; the features of the invention, however, allow continued use of entries still useful, so performance is improved.
  • the translation buffers 36 and 48 include in addition an address space number field, perhaps sixteen bits in width, loaded from the process control block as will be described.
  • the virtual address 76 on the bus 56 (seen in Figure 8) is used to search for a tag match for a PTE in the translation buffer, and, if not found, then the Seg1 field 78 is used to index into a first page table 85 found at a base address stored in an internal register 86 referred to as the page table base register.
  • the entry 87 found at the Seg1 index in table 85 is the base address for a second page table 88, for which the Seg2 field 79 is used to index to an entry 89.
  • the entry 89 points to the base of a third page table 90, and Seg3 field 80 is used to index to a PTE 91.
  • the physical page frame number from PTE 91 is combined with the byte offset 77 from the virtual address, in adder 92, to produce the physical address on bus 54.
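The three-level walk just described can be sketched in a few lines; here page tables are modeled as plain dictionaries keyed by an arbitrary base "address", and the field widths (8-Kbyte pages, 10-bit segments) and all names are illustrative assumptions, not the hardware implementation:

```python
# Hedged sketch of the three-level translation of Figure 10: Seg1
# indexes the first table 85 at the page table base register, Seg2
# indexes the second table 88, and Seg3 indexes the third table 90 to
# yield a PTE whose page frame number is combined with the byte offset.
PAGE_BITS, SEG_BITS = 13, 10
SEG_MASK = (1 << SEG_BITS) - 1

def translate(va, root, tables):
    seg1 = (va >> (PAGE_BITS + 2 * SEG_BITS)) & SEG_MASK
    seg2 = (va >> (PAGE_BITS + SEG_BITS)) & SEG_MASK
    seg3 = (va >> PAGE_BITS) & SEG_MASK
    offset = va & ((1 << PAGE_BITS) - 1)
    table85 = tables[root]           # first table, at base register 86
    table88 = tables[table85[seg1]]  # entry 87 -> base of second table 88
    table90 = tables[table88[seg2]]  # entry 89 -> base of third table 90
    pfn = table90[seg3]              # PTE 91: page frame number
    return (pfn << PAGE_BITS) | offset   # adder 92: PFN with byte offset
```

In hardware the translation buffer short-circuits this walk for recently used pages; the sketch shows only the miss path through the in-memory tables.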
  • the size of the page mapped by a PTE, along with the size of the byte offset 77, can vary depending upon the granularity hint.
  • an address space number 94 stored in the translation buffer is compared to the address space number in the current state of the CPU 10, stored in an ASN register 95 which is one of the internal processor registers.
  • if the address space match field (bit <4>, Table A) is clear (zero), the current ASN and the field 94 must match for the PTE to be used; if it is set (logic one), a match is not required, i.e., the address space numbers are ignored.
  • a particular feature of the invention is an additional match-disable bit 96 stored for each PTE, disabling the address space match feature under certain conditions (i.e., when the virtual machine monitor process is being executed).
  • the match-disable bit is not required to be maintained on each entry in the translation buffer. Whether or not address-space matches should be disabled is properly a function of the execution environment of the CPU rather than of the virtual address. When the virtual machine monitor is being executed (as discussed below), address-space matches are disabled; when a virtual machine or some process in a VM is being executed, address-space matches are enabled.
  • the match-disable bit could be stored globally in the translation buffer.
  • the match-disable bit could be maintained in the CPU itself. In either case, its value would be changed on transition from the VMM to a VM or from a VM to the VMM, and its current value must be made available to the match comparison logic in the translation buffer.
  • the CPU 10 generates memory references by first forming a virtual address 76 on bus 56, representing the address within the entire virtual range 97 as seen in Figure 11, defined by the 43-bit address width referred to, or that portion of the address width used by the operating system. Then using the page tables 85, 88, 90 in memory, or the translation buffer 36 or 48, the virtual address is translated to a physical address represented by an address map 98; the physical memory is constrained by the size of the main memory 12. The translation is done for each page (e.g., an 8Kbyte block), so a virtual page address for a page 99 in the virtual memory map 97 is translated to a physical address 99' for a page (referred to as a page frame) in the physical memory map 98.
  • the page tables are maintained in memory 12 or cache 20 to provide the translation between virtual address and physical address, and the translation buffer is included in the CPU to hold the most recently used translations so a reference to the page tables in memory 12 need not be made in most cases to obtain the translation before a data reference can be made; the time needed to make the reference to a page table in memory 12 would far exceed the time needed to obtain the translation from the translation buffer. Only the pages used by tasks currently executing (and the operating system itself) are likely to be in the physical memory 12 at a given time; a translation to an address 99' is in the page table 85, 88, 90 for only those pages actually present in physical memory 12.
  • a page fault is taken to initiate a swap operation in which a page from the physical memory 12 is swapped with the desired page maintained in the disk memory 13, this swap being under control of the operating system.
  • Some pages in physical memory 12 used by the operating system kernel, for example, or the page tables 85, 88, 90 themselves, are in fixed positions and may not be swapped to disk 13 or moved to other page translations; most pages used by executing tasks, however, may be moved freely within the physical memory 12 by merely keeping the page tables updated.
  • a process is a basic entity that is scheduled for execution by the CPU 10.
  • a process represents a single thread of execution and consists of an address space and both hardware and software context.
  • the hardware context is defined by the integer registers 43 and floating point registers 61, the processor status contained in internal processor registers, the program counter 33, four stack pointers, the page table base register 86, the address space number 95, and other values depending upon the CPU design.
  • the software context of a process is defined by operating system software and is system dependent.
  • a process may share the same address space with other processes or have an address space of its own; there is, however, no separate address space for system software, and therefore the operating system must be mapped into the address space of each process.
  • Context switching occurs as one process after another is scheduled by the operating system for execution.
  • the hardware context of a process is defined by a privileged part and nonprivileged part.
  • the privileged part is stored in memory in a 128-byte block 100 as shown in Figure 12 when a process is not executing, and context is switched by a privileged instruction. There is one block 100 for each process.
  • the nonprivileged part is context switched by the operating system software.
  • the context block 100 contains four stack pointers in fields 101, these being stack pointers for the kernel, the executive, the supervisor and the user.
  • the page table base register 86 is in field 102.
  • the address space number for this process (to be loaded to register 95) is in field 103.
  • Other fields 104 are for values not material here.
  • the location of this block 100 in memory is specified for the current process by a context block base register 105.
  • a swap context instruction saves the privileged context of the current process into the context block specified by this register 105, loads a new value into the register 105, and then loads the privileged context of the new process from the new block 100 into the appropriate hardware registers 43, etc.
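The swap-context sequence above can be modeled as a toy sketch; the dictionary-based CPU and memory, and all field names, are illustrative assumptions rather than the instruction's actual definition:

```python
# Toy model of the swap-context instruction: the privileged context of
# the current process is saved into the block 100 addressed by the
# context block base register 105, the register is loaded with the new
# block's address, and the new process's privileged context is loaded.
def swap_context(cpu, memory, new_block):
    memory[cpu["cbb"]] = dict(cpu["priv"])   # save into current block 100
    cpu["cbb"] = new_block                   # load register 105
    cpu["priv"] = dict(memory[new_block])    # load new privileged context
```

The saved fields would include the stack pointers (fields 101), the page table base register (field 102), and the address space number (field 103).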
  • the architecture as described above allows a processor to implement address space numbers (process tags) to reduce the need for invalidation of cached address translations in the translation buffer for process- specific addresses when a context switch occurs.
  • the address space number for the current process is loaded by a privileged instruction from field 103 into an internal processor register 95.
  • Address space match. This feature allows an operating system to designate locations in the system's virtual address space 97 which are shared among all processes. Such a virtual address refers to the same physical address in each process's address space.
  • the CPU 10 of Figures 1-5 may employ a "virtual machine system" which uses a combination of hardware, firmware, and software mechanisms to create the illusion of multiple, independent simulated or virtual computers each running on the same physical computer.
  • Virtual machine systems create a set of virtual machines (VMs) in an analogous fashion to how time sharing systems create a set of independent user processes.
  • Virtual machines are created by a layer executing on the CPU called the virtual machine monitor (VMM).
  • the VMM is in control of the hardware, including memory management (address translation and protection mechanisms) and the page tables.
  • Each virtual machine runs in a separate virtual address space in the range 97, and has a distinct set of address space numbers assigned by the VMM.
  • the VMM also runs in its own, independent set of virtual address spaces in the range 97.
  • the purpose of the invention is to maximize system performance, while allowing the virtual machines to run in kernel mode so they can execute privileged instructions (otherwise they would have to use traps for these operations, at considerable performance penalty); to do this the VMM must constrain the VMs.
  • the problem addressed in implementing this invention is to keep the address spaces of the several VMs and the VMM itself separate from each other, while at the same time maximizing system performance by providing the match function in the TB to the VMs.
  • a further constraint on the solution is that it is expected that the VMs and the VMM will use the same virtual addresses for different purposes. Thus, it is not a solution to allocate separate address regions to the VMM from those allocated to the VMs.
  • the two virtual machines must be completely independent. That means that they cannot reference each other's memory.
  • the VMM's memory must also be isolated from the virtual machines.
  • the address-space-match feature should work correctly, allowing the individual processes to share memory, under control of each VM's operating system.
  • a straightforward translation buffer design assigns each address space an address space number (ASN).
  • Part of the CPU's state information is the ASN assigned to the currently running entity or process, register 95.
  • TB entries are tagged in field 94 with the ASN of the address space to which they belong.
  • a bit or field in the TB (Table A, ASM bit <4>) indicates that the entry matches any address space, in effect overriding the ASN tag 94.
  • This kind of TB can be used in a virtual machine system at a considerable performance penalty.
  • In order to enforce the memory isolation requirements, the TB must be completely flushed whenever (1) there are any entries in the TB that have the match field (bit <4>) indicating match all, and (2) any of the following events occurs: (a) the currently executing entity on the real machine changes from one VM to another VM, (b) the currently executing entity on the real machine changes from some VM to the VMM, or (c) the currently executing entity on the real machine changes from the VMM to any VM.
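The flush rule for this straightforward TB design can be sketched as a small predicate; the entity names and the boolean flag are illustrative assumptions:

```python
# Sketch of the flush condition for the straightforward design: a full
# TB flush is needed on any transition between entities (VM-to-VM,
# VM-to-VMM, or VMM-to-VM) whenever any entry has its match-all field
# set, since such an entry would otherwise match in the new context.
def needs_flush(has_match_all_entries, old_entity, new_entity):
    # old_entity/new_entity are "VMM" or a VM identifier such as "VM1"
    if old_entity == new_entity:
        return False                 # no transition, nothing to isolate
    return has_match_all_entries     # any match-all entry forces a flush
```

This is the performance penalty the invention removes: with the disable-match state of Figure 10, these transitions no longer require a flush.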
  • the restriction imposed on the VMM is to require it to use the disable match feature. This reserves the address space match feature for use by the virtual machines, and guarantees that no VMM address will be mapped by the TB with the match field set.
  • the TB itself is modified by adding another piece of CPU state called "disable match" which is the field 96 of Figure 10.
  • the VMM determines the value of the disable match field 96 on a per-address-space basis and forwards the current value to the TB.
  • the field 96 is a flag that can be either set or clear. If the disable match field 96 is clear, the TB will match a reference if the address is correct and (1) the ASN in the TB entry matches the CPU's current ASN, or (2) the match field in the TB entry is set.
  • ADDR.TAG is the tag fields 78, 79, 80 of the virtual address 76
  • TB.TAG is the tag 93 in the translation buffer
  • CPU.ASN is the value stored in the processor register 95 from the field 103 of the control block
  • TB.ASN is the address space number stored in the field 94 of the translation buffer
  • TB.ASM is the match bit
  • TB.DIS is the disable bit in field 96 of the translation buffer.
  • the TB will match a reference only if the address is correct and the ASN 94 in the TB entry matches the CPU's current ASN 95. In effect, the disable match state bit 96 overrides the match field in the TB.
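Using the names defined above (ADDR.TAG, TB.TAG, CPU.ASN, TB.ASN, TB.ASM, TB.DIS), the matching behavior can be sketched as a predicate; this is a sketch of the stated rules, not the hardware comparison logic itself:

```python
# Sketch of the TB match rule with the disable-match state bit 96:
# if disable match is clear, a reference hits when the tag matches and
# either the ASNs agree or the entry's ASM bit is set; if disable match
# is set, the ASM bit is overridden and the ASNs must agree.
def tb_hit(addr_tag, tb_tag, cpu_asn, tb_asn, tb_asm, dis_match):
    if addr_tag != tb_tag:
        return False                      # virtual address tag must match
    if dis_match:
        return cpu_asn == tb_asn          # disable match: ASNs must agree
    return tb_asm or cpu_asn == tb_asn    # ASM set overrides ASN compare
```

Because the VMM runs with disable match set, no match-all entry loaded by a VM can ever satisfy a VMM reference.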
  • the VMM causes the disable match bit 96 to be set for all ASNs that are dedicated to the VMM itself (ASN-0 and
  • the VMM causes the disable match bit 96 to be clear for all other ASNs.
  • One key feature of the disable match bit 96 is that its value is constant as long as the CPU executes in the same context, either the VMM or the same VM/process pair. So, the bit's value need not be calculated during a memory reference. Instead, depending on specific TB design, the value can be calculated and loaded when a TB entry is being filled or when the CPU changes context. The result of the modified design according to the invention is that the TB need be flushed only when the currently executing entity on the machine changes from one
  • a multi-bit field for a virtual machine number VMN is added to each translation buffer entry as seen in Figure 15, instead of the single-bit disable indicator as discussed above.
  • a multi-bit virtual machine number VMN is added to the state of the CPU in an internal processor register, like the address space number field 95.
  • the translation buffer then contains logic to match a virtual address on the tags and match the ASNs, as before, and also logic to match on the virtual machine numbers. The logic implemented may be expressed
  • MATCH = (ADDR.TAG EQ TB.TAG) AND (CPU.VMN EQ TB.VMN) AND ((CPU.ASN EQ TB.ASN) OR TB.ASM)
  • CPU.VMN is the virtual machine number stored as machine state
  • TB.VMN is the VMN field in the translation buffer entry.
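The VMN-tagged variant can likewise be sketched as a predicate over the fields just named; again a sketch of the described behavior, not the hardware:

```python
# Sketch of the match rule for the VMN embodiment of Figure 15: the
# virtual machine numbers must always agree, so an ASM (match-all)
# entry is shared only among the address spaces of a single VM (or of
# the VMM), never across virtual machines.
def tb_hit_vmn(addr_tag, tb_tag, cpu_asn, tb_asn, tb_asm,
               cpu_vmn, tb_vmn):
    return (addr_tag == tb_tag
            and cpu_vmn == tb_vmn            # VMNs must always match
            and (tb_asm or cpu_asn == tb_asn))
```

With the VMN compared unconditionally, no disable-match state is needed and no flush is required on any VM or VMM transition.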
  • each VM is assigned one (or more) VMN, and the VMM is assigned one (or more) VMN; each must maintain operation using a VMN distinct to itself.
  • the translation buffer does not need to be cleared upon any switch between VMs.
  • An additional advantage is the virtual machine monitor can use the address space match to share translation buffer entries among its processes. The disadvantage of this embodiment is that more bits are needed in the translation buffer entries.
  • Table A: Page Table Entry. Fields in the page table entry are interpreted as follows:
  • ASM: Address Space Match.
  • GH: Granularity Hint. Software may set these bits to a nonzero value to supply a hint to the translation buffer that a block of pages can be treated as a single larger page.
  • UWE: User Write Enable.
  • PFN: Page Frame Number.

Abstract

A CPU executing a virtual memory management system employs a translation buffer for caching recently used page table entries. When more than one process is executing on the CPU, the translation buffer is usually flushed when a context switch is made, even though some of the entries would still be valid for commonly-referenced memory areas. An address space number feature is employed to allow entries to remain in the translation buffer for processes not currently executing, and the separate processes or the operating system can reuse entries in the translation buffer for such pages of memory that are commonly referenced. To allow this, an 'address space match' entry in the page table entry signals that the translation buffer content can be used when the address tag matches, even though the address space numbers do not necessarily match. When executing virtual machines on this CPU, with a virtual machine monitor, the address space match feature is employed among processes of a virtual machine, but an additional entry is provided to disable the address space match feature for all address space numbers for the virtual machine monitor.

Description

TRANSLATION BUFFER FOR VIRTUAL MACHINES WITH ADDRESS SPACE MATCH
BACKGROUND OF THE INVENTION
This invention relates to digital computers, and more particularly to a virtual memory management system for a CPU executing multiple processes.
Known in the art is a reduced instruction set processor chip which implements a virtual memory management system. A virtual address is translated to a physical address in such a system before a memory reference is executed, where the physical address is that used to access the main memory. The physical addresses are maintained in a page table, indexed by the virtual address, so whenever a virtual address is presented the memory management system finds the physical address by referencing the page table. At a given time, a process executing on such a machine will probably be using only a few pages, and so these most-likely used page table entries are kept in a translation buffer within the CPU chip itself, eliminating the need to make a memory reference to fetch the page table entry.
The page table entries, including those in the translation buffer, contain other information used by the memory management system, such as privilege information, access rights, etc., to provide secure and robust operation. Before a memory reference is allowed, the current state of the processor and the characteristics of the memory reference are checked against information in the page table entry to make sure the memory reference is proper. A number of processes (tasks) may be executing in a time-shared manner on a CPU at a given time, and these processes will each have their own areas of virtual memory. The operating system itself will contain a number of pages which must be referenced by each one of these processes. Pages thus shared by processes are best kept in main memory rather than swapped out, and the page table entries for such pages will preferably remain in the translation buffer since continuing reference will be made to them. However, the translation buffer is usually flushed by invalidating all entries when a context switch is made.
A mechanism using so-called "address space numbers" is implemented in a processor to reduce the need for invalidation of cached address translations in the translation buffer for process-specific addresses when a context switch occurs. The address space number (process tag) for the current process is loaded to a register in the processor to become part of the current state; this loading is done by a privileged instruction from a process-specific block in memory. Thus, each process has associated with it an address space number, which is an arbitrarily-assigned number generated by the operating system. This address space number is maintained as part of the machine state, and also stored in the translation buffer for each page entry belonging to that process. When a memory reference is made, as part of the tag match in the translation buffer, the current address space number is compared with the entry in the translation buffer, to see if there is a match. To accommodate sharing of entries, an address space match function can be added to the comparison; a "match" bit in the entry can turn on or off the requirement for matching address space numbers. If turned on, the entry will "match" if the address tags match, regardless of the address space numbers. The operating system can thus load certain page table entries with this match bit on so these pages are shared by all processes.
In operating a CPU using virtual machines, each virtual machine functions as if it were an independent processor, and each virtual machine has a virtual machine operating system and a number of processes running, just as when only one CPU is functioning. Virtual machines are described by Siewiorek et al in "Computer Structures: Principles and Examples" published by McGraw-Hill, 1982, pp. 227-228. To operate these virtual machines, a virtual machine monitor (another execution level) is implemented. As before, there are pages used by the operating system that are used by all of the processes on a virtual machine. Again, performance is improved if entries for the pages remain in the translation buffer when making a context switch between processes or between virtual machines.
The several virtual machines and the virtual machine monitor running on a CPU must have their memory spaces kept separate and isolated from one another, but yet maximize system performance. To this end, the virtual machines and the virtual machine monitor must be able to use the same virtual addresses for different purposes. However, when context switching from one virtual machine to another, or to or from the virtual machine monitor, needlessly flushing entries in the translation buffer which will be used in the new context imposes a performance penalty. Therefore it is important to offer both address space numbers and the match feature when implementing virtual machines.
SUMMARY OF THE INVENTION
The present invention in its broad form resides, in a CPU executing a virtual memory management system, in a method of operating the CPU, comprising: providing a translation buffer for translating virtual addresses to physical addresses, comprising the steps of: storing in said translation buffer a plurality of page table entries, each page table entry containing a page frame number indexed by a virtual address tag; also storing in said translation buffer for each said page table entry an address space number, and an address space match entry; where said address space number is a value related to a process executed on said processor, said match entry is a value indicating that the address space number is to be required to be matched or not so required; and storing a third match value indicating whether or not said match entry is to be disabled; comparing said virtual address tag with a field of a virtual address generated by said processor, and also comparing said address space number with a current number maintained as part of the state of said processor, if comparing said virtual address tag with a field of said virtual address produces a match, and if said comparison of address space numbers produces a match, and said match entry is of one value, then using said page frame number for a memory reference; or if said match entry is another value, then using said page frame number for a memory reference regardless of whether said address space number matches said current number, if said third match value is in one condition.
The invention also consists in a processor system having a CPU and a memory, comprising: a) means in the CPU for fetching instructions from said memory, decoding said instructions, and executing said instructions, said executing including accessing said memory for read and write data; b) means in said CPU for generating virtual addresses used for said fetching of instructions and said accessing said memory for data; c) a page table stored in said memory and containing a plurality of page table entries, each page table entry including a page frame number referencing a page of said memory; d) means for translating said virtual addresses to physical addresses for said memory, including a translation buffer storing selected ones of said page table entries; e) and means for addressing said memory using the page frame number from said translation buffer and using a part of said virtual address; f) said translation buffer storing for each said page table entry an address space number, and storing an address space match entry, where said address space number is a value related to a process executed on said CPU, and said match entry is a value indicating that the address space number is to be required to be matched or not so required; said translation buffer also storing for each said entry a match disable indicator; g) means in said translation buffer for comparing said virtual address tag with a field of said virtual address generated by said CPU, and also comparing said address space number with a current number maintained as part of the state of said CPU, and if both of said comparisons produce a match, and said match entry is of one value, then said addressing means using said page frame number for a memory reference; or if comparing said virtual address tag with a field of said virtual address produces a match, and if said match entry is another value, then said addressing means using said page frame number for a memory reference regardless of
whether said address space number matches said current number, unless said disable match indicator is set.
In accordance with one embodiment of the invention, a CPU executing a virtual memory management system employs an address space number feature to allow entries to remain in the translation buffer for processes not currently executing, and the separate processes or the operating system can reuse entries in the translation buffer for such pages of memory that are commonly referenced. To allow this, an "address space match" entry in the page table entry signals that the translation buffer content can be used when the address tag matches, even though the address space numbers do not necessarily match. When executing virtual machines on this CPU, with a virtual machine monitor, the address space match feature is employed within a virtual machine, but an additional entry is provided to disable the address space match feature for all address space numbers for the virtual machine monitor. In another embodiment, an additional entry is provided in the translation buffer to restrict the address space match feature to those address spaces associated with a single virtual machine or virtual machine monitor.
BRIEF DESCRIPTION OF THE DRAWINGS
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as other features and advantages thereof, will be best understood by reference to the detailed description of specific embodiments which follows, when read in conjunction with the accompanying drawings; wherein:
Figure 1 is an electrical diagram in block form of a computer system having a CPU which may employ features of the invention;
Figure 2 is an electrical diagram in block form of the instruction unit or I-box of the CPU of Figure 1;
Figure 3 is an electrical diagram in block form of the integer execution unit or E-box in the CPU of Figure 1;
Figure 4 is an electrical diagram in block form of the addressing unit or A-box in the CPU of Figure 1;
Figure 5 is an electrical diagram in block form of the floating point execution unit or F-box in the CPU of Figure 1;
Figure 6 is a timing diagram of the pipelining in the CPU of Figures 1-5; Figure 7 is a diagram of the instruction formats used in the instruction set of the CPU of Figures 1-5;
Figure 8 is a diagram of the format of a virtual address used in the CPU of Figures 1-5;
Figure 9 is a diagram of the format of a page table entry used in the CPU of Figures 1-5;
Figure 10 is a diagram of the address translation mechanism used in the CPU of Figures 1-5;
Figure 11 is a diagram of the virtual-to-physical address mapping used in the system of Figures 1-5;
Figure 12 is a diagram of a process control block used in the system of Figures 1-5;
Figure 13 is a table of address space numbers for an example of operating the system of Figures 1-5 with virtual machines;
Figure 14 is a diagram of the evaluation mechanism for translation buffer entries using the features of the invention; and
Figure 15 is a diagram of a translation buffer entry for another embodiment of the invention.
DETAILED DESCRIPTION OF SPECIFIC EMBODIMENT
Referring to Figure 1, a computer system which may use features of the invention, according to one embodiment, includes a CPU 10 connected by a system bus 11 to a main memory 12, with a disk memory 13 also accessed via the system bus 11. The system may use a single CPU, but may also be of a multiprocessor configuration, in which case other CPUs 14 also access the main memory 12 via the system bus 11.
The CPU 10 is a single-chip integrated circuit device, in an example embodiment, although features of the invention could be employed as well in a processor constructed of integrated circuit devices mounted on boards. Within the single chip an integer execution unit 16 (referred to as the "E-box") is included, along with a floating point execution unit 17 (referred to as the "F-box"). Instruction fetch and decoding is performed in an instruction unit 18 or "I-box". An address unit or "A-box" 19 performs the functions of address generation, memory management, write buffering and bus interface; the virtual address system with translation buffer according to the invention is implemented in the address unit 19. The memory is hierarchical, with on-chip instruction and data caches being included in the instruction unit 18 and address unit 19 in one embodiment, while a larger, second-level cache 20 is provided off-chip, being controlled by a cache controller in the address unit 19.
The CPU 10 employs an instruction set in which all instructions are of a fixed size, in this case 32-bit or one longword. Memory references are generally aligned quadwords, although integer data types of byte, word, longword and quadword are handled internally. As used herein, a byte is 8-bits, a word is 16-bits or two bytes, a longword is 32-bits or four bytes, and a quadword is 64-bits or eight bytes. The data paths and registers within the CPU 10 are generally 64-bit or quadword size, and the memory 12 and caches use the quadword as the basic unit of transfer. Performance is enhanced by allowing only quadword or longword loads and stores.
Referring to Figure 2, the instruction unit 18 or I-box is shown in more detail. The primary function of the instruction unit 18 is to issue instructions to the E-box 16, A-box 19 and F-box 17. The instruction unit 18 includes an instruction cache 21 which stores perhaps 8Kbytes of instruction stream data, and this instruction stream data is loaded to an instruction register 22 in each cycle for decoding. In one embodiment, two instructions are decoded in parallel. An instruction is decoded in a decoder 23, producing register addresses on lines 26 and control bits on microcontrol bus 28 to the appropriate elements in the CPU 10.
The instruction unit 18 contains address generation circuitry 29, including a branch prediction circuit 30 responsive to the instructions in the instruction stream to be loaded into register 22. The prediction circuit 30 is used to predict branch addresses and to cause address generating circuitry 29 to prefetch the instruction stream before needed. The virtual PC (program counter) 33 is included in the address generation circuitry 29 to produce addresses for instruction stream data in the selected order.
The instruction unit 18 contains a fully associative translation buffer (TB) 36 to cache recently used instruction-stream address translations and protection information for 8Kbyte pages. Although 64-bit addresses are nominally possible, as a practical matter 43-bit addresses are adequate. Every cycle the 43-bit virtual program counter 33 is presented to the instruction stream TB 36. If the page table entry (PTE) associated with the virtual address from the virtual PC is cached in the TB 36 then the page frame number (PFN) and protection bits for the page which contains the virtual PC are used by the instruction unit 18 to complete the address translation and access checks. A physical address is thus applied to the address input 37 of the instruction cache 21, or if there is a cache miss then this instruction stream physical address is applied by the bus 38 through the address unit 19 to the cache 20 or memory 12.
The execution unit or E-box 16 is shown in more detail in Figure 3. The execution unit 16 contains the 64-bit integer execution datapath including an arithmetic/logic unit (ALU) 40, a barrel shifter 41, and an integer multiplier 42. The execution unit 16 also contains the 32-register 64-bit wide register file 43, containing registers R0 to R31, although R31 is hardwired as all zeros. The register file 43 has four read ports and two write ports which allow the sourcing (sinking) of operands (results) to both the integer execution datapath and the address unit 19. A bus structure 44 connects two of the read ports of the register file 43 to the selected inputs of the ALU 40, the shifter 41 or the multiplier 42 as specified by the control bits of the decoded instruction on lines 28 from the instruction unit 18, and connects the output of the appropriate function to one of the write ports of the register file to store the result. That is, the address fields from the instruction are applied by the lines 26 to select the registers to be used in executing the instruction, and the control bits 28 define the operation in the ALU, etc., and define which internal busses of the bus structure 44 are to be used when, etc.
The A-box or address unit 19 is shown in more detail in Figure 4. The A-box 19 includes five functions: address translation using a datapath translation buffer 48, a load silo 49 for incoming data, a write buffer 50 for outgoing write data, an interface 51 to a data cache, and the external interface 52 to the bus 11. The address translation datapath has the displacement adder 53 which generates the effective address (by accessing the register file 43 via the second set of read and write ports, and the PC), and the data TB 48 which generates the physical address on address bus 54.
The datapath translation buffer 48 caches a number (e.g., thirty-two) of the recently-used data-stream page table entries (as described below) for pages of 8Kbyte size. Each entry supports any of four granularity hint block sizes, and a detector 55 is responsive to the granularity hint as described in application Serial No. 547,630 to change the number of low-order bits of the virtual address passed through from virtual address bus 56 to the physical address bus 54.
For load and store instructions, the effective 43-bit virtual address is presented to TB 48 via bus 56. If the PTE of the supplied virtual address is cached in the TB 48, the PFN and protection bits for the page which contains the address are used by the address unit 19 to complete the address translation and access checks.
The on-chip pipelined floating point unit 17 or F-box as shown in more detail in Figure 5 is capable of executing both DEC and IEEE floating point instructions. The floating point unit 17 contains a 32-entry, 64-bit, floating point register file 61 which includes floating-point registers F0 to F31, and contains a floating point arithmetic and logic unit 62. Divides and multiplies are performed in a multiply/divide circuit 63. A bus structure 64 interconnects two read ports of the register file 61 to the appropriate functional circuit as directed by the control bits of the decoded instruction on lines 28 from the instruction unit 18. The registers selected for an operation are defined by the output bus 26 from the instruction decode. The floating point unit 17 can accept an instruction every cycle, with the exception of floating point divide instructions, which can be accepted only once every several cycles. A latency of more than one cycle is exhibited for all floating point instructions, during which the integer unit can continue to execute other instructions.
In an example embodiment, the CPU 10 has an 8Kbyte data cache 59 and an 8Kbyte instruction cache 21, with the size of the caches depending on the available chip area. The on-chip data cache 59 is a write-through, direct-mapped, read-allocate physical cache and has 32-byte (1-hexaword) blocks. The system may keep the data cache 59 coherent with memory 12 by using an invalidate bus, not shown. The instruction cache 21 may be 8Kbytes, or 16Kbytes, for example, or may be larger or smaller, depending upon die area. Although described above as using physical addressing with a TB 36, it may also be a virtual cache, in which case it will contain no provision for maintaining its coherence with memory 12. If the cache 21 is a physically addressed cache the chip will contain circuitry for maintaining its coherence with memory: (1) when the write buffer 50 entries are sent to the bus interface 52, the address will be compared against a duplicate instruction cache 21 tag, and the corresponding block of instruction cache 21 will be conditionally invalidated; (2) the invalidate bus will be connected to the instruction cache 21.
The main data paths and registers in the CPU 10 are all 64-bits wide. That is, each of the integer registers 43, as well as each of the floating point registers 61, is a 64-bit register, and the ALU 40 has two 64-bit inputs 40a and 40b and a 64-bit output 40c. The bus structure 44 in the execution unit 16, which actually consists of more than one bus, has 64-bit wide data paths for transferring operands between the integer registers 43 and the inputs and output of the ALU 40. The instruction decoder 23 produces register address outputs 26 which are applied to the addressing circuits of the integer registers 43 and/or floating point registers 61 to select which register operands are used as inputs to the ALU 40 or 62, and which of the registers 43 or registers 61 is the destination for the ALU (or other functional unit) output.
A feature of the CPU 10 of Figures 1-6 of this example embodiment is its RISC characteristic. The instructions executed by this CPU 10 are always of the same size, in this case 32-bits, instead of allowing variable-length instructions. The instructions execute on average in one machine cycle (pipelined as described below, and assuming no stalls), rather than a variable number of cycles. The instruction set includes only register-to-register arithmetic/logic type of operations, or register-to-memory (or memory-to-register) load/store type of operations, and there are no complex memory addressing modes such as indirect, etc. An instruction performing an operation in the ALU 40 always gets its operands from the register file 43 (or from a field of the instruction itself) and always writes the result to the register file 43; these operands are never obtained from memory and the result is never written to memory in the same instruction that performs the ALU operation. Loads from memory are always to a register in register files 43 or 61, and stores to memory are always from a register in the register files.
Referring to Figure 6, the CPU 10 has a seven stage pipeline for integer operate and memory reference instructions. Figure 6 is a pipeline diagram for the pipeline of execution unit 16, instruction unit 18 and address unit 19. The floating point unit 17 defines a pipeline in parallel with that of the execution unit 16, but ordinarily employs more stages to execute. The seven stages are referred to as S0-S6, where a stage is to be executed in one machine cycle (clock cycle). The first four stages S0, S1, S2 and S3 are executed in the instruction unit 18, and the last three stages S4, S5 and S6 are executed in one or the other of the execution unit 16 or address unit 19, depending upon whether the instruction is an operate or a load/store.
The first stage S0 of the pipeline is the instruction fetch or IF stage, during which the instruction unit 18 fetches new instructions from the instruction cache 21, using the PC 33 address as a base. The second stage S1 is the evaluate stage, during which two fetched instructions are evaluated to see if dual issue is possible. The third stage S2 is the decode stage, during which the instructions are decoded in the decoder 23 to produce the control signals 28 and register addresses on lines 26. The fourth stage S3 is the register file 43 access stage for operate instructions, and the instruction issue stage. The fifth stage S4 is cycle-one of the computation (in ALU 40, for example) if it is an operate instruction, and also the instruction unit 18 computes the new PC 33 in address generator 29; if it is a memory reference instruction the address unit 19 calculates the effective data stream address using the adder 53. The sixth stage S5 is cycle-two of the computation (e.g., in ALU 40) if it is an operate instruction, and also the data TB 48 lookup stage for memory references. The last stage S6 is the write stage for operate instructions having a register write, during which, for example, the output 40c of the ALU 40 is written to the register file 43 via the write port, and is the data cache 59 or instruction cache 21 hit/miss decision point for instruction stream or data stream references.
Referring to Figure 7, the formats of the various types of instructions of the instruction set executed by the CPU 10 of Figures 1-7 are illustrated. Using the instruction formats of Figure 7, the CPU of Figure 1 executes an instruction set which includes nine types of instructions. These include (1) integer load and store instructions, (2) integer control instructions, (3) integer arithmetic, (4) logical and shift instructions, (5) byte manipulation, (6) floating point load and store, (7) floating point control, (8) floating point arithmetic, and (9) miscellaneous. The instruction set is described in more detail in application Ser. No. 547,630.
One type is a memory (i.e., load and store) instruction 70, which contains a 6-bit opcode in bits <31:26>, two 5-bit register address fields Ra and Rb in bits <25:21> and <20:16>, and a 16-bit signed displacement in bits <15:0>. This instruction is used to transfer data between registers 43 and memory (memory 12 or caches 59 or 20), to load an effective address to a register of the register file 43, and for subroutine jumps. The displacement field <15:0> is a byte offset; it is sign-extended and added to the contents of register Rb to form a virtual address. The virtual address is used as a memory load/store address or a result value depending upon the specific instruction.
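For illustration, the effective-address computation of the memory format may be sketched as follows; this is an illustrative software model of the behavior described above, not the hardware implementation, and the function names are hypothetical.

```python
# Illustrative sketch of effective-address computation for the memory
# (load/store) format 70: the 16-bit byte displacement in bits <15:0>
# is sign-extended and added to the contents of register Rb.
def sign_extend(value, bits):
    # Interpret an unsigned field of `bits` bits as a signed integer.
    sign = 1 << (bits - 1)
    return (value ^ sign) - sign

def memory_effective_address(instr, rb_value):
    disp = instr & 0xFFFF                     # bits <15:0>: byte displacement
    return (rb_value + sign_extend(disp, 16)) & ((1 << 64) - 1)
```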
The branch instruction format 71 is also shown in Figure 7, and includes a 6-bit opcode in bits <31:26>, a 5-bit address field in bits <25:21>, and a 21-bit signed branch displacement in bits <20:0>. The displacement is treated as a longword offset, meaning that it is shifted left two bits (to address a longword boundary), sign-extended to 64-bits and added to the updated contents of PC 33 to form the target virtual address (overflow is ignored).
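The branch-target formation just described may be sketched in the same illustrative fashion (assumed model, not the hardware):

```python
# Illustrative sketch of branch-target formation for format 71: the
# 21-bit displacement in bits <20:0> is a longword offset, so it is
# sign-extended, shifted left two bits, and added to the updated PC;
# overflow is ignored (the sum wraps at 64 bits).
def sign_extend(value, bits):
    sign = 1 << (bits - 1)
    return (value ^ sign) - sign

def branch_target(instr, updated_pc):
    disp = sign_extend(instr & 0x1FFFFF, 21) << 2
    return (updated_pc + disp) & ((1 << 64) - 1)
```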
The operate instructions 72 and 73 are of the formats shown in Figure 7, one format 72 for three register operands and one format 73 for two register operands and a literal. The operate format is used for instructions that perform integer register operations, allowing two source operands and one destination operand in register file 43. One of the source operands can be a literal constant. Bit-12 defines whether the operate instruction is for a two source register operation or one source register and a literal. In addition to the 6-bit opcode at bits <31:26>, the operate format has a 7-bit function field at bits <11:5> to allow a wider range of choices for arithmetic and logical operations. The source register Ra is specified in either case at bits <25:21>, and the destination register Rc at <4:0>. If bit-12 is a zero, the source register Rb is defined at bits <20:16>, while if bit-12 is a one then an 8-bit zero-extended literal constant is formed by bits <20:13> of the instruction. This literal is interpreted as a positive integer in the range 0-255, and is zero-extended to 64-bits.
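The field positions of the two operate formats may be summarized in an illustrative decode sketch (the tuple-based return convention is an assumption for illustration only):

```python
# Illustrative decode of the operate formats 72 and 73. Bit-12 selects
# between a second source register Rb (bits <20:16>) and an 8-bit
# zero-extended literal (bits <20:13>); field positions follow the text.
def decode_operate(instr):
    opcode = (instr >> 26) & 0x3F      # bits <31:26>
    ra     = (instr >> 21) & 0x1F      # bits <25:21>: source register Ra
    func   = (instr >> 5)  & 0x7F      # bits <11:5>: function field
    rc     = instr & 0x1F              # bits <4:0>: destination register Rc
    if (instr >> 12) & 1:              # bit-12 set: literal form 73
        operand_b = ("literal", (instr >> 13) & 0xFF)
    else:                              # bit-12 clear: register form 72
        operand_b = ("register", (instr >> 16) & 0x1F)
    return opcode, ra, operand_b, func, rc
```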
Figure 7 also illustrates the floating point operate instruction format 74, used for instructions that perform floating point register 61 to floating point register 61 operations. The floating point operate instructions contain a 6-bit opcode at bits <31:26> as before, along with an 11-bit function field at bits <15:5>. There are three operand fields, Fa, Fb and Fc, each specifying either an integer or a floating-point operand as defined by the instruction; only the registers 61 are specified by Fa, Fb and Fc, but these registers can contain either integer or floating-point values. Literals are not supported. Floating point conversions use a subset of the floating point operate format 74 of Figure 7 and perform register-to-register conversion operations; the Fb operand specifies the source and the Fa operand should be reg-31 (all zeros).
The other instruction format 75 of Figure 7 is that for privileged architecture library (PAL or PALcode) instructions, which are used to specify extended processor functions. In these instructions a 6-bit opcode is present at bits <31:26> as before, and a 26-bit PALcode function field <25:0> specifies the operation. The source and destination operands for PALcode instructions are supplied in fixed registers that are specified in the individual instruction definitions. A PALcode instruction usually uses a number of instructions of formats 70-74 stored in memory to make up a more complex instruction which is executed in a privileged mode, as part of the operating system, for example.
The six-bit opcode field <31:26> in the instruction formats of Figure 7 allows only 2⁶ or sixty-four different instructions to be coded. Thus the instruction set would appear to be limited to sixty-four. However, the "function" fields in the instruction formats 72, 73 and 74 allow variations of instructions having the same opcode in bits <31:26>. Also, the "hint" bits in the jump instruction allow variations such as JSR or RET.
Referring to Figure 8, the format 76 of the virtual address asserted on the internal address bus 56 is shown. This address is nominally 64-bits in width, but of course practical implementations at present use much smaller addresses. For example, an address of 43-bits provides an addressing range of 8-Terabytes. The format includes a byte offset 77 of, for example, 13-bits to 16-bits in size, depending upon the page size employed. If pages are 8-Kbytes, the byte-within-page field 77 is 13-bits, while for 16-Kbyte pages the field 77 is 14-bits. The format 76 as shown includes three segment fields 78, 79 and 80, labelled Seg1, Seg2 and Seg3, also of variable size depending upon the implementation. The segments Seg1, Seg2, and Seg3 can be 10-to-13 bits, for example. If each segment size is 10-bits, then a segment defined by Seg3 is 1K pages in length, a segment for Seg2 is 1M pages, and a segment for Seg1 is 1G pages. The page frame number (PFN) field in the PTE is always 32-bits wide; thus, as the page size grows the virtual and physical address size also grows.
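The decomposition of the virtual address format 76 may be sketched as follows, assuming the example sizing above (8-Kbyte pages with a 13-bit offset and 10-bit segment fields); both sizes are implementation choices per the text.

```python
# Illustrative decomposition of virtual address format 76 into the
# Seg1, Seg2, Seg3 fields 78-80 and the byte-within-page offset 77.
PAGE_BITS, SEG_BITS = 13, 10   # assumed: 8-Kbyte pages, 10-bit segments

def split_virtual_address(va):
    mask = (1 << SEG_BITS) - 1
    offset = va & ((1 << PAGE_BITS) - 1)                        # field 77
    seg3 = (va >> PAGE_BITS) & mask                             # field 80
    seg2 = (va >> (PAGE_BITS + SEG_BITS)) & mask                # field 79
    seg1 = (va >> (PAGE_BITS + 2 * SEG_BITS)) & mask            # field 78
    return seg1, seg2, seg3, offset
```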
The physical addresses are at most 48-bits, but a processor may implement a smaller physical address space by not implementing some number of high-order bits. The two most significant implemented physical address bits select a caching policy or implementation-dependent type of address space. Different implementations may put different uses and restrictions on these bits as appropriate for the system. For example, in a workstation with a 30-bit <29:0> physical address space, bit <29> may select between memory and I/O, and bit <28> may enable or disable caching in I/O space and must be zero in memory space.
Typically, in a multitasking system, several processes may reside in physical memory 12 (or caches) at the same time, so memory protection and multiple address spaces (using address space numbers) are used by the CPU 10 to ensure that one process will not interfere with either other processes or the operating system. To further improve software reliability, four hierarchical access modes (privilege modes) provide memory access control. They are, from most to least privileged: kernel, executive, supervisor, and user, referring to operating system modes and application programs. Protection is specified at the individual page level, where a page may be inaccessible, read-only, or read/write for each of the four access modes. Accessible pages can be restricted to have only data or instruction access.
A page table entry or PTE 81, as stored in the translation buffers 36 or 48 or in the page tables set up in the memory 12 by the operating system, is illustrated in Figure 9. The PTE 81 is a quadword (64-bits) in width, and includes a 32-bit page frame number or PFN 82 at bits <63:32>, as well as certain software and hardware control information in a field 83 having bits <15:0> as set forth in Table A to implement the protection features and the like.
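The PTE layout of Figure 9 may be sketched as a simple unpacking; only fields named in the text are modeled (the PFN 82, the control field 83, and the address-space-match bit at bit <4> per Table A).

```python
# Illustrative unpacking of the quadword PTE 81 of Figure 9: the 32-bit
# PFN 82 occupies bits <63:32>, the control field 83 occupies bits
# <15:0>, and the address-space-match (ASM) flag sits at bit <4>.
def pte_fields(pte):
    pfn = (pte >> 32) & 0xFFFFFFFF     # field 82
    control = pte & 0xFFFF             # field 83
    asm_flag = (pte >> 4) & 1          # ASM bit <4> of Table A
    return pfn, control, asm_flag
```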
The translation buffers 36 and 48 store a number of the page table entries 81, each associated with a tag consisting of certain high-order bits of the virtual address to which this PFN is assigned by the operating system. For example, the tag may consist of the fields 78, 79 and 80 for the Seg1, Seg2 and Seg3 values of the virtual address 76 of Figure 8. In addition, each entry contains a valid bit, indicating whether or not the entry has been invalidated, as when the TB is flushed. It is conventional to flush the TB when a context switch is made, invalidating all the entries; the features of the invention, however, allow continued use of entries still useful, so performance is improved. To this end, the translation buffers 36 and 48 include in addition an address space number field, perhaps sixteen bits in width, loaded from the process control block as will be described.
Referring to Figure 10, the virtual address 76 on the bus 56 (seen in Figure 8) is used to search for a tag match for a PTE in the translation buffer, and, if not found, then the Seg1 field 78 is used to index into a first page table 85 found at a base address stored in an internal register 86 referred to as the page table base register. The entry 87 found at the Seg1 index in table 85 is the base address for a second page table 88, for which the Seg2 field 79 is used to index to an entry 89. The entry 89 points to the base of a third page table 90, and Seg3 field 80 is used to index to a PTE 91. The physical page frame number from PTE 91 is combined with the byte offset 77 from the virtual address, in adder 92, to produce the physical address on bus 54. As mentioned above, the size of the page mapped by a PTE, along with the size of the byte offset 77, can vary depending upon the granularity hint. In addition to matching the tag field of the virtual address 76 with the tag field 93 of the translation buffer 36 or 48, an address space number 94 stored in the translation buffer is compared to the address space number in the current state of the CPU 10, stored in an ASN register 95 which is one of the internal processor registers. If the address space match field (bit <4>, Table A) is clear (zero), the current ASN and the field 94 must match for the PTE to be used, but if set (logic one) then there need not be a match, i.e., the address space numbers are ignored. A particular feature of the invention, however, is an additional match-disable bit 96 stored for each PTE, disabling the address space match feature under certain conditions (i.e., when the virtual machine monitor process is being executed).
The match-disable bit is not required to be maintained on each entry in the translation buffer. Whether or not address-space matches should be disabled is properly a function of the execution environment of the CPU rather than of the virtual address. When the virtual machine monitor is being executed (as discussed below), address-space matches are disabled; when a virtual machine or some process in a VM is being executed, address-space matches are enabled. In another embodiment of the invention, the match-disable bit could be stored globally in the translation buffer. In yet another embodiment of the invention, the match-disable bit could be maintained in the CPU itself. In either case, its value would be changed on transition from the VMM to a VM or from a VM to the VMM, and its current value must be made available to the match comparison logic in the translation buffer.
The CPU 10 generates memory references by first forming a virtual address 76 on bus 56, representing the address within the entire virtual range 97 as seen in Figure 11, defined by the 43-bit address width referred to, or that portion of the address width used by the operating system. Then using the page tables 85, 88, 90 in memory, or the translation buffer 36 or 48, the virtual address is translated to a physical address represented by an address map 98; the physical memory is constrained by the size of the main memory 12. The translation is done for each page (e.g., an 8Kbyte block), so a virtual page address for a page 99 in the virtual memory map 97 is translated to a physical address 99' for a page (referred to as a page frame) in the physical memory map 98. The page tables are maintained in memory 12 or cache 20 to provide the translation between virtual address and physical address, and the translation buffer is included in the CPU to hold the most recently used translations so a reference to the page tables in memory 12 need not be made in most cases to obtain the translation before a data reference can be made; the time needed to make the reference to a page table in memory 12 would far exceed the time needed to obtain the translation from the translation buffer. Only the pages used by tasks currently executing (and the operating system itself) are likely to be in the physical memory 12 at a given time; a translation to an address 99' is in the page table 85, 88, 90 for only those pages actually present in physical memory 12. When the page being referenced by the CPU 10 is found not to be in the physical memory 12, a page fault is executed to initiate a swap operation in which a page from the physical memory 12 is swapped with the desired page maintained in the disk memory 13, this swap being under control of the operating system. 
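The three-level walk of Figure 10 may be modeled as follows. Memory is represented here as a mapping from (table base, index) pairs to quadword entries, which is a simplification of the real in-memory page tables; the 8-Kbyte page and 10-bit segment sizing is assumed as in the earlier example.

```python
# Illustrative model of the three-level page-table walk of Figure 10:
# Seg1 indexes table 85 (base in register 86), Seg2 indexes table 88,
# Seg3 indexes table 90 to reach PTE 91, whose PFN is combined with
# the byte offset 77 (the function of adder 92).
PAGE_BITS, SEG_BITS = 13, 10

def translate(va, page_table_base, memory):
    mask = (1 << SEG_BITS) - 1
    offset = va & ((1 << PAGE_BITS) - 1)
    seg3 = (va >> PAGE_BITS) & mask
    seg2 = (va >> (PAGE_BITS + SEG_BITS)) & mask
    seg1 = (va >> (PAGE_BITS + 2 * SEG_BITS)) & mask
    table2 = memory[(page_table_base, seg1)]   # entry 87: base of table 88
    table3 = memory[(table2, seg2)]            # entry 89: base of table 90
    pfn = memory[(table3, seg3)]               # PTE 91: page frame number
    return (pfn << PAGE_BITS) | offset         # combination in adder 92
```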
Some pages in physical memory 12 used by the operating system kernel, for example, or the page tables 85, 88, 90 themselves, are in fixed positions and may not be swapped to disk 13 or moved to other page translations; most pages used by executing tasks, however, may be moved freely within the physical memory 12 by merely keeping the page tables updated.
A process (or task) is a basic entity that is scheduled for execution by the CPU 10. A process represents a single thread of execution and consists of an address space and both hardware and software context. The hardware context is defined by the integer registers 43 and floating point registers 61, the processor status contained in internal processor registers, the program counter 33, four stack pointers, the page table base register 86, the address space number 95, and other values depending upon the CPU design. The software context of a process is defined by operating system software and is system dependent. A process may share the same address space with other processes or have an address space of its own; there is, however, no separate address space for system software, and therefore the operating system must be mapped into the address space of each process. In order for a process to execute, its hardware context must be loaded into the integer registers 43, floating point registers 61, and internal processor registers. While a process is executing, its hardware context is continuously updated, as the various registers are loaded and written over. When a process is not being executed, its hardware context is stored in memory 12. Saving the hardware context of the current process in memory, followed by loading the hardware context for a new process, is referred to as context switching. Context switching occurs as one process after another is scheduled by the operating system for execution. The hardware context of a process is defined by a privileged part and nonprivileged part. The privileged part is stored in memory in a 128-byte block 100 as shown in Figure 12 when a process is not executing, and context is switched by a privileged instruction. There is one block 100 for each process. The nonprivileged part is context switched by the operating system software.
Referring to Figure 12, the context block 100 contains four stack pointers in fields 101, these being stack pointers for the kernel, the executive, the supervisor and the user. The page table base register 86 is in field 102. The address space number for this process (to be loaded to register 95) is in field 103. Other fields 104 are for values not material here. The location of this block 100 in memory is specified for the current process by a context block base register 105. A swap context instruction saves the privileged context of the current process into the context block specified by this register 105, loads a new value into the register 105, and then loads the privileged context of the new process from the new block 100 into the appropriate hardware registers 43, etc. The architecture as described above allows a processor to implement address space numbers (process tags) to reduce the need for invalidation of cached address translations in the translation buffer for process- specific addresses when a context switch occurs. The address space number for the current process is loaded by a privileged instruction from field 103 into an internal processor register 95.
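The swap context operation just described may be sketched abstractly as follows. Only the fields named in the text (the four stack pointers 101, the page table base 102, and the ASN 103) are modeled, and the in-memory layout of the 128-byte block 100 is not reproduced; the class and attribute names are hypothetical.

```python
# Illustrative model of the privileged context block 100 of Figure 12
# and the swap context instruction: save the current privileged context
# into the block named by register 105, load a new value into that
# register, then load the new process's context into the hardware.
from dataclasses import dataclass, field

@dataclass
class ContextBlock:                  # block 100, one per process
    stack_pointers: list             # fields 101: kernel, executive, supervisor, user
    page_table_base: int             # field 102 (loaded to register 86)
    asn: int                         # field 103 (loaded to register 95)

@dataclass
class Cpu:
    stack_pointers: list = field(default_factory=lambda: [0, 0, 0, 0])
    page_table_base: int = 0
    asn: int = 0
    context_block: ContextBlock = None   # stands in for base register 105

def swap_context(cpu, new_block):
    old = cpu.context_block
    if old is not None:                  # save current privileged context
        old.stack_pointers = list(cpu.stack_pointers)
        old.page_table_base = cpu.page_table_base
        old.asn = cpu.asn
    cpu.context_block = new_block        # load new value into register 105
    cpu.stack_pointers = list(new_block.stack_pointers)
    cpu.page_table_base = new_block.page_table_base
    cpu.asn = new_block.asn
```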
In the page table entry 81 of Figure 9 and Table A, there is a field (bit <4>) called "address space match." This feature allows an operating system to designate locations in the system's virtual address space 97 which are shared among all processes. Such a virtual address refers to the same physical address in each process's address space.
The CPU 10 of Figures 1-5 may employ a "virtual machine system" which uses a combination of hardware, firmware, and software mechanisms to create the illusion of multiple, independent simulated or virtual computers each running on the same physical computer. Virtual machine systems create a set of virtual machines (VMs) in an analogous fashion to how time sharing systems create a set of independent user processes. In virtualizing the architecture of Figures 1-5, it is desirable from performance and functional standpoints to provide the address-space-match feature to virtual machines through hardware means.
Virtual machines are created by a layer executing on the CPU called the virtual machine monitor (VMM). The VMM is in control of the hardware, including memory management (address translation and protection mechanisms) and the page tables. Each virtual machine runs in a separate virtual address space in the range 97, and has a distinct set of address space numbers assigned by the VMM. The VMM also runs in its own, independent set of virtual address spaces in the range 97.
The purpose of the invention is to maximize system performance, while allowing the virtual machines to run in kernel mode so they can execute privileged instructions (otherwise they would have to use traps for these operations, at considerable performance penalty); to do this the VMM must constrain the VMs. The problem addressed in implementing this invention is to keep the address spaces of the several VMs and the VMM itself separate from each other, while at the same time maximizing system performance by providing the match function in the TB to the VMs. A further constraint on the solution is that it is expected that the VMs and the VMM will use the same virtual addresses for different purposes. Thus, it is not a solution to allocate separate address regions to the VMM from those allocated to the VMs.
To describe this invention, an example is illustrative. Suppose that the hardware provides fifteen address spaces for use by software, numbered ASN-0 through ASN-14. Assume that address spaces ASN-0 and ASN-9 are dedicated for use by the VMM. On this example system, there are running two virtual machines, A and B, each of which is running five user processes. One possible assignment of address spaces to this mix would be to dedicate address spaces ASN-1 through ASN-5 to VM A and address spaces ASN-6, -7, -8, -10, and -11 to VM B. This example is illustrated in Figure 13.
To preserve system security and integrity, it is required that the two virtual machines be completely independent. That means that they cannot reference each other's memory. Similarly, the VMM's memory must also be isolated from the virtual machines. However, within a virtual machine, the address-space-match feature should work correctly, allowing the individual processes to share memory, under control of each VM's operating system.
A straightforward translation buffer design assigns each address space an address space number (ASN). Part of the CPU's state information is the ASN assigned to the currently running entity or process, register 95. TB entries are tagged in field 94 with the ASN of the address space to which they belong. In addition, there is a bit or field in the TB (Table A, ASM bit <4>) which indicates that the entry matches any address space, in effect overriding the ASN tag 94.
This kind of TB can be used in a virtual machine system at a considerable performance penalty. In order to enforce the memory isolation requirements, the TB must be completely flushed whenever (1) there are any entries in the TB that have the match field (bit <4>) indicating match all, and (2) any of the following events occurs: (a) the currently executing entity on the real machine changes from one VM to another VM, (b) the currently executing entity on the real machine changes from some VM to the VMM, or (c) the currently executing entity on the real machine changes from the VMM to any VM.
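The flush rule just stated may be expressed as a small predicate; note that cases (a), (b) and (c) together reduce to "the executing entity changed." The entity labels are hypothetical.

```python
# Illustrative predicate for the straightforward TB design: a full
# flush is required when the TB holds any match-all entries and the
# executing entity changes between any two of the VMM and the VMs.
def must_flush(tb_has_match_entries, old_entity, new_entity):
    # entities are labels such as "VMM", "VM-A", "VM-B"
    return tb_has_match_entries and old_entity != new_entity
```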
To minimize the TB flushes, and thus improve overall system performance, some restriction is imposed on the VMM and the TB construction is changed, according to the invention. The restriction imposed on the VMM is to require it to use the disable match feature. This reserves the address space match feature for use by the virtual machines, and guarantees that no VMM address will be mapped by the TB with the match field set.
The TB itself is modified by adding another piece of CPU state called "disable match" which is the field 96 of Figure 10. The VMM determines the value of the disable match field 96 on a per-address-space basis and forwards the current value to the TB. The field 96 is a flag that can be either set or clear. If the disable match field 96 is clear, the TB will match a reference if the address is correct and (1) the ASN in the TB entry matches the CPU's current ASN, or (2) the match field in the TB entry is set.
Referring to Figure 14, the logic implemented according to the invention may be illustrated by the relationship:

IF (ADDR.TAG = TB.TAG) & ((CPU.ASN = TB.ASN) | (TB.ASM & NOT TB.DIS)) THEN MATCH

where ADDR.TAG is the tag fields 78, 79, 80 of the virtual address 76, TB.TAG is the tag 93 in the translation buffer, CPU.ASN is the value stored in the processor register 95 from the field 103 of the control block, TB.ASN is the address space number stored in the field 94 of the translation buffer, TB.ASM is the match bit, and TB.DIS is the disable bit in field 96 of the translation buffer. These values are applied to a logic circuit 106 in the A-box of Figure 4 to generate a match signal to indicate whether or not the PTE selected in the TB is to be used.
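The match relationship of circuit 106 may be restated as a small predicate whose argument names mirror the quantities defined above:

```python
# Illustrative restatement of the Figure 14 match logic: a TB entry is
# used when the address tags match and either the ASNs match, or the
# entry's address-space-match bit is set with the disable flag clear.
def tb_match(addr_tag, tb_tag, cpu_asn, tb_asn, tb_asm, tb_dis):
    return addr_tag == tb_tag and (cpu_asn == tb_asn or (tb_asm and not tb_dis))
```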
The logic implemented in the circuit 106 and given in the preceding paragraph in equation form may also be expressed as a truth table as follows (in each case assuming the address tags match):

CPU.ASN = TB.ASN    TB.ASM    TB.DIS    Result
yes                 0         0         match
yes                 0         1         match
yes                 1         0         match
yes                 1         1         match
no                  0         0         no match
no                  0         1         no match
no                  1         0         match
no                  1         1         no match
If the disable match field 96 is set, the TB will match a reference only if the address is correct and the ASN 94 in the TB entry matches the CPU's current ASN 95. In effect, the disable match state bit 96 overrides the match field in the TB. To use the modified design, the VMM causes the disable match bit 96 to be set for all ASNs that are dedicated to the VMM itself (ASN-0 and
ASN-9 in the example above), and the VMM causes the disable match bit 96 to be clear for all other ASNs. One key feature of the disable match bit 96 is that its value is constant as long as the CPU executes in the same context, either the VMM or the same VM/process pair. So, the bit's value need not be calculated during a memory reference. Instead, depending on specific TB design, the value can be calculated and loaded when a TB entry is being filled or when the CPU changes context. The result of the modified design according to the invention is that the TB need be flushed only when the currently executing entity on the machine changes from one
VM to another VM.
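The match relationship implemented by logic circuit 106 may be sketched in software form (a minimal illustration of the equation above, not the hardware implementation; the function and argument names are hypothetical):

```python
def tb_match(addr_tag, tb_tag, cpu_asn, tb_asn, tb_asm, tb_dis):
    """MATCH = (ADDR.TAG = TB.TAG) & ((CPU.ASN = TB.ASN) | (TB.ASM & NOT TB.DIS)).

    tb_asm: address space match bit from the PTE (bit <4>).
    tb_dis: disable-match state bit 96, set for ASNs dedicated to the VMM.
    """
    # The address tags must always agree for any match.
    if addr_tag != tb_tag:
        return False
    # Either the ASNs agree, or the entry is match-all and match-all
    # has not been disabled for the current address space.
    return (cpu_asn == tb_asn) or (tb_asm and not tb_dis)
```

Note that when tb_dis is set the function degenerates to a pure ASN comparison, which is exactly the override behavior described for the VMM's address spaces.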
In another embodiment of the invention, a multi-bit field for a virtual machine number VMN is added to each translation buffer entry as seen in Figure 15, instead of the single-bit disable indicator as discussed above. Likewise, a multi-bit virtual machine number VMN is added to the state of the CPU in an internal processor register, like the address space number field 95. The translation buffer then contains logic to match a virtual address on the tags and match the ASNs, as before, and also logic to match on the virtual machine numbers. The logic implemented may be expressed
IF (ADDR.TAG = TB.TAG) & (CPU.VMN = TB.VMN)
& ((CPU.ASN = TB.ASN) | (TB.ASM)) THEN MATCH where CPU.VMN is the virtual machine number stored as machine state and TB.VMN is the VMN field in the translation buffer entry.
In this embodiment, each VM is assigned one (or more) VMN, the VMM is assigned one (or more) VMN, and each must operate using VMNs distinct from those of the others. In contrast to the previous embodiment, however, the translation buffer does not need to be cleared upon any switch between VMs. An additional advantage is that the virtual machine monitor can use the address space match feature to share translation buffer entries among its processes. The disadvantage of this embodiment is that more bits are needed in each translation buffer entry.
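A corresponding software sketch of the VMN variant (again illustrative only, with hypothetical names):

```python
def tb_match_vmn(addr_tag, tb_tag, cpu_vmn, tb_vmn, cpu_asn, tb_asn, tb_asm):
    """MATCH = (ADDR.TAG = TB.TAG) & (CPU.VMN = TB.VMN)
             & ((CPU.ASN = TB.ASN) | TB.ASM).

    The VMN comparison isolates entries per virtual machine, so the
    match-all bit (tb_asm) can only share entries within one VM (or
    within the VMM), never across VMs.
    """
    return (addr_tag == tb_tag) and (cpu_vmn == tb_vmn) \
        and (cpu_asn == tb_asn or tb_asm)
```

Because every entry is qualified by a VMN, no disable bit is needed and no flush is required on a VM switch; the cost is the extra VMN field per entry.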
While this invention has been described with reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to this description. It is therefore contemplated that the appended claims will cover any such modifications or embodiments as fall within the true scope of the invention.
Table A: Page Table Entry Fields in the page table entry are interpreted as follows: Bits Description
<0> Valid (V) - Indicates the validity of the PFN field.
<1> Fault On Read (FOR) - When set, a Fault On Read exception occurs on an attempt to read any location in the page.
<2> Fault On Write (FOW) - When set, a Fault On Write exception occurs on an attempt to write any location in the page.
<3> Fault on Execute (FOE) - When set, a Fault On Execute exception occurs on an attempt to execute an instruction in the page.
<4> Address Space Match (ASM) - When set, this PTE matches all Address Space Numbers. For a given VA, ASM must be set consistently in all processes.
<6:5> Granularity hint (GH) - Software may set these bits to a non-zero value to supply a hint to the translation buffer that a block of pages can be treated as a single larger page.
<7> Reserved for future use.
<8> Kernel Read Enable (KRE) - This bit enables reads from kernel mode. If this bit is a 0 and a LOAD or instruction fetch is attempted while in kernel mode, an Access Violation occurs. This bit is valid even when V=0.
<9> Executive Read Enable (ERE) - This bit enables reads from executive mode. If this bit is a 0 and a LOAD or instruction fetch is attempted while in executive mode, an Access Violation occurs. This bit is valid even when V=0.
<10> Supervisor Read Enable (SRE) - This bit enables reads from supervisor mode. If this bit is a 0 and a LOAD or instruction fetch is attempted while in supervisor mode, an Access Violation occurs. This bit is valid even when V=0.
<11> User Read Enable (URE) - This bit enables reads from user mode. If this bit is a 0 and a LOAD or instruction fetch is attempted while in user mode, an Access Violation occurs. This bit is valid even when V=0.
<12> Kernel Write Enable (KWE) - This bit enables writes from kernel mode. If this bit is a 0 and a STORE is attempted while in kernel mode, an Access Violation occurs. This bit is valid even when V=0.
<13> Executive Write Enable (EWE) - This bit enables writes from executive mode. If this bit is a 0 and a STORE is attempted while in executive mode, an Access Violation occurs.
<14> Supervisor Write Enable (SWE) - This bit enables writes from supervisor mode. If this bit is a 0 and a STORE is attempted while in supervisor mode, an Access Violation occurs.
<15> User Write Enable (UWE) - This bit enables writes from user mode. If this bit is a 0 and a STORE is attempted while in user mode, an Access Violation occurs.
<31:16> Reserved for software.
<63:32> Page Frame Number (PFN) - The PFN field always points to a page boundary. If V is set, the PFN is concatenated with the Byte Within Page bits of the virtual address to obtain the physical address. If V is clear, this field may be used by software.
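The bit positions in Table A can be illustrated with a small decoder (a hypothetical helper written for exposition; the field names follow the table):

```python
def decode_pte(pte):
    """Unpack the Table A fields from a 64-bit page table entry."""
    return {
        "V":   (pte >> 0) & 1,          # <0>    valid
        "FOR": (pte >> 1) & 1,          # <1>    fault on read
        "FOW": (pte >> 2) & 1,          # <2>    fault on write
        "FOE": (pte >> 3) & 1,          # <3>    fault on execute
        "ASM": (pte >> 4) & 1,          # <4>    address space match
        "GH":  (pte >> 5) & 0x3,        # <6:5>  granularity hint
        "KRE": (pte >> 8) & 1,          # <8>    kernel read enable
        "ERE": (pte >> 9) & 1,          # <9>    executive read enable
        "SRE": (pte >> 10) & 1,         # <10>   supervisor read enable
        "URE": (pte >> 11) & 1,         # <11>   user read enable
        "KWE": (pte >> 12) & 1,         # <12>   kernel write enable
        "EWE": (pte >> 13) & 1,         # <13>   executive write enable
        "SWE": (pte >> 14) & 1,         # <14>   supervisor write enable
        "UWE": (pte >> 15) & 1,         # <15>   user write enable
        "PFN": (pte >> 32) & 0xFFFFFFFF,  # <63:32> page frame number
    }
```

Bit <7> and bits <31:16> (reserved fields) are simply skipped by the decoder.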

Claims

WHAT IS CLAIMED IS:
1. In a CPU executing a virtual memory management, a method of operating the CPU, comprising: providing a translation buffer for translating virtual addresses to physical addresses, comprising the steps of: storing in said translation buffer a plurality of page table entries, each page table entry containing a page frame number indexed by a virtual address tag; also storing in said translation buffer for each said page table entry an address space number, and an address space match entry; where said address space number is a value related to a process executed on said processor, said match entry is a value indicating that the address space number is to be required to be matched or not so required; and storing a third match value indicating whether or not said match entry is to be disabled; comparing said virtual address tag with a field of a virtual address generated by said processor, and also comparing said address space number with a current number maintained as part of the state of said processor, if comparing said virtual address tag with a field of said virtual address produces a match, and if said comparison of address space numbers produces a match, and said match entry is of one value, then using said page frame number for a memory reference; or if said match entry is another value, then using said page frame number for a memory reference regardless of whether said address space number matches said current number, if said third match value is in one condition.
2. A method according to claim 1 wherein said processor is executing a plurality of virtual machines, each having a number of processes, and is executing a virtual machine monitor.
3. A method according to claim 2 wherein said third match value is in said one condition for all page table entries in said translation buffer for said virtual machine monitor.
4. A method according to claim 1 including storing in said page table entries protection and access rights information, and including the steps of fetching instructions from an external memory, decoding said instructions, and executing said instructions, said executing including accessing said external memory for read and write data; said external memory storing a page table of said page table entries.
5. A method according to claim 1 wherein said third match value is a disable bit stored in said translation buffer, wherein one of said disable bits is stored for each entry of said translation buffer.
6. A method of operating a processor system having a CPU and a memory, the CPU having a translation buffer for translating virtual addresses to physical addresses, comprising the steps of: storing in said translation buffer selected page table entries, each page table entry containing a virtual address tag, a page frame number and an address space number to characterize the memory referenced by said page frame number; also storing in said translation buffer for each page table entry (a) an address space match indication to specify whether or not said address space number must match a value stored as part of the state of said processor, and (b) a match disable indication specifying whether or not said address space match indicator is to be operable for an entry; comparing a field of a virtual address generated by said processor with said virtual address tag of one of said page table entries in said translation buffer, and, if said comparing indicates a match, and said address space match indicator is off, and said address space number matches a value stored as part of the state of said processor, then using said page frame number for addressing said memory; if said comparing indicates a match, and said address space match indicator is on, regardless of whether said address space number matches a value stored as part of the state of said processor, using said page frame number for addressing said memory; if said comparing indicates a match, and said match disable indicator is on, regardless of whether said address space match indicator is on or off, then using said page frame number for addressing said memory only if said address space number matches a value stored as part of the state of said processor.
7. A method according to claim 6 including the steps of fetching instructions from an external memory, decoding said instructions, and executing said instructions, said executing including accessing said external memory for read and write data; said external memory storing a page table of said page table entries.
8. A method according to claim 6 including storing in said page table entries protection and access rights information, wherein said processor is executing a plurality of virtual machines, each having a number of processes, and executing a virtual machine monitor.
9. A method according to claim 8 wherein said disable match indicator is on for all page table entries in said translation buffer for said virtual machine monitor.
10. A method according to claim 6 wherein said match disable indication is stored in said translation buffer.
11. A method according to claim 10 wherein a match disable indication is stored for each entry of said translation buffer, wherein said system includes a plurality of said CPUs accessing said memory, and said steps are carried out separately in each said CPU.
12. A processor having a virtual memory management system, comprising: addressing means including a translation buffer for translating virtual addresses to physical addresses, said translation buffer storing a plurality of page table entries, each page table entry containing a page frame number indexed by a virtual address tag; said translation buffer also storing for each said page table entry an address space number, an address space match entry, and a match disable indicator; where said address space number is a value related to a process executed on said processor, said match entry is a value indicating that the address space number is to be required to be matched or not so required, and said match disable indicator is an indication of whether said match entry is to be ignored; means for comparing said virtual address tag with a field of a virtual address generated by said processor, and also comparing said address space number with a current number maintained as part of the state of said processor, and if both of said comparisons produce a match, and said match entry is of one value, then said addressing means using said page frame number for a memory reference; or if comparing said virtual address tag with a field of said virtual address produces a match, and if said match entry is another value, then said addressing means using said page frame number for a memory reference regardless of whether said address space number matches said current number, unless said disable indicator is set.
13. A processor according to claim 12 including means for fetching instructions from an external memory, decoding said instructions, and executing said instructions, said executing including accessing said external memory for read and write data; said external memory storing a page table of said page table entries.
14. A processor according to claim 13 including means for generating virtual addresses used for said fetching of instructions and said accessing said external memory for data, said virtual addresses being compared to address tags in said translation buffer.
15. A processor according to claim 13 wherein said page table entries also contain protection and access rights information.
16. A processor system having a CPU and a memory, comprising: a) means in the CPU for fetching instructions from said memory, decoding said instructions, and executing said instructions, said executing including accessing said memory for read and write data; b) means in said CPU for generating virtual addresses used for said fetching of instructions and said accessing said memory for data; c) a page table stored in said memory and containing a plurality of page table entries, each page table entry including a page frame number referencing a page of said memory; d) means for translating said virtual addresses to physical addresses for said memory, including a translation buffer storing selected ones of said page table entries; e) and means for addressing said memory using the page frame number from said translation buffer and using a part of said virtual address; f) said translation buffer storing for each said page table entry an address space number, and storing an address space match entry, where said address space number is a value related to a process executed on said CPU, and said match entry is a value indicating that the address space number is to be required to be matched or not so required; said translation buffer also storing for each said entry a match disable indicator; g) means in said translation buffer for comparing said virtual address tag with a field of said virtual address generated by said CPU, and also comparing said address space number with a current number maintained as part of the state of said CPU, and if both of said comparisons produce a match, and said match entry is of one value, then said addressing means using said page frame number for a memory reference; or if comparing said virtual address tag with a field of said virtual address produces a match, and if said match entry is another value, then said addressing means using said page frame number for a memory reference regardless of whether said address space number matches said 
current number, unless said disable match indicator is set.
17. A system according to claim 16 wherein said CPU is executing a plurality of virtual machines, each having a number of processes, and executing a virtual machine monitor, wherein said disable match indicator is set for all page table entries in said translation buffer for said virtual machine monitor.
18. A method according to claim 16 wherein said system includes a plurality of said CPUs accessing said memory, and said steps are carried out separately in each said CPU.
19. A method of operating a processor having a translation buffer for translating virtual addresses to physical addresses, comprising the steps of: storing in said translation buffer a plurality of page table entries, each page table entry containing a page frame number indexed by a virtual address tag; also storing in said translation buffer for each said page table entry an address space number, an address space match entry, and a third match value; where said address space number is a value related to a process executed on said processor, said match entry is a value indicating that the address space number is to be required to be matched or not so required, and said third match value indicating a virtual machine number; comparing said virtual address tag with a field of a virtual address generated by said processor, comparing said third match value with a machine value maintained as part of the state of said processor, and also comparing said address space number with a current number maintained as part of the state of said processor, and if all three of said comparisons produce a match, and said match entry is of one value, then using said page frame number for a memory reference; or if comparing said virtual address tag with a field of said virtual address produces a match, and comparing said third match value with said machine value produces a match, and further if said match entry is another value, then using said page frame number for a memory reference regardless of whether said address space number matches said current number.
20. A method according to claim 19 wherein said processor is executing a plurality of virtual machines, each having a number of processes, and is executing a virtual machine monitor; each said virtual machine and said monitor having a different machine value.
PCT/US1992/005351 1991-06-28 1992-06-25 Translation buffer for virtual machines with address space match WO1993000636A1 (en)


US7237051B2 (en) * 2003-09-30 2007-06-26 Intel Corporation Mechanism to control hardware interrupt acknowledgement in a virtual machine system
US7177967B2 (en) * 2003-09-30 2007-02-13 Intel Corporation Chipset support for managing hardware interrupts in a virtual machine system
US7636844B2 (en) 2003-11-17 2009-12-22 Intel Corporation Method and system to provide a trusted channel within a computer system for a SIM device
US20050108534A1 (en) * 2003-11-19 2005-05-19 Bajikar Sundeep M. Providing services to an open platform implementing subscriber identity module (SIM) capabilities
US8156343B2 (en) 2003-11-26 2012-04-10 Intel Corporation Accessing private data about the state of a data processing machine from storage that is publicly accessible
US7159095B2 (en) * 2003-12-09 2007-01-02 International Business Machines Corporation Method of efficiently handling multiple page sizes in an effective to real address translation (ERAT) table
US8037314B2 (en) * 2003-12-22 2011-10-11 Intel Corporation Replacing blinded authentication authority
US20050152539A1 (en) * 2004-01-12 2005-07-14 Brickell Ernie F. Method of protecting cryptographic operations from side channel attacks
US7802085B2 (en) 2004-02-18 2010-09-21 Intel Corporation Apparatus and method for distributing private keys to an entity with minimal secret, unique information
US20050216920A1 (en) * 2004-03-24 2005-09-29 Vijay Tewari Use of a virtual machine to emulate a hardware device
US7356735B2 (en) * 2004-03-30 2008-04-08 Intel Corporation Providing support for single stepping a virtual machine in a virtual machine environment
US7620949B2 (en) * 2004-03-31 2009-11-17 Intel Corporation Method and apparatus for facilitating recognition of an open event window during operation of guest software in a virtual machine environment
US7490070B2 (en) 2004-06-10 2009-02-10 Intel Corporation Apparatus and method for proving the denial of a direct proof signature
US20050288056A1 (en) * 2004-06-29 2005-12-29 Bajikar Sundeep M System including a wireless wide area network (WWAN) module with an external identity module reader and approach for certifying the WWAN module
US7305592B2 (en) * 2004-06-30 2007-12-04 Intel Corporation Support for nested fault in a virtual machine environment
US7606995B2 (en) * 2004-07-23 2009-10-20 Hewlett-Packard Development Company, L.P. Allocating resources to partitions in a partitionable computer
US7562179B2 (en) 2004-07-30 2009-07-14 Intel Corporation Maintaining processor resources during architectural events
US7930539B2 (en) * 2004-08-03 2011-04-19 Hewlett-Packard Development Company, L.P. Computer system resource access control
US20060031672A1 (en) * 2004-08-03 2006-02-09 Soltis Donald C Jr Resource protection in a computer system with direct hardware resource access
US7840962B2 (en) * 2004-09-30 2010-11-23 Intel Corporation System and method for controlling switching between VMM and VM using enabling value of VMM timer indicator and VMM timer value having a specified time
US8146078B2 (en) 2004-10-29 2012-03-27 Intel Corporation Timer offsetting mechanism in a virtual machine environment
US8924728B2 (en) 2004-11-30 2014-12-30 Intel Corporation Apparatus and method for establishing a secure session with a device without exposing privacy-sensitive information
US7721292B2 (en) * 2004-12-16 2010-05-18 International Business Machines Corporation System for adjusting resource allocation to a logical partition based on rate of page swaps and utilization by changing a boot configuration file
US8533777B2 (en) * 2004-12-29 2013-09-10 Intel Corporation Mechanism to determine trust of out-of-band management agents
US7395405B2 (en) * 2005-01-28 2008-07-01 Intel Corporation Method and apparatus for supporting address translation in a virtual machine environment
US7793160B1 (en) 2005-03-29 2010-09-07 Emc Corporation Systems and methods for tracing errors
US8856473B2 (en) * 2005-07-01 2014-10-07 Red Hat, Inc. Computer system protection based on virtualization
US7809957B2 (en) 2005-09-29 2010-10-05 Intel Corporation Trusted platform module for generating sealed data
US20070150685A1 (en) * 2005-12-28 2007-06-28 Gbs Laboratories Llc Computer architecture for providing physical separation of computing processes
US8014530B2 (en) 2006-03-22 2011-09-06 Intel Corporation Method and apparatus for authenticated, recoverable key distribution with no database secrets
US20070226451A1 (en) * 2006-03-22 2007-09-27 Cheng Antonio S Method and apparatus for full volume mass storage device virtualization
US7934073B2 (en) * 2007-03-14 2011-04-26 Andes Technology Corporation Method for performing jump and translation state change at the same time
US8271989B2 (en) * 2008-02-07 2012-09-18 International Business Machines Corporation Method and apparatus for virtual processor dispatching to a partition based on shared memory pages
US8086585B1 (en) 2008-09-30 2011-12-27 Emc Corporation Access control to block storage devices for a shared disk based file system
US8776245B2 (en) * 2009-12-23 2014-07-08 Intel Corporation Executing trusted applications with reduced trusted computing base
US8405668B2 (en) 2010-11-19 2013-03-26 Apple Inc. Streaming translation in display pipe
KR20120071554A (en) * 2010-12-23 2012-07-03 한국전자통신연구원 Address space switching method and apparatus for full virtualization
US20130055335A1 (en) * 2011-08-22 2013-02-28 Shih-Wei Chien Security enhancement methods and systems
US9584628B2 (en) * 2015-03-17 2017-02-28 Freescale Semiconductor, Inc. Zero-copy data transmission system
US11513779B2 (en) 2020-03-19 2022-11-29 Oracle International Corporation Modeling foreign functions using executable references
US11875168B2 (en) 2020-03-19 2024-01-16 Oracle International Corporation Optimizing execution of foreign method handles on a virtual machine
US11543976B2 (en) * 2020-04-01 2023-01-03 Oracle International Corporation Methods for reducing unsafe memory access when interacting with native libraries

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0145960A2 (en) * 1983-12-14 1985-06-26 International Business Machines Corporation Selective guest system purge control
US4802084A (en) * 1985-03-11 1989-01-31 Hitachi, Ltd. Address translator
US4816991A (en) * 1986-03-14 1989-03-28 Hitachi, Ltd. Virtual machine system with address translation buffer for holding host and plural guest entries
DE3911182A1 (en) * 1988-04-06 1989-10-19 Hitachi Ltd Address conversion device in a virtual machine system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4075686A (en) * 1976-12-30 1978-02-21 Honeywell Information Systems Inc. Input/output cache system including bypass capability
US4525778A (en) * 1982-05-25 1985-06-25 Massachusetts Computer Corporation Computer memory control
US4638426A (en) * 1982-12-30 1987-01-20 International Business Machines Corporation Virtual memory address translation mechanism with controlled data persistence
US4787031A (en) * 1985-01-04 1988-11-22 Digital Equipment Corporation Computer with virtual machine mode and multiple protection rings
US4933835A (en) * 1985-02-22 1990-06-12 Intergraph Corporation Apparatus for maintaining consistency of a cache memory with a primary memory
US5095424A (en) * 1986-10-17 1992-03-10 Amdahl Corporation Computer system architecture implementing split instruction and operand cache line-pair-state management
US4811215A (en) * 1986-12-12 1989-03-07 Intergraph Corporation Instruction execution accelerator for a pipelined digital machine with virtual memory
US4802085A (en) * 1987-01-22 1989-01-31 National Semiconductor Corporation Apparatus and method for detecting and handling memory-mapped I/O by a pipelined microprocessor
US5029070A (en) * 1988-08-25 1991-07-02 Edge Computer Corporation Coherent cache structures and methods
US4965717A (en) * 1988-12-09 1990-10-23 Tandem Computers Incorporated Multiple processor system having shared memory with private-write capability
US5155843A (en) * 1990-06-29 1992-10-13 Digital Equipment Corporation Error transition mode for multi-processor system
US5067609A (en) * 1990-10-01 1991-11-26 The Mead Corporation Packaging and display case for dissimilar objects


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN vol. 9, no. 90 (P-350) 19 April 1985 & JP,A,59 218 693 ( NIPPON DENSHIN DENWA KOSHA ) 8 December 1984 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0797149A2 (en) * 1996-03-22 1997-09-24 Sun Microsystems, Inc. Architecture and method for sharing tlb entries
EP0797149A3 (en) * 1996-03-22 1998-09-02 Sun Microsystems, Inc. Architecture and method for sharing tlb entries

Also Published As

Publication number Publication date
IE922106A1 (en) 1992-12-30
DE69223386T2 (en) 1998-06-10
EP0548315B1 (en) 1997-12-03
CA2088978C (en) 1996-08-13
CA2088978A1 (en) 1992-12-29
DE69223386D1 (en) 1998-01-15
US5319760A (en) 1994-06-07
AU2295492A (en) 1993-01-25
AU654204B2 (en) 1994-10-27
EP0548315A1 (en) 1993-06-30

Similar Documents

Publication Publication Date Title
AU654204B2 (en) Translation buffer for virtual machines with address space match
US5367705A (en) In-register data manipulation using data shift in reduced instruction set processor
EP0465321B1 (en) Ensuring data integrity in multiprocessor or pipelined processor system
EP0463975B1 (en) Byte-compare operation for high-performance processor
US5454091A (en) Virtual to physical address translation scheme with granularity hint for identifying subsequent pages to be accessed
US6076158A (en) Branch prediction in high-performance processor
US5778423A (en) Prefetch instruction for improving performance in reduced instruction set processor
US6167509A (en) Branch performance in high speed processor
US5469551A (en) Method and apparatus for eliminating branches using conditional move instructions
JPH04251352A (en) Selective locking of memory position in on-chip cache of microprocessor
JP2608680B2 (en) CPU execution method

Legal Events

Date Code Title Description
AK Designated states
Kind code of ref document: A1
Designated state(s): AT AU BB BG BR CA CH DE DK ES FI GB HU JP KP KR LK LU MG MW NL NO PL RO RU SD SE

AL Designated countries for regional patents
Kind code of ref document: A1
Designated state(s): AT BE CH DE DK ES FR GB GR IT LU MC NL SE BF BJ CF CG CI CM GA GN ML MR SN TD TG

WWE Wipo information: entry into national phase
Ref document number: 2088978
Country of ref document: CA

WWE Wipo information: entry into national phase
Ref document number: 1992914596
Country of ref document: EP

WWP Wipo information: published in national office
Ref document number: 1992914596
Country of ref document: EP

REG Reference to national code
Ref country code: DE
Ref legal event code: 8642

WWG Wipo information: grant in national office
Ref document number: 1992914596
Country of ref document: EP

ENP Entry into the national phase
Ref country code: CA
Ref document number: 2088978
Kind code of ref document: A
Format of ref document f/p: F