Vantage Point Technology, Inc. v. Apple Inc.

Filing 1

COMPLAINT against Apple Inc. (Filing fee $400, receipt number 0540-4414171), filed by Vantage Point Technology, Inc. (Attachments: # 1 Exhibit A, # 2 Civil Cover Sheet) (Storm, Paul)

Exhibit A

United States Patent [19] — Sachs
[11] Patent Number: 5,463,750
[45] Date of Patent: Oct. 31, 1995

[54] METHOD AND APPARATUS FOR TRANSLATING VIRTUAL ADDRESSES IN A DATA PROCESSING SYSTEM HAVING MULTIPLE INSTRUCTION PIPELINES AND SEPARATE TLB'S FOR EACH PIPELINE

[75] Inventor: Howard G. Sachs, Belvedere, Calif.
[73] Assignee: Intergraph Corporation, Huntsville, Ala.
[21] Appl. No.: 146,818
[22] Filed: Nov. 2, 1993
[51] Int. Cl.6 .......... G06F 12/10
[52] U.S. Cl. .......... 395/416; 395/421.03; 395/800; 364/DIG. 1; 364/DIG. 2
[58] Field of Search .......... 395/400, 425

[56] References Cited

U.S. PATENT DOCUMENTS

4,758,951   7/1988   Sznyter III ............ 395/400
4,980,816  12/1990   Fukuzawa et al. ........ 395/400
5,197,139   3/1993   Emma et al. ............ 395/400
5,226,133   7/1993   Taylor et al. .......... 395/400
5,247,629   9/1993   Casamatta et al. ....... 395/400
5,293,612   3/1994   Shingai ................ 395/425
5,305,444   4/1994   Becker et al. .......... 395/400
5,386,530   1/1995   Hattori ................ 395/400
5,404,476   4/1995   Kadaira ................ 395/400
5,404,478   4/1995   Arai et al. ............ 395/400
5,412,787   5/1995   Forsyth et al. ......... 395/400

Primary Examiner: Ken S. Kim
Attorney, Agent, or Firm: Townsend and Townsend and Crew

[57] ABSTRACT

A computing system has multiple instruction pipelines, wherein one or more pipelines require translating virtual addresses to real addresses. A TLB is provided for each pipeline requiring address translation services, and an address translator is provided for each such pipeline for translating a virtual address received from its associated pipeline into corresponding real addresses. Each address translator comprises a translation buffer accessing circuit for accessing the TLB, a translation indicating circuit for indicating whether translation data for the virtual address is stored in the translation buffer, and an update control circuit for activating the direct address translation circuit when the translation data for the virtual address is not stored in the TLB. The update control circuit also stores the translation data retrieved from the main memory into the TLB. If it is desired to have the same translation information available for all the pipelines in a group, then the update control circuit also updates all the other TLB's in the group.

14 Claims, 4 Drawing Sheets

[Drawing sheets omitted: FIG. 1 (computing system 10); FIGS. 2A-2B (4 GB virtual memory vs. 16 MB real memory); FIG. 3 (page table access); FIG. 4 (TLB 158, comparator 170, DTU 162); FIG. 5 (apparatus 200 with pipelines 210A-C, TLB's 222A-C, and update control 240).]
METHOD AND APPARATUS FOR TRANSLATING VIRTUAL ADDRESSES IN A DATA PROCESSING SYSTEM HAVING MULTIPLE INSTRUCTION PIPELINES AND SEPARATE TLB'S FOR EACH PIPELINE

BACKGROUND OF THE INVENTION

The present invention relates to computing systems and, more particularly, to a method and apparatus for translating virtual addresses in a computing system having multiple instruction pipelines.

FIG. 1 is a block diagram of a typical computing system 10 which employs virtual addressing of data. Computing system 10 includes an instruction issuing unit 14 which communicates instructions to a plurality of (e.g., eight) instruction pipelines 18A-H over a communication path 22. The data referred to by the instructions in a program are stored in a mass storage device 30 which may be, for example, a disk or tape drive. Since mass storage devices operate very slowly (e.g., a million or more clock cycles per access) compared to instruction issuing unit 14 and instruction pipelines 18A-H, data currently being worked on by the program is stored in a main memory 34 which may be a random access memory (RAM) capable of providing data to the program at a much faster rate (e.g., 30 or so clock cycles). Data stored in main memory 34 is transferred to and from mass storage device 30 over a communication path 42. The communication of data between main memory 34 and mass storage device 30 is controlled by a data transfer unit 46 which communicates with main memory 34 over a communication path 50 and with mass storage device 30 over a communication path 54.

Although main memory 34 operates much faster than mass storage device 30, it still does not operate as quickly as instruction issuing unit 14 or instruction pipelines 18A-H. Consequently, computing system 10 includes a high speed cache memory 60 for storing a subset of data from main memory 34, and a very high speed register file 64 for storing a subset of data from cache memory 60. Cache memory 60 communicates with main memory 34 over a communication path 68 and with register file 64 over a communication path 72. Register file 64 communicates with instruction pipelines 18A-H over a communication path 76. Register file 64 operates at approximately the same speed as instruction issuing unit 14 and instruction pipelines 18A-H (e.g., a fraction of a clock cycle), whereas cache memory 60 operates at a speed somewhere between register file 64 and main memory 34 (e.g., approximately two or three clock cycles).

FIGS. 2A-B are block diagrams illustrating the concept of virtual addressing. Assume computing system 10 has 32 bits available to address data. The addressable memory space is then 2^32 bytes, or four gigabytes (4 GB), as shown in FIG. 2A. However, the physical (real) memory available in main memory 34 typically is much less than that, e.g., 1-256 megabytes. Assuming a 16 megabyte (16 MB) real memory, as shown in FIG. 2B, only 24 address bits are needed to address the memory. Thus, multiple virtual addresses inevitably will be translated to the same real address used to address main memory 34. The same is true for cache memory 60, which typically stores only 1-36 kilobytes of data. Register file 64 typically comprises, e.g., 32 32-bit registers, and it stores data from cache memory 60 as needed. The registers are addressed by instruction pipelines 18A-H using a different addressing scheme.

To accommodate the difference between virtual addresses and real addresses and the mapping between them, the physical memory available in computing system 10 is divided into a set of uniform-size blocks, called pages. If a page contains 2^12 bytes, or 4 kilobytes (4 KB), then the full 32-bit address space contains 2^20, or 1 million (1M), pages (4 KB x 1M = 4 GB). Of course, if main memory 34 has 16 megabytes of memory, only 2^12, or 4K, of the 1 million potential pages actually could be in memory at the same time (4K x 4 KB = 16 MB).

Computing system 10 keeps track of which pages of data from the 4 GB address space currently reside in main memory 34 (and exactly where each page of data is physically located in main memory 34) by means of a set of page tables 100 (FIG. 3) typically stored in main memory 34. Assume computing system 10 specifies 4 KB pages and each page table 100 contains 1K entries for providing the location of 1K separate pages. Thus, each page table maps 4 MB of memory (1K x 4 KB = 4 MB), and 4 page tables suffice for a machine with 16 megabytes of physical main memory (16 MB / 4 MB = 4).

The set of potential page tables are tracked by a page directory 104 which may contain, for example, 1K entries (not all of which need to be used). The starting location of this directory (its origin) is stored in a page directory origin (PDO) register 108. To locate a page in main memory 34, the input virtual address is conceptually split into a 12-bit displacement address (VA<11:0>), a 10-bit page table address (VA<21:12>) for accessing page table 100, and a 10-bit directory address (VA<31:22>) for accessing page directory 104.

The address stored in PDO register 108 is added to the directory address VA<31:22> of the input virtual address in a page directory entry address accumulator 112. The address in page directory entry address accumulator 112 is used to address page directory 104 to obtain the starting address of page table 100. The starting address of page table 100 is then added to the page table address VA<21:12> of the input virtual address in a page table entry address accumulator 116, and the resulting address is used to address page table 100. An address field in the addressed page table entry gives the starting location of the page in main memory 34 corresponding to the input virtual address, and a page fault field PF indicates whether the page is actually present in main memory 34. The location of data within each page is typically specified by the 12 lower-order displacement bits of the virtual address.

When an instruction uses data that is not currently stored in main memory 34, a page fault occurs, and the faulting instruction abnormally terminates. Thereafter, data transfer unit 46 must find an unused 4 KB portion of memory in main memory 34, transfer the requested page from mass storage device 30 into main memory 34, and make the appropriate update to the page table (indicating both the presence and location of the page in memory). The program then may be restarted.

FIG. 4 is a block diagram showing how virtual addresses are translated in the computing system shown in FIG. 1. Components which remain the same as in FIGS. 1 and 3 retain their original numbering. An address register 154 receives an input virtual address which references data used by an instruction issued to one of instruction pipelines 18A-H. A translation memory (e.g., a translation lookaside buffer (TLB)) 158 and a comparator 170 initially determine whether data requested by the input virtual address resides in main memory 34, and a dynamic translation unit (DTU) 162 accesses the page tables in main memory 34.
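The two-level translation walk described above (PDO register 108 and page directory 104, then page table 100, then the 12-bit displacement) can be sketched in software. The following Python model is illustrative only: the dictionary-based directory and tables and the names `split_va` and `walk` are assumptions of the sketch, not the patent's hardware.

```python
# Illustrative model of the two-level walk described above:
# VA<31:22> indexes the page directory, VA<21:12> indexes a page table,
# and VA<11:0> is the byte displacement within a 4 KB page.
# The dict-based tables and all names here are hypothetical.

PAGE_SHIFT = 12   # 2**12 = 4 KB pages
TABLE_BITS = 10   # 1K entries per page table
DIR_BITS = 10     # 1K entries in the page directory

def split_va(va):
    """Split a 32-bit virtual address into (directory, table, displacement)."""
    disp = va & 0xFFF
    table = (va >> PAGE_SHIFT) & 0x3FF
    directory = (va >> (PAGE_SHIFT + TABLE_BITS)) & 0x3FF
    return directory, table, disp

def walk(page_directory, page_tables, va):
    """Return the real address for va, or raise a page fault."""
    d, t, disp = split_va(va)
    table_id = page_directory.get(d)
    if table_id is None:
        raise LookupError("page fault: no page table for directory index %d" % d)
    entry = page_tables[table_id].get(t)
    if entry is None or not entry["present"]:   # the PF field of the entry
        raise LookupError("page fault: page not in main memory")
    return (entry["frame"] << PAGE_SHIFT) | disp
```

With 4 KB pages this reproduces the arithmetic in the text: each table covers 1K x 4 KB = 4 MB, and the 10/10/12 bit split exactly consumes a 32-bit address.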
Bits VA<18:12> of the input virtual address are communicated to TLB 158 over a communication path 166, bits VA<31:12> of the input virtual address are communicated to DTU 162 over a communication path 174, and bits VA<31:19> are communicated to comparator 170 over a communication path 176.

TLB 158 includes a plurality of addressable storage locations 178 that are addressed by bits VA<18:12> of the input virtual address. Each storage location stores a virtual address tag (VAT) 180, a real address (RA) 182 corresponding to the virtual address tag, and control information (CNTRL) 184. How much control information is included depends on the particular design and may include, for example, access protection flags, dirty flags, referenced flags, etc.

The addressed virtual address tag is communicated to comparator 170 over a communication path 186, and the addressed real address is output on a communication path 188. Comparator 170 compares the virtual address tag with bits VA<31:19> of the input virtual address. If they match (a TLB hit), then the real address output on communication path 188 is compared with a real address tag (not shown) of a selected line in cache memory 60 to determine if the requested data is in the cache memory (a cache hit). An example of this procedure is discussed in U.S. Pat. No. 4,933,835, issued to Howard G. Sachs et al. and incorporated herein by reference. If there is a cache hit, then the pipelines may continue to run at their highest sustainable speed. If the requested data is not in cache memory 60, then the real address bits on communication path 188 are combined with bits <11:0> of the input virtual address and used to obtain the requested data from main memory 34.

If the virtual address tag did not match bits VA<31:19> of the input virtual address, then comparator 170 provides a miss signal on a communication path 190 to DTU 162. The miss signal indicates that the requested data is not currently stored in main memory 34, or else the data is in fact present in main memory 34 but the corresponding entry in TLB 158 has been deleted. When the miss signal is generated, DTU 162 accesses the page tables in main memory 34 to determine whether in fact the requested data is currently stored in main memory 34. If not, then DTU 162 instructs data transfer unit 46 through a communication path 194 to fetch the page containing the requested data from mass storage device 30. In any event, TLB 158 is updated through a communication path 196, and instruction issuing resumes.

TLB 158 has multiple ports to accommodate the addresses from the pipelines needing address translation services. For example, if two load instruction pipelines and one store instruction pipeline are used in computing system 10, then TLB 158 has three ports, and the single memory array in TLB 158 is used to service all address translation requests.

SUMMARY OF THE INVENTION

The present invention is directed to a method and apparatus for translating virtual addresses in a computing system having multiple pipelines wherein a separate TLB is provided for each pipeline requiring address translation services. Each TLB may operate independently so that it contains its own set of virtual-to-real address translations, or else each TLB in a selected group may be simultaneously updated with the same address translation information whenever the address translation tables in main memory are accessed to obtain address translation information for any other TLB in the group.

In one embodiment of the present invention, a TLB is provided for each load/store pipeline in the system, and an address translator is provided for each such pipeline for translating a virtual address received from its associated pipeline into corresponding real addresses. Each address translator comprises a translation buffer accessing circuit for accessing the TLB, a translation indicating circuit for indicating whether translation data for the virtual address is stored in the translation buffer, and an update control circuit for activating the direct address translation circuit when the translation data for the virtual address is not stored in the TLB. The update control circuit also stores the translation data retrieved from the main memory into the TLB. If it is desired to have the same translation information available for all the pipelines in a group, then the update control circuit also updates all the other TLB's in the group.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a known computing system;
FIGS. 2A and 2B are diagrams illustrating virtual addressing;
FIG. 3 is a diagram showing how page tables are accessed in the computing system shown in FIG. 1;
FIG. 4 is a block diagram illustrating how virtual addresses are translated in the computing system shown in FIG. 1; and
FIG. 5 is a block diagram of a particular embodiment of a multiple TLB apparatus for translating virtual addresses in a computing system according to the present invention.

BRIEF DESCRIPTION OF THE PREFERRED EMBODIMENTS

As noted above, new virtual-to-real address translation information is stored in TLB 158 whenever a miss signal is generated by comparator 170. The new translation information typically replaces the oldest and least used entry presently stored in TLB 158. While this mode of operation is ordinarily desirable, it may have disadvantages when a single memory array is used to service address translation requests from multiple pipelines. For example, if each pipeline refers to different areas of memory each time an address is to be translated, then the translation information stored in TLB 158 for one pipeline may not get very old before it is replaced by the translation information obtained by DTU 162 for the same or another pipeline at a later time. This increases the chance that DTU 162 will have to be activated more often, which degrades performance. The effect is particularly severe and counterproductive when a first pipeline repeatedly refers to the same general area of memory, but the translation information is replaced by the other pipelines between accesses by the first pipeline.
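The TLB lookup described in the Background — bits VA<18:12> select an entry, and the stored virtual address tag must match bits VA<31:19> — can be sketched as follows. The 128-entry direct-mapped layout follows from the 7 index bits; the entry format and the function name are illustrative assumptions, not the patent's circuitry.

```python
# Sketch of the direct-mapped lookup in TLB 158: bits VA<18:12> select one
# of 128 entries; the stored virtual address tag (VAT) must match VA<31:19>
# for a hit. Entry layout and names here are illustrative.

TLB_ENTRIES = 128  # indexed by the 7 bits VA<18:12>

def tlb_lookup(tlb, va):
    """Return (hit, real_address) for a 32-bit va against a 128-entry TLB.

    tlb is a list of 128 entries; each entry is None or a dict with
    'vat' (bits VA<31:19>) and 'ra' (real page frame number).
    """
    index = (va >> 12) & 0x7F          # VA<18:12>
    tag = va >> 19                     # VA<31:19>
    entry = tlb[index]
    if entry is not None and entry["vat"] == tag:
        # TLB hit: real page frame combined with displacement VA<11:0>
        return True, (entry["ra"] << 12) | (va & 0xFFF)
    return False, None                 # miss: the DTU must walk the page tables
```

On a miss, the returned `(False, None)` corresponds to comparator 170 raising the miss signal that activates DTU 162.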
FIG. 5 is a block diagram of a particular embodiment of an apparatus 200 according to the present invention for translating virtual addresses in a computing system such as computing system 10 shown in FIG. 1. Apparatus 200 includes, for example, a load instruction pipeline 210A, a load instruction pipeline 210B, and a store instruction pipeline 210C. These pipelines may be three of the pipelines 18A-H shown in FIG. 1. Pipelines 210A-C communicate virtual addresses to address registers 214A-C over respective communication paths 218A-C. Relevant portions of the virtual addresses stored in address registers 214A-C are communicated to TLB's 222A-C and to comparators 230A-C over communication paths 226A-C and 228A-C, respectively. TLB's 222A-C are accessed in the manner noted in the Background of the Invention, and the addressed virtual address tags in each TLB are communicated to comparators 230A-C over respective communication paths 234A-C. Comparators 230A-C compare the virtual address tags to the higher order bits of the respective virtual addresses and provide hit/miss signals on communication paths 238A-C to an update control circuit 240.

Update control circuit 240 controls the operation of DTU 162 through a communication path 244 and updates TLB's 222A-C through respective update circuits 241-243 and communication paths 248A-C whenever there is a miss signal generated on one or more of communication paths 238A-C. That is, update control circuit 240 activates DTU 162 whenever a miss signal is received over communication path 238A and stores the desired translation information in TLB 222A through communication path 248A; update control circuit 240 activates DTU 162 whenever a miss signal is received over communication path 238B and stores the desired translation information in TLB 222B through communication path 248B; and update control circuit 240 activates DTU 162 whenever a miss signal is received over communication path 238C and stores the desired translation information in TLB 222C through communication path 248C.
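The per-pipeline refill behavior just described — on a miss, update control circuit 240 activates the DTU and stores the returned translation only in the TLB of the pipeline that missed — can be modeled as follows. The classes and the dict standing in for the DTU and its master page tables are illustrative assumptions, not the patent's hardware.

```python
# Sketch of the FIG. 5 arrangement: each pipeline owns a private
# single-ported TLB; on a miss, an update-control object invokes a
# stand-in DTU (here just a dict of page translations) and refills only
# the TLB that missed. All class and parameter names are illustrative.

class PipelineTLB:
    def __init__(self, entries=128):
        self.entries = [None] * entries

    def lookup(self, va):
        e = self.entries[(va >> 12) & 0x7F]
        if e and e[0] == va >> 19:     # stored VAT matches VA<31:19>
            return e[1]                # real page frame number
        return None                    # miss

    def fill(self, va, frame):
        self.entries[(va >> 12) & 0x7F] = (va >> 19, frame)

class UpdateControl:
    def __init__(self, tlbs, page_table):
        self.tlbs = tlbs               # one TLB per pipeline
        self.page_table = page_table   # stand-in for the DTU + master tables

    def translate(self, pipeline, va):
        frame = self.tlbs[pipeline].lookup(va)
        if frame is None:                        # miss: activate the DTU...
            frame = self.page_table[va >> 12]
            self.tlbs[pipeline].fill(va, frame)  # ...refill only this TLB
        return (frame << 12) | (va & 0xFFF)
```

Because each refill touches only the missing pipeline's TLB, the other pipelines' working sets of translations are never evicted by it — the independent-update mode described next.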
If desired, each TLB 222A-C may be updated independently of the others, which results in separate and independent sets of virtual-to-real address translation data in each TLB. Thus, if, for example, pipeline 210A tends to refer to a particular area of memory more than the other pipelines 210B-C, then TLB 222A will store a set of virtual-to-real address translations that maximizes the hit rate for pipeline 210A. Even if pipeline 210A does not favor a particular area of memory, having a separate and independent set of virtual-to-real address translation data eliminates the possibility that needed translation information in TLB 222A is deleted and replaced by translation data for another pipeline.

If all three pipelines tend to refer to a common area of memory, then update control circuit 240 can be hardware or software programmed to simultaneously update all TLB's with the same translation data whenever the address translation tables in main memory are accessed to obtain address translation information for any other TLB. That is, every time DTU 162 is activated for translating a virtual address supplied by pipeline 210A, update control circuit 240 stores the translation data in each of TLB's 222A-C. While this mode of operation resembles that described for a multi-ported TLB in the Background of the Invention, this embodiment still has benefits in that three separate single-port TLB's are easier to implement than one multi-port TLB and take up only slightly more chip area.

If one group of pipelines tends to refer to a common area of memory and other pipelines do not, then update control circuit 240 can be hardware or software programmed to maintain a common set of translations in the TLB's associated with the group while independently updating the other TLB's.
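The hardware- or software-programmable grouping just described can be modeled as a refill policy in which a miss by any pipeline in a group fills every TLB in that group. The `groups` mapping below loosely plays the role of that programmable control; all names and structures here are illustrative assumptions.

```python
# Sketch of the group-update policy: pipelines in the same group (e.g.
# loads A and B) share refills, while others (store C) refill privately.
# The `groups` mapping and all names here are illustrative.

def make_update_policy(groups):
    """groups: dict pipeline -> tuple of pipelines whose TLBs share refills."""
    def refill(tlbs, pipeline, vpage, frame):
        for member in groups[pipeline]:
            tlbs[member][vpage] = frame   # store the translation in each group member
    return refill

# Loads A and B update in common; store C updates only itself.
refill = make_update_policy({"A": ("A", "B"), "B": ("A", "B"), "C": ("C",)})
tlbs = {"A": {}, "B": {}, "C": {}}
refill(tlbs, "A", 0x405, 0xA3)   # a miss in A propagates the refill to B as well
```

Changing the `groups` mapping at run time would correspond to reprogramming which TLB's are commonly updated and which are updated separately.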
For example, if load pipelines 210A and 210B tend to refer to a common area in memory and store pipeline 210C tends to refer to a different area of memory (or to random areas of memory), then update control circuit 240 activates DTU 162 whenever a miss signal is received over communication path 238A and stores the desired translation information in both TLB 222A and TLB 222B. Similarly, update control circuit 240 activates DTU 162 whenever a miss signal is received over communication path 238B and stores the desired translation information in both TLB 222A and TLB 222B. On the other hand, update control circuit 240 activates DTU 162 whenever a miss signal is received over communication path 238C and stores the desired translation information only in TLB 222C.

While the above is a complete description of a preferred embodiment of the present invention, various modifications may be employed. For example, signals on a communication path 260 could be used to control which TLB's are commonly updated and which TLB's are separately updated (e.g., all TLB's updated independently, TLB's 222A and 222C updated in common while TLB 222B is updated independently, or TLB's 222A-C all updated in common). That is useful when common memory references by the pipelines are application or program dependent. Consequently, the scope of the invention should be ascertained by the following claims.

What is claimed is:

1. An apparatus for translating virtual addresses in a computing system having at least a first and a second instruction pipeline and a direct address translation unit for translating virtual addresses into real addresses, the direct address translation unit including a master translation memory for storing translation data, the direct address translation unit for translating a virtual address into a corresponding real address, comprising:
a first translation buffer, associated with the first instruction pipeline, for storing a first subset of translation data from the master translation memory;
a first address translator, coupled to the first instruction pipeline and to the first translation buffer, for translating a first virtual address received from the first instruction pipeline into a corresponding first real address, the first address translator comprising:
first translation buffer accessing means for accessing the first translation buffer;
first translation indicating means, coupled to the first translation buffer accessing means, for indicating whether translation data for the first virtual address is stored in the first translation buffer; and
first direct address translating means, coupled to the first translation indicating means and to the direct address translation unit, for activating the direct address translation unit to translate the first virtual address when the first translation indicating means indicates that the translation data for the first virtual address is not stored in the first translation buffer, the first direct address translating means including first translation buffer storing means, coupled to the first translation buffer, for storing the translation data for the first virtual address from the master translation memory into the first translation buffer;
a second translation buffer, associated with the second instruction pipeline, for storing a second subset of translation data from the master translation memory; and
a second address translator, coupled to the second instruction pipeline and to the second translation buffer, for translating a second virtual address received from the second instruction pipeline into a corresponding second real address, the second address translator comprising:
second translation buffer accessing means for accessing the second translation buffer;
second translation indicating means, coupled to the second translation buffer accessing means, for indicating whether translation data for the second virtual address is stored in the second translation buffer; and
second direct address translating means, coupled to the second translation indicating means and to the direct address translation unit, for activating the direct address translation unit to translate the second virtual address when the second translation indicating means indicates that the translation data for the second virtual address is not stored in the second translation buffer, the second direct address translating means including second translation buffer storing means, coupled to the second translation buffer, for storing the translation data for the second virtual address from the master translation memory into the second translation buffer.

2. The apparatus according to claim 1, wherein the first direct address translating means further comprises second translation buffer storing means, coupled to the second translation buffer, for storing the translation data for the first virtual address from the master translation memory into the second translation buffer.

3. The apparatus according to claim 2, wherein the second direct address translating means further comprises first translation buffer storage means, coupled to the first translation buffer, for storing the translation data for the second virtual address from the master translation memory into the first translation buffer.

4. The apparatus according to claim 3 further comprising:
a third translation buffer, associated with a third instruction pipeline, for storing a third subset of translation data from the master translation memory;
a third address translator, coupled to the third instruction pipeline and to the third translation buffer, for translating a third virtual address received from the third instruction pipeline into a corresponding third real address, the third address translator comprising:
third translation buffer accessing means for accessing the third translation buffer;
third translation indicating means, coupled to the third translation buffer accessing means, for indicating whether translation data for the third virtual address is stored in the third translation buffer; and
third direct address translating means, coupled to the third translation indicating means and to the direct address translation unit, for activating the direct address translation unit to translate the third virtual address when the third translation indicating means indicates that the translation data for the third virtual address is not stored in the third translation buffer, the third direct address translating means including third translation buffer storing means, coupled to the third translation buffer, for storing the translation data for the third virtual address from the master translation memory into the third translation buffer.

5. The apparatus according to claim 4, wherein the third translation buffer storing means is the only means for storing translation data into the third translation buffer.

6. The apparatus according to claim 5, wherein the first instruction pipeline comprises a first load instruction pipeline for processing instructions which cause data to be loaded from a memory; and wherein the third instruction pipeline comprises a store instruction pipeline for processing instructions which cause data to be stored into the memory.

7. The apparatus according to claim 6, wherein the second instruction pipeline comprises a second load instruction pipeline for processing instructions which cause data to be loaded from the memory.

8. A method for translating virtual addresses in a computing system having at least a first and a second instruction pipeline and a direct address translation unit for translating virtual addresses into real addresses, the direct address translation unit including a master translation memory for storing translation data, the direct address translation unit for translating a virtual address into a corresponding real address, comprising the steps of:
storing a first subset of translation data from the master translation memory into a first translation buffer associated with the first instruction pipeline;
translating a first virtual address received from the first instruction pipeline into a corresponding first real address, wherein the first virtual address translating step comprises the steps of:
accessing the first translation buffer;
indicating whether translation data for the first virtual address is stored in the first translation buffer;
activating the direct address translation unit to translate the first virtual address when the translation data for the first virtual address is not stored in the first translation buffer; and
storing the translation data for the first virtual address from the master translation memory into the first translation buffer;
storing a second subset of translation data from the master translation memory into a second translation buffer associated with the second instruction pipeline; and
translating a second virtual address received from the second instruction pipeline into a corresponding second real address, wherein the second virtual address translating step comprises the steps of:
accessing the second translation buffer;
indicating whether translation data for the second virtual address is stored in the second translation buffer;
activating the direct address translation unit to translate the second virtual address when the translation data for the second virtual address is not stored in the second translation buffer; and
storing the translation data for the second virtual address from the master translation memory into the second translation buffer.

9. The method according to claim 8 further comprising the step of: storing the translation data for the first virtual address from the master translation memory into the second translation buffer whenever translation data for the first virtual address from the master translation memory is stored into the first translation buffer.

10. The method according to claim 9 further comprising the step of: storing the translation data for the second virtual address from the master translation memory into the first translation buffer whenever translation data for the second virtual address from the master translation memory is stored into the second translation buffer.

11. The method according to claim 10 further comprising the steps of:
storing a third subset of translation data from the master translation memory into a third translation buffer associated with a third instruction pipeline; and
translating a third virtual address received from the third instruction pipeline into a corresponding third real address, wherein the third virtual address translating step comprises the steps of:
accessing the third translation buffer;
indicating whether translation data for the third virtual address is stored in the third translation buffer;
activating the direct address translation unit to translate the third virtual address when the translation data for the third virtual address is not stored in the third translation buffer; and
storing the translation data for the third virtual address from the master translation memory into the third translation buffer.

12. The method according to claim 11, wherein the step of storing the translation data for the third virtual address comprises the step of storing translation data for only the third virtual address in the third translation buffer.

13. The method according to claim 12, wherein the first instruction pipeline comprises a first load instruction pipeline for processing instructions which cause data to be loaded from a memory; and wherein the third instruction pipeline comprises a store instruction pipeline for processing instructions which cause data to be stored in the memory.

14. The method according to claim 13, wherein the second instruction pipeline comprises a second load instruction pipeline for processing instructions which cause data to be loaded from the memory.

Disclaimer: Justia Dockets & Filings provides public litigation records from the federal appellate and district courts. These filings and docket sheets should not be considered findings of fact or liability, nor do they necessarily reflect the view of Justia.

