US20050152192A1 - Reducing occupancy of digital storage devices - Google Patents


Info

Publication number
US20050152192A1
US20050152192A1 (application US 11/019,099)
Authority
US
United States
Prior art keywords
block
storage device
fingerprint
blocks
digital data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/019,099
Inventor
Manfred Boldy
Peter Sander
Hermann Stamm-Wilbrandt
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: STAMM-WILBRANDT, HERMANN, SANDER, PETER, BOLDY, MANFRED
Publication of US20050152192A1
Priority to US12/892,468 (published as US 8,327,061 B2)
Legal status: Abandoned

Classifications

    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00 - Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/10 - Digital recording or reproducing
    • G11B 20/12 - Formatting, e.g. arrangement of data block or words on the record carriers

Definitions

  • each disk (platter) is arranged into blocks of fixed length by repeatedly writing a definite pattern like “$5A”. After formatting, when storing data in such disk storage devices, these data are stored as continuous data segments on the disk (platter). These continuous data segments are also referred to as “data blocks” or, simply, “blocks” and such terminology will be used hereinafter.
  • a disk drive system comprising a sector buffer having a plurality of segments for storing data and reducing storage occupancy is disclosed in U.S. Pat. No. 6,092,145 assigned to the assignee of the present invention.
  • HDD systems require a sector buffer memory to temporarily store data in the HDD system because the data transfer rate of the disk is not equal to the data transfer rate of a host computer and thus a sector buffer is provided in order to increase the data I/O rate of new high capacity HDD systems.
  • the system described therein particularly includes a controller for classifying data to be stored in the sector buffer and for storing a portion of the classified data in a segment of the sector buffer such that the portion of classified data stored in the segment is not stored in any other segment in the sector buffer. Therefore, the sector buffer is handled more efficiently, and the computational load to check for duplicated data is reduced and the disk drive thus improves data transfer efficiency.
  • In U.S. Pat. No. 5,732,265 it is further disclosed to implement such an encoder in an operating system or file system to dynamically optimize storage in the memory system of the computer, wherein the above-described mechanism is applied at the time a file is created or saved on a data volume to detect whether the file is a duplicate of another existing file on the data volume.
  • a further object is to provide such a data storage device with enhanced data access and transmission performance.
  • Another object is to provide a mechanism for minimizing data occupancy in such a data storage device that is transparent to an operating system of a computer using the data storage device.
  • the underlying concept of the invention is to physically store blocks of identical data only once on the storage medium of the data storage device wherein a second block or even further identical blocks are stored only as reference(s) referring to the first block of these identical blocks.
  • storage of duplicate data is most effectively avoided at the lowest storage level of the disk storage device, even in cases where identical blocks are written by different operating systems.
  • the proposed method thereby effectively avoids data duplicates being created on the sector level of the storage medium.
  • the proposed mechanism is operating system independent or fully transparent to an operating system, respectively, since it operates on the pre-mentioned block/sector level which is not known by the operating system.
  • the invention proposes, when writing to an existing block of information on the storage medium, not to modify the real block itself but rather to modify only the relatively small reference table.
  • identical blocks of information are stored only once on the block level of the storage device and accessed or addressed only using reference information stored in the reference table.
  • the underlying storage medium (magnetic hard disk (platter), optical disk, tape, or M-RAM) is segmented into two areas, the first area comprising a relatively small block reference table (in the following briefly referred to as “reference table”) and the second comprising the remaining physical storage area for storing real blocks of information.
  • the present invention can also be applied to tape storage devices since it does not depend on the underlying data access mechanism.
  • the possible entries of the reference table are continuously numbered wherein the reference table contains, for each real block, at least one entry.
  • This entry contains a unique identifier for identifying the physical sector where the real block is stored in the remaining physical storage area.
  • the length of this entry is preferably defined as the maximum amount of required binary digits (bits) for real sector IDs.
  • a real block stored in the second area of the storage medium comprises, in addition to other required information like a header, the stored data and a Cyclic Redundancy Checking (CRC), a reference counter. That counter counts the number of references to the present real block.
  • the reference counter is preferably used to identify whether a block is used or not.
  • the number of real blocks available for storing equals the number of entries of the reference table. Only later, during operation of the storage medium, when the second area of the storage medium is filled with blocks of real data, is the size of the reference table adapted and its optimum size determined. The optimum size can thus be re-calculated on a periodic time basis.
  • a fingerprint table is created during the above-described low-level formatting or in a successive formatting step after the low-level formatting, a so-called “intermediate format” of the storage medium.
  • three tables are created: the above mentioned reference table, a linkage or chain table, and a fingerprint table.
  • Implementation of the fingerprint table presumes that for each block to be written a “fingerprint” can be calculated.
  • An exemplary fingerprint algorithm is a cyclic redundancy check (CRC) mechanism which preferably is used for calculation of the entries of the fingerprint table.
  • CRC is a well-known mechanism of checking for errors in data that has been transmitted on a communications link.
  • a sending device applies a 16- or 32-bit polynomial to a block of data that is to be transmitted and appends the resulting cyclic redundancy code to the block.
  • the receiving end applies the same polynomial to the data and compares its result with the result appended by the sender. If they agree, the data has been received successfully. If not, the sender can be notified to resend the block of data.
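A fingerprint of this kind can be sketched with Python's standard `zlib.crc32` (the 32-bit CRC polynomial used by ZIP and PNG). The patent does not prescribe a particular polynomial, so the choice of CRC-32 here is only illustrative:

```python
import zlib

def fingerprint(block: bytes) -> int:
    # 32-bit CRC over the block's data content, used as its fingerprint.
    return zlib.crc32(block) & 0xFFFFFFFF

# Identical blocks always yield identical fingerprints; blocks with
# different content usually (but not always) yield different ones.
a = b"\x5a" * 512          # a block filled with the "$5A" pattern
b = b"\x5a" * 512
c = bytes(512)             # a zero-filled block
assert fingerprint(a) == fingerprint(b)
assert fingerprint(a) != fingerprint(c)
```

Because two different blocks can share a fingerprint, a fingerprint match must always be confirmed by a byte-for-byte comparison, as the procedures below do.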
  • the fingerprint table for a given fingerprint value, contains the first block identified by a block identifier (BLOCK-ID) with that fingerprint.
  • the chain table, in that embodiment, is doubly linked and contains, for each real block, its predecessor and successor in the list of blocks with equal fingerprint, as well as the reference count and the fingerprint of the corresponding block.
  • the reference table, in that embodiment, is continuously numbered and contains at least one entry for each real block. That entry preferably consists of the mentioned BLOCK-ID.
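The three tables and the free-block stack might be laid out as follows. This is a sketch only: ‘undef’ is modeled as Python's None, the table sizes are illustrative, and entry widths and on-disk encoding are not addressed:

```python
NUM_BLOCKS = 8   # illustrative number of real blocks

# Reference table R: continuously numbered; entry s holds the BLOCK-ID of
# the real block referenced by sector s, or None ('undef').
R = [None] * NUM_BLOCKS

# Fingerprint table FP: for a given fingerprint value, the first BLOCK-ID
# with that fingerprint.
FP = {}

# Chain table L: doubly linked; for each real block its predecessor and
# successor in the list of blocks with equal fingerprint, plus the
# reference count and the fingerprint of the block.
L = [{"prev": None, "next": None, "rc": 0, "fn": None}
     for _ in range(NUM_BLOCKS)]

# LIFO stack U of unused (real) blocks.
U = list(range(NUM_BLOCKS))
```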
  • a particular storage area on the storage medium is reserved for the reference table and thus cannot be occupied by real (user) data.
  • the real data is only stored in a real sector wherein occupation of the real sector advantageously can move from outer tracks to inner tracks of the storage medium.
  • the reference table is stored outside the storage medium of the storage device, preferably in an Electronically Erasable Programmable Read-Only Memory (EEPROM/Flash RAM) being part of the storage device or a virtual RAM disk storage being part of the main storage of an underlying computer system.
  • FIGS. 1A and 1B depict schematic views of an available storage space of a storage device for illustrating segmentation of the storage medium into two different areas ( FIG. 1A ) and for illustrating the principle of expandable sector storage ( FIG. 1B ) in accordance with the invention;
  • FIG. 2 depicts a reference table according to the preferred embodiment of the invention
  • FIG. 3 depicts a fingerprint table according to the preferred embodiment of the invention.
  • FIG. 4 depicts a LIFO stack of free blocks according to the preferred embodiment of the invention.
  • FIG. 5 depicts a linkage/chain table according to the preferred embodiment of the invention.
  • FIGS. 6A, 6B and 6C comprise a multiple-part flow diagram illustrating a BLOCK WRITE procedure conducted in an HDD device in accordance with the invention;
  • FIG. 7 is a flow diagram illustrating a BLOCK READ procedure conducted in an HDD device in accordance with the invention.
  • FIG. 8 is a flow diagram illustrating a HIGH-LEVEL FORMATTING procedure conducted in a Hard Disk Drive (HDD) in accordance with the invention
  • FIG. 9A is a flow diagram illustrating a procedure for FINDING THE POSITION OF A BLOCK IN A LIST USING A FINGERPRINT conducted in an HDD in accordance with the invention
  • FIG. 9B is a flow diagram illustrating a procedure for REMOVING A BLOCK FROM A LIST USING A FINGERPRINT conducted in an HDD in accordance with the invention.
  • FIG. 9C is a flow diagram illustrating a procedure for PREPENDING ‘B’ TO A LIST WITH FINGERPRINT ‘FN’ conducted in an HDD in accordance with the invention.
  • FIG. 10A is a flow diagram illustrating INITIALIZATION OF AN EMPTY STACK
  • FIG. 10B is a flow diagram illustrating an operation of PUSHING AN ELEMENT ONTO A STACK.
  • FIG. 10C is a flow diagram illustrating an operation of RETRIEVING THE LAST PUSHED ELEMENT FROM THE STACK.
  • FIGS. 1A and 1B schematically show the available storage space of a storage medium of an underlying storage device, the storage space being arranged in accordance with the invention.
  • the underlying storage device can be any storage device storing information in continuous data blocks like sector-oriented magnetic hard disk drives, optical disk drives or tape storage devices, and even semiconductor storage devices emulating or virtually realizing hard disk drives like solid hard disks or RAM disks.
  • FIG. 1A more particularly, illustrates how the underlying storage medium is segmented into two different storage areas 100 , 105 , the first area 100 containing a sector directory (e.g. implemented as a table or the like) used for operational administration of the underlying storage device according to the mechanism described hereinafter and the second area (‘Real Sector’) 105 representing physical storage space for physically storing data.
  • FIG. 1A it is further illustrated by the two arrows 110 , 115 , that the size of each of the two storage areas 100 , 105 can be adapted dynamically during operation of the underlying storage device, mainly depending on the storage capacity requirements of the mentioned sector directory.
  • the required storage size for storing the sector directory again, mainly depends on the number of currently existing data duplicates on sector level to be administered by means of the sector directory.
  • FIG. 1B shows a similar segmentation according to another embodiment of the invention where a number of different storage devices or storage subunits are involved.
  • the sector directory is stored on a storage medium 150 of a first storage device wherein the real blocks are stored on the storage media 155 , 160 , 165 of other devices.
  • the sector storage area can be expanded nearly arbitrarily, as indicated by arrow 170 .
  • a fingerprint value (fn) can be calculated.
  • a known example for a fingerprint used in storage media is the above mentioned mechanism of Cyclic Redundancy Checking (CRC).
  • the mechanism for reducing storage occupancy in accordance with the invention is based on segmentation of the storage area of the HDD or other storage device into two different areas, the first area containing a sector table and the second area intended for physically storing data.
  • In that sector table area there is stored a reference table R containing at least one entry for each real block of data.
  • the possible entries of that table are continuously numbered whereby each entry comprises a unique identifier (ID) of a stored block.
  • the sector table area also includes a fingerprint table ‘FP’.
  • the FP table contains, for each possible fingerprint value A034, A035, . . . , the ID of the first block with that fingerprint.
  • it comprises a LIFO (last in—first out) stack U ( FIG. 4 ) of unused (real) blocks and a doubly-linked table L ( FIG. 5 ) that comprises for a given block indicated by block number . . . , 14557, 14558, . . . the following information:
  • the number of available fingerprint values should be on the order of the number of real blocks available in the HDD.
  • the number of fingerprints is equal to the number of blocks, which guarantees that the average number of blocks with equal fingerprint value is smaller than 1. Even if, in some lists of the above tables, the number of blocks with identical fingerprint is larger than 1, other fingerprint values then do not occur at all, and the inequality of a new block compared with all blocks already stored on the HDD can often be ascertained without a physical read of the block.
  • n is the number of bytes required for storing block numbers. For example, four bytes (thirty-two bits) are sufficient up to a storage capacity of two terabytes of the underlying storage device if the block size is 512 bytes (2^32 * 512 bytes = 2 TB), and three bytes are sufficient for a storage capacity of 16 million blocks.
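The sizing rule above can be checked with a few lines; the helper name is ours, not the patent's:

```python
import math

BLOCK_SIZE = 512  # bytes per block, as in the text

def bytes_for_block_numbers(num_blocks: int) -> int:
    # Smallest whole number of bytes that can encode num_blocks distinct block IDs.
    return max(1, math.ceil(math.log2(num_blocks) / 8))

# Four bytes address 2^32 blocks: 2^32 * 512 bytes = 2 TB of capacity.
assert bytes_for_block_numbers(2**32) == 4
assert 2**32 * BLOCK_SIZE == 2 * 1024**4
# Three bytes suffice for 16 million blocks (up to 2^24, about 16.7 million).
assert bytes_for_block_numbers(16_000_000) == 3
```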
  • In FIGS. 6 to 9 it is described in more detail by way of flow diagrams how the particular operations ‘BLOCK WRITE’, ‘BLOCK READ’, ‘HIGH-LEVEL FORMATTING’, ‘FINDING THE POSITION OF A BLOCK IN A LIST’, ‘REMOVING A BLOCK FROM A LIST’ and ‘INSERTING A BLOCK INTO A LIST’ (the last three operations by using a fingerprint) are performed in a sector-oriented storage in accordance with the invention.
  • These operations and the method of operating a storage device are sufficient to guarantee that any block is stored exactly once on the storage medium and that different sectors containing the same block contain only references to this one block, while limiting the processing overhead to do so.
  • the mechanism and method in accordance with the invention must quickly check whether a block, blk, is already stored on the storage medium, which can be very large. This reduction in processing time is achieved by calculating a fingerprint for block blk and then quickly searching the relatively short list of blocks already present with the same fingerprint, fn. It should be noted that blocks containing different data may, nevertheless, result in the same fingerprint being calculated. However, since the number of possible fingerprints which can result from calculation based on the data content of a block is very large, the list of blocks having different content which may have the same (or any given) fingerprint will be a very small fraction of the number of blocks stored and the search can thus be performed very quickly on a list of blocks which will generally be very short.
  • For the BlockWrite operation it is assumed that a data block ‘blk’ is to be written at a position of the HDD designated with block number ‘s’.
  • procedural steps shown in FIGS. 6A-6C are performed. It is noted that the three parts of the entire flow diagram are linked at cardinal points ‘B’ and ‘C’, respectively.
  • a fingerprint ‘fn’ is calculated.
  • An appropriate method for calculating the fingerprint is the above-mentioned known CRC mechanism although other appropriate and possible techniques for computing a fingerprint will be evident to those skilled in the art.
  • the HDD position number, s, at which the block is to be written is looked up 605 in the reference table R at block position ‘s’ and the resulting ID entry ‘b’ is checked in the next step 610 to determine if the entry ‘b’ is undefined (‘undef’). If this condition is fulfilled (i.e. b is not defined because nothing has been previously stored for sector s) then the procedure continues with step 655 shown in FIG. 6B .
  • In step 620 the whole bit pattern of ‘b’ is read and stored in ‘orig’.
  • In step 615 , if the fingerprint ‘fn’ calculated for the block to be written is not identical to the fingerprint value stored in table L for the present block entry ‘b’, it is checked in step 650 if the reference count value ‘rc’ contained in table L for entry ‘b’ is equal to ‘1’. If so, the procedure is continued with the next step linked to point ‘B’ shown in FIG. 6B . Otherwise the reference count ‘rc’ is decreased by ‘1’ in the following step 645 .
  • FIG. 6B it is described how the above BlockWrite procedure is continued at cardinal point ‘B’ to make entry ‘b’ available for writing with step 655 where the reference count value of entry ‘b’ in the linkage table L is set ‘0’.
  • step 660 the entry ‘b’ is removed from the list contained in the fingerprint table FP for fingerprint ‘fn’.
  • the underlying procedure for the removal of entry ‘b’ is described in more detail referring to FIG. 9B .
  • steps 665 - 680 surrounded by line 690 relate to a mechanism for handling physically defective blocks in an HDD and thus represent an optional but further advantageous perfecting feature of the invention.
  • a gray code is physically written at block ‘b’ of the HDD.
  • that block ‘b’ is physically read and stored temporarily as variable ‘aux’.
  • In step 675 it is then checked if the data pattern temporarily stored in ‘aux’ is equal to the original gray code. If not, the present block can be assumed to be defective and thus in the following step 680 that block is marked as defective simply by setting the reference count ‘rc’ of that block to ‘-1’.
  • the necessary stack operations are described in detail below with reference to FIGS. 10A-10C .
  • In FIG. 6C it is illustrated how the presently described procedure continues at cardinal point ‘C’.
  • the position of a block ‘blk’ with fingerprint value ‘fn’ in the list with all blocks of fingerprint value ‘fn’ (FP[fn]) is determined.
  • the underlying procedure for finding that position is described in more detail hereinafter referring to FIG. 9A .
  • In step 705 it is checked if b is undefined (‘undef’), indicating that no block in the list is identical to ‘blk’.
  • If ‘YES’, the above described pop(U) operation is performed on the LIFO stack U in step 710 to obtain a free block for storing.
  • In step 715 , ‘b’ is inserted into the fingerprint table FP with the above calculated fingerprint value fn. For the details of that insertion procedure, reference is made to the following description of FIG. 9C .
  • In step 720 the bit pattern of block ‘blk’ is physically written to the HDD at real block ‘b’ accordingly. Thereafter, the reference count ‘rc’ of ‘b’ is set 725 to the value ‘1’ in the linkage table L, because the block ‘blk’ is stored for the first time on the storage device.
  • the fingerprint value of ‘b’ in table L is set 730 to the above calculated value fn.
  • In step 735 , at the position s of the reference table R, the value b is entered. Then the present procedure is terminated by step 740 .
  • In step 745 the reference count ‘rc’ of ‘b’ in the linkage table L is increased by ‘1’.
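Taken together, steps 600-745 can be sketched in Python. This is an illustrative model only, not the patent's implementation: ‘undef’ is modeled as None, the doubly-linked chain table of FIG. 5 is simplified to a plain list per fingerprint value, the defect-handling steps 665-680 are omitted, and the names (DedupDisk, block_write) are ours:

```python
import zlib

def fingerprint(blk: bytes) -> int:
    # 32-bit CRC over the block's data content (one possible fingerprint).
    return zlib.crc32(blk) & 0xFFFFFFFF

class DedupDisk:
    def __init__(self, num_blocks: int):
        self.R = {}                          # reference table: sector s -> BLOCK-ID
        self.FP = {}                         # fingerprint fn -> list of BLOCK-IDs
        self.rc = [0] * num_blocks           # reference counts (a chain-table column)
        self.fn = [None] * num_blocks        # stored fingerprint per real block
        self.U = list(range(num_blocks))     # LIFO stack of unused real blocks
        self.disk = [None] * num_blocks      # the physical sector storage area

    def block_write(self, s: int, blk: bytes) -> None:
        fn = fingerprint(blk)                      # step 600: calculate fingerprint
        b = self.R.get(s)                          # step 605: look up sector s in R
        if b is not None:                          # step 610: entry defined?
            if self.fn[b] == fn and self.disk[b] == blk:
                return                             # sector already holds this data
            if self.rc[b] == 1:                    # step 650: s was the only reference
                self.rc[b] = 0                     # step 655
                self.FP[self.fn[b]].remove(b)      # step 660: unlink from FP list
                self.U.append(b)                   # freed block goes back on stack U
            else:
                self.rc[b] -= 1                    # step 645: drop one reference
        # point 'C' (FIG. 6C): is an identical block already stored?
        for cand in self.FP.get(fn, []):           # step 700: search the short fn list
            if self.disk[cand] == blk:             # byte-for-byte confirmation
                self.rc[cand] += 1                 # step 745
                self.R[s] = cand
                return
        b = self.U.pop()                           # step 710: pop(U), a free block
        self.FP.setdefault(fn, []).insert(0, b)    # step 715: prepend b to FP[fn]
        self.disk[b] = blk                         # step 720: physical write
        self.rc[b] = 1                             # step 725
        self.fn[b] = fn                            # step 730
        self.R[s] = b                              # step 735
```

Writing the same 512-byte block to two different sectors then consumes a single real block whose reference count is 2; overwriting one of those sectors with different data decrements that count and allocates a fresh block.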
  • In step 800 it is checked if the entry at position ‘s’ is undefined (‘undef’). If so, in step 805 , an arbitrary bit pattern is returned. Otherwise, in step 810 , the real block referenced at position ‘s’ of the reference table R is physically read and returned as block ‘blk’.
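The BlockRead flow of steps 800-810 is short enough to sketch directly; here ‘undef’ is modeled as None and a zero-filled block stands in for the "arbitrary bit pattern" (an implementation choice of ours, not something the patent prescribes):

```python
def block_read(s: int, R: dict, disk: list) -> bytes:
    b = R.get(s)             # look up sector s in the reference table R
    if b is None:            # step 800: entry is undefined ('undef')
        return bytes(512)    # step 805: an arbitrary pattern (zeros chosen here)
    return disk[b]           # step 810: physically read and return the real block

R = {0: 2}                   # sector 0 references real block 2
disk = [None, None, b"\x5a" * 512]
assert block_read(0, R, disk) == b"\x5a" * 512
assert block_read(7, R, disk) == bytes(512)
```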
  • In FIG. 8 a preferred embodiment of a procedure for high-level formatting an HDD is described in detail by way of the depicted flow diagram. This procedure serves for initializing an HDD for applying the HDD operation method according to the invention.
  • In a first step 900 , for all sectors s of the HDD, the corresponding entries of the reference table R are set undefined (‘undef’), i.e. all entries of R. Then, in the fingerprint table FP, for all possible fingerprint values fn, FP[fn] is set 905 undefined (‘undef’).
  • the LIFO stack U is initialized as an empty stack.
  • the corresponding entries for the parameters previous block ‘prev’, next block ‘next’ and fingerprint value ‘fn’ contained in the linkage table L are set undefined (‘undef’), while the entry for the parameter reference count ‘rc’ is set ‘0’.
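The initialization of steps 900-910 might be sketched as follows, with ‘undef’ modeled as None. How the freshly formatted blocks subsequently reach the free-block stack (e.g. after the defect check of FIG. 6B) is not covered here, and the function name is ours:

```python
def high_level_format(num_sectors: int, num_blocks: int):
    R = {s: None for s in range(num_sectors)}  # step 900: all entries of R 'undef'
    FP = {}                                    # step 905: FP[fn] 'undef' for all fn
    U = []                                     # step 910: LIFO stack U starts empty
    L = [{"prev": None, "next": None, "fn": None, "rc": 0}
         for _ in range(num_blocks)]           # chain-table entries reset, rc = 0
    return R, FP, U, L

R, FP, U, L = high_level_format(4, 4)
assert all(v is None for v in R.values())
assert FP == {} and U == []
```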
  • the procedure starts with step 1000 where it is checked if a given fingerprint entry of the fingerprint table FP is undefined (‘undef’). If ‘yes’, ‘undef’ is returned 1005 since the list of blocks with fingerprint fn is empty in this case. Otherwise, the first block of the list of blocks with fingerprint fn is denoted 1010 by ‘b’. In the following step 1015 the block ‘b’ is physically read and temporarily stored as variable ‘orig’. Then it is checked 1020 if the bit patterns of blk and orig are identical. If so, the block ID ‘b’ is returned 1025 .
  • Otherwise, the next block stored in column ‘next’ of the linkage table ‘L’ for the present block ‘b’ is set 1030 as the new block ‘b’. Thereafter it is checked 1035 if the new block ‘b’ is undefined (‘undef’), i.e. the list is completely traversed. If so, ‘undef’ is returned 1040 . Otherwise the procedure jumps back to step 1015 , which is then executed for the next block ‘b’.
  • the block ID of a block in the list identical to ‘blk’ is returned, if one exists, and otherwise ‘undef’ is returned as indication of non-existence of such a block.
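The traversal of FIG. 9A (steps 1000-1040) can be sketched as a walk along the ‘next’ links of the chain table; ‘undef’ is modeled as None and the function name and table shapes are ours:

```python
def find_in_list(blk, fn, FP, L, disk):
    b = FP.get(fn)               # steps 1000/1010: head of the fn list, or None
    while b is not None:         # step 1035: None means the list is traversed
        orig = disk[b]           # step 1015: physical read into 'orig'
        if orig == blk:          # step 1020: bit patterns identical?
            return b             # step 1025: identical block found
        b = L[b]["next"]         # step 1030: follow the 'next' link
    return None                  # steps 1005/1040: 'undef'

# Two blocks (IDs 5 and 9) sharing the illustrative fingerprint value 42:
disk = {5: b"A" * 8, 9: b"B" * 8}
FP = {42: 5}
L = {5: {"next": 9}, 9: {"next": None}}
assert find_in_list(b"B" * 8, 42, FP, L, disk) == 9
assert find_in_list(b"C" * 8, 42, FP, L, disk) is None
```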
  • the procedure for removing a block ‘b’ with a fingerprint value ‘fn’ from the linkage table L starts with checking 1100 in the fingerprint table ‘FP’ if block ‘b’ is the first block in that list. If so, the next block of ‘b’ contained in linkage table ‘L’ is fetched and set 1105 in the fingerprint table ‘FP’ as the first block with that ‘fn’ value. Thereafter, in the linkage table ‘L’, the ‘prev’ value of that next block is set 1110 ‘undef’. In the following steps 1115 and 1120 the ‘next’ and ‘prev’ values of the present block ‘b’ are both set ‘undef’.
  • In step 1100 , if the current block ‘b’ is not the first block in the list, the procedure jumps to the entry for the previous block ‘prev’ of present block ‘b’ in the linkage table ‘L’ and sets 1125 the next block ‘next’ of that entry to the next block ‘next’ of the current block ‘b’ in the ‘L’ table.
  • In step 1130 , following in this path, it is then checked if the ‘next’ entry set in step 1125 is ‘undef’. If so, this path is continued with step 1115 followed by step 1120 as described beforehand.
  • Otherwise, an intermediate step 1135 is executed in which the procedure jumps to the entry for the next block ‘next’ of present block ‘b’ in the linkage table ‘L’ and sets the previous block ‘prev’ of that entry to the previous block ‘prev’ of the current block ‘b’ in the ‘L’ table.
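The unlinking of FIG. 9B (steps 1100-1135) is a standard doubly-linked-list removal; the following sketch models ‘undef’ as None, and the step mapping in the comments is approximate:

```python
def remove_from_list(b, FP, L):
    fn = L[b]["fn"]
    prev, nxt = L[b]["prev"], L[b]["next"]
    if FP.get(fn) == b:            # step 1100: is b the first block of the list?
        FP[fn] = nxt               # step 1105: its successor becomes the list head
        if nxt is not None:
            L[nxt]["prev"] = None  # step 1110
    else:
        L[prev]["next"] = nxt      # step 1125: bypass b in the forward direction
        if nxt is not None:        # step 1130
            L[nxt]["prev"] = prev  # step 1135: ...and in the backward direction
    L[b]["next"] = None            # step 1115
    L[b]["prev"] = None            # step 1120

# Chain 1 <-> 2 <-> 3 under fingerprint value 7; remove the middle block:
L = {1: {"prev": None, "next": 2, "fn": 7},
     2: {"prev": 1, "next": 3, "fn": 7},
     3: {"prev": 2, "next": None, "fn": 7}}
FP = {7: 1}
remove_from_list(2, FP, L)
assert FP[7] == 1 and L[1]["next"] == 3 and L[3]["prev"] == 1
```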
  • In this procedure for insertion of a block ‘b’ having a fingerprint value ‘fn’ into a linkage table ‘L’ and a fingerprint table ‘FP’ shown in FIG. 9C , it is first checked 1200 if the underlying entry for ‘fn’ in the fingerprint table ‘FP’ is ‘undef’. If so, the next block entry and the previous block entry of the ‘L’ table for ‘b’ are both set 1205 , 1210 ‘undef’. After this, ‘b’ is inserted with its fingerprint value ‘fn’ in the ‘FP’ table and the procedure is terminated 1220 , i.e. in this case the list consists of block ‘b’ only.
  • Otherwise, the procedure continues with step 1225 of a second path, where the first block gathered from the fingerprint table ‘FP’ for the present fingerprint value ‘fn’ is set as the next block ‘next’ of ‘b’ in the linkage table ‘L’. Thereafter the previous block ‘prev’ contained in the linkage table ‘L’ for that first block is set 1230 to ‘b’ (b is pre-pended to the list FP[fn]).
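The prepend of FIG. 9C (steps 1200-1230) completes the list operations; as before, ‘undef’ is modeled as None and the step mapping in the comments is approximate:

```python
def prepend_to_list(b, fn, FP, L):
    head = FP.get(fn)
    if head is None:             # step 1200: list for fn was empty ('undef')
        L[b]["next"] = None      # step 1205
        L[b]["prev"] = None      # step 1210
    else:
        L[b]["next"] = head      # step 1225: the old first block becomes b's successor
        L[b]["prev"] = None
        L[head]["prev"] = b      # step 1230
    FP[fn] = b                   # b is now the first block of the list FP[fn]
    L[b]["fn"] = fn

L = {4: {"prev": None, "next": None, "fn": None},
     5: {"prev": None, "next": None, "fn": None}}
FP = {}
prepend_to_list(4, 9, FP, L)   # list was empty: FP[9] = 4
prepend_to_list(5, 9, FP, L)   # 5 is prepended: FP[9] = 5, 5 -> 4
assert FP[9] == 5 and L[5]["next"] == 4 and L[4]["prev"] == 5
```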
  • a SAN is a high-speed special purpose digital network that interconnects different kinds of data storage devices with associated data servers on behalf of a larger network of users.
  • a storage area network is part of the overall network of computing resources for an enterprise.
  • a storage area network is usually clustered in close proximity to other computing resources such as IBM S/390 mainframe computers but may also extend to remote locations for backup and archival storage, using wide area network carrier technologies such as Asynchronous Transfer Mode (ATM) or Synchronous Optical Network (SONET).
  • a NAS is hard disk storage that is set up with its own network address rather than being attached to the department computer that serves applications to a network's workstation users. By removing storage access and its management from the department server, both application programming and files can be served faster because they are not competing for the same processor resources.
  • the network-attached storage device is attached to a local area network (typically, an Ethernet network, the most widely-installed local area network (LAN) technology) and assigned an IP address. File requests are mapped by the main server to the NAS file server.
  • a network-attached storage consists of hard disk storage, including multi-disk RAID (redundant array of independent disks) systems, and software for configuring and mapping file locations to the network-attached device.
  • Network-attached storage can be a step toward and included as part of the above mentioned more sophisticated storage system known as SAN.
  • the sector table (including the above described tables) is separated physically from the sector storage, i.e. both are implemented on different disk storage devices (e.g. HDDs).
  • today's HDD controllers are able to manage 100 or even more HDDs.
  • the mentioned stack of sector storage HDDs can, in case of need, be extended easily insofar as only the sector table arranged on the first HDD has to be enlarged.
  • the sector table in another embodiment, can also be arranged in a solid-state random access memory (RAM) thus enhancing processing speed for managing the sector table.

Abstract

A digital data storage device physically stores blocks of identical data only once on its storage medium, wherein a second or any further identical block is stored only as a reference referring to the first block of these identical blocks. By this technique, storage of duplicate data is most effectively avoided on the lowest storage level of the disk storage device, even in cases where identical blocks are written by different operating systems. In the preferred embodiment, the underlying storage medium (magnetic hard disk, optical disk, tape, or M-RAM) is segmented into two areas, the first area comprising a relatively small block reference table and the second comprising the remaining physical storage area for storing real blocks of information.

Description

    FIELD OF THE INVENTION
  • The invention generally relates to digital data storage devices such as magnetic hard disk drives, optical disk drives, tape storage devices, semiconductor-based storages emulating or virtually realizing hard disk drives like solid hard disks or RAM disks storing information in continuous data blocks. More specifically, the invention concerns operation of such a digital storage device in order to reduce storage occupancy.
  • BACKGROUND OF THE INVENTION
  • In computer hardware technology it is well known to use disk storage devices like hard disk drives (HDDs) or optical disk drives built up of one or a stack of multiple hard disks (platters) on which data is stored in a concentric pattern of magnetic/optical tracks using read/write heads. These tracks are divided into equal arcs or sectors. Two kinds of sectors on such disks are known. The first, at the very lowest level, is the servo sector. In the case of a magnetic storage device, when the hard disks are manufactured, a special binary digit (bit) pattern is written in a code called ‘gray code’ on the surface of the disks, while the drive is open in a clean room, with a device called a “servo writer”.
  • This gray code consists of successive numbers that differ by only a single bit, like the three-bit code sequence ‘000’, ‘001’, ‘011’, ‘010’, ‘110’, etc. Although many gray codes are possible, one specific type of gray code is considered the gray code of choice because of its efficiency in computation. Although there are other schemes, the gray code is written in a wedge at the start of each sector. There are a fixed number of servo sectors per track and the sectors are adjacent to one another. This pattern is permanent and cannot be changed by writing normal data to the disk drive. It also cannot be changed by low-level formatting the drive.
  • Disk drive electronics use feedback from the heads, which read the gray code pattern, to very accurately position and constantly correct the radial position of the appropriate head over the desired track at the beginning of each sector, to compensate for variations in disk (platter) geometry caused by mechanical stress and thermal expansion or contraction.
  • At the end of the manufacturing process, the hard disk storage devices generally are low-level formatted. Afterwards, only high-level operations are performed such as known partitioning procedures, high-level formatting and read/write of data in the form of blocks as mentioned above. All high-level operations can be derived from only two base operations, namely a BlockRead and a BlockWrite operation. Thus even partitioning and formatting, the latter independently of the underlying formatting scheme like MS-DOS FAT, FAT32, NTFS or LINUX EXT2, are accomplished using the mentioned base operations.
  • When high-level formatting such a disk drive, each disk (platter) is arranged into blocks of fixed length by repeatedly writing a definite pattern like “$5A”. After formatting, when storing data in such disk storage devices, these data are stored as continuous data segments on the disk (platter). These continuous data segments are also referred to as “data blocks” or, simply, “blocks”, and such terminology will be used hereinafter.
  • It is to be noted that, in known tape storage devices, data are stored in the form of data blocks as well. The only difference between the above-described hard disk devices and these tape storage devices is that data stored on HDDs are directly accessible by means of the read/write head (so-called direct memory access DMA operation mode), whereas data stored on tapes are only accessible in a sequential manner, since the tape has to be wound to the location where the data of interest are stored before these data can be accessed.
  • In order to minimize storage occupancy in those storage devices, it is known to avoid duplicate data. A disk drive system comprising a sector buffer having a plurality of segments for storing data and reducing storage occupancy is disclosed in U.S. Pat. No. 6,092,145 assigned to the assignee of the present invention. Generally, HDD systems require a sector buffer memory to temporarily store data in the HDD system because the data transfer rate of the disk is not equal to the data transfer rate of a host computer and thus a sector buffer is provided in order to increase the data I/O rate of new high capacity HDD systems. The system described therein particularly includes a controller for classifying data to be stored in the sector buffer and for storing a portion of the classified data in a segment of the sector buffer such that the portion of classified data stored in the segment is not stored in any other segment in the sector buffer. Therefore, the sector buffer is handled more efficiently, and the computational load to check for duplicated data is reduced and the disk drive thus improves data transfer efficiency.
  • The subject matter of the U.S. Pat. No. 6,092,145, in other words, concerns an improved method for read-ahead and write-ahead operations using a sector buffer wherein duplicates are eliminated only in the sector buffer implemented on the hard disk or a separate Random Access Memory (RAM), in order to provide the improved transfer efficiency mentioned above.
  • Another approach for optimizing storage occupancy is disclosed in U.S. Pat. No. 5,732,265 assigned to Microsoft Corporation. Particularly disclosed is an encoder for use in CD-ROM pre-mastering software. The storage in the computer readable recording medium (CD-ROM) is optimized by eliminating redundant storage of identical data streams for duplicate files whereby two files having equivalent data streams are detected and encoded as a single data stream referenced by the respective directory entries of the files. More particularly addressed therein is the problem of data consistency that arises when multiple files are encoded as a single data stream and when these files are separately modified by an operating system or application program. In U.S. Pat. No. 5,732,265 it is further disclosed to implement such an encoder in an operating system or file system to dynamically optimize storage in the memory system of the computer wherein the above-described mechanism is applied at the time a file is created or saved on a data volume to detect whether the file is a duplicate of another existing file on the data volume.
  • The above-discussed prior art approaches, however, have a disadvantage in that they do not address reduction of storage occupancy of stored user data (e.g. within a file or between files) stored in one of the above-identified data storage devices. As an example, consider text or picture files where blocks frequently consist entirely of a recurring data byte, which is regarded as duplicate data in the present context. Nevertheless, as computer usage and the application programs supporting it have become more sophisticated, there is an increased likelihood that relatively large portions of individual (possibly large) files comprising many blocks of data may be duplicated in many stored files; letterheads and watermarks stored in documents, and portions of image files representing relatively large image areas having little detail therein, are only a few examples. Further, as the capacity of memory devices increases, it becomes even more clearly impractical to compare a block to be stored with all blocks which may have been previously stored in one or more memory devices to determine if an identical block has been previously stored.
  • SUMMARY OF THE INVENTION
  • It is therefore an object of the present invention to provide an improved mechanism for minimizing data occupancy in an above specified digital data storage device.
  • A further object is to provide such a data storage device with enhanced data access and transmission performance.
  • Another object is to provide a mechanism for minimizing data occupancy in such a data storage device that is transparent to an operating system of a computer using the data storage device.
  • The above objects are achieved by a digital data storage device and a method for operating same in accordance with the respective independent claims. Advantageous features are subject matter of the corresponding subclaims.
  • The underlying concept of the invention is to physically store blocks of identical data only once on the storage medium of the data storage device, wherein a second block or even further identical blocks are stored only as reference(s) referring to the first block of these identical blocks. As a consequence, storage of duplicate data is most effectively avoided at the lowest storage level of the disk storage device, even in cases where identical blocks are written by different operating systems. The proposed method thereby effectively avoids data duplicates being created on the sector level of the storage medium. The proposed mechanism is operating system independent, or fully transparent to an operating system, since it operates on the aforementioned block/sector level, which is not visible to the operating system. In contrast to the above-discussed known approaches, the invention proposes, when writing an existing block of information onto the storage medium, not to modify the real block itself but rather to modify only the relatively small reference table. Thus identical blocks of information are stored only once on the block level of the storage device and are accessed or addressed only using reference information stored in the reference table.
  • In the preferred embodiment, the underlying storage medium (magnetic hard disk (platter), optical disk, tape, or M-RAM) is segmented into two areas, the first area comprising a relatively small block reference table (in the following briefly referred to as “reference table”) and the remaining physical storage area for storing real blocks of information. Despite differences in the storage mechanism, it is emphasized that the present invention can also be applied to tape storage devices since it does not depend on the underlying data access mechanism.
  • The possible entries of the reference table, in another embodiment, are continuously numbered, wherein the reference table contains, for each real block, at least one entry. This entry contains a unique identifier for identifying the physical sector where the real block is stored in the remaining physical storage area. The length of this entry is preferably defined as the maximum number of binary digits (bits) required for real sector IDs.
  • In yet another embodiment, a real block stored in the second area of the storage medium comprises, in addition to other required information like a header, the stored data and a Cyclic Redundancy Check (CRC) value, a reference counter. That counter counts the number of references to the present real block. The reference counter is preferably used to identify whether a block is in use or not.
  • According to another aspect, as the result of a low-level formatting of the storage medium after manufacture/assembly of the storage device, the number of real blocks available for storing equals the number of entries of the reference table. Only later, during operation of the storage device, as the second area of the storage medium is filled with blocks of real data, is the size of the reference table adapted or its optimum size determined. Thus the optimum size can be recalculated on a periodic basis.
  • According to still another aspect of the invention, during the above-described low-level formatting, or in a successive formatting step after the low-level formatting (a so-called “intermediate format”) of the storage medium, three tables are created: the above-mentioned reference table, a linkage or chain table, and a fingerprint table. Implementation of the fingerprint table presumes that for each block to be written a “fingerprint” can be calculated. An exemplary fingerprint algorithm is a cyclic redundancy check (CRC) mechanism, which preferably is used for calculation of the entries of the fingerprint table. CRC is a well-known mechanism of checking for errors in data that has been transmitted on a communications link. A sending device applies a 16- or 32-bit polynomial to a block of data that is to be transmitted and appends the resulting cyclic redundancy code to the block. The receiving end applies the same polynomial to the data and compares its result with the result appended by the sender. If they agree, the data has been received successfully. If not, the sender can be notified to resend the block of data.
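As an illustration, such a CRC-based fingerprint could be computed with Python's standard `zlib.crc32`. This is a sketch only: the patent names CRC merely as one exemplary fingerprint algorithm, and the fingerprint-table size used here is an illustrative assumption.

```python
import zlib

def fingerprint(block: bytes, num_fingerprints: int = 1 << 24) -> int:
    # Fold the 32-bit CRC into the index range of the fingerprint table.
    # num_fingerprints is an illustrative choice; per the preferred
    # embodiment, #fingerprints should be on the order of #blocks.
    return zlib.crc32(block) % num_fingerprints

# Identical blocks always yield identical fingerprints, so a differing
# fingerprint proves inequality without a physical block compare.
fp = fingerprint(b"\x5a" * 512)
assert fp == fingerprint(b"\x5a" * 512)
```

Blocks with different content may still collide on the same fingerprint, which is why the blocks sharing a fingerprint must additionally be compared physically.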
  • In the preferred embodiment, the fingerprint table, for a given fingerprint value, contains the first block identified by a block identifier (BLOCK-ID) with that fingerprint. The chain table, in that embodiment, is bi-linked and contains, for each real block, its predecessor and successor in the list of blocks with equal fingerprint and the reference count of the corresponding block and the fingerprint of the block. The reference table, in that embodiment, is continuously numbered and contains at least an entry for each real block. That entry preferably consists of the mentioned BLOCK-ID.
  • In order to enable dynamic expansion of the reference table in accordance with the above-mentioned process for optimizing the storage area of the storage medium, in a further embodiment, a particular storage area on the storage medium is reserved for the reference table and thus can not be occupied by real (user) data. The real data is only stored in a real sector wherein occupation of the real sector advantageously can move from outer tracks to inner tracks of the storage medium.
  • According to yet another embodiment, the reference table is stored outside the storage medium of the storage device, preferably in an Electronically Erasable Programmable Read-Only Memory (EEPROM/Flash RAM) being part of the storage device or a virtual RAM disk storage being part of the main storage of an underlying computer system.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the following, the present invention is described in more detail by way of preferred embodiments from which further features and advantages of the invention become evident wherein
  • FIGS. 1A and 1B depict schematic views of an available storage space of a storage device for illustrating segmentation of the storage medium into two different areas (FIG. 1 a) and for illustrating the principle of expandable sector storage (FIG. 1 b) in accordance with the invention;
  • FIG. 2 depicts a reference table according to the preferred embodiment of the invention;
  • FIG. 3 depicts a fingerprint table according to the preferred embodiment of the invention;
  • FIG. 4 depicts a LIFO stack of free blocks according to the preferred embodiment of the invention;
  • FIG. 5 depicts a linkage/chain table according to the preferred embodiment of the invention;
  • FIGS. 6A, 6B and 6C comprise a multiple-part flow diagram illustrating a BLOCK WRITE procedure conducted in an HDD device in accordance with the invention;
  • FIG. 7 is a flow diagram illustrating a BLOCK READ procedure conducted in an HDD device in accordance with the invention;
  • FIG. 8 is a flow diagram illustrating a HIGH-LEVEL FORMATTING procedure conducted in a Hard Disk Drive (HDD) in accordance with the invention;
  • FIG. 9A is a flow diagram illustrating a procedure for FINDING THE POSITION OF A BLOCK IN A LIST USING A FINGERPRINT conducted in an HDD in accordance with the invention;
  • FIG. 9B is a flow diagram illustrating a procedure for REMOVING A BLOCK FROM A LIST USING A FINGERPRINT conducted in an HDD in accordance with the invention;
  • FIG. 9C is a flow diagram illustrating a procedure for PREPENDING ‘B’ TO LIST WITH FINGERPRINT IN conducted in an HDD in accordance with the invention;
  • FIG. 10A is a flow diagram illustrating INITIALIZATION OF AN EMPTY STACK;
  • FIG. 10B is a flow diagram illustrating an operation of PUSHING AN ELEMENT ONTO A STACK; and
  • FIG. 10C is a flow diagram illustrating an operation of RETRIEVING THE LAST PUSHED ELEMENT FROM THE STACK.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • FIGS. 1A and 1B schematically show the available storage space of a storage medium of an underlying storage device, the storage space being arranged in accordance with the invention. The underlying storage device, as mentioned above, can be any storage device storing information in continuous data blocks like sector-oriented magnetic hard disk drives, optical disk drives or tape storage devices, and even semiconductor storage devices emulating or virtually realizing hard disk drives like solid hard disks or RAM disks.
  • FIG. 1A, more particularly, illustrates how the underlying storage medium is segmented into two different storage areas 100, 105, the first area 100 containing a sector directory (e.g. implemented as a table or the like) used for operational administration of the underlying storage device according to the mechanism described hereinafter and the second area (‘Real Sector’) 105 representing physical storage space for physically storing data. In FIG. 1A it is further illustrated by the two arrows 110, 115, that the size of each of the two storage areas 100, 105 can be adapted dynamically during operation of the underlying storage device, mainly depending on the storage capacity requirements of the mentioned sector directory. The required storage size for storing the sector directory, again, mainly depends on the number of currently existing data duplicates on sector level to be administered by means of the sector directory.
  • FIG. 1B shows a similar segmentation according to another embodiment of the invention where a number of different storage devices or storage subunits are involved. In this scenario, the sector directory is stored on a storage medium 150 of a first storage device wherein the real blocks are stored on the storage media 155, 160, 165 of other devices. In this way, the sector storage area can be expanded nearly arbitrarily, as indicated by arrow 170.
  • In the following it is assumed that, for each block to be written into a sector of the underlying HDD, a fingerprint value (fn) can be calculated. A known example for a fingerprint used in storage media is the above mentioned mechanism of Cyclic Redundancy Checking (CRC). CRC is a method of checking for errors in data that has been transmitted on a communications link whereby a sending device applies a 16-bit or 32-bit polynomial to a block of data that is to be transmitted and appends the resulting cyclic redundancy code (CRC) to the block. The receiving end applies the same polynomial to the data and compares its result with the result appended by the sender. If they agree, the data has been received successfully. If not, the sender can be notified to resend the block of data.
  • The mechanism for reducing storage occupancy in accordance with the invention, as illustrated in FIG. 1A, is based on segmentation of the storage area of the HDD or other storage device into two different areas, the first area containing a sector table and the second area intended for physically storing data. In that sector table area, there is stored a reference table R containing at least one entry for each real block of data. As illustrated in FIG. 2, in the preferred embodiment, the possible entries of that table are continuously numbered whereby each entry comprises a unique identifier (ID) of a stored block.
  • The sector table area also includes a fingerprint table ‘FP’. As illustrated by the preferred embodiment shown in FIG. 3, the FP table contains, for each possible fingerprint value A034, A035, . . . , the ID of the first block with that fingerprint. In addition, it comprises a LIFO (last in—first out) stack U (FIG. 4) of unused (real) blocks and a doubly-linked table L (FIG. 5) that comprises for a given block indicated by block number . . . , 14557, 14558, . . . the following information:
      • the block's predecessor or previous block (column ‘prev’) in the list of blocks with identical fingerprint value;
      • the block's successor or next block (column ‘next’) in the list of blocks with identical fingerprint value;
      • the block's reference count (column ‘.rc’); and
      • the block's fingerprint value (column ‘.fp’).
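For illustration, the four structures just described (reference table R, fingerprint table FP, stack U, chain table L) might be modeled in memory as follows. This is a toy sketch only; the sizes are illustrative and `None` stands in for ‘undef’.

```python
NUM_BLOCKS = 8          # toy capacity; a real device has millions of blocks
NUM_FINGERPRINTS = 8    # preferred embodiment: #fingerprints == #blocks

R = [None] * NUM_BLOCKS            # reference table: sector number -> BLOCK-ID
FP = [None] * NUM_FINGERPRINTS     # fingerprint -> first BLOCK-ID with that value
U = []                             # LIFO stack of unused (real) blocks
L = [{"prev": None, "next": None,  # doubly-linked chain of equal-fingerprint blocks
      "rc": 0, "fp": None}         # reference count and fingerprint per real block
     for _ in range(NUM_BLOCKS)]
```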
  • The number of available fingerprint values should be on the order of the number of real blocks available in the HDD. In the preferred embodiment, the number of fingerprints is equal to the number of blocks, which guarantees that the average number of blocks with equal fingerprint value is smaller than 1. Even if, in some lists of the above tables, the number of blocks with identical fingerprint is larger than 1, other fingerprint values are then not realized (or not present) at all, and the inequality of a new block compared with all blocks already stored on the HDD can thus often be ascertained without a physical read of the block.
  • The following are examples for the calculation of the table sizes showing that the tables require less than 2% of memory:
  • Assume that ‘n’ is the number of bytes required for storing block numbers. For example, four bytes (thirty-two bits) are sufficient up to a storage capacity of two terabytes of the underlying storage device if the block size is 512 bytes (2^32 × 512), and three bytes are sufficient for a storage capacity of 16 million blocks.
  • The resulting size of each of the above tables is:
      • size(R) = #sectors × n;
      • size(FP) = #fingerprints × n;
      • size(U) = #blocks × n;
      • size(L) = #blocks × 4 × n.
  • Thus, in case of #sectors = #blocks = #fingerprints, the resulting table size is #blocks × 7 × n.
  • The above calculation shall now be illustrated by the following four different quantitative estimations a)-d):
      • a) 2 GB HDD: Provides 1 million blocks (<2^24) of block size 2048 bytes; therefore three bytes (n=3) are sufficient, i.e. 21 × 1,000,000 = 21 MB (can even be kept in an EEPROM disposed in the HDD);
      • b) 30 GB HDD: Provides 15 million blocks (<2^24) of block size 2048 bytes; therefore three bytes are sufficient, i.e. 21 × 15,000,000 = 315 MB (about 1.05% of the entire storage capacity of the HDD);
      • c) 100 GB HDD: Provides 50 million blocks (<2^32) of block size 2048 bytes; therefore four bytes (n=4) are sufficient, i.e. 28 × 50,000,000 = 1.4 GB (about 1.4% of the entire storage capacity of the HDD);
      • d) 8 TB HDD: Provides 4 billion blocks (<2^32) of block size 2048 bytes; therefore four bytes are sufficient, i.e. 28 × 4,000,000,000 = 112 GB (about 1.4% of the entire storage capacity of the HDD).
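Estimations a)-d) follow directly from the table-size formula above; a quick numerical check:

```python
def table_bytes(num_blocks: int, n: int) -> int:
    # size(R) + size(FP) + size(U) + size(L) = (1 + 1 + 1 + 4) * #blocks * n,
    # assuming #sectors = #fingerprints = #blocks as in the text.
    return num_blocks * 7 * n

assert table_bytes(1_000_000, 3) == 21_000_000            # a) 21 MB
assert table_bytes(15_000_000, 3) == 315_000_000          # b) 315 MB
assert table_bytes(50_000_000, 4) == 1_400_000_000        # c) 1.4 GB
assert table_bytes(4_000_000_000, 4) == 112_000_000_000   # d) 112 GB
```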
  • Statistical investigations have revealed that data stored on block-oriented server storage devices, on an average scale, contain up to 30% of duplicate files, and non-compressed picture formats like .bmp files often contain equally colored areas, which are stored as identical blocks on the storage device (e.g. black or white areas in these pictures), even for different pictures. In the following it is described how formatting or reading and writing blocks are performed or executed in the preferred embodiment, based on the above described storage device architecture. It should be noted that the necessary procedural steps do not depend on the underlying storage device technology and thus can be used either in a hard disk storage device or any other storage device where data are stored as data blocks.
  • Referring now to FIGS. 6 to 9, it is described in more detail by way of flow diagrams how the particular operations ‘BLOCK WRITE’, ‘BLOCK READ’, ‘HIGH-LEVEL FORMATTING’, ‘FINDING THE POSITION OF A BLOCK IN A LIST’, ‘REMOVING A BLOCK FROM A LIST’ and ‘INSERTING A BLOCK INTO A LIST’ (the last three operations using a fingerprint) are performed in a sector-oriented storage in accordance with the invention. These operations and this method of operating a storage device are sufficient to guarantee that any block is stored exactly once in the storage medium and that different sectors containing the same block only contain references to this one block, while limiting the processing overhead to do so. The mechanism and method in accordance with the invention must quickly check whether a block, blk, is already stored on the storage medium, which can be very large. This reduction in processing time is achieved by calculating a fingerprint for block blk and then quickly searching the relatively short list of blocks already present with the same fingerprint, fn. It should be noted that blocks containing different data may, nevertheless, result in the same fingerprint being calculated. However, since the number of possible fingerprints which can result from calculation based on the data content of a block is very large, the list of blocks having different content which may have the same (or any given) fingerprint will be a very small fraction of the number of blocks stored, and the search can thus be performed very quickly on a list of blocks which will generally be very short.
  • BLOCK WRITE Operation
  • For the present BlockWrite operation it is assumed that a data block ‘blk’ is to be written at a position of the HDD designated with block number ‘s’. For that operation, procedural steps shown in FIGS. 6A-6C are performed. It is noted that the three parts of the entire flow diagram are linked at cardinal points ‘B’ and ‘C’, respectively.
  • In the first step 600 shown in FIG. 6A, for the bit pattern of the block ‘blk’, a fingerprint ‘fn’ is calculated. An appropriate method for calculating the fingerprint is the above-mentioned known CRC mechanism, although other appropriate and possible techniques for computing a fingerprint will be evident to those skilled in the art. Next, the HDD position number, s, at which the block is to be written is looked up 605 in the reference table R at block position ‘s’ and the resulting ID entry ‘b’ is checked in the next step 610 to determine if the entry ‘b’ is undefined (‘undef’). If this condition is fulfilled (i.e. b is not defined because nothing has been previously stored for sector s) then the procedure continues with step 655 shown in FIG. 6B (through linking cardinal point B). If the condition is not fulfilled (i.e. b is already defined) then it is checked in the next step 615 by means of the linkage table L if the above calculated fingerprint ‘fn’ is identical to the fingerprint value stored in table L for the present block entry ‘b’.
  • If condition 615 is fulfilled then in step 620 the whole bit pattern of ‘b’ is read and stored in ‘orig’. In the following step 625 it is then checked if the bit pattern of block ‘blk’ is identical with the bit pattern ‘orig’. If so then the procedure is terminated 630 because block ‘blk’ is already in place (blk==orig) in storage. Otherwise it is further checked 635 if the reference count ‘rc’ for the present block ‘b’ contained in the linkage table L is equal to ‘1’. If so then the bit pattern ‘b’ of block ‘blk’ is physically written 640 to the HDD at block position ‘s’ and the procedure terminated 630 accordingly. Otherwise, in step 645, in the linkage table L the reference count ‘rc’ of ‘b’ is decreased by ‘1’.
  • Referring now back to step 615, if the fingerprint ‘fn’ calculated by means of the linkage table L is not identical with the fingerprint value stored in table L for the present block entry ‘b’, it is checked in step 650, if the reference count value ‘rc’ contained in table L for entry ‘b’ is equal to ‘1’. If so, the procedure is continued with the next step linked to point ‘B’ shown in FIG. 6B. Otherwise the reference count ‘rc’ is decreased by ‘1’ in following step 645.
  • Now referring to FIG. 6B, it is described how the above BlockWrite procedure is continued at cardinal point ‘B’ to make entry ‘b’ available for writing with step 655 where the reference count value of entry ‘b’ in the linkage table L is set ‘0’. In next step 660 the entry ‘b’ is removed from the list contained in the fingerprint table FP for fingerprint ‘fn’. The underlying procedure for the removal of entry ‘b’ is described in more detail referring to FIG. 9B.
  • The following steps 665-680 surrounded by line 690 relate to a mechanism for handling physically defective blocks in an HDD and thus represent an optional but further advantageous perfecting feature of the invention. In step 665 of that optional procedure, a gray code is physically written at block ‘b’ of the HDD. In the following step 670, that block ‘b’ is physically read and stored temporarily as variable ‘aux’. In step 675 it is then checked if the data pattern temporarily stored in ‘aux’ is equal to the original gray code. If not, the present block can be assumed to be defective and thus, in the following step 680, that block is marked as defective simply by setting the reference count ‘rc’ of that block to ‘−1’. Otherwise, whether or not the optional procedures indicated by line 690 are performed, the procedure continues with step 685 where a stack operation push(U, x) with x=‘b’ in the present case is executed, making block ‘b’ available as an unused block. The necessary stack operations are described in detail below with reference to FIGS. 10A-10C.
  • In FIG. 6C it is illustrated how the presently described procedure continues at cardinal point ‘C’. In the first step 695 the entry for block ‘b’ in the reference table R is set ‘undef’ (=undefined). In the following step 700, the position of a block ‘blk’ with fingerprint value ‘fn’ in the list with all blocks of fingerprint value ‘fn’ (FP[fn]) is determined. The underlying procedure for finding that position is described in more detail hereinafter referring to FIG. 9A. In following step 705 it is checked if b is undefined (‘undef’) indicating, that no block in the list is identical to ‘blk’. If ‘YES’, the above described pop(U) operation is performed with the LIFO stack U in step 710 to receive a free block for storing. In the next step 715 ‘b’ is inserted into the fingerprint table FP with the above calculated fingerprint value fn. For the details of that insertion procedure it is referred to the following description of FIG. 9C.
  • Similarly to preceding step 640, in present step 720 the bit pattern of block ‘blk’ is physically written to the HDD at real block ‘b’ accordingly. Thereafter, the reference count ‘rc’ of ‘b’ is set 725 to the value ‘1’ in the linkage table L, because the block ‘blk’ is stored for the first time on the storage device. In addition, the fingerprint value of table L is set 730 with the above calculated value fn. In the last step 735, at the position s of the reference table R, the value b is entered. Then the present procedure is terminated by step 740.
  • However, if the check in step 705 reveals ‘NO’, i.e. that the entry b of the reference table R is not undefined (‘undef’), then the procedure continues with step 745 where the reference count ‘rc’ of ‘b’ in the linkage table L is increased by ‘1’. The reason for this alternative path is that an already existing block ‘b’ with content identical to ‘blk’ was found in the list, and the new reference to that block increases the number of blocks referring to it.
  • Thus, in summary, the block write operation in accordance with the invention first determines the fingerprint of the block to be written and then searches to determine if a block with the same fingerprint already exists in memory. This search is performed by looking up all blocks in a doubly linked list of blocks with fingerprint fn. The first element of this list is accessed in constant time via the array FP. FP[fn] is either ‘undef’ (i.e. an empty list of blocks with fingerprint fn) or holds the first physical block with fingerprint fn. If a stored physical block b with content identical to blk is found, all that has to be done is to set the reference for s (R[s]=b) and increment the reference count for block b by 1.
  • That is, if no block is already stored which has the same fingerprint, the block to be written is not a duplicate of any other previously written block. While blocks having different content could have the same fingerprint computed for them, this screening by fingerprints reduces the number of blocks which must be considered to a list which is generally very short (and, as will be demonstrated, will only be a relatively few blocks, on average) compared to the number of blocks which can be stored in a potentially very large memory.
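The write path summarized above can be sketched end-to-end as a small in-memory model. This is an illustrative sketch only: the class name, the toy capacity, and the use of Python's `zlib.crc32` as the fingerprint algorithm are assumptions, and the optional defective-block handling of steps 665-680 is omitted.

```python
import zlib

class DedupStore:
    """Toy in-memory model of the BLOCK WRITE flow (FIGS. 6A-6C), simplified."""

    def __init__(self, num_blocks: int = 8):
        self.disk = [None] * num_blocks             # real sector storage area
        self.R = [None] * num_blocks                # reference table (None = 'undef')
        self.FP = {}                                # fingerprint -> first block in chain
        self.U = list(reversed(range(num_blocks)))  # LIFO stack of unused blocks
        self.L = [{"prev": None, "next": None, "rc": 0, "fp": None}
                  for _ in range(num_blocks)]       # doubly-linked chain table

    def _fingerprint(self, blk: bytes) -> int:
        return zlib.crc32(blk)                      # CRC used as the fingerprint

    def _find(self, blk: bytes, fn: int):           # FIG. 9A: search the short chain
        b = self.FP.get(fn)
        while b is not None:
            if self.disk[b] == blk:
                return b
            b = self.L[b]["next"]
        return None

    def _remove(self, b: int, fn: int) -> None:     # FIG. 9B: unlink b from its chain
        prev, nxt = self.L[b]["prev"], self.L[b]["next"]
        if prev is None:
            if nxt is None:
                self.FP.pop(fn, None)
            else:
                self.FP[fn] = nxt
        else:
            self.L[prev]["next"] = nxt
        if nxt is not None:
            self.L[nxt]["prev"] = prev
        self.L[b]["prev"] = self.L[b]["next"] = None

    def _prepend(self, b: int, fn: int) -> None:    # FIG. 9C: b becomes chain head
        head = self.FP.get(fn)
        self.L[b]["prev"], self.L[b]["next"] = None, head
        if head is not None:
            self.L[head]["prev"] = b
        self.FP[fn] = b

    def write(self, s: int, blk: bytes) -> None:    # FIGS. 6A-6C, simplified
        fn = self._fingerprint(blk)
        b = self.R[s]
        if b is not None:
            if self.L[b]["fp"] == fn and self.disk[b] == blk:
                return                              # block already in place
            self.L[b]["rc"] -= 1                    # drop sector s's old reference
            if self.L[b]["rc"] == 0:                # last reference gone: free b
                self._remove(b, self.L[b]["fp"])
                self.U.append(b)                    # push(U, b)
            self.R[s] = None
        b = self._find(blk, fn)
        if b is None:                               # content not stored yet
            b = self.U.pop()                        # pop(U): take a free block
            self._prepend(b, fn)
            self.disk[b] = blk                      # the only physical write
            self.L[b]["rc"], self.L[b]["fp"] = 1, fn
        else:
            self.L[b]["rc"] += 1                    # duplicate: count a reference
        self.R[s] = b

    def read(self, s: int):                         # FIG. 7: one indirection
        b = self.R[s]
        return None if b is None else self.disk[b]

store = DedupStore()
store.write(0, b"aaaa")
store.write(1, b"aaaa")       # identical content: stored once, referenced twice
assert store.R[0] == store.R[1]
```

For simplicity this sketch always frees and reallocates a block whose last reference is rewritten, rather than overwriting it in place as in steps 635-640; because of the LIFO stack it ends up reusing the same physical block.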
  • BLOCK READ Operation
  • It is now assumed that a data block with block number ‘s’ is to be read from the storage device. The following steps are sufficient for that block read operation in accordance with the preferred embodiment of the present invention.
  • In step 800 (FIG. 7) it is checked if the entry at position ‘s’ is undefined (‘undef’). If so, in step 805, an arbitrary bit pattern is returned. Otherwise, in step 810, the block ‘blk’ at the position ‘s’ of the reference table R is physically read and returned.
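A sketch of these two read steps, with stand-in tables (the contents of `disk` and `R` below are arbitrary example data, and returning zeros for an undefined entry is one possible "arbitrary bit pattern"):

```python
# Minimal stand-ins for the reference table R and the physical sector storage:
disk = {7: b"payload!"}    # real block id -> bit pattern
R = {0: 7}                 # sector -> real block id; an absent key means 'undef'

def read(s, block_size=8):
    b = R.get(s)                  # step 800: check whether R[s] is undefined
    if b is None:
        return bytes(block_size)  # step 805: return an arbitrary bit pattern (zeros here)
    return disk[b]                # step 810: physically read and return block R[s]
```

Note that the read path adds only the single constant-time indirection through R[s].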
  • HIGH-LEVEL FORMATTING Operation
  • Referring now to FIG. 8, a preferred embodiment of a procedure for high-level formatting an HDD is described in detail by way of the depicted flow diagram. This procedure serves to initialize an HDD for applying the HDD operation method according to the invention.
  • In a first step 900 (FIG. 8), the corresponding entries of the reference table R are set undefined (‘undef’) for all sectors s of the HDD, i.e. all entries of R. Then, in the fingerprint table FP, FP[fn] is set 905 undefined (‘undef’) for all possible fingerprint values fn. In step 910, the LIFO stack U is initialized as an empty stack. In the following step 915, for all remaining real blocks b contained in the area 105 shown in FIG. 1A, the above-described push operation, as also shown in FIG. 8, is applied for x=b. In the final step 920 of the present formatting procedure, for all real blocks b, the corresponding entries for the parameters previous block ‘prev’, next block ‘next’ and fingerprint value ‘fn’ contained in the linkage table L are set undefined (‘undef’), whereas the entry for the parameter reference count ‘rc’ is set to ‘0’.
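The formatting steps 900-920 amount to initializing the four structures. A sketch, assuming dict-based tables where an absent FP key plays the role of ‘undef’ (so step 905 needs no explicit loop over all 2^24 fingerprint values):

```python
def high_level_format(num_sectors, num_real_blocks):
    R = {s: None for s in range(num_sectors)}        # step 900: all entries of R 'undef'
    FP = {}                                          # step 905: FP[fn] 'undef' for every fn
    U = []                                           # step 910: empty LIFO stack
    for b in range(num_real_blocks):                 # step 915: push every real block onto U
        U.append(b)
    L = {b: {"prev": None, "next": None, "fn": None, "rc": 0}
         for b in range(num_real_blocks)}            # step 920: 'prev'/'next'/'fn' undef, rc=0
    return R, FP, U, L
```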
  • FINDING THE POSITION OF A BLOCK IN A LIST Operation
  • According to the preferred embodiment illustrated by way of the flow diagram depicted in FIG. 9A, the procedure starts with step 1000, where it is checked if a given fingerprint entry of the fingerprint table FP is undefined (‘undef’). If ‘yes’, then ‘undef’ is returned 1005, since the list of blocks with fingerprint fn is empty in this case. Otherwise, the first block of the list of blocks with fingerprint fn is denoted 1010 by ‘b’. In the following step 1015 the block ‘b’ is physically read and temporarily stored as variable ‘orig’. Then it is checked 1020 if the bit patterns of blk and orig are identical. If so, then the block ID ‘b’ is returned 1025. Otherwise, the next block stored in column ‘next’ of the linkage table ‘L’ for the present block ‘b’ is set 1030 as the new block ‘b’. Thereafter it is checked 1035 if the new block ‘b’ is undefined (‘undef’), i.e. the list is completely traversed. If so, ‘undef’ is returned 1040. Otherwise the procedure jumps back to step 1015, which is executed for the next block ‘b’. Thus the block ID of a block in the list identical to ‘blk’ is returned, if one exists; otherwise ‘undef’ is returned as indication of the non-existence of such a block.
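The list traversal of FIG. 9A maps directly onto a loop. A sketch against the dict-based tables assumed throughout these examples (`None` stands for ‘undef’):

```python
def find_block(FP, L, disk, fn, blk):
    """Return the real block in the fn-list whose content equals blk, or None ('undef')."""
    if fn not in FP:                    # step 1000: FP[fn] 'undef' -> list is empty
        return None                     # step 1005
    b = FP[fn]                          # step 1010: first block of the list
    while True:
        orig = disk[b]                  # step 1015: physically read block b
        if orig == blk:                 # step 1020: bit patterns identical?
            return b                    # step 1025
        b = L[b]["next"]                # step 1030: advance to the next block
        if b is None:                   # step 1035: list completely traversed
            return None                 # step 1040
```

Note the only non-constant cost is the physical read per list element, and the fingerprint screening keeps those lists short.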
  • REMOVING A BLOCK FROM A LIST Operation
  • In the preferred embodiment illustrated in FIG. 9B, the procedure for removing a block ‘b’ with a fingerprint value ‘fn’ from the linkage table L starts with checking 1100 in the fingerprint table ‘FP’ if block ‘b’ is the first block in that list. If so, then the next block of ‘b’ is fetched from the linkage table ‘L’ and set 1105 in the fingerprint table ‘FP’ as the first block with that ‘fn’ value. Thereafter, in the linkage table ‘L’, the entry for that new first block is fetched and its ‘prev’ value is set 1110 to ‘undef’. In the following steps 1115 and 1120, the ‘next’ and ‘prev’ values of the present block ‘b’ are both set to ‘undef’.
  • Referring back to step 1100, if the current block ‘b’ is not the first block in the list, the procedure jumps to the entry for the previous block ‘prev’ of the present block ‘b’ in the linkage table ‘L’, and the next block ‘next’ entry of that entry is set 1125 to the next block ‘next’ of the current block ‘b’ in the ‘L’ table. In the next step 1130 it is then checked whether the entry set in step 1125 is ‘undef’. If so, this path continues with step 1115 followed by step 1120 as described beforehand. Otherwise, an intermediate step 1135 is executed, in which the procedure jumps to the entry for the next block ‘next’ of the present block ‘b’ in the linkage table ‘L’ and sets the previous block ‘prev’ entry of that entry to the previous block ‘prev’ of the current block ‘b’ in the ‘L’ table.
  • PREPEND ‘B’ TO LIST WITH FINGERPRINT FN
  • As illustrated by the preferred embodiment of this procedure for insertion of a block ‘b’ having a fingerprint value ‘fn’ into a linkage table ‘L’ and a fingerprint table ‘FP’, shown in FIG. 9C, it is first checked 1200 whether the entry for ‘fn’ in the fingerprint table ‘FP’ is ‘undef’. If so, then the next block entry and the previous block entry of the ‘L’ table for ‘b’ are both set 1205, 1210 to ‘undef’. After this, ‘b’ is inserted with its fingerprint value ‘fn’ in the ‘FP’ table and the procedure is terminated 1220; i.e. in this case the list consists of block ‘b’ only.
  • If the condition of step 1200 is not fulfilled (the list of blocks with fingerprint ‘fn’ is not empty), then the procedure continues with step 1225 of a second path, in which the first block with fingerprint ‘fn’ gathered from the fingerprint table ‘FP’ is set as the next block ‘next’ of ‘b’ in the linkage table ‘L’. Thereafter, the previous block ‘prev’ of that former first block is set 1230 to ‘b’ (b is prepended to the list FP[fn]).
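Both paths of the prepend procedure in FIG. 9C can be sketched together; as before, the dict-based tables and `None`-for-‘undef’ convention are assumptions of this model:

```python
def prepend(FP, L, b, fn):
    """Make real block b the first element of the list of blocks with fingerprint fn."""
    head = FP.get(fn)                    # 'undef' if the list is empty
    if head is None:                     # step 1200: empty list
        L[b]["next"] = None              # step 1205
        L[b]["prev"] = None              # step 1210
    else:
        L[b]["next"] = head              # step 1225: former first block follows b
        L[b]["prev"] = None
        L[head]["prev"] = b              # step 1230: former first block points back to b
    L[b]["fn"] = fn
    FP[fn] = b                           # b is now the first block with fingerprint fn
```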
  • It is important to note that, of all the above-described operations, only the block write operation requires more than constant time, and that the block read operation only incurs the small, constant additional processing burden of following the reference R[s]. Therefore, the effect of the invention on memory input/output rates is very slight, while memory occupancy is optimally reduced by eliminating all duplication of blocks of stored data at a granularity potentially much smaller than files.
  • It is also noteworthy that the above-described tables R, FP and L and the LIFO stack U can, in part or even in whole, be implemented in a static approach with a predefined size, or in a dynamic approach where the size is dynamically adapted to the actual storage requirements of the corresponding data. The above-described search procedures for finding data duplicates at storage sector level can be implemented by way of a known indexing mechanism in order to enhance the overall processing performance of the described storage management mechanism.
  • In summary, it is clearly seen that the invention provides for optimally reduced occupancy of memory devices with minimal penalty in processing burden or use of storage. For example, in case c discussed above in regard to a 100 GB HDD providing 50,000,000 blocks of 2048 bytes of storage, if working with 24-bit fingerprints, then even on a fully written memory of totally different blocks, the average length of a list of blocks having the same fingerprint will be 50,000,000/2^24 ≈ 2.9802, or, on average, less than three blocks which must be read to determine whether a block to be stored is a duplicate of a block previously written. This meritorious effect increases with increasing memory capacity, since the average number of blocks which must be read remains relatively small while the memory capacity may greatly increase, and the difference between the number of blocks of storage and the number of blocks which must be read to identify or disprove the presence of a block blk becomes increasingly great. Further, although the invention has been described for a hard disk drive (HDD) only, it is understood that the invention can be applied accordingly to tape storage, semiconductor storage or any CPU-based storage using block memory devices, provided that storage comprises segmentation into blocks as described beforehand.
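The average list length quoted above can be checked directly:

```python
blocks = 50_000_000                    # 100 GB HDD divided into 2048-byte blocks, as in the text
fingerprints = 2 ** 24                 # number of distinct 24-bit fingerprint values
avg_list_len = blocks / fingerprints   # expected blocks per fingerprint list on a full device
```

So even with every block unique, a write needs on average fewer than three candidate-block reads.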
  • Further, the invention can also be implemented either in a storage area network (SAN) or a network attached storage (NAS) environment. A SAN is a high-speed special purpose digital network that interconnects different kinds of data storage devices with associated data servers on behalf of a larger network of users. Typically, a storage area network is part of the overall network of computing resources for an enterprise. A storage area network is usually clustered in close proximity to other computing resources such as IBM S/390 mainframe computers but may also extend to remote locations for backup and archival storage, using wide area network carrier technologies such as asynchronous transfer mode or synchronous optical networks.
  • NAS is hard disk storage that is set up with its own network address rather than being attached to the department computer that serves applications to a network's workstation users. By removing storage access and its management from the department server, both application programming and files can be served faster because they are not competing for the same processor resources. The network-attached storage device is attached to a local area network (typically Ethernet, the most widely installed local area network (LAN) technology) and assigned an IP address. File requests are mapped by the main server to the NAS file server. Network-attached storage consists of hard disk storage, including multi-disk RAID (redundant array of independent disks) systems, and software for configuring and mapping file locations to the network-attached device. Network-attached storage can be a step toward, and be included as part of, the more sophisticated storage system mentioned above, known as a SAN.
  • In these environments, as pointed out in FIG. 1B, the sector table (including the above-described tables) is physically separated from the sector storage, i.e. both are implemented on different disk storage devices (e.g. HDDs). This makes it possible to implement a large sector table that is used to access sector storages arranged in a stack of other HDDs. It is noted that today's HDD controllers are able to manage 100 or even more HDDs. The mentioned stack of sector storage HDDs can, in case of need, be extended easily, insofar as only the sector table arranged on the first HDD has to be enlarged.
  • It is further to be noted that the sector table, in another embodiment, can also be arranged in a solid-state random access memory (RAM) thus enhancing processing speed for managing the sector table.
  • Finally, it is noteworthy that although the underlying storage device is an HDD in the present embodiment, the concepts and mechanisms described hereinbefore can also be applied to other types of storage devices, such as semiconductor-based storage.

Claims (14)

1. A digital data storage device storing information on a storage medium segmented into blocks, wherein said storage medium is segmented into two areas, wherein the first area comprises reference means and the remaining area of the storage medium is used for storing said information and wherein a second or further block being identical with a first block on block level is stored only as reference referring to the first block.
2. A digital data storage device according to claim 1, comprising at least one reference table containing at least one entry for each block, at least one fingerprint table containing fingerprint information for each block and at least one chain table containing, for each block, at least information about blocks having same fingerprints.
3. A digital data storage device according to claim 2, wherein the entries of said reference table are numbered consecutively.
4. A digital data storage device according to claim 2, wherein each of said entries consists of at least one field containing a unique identifier for identifying the physical sector where the real block is stored in the remaining physical storage area.
5. A digital data storage device according to claim 4, wherein the length of a reference field is defined as the maximum amount of required binary digits (bits) for real sector IDs.
6. A digital data storage device according to claim 1, wherein a real block stored in said remaining area of the storage medium comprises a reference counter for counting the number of references to that real block.
7. A digital data storage device according to claim 6, wherein said reference counter is used to identify how many times a block is referred to.
8. A digital data storage device according to claim 2, wherein said fingerprint table contains, for each fingerprint, the first unique identifier of a block corresponding to said fingerprint.
9. A digital data storage device according to claim 2, wherein said chain table contains, for a particular block, its preceding block in the linkage table having same fingerprint, its successive block in the linkage table having same fingerprint, its reference count, and its fingerprint.
10. A digital data storage device according to claim 1, wherein said storage medium is formatted so that the number of real blocks equals the number of entries of the reference table.
11. A digital data storage device according to claim 10, wherein the number of real blocks is adapted on a periodic time basis.
12. A digital data storage device according to claim 1, wherein a particular area of said storage medium is reserved for the reference table and thus cannot be occupied by real (user) data, wherein this real data is stored only in a real sector, wherein occupation of the real sector advantageously can move from outer tracks to inner tracks of the storage medium.
13. A digital data storage device according to claim 1, wherein said reference means is stored outside the storage medium of the storage device, preferably in a Random-Access-Memory (RAM) being part of the storage device or a virtual RAM disk storage being part of the main storage of an underlying computer system.
14. A digital data storage device according to claim 13, further comprising a fail-over means for storing the reference table entries in a non-volatile storage, preferably an Electrically Erasable Programmable Read-Only Memory (EEPROM).
US11/019,099 2003-12-22 2004-12-22 Reducing occupancy of digital storage devices Abandoned US20050152192A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/892,468 US8327061B2 (en) 2003-12-22 2010-09-28 Reducing occupancy of digital storage devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP03104922 2003-12-22
EP03104922.4 2003-12-22

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/892,468 Continuation US8327061B2 (en) 2003-12-22 2010-09-28 Reducing occupancy of digital storage devices

Publications (1)

Publication Number Publication Date
US20050152192A1 true US20050152192A1 (en) 2005-07-14

Family

ID=34717239

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/019,099 Abandoned US20050152192A1 (en) 2003-12-22 2004-12-22 Reducing occupancy of digital storage devices
US12/892,468 Expired - Fee Related US8327061B2 (en) 2003-12-22 2010-09-28 Reducing occupancy of digital storage devices

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/892,468 Expired - Fee Related US8327061B2 (en) 2003-12-22 2010-09-28 Reducing occupancy of digital storage devices

Country Status (1)

Country Link
US (2) US20050152192A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070083549A1 (en) * 2005-10-10 2007-04-12 Oracle International Corporation Method and mechanism for providing a caching mechanism for contexts
US20070168703A1 (en) * 2005-11-16 2007-07-19 Elliott John C Apparatus and method to assign network addresses in a storage array
US20080195666A1 (en) * 2007-02-09 2008-08-14 Asustek Computer Inc. Automatic file saving method for digital home appliance system
US7555620B1 (en) 2006-04-28 2009-06-30 Network Appliance, Inc. Method and system of using a backup image for multiple purposes
US20090248979A1 (en) * 2008-03-25 2009-10-01 Hitachi, Ltd. Storage apparatus and control method for same
US20130262805A1 (en) * 2005-04-13 2013-10-03 Ling Zheng Method and Apparatus for Identifying and Eliminating Duplicate Data Blocks and Sharing Data Blocks in a Storage System
US9223511B2 (en) 2011-04-08 2015-12-29 Micron Technology, Inc. Data deduplication
US9373380B2 (en) 2012-10-04 2016-06-21 Samsung Electronics Co., Ltd. Multi-port semiconductor memory device with multi-interface
CN105794199A (en) * 2013-11-08 2016-07-20 曲克赛尔股份有限公司 Integrated circuit having multiple identified identical blocks
US20160314141A1 (en) * 2015-04-26 2016-10-27 International Business Machines Corporation Compression-based filtering for deduplication
CN110532198A (en) * 2019-09-09 2019-12-03 成都西山居互动娱乐科技有限公司 A kind of method and device of memory allocation

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8204867B2 (en) * 2009-07-29 2012-06-19 International Business Machines Corporation Apparatus, system, and method for enhanced block-level deduplication
US8589350B1 (en) 2012-04-02 2013-11-19 Axcient, Inc. Systems, methods, and media for synthesizing views of file system backups
US8924360B1 (en) 2010-09-30 2014-12-30 Axcient, Inc. Systems and methods for restoring a file
US9705730B1 (en) 2013-05-07 2017-07-11 Axcient, Inc. Cloud storage using Merkle trees
US9235474B1 (en) * 2011-02-17 2016-01-12 Axcient, Inc. Systems and methods for maintaining a virtual failover volume of a target computing system
US8954544B2 (en) 2010-09-30 2015-02-10 Axcient, Inc. Cloud-based virtual machines and offices
US10284437B2 (en) 2010-09-30 2019-05-07 Efolder, Inc. Cloud-based virtual machines and offices
US8954683B2 (en) 2012-08-16 2015-02-10 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Translation table and method for compressed data
US9785647B1 (en) 2012-10-02 2017-10-10 Axcient, Inc. File system virtualization
US9852140B1 (en) 2012-11-07 2017-12-26 Axcient, Inc. Efficient file replication
US9397907B1 (en) 2013-03-07 2016-07-19 Axcient, Inc. Protection status determinations for computing devices
US9292153B1 (en) 2013-03-07 2016-03-22 Axcient, Inc. Systems and methods for providing efficient and focused visualization of data
WO2016048331A1 (en) 2014-09-25 2016-03-31 Hewlett Packard Enterprise Development Lp Storage of a data chunk with a colliding fingerprint
US10289113B2 (en) 2016-02-25 2019-05-14 Ford Global Technologies, Llc Autonomous occupant attention-based control
US10026317B2 (en) 2016-02-25 2018-07-17 Ford Global Technologies, Llc Autonomous probability control
US9989963B2 (en) 2016-02-25 2018-06-05 Ford Global Technologies, Llc Autonomous confidence control
US10417202B2 (en) 2016-12-21 2019-09-17 Hewlett Packard Enterprise Development Lp Storage system deduplication

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5559991A (en) * 1991-11-04 1996-09-24 Lucent Technologies Inc. Incremental computer file backup using check words
US5732265A (en) * 1995-11-02 1998-03-24 Microsoft Corporation Storage optimizing encoder and method
US5732365A (en) * 1995-10-30 1998-03-24 Dakota Catalyst Products, Inc. Method of treating mixed waste in a molten bath
US5758347A (en) * 1993-05-12 1998-05-26 Apple Computer, Inc. Layered storage structure for computer data storage manager
US5765173A (en) * 1996-01-11 1998-06-09 Connected Corporation High performance backup via selective file saving which can perform incremental backups and exclude files and uses a changed block signature list
US5933842A (en) * 1996-05-23 1999-08-03 Microsoft Corporation Method and system for compressing publication documents in a computer system by selectively eliminating redundancy from a hierarchy of constituent data structures
US6092145A (en) * 1994-12-27 2000-07-18 International Business Machines Corporation Disk drive system using sector buffer for storing non-duplicate data in said sector buffer
US6374266B1 (en) * 1998-07-28 2002-04-16 Ralph Shnelvar Method and apparatus for storing information in a data processing system
US20020169934A1 (en) * 2001-03-23 2002-11-14 Oliver Krapp Methods and systems for eliminating data redundancies
US20020178176A1 (en) * 1999-07-15 2002-11-28 Tomoki Sekiguchi File prefetch contorol method for computer system
US20020178332A1 (en) * 2001-05-22 2002-11-28 Wilson Kenneth Mark Method and system to pre-fetch compressed memory blocks suing pointers
US6505305B1 (en) * 1998-07-16 2003-01-07 Compaq Information Technologies Group, L.P. Fail-over of multiple memory blocks in multiple memory modules in computer system
US20040103254A1 (en) * 2002-08-29 2004-05-27 Hitachi, Ltd. Storage apparatus system and data reproduction method
US7143251B1 (en) * 2003-06-30 2006-11-28 Data Domain, Inc. Data storage using identifiers

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6327936A (en) * 1986-07-22 1988-02-05 Mitsubishi Electric Corp File management method

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5559991A (en) * 1991-11-04 1996-09-24 Lucent Technologies Inc. Incremental computer file backup using check words
US5758347A (en) * 1993-05-12 1998-05-26 Apple Computer, Inc. Layered storage structure for computer data storage manager
US6092145A (en) * 1994-12-27 2000-07-18 International Business Machines Corporation Disk drive system using sector buffer for storing non-duplicate data in said sector buffer
US5732365A (en) * 1995-10-30 1998-03-24 Dakota Catalyst Products, Inc. Method of treating mixed waste in a molten bath
US5732265A (en) * 1995-11-02 1998-03-24 Microsoft Corporation Storage optimizing encoder and method
US5765173A (en) * 1996-01-11 1998-06-09 Connected Corporation High performance backup via selective file saving which can perform incremental backups and exclude files and uses a changed block signature list
US5933842A (en) * 1996-05-23 1999-08-03 Microsoft Corporation Method and system for compressing publication documents in a computer system by selectively eliminating redundancy from a hierarchy of constituent data structures
US6505305B1 (en) * 1998-07-16 2003-01-07 Compaq Information Technologies Group, L.P. Fail-over of multiple memory blocks in multiple memory modules in computer system
US6374266B1 (en) * 1998-07-28 2002-04-16 Ralph Shnelvar Method and apparatus for storing information in a data processing system
US20020178176A1 (en) * 1999-07-15 2002-11-28 Tomoki Sekiguchi File prefetch contorol method for computer system
US20020169934A1 (en) * 2001-03-23 2002-11-14 Oliver Krapp Methods and systems for eliminating data redundancies
US6889297B2 (en) * 2001-03-23 2005-05-03 Sun Microsystems, Inc. Methods and systems for eliminating data redundancies
US20020178332A1 (en) * 2001-05-22 2002-11-28 Wilson Kenneth Mark Method and system to pre-fetch compressed memory blocks suing pointers
US20040103254A1 (en) * 2002-08-29 2004-05-27 Hitachi, Ltd. Storage apparatus system and data reproduction method
US7143251B1 (en) * 2003-06-30 2006-11-28 Data Domain, Inc. Data storage using identifiers

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9256378B2 (en) * 2005-04-13 2016-02-09 Netapp, Inc. Deduplicating data blocks in a storage system
US20130262805A1 (en) * 2005-04-13 2013-10-03 Ling Zheng Method and Apparatus for Identifying and Eliminating Duplicate Data Blocks and Sharing Data Blocks in a Storage System
US8849767B1 (en) * 2005-04-13 2014-09-30 Netapp, Inc. Method and apparatus for identifying and eliminating duplicate data blocks and sharing data blocks in a storage system
US20070083549A1 (en) * 2005-10-10 2007-04-12 Oracle International Corporation Method and mechanism for providing a caching mechanism for contexts
US20070168703A1 (en) * 2005-11-16 2007-07-19 Elliott John C Apparatus and method to assign network addresses in a storage array
US7404104B2 (en) * 2005-11-16 2008-07-22 International Business Machines Corporation Apparatus and method to assign network addresses in a storage array
US7555620B1 (en) 2006-04-28 2009-06-30 Network Appliance, Inc. Method and system of using a backup image for multiple purposes
US20080195666A1 (en) * 2007-02-09 2008-08-14 Asustek Computer Inc. Automatic file saving method for digital home appliance system
US20090248979A1 (en) * 2008-03-25 2009-10-01 Hitachi, Ltd. Storage apparatus and control method for same
US9223511B2 (en) 2011-04-08 2015-12-29 Micron Technology, Inc. Data deduplication
US9778874B2 (en) 2011-04-08 2017-10-03 Micron Technology, Inc. Data deduplication
US10282128B2 (en) 2011-04-08 2019-05-07 Micron Technology, Inc. Data deduplication
US9373380B2 (en) 2012-10-04 2016-06-21 Samsung Electronics Co., Ltd. Multi-port semiconductor memory device with multi-interface
CN105794199A (en) * 2013-11-08 2016-07-20 曲克赛尔股份有限公司 Integrated circuit having multiple identified identical blocks
US9503089B2 (en) * 2013-11-08 2016-11-22 Trixell Integrated circuit having multiple identified identical blocks
US20160314141A1 (en) * 2015-04-26 2016-10-27 International Business Machines Corporation Compression-based filtering for deduplication
US9916320B2 (en) * 2015-04-26 2018-03-13 International Business Machines Corporation Compression-based filtering for deduplication
CN110532198A (en) * 2019-09-09 2019-12-03 成都西山居互动娱乐科技有限公司 A kind of method and device of memory allocation

Also Published As

Publication number Publication date
US20110082998A1 (en) 2011-04-07
US8327061B2 (en) 2012-12-04

Similar Documents

Publication Publication Date Title
US8327061B2 (en) Reducing occupancy of digital storage devices
US10169383B2 (en) Method and system for scrubbing data within a data storage subsystem
US9880746B1 (en) Method to increase random I/O performance with low memory overheads
US8914597B2 (en) Data archiving using data compression of a flash copy
US8954710B2 (en) Variable length encoding in a storage system
US7533330B2 (en) Redundancy for storage data structures
US8165221B2 (en) System and method for sampling based elimination of duplicate data
US8332616B2 (en) Methods and systems for vectored data de-duplication
US7584229B2 (en) Method and system for priority-based allocation in a storage pool
US7716445B2 (en) Method and system for storing a sparse file using fill counts
US9990390B2 (en) Methods and systems for vectored data de-duplication
US6832290B2 (en) Method, system, program, and data structures for maintaining metadata in a storage system
JP2007234026A (en) Data storage system including unique block pool manager and application in hierarchical storage device
US7353299B2 (en) Method and apparatus for managing autonomous third party data transfers
US7480684B2 (en) Method and system for object allocation using fill counts
US7281188B1 (en) Method and system for detecting and correcting data errors using data permutations
US10338850B2 (en) Split-page queue buffer management for solid state storage drives
US20200081787A1 (en) Increasing data recoverability during central inode list loss

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOLDY, MANIFRED;SANDER, PETER;STAMM-WILBRANDT, HERMANN;REEL/FRAME:016403/0110;SIGNING DATES FROM 20050314 TO 20050327

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION