US20080301132A1 - Data back up method and its programs for permitting a user to obtain information relating to storage areas of the storage systems and select one or more storage areas which satisfy a user condition based on the information - Google Patents

Info

Publication number
US20080301132A1
US20080301132A1
Authority
US
United States
Prior art keywords
data
storage
server
available
disk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/222,192
Inventor
Kyoko Yamada
Motoaki Hirabayashi
Mitsugu Yamada
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US12/222,192
Publication of US20080301132A1
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14 Error detection or correction of the data by redundancy in operation
    • G06F 11/1402 Saving, restoring, recovering or retrying
    • G06F 11/1446 Point-in-time backing up or restoration of persistent data
    • G06F 11/1458 Management of the backup or restore process
    • G06F 11/1464 Management of the backup or restore process for networked environments
    • G06F 11/1466 Management of the backup or restore process to make the backup process non-disruptive
    • G06F 11/1469 Backup restoration techniques
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 707/00 Data processing: database and file management or data structures
    • Y10S 707/99951 File or database maintenance
    • Y10S 707/99952 Coherency, e.g. same view to multiple users
    • Y10S 707/99953 Recoverability
    • Y10S 707/99955 Archiving or backup

Abstract

User data backup functions are realized through a computer, which is located on the management service provider corporation side and interfaces between a user side computer environment and a storage service side computer environment to support storage service. This computer selects storage devices which meet the user side conditions from a plurality of entirely or partially empty storage devices owned by the storage service side computer environment. The computer receives user data from the user side computer environment, divides the user data into records of a predetermined size and transmits the records to the storage service side computer environment so that the records are distributed and stored to the selected storage devices.

Description

  • The present application is a continuation of application Ser. No. 11/197,499, filed Aug. 5, 2005; which is a continuation of application Ser. No. 10/367,767, filed Feb. 19, 2003, now U.S. Pat. No. 7,024,529, the contents of which are incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • The present invention relates to storage backup techniques, and more particularly, to a technique for backing up storage in a remote place.
  • Since the contents of disk storage may be lost by an unexpected accident, data backup is performed in most computer systems. Further, backup tapes and other media are kept at a remote site so that they will not be lost together with the original copies in case of a fire, earthquake or the like. Accordingly, a backup method using a SAN (storage area network) is disclosed in Japanese Patent Laid-open No. 2002-7304, and a data sharing-based backup method is disclosed in Japanese Patent Laid-open No. 2000-82008. Large-scale earthquakes, coordinated attacks by viruses and the like pose a further growing threat to computer systems and their data, which is making it mandatory to keep two or three copies of each data set as well as to back them up at a remote site. Backup is therefore becoming a swelling burden in terms of storage capacity, cost and overhead.
  • SUMMARY OF THE INVENTION
  • The present invention has been made in view of the background mentioned above, and an object of the present invention is to provide a highly safe data backup storage device that is advantageous in terms of cost.
  • According to one aspect of the present invention, there is provided a user data backup technique by computer means which resides between a user side computer environment and a storage service side computer environment to support storage service, the data backup method comprising the steps of: selecting a storage device which meets the user side conditions from a plurality of entirely or partially empty storage devices owned by the storage service side computer environment; receiving user data from the user side computer environment; and transmitting the user data to the storage service side computer environment for storage in the selected storage device.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows the configuration of a storage service system according to a preferred embodiment of the present invention;
  • FIG. 2 shows an example of disk dividing and data storage;
  • FIG. 3 shows an example of a data allocation TBL 123;
  • FIG. 4 shows an example of a user conditions TBL 122;
  • FIG. 5 shows an example of a SSP conditions TBL 124;
  • FIG. 6 shows an example of a disk dividing TBL 125;
  • FIG. 7 is a flowchart showing a processing procedure by a matching unit 115 in the preferred embodiment;
  • FIG. 8 is a time chart showing a processing procedure for data backup in the preferred embodiment;
  • FIG. 9 is a time chart showing a processing procedure for data restoration in the preferred embodiment;
  • FIG. 10 is a time chart showing a processing procedure for data migration in the preferred embodiment; and
  • FIG. 11 is a time chart showing a processing procedure for disk return in the preferred embodiment.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings below.
  • FIG. 1 shows a configuration of a backup storage service system according to an embodiment of the present invention. In the system of FIG. 1, a management service provider corporation (hereinafter denoted as a MSP) provides data backup service to users (hereinafter denoted as USRs) by using idle resources of a plurality of storage service provider corporations (hereinafter denoted as SSPs). Each SSP has a SSP server 103 which is a computer environment on the storage service side. Each USR has a USR server 102 which is a computer environment on the user side. A MSP server 101 is computing means which interfaces between the two computer environments in order to support the storage service.
  • In FIG. 1, the MSP server 101 receives data from USR servers 102-1 and 102-2, divides the data into records and stores the records in idle resources managed by SSP servers 103-1, 103-2 and 103-3. Here, idle resources mean the currently unused areas of the storage devices such as tape libraries and RAID devices provided for data center business and storage service business. The MSP server 101, the USR servers 102 and the SSP servers 103 are connected via a network such as the Internet.
  • The MSP server 101 comprises a processing section composed of a service reception unit 113, a resource management unit 114, a matching unit 115, a data transfer unit 116, a data dividing unit 117, a data restore unit 118 and a data migration unit 119. Also there are provided, on its storage device, a user conditions TBL (table) 122, a data allocation TBL 123, a SSP conditions TBL 124 and a disk dividing TBL 125.
  • The service reception unit 113 is notified by the USR servers 102 of the size of each data to be backed up and the user's preferred condition (cost, etc.) for using the backup storage service. The service reception unit 113 stores those obtained conditions into the user conditions TBL 122. The resource management unit 114 is notified by the SSP servers 103 of their conditions (empty disk capacity, availability period, etc.) for providing disks. The resource management unit 114 stores those obtained conditions into the SSP conditions TBL 124.
  • The matching unit 115 searches the user conditions TBL 122 and SSP conditions TBL 124 for mutually conforming combinations. The data transfer unit 116 controls data transfer between the USR servers 102 and the MSP server 101, and between the MSP server 101 and the SSP servers 103.
  • The data dividing unit 117 divides user data into records whose size is determined depending on the empty disk capacities offered by the SSP servers for backup. The data restore unit 118 refers to the data allocation TBL 123 and reassembles the original data from records distributed to a plurality of SSP disks. The data migration unit 119 moves data to an empty area in another SSP server 103 if the availability term of the current backup disk expires, or if it becomes necessary during the availability term to return the disk which is currently used as an idle resource. The user need not be aware of any data migration executed, since the pertinent processing completes within the MSP, which results in a reduced operational cost for the user.
  • The user conditions TBL 122 stores the user's preferred conditions such as cost and availability term. This table will be described later in detail with reference to FIG. 4. The data allocation TBL 123 stores information about how SSP disks are allocated to user data. In order to raise the safety of the data, the data allocation TBL is duplicated; the other copy is held in a separate, alternative MSP server. If the MSP server 101 becomes unavailable due to disaster or failure, the data allocation TBL 123 in the alternative MSP server is accessed. This table will be described later in detail with reference to FIG. 3. The SSP conditions TBL 124 stores disk lending conditions such as empty capacity and cost. This table will be described later in detail with reference to FIG. 5. The disk dividing TBL 125 stores how the lent disks are divided by the MSP into partitions. This table will be described later in detail with reference to FIG. 6.
  • Each USR server 102 comprises a service demanding unit 111 and a data transfer unit 112. The USR server 102-1 manages user data A (131) while the USR server 102-2 manages user data B (132) and user data C (133).
  • When a USR server 102 uses the storage service, the USR server 102 issues a backup demanding request to notify the MSP of the required disk capacity, preferred cost, term of use and number of distributions. The number of distributions, which may be specified arbitrarily by the user, is an index determining the number of sites to which the data is apportioned for storage. Generally, in a local site, RAID technology is used so that accesses are dispersed to a plurality of storage devices such as hard disks. In the case of the present invention, data is apportioned to a plurality of separate SSP disks connected via a network. How many SSP disks are to be used is determined by the number of distributions, a variable specified by the user. Reliability can be raised by using these disks as if they were a single disk.
  • Once some disks are judged appropriate for backup by the MSP server 101, the data is sent to the MSP server 101 from the data transfer unit 112. To restore data from backup disks, the USR server 102 issues a restore demanding request to the MSP server 101 and receives the data from the MSP server 101 via the data transfer unit 112.
  • Each SSP server 103 comprises a resource registering unit 120 and a data transfer unit 121. The SSP server 103-1 manages disk A (141), disk B (142) and disk C (143). The SSP server 103-2 manages disk D (144) and disk E (145). The SSP server 103-3 manages disk F (146), disk G (147) and disk H (148).
  • Of the disks managed by a SSP server 103, those disks available for the backup storage service are registered to the MSP server 101 by the resource registering unit 120. The data transfer unit 121 manages data exchange between the USR server and the MSP server.
  • For example, assume that the backup storage service is to be applied to the user data A (131) in the USR server 102-1 and the user data B (132) in the USR server 102-2. Hereinafter, a disk means a logically independent storage device. Empty disks lent as idle resources are the disk B (142) and disk C (143) under management of the SSP server 103-1, the disk D (144) under management of the SSP server 103-2 and the disk F (146) and disk G (147) under management of the SSP server 103-3. A total of five disks are offered for the backup storage service.
  • Disk B is divided into partition 1 (142-1), partition 2 (142-2) and partition 3 (142-3); Disk C is divided into partition 1 (143-1) and partition 2 (143-2); Disk D is divided into partition 1 (144-1), partition 2 (144-2) and partition 3 (144-3); Disk F is divided into partition 1 (146-1) and partition 2 (146-2); and Disk G is divided into partition 1 (147-1), partition 2 (147-2) and partition 3 (147-3).
  • User data A (131), after undergoing the matching processing and then being divided into five records in the MSP server 101, is stored to disk B partition 1 (142-1) in the SSP server 103-1, disk C partition 1 (143-1) in the SSP server 103-1, disk D partition 1 (144-1) in the SSP server 103-2, disk F partition 1 (146-1) in the SSP server 103-3 and disk G partition 1 (147-1) in the SSP server 103-3.
  • FIG. 2 shows how data on a user disk may be divided and stored for backup in the aforementioned embodiment. An original disk 201 is divided into a plurality of records according to the capacities of the backup disks. The original disk 201 corresponds to user data A (131) or user data B (132). The backup disks are represented by the disks given numerals 211, 212, 213, 214 and 215, respectively. Each record, labeled with the symbol R, may be a block, a character or a bit, where one block is a unit of data consisting of characters. The resultant records are sequentially stored on the respective disk partitions in accordance with the number of distributions. In the case of FIG. 2, where the specified number of distributions is assumed to be 5, the records are sequentially stored on the five backup disks 211, 212, 213, 214 and 215. Also note that in this example an ECC or parity record is stored for every four data records (R1, R2, R3 and R4, for instance). If one of four adjacent records is lost, its corresponding ECC or parity record (code information) can be used to regenerate the lost record. Thus, R1, R2, R3, R4 and their ECC or parity are stored on backup disks 211, 212, 213, 214 and 215, respectively. In this manner, the divided records are sequentially stored on the backup disks. By combining the apportioning of data among a plurality of disks according to the number of distributions with a parity check or ECC technique, this method aims not only to improve access performance but also to secure the data.
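  • As a concrete illustration of this scheme, the following Python sketch divides a byte string into fixed-size records, adds one XOR parity record per group of (distributions−1) data records, and deals the records round-robin onto the backup disks. It is illustrative only: the patent prescribes no implementation, and the 4-byte record size and the XOR parity are assumptions standing in for the unspecified ECC or parity code.

```python
def divide_with_parity(data: bytes, distributions: int, record_size: int = 4):
    """Split `data` into records and add one parity record per group of
    (distributions - 1) data records, dealing all records round-robin
    over `distributions` backup disks, as illustrated in FIG. 2."""
    data += b"\x00" * ((-len(data)) % record_size)  # pad to a whole record
    records = [data[i:i + record_size] for i in range(0, len(data), record_size)]

    disks = [[] for _ in range(distributions)]  # one record list per backup disk
    slot = 0
    for start in range(0, len(records), distributions - 1):
        group = records[start:start + distributions - 1]
        parity = bytearray(record_size)
        for rec in group:
            for i, byte in enumerate(rec):
                parity[i] ^= byte               # XOR parity over the group
        for rec in group + [bytes(parity)]:
            disks[slot % distributions].append(rec)
            slot += 1
    return disks

# With 5 distributions, R1..R4 and their parity land on disks 211..215:
backup_disks = divide_with_parity(b"ABCDEFGHIJKLMNOP", distributions=5)
```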
  • Since the ECC or parity information is stored as a separate record, even if one disk becomes unavailable due to a failure or the like, it is possible to restore the data from the records on the other disks. In addition, since each disk is a separate SSP disk, it is not possible to restore the whole data from any single disk, which brings the merit that the security of important data can be protected. Generally, making each record smaller raises security, although this requires longer processing time.
  • FIG. 3 is an example of the data allocation TBL 123 showing how user data is allocated to backup disks. In the data allocation TBL 123, data 401 contains a user server name and a disk name, indicating which user data is backed up. Each backup disk 402 contains a SSP server name, disk name and a partition name, indicating which partition is hit by the matching unit 115. In the case of FIG. 3, user data stored on USR1-A is divided into five sets and stored respectively in SSP1-B1, SSP1-C1, SSP2-D1, SSP3-F1 and SSP3-G1. Likewise, user data stored on USR2-B is divided into five sets and stored respectively in SSP1-B2, SSP1-C2, SSP2-D2, SSP3-F2 and SSP3-G2.
  • FIG. 4 is an example of the user conditions TBL 122 where user-specified conditions for using the backup storage service are stored. In the user conditions TBL 122, each user data 501 contains a user server name and a disk name, indicating which user data is concerned. Each capacity 502 contains the size of the data. Each cost 503 contains a monthly rental fee per unit capacity. Each term (start-end) 504 contains two dates between which the service is to be used, i.e., the user data is to be backed up. Each number of distributions 505 contains an index indicating the number of disks to which the user data, including ECC or parity records, is to be apportioned. Specifying a higher value for the number of distributions results in higher safety, since the user data will be apportioned among a larger number of disks.
  • FIG. 5 is an example of the SSP conditions TBL 124 where SSP-specified conditions for providing the backup storage service are stored. In the SSP conditions TBL 124, each SSP disk 601 contains a SSP name and a disk name, identifying a disk registered by the SSP. Each capacity 602 indicates the empty capacity of the disk. Each cost 603 indicates the monthly rental fee per unit capacity charged for the disk. Each term (start-end) 604 contains two dates between which the disk is available. Each installation site 605 contains the name of the site where the disk resides.
  • FIG. 6 is an example of the disk dividing TBL 125, which stores how available SSP disks are divided into partitions before being lent to users when the storage service is used in the present embodiment. In the disk dividing TBL 125, each disk 701 contains a SSP name and a disk name, identifying a SSP disk registered by the SSP. Each partition 702 contains the name of a partition on the disk. In FIG. 6, disk SSP1-B is divided into three partitions named B1, B2 and B3, respectively. Likewise, SSP1-C is divided into two partitions named C1 and C2, respectively. Information in each partition 702 field corresponds to the logical block number associated with the partition within the disk.
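  • To make the four tables concrete, the sketch below models one row of each as a Python dataclass. This is a hypothetical reading of FIGS. 3 to 6; every field name is an assumption derived from the column descriptions above.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UserCondition:              # one row of the user conditions TBL 122 (FIG. 4)
    user_data: str                # user server name + disk name, e.g. "USR1-A"
    capacity_gb: float            # size of the data to back up
    cost: float                   # acceptable monthly rental fee per unit capacity
    term: tuple[date, date]       # (start, end) of the backup period
    distributions: int            # number of disks to apportion the data over

@dataclass
class SSPCondition:               # one row of the SSP conditions TBL 124 (FIG. 5)
    ssp_disk: str                 # SSP name + disk name, e.g. "SSP1-B"
    capacity_gb: float            # empty capacity offered for lending
    cost: float                   # monthly rental fee per unit capacity
    term: tuple[date, date]       # (start, end) of disk availability
    site: str                     # installation site

@dataclass
class Allocation:                 # one row of the data allocation TBL 123 (FIG. 3)
    user_data: str                # e.g. "USR1-A"
    backup_partitions: list[str]  # e.g. ["SSP1-B1", "SSP1-C1", "SSP2-D1", ...]

@dataclass
class DiskDividing:               # one row of the disk dividing TBL 125 (FIG. 6)
    ssp_disk: str                 # e.g. "SSP1-B"
    partitions: list[str]         # e.g. ["B1", "B2", "B3"]
```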
  • FIG. 7 shows a flowchart describing how the matching unit 115 operates to search the user conditions and SSP conditions for mutually conforming combinations. When the operation of the matching unit 115 is started, the conditions for using the backup service are obtained from the user conditions TBL 122 (Step 300). Then the SSP conditions for providing the backup service are obtained from the SSP conditions TBL 124 (Step 301). Then, the minimum backup capacity per disk is calculated by dividing the capacity to back up by (the number of distributions−1) (Step 302); the subtraction of 1 reflects that one of the distributed records in each group is an ECC or parity record. The matching unit 115 searches for appropriate SSP disks which meet this minimum backup disk capacity and the other conditions. At first, the condition level is set to 1 before condition level judgment is done (Step 303). The search is done at each condition level.
  • At condition level 1, the minimum backup disk capacity is compared with the capacity 602 of a SSP disk (Step 306), and if the minimum backup disk capacity is smaller, the term (start and end) during which the SSP disk is available is compared with the term (start and end) during which the user wants to back up the data (Step 307). If the term during which the user wants to back up the data is within the term during which the SSP disk is available, the identifier of the SSP disk is stored in memory (Step 309). This judgment flow is executed for each SSP disk registered (Step 305). Of the hit SSP disks, the lowest cost SSP disk is selected (Step 310) and the minimum backup disk capacity is allocated from the selected SSP disk (Step 311). The hit SSP disk is then excluded from the object of comparison in Steps 306 and 307 (Step 312). This loop is repeated as many times as the number of distributions (Step 304) so that as many conforming backup disks as the number of distributions are detected. The total cost for using the backup disks hit in this manner is calculated and compared with the user cost (Step 313). If the calculated total cost exceeds the user cost, the condition level is incremented by 1 (Step 314) to execute another search flow.
  • At condition level 2, the date from which the SSP disk is available is compared with the date from which the user wants to back up the data (Step 308). Unlike in Step 307, the date until which the SSP disk is available is not compared with the date until which the user wants to back up the data. Then, the total cost for using the backup disks hit in this manner is calculated and compared with the user cost (Step 313). If the calculated total cost exceeds the user cost, the condition level is incremented by 1 (Step 314) to execute another search flow.
  • At condition level 3, it is not required to distribute the data to different SSP disks and therefore it is allowed to store the data on the same SSP disk. That is, backup disk search is repeated without executing Step 312 where each hit SSP disk is excluded from the object of comparison.
  • At condition level 3, it is possible that all the user data is allocated to a single SSP disk. Instead of condition level 3, the processing procedure may also be altered in such a manner that 1 is subtracted from the number of distributions specified by the user and then condition level 1 is executed again from Step 302. In this case, the total cost may be reduced to the user cost or below without ignoring the user-desired number of distributions. Note that concentrating the user data onto only one SSP disk or SSP server 103 is also an implementation of the present invention, although the safety of the user data is sacrificed.
  • If backup disks are determined as mentioned above, the allocation and disk dividing are registered (Step 315). That is, the SSP disk partitions allocated according to the minimum backup disk capacity are registered to the disk dividing TBL 125 and data allocation TBL 123. Note that if the matching unit 115 fails to find any SSP disk conforming to the user conditions even at condition level 3, the matching unit 115 terminates its processing after notifying the USR server 102 of the failure.
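  • The condition level 1 search can be summarized in code. The sketch below is an assumed rendering of the FIG. 7 flowchart, reusing the row classes sketched after FIG. 6; the budget comparison on the last line assumes that the user cost 503 and the SSP cost 603 are expressed in the same per-unit terms.

```python
def match_level1(user: UserCondition, ssp_rows: list[SSPCondition]):
    """Condition level 1 of FIG. 7: returns the selected SSP disks,
    or None if no conforming combination stays within the user's cost."""
    # Step 302: one record in each group is ECC/parity, so the minimum
    # backup capacity per disk is the data size over (distributions - 1).
    min_capacity = user.capacity_gb / (user.distributions - 1)
    candidates = list(ssp_rows)
    selected = []
    for _ in range(user.distributions):            # Step 304 loop
        hits = [d for d in candidates
                if min_capacity <= d.capacity_gb   # Step 306: capacity check
                and d.term[0] <= user.term[0]      # Step 307: the user's term
                and user.term[1] <= d.term[1]]     # lies within the disk's term
        if not hits:
            return None
        best = min(hits, key=lambda d: d.cost)     # Step 310: lowest cost hit
        selected.append(best)
        best.capacity_gb -= min_capacity           # Step 311: allocate capacity
        candidates.remove(best)                    # Step 312: exclude the hit
    # Step 313: total cost of the allocation versus the user's budget.
    total = sum(min_capacity * d.cost for d in selected)
    return selected if total <= user.capacity_gb * user.cost else None
```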
  • FIG. 8 is a time chart indicating how exchanges are done when data is backed up in the embodiment. The resource management unit 114 in the MSP server 101 requests SSP conditions from the resource registering unit 120 in the SSP server 103 (1). When an empty disk is registered in the SSP server 103, the resource registering unit 120 registers the conditions with the resource management unit 114 (2). To make the empty disk available, the resource management unit 114 requests the resource registering unit 120 to reserve the empty disk for backup (3).
  • For the USR server 102 to use the backup storage service, the service demanding unit 111 issues a service demanding request to the service reception unit 113 and notifies the unit of the user-desired conditions (4). Upon receiving the request, the service reception unit 113 issues a matching processing request to the matching unit 115 (5). The matching unit 115 searches the user conditions and SSP conditions for mutually conforming combinations. After the search, the matching unit 115 notifies the resource management unit 114 of the matching result (6). The resource management unit 114 refers to the data allocation TBL 123 and disk dividing TBL 125 and issues a rental request to the resource registering unit 120 in each backup SSP server 103 (7). The rental request includes the specification of which disk partitions are to be used for backup. After a rental request is issued to each backup SSP server 103, the resource management unit 114 notifies the matching unit 115 of the completion (8). Then, the matching unit 115 issues a rental settlement notice to the service reception unit 113 (9). Finally, the service reception unit 113 issues a rental settlement notice to the service demanding unit 111 (10).
  • Upon receiving the rental settlement notice, the USR server 102 transmits the original data from the data transfer unit 112 to the data transfer unit 116 in the MSP server 101 (11). The transfer unit 116 passes the data to the data dividing unit 117 (12).
  • According to the data allocation TBL 123, the data dividing unit 117 divides the data into records for distribution to the backup SSP servers 103 and transmits the records to the data transfer unit 116 (13). The data transfer unit 116 transfers the records to the data transfer unit 121 of each backup SSP server 103 (14).
  • FIG. 9 is a time chart indicating how exchanges are done when data is restored in the embodiment. For a USR server 102 to restore data by using the backup storage service, the service demanding unit 111 issues a service demanding request to the service reception unit 113 and specifies which user data 401 is to be restored (1). Upon receiving the service demanding request, the service reception unit 113 requests the resource management unit 114 to restore the data (2).
  • According to the data allocation TBL 123 and disk dividing TBL 125, the resource management unit 114 requests the resource registering unit 120 in each backup SSP server 103 to transfer the target data (3). This request includes the specification of the backup disk partition from which the data is to be transferred. The data transfer unit 121 passes the data to the data transfer unit 116 in the MSP server 101 (4). The transferred data is passed to the data restore unit 118, where restore processing is done (5). If the restore processing succeeds, the data is transferred from the data restore unit 118 via the data transfer unit 116 (6) to the data transfer unit 112 in the USR server 102 (7).
  • Note that if records cannot be obtained from a SSP server 103 due to failure or the like, the data restore unit 118 regenerates the lost records by using ECC or parity information.
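  • Assuming the XOR parity sketched after FIG. 2, regenerating a lost record reduces to XOR-ing the surviving members of its parity group, as this illustrative fragment shows.

```python
from typing import Optional

def regenerate(group: list[Optional[bytes]], record_size: int = 4) -> bytes:
    """Rebuild the single missing record (marked None) of a parity group
    by XOR-ing the surviving data records with the parity record."""
    rebuilt = bytearray(record_size)
    for rec in group:
        if rec is not None:
            for i, byte in enumerate(rec):
                rebuilt[i] ^= byte
    return bytes(rebuilt)

# Since parity = R1 ^ R2 ^ R3 ^ R4, a lost R2 is recovered as:
#   regenerate([r1, None, r3, r4, parity]) == r2
```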
  • FIG. 10 is a time chart indicating how processing is done for data migration. If the available term of a resource expires, the MSP server 101 must perform data migration. The resource management unit 114 checks the term 604 fields of the SSP conditions TBL 124, and if the available term of any SSP disk expires, it issues a data migration request to the data migration unit 119 (1). The data migration unit 119 issues a matching processing request for the user data to be moved to the matching unit 115 (2). The user data to be moved is identified by referring to the disk dividing TBL 125 and data allocation TBL 123.
  • The matching unit 115 searches the user conditions and SSP conditions for mutually conforming combinations. Note that the data to be moved is limited to the data stored in resources whose availability term has expired. Therefore, the minimum backup capacity and the number of distributions are set for the data to be moved, without using the initially set minimum backup capacity and number of distributions. At Step 302, the number of SSP disks whose availability term has expired is used instead of (the number of distributions−1). The cost comparison at Step 313 is omitted, too. After the search, the matching unit 115 notifies the resource management unit 114 of the matching result (3), and the resource management unit 114 issues a rental notice to the resource registering unit 120 of, for example, the SSP server 103-2 (4). The rental notice includes the specification of a new disk partition for backup. The resource management unit 114 then requests the resource registering unit 120 of the SSP server 103-1 (whose availability term has expired) to collect the data (5). The collection request includes the specification of the disk partition from which the data is to be collected. The data transfer unit 121 transfers the data to the data transfer unit 116 of the MSP server 101 (6).
  • The data migration unit 119 is notified that the data has been transferred to the MSP server 101 (7). The data migration unit 119 notifies the data transfer unit 116 of the address of the data destination (the address of the SSP server 103-2, disk identifier, partition identifier, logical block number of the partition in the disk, etc.) (8). Then, the data transfer unit 116 transfers the received data to the data transfer unit 121 of the data destination SSP server 103-2 (9). When migration of the user data which must be moved to new backup disks is complete, the data migration unit 119 updates the disk dividing TBL 125 and data allocation TBL 123 so as to indicate the new backup disks. In addition, the resource management unit 114 of the MSP server 101 issues a rental ending request to the resource registering unit 120 of the SSP server 103-1 formerly used for backup (10). This request includes the specification of the disk partition whose rental is to be terminated.
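  • The re-selection at the heart of this migration flow might look as follows. This is an assumed reading of the modified matching described above (function and parameter names are hypothetical), again reusing the row classes sketched after FIG. 6.

```python
from datetime import date

def select_replacements(capacity_to_move: float, expired_disks: list[str],
                        user: UserCondition, ssp_rows: list[SSPCondition],
                        today: date):
    """FIG. 10 re-selection: the per-disk capacity is recomputed over the
    number of expired disks (the modified Step 302), and the cost check
    of Step 313 is skipped. Returns one replacement per expired disk."""
    min_capacity = capacity_to_move / len(expired_disks)
    hits = [d for d in ssp_rows
            if d.ssp_disk not in expired_disks
            and min_capacity <= d.capacity_gb
            and d.term[0] <= today             # available from now on
            and user.term[1] <= d.term[1]]     # through the user's end date
    hits.sort(key=lambda d: d.cost)            # cheapest replacements first
    if len(hits) < len(expired_disks):
        return None                            # no conforming replacement set
    return hits[:len(expired_disks)]
```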
  • FIG. 11 is a time chart indicating how exchanges are done when a disk is returned. If a SSP server 103 requests the MSP server 101 to return a lent storage device within its availability term, the MSP server 101 must execute data migration, too. Upon receiving a storage return request from the resource registering unit 120 of, for example, the SSP server 103-1 (0), the resource management unit 114 starts the data migration procedure based on this request. This request includes the identifier of a disk which is to be returned. The subsequent procedure is same as described with reference to FIG. 10.
  • For user data under generation management, it is usual that a plurality of backup copies are held for the corresponding generations. Term 504 fields can be set so as to always retain a predetermined number of such backup copies. In this case, the MSP server 101 monitors the term 504 fields of the user conditions TBL 122, and if the term of a disk completes or expires, issues a rental ending request to the SSP server 103 having the disk in order to release its partitions.
  • According to the present invention, it is possible to provide users with a data backup storage device which shows high safety against disasters, etc. and is advantageous in terms of cost.

Claims (12)

1. A computer system comprising:
a plurality of storage systems; and
a first server coupled to the plurality of storage systems through a network,
wherein each storage system comprises:
a plurality of storage devices, and
a second server which manages the plurality of storage devices,
wherein the first server comprises:
a matching unit for searching for one or more storage devices to be selected whose storage condition satisfies a data condition indicated by a user server, the storage condition including an available term during which the storage device is available,
a data dividing unit for dividing data received from the user server to preserve the data in a distributed manner over the selected storage devices, and
a data transfer unit for transmitting the divided data to the storage system having the selected storage devices,
wherein, when the first server finds that the available term of any of the selected storage devices has expired, the matching unit re-selects other storage devices whose storage condition satisfies the data condition, and
wherein the data transfer unit receives the divided data from the storage devices whose available term has expired and transmits the divided data to the re-selected storage devices.
2. The computer system according to claim 1, wherein the storage condition further includes:
information of at least one of available storage areas and a cost to use the available storage area.
3. The computer system according to claim 1, wherein the data dividing unit divides the data based on a number of distributions included in the data condition.
4. The computer system according to claim 1, wherein the second server examines whether the available term of any of the selected storage devices has expired, and
wherein, when the second server finds that the available term of the selected storage device has expired, the second server transmits the divided data stored in that storage device to the first server.
5. The computer system according to claim 1, wherein the data dividing unit determines a number of distributions based on available storage areas to store the divided data.
6. The computer system according to claim 1, wherein the first server further comprises:
a data restore unit,
wherein when the first server receives a read request from the user server, the first server issues a read request to the storage system where the divided data exists,
wherein the data transfer unit receives the divided data from the second server, and
wherein the data restore unit restores the data on the user server based on the divided data received from the second server.
7. A data storing method related to a computer system having a service server and a plurality of storage systems each of which houses a plurality of storage devices, said method, realized by the service server, comprising:
searching for one or more storage devices to be selected whose storage condition satisfies a data condition indicated by a client server, the storage condition including an available term during which the storage device is available;
dividing data received from the client server to preserve the data in a distributed manner over the selected storage devices; and
transmitting the divided data to the storage system having the selected storage devices,
wherein, when it is found that the available term of any of the selected storage devices has expired, the searching step re-selects other storage devices whose storage condition satisfies the data condition, and
wherein the transmitting step further receives the divided data from the storage devices whose available term has expired and transmits the divided data to the re-selected storage devices.
8. The data storing method according to claim 7, wherein the storage condition further includes information of at least one of available storage areas and a cost to use the available storage area.
9. The data storing method according to claim 7, wherein the dividing step divides the data based on a number of distributions included in the data condition.
10. The data storing method according to claim 7, wherein the storage system that stores the divided data examines whether the available term of any of the selected storage devices has expired, and
wherein, when the storage system finds that the available term of the selected storage device has expired, the storage system transmits the divided data stored in that storage device to the service server.
11. The data storing method according to claim 7, wherein the dividing step determines a number of distributions based on available storage areas to store the divided data.
12. The data storing method according to claim 7, wherein the method further comprises:
a data restoring step,
wherein, when receiving a read request from the client server, the service server issues a read request to the storage system where the divided data exists,
wherein the transmitting step further receives the divided data from the storage system, and
wherein the data restoring step restores the data on the client server based on the divided data received from the storage system.
US12/222,192 2002-04-26 2008-08-05 Data back up method and its programs for permitting a user to obtain information relating to storage areas of the storage systems and select one or more storage areas which satisfy a user condition based on the information Abandoned US20080301132A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/222,192 US20080301132A1 (en) 2002-04-26 2008-08-05 Data back up method and its programs for permitting a user to obtain information relating to storage areas of the storage systems and select one or more storage areas which satisfy a user condition based on the information

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
JP2002-125447 2002-04-26
JP2002125447A JP2003316635A (en) 2002-04-26 2002-04-26 Method for backing up data and program therefor
US10/367,767 US7024529B2 (en) 2002-04-26 2003-02-19 Data back up method and its programs
US11/197,499 US20050278299A1 (en) 2002-04-26 2005-08-05 Data back up method and its programs
US12/222,192 US20080301132A1 (en) 2002-04-26 2008-08-05 Data back up method and its programs for permitting a user to obtain information relating to storage areas of the storage systems and select one or more storage areas which satisfy a user condition based on the information

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/197,499 Continuation US20050278299A1 (en) 2002-04-26 2005-08-05 Data back up method and its programs

Publications (1)

Publication Number Publication Date
US20080301132A1 true US20080301132A1 (en) 2008-12-04

Family

ID=29243769

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/367,767 Expired - Fee Related US7024529B2 (en) 2002-04-26 2003-02-19 Data back up method and its programs
US11/197,499 Abandoned US20050278299A1 (en) 2002-04-26 2005-08-05 Data back up method and its programs
US12/222,192 Abandoned US20080301132A1 (en) 2002-04-26 2008-08-05 Data back up method and its programs for permitting a user to obtain information relating to storage areas of the storage systems and select one or more storage areas which satisfy a user condition based on the information

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US10/367,767 Expired - Fee Related US7024529B2 (en) 2002-04-26 2003-02-19 Data back up method and its programs
US11/197,499 Abandoned US20050278299A1 (en) 2002-04-26 2005-08-05 Data back up method and its programs

Country Status (2)

Country Link
US (3) US7024529B2 (en)
JP (1) JP2003316635A (en)

Families Citing this family (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003316635A (en) * 2002-04-26 2003-11-07 Hitachi Ltd Method for backing up data and program therefor
WO2004090742A1 (en) 2003-04-03 2004-10-21 Commvault Systems, Inc. System and method for dynamically sharing storage volumes in a computer network
US7287086B2 (en) * 2003-07-09 2007-10-23 International Business Machines Corporation Methods, systems and computer program products for controlling data transfer for data replication or backup based on system and/or network resource information
JP2005165516A (en) * 2003-12-01 2005-06-23 Hitachi Ltd Storage controller, storage system and control method for storage system
US20050172072A1 (en) * 2004-01-30 2005-08-04 Cochran Robert A. Multiple site data replication
JP2005301684A (en) * 2004-04-12 2005-10-27 Hitachi Ltd Storage system
US7330997B1 (en) * 2004-06-03 2008-02-12 Gary Odom Selective reciprocal backup
JP2006133955A (en) * 2004-11-04 2006-05-25 Nec Corp Backup system and method for data inside mobile communication terminal, and mobile communication terminal and backup device used therefor
CA2583912A1 (en) 2004-11-05 2006-05-18 Commvault Systems, Inc. System and method to support single instance storage operations
US20060288057A1 (en) * 2005-06-15 2006-12-21 Ian Collins Portable data backup appliance
US8069271B2 (en) * 2005-10-12 2011-11-29 Storage Appliance Corporation Systems and methods for converting a media player into a backup device
US20070162271A1 (en) * 2005-10-12 2007-07-12 Storage Appliance Corporation Systems and methods for selecting and printing data files from a backup system
US8195444B2 (en) * 2005-10-12 2012-06-05 Storage Appliance Corporation Systems and methods for automated diagnosis and repair of storage devices
US7822595B2 (en) * 2005-10-12 2010-10-26 Storage Appliance Corporation Systems and methods for selectively copying embedded data files
US7899662B2 (en) * 2005-10-12 2011-03-01 Storage Appliance Corporation Data backup system including a data protection component
US7813913B2 (en) * 2005-10-12 2010-10-12 Storage Appliance Corporation Emulation component for data backup applications
US20080028008A1 (en) * 2006-07-31 2008-01-31 Storage Appliance Corporation Optical disc initiated data backup
US7844445B2 (en) * 2005-10-12 2010-11-30 Storage Appliance Corporation Automatic connection to an online service provider from a backup system
US20070091746A1 (en) * 2005-10-12 2007-04-26 Storage Appliance Corporation Optical disc for simplified data backup
US7702830B2 (en) * 2005-10-12 2010-04-20 Storage Appliance Corporation Methods for selectively copying data files to networked storage and devices for initiating the same
US7818160B2 (en) * 2005-10-12 2010-10-19 Storage Appliance Corporation Data backup devices and methods for backing up data
JP2007140887A (en) 2005-11-18 2007-06-07 Hitachi Ltd Storage system, disk array device, method of presenting volume, and method of verifying data consistency
JP2007219611A (en) * 2006-02-14 2007-08-30 Hitachi Ltd Backup device and backup method
US7941404B2 (en) 2006-03-08 2011-05-10 International Business Machines Corporation Coordinated federated backup of a distributed application environment
JP4508137B2 (en) * 2006-03-10 2010-07-21 セイコーエプソン株式会社 Data backup processing apparatus and method
US20080082453A1 (en) * 2006-10-02 2008-04-03 Storage Appliance Corporation Methods for bundling credits with electronic devices and systems for implementing the same
US20080172487A1 (en) * 2007-01-03 2008-07-17 Storage Appliance Corporation Systems and methods for providing targeted marketing
US20080172442A1 (en) * 2007-01-17 2008-07-17 Inventec Corporation Multi-computer system and configuration method therefor
JP4853717B2 (en) * 2007-02-23 2012-01-11 日本電気株式会社 Server migration plan creation system, server migration plan creation method
US20080226082A1 (en) * 2007-03-12 2008-09-18 Storage Appliance Corporation Systems and methods for secure data backup
US20090030955A1 (en) * 2007-06-11 2009-01-29 Storage Appliance Corporation Automated data backup with graceful shutdown for vista-based system
US20090031298A1 (en) * 2007-06-11 2009-01-29 Jeffrey Brunet System and method for automated installation and/or launch of software
US8589354B1 (en) * 2008-12-31 2013-11-19 Emc Corporation Probe based group selection
US8972352B1 (en) 2008-12-31 2015-03-03 Emc Corporation Probe based backup
US8788462B1 (en) * 2008-12-31 2014-07-22 Emc Corporation Multi-factor probe triggers
JP2010287104A (en) * 2009-06-12 2010-12-24 Nec Personal Products Co Ltd File management device, method and program
US9047217B2 (en) * 2009-08-27 2015-06-02 Cleversafe, Inc. Nested distributed storage unit and applications thereof
US8413137B2 (en) * 2010-02-04 2013-04-02 Storage Appliance Corporation Automated network backup peripheral device and method
US8423735B2 (en) 2010-05-21 2013-04-16 International Business Machines Corporation Space reservation in a deduplication system
US8555142B2 (en) * 2010-06-22 2013-10-08 Cleversafe, Inc. Verifying integrity of data stored in a dispersed storage memory
WO2012044924A1 (en) * 2010-09-30 2012-04-05 Verisign, Inc. System for configurable reporting of network data and related method
WO2012042509A1 (en) * 2010-10-01 2012-04-05 Peter Chacko A distributed virtual storage cloud architecture and a method thereof
JP5647058B2 (en) * 2011-04-19 2014-12-24 佐藤 美代子 Information processing system and data backup method
EP2546747A1 (en) * 2011-07-13 2013-01-16 Thomson Licensing Method for optimization of data transfer
US10095587B1 (en) * 2011-12-23 2018-10-09 EMC IP Holding Company LLC Restricted data zones for backup servers
GB2501098A (en) * 2012-04-12 2013-10-16 Qatar Foundation Fragmenting back up copy for remote storage
CN104685475A (en) * 2012-08-31 2015-06-03 惠普发展公司,有限责任合伙企业 Selecting a resource to be used in a data backup or restore operation
CN102968460B (en) * 2012-11-01 2015-09-02 陶光毅 Based on CD database storage system and utilize the method for this system
US9298617B2 (en) 2013-04-16 2016-03-29 International Business Machines Corporation Parallel destaging with replicated cache pinning
US9298398B2 (en) 2013-04-16 2016-03-29 International Business Machines Corporation Fine-grained control of data placement
US9423981B2 (en) 2013-04-16 2016-08-23 International Business Machines Corporation Logical region allocation with immediate availability
US9104597B2 (en) 2013-04-16 2015-08-11 International Business Machines Corporation Destaging cache data using a distributed freezer
US9329938B2 (en) * 2013-04-16 2016-05-03 International Business Machines Corporation Essential metadata replication
US9104332B2 (en) 2013-04-16 2015-08-11 International Business Machines Corporation Managing metadata and data for a logical volume in a distributed and declustered system
US9619404B2 (en) 2013-04-16 2017-04-11 International Business Machines Corporation Backup cache with immediate availability
US11593223B1 (en) 2021-09-02 2023-02-28 Commvault Systems, Inc. Using resource pool administrative entities in a data storage management system to provide shared infrastructure to tenants

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1124846A (en) 1997-07-03 1999-01-29 Hitachi Ltd Backup system using network
JP2000082008A (en) 1998-09-04 2000-03-21 Kenwood Corp Data security method
JP2002007304A (en) 2000-06-23 2002-01-11 Hitachi Ltd Computer system using storage area network and data handling method therefor

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4310883A (en) * 1978-02-13 1982-01-12 International Business Machines Corporation Method and apparatus for assigning data sets to virtual volumes in a mass store
US5410598A (en) * 1986-10-14 1995-04-25 Electronic Publishing Resources, Inc. Database usage metering and protection system and method
US5131087A (en) * 1988-12-29 1992-07-14 Storage Technology Corporation Computer system having apparatus for automatically redistributing data records stored therein
US5276867A (en) * 1989-12-19 1994-01-04 Epoch Systems, Inc. Digital data storage system with improved data migration
US5491810A (en) * 1994-03-01 1996-02-13 International Business Machines Corporation Method and system for automated data storage system space allocation utilizing prioritized data set parameters
US5652613A (en) * 1995-06-07 1997-07-29 Lazarus; David Beryl Intelligent electronic program guide memory management system and method
US6263350B1 (en) * 1996-10-11 2001-07-17 Sun Microsystems, Inc. Method and system for leasing storage
US6493804B1 (en) * 1997-10-01 2002-12-10 Regents Of The University Of Minnesota Global file system and data storage device locks
US6785768B2 (en) * 1997-12-24 2004-08-31 Avid Technology, Inc. Computer system and process for transferring streams of data between multiple storage units and multiple applications in a scalable and reliable manner
US6516348B1 (en) * 1999-05-21 2003-02-04 Macfarlane Druce Ian Craig Rattray Collecting and predicting capacity information for composite network resource formed by combining ports of an access server and/or links of wide area network
US6513101B1 (en) * 2000-01-04 2003-01-28 International Business Machines Corporation Expiring host selected scratch logical volumes in an automated data storage library
US6775703B1 (en) * 2000-05-01 2004-08-10 International Business Machines Corporation Lease based safety protocol for distributed system with multiple networks
US6775792B2 (en) * 2001-01-29 2004-08-10 Snap Appliance, Inc. Discrete mapping of parity blocks
US6988087B2 (en) * 2001-04-16 2006-01-17 Hitachi, Ltd. Service method of a rental storage and a rental storage system
US20030204571A1 (en) * 2002-04-24 2003-10-30 International Business Machines Corporation Distributed file system using scatter-gather
US7024529B2 (en) * 2002-04-26 2006-04-04 Hitachi, Ltd. Data back up method and its programs
US20040205310A1 (en) * 2002-06-12 2004-10-14 Hitachi, Ltd. Method and apparatus for managing replication volumes
US20040162940A1 (en) * 2003-02-17 2004-08-19 Ikuya Yagisawa Storage system
US20040181641A1 (en) * 2003-03-12 2004-09-16 International Business Machines Corporation System, method and computer program product to automatically select target volumes for a fast copy to optimize performance and availability
US20040215831A1 (en) * 2003-04-25 2004-10-28 Hitachi, Ltd. Method for operating storage system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070214330A1 (en) * 2006-03-10 2007-09-13 Seiko Epson Corporation Method for processing backup, backup processing device, and storage medium storing program
US7698520B2 (en) 2006-03-10 2010-04-13 Seiko Epson Corporation Method for processing backup, devices for backup processing, and storage mediums for storing a program for operating a backup processing device
US20100274762A1 (en) * 2009-04-24 2010-10-28 Microsoft Corporation Dynamic placement of replica data
US8560639B2 (en) * 2009-04-24 2013-10-15 Microsoft Corporation Dynamic placement of replica data
US8769049B2 (en) 2009-04-24 2014-07-01 Microsoft Corporation Intelligent tiers of backup data
US8769055B2 (en) 2009-04-24 2014-07-01 Microsoft Corporation Distributed backup and versioning
US8935366B2 (en) 2009-04-24 2015-01-13 Microsoft Corporation Hybrid distributed and cloud backup architecture
US20140156666A1 (en) * 2012-11-30 2014-06-05 Futurewei Technologies, Inc. Method for Automated Scaling of a Massive Parallel Processing (MPP) Database
US8799284B2 (en) * 2012-11-30 2014-08-05 Futurewei Technologies, Inc. Method for automated scaling of a massive parallel processing (MPP) database

Also Published As

Publication number Publication date
US7024529B2 (en) 2006-04-04
US20050278299A1 (en) 2005-12-15
US20030204690A1 (en) 2003-10-30
JP2003316635A (en) 2003-11-07

Similar Documents

Publication Publication Date Title
US7024529B2 (en) Data back up method and its programs
JP5254611B2 (en) Metadata management for fixed content distributed data storage
US7406473B1 (en) Distributed file system using disk servers, lock servers and file servers
US8229897B2 (en) Restoring a file to its proper storage tier in an information lifecycle management environment
US9836244B2 (en) System and method for resource sharing across multi-cloud arrays
JP3130536B2 (en) Apparatus and method for transferring and storing data from multiple networked computer storage devices
JP4508554B2 (en) Method and apparatus for managing replicated volumes
US6950871B1 (en) Computer system having a storage area network and method of handling data in the computer system
US20040153481A1 (en) Method and system for effective utilization of data storage capacity
KR101434128B1 (en) Distributed replica storage system with web services interface
JP3864244B2 (en) System for transferring related data objects in a distributed data storage environment
US6880102B1 (en) Method and system for managing storage systems containing multiple data storage devices
US7574570B2 (en) Billing system for information dispersal system
US20050166011A1 (en) System for consolidating disk storage space of grid computers into a single virtual disk drive
US20030200275A1 (en) File transfer method and system
JP2002007304A (en) Computer system using storage area network and data handling method therefor
EP1462956A2 (en) Computer system for managing file management information
US8626722B2 (en) Consolidating session information for a cluster of sessions in a coupled session environment
US20080154988A1 (en) Hsm control program and method
EP1204028A1 (en) Computer file storage and recovery method
Cohen Database systems: Implementation of a distributed database management system to support logical subnetworks
Benjamin Improving information storage reliability using a data network
Ritchie RAID Unbound: Storage Fault Tolerance in a Distributed Environment
To et al. Oracle Database High Availability Best Practices 11g Release 2 (11.2) E10803-06
To et al. Oracle Database High Availability Best Practices 11g Release 2 (11.2) E10803-02

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION