US8464074B1 - Storage media encryption with write acceleration - Google Patents

Storage media encryption with write acceleration

Info

Publication number
US8464074B1
Authority
US
United States
Prior art keywords
data
status
write command
network device
processors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US12/434,477
Inventor
Anand Parthasarathy
Chandra Sekhar Kondamuri
Ramkumar Chinchani
Gian Carlo Boffa
Maurilio Cometto
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cisco Technology Inc
Original Assignee
Cisco Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cisco Technology Inc
Priority to US12/434,477
Assigned to CISCO TECHNOLOGY, INC. (assignment of assignors interest; see document for details). Assignors: BOFFA, GIAN CARLO; CHINCHANI, RAMKUMAR; COMETTO, MAURILIO; KONDAMURI, CHANDRA SEKHAR; PARTHASARATHY, ANAND
Application granted
Publication of US8464074B1
Legal status: Active

Classifications

    • G06F: Electric digital data processing
    • G06F11/3006: Monitoring arrangements specially adapted to the computing system or computing system component being monitored, where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G06F11/3055: Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available
    • G06F11/3082: Monitoring arrangements determined by the means or processing involved in reporting the monitored data, where the reporting involves data filtering achieved by aggregating or compressing the monitored data
    • G06F21/6209: Protecting access to data via a platform, e.g. using keys or access control rules, to a single file or object, e.g. in a secure envelope, encrypted and accessed using a key, or with access control rules appended to the object itself

Definitions

  • the present disclosure relates generally to methods and apparatus for performing compression and/or encryption of data that is being transmitted to storage media in association with a write command.
  • Storage Media Encryption (SME) generally refers to the encryption of data prior to the storage of the data to various storage media, which may be referred to as “targets.” SME may also involve compression of data prior to its encryption, as well as other processes.
  • An SME device in the network may perform SME in association with a write command received from a host. Unfortunately, substantial delays are introduced as a result of the SME process.
  • FIG. 1 is a transaction flow diagram illustrating a method of performing a SCSI write operation without performing compression or encryption.
  • FIG. 2 is a transaction flow diagram illustrating a prior art method of performing a SCSI write operation with compression and encryption.
  • FIG. 3 is a transaction flow diagram illustrating an example method of performing a SCSI write operation with compression and encryption with write acceleration in accordance with various embodiments.
  • FIG. 4 is a process flow diagram illustrating an example general method of performing storage media encryption and/or compression with write acceleration in accordance with various embodiments.
  • FIG. 5 is a process flow diagram illustrating an example method of performing storage media compression and encryption with write acceleration via an apparatus having a plurality of processors in accordance with various embodiments.
  • FIG. 6 is a block diagram illustrating an example data structure that may be used to track the status of tasks being processed by a plurality of processors in accordance with various embodiments.
  • FIG. 7 is a process flow diagram illustrating a method of transmitting compressed and/or encrypted data to a target as shown at 516 of FIG. 5 in accordance with various embodiments.
  • FIG. 8 is a diagrammatic representation of an example network device in which various embodiments may be implemented.
  • In one embodiment, an apparatus includes a memory and a plurality of processors.
  • the apparatus receives a write command from a network device.
  • the apparatus sends a transfer ready to the network device in response to the write command.
  • the apparatus composes a status and sends the status to the network device. The status is sent to the network device after the data has been received from the network device and prior to both compressing and encrypting the data.
  • the apparatus compresses the data to generate compressed data.
  • One of the plurality of processors encrypts the compressed data to generate modified data.
  • the apparatus then sends the modified data to a target indicated by the write command.
  • the disclosed embodiments relate to storage media, including but not limited to tape, as well as to compression and/or encryption of data that is transmitted to the storage media (i.e., targets). Encryption of data prior to being stored to storage media may be referred to as storage media encryption (“SME”). SME may also involve compression of data prior to its encryption, as well as other processes.
  • SME write input/output (“I/O”) performance for a single tape flow drops by approximately 50% when compared to the throughput of the host that is initiating the I/O (e.g., write commands).
  • Various noticeable latencies may be introduced by SME itself, as a result of processes such as the following: encryption and integrity digest; compression; buffering latency for the complete tape block; and cyclic redundancy check (“CRC”) verification/calculation. All of these latencies can contribute to significant delay and decrease the effective host throughput.
  • Tapes are sequential devices and typically support only one outstanding I/O command.
  • a new I/O may be originated by the host after the status for the previous I/O is received by the host.
  • each Read/Write I/O may represent a logical block in the tape medium.
  • SME processes may involve, for example, the Advanced Encryption Standard (“AES”), Galois/Counter Mode (“GCM”) or other encryption and/or authentication processes.
  • compression and encryption may be applied for the entire logical block, as compression is effective across a large-sized logical block and encryption can protect each distinct logical block (e.g., using a separate Initialization Vector (IV)).
  • an SME process may, for example, involve the following: buffering all the data frames corresponding to a single I/O; applying a compression transform followed by an encryption transform; and segmenting (and re-computing a cyclic redundancy check value (CRC) for) the resultant logical block into multiple data frames.
  • the term SME device may refer to a network device that implements SME.
  • the network device may be a router or switch.
  • the SME device may include one or more line cards, where at least one of the line cards implements SME.
  • the disclosed embodiments provide hardware (such as network devices and components of network devices) that may be configured to perform the methods of the invention, as well as software to control devices to perform these methods. For example, some processes may be performed by a logic system that includes one or more logic devices (such as processors, programmable logic devices, etc.), associated memory devices, etc.
  • An SME device for example, may be implemented by one or more such logic devices and associated memory.
  • FIG. 1 is a transaction flow diagram illustrating a prior art method of performing a Small Computer System Interface (SCSI) write operation without performing compression or encryption. Steps performed by a host, SME device, and target will be described with reference to vertical lines 102 , 104 , and 106 , respectively.
  • the host 102 sends a write command (WRITE_CMD) at 108 to the SME device 104 , which then sends the write command at 110 to the target 106 .
  • a write command typically indicates the amount of data to be written to a target.
  • the target 106 sends a transfer ready (XFR_RDY) at 112 to the SME device 104 indicating that it is ready to receive data.
  • the SME device 104 then sends a transfer ready at 114 to the host 102 .
  • the host sends the data at 116 , 118 , 120 to the SME device 104 , which then sends the data at 122 , 124 , 126 to the target 106 .
  • the target 106 then sends a status at 128 to the SME device 104 indicating whether the data was received successfully.
  • the SME device 104 then sends the status at 130 to the host 102 . Since the data is neither compressed nor encrypted in this example, the host 102 does not perceive a delay between the time it sends the data and the time it receives the status.
  • FIG. 2 is a transaction flow diagram illustrating a prior art method of performing a SCSI write operation with compression and encryption.
  • SME device 204 performs compression of the data and/or encryption of the data (e.g., compressed or uncompressed data).
  • the host 102 sends a write command (WRITE_CMD) at 208 to the SME device 204
  • the write command specifies the amount (i.e., size) of the data to be written to the target 106.
  • the SME device 204 composes and sends a transfer ready (XFR_RDY) at 210 to the host 102 to indicate that the SME device 204 is ready to receive a transfer of data.
  • the host 102 then sends the data to the SME device 204 at 212 , 214 , 216 .
  • the SME device 204 may then compress and encrypt the data at 218 . Compression and encryption can each be processing intensive tasks, and therefore can introduce a substantial delay.
  • the SME device 204 sends a write command that includes the compressed size at 220 to the target 106 .
  • the target 106 then sends a transfer ready (XFR_RDY) at 222 to the SME device 204 .
  • the SME device 204 sends the compressed data at 224 , 226 , 228 to the target 106 .
  • the target 106 sends a status at 230 to the SME device 204 , where the status indicates whether the data has been successfully written to the target 106 .
  • the SME device 204 then sends the status at 232 to the host 102 .
  • the host 102 may send additional write commands. Unfortunately, the host 102 in this instance will perceive a significant delay between the time that the host 102 sends the data and the time that the host 102 receives the status.
  • FIG. 3 is a transaction flow diagram illustrating an example method of performing a SCSI “accelerated write” operation with compression and encryption in accordance with various embodiments.
  • SME device 304 performs compression of the data and/or encryption of the data (e.g., compressed or uncompressed data).
  • the host 102 in this example sends multiple, consecutive write commands.
  • the write command may specify the amount (i.e., size) of the data to be written to the target 106.
  • the SME device 304 may compose and send a transfer ready (XFR_RDY) at 310 to the host 102 to indicate that the SME device 304 is ready to receive a transfer of data.
  • the host 102 may then send the data to the SME device 304 at 312 , 314 , 316 .
  • the SME device 304 may compose and send a status to the host 102 as shown at 318 after the data has been received.
  • the status sent to the host 102 may indicate that the write has been successful. Specifically, the status may be sent to the host 102 prior to completion of the compression and/or encryption of the data (Data 1 ) at 320 . More particularly, the status may be sent to the host 102 prior to initiating compression and/or encryption of the data.
  • the data compression and/or encryption processing may be distributed to different logic devices, e.g., to processors or co-processors (such as Octeon™ co-processors).
  • since the host has received the status message (or the like), the host may send subsequent write commands while the compression, encryption and/or target side processing occur in parallel.
  • the host 102 may send another write command, WRITE_CMD 2 , as shown at 322 .
  • the host does not experience any delay as a result of the compression and/or encryption of the data transmitted in association with the first write command.
  • After the SME device 304 has compressed and/or encrypted the data, Data1, transmitted in association with the first write command at 320, the SME device 304 then sends a write command, WRITE_CMD1, to the target 106 as shown at 324. Since the data has been compressed, the size of the data transmitted (specified as a parameter to the write command) will be smaller than that specified by the host in the write command sent at 308.
  • the SME device 304 may send a transfer ready, XFR_RDY, to the host 102 at 326 in response to the second write command.
  • the host 102 may then send the data, Data2, to the SME device 304 at 332, 334, 338 in association with the second write command.
  • the SME device 304 may then send a status at 342 to the host 102 in association with the second write command.
  • the host 102 may continue to send write commands to the SME device 304 .
  • the SME device 304 may receive a transfer ready, XFR_RDY, from the target 106 in association with the first write command, as shown at 328 .
  • the SME device 304 may send the compressed data, Compressed Data 1 , to the target 106 in association with the first write command at 332 , 336 , 340 .
  • the SME device 304 may determine from the status whether the write to the target 106 was successful. If the write was successful, the SME device 304 may merely drop the status.
  • if the write was unsuccessful, the SME device 304 may drop the status and re-send the data to the target 106.
  • the SME device 304 may perform processing in association with the second write command (e.g., via another one of the plurality of processors).
  • the SME device 304 may compress and/or encrypt the data, Data 2 , sent in association with the second write command as shown at 344 .
  • the SME device 304 may send a second write command, WRITE_CMD 2 , to the target 106 at 348 . Since the data, Data 2 , has been compressed, the size specified as a parameter to the write command will be less than the size specified by the host at 322 .
  • the SME device 304 may receive a transfer ready, XFR_RDY, from the target 106 as shown at 350 .
  • the SME device 304 may thereafter send the compressed data, Compressed Data 2 , as shown at 352 , 354 , 356 to the target 106 .
  • the target 106 may then send a status at 358 to the SME device 304 .
  • the disclosed embodiments enable a host to send multiple, sequential write commands without perceiving a substantial delay typically introduced in systems implementing compression and/or encryption.
  • this is accomplished through an SME device that is capable of simultaneously performing processing for two or more write commands, which may be sent by the same host. This may be accomplished through the use of multiple processors, as will be described in further detail below.
  • FIG. 4 is a process flow diagram illustrating an example general method of performing storage media encryption and/or compression with write acceleration in accordance with various embodiments.
  • the SME device may receive a write command from a host at 402 .
  • the SME device may then compose and send a response such as a transfer ready to the network device in response to the write command at 404 .
  • the SME device may receive data from the host at 406 in the form of one or more data frames.
  • the SME device may buffer the data until all data transmitted in association with the write command has been received.
  • the SME device may then compose a status and send the status to the host at 408 after the data has been received from the host.
  • the status may be sent to the host prior to performing compression and/or encryption of the data (e.g., before initiation and/or completion of the compression and/or encryption).
  • although the status sent to the network device indicates that the write command has been successfully completed (e.g., the target has successfully received the data), the data has not yet been processed or transmitted to the target.
  • the SME device may perform encryption and/or compression of the data to generate modified data at 410 .
  • Compression of the data may be performed via an Application Specific Integrated Circuit configured for performing compression of data.
  • Encryption of compressed or uncompressed data may be performed by a processor (e.g., via processing software instructions).
  • the compressed (or uncompressed) data may be encrypted by one of the plurality of processors.
  • compression of data may be performed via the same processor that performs the encryption.
  • the SME device may then send the modified data to a target at 412 indicated by the write command.
  • the target may be identified directly or indirectly in the write command received from the host.
  • the write command may specifically identify the target to which the data is being written.
  • the SME device may perform a virtual-physical mapping in order to identify the target(s) based upon a virtual storage identifier provided in the write command.
  • the SME device may drop the status. Where the status indicates that an error has occurred, the SME device may re-send the data to the target without notifying the host of the error, where possible. Alternatively, the SME device may send the status to the host so that the host may re-send the data to the SME device.
  • the SME device may send a Deferred Check Condition indication to the host on future I/O.
  • Hosts and backup applications may, for example, perform a large number of writes followed by Write File Marks.
  • the Deferred Check Condition may be sent to the host in response to either a Write or a Write File Mark, depending on the positioning of the error for an accelerated write.
  • the SME write acceleration service may be stopped after a certain number of outstanding writes.
  • the limit may be tunable according to, e.g., an optimum throughput of the logic device(s) performing the data compression and/or encryption processing with regard to the processing latencies and maximum host ingress rate.
  • SCSI position modification commands may be pended until the SME write accelerated commands are completed. If so, this procedure may ensure that a consistent image is provided to the host.
  • since the SME device composed and sent a status to the host prior to receiving a status from the target, the host perceives there to be no delay introduced by the SME device. Thus, after the SME device sends the status to the host, the host may send an additional write command. In other words, this additional write command may be received prior to completion of processing (e.g., compression and/or encryption) with respect to the prior write command (and therefore prior to receiving a status from the target with respect to the prior write command). Processing associated with this additional write command may then proceed at 402-412, as described above. It is important to note that steps 410 and 412 may be performed with respect to a prior write command simultaneously with the processing of the additional, subsequent write command. As a result, the host will not perceive a delay resulting from the compression or encryption performed by the SME device at 410.
  • FIG. 5 is a process flow diagram illustrating an example method of performing storage media compression and encryption with write acceleration via an SME device having a plurality of processors in accordance with various embodiments.
  • the SME device may receive a write command from a host at 502 .
  • the SME device may then compose and send a transfer ready to the host at 504 in response to the write command.
  • the SME device may receive data (e.g., data frames) from the host at 506 , where the data corresponds to the write command.
  • the SME device may buffer the data until all data transmitted in association with the write command has been received.
  • the SME device may compose a status and send the status to the host at 508 after the data has been received.
  • the SME device may send the status to the host after the data has been received from the host, but prior to compression and/or encryption of the data. More specifically, the SME device may send the status to the host prior to initiating compression of the data and/or prior to initiating encryption of the data. Alternatively, the SME device may simply send the status prior to completion of the compression and/or prior to completion of the encryption of the data.
  • the SME device may compress the data to generate compressed data. Specifically, the SME device may provide the data to a compression Application Specific Integrated Circuit (ASIC) that is configured to perform data compression at 510 such that compressed data is generated. The SME device may then provide the compressed data to one of the plurality of processors (e.g., a processor that is not currently in use) at 512 , enabling the one of the plurality of processors to encrypt the compressed data at 514 .
  • the data may be encrypted without being compressed.
  • the SME device may provide the data to one of the plurality of processors (e.g., a processor that is not currently encrypting data).
  • the processor may then encrypt the data.
  • the result of the encryption of the compressed (or uncompressed) data by one of the plurality of processors may be referred to as modified data.
  • the data frames that are received are buffered until all of the data frames associated with a logical data block are received.
  • the buffering of data frames enables a large amount of data to be compressed. Since there may be redundancy in the data frames, the compression ratio may be improved as the amount of data in the logical data block being compressed is increased. Compression may be performed on the logical data block to generate compressed data. Encryption may then be performed on the compressed data associated with the logical data block. After encryption has been performed on the compressed data, the result for the logical data block may be segmented into multiple data frames such that modified data is generated. A cyclic redundancy check (CRC) value may also be re-computed for the modified data and appended to one of the data frames.
  • all data frames associated with a logical data block need not be received and buffered in order to initiate compression and/or encryption of data in the logical data block.
  • the compression and/or encryption process may be initiated well before all of the data associated with a logical data block or write command has been received from the host. Accordingly, compression and/or encryption may be initiated and performed while data frames are still being received in association with a particular logical block or write command.
  • the SME device may then send the modified data (e.g., in the form of one or more data frames) to one or more targets at 516 .
  • One method of sending the modified data to a target will be described in further detail below with reference to FIG. 7 .
  • When the target has received the data, the target may send a status to the SME device. However, the SME device has already sent a status to the host before actually receiving the status from the target. Since the SME device has already sent a status to the host, the SME device may drop the status that it has received from the target.
  • the host may continue to send additional write commands, which may be processed at 502 .
  • While data associated with a write command is being compressed/encrypted, data associated with a subsequent write command may be transmitted by the host. Since the data associated with multiple write commands may be simultaneously processed (e.g., encrypted) by different processors, the disclosed embodiments eliminate the delay that typically results from encryption of the data.
  • FIG. 6 is a block diagram illustrating an example data structure that may be used to track the status of tasks being processed by a plurality of processors in accordance with various embodiments.
  • the data structure is a queue that indicates the status of various host tasks being processed by the SME device.
  • the SME device may maintain a queue of host tasks such as write commands being processed by the plurality of processors.
  • the queue may indicate a status 602 of each of the write commands (e.g., tasks), as shown.
  • the queue may indicate which one of the plurality of processors is processing the corresponding write command.
  • the queue may be implemented via a variety of data structures, such as a linked list or array.
  • the SME device may set a corresponding indicator in the queue to indicate that processing of a write command is completed.
  • the SME device may consider the processing of the write command to be completed if the modified data has been generated. In some embodiments, the SME device may not consider the processing of the write command to be completed until a transfer ready command has been received from the target (e.g., for that particular write command).
  • the SME device may indicate that the processing of the write command is completed by setting the status 602 of the corresponding write command to indicate that the task is “Done.” As shown in this example, the status of each write command that is being processed may be initialized to “Pending.” Although different I/Os may be completed at different times, the SME device may send data to the target in the same order in which it received the data from the initiator(s). The order in which the SME device sends data to the target may vary with the type of I/O, as well as the type of the target.
  • FIG. 7 is a process flow diagram illustrating a method of transmitting compressed and/or encrypted data to a target as shown at 516 of FIG. 5 in accordance with various embodiments.
  • the SME device may send the modified data to a target indicated by the write command if a queue such as that shown in FIG. 6 indicates that processing of the write command is completed and the write command is the next write command in the queue (a minimal sketch of this in-order dispatch appears at the end of this section).
  • the SME device obtains a status of the first task (e.g., write command) represented in the queue at 702.
  • the SME device may send the modified data (compressed and/or encrypted data) generated in association with the write command to the target at 704 if the status indicates that processing in association with the write command is completed.
  • the SME device may obtain the status of the next task in the queue at 708 .
  • the process may continue at 704 to send modified data that has been generated in association with write commands to the target, where the modified data is sent in the order that the write commands have been received by the SME device.
  • the process may end at 710 .
  • the embodiment described with reference to FIG. 7 implements polling to ascertain the status of tasks represented in the queue.
  • the SME device may poll the status of the next task in the queue until the status indicates that processing is completed.
  • a signal may be generated upon completion of a task in the queue so that polling of the queue need not be performed.
  • the disclosed embodiments may substantially reduce the latency due to encryption and compression. Hosts may be able to store data to storage media in an efficient and secure manner with little or no performance cost. Moreover, where the SME process compresses the data prior to its encryption, the target I/O latencies may be minimized. In this manner, hosts may obtain more bandwidth.
  • the techniques for performing the disclosed embodiments may be implemented on software and/or hardware.
  • they can be implemented in an operating system kernel, in a separate user process, in a library package bound into network applications, on a specially constructed machine, or on a network interface card.
  • the techniques of the present invention are implemented in software such as an operating system or in an application running on an operating system.
  • a software or software/hardware hybrid packet processing system of this invention may be implemented on a general-purpose programmable machine selectively activated or reconfigured by a computer program stored in memory.
  • Such a programmable machine may be a network device designed to handle network traffic.
  • Such network devices typically have multiple network interfaces including Fibre Channel interfaces, for example. Specific examples of such network devices include routers and switches.
  • a general architecture for some of these machines will appear from the description given below.
  • various embodiments may be at least partially implemented on a card (e.g., an interface card) for a network device or a general-purpose computing device.
  • A router or switch 810 suitable for implementing embodiments of the invention includes a master central processing unit (CPU) 862, interfaces 868, and a bus 815 (e.g., a PCI bus).
  • When acting under the control of appropriate software or firmware, the CPU 862 is responsible for such router tasks as routing table computations and network management. It may also be responsible for implementing the disclosed embodiments, in whole or in part.
  • the router may accomplish these functions under the control of software including an operating system (e.g., the Nexus Operating System (NX-OS®) of Cisco Systems, Inc.) and any appropriate applications software.
  • CPU 862 may include one or more processors 863 such as a processor from the Motorola family of microprocessors or the MIPS family of microprocessors. In an alternative embodiment, processor 863 is specially designed hardware for controlling the operations of router 810 . In a specific embodiment, a memory 861 (such as non-volatile RAM and/or ROM) also forms part of CPU 862 . However, there are many different ways in which memory could be coupled to the system. Memory block 861 may be used for a variety of purposes such as, for example, caching and/or storing data, programming instructions, etc.
  • the interfaces 868 are typically provided as interface cards (sometimes referred to as “line cards”). Generally, they control the sending and receiving of data packets or data segments over the network and sometimes support other peripherals used with the router 810 .
  • interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like.
  • various very high-speed interfaces may be provided such as Fibre Channel interfaces, fast Ethernet interfaces, Gigabit Ethernet interfaces, POS interfaces, LAN interfaces, WAN interfaces, metropolitan area network (MAN) interfaces and the like.
  • these interfaces may include ports appropriate for communication with the appropriate media.
  • the independent processors may control such communications intensive tasks as packet switching, media control and management, as well as compression and/or encryption, as described herein.
  • these interfaces allow the master microprocessor 862 to efficiently perform routing computations, network diagnostics, security functions, etc.
  • Although FIG. 8 shows one specific router of the present invention, it is by no means the only router architecture on which the disclosed embodiments can be implemented. For example, an architecture having a single processor that handles communications as well as routing computations, etc. is often used. Further, other types of interfaces and media could also be used with the router.
  • the network device may employ one or more memories or memory modules (such as, for example, memory block 861) configured to store data and program instructions for the general-purpose network operations and/or the inventive techniques described herein.
  • the program instructions may control the operation of an operating system and/or one or more applications, for example.
  • the disclosed embodiments further relate to machine-readable media that include program instructions, state information, etc. for performing various operations described herein.
  • examples of machine-readable media include, but are not limited to, magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks and DVDs; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory devices (ROM) and random access memory (RAM).
  • examples of program instructions include both machine code, such as that produced by a compiler, and files containing higher-level code that may be executed by the computer using an interpreter.
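
To make the queue behavior of FIGS. 6 and 7 concrete, the following sketch models a task queue in which each write command starts as "Pending," is marked "Done" once its modified data has been generated, and modified data is sent to the target only in the order in which the write commands were received. The names (WriteTask, TaskQueue, dispatch_in_order) and the Python representation are illustrative assumptions, not taken from the patent.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class WriteTask:
    """One host write command tracked by the SME device (cf. FIG. 6)."""
    command_id: int
    status: str = "Pending"        # initialized to Pending, set to Done when processed
    modified_data: bytes = b""     # compressed and/or encrypted result

class TaskQueue:
    """Tracks host tasks in arrival order, as in the queue of FIG. 6."""
    def __init__(self):
        self._queue = deque()

    def add(self, task: WriteTask):
        self._queue.append(task)

    def mark_done(self, command_id: int, modified_data: bytes):
        """Record that one of the processors has finished a write command."""
        for task in self._queue:
            if task.command_id == command_id:
                task.status = "Done"
                task.modified_data = modified_data
                return

    def dispatch_in_order(self, send_to_target):
        """Send modified data to the target in command order (cf. FIG. 7).

        Only the head of the queue is eligible; if its processing is still
        Pending, nothing is sent yet, preserving the original write order.
        """
        while self._queue and self._queue[0].status == "Done":
            task = self._queue.popleft()
            send_to_target(task.command_id, task.modified_data)
```

In this sketch, dispatch_in_order would be invoked either from a polling loop or upon a completion signal, matching the two alternatives described above for FIG. 7.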

Abstract

Methods and apparatus for performing Storage Media Encryption (SME) are disclosed. In one embodiment, an apparatus includes a memory and a plurality of processors. The apparatus receives a write command from a network device. The apparatus sends a transfer ready to the network device in response to the write command. The apparatus receives data from the network device. The apparatus composes a status and sends the status to the network device. The status is sent to the network device after the data has been received from the network device and prior to both compressing and encrypting the data. The apparatus compresses the data to generate compressed data. One of the plurality of processors encrypts the compressed data to generate modified data. The apparatus then sends the modified data to a target indicated by the write command.

Description

RELATED APPLICATIONS
This application claims priority from Provisional Application No. 61/057,712, entitled “Storage Media Encryption with Write Acceleration,” filed on May 30, 2008, by Parthasarathy et al., which is incorporated herein by reference for all purposes.
BACKGROUND
1. Technical Field
The present disclosure relates generally to methods and apparatus for performing compression and/or encryption of data that is being transmitted to storage media in association with a write command.
2. Description of the Related Art
Storage Media Encryption (SME) generally refers to the encryption of data prior to the storage of the data to various storage media, which may be referred to as “targets.” SME may also involve compression of data prior to its encryption, as well as other processes. An SME device in the network may perform SME in association with a write command received from a host. Unfortunately, substantial delays are introduced as a result of the SME process.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a transaction flow diagram illustrating a method of performing a SCSI write operation without performing compression or encryption.
FIG. 2 is a transaction flow diagram illustrating a prior art method of performing a SCSI write operation with compression and encryption.
FIG. 3 is a transaction flow diagram illustrating an example method of performing a SCSI write operation with compression and encryption with write acceleration in accordance with various embodiments.
FIG. 4 is a process flow diagram illustrating an example general method of performing storage media encryption and/or compression with write acceleration in accordance with various embodiments.
FIG. 5 is a process flow diagram illustrating an example method of performing storage media compression and encryption with write acceleration via an apparatus having a plurality of processors in accordance with various embodiments.
FIG. 6 is a block diagram illustrating an example data structure that may be used to track the status of tasks being processed by a plurality of processors in accordance with various embodiments.
FIG. 7 is a process flow diagram illustrating a method of transmitting compressed and/or encrypted data to a target as shown at 516 of FIG. 5 in accordance with various embodiments.
FIG. 8 is a diagrammatic representation of an example network device in which various embodiments may be implemented.
DESCRIPTION OF EXAMPLE EMBODIMENTS
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be obvious, however, to one skilled in the art, that the disclosed embodiments may be practiced without some or all of these specific details. In other instances, well-known process steps have not been described in detail in order not to unnecessarily obscure the disclosed embodiments.
Overview
In one embodiment, an apparatus includes a memory and a plurality of processors. The apparatus receives a write command from a network device. The apparatus sends a transfer ready to the network device in response to the write command. The apparatus receives data from the network device. The apparatus composes a status and sends the status to the network device. The status is sent to the network device after the data has been received from the network device and prior to both compressing and encrypting the data. The apparatus compresses the data to generate compressed data. One of the plurality of processors encrypts the compressed data to generate modified data. The apparatus then sends the modified data to a target indicated by the write command.
Specific Example Embodiments
The disclosed embodiments relate to storage media, including but not limited to tape, as well as to compression and/or encryption of data that is transmitted to the storage media (i.e., targets). Encryption of data prior to being stored to storage media may be referred to as storage media encryption (“SME”). SME may also involve compression of data prior to its encryption, as well as other processes.
It has been observed that SME write input/output (“I/O”) performance for a single tape flow drops by approximately 50% when compared to the throughput of the host that is initiating the I/O (e.g., write commands). Various noticeable latencies may be introduced by SME itself, as a result of processes such as the following: encryption and integrity digest; compression; buffering latency for the complete tape block; and cyclic redundancy check (“CRC”) verification/calculation. All of these latencies can contribute to significant delay and decrease the effective host throughput.
Tapes are sequential devices and typically support only one outstanding I/O command. A new I/O may be originated by the host after the status for the previous I/O is received by the host. In the case of tapes, each Read/Write I/O may represent a logical block in the tape medium.
Some SME processes may involve, for example, the Advanced Encryption Standard (“AES”), Galois/Counter Mode (“GCM”) or other encryption and/or authentication processes. With SME, compression and encryption may be applied for the entire logical block, as compression is effective across a large-sized logical block and encryption can protect each distinct logical block (e.g., using a separate Initialization Vector (IV)).
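
As a concrete illustration of per-block AES-GCM encryption with a separate IV, the following sketch encrypts one logical block and also yields GCM's authentication tag (corresponding to the integrity digest mentioned above). It uses the third-party Python cryptography package; the function name and the use of the block number as associated data are illustrative assumptions, not the patent's implementation.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_logical_block(key: bytes, block: bytes, block_number: int) -> tuple[bytes, bytes]:
    """Encrypt one logical block with AES-GCM, using a separate IV per block.

    GCM also produces an authentication tag (the integrity digest), which
    AESGCM.encrypt() appends to the returned ciphertext.
    """
    iv = os.urandom(12)                       # fresh 96-bit IV for this block
    aad = block_number.to_bytes(8, "big")     # bind the ciphertext to its block position
    ciphertext = AESGCM(key).encrypt(iv, block, aad)
    return iv, ciphertext

# Hypothetical usage:
key = AESGCM.generate_key(bit_length=256)
iv, ct = encrypt_logical_block(key, b"tape logical block payload", block_number=0)
```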
If compression and encryption are applied for the entire logical block, an SME process may, for example, involve the following: buffering all the data frames corresponding to a single I/O; applying a compression transform followed by an encryption transform; and segmenting (and re-computing a cyclic redundancy check value (CRC) for) the resultant logical block into multiple data frames. Unfortunately, the target does not send a status until it receives all of the data frames. As a result, the host that has sent a write command will perceive a substantial delay between the time it sends the data and the time it receives a status.
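
A minimal sketch of that transform order, assuming zlib as a stand-in for the compression engine and leaving encryption and segmentation as pluggable callables (all names here are illustrative):

```python
import zlib
from typing import Callable, Iterable

def sme_transform(frames: Iterable[bytes],
                  encrypt: Callable[[bytes], bytes],
                  segment: Callable[[bytes], list[bytes]]) -> list[bytes]:
    """Apply the SME write transform to one I/O worth of data frames.

    1. Buffer every data frame belonging to the I/O into one logical block.
    2. Compress the whole block (compression works best over the full block).
    3. Encrypt the compressed block.
    4. Re-segment the result into data frames (CRC handling is left to `segment`).
    """
    logical_block = b"".join(frames)              # 1. buffering latency happens here
    compressed = zlib.compress(logical_block)     # 2. compression transform
    encrypted = encrypt(compressed)               # 3. encryption transform
    return segment(encrypted)                     # 4. segmentation + CRC re-computation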
Various embodiments for performing an “accelerated write” to reduce this delay and increase the efficiency with which data may be written to storage media are disclosed. In order to illustrate the advantages of the disclosed embodiments, prior art methods for performing SME are described with reference to FIGS. 1 and 2. Embodiments for performing SME will be described in further detail below with reference to FIGS. 3-8.
In the following description, the term SME device may refer to a network device that implements SME. For example, the network device may be a router or switch. More specifically, the SME device may include one or more line cards, where at least one of the line cards implements SME.
The disclosed embodiments provide hardware (such as network devices and components of network devices) that may be configured to perform the methods of the invention, as well as software to control devices to perform these methods. For example, some processes may be performed by a logic system that includes one or more logic devices (such as processors, programmable logic devices, etc.), associated memory devices, etc. An SME device, for example, may be implemented by one or more such logic devices and associated memory.
FIG. 1 is a transaction flow diagram illustrating a prior art method of performing a Small Computer System Interface (SCSI) write operation without performing compression or encryption. Steps performed by a host, SME device, and target will be described with reference to vertical lines 102, 104, and 106, respectively. The host 102 sends a write command (WRITE_CMD) at 108 to the SME device 104, which then sends the write command at 110 to the target 106. A write command typically indicates the amount of data to be written to a target. The target 106 sends a transfer ready (XFR_RDY) at 112 to the SME device 104 indicating that it is ready to receive data. The SME device 104 then sends a transfer ready at 114 to the host 102. The host sends the data at 116, 118, 120 to the SME device 104, which then sends the data at 122, 124, 126 to the target 106. The target 106 then sends a status at 128 to the SME device 104 indicating whether the data was received successfully. The SME device 104 then sends the status at 130 to the host 102. Since the data is neither compressed nor encrypted in this example, the host 102 does not perceive a delay between the time it sends the data and the time it receives the status.
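
Purely to fix vocabulary for the sketches that follow, the message types exchanged in FIGS. 1-3 and the FIG. 1 pass-through relay can be modeled roughly as below; the Msg enum and the callable-based send functions are illustrative assumptions.

```python
from enum import Enum, auto

class Msg(Enum):
    WRITE_CMD = auto()   # announces a write and the amount of data to be written
    XFR_RDY = auto()     # "transfer ready": the receiver is ready to accept data
    DATA = auto()        # a data frame
    STATUS = auto()      # reports whether the write completed successfully

def passthrough_write(send_to_host, send_to_target, data_frames):
    """FIG. 1 style relay: no compression or encryption, no added delay.

    The SME device forwards the write command, relays the target's transfer
    ready to the host, relays the host's data frames to the target, and
    finally relays the target's status back to the host.
    """
    send_to_target(Msg.WRITE_CMD)
    send_to_host(Msg.XFR_RDY)              # after the target has sent its XFR_RDY
    for frame in data_frames:
        send_to_target((Msg.DATA, frame))
    send_to_host(Msg.STATUS)               # after the target has reported its status
```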
FIG. 2 is a transaction flow diagram illustrating a prior art method of performing a SCSI write operation with compression and encryption. In this example, SME device 204 performs compression of the data and/or encryption of the data (e.g., compressed or uncompressed data). When the host 102 sends a write command (WRITE_CMD) at 208 to the SME device 204, the write command specifies the amount (i.e., size) of the data to be written to the target 106. The SME device 204 composes and sends a transfer ready (XFR_RDY) at 210 to the host 102 to indicate that the SME device 204 is ready to receive a transfer of data. The host 102 then sends the data to the SME device 204 at 212, 214, 216. The SME device 204 may then compress and encrypt the data at 218. Compression and encryption can each be processing intensive tasks, and therefore can introduce a substantial delay. Upon compression of the data, the size of the data to be written to the target will be smaller than the size of the data prior to compression. Thus, the SME device 204 sends a write command that includes the compressed size at 220 to the target 106. The target 106 then sends a transfer ready (XFR_RDY) at 222 to the SME device 204. The SME device 204 sends the compressed data at 224, 226, 228 to the target 106. The target 106 sends a status at 230 to the SME device 204, where the status indicates whether the data has been successfully written to the target 106. The SME device 204 then sends the status at 232 to the host 102. After receiving the status indicating that the write was successful, the host 102 may send additional write commands. Unfortunately, the host 102 in this instance will perceive a significant delay between the time that the host 102 sends the data and the time that the host 102 receives the status.
As described with reference to FIG. 2, the host 102 will typically not proceed with sending additional write commands until it has received a status indicating that a previous write was successful. Moreover, a substantial delay is introduced when the data being written is compressed and/or encrypted. The disclosed embodiments enable data to be written in a more efficient manner by eliminating the delays perceived by a host sending write commands, as will be described in further detail below with reference to FIGS. 3-7. FIG. 3 is a transaction flow diagram illustrating an example method of performing a SCSI “accelerated write” operation with compression and encryption in accordance with various embodiments. In this example, SME device 304 performs compression of the data and/or encryption of the data (e.g., compressed or uncompressed data). In order to illustrate the advantages that may be achieved via the disclosed embodiments, the host 102 in this example sends multiple, consecutive write commands.
When the host 102 sends a first write command (WRITE_CMD1) at 308 to the SME device 304, the write command may specify the amount (i.e., size) of the data to be written to the target 106. The SME device 304 may compose and send a transfer ready (XFR_RDY) at 310 to the host 102 to indicate that the SME device 304 is ready to receive a transfer of data. The host 102 may then send the data to the SME device 304 at 312, 314, 316. Rather than waiting to receive a status from the target 106, the SME device 304 may compose and send a status to the host 102 as shown at 318 after the data has been received. Although the data (or compressed data) has not been written to the target 106 and therefore a status has not yet been received from the target 106, the status sent to the host 102 may indicate that the write has been successful. Specifically, the status may be sent to the host 102 prior to completion of the compression and/or encryption of the data (Data1) at 320. More particularly, the status may be sent to the host 102 prior to initiating compression and/or encryption of the data.
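
A minimal sketch of this accelerated-write handling, assuming a simple threading model and hypothetical helper callables (receive_all_frames, process_and_forward); a real SME device would use its own line-card logic rather than Python threads:

```python
import threading

def handle_host_write(send_to_host, receive_all_frames, process_and_forward, write_cmd):
    """Accelerated write (FIG. 3): acknowledge the host before SME processing.

    `receive_all_frames` buffers every data frame for this write command;
    `process_and_forward` compresses and/or encrypts the buffered data and
    sends the result to the target, and is run in the background so the host
    can issue its next write command immediately.
    """
    send_to_host("XFR_RDY")                   # 310: ready to receive data
    frames = receive_all_frames(write_cmd)    # 312-316: buffer the host's data
    send_to_host("STATUS: GOOD")              # 318: status sent before compression/encryption
    worker = threading.Thread(target=process_and_forward, args=(write_cmd, frames))
    worker.start()                            # 320 onward proceeds in parallel
    return worker
```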
As will be described in further detail below, the data compression and/or encryption processing may be distributed to different logic devices, e.g., to processors or co-processors (such as Octeon™ co-processors). In some such examples, since the host has received the status message (or the like), the host may send subsequent write commands while the compression, encryption and/or target side processing occur in parallel.
For example, once the host 102 receives the status, the host 102 may send another write command, WRITE_CMD2, as shown at 322. Thus, the host does not experience any delay as a result of the compression and/or encryption of the data transmitted in association with the first write command.
After the SME device 304 has compressed and/or encrypted the data, Data1, transmitted in association with the first write command at 320, the SME device 304 then sends a write command, WRITE_CMD1, to the target 106 as shown at 324. Since the data has been compressed, the size of the data transmitted (specified as a parameter to the write command) will be smaller than that specified by the host in the write command sent at 308.
The SME device 304 may send a transfer ready, XFR_RDY, to the host 102 at 326 in response to the second write command. The host 102 may then send the data, Data2, to the SME device 304 at 332, 334, 338 in association with the second write command. The SME device 304 may then send a status at 342 to the host 102 in association with the second write command. Although not shown in this example, the host 102 may continue to send write commands to the SME device 304.
While the host 102 proceeds to send the data, Data2, at 332, 334, 338 in association with the second write command, the SME device 304 may receive a transfer ready, XFR_RDY, from the target 106 in association with the first write command, as shown at 328. Thus, the SME device 304 may send the compressed data, Compressed Data1, to the target 106 in association with the first write command at 332, 336, 340. Upon receiving a status from the target 106 in association with the first write command as shown at 346, the SME device 304 may determine from the status whether the write to the target 106 was successful. If the write was successful, the SME device 304 may merely drop the status. However, if the write was unsuccessful, the SME device 304 may drop the status and re-send the data to the target 106. Alternatively, it may be desirable to send the status to the host 102 so that the host 102 will re-send the data to the SME device 304.
While the SME device 304 completes processing in association with the first write command (e.g., via one of a plurality of processors), the SME device 304 may perform processing in association with the second write command (e.g., via another one of the plurality of processors). As shown in this example, the SME device 304 may compress and/or encrypt the data, Data2, sent in association with the second write command as shown at 344. The SME device 304 may send a second write command, WRITE_CMD2, to the target 106 at 348. Since the data, Data2, has been compressed, the size specified as a parameter to the write command will be less than the size specified by the host at 322. The SME device 304 may receive a transfer ready, XFR_RDY, from the target 106 as shown at 350. The SME device 304 may thereafter send the compressed data, Compressed Data2, as shown at 352, 354, 356 to the target 106. The target 106 may then send a status at 358 to the SME device 304.
As described above with reference to FIG. 3, the disclosed embodiments enable a host to send multiple, sequential write commands without perceiving a substantial delay typically introduced in systems implementing compression and/or encryption. In accordance with various embodiments, this is accomplished through an SME device that is capable of simultaneously performing processing for two or more write commands, which may be sent by the same host. This may be accomplished through the use of multiple processors, as will be described in further detail below.
FIG. 4 is a process flow diagram illustrating an example general method of performing storage media encryption and/or compression with write acceleration in accordance with various embodiments. The SME device may receive a write command from a host at 402. The SME device may then compose and send a response such as a transfer ready to the network device in response to the write command at 404. The SME device may receive data from the host at 406 in the form of one or more data frames. The SME device may buffer the data until all data transmitted in association with the write command has been received. The SME device may then compose a status and send the status to the host at 408 after the data has been received from the host. The status may be sent to the host prior to performing compression and/or encryption of the data (e.g., before initiation and/or completion of the compression and/or encryption). Thus, although the status sent to the network device indicates that the write command has been successfully completed (e.g., the target has successfully received the data), the data has not yet been processed or transmitted to the target.
The SME device may perform encryption and/or compression of the data to generate modified data at 410. Compression of the data may be performed via an Application Specific Integrated Circuit configured for performing compression of data. Encryption of compressed or uncompressed data may be performed by a processor (e.g., via processing software instructions). Specifically, where the SME device includes a plurality of processors, the compressed (or uncompressed) data may be encrypted by one of the plurality of processors. In other embodiments, compression of data may be performed via the same processor that performs the encryption. The SME device may then send the modified data to a target at 412 indicated by the write command. The target may be identified directly or indirectly in the write command received from the host. Specifically, the write command may specifically identify the target to which the data is being written. Alternatively, where the SME device implements virtualization of storage, the SME device may perform a virtual-physical mapping in order to identify the target(s) based upon a virtual storage identifier provided in the write command.
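
Where storage is virtualized, the target lookup at 412 might resemble the following sketch; the mapping table, identifiers, and function name are hypothetical.

```python
# Hypothetical virtual-to-physical mapping used when the write command carries a
# virtual storage identifier rather than a physical target address.
VIRTUAL_TO_PHYSICAL = {
    "vtape-01": ["target-fc-0a"],                  # one virtual device, one physical target
    "vtape-02": ["target-fc-0b", "target-fc-0c"],  # or several physical targets
}

def resolve_targets(write_cmd: dict) -> list[str]:
    """Return the physical target(s) for a write command (cf. 412 in FIG. 4)."""
    if "physical_target" in write_cmd:             # target identified directly
        return [write_cmd["physical_target"]]
    return VIRTUAL_TO_PHYSICAL[write_cmd["virtual_id"]]   # indirect, via the mapping
```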
Upon receiving a status from the target, the SME device may drop the status. Where the status indicates that an error has occurred, the SME device may re-send the data to the target without notifying the host of the error, where possible. Alternatively, the SME device may send the status to the host so that the host may re-send the data to the SME device.
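
A sketch of that decision, under the assumption of hypothetical helpers (resend_to_target returning whether the retry succeeded, notify_host surfacing the error to the host):

```python
def handle_target_status(write_succeeded: bool, resend_to_target, notify_host):
    """Handle the target's status for a write the host has already been told is GOOD.

    On success the status is simply dropped. On error the SME device may re-send
    the data to the target itself where possible; otherwise it falls back to
    informing the host so the host can re-send the data.
    """
    if write_succeeded:
        return                      # drop the status; nothing more to do
    if not resend_to_target():      # preferred: retry without involving the host
        notify_host()               # fall back: surface the error to the host
```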
Where an error occurs during an accelerated write, the SME device may send a Deferred Check Condition indication to the host on future I/O. Hosts and backup applications may, for example, perform a large number of writes followed by Write File Marks. Hence, the Deferred Check Condition may be sent to the host in response to either a Write or a Write File Mark, depending on the positioning of the error for an accelerated write.
There may be situations in which the logic device(s) performing the data compression and/or encryption processing cannot keep up with the host write commands. This may result in buffer underflow. To avoid such problems, in some implementations the SME write acceleration service may be stopped after a certain number of outstanding writes. For example, the limit may be tunable according to, e.g., an optimum throughput of the logic device(s) performing the data compression and/or encryption processing with regard to the processing latencies and maximum host ingress rate.
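
Such a tunable limit on outstanding accelerated writes could be enforced with a simple counter, as sketched below; the class name and the default limit are illustrative.

```python
class WriteAccelerationGate:
    """Stop accelerating once too many writes are still being processed."""
    def __init__(self, max_outstanding: int = 8):   # tunable limit (illustrative value)
        self.max_outstanding = max_outstanding
        self.outstanding = 0

    def may_accelerate(self) -> bool:
        """True if the next write may be acknowledged early (accelerated)."""
        return self.outstanding < self.max_outstanding

    def write_started(self):
        self.outstanding += 1

    def write_completed(self):
        self.outstanding -= 1
```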
In addition, SCSI position modification commands may be pended until the SME write accelerated commands are completed. Doing so helps ensure that a consistent image is presented to the host.
It is important to note that since the SME device composed and sent a status to the host prior to receiving a status from the target, the host perceives no delay introduced by the SME device. Thus, after the SME device sends the status to the host, the host may send an additional write command. In other words, this additional write command may be received prior to completion of processing (e.g., compression and/or encryption) of the prior write command (and therefore prior to receiving a status from the target with respect to the prior write command). Processing associated with this additional write command may then proceed at 402-412, as described above. It is important to note that steps 410 and 412 may be performed with respect to a prior write command simultaneously with the processing of the additional, subsequent write command. As a result, the host will not perceive a delay resulting from the compression or encryption performed by the SME device at 410.
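This overlap can be illustrated with a small sketch in which the early status is returned synchronously while compression/encryption of the prior command's data runs on a worker pool. The executor, the zlib stand-in for steps 410, and the callback names are assumptions made only for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

import zlib

# A worker pool stands in for the device's plurality of processors.
executor = ThreadPoolExecutor(max_workers=4)


def handle_accelerated_write(data, send_status, send_to_target):
    send_status("GOOD")                              # step 408: early status
    future = executor.submit(zlib.compress, data)    # step 410 in background
    # Step 412 runs when the transformation completes; meanwhile the caller
    # is free to accept and buffer the next write command from the host.
    future.add_done_callback(lambda f: send_to_target(f.result()))
```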
FIG. 5 is a process flow diagram illustrating an example method of performing storage media compression and encryption with write acceleration via an SME device having a plurality of processors in accordance with various embodiments. The SME device may receive a write command from a host at 502. The SME device may then compose and send a transfer ready to the host at 504 in response to the write command. The SME device may receive data (e.g., data frames) from the host at 506, where the data corresponds to the write command. The SME device may buffer the data until all data transmitted in association with the write command has been received.
The SME device may compose a status and send the status to the host at 508 after the data has been received. The SME device may send the status to the host after the data has been received from the host, but prior to compression and/or encryption of the data. More specifically, the SME device may send the status to the host prior to initiating compression of the data and/or prior to initiating encryption of the data. Alternatively, the SME device may simply send the status prior to completion of the compression and/or prior to completion of the encryption of the data.
The SME device may compress the data to generate compressed data. Specifically, the SME device may provide the data to a compression Application Specific Integrated Circuit (ASIC) that is configured to perform data compression at 510 such that compressed data is generated. The SME device may then provide the compressed data to one of the plurality of processors (e.g., a processor that is not currently in use) at 512, enabling the one of the plurality of processors to encrypt the compressed data at 514.
In other embodiments, the data may be encrypted without being compressed. Thus, the SME device may provide the data to one of the plurality of processors (e.g., a processor that is not currently encrypting data). The processor may then encrypt the data. The result of the encryption of the compressed (or uncompressed) data by one of the plurality of processors may be referred to as modified data.
In one embodiment, the data frames that are received are buffered until all of the data frames associated with a logical data block are received. The buffering of data frames enables a large amount of data to be compressed. Since there may be redundancy in the data frames, the compression ratio may be improved as the amount of data in the logical data block being compressed is increased. Compression may be performed on the logical data block to generate compressed data. Encryption may then be performed on the compressed data associated with the logical data block. After encryption has been performed on the compressed data, the result for the logical data block may be segmented into multiple data frames such that modified data is generated. A cyclic redundancy check (CRC) value may also be re-computed for the modified data and appended to one of the data frames.
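A minimal sketch of this per-logical-block pipeline follows. Here zlib stands in for the compression ASIC, the XOR keystream is a placeholder rather than a real cipher, and the frame size is an arbitrary illustrative value.

```python
import zlib

FRAME_SIZE = 2048  # illustrative payload size per outgoing data frame


def process_logical_block(frames, key=0x5A):
    block = b"".join(frames)                 # coalesce the buffered frames
    compressed = zlib.compress(block)        # larger blocks compress better
    encrypted = bytes(b ^ key for b in compressed)    # placeholder, not a cipher
    out_frames = [encrypted[i:i + FRAME_SIZE]         # re-segment into frames
                  for i in range(0, len(encrypted), FRAME_SIZE)]
    crc = zlib.crc32(encrypted)              # CRC recomputed for modified data
    return out_frames, crc
```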
In another embodiment, all data frames associated with a logical data block need not be received and buffered in order to initiate compression and/or encryption of data in the logical data block. Thus, the compression and/or encryption process may be initiated well before all of the data associated with a logical data block or write command has been received from the host. Accordingly, compression and/or encryption may be initiated and performed while data frames are still being received in association with a particular logical block or write command.
Once compressed (or uncompressed) data has been encrypted, the SME device may then send the modified data (e.g., in the form of one or more data frames) to one or more targets at 516. One method of sending the modified data to a target will be described in further detail below with reference to FIG. 7.
When the target has received the data, the target may send a status to the SME device. However, because the SME device has already sent a status to the host before receiving the status from the target, the SME device may simply drop the status that it receives from the target.
It is important to note that after the status has been sent to the host at 508, the host may continue to send additional write commands, which may be processed at 502. As a result, while data associated with a write command is being compressed/encrypted, data associated with a subsequent write command may be transmitted by the host. Since the data associated with multiple write commands may be processed (e.g., encrypted) simultaneously by different processors, the disclosed embodiments may eliminate the delay that typically results from encryption of the data.
FIG. 6 is a block diagram illustrating an example data structure that may be used to track the status of tasks being processed by a plurality of processors in accordance with various embodiments. In this example, the data structure is a queue that indicates the status of various host tasks being processed by the SME device. Specifically, the SME device may maintain a queue of host tasks such as write commands being processed by the plurality of processors. The queue may indicate a status 602 of each of the write commands (e.g., tasks), as shown. In addition, the queue may indicate which one of the plurality of processors is processing the corresponding write command. The queue may be implemented via a variety of data structures, such as a linked list or array.
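The queue of FIG. 6 might be sketched as an ordered map keyed by the command tag, with each entry recording the owning processor and a Pending/Done status. The field and method names below are illustrative only.

```python
from collections import OrderedDict


class TaskQueue:
    """One entry per outstanding write command, kept in host arrival order."""

    def __init__(self):
        self._tasks = OrderedDict()

    def add(self, tag, processor_id):
        self._tasks[tag] = {"processor": processor_id, "status": "Pending"}

    def mark_done(self, tag):
        self._tasks[tag]["status"] = "Done"

    def head(self):
        # (tag, entry) of the oldest outstanding write, or None when empty.
        return next(iter(self._tasks.items()), None)

    def pop_head(self):
        return self._tasks.popitem(last=False)
```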
The SME device may set a corresponding indicator in the queue to indicate that processing of a write command is completed. The SME device may consider the processing of the write command to be completed if the modified data has been generated. In some embodiments, the SME device may not consider the processing of the write command to be completed until a transfer ready command has been received from the target (e.g., for that particular write command). The SME device may indicate that the processing of the write command is completed by setting the status 602 of the corresponding write command to indicate that the task is “Done.” As shown in this example, the status of each write command that is being processed may be initialized to “Pending.” Although different I/Os may be completed at different times, the SME device may send data to the target in the same order in which it received the data from the initiator(s). The order in which the SME device sends data to the target may vary with the type of I/O, as well as the type of the target.
FIG. 7 is a process flow diagram illustrating a method of transmitting compressed and/or encrypted data to a target as shown at 516 of FIG. 5 in accordance with various embodiments. In one embodiment, the SME device may send the modified data to a target indicated by the write command if a queue such as that shown in FIG. 6 indicates that processing of the write command is completed and the write command is the next write command in the queue. In this example, the SME device obtains a status of the first task (e.g., write command) represented in the queue at 702. The SME device may send the modified data (compressed and/or encrypted data) generated in association with the write command to the target at 704 if the status indicates that processing in association with the write command is completed. If there are more tasks in the queue at 706, the SME device may obtain the status of the next task in the queue at 708. The process may continue at 704 to send modified data that has been generated in association with write commands to the target, where the modified data is sent in the order in which the write commands were received by the SME device. When no tasks remain in the queue, the process may end at 710.
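The drain logic of FIG. 7 can be sketched as follows, reusing the TaskQueue sketch above. Rather than polling, this sketch simply returns when the head-of-queue task is still pending and is re-invoked later; the modified_data mapping and send_to_target callable are assumptions for illustration.

```python
def drain_in_order(task_queue, modified_data, send_to_target):
    """Forward modified data to the target in host arrival order."""
    while True:
        head = task_queue.head()                # 702/708: status of next task
        if head is None:
            return                              # 710: no tasks remain
        tag, entry = head
        if entry["status"] != "Done":
            return                              # head-of-line task still pending
        send_to_target(modified_data.pop(tag))  # 704: send in order
        task_queue.pop_head()
```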
The embodiment described with reference to FIG. 7 implements polling to ascertain the status of tasks represented in the queue. Thus, the SME device may poll the status of the next task in the queue until the status indicates that processing is completed. In other embodiments, a signal may be generated upon completion of a task in the queue so that polling of the queue need not be performed.
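As an alternative to polling, completion can be signalled with a condition variable, as in this sketch; after waking, the sending logic would re-check the queue head (e.g., by calling drain_in_order above), since a wakeup does not by itself guarantee that the head-of-queue task is done.

```python
import threading

completion = threading.Condition()


def mark_done_and_signal(task_queue, tag):
    # Called by whichever processor finished encrypting the task's data.
    with completion:
        task_queue.mark_done(tag)
        completion.notify()


def wait_for_completion():
    # Called by the sending logic instead of repeatedly polling the queue.
    with completion:
        completion.wait()
```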
The disclosed embodiments may substantially reduce the latency due to encryption and compression. Hosts may be able to store data to storage media in an efficient and secure manner with little or no performance cost. Moreover, where the SME device compresses the data prior to its encryption, the target I/O latencies may be minimized. In this manner, hosts may obtain more effective bandwidth.
Generally, the techniques for performing the disclosed embodiments may be implemented in software and/or hardware. For example, they can be implemented in an operating system kernel, in a separate user process, in a library package bound into network applications, on a specially constructed machine, or on a network interface card. In a specific embodiment, the disclosed techniques are implemented in software such as an operating system or in an application running on an operating system.
A software or software/hardware hybrid packet processing system of this invention may be implemented on a general-purpose programmable machine selectively activated or reconfigured by a computer program stored in memory. Such a programmable machine may be a network device designed to handle network traffic. Such network devices typically have multiple network interfaces, including Fibre Channel interfaces, for example. Specific examples of such network devices include routers and switches. A general architecture for some of these machines will appear from the description given below. Further, various embodiments may be at least partially implemented on a card (e.g., an interface card) for a network device or a general-purpose computing device.
The disclosed embodiments may be implemented at network devices such as switches or routers. Referring now to FIG. 8, a router or switch 810 suitable for implementing embodiments of the invention includes a master central processing unit (CPU) 862, interfaces 868, and a bus 815 (e.g., a PCI bus). When acting under the control of appropriate software or firmware, the CPU 862 is responsible for such router tasks as routing table computations and network management. It may also be responsible for implementing the disclosed embodiments, in whole or in part. The router may accomplish these functions under the control of software including an operating system (e.g., the Nexus Operating System (NX-OS®) of Cisco Systems, Inc.) and any appropriate applications software. CPU 862 may include one or more processors 863 such as a processor from the Motorola family of microprocessors or the MIPS family of microprocessors. In an alternative embodiment, processor 863 is specially designed hardware for controlling the operations of router 810. In a specific embodiment, a memory 861 (such as non-volatile RAM and/or ROM) also forms part of CPU 862. However, there are many different ways in which memory could be coupled to the system. Memory block 861 may be used for a variety of purposes such as, for example, caching and/or storing data, programming instructions, etc.
The interfaces 868 are typically provided as interface cards (sometimes referred to as “line cards”). Generally, they control the sending and receiving of data packets or data segments over the network and sometimes support other peripherals used with the router 810. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as Fibre Channel interfaces, fast Ethernet interfaces, Gigabit Ethernet interfaces, POS interfaces, LAN interfaces, WAN interfaces, metropolitan area network (MAN) interfaces and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include one or more independent processors and, in some instances, volatile RAM. The independent processors may control such communications intensive tasks as packet switching, media control and management, as well as compression and/or encryption, as described herein. By providing separate processors for the communications intensive tasks, these interfaces allow the master microprocessor 862 to efficiently perform routing computations, network diagnostics, security functions, etc. Although the system shown in FIG. 8 is one specific router of the present invention, it is by no means the only router architecture on which the disclosed embodiments can be implemented. For example, an architecture having a single processor that handles communications as well as routing computations, etc. is often used. Further, other types of interfaces and media could also be used with the router.
Regardless of the network device's configuration, it may employ one or more memories or memory modules (such as, for example, memory block 861) configured to store data and/or program instructions for the general-purpose network operations and/or the inventive techniques described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example.
Because such information and program instructions may be employed to implement the systems/methods described herein, the disclosed embodiments relate to machine readable media that include program instructions, state information, etc. for performing various operations described herein. Examples of machine-readable media include, but are not limited to, magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks and DVDs; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory devices (ROM) and random access memory (RAM). Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter.
Although illustrative embodiments and applications of the disclosed embodiments are shown and described herein, many variations and modifications are possible which remain within the concept, scope, and spirit of the embodiments of the invention, and these variations would become clear to those of ordinary skill in the art after perusal of this application. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the disclosed embodiments are not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.

Claims (26)

What is claimed is:
1. A method, comprising:
receiving a write command from a network device, wherein the write command corresponds to a particular task;
composing and sending a transfer ready to the network device in response to the write command;
receiving data from the network device;
composing a status and sending the status to the network device, the status indicating that the write to a target indicated by the write command was successful;
performing at least one of encryption or compression of the data to generate modified data; and
sending the modified data to the target indicated by the write command, wherein the modified data is sent to the target when a task status associated with the particular task indicates that processing in association with the particular task is completed;
wherein the status is sent to the network device after the data has been received from the network device and prior to completion of performing at least one of encryption or compression of the data, wherein the status is sent to the network device prior to receiving a status from the target;
wherein performing at least one of encryption or compression of the data to generate modified data is initiated before all data sent via the write command has been received.
2. The method as recited in claim 1, wherein the target is identified in the write command.
3. The method as recited in claim 1, wherein performing at least one of encryption or compression of the data to generate the modified data comprises:
providing the data to an Application Specific Integrated Circuit (ASIC) configured to perform data compression such that compressed data is generated;
providing the compressed data to one of a plurality of processors; and
encrypting the compressed data by the one of the plurality of processors to generate the modified data.
4. The method as recited in claim 1, wherein performing at least one of encryption or compression of the data to generate the modified data comprises:
providing the data to an Application Specific Integrated Circuit (ASIC) configured to perform data compression such that compressed data is generated.
5. The method as recited in claim 1, wherein performing at least one of encryption or compression of the data to generate the modified data comprises:
providing the data to one of a plurality of processors; and
encrypting the data by the one of the plurality of processors to generate the modified data.
6. The method as recited in claim 1, wherein performing at least one of encryption or compression of the data to generate modified data comprises:
compressing the data such that compressed data is generated;
providing the compressed data to one of a plurality of processors; and
encrypting the compressed data by the one of the plurality of processors to generate the modified data.
7. The method as recited in claim 6, further comprising:
receiving a second write command prior to completion of encrypting the compressed data.
8. The method as recited in claim 7, wherein the second write command is received prior to receiving a status from the target.
9. The method as recited in claim 1, wherein the data received includes a plurality of data frames, the method further comprising:
buffering the plurality of data frames prior to performing at least one of encryption or compression of the data.
10. The method as recited in claim 1,
wherein the receiving a write command and composing and sending a transfer ready is performed by a second device; and
wherein the receiving data, composing a status and sending the status, performing at least one of encryption or compression, and sending steps are performed by a third network device.
11. The method as recited in claim 1, further comprising:
prior to sending the modified data, obtaining the task status of the particular task from a queue, the queue indicating the task status of each of a plurality of tasks, wherein each of the plurality of tasks corresponds to a different write command;
wherein the plurality of tasks includes the particular task corresponding to the write command.
12. An apparatus, comprising:
a plurality of processors; and
a memory, the plurality of processors and the memory being configured for:
receiving a write command from a network device, wherein the write command corresponds to a particular task;
sending a transfer ready to the network device in response to the write command;
receiving data from the network device;
composing a status and sending the status to the network device;
compressing the data to generate compressed data;
encrypting the compressed data by one of the plurality of processors to generate modified data; and
sending the modified data to a target indicated by the write command, wherein the modified data is sent to the target when a task status associated with the particular task indicates that processing in association with the particular task is completed;
wherein the status is sent to the network device after the data has been received from the network device and prior to both compressing and encrypting the data, the status indicating that the write to the target was successful, wherein the status is sent to the network device prior to receiving a status from the target;
wherein compressing the data or encrypting the compressed data to generate modified data is initiated before all data sent via the write command has been received.
13. The apparatus as recited in claim 12, the plurality of processors and the memory being further configured for:
receiving an additional write command from the network device;
sending a transfer ready to the network device in response to the additional write command;
receiving additional data from the network device that corresponds to the additional write command;
sending an additional status to the network device;
compressing the additional data to generate additional compressed data; and
encrypting the additional compressed data by another one of the plurality of processors to generate additional modified data.
14. The apparatus as recited in claim 13, wherein the additional status is sent to the network device after the additional data has been received from the network device and prior to both compressing and encrypting the additional data.
15. The apparatus as recited in claim 12, wherein compressing is initiated before all data sent in association with the write command has been received.
16. The apparatus as recited in claim 12, wherein the data received includes a plurality of data frames, the plurality of processors and the memory being further configured for:
buffering the plurality of data frames by the one of the plurality of processors prior to compressing the data.
17. The apparatus as recited in claim 12, wherein encrypting the compressed data to generate modified data is initiated before all data sent via the write command has been received.
18. A non-transitory computer-readable medium storing thereon computer-readable instructions, comprising:
instructions for composing and sending a status to a network device after data associated with a write command has been received from the network device that has initiated the write command;
instructions for encrypting by one of a plurality of processors the data or compressed data generated from compression of the data such that modified data is generated by the one of the plurality of processors;
instructions for maintaining a queue of write commands being processed by the plurality of processors, wherein the queue of write commands includes the write command; and
instructions for sending the modified data to a target indicated by the write command when a task status associated with the write command in the queue indicates that processing of the write command is completed;
wherein the status is sent to the network device prior to completion of encryption by the one of the plurality of processors, the status indicating that the write to the target was successful, wherein the status is sent to the network device prior to receiving a status from the target.
19. The non-transitory computer-readable medium as recited in claim 18, wherein the queue indicates that processing of the write command is completed if the modified data has been generated.
20. The non-transitory computer-readable medium as recited in claim 18, wherein the queue indicates that processing of the write command is completed if the modified data has been generated and a transfer ready has been received from the target.
21. The non-transitory computer-readable medium as recited in claim 18, further comprising:
instructions for dropping a status received from the target, wherein the status is sent to the network device prior to receiving the status from the target.
22. The non-transitory computer-readable medium as recited in claim 18, wherein the status sent to the network device indicates that the write command has been successfully completed.
23. The non-transitory computer-readable medium as recited in claim 22, further comprising:
instructions for providing the compressed data to the one of the plurality of processors.
24. The non-transitory computer-readable medium as recited in claim 22, wherein the instructions for initiating compression of the data comprise:
instructions for providing the data to an Application Specific Integrated Circuit (ASIC) configured to perform data compression.
25. The non-transitory computer-readable medium as recited in claim 22, wherein the instructions for selecting one of a plurality of processors to perform encryption of the data or compressed data generated from compression of the data such that modified data is generated by the one of the plurality of processors comprise:
selecting one of the plurality of processors to perform encryption of the compressed data.
26. The non-transitory computer-readable medium as recited in claim 18, wherein the queue further indicates the task status of each of the commands, further comprising:
instructions for obtaining the task status of a next one of the commands in the queue.
US12/434,477 2008-05-30 2009-05-01 Storage media encryption with write acceleration Active 2031-01-14 US8464074B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/434,477 US8464074B1 (en) 2008-05-30 2009-05-01 Storage media encryption with write acceleration

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US5771208P 2008-05-30 2008-05-30
US12/434,477 US8464074B1 (en) 2008-05-30 2009-05-01 Storage media encryption with write acceleration

Publications (1)

Publication Number Publication Date
US8464074B1 2013-06-11

Family

ID=48538527

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/434,477 Active 2031-01-14 US8464074B1 (en) 2008-05-30 2009-05-01 Storage media encryption with write acceleration

Country Status (1)

Country Link
US (1) US8464074B1 (en)


Patent Citations (101)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4435762A (en) 1981-03-06 1984-03-06 International Business Machines Corporation Buffered peripheral subsystems
US4423480A (en) 1981-03-06 1983-12-27 International Business Machines Corporation Buffered peripheral system with priority queue and preparation for signal transfer in overlapped operations
US4932826A (en) 1987-01-27 1990-06-12 Storage Technology Corporation Automated cartridge system
US5016277A (en) 1988-12-09 1991-05-14 The Exchange System Limited Partnership Encryption key entry method in a microcomputer-based encryption system
US5347648A (en) 1990-06-29 1994-09-13 Digital Equipment Corporation Ensuring write ordering under writeback cache error conditions
US5758151A (en) 1994-12-09 1998-05-26 Storage Technology Corporation Serial data storage for multiple access demand
US5765213A (en) 1995-12-08 1998-06-09 Emc Corporation Method providing for the flexible prefetching of data from a data storage system
US5809328A (en) 1995-12-21 1998-09-15 Unisys Corp. Apparatus for fibre channel transmission having interface logic, buffer memory, multiplexor/control device, fibre channel controller, gigabit link module, microprocessor, and bus control device
US6317819B1 (en) 1996-01-11 2001-11-13 Steven G. Morton Digital signal processor containing scalar processor and a plurality of vector processors operating from a single instruction
US6178244B1 (en) 1996-01-12 2001-01-23 Mitsubishi Denki Kabushiki Kaisha Cryptosystem
US5754658A (en) * 1996-04-19 1998-05-19 Intel Corporation Adaptive encryption to avoid processor oversaturation
US6219728B1 (en) 1996-04-22 2001-04-17 Nortel Networks Limited Method and apparatus for allocating shared memory resources among a plurality of queues each having a threshold value therefor
US5842040A (en) 1996-06-18 1998-11-24 Storage Technology Corporation Policy caching method and apparatus for use in a communication device based on contents of one data unit in a subset of related data units
US5692124A (en) 1996-08-30 1997-11-25 Itt Industries, Inc. Support of limited write downs through trustworthy predictions in multilevel security of computer network communications
US6049546A (en) 1996-10-15 2000-04-11 At&T Corporation System and method for performing switching in multipoint-to-multipoint multicasting
US5892915A (en) 1997-04-25 1999-04-06 Emc Corporation System having client sending edit commands to server during transmission of continuous media from one clip in play list for editing the play list
US6148421A (en) 1997-05-30 2000-11-14 Crossroads Systems, Inc. Error detection and recovery for sequential access devices in a fibre channel protocol
US6026468A (en) 1997-08-18 2000-02-15 Fujitsu Limited Method of controlling magnetic tape unit
US6141728A (en) 1997-09-29 2000-10-31 Quantum Corporation Embedded cache manager
US5930344A (en) 1997-10-14 1999-07-27 At & T Corp. Method and apparatus for tracing a specific communication
US6381665B2 (en) 1997-12-23 2002-04-30 Intel Corporation Mechanisms for converting interrupt request signals on address and data lines to interrupt message signals
US6172520B1 (en) 1997-12-30 2001-01-09 Xilinx, Inc. FPGA system with user-programmable configuration ports and method for reconfiguring the FPGA
US6327253B1 (en) 1998-04-03 2001-12-04 Avid Technology, Inc. Method and apparatus for controlling switching of connections among data processing devices
US6070200A (en) 1998-06-02 2000-05-30 Adaptec, Inc. Host adapter having paged data buffers for continuously transferring data between a system bus and a peripheral bus
US6782473B1 (en) 1998-11-03 2004-08-24 Lg Information & Communications, Ltd. Network encryption system
US20020059439A1 (en) 1999-02-26 2002-05-16 Arroyo Keith M. Streaming method and system for fibre channel network devices
US6570848B1 (en) 1999-03-30 2003-05-27 3Com Corporation System and method for congestion control in packet-based communication networks
US6449697B1 (en) 1999-04-23 2002-09-10 International Business Machines Corporation Prestaging data into cache in preparation for data transfer operations
US20040170432A1 (en) 1999-05-24 2004-09-02 Reynolds Robert A. Method and system for multi-initiator support to streaming devices in a fibre channel network
US7568067B1 (en) 1999-08-06 2009-07-28 Fujitsu Limited Method for controlling magnetic tape unit
US6788680B1 (en) 1999-08-25 2004-09-07 Sun Microsystems, Inc. Defferrable processing option for fast path forwarding
US6651162B1 (en) 1999-11-04 2003-11-18 International Business Machines Corporation Recursively accessing a branch target address cache using a target address previously accessed from the branch target address cache
US6625750B1 (en) 1999-11-16 2003-09-23 Emc Corporation Hardware and software failover services for a file server
US7065582B1 (en) 1999-12-21 2006-06-20 Advanced Micro Devices, Inc. Automatic generation of flow control frames
US6791989B1 (en) 1999-12-30 2004-09-14 Agilent Technologies, Inc. Fibre channel interface controller that performs non-blocking output and input of fibre channel data frames and acknowledgement frames to and from a fibre channel
US20010016878A1 (en) 2000-02-17 2001-08-23 Hideki Yamanaka Communicating system and communicating method for controlling throughput
US6658540B1 (en) 2000-03-31 2003-12-02 Hewlett-Packard Development Company, L.P. Method for transaction command ordering in a remote data replication system
US20020024970A1 (en) 2000-04-07 2002-02-28 Amaral John M. Transmitting MPEG data packets received from a non-constant delay network
US6757767B1 (en) 2000-05-31 2004-06-29 Advanced Digital Information Corporation Method for acceleration of storage devices by returning slightly early write status
US7219158B2 (en) 2000-07-21 2007-05-15 Hughes Network Systems Llc Method and system for improving network performance using a performance enhancing proxy
US20030021417A1 (en) 2000-10-20 2003-01-30 Ognjen Vasic Hidden link dynamic key manager for use in computer systems with database structure for storage of encrypted data and method for storage and retrieval of encrypted data
US6507893B2 (en) 2001-01-26 2003-01-14 Dell Products, L.P. System and method for time window access frequency based caching for memory controllers
US6633548B2 (en) 2001-01-30 2003-10-14 Nokia Intelligent Edge Routers Inc. Method and apparatus for ternary content addressable memory (TCAM) table management
US20060018293A1 (en) * 2001-02-01 2006-01-26 Ipr Licensing, Inc. Alternate channel for carrying selected message types
US6880062B1 (en) 2001-02-13 2005-04-12 Candera, Inc. Data mover mechanism to achieve SAN RAID at wire speed
US20020124007A1 (en) * 2001-03-02 2002-09-05 Wuhan P&S Electronics Company Ltd. Network server and database therein
US7000025B1 (en) 2001-05-07 2006-02-14 Adaptec, Inc. Methods for congestion mitigation in infiniband
US20020169521A1 (en) 2001-05-10 2002-11-14 Goodman Brian G. Automated data storage library with multipurpose slots providing user-selected control path to shared robotic device
US6751758B1 (en) 2001-06-20 2004-06-15 Emc Corporation Method and system for handling errors in a data storage environment
US20050031126A1 (en) 2001-08-17 2005-02-10 Jonathan Edney Security in communications networks
US7472231B1 (en) 2001-09-07 2008-12-30 Netapp, Inc. Storage area network data cache
US7185062B2 (en) 2001-09-28 2007-02-27 Emc Corporation Switch-based storage services
US20030093567A1 (en) 2001-09-28 2003-05-15 Lolayekar Santosh C. Serverless storage services
US20030065882A1 (en) 2001-10-01 2003-04-03 Beeston Ralph Thomas System for fast tape file positioning
US20030084209A1 (en) 2001-10-31 2003-05-01 Chadalapaka Mallikarjun B. System and method for storage virtualization
US7165180B1 (en) 2001-11-27 2007-01-16 Vixs Systems, Inc. Monolithic semiconductor device for preventing external access to an encryption key
US6775749B1 (en) 2002-01-29 2004-08-10 Advanced Micro Devices, Inc. System and method for performing a speculative cache fill
US20030185154A1 (en) 2002-03-29 2003-10-02 Nishan Systems, Inc. Network congestion management systems and methods
US7219237B1 (en) 2002-03-29 2007-05-15 Xilinx, Inc. Read- and write-access control circuits for decryption-key memories on programmable logic devices
US20050021949A1 (en) 2002-05-09 2005-01-27 Niigata Seimitsu Co., Ltd. Encryption apparatus, encryption method, and encryption system
US6941429B1 (en) 2002-06-25 2005-09-06 Emc Corporation System and method for improving performance of a data backup operation
US20040148376A1 (en) 2002-06-28 2004-07-29 Brocade Communications Systems, Inc. Storage area network processing device
US7237045B2 (en) 2002-06-28 2007-06-26 Brocade Communications Systems, Inc. Apparatus and method for storage processing through scalable port processors
US20040010660A1 (en) 2002-07-11 2004-01-15 Storage Technology Corporation Multi-element storage array
US20040081082A1 (en) 2002-07-12 2004-04-29 Crossroads Systems, Inc. Mechanism for enabling enhanced fibre channel error recovery across redundant paths using SCSI level commands
US20040049564A1 (en) 2002-09-09 2004-03-11 Chan Ng Method and apparatus for network storage flow control
US7181578B1 (en) 2002-09-12 2007-02-20 Copan Systems, Inc. Method and apparatus for efficient scalable storage management
US20040088574A1 (en) 2002-10-31 2004-05-06 Brocade Communications Systems, Inc. Method and apparatus for encryption or compression devices inside a storage area network fabric
US20040153566A1 (en) 2003-01-31 2004-08-05 Brocade Communications Systems, Inc. Dynamic link distance configuration for extended fabric
US20040158668A1 (en) 2003-02-11 2004-08-12 Richard Golasky System and method for managing target resets
US20040160903A1 (en) 2003-02-13 2004-08-19 Andiamo Systems, Inc. Security groups for VLANs
US7389462B1 (en) 2003-02-14 2008-06-17 Istor Networks, Inc. System and methods for high rate hardware-accelerated network protocol processing
US20040202073A1 (en) 2003-04-09 2004-10-14 Yung-Hsiao Lai Systems and methods for caching multimedia data
US7397764B2 (en) 2003-04-30 2008-07-08 Lucent Technologies Inc. Flow control between fiber channel and wide area networks
US7295519B2 (en) 2003-06-20 2007-11-13 Motorola, Inc. Method of quality of service based flow control within a distributed switch fabric network
US7583597B2 (en) 2003-07-21 2009-09-01 Qlogic Corporation Method and system for improving bandwidth and reducing idles in fibre channel switches
US7286476B2 (en) 2003-08-01 2007-10-23 F5 Networks, Inc. Accelerating network performance by striping and parallelization of TCP connections
US20050114663A1 (en) 2003-11-21 2005-05-26 Finisar Corporation Secure network access devices with data encryption
US20050117522A1 (en) 2003-12-01 2005-06-02 Andiamo Systems, Inc. Apparatus and method for performing fast fibre channel write operations over relatively high latency networks
US20050144394A1 (en) 2003-12-24 2005-06-30 Komarla Eshwari P. For adaptive caching
US20050192923A1 (en) 2004-02-27 2005-09-01 Daiki Nakatsuka Computer system for allocating storage area to computer based on security level
US7617365B2 (en) 2004-04-28 2009-11-10 Emc Corporation Systems and methods to avoid deadlock and guarantee mirror consistency during online mirror synchronization and verification
US20060036821A1 (en) 2004-04-30 2006-02-16 Frey Robert T Storage switch mirrored write sequence count management
US20060039370A1 (en) 2004-08-23 2006-02-23 Warren Rosen Low latency switch architecture for high-performance packet-switched networks
US20060059336A1 (en) 2004-08-30 2006-03-16 Miller Daryl R Secure communication port redirector
US20060059313A1 (en) 2004-09-10 2006-03-16 Lange Stephan J Method and apparatus for reading a data store
US7411958B2 (en) 2004-10-01 2008-08-12 Qlogic, Corporation Method and system for transferring data directly between storage devices in a storage area network
US20060104269A1 (en) 2004-11-15 2006-05-18 Perozo Angel G Method and system for processing frames in storage controllers
US20060112149A1 (en) 2004-11-19 2006-05-25 Nec Corporation Storage system, method for replication thereof and program
US7436773B2 (en) 2004-12-07 2008-10-14 International Business Machines Corporation Packet flow control in switched full duplex ethernet networks
US20060126520A1 (en) 2004-12-15 2006-06-15 Cisco Technology, Inc. Tape acceleration
US7414973B2 (en) 2005-01-24 2008-08-19 Alcatel Lucent Communication traffic management systems and methods
US20060214766A1 (en) * 2005-03-28 2006-09-28 Riad Ghabra Secret key programming technique for transponders using encryption
US20060248278A1 (en) 2005-05-02 2006-11-02 International Business Machines Corporation Adaptive read ahead method of data recorded on a sequential media readable via a variable data block size storage device
US20060262784A1 (en) 2005-05-19 2006-11-23 Cisco Technology, Inc. Technique for in order delivery of traffic across a storage area network
US20070088795A1 (en) 2005-09-29 2007-04-19 Emc Corporation Internet small computer systems interface (iSCSI) distance acceleration device
US20070101134A1 (en) 2005-10-31 2007-05-03 Cisco Technology, Inc. Method and apparatus for performing encryption of data at rest at a port of a network device
US8266431B2 (en) 2005-10-31 2012-09-11 Cisco Technology, Inc. Method and apparatus for performing encryption of data at rest at a port of a network device
US20070115966A1 (en) 2005-11-21 2007-05-24 Broadcom Corporation Compact packet operation device and method
US7890655B2 (en) 2006-02-16 2011-02-15 Cisco Technology, Inc. Storage area network port based data transfer acceleration
US7415574B2 (en) 2006-07-05 2008-08-19 Cisco Technology, Inc. Dynamic, on-demand storage area network (SAN) cache

Non-Patent Citations (40)

* Cited by examiner, † Cited by third party
Title
"Affordable Remote Tape Backup/Restore via Tape Pipelining Over IP", CNT White Paper, pp. 1-8.
Austin Modine, "HP adds Encryption Gear for Storage Systems, Tape and Virtual Tape get Algorithmic," The Register, printed on Apr. 28, 2009 from URL: http://www.theregister.co.uk/2008/04/04/hp-storage-encryption-add-ons/, 2 pgs.
Cisco MDS 9000 Intelligent Fabric Applications, "Fibre Channel Write Acceleration" printed on Apr. 28, 2009 from website: http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps5989/ps6217/prod-white-paper, 7 pgs.
CN First Office Action dated Jan. 26, 2011, from CN Appl. No. 200680023522.7.
CN Second Office Action dated Aug. 2, 2011, from CN Appl. No. 200680023522.7.
Forouzan et al., "TCP/IP Protocol Suite", 2002, McGraw-Hill Professional, Second Edition, Section 12.4, pp. 305-308.
ISR and Written Opinion dated Mar. 26, 2008 from related PCT Application No. PCT/US06/42477, 7 pgs.
Notice of Allowance dated May 3, 2012, U.S. Appl. No. 11/264,191.
StorageTek: The Leader in Information Lifecycle Management Solutions, http://www.storagetek.com, printed Aug. 4, 2005, p. 1.
U.S. Appl. No. 11/015,383, filed Dec. 15, 2004.
U.S. Appl. No. 11/220,490, filed Sep. 6, 2005, 30 pgs.
U.S. Final Office Action dated Aug. 31, 2011 from U.S. Appl. No. 11/264,191.
U.S. Final Office Action dated Dec. 23, 2010, U.S. Appl. No. 11/015,383.
U.S. Final Office Action dated Jul. 26, 2010 from U.S. Appl. No. 11/264,191.
U.S. Final Office Action dated Mar. 24, 2009 from related U.S. Appl. No. 11/220,490, 19 pgs.
U.S. Final Office Action dated Mar. 3, 2010 from related U.S. Appl. No. 11/220,490.
U.S. Final Office Action dated Nov. 14, 2008 from U.S. Appl. No. 11/356,881.
U.S. Final Office Action dated Nov. 9, 2009 from U.S. Appl. No. 11/356,881.
U.S. Final Office Action dated Oct. 16, 2009 from U.S. Appl. No. 11/015,383.
U.S. Final Office Action dated Sep. 14, 2011 from U.S. Appl. No. 11/015,383.
U.S. Final Office Action dated Sep. 9, 2009 from U.S. Appl. No. 11/264,191.
U.S. Non-Final Office Action dated Aug. 24, 2010, from U.S. Appl. No. 11/015,383.
U.S. Non-Final Office Action dated May 6, 2011 from U.S. Appl. No. 11/015,383.
U.S. Notice of Allowance dated Feb. 2, 2011 from related U.S. Appl. No. 11/220,490.
U.S. Notice of Allowance dated Jul. 27, 2010 from related U.S. Appl. No. 11/220,490.
U.S. Notice of Allowance dated May 19, 2011 from related U.S. Appl. No. 11/220,490.
U.S. Notice of Allowance dated Oct. 14, 2010 from U.S. Appl. No. 11/356,881.
U.S. Notice of Allowance dated Sep. 29, 2011 from related U.S. Appl. No. 11/220,490.
U.S. Office Action dated Apr. 2, 2009 from U.S. Appl. No. 11/015,383, 16 pgs.
U.S. Office Action dated Aug. 7, 2008 from U.S. Appl. No. 11/356,881.
U.S. Office Action dated Dec. 5, 2008 from related U.S. Appl. No. 11/220,490, 17 pgs.
U.S. Office Action dated Jan. 25, 2010 from U.S. Appl. No. 11/264,191.
U.S. Office Action dated Jun. 3, 2010 from U.S. Appl. No. 11/356,881.
U.S. Office Action dated Mar. 18, 2009 from related U.S. Appl. No. 11/264,191, 13 pgs.
U.S. Office Action dated Mar. 4, 2011 from U.S. Appl. No. 11/264,191.
U.S. Office Action dated May 28, 2008 from related U.S. Appl. No. 11/220,490, 22 pgs.
U.S. Office Action dated May 8, 2009 from U.S. Appl. No. 11/356,881.
U.S. Office Action dated Nov. 15, 2007 from related U.S. Appl. No. 11/220,490, 14 pgs.
U.S. Office Action dated Sep. 3, 2009 from related U.S. Appl. No. 11/220,490.
Viega J. et al., Network Working Group, "The Use of Galois/Counter Mode (GCM) in IPsec Encapsulating Security Payload (ESP)," RFC 4106, Secure Software, Inc., Jun. 2005, 16 pgs.

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110010434A1 (en) * 2009-07-08 2011-01-13 International Business Machines Corporation Storage system
US8924513B2 (en) * 2009-07-08 2014-12-30 International Business Machines Corporation Storage system
US20130227213A1 (en) * 2012-02-27 2013-08-29 Samsung Electronics Co., Ltd. Memory controller and operation method thereof

Similar Documents

Publication Publication Date Title
US9645951B1 (en) Accelerator system for remote data storage
US20220278965A1 (en) Technologies for accelerated quic packet processing with hardware offloads
US10013274B2 (en) Migrating virtual machines to perform boot processes
US20190163364A1 (en) System and method for tcp offload for nvme over tcp-ip
US7827251B2 (en) Fast write operations to a mirrored volume in a volume manager
US7349999B2 (en) Method, system, and program for managing data read operations on network controller with offloading functions
US9264495B2 (en) Apparatus and methods for handling network file operations over a fibre channel network
WO2015058699A1 (en) Data forwarding
CN104021069A (en) Management method and system for software performance test based on distributed virtual machine system
WO2021073546A1 (en) Data access method, device, and first computer device
JP4347351B2 (en) Data encryption apparatus, data decryption apparatus, data encryption method, data decryption method, and data relay apparatus
US7302500B2 (en) Apparatus and method for packet based storage virtualization
CN107622207B (en) Encrypted system-level data structure
US7639715B1 (en) Dedicated application interface for network systems
US8464074B1 (en) Storage media encryption with write acceleration
US8676928B1 (en) Method and system for writing network data
US8131857B2 (en) Fibre channel intelligent target management system
US7234101B1 (en) Method and system for providing data integrity in storage systems
US8793542B2 (en) Controlling IPSec offload enablement during hardware failures
US9292225B2 (en) Methods for frame order control and devices in storage area network
US8868674B2 (en) Streaming and bulk data transfer transformation with context switching
US11622029B2 (en) Optimizing information transmitted over a direct communications connection
US11330032B2 (en) Method and system for content proxying between formats
WO2017074450A1 (en) Combining data blocks from virtual machines
CN116264507A (en) Data encryption method, data decryption method, device and equipment

Legal Events

Date Code Title Description
AS Assignment
Owner name: CISCO TECHNOLOGY, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PARTHASARATHY, ANAND;KONDAMURI, CHANDRA SEKHAR;CHINCHANI, RAMKUMAR;AND OTHERS;REEL/FRAME:022628/0481
Effective date: 20090427

STCF Information on status: patent grant
Free format text: PATENTED CASE

FPAY Fee payment
Year of fee payment: 4

MAFP Maintenance fee payment
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
Year of fee payment: 8