US20110082947A1 - Connection rate limiting - Google Patents

Connection rate limiting

Info

Publication number
US20110082947A1
US20110082947A1 (application US12/723,615)
Authority
US
United States
Prior art keywords
load balancing
balancing service
firewall load
counter
firewall
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/723,615
Inventor
Ronald W. Szeto
David Chun Ying Cheung
Rajkumar Jalan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foundry Networks LLC
Original Assignee
Foundry Networks LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foundry Networks LLC
Priority to US12/723,615
Assigned to FOUNDRY NETWORKS, LLC: CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: FOUNDRY NETWORKS, INC.
Publication of US20110082947A1
Assigned to FOUNDRY NETWORKS, INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHEUNG, DAVID CHUN YING; JALAN, RAJKUMAR; SZETO, RONALD W.
Legal status: Abandoned (current)

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1036 - Load balancing of requests to servers for services different from user content provisioning, e.g. load balancing across domain name servers
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 - Network architectures or network communication protocols for network security
    • H04L 63/02 - Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L 63/0209 - Architectural arrangements, e.g. perimeter networks or demilitarized zones
    • H04L 63/0218 - Distributed architectures, e.g. distributed firewalls
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 - Network services
    • H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Abstract

Each service in a computer network may have a connection rate limit. The number of new connections per time period may be limited by using a series of rules. In a specific embodiment of the present invention, a counter is increased each time a server is selected to handle a connection request. For each service, connections coming in are tracked. Therefore, the source of connection-request packets need not be examined. Only the destination service is important. This saves significant time in the examination of the incoming requests. Each service may have its own set of rules to best handle the new traffic for its particular situation.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • The present application is a continuation of Ser. No. 10/139,073, filed May 3, 2002, by Ronald W. Szeto, David Chun Ying Cheung, and Rajkumar Jalan, entitled “CONNECTION RATE LIMITING” and is related to co-pending application Ser. No. 10/139,076, filed May 3, 2002, by Ronald W. Szeto, David Chun Ying Cheung, and Rajkumar Jalan, entitled “CONNECTION RATE LIMITING FOR SERVER LOAD BALANCING AND TRANSPARENT CACHE SWITCHING”.
  • COPYRIGHT NOTICE
  • A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
  • FIELD OF THE INVENTION
  • The present invention relates to the field of web switches. More particularly, the present invention relates to connection rate limiting to ensure proper functioning of components on a web switch.
  • BACKGROUND OF THE INVENTION
  • Web switches provide traffic management to computer networks. The traffic management extends to packets received both from an outside network, such as the Internet, and from an internal network. A web switch may provide a series of software components to better handle the traffic. These components may include server load balancing (SLB), transparent cache switching (TCS), and firewall load balancing (FWLB). Server load balancing allows IP-based services to be transparently balanced across multiple servers. This distributed design prevents servers from getting overloaded. Transparent cache switching allows for distributed cache servers, and likewise prevents the cache servers from getting overloaded. Firewall load balancing increases the network's overall firewall performance by distributing the Internet traffic load across multiple firewalls.
  • Even though these software components are designed to manage traffic, the components themselves may become overwhelmed when traffic is heavy. For example, a server running TCS may become so overloaded with connections that it fails to properly handle packets sent through the connections. Traditional techniques for handling such a situation involve limiting the packet rate. This involves monitoring the number of packets received in short intervals, and dropping or redirecting packets if the number exceeds a threshold value. Unfortunately, for traffic management components, the number of packets received is not a direct predictor of when the components will become overloaded. These traffic management components are more likely to become overloaded when new connections are being established too quickly, as opposed to when new packets are coming in over those connections.
  • What is needed is a solution to better handle increased traffic to traffic management components.
  • BRIEF DESCRIPTION OF THE INVENTION
  • Each service in a computer network may have a connection rate limit. The number of new connections per time period may be limited by using a series of rules. In a specific embodiment of the present invention, a counter is increased each time a server is selected to handle a connection request. For each service, connections coming in are tracked. Therefore, the source of connection-request packets need not be examined. Only the destination service is important. This saves significant time in the examination of the incoming requests. Each service may have its own set of rules to best handle the new traffic for its particular situation.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate one or more embodiments of the present invention and, together with the detailed description, serve to explain the principles and implementations of the invention.
  • In the drawings:
  • FIG. 1 is a flow diagram illustrating a method for managing a traffic management service in a computer network in accordance with a specific embodiment of the present invention.
  • FIG. 2 is a flow diagram illustrating a method for managing a traffic management service distributed over multiple servers in a computer network in accordance with a specific embodiment of the present invention.
  • FIG. 3 is a flow diagram illustrating a method for managing a firewall load balancing service in a computer network in accordance with a specific embodiment of the present invention.
  • FIG. 4 is a flow diagram illustrating a method for managing a firewall load balancing service distributed over multiple firewalls in a computer network in accordance with a specific embodiment of the present invention.
  • FIG. 5 is a flow diagram illustrating a method for managing a server load balancing service in a computer network in accordance with a specific embodiment of the present invention.
  • FIG. 6 is a flow diagram illustrating a method for managing a server load balancing service distributed over multiple servers in a computer network in accordance with a specific embodiment of the present invention.
  • FIG. 7 is a flow diagram illustrating a method for managing a transparent cache switching service in a computer network in accordance with a specific embodiment of the present invention.
  • FIG. 8 is a flow diagram illustrating a method for managing a transparent cache switching service distributed over multiple caches in a computer network in accordance with a specific embodiment of the present invention.
  • FIG. 9 is a block diagram illustrating an apparatus for managing a traffic management service in a computer network in accordance with a specific embodiment of the present invention.
  • FIG. 10 is a block diagram illustrating an apparatus for managing a firewall load balancing service in a computer network in accordance with a specific embodiment of the present invention.
  • FIG. 11 is a block diagram illustrating an apparatus for managing a firewall load balancing service distributed over multiple firewalls in a computer network in accordance with a specific embodiment of the present invention.
  • FIG. 12 is a block diagram illustrating an apparatus for managing a server load balancing service in a computer network in accordance with a specific embodiment of the present invention.
  • FIG. 13 is a block diagram illustrating an apparatus for managing a transparent cache switching service in a computer network in accordance with a specific embodiment of the present invention.
  • DETAILED DESCRIPTION
  • Embodiments of the present invention are described herein in the context of a system of computers, servers, and software. Those of ordinary skill in the art will realize that the following detailed description of the present invention is illustrative only and is not intended to be in any way limiting. Other embodiments of the present invention will readily suggest themselves to such skilled persons having the benefit of this disclosure. Reference will now be made in detail to implementations of the present invention as illustrated in the accompanying drawings. The same reference indicators will be used throughout the drawings and the following detailed description to refer to the same or like parts.
  • In the interest of clarity, not all of the routine features of the implementations described herein are shown and described. It will, of course, be appreciated that in the development of any such actual implementation, numerous implementation-specific decisions must be made in order to achieve the developer's specific goals, such as compliance with application- and business-related constraints, and that these specific goals will vary from one implementation to another and from one developer to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking of engineering for those of ordinary skill in the art having the benefit of this disclosure.
  • In accordance with the present invention, the components, process steps, and/or data structures may be implemented using various types of operating systems, computing platforms, computer programs, and/or general purpose machines. In addition, those of ordinary skill in the art will recognize that devices of a less general purpose nature, such as hardwired devices, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or the like, may also be used without departing from the scope and spirit of the inventive concepts disclosed herein.
  • A traffic management component may be distributed over many different servers. Therefore, for purposes of this application a specific component type (such as TCS) may be referred to as a service. In accordance with a specific embodiment of the present invention, each service has a connection rate limit. The number of new connections per time period may be limited by using a series of rules. In a specific embodiment of the present invention, a counter is increased each time a server is selected to handle a connection request. For each service, connections coming in are tracked. Therefore, the source of connection-request packets need not be examined. Only the destination service is important. This saves significant time in the examination of the incoming requests. Each service may have its own set of rules to best handle the new traffic for its particular situation.
  • In accordance with a specific embodiment of the present invention, a new transmission control protocol (TCP) connection request may be detected by looking at the SYN bit of the incoming packet. If it is set to on, then the packet is a new connection request. In accordance with another specific embodiment of the present invention, a new user datagram protocol (UDP) connection request may be detected by looking for any packet that doesn't have a session.
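  • As an illustration of the detection logic just described, the following sketch classifies an incoming packet as a new connection request; the Packet fields and the session_table are assumptions introduced for this example and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    protocol: str   # "TCP" or "UDP"
    syn: bool       # TCP SYN flag (ignored for UDP)
    src: str        # source address, e.g. "10.0.0.1:40312"
    dst: str        # destination address identifying the service

def is_new_connection_request(packet: Packet, session_table: set) -> bool:
    """Mirror the detection rules above: TCP SYN bit set, or a UDP packet
    that does not belong to any existing session."""
    if packet.protocol == "TCP":
        # The patent checks the SYN bit; a stricter check could also require
        # the ACK bit to be clear so that SYN-ACK replies are excluded.
        return packet.syn
    if packet.protocol == "UDP":
        return (packet.src, packet.dst) not in session_table
    return False
```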
  • In accordance with a specific embodiment of the present invention, connection rate limiting is applied to a server load balancing service. Upon receipt of a connection request that would exceed the maximum number of permitted connections per second, a reset is sent to the client (requesting party). Thus, instead of a user's request simply appearing to “hang” indefinitely, feedback is provided to the user to try again.
  • In accordance with a specific embodiment of the present invention, connection rate limiting is applied to transparent cache switching. Upon receipt of a connection request that would exceed the maximum number of permitted connections per second, the request is sent to the Internet. Thus, instead of not getting the service at all, the user still has a strong chance of getting the request served. This process is transparent to the user.
  • In accordance with a specific embodiment of the present invention, connection rate limiting is applied to firewall load balancing. Upon receipt of a connection request that would exceed the maximum number of permitted connections per second, the request is hashed to send it to a specific firewall. A hashing scheme may be applied to determine to which firewall to send the connection request. Different criteria may be applied in the hash table. For example, the hash table may be defined to direct the request to the firewall with the least connections. Alternatively, a round robin approach may be applied. In another embodiment, a weighted approach may be applied. The “scheme” may alternatively be a lack of a scheme, i.e., packets are simply dropped if the number of permitted connections per second is exceeded.
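  • A minimal sketch of these per-service overflow behaviors appears below; the Action enum and the dispatch table are naming conventions invented for the example rather than terminology from the patent.

```python
from enum import Enum, auto

class Action(Enum):
    SEND_RESET = auto()           # server load balancing: client gets a reset and may retry
    FORWARD_TO_INTERNET = auto()  # transparent cache switching: bypass the caches
    HASH_TO_FIREWALL = auto()     # firewall load balancing: still pick a firewall via a hash
    DROP = auto()                 # firewall load balancing: the "lack of a scheme" case

# One possible mapping of service type to overflow behavior, as described above.
OVERFLOW_ACTION = {
    "SLB": Action.SEND_RESET,
    "TCS": Action.FORWARD_TO_INTERNET,
    "FWLB": Action.HASH_TO_FIREWALL,  # or Action.DROP, depending on policy
}

def handle_overflow(service_type: str) -> Action:
    """Decide what to do with a request that would exceed the rate limit."""
    return OVERFLOW_ACTION.get(service_type, Action.DROP)
```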
  • In accordance with another embodiment of the present invention, the connection rate limiting may be applied on a per server basis in addition to or instead of a per service basis. For example, the number of connections sent to a particular firewall may be limited, but other firewalls in the system may have no limiting or a different limiting scheme applied.
  • FIG. 1 is a flow diagram illustrating a method for managing a traffic management service in a computer network in accordance with a specific embodiment of the present invention. At 100, a new connection request for the service is detected by looking at a SYN bit of an incoming transmission control protocol (TCP) packet. Alternatively, a new connection request for the service may be detected by looking for any user datagram protocol (UDP) packets without a session. At 102, a counter is reset to zero if the elapsed time since the last counter reset is greater than a predetermined time interval. At 104, a counter is incremented each time a new connection request is received for the service. At 106, new connection requests received for the service are denied if the counter increases at a rate exceeding a predetermined connection rate limit for the service. This denial may comprise sending a reset to a source address contained in a new connection request. Alternatively, it may comprise forwarding the new connection request to the Internet. It may also forward the new connection request in accordance with criteria in a hash table. The connection rate limit may be a number of connections per predetermined time interval.
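  • The counting steps of FIG. 1 (reset the counter after each interval, increment on every new request, deny once the limit is exceeded) can be sketched as follows; the class and parameter names are invented for the example.

```python
import time

class ConnectionRateLimiter:
    """Per-service counter sketch: `limit` new connections per `interval` seconds."""

    def __init__(self, limit: int, interval: float = 1.0):
        self.limit = limit
        self.interval = interval
        self.counter = 0
        self.last_reset = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # 102: reset the counter to zero if the elapsed time exceeds the interval.
        if now - self.last_reset > self.interval:
            self.counter = 0
            self.last_reset = now
        # 104: increment the counter for each new connection request.
        self.counter += 1
        # 106: deny the request once the limit for this interval is exceeded.
        return self.counter <= self.limit
```

  • For example, ConnectionRateLimiter(limit=1000, interval=1.0) would admit roughly 1,000 new connection requests per second for the service and deny the remainder until the counter is next reset.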
  • FIG. 2 is a flow diagram illustrating a method for managing a traffic management service distributed over multiple servers in a computer network in accordance with a specific embodiment of the present invention. At 200, a new connection request for the service is detected by looking at a SYN bit of an incoming transmission control protocol (TCP) packet. Alternatively, a new connection request for the service may be detected by looking for any user datagram protocol (UDP) packets without a session. At 202, a counter is reset to zero if the elapsed time since the last counter reset is greater than a predetermined time interval. At 204, a counter is incremented each time a new connection request is received for the service on one of the servers. At 206, new connection requests received for the service on the one server are denied if the counter increases at a rate exceeding a predetermined connection rate limit for the service on that server. This denying may comprise sending a reset to a source address contained in a new connection request. Alternatively, it may comprise forwarding the new connection request to the Internet. It may also forward the new connection request in accordance with criteria in a hash table. The connection rate limit may be a number of connections per predetermined time interval.
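  • For the distributed case of FIG. 2, the same idea can be keyed per server, as in this self-contained sketch (again with names invented for illustration).

```python
import time
from collections import defaultdict

class PerServerRateLimiter:
    """One counter per server of the distributed service."""

    def __init__(self, limit_per_server: int, interval: float = 1.0):
        self.limit = limit_per_server
        self.interval = interval
        self.counters = defaultdict(int)   # server id -> count in current interval
        self.last_reset = time.monotonic()

    def allow(self, server_id: str) -> bool:
        now = time.monotonic()
        if now - self.last_reset > self.interval:
            self.counters.clear()          # reset every per-server counter
            self.last_reset = now
        self.counters[server_id] += 1
        return self.counters[server_id] <= self.limit
```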
  • FIG. 3 is a flow diagram illustrating a method for managing a firewall load balancing service in a computer network in accordance with a specific embodiment of the present invention. At 300, a new firewall load balancing service connection request is detected by looking at a SYN bit of an incoming transmission control protocol (TCP) packet. Alternatively, a new firewall load balancing service connection request may be detected by looking for any user datagram protocol (UDP) packets without a session. At 302, a counter is reset to zero if the elapsed time since the last counter reset is greater than a predetermined time interval. At 304, a counter is incremented each time a new firewall load balancing service connection request is received. At 306, new firewall load balancing service connection requests are dropped if the counter increases at a rate exceeding a predetermined firewall load balancing service connection rate limit. The connection rate limit may be a number of connections per predetermined time interval.
  • FIG. 4 is a flow diagram illustrating a method for managing a firewall load balancing service distributed over multiple firewalls in a computer network in accordance with a specific embodiment of the present invention. At 400, a new firewall load balancing service connection request for the service is detected by looking at a SYN bit of an incoming transmission control protocol (TCP) packet. Alternatively, a new firewall load balancing service connection request for the service may be detected by looking for any user datagram protocol (UDP) packets without a session. At 402, a counter is reset to zero if the elapsed time since the last counter reset is greater than a predetermined time interval. At 404, a counter is incremented each time a new firewall load balancing service connection request is received. At 406, a hashing scheme is applied to determine to which firewall to forward a new firewall load balancing service connection request if the counter increases at a rate exceeding a predetermined firewall load balancing service connection rate limit. The hashing scheme may be one of several different possibilities. It may comprise directing a new firewall load balancing service connection request to the firewall with the least connections. It may comprise directing a new firewall load balancing service connection request to a firewall according to a round robin approach. It may comprise directing a new firewall load balancing service connection request to a firewall according to a weighted approach. The connection rate limit may be a number of connections per predetermined time interval.
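  • The three selection criteria named for the hashing scheme can be sketched as follows; the connection counts and weights are assumed inputs supplied for the example.

```python
import itertools
import random

def least_connections(firewalls, active_connections):
    """Direct the request to the firewall with the fewest active connections."""
    return min(firewalls, key=lambda fw: active_connections.get(fw, 0))

def round_robin(firewalls):
    """Return a selector that cycles through the firewalls in order."""
    cycle = itertools.cycle(firewalls)
    return lambda: next(cycle)

def weighted(firewalls, weights):
    """Pick a firewall with probability proportional to its configured weight."""
    return random.choices(firewalls, weights=[weights[fw] for fw in firewalls])[0]

# Example: choose = round_robin(["fw1", "fw2", "fw3"]); choose() -> "fw1", then "fw2", ...
```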
  • FIG. 5 is a flow diagram illustrating a method for managing a server load balancing service in a computer network in accordance with a specific embodiment of the present invention. At 500, a new server load balancing service connection request is detected by looking at a SYN bit of an incoming transmission control protocol (TCP) packet. Alternatively, a new server load balancing connection request may be detected by looking for any user datagram protocol (UDP) packets without a session. At 502, a counter is reset to zero if the elapsed time since the last counter reset is greater than a predetermined time interval. At 504, a counter is incremented each time a new server load balancing service connection request is received. At 506, a reset is sent to a source address contained in the new server load balancing service connection request if the counter increases at a rate exceeding a predetermined server load balancing service connection rate limit. The connection rate limit may be a number of connections per predetermined time interval.
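  • The patent does not prescribe how the reset is generated. In a user-space proxy, one common way (assumed here purely for illustration, not drawn from the patent) is to close the accepted client socket with SO_LINGER set to a zero timeout, which on typical BSD-socket stacks aborts the connection with a TCP RST rather than a normal FIN.

```python
import socket
import struct

def reject_with_reset(client_sock: socket.socket) -> None:
    """Abort the client connection so the kernel emits a TCP RST.

    SO_LINGER with l_onoff=1 and l_linger=0 turns close() into an abortive
    close, which sends a reset instead of a graceful FIN handshake.
    """
    client_sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                           struct.pack("ii", 1, 0))
    client_sock.close()
```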
  • FIG. 6 is a flow diagram illustrating a method for managing a server load balancing service distributed over multiple servers in a computer network in accordance with a specific embodiment of the present invention. At 600, a new server load balancing service connection request for the server is detected by looking at a SYN bit of an incoming transmission control protocol (TCP) packet. Alternatively, a new server load balancing connection request for the server may be detected by looking for any user datagram protocol (UDP) packets without a session. At 602, a counter is reset to zero if the elapsed time since the last counter reset is greater than a predetermined time interval. At 604, a counter is incremented each time a new server load balancing service connection request for the server is received. At 606, a reset is sent to a source address contained in the new server load balancing service connection request if the counter increases at a rate exceeding a predetermined server load balancing service connection rate limit for the server. The connection rate limit may be a number of connections per predetermined time interval.
  • FIG. 7 is a flow diagram illustrating a method for managing a transparent cache switching service in a computer network in accordance with a specific embodiment of the present invention. At 700, a new transparent cache switching service connection request is detected by looking at a SYN bit of an incoming transmission control protocol (TCP) packet. Alternatively, a new transparent cache switching service connection request may be detected by looking for any user datagram protocol (UDP) packets without a session. At 702, a counter is reset to zero if the elapsed time since the last counter reset is greater than a predetermined time interval. At 704, a counter is incremented each time a new transparent cache switching service connection request is received. At 706, the new transparent cache switching service connection request is sent to the Internet if the counter increases at a rate exceeding a predetermined transparent cache switching service connection rate limit. The connection rate limit may be a number of connections per predetermined time interval.
  • FIG. 8 is a flow diagram illustrating a method for managing a transparent cache switching service distributed over multiple caches in a computer network in accordance with a specific embodiment of the present invention. At 800, a new transparent cache switching service connection request for one of the caches is detected by looking at a SYN bit of an incoming transmission control protocol (TCP) packet. Alternatively, a new transparent cache switching service connection request for one of the caches may be detected by looking for any user datagram protocol (UDP) packets without a session. At 802, a counter is reset to zero if the elapsed time since the last counter reset is greater than a predetermined time interval. At 804, a counter is incremented each time a new transparent cache switching service connection request for the cache is received. At 806, the new transparent cache switching service connection request is sent to the Internet if the counter increases at a rate exceeding a predetermined transparent cache switching service connection rate limit for the cache. The connection rate limit may be a number of connections per predetermined time interval.
  • FIG. 9 is a block diagram illustrating an apparatus for managing a traffic management service in a computer network in accordance with a specific embodiment of the present invention. A memory 900 may be used to store a counter. A new connection request detector 902 may detect a new connection request for the service. A SYN bit examiner 904 may be used for this purpose to look at a SYN bit of an incoming transmission control protocol (TCP) packet. Alternatively, a user datagram protocol packet session examiner 906 may detect a new connection request for the service by looking for any user datagram protocol (UDP) packets without a session. A counter is reset to zero if the elapsed time since the last counter reset is greater than a predetermined time interval. A new connection request counter incrementer 908 coupled to the memory 900 and to the new connection request detector 902 increments the counter each time a new connection request is received for the service. If the service is distributed over multiple servers and the request is for one of the servers, the new connection request counter incrementer 908 may increment a counter each time a new connection request is received for the service on the one server. A new connection request denier 910 coupled to the new connection request counter incrementer 908 and to the memory 900 denies new connection requests received for the service if the counter increases at a rate exceeding a predetermined connection rate limit for the service. If the service is distributed over multiple servers and the request is for one of the servers, the new connection request denier 910 may deny new connection requests received for the service on the server if the counter increases at a rate exceeding a predetermined connection rate limit for the service on the server. This denying may comprise sending a reset to a source address contained in a new connection request using a source address reset sender 912. Alternatively, it may comprise forwarding the new connection request to the Internet using a new connection request Internet forwarder 914. It may also forward the new connection request as per a hash table using a new connection request hash table forwarder 916. The connection rate limit may be a number of connections per predetermined time interval.
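  • The denial elements of FIG. 9 (reset sender 912, Internet forwarder 914, hash table forwarder 916) can be read as interchangeable strategies behind the denier 910; the sketch below wires them that way, with the request fields and return values invented for the example.

```python
def reset_sender(request):           # element 912
    """Deny by signalling a reset to the client (source address)."""
    return ("reset", request["src"])

def internet_forwarder(request):     # element 914
    """Deny by passing the request straight to the Internet."""
    return ("internet", request["dst"])

def hash_table_forwarder(request):   # element 916
    """Deny by hashing the request onto one of N back ends."""
    return ("hashed", hash(request["src"]) % 4)

class NewConnectionRequestDenier:    # element 910
    def __init__(self, strategy=reset_sender):
        self.strategy = strategy

    def deny(self, request):
        return self.strategy(request)

# Usage: NewConnectionRequestDenier(internet_forwarder).deny(
#            {"src": "10.0.0.1:1234", "dst": "cache-service"})
```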
  • FIG. 10 is a block diagram illustrating an apparatus for managing a firewall load balancing service in a computer network in accordance with a specific embodiment of the present invention. A memory 1000 may be used to store a counter. A new firewall load balancing service connection request detector 1002 may detect a new firewall load balancing service connection request. A SYN bit examiner 1004 may be used for this purpose to look at a SYN bit of an incoming transmission control protocol (TCP) packet. Alternatively, a user datagram protocol packet session examiner 1006 may detect a new firewall load balancing connection request by looking for any user datagram protocol (UDP) packets without a session. A counter is reset to zero if the elapsed time since the last counter reset is greater than a predetermined time interval. A new firewall load balancing service connection request counter incrementer 1008 coupled to the memory 1000 and to the new firewall load balancing service connection request detector 1002 increments the counter each time a new firewall load balancing service connection request is received. A new firewall load balancing service connection request dropper 1010 coupled to the new firewall load balancing service connection request counter incrementer 1008 and to the memory 1000 drops new firewall load balancing service connection requests if the counter increases at a rate exceeding a predetermined firewall load balancing service connection rate limit. The connection rate limit may be a number of connections per predetermined time interval.
  • FIG. 11 is a block diagram illustrating an apparatus for managing a firewall load balancing service distributed over multiple firewalls in a computer network in accordance with a specific embodiment of the present invention. A memory 1100 may be used to store a counter. A new firewall load balancing service connection request detector 1102 may detect a new firewall load balancing service connection request. A SYN bit examiner 1104 may be used for this purpose to look at a SYN bit of an incoming transmission control protocol (TCP) packet. Alternatively, a user datagram protocol packet session examiner 1106 may detect a new firewall load balancing service connection request by looking for any user datagram protocol (UDP) packets without a session. A counter is reset to zero if the elapsed time since the last counter reset is greater than a predetermined time interval. A new firewall load balancing service connection request counter incrementer 1108 coupled to the memory 1100 and to the new firewall load balancing service connection request detector 1102 increments the counter each time a new firewall load balancing service connection request is received. A new firewall load balancing service connection request hashing scheme applier 1110 coupled to the new firewall load balancing service connection request counter incrementer 1108 and to the memory 1100 applies a hashing scheme to determine to which firewall to forward a new firewall load balancing service connection request if the counter increases at a rate exceeding a predetermined firewall load balancing service connection rate limit. The hashing scheme may be one of several different possibilities. It may comprise directing a new firewall load balancing service connection request to the firewall with the least connections. It may comprise directing a new firewall load balancing service connection request to a firewall according to a round robin approach. It may comprise directing a new firewall load balancing service connection request to a firewall according to a weighted approach. The connection rate limit may be a number of connections per predetermined time interval.
  • FIG. 12 is a block diagram illustrating an apparatus for managing a server load balancing service in a computer network in accordance with a specific embodiment of the present invention. A memory 1200 may be used to store a counter. A new server load balancing service connection request detector 1202 may detect a new server load balancing service connection request. A SYN bit examiner 1204 may be used for this purpose, examining the SYN bit of an incoming transmission control protocol (TCP) packet. Alternatively, a user datagram protocol packet session examiner 1206 may detect a new server load balancing service connection request for the service by looking for any user datagram protocol (UDP) packets without a session. The counter is reset to zero if the elapsed time since the last counter reset is greater than a predetermined time interval. A new server load balancing service connection request counter incrementer 1208 coupled to the memory 1200 and to the new server load balancing service connection request detector 1202 increments the counter each time a new server load balancing service connection request is received. If the service is distributed over multiple servers and the request is for one of the servers, the new server load balancing service connection request counter incrementer 1208 may increment the counter each time a new server load balancing service connection request is received for the server. A new server load balancing service connection request source address reset sender 1210 coupled to the new server load balancing service connection request counter incrementer 1208 and to the memory 1200 sends a reset to the source address of the new server load balancing service connection request if the counter increases at a rate exceeding a predetermined server load balancing service connection rate limit. If the service is distributed over multiple servers and the request is for one of the servers, the new server load balancing service connection request source address reset sender 1210 may send a reset to the source address of the new server load balancing service connection request if the counter increases at a rate exceeding a predetermined connection rate limit for the service on the server. The connection rate limit may be a number of connections per predetermined time interval. An illustrative sketch of this per-server mechanism appears after this list.
  • FIG. 13 is a block diagram illustrating an apparatus for managing a transparent cache switching service in a computer network in accordance with a specific embodiment of the present invention. A memory 1300 may be used to store a counter. A new transparent cache switching service connection request detector 1302 may detect a new transparent cache switching service connection request. A SYN bit examiner 1304 may be used for this purpose, examining the SYN bit of an incoming transmission control protocol (TCP) packet. Alternatively, a user datagram protocol packet session examiner 1306 may detect a new transparent cache switching service connection request for the service by looking for any user datagram protocol (UDP) packets without a session. The counter is reset to zero if the elapsed time since the last counter reset is greater than a predetermined time interval. A new transparent cache switching service connection request counter incrementer 1308 coupled to the memory 1300 and to the new transparent cache switching service connection request detector 1302 increments the counter each time a new transparent cache switching service connection request is received. If the service is distributed over multiple caches and the request is for one of the caches, the new transparent cache switching service connection request counter incrementer 1308 may increment a counter each time a new transparent cache switching service connection request is received for the cache. A new transparent cache switching service connection request Internet sender 1310 coupled to the new transparent cache switching service connection request counter incrementer 1308 and to the memory 1300 sends the new transparent cache switching service connection request to the Internet if the counter increases at a rate exceeding a predetermined transparent cache switching service connection rate limit. If the service is distributed over multiple caches and the request is for one of the caches, the new transparent cache switching service connection request Internet sender 1310 may send the new transparent cache switching service connection request to the Internet if the counter increases at a rate exceeding a predetermined transparent cache switching service connection rate limit for the cache. The connection rate limit may be a number of connections per predetermined time interval. An illustrative sketch of this cache-bypass mechanism appears after this list.
  • While embodiments and applications of this invention have been shown and described, it would be apparent to those skilled in the art having the benefit of this disclosure that many more modifications than mentioned above are possible without departing from the inventive concepts herein. The invention, therefore, is not to be restricted except in the spirit of the appended claims.
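
Purely as an illustration of the counter-based mechanism described for FIG. 10, the following Python sketch collapses the detector, SYN bit examiner, UDP session examiner, counter incrementer, and dropper into a single object. The class, method, and field names (FirewallLBRateLimiter, is_new_request, and so on) are hypothetical and do not appear in the disclosed embodiments; packets are modeled as plain dictionaries for brevity.

```python
import time

class FirewallLBRateLimiter:
    """Counter-based limiter: drop new firewall load balancing service
    connection requests once the per-interval rate limit is exceeded."""

    def __init__(self, rate_limit, interval_seconds):
        self.rate_limit = rate_limit        # connections allowed per interval
        self.interval = interval_seconds    # predetermined time interval
        self.counter = 0                    # counter held in memory
        self.last_reset = time.monotonic()

    @staticmethod
    def is_new_request(packet):
        # TCP: a set SYN bit marks a new connection request.
        # UDP: a packet with no existing session is treated as new.
        if packet.get("proto") == "tcp":
            return bool(packet.get("syn"))
        if packet.get("proto") == "udp":
            return not packet.get("has_session", False)
        return False

    def handle(self, packet):
        """Return 'forward' or 'drop' for an incoming packet."""
        if not self.is_new_request(packet):
            return "forward"                # existing sessions are untouched
        now = time.monotonic()
        if now - self.last_reset > self.interval:
            self.counter = 0                # reset after the interval elapses
            self.last_reset = now
        self.counter += 1
        return "drop" if self.counter > self.rate_limit else "forward"
```

For example, with rate_limit=100 and interval_seconds=1.0, the 101st new request observed inside a single one-second window is dropped, while packets belonging to existing sessions are always forwarded.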
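
The next sketch, also illustrative only, shows the three selection schemes mentioned for FIG. 11 in the same hypothetical Python form: least connections, round robin, and weighted. The FirewallSelector class and the per-firewall fields (active_connections, weight) are assumptions made for the example; an actual switch would apply the chosen scheme to its firewall paths rather than to Python dictionaries.

```python
import itertools
import random

class FirewallSelector:
    """Selection schemes for spreading new connection requests across
    multiple firewalls once the rate limit for the service is exceeded."""

    def __init__(self, firewalls):
        # 'firewalls' is a list of dicts with illustrative fields:
        #   {"name": ..., "active_connections": ..., "weight": ...}
        self.firewalls = firewalls
        self._round_robin = itertools.cycle(firewalls)

    def least_connections(self):
        # Direct the request to the firewall with the least connections.
        return min(self.firewalls, key=lambda fw: fw["active_connections"])

    def round_robin(self):
        # Direct requests to each firewall in turn.
        return next(self._round_robin)

    def weighted(self):
        # Direct the request to a firewall chosen in proportion to its weight.
        weights = [fw["weight"] for fw in self.firewalls]
        return random.choices(self.firewalls, weights=weights, k=1)[0]


selector = FirewallSelector([
    {"name": "fw1", "active_connections": 12, "weight": 2},
    {"name": "fw2", "active_connections": 7, "weight": 1},
])
target = selector.least_connections()   # picks "fw2" in this example
```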
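
Corresponding to FIG. 12, the following sketch keeps one counter per server and, once the limit for a server's service is exceeded, answers the source address with a reset instead of forwarding the request. The send_reset and forward callbacks are hypothetical stand-ins for the switch's transmit path; only the counting and decision logic reflects the description above.

```python
import time
from collections import defaultdict

class ServerLBRateLimiter:
    """Per-server limiter: when new connection requests for a server exceed
    the limit, a reset is sent to the source address instead of forwarding."""

    def __init__(self, rate_limit, interval_seconds, send_reset, forward):
        self.rate_limit = rate_limit
        self.interval = interval_seconds
        self.counters = defaultdict(int)     # one counter per server
        self.last_reset = time.monotonic()
        self.send_reset = send_reset         # callback: send a reset to a source
        self.forward = forward               # callback: forward request to a server

    def handle_new_request(self, server_id, source_addr):
        now = time.monotonic()
        if now - self.last_reset > self.interval:
            self.counters.clear()            # reset counters each interval
            self.last_reset = now
        self.counters[server_id] += 1
        if self.counters[server_id] > self.rate_limit:
            self.send_reset(source_addr)     # client learns the connection
            return "reset"                   # was refused rather than timing out
        self.forward(server_id, source_addr)
        return "forwarded"
```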
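
Finally, a sketch corresponding to FIG. 13 uses the same counting logic, but the over-limit action is to bypass the cache and send the request to the Internet rather than to drop it. Names are again illustrative; per-cache counters could be kept exactly as in the FIG. 12 sketch if the service is distributed over multiple caches.

```python
import time

class CacheSwitchRateLimiter:
    """Transparent cache switching limiter: once new connection requests
    exceed the limit, bypass the cache and send the request to the Internet
    (origin) instead of dropping it."""

    def __init__(self, rate_limit, interval_seconds):
        self.rate_limit = rate_limit
        self.interval = interval_seconds
        self.counter = 0
        self.last_reset = time.monotonic()

    def route(self):
        """Return 'cache' or 'internet' for each new connection request."""
        now = time.monotonic()
        if now - self.last_reset > self.interval:
            self.counter = 0
            self.last_reset = now
        self.counter += 1
        # Over the limit: the cache is skipped rather than overloaded, and
        # the request is served directly from the origin on the Internet.
        return "internet" if self.counter > self.rate_limit else "cache"
```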

Claims (24)

1. A computer implemented method for firewall load balancing connection rate limiting, the method comprising:
incrementing, by a computing platform of a network switch, a counter each time a new connection request for a destination firewall load balancing service is received and a firewall is selected from a plurality of firewalls of the firewall load balancing service to handle the new connection request, the request identifying the destination firewall load balancing service, the counter indicating a total number of times the destination firewall load balancing service has been requested within a predetermined time interval by examining a destination address of the request; and
directing, by the computing platform, the new destination firewall load balancing service connection request to a particular one of the plurality of firewalls of the destination firewall load balancing service, if the counter has not increased at a rate exceeding the predetermined connection rate limit.
2. The method of claim 1, further comprising resetting the counter to zero if the elapsed time since the last counter reset is greater than a predetermined time interval.
3. The method of claim 2 wherein the predetermined connection rate limit is a number of transactions per predetermined time interval.
4. The method of claim 1, further comprising detecting a new firewall load balancing service connection request by looking at a SYN bit of an incoming transmission control protocol (TCP) packet.
5. The method of claim 1 wherein the incrementing is based at least in part on the identification of the destination firewall load balancing service.
6. A computer implemented method for firewall load balancing connection rate limiting, the method comprising:
incrementing, by a computing platform of a network switch, a counter each time a new connection request for a destination firewall load balancing service is received and a firewall is selected from a plurality of firewalls of the firewall load balancing service to handle the new connection request, the request identifying the destination firewall load balancing service, the counter indicating a total number of times the destination firewall load balancing service has been requested within a predetermined time interval by examining a destination address of the request; and
dropping, by the computing platform, the new connection requests for the firewall load balancing service if the counter increases at a rate exceeding a predetermined connection rate limit for the firewall load balancing service.
7. The method of claim 6, further comprising resetting the counter to zero if the elapsed time since the last counter reset is greater than a predetermined time interval.
8. The method of claim 7 wherein the predetermined connection rate limit is a number of transactions per predetermined time interval.
9. The method of claim 6, further comprising detecting a new firewall load balancing service connection request by looking at a SYN bit of an incoming transmission control protocol (TCP) packet.
10. The method of claim 6 wherein the incrementing is based at least in part on the identification of the destination firewall load balancing service.
11. An apparatus for firewall load balancing connection rate limiting, the apparatus comprising:
a memory; and
a computing platform of a network switch, the computing platform configured to:
increment a counter each time a new connection request for a destination firewall load balancing service is received and a firewall is selected from a plurality of firewalls of the firewall load balancing service to handle the new connection request, the request identifying the destination firewall load balancing service, the counter indicating a total number of times the destination firewall load balancing service has been requested within a predetermined time interval by examining a destination address of the request; and
direct the new destination firewall load balancing service connection request to a particular one of the plurality of firewalls of the destination firewall load balancing service, if the counter has not increased at a rate exceeding the predetermined connection rate limit.
12. The apparatus of claim 11 wherein the computing platform is further configured to reset the counter to zero if the elapsed time since the last counter reset is greater than a predetermined time interval.
13. The apparatus of claim 12 wherein the predetermined connection rate limit is a number of transactions per predetermined time interval.
14. The apparatus of claim 11 wherein the computing platform is further configured to detect a new firewall load balancing service connection request by looking at a SYN bit of an incoming transmission control protocol (TCP) packet.
15. The apparatus of claim 11 wherein the incrementing is based at least in part on the identification of the destination firewall load balancing service.
16. An apparatus for firewall load balancing connection rate limiting, the apparatus comprising:
a memory; and
a computing platform of a network switch, the computing platform configured to:
increment a counter each time a new connection request for a destination firewall load balancing service is received and a firewall is selected from a plurality of firewalls of the firewall load balancing service to handle the new connection request, the request identifying the destination firewall load balancing service, the counter indicating a total number of times the destination firewall load balancing service has been requested within a predetermined time interval by examining a destination address of the request; and
drop the new connection requests for the firewall load balancing service if the counter increases at a rate exceeding a predetermined connection rate limit for the firewall load balancing service.
17. The apparatus of claim 16 wherein the computing platform is further configured to reset the counter to zero if the elapsed time since the last counter reset is greater than a predetermined time interval.
18. The apparatus of claim 17 wherein the predetermined connection rate limit is a number of transactions per predetermined time interval.
19. The apparatus of claim 16 wherein the computing platform is further configured to detect a new firewall load balancing service connection request by looking at a SYN bit of an incoming transmission control protocol (TCP) packet.
20. The apparatus of claim 16 wherein the incrementing is based at least in part on the identification of the destination firewall load balancing service.
21. An apparatus for firewall load balancing connection rate limiting, the apparatus comprising:
a memory;
means for incrementing, by a computing platform of a network switch, a counter each time a new connection request for a destination firewall load balancing service is received and a firewall is selected from a plurality of firewalls of the firewall load balancing service to handle the new connection request, the request identifying the destination firewall load balancing service, the counter indicating a total number of times the destination firewall load balancing service has been requested within a predetermined time interval by examining a destination address of the request; and
means for directing, by the computing platform, the new destination firewall load balancing service connection request to a particular one of the plurality of firewalls of the destination firewall load balancing service, if the counter has not increased at a rate exceeding the predetermined connection rate limit.
22. An apparatus for firewall load balancing connection rate limiting, the apparatus comprising:
a memory;
means for incrementing, by a computing platform of a network switch, a counter each time a new connection request for a destination firewall load balancing service is received and a firewall is selected from a plurality of firewalls of the firewall load balancing service to handle the new connection request, the request identifying the destination firewall load balancing service, the counter indicating a total number of times the destination firewall load balancing service has been requested within a predetermined time interval by examining a destination address of the request; and
means for dropping, by the computing platform, the new connection requests for the firewall load balancing service if the counter increases at a rate exceeding a predetermined connection rate limit for the firewall load balancing service.
23. A program storage device readable by a machine, embodying a program of instructions executable by the machine to perform a method for firewall load balancing connection rate limiting, the method comprising:
incrementing, by a computing platform of a network switch, a counter each time a new connection request for a destination firewall load balancing service is received and a firewall is selected from a plurality of firewalls of the firewall load balancing service to handle the new connection request, the request identifying the destination firewall load balancing service, the counter indicating a total number of times the destination firewall load balancing service has been requested within a predetermined time interval by examining a destination address of the request; and
directing, by the computing platform, the new destination firewall load balancing service connection request to a particular one of the plurality of firewalls of the destination firewall load balancing service, if the counter has not increased at a rate exceeding the predetermined connection rate limit.
24. A program storage device readable by a machine, embodying a program of instructions executable by the machine to perform a method for firewall load balancing connection rate limiting, the method comprising:
incrementing, by a computing platform of a network switch, a counter each time a new connection request for a destination firewall load balancing service is received and a firewall is selected from a plurality of firewalls of the firewall load balancing service to handle the new connection request, the request identifying the destination firewall load balancing service, the counter indicating a total number of times the destination firewall load balancing service has been requested within a predetermined time interval by examining a destination address of the request; and
dropping, by the computing platform, the new connection requests for the firewall load balancing service if the counter increases at a rate exceeding a predetermined connection rate limit for the firewall load balancing service.
US12/723,615 2002-05-03 2010-03-12 Connection rate limiting Abandoned US20110082947A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/723,615 US20110082947A1 (en) 2002-05-03 2010-03-12 Connection rate limiting

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/139,073 US7707295B1 (en) 2002-05-03 2002-05-03 Connection rate limiting
US12/723,615 US20110082947A1 (en) 2002-05-03 2010-03-12 Connection rate limiting

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/139,073 Continuation US7707295B1 (en) 2002-05-03 2002-05-03 Connection rate limiting

Publications (1)

Publication Number Publication Date
US20110082947A1 (en) 2011-04-07

Family

ID=42112585

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/139,073 Expired - Fee Related US7707295B1 (en) 2002-05-03 2002-05-03 Connection rate limiting
US12/723,615 Abandoned US20110082947A1 (en) 2002-05-03 2010-03-12 Connection rate limiting

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/139,073 Expired - Fee Related US7707295B1 (en) 2002-05-03 2002-05-03 Connection rate limiting

Country Status (1)

Country Link
US (2) US7707295B1 (en)

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7707295B1 (en) * 2002-05-03 2010-04-27 Foundry Networks, Inc. Connection rate limiting
US7675854B2 (en) 2006-02-21 2010-03-09 A10 Networks, Inc. System and method for an adaptive TCP SYN cookie with time validation
US8584199B1 (en) 2006-10-17 2013-11-12 A10 Networks, Inc. System and method to apply a packet routing policy to an application session
US8312507B2 (en) 2006-10-17 2012-11-13 A10 Networks, Inc. System and method to apply network traffic policy to an application session
US9960967B2 (en) 2009-10-21 2018-05-01 A10 Networks, Inc. Determining an application delivery server based on geo-location information
US7970861B2 (en) * 2009-11-18 2011-06-28 Microsoft Corporation Load balancing in a distributed computing environment
US9215275B2 (en) 2010-09-30 2015-12-15 A10 Networks, Inc. System and method to balance servers based on server load status
US9609052B2 (en) 2010-12-02 2017-03-28 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US8776207B2 (en) 2011-02-16 2014-07-08 Fortinet, Inc. Load balancing in a network with session information
US8897154B2 (en) * 2011-10-24 2014-11-25 A10 Networks, Inc. Combining stateless and stateful server load balancing
US9386088B2 (en) 2011-11-29 2016-07-05 A10 Networks, Inc. Accelerating service processing using fast path TCP
US9094364B2 (en) 2011-12-23 2015-07-28 A10 Networks, Inc. Methods to manage services over a service gateway
US9356872B2 (en) * 2012-04-27 2016-05-31 Level 3 Communications, Llc Load balancing of network communications
US8782221B2 (en) 2012-07-05 2014-07-15 A10 Networks, Inc. Method to allocate buffer for TCP proxy session based on dynamic network conditions
US10021174B2 (en) 2012-09-25 2018-07-10 A10 Networks, Inc. Distributing service sessions
US10002141B2 (en) 2012-09-25 2018-06-19 A10 Networks, Inc. Distributed database in software driven networks
US9843484B2 (en) 2012-09-25 2017-12-12 A10 Networks, Inc. Graceful scaling in software driven networks
CN108027805B (en) 2012-09-25 2021-12-21 A10网络股份有限公司 Load distribution in a data network
US9338225B2 (en) 2012-12-06 2016-05-10 A10 Networks, Inc. Forwarding policies on a virtual service network
US9531846B2 (en) 2013-01-23 2016-12-27 A10 Networks, Inc. Reducing buffer usage for TCP proxy session based on delayed acknowledgement
US9900252B2 (en) 2013-03-08 2018-02-20 A10 Networks, Inc. Application delivery controller and global server load balancer
WO2014144837A1 (en) 2013-03-15 2014-09-18 A10 Networks, Inc. Processing data packets using a policy based network path
WO2014179753A2 (en) 2013-05-03 2014-11-06 A10 Networks, Inc. Facilitating secure network traffic by an application delivery controller
US10027761B2 (en) 2013-05-03 2018-07-17 A10 Networks, Inc. Facilitating a secure 3 party network session by a network device
US10230770B2 (en) 2013-12-02 2019-03-12 A10 Networks, Inc. Network proxy layer for policy-based application proxies
US9942152B2 (en) 2014-03-25 2018-04-10 A10 Networks, Inc. Forwarding data packets using a service-based forwarding policy
US9942162B2 (en) 2014-03-31 2018-04-10 A10 Networks, Inc. Active application response delay time
US9992229B2 (en) 2014-06-03 2018-06-05 A10 Networks, Inc. Programming a data network device using user defined scripts with licenses
US9986061B2 (en) 2014-06-03 2018-05-29 A10 Networks, Inc. Programming a data network device using user defined scripts
US10129122B2 (en) 2014-06-03 2018-11-13 A10 Networks, Inc. User defined objects for network devices
US10581976B2 (en) 2015-08-12 2020-03-03 A10 Networks, Inc. Transmission control of protocol state exchange for dynamic stateful service insertion
US10243791B2 (en) 2015-08-13 2019-03-26 A10 Networks, Inc. Automated adjustment of subscriber policies

Patent Citations (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4926480A (en) * 1983-08-22 1990-05-15 David Chaum Card-computer moderated systems
US5956489A (en) * 1995-06-07 1999-09-21 Microsoft Corporation Transaction replication system and method for supporting replicated transaction-based services
US5761507A (en) * 1996-03-05 1998-06-02 International Business Machines Corporation Client/server architecture supporting concurrent servers within a server with a transaction manager providing server/connection decoupling
US6088452A (en) * 1996-03-07 2000-07-11 Northern Telecom Limited Encoding technique for software and hardware
US5774660A (en) * 1996-08-05 1998-06-30 Resonate, Inc. World-wide-web server with delayed resource-binding for resource-based load balancing on a distributed resource multi-node network
US6336133B1 (en) * 1997-05-20 2002-01-01 America Online, Inc. Regulating users of online forums
US6075772A (en) * 1997-08-29 2000-06-13 International Business Machines Corporation Methods, systems and computer program products for controlling data flow for guaranteed bandwidth connections on a per connection basis
US6044260A (en) * 1997-09-02 2000-03-28 Motorola, Inc. Method of controlling the number of messages received by a personal messaging unit
US6195680B1 (en) * 1998-07-23 2001-02-27 International Business Machines Corporation Client-based dynamic switching of streaming servers for fault-tolerance and load balancing
US6438652B1 (en) * 1998-10-09 2002-08-20 International Business Machines Corporation Load balancing cooperating cache servers by shifting forwarded request
US6546423B1 (en) * 1998-10-22 2003-04-08 At&T Corp. System and method for network load balancing
US6457061B1 (en) * 1998-11-24 2002-09-24 Pmc-Sierra Method and apparatus for performing internet network address translation
US20040162901A1 (en) * 1998-12-01 2004-08-19 Krishna Mangipudi Method and apparatus for policy based class service and adaptive service level management within the context of an internet and intranet
US6526448B1 (en) * 1998-12-22 2003-02-25 At&T Corp. Pseudo proxy server providing instant overflow capacity to computer networks
US6314465B1 (en) * 1999-03-11 2001-11-06 Lucent Technologies Inc. Method and apparatus for load sharing on a wide area network
US6701415B1 (en) * 1999-03-31 2004-03-02 America Online, Inc. Selecting a cache for a request for information
US6587881B1 (en) * 1999-04-09 2003-07-01 Microsoft Corporation Software server usage governor
US20020040400A1 (en) * 1999-07-15 2002-04-04 F5 Networks, Inc. Method and system for storing load balancing information with an HTTP cookie
US6597661B1 (en) * 1999-08-25 2003-07-22 Watchguard Technologies, Inc. Network packet classification
US6381642B1 (en) * 1999-10-21 2002-04-30 Mcdata Corporation In-band method and apparatus for reporting operational statistics relative to the ports of a fibre channel switch
US6389448B1 (en) * 1999-12-06 2002-05-14 Warp Solutions, Inc. System and method for load balancing
US20010039585A1 (en) * 1999-12-06 2001-11-08 Leonard Primak System and method for directing a client to a content source
US20010047415A1 (en) * 2000-01-31 2001-11-29 Skene Bryan D. Method and system for enabling persistent access to virtual servers by an ldns server
US6857025B1 (en) * 2000-04-05 2005-02-15 International Business Machines Corporation Highly scalable system and method of regulating internet traffic to server farm to support (min,max) bandwidth usage-based service level agreements
US20010042200A1 (en) * 2000-05-12 2001-11-15 International Business Machines Methods and systems for defeating TCP SYN flooding attacks
US6763372B1 (en) * 2000-07-06 2004-07-13 Nishant V. Dani Load balancing of chat servers based on gradients
US7007092B2 (en) * 2000-10-05 2006-02-28 Juniper Networks, Inc. Connection management system and method
US7131140B1 (en) * 2000-12-29 2006-10-31 Cisco Technology, Inc. Method for protecting a firewall load balancer from a denial of service attack
US20020099831A1 (en) * 2001-01-25 2002-07-25 International Business Machines Corporation Managing requests for connection to a server
US6883033B2 (en) * 2001-02-20 2005-04-19 International Business Machines Corporation System and method for regulating incoming traffic to a server farm
US7107609B2 (en) * 2001-07-20 2006-09-12 Hewlett-Packard Development Company, L.P. Stateful packet forwarding in a firewall cluster
US20030041146A1 (en) * 2001-08-16 2003-02-27 International Business Machines Corporation Connection allocation technology
US6851062B2 (en) * 2001-09-27 2005-02-01 International Business Machines Corporation System and method for managing denial of service attacks
US7584262B1 (en) * 2002-02-11 2009-09-01 Extreme Networks Method of and system for allocating resources to resource requests based on application of persistence policies
US8572228B2 (en) * 2002-05-03 2013-10-29 Foundry Networks, Llc Connection rate limiting for server load balancing and transparent cache switching
US7707295B1 (en) * 2002-05-03 2010-04-27 Foundry Networks, Inc. Connection rate limiting
US8554929B1 (en) * 2002-05-03 2013-10-08 Foundry Networks, Llc Connection rate limiting for server load balancing and transparent cache switching
US20040024861A1 (en) * 2002-06-28 2004-02-05 Coughlin Chesley B. Network load balancing

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9332066B2 (en) 2002-05-03 2016-05-03 Foundry Networks, Llc Connection rate limiting for server load balancing and transparent cache switching
US10044582B2 (en) 2012-01-28 2018-08-07 A10 Networks, Inc. Generating secure name records
US9722918B2 (en) 2013-03-15 2017-08-01 A10 Networks, Inc. System and method for customizing the identification of application or content type
US10594600B2 (en) 2013-03-15 2020-03-17 A10 Networks, Inc. System and method for customizing the identification of application or content type
US9912555B2 (en) 2013-03-15 2018-03-06 A10 Networks, Inc. System and method of updating modules for application or content identification
US10708150B2 (en) 2013-03-15 2020-07-07 A10 Networks, Inc. System and method of updating modules for application or content identification
US10581907B2 (en) 2013-04-25 2020-03-03 A10 Networks, Inc. Systems and methods for network access control
WO2014176461A1 (en) * 2013-04-25 2014-10-30 A10 Networks, Inc. Systems and methods for network access control
US9838425B2 (en) 2013-04-25 2017-12-05 A10 Networks, Inc. Systems and methods for network access control
US10091237B2 (en) 2013-04-25 2018-10-02 A10 Networks, Inc. Systems and methods for network access control
US9294503B2 (en) 2013-08-26 2016-03-22 A10 Networks, Inc. Health monitor based distributed denial of service attack mitigation
US10187423B2 (en) 2013-08-26 2019-01-22 A10 Networks, Inc. Health monitor based distributed denial of service attack mitigation
US9860271B2 (en) 2013-08-26 2018-01-02 A10 Networks, Inc. Health monitor based distributed denial of service attack mitigation
US9906422B2 (en) 2014-05-16 2018-02-27 A10 Networks, Inc. Distributed system to determine a server's health
US10686683B2 (en) 2014-05-16 2020-06-16 A10 Networks, Inc. Distributed system to determine a server's health
US9756071B1 (en) 2014-09-16 2017-09-05 A10 Networks, Inc. DNS denial of service attack protection
US9537886B1 (en) 2014-10-23 2017-01-03 A10 Networks, Inc. Flagging security threats in web service requests
US10505964B2 (en) 2014-12-29 2019-12-10 A10 Networks, Inc. Context aware threat protection
US9621575B1 (en) 2014-12-29 2017-04-11 A10 Networks, Inc. Context aware threat protection
US9584318B1 (en) 2014-12-30 2017-02-28 A10 Networks, Inc. Perfect forward secrecy distributed denial of service attack defense
US9900343B1 (en) 2015-01-05 2018-02-20 A10 Networks, Inc. Distributed denial of service cellular signaling
US9848013B1 (en) 2015-02-05 2017-12-19 A10 Networks, Inc. Perfect forward secrecy distributed denial of service attack detection
US10834132B2 (en) 2015-02-14 2020-11-10 A10 Networks, Inc. Implementing and optimizing secure socket layer intercept
US10063591B1 (en) 2015-02-14 2018-08-28 A10 Networks, Inc. Implementing and optimizing secure socket layer intercept
US9787581B2 (en) 2015-09-21 2017-10-10 A10 Networks, Inc. Secure data flow open information analytics
US10469594B2 (en) 2015-12-08 2019-11-05 A10 Networks, Inc. Implementation of secure socket layer intercept
US10812348B2 (en) 2016-07-15 2020-10-20 A10 Networks, Inc. Automatic capture of network data for a detected anomaly
US10341118B2 (en) 2016-08-01 2019-07-02 A10 Networks, Inc. SSL gateway with integrated hardware security module
US10382562B2 (en) 2016-11-04 2019-08-13 A10 Networks, Inc. Verification of server certificates using hash codes
US10250475B2 (en) 2016-12-08 2019-04-02 A10 Networks, Inc. Measurement of application response delay time
US10397270B2 (en) 2017-01-04 2019-08-27 A10 Networks, Inc. Dynamic session rate limiter
USRE47924E1 (en) 2017-02-08 2020-03-31 A10 Networks, Inc. Caching network generated security certificates
US10187377B2 (en) 2017-02-08 2019-01-22 A10 Networks, Inc. Caching network generated security certificates
US20220070055A1 (en) * 2020-08-26 2022-03-03 Mastercard International Incorporated Systems and methods for routing network messages
US11765020B2 (en) * 2020-08-26 2023-09-19 Mastercard International Incorporated Systems and methods for routing network messages

Also Published As

Publication number Publication date
US7707295B1 (en) 2010-04-27

Similar Documents

Publication Publication Date Title
US7707295B1 (en) Connection rate limiting
US9332066B2 (en) Connection rate limiting for server load balancing and transparent cache switching
US11924170B2 (en) Methods and systems for API deception environment and API traffic control and security
US10511624B2 (en) Mitigating a denial-of-service attack in a cloud-based proxy service
US7725939B2 (en) System and method for identifying an efficient communication path in a network
US7774492B2 (en) System, method and computer program product to maximize server throughput while avoiding server overload by controlling the rate of establishing server-side net work connections
US6738814B1 (en) Method for blocking denial of service and address spoofing attacks on a private network
US8887265B2 (en) Named sockets in a firewall
US8661544B2 (en) Detecting botnets
US7020783B2 (en) Method and system for overcoming denial of service attacks
WO2008004076A2 (en) Router and method for server load balancing
JP2004507978A (en) System and method for countering denial of service attacks on network nodes
US11178108B2 (en) Filtering for network traffic to block denial of service attacks
US20080104688A1 (en) System and method for blocking anonymous proxy traffic
US11616796B2 (en) System and method to protect resource allocation in stateful connection managers
US20040243843A1 (en) Content server defending system
US8819252B1 (en) Transaction rate limiting
Cao et al. The research on the detection and defense method of the smurf-type DDos attack
Agarwal et al. Lattice: A Scalable Layer-Agnostic Packet Classification Framework
AU2008348253A1 (en) Method and system for controlling a computer application program
Demir et al. Real-time protection against DDoS attacks using active gateways
Chowdhury et al. Packet Classification with Explicit Coordination
Sharma et al. Web Switching

Legal Events

Date Code Title Description
AS Assignment

Owner name: FOUNDRY NETWORKS, LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:FOUNDRY NETWORKS, INC.;REEL/FRAME:024733/0739

Effective date: 20090511

AS Assignment

Owner name: FOUNDRY NETWORKS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SZETO, RONALD W.;CHEUNG, DAVID CHUN YING;JALAN, RAJKUMAR;REEL/FRAME:026227/0645

Effective date: 20020502

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION